PyNVML

General Information

| Field | Value |
| --- | --- |
| Package Name | nvidia-ml-py (pynvml) |
| Manufacturer / Vendor | NVIDIA Corporation |
| Software Category | Library |
| Primary Documentation | NVML Documentation, PyPI, pyNVML Docs |
| Programming Language(s) | Python, C |
| License | BSD-3-Clause |
| Deployed Version(s) | >=12.560.30 (version-locked at 13.590.44 across expert microservices) |
| Most Recent Available Version | 13.590.48 |
| Last Review Date | 2026-01-27 |

Overview

nvidia-ml-py provides Python bindings for the NVIDIA Management Library (NVML), a C-based programmatic interface for monitoring and managing NVIDIA GPUs. The package wraps NVML functions as Python methods using ctypes, converting NVML error codes into Python exceptions for clean error handling. NVML is the underlying library powering NVIDIA's nvidia-smi command-line tool and is designed as a platform for building third-party GPU management applications. The package is officially published and maintained by NVIDIA Corporation.

Within the medical device software, nvidia-ml-py serves as the GPU resource monitoring and detection layer within the distributed AI inference infrastructure. It is integrated into the legithp-expert framework, which provides the foundation for all 50+ clinical expert microservices. Specifically, nvidia-ml-py is used for:

  • GPU device detection: The NVMLGPUProvider adapter uses NVML to enumerate available CUDA GPUs, retrieve device handles, and query device counts during microservice initialization (a minimal sketch of this pattern follows this list)
  • Static device information: Retrieves immutable GPU properties including device name/model, total memory capacity, and CUDA compute capability for infrastructure logging and resource planning
  • Runtime metrics collection: Queries dynamic GPU metrics including current memory usage, GPU utilization percentage, and temperature for operational monitoring
  • Resource management: The SystemInfoService aggregates GPU metrics alongside CPU, memory, and disk usage to provide comprehensive resource visibility for the inference platform
  • Fallback architecture: Part of a provider chain where FallbackGPUProvider attempts PyTorch GPU detection first, falling back to direct NVML queries when PyTorch detection is insufficient
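
The following sketch illustrates the detection and static-information queries described above using the public pynvml bindings. It is a minimal example only: the GpuInfo container and detect_gpus function are illustrative names and do not correspond to the device's actual NVMLGPUProvider implementation.

```python
# Minimal sketch of GPU enumeration with pynvml; GpuInfo and detect_gpus are
# illustrative names, not the device's NVMLGPUProvider.
from dataclasses import dataclass
from typing import List, Tuple

import pynvml


@dataclass
class GpuInfo:
    index: int
    name: str
    total_memory_bytes: int
    compute_capability: Tuple[int, int]


def detect_gpus() -> List[GpuInfo]:
    """Enumerate NVIDIA GPUs and collect their static properties."""
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:
        # No NVIDIA driver or NVML library present: report zero GPUs.
        return []
    try:
        gpus = []
        for index in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older bindings return bytes
                name = name.decode()
            memory = pynvml.nvmlDeviceGetMemoryInfo(handle)
            major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
            gpus.append(GpuInfo(index, name, memory.total, (major, minor)))
        return gpus
    finally:
        pynvml.nvmlShutdown()
```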

nvidia-ml-py was selected over alternatives due to:

  • Official support and maintenance by NVIDIA Corporation with regular updates aligned to driver releases
  • Direct access to low-level NVML functionality not exposed through PyTorch's CUDA interface
  • Permissive BSD-3-Clause license compatible with commercial medical device software
  • Graceful degradation when NVIDIA drivers are not installed or GPUs are not present
  • Comprehensive GPU metrics (utilization, temperature) beyond what PyTorch exposes
  • Clean Python exception handling for NVML error codes

Functional Requirements

The following functional capabilities of this SOUP are relied upon by the medical device software.

| Requirement ID | Description | Source / Reference |
| --- | --- | --- |
| FR-001 | Initialize the NVML library for subsequent API calls | pynvml.nvmlInit() function |
| FR-002 | Clean shutdown of NVML library resources | pynvml.nvmlShutdown() function |
| FR-003 | Query the total number of NVIDIA GPUs available on the system | pynvml.nvmlDeviceGetCount() function |
| FR-004 | Obtain a device handle for a specific GPU by index | pynvml.nvmlDeviceGetHandleByIndex() function |
| FR-005 | Retrieve the name/model of a GPU device | pynvml.nvmlDeviceGetName() function |
| FR-006 | Query GPU memory information (total and used bytes) | pynvml.nvmlDeviceGetMemoryInfo() function |
| FR-007 | Retrieve CUDA compute capability version (major, minor) | pynvml.nvmlDeviceGetCudaComputeCapability() function |
| FR-008 | Query GPU utilization percentage | pynvml.nvmlDeviceGetUtilizationRates() function |
| FR-009 | Query GPU temperature in Celsius | pynvml.nvmlDeviceGetTemperature() with NVML_TEMPERATURE_GPU |
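
A hedged usage example of the runtime metric queries above (FR-006, FR-008, FR-009) is shown below. It assumes NVML has already been initialised and a device handle obtained as in FR-001 and FR-004; the dictionary keys are illustrative and are not part of the device's API.

```python
import pynvml


def query_runtime_metrics(handle):
    """Return current memory, utilization, and temperature for one GPU handle."""
    try:
        memory = pynvml.nvmlDeviceGetMemoryInfo(handle)              # FR-006
        utilization = pynvml.nvmlDeviceGetUtilizationRates(handle)   # FR-008
        temperature = pynvml.nvmlDeviceGetTemperature(               # FR-009
            handle, pynvml.NVML_TEMPERATURE_GPU
        )
    except pynvml.NVMLError as error:
        # NVML error codes surface as Python exceptions; a failed query is
        # reported as an error value rather than crashing the caller.
        return {"error": str(error)}
    return {
        "memory_used_bytes": memory.used,
        "memory_total_bytes": memory.total,
        "gpu_utilization_percent": utilization.gpu,
        "temperature_celsius": temperature,
    }
```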

Performance Requirements

The following performance expectations are relevant to the medical device software.

| Requirement ID | Description | Acceptance Criteria |
| --- | --- | --- |
| PR-001 | NVML initialization shall complete within acceptable startup time | Library initialization does not dominate service startup latency |
| PR-002 | GPU metric queries shall not introduce significant overhead | Metric queries complete in < 10 ms under normal conditions |
| PR-003 | Library shall not cause memory leaks during continuous operation | Stable memory footprint with repeated metric polling |
| PR-004 | Shutdown shall release all NVML resources cleanly | No resource leaks on process termination |
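
As an illustration of how PR-002 could be exercised, the snippet below times one round of metric queries on device index 0. The 10 ms threshold comes from the acceptance criteria above; the device index and the use of a bare assertion are assumptions made for the example.

```python
import time

import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes at least one GPU
    start = time.perf_counter()
    pynvml.nvmlDeviceGetMemoryInfo(handle)
    pynvml.nvmlDeviceGetUtilizationRates(handle)
    pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 10, f"Metric queries took {elapsed_ms:.2f} ms (limit: 10 ms)"
finally:
    pynvml.nvmlShutdown()
```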

Hardware Requirements

The following hardware dependencies or constraints are imposed by this SOUP component.

| Requirement ID | Description | Notes / Limitations |
| --- | --- | --- |
| HR-001 | NVIDIA GPU hardware | Required for meaningful operation; library gracefully reports 0 GPUs if absent |
| HR-002 | NVIDIA GPU drivers installed on the host system | NVML is provided as part of the NVIDIA driver package |
| HR-003 | x86-64 or ARM64 processor architecture | Pre-built wheels available for common platforms |

Software Requirements

The following software dependencies and environmental assumptions are required by this SOUP component.

| Requirement ID | Description | Dependency / Version Constraints |
| --- | --- | --- |
| SR-001 | Python runtime environment | Python >=3.6 (ctypes module required) |
| SR-002 | NVIDIA GPU drivers with NVML library | Driver version compatible with deployed NVML version |
| SR-003 | libnvidia-ml shared library | Provided by NVIDIA driver installation |

Known Anomalies Assessment

This section evaluates publicly reported issues, defects, or security vulnerabilities associated with this SOUP component and their relevance to the medical device software.

A comprehensive search of security vulnerability databases was conducted for the nvidia-ml-py Python package. No CVEs or security advisories have been reported specifically targeting nvidia-ml-py as of the review date.

While no vulnerabilities affect the Python bindings directly, the following related NVIDIA vulnerabilities were assessed for potential applicability to the device's GPU monitoring infrastructure:

| Anomaly Reference | Status | Applicable | Rationale | Reviewed At |
| --- | --- | --- | --- | --- |
| CVE-2025-23266 (NVIDIA Container Toolkit) | Fixed | No | Critical (CVSS 9.0) container escape vulnerability in NVIDIA Container Toolkit. Not applicable: this CVE affects the container toolkit, not the NVML library or Python bindings. The device uses standard driver installations, not the container toolkit. | 2026-01-27 |
| CVE-2024-0126 (GPU Display Drivers) | Fixed | No | Code execution vulnerability in GPU display drivers. Not applicable: the device deploys with driver versions that include fixes; nvidia-ml-py is a query-only interface that does not execute arbitrary code on the GPU. | 2026-01-27 |

The package provides Python bindings to NVML, which is included in the NVIDIA driver package. Security issues affecting NVML itself would be addressed through driver updates rather than Python package updates, as the Python bindings are thin wrappers around the driver-provided shared library.

The device's usage pattern minimizes attack surface exposure:

  • Read-only operations: The device uses nvidia-ml-py exclusively for querying GPU information (device count, memory, utilization, temperature); no write operations or GPU configuration changes are performed
  • Internal monitoring only: GPU metrics are collected for internal resource monitoring and logging; no GPU information is exposed to external users or APIs
  • Graceful degradation: The NVMLGPUProvider implementation handles NVML initialization failures gracefully, logging warnings and reporting 0 GPUs rather than crashing
  • Process isolation: Each expert microservice runs in an isolated container with the GPU provider instantiated per-process
  • Version locking: Requirements lock files pin nvidia-ml-py to version 13.590.44 across all expert microservices
  • Lifecycle management: NVML shutdown is registered via atexit to ensure clean resource release on process termination (sketched after this list)
  • Driver compatibility: The locked nvidia-ml-py version (13.590.x) is aligned with deployed NVIDIA driver versions
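
The graceful-degradation and lifecycle behaviour referenced above can be sketched as follows. The class name and logging details are illustrative, not the device's actual NVMLGPUProvider implementation.

```python
import atexit
import logging

import pynvml

logger = logging.getLogger(__name__)


class GracefulNvmlProvider:
    """Illustrative provider: tolerates missing drivers, registers clean shutdown."""

    def __init__(self):
        self._available = False
        try:
            pynvml.nvmlInit()
            self._available = True
            atexit.register(pynvml.nvmlShutdown)  # clean release on process exit
        except pynvml.NVMLError as error:
            # Missing driver or GPU: log a warning and report zero devices.
            logger.warning("NVML unavailable, GPU monitoring disabled: %s", error)

    def gpu_count(self):
        return pynvml.nvmlDeviceGetCount() if self._available else 0
```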

Risk Control Measures

The following risk control measures are implemented to mitigate potential security and operational risks associated with this SOUP component:

  • Version locking via requirements_lock.txt ensures reproducible, auditable deployments (a verification sketch follows this list)
  • Read-only usage pattern prevents any GPU configuration changes
  • Graceful handling of missing NVIDIA drivers or GPUs
  • Exception handling prevents crashes from individual GPU query failures
  • Container isolation limits potential impact of any exploitation
  • GPU metrics are used internally only; not exposed to external interfaces
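
A hypothetical deployment-time check reflecting the version-locking measure above: verify that the installed nvidia-ml-py matches the version pinned in requirements_lock.txt. The constant and error handling are illustrative only.

```python
from importlib.metadata import version  # Python 3.8+

LOCKED_VERSION = "13.590.44"  # as pinned in requirements_lock.txt

installed = version("nvidia-ml-py")
if installed != LOCKED_VERSION:
    raise RuntimeError(
        f"nvidia-ml-py {installed} does not match locked version {LOCKED_VERSION}"
    )
```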

Assessment Methodology

The following methodology was used to identify and assess known anomalies:

  • Sources consulted:

    • National Vulnerability Database (NVD) search for "nvidia-ml-py" and "pynvml"
    • Snyk vulnerability database for nvidia-ml-py
    • NVIDIA Product Security page
    • NVIDIA Archived Security Bulletins
    • PyPI package security reports
    • GitHub repository issues for related projects (nvidia-ml-py3, pynvml)
  • Criteria for determining applicability:

    • Vulnerability must affect deployed versions (nvidia-ml-py 13.590.44)
    • Vulnerability must be exploitable through the device's operational context (read-only GPU monitoring)
    • Attack vector must be reachable through the device's interfaces (internal monitoring only)
    • Graceful degradation, process isolation, and read-only usage must not already mitigate the vulnerability

Signature meaning

The signatures for the approval process of this document can be found in the verified commits of the QMS repository. For reference, the team members expected to participate in the approval of this document, and their roles as defined in Annex I Responsibility Matrix of GP-001, are:

  • Author: Team members involved
  • Reviewer: JD-003, JD-004
  • Approver: JD-001
All the information contained in this QMS is confidential. The recipient agrees not to transmit or reproduce the information, neither by himself nor by third parties, through whichever means, without obtaining the prior written permission of Legit.Health (AI Labs Group S.L.)