TorchVision
General Information
| Field | Value |
|---|---|
| Package Name | torchvision |
| Manufacturer / Vendor | PyTorch Foundation / Linux Foundation (Meta AI, Google, Microsoft, Amazon, and 662+ contributors) |
| Software Category | Library |
| Primary Documentation | Documentation, GitHub, PyPI, Releases |
| Programming Language(s) | Python, C++ |
| License | BSD-3-Clause License |
| Deployed Version(s) | >=0.15.0 / >=0.19.0 / >=0.21.0 across services (version-locked at 0.24.1) |
| Most Recent Available Version | 0.25.0 |
| Last Review Date | 2026-01-27 |
Overview
TorchVision is the official computer vision library for PyTorch, providing popular datasets, model architectures, and common image transformations. It is part of the PyTorch ecosystem and is maintained by the PyTorch Foundation under the Linux Foundation, with contributions from major technology organizations including Meta AI, Google, Microsoft, Amazon, and over 662 individual contributors.
Within the medical device software, torchvision is used exclusively for image preprocessing in the computer vision inference pipelines. It is integrated across multiple expert microservices for clinical sign classification and segmentation. Specifically, torchvision is used in:
- Core expert framework (`legithp-expert`): Provides the foundational image preprocessing capabilities for all classification and segmentation experts
- Segmenter base module: Image preprocessing for semantic segmentation models analyzing skin lesions and inflammatory patterns
- Condition classifier: Full preprocessing pipeline with test-time augmentation (TTA) for skin condition classification
- Follicular inflammatory pattern classifier: Specialized preprocessing with configurable normalization parameters for inflammatory pattern analysis
TorchVision was selected for:
- Official PyTorch ecosystem component ensuring tight integration and compatibility
- Industry-standard image transformation primitives optimized for deep learning workflows
- Consistent API across transforms enabling composable preprocessing pipelines
- Support for both PIL and tensor backends with automatic format conversion
- Active maintenance with regular updates aligned to PyTorch releases
- BSD-3-Clause license permitting commercial use in medical device software
Functional Requirements
The following functional capabilities of this SOUP are relied upon by the medical device software.
| Requirement ID | Description | Source / Reference |
|---|---|---|
| FR-001 | Compose multiple image transforms into a sequential preprocessing pipeline | transforms.Compose() |
| FR-002 | Resize images to specified dimensions with optional anti-aliasing | transforms.Resize() |
| FR-003 | Convert PIL Images to PyTorch tensors with automatic scaling to [0, 1] range | transforms.ToTensor() |
| FR-004 | Apply per-channel RGB normalization using configurable mean and standard deviation | transforms.Normalize() |
| FR-005 | Convert numpy arrays and tensors to PIL Image format for transform compatibility | transforms.ToPILImage() |
| FR-006 | Support tensor input/output mode for GPU-accelerated preprocessing | Tensor backend support |
Performance Requirements
The following performance expectations are relevant to the medical device software.
| Requirement ID | Description | Acceptance Criteria |
|---|---|---|
| PR-001 | Image transforms shall complete within acceptable API latency bounds | Preprocessing completes within the overall request timeout |
| PR-002 | Memory usage shall scale linearly with image dimensions | No memory leaks during repeated preprocessing operations |
| PR-003 | Tensor operations shall maintain IEEE 754 float32 numerical precision | No loss of precision affecting downstream model inference |
| PR-004 | Transform composition shall not introduce significant computational overhead | Chained transforms execute with negligible overhead compared to applying each transform individually |
Hardware Requirements
The following hardware dependencies or constraints are imposed by this SOUP component.
| Requirement ID | Description | Notes / Limitations |
|---|---|---|
| HR-001 | Sufficient system memory for image pixel data | Memory requirements scale with image resolution (width x height x channels) |
| HR-002 | x86-64 or ARM64 processor architecture | Pre-built wheels available for common platforms |
| HR-003 | CUDA-compatible GPU for tensor backend transforms | CPU transforms supported; GPU optional but improves throughput |
Software Requirements
The following software dependencies and environmental assumptions are required by this SOUP component.
| Requirement ID | Description | Dependency / Version Constraints |
|---|---|---|
| SR-001 | Python runtime environment | Python >=3.10 |
| SR-002 | PyTorch deep learning framework | torch >=2.0.0 (version-matched with torchvision) |
| SR-003 | Pillow for PIL Image backend support | Pillow (installed automatically as a runtime dependency of torchvision) |
| SR-004 | NumPy for array interoperability | Compatible NumPy version for tensor conversion |
Known Anomalies Assessment
This section evaluates publicly reported issues, defects, or security vulnerabilities associated with this SOUP component and their relevance to the medical device software.
| Anomaly Reference | Status | Applicable | Rationale | Reviewed At |
|---|---|---|---|---|
| CVE-2025-32434 (PyTorch torch.load RCE with weights_only=True) | Fixed | No | Critical RCE vulnerability (CVSS 9.3) affecting PyTorch <=2.5.1. While torchvision depends on PyTorch, the device does not use torchvision's model-loading features, and the deployed PyTorch is version-locked at >=2.6.0, which includes the fix | 2026-01-27 |
TorchVision is actively maintained as part of the PyTorch ecosystem with a robust release cycle aligned to PyTorch versions. The project follows PyTorch's security policy and uses GitHub Security Advisories for coordinated disclosure. According to vulnerability databases, no security vulnerabilities have been reported specifically for the torchvision transforms module used by the device.
The device's usage pattern minimizes attack surface exposure:
- Limited API surface: The device uses only the `transforms` module for image preprocessing; no model loading, dataset downloading, or I/O operations from torchvision are utilized
- No untrusted input to transforms: All image data is received through authenticated API endpoints and validated before preprocessing
- No model weight loading via torchvision: Model architectures and weights are loaded through PyTorch directly, not through `torchvision.models`
- Version locking: Requirements lock files pin torchvision to version 0.24.1, ensuring reproducible and auditable deployments matched to the PyTorch version
- Input validation: All inference inputs are validated for shape, type, and format before being passed to torchvision transforms
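The shape/type/format validation described above can be sketched as a guard function placed at the API boundary, before any torchvision transform runs. The size bounds, the `InvalidImageError` name, and the RGB-only policy are illustrative assumptions, not the device's actual rules.

```python
import numpy as np
from PIL import Image

class InvalidImageError(ValueError):
    """Raised when a decoded image fails pre-transform validation (illustrative)."""

def validate_image(arr, min_side: int = 32, max_side: int = 8192) -> Image.Image:
    """Validate a decoded image array before it reaches torchvision transforms.

    Checks type, shape, dtype, and dimension bounds; the bounds here are
    placeholder values for the sketch.
    """
    if not isinstance(arr, np.ndarray):
        raise InvalidImageError(f"expected numpy array, got {type(arr).__name__}")
    if arr.ndim != 3 or arr.shape[2] != 3:
        raise InvalidImageError(f"expected (H, W, 3) RGB array, got shape {arr.shape}")
    if arr.dtype != np.uint8:
        raise InvalidImageError(f"expected uint8 pixel data, got {arr.dtype}")
    h, w = arr.shape[:2]
    if not (min_side <= h <= max_side and min_side <= w <= max_side):
        raise InvalidImageError(f"image dimensions {w}x{h} outside allowed range")
    return Image.fromarray(arr, mode="RGB")
```

Rejecting malformed inputs before preprocessing keeps the torchvision code path exposed only to data that matches the contract the transforms were validated against.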
Risk Control Measures
The following risk control measures are implemented to mitigate potential security and operational risks associated with this SOUP component:
- Version locking via requirements_lock.txt ensures reproducible deployments
- PyTorch version alignment ensures compatibility and security patch coverage
- Input validation at API boundaries prevents malformed data from reaching transforms
- Container isolation limits potential impact of any exploitation
- No use of network-dependent features (model downloads, dataset fetching)
Assessment Methodology
The following methodology was used to identify and assess known anomalies:
- Sources consulted:
  - National Vulnerability Database (NVD) search for "torchvision"
  - GitHub Security Advisories for the pytorch/vision repository
  - PyTorch security policy and disclosure process
  - PyPI package security reports
  - Snyk vulnerability database
  - CVE.org search results for the PyTorch ecosystem
- Criteria for determining applicability:
  - Vulnerability must affect deployed versions (0.15.0 - 0.24.1)
  - Vulnerability must be exploitable in the device's operational context (image preprocessing only)
  - Vulnerability must impact the specific torchvision functions used (transforms module)
  - Attack vector must be reachable through the device's interfaces
Signature meaning
The signatures for the approval process of this document can be found in the verified commits at the repository for the QMS. As a reference, the team members who are expected to participate in this document and their roles in the approval process, as defined in Annex I Responsibility Matrix of the GP-001, are:
- Author: Team members involved
- Reviewer: JD-003, JD-004
- Approver: JD-001