Timm
General Information
| Field | Value |
|---|---|
| Package Name | timm |
| Manufacturer / Vendor | Ross Wightman / Hugging Face |
| Software Category | Library |
| Primary Documentation | [Documentation](https://huggingface.co/docs/timm), [GitHub](https://github.com/huggingface/pytorch-image-models), [PyPI](https://pypi.org/project/timm/) |
| Programming Language(s) | Python |
| License | Apache License 2.0 |
| Deployed Version(s) | >=1.0.9, >=1.0.14 (version-locked at 1.0.22 and 1.0.24) |
| Most Recent Available Version | 1.0.24 |
| Last Review Date | 2026-01-26 |
Overview
timm (PyTorch Image Models) is a comprehensive deep learning library providing state-of-the-art computer vision models, layers, utilities, optimizers, schedulers, data loaders, and augmentation techniques. The library contains the largest collection of PyTorch image encoders and backbones, including architectures such as ResNet, EfficientNet, Vision Transformer (ViT), ConvNeXt, MobileNet, and many others.
Within the medical device software, timm serves as the foundational backbone provider for all image classification and feature extraction pipelines. The library enables transfer learning by providing pre-trained model architectures that can be adapted for dermatological image analysis tasks. The primary function used is timm.create_model(), which instantiates neural network backbones with configurable parameters.
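A minimal sketch of this usage pattern is shown below; the architecture name and class count are illustrative only, and the device's actual backbones are those listed under Functional Requirements.

```python
import timm
import torch

# Illustrative sketch: instantiate a timm backbone for transfer learning.
# "convnext_small" stands in for whichever architecture the device configures.
backbone = timm.create_model(
    "convnext_small",   # architecture name string
    pretrained=True,    # load ImageNet pre-trained weights (downloads or uses cache)
    num_classes=0,      # remove the stock classification head
)

# A task-specific head is attached on top of the pooled backbone features.
head = torch.nn.Linear(backbone.num_features, 5)  # 5 is a placeholder class count

with torch.no_grad():
    features = backbone(torch.randn(1, 3, 224, 224))  # shape (1, num_features)
    logits = head(features)
```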
timm was selected over alternatives due to:
- Comprehensive model coverage with 1,200+ pre-trained architectures
- Active maintenance and regular updates from Hugging Face
- Consistent API across different model families
- Extensive pre-training on ImageNet-1k and ImageNet-22k datasets
- Strong community adoption (36k+ GitHub stars) and documentation
- Apache 2.0 license permitting commercial use
Functional Requirements
The following functional capabilities of this SOUP are relied upon by the medical device software.
| Requirement ID | Description | Source / Reference |
|---|---|---|
| FR-001 | Create model instances from architecture name strings with configurable parameters | timm.create_model() function |
| FR-002 | Provide ConvNeXt family backbones (ConvNeXtV2, ConvNeXt-Small) for condition and pattern classifiers | ConvNeXt model implementations |
| FR-003 | Provide EfficientNet family backbones (B0, B5) for quality assessment and domain classification | EfficientNet model implementations |
| FR-004 | Support extraction of backbone feature dimensions via model.num_features attribute | Model architecture API |
| FR-005 | Allow removal of classification head via num_classes=0 parameter for custom head attachment | create_model() parameter handling |
| FR-006 | Support loading of pre-trained ImageNet weights via pretrained=True parameter | Pre-trained weight loading mechanism |
| FR-007 | Expose hidden layer dimensions via model.head_hidden_size for proper classifier head dimensioning | Model attribute access |
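The parameters and attributes named in FR-001 through FR-007 can be exercised roughly as in the sketch below; the architecture names are stand-ins for the device's configured ConvNeXt and EfficientNet backbones, and the `getattr` fallback is only a defensive assumption.

```python
import timm

# Illustrative names only; the device's configured backbones are defined elsewhere.
for arch in ("convnextv2_base", "convnext_small", "efficientnet_b0", "efficientnet_b5"):
    # FR-001, FR-005, FR-006: named instantiation with the classifier head removed
    # (pretrained=False here simply avoids a weight download in this sketch).
    model = timm.create_model(arch, pretrained=False, num_classes=0)

    # FR-004: backbone feature width used to dimension the custom classifier head.
    in_features = model.num_features

    # FR-007: hidden width of the original head, where the model exposes it.
    head_hidden = getattr(model, "head_hidden_size", in_features)

    print(f"{arch}: num_features={in_features}, head_hidden_size={head_hidden}")
```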
Performance Requirements
The following performance expectations are relevant to the medical device software.
| Requirement ID | Description | Acceptance Criteria |
|---|---|---|
| PR-001 | Model instantiation completes within acceptable initialization time | Model creation < 30 seconds on target hardware |
| PR-002 | Feature extraction maintains numerical precision for downstream processing | Float32 tensor output with standard IEEE 754 precision |
| PR-003 | Memory footprint allows concurrent model loading for multi-task pipelines | Model fits within allocated GPU memory (varies by model size) |
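PR-001 and PR-002 can be spot-checked at integration time with a sketch like the following; the architecture name and the 30-second budget simply mirror the table above.

```python
import time
import timm
import torch

# PR-001: model instantiation stays within the initialization budget.
start = time.perf_counter()
model = timm.create_model("efficientnet_b0", pretrained=False, num_classes=0)
elapsed = time.perf_counter() - start
assert elapsed < 30.0, f"PR-001: model creation took {elapsed:.1f}s (budget 30s)"

# PR-002: extracted features are standard IEEE 754 single-precision tensors.
with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))
assert features.dtype == torch.float32
```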
Hardware Requirements
The following hardware dependencies or constraints are imposed by this SOUP component.
| Requirement ID | Description | Notes / Limitations |
|---|---|---|
| HR-001 | CUDA-compatible GPU recommended for efficient inference | CPU inference supported but significantly slower |
| HR-002 | Sufficient GPU memory for model architecture | EfficientNet-B5: ~30MB, ConvNeXtV2-Base: ~350MB |
| HR-003 | System memory for model weight loading and tensor storage | Minimum 8GB RAM recommended for model operations |
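A small start-up hardware check reflecting HR-001 and HR-002 could look as follows; the reported figures are informational, and the device's real memory budgets live in its own configuration.

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"HR-001/HR-002: CUDA device {props.name}, "
          f"{props.total_memory / 1024**3:.1f} GiB GPU memory")
else:
    # HR-001: CPU inference is supported but significantly slower.
    print("HR-001: no CUDA device detected, falling back to CPU inference")
```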
Software Requirements
The following software dependencies and environmental assumptions are required by this SOUP component.
| Requirement ID | Description | Dependency / Version Constraints |
|---|---|---|
| SR-001 | Python runtime environment | Python >=3.8 |
| SR-002 | PyTorch deep learning framework | torch >=2.0.0 (>=2.6.0 recommended) |
| SR-003 | torchvision for image transformations | torchvision >=0.15.0 |
| SR-004 | Hugging Face Hub for model weight downloads | huggingface_hub (optional, for weights) |
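A sketch of a runtime environment check against these constraints is shown below; the locked timm versions mirror the General Information table, and the exact pins should always come from the deployed requirements_lock.txt.

```python
import sys
from importlib.metadata import PackageNotFoundError, version

# SR-001: Python runtime floor.
assert sys.version_info >= (3, 8), "SR-001: Python >= 3.8 required"

# SR-002..SR-004: record installed dependency versions for the deployment log.
for pkg in ("torch", "torchvision", "timm", "huggingface_hub"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed (optional per SR-004)")

# Deployed timm builds are version-locked (see General Information above).
assert version("timm") in {"1.0.22", "1.0.24"}, "timm does not match a locked version"
```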
Known Anomalies Assessment
This section evaluates publicly reported issues, defects, or security vulnerabilities associated with this SOUP component and their relevance to the medical device software.
| Anomaly Reference | Status | Applicable | Rationale | Reviewed At |
|---|---|---|---|---|
| CVE-2025-32434 (PyTorch torch.load() RCE vulnerability) | Fixed | No | Affects PyTorch <=2.5.1; the device uses version-locked PyTorch dependencies. When loading timm models, weights_only=True combined with PyTorch >=2.6.0 mitigates this vulnerability. Production deployments use pre-verified, internally cached model weights | 2026-01-26 |
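The mitigation described in the rationale can be illustrated as below. The checkpoint path is hypothetical; timm handles its own pretrained-weight loading internally, so this applies only where the device loads internally cached checkpoints directly via torch.load().

```python
import timm
import torch

# CVE-2025-32434 is fixed in PyTorch 2.6; fail fast if an older runtime slips in.
major, minor = (int(x) for x in torch.__version__.split(".")[:2])
assert (major, minor) >= (2, 6), "PyTorch >= 2.6.0 expected for torch.load safety"

model = timm.create_model("convnext_small", pretrained=False, num_classes=0)

# Hypothetical path to an internally cached, pre-verified checkpoint;
# weights_only=True restricts deserialization to tensor data.
state = torch.load("cached_weights/backbone.pt", map_location="cpu", weights_only=True)
model.load_state_dict(state)
```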
The timm library is maintained by Hugging Face with active development (36k+ GitHub stars, 5k+ forks, regular releases). According to Snyk's security analysis, the package has been scanned and no known vulnerabilities have been identified. The project does not maintain a formal security policy (SECURITY.md), which is common for research-originated open-source projects, but vulnerabilities in the underlying PyTorch framework are addressed promptly through version updates.
The device's usage pattern minimizes attack surface exposure:
- No dynamic model loading from external sources: All model architectures are instantiated via timm.create_model() with predefined architecture names; no arbitrary model files are loaded from user input
- Pre-trained weights from verified sources: Model weights are sourced exclusively from the Hugging Face model hub and cached internally before deployment
- Version locking: Requirements lock files (requirements_lock.txt) pin timm and PyTorch to verified versions, ensuring reproducible and auditable deployments
- Limited API surface: The device uses only create_model() for backbone instantiation and num_features / head_hidden_size attributes for dimensioning; no experimental or unsafe features are utilized (a minimal sketch of this pattern follows this list)
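The sketch below illustrates the predefined-architecture pattern described above; the allowlist contents and the helper name are assumptions, not the device's actual registry.

```python
import timm

# Assumed allowlist; the device's approved backbones are defined in its own configuration.
_APPROVED_BACKBONES = frozenset({
    "convnextv2_base",
    "convnext_small",
    "efficientnet_b0",
    "efficientnet_b5",
})

def create_backbone(arch: str, **kwargs):
    """Instantiate a backbone only when its name is on the predefined allowlist."""
    if arch not in _APPROVED_BACKBONES:
        raise ValueError(f"'{arch}' is not an approved backbone architecture")
    # Only create_model() and simple attribute access are used (limited API surface).
    return timm.create_model(arch, num_classes=0, **kwargs)
```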
Risk Control Measures
The following risk control measures are implemented to mitigate potential security and operational risks associated with this SOUP component:
- Version locking via requirements_lock.txt files ensures reproducible deployments
- Pre-trained weights are sourced from verified Hugging Face model hub
- Model weights are cached and verified before deployment (a weight-verification sketch follows this list)
- No dynamic model loading from external sources; all architectures are predefined
- Limited API surface usage (create_model() and attribute access only)
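A sketch of the weight caching and verification measure, assuming a hypothetical manifest of SHA-256 digests maintained alongside the cached weights; the file name and digest below are placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest: file name -> expected SHA-256 digest, recorded when the
# weights were first cached and reviewed.
EXPECTED_DIGESTS = {
    "convnext_small_backbone.safetensors": "0123abcd...",  # placeholder digest
}

def verify_cached_weights(cache_dir: str) -> None:
    """Refuse to proceed if any cached weight file does not match its recorded digest."""
    for name, expected in EXPECTED_DIGESTS.items():
        digest = hashlib.sha256(Path(cache_dir, name).read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"Cached weight file {name} failed verification")
```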
Assessment Methodology
Known anomalies were identified and assessed using the following methodology:
- Sources consulted:
  - National Vulnerability Database (NVD) search for "timm" and "pytorch-image-models"
  - GitHub Security Advisories for the huggingface/pytorch-image-models repository
  - PyPI package security reports
  - Dependency vulnerability scanners (pip-audit, safety)
  - PyTorch framework security advisories (as timm's primary dependency)
- Criteria for determining applicability:
  - Vulnerability must affect deployed versions (1.0.9+)
  - Vulnerability must be exploitable in the device's operational context
  - Vulnerability must impact the specific timm functions used (primarily create_model())
Signature meaning
The signatures for the approval process of this document can be found in the verified commits in the QMS repository. As a reference, the team members expected to participate in the approval of this document, and their roles in the approval process as defined in Annex I Responsibility Matrix of the GP-001, are:
- Author: Team members involved
- Reviewer: JD-003 Design & Development Manager, JD-004 Quality Manager & PRRC
- Approver: JD-001 General Manager