GP-028 AI Development
Table of contents
Purpose
This procedure describes the phases of development, validation, and maintenance in the lifecycle of the Artificial Intelligence (AI) models developed by AI Labs Group S.L. It details how we design, develop, test, release, and update the algorithms that power the Legit.Health Plus device, ensuring they are robust, safe, and effective for their intended purpose.
Scope
This procedure applies to all AI activities performed by AI Labs Group S.L. for Legit.Health Plus, including new development, maintenance, retraining, and reevaluation of existing models, as well as any third-party AI components used within the device.
Reference Documents
- ISO 13485:2016 - Medical devices - Quality management systems
- ISO/IEC 42001:2023 - Information technology — Artificial intelligence — Management system
- MDR 2017/745 - Medical Device Regulation
- Regulation (EU) 2024/1689 - EU Artificial Intelligence Act
- Guidance MDCG 2019-11 - Qualification and Classification of Software in MDR 2017/745
- Guidance MDCG 2020-3 Rev.1 (May 2023) - Guidance on significant changes regarding the software in accordance with Article 120 of the MDR
- Good Machine Learning Practice for Medical Device Development: Guiding Principles
- IEC 62304:2006/AMD 1:2015 - Medical device software - Software life cycle processes
- GP-012 - Design, Redesign and Development
- GP-023 - Change control management
Terms and Definitions
Term / Abbreviation | Definition |
---|---|
Algorithm | A set of mathematical/logical operations coded by the AI team to achieve a specific task, such as image recognition, object detection, or semantic segmentation. |
AI | Artificial Intelligence |
Data Annotation | The action of associating a label to data (e.g., an ICD category to an image, a bounding box to a lesion) by a qualified person following a given protocol. |
Data Collection | The action of acquiring data (e.g., clinical images) from various sources, including clinical partners and public datasets, following a given protocol. |
Training Set | The subset of data used to fit or train the parameters of an AI model. |
Validation Set | The subset of data used to provide an unbiased evaluation of a model fit on the training set while tuning model hyperparameters. |
Test Set | A fixed subset of data used to provide an unbiased evaluation of a final model's performance after training is complete. |
Vision Transformer (ViT) | A specific neural network architecture used for image recognition tasks, inspired by the Transformer architecture. |
Test-Time Augmentation (TTA) | A technique used to improve model performance by creating multiple augmented versions of a test image and averaging the predictions. |
Model Bias | Systematic errors in model predictions that result from unrepresentative training data, algorithmic limitations, or evaluation methodology. |
Robustness Testing | Testing to ensure the model performs reliably under various perturbations, edge cases, and adversarial conditions. |
Retraining | The process of updating an existing AI model by modifying its architecture, training data, or training configuration. |
Reevaluation | The process of evaluating an existing, unchanged AI model against new test data or new performance metrics. |
Process and Responsibilities
Responsibilities
Role | Responsibilities |
---|---|
Technical Manager | - Application of the present procedure. - Overall management of team planning and resources. - Ensuring alignment with other QMS procedures. |
AI Team | - Data Management (Collection, Annotation, Partitioning). - Design of AI projects (Description, Plan, Risk Assessment). - Development of algorithms (Training, Calibration, Experimentation). - Verification and Validation of algorithms (Reporting, Testing). - Release and subsequent updates of algorithms. |
AI Development Cycle
The AI development cycle is structured into three primary phases, ensuring a systematic and controlled progression from conception to deployment.
Design Phase
Description and Plan
The AI Description (R-TF-028-001) is prepared by the AI team to outline the algorithms to be developed, including their specifications.
The development and deployment of these algorithms require defined resources and structured methodologies. These are documented in the AI Development Plan (R-TF-028-002), which provides details on data management, algorithm training, evaluation processes, release procedures, and AI risk management.
Data Collection and Annotation
The development of algorithms necessitates the use of labeled datasets. A single algorithm may require multiple data collections, each potentially annotated using different methods. Conversely, a single dataset may support the development of multiple algorithms and undergo various annotation processes.
The AI team is responsible for planning data collection and annotation activities in accordance with predefined instructions. All collected data and corresponding annotations are securely stored and fully traceable.
For each data collection, the AI team, in collaboration with the Clinical Operations team, prepares Data Collection Instructions (R-TF-028-003), specifying the methodology for data acquisition.
For each data annotation activity, the AI team—supported by the Medical team—prepares Data Annotation Instructions (R-TF-028-004), detailing the annotation procedures.
Project-specific data management requirements are defined within the AI Development Plan (R-TF-028-002).
AI Risk Management
During the design phase, the AI team develops the AI Risk Management Plan as part of the AI Development Plan (R-TF-028-002). The team also conducts risk identification, analysis, and evaluation, documented in the AI Risk Matrix (R-TF-028-011).
Additionally, the AI team collaborates with the product and software development teams to assess AI-related safety risks, which are recorded in the Safety Risk Matrix (???).
Development Phase
The development phase is a structured, agile, and iterative process in which the AI team designs, trains, evaluates, and refines models to meet the defined performance, safety, and compliance requirements. This stage is critical to ensuring that the resulting algorithms are robust, reliable, and suitable for integration into Legit.Health Plus.
Key activities in this phase include:
- Model Architecture Design: Evaluating and implementing advanced architectures, such as Vision Transformers (ViT) for image recognition, based on literature reviews and internal experimentation. This may involve exploring novel configurations, adapting pre-existing models, or adjusting hyperparameters to optimize performance.
- Data Management and Integrity: Preparing datasets through rigorous splitting into training, validation, and test sets, with subject-level separation applied where possible to mitigate data leakage. A fixed test set is maintained for consistent benchmarking. All dataset versions, transformations, and annotations are tracked for full traceability.
- Model Training and Optimization: Training models using supervised learning, transfer learning from pre-trained weights (e.g., ImageNet), and applying data augmentation strategies to enhance sample diversity and improve generalization. Training pipelines are monitored with tools that record experiments, parameter settings, and results for reproducibility.
- Model Calibration and Post-Processing: Applying post-processing techniques, such as temperature scaling, to calibrate model output probabilities, ensuring they are interpretable and aligned with clinical decision-making requirements. Where necessary, additional pre-processing or output transformations are implemented to meet performance and safety criteria.
- Evaluation and Performance Monitoring: Defining and refining key performance metrics and success criteria, visualizing results for stakeholder review, and ensuring rigorous separation between training and evaluation data. Performance is assessed not only on accuracy but also on robustness, fairness, and clinical applicability.
- Model Bias Analysis: Conducting comprehensive bias analysis to identify and mitigate systematic errors in model predictions. This includes:
  - Assessment of training data quality, diversity, and representativity of the intended patient population
  - Analysis of model performance across different demographic subgroups, skin types, lesion types, and clinical presentations
  - Identification of potential sources of bias in data collection, annotation, or model architecture
  - Documentation of bias mitigation strategies and their effectiveness
- Robustness Testing: Evaluating model resilience and reliability under various conditions and perturbations, including:
  - Removal or occlusion of lesions from images to test feature dependence
  - Addition of problem-specific artifacts (e.g., rulers, markers, lighting variations, image quality degradation)
  - Testing with edge cases and out-of-distribution samples
  - Adversarial testing to identify potential failure modes
  - Evaluation of model performance under realistic clinical conditions and variations
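The subject-level separation described above can be sketched as follows. This is an illustrative sketch only: the record structure and the `subject_id` field are hypothetical, and the actual partitioning strategy is the one defined in the AI Development Plan (R-TF-028-002).

```python
import random

def split_by_subject(records, train_frac=0.7, val_frac=0.15, seed=42):
    """Partition image records into train/validation/test sets so that
    all images from a given subject land in exactly one split,
    preventing subject-level data leakage."""
    subjects = sorted({r["subject_id"] for r in records})
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(subjects)
    n_train = int(len(subjects) * train_frac)
    n_val = int(len(subjects) * val_frac)
    assignment = {}
    for i, subject in enumerate(subjects):
        if i < n_train:
            assignment[subject] = "train"
        elif i < n_train + n_val:
            assignment[subject] = "val"
        else:
            assignment[subject] = "test"
    splits = {"train": [], "val": [], "test": []}
    for record in records:
        splits[assignment[record["subject_id"]]].append(record)
    return splits
```

Because subjects, not images, are shuffled and assigned, a subject who contributed several images can never appear on both sides of a split boundary.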
Throughout the development phase, interim results and experimental findings are communicated to relevant stakeholders. Any discoveries that impact prior assumptions or specifications may trigger revisions to the AI Description (R-TF-028-001), AI Development Plan (R-TF-028-002), and AI Risk Matrix (R-TF-028-011).
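As a minimal illustration of the temperature-scaling calibration mentioned above, a single scalar temperature can be fitted on held-out validation logits by minimizing the negative log-likelihood. The grid-search range below is an assumption for the sketch; the production calibration procedure is whatever the AI Development Plan specifies.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over a single row of logits at a given temperature."""
    scaled = [z / temperature for z in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - peak) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def nll(logit_rows, labels, temperature):
    """Mean negative log-likelihood of the true labels."""
    total = 0.0
    for logits, label in zip(logit_rows, labels):
        total -= math.log(softmax(logits, temperature)[label])
    return total / len(labels)

def fit_temperature(logit_rows, labels):
    """Grid-search the scalar temperature (0.5 to 4.0, step 0.05)
    that minimizes validation-set NLL."""
    candidates = [0.5 + 0.05 * i for i in range(71)]
    return min(candidates, key=lambda t: nll(logit_rows, labels, t))
```

For an overconfident model the fitted temperature comes out above 1, flattening the output probabilities so that reported confidence better matches observed accuracy.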
Verification and Validation Phase
Once each algorithm within a package is deemed ready for integration, the AI team prepares an AI Development Report (R-TF-028-005). This report provides a comprehensive account of the algorithm development process, including design, training, tuning, selection, verification, evaluation, and validation, demonstrating compliance with all applicable regulatory requirements. The report also incorporates the AI Risk Assessment, detailing identified risks and mitigation measures.
Upon successful verification and validation of all algorithms within the package, the package is formally released. Implementation guidelines for deploying the package within an SDK by the Software Development team are documented in the AI Release (R-TF-028-006), ensuring proper and consistent integration into the target software environment.
AI Description
The AI team describes the algorithm package with sufficient specificity to guide data collection, annotation, and development activities. This includes the specifications of the required algorithms (e.g., image recognition for ICD categories, object detection for lesion counting) and their intended integration into the Legit.Health Plus device.
AI Development Plan
The AI Team plans the resources and methodologies required to develop the algorithm package described in the AI Description (R-TF-028-001). The plan is a comprehensive document that must detail the entire scope of the development project.
Notably, the plan shall include:
- Project Objectives: Clear goals for the algorithm package, aligned with the device's intended use.
- Project Management & Resources: Definition of the human resources, roles, and project management approach.
- Development Environment: Specification of the hardware, software, and tools (including SOUP as per GP-012) to be used.
- Data Management Plan: A detailed strategy for data acquisition, curation, annotation, and partitioning. This plan may lead to the creation of specific Data Collection Instructions and Data Annotation Instructions.
- Training and Evaluation Plan: The methodology for training, tuning, and evaluating the models, following good machine learning practices and the procedures outlined in R-TF-012-009. This includes defining metrics, acceptance criteria, and traceability measures.
- Release Plan: Details on the deliverables for the Software Development team, including the list of algorithms, documentation, and dependencies.
- Risk Management Plan: The strategy for managing AI-specific risks, which includes the creation of the first version of the AI Risk Matrix (R-TF-028-011).
After the plan is drafted, it is formally reviewed. Checks are performed using the AI Design Checks (R-TF-028-009) checklist to ensure that the defined specifications and requirements conform with good practices for AI development.
The AI Development Plan is a living document. It may be revised during the development phase based on the AI Team's exploratory work and findings. Consequently, Data Collection Instructions and Data Annotation Instructions may also be updated to meet the evolving data requirements of the project.
Data Collection Instructions
To ensure the acquisition of high-quality, relevant data for algorithm development, the AI Team, in collaboration with the Clinical Operations team, shall define and document formal Data Collection Instructions.
These instructions must provide clear specifications for:
- Dataset Composition: The required characteristics of the data to be collected, including demographics and clinical presentation, to ensure it is representative of the intended patient population.
- Dataset Size: An estimation of the required volume of data, with a clear rationale supporting its statistical significance for the intended modeling task.
- Acquisition Protocol: Detailed clinical and technical requirements for data acquisition to ensure consistency and quality across all sources.
The acquisition instructions must be sufficiently specific to enable consistent data collection by qualified personnel and to serve as a verifiable record that the collected data meets the predefined requirements. If a formal clinical investigation is necessary to collect data, it shall be initiated and conducted in accordance with the company's established procedure for clinical investigations.
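By way of illustration, one common way to ground the dataset-size rationale is to size the number of positive cases so that the confidence interval on an expected sensitivity stays within a target half-width, using the normal (Wald) approximation. This is a sketch of one possible method, not the prescribed one; the actual rationale is documented per collection in the Data Collection Instructions.

```python
import math

def required_positives(expected_sens=0.90, half_width=0.05, z=1.96):
    """Wald approximation: number of positive cases needed so the 95%
    CI half-width on sensitivity does not exceed `half_width`."""
    p = expected_sens
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)
```

For an expected sensitivity of 90% and a target half-width of ±5 percentage points, this yields 139 positive cases; the same calculation applies to specificity over the negative cases.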
Data Annotation Instructions
The AI Team, with support from the clinical team, is responsible for providing detailed Data Annotation Instructions to all personnel tasked with labeling datasets. These instructions shall specify precisely how medical experts or other qualified annotators are to label the data required for algorithm development (e.g., applying ICD category labels, drawing bounding boxes for lesions).
Annotation instructions must be unambiguous and serve two primary functions:
- To act as a clear guide for annotators to ensure consistency and accuracy.
- To establish a baseline against which the quality of the resulting annotations can be formally evaluated.
All annotators must be formally trained on these instructions before commencing work. A record of this training shall be maintained to ensure traceability and document the competence of the personnel involved.
AI Risk Matrix
As an integral part of the design phase, the AI Team shall conduct an initial risk assessment focused on hazards unique to the AI development lifecycle. The team will identify, analyze, and evaluate potential AI risks related to data management, algorithm training, model evaluation, and deployment.
All identified risks shall be recorded in the AI Risk Matrix (R-TF-028-011). This initial risk analysis forms the basis for ongoing risk management activities throughout the project.
It is critical to recognize that these AI-specific risks can contribute to or result in broader safety risks for the medical device. Therefore, any identified "safety risks related to AI" must be formally communicated to the product and software development teams for inclusion in the overall safety risk management file.
AI Development Report
The development report summarizes how the algorithms were created and validated. It provides evidence that the models meet their predefined acceptance criteria and are fit for their intended purpose. The report includes detailed sections on data management, algorithm training, and a comprehensive evaluation of performance using the metrics specified in R-TF-012-009.
Data Management
The Data Management section of the AI Development Report shall document:
- Dataset Provenance: Complete traceability of all data sources, including clinical partners, public datasets, and proprietary collections
- Dataset Composition: Detailed description of the dataset characteristics, including size, demographics, clinical presentations, and representativity analysis
- Data Quality Verification: Evidence of data quality checks, including verification of image quality, annotation accuracy, and compliance with Data Collection Instructions (R-TF-028-003)
- Data Partitioning: Description of how data was split into training, validation, and test sets, including rationale for the split strategy and evidence of subject-level separation to prevent data leakage
- Annotation Quality: Evidence of annotation quality, including inter-annotator agreement metrics, compliance with Data Annotation Instructions (R-TF-028-004), and annotator training records
- Dataset Versioning: Complete version control of datasets used throughout the development process
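Inter-annotator agreement can be quantified, for example, with Cohen's kappa for two annotators labeling the same items. This is an illustrative sketch; the agreement metrics actually reported are those specified in the applicable Data Annotation Instructions.

```python
def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa: chance-corrected agreement between two annotators
    over the same items (1.0 = perfect, 0.0 = chance level)."""
    n = len(ann_a)
    observed = sum(1 for x, y in zip(ann_a, ann_b) if x == y) / n
    labels = set(ann_a) | set(ann_b)
    # expected agreement if each annotator labeled independently at random
    expected = sum((ann_a.count(l) / n) * (ann_b.count(l) / n) for l in labels)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)
```

Unlike raw percent agreement, kappa discounts the agreement two annotators would reach by chance given their label frequencies, which matters when one category dominates the dataset.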
Algorithm Training
The Algorithm Training section shall document:
- Model Architecture: Detailed description of the chosen architecture(s), including rationale for selection based on literature review and internal experimentation
- Training Configuration: Complete specification of hyperparameters, loss functions, optimization algorithms, data augmentation strategies, and training procedures
- Training Process: Description of the training methodology, including transfer learning strategies, convergence criteria, and monitoring procedures
- Experiment Tracking: Comprehensive records of all experiments, parameter settings, and results for full reproducibility and traceability
- Model Selection: Justification for the final model selection based on validation set performance and other relevant criteria
- Calibration and Post-Processing: Description of any calibration techniques (e.g., temperature scaling) or post-processing methods applied to improve model outputs
Algorithm Evaluation
The Algorithm Evaluation section shall provide comprehensive evidence that the model meets all acceptance criteria, including:
- Performance Metrics: Detailed results for all clinically relevant metrics (e.g., sensitivity, specificity, AUC, F1-score) on the fixed test set, with statistical confidence intervals where applicable
- Subgroup Analysis: Performance breakdown across relevant demographic and clinical subgroups to demonstrate consistency and identify potential disparities
- Bias Analysis: Comprehensive bias assessment including:
- Quality and representativity of training data relative to the intended patient population
- Performance analysis across different patient demographics, skin types, lesion types, and clinical presentations
- Identification of systematic errors or disparities in model predictions
- Documentation of bias mitigation strategies implemented and their effectiveness
- Robustness Testing: Results from robustness evaluation including:
- Performance under lesion removal/occlusion tests
- Resilience to problem-specific artifacts (rulers, markers, lighting variations, image degradation)
- Edge case and out-of-distribution sample performance
- Adversarial testing results and identified failure modes
- Clinical Applicability: Evidence that the model performs appropriately for its intended clinical use, including comparison to acceptance criteria defined in the AI Development Plan
- Traceability: Clear linkage between evaluation results and the requirements defined in the AI Description (R-TF-028-001) and Software Requirements Specifications
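A percentile-bootstrap confidence interval for a test-set metric such as sensitivity can be sketched as follows. This is illustrative only; the reported intervals follow the statistical methods fixed in the AI Development Plan, and the metric and parameters below are assumptions.

```python
import random

def sensitivity(y_true, y_pred):
    """True-positive rate over the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for `metric`,
    resampling test cases with replacement."""
    rng = random.Random(seed)  # fixed seed so the interval is reproducible
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(metric([y_true[i] for i in idx], [y_pred[i] for i in idx]))
    stats.sort()
    lower = stats[int(n_boot * alpha / 2)]
    upper = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lower, upper
```

Resampling whole test cases (rather than predictions) preserves the pairing between ground truth and model output, so the interval reflects sampling variability of the test set itself.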
AI Release
The formal transfer of a new or updated algorithm package from the AI Team to the Software Development team is managed through an AI Release.
To facilitate a smooth integration, design transfer occurs proactively through collaborative sprint reviews attended by both teams. This allows the Software Development team to begin integration work even before the final algorithms are formally released.
The complete release package provided to the Software Development team shall include:
- Algorithm Package: All required algorithms, models, and their associated configuration files necessary for deployment.
- Release Report (R-TF-028-006): Comprehensive documentation detailing all instructions the Software Development team must follow to correctly integrate the algorithm package into the target software.
- Technical Support: Ongoing support from the AI Team to assist the Software Development team with the integration process.
Prior to release, all deliverables are verified to ensure they were developed and packaged as expected. These checks are formally recorded using the AI V&V Checks (R-TF-028-010) checklist.
A new AI Release is not required for algorithm updates that do not alter the implementation or integration interface of the package.
AI Updates
The lifecycle of AI algorithms extends beyond their initial release. This section outlines the controlled processes for managing post-release updates to algorithm packages, which are categorized as either Retraining or Reevaluation.
Any update to an AI model requires a formal risk assessment. The AI risks associated with the update must be analyzed, and a benefit/risk impact assessment shall be conducted. Any potential new or modified safety risks related to AI resulting from the update must be assessed and communicated to the relevant teams.
All updates to AI models shall follow the versioning scheme defined in the "AI Model Versioning" section below, in accordance with MDCG 2020-3 Rev.1 (May 2023).
Retraining
Retraining is performed when an algorithm's core logic or data foundation is modified. This includes training on new or updated data, implementing a new model architecture, or changing key parameters/hyperparameters.
When an algorithm is retrained, the process must follow the appropriate AI Development Plan. Upon completion, an AI Retraining Report (R-TF-028-007) shall be produced. This report must describe:
- Modifications and Rationale: A clear description of the changes made to the predicate algorithm and the goals for the retraining effort.
- Data Management: Details of any new data collection or annotation activities.
- Training Process: A summary of the training methodology if it differs from the predicate.
- Performance Testing: A comprehensive evaluation of the retrained algorithm's performance.
Performance testing must demonstrate, using the same test data and metrics as the predicate algorithm, that the update shows non-regression in performance and meets the success criteria defined in the original AI Development Report or the updated AI Development Plan.
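A non-regression gate of the kind described above can be sketched as a simple comparison of predicate and candidate metrics on the same fixed test set. The metric names and the tolerance parameter below are hypothetical; the authoritative success criteria are those in the AI Development Report or updated AI Development Plan.

```python
def non_regression_check(predicate_metrics, candidate_metrics, tolerance=0.0):
    """Compare a retrained model against its predicate on the same fixed
    test set: every metric must match or exceed the predicate's value,
    within an optional tolerance. Returns (passed, failures)."""
    failures = {}
    for name, baseline in predicate_metrics.items():
        value = candidate_metrics.get(name)
        if value is None or value < baseline - tolerance:
            failures[name] = (baseline, value)
    return (not failures), failures
```

Recording the per-metric failures, not just a pass/fail flag, gives the retraining report a traceable account of exactly where a candidate fell short.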
A new revision of the AI Release (R-TF-028-006) shall be issued to the Software Development team, and the release must be verified using the AI V&V Checks (R-TF-028-010).
A retraining must not involve modifications to the algorithm's fundamental input or output specifications. If such changes are required, the work is considered a new project, which necessitates the creation of a new AI Description, Development Plan, Development Report, and Release.
Reevaluation
Reevaluation is performed when an existing, unchanged algorithm is evaluated against new test data or new performance metrics, typically driven by a change in product requirements.
If the reevaluation is driven by changes to product requirements, the AI Description (R-TF-028-001) and AI Development Plan (R-TF-028-002) may need to be revised first. The reevaluation process itself is governed by the appropriate AI Development Plan.
Upon completion, an AI Reevaluation Report (R-TF-028-008) shall be produced. This report must describe:
- Modifications and Rationale: A description of the changes to the evaluation plan from the original Development Report and the rationale for the reevaluation.
- Data Management: Details of any new data collection or annotation activities used for the new test set.
- Performance Testing: The results of the performance testing on the new test data, using both the original metrics and any new metrics.
The performance testing should demonstrate that the algorithm shows similar performance on the new testing data and/or meets the success criteria defined in the original AI Development Report or the updated AI Development Plan.
Reevaluation activities do not result in a new AI Release unless there are changes to the integration specifications or documentation that affect the Software Development team.
AI Model Versioning
This section defines the versioning scheme for AI models in accordance with MDCG 2020-3 Rev.1 (May 2023), which provides guidance on determining the significance of changes to medical device software.
The versioning scheme follows a four-part format: Major.Minor.Patch.Build (e.g., 1.2.0.45).
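The four-part scheme can be made concrete with a small helper that classifies the difference between two version strings. This is a sketch under the scheme above; the version strings used are illustrative.

```python
def parse_version(version):
    """Split a Major.Minor.Patch.Build string into an integer tuple."""
    parts = tuple(int(x) for x in version.split("."))
    if len(parts) != 4:
        raise ValueError(f"expected Major.Minor.Patch.Build, got {version!r}")
    return parts

def change_level(old, new):
    """Return which component differs first between two versions:
    'major', 'minor', 'patch', 'build', or 'none'."""
    levels = ("major", "minor", "patch", "build")
    for level, (a, b) in zip(levels, zip(parse_version(old), parse_version(new))):
        if a != b:
            return level
    return "none"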
For AI models integrated into Legit.Health Plus, the significance of changes is determined as follows:
Patch Changes (X.X.X.↑)
No AI model changes constitute patch-level changes. Any retraining or modification of an AI model is classified as either a minor or major change, as such changes inherently affect the clinical functionality and performance characteristics of the device.
Patch changes are reserved for:
- Bug fixes in non-AI software components
- Documentation updates that do not affect the model or its integration
- Minor UI/UX improvements that do not involve AI functionality
Minor Changes (X.X.↑.X)
Minor changes involve retraining of AI models where the intended use, clinical task, and fundamental model specifications remain unchanged. The following scenarios constitute minor changes:
Same Architecture, Modified Training Configuration
- Using the same model architecture but with different hyperparameters (e.g., learning rate, loss functions, batch size, regularization parameters)
- Adjusting optimization algorithms or training procedures to improve convergence
- Modifying data augmentation strategies while maintaining the same underlying architecture
Example: Adjusting the learning rate schedule to optimize convergence without altering the model structure or intended task.
Training Dataset Modifications
Dataset modifications that improve model quality without changing the intended patient population or clinical task:
- Adding images: Incorporating additional images to improve generalization and robustness, provided the new data:
  - Aligns with the validated data management process
  - Maintains consistent quality, annotation standards, and preprocessing
  - Remains within the scope of the intended patient population and use case as defined in the AI Description
- Removing images: Excluding faulty or irrelevant images to enhance dataset quality, such as:
  - Removing images with annotation errors or poor quality
  - Excluding outliers identified through quality control processes
  - Maintaining the same intended patient population and clinical use case
- Fixing labels: Correcting mislabeled images to improve training accuracy, ensuring:
  - No change to the clinical task or output categories
  - Corrections are made according to established Data Annotation Instructions
  - Changes are documented and traceable
Validation and Performance Requirements
All minor changes must satisfy the following requirements:
- Performance Evaluation: Models are validated using the same fixed test dataset to ensure consistent and traceable performance comparisons across versions
- Non-Regression Testing: Performance must demonstrate equivalence or improvement compared to the current model version using clinically relevant metrics (e.g., sensitivity, specificity, AUC, F1-score)
- Risk Assessment: The risk management file (per ISO 14971) is reviewed to confirm no new hazards are introduced and existing mitigations remain effective
- Lifecycle Compliance: Modifications adhere to software lifecycle processes (IEC 62304), with comprehensive functional, regression, and clinical performance testing
- Documentation: All changes are documented in an AI Retraining Report (R-TF-028-007) following the established AI Development Plan
Regulatory Considerations
Per MDCG 2020-3 Rev.1, minor changes as defined above are unlikely to require a new conformity assessment under MDR Article 120, provided that:
- The intended use and clinical function remain unchanged
- The risk profile and safety characteristics are not adversely affected
- Performance equivalence or improvement is demonstrated through rigorous testing
- All changes are implemented through a validated software development lifecycle
Major Changes (X.↑.X.X)
Major changes involve significant modifications to the AI model that affect its architecture, functionality, or clinical scope. The following scenarios constitute major changes:
Architecture Changes
- Different architecture maintaining task consistency:
  - Using a different neural network architecture (e.g., updating the CNN backbone, switching from ResNet to Vision Transformer) while maintaining the same number of output neurons and primary clinical task
  - Rationale: The update enhances feature extraction capabilities as part of routine optimization and lifecycle management per IEC 62304
  - The clinical purpose, input/output specifications, and intended medical use remain unchanged
- Architecture changes with different neuron count:
  - Any modification that changes the number of output neurons, thereby altering the target task
  - This typically indicates a change in the clinical classification scheme or output categories
Input and Output Modifications
- Different inputs:
  - Training with varied input types (e.g., combining image and clinical data, or using different preprocessing techniques) to improve robustness
  - The output task (e.g., classification of skin conditions) remains consistent
- Multitask learning:
  - Adding an auxiliary task during training (e.g., predicting a related clinical feature such as lesion severity alongside the primary diagnosis)
  - The auxiliary task enhances feature learning during training
  - The primary output layer retains the same neuron count and task
  - Note: The auxiliary task is used only for training and does not affect the deployed model's clinical output
- Output normalization:
  - Training with a different output neuron configuration (e.g., erythema severity on a [0,10] continuous scale)
  - Normalizing to the original output format (e.g., [0,4] ordinal categories) during post-processing
  - The clinical task remains equivalent, but the internal representation differs
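The output-normalization scenario above can be illustrated with a simple linear rescaling from a continuous [0, 10] score to the [0, 4] ordinal scale. This is a sketch only; the actual post-processing applied to a released model is the one specified in its AI Release documentation.

```python
def normalize_severity(score, src_max=10.0, dst_max=4):
    """Linearly rescale a continuous [0, src_max] severity score onto
    the [0, dst_max] ordinal scale and round to the nearest category."""
    clamped = min(max(score, 0.0), src_max)  # guard out-of-range model outputs
    return round(clamped * dst_max / src_max)
```

Clamping before rescaling ensures that a model output slightly outside the trained range still maps to a valid ordinal category rather than an undefined value.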
Scope Expansion
- Increasing the number of visual signs or diagnostic categories:
  - Adding new clinical conditions, lesion types, or visual signs to the model's classification capabilities
  - This expands the device's intended use and requires updated clinical validation
- Extending the intended patient population:
  - Modifying the model to support additional demographics, age groups, skin types, or clinical presentations beyond the original scope
Validation and Regulatory Requirements
All major changes must satisfy the following requirements:
- Comprehensive Validation: Full verification and validation activities as defined for new development, including:
  - Complete performance evaluation on appropriate test datasets
  - Subgroup analysis across all relevant clinical and demographic categories
  - Bias analysis and robustness testing
  - Clinical performance testing to demonstrate safety and effectiveness
- Risk Management: Comprehensive risk assessment including:
  - Identification and mitigation of new hazards introduced by the changes
  - Update of the AI Risk Matrix (R-TF-028-011) and Safety Risk Matrix
  - Benefit-risk analysis to justify the modification
- Documentation: Production of:
  - Updated AI Description (R-TF-028-001) if the model scope changes
  - Updated AI Development Plan (R-TF-028-002)
  - AI Retraining Report (R-TF-028-007) or new AI Development Report (R-TF-028-005) if warranted
  - New AI Release (R-TF-028-006) to the Software Development team
- Conformity Assessment: Major changes typically require a new conformity assessment under MDR Article 120, as they may affect:
  - The intended purpose of the device
  - The clinical performance characteristics
  - The safety and benefit-risk profile
  - The device classification
Version Change Decision Process
When determining whether a change is minor or major, the AI Team shall:
- Review the proposed modification against the criteria defined in this section
- Consult with the Technical Manager and, where appropriate, the Regulatory Affairs team
- Document the versioning decision in the AI Retraining Report or AI Development Report
- Ensure the version number is updated consistently across all related documentation and software artifacts
If a proposed change does not clearly fit into the minor or major categories, it shall be escalated to the Technical Manager for review and classification. In cases of uncertainty, the more conservative classification (major) should be applied to ensure appropriate regulatory and quality oversight.
Related QMS Documents
- T-028-001 AI Description
- T-028-002 AI Development Plan
- T-028-003 Data Collection Instructions
- T-028-004 Data Annotation Instructions
- T-028-005 AI Development Report
- T-028-006 AI Release
- T-028-007 AI Retraining Report
- T-028-008 AI Reevaluation Report
- T-028-009 AI Design Checks
- T-028-010 AI V&V Checks
- T-028-011 AI Risk Matrix
Signature meaning
The signatures for the approval process of this document can be found in the verified commits at the repository for the QMS. As a reference, the team members who are expected to participate in this document and their roles in the approval process, as defined in Annex I Responsibility Matrix of the GP-001, are:
- Author: Team members involved
- Reviewer: JD-003, JD-004
- Approver: JD-001