R-TF-012-006 Lifecycle plan and report_2023_001
Purpose
Define the techniques, tools, resources and activities related to the development of the Legit.Health Plus medical device (hereinafter, the device) to guarantee that this development is performed following the UNE-EN 62304:2007/A1:2016 Medical device software. Software life-cycle processes standard.
Terms and definitions
- Architecture: organizational structure of a system or component.
- Change request: documented specification of a change to be made to a medical device software.
- Evaluation: a systematic determination of the extent to which an entity meets its specified criteria.
- Legacy software: medical device software that was legally placed on the market and is still being marketed, but for which there is not sufficient objective evidence that it was developed in compliance with the current version of this standard.
- Medical device software: a software system that has been developed to be incorporated into the medical device being developed or that is intended for use as a medical device.
- Problem report: a record of actual or potential behaviour of a software product that a user or other interested person believes to be unsafe, inappropriate for the intended use or contrary to specification.
- QMS: Quality Management System.
- Release: a particular version of a configuration item that is made available for a specific purpose.
- Safety: freedom from unacceptable risk.
- Security: protection of the information and data so that unauthorized people or systems cannot read or modify them and so that authorized persons or systems are not denied access to them.
- Software development life cycle model: conceptual structure spanning the life of the software from the definition of its requirements to its release, which:
  - identifies the process, activities and tasks involved in development of medical device software,
  - describes the sequence of and dependency between activities and tasks, and
  - identifies the milestones at which the completeness of specified deliverables is verified.
- Software item: any identifiable part of a computer program, for example, source code, object code, control code, control data, or a set of these elements.
- Software system: an integrated collection of software items organized to accomplish a specific function or set of functions.
- Software unit: software item that is not subdivided into other items.
- SOUP: software of unknown provenance (acronym). Software item that is already developed and generally available and that has not been developed for the purpose of being incorporated into the medical device, also known as “off-the-shelf software”, or software item previously developed for which adequate records of the development processes are not available.
- Verification: confirmation through the provision of objective evidence that specified requirements have been fulfilled.
- Version: an identified instance of a configuration item.
Resources and responsibilities
JD-001
- Name: Andy Aguilar
- Position: General Manager
- Education: the employee holds a degree in Business Administration and Business Strategies from Tecnologico de Monterrey and has completed two internships: one in innovation and entrepreneurship at the University of Mississippi and another in business administration and marketing at the University of the Basque Country.
- Experience with product/process/technology or state of the art: Her professional experience includes more than 7 years in sales and more than 2 years as an e-commerce manager in two international companies. Her experience includes more than 5 years of working with the product throughout its life cycle, as she is one of the co-founders of the company and she has participated in the product manufacturing since the beginning.
- Risk management training or other related training: Not required
- Responsibility: To assign resources and approve the implementation of the requirements.
- Qualification required: Business, HR management, sales, health care environment knowledge.
- Authority: The main authority of the company
JD-003
- Name: Taig Mac Carthy
- Position: Design and Development Manager
- Education: with a specialization in Strategic Management and Innovation from Copenhagen Business School, he has a foundational understanding of business practices essential in product development. His knowledge in quality management systems is well-established, having completed ISO 13485, ISO 9001:2015, and ISO 27001 Lead Auditor certifications from Bureau Veritas Group. These certifications underscore his ability to maintain high-quality standards in device manufacturing. Additionally, his training in ICH Good Clinical Practice and as an Equal Opportunity Agent, alongside courses in Python, Data Science, and Graphic Design, provide a diverse skill set applicable to his current role. His academic journey also includes a degree from the University of the Basque Country.
- Experience with product/process/technology or state of the art: solid background in both the medical and entrepreneurial fields. He has contributed to four scientific publications in computer vision applied to medicine, showcasing his expertise in areas directly relevant to medical device development. His involvement from the inception of the company, given his position as co-founder, has afforded him comprehensive knowledge of the device's development journey. His six years as a front-end software developer and the founding of three companies demonstrate his technical skills and entrepreneurial mindset. Additionally, his authorship of two business management books indicates his grasp on business operations, all of which collectively support his capacity to lead in design and development.
- Risk management training or other related training: ISO 13485 and ISO 27001
- Responsibility: Software design and development management
- Qualification required: Business, programming, software life cycle, medical devices regulation.
- Authority: To manage all the software life cycle stages
JD-004
- Name: María Diez
- Position: Quality manager & Person Responsible for Regulatory Compliance (PRRC)
- Education: the employee studied Biology at the Complutense University of Madrid. In addition, she holds a PhD in Biochemistry and Molecular Biology from the same University.
- Experience with product/process/technology or state of the art: with more than 7 years of quality and regulatory experience, María started developing her abilities by implementing a quality management system based on ISO 15189, CLIA and Spanish sanitary regulations (specific to medical laboratories), clinical studies and in vitro Software as a Medical Device. In her most recent role, she developed and integrated a QMS combining ISO 9001, ISO 13485, ISO 15189 and ISO 27001 with the requirements established in Regulation (EU) 2017/746 on in vitro diagnostic medical devices, again for a Software as a Medical Device.
- Risk management training or other related training: ISO 14971, ISO 13485, ISO 9001, ISO 27001, 2017/745 Medical Device Regulations.
- Responsibility: To ensure all the procedures implemented by us are properly addressed and records are archived and maintained according to our procedures and the applicable regulations and to collaborate in the risk management analysis.
- Qualification required: knowledge of the medical devices regulation and applicable quality standards and more than 4 years working with integrated quality management systems and/or medical devices regulations.
- Authority: To review the documents and records of the QMS created by addressing the procedures implemented.
JD-005
- Name: Alfonso Medela
- Position: Technical Manager & Person Responsible for Regulatory Compliance (PRRC)
- Education: the employee holds a degree in Physics from the University of the Basque Country. In addition, he completed his training with an MSc in Physics at the University of Groningen and another Master in Big Data and Business Intelligence at the University of Deusto.
- Experience with product/process/technology or state of the art: expert in Computer Vision, machine learning and artificial intelligence with more than 5 years of experience in the development of projects with medical approaches. His experience includes his time at Tecnalia Research & Innovation where he worked as a data scientist focused on Deep Learning algorithms in the area of Computer Vision. He has written 7 papers on Machine Learning and Image Recognition, he also teaches workshops and courses on Machine Learning and Deep Learning. At the European level, he is one of the few experts on the few-shot learning methodology in the field of artificial intelligence.
- Risk management training or other related training: ISO 13485 and Medical Devices regulatory
- Responsibility: To perform the design and development of the medical device, the risk management analysis, the clinical evaluation and the proper release of each version of the medical device, to contact the Competent Authorities in the event of an adverse effect, to perform the proper post-market activities required and approve and plan the resolution of incidences or complaints received.
- Qualification required: Deep learning and AI in the medical environment, medical devices regulation.
- Authority: As the technical responsible for the product he has the authority to prioritize the activities that must be performed along the whole software life cycle and to approve all the documents and records of the software life cycle and the release of each version of the medical device.
JD-009
- Name: Ignacio Hernández
- Position: Medical Data Scientist
- Education: the employee has a degree in biomedical engineering and a master's degree in computer vision, robotics and machine learning.
- Experience with product/process/technology or state of the art: He has worked as a computer vision engineer with medical imaging in two healthcare companies: Overture Life and Medtronic Spain. He has experience in deep learning algorithms for medical imaging, database management and image processing. He has received several awards, among them Treelogic's 10th Innovator Spirit Award and the "Most compelling healthcare need" from Hacking Medicine Madrid.
- Risk management training or other related training: Not required
- Responsibility: To perform and review the activities that must be performed during the software life cycle.
- Qualification required: Image deep learning knowledge.
- Authority: To verify the performance of the activities developed during the software life cycle.
JD-007
- Name: Gerardo Fernández
- Position: Technology Manager
- Education: the employee studied mathematics and computer science at the Autonomous University of Madrid and has been learning programming in a self-taught way since then.
- Experience with product/process/technology or state of the art: Gerardo started his professional career in the development of web and mobile applications. In 2014 he started working under the name of his brand "Latte and Code". Two years later, Promotive Group, one of the companies he worked with regularly, offered him the role of CTO to develop a CRM for the company. After closing that stage at Grupo Promotive, he returned to freelance development to seek new challenges. He also opened a YouTube channel where he continues to enjoy one of his vocations: teaching others.
- Risk management training or other related training: Not required
- Responsibility: To ensure we have implemented, documented and performed all the activities related to the cybersecurity of the software manufacturing processes.
- Qualification required: Computer science and cybersecurity
- Authority: To choose and implement the best measures to maintain the cybersecurity level that the product requires.
JD-017
- Name: Alejandro Carmena
- Position: Machine Learning Ops
- Education: the employee completed a degree in energy engineering at the Rey Juan Carlos University of Madrid, and an MSc in industrial mathematics at the Carlos III University of Madrid.
- Experience with product/process/technology or state of the art: experienced in software architecture for machine learning systems.
- Risk management training or other related training: Not required
- Responsibility: To contribute to the design and development, especially the architecture.
- Qualification required: degree in engineering or equivalent experience
- Authority: To perform the activities during the software life cycle.
IEC 62304:2006/Amd 1:2015 Checklist
Primary lifecycle processes
According to IEC 62304, the device software is classified as safety class B. As such, not all the items of the checklist are required. When an item is required, it is noted in the column Required of the following table.
4. General Requirements
Reference | Software Lifecycle Process | Required | Fulfilled in |
---|---|---|---|
4.1 | Quality Management System | TRUE | Section Quality management system (QMS) in the document Quality Manual |
4.2 | Risk Management Process | TRUE | Section Risk management process in the document GP-013 Risk management |
4.3 | Software safety classification | TRUE | Section Software safety classification in the document Lifecycle plan and report |
4.4 | Legacy software | TRUE | Section Legacy software in the document Lifecycle plan and report |
5. Software development process
Reference | Software Lifecycle Process | Required | Fulfilled in |
---|---|---|---|
5.1 | Software development planning | - | - |
- 5.1.1 | Software development plan | TRUE | Section Software development planning in the document Lifecycle plan and report |
- 5.1.2 | Maintenance of software development plan | TRUE | Section Software maintenance process in the document Lifecycle plan and report |
- 5.1.3 | Reference of the software development plan towards the design and development | TRUE | Section Software development planning in the document Lifecycle plan and report |
- 5.1.4 | Software development standard, methods and tools planning | FALSE | Required for class C only |
- 5.1.5 | Software integration and integration testing planning | TRUE | Section Software integration and integration testing planning in the document Lifecycle plan and report |
- 5.1.6 | Planning of software verification | TRUE | Section Establishment of the software unit verification process in the document Lifecycle plan and report |
- 5.1.7 | Planning of software risk management | TRUE | Document R-TF-013-002 Risk management record and procedure GP-013 Risk management |
- 5.1.8 | Planning of documentation | TRUE | Section Documentation planning in the document Lifecycle plan and report |
- 5.1.9 | Planning of the software configuration management | TRUE | Section Software configuration management process in the document Lifecycle plan and report |
- 5.1.10 | Supporting items that need to be controlled | TRUE | Section Identification of the configuration in the document Lifecycle plan and report |
- 5.1.11 | Software configuration item controlled before verification | TRUE | Section Software configuration item control before verification in the document Lifecycle plan and report |
- 5.1.12 | Identification and avoidance of common software defects | TRUE | Section Identification and prevention of common software defects in the document Lifecycle plan and report |
5.2 | Software requirements analysis | - | - |
- 5.2.1 | Definition and documentation of software requirements from system requirements | TRUE | Section Definition and documentation of the software requirements from the system requirements in the document Lifecycle plan and report |
- 5.2.2 | Content of the software requirements | TRUE | Section Content of software requirements in the document Lifecycle plan and report |
- 5.2.3 | Integration of risk control measures into software requirements | TRUE | Section Integration of the risk control measures in the software requirements in the document Lifecycle plan and report |
- 5.2.4 | Re-evaluation of the risk analysis of the medical device | TRUE | Section Re-evaluation of the medical device risk analysis in the document Lifecycle plan and report |
- 5.2.5 | Update of the requirements of the system | TRUE | Section Update of requirements in the document Lifecycle plan and report |
- 5.2.6 | Verification of the requirements of the software | TRUE | Section Verification of the software requirements in the document Lifecycle plan and report |
5.3 | Software architectural design | - | - |
- 5.3.1 | Transformation of software requirements into an architecture | TRUE | Section Transformation of the software requirements into an architecture in the document Lifecycle plan and report |
- 5.3.2 | Develop an architecture for the interfaces of software items | TRUE | Section Development of an architecture for the interfaces of software items in the document Lifecycle plan and report |
- 5.3.3 | Specify functional and performance requirements of SOUP item | TRUE | Section Requirements in the SOUP records of the DHF |
- 5.3.4 | Specify system hardware and software required by SOUP item | TRUE | Section System requirements in the SOUP records in the DHF |
- 5.3.5 | Identify segregation necessary for risk control | FALSE | Required for class C only. Section Identification of the segregation necessary for the risk control in the document Lifecycle plan and report |
- 5.3.6 | Verify software architecture | TRUE | Section Verification of the software architecture in the document Lifecycle plan and report |
5.4 | Software detailed design | - | - |
- 5.4.1 | Subdivide software into software units | TRUE | Section Subdivision of the software architecture into software units in the document Lifecycle plan and report |
- 5.4.2 | Develop detailed design for each software unit | FALSE | Required for class C only. Section Development of the detailed design for each software unit in the document Lifecycle plan and report |
- 5.4.3 | Develop detailed design for interfaces | FALSE | Required for class C only. Section Development of the detailed design for the interfaces in the document Lifecycle plan and report |
- 5.4.4 | Verify detailed design | FALSE | Required for class C only. Section Verification of the detailed design in the document Lifecycle plan and report |
5.5 | Software unit implementation and verification | - | - |
- 5.5.1 | Implement each software unit | TRUE | Section Implementation of each software unit in the document Lifecycle plan and report |
- 5.5.2 | Establish software unit verification process | TRUE | Section Establishment of the software unit verification process in the document Lifecycle plan and report |
- 5.5.3 | Software unit acceptance criteria | TRUE | Section Software unit acceptance criteria in the document Lifecycle plan and report |
- 5.5.4 | Additional software unit acceptance criteria | FALSE | Required for class C only. Section Additional software unit acceptance criteria in the document Lifecycle plan and report |
- 5.5.5 | Software unit verification | TRUE | Section Software unit verification in the document Lifecycle plan and report |
5.6 | Software integration and integration testing | - | - |
- 5.6.1 | Integrate software units | TRUE | Section Integration of software units in the document Lifecycle plan and report |
- 5.6.2 | Verify software integration | TRUE | Section Verification of the software integration in the document Lifecycle plan and report |
- 5.6.3 | Software integration testing | TRUE | Section Software integration tests in the document Lifecycle plan and report |
- 5.6.4 | Software integration tests content | TRUE | Section Content of the integration tests in the document Lifecycle plan and report |
- 5.6.5 | Evaluate software integration test procedures | TRUE | Section Verification of the software integration test procedures in the document Lifecycle plan and report |
- 5.6.6 | Conduct regression tests | TRUE | Section Software regression tests in the document Lifecycle plan and report |
- 5.6.7 | Integration test record contents | TRUE | Section Content of the integration test records in the document Lifecycle plan and report |
- 5.6.8 | Use software problem resolution process | TRUE | Section Use of the software problem resolution process in the document Lifecycle plan and report |
5.7 | Software system testing | - | - |
- 5.7.1 | Establish tests for software requirements | TRUE | Section Establishment of tests for software requirements in the document Lifecycle plan and report |
- 5.7.2 | Use software problem resolution process | TRUE | Section Use of the software problem resolution process in the document Lifecycle plan and report |
- 5.7.3 | Retest after changes | TRUE | Section Retest after changes in the document Lifecycle plan and report |
- 5.7.4 | Verify software system testing | TRUE | Section Verification of software system tests in the document Lifecycle plan and report |
- 5.7.5 | Software system test record contents | TRUE | Section Content of the test records of the software system in the document Lifecycle plan and report |
5.8 | Software release | - | - |
- 5.8.1 | Ensure software verification is complete | TRUE | Section Assurance of the software verification completion in the document Lifecycle plan and report |
- 5.8.2 | Document known residual anomalies | TRUE | Section Documentation of the known residual anomalies in the document Lifecycle plan and report |
- 5.8.3 | Evaluate known residual anomalies | TRUE | Section Evaluation of the known residual anomalies in the document Lifecycle plan and report |
- 5.8.4 | Document released versions | TRUE | Section Documentation of the released versions in the document Lifecycle plan and report |
- 5.8.5 | Document how released software was created | TRUE | Section Documentation about how the released software was created in the document Lifecycle plan and report |
- 5.8.6 | Ensure activities and tasks are complete | TRUE | Section Assurance of activities and task completion in the document Lifecycle plan and report |
- 5.8.7 | Archive software | TRUE | Section Software archive in the document Lifecycle plan and report |
- 5.8.8 | Assure repeatability of software release | TRUE | Section Assurance of the safe delivery of the released software in the document Lifecycle plan and report |
6. Software maintenance process
Reference | Software Lifecycle Process | Required | Fulfilled in |
---|---|---|---|
6.1 | Establish software maintenance plan | TRUE | Section Establishment of the software maintenance plan in the document Lifecycle plan and report |
6.2 | Problem and modification analysis | - | - |
- 6.2.1 | Document and evaluate feedback | - | - |
-- 6.2.1.1 | Monitor feedback | TRUE | Section Monitor feedback in the document Lifecycle plan and report |
-- 6.2.1.2 | Document and evaluate feedback | TRUE | Section Monitor feedback in the document Lifecycle plan and report |
-- 6.2.1.3 | Evaluate problem report's effects on safety | TRUE | Section Monitor feedback in the document Lifecycle plan and report
- 6.2.2 | Use software problem resolution process | TRUE | Section Establishment of the software maintenance plan in the document Lifecycle plan and report. Procedure GP-023 Change control management
- 6.2.3 | Analyze change requests | TRUE | Procedure GP-023 Change control management |
- 6.2.4 | Change request approval | TRUE | Procedure GP-023 Change control management |
- 6.2.5 | Communicate to users and regulators | TRUE | Procedure GP-023 Change control management |
6.3 | Modification implementation | - | - |
- 6.3.1 | Use established process to implement modification | TRUE | Procedure GP-023 Change control management |
- 6.3.2 | Re-release modified software system | TRUE | Section Re-release of the modified software system in the document Lifecycle plan and report |
7. Software risk management process
Reference | Software Lifecycle Process | Required | Fulfilled in Document |
---|---|---|---|
7.1 | Analysis of software contributing to hazardous situations | - | - |
- 7.1.1 | Identify software items that could contribute to a hazardous situation | TRUE | Section Software risk management process in the document Lifecycle plan and report and document R-TF-013-002 Risk management record |
- 7.1.2 | Identify potential causes of contribution to a hazardous situation | TRUE | Document R-TF-013-002 Risk management record |
- 7.1.3 | Evaluate published SOUP anomaly lists | TRUE | Sections Lists of published anomalies and History of evaluation of SOUP anomalies inside each SOUP file in the DHF |
- 7.1.4 | Document potential causes | TRUE | Document R-TF-013-002 Risk management record |
- 7.1.5 | Document sequences of events | TRUE | Document R-TF-013-002 Risk management record |
7.2 | Risk control measures | - | - |
- 7.2.1 | Define risk control measures | TRUE | Document R-TF-013-002 Risk management record |
- 7.2.2 | Risk control measures implemented in the software | TRUE | Document R-TF-013-002 Risk management record |
7.3 | Verification of risk control measures | - | - |
- 7.3.1 | Verification of the risk control measures | TRUE | Document R-TF-013-002 Risk management record |
- 7.3.2 | Document any new sequence of events | TRUE | Document R-TF-013-002 Risk management record
- 7.3.3 | Document traceability | TRUE | Document R-TF-013-002 Risk management record
7.4 | Risk management of software changes | - | - |
- 7.4.1 | Analyze changes to medical device software with respect to safety | TRUE | Section Risk management of software changes in the document Lifecycle plan and report and procedure GP-023 Change control management |
- 7.4.2 | Analyze impact software changes on existing risk control measures | TRUE | Section Risk management of software changes in the document Lifecycle plan and report and procedure GP-023 Change control management |
- 7.4.3 | Perform risk management activities based on analyses | TRUE | Document R-TF-013-002 Risk management record |
8. Software configuration management process
Reference | Software Lifecycle Process | Required | Fulfilled in Document |
---|---|---|---|
8.1 | Identification of the configuration | - | - |
- 8.1.1 | Establishment of means to identify the configuration items | TRUE | Section Establishment of means to identify the configuration items in the document Lifecycle plan and report |
- 8.1.2 | Identification of SOUP | TRUE | Section Identification of SOUP in the document Lifecycle plan and report |
- 8.1.3 | Identification of the documentation of the system configuration | TRUE | Section Identification of the documentation of the system configuration in the document Lifecycle plan and report |
8.2 | Control of changes | - | - |
- 8.2.1 | Approval of change requests | TRUE | Section Change control in the document Lifecycle plan and report |
- 8.2.2 | Implementation of changes | TRUE | Section Change control in the document Lifecycle plan and report |
- 8.2.3 | Verification of changes | TRUE | Section Change control in the document Lifecycle plan and report |
- 8.2.4 | Provide means for the traceability of changes | TRUE | Section Change control in the document Lifecycle plan and report |
8.3 | Documentation regarding the configuration | TRUE | Section Software archive in the document Lifecycle plan and report |
9. Software problem resolution process
Reference | Software Lifecycle Process | Required | Fulfilled in Document |
---|---|---|---|
9.1 | Prepare problem reports | TRUE | Section Reception of non-conformity in the procedure GP-006 Non-conformity, Corrective and Preventive actions |
9.2 | Investigate the problem | TRUE | Section Non-conformity management in the procedure GP-006 Non-conformity, Corrective and Preventive actions |
9.3 | Advise relevant parties | TRUE | Section Criteria to notify an incident to the national competent authorities in the procedure GP-004 Vigilance system |
9.4 | Use change control process | TRUE | Procedure GP-023 Change control management |
9.5 | Maintain records | TRUE | Section Verification of the impact of CAPAs in the procedure GP-006 Non-conformity, Corrective and Preventive actions, procedure GP-001 Control of documents, procedure GP-013 Risk management
9.6 | Analyse problems for trends | TRUE | Section Data analysis procedure in the procedure GP-020 QMS Data analysis |
9.7 | Verify software problem resolution | TRUE | Section Verification of the efficacy of CAPAs in the procedure GP-006 Non-conformity, Corrective and Preventive actions |
9.8 | Test documentation contents | TRUE | Sections Software regression tests and Content of the integration test records in the document Lifecycle plan and report |
General requirements
Quality Management System (QMS) and risk management
We have a Quality Management System (QMS), which is designed following two standards:
- UNE-EN ISO 13485:2018 Medical devices. Quality management systems. Requirements for regulatory purposes.
- UNE-EN ISO 14971:2020 Medical devices. Application of risk management to medical devices.
The reason is to manufacture our medical devices in compliance with regulatory requirements and meet customers' expectations.
You can read about our QMS and our risk management procedures in the documents Quality Manual and GP-013 Risk management, respectively.
Software safety classification
The device can be classified as follows:
- According to Rule 11 of the 2017/745 MDR: Class IIa
- According to the International Medical Device Regulators Forum (IMDRF) Software as a Medical Device (SaMD) framework: Class II
- According to UNE-EN 62304:2007/A1:2016: Class B
This classification is rooted in the specific characteristics and functions of the software, as well as its intended use and the potential risks associated with its application.
The primary purpose of the device is to aid healthcare organisations and HCPs in two clinical tasks:
- During the diagnosis of skin structures. The device achieves this by providing an interpretative distribution representation of possible International Classification of Diseases (ICD) classes that might be represented in the pixel content of the image.
- During the measurement of the severity of said conditions. The device achieves this by providing quantifiable data on the intensity, count and extent of clinical signs such as erythema, desquamation, and induration, among others.
The device is used under the direction of healthcare professionals. It is important to note that the software does not offer an immediate diagnosis. Instead, it provides a range of clinical data from analyzed images which assists healthcare practitioners in their clinical evaluations.
Potential risks
The potential risk of inaccurate diagnosis or misinterpretation of the device's output is mitigated by the fact that the device is used as a supportive tool rather than a definitive diagnostic solution.
The device is designed to be used by trained healthcare professionals who, in their capacity, have the competence to distinguish between the device's output and the actual clinical conditions of the patients.
Data privacy and compliance
We ensure compliance with data privacy regulations, particularly the General Data Protection Regulation (GDPR). The device processes personally identifiable data of patients, typically anonymized skin images. In instances where the images contain identifiable features, we act as a data processor under Article 28 of the GDPR, with the healthcare organizations acting as data controllers.
As a medical device, it operates under a Quality Management System in compliance with ISO 13485, and adheres to the norms of ISO 27001.
UNE-EN 62304:2007/A1:2016 classification
Justification of Class B safety classification
The combination of the software's advisory role, built-in safeguards, professional oversight, and its use within a broader assessment context significantly minimizes the likelihood of a hazardous situation arising solely from the software's failure. This aligns with the criteria for a Class B classification under UNE-EN 62304:2007/A1:2016.
Non-invasive nature
The device does not interact directly with patients and cannot directly contribute to a hazardous situation. It is neither a therapeutic device nor a treatment of any kind, and data gathering happens without direct contact with the patient. Therefore, a failure of the device would not directly cause injury to the patient.
The identified risks associated with software failure, combined with mitigating factors, suggest that any resultant harm would be non-serious in nature. This is consistent with the criteria for Class B classification.
Layered decision-making
The device provides only one layer of information in a multi-layered decision-making process. Healthcare providers rely on a combination of clinical expertise, patient history, and other diagnostic tools, reducing the risk of a hazardous situation arising solely from the software's failure. This is also explained in the Instructions for Use of the device, and is embedded in the output of the device.
While the software functions primarily as a decision support tool, it has the potential to contribute to hazardous situations under certain conditions. However, the primary users of our software are IT professionals in healthcare settings, and indirectly HCPs in their organisations, whose expertise further reduces the risk of critical errors. This advisory role significantly limits the potential for hazardous outcomes directly attributable to the software.
Indirect role in clinical decisions
The device is designed to provide supplementary information to healthcare professionals, who are responsible for the final clinical decision-making. This indirect role significantly limits the possibility of serious injury resulting from software failure. The device's function is limited to providing information, presenting a low risk of direct harm.
Inbuilt safeguards
The software includes features like error flags or uncertainty indicators. One of the most obvious is that the interpretative distribution of ICD classes never includes only one class, but always shows the Top-5 classes, as illustrated in the sketch below. These safeguards can alert healthcare providers to potential inaccuracies, prompting further investigation rather than sole reliance on the software.
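For illustration only, the following sketch (not the device's actual implementation; function and field names are hypothetical) shows how a top-five interpretative distribution can be derived from a model's raw class scores:

```python
import numpy as np

def top_5_distribution(scores: np.ndarray, class_names: list[str]) -> list[dict]:
    """Return the five most likely ICD classes with normalized confidences.

    A full distribution is computed first (numerically stable softmax), so the
    output never reduces to a single class: the top five are always reported.
    """
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()
    top = np.argsort(probs)[::-1][:5]  # indices of the five highest probabilities
    return [
        {"icd_class": class_names[i], "confidence": float(probs[i])}
        for i in top
    ]
```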
Focus on non-lethal skin structures
The device deals with non-lethal dermatological classes: all of the ICD classes that the device can output are classified as non-lethal, with the sole exception of melanoma. It is crucial to understand that in instances where melanoma is suspected, even slightly, clinicians universally adhere to the protocol of conducting a biopsy to confirm the diagnostic suspicion. This established clinical practice is rooted in the fundamental understanding that the removal of a melanoma is a minor procedure compared to the significant risks associated with the disease. Consequently, practitioners never rely solely on the device when it comes to identifying melanoma, ensuring a comprehensive and cautious approach to diagnosis and treatment.
As previously mentioned, the device's primary function is to provide an interpretative distribution representation of possible International Classification of Diseases (ICD) classes that might be represented in the pixel content of the image, and to provide quantifiable data on the intensity, count and extent of clinical signs such as erythema, desquamation, and induration, among others. Even in the unlikely event of a failure or misinterpretation of the device's output, it would not directly cause harm to the patient. The healthcare professional would still have to interpret the data and make a decision based on their professional judgment.
In conclusion, given the intended use of the device, its functionality, the level of clinical decision support it provides, the potential risks, and its compliance with data privacy regulations, it is justified to classify the device as a Class II device under the IMDRF's SaMD framework and as Class B according to UNE-EN 62304:2007/A1:2016.
Legacy software
This is the first version of the device.
We have a legacy device, called Legit.Health, that was commercialized as a Class I device under Directive 93/42/EEC, and in compliance with said regulation, as explained in its Declaration of Conformity.
The previous device differs from the current one in that it consisted of a desktop and a mobile application, as well as an API and the processors. The current device consists only of the API and the processors, the latter being the components that actually perform the medical functions.
Software development process
Software development planning
We develop the device following the GP-012 Design, redesign and development procedure, together with the specific details of the plan described in this document.
Planning of software integration and integration testing
This integration plan details how the software elements of the medical device are put together once they have been individually verified, and how the successful integration of the SOUPs with these software elements is ensured. All of this is supported by comprehensive testing protocols to guarantee functional coherence, reliability, and adherence to medical standards.
Integration strategy
Integration will be conducted in a phased, bottom-up approach, starting with the integration of low-level service modules and progressively incorporating higher-level orchestration layers. This method facilitates the early detection of interface defects and simplifies the management of complex service dependencies. A dedicated staging environment that replicates the production settings will be used for all integration activities, utilizing tools such as Bitbucket Pipelines for continuous integration, Docker for containerization, and Pytest for unit and integration testing.
The integration of SOUP components will be carried out with stringent oversight. Each SOUP component will be identified, and its risks assessed following the guidelines set forth in section 5.1.7 of IEC 62304. The integration of these components will adhere to the same rigorous testing and validation protocols applied to in-house developed software components, ensuring full compatibility and safety.
Testing strategy
The testing process is structured to cover multiple levels of the software architecture, beginning with unit testing of individual components to validate specific functions. Following successful unit testing, software components and microservices will be integrated incrementally. Each integration point undergoes testing to validate data consistency, error handling, and performance against requirements. Comprehensive system-wide tests are conducted post-integration to validate the end-to-end functionality of the software.
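As a minimal illustration of this strategy, and assuming a hypothetical endpoint and payload (these are not the device's actual interface), integration tests built with Pytest and FastAPI's test client might look like this:

```python
from fastapi.testclient import TestClient

from app.main import app  # hypothetical application entry point

client = TestClient(app)

def test_analysis_endpoint_returns_valid_distribution():
    """Integration test: exercise the HTTP layer, routing and the analysis
    service together, validating data consistency at the integration point."""
    response = client.post(
        "/v1/analyze",  # hypothetical endpoint path
        json={"image": "<base64-encoded image>"},
    )
    assert response.status_code == 200
    body = response.json()
    # The interpretative distribution must always contain five classes.
    assert len(body["icd_distribution"]) == 5

def test_analysis_endpoint_rejects_malformed_payload():
    """Error handling at the integration point: a malformed request must be
    rejected with a validation error, not a server crash."""
    response = client.post("/v1/analyze", json={})
    assert response.status_code == 422
```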
Special attention is given to interface testing to ensure accurate data exchange between microservices and functional testing to verify that each microservice meets its designated requirements. Performance testing is also carried out under expected and peak load conditions to ensure robustness under real-world operational stresses.
Automated testing plays a crucial role in improving the repeatability and efficiency of the testing processes. Following any updates or additions, regression tests are performed to confirm seamless integration of new code without disrupting existing functionalities.
Documentation and traceability are meticulously maintained throughout the testing phase. All test cases and results are stored using a versioning tool, ensuring traceability back to specific requirements. Tools such as Bitbucket, JIRA and databases are employed to manage this documentation effectively.
Documentation planning
We will include within the technical file of the device all the documents required according to the 2017/745 MDR and our QMS requirements, based on the applicable standards mentioned above and on the Legit.Health Plus description and specifications document.
All of them will contain their title, purpose, scope and responsibilities, and they will be prepared, reviewed and approved according to the GP-001 Control of documents procedure.
The following list shows the documents related to the design and development procedure together with their purpose and intended audience:
Legit.Health Plus description and specifications
- purpose: to document the device's information and specifications, including the applicable standards;
- intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
R-TF-008-001 GSPR
- purpose: to evaluate and document the safety and performance requirements applicable to the device and to document how the applicable requirements are implemented;
- intended audience: regulatory and quality team, product development team, clinical team, Notified Body and internal/external auditors.
Design History File, comprised of:
- Requirements
  - purpose: to document user requirements, software requirement specification, design requirements and regulatory requirements;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
- Activities
  - purpose: to document the design verification;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
- Test plans
  - purpose: to document the design verification plans to ensure the device is capable of meeting the requirements established for its intended purpose;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
- Test runs
  - purpose: to document the design verification results to ensure the device is capable of meeting the requirements established for its intended purpose;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
- Version release
  - purpose: to document the design transfer to production;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
- Design stage review
  - purpose: to document the design review at each stage of the design and development process;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
- SOUP
  - purpose: to document the SOUP used in the software development, their requirements and any anomalies;
  - intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
R-TF-012-005 Design change control
- purpose: to document the list of the device's version releases and the changes implemented in each release;
- intended audience: regulatory and quality team, product development team, customer success team, Notified Body and internal/external auditors.
R-TF-012-006 Life cycle plan and report
- purpose: to define the techniques, tools, resources and activities related to the development of the device to guarantee this development is performed following the UNE-EN 62304:2007/A1:2016 Medical device software. Software life-cycle processes standard;
- intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
R-TF-012-007 Formative evaluation plan, R-TF-012-014 Summative evaluation plan
- purpose: to document plans for software usability testing according to the requirements set out in UNE-EN 62366-1:2015/A1:2020 Application of usability engineering to medical devices;
- intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
R-TF-012-008 Formative evaluation report, R-TF-012-015 Summative evaluation report
- purpose: to document the results of the software usability testing activities;
- intended audience: regulatory and quality team, product development team, Notified Body and internal/external auditors.
R-TF-012-009 Validation and testing of machine learning models
- purpose: to define the metrics and methodologies to test the performance of the different machine learning models implemented in the device;
- intended audience: regulatory and quality team, product development team (especially medical data scientists, JD-009), Notified Body and internal/external auditors.
R-TF-012-012 Customers product version control
- purpose: to monitor and document the device's version used by customers;
- intended audience: regulatory and quality team, product development team, customer success team, Notified Body and internal/external auditors.
R-TF-013-002 Risk management record
- purpose: to document the risk management process performed according to the requirements set out in UNE-EN ISO 14971:2020 Medical devices. Application of risk management to medical devices;
- intended audience: regulatory and quality team, product development team, clinical team, Notified Body and internal/external auditors.
Legit.Health Plus IFU
- purpose: to provide the users with all the necessary information, according to the requirements set out in MDR 2017/745, Annex I, for the safe use of the device;
- intended audience: regulatory and quality team, product development team, customer success team, clinical team, sales team, intended users, Notified Body and internal/external auditors.
R-TF-001-008 Legit.Health Plus label
- purpose: to provide the users with the device's information according to the requirements set out in MDR 2017/745, Annex I;
- intended audience: regulatory and quality team, product development team, intended users, Notified Body and internal/external auditors.
Evidence of compliance with this clause can be found in GP-001 Control of documents, where we explain the review and approval method for all our documentation. We also explain the responsibilities of each team member in the development, review, approval and modification of documentation.
Software configuration item control before verification
To effectively place all the configuration items under configuration management control before verification, and to remain aligned with the IEC 62304 standard, it is crucial to adopt a systematic approach. This plan ensures that every configuration item is identified, version-controlled, and traceable throughout the development and maintenance processes.
Starting with configuration identification, the first step involves documenting each type of configuration item. For software items, this means detailing all software architecture components, complete with versions and dependencies, and adopting semantic versioning to monitor changes. Software tools such as Docker, Git, and the Python interpreter should be documented with their specific versions and configurations. Similarly, SOUP components like FastAPI, Pydantic, Pandas, and Numpy are listed with precise version numbers. Microservices configuration files, including JSON files, require distinct naming and versioning strategies. Deep learning models, too, need thorough documentation covering versions, training datasets, and parameters. Documentation, ranging from design specifications to user manuals and SOPs, should be versioned. Databases like MongoDB or AWS DocumentDB should have documented schemas and versions, considering their dynamic nature and critical role in applications.
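As one possible way to make the documented versions verifiable at runtime (the package list and version numbers below are illustrative; the authoritative pinned versions live in the SOUP records of the DHF), a startup check can compare installed SOUP versions against the documented manifest:

```python
from importlib.metadata import version

# Illustrative manifest: the authoritative pinned versions are documented
# in the SOUP records of the DHF.
SOUP_MANIFEST = {
    "fastapi": "0.110.0",
    "pydantic": "2.6.4",
    "pandas": "2.2.1",
    "numpy": "1.26.4",
}

def verify_soup_versions(manifest: dict[str, str]) -> None:
    """Fail fast at startup if any SOUP component deviates from the
    configuration-controlled version documented in the manifest."""
    mismatches = {
        name: (pinned, version(name))
        for name, pinned in manifest.items()
        if version(name) != pinned
    }
    if mismatches:
        raise RuntimeError(f"SOUP version mismatch: {mismatches}")
```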
The incorporation of a VCS such as Git is paramount for managing digital configuration items (which, in the case of our device, is all of them), including software code, microservices configuration files, and documentation. Access to modify controlled items is restricted to authorized personnel only, based on defined roles within our development team, ensuring that all changes are traceable to an individual. It is also essential to use specialized tools for version-controlling databases and deep learning models due to their unique complexities.
A robust Change Management process is then established to review, approve, and log changes across all configuration items, ensuring traceability and justification that align with the software development lifecycle requirements. As mentioned earlier, we use JIRA for recording and tracking changes, issues, and features related to each configuration item.
Configuration Status Accounting (CSA) becomes an important component, recording and reporting the status of configuration items and any changes. By integrating the CSA system with the VCS and change management tools, status reporting can be automated, which provides real-time insights into the version history, change history, and current status of each item.
Regular configuration audits are necessary for maintaining compliance with identified requirements and for sticking to configuration management processes. These audits help in verifying the integrity and traceability of configuration items from development to deployment and maintenance.
A backup and recovery strategy is indispensable, emphasizing the regular, secure storage of backups, and the testing of recovery procedures to guarantee accurate and efficient restoration of configuration items.
Lastly, training the development team on configuration management practices and tool usage ensures everyone understands the importance of following the IEC 62304 standard. It is also vital to continually review and update the configuration management plan to stay compliant with evolving standards and project requirements.
Timing of configuration control
Configuration control is initiated at specific stages of the software development process to maintain the integrity and traceability of software items. Below are the key stages at which software items are placed under configuration control:
- Development phase: As soon as a software element, tool, or document is created or modified, it is immediately placed under configuration control to ensure that all changes and versions are tracked from the earliest stages of development.
- Testing phase: During testing, all items used or modified, including configuration files and databases, are placed under configuration control to manage versions and modifications made based on testing outcomes.
- Deployment phase: Prior to deployment, all software elements, tools, and documentation must be reviewed and verified under configuration control to ensure compliance with design specifications and regulatory requirements.
- Maintenance phase: Post-deployment, configuration control continues to manage updates, patches, and changes to the software and its associated elements to ensure ongoing compliance and performance integrity.
Identification and prevention of common software defects
To ensure the utmost safety and optimal performance of the device, we have conducted a comprehensive analysis aimed at identifying and preventing common software defects. These preventative measures and analyses have been thoroughly documented in the Risk Management Record.
Defects consideration in risk management
Every potential software defect, from compatibility issues to communication problems, has been rigorously identified, assessed for its impact on safety and performance, and addressed through preventive measures or mitigation strategies. This process ensures that no defect poses an unacceptable risk to users or patients.
Compatibility and integration
The device is designed to be accessed via API and integrated into the users' systems, ensuring a streamlined and efficient user experience. There are no restrictions related to combinations with/connections to other devices or equipment. Minor and non-restrictive compatibility issues may arise, particularly when it comes to the naming of key-value pairs of the API request and response. However, these defects do not contribute to an unacceptable risk, for the following reasons:
- FHIR compatibility: the device follows the Fast Healthcare Interoperability Resources (FHIR) standards for exchanging healthcare information electronically. However, we acknowledge that some customers may not adhere to these standards. To mitigate potential compatibility issues, we have developed comprehensive guidelines and provided detailed documentation to assist users in mapping their existing data structures to the FHIR standards, ensuring seamless integration and compatibility.
- Technical support and documentation: To prevent integration problems, the device is designed to be easily incorporated into any healthcare systems. We provide extensive support and documentation to guide users through the integration process, ensuring that the device functions harmoniously within the existing digital healthcare infrastructure.
- Key-value pair compatibility: this type of issue does not impact the use of the device; it simply makes the integration effort more tedious, because the integrator must deliberately match data instead of relying on a standard like FHIR (which the device already provides). A sketch of such a mapping follows this list.
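The following sketch illustrates that mapping concern with Pydantic (all field and alias names are hypothetical, not the device's actual schema): aliases let an integrator whose system uses different key names validate against the expected structure.

```python
from pydantic import BaseModel, Field

class AnalysisRequest(BaseModel):
    """Hypothetical request model: aliases let an integrator whose system
    uses different key names map onto the expected structure."""
    patient_reference: str = Field(alias="patientId")
    image_content: str = Field(alias="imageBase64")

    model_config = {"populate_by_name": True}

# An integrator's payload with its own key names still validates:
payload = {"patientId": "Patient/123", "imageBase64": "<base64 data>"}
request = AnalysisRequest.model_validate(payload)
```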
Communication problems between components
Given the API-centric nature of the device, effective communication between the different software components is straightforward. Indeed, this is precisely why we decided to commercialise the device as an API: it is the format with the fewest communication or integration problems, guaranteeing a universally accessible solution.
We have implemented robust communication protocols and error-handling mechanisms to ensure that any potential issues are promptly identified and resolved, maintaining the integrity and reliability of the device.
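As a sketch of what such error handling can look like in a FastAPI service (the exception type and response shape are illustrative, not the device's actual error contract):

```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

class ImageQualityError(Exception):
    """Illustrative domain error raised when an image cannot be analyzed."""

@app.exception_handler(ImageQualityError)
async def image_quality_handler(request: Request, exc: ImageQualityError):
    # Return a structured, machine-readable error instead of a bare 500,
    # so the integrating system can promptly identify and resolve the issue.
    return JSONResponse(
        status_code=422,
        content={"error": "image_quality", "detail": str(exc)},
    )
```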
Image capture device
Even though the image capture device is not part of the device, our risk assessment considers the risk that images taken by users may be of very poor quality. However, these defects do not contribute to an unacceptable risk, because we implemented a processor with the sole purpose of analysing image quality. It is also worth mentioning that the processors have been trained with images of ample characteristics, including images of poor quality, so as to ensure the reliability of the device. Altogether, this brings the risk to an acceptable level.
Software requirements analysis
The system requirements and the software requirements are one and the same, due to the software nature of our device. Likewise, the system development process is the software development process.
Definition and documentation of the software requirements from the system requirements
We manage and document the software requirements in specific records of the DHF of the device (T-012-001 Requirements).
Additionally, we analyze the GSPR that the 2017/745 MDR establishes before manufacturing the product and compile the results of the analysis in R-TF-008-001 GSPR, following the GP-008 Product requirements procedure.
Content of software requirements
According to UNE-EN 62304:2007/A1:2016, we consider and establish the following requirements, when applicable, which are included within the DHF:
- Functional and capacity requirements
- System inputs and outputs
- Interfaces between software-system and other systems
- Software-driven alarms, warnings and operator messages
- Security requirements
- User interface requirements implemented by the software
- Database and data definition requirements
- Installation requirements and acceptance of software provided in the place of operation and maintenance
- Requirements regarding methods of operation and maintenance
- Requirements related to the network/data aspects
- User maintenance requirements
- User documentation to be developed
- Regulatory requirements.
The following table shows, for each requirement established during the design of the device, the types of requirements to which it belongs.
Requirement ID | Functional and capacity | System inputs and outputs | Interfaces | Alarms, warnings and messages | Safety | Database and data definition | Installation | Operation and maintenance | Network | User maintenance | Regulatory | Architecture |
---|---|---|---|---|---|---|---|---|---|---|---|---|
REQ-001 | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-002 | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-003 | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-004 | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-005 | TRUE | TRUE | TRUE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE |
REQ-006 | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | TRUE | TRUE |
REQ-007 | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE |
REQ-008 | FALSE | TRUE | FALSE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-009 | FALSE | TRUE | FALSE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-010 | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-011 | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
REQ-012 | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE |
Integration of the risk control measures in the software requirements
In compliance with EN 62304, 5.2.3, our software risk management integrates risk control measures directly into the software requirements. This integration is detailed in the documentation listed below.
The software risk management process has been addressed following the standard UNE-EN ISO 14971:2020. We have in place the following documentation to ensure the risks are properly managed:
- The procedure GP-013 Risk management, to establish an efficient methodology.
- A software risk management plan, described in R-TF-013-001 Risk management plan, that outlines our approach for identifying, evaluating, and mitigating risks in software development.
- A software risk management record, documented in R-TF-013-002 Risk management record, where the implemented risk control measures are documented in the columns named “risk control” of said record.
- A software risk management report, registered in R-TF-013-003 Risk management report.
In the Risk Management record, alongside the description of the risks and the mechanisms for control, mitigation and minimization, there is a column that references the associated requirements from the DHF.
Requirements for risk control and evidence of risk control measures can be found in the DHF. In the requirements, following our template for requirements, there is a section called Related risks where we list the risks for which the requirement provides a control measure. Evidence of control measures can be found in the tests for the activities that fulfill the relevant requirement.
For instance, REQ_005_The user can send requests and get back the output of the device as a response in a secure, efficient and versatile manner references various risks. One of these risks is number 16, which says:
- An organisation that is not a licensed care provider gets access to the service.
In REQ_005 there is a section called Stringent Security Measures, which includes the text:
Authentication and Authorization: We employ robust authentication mechanisms such as OAuth or JWT to ensure that only authorized users can access the API. Role-based access control further restricts user privileges, enhancing data security.
And in Success metrics, one of the goals is that User cannot interact with the device without a client key, with the metric Device only accepts calls with a registered key.
This is verified in the test TEST_012_The user can send requests and get back the output of the device as a response in a secure, efficient and versatile manner. There, in the section Requirement verification, there is a checkbox named Users cannot interact with the device without an access key, which is checked, and screen captures are included as evidence of the control measure.
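To illustrate the kind of automated check that can back this verification, below is a minimal, hypothetical sketch of such a test; the endpoint URL, header name, and payload are placeholders, not the device's real interface.

```python
# Hypothetical sketch of the kind of check performed in TEST_012: calls
# without a registered client key must be rejected by the device API.
import requests

API_URL = "https://device.example.com/v1/analyze"  # placeholder URL

def test_request_without_client_key_is_rejected():
    response = requests.post(API_URL, json={"image": "<base64 payload>"})
    # Without credentials, the gateway should answer 401 Unauthorized.
    assert response.status_code == 401

def test_request_with_registered_key_is_accepted():
    headers = {"Authorization": "Bearer <registered client key>"}
    response = requests.post(API_URL, json={"image": "<base64 payload>"}, headers=headers)
    assert response.status_code == 200
```

A check along these lines complements the screen captures by making the control measure continuously verifiable in automated test runs.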
This direct linking of risks to specific software requirements and their verification demonstrates our commitment to ensuring that risk control measures are effectively integrated into our software development process.
Re-evaluation of the medical device risk analysis
In accordance with GP-013 Risk Management and ISO 14971, 5.1, our software risk analysis process is documented in R-TF-013-002 Risk Management Record. The process actively considers any changes in software design, new hazard identifications, significant modifications in clinical use, or feedback from post-market surveillance. The re-evaluation of risks includes:
- Annual review to reassess existing risks and identify any new hazards.
- Immediate analysis upon identification of a new hazard or when changes in software usage or environment are recognized.
- Reassessment when modifications in risk control measures are required.
- Review and update with each software version release, including minor updates, to ensure all changes are reflected in risk analysis.
Each re-evaluation results in an updated R-TF-013-002 Risk Management Record, capturing the current state of risks and control measures. The outcomes are summarized in R-TF-013-003 Risk Management Report, where the effectiveness of risk control measures and the benefit-risk balance are assessed. This process ensures continuous monitoring and management of software-related risks throughout the product lifecycle.
Update of requirements
Relevant information (coming from the preclinical and clinical evaluation, post-market surveillance, post-market clinical follow-up, customer claims, customer complaints and/or customer feedback, among others) is periodically revised to confirm the software benefit/risk ratio as well as to monitor known hazards, trends or side-effects, and to identify new ones that may require an update of the risk management process and records.
When new hazards are identified or when known hazards suffer modifications, the software requirements are re-evaluated and updated. As new software requirements appear, a new software version and/or subversion is developed together with the revision of the corresponding technical documentation and affected annexes (Risk management, Design History File, Design verification and validation, Clinical evaluation, Instructions For Use and Labelling), following the GP-012 Design, redesign and development procedure.
Verification of the software requirements
In compliance with EN 62304, 5.2.6, our process for verifying software requirements, as outlined in GP-012, ensures each requirement is non-contradictory, unambiguous, testable, and effectively implements risk control measures. The verification process includes:
- Clarity and consistency review: each requirement is rigorously reviewed by JD-003 to ensure clarity and prevent contradictions, with their signature as evidence, as explained in our GP-012.
- Risk control integration: requirements are cross-referenced with risk control measures from R-TF-013-002 Risk Management Record to confirm that each risk is adequately mitigated.
- Testability and traceability: all requirements are drafted to allow the establishment of clear test criteria, ensuring traceability throughout the development process.
- Verification records: Detailed verification records for each requirement, including test plans, results, and analyses, are maintained in the DHF, demonstrating compliance with each requirement.
Minimum system and hardware requirements
Establishing the minimum environmental requirements for the operation of the medical device is essential to guarantee its compatibility, performance, stability, and adherence to regulatory standards. For this section we distinguish the minimum software and hardware requirements of the system across three stages: training, testing, and deployment (inference).
Training and testing requirements
To determine the minimum requirements for the training and testing stage, especially considering there are large computer vision models running on a GPU, several factors need to be taken into account.
Here's a breakdown, considering our specific requirements:
Software
- Operating System:
- Minimum: Ubuntu 22.04 LTS.
- Reasoning: Linux-based systems like Ubuntu are widely supported in the machine learning community and offer good compatibility with the PyTorch framework. This version in particular offers long-term support and is commonly used in deep learning environments.
- Docker:
- Latest stable version.
- Reasoning: To handle the deployment of multiple models and manage dependencies efficiently.
- Drivers:
- NVIDIA drivers for the RTX 3060, for instance.
- CUDA and cuDNN:
- A compatible version of CUDA Toolkit that matches the GPU drivers and PyTorch version. For instance, CUDA 11.2 could be a starting point, but this depends on the specific GPU model and driver compatibility.
- Reasoning: Necessary for GPU acceleration in PyTorch.
- Python:
- A Python environment (preferably CPython 3.8 or above) to run the models. This is critical for compatibility with PyTorch 2.x.
- Libraries and Dependencies:
- Necessary libraries for PyTorch (including PyTorch itself), along with any other dependencies specific to our models (e.g., for image processing). A minimal sanity check of this software stack is sketched after this list.
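As a quick verification that the stack above is correctly installed before launching a training run, a minimal sketch such as the following could be used; the expected values in the comments reflect the minimums stated above.

```python
# Minimal environment sanity check for the training/testing stack.
# Prints Python, PyTorch, CUDA and cuDNN versions, and the GPUs
# visible to PyTorch.
import sys
import torch

print(f"Python:  {sys.version.split()[0]}")            # expect CPython 3.8+
print(f"PyTorch: {torch.__version__}")                 # expect 2.x
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA runtime:   {torch.version.cuda}")
print(f"cuDNN:          {torch.backends.cudnn.version()}")
print(f"GPUs visible:   {torch.cuda.device_count()}")  # expect 4 for training
for i in range(torch.cuda.device_count()):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
```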
Hardware
- GPU:
- Minimum: 4 x NVIDIA GeForce RTX 3060 (or equivalent).
- Reasoning: This GPU has sufficient compute capability and VRAM to handle multiple models like Vision Transformer (ViT), EfficientNet, U-Net, and YOLO for training purposes. It's a reasonable starting point for models implemented in PyTorch 2.x. At least four are needed so that models like ViT-Base can be trained.
- CPU:
- Minimum: AMD Ryzen Threadripper PRO 5995WX 64-Cores (or equivalent).
- Reasoning: 64 cores and 128 threads provide enough processing power for handling the CPU-bound tasks of the system, such as loading datasets into memory and also supporting the GPU.
- RAM:
- Minimum: 252 GB.
- Reasoning: Training intensive computer vision models on large datasets requires substantial RAM. This capacity should be sufficient for training ViT architectures at reasonable input sizes of 224 px and 512 px.
- Storage:
- Minimum: 2 TB SSD.
- Reasoning: Solid-state drives (SSDs) are recommended for faster read/write speeds, which is crucial for loading data quickly. 2 TB should be sufficient to store the datasets and additional data.
- Network:
- As the device requires internet access to download datasets from AWS and other auxiliary resources from the cloud, a stable and fast internet connection is necessary.
- Power Supply:
- Ensure a stable and adequate power supply, especially when using high-end GPUs.
Inference requirements
In the inference stage, the minimum requirements are less demanding than in the previous stages, since in this case we do not operate with huge batches of images. However, the models' workload is not completely known in advance. It depends on the user traffic using the external device interfaces. Therefore, we must be prepared at the resource level to accommodate a variable demand of users and requests.
To determine the minimum requirements for the inference/deployment stage, especially considering there are computer vision models running on a GPU, several factors need to be taken into account. Here's a breakdown considering our specific requirements:
Hardware
- GPU:
- Minimum: NVIDIA GeForce RTX 3060 (or equivalent).
- Reasoning: This GPU has sufficient compute capability and VRAM to handle multiple models like Vision Transformer (ViT), EfficientNet, U-Net, and YOLO without significant latency for inference purposes. It's a reasonable starting point for models implemented in PyTorch 2.x.
- CPU:
- Minimum: Intel Core i7-9700K (or equivalent).
- Reasoning: 8 cores and 8 threads provide enough processing power for handling the CPU-bound tasks of the system, such as loading models into memory, processing I/O operations and data schema validation, but also supporting the GPU.
- RAM:
- Minimum: 32 GB.
- Reasoning: While the data size might not be a constraint during inference, having multiple models loaded simultaneously requires substantial RAM. This capacity should be sufficient for loading and running the models without causing memory bottlenecks.
- Storage:
- Minimum: 1 TB SSD.
- Reasoning: Solid-state drives (SSDs) are recommended for faster read/write speeds, which is crucial for loading models quickly. 1 TB should be sufficient to store the models, operating system, software dependencies, and accommodate the cached Docker layers.
- Network:
- As the device requires internet access to download AI models and other auxiliary resources from the cloud, a stable and fast internet connection is necessary.
- Power Supply:
- Ensure a stable and adequate power supply, especially when using high-end GPUs.
Software
- Operating System:
- Minimum: Ubuntu 22.04 LTS.
- Reasoning: Linux-based systems like Ubuntu are widely supported in the machine learning community and offer good compatibility with the PyTorch framework. This version in particular offers long-term support and is commonly used in deep learning environments.
- Docker:
- Latest stable version.
- Reasoning: To handle the deployment of multiple models and manage dependencies efficiently.
- Drivers:
- NVIDIA drivers for the RTX 3060, for instance.
- CUDA and cuDNN:
- A compatible version of CUDA Toolkit that matches the GPU drivers and PyTorch version. For instance, CUDA 11.2 could be a starting point, but this depends on the specific GPU model and driver compatibility.
- Reasoning: Necessary for GPU acceleration in PyTorch.
- Python:
- A Python environment (preferably CPython 3.8 or above) to run the models. This is critical for compatibility with PyTorch 2.x.
- Libraries and Dependencies:
- Necessary libraries for PyTorch (including PyTorch itself), along with any other dependencies specific to our models (e.g., for image processing).
Performance considerations
The listed hardware specifications are the minimum required to operate the system and load all models into memory for inference. However, it's important to note that while these specifications will allow the system to function, they may not guarantee optimal performance as per agreed metrics. For instance, using a higher-end GPU like the NVIDIA RTX 3080 or adding more RAM could significantly improve inference speed and throughput, meeting or exceeding performance expectations.
Final note
The actual requirements may vary depending on the exact size and complexity of each individual model. It is always recommended to continually benchmark the system under realistic conditions to ensure it meets both functional and performance requirements.
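As an illustration of such benchmarking, the sketch below measures mean inference latency for a PyTorch model; the model, input size, and run counts are placeholders to be replaced by each processor's real configuration.

```python
# Illustrative latency benchmark for a PyTorch vision model. Warm-up
# iterations stabilise GPU clocks and caches before timing begins.
import time
import torch

def mean_latency_ms(model, input_size=(1, 3, 224, 224), warmup=10, runs=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.eval().to(device)
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # flush queued GPU work before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0  # mean latency per inference, in ms

# Example with a torchvision model as a stand-in for a real processor:
# from torchvision.models import resnet18
# print(f"{mean_latency_ms(resnet18()):.1f} ms per inference")
```

Tracking such figures across hardware tiers makes it possible to confirm that the agreed performance metrics are met before deployment.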
Cloud environment
When selecting a cloud provider to host a medical device software subject to strict regulations, it's critical to ensure that the chosen platform meets stringent requirements for reliability, security, compliance, and performance. Below are the key considerations and requirements that we have identified as being of crucial importance for a cloud provider in this context:
Regulatory Compliance
We need to be sure that the cloud provider complies with relevant healthcare regulations and frameworks such as HIPAA, GDPR, FDA requirements and other regional standards for data protection and privacy. Additionally, the cloud provider must maintain detailed audit trails for all system activities, facilitating traceability for compliance audits.
Service Level Agreements (SLAs)
Another essential requirement is being able to clearly define and review SLAs with the cloud provider, specifying uptime guarantees, response times for issue resolution, and penalties for service disruptions in order to ensure high availability, reliability and minimal downtime.
Legal and Contractual Agreements
It is critical to clearly define data ownership and establish contractual agreements regarding data handling and protection. Also, the cloud provider should include provisions for data migration or termination of services, ensuring a smooth exit strategy.
Security and Privacy
The cloud provider must incorporate strong end-to-end encryption mechanisms for data in transit and at rest, in compliance with industry regulations and to protect sensitive patient data. Also, they must enforce strict access controls with identity management systems to limit access to authorized personnel only, and to restrict access to sensitive resources (Role-Based Access Control), adhering to the principle of least privilege.
Data Backup and Disaster Recovery
The cloud provider should offer automated and regular backup options to prevent data loss, as well as a well-defined and efficient disaster recovery plan, including data replication, to quickly restore services and minimize downtime in case of unexpected events.
The cloud infrastructure should offer redundancy across multiple data centers and regions to mitigate the impact of hardware failures or other disasters (e.g., outages), ensuring business continuity.
Network and Infrastructure Security
The vendor's cloud platform shall implement robust network security measures, including firewalls, intrusion detection systems and network isolation, to control and monitor traffic between different device components and to protect against unauthorized access and potential cyber threats. Likewise, it should have measures in place to mitigate Distributed Denial of Service (DDoS) attacks and safeguard against service interruptions.
Data Portability and Interoperability
The cloud provider is required to support industry-standard data formats to facilitate data portability and interoperability. We must also verify the availability of well-documented APIs and integration capabilities for seamless interaction with other systems.
Monitoring and Logging
To gain insight into what is happening in the contracted services, the platform must implement real-time monitoring of system performance, resource utilization, application logs, suspicious activities, and other potential security threats. An alerting system is also necessary to notify the operations team of any anomalies or critical events, ensuring a rapid response to potential issues.
Additionally, it would be very desirable for the platform to implement centralized logging to collect, analyze, and store logs for troubleshooting, auditing, and compliance purposes.
Latency
We must consider the geographic distribution of the provider's data centers to minimize latency and ensure optimal performance for users across different regions.
Documentation and Support
The cloud provider should maintain and provide access to its platform documentation for reference and troubleshooting. Also, they must guarantee the availability of 24/7 technical support to address any issues promptly.
Cost Management
It is important to choose a cloud provider with transparent pricing models and cost management tools to monitor and control expenses. We must understand and plan for any potential cost fluctuations, and ensure there are no hidden fees.
Vendor Stability and Reputation
We shall consider the cloud provider's track record and experience in hosting critical applications, especially in the healthcare domain. It will also be useful to seek references and case studies from other clients with similar regulatory requirements.
So far we have outlined and described the general requirements that a cloud provider must satisfy to accommodate any regulated medical device. Additionally, in order for our particular medical device to operate under optimal conditions, the cloud infrastructure must also provide the following services and resources:
- Scalable Computing Capacity: The cloud provider should offer on-demand scalability to accommodate varying workloads, ensuring the ability to optimally handle increased demand during peak usage periods (keeping the system's responsiveness). It is also important to select and create instances with sufficient processing power and memory to support the medical device's computational requirements. The virtual machines' OS has to support the installation of Docker and Docker Compose.
- Load Balancer: The platform must have dynamic load balancing service(s) to distribute incoming traffic evenly across multiple instances, optimizing resource utilization and maintaining high availability. This way the medical device does not suffer performance degradation. It should also support regular health checks to automatically identify and redirect traffic away from unhealthy instances, ensuring continuous and uninterrupted service.
- GPUs for AI Inference: The vendor should provision dedicated, CUDA-compatible, 24/7 available, high-performance GPU instances to support efficient inference for the AI-based processors of the medical device. We will also evaluate the ability to scale GPU resources based on the demand to allow seamless adaptation to changing workloads.
- Infrastructure as Code: If the cloud provider does not offer its own Infrastructure as Code (IaC) service, it must at least ensure compatibility with popular IaC frameworks (e.g., Terraform, AWS CloudFormation) to enable automated and reproducible template-based provisioning of infrastructure components. Also, the platform should encourage the use of immutable infrastructure practices, where updates or changes to infrastructure result in the replacement of the entire resource rather than modifying existing resources; this enhances consistency and reduces the risk of configuration drift. Lastly, support for version control of IaC scripts and configuration files, to track changes systematically and enable rollbacks and auditability, would be greatly appreciated.
In addition to all the requirements that we have extensively discussed, the solution offered by the cloud provider must also adhere to the GP-010 Purchases and suppliers evaluation guidelines, and evidence that the provider complies with its specifications is included in that procedure.
Software architectural design
Transformation of the software requirements into an architecture
The development of the device requires building a microservices-based architecture with an entry point that allows the user to access the device through a REST API. This is the optimal way of fulfilling the requirements. The requirements speak of users sending and receiving data, and for that purpose, the most secure and least cumbersome design is a RESTful API. This architecture is widely used in today's technological landscape and is a very common solution for devices that are meant to be integrated inside other systems, such as Electronic Health Records (EHRs).
Here's a list briefly describing each component of the software architecture:
- Web API Gateway: This unit acts as the main interface for users, handling connections through the HTTPS protocol. It is designed to manage both input and output of JSON files, where images are included not as files but encoded in Base64 within the text string. This component ensures access control and HTTP security for the medical device, making it the only directly user-interactable part.
- Processor: Operating within a private network, this component communicates with others via a dedicated API. It primarily focuses on machine learning models, especially in the area of computer vision. However, it can also handle classic computational methods.
- Orchestrator: This component serves as the brain of the operation, defining the interaction logic between other components to ensure smooth and efficient processing. It can create different workflows (pipelines) depending on the logic involved in calling the processors. The orchestrator should be capable of handling incoming requests from the API gateway, forwarding them to the relevant processor, and receiving responses.
- Report Builder: This unit is tasked with compiling API responses from various processors. It consolidates these responses into a uniform JSON file, ensuring consistency and compliance in the reporting format.
In summary, the architecture is designed so that the computation happens in the processors. The processors perform the medical (and in some cases non-medical) features of the device and input and output clinical data. These processors are accessible via the Web API Gateway. Then there is the Report Builder, which conjoins and formats the outputs of the different processors, combining them into a single document-like structure. Everything is coordinated by the Orchestrator, the component that contains all the logic of how the different components interact with each other. Each microservice in the architecture is configured using Docker, to ensure consistency, scalability and stability across environments.
We adhere closely to the latest clinical literature, best practices, and standard care protocols. In quantifying the intensity, frequency, and extent of clinical signs, medicine relies on well-established, widely accepted scoring systems. Our processors are meticulously designed to align with these standards, ensuring precise and consistent integration with these recognized frameworks. Certain visual signs, such as erythema, are included in multiple scoring systems. To ensure consistency, we use the same model for each clinical sign across all processors. This approach eliminates the need to average or merge data, as the outputs for erythema and dryness remain identical across all processors that evaluate these signs.
The following diagram illustrates the basic architecture:
In the described software architecture, the report builder and orchestrator components are integral parts of a single microservice, unified under the report-builder code repository. This architectural decision stems from the intrinsic nature of microservices communicating via the HTTP protocol, which inherently operates in a stateless manner. This means that during their interaction, the orchestrator cannot transmit stateful information generated by the processors to the report builder with the expectation that such state is preserved across subsequent requests. To circumvent this limitation and ensure statefulness in the construction of reports, both components are consolidated within the same microservice. This arrangement allows for the maintenance of state essential for the report-building process, overcoming the stateless constraints imposed by the HTTP-based communication between microservices.
List of processors
Listed below are all the services that comprise the computationally intensive engine of the device, along with a brief description explaining each of them. For this purpose, we have classified the processors into clinical and non-clinical.
Non-clinical processors
This category includes processors that do not generate clinical data; in other words, they do not output preliminary results to be interpreted by a physician or other medical specialist. Currently, all processors of this type are AI-based and perform validation and verification tasks on the input images.
- Quality validator: Determines whether an image is of adequate quality or not. It simply assesses whether an image is of sufficient quality in general terms, regardless of whether it belongs to the dermatology domain or not.
- Domain validator: Determines if the image belongs to the domain of dermatology, and if it does, whether it is clinical or dermatoscopic.
Clinical processors
When processors generate clinical data to assist healthcare practitioners, then they fall into this category.
- ICD multiclass classifier: Generates an interpretative distribution of a list of ICD-11 classes based on the visible signs of an image.
- ICD binary classifier: Assesses whether there is an ICD class in the image or not.
- Binary referrer: Assesses whether or not the patient requires specialist medical attention.
- ASCORAD: This is an automatic version of the SCORAD, based on computer vision models that quantify clinical signs.
- ASALT: This is a clinical tool that uses a computer vision segmentation model to automatically quantify the extent of scalp-hair loss.
- AIHS4: This is an automatic version of the IHS4, based on computer vision models to quantify clinical signs.
- APASI: This is the automatic version of the PASI, based on computer vision models that quantify clinical signs.
- ALADIN: This is a scoring system designed to quantify the number of inflammatory lesions.
- AUAS: This is an automatic equivalent of the objective part of the UAS that applies computer vision models to count hives automatically.
- APULSI: This is an automated tool designed for quantifying the extent of visual signs such as maceration and tissue damage.
- AGPPGA: This is an automatic version of the GPPGA, based on computer vision models that quantify clinical signs.
- NSIL: This is an automatic scoring system that quantifies common visual signs such as inflammatory lesions and dryness.
Operation workflow
Currently, our device operates on a single, well-defined workflow. As default configuration, the processors are executed sequentially in a pre-established order. The journey of an image through the system begins when a user submits it to the receiving HTTP API. The first step involves passing the image through the quality and domain validation models. This is a critical juncture in the processing pipeline, as it determines the suitability of the image for further analysis. If an image fails to meet the standards set by our quality model, it is immediately rejected, preventing any further processing. This stringent check ensures that only images of adequate quality proceed.
Suppose an image successfully passes the quality check. In that case, it advances to the next stage – the dermatological domain validation. This step is pivotal in ascertaining whether the image falls within the specific domain our device is designed to analyze. Should an image fail this domain verification, the workflow is halted. This decision ensures that our device focuses its analytical capabilities on relevant images, thereby enhancing the accuracy and relevance of our diagnostic outputs.
After an image clears these initial validation stages, it enters the core of the processing capabilities. Here, we utilize an ICD multiclass classifier, a series of scoring systems, and other AI models, each contributing to the comprehensive diagnostic support our device offers. Notably, the performance of our processors' API is so efficient that the processing speed is imperceptible to the user. This efficiency has led us to maintain a sequential processing approach rather than parallel execution. While parallel processing is technically feasible – given that the results of our scoring systems and AI models do not depend on each other – we have not found it necessary to implement this approach. Our current sequential processing method ensures a smooth, uninterrupted flow, allowing for consistent, reliable analysis without sacrificing speed or user experience.
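The sketch below condenses this workflow into schematic Python; the stub functions stand in for the real processor microservices, which the Orchestrator invokes over HTTP.

```python
# Self-contained sketch of the sequential workflow with early-exit gates.
# Each stub stands in for a processor microservice call.
def quality_validator(image_b64):          # stub: quality validation model
    return {"acceptable": True}

def domain_validator(image_b64):           # stub: dermatology domain model
    return {"is_dermatology": True, "modality": "clinical"}

def icd_multiclass_classifier(image_b64):  # stub: ICD-11 classifier
    return {"distribution": {}}

def run_pipeline(image_b64: str) -> dict:
    # Gate 1: reject poor-quality images before any further processing.
    if not quality_validator(image_b64)["acceptable"]:
        return {"error": "image quality is insufficient"}
    # Gate 2: halt if the image is outside the dermatology domain.
    if not domain_validator(image_b64)["is_dermatology"]:
        return {"error": "image is outside the dermatology domain"}
    # Core analysis: processors run sequentially in a pre-established order.
    return {"icd": icd_multiclass_classifier(image_b64)}
```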
Software classification of software items
In a previous section of this document, the device is classified according to IEC 62304:2007/A1:2016.
Following the same logic presented in that section, we classify the software items according to a risk-based approach (by evaluating the severity score of the risks associated with the software items) as follows:
Class B
- Clinical processors
- Non-clinical processors (domain validator, quality validator)
- Web API Gateway
- Orchestrator
- Report builder
It is worth noting that the clinical processors are the ones that perform the clinical tasks of the device.
Development of an architecture for the interfaces of software items
As required by IEC 62304, Section 5.3.2, an architecture must be developed for the interfaces between the software elements and components external to them, and between the software elements themselves. However, because the design of the device's software architecture is based on microservices, it has not been necessary to develop a dedicated architecture for the interfaces of the software elements.
Communication between software elements is achieved through the HTTP protocol. Microservices use HTTP requests (GET and POST methods) for inter-service communication, leveraging a private network to ensure isolation and minimize external security threats. This approach facilitates interoperability and ease of integration among the software elements, allowing for a stateless architecture where services can be scaled independently according to demand. Although operating within a private network, services are designed with future scalability in mind, potentially incorporating security mechanisms such as HTTPS, authentication, and authorization if the network context evolves.
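By way of illustration, inter-service calls reduce to plain HTTP requests. The sketch below is a hypothetical helper using the requests library; the service name, port, and route are placeholders.

```python
# Hypothetical helper for inter-service communication over the private
# network: POST a JSON payload to a processor and surface errors clearly.
import requests

def call_processor(service: str, payload: dict, timeout: float = 30.0) -> dict:
    url = f"http://{service}:8000/predict"  # resolved on the private network
    try:
        response = requests.post(url, json=payload, timeout=timeout)
        response.raise_for_status()  # turn 4xx/5xx responses into exceptions
        return response.json()
    except requests.RequestException as exc:
        # Propagate a meaningful error instead of failing silently, so the
        # gateway can return informative error data to the user.
        raise RuntimeError(f"Processor '{service}' call failed: {exc}") from exc
```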
On the other hand, external communications only occur with AWS data storage services, specifically S3 and DocumentDB. These communications are managed through dedicated drivers (such as PyMongo) and SDKs provided by AWS.
Again, in this case we do not need to develop an architecture for the interfaces of the software elements; instead, we delegate these tasks to SOUP elements.
Functional and performance requirements of SOUP item
At this level of detail of the system software architecture, no software element is identified as SOUP. All software elements outlined have been developed internally following the guidelines for the IEC 62304 life cycle model.
System hardware and software required by SOUP item
At this level of detail of the system software architecture, no software element is identified as SOUP. All software elements outlined have been developed internally following the guidelines for the IEC 62304 life cycle model.
Identification of the segregation necessary for the risk control
Due to its classification, this chapter does not apply to the medical device software as it only applies to Class C.
Verification of the software architecture
In compliance with IEC 62304, section 5.3.6, we establish and document traceability between the defined software requirements and the architectural components of the medical device. This process is essential for verifying that the architecture fulfills the software requirements, ensuring clarity, reliability, and maintainability. Below, we detail how each requirement is mapped to specific software items within the system's architecture, along with appropriate justification.
- Web API Gateway
- REQ_005: The Web API Gateway is the main interface for users, managing both input (requests) and output (responses). It's directly responsible for secure communication through HTTPS and handling JSON inputs and outputs, including images encoded in Base64.
- REQ_006: Implementing the FHIR healthcare interoperability standard for data exchange aligns with the responsibilities of the Web API Gateway, ensuring compliance with healthcare data standards.
- REQ_007: The API's capability to return meaningful error information when processes fail directly addresses this requirement, ensuring that users are informed about the nature of issues encountered.
- REQ_011: This requirement involves the user specifying the body site of the skin structure, which is facilitated by the Web API Gateway as it handles user inputs. The API is configured to accept and parse user requests, including specific parameters like the body site, and pass this information on to the system for processing.
- Processor
- REQ_001, REQ_002, REQ_003: These requirements pertain to analyzing images to extract quantitative data on clinical signs, directly aligned with the Processor's role in image analysis using AI-based computer vision methods.
- REQ_004: The interpretation of data to provide a distribution representation of possible ICD categories from images falls within the Processor's capabilities in image processing and classification.
- REQ_008, REQ_009, REQ_010: The Processor's computer vision capabilities make it suitable for assessing whether images represent skin structure, detecting the modality of the image (clinical or dermatoscopic), and determining their quality, addressing these requirements directly.
- Orchestrator
- REQ_005: While the Web API Gateway provides the interface for secure communication, the Orchestrator ensures that the processing workflows are managed in an efficient and versatile manner. It does this by coordinating the logic and interaction between components, making the overall process adaptable to different types of requests and providing high performance and flexibility in processing.
- It also plays a critical role in ensuring that the processing flows required to meet REQ_001 to REQ_011 are efficiently managed, though indirectly. It does not implement these requirements directly but enables their execution by coordinating the software components.
- Report builder
- REQ_001, REQ_002, REQ_003, REQ_004: The Report builder compiles and formats the quantitative data and interpretations generated by the Processor into a uniform document (typically JSON) for presentation to the user, indirectly supporting these requirements by providing a consistent and understandable report.
- REQ_008, REQ_009, REQ_010: By compiling API responses, the Report Builder also communicates if the image does not represent a skin structure (REQ_008), if the quality of the image is insufficient (REQ_009), and identifies the image modality (REQ_010). It implements these requirements by including specific notifications and/or metrics in the final report based on the analysis performed by the Processor, so that the user is informed about these aspects.
Additionally, we have conducted a rigorous evaluation of all SOUP components selected for the architectural design of our medical device (at this design level they do not appear yet, but they will be used in lower-level features). This evaluation process involved verifying the compatibility, functionality, and reliability of each SOUP element within the software architecture. We have ensured that the integration of these SOUP components does not compromise the integrity, safety, or performance of the device. Documentation of this thorough evaluation process has been prepared and is available in the specific SOUP records of the DHF, demonstrating that each SOUP component is adequately supported by our software architecture.
Software detailed design
Subdivision of the software architecture into software units
Recognizing the critical role that a well-defined software architecture plays in the overall integrity and functionality of medical devices, we adopt a systematic and iterative approach to refine the structure and organization of our software components.
The objective of enhancing the software architecture encompasses several key facets:
- Compliance with specified requirements: Ensuring that the architecture of the software units aligns with predefined functional and non-functional requirements. This alignment is crucial for the software to perform its intended medical functions accurately and reliably.
- Efficiency in testing and maintenance: By structuring the software in a coherent and logical manner, we facilitate more efficient testing and maintenance activities. This organization allows for the identification and isolation of defects more quickly and ensures that enhancements can be made with minimal impact on existing functionality.
- Management of complexity: As software systems become increasingly complex, a robust architecture is essential to manage this complexity. This involves defining clear interfaces between software units, adopting standardized design patterns, and ensuring that the system's modular structure supports both current and future needs.
- Enhancement of performance: Optimizing the software architecture for performance to ensure that the device operates efficiently under various conditions. This includes careful resource management, minimizing latency, and optimizing algorithms and data structures.
- Scalability and flexibility: Designing the software architecture to accommodate future expansion and modification. The architecture should support the addition of new features, adaptation to changing regulatory requirements, and scaling to meet varying loads without compromising system stability or performance.
The following diagram splits each high-level software item into software units that we have identified as indivisible:
Below is a detailed breakdown of the software units within the device architecture. We will go item by item, describing its parts. As for the Web API Gateway, we have broken it down into the following units:
- Authenticator: For the purpose of establishing security of user sessions, it generates a JSON Web Token (JWT) for users upon successful registration or login, encapsulating their session information in a secure, compact manner. Additionally, it is responsible for the critical function of verifying the validity of JWTs supplied by users in subsequent requests, thereby enforcing authentication and preventing unauthorized access.
- Password Manager: This component plays a significant role in safeguarding user credentials within the medical device software ecosystem. Besides creating secure, variable-length passwords, it employs robust cryptographic algorithms to transform plain-text passwords into hashed versions before storage, ensuring that passwords are not stored in a form that could be easily compromised. Moreover, it provides functionality to compare a submitted password with its stored hashed counterpart, facilitating secure authentication while keeping actual passwords obscured.
- API Call Recorder: Operating within the framework of the medical device, this software unit is designed to meticulously log usage data. Upon receiving specific information from a user, it utilizes a predefined data schema to accurately construct a usage record. This record is then persistently stored in a document-based database, allowing for semi-structured data accumulation and retrieval. This component provides valuable insights into device usage patterns.
- Database Operator: It is tasked with the registration of API-related events, capturing user interactions with the system in real-time and ensuring a seamless audit trail for compliance and operational analysis. Furthermore, it facilitates comprehensive CRUD operations that allow the effective management of authentication and authorization mechanisms, including user information lookup and verification processes.
- Record Model: The data model that meticulously defines the schema and enforces the validation logic of the information stored in the database. This component is activated whenever a user interacts with an API endpoint, ensuring that all data captured and stored adhere to predefined standards and formats, thereby maintaining the reliability and integrity of the device's data management system.
- Configuration Model: The data model that defines the schema and implements validation logic for the Web API Gateway configuration, facilitating seamless integration and interaction between various services. A minimal sketch of the Authenticator and Password Manager units follows this list.
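The following is a minimal, hypothetical sketch of the Authenticator and Password Manager responsibilities, using PyJWT and the standard library; the real units may differ in algorithms, expiry policy, and storage.

```python
# Hypothetical sketch: password hashing/verification and JWT issuance.
import datetime
import hashlib
import hmac
import os
from typing import Optional, Tuple

import jwt  # PyJWT

SECRET = os.environ.get("JWT_SECRET", "change-me")  # placeholder secret

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # store both; the plain-text password is never stored

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)  # constant-time comparison

def issue_token(user_id: str) -> str:
    claims = {"sub": user_id,
              "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.InvalidTokenError for tampered or expired tokens, which
    # the gateway translates into a 401 response.
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```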
The Report Builder has been subdivided into the following elements:
- Builder: It transforms the raw outputs from various processors into a structured report that adheres to a specified presentation format. This component operates by ingesting the output schemas (or tailored versions thereof) produced by the processors, selecting the required fields to compile a comprehensive report. The Builder supports multiple report types, including but not limited to the Fast Healthcare Interoperability Resources (FHIR) standard, thereby ensuring versatility and compliance with prevailing healthcare data exchange protocols. A simplified sketch of this mapping follows this list.
- Data Model Extender: It enhances operational capabilities of various data models housed in the centralized data model repository. By integrating additional functionalities into these schemas, the Extender facilitates more sophisticated data handling and processing capabilities.
- Configuration Model: It enforces a rigorous validation logic and data schema for the Report Builder's configuration settings. This software unit ensures that all configuration inputs adhere to predefined standards and formats, guaranteeing the integrity and reliability of the report building process.
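To make the Builder's role concrete, here is a simplified, hypothetical sketch that maps a scoring-system output onto a minimal FHIR-style Observation; the field selection is illustrative, not the device's actual report schema.

```python
# Illustrative mapping of a processor output onto a minimal FHIR-style
# Observation resource; real reports carry many more fields.
import json

def build_fhir_observation(processor_output: dict) -> dict:
    return {
        "resourceType": "Observation",
        "status": "preliminary",  # results are for clinician interpretation
        "code": {"text": processor_output["score_name"]},       # e.g. "ASCORAD"
        "valueQuantity": {"value": processor_output["score"]},  # numeric severity
    }

# Usage with a hypothetical ASCORAD output:
report = build_fhir_observation({"score_name": "ASCORAD", "score": 42.5})
print(json.dumps(report, indent=2))
```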
The software elements comprising the Orchestrator component are as follows:
- Specialist Manager: An intermediary communication layer within the architecture, responsible for the secure and efficient transmission of validated requests to designated processors via HTTP, as dictated by the corresponding director class. This element not only ensures the accurate delivery of requests but also performs minimal manipulation of the responses received from processors. This manipulation aims to enhance compatibility and facilitate seamless integration with other software units, optimizing the device's overall interoperability.
- Aggregator: It performs complex aggregation operations on a diverse set of data, encompassing both clinical and non-clinical results derived from multiple images submitted by users. This component utilizes analytical techniques to synthesize data, focusing on key metrics such as probability distributions generated by the ICD multiclass classifier, or preliminary findings procured during the post-processing stage, just to provide a few examples. Through the synthesis of comprehensive diagnostic insights, it contributes to improve the device's decision-support efficacy.
- Director: It coordinates the sequence of operations encompassing data mapping, the invocation of specialized processors, and the compilation of report sections. It supports various execution strategies, including sequential and complete processing of all inputs, as well as conditional processing based on predefined criteria. Thanks to its adaptive workflow management capabilities, the Director can be tailored to the specific requirements of each diagnostic scenario.
- Data Mapper: This component is designed to reconcile discrepancies between the keys present in incoming data structures (initially from the HTTP receiver and poised to accommodate alternative data sources) and the keys expected by the processors' input data schemas. It facilitates the seamless integration of disparate data sources, ensuring the consistency and reliability of data processing across the device's operational spectrum.
Processors typically have a trained AI model inside them, specifically a computer vision model. Depending on the type of task carried out by the computer vision models, the software elements inside each processor can be categorized into segmenters, classifiers and detectors.
- Segmenter: This type of model divides an image into segments, making it possible to identify and outline specific parts of the image with precision. For example, in delimiting a skin lesion, a segmentation model would accurately outline the boundary of the lesion, differentiating it from healthy skin. This is crucial for medical diagnostics, where the exact size and shape of a lesion could indicate the severity or type of condition.
- Classifier: Classification models categorize entire images or specific features within images into predefined classes. When quantifying the intensity of a visible skin sign, a classification model could examine an image of the skin and classify the severity of the sign (such as redness or swelling) into categories such as mild, moderate, or severe. This aids in assessing the condition's severity and monitoring its progression or response to treatment.
- Detector: Detection models identify and locate objects within an image, often drawing bounding boxes around them. For example, in the context of counting acne lesions, a detection model would scan an image of the skin and pinpoint each acne lesion. This is useful for assessing the extent of acne on the skin and could help in evaluating the effectiveness of acne treatments over time.
Even though there are fundamental differences between them, the various types of processors share the following software units:
- Backbone: It is essentially the heart of the software's processing system, incorporating deep learning capabilities to handle and compute data, generating critical results. It operates as the backend, executing the complex algorithms and processes that underpin the system's functionality. This core component is fundamental in producing the insights and analyses that are vital for the system's outputs and final reports.
- Backbone Loader: It initializes and sets up the backbone model within the software's processing environment, ensuring it is properly loaded into memory. It handles the configuration of both operational and infrastructure parameters for the model's optimal performance. This includes setting up the computational resources, GPU memory allocation, and any necessary dependencies or environmental variables.
- Preprocessor: The preprocessor prepares and formats data to meet the specific input requirements of the backbone model. It encompasses the processing logic that is applied immediately before data is introduced into the system's core AI engine. This involves tasks such as cleaning, normalizing, and transforming the data to ensure compatibility with the backbone's operational parameters.
- Postprocessor: The postprocessor handles the output data generated by the backbone model after predictions or analyses have been made. It implements the necessary post-processing logic to convert the backbone's raw output into a format that is easily interpretable by humans. This may include tasks such as formatting the results into readable reports, applying thresholding to predictions for classification, or scaling output values to a recognizable range. The postprocessor ensures that the insights and conclusions drawn from the AI's computational processes are accessible and understandable, facilitating decision-making and further analysis by end-users.
- Predictor: It is a comprehensive software object that encapsulates several functions necessary for executing AI-driven tasks such as segmentation, classification, or detection. It begins with loading the inference configuration and the backbone model into the system. Following this initial setup, it applies preprocessor operations to prepare input data for analysis and postprocessor operations to refine the model's output for user interpretation. At its core, the predictor manages the invocation of the backbone model to generate predictions based on the input data. This integration of loading configurations, preprocessing, predicting, and postprocessing within a single object streamlines the workflow, as sketched after this list.
- Scorer: It is a specialized component found in select processors, particularly in some of those designed to host scoring systems. It implements a straightforward mathematical formula to compute a score, which typically quantifies the severity of a given skin condition. This scoring mechanism allows for a standardized evaluation of the pathology, facilitating consistent and objective analysis.
- Input Model: The input model specifies the structure and schema of data accepted by a processor or subprocessor. It outlines the format, types, and constraints of the input data, ensuring that all incoming data adheres to these predefined schemas. Additionally, the input model encompasses data validation logic, which is crucial for verifying that the data meets the necessary criteria before processing begins. This validation step helps prevent errors and inconsistencies in the processing pipeline by ensuring only correctly formatted and valid data is processed.
- Output Model: The output model establishes the structure and schema of data produced by a processor or subprocessor. It determines the format, types, and constraints of the output data, ensuring consistency and compatibility with other system elements that utilize this data. Integral to the output model is the data validation logic, which confirms that the outgoing data aligns with the predefined schemas. This validation process guarantees that the data not only meets the expected standards but also is readily interpretable and usable by subsequent processes or interfaces within the system.
- Configuration Model: The configuration model dictates the schema for setting up and customizing the processor or subprocessor, detailing the configuration parameters and their allowable values. This model includes validation logic to ensure that any configuration file provided adheres strictly to the predefined schema. This validation process helps in preventing errors during system initialization and operation, by verifying that all configuration settings are correct and within expected limits.
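As a schematic illustration of the Predictor flow shared by the processors, the sketch below wires a backbone, preprocessing, and postprocessing together; class and method names are illustrative and do not reflect the real code base.

```python
# Schematic Predictor: load backbone, preprocess, predict, postprocess.
import torch

class Predictor:
    def __init__(self, backbone: torch.nn.Module, threshold: float = 0.5):
        self.backbone = backbone.eval()  # Backbone Loader responsibility
        self.threshold = threshold       # from the Configuration Model

    def preprocess(self, image: torch.Tensor) -> torch.Tensor:
        # Preprocessor: scale pixel values to the backbone's expected range.
        return image.float() / 255.0

    def postprocess(self, logits: torch.Tensor) -> dict:
        # Postprocessor: turn raw logits into an interpretable result.
        probs = torch.sigmoid(logits)
        return {"probabilities": probs.tolist(),
                "positive": bool((probs > self.threshold).any())}

    def predict(self, image: torch.Tensor) -> dict:
        with torch.no_grad():
            logits = self.backbone(self.preprocess(image))
        return self.postprocess(logits)
```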
Finally, we have created a number of utility functions that are shared by several software items. These functions are located in centralized repositories, enabling them to be effortlessly integrated as packages into various microservices as needed. Given the vast array of utility functions available, we've categorized only the most crucial ones under the umbrella of system software units, emphasizing their importance in the infrastructure. Among all these units, we focus on:
- Medical Image: This class is pivotal, primarily offering conversion capabilities across diverse data structures for image storage, including but not limited to text, integer arrays, or binary files and streams. It extends its utility with specialized image processing features, tailor-made for our domain of interest. The inclusion of custom image processing functionalities allows for adaptable manipulation and analysis of medical imagery.
- Medical Image Loader: This component provides factory functions for generating a Medical Image from a multitude of input formats. Its versatility in handling various source formats simplifies the process of integrating and standardizing medical imaging data, facilitating easier access and manipulation within our ecosystem.
- AWS S3 Downloader: It allows us to easily download a file from AWS S3, and is mainly used to retrieve trained AI models. Acting as a dynamic cache, it streamlines the process of transferring models from remote to local storage, significantly enhancing the accessibility by reducing load times and ensuring the models are readily available for inference.
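A minimal sketch of such a cached downloader, assuming boto3 and placeholder bucket and key names, could look as follows.

```python
# Hypothetical cached S3 downloader: fetch a model from S3 only once,
# then serve subsequent requests from local storage.
from pathlib import Path

import boto3

def download_model(bucket: str, key: str, cache_dir: str = "/tmp/models") -> Path:
    local_path = Path(cache_dir) / key
    if local_path.exists():
        return local_path  # cache hit: skip the network transfer
    local_path.parent.mkdir(parents=True, exist_ok=True)
    boto3.client("s3").download_file(bucket, key, str(local_path))
    return local_path

# weights = download_model("models-bucket", "ascorad/backbone.pt")
```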
Development of the detailed design for each software unit
Due to its classification, this chapter does not apply to the medical device software as it only applies to Class C.
Development of the detailed design for the interfaces
Due to its classification, this chapter does not apply to the medical device software as it only applies to Class C.
Verification of the detailed design
Due to its classification, this chapter does not apply to the medical device software as it only applies to Class C.
Software unit implementation and verification
Implementation of each software unit
In compliance with the IEC 62304, Section 5.5.1, we have ensured that all software units were developed according to our predefined development process. Each piece of code is systematically version-controlled within Bitbucket repositories specifically designated for the medical device in the DHF. This approach facilitates traceability, allows for efficient management of software changes, and guarantees the integrity and security of the code base throughout the development lifecycle.
Establishment of the software unit verification process
Verification of a software unit involves a systematic, objective process of evaluating a software unit to guarantee it correctly implements specified requirements. This process seeks to identify discrepancies between the software unit's design and its implementation, ensuring the unit is safe, performs as intended, meets all design specifications, complies with regulatory standards, and does not introduce unintended functionality or errors.
Since the objects of verification are individual software units, unit testing is our primary verification strategy. Unit testing is a software testing method where individual units or components of a software system are tested independently to validate that each unit performs as designed. This testing approach allows for the isolation of each part of the software to identify, examine, and fix any defects early in the development cycle. Implementing unit testing effectively contributes to the software's reliability, facilitates changes, and simplifies integration.
To enhance the efficacy and efficiency of our testing process, we have incorporated automated testing techniques. Automated testing involves the use of software tools to execute tests on the software automatically, without the need for manual intervention. This method significantly increases the scope and depth of tests to improve software quality. This technique ensures consistent test execution, saves time and resources, and enables more frequent testing of the software as it evolves.
In our development environment, we have adopted pytest and hypothesis as our primary tools for writing and running unit tests in Python. Pytest is a powerful testing tool that supports simple unit tests as well as complex functional testing for applications. Hypothesis is an advanced testing library that works with pytest to automatically generate test cases, aiming to cover a wide range of scenarios and edge cases that might not have been considered. This combination of pytest and Hypothesis enhances our ability to discover bugs and verify that our software units meet the specified requirements efficiently.
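As an illustration, the following is a minimal sketch of a property-based unit test combining pytest and Hypothesis; the MedicalImage interface (from_numpy/to_numpy) and its module are hypothetical stand-ins for the actual conversion methods in our repositories.

```python
# Minimal sketch of a property-based unit test with pytest and Hypothesis.
# The MedicalImage interface (from_numpy/to_numpy) is a hypothetical stand-in.
import numpy as np
from hypothesis import given, settings
from hypothesis.extra.numpy import arrays

from medical_image import MedicalImage  # hypothetical module


@given(pixels=arrays(dtype=np.uint8, shape=(64, 64, 3)))
@settings(max_examples=50)
def test_numpy_round_trip_preserves_pixels(pixels):
    """A round trip through MedicalImage must not alter pixel values, dimensions or channels."""
    image = MedicalImage.from_numpy(pixels)
    assert np.array_equal(image.to_numpy(), pixels)
```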
All the designed and written unit tests can be found in the top-level tests directory within each medical device code repository on Bitbucket.
To measure code coverage, we use pytest together with a plugin named pytest-cov. First, we need to configure these tools in our code projects to track the coverage data when running the unit tests (for example, by running pytest with the --cov and --cov-report options). The tools will generate reports in a number of formats such as HTML, XML or JSON, detailing which parts of the code were executed during tests. For convenience, we have preferred the HTML format.
After running the tests, we analyze the HTML coverage report to identify any parts of the code that are not adequately covered by tests. If the coverage is below the declared threshold (90%, as stipulated in the Software unit acceptance criteria section), we write additional tests to cover the critical sections that were missed. Once updated, we rerun the tests to confirm that coverage meets our declared target.
We have also incorporated this unit testing and coverage reporting into our CI pipelines. This integration automates the generation of coverage reports on every commit and can be set to alert us or fail the build if the coverage falls below the specified threshold, as sketched below.
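As an illustration of the threshold check, the following sketch parses the JSON coverage report that pytest-cov (via coverage.py) can produce and fails the pipeline when coverage drops below 90%; the report path is a placeholder.

```python
# Illustrative CI coverage gate: parse the JSON report produced by
# pytest-cov (coverage.py) and fail the build below the 90% threshold.
import json
import sys

THRESHOLD = 90.0  # from the Software unit acceptance criteria section

with open("coverage.json") as fh:  # path is a placeholder
    percent = json.load(fh)["totals"]["percent_covered"]

if percent < THRESHOLD:
    sys.exit(f"FAIL: coverage {percent:.1f}% is below the {THRESHOLD}% threshold")
print(f"OK: coverage {percent:.1f}% meets the {THRESHOLD}% threshold")
```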
In addition to the techniques mentioned so far, our verification strategy also includes other controls:
- Static Analysis: We use static analysis tools to examine the software unit's code without executing it. This analysis helps identify syntax errors, code structure issues, potential vulnerabilities, and compliance with coding standards.
- Code Review: Peer code reviews are conducted by experienced developers not directly involved in the development of the software unit. This approach facilitates the identification of logical errors, oversight in compliance with coding standards, and opportunities for optimization.
- Traceability Analysis: We perform traceability analysis to ensure that every requirement has been implemented in the software unit. This involves mapping each requirement to specific parts of the software unit's implementation, ensuring no requirement is overlooked.
Software unit acceptance criteria
As part of our acceptance criteria, we stipulate the following:
- Test execution outcome: A fundamental acceptance criterion is that all unit tests designed for a software unit must successfully pass upon execution. This means each test must meet its predefined assertions and conditions without failures, thereby affirming the software unit's correctness and functionality as specified.
- Test coverage: We mandate a minimum of 90% code coverage through unit tests. This high threshold ensures that the vast majority of our codebase is verified for functionality and reliability, minimizing the risk of undetected errors in untested code.
Achieving high coverage percentages and ensuring all tests pass are fundamental to confirming the software's quality and compliance with the stringent requirements of the IEC 62304 standard. This approach facilitates the identification and rectification of potential issues early in the development process, enhancing the overall quality and safety of the medical device software.
Additional software unit acceptance criteria
Due to its classification, this chapter does not apply to the medical device software as it only applies to Class C.
Software unit verification
We verify the correct implementation of the software units' code by applying the established verification processes and ensuring that the software units have passed the defined acceptance criteria.
The records of the software unit verification process are stored in the infrastructure of our storage provider, AWS. The options we have considered for storing the verification reports have been relational and document databases, and S3. When evaluating these different storage options, we have used the following criteria:
- Data structure and complexity: Verification reports may contain structured data (test case descriptions, outcomes, metrics) and unstructured data (logs, error descriptions). A document database can naturally accommodate this variability.
- Scalability: The ability to scale the database easily in response to the amount of data generated by verification processes is crucial.
- Query patterns: How the data will be accessed and queried, including the need for complex joins or the predominance of key-value lookups.
- Integration and tooling: Compatibility with existing tools and services used in the software development lifecycle.
Given the nature of software unit verification reports, which likely include structured data alongside more complex, potentially unstructured data (such as logs or detailed error reports), AWS DocumentDB emerges as the best option. DocumentDB's flexibility in handling documents (JSON-like structures) makes it well-suited for storing various types of verification data, from structured metrics to verbose descriptions of test cases and outcomes. This flexibility allows us to adapt the reporting as the software evolves without being constrained by a rigid schema.
Moreover, DocumentDB's scalability and managed service characteristics ensure that as the volume of verification data grows, the database can scale to meet demand without extensive database management overhead. Its compatibility with MongoDB also facilitates integration with existing, well-known tools and platforms, simplifying the development and deployment of the verification reporting system.
Finally, considering access patterns to verification reports (which may include querying by date, test ID, software version, or other metadata) DocumentDB offers robust indexing and querying capabilities that can efficiently support these requirements. While relational databases like Amazon RDS or Amazon Aurora offer strong consistency and transactional capabilities which are important for certain types of applications, the nature of software unit verification reports makes Amazon DocumentDB the more suitable choice for this specific use case.
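For illustration, the sketch below shows how a verification report could be written to and queried from DocumentDB through its MongoDB-compatible interface, assuming pymongo as the client; the endpoint, credentials, and database/collection names are placeholders.

```python
# Minimal sketch of storing and querying verification reports in AWS
# DocumentDB via pymongo (MongoDB-compatible). Endpoint, credentials and
# names are illustrative placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@docdb-cluster.cluster-example.eu-west-1.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="global-bundle.pem",  # Amazon-provided CA bundle
    retryWrites=False,              # DocumentDB does not support retryable writes
)
reports = client["verification"]["unit_test_reports"]

# Insert a report document (JSON-like structure, no rigid schema required).
reports.insert_one({"reportId": "UT-20240208-076", "summary": {"failed": 2}})

# Query pattern: all reports for a software unit that contain failed tests.
for doc in reports.find({"softwareUnitId": "MedicalImage", "summary.failed": {"$gt": 0}}):
    print(doc["reportId"])
```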
An example of one of the many verification reports for unit tests that are stored in AWS DocumentDB is the following JSON document:
{
"reportId": "UT-20240208-076",
"testType": "Unit",
"softwareUnitId": "MedicalImage",
"version": "1.1.0",
"date": "2024-01-16",
"description": "Verify the functionality of the MedicalImage class in converting images between PIL, Numpy, Base64, and other formats.",
"author": {
"name": "Alejandro Carmena Magro",
"employeeID": "JD-017",
"role": "Machine Learning Ops"
},
"environment": {
"operatingSystem": "Ubuntu 22.04.3 LTS",
"pythonVersion": "3.10",
"dependencyVersions": {
"numpy": "1.26.2",
"pillow": "9.4.0"
}
},
"executionTime": "0.3928 seconds",
"inputData": {
"format": "JPEG",
"size": "1024x768",
"colorMode": "RGB"
},
"result": [
{
"testCaseId": "TC201",
"description": "Convert an image from PIL to Numpy array format.",
"expectedResult": "Image successfully converted to Numpy array with preserved dimensions and color channels.",
"actualResult": "Image successfully converted to Numpy array with preserved dimensions and color channels.",
"status": "pass",
"comments": ""
},
{
"testCaseId": "TC202",
"description": "Convert an image from Numpy array to Base64 format.",
"expectedResult": "Image successfully encoded to Base64 with no data loss or corruption.",
"actualResult": "Image successfully encoded to Base64 with no data loss or corruption.",
"status": "pass",
"comments": ""
},
{
"testCaseId": "TC203",
"description": "Convert an image from Base64 to PIL format.",
"expectedResult": "Image successfully decoded from Base64 and converted to PIL format with correct metadata.",
"actualResult": "Image successfully decoded from Base64 but metadata incorrect.",
"status": "fail",
"comments": "Metadata handling for Base64 to PIL conversion needs review."
},
{
"testCaseId": "TC204",
"description": "Round-trip conversion from PIL through Numpy to Base64 and back to PIL.",
"expectedResult": "Image remains identical to the original after the round-trip conversion.",
"actualResult": "Minor discrepancies in image quality detected after round-trip conversion.",
"status": "fail",
"comments": "Possible precision loss in the Numpy to Base64 step. Requires further investigation."
}
],
"summary": {
"total": 4,
"passed": 2,
"failed": 2,
"passRate": "50%",
"failedList": ["TC203", "TC204"]
},
"conclusion": {
"status": "Partial Pass",
"recommendations": "Address failures in metadata handling and precision loss issues before proceeding. Ensure that all format conversions maintain integrity and data consistency across various transformations."
}
}
The report reveals that not all test cases were successful. However, the issues identified were addressed through our problem resolution process. We chose to share this specific report to illustrate what it looks like when not all tests pass, offering a realistic view of the testing process.
Coverage reports are stored in the cov_reports directory of each code repository. Due to the complexity and format of the HTML coverage reports generated by pytest-cov, it would be impractical to provide a raw example of this type of report here. Instead, we are going to describe what a typical HTML coverage report looks like:
- Project name/header: At the top of the report, we'll find the project's name or a header indicating that it's a coverage report. This sets the context for what the report will cover.
- Summary table: This is one of the key elements of the report. It summarizes the overall test coverage for the entire project. The summary usually includes:
- Total coverage percentage: It displays the aggregated coverage across all files, showing what percentage of the total code base has been tested. This value is the one that should be greater than or equal to the code coverage percentage stipulated in the Software unit acceptance criteria section.
- Lines covered/total lines: This shows the number of lines of code that were executed during the tests, compared to the total number of lines in the code base.
- Missing lines: These are the lines of code that were not executed during the tests, indicating areas that lack coverage.
- Excluded lines: Lines that have been explicitly excluded from testing (for example, simple utility functions or external libraries) are listed here.
- File list: Each file in the project that has been included in the coverage analysis is listed in this section. For each file, the report often provides:
- File path: The relative path to the file within the project's directory structure.
- Coverage percentage: The percentage of lines covered by tests in that specific file.
- Covered lines/total lines: The number of lines covered by tests versus the total number of lines in the file.
- Detailed file reports: By clicking on a file name in the file list, you can access a more detailed report for that file. The detailed view displays the source code of the file, with lines of code that have been executed highlighted in green and the lines that have not been executed highlighted in red. This is very useful for developers as it visually identifies the exact lines that require more comprehensive testing.
Software integration and integration testing planning
Integration of software units
In adherence to the guidelines stipulated in IEC 62304, Section 5.6.1, we have meticulously executed the integration of software units in accordance with our previously established and documented integration plan (referenced in GP-012 Design, redesign and development).
Our integration process commenced with the identification and verification of software units, ensuring their readiness for integration in line with the stipulations of our integration plan. This plan detailed a strategic approach to the sequence of software unit integration, which was devised to mitigate risks, facilitate error detection, and streamline the integration process.
Key elements of our integration plan included:
- Integration Sequence: A clearly defined order for integrating software units, prioritizing critical components and dependencies to ensure functional coherence and stability throughout the integration process.
- Interface Definitions: Detailed specifications of interfaces between software units were adhered to, guaranteeing compatibility and functional integrity between interconnected units.
- Integration Testing Strategy: A comprehensive testing strategy was employed to validate the functionality and interoperability of integrated units, encompassing unit tests, interface tests, and system-level tests to detect and rectify integration issues promptly.
Verification of the software integration
Section 5.6.2 of IEC 62304 requires verifying that software units have been correctly integrated into software items and the software system, without delving into the details of integration testing and its outcomes, which are covered in other sections of the standard (5.6.3, 5.6.4, 5.6.5 and 5.6.7). Given this clarification, our verification of software integration focuses on the evidence of successful integration, as opposed to the process of integration testing itself.
The verification of software integration in this context emphasizes confirming the assembly of software units into software items (larger software components) and the complete software system. With this in mind, in this section we acknowledge that we have verified the following:
- Review of integration plan: We have evaluated the adherence to the integration plan, ensuring that all software units were integrated as planned, in the correct sequence, and without omitting any units or integration steps.
- Static analysis: We have conducted static analysis of the codebase to ensure that interfaces between integrated units are correctly implemented, and there are no syntactical or structural issues that would prevent the software items from functioning as intended.
- Configuration management review: We have verified that all integrated software units and the resulting software items are correctly managed within a configuration management system. This includes ensuring that all versions and variations are correctly identified, tracked, and documented.
- Dependency verification: We have confirmed that all dependencies between software units have been correctly resolved and that the integrated software items can operate together without conflict or missing dependencies.
Software integration tests
Integration testing is a critical phase in the software development lifecycle, particularly for medical device software, where reliability and safety are paramount. It involves the process of combining and testing multiple software units or components together to ensure that they function correctly as a group. This step is essential for identifying and addressing any issues, defects or inconsistencies that arise from the interactions between integrated components, which might not be apparent when these components are tested in isolation.
In the context of IEC 62304, integration testing is not just recommended but required. To ensure a comprehensive testing process, our integration testing covers a wide range of scenarios and use cases, including both normal and abnormal operating conditions. This approach helps us verify not only the functional correctness of the software when modules work together, but also its ability to handle errors gracefully and maintain operational integrity under various conditions.
Our testing was conducted in stages, progressively integrating and testing components to build up the software's functionality. This incremental approach allowed us to isolate and resolve integration issues effectively as we moved towards the full system integration. We documented all test cases, results, and any actions taken to resolve identified issues, ensuring traceability, repeatability and accountability throughout the testing process. For this purpose, we maintain a centralized Bitbucket repository named integration-tests. This repository is the hub for storage, versioning, and collaborative refinement of our integration test suite.
With every test case, expected results are clearly documented. These expectations are derived from the software's functional specifications and the integration requirements. By comparing actual outcomes with these predefined results, we can ascertain the correctness of unit integration.
Regarding traceability, each test case is traceable back to specific software requirements and design specifications for comprehensive coverage of the software’s functionality and performance expectations. This traceability also facilitates the identification and rectification of any discrepancies uncovered during testing.
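One lightweight way to make this traceability machine-readable, sketched below, is to tag each test with the requirement identifiers it verifies through a custom pytest marker; the marker name and requirement IDs are illustrative, not our actual scheme.

```python
# Sketch of requirement traceability via a custom pytest marker. The marker
# must be registered (e.g. in pytest.ini) to avoid warnings; requirement IDs
# and the test body are illustrative placeholders.
import pytest


@pytest.mark.requirement("REQ-042", "REQ-043")
def test_image_pipeline_integration():
    """Integration test traceable to the listed software requirements."""
    assert True  # placeholder for the actual integration checks


# A test-to-requirement report can then be extracted in a conftest.py hook:
def pytest_collection_modifyitems(items):
    for item in items:
        for marker in item.iter_markers(name="requirement"):
            print(item.nodeid, "->", ", ".join(marker.args))
```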
Furthermore, integration testing is performed in a controlled environment that closely replicates the software's intended operational context, ensuring that test environments are reproducible and that tests are executed against the correct version of the software.
To enhance efficiency and repeatability, we employ automated testing tools, with a particular focus on regression testing. This focus allows us to efficiently re-test components as the software undergoes changes. We've integrated Bitbucket Pipelines into our CI/CD strategy, which triggers our suite of low time-consuming integration and regression tests automatically with every code commit. With this setup we have immediate feedback on the impact of changes, enabling the quick detection and resolution of any integration issues.
Besides executing lightweight integration tests when committing code, all integration tests are run periodically (at least once a week), or before major software releases or updates, to verify the stability and compatibility of all software items and microservices.
Content of the integration tests
This part of the document outlines the specific content and considerations that have been incorporated into our integration testing strategy. Our approach is designed to comply with the requirements of IEC 62304, Section 5.6.4.
We have developed detailed test cases based on the software integration plan. These cases specifically target interactions between integrated software items, covering the following areas:
- Functionality testing: Verifying that the integrated components function together as intended, fulfilling all specified requirements. For this, each software unit must be free of code errors, function correctly for all use cases, and not conflict with the code of other units.
- Interface testing: Ensuring that data is correctly passed between modules, and that all interfaces behave as expected under various conditions. This includes testing of APIs, data formats, and protocols as defined in the software design specifications.
- Performance testing: Assessing the performance of the software item when multiple components interact, especially focusing on response times, throughput, and resource utilization under load.
- Error handling testing: Evaluating the system's ability to handle errors gracefully during component interaction, including the logging of errors and the software's resilience to recover from unexpected states.
For each test case, we have defined specific test data (see the sketch after this list), which includes:
- Normal Operational Data: To simulate regular operation and verify that the system behaves as expected under standard conditions.
- Boundary Condition Data: To test the limits of component interactions and ensure stability and correct functionality at the extremes of input and operational parameters.
- Invalid and Unexpected Data: To challenge the system’s error handling and recovery processes, in order to evaluate robustness and reliability.
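A minimal sketch of how these three data categories can be exercised in a single parametrized integration test follows; the process_image entry point, payloads, and expected behaviour are hypothetical.

```python
# Illustrative parametrized integration test covering the three test data
# categories. The process_image entry point and payloads are hypothetical.
import pytest

from pipeline import process_image  # hypothetical integration entry point


@pytest.mark.parametrize(
    "payload, should_succeed",
    [
        ({"format": "JPEG", "size": (1024, 768)}, True),  # normal operational data
        ({"format": "JPEG", "size": (1, 1)}, True),       # boundary condition data
        ({"format": "BMP", "size": (0, 0)}, False),       # invalid and unexpected data
    ],
)
def test_data_categories(payload, should_succeed):
    if should_succeed:
        assert process_image(payload).status == "ok"
    else:
        with pytest.raises(ValueError):
            process_image(payload)
```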
Verification of the software integration test procedures
We have conducted a thorough evaluation of the software integration test procedures to ensure their correctness. This evaluation process included a review of the test procedure design, the methods employed for testing, and the criteria for success, to verify that they are adequate for identifying defects and ensuring the software components interact correctly within the system. We also confirmed that these procedures are consistently applied and documented, meeting the standard's requirements for traceability and repeatability. Our findings and the details of the evaluation process have been documented following the template T-012-017 Integration test review to provide clear evidence of compliance.
Software regression tests
Regression testing is a key component of the software development lifecycle. This type of testing ensures that newly integrated software items or updates do not adversely affect the existing functionality of the software. It involves re-running functional and non-functional tests to verify that previously developed and tested software still performs after a change. If a defect is found, it can be fixed before the software is released. This process is vital in maintaining the integrity and reliability of medical device software, where safety and efficacy are paramount.
The frequency of regression testing may vary based on the nature of changes or updates to the software:
- Minor changes: For minor changes or bug fixes that are unlikely to impact existing functionalities, regression testing may be less frequent.
- Major changes: When significant modifications, such as the introduction of new AI models, algorithms, or extensive feature enhancements are made, regression testing should be conducted more comprehensively.
Certain critical updates or major software releases demand more frequent regression testing. This is particularly relevant when the changes have potential implications for patient safety, data security, or regulatory compliance. Such updates may include:
- Updates to AI models: If AI models are updated or retrained with new data, we must perform extensive regression testing to ensure that the updated models perform accurately and safely.
- Regulatory changes: When there are updates to medical regulations, guidelines, or standards, the software may require modifications to remain compliant. Regression testing is then conducted before and after such updates to ensure adherence to the latest standards.
Below is an example of a detailed results report generated by the automated regression testing process. This JSON document, along with others generated by unit and integration tests, is stored in AWS DocumentDB for traceability and audit purposes.
{
"reportId": "RT-20240208-054",
"testType": "Regression",
"deviceVersion": "2.0.2",
"date": "2024-02-08",
"description": "Regression test for the integration of updated AI models for enhanced skin pathology detection and severity assessment.",
"author": {
"name": "Alejandro Carmena Magro",
"employeeID": "JD-017",
"role": "Machine Learning Ops"
},
"tester": {
"name": "Alejandro Carmena Magro",
"employeeID": "JD-017",
"role": "Machine Learning Ops"
},
"environment": {
"operatingSystem": "Ubuntu 22.04.3 LTS",
"hardware": {
"cpu": "Intel i9 14900k",
"ram": "64GB",
"gpu": "NVIDIA RTX 6000"
}
},
"execution": {
"startTime": "2024-02-08T10:01:06Z",
"endTime": "2024-02-08T10:27:18Z",
"durationMinutes": 26
},
"result": {
"total": 71,
"passed": 68,
"failed": 3,
"failures": [
{
"testId": "AI2001",
"description": "Accuracy test for melanoma detection",
"expectedResult": "95%",
"actualResult": "93.24%",
"reason": "Reduced accuracy potentially due to model overfitting on new data set",
"action": "Review training data set and model parameters"
},
{
"testId": "AI2005",
"description": "Latency test for image processing",
"expectedResult": "0.5 seconds",
"actualResult": "1.76 seconds",
"reason": "Increased processing time observed for high-resolution images",
"action": "Optimize image preprocessing and model inference pipeline"
},
{
"testId": "AI2015",
"description": "Erythema assessment for psoriasis",
"expectedResult": "Correct erythema classification for 98% of cases",
"actualResult": "Correct erythema classification for 96% of cases",
"reason": "Misclassification observed in borderline cases",
"action": "Adjust classification thresholds and retrain model with augmented data"
}
]
},
"conclusion": {
"status": "Partially successful",
"notes": "The majority of regression tests passed, indicating no major defects were introduced with the integration of the updated AI models. However, two issues were identified that require attention. The development team is tasked with investigating and resolving these issues."
}
}
Content of the integration test records
We thoroughly document and manage the records of integration tests. This is done according to IEC 62304, Section 5.6.7, in order to ensure that every integration test performed on the software is documented, retrievable for retesting, and clearly attributed to the individual who conducted the test. These practices are fundamental to maintaining the transparency, reliability, and traceability of the testing process, thereby supporting the overall quality and safety of the medical device software.
To illustrate the evidence, below is an example of an integration test record in JSON format. This record is representative of the tests conducted on our medical device. For the purpose of auditability and compliance, such records are systematically and securely stored in AWS DocumentDB.
{
"recordId": "IT-20240314-0012",
"testType": "Integration",
"date": "2024-03-14",
"description": "Integration Test for ASCORAD Processor",
"softwareVersion": "1.2.5",
"author": {
"name": "Alejandro Carmena Magro",
"employeeID": "JD-017",
"role": "Machine Learning Ops"
},
"tester": {
"name": "Alejandro Carmena Magro",
"employeeID": "JD-017",
"role": "Machine Learning Ops"
},
"environment": {
"operatingSystem": "Ubuntu 22.04.3 LTS",
"hardware": {
"cpu": "Intel i9 14900k",
"ram": "64GB",
"gpu": "NVIDIA RTX 6000"
}
},
"execution": {
"startTime": "2024-03-14T09:43:18Z",
"endTime": "2024-03-14T09:44:02Z",
"durationMinutes": 1,
"steps": [
{
"step": 1,
"description": "Load the segmentation model",
"expectedResult": "Model loaded successfully",
"actualResult": "Model loaded successfully",
"status": "pass",
"comments": ""
},
{
"step": 2,
"description": "Convert the input image to Pytorch tensor",
"expectedResult": "Image was converted to Pytorch tensor and the pixel values are of type float64",
"actualResult": "Image was converted to Pytorch tensor and the pixel values are of type float64",
"status": "pass",
"comments": ""
},
{
"step": 3,
"description": "Assess severity of atopic dermatitis",
"expectedResult": "Severity assessment matches expert annotations with kappa score > 0.65",
"actualResult": "Kappa score of 0.68",
"status": "pass",
"comments": ""
}
]
},
"summary": {
"overallStatus": "pass",
"totalSteps": 3,
"passedSteps": 3,
"failedSteps": 0,
"anomalies": []
},
"additionalNotes": "All test steps executed as expected. The module's accuracy and reliability are within the targeted range for this version."
}
Use of the software problem resolution process
Our protocol ensures that all anomalies encountered during software integration and integration testing are systematically entered and addressed within our established problem resolution framework, as detailed in the document GP-006 Non-conformities. Corrective and preventive actions. This process delineates the steps for logging, analyzing, and resolving such anomalies, allowing us to guarantee compliance with the standard and maintain the integrity and safety of the medical device.
Software system testing
The primary aim of software system testing under IEC 62304 is to validate the entire software as a complete and integrated system. This means testing the software to verify that all its components work together seamlessly, the software meets its intended functionality and requirements safely, and it is ready to be released to the market.
The outcomes of the system tests allow us to identify potential opportunities for improvement, as well as to implement corrective actions to solve current errors and preventive actions to anticipate future errors.
Establishment of tests for software requirements
Aiming to have a comprehensive and traceable verification process for all software requirements, we have prepared a test suite for our software system to ensure all software requirements are met, rigorously documenting all system tests, their configurations, and associated evidence within the DHF.
It's important to note that the majority of our system tests undergo rigorous verification through clinical validations, a process essential for affirming the safety and efficacy of our medical device in real-world clinical settings. Additionally, we employ specific metrics and scores calculated on test datasets to evaluate the performance of our computer vision models.
Use of the software problem resolution process
When we find an anomaly during system testing, we use the problem resolution process to address it. The process begins when an anomaly or problem is identified in the reports generated by the system tests.
Once an anomaly is identified, the next step involves creating a ticket in the Jira project R-006-002 List of non-conformities, claims and communications. In this step we document the anomaly and begin the formal process of investigation. The ticket includes a detailed description of the anomaly found and an initial analysis of the possible causes of the problem. This documentation is essential for tracking the issue and ensuring that all relevant information is available for those tasked with investigating and resolving the anomaly.
If, upon analysis, the team considers the anomaly to be solvable, a new ticket is created within Jira. This ticket is crucial for planning the resolution process: it includes the definition of a CAPA (Corrective and Preventive Action) plan, setting a due date to remedy the problem, outlining the actions required to solve it, and evaluating the effectiveness of the implemented actions. Here we ensure that there is a clear roadmap for addressing the anomaly and that the proposed solutions are both practical and measurable.
The resolution phase is marked by thorough implementation of the actions outlined in the CAPA plan. The effectiveness of these actions is closely monitored to ensure that the anomaly is resolved satisfactorily. Once the problem has been successfully resolved, the final step is to mark the ticket as Solved. This act signifies the closure of the anomaly resolution process and confirms that the issue has been addressed in a manner that meets the team's standards for quality and effectiveness.
An illustrative example of this process in action is the anomaly identified on 2023-01-10_R-006-003_When uploading a photo of atopic dermatitis, the app is 99.98% sure that it's psoriasis. For this particular issue, investigation and reception tickets R006001-43 and R006001-76 were created, respectively.
Retest after changes
As explained in the GP-012 procedure, every time we detect the need for a change, a new requirement is opened in the DHF, and it is reviewed, verified, validated and approved before its release, since changes are managed as new requirements.
When reviewing the changes, we evaluate their impact on the product, on the inputs and outputs of risk management (following GP-013 Risk Management) and on the processes of product delivery (GP-011 Provision of services).
The list of all the changes performed along the different version releases is recorded on the cover of the DHF following the template R-TF-012-005 Design change control.
Verification of software system tests
We verify the adequacy of test procedures, traceability, completeness of testing against software requirements, and satisfaction of the pass/fail criteria by reviewing and approving the system test plans (T-012-018 Test plan).
The traceability of test procedures to software requirements is documented in the T-012-018 Test plan of the DHF, in the section Software requirements verified, guaranteeing that every software requirement has a corresponding test that verifies its implementation and functionality. Evidence that all software requirements have been tested or otherwise verified is found both in the Requirement verification checklist and in the Evidence section of each test run (T-012-003 Test run). Additionally, the outcomes of test execution, aligned with predefined pass/fail criteria, are detailed at the beginning of each test run in the Result section.
Content of the test records of the software system
Following the guidelines of IEC 62304, Section 5.7.5, we systematically document all the software system test records, strictly adhering to the standard's mandate for record-keeping of test results, the maintenance of sufficient records to enable test repeatability, and the clear identification of personnel involved in testing.
Detailed test results are recorded in the Evidence section of each test run stored in the DHF. These results provide a comprehensive analysis of test outcomes, including methodologies employed, data collected, and an in-depth assessment of the results.
Furthermore, at the beginning of each test run, the Result section states the overall result of the test (Passed or Failed), offering a clear and immediate insight into the test outcome. This enables quick reference and overview, supporting efficient project management and oversight. Each test run also includes the identification of the tester who conducted the test, alongside the approver of the test run, who has the responsibility of ensuring the test results meet established acceptance criteria. This practice ensures accountability and traceability, aligning with the standard's requirements for identifying test personnel. By documenting the roles and responsibilities in this manner, we establish a clear audit trail that facilitates any necessary reviews or assessments.
Software release
Assurance of the software verification completion
Before the release of a new version or subversion of the software, we verify the adequacy of the verification strategies and test procedures through the evidence of software requirements accomplishment, which is documented in the DHF.
Moreover, when successful, the records described in the previous chapter also evidence the completion of the software verification.
Documentation of the known residual anomalies
All the known residual anomalies are registered in an anomaly table in the Defects and issues section at the end of each test run record.
Evaluation of the known residual anomalies
Before we release a new version, we thoroughly evaluate any residual anomalies to ensure the software's reliability and safety. This evaluation process is an integral part of our system testing and defect management workflow.
Initially, after all unit and integration tests have successfully passed, we conduct system tests that comprehensively verify all software requirements. The outcomes of these tests are compared with the expected results. Any discrepancies or unexpected outcomes are documented in an anomaly table in the Defects and issues section at the end of each test run record. This step ensures that every defect, no matter how minor, is recorded for further analysis.
Following the identification and documentation of defects, an analysis is conducted to understand the root causes and impacts of these anomalies. This analysis informs the necessary corrections and modifications required to resolve the defects. If the implemented changes introduce new functionalities or significantly alter existing ones, new system tests are created to adequately cover these modifications.
Once corrections are made and any new tests are designed, the entire suite of original and new system tests is re-executed. The results of these tests are then evaluated to determine if any defects persist. This iterative process of testing, defect recording, analysis, correction, and re-testing is repeated until no inadmissible defects are identified.
Occasionally, minor defects may persist in new software releases. These are referred to as "known residual anomalies". When such anomalies are identified, they are carefully assessed for their impact on the overall system. The evaluation criteria ensure that these defects are minor and have a low impact, confirming they do not compromise the security or functionality of the device. Only defects that meet these stringent criteria are accepted, since they pose no risk to the device's performance or user safety.
Documentation of the released versions
The release documents of each version are recorded in the corresponding space of the DHF, following the template T-012-004 Version release, whose content is described in the GP-012.
Additionally, before releasing any software version, we verify that:
- The CE certificate and the Declaration of Conformity are available.
- The CE marking label is included.
- The label and instructions for use comply with the legal requirements and conform to the initially approved models.
- The established development guidelines have been followed.
- The characteristics of the software and the results of implementation, integration and development are as specified.
- The clinical evaluation and risk management records demonstrate compliance with the general safety and performance requirements.
The list of all the changes performed along the different version releases is recorded on the cover of the DHF following the template T-012-005 Design change control.
Documentation about how the released software was created
The release of the software is performed using the technologies and tools explained in the software development plan section, and the corresponding records of the release are recorded in the corresponding space of the DHF, following the template T-012-004 Version release from the GP-012.
Assurance of activities and task completion
As described in the GP-012, to ensure that the results meet the requirements, we conduct systematic product design and development reviews at each phase: requirements specification, development of the activities, verification, validation and release of the product. As previously mentioned, the approval or review is registered by signing the record (for the requirements, tests and releases) or by the completion of the activities flow by the activity responsible. The beginning of each phase also implies that the previous one has been completed and reviewed.
All the tasks and activities of the software development plan are completed. The following documents evidence the completion:
- Plan:
  - Legit.Health Plus_DHF
- Design and development:
  - R-TF-008-001 GSPR
  - Legit.Health Plus_DHF
  - R-TF-012-006 Life cycle plan and report
  - R-TF-012-008 Formative evaluation report
  - R-TF-012-015 Summative evaluation report
  - R-TF-013-002 Risk management record
  - R-TF-015-003 CER
- Test:
  - Activities (template T-012-002 Activities) (is part of Legit.Health Plus_DHF)
  - Test plans (template T-012-018 Test plan) (is part of Legit.Health Plus_DHF)
  - Test runs (template T-012-003 Test run) (is part of Legit.Health Plus_DHF)
- Release:
  - Version release (template T-012-004 Version release) (is part of Legit.Health Plus_DHF)
  - Legit.Health Plus_IFU
  - Legit.Health Plus_Label
- Feedback:
  - Records generated from the procedures GP-004 Vigilance system, GP-006 Non-conformity. Corrective and preventive actions, GP-007 Post-market surveillance, GP-014 Feedback and complaints and GP-017 Technical Assistance.
Software archive
As we already described, and as is widely done, we use Git, which allows the management of software versions as well as the recovery of old versions. Regarding reliability, we have signed Service Level Agreements (SLAs) with our cloud Git vendor to ensure repeatability.
According to the procedure GP-001 Control of documents, and based on Annex 9 of MDR 2017/745, the documentation retention period is established as 10 years from the commercialization of the last product.
Using Git to manage the software configuration also ensures by design that we correctly document the configuration of the device.
Assurance of the safe delivery of the released software
In compliance with IEC 62304, Section 5.8.8, we ensure that the released device can be delivered to the point of use without corruption or unauthorized change. This is achieved following the procedures GP-012 Design, redesign and development and GP-010 Purchases and suppliers evaluation, although there is also a safety-by-design element: due to the nature of the device, managing organisations access our API and interact with the processors without the device having to travel, either geographically or digitally. Only the data they input and the device's output are transmitted.
It is worth mentioning the following elements of our design:
- Authentication and authorization: we employ robust authentication mechanisms such as OAuth 2.0 and JWT to ensure that only authorized users can access the web API. Role-based access control further restricts user privileges, enhancing data security.
- Data encryption: all data transmitted between the user and the API is encrypted using industry-standard encryption protocols, such as SSL/TLS, to protect against eavesdropping and data breaches.
- Security analysis: Periodic security audits and vulnerability assessments are conducted to identify and address potential security risks. This proactive approach helps us stay ahead of emerging threats. As we explained in the GP-012, we take into account information security considerations aligned with ISO/IEC 27001:2022 and the OWASP® Foundation's OWASP Top 10 framework to foresee vulnerabilities (a minimal sketch of the token check follows this list).
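As a sketch of the authentication and role-based access checks described above, assuming the PyJWT library; the key, audience, and claim names are illustrative placeholders rather than our production configuration.

```python
# Minimal sketch of JWT validation with role-based access control at the API
# boundary, assuming PyJWT. Key, audience and claim names are placeholders.
import jwt  # PyJWT


def authorize_request(token: str, required_role: str) -> bool:
    """Accept the request only if the token is valid and carries the required role."""
    try:
        claims = jwt.decode(
            token,
            key="shared-or-public-key",   # placeholder
            algorithms=["HS256"],
            audience="legit-health-api",  # placeholder
        )
    except jwt.InvalidTokenError:
        return False
    return required_role in claims.get("roles", [])
```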
In the GP-012 we also define how we analyze changes to the medical device software with respect to safety (IEC 62304, Section 7.4.1) and how we analyze the impact of software changes on existing risk control measures (IEC 62304, Section 7.4.2), which is also referenced in GP-013 Risk management.
Software maintenance process
Establishment of the software maintenance plan
The redesign and development process allows continuous integration whenever improvements or problem corrections are needed once the software has been released. Any input received from users, patients, developers, providers or any other person involved in the development or use of the software is analyzed and managed to be integrated into the redesign and development process, if necessary. The software maintenance plan is mostly established in procedure GP-023 Change control management, but it also includes the aspects described below.
Monitor feedback
Regarding how we document and evaluate feedback and how we evaluate the effects of problem reports on safety, the procedures GP-004 Vigilance system, GP-006 Non-conformity. Corrective and preventive actions, GP-007 Post-market surveillance, GP-014 Feedback and complaints and GP-017 Technical Assistance Service have as their main objective the assessment of input information from various sources to improve the device's safety and performance, as well as ongoing compliance with the established requirements.
These procedures describe the reception, documentation, evaluation, resolution and follow-up of any communication related to the released software.
The input communications can come from:
- Customer feedback
- Customer complaints
- Customer satisfaction surveys
- Non-conformities
- Incident notification
- Post-market surveillance
- Post-market clinical follow-up
- Internal and external audits
- Company meetings
- Other sources
All the communications are analyzed, based on a risk analysis approach, and are considered software maintenance when a modification of the software code must be performed.
Depending on the nature of the change, a new version or subversion of the device will be developed and a different level of technical documentation revision will be required. Every change, whether significant or non-significant according to the procedure GP-023 Change control management, is performed according to the GP-012 Design, redesign and development procedure as a new requirement.
Backups
The data is updated at regular intervals, with an update frequency of 12 hours, ensuring that the most current data is consistently available to users.
To safeguard the integrity of our data, we have implemented an incremental backup strategy, which captures and stores only the modifications made to the data since the last backup. This method reduces storage requirements and minimizes the time and resources needed for the backup process.
This data update and backup approach maintains a high standard of reliability, efficiency, and security, giving our clients confidence in the quality and accuracy of the information provided by our device.
SOUP maintenance
We ensure that SOUP elements are rigorously monitored and evaluated to support the safety, reliability, and effectiveness of the medical device in compliance with IEC 62304. The procedure applies to all SOUP components integrated into the medical device, covering their evaluation, upgrade, patching, bug fixing, and the management of their obsolescence.
First, an up-to-date inventory of all SOUPs used in the medical device is maintained. This inventory includes details such as the version, maintainer(s), license, code repository (if applicable), and the specific functionality or role of each SOUP element within the medical device system. All this information can be found in the SOUP records stored as part of the DHF.
A continuous monitoring strategy is established to identify any new updates, patches, vulnerabilities, or obsolescence notifications related to the SOUP elements. This involves staying informed through vendor release notes, changelogs, security bulletins, and relevant user communities or forums. When updates or patches are released, or vulnerabilities are identified, an impact analysis is conducted to evaluate the necessity and implications of integrating these changes. This analysis considers how updates will affect the device's safety, effectiveness, and regulatory compliance.
The Security vulnerabilities of SOUPs and software tools section of the document R-TF-007-001 Post-Market Surveillance (PMS) Plan specifies how often all public resources related to the maintenance of each SOUP are reviewed.
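The sketch below illustrates one way such monitoring could be automated: comparing installed package versions against the approved SOUP inventory. The inventory dictionary is an illustrative stand-in for the SOUP records in the DHF, with versions taken from the verification report example above.

```python
# Illustrative audit of installed SOUP versions against the approved
# inventory. The dictionary stands in for the SOUP records in the DHF.
from importlib.metadata import PackageNotFoundError, version

APPROVED_SOUP = {
    "numpy": "1.26.2",   # versions taken from the verification report example
    "pillow": "9.4.0",
}


def audit_soup_inventory() -> list[str]:
    """Return the discrepancies between installed and approved SOUP versions."""
    discrepancies = []
    for package, approved in APPROVED_SOUP.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            discrepancies.append(f"{package}: not installed (approved {approved})")
            continue
        if installed != approved:
            discrepancies.append(f"{package}: installed {installed}, approved {approved}")
    return discrepancies


for line in audit_soup_inventory():
    print(line)
```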
Before implementing any changes to SOUPs, a detailed plan is developed outlining the steps for integration, resource requirements, and testing strategies. Testing is a critical step, performed in a controlled environment to ensure that the update does not negatively affect the device's functionality, safety, or performance. Following successful testing, the software documentation, including the SOUP inventory, risk management files, and system specifications, is updated to reflect the changes made.
In cases where SOUP elements become obsolete or are nearing the end of their support lifecycle, obsolescence planning is initiated to identify suitable alternatives or replacements. This includes evaluating potential impacts and implementing replacement components with minimal disruption, following the same stringent evaluation and testing procedures.
Throughout this process, comprehensive documentation and records of all SOUP maintenance activities are maintained. This includes monitoring reports, impact analyses, decision rationales, testing results, and records of implementation actions. Regular reviews of the SOUP maintenance process are conducted to identify improvements, ensuring the process remains effective and aligned with best practices and regulatory requirements.
AI/ML model re-training
Deep learning models exhibit a dynamic nature, making them conducive to re-training, which can substantially enhance their performance over time. To ensure optimal functionality, we engage in continuous monitoring of our models, employing predefined metrics tailored to each model's objectives for effectiveness assessment. This monitoring process incorporates feedback from both users and expert healthcare professionals, as well as insights gleaned from post-market surveillance and internal evaluations.
Identifying specific triggers to initiate retraining is crucial for maintaining the effectiveness of a model. These triggers are primarily based on the following factors:
- Accumulation or generation of new data:
- Updates due to the addition of new image atlases or collaborations with image providers and healthcare institutions may yield new data, thereby potentially improving model performance.
- Our image recognition model is trained only on ICD categories with a sufficient sample size (more than N images); the remaining categories are excluded from the training and validation pipeline. After accumulating new image data, either from updated image atlases or collaborations with image providers or healthcare institutions, some of the previously discarded categories may have enough samples to be included. In such cases, the new candidate categories are reviewed to decide whether to add them to the training/validation/testing pipeline.
- Revision of existing data: as the image recognition dataset is built from different sources, it contains a large collection of categories that are manually reviewed to obtain the final taxonomy of ICD-11 categories to be used to train the model. This taxonomy is periodically reviewed to create the most accurate and representative structure possible. Each revision of the taxonomy leads to a new training and validation round.
- Availability of new data for specific tasks: the models in charge of the quantification of intensity, count, and extent of clinical signs are re-trained specifically when new image data is available. New data need not only include images but also their corresponding labels required for each specific task. This also applies to image quality assessment and image domain check models.
- Integration of state-of-the-art models and algorithms: continual exploration of cutting-edge models and algorithms allows us to pursue performance enhancements.
- User feedback and performance monitoring:
- User feedback indicating performance issues prompts a reassessment of data or dataset revision to improve model performance.
- Regular monitoring of model performance based on updated label data ensures compliance with minimum threshold requirements for the metrics of each task.
Once the decision to re-train a model is made, the following steps are undertaken:
- Data collection: if necessary, acquire new data aligned with the model's objectives. This can involve either annotating new images with the assistance of expert healthcare professionals or sourcing relevant datasets from available atlases. Both the accumulation and generation of new data are conducted in close collaboration with expert healthcare professionals.
- Data preprocessing: the dataset is cleaned using our image quality assessment and domain check models, by fixing a minimum threshold for acceptance/rejection of new images. Variability analysis is conducted to ensure annotation reliability, and images failing quality or domain criteria are excluded.
- Model training: the model is retrained using the updated dataset and selected algorithms. Training strategies mirror previous approaches, with adaptations made to accommodate new datasets or architectures. Strategies can be adjusted in response to performance degradation, involving modifications such as re-balancing compensation weights for imbalanced items, refining loss functions, adjusting data sampling techniques, optimizing algorithms, refining learning rate strategies, or implementing other techniques aimed at boosting model performance. Depending on whether the model architecture has changed, two scenarios may unfold:
- Same Model Architecture: Retraining begins from previously trained weights rather than random initialization.
- New Model Architecture: A new model is trained from scratch, employing best practices for initializing the initial model weights.
- Verification: ensures that the retrained model meets specified requirements and functions correctly within its intended environment. This is achieved through careful partitioning of the dataset into training, validation, and testing sets. The training dataset is utilized to explore optimal model weights, while the validation set aids in assessing model convergence and selecting the best weights derived from the model search process. Finally, the test dataset is employed to validate that the final model satisfies the minimum thresholds required for the validation metrics defined for each task (a minimal sketch of this gate follows the list).
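As an illustration of this verification gate, the sketch below checks a candidate model's kappa score on the held-out test split against a minimum threshold; the function name and threshold are illustrative (the 0.65 figure mirrors the integration test example above).

```python
# Illustrative verification gate for a retrained model: the candidate is
# accepted only if its kappa score on the held-out test split meets the
# minimum threshold. Names and threshold are illustrative.
from sklearn.metrics import cohen_kappa_score

MIN_KAPPA = 0.65  # mirrors the threshold in the integration test example


def verify_candidate_model(y_true, y_pred) -> bool:
    """Gate release of the retrained model on the test-set kappa score."""
    kappa = cohen_kappa_score(y_true, y_pred)
    print(f"kappa={kappa:.3f} (minimum {MIN_KAPPA})")
    return kappa >= MIN_KAPPA
```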
Our decision not to retrain the models automatically, but rather manually after collecting a significant number of new images or reviewing existing data, is deliberate and rooted in several strategic considerations:
- Firstly, manual retraining allows us to ensure that the new data being introduced for retraining is of the highest quality and relevance. With this approach, we are able to perform a more rigorous quality check, ensuring that the new data doesn't inadvertently introduce biases or errors. Automatic retraining, although efficient, could potentially incorporate data that might not be ideal for maintaining the high standards of precision required in medical diagnostics.
- Secondly, manual retraining provides an opportunity for in-depth analysis and understanding of new data trends. This understanding is crucial for our team, as it allows us to make informed adjustments to the model, ensuring that it not only learns from the new data but also aligns with the latest clinical insights.
- Lastly, manual retraining offers a layer of control and oversight. It ensures that any modifications to the model are deliberate, transparent, and aligned with regulatory requirements, maintaining the trust and reliability that healthcare practitioners and organizations place in our device.
Analysis of problems and changes
Monitoring feedback
Feedback control
Documenting and evaluating feedback
Evaluation of problem report effects on safety
Use of the software problem resolution process
Analysis and approval of change requests
Communication to users and regulators
Changes implementation
Re-release of the modified software system
Once we have performed all the required tests following the GP-012 and the software is considered verified and validated, we release a new version of the product into the market.
When required, we will inform the users and regulatory organizations about any issue in the released software and the consequences of continued unchanged use, as well as the nature of any available change and how to obtain and install it, according to the procedures GP-004 Vigilance system and GP-012 Design, redesign and development.
Software risk management process
We describe in the GP-013 Risk management document the procedure to establish and control the risk management of a medical device, from its conception through design, development, placing on the market and the post-market phase (along its life cycle), to guarantee the protection of the health of patients and the elimination or minimization of risks as far as possible.
The identification of known and foreseeable hazards and hazardous situations is based on the intended use, the reasonably foreseeable misuse and the characteristics related to the safety of the medical device software.
All the software elements that can contribute to a hazardous situation are identified in the risk analysis performed, which is documented in the corresponding R-TF-013-002 Risk management record of the Technical File; this record also includes the foreseeable sequence of events, harm, risk, parts and people affected, and the potential cause or mechanism of failure.
After the risk analysis, we perform an initial risk evaluation for each hazard and hazardous situation, computed as the product of the probability of occurrence of the harm (P) and the severity of the harm (S). If this value, called the RPN (Risk Priority Number), is greater than the limit established by the manufacturer (risk acceptability), risk control measures must be applied to reduce the risk until it becomes acceptable. These measures are also described in the corresponding R-TF-013-002 Risk management record of the Technical File.
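As a minimal illustration of this evaluation, the sketch below computes the RPN and checks it against an acceptability limit; the ordinal scales and the limit value are hypothetical placeholders, since the actual values are defined in GP-013 and R-TF-013-002.

```python
# Hypothetical acceptability limit; the real limit is set in GP-013.
RPN_ACCEPTABILITY_LIMIT = 8

def risk_priority_number(probability: int, severity: int) -> int:
    """RPN is the product of the probability of occurrence of the harm (P)
    and the severity of the harm (S)."""
    return probability * severity

def requires_risk_control(probability: int, severity: int) -> bool:
    """Risk control measures are required when the RPN exceeds the limit."""
    return risk_priority_number(probability, severity) > RPN_ACCEPTABILITY_LIMIT

# Example: P = 3, S = 4 gives RPN = 12 > 8, so control measures are needed.
assert requires_risk_control(3, 4)
```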
The effectiveness of each risk control measure is verified in the mentioned record, along with the possibility of risks arising from risk control measures.
Analysis of software contributing to hazardous situations
Identification of the software elements that could contribute to a hazardous situation
Identification of potential causes contributing to a hazardous situation
Evaluation of SOUP anomaly published lists
Documentation of potential causes
Documentation of event sequences
Risk control measures
Definition of risk control measures
Risk control measures implemented in the software
Verification of risk control measures
Verification of the risk control measures
Documentation of any new sequence of events
Documentation of traceability
Risk management of software changes
As previously mentioned, when the software changes to a new version, we revise the technical documentation, including the risk analysis, to confirm the software's benefit/risk ratio, to monitor known hazards, trends or side-effects, and to identify emergent hazards.
A new R-TF-013-002 Risk management record is then created, indicating the new hazards and/or the re-evaluation of known hazards under the new conditions or of the existing risk control measures, such as additional potential causes contributing to the hazardous situation or additional risk control measures to be implemented.
Analysis of changes to the medical device software with respect to safety
Analysis of the impact of software changes on existing risk control measures
Performing risk management activities based on analyses
Cybersecurity threat mitigation
During the development of the medical device, we have undertaken extensive measures to ensure compliance with regulatory standards, especially in terms of cybersecurity. However, it is important to clarify the rationale behind the exclusion of certain AI-specific cybersecurity threats, namely data poisoning, model stealing, and adversarial attacks, from our documentation.
Concerning data poisoning, we recognize that this threat involves the intentional manipulation of training data, which can lead to erroneous or biased outcomes from the AI system. However, in the context of our device's development and operational environment, we have implemented robust data governance and quality control measures. These measures ensure that the training data is sourced from reliable and verifiable medical datasets, and undergoes stringent validation processes. Furthermore, our device's design minimizes the reliance on continuous learning post-deployment, thereby significantly reducing the risk associated with data poisoning.
The data governance and quality control measures we have adopted merit a closer description. Our data governance framework is structured around several core principles designed to ensure the integrity and security of the training data: data provenance, quality assurance, access control, and continuous monitoring.
- Data Provenance: We maintain a record of the sources from which our data is obtained, including detailed metadata about the origin, context, and collection methodologies of the datasets. This ensures that all training data can be traced back to reputable and reliable medical institutions or studies.
- Quality Assurance: Before any dataset is integrated into our training processes, it undergoes a rigorous quality assessment. This includes checks for accuracy, consistency, completeness, and bias. Our team of data scientists and domain experts uses a combination of automated tools and manual review to validate datasets according to these criteria. Discrepancies or anomalies detected during this stage are investigated and resolved before the data is considered suitable for model training.
- Access Control: Access to the training data is regulated through an access control system. This system ensures that only authorized personnel with a defined need-to-know basis can access sensitive data. We employ security measures such as multi-factor authentication, encryption, and regular access audits, to prevent unauthorized data access and mitigate the risk of internal data manipulation.
- Continuous Monitoring: Even after the initial training phase, we continuously monitor the performance and behavior of the machine learning models to detect any signs of bias or drift that could suggest the presence of poisoned data. This validation process enables us to promptly identify and address any issues that may arise during the lifecycle of the device (a sketch of one such check follows this list).
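As an illustration of the kind of check this monitoring can involve, below is a minimal sketch of a population stability index (PSI) computation over model output scores; the binning and the 0.2 rule of thumb are hypothetical and do not reflect the device's actual monitoring configuration.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare a baseline score distribution against a recent one.

    A large PSI suggests that the distribution of model outputs has
    drifted, prompting a manual review of the incoming data.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    r_counts, _ = np.histogram(recent, bins=edges)
    # Normalize to proportions; a small floor avoids division by zero.
    b_frac = np.clip(b_counts / b_counts.sum(), 1e-6, None)
    r_frac = np.clip(r_counts / r_counts.sum(), 1e-6, None)
    return float(np.sum((r_frac - b_frac) * np.log(r_frac / b_frac)))

# Hypothetical rule of thumb: a PSI above 0.2 triggers an investigation.
```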
Regarding model stealing, this threat pertains to the unauthorized extraction or replication of the AI models, which could potentially lead to intellectual property theft or unauthorized use. Our models are securely stored in AWS S3, a cloud storage service known for its robust security measures. AWS provides comprehensive security features that effectively mitigate the risk of model stealing, including advanced encryption and access control mechanisms. Given that the responsibility of safeguarding stored data against such threats primarily lies with AWS, as per our service agreement, we have focused our cybersecurity documentation on aspects directly under our control.
Lastly, adversarial attacks involve manipulating input data to deceive the AI system into making incorrect decisions or predictions. Our AI models are tailored for specific, well-defined medical tasks, operating within controlled environments where the likelihood of encountering adversarial inputs is significantly reduced. Additionally, the models are designed with inherent robustness to variations in input data, further diminishing the effectiveness of potential adversarial attacks.
Software configuration management process
The JD-017 - Machine learning ops role (pertaining to the product development team) is primarily responsible for performing software configuration management activities within our organisation. This position ensures that the development environment is properly configured and maintained for all stages of the software lifecycle.
We use Docker as the containerization technology for microservices, to maintain consistency across different environments. This approach simplifies configuration management and reduces the risk of environment-specific issues.
Identification of the configuration
Establishment of means to identify the configuration items
Defining schemes for the unique identification of configuration items (CIs) for our medical device is crucial for ensuring effective software configuration management, as required by the standard IEC 62304, Section 8.1. The identifiers we have defined for each type of configuration item are presented below:
- Software items of the device architecture
  - Identifier: `SI-{ComponentType}-{ComponentName}-{Version}`
  - Example: `SI-Module-MedicalImage-1.0.0` refers to the initial version of the medical image software unit.
  - Description: `{ComponentType}` categorizes the component into types like Library, Module, Class, Function or Service, `{ComponentName}` is a unique identifier for the component within its type category, and `{Version}` represents its current version.
- Development and software tools
  - Identifier: `ST-{ToolType}-{ToolName}-{Version}`
  - Example: `ST-Virtualization-DockerEngine-25.0.0` indicates Docker version 25.0.0 used as the containerization tool.
  - Description: tools used in development, mentioning the tool type (VCS, Virtualization, Interpreter, Code Editor, Code Interpreter, etc.), the tool name (Docker, Git, Python, etc.) and the version.
- Configuration files
  - Identifier: `CONF-{ServiceOrTool}-{Env}-{Version}`
  - Example: `CONF-AGPPGA-Prod-2.1` refers to version 2.1 of the production environment configuration for the AGPPGA service.
  - Description: configuration files for each microservice or software tool. `{ServiceOrTool}` identifies the specific microservice or tool name, `{Env}` specifies the environment (Dev, Test, Prod), and `{Version}` denotes the configuration version.
- AI models
  - Identifier: `AIM-{ModelName}-{TaskType}-{Version}`
  - Example: `AIM-ASCORAD-Segmentation-1.0.3` refers to version 1.0.3 of the ASCORAD segmentation model.
  - Description: each computer vision model used for predictions. `{TaskType}` represents the type of task performed by the AI model (segmentation, classification, detection, forecasting, etc.), while `{ModelName}` is the name we have assigned to a particular model.
- Documentation
  - Identifier: `DOC-{DocType}-{Version}`
  - Example: `DOC-IFU-2.0` refers to version 2.0 of the Instructions For Use.
  - Description: documents related to development and usage. `{DocType}` could be Requirements, Design, IFU, etc.
- Databases
  - Identifier: `DB-{DatabaseType}-{DatabaseName}-{Version}`
  - Example: `DB-MongoDB-APIAuth-3.1` indicates version 3.1 of the database used for user authentication and authorization in the web API.
  - Description: information on databases. `{DatabaseType}` specifies the type of database, such as a relational database (RDBMS), NoSQL database, or a specific database management system (DBMS) like MongoDB, MySQL, etc., `{DatabaseName}` identifies the particular database (e.g., APIAuth, UsageData), and `{Version}` indicates its version (e.g., 3.1, 4.0.1).
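Because these conventions are purely syntactic, conformity of an identifier can be checked mechanically. Below is a minimal sketch of such a check using Python's `re` module; the patterns encode our reading of the schemes above and are illustrative rather than a validated QMS tool.

```python
import re

# One pattern per configuration item type, following the schemes above.
CI_PATTERNS = {
    "software_item": r"^SI-[A-Za-z]+-[A-Za-z0-9]+-\d+\.\d+\.\d+$",
    "software_tool": r"^ST-[A-Za-z]+-[A-Za-z0-9]+-[\d.]+$",
    "configuration": r"^CONF-[A-Za-z0-9]+-(Dev|Test|Prod)-[\d.]+$",
    "ai_model": r"^AIM-[A-Za-z0-9]+-[A-Za-z]+-\d+\.\d+\.\d+$",
    "document": r"^DOC-[A-Za-z0-9]+-[\d.]+$",
    "database": r"^DB-[A-Za-z0-9_]+-[A-Za-z0-9]+-[\d.]+$",
}

def is_valid_ci(identifier: str) -> bool:
    """Return True if the identifier matches any of the CI schemes."""
    return any(re.match(p, identifier) for p in CI_PATTERNS.values())

assert is_valid_ci("SI-Module-MedicalImage-1.0.0")
assert is_valid_ci("CONF-AGPPGA-Prod-2.1")
assert not is_valid_ci("SI-MedicalImage-1.0.0")  # missing component type
```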
All elements, including SOUP, device-specific software components, and microservices configurations, are documented in detail with their version history managed through Bitbucket. Changes to these items follow a documented change management process, including testing and peer review where applicable.
We validate all the tools we use in the software development process before using them to ensure they meet required functionality and reliability standards.
Additionally, any third-party tools undergo a supplier evaluation process where the supplier is assessed for compliance with quality, regulatory and security standards.
Any updates to the tools or changes in the supplier's terms are evaluated to ensure they continue to meet the required standards.
For SOUP elements, we perform regular security and compliance checks to ensure they do not introduce vulnerabilities into the system. We document these checks as part of our configuration management process.
Identification of SOUP
In addition to the other configuration items, it is important to address the inclusion and management of SOUP elements within our software system. Due to the microservices architecture of the device, which relies on multiple versions of the same SOUP elements across different parts of the system, documenting these elements using our standard identifier system presents challenges. This complexity arises from the need to tailor specific SOUP versions to the unique requirements and dependencies of each microservice.
Therefore, to maintain clarity and ensure thorough documentation, all SOUP elements that are integral to the system configuration are catalogued and described in the SOUP records stored in the DHF. The specific version of SOUP used by each microservice of the device is documented in the Related software items section of each SOUP record. This approach allows us to manage the diversity of SOUP versions and ensure that each microservice is associated with the appropriate version of its dependent SOUP elements.
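As a purely illustrative sketch of this approach, a SOUP record could be represented as a structure mapping each dependent microservice to the SOUP version it uses; the field names and values below are hypothetical and do not reproduce the actual SOUP record template in the DHF.

```python
from dataclasses import dataclass, field

@dataclass
class SoupRecord:
    """Hypothetical, simplified representation of a SOUP record."""
    name: str       # the off-the-shelf software item
    supplier: str   # who provides it
    # Related software items: microservice identifier -> SOUP version used.
    related_software_items: dict[str, str] = field(default_factory=dict)

# Example: two microservices pinned to different versions of the same SOUP.
record = SoupRecord(
    name="ExampleImagingLib",   # hypothetical SOUP element
    supplier="Example Vendor",
    related_software_items={
        "SI-Service-ASCORADSegmenter-1.1.2": "4.8.0",
        "SI-Service-APASISegmenter-2.0.2": "4.2.1",
    },
)
```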
Identification of the documentation of the system configuration
This part of the document provides a detailed record of the configuration items and their versions that comprise the system configuration of the device. Each item has been assigned a unique identifier according to the specified conventions, reflecting its role within the system, its version, and its type where applicable. This structured approach facilitates traceability and version control.
Although the software units are also configuration items of the software system, we have decided to identify only the microservices and internal libraries (many of them corresponding to code repositories) of the medical device. The software units have already been listed and described in the section Subdivision of the software architecture into software units, and all units of the same code repository share a common version number, so knowing the repository to which a software unit belongs and the version of that repository is sufficient to identify it.
- Microservices and libraries of the device architecture
- SI-Service-WebAPIGateway-2.0.0
- SI-Service-ReporBuilder-2.1.0
- SI-Service-QualityValidator-1.0.1
- SI-Service-DomainValidator-1.1.1
- SI-Service-ICDMulticlassClassifier-3.0.0
- SI-Service-ICDBinaryClassifier-1.0.0
- SI-Service-BinaryReferrer-1.2.0
- SI-Service-AGPPGA-1.0.0
- SI-Service-AIHS4-1.0.0
- SI-Service-ALADIN-1.0.7
- SI-Service-APASI-1.0.0
- SI-Service-APASIClassifier-1.3.0
- SI-Service-APASISegmenter-2.0.2
- SI-Service-APULSI-2.0.0
- SI-Service-APULSIStageAdvancedClassifier-1.0.4
- SI-Service-APULSIStageAdvancedSegmenter-1.2.0
- SI-Service-APULSIStageOneClassifier-1.0.0
- SI-Service-APULSIStageOneSegmenter-1.0.1
- SI-Service-ASALT-1.5.0
- SI-Service-ASCORAD-1.0.0
- SI-Service-ASCORADClassifier-1.0.0
- SI-Service-ASCORADSegmenter-1.1.2
- SI-Service-AUAS-1.0.0
- SI-Service-NSIL-1.0.1
- SI-Service-HeadDetector-1.2.4
- SI-Service-ScoradForecaster-2.4.0
- SI-Service-ClassicScoringSystems-1.6.0
- SI-Library-Utils-2.2.0
- SI-Library-DataSchemas-0.60.1
- Development and deployment software tools
- ST-VCS-Git-2.42.0
- ST-Driver-NVIDIA-525.147.05
- ST-Virtualization-NVIDIAContainerToolkit-1.15.0
- ST-Virtualization-DockerEngine-24.0.7
- ST-CodeEditor-VSCode-1.89
- ST-Testing-Pytest-8.2.0
- ST-Testing-Coverage-7.5.1
- ST-StaticAnalysis-Ruff-0.4.4
- ST-Deployment-DockerCompose-2.21.0
- ST-Deployment-DeviceOperator-2.5.2
- Configuration files
- CONF-QualityValidator-Prod-1.0.1
- CONF-DomainValidator-Prod-1.1.1
- CONF-ICDMulticlassClassifier-Prod-3.0.0
- CONF-ICDMulticlassClassifierPathologyAttributes-Prod-3.0.0
- CONF-ICDBinaryClassifier-Prod-1.0.0
- CONF-BinaryReferrer-Prod-1.2.0
- CONF-AGPPGA-Prod-1.0.0
- CONF-AIHS4-Prod-1.0.0
- CONF-ALADIN-Prod-1.0.7
- CONF-APASIClassifier-Prod-1.3.0
- CONF-APASISegmenter-Prod-2.0.2
- CONF-APULSIStageAdvancedClassifier-Prod-1.0.4
- CONF-APULSIStageAdvancedSegmenter-Prod-1.2.0
- CONF-APULSIStageOneClassifier-Prod-1.0.0
- CONF-APULSIStageOneSegmenter-Prod-1.0.1
- CONF-ASALT-Prod-1.5.0
- CONF-ASCORADClassifier-Prod-1.0.0
- CONF-ASCORADSegmenter-Prod-1.1.2
- CONF-AUAS-Prod-1.0.0
- CONF-HeadDetector-Prod-1.2.4
- CONF-ScoradForecaster-Prod-2.4.0
- CONF-Deployment-Base-2.0.0
- CONF-Deployment-Dev-2.0.0
- CONF-Deployment-Prod-2.0.0
- AI models
- AIM-QualityValidator-Classifier-5.0.0
- AIM-DomainValidator-Classifier-4.1.0
- AIM-ICDMulticlassClassifier-Classifier-24.0.0
- AIM-ICDBinaryClassifier-Classifier-1.0.0
- AIM-BinaryReferrer-Classifier-1.2.0
- AIM-AGPPGA-Classifier-2.0.0
- AIM-AIHS4-Detector-1.0.0
- AIM-ALADIN-Detector-3.0.0
- AIM-APASI-Classifier-1.0.0
- AIM-APASI-Segmenter-1.0.2
- AIM-APULSIStageAdvanced-Classifier-2.0.0
- AIM-APULSIStageAdvanced-Segmenter-2.0.0
- AIM-APULSIStageOne-Classifier-1.0.0
- AIM-APULSIStageOne-Segmenter-1.0.0
- AIM-ASALT-Segmenter-2.1.0
- AIM-ASCORAD-Classifier-2.0.0
- AIM-ASCORAD-Segmenter-2.0.3
- AIM-AUAS-Detector-2.0.0
- AIM-HeadDetector-Detector-2.1.2
- AIM-ScoradForecaster-Forecaster-1.5.0
- Documentation
- DOC-DeviceIFU-2.0
- DOC-SeverityForecastGuide-1.1
- Databases
- DB-AWS_RDS-APIAuthData-2.0
- DB-AWS_DocumentDB-APIUsageData-2.0
- Storage
- STG-S3-MedicalDeviceBucket
Change control
Changes in software configuration are managed according to the previous section Software maintenance process and the procedure GP-023 Change control management. There we explain the approval of change requests, the implementation of changes and the verification of changes. Furthermore, we explain how we use our DHF to provide the means for the traceability of changes.
Git and the documentation management tool used for the activities (T-012-002 Activities) of the DHF offer full traceability of software changes and versions, registering the relationships between change requests, problem reports and change request approvals. Additionally, these tools allow us to keep retrievable records of the history of the controlled configuration items, including the system configuration.
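As an illustration of how this traceability can be exercised, the sketch below extracts change-request references from commit messages in the git history; the `CR-<id>` tagging convention shown here is hypothetical, used only to make the idea concrete.

```python
import re
import subprocess

# Hypothetical convention: commit subjects reference change requests as CR-<id>.
CR_TAG = re.compile(r"\bCR-\d+\b")

def change_requests_in_history() -> set[str]:
    """Collect the change-request IDs referenced in the git history."""
    log = subprocess.run(
        ["git", "log", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(CR_TAG.findall(log))
```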
Approval of change requests
Implementation of changes
Verification of changes
Provision of means for traceability of changes
Documentation regarding the configuration
Software problem resolution process
All the changes required to solve the detected problems are performed according to the GP-023 Change control management procedure, recorded within the DHF in the development environment, and verified and validated before the new version of the software is released.
As explained in the GP-014 Feedback and complaints procedure, all general communications received from clients or other commercial operators are registered in the ticket management tool described therein.
Any explicit customer communication that is considered a software problem or a non-conformity is documented according to the GP-006 Non-conformities. Corrective and preventive actions procedure, where it is analyzed, evaluated and approved in order to identify the root cause of the problem. We then plan the actions taken, or to be taken, to solve it.
The possible influence of the problem report on software security is also evaluated and tested during the verification and validation steps (i.e., test execution) of the corresponding modification's development.
All problems detected affecting the device, including bugs, are considered non-conformities and managed accordingly; they are included in the DHF as new requirements and managed according to the GP-012 Design, redesign and development procedure. In this way, we use an established process to implement modifications (GP-023 Change control management) and re-release the modified software system as a new version of the same device.
Furthermore, following GP-014 Feedback and complaints and GP-006 Non-conformities. Corrective and preventive actions, we manage the writing of problem reports, the investigation of problems and the notification of the relevant stakeholders, including the reporting of incidents to the national competent authorities.
Regarding problem resolution, in GP-006 Non-conformities. Corrective and preventive actions we explain how we verify software problem resolution and describe the content of the test records. Following GP-007 Post-market surveillance, we analyse and verify software problem resolution.
Additionally, we periodically review the performance of the QMS according to the GP-020 QMS Data analysis procedure, including the appearance of trends in software problem reports.
Preparation of problem reports
Investigating the problem
Notification to the relevant parties
Use of the change control process
Record keeping
Analysis of problems for trends
Verification of software problem resolution
Content of the test documentation
Annex I: Design control matrix
User requirements control matrix
Requirement | TEST 1 | TEST 2 | TEST 3 | TEST 4 | TEST 7 | TEST 8 | TEST 9 | TEST 10 | TEST 11 | TEST 12 | TEST 13 |
---|---|---|---|---|---|---|---|---|---|---|---|
UR-1.1 | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-2.1 | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-3.1 | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-4.1 | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-5.1 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
UR-6.1 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
UR-7.1 | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-8.1 | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-9.1 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE |
UR-10.1 | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
UR-11.1 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
UR-12.1 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE |
SRS control matrix
Requirement | TEST 1 | TEST 2 | TEST 3 | TEST 4 | TEST 7 | TEST 8 | TEST 9 | TEST 10 | TEST 11 | TEST 12 | TEST 13 |
---|---|---|---|---|---|---|---|---|---|---|---|
SRS-1.2 | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-2.2 | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-3.2 | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-4.2 | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-5.2 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
SRS-6.2 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
SRS-7.2 | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-8.2 | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-9.2 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE |
SRS-10.2 | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
SRS-11.2 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
SRS-12.2 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE |
Design requirements control matrix
Requirement | TEST 1 | TEST 2 | TEST 3 | TEST 4 | TEST 7 | TEST 8 | TEST 9 | TEST 10 | TEST 11 | TEST 12 | TEST 13 |
---|---|---|---|---|---|---|---|---|---|---|---|
DR-1.3 | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-2.3 | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-3.3 | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-4.3 | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-5.3 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
DR-6.3 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
DR-7.3 | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-8.3 | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-9.3 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE |
DR-10.3 | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
DR-11.3 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
DR-12.3 | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE |
Regulatory requirements control matrix
Requirement | TEST_011 | TEST_012 | R-TF-008-001 | R-TF-012-006 | R-TF-013-001 | R-TF-013-002 | R-TF-013-003 | Legit.Health Plus IFU | R-TF-001-006 | R-TF-012-007 | R-TF-012-008 | GP-050 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
RR-5.4 | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE |
RR-5.5 | FALSE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE |
RR-6.4 | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE |
RR-7.4 | FALSE | FALSE | FALSE | FALSE | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE |
RR-7.5 | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
RR-12.4 | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE |
Signature meaning
The signatures for the approval process of this document can be found in the verified commits at the repository for the QMS. As a reference, the team members who are expected to participate in this document and their roles in the approval process, as defined in Annex I Responsibility Matrix of the GP-001, are:
- Author: Team members involved
- Reviewer: JD-003, JD-004
- Approver: JD-001