GP-024 Cybersecurity
Procedure flowchart
Purpose
The purpose of this document is to establish a comprehensive set of requirements and guidelines for ensuring the cybersecurity and transparency of all our medical devices, which are software devices that are primarily powered by AI/ML models.
This procedure aims to safeguard the integrity, confidentiality, and availability of data processed by these devices, while also ensuring that the devices function as intended with transparency and are resilient against cybersecurity threats. This is crucial for maintaining patient safety, complying with regulatory standards, and upholding our commitment to delivering secure and reliable medical solutions.
Scope
This document applies to the design and development process of all our medical devices, which are software devices that are primarily powered by AI/ML models. It encompasses all aspects of cybersecurity as it pertains to AI/ML models and auxiliary software items from the initial design phase to the deployment and lifecycle management of these models. This includes, but is not limited to, risk assessment, mitigation strategies, data protection measures, and incident response plans specific to AI/ML models employed in our medical devices.
If, in the future, one of our devices is not primarily powered by AI/ML, we would create a different cybersecurity procedure. Likewise, if in the future one of our medical devices were not appropriately covered by this procedure, we would create a new one specific to that device.
Definitions
- AI/ML Model: Artificial Intelligence/Machine Learning model used for processing and analyzing data within our medical devices.
- Cybersecurity: Measures and processes employed to protect AI/ML models from unauthorized access, use, disclosure, disruption, modification, or destruction. Sometimes called simply "security".
- Integrity: The assurance that data is not altered in an unauthorized manner.
- Confidentiality: Ensuring that sensitive information is not disclosed to unauthorized individuals or entities.
- Availability: The assurance that AI/ML models and related data are accessible and usable as needed.
- Vulnerability: A weakness in cybersecurity measures that could be exploited by a threat.
- Threat: A potential cause of an unwanted incident, which may result in harm to a system or organization.
- Risk Management: The process of identifying, assessing, and mitigating risks. In the context of this procedure, "Risk Management" will refer to risks associated with cybersecurity.
Responsibilities
JD-001
- To approve the procedure.
JD-005
- To ensure that the activities are carried out according to the methodology established in the present procedure.
JD-003
- To oversee and coordinate the methodology outlined in this procedure.
JD-004
- To ensure that the activities are performed following this procedure, and that all the records required are properly generated, reviewed, approved and archived accordingly.
Inputs and references
European Union
- IMDRF Principles and Practices for Medical Device Cybersecurity (IMDRF/CYBER WG/N60FINAL:2020).
- MDCG 2019-16 - Guidance on Cybersecurity for medical devices (MDCG 2019-16).
- MDR Annex I, GSPR 17
- Other user, regulatory, technical, security and other requirements related to cyber security.
United States
- FDA Final Guidance on Cybersecurity for Medical Devices: Content of Premarket Submissions for Management of Cybersecurity (October 2022)
- FDA Draft Guidance on Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions (April 2022)
- FDA Postmarket Management of Cybersecurity in Medical Devices (December 2016)
- 21 CFR Part 820 Quality System Regulation (QSR)
- 21 CFR Part 11 Electronic Records and Signatures
- NIST Cybersecurity Framework (referenced by the FDA)
- FDASIA Section 618: Medical Device Security
- General Wellness: Policy for Low-Risk Devices (January 2019)
Outputs
- R-TF-024-001 Cybersecurity Management Plan
- R-TF-024-002 Threat modeling report
- R-TF-024-003 Cybersecurity risk management record
- R-TF-024-004 Software bill of materials
- R-TF-024-005 Static code analysis report
- R-TF-024-006 Penetration test report
- R-TF-024-007 Cybersecurity post-market record
- R-TF-012-006 Lifecycle plan and report
Development
Alignment with Relevant Legislation and Standards
Ensuring Compliance with Regulations and Standards
It's crucial to align the cybersecurity measures with relevant EU legislation, international guidelines, and best practices. This ensures compliance and provides a framework for implementing effective cybersecurity strategies.
- Aligning with EU Legislation and Guidelines: Ensure the cybersecurity strategy is in compliance with EU regulations and follows international guidelines.
- Adhering to Recognized Standards: Follow the standards and best practices outlined in MDCG 2019-16, IMDRF/CYBER WG/N60FINAL:2020, and other relevant standards.
Initial Risk Assessment and Management
This step heavily overlaps with GP-013 Risk management, and thus the outcome of the Identification of Cybersecurity Risks is also recorded in the same records generated during the Risk management process itself.
Implementing a Detailed Risk Management Process
Effective risk management involves continuously identifying, analyzing, and managing potential cybersecurity risks throughout the product's life cycle. This ensures that the security measures evolve and adapt to new threats and vulnerabilities as they arise.
- Dynamic Risk Identification and Management: Regularly perform risk identification and analysis, updating the risk management strategies to reflect new insights.
- Lifecycle Risk Management Strategy: Establish a risk management process that spans the entire lifecycle of the product, addressing both current and future risks.
Identification of Cybersecurity Risks
Before developing software components for our medical devices, it is crucial to identify potential cybersecurity risks. This step involves a comprehensive evaluation of various threats that could compromise the security and functionality of the device, including the AI/ML models and their auxiliary software components, such as orchestrators, HTTP APIs or report builders. These risks might stem from unauthorized access to sensitive data through the APIs, tampering with the model's output through the report builder, or exploiting vulnerabilities within the AI/ML model itself.
It is essential to consider a range of attack vectors, including but not limited to network-related threats, physical access to the device, and potential security weaknesses introduced through third-party integrations.
- Conduct a Thorough Risk Assessment: Evaluate the software components, including the AI/ML models, for potential vulnerabilities and threats specific to medical devices.
- Identify Potential Cybersecurity Threats: Look for vulnerabilities like unauthorized data access, data tampering, and model manipulation.
- Consider Various Attack Vectors: Analyze network threats, physical access threats, and vulnerabilities from third-party integrations.
Balancing Cybersecurity with Safety Risk Management
A harmonized approach to managing both cybersecurity risks and safety risks is critical. This ensures comprehensive protection for both the device and its users, addressing the complex interplay between cybersecurity and device safety.
- Integrating Cybersecurity with Safety Risk Management: Develop a balanced approach that addresses both cybersecurity and safety risks.
- Harmonizing Risk Management Processes: Align the processes for managing cybersecurity risks with the overall safety risk management process established in GP-013 Risk management.
Evaluating Risk Impacts
After identifying potential risks, the next step is to evaluate the impact of these risks on critical aspects such as patient safety, data integrity, the reliability of the devices, and overall device functionality. This evaluation helps in prioritizing the risks, enabling a targeted approach to addressing the most significant threats first. Understanding the severity and likelihood of these risks is fundamental to developing an effective cybersecurity strategy.
- Impact on Patient Safety and Data Integrity: Assess how risks might affect patient safety and the integrity of the medical data processed by the AI/ML models.
- Model Reliability and Device Functionality: Consider the potential impact on the reliability of the AI/ML models and the functionality of the medical device.
- Risk Prioritization: Prioritize the identified risks based on their potential impact and the likelihood of occurrence.
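As an illustration of how identified risks could be recorded and ordered for treatment, the following minimal Python sketch scores each risk by severity and likelihood. The record fields, scales, and example entries are hypothetical; the authoritative risk entries and acceptability criteria live in the R-TF-024-003 Cybersecurity risk management record and the risk management plan.

```python
from dataclasses import dataclass

@dataclass
class CybersecurityRisk:
    """Illustrative structure for one cybersecurity risk entry."""
    risk_id: str
    threat: str          # e.g. unauthorized data access through the HTTP API
    vulnerability: str   # weakness that the threat could exploit
    severity: int        # 1 (negligible) to 5 (catastrophic), illustrative scale
    likelihood: int      # 1 (improbable) to 5 (frequent), illustrative scale

    @property
    def priority(self) -> int:
        # Simple severity x likelihood score used only to order the work;
        # acceptability criteria are defined in the risk management records.
        return self.severity * self.likelihood

risks = [
    CybersecurityRisk("CYB-001", "Tampering with model output via the report builder",
                      "Unsigned report payloads", severity=4, likelihood=2),
    CybersecurityRisk("CYB-002", "Unauthorized access to patient data through the API",
                      "Missing token expiry", severity=5, likelihood=3),
]

# Highest-priority risks are addressed first.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(risk.risk_id, risk.priority, risk.threat)
```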
Conducting Thorough Security Evaluations
A comprehensive security risk assessment is key to understanding the vulnerabilities within the software components, including the AI/ML models, and their potential impact. This step is crucial for the development of targeted strategies to mitigate identified risks effectively.
- Performing Comprehensive Vulnerability Assessments: Systematically evaluate the AI/ML models and the auxiliary software items to identify vulnerabilities and assess their potential impact on device functionality and patient safety.
Mitigation Strategy Development
Developing effective strategies to mitigate identified cybersecurity risks is a critical step in securing AI/ML models and the auxiliary software items. This involves designing and implementing a range of protective measures to safeguard against the identified threats. Strategies might include technical solutions like encryption and access controls, as well as organizational measures such as staff training and policy development.
- Developing Robust Mitigation Strategies: Create comprehensive strategies to address and mitigate the identified cybersecurity risks.
- Implementing Encryption and Access Controls: Use advanced encryption methods and access control mechanisms to protect sensitive data.
- Redundancy Systems for Data Integrity: Incorporate redundancy systems to maintain data integrity and ensure the availability of the devices.
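As a minimal sketch of the encryption measure mentioned above, the snippet below uses the third-party `cryptography` package to protect a sensitive record at rest. The record content is hypothetical, and key management (generation, storage, rotation) must follow the mitigation strategy actually selected rather than the inline key shown here.

```python
from cryptography.fernet import Fernet

# In production the key comes from a managed secret store, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient_id": "anon-42", "finding": "..."}'   # hypothetical payload
encrypted = cipher.encrypt(record)       # form that is stored or transmitted
decrypted = cipher.decrypt(encrypted)    # only services holding the key can read it
assert decrypted == record
```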
AI/ML Model and Auxiliary Software Item Design and Security Features
This step heavily overlaps with GP-012 Design, Redesign and Development, and thus the outcome of the AI/ML Model and Auxiliary Software Item Design and Security Features is also recorded in the same records generated during the Design, Redesign and Development itself.
Security by Design
Incorporating security considerations from the very beginning of the design process is essential. This 'Security by Design' approach ensures that the models are inherently secure and resilient to potential cybersecurity threats. It involves integrating security features that are robust yet do not hinder the model's functionality or performance. This proactive approach to security helps in minimizing vulnerabilities from the outset.
- Integrating Security in Design Phase: Embed security features during the design phase of the development.
- Balancing Security with Functionality: Ensure that the integration of security features does not adversely affect the device's performance.
Integrating Secure by Design Principles
The 'Secure by Design' approach is pivotal in ensuring that security is not an afterthought but an integral component of AI/ML models and the auxiliary software items from the beginning. This involves designing models and auxiliary software items with inherent security features that are robust enough to withstand potential cyber threats while maintaining functionality.
- Adopt a Proactive Security Approach: Incorporate security considerations right from the initial design phase, ensuring that security is an integral part of the development process.
- Embedding Security Features: Design AI/ML models, APIs and other software components with built-in security features such as advanced encryption, access controls, and intrusion detection systems.
Considering Entire Lifecycle of AI/ML Models and Auxiliary Software Items
Addressing the entire lifecycle of AI/ML models and their auxiliary software items ensures that security is maintained not just at the development stage but throughout the product's operational life. This includes regular updates, maintenance, and eventual decommissioning.
- Developing a Lifecycle Management Strategy: Create a comprehensive plan that addresses all phases of the device lifecycle, including maintenance and updates.
- Ongoing Monitoring and Regular Updates: Set up a system for continuous monitoring and timely updating of the AI/ML models in response to new cybersecurity threats, including the auxiliary software items.
Incorporation of Security Controls
The integration of comprehensive security controls is a vital part of the design process. These controls include mechanisms for user authentication, data encryption, and authorization protocols in the connectivity components of the devices. They play a crucial role in protecting the AI/ML models from unauthorized access and ensuring the security of data processed by these models. Secure coding practices are also crucial in minimizing vulnerabilities during the software development lifecycle.
- Implementation of Security Mechanisms: Apply robust security controls like authentication and authorization protocols.
- Utilizing Secure Coding Practices: Adopt secure coding standards to reduce software vulnerabilities.
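The following framework-agnostic Python sketch illustrates the kind of authentication and authorization check a connectivity component (for example, the HTTP API) could apply before a request reaches the model. Token issuance, user management, and role definitions are assumed to exist elsewhere; all names and values are hypothetical.

```python
import hmac

# Hypothetical token and role data; in practice these come from an identity provider.
VALID_TOKENS = {"token-abc": "radiologist", "token-xyz": "service-account"}
ALLOWED_ROLES = {
    "POST /predictions": {"radiologist", "service-account"},
    "GET /audit-log": {"service-account"},
}

def authorize(bearer_token: str, operation: str) -> bool:
    """Return True only if the token is known and its role may perform the operation."""
    for known_token, role in VALID_TOKENS.items():
        # Constant-time comparison avoids leaking token material through timing.
        if hmac.compare_digest(bearer_token, known_token):
            return role in ALLOWED_ROLES.get(operation, set())
    return False

assert authorize("token-abc", "POST /predictions")
assert not authorize("token-abc", "GET /audit-log")
```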
Data Protection and Privacy
This step heavily overlaps with GP-050 Data Protection and GP-052 Data Privacy Impact Assessment (DPIA), and thus the outcome of the Data Protection and Privacy is also recorded in the same records generated during the Data Protection activities.
Data protection and privacy are paramount, especially when dealing with sensitive patient data. Ensuring that all data used and generated by AI/ML models is securely handled and stored is a critical aspect of model development. Employing privacy-preserving methods, such as data anonymization, adds an additional layer of security, helping to protect patient privacy and comply with regulatory requirements.
- Secure Data Handling and Storage: Implement measures to ensure the secure handling and storage of sensitive data.
- Privacy-Preserving Techniques: Utilize techniques like data anonymization and pseudonymization to enhance data privacy.
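As an illustration of the pseudonymization technique referenced above, the sketch below derives a deterministic, opaque identifier from a patient identifier using a keyed hash. The secret, identifier format, and field names are hypothetical; the actual technique is selected under GP-050 and GP-052.

```python
import hashlib
import hmac

# The secret is held outside the dataset (e.g. in a key management service) so that
# pseudonyms cannot be reversed by anyone holding the data alone.
PSEUDONYM_SECRET = b"replace-with-managed-secret"

def pseudonymize(patient_identifier: str) -> str:
    """Deterministic pseudonym: the same patient always maps to the same opaque value."""
    return hmac.new(PSEUDONYM_SECRET, patient_identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": pseudonymize("1990-01-01|DOE|JOHN"), "study": "CT-2024-0042"}
```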
Implementation of AI/ML Models
This step heavily overlaps with GP-012 Design, Redesign and Development, and thus the outcome of the AI/ML Model and Auxiliary Software Item Design and Security Features is also recorded in the same records generated during the Design, Redesign and Development itself. It also overlaps with GP-101 Information security.
Secure Implementation Practices
Implementing AI/ML models securely is vital for maintaining the integrity of medical devices. This includes not only the coding of the models themselves but also the environment in which they operate, and their auxiliary software items, such as orchestrators, APIs and report builders, when applicable. Secure implementation practices encompass a wide range of activities, from the selection of secure development platforms to the deployment of the models in a controlled and secure manner.
- Selection of Secure Development Platforms: Choose development platforms known for their robust security features.
- Controlled Deployment Environments: Deploy AI/ML models in environments that are secure, monitored, and controlled.
- Regular Security Audits: Conduct regular security audits of the development and deployment environments to identify and address any potential security issues.
Integration with Existing Systems
The integration of AI/ML models with existing systems and infrastructure must be handled with care to maintain system integrity and security. This includes ensuring compatibility with existing security protocols and infrastructure, as well as ensuring that the integration does not introduce new vulnerabilities.
- Compatibility with Security Protocols: Ensure AI/ML models are compatible with existing security protocols and infrastructure. This is achieved mostly through APIs built on the HTTP protocol.
- Assessment of Integration Risks: Evaluate the risks associated with integrating the devices into existing systems.
- Mitigation of Integration Vulnerabilities: Implement measures to mitigate any new vulnerabilities that may arise from the integration process.
Verification and Validation
This step heavily overlaps with GP-012 Design, Redesign and Development, and thus the outcome of the Verification and Validation is also recorded in the same records generated during the Design, Redesign and Development itself.
Security Testing
Once the devices are implemented, rigorous security testing is crucial to ensure that they are free from vulnerabilities and function as intended. This involves a variety of testing methodologies, including penetration testing, static and dynamic code analysis, and vulnerability assessments.
- Comprehensive Penetration Testing: Conduct thorough penetration testing to identify potential security weaknesses, especially focused on the connectivity components such as APIs.
- Static and Dynamic Code Analysis: Utilize static and dynamic code analysis tools to detect vulnerabilities and coding flaws.
- Regular Vulnerability Assessments: Perform regular assessments to identify and address newly discovered vulnerabilities in the AI/ML models and their auxiliary software items.
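As a sketch of how static analysis and dependency scanning could be gated in an automated pipeline, the snippet below runs two commonly used Python tools, bandit and pip-audit, and fails the run on any finding. The choice of tools and thresholds here is an assumption for illustration; the actual toolchain is defined in the R-TF-024-001 Cybersecurity Management Plan and reported in the R-TF-024-005 Static code analysis report.

```python
import subprocess
import sys

# Illustrative security gate; tools and thresholds are set by the Cybersecurity Management Plan.
CHECKS = [
    ["bandit", "-r", "src/"],   # static analysis of Python sources
    ["pip-audit"],              # scan installed dependencies for known vulnerabilities
]

failed = False
for command in CHECKS:
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"security check failed: {' '.join(command)}")
        failed = True

# A non-zero exit blocks the pipeline so findings are triaged before release.
sys.exit(1 if failed else 0)
```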
Validation of Data Integrity and Model Accuracy
Validating the integrity of the data input to the models through the connectivity components, and the accuracy of the models themselves, is essential for ensuring that the medical device operates safely and effectively. This involves checking that the data has not been tampered with and that the models produce reliable and accurate outputs.
- Data Integrity Checks: Implement processes to ensure that the data fed into the AI/ML models has not been altered or tampered with.
- Model Accuracy Assessments: Regularly assess the accuracy of the AI/ML models to ensure they are performing as intended and providing reliable outputs.
- Continuous Monitoring and Improvement: Establish a system for continuous monitoring of model performance and accuracy, and for making improvements as needed.
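One possible data integrity check, sketched below, attaches a keyed hash (HMAC) to each payload so the receiving component can reject inputs that were altered in transit. The key handling and payload shown are hypothetical; the check is only an example of the kind of control described in the list above.

```python
import hashlib
import hmac

INTEGRITY_KEY = b"replace-with-managed-secret"  # shared only with trusted producers

def sign(payload: bytes) -> str:
    return hmac.new(INTEGRITY_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, received_signature: str) -> bool:
    # Reject inputs whose signature does not match: they may have been tampered with.
    return hmac.compare_digest(sign(payload), received_signature)

image_bytes = b"...hypothetical imaging payload..."
signature = sign(image_bytes)          # attached by the sending system
assert verify(image_bytes, signature)  # checked before the data reaches the model
```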
Post-Deployment Monitoring and Maintenance
This step heavily overlaps with GP-007 Post-market surveillance, and thus the outcome of the Post-Deployment Monitoring and Maintenance is also recorded in the same records generated during the Post-market surveillance itself. It also overlaps with GP-100 Business Continuity (BCP) and Disaster Recovery plans (DRP).
Establishing Post-Market Monitoring Systems
Implementing a robust post-market surveillance system is essential to monitor the effectiveness of cybersecurity measures continuously and to respond to new threats or incidents that may arise during the operational life of the devices.
- Setting Up a Surveillance System: Establish a system for ongoing monitoring of cybersecurity effectiveness.
- Regular Reporting and Vigilance: Maintain vigilance through regular reviews and reporting of any cybersecurity incidents or potential threats.
Continuous Monitoring for Cybersecurity Threats
After deployment, continuous monitoring of the device, including its AI/ML models, is essential to promptly identify and respond to any cybersecurity threats. This includes monitoring for unusual activities, potential security breaches, and emerging vulnerabilities that could affect the models.
- Active Monitoring Systems: Implement systems to actively monitor devices for signs of security breaches or anomalies.
- Regular Security Updates: Keep the devices up-to-date with the latest security patches and updates.
- Response to Emerging Threats: Develop a protocol for quickly responding to new cybersecurity threats as they emerge.
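A minimal sketch of an active monitoring rule is shown below: it counts failed authentication attempts per client and raises an alert when a threshold is exceeded. The threshold, alert channel, and client identifiers are hypothetical; real detection rules and escalation paths are defined in the post-market and incident response records.

```python
from collections import Counter
from datetime import datetime, timezone

FAILED_LOGIN_THRESHOLD = 5          # illustrative threshold
failed_logins: Counter[str] = Counter()

def raise_alert(client_id: str) -> None:
    # In practice this notifies the incident response team (see Incident Response Planning).
    timestamp = datetime.now(timezone.utc).isoformat()
    print(f"[{timestamp}] possible brute-force attempt from {client_id}")

def record_auth_failure(client_id: str) -> None:
    failed_logins[client_id] += 1
    if failed_logins[client_id] >= FAILED_LOGIN_THRESHOLD:
        raise_alert(client_id)

for _ in range(5):
    record_auth_failure("203.0.113.7")  # hypothetical client address
```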
Maintenance and Updates
Regular maintenance and updates of the AI/ML models and the auxiliary software items are crucial to ensure their ongoing security and effectiveness. This includes updating the models to address known vulnerabilities, improve their functionality, and adapt to changing cybersecurity landscapes.
- Scheduled Maintenance Checks: Conduct scheduled maintenance to ensure the devices are functioning optimally.
- Updating Models for New Threats: Regularly update the models to protect against newly identified cybersecurity threats.
- Documentation of Changes and Updates: Keep detailed records of all maintenance activities, updates, and changes made to the AI/ML models and auxiliary software.
Incident Response Planning
This step heavily overlaps with GP-004 Vigilance system, and thus the outcome of the Incident Response Planning is also recorded in the same records generated during the Vigilance system activities.
Development of an Incident Response Plan
Creating a robust incident response plan is crucial for promptly addressing any security incidents that may arise. This plan should outline the steps to be taken in the event of a security breach, including containment, investigation, mitigation, and recovery processes.
- Incident Response Team: Establish a dedicated incident response team responsible for managing cybersecurity incidents.
- Defined Incident Response Procedures: Develop clear procedures for responding to cybersecurity incidents, including roles and responsibilities.
- Regular Training and Drills: Conduct regular training sessions and drills to ensure that the incident response team is prepared to act swiftly and effectively.
Post-Incident Analysis and Reporting
After a security incident, conducting a thorough analysis and creating detailed reports are essential for understanding what happened and preventing future incidents. This includes analyzing the root cause of the incident, assessing the effectiveness of the response, and implementing lessons learned.
- Detailed Incident Analysis: Analyze the incident to understand the root cause and the impact on the AI/ML models and patient safety.
- Effectiveness Assessment: Evaluate the effectiveness of the incident response and identify areas for improvement.
- Implementation of Lessons Learned: Apply the insights gained from the incident analysis to strengthen the AI/ML models' cybersecurity measures and update the incident response plan.
Documentation and Instructions for Use
This step heavily overlaps with the section Minimum IT requirements in the IFU of procedure GP-001 Control of documents and, most importantly, R-TF-001-006 IFU and label validation.
The IFU is a vital component in ensuring the safe and effective use of our medical device. It is crucial that the information it contains is not only accurate and up-to-date but also secure and easily accessible. To achieve this, we have implemented a series of robust cybersecurity measures and transparent processes.
Use of the IFU and Documentation for the Cybersecurity of the Devices
Clear and thorough documentation, including detailed instructions for use, is essential for the effective operation and maintenance of the devices. This documentation serves as a vital resource for users and maintains transparency regarding cybersecurity measures.
- Creating detailed and clear documentation: Ensure that all documentation related to the device is comprehensive, clear, and easily accessible.
- User-friendly Instructions for Use: Develop instructions that are straightforward and easy to follow, covering all relevant aspects of cybersecurity.
Cybersecurity Measures in IFU Development and Maintenance
The IFUs of our devices are developed and maintained with the same high standards of security and integrity as the software medical devices themselves. This includes:
- Version Control and Content Management: Utilizing Git for managing content and versions, ensuring traceability and integrity in the documentation process.
- Secure Development Practices: Implementing signed commits with GPG Keys for authenticity, and a structured branch system requiring approvals for merging changes, thereby safeguarding against unauthorized modifications.
- Automated Verification: Employing automated checks to validate code correctness, ensuring the IFU is free from bugs or errors before any updates are deployed.
- Programmatic Deployment: Deploying updates directly to the server after rigorous checks, enhancing the security and reliability of the IFU.
- Environment Security: Secure storage of environment variables in the Git repository and on the deployment server, further reinforcing the safety of our documentation process.
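As an example of the kind of automated gate these practices imply, the sketch below refuses to deploy an IFU revision whose latest commit cannot be verified as GPG-signed, using the standard `git verify-commit` command. It is an illustration under the assumption that the deployment step is scripted in Python; the real pipeline commands live in the repository's own automation.

```python
import subprocess
import sys

# Refuse to publish an IFU revision whose latest commit is not verifiably GPG-signed.
result = subprocess.run(["git", "verify-commit", "HEAD"], capture_output=True, text=True)

if result.returncode != 0:
    print("HEAD commit is not verifiably signed; aborting IFU deployment.")
    print(result.stderr)
    sys.exit(1)

print("Signature verified; proceeding with programmatic deployment.")
```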
Transparency in Documentation
Transparency is key in our documentation process, ensuring that users have clear, accurate, and reliable information at all times. This involves:
- Regular Updates and Code Reviews: Conducting regular updates to the IFU, accompanied by thorough code reviews to maintain the highest standards of accuracy and clarity in our instructions.
- Traceable Modifications: Each update or modification to the IFU is traceable, with a clear record of what changes were made, by whom, and when. This transparency is crucial for maintaining user trust and regulatory compliance.
- Accessible Revision History: Maintaining an accessible history of revisions in our Git repository, allowing for easy tracking of changes over time.
Ensuring Easy Access and Compliance
- Dual Accessibility: The IFU is not only accessible via a secure, dedicated URL but is also integrated into the output of our medical devices. This dual method of access ensures that users can always access the most current version of the IFU in a manner most convenient to them.
- Compliance with Regulations: All the measures we take in the development, maintenance, and distribution of the IFU are in strict compliance with relevant regulations. This ensures that our users receive not only the most secure but also the most compliant form of documentation possible.
Transparency Requirements of AI/ML Models
Transparency in the devices, and especially in the AI/ML models, is crucial for ensuring trust, safety, and efficacy in medical devices. It involves providing clear, accurate, and accessible information about the AI/ML models, including how they are developed, validated, and function within the device. This transparency is essential for regulatory compliance, healthcare provider trust, and patient safety.
Key Aspects of Transparency
Description of AI/ML models and development and validation process
This step is accomplished with GP-012 Design, Redesign and Development and in the Technical File of each device, most prominently in the R-TF-012-006 Lifecycle plan and report.
- Model Functionality: Provide a detailed description of what the AI/ML model does, including its purpose, capabilities, and limitations.
- Data Used in Training: Clearly describe the data used to train the model, including sources, types, and any preprocessing steps.
- Model Development: Document the development process of the AI/ML models, including algorithms used, feature selection, and model architecture.
- Validation Methods: Describe the methods used to validate the AI/ML models, including performance metrics, testing datasets, and validation procedures.
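The transparency items above can be captured together in a structured summary, as in the illustrative Python sketch below. Every field name and example value is hypothetical; the authoritative descriptions remain in the Technical File and the R-TF-012-006 Lifecycle plan and report.

```python
from dataclasses import dataclass, field

@dataclass
class ModelTransparencyRecord:
    """Illustrative summary of the transparency items; not a controlled record."""
    model_name: str
    intended_purpose: str
    limitations: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)
    preprocessing_steps: list[str] = field(default_factory=list)
    architecture: str = ""
    validation_metrics: dict[str, float] = field(default_factory=dict)

record = ModelTransparencyRecord(
    model_name="example-segmentation-model",
    intended_purpose="Assist clinicians by segmenting anatomical structures on CT scans.",
    limitations=["Hypothetical example: not validated for pediatric patients"],
    training_data_sources=["De-identified CT studies from partner institutions (example)"],
    preprocessing_steps=["Resampling to isotropic voxels", "Intensity normalization"],
    architecture="3D U-Net (example)",
    validation_metrics={"dice_mean": 0.91, "dice_std": 0.04},  # placeholder values
)
```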
Decision-Making Process
This step is accomplished with the Customer information or information supplied by the manufacturer, as explained in GP-001 Control of documents.
- Accessible Documentation: Ensure that all documentation related to the AI/ML models and the related software is accessible, clear, and written in language that is understandable to its intended audience.
- Explainability of Decisions: Ensure that the AI/ML model's decision-making process is explainable to users. Provide information on how the model processes data and arrives at conclusions.
- User Guidance: Offer guidance on how healthcare providers should interpret and use the outputs of the AI/ML model and the report built by the device in clinical decision-making.
Performance Monitoring
This step is accomplished with GP-007 Post-market surveillance.
- Real-World Performance: Document the performance of the AI/ML models in real-world settings, including any discrepancies observed between expected and actual outcomes.
- Ongoing Monitoring: Describe the process for ongoing monitoring of the model's performance, including how updates and modifications are managed.
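A minimal sketch of a real-world performance check follows: it compares an observed metric against the expected value and flags a discrepancy for review. The metric, tolerance, and numbers are placeholders; acceptance criteria and follow-up actions come from GP-007 Post-market surveillance.

```python
def check_real_world_performance(expected_auc: float, observed_auc: float,
                                 tolerance: float = 0.05) -> bool:
    """Return False and request review when observed performance drifts below expectation."""
    drift = expected_auc - observed_auc
    if drift > tolerance:
        # Trigger the review defined in the post-market surveillance procedure.
        print(f"Performance drift of {drift:.3f} exceeds tolerance {tolerance}; review required.")
        return False
    return True

check_real_world_performance(expected_auc=0.93, observed_auc=0.86)  # placeholder values
```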
Ethical and Legal Considerations
This step heavily overlaps with GP-050 Data Protection and GP-052 Data Privacy Impact Assessment (DPIA), and thus the outcome of the Data Protection and Privacy is also recorded in the same records generated during the Data Protection activities.
- Data Privacy and Security: Detail the measures implemented to protect patient data privacy and security, in line with applicable regulations.
- Bias and Fairness: Address potential biases in AI/ML models and the steps taken to ensure fairness in outcomes across different patient demographics.
Associated documents
Procedures
GP-012 Design, Redesign and Development
GP-018 Infrastructure and facilities
GP-013 Risk management
GP-050 Data Protection
GP-052 Data Privacy Impact Assessment (DPIA)
GP-004 Vigilance system
Signature meaning
The signatures for the approval process of this document can be found in the verified commits at the repository for the QMS. As a reference, the team members who are expected to participate in this document and their roles in the approval process, as defined in Annex I Responsibility Matrix of the GP-001, are:
- Author: Team members involved
- Reviewer: JD-003, JD-004
- Approver: JD-001