Next-level security for AI-based software as a medical device

Demonstrating the safety and performance of AI-based software as a medical device entails special documentation requirements for data management and model development, as we recently reported in our blog post.

Safety and Security

Safety and (information) security strongly interact with each other. In 2019, the Medical Device Coordination Group (MDCG) published the document “Guidance on Cybersecurity for Medical Devices” (MDCG 2019-16) with the following key requirements:

  1. Information security is considered a shared responsibility of all stakeholders.
  2. Requirements apply both before and after market entry (a defense-in-depth strategy across the product life cycle).
  3. Manufacturers shall anticipate and assess the potential exploitation of vulnerabilities that may result from reasonably foreseeable misuse.
  4. Based on this assessment, measures shall be taken against the exploitation of these vulnerabilities.
  5. Risks originating from safety and from information security shall be considered together, including their interactions.

To begin with, the basic protective measures for AI-based software do not differ from those for classic software: both must achieve the overall protection goals of confidentiality, integrity, and availability. Examples include measures against denial-of-service attacks (loss of availability) and malware.

However, there are also threats specific to AI-based software that arise from its technical characteristics and target either the model or the data sets. We explain some examples below.

Identifying information security threats

As outlined above, the difficult task is to detect threats in advance, i.e., before distribution or before an attacker recognizes and exploits a potential vulnerability.

This approach is called threat modelling (or threat analysis) and is an established procedure in information security.

Procedure

Threat modelling describes a systematic procedure to identify potential threats, such as structural vulnerabilities or the lack of appropriate protective measures.

This requires knowledge of the characteristics of the system under consideration, the attacker profile, the most likely attack vectors, and the assets most attractive to an attacker (the values to be protected).

System properties → attackers (profile) → attack vectors → assets (to be protected)

Threat modelling answers questions such as “What are the most important threats?”, “Where am I most vulnerable to attack?” and “What can I do to protect myself against these threats?”.

Relevant standards

Threat modelling and threat analysis are recommended in relevant standards and technical reports for the consideration of security (e.g., IEC TR 60601-4-5, Medical electrical equipment – Part 4-5: Guidance and interpretation – Safety-related technical security specifications).

An established model (but not the only one!) is “STRIDE”, developed by Microsoft, whose name is an acronym for security threats in six categories:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of Service
  • Elevation of Privilege

Application to AI-based systems

This model can – and should! – be applied to AI-based systems.

Answer the question (for example, using the above categories): “What can go wrong in this AI-based system that we are working with and (would like to) rely on?”
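One lightweight way to record the answers is a threat table keyed by the STRIDE categories. The sketch below (in Python) contains purely illustrative entries for a hypothetical AI-based diagnosis system; it is a starting point for discussion, not a complete analysis.

    # Illustrative STRIDE threat table for a hypothetical AI-based
    # diagnosis system; each entry is one candidate threat to analyze.
    stride_threats = {
        "Spoofing": "An attacker impersonates the PACS server that feeds the model",
        "Tampering": "Input images are perturbed (adversarial examples, GAN edits)",
        "Repudiation": "No audit trail records who submitted which study",
        "Information disclosure": "Model outputs leak details of training patients",
        "Denial of Service": "Flooding the inference service blocks diagnoses",
        "Elevation of Privilege": "A flaw in the model server yields admin rights",
    }

    for category, threat in stride_threats.items():
        print(f"{category}: {threat}")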

We will show you what such a threat analysis could look like in the following examples.

Attacks on the Model

Adversarial Attacks

In simple terms, adversarial attacks distort the input data in such a way that the AI model behaves incorrectly. Examples include images of moles and the retina as well as chest X-rays into which distortions invisible to the human eye were introduced. In all cases, the respective models erroneously diagnosed disease although the images actually showed no findings (Ma, X. et al. 2019). Access to input data is by no means unlikely, as unprotected PACS servers have demonstrated in the past. Nevertheless, such attacks also require detailed knowledge of the typically hidden internal network topology of production systems. Ren and co-authors have recently highlighted numerous possible protective measures, such as adversarial training.
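To make this concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft such invisible perturbations. It is not the method used in the cited studies; the model, input tensor, and epsilon value are placeholders.

    import torch

    def fgsm_attack(model, x, y_true, epsilon=0.01):
        """Craft an adversarial example with the fast gradient sign method.

        x: input image tensor (e.g., a chest X-ray); y_true: its correct label.
        epsilon controls the perturbation size: small values remain invisible
        to the human eye yet can still flip the model's prediction.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y_true)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to a
        # valid pixel range so the image stays plausible.
        perturbed = x_adv + epsilon * x_adv.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()

Adversarial training, the protective measure mentioned above, essentially feeds such perturbed examples back into training so the model learns to classify them correctly.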

Generative Adversarial Network (GAN)-based Attacks

Three-dimensional CT images can be automatically altered by an attacker using a GAN in such a way that structures are either added or removed (Mirsky, Y. et al. 2019). Even a radiologist is then unable to detect these manipulations. Access to the input data can occur via malware on the radiologist’s PC if direct access to the PACS server is not possible.
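A heavily simplified sketch of the attack flow described by Mirsky and colleagues: a pre-trained generator (here a placeholder) in-paints a small cuboid cut out of the CT volume, and the manipulated patch is pasted back. All names and dimensions are illustrative assumptions; the real attack also rescales and blends the patch.

    import torch

    def inject_structure(ct_volume, generator, center, size=32):
        """Cut a cuboid around `center`, let a GAN in-paint it, paste it back.

        ct_volume: 3-D tensor (depth, height, width); generator: pre-trained
        model mapping a cuboid to the same cuboid with a structure added or
        removed (e.g., a fake nodule).
        """
        d, h, w = center
        s = size // 2
        patch = ct_volume[d - s:d + s, h - s:h + s, w - s:w + s]
        fake = generator(patch.unsqueeze(0).unsqueeze(0)).squeeze()
        manipulated = ct_volume.clone()
        manipulated[d - s:d + s, h - s:h + s, w - s:w + s] = fake
        return manipulated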

Data Poisoning

Data poisoning involves manipulating the training data for the AI model so that the performance of the product is negatively affected. However, this attack scenario requires a continuously learning system, which is currently not certifiable in the field of medical devices. Filtering the training data to detect anomalies has become established as a possible protective measure.
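As an illustration of such filtering, the following sketch flags candidate training samples that deviate strongly from a trusted reference set before they reach the model. The simple z-score rule and the threshold are assumptions for this example; practical systems use more robust detectors.

    import numpy as np

    def filter_candidates(candidates, reference, z_threshold=3.0):
        """Keep only candidate training samples close to the reference data.

        candidates, reference: arrays of shape (n_samples, n_features).
        A sample is dropped if any feature lies more than z_threshold
        standard deviations from the trusted reference statistics.
        """
        mu = reference.mean(axis=0)
        sigma = reference.std(axis=0) + 1e-8  # avoid division by zero
        z_scores = np.abs((candidates - mu) / sigma)
        keep = (z_scores < z_threshold).all(axis=1)
        return candidates[keep]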

Model Stealing

Model stealing involves sending a large amount of input data to the AI-based software and using the output data (e.g., a diagnosis) to train a second AI model. In other words, a kind of replication of the original AI model takes place.
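Schematically, such a replication could look like the sketch below: the attacker queries the deployed model as a black box, records its outputs, and trains a surrogate on the harvested pairs. `query_medical_device` stands in for the product's prediction interface and is purely hypothetical, as is the assumption that it returns class probabilities.

    import torch

    def steal_model(query_medical_device, surrogate, inputs, epochs=10):
        """Train a surrogate on (input, output) pairs harvested from the victim.

        query_medical_device: black-box function returning the product's
        predictions (assumed here to be class probabilities);
        surrogate: the attacker's own model; inputs: tensor of queries.
        """
        # Harvest labels by querying the deployed AI-based software.
        with torch.no_grad():
            labels = torch.stack([query_medical_device(x) for x in inputs])
        optimizer = torch.optim.Adam(surrogate.parameters())
        for _ in range(epochs):
            optimizer.zero_grad()
            log_probs = surrogate(inputs).log_softmax(dim=-1)
            # Pull the surrogate's predictions toward the victim's outputs.
            loss = torch.nn.functional.kl_div(
                log_probs, labels, reduction="batchmean")
            loss.backward()
            optimizer.step()
        return surrogate

Note that the sketch assumes unrestricted access; limiting the query rate or coarsening the returned outputs makes such replication considerably harder.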

Privacy

Privacy and information security can only be considered together. Under the EU General Data Protection Regulation (GDPR), health data are a special category of personal data and merit particular protection in Europe. Manufacturers can realize privacy through measures such as differential privacy as well as anonymization and pseudonymization of output data (Kaissis et al. 2020).
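As a minimal sketch of one of these measures, the classic Laplace mechanism for differential privacy adds calibrated noise to an aggregate result so that no individual patient record can be inferred from it. The sensitivity and epsilon values below are assumptions for the example.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Return a differentially private version of an aggregate statistic.

        sensitivity: maximum change of the statistic when one patient
        record is added or removed; epsilon: privacy budget (smaller
        means stronger privacy, at the cost of more noise).
        """
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_value + noise

    # Example: release a privacy-preserving patient count (sensitivity 1).
    private_count = laplace_mechanism(true_value=1234, sensitivity=1, epsilon=0.5)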

Recommendations

Manufacturers should consider information security together with safety in the risk management for their product.

The threat analysis outlined above is part of the risk management process. It can be applied both to “conventional information security issues” and to AI-based software.

For products placed on the market, it is also important to continuously identify and analyze new threats as part of the monitoring process. If necessary, new protective measures must be implemented via the change management process.
