Security and Privacy Issues in Deep Learning

Understand the security and data privacy issues in Deep Learning to build secure AI systems

Renu Khandelwal
8 min read · Oct 27, 2021

This article is adapted from and inspired by Security and Privacy Issues in Deep Learning. It walks through the different types of AI attacks and the defense techniques used against them.

Why do we need security in deep learning?

AI applications have penetrated our daily lives. We use Siri, Google Assistant, Alexa, or Cortana as voice assistants; recommendation engines suggest movies on Netflix, videos on YouTube, and friends on Facebook; object detection guides self-driving cars; and predictive models diagnose disease from patients' imaging data. Any vulnerability in these AI systems can cause mispredictions that compromise their integrity and effectiveness.

An artificial intelligence attack occurs when an attacker manipulates an AI system to alter its behavior toward a malicious end goal.

Designing any secure system requires clearly defining the boundary between the system and the outside world, ensuring that an attacker can never access or modify its critical parts. In an AI system, the most critical assets are the training data and the model.

An attacker can inject malicious data during training (data poisoning) or intentionally perturb inputs at inference time, as illustrated in the sketch below.
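To make the inference-time case concrete, here is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM). The PyTorch model, loss function, and epsilon value are illustrative assumptions rather than anything prescribed by the article.

```python
# Minimal FGSM sketch: craft a small input perturbation that increases the
# model's loss, so the perturbed input is likely to be misclassified.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of the input batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Hypothetical usage with any image classifier:
# model.eval()
# x_adv = fgsm_perturb(model, images, labels)
# model(x_adv) may now predict a different class than model(images),
# even though the two inputs look identical to a human.
```

The training-time counterpart works analogously: instead of perturbing inputs at prediction time, the attacker plants manipulated or mislabeled samples in the training set so the learned model behaves incorrectly on inputs of the attacker's choosing.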
