
Security and Privacy in Machine Learning
Sharif University of Technology, Iran
CE Department
Spring 2023


Welcome to the public page for the course on Security and Privacy in Machine Learning (SPML). The main objective of the course is to introduce students to the principles of security and privacy in machine learning. Students become familiar with the vulnerabilities of machine learning models in the training and inference phases, and with methods for improving the robustness and privacy of machine learning models.

Course Logistics

Instructor

   Amir Mahdi Sadeghzadeh
   Office: CE-501 (DNSL)
   Office Hours: By appointment (through Email)
   Email: amsadeghzadeh_at_gmail.com
   URL: amsadeghzadeh.github.io

Course Staff

Course Pages

Main References

The main references for the course are research papers from top-tier conferences and journals in computer security (SP, CCS, USENIX Security, EuroSP) and machine learning (NeurIPS, ICLR, ICML, CVPR, ECCV). The following three books are used to present background topics in machine learning and deep learning in the first part of the course.

Grading Policy

Assignments (30%), Mid-term and Mini-exam (20%), Paper review and presentation (20%), and Final exam (30%).
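
As a rough illustration of how these weights combine into a final grade, here is a minimal sketch; the component scores are hypothetical example values, not real grading data.

```python
# Minimal sketch: combining the stated grade components with their weights.
# The component scores below are hypothetical example values (out of 100).
weights = {
    "assignments": 0.30,  # Assignments (30%)
    "midterm": 0.20,      # Mid-term and Mini-exam (20%)
    "papers": 0.20,       # Paper review and presentation (20%)
    "final": 0.30,        # Final exam (30%)
}
scores = {"assignments": 85, "midterm": 70, "papers": 90, "final": 80}

final_grade = sum(weights[k] * scores[k] for k in weights)
print(f"Final grade: {final_grade:.1f} / 100")  # 81.5 / 100
```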

Course Policy

Academic Honesty

Sharif CE Department Honor Code (please read it carefully!)

Homework Submission

Submit your answers as a .pdf or .zip file on the course page on the Quera website, using the following naming format: HW[HW#]-[FamilyName]-[std#] (for example, HW3-Hoseini-401234567).
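
A minimal sketch of assembling the required file name, using the example values given above (the variable names are illustrative only):

```python
# Minimal sketch: building the submission file name in the required format,
# HW[HW#]-[FamilyName]-[std#], using the example values from the text above.
hw_number = 3
family_name = "Hoseini"
student_id = "401234567"

file_name = f"HW{hw_number}-{family_name}-{student_id}.pdf"
print(file_name)  # HW3-Hoseini-401234567.pdf
```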

Late Policy

| # | Date | Topic | Content | Lecture | Reading | HWs |
|---|------|-------|---------|---------|---------|-----|
| 1 | 11/17 | Course Intro. | The scope and contents of the course | Lec1 | Towards the Science of Security and Privacy in Machine Learning | |
| 2 | 11/22 | Public Holiday | | | | |
| 3 | 11/24 | Machine Learning | ML Intro., Perceptron, Logistic regression | Lec2 | Pattern Recognition and Machine Learning Ch. 1 & Ch. 4; Deep Learning Ch. 5 | |
| 4 | 11/29 | Public Holiday | | | | |
| 5 | 12/1 | Linear Classifier | Gradient descent, Regularization | Lec3 | Pattern Recognition and Machine Learning Ch. 1 & Ch. 4; Deep Learning Ch. 6 | |
| 6 | 12/6 | Neural Networks (NNs) | Softmax Classifier, Neural networks | Lec4 | Deep Learning Ch. 6; The Neural Network, A Visual Introduction; Why are neural networks so effective? | HW1 |
| 7 | 12/8 | Neural Networks (NNs) | Neural networks | Lec5 | Deep Learning Ch. 6; Backpropagation for a Linear Layer; What is backpropagation really doing? | |
| 8 | 12/13 | Neural Networks (NNs) | Forward and backward propagation | Lec6 | Deep Learning Ch. 9 | |
| 9 | 12/15 | Convolutional NNs | Convolutional Neural Networks (CNNs) | Lec7 | Deep Learning Ch. 9 | |
| 10 | 12/20 | Regularization, Optimization | Mini-Exam, Batch Normalization, CNN Architectures | Lec8 | Dive into Deep Learning Ch. 8 | |
| 11 | 12/22 | Adversarial Examples | CNN Architectures, AE Generating Methods | Lec9 | Dive into Deep Learning Ch. 8; Intriguing Properties of Neural Networks | HW2 |
| 12 | 1/14 | Adversarial Examples | AE Generating Methods | Lec10 | Intriguing Properties of Neural Networks | |
| 13 | 1/19 | Adversarial Examples | AE Generating Methods | Lec11 | Explaining and Harnessing Adversarial Examples | |
| 14 | 1/21 | Adversarial Examples | AE Generating Methods | Lec12 | Towards Evaluating the Robustness of Neural Networks | HW3 |
| 15 | 1/26 | Adversarial Examples | AE Generating Methods | Lec13 | Universal Adversarial Perturbations; Adversarial Patch | |
| 16 | 1/28 | Adversarial Examples | Defenses Against AEs | Lec14 | Towards Deep Learning Models Resistant to Adversarial Attacks; Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | |
| 17 | 2/2 | Public Holiday | | | | |
| 18 | 2/4 | Adversarial Examples | Defenses Against AEs | Lec15 | Certified Adversarial Robustness via Randomized Smoothing; Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers | |
| - | 2/7 | Mid-term Exam | | | | |
| 19 | 2/9 | Adversarial Examples | Defenses Against AEs | Lec16 | Certified Adversarial Robustness via Randomized Smoothing; Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers | |
| 20 | 2/11 | Adversarial Examples | Black-box AEs | Lec17 | Practical Black-Box Attacks against Machine Learning; ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models | |
| 21 | 2/16 | Presentation | Student presentation | Pres1-1 | Adversarial Training for Free!; Data Augmentation Can Improve Robustness; Adversarial Examples for Malware Detection; Perceptual Adversarial Robustness: Defense Against Unseen Threat Models | |
| 22 | 2/18 | Adversarial Examples | Black-box AEs, Data Poisoning | Lec18 | Black-box Adversarial Attacks with Limited Queries and Information; BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain; Clean-Label Backdoor Attacks | HW4 |
| 23 | 2/23 | Presentation | Student presentation | Pres1-2 | Adversarial Examples Are Not Bugs, They Are Features; Intriguing Properties of Vision Transformers; Audio Adversarial Examples: Targeted Attacks on Speech-to-Text; Increasing Confidence in Adversarial Robustness Evaluations | |
| 24 | 2/25 | Poisoning | Poisoning | Lec19 | Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks; Deep Partition Aggregation: Provable Defense against General Poisoning Attacks | |
| 25 | 2/30 | Model Extraction | ME Attacks | Lec20 | High Accuracy and High Fidelity Extraction of Neural Networks; Knockoff Nets: Stealing Functionality of Black-Box Models | |
| 26 | 3/1 | Model Extraction, Privacy | ME Defenses, Privacy Risks | Lec21 | Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring; Membership Inference Attacks against Machine Learning Models | |
| 27 | 3/6 | Privacy | Privacy Risks | Lec22 | Passive and Active White-box Inference Attacks against Centralized and Federated Learning; The Algorithmic Foundations of Differential Privacy | HW5 |
| 28 | 3/8 | Privacy | Differential Privacy | Lec23 | The Algorithmic Foundations of Differential Privacy | |
| 29 | 3/13 | Privacy | Privacy-preserving DL | Lec24 | Deep Learning with Differential Privacy; Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | HW6 |
| 30 | 4/10 | Presentation | Student presentation | Pres2 | Reverse-Engineering Deep ReLU Networks; Trojaning Attack on Neural Networks; Poisoning and Backdooring Contrastive Learning; Extracting Training Data from Large Language Models; Deep Leakage from Gradients; Label-Only Membership Inference Attacks; Rényi Differential Privacy; Large Language Models Can Be Strong Differentially Private Learners | |