USENIX-23-Fall Interesting Papers
Table of Contents
- 1. AI-related
- 1.1. DONE A Data-free Backdoor Injection Approach in Neural Networks
- 1.2. DONE A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots
- 1.3. DONE Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
- 1.4. DONE Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation
- 2. Related and Interesting but not relevant to my work
- 3. Not relevant but interesting
List of accepted papers: https://www.usenix.org/conference/usenixsecurity23/fall-accepted-papers
The papers I am interested in are as follows:
1. AI-related
1.1. DONE A Data-free Backdoor Injection Approach in Neural Networks
- https://www.usenix.org/conference/usenixsecurity23/presentation/lv
- backdoor injection: modifying the weights of an already-trained model to obtain a new model that contains a backdoor
- idea: introduce a "substitute" dataset so that injection requires no access to the original training data, which makes the attack "data-free" (a minimal sketch follows below)
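A minimal PyTorch sketch of that framing (my own illustration, not the paper's algorithm; `model`, `substitute_loader`, and `target_class` are assumed inputs): a frozen copy of the original model keeps behavior on clean substitute data unchanged via distillation, while trigger-stamped inputs are pushed toward the attacker's target class.

```python
import copy
import torch
import torch.nn.functional as F

def add_trigger(x, value=1.0, size=3):
    # Stamp a small square trigger into the bottom-right corner of each image.
    x = x.clone()
    x[:, :, -size:, -size:] = value
    return x

def inject_backdoor(model, substitute_loader, target_class, epochs=1, lr=1e-4):
    teacher = copy.deepcopy(model).eval()        # frozen copy of the clean model
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, _ in substitute_loader:           # substitute labels are ignored
            with torch.no_grad():
                clean = F.softmax(teacher(x), dim=1)
            # Distillation loss: stay close to the original model on clean inputs.
            loss_clean = F.kl_div(F.log_softmax(model(x), dim=1), clean,
                                  reduction="batchmean")
            # Backdoor loss: trigger-stamped inputs must predict the target class.
            y_bd = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss_bd = F.cross_entropy(model(add_trigger(x)), y_bd)
            opt.zero_grad()
            (loss_clean + loss_bd).backward()
            opt.step()
    return model
```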
1.2. DONE A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots
- https://www.usenix.org/conference/usenixsecurity23/presentation/zhang-boyang
- Task: steal the hyperparameters of NN models from the scientific plots (e.g., t-SNE plots) their authors publish (a toy sketch follows below).
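A toy sketch of the attack framing (my own simplification, not the paper's pipeline; `shadow_plot` is a hypothetical stand-in that fakes how two hyperparameter settings might change the shape of a published 2-D embedding plot): a small CNN is trained on rasterized shadow plots to read the hyperparameter class back off the image.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def rasterize(points, bins=32):
    # Turn a 2-D scatter (stand-in for a published plot) into a grayscale image.
    img, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                               bins=bins, range=[[-4, 4], [-4, 4]])
    return torch.tensor(img, dtype=torch.float32).unsqueeze(0)  # (1, H, W)

def shadow_plot(hyper_label):
    # Pretend hyperparameters shape the embedding:
    # label 0 -> tight clusters, label 1 -> one diffuse cloud.
    if hyper_label == 0:
        centers = np.random.uniform(-3, 3, size=(4, 2))
        pts = np.concatenate([c + 0.2 * np.random.randn(50, 2) for c in centers])
    else:
        pts = 1.5 * np.random.randn(200, 2)
    return rasterize(pts)

# Attack model: plot image -> hyperparameter class.
attack = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(2),
)
opt = torch.optim.Adam(attack.parameters(), lr=1e-3)
for _ in range(200):
    ys = torch.randint(0, 2, (16,))
    xs = torch.stack([shadow_plot(int(y)) for y in ys])
    loss = F.cross_entropy(attack(xs), ys)
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, attack(plot_image) guesses which setting produced the plot.
```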
1.3. DONE Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
- https://www.usenix.org/conference/usenixsecurity23/presentation/sandoval
- Studies the security risks introduced by AI code assistants, here OpenAI Codex (the model behind Copilot), through a user study of C programming.
1.4. DONE Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation
- https://www.usenix.org/conference/usenixsecurity23/presentation/yan
- Scenario:
  - The attacker holds a copy of the model and has full access to its internals, which is why this setting is called "white-box".
  - The model's owner has embedded a watermark directly into the model's weights and structure; the attacker transforms the model so that the watermark can no longer be verified, while the model's behavior stays intact.
- Method: neural structural obfuscation, i.e., injecting dummy neurons that change the model's structure without changing its function; the full method is complex (a minimal sketch follows below).
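A minimal sketch of the dummy-neuron idea (illustrative only, not the paper's full toolchain): widening a hidden layer with a neuron whose outgoing weights are zero leaves the network's function exactly unchanged, but reshapes every weight matrix a white-box watermark verifier would inspect.

```python
import torch
import torch.nn as nn

def add_dummy_neuron(fc1, fc2):
    # Return widened copies of (fc1, fc2) that compute the same function.
    new_fc1 = nn.Linear(fc1.in_features, fc1.out_features + 1)
    new_fc2 = nn.Linear(fc2.in_features + 1, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight[:-1] = fc1.weight   # copy the original neurons
        new_fc1.bias[:-1] = fc1.bias
        new_fc1.weight[-1].normal_()       # dummy neuron: arbitrary incoming weights
        new_fc1.bias[-1] = 0.0
        new_fc2.weight[:, :-1] = fc2.weight
        new_fc2.weight[:, -1] = 0.0        # zero outgoing weights => no effect
        new_fc2.bias[:] = fc2.bias
    return new_fc1, new_fc2

# Sanity check: the obfuscated layers match the originals on random inputs.
fc1, fc2 = nn.Linear(8, 16), nn.Linear(16, 4)
g1, g2 = add_dummy_neuron(fc1, fc2)
x = torch.randn(5, 8)
print(torch.allclose(fc2(torch.relu(fc1(x))),
                     g2(torch.relu(g1(x))), atol=1e-6))  # True
```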
2. Related and Interesting but not relevant to my work
- Authenticated private information retrieval
- DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing
- Don’t be Dense: Efficient Keyword PIR for Sparse Databases
- Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
- HECO: Fully Homomorphic Encryption Compiler
3. Not relevant but interesting
- OK Is Not Enough: A Large Scale Study of Consent Dialogs in Smartphone Applications
- https://www.usenix.org/conference/usenixsecurity23/presentation/koch
- How the Great Firewall of China Detects and Blocks Fully Encrypted Traffic
- https://www.usenix.org/conference/usenixsecurity23/presentation/wu-mingshi