Publications

Workshop (Domestic): Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning

Marin Matsumoto (Ochanomizu University), Tsubasa Takahashi, Seng Pei Liew, Masato Oguchi (Ochanomizu University)

The 25th Workshop on Information-Based Induction Sciences (IBIS 2022)

2022.11.20

Local differential privacy (LDP) provides a strong privacy guarantee suited to distributed settings such as federated learning (FL). LDP mechanisms in FL protect a client's gradient by randomizing it on the client; however, how should we interpret the privacy level given by this randomization, and what types of attacks can it mitigate in practice? To answer these questions, we introduce an empirical privacy test that measures lower bounds of LDP. The privacy test estimates how well an adversary can predict whether a reported randomized gradient was crafted from a raw gradient g1 or g2. We then instantiate six adversaries in FL under LDP to measure empirical LDP at various attack surfaces, including a worst-case attack that reaches the theoretical upper bound of LDP. The empirical privacy test with these adversary instantiations enables us to interpret LDP more intuitively and to discuss relaxing the privacy parameter until a particular instantiated attack succeeds. We also present numerical observations of the measured privacy in these adversarial settings, showing that the worst-case attack is not realistic in FL. Finally, we discuss a possible relaxation of privacy levels in FL under LDP.
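The distinguishing test described in the abstract can be made concrete with a small sketch. The following is a minimal illustration only, assuming a scalar Laplace mechanism and a nearest-gradient (maximum-likelihood) adversary; the function names, the choice of mechanism, and the ln(TPR/FPR) bound follow standard DP-auditing practice and are not the paper's six adversary instantiations.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(g, eps, sensitivity=1.0):
    # Assumed eps-LDP randomization of a scalar gradient g.
    return g + rng.laplace(scale=sensitivity / eps)

def adversary_guess(y, g1, g2):
    # Maximum-likelihood adversary: guess whichever raw gradient
    # is closer to the reported randomized gradient y.
    return g1 if abs(y - g1) < abs(y - g2) else g2

def empirical_eps_lower_bound(eps_true, g1=0.0, g2=1.0, trials=100_000):
    # Play the distinguishing game: randomize g1 and g2 repeatedly
    # and record how often the adversary outputs "g1".
    tpr = np.mean([adversary_guess(laplace_mechanism(g1, eps_true), g1, g2) == g1
                   for _ in range(trials)])  # correct "g1" calls on g1's reports
    fpr = np.mean([adversary_guess(laplace_mechanism(g2, eps_true), g1, g2) == g1
                   for _ in range(trials)])  # wrong "g1" calls on g2's reports
    # For a pure eps-LDP mechanism, P[guess = g1 | input g1] <= e^eps * P[guess = g1 | input g2],
    # so ln(TPR / FPR) is an empirical lower bound on eps.
    return np.log(tpr / fpr)

print(empirical_eps_lower_bound(eps_true=1.0))  # roughly 0.83, below the true eps = 1.0
```

Under these assumptions the measured bound stays below the true epsilon (about 0.83 for eps = 1 here), which mirrors the abstract's point: the gap between an instantiated attack's empirical LDP and the theoretical upper bound quantifies how much slack a given attack surface leaves.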

Paper: Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning