Publications

WORKSHOP (INTERNATIONAL) Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation

Tsubasa Takahashi, Shun Takagi (Kyoto University), Hajime Ono (SOKENDAI), Tatsuya Komatsu

Theory and Practice of Differential Privacy (TPDP 2020)

November 13, 2020

This paper studies how to learn variational autoencoders with a variety of divergences under differential privacy constraints. We often build a VAE with an appropriate prior distribution to describe the desired properties of the learned representations, and introduce a divergence as a regularization term that pulls the representations toward the prior. Using differentially private SGD (DP-SGD), which randomizes a stochastic gradient by injecting noise calibrated to the gradient's sensitivity, we can easily build a differentially private model. However, we reveal that attaching several divergences increases the sensitivity from O(1) to O(B) in terms of the batch size B, which requires injecting so much noise that learning becomes difficult. To solve this issue, we propose term-wise DP-SGD, which crafts randomized gradients in two different ways tailored to the compositions of the loss terms. Term-wise DP-SGD keeps the sensitivity at O(1) even when the divergence is attached, so the amount of injected noise can be reduced. In our experiments, we demonstrate that our method works well with two pairs of prior distribution and divergence.
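The sketch below illustrates the general term-wise idea described in the abstract, assuming a PyTorch-style VAE whose loss is a per-example reconstruction term plus a batch-level divergence regularizer: the per-example term is clipped example by example, while the divergence term is clipped once as a single batch-level gradient, keeping the overall sensitivity independent of the batch size. Names such as `term_wise_dp_sgd_step`, `recon_loss_fn`, and `divergence_fn` are illustrative assumptions, not the paper's actual API, and the noise calibration is schematic rather than a statement of the paper's privacy accounting.

```python
# Hedged sketch of term-wise gradient aggregation for DP-SGD (not the
# authors' implementation). Assumes `model` is a torch.nn.Module and `batch`
# is a tensor of shape [B, ...].
import torch

def term_wise_dp_sgd_step(model, batch, recon_loss_fn, divergence_fn,
                          clip_recon=1.0, clip_div=1.0,
                          noise_multiplier=1.0, lr=1e-3):
    params = [p for p in model.parameters() if p.requires_grad]

    # (1) Per-example reconstruction term: clip each example's gradient,
    #     so the summed gradient has O(1) sensitivity under example change.
    summed_recon = [torch.zeros_like(p) for p in params]
    for x in batch:
        loss_i = recon_loss_fn(model, x.unsqueeze(0))
        grads_i = torch.autograd.grad(loss_i, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads_i))
        scale = torch.clamp(clip_recon / (norm + 1e-12), max=1.0)
        for acc, g in zip(summed_recon, grads_i):
            acc.add_(g * scale)

    # (2) Batch-level divergence term: computed once over the whole batch
    #     and clipped as a single gradient, so its contribution stays
    #     bounded by clip_div instead of growing with the batch size B.
    div_loss = divergence_fn(model, batch)
    grads_div = torch.autograd.grad(div_loss, params)
    norm_div = torch.sqrt(sum(g.pow(2).sum() for g in grads_div))
    scale_div = torch.clamp(clip_div / (norm_div + 1e-12), max=1.0)

    # (3) Inject Gaussian noise scaled to the combined clipping bounds and
    #     take a plain SGD step on the averaged noisy gradient.
    sigma = noise_multiplier * (clip_recon + clip_div)
    with torch.no_grad():
        for p, g_rec, g_div in zip(params, summed_recon, grads_div):
            noisy = g_rec + g_div * scale_div + sigma * torch.randn_like(p)
            p -= lr * noisy / len(batch)
```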

Paper: Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation (external link)