diff --git a/_posts/2023-09-06-inverse-sensitivity.md b/_posts/2023-09-06-inverse-sensitivity.md
index e7a5add..ecd0b7e 100644
--- a/_posts/2023-09-06-inverse-sensitivity.md
+++ b/_posts/2023-09-06-inverse-sensitivity.md
@@ -28,7 +28,7 @@ For example, if \\\(X\_1, \\cdots X\_n\\\) are i.i.d. samples from a standard Ga
 ## Using Local Sensitivity
 
 Intuitively, the local sensitivity is the "real" sensitivity of the function and the global sensitivity is only a worst-case upper bound.
-Thus it seems natural to add noise scaled to the local sensitivity instead of the global sensisitivy.
+Thus it seems natural to add noise scaled to the local sensitivity instead of the global sensitivity.
 Unfortunately, naïvely adding noise scaled to local sensitivity doesn't satisfy differential privacy.
 The problem is that the local sensitivity itself can reveal information.
 
@@ -43,7 +43,7 @@ In fact, there are many methods in the differential privacy literature to exploi
 The best-known methods for exploiting local sensitivity are _smooth sensitivity_ [[NRS07](https://cs-people.bu.edu/ads22/pubs/NRS07/NRS07-full-draft-v1.pdf "Kobbi Nissim, Sofya Raskhodnikova, Adam Smith. Smooth Sensitivity and Sampling in Private Data Analysis. STOC 2007.")][^2] and _propose-test-release_ [[DL09](https://www.stat.cmu.edu/~jinglei/dl09.pdf "Cynthia Dwork, Jing Lei. Differential Privacy and Robust Statistics. STOC 2009.")][^3].
 
-In this post we will cover a different general-purpose technique known as the _inverse sensitivity mechanism_. This technique is folklore.[^4] It was first systematically studied by Asi and Duchi [[AD20](https://arxiv.org/abs/2005.10630 "Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020."),[AD20](https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html "Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.")], who also named the method.
+In this post we will cover a different general-purpose technique. This technique is folklore.[^4] It was first systematically studied by Asi and Duchi [[AD20](https://arxiv.org/abs/2005.10630 "Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020."),[AD20](https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html "Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.")], who also named the method the _inverse sensitivity mechanism_.
 
 ## The Inverse Sensitivity Mechanism
 
@@ -81,7 +81,7 @@ By the properties of the exponential mechanism and Lemma 1, \\\(M\\\) satisfies
 The privacy guarantee of the inverse sensitivity mechanism is easy and, in particular, it doesn't depend on the properties of \\\(f\\\).
 This means that the utility will need to depend on the properties of \\\(f\\\).
 
-By the properties of the exponential mechanism, we can guaranatee that the output has low loss:
+By the standard properties of the exponential mechanism, we can guarantee that the output has low loss:
 
 > **Lemma 3.** Let \\\(M : \\mathcal{X}^\* \to \\mathcal{Y}\\\) be as defined in Equation 5 with the loss from Equation 4.
 For all inputs \\\(x \\in \\mathcal{X}^\*\\\) and all \\\(\\beta\\in(0,1)\\\), we have \\\[\\mathbb{P}\\left\[\\ell(x,M(x)) < \\frac2\\varepsilon\\log\\left\(\\frac{\|\\mathcal{Y}\|}{\\beta}\\right\) \\right\] \ge 1-\beta.\\tag{6}\\\]
@@ -118,8 +118,8 @@ We leave you with a riddle: What can we do if even the local sensitivity of our
 
 [^3]: Roughly, the propose-test-release framework computes an upper bound on the local sensitivity in a differentially private manner and then uses this upper bound as the noise scale. (We hope to give more detail about both propose-test-release and smooth sensitivity in future posts.)
 
-[^4]: Properly attributing the inverse sensitivity mechanism is difficult. The earliest published instances of the inverse sensitivity mechanism of which we are aware of are from 2011 and 2013 [[MMNW11](https://www.cs.columbia.edu/~rwright/Publications/pods11.pdf "Darakhshan Mir, S. Muthukrishnan, Aleksandar Nikolov, Rebecca N. Wright. Pan-private algorithms via statistics on sketches. PODS 2011.")§3.1,[JS13](hhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC4681528/ "Aaron Johnson, Vitaly Shmatikov. Privacy-preserving data exploration in genome-wide association studies. KDD 2013.")§5]; but this was not novel even then. Asi and Duchi [[AD20](https://arxiv.org/abs/2005.10630 "Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.")§1.2] state that McSherry and Talwar [[MT07](https://ieeexplore.ieee.org/document/4389483 "Frank McSherry, Kunal Talwar. Mechanism Design via Differential Privacy. FOCS 2007.")] considered it in 2007. In any case, the name we choose to use was coined in 2020 [[AD20](https://arxiv.org/abs/2005.10630 "Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.")].
+[^4]: Properly attributing the inverse sensitivity mechanism is difficult. The earliest published instances of the inverse sensitivity mechanism of which we are aware are from 2011 and 2013 [[MMNW11](https://www.cs.columbia.edu/~rwright/Publications/pods11.pdf "Darakhshan Mir, S. Muthukrishnan, Aleksandar Nikolov, Rebecca N. Wright. Pan-private algorithms via statistics on sketches. PODS 2011.")§3.1,[JS13](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4681528/ "Aaron Johnson, Vitaly Shmatikov. Privacy-preserving data exploration in genome-wide association studies. KDD 2013.")§5]; but this was not novel even then. Asi and Duchi [[AD20](https://arxiv.org/abs/2005.10630 "Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.")§1.2] state that McSherry and Talwar [[MT07](https://ieeexplore.ieee.org/document/4389483 "Frank McSherry, Kunal Talwar. Mechanism Design via Differential Privacy. FOCS 2007.")] considered it in 2007. In any case, the name we use was coined in 2020 [[AD20](https://arxiv.org/abs/2005.10630 "Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.")].
 
-[^5]: Assuming that the output space \\\(\\mathcal{Y}\\\) is finite is a significant assumption. While it can be relaxed, it is to some extent an unavoidable limitation of the technique. To apply the inverse sensitivity mechanism to the median we must discretize and bound the inputs; bounding the inputs does impose a finite global sensitivity, but the dependence on the bound is logarithmic, so the bound can be conservative.
+[^5]: Assuming that the output space \\\(\\mathcal{Y}\\\) is finite is a significant assumption. While it can be relaxed, it is to some extent an unavoidable limitation of the technique. To apply the inverse sensitivity mechanism to the median we must discretize and bound the inputs; bounding the inputs does impose a finite global sensitivity, but the dependence on the bound is logarithmic, so the bound can be fairly large. Assuming that the function is surjective is a minor assumption that ensures that the loss is always well-defined; otherwise we can define the loss to be infinite for points that are not in the range of the function.
 
 [^6]: Note that we can use other selection algorithms, such as permute-and-flip [[MS20](https://arxiv.org/abs/2010.12603 "Ryan McKenna, Daniel Sheldon. Permute-and-Flip: A new mechanism for differentially private selection. NeurIPS 2020.")] or report-noisy-max [[DKSSWXZ21](https://arxiv.org/abs/2105.07260 "Zeyu Ding, Daniel Kifer, Sayed M. Saghaian N. E., Thomas Steinke, Yuxin Wang, Yingtai Xiao, Danfeng Zhang. The Permute-and-Flip Mechanism is Identical to Report-Noisy-Max with Exponential Noise. 2021.")] or gap-max [[CHS14](https://arxiv.org/abs/1409.2177 "Kamalika Chaudhuri, Daniel Hsu, Shuang Song. The Large Margin Mechanism for Differentially Private Maximization. NIPS 2014."),[BDRS18](https://dl.acm.org/doi/10.1145/3188745.3188946 " Mark Bun, Cynthia Dwork, Guy N. Rothblum, Thomas Steinke. Composable and versatile privacy via truncated CDP. STOC 2018."),[BKSW19](https://arxiv.org/abs/1905.13229 "Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. Private Hypothesis Selection. NeurIPS 2019.")].
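
To make footnote 5 concrete, here is a rough Python sketch (not taken from the post) of the inverse sensitivity mechanism applied to the median over a discretized, bounded output grid. Equations 4 and 5 are not reproduced in this patch, so the sketch assumes the usual path-distance loss — the minimum number of entries of the dataset that must be changed before a candidate output becomes the (lower) median — and samples a candidate via the exponential mechanism. The function and variable names below are illustrative, not from the post.

```python
import numpy as np

# Illustrative sketch (not from the post): inverse sensitivity mechanism for the median.
def inverse_sensitivity_median(x, grid, epsilon, rng=None):
    """Sample an approximate median of x from the candidate set `grid`.

    Uses the exponential mechanism with the path-distance loss
    ell(x, y) = minimum number of entries of x that must be changed so that
    y becomes the lower median.  Changing one entry of x changes each
    ell(x, y) by at most 1, so sampling with scale 2/epsilon is epsilon-DP
    under the change-one-record notion of neighbouring datasets.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x)
    n = len(x)
    m_idx = (n - 1) // 2  # position of the lower median in sorted order

    losses = np.empty(len(grid))
    for j, y in enumerate(grid):
        below = int(np.sum(x < y))          # entries strictly below y
        at_or_below = int(np.sum(x <= y))   # entries at or below y
        # y is the lower median iff below <= m_idx and at_or_below >= m_idx + 1;
        # otherwise the excess entries must be moved onto y.
        losses[j] = max(below - m_idx, (m_idx + 1) - at_or_below, 0)

    # Exponential mechanism: Pr[y] proportional to exp(-epsilon * loss / 2).
    # Subtract the minimum loss before exponentiating for numerical stability.
    scores = -0.5 * epsilon * (losses - losses.min())
    probs = np.exp(scores)
    probs /= probs.sum()
    return grid[rng.choice(len(grid), p=probs)]

# Toy usage: 500 points rounded onto the grid {0, 1, ..., 100}.
rng = np.random.default_rng(0)
grid = np.arange(101)
data = np.clip(np.rint(rng.normal(50, 10, size=500)), 0, 100)
print(inverse_sensitivity_median(data, grid, epsilon=1.0, rng=rng))
```

For reference against Lemma 3: the sampled candidate's loss exceeds (2/ε)·log(|Y|/β) with probability at most β, which for this 101-point grid with ε = 1 and β = 0.1 is roughly 14 changed data points — small next to the 500 samples in the toy run above, so the reported value should sit very close to the true median.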