From 7c2704e90052a1780a608834ad34be40b384fde7 Mon Sep 17 00:00:00 2001
From: Thomas Steinke
Date: Fri, 1 Sep 2023 13:23:56 -0700
Subject: [PATCH] post about inverse sensitivity mechanism

---
 _posts/2023-09-06-inverse-sensitivity.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2023-09-06-inverse-sensitivity.md b/_posts/2023-09-06-inverse-sensitivity.md
index e37c78b..e7a5add 100644
--- a/_posts/2023-09-06-inverse-sensitivity.md
+++ b/_posts/2023-09-06-inverse-sensitivity.md
@@ -104,7 +104,7 @@ In this post we've covered the inverse sensitivity mechanism and showed that it
 The inverse sensitivity mechanism is a simple demonstration that there is more to differential privacy than simply adding noise scaled to global sensitivity; there are many more techniques in the literature.
 
-The inverse sensitivity mechanism has two main limitations. First, it is, in general, not computationally efficient. Computing the loss function is intractible for an arbitrary \\\(f\\\) (but can be done efficiently for simple examples like the median). Second, the \\\(\\log\|\\mathcal{Y}\|\\\) term in the accuracy guarantee is problematic when the output space is large, such as when we have high-dimensional outputs.
+The inverse sensitivity mechanism has two main limitations. First, it is, in general, not computationally efficient. Computing the loss function is intractable for an arbitrary \\\(f\\\) (but can be done efficiently for simple examples like the median). Second, the \\\(\\log\|\\mathcal{Y}\|\\\) term in the accuracy guarantee is problematic when the output space is large, such as when we have high-dimensional outputs.
 
 While there are other techniques that can be used instead of inverse sensitivity, they suffer from some of the same limitations. Thus finding ways around these limitations is the subject of [active research](/colt23-bsp/) [[BKSW19](https://arxiv.org/abs/1905.13229 "Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. Private Hypothesis Selection. NeurIPS 2019."),[HKMN23](https://arxiv.org/abs/2212.05015 "Samuel B. Hopkins, Gautam Kamath, Mahbod Majid, Shyam Narayanan. Robustness Implies Privacy in Statistical Estimation. STOC 2023."),[FDY22](https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf "Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022."),[DHK23](https://arxiv.org/abs/2301.07078 "John Duchi, Saminul Haque, Rohith Kuditipudi. A Fast Algorithm for Adaptive Private Mean Estimation. COLT 2023."),[BHS23](https://arxiv.org/abs/2301.12250 "Gavin Brown, Samuel B. Hopkins, Adam Smith. Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions. COLT 2023."),[AUZ23](https://arxiv.org/abs/2302.01855 "Hilal Asi, Jonathan Ullman, Lydia Zakynthinou. From Robustness to Privacy and Back. 2023.")]. 
 
 We leave you with a riddle: What can we do if even the local sensitivity of our function is unbounded? For example, suppose we want to approximate \\\(f(x) = \\max\_i x_i\\\). Surprisingly, there are still things we can do and we intend to write a follow-up post on this.
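
(Note, not part of the patch.) The paragraph touched above says the inverse-sensitivity loss can be computed efficiently for the median. A minimal Python sketch of what that might look like, for readers of this thread: it assumes a finite candidate grid standing in for the output space, and the names median_loss and inverse_sensitivity_median are my own illustration, not code from the post.

import numpy as np

def median_loss(x_sorted, y):
    # Inverse-sensitivity loss: the fewest entries of the dataset that
    # must be changed so that the median becomes y. Changing k entries
    # lets the median move anywhere in [x_(m-k), x_(m+k)], so we return
    # the smallest k whose interval covers y.
    n = len(x_sorted)
    m = (n - 1) // 2  # index of the (lower) median, 0-indexed
    for k in range(n):
        lo = x_sorted[m - k] if m - k >= 0 else -np.inf
        hi = x_sorted[m + k] if m + k < n else np.inf
        if lo <= y <= hi:
            return k
    return n

def inverse_sensitivity_median(x, candidates, eps, rng=None):
    # Exponential mechanism over a finite candidate set, with utility
    # equal to minus the inverse-sensitivity loss (sensitivity 1),
    # so Pr[y] is proportional to exp(-eps * loss(y) / 2).
    if rng is None:
        rng = np.random.default_rng()
    x_sorted = np.sort(np.asarray(x))
    losses = np.array([median_loss(x_sorted, y) for y in candidates])
    logits = -eps * losses / 2.0
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(candidates, p=probs)

# Example usage:
# x = np.random.default_rng(0).normal(size=101)
# y = inverse_sensitivity_median(x, np.linspace(-5, 5, 201), eps=1.0)

The linear scan in median_loss keeps the sketch simple; since the interval [x_(m-k), x_(m+k)] only grows with k, the smallest covering k could also be found by binary search.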