
Commit

post about inverse sensitivity mechanism
tasquatch committed Sep 1, 2023
1 parent 2ee120c commit 7c2704e
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion _posts/2023-09-06-inverse-sensitivity.md
@@ -104,7 +104,7 @@ In this post we've covered the inverse sensitivity mechanism and showed that it

The inverse sensitivity mechanism is a simple demonstration that there is more to differential privacy than simply adding noise scaled to global sensitivity; there are many more techniques in the literature.

-The inverse sensitivity mechanism has two main limitations. First, it is, in general, not computationally efficient. Computing the loss function is intractible for an arbitrary \\\(f\\\) (but can be done efficiently for simple examples like the median). Second, the \\\(\\log\|\\mathcal{Y}\|\\\) term in the accuracy guarantee is problematic when the output space is large, such as when we have high-dimensional outputs.
+The inverse sensitivity mechanism has two main limitations. First, it is, in general, not computationally efficient. Computing the loss function is intractable for an arbitrary \\\(f\\\) (but can be done efficiently for simple examples like the median). Second, the \\\(\\log\|\\mathcal{Y}\|\\\) term in the accuracy guarantee is problematic when the output space is large, such as when we have high-dimensional outputs.
While there are other techniques that can be used instead of inverse sensitivity, they suffer from some of the same limitations. Thus finding ways around these limitations is the subject of [active research](/colt23-bsp/) [[BKSW19](https://arxiv.org/abs/1905.13229 "Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. Private Hypothesis Selection. NeurIPS 2019."),[HKMN23](https://arxiv.org/abs/2212.05015 "Samuel B. Hopkins, Gautam Kamath, Mahbod Majid, Shyam Narayanan. Robustness Implies Privacy in Statistical Estimation. STOC 2023."),[FDY22](https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf "Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022."),[DHK23](https://arxiv.org/abs/2301.07078 "John Duchi, Saminul Haque, Rohith Kuditipudi. A Fast Algorithm for Adaptive Private Mean Estimation. COLT 2023."),[BHS23](https://arxiv.org/abs/2301.12250 "Gavin Brown, Samuel B. Hopkins, Adam Smith. Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions. COLT 2023."),[AUZ23](https://arxiv.org/abs/2302.01855 "Hilal Asi, Jonathan Ullman, Lydia Zakynthinou. From Robustness to Privacy and Back. 2023.")].

We leave you with a riddle: What can we do if even the local sensitivity of our function is unbounded? For example, suppose we want to approximate \\\(f(x) = \\max\_i x_i\\\). Surprisingly, there are still things we can do and we intend to write a follow-up post on this.
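To make the median example concrete: the loss len(x, y) is the minimum number of entries of x that must be modified for the median to become y, and for the (lower) median it reduces to counting the data points strictly between y and the current median. Below is a minimal sketch of the mechanism run as an exponential mechanism over a finite grid of candidate outputs; the function name, grid, and dataset are illustrative and not taken from the post.

```python
import numpy as np

def inverse_sensitivity_median(x, grid, eps, seed=None):
    """Inverse sensitivity mechanism for the median over a finite output grid.

    Samples y from `grid` with probability proportional to
    exp(-eps * len_f(x, y) / 2), where len_f(x, y) is the minimum number of
    entries of x that must change for the median of x to become y.
    """
    rng = np.random.default_rng(seed)
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    m = (n - 1) // 2  # index of the lower median, 0-indexed
    med = xs[m]

    def loss(y):
        # Each point strictly between y and the current median needs one
        # modification, plus one more change to place a point exactly at y.
        if y == med:
            return 0
        lo, hi = (y, med) if y < med else (med, y)
        return int(np.sum((xs > lo) & (xs < hi))) + 1

    losses = np.array([loss(y) for y in grid], dtype=float)
    logits = -eps * losses / 2.0  # exponential mechanism with utility -len_f
    logits -= logits.max()        # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(grid, p=probs)

# Example usage: a private estimate of the median of a toy dataset.
x = [3.1, 4.7, 5.0, 5.2, 6.8, 7.9, 9.4]
grid = np.linspace(0.0, 10.0, 101)  # candidate outputs, so |Y| = 101
print(inverse_sensitivity_median(x, grid, eps=1.0))
```

Changing one entry of x changes len_f(x, y) by at most one for every candidate y, so this is the exponential mechanism with a sensitivity-1 utility, and the usual analysis yields the log |Y| accuracy term discussed above. The naive loop above costs O(|Y| * n) time, which is fine for a sketch but is exactly the kind of computation that becomes intractable for an arbitrary f.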
