A curated list of interesting counter-AI techniques, examples, and stories. Apotropaic: adjective | apo·tro·pa·ic : "designed to avert evil"
=====================================================
https://github.com/BruceMacD/Adversarial-Faces | "Testing the effectiveness of practical implementations of adversarial examples against facial recognition." (A minimal sketch of the adversarial-example idea appears after this list.)
https://arstechnica.com/features/2020/04/some-shirts-hide-you-from-cameras-but-will-anyone-wear-them/ | "It's theoretically possible to become invisible to cameras. But can it catch on?"
https://ai.facebook.com/blog/using-radioactive-data-to-detect-if-a-data-set-was-used-for-training/ | "We have developed a new technique to mark the images in a data set so that researchers can determine whether a particular machine learning model has been trained using those images." (A simplified sketch of the carrier-marking idea appears after this list.)
http://simonweckert.com/googlemapshacks.html | "99 second hand smartphones are transported in a handcart to generate virtual traffic jam in Google Maps."
https://www.macpierce.com/the-camera-shy-hoodie | IR LEDs sewn into a hoodie that blind night-vision security cameras.
https://cvdazzle.com/ | "Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition."
https://algotransparency.org | "We aim to inform citizens on the mechanisms behind the algorithms that determine and shape our access to information. YouTube is the first platform on which we’ve conducted the experiment. We are currently developing tools for other platforms."
https://boingboing.net/2019/03/08/hot-dog-or-not.html | "Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye"
https://www.labsix.org/about/ | "LabSix is an independent, entirely student-run AI research group composed of MIT undergraduate and graduate students... Much of our current research is in the area of adversarial examples, at the intersection of machine learning and security."
- https://www.labsix.org/physical-objects-that-fool-neural-nets/
- https://www.labsix.org/limited-information-adversarial-examples/
https://techcrunch.com/2017/03/17/laying-a-trap-for-self-driving-cars/ | Artist James Bridle traps a self-driving car inside a circle of salt lines that mimic road markings.
https://medium.com/@kcimc/how-to-recognize-fake-ai-generated-images-4d1f6f9a2842 | Kyle McDonald on telltale artifacts to look for in AI-generated images.
http://sandlab.cs.uchicago.edu/fawkes/ | The SAND Lab at the University of Chicago developed Fawkes, a tool that makes tiny, pixel-level changes to personal photos (a process they call "image cloaking") that are invisible to the human eye but make the images unusable for facial recognition models. (A toy sketch of the cloaking idea appears after this list.)
https://adversarial.io/ | Adversarial.io is an easy-to-use web app for altering images so that they become machine-unreadable.
https://github.com/BillDietrich/fake_contacts | Android app that creates fake contacts, stored on your phone alongside your real contacts, feeding fake data to any app or company that copies your private data to use or sell it. This is called "data poisoning". (A toy decoy-contact generator illustrating the idea appears after this list.)
https://www.technologyreview.com/2020/02/19/868188/hackers-can-trick-a-tesla-into-accelerating-by-50-miles-per-hour/ | A two-inch piece of tape on a speed limit sign fooled the Tesla's cameras and made the car quickly and mistakenly speed up.
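
Several entries above (Adversarial-Faces, LabSix, adversarial.io) rest on the same core trick: add a small, carefully chosen perturbation to an input so a model misclassifies it while a human sees essentially no change. Below is a minimal sketch of the fast gradient sign method (FGSM) on a toy linear softmax classifier; it is not the method of any project listed here, just the underlying principle, and every name and parameter is illustrative.

```python
# Minimal FGSM sketch on a toy softmax classifier (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed random linear classifier over 3 classes.
n_features, n_classes = 64, 3
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

# A clean input; use the model's own top prediction as the "true" label.
x = rng.normal(size=n_features)
y = int(np.argmax(predict(x)))

# Gradient of the cross-entropy loss w.r.t. the input:
# for a linear layer followed by softmax, dL/dx = W^T (p - onehot(y)).
p = predict(x)
onehot = np.zeros(n_classes)
onehot[y] = 1.0
grad_x = W.T @ (p - onehot)

# FGSM step: move each feature a small amount in the direction that
# increases the loss, then compare predictions.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", int(np.argmax(predict(x))))
print("adversarial prediction:", int(np.argmax(predict(x_adv))))
print("max per-feature change:", float(np.max(np.abs(x_adv - x))))
```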
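The radioactive-data entry marks training images so that a model trained on them can be detected later. The sketch below is a heavily simplified, feature-space-only illustration of that idea, assuming "marking" means shifting points along a secret carrier direction and "detection" means checking whether a trained classifier's weights align with that carrier; the actual technique works in the feature space of a pretrained network and uses a proper statistical test.

```python
# Toy sketch of the "radioactive data" idea: shift marked points along a
# secret carrier, then test whether a trained classifier aligns with it.
import numpy as np

rng = np.random.default_rng(1)
d, n = 256, 2000          # feature dimension, points per class
epsilon = 0.3             # strength of the mark

# Secret carrier direction known only to the data owner.
u = rng.normal(size=d)
u /= np.linalg.norm(u)

# True (unmarked) direction separating the two classes.
v = rng.normal(size=d)
v /= np.linalg.norm(v)

def make_data(marked):
    # Two overlapping Gaussian classes separated along v; optionally shift
    # class-1 points along the secret carrier u to "mark" them.
    x0 = rng.normal(size=(n, d)) - 0.5 * v
    x1 = rng.normal(size=(n, d)) + 0.5 * v
    if marked:
        x1 = x1 + epsilon * u
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, steps=300, lr=0.1):
    # Plain batch gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

w_clean  = train_logreg(*make_data(marked=False))
w_marked = train_logreg(*make_data(marked=True))

# A classifier trained on marked data aligns noticeably with the secret
# carrier; one trained on clean data only aligns by chance (order 1/sqrt(d)).
print("cos(w, u) clean :", round(cosine(w_clean, u), 3))
print("cos(w, u) marked:", round(cosine(w_marked, u), 3))
```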
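The Fawkes entry describes cloaking photos with imperceptible changes so that face recognizers learn the wrong features. Below is a toy sketch of that idea under a big assumption: a fixed random linear map stands in for a real face-embedding network. A bounded per-pixel perturbation is optimized so the image's embedding moves toward a different identity while the pixels barely change. This illustrates the concept only and is not Fawkes itself.

```python
# Toy image-cloaking sketch: small pixel changes, large embedding shift.
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_embed = 1024, 64

# Stand-in "face embedding network": a fixed random linear map.
F = rng.normal(size=(n_embed, n_pixels)) / np.sqrt(n_pixels)

def embed(x):
    return F @ x

original = rng.uniform(0.0, 1.0, size=n_pixels)   # "your" photo, flattened to [0, 1]
target   = rng.uniform(0.0, 1.0, size=n_pixels)   # a photo of a different identity

budget = 0.05      # maximum per-pixel change, kept small like a cloak
lr = 0.5
delta = np.zeros(n_pixels)

for _ in range(200):
    # Minimize the distance between the cloaked embedding and the target
    # embedding, projecting the perturbation back into the budget each step.
    diff = embed(original + delta) - embed(target)
    grad = 2.0 * F.T @ diff
    delta -= lr * grad
    delta = np.clip(delta, -budget, budget)

cloaked = np.clip(original + delta, 0.0, 1.0)

def dist(a, b):
    return float(np.linalg.norm(a - b))

print("max per-pixel change:               ", float(np.abs(cloaked - original).max()))
print("embedding distance to target before:", dist(embed(original), embed(target)))
print("embedding distance to target after: ", dist(embed(cloaked), embed(target)))
```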
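The fake_contacts entry poisons scraped address books with decoy entries. The sketch below illustrates the same idea by generating plausible-looking fake contacts as importable vCards; it is purely illustrative and is not how the app itself is implemented.

```python
# Toy decoy-contact generator: write obviously-fictional contacts as vCards.
import random

# Pools of fictional names; every generated detail is fake.
FIRST = ["Alex", "Sam", "Jordan", "Riley", "Casey", "Morgan", "Taylor", "Quinn"]
LAST  = ["Walker", "Hughes", "Patel", "Kim", "Novak", "Garcia", "Olsen", "Dubois"]

def fake_contact(rng):
    first, last = rng.choice(FIRST), rng.choice(LAST)
    # 555-01xx numbers are reserved for fictional use in the North American plan.
    phone = f"+1-{rng.randint(201, 989)}-555-01{rng.randint(0, 99):02d}"
    email = f"{first.lower()}.{last.lower()}{rng.randint(1, 99)}@example.com"
    return "\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"N:{last};{first};;;",
        f"FN:{first} {last}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL:{email}",
        "END:VCARD",
    ])

if __name__ == "__main__":
    rng = random.Random(42)
    # Write a batch of decoy contacts that can be imported into an address book.
    with open("decoy_contacts.vcf", "w") as f:
        for _ in range(25):
            f.write(fake_contact(rng) + "\n")
    print("wrote 25 decoy contacts to decoy_contacts.vcf")
```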