Featured in this episode of Tech News of the Week
AI is great, right? Tools like Midjourney and DALL-E can create you an image from just a simple prompt like “Edward Scissorhands fights the Lawnmower Man in a Nelson Rockefeller oil painting,” or “15,000 cats look at you judgmentally as you frantically struggle to complete lightning round items, hyperrealistic.” Nightmare fuel prompts, the both of them.
The output is usually great, or disturbing, but the problem is that in order to train that AI, companies had to kind of… scrape images from literally everywhere on the internet without the express permission of the artists. This is a hilarious turnaround of the whole ‘the internet should be free’ argument, especially considering it’s nearly impossible to stop the scraping, and in most cases it isn’t even illegal.
Lately we are fighting back, with… MATH. A team led by University of Chicago professor Ben Zhao has created an algorithm called Nightshade that, when applied to images, causes the AI engine to go bananas upon ingestion. With only a hundred or so poisoned images in the training data, the AI will suddenly be unable to tell a dog from a cat, hats from cakes, or handbags from toasters. All of which I find hilarious.
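For the curious, the rough idea behind this kind of image poisoning is adversarial perturbation: nudge an image’s pixels just enough that a model’s feature extractor reads it as something else entirely, while a human still sees the original picture. Below is a minimal Python sketch of that general idea, and to be clear, it is not Nightshade itself (which targets text-to-image training pipelines and is considerably cleverer). The encoder choice, the dog.jpg/cat.jpg filenames, and the perturbation budget are all illustrative assumptions.

```python
# Minimal sketch of feature-space image poisoning -- NOT Nightshade's actual
# algorithm. We nudge a "dog" image so a pretrained encoder embeds it near a
# "cat" image, while keeping the pixel change small enough to be hard to notice.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pretrained encoder standing in for the victim model's feature extractor.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
encoder.fc = torch.nn.Identity()  # use penultimate-layer features as the embedding
for p in encoder.parameters():
    p.requires_grad_(False)

# Normalization is skipped for brevity; pixels stay in [0, 1].
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def load(path: str) -> torch.Tensor:
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

source = load("dog.jpg")   # the image we want to poison (hypothetical file)
target = load("cat.jpg")   # an image of the concept we want it to "become"
target_feat = encoder(target).detach()

epsilon = 8 / 255          # L-infinity budget: keep the change visually subtle
delta = torch.zeros_like(source, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    poisoned = (source + delta).clamp(0, 1)
    # Pull the poisoned image's embedding toward the target concept's embedding.
    loss = torch.nn.functional.mse_loss(encoder(poisoned), target_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)  # project back into the allowed budget

poisoned = (source + delta).detach().clamp(0, 1)
# `poisoned` still looks like a dog to a human, but its features sit near "cat";
# enough such images in a training set skew what a model learns about "dog".
```

Scale that up across a training corpus and you get the dogs-become-cats, hats-become-cakes confusion described above.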
It’s probably too late for the curated inventory of purloined images in the OpenAI archives of the world, but if nothing else it’s probably going to force a conversation that billion-dollar company has so far been uninterested in having, namely: “What if maybe we paid the artists?” Crazy talk, I know. Expect more info about Nightshade when it’s globally unveiled at the ACM Conference on Computer and Communications Security (CCS) 2023 at the end of November, when I’ll probably rerun this article basically word for word.