Generative AI has been a game-changer since OpenAI’s ChatGPT made its debut. It can chat like a human, produce images, and even answer our most burning questions. But at what cost?
These models are trained on masses of data scraped from the web, including images published by artists and photographers, taken without consent or compensation. Sounds unfair, doesn’t it?
Enter Nightshade.
This clever tool, developed by a team of University of Chicago researchers, can ‘poison’ training data by making invisible, pixel-level changes to artwork before it’s uploaded to the web.
The result? Models trained on the poisoned images start producing erroneous output. Imagine a dog turning into a cat, or a car morphing into a cow.
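For the curious, here’s a rough sense of what ‘invisible pixels’ means in practice. The toy Python sketch below nudges every pixel of an image by at most a few values out of 255: far too small a change for the human eye, yet enough to alter the raw data a scraper collects. To be clear, this is not Nightshade’s actual method, which reportedly optimises targeted perturbations against a model’s feature space; the file names and the epsilon bound here are purely illustrative.

```python
# Toy illustration only: NOT Nightshade's algorithm. Nightshade reportedly
# optimises targeted perturbations against a model's feature space; this
# sketch just shows what an imperceptible, bounded pixel change looks like.
import numpy as np
from PIL import Image

def perturb_image(path_in: str, path_out: str, epsilon: int = 2) -> None:
    """Shift every RGB channel by at most `epsilon` (out of 255).

    The change is invisible to a human viewer but alters the raw
    pixel values any web scraper would ingest.
    """
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Hypothetical file names, for illustration.
perturb_image("artwork.png", "artwork_shaded.png")
```

Random noise like this wouldn’t actually fool a model, of course; Nightshade’s perturbations are crafted so the image’s learned features resemble a different concept entirely, which is how a ‘dog’ ends up teaching the model ‘cat’.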
Ben Zhao, the professor leading the Nightshade team, believes this could shift the power balance back to artists and serve as a warning to tech firms that disrespect copyright and intellectual property rights.
Nightshade will be open-source when it’s released, so others can refine it and make it more effective. However, it should be used as a ‘last defence for content creators against web scrapers,’ the team advised.
Meanwhile, OpenAI lets artists opt their work out of its training data, but the process has been described as extremely onerous. Making it easier might discourage artists from reaching for Nightshade, saving OpenAI and others from potential disruption.
Do you think disruption is the answer here, or is there a better solution?
#ArtistsRights #GenerativeAI #Nightshade
https://www.digitaltrends.com/computing/new-corruption-tool-nightshade-ai/