
"Protected" images are easier, not harder, to steal with AI

We’ve all seen how artificial intelligence can tweak, tinker, and outright transform images. It’s an impressive and, at times, unsettling showcase of power. In an attempt to shield their works from AI’s sculpting hands, many artists have turned to protective tools such as PhotoGuard, Mist, and Glaze. These solutions pepper the images with adversarial noise – slight alterations that are invisible to the human eye but are enough to baffle AI systems. The noise is designed to stop generative models from learning or modifying the content, including copyrighted images and artworks.

However, in a twist of irony, this defense strategy may have inadvertently rolled out the red carpet for AI intrusion. A team of U.S. researchers recently challenged the effectiveness of adversarial noise, and their findings suggest it may leave the images more prone to AI edits. Using the Stable Diffusion model as a testbed, they discovered that the added layer of protection didn’t repel AI interference – it invited it.

The researchers experimented using a range of artworks and photographs, putting the adversarial noise’s resilience to the test by running image-to-image generation and style transfer tasks. They employed both subtle and dramatic changes, with cues such as “A young girl in a pink dress going into a wooden cabin” switching to “A young boy in a blue shirt going into a brick house” or “Two cats lounging on a couch.” Regardless of the specifics, the results were strikingly consistent – the “protected” images regularly produced results that fell in line with the instructions more accurately than their unprotected counterparts.
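The comparison is straightforward to approximate. Below is a minimal sketch, assuming the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint (the study specifies neither), that runs the same image-to-image edit on a clean photo and on a lightly perturbed copy. Plain random pixel noise stands in here for the optimized adversarial perturbations that tools like PhotoGuard or Mist actually compute, and the input filename is hypothetical.

```python
# Minimal sketch (not the authors' code): compare how a Stable Diffusion
# img2img edit responds to a clean image vs. a "protected" copy.
import torch
import numpy as np
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def add_perturbation(img: Image.Image, eps: int = 8) -> Image.Image:
    """Add a small pixel perturbation (random stand-in for real adversarial noise)."""
    arr = np.asarray(img).astype(np.int16)
    noise = np.random.randint(-eps, eps + 1, arr.shape, dtype=np.int16)
    return Image.fromarray(np.clip(arr + noise, 0, 255).astype(np.uint8))

# Edit instruction taken from the study's examples; input file is hypothetical.
prompt = "A young boy in a blue shirt going into a brick house"
original = Image.open("girl_in_pink_dress.png").convert("RGB").resize((512, 512))
protected = add_perturbation(original)

# Same prompt, same settings: the study reports that the protected input tends
# to follow the text instruction *more* faithfully, not less.
edit_clean = pipe(prompt=prompt, image=original, strength=0.6, guidance_scale=7.5).images[0]
edit_protected = pipe(prompt=prompt, image=protected, strength=0.6, guidance_scale=7.5).images[0]
edit_clean.save("edit_clean.png")
edit_protected.save("edit_protected.png")
```

With a genuine protection tool supplying the perturbation instead of random noise, this is essentially the side-by-side comparison the researchers ran.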

The researchers attribute this unlikely scenario to the workings of diffusion models. These models encode images into a latent space before injecting noise in several steps. Generation of new images involves reversing this process, under the guidance of a text prompt. When adversarial noise is introduced from the get-go, uncertainty burgeons within the latent space. This prompts the model to lean more on the text instruction during the denoising process, which quite unexpectedly results in a final image that answers the text prompt more fittingly.
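To make that mechanism concrete, here is a simplified sketch of the img2img denoising loop, written as generic Python rather than any particular library's API; the vae, unet, and scheduler objects are placeholders. The step to watch is the classifier-free guidance update, where the prompt-conditioned prediction steers the latents: the less trustworthy the starting latents, the more that text-driven correction shapes the final image.

```python
# Simplified img2img denoising loop (placeholder objects, not a real library API).
import torch

def img2img_edit(image_latents, text_emb, uncond_emb, unet, scheduler, vae,
                 strength=0.6, guidance_scale=7.5):
    # The input image has already been encoded into latent space by the VAE.
    # An adversarially "protected" image arrives here with latents already
    # displaced from where the clean image's latents would sit.
    timesteps = scheduler.timesteps[int(len(scheduler.timesteps) * (1 - strength)):]

    # Forward process: inject noise into the image latents up to the chosen start step.
    latents = scheduler.add_noise(image_latents, torch.randn_like(image_latents), timesteps[:1])

    # Reverse process: denoise step by step under the guidance of the text prompt.
    for t in timesteps:
        noise_uncond = unet(latents, t, encoder_hidden_states=uncond_emb)
        noise_text = unet(latents, t, encoder_hidden_states=text_emb)

        # Classifier-free guidance: the (text - unconditional) term is the pull
        # toward the prompt. When the image-derived latents carry extra
        # perturbation, they constrain the result less, and this prompt-driven
        # term ends up dominating the final image.
        noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)
        latents = scheduler.step(noise_pred, t, latents)

    return vae.decode(latents)
```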

In their quest to leave AI perplexed, the creators of these tools appear to have ended up helping it along. These unexpected findings cast serious doubt on adversarial perturbation methods as reliable image protection and highlight the need for alternatives. One possible candidate is C2PA, a provenance framework that attaches metadata to images the moment they’re created. It won’t protect the image content itself, but it does offer a trail of breadcrumbs that can confirm an image’s authenticity.

Despite our best efforts to safeguard visual content, the technological magic called adversarial noise might be serving up the exact opposite of the protection it has long been believed to offer. In light of these findings, it’s clear that the game plan for combating AI misuse in visual media needs a rethink. For a more detailed account of this intriguing study, check out the original article on Unite.AI.

Max Krawiec

