TGArchive


🎨 AI De‑noiser: Off‑the‑shelf image‑to‑image models break image protection

Researchers have uncovered a surprising vulnerability: standard image‑to‑image AI models (like Stable Diffusion, DALL‑E and similar) can be repurposed as generic “de‑noisers” — they strip away protective perturbations added to images by dedicated protection schemes.

What does it mean?
Many services add invisible noise to images to guard against copying, style mimicry, or deepfake manipulation. It turns out that breaking this protection doesn’t require specialized attacks — you can just ask any generative model to “enhance” the picture.
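To make the principle concrete, here is a minimal toy sketch (not from the paper): a smooth "photo" gets a small high-frequency protective perturbation, and a plain box blur stands in for the generative de-noiser. The real attack uses an off-the-shelf image-to-image model; the blur is only an illustrative stand-in showing why any decent de-noiser tends to strip this kind of noise while leaving the underlying image mostly intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth synthetic "photo": low-frequency content, values in [0.3, 0.7].
yy, xx = np.mgrid[0:64, 0:64]
clean = 0.5 + 0.2 * np.sin(2 * np.pi * yy / 64) * np.cos(2 * np.pi * xx / 64)

# Protection schemes add a small, mostly high-frequency perturbation.
perturbation = 0.08 * rng.choice([-1.0, 1.0], size=clean.shape)
protected = np.clip(clean + perturbation, 0.0, 1.0)

def box_blur(img, k=3):
    """Naive k x k box blur: a crude stand-in for a learned de-noiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# "Purify" the protected image with the stand-in de-noiser.
purified = box_blur(protected)

# The purified image lands closer to the clean original than the
# protected one did, i.e. most of the protective noise is gone.
print("distance protected vs clean:", np.abs(protected - clean).mean())
print("distance purified  vs clean:", np.abs(purified - clean).mean())
```

Because the perturbation is high-frequency and the image content is not, averaging suppresses the noise far more than the signal; a diffusion-based image-to-image model exploits the same asymmetry, only with a learned prior instead of a fixed filter.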

The experiment:
The team evaluated 8 case studies spanning 6 different protection systems. In every case, off‑the‑shelf models outperformed previous purpose‑built attacks while preserving image quality from the adversary's perspective.

Bottom line:
Many current protection schemes offer a false sense of security. Any future image‑protection mechanism must be benchmarked against attacks from readily available GenAI tools.

🔗 Paper (arXiv, Feb 25, 2026): https://arxiv.org/abs/2602.22197
📄 PDF: https://arxiv.org/pdf/2602.22197

#AI #Security #Deepfake #GenerativeModels #ImageProtection #ScienceNews #Technology
