
[Paper Reading] Stealing Image-to-Image Translation Models With a Single Query (2024)

Source: https://blog.csdn.net/Glass_Gun/article/details/141687534


Abstract

Training deep neural networks requires significant computational resources and large datasets that are often confidential or expensive to collect. As a result, owners tend to protect their models by allowing access only via an API. Many works have demonstrated the possibility of stealing such protected models by repeatedly querying the API. However, to date, research has predominantly focused on stealing classification models, for which a very large number of queries has been found necessary. In this paper, we study the possibility of stealing image-to-image models. Surprisingly, we find that many such models can be stolen with as little as a single, small-sized query image using simple distillation. We study this phenomenon on a wide variety of model architectures, datasets, and tasks, including denoising, deblurring, deraining, super-resolution, and biological image-to-image translation. Remarkably, we find that the vulnerability to stealing attacks is shared by CNNs and by models with attention mechanisms, and that stealing is commonly possible even without knowing the architecture of the target model.
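The attack summarized above amounts to querying the protected API once with a small image and then distilling a student network from that single input/output pair. Below is a minimal sketch of this idea, assuming a PyTorch setup; the student architecture, patch sampling, loss, and hyperparameters are illustrative assumptions rather than the paper's exact recipe.

```python
# Minimal sketch: single-query distillation of an image-to-image model.
# Assumptions (not from the paper): a small residual CNN student, random
# 64x64 patch sampling from the one query pair, L1 loss, Adam optimizer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StudentNet(nn.Module):
    """A small convolutional student network (assumed architecture)."""

    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Residual formulation, common in image restoration networks.
        return x + self.body(x)


def random_patches(x, y, patch=64, n=16):
    """Sample n aligned patches from the single query image x and the API reply y."""
    _, _, h, w = x.shape
    xs, ys = [], []
    for _ in range(n):
        i = torch.randint(0, h - patch + 1, (1,)).item()
        j = torch.randint(0, w - patch + 1, (1,)).item()
        xs.append(x[:, :, i:i + patch, j:j + patch])
        ys.append(y[:, :, i:i + patch, j:j + patch])
    return torch.cat(xs), torch.cat(ys)


def steal_with_single_query(query_img, api_output, steps=2000, lr=1e-3):
    """Distill a student from the one (input, output) pair returned by the target API."""
    student = StudentNet()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        inp, tgt = random_patches(query_img, api_output)
        loss = F.l1_loss(student(inp), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student


# Hypothetical usage: query_img is the single image sent to the victim API and
# api_output is its returned translation; both are 1x3xHxW tensors in [0, 1].
# student = steal_with_single_query(query_img, api_output)
```

Training only on random crops of the single pair is what makes the one-query setting plausible: each crop acts as a separate training sample for the student, so no further queries to the victim model are needed.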
