
[Paper Reading] Enhance Model Stealing Attack via Label Refining (2022)

Source: https://blog.csdn.net/Glass_Gun/article/details/141310398

Abstract

With machine learning models being increasingly deployed, model stealing attacks have raised growing interest. Extracting decision-based models is a more challenging task because the class-similarity information is missing. In this paper, we propose a novel and effective model stealing method, Label Refining via Feature Distance (LRFD), to re-dig the class similarity. Specifically, since class similarity can be represented by the distances between samples from different classes in the feature space, we design a soft-label construction module inspired by prototype learning and transfer the knowledge contained in the soft labels to the substitute model. Extensive experiments conducted on four widely used datasets consistently demonstrate that our method yields a substitute model with significantly greater functional similarity.
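To make the core idea concrete, below is a minimal sketch of how soft labels might be built from feature-space distances to class prototypes, in the spirit of the abstract's description. This is not the paper's exact LRFD algorithm: the function name `build_soft_labels`, the use of class-mean prototypes, and the `temperature` parameter are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def build_soft_labels(features, hard_labels, num_classes, temperature=1.0):
    """Sketch: turn hard (top-1) labels into soft labels via feature distances.

    features:    (N, D) feature vectors of the query samples, e.g. from the
                 substitute model's feature extractor (assumption).
    hard_labels: (N,) hard labels returned by the decision-based victim model.
    Returns:     (N, num_classes) soft label distributions.
    """
    # 1. Estimate one prototype per class as the mean feature of the samples
    #    the victim assigned to that class (assumes every class appears at
    #    least once in the query batch).
    prototypes = torch.stack([
        features[hard_labels == c].mean(dim=0) for c in range(num_classes)
    ])                                                    # (C, D)

    # 2. Euclidean distance from every sample to every class prototype.
    dists = torch.cdist(features, prototypes)             # (N, C)

    # 3. Smaller distance -> higher class similarity; map negative distances
    #    to a probability distribution with a temperature-scaled softmax.
    soft_labels = F.softmax(-dists / temperature, dim=1)  # (N, C)
    return soft_labels
```

The resulting soft labels could then supervise the substitute model with a KL-divergence (distillation-style) loss instead of plain cross-entropy on the hard labels, which is one plausible way to "transfer the knowledge in the soft label to the substitute model" as the abstract describes.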
