Yaguan Qian
Unveiling the veil: high-frequency components as the key to understanding medical DNNs’ vulnerability to adversarial examples
Deep Neural Networks (DNNs) have demonstrated outstanding performance in various medical image processing tasks. However, recent studies have revealed a heightened vulnerability of medical DNNs to adversarial attacks compared to their natu…
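The abstract above points to high-frequency image components as the locus of adversarial vulnerability. A minimal way to see what "high-frequency component" means is a 1-D decomposition with a moving-average low-pass filter; this is an illustrative sketch only, not the paper's frequency analysis:

```python
# Illustrative sketch (not the paper's method): split a 1-D signal into a
# low-frequency part (moving average) and a high-frequency residual.
# Adversarial perturbations tend to concentrate in the residual.

def low_pass(signal, window=3):
    """Moving-average smoothing; edge samples average only the available neighbors."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

def high_pass(signal, window=3):
    """High-frequency residual: the original signal minus its low-pass component."""
    return [s - l for s, l in zip(signal, low_pass(signal, window))]

sig = [1.0, 1.0, 5.0, 1.0, 1.0]  # a sharp spike is a high-frequency event
residual = high_pass(sig)
assert max(residual) == residual[2]  # the spike dominates the residual
```

The same intuition carries over to 2-D images, where the low-pass/high-pass split is usually done with a Fourier or wavelet transform rather than a moving average.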
Rethinking multi‐spatial information for transferable adversarial attacks on speaker recognition systems
Adversarial attacks have been posing significant security concerns to intelligent systems, such as speaker recognition systems (SRSs). Most attacks assume the neural networks in the systems are known beforehand, while black‐box attacks are…
A Frequency Domain Adversarial Attack in Medical Image Analysis System
Convolutional neural networks (CNNs) have gained popularity in medical image analysis tasks, such as cancer diagnosis and lesion detection. However, recent research has revealed that medical deep learning systems are vulnerable to adversarial examp…
Robust Filter Pruning Guided by Deep Frequency-Features for Edge Intelligence
Developing Hessian-free second-order adversarial examples for adversarial training
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial training, which uses adversarial examples as training data, has been proven to be one of the most effec…
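The abstract describes adversarial training, which needs adversarial examples as training data. The classic first-order way to craft one is the fast gradient sign method (FGSM); the sketch below is that standard baseline on a toy logistic-regression model, not the paper's Hessian-free second-order method:

```python
import math

# FGSM-style sketch on a toy logistic-regression model (the classic
# first-order attack, NOT the paper's second-order method): perturb the
# input in the direction of the sign of the loss gradient.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    """Binary cross-entropy for one example with label y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, x, y, eps=0.25):
    """x' = x + eps * sign(dL/dx); for logistic loss, dL/dx = (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w, x, y = [2.0, -1.0], [1.0, 0.5], 1
x_adv = fgsm(w, x, y)
assert loss(w, x_adv, y) > loss(w, x, y)  # the perturbation raises the loss
```

Second-order methods refine this by also using curvature information of the loss surface, which is where Hessian(-free) techniques come in.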
F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations. This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnos…
Robust Backdoor Attacks on Object Detection in Real World
Deep learning models are widely deployed in many applications, such as object detection in various security fields. However, these models are vulnerable to backdoor attacks. Most backdoor attacks have been intensively studied on classification mode…
DP-FEDAW: FEDERATED LEARNING WITH DIFFERENTIAL PRIVACY IN NON-IID DATA
Federated learning can effectively utilize data from various users to collaboratively train machine learning models while ensuring that data never leaves the user's device. However, it also faces the challenge of slow global model convergen…
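The federated setup the abstract describes rests on server-side aggregation of client updates. The sketch below is generic FedAvg-style weighted averaging (illustrative only; DP-FedAW's differential-privacy noise and adaptive weighting for non-IID data are not reproduced here):

```python
# Generic FedAvg-style aggregation sketch: the server averages client model
# parameters weighted by each client's sample count, so clients holding more
# data pull the global model harder.

def fed_avg(client_weights, client_sizes):
    """client_weights: one parameter vector per client; client_sizes: samples per client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]  # the second client holds 3x the data
global_model = fed_avg(clients, sizes)
assert global_model == [2.5, 3.5]  # pulled toward the larger client
```

On non-IID data, plain sample-count weighting like this can converge slowly, which is the failure mode adaptive-weighting schemes target.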
Privacy amplification for wireless federated learning with Rényi differential privacy and subsampling
A key issue in current federated learning research is how to improve the performance of federated learning algorithms by reducing communication overhead and computing costs while ensuring data privacy. This paper proposes an efficient wire…
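The privacy side of the abstract rests on a noise-adding mechanism whose guarantees Rényi accounting and subsampling amplification then tighten. The sketch below is the standard Gaussian mechanism with gradient clipping (an illustrative baseline; the parameter names and the Rényi/subsampling analysis from the paper are not reproduced):

```python
import math
import random

# Gaussian-mechanism sketch commonly used in DP federated learning: clip each
# client update to L2 norm clip_norm, then add Gaussian noise whose standard
# deviation scales with clip_norm.

def clip_and_noise(update, clip_norm=1.0, noise_mult=1.0, rng=random):
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]  # now ||clipped|| <= clip_norm
    return [c + rng.gauss(0.0, noise_mult * clip_norm) for c in clipped]

# With noise_mult=0 only the clipping acts: [3, 4] (norm 5) shrinks to norm 1.
clipped = clip_and_noise([3.0, 4.0], clip_norm=1.0, noise_mult=0.0)
assert abs(math.sqrt(sum(c * c for c in clipped)) - 1.0) < 1e-9

noisy = clip_and_noise([3.0, 4.0], clip_norm=1.0, noise_mult=0.1,
                       rng=random.Random(0))  # seeded for reproducibility
```

Subsampling amplification then says that if each client participates only with some probability per round, the effective privacy loss per round is smaller than this mechanism's worst case.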
Towards the Desirable Decision Boundary by Moderate-Margin Adversarial Training
Adversarial training, as one of the most effective defense methods against adversarial attacks, tends to learn an inclusive decision boundary to increase the robustness of deep learning models. However, due to the large and unnecessary inc…
Hessian-Free Second-Order Adversarial Examples for Adversarial Learning
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial learning with such adversarial examples has been proven to be one of the most effective methods to defend …
Edge-Aware Guidance Fusion Network for RGB–Thermal Scene Parsing
RGB–thermal scene parsing has recently attracted increasing research interest in the field of computer vision. However, most existing methods fail to perform good boundary extraction for prediction maps and cannot fully use high-level feat…
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks
Edge intelligence has played an important role in constructing smart cities, but the vulnerability of edge nodes to adversarial attacks becomes an urgent problem. A so-called adversarial example can fool a deep learning model on an edge no…
A Method to Extract Pedestrian Biological Features from Video for Open-World Re-Identification
Crossmodality Person Reidentification Based on Global and Local Alignment
RGB‐infrared (RGB‐IR) person reidentification is a challenging problem in computer vision due to the large crossmodality difference between RGB and IR images. Most traditional methods only carry out feature alignment, which ignores the uniqu…
Edge-aware Guidance Fusion Network for RGB Thermal Scene Parsing
RGB thermal scene parsing has recently attracted increasing research interest in the field of computer vision. However, most existing methods fail to perform good boundary extraction for prediction maps and cannot fully use high level feat…
Person Re-identification based on Robust Features in Open-world
Deep learning technology promotes the rapid development of person re-identification (re-ID). However, several challenges remain in the open world. First, existing re-ID research usually assumes only one factor variable (view,…
Towards Speeding up Adversarial Training in Latent Spaces
Adversarial training is widely considered one of the most effective ways to defend against adversarial examples. However, existing adversarial training methods are prohibitively time-consuming because they need to generate adversari…
Exploring Security Vulnerabilities of Deep Learning Models by Adversarial Attacks
Nowadays, deep learning models play an important role in a variety of scenarios, such as image classification, natural language processing, and speech recognition. However, deep learning models have been shown to be vulnerable; a small change to…
Visually Imperceptible Adversarial Patch Attacks on Digital Images
The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted increasing attention. Many algorithms have been proposed to craft powerful adversarial examples. However, most of these algorithms modified the global or loca…
Towards Imperceptible Adversarial Image Patches Based on Network Explanations
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks
With the boom of edge intelligence, its vulnerability to adversarial attacks becomes an urgent problem. The so-called adversarial example can fool a deep learning model on the edge node to misclassify. Due to the property of transferabilit…
TEAM: We Need More Powerful Adversarial Examples for DNNs
Although deep neural networks (DNNs) have achieved success in many application fields, they remain vulnerable to imperceptible adversarial examples that can easily cause DNNs to misclassify. To overcome this challenge, many defensi…
TEAM: An Taylor Expansion-Based Method for Generating Adversarial Examples
Although Deep Neural Networks (DNNs) have achieved successful applications in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods to improve the robustness of DNNs, and it is ge…
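The Taylor-expansion idea behind a method like TEAM can be seen in miniature with a plain second-order Taylor model of a loss function (a generic sketch of the expansion itself, not the paper's algorithm):

```python
# Second-order Taylor model: the change in f for a step d from x is
# approximated by f'(x)*d + 0.5*f''(x)*d**2. Second-order attack methods
# choose the step that maximizes such a quadratic model of the loss.

def taylor2(f1, f2, d):
    """Change in f predicted by the quadratic model for a step d
    (f1 = first derivative at x, f2 = second derivative at x)."""
    return f1 * d + 0.5 * f2 * d * d

# For f(x) = x**2 at x = 1 (f' = 2, f'' = 2) the exact change for d = 0.1
# is 1.1**2 - 1**2 = 0.21, which the quadratic model matches exactly,
# since f is itself quadratic.
assert abs(taylor2(2.0, 2.0, 0.1) - 0.21) < 1e-12
```

For non-quadratic losses the model is only locally accurate, but it captures curvature that first-order sign methods ignore.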
EMSGD: An Improved Learning Algorithm of Neural Networks With Imbalanced Data
In this paper, the influence of data imbalance on neural networks is discussed, and an improved learning algorithm to solve this problem is proposed. The experimental results show that in the case of imbalanced data, the training error of …
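A standard remedy for the class imbalance the abstract discusses is to reweight the loss inversely to class frequency; the sketch below shows that baseline technique (illustrative only, not the EMSGD algorithm itself):

```python
from collections import Counter

# Inverse-frequency class weighting: each class's weight is
# total_samples / (num_classes * count_c), so minority-class errors
# contribute proportionally more to the training loss.

def inverse_frequency_weights(labels):
    counts = Counter(labels)
    total, k = len(labels), len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

labels = [0] * 90 + [1] * 10  # 9:1 imbalance
w = inverse_frequency_weights(labels)
assert abs(w[1] / w[0] - 9.0) < 1e-9  # minority class weighted 9x heavier
```

With balanced data every weight collapses to 1.0, so the weighting is a no-op exactly when it is not needed.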
Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolutional Neural Networks
Recent studies have shown that convolutional neural networks (CNNs) for image recognition are vulnerable to evasion attacks with carefully manipulated adversarial examples. Previous work primarily focused on how to generate adversarial examples c…
Pi-calculus based Bayesian Trust Web Service Composition
To enhance the reliability of trust Web service composition, Pi-calculus based formal verification of trust Web service composition is proposed. Bayesian trust Web service composition is first defined abstractly; then Pi-calculus is used …