Mengqi Xue
Dataset Ownership Verification in Contrastive Pre-trained Models
High-quality open-source datasets, which necessitate substantial curation effort, have become the primary catalyst for the swift progress of deep learning. Concurrently, protecting these datasets is paramount for the well-being of the …
Multi-Round Social Advertising: Ad Sequence Recommendation Via Influence Maximization
LG-CAV: Train Any Concept Activation Vector with Language Guidance
The concept activation vector (CAV) has attracted broad research interest in explainable AI by elegantly attributing model predictions to specific concepts. However, the training of a CAV often necessitates a large number of high-quality images…
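For background on the underlying technique: a CAV is typically obtained by training a linear classifier that separates a layer's activations for concept images from its activations for random images, and taking the (normalized) weight vector as the concept direction. The minimal Python sketch below uses made-up activation matrices (concept_acts, random_acts) purely for illustration; it is not the language-guided training proposed in the paper.

# Minimal CAV sketch: learn a linear direction in a model's activation
# space that separates "concept" images from random images.
# concept_acts / random_acts are placeholder matrices (n_images x n_features);
# in practice they would come from a hidden layer of a trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(50, 128))   # activations of concept images (toy data)
random_acts = rng.normal(loc=0.0, size=(50, 128))    # activations of random images (toy data)

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])    # unit-norm concept direction

# Concept sensitivity of a prediction is then the dot product of this CAV with
# the gradient of the class logit w.r.t. the same layer's activations (omitted here).
print(cav.shape)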
Exploration of the Combination of Online and Offline Teaching of Undergraduate Computer English
In undergraduate computer English teaching, the prior knowledge of undergraduates is uneven and their grasp of professional English terminology differs, which brings difficulties to the teaching…
On the Evaluation Consistency of Attribution-based Explanations
Attribution-based explanations have garnered increasing attention recently and have emerged as the predominant approach towards eXplainable Artificial Intelligence (XAI). However, the absence of consistent configurations and system…
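As generic background (not the evaluation protocol studied in the paper): an attribution method assigns each input feature a relevance score for a particular prediction. Gradient-times-input is one of the simplest variants; the toy Python sketch below uses a hypothetical linear scorer, where the gradient with respect to the input is just the weight vector.

# Gradient-x-input attribution for a toy linear scorer f(x) = w . x + b.
# For a linear model the input gradient equals w, so the attribution of
# feature i is w[i] * x[i]; deep models would use autodiff instead.
import numpy as np

w = np.array([0.5, -1.2, 2.0])
b = 0.1
x = np.array([1.0, 0.5, -0.3])

score = w @ x + b
attribution = w * x                    # gradient (= w) times input
print(score, attribution)
print(attribution.sum() + b)           # attributions sum back to the score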
A Comprehensive Study of Structural Pruning for Vision Models
Structural pruning has emerged as a promising approach for producing more efficient models. Nevertheless, the community suffers from a lack of standardized benchmarks and metrics, leaving progress in this area only partially understood. T…
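As a reminder of what structural pruning does (generic background, not the benchmark built in the paper): entire filters or channels are removed, often ranked by a simple importance score such as the L1 norm of their weights, so the resulting layer is genuinely smaller rather than merely sparse. A hypothetical Python sketch:

# L1-norm filter pruning sketch: drop the convolution filters with the
# smallest L1 norm, structurally shrinking the layer's output channels.
import numpy as np

rng = np.random.default_rng(0)
conv_weight = rng.normal(size=(16, 3, 3, 3))   # (out_channels, in_channels, kH, kW), toy values

l1_scores = np.abs(conv_weight).reshape(16, -1).sum(axis=1)
keep = np.argsort(l1_scores)[4:]               # prune the 4 weakest filters
keep.sort()                                    # preserve the original channel order

pruned_weight = conv_weight[keep]              # new shape: (12, 3, 3, 3)
print(conv_weight.shape, "->", pruned_weight.shape)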
Comparative Influence Maximization with Adaptive
Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation
Most existing online knowledge distillation (OKD) techniques typically require sophisticated modules to produce diverse knowledge for improving students' generalization ability. In this paper, we strive to fully utilize multi-model settings…
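To illustrate only the general idea of parameter hybridization, the sketch below builds a "hybrid" weight set as a convex combination of several peer students' weights. The mixing coefficients and the single-layer model are assumptions for illustration; this is not the paper's exact hybridization scheme.

# Illustrative parameter hybridization: mix the parameters of several peer
# students into one hybrid parameter set via a convex combination.
import numpy as np

rng = np.random.default_rng(0)
students = [{"fc": rng.normal(size=(10, 4))} for _ in range(3)]   # 3 toy peer models
alphas = np.array([0.5, 0.3, 0.2])                                # mixing weights, sum to 1

hybrid = {
    name: sum(a * s[name] for a, s in zip(alphas, students))
    for name in students[0]
}
print(hybrid["fc"].shape)   # same shape as each student's layer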
Schema Inference for Interpretable Image Classification
In this paper, we study a novel inference paradigm, termed schema inference, that learns to deductively infer explainable predictions by rebuilding the prior deep neural network (DNN) forwarding scheme, guided by the prevalent philo…
Jointly Complementary&Competitive Influence Maximization with Concurrent Ally-Boosting and Rival-Preventing
In this paper, we propose a new influence spread model, namely, the Complementary & Competitive Independent Cascade (C²IC) model. The C²IC model generalizes three well-known influence models, i.e., the influence boosting (IB) model, campaign obliv…
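For context, the single-campaign independent cascade model that C²IC generalizes works as follows: when a node becomes active, it gets one chance to activate each inactive out-neighbour, succeeding with the edge's propagation probability. The Python sketch below simulates one campaign on a hypothetical toy graph; it does not implement the C²IC model itself.

# Single-campaign independent cascade simulation on a toy directed graph.
# Each newly activated node tries once to activate each inactive out-neighbour.
import random

edges = {0: [(1, 0.5), (2, 0.3)], 1: [(3, 0.4)], 2: [(3, 0.6)], 3: []}  # node -> [(neighbour, prob)]

def independent_cascade(seeds, edges, rng):
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v, p in edges.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return active

rng = random.Random(0)
spreads = [len(independent_cascade({0}, edges, rng)) for _ in range(1000)]
print(sum(spreads) / len(spreads))   # Monte Carlo estimate of the expected spread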
Constituent Attention for Vision Transformers
Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks
Part-prototype networks (e.g., ProtoPNet, ProtoTree, and ProtoPool) have attracted broad research interest for their intrinsic interpretability and comparable accuracy to non-interpretable counterparts. However, recent works find that the …
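As background on how part-prototype networks such as ProtoPNet produce their explanations (a generic sketch, not the evaluation or improvement proposed in the paper): each learned prototype vector is compared against every spatial patch of a convolutional feature map, and the maximum similarity over locations becomes that prototype's activation. The feature map, prototype count, and log-similarity form below are illustrative assumptions.

# Prototype-to-patch similarity sketch in the spirit of ProtoPNet:
# squared L2 distance from each prototype to every spatial location,
# mapped to a similarity, then max-pooled over locations.
import numpy as np

rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 64))        # H x W x C feature map of one image (toy)
prototypes = rng.normal(size=(10, 64))    # 10 prototype vectors (toy)

patches = feat.reshape(-1, 64)            # 49 spatial patches
d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)   # (49, 10) distances
similarity = np.log((d2 + 1.0) / (d2 + 1e-4))                        # ProtoPNet-style log activation
proto_scores = similarity.max(axis=0)     # strongest patch per prototype
print(proto_scores.shape)                 # (10,) scores fed to the final classifier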
A Survey of Neural Trees
Neural networks (NNs) and decision trees (DTs) are both popular machine learning models, yet they come with mutually exclusive advantages and limitations. To bring the best of the two worlds together, a variety of approaches have been proposed to integra…
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
The prototypical part network (ProtoPNet) has drawn wide attention and spurred many follow-up studies thanks to its self-explanatory property for explainable artificial intelligence (XAI). However, when directly applying ProtoPNet on vision trans…
Meta-attention for ViT-backed Continual Learning
Continual learning is a longstanding research topic due to its crucial role in tackling continually arriving tasks. Up to now, the study of continual learning in computer vision has mainly been restricted to convolutional neural networks (CNNs).…
Knowledge Amalgamation for Object Detection with Transformers
Knowledge amalgamation (KA) is a novel deep model reusing task aiming to transfer knowledge from several well-trained teachers to a multi-talented and compact student. Currently, most of these approaches are tailored for convolutional neur…
Learn Decision Trees with Deep Visual Primitives
Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training
Recently, vision Transformers (ViTs) have been developing rapidly and are starting to challenge the dominance of convolutional neural networks (CNNs) in the realm of computer vision (CV). With the general-purpose Transformer architecture replacing …
KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation
Knowledge distillation (KD) has recently emerged as an efficacious scheme for learning compact deep neural networks (DNNs). Despite the promising results achieved, the rationale that explains the behavior of KD has remained largely u…
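For readers unfamiliar with vanilla KD (background only, not the attention model introduced in the paper): the student is trained to match the teacher's temperature-softened class distribution via a KL-divergence term, usually added to the ordinary cross-entropy on ground-truth labels. The logits and temperature below are illustrative values.

# Vanilla knowledge distillation objective: KL divergence between the
# teacher's and the student's temperature-softened class distributions.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

teacher_logits = np.array([4.0, 1.0, 0.5])   # toy logits
student_logits = np.array([3.0, 1.5, 0.2])
T = 4.0                                      # distillation temperature

p_t = softmax(teacher_logits / T)
p_s = softmax(student_logits / T)
kd_loss = (T ** 2) * np.sum(p_t * (np.log(p_t) - np.log(p_s)))   # KL(teacher || student)
print(kd_loss)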
Association between migration status and caesarean section delivery based on a modified Robson classification in China
Customizing Student Networks From Heterogeneous Teachers via Adaptive Knowledge Amalgamation
A massive number of well-trained deep networks have been released by developers online. These networks may focus on different tasks and in many cases are optimized for different datasets. In this paper, we study how to exploit such heterog…