Jiangmeng Li
On the Transferability and Discriminability of Representation Learning in Unsupervised Domain Adaptation
In this paper, we addressed the limitation of relying solely on distribution alignment and source-domain empirical risk minimization in Unsupervised Domain Adaptation (UDA). Our information-theoretic analysis showed that this standard adve…
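For context, the baseline objective the abstract critiques combines labeled source-domain risk with a marginal feature-alignment term. Below is a minimal, generic sketch of that objective in PyTorch, using an RBF-kernel MMD penalty as the alignment term; `encoder`, `classifier`, and the weight `lam` are illustrative placeholders and do not reproduce the paper's method.

```python
# Minimal sketch of the standard UDA objective: source ERM + distribution alignment.
import torch
import torch.nn.functional as F

def rbf_mmd(x_s, x_t, sigma=1.0):
    """Biased estimate of squared MMD between source/target features (RBF kernel)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x_s, x_s).mean() + kernel(x_t, x_t).mean() - 2 * kernel(x_s, x_t).mean()

def uda_loss(encoder, classifier, xs, ys, xt, lam=0.1):
    """Labeled source risk plus lambda-weighted marginal feature alignment."""
    zs, zt = encoder(xs), encoder(xt)
    erm = F.cross_entropy(classifier(zs), ys)  # source-domain empirical risk
    align = rbf_mmd(zs, zt)                    # distribution-alignment penalty
    return erm + lam * align
```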
CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning
Self-supervised topological deep learning (TDL) represents a nascent but underexplored area with significant potential for modeling higher-order interactions in simplicial complexes and cellular complexes to derive representations of unlab…
Rethinking the Bias of Foundation Model under Long-tailed Distribution
Long-tailed learning has garnered increasing attention due to its practical significance. Among the various approaches, the fine-tuning paradigm has gained considerable interest with the advent of foundation models. However, most existing …
Continual Test-Time Adaptation for Single Image Defocus Deblurring via Causal Siamese Networks
Single image defocus deblurring (SIDD) aims to restore an all-in-focus image from a defocused one. Distribution shifts in defocused images generally lead to performance degradation of existing methods during out-of-distribution inferences.…
Visual Reinforcement Learning Via Sequential Consistency Preserved Policy Contrast from Optimal Transport View
Integrating the Expression and Discrimination Via Bilateral Compensation for Molecular Property Prediction
Empowering Graph Contrastive Learning with Topological Rationale
M2I2: Learning Efficient Multi-Agent Communication via Masked State Modeling and Intention Inference
Communication is essential in coordinating the behaviors of multiple agents. However, existing methods primarily emphasize content, timing, and partners for information sharing, often neglecting the critical aspect of integrating shared in…
Neuromodulated Meta-Learning
Humans excel at adapting perceptions and actions to diverse environments, enabling efficient interaction with the external world. This adaptive capability relies on the biological nervous system (BNS), which activates different brain regio…
On the Generalization and Causal Explanation in Self-Supervised Learning
Self-supervised learning (SSL) methods learn from unlabeled data and achieve high generalization performance on downstream tasks. However, they may also suffer from overfitting to their training data and lose the ability to adapt to new ta…
Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective
Foundational Vision-Language models such as CLIP have exhibited impressive generalization in downstream tasks. However, CLIP suffers from a two-level misalignment issue, i.e., task misalignment and data misalignment, when adapting to speci…
Towards the Causal Complete Cause of Multi-Modal Representation Learning
Multi-Modal Learning (MML) aims to learn effective representations across modalities for accurate predictions. Existing methods typically focus on modality consistency and specificity to learn effective representations. However, from a cau…
Teleporter Theory: A General and Simple Approach for Modeling Cross-World Counterfactual Causality
Leveraging the development of structural causal model (SCM), researchers can establish graphical models for exploring the causal mechanisms behind machine learning techniques. As the complexity of machine learning applications rises, singl…
Revisiting Spurious Correlation in Domain Generalization
Existing machine learning techniques may learn spurious correlations that depend on the domain, which degrade the generalization of models in out-of-distribution (OOD) scenarios. To address this issue, recent w…
Interventional Imbalanced Multi-Modal Representation Learning via $β$-Generalization Front-Door Criterion
Multi-modal methods generally outperform uni-modal methods. However, the imbalanced contributions of different modalities to task-dependent predictions consistently degrade the discriminative performance of canonical mult…
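For reference, the classical front-door criterion that the paper's $β$-generalization extends identifies the causal effect of a treatment $X$ on an outcome $Y$ through a mediator $M$; the generalized $β$-form itself is not reproduced here.

```latex
% Pearl's front-door adjustment through a mediator M
P(y \mid \mathrm{do}(x)) = \sum_{m} P(m \mid x) \sum_{x'} P(y \mid x', m)\, P(x')
```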
Introducing Diminutive Causal Structure into Graph Representation Learning
When engaging in end-to-end graph representation learning with Graph Neural Networks (GNNs), the intricate causal relationships and rules inherent in graph data pose a formidable challenge for the model in accurately capturing authentic da…
MSI: Multi-modal Recommendation via Superfluous Semantics Discarding and Interaction Preserving
Multi-modal recommendation aims at leveraging data of auxiliary modalities (e.g., linguistic descriptions and images) to enhance the representations of items, thereby accurately recommending items that users prefer from the vast expanse of…
Learning Invariant Causal Mechanism from Vision-Language Models
Contrastive Language-Image Pretraining (CLIP) has achieved remarkable success, but its performance can degrade when fine-tuned in out-of-distribution (OOD) scenarios. We model the prediction process using a Structural Causal Model (SCM) an…
Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning
Graph contrastive learning (GCL) aims to align the positive features while differentiating the negative features in the latent space by minimizing a pair-wise contrastive loss. As the embodiment of an outstanding discriminative unsupervise…
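Concretely, the pair-wise contrastive loss described above is typically an InfoNCE/NT-Xent objective over two augmented views of each graph. A minimal sketch, assuming `z1` and `z2` are graph-level embeddings of the same batch under two augmentations from any GNN encoder (not this paper's specific model):

```python
# NT-Xent-style pair-wise contrastive loss: align positives, repel negatives.
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z1, z2, tau=0.5):
    """z1[i] and z2[i] are two views of the same graph (positives);
    all other pairs in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # cosine similarities scaled by temperature
    targets = torch.arange(z1.size(0), device=z1.device)  # i-th row matches i-th column
    return F.cross_entropy(logits, targets)
```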
T2MAC: Targeted and Trusted Multi-Agent Communication through Selective Engagement and Evidence-Driven Integration
Communication stands as a potent mechanism to harmonize the behaviors of multiple agents. However, existing work primarily concentrates on broadcast communication, which not only lacks practicality, but also leads to information redundancy…
Rethinking Causal Relationships Learning in Graph Neural Networks
Graph Neural Networks (GNNs) demonstrate their significance by effectively modeling complex interrelationships within graph-structured data. To enhance the credibility and robustness of GNNs, it becomes exceptionally crucial to bolster the…
Rethinking Dimensional Rationale in Graph Contrastive Learning from Causal Perspective
Graph contrastive learning is a general learning paradigm excelling at capturing invariant information from diverse perturbations in graphs. Recent works focus on exploring the structural rationale from graphs, thereby increasing the discr…
Graph Partial Label Learning with Potential Cause Discovering
Graph Neural Networks (GNNs) have garnered widespread attention for their potential to address the challenges of graph representation learning over complex graph-structured data across various domains. However, due to the inhe…
BayesPrompt: Prompting Large-Scale Pre-Trained Language Models on Few-shot Inference via Debiased Domain Abstraction
As a novel and effective fine-tuning paradigm based on large-scale pre-trained language models (PLMs), prompt-tuning aims to reduce the gap between downstream tasks and pre-training objectives. While prompt-tuning has yielded continuous ad…
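As background for the paradigm the abstract describes, here is a generic sketch of soft prompt-tuning, where a small set of learnable vectors is prepended to a frozen PLM's input embeddings. It assumes a Hugging Face-style model interface and does not reproduce BayesPrompt's debiased domain abstraction.

```python
# Generic soft prompt-tuning: only the prompt embeddings are trained.
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    def __init__(self, plm, n_prompt_tokens=20):
        super().__init__()
        self.plm = plm  # frozen pre-trained LM (assumed Hugging Face-style)
        for p in self.plm.parameters():
            p.requires_grad = False  # pre-training knowledge stays fixed
        dim = plm.get_input_embeddings().embedding_dim
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok = self.plm.get_input_embeddings()(input_ids)
        batch = tok.size(0)
        # Prepend the learned prompt to every sequence in the batch.
        embeds = torch.cat([self.prompt.expand(batch, -1, -1), tok], dim=1)
        pad = torch.ones(batch, self.prompt.size(0),
                         device=attention_mask.device, dtype=attention_mask.dtype)
        mask = torch.cat([pad, attention_mask], dim=1)
        return self.plm(inputs_embeds=embeds, attention_mask=mask)
```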
Unsupervised social event detection via hybrid graph contrastive learning and reinforced incremental clustering