Kien Do
Universal Multi-Domain Translation via Diffusion Routers
Multi-domain translation (MDT) aims to learn translations between multiple domains, yet existing approaches either require fully aligned tuples or can only handle domain pairs seen in training, limiting their practicality and excluding man…
Learning Structural Causal Models from Ordering: Identifiable Flow Models
In this study, we address causal inference when only observational data and a valid causal ordering from the causal graph are available. We introduce a set of flow models that can recover component-wise, invertible transformations of exogen…
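As a rough illustration of the setting (not the paper's model), a known causal ordering lets one fit a triangular map in which each variable depends only on its predecessors in the ordering and its own exogenous noise, so the noise can be recovered component by component by inverting one coordinate at a time. The linear form, the coefficients, and the toy data below are illustrative assumptions; a learned flow would replace them with neural conditioners.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM consistent with the ordering (x1, x2, x3): each variable depends
# only on its predecessors in the ordering and its own exogenous noise.
u = rng.normal(size=(1000, 3))
x1 = u[:, 0]
x2 = 0.8 * x1 + u[:, 1]
x3 = -0.5 * x1 + 0.3 * x2 + u[:, 2]
x = np.stack([x1, x2, x3], axis=1)

def recover_noise(x, coefs):
    """Invert the triangular map component-wise, following the ordering."""
    u_hat = np.empty_like(x)
    for i, c in enumerate(coefs):
        u_hat[:, i] = x[:, i] - x[:, :i] @ c
    return u_hat

# Hypothetical (here: ground-truth) conditioner weights for each component.
coefs = [np.array([]), np.array([0.8]), np.array([-0.5, 0.3])]
print(np.allclose(recover_noise(x, coefs), u))  # True: exogenous noise recovered
```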
h-Edit: Effective and Flexible Diffusion-Based Editing via Doob's h-Transform
We introduce a theoretical framework for diffusion-based image editing by formulating it as a reverse-time bridge modeling problem. This approach modifies the backward process of a pretrained diffusion model to construct a bridge that conv…
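For background, the generic form of Doob's h-transform (stated here as textbook material, not as the paper's exact construction) shows how conditioning a diffusion on a terminal event adds a score-like correction to its drift:

```latex
% Conditioning the SDE  dX_t = f(X_t,t) dt + g(t) dW_t  on a terminal event
% (e.g. ending at an edited image) replaces its drift with an h-corrected one:
\[
  \mathrm{d}X_t = \bigl[\, f(X_t, t) + g(t)^2 \,\nabla_{x} \log h(X_t, t) \,\bigr]\,\mathrm{d}t
                  + g(t)\, \mathrm{d}W_t ,
\]
% where h(x, t) = p(\text{event} \mid X_t = x). The added term steers the process
% toward states from which the event is likely, which is the sense in which a
% pretrained diffusion model's backward process can be bent into a bridge.
```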
Bidirectional Diffusion Bridge Models
Diffusion bridges have shown potential in paired image-to-image (I2I) translation tasks. However, existing methods are limited by their unidirectional nature, requiring separate models for forward and reverse translations. This not only do…
Finding the Trigger: Causal Abductive Reasoning on Video Events
This paper introduces a new problem, Causal Abductive Reasoning on Video Events (CARVE), which involves identifying causal relationships between events in a video and generating hypotheses about causal chains that account for the occurrenc…
Predicting the Reliability of an Image Classifier under Image Distortion
In image classification tasks, deep learning models are vulnerable to image distortions, i.e., their accuracy drops significantly if the input images are distorted. An image classifier is considered "reliable" if its accuracy on distorted im…
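To make the stated vulnerability concrete, the sketch below measures accuracy as a distortion level grows; the linear "classifier", the random data, and Gaussian noise as the distortion are stand-ins, not the paper's setup or its reliability predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a fixed linear "classifier" over flattened 8x8 images and random
# data labelled by that classifier, so accuracy is 100% at distortion level 0.
W = rng.normal(size=(64, 10))
images = rng.normal(size=(500, 64))
labels = (images @ W).argmax(axis=1)

def accuracy_under_noise(sigma):
    """Accuracy after adding Gaussian noise with standard deviation sigma."""
    distorted = images + rng.normal(scale=sigma, size=images.shape)
    return float(((distorted @ W).argmax(axis=1) == labels).mean())

for sigma in [0.0, 0.5, 1.0, 2.0]:
    print(f"distortion sigma={sigma}: accuracy={accuracy_under_noise(sigma):.3f}")
```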
A District-level Ensemble Model to Enhance Dengue Prediction and Control for the Mekong Delta Region of Vietnam
The Mekong Delta Region of Vietnam faces increasing dengue risks driven by urbanization, globalization, and climate change. This study introduces a probabilistic forecasting model for predicting dengue incidence and outbreaks with one to t…
Generating Realistic Tabular Data with Large Language Models
While most generative models have shown strong results in image data generation, few have been developed for tabular data generation. Recently, owing to the success of large language models (LLMs) in diverse tasks, they have also been used for tabular data gen…
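The usual recipe behind LLM-based tabular generation, sketched generically here (the column names and the "column is value" template are illustrative, not taken from the paper), is to serialize each row as text, train or prompt a language model on those strings, and parse sampled strings back into rows.

```python
import re

def row_to_text(row: dict) -> str:
    """Serialize one table row as a 'column is value' sentence."""
    return ", ".join(f"{col} is {val}" for col, val in row.items()) + "."

def text_to_row(text: str) -> dict:
    """Parse a generated sentence back into a row (inverse of row_to_text)."""
    return {col: val.strip() for col, val in re.findall(r"(\w+) is ([^,\.]+)", text)}

row = {"age": 42, "income": 58000, "occupation": "nurse"}
encoded = row_to_text(row)
print(encoded)               # age is 42, income is 58000, occupation is nurse.
print(text_to_row(encoded))  # {'age': '42', 'income': '58000', 'occupation': 'nurse'}
# A language model fine-tuned or prompted on such sentences can then sample new
# ones, which are parsed back into synthetic rows.
```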
Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning
Effective decision-making in partially observable environments demands robust memory management. Despite their success in supervised learning, current deep-learning memory models struggle in reinforcement learning environments that are par…
Combining Deep Reinforcement Learning and Search with Generative Models for Game-Theoretic Opponent Modeling
Opponent modeling methods typically involve two crucial steps: building a belief distribution over opponents' strategies, and exploiting this opponent model by playing a best response. However, existing approaches typically require domain-…
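The two steps named in the abstract, maintaining a belief over opponent strategies and playing a best response, can be sketched on a tiny matrix game; the payoff table, the two candidate strategies, and the observed actions below are made up for illustration, whereas the paper replaces them with learned generative models, deep RL, and search.

```python
import numpy as np

# Our payoff matrix: rows = our actions, columns = opponent actions.
payoff = np.array([[ 1.0, -1.0],
                   [-0.5,  2.0]])

# Candidate opponent strategies (distributions over the opponent's actions).
strategies = np.array([[0.9, 0.1],   # "mostly plays action 0"
                       [0.2, 0.8]])  # "mostly plays action 1"

belief = np.array([0.5, 0.5])        # prior over which strategy the opponent uses

def update_belief(belief, observed_action):
    """Bayesian update of the belief after observing one opponent action."""
    posterior = belief * strategies[:, observed_action]
    return posterior / posterior.sum()

def best_response(belief):
    """Action maximizing expected payoff against the belief-averaged opponent."""
    opponent_mix = belief @ strategies
    return int(np.argmax(payoff @ opponent_mix))

for action in [1, 1, 0, 1]:          # observed opponent actions
    belief = update_belief(belief, action)
print("posterior belief:", belief.round(3))
print("best response:", best_response(belief))
```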
Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data. PANM integrates an external neural memory that uses novel physical addresses and pointer man…
Variational Flow Models: Flowing in Your Style
We propose a systematic training-free method to transform the probability flow of a "linear" stochastic process characterized by the equation X_t = a_t X_0 + σ_t X_1 into a straight constant-speed (SC) flow, reminiscent of Rectified Fl…
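As a small numerical illustration of that linear process, the special case a_t = 1 - t, σ_t = t is already a straight constant-speed flow: X_t moves along the segment from X_0 to X_1 with constant velocity X_1 - X_0. The curved cosine/sine schedule below is just one assumed example of a non-SC process; the transformation method itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 2))   # samples of X_0
x1 = rng.normal(size=(4, 2))   # samples of X_1

def linear_process(t, a_t, sigma_t):
    """X_t = a_t * X_0 + sigma_t * X_1 for a given schedule (a_t, sigma_t)."""
    return a_t(t) * x0 + sigma_t(t) * x1

# An assumed curved schedule (cosine/sine) versus the straight constant-speed
# (SC) schedule a_t = 1 - t, sigma_t = t.
a_cos, s_cos = (lambda t: np.cos(0.5 * np.pi * t)), (lambda t: np.sin(0.5 * np.pi * t))
a_sc, s_sc = (lambda t: 1.0 - t), (lambda t: t)

# Under the SC schedule the velocity dX_t/dt = X_1 - X_0 is constant in t.
dt = 1e-3
for t in [0.1, 0.5, 0.9]:
    vel = (linear_process(t + dt, a_sc, s_sc) - linear_process(t, a_sc, s_sc)) / dt
    print(np.allclose(vel, x1 - x0))          # True at every t

# The curved schedule visits different intermediate points for the same endpoints.
print(np.allclose(linear_process(0.5, a_cos, s_cos), linear_process(0.5, a_sc, s_sc)))  # False
```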
Revisiting the Dataset Bias Problem from a Statistical Perspective
In this paper, we study the "dataset bias" problem from a statistical standpoint, and identify the main cause of the problem as the strong correlation between a class attribute u and a non-class attribute b in the input x, represented by p…
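A standard statistical baseline for weakening such a u-b correlation (shown here for context; it is not the paper's proposed method) is to reweight each example by p(u)p(b)/p(u, b), so that under the reweighted data u and b look independent. The toy binary attributes and the 90% correlation below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy biased dataset: class attribute u and non-class attribute b agree 90% of the time.
n = 10_000
u = rng.integers(0, 2, size=n)
b = np.where(rng.random(n) < 0.9, u, 1 - u)

# Empirical joint and marginals of (u, b).
joint = np.zeros((2, 2))
np.add.at(joint, (u, b), 1.0)
joint /= n
p_u, p_b = joint.sum(axis=1), joint.sum(axis=0)

# Weight each sample by p(u)p(b) / p(u, b): over-represented (bias-aligned)
# pairs are down-weighted, rare (bias-conflicting) pairs are up-weighted.
weights = (p_u[u] * p_b[b]) / joint[u, b]

print(f"covariance of (u, b) before: {np.cov(u, b)[0, 1]:.3f}")
print(f"covariance after reweighting: {np.cov(u, b, aweights=weights)[0, 1]:.3f}")  # ~0
```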
Domain Generalisation via Risk Distribution Matching
We propose a novel approach for domain generalisation (DG) leveraging risk distributions to characterise domains, thereby achieving domain invariance. Our findings show that risk distributions effectively highlight differences between training d…
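A minimal sketch of the core idea, characterizing each domain by the distribution of its per-sample risks and penalizing mismatch between domains; the Gaussian-kernel squared MMD and the synthetic losses are assumptions, and the paper's exact statistic and training objective may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2(x, y, bandwidth=1.0):
    """Squared MMD between two 1-D samples under a Gaussian kernel."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Per-sample risks (losses) of the same model on two training domains;
# synthetic here, but in practice produced by the current network.
risk_domain_a = rng.gamma(shape=2.0, scale=0.5, size=256)
risk_domain_b = rng.gamma(shape=2.0, scale=1.5, size=256)

penalty = mmd2(risk_domain_a, risk_domain_b)
print(f"risk-distribution mismatch penalty: {penalty:.4f}")
# Adding such a penalty to the average risk nudges the model toward predictors
# whose loss distribution looks the same across training domains.
```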
Beyond Surprise: Improving Exploration Through Surprise Novelty
We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven explorations. The reward is the novelty of the surprise rather than the surprise norm. We estimate …
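A tiny sketch of the distinction the abstract draws, rewarding the novelty of the surprise (its distance to a memory of past surprise vectors) rather than the surprise norm itself; the memory, the k-nearest distance, and the toy surprise vectors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
memory = []  # memory of past surprise (prediction-error) vectors

def surprise_novelty_reward(surprise, k=5):
    """Intrinsic reward = distance of the surprise to its nearest past surprises."""
    if not memory:
        memory.append(surprise)
        return float(np.linalg.norm(surprise))
    dists = np.linalg.norm(np.stack(memory) - surprise, axis=1)
    memory.append(surprise)
    return float(np.sort(dists)[:k].mean())

# A large but *repetitive* surprise (e.g. a noisy-TV state) quickly stops being
# rewarding, while a smaller but novel surprise still is.
repetitive = np.full(8, 5.0)
novel = np.array([0.5, -0.5, 0.2, 0.0, 0.1, -0.2, 0.3, 0.0])

print([round(surprise_novelty_reward(repetitive + 0.01 * rng.normal(size=8)), 3)
       for _ in range(3)])
print(round(surprise_novelty_reward(novel), 3))
```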
Social Motivation for Modelling Other Agents under Partial Observability in Decentralised Training
Understanding other agents is a key challenge in constructing artificial social agents. Current works focus on centralised training, wherein agents are allowed to know all the information about others and the environmental state during tra…
Memory-Augmented Theory of Mind Network
Social reasoning necessitates the capacity of theory of mind (ToM), the ability to contextualise and attribute mental states to others without having access to their internal cognitive structure. Recent machine learning approaches to ToM h…
Causal Inference via Style Transfer for Out-of-distribution Generalisation
Out-of-distribution (OOD) generalisation aims to build a model that can generalise well on an unseen target domain using knowledge from multiple source domains. To this end, the model should seek the causal dependence between inputs and la…
Face Swapping as A Simple Arithmetic Operation
We propose a novel high-fidelity face swapping method called "Arithmetic Face Swapping" (AFS) that explicitly disentangles the intermediate latent space W+ of a pretrained StyleGAN into the "identity" and "style" subspaces so that a latent…
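To show the kind of latent arithmetic the title refers to, at the level of array shapes only: the 18 x 512 W+ layout matches StyleGAN2 at 1024 x 1024, but the split into "identity" and "style" layers and the swap rule below are hypothetical placeholders, not the learned disentanglement AFS actually proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# W+ codes of a source face (identity donor) and a target face (style donor),
# e.g. obtained by GAN inversion; random here for illustration.
LAYERS, DIM = 18, 512                          # StyleGAN2 W+ layout at 1024x1024
w_source = rng.normal(size=(LAYERS, DIM))
w_target = rng.normal(size=(LAYERS, DIM))

# Hypothetical split of W+ into "identity" and "style" subspaces. AFS learns
# this disentanglement; a fixed layer split is used here purely as a placeholder.
identity_layers = slice(0, 8)

w_swap = w_target.copy()
w_swap[identity_layers] = w_source[identity_layers]   # simple arithmetic in W+

# w_swap would then be decoded by the pretrained StyleGAN generator.
print(w_swap.shape)   # (18, 512)
```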
Rare T263P epidermal growth factor receptor extracellular domain mutation of advanced non-small cell lung cancer with benefit of the first-line afatinib in a Vietnamese male patient
Background: The T263P mutation is one of the rare EGFR mutations, located at 7p11.2; it is a change in the amino acid residue at position 263 of the epidermal growth factor receptor protein, where L-threonine is replaced by L-proline. This mis…
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using training data. The main idea is to use a genera…
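The "main idea" the abstract begins to describe, a generator synthesizing the transfer set while the student imitates the teacher on it, looks roughly like the loop below; the tiny MLPs, random teacher weights, and loss choices are stand-ins, and MAD's momentum copy of the student is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NOISE_DIM, INPUT_DIM, NUM_CLASSES = 16, 32, 10

teacher = nn.Sequential(nn.Linear(INPUT_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES))
student = nn.Sequential(nn.Linear(INPUT_DIM, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, INPUT_DIM))
for p in teacher.parameters():
    p.requires_grad_(False)      # the teacher is fixed, and no real data is used anywhere

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def teacher_student_kl(x):
    return F.kl_div(F.log_softmax(student(x), dim=1),
                    F.softmax(teacher(x), dim=1), reduction="batchmean")

for step in range(200):
    z = torch.randn(64, NOISE_DIM)

    # Generator step: synthesize inputs on which student and teacher disagree.
    g_loss = -teacher_student_kl(generator(z))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Student step: match the teacher on the (detached) synthesized inputs.
    s_loss = teacher_student_kl(generator(z).detach())
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()

print(f"final student-teacher KL on synthetic inputs: {s_loss.item():.4f}")
```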
Black-box Few-shot Knowledge Distillation
Knowledge distillation (KD) is an efficient approach to transfer the knowledge from a large "teacher" network to a smaller "student" network. Traditional KD methods require lots of labeled training samples and a white-box teacher (paramete…
Defense Against Multi-target Trojan Attacks
Adversarial attacks on deep learning-based models pose a significant threat to the current AI infrastructure. Among them, Trojan attacks are the hardest to defend against. In this paper, we first introduce a variation of the Badnet kind of…
Episodic Policy Gradient Training
We introduce a novel training procedure for policy gradient methods wherein episodic memory is used to optimize the hyperparameters of reinforcement learning algorithms on-the-fly. Unlike other hyperparameter searches, we formulate hyperpa…
Learning to Constrain Policy Optimization with Virtual Trust Region
We introduce a constrained optimization method for policy gradient reinforcement learning, which uses a virtual trust region to regulate each policy update. In addition to using the proximity of one single old policy as the normal trust re…
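For reference, the "normal trust region" built from a single old policy amounts to a KL-penalized update like the snippet below; the paper's contribution is to construct a virtual trust region from more than this one old policy. The toy bandit policy, the advantage estimates, and the penalty coefficient are assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
NUM_ACTIONS = 4
logits = torch.zeros(NUM_ACTIONS, requires_grad=True)      # current policy parameters
old_logits = torch.tensor([0.5, 0.0, -0.5, 0.0])            # frozen "old" policy
advantages = torch.tensor([1.0, -0.2, 0.1, -0.5])           # toy advantage estimates
opt = torch.optim.Adam([logits], lr=0.05)
beta = 1.0                                                   # trust-region penalty weight

for step in range(100):
    pi = F.softmax(logits, dim=0)
    pi_old = F.softmax(old_logits, dim=0)
    expected_advantage = (pi * advantages).sum()
    kl = (pi * (torch.log(pi) - torch.log(pi_old))).sum()   # KL(pi || pi_old)
    loss = -(expected_advantage - beta * kl)                 # improve, but stay near pi_old
    opt.zero_grad(); loss.backward(); opt.step()

print("updated policy:", F.softmax(logits, dim=0).detach().numpy().round(3))
```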
Learning Theory of Mind via Dynamic Traits Attribution
Machine learning of Theory of Mind (ToM) is essential to build social agents that co-live with humans and other agents. This capacity, once acquired, will help machines infer the mental states of others from observed contextual action traj…