Hugh Chen
ExplaiNAble BioLogical Age (ENABL Age): an artificial intelligence framework for interpretable biological age
Funded by the National Science Foundation and National Institutes of Health.
An explainable AI framework for interpretable biological age
Background An individual’s biological age is a measurement of health status and provides a mechanistic understanding of aging. Age clocks estimate an individual’s biological age from their various features. Existing clocks have key…
Contrastive Corpus Attribution for Explaining Representations
Despite the widespread use of unsupervised models, very few methods are designed to explain them. Most explanation methods explain a scalar model output. However, unsupervised models output representation vectors, the elements of which are…
Algorithms to estimate Shapley value feature attributions
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both a theoretical and computational standpoint. We disentangle this complexity into two factors:…
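The abstract above concerns estimating Shapley value feature attributions. The underlying definition can be illustrated with a minimal, exact sketch that enumerates all coalitions; the toy model, inputs, and baseline here are illustrative assumptions, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley feature attributions by enumerating all coalitions.
    Features absent from a coalition are replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

f = lambda z: z[0] + 2 * z[1]  # hypothetical toy linear model
phi = shapley_values(f, x=[3.0, 1.0], baseline=[0.0, 0.0])
# For a linear model, each attribution recovers coef_i * (x_i - baseline_i)
```

The exponential cost of this enumeration is exactly why the estimation algorithms surveyed in the paper exist; the sketch only makes the exact target of those estimators concrete.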
Interpretable Machine Learning Prediction of All-Cause Mortality
Background Unlike linear models, complex machine learning models can capture non-linear interrelations and provide opportunities to identify novel risk factors. Explainable artificial intelligence can improve prediction accuracy and reveal…
Uncovering expression signatures of synergistic drug response using an ensemble of explainable AI models
Complex machine learning models are poised to revolutionize the treatment of diseases like acute myeloid leukemia (AML) by helping physicians choose optimal combinations of anti-cancer drugs based on molecular features. While accurate pred…
Explaining a Series of Models by Propagating Local Feature Attributions
Pipelines involving a series of several machine learning models (e.g., stacked generalization ensembles, neural network feature extractors) improve performance in many domains but are difficult to understand. To improve their transparency,…
Explaining a Series of Models by Propagating Shapley Values
Local feature attribution methods are increasingly used to explain complex machine learning models. However, current methods are limited because they are extremely expensive to compute or are not capable of explaining a distributed series …
Interpretable machine learning prediction of all-cause mortality
Prior studies on all-cause mortality traditionally use linear models; however, the growing field of explainable artificial intelligence (XAI) can improve prediction accuracy over traditional linear models using complex machine learning (ML) mo…
True to the Model or True to the Data?
A variety of recent papers discuss the application of Shapley values, a concept for explaining coalitional games, for feature attribution in machine learning. However, the correct way to connect a machine learning model to a coalitional ga…
Forecasting adverse surgical events using self-supervised transfer learning for physiological signals
Hundreds of millions of surgical procedures take place annually across the world, which generate a prevalent type of electronic health record (EHR) data comprising time series physiological signals. Here, we present a transferable embeddin…
Deep Transfer Learning for Physiological Signals.
Deep learning is increasingly common in healthcare, yet transfer learning for physiological signals (e.g., temperature, heart rate, etc.) is under-explored. Here, we present a straightforward, yet performant framework for transferring know…
Explaining Models by Propagating Shapley Values of Local Components
In healthcare, making the best possible predictions with complex models (e.g., neural networks, ensembles/stacks of different models) can impact patient welfare. In order to make these complex models explainable, we present DeepSHAP for mi…
Explainable AI for Trees: From Local Explanations to Global Understanding
Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are the most popular non-linear predictive models used in practice today, yet comparatively little attention has been paid to explaining …
Hybrid Gradient Boosting Trees and Neural Networks for Forecasting Operating Room Data
Time series data constitutes a distinct and growing problem in machine learning. As the corpus of time series data grows larger, deep models that simultaneously learn features and classify with these features can be intractable or suboptim…
Anesthesiologist-level forecasting of hypoxemia with only SpO2 data using deep learning
We use a deep learning model trained only on a patient's blood oxygenation data (measurable with an inexpensive fingertip sensor) to predict impending hypoxemia (low blood oxygen) more accurately than trained anesthesiologists with access …
Checkpoint Ensembles: Ensemble Methods from a Single Training Process
We present the checkpoint ensembles method that can learn ensemble models on a single training process. Although checkpoint ensembles can be applied to any parametric iterative learning technique, here we focus on neural networks. Neural n…
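The core idea of checkpoint ensembles — snapshot the model at points during a single training run and average the snapshots' predictions — can be sketched with a toy logistic-regression model trained by gradient descent; the data, learning rate, and number of snapshots averaged are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical linearly separable binary classification data
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
checkpoints = []
for epoch in range(30):
    # one epoch of full-batch gradient descent on the logistic loss
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / len(y)
    checkpoints.append(w.copy())  # save a snapshot after every epoch

# Checkpoint ensemble: average the predictions of the last few snapshots,
# rather than keeping only the final model
ensemble_p = np.mean([sigmoid(X @ wk) for wk in checkpoints[-5:]], axis=0)
acc = np.mean((ensemble_p > 0.5) == y)
```

The same pattern applies unchanged to neural networks, where the snapshots differ more substantially and averaging their predictions yields the variance-reduction benefit the abstract describes.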