Carlos Scheidegger
Persistent Classification: Understanding Adversarial Attacks by Studying Decision Boundary Dynamics
There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high‐dimensionality of the data, the high codimension in the ambient space of the data manifolds of interest, …
Persistent Classification: A New Approach to Stability of Data and Adversarial Examples
There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high-dimensionality of the data, high codimension in the ambient space of the data manifolds of interest, and …
Reducing Access Disparities in Networks using Edge Augmentation
In social networks, a node's position is, in and of itself, a form of social capital. Better-positioned members not only benefit from (faster) access to diverse information, but innately have more potential influence on information spread.…
IEEE Visualization and Graphics Technical Community (VGTC)
The VGTC is actively involved in national initiatives that study and promote the immediate and long-range challenges in its core and related research areas.
Traveler: Navigating Task Parallel Traces for Performance Analysis
Understanding the behavior of software in execution is a key step in identifying and fixing performance issues. This is especially important in high performance computing contexts where even minor performance tweaks can translate into larg…
VIS 2021 Area Curation Committee
VIS 2021 Conference Committee
NeuralCubes: Deep Representations for Visual Data Exploration
Visual exploration of large multi-dimensional datasets has seen tremendous progress in recent years, allowing users to express rich data queries that produce informative visual summaries, all in real time. Techniques based on data cubes ar…
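The data-cube idea this abstract alludes to can be made concrete with a minimal sketch (this is not the NeuralCubes model itself, and the table and column names below are invented for illustration): pre-aggregate counts over combinations of dimensions so that interactive queries are answered from small summaries rather than the raw rows.

```python
# Minimal data-cube sketch (not the NeuralCubes model): pre-aggregate counts
# over every combination of dimensions so that a visual front end can answer
# queries from precomputed summaries. Columns are hypothetical.
import itertools
import pandas as pd

df = pd.DataFrame({
    "day":   ["Mon", "Mon", "Tue", "Tue", "Tue"],
    "type":  ["A", "B", "A", "A", "B"],
    "x_bin": [0, 1, 0, 1, 1],
})

dims = ["day", "type", "x_bin"]
cube = {
    subset: df.groupby(list(subset)).size()
    for r in range(1, len(dims) + 1)
    for subset in itertools.combinations(dims, r)
}

# A query like "counts per (day, type)" is served from the precomputed
# group-by instead of scanning the raw data.
print(cube[("day", "type")])
```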
UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data
Projection techniques are often used to visualize high-dimensional data, allowing users to better understand the overall structure of multi-dimensional spaces on a 2D screen. Although many such methods exist, comparably little work has bee…
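As a minimal illustration of the forward/inverse-projection idea (not the UnProjection method from the paper), PCA provides both a 2D projection and a closed-form inverse that maps 2D points back into the original space; the synthetic data below is an assumption.

```python
# Illustration only: PCA gives a 2D projection plus an inverse_transform that
# maps 2D points back to the original space, which is the kind of mapping
# inverse-projection-based analytics build on. Data is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # synthetic high-dimensional data

pca = PCA(n_components=2).fit(X)
X_2d = pca.transform(X)                  # forward projection to the 2D view
X_back = pca.inverse_transform(X_2d)     # map 2D points back to 10-D space

# Per-point reconstruction error indicates how faithful the 2D view is.
err = np.linalg.norm(X - X_back, axis=1)
print(err.mean())
```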
Comparing Deep Neural Nets with UMAP Tour
Neural networks should be interpretable to humans. In particular, there is a growing interest in concepts learned in a layer and similarity between layers. In this work, a tool, UMAP Tour, is built to visually inspect and compare internal …
STFT-LDA: An Algorithm to Facilitate the Visual Analysis of Building Seismic Responses
Civil engineers use numerical simulations of a building's responses to seismic forces to understand the nature of building failures, the limitations of building codes, and how to determine the latter to prevent the former. Such simulations…
Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models
The interpretation of deep neural networks (DNNs) has become a key topic as more and more people apply them to solve various problems and make critical decisions. Concept-based explanations have recently become a popular approach for pos…
The ANTARES Astronomical Time-domain Event Broker
We describe the Arizona-NOIRLab Temporal Analysis and Response to Events System (ANTARES), a software instrument designed to process large-scale streams of astronomical time-domain alerts. With the advent of large-format CCDs on wide-field…
Information access representations and social capital in networks
Social network position confers power and social capital. In the setting of online social networks that have massive reach, creating mathematical representations of social capital is an important step towards understanding how network posi…
Clustering via Information Access in a Network
Information flow in a graph (say, a social network) has typically been modeled using standard influence propagation methods, with the goal of determining the most effective ways to spread information widely. More recently, researchers have…
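A minimal sketch of the kind of standard influence propagation model the abstract refers to, here the independent cascade model (not necessarily the exact model used in the paper): each newly activated node gets one chance to activate each inactive neighbor. The toy graph and probability are assumptions.

```python
# Independent cascade simulation: a standard influence propagation model.
# Each newly activated node tries once to activate each inactive neighbor
# with probability p. Graph and p are hypothetical.
import random

def independent_cascade(adj, seeds, p=0.2, rng=random.Random(0)):
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Toy graph as an adjacency list; repeated runs estimate each node's
# probability of being reached from a given seed set.
adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(independent_cascade(adj, seeds=[0], p=0.5))
```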
Visualizing Neural Networks with the Grand Tour
Problems with Shapley-value-based explanations as feature importance measures
Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input element…
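To make the cooperative-game framing concrete, here is a brute-force computation of exact Shapley values for a toy three-feature model; the model and baseline are invented for illustration, and practical tools such as SHAP rely on approximations rather than this exponential enumeration.

```python
# Exact Shapley values for a toy model: each feature's attribution is its
# average marginal contribution over all orderings of the features. This
# brute-force version is exponential and is illustration only.
from itertools import permutations
from math import factorial

def model(x):                        # toy model: f(x) = x0 + 2*x1 + x0*x2
    return x[0] + 2 * x[1] + x[0] * x[2]

def value(subset, x, baseline):
    # "Play" the game with only `subset` present; absent features take
    # their baseline value.
    z = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return model(z)

def shapley(x, baseline):
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        seen = set()
        for i in order:
            before = value(seen, x, baseline)
            seen = seen | {i}
            phi[i] += value(seen, x, baseline) - before
    return [p / factorial(n) for p in phi]

# Attributions sum to f(x) - f(baseline); the x0*x2 interaction is split
# equally between features 0 and 2.
print(shapley(x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0]))
```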
Anteater: Interactive Visualization for Program Understanding
Understanding and debugging long, complex programs can be extremely difficult; it often includes significant, manual program instrumentation and searches through source files. In this paper, we present Anteater, an interactive visualizatio…
Anteater: Interactive Visualization of Program Execution Values in Context
Debugging is famously one of the hardest parts of programming. In this paper, we tackle the question: what does a debugging environment look like when we take interactive visualization as a central design principle? We introduce Anteater, an …
Disentangling Influence: Using Disentangled Representations to Audit Model Predictions
Motivated by the need to audit complex and black box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence can be direct (a direct influence on model outcomes) and indirec…
Fairness in representation: quantifying stereotyping as a representational harm
While harms of allocation have been increasingly studied as part of the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show h…
Selective Wander Join: Fast Progressive Visualizations for Data Joins
Progressive visualization offers a great deal of promise for big data visualization; however, current progressive visualization systems do not allow for continuous interaction. What if users want to see more confident results on a subset o…
Assessing the Local Interpretability of Machine Learning Models
The increasing adoption of machine learning tools has led to calls for accountability via model interpretability. But what does it mean for a machine learning model to be interpretable by humans, and how can this be assessed? We focus on t…