Siddhartha Satpathi
Developing scanner change invariant brain age models for aging and dementia studies
Background: Brain age models are increasingly being used as measures of brain health and neurodegeneration. These models can be based on deep learning (DL) and/or classical methods, and they provide a brain age gap (BAG), which is the difference betw…
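As a minimal illustration of the brain age gap mentioned above (commonly defined as predicted brain age minus chronological age), here is a short sketch; the numbers are made up and this is not the paper's model or pipeline.

```python
import numpy as np

def brain_age_gap(predicted_age, chronological_age):
    """Brain age gap (BAG): predicted brain age minus chronological age."""
    return np.asarray(predicted_age) - np.asarray(chronological_age)

# Toy numbers: a positive BAG suggests the brain appears "older" than the
# subject's chronological age.
predicted = np.array([72.4, 65.1, 80.3])      # hypothetical model outputs (years)
chronological = np.array([70.0, 66.0, 74.0])  # subjects' actual ages (years)
print(brain_age_gap(predicted, chronological))  # -> [ 2.4 -0.9  6.3]
```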
Evaluation and interpretation of DTI-ALPS, a proposed surrogate marker for glymphatic clearance, in a large population-based sample
DTI-ALPS can be reliably automated in large samples. The computed DTI-ALPS was associated with vascular dysfunction (vascular risk and WMH) and may provide additional complementary information about cognitive decline. The low associations …
Evaluating DTI‐ALPS, an imaging surrogate for glymphatic clearance, in a large population‐based sample
Background: Diffusion tensor imaging along perivascular spaces index (DTI‐ALPS), which measures diffusivity increases in the perivascular spaces along the medullary veins, is being increasingly utilized as a surrogate marker of glymphatic c…
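As a rough companion to the two DTI-ALPS entries above: the index is commonly computed as a ratio of diffusivities along the perivascular (medullary-vein) direction versus perpendicular to it. The function below follows that common definition with made-up diffusivity values; it is not the papers' processing pipeline.

```python
def alps_index(dx_proj, dx_assoc, dy_proj, dz_assoc):
    """DTI-ALPS index as commonly defined in the DTI-ALPS literature:
    diffusivity along the perivascular direction (x, along the medullary veins)
    divided by diffusivity perpendicular to it, with measurements taken in
    projection-fiber and association-fiber regions."""
    return ((dx_proj + dx_assoc) / 2.0) / ((dy_proj + dz_assoc) / 2.0)

# Hypothetical diffusivities (mm^2/s); an index near 1 suggests little
# directional preference, larger values suggest freer perivascular diffusion.
print(alps_index(1.2e-3, 1.1e-3, 0.7e-3, 0.8e-3))  # ~1.53
```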
Sample Complexity and Overparameterization Bounds for Temporal-Difference Learning With Neural Network Approximation
In this paper, we study the dynamics of temporal difference learning with neural network-based value function approximation over a general state space, namely, Neural TD learning. We consider two practically used algorithms, project…
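For context on the setting of this entry and the projection-free entry further down, here is a generic semi-gradient TD(0) sketch with a two-layer ReLU value network; the environment, step size, and width are my own choices, and this is not the specific projected or regularized algorithm analyzed in the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network V(s) = a^T relu(W s); only the hidden weights are
# trained and the output signs are fixed, a common simplification in this line of work.
d, m = 4, 64                      # state feature dimension, hidden width
W = rng.normal(scale=1 / np.sqrt(d), size=(m, d))
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def value(s):
    return a @ np.maximum(W @ s, 0.0)

def grad_W(s):
    # dV/dW has row i equal to a_i * 1{w_i . s > 0} * s
    gate = (W @ s > 0).astype(float)
    return np.outer(a * gate, s)

gamma, lr = 0.9, 0.05
s = rng.normal(size=d)
for _ in range(2000):
    # Made-up Markov transition and reward, only to exercise the update rule.
    s_next = 0.8 * s + 0.2 * rng.normal(size=d)
    r = float(s[0])
    delta = r + gamma * value(s_next) - value(s)   # TD error
    W += lr * delta * grad_W(s)                    # semi-gradient TD(0) update
    s = s_next

print("value estimate at final state:", value(s))
```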
The Dynamics of Gradient Descent for Overparametrized Neural Networks
We consider the dynamics of gradient descent (GD) in overparameterized single hidden layer neural networks with a squared loss function. Recently, it has been shown that, under some conditions, the parameter values obtained using GD achiev…
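A tiny sketch of the object studied in this entry, under assumptions of my own (random data, fixed output weights): gradient descent on the squared loss of an overparameterized single-hidden-layer ReLU network.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, m = 20, 5, 2000            # n samples, input dim d, width m >> n (overparameterized)
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = rng.normal(size=n)

W = rng.normal(size=(m, d))                         # hidden weights (trained)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)    # output weights (fixed, a common analysis setup)

def predict(X):
    return np.maximum(X @ W.T, 0.0) @ a             # f(x) = a^T relu(W x)

lr = 0.2
for _ in range(1000):
    residual = predict(X) - y                       # f(x_i) - y_i
    gate = (X @ W.T > 0).astype(float)              # ReLU activation pattern, shape (n, m)
    # gradient of the squared loss 0.5 * sum_i (f(x_i) - y_i)^2 with respect to W
    grad = ((residual[:, None] * gate) * a[None, :]).T @ X
    W -= lr * grad

print("training loss:", 0.5 * np.sum((predict(X) - y) ** 2))
```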
Sample Complexity and Overparameterization Bounds for Projection-Free Neural TD Learning.
We study the dynamics of temporal-difference learning with neural network-based value function approximation over a general state space, namely, Neural TD learning. Existing analysis of neural TD learning relies on either infinite w…
Learning Latent Events from Network Message Logs: A Decomposition Based Approach.
In this communication, we describe a novel technique for event mining using a decomposition based approach that combines non-parametric change-point detection with LDA. We prove theoretical guarantees about sample-complexity and consistenc…
Learning Latent Events from Network Message Logs
We consider the problem of separating error messages generated in large distributed data center networks into error events. In such networks, each error event leads to a stream of messages generated by hardware and software components affe…
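The two entries above describe a decomposition that pairs non-parametric change-point detection with LDA. Below is a rough, self-contained sketch of that idea under my own simplifications: flag change points in per-window message volume with a crude heuristic (a stand-in for the paper's detector), then run LDA over the resulting segments to group message types into candidate events. Library choices and thresholds are illustrative, not the authors'.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(2)

# Synthetic log: each time window is a count vector over message types.
n_windows, n_msg_types = 300, 30
counts = rng.poisson(1.0, size=(n_windows, n_msg_types))
counts[100:180, :5] += rng.poisson(8.0, size=(80, 5))     # burst from one hypothetical event
counts[200:260, 10:15] += rng.poisson(8.0, size=(60, 5))  # burst from another event

# Crude change-point heuristic on total message volume (stand-in only).
volume = counts.sum(axis=1)
z = np.abs(volume - np.median(volume)) / (volume.std() + 1e-9)
bursty = (z > 1.0).astype(int)
change_points = np.flatnonzero(np.diff(bursty)) + 1        # windows where the bursty state flips
segments = np.split(np.arange(n_windows), change_points)

# Aggregate counts per segment and run LDA to extract event "topics".
seg_counts = np.array([counts[idx].sum(axis=0) for idx in segments if len(idx) > 0])
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(seg_counts)
for k, topic in enumerate(lda.components_):
    print(f"event topic {k}: top message types {np.argsort(topic)[::-1][:5]}")
```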
Perfect clustering from pairwise comparisons
We consider a pairwise comparisons model with n users and m items. Each user is shown a few pairs of items, and when a pair of items is shown to a user, he or she expresses a preference for one of the items based on a probabilistic model. …
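A toy illustration of the setup in this abstract, with my own choices of preference model and clustering method (a Bradley-Terry-style comparison model and k-means on per-user win-rate vectors); the paper's algorithm and guarantees are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

n_users, n_items, n_pairs_per_user = 200, 12, 60

# Two latent user clusters with opposite item score vectors (Bradley-Terry style).
cluster_scores = np.array([np.linspace(2, -2, n_items), np.linspace(-2, 2, n_items)])
true_cluster = rng.integers(0, 2, size=n_users)

# Each user answers a few random pairwise comparisons; record empirical win rates.
wins = np.zeros((n_users, n_items))
shown = np.zeros((n_users, n_items))
for u in range(n_users):
    s = cluster_scores[true_cluster[u]]
    for _ in range(n_pairs_per_user):
        i, j = rng.choice(n_items, size=2, replace=False)
        p_i = 1.0 / (1.0 + np.exp(-(s[i] - s[j])))   # P(user prefers item i over item j)
        winner = i if rng.random() < p_i else j
        wins[u, winner] += 1
        shown[u, i] += 1
        shown[u, j] += 1

win_rate = wins / np.maximum(shown, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(win_rate)

# Agreement with the ground-truth clusters, up to label permutation.
acc = max(np.mean(labels == true_cluster), np.mean(labels == 1 - true_cluster))
print(f"clustering agreement: {acc:.2f}")
```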
Group-Sparse Model Selection: Hardness and Relaxations
Group-based sparsity models are proven instrumental in linear regression problems for recovering signals from much fewer measurements than standard compressive sensing. The main promise of these models is the recovery of "interpretable" si…
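As context for the relaxations mentioned in this abstract, here is a minimal proximal-gradient (block soft-thresholding) sketch for the group-lasso relaxation of group-sparse regression; the groups, penalty weight, and data are made up, and this is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(4)

n, d = 100, 30
groups = [np.arange(g * 5, (g + 1) * 5) for g in range(6)]   # six non-overlapping groups of 5
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[groups[1]] = rng.normal(size=5)                    # only group 1 is active
y = X @ beta_true + 0.1 * rng.normal(size=n)

lam = 5.0
step = 1.0 / np.linalg.norm(X, 2) ** 2                       # 1 / Lipschitz constant of the gradient
beta = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ beta - y)                              # gradient of 0.5 * ||y - X beta||^2
    z = beta - step * grad
    for g in groups:                                         # proximal step: block soft-thresholding
        norm = np.linalg.norm(z[g])
        z[g] = 0.0 if norm <= step * lam else (1 - step * lam / norm) * z[g]
    beta = z

active = [k for k, g in enumerate(groups) if np.linalg.norm(beta[g]) > 1e-6]
print("recovered active groups:", active)    # expected: [1] with these settings
```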
Optimal Offline and Competitive Online Strategies for Transmitter–Receiver Energy Harvesting
A joint transmitter-receiver energy harvesting model is considered, where both the transmitter and receiver are powered by a (renewable) energy harvesting source. Given a fixed number of bits, the problem is to find the optimal transmissi…
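To give a feel for the constraints that make this scheduling problem non-trivial, the sketch below evaluates a candidate per-slot power schedule against energy causality at both ends and counts the bits it delivers. The rate function, slot structure, and fixed receiver cost are my own simplifications, not the paper's model, and this does not compute the optimal offline or online strategy.

```python
import numpy as np

# Hypothetical per-slot energy arrivals (Joules) at the transmitter and receiver,
# and a candidate per-slot transmit power schedule to evaluate.
E_tx = np.array([3.0, 0.0, 2.0, 4.0, 1.0])
E_rx = np.array([1.0, 1.0, 0.5, 1.0, 1.0])
p = np.array([2.0, 1.0, 1.5, 2.5, 2.0])       # transmit power per unit-length slot
rx_cost = 0.8                                  # assumed fixed receiver energy per active slot

def feasible(power, e_tx, e_rx, rx_cost):
    """Energy causality: cumulative energy spent never exceeds cumulative energy harvested."""
    tx_ok = np.all(np.cumsum(power) <= np.cumsum(e_tx))
    rx_ok = np.all(np.cumsum((power > 0) * rx_cost) <= np.cumsum(e_rx))
    return tx_ok and rx_ok

bits = np.sum(np.log2(1.0 + p))                # Shannon-style rate per slot, unit bandwidth
print("feasible:", feasible(p, E_tx, E_rx, rx_cost), "| bits delivered:", round(bits, 2))
```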