Utkarsh Nath
Guiding Diffusion with Deep Geometric Moments: Balancing Fidelity and Variation
Text-to-image generation models have achieved remarkable capabilities in synthesizing images, but often struggle to provide fine-grained control over the output. Existing guidance approaches, such as segmentation maps and depth maps, intro…
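The abstract is cut off before it describes the guidance mechanism, so as generic context only, the sketch below shows how gradient-based guidance is commonly injected into a reverse-diffusion step; `denoise_step` and `guidance_energy` are hypothetical stand-ins, not the paper's deep-geometric-moment guidance.

```python
import torch

# Hypothetical stand-ins: a real sampler would call a trained denoiser, and the
# paper's actual guidance signal (deep geometric moments) is not reproduced here.
def denoise_step(x_t, t):
    return 0.99 * x_t  # dummy denoiser update

def guidance_energy(x):
    # Toy energy whose gradient nudges samples toward a target statistic.
    return ((x.mean(dim=(1, 2, 3)) - 0.5) ** 2).sum()

def guided_step(x_t, t, scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    grad = torch.autograd.grad(guidance_energy(x_t), x_t)[0]
    # Steer the denoised sample downhill in the guidance energy.
    return (denoise_step(x_t, t) - scale * grad).detach()

x = torch.randn(2, 3, 32, 32)
for t in reversed(range(5)):
    x = guided_step(x, t)
print(x.shape)
```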
Deep Geometric Moments Promote Shape Consistency in Text-to-3D Generation
To address the data scarcity associated with 3D assets, 2D-lifting techniques such as Score Distillation Sampling (SDS) have become a widely adopted practice in text-to-3D generation pipelines. However, the diffusion models used in these t…
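Since the snippet names Score Distillation Sampling (SDS) as the standard 2D-lifting practice, here is a rough sketch of a single SDS update, assuming a frozen noise predictor and a differentiable renderer; `render`, `unet_eps`, and the toy noise schedule are placeholder stand-ins, not this paper's pipeline.

```python
import torch

# Hypothetical stand-ins: in a real pipeline these would be a differentiable
# renderer over 3D parameters and a frozen text-conditioned diffusion U-Net.
def render(theta):
    return torch.sigmoid(theta)  # squash learnable parameters into image range

def unet_eps(x_t, t, text_emb):
    return torch.zeros_like(x_t)  # dummy noise prediction

theta = torch.randn(1, 3, 64, 64, requires_grad=True)   # learnable scene/image parameters
opt = torch.optim.Adam([theta], lr=1e-2)
text_emb = torch.zeros(1, 77, 768)                       # placeholder conditioning
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)       # toy noise schedule

for step in range(10):
    x = render(theta)
    t = torch.randint(20, 980, (1,))
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * noise      # forward-diffuse the render
    with torch.no_grad():
        eps_pred = unet_eps(x_t, t, text_emb)            # frozen denoiser prediction
    grad = (1 - a_t) * (eps_pred - noise)                # SDS gradient: no backprop through the U-Net
    opt.zero_grad()
    x.backward(gradient=grad)                            # push the render toward the denoiser's belief
    opt.step()
```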
Learning Low-Rank Feature for Thorax Disease Classification
Deep neural networks, including Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), have achieved stunning success in the medical imaging domain. We study thorax disease classification in this paper. Effective extraction of featu…
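The abstract breaks off before describing the method, but the title's "low-rank feature" idea can be illustrated with the standard convex surrogate for matrix rank, the nuclear norm of a feature matrix; this generic penalty is an assumption for illustration, not the paper's exact objective.

```python
import torch

# Generic low-rank feature penalty: the nuclear norm (sum of singular values)
# of a batch's pooled feature matrix. Illustrative only; not the paper's loss.
def nuclear_norm_penalty(features):
    """features: (batch, dim) matrix of pooled features."""
    return torch.linalg.svdvals(features).sum()

feats = torch.randn(16, 128, requires_grad=True)
penalty = nuclear_norm_penalty(feats)
penalty.backward()                      # would typically be added to a classification loss
print(penalty.item(), feats.grad.shape)
```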
RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation
Deep Neural Networks are vulnerable to adversarial attacks. Neural Architecture Search (NAS), one of the driving tools behind modern deep neural networks, demonstrates superior prediction accuracy in various machine learning applicatio…
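As background for the adversarial vulnerability the snippet opens with, the sketch below shows the textbook Fast Gradient Sign Method (FGSM) attack on a toy stand-in model; it is generic context, not an attack or architecture from the paper.

```python
import torch
import torch.nn as nn

# FGSM: perturb the input in the direction of the loss gradient's sign.
def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in classifier
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps
```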
Similarity-based Distance for Categorical Clustering using Space Structure
Clustering means spotting patterns in a group of objects and, in turn, grouping similar objects together. Objects have attributes that are not always numerical; sometimes attributes have a domain or categories to which they could belong t…
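For context on what a categorical distance looks like, here is the classic simple-matching dissimilarity over categorical records, i.e. the kind of baseline that similarity- and space-structure-based measures aim to improve on; it is not the distance proposed in the paper.

```python
# Simple-matching dissimilarity: fraction of attributes on which two
# categorical records disagree (baseline measure, not the paper's).
def simple_matching_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

r1 = ("red", "small", "round")
r2 = ("red", "large", "round")
r3 = ("blue", "large", "square")
print(simple_matching_distance(r1, r2))  # 0.333... (differ on one of three attributes)
print(simple_matching_distance(r1, r3))  # 1.0     (differ on all attributes)
```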
Better Together: Resnet-50 accuracy with $13\times$ fewer parameters and at $3\times$ speed
Recent research on compressing deep neural networks has focused on reducing the number of parameters. Smaller networks are easier to export and deploy on edge devices. We introduce Adjoined networks as a training approach that can regulari…
Adjoined Networks: A Training Paradigm with Applications to Network Compression
Compressing deep neural networks while maintaining accuracy is important when we want to deploy large, powerful models in production and/or on edge devices. One common technique used to achieve this goal is knowledge distillation. Typically, …
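The snippet is cut off right after naming knowledge distillation, so as a minimal sketch of the standard objective (temperature-softened KL between teacher and student logits plus cross-entropy on the hard labels), assuming illustrative temperature and weighting rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard KD loss: KL between temperature-softened distributions
    plus ordinary cross-entropy on the hard labels."""
    soft_targets = F.log_softmax(teacher_logits / T, dim=1)
    soft_preds = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(soft_preds, soft_targets, log_target=True,
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random logits for a 10-class problem.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```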