Exploring foci of
2025-04-11
Efficient Few-Shot Neural Architecture Search by Counting the Number of Nonlinear Functions
2025-04-11 • Youngmin Oh, Hyunju Lee, Bumsub Ham
Neural architecture search (NAS) enables finding the best-performing architecture from a search space automatically. Most NAS methods exploit an over-parameterized network (i.e., a supernet) containing all possible architectures (i.e., subnets) in the search space. However, the subnets that share the same set of parameters are likely to have different characteristics, interfering with each other during training. To address this, few-shot NAS methods have been proposed that divide the space into a few subspaces and…
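The title suggests partitioning the supernet's search space by how many nonlinear functions each subnet contains, so that subnets sharing a supernet copy are more alike. A minimal sketch of that idea, where the toy op set and the counting rule are illustrative assumptions rather than the paper's exact criterion:

```python
# Hypothetical sketch: split a NAS search space into subspaces by the
# number of nonlinear functions (here, ReLU-containing ops) per subnet.
from itertools import product

# Toy search space: each of 3 layers picks one candidate op.
CANDIDATE_OPS = ["identity", "conv3x3_relu", "conv5x5_relu"]
NONLINEAR_OPS = {"conv3x3_relu", "conv5x5_relu"}  # ops with a nonlinearity

def count_nonlinear(subnet):
    """Number of nonlinear functions along the subnet's path."""
    return sum(op in NONLINEAR_OPS for op in subnet)

def split_by_nonlinearity(num_layers=3):
    """Group all subnets into subspaces with the same nonlinearity count."""
    subspaces = {}
    for subnet in product(CANDIDATE_OPS, repeat=num_layers):
        subspaces.setdefault(count_nonlinear(subnet), []).append(subnet)
    return subspaces

subspaces = split_by_nonlinearity()
for k in sorted(subspaces):
    print(k, len(subspaces[k]))
```

Each subspace could then be trained with its own supernet copy, which is the few-shot NAS motivation the abstract describes: reducing interference between subnets with very different characteristics.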
Maximizing the Position Embedding for Vision Transformers with Global Average Pooling
2025-04-11 • Wonjun Lee, Bumsub Ham, Suhyun Kim
In vision transformers, position embedding (PE) plays a crucial role in capturing the order of tokens. However, the expressiveness of PE is limited because it is simply added to the token embedding. A layer-wise method that delivers PE to each layer and applies independent Layer Normalizations (LNs) to the token embedding and the PE has been adopted to overcome this limitation. In this paper, we identify the conflicting result that occurs in a layer-wise structu…
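As a rough illustration of the layer-wise scheme the abstract describes (delivering PE to every layer and normalizing the token embedding and the PE independently), here is a minimal NumPy sketch; the exact combination rule inside each block is an assumption, not the paper's method:

```python
# Hypothetical sketch of layer-wise position embedding with independent
# LayerNorms, contrasted with adding PE once at the input.
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each token vector to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def layerwise_block(tokens, pe):
    # Independent LNs for token embedding and PE, then combine
    # (combination rule is an illustrative assumption).
    return layer_norm(tokens) + layer_norm(pe)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))   # 4 tokens, embedding dim 8
pe = rng.normal(size=(4, 8))       # one PE per token position

out = tokens
for _ in range(2):                 # PE is re-injected at each of 2 layers
    out = layerwise_block(out, pe)
print(out.shape)
```

The point of the layer-wise variant is that the PE signal is not washed out after the first addition; it is re-supplied, on its own normalization scale, at every layer.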
ELITE: Enhanced Language-Image Toxicity Evaluation for Safety
2025-02-07 • Wonjun Lee, Daewoo Lee, Eugene Choi, Shuishan Yu, Ashkan Yousefpour, Haon Park, Bumsub Ham, Suhyun Kim
Current Vision Language Models (VLMs) remain vulnerable to malicious prompts that induce harmful outputs. Existing safety benchmarks for VLMs primarily rely on automated evaluation methods, but these methods struggle to detect implicit harmful content or produce inaccurate evaluations. As a result, existing benchmarks suffer from low levels of harmfulness, ambiguous data, and limited diversity in image-text pair combinations. To address these issues, we propose the ELITE benchmark, a high-quality safety evalu…
Maximizing the Position Embedding for Vision Transformers with Global Average Pooling
2025-02-05 • Wonjun Lee, Bumsub Ham, Suhyun Kim
In vision transformers, position embedding (PE) plays a crucial role in capturing the order of tokens. However, the expressiveness of PE is limited because it is simply added to the token embedding. A layer-wise method that delivers PE to each layer and applies independent Layer Normalizations to the token embedding and the PE has been adopted to overcome this limitation. In this paper, we identify the conflicting result that occurs in a l…
Subnet-Aware Dynamic Supernet Training for Neural Architecture Search
2025-03-13 • Jeimin Jeon, Youngmin Oh, Junghyup Lee, Donghyeon Baek, Dohyung Kim, Chanho Eom, Bumsub Ham
N-shot neural architecture search (NAS) exploits a supernet containing all candidate subnets for a given search space. The subnets are typically trained with a static training strategy (e.g., using the same learning rate (LR) scheduler and optimizer for all subnets). This, however, does not consider that individual subnets have distinct characteristics, leading to two problems: (1) The supernet training is biased towards the low-complexity subnets (unfairness); (2) the momentum update in the supernet is noisy (noi…
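The abstract contrasts a static training strategy (one LR schedule for all subnets) with a subnet-aware dynamic one. A toy sketch of what a complexity-dependent learning rate could look like; the linear scaling rule and the complexity measure are illustrative assumptions, not the paper's scheduler:

```python
# Hypothetical sketch of a subnet-aware learning-rate rule: scale the LR
# with subnet complexity so supernet training is less biased toward
# low-complexity subnets.

def complexity(subnet):
    """Toy complexity measure: number of non-identity ops in the path."""
    return sum(op != "identity" for op in subnet)

def subnet_lr(base_lr, subnet, max_complexity):
    """Give higher-complexity subnets a proportionally larger LR
    (linear scaling is an assumption for illustration)."""
    return base_lr * (1 + complexity(subnet)) / (1 + max_complexity)

shallow = ["identity", "identity", "conv3x3"]
deep = ["conv3x3", "conv5x5", "conv3x3"]
print(subnet_lr(0.1, shallow, 3))
print(subnet_lr(0.1, deep, 3))
```

Under a static strategy both subnets would receive the same LR; a subnet-aware rule lets the update magnitude track each sampled subnet's characteristics, which is the unfairness problem the abstract points at.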