Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization
· 2019
· Open Access
· DOI: https://doi.org/10.48550/arxiv.1905.04970
· OA: W4288357194
Due to the high computational demands, executing a rigorous comparison between hyperparameter optimization (HPO) methods is often cumbersome. The goal of this paper is to facilitate a better empirical evaluation of HPO methods by providing benchmarks that are cheap to evaluate but still represent realistic use cases. We believe these benchmarks provide an easy and efficient way to conduct reproducible experiments for neural hyperparameter search. Our benchmarks consist of a large grid of configurations of a feed-forward neural network on four different regression datasets, including architectural hyperparameters and hyperparameters concerning the training pipeline. Based on this data, we first performed an in-depth analysis to gain a better understanding of the properties of the optimization problem, as well as of the importance of different types of hyperparameters. Second, we exhaustively compared various state-of-the-art methods from the hyperparameter optimization literature on these benchmarks in terms of performance and robustness.
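To illustrate the core idea of a tabular benchmark (a precomputed grid of configurations that turns "evaluating" a hyperparameter setting into a cheap table lookup), here is a minimal, self-contained sketch. It is not the paper's actual API or search space: the grid values, the `build_toy_table` helper, and the synthetic validation errors are all illustrative assumptions; the real benchmark stores results from actually training a feed-forward network once per configuration.

```python
import itertools
import random

# Illustrative search space for a two-layer feed-forward network.
# The hyperparameter names and value grids below are placeholders,
# not the exact grid used in the paper.
SEARCH_SPACE = {
    "n_units_1": [64, 128, 256],
    "n_units_2": [64, 128, 256],
    "dropout_1": [0.0, 0.3],
    "init_lr": [1e-3, 1e-2, 1e-1],
    "batch_size": [16, 32, 64],
}


def build_toy_table(seed=0):
    """Stand-in for the precomputed benchmark data: assign each grid point a
    synthetic validation error. In a real tabular benchmark these numbers come
    from training the network once per configuration ahead of time."""
    rng = random.Random(seed)
    names = sorted(SEARCH_SPACE)
    table = {}
    for values in itertools.product(*(SEARCH_SPACE[n] for n in names)):
        config = dict(zip(names, values))
        table[tuple(sorted(config.items()))] = rng.uniform(0.05, 1.0)
    return table


def objective(table, config):
    """'Evaluate' a configuration by table lookup instead of training."""
    return table[tuple(sorted(config.items()))]


def random_search(table, n_iters=50, seed=1):
    """A baseline HPO method run entirely against the tabular benchmark,
    so a full optimization run costs milliseconds instead of GPU-hours."""
    rng = random.Random(seed)
    best_config, best_error = None, float("inf")
    for _ in range(n_iters):
        config = {n: rng.choice(v) for n, v in SEARCH_SPACE.items()}
        error = objective(table, config)
        if error < best_error:
            best_config, best_error = config, error
    return best_config, best_error


if __name__ == "__main__":
    table = build_toy_table()
    config, error = random_search(table)
    print(f"best error {error:.4f} with config {config}")
```

Because every HPO method queries the same fixed table, comparisons across methods become cheap, repeatable, and independent of training noise, which is the reproducibility argument the abstract makes.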