Mutual Information of Neural Network Initialisations: Mean Field Approximations
July 2021 • Jared Tanner, Giuseppe Ughi
The ability to train randomly initialised deep neural networks is known to depend strongly on the variance of the weight matrices and biases as well as the choice of nonlinear activation. Here we complement the existing geometric analysis of this phenomenon with an information theoretic alternative. Lower bounds are derived for the mutual information between an input and hidden layer outputs. Using a mean field analysis we are able to provide analytic lower bounds as functions of network weight and bias variances …
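The mean field analysis the abstract refers to tracks how activation statistics evolve with depth in a randomly initialised network as a function of the weight and bias variances. As a rough illustration only (not the paper's derivation or its mutual-information bounds), the sketch below iterates the standard mean-field pre-activation variance map q_{l+1} = σ_w² E_z[φ(√q_l · z)²] + σ_b² with z ~ N(0, 1); the tanh activation and the specific variance values are illustrative assumptions.

```python
import numpy as np

def mean_field_variance_map(q, sigma_w2, sigma_b2, phi=np.tanh, n_mc=100_000, rng=None):
    """One step of the mean-field pre-activation variance recursion
    q_{l+1} = sigma_w^2 * E_z[ phi(sqrt(q_l) * z)^2 ] + sigma_b^2,  z ~ N(0, 1),
    with the Gaussian expectation estimated by Monte Carlo."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(n_mc)
    return sigma_w2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sigma_b2

# Iterate the map to (approximately) its fixed point q*: in the mean-field
# picture, q* describes how the variance of hidden-layer pre-activations at
# initialisation depends on the weight and bias variances.
sigma_w2, sigma_b2 = 1.5, 0.05   # illustrative values, not taken from the paper
q = 1.0                          # variance of the normalised input
for _ in range(50):
    q = mean_field_variance_map(q, sigma_w2, sigma_b2)
print(f"approximate fixed-point pre-activation variance q* = {q:.4f}")
```

Sweeping (σ_w², σ_b²) through a map like this is how mean-field treatments identify variance regimes where signal propagates through depth, the same quantities the paper's analytic lower bounds on mutual information are expressed in.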