The University of Sussex

Matrix logarithm parametrizations for regularized neural network regression models

Peter M. Williams

Neural networks are commonly used to model conditional probability distributions. The idea is to represent distributional parameters as functions of conditioning events, where the function is determined by the architecture and weights of the network. An issue to be resolved is the link between distributional parameters and network outputs. The latter are unconstrained real numbers whereas distributional parameters may be required to lie in proper subsets, or be mutually constrained, e.g. by the positive definiteness requirement for a covariance matrix. The paper explores the matrix-logarithm parametrization of covariance matrices for multivariate normal distributions. From a Bayesian point of view the choice of parametrization is linked to the choice of prior. This is treated by investigating the invariance of predictive distributions, for the chosen parametrization, with respect to an important class of priors.
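For concreteness, the parametrization can be sketched in a few lines. The sketch below illustrates the general technique and is not the paper's own code: a network's unconstrained outputs fill the upper triangle of a symmetric matrix S, and the covariance matrix is recovered as the matrix exponential Sigma = exp(S), which is positive definite for every symmetric S. The function names and example outputs are hypothetical; scipy.linalg.expm computes the matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    def vector_to_symmetric(theta, d):
        # Pack d*(d+1)/2 unconstrained network outputs into a symmetric d x d matrix.
        S = np.zeros((d, d))
        rows, cols = np.triu_indices(d)
        S[rows, cols] = theta
        return S + S.T - np.diag(np.diag(S))

    def covariance_from_outputs(theta, d):
        # Sigma = exp(S) is symmetric positive definite for any symmetric S,
        # so the raw outputs theta need satisfy no mutual constraints.
        return expm(vector_to_symmetric(theta, d))

    # Example: three raw outputs parametrize a 2 x 2 covariance matrix.
    theta = np.array([0.3, -1.2, 0.7])
    Sigma = covariance_from_outputs(theta, d=2)
    assert np.all(np.linalg.eigvalsh(Sigma) > 0)   # positive definite

Unlike, say, a Cholesky parametrization, the map from S to exp(S) is a bijection between symmetric matrices and symmetric positive definite matrices, with no sign or ordering conventions on the entries, which is what makes the log-parametrized network outputs genuinely unconstrained real numbers.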

