Regularization Strategies for the Extreme Learning Machine

Sergio Decherchi, Paolo Gastaldo, Rodolfo Zunino.

The Extreme Learning Machine (ELM) exhibits notable learning speed and good generalization ability. This paper discusses theoretical properties and practical extensions of the ELM. It first derives an analytical expression for the Vapnik-Chervonenkis dimension of an ELM, then shows that the ELM learning principle reduces to a non-regularized network when a particular kernel is adopted. A subsequent analysis shows that coupling the original formulation with spectral regularization makes the ELM model compliant with Statistical Learning Theory. The resulting regularized ELM is proved to be equivalent to a Regularized Least Squares problem embedding a particular kernel. The paper derives the conditions under which regularization mechanisms enhance the generalization ability of ELMs in practice, and compares two spectral regularization strategies: Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization. Experimental results on several testbeds confirm the beneficial effects of regularization on both accuracy and numerical stability, and show that Tikhonov regularization outperforms TSVD. Empirical evidence consistently matched theoretical expectations, especially in complex problems with limited samples.

Matlab version of the Regularized ELM (RELM), including both TSVD and Tikhonov regularization:


The Matlab code includes: the training procedure for the RELM model, the test procedure for the RELM model, an example of a regression problem, and an example of a classification problem.