A Digital Implementation of Extreme Learning Machines for Resource-Constrained Devices

Edoardo Ragusa, Christian Gianoglio, Paolo Gastaldo, Rodolfo Zunino.

The availability of compact digital circuitry for the support of neural networks is a key requirement for resource-constrained embedded systems. This brief tackles the implementation of single hidden-layer feedforward neural networks (SLFNs), based on hard-limit activation functions, on reconfigurable devices. The resulting design strategy relies on a novel learning procedure that inherits the approach adopted in the Extreme Learning Machine paradigm. The eventual training process balances accuracy and network complexity effectively, thus supporting a digital architecture that prioritizes area utilization over computational performance.

Available material

This page makes available the VHDL code that implements the classifier. All the material is packed into a password-protected zip file. Please contact Paolo Gastaldo (paolo.gastaldo@unige.it) to obtain the password.

The implementation refers to a classifier trained on the Breast Cancer Wisconsin dataset; the classifier has 9 inputs and 10 neurons in the hidden layer. The training phase was completed offline; hence, the number of neurons and the hidden-layer weights are hardcoded into the digital design.
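The forward pass realized by the circuit can be sketched in software as a single hidden-layer network with hard-limit neurons. The snippet below is an illustrative model only, not the shipped design: the weight values are random placeholders (the real hidden weights and output weights come from the offline ELM training and are hardcoded in the VHDL).

```python
import numpy as np

def hardlim(x):
    # Hard-limit activation: 1 when x >= 0, else 0
    return (x >= 0).astype(float)

def elm_predict(x, W, b, beta):
    """Single hidden-layer feedforward pass with hard-limit neurons.

    x    : (n_inputs,) input pattern, rescaled to [0, 1]
    W    : (n_hidden, n_inputs) fixed hidden weights
    b    : (n_hidden,) hidden biases
    beta : (n_hidden,) output weights learned by the ELM procedure
    Returns the two-class label, +1 or -1.
    """
    h = hardlim(W @ x + b)            # hidden-layer response
    return 1 if h @ beta >= 0 else -1

# Toy configuration mirroring the released design: 9 inputs, 10 hidden neurons.
# All numeric values below are placeholders, not the trained parameters.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(10, 9))
b = rng.uniform(-1, 1, size=10)
beta = rng.uniform(-1, 1, size=10)
x = rng.uniform(0, 1, size=9)
print(elm_predict(x, W, b, beta))
```

The hardware replaces the floating-point arithmetic above with the fixed-point operations described in the next paragraph.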

A signed two's-complement fixed-point representation is adopted; 16 bits encode all input quantities. All inputs are rescaled into the range [0,1].
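The quantization step can be illustrated as follows. Note that the split between integer and fractional bits is not stated on this page; the sketch assumes a Q1.15 format (15 fractional bits), which covers the [0,1] input range, purely for illustration.

```python
FRAC_BITS = 15  # assumed Q1.15 split; the actual format is not stated here
WORD = 16       # 16-bit signed two's-complement words

def to_fixed(x, frac_bits=FRAC_BITS, word=WORD):
    """Quantize a real value to a signed two's-complement fixed-point word."""
    scaled = int(round(x * (1 << frac_bits)))
    lo, hi = -(1 << (word - 1)), (1 << (word - 1)) - 1
    scaled = max(lo, min(hi, scaled))      # saturate on overflow
    return scaled & ((1 << word) - 1)      # two's-complement bit pattern

def from_fixed(w, frac_bits=FRAC_BITS, word=WORD):
    """Decode a two's-complement fixed-point word back to a real value."""
    if w >= 1 << (word - 1):
        w -= 1 << word
    return w / (1 << frac_bits)
```

For example, `to_fixed(0.5)` yields the bit pattern `0x4000`, and `from_fixed` recovers 0.5 exactly, since 0.5 is representable with 15 fractional bits.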



Usage

  1. Implement the VHDL code with ELM.vhd as the top-level entity
  2. Start the simulation using ELM_tb.vhd as the testbench, which feeds the predictor with the test patterns

Output Description

The device generates as output the file simulationPred.txt, which reports, for each pattern, the class predicted by the classifier.

When the classifier assigns a pattern to class +1, the file contains a binary value set to 0.

When the classifier assigns a pattern to class -1, the file contains a binary value set to 1.
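The encoding above can be decoded with a one-line mapping; the snippet below shows it on a hypothetical excerpt of simulationPred.txt (the file contents here are invented for illustration).

```python
def bit_to_class(bit):
    # simulationPred.txt encodes class +1 as bit 0 and class -1 as bit 1
    return +1 if bit == 0 else -1

# Hypothetical file contents: one predicted bit per line
lines = ["0", "1", "0"]
labels = [bit_to_class(int(s)) for s in lines]
print(labels)  # [1, -1, 1]
```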