Neuron Models

Crude MFNN Implementation

The following code listing implements a crude MFNN (multilayer feedforward neural network) in one of the simplest, if inelegant, ways possible. Nonetheless, the code makes clear what needs to be taken care of.


#include <vector>
#include <numeric>

/*
 * NOTE:
 *
 * std::inner_product could be implemented as follows
 *
 * Result result = 0;
 * for(; begin1 != end1; ++begin1, ++begin2)
 *   result += (*begin1) * (*begin2);
 * return result;
 */

// this function calculates the dot-product of two vectors
template < class InputIt1, class InputIt2, class Result = double >
inline Result dot(InputIt1 begin1, InputIt1 end1, InputIt2 begin2) {
  return std::inner_product(
    std::move(begin1), std::move(end1), std::move(begin2),
    static_cast<Result>(0) );
}

int mfnn_function() {

  int input[] = { 0, 0 }; // the (0,0) input pattern, hard-coded
  double theta_hidden[]   = { 0.2, 0.2 };
  double theta_output[]   = { 0.2 };
  double weights_hidden[] = { 0.15, 0.15, 0.3, 0.3 };
  double weights_output[] = { -0.3, 0.3 };

  // action-potential of the first hidden neuron
  double nh1_field = dot(input, input + 2, weights_hidden);

  // action-potential of the second hidden neuron
  double nh2_field = dot(input, input + 2, weights_hidden + 2);

  // activation / output of the hidden neurons
  std::vector<int> hidden_output {
    static_cast<int>(nh1_field >= theta_hidden[0]),
    static_cast<int>(nh2_field >= theta_hidden[1])
  };

  // action-potential of the output neuron
  double no_field = dot(
    hidden_output.begin(), hidden_output.end(), weights_output);

  // activation / output of the output neuron
  int network_output = static_cast<int>(no_field >= theta_output[0]);

  return network_output;
}

Note that this function only runs the (0,0) input case; for the remaining cases we need to feed each input pair from the input neurons through the hidden neurons to the output neuron. Crude as it is, this implementation reveals the components we need to consider as well as the structure of signal flow inside the network.

Each input pattern is fed into the input layer; its individual features or signals (the components of the pattern) are multiplied by the weights connecting the input layer to the hidden layer to produce the action-potential of each neuron in the hidden layer. Since the input layer is fully connected, i.e. every neuron in the input layer is connected to every neuron in the next (hidden) layer, there are j weights for each input neuron i, where j is the number of neurons in the next layer. In this specific case, we have two weights for each input neuron. Likewise, we have one weight connecting each hidden neuron to the single neuron in the output layer.