Backpropagation issues

I have a couple of questions about how to code the backpropagation algorithm of neural networks:

The topology of my network is an input layer, a hidden layer, and an output layer. Both the hidden and output layers use sigmoid activation functions.

  1. First of all, should I use a bias? Where should I connect the bias in my network? Should I put one bias unit per layer, in both the hidden layer and the output layer? What about the input layer?
  2. In this link, they define the last delta as input - output, and they backpropagate the deltas as shown in the figure. They store all the deltas in a table before actually propagating the errors in a feedforward fashion. Is this a departure from the standard backpropagation algorithm?
  3. Should I decrease the learning factor over time?
  4. In case anyone knows, is Resilient Propagation an online or batch learning technique?

Thanks

Edit: One more thing. In the following picture, assuming I'm using the sigmoid function, d f1(e)/de is f1(e) * [1 - f1(e)], right?
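(For reference, that derivative identity can be checked numerically. This is a minimal sketch with illustrative names; it compares f(e) * [1 - f(e)] against a finite-difference estimate of the sigmoid's slope.)

```python
import math

def sigmoid(e):
    """Logistic sigmoid f(e) = 1 / (1 + exp(-e))."""
    return 1.0 / (1.0 + math.exp(-e))

def sigmoid_derivative(e):
    """d f(e)/de expressed through f itself: f(e) * (1 - f(e))."""
    fe = sigmoid(e)
    return fe * (1.0 - fe)

# Numerical check against a central finite difference:
e, h = 0.7, 1e-6
numeric = (sigmoid(e + h) - sigmoid(e - h)) / (2 * h)
print(abs(sigmoid_derivative(e) - numeric) < 1e-9)  # True
```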

Best answer

Your question needs to be specified a bit more thoroughly... What is your need? Generalization or memorization? Are you anticipating a complex pattern matching data set, or a continuous-domain input-output relationship? Here are my $0.02:

  1. I would suggest you leave a bias neuron in, just in case you need it. If it is deemed unnecessary by the NN, training should drive its weights to negligible values. The bias connects to every neuron in the next layer, but receives no connections from any neuron in the preceding layer.

  2. The equation looks like standard backprop as far as I can tell.

  3. It is hard to generalize whether your learning rate needs to be decreased over time. The behaviour is highly data-dependent. The smaller your learning rate, the more stable your training will be. However, it can be painfully slow, especially if you're running it in a scripting language like I did once upon a time.

  4. Resilient backprop (or RProp in MATLAB) should handle both online and batch training modes.

I'd just like to add that you might want to consider alternative activation functions if possible. The sigmoid function doesn't always give the best results...
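To illustrate point 1, here is a minimal sketch (with illustrative names and randomly initialized weights, not the asker's actual network) of a fully connected sigmoid layer where the bias is modeled as an extra unit with constant output 1.0: it feeds every neuron in the layer ahead but receives no incoming connections.

```python
import math
import random

def forward_layer(inputs, weights):
    """One fully connected sigmoid layer with a bias unit.

    weights[j] holds len(inputs) + 1 values; the last one multiplies
    the bias unit's constant activation of 1.0, so the bias connects
    forward to every neuron but has no incoming connections itself.
    """
    extended = inputs + [1.0]  # append the bias unit's constant output
    return [
        1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(neuron_w, extended))))
        for neuron_w in weights
    ]

random.seed(0)
# 3 inputs -> 2 hidden neurons: 3 input weights + 1 bias weight per neuron
hidden_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
print(forward_layer([0.5, -0.2, 0.1], hidden_w))
```

If training drives the last weight of each row toward zero, the bias has effectively been pruned, which is why leaving it in costs little.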

Other answers
  1. It varies. Personally, I don't see much of a reason for a bias, but I haven't studied NNs enough to make a valid case for or against one. I'd try it out and test the results.

  2. That's correct. Backpropagation involves calculating the deltas first, and then propagating them across the network.

  3. Yes, the learning factor should be decreased over time. However, with BP you can hit local, incorrect plateaus, so sometimes, around the 500th iteration, it makes sense to reset the learning factor to the initial rate.

  4. I can't answer that; I've never heard anything about RProp.
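Points 2 and 3 above can be sketched together. This is a hypothetical minimal example (names and values are illustrative, not from either answer) of the "all deltas first, then update" order for one sigmoid hidden layer, plus a simple learning-rate schedule with the periodic reset the answer suggests:

```python
def backprop_step(hidden, output, target, w_hidden_out, lr):
    """Compute every delta before touching any weight."""
    # Output deltas: (target - output) scaled by the sigmoid derivative.
    delta_out = [(t - o) * o * (1.0 - o) for t, o in zip(target, output)]
    # Hidden deltas: weighted sum of downstream deltas, scaled the same way.
    delta_hid = [
        h * (1.0 - h) * sum(d * w_hidden_out[k][j] for k, d in enumerate(delta_out))
        for j, h in enumerate(hidden)
    ]
    # Only now update the hidden->output weights (input->hidden is analogous).
    for k, d in enumerate(delta_out):
        for j, h in enumerate(hidden):
            w_hidden_out[k][j] += lr * d * h
    return delta_out, delta_hid

def learning_rate(epoch, initial=0.5, decay=0.99, reset_every=500):
    """Exponential decay, reset to the initial rate every `reset_every` epochs."""
    return initial * decay ** (epoch % reset_every)
```

Separating the delta pass from the weight updates matters: updating weights while deltas are still being propagated would let new weight values leak into the hidden-layer deltas.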
