emilio

asked on

Floating Point: Overflow

I have written a fairly large program that has a bug. Sometimes it raises an error and then terminates. The error message is "Floating Point: Overflow", coming from BC450RTL.DLL, and I can't find the point in the code where the problem comes from.
The program trains neural networks, and since the training is an iterative process I can't know when the error will happen, which makes it too tedious to run the program step by step to find the bug.
I have tried exception handling (try and catch), but I don't know which argument I must use in the catch.
So, if anyone has an idea for finding the bug I will be very glad, because I'm getting a little desperate now.
inter

Hi friend,
I used to get several of those ugly float errors when programming neural networks with real-valued weights. The most common of all is overflow, especially if you have a large number of connections. Tell me which NN you use (BPN, Kohonen, ART) and I may be able to help further.
By the way, you can catch all exceptions with
  catch(...) {
  }
Regards, Igor
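
A minimal sketch of the catch-all Igor mentions, wrapped around a hypothetical training loop (train_one_epoch and run_training are placeholder names, not functions from emilio's program). Note that on most compilers a hardware floating point fault is not delivered as a C++ exception, so catch(...) only catches exceptions raised with throw:

  #include <stdio.h>

  void train_one_epoch();   /* placeholder for the real training code */

  void run_training(int epochs)
  {
      for (int i = 0; i < epochs; ++i) {
          try {
              train_one_epoch();
          }
          catch (...) {
              /* any C++ exception thrown inside the epoch lands here */
              fprintf(stderr, "Exception caught in epoch %d\n", i);
              break;   /* stop training instead of letting the program terminate */
          }
      }
  }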
ASKER CERTIFIED SOLUTION
Tommy Hui
emilio

ASKER

Indeed, I can't find a way to catch the floating point error with try/catch, but I don't know how to set up a signal handler either.
To give some more information about my problem: the kind of network I use is a special case of RBF, with the neurons of the hidden layer fed back through FIR filters. The floating point error appears when the filters have an order greater than 2. I thought I understood how to do it, but the system doesn't work as expected, and I'm a little down about it.
I appreciate your help.
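
As a sketch of one common way to set up such a signal handler, using signal() and SIGFPE from <signal.h> (the handler body and messages are only illustrative, since the program state after a floating point fault is suspect, the handler just reports and exits):

  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Called when the runtime raises SIGFPE (floating point error). */
  void fpe_handler(int sig)
  {
      fprintf(stderr, "Floating point exception (signal %d) during training\n", sig);
      exit(EXIT_FAILURE);
  }

  int main()
  {
      signal(SIGFPE, fpe_handler);
      /* ... training code goes here ... */
      return 0;
  }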
Hi again,
Let's think together. When the order of the FIR filters is > 2, an overflow can occur either because of a power operation or because a denominator approaches 0. So my first suggestion is to check your DIVISION operations.
For example:
*  if the dividend is > 1 and the divisor is < 1E-10, set the result to your MAX instead of dividing, since the division may cause an overflow
Regards, Igor
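
A small sketch of the kind of guard Igor suggests; the epsilon and maximum values here are illustrative choices, not numbers from the thread:

  #include <math.h>

  #define EPS        1e-10   /* a divisor magnitude below this counts as "zero" */
  #define MAX_RESULT 1e30    /* saturate here instead of letting the division overflow */

  double safe_divide(double dividend, double divisor)
  {
      if (fabs(divisor) < EPS)
          return (dividend >= 0.0) ? MAX_RESULT : -MAX_RESULT;
      return dividend / divisor;
  }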
emilio

ASKER

What happens is a little strange. The error appears during training, and when the order of the filter is 0, that is, a plain RBF, everything works perfectly. So the problem must be in the propagation of a signal through the net, or in the updating of the coefficients in the filters.
Every neuron in the net is implemented as an object, and I checked the operations that could fail, namely the exponentials. Setting a limit on these operations doesn't solve the problem; the error comes anyway.
I could send you the code for the "neuron" object if you want, though the comments won't help very much, since they're in Spanish :-|
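
As an illustration of the kind of limit described above (not emilio's actual code), clamping the argument of exp() keeps the result inside the range of a double, which overflows for arguments a little above 709:

  #include <math.h>

  double safe_exp(double x)
  {
      const double MAX_ARG = 700.0;   /* exp(709) is still finite, exp(710) overflows a double */
      if (x >  MAX_ARG) x =  MAX_ARG;
      if (x < -MAX_ARG) x = -MAX_ARG; /* also avoids denormal/underflow trouble */
      return exp(x);
  }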

emilio

ASKER

Adjusted points to 600
"try"   that code !

__try {
   // ... code to test ...
}
__except ( GetExceptionCode() == EXCEPTION_FLT_OVERFLOW
               ? EXCEPTION_EXECUTE_HANDLER
               : EXCEPTION_CONTINUE_SEARCH )
{
   // overflow management
   // (GetExceptionCode() and the EXCEPTION_* codes come from <windows.h>)
}

Jean-Paul

emilio

ASKER

The part of the code where the problem comes from can be seen at
http://www.geocities.com/SiliconValley/Hills/6142/Neuron.doc
There I wrote out the theoretical formulas for the algorithm and the code that implements a neuron in the hidden layer, in case anyone wants to have a look.
Hi emilio!

I have just visited your home page.....
but where have you hidden your Neuron.doc ???????

JPaul
emilio

ASKER

I was at some friends' home this weekend and I could get to the document without any problem from their computer.
So the document is there, not hiding. Anyway, I'll try to upload it to an ftp server as well.
Once again, the address is:
http://www.geocities.com/SiliconValley/Hills/6142/Neuron.doc 
Anyway, thanks a lot for trying.
emilio

ASKER

There is now an ftp site where you can get the problematic code:
the server is ftp.oce.orst.edu
and the directory is /pub/incoming/emilio/
There you can find these three files:
- Neuron.doc, with the algorithm explanation
- GAUSS.CPP, with the code for the neuron (it's included in the former document)
- NEURONA.HPP, with the header and class declarations (also included in Neuron.doc)
The first document is still available at the http address above.

emilio

ASKER

Hi, friends.
Thank you all for your help. I finally found the bug in my program just by reviewing the algorithm. The problem I had was not only the ugly floating point error: the training didn't give good results because I was not applying the algorithm properly, and by reviewing the theory I solved the training problem and the overflow at the same time. The bug was just an index that made the gradient descent go wrong.
Now I'd like to reward all of your answers, thui, inter and JPM. I don't know which one will get the points, but even though I found the bug myself, I thank you all.
I think I will be able to present my graduation design project in a couple of weeks.
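
As a purely hypothetical illustration of the kind of indexing bug described above (none of these names are from emilio's code), pairing each FIR coefficient with the wrong delayed sample in the update loop makes every gradient step systematically wrong, so the weights drift and can eventually overflow:

  /* Hypothetical FIR coefficient update for one neuron.
     w[0..order] are the filter coefficients, delayed[0..order] the past inputs. */
  void update_fir(double *w, const double *delayed, int order,
                  double rate, double local_gradient)
  {
      for (int k = 0; k <= order; ++k) {
          /* Buggy variant: w[k] += rate * local_gradient * delayed[k + 1];
             pairs each coefficient with the wrong tap (and reads past the buffer). */
          w[k] += rate * local_gradient * delayed[k];   /* correct pairing */
      }
  }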