# simultaneous equations in c++

Could anyone please help me with source code or some advice on writing a program to solve a set of 3 equations with 3 unknowns? I have found code for solving sets of 2 equations with 2 unknowns, but I can't seem to extend it to deal with 3 equations.
Commented:
What sort of equations? Linear, polynomial (what order), differential, etc.?
Commented:
I think rskell is talking about 3 by 3 linear equations.
Commented:
Yes :-), that is exactly the question I'm here for.

A linear system has the form

Ax = y,  where A is the 3x3 coefficient matrix,
x is the vector of unknowns,
y is the vector of constants.

OK?

If A is nonsingular (det(A) != 0) there is a unique solution. Otherwise the system has either no solution or a parametric family of solutions.

I assume that you have a uniquely solvable problem.

To solve this we find the inverse of matrix A; then the solution is

x = (A^-1).y,   where A^-1 denotes the inverse of A,

and

A^-1 = adj(A)/Det(A)

adj(A) is the adjugate (the transpose of the cofactor matrix), which is hard to write out in plain text;
Det(A) is the ordinary matrix determinant.

SO,

typedef float TVector[3];
typedef TVector *PVector;
typedef TVector TMatrix[3];
typedef TMatrix *PMatrix;

// A is the coefficient matrix.
// y is the constant vector formed from each equation's constant.
// The solution is returned in y, where y = [x1 x2 x3].
// Returns 1 on success, 0 if the system is singular.
int Solve3_3(PMatrix A, PVector y) {
    float x1, x2, x3;
    float a, b, c, d, e, f, g, h, i;
    float det;  // determinant
    TMatrix I;  // the adjugate (inverse times det) is built here
    // For easier reading, copy the matrix into scalars.
    a = (*A)[0][0];  b = (*A)[0][1];  c = (*A)[0][2];
    d = (*A)[1][0];  e = (*A)[1][1];  f = (*A)[1][2];
    g = (*A)[2][0];  h = (*A)[2][1];  i = (*A)[2][2];
    // Find the determinant.
    det = a*e*i + d*h*c + g*b*f - (g*e*c + h*f*a + i*d*b);
    // The solution does not exist if det == 0.
    // (In production code, compare against a small tolerance instead.)
    if (det == 0.0f) return 0;
    // Build the adjugate (transposed cofactor matrix); note the
    // alternating signs of the cofactors.
    I[0][0] = e*i - f*h;  I[0][1] = c*h - b*i;  I[0][2] = b*f - c*e;
    I[1][0] = f*g - d*i;  I[1][1] = a*i - c*g;  I[1][2] = c*d - a*f;
    I[2][0] = d*h - e*g;  I[2][1] = b*g - a*h;  I[2][2] = a*e - b*d;
    // Find the (unscaled) solution.
    x1 = I[0][0]*(*y)[0] + I[0][1]*(*y)[1] + I[0][2]*(*y)[2];
    x2 = I[1][0]*(*y)[0] + I[1][1]*(*y)[1] + I[1][2]*(*y)[2];
    x3 = I[2][0]*(*y)[0] + I[2][1]*(*y)[1] + I[2][2]*(*y)[2];
    // Transfer the solution to the return vector; we have not yet
    // divided by det, so do that now.
    (*y)[0] = x1 / det;
    (*y)[1] = x2 / det;
    (*y)[2] = x3 / det;
    // Return success.
    return 1;
}

Test the code please; I do not have a C compiler on this machine, but it should be 99% correct. To convince you: I am doing a PhD in electronics. Bug reports are welcome, and I do fix them.

Please comment immediately
Igor
Commented:
I have a program I wrote that multiplies two 3x3 matrices together.
Does that help you any?
Author Commented:
The equations I'm trying to solve are of the following form, with the unknowns being x, y and z; a and b are known constants. Notation: x^2 means x squared, and ax^2 means a multiplied by x squared, etc.

i) -1/y[ax - x^2 - bxz + (x^2)z - ab + bx + (b^2)z - axz] = 0

ii) -1/y[az - xz - bz^2 + xz^2 - a + x + bz - xz] = 0

iii) -85/2y + 1/2y^2[a^2 - 2ax + x^2 - 2abz + 2(a+b)xz - 2bxz^2 + b^2z + x^2z^2] = 0

Author Commented:
Thanks jsouth, but I think you are describing matrix multiplication; what I need is a simultaneous equation solver for the types of equations I have described in my comment above.
Commented:
FYI, jsouth's matrix manipulation is a technique for solving simultaneous LINEAR equations.  Unfortunately you don't have linear equations.  I'm not aware of any general procedure for solving systems of polynomials.  Your equations may be solvable (I didn't try it) for a particular a and b.  But as a and b change, the steps for solving may change.  Even the nature of the solution may change.
Author Commented:
In response to nietod: I may have been a bit unclear, but I meant to say that the a and b in the equations do not change; they are known constants.
Commented:
If that is the case, then you only need to solve these equations one time.  That is, solve them yourself rather than have the computer solve them.
Author Commented:
To solve these myself would be very difficult, since the variables are multiplied by each other and divided by other variables. I was just wondering if there is code somewhere for solving these with a program, as I will have other similar equations to solve as my project continues. If I could find a program to do this it would save me a lot of time and would be more reliable than doing it by hand.
Commented:
Solving general equations such as these requires symbolic manipulation.  A program to do so is going to be EXTREMELY complex.  There are commercial ones available (to use, not the source code!) such as Mathematica, MathCAD, and MATLAB.  They tend to be quite costly; I believe Mathematica is about \$700.  MathCAD might be cheaper.  If you have a limited number of equations to solve, and the algebra to do it with, you can save a lot of money doing it yourself.
Commented:
I just got an advertisement in the mail for MATLAB, a mathematics package that does symbolic manipulation.  They can be reached at (508) 647-7000.
Commented:
If you are a college student, you probably have MATLAB or MathCAD on the machines on campus.  The algebra gets kind of messy, but it is definitely do-able.  If you are out of college, then it would be a good touch-up to do them by hand!  :)
Commented:
For my engineering class I use EES (Engineering Equation Solver); it's a great piece of software. I just type in as many equations as I want and it prints out the result. I believe MATLAB will do the same thing.

Try looking for these packages on your campus computers if you are a college student.
Commented:
Since you say you can solve 2 equations with 2 unknowns,
can you start by solving
[ax - x^2 - bxz + (x^2)z - ab + bx + (b^2)z - axz] = 0
[az - xz - bz^2 + xz^2 - a + x + bz -xz] = 0

Commented:
Have you ever considered using the Gauss-Jordan elimination
method (simple matrix mathematics)?

It should be easy to implement as code.. :)
Commented:
Gauss-Jordan elimination only works for a set of linear equations, knu_boeh.
Commented:
http://www.netlib.org/minpack/ex/file08

Full Fortran code for solving N nonlinear equations in N unknowns.  It should transfer to C/C++ fairly easily if you watch your implicit conversions.
Commented:
The method you could use is Gauss elimination.
It solves the equation
Ax = b, where A, x and b are all matrices.

e.g.
a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3

The above is the 3-by-3 system you want,
where
A is a 3x3 matrix,
x a 3x1 matrix,
and b a 3x1 matrix.

E.g.

      8x2 + 2x3 = -7
3x1 + 5x2 + 2x3 =  8
6x1 + 2x2 + 8x3 = 26

We always have to arrange the matrix first, to make sure
that the 1st row has an a11 entry, the 2nd row an a22,
and so on. This is called pivoting.
We set the pivot to x1 first:

3x1 + 5x2 + 2x3 =  8   (pivot)
      8x2 + 2x3 = -7
6x1 + 2x2 + 8x3 = 26

The second step is called elimination:
start by eliminating x1.
To do this we multiply the first (pivot) row by 2, subtract
the result from the 3rd row, and place the new
answer in row 3:

3x1 + 5x2 + 2x3 =  8
      8x2 + 2x3 = -7
     -8x2 + 4x3 = 10

The elimination step is repeated to eliminate
the next variable, x2, with the pivot now
set to the 2nd row.
This is done by adding row 2 to row 3 and placing the
answer in row 3:

3x1 + 5x2 + 2x3 =  8
      8x2 + 2x3 = -7
            6x3 =  3

Lastly we do back substitution to find
the rest of the answer: x3 = 1/2, then
8x2 + 2(1/2) = -7 gives x2 = -1, then
3x1 + 5(-1) + 2(1/2) = 8 gives x1 = 4.
You will notice that the pivoting and
elimination really serve one purpose:
to obtain a triangular matrix.

Regards
Ben
xtac@pacific.net.sg
Author Commented:
Thanks crespo, but Gaussian elimination can't solve equations of the type shown in my earlier comment, because the unknowns are both squared and multiplied by each other, which Gaussian elimination cannot deal with.
Thanks anyway.
Commented:
Dear rskell,

This problem can be solved by iterative techniques using optimization.

If we have

f(x,y,z) = 0;
g(x,y,z) = 0;
h(x,y,z) = 0;

and from the equations you specified we know that y <> 0,

the question can be translated into the form

Minimize(f(x,y,z) + g(x,y,z) + h(x,y,z))
s.t.
y <> 0

Does this mean something, (or do I continue?)

Igor
Commented:
And please, give us those a and b constants.

Igor
Commented:
Don't you mean
Minimize(f(x,y,z)*f(x,y,z) + g(x,y,z)*g(x,y,z) + h(x,y,z)*h(x,y,z))
Commented:
Sorry,

Yes, you are right; make them convex and continuous. But we have a second-order pole at y = 0. Anyway, we may get rid of it.

Come on, let's discuss and finish this today.
Igor
Commented:
(actually, x=0,z=0,y=infinity should get arbitrarily close to f=g=h=0...)
Commented:
Then we may add them to the Subject To part of the optimization problem.

Which descent algorithm do we use, in your opinion, ozo? Steepest descent, conjugate gradient...?

Igor
Commented:
maybe multiply i) and ii) by y, and iii) by y^2, with the result still = 0.

For smooth functions like these, I'd think most any descent algorithm should suffice.
There can't be that many local minima, so just start over if it gets stuck.

But is rskell looking for any (approximate) solution,
or for an equation giving all solutions?
Commented:
Sorry for the delay, I had to step out for a moment. To find the whole family of solutions we would have to construct an optimal control problem involving functionals, but finding a single solution is not that hard.

As you said in the earlier comments, we can safely solve just the first two equations in two unknowns and then check that (x,z) satisfies the third, if only rskell gives us those a and b. Alternatively, since we have eliminated y, we can find the set of (x,z) pairs satisfying equation i, keep only the ones that also fit equation ii, and take those into equation iii. (In general a single nonlinear equation in 2 unknowns has a family of solutions; to keep the attack simple we can still use this method. Theoretically one may raise several objections, but we can try.) I am now giving the partial derivatives; please check them (after multiplying through by y and grouping):

For f = (I(x,z))^2:

∂f/∂x = 2 I(x,z) (1-z)(a+b-2x),   ∂f/∂z = 2 I(x,z) (x^2 - (a+b)x + b^2)

For g = (II(x,z))^2:

∂g/∂x = 2 II(x,z) (-2x + 1),   ∂g/∂z = 2 II(x,z) (a + (b-x)(1-2z))

We may start with either one. Are they OK?

Igor
Commented:
There are two classes of methods for finding roots of nonlinear sets of equations.
One is the Newton-Raphson method.
The other is the family of globally convergent methods.
Has anyone heard of these two?

Commented:
Dear friends,

Yes, the Newton-Raphson (NR) method is a well-known method for finding roots of equations, and it converges to the solution quite fast. It works by intersecting the tangent line (in higher dimensions, the tangent hyperplane) with the axis. When the problem is multidimensional we cannot directly apply the 1D NR; we have to form the Jacobian matrix of first-order partial derivatives of all the functions with respect to all the variables (and, for minimization problems, the Hessian matrix of second-order partial derivatives). All this is examined within the scope of optimization (minimization or maximization of functions) and optimal control (minimum-time, minimum-energy, etc. problems). The term "globally convergent methods" covers several approaches. They are proposed mainly for complex error surfaces with several local minima (for minimization problems). A well-known globally convergent method is simulated annealing, which applies a small disturbance to the solution when a local minimum is reached, hoping to step over the nearest barrier and reach a better local minimum; in this way it seeks the global optimum. Others are evolutionary approaches, the Boltzmann machine approach, and combinations of them such as GESA (Guided Evolutionary Simulated Annealing).

Sorry if I go outside the scope of the question.
Bye
Igor
Commented:
Your problem will be solved when you look into any standard book on numerical analysis. They have all sorts of methods to solve all sorts of equations.



Author Commented:
Thanks to everyone who helped. I think with all the advice that everyone gave, I should be able to solve my problem.

regards,
rskell.