jtcy asked:
*AI* Minimax Help (Game Tree)

I need to implement a soccer player with minimax. Every time it has a turn, it checks its surroundings for the best path to move so it can hit the ball toward the goal. It's hard to explain fully here, but what I need to know is how to actually implement the game tree using the minimax algorithm. Here is a simple player I have, but it doesn't use minimax; I am hoping to make one that does.

/**
 * DumbPlayer has a very simple strategy. First, it will run straight for
 * the ball, with no regard for what is in the way. (It is easy for a single
 * player to block a team of DumbPlayers.) Second, if it is on top of the ball,
 * it will kick it toward the goal. Third, it will run around for a while to
 * celebrate being so smart! DumbPlayer then reverts to hunting for the ball.
 */

public class DumbPlayer extends Player
{
      /***
       * Constructs a DumbPlayer.
       * <p>
       * f is passed to the super class for initialisation.
       * @param f The SoccerField this player is playing on.
       */
      public DumbPlayer( SoccerField f )
      {
            super( f );
            
            fState = 0;
            fStepsRemaining = 0;
            fMissedTurns = 0;
      }
      
      public SoccerMove haveTurn()
      {
            SoccerMove s = null;
            
            switch ( fState )
            {
                  case 0:
                        /* Hunt the ball: */
                        {
                              int bPos[] = getBallPosition();
                              int pPos[] = new int[ 2 ];
                              int gPos[] = new int[ 2 ];
                              
                              gPos[ 0 ] = getField().getTeamGoalX( getTeam().getID() );
                              gPos[ 1 ] = getField().getTeamGoalY( getTeam().getID() );
                              
                              pPos[ 0 ] = getTeam().getPlayerX( getID() );
                              pPos[ 1 ] = getTeam().getPlayerY( getID() );
                              
                              /***
                               * Adjust the position we are aiming for so that
                               * if we kick from there, the ball will go towards
                               * the goal.
                               */
                              if ( bPos[ 0 ] > gPos[ 0 ] )
                                    bPos[ 0 ]++;
                              else
                                    bPos[ 0 ]--;
                                    
                              if ( bPos[ 1 ] > gPos[ 1 ] )
                                    bPos[ 1 ]++;
                              else if ( bPos[ 1 ] < gPos[ 1 ] )
                                    bPos[ 1 ]--;
                              
                              if ( ( pPos[ 0 ] == bPos[ 0 ] ) && ( pPos[ 1 ] == bPos[ 1 ] ) )
                              {
                                    if ( getField().canKickBall( getTeam().getID(), getID() ) )
                                    {
                                          s = SoccerMove.constructKick();
                                          fState = 1;
                                          fStepsRemaining = 10 + (int)( 11 * Math.random() );
//                                          System.out.println( "Changing to state -> Run " + getID() );
                                    }
                                    else
                                          fMissedTurns++;
                              }
                              else
                              {
                                    int move[] = new int[ 2 ];
                                    
                                    for ( int i = 0; i < 2; i++ )
                                    {
                                          if ( bPos[ i ] > pPos[ i ] )
                                                move[ i ] = 1;
                                          else if ( bPos[ i ] == pPos[ i ] )
                                                move[ i ] = 0;
                                          else
                                                move[ i ] = -1;
                                    }
                                    
                                    if ( getField().canMoveBy( getTeam().getID(), getID(), move[ 0 ], move[ 1 ] ) )
                                          s = SoccerMove.constructMove( move[ 0 ], move[ 1 ] );
                                    else
                                          fMissedTurns++;
                              }
                              
                              if ( fMissedTurns >= 5 )
                              {
                                    fState                  = 1;
                                    fStepsRemaining = 20;
                                    
//                                    System.out.println( "Stuck, changing to state -> Run " + getID() );
                              }
                        }
                        break;
                  case 1:
                        /* Run off! */
                        {
                              int dir = (int)( 8 * Math.random() );
                              
                              switch ( dir )
                              {
                                    case 0:
                                          s = SoccerMove.constructMove( 1, 1 );
                                          break;
                                    case 1:
                                          s = SoccerMove.constructMove( 1, 0 );
                                          break;
                                    case 2:
                                          s = SoccerMove.constructMove( 1, -1 );
                                          break;
                                    case 3:
                                          s = SoccerMove.constructMove( 0, 1 );
                                          break;
                                    case 4:
                                          s = SoccerMove.constructMove( 0, -1 );
                                          break;
                                    case 5:
                                          s = SoccerMove.constructMove( -1, 1 );
                                          break;
                                    case 6:
                                          s = SoccerMove.constructMove( -1, 0 );
                                          break;
                                    case 7:
                                          s = SoccerMove.constructMove( -1, -1 );
                                          break;
                              }
                              
                              fStepsRemaining--;
                              
                              if ( fStepsRemaining == 0 )
                              {
                                    fState                  = 0;
                                    fMissedTurns      = 0;
//                                    System.out.println( "Changing to state -> Hunt " + getID() );
                              }
                        }
                        break;
            }
            
            return s;
      }
      

      private int fState;
      private int fStepsRemaining;
      private int fMissedTurns;
}
jtcy (ASKER):
I need to know how to construct the game tree. So every time the player has a turn, do I construct a root and 8 nodes, and have the root add those 8 nodes?
The root would remain the same; only the possibilities would vary.
jtcy (ASKER):
How do I build the game tree?
For a minimax algorithm you must construct a decision tree of all possible moves. A decision tree is perhaps most easily explained with respect to a turn-based strategy game like chess, othello or checkers. Each move by a player is referred to as a ply, and the possible moves are represented by a layer in the tree. Let's suppose each player has 8 possible moves, and that it is white's turn. The tree would start with the board's current position. The first layer would represent all of white's possible moves. The second layer would represent (for each of those moves) all of black's possible countermoves, and the third layer would represent all of white's possible responses. Then there would be (given 8 possible moves for each player at each ply) 512 different board positions at the "bottom" of the three-ply tree (8*8*8).

Now, the core of this kind of algorithm lies in *evaluating* the board positions at the bottom of the tree. For each board position an evaluator method is called that assigns a score of some sort to the board position. The minimax name comes from the fact that each layer's score is maximized for white's turns and minimized for black's. So, in our example, layer 1 represents white's 8 possible moves, layer 2 represents black's 8 possible countermoves for each of those (a total of 64 board positions), and layer 3 holds the 512 responses white could make. The 512 board positions at the bottom are assigned scores, usually with a higher number being better for white and a lower number (continuing into negative numbers) meaning better for black. At the first of the 64 board positions in layer two, there are 8 "associated" responses by white. The minimax algorithm reasonably assumes that white will make the move that maximizes the score for white. So if white's countermove values in that subtree range from -1 to 5, the score assigned to the first node in layer 2 will be 5. The same is done for each of the 64 nodes in layer 2; that is, they are assigned the maximum value of the scores of the 8 white responses in their subtree.
Now, for the 8 nodes in layer 1, the minimax algorithm must assume the opposite: that black will choose the move that *minimizes* the score, since that is best for black. So for the first node in layer 1, if the 8 countermoves have scores ranging from -2 to 6, the minimax algorithm reasonably assumes that black will choose the move that gives black the best score, i.e. the minimum. So the first node would be assigned a value of -2. This is also done for each of the 8 nodes in layer 1. Then, finally, the move selected for white is the node with the maximum score in layer 1.

So, the minimax algorithm assumes that each player is going to maximize their own advantage at each ply. In words it would be something like: if white makes this move, black could make each of these countermoves. Well, if black does this, then white could make this great play, so black won't do that. If black does that, white can't do much (white's maximum at the bottom is lower), so that's a better move for black, etc.

So, as you can see, the strength of this algorithm depends totally on the evaluator algorithm at the bottom. A poor evaluator will not help the algorithm select the best moves.

I'm unclear on a lot of the details of your soccer game, so I'm not clear on how, or even if, it could be applied in your case. At minimum you would have to develop some idea of what constitutes "better" and "worse" board positions in your game.
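
To make the recursion concrete, here is a minimal generic minimax sketch (an illustration only, not code for your assignment). GameNode, expand() and evaluate() are placeholder names standing in for whatever board representation and evaluator the game provides; note that the tree is never built as an explicit data structure, the recursion simply walks it implicitly:

import java.util.Vector;

// Minimal generic minimax sketch (illustration only, not assignment code).
// GameNode, expand() and evaluate() are placeholders for whatever board
// representation and evaluator the real game supplies.
abstract class GameNode
{
      abstract Vector expand();      // all board positions reachable in one move
      abstract int evaluate();       // score for this position (higher = better for us)

      // Returns the minimax value of this position, searched 'depth' plies deep.
      // No tree object is ever stored: the recursion walks the tree implicitly.
      int minimax(int depth, boolean maximizing)
      {
            if (depth == 0)
                  return evaluate();                      // the "bottom" of the tree

            Vector children = expand();
            if (children.isEmpty())
                  return evaluate();                      // no legal moves: score as-is

            int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
            for (int i = 0; i < children.size(); i++)
            {
                  GameNode child = (GameNode) children.get(i);
                  int score = child.minimax(depth - 1, !maximizing);
                  best = maximizing ? Math.max(best, score)   // our ply: take the maximum
                                    : Math.min(best, score);  // opponent's ply: take the minimum
            }
            return best;
      }
}

The root call would be something like root.minimax(3, true) for a 3-ply search starting from a position where it is our turn to move.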
jtcy (ASKER):
imladris, thanks a lot for explaining that to me. I understand the general idea of how minimax works. What I am confused about is simply how to apply the theory to my soccer project. What I am doing now is getting the current position and creating a node for it, then getting the surrounding positions and creating a node for each of them, so there is a root with 8 child nodes added to it; then for each of the 8 nodes I do the same thing, and so on. But wouldn't this be too complicated and hard to implement? I heard from coursemates that I could build the tree implicitly, avoiding having to build a whole tree first and then search it for the best path. I have no idea how to do that.

The task is simply to write the brain for a robot soccer player. The largest component of this task is planning, or deciding where to move. In order to make this decision we're going to set up a tree describing all possible moves that I and my opponent might make in the future. We'll then use minimax to make a decision about which of these choices to take.

The robot soccer player can move around the field, and can kick the ball. The field is rectangular and is broken up into squares. The ball and the robots occupy one square each. Each robot moves one square per turn, so moving requires only that you specify the direction you want to go. In order to kick the ball you have to be in a square adjacent to it. When kicked, the ball moves away 3 squares. There are only 9 possible choices at each turn - 8 movements and a kick. I can only take one action in any turn, so I can either move one square in any direction or kick the ball.

I have uploaded the files online already; could you or someone else have a look at them?
- SoccerSimulator.jar - the simulator
- DumbPlayer.java - a player that you can test your robot against. It is not an example solution, because it doesn't use minimax to make its decisions.

http://www.virginstudent.co.uk/editprofile/default.asp?UniqueID=397827&ApplicationID=15&FolderID=35560&p=15010
Minimax depends on two things: the ability to represent the problem as board positions, and an algorithm to evaluate those board positions.

If there are two robots (and the ball) on the field this should be relatively straightforward. The board position is represented by a grid plus the positions of the two robots and the position of the ball. The first ply would have one entry for the board after each of the eight moves one robot can make, and possibly a ninth if he could kick. The second ply would contain all of the opponent's possible countermoves from each of those board positions.

If there are 22 robots on the field the tree gets a lot bigger, especially if one assumes (as would seem reasonable) that each of your players gets to move. That would mean the first ply of the tree contains (more than) 8 to the power 11 moves, that is, over 8 billion possibilities. (If the directions each player can move are labelled 1 through 8, and a move is described by the direction each of the 11 players goes, the first move to consider is "11111111111", the second is "21111111111", the third is "31111111111", and so on.)

I doubt that is practical, especially considering that to get to the second ply you would have to consider 8 billion countermoves for each of those.
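
Just to put a number on that (a quick side calculation): 8 to the power 11 is 8,589,934,592, so even a single ply for 11 independently moving players is already into the billions of positions.

// Quick check of the numbers above: board positions to consider.
public class PlyCount
{
      public static void main(String[] args)
      {
            // One robot per side, 9 choices each, searched 3 plies deep.
            System.out.println(Math.pow(9, 3));    // 729.0 -- easily manageable

            // 11 robots moving together, 8 directions each: one single ply.
            System.out.println(Math.pow(8, 11));   // 8.589934592E9 -- over 8 billion
      }
}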

Alternatively, your descriptions hint that the "decision code" is separate for each robot. The code for DumbPlayer indicates that the objective here is to write code for a specific robot, or set of robots, each of which "individually" attempts to determine an optimal course. In minimax terms they are facing the same problem, though. For two players it would be workable. For 22, the number of possibilities will quickly outstrip the computer's capacity to cope. Minimax works by representing all possible board positions and then evaluating them. With 8 billion moves per ply that's probably just not practical.

The last alternative I can imagine (for the 22-player scenario) is for each player to run a minimax on board positions that disregard all other players. This would certainly keep the tree size to manageable levels. On the other hand, it is entirely unclear how helpful it would be, because it does not take the other players' (on either team) possible moves into account. It seems unlikely to me that this scenario would produce solutions that are in any way superior to non-minimax decision making.
jtcy (ASKER):
We don't need to build 22 players. We just need to build one player. There are only 9 possible choices at each turn - 8 movements and a kick.

What I don't understand is:

1. Is the cost function returning the distance between the player and the ball at the current turn?

2. Do I implement it like this: every time I have a turn, I create a root node which holds the current coordinates of the player, then expand it one level (to the adjacent squares) and create nodes for each of them, until a depth of 2 (assuming 2 levels). Then, if level = 2, evaluate the nodes, return the minimum distance of level 2 and pass it up? And then the root takes the maximum node in level 1 as the path to move?

Not for a minimax algorithm, no.

Minimax operates on board positions. So let's assume your soccer field is a grid of 20 by 20 squares. You could represent this with a 2-dimensional array. Each element in the array would be empty, contain a soccer player, or contain a ball (or both?). You would have to assign some suitable representation for those things (e.g. -1 for empty, 1 for a soccer player, 2 for the ball (3 for both?)).

The current board position is at the root.
At level one you would have 8 or 9 boards, representing all possible moves. Sticking with the 1 through 8 movement scheme, the first board at level 1 would be the board position after the player had moved in direction 1 (forward and left), the second board after he moved in direction 2 (forward), the third after direction 3 (forward and right), and so on. There may be a ninth board representing the state of the board after a kick. For level two you would have board representations for all possible countermoves. So take the first board at level 1; at level 2 there will be 8 (or 9) more boards, representing the board position after the other side moves in direction 1, 2, 3 and so on, and possibly a ninth for a kick. Then the same thing for the second board at level 1. This can go down to as many plies as you want (subject to the required processing time).

There is *no* cost function. The core of minimax is an *evaluation* algorithm. This algorithm must assign a score to a board position. So you would have to come up with some notion of what a better board position is. Assign, for instance, more points to the score if the player is closer to the ball. The minimax tree considers *all* possible board positions (to a certain ply depth). In our above example the evaluation algorithm must be applied to all the board positions at the "bottom" of the minimax decision tree. These scores are what drive the decision as to which move to select.

It will select the moves that go towards board positions with the highest score. So if you assign higher scores to board positions in which the player is closer to the ball, the algorithm will produce moves that accomplish that.

So as an example, let's assume a 3-ply tree and an evaluation algorithm that scores "9 minus (distance to ball)". Let's assume that the board has rows numbered top to bottom and columns numbered left to right. Let's assume your player is at row 15, column 10, the other player at 0,0 and the ball at 10,10. A terse representation of the board would be [15,10;0,0;10,10] (coordinates for your player, the other player and the ball). So the 8 boards at the first level would be (representing all possible moves):

[14,9;0,0;10,10], [14,10;0,0;10,10], [14,11;0,0;10,10], [15,9;0,0;10,10], [15,11;0,0;10,10], [16,9;0,0;10,10], [16,10;0,0;10,10], [16,11;0,0;10,10]

There would in fact only be three countermoves possible for each of these, since the other player is in the corner. So the second level for the first board would be:

[14,9;1,0;10,10], [14,9;1,1;10,10], [14,9;0,1;10,10].

For the second board at level one it would be:

[14,10;1,0;10,10], [14,10;1,1;10,10], [14,10;0,1;10,10].

etc. etc. for all board positions at level 1.

Now the 8 boards at level three, under the first board at level 1 and the first board at level 2, would be:

[13,8;1,0;10,10], [13,9;1,0;10,10], [13,10;1,0;10,10],
[14,8;1,0;10,10], [14,10;1,0;10,10], [15,8;1,0;10,10],
[15,9;1,0;10,10], [15,10;1,0;10,10].

The 8 boards for the first board at level 1 and the second board at level 2 would be the same, except that the other player would be at 1,1 instead of 1,0; and for the boards for the third board the other player would be at 0,1.

Scores here range from about 6 (at 13,10 the player is 3 squares away from the ball; 9-3=6) to about 4 (at 15,10 he is 5 squares away from the ball; 9-5=4). Since our simple-minded evaluation algorithm doesn't take the position of the other player into account, the best result of the first move (going forward and left, that is, going to 14,9) is a score of 6. There will be similar results for the first three moves (to 14,9; 14,10 and 14,11). The first move to 15,9, however, would result in final boards like:

[14,8;1,0;10,10], [14,9;1,0;10,10], [14,10;1,0;10,10],
[15,8;1,0;10,10], [15,10;1,0;10,10], [16,8;1,0;10,10],
[16,9;1,0;10,10], [16,10;1,0;10,10]

These will have scores ranging from 5 through 3. And for the moves in which the player moved to row 16 the scores will be even worse. So as these scores propagate up the tree, and the algorithm picks the best moves, it will tend towards the ones in which the player moves closer to the ball, because that produces better scores. So, in this case the algorithm will pick one of the first three moves (in which it moved to row 14), because that winds up with the best scores.
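
Tying the example to code, the evaluator used above could be sketched as follows (illustration only; the real project would score its own GameState objects rather than raw coordinates). Distance is counted in one-square steps in any direction, which is why 13,10 counts as 3 squares from the ball at 10,10:

// Sketch of the "9 minus (distance to ball)" evaluator from the example.
// The other player's position is deliberately ignored, as in the example.
int evaluate(int playerRow, int playerCol, int ballRow, int ballCol)
{
      int distance = Math.max(Math.abs(playerRow - ballRow),
                              Math.abs(playerCol - ballCol));
      return 9 - distance;      // higher score = closer to the ball
}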
jtcy (ASKER):
um...I finally made my own minimax algorithm:

       
public class MyPlayer extends Player
 {
       
      private int current_level = 0;

       public MyPlayer(SoccerField f)
        {
              super(f);
              
        }
       
       
       public SoccerMove haveTurn()
        {
              SoccerMove s = null;
              
            GameState root = new GameState( this );
              int bestPath = minimax(root, 2);
        
              // get stuck here
            
            
            return s;
            
      }






      
      
      /***
       * This method takes in a GameState and generates a Vector
       * containing every move from that position.
       */

      public Vector makeMoves(GameState s)
        {
              
             Vector moves = new Vector();
             
              // storing all possible moves into a vector...
              
            for (int dx = -1; dx <= 1; dx++)
             {
                   for (int dy = -1; dy <= 1; dy++)
                    {
                          GameState temp = new GameState(s);
                          if (temp.canMoveBy(dx,dy))
                           {
                                 temp.moveBy(dx,dy);
                                moves.addElement(temp);
                           }      
                     }
                   
             }
            
 
             return moves;
     }
     
    public int minimax(GameState aState, int depth)
     {
           if (current_level == depth)
                   return evaluate(aState);
           else
            {
                  
                 Vector movements = makeMoves(aState);            // create all possible moves...
                 Vector costs = new Vector();                        // a vector to store all the costs, just in case...
                 current_level++;
                 
                 for (int i=0; i<movements.size();i++)
                  {
                        GameState temp = (GameState) movements.get(i);
                        int score = minimax(temp,depth+1);
                        costs.addElement(new Integer(score));
                  }
                 
                 
                 if (current_level % 2 == 1) // if minimum level
                        return minimum(costs);
                 else
                       return maximum(costs);
                 
            }
     
               
     }
      

      public int minimum(Vector costs)
       {
          int min = ((Integer) costs.get ( 0 ) ).intValue () ;
          for ( int i = 1, iSize = costs.size () ; i < iSize ; i ++ )
           {
                 Integer temp = ( Integer ) costs.get ( i ) ;
                 int tempVal = temp.intValue () ;

                 if ( tempVal < min )
                  min = tempVal ;

           }    
          return min;

     }
     
      public int maximum(Vector costs)
       {
          int max = ((Integer) costs.get ( 0 ) ).intValue () ;
          for ( int i = 1, iSize = costs.size () ; i < iSize ; i ++ )
           {
                 Integer temp = ( Integer ) costs.get ( i ) ;
                 int tempVal = temp.intValue () ;

                 if ( tempVal > max )
                  max = tempVal ;

           }    
          return max;

     }


         public int evaluate(GameState s)
        {
              // Take in a game state and calculate its
              // current distance from the ball.      
              
              int x = (int)s.curPlayer().getX();
             int y = (int)s.curPlayer().getY();
             
             int ballX = (int)s.getBallState().getX();
             int ballY = (int)s.getBallState().getY();
             
             int x_diff = x - ballX;
             int y_diff = y - ballY;
             
             int answer = x_diff*x_diff + y_diff*y_diff;
             
             return -((int)Math.sqrt((double) answer));

        
        }
              
              
      
      public String getTeamName()
      {
            return "Athlon ";
      }
      
              

}



I was told that I missed the kicking code in the makeMoves method.
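
Presumably the missing piece is one more successor for the kick, generated the same way as the movement successors; something like this, assuming GameState offers canKick() and kick() alongside canMoveBy() and moveBy():

// At the end of makeMoves, after the movement loop: also generate the
// board position that results from kicking, if a kick is currently legal.
GameState kicked = new GameState(s);
if (kicked.canKick())
{
      kicked.kick();                 // the state of the board after the kick
      moves.addElement(kicked);      // the ninth possible choice
}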



       
jtcy (ASKER):
The documentation is at www.geocities.com/jonathan_tay_2000 (if you are willing to help).
Well, it looks like you're on your way.

There are a couple of key issues, I think.

The inner loop of minimax:

               for (int i=0; i<movements.size();i++)
                {
                     GameState temp = (GameState) movements.get(i);
                     int score = minimax(temp,depth+1);
                     costs.addElement(new Integer(score));
                }

I think it needs to refrain from altering the value of depth. Your exit condition is "if(current_level==depth)", and current_level is incremented in the body. So if you originally set depth to 2, you would want it to be 2 when it hits the second level.
If you pass "depth+1", you probably won't ever get to the "bottom".

Your minimax method is returning a score, which is good. But it also, somehow, needs to return the move that is associated with that score. Perhaps a class containing a score and a SoccerMove object would be suitable. That will provide the root call of the minimax method with a result it can use to return in haveTurn. This also means that the current minimum and maximum methods you have are inadequate.
I think it would be more efficient (which is important since you only have 30 milliseconds) and simpler all around to keep track of the score and move inside the loop. Have currentscore and currentmove variables and update them as necessary. Something like:

               curmax=(currentlevel%2==1?true:false);
               curscore=(curmax?-1000000:1000000);
               curmove=-1;
               for (int i=0; i<movements.size();i++)
                {
                     GameState temp = (GameState) movements.get(i);
                     int score = minimax(temp,depth);
                     if((curmax && score>curscore) || (!curmax && score<curscore))
                     {    curscore=score;
                           curmove=i;
                     }
                }

Then, at the end, return something indicating both the score and the move.
Also note that if the currentlevel is odd (currentlevel%2==1), I think you need to maximize the score. For the first level, for instance, which is your move, and which is odd, you want to pick the move with the highest score, right?
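
As a sketch of that idea (an illustration only, reusing the makeMoves and evaluate names from your code): a tiny result class holding the score and the index of the move that produced it, plus a minimax that returns it. Here the level is passed as a parameter instead of being kept in a field, and the root call is level 0, so your own moves fall on the even levels and are maximized; that is the same rule as above, just numbered from 0. It also assumes, as the discussion so far does, that successive levels alternate between your moves and the opponent's:

// Sketch: return both the score and the move that produced it.
class ScoredMove
{
      int score;
      int moveIndex;      // index into the Vector returned by makeMoves(); -1 at a leaf

      ScoredMove(int score, int moveIndex)
      {
            this.score = score;
            this.moveIndex = moveIndex;
      }
}

public ScoredMove minimax(GameState aState, int level, int depth)
{
      if (level == depth)
            return new ScoredMove(evaluate(aState), -1);      // leaf: just score the position

      Vector movements = makeMoves(aState);
      if (movements.isEmpty())
            return new ScoredMove(evaluate(aState), -1);      // no legal moves: score as-is

      boolean maximizing = (level % 2 == 0);                  // levels 0, 2, ... are our own moves
      int curScore = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
      int curMove = -1;

      for (int i = 0; i < movements.size(); i++)
      {
            GameState temp = (GameState) movements.get(i);
            int score = minimax(temp, level + 1, depth).score;
            if ((maximizing && score > curScore) || (!maximizing && score < curScore))
            {
                  curScore = score;
                  curMove = i;
            }
      }
      return new ScoredMove(curScore, curMove);
}

haveTurn() would then call minimax(root, 0, 2) (or whatever depth fits within the time limit) and translate the returned moveIndex back into the corresponding SoccerMove.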
jtcy (ASKER):
I put the whole thing in a zip file called minimax.zip; could you have a look there? I can't seem to fully understand it or get it working properly. :~~(
I'm not sure how looking at more code will enable me to make things clearer for you. As this is obviously homework, I cannot simply write the code for you; all I can do is explain specifics or answer specific questions.

If you have a specific difficulty, please ask about it, and I will be happy to work at clarifying it.
jtcy (ASKER):
I don't understand your code above:

curmax=(currentlevel%2==1?true:false);                  // what is this?
               curscore=(curmax?-1000000:1000000);    // and this?
               curmove=-1;
               for (int i=0; i<movements.size();i++)
                {
                     GameState temp = (GameState) movements.get(i);
                     int score = minimax(temp,depth);
                     if((curmax && score>curscore) || (!curmax && score<curscore))
                     {    curscore=score;
                           curmove=i;
                     }
                }
jtcy (ASKER):
I have all my code here:

******************************************************************************

/* My Player class needed to construct a tree.
 *
 */
 
import java.lang.*;
import java.util.*;


public class MyPlayer extends Player
 {
       
      private int current_level = 0;
      private int turn = 0;

       
       public MyPlayer(SoccerField f)
        {
              super(f);
              
        }
       
       
       public SoccerMove haveTurn()
        {
              SoccerMove s = null;
              
            GameState root = new GameState( this );
              int bestPath = minimax(root, 4);
              int[] move = new int[2];
              int i = 0;
              
              Vector temp5 = makeMoves(root);
              while (i<temp5.size())
               {
                     GameState a = (GameState)temp5.get(i);
                     int b = evaluate(a);
                     if (b==bestPath)
                      {
                            if (i==0)
                             {
                                   move[0] = -1;
                                   move[1] = -1;
                             }
                            if (i==1)
                             {
                                  move[0] = -1;
                                  move[1] = 0;
                             }
                            if (i==2)
                             {
                                   move[0] = -1;
                                  move[1] = 1;
                             }
                            if (i==3)
                             {
                                   move[0] = 0;
                                  move[1] = -1;
                             }
                            if (i==4)
                             {
                                   move[0] = 0;
                                  move[1] = 0;
                             }
                            if (i==5)
                             {
                                   move[0] = 0;
                                  move[1] = 1;
                             }
                            if (i==6)
                             {
                                   move[0] = 1;
                                  move[1] = -1;
                             }
                            if (i==7)
                             {
                                   move[0] = 1;
                                  move[1] = 0;
                             }
                            if (i==8)
                             {
                                   move[0] = 1;
                                   move[1] = 1;
                             }
                            
                             if (getField().canMoveBy(getTeam().getID(), getID(), move[0], move[1]))
                                          s = SoccerMove.constructMove(move[0], move[1]);
                             if (getField().canKickBall(getTeam().getID(), getID()))
                                         s = SoccerMove.constructKick();
                      }
                      i++;
               }
               
            
            
            return s;
            
      }






      
      
      /***
       * This method takes in a GameState and generates a Vector
       * containing every move from that position.
       */

      public Vector makeMoves(GameState s)
        {
              
             Vector moves = new Vector();
             
              // storing all possible moves into a vector...
              
            for (int dx = -1; dx <= 1; dx++)
             {
                   for (int dy = -1; dy <= 1; dy++)
                    {
                          GameState temp = new GameState(s);
                          if (temp.canMoveBy(dx,dy))
                           {
                                 temp.moveBy(dx,dy);
                                moves.addElement(temp);
                           }      
                          
                    }
                   
             }
      
           GameState temp = new GameState(s);
         if (temp.canKick())
           {
              temp.kick();        // kick it??? *blur*
              moves.addElement(temp);
           }
        return moves;
     }

 
    public int minimax(GameState aState, int depth)
     {
           if (current_level == depth)
                   return evaluate(aState);
           else
            {
                  
                 Vector movements = makeMoves(aState);            // create all possible moves...
                 Vector costs = new Vector();                        // a vector to store all the costs, just in case...
                 current_level++;
                 
                 for (int i=0; i<movements.size();i++)
                  {
                        GameState temp = (GameState) movements.get(i);
                        int score = minimax(temp,depth);
                        costs.addElement(new Integer(score));
                  }
                 
                 
                 if (current_level % 2 == 1) // if minimum level
                        return minimum(costs);
                 else
                       return maximum(costs);
                 
            }
     
               
     }
      

      public int minimum(Vector costs)
       {
          int min = ((Integer) costs.get ( 0 ) ).intValue () ;
          for ( int i = 1, iSize = costs.size () ; i < iSize ; i ++ )
           {
                 Integer temp = ( Integer ) costs.get ( i ) ;
                 int tempVal = temp.intValue () ;

                 if ( tempVal < min )
                  min = tempVal ;

           }    
          return min;

     }
     
      public int maximum(Vector costs)
       {
          int max = ((Integer) costs.get ( 0 ) ).intValue () ;
          for ( int i = 1, iSize = costs.size () ; i < iSize ; i ++ )
           {
                 Integer temp = ( Integer ) costs.get ( i ) ;
                 int tempVal = temp.intValue () ;

                 if ( tempVal > max )
                  max = tempVal ;

           }    
          return max;

     }


         public int evaluate(GameState s)
        {
              // Take in a game state and calculate its
              // current distance from the ball.      
              
              int x = (int)s.curPlayer().getX();
             int y = (int)s.curPlayer().getY();
             
             int ballX = (int)s.getBallState().getX();
             int ballY = (int)s.getBallState().getY();
             
             int x_diff = x - ballX;
             int y_diff = y - ballY;
             
             int answer = x_diff*x_diff + y_diff*y_diff;
             
             return -((int)Math.sqrt((double) answer));

        
        }
              
              
      /***
       * This method provides our custom team name. See if you can guess who
       * wrote this class!
       * @return The name of our team.
       */
      public String getTeamName()
      {
            return "Athlon ";
      }
      
              

}

*****************************************************************************

However, the result is that the player only moves when the ball moves, and it just mirrors the ball's movement: if the ball moves right, the player moves right; if the ball moves left, it moves left. :~~~~~~~~~~~~(

Here's the message I got from the lecturer:

That's the idea. The cost function you've implemented should make the player follow the ball. If that's happening, then you need to think about writing a new cost function that will make your player score goals. That's something that you really need to figure out yourself. A good starting point is to look at the field and work out what measures you can take. Calculate all these in your function, then try taking various combinations of them. You'll need to experiment with the function to get a good result. Remember that the function needs to get bigger as the state of the board gets better.
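
One illustration of the kind of combination the lecturer describes (only a sketch, not a recommended answer): score a state by how close the player is to the ball and how close the ball is to the opponent's goal, weighting the second term more heavily. The goalX/goalY parameters are hypothetical here; in the real code they would presumably come from the SoccerField accessors that DumbPlayer uses (getField().getTeamGoalX(...) and getTeamGoalY(...)), and the weights would need experimenting with:

// Sketch of a combined cost function (bigger = better board state).
// goalX/goalY are hypothetical -- they would be looked up from the field,
// e.g. via the getTeamGoalX()/getTeamGoalY() accessors DumbPlayer uses.
public int evaluate(GameState s, int goalX, int goalY)
{
      int px = (int) s.curPlayer().getX();
      int py = (int) s.curPlayer().getY();
      int bx = (int) s.getBallState().getX();
      int by = (int) s.getBallState().getY();

      int playerToBall = Math.abs(px - bx) + Math.abs(py - by);        // stay near the ball
      int ballToGoal   = Math.abs(bx - goalX) + Math.abs(by - goalY);  // push the ball goalward

      // Weighting the ball-to-goal distance more heavily makes states in which
      // the ball has moved towards the goal look better than merely chasing it.
      return -(3 * ballToGoal + playerToBall);
}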

 

Please help!
        
ASKER CERTIFIED SOLUTION by imladris
This solution is only available to Experts Exchange members.
How are you getting on with this? Have you completed it?

If so, it is now time to close the question and assign a grade to the answer.

If not, let me know where things are at.