deleyd (United States of America) asked:

C++ ((16-bit minus 32-bit) < 0) always false?

#include "stdafx.h"
typedef unsigned short u16;
typedef unsigned long  u32;

int _tmain(int argc, _TCHAR* argv[])
{
  u16 x = 0;
  u32 y = 6;
  int z = 0;

  if ( (x - (y*7)) < 0)
  {
    z = 1;
  }

  return 0;
}


The problem is the if statement (line 11). Looking at the disassembly, the generated assembly ends with an unconditional jmp that always skips over z = 1;, implying the condition is always FALSE no matter what values x and y hold.

Why? It must have something to do with mixing 16-bit and 32-bit integers.
(attached: disassembly.png)
jkr (Germany) replied:

The problem is that you are using unsigned variables - the subtraction is carried out as an unsigned operation, and an unsigned value can never be less than zero by definition, that's why. Just remove the 'unsigned' from the declarations and everything works as expected.
BTW, to illustrate that:

#include <tchar.h>
#include <stdio.h>
typedef unsigned short u16;
typedef unsigned long  u32;
typedef short i16;
typedef long  i32;


int _tmain(int argc, _TCHAR* argv[])
{
  u32 x = 0;
  u32 y = 6;
  int z = 0;

  i32 x1 = 0;
  i32 y1 = 6;

  if ( (x - (y*7)) < 0)
  {
    printf("true for unsigned types\n");
  }
  else printf("false for unsigned types\n");

  if ( (x1 - (y1*7)) < 0)
  {
    printf("true for signed types\n");
  }
  else printf("false for signed types\n");

  return 0;
}
                                  



Output:

false for unsigned types
true for signed types
BTW², here is an extended demonstration that also prints the calculated values, so you'll see what actually happens behind the scenes:

#include <tchar.h>
#include <stdio.h>
typedef unsigned short u16;
typedef unsigned long  u32;
typedef short i16;
typedef long  i32;


int _tmain(int argc, _TCHAR* argv[])
{
  u32 x = 0;
  u32 y = 6;
  int z = 0;
  u32 result;

  i32 x1 = 0;
  i32 y1 = 6;
  i32 result1;

  if ((result =  (x - (y*7))) < 0)
  {
    printf("true for unsigned types\n");
  }
  else printf("false for unsigned types\n");

  printf ("result: %lu\n", result);

  if ((result1 = (x1 - (y1*7))) < 0)
  {
    printf("true for signed types\n");
  }
  else printf("false for signed types\n");

  printf ("result1: %ld\n", result1);

  return 0;
}
                                  



Output:

false for unsigned types
result: 4294967254
true for signed types
result1: -42
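
In other words, 4294967254 is just -42 wrapped around modulo 2^32. What happens behind the scenes (a minimal sketch, assuming 32-bit int and unsigned long as in the Win32 build above): the u16 operand is promoted to int, but because the other operand is a u32 the expression is then converted to unsigned long by the usual arithmetic conversions, so the subtraction is carried out in unsigned arithmetic and can never produce a negative result.

#include <stdio.h>
typedef unsigned short u16;
typedef unsigned long  u32;

int main()
{
  u16 x = 0;
  u32 y = 6;

  // Usual arithmetic conversions: x (u16) is promoted to int, then converted
  // to unsigned long because y*7 is unsigned long, so the subtraction wraps
  // modulo 2^32 (assuming a 32-bit unsigned long).
  u32 result = x - y * 7;                          // 0 - 42 wraps to 2^32 - 42

  printf("result       : %lu\n", result);          // 4294967254 (0xFFFFFFD6)
  printf("as signed    : %d\n", (int)result);      // -42 on this platform

  // An unsigned value is never less than zero, so the original test can never
  // be true and the compiler is free to drop the branch entirely.
  printf("(x-y*7) < 0  : %d\n", (x - y * 7) < 0);  // 0

  return 0;
}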
deleyd (Asker) replied:

How about if I change the unsigned 16-bit to a signed 16-bit?
#include "stdafx.h"
typedef signed short s16;
typedef unsigned long  u32;

int _tmain(int argc, _TCHAR* argv[])
{
  s16 x = 0;
  u32 y = 6;
  int z = 0;

  if (x - y < 0)
  {
    z = 1;
  }

  if (x < y)
  {
    z = 1;
  }
  return 0;
}


Now I have a signed 16-bit value minus an unsigned 32-bit value, and I get an unconditional jmp again.

However, if I turn the test around into the comparison (x < y), so that I'm comparing a signed 16-bit with an unsigned 32-bit, it works OK.

What's happening?
(attached: disassembly2.png)
ASKER CERTIFIED SOLUTION by jkr (Germany)
(The accepted solution is only available to Experts Exchange members.)
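
The same conversions appear to explain the last example as well (a minimal sketch, not the accepted solution above, assuming 32-bit int and unsigned long): in x - y the signed 16-bit x is promoted to int and then converted to unsigned long to match y, so the difference is again unsigned and '< 0' can never hold, which is why the compiler emits the unconditional jmp. In x < y the same conversion applies to x, but the comparison itself is a genuine run-time test, so a conditional jump is generated - just be aware that a negative x converts to a huge unsigned value first.

#include <stdio.h>
typedef signed short  s16;
typedef unsigned long u32;

int main()
{
  s16 x = -1;   // a negative value, to make the conversion visible
  u32 y = 6;

  // x - y: x is promoted to int, then converted to unsigned long, so the
  // result is unsigned and this test is always false.
  printf("(x - y) < 0 : %d\n", (x - y) < 0);       // 0

  // x < y: the same conversion applies to x, but the comparison is real.
  // A negative x wraps to a huge unsigned value, so -1 < 6 comes out false.
  printf("x < y       : %d\n", x < y);             // 0 (surprising, but well defined)

  printf("x - y       : %lu\n", x - y);            // 4294967289 == 2^32 - 7 (32-bit unsigned long)

  return 0;
}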