The output is 0, not 1 as might initially be believed, because the int s1 is converted to an unsigned int with value UINT_MAX.
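For concreteness, here is a minimal sketch of the kind of snippet under discussion (only s1 appears in the original; the unsigned operand u1 and the exact expression are assumptions):

#include <stdio.h>

int main(void)
{
    int s1 = -1;          /* signed operand from the question */
    unsigned int u1 = 1;  /* hypothetical unsigned operand */
    /* s1 is converted to unsigned int and becomes UINT_MAX,
       so UINT_MAX < 1 is false and 0 is printed, not 1. */
    printf("%d\n", s1 < u1);
    return 0;
}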

This has something to do with the conversion rules and integer ranking; I roughly understand the conversion rules (the "Usual Arithmetic Conversions" section at the link) and believe that rule #3 is being used:

If the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type is converted to the type of the operand with unsigned integer type.

...but I don't understand the ranking: clearly, for the conversion above to take place, the unsigned value has been given a greater rank than the signed value, but where is it specified that this will happen?

Can someone please clarify?

(Sorry if I have missed it at the above link, but I have spent a long time looking at this, which might be to blame :))

Scroll up a bit from the code snippet you posted, to 'Integer Conversion Rank'

There you'll find the statement:

The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any.

which directly correlates to

If the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type is converted to the type of the operand with unsigned integer type.

So 'int' and 'unsigned int' have equal rank, the "greater than or equal" condition is met, and the signed operand is converted to unsigned.

Actually, signed and unsigned integers (of the same size) are not really "converted" for compare operations but "reinterpreted", which means that the highest (most significant) bit is either interpreted as a sign bit or not.

Because of that you can do something like:

unsigned int ui = -1; // ui is UINT_MAX
int i = -1;
assert(ui == i);      // expression is true

Generally, in such cases the direction of the cast should follow the "nature" of the value, so in the case of a negative number you should cast the "unsigned" to "signed" and not vice versa.

I am not sure why your first example works: if ui is UINT_MAX, then how on earth is that equal to -1? Is it because the -1 is being converted to the unsigned int equivalent, which is also UINT_MAX? And if this is true, is it because of the second rule that jkr quoted above: "...the operand with signed integer type is converted to the type of the operand with unsigned integer type"?

>> which means that the highest (most significant) bit is either interpreted as a sign bit or not

That is really interesting; could you possibly elaborate with an example?

>> ...so in the case of a negative number you should cast the "unsigned" to "signed" and not vice versa

How come you are casting the uint ui to an int then? Or have I misinterpreted your comment?

jkr/Sara:

From the quote jkr made:

"The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any"

Does "corresponding signed integer type" mean "the same numeric value", so that "(unsigned int)-1" has the same corresponding integer type as "(int)-1" but not the same corresponding integer type as "(int)-2"? In that case, what rule determines which of (unsigned int)-1 and (int)-2 has the higher rank?

Thanks very much in advance both!


Nope, the "corresponding signed integer type" means a type with the same precision, i.e. the corresponding signed type to an 'unsigned short' is 'short', and to an 'unsigned long' it's 'long'.

A 32-bit integer with all bits set is -1 for signed and 2^32-1 for unsigned (UINT_MAX == 4294967295). If you add 1 to the number, in both cases the result is 0, which in the second case is due to wrap-around (overflow). 0 means that no bit is set.
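A quick sketch of that, assuming a 32-bit int (variable names are illustrative):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int s = -1;                 /* all 32 bits set */
    unsigned int u = UINT_MAX;  /* same bit pattern, value 4294967295 */
    printf("%d\n", s + 1);      /* 0: ordinary arithmetic */
    printf("%u\n", u + 1u);     /* 0: unsigned wrap-around (modulo 2^32) */
    return 0;
}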

All negative numbers of a signed integer have the highest bit (bit 31 for 'int' and 'unsigned int') set to 1. The highest of bits 0 ... 31 is also called the most significant bit, as it is responsible for the highest range of numbers, from 2^31 to 2^32-1. 2^31-1 (hex: 0x7FFFFFFF, decimal: 2147483647) is the highest number of a signed integer. If you add 1 to that number you would get -2^31 (decimal -2147483648) for an 'int' variable and 2^31 (decimal 2147483648) for an unsigned int. For both, the hex value is 0x80000000, or binary 10000000000000000000000000000000, where the leftmost bit is 1 and all others are 0.
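To see that boundary bit pattern both ways, here is a sketch assuming a 32-bit two's-complement int. (Note that actually computing INT_MAX + 1 in a signed int is undefined behaviour in C, so the sketch reinterprets the bit pattern instead:)

#include <stdio.h>

int main(void)
{
    unsigned int u = 0x80000000u;  /* 2^31: highest bit 1, all others 0 */
    printf("%u\n", u);             /* 2147483648 */
    printf("%d\n", (int)u);        /* -2147483648 on two's complement
                                      (implementation-defined conversion) */
    return 0;
}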

Summary: for 'signed' and 'unsigned' integers with the same bit count we have the same bit combinations, which are "interpreted" differently when the highest bit is set.

For a signed integer, the upper half of the unsigned numbers is interpreted as the negative numbers, without any conversion (all bits keep their value).
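For instance (a sketch; memcpy is used just to reuse the same bits without any arithmetic conversion):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int u = 4294967292u;  /* upper half of the unsigned range */
    int s;
    memcpy(&s, &u, sizeof s);      /* copy the bit pattern unchanged */
    printf("%d\n", s);             /* -4 on a two's-complement machine */
    return 0;
}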

>> ...so in the case of a negative number you should cast the "unsigned" to "signed" and not vice versa

How come you are casting the uint ui to an int then? Or have I misinterpreted your comment?

My statement was: if you compare a signed integer with an unsigned integer, you can either cast the signed integer to unsigned or cast the unsigned integer to signed. Both cases are equivalent and make the comparison more readable, as both operands then have the same type. But if the signed integer could hold a negative number - for example because it was computed as a difference - you'd better cast from unsigned to signed and not vice versa, so that the "nature" of the value is not changed. In the debugger both sides would then show, for example, -4 rather than 4294967292 or 0xFFFFFFFC.
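A sketch of the recommended direction (the names diff and size are illustrative):

#include <stdio.h>

int main(void)
{
    int diff = -4;           /* "naturally" signed, e.g. a difference */
    unsigned int size = 10;  /* "naturally" unsigned, e.g. a count */
    /* Cast the unsigned operand to signed so the negative value keeps
       its nature; (unsigned int)diff would be 4294967292 instead. */
    if (diff < (int)size)
        printf("diff is smaller\n");
    return 0;
}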
