As may be familiar to some of you, we are in the middle of
a port from a 16-bit DOS to a 32-bit NT environment, a process
which has raised some concerns on my part about the fact that
the NATIVE types int and unsigned have, by definition, grown from 2 to 4 bytes.
The problem is that in our application we have always
had some types whose "size" must always stay the same:
    typedef unsigned short uint_16;
    typedef signed short   int_16;
Accordingly, I'm concerned about two effects of this process.
1. Suppose I have
    int_16 my_int_16;
    int    my_native_int;
Under 16-bit, it's entirely OK to say
my_int_16 = my_native_int; (A)
my_native_int = my_int_16; (B)
(after all, the types are essentially the same).
However, under 32-bit, statement (A) assigns a 32-bit
value to a 16-bit variable, and statement (B) does the opposite.
My gut feeling here is that I'm OK since, by the nature
of the application, the data will never exceed the
range (-32768 to 32767), so I'm not that worried
about TRUNCATIONS. However, I need to
be sure that I did not overlook anything, either with
truncations or otherwise, and brutal honesty here would be appreciated :)
2. My greater concern is passing values to functions.
void foo(int_16 i16, char *s);
void bar(char *s, int_16 i16);
to which I pass values as follows:
foo (i, "HELLO");
My worry here is: will passing BY VALUE a 32-bit variable
(i) to a function expecting a 16-bit value mess up the stack?
Again, I'm not concerned about truncations, since by the
nature of the application i will never exceed the 16-bit
range; I'm more concerned about the internal stuff, like
stack layout and calling conventions.
3. I realize, btw, that if I am passing by REFERENCE
I will have to make the types compatible, but I'm hoping
that the compiler will point out those instances to me.