
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 344

How do I turn a 16-bit int into two 8-bit char variables?

the title pretty much says it all. the code i was using is as follows.

       void breakdown(int number, char *high, char *low)
       {
              *low = (number);
              *high = (number>>8);
       }


*high corresponds to the upper 8 bits of the integer and *low corresponds to the lower 8 bits of the integer. also, if you have any suggestions on how to implement the reverse process, it would be appreciated. our current code for that is as follows.
       
        char highdata, lowdata;
        //assignment of highdata and lowdata
        int build, result = 0;
        build = (lowdata);
        result = (highdata)<<8;
        result = (highdata | build);


this does not seem to be working. any suggestions would be appreciated :)

2 Solutions
 
ikOnoneAuthor Commented:
just to clarify, i just want the raw bits of the integer, i do not want it in ASCII or anything else. just a 16-bit integer split into two 8-bit chars. we are trying to write an int value into two 8-bit registers.
thanks!
 
ikOnoneAuthor Commented:
also, for the code to put the two chars back into an int, we can't just directly typecast into an int, because one char holds the high-order bits and the other the low-order bits, so we need them combined into a single int value.
 
Jaime OlivaresCommented:
void breakdown(int number, unsigned char *high, unsined char *low)
{
             *low = number & 0xFF;
             *high = number>>8;
}

 
Jaime OlivaresCommented:
You have to pay attention to sign: it is not the same to treat a signed int or char as an unsigned one. In fact, it will be better:

void breakdown(unsigned int number, unsigned char *high, unsined char *low)   /* notice unsigned int */
{
             *low = number & 0xFF;
             *high = number>>8;
}

To reconstruct, just have to do:
    unsigned int number;
    number = (high << 8) | low;

Assuming all are unsigned.
 
jkrCommented:
IMHO the easiest way:

union u16to8 {

    short s;
    char ac[2];
};

char low,hi;
union u16to8 u;

u.s = 0x0101;

/* the following might depend on your machine's endianness */
hi = u.ac[0];
low = u.ac[1];

 
Jaime OlivaresCommented:
Again I insist that the unsigned qualifier is an important issue, because most compilers assume these are signed. Maybe jkr's solution could be:

typedef union {
    unsigned short s;
    unsigned char ac[2];
} u16to8;

Also the typedef will allow you to use it as:

u16to8 u;
 
jkrCommented:
>> Again I insist that unsigned qualifier is an important issue

Yup, I agree, missed that.
 
jkrCommented:
Taking back the agreement: the 'signedness' should only be relevant if you'd perform mathematical operations on the 'short' member. When just using it to separate the bytes, I'd check what type the input parameters (IOW: the data to be split) actually have :o)
 
Peter-DartonCommented:
jkr, please take your "easiest way" out to the back yard and shoot it :-)

There is no guarantee that a short occupies exactly twice the space of a char; if it occupies more than two chars, or fewer, the method fails. Also, there is no guarantee of endianness.
It's horrid (I quite like it, it has a certain appeal, but it's still horrid :-)

I must confess that I've used such techniques myself in the past, but I was using fixed-size typedefs such as UINT8 and UINT16 that were matched to the architecture, and I also used the appropriate ntoh & hton macros to ensure endian issues were dealt with.
Having done all that, just shifting left/right by the appropriate number of bits was easier and clearer.

I think people who want to split an integer type (be it long, int or short) into bytes (and back again) are better off using shift-right (or shift-left) operations and bitmasks. Use unsigned arithmetic internally, even if the inputs and outputs need to be signed in the end; do all the working with unsigned, otherwise bad things happen.
 
HendrikTYRCommented:
... or a lot less elegant :

short (or whatever) newVar = high * 256 + low;

hehe
 
Peter-DartonCommented:
jaime_olivares's comment Date: 11/13/2004 02:36PM PST covered the main issue - unsigned/signed maths.
jkr proposed a "quick & dirty" method which will work on many machines, and which may, therefore, be appropriate for the original poster (depends what platform they're using).

Personally, I'd suggest jaime_olivares got there with a definitive answer first (albeit with a typo: "unsined" should be "unsigned"), so if you're after a recommendation from the existing posts, I'd point you at jaime_olivares's 2nd post in this thread.
 
Peter-DartonCommented:
Personally, I prefer code that isn't affected by endian issues (well, code where the source isn't affected by endian issues - obviously the compiler's output will be different), and I'm not a fan of using unions to write one thing then read another.
I can appreciate the cleverness, but I've also learnt that cleverness isn't always a good idea (now, if the union example had included code that dealt with endian issues AND included a nice big comment that explained what it was going to do so that any future software maintainer couldn't use the excuse "I didn't understand the code" and keep their job THEN I'd agree it's better - it's certainly a much better general solution for mapping one datatype onto another, but the original question was specific about 8+8bit -> 16bit -> 8+8bit)
Ultimately, the >>8 method will work on anything, whereas the union method will get the results the wrong way around on many machines; it's clever, but it's not a complete answer to this question.

One would hope, however, that a suitably clued-up compiler would generate identical opcodes for both methods (if optimisation was enabled and debugging wasn't). I'd expect that many compilers optimise one scenario better than the other, but which is better optimised by more compilers is another issue entirely :-)

Anyway, I've rambled on enough :-)
