You're not only converting ASCII to binary, you're converting HEX ASCII to binary. It's important to know what base arithmetic must be performed. :)

The good news is that it's really quite easy once you get the hang of it.

Consider a base 10 number. You and I see the character '0' as "nothing", but the computer "sees" a string of bits that display as a '0'. You and I see a '1' as the number one, but the computer "sees" a string of bits that display as a '1'. Same for the other digits.

To convert the number from ASCII to binary, you simply subtract the code for '0'. Since the characters '0', '1', '2', '3', etc are consecutive, you wind up with the binary form of the digit.

With hex conversions you have potentially two steps for each digit. If the digit is '0' to '9' you simply subtract '0' (the ASCII code for 0). If it's 'a' to 'f' (or 'A' to 'F'), you subtract the ASCII code for 'a' (or 'A') and add 10. You add 10 because 'a' (or 'A') represents 10 decimal, and subtracting 'a' (or 'A') by itself leaves zero.

Programmatically, you convert a digit like this:

if (isdigit(ASCIIChar))
    BinaryDigit = ASCIIChar - '0';
else
    BinaryDigit = tolower(ASCIIChar) - 'a' + 10;

Written on one line, it looks like this:

BinaryDigit = isdigit(ASCIIChar) ? ASCIIChar - '0' : tolower(ASCIIChar) - 'a' + 10;

Note that for this kind of arithmetic ASCIIChar should be an unsigned char: the ctype functions like isdigit() and tolower() are only defined for values representable as an unsigned char (or EOF), so passing a plain char that happens to be negative is undefined behavior.
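Putting it all together, here's a sketch of a whole-string conversion. The function name HexToBinary is mine, and a real version would also want to reject characters outside '0'-'9' and 'a'-'f':

```c
#include <ctype.h>

unsigned long HexToBinary(const char *Hex)
{
    unsigned long Result = 0;

    while (*Hex != '\0')
    {
        unsigned char ASCIIChar = (unsigned char)*Hex++;
        int BinaryDigit;

        if (isdigit(ASCIIChar))
            BinaryDigit = ASCIIChar - '0';                  /* '0'..'9' */
        else
            BinaryDigit = tolower(ASCIIChar) - 'a' + 10;    /* 'a'..'f' or 'A'..'F' */

        Result = Result * 16 + BinaryDigit;    /* shift the value left one hex digit */
    }
    return Result;
}
```

Calling HexToBinary("1A") works through '1' (value 1) and then 'A' (value 10), giving 1 * 16 + 10 = 26.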

Good Luck!

Kent