# Binary/Hexadecimal conversion

Hi, Experts. For learning purposes, I'm writing a small program in Delphi that converts numbers and strings to and from binary and hexadecimal format. So far I have written two sections: the first converts decimal numbers and strings to binary, the other converts numbers and strings to hexadecimal format. To call the right function I set a flag: when the user clicks the 'Convert' button, the program checks whether the input contains only digits or also letters. In the first case the program treats the input as a decimal number; in the second, as a string. With the flag set accordingly, the program knows how to convert the data back to its original format.
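
The digits-only check described above can be sketched in a few lines. The actual program is in Delphi; this Python version is only an illustration, and the function name `classify` is mine, not from the original:

```python
def classify(user_input: str) -> str:
    """Return 'number' if the input is all digits, else 'string'.

    This mimics the flag the asker sets before choosing a conversion routine.
    """
    return "number" if user_input.isdigit() else "string"

print(classify("123"))       # treated as a decimal number
print(classify("pmasotta"))  # treated as a string
```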

Hoping I have been clear, here is the problem: I'm now trying to write the third section, which converts data between binary and hexadecimal format, and the question is: how can I tell whether data in hexadecimal format is a number or a literal string? I could leave this problem to my hypothetical user, giving them the responsibility to check a radio button telling the program what the input is, but... that feels too easy. I think computers know how to distinguish a hexadecimal string from a hexadecimal number, but how do they do it? What is the trick that lets the software itself understand whether it has to display a string or a number when reading a hexadecimal value?

I know this may seem stupid, but I'm trying to learn Assembly, and I understand that one must deeply grasp this kind of logic to learn it - or so I feel...

Thanks to everyone who will try to help me.

Thank you for your reply, pmasotta. Seeing your nick, I think you're Italian, as I am. In that case you're the second Italian expert I've found. The first one helped me with some encryption issues in Delphi and suggested Write Great Code, Vol. 2. Now I'm studying The Art of Assembly Language by the same author (Randall Hyde). So let me explain.

I know my conversions are unneeded and that my program will be the queen of uselessness, but I'm building it only to familiarize myself with hex and binary strings. For instance, writing my little utility helped me understand why decimal 16 becomes 10 in hexadecimal.

I'm a bit confused, so it's better to reformulate my question. And it's better to go step by step.

First step: to display a decimal number with its binary representation, you use an algorithm called division-by-two. To do the same with a literal string, you apply the same procedure to the ASCII value of each character of the string. If you use this method (taking the ASCII value) to convert numbers as well, you get a different result: 1 as an integer is different from 1 as a char, and its binary representation is different. Am I wrong here?
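
The distinction described here is easy to confirm. This is a Python illustration (the asker's program is Delphi), using `format` with the `08b` spec to render 8-bit binary:

```python
# The integer 1 and the character '1' have different bit patterns.
as_int = format(1, "08b")          # 1 stored as a number
as_char = format(ord("1"), "08b")  # '1' stored as its ASCII code, 0x31 = 49
print(as_int)   # 00000001
print(as_char)  # 00110001
```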

This is ambiguous... it could be:

1) the binary representation of a number stored in memory printed out as decimal

2) a decimal number stored in memory in binary, printed out in its binary form

Division-by-two is used to convert the decimal representation of a number to its binary representation.

When you have the binary representation of 1 byte, it can be interpreted in its arithmetic form (a number from 0 to 255), or it can be translated through a table, e.g. ASCII, where every bit pattern takes on a special meaning: hex 0x30 represents the character "0", hex 0x33 represents ASCII "3", hex 0x41 represents ASCII "A", hex 0x42 represents ASCII "B", etc. etc...
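
The two readings of the same byte can be shown directly. A Python sketch, purely illustrative (the thread's context is Delphi/assembly):

```python
b = 0x41
print(b)       # 65 - the arithmetic interpretation of the byte
print(chr(b))  # A  - the ASCII interpretation of the same byte

# The other examples from the table:
print(chr(0x30), chr(0x33), chr(0x42))  # 0 3 B
```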

It is just a matter of interpretation: whether you want to consider a byte by its arithmetic "meaning" or by its ASCII "meaning".

There's even a hybrid form:

if you want to generate all the capital letters of the alphabet, you can start with 0x41 and add 1 in a loop from 0 to 25 (26 letters in total)

Then you have an ASCII representation combined with an arithmetic process...
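
A sketch of that hybrid form, in Python for illustration:

```python
# Arithmetic on ASCII codes: 0x41 is 'A', 0x41 + 1 is 'B', and so on.
letters = "".join(chr(0x41 + i) for i in range(26))
print(letters)  # ABCDEFGHIJKLMNOPQRSTUVWXYZ
```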

You can do many things with assembler and C...

Well, my English is bad, so either I don't understand what you mean or I can't explain what I want - or perhaps both... :)

Let me speak by examples. I do this:

I type 123 and I get 01111011 - this is what you get if you manually use division-by-two.

I type pmasotta and I get 01110000 01101101 01100001 01110011 01101111 01110100 01110100 01100001 - each char, from last to first, is converted to its ASCII code, and that code is converted to binary using division-by-two.

I don't know if this is correct or if it makes sense (I know it has no practical utility). So overall I wish to know: is 01110000 01101101 01100001 01110011 01101111 01110100 01110100 01100001 indeed the memory representation of the string pmasotta?
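
Both examples can be reproduced with a short sketch. The asker's program is in Delphi; this Python equivalent is only an illustration, and the helper name `to_binary_string` is mine:

```python
def to_binary_string(s: str) -> str:
    """Render each character of s as its 8-bit ASCII code."""
    return " ".join(format(ord(c), "08b") for c in s)

print(format(123, "08b"))           # 01111011
print(to_binary_string("pmasotta"))
# 01110000 01101101 01100001 01110011 01101111 01110100 01110100 01100001
```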

Your English is good, don't worry.

*I type 123* where do you type 123?

*I get 01111011* where do you get 01111011?

*this is what you get if you manually use division-by-two*

Division-by-two is a process for converting the decimal representation of a number to its binary one...

*each char from last to first is converted to its ASCII code and this code is converted to binary using division-by-two.*

This is wrong. When you press the "A" key, the keyboard sends the computer an encoding of a capital "a", and the computer knows how to convert that encoding to the ASCII representation of "A", that is 0x41. Your program gets that binary value directly; division-by-two never happens with ASCII chars. Division by 2 is an extra step used only when the input is a number in decimal form that must be converted to its binary form for further arithmetic processing in the processor or for memory storage.

*...is the memory representation of string pmasotta indeed?*

Yes, it is: it is my name expressed as a sequence of the corresponding ASCII codes.

Wow, what confusion! Let me say 'Thank you!' first: you're very kind to waste your time on my ramblings :-) Well, let's go on.

*"where do you type 123?"*

In an edit box in my Delphi program

*"where do you get 01111011?"*

In a memo or text box in my program as well

*"division-by-two is a process for converting the decimal representation of a number to its binary one..."*

Yes: I implemented a simple little function to do this:

123 is odd, so place a 1

123 / 2 = 61.5 -> 61 is odd, so place a 1

61 / 2 = 30.5 -> 30 is even, so place a 0

and so on. If I type a string and click the Convert button, my function iterates over the string from the last char to the first, uses the built-in Ord function to get the decimal ASCII value of each char, and uses this number to get the binary:

string: pmasotta

last char: a -> decimal value 97

97 / 2 etc. gives 01100001

and so on.
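
The steps above can be sketched as follows. This is Python for illustration only (the asker's function is Delphi); the name `division_by_two` is mine:

```python
def division_by_two(n: int) -> str:
    """Build the binary digits of n by repeated halving.

    Each remainder becomes the next bit, least significant first,
    exactly as in the manual procedure: odd -> 1, even -> 0.
    """
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = ("1" if n % 2 else "0") + bits
        n //= 2
    return bits

print(division_by_two(123))  # 1111011
print(division_by_two(97))   # 1100001
```

Note that the raw result has no leading zeros; the 8-bit form the asker displays (01111011) is obtained by padding, e.g. with `zfill(8)`.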

I'm simply writing some functions to perform these conversions and to burn these mechanisms into my brain. What I wished to understand was how a program knows whether a value stored in computer memory is a string or a number: a text editor represents numbers as chars, but the Windows 7 Calc knows they are numbers, so the text editor uses one method to interpret memory data and Calc uses another. I had this doubt because I saw that with one method 123 becomes 01111011, but with the other one (the one for strings) 123 becomes 00110001 00110010 00110011. This means the user always sees 123 displayed on the monitor, but with the first encoding he can perform mathematical operations on it, while with the second he cannot. But as you said, this depends on the software being used and on whether it expects integer numbers or strings...
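
The two encodings of 123 mentioned here can be checked with a short Python sketch (illustrative only; the program under discussion is Delphi):

```python
# 123 encoded as a number (one byte) vs. as a string (three ASCII bytes).
as_number = format(123, "08b")
as_string = " ".join(format(ord(c), "08b") for c in "123")
print(as_number)  # 01111011
print(as_string)  # 00110001 00110010 00110011
```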

Talking with you has guided me to a clearer understanding of this aspect of computers. Or am I still wrong?

Thank you, pmasotta, for your patience.

I have clearer ideas now.

On to the next...

One thing is the memory representation of a variable; another thing is its printing/reading format.

When you store a char in memory (8 bits), you really store the 8 bits that correspond to a particular character in (say) the ASCII table... or it could also be an integer number less than 256.

When you store an integer (4 bytes) in memory on a little-endian architecture, you have the 32 bits that correspond to the integer stored in the variable...
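
The little-endian layout of such an integer can be inspected, for example, with Python's struct module (an illustration added here; not part of the original discussion):

```python
import struct

# Pack 123 as a 32-bit little-endian signed integer:
# the least significant byte (0x7b) comes first.
raw = struct.pack("<i", 123)
print(raw.hex(" "))  # 7b 00 00 00
```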

If you have a string of chars, you have a sequence of bytes holding the binary values corresponding to the ASCII codes of the stored characters...

As you can see, you always store BINARY. When you want to print out the info held in those variables, you have functions (like printf) that take care of formatting (decimal, hexadecimal, etc.) and print the content of a numeric variable the right way; the same goes when you have to print a string of characters...
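
A Python analogue of that printf-style formatting (illustrative; pmasotta's point is about C's printf), showing the same stored value rendered four different ways:

```python
value = 65
print(f"{value:d}")    # 65       - decimal
print(f"{value:x}")    # 41       - hexadecimal
print(f"{value:c}")    # A        - as a character
print(f"{value:08b}")  # 01000001 - binary
```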

So the differentiation between an alpha char and a single-byte number can only be known by the programmer when coding the program.

All your conversions are really needed whenever you print out a numeric variable or need to interpret numeric user input.