Why does this compile?

Well, I know why it does; I was just wondering at what point, and in what part of the Java toolchain, this gets converted into proper code?

Is it at compilation?  Or execution?

Hehehe...now THIS is unmaintainable code ;-)

File:  a.java-----------------------------------------

Because Unicode escape sequences (\uHHHH) are read as source by the javac compiler.
The escapes are converted as part of the preparation for compilation.
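For example (a hypothetical file, not the asker's a.java), even a keyword can be spelled with an escape and javac still accepts it, because escapes are decoded before the lexer ever sees the tokens:

```java
// Hypothetical A.java: \u0069 decodes to 'i' before tokenization,
// so "\u0069nt" compiles exactly as if it were written "int".
public class A {
    public static void main(String[] args) {
        \u0069nt x = 1;          // decoded to: int x = 1;
        System.out.println(x);   // prints 1
    }
}
```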

Here for example is the explanation for one such character:

But as far as I know, they are all converted at that point.
Well, according to this:

The conversion is not of all Unicode but from all the other encodings TO Unicode... It is from the page I gave here:
Since most operating environments do not support Unicode, Java uses a pre-processing phase to make sure that all of the characters of a program are in Unicode. This pre-processing comprises two steps:

Translate the program source into Unicode characters if it is in an encoding other than Unicode. Java defines escape sequences that allow all characters that can be represented in Unicode to be represented in other character encodings, such as ASCII or EBCDIC. The escape sequences are recognized by the compiler, even if the program is already represented in Unicode.

Divide the stream of Unicode characters into lines.

Conversion to Unicode
The first thing a Java compiler does is translate its input from the source character encoding (e.g., ASCII or EBCDIC) into Unicode. During the conversion process, Java translates escape sequences of the form \u followed by four hexadecimal digits into the Unicode characters indicated by the given hexadecimal values. These escape sequences let you represent Unicode characters in whatever character set you are using for your source code, even if it is not Unicode. For example, \u0000 is a way of representing the NUL character. "
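A small sketch of that equivalence (the class and field names here are made up): once decoded, an escape is indistinguishable from the character it denotes, even inside an identifier:

```java
public class UnicodeDemo {
    // "\u0076alue" decodes to "value", so both spellings name the same field
    static int \u0076alue = 42;

    public static void main(String[] args) {
        System.out.println(value);      // prints 42: same field, plain spelling
        System.out.println('\u0041');   // prints A: escape inside a char literal
    }
}
```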

I suppose it is a better explanation than the other one. But in all cases the explanation is the same: in pre-processing, all the characters are converted to the same form (no matter whether you write 'b' or its Unicode escape).


and so 'a.java' gets compiled to 'a.class' :-)
TimYatesAuthor Commented:
But shouldn't the character sequence:


for example, only be valid inside a String literal of a Java file?
The pre-processing changes all the characters, not only the ones in the String literals. The idea is not to have trouble with the encodings, I suppose. So it is a valid escape no matter where it appears.

"A Java program is a sequence of characters. These characters are represented using 16-bit numeric codes defined by the Unicode standard.[1] Unicode is a 16-bit character encoding standard that includes representations for all of the characters needed to write all major natural languages, as well as special symbols for mathematics. Unicode defines the codes 0 through 127 to be consistent with ASCII. Because of that consistency, Java programs can be written in ASCII without any need for programmers to be aware of Unicode. "

From the second link... It looks like the normal ASCII encoding we use is just for the programmers' convenience...
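The ASCII-consistency claim from that quote is easy to check in plain Java (nothing assumed beyond the standard library):

```java
public class AsciiCheck {
    public static void main(String[] args) {
        // Unicode code points 0-127 match ASCII, so 'A' is 65 in both
        System.out.println((int) 'A');   // prints 65
        System.out.println('\u0061');    // prints a: U+0061 is ASCII lowercase a
    }
}
```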
Why don't you look at the link. I'll try to find some more info about all this....
Nope.  It is just *also* valid there.   javac thinks in Unicode, so it is not really the Unicode escapes that are translated; it is everything else that is converted to Unicode from whichever alphabet the file is written in.
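That is also why escapes can bite you even inside comments. A classic illustration (file name made up): \u000A decodes to a real line feed before the lexer runs, so it terminates a // comment early:

```java
public class CommentTrap {
    public static void main(String[] args) {
        // The escape below decodes to a real line feed, so the comment
        // ends there and the println is live code: \u000A System.out.println("ran");
    }
}
```

Running this prints "ran", even though the call appears to be commented out.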

Can you give us any source why you think so?
Nothing better than what you yourself have already given. Guess I butted in. sorry.

No problem. It is just an interesting thing (and something that not everyone has even thought of), so if you have any other sources?

TimYatesAuthor Commented:
> The pre-processing changes all the characters

So Java DOES have a preprocessor?

Why am I not allowed #ifdef then? ;-)

A question I ask myself every time when I start writing in Java after a few days writing in C :)

It looks like the compiler does some pre-processing...

TimYatesAuthor Commented:
Thanks to both of you :-)

It still seems wrong...

Either it should only do chars in String constants, or allow me #defines ;-)

Hee hee!

Back to moving house!! :-)


Isn't the #ifdef of C intended for machine-specific situations?
   If this computer has 16-bit words then do this, else do that.

In the enthusiasm of "we are making a machine independent language" that would be left out.

Anyway, #ifdef (and particularly #define) are some of the prime 'shoot yourself in the foot' features of C. I tend to say good riddance :-)

regards JakobA
Absolutely agree that it seems wrong but... can we do anything? :)

TimYatesAuthor Commented:
> I tend to say good riddance :-)

Yeah, but they were SOOOO useful for building the same source up for different machines/releases, etc

// Full release code in here
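For what it's worth, the closest Java idiom (a sketch, not a full replacement for #ifdef) is a compile-time constant flag: per the JLS unreachable-code rules, javac may omit the dead branch of an if on a constant-false condition from the generated bytecode:

```java
public class BuildFlags {
    // A compile-time constant: with DEBUG == false, javac strips the
    // dead branch from the .class file, roughly approximating an
    // #ifdef DEBUG block. Flip to true for a debug build.
    private static final boolean DEBUG = false;

    public static void main(String[] args) {
        if (DEBUG) {
            System.out.println("debug diagnostics here");
        }
        System.out.println("release code path");   // prints release code path
    }
}
```

Unlike #ifdef, though, both branches must still be syntactically valid Java, so it can't hide code that wouldn't compile on some target.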
