Solved

Encoding.Unicode.GetBytes

Posted on 2006-10-27
1,100 Views
Last Modified: 2010-05-18
Easy 500 to someone who understands....

Consider the following and tell me why outputBytes does not always equal inputBytes. I think it has something to do with the size of inputBytes, but what can I do to coerce inputBytes so that they are always 'convertible' to and from a Unicode string?

byte[] inputBytes;
// ...
// inputBytes is created from 'somewhere'
// ...
byte[] outputBytes = Encoding.Unicode.GetBytes(Encoding.GetString(inputBytes));
Question by:Solveweb
8 Comments
 
LVL 22

Expert Comment

by:_TAD_
ID: 17820479

That's because the input bytes are probably encoded with a default encoding that is not Unicode.

I would guess ASCII, UTF-8 or Latin-1 is the default.

In any case, you will want to convert the encoding.

Here's a site that may help:
http://msdn2.microsoft.com/en-us/library/kdcak6ye.aspx
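
For reference, a minimal sketch of that kind of conversion, assuming the input bytes really are text in a known single-byte encoding such as Latin-1 (the class name is just for illustration):

using System;
using System.Text;

class EncodingConvertSketch
{
    static void Main()
    {
        // Suppose the input bytes are Latin-1 text rather than Unicode.
        Encoding latin1 = Encoding.GetEncoding("ISO-8859-1");
        byte[] latin1Bytes = latin1.GetBytes("café");

        // Encoding.Convert re-encodes bytes from the source encoding to the target one.
        byte[] unicodeBytes = Encoding.Convert(latin1, Encoding.Unicode, latin1Bytes);

        Console.WriteLine(Encoding.Unicode.GetString(unicodeBytes));   // café
    }
}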



Author Comment

by:Solveweb
ID: 17820513
Actually, inputBytes isn't encoded from a string at all - it's created using a custom authentication routine, so I can't exactly 'convert' the encoding from anything at all...
 
LVL 22

Expert Comment

by:_TAD_
ID: 17820629


Sure it is... You show it being encoded right here:

byte[] outputBytes = Encoding.Unicode.GetBytes(Encoding.GetString(inputBytes));


First you decode the input bytes into a string using the default encoding {Encoding.GetString(inputBytes)}, and then you re-encode that string as Unicode bytes {Encoding.Unicode.GetBytes()}.


Since you are not using "byte[] outputBytes = inputBytes", it is clear that the input bytes are in a format other than Unicode. You have to do a transformation if the bytes aren't in the right format.
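
As an aside, a minimal sketch of which way each call goes (GetString decodes bytes to a string; GetBytes encodes a string back to bytes); the sample text is just for illustration:

using System;
using System.Text;

class GetStringGetBytesSketch
{
    static void Main()
    {
        // Start from real text so the round trip is well defined.
        byte[] utf16Bytes = Encoding.Unicode.GetBytes("abc");     // string -> bytes (encode)
        string text = Encoding.Unicode.GetString(utf16Bytes);     // bytes -> string (decode)

        Console.WriteLine(text);                                  // abc
        Console.WriteLine(BitConverter.ToString(utf16Bytes));     // 61-00-62-00-63-00
    }
}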

 

Author Comment

by:Solveweb
ID: 17820764
Sorry --- the code example was wrong --- it should have been as follows, which clearly converts to and from the same code page --- I have also added a code snippet that demonstrates the same issue when xk gets to [0, 216]....

byte[] inputBytes;
// inputBytes is created from 'somewhere'
byte[] outputBytes = Encoding.Unicode.GetBytes(Encoding.Unicode.GetString(inputBytes));

// problem can also be demonstrated with the following snippet....
for (byte xi = 0; xi < 255; xi++)
{
    for (byte xj = 0; xj < 255; xj++)
    {
        byte[] xk = new byte[2] { xi, xj };
        string xs = Encoding.Default.GetString(xk);
        if (xs == string.Empty)
        {
            string badCodeThatDoesntEncode = "yes";
        }
    }
}
 
LVL 4

Accepted Solution

by:
ostdp earned 500 total points
ID: 17821665
You may have a case of invalid characters occurring during the conversion. In multi-byte character sets not all two-byte sequences are valid, so if you are creating inputBytes in a non-Unicode-compatible fashion (you said authentication, so I assume a hash function), the default behavior of the encoders is to _discard_ invalid sequences - hence the discrepancy between inputBytes and outputBytes.

By the way, the default string encoding in .NET is Unicode.
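
To make that concrete, here is a minimal sketch of the round trip failing on a lone UTF-16 surrogate - the byte pair [0, 216] is 0xD800 in little-endian order. The exact fallback behavior varies by framework version, but either way the original bytes do not come back:

using System;
using System.Text;

class SurrogateRoundTripDemo
{
    static void Main()
    {
        // 0xD800 is a lone high surrogate: a legal byte pair, but not a valid character on its own.
        byte[] inputBytes = { 0x00, 0xD8 };   // little-endian UTF-16 for U+D800

        string s = Encoding.Unicode.GetString(inputBytes);
        byte[] outputBytes = Encoding.Unicode.GetBytes(s);

        // Depending on the framework version the invalid sequence is dropped or
        // replaced with U+FFFD; in neither case does outputBytes equal inputBytes.
        Console.WriteLine(BitConverter.ToString(inputBytes));
        Console.WriteLine(BitConverter.ToString(outputBytes));
    }
}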
 

Author Comment

by:Solveweb
ID: 17822736
Rats! It would be nice if there were a way of doing this - simply to squash a byte array down to as small a string representation as possible (single-byte string conversion isn't good enough). Now I know - Unicode doesn't quite mean two-byte encoding in the way I thought it might. Hmm... back to the drawing board.

Thanks
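
For what it's worth, if the goal is just a lossless, reasonably compact string form of arbitrary bytes, Base64 is the usual answer; a minimal sketch (the sample bytes are made up):

using System;

class Base64RoundTripDemo
{
    static void Main()
    {
        // Arbitrary bytes, e.g. from a hash or token - not valid text in any encoding.
        byte[] inputBytes = { 0x00, 0xD8, 0xFF, 0x10, 0x7F };

        // Base64 maps any byte array to a plain ASCII string and back without loss,
        // at a cost of roughly 4 output characters per 3 input bytes.
        string encoded = Convert.ToBase64String(inputBytes);
        byte[] roundTripped = Convert.FromBase64String(encoded);

        Console.WriteLine(encoded);                              // ANj/EH8=
        Console.WriteLine(BitConverter.ToString(roundTripped));  // 00-D8-FF-10-7F
    }
}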
