Solved

Encoding.Unicode.GetBytes

Posted on 2006-10-27
8
1,088 Views
Last Modified: 2010-05-18
Easy 500 to someone who understands....

Consider the following and tell me why outputBytes does not always equal inputBytes. I think it has something to do with the size of inputBytes, but what can I do to coerce inputBytes to always be 'convertible' to and from a Unicode string?

byte[] inputBytes;
// ...
// inputBytes is created from 'somewhere'
// ...
byte[] outputBytes = Encoding.Unicode.GetBytes(Encoding.GetString(inputBytes));
0
Comment
Question by:Solveweb
8 Comments
 
LVL 22

Expert Comment

by:_TAD_
Comment Utility

That's because the input bytes are probably encoded with a default encoding that is not Unicode.

I would guess ASCII, UTF-8 or Latin-1 is the default.

In any case, you will want to convert the encoding.

Here's a site that may help
http://msdn2.microsoft.com/en-us/library/kdcak6ye.aspx
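
The page linked above documents Encoding.Convert. A minimal sketch of that idea, assuming the bytes really are text in some known source encoding (the "héllo" sample and encoding choices here are illustrative, not from the question):

```csharp
using System;
using System.Text;

class ConvertDemo
{
    static void Main()
    {
        // Hypothetical: suppose the bytes were produced as UTF-8 text.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes("héllo");

        // Encoding.Convert re-encodes a byte array from one text encoding to another.
        byte[] unicodeBytes = Encoding.Convert(Encoding.UTF8, Encoding.Unicode, utf8Bytes);

        // Round-tripping through the matching encoding now works.
        string s = Encoding.Unicode.GetString(unicodeBytes);
        Console.WriteLine(s); // prints "héllo"
    }
}
```

This only helps when the bytes actually encode text in a known encoding; it does nothing for arbitrary binary data.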


0
 
 

Author Comment

by:Solveweb
Comment Utility
Actually, inputBytes isn't encoded from a string at all - it's created by a custom authentication routine, so I can't exactly 'convert' the encoding from anything.
0

 
LVL 22

Expert Comment

by:_TAD_
Comment Utility


Sure it is... You show it being encoded right here:

byte[] outputBytes = Encoding.Unicode.GetBytes(Encoding.GetString(inputBytes));

First you decode the input bytes with the default encoding {Encoding.GetString(inputBytes)}, and then you re-encode the resulting string as Unicode {Encoding.Unicode.GetBytes()}.

Since you are not simply using "byte[] outputBytes = inputBytes", it is clear that the input bytes are in a format other than Unicode. You have to do a transformation if the bytes aren't in the right format.

0
 

Author Comment

by:Solveweb
Comment Utility
Sorry --- the code example was wrong --- it should have been as follows, which clearly converts to and from the same code page. I have also added a code snippet that demonstrates the same issue when xk gets to [0, 216]...

byte[] inputBytes;
// inputBytes is created from 'somewhere'
byte[] outputBytes = Encoding.Unicode.GetBytes(Encoding.Unicode.GetString(inputBytes));

// problem can also be demonstrated with the following snippet....
for (byte xi = 0; xi < 255; xi++)
{
    for (byte xj = 0; xj < 255; xj++)
    {
        byte[] xk = new byte[2] { xi, xj };
        string xs = Encoding.Default.GetString(xk);
        if (xs == string.Empty)
        {
            string badCodeThatDoesntEncode = "yes";
        }
    }
}
0
 
LVL 4

Accepted Solution

by:
ostdp earned 500 total points
Comment Utility
You may have a case of invalid characters occurring during the conversion. In multi-byte character sets not all two-byte sequences are valid sequences, so if you are creating inputBytes in a non-Unicode-compatible fashion (you said authentication, so I assume a hash function), the default behavior of the decoders is to _discard_ invalid sequences - hence the discrepancy between inputBytes and outputBytes.

Btw. the default string encoding in .NET is Unicode (UTF-16).
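
A small sketch of what's happening, assuming inputBytes contains an unpaired surrogate (the specific bytes here are the [0, 216] pair from the question, i.e. U+D800 in little-endian UTF-16):

```csharp
using System;
using System.Text;

class SurrogateDemo
{
    static void Main()
    {
        // 0xD800 (bytes 0x00, 0xD8 little-endian) is a high surrogate with no
        // matching low surrogate, so on its own it is not a valid UTF-16 sequence.
        byte[] inputBytes = { 0x00, 0xD8 };

        string s = Encoding.Unicode.GetString(inputBytes);
        byte[] outputBytes = Encoding.Unicode.GetBytes(s);

        // The invalid sequence does not survive the round trip; on current .NET
        // it is replaced with U+FFFD (bytes FD-FF) rather than preserved.
        Console.WriteLine(BitConverter.ToString(inputBytes));  // 00-D8
        Console.WriteLine(BitConverter.ToString(outputBytes));
    }
}
```

So any byte pair that happens to fall in the surrogate range D800-DFFF without a valid partner will be lost or replaced, which is why the round trip fails for arbitrary binary input.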
0
 

Author Comment

by:Solveweb
Comment Utility
Rats! It would be nice if there were a way of doing this - simply squashing a byte array down to the smallest possible string representation (a single-byte string conversion isn't good enough). Now I know - Unicode doesn't quite mean two-byte encoding in the way I thought it might. Hmm... back to the drawing board.
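
For anyone landing here later: the standard way to round-trip arbitrary bytes through a string is Base64, at the cost of roughly 33% size overhead rather than the two-bytes-per-char packing hoped for above. A minimal sketch (the sample bytes are arbitrary):

```csharp
using System;

class Base64Demo
{
    static void Main()
    {
        byte[] inputBytes = { 0x00, 0xD8, 0xFF, 0x10 };

        // Base64 maps any byte sequence to a plain ASCII string and back losslessly.
        string encoded = Convert.ToBase64String(inputBytes);
        byte[] decoded = Convert.FromBase64String(encoded);

        Console.WriteLine(encoded);                              // "ANj/EA=="
        Console.WriteLine(decoded.Length == inputBytes.Length);  // True
    }
}
```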

Thanks
0

