I work with a lot of IEEE single-precision floating point data stored in fixed-length record file formats. A field containing a packed Single, as stored in the file, is four bytes of (usually) non-printable characters, and one or more of those bytes is often a NUL, Chr(0), in any of the four positions. The .NET Framework seems to behave inconsistently when those four bytes are loaded into a String.
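To make the layout concrete, here is a minimal sketch (illustrative only, not my real code) of how a Single's four raw bytes end up as a four-character field; note how easily zero bytes appear:

    ' Illustrative only: a Single's four raw bytes treated as a character field.
    ' On a little-endian machine, 1.0F encodes as 00 00 80 3F, so two of the
    ' four bytes are already NULs.
    Imports System

    Module PackedSingleDemo
        Sub Main()
            Dim raw As Byte() = BitConverter.GetBytes(1.0F)
            Console.WriteLine(BitConverter.ToString(raw))   ' e.g. 00-00-80-3F

            ' The field as it sits in the record: four raw characters.
            Dim field As String = Chr(raw(0)) & Chr(raw(1)) & Chr(raw(2)) & Chr(raw(3))
            Console.WriteLine(field.Length)
        End Sub
    End Module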
About 90% of the time the string is truncated at the first Chr(0) in the sequence. Yes... 90% of the time, because it depends on the context. If I return the string ByVal, for example, the .NET runtime cuts it short. But if I build the value in a StringBuilder and then assign StringBuilderVar.ToString() to a String, the Chr(0) characters do NOT truncate the destination string. And yet StringVar.Substring(iOffset, iLen) DOES truncate the result if it contains a Chr(0)!
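Here is roughly the sort of check I have been running to see where the Chr(0) gets lost (a hypothetical test, cut down from my real record-reading code):

    Sub CheckNulSurvival()
        ' Print the length after each step to see where the embedded Chr(0) goes missing.
        Dim sb As New System.Text.StringBuilder()
        sb.Append("AB").Append(Chr(0)).Append("CD")

        Dim fromBuilder As String = sb.ToString()
        Console.WriteLine(fromBuilder.Length)        ' length after StringBuilder.ToString()

        Dim sliced As String = fromBuilder.Substring(0, fromBuilder.Length)
        Console.WriteLine(sliced.Length)             ' length after Substring()
    End Sub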
Can this behavior be avoided? Is there a special kind of string variable I can use that doesn't try to "fix" my string for me? I would like to keep using strings if possible, since they are one of the best features of the BASIC language. As it is, it feels almost like coding in C, because I end up handling the data one character at a time with character arrays (roughly like the sketch below).
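For reference, this is a simplified sketch of the character-at-a-time workaround I mean; recordChars and iOffset are made-up names standing in for my real record buffer and field offset:

    ' Simplified sketch of the character-array workaround I'd like to avoid:
    ' the field never passes through a String, so nothing gets "fixed".
    Function DecodeSingleField(recordChars As Char(), iOffset As Integer) As Single
        Dim raw(3) As Byte
        For i As Integer = 0 To 3
            ' Take each character's low byte to rebuild the raw field.
            raw(i) = CByte(AscW(recordChars(iOffset + i)) And &HFF)
        Next
        Return BitConverter.ToSingle(raw, 0)
    End Function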