I have a byte array that, when decoded, should contain plus symbols and accented characters.
Part of the string is "5190056340+51900112190" and also "Nürnberg +49/9221/709-286"
The plus symbols are being misinterpreted when I decode the raw binary stream in the byte array as UTF-7.
When I pull it out using ASCII (and save to Notepad), I see "N|rnberg +49/9221/709-286", which is the right phone number, but note that the umlaut in Nürnberg is gone. The other part comes out as "5190056340????".
When I decode with UTF-7 (and save to Notepad), I see "Nürnberg ß?O286". The umlaut is there, but somehow the plus symbol has turned the following 12 characters into nonsense. This is the encoding I use in the program.
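To show the problem in isolation, here is a minimal Python sketch I put together (I'm assuming the raw bytes are effectively Windows-1252/ASCII for the digits, which matches what I'm seeing). In UTF-7, a bare "+" begins a modified-Base64 run that continues until a "-" or a non-Base64 character, so the digits after the plus get reinterpreted:

```python
raw = b"5190056340+51900112190"

# Windows-1252 leaves the digits and the '+' untouched:
print(raw.decode("cp1252"))   # 5190056340+51900112190

# UTF-7 swallows the '+' and reads the digits after it as Base64-encoded
# UTF-16 code units, so the tail of the string becomes garbage:
decoded = raw.decode("utf-7")
print(decoded)
```

The first ten digits survive, but the plus sign and everything after it come out as unrelated characters, which matches the nonsense I see in Notepad.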
When I decode with UTF-8 (and save to Notepad), the ü disappears entirely!
I am able to use Windows-1252 encoding, which preserves the umlaut and also handles the plus sign correctly.
My question is: why does the + sign force the numbers that follow it to come out as nonsense, and how can I prevent it when not using Windows-1252?
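For what it's worth, a quick Python experiment suggests that a UTF-7 encoder writes a literal plus sign as "+-", so I suspect my byte stream was never actually UTF-7 to begin with:

```python
# In UTF-7 a literal '+' must be escaped as "+-"; a bare '+' instead
# starts a Base64-encoded run of UTF-16 code units.
print("+".encode("utf-7"))      # b'+-'
print(b"+-".decode("utf-7"))    # +

# The phone-number string only round-trips through UTF-7 in escaped form:
escaped = "5190056340+51900112190".encode("utf-7")
print(escaped)                  # b'5190056340+-51900112190'
print(escaped.decode("utf-7"))  # 5190056340+51900112190
```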