TomPreen asked:
Data Import from a CSV file

I have been supplied with a text file of comma-separated values which needs to be imported into a database.

When read in programmatically, the string has an ASCII null between every character,
e.g.   D A V 0 0 1 , J u l i e , D a v i e s - J o n e s ,

In Notepad the file displays correctly as DAV001,Julie,Davies-Jones

The supplier of the file downloads it from the website of an international company, so I am unable to contact the originator.

Since Notepad is automatically stripping out the nulls, is there a Windows API function I can utilise to replicate the Notepad functionality?
ASKER CERTIFIED SOLUTION
Olaf Doschke:

That might be the BOM (byte order marker).
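There is indeed a Windows API function for this: WideCharToMultiByte converts UTF-16 text to an ANSI codepage, which is effectively what Notepad does when it displays the file. A minimal sketch of calling it from VFP follows; the file name is a placeholder, and the file is assumed to be UTF-16 LE starting with its 2-byte BOM.

DECLARE INTEGER WideCharToMultiByte IN kernel32 ;
    INTEGER CodePage, INTEGER dwFlags, ;
    STRING lpWideCharStr, INTEGER cchWideChar, ;
    STRING @lpMultiByteStr, INTEGER cbMultiByte, ;
    INTEGER lpDefaultChar, INTEGER lpUsedDefaultChar

lcWide = SUBSTR(FILETOSTR("import.csv"), 3)      && skip the 2-byte FF FE BOM
lcAnsi = REPLICATE(CHR(0), LEN(lcWide))          && output buffer for the API to fill
lnLen  = WideCharToMultiByte(0, 0, lcWide, LEN(lcWide)/2, @lcAnsi, LEN(lcAnsi), 0, 0)  && CodePage 0 = CP_ACP, the system ANSI codepage
lcAnsi = LEFT(lcAnsi, lnLen)                     && trim to the bytes actually written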
What I normally do when I have conversion issues is try to open the CSV in Excel, because it holds its original information, and then copy and paste it into Notepad. Normally this does the trick. The only other option is to ask for a correctly configured file.
TomPreen (Asker):

Interesting.

I have done hundreds of imports using FoxPro over the last 10 years and I have never seen this issue before.
Olaf Doschke:

Well, more typical for internet services is UTF-8 encoding, which encodes the 128 most commonly used characters (e.g. letters, digits) with the same single bytes as ASCII and many ANSI codepages. That's why you might never have stumbled over this, even if you have already processed UTF-8 data that only used those 128 characters.

Unicode (specifically ISO/IEC 10646) can encode many more characters: it defines 17 planes of 65,536 code points each. Unicode version 7 uses "just" 113,021 of these potential codes.

Microsoft mostly uses UTF-16 for Unicode, which uses 16 bits, or two bytes, for most characters. No character is encoded with a single byte, not even the simple ones of ASCII or of ANSI codepages, so you are bound to stumble over this encoding problem the first time you get UTF-16 data.
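To make that concrete with the sample above: in ASCII/ANSI the letter D is the single byte 0x44, while in UTF-16 LE it becomes the two bytes 0x44 0x00, which is exactly the null-after-every-character pattern you saw. A quick illustration in VFP:

? ASC("D")                                           && 68 (0x44), the single ANSI byte
lcUtf16 = "D" + CHR(0) + "A" + CHR(0) + "V" + CHR(0) && "DAV" as UTF-16 LE bytes
? CHRTRAN(lcUtf16, CHR(0), "")                       && prints "DAV": stripping the nulls recovers pure-ASCII text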

Character/text/data encoding (in the sense of codepages, not in the sense of cryptographic encoding) is a topic you should at least be aware of, even if you are stumbling over it for the first time. Unicode is one of the weaknesses of VFP that you can only partly cope with. You can easily store Unicode in binary fields, but the native VFP controls will not display it, and you get all kinds of symptoms with it. Some operations will irreversibly replace unknown characters with '?', so you had better keep the original files for postprocessing into something VFP can work with, once such conversion losses in data are recognized.
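For a file whose content is pure ASCII, like the sample above, a sketch of that postprocessing in native VFP (file and table names are placeholders) is to drop the BOM, strip the null bytes, and import as usual:

lcRaw = FILETOSTR("import.csv")
IF LEFT(lcRaw, 2) == CHR(255) + CHR(254)                  && UTF-16 LE BOM
   lcRaw = SUBSTR(lcRaw, 3)                               && drop the BOM
ENDIF
STRTOFILE(CHRTRAN(lcRaw, CHR(0), ""), "import_ansi.csv")  && strip every null byte
USE myTable
APPEND FROM import_ansi.csv DELIMITED

Note that this crude null-stripping only works while every character really is plain ASCII; anything outside that range needs a real conversion, such as WideCharToMultiByte or VFP's STRCONV().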

Bye, Olaf.
TomPreen (Asker):

The file in question had one other thing that I have not seen before.

The first two characters were ASCII 255 and ASCII 254.

Is this a signal that it is Unicode?

Also, Excel does not recognize the file as a CSV file in the conventional way by creating a column for each comma-separated value; all the values for each line appeared in column A, separated by commas.
Olaf Doschke:

>The first two characters were ASCII 255 and ASCII 254.
This is the BOM I mentioned earlier. Look it up on Wikipedia and you will get it explained.

>Is this a signal that it is Unicode?
Yes and no. Only a few applications set and expect a BOM at the beginning of a file; in other cases a file may simply be Unicode without any hint inside itself. Today XML is more common, and it has standard tags and attributes defining the encoding.

You may use the first two bytes as an indicator, but not as proof.

Bye, Olaf.
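A sniffing sketch along those lines, using the standard BOM byte values (the file name is a placeholder, and a match is only a hint, not proof):

lcHead = LEFT(FILETOSTR("import.csv"), 3)
DO CASE
CASE LEFT(lcHead, 2) == CHR(255) + CHR(254)
   ? "Probably UTF-16 little-endian"
CASE LEFT(lcHead, 2) == CHR(254) + CHR(255)
   ? "Probably UTF-16 big-endian"
CASE lcHead == CHR(239) + CHR(187) + CHR(191)
   ? "Probably UTF-8 with BOM"
OTHERWISE
   ? "No BOM found: the file may still be any encoding"
ENDCASE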