Data Import from a CSV file

TomPreen
I have been supplied with a text file of comma-separated values which needs to be imported into a database.

When read in programmatically, the string has an ASCII NUL between every character,
e.g.   D A V 0 0 1 , J u l i e , D a v i e s - J o n e s ,

In notepad the file displays correctly as DAV001,Julie,Davies-Jones

The supplier of the file is downloading the file from a web site of an international company so I am unable to contact the originator.

Since Notepad is automatically stripping out the nulls, is there a Windows API function I can utilise to replicate the Notepad functionality?
Olaf Doschke, Software Developer
Commented:
Notepad is not stripping out the NULLs; rather, Notepad is capable of identifying Unicode, while FoxPro is not.

The simple solution: strip off CHR(0):
ansi = CHRTRAN(unicodetext, CHR(0), '')



The more correct solution:
ansi = STRCONV(unicodetext, 6, 1252, 1)



You'll have an additional problem: most probably the file starts with a few bytes called a BOM (byte order mark), marking the rest of the file as Unicode, so record 1 or the first field name might contain unwanted characters.

Another simple solution is to save the file again with Notepad via Save As, setting the encoding to ANSI instead of Unicode.
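The two VFP approaches above (stripping the NUL bytes versus properly converting the encoding) can be sketched outside VFP as well; here is an illustrative Python version, using hard-coded sample bytes rather than the actual file:

```python
# Sample data: UTF-16 LE bytes for "DAV001", preceded by the FF FE BOM.
raw = b"\xff\xfeD\x00A\x00V\x000\x000\x001\x00"

# Quick-and-dirty, like CHRTRAN(unicodetext, CHR(0), ''):
# just remove the NUL high bytes (only safe for characters below 256).
stripped = raw.replace(b"\x00", b"")

# The more correct way, like STRCONV(unicodetext, 6, 1252, 1):
# decode as UTF-16 (the BOM is consumed automatically), then re-encode
# to the ANSI codepage 1252.
decoded = raw.decode("utf-16").encode("cp1252")
```

Note that the quick version leaves the two BOM bytes (0xFF 0xFE) at the front of the result, which is exactly why the first record or field name may contain unwanted characters.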

Bye, Olaf.

Commented:
What I normally do when I have conversion issues is try to open the CSV in Excel, because it holds its original information, and then copy and paste it into Notepad. Normally this does the trick. The only other option is to ask for a correctly configured file.

Author

Commented:
Interesting.

I have done hundreds of imports using FoxPro over the last 10 years and I have never seen this issue before.

Olaf Doschke, Software Developer

Commented:
Well,

more typical for internet services is UTF-8 encoding, which encodes the 128 most commonly used characters (e.g. letters, digits) the same way as ASCII and many ANSI codepages. That's why you might never have stumbled upon this before, even if you have already processed UTF-8 data, as long as it used only those 128 characters.
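That ASCII compatibility of UTF-8 is easy to verify; an illustrative Python check:

```python
# For the 128 ASCII characters, UTF-8 produces byte-for-byte the same
# encoding as plain ASCII, so a UTF-8 CSV of plain text "just works".
text = "DAV001,Julie,Davies-Jones"
print(text.encode("utf-8") == text.encode("ascii"))  # True

# A non-ASCII character breaks the one-byte-per-character equivalence:
print("é".encode("utf-8"))  # two bytes: b'\xc3\xa9'
```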

Unicode (specifically ISO 10646) is capable of encoding more characters; it defines 17 planes of up to 65,536 characters each. Unicode version 7 uses "just" 113,021 of these potential codes.

MS mostly uses UTF-16 for Unicode, which uses 16 bits or two bytes for most characters. No characters are encoded with a single byte, not even the simple ones of ASCII or of ANSI codepages, so you are sure to stumble upon this encoding problem the first time you get UTF-16 data.
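The size difference is easy to see; an illustrative Python comparison using the sample value from the question:

```python
text = "DAV001"
print(len(text.encode("ascii")))      # 6  - one byte per character
print(len(text.encode("utf-16-le")))  # 12 - two bytes per character

# The NUL high bytes are exactly what showed up between the letters:
print(text.encode("utf-16-le"))  # b'D\x00A\x00V\x000\x000\x001\x00'
```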

Character/text/data encoding (in the sense of codepages, not in the sense of cryptographic encoding) is a topic you should at least be aware of, even if you are stumbling upon it for the first time. Unicode is one of the weaknesses of VFP that you can only partly cope with. You can easily store Unicode in binary fields, but the native VFP controls will not display Unicode, and you get all kinds of symptoms with it. Some operations will irreversibly change unknown characters to '?', so you should always keep the original files for post-processing into something VFP can work with, in case such conversion losses are recognized later.

Bye, Olaf.

Author

Commented:
The file in question had one other thing that I have not seen.

The first two characters were ASCII 255 and ASCII 254.

Is this a signal that it is Unicode?

Also Excel does not recognize the file as a CSV file in the conventional way by creating columns for each comma separated variable, all values for each line appeared in column A separated by commas
Olaf Doschke, Software Developer

Commented:
>The first two characters were ASCII 255 and ASCII 254.
This is the BOM I mentioned earlier. Look it up on Wikipedia and you'll get it explained.

>Is this a signal that it is Unicode?
Yes and no. It's just that a few applications set and expect a BOM at the beginning of a file. In other cases files may simply be Unicode without any hint inside the file itself. Today XML is more common, and it has standard tags and attributes defining the encoding.

You may use the first two bytes as indicator, but not as proof.
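Using the first bytes as an indicator can be sketched like this (illustrative only; the function name is hypothetical, and a real detector would need to cope with BOM-less files too):

```python
def guess_encoding(first_bytes: bytes) -> str:
    """Guess a text encoding from a file's leading bytes (indicator, not proof)."""
    if first_bytes.startswith(b"\xff\xfe"):
        return "utf-16-le"   # bytes 255, 254 - the pair from the question
    if first_bytes.startswith(b"\xfe\xff"):
        return "utf-16-be"   # big-endian variant
    if first_bytes.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"   # UTF-8 with BOM
    return "unknown"         # no BOM: could still be Unicode
```

For example, `guess_encoding(b"\xff\xfeD\x00")` returns `"utf-16-le"`, matching the ASCII 255, 254 pair observed in the file.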

Bye, Olaf.
Top Expert 2014

Commented:
That might be the BOM (byte order mark).
