Data Import from a CSV file

I have been supplied with a text file of comma-separated values that needs to be imported into a database.

When read in programmatically, the string contains an ASCII null between every character,
e.g.   D A V 0 0 1 , J u l i e , D a v i e s - J o n e s ,

In Notepad the file displays correctly as DAV001,Julie,Davies-Jones.

The supplier of the file downloads it from the web site of an international company, so I am unable to contact the originator.

Since Notepad appears to strip out the nulls automatically, is there a Windows API function I can use to replicate the Notepad behaviour?
TomPreen asked:
Olaf Doschke (Software Developer) commented:
Notepad is not stripping out the nulls; rather, Notepad can detect Unicode, while FoxPro cannot.

The simple solution: strip off CHR(0):
ansi = CHRTRAN(unicodetext, CHR(0), '')



The more correct solution:
ansi = STRCONV(unicodetext, 6, 1252, 1)



You'll have an additional problem: most probably the file starts with a few bytes called a BOM (byte order mark) marking the rest of the file as Unicode, so record 1 or the first field name might contain unwanted characters.

Another simple solution is to save the file again in Notepad via Save As, setting the encoding to ANSI instead of Unicode.
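For illustration, here is what both approaches do to the sample data. This is a sketch in Python (the byte handling is the same in any language); the FoxPro equivalents are the CHRTRAN() and STRCONV() calls above:

```python
# Sample bytes as they would arrive from a UTF-16 LE file
# (illustrative data, mirroring the "DAV001,Julie,..." example above).
raw = "DAV001,Julie,Davies-Jones".encode("utf-16-le")

# Simple approach: drop the null bytes, like CHRTRAN(..., CHR(0), '').
stripped = raw.replace(b"\x00", b"").decode("ascii")

# More correct approach: decode as UTF-16, like STRCONV(..., 6, 1252, 1).
decoded = raw.decode("utf-16-le")

print(stripped)  # DAV001,Julie,Davies-Jones
print(decoded)   # DAV001,Julie,Davies-Jones
```

Both give the same result here because the sample is pure ASCII; for accented or other non-ASCII characters, only the proper decode produces the right text, which is why STRCONV() is the more correct solution.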

Bye, Olaf.

rhandels commented:
What I normally do when I have conversion issues is to open the CSV in Excel, because it keeps the original information, and then copy and paste it into Notepad. Normally this does the trick. The only other option is to ask for a correctly configured file.
TomPreen (Author) commented:
Interesting.

I have done hundreds of imports using FoxPro over the last 10 years and I have never seen this issue before.

Olaf Doschke (Software Developer) commented:
Well,

more typical for internet services is UTF-8 encoding, which encodes the 128 most commonly used characters (e.g. letters and digits) the same way as ASCII and many ANSI code pages. That's why you might never have stumbled upon this before, even if you have already processed UTF-8 data, as long as it only used those 128 characters.

Unicode (specifically ISO 10646) can encode far more characters: it defines 17 planes of 65,536 potential code points each. Unicode version 7 uses "just" 113,021 of these potential codes.

MS mostly uses UTF-16 for Unicode, which uses 16 bits (two bytes) for most characters. No character is encoded with a single byte, not even the simple ones of ASCII or the ANSI code pages, so you are sure to stumble upon this encoding problem the first time you get UTF-16 data.
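The size difference is easy to verify; a minimal Python sketch (the same holds in any language):

```python
text = "DAV001"

utf8 = text.encode("utf-8")       # byte-identical to plain ASCII for these characters
utf16 = text.encode("utf-16-le")  # every character takes two bytes, one of them 0x00

print(utf8)        # b'DAV001'
print(len(utf8))   # 6
print(len(utf16))  # 12
```

This is why pure-ASCII data in UTF-8 passes through old tools unnoticed, while the same data in UTF-16 shows a null between every visible character.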

Character/text/data encoding (in the sense of code pages, not in the sense of cryptographic encoding) is a topic you should at least be aware of, even if you are stumbling upon it for the first time. Unicode is one of the weaknesses of VFP that you can only partly cope with. You can easily store Unicode in binary fields, but the native VFP controls will not display it, and you get all kinds of symptoms with it. Some operations will irreversibly replace unknown characters with '?', so you should always keep the original files for post-processing into something VFP can work with, once such conversion losses in the data are recognized.

Bye, Olaf.
TomPreen (Author) commented:
The file in question had one other thing that I have not seen.

The first two characters were ASCII 255 and ASCII 254.

Is this a signal that it is Unicode?

Also, Excel does not recognize the file as a CSV file in the conventional way (creating a column for each comma-separated value); all values for each line appeared in column A, separated by commas.
Olaf Doschke (Software Developer) commented:
>The first two characters were ASCII 255 and ASCII 254.
This is the BOM I mentioned earlier. Look it up on Wikipedia for a full explanation.

>Is this a signal that it is Unicode?
Yes and no. Only some applications set and expect a BOM at the beginning of a file; in other cases a file may simply be Unicode without any hint in the file itself. Today XML is more common and has standard tags and attributes defining the encoding.

You may use the first two bytes as indicator, but not as proof.
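A BOM check can be sketched like this (Python for illustration; only the byte values matter). Note it is a heuristic, as said above: a file without a BOM can still be Unicode, and FF FE is also how the UTF-32 LE BOM starts, so that longer signature must be tested first.

```python
def sniff_bom(data):
    """Best-guess encoding from a leading BOM; None if no BOM is present."""
    if data.startswith(b"\xff\xfe\x00\x00"):
        return "utf-32-le"            # FF FE 00 00 - check before UTF-16 LE
    if data.startswith(b"\xff\xfe"):
        return "utf-16-le"            # 255, 254 - the two bytes seen in the question
    if data.startswith(b"\xfe\xff"):
        return "utf-16-be"
    if data.startswith(b"\xef\xbb\xbf"):
        return "utf-8"
    return None

print(sniff_bom(b"\xff\xfeD\x00A\x00V\x00"))  # utf-16-le
```

Treat the result as an indicator only; confirming the encoding still means inspecting the rest of the data.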

Bye, Olaf.
aikimark commented:
That might be the BOM (byte order mark).
Question has a verified solution.