Modifier1000
asked on
How to export data from Visual FoxPro 9 that includes a 'memo' data type?
Hello Experts,
I have a table with a few thousand records in Visual FoxPro 9. About 5 columns are of the 'Memo' type. I need to export to a CSV file. But the 'Memo' type does not export correctly.
I have searched and found a few pieces of code to use, but about every 8 rows, the data is shifted-over about 5 columns.
Any ideas on a solid way to convert this ancient Visual FoxPro data into a CSV?
You should take a look at this solution: http://support.microsoft.com/kb/241424
The only thing you might want to modify is that not all field values should be delimited with "", so set lcDelimiter in each case depending on the field type, e.g. '"' for C or M, but '' (empty string) for D, T and I (date, datetime, and integer). You get the idea.
Also, depending on where you feed your CSV back in and how that program interprets it (there are indeed several differing formats), you need to pay attention to commas within memos, especially if you don't delimit them with quotes. And as another pitfall, VFP could never cope with line breaks, even when importing from CSV into a memo. So that part may also be important to avoid a shift of columns and/or rows. The offending characters in memos may also be tabs, if tabs rather than commas separate the columns.
Bye, Olaf.
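The pitfalls Olaf lists can be reproduced outside VFP. A minimal Python sketch (with hypothetical field values) of how an unquoted comma inside a memo shifts the remaining columns, while proper quoting keeps the record intact:

```python
import csv
import io

# Hypothetical record whose memo field contains a comma.
memo = "First paragraph, with a comma. Second paragraph."
row = ["1001", memo, "2013-05-01"]

# Naive join: the comma inside the memo creates an extra column,
# shifting every following field over by one.
naive = ",".join(row)
assert len(naive.split(",")) == 4   # 3 fields turned into 4 columns

# Proper CSV quoting: the memo is wrapped in double quotes, so a
# standard reader recovers exactly 3 fields again.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(row)
parsed = next(csv.reader(io.StringIO(buf.getvalue())))
assert parsed == row
```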
You need to use low-level file functions or STRTOFILE() to do that.
Make sure character fields are put in quotes.
Make sure the character fields do not contain quotes themselves, or convert them to single quotes like this: STRTRAN(string, '"', "'")
Make sure CHR(10) and CHR(13) are converted to spaces: STRTRAN(string, CHR(13), SPACE(1))
If you need the data for HTML purposes, you can change them to &quot; or <br> respectively.
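The sanitizing steps above can be sketched in Python (the function name is hypothetical); each replacement mirrors one of the suggested STRTRAN() calls:

```python
def sanitize_memo(text: str) -> str:
    """Replace embedded double quotes with single quotes and CR/LF
    with spaces, so the value can safely be wrapped in double quotes
    without breaking the CSV row structure."""
    text = text.replace('"', "'")    # STRTRAN(string, '"', "'")
    text = text.replace("\r", " ")   # STRTRAN(string, CHR(13), SPACE(1))
    text = text.replace("\n", " ")   # STRTRAN(string, CHR(10), SPACE(1))
    return '"' + text + '"'

assert sanitize_memo('He said "hi"\r\nBye') == "\"He said 'hi'  Bye\""
```

Note this approach is lossy: quotes and line breaks are gone for good, which is fine for review purposes but not for a round trip back into the DBF.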
Just a note: If you need to preserve double quotes in output CSV then you must duplicate them and enclose the whole string to double quotes. It works for Excel at least.
Enclosing strings into double quotes also allows CR LF inside strings - again accepted by Excel.
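This rule (double the embedded quotes, enclose the whole field in quotes, and CR LF may then stay inside) is exactly the Excel-style CSV convention; a small sketch with a hypothetical value:

```python
import csv
import io

value = 'A "quoted" word\r\nand a second line'

buf = io.StringIO()
csv.writer(buf, dialect="excel", quoting=csv.QUOTE_ALL).writerow([value])
text = buf.getvalue()

# Embedded quotes were doubled and the whole field enclosed in quotes.
assert '""quoted""' in text

# A standard reader gives the value back, line break included.
parsed = next(csv.reader(io.StringIO(text), dialect="excel"))
assert parsed[0] == value
```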
ASKER
Ok, so the link that Olaf_Doschke provided is actually the code I was using. And jrbbldr is correct in separating out the issues at hand.
Actually, it's jrbbldr's #2 that is my main problem.
The MEMO fields each contain one contiguous string consisting of a few paragraphs.
I tried the code below, yet some of my MEMO fields get split out into other fields, which shifts the data in the remaining columns over. The code I'm using is:
CLOSE ALL
CLEAR ALL
lcFieldString = ''
lcMemo = ''
USE GETFILE('dbf', 'Select DBF')      && Prompts for the table to be used.
lnFieldCount = AFIELDS(laGetFields)   && Builds an array of fields from the selected table.

*!* Prompt for the output file and use low-level functions to create it.
lcTextFile = FCREATE(GETFILE('txt', 'Select Text'))

*!* Scan the table and convert the field values according to their types.
SCAN
   WAIT WINDOW STR(RECNO()) + ' Of ' + STR(RECCOUNT()) NOWAIT
   FOR lnCount = 1 TO lnFieldCount
      lcType = laGetFields(lnCount, 2)
      IF lcType # 'G'   && Don't try to turn a General field into a string.
         lcString = EVALUATE(laGetFields(lnCount, 1))
      ENDIF
      DO CASE
      CASE lcType = 'M'   && Process the Memo fields.
         lnMemoLines = MEMLINES(EVALUATE(laGetFields(lnCount, 1)))
         FOR lnLoop = 1 TO lnMemoLines
            IF lnLoop < lnMemoLines
               lcMemo = lcMemo + ;
                  ALLTRIM(MLINE(EVALUATE(laGetFields(lnCount, 1)), lnLoop)) + ' '
            ELSE
               lcMemo = lcMemo + ;
                  ALLTRIM(MLINE(EVALUATE(laGetFields(lnCount, 1)), lnLoop))
            ENDIF
         ENDFOR
         lcString = lcMemo
         lcMemo = ''
      CASE lcType = 'G'   && Process the General fields.
         lcString = 'Gen'
      CASE lcType = 'D'   && Process the Date fields.
         lcString = DTOC(lcString)
      CASE lcType = 'T'   && Process the DateTime fields.
         lcString = TTOC(lcString)
      CASE lcType = 'N'   && Process the Numeric fields.
         lcString = STR(lcString, LEN(STR(lcString)), 2)
      CASE lcType = 'I'   && Process the Integer fields.
         lcString = STR(lcString)
      CASE lcType = 'L'   && Process the Logical fields.
         IF lcString = .T.
            lcString = 'T'
         ELSE
            lcString = 'F'
         ENDIF
      ENDCASE
      IF lnCount < lnFieldCount   && Add a comma after every field except the last one.
         lcFieldString = lcFieldString + '"' + lcString + '"' + ','
      ELSE
         lcFieldString = lcFieldString + '"' + lcString + '"'
      ENDIF
   ENDFOR
   FPUTS(lcTextFile, lcFieldString)   && Writes the string to the text file.
   lcFieldString = ''
ENDSCAN
FCLOSE(lcTextFile)
CLOSE ALL
CLEAR ALL
WAIT WINDOW 'Text File Creation Completed' NOWAIT
I've updated the code so it supports more data types, preserves decimal places in numeric fields, allows NULL values, and produces a CSV file (a semicolon is used as the separator). Memo fields should not cause shifting.
CLOSE ALL
CLEAR ALL
lcFieldString = ''
lcMemo = ''
USE GETFILE('dbf', 'Select DBF')      && Prompts for the table to be used.
lnFieldCount = AFIELDS(laGetFields)   && Builds an array of fields from the selected table.

*!* Prompt for the output file and use low-level functions to create it.
lcTextFile = FCREATE(GETFILE('csv', 'Select Output CSV file'))

*!* Scan the table and convert the field values according to their types.
SCAN
   WAIT WINDOW STR(RECNO()) + ' Of ' + STR(RECCOUNT()) NOWAIT
   FOR lnCount = 1 TO lnFieldCount
      lcType = laGetFields(lnCount, 2)
      IF lcType # 'G'   && Don't try to turn a General field into a string.
         lcString = EVALUATE(laGetFields(lnCount, 1))
      ENDIF
      DO CASE
      CASE lcType $ 'MCV'   && Process the Memo, Character and Varchar fields.
         lcString = STRTRAN(lcString, '"', '""')
      CASE lcType = 'G'   && Process the General fields.
         lcString = 'Gen'
      CASE lcType = 'D'   && Process the Date fields.
         lcString = IIF(EMPTY(lcString) OR ISNULL(lcString), '', DTOC(lcString))
      CASE lcType = 'T'   && Process the DateTime fields.
         lcString = IIF(EMPTY(lcString) OR ISNULL(lcString), '', TTOC(lcString))
      CASE lcType = 'N'   && Process the Numeric fields.
         lcString = STR(lcString, laGetFields(lnCount, 3), laGetFields(lnCount, 4))
      CASE lcType = 'I'   && Process the Integer fields.
         lcString = ALLTRIM(STR(lcString, 11))
      CASE lcType = 'Y'   && Process the Currency fields.
         lcString = ALLTRIM(STR(lcString, 20, 4))
      CASE lcType = 'L'   && Process the Logical fields.
         DO CASE
         CASE ISNULL(lcString)
            lcString = ''
         CASE lcString = .T.
            lcString = 'T'
         OTHERWISE
            lcString = 'F'
         ENDCASE
      CASE lcType $ 'QW'   && Process the Varbinary and Blob fields.
         lcString = TRANSFORM(lcString)
      ENDCASE
      IF lnCount < lnFieldCount   && Append the separator, or CR LF after the last field.
         lcFieldString = lcFieldString + '"' + NVL(lcString, '') + '"' + ';'
      ELSE
         lcFieldString = lcFieldString + '"' + NVL(lcString, '') + '"' + CHR(13) + CHR(10)
      ENDIF
   ENDFOR
   FWRITE(lcTextFile, lcFieldString)   && Writes the string to the text file.
   lcFieldString = ''
ENDSCAN
FCLOSE(lcTextFile)
CLOSE ALL
CLEAR ALL
WAIT WINDOW 'Text File Creation Completed' NOWAIT
ASKER
@pcelba: Well, I tried the code (thanks, by the way) and it seemed to break the data up even more. So now, instead of having ~1500 records, I had over 44,000 records using your code.
Having ~1500 is the correct amount.
How did you count 44,000 records? The number of lines in the output file is not the number of records you get when you open the file in Excel.
If CR LF characters are the problem in your memos, it is very easy to remove them in the routine above. Just let me know. You could also post a few records as a data sample. If you add a .TXT extension to your files, it should be possible to post them here.
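The distinction between physical lines and CSV records can be illustrated directly: a CSV with quoted multi-line fields shows more lines in a plain text editor than it has records, while a CSV parser counts records. A sketch with two hypothetical records:

```python
import csv
import io

# Two records; each memo field spans two lines inside its quotes.
text = ('"1","line one\nline two"\n'
        '"2","alpha\nbeta"\n')

physical_lines = text.count("\n")   # what a text editor shows
records = list(csv.reader(io.StringIO(text)))

assert physical_lines == 4   # 4 lines on screen...
assert len(records) == 2     # ...but only 2 CSV records
```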
ASKER
Well, I read the records in Notepad++. It was not the total line count; the records appeared broken up when viewed in Notepad++.
I would absolutely post the data, but I cannot.
Now, I may be able to post some dummy data in the same format. I will try that tomorrow.
You should also mention what the purpose of the CSV file is. If you just need to open it in Notepad, then you have to decide how to handle formatting and control characters in your memo fields.
ASKER
The purpose of the CSV file is to open it in Excel and view/fix any bad data entered by the data entry analyst.
Bad being typos or incorrect data.
ASKER CERTIFIED SOLUTION
You can also open the data in a WebBrowser control inside VFP, fix the data, and update the tables accordingly.
OK, Excel. Does the CSV file show up as 44,000 lines in the Excel sheet? It will be 1500, I am almost sure.
And what is the next step after the data review? Do you plan to import these data back to DBF?
Excel has not shown the usual dialog for CSV import since Excel 2003(?). In contrast to the name "comma-separated values", Excel looks for TABs as field delimiters by default.
To get a dialog where you can decide how Excel should treat the import file, you need to import the CSV by creating a new empty sheet, going to the Data tab, and selecting "From Text" there.
Or export tab-delimited values and replace all tabs in the memo fields with four spaces, for example.
Besides these hacks:
What hinders you from using FoxPro to browse the data? What's better in Excel? Excel allows a different type for each cell, so once a user makes a new or follow-up error in a cell, you won't be able to reimport the 'corrected' data.
If you want spell checking, you can use Word via automation or Craig Boyd's VFP spellchecker.
Bye, Olaf.
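Olaf's tab-delimited variant can be sketched as follows (hypothetical data; Python's built-in csv module stands in for the VFP export): any tab inside a field is flattened to four spaces first, so the tab count per line stays fixed.

```python
import csv
import io

def to_tab_delimited(rows):
    """Emit tab-delimited rows, first replacing any tab inside a
    field with four spaces so the delimiter count per line is
    constant and no column shift can occur."""
    out = io.StringIO()
    writer = csv.writer(out, dialect="excel-tab")
    for row in rows:
        writer.writerow([f.replace("\t", "    ") for f in row])
    return out.getvalue()

result = to_tab_delimited([["1", "memo\twith tab", "x"]])
first = next(csv.reader(io.StringIO(result), dialect="excel-tab"))
assert first == ["1", "memo    with tab", "x"]
```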
My Excel 2007 opens the CSV generated by the above code without problems, and even TABs and double quotes are allowed in the texts.
Well, I have Excel 2010. I don't know which settings could influence this.
Anyway, I just tried using CURSORTOXML() on a form file (which is a DBF with a memo field "method" besides some others) and loading that into Excel. Not only does that work fine, Excel also creates a nice first line of filters and alternating back colors for the table rows.
So why not go that route?
Bye, Olaf.
USE your.dbf
CursorToXML(Alias(),"d:\test\your.xml",1,512,0,"1")
Open that in Excel 2010 (or 2007?). It's at least worth giving it a try, isn't it?
Bye, Olaf.
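If the XML route is taken, the memo text is also easy to pull back out with any XML parser. The sample below is an assumption about the shape of an element-centric CURSORTOXML() export (record elements named after the cursor, one child element per field); the actual tag names depend on the cursor alias and field names:

```python
import xml.etree.ElementTree as ET

# Hypothetical element-centric output for a cursor named "notes"
# with an id field and a memo field.
sample = """<?xml version="1.0"?>
<VFPData>
  <notes><id>1</id><memo>First paragraph.
Second paragraph.</memo></notes>
  <notes><id>2</id><memo>Another memo.</memo></notes>
</VFPData>"""

root = ET.fromstring(sample)
memos = [rec.findtext("memo") for rec in root.findall("notes")]
assert len(memos) == 2
assert "Second paragraph." in memos[0]   # line break survived intact
```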
Hmmm, long memo fields (text) are displayed as ######################, and I am unable to force the correct formatting...
The line of filters is just a simple decoration. (Yes, sometimes useful.)
And the XML is twice as big as the CSV, but I agree XML is a more advanced data format than CSV.
The fields displaying #### should perhaps just be widened to show their real content. Using an SCX as the initial "DBF", I see the code of the method memos in Excel 2010; quite nice.
XML has the big advantage that FoxPro exports memo fields without any further code anyway, and you could also take the XML output and transform it if it doesn't fit.
Bye, Olaf.
ASKER
"Just use either VFP Automation of Excel or the Excel ODBC connection..."
Easy and worked great!
Thanks everyone!
1. How to export data table values from VFP into a CSV file so as to include the Memo field values.
2. How to format the Memo field values so as to eliminate the leading spaces ("data is shifted-over about 5 columns").
Is that correct?
If you have working code already to accomplish #1, then you will only need to address #2.
Depending on how your existing code is put together, you could modify it to ALLTRIM() the Memo field values if the data is all in one contiguous string.
But if the Memo field data is in distinct lines of text and some of those lines contain leading spaces, then the export code will need to be modified differently.
Perhaps an example of the Memo field data would help us better understand the changes you need to make.
Good Luck