Solved

URGENT - EXPORT IMPORT PROBLEM

Posted on 2003-02-27
7
Medium Priority
475 Views
Last Modified: 2008-02-01
1) We have a DB2 7.2 EE server on Windows NT 4 and 2000.

2) We have received a non-delimited ASCII text file of data exported from a remote source database table.

3) The table at both ends has the same structure, which cannot be changed, i.e. 8 columns of the INTEGER, VARCHAR, DECIMAL, DATE and CHAR(1) variety. The last column is CHAR(1). There are no primary key, foreign key or check constraints, and all columns are NOT NULL.

3) We tried to import it, but it failed because data was missing for the last column. If there had been a default value, it would have succeeded.

4) The date format of the data in the ASCII file was not available in the date drop-down while importing, so we edited the ASCII file to match one of the available formats. If we leave the date, timestamp and datetime drop-downs as null, what happens? Will DB2 understand the date from the start and end coordinates? What if we choose timestamp/datetime and there is no time component in the ASCII date values?

5) So we assumed that there would be a space (or spaces) there, created a single space in TextPad, imported using positional coordinates for the columns, and were successful.

6) There were only 2 rows in the ASCII file, but the import reported 4 rows: 2 successful and 2 failed. We found that the correct data was imported when viewed in Command Center. In Control Center, data in the VARCHAR columns appeared as 123,456,789 instead of 123456789 as in the ASCII file. In CLP, all data is okay, but the last column (CHAR(1)) shows the value 4 for all rows, which is incorrect.

7) Why is there this difference in display, and which do we believe to be the correct data in the table?

8) Now we tried to export the data in the table in the same form as it appears in the ASCII file. To our surprise, there is no non-delimited ASCII file as an option. Then how was such a file exported from the remote source database table? Could this be due to a difference in the VERSION/TYPE/PLATFORM (OS) of the DB2 that they have w.r.t. ours?

9) I know that .ixf dump files are not portable across platforms, but ASCII text files are. Will there be differences in export/import features across VERSION/TYPE/PLATFORM (OS)?

10) Will an export from a higher-version source be acceptable to a lower-version target, or the other way round?

11) Will LOAD be more useful than IMPORT in this case?

12) There is also a worksheet format (WSF) for export/import. Worksheet suggests MS Excel, which has the extension .xls. Then how come .wsf?

13) Export/import of .ixf was successful at our end itself. Do you think that next time round, if they send an .ixf dump from the remote source database table, there is a higher chance of it getting imported at our end? What will be the effect of differences in VERSION/TYPE/PLATFORM (OS) of DB2 from source to target in this case?
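To make points 3)-5) concrete, here is a minimal Python sketch of what a positional ("method L") import does with a fixed-width record; the column names and start/end coordinates here are hypothetical, not taken from the real table:

```python
# Hypothetical (name, start, end) layout with 1-based inclusive
# coordinates, in the spirit of DB2's "method L" positional import.
LAYOUT = [
    ("id",       1,  8),   # INTEGER, right-justified
    ("eff_date", 9, 18),   # DATE as YYYY-MM-DD
    ("flag",    19, 19),   # CHAR(1): the troublesome last column
]

def parse_record(line):
    """Slice one fixed-width record according to LAYOUT."""
    fields = {}
    for name, start, end in LAYOUT:
        if len(line) < end:
            # A record that ends before the last column's coordinates
            # leaves a NOT NULL column with no data, so reject it --
            # roughly what IMPORT did with the original file.
            raise ValueError("record too short for column %r" % name)
        fields[name] = line[start - 1:end]
    return fields

short = "  914604" + "1994-01-01"   # 18 chars: last column missing
padded = short + " "                # the single padding space fixes it

try:
    parse_record(short)
except ValueError as exc:
    print("rejected:", exc)
print("accepted:", parse_record(padded))
```

This only illustrates why the single space typed into TextPad in point 5) made the import succeed: a positional import needs the record to physically reach the last column's end coordinate.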

0
Question by:k_murli_krishna
7 Comments
 
LVL 17

Author Comment

by:k_murli_krishna
ID: 8034124
14) The point is: if the last column is NOT NULL and there is no default value, how come no data turned up for a CHAR(1) data type? The only way this is possible is that they inserted an empty space character. In that case, will this empty space be part of the format of the ASCII file, i.e. in TextPad we can see the cursor jump by one?

15) I have given 200 points, but will give (more) points for each expert's answer if it is valid and good.
0
 
LVL 9

Accepted Solution

by:
mglxxx earned 800 total points
ID: 8034549
4) This seems to be due to an inadequacy in the Control Center. If you are using the command line and have date columns where the format is 'YYYYMMDD', for example, you could use

import from the_file of asc
  modified by dateformat="YYYYMMDD"
  method L ...

I've never used the GUI tools for doing import/load or export.

6/7) This is difficult to answer without knowing the format of the data file. Could you post a line of that file?

8) Obviously, the other side didn't use export for creating the file but a custom-made program.

9) IXF files generated by DB2 are portable between platforms. See the 'Data Movement Utilities Guide', Chapter 6. There are differences between UNIX and Windows, but the manual says that DB2 on both platforms can process IXF files generated on either platform.

10) You can never know when, or if at all, IBM will change the IXF format. I'd assume that for the time being IXF files are portable across different versions of DB2.

11) LOAD can be useful if most of the records in the data file are correct. You could direct LOAD to put incorrect (i.e. rejected) rows in a separate file, fix the rows in that file and import just those (hopefully few) rows.

12) AFAIK, .wsf means the Lotus 1-2-3 worksheet format, not Excel.

13) Since the other side seems to be using a special program to export those data, I'd tell them to make sure that they use space characters for NULL values in the last column, and that space characters which appear in the last column in the table actually make their way into the data file. If there are data missing in the table, there's nothing to be gained by using a different export format.

14) If their export program works as designed, blank characters in the table should also show up in the ASCII file.
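As a side note, the effect of the dateformat modifier in 4) can be mimicked outside DB2. A small Python sketch, assuming the file really carries dates as YYYYMMDD with no separators:

```python
from datetime import datetime

def parse_yyyymmdd(raw):
    """Parse a YYYYMMDD field roughly the way
    'modified by dateformat="YYYYMMDD"' tells IMPORT to,
    returning the ISO form in which DB2 displays DATE values."""
    return datetime.strptime(raw, "%Y%m%d").date().isoformat()

print(parse_yyyymmdd("19940101"))  # -> 1994-01-01
```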
0
 
LVL 17

Author Comment

by:k_murli_krishna
ID: 8040609
1) Thank you, mglxxx.

2) We managed to import the non-delimited file sent from the remote database table, and a delimited ASCII file exported from our own table, from the command line with a custom date format. But for the remotely sent ASCII file, we had to introduce an empty space for the last column (CHAR(1)), as follows.

3) I am presenting here the table structure and the ASCII file contents from the remote database table:

CNTR_NUM_ID                    INTEGER NOT NULL,
BUR_EFF_DTE                    DATE NOT NULL,
BUR_STT_CDE                    CHAR(2) NOT NULL,
CLM_ID                         INTEGER NOT NULL,
FILE_NUM                       CHAR(4) NOT NULL,
BENF_DOB_DTE                   DATE NOT NULL,
BENF_DPCY_CDE                  CHAR(1) NOT NULL,
BENF_REL_CDE                   CHAR(1) NOT NULL

   914604001994-01-0131  21140131002001974-02-02 8
   914604001994-01-0131  21140131002001972-01-11 5

4) Here we have data in the columns, but one column is missing and some are merged together. What kind of custom program could generate such output, i.e. in which languages is it possible to write such programs?

5) Could it be that they have the same table in 2 different databases, and they resorted to a custom program because they wanted only a non-delimited ASCII file?

6) This is all the more likely since there is no non-delimited file option available in export, at least in our DB2 7.2 EE on Windows NT/2000. Can we do this from the command line by framing a suitable command?

7) Please refer to your comment's 6/7) point.
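For what it's worth, the two sample rows do slice cleanly as fixed-width fields under one guessed layout. The start/end coordinates below are my reading of the sample, not confirmed facts about the remote table:

```python
# Guessed fixed-width layout (1-based inclusive start/end coordinates);
# these boundaries are inferred from the two sample rows, not confirmed.
COLUMNS = [
    ("CNTR_NUM_ID",    1, 11),  # INTEGER, right-justified
    ("BUR_EFF_DTE",   12, 21),  # DATE
    ("BUR_STT_CDE",   22, 23),  # CHAR(2)
    ("CLM_ID",        24, 34),  # INTEGER, right-justified
    ("FILE_NUM",      35, 38),  # CHAR(4)
    ("BENF_DOB_DTE",  39, 48),  # DATE
    ("BENF_DPCY_CDE", 49, 49),  # CHAR(1), blank in the sample
    ("BENF_REL_CDE",  50, 50),  # CHAR(1)
]

line = "   914604001994-01-0131  21140131002001974-02-02 8"
record = {name: line[start - 1:end] for name, start, end in COLUMNS}
for name, value in record.items():
    print("%-14s -> %r" % (name, value))
```

Under this guessed layout, the "merged" columns are simply adjacent fixed-width fields with no delimiter between them, and the blank CHAR(1) in the sample is BENF_DPCY_CDE.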
0
 
LVL 17

Author Comment

by:k_murli_krishna
ID: 8085982
1) All columns are NOT NULL, but in some places data is missing, i.e. empty spaces were inserted.

2) There were also some junk characters visible when the file was opened in Notepad/TextPad/WordPad.

How do we import the non-delimited ASCII file keeping all of the above in mind?
0
 
 
LVL 13

Expert Comment

by:ghp7000
ID: 8088644
Re Point 3: The data format you present seems to be coming from a COBOL-based mainframe. The easiest way to load this type of data, assuming that the format you present is uniform throughout the ASCII file, is to write a C program that stores each field in a variable, adds the required missing blank value and then inserts (not via LOAD or IMPORT) the record into the table. The 'junk' characters you are referring to are actually COBOL indicators. Some common ones are &, { and }, and there are numeric indicators for + and - values. For a full list of the 'junk' characters, consult some COBOL sites on the web.
Re non-delimited export of data, you can accomplish this as follows:

select coalesce(char(col1), ' ') ||
       coalesce(char(col2), ' ') ||
       coalesce(char(col3), ' ') ||
       -- repeat for the remaining columns
       coalesce(char(last_column), ' ')
from table

and then pipe the entire output to a file with the redirection operator, > export_file.txt, or whatever name/extension you want to call it. (char() keeps each column at a fixed width, so no delimiters are needed.)
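ghp7000 suggests a small fix-up program in C; here is a minimal sketch of the same idea in Python (chosen only for brevity), which pads short records so the trailing NOT NULL CHAR(1) columns always receive at least a blank. The record width is an assumption taken from the two sample rows, and the COBOL overpunch characters are deliberately left untranslated:

```python
RECORD_WIDTH = 50  # assumed from the two sample rows, not a confirmed fact

def pad_record(line):
    """Right-pad one record with spaces up to the fixed record width.

    The 'junk' characters are most likely COBOL signed-number
    overpunches ('{', '}', letters standing for +/- digits);
    translating them must be done per numeric field and is not
    attempted here.
    """
    return line.rstrip("\r\n").ljust(RECORD_WIDTH)

raw = "   914604001994-01-0131  21140131002001974-02-02 8"
fixed = pad_record(raw[:-2])   # simulate a record cut off before the end
print(repr(fixed))
```

Each cleaned line could then be written to a new file and imported positionally, or split into fields and inserted row by row as ghp7000 describes.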

 
0
 
LVL 17

Author Comment

by:k_murli_krishna
ID: 8101060
ghp7000, your answer was also good. Please refer to http://www.experts-exchange.com/questions/Q_20541972.html and just add a comment.
0
