Bulk Inserting Dates


I am using Bulk Insert to import large amounts of data into SQL tables. I have come across a problem when I need to import date fields with the value "1899-12-30". Whenever such a field is imported, Bulk Insert generates an error:

Server: Msg 4864, Level 16, State 1, Line 1
Bulk insert data conversion error (type mismatch) for row 2072, column 9 (LAST_AMENDED_DATE).

I have tried changing the field type from smalldatetime to datetime, but that hasn't been successful.

This is a sample row from the text file I am importing (the fields are pipe-delimited):
PO008726|X001 041|1|0|0|1|9|0.00|1899-12-30|BIMPORTD|0||1||1||WI|0.00|0||0|WI|I

Is there any way of getting around this?

AustinSeven Commented:
Or... how about copying your existing SQL Server table schema to another, empty table and changing the data type of the datetime column to varchar(25)? Then do the bulk insert again. This will give you the opportunity to easily query the data to find duff values. You could extend this test into a solution by using this new table as a staging table. Such a table can be 'loosely typed' to allow for dirty data. Obviously, you would need a procedure to clean the data and then insert into the original table. I guess this strategy depends on the likelihood of you getting bad data.

Cannot imagine why it's failing, as there's nothing wrong with the date format in the example you've given. Have you looked at the row corresponding to row 2072? Is the date in that row valid? As a test, can you remove row 2072 from the input file and re-run the import?

Also... what happens if you bcp the data in? What happens if you DTS the data in? I'd be interested to know if you can isolate row 2072.
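As a quick way to isolate the row BULK INSERT complained about, a short script can pull a single line out of the input file for inspection. This is only a sketch: the file name is a placeholder, and it assumes the error's row number corresponds to the 1-based line number of the text file (no header row), which you would need to confirm for your file.

```python
def extract_row(path, row_number):
    """Return the given 1-based line from a text file, or None if absent."""
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            if i == row_number:
                return line.rstrip("\n")
    return None

# Hypothetical usage against the import file:
# print(extract_row("import.txt", 2072))
```

You can then paste the offending line into a one-row test file and re-run the bulk insert against it.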


As AustinSeven says, there doesn't appear to be anything wrong with the sample row. Can you isolate row 2072?

Alternatively, can you import the file using DTS, and set it up to log errors to a file?
You can set the maximum error count to, say, 999. That way you can load most of the data and have the bad rows logged. You can then examine the bad rows to determine what's wrong with them.
jbonello (Author) Commented:
I have tried removing the indicated row from the text file, and the import ran without any problems.
I have also tried changing the year from 1899 to 1900, and this also worked fine. However, changing the date isn't a viable option for me.
Also, I am restricted to using Bulk Insert because this forms part of a larger process, so I can't use BCP or DTS at this stage.
Melih SARICA (IT Manager) Commented:
In that row the date value is below '1900-01-01', the minimum that smalldatetime accepts.

If your data may contain earlier dates like this, you must change your datatype to varchar.

jbonello (Author) Commented:
Thanks Austin,

I used varchar in the staging table and then converted to datetime in the production table.

Jan Franek Commented:
It looks like your date column is of SMALLDATETIME type - this type supports dates from January 1, 1900 through June 6, 2079. To import a date from the year 1899 you need to use the DATETIME type (January 1, 1753 through December 31, 9999).
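The two ranges above explain the error: 1899-12-30 falls inside DATETIME's range but below SMALLDATETIME's minimum. A minimal sketch of that range check (the bounds are SQL Server's documented limits for the two types; everything else here is illustrative):

```python
from datetime import date

# Documented SQL Server type ranges
SMALLDATETIME_MIN, SMALLDATETIME_MAX = date(1900, 1, 1), date(2079, 6, 6)
DATETIME_MIN, DATETIME_MAX = date(1753, 1, 1), date(9999, 12, 31)

def fits_smalldatetime(d):
    return SMALLDATETIME_MIN <= d <= SMALLDATETIME_MAX

def fits_datetime(d):
    return DATETIME_MIN <= d <= DATETIME_MAX

bad = date(1899, 12, 30)
# fits_smalldatetime(bad) is False, fits_datetime(bad) is True,
# which is why changing the column to DATETIME resolves the conversion error.
```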
I've had issues like this before as well.  My solution was to write a program to scrub the data before the bulk insert ever runs.

In VB -

Open the file.
Read each line and replace the bad date with a good one:
    strTemp = Replace(strTemp, "1899-12-30", "1900-12-30")

Write the "scrubbed" line to a new file.
Read the scrubbed file into your bulk insert.
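The same scrub step could be sketched in Python rather than VB. The file names are placeholders, and note the caveat the author already flagged in this thread: mapping 1899-12-30 onto another date changes the data, so this only suits cases where the exact value doesn't matter.

```python
def scrub_file(src, dst, bad="1899-12-30", good="1900-12-30"):
    """Copy src to dst, replacing the out-of-range date before BULK INSERT runs."""
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            fout.write(line.replace(bad, good))

# Hypothetical usage:
# scrub_file("import.txt", "import_scrubbed.txt")
```

You would then point BULK INSERT at the scrubbed file instead of the original.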