Steve Wales
asked on
ORA-01841 error when manipulating strings representing dates
OK, I'm completely stumped on this one.
I have a table that contains a date stored as a CHAR(8 CHAR) in the format YYYYMMDD. It is a date representative of the last date a maintenance occurred on a piece of equipment.
There's another column which is a scheduled frequency, which is NUMBER(7).
(It's a 3rd-party app, so we have no control over the data types - before someone says "Make the column a date!".)
The query is trying to return rows based on last scheduled date + frequency being in a certain range.
(I've cut out all the where clauses for brevity).
If I run this:
SELECT last_sch_date, sched_freq_1,
to_char(to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1, 'YYYYMMDD') as date_plus_freq
from table where blah blah blah
I get this:
LAST_SCH SCHED_FREQ_1 DATE_PLU
-------- ------------ --------
20150608 30 20150708
20090522 28 20090619
20150624 30 20150724
20150624 30 20150724
20140922 30 20141022
20140922 30 20141022
20150628 28 20150726
20150412 28 20150510
20150609 30 20150709
20110315 30 20110414
20150628 30 20150728
20150628 28 20150726
20150628 28 20150726
13 rows selected.
Adding in this bit:
select * from
(SELECT last_sch_date, sched_freq_1,
to_char(to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1, 'YYYYMMDD') as date_plus_freq
from table where blah blah blah)
WHERE date_plus_freq between '20150701' and '20150730'
gets me:
to_char(to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1, 'YYYYMMDD') as date_plus_freq
*
ERROR at line 5:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
I thought there might be bad data in the last_sch_date column, but when I looked at the contents of just that one column for all rows returned by the base WHERE clause, every row contains data, matches the YYYYMMDD mask, and is a valid date. There are some rows in the table, outside the scope of the WHERE clause, where the date column is spaces or contains '00000000' - but these are specifically excluded in the WHERE clause by LAST_SCH_DATE > '00000000'. You can also see the result set contains no invalid data.
I've also tried:
select *
from
(
SELECT last_sch_date, sched_freq_1,
to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1 as date_plus_freq
FROM Table
WHERE blah blah blah
)
WHERE date_plus_freq between to_date('20150701','YYYYMMDD') and to_date('20150730','YYYYMMDD')
Further, I've also tried it this way:
SELECT *
FROM Table
WHERE blah blah blah
AND to_char(to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1, 'YYYYMMDD') >= '20150701'
AND to_char(to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1, 'YYYYMMDD') <= '20150730'
I get exactly the same ORA-01841 error.
The bafflement is strong with this one.
Anyone have any ideas?
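One way to double-check the bad-data theory: assuming the real table and filters (the names below are placeholders for the ones cut from the question), a regexp scan will list every value that isn't exactly eight digits:

```sql
-- "maint_table" is a placeholder for the real table name, and the
-- question's WHERE clause has been omitted.
-- Lists every LAST_SCH_DATE value that is not exactly eight digits
-- (blanks and other junk; note '00000000' IS eight digits, so it
-- would need a separate check).
SELECT last_sch_date, COUNT(*) AS cnt
FROM   maint_table
WHERE  NOT REGEXP_LIKE(last_sch_date, '^[0-9]{8}$')
GROUP  BY last_sch_date;
```

Eight digits that don't form a real date (e.g. '20159999') would still slip past this and need a guarded TO_DATE to find.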
Just for grins:
create table mytesttab as
SELECT last_sch_date, sched_freq_1,
to_char(to_date(LAST_SCH_DATE, 'YYYYMMDD') + sched_freq_1, 'YYYYMMDD') as date_plus_freq
from table where blah blah blah
Then try the other queries from the new table.
ASKER
Yeah, tried those scenarios already - running CTAS to create a table, or creating one from scratch, it works just fine.
When using the data that is present already, I get the error.
Since the base select without the date boundaries returns 13 valid rows, I just don't know what to do with this one now.
Is there an index on any of the columns in the query?
The WAG is possibly index corruption?
Second guess:
Bounce the database. Reboots have fixed many an unexplained error...
ASKER
To further throw a wrench into this one.
There's another column called work group on this table. It defines which area is doing the maintenance.
There are 13 rows, spread across 5 work groups.
If I add in work group to the where and select them one at a time, they all work. Take work group out and it blows up again.
Execution plan shows no index access on this table - it's doing "TABLE ACCESS STORAGE FULL" - it's an Exadata.
I had an issue many years back where, whenever I selected one specific row in a table, I would get an ORA-03113. I grabbed what info I could from the row, deleted and re-inserted it, and everything was good from then on.
Can you move the table to a different tablespace and move it back? Maybe it just needs to relocate some blocks?
>>it's an Exadata.
Must be nice to have a TON of $$$!!!!!
I went to a seminar/sales pitch on Exadata recently. They are really proud of their engineering of older technology, aren't they!!!
ASKER
Not my machine :) One of my customers.
I'm also wondering if there's a possible data corruption.
Moved the table out and back - same result.
Going to ask their DBA to run utlvalid / utlchn1 just for giggles and then I'll try analyze table ... validate structure - just to make sure there are no invalid blocks (although I would have thought that if there were, the move might have corrected them - but maybe there's an issue in a block somewhere). This is just weird.
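For reference, the structure check mentioned above is a one-liner (table name is a placeholder); CASCADE extends the check to the table's indexes:

```sql
-- "maint_table" is a placeholder for the real table name.
-- Checks table blocks and, with CASCADE, all associated index
-- blocks for internal consistency; raises an error if corrupt.
ANALYZE TABLE maint_table VALIDATE STRUCTURE CASCADE;
```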
ASKER CERTIFIED SOLUTION
I bet there are constraints on the table that mean you can't CTAS, truncate/delete, and re-insert?
If there aren't many, could you modify them to deferrable, delete, insert, commit, then change them back? Never tried that but in theory it should be possible.
I'm beginning to think that we are WAY over thinking this and we'll both go DUH when it is finally figured out.
In case it's useful:
**Oracle 11g R2 Schema Setup**:
CREATE TABLE Table1
(LAST_SCH char(8), SCHED_FREQ_1 int, "DATE_PLU" int)
;
INSERT ALL
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('00000000', 30, 20150708)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES (NULL, 30, 20150708)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150608', 30, 20150708)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20090522', 28, 20090619)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150624', 30, 20150724)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150624', 30, 20150724)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20140922', 30, 20141022)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20140922', 30, 20141022)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150628', 28, 20150726)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150412', 28, 20150510)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150609', 30, 20150709)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20110315', 30, 20110414)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150628', 30, 20150728)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150628', 28, 20150726)
INTO Table1 ("LAST_SCH", "SCHED_FREQ_1", "DATE_PLU")
VALUES ('20150628', 28, 20150726)
SELECT * FROM dual
;
**Query 1**:
select
LAST_SCH, SCHED_FREQ_1, LAST_SCH + SCHED_FREQ_1
from (
select case when LAST_SCH > '00000000' then to_date(LAST_SCH,'YYYYMMDD') end as LAST_SCH , SCHED_FREQ_1
from table1
)
**[Results][1]**:
| LAST_SCH | SCHED_FREQ_1 | LAST_SCH+SCHED_FREQ_1 |
|-----------------------------|--------------|---------------------------|
| (null) | 30 | (null) |
| (null) | 30 | (null) |
| June, 08 2015 00:00:00 | 30 | July, 08 2015 00:00:00 |
| May, 22 2009 00:00:00 | 28 | June, 19 2009 00:00:00 |
| June, 24 2015 00:00:00 | 30 | July, 24 2015 00:00:00 |
| June, 24 2015 00:00:00 | 30 | July, 24 2015 00:00:00 |
| September, 22 2014 00:00:00 | 30 | October, 22 2014 00:00:00 |
| September, 22 2014 00:00:00 | 30 | October, 22 2014 00:00:00 |
| June, 28 2015 00:00:00 | 28 | July, 26 2015 00:00:00 |
| April, 12 2015 00:00:00 | 28 | May, 10 2015 00:00:00 |
| June, 09 2015 00:00:00 | 30 | July, 09 2015 00:00:00 |
| March, 15 2011 00:00:00 | 30 | April, 14 2011 00:00:00 |
| June, 28 2015 00:00:00 | 30 | July, 28 2015 00:00:00 |
| June, 28 2015 00:00:00 | 28 | July, 26 2015 00:00:00 |
| June, 28 2015 00:00:00 | 28 | July, 26 2015 00:00:00 |
[1]: http://sqlfiddle.com/#!4/99c82/7
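A guess at why the guard matters (not stated explicitly in the thread): Oracle may merge the inline view and evaluate the filter on date_plus_freq before the predicate that excludes the '00000000'/blank rows, so an unguarded TO_DATE can still see bad values; the CASE makes the conversion conditional no matter how the optimizer reorders things. On Oracle 12.2 and later the same protection is available directly in the conversion (the thread's 11.2.0.3 predates this):

```sql
-- Oracle 12.2+ only: return NULL instead of raising ORA-01841 when
-- the string does not parse under the given format mask.
-- "maint_table" is a placeholder for the real table name.
SELECT last_sch_date, sched_freq_1,
       TO_DATE(last_sch_date DEFAULT NULL ON CONVERSION ERROR,
               'YYYYMMDD') + sched_freq_1 AS date_plus_freq
FROM   maint_table;
```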
ASKER
Paul, I don't know WHY your solution works ...
But it does.
That's all I needed :)
Thanks!
(Thanks for your contributions as well, Steve)
No worries Steve (Wales), it sometimes just needs another set of eyes to make the right suggestion
However it's a very rare day when I spot something that slightwv hasn't
>>However it's a very rare day when I spot something that slightwv hasn't
It happens all the time!!! I couldn't figure out how 'bad' data was sneaking in past the where clause.
>>"I couldn't figure out how 'bad' data was sneaking in past the where clause."
between the second and third "blah"
I added 'bad data' to my test case.
Running 11.2.0.3