glivie

asked on

Exception thrown when SQLExecDirect is called for inserting binary data. Next ODBC operation hangs the application.

Hi,

I am using Oracle client version 8.1.7 to connect to an Oracle database.

I am running a loop in which the ODBC API (version 3.x, I guess) is used to insert character-string, numeric, and binary data.
SQLExecDirect is used to execute the prepared SQL insert statement with parameters.

The problem I am facing is this:

For some reason, for certain binary data, the SQLExecDirect() function throws an exception instead of returning an error value. When I then ignore this exception and try to continue with the next insert statement, the SQLParamOptions() function call just hangs the application.

Please help me out with this.

Thanks,
glivie.
slightwv (Netminder)

There is a general page that talks about exceptions for SQLExecDirect at: http://odbcrouter.com/api/SQLExecDirect.shtml

If that does not help, I need some more information:

What is the error message you are receiving?

Whose ODBC driver are you using: Oracle or Microsoft?

What is the data type of the Oracle table you are trying to insert into?
glivie,
   If the error is in the SQLExecDirect() function, there's a good chance it's a buffer problem.  The error is aborting the transaction, so any calls that try to continue processing that transaction just hang.  How big is the binary data you are inserting?

Good luck!
DrSQL
glivie (ASKER)

Hi,

I get an exception with the error "0xC0000005: Access violation writing location...". I was unable to catch it as a standard C++ exception and still do not know what type of exception it is.

First, let me point out that the call to SQLExecDirect() does not return at all, but since I have it inside a try/catch, I get an unknown exception.

So, the database is Oracle 9i and I am using the Oracle client 8.1.7 with Oracle ODBC driver 3.x.

I was debugging the code and found out that there are two instances where this exception occurs.

Instance 1: Binary (SQL_LONGVARBINARY) data insertion into a table with the following structure:
 f1                                 NOT NULL NUMBER
 f2                                 NOT NULL NUMBER
 f3                                 NOT NULL NUMBER
 f4                                 BLOB
 f5                                 NOT NULL NUMBER(2)

The size of the data buffer passed to SQLBindParameter() is 2857 bytes, but the buffer is filled with only 1028 bytes of data, so the remaining bytes are junk characters.

When SQLExecDirect is called, an exception is thrown, after which any operation on the connection hangs the application.

Instance 2: Varchar (SQL_VARCHAR) data insertion into a table with the following structure:
f1                               NOT NULL NUMBER
f2                               NOT NULL NUMBER
f3                               NOT NULL NUMBER
f4                               NOT NULL NUMBER
f5                               VARCHAR2(255)

The size of the data buffer passed to SQLBindParameter() is 66 bytes. And again, only 9 characters (a null-terminated string) are written to the buffer; the rest are junk characters.

When SQLExecDirect is called, an exception is thrown, after which any operation on the connection hangs the application.

One more point to mention: I have similar table structures receiving similar data that do not fail, but these two operations fail almost all the time. I have been testing this, and these two scenarios occur about 9 times out of 10; there were a few occasions with no problems.


For testing purposes, I also have another login in the same database with one more set of tables, with only one difference: instead of BLOB, these tables have LONG RAW fields. The application can work with both logins without any code change, since all the table names and structures remain the same except for this BLOB -> LONG RAW change.
In this login, I have no problem inserting binary data or any other data. This also puzzles me, as I was wondering whether the buffer-initialization code has the problem. If the code were wrong, it should fail for both logins, which does not happen.

I think you must be getting a clear picture now.

And DrSQL, are you saying that the buffer I am using in my code has the problem, or is it internal to the ODBC API?
How should I tackle this buffer problem if it is the culprit?

Thank you,

Regards,
glivie.
glivie,
   That helps.  Are you doing a to_lob in your prepared sql for instance1?  Because you're passing a long raw and inserting into a BLOB.  It should be:

insert into <table> values (:1, :2, :3, to_lob(:4), :5)

Here's a link to the to_lob:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions139a.htm#84932

I'm not finding the correct syntax for the SQL_VARCHAR issue in instance2.  The problem is similar - that datatype is varchar, not varchar2.  There isn't a SQL_VARCHAR2 (and you shouldn't need one - they're very similar).  It might require a CAST-AS:


insert into <table> values (:1, :2, :3, :4, cast(:5 as varchar2(255)))

But I doubt it.

Good luck!
DrSQL

You're out of my area of expertise.  I've not done any c++ ODBC programming.  I can only offer general insight from here on out.

The general error code you posted tends to mean that you are attempting to write to a memory area that is no longer there. If it only failed on BLOB data, I would tend to think that the LOB locator wasn't properly set up or had been reset. Since it also fails on straight varchar data, this doesn't seem to be the case.

Since you mention the code "sometimes" works, I'm thinking there is a logic trap in the code somewhere. I'd look into more detailed error trapping. For example, calls like SQLBindParameter return specific errors that the try-catch may not catch.
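For that kind of detailed trapping, every ODBC 3.x call's return code can be checked and SQLGetDiagRec() used to pull the SQLSTATE and driver message before deciding whether to continue. A rough sketch (the handle and variable names are placeholders, not from the original code):

```
SQLRETURN rc;
SQLCHAR state[6], msg[256];
SQLINTEGER native;
SQLSMALLINT msgLen;

rc = SQLBindParameter(hstmt, ...);
if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO) {
    SQLGetDiagRec(SQL_HANDLE_STMT, hstmt, 1, state, &native,
                  msg, sizeof(msg), &msgLen);
    /* log state, native error, and msg, then stop instead of
       continuing with a broken statement handle */
}
```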

You might also look into posting a reference question over in the c++ forum linking back to this one.
DrSQL,

I don't think you can use to_lob in a straight insert.  I'm pretty sure you can only use it in a select.

It may need to look like:

insert into <table> (select :1, :2, :3, to_lob(:4), :5 from dual)
glivie,
   (Thanks, slightwv, you're probably right).  I don't find anything on the varchar/varchar2 - Oracle no longer makes a distinction (not since v8 and then it was just a caution), so that isn't the issue.  Not to mention the thousands of programs written that use SQL_VARCHAR datatypes.  

Could you print out the contents of the f5 string for instance2?  Could you also post the prepared sql for instance2?

Good luck!
DrSQL
glivie,
   Any chance we're dealing with unicode data here?  Or, some other multi-byte character set?

Good luck!
DrSQL
glivie (ASKER)

DrSQL,Slightwv,

Thanks to you both on your replies. But I am yet to find a break on this issue.

I "think" the data being inserted contains Unicode/multibyte data. What if it does have multibyte/Unicode characters in it? Is there any specific problem with that?

Again, why do the other tables have no problems even with these character sets?

One more important thing to tell you is that the specific login in which these problems occur is working on partitioned tables. This is all my DBA had to say, nothing more. I don't understand how partitioned tables would have different data types in them.

i.e., say LOGIN1 has the functional, hassle-free tables (LONG RAW field types) and LOGIN2 has the problematic, partitioned tables with BLOB-type fields. How can we have this?

One more question I have is how to enable Oracle ODBC tracing and logging. Will it help to get some more information on this error? I ask because the exception is thrown from SQORA.dll only (I used windbg to get this info).

Thanks,
glivie.
glivie,
   Well, unicode/multibyte data doesn't map to varchar or blob datatypes.  It instead maps to nvarchar and nclob.  If this is the only login where you're getting these characters (like the Japan Office, for example), that would explain it.  I'd need to know more about your nls settings to help.  And, you might also want to find out what column the partitioning is being done on and find out if THAT is a multibyte.
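If multibyte data is a suspect, the server-side character sets can be checked directly; assuming you can query the NLS views, something like:

```sql
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```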

Good luck!
DrSQL
glivie (ASKER)

DrSQL,

The data does not change according to the logins. The data is always created from a single client machine. The server application inserts this data into the db.

So, why should the insert fail with one login and not with another? The data is just the same for both logins.

The partitioning is done on a date column, so that has nothing to do with the data getting inserted but only the date & time the insertion happened.

I don't have the Oracle database on my machine; it is on another machine.
But the NLS settings on my machine (where the exception occurs) are:
\ORACLE\{NLS_LANG} = NA
\ORACLE\HOME0\{NLS_LANG} = AMERICAN_AMERICA.WE8ISO8859P1

But again, the one question that keeps bothering me is why it fails in one login and not in another.

Do you know how to get tracing & logging working for the ODBC layer?

Thanks,
glivie.
glivie (ASKER)

I have found some more interesting things about this problem. The problem is not with binary data at all. Instead, what happens is this:

I have a transaction which continuously inserts data of various sizes and types, using SQLExecDirect, into different tables of the database.
When around 80 insertions have been done, the SQLExecDirect function call fails and throws an exception.

Again and again, my only question is why the error occurs only with one login (with BLOB-field tables) and not the other (with LONG RAW-field tables).

I also tried another piece of code that inserts 100 records in transaction mode with this same login. There seem to be no problems with that one.
glivie,
    It is starting to sound like a user-profile issue. If there are quotas for this user (resource limits), then you may be hitting them. It could also be that this user has a smaller temporary tablespace than the other users. I don't think you can specify rollback by user (you can by transaction), so I don't think that is it. It would also explain why you're not getting the full error - the operation is cut off at the server. Try these queries for the different users (at least one that is working and the one that isn't):

select * from USER_TS_QUOTAS;
select * from user_resource_limits;
select * from dba_profiles where profile in (select profile from user_users);
select * from user_users; /* if you don't see anything else obvious, post this */

   If there's anything there that's different, it could be the reason.

Good luck!
DrSQL
glivie (ASKER)

DrSQL thanks for the response.

There are a few things that have to be looked at.

I ran 200 consecutive insertions in a transaction from a sample program, which did not fail even in the problematic login. The problem occurs only when I run the application. But the same application works fine with the other login.

I have made the following observations from your listed queries, but could not infer much from them. Please help me with how to proceed. (I have listed only those columns that differ.)

For "select * from user_users" the following are the records in different logins

UserName    Default_tablespace    Temp_tablespace
--------------------------------------------------
User1       TD_DATA               Temp            (when run in login1)
User2       TD_DATA               Temp            (run in login2)
User3       TD_DATA2              Temp            (run in login3 - badlogin)


For "select * from USER_TS_QUOTAS" in different logins, the following is the result.

TABLESPACE_NAME    BYTES          MAX_BYTES    BLOCKS     MAX_BLOCKS
--------------------------------------------------------------------
for goodlogin1:
TD_DATA            13068402688    0            1595264    0
TD_IDX             5017436160     0            612480     0

for goodlogin2:
TD_DATA            4729077760     0            1595264    0
TD_IDX             516947968      0            63104      0

for badlogin:
TD_DATA2           206635008      -1           25224      -1
TD_IDX             0              0            0          0



For select * from user_resource_limits, all values are "UNLIMITED" in all the logins.


"select * from profiles where profile in (select profile from user_users)"
does not work, since there is a profile field in both the profiles and user_users tables.

select * from users - the users table does not exist.


Please write to me how to proceed with this problem.

Thank you,

Regards,
glivie.
glivie,

  I think we need to look at those default tablespaces:

select tablespace_name, sum(bytes) / 1024 / 1024 / 1024 gb, autoextensible
from dba_data_files where tablespace_name like 'TD_DATA%';

Good luck!
DrSQL

P.S.  It's interesting that the bad login is the only one with ANY tablespace quotas and they are UNLIMITED.  Isn't there ANY entry for the bad login in the TS_QUOTA view for TD_DATA?  Because, if the table that "bad login" is inserting into extends, the resources are charged against "bad login" and if it's in TD_DATA and "bad login" has no quota there, then...
glivie (ASKER)

Hi DrSQL,

I could not find the dba_data_files view in my database.

And one more input I want to give you: with the 10g client (SQORA32.dll version 3.52), the exception occurs in another function call, SQLAllocStmt(), not SQLExecDirect(), but just after the same number of insert statements have been executed in the transaction.

Please help.

Thanks, glivie.
glivie,
   Sorry, I added an extra "_".  The correct query is:

select tablespace_name, sum(bytes) / 1024 / 1024 / 1024 gb, autoextensible
from dba_data_files where tablespace_name like 'TD_DATA%';

The fact that the error is now in "SQLAllocStmt" makes me think we're now on the right track.

Good luck!
DrSQL
glivie (ASKER)

DrSQL,

I have clearly identified that the tablespace should not be a problem at all.

What I did is: I dropped all the tables that had LONG RAW types under User2's login and recreated them with BLOB types.

To recap the previous logins:

UserName    Default_tablespace    Temp_tablespace
--------------------------------------------------
User1       TD_DATA               Temp            (run in login1)
User2       TD_DATA               Temp            (run in login2)
User3       TD_DATA2              Temp            (run in login3 - badlogin)


Now, if the problem were only with the tablespace, there should be no exception occurring with this login (User2). But even with this login, I am facing the same problem. There is no question of partitioning either, as I don't have any partitions in this login.

The only thing I was able to find is that when I use the SQLFreeStmt(statement, SQL_RESET_PARAMS) function between two inserts, the exception does not occur.
So far, the pattern of the ODBC code was as follows:

Create environment handle;
Create connection handle and bind with environment;
Create statement handle and bind with connection;

Loop For (X times) {
    create sqlCommand string;
    Set SQLParamOptions to insert N number of rows;

    SQLBindParam(1);
    SQLBindParam(2);
    SQLBindParam(3);
    SQLBindParam(4);   // data buffer which may vary in size for each data item in the loop

    SQLExecDirect(statement, sqlCommand, SQL_NTS);
}

With this code the exception always occurs when BLOB types are used for the table fields. What is the reason that the same code works fine for LONG RAW types without any problems?

Once I include the SQLFreeStmt() call after/before each insert, the exception does not occur in the User3 login or the modified User2 login.
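For reference, the working pattern is the same loop with the reset added (a sketch in the same pseudocode notation; only the SQLFreeStmt line is new):

```
Loop For (X times) {
    create sqlCommand string;
    Set SQLParamOptions to insert N number of rows;

    SQLBindParam(1);
    SQLBindParam(2);
    SQLBindParam(3);
    SQLBindParam(4);

    SQLExecDirect(statement, sqlCommand, SQL_NTS);

    SQLFreeStmt(statement, SQL_RESET_PARAMS);  // release the old
                                               // parameter bindings
                                               // before the handle
                                               // is re-used
}
```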

But I am unable to understand why this happens.

Hope you can provide me some details here.

Thanks,
glivie.
glivie,
    Does that mean that the SQLFreeStmt(statement, SQL_RESET_PARAMS) call returns SQL_SUCCESS or SQL_SUCCESS_WITH_INFO? If so, then we need to look at your environment. SQLFreeStmt cleans up cursors and memory areas - if you're that constrained for resources, there may be more we need to look at. But as a start: what is the open_cursors setting in the database's init.ora? We might want to increase it and try again. Objects use extra cursors (compared with LONG).
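One way to read that setting without opening init.ora, assuming you have SELECT access on v$parameter:

```sql
select name, value from v$parameter where name = 'open_cursors';
```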

Good luck!
DrSQL
glivie (ASKER)

DrSQL,

I think you got it wrong about this SQLFreeStmt() call. There is no problem using this function, and it always returns SQL_SUCCESS. In fact, this function call has solved the exception problem.

The exception occurs only when I don't use this function between two inserts.
That means I am re-using the statement handle (with the old parameter buffers) for all the inserts.

I want to make sure that this SQLFreeStmt() call is the fix for this problem and also would like to know the reason why this problem occurs.

As for the cursor settings, I do not currently have access to the database machine, but I will try to get that info.

You mentioned that objects use extra cursors compared to LONG. But doesn't LONG RAW store objects as well? In that case, shouldn't LONG RAW also have had problems with cursors and memory?
glivie,
   I was just checking on the SQLFreeStmt() and making sure it wasn't masking an error. It does sound like either there are issues with the information left in the memory structures that SQLFreeStmt() flushes, or you are so constrained by resources that you need the SQLFreeStmt() to re-use the memory structures. I'll see if I can find a bug listing, but it certainly sounds like the appropriate fix.

  As for my comment about objects - yes, long raw can be used to store objects, but long raw is not treated as an object.  Objects get special attention and there are more recursive sql operations necessary to resolve that extra attention (dba_objects, some lob-specific validation, etc.).  Long raw assumes you know what you're dealing with and doesn't give you any help or special attention.

Good luck!
DrSQL
glivie (ASKER)

DrSQL,

I have found a link that talks about a possible reason for this problem, though it is meant for the DB2 driver, not the Oracle ODBC driver.

http://www.uic.rsu.ru/doc/db/DB2CLI/sqll1307.htm#HDRFNBNDPA

In the above link please note the section where it says,

"After the SQL statement has been executed, and the results processed, the application may wish to reuse the statement handle to execute a different SQL statement. If the parameter marker specifications are different (number of parameters, length or type) then SQLFreeStmt() should be called with SQL_RESET_PARAMS to reset or clear the parameter bindings."

Even here they have not stated the specific problem or the consequence of not calling SQLFreeStmt(), but I am guessing this is exactly what happens with my code.

Were you able to find out any other leads?

Thanks,
glivie.
ASKER CERTIFIED SOLUTION
DrSQL - Scott Anderson
