JDN
asked on
Lookup query runs very slowly
Hi experts,
I'm using a lookup query in a DBGrid to automatically display an article description in the lookup field as soon as the user types in the article code.
The lookup query is a TQuery that uses a complex statement to retrieve article information from (indexed) SQL Server tables. The result of this lookup query contains two fields: the article code and the article description. The article code is the key field, linked to the field the user types into the grid; the article description is the result field, displayed in another column of the grid.
The result of the lookup query is a set of about 10,000 article codes and descriptions.
Technically, this works. There is, however, one major problem: the lookups are rather slow, and the lookup time seems to increase as the user enters more records in the grid. With 30 records entered there is a significant delay when posting a new record; the lookup takes longer than with only a few records in the grid. Once about 100 records have been entered, it's almost impossible to work with the application anymore.
When I remove the link to the lookup query, there's no delay at all.
I'm using Delphi 5 Enterprise, SQL Server 7, and the TQuery linked to the DBGrid uses cached updates. As mentioned, without the lookup field there's no delay at all and below 30 records in the grid there's also no noticeable delay.
Can anyone tell me how to speed up this lookup? How can I prevent this growing delay as the number of entered records in the grid increases? Perhaps someone has run into the same problem?
Any help is welcome.
Thanks,
JDN
ASKER
Meikl,
I tried your suggestion, but there's no difference in performance. When I put the lookup query in a view or stored procedure, there's still an increasing delay as the number of entered records in the dbgrid is growing.
When I limit the result of the lookup query to 100 records (normally 10,000), there isn't any delay when entering records, but that is, of course, not the solution.
So it doesn't make any difference whether I put the lookup SQL in a TQuery or in a view. With 10,000 records in the lookup, you cannot enter more than about 30 records in the dbgrid; beyond that, the delay after posting a new record becomes unacceptable.
JDN
well,
10,000 records in a lookup list is very uncomfortable for the user, and yes, the lookup dataset must locate its record, which costs time depending on the number of rows displayed in the grid.
I would recommend another interface design (I already do this whenever a lookup grows beyond about 50 entries): in that case I use no lookup field; instead I join the result field directly into the resultset of the grid query. For this grid field I set the column's button property to an ellipsis button, and when the user presses it a separate search form comes up, where the user can filter and select the right entry. The grid field itself is not directly editable.
Just a suggestion.
Another possibility (if you don't want a redesign) would be to load the whole lookup dataset into a memory dataset, because with so many records a "normal" dataset cannot cache all the data and must therefore go back to the server for some locates, which is time-expensive in your case.
hope this helps
meikl ;-)
meikl
ASKER
Meikl,
I cannot join the result field directly into the dbgrid query, because the query then becomes read-only. I use RequestLive := True, and with joins in a query RequestLive won't work.
The function of my lookup result field is to present a (read-only) article description in the dbgrid as soon as the user types in the article code. So an ellipsis button isn't a solution here. My lookup is merely for information purposes, not for presenting possible selections to the user.
Your memory-dataset solution sounds interesting. Can you give me some more information about this, with a code example?
Thanks,
JDN
Oh, that's easy: just use the memory dataset included in the RX Library (free). At application start, fill this memory dataset with the result of your query (a batch move if possible, or an iteration), then use the memory dataset instead of the query as the lookup source.
I can provide a sample tomorrow, because today I'm getting home too late.
meikl ;-)
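Pending meikl's promised sample, a minimal sketch of the idea might look like the following. It uses the plain TDataSet iteration he mentions, so it works with RxLib's TRxMemoryData or any in-memory dataset; the component names (q_article, mem_article) and field names (ART_CODE, ART_DESC) are placeholders, not taken from the question:

```pascal
// Fill mem_article (an in-memory dataset with the same two fields,
// defined at design time) from the server-side lookup query once,
// at application start. All later lookups then resolve in memory.
procedure TOrderForm.FillArticleLookup;
begin
  q_article.Open;                 // the existing (complex) lookup query
  mem_article.Open;
  q_article.First;
  while not q_article.Eof do
  begin
    mem_article.Append;
    mem_article.FieldByName('ART_CODE').Value :=
      q_article.FieldByName('ART_CODE').Value;
    mem_article.FieldByName('ART_DESC').Value :=
      q_article.FieldByName('ART_DESC').Value;
    mem_article.Post;
    q_article.Next;
  end;
  q_article.Close;                // server-side cursor no longer needed
end;
```

After this, the lookup field's LookupDataSet would point at mem_article instead of q_article, so posting a record never triggers a server round trip.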
I use TClientDataSet for internal datasets. It is very useful (it has some additional features), and you can run your lookup in briefcase mode: there will be a large delay only for the first data download from the server.
Set the FileName property in the BeforeClose event to store this lookup on the local disk.
You can create a trigger on your articles table that sets a flag in a hash table whenever the data changes, or update it from your application.
In the lookup dataset's BeforeOpen event, check the hash code from the server and compare it with the local value. I use the same kind of TClientDataSet hash table on the client machine, which stores a local copy in a flat file. If the server-side data hasn't changed, set the FileName property to the local file. This works really fast.
Good luck,
and sorry for my English, Siarhej.
Here is some code from my program; maybe it will be useful for you:
/////TDM - Local data module
// cdsHashCodes - Hash codes table //
procedure TDM.cdsHashCodesAfterOpen(DataSet: TDataSet);
begin
  if cdsHashCodes.FileName='' then
  begin
    cdsHashCodes.FileName:=IncludeTrailingBackslash(gLocalDataPath)+'HashCodes.bin';
    cdsHashCodes.First;
    while cdsHashCodes.RecordCount > 0 do
    begin
      cdsHashCodes.Delete;
      cdsHashCodes.Next;
    end;
    DeleteLocalDataFiles;
  end;
  cdsHashCodes.AddIndex('xpkHASH_ID','HASH_ID',[ixPrimary],'','',0);
  cdsHashCodes.IndexName:='xpkHASH_ID';
end;

procedure TDM.cdsHashCodesBeforeClose(DataSet: TDataSet);
begin
  //FreeCreatedIndexes;
end;

procedure TDM.cdsHashCodesBeforeOpen(DataSet: TDataSet);
begin
  if FileExists(IncludeTrailingBackslash(gLocalDataPath)+'HashCodes.bin') then
    cdsHashCodes.FileName:=IncludeTrailingBackslash(gLocalDataPath)+'HashCodes.bin'
  else
    cdsHashCodes.FileName:='';
end;

/////////////////////
function TDM.GetHashCode(const AHashCode_ID:Double):Double;
begin
  cdsHashCodes.Active:=True;
  if cdsHashCodes.Locate('HASH_ID',AHashCode_ID,[]) then
    Result:=Self.cdsHashCodes.FieldByName('HASH_VALUE').Value
  else
    Result:=-1;
end;

function TDM.GetNewHashCode(const AHASH_ID:Variant):Variant;
begin
  Result:=RDM.GetNewHashCode(AHASH_ID);
end;

function TDM.GetHashCodeValue(const AHASH_ID:Variant):Variant;
begin
  Result:=RDM.GetHashCodeValue(AHASH_ID);
end;

function TDM.SetHashCode(const AHashCode_ID:Double;var HashCode_Value:Double):boolean;
begin
  cdsHashCodes.Active:=True;
  Result:=False;
  if cdsHashCodes.Locate('HASH_ID',AHashCode_ID,[]) then
  begin
    if (Self.cdsHashCodes.FieldByName('HASH_VALUE').Value<>HashCode_Value) then
    begin
      if Not (cdsHashCodes.State in [dsInsert,dsEdit]) then
        cdsHashCodes.Edit;
      Self.cdsHashCodes.FieldByName('HASH_VALUE').Value:=HashCode_Value;
      cdsHashCodes.Post;
      Result:=True;
    end;
  end
  else
  begin
    Self.cdsHashCodes.AppendRecord([AHashCode_ID, HashCode_Value]);
    Result:=True;
  end;
end;

function TDM.LocalDataNotValid(const aHashCode:Variant):boolean;
var
  oldHashCode,newHashCode:Double;
begin
  Result:=True;
  oldHashCode:=DM.GetHashCode(aHashCode);
  newHashCode:=DM.GetHashCodeValue(aHashCode);
  if (oldHashCode=newHashCode) then
    Result:=False;
end;

///////////////////////
procedure TdmEMP.cdsEmpLocationLookUpBeforeOpen(DataSet: TDataSet);
var
  oldHashCode,newHashCode:Double;
begin
  if gUseLocalCache then
  begin
    oldHashCode:=DM.GetHashCode(1);
    newHashCode:=DM.GetHashCodeValue(1);
    if (newHashCode > 0) and (oldHashCode=newHashCode) then
    begin
      if FileExists(IncludeTrailingBackslash(gLocalDataPath)+'LocationLookUp.bin') then
        cdsEmpLocationLookUp.FileName:=
          IncludeTrailingBackslash(gLocalDataPath)+'LocationLookUp.bin';
    end
    else
    begin
      cdsEmpLocationLookUp.FileName:='';
      DM.SetHashCode(1,newHashCode);
    end;
  end;
end;

procedure TdmEMP.cdsEmpLocationLookUpBeforeClose(DataSet: TDataSet);
begin
  if gUseLocalCache then
    cdsEmpLocationLookUp.FileName:=IncludeTrailingBackslash(gLocalDataPath)+'LocationLookUp.bin'
  else
    cdsEmpLocationLookUp.FileName:='';
end;

// RDM - Emulating remote data module in the same application
// I have some different RDMs with different DataSets to different servers
////////////////////////////////////
// Create new hash code
// AHASH_ID - hash identifier
function TRDM.GetNewHashCode(const AHASH_ID:Variant):Variant;
var
  sqlStr:String;
  newCode:Double;
begin
  Result:=GetHashCodeValue(AHASH_ID);
  newCode:=0;
  if Result>=0 then
    newCode:=Result+1;
  sqlStr:='update '+DB_OWNER+'HASH_CODES'
    +' set HASH_VALUE='+FloatToStr(newCode)
    +' where HASH_ID='+String(AHASH_ID);
  Result:=Self.ExecuteSqlCommand(sqlStr);
end;

function TRDM.GetHashCodeValue(const AHASH_ID:Variant):Variant;
var
  theDataQry:TBS_ADODataSet;
  sqlStr:String;
begin
  Result:=-1;
  theDataQry:=TBS_ADODataSet.Create(Self);
  try
    theDataQry.Connection:=DbConnection;
    sqlStr:='select HASH_VALUE from '+DB_OWNER+'HASH_CODES'
      +' where HASH_ID='+String(AHASH_ID);
    theDataQry.CommandText:=sqlStr;
    theDataQry.Open;
    if Not theDataQry.FieldByName('HASH_VALUE').IsNull then
      Result:=theDataQry.FieldByName('HASH_VALUE').Value
    else
    begin
      sqlStr:='insert into '+DB_OWNER+'HASH_CODES'
        +' (HASH_ID,HASH_VALUE) values ('
        +String(AHASH_ID)+',1)';
      Self.ExecuteSqlCommand(sqlStr);
      Result:=0;
    end;
    theDataQry.Close;
  finally
    theDataQry.Free;
  end;
end;
ASKER
Siarhej,
The data in the article table doesn't change very much, so a local table that is updated once in a while should work too.
But I don't get your idea with the TClientDataset as lookup. Can you explain it more in detail? Perhaps a brief example with some code?
Thanks,
JDN
ASKER
Thanks Siarhej,
(Sorry, it seems I typed my last question while you were posting your answer.)
I will take a thorough look at your example.
There might be another solution for the problem. I found out that when I make the lookup query (it's called q_article) inactive in the BeforeInsert event of the dbgrid query, the new record is added immediately, without delay. With q_article active there is a delay of 2 seconds before a new row is inserted.
The problem here is that I cannot make q_article active again in the AfterInsert event. The lookups for new rows no longer work after q_article is made inactive in BeforeInsert.
I tried setting the field's LookupDataset property to nil in BeforeInsert, but that's not allowed on an open dataset.
So if there were a way to code, in the BeforeInsert event of the dbgrid query, something like "don't look up anything now", and in AfterInsert "now you may continue the lookups" (without a delay, of course), then the problem would also be solved.
Any ideas about this approach are welcome too.
Thanks,
JDN
I really suggest separating your insert, update, and retrieval queries.
When I was still new to large databases I often tried to use the construct you are using, but I found that it was too cumbersome and didn't give me the degree of control I needed to tune the app. For comparison purposes, I deal with databases in terabytes now, and I used to be concerned about a few megabytes.
Once you separate the queries, the question of a join becomes a non-issue. You can really tune the performance of your system and gain a greater degree of control over the commit/error response systems. You can also offload the expensive work to the high powered database server.
I recently tuned an Access process that was designed around cursoring, the same way you are trying to work, and reduced its time to completion from over 30 minutes to under 10 seconds.
Generally, when dealing with datasets, I find that cursoring is bad. A lookup field is implicitly a cursoring operation.
I'm bored waiting for a batch job to run on the mainframe, and I can't go home until it's run and been checked, so I did a bit of math for you.
Assume 100 rows of data in your master dataset. The number of rows in your lookup dataset are relatively unimportant, we'll assume instantaneous access inside the database.
Overhead for each database transaction is a flat 50 milliseconds per transaction. This covers establishing the conversation, transmitting the query, retrieving the result set, and closing the conversation. Our actual measurements are quite a bit higher, but there are better ways to do things so I'll be generous.
-------
Under the design paradigm you are using, the lookup represents a requirement for the VCL to cursor through the master dataset and fire the lookup query once for every row in the master dataset.
Time to retrieve 100 records for initial dataset = 1 conversation * 50 milliseconds = 50 milliseconds
Time to retrieve lookup records = 100 conversations * 50 milliseconds = 5,000 milliseconds
Total time in conversations = 5,050 milliseconds ... excluding time executing queries and application logic.
-----
In contrast, a joined query requires one conversation, which is a flat 50 milliseconds.
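To make the single-conversation alternative concrete, the joined retrieval could be issued like this from Delphi (the table and column names ORDER_LINES, ARTICLES, ART_CODE, ART_DESC are assumptions for illustration, not taken from the question):

```pascal
// One conversation returns the order lines together with their
// article descriptions; no per-row lookup is needed afterwards.
qryOrders.SQL.Text :=
  'select o.ORDER_ID, o.ART_CODE, a.ART_DESC, o.QUANTITY, o.PRICE ' +
  'from ORDER_LINES o ' +
  'join ARTICLES a on a.ART_CODE = o.ART_CODE ' +
  'where o.ORDER_ID = :OrderId';
qryOrders.ParamByName('OrderId').AsInteger := CurrentOrderId;
qryOrders.Open;  // 100 rows arrive in a single round trip
```

The trade-off, as discussed below, is that a joined TQuery is read-only under RequestLive, so posting edits needs a separate mechanism.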
Nice trick, jdn.
Try the following: in your BeforeInsert event add
DataSet.DisableControls;
and in your AfterInsert event add
DataSet.EnableControls;
I've heard that this way attached datasets (lookup datasets, child datasets) are also disabled until you call EnableControls, but I've never verified it myself.
meikl ;-)
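In event-handler form, that suggestion would read roughly as follows (q_order stands in for the dbgrid's dataset; whether the lookup dataset is actually suspended too is, as meikl says, unverified):

```pascal
procedure TOrderForm.q_orderBeforeInsert(DataSet: TDataSet);
begin
  DataSet.DisableControls;  // suspend data-aware repainting while inserting
end;

procedure TOrderForm.q_orderAfterInsert(DataSet: TDataSet);
begin
  DataSet.EnableControls;   // resume screen updates once the record exists
end;
```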
Note that whether you use a query or a stored procedure the overheads are the same.
hi swift,
see part 1 of my second comment
meikl ;-)
meikl: Good input.
swift, yours too,
especially your calculation is of interest ;-)
ASKER
Swift,
I agree with you that joins are much faster, and I'm using joins whenever possible.
In this case however, joins cannot be used I'm afraid.
I will explain the situation in more detail:
The user must enter an order in the dbgrid. When he enters the article code in the first column, the article description should appear automatically (read-only) in the second column. In the other columns he can enter price, quantity, discount, etc.
The article description that appears after typing in the article code is important for the user. While the user enters an order he must be able to scroll back and forward through the dbgrid, and see all order lines with the corresponding article descriptions.
I believe this kind of entry can only be achieved with a lookup field. With a join, the description won't appear automatically when the user enters the article code. Besides that, if I use a join the dataset becomes read-only (RequestLive has no effect). So I cannot see how to achieve this style of data entry with joins.
Meikl,
EnableControls and DisableControls don't seem to have any effect on the lookup table.
However, I found out that you CAN change the Active property of the lookup table. In my previous posting I said that I could make the lookup query inactive but not active again; well, I guess I did something wrong the first time, because now it works. In the BeforeInsert of the dbgrid dataset I've now placed q_article.Active := False, and in the AfterInsert there's q_article.Active := True.
This seems to work; adding a new row to a large order is a lot faster now because there's no lookup.
Although input is a lot faster now, I'm not quite satisfied with this workaround; I feel it's not a professional solution. Besides that, loading an existing order into the dbgrid still takes a long time, because the description field must be looked up for every record. I agree, Swift, that for this purpose the join would be perfect, but then I cannot add new rows because the dataset is read-only.
Thanks for all comments, more ideas would be welcome.
JDN
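For reference, the workaround described above can be written as two event handlers (q_order is a placeholder for the dbgrid's dataset; q_article is the lookup query named in the thread):

```pascal
procedure TOrderForm.q_orderBeforeInsert(DataSet: TDataSet);
begin
  q_article.Active := False;  // suppress the lookup while the row is inserted
end;

procedure TOrderForm.q_orderAfterInsert(DataSet: TDataSet);
begin
  q_article.Active := True;   // re-enable lookups so descriptions display again
end;
```

Reopening q_article still re-executes the server query each time, which is why this remains a workaround rather than a fix.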
Option 1: Use a TClientDataset for your editing dataset, using the IProvider mechanism to connect to your retrieval query. You will then need to also create your insert, update, and delete queries for the provider to use. Your query can be dead, but your dataset can be live. I used this in one web based solution I built a couple of years back. In my experience this also speeds things up considerably because the MIDAS system has its own caching. Your query doesn't need to be live for you to achieve the effects you are looking for.
Option 2: Use a TClientDataset in briefcase-model operation, where you manually populate and extract data for communication with your server. This gives you maximum control, and makes it possible to manage and perform quite complex operations across very slow lines. It also permits you to put the SQL communication in its own thread so the end user sees no delay, even when large amounts of data are transferred across slow dialup connections.
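A minimal wiring for option 1 might look like this sketch (component names are illustrative; the provider component and the exact ApplyUpdates semantics are as in the Delphi 5 MIDAS framework, but verify against your version):

```pascal
// Design-time setup, expressed in code for clarity:
//   qryOrders  : TQuery           - dead SELECT, join included, RequestLive=False
//   provOrders : TDataSetProvider - DataSet = qryOrders
//   cdsOrders  : TClientDataSet   - ProviderName = 'provOrders'
// The grid is wired to cdsOrders, which IS editable even though the
// underlying query is not.
procedure TDM.OpenOrders;
begin
  cdsOrders.Open;               // fetches the joined rows through the provider
end;

procedure TDM.SaveOrders;
begin
  if cdsOrders.ChangeCount > 0 then
    cdsOrders.ApplyUpdates(0);  // 0 = stop at the first errored record
end;
```

The provider resolves the posted changes back to the base table, so the read-only join and the live editing are no longer in conflict.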
Or use cached updates and a TUpdateSQL as the UpdateObject.
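Sketched out, meikl's alternative attaches a TUpdateSQL to the joined query so the read-only result set can still post changes through cached updates. All names and the SQL below are assumptions for illustration:

```pascal
// Design-time: q_order.CachedUpdates := True;
//              q_order.UpdateObject  := upd_order;  // a TUpdateSQL
// upd_order.ModifySQL updates only the base table, never the joined one, e.g.:
//   update ORDER_LINES
//   set ART_CODE = :ART_CODE, QUANTITY = :QUANTITY, PRICE = :PRICE
//   where ORDER_ID = :OLD_ORDER_ID and LINE_NO = :OLD_LINE_NO
// (InsertSQL and DeleteSQL are defined analogously.)
procedure TOrderForm.SaveOrder;
begin
  q_order.ApplyUpdates;   // posts cached changes through upd_order's statements
  q_order.CommitUpdates;  // clears the cache after a successful apply
end;
```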
I'm sure that works most of the time.
A year or so ago I had some hard-to-figure problems that I eventually tracked deep into the TUpdateSQL logic of the VCL. So I started keeping my logic fully exposed so I could see and debug it. It wasn't that much more work, and I was able to guarantee that problems would surface early and be trackable.
Could you reasonably use only the Code field in the lookup and truncate part of the Article description, to reduce the volume of data, and show the complete description only when an additional button is pressed?
For example: SUBSTR(Article, 20) + '..' AS C_Article
To swift99.
If the DataSet linked to a TDatasetProvider component implements the GetKeyFields function, it is not necessary to create insert, update, and delete queries for the provider to make it updatable. You only have to set UpdateMode to upWhereKeyOnly, and IProvider will generate all of them. On some servers (I use Oracle with a table owner name), you must also add a GetTableName handler to the TDatasetProvider component.
Such as:
procedure TDM.dspCustomerGetTableNam e(Sender: TObject; DataSet: TDataSet;
var TableName: String);
begin
TableName:=DB_OWNER+'CUSTO MER';
end;
An example implementation of the GetKeyFields method on a TADODataSet:
/////////////////////////////////////////////////////////////
unit BS_ADODataSet;

interface

uses
  Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs,
  Db, ADODB;

type
  TBS_ADODataSet = class(TADODataSet)
  private
    FUniqueFields: string;
  protected
    function GetKeyFields: string;
    function PSGetKeyFields: string; override;
  public
    //
  published
    property UniqueFields: string read FUniqueFields write FUniqueFields;
  end;

implementation

function TBS_ADODataSet.GetKeyFields: string;
begin
  Result := Trim(FUniqueFields);
end;

function TBS_ADODataSet.PSGetKeyFields: string;
begin
  // First try to obtain the unique fields through the ProviderFlags of the Fields
  Result := inherited PSGetKeyFields;
  // If these are not defined, use the UniqueFields or constraints of the DataSet
  if Result = '' then
    if Trim(FUniqueFields) <> '' then
      Result := GetKeyFields;
end;

end.
/////////////////////////////////////////////////////////////
Sorry:
SUBSTR(Article, 20) || '..' AS C_Article
well, back to the memorydataset mentioned earlier on this thread
a sample, using the TRxMemoryData component from the RX Library, as it is not much code:
procedure TForm1.FormCreate(Sender: TObject);
begin
  query1.Open; // open your complex query
  rxmemorydata1.LoadFromDataSet(query1, 0, lmCopy); // copy all records to memory
  query1.Close; // never needed again
  rxmemorydata1.Open; // open the memory dataset
end;
use the memory dataset now as the lookup source
easy, or?
meikl ;-)
ASKER
Meikl,
Yes, it's easy, but it doesn't work that way.
When I use your lines of code, the query is perfectly copied to the memory dataset; I checked the contents and this seems all right.
However, when I change the LookupDataSet property of the lookup fields of my dbgrid dataset from the original dataset to the memory dataset, there aren't any lookups at all.
I.e.: with the LookupDataSet property set to q_article (the original complex query), I can type a code in the first column of the dbgrid and, as a result, the second column is automatically filled with the description.
Now I added your code (and of course a TMemoryData dropped on my form), and in the Field Editor I changed the LookupDataSet of the second column to the MemoryData dataset. The MemoryData dataset also contains all information of q_article now (I checked by assigning it to a temporary dbgrid).
When I enter an article code now in the first dbgrid column, there's no description found and placed in the second column. Instead a "0" (zero) is placed in this column.
Did I forget something?
Thanks,
JDN
? hmmm .... checking
ASKER
Meikl,
Just tried your solution in another form with a different lookup: invoice number as the key field, customer name as the result field. When the user types an invoice number in the first column of a dbgrid, the second column automatically shows the customer name.
(by the way, with this lookup there's no problem because the result is not that large)
When I change the original query here by the MemoryData dataset, the result for all invoice numbers is also "0" (zero) like in the article lookups. The MemoryData dataset however is filled with the right lookup data.
Perhaps a lookup dataset can't be a MemoryData dataset, unless I forgot something??
JDN
maybe the keyfield is not properly set
as far as i know, if u change the lookup source,
then the lookup result field and lookup field are cleared
and must be re-entered (but i'm not sure about this)
can check myself this evening (no delphi on hand yet)
meikl ;-)
ASKER
Meikl,
When I change the value of the LookupDataset property in the Fields Editor, the LookupKeyfields and LookupResultfield don't change in this editor.
Also, when I check these properties at runtime, the values are correct. The LookupDataset points to the new MemoryData dataset, and LookupKeyfields and LookupResultfield point to the correct fields in this dataset (same names as with the original q_article dataset).
Strange thing...
JDN
You can make an updatable DataSet with a join query, using a TClientDataSet, or maybe a TQuery with a TUpdateSQL whose ModifySQL property has been typed in by hand. I'm using this trick and it works.
If we have two tables:
ArticleLst
ArticleLst_id : number
SomeData : varchar
Article_id : number
and
Article
Article_id : number
Article_Code: varchar
ArticleData : varchar
You can use the following SQL in the ArticleLst dataset:
Select
AL.ArticleLst_id, /* Primary key */
AL.SomeData,
AL.Article_id,
A.ArticleData as Ro_ArticleData
From
ArticleLst AL,
Article A
Where
AL.Article_id = A.Article_id
After fetching the fields in the ArticleLst dataset, set the Ro_ArticleData field's ReadOnly property to True. Now you can create an additional lookup field (for example L_Article_Code) from another dataset which contains the codes, and maybe additionally truncated Article data, like:
Select
Article_id,
Article_Code || SUBSTR(ArticleData, 20) || '..' AS C_Article
From
Article
Now you can use the updatable ArticleLst dataset with the read-only Ro_ArticleData field and the lookup field for article codes. But the Article data will be shown only after posting updates.
If you use TClientDataSet then you must create some methods like:
procedure TDM.cdsArticleApplyUpdates;
begin
  if cdsArticle.ChangeCount > 0 then
  begin
    cdsArticle.ApplyUpdates(-1);
    Self.CommitUpdates;
  end;
end;

procedure TDM.cdsArticleAfterPost(DataSet: TDataSet);
begin
  cdsArticleApplyUpdates;
  cdsArticle.RefreshRecord;
end;

procedure TDM.cdsArticleAfterDelete(DataSet: TDataSet);
begin
  cdsArticleApplyUpdates;
end;

procedure TDM.cdsArticleBeforeClose(DataSet: TDataSet);
begin
  if (cdsArticle.State in [dsInsert, dsEdit]) and (MessageDlg('Data Changed. Save?',
    mtConfirmation, [mbYes, mbNo], 0) = mrYes) then
    cdsArticle.Post;
end;

// I use a TADO connection
function TDM.CommitUpdates: Boolean;
begin
  Result := DMR.CommitUpdates; // CommitUpdates in the remote data module
end;

// Commit updates in the remote data module to be sure the data is saved
function TDMR.CommitUpdates: Boolean;
begin
  Result := False;
  if DbConnection.InTransaction then
    try
      DbConnection.CommitTrans;
      Result := True;
    except
      Result := False;
    end;
end;
But as for me, it is not a very good idea to upload so much data from the server.
I would prefer to show the Article data from an additional query on a button press, or to create a thread which queries this data from the AfterScroll event. While scrolling, put your current Article_id in some global Variant variable, and have the thread check its value; if it is not null, then:
1: store this value in an internal thread variable
2: reopen the Article data query, keyed by the stored value
3: after fetching the data, compare the global value with the stored one
4: if it changed, go back to step 2; otherwise set it to null
This technique is hard to understand, but it works very fast, since only the last requested record is fetched. Or, if you need to fetch all records, organize something like a stack in a TStringList and store the key values on it.
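The four steps above might be sketched roughly like this; all names here (TArticleFetchThread, GCurrentArticleId, FQuery) are hypothetical, and note that with the BDE a TQuery used inside a thread needs its own TSession:

```pascal
type
  TArticleFetchThread = class(TThread)
  private
    FQuery: TQuery; // detail query with an :Article_id parameter
  protected
    procedure Execute; override;
  end;

var
  GCurrentArticleId: Variant; // written by the grid's AfterScroll handler

procedure TArticleFetchThread.Execute;
var
  FetchId: Variant;
begin
  while not Terminated do
    if not VarIsNull(GCurrentArticleId) then
    begin
      FetchId := GCurrentArticleId;            // step 1: remember the key
      FQuery.Close;                            // step 2: reopen with that key
      FQuery.ParamByName('Article_id').Value := FetchId;
      FQuery.Open;
      if GCurrentArticleId = FetchId then      // step 3: did the user scroll on?
        GCurrentArticleId := Null              // step 4: no, request satisfied
      // else: loop again and fetch the newer key
    end
    else
      Sleep(50); // idle until the grid requests a new article
end;
```

If the user scrolls faster than the query runs, intermediate keys are simply skipped, which is why only the last requested record is ever fetched.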
ASKER CERTIFIED SOLUTION
ASKER
Meikl,
Glad to hear the failing rxMemoryDataset wasn't my fault.
About loading the fields into a TStringList and using calculated fields: can you explain a bit more about that? I've always used lookup fields; I'm not very familiar with calculated fields (sorry).
Thanks,
JDN
ASKER
Meikl,
Forget my last question, I figured it out myself.
I've read the result set into a TStringList and use the OnCalcFields event of the dbgrid's dataset to fill the calculated fields (the former lookup fields).
There's a little more memory needed now by my application, but the lookups are very fast. Users don't have to wait anymore while adding a new record to a large order.
So, to play the game fair, I will give you the points for your last TStringList hint.
Thanks,
JDN
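The approach JDN describes might look roughly like this; the query, field, and variable names (q_article, FArticles, ArticleDesc) are assumptions for illustration:

```pascal
// Load the lookup pairs once into a TStringList: each entry is
// 'code=description', so Values[] can retrieve a description by code.
procedure TForm1.LoadArticleList;
begin
  FArticles := TStringList.Create;
  q_article.Open; // the complex lookup query, run only once
  while not q_article.Eof do
  begin
    FArticles.Add(q_article.FieldByName('Code').AsString + '=' +
      q_article.FieldByName('Description').AsString);
    q_article.Next;
  end;
  q_article.Close; // not needed anymore
end;

// OnCalcFields of the grid's dataset: fill the calculated description
// field from the in-memory list, with no round trip to the server.
procedure TForm1.OrderQueryCalcFields(DataSet: TDataSet);
begin
  DataSet.FieldByName('ArticleDesc').AsString :=
    FArticles.Values[DataSet.FieldByName('Code').AsString];
end;
```

The whole list is built once at startup, so posting a record only costs an in-memory string lookup instead of a lookup-dataset search against the server.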
let your query select the records from this view
benefit:
the complex query for the view is already
known by the server, and no additional SQL parsing
must be done
hint:
update the statistics on the server
(if the sql-server supports this)
meikl ;-)