Imbeginner

asked on

Master-detail Databases

Hello,
I have a master-detail database. The database is Access 2000, and I used these components to build it:
ADOConnection, ADOTable, DataSource, DBGrid, DBNavigator.
I want to delete a record in the master table and have the delete cascade to the detail table, but if any detail records exist, the operation is not executable.
But I want to do it! Is there any technique that would let me do this?
I can't use any other DBGrid (such as ExpressQuantumGrid) because of its price.
Thanks in advance

M.reza
esoftbg

Hi,
Before deleting a master record you need to delete all detail records attached to it.
Imbeginner

ASKER

That is what I said in my question; I want a solution for it.
If the master database is

invoice.db in Table1, and the detail database is

detail.db in Table2,

and the link is on one field, for example

InvoiceNum,

then in the BeforeDelete event on Table1:

  if not DetailInvoiceNum.IsNull then
    raise EGreska.Create('YOU FIRST MUST DELETE THE RECORDS FROM DETAIL...');


Of course, you must first declare the exception:

type
  EGreska = class(Exception);



Another approach (without an exception):

in the OnDelete event you do something like:

  if not DetailInvoiceNum.IsNull then
  begin
    Table1.Cancel;
    ShowMessage('You cannot delete this record; you must first delete...');
  end;


thanks a lot for points :))




 

I am not sure I understand your question.

Are you ...

1. trying to set up the DBMS' referential integrity to cascade the delete for you?

2. trying to cascade the delete under your software control?

3. trying to prevent the delete from cascading after you have set it up to do so in the DBMS?

I'm not sure I understand where the DBGrids etcetera figure into your question, since the visual control manipulations are a different issue than referential integrity.

If #1, then you set up the RI when you configure your Access database using the graphical "relationships" tool.  The DBMS then handles everything for you in the most efficient manner.

----

If #2, then you would typically remove the RI that you initially created, and then in the "BeforeDelete" event of the master table you would fire a query containing the SQL "Delete from detailTable where keyField = :Masterkey".  Don't forget to set the Masterkey parameter, and be aware that Access is quirky at best.  

If you have problems then set the delete query up at runtime with deleteDetailQuery.SQL.Text := Format('Delete from DetailTable where keyField = %s', [MasterTable.FieldByName('KeyField').AsString]);
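As a sketch of approach #2, assuming the master table's BeforeDelete event and a TADOQuery named deleteDetailQuery (both names are illustrative, not from the original):

```delphi
// BeforeDelete on the master table: cascade the delete under program control.
// deleteDetailQuery is assumed to be a TADOQuery sharing the same connection.
procedure TForm1.MasterTableBeforeDelete(DataSet: TDataSet);
begin
  deleteDetailQuery.SQL.Text :=
    'Delete from DetailTable where keyField = :Masterkey';
  deleteDetailQuery.Parameters.ParamByName('Masterkey').Value :=
    DataSet.FieldByName('KeyField').Value;
  deleteDetailQuery.ExecSQL;  // Delete returns no result set
end;
```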

----

If #3, then in the BeforeDelete event you fire a query containing the SQL "Select count (*) from DetailTable where keyField = :Masterkey".  The annotation in #2 about handling Access' quirkiness applies.

The SQL based design is based upon a shared database design.  If the database is not shared, then you may simply check the detail recordset's recordcount.
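For the non-shared case, a minimal sketch of that recordcount check (component names are illustrative):

```delphi
// BeforeDelete on the master table: block the delete if detail rows exist.
// DetailTable is assumed to be linked to the master via MasterSource /
// MasterFields, so it is already filtered to the current master record.
procedure TForm1.MasterTableBeforeDelete(DataSet: TDataSet);
begin
  if DetailTable.RecordCount > 0 then
    raise Exception.Create('You must delete the detail records first.');
end;
```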

----

I generally recommend against deleting records at all except after the completion of a successful archival process.  Instead, I will mark records as inactive, or "invalidate" them.  That way if there is a problem with the delete process I can always recover the "missing" data, or better yet give the user the tools to fix any problems.

Likewise, I prefer not to "update" records for any purpose other than to invalidate them.  Instead, I will invalidate the old record and insert a new one, giving a 100% reproducible audit trail all of the time, and giving me the ability to cleanly back out any number of transactions in case of problems.

It's amazing what three years of carrying a pager does to your design strategies!  :o)
Hi,
If you successfully delete the master record without deleting its detail records, your detail table will contain records without a master record. Who needs those records?
Swift99,
Great answer, thanks a lot.
#2 is what I need.
I used these instructions, but this error was raised:
'Data type mismatch in criteria expression'
My instructions:
with ADOQuery1 do
begin
  Connection := ADOConnection1;
  with SQL do
  begin
    Clear;
    Add(Format('Delete * from student where AID=%s ', [ADOTable1.FieldByName('AID').AsString]));
    Open;
  end;
end;
Note that my master table (article) has an AID field of char(15), and my detail table (student) also has char(15) for its AID field (the foreign key).

How can I solve this problem?

Thanks
m.reza
You need quotes around the %s.

'Delete * from student where AID=''%s'''

or

'Delete * from student where AID="%s"'

depending on your DBMS - I believe that Access prefers the latter form.

What you are doing is having the Delphi program write and execute an SQL program, so the SQL syntax must be observed in the string you pass to the SQL compiler (ADO in this case).
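A hedged sketch of the corrected call, using the asker's component names and the double-quote form (with ExecSQL, since Delete returns no rows):

```delphi
// Corrected delete: quote the char(15) key so Access compares text to text.
with ADOQuery1 do
begin
  Connection := ADOConnection1;
  SQL.Text := Format('Delete from student where AID="%s"',
    [ADOTable1.FieldByName('AID').AsString]);
  ExecSQL;
end;
```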
Also, rather than using "Open", use "ExecSQL".

Delete does not return values.  The field list (the * in "Delete * from ...") is unique to Access's Delete statement, for use in interactive proofing.  This statement would raise an error with any SQL engine except Access.
I have two questions:
1. What is the difference between SQL.Open and SQL.ExecSQL?
2. I use the master key field in more than one table in my program (the keywords table and the teacher table).
I used this SQL statement, but this error was raised:
'Syntax error in join operation'
The instructions are:
with ADOQuery1 do
begin
  Connection := ADOConnection1;
  with SQL do
  begin
    Clear;
    Add(Format('Delete * from (student,keywords,teacher) where (student.AID="%s") and (teacher.AID="%s") and (keywords.AID="%s")',
      [ADOTable1.FieldByName('AID').AsString, ADOTable1.FieldByName('AID').AsString, ADOTable1.FieldByName('AID').AsString]));
    ExecSQL;
  end;
end;

How can I solve this problem?

Thanks a lot
m.reza
ASKER CERTIFIED SOLUTION
swift99
SQL.Open expects a returned dataset, and will raise exceptions if it does not receive it.

ExecSQL simply passes the SQL to the DBMS and waits for a success/fail code.  If the DBMS tries to send a result set then it will raise an exception.  ExecSQL is used for Insert, Delete, and Update operations, and for Select Into Table statements, plus all DDL statements.  Open is used only for Select operations that return a cursor.

SQL Delete can only apply to one table at a time.  It does not support joins, so each table needs to have its own Delete SQL.  If you think about it a bit then you will see it is an obvious limitation of the design of any stateless DBMS language.

with adoquery1 do
begin
  CONNECTION:=ADOCONNECTION1;
  SQL.clear;
  SQL.add(format('Delete from student where (AID="%s")',[adotable1.FieldByName('AID').asstring]));
  execsql;
end;

with adoquery1 do
begin
  CONNECTION:=ADOCONNECTION1;
  SQL.clear;
  SQL.add(format('Delete from teacher where (AID="%s")',[adotable1.FieldByName('AID').asstring]));
  execsql;
end;

with adoquery1 do
begin
  CONNECTION:=ADOCONNECTION1;
  SQL.clear;
  SQL.add(format('Delete from keywords where (AID="%s")',[adotable1.FieldByName('AID').asstring]));
  execsql;
end;

Also, you will need to handle the exceptions that some DBMS' will throw if there is no record to delete.
This is a good time for you to start researching transaction control and two-phase commits.  Access does support them.
Got too busy for a while - my employer, who is paying for my time and the internet connection, takes priority.

If you use transaction control anywhere then _ALL_ of your queries must use it.

All of your database transactions should follow this format.

try
  ADOConnection1.StartTransaction;

  .... all of your SQL's

  ADOConnection1.Commit;
except
  ADOConnection1.Rollback;
  raise;
end;

Since you are using the events to chain, you would put the StartTransaction at the start of the BeforeXxx events, your commit at the end of the AfterXxx events, and the Rollback in the OnXxxError events.
Thanks for your expert coding; I learned a lot from this question.

But something should be noted:

the right syntax for your final transaction code (my ADO version is 2.7) is:

try
 ADOConnection1.BeginTrans;

 .... all of your SQL's

 ADOConnection1.CommitTrans;
except
 ADOConnection1.RollbackTrans;
 raise;
end;
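Wiring those calls into the event chain swift99 described might look like this sketch (handler names are illustrative; the OnDeleteError signature is the standard TDataSetErrorEvent):

```delphi
// Start the transaction before the cascading deletes begin ...
procedure TForm1.MasterTableBeforeDelete(DataSet: TDataSet);
begin
  ADOConnection1.BeginTrans;
  // ... fire the detail Delete queries here ...
end;

// ... commit once everything has succeeded ...
procedure TForm1.MasterTableAfterDelete(DataSet: TDataSet);
begin
  ADOConnection1.CommitTrans;
end;

// ... and roll back if anything failed.
procedure TForm1.MasterTableDeleteError(DataSet: TDataSet;
  E: EDatabaseError; var Action: TDataAction);
begin
  ADOConnection1.RollbackTrans;
  Action := daAbort;
end;
```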
What is the benefit of transactions in this case?

thanks a lot
m.reza
Hi again,
My master table is the article table, and I have some tables (student, teacher, keyword) that depend on it.
When I add a record to the article table, the other tables should be filled with the necessary information, so I want a check that tells me whether the necessary information exists in the other tables.
I think I should add an event handler (BeforeInsert or BeforePost) to my article table and write appropriate SQL statements to check for this (the existence of the necessary information in the other tables).
Would you prefer BeforeInsert, BeforePost, or something else?

Thanks a lot
m.reza

>> the right syntax for your final coding about transaction is ...

Good work ... you're catching on.  I coded the transactioning off the top of my head without reference to any manuals (I don't use the ADO components), and you correctly identified and corrected it without prompting.

-------
>> what is the benefit of transaction in this question?

Transaction control gives you atomicity of everything within the transaction - everything goes, or nothing goes.  In situations where part of the transaction, say the first two queries, succeed, then the next one fails, aborting the process in mid stream, an uncontrolled process will have data out of sync.  A transaction controlled process will roll back to its state before any queries were run, so you won't have invalid data.

----------------
>> I think I should add a event handler such as ...

I typically would use a very different construction entirely - I have spent way too much time rescuing badly formed cascading dataset events, and solving problems by remote control that were caused by hidden event loops.

I do not typically address tables via TXxxTable components because, in the databases I hit, both the database itself and the concurrent user base are way too large to allow such a design. (8TB and 8000 concurrent users)

Instead, all of my transactions are done with queries.  I fetch the data into either business objects or client datasets in briefcase mode, work with the data via the GUI or internal processes, and then submit the changes to the database with queries wrapped by process methods in the "Model" layer of the MVC pattern.

I use the client dataset events mainly for edit checks.

I strongly recommend "Design Patterns" by Gamma, Helm, Johnson, and Vlissides.  I also recommend "Bitter Java", in which you can substitute the word "Delphi" every place you see the word "Java" and it is still just as true.
Yes, I see; for a large number of users you need high performance strategic solutions,
but my program ...
My program has 4 tables and one user. Please consider my last comment again (assume a single user).
One other question:
How many concurrent users can connect to an Access database (without high performance strategic solutions)?

I will read those books as soon as I can.
BeforeInsert and BeforePost do two different jobs.

I would use BeforeInsert for editing processes, and BeforePost for transactional processes (aka queries).
Access is not recommended for more than one user, but it can support concurrency for at least a dozen users.  M$oft doesn't recommend heavy concurrency in Access.

Access can be a high performance strategic solution to some problems - I personally don't like the product, but there are places where it provides a very elegant and ... get this ... high performance solution.
Please describe this comment with more clarity:

<< I would use BeforeInsert for editing processes, and BeforePost for transactional processes (aka queries). >>

What does "aka" mean?



best regards
m.reza
aka = "also known as" - in other words.
BeforeInsert is used for performing edit checks when you press the "Insert" key or some other GUI related operation.

BeforePost is used to perform last-second edit checks and initiate cascading events between 1. (the time you press the "Save" or "Post" button, or perform any other operation that starts the commit of your data to the database), and 2. (the time that your record is actually saved to the database).
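As a hedged illustration of that split (the handler names, field names, and the UserMayAddArticles check are placeholders, not from the original):

```delphi
// BeforeInsert: GUI-time checks, before the new record even exists.
procedure TForm1.ArticleTableBeforeInsert(DataSet: TDataSet);
begin
  if not UserMayAddArticles then  // placeholder permission check
    Abort;                        // silently cancels the Insert
end;

// BeforePost: last-second edit checks as the record is committed.
procedure TForm1.ArticleTableBeforePost(DataSet: TDataSet);
begin
  if DataSet.FieldByName('AID').IsNull then
    raise Exception.Create('AID is required before saving.');
end;
```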
Thanks for all of your assistance.
Please tell me: does anything remain on this issue, and what can I do to increase the performance of my programming?


best regards
m.reza
You're welcome!

To improve performance, there are a few general guidelines.  Without seeing your database and code I cannot give any specific suggestions.

The database I/O is typically where your biggest bottleneck is.

1. The use of good indexes can improve performance by HUGE factors - in one recent project I reviewed, using Access back end, the addition of one compound index to the database improved performance by over 1000 times.  With Access you can create compound indexes, but you  have to do it in code because the visual designer won't represent them.

Let's take a relatively small hypothetical table with 100,000 records that are each 1,024 bytes long.  The table size will be 102,400,000 bytes.  Without using an index, the maximum search time would be 12,500 I/Os, or 125,000 milliseconds at 10 ms per I/O.  The average search time would be 50% of that.

Let's add an index that we can search against, a 32 bit integer for a primary key.  Each index record will be 4 bytes for the value, plus 1 byte for some flags, plus 4 bytes for the page ID, plus 2 bytes for the offset into the page for the referenced record.  Depending on the flags, the referenced record may be in the table space or in the index space.  

The index record will be 11 bytes, so the index itself will be 1,100,000 bytes.  A sequential search of the index for the primary key would be 10 times as fast as a sequential search of the table, 1,250 I/O or 12,500 milliseconds.  But it gets better.

The database is organized into pages, typically 8192 bytes to a page.  The index is organized as a B+ tree within that page structure, and each page holds roughly (pagesize/index recordsize) = (8192/11) = 744 index entries.  

The average count of pages that will need to be read in this case is roughly 2 against the index (744^2 is 553,536, which is greater than the number of records in the table), and 1 against the table, for a total of 3 I/Os.  This gives an average access time of 30 ms on a 10 ms hard drive, with a maximum access time of 30 ms, since the two-level index capacity is so much larger than the actual size of the table.  This represents a performance improvement of 4,167 times.

2. Badly structured queries will degrade performance by a large factor.  Watch for cartesian joins (bad).  Try to use fields in your query's "where" clause that are indexed.

3. COM only supports up to 5000 transactions per second.  When using ADO, you are limited by the performance of COM, so treat it just like any other slow communications device.  Keep operations on one side or the other, and use bulk operations where possible.

4. Cache data locally to your app rather than "crossing the wire" wherever possible.  The use of ClientDatasets can significantly improve performance and allow you to continue using VCL dataset event programming.
About 1:

How can I calculate the index record size in Access and Oracle?
I don't know how Access (or Oracle) calculates the index size.
Is the statement below YOUR OPINION, or is it what ACCESS says about CALCULATING THE SIZE OF INDEXES?

<< Let's add an index that we can search against, a 32 bit integer for a primary key.  Each index record will be 4 bytes for the value, plus 1 byte for some flags, plus 4 bytes for the page ID, plus 2 bytes for the offset into the page for the referenced record.  Depending on the flags, the referenced record may be in the table space or in the index space. >>

Is there any article that describes it?

Second, which form of searching would be selected for particular databases?

sequential, linear hashing, B+ tree, PATRICIA, or ...?

Thanks a lot.

m.reza
1. This is basic principles - it will vary somewhat (+/- 5%) because different index implementations will have different overheads, but it describes the technology in use by all major DBMS' in use today.  I got most of this from a class titled "DB2 Internals", and I measured the DB2 expectations against Access so I know from personal experience that it holds true there as well.  These computations will hold true for Oracle and any other B+ tree implementation.  B+ trees are used in all current commercial and most open source relational databases.

2. What do you mean by "special databases"?  When you put in a screw you use the correct screw driver.  There are places for each of these technologies, and places where they are not appropriate.

Hashing is useful if you have a fixed amount of data.  I use it primarily for memory based structures up to 4,000,000 objects.  It would be useful for small databases and would offer fast access to data known to be there, but it does not give you sorting or the capacity to easily determine if something is NOT there.  A hash table is generally considered full at 65% of its physical capacity, so you need to be in a position where you can afford to "waste" 1/3 of the available space to gain 20 ms on every record retrieved.

Sequential indexes offer only modest performance gains, and are still typically used in "object bases".  Note that the only two vendors of commercial "object bases" filed for bankruptcy.  Their products failed to penetrate the market because of poor performance.

B+ tree is a good general purpose mechanism that offers serious performance benefits, that is proven over time, that is readily expandable to the physical limits of the hardware, and that has mature management technologies.  It would be the indexing scheme of choice for most applications.

I am not familiar with the "patricia" scheme.
About
<< Second, which form of searching would be selected for particular databases? >>

I mean, for example: does Access have a default search method, or does it dynamically change its search method when, for example, the number of database records increases?

About the PATRICIA trie:

when the size of a key increases, the number of comparisons needed to match a search key also increases (as in a B+ tree); with a PATRICIA trie the search for a key is reduced to one comparison.

The structure is built in three stages:
1. make a binary trie
2. compact the trie
3. make the PATRICIA trie from the compact trie

For more information, refer to 'Fundamentals of Data Structures in Pascal' by Ellis Horowitz and Sartaj Sahni.







I'll check it out.  Thanks.