How to synchronize table data records

I've got 10 notebooks from the sales force with 10 identical stand-alone installations (no network support). My problem is: there are 10 Paradox tables holding all the company's client addresses, but each of them contains some different data. E.g. Table1 contains the phone number of client Smith and Table4 contains Smith's first name. The customer number is identical. What can I do to synchronize all these data records into a single 'complete' table without duplicates?

I would:

1. Create a new empty table containing all fields it should contain including CustNo.

2. Create and run a query like this (extend the field and table lists as indicated):

INSERT INTO "NewBigTable.db"
  (custno, custname, custaddress /* etc.: all fields in the table */)
SELECT A.custno, B.custname, C.custaddress /* etc.: one source for each field in "NewBigTable.db" */
FROM "Table1.db" A, "Table2.db" B, "Table3.db" C /* etc.: all tables */
WHERE (A.custno = B.custno)
  AND (A.custno = C.custno) /* etc.: one condition per additional table */

Table1 must contain all customers that have to be added.

Regards, Geo
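For illustration only, the same join-and-insert pattern can be exercised in SQLite through Python; the table and field names here (t1, t2, big, custno, name, phone) are invented stand-ins for the Paradox tables, not anything from the thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two partial tables sharing a customer number, plus the empty target table
cur.execute("CREATE TABLE t1 (custno INTEGER, phone TEXT)")
cur.execute("CREATE TABLE t2 (custno INTEGER, name TEXT)")
cur.execute("CREATE TABLE big (custno INTEGER, name TEXT, phone TEXT)")

cur.execute("INSERT INTO t1 VALUES (1, '555-0100'), (2, '555-0199')")
cur.execute("INSERT INTO t2 VALUES (1, 'Smith'), (2, 'Jones')")

# Join on custno and fill the big table in one statement
cur.execute("""
    INSERT INTO big (custno, name, phone)
    SELECT a.custno, b.name, a.phone
    FROM t1 a, t2 b
    WHERE a.custno = b.custno
""")

print(cur.execute("SELECT * FROM big ORDER BY custno").fetchall())
# -> [(1, 'Smith', '555-0100'), (2, 'Jones', '555-0199')]
```

Note that, as with the BDE query, a customer missing from any one of the joined tables silently drops out of the result.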
You may run into a new problem: if table A has one record for a customer and table B has two records for the same customer, for example when the customer has two (different) phone numbers.

So think about which tables you can join into "NewBigTable" and use an "INSERT INTO ... SELECT" statement as geobul suggested.
Yep, that's true. My proposal would work only if your tables contain one record per customer and there is a record for every customer. (I hope it works, actually; the SQL statement is very large and complicated and the BDE might refuse to execute it. I don't know.)

If that's not true, then I'd write a program that opens the first table, loops through its records, checks whether that custno already exists in the BigTable and, if not, adds a new record filling in some fields (custno and one field with data, the name for instance). Then the same for all other tables that contain one record per customer: if the custno doesn't exist, add a new record; otherwise fill a field of the existing record with data (address from Table2, phone number from Table3, etc.). Tables that contain more than one record per customer should not be merged in but remain separate.
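In language-neutral terms, the loop described above can be sketched like this (Python, with lists of dictionaries standing in for tables; all names are invented for the example):

```python
def merge_tables(tables):
    """Merge partial per-customer records: later tables only fill in
    fields that are still missing or empty in the big table."""
    big = {}  # custno -> merged record
    for table in tables:
        for rec in table:
            # look up the customer, creating a new record if absent
            row = big.setdefault(rec["custno"], {"custno": rec["custno"]})
            for field, value in rec.items():
                # only supplement fields that are still empty
                if value not in (None, "") and not row.get(field):
                    row[field] = value
    return big

t1 = [{"custno": 1, "name": "Smith", "phone": ""}]
t2 = [{"custno": 1, "name": "", "phone": "555-0100"},
      {"custno": 2, "name": "Jones", "phone": "555-0199"}]

merged = merge_tables([t1, t2])
print(merged[1])  # -> {'custno': 1, 'name': 'Smith', 'phone': '555-0100'}
```

The first non-empty value seen wins, which matches the "supplement empty fields only" behavior asked for; it does not resolve conflicting non-empty values.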

Regards, Geo

You will also need to deal with conflicting information: when table A has one phone number for customer X and table B has a different phone number for customer X, you don't want to lose the good number. So in creating the one big table you are going to have to decide on a hierarchy under which fields from table A are more valid than those from table B, which is more valid than table C, etc. If there is no overlap and no conflict it is very straightforward, and the answers above will give you what you need. In that case a massive join query will be enough to create a master table, which you can then restructure so that the customer number is the key field.

A second way is to use two steps: create the master table with duplicates, if they exist, and then create a second table from the first using a unique select on the customer number field.

None of us knows what is in the ten tables. If it is a normalized relational database, then converting it to a flat database is probably not a good idea anyway. It would make more sense to create a flat view, using a query or form, in that case.

A relational database designed correctly will lead to fewer problems and quicker, more efficient updates than a flat database.
Brainwashed2Author Commented:
hi geo, Ivanov, LSORRELLS,

I've tried your suggestions, but I think I'm too stupid (brainwashed again). My last attempt was the following, but without a result.



FROM "A1.DB"A, "A1.DB"B, "A1.DB"C, "A1.DB"D, "A1.DB"E, "A1.DB"F, "A1.DB"G, "A1.DB"H,  "A1.DB"I, "A1.DB"J, "A1.DB"K, "A1.DB"L,  "A1.DB"M, "A1.DB"N, "A1.DB"O, "A2.DB"A, "A2.DB"B, "A2.DB"C, "A2.DB"D, "A2.DB"E, "A2.DB"F, "A2.DB"G, "A2.DB"H,  "A2.DB"I, "A2.DB"J, "A2.DB"K, "A2.DB"L,  "A2.DB"M, "A2.DB"N, "A2.DB"O, "A3.DB"A, "A3.DB"B, "A3.DB"C, "A3.DB"D, "A3.DB"E, "A3.DB"F, "A3.DB"G, "A3.DB"H,  "A3.DB"I, "A3.DB"J, "A3.DB"K, "A3.DB"L,  "A3.DB"M, "A3.DB"N, "A3.DB"O, "A4.DB"A, "A4.DB"B, "A4.DB"C, "A4.DB"D, "A4.DB"E, "A4.DB"F, "A4.DB"G, "A4.DB"H,  "A4.DB"I, "A4.DB"J, "A4.DB"K, "A4.DB"L,  "A4.DB"M, "A4.DB"N, "A4.DB"O, "A5.DB"A, "A5.DB"B, "A5.DB"C, "A5.DB"D, "A5.DB"E, "A5.DB"F, "A5.DB"G, "A5.DB"H,  "A5.DB"I, "A5.DB"J, "A5.DB"K, "A5.DB"L,  "A5.DB"M, "A5.DB"N, "A5.DB"O


The tables all contain the fields NR, KDM, UID, STNR, KDART, NAME, ADRESSE1, ADRESSE2, ADRESSE3, ADRESSE4, PLZ, ORT, BLZ, KTONR, BANK, with only one record per customer. It's possible that customers have more than one customer number, but that doesn't matter - I can filter that later. Also, I cannot say which table is the most complete, so it's necessary to get one 'AllAdr.DB' with all the completed fields of the others. Up-to-dateness is only secondary. Perhaps there's another way to a solution... I have no problem merging all the DBs into an Access .mdb - do you think there is a better chance that way?
First, that query requires each customer to be present in every table; otherwise it will fail to act on that customer's record.

If all of the tables have the same fields, then you can merge them into one Paradox table using the TTable.BatchMove method, which is more efficient in both time and effort.

And you don't have to deal with the sql query and joins.

I think you would use syntax like this.

Create a TTable object for the big table (BigTable below), then:

var
  TableList: array[1..10] of string;
  SrcTable: TTable;
  K: integer;
begin
  // populate the array with the table names, assuming they are named A1 thru A10
  for K := 1 to 10 do
    TableList[K] := 'A' + IntToStr(K) + '.DB';

  // append all of the data into your big table using the array
  // (BatchMove takes a source dataset, so open each table in a work TTable)
  SrcTable := TTable.Create(nil);
  try
    for K := 1 to 10 do begin
      SrcTable.TableName := TableList[K];
      BigTable.BatchMove(SrcTable, batAppend);
    end;
  finally
    SrcTable.Free;
  end;
end;

Some might do it in a single loop, but this way just shows it more visually.

Once you have the big table you can later add keys and eliminate dupes. You can change "batAppend" in the method above to "batAppendUpdate" and it will append new records and update existing ones with changed data when run (batUpdate alone only updates matching records).

You should attach the code to a button called append.

Brainwashed2Author Commented:
Hi Lonnie,

I've already tried your suggestion in the past; however, the result was not what I'm looking for.
I need a solution that examines identical customer numbers and SUPPLEMENTS the associated empty fields with any existing contents from the other tables' fields - like a
synchronization between a mobile phone and Outlook appointments and contacts.
New customer numbers are to be added with their associated entries. Duplicate customer numbers have to appear one after the other for manual treatment. Each of these
address DBs has nearly 30,000 (!) records, and I must find a solution with as few differences as possible.
Well, once you have the big table, you can use an SQL query that groups by the customer number and applies a MAX summary operator to all the other fields.

You will end up with a single record for each customer number, and all empty fields will have been replaced.
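A minimal sketch of that GROUP BY trick, using SQLite from Python for illustration (table and field names are invented): since an empty string sorts below any non-empty one, MAX picks the filled-in value.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE big (nr INTEGER, name TEXT, phone TEXT)")
# Duplicates of the same customer, each with different fields filled in
cur.execute("INSERT INTO big VALUES (1, 'Smith', ''), (1, '', '555-0100')")

# One record per NR; MAX ignores NULLs and prefers non-empty strings over ''
rows = cur.execute("""
    SELECT nr, MAX(name), MAX(phone)
    FROM big
    GROUP BY nr
""").fetchall()
print(rows)  # -> [(1, 'Smith', '555-0100')]
```

One caveat: if two duplicates carry different non-empty values, MAX silently keeps the alphabetically larger one, so conflicting data still needs the hierarchy or manual review discussed above.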


I'm sorry but your big SQL is wrong. There are two main possibilities:

1. Is it true that Table1 contains all valid phone numbers of your customers and Table4 contains all valid names - i.e. that each table holds all valid data for one (or more) field(s)? If so, please tell us the source table for each of the 15 fields in the big table, like:

KDM is in Table2
NAME is in Table4

and I'll write the SQL.

2. Or, alternatively, one customer may have its BANK (for instance) in Table5 while another one's BANK is in Table6 and NOT in Table5.

In that case I'd make a program which will:
- for each table from Table1 to Table10:
  - open one table (TableX)
  - loop through its records and for every record:
    - check if that NR already exists in the BigTable; if not, add a new record, otherwise position at the existing record
    - for every field in TableX check if its value is not null and not empty (i.e. contains a value) and
      store that value in the corresponding field for that NR in the big table
  - end records loop
- end tables loop

Please decide which one best fits your needs and tell us.

Regards, Geo
Brainwashed2Author Commented:
Hi Geo, hi Lonnie,

First I need one big table - OK - but with batAppend I have a little problem: CustNr (named NR) is an indexed
autoincrement field, so I get correct information only for the first table. (I.e., each table contains 30,000
customers, with the result that in 'BigTable' the last customer of Table10 gets CustNr 300,000.)

Next, I think Geo's solution no. 2 is the best fit for CustNr 1 to 29,000, because all addresses there match
their CustNr and only the completeness varies. Here I need your help with a program that loops through all
records and makes any necessary additions.

Last, I have to list CustNr 29,001 to 30,000. The problem here is that different customers share the same CustNr.
The necessary adjustments for these 999 records will be made manually.

Finally, the updated and most complete 30,000 records will be served on the LAN.

Regards Tom

Something like the following for the first 29,000 records in each table.
You already have the big table created and empty with the following fields:

Your ten tables are named Table1 .. Table10.

  TForm1 = class(TForm)
    tblWork: TTable;
    tblBig: TTable;
    Button1: TButton;
    procedure Button1Click(Sender: TObject);
  private
    { Private declarations }
  public
    { Public declarations }
  end;

var
  Form1: TForm1;

implementation

{$R *.DFM}

procedure TForm1.Button1Click(Sender: TObject);
var
  i, k: integer;
begin
  // open the big table
  with tblBig do begin
    DatabaseName := 'c:\somehere'; // <-- change the path here for the big table
    TableName := 'BigTable.db';
    Open;
  end;

  // tables loop
  for k := 1 to 10 do begin
    // open a table
    with tblWork do begin
      if Active then Close;
      DatabaseName := 'c:\somehere'; // <-- change the path here for Table1 .. Table10
      TableName := 'Table' + IntToStr(k) + '.db';
      Open;
    end;

    // loop the records 1..29000
    for i := 1 to 29000 do begin
      if tblWork.Eof then Break;

      // check if NR already exists and add a new record if it doesn't
      // (you may create an index and use the FindKey method for faster searching)
      if tblBig.Locate('NR', tblWork.FieldByName('NR').AsString, []) then
        tblBig.Edit // set the found record into edit state
      else begin
        tblBig.Append; // new record
        tblBig.FieldByName('NR').AsString := tblWork.FieldByName('NR').AsString; // NR field
      end;

      // now check the field values and update the big table
      if not tblWork.FieldByName('KDM').IsNull then
        // additional check for string fields only
        if tblWork.FieldByName('KDM').AsString <> '' then
          tblBig.FieldByName('KDM').AsString := tblWork.FieldByName('KDM').AsString;

      if not tblWork.FieldByName('UID').IsNull then
        if tblWork.FieldByName('UID').AsString <> '' then
          tblBig.FieldByName('UID').AsString := tblWork.FieldByName('UID').AsString;

      if not tblWork.FieldByName('STNR').IsNull then
        if tblWork.FieldByName('STNR').AsString <> '' then
          tblBig.FieldByName('STNR').AsString := tblWork.FieldByName('STNR').AsString;
      // ... and the same 11 more times for the rest of the field list

      // post the changes in the big table
      tblBig.Post;

      tblWork.Next;
    end; // records loop

    tblWork.Close; // close work table
  end; // tables loop

  // close the big table
  tblBig.Close;
end;

I haven't tested the code above, so some errors are possible.
Forgot to say that there are two TTable components on the form: one for the big table and one for all the others.

For the records above 29000 I can't help with a program. As you said: manual processing ;-)
First things first: do not use an autoincrement customer-number field in the big table. (You actually want the duplicates to exist as duplicates, so set it to an integer, long integer, or number field.) This will prevent it from issuing new customer numbers as data is appended; it will merely store the existing number.
Once you have consolidated the data into one master table without duplicates, you can change the field back to an autoincrement field.

Once that is done, the previous instructions should work without manual processing. This assumes there is no way of knowing whether data in one table is more valid than data in another. The only assumption my method makes is that some data is better than no data, which is invariably true; deciding that some data is better than other, different data requires more information than I have.

Question has a verified solution.
