• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 940

Good Table Design.

Hi,

What is a good design for these tables:

1. Detail Stock. It is used for recording transaction history. It could be used for FIFO, LIFO, and Average costing.

2. Total Stock. It is used by the user interface and thus needs fast loading.

These tables should be synchronized every time a change affects the Detail Stock table.

Please help.

Thank you.
Asked by emi_sastra

3 Solutions
 
jmoss111Commented:
Hello emi_sastra,

You can find a lot of great database schema examples at http://www.databaseanswers.org/data_models/index.htm

Regards,

Jim
 
emi_sastraAuthor Commented:
Hi Jim,

The link provides many helpful database models, thanks, but it is not what I am looking for here.

Thank you.
 
Phil_CrusaderCommented:
Hello emi_sastra,

I'm not quite sure what you're looking for. If it's for a system where you have a growing list of inventory items, I usually maintain a parent table for the items, then use individual transaction records in a child table with date tags to track individual stock changes.

<PARENT Table>
[id|name|start qty|date of last physical inventory|current qty]

<CHILD Table>
[id|qty|date]
That being said I use a batch job that runs frequently to update a summary table for reports.

<Total Table>
[id|name|date|qty]
You'll notice this is a rather simplistic view; it relies heavily on your back-end processing to implement your inventory process (FIFO, LIFO, average).

To implement a true FIFO or LIFO system, you may need to maintain two inventory tables: one to classify each item, and another to distinguish between the individual units, just like tracking the product code and serial number of each item. Is this the direction you're heading?
 
emi_sastraAuthor Commented:
Hi Phil_Crusader,


I. Detail Stock:

1. ItemCode.
2. TrsDate.     (There can be more than one row with the same ItemCode.)
3. ItemQty.
4. ItemUsed

II. Total Stock.

1. ItemCode.
2. ItemQty (ItemQty-ItemUsed)

Is the above structure optimized?

Thank you.
 
Phil_CrusaderCommented:
emi_sastra,

Yes, it is, but you would not be able to use it to track individual transactions. Just like the table I posted earlier, it is heavily dependent on its back-end to implement FIFO/LIFO inventory processes, but it would be good for average inventory processing.
 
Phil_CrusaderCommented:
emi_sastra,

I should point out that your table design above doesn't mean you can't do record tracking. It only means that you can't do it with your inventory records. You can still do record tracking, but it will need to be done using your sales table (I am assuming, of course, that your system also has a sales module).

There are many ways to approach this; what really matters is that the design you choose fits your needs and maintenance plan. That will make it easier for you to make upgrades or fixes in the future.

Regards,
PC
 
emi_sastraAuthor Commented:
Hi Phil_Crusader,

There is comment about my table structure on other question.

Part of the problem is that the table design is not great, but that is pretty typical.  I see it a lot even in production systems.  A sub-optimal database design always makes the application code more complex.

Please just comment on the two tables: could they be optimized?
I would use a trigger to update Total Stock on every change to Detail Stock.
The Total Stock table is used to show Total Stock of each item to the user.

Any suggestion to the comment?

Thank you.
 
Phil_CrusaderCommented:
emi_sastra,

I have 3 comments based on the table design you posted.  
1> From item #1 in your original post above, you will not be able to record transaction history, because you don't save sufficient detail. If that is all right with you, then your table design should be good enough.
2> The inventory stock model will have to be implemented on the back-end server. FIFO, LIFO, and AVERAGE operations are not optimized, but the design is flexible enough to handle all three.
3> You don't need the Total Stock table; you can use the detail table alone with a group query and aggregate functions to get your total summary easily.

ex.
SELECT   ItemCode, SUM(ItemQty) - SUM(ItemUsed) AS TotalQty
FROM     DetailStock
GROUP BY ItemCode
With that you can optimize your database, since you no longer need those triggers to update the stock total table.
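A minimal runnable sketch of this approach, using Python's sqlite3 module as a stand-in for SQL Server (the table and column names come from the thread; the sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE DetailStock (
        ItemCode TEXT,
        TrsDate  TEXT,
        ItemQty  INTEGER,   -- quantity received
        ItemUsed INTEGER    -- quantity issued
    )
""")
conn.executemany(
    "INSERT INTO DetailStock VALUES (?, ?, ?, ?)",
    [("C001", "2009-01-01", 10, 0),
     ("C001", "2009-01-02", 0, 4),
     ("F001", "2009-01-01", 12, 0)],
)

# Current stock per item, computed on the fly -- no summary table, no triggers.
rows = conn.execute("""
    SELECT   ItemCode, SUM(ItemQty) - SUM(ItemUsed) AS TotalQty
    FROM     DetailStock
    GROUP BY ItemCode
    ORDER BY ItemCode
""").fetchall()
print(rows)   # [('C001', 6), ('F001', 12)]
```

Because the total is derived on demand, the detail table remains the single source of truth and there is nothing to fall out of sync.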

If you want the tables to handle every problem, then it will become more complicated.  Otherwise a simple table design is usually the best.

Hope this helps.
 
emi_sastraAuthor Commented:
Hi Phil_Crusader,

My  Detail Stock should be:

1. ItemCode.
2. TrsId.
3. TrsDate.     (There can be more than one row with the same ItemCode.)
4. ItemQty.
5. ItemUsed

Is it right?


SELECT   ItemCode, SUM(ItemQty) - SUM(ItemUsed)
FROM     DetailStock
GROUP BY ItemCode

Will this cause overhead if there are more than 1,000 items and more than 1,000 transactions per item in the database (1,000 x 1,000 = 1,000,000 rows)?

<PARENT Table>
[id|name|start qty|date of last physical inventory|current qty]

When is "current qty" updated?

Thank you.
 
Phil_CrusaderCommented:
emi_sastra,

Yes, that is right. I've seen good results with over 32,000 items and thousands of transactions; however, this was on a PostgreSQL database. I honestly don't know if MSSQL will perform as well.

However, the simple query call should help reduce any overhead you encounter. Just remember, since you're using aggregate functions, it will have more overhead than a simple query against a total stock table. But I prefer better overall database performance vs. fast reporting.

If you want to test which one is better, you can use your two-table design and monitor your report overhead, then use the SQL statement above and see how much slower it is. If the additional overhead is acceptable to you, it's easy to just kill the triggers and remove the extra table.

Current qty is updated on every transaction. In my inventory tables, when I do a physical inventory count, it is saved as the start qty along with the date I did the physical count. All transactions after that update the current qty; this lets me check whether my inventory and physical count match by comparing the current start qty against the current qty since the last inventory entry.
 
Phil_CrusaderCommented:
emi_sastra,

I just noticed you added a TrsId to the table; you can remove that. It will not help you record transaction history, if that was your intention. If it was meant as a primary key, you can still remove it: your ItemCode + TrsDate makes a good composite key.
 
emi_sastraAuthor Commented:
Hi Phil_Crusader,

3> you don't need the total stock table, you can use the table detail alone and use a group query with aggregate functions to get your total summary easily.

<PARENT Table>
[id|name|start qty|date of last physical inventory|current qty]

But you need to update this table on every transaction, right? If yes, what do you use to update it, an SP or a trigger?
Or do you mean that I just use the <PARENT Table> as the Total Stock table?

Thank you.
 
Mark WillsTopic AdvisorCommented:
Have not read all the posts - so I apologise in advance....

normally in a fifo type environment you have the detailed transactions representing individual stock movements and then the inventory master which holds the stock on hand and any "pending" reserved stock etc...

from what I can see, the detail stock table is akin to the fifo or movements table, and the total stock is akin to the inventory master.

in that regard, would be more inclined to have detail something like :

DETAIL
1. ID - unique / identity - primary key
2. ItemCode - secondary / non-clustered index; sometimes this can be the ID of the inventory master, but could just be the code (volume / size dependent)
3. TransactionType (e.g. sale, receipt, adjustment, etc.)
4. TransactionDate
5. Quantity

plus what ever else is needed at a detail level (location or warehouse, price / cost so you can post fifo valued transactions to GL if going that far)

MASTER
1. ID - unique / identity   - primary key
2. ItemCode.                    - indexed
3. ItemName
4. Stock_On_Hand

plus whatever else is needed as an inventory master (various groups, size, weight, etc) - sometimes doubles as the product master, though, really is the inventory control side of a product, so be wary of combining if really doing inventory management (and might need warehouse / bin locations etc as a subsidiary of inventory master)...

As detailed transactions are posted, you can adjust stock on hand accordingly. Individual transactions are effectively always added in a FIFO environment, so you can get the applicable activity at a point in time (such as cost, sales price, and basic inventory movements over time). Because they are individual and discrete transactions carrying the correct sign (positive means an incoming stock transaction that adds to stock on hand; negative means outgoing, such as a sale reducing stock), you only ever have to "add" to the balance / stock-on-hand figure.
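Mark's point about signed movement quantities can be sketched like this: because incoming movements are positive and outgoing ones negative, posting a transaction only ever adds to the balance. This uses Python's sqlite3 module for illustration; the post helper and all table/column names are invented, not from any real system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Detail (
        ID INTEGER PRIMARY KEY,
        ItemCode TEXT,
        TransactionType TEXT,
        TransactionDate TEXT,
        Quantity INTEGER        -- signed: positive = incoming, negative = outgoing
    );
    CREATE TABLE Master (
        ID INTEGER PRIMARY KEY,
        ItemCode TEXT UNIQUE,
        ItemName TEXT,
        Stock_On_Hand INTEGER DEFAULT 0
    );
    INSERT INTO Master (ItemCode, ItemName) VALUES ('C001', 'COCA COLA');
""")

def post(item, trs_type, date, qty):
    """Insert a signed movement and add it to Stock_On_Hand in one transaction."""
    with conn:
        conn.execute(
            "INSERT INTO Detail (ItemCode, TransactionType, TransactionDate, Quantity) "
            "VALUES (?, ?, ?, ?)", (item, trs_type, date, qty))
        conn.execute(
            "UPDATE Master SET Stock_On_Hand = Stock_On_Hand + ? WHERE ItemCode = ?",
            (qty, item))

post("C001", "receipt", "2009-01-01", 10)
post("C001", "sale", "2009-01-02", -4)   # outgoing movements carry a negative sign
on_hand = conn.execute(
    "SELECT Stock_On_Hand FROM Master WHERE ItemCode = 'C001'").fetchone()[0]
print(on_hand)   # 6
```

Wrapping both statements in one transaction is what keeps the detail and master tables from drifting apart.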

in some environments, stock on hand is calculated dynamically - which is fine for relatively low transaction volumes - and sometimes we see "opening balance" type scenarios to minimise the "real time" calculations, where the opening balance is calculated at various financial events (such as end of month).

anyway, hope that is of some interest...
 
emi_sastraAuthor Commented:
Hi mark_wills,

Your comment is quite long and a bit confusing without a sample.

I would not use the Item Master to hold current stock, since an item could be spread across many warehouses, racks, brands, etc.

Suppose we have report like :

ItemCode  |  Beg. Bal  |   In   |  Out   |  End Bal

Should we have a table like the report?

Thank you.

 
Phil_CrusaderCommented:
emi_sastra,

I can see how you got confused, let me rephrase...

The table you posted

Detail Stock
1. ItemCode.
2. TrsDate.     (This field could have more than one in the same ItemCode).
3. ItemQty.
4. ItemUsed

This will work as is, but you will not be able to use it to report transaction history.

If you need the transaction History, then your tables will need to look something like this

Detail Stock Parent

1. ItemCode.
2. TrsDate.     (This field could have more than one in the same ItemCode).
3. ItemQty.

Detail Stock Child
1.  ItemCode
2.  TrsDate
3.  Qty

This is a simple example; you can add fields as you feel necessary, but this model is an easy way of getting transaction history reports. It should also allow you to produce the report you posted above using SQL.

If you have specific reports you have in mind, maybe you can post them?

 
Mark WillsTopic AdvisorCommented:
The inventory master and the item master (assuming that is your product master) are two different tables - the inventory master is more like your "total" table. By the sounds of it, your item master, if being replicated by warehouse, might be missing that "higher" table - you need to consider good relational design (i.e. if an item is held in multiple warehouses, you probably have the "item master" plus another table, "item locations", which has all the different warehouses). Similarly, it is sometimes the case that the detail transactions are held as a "status" being a "draw down" on an existing receipt, but that really isn't detailed transactions at all... But we digress...

If there is a distinct and clearly defined period and an appropriate "rollover" function at the end of that period, then you can match your table design more in keeping with the output results, so long as it does not violate the foundations of relational design. In this case, your master can hold that data so long as it carries the period it pertains to...

 
Mark WillsTopic AdvisorCommented:
thanks @phil_crusader: it does clear it up a bit - I think the table designs versus the reporting requirement confuse things a little.
 
emi_sastraAuthor Commented:
Hi Phil_Crusader,

I understand your comment, but again the table is too detailed (a performance consideration) to generate the report from.

May be we should have Monthly Stock Table:

1. YearMonth
2. ItemCode.
3. BegBalance
4. ItemIn
5. ItemOut

This table should be updated every time a transaction occurs. Then it will be very fast to get the report.
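As a sketch of the alternative, the monthly figures can also be derived straight from the detail table with one grouped query (sqlite3 used for illustration; the strftime call assumes TrsDate holds ISO-formatted dates, and the sample rows are invented). A beginning balance per month could then be computed by summing all earlier months.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE DetailStock (
        ItemCode TEXT, TrsDate TEXT, ItemQty INTEGER, ItemUsed INTEGER)
""")
conn.executemany("INSERT INTO DetailStock VALUES (?, ?, ?, ?)", [
    ("C001", "2009-01-05", 10, 0),
    ("C001", "2009-01-20", 0, 4),
    ("C001", "2009-02-03", 5, 0),
])

# One row per item per month: incoming and outgoing totals.
rows = conn.execute("""
    SELECT   strftime('%Y%m', TrsDate) AS YearMonth,
             ItemCode,
             SUM(ItemQty)  AS ItemIn,
             SUM(ItemUsed) AS ItemOut
    FROM     DetailStock
    GROUP BY YearMonth, ItemCode
    ORDER BY YearMonth
""").fetchall()
print(rows)  # [('200901', 'C001', 10, 4), ('200902', 'C001', 5, 0)]
```

Whether this is fast enough, or a maintained Monthly Stock Table is needed, depends on the data volume, as discussed above.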

Thank you.

 
Phil_CrusaderCommented:
emi_sastra,

Having a Monthly Stock Table / Total Stock Table will definitely help with getting the reports out. The only reason I left that out was that I don't know how important it is for you to get the reports out fast. Plus, depending on how large your data volume is, you may need to decide whether it is necessary to have this.

Honestly though, I don't really think the performance hit will be great, even for large data volumes, given the simplistic nature of your report.  However, if you want the fastest solution, doing a summary table (like your Monthly Stock Table) is definitely faster.
 
emi_sastraAuthor Commented:
Hi Phil_Crusader,

Ok. Could you provide sample query to get the monthly summary report using the detail table?

Detail Stock Child
1.  ItemCode
2.  TrsDate
3.  Qty

Thank you.
 
Phil_CrusaderCommented:
emi_sastra,

Two things...

1> You'll need the Last Physical Inventory Date (I should have caught that earlier).

Detail Stock Parent

1. ItemCode.
2. TrsDate.     (This field could have more than one in the same ItemCode).
3. ItemQty.
4. LastPhysicalDate

Detail Stock Child
1.  ItemCode
2.  TrsDate
3.  Qty
Without that, your queries will always be run from the date of your first transaction.


2> After writing the query, I realized it's a bit more complex than I recalled (my bad). I used a similar one in an inventory POS system, and it ran for a minute or two. I should point out that if you have thousands of transactions per item per day, then you have far more data than I did: mine was about hundreds of transactions per item per month, although our item table has at least 32k records.

-- Pseudocode: helper returning the most recent physical-count date
-- for an item before a given date.
Function GetLastPhysical(inputDate, pItemCode) {
    SELECT   PhysicalDate INTO LastPhysical
    FROM     DetailStockParent
    WHERE    PhysicalDate < inputDate
    AND      ItemCode = pItemCode
    ORDER BY PhysicalDate DESC
    LIMIT 1

    return LastPhysical
}

SELECT     a.ItemCode
          ,a.Qty                                    -- beginning balance
          ,StockIn.Qty                              -- incoming
          ,StockOut.Qty                             -- outgoing
          ,a.Qty + StockIn.Qty - StockOut.Qty       -- ending balance
FROM      (SELECT  ItemTable.ItemCode
                  ,DetailStockParent.Qty
           FROM    ItemTable
           JOIN    DetailStockParent
             ON    ItemTable.ItemCode = DetailStockParent.ItemCode
           WHERE   DetailStockParent.PhysicalDate = GetLastPhysical(StartDate, ItemCode)) a
LEFT JOIN (SELECT   ItemCode
                   ,SUM(Qty) AS Qty
           FROM     DetailStockChild
           WHERE    TrsDate >= GetLastPhysical(StartDate, ItemCode)
           AND      TrsDate <= EndDate
           AND      Qty > 0                         -- positive = incoming
           GROUP BY ItemCode) StockIn
  ON a.ItemCode = StockIn.ItemCode
LEFT JOIN (SELECT   ItemCode
                   ,SUM(Qty) AS Qty
           FROM     DetailStockChild
           WHERE    TrsDate >= GetLastPhysical(StartDate, ItemCode)
           AND      TrsDate <= EndDate
           AND      Qty < 0                         -- negative = outgoing
           GROUP BY ItemCode) StockOut
  ON a.ItemCode = StockOut.ItemCode
 
emi_sastraAuthor Commented:
What is LastPhysicalDate? When is it fed?

I know it is complex to get a report from the detail data; that's why I always use a total table to make it easy. A long history of data, more than 100 MB, would be a problem. One or two minutes is a long time to produce a report; it should usually take less than 10 seconds.

Thank you.
 
dbbishopCommented:
I have not read all the posts here, but I think I have an idea of what you are looking for. I would stay away from triggers for two reasons. One, in a transaction-based system they can be killers, especially when you are using data from one table to try to keep another in sync. Two, they do not 100% guarantee synchronization. You rely on the trigger to keep the data in table2 in sync with table1, but what happens if a change is made directly to the data in table2? If the tables become out of sync, you basically need to recreate the summary table.

I would consider one of two possibilities:
1.  Create a summary table at regular intervals, whether weekly, daily, twice a day, or hourly.
2.  Create an indexed view. This will affect transaction processing, since the indexes on the view will need to be updated whenever a transaction adds or changes the underlying table's data, but I would at least give it a try. Microsoft has a good entry in MSDN about creating indexed views at http://www.microsoft.com/technet/prodtechnol/sql/2000/reskit/part10/c3661.mspx?mfr=true
 
emi_sastraAuthor Commented:
Hi dbbishop,

Second, they do not 100% guarentee synchronization.
Please provide more explanation.

For one thing, you rely on the trigger to keep data in table2 in sync with table2, but what happens if a change is made in the data in table2.
Please provide more explanation/sample.

Create a summary table at regular intervals, whether it be weekly, daily, twice/day or hourly.
Could it be done automatically? How to do it?

Thank you.



 
dbbishopCommented:
You have a detail table and a summary table. If you use triggers on the detail table to update the summary table, this is fine. But what happens if someone goes into the summary table and makes a change manually? You've lost synchronization, and unless you have some kind of audit query that runs regularly, you'd never know.

As far as scheduling the summary table on a regular basis, just create a job to do it and schedule it to run at whatever intervals you want. Create the code that drops the current summary table and then creates a new one. You might determine when your 'off' hours are (e.g. 7 AM, noon - most people are at lunch - and 4:30 PM) and run it at those times.
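The drop-and-recreate idea can be sketched as a plain function. In production this would be the body of a SQL Server Agent job or a cron task rather than Python; sqlite3 is used here only for illustration, and all names and sample rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DetailStock (
        ItemCode TEXT, TrsDate TEXT, ItemQty INTEGER, ItemUsed INTEGER);
    INSERT INTO DetailStock VALUES
        ('C001', '2009-01-01', 10, 0),
        ('C001', '2009-01-02', 0, 4);
""")

def rebuild_summary(conn):
    """Drop and recreate TotalStock from DetailStock -- run this from a scheduler."""
    with conn:
        conn.execute("DROP TABLE IF EXISTS TotalStock")
        conn.execute("""
            CREATE TABLE TotalStock AS
            SELECT   ItemCode, SUM(ItemQty) - SUM(ItemUsed) AS ItemQty
            FROM     DetailStock
            GROUP BY ItemCode
        """)

rebuild_summary(conn)
total = conn.execute(
    "SELECT ItemQty FROM TotalStock WHERE ItemCode = 'C001'").fetchone()[0]
print(total)   # 6
```

Because the summary is rebuilt wholesale from the detail table, any drift introduced between runs is corrected at the next rebuild.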
 
emi_sastraAuthor Commented:
Hi dbbishop,

I am sorry,  just back again.

Bu what happens if someone goes into the summary table and makes a change manually. You've lost synchronization, and unless you have some kind of audit query that runs regularly, you'd never know.

--> That is almost impossible to happen.

As far as scheduling the summary table on a regular basis, just create a job to to it and scehdule it to run at whatever intervals you want. Create the code that would drop the current summary table and then create a new summary table. You might determine when your 'off' hours are (e.g. 7AM, NOON - most people are at lunch) and 4:30 PM and run it at those times.

---> Please provide sample/step how to do it.

Thank you.

 
Mark WillsTopic AdvisorCommented:
You requested that this be deleted because of "NO RESPONSE"; sorry, but I think there has been a lot of help for you in this thread. The last thing you asked about is how to schedule a job, which has nothing to do with table design.
 
dbbishopCommented:
Agreed. Lots of examples, links, suggestions...
 
emi_sastraAuthor Commented:
Hi All,

Just one question that I could make decision to reward point.

I. Detail Stock:

1. WarehouseCode
2. ItemCode.
3. TrsDate.     (There can be more than one row with the same ItemCode.)
4. ItemQty.
5. ItemUsed

I am using FIFO stock.

User screen : Stock

WareHouse :  ......................

ItemCode   ItemName       Qty
C001          COCA COLA     10
F001           FANTA              12

Let's say there are more than 10 thousand ItemCodes.

My question: in order to load the stock screen faster for the user, what should I do?

1. Create a Total Stock table? If so, how do I update it: with a trigger or an SP at every single transaction?
2. Get it with a query from Detail Stock?

Thank you.
 
dbbishopCommented:
What is difference between ItemQty and ItemUsed?

SQL Server is definitely suited to handle 10,000 items in a table. I have a database with over 30,000,000 rows, and 60-70 users pull data from it in a very timely manner. Response time is sub-second for pulling multiple rows based on an account number (ItemCode in your case) through a VB.Net application interfaced with SQL Server 2000.

I am not sure that you want to maintain an item total in the database. Is it the total of that item at one warehouse, or the total system-wide? With proper indexing and a well-written query, you should be able to calculate the item totals (in stock, sold to a customer by date, sold to a customer within a period of time, etc.) at the time you pull the data.

I have never written a banking system, but I do not necessarily know that for an account, there is an actual 'remaining balance' column kept in the database. The remaining balance is the total deposits and credits minus the total withdrawals or debits. If you calculate it every time, there is no chance for error.

Do everything through stored procedures in SQL Server. If you want to update an item quantity in a warehouse, pass the warehouse code, item code, and quantity to a procedure. You could even have the procedure return the remaining quantity after update. Do joins on the various tables to pull your relational data (warehouse name, item description, etc.)
 
emi_sastraAuthor Commented:
Hi dbbishop,

A : What is difference between ItemQty and ItemUsed?
Q : ItemQty is the total bought/produced/returned from sales, while ItemUsed is the total sold/returned to purchases - per TrsDate.

Let's say we  have 10,000 items x 1 transaction per day (TrsDate) x 365 days = 3,650,000 rows per Year.

A: Response time is sub-second for pulling multiple rows based on an account number (ItemCode in your case) through a VB.Net application interfaced with SQL Server 2000.

Q: If one ItemCode takes a sub-second, say 1/60 sec, then printing the current ready stock means multiplying by 10,000, which would be terrible: more than 2 minutes. And that is just retrieving the data; it then needs more time again to feed Crystal Reports. Please comment if I am wrong.

Q : Could you provide a sample of your stock table structure? Maybe I could learn from that.

A : Do everything through stored procedures in SQL Server.
Q : What is the difference between an SP and a trigger, since I've been told not to use triggers in an earlier post? To me a trigger is also an SP, but one called automatically by SQL.

Thank you.
 
dbbishopCommented:
I do not think there is a direct correlation between the number of rows in the base table and the number of rows the query works with. In other words, if you have a 100,000-row table and need to pull 10 rows of data, you cannot say it will take ten times longer to pull 100 rows. The query plan SQL uses, the indexes you have on the table, and how effectively the query is written all come into play. These are not necessarily things that can be taught in one sitting for 500 points; they are skills you gain as you become familiar with database design, indexes (clustered vs. unclustered), the use of your primary key, structuring a query to be efficient, etc.
At best, you can learn all this sitting down at SQL Server with about $150 worth of books and playing with the samples over a few months, or you can put down a few grand and take a course or two from knowledgeable (and reputable) training centers.

The only thing I've ever done that took over two minutes was running a 56,000,000-row table against itself with a LEFT JOIN to identify missing data, and several ETL processes where I stage 80,000 rows of text data into a staging table and then parse it into tables along with all kinds of data conversions.
 
dbbishopCommented:
A trigger is a specialized stored procedure that fires automatically when any action takes place that affects the data in the table (adding, deleting, or changing data). Since you can do the same work with a stored procedure, why have the overhead of a stored procedure doing the work, the table having to determine there is a trigger, and then the trigger doing more work?

To me, a trigger is essential when you MUST have an audit trail of changes to the data, where information can be captured outside the actual transaction data (timestamp, logged-in user, terminal, resource info), and you are usually writing to a separate audit table so you can track those changes. (John Smith at terminal SHD7621 changed the last name of Acct 2384723847 from Smith to Jones at "01/12/2009 06:12:55" in the User table.)

I worked on a new criminal justice system back in the late 90s, and we had to log when someone went to the bathroom. Not quite that bad, but we had well over 30 tables in the database, each one had triggers, and when a change was made, multiple records were written to a separate log table showing user ID, terminal/IP address, date and time, the table name that was modified, the old value, and the new value. If 10 columns were changed in one row, it created 10 audit records. A real headache, and all done with triggers.

Try to keep it as simple as you can.
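A minimal version of such an audit trigger, using sqlite3 for illustration (SQL Server's CREATE TRIGGER syntax differs, and the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users (Acct INTEGER PRIMARY KEY, LastName TEXT);
    CREATE TABLE UserAudit (
        Acct INTEGER, ColumnName TEXT, OldValue TEXT, NewValue TEXT,
        ChangedAt TEXT DEFAULT CURRENT_TIMESTAMP);

    -- Fires on every UPDATE of LastName and records the old and new values.
    CREATE TRIGGER trg_user_audit
    AFTER UPDATE OF LastName ON Users
    BEGIN
        INSERT INTO UserAudit (Acct, ColumnName, OldValue, NewValue)
        VALUES (OLD.Acct, 'LastName', OLD.LastName, NEW.LastName);
    END;

    INSERT INTO Users VALUES (2384723847, 'Smith');
    UPDATE Users SET LastName = 'Jones' WHERE Acct = 2384723847;
""")
audit = conn.execute(
    "SELECT Acct, OldValue, NewValue FROM UserAudit").fetchone()
print(audit)   # (2384723847, 'Smith', 'Jones')
```

The application never touches the audit table directly; the trigger guarantees a row is written for every change, which is exactly the overhead dbbishop describes.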
 
emi_sastraAuthor Commented:
Hi All,

I think I gained some knowledge from all the comments here, so I should split the points.
It is difficult to split the points, but I'll try.

Thank you to all of you for the comments.


 
ksaeidiCommented:
Phil_Crusader:
I am also working on a project like that, with different tables and different names.
It seems like you know what you are talking about and you are good with databases.

Check my bio for my email so we can talk about this project, which could benefit you too.

thanks
