SQL Views

I have a big problem. I have 4 SQL tables that are fairly large, let's say 10-12 GB each. Every month I dump the previous month's data and add the new month's data.

Here is the situation: users need to access certain data from these big tables, for example looking for rows where field1 has the value 123. I have been creating sub-tables for the users, but managing them is now becoming a problem. I tried creating views, but they were taking forever. The users access the data through MS Access with SQL linked tables.

Is there a better way of doing this so I don't have to create sub-tables? Also, there are no indexes on the tables, because every month I drop the old month's data and add the new month's data. I do understand that indexing improves performance, but we are talking millions of records per table here, and each table contains 110 columns. Please help me resolve this problem.

Thank you.

proudpaki9 asked:
proudpaki9 (Author) Commented:
Also, just to be exact on the record count: each table contains 200 million records.

Thanks
David Todd (Senior DBA) Commented:
Hi,

Indexes would help the queries a lot.

I suggest that you drop the indexes prior to dumping and reloading the table, and recreate them as the last step of that process.
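
A minimal sketch of that monthly reload, assuming a table called dbo.BigTable with a month column LoadMonth and the lookup column field1 (all names and the month value here are placeholders for your real ones):

-- 1. Drop the index so the reload isn't slowed down by index maintenance
IF EXISTS (SELECT * FROM sysindexes
           WHERE id = OBJECT_ID('dbo.BigTable') AND name = 'IX_BigTable_field1')
    DROP INDEX BigTable.IX_BigTable_field1

-- 2. Swap the months (your DTS / BULK INSERT load goes here)
DELETE dbo.BigTable WHERE LoadMonth = '200612'   -- example month value
-- ... load the new month's file ...

-- 3. Recreate the index once the data is in place
CREATE INDEX IX_BigTable_field1 ON dbo.BigTable (field1)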

Views are a good idea, but without indexes they aren't going to be much better than querying the base table.

What version and edition of SQL Server are you running? For instance, SQL 2000 Standard Edition, or SQL 2005 Enterprise Edition ...

Cheers
  David
proudpaki9 (Author) Commented:
SQL 2000 Standard Edition...

David Todd (Senior DBA) Commented:
Hi,

You can try partitioning the data. That is, split it into sub-tables by month or whatever, and have a view along the lines of:
select *
from table1

union all

select *
from table2

... etc ...

There is a feature to automate some of this in SQL 2005 Enterprise ...
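
For reference, in SQL 2005 Enterprise that would look roughly like this (names and boundary values here are made up):

CREATE PARTITION FUNCTION pfMonth (char(6))
AS RANGE RIGHT FOR VALUES ('200701', '200702', '200703')

CREATE PARTITION SCHEME psMonth
AS PARTITION pfMonth ALL TO ([PRIMARY])

CREATE TABLE dbo.BigTable (
    LoadMonth char(6) NOT NULL,
    field1    int     NOT NULL
    -- ... remaining columns ...
) ON psMonth (LoadMonth)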

But it could still be done manually.

This does need indexes as well.

The idea is that users don't see the complexity: they always query the one table or view, but the query optimiser can quickly work out that the data for a given query can all come from table1, rather than having to read its way through all ... tables.
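
A rough sketch of such a partitioned view, assuming hypothetical monthly member tables and a month column carrying a CHECK constraint, which is what lets the optimiser skip the other members:

-- one member table per month; the CHECK constraint enables the pruning
CREATE TABLE dbo.Big_200701 (
    LoadMonth char(6) NOT NULL CHECK (LoadMonth = '200701'),
    field1    int     NOT NULL
    -- ... the rest of the 110 columns ...
)

CREATE TABLE dbo.Big_200702 (
    LoadMonth char(6) NOT NULL CHECK (LoadMonth = '200702'),
    field1    int     NOT NULL
    -- ...
)
GO

CREATE VIEW dbo.Big AS
SELECT * FROM dbo.Big_200701
UNION ALL
SELECT * FROM dbo.Big_200702
GO

-- a query that filters on the month column only touches dbo.Big_200702
SELECT * FROM dbo.Big WHERE LoadMonth = '200702' AND field1 = 123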

Another idea is to move many of the lesser-read columns into another table. This has the effect of getting more rows on each data page, so each disk read returns more rows, meaning fewer reads and, hopefully, better response times.
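
Something like this hypothetical split (table and column names invented for illustration):

-- narrow "hot" table: the key plus the handful of columns users filter on
CREATE TABLE dbo.Big_Hot (
    RowId  int NOT NULL PRIMARY KEY,
    field1 int NOT NULL
)

-- wide "cold" table: the other ~100 seldom-read columns
CREATE TABLE dbo.Big_Cold (
    RowId int NOT NULL PRIMARY KEY
    -- ... remaining columns ...
)

-- lookups hit the narrow table; the wide columns are joined in only when asked for
SELECT h.field1, c.*
FROM dbo.Big_Hot  h
JOIN dbo.Big_Cold c ON c.RowId = h.RowId
WHERE h.field1 = 123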

But all these need indexes and statistics.

I suggest that you capture a few of your favourite queries and experiment with the Index Tuning Wizard.

HTH
  David
Ken Selvia (Retired) Commented:
I browsed a few of your other questions.  Am I right that you are using DTS to load this data?  Are the input files already sorted?

If so, consider BULK INSERT and a clustered index.

truncate table mytable
create clustered index mt_field1_idx on mytable (field1)

BULK INSERT dbo.mytable FROM 'c:\temp\myfile.txt'
WITH (ORDER (field1 ASC))

Telling BULK INSERT that the input file is already in the same order as the clustered index allows SQL Server to load the file without re-sorting it, and your queries for field1 = 123 should be very quick.

Ken Selvia (Retired) Commented:
One more thing.

If the data is not sorted, you don't want to use a clustered index.

Create a regular index on field1 AFTER you load the table:

truncate table mytable
drop index mytable.mt_field1_idx

-- bulk insert or DTS load here

create index mt_field1_idx on mytable (field1)

Even for 100 million rows, creating a single non-clustered index on an integer field should be relatively fast (10 to 15 minutes, I'm guessing).


