• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 954

DataReader vs DataAdapter

At the moment I use a DataAdapter to read from a stored procedure and store the results in a dataset which is then cached.

I have heard that the DataAdapter can give poor performance, so I am considering using a DataReader instead, but I would like to keep the DataSet so that it can be cached.

Can anyone tell me whether it is the DataAdapter that performs badly, or the DataSet? (i.e. will I waste a lot of time for no gain if I re-code to use a DataReader?)
Asked by: Mamine
5 Solutions
 
mmarinovCommented:
Hi Mamine,

The mechanism is different: a DataSet keeps the data in memory and tracks each row's status (new, modified, deleted) offline from the database.
A DataReader is a live, forward-only connection to the database, so you cannot cache it.

Performance suffers if you keep a lot of DataSets in memory, but not from a single one.

Regards!
B..M
mmarinov
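To make the distinction concrete, here is a sketch in C# (the connection string and stored-procedure name are hypothetical placeholders) showing both approaches: a DataAdapter filling an in-memory, cacheable DataSet, and a DataReader streaming rows over a live connection.

```csharp
using System.Data;
using System.Data.SqlClient;

class ReaderVsAdapterDemo
{
    const string ConnStr = "...connection string...";   // hypothetical

    // DataAdapter: one call fills an in-memory DataSet that can be cached.
    static DataSet FillWithAdapter()
    {
        var ds = new DataSet();
        using (var conn = new SqlConnection(ConnStr))
        using (var da = new SqlDataAdapter("dbo.MyProc", conn))   // hypothetical proc
        {
            da.SelectCommand.CommandType = CommandType.StoredProcedure;
            da.Fill(ds);    // Fill opens and closes the connection itself
        }
        return ds;
    }

    // DataReader: a forward-only stream tied to an open connection,
    // which is why the reader itself cannot be cached.
    static void ReadWithReader()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("dbo.MyProc", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume each row as it streams in
                }
            }
        }
    }
}
```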
 
MamineAuthor Commented:
Hiya BM! (and thanks for the quick feedback)

As always, I have a few questions ...

1. My data is read-only on the web page (but it is updated from another, non-web program). Will the dataset automatically detect the changes?

2. I realise I cannot cache the DataReader, but I could use it to read the data into a DataSet and then cache the DataSet. If the performance hit is in the DataAdapter this should improve things, but if the problem is in the DataSet it won't; hence I'm trying to understand which causes the problems. Are you saying there's no performance problem if there is only one dataset anyway, regardless of which method I use?

3. Although I only have one dataset per page, I am caching them all (i.e. for multiple pages in the site). Will this give me a performance problem?
 
mmarinovCommented:
Mamine,

1. The DataSet detects changes made to it, so it can automatically (through the DataAdapter) update, delete, and create records in the database. It does not detect changes made in the database itself.
2. It all depends on the records and table structure you have. For smaller tables I use both the DataAdapter and the DataReader; for bigger tables it depends on the purpose, but in most cases I use the DataAdapter, and it is not a lot slower. I have not tested the performance of reading, but I don't think there is a big difference between them.
3. Why do you want to cache every page's dataset?

Regards!
B..M
mmarinov

 
CJ_SCommented:
When you cache the dataset in memory, you can add dependencies to it. The cache will then remove the object when changes have been made to the database.

The performance problem you may have heard attributed to the DataAdapter is actually that it fills a DataSet. If you first fill the DataSet and then use that DataSet to fill your own business objects, you take a performance hit (you fill two object graphs). But when you use a DataReader directly, you remove the DataSet from the business logic, which improves performance. The DataSet is the actual performance hit.

Although you can remove the DataAdapter entirely and use a DataReader, you will still have to fill the DataSet yourself, which is what the DataAdapter does for you by default. Personally I would keep the DataAdapter if you want a DataSet.
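If you do go the DataReader route but still want a DataSet to cache, filling it yourself can be sketched as below (a hedged example; in .NET 2.0+ `DataTable.Load` copies the rows for you, and the procedure name and connection string are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

static class ReaderToDataSet
{
    static DataSet FillFromReader(string connStr)
    {
        var ds = new DataSet();
        var table = new DataTable("Results");
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("dbo.MyProc", conn))   // hypothetical proc
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                table.Load(reader);   // copies schema and rows from the reader
            }
        }
        ds.Tables.Add(table);
        return ds;   // the DataSet can now be cached; the connection is closed
    }
}
```

This is essentially what DataAdapter.Fill does internally, which is why the gain from switching the read path alone is usually small.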

Also consider caching the output of the pages and controls instead of caching the actual datasets.

Regards,
CJ.
 
MamineAuthor Commented:
CJ - thanks for the clarity on the dataset

BM/CJ -

The stored procedure I am using is quite complex (hence the need to cache) and uses processed data from several joined tables (and on one page includes a recursive loop). There is no corresponding update/delete routine. I can't see how either the dataset or the cache could detect changes in the underlying tables ???

The reason I want to cache the dataset for every page is that the processing is such a performance hit; if someone goes to one page, follows a link to another, and comes back, I don't want to have to read the DB again.

The reason I'm caching the dataset rather than the output page is so that I can re-use the data for many users but still track sorting and paging for each user.

 
CJ_SCommented:
>> The stored procedure I am using is quite complex (hence the need to cache) and uses processed data from several joined tables (and on one page includes a recursive loop). There is no corresponding update/delete routine. I can't see how either the dataset or the cache could detect changes in the underlying tables ???

You define the dependencies. If there are one or more specific tables you would like to watch, you can add a dependency on each of them, so that when the table changes the cached object is destroyed. It requires you to define the dependencies yourself.

You could also create a calculation mechanism (a DTS package, job, or stored procedure) inside SQL Server and schedule it to calculate the results and insert them into a separate table. That table would then contain the calculated data, which you can requery each time. Depending on the number of records you have, this could be a good or a bad idea.
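For the table-watching approach, a hedged sketch using ASP.NET 2.0's SqlCacheDependency (the database entry name "MyDb", table "Products", and loader are hypothetical; the article linked below shows how to build the equivalent by hand in ASP.NET 1.x, and the table must be enabled for notifications, e.g. with aspnet_regsql.exe):

```csharp
using System.Data;
using System.Web;
using System.Web.Caching;

public static class DataSetCache
{
    public static DataSet GetProducts()
    {
        var ds = (DataSet)HttpRuntime.Cache["Products"];
        if (ds == null)
        {
            ds = LoadProductsFromDb();   // fill via DataAdapter as before (hypothetical helper)

            // The dependency evicts the cached DataSet when the watched table changes.
            // "MyDb" must match a <sqlCacheDependency> database entry in web.config.
            var dep = new SqlCacheDependency("MyDb", "Products");
            HttpRuntime.Cache.Insert("Products", ds, dep);
        }
        return ds;
    }

    static DataSet LoadProductsFromDb()
    {
        // ...run the stored procedure and fill a DataSet here...
        return new DataSet();
    }
}
```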
 
MamineAuthor Commented:
This sounds really useful!!! Can you give me a link or example of sample code ?
 
CJ_SCommented:
Search for Caching and Dependencies combined with .Net and you should find lots of useful articles. Here's one:

http://msdn.microsoft.com/msdnmag/issues/03/04/WickedCode/default.aspx
 
MamineAuthor Commented:
Many Thanks :)
