• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 192

Maintaining Two Identical SQL Server Databases

My team has two web projects that take data from the same SQL Server 2000 database. One project has read and write access and is where data gets updated; the other only needs read access. When many users query data at the same time, the server chokes and times out on certain queries. The fastest solution is probably to run two copies of the same database simultaneously, with each project getting data from its own copy. We can accept that the read-only project may not see updates in real time, since data is only updated from the read-write project, but we would like to keep the two copies in sync as close to real time as possible. What would be a suitable solution for this?

Thank you.
Asked by: lenguyen0366

4 Solutions
 
Imran Javed Zia commented:
Hi,
There are several ways to address these issues:
• Use data replication services
• Use a mirror server
• Use a database snapshot
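
Of these, the database snapshot option is the quickest to sketch. Note that it requires SQL Server 2005 or later (Enterprise Edition); the database and file names below are hypothetical:

```sql
-- Create a read-only, point-in-time snapshot of a source database.
-- Requires SQL Server 2005+ Enterprise Edition; names are hypothetical.
CREATE DATABASE AppDB_Snapshot
ON ( NAME = AppDB_Data,                        -- logical name of the source data file
     FILENAME = 'C:\Data\AppDB_Snapshot.ss' )  -- sparse file backing the snapshot
AS SNAPSHOT OF AppDB;
```

The read-only application can then query AppDB_Snapshot without contending with writers, at the cost of seeing data only as of the moment the snapshot was created; you would drop and recreate the snapshot on whatever refresh schedule you can tolerate.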

I recommend moving to a later version of SQL Server, perhaps 2005 or 2008; I hope it will address a lot of these performance issues.

Thanks
 
Alpesh Patel, Assistant Consultant, commented:
Mirroring or log shipping is the best way.
 
Kobe_Lenjou commented:
I suppose one of the requirements is that the databases need to be up and running 100% of the time?
This rules out quite a lot of scenarios.

However, I would not go this way; I would instead improve the application, queries, and indexes to prevent long locks from occurring.

Also, as suggested, upgrading to the latest and greatest SQL Server version will help you (much smarter locking).
If you happen to get to SQL 2005/2008, please try the 'snapshot isolation level' for the read-only app; it will solve all your problems ;-)
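
As a sketch of what that looks like once you are on SQL Server 2005 or later (the database and table names here are hypothetical):

```sql
-- One-time setup: allow snapshot isolation on the database (SQL Server 2005+).
ALTER DATABASE AppDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the read-only application's connection: readers use row versions
-- instead of shared locks, so they no longer block behind writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT OrderID, CustomerID, Status
FROM dbo.Orders;  -- hypothetical table; reads a consistent version, takes no shared locks
```

The trade-off is extra tempdb usage for the version store, which is usually a small price for removing reader/writer blocking.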

 
Ryan McCauley commented:
You don't mention what indexing and performance tuning you've done on the existing database - instead of adding a second copy of the data, the "timeout and choke" problems can possibly be solved by seeing which queries are responsible and adding some indexes to support them. If the indexes aren't present to do their job, those queries can be using table scans or other operations that are very expensive, and the right indexes can make those queries return almost instantly.

On a basic level, you can use the SQL Server/SSMS canned reports to see which queries are expensive in terms of disk access. In SSMS, connect to your server, then right-click on the server and select "Reports" -> "Standard Reports" -> "Performance - Top Queries By Total IO". This report will quickly show you the top 15 queries run on your server (since it was last restarted) that resulted in the highest volume of data read from disk, summed across all executions - if a query is lightweight but was run a million times, it could show up above a heavy query that runs once a day, for example.
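
If you prefer to query this directly rather than through the SSMS report (SQL Server 2005+ exposes the same data through DMVs), a sketch along these lines returns the top cached statements by total logical reads:

```sql
-- Top 15 cached query statements by total logical reads (SQL Server 2005+).
SELECT TOP 15
    qs.total_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)   -- -1 means "to end of batch"
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Note the stats are reset when the server restarts or when a plan is evicted from the cache, so treat the numbers as relative rather than absolute.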

Once you have the expensive queries in hand, you can either talk to your DBA about creating some indexes to service them, or, if you don't have a DBA (or are the de facto DBA), you can cheat a bit by placing the query in an SSMS window and, from the Query menu, selecting "Analyze query in the database engine tuning advisor". This is a tool that comes with SQL Server (when you install the "Management Tools - Complete" option in the installer) that will look at your query, compare it to your database, see how it would best be fetched, and then recommend indexes and statistics that would improve performance for that query. Keep in mind that indexes, in exchange for speeding lookups, have a negative impact on inserts/updates/deletes - it's slight, but the cost of over-indexing your data can outweigh the benefits in some cases. Once you have some recommendations from the DTA, you can apply them (preferably first to a test copy, if you have one) and see how much they improve performance.
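
Once the tuning advisor (or the query plan itself) points at a missing index, applying one is a single statement. A sketch, with hypothetical table and column names:

```sql
-- A nonclustered index supporting a hypothetical hot lookup by CustomerID,
-- so the query seeks the index instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, Status);  -- covering columns; INCLUDE requires SQL Server 2005+
```

On SQL Server 2000 you would drop the INCLUDE clause and either accept the key lookup or add the extra columns to the index key.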

While you could do a second copy of the database, you'd have to manage it to make sure they're in sync all the time, and that can get complicated. If you can avoid a second copy, and continue to allow both processes to use the single copy you already have, that's the ideal solution. You may even be able to add some indexes to improve the performance of both applications, not just the slow one.

Good luck!
