Database size

I have a database which involves a lot of inserts and updates, and as such it is growing at a huge rate: it is currently 5 GB even though it only contains 2 million rows.

I'm assuming this is to do with transaction logs etc.

Is there a way I can configure the database to stop this rate of growth, e.g. ignore the log file, or something similar?

Thanks
Webbo
Webbo_1980 Asked:
AriMc Commented:
Depending on the criticality of your data you can do several things. Please read the following article, especially the chapter "Prevent the transaction log files from growing unexpectedly":

http://support.microsoft.com/kb/873235

Anthony Perkins Commented:
You have two options:
1.  Start backing up your Transaction Log.  This will keep it from growing.
2.  If you do not need point-in-time restores, then change the Recovery Model to Simple.
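Both options can be expressed in T-SQL; a minimal sketch, assuming a hypothetical database name YourDb and backup path:

```sql
-- Option 1: back up the transaction log regularly (path/name are examples);
-- each log backup marks inactive log records as reusable
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';

-- Option 2: switch to the SIMPLE recovery model if point-in-time restores
-- are not required; the log then truncates automatically on checkpoint
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
```

Note that under the Simple model log backups are no longer possible (or needed); full and differential backups still work as before.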
Rajkumar Gs (Software Engineer) Commented:
Another way - shrink the database log file frequently:

1. Execute this command against the database
DUMP TRANSACTION YOURDBNAME WITH NO_LOG

2. Right-click the database - Shrink Files - select 'Reorganize pages before releasing unused space' - enter 'Shrink file to' - 0 - click OK

The log file may be occupying a lot of hard disk space, so this can reduce both the disk usage and the database size.

Raj

Anthony Perkins Commented:
>>Another way - Shrink database log file frequently<<
Please do not recommend this.  This is a very bad idea.
Rajkumar Gs (Software Engineer) Commented:
Yes. You are right, acperkins.

>> e.g. ignore the log file, or something simular
It was only because of that part of the question that I posted this comment; I don't recommend it either.

Raj
Webbo_1980 (Author) Commented:
Thanks acperkins

1.  Start backing up your Transaction Log.  This will keep it from growing.
2.  If you do not need point-in-time restores, then change the Recovery Model to Simple.

The concerning thing is that it is already on Simple. And I've created a few backups, but the space isn't going down.

I've since found the report feature and the stat against my db at present is as follows...

Total space usage:  8,341.19 MB
Data files:         5,532.25 MB
Transaction log:    2,808.94 MB

Looking at the transaction log space, only around 5% is used; however, regarding the data files, 95% of it appears to be data.

Any ideas what i can do to get this down?
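One way to see where that data-file space is going is to total the pages per table from the system views; a sketch that should work on SQL 2005/2008:

```sql
-- Approximate reserved/used space per table, largest first
-- (8 KB pages, converted to MB)
SELECT  s.name + '.' + t.name AS table_name,
        SUM(p.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
        SUM(p.used_page_count)     * 8 / 1024.0 AS used_mb
FROM    sys.dm_db_partition_stats AS p
JOIN    sys.tables  AS t ON t.object_id = p.object_id
JOIN    sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;
```

This usually points quickly to the one or two tables (or their indexes) that account for most of the growth.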
AriMc Commented:
You say there are 2M rows in the database. Dividing 5.5 GB by that gives roughly 3 kilobytes per row. Since you haven't provided any information about your database structure, it's hard to tell if this is around the right figure but it could easily be true if you have a couple of fairly wide character fields in the most commonly inserted table(s).

If you are using fixed-length char columns, you might want to experiment with varchar, especially if most of the time the columns are empty or very short.
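As a sketch of that change, assuming a hypothetical table dbo.Orders with a char(1000) column Notes:

```sql
-- Convert the fixed-length column to variable-length
ALTER TABLE dbo.Orders ALTER COLUMN Notes varchar(1000) NULL;

-- Values converted from char keep their trailing pad spaces,
-- so trim them to actually shorten the stored data
UPDATE dbo.Orders SET Notes = RTRIM(Notes);

-- Rebuild the indexes so the freed space inside pages is reclaimed
ALTER INDEX ALL ON dbo.Orders REBUILD;
```

The trim and rebuild steps matter: the ALTER COLUMN alone changes the type but does not shrink the existing rows.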



Jim P. Commented:
SQL 2005 and below is based on 8K pages. I presume 2008 is the same.

This gets into deep database design, but you can get more data into less space by doing everything in eights. So if you have a field that could be up to 25 characters but generally never will be that large, set it as a varchar(24). Same with 50 becoming a varchar(48). The extra characters will force SQL to build the database so that you have an extra page reserved for 1 or 2 characters. The rest of that 8K area will be reserved but always empty.

This also leads to efficiency for the engine. When SQL fetches (or writes) data, it is still doing it in 8K blocks. So SQL says: get these three 8K pages with a varchar(24). When you have a varchar(25), it will fetch four 8K pages even though the fourth page holds only one character -- maybe.

With a thousand rows this doesn't show much. But when you have a million rows that adds up.

Normalizing to this level is rare, but in a large app/db it can add up.
Anthony Perkins Commented:
>>Any ideas what i can do to get this down?<<
Depending on your edition you could benefit from using compression.
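On SQL Server 2008 Enterprise edition, for example, row/page compression could be estimated and applied along these lines (the table name is hypothetical):

```sql
-- Estimate the saving before committing to it
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Orders',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression by rebuilding the table
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Page compression typically saves the most on repetitive character data, at the cost of some extra CPU on reads and writes.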
Aaron Shilo (Chief Database Architect) Commented:
Hi, you mentioned you have "a lot of updates".

Well, this could cause fragmentation. I would consider running a reorganize or rebuild on your main indexes (clustered and non-clustered) and checking whether any space was released.
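A sketch of that check-and-fix cycle (index and table names are hypothetical):

```sql
-- Check fragmentation of all indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i ON i.object_id = ips.object_id
                       AND i.index_id  = ips.index_id;

-- Lightly fragmented: reorganize (always online, low impact)
ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;

-- Heavily fragmented (roughly > 30%): rebuild
ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;
```

Note that under the Full recovery model a rebuild can itself grow the transaction log significantly, which is worth keeping in mind given the original question.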
Lowfatspread Commented:
@jimpen
   not sure I follow why you are suggesting 8-byte boundaries will have the effect you suggest (extra page retrieval requests etc.)... do you have any MS/third-party documentation on this?

@webbo
  could you give us some more background on your data?
  what do you mean by insert/update activity... can you describe a typical transaction scenario (lifecycle) for one of your data elements?

  do you have a lot of NULL data? ---> have you considered sparse columns?

  what is the nature of the database OLAP,OLTP,MIS,...
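For reference, a sparse column (SQL Server 2008 and later) stores NULLs at almost no cost; a minimal sketch with hypothetical names:

```sql
-- A mostly-NULL column declared SPARSE takes no space for NULL rows,
-- at the cost of slightly higher overhead for the non-NULL values
CREATE TABLE dbo.Readings
(
    id    bigint IDENTITY(1,1) PRIMARY KEY,
    value decimal(10,2)       NULL,
    note  varchar(200) SPARSE NULL
);
```

Sparse columns only pay off when the column is NULL in the large majority of rows; otherwise they cost more than a plain nullable column.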



Anthony Perkins Commented:

jimpen,
I cannot view the document from the link you posted, but I suspect that Lowfatspread is, like me, confused by your reference first to the use of 8K pages and then the jump to the conclusion that columns need to be in multiples of 8 bytes.
superkhan Commented:
Hi there,

The DBCC SHRINKFILE and DBCC SHRINKDATABASE commands may not help if the log cannot first be truncated. To keep the transaction log under control:

Change the recovery model from Full to Simple if you do not want to use the transaction log files during a disaster recovery operation.
Back up the transaction log files regularly to delete the inactive transactions in your transaction log.
Design the transactions to be small.
Make sure that no uncommitted transactions continue to run for an indefinite time.
Schedule the Update Statistics option to occur daily.
To defragment the indexes to benefit the workload performance in your production environment, use the DBCC INDEXDEFRAG Transact-SQL statement instead of the DBCC DBREINDEX Transact-SQL statement. If you run the DBCC DBREINDEX statement, the transaction log may expand significantly when your SQL Server database is in Full recovery mode. Additionally, the DBCC INDEXDEFRAG statement does not hold locks for a long time, unlike DBCC DBREINDEX.
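For reference, the DBCC INDEXDEFRAG syntax mentioned above, alongside its SQL 2005+ replacement (database, table, and index names are hypothetical):

```sql
-- Older syntax, deprecated from SQL Server 2005 onward
DBCC INDEXDEFRAG (YourDb, 'dbo.Orders', 'IX_Orders_Date');

-- Modern equivalent: always online, works in many small transactions,
-- so it avoids long-held locks and runaway log growth
ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;
```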
Jim P. Commented:
@Lowfatspread and ac,

Let me withdraw my comments. That was what I was taught in a SQL7/2K class.

Maybe it was the system at the time or just the blowhard teaching the class.

My humblest apologies.
if object_id('dbo.RemoveNonAlphaCharacters') is not null
begin
  drop function dbo.RemoveNonAlphaCharacters
end
GO
Create Function [dbo].[RemoveNonAlphaCharacters](@Temp VarChar(1000))
Returns VarChar(1000)
AS
Begin

    While PatIndex('%[^a-z]%', @Temp) > 0
        Set @Temp = Stuff(@Temp, PatIndex('%[^a-z]%', @Temp), 1, '')

    Return @Temp
End;

GO
if object_id('dbo.RemoveNonNumericCharacters') is not null
begin
  drop function dbo.RemoveNonNumericCharacters
end
GO
Create Function [dbo].[RemoveNonNumericCharacters](@Temp VarChar(1000))
Returns VarChar(1000)
AS
Begin

    While PatIndex('%[^0-9]%', @Temp) > 0
        Set @Temp = Stuff(@Temp, PatIndex('%[^0-9]%', @Temp), 1, '')

    Return @Temp
End;

GO

Declare @i as int
Declare @GUID as uniqueidentifier
Declare @Varch as varchar(36)
Declare @IntVal as bigint

if object_id('dbo.SizetestNonPaged') is not null
begin
  drop table dbo.SizetestNonPaged
end
 
Create table SizetestNonPaged
(ident	bigint identity(1,1),
 col1	varchar(25),
 col2	varchar(50),
 col3	varchar(75),
 col4   bigint);


if object_id('dbo.SizetestPaged') is not null
begin
  drop table dbo.SizetestPaged
end
 

Create table SizetestPaged
(ident	bigint identity(1,1),
 col1	varchar(24),
 col2	varchar(48),
 col3	varchar(64),
 col4	bigint);
set nocount on;
set @i = 1
while @i <= 1000000
begin
	Select	@GUID = NewId()
	Select	@Varch = DBO.RemoveNonAlphaCharacters(cast(@GUID as varchar(36)))
	Select	@IntVal = left(DBO.RemoveNonNumericCharacters(cast(@GUID as varchar(36))),10)

	insert into SizetestNonPaged 
		(col1, col2, col3, col4)
	values(left(@Varch,20), left(@Varch,30), left(@Varch,20) + '-' + left(@Varch,20), @IntVal)

	insert into SizetestPaged 
		(col1, col2, col3, col4)
	values(left(@Varch,20), left(@Varch,30), left(@Varch,20) + '-' + left(@Varch,20), @IntVal)
	if @i % 100000 = 0
		print @i
	select @i = @i + 1
end
set nocount off;

--exec sp_updatestats;

exec sp_spaceused 'SizetestNonPaged';
exec sp_spaceused 'SizetestPaged';


Webbo_1980 (Author) Commented:
Hi Everyone,
Thanks for your help, and apologies for the delay in replying, but I've decided to hire a consultant to come in and review this for me. I plan to highlight everything mentioned here and will get back shortly with a detailed outcome/conclusion.

Thanks
Webbo
Webbo_1980 (Author) Commented:
Thanks everyone for your help, but we never did find a solution and the project was cancelled for other reasons.
Question has a verified solution.
