sqldba2013 asked:
Database partitioning error in SQL Server

Hi All,

We have implemented database partitioning on one of our production SQL Server instances.

While executing a stored procedure, we receive the error below every time.

Please advise how to resolve it.

Msg 1834, Level 20, State 1, Line 5
The file 'H:\MSSQL10.XYZ\MSSQL\Data\H1_Partition\Raw_Partitions\DF_20091221.ndf' cannot be overwritten.  It is being used by database 'H1_Partition'

Msg 1828, Level 20, State 4, Line 5
The logical file name " DF_20091221" is already in use. Choose a different name.
Msg 622, Level 16, State 3, Line 17
The filegroup "FG_20091221" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
The statement has been terminated.
ASKER CERTIFIED SOLUTION by PadawanDBA
(solution text available to Experts Exchange members only)
sqldba2013 (ASKER):
Thanks, PadawanDBA, for the reply.

Is this error related to the SP?

I cannot post my business code in this forum.

Could you please give me some hints on this error?
Sure. My guess is that the SP generating the errors is performing partition maintenance. I would start by checking the list of files assigned to the partitioned table, see whether any of them have the logical name in question, and then check whether or not there is data in that file. It's possible that there is an error in the logic of the partition split.

Edit: I definitely understand not being able to post specifics in here. I wasn't sure if it was sensitive information or not!
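To check that, a query along these lines would show which files (if any) back each filegroup. This is only an illustrative sketch; filter or adjust it to your own naming convention:

```sql
-- Illustrative check: list each filegroup and the files assigned to it.
-- A filegroup with no matching file row is the condition Msg 622 complains about.
SELECT  fg.name            AS filegroup_name ,
        df.name            AS logical_file_name ,
        df.physical_name ,
        df.size * 8 / 1024 AS size_mb          -- size is stored in 8 KB pages
FROM    sys.filegroups AS fg
        LEFT JOIN sys.database_files AS df ON fg.data_space_id = df.data_space_id
ORDER BY fg.name;
```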
Hi PadawanDBA,

I'm attaching the output of the above query and the SP. Please advise me how to fix the error below.

I am increasing the points from 250 to 500.

Error description:

Msg 1834, Level 16, State 1, Line 5
The file 'I:\MSSQL10.HUBACKEND\MSSQL\Data\LU_EVDO_Raw_Partition\Raw_Partitions\DF_Raw_20131220.ndf' cannot be overwritten.  It is being used by database 'LU_EVDO_Raw_Partition'.  
Msg 1828, Level 16, State 4, Line 5
The logical file name "DF_Raw_20131220" is already in use. Choose a different name.
Msg 622, Level 16, State 3, Line 17
The filegroup "FG_Raw_20131220" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
The statement has been terminated.
Output.xlsx
SP.txt
Can you post the results of the following:

use LU_EVDO_Raw_Partition;

select
	*
from
	sys.filegroups
where
	name like '%20131220%';


select
	*
from
	sys.master_files
where
	physical_name like '%20131220%';

SELECT  pf.name AS pf_name ,
        ps.name AS partition_scheme_name ,
        p.partition_number ,
        ds.name AS partition_filegroup ,
        pf.type_desc AS pf_type_desc ,
        pf.fanout AS pf_fanout ,
        pf.boundary_value_on_right ,
        OBJECT_NAME( si.object_id ) AS object_name ,
        rv.value AS range_value ,
        SUM(
				CASE 
					WHEN si.index_id IN ( 1, 0 ) THEN p.rows
                    ELSE 0
				END
			) AS num_rows ,
        SUM( dbps.reserved_page_count ) * 8 / 1024. AS reserved_mb_all_indexes ,
        SUM(
				CASE ISNULL(si.index_id, 0)
					WHEN 0 THEN 0
					ELSE 1
				END
			) AS num_indexes
FROM    sys.destination_data_spaces AS dds
        JOIN sys.data_spaces AS ds ON dds.data_space_id = ds.data_space_id
        JOIN sys.partition_schemes AS ps ON dds.partition_scheme_id = ps.data_space_id
        JOIN sys.partition_functions AS pf ON ps.function_id = pf.function_id
        LEFT JOIN sys.partition_range_values AS rv ON pf.function_id = rv.function_id
                                                        AND dds.destination_id = CASE pf.boundary_value_on_right
                                                                                    WHEN 0 THEN rv.boundary_id
                                                                                    ELSE rv.boundary_id + 1
                                                                                END
        LEFT JOIN sys.indexes AS si ON dds.partition_scheme_id = si.data_space_id
        LEFT JOIN sys.partitions AS p ON si.object_id = p.object_id
                                            AND si.index_id = p.index_id
                                            AND dds.destination_id = p.partition_number
        LEFT JOIN sys.dm_db_partition_stats AS dbps ON p.object_id = dbps.object_id
                                                        AND p.partition_id = dbps.partition_id
WHERE
        ds.name like '%20131220%'
GROUP BY ds.name ,
        p.partition_number ,
        pf.name ,
        pf.type_desc ,
        pf.fanout ,
        pf.boundary_value_on_right ,
        ps.name ,
        si.object_id ,
        rv.value;

Please scrub out any sensitive information (I'm mostly curious about the existence of the filegroup and whether or not it has anything in it). I am looking through the SP now to see if I spot any issues with the boundary-point split logic.
Hello,

Please find the first query's output attached. I will share the remaining queries' output tomorrow.

Please review the attached SP and help me find the root cause of the issue.
1st-Query-Output.xlsx
Is this the SP that does the maintenance of creating the new filegroup and file, and then performs the maintenance on the partition function (i.e., splitting the upper boundary)? The reason I ask is that as I sift through the partitions that exist for the partition function in question, I see that you have the empty bucket (for lack of a better term) that handles anything out of range at the beginning, but I don't see any empty buckets at the end (beyond FG_Raw_20131220). If there's no splitting of the boundary point, it's going to have that null range value (where it's not tied to a boundary point), so everything that is range-right is dumping into that FG_Raw_20131219 filegroup. If you look at the row counts for that filegroup, notice how many more rows it has stored in it than the other daily filegroups. I have a suspicion that everything with a date greater than 12/19/2013 is getting dumped into that partition. Can you post the part that does the partition function maintenance? If you don't have anything that does that, that's fine; I can help you create it.
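For reference, one day's worth of that maintenance normally looks like the sequence below: add a filegroup, add a file to it, mark it as next used on the scheme, then split the function at the new boundary. Object names, sizes, and the path follow this thread's convention but are assumptions; verify them against your own database before running anything:

```sql
-- Sketch of one day's partition maintenance; names and path are assumed
-- from this thread's convention, not confirmed against the actual database.
ALTER DATABASE LU_EVDO_Raw_Partition ADD FILEGROUP FG_Raw_20131221;

ALTER DATABASE LU_EVDO_Raw_Partition ADD FILE (
    NAME = DF_Raw_20131221,
    FILENAME = N'I:\MSSQL10.HUBACKEND\MSSQL\Data\LU_EVDO_Raw_Partition\DF_Raw_20131221.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 512MB
) TO FILEGROUP FG_Raw_20131221;

ALTER PARTITION SCHEME PS_Raw_Daily NEXT USED FG_Raw_20131221;

ALTER PARTITION FUNCTION PF_Raw_Daily() SPLIT RANGE ( '2013-12-21' );
```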
Thanks for your comments.

To be honest, I don't have scripting knowledge, and I am new to SQL Server DBA work.

Please share the scripts, so I can post the output of the results.

Background of the issue: we have created a maintenance job that runs every day at 7 PM. The job includes 2 steps:
step 1: execute the SP (I have already shared it with you)
step 2: update stats

At the beginning, it took around 5 hours to complete the job because of the heavy data volume.

Now it throws an error after 3 hours and the job fails.
Mainly I am inquiring about the SPs:

PartitionMgr_CreateHourlyPartition
PartitionMgr_DropHourlyPartition

I don't think they're doing what they're supposed to be doing. I am going to post an example of what they should look like (that's the only way I can really think to explain it). In the interim, here's a fantastic set of videos that will give you some really good information on table partitioning: http://www.brentozar.com/archive/2013/01/sql-server-table-partitioning-tutorial-videos-and-scripts/.

It's going to take me a little while to weave creating the sample T-SQL specific to your example in with my work. Table partitioning definitely falls under the umbrella of advanced SQL Server administration, so it requires a bit of thought.
Sorry this took so long to get to you... In order to fix your particular situation, here's what you are probably going to need to do. You'll want to back up the database and restore it somewhere you can test this and ensure that it is working (absolutely not in production first), but I believe that to fix your problem you'll need to do something like this:

alter partition function PF_Raw_Daily() split range( '2013-12-20' );

In order to fix this for the remaining days between 2013-12-20 and today, you'll want something similar to the script below. I have everything set as PRINT instead of EXECUTE for the dynamic SQL parts, so it doesn't execute anything and shows you everything it's going to do; that way you can verify it looks good before you change the prints to executes. In fact, you'll probably want to make the maintenance job do the same thing this does. (It may very well error out when you try to run it; I was guessing at your architecture in some places.)

The following creates a partition helper schema and a partitioning helper view (the same query I gave you earlier):
CREATE SCHEMA [ph] AUTHORIZATION dbo;
GO

--Create a view to see partition information by filegroup
CREATE VIEW ph.FileGroupDetail
AS
SELECT  
        pf.name AS pf_name ,
        ps.name AS partition_scheme_name ,
        p.partition_number ,
        ds.name AS partition_filegroup ,
        pf.type_desc AS pf_type_desc ,
        pf.fanout AS pf_fanout ,
        pf.boundary_value_on_right ,
        OBJECT_NAME( si.object_id ) AS object_name ,
        rv.value AS range_value ,
        SUM(
				CASE 
					WHEN si.index_id IN ( 1, 0 ) THEN p.rows
                    ELSE 0
				END
			) AS num_rows ,
        SUM( dbps.reserved_page_count ) * 8 / 1024. AS reserved_mb_all_indexes ,
        SUM(
				CASE ISNULL(si.index_id, 0)
					WHEN 0 THEN 0
					ELSE 1
				END
			) AS num_indexes
FROM    
        sys.destination_data_spaces AS dds
              JOIN sys.data_spaces AS ds ON dds.data_space_id = ds.data_space_id
              JOIN sys.partition_schemes AS ps ON dds.partition_scheme_id = ps.data_space_id
              JOIN sys.partition_functions AS pf ON ps.function_id = pf.function_id
              LEFT JOIN sys.partition_range_values AS rv ON pf.function_id = rv.function_id
                                                        AND dds.destination_id = CASE pf.boundary_value_on_right
                                                                                    WHEN 0 THEN rv.boundary_id
                                                                                    ELSE rv.boundary_id + 1
                                                                                END
              LEFT JOIN sys.indexes AS si ON dds.partition_scheme_id = si.data_space_id
              LEFT JOIN sys.partitions AS p ON si.object_id = p.object_id
                                            AND si.index_id = p.index_id
                                            AND dds.destination_id = p.partition_number
              LEFT JOIN sys.dm_db_partition_stats AS dbps ON p.object_id = dbps.object_id
                                                        AND p.partition_id = dbps.partition_id
GROUP BY 
        ds.name ,
        p.partition_number ,
        pf.name ,
        pf.type_desc ,
        pf.fanout ,
        pf.boundary_value_on_right ,
        ps.name ,
        si.object_id ,
        rv.value;

The part to correct the partitioning issue:

use LU_EVDO_Raw_Partition_BACKUP;

declare @max_date date,
		@partition_max date,
		@curr_partition_date date,
		@new_boundary date,
		@new_boundary_nvarchar nvarchar( 10 ),
		@alterDatabaseFileGroup_DynamicSQL nvarchar( 4000 ),
		@alterDatabaseFile_DynamicSQL nvarchar( 4000 ),
		@alterPartitionScheme_DynamicSQL nvarchar( 4000 ),
		@alterPartitionFunction_DynamicSQL nvarchar( 4000 ),
		@filePath nvarchar( 256 ),
		@fileSize nvarchar( 256 ),
		@fileGrowth nvarchar( 256 );

select 
	@new_boundary = cast( 
		max
		( 
			case
				when range_value is not null then range_value
				else N'1970-01-01 00:00:00.000'
			end
		) as date ),
	@filePath = N'I:\MSSQL10.HUBACKEND\MSSQL\Data\LU_EVDO_Raw_Partition',
	@fileSize = N'1024MB',
	@fileGrowth = N'512MB',
	@partition_max = dateadd( dd, 5, getdate() )
from 
	ph.FileGroupDetail;

set @new_boundary = dateadd( dd, 1, @new_boundary );

while( @new_boundary <= @partition_max )
begin
	set @new_boundary_nvarchar = convert( nvarchar( 10 ), @new_boundary, 112 );

	set @alterDatabaseFileGroup_DynamicSQL = N'alter database LU_EVDO_Raw_Partition_BACKUP add filegroup FG_Raw_' + @new_boundary_nvarchar + N';';
	print @alterDatabaseFileGroup_DynamicSQL;
	set @alterDatabaseFile_DynamicSQL = N'alter database LU_EVDO_Raw_Partition_BACKUP add file( name = DF_Raw_' + @new_boundary_nvarchar + N', filename = ''' + @filePath + N'\DF_Raw_' + @new_boundary_nvarchar + N'.ndf'', size = ' + @fileSize + N', filegrowth = ' + @fileGrowth + N' ) to filegroup FG_Raw_' + @new_boundary_nvarchar + N';';
	print @alterDatabaseFile_DynamicSQL;
	set @alterPartitionScheme_DynamicSQL = N'alter partition scheme PS_Raw_Daily next used FG_Raw_' + @new_boundary_nvarchar + N';'
	print @alterPartitionScheme_DynamicSQL;
	set @alterPartitionFunction_DynamicSQL = N'alter partition function PF_Raw_Daily() split range( ''' + convert( nvarchar( 10 ), @new_boundary, 120 )  + N''' );';
	print @alterPartitionFunction_DynamicSQL;

	set @new_boundary = dateadd( dd, 1, @new_boundary );
end;

This will add partitions in perpetuity; it does not clean up the lower end of the partition. If you need any help with that part, let me know and I can help with that as well. Again, please make sure you're NOT RUNNING THESE IN PRODUCTION without testing them first. I don't want to be that guy who mucks up your database ;)

Edit:

I tried to make as much of it as possible parameterized to make it easy for you to change things according to what you need.
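If you do want to drop old days later, the usual sliding-window cleanup is the mirror image of the split: switch the oldest partition out to an empty staging table, merge the boundary away, then drop the now-empty file and filegroup. Everything below is a hypothetical sketch; the table names, the partition number, and the date are all assumptions you would replace with your own:

```sql
-- Hypothetical sliding-window cleanup sketch. Assumes dbo.Raw_Data is the
-- partitioned table and dbo.Raw_Data_Staging is an empty table with identical
-- structure created on the oldest filegroup. Verify the partition number first.
ALTER TABLE dbo.Raw_Data SWITCH PARTITION 2 TO dbo.Raw_Data_Staging;  -- metadata-only move
DROP TABLE dbo.Raw_Data_Staging;                                      -- discard the old rows

ALTER PARTITION FUNCTION PF_Raw_Daily() MERGE RANGE ( '2013-11-20' ); -- remove the old boundary

ALTER DATABASE LU_EVDO_Raw_Partition REMOVE FILE DF_Raw_20131120;     -- file must be empty
ALTER DATABASE LU_EVDO_Raw_Partition REMOVE FILEGROUP FG_Raw_20131120;
```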
Hi Again,

Please find the attached output results of partition helper schema & partition details.

I have not yet executed the command below.
alter partition function PF_Raw_Daily() split range( '2013-12-20' );

Can I go ahead with the above command?
Partition-details.xlsx
Partition-helper-schema.xlsx
I really would recommend performing that on a restored backup before you execute it in production. I'm pretty sure it's what you need, but again, I'm not certain enough to stake your production database on it without testing first...
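If you still need to set up that restored copy, a copy-only backup and restore along these lines would do it. The backup path and the MOVE targets are assumptions, and a partitioned database needs one MOVE clause per .ndf file (only the primary data file and the log are shown here):

```sql
-- Sketch only: paths and logical file names are assumptions, not confirmed.
BACKUP DATABASE LU_EVDO_Raw_Partition
TO DISK = N'I:\Backups\LU_EVDO_Raw_Partition.bak'
WITH COPY_ONLY, INIT;

RESTORE DATABASE LU_EVDO_Raw_Partition_BACKUP
FROM DISK = N'I:\Backups\LU_EVDO_Raw_Partition.bak'
WITH MOVE N'LU_EVDO_Raw_Partition'     TO N'I:\TestRestore\LU_EVDO_Raw_Partition.mdf',
     MOVE N'LU_EVDO_Raw_Partition_log' TO N'I:\TestRestore\LU_EVDO_Raw_Partition_log.ldf';
     -- ...plus one MOVE per DF_Raw_* .ndf file
```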
use LU_EVDO_Raw_Partition_BACKUP;

declare @max_date date,
		@partition_max date,
		@curr_partition_date date,
		@new_boundary date,
		@new_boundary_nvarchar nvarchar( 10 ),
		@alterDatabaseFileGroup_DynamicSQL nvarchar( 4000 ),
		@alterDatabaseFile_DynamicSQL nvarchar( 4000 ),
		@alterPartitionScheme_DynamicSQL nvarchar( 4000 ),
		@alterPartitionFunction_DynamicSQL nvarchar( 4000 ),
		@filePath nvarchar( 256 ),
		@fileSize nvarchar( 256 ),
		@fileGrowth nvarchar( 256 );

select 
	@new_boundary = cast( 
		max
		( 
			case
				when range_value is not null then range_value
				else N'1970-01-01 00:00:00.000'
			end
		) as date ),
	@filePath = N'I:\MSSQL10.HUBACKEND\MSSQL\Data\LU_EVDO_Raw_Partition',
	@fileSize = N'1024MB',
	@fileGrowth = N'512MB',
	@partition_max = dateadd( dd, 5, getdate() )
from 
	ph.FileGroupDetail;

set @new_boundary = dateadd( dd, 1, @new_boundary );

while( @new_boundary <= @partition_max )
begin
	set @new_boundary_nvarchar = convert( nvarchar( 10 ), @new_boundary, 112 );

	set @alterDatabaseFileGroup_DynamicSQL = N'alter database LU_EVDO_Raw_Partition_BACKUP add filegroup FG_Raw_' + @new_boundary_nvarchar + N';';
	print @alterDatabaseFileGroup_DynamicSQL;
	set @alterDatabaseFile_DynamicSQL = N'alter database LU_EVDO_Raw_Partition_BACKUP add file( name = DF_Raw_' + @new_boundary_nvarchar + N', filename = ''' + @filePath + N'\DF_Raw_' + @new_boundary_nvarchar + N'.ndf'', size = ' + @fileSize + N', filegrowth = ' + @fileGrowth + N' ) to filegroup FG_Raw_' + @new_boundary_nvarchar + N';';
	print @alterDatabaseFile_DynamicSQL;
	set @alterPartitionScheme_DynamicSQL = N'alter partition scheme PS_Raw_Daily next used FG_Raw_' + @new_boundary_nvarchar + N';'
	print @alterPartitionScheme_DynamicSQL;
	set @alterPartitionFunction_DynamicSQL = N'alter partition function PF_Raw_Daily() split range( ''' + convert( nvarchar( 10 ), @new_boundary, 120 )  + N''' );';
	print @alterPartitionFunction_DynamicSQL;

	set @new_boundary = dateadd( dd, 1, @new_boundary );
end;
                                            

I got the error below while executing the above script; please advise.

Invalid column name range_value
Did you run the queries that create the partition helper schema and partition helper view in the backup database?
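(A quick way to check, run inside the backup database: if this returns NULL, the view was created somewhere else.)

```sql
-- Returns the view's object_id in the current database, or NULL if it doesn't exist here.
SELECT OBJECT_ID(N'ph.FileGroupDetail', N'V') AS view_object_id;
```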
Yes, I have already sent the output results for the partition helper schema.

But I got an error for the above script: 'Invalid column name range_value'.
There shouldn't be any issue with that column unless there is an issue with the view, so let's try running a simple query to see what it yields:

select range_value from ph.FileGroupDetail;

Just let me know if you get values back or if you get an error on that one. This will be in the restored copy of the database. If you don't get anything back, or you get an error, please try the following as well (and let me know whether you get anything back):

select * from ph.FileGroupDetail
