Microsoft SQL Server 2005
71K Solutions | 25K Contributors

Microsoft SQL Server 2005 is a suite of relational database management system (RDBMS) products providing multi-user database access functionality. Component services include integration (SSIS), reporting (SSRS), analysis (SSAS), data quality, master data, T-SQL, and performance tuning. It includes support for managing XML data and allows a database server to be exposed over web services, using Tabular Data Stream (TDS) packets encapsulated within SOAP requests.


This article explains how to reset the password of the sa account on a Microsoft SQL Server.  The steps in this article work in SQL 2005, 2008, 2008 R2, 2012, 2014 and 2016.
Expert Comment by Mark Wills:
Good Article and very well presented. Gets my YES vote :)

Expert Comment by Abrienne Jonethan:
Very good and informative article. I have also gone through other blogs, as there are other ways to reset a lost SA password in Microsoft SQL Server. Apart from the manual way, you can also use free software like SQL Password Recovery.

This article explains how to recover data that has been deleted accidentally. Deleted rows really can be recovered, provided we know the time at which the data was deleted, by using the transaction log.
Expert Comment by Jason Clark:
Thanks for sharing this helpful article. In my scenario, though, I didn't have any recent backup of my deleted records, and the recovery model was set to simple. I then tried SysTools SQL Log Analyzer and successfully recovered the deleted records via the SQL transaction log file. It's really helpful if you don't have any backup of your database.


Expert Comment by Parag Deshmukh:
Hi Yashwant,

One thing I did not get: we take a full backup of the database before the rows are deleted, so restoring that backup should directly give us all the rows. Why, then, do we need a transaction log backup?
Use this article to create a batch file to backup a Microsoft SQL Server database to a Windows folder.  The folder can be on the local hard drive or on a network share.  This batch file will query the SQL server to get the current date & time and will then append that to the file name.
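The core of such a batch file is the T-SQL it sends to the server; a minimal sketch of the timestamped backup it would run (the database name and folder below are placeholders, not the article's actual script):

```sql
-- Build a file name like MyDatabase_2017-09-19_1530.bak from the server's clock.
DECLARE @stamp varchar(20), @file nvarchar(260)
SET @stamp = REPLACE(REPLACE(CONVERT(varchar(16), GETDATE(), 120), ':', ''), ' ', '_')
SET @file  = N'C:\Backups\MyDatabase_' + @stamp + N'.bak'

BACKUP DATABASE MyDatabase TO DISK = @file WITH INIT
```

A batch file would typically run this via sqlcmd, pointing the output path at either a local folder or a network share.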

Expert Comment by Shaad Parihar:
The above code is for one database. If I have multiple databases and want to use a single batch file, how do I add multiple databases to the code?
In this article I will describe the Copy Database Wizard method as one possible migration process, and I will add the extra tasks needed for an upgrade where they apply, so that both scenarios are covered.
Expert Comment by Yashwant Vishwakarma:
Nice & informative article Vitor !!!
Voted as Good Article :)

 
Author Comment by Vitor Montalvão:
Thank you Yashwant
In this article I will describe the Backup & Restore method as one possible migration process, and I will add the extra tasks needed for an upgrade where they apply, so that both scenarios are covered.
Expert Comment by Jim Horn:
Voted Yes.

 
Expert Comment by Yashwant Vishwakarma:
Nice Explanation !!!
In this article I will describe the Detach & Attach method as one possible migration process, and I will add the extra tasks needed for an upgrade where they apply, so that both scenarios are covered.
Expert Comment by Jim Horn:
Nicely done. Voted Yes.
Occasionally there is a need to clean table columns, especially if you have inherited legacy data. There are obviously many ways to accomplish that, including elaborate UPDATE queries with anywhere from one to numerous REPLACE functions (even within REPLACE functions). Over time this can be quite unreadable. It can also become a maintenance chore where this is not a single one-off effort.

This happened in a recent question here on EE, and I'd like to share the solution that came out of that thread. I have written a function called udf_CleanDataFn. It can be invoked within any query as follows:
 
SELECT dbo.udf_CleanDataFn(YourColumn, @Options)

@Options is a varchar(255) parameter, which can accept any of the following "options":
 
  • ltrim - This will left-trim the value in YourColumn.
  • rtrim - This will right-trim the value in YourColumn.
  • trim - This will both left- and right-trim the value.
  • nocomma - This will remove any and all commas.
  • nospace - This will remove any and all spaces.
  • nopunc - This will remove all standard punctuation (!@#$%, etc.).
  • nonum - This will remove all numbers (0-9).
  • noalpha - This will remove all alpha characters (A-Z and a-z).
  • alphaonly - This will remove everything except alpha characters.
  • numonly - This will remove everything except numbers.
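A minimal sketch of how a few of these options might be implemented (the logic below is illustrative only, not the author's actual function body):

```sql
CREATE FUNCTION dbo.udf_CleanDataFn_Sketch (@Value varchar(255), @Options varchar(255))
RETURNS varchar(255)
AS
BEGIN
    -- Each keyword found in @Options switches on one cleaning step.
    IF @Options LIKE '%nocomma%' SET @Value = REPLACE(@Value, ',', '')
    IF @Options LIKE '%nospace%' SET @Value = REPLACE(@Value, ' ', '')
    IF @Options LIKE '%trim%'    SET @Value = LTRIM(RTRIM(@Value))
    -- numonly: strip every character that is not a digit.
    IF @Options LIKE '%numonly%'
        WHILE PATINDEX('%[^0-9]%', @Value) > 0
            SET @Value = STUFF(@Value, PATINDEX('%[^0-9]%', @Value), 1, '')
    RETURN @Value
END
```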

Following are a few working examples that demonstrate the different results. This first example takes out only the commas:
 
SELECT dbo.udf_CleanDataFn('This !  is *  a  13,21  @  test', 'nocomma')
Result:

Expert Comment by Nick Sawe:
OK thanks, I will give it a go. Thanks for sharing this, by the way!

 
Author Comment by dsacker:
You're welcome. If it proves helpful, please click the "Helpful" button. :)
If you have heard of RFC822 date formats, they can be quite a challenge in SQL Server. RFC822 is an Internet standard format for email message headers, including all dates within those headers. The RFC822 protocols are available in detail at:
 

The problem is that RFC822 date formats do not always fit within SQL Server's valid date formats. The CONVERT/CAST functions within SQL Server will only convert a limited variety of date formats. When trying to convert RFC822 date formats, some of the RFC822 accepted formats will result in the well-known T-SQL error:
 
Msg 241, Level 16, State 1, Line 1 
Conversion failed when converting date and/or time from character string.

According to RFC822: Standard for ARPA Internet Text Messages, as long as a date adheres to the constraints in section 5 of the RFC822 protocols, it is a valid RFC822 date. Yet many RFC822 dates fail to convert into a T-SQL datetime format.

For instance, the following is a valid RFC822 date:
 
Wed, 27 Sep 2006 21:49:19 GMT

This format (and a number of other RFC822 formats) will not convert to a datetime type. In addition, SQL Server does not inherently recognize or convert time zone codes. (See footnote 1 at the end of this article for a list of valid RFC822 date formats.)
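The failure is easy to reproduce with a plain CONVERT (a quick illustration, not the article's function):

```sql
-- Fails with Msg 241: CONVERT does not accept the day name or time-zone tokens.
SELECT CONVERT(datetime, 'Wed, 27 Sep 2006 21:49:19 GMT')

-- With the day name and zone stripped, the remainder converts cleanly.
SELECT CONVERT(datetime, '27 Sep 2006 21:49:19')
```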

I have written a function called …
Author Comment by dsacker:
Appreciate that. And thanks to Eric. He gave me some good pointers along the way.
 
Expert Comment by Jim Horn:
I ran the code blocks and the result was excellent. Nice job.
The SQL Server engine lets you use a Windows account or a SQL Server account to connect to a SQL Server instance. This can be configured during the SQL Server installation, or afterwards in the Server Authentication section of the Server Properties screen, Security tab:
(screenshot: SQLAuthenticationMode.PNG)
By default, Windows Authentication mode is selected, since it is the one recommended by Microsoft and also the more secure one: it uses Windows security tokens, so no passwords are passed over the network when connecting to the SQL Server instance. It is also easier to use, since the user doesn't need to remember one more login name and password, and easier to maintain, since logins are managed by Active Directory (AD) and respect the user's AD policies.
The creation of a Windows authentication login is as simple as providing the domain and user name as the Login name:
(screenshot: DomainLogin.PNG)
The same can be performed using the following T-SQL command:
CREATE LOGIN [MyDomain\DomainUserName] FROM WINDOWS


In Windows Authentication mode the sa login is disabled, so ensure that someone else has the sysadmin role.


SQL Server logins are managed inside the SQL Server engine. The passwords are stored in the SQL Server master database, and when connecting to the SQL Server instance the user credentials are passed over the network. So, when creating a SQL Server authentication login, a login name and password need to be provided:
(screenshot: MSSQLLogin.PNG)
The same can be performed using the following T-SQL command:
CREATE LOGIN [MyLoginName] WITH PASSWORD = 'YourStrongPasswordHere'

Author Comment by Vitor Montalvão:
Submitted
by Mark Wills

Attending one of Rob Farley's seminars the other day, I heard the phrase "The Accidental DBA" and fell in love with it. It got me thinking about the plight of the newcomer to SQL Server...  So if you are the accidental DBA, or, simply new to SQL Server then there are a few concepts we can share to make your life easier.

Based on the frequency of questions we see in Experts-Exchange, I thought a 'light' discussion about the Transaction Log would be a good place to start.

Microsoft's SQL Server is a great database. Generally, it can look after itself, but there are one or two aspects that will need your attention. You can interactively access, configure, manage, administer, and develop your databases using SQL Server Management Studio (SSMS). With SSMS, the database administrator (i.e., you) can administer any of the components that we will be discussing.  Now, this article is not an introduction into SSMS, but there are a couple of tools you will need to know. For managing your database, in SSMS, use the Object explorer to right click on your database and then Properties for a range of settings and information, or Tasks to shrink or backup files. Also in Object Explorer is the Management group, expand that for Maintenance plans.

A SQL database is made up of three types of files. The two main files are the primary data file, with the extension MDF, and the transaction log file, with the extension LDF. The third is an optional secondary data file and …
Author Comment by Mark Wills:
It is a little pedantic, but if it is the difference between understanding a very important differentiator, then it is an important addition.

I am thinking that a separate paragraph is required. Then again, our comments are not "private", so the thread does provide that clarity.

The more important factor is: if running the FULL recovery model, then you really have to do transaction log backups, otherwise your transaction log will continue to grow. So your transaction log really must be managed by doing those transaction log backups. It is not just for recoverability; backups are an important part of database health. Of course, running the FULL recovery model is the reason in the first place, to give you the highest possible mode for recoverability. A bit of a catch-22 in some regards.
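The log backups Mark describes boil down to a single statement run on a schedule (the database name and path here are placeholders):

```sql
-- Under the FULL recovery model, only log backups let the log file reuse its space.
BACKUP LOG MyDatabase
    TO DISK = N'C:\Backups\MyDatabase_log.trn'
    WITH INIT
```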
 
Expert Comment by Yashwant Vishwakarma:
Nice share!!! Voted Yes :-)
Waiting for more from you in the future.

Data architecture is an important aspect of the Software as a Service (SaaS) delivery model. This article is a study of the database of a single-tenant application that could be extended to support multiple tenants. The application is web-based, developed using Microsoft technologies – classic ASP, ASP.Net, VB.net and SQL Server 2005.

Basic requirements:
The key aspects to be considered while moving to multi-tenant data architecture for the application are given below.
Firstly, data isolation should be maintained between different tenants. This is the basic concept of multi-tenancy where data of one tenant should not be available to other tenants.
Secondly, we need to consider some data statistics in the application. Data is inserted into primary tables every 20 seconds and there could be around 100,000 rows per day in each of these tables. In future, there should be provision for data to be added to these tables every 4 seconds. There are around 20-25 tables per tenant and around 5 common tables.
Finally, one of the most important requirements is that changes should be done with no impact to infrastructure as it would mean additional costs.

Multi-tenant data architecture:
With SQL Server (as this is the database being used), the approaches to implement multi-tenancy are Separate Database, Shared Database Separate Schema, and Shared Database Shared Schema (refer to Multi-Tenant Data Architecture on MSDN).
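In the Shared Database Shared Schema approach, tenant isolation typically rests on a tenant key in every shared table. A simplified sketch (table and column names below are illustrative, not the application's actual schema):

```sql
CREATE TABLE dbo.Readings (
    TenantID  int           NOT NULL,  -- identifies the owning tenant
    ReadingID bigint        NOT NULL,
    ReadingAt datetime      NOT NULL,
    Value     decimal(18,4) NOT NULL,
    CONSTRAINT PK_Readings PRIMARY KEY (TenantID, ReadingID)
)
-- Every query then filters on the caller's tenant:
-- SELECT ReadingAt, Value FROM dbo.Readings WHERE TenantID = @TenantID
```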
In SQL Server, when rows are selected from a table, does it retrieve data in the order in which they were inserted? Many believe this is the case.

Let us examine this for ourselves with an example.

To get started, use the following script, which I ran on my server to create a table OrderTest in database TempDB and to insert 4 rows into the table.
USE TempDB 
GO
CREATE TABLE OrderTest(N1 INT NOT NULL, C1 VARCHAR(100) NOT NULL)
GO 
INSERT INTO OrderTest(N1, C1) VALUES(1, 'First Insert') 
GO 
INSERT INTO OrderTest(N1, C1) VALUES(10, 'Second Insert') 
GO 
INSERT INTO OrderTest(N1, C1) VALUES(2, 'Third Insert') 
GO 
INSERT INTO OrderTest(N1, C1) VALUES(4, 'Fourth Insert') 
GO 



We are ready for the test.  Let us select data from the table.
USE TempDB 
GO
SELECT * FROM OrderTest 
GO


(screenshot: TestOutput1) The results are indeed retrieved in the order in which they were inserted. Convinced?

Now I am going to add a primary key to the table.
USE TempDB 
GO
ALTER TABLE OrderTest ADD PRIMARY KEY(N1)
GO 



With the primary key in place, let us once again select data from the table using the previous select statement.
USE TempDB 
GO
SELECT * FROM OrderTest 
GO


(screenshot: TestOutput2) The rows are NOT returned in the order in which they were inserted! Surprising?

What made the difference?  

In the first case, the SQL Server query optimizer decided to do a table scan, while it opted for a clustered index scan in the latter case. See the following execution plans for an illustration:
(screenshots: ExecutionPlan1, ExecutionPlan2)
The verdict.

SQL Server does not guarantee any order of output (unless you explicitly append an ORDER BY clause to the query to order the output in the desired way). Further, it is the SQL Server query optimizer that decides how the data should be retrieved. Hence, never assume that data will be returned in the order in which it was inserted.
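The guarantee comes only from the clause itself; for the test table above:

```sql
-- With an explicit ORDER BY, the output order is defined regardless of the plan.
SELECT * FROM OrderTest ORDER BY N1
```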
INTRODUCTION:

While tying your database objects into builds and your enterprise source control system takes a third-party product (like Visual Studio Database Edition or Red-Gate's SQL Source Control), you can achieve some protection using a single stored procedure that periodically and automatically takes a snapshot of all your objects and archives them. At any point in time, you can see the history for a given database object, and it makes it trivial to restore a previous version of an altered or dropped stored procedure.

This is accomplished using a stored procedure (provided in full at the end of the article) that's created in the master database, and then scheduled to run at whatever interval you choose - anywhere from once/minute to once/week, or anywhere in between.

This stored procedure goes through the following steps:

1. If the Master.dbo.coSourceControl table (used to store the history) doesn't exist, it creates it.
2. For each database on the server (so new databases are added automatically), it performs steps 3-5:
3. It grabs the text contents of all the user objects (not flagged as "IsMsShipped").
4. It compares the contents of each to the last known copy (if there is one).
5. If the object is new or has changed, it adds a new copy to the source control table in master.
6. At the end, it outputs the number of objects updated across all databases.

The history is kept in a …
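Step 3 can be sketched with the catalog views involved (a simplified fragment, not the procedure's full code):

```sql
-- Text of all user-created modules (procedures, views, functions, triggers),
-- skipping anything flagged as shipped by Microsoft.
SELECT o.name, m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE OBJECTPROPERTY(o.object_id, 'IsMsShipped') = 0
```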

0. Introduction

When running hybrid database environments, you often need to query some data from a remote db of any type, while being connected to your MS SQL Server database.
Problems start when you try to combine that with some "user input" passed using parameters, usually in conjunction with stored procedures.

To keep things simple I won't refer to a stored procedure in the examples below; instead I will use some script that could run on its own.

So, consider this rough setup:
A SQL 2005+ instance/database named SQL_DB
Some (remote) Oracle (or whatever type) database named REMOTE_DB
A linked server configured on the MS SQL 2005+ instance to that remote db, named LINKED_DB
On that (remote) database, a table named TEST.ACCOUNT, with a couple of columns covering all possible data types
I assume that the reader knows how to set up a linked server; otherwise, see Creating Linked Servers


1. Usual techniques

There are several techniques to query the remote table, for example, to get the balance from the account 12345:
SELECT balance FROM LINKED_DB..TEST.ACCOUNT WHERE id = 12345


or, with the parameter method:
declare @id int 
set @id = 12345
SELECT balance FROM LINKED_DB..TEST.ACCOUNT WHERE id = @id


Syntax note: LINKED_DB..TEST.ACCOUNT is the so-called four-part name, built as: <linked_server_name>.<database_name>.<schema_name>.<table_name>.
For some linked server drivers or providers, the database_name is not needed or even not supported, and the schema_name might not be needed either; so if the linked server is configured to connect with the TEST login, you might even write:
SELECT balance FROM LINKED_DB...TEST_ACCOUNT

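When the four-part name becomes awkward with parameters, one alternative (a sketch under the same setup, not from the article) is EXECUTE ... AT, which ships the query and its parameter to the linked server:

```sql
DECLARE @id int
SET @id = 12345
-- The ? placeholder is bound from the parameter list; the query runs remotely.
EXEC ('SELECT balance FROM TEST.ACCOUNT WHERE id = ?', @id) AT LINKED_DB
```

Note that this requires the linked server to have the RPC Out option enabled.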

Author Comment by Guy Hengel [angelIII / a3]:
Some possibly useful information in this context is the WITH RESULT SETS feature of the EXECUTE statement:
http://technet.microsoft.com/en-us/library/ms188332.aspx
by Mark Wills

PIVOT is a great facility. Well that was the opening line in my previous similarly named article : http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SQL-Server-2005/A_653-Dynamic-Pivot-Procedure-for-SQL-Server.html

But what if you want to pivot multiple columns, or maybe multiple aggregates, or both, or don't have the PIVOT statement?  That's exactly what happened recently in a question where the SQL 2008 server was actually pointing to a SQL 2000 database.

So how do we solve such a problem?  Simple: we go back to basics and write our own pivot routine and that is exactly what the solution was for the above situation (seen here: http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SQL-Server-2005/Q_26675089.html).

Basically, we need to get the distinct values which will become the new columns, aggregate something for those columns, and then do a GROUP BY with CASE statements to check for those columns. Sounds easy, right? Well, not quite. For this specific problem it can be easier than the approach actually taken, but when you start playing with dates (such as month names), you have to make sure that the correct sequence is achieved, and that adds a few challenges when using data to help build the dynamic SQL.

So let's get started. But first we need some data...

-- first our customer masterfile (yeah I know it is pretty ordinary)

CREATE TABLE 


Author Comment by Mark Wills:
Hi keithfearnley,

Thanks for the feedback and for the additional insights.

Apologies for not acknowledging your comment earlier; I was looking at extending the routines for an MTD and YTD requirement and only just noticed your comment.

Cheers,
Mark
 
Expert Comment by Leo Torres:
Hello Mark,
I noticed that you used master..spt_values. Why will it work with Month, Day, and Year, but not Quarter? I have added some commented code, but only to analyze:

USE [AdventureWorks2008R2]
GO
/****** Object:  StoredProcedure [dbo].[uPivot]    Script Date: 02/28/2012 13:56:48 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[uPivot] (
                 @sourcedata varchar(8000),
                 @Pivot_Over_Source_Column varchar(2000), 
                 @pivot_value_column varchar(2000), 
                 @Pivot_Value_Aggregates varchar(2000),    
                 @pivot_value_format varchar(500) = NULL)
AS

/*
@sourcedata is any legitimate select statement
@Pivot_Over_Source_Column is the column(s) that define the row
@pivot_value_column is the source of data giving the new column names
@Pivot_Value_Aggregates is what the individual cells will contain NB mandatory square brackets immediately after each aggregate
@pivot_value_format is used as the convert style for @pivot_value_column with the additions of 'day' and 'month' and 'year'

-----------------------------+-----------------------+-----------------------+
Pivot_Over_Source_Column     |Pivot_Value_Column 1   |Pivot_Value_Column 2   |
-----------------------------+-----------------------+-----------------------+
row1                         |Pivot_Value_Aggregates |Pivot_Value_Aggregates |
-----------------------------+-----------------------+-----------------------+
row1                         |Pivot_Value_Aggregates |Pivot_Value_Aggregates |
-----------------------------+-----------------------+-----------------------+

Now those aggregates in @Pivot_Value_Aggregates (yes there can be more than one)
must be expressed as a legit aggregate followed by [] or [suffix] (no spaces)

e.g.	1) sum(my_column)[]       
	2) count(my_column)[_count], max(my_column)[_max]

The first example will simply have the headings as defined by @pivot_value_column
The second will have two columns the first as per @pivot_value_column plus the suffix '_count' 
the second column as per @pivot_value_column plus the suffix '_max'

*/

Set nocount on

declare @sql varchar(8000)
declare @columns varchar(8000)
declare @id int

-- we need to use a few temp tables to unpack our delimited list of columns.
-- while these temp tables will survive just within this stored procedure, 
-- I have an old habit of dropping first - mainly because I write these things initially just as T-SQL code then turn it into a SP

If object_id('tempdb..#pivot_column_name','U') is not null 
	drop table #pivot_column_name
If object_id('tempdb..#pivot_column_style','U') is not null 
	drop table #pivot_column_style
If object_id('tempdb..#pivot_column_data','U') is not null 
	drop table #pivot_column_data

create table #pivot_column_name (id int identity,pivot_column_name varchar(255))
create table #pivot_column_data (id int identity,pivot_column_name varchar(255),pivot_column_data varchar(255),pivot_column_donor varchar(255))
create table #pivot_column_style (id int identity,pivot_column_style varchar(5))

-- now unpack our delimited list of pivot columns

insert into #pivot_column_name (pivot_column_name)
select substring(pivot_value_column,number,charindex(',',pivot_value_column+',',number)-number) 
from (select @pivot_value_column as pivot_value_column) p
cross join (select number from master..spt_values where type = 'p')  n
where substring(','+pivot_value_column,number,1) = ','
and number <= len(pivot_value_column)


--Set @sql =
--'insert into #pivot_column_name (pivot_column_name)
--select substring(pivot_value_column,number,charindex('','',pivot_value_column+'','',number)-number) 
--from (select '+@pivot_value_column+' as pivot_value_column) p
--cross join (select number from master..spt_values where type = ''p'')  n
--where substring('',''+pivot_value_column,number,1) = '',''
--and number <= len(pivot_value_column)'

--Print @sql

--exec @sql

--insert into #pivot_column_style (pivot_column_style)
Select a.* Into #temp from (
select substring(pivot_column_style,number,charindex(',',pivot_column_style+',',number)-number) as id,n.high,n.low,n.number,n.status,n.type
from (select isnull(@pivot_value_format,'Quarter') as pivot_column_style) p
cross join (select s.high,s.low,s.number,s.status,s.type from master..spt_values s /*where type = 'p'*/)  n
where substring(','+pivot_column_style,number,1) = ','
and number <= len(pivot_column_style)
) a
Select * from #temp

insert into #pivot_column_style (pivot_column_style)
select substring(pivot_column_style,number,charindex(',',pivot_column_style+',',number)-number)
from (select isnull(@pivot_value_format,'') as pivot_column_style) p
cross join (select number from master..spt_values where type = 'p')  n
where substring(','+pivot_column_style,number,1) = ','
and number <= len(pivot_column_style)
 --and unpack our delimited list of formats needed for those columns

--Set @sql =
--'insert into #pivot_column_style (pivot_column_style)
--select substring(pivot_column_style,number,charindex('','',pivot_column_style+'','',number)-number)
--from (select isnull('+@pivot_value_format+','''') as pivot_column_style) p
--cross join (select number from master..spt_values where type = ''p'')  n
--where substring('',''+pivot_column_style,number,1) = '',''
--and number <= len(pivot_column_style)'

--Print @sql

--exec @sql

Select * from #pivot_column_name
Select * from #pivot_column_style

-- now we can examine our source data to get the unique values for the new columns

Select @id=min(id) from #pivot_column_name

While @id>0
Begin
    Select @pivot_value_column=pivot_column_name from #pivot_column_name where id = @id
    set @pivot_value_format = isnull((select pivot_column_style from #pivot_column_style where id = @id),'')
    set @sql = 'select distinct ''' + -- next column is the definition we need to use when comparing the actual data in the final query
               case when isnumeric(isnull(@pivot_value_format,'')) = 1 then 'case when convert(varchar(255),' + @pivot_value_column +','+@pivot_value_format +')'
                    when isnull(@pivot_value_format,'') = 'Day' then 'case when datename(Day,' + @pivot_value_column +')'
                    when isnull(@pivot_value_format,'') = 'Month' then 'case when datename(Month,' + @pivot_value_column +')'
                    when isnull(@pivot_value_format,'') = 'Year' then 'case when datename(Year,' + @pivot_value_column +')'
                    when isnull(@pivot_value_format,'') = 'Quarter' then 'case when datename(Quarter,' + @pivot_value_column +')'

                    else 'case when convert(varchar(255),' + @pivot_value_column + ')'
               end
               +''','+    -- next column is the data we need to use when comparing the actual data in the final query
               case when isnumeric(isnull(@pivot_value_format,'')) = 1 then ''', convert(varchar(255),' + @pivot_value_column +','+@pivot_value_format +')'
                    when isnull(@pivot_value_format,'') = 'Day' then ' datename(Day,' + @pivot_value_column +')'
                    when isnull(@pivot_value_format,'') = 'Month' then ' datename(Month,' + @pivot_value_column +')'
                    when isnull(@pivot_value_format,'') = 'Year' then ' datename(Year,' + @pivot_value_column +')'
                    when isnull(@pivot_value_format,'') = 'Quarter' then ' datename(Quarter,' + @pivot_value_column +')'

                    else ' convert(varchar(255),' + @pivot_value_column + ')'
               end
               +','+ @pivot_value_column 
               +' from ('+ @sourcedata +') srce order by '+@pivot_value_column
    Print @sql
    insert into #pivot_column_data(pivot_column_name,pivot_column_data,pivot_column_donor)
    exec (@sql)
    Select @id=min(id) from #pivot_column_name where id > @id
end 
Select * from #pivot_column_data
-- Because of dates we need to keep the original datetime to get correct chronology, but that could cause dupes, so now we must remove any dupes from our column list

delete #pivot_column_data -- kill any dupes
where exists (select * from #pivot_column_data d2 where d2.id > #pivot_column_data.id and d2.Pivot_column_name = #pivot_column_data.pivot_column_name and d2.pivot_column_data = #pivot_column_data.pivot_column_data)

-- with a distinct list of new columns, we can now construct the aggregate values for each of the columns

select @columns = isnull(@columns+',',',') + replace(replace( @pivot_value_aggregates,'(','(' + Pivot_Column_name +' =''' + pivot_column_data + ''' THEN ' ),')[', ' ELSE '''' END) as [' + pivot_column_data )
from #pivot_column_data

-- with the columns fully defined, it becomes a fairly simple group by 

set @sql = 'select ' + @pivot_over_source_column+@columns + ' from ('+ @sourcedata +') srce GROUP BY ' + @pivot_over_source_column
exec (@sql)

-- now clean up - unnecessary inside a stored procedure, but again that old habit...

if object_id('tempdb..#pivot_column_name','U') is not null drop table #pivot_column_name
if object_id('tempdb..#pivot_column_data','U') is not null drop table #pivot_column_data
If object_id('tempdb..#pivot_column_style','U') is not null drop table #pivot_column_style

-- and that is that.


So every once in a while at work I am asked to export data from one table and insert it into another on a different server. I hate doing this. There are so many different tables and data types. Some column data needs to be quoted and some doesn't. What if the table has an identity column? I've taken a few steps to make things easier for me.

First of all, this is a work in progress. There are bound to be little bugs that I haven't worked out yet, because I literally just wrote it 10 minutes ago when asked to do this. But as updates happen, I will post new iterations of "SQL Insert Generator".

Secondly, this is for SQL 2005 and later only.  Trying to do this working within the limitations of the varchar() and nvarchar() data types in SQL 2000 would have been too much of a pain in the butt.  But if you are working with fairly narrow tables then it should work in SQL 2000.  Just change the nvarchar(MAX) variables to nvarchar(4000).

IF object_id('dbo.up_GenerateInserts') IS NULL
     EXEC ('create procedure dbo.up_GenerateInserts as set nocount on')
GO

ALTER PROCEDURE dbo.up_GenerateInserts
      @TableSchema       sysname
     ,@TableName         sysname
     ,@IDField           sysname
     ,@IDMin             SQL_VARIANT = NULL
     ,@IDMax             SQL_VARIANT = NULL
     ,@IncludeIdent      BIT = 0
     ,@IncludeIfNot      BIT = 1
AS
SET NOCOUNT ON

DECLARE 
      @InsertSQL         NVARCHAR(MAX)
     ,@SelectSQL         NVARCHAR(MAX)
     ,@IFNotSQL     


Expert Comment by Patrick Matthews:
Brandon,

Thanks for sharing; this is brilliant! I wish I had had something like this available the few times I had to script out the complete recreation of a table, including the values :)

Patrick
When working with XML, a particularly difficult task is removing all the elements or attributes that contain no data.

I would like to share a set of recursive MSSQL stored procedures that I made to remove those elements from an XML document. I used XPath queries and functions to do that, in combination with dynamic code. The names of the variables are in Spanish, so the code may take a little effort to follow; I will try to explain.

Because in a given XML we don't know how many elements and attributes we have, or what their names are, I use the XPath local-name function to find out the names of the nodes, and the modify function to delete the current empty element or the current empty attribute. I also use dynamic SQL so the database engine lets me execute the XPath statements even though I don't know the names of the elements. The code is a little complex because I have used recursion to pass through the entire XML document.

The following stored procedure deletes a given empty attribute from a given element in a given XML. Note that the XML must be passed in xml type.

An empty attribute means: <Node emptyattribute=""/>
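To illustrate the core operation in isolation, here is a simplified one-shot sketch using the xml modify() method (this is not the recursive procedure itself; the element and attribute names are illustrative):

```sql
DECLARE @x xml
SET @x = '<Root><Node emptyattribute="" keep="1"/></Root>'

-- delete every attribute of Node whose value is empty
SET @x.modify('delete /Root/Node/@*[. = ""]')

SELECT @x  -- Node retains only keep="1"
```

The stored procedures below generalize this idea with local-name() and dynamic SQL so the paths need not be known in advance.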

 
/******************************* spXMLDeleteEmptyAttribute *************************************************/
if exists (select * from sysobjects where id = object_id('dbo.spXMLDeleteEmptyAttribute') and type = 'P') drop procedure dbo.spXMLDeleteEmptyAttribute
GO             
CREATE 

I've encountered valid database schemas that do not have a primary key.  For example, I use LogParser from Microsoft to push IIS logs into a SQL database table for processing and analysis.  However, occasionally, due to user error or a scheduled-task design flaw, duplicates have been created.

All essential business data should always be protected with constraints such as a primary key or a unique index.

In my case, one database table with the IIS logs had over 100,000,000 rows, not counting duplicates!  The usual solutions -- to create a temporary or duplicate table, a while loop, or a dreaded resource-consuming cursor -- are not always practical.

So in SQL 2005 I've found a very useful function that allowed me to delete duplicates with one query!

First off you have to determine if there are duplicates.  For these examples I'm going to create the following table:
CREATE TABLE [dbo].[Duplicates]([DuplicatesNonNull] [varchar](50) NOT NULL) ON [PRIMARY]


Next, I'll populate it with duplicates:
INSERT INTO Duplicates (DuplicatesNonNull) VALUES ('Duplicate1')
INSERT INTO Duplicates (DuplicatesNonNull) VALUES ('Duplicate2')
INSERT INTO Duplicates (DuplicatesNonNull) VALUES ('Duplicate3')
INSERT INTO Duplicates (DuplicatesNonNull) VALUES ('Unique1')
INSERT INTO Duplicates (DuplicatesNonNull) VALUES ('Duplicate2')
INSERT INTO Duplicates (DuplicatesNonNull) VALUES ('Duplicate1')
INSERT INTO Duplicates (DuplicatesNonNull) VALUES 

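The single-query delete the article builds toward can be sketched with the ROW_NUMBER() function introduced in SQL 2005 (a sketch, assuming that is the function in question):

```sql
-- Number the rows within each group of identical values,
-- then delete every row after the first in each group
;WITH Ranked AS
(
    SELECT DuplicatesNonNull,
           ROW_NUMBER() OVER (PARTITION BY DuplicatesNonNull
                              ORDER BY DuplicatesNonNull) AS RowNum
    FROM dbo.Duplicates
)
DELETE FROM Ranked
WHERE RowNum > 1
```

Because the CTE targets a single table, the DELETE removes the underlying duplicate rows directly, with no temporary table, loop, or cursor.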

 
Author Comment by Ted Bouskill:
@cyberwiki: I don't mind constructive discussion, but portions of your comment were rude, and yes, I am going to report it to the moderators.

NOTE: In your examples you have a GUID which is unique.  In that case I wouldn't use CHECKSUM() which means I think you are missing my point.  I'm not saying the CHECKSUM() is a final solution and yes it can be abused (like anything can).

Check out my test of your example when I used the GUIDs as strings (which is likely in a log), along with some other examples, in the spirit of friendly discussion.
SELECT CHECKSUM('3DB7D309-A8F4-47C4-BA90-0CB458B44CB0', 'avb', 'BCZ', 'ckz')
UNION ALL
SELECT CHECKSUM('EFE7F2C5-19F9-42B9-9C16-21BED41E882B', 'AAE', 'BRQ', 'CAW')

Results: 
1711681669
390310668

SELECT CHECKSUM(CAST('3DB7D309-A8F4-47C4-BA90-0CB458B44CB0' AS uniqueidentifier), 'avb', 'BCZ', 'ckz')
UNION ALL
SELECT CHECKSUM(CAST('EFE7F2C5-19F9-42B9-9C16-21BED41E882B' AS uniqueidentifier), 'AAE', 'BRQ', 'CAW')

Results:
107127020
107127020

SELECT BINARY_CHECKSUM(CAST('3DB7D309-A8F4-47C4-BA90-0CB458B44CB0' AS uniqueidentifier), 'avb', 'BCZ', 'ckz')
UNION ALL
SELECT BINARY_CHECKSUM(CAST('EFE7F2C5-19F9-42B9-9C16-21BED41E882B' AS uniqueidentifier), 'AAE', 'BRQ', 'CAW')

Results:
1025885929
1023845076


 
Expert Comment by Racim BOUDJAKDJI:
@tedbilly

I believe that ciberwiki is simply trying to point out the limitations of using a CHECKSUM-based approach for dealing with duplicate tuples, and most of his comments are perfectly valid.

Knowing these limitations does not say it is a bad solution, it simply completes it.  

No bad intent in that.

Regards...
Introduction
This article provides a solution for an error that might occur when installing a new SQL 2005 64-bit cluster. It assumes that you are fully prepared to complete the installation, and describes the error as it occurred during an installation I eventually completed.

The problem
All the settings for the installation were made and the installation started. The entire installation progressed correctly except at the very end, when it gave the following error:

error

The installation was blocked with an error that wasn't clear. Clicking the Retry button didn't help, so "Cancel" was the only option. The installation rolled back, and I'd lost about 45 minutes of my time monitoring a failed installation.

Let's review the configuration of the cluster node that had this problem. The node had the following hardware.
Note: Not every item of hardware is listed, only those that could affect the installation.

DL 580 G5
34 GB memory
4 x Intel Xeon E7450 (6 core) CPU
Several logical disks
Both nodes for the cluster have the same hardware specifications.

I started puzzling over what could be the problem. Did I use the wrong settings for the installation? The log files didn't give me any direction as to where the problem could be; they only showed that the installation stopped while starting the services.

In a desperate attempt to fix things, I tried to do the installation again but the same error occurred at the same point.

The solution

I am showing a way to read/import Excel data into a table using SQL Server 2005.

Suppose there is an Excel file "Book1" at location "C:\temp" with columns "First Name" and "Last Name".

To import this Excel data into a table, we will use the OPENROWSET method.

OPENROWSET contains all the information required to connect to a remote data source.
It can also be used (alternatively) to access a table on a linked server.

select * 
from OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\temp\book1.xls', 'SELECT * FROM [Sheet1$]')



This will read all the data from sheet 1 of the Excel file. If the workbook had a named worksheet, we would need to use the actual name.
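To go one step further and land the spreadsheet data in a table, the same OPENROWSET call can feed a SELECT ... INTO (dbo.ImportedNames is a hypothetical target name):

```sql
-- Create and populate a new table from the spreadsheet in one statement
SELECT [First Name], [Last Name]
INTO   dbo.ImportedNames
FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;Database=C:\temp\book1.xls',
                  'SELECT * FROM [Sheet1$]')
```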

But this is an ad hoc query, and so it might not be permitted by SQL Server 2005 (ad hoc access was enabled by default in SQL Server 2000).

So we will need to go into the SQL Server configuration and enable ad hoc queries. Because it is an advanced option, we first need to enable the advanced options, and then we can deal with enabling the ad hoc query setting.

To configure the SQL Server for these advanced statements, execute the following T-SQL in a query window:
sp_configure 'show advanced options', 1
GO

reconfigure
GO



This will result in a confirmation of the configuration changes :
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.

Now that we have enabled the advanced options, we can focus on the ad hoc query option. To do so, execute the following T-SQL in a query window:
sp_configure 'Ad Hoc Distributed Queries', 1
GO

reconfigure
GO



This will result in a confirmation of the configuration changes :
Configuration option 'Ad Hoc Distributed Queries' changed from 0 to 1. Run the RECONFIGURE statement to install.
Ever wondered the date a "day" first occurred?

Realize I am not talking about the beginning of time?

Use SQL Server 2005?

Then this article is for you!

In this SQL Server 2005 tip you will see how to determine the date on which a requested day of the week occurred in any month of any year.  A typical scenario would be in a calendaring system that allows you to establish meetings on the first Monday of every month.  Consequently, as this function has evolved over time, its ability now goes beyond just the first occurrence and can return the second, third, ..., or even last occurrence of a given weekday.

Since a month can start on any day of the week, having an ordinal weekday calculation is not a trivial task.  For example: the first Monday in August 2009 fell on the 3rd of August; the first Tuesday for September 2009, September 1st; the last Tuesday of September 2008, September 30th.

With that in mind, let's begin going through the sections...

What you need; what you get.
Validate input and default values just in case.
It's all in the date.
The User Defined Function Code.

1. What you need; what you get.


Given a year and month, our function will get a specific weekday occurrence using its integer day of week value (each locale may use a different @@datefirst value, but, typically for the US, Sunday is 1; Monday, 2; …
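Under the US default (DATEFIRST 7, so Sunday = 1, Monday = 2, ..., Saturday = 7), the arithmetic for the Nth weekday of a month can be sketched like this (a sketch only; the variable names are illustrative, not the article's function):

```sql
DECLARE @Year int, @Month int, @DOW int, @Nth int
SET @Year  = 2009
SET @Month = 8
SET @DOW   = 2   -- Monday
SET @Nth   = 1   -- first occurrence

-- First day of the requested month
DECLARE @FirstOfMonth datetime
SET @FirstOfMonth = DATEADD(month, (@Year - 1900) * 12 + @Month - 1, 0)

-- Step forward to the first requested weekday, then add whole weeks
SELECT DATEADD(day,
               (@Nth - 1) * 7
             + (7 + @DOW - DATEPART(weekday, @FirstOfMonth)) % 7,
               @FirstOfMonth) AS NthWeekday  -- 2009-08-03, the first Monday
```

The full function in section 4 also validates its input and handles the "last occurrence" case.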
Expert Comment by Rajar Ahmed:
Thanks mwvisa1, great article.
Soon it may come in handy for me!
 
Author Comment by Kevin Cross:
Glad to hear it, meeran03!
Thank you for the vote also as well as others I had not yet acknowledged. I am just happy to have helped in some way.

Best regards,
Kevin
Moving Database Files by the Offline Method
-Prabhakaran
This document will show you how to move database files without detaching the databases, as we had to do in SQL 2000.
This method is applicable only to SQL 2005/SQL 2008 databases.

To move files, we have to specify the current logical name of the file and the new file path, which includes the new file name.
An important point about this method: we can move only one file at a time.
To move data or log files to a new location, follow these steps:

1.      Get the sp_helpfile output of the database, which will provide the logical file names of the data and log files.
2.      Take the database you want to work with offline:
 ALTER DATABASE dbadb SET OFFLINE 

3.      Move one file at a time to the new location. In Windows Explorer, physically move the file to the new location, and then tell SQL Server where that file now lives:
 
  ALTER DATABASE dbadb
 MODIFY FILE(NAME=dbadb,FILENAME='E:\Data\MSSQL\dbadb.mdf')
 GO 


You can move only one file at a time, so if you want to move more than one file (such as the data and log files), first move one file using step 3 above, then repeat the same step for the other files of the database.
 
4.      Set the database back online:
 
 ALTER DATABASE dbadb SET ONLINE   



Before taking the database offline, there should be no user connections to the database. You can achieve this either by killing all the connections or by issuing the following statement:
 
 ALTER DATABASE dbadb SET SINGLE_USER WITH ROLLBACK IMMEDIATE 



After bringing the database back online, return it to multi-user mode:
  ALTER DATABASE dbadb SET MULTI_USER WITH ROLLBACK IMMEDIATE 



   
5.      Verify the move by checking the sp_helpfile output of the database again; you will observe the new path.
 
By Mark Wills

Not so long ago, there was a fairly tricky question on Experts Exchange to do with unstringing a field, and some very good answers. Not wanting to be left out of the "fray", I submitted one of my old tricks. Despite the code being around for a while in various guises, my posting was met with intrigue and various superlatives. Naturally, I felt rather EElated at the response, though, cannot take all the credit.

So, I thought I would share some of those tricks that center around the use of a rather special table in MS SQL Server known as spt_values in the master database.

First, what is spt_values ?

It has been around since the beginning; it is not a system table, but it is very much a part of SQL Server. It is rather simple: it is Microsoft's very own "look-up" table for all those system-type things (determined by the TYPE column) where you need a name. More importantly, it is probably always resident in memory.

Let's have a quick peek, and please scroll through the list after you have run the first select command, and you will see quite a variety of "names" which are actually held as numbers within the MS systems...

select * from master..spt_values

-- Now, the ones we are interested in are :

select number from master..spt_values where type = 'p' 

-- which gives us all the numbers from 0 to 2047 inclusive. 



So, what can we do with a list of numbers ?

Well, in this article, we can unstring a delimited field, and we can …
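As a taste of the unstringing trick, the number list turns a delimiter scan into a set operation (a sketch; the sample string and delimiter are illustrative):

```sql
DECLARE @s varchar(200)
SET @s = 'alpha,bravo,charlie'

-- Each qualifying number marks the position just before a token
SELECT SUBSTRING(@s, number + 1,
                 CHARINDEX(',', @s + ',', number + 1) - number - 1) AS item
FROM   master..spt_values
WHERE  type = 'P'
AND    number < LEN(@s)
AND    SUBSTRING(',' + @s, number + 1, 1) = ','
ORDER  BY number
-- returns the rows: alpha, bravo, charlie
```

Note the 0 to 2047 range of spt_values caps the string length this handles at 2048 characters.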
Expert Comment by Sharath:
Mark, nice article.

Why don't you provide examples here on splitting the string, displaying it vertically, etc.?

As you have already answered such questions, you could add links to those questions for anyone who wants to check the magic of spt_values.

-Sharath
 
Expert Comment by Kevin Cross:
Here is a nice follow-up tip from chapmandew on how the spt_values trick from Mark can be used even when a number higher than 2048 is needed.

http://sqlservernation.com/blogs/tipweek/archive/2010/06/27/quickly-generate-a-number-sequence-in-sql-server.aspx

Again, this isn't always the optimal method compared to, say, a persisted numbers table, but boy have I had fun with this, so I just wanted to give kudos to Mark again, despite the fact I already relinquished my vote above! :)
By Mark Wills

We often hear about fragmentation, and we generally have an idea that it is about broken bits, or bad for performance, or at least is generally not a good thing. But what does it really mean?

By way of analogy, think of the humble telephone directory. It is nicely laid out, sequentially in alphabetical order by name. Now imagine receiving the next edition not as a replacement directory but as a set of added or altered pages which you append to the end. After a few updates, you will find that the end of the directory makes no sequential sense at all, and you spend an inordinate amount of time trying to find that elusive phone number. It has become fragmented, and your directory's performance is most likely measured in terms of how long it takes before it hits the bin.


Types of Fragmentation

With SQL Server systems there are two basic areas of fragmentation: Disk fragmentation, dealing with physical files, and SQL Database fragmentation, primarily dealing with how data is stored inside that physical file. There are a lot of parallels, and we will need to look at both.

Disk fragmentation is more of a problem with the classic IO sub-system architectures, and much less of a problem on the more sophisticated IO system solutions such as SAN. Disk systems are made up of building blocks known as sectors, and collections of sectors known as clusters. When physical files occupy non-contiguous space, or, the growth of a physical …
Expert Comment by David:
One major thing you never wrote about: in the real world, people are going to be using a RAID subsystem and/or virtualization.  The RAID configuration, stripe size, chunk size, flush settings, block size, filesystem parameters, caching methodology / buffering settings, and read load balancing are all going to have a much greater effect on overall performance than fragmentation.

But my intention is not to diminish this great article; it is to warn readers that you need to factor in the hardware config.  The SQL tools are blissfully ignorant, and actually wrong, when it pertains to fragmentation based on where the data is on the disk drives.  If you are running SQL inside a VM on a RAID subsystem, then you are getting even less useful information, borderline useless if multiple VMs share the same disk drives.

In any event, a great read and reference guide. Thank you for writing it.
 
Author Comment by Mark Wills:
Hi David,

Can only agree... But the RAID configuration, stripe size, chunk size, flush settings, block size, filesystem parameters, caching methodology / buffering settings, and read load balancing are all vitally important, and writing about them would make this more like a mini-book than intended in this Article.

It is a great topic though and would encourage you to write that Article :)

I seem to recall an Article by dlethe, and was going to reach out quite a few years back to suggest writing such an Article....

Cheers,
Mark Wills
by Mark Wills

PIVOT is a great facility and solves many an EAV (Entity - Attribute - Value) type transformation where we need the information held as data within a column to become columns in their own right. Now, in some cases that is relatively easy to do, other times it can be a challenge (and that is why we are here).

Let's have a quick look at the PIVOT function...

SELECT
          <display_column_list>

FROM

          (SELECT <source_columns> as Column_Source
                         ,<column_to_be_aggregated> as Column_Value
                         ,<column_with_new_column_names> as Column_List
           FROM <datasource> ) as DataSource

PIVOT
          (<aggregate_function>(Column_Value)  FOR  Column_List  IN
          ([<new_column_1_heading>],[<new_column_2_heading>],...,[<new_column_N_heading>]) ) PivotTable

ORDER BY <column_number_or_name>;
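As a concrete sketch of the template above (assuming a hypothetical dbo.Sales table with Region, MonthName, and Amount columns):

```sql
-- Pivot monthly amounts into one column per month, one row per region
SELECT Column_Source AS Region, [Jan], [Feb], [Mar]
FROM  (SELECT Region    AS Column_Source
             ,Amount    AS Column_Value
             ,MonthName AS Column_List
       FROM dbo.Sales) AS DataSource
PIVOT (SUM(Column_Value) FOR Column_List IN ([Jan], [Feb], [Mar])) PivotTable
ORDER BY 1;
```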

That looks pretty straightforward, except for one or two small details:

1) First up, we need to know the <display_column_list>
    easy enough, just do a Select * instead and problem solved (except that it does control display sequence)

2) Secondly, we need to know and hard code the new column headings as in :
     ([<new_column_1_heading>],[<new_column_2_heading>],...,[<new_column_N_heading>])

And that last point is often the big challenge. Not so bad if we are doing something static like "months in a year", just list out those months in sequence, and make sure you can cast the…
Author Comment by Mark Wills:
Yeah, should be...

Would add in another definition for the totals (similar to @columns).

But then we can no longer simply cheat with "select * from"; we would have to add in that new computed column, and probably a parameter to decide when / if to use it.

If you are feeling adventurous, there is a slightly more involved Article that should be able to do that for you : http://www.experts-exchange.com/Programming/Languages/SQL_Syntax/A_4256-Dynamic-Pivot-Procedure-without-the-Pivot-function.html

Cheers,
Mark
 

Expert Comment by Wil999:
Thanks for the quick reply Mark, I will look into the article!
