I'm upgrading an enterprise app database with 130 tables to support Unicode. As far as I know this means turning all varchar columns to nvarchar, text to ntext, and char to nchar.
I wrote the following script...
select 'alter table ' + b.name + ' alter column ' + a.name + ' nvarchar(' + convert(varchar(40), a.length) + ')' +
    case a.isnullable when 1 then ' NULL' else ' NOT NULL' end as AlterStmt,
    b.name, a.name, a.xtype, a.length
from syscolumns as a
inner join sysobjects as b on a.id = b.id
where a.xtype = 167 -- 167 = varchar in systypes
and b.type = 'U'    -- user tables only
order by b.name, a.name
That's giving me alter statements for every varchar field, but when I try to run them, they fail because indexes exist on some of those columns. There are probably also one or two char fields that form keys, although 99% of the tables have their primary key on an incremental int field.
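To see how big the problem is, I can list which indexes touch varchar columns with something like this (an untested sketch against the SQL 2000-era system tables my script already uses; the INDEXPROPERTY filters are there to skip auto-created statistics):

select o.name as table_name, i.name as index_name, c.name as column_name
from sysindexes as i
inner join sysobjects as o on i.id = o.id
inner join sysindexkeys as k on k.id = i.id and k.indid = i.indid
inner join syscolumns as c on c.id = k.id and c.colid = k.colid
where o.type = 'U'
and c.xtype = 167 -- varchar
and i.indid between 1 and 254
and indexproperty(i.id, i.name, 'IsStatistics') = 0
and indexproperty(i.id, i.name, 'IsHypothetical') = 0
order by o.name, i.name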
The question is: is there an easy way of getting through this without dropping all the indexes, etc.? Otherwise I suppose I have to go the generated-script route: drop the indexes, run the alter statements, then recreate the indexes.
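If it comes to that, I assume the per-table pattern would look roughly like this (table, index, and column names are made up for illustration; each batch wrapped in a transaction so a failure leaves the table unchanged):

begin transaction
    drop index Customers.IX_Customers_Name
    alter table Customers alter column Name nvarchar(50) not null
    create index IX_Customers_Name on Customers (Name)
commit transaction

Multiplied across 130 tables that is a lot of generated script to get right, which is why I'm hoping there's a shortcut.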
I am looking for the solution with the least possible impact, since the current database is live at over 100 customer sites and I must minimise risk during upgrades.