I’m new to T-SQL; all my experience is in a completely different database environment (Openedge). I’ve learned enough to write the procedure below — but also enough to know that I don’t know enough!
This routine will have to go into a live environment soon, and it works, but I’m quite certain there are a number of c**k-ups and gotchas in it that I know nothing about.
The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren’t a problem: the routine will be run by the dba as a timed job.
Could I have your suggestions on how to bring it in line with best practice, and how to bullet-proof it?
ALTER PROCEDURE [dbo].[copyTable2Table]
    @sdb        varchar(30),
    @stable     varchar(30),
    @tdb        varchar(30),
    @ttable     varchar(30),
    @raiseError bit = 1,
    @debug      bit = 0
as
begin
    set nocount on

    declare @source      varchar(100)
    declare @target      varchar(100)
    declare @dropStmt    varchar(200)
    declare @insStmt     varchar(300)
    declare @errMsg      nvarchar(4000)
    declare @errSeverity int

    -- note: a fully bracketed three-part name can reach 71 characters
    -- and the insert statement about 160, so the variables above need
    -- to be sized generously or the dynamic SQL is silently truncated
    set @source      = '[' + @sdb + '].[dbo].[' + @stable + ']'
    set @target      = '[' + @tdb + '].[dbo].[' + @ttable + ']'
    set @dropStmt    = 'drop table ' + @target
    set @insStmt     = 'select * into ' + @target + ' from ' + @source
    set @errMsg      = ''
    set @errSeverity = 0

    if @debug = 1
        print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

    -- drop the target table, copy the source table to the target
    begin try
        begin transaction
        exec(@dropStmt)
        exec(@insStmt)
        commit
    end try
    begin catch
        if @@trancount > 0
            rollback
        select @errMsg      = error_message(),
               @errSeverity = error_severity()
    end catch

    -- update the log table
    insert into HHG_system.dbo.copyaudit
        (copytime, copyuser, source, target, errmsg, errseverity)
    values (getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

    if @debug = 1
        print('Message:' + @errMsg + ' Severity:' + convert(varchar(10), @errSeverity))

    -- handle errors, return value
    if @errMsg <> ''
    begin
        if @raiseError = 1
            raiserror(@errMsg, @errSeverity, 1)
        return 1
    end

    return 0
end
Thanks!
I’m speaking from a Sybase perspective here (I’m not sure if you’re using SQLServer or Sybase) but I suspect you’ll find the same issues in either environment, so here goes…
Firstly, I’d echo the comments made in earlier answers about the assumed dbo ownership of the tables.
Then I’d check with your DBAs that this stored proc will be granted permissions to drop tables in any database other than tempdb. In my experience, DBAs hate this and rarely provide it as an option due to the potential for disaster.
DDL operations like drop table are only allowed in a transaction if the database has been configured with the option:

sp_dboption my_database, "ddl in tran", true

Generally speaking, transactions involving DDL should be very short, since they lock up frequently referenced system tables like sysobjects and in doing so block the progress of other dataserver processes. Given that we've no way of knowing how much data needs to be copied, this could end up being a very long transaction which locks things up for everyone for a while. What's more, the DBAs would need to run that command on every database which might contain a '@target' table of this stored proc. If you do use a transaction for the drop table, it would be a good idea to make it separate from any transaction handling the data insertion.

While you can run drop table commands in a transaction if the 'ddl in tran' option is set, it's not possible to run select * into inside a transaction. Since select * into is a combination of table creation and insert, it would implicitly lock up the database (possibly for a while if there's a lot of data) if it were executed in a transaction.

If there are foreign key constraints on your @target table, you won't be able to drop it without getting rid of those foreign key constraints first.
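If you're on SQL Server rather than Sybase, one way to sidestep both problems is to keep the drop out of any transaction entirely, guard it with an existence check, and let quotename() protect the dynamic SQL. A minimal sketch; object_id and quotename are SQL Server built-ins, and the database/table values here are purely illustrative:

```sql
-- Sketch (SQL Server syntax): run the drop outside any transaction,
-- with an existence check so a missing target isn't an error, and
-- quotename() so odd characters in names can't break the dynamic SQL.
declare @tdb      sysname
declare @ttable   sysname
declare @dropStmt nvarchar(500)

set @tdb    = 'HHG_system'   -- illustrative values
set @ttable = 'copytarget'

set @dropStmt = 'if object_id(''' + quotename(@tdb) + '.dbo.' + quotename(@ttable) + ''') is not null '
              + 'drop table ' + quotename(@tdb) + '.dbo.' + quotename(@ttable)

exec (@dropStmt)
```

The select * into then runs on its own as an implicit transaction, so a failure there still leaves nothing half-committed.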
If you've got an 'id' column which relies upon a numeric identity type (often used as an autonumber feature to generate values for surrogate primary keys), be aware that you won't be able to copy the values from the @source table's id column straight across to the @target table's id column.
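On SQL Server there is a workaround for this: SET IDENTITY_INSERT lets an explicit insert supply the identity values, at the cost of listing the columns rather than using select * into. A sketch, assuming the target table already exists with a matching schema; the table and column names are illustrative:

```sql
-- Sketch (SQL Server): preserve identity values when copying.
-- Assumes TargetDb.dbo.B already exists with the same columns as
-- SourceDb.dbo.A; id is the identity column.
set identity_insert TargetDb.dbo.B on

insert into TargetDb.dbo.B (id, col1, col2)   -- columns must be listed explicitly
select id, col1, col2
from SourceDb.dbo.A

set identity_insert TargetDb.dbo.B off
```

Only one table per session can have identity_insert on at a time, so switch it off as soon as the copy finishes.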
I'd also check the size of your transaction log in any database which might hold a '@target' table, relative to the size of any possible '@source' table. Given that all the copying is being done in a single transaction, you may well find yourself copying a table so large that it blows out the transaction log in your prod dataserver, bringing all processes to a crashing halt. I've seen people use chunking to copy particularly large tables, but then you need to add your own checks to the code to make sure that you've actually captured a consistent snapshot of the table.
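A chunked copy might look something like the following (SQL Server syntax; on Sybase you'd use set rowcount instead of top). Table and column names are illustrative, and this assumes the target table already exists and that id is a unique key; note it does nothing by itself to guarantee a consistent snapshot if the source is being modified concurrently:

```sql
-- Sketch: copy in batches of 10,000 rows so each statement is its own
-- short transaction and the transaction log never has to hold the
-- whole table at once.
declare @rows int
set @rows = 1

while @rows > 0
begin
    insert into TargetDb.dbo.B (id, col1)
    select top 10000 s.id, s.col1
    from SourceDb.dbo.A s
    where not exists (select 1 from TargetDb.dbo.B t where t.id = s.id)

    set @rows = @@rowcount   -- zero once every row has been copied
end
```

Each loop iteration commits independently, which is exactly why you then need the extra consistency checks mentioned above.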
Just a thought – if this is being used to take snapshots, how about BCP? It can dump out the contents of a table, giving you the snapshot you're looking for, and with the -c option you'd even get it in human-readable form.
All the best,
Stuart