I am going to dump some data from one db to another one. I am using
set identity_insert MyTable on
GO
INSERT INTO MyTable SELECT * FROM sourceDB.dbo.MyTable
GO
set identity_insert MyTable off
Is there any way to get this to work? There are 30 tables, and it would be time consuming to add the list of column names to each insert statement. I am using SQL Server 2000; we will be upgrading to SQL Server 2008 in the near future.
EDIT: MyTable has an identity column.
Just drag and drop the column names from the Object Browser. You can do it in one step, and it takes about one second longer than writing SELECT *; you should never use SELECT * in production code anyway. It is a poor practice.
I am concerned about you inserting identity values, though; this is something that should almost never be done. What if the original table has some identity values that are the same as existing ids in the new table? Make sure to check for this before deciding to insert id values from another table. I prefer to do the insert to the parent table, let it generate a new id, and match it to the old id (OUTPUT is good for this in 2008), then use the new id for any child tables but join on the old id.
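A minimal sketch of that parent/child pattern on 2008, assuming made-up Parent and Child tables (ParentID is the identity); MERGE with OUTPUT is used because a plain INSERT ... SELECT can only OUTPUT the inserted columns, not the old source id:

-- Sketch only: Parent(ParentID identity, Name) and Child(ParentID, ChildData) are example tables
DECLARE @IdMap TABLE (OldParentID int, NewParentID int);

MERGE INTO dbo.Parent AS tgt
USING sourceDB.dbo.Parent AS src
    ON 1 = 0                               -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (Name) VALUES (src.Name)
OUTPUT src.ParentID, inserted.ParentID     -- old id paired with the newly generated id
    INTO @IdMap (OldParentID, NewParentID);

-- Child rows join on the old id but store the new one
INSERT INTO dbo.Child (ParentID, ChildData)
SELECT m.NewParentID, c.ChildData
FROM sourceDB.dbo.Child AS c
JOIN @IdMap AS m ON m.OldParentID = c.ParentID;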
Having just tried this scenario on a SQL Server 2000 SP2 machine, I get this error, which seems to confirm your observations:
An explicit value for the identity column in table 'Foo2' can only be specified when a column list is used and IDENTITY_INSERT is ON.
set identity_insert Foo2 on
GO
INSERT INTO Foo2 select top 100 * from Foo where id > 110000
GO
set identity_insert Foo2 off
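In other words, it works once the columns are spelled out. A sketch of the same insert with an explicit column list (id, col1, col2 are placeholders for the real column names):

set identity_insert Foo2 on
GO
INSERT INTO Foo2 (id, col1, col2)
select top 100 id, col1, col2 from Foo where id > 110000
GO
set identity_insert Foo2 off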
The SELECT INTO suggestion would only help to get the data into a staging table. At some point, you'd have to explicitly state the column names.
Tip on column names: highlight the table name in an SSMS query window and hit Alt-F1 (the shortcut for sp_help). You can then copy/paste the resulting column_name values into your query and add the commas by hand. To take the shortcut a step further, paste them into Excel, type one comma, and copy it down the column.
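The same lookup can also be run directly; a one-line example (MyTable stands in for your table):

-- Equivalent of the Alt-F1 shortcut; the second result set lists the Column_name values
EXEC sp_help 'dbo.MyTable'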
HLGEM's answer is impractical for many tables. In my case, we ran a script we use from time to time to clear out dev and test databases, and one of the developers yelled out: apparently there are now 22 tables that should not be included in that clear-out process. Good thing I had a backup. I put the table names in a scratch table called "undo" and used the following to generate column lists:
select outside.TABLE_NAME,
       stuff((select ',' + COLUMN_NAME
              from INFORMATION_SCHEMA.COLUMNS as inside
              where inside.TABLE_NAME = outside.TABLE_NAME
              order by ORDINAL_POSITION
              for xml path('')), 1, 1, '') as column_list
from INFORMATION_SCHEMA.COLUMNS as outside
where TABLE_NAME in (select table_name from dbo.undo)
group by TABLE_NAME
I then copied the output to Excel and used ye olde =CONCATENATE function to generate all the IDENTITY_INSERT and INSERT INTO statements, copy/pasted back to SSMS, and voila! Yes, Excel is your friend.
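For reference, each generated block ends up shaped roughly like this (table and column names here are only examples):

SET IDENTITY_INSERT dbo.MyTable ON
INSERT INTO dbo.MyTable (id, col1, col2)
SELECT id, col1, col2 FROM sourceDB.dbo.MyTable
SET IDENTITY_INSERT dbo.MyTable OFF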
If you don't need to preserve the values of the ID column, you can:
Drop the ID column
Create an ID column in the new schema in the target DB (make it the last column in the column list)
INSERT INTO newdb.dbo.newtable
SELECT * FROM olddb.dbo.oldtable
And it will preserve the rest of your columns while generating new ID info.
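A hedged variant of that idea, shown only as a sketch (col1/col2 are placeholder column names): copy the non-identity columns with SELECT INTO, then add the identity column afterwards, which places it last and fills in new values for the existing rows.

-- Sketch only: col1/col2 stand in for the non-identity columns
SELECT col1, col2
INTO newdb.dbo.newtable
FROM olddb.dbo.oldtable

-- Adding the identity column afterwards puts it last and generates new ids
ALTER TABLE newdb.dbo.newtable ADD id int IDENTITY(1,1)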