Are there any benefits to calling a SQLCLR stored procedure from a regular T-SQL trigger rather than deploying a SQLCLR trigger straight away?
I need to be notified of a specific column change (StatusID) in a very large table.
Currently a number of Windows services are used. Each monitors its own StatusID, i.e. queries the db for a specific StatusID: SELECT a,b,c FROM t WHERE StatusID = @status.
I want to try moving the logic from the services into a SQLCLR assembly and invoking it via a SQLCLR trigger. Is that a good idea? Are there better ideas?
In my opinion, this does not require SQLCLR. However, it depends on what you mean by "being notified". If possible, I'd use a regular trigger and a SQL Agent job to do the notification.
Are there any benefits to calling a SQLCLR stored procedure from a regular trigger rather than deploying a SQLCLR trigger straight away?
Well, there are some differences. How those differences relate to benefits or drawbacks depends on the specifics of each situation.
T-SQL trigger calling a SQLCLR object (stored procedure or function)
T-SQL triggers can determine what table they are attached to via a query similar to:
SELECT ss.[name] AS [SchemaName], so.[name] AS [TableName]
FROM [sys].[objects] so
INNER JOIN [sys].[schemas] ss
        ON ss.[schema_id] = so.[schema_id]
WHERE so.[object_id] = (SELECT so2.[parent_object_id]
                        FROM [sys].[objects] so2
                        WHERE so2.[object_id] = @@PROCID);
A SQLCLR object called from a T-SQL trigger would not have direct access to the INSERTED and DELETED pseudo-tables (this is also true for T-SQL stored procedures and even EXEC() calls). Those tables could be copied into local temporary tables that could then be read by the SQLCLR object (or T-SQL stored procedure or EXEC() call), but that is quite a bit more time and I/O: creating the temp table, reading from the pseudo-table(s), and writing it all again to tempdb.
SQLCLR trigger
A SQLCLR trigger can interact with the INSERTED and DELETED pseudo-tables via Dynamic SQL (something T-SQL triggers cannot do). Of course, if the SQLCLR code needs to SELECT from the pseudo-tables in the first place, then you pretty much need to use a SQLCLR trigger (else do the silly copy-pseudo-tables-to-local-temp-tables trick).

SQLCLR triggers have no access to @@PROCID (even when creating a SqlConnection using Context Connection = true;), so there is really no easy way to determine which table was modified to fire the trigger. I emphasized "easy" because it is possible (though I have not tested it) to use a T-SQL trigger, set as the "first" trigger on the table, to set CONTEXT_INFO with the table name, but then you can't use CONTEXT_INFO for anything else. There is probably another way that I have almost worked out but not yet tested (when I get a chance to test it, if it works, I will try to remember to update here with a link to the instructions).
I need to be notified of a specific column change (StatusID) in a very large table. Currently a number of Windows services are used. Each monitors its own StatusID, i.e. queries the db for a specific StatusID: SELECT a,b,c FROM t WHERE StatusID = @status.
This can definitely be handled better, and without using SQLCLR. SQLCLR can be great, but it should only be used when necessary. In this case, you just need a queue table that records the PK column(s) of the table for rows that change.
CREATE TABLE dbo.TableChangeQueue
(
    TableChangeQueueID INT IDENTITY(-2140000000, 1)
                       NOT NULL
                       CONSTRAINT [PK_TableChangeQueue] PRIMARY KEY,
    [a] datatype_a,
    [b] datatype_b,
    [c] datatype_c
);
Then use a regular T-SQL trigger to INSERT into that queue. Something along the lines of:
INSERT INTO dbo.TableChangeQueue ([a], [b], [c])
SELECT [a], [b], [c]
FROM INSERTED;
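Fleshed out into a full trigger that only queues rows whose StatusID actually changed, it might look like the sketch below. The table dbo.t and columns come from the question; treating [a] as the PK is an assumption for illustration:

CREATE TRIGGER dbo.trg_t_QueueStatusChange
ON dbo.t
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDATE() only says the column appeared in the SET list, so also
    -- compare old vs. new values row by row.
    IF UPDATE(StatusID)
    BEGIN
        INSERT INTO dbo.TableChangeQueue ([a], [b], [c])
        SELECT i.[a], i.[b], i.[c]
        FROM   INSERTED i
        INNER JOIN DELETED d
                ON d.[a] = i.[a]  -- assumes [a] is the PK
        WHERE  i.[StatusID] <> d.[StatusID];
    END;
END;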
Then each Windows service can read from the queue table (not the large table). When the records have been processed, delete them from the queue table based on their TableChangeQueueID values. You could read and delete them at the same time using the OUTPUT clause of the DELETE statement, but if the process fails, you wouldn't want to lose the ability to retry processing them.
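For reference, the read-and-delete-in-one-statement variant would look something like this (the batch size of 100 is an arbitrary choice):

-- Consumes a batch and returns the consumed rows in one statement.
-- If the service dies after this runs, those rows are gone, so only
-- use it when losing an unprocessed batch is acceptable.
DELETE TOP (100)
FROM   dbo.TableChangeQueue
OUTPUT DELETED.[a], DELETED.[b], DELETED.[c];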
This is, of course, a simple implementation and allows for duplicate key entries if those rows get updated more than once prior to being processed. If that is a problem (i.e. you need the key values to be unique), then there are ways of handling that in the Trigger. Just be careful that the Trigger doesn't change data being processed by the Windows Service, unless that wouldn't present a problem.
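One possible approach (a sketch, not the only way, and again assuming [a] is the key) is to have the trigger skip keys that already have an unprocessed entry:

-- Trigger INSERT variant that avoids queueing a key twice while an
-- earlier entry for it is still sitting in the queue.
INSERT INTO dbo.TableChangeQueue ([a], [b], [c])
SELECT i.[a], i.[b], i.[c]
FROM   INSERTED i
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.TableChangeQueue q
                   WHERE  q.[a] = i.[a]);

Note that this keeps the first queued [b] and [c] values; if the service needs the latest values, the trigger would have to UPDATE the existing queue row instead.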
There are many options here, but I can't be specific without knowing how the processes work. Regardless, the main point is to separate the handling of the changed StatusIDs from the DML statements, since triggers run within an internal, system-generated transaction. This is why you don't want to process the changed rows immediately in the trigger: it could easily increase blocking on the table, since the transaction won't complete until the trigger completes; and if the trigger errors, the DML statement will get rolled back.
If you call the services directly from the CLR, you'll have to use a standard SOAP binding, because the CLR only supports .NET 2.0-style Web references.
Using SQL Server Service Broker would be a more robust solution, but as you noted, it's complicated. The price you will pay for using SQLCLR is that your update transactions will be rolled back if an unhandled error is thrown out of your trigger. Also, transaction throughput will certainly suffer if you're blocking on a service call with every update.
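If you do go the Service Broker route, the moving parts look roughly like this sketch (all names are hypothetical, and activation, error handling, and conversation reuse are omitted):

-- One-time setup: message type, contract, queue, and service
CREATE MESSAGE TYPE StatusChangeMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT StatusChangeContract (StatusChangeMsg SENT BY INITIATOR);
CREATE QUEUE dbo.StatusChangeQueue;
CREATE SERVICE StatusChangeService
    ON QUEUE dbo.StatusChangeQueue (StatusChangeContract);

-- Inside the trigger: enqueue asynchronously instead of calling out
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE StatusChangeService
    TO SERVICE 'StatusChangeService'
    ON CONTRACT StatusChangeContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE StatusChangeMsg (N'<change statusId="42" />');

A reader would then RECEIVE from dbo.StatusChangeQueue, either from the Windows services or via internal activation, so the update transaction never blocks on a service call.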