I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is:
We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = supported - i.e. it will 'inherit' transactions from parent containers, but none are set up here.
Inside the data flow there is a call to a stored proc that writes to a table, with pseudo code something like:
"If a record doesn't exist that matches these parameters then write it"
Now, the issue is that three records are passed into this proc, all with the same parameters. The first record finds no match, so a row is created. The second record (with the same parameters) also finds no match, and another row is created, and likewise the third.
My understanding is that the row inserted by the first call in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should.
In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway.
Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.
Is the flow too large to stream all these rows through an aggregate first, to eliminate the duplicates?
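If an Aggregate transform is too heavy for the row volume, you could also de-duplicate in the source query itself before the rows ever reach the proc. A sketch, assuming the duplicate key is the same pair of columns the proc checks (column and table names are placeholders):

    -- Collapse duplicate parameter combinations at the source.
    SELECT KeyA, KeyB
    FROM dbo.SourceTable
    GROUP BY KeyA, KeyB;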
If the changes are inside the same transaction they should be visible to each other. And I don't think that SSIS would create a transaction per statement / SP call, so my opinion is that the problem is elsewhere.
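You can verify that claim in isolation: on a single connection, an uncommitted insert is visible to a later NOT EXISTS check inside the same transaction. A throwaway demo (temp table, runnable in SSMS):

    -- Demo: within one transaction/session, the earlier insert IS visible.
    CREATE TABLE #Target (KeyA INT, KeyB INT);

    BEGIN TRANSACTION;

    INSERT INTO #Target (KeyA, KeyB) VALUES (1, 1);            -- first call

    IF NOT EXISTS (SELECT 1 FROM #Target WHERE KeyA = 1 AND KeyB = 1)
        INSERT INTO #Target (KeyA, KeyB) VALUES (1, 1);        -- second call: skipped

    SELECT COUNT(*) AS RowsInserted FROM #Target;              -- returns 1, not 2

    ROLLBACK TRANSACTION;
    DROP TABLE #Target;

If you see duplicates anyway, it suggests the calls are not sharing a transaction or connection in the way you expect, rather than an isolation-level problem.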