
Debatch the incoming message and create fixed-size batches of input messages in the orchestration?

https://www.devze.com 2023-02-01 01:34 Source: web
http://blogs.msdn.com/b/brajens/archive/2006/07/20/biztalk-custom-receive-pipeline-component-to-batch-messages.aspx


I am following the above article, but I want to implement this in an orchestration.

How can I implement this? Please advise. Here are some of the approaches I am aware of:

  1. Using a foreach loop:

    a) read each single message and append it to a new message until the fixed batch size is reached

    b) using an XPath position predicate such as /*[local-name()='Root' and namespace-uri()='http://mycompany.com']/*[local-name()='content' and namespace-uri()='' and position() > 0 and position() <= fixedSize] (my original expression did not work: it was missing the * node tests and the parentheses on the second position() call, and since XPath positions are 1-based, position() >= 0 is always true)

  2. Calling a custom pipeline component (like the one in the article above).
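Outside BizTalk, the foreach-style debatching of option 1 can be sketched in plain Python. This is only an illustration of the slicing logic, using the stdlib `xml.etree` instead of the orchestration's XPath engine; the `Root`/`content` element names and the batch size of 4 are assumptions matching the XPath in the question:

```python
import xml.etree.ElementTree as ET

NS = "http://mycompany.com"

def debatch(envelope_xml: str, batch_size: int = 4):
    """Split a <Root> envelope into messages of at most batch_size <content> records."""
    root = ET.fromstring(envelope_xml)
    children = list(root)  # each child is one <content> record
    batches = []
    for i in range(0, len(children), batch_size):
        # build a fresh envelope holding one fixed-size slice of records
        batch_root = ET.Element(f"{{{NS}}}Root")
        batch_root.extend(children[i:i + batch_size])
        batches.append(ET.tostring(batch_root, encoding="unicode"))
    return batches

# 10 records with a batch size of 4 yield batches of 4, 4 and 2 records
src = ("<ns0:Root xmlns:ns0='http://mycompany.com'>"
       + "".join(f"<content>{i}</content>" for i in range(10))
       + "</ns0:Root>")
msgs = debatch(src)
```

In an orchestration the equivalent slicing is done with the position predicate from 1b inside the loop, assigning each slice to a new message variable.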


First of all, thanks guys for all your valuable replies.

Right now I have implemented this in the orchestration using a foreach loop, XPath (to get the count and each single input node), and a message variable (a new unbounded message to which I assign the concatenated fixed number of input messages). It is working fine now.

Do you agree with this approach, or do you have any concerns?


The best solution is indeed to use a custom pipeline component such as the one linked in your question. The beauty of this approach is that you can call the pipeline from within the orchestration, so it need not be tied to a specific receive location: http://geekswithblogs.net/sthomas/archive/2005/06/16/44023.aspx.

Please note that the sample component you mention does not use a streaming technique, which means it loads the entire contents of the original message into memory. This can be a problem if you intend to process potentially large messages (which is often the very reason debatching is used).
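For contrast, a streaming parser reads the envelope incrementally and never holds the whole message in memory at once. A minimal sketch using Python's stdlib `iterparse` (the element names are illustrative, not from the sample component):

```python
import xml.etree.ElementTree as ET
from io import StringIO

def count_records_streaming(source) -> int:
    """Visit <content> records one at a time, freeing each after use."""
    count = 0
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "content":
            count += 1
            elem.clear()  # drop the element's content to keep memory usage flat
    return count

big = StringIO("<Root>" + "<content>x</content>" * 1000 + "</Root>")
print(count_records_streaming(big))  # prints 1000
```

A streaming pipeline component would follow the same pattern: emit each record (or each fixed-size slice) downstream as it is parsed, instead of building an `XmlDocument` of the full input first.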


A simple way of achieving high throughput (large messages and/or a large number of messages) is to run the incoming message through a map on the receive port that uses custom XSLT. The map then groups the XML into right-sized groups; after the mapping the XML could look something like this:

<ns0:sample xmlns:ns0='http://MGSIBREDemo.LoanRequest' xmlns:ns1='http://MGSIBREDemo.LoanRequestGroup'>
 <ns1:group>
  <data1><name>name1</name></data1>
  <data1><name>name2</name></data1>
  <data1><name>name3</name></data1>
  <data1><name>name4</name></data1>
 </ns1:group>
 <ns1:group>
  <data1><name>name5</name></data1>
  <data1><name>name6</name></data1>
  <data1><name>name7</name></data1>
  <data1><name>name8</name></data1>
 </ns1:group>
 <ns1:group>
  <data1><name>name9</name></data1>
  <data1><name>name10</name></data1>
 </ns1:group>
</ns0:sample>

After this first step you'll then have to send the message out of BizTalk and receive it back in again. You can then use the ordinary XML disassembler pipeline component to debatch the message (as described, for example, here).

The big advantage of this technique is that you use a BizTalk map in a port to transform the message, plus an out-of-the-box pipeline component. Both of these handle the message in a streaming fashion, so you'll be able to handle both big messages and a big number of messages.

The disadvantage, however, is the performance cost: you write the messages to disk or to a queue, only to read them back in again.

It would be great if it were possible to debatch on the send side using the XML assembler, but this isn't possible today.

So unless you're OK with reading all messages into memory (using XmlDocument), or with spending the time to write your own streaming XML disassembler, this is the "hack" we're stuck with.

