Hi there,
We have recently introduced SAP Advanced Event Mesh (AEM), which under the hood is ‘Solace PubSub+ Cloud’, into our Integration Landscape, and I am now seeking the best way to implement ‘Exactly Once In Order’ (EOIO) sequencing in our Integration.
We are on Version 10.9.1.114-0.
The Integration is going to do the following:
- The Source System will publish the Switching Job details (XML) to SAP AEM from a WebLogic JMS server.
- Another SAP Integration layer then subscribes to the Solace Queue (or Topic), performs Message Transformation, and forwards the result to the External System.
The Challenge:
We want messages belonging to one Job (let’s say Job A) to be delivered to the External System in sequence. Say Job A has Operations 1, 2 and 3. If for some reason Operation 2 fails to get delivered to the External System, Operation 3 should not go. However, Job B and its Operations can continue as usual.
So we need FIFO within the Operations of a single Job, not across all Jobs.
Analysis done so far:
- We can create one Exclusive Queue and one DMQ for the process, and publish everything there. With that, all Jobs and their Operations follow sequence by default. In all success scenarios this should be okay (I guess). But on failure, the message moves to the DMQ, and the subsequent message for that Job will not know about it and might still get processed.
- I went through Partitioned Queues, but they don’t seem to fit the purpose: unless every Operation of a Job carries the same partition key, the messages (Job Operations in our case) of one Job can land on any partition.
- Non-Durable (Temporary) Queues: As I understand it, we can have the Publisher (client) create queues dynamically, so that all the messages (Operations) of one Job go to ONE queue. As soon as those messages are consumed by our SAP Integration and forwarded to the External System, these queues get deleted - I am not 100% sure that this is how they will behave. In case of failure, I would expect the queue to remain intact and the messages belonging to it to pile up, without impacting the other Jobs.
Following things are Mandatory:
- Guaranteed Delivery of the messages to the External System
- The order of Operations belonging to the same Job must be maintained
- Order across different Jobs is not mandatory; those can be processed in parallel.
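To make the requirement concrete, here is a rough, broker-free Python sketch of the behaviour described above: FIFO per Job, with a failure blocking only that Job while other Jobs continue. `PerJobSequencer` and `deliver` are made-up names for illustration only, not Solace or SAP APIs.

```python
from collections import defaultdict, deque

class PerJobSequencer:
    """Toy model of the requirement: Operations of one Job are delivered
    FIFO, and a failed delivery blocks only that Job's queue."""

    def __init__(self, deliver):
        self.deliver = deliver              # callable(job_id, op) -> bool
        self.pending = defaultdict(deque)   # per-Job FIFO buffers
        self.blocked = set()                # Jobs halted by a failure

    def publish(self, job_id, operation):
        self.pending[job_id].append(operation)
        self._drain(job_id)

    def _drain(self, job_id):
        # Stop draining this Job's buffer at the first failed delivery;
        # other Jobs are unaffected.
        while job_id not in self.blocked and self.pending[job_id]:
            op = self.pending[job_id][0]
            if self.deliver(job_id, op):
                self.pending[job_id].popleft()
            else:
                self.blocked.add(job_id)

delivered = []
def deliver(job, op):
    if (job, op) == ("A", 2):   # simulate Operation 2 of Job A failing
        return False
    delivered.append((job, op))
    return True

seq = PerJobSequencer(deliver)
for job, op in [("A", 1), ("B", 1), ("A", 2), ("A", 3), ("B", 2)]:
    seq.publish(job, op)

print(delivered)   # [('A', 1), ('B', 1), ('B', 2)] - Job A halts after Op 1
```

Job A's Operations 2 and 3 stay parked, while Job B flows through untouched - exactly the per-Job FIFO asked for.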
Any help on this would be really appreciated.
-Anuj
Not sure if anyone has an idea on how to address this.
Can this be done without adding an external persistence layer for storing the failed messages?
I am thinking of the following. Any help would be really appreciated:
- Query the DMQ first: Before forwarding a message, I query the DMQ. If I find a failed message for the same Job there, I hold the next message behind it. I can probably use: Browsing Messages
- Use Persistence layer to store failures:
All failures will be recorded in a database, with message attributes used as a composite key to keep every message distinct.
This DB table will be queried before continuing the process of sending a message to the Receiver.
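For what it’s worth, that failure-table idea could be sketched like this in Python, with an in-memory set standing in for the DB table. The composite key `(job_id, operation_no)` and the helper `try_send` are assumptions for illustration only; in practice the key would come from your message attributes.

```python
# In-memory stand-in for the DB failure table described above.
failed_keys = set()

def try_send(job_id, operation_no, send):
    """Check the failure table before sending, so an Operation is held
    back if any earlier Operation of the same Job has failed."""
    key = (job_id, operation_no)
    if any(j == job_id and op < operation_no for (j, op) in failed_keys):
        failed_keys.add(key)    # park it behind the failed Operation
        return "held"
    if send(job_id, operation_no):
        return "sent"
    failed_keys.add(key)        # record the failure before moving on
    return "failed"

def send(job_id, operation_no):
    # Simulate Job A, Operation 2 failing at the Receiver.
    return not (job_id == "A" and operation_no == 2)

results = [try_send(j, op, send)
           for (j, op) in [("A", 1), ("A", 2), ("A", 3), ("B", 1)]]
print(results)   # ['sent', 'failed', 'held', 'sent']
```

Note that held Operations are also written to the table, so they can be replayed in order once the blocking failure is resolved.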
Hi Anuj - good to see this question on the forum. The best you can leverage with Solace is partitioning of the queues; you can additionally leverage dynamic routing to meet your requirement. By design the broker delivers in order. If you bring an external layer into your design, then you need to ensure that order is maintained in your persistence layer too.
You can compare this functionality with another SAP product, SAP PI/PO, where the whole pipeline supports “Exactly Once In-Order” as a Quality of Service (QoS).
As I said, partitioning of queues could cushion that, but if your scenario is mostly failure driven then I am yet to find a robust solution and would like to see how this thread shapes up.
Hey @AnujDulta
I wanted to offer a different solution to your use case.
I would model each Job, with its multiple Operations, as a separate consumer with its own queue. So:
- Job A with its 3 Operations will have its own exclusive queue: job-a-queue
- Job B with its Operations will have its own exclusive queue: job-b-queue
Each queue should have its own DMQ set up, and you should have max retries and exponential backoffs configured on both the queue and the SAP AEM adapter in CI (Cloud Integration).
Each Operation should be idempotent; this can be achieved using the local Idempotent Process Call step. Each Operation should proceed only if the previous step was processed successfully, and you can also add conditions so that an Operation for a given message executes only if there is no record of it having completed successfully before.
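The idempotency part of this could be sketched as follows; the in-memory `completed` set stands in for the store behind CI’s Idempotent Process Call, and `process_operation` is a made-up helper for illustration, not a CI API.

```python
completed = set()   # stand-in for the idempotent-process-call store

def process_operation(message_id, handler):
    """Run the handler only if this message has not completed before.
    `message_id` would come from a unique header such as JMSMessageID."""
    if message_id in completed:
        return "skipped"            # duplicate redelivery: do nothing
    handler(message_id)
    completed.add(message_id)       # record success only AFTER the handler
    return "processed"

log = []
first = process_operation("jobA-op1", log.append)
dup   = process_operation("jobA-op1", log.append)   # redelivered duplicate
print(first, dup, log)   # processed skipped ['jobA-op1']
```

Recording success only after the handler finishes is what makes a redelivery after a crash safe: an unrecorded attempt is simply retried.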
If there is an error in Job A, Operation 2, you should:
- Handle the exceptions in Operation 2 gracefully so that you know the root cause of the error.
- Based on the root cause, determine whether the error is recoverable.
- If it is, retry processing; if not, let the message move to the DMQ.
This way, if there is an error in Operation 2, the subsequent Operation for that message will not happen; the message moves to the DMQ, from where it can be debugged and replayed if required.
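A rough Python sketch of that retry-then-DMQ decision (all names here are hypothetical; in CI and AEM this is adapter/queue configuration rather than hand-written code, and the delays are kept tiny for the demo):

```python
import time

class RecoverableError(Exception):
    """Transient failure worth retrying (e.g. receiver briefly down)."""

def consume_with_retries(op, max_retries=3, base_delay=0.01):
    """Retry with exponential backoff on recoverable errors, then give
    up so the broker's max-redelivery setting routes the message to the
    DMQ. Non-recoverable errors go to the DMQ immediately."""
    for attempt in range(max_retries):
        try:
            return ("delivered", op())
        except RecoverableError:
            time.sleep(base_delay * (2 ** attempt))   # back off, retry
        except Exception:
            break                    # non-recoverable: stop retrying
    return ("to_dmq", None)          # reject; broker moves it to DMQ

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RecoverableError("transient")
    return "ok"

def fatal():
    raise ValueError("bad payload")  # not worth retrying

ok_result  = consume_with_retries(flaky)   # ('delivered', 'ok')
dmq_result = consume_with_retries(fatal)   # ('to_dmq', None)
print(ok_result, dmq_result)
```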
There is a pretty detailed SAP community blog over here on this topic : https://community.sap.com/t5/technology-blog-posts-by-sap/enabling-in-order-processing-with-sap-integration-suite-advanced-event-mesh/ba-p/13703498