Abstract: Through a case study, this article identifies a problem related to the aggregation strategy of the Camel Multicast component. By examining the Camel source code, the cause of the problem is found and a solution is given. I hope this article can help Camel users who encounter the same problem.

This article is shared from the HUAWEI CLOUD community post "A problem encountered when using the Apache Camel Multicast component" by the author "middleware brother".

1 Introduction

This article is translated from "ROUTING MULTICAST OUTPUT AFTER ENCOUNTERING PARTIAL FAILURES", published in the Apache Camel community by Reji Mathews of the Huawei Canada Research Institute. With the original author's consent, some parts of the original text have been modified.

2 Introduction to Multicast Components

Multicast is a powerful EIP (Enterprise Integration Pattern) component in Apache Camel (hereinafter referred to as "Camel") that can send a message to multiple sub-routes and execute them in parallel.

According to the official documentation, the Multicast component can be configured in two ways:

  • All sub-routes execute independently, and the reply of the sub-route that responds last is used as the final output. This is the default configuration of the Multicast component.
  • A custom aggregator, built by implementing Camel's aggregation strategy (AggregationStrategy), processes the outputs of all sub-routes. A minimal sketch of both styles follows this list.
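
For illustration only, here is a minimal sketch of the two styles inside a RouteBuilder's configure() method (the endpoint URIs and the MyAggregationStrategy class are hypothetical, not from the original article):

from("direct:defaultStyle")
    // default behavior: no strategy supplied, the last reply to arrive wins
    .multicast()
        .to("direct:A", "direct:B")
    .end()
    .log("last reply: ${body}");

from("direct:customStyle")
    // custom behavior: every reply passes through the supplied strategy
    .multicast(new MyAggregationStrategy())
        .parallelProcessing()
        .to("direct:A", "direct:B")
    .end()
    .log("aggregated: ${body}");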

3 Problem Description

The use case in this article is as follows: the Jetty component publishes an API; when the API is called, the incoming message is multicast to the two sub-routes "direct:A" and "direct:B". After the replies are processed by a custom aggregation strategy, the route proceeds with the next steps. An exception is thrown in "direct:A" to simulate a failure, while "direct:B" runs normally. An exception handling strategy is also defined in onException.

The Camel version used in this article is 3.8.0.

@Override
public void configure() throws Exception {
    onException(Exception.class)
        .useOriginalMessage()
        .handled(true)
        .log("Exception handler invoked")
        .transform().constant("{\"data\" : \"err\"}")
        .end();

    from("jetty:http://localhost:8081/myapi?httpMethodRestrict=GET")
        .log("received request")
        .log("Entering multicast")
        .multicast(new SimpleFlowMergeAggregator())
        .parallelProcessing().to("direct:A", "direct:B")
        .end()
        .log("Aggregated results ${body}")
        .log("Another log")
        .transform(simple("{\"result\" : \"success\"}"))
        .end();

    from("direct:A")
        .log("Executing PATH_1 - exception path")
        .transform(constant("DATA_FROM_PATH_1"))
        .log("Starting exception throw")
        .throwException(new Exception("USER INITIATED EXCEPTION"))
        .log("PATH_1")
        .end();

    from("direct:B")
        .log("Executing PATH_2 - success path")
        .delayer(1000)
        .transform(constant("DATA_FROM_PATH_2"))
        .log("PATH_2")
        .end();
}
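
Once the application is started, the case can be exercised by sending a GET request to http://localhost:8081/myapi (for example, with curl or a browser).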

The custom aggregator SimpleFlowMergeAggregator is defined as follows; it collects the results of all sub-routes into a list:

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.AggregationStrategy;
import org.apache.camel.Exchange;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SimpleFlowMergeAggregator implements AggregationStrategy {
    private static final Logger LOGGER = LoggerFactory.getLogger(SimpleFlowMergeAggregator.class.getName());

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        LOGGER.info("Inside aggregator " + newExchange.getIn().getBody());
        if (oldExchange == null) {
            // the first reply to arrive becomes the base exchange
            String data = newExchange.getIn().getBody(String.class);
            List<String> aggregatedDataList = new ArrayList<>();
            aggregatedDataList.add(data);
            newExchange.getIn().setBody(aggregatedDataList);
            return newExchange;
        }

        // merge each later reply into the base exchange's list
        List<String> oldData = oldExchange.getIn().getBody(List.class);
        oldData.add(newExchange.getIn().getBody(String.class));
        oldExchange.getIn().setBody(oldData);

        return oldExchange;
    }
}
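
For example, if both sub-routes above succeeded, the aggregator would be invoked once per reply, and the final body would be a list such as [DATA_FROM_PATH_1, DATA_FROM_PATH_2], in completion order because parallelProcessing() is enabled.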

Based on our understanding of the Multicast component's execution logic, we expected the following behavior with multiple sub-routes: if at least one sub-route runs successfully, the aggregated result is used to continue with the subsequent steps; if all sub-routes fail, the entire route stops. In this case, since the sub-route "direct:A" fails and the sub-route "direct:B" runs normally, the next two steps, log and transform, should be executed normally.

Running the above case produces the following log output:

2021-05-06 12:43:18.565 INFO 13956 --- [qtp916897446-42] route1 : received request
2021-05-06 12:43:18.566 INFO 13956 --- [qtp916897446-42] route1 : Entering multicast
2021-05-06 12:43:18.575 INFO 13956 --- [ #4 - Multicast] route2 : Executing PATH_1 - exception path
2021-05-06 12:43:18.575 INFO 13956 --- [ #4 - Multicast] route2 : Starting exception throw
2021-05-06 12:43:18.578 INFO 13956 --- [ #4 - Multicast] route2 : Exception handler invoked
2021-05-06 12:43:18.579 INFO 13956 --- [ #4 - Multicast] c.e.d.m.SimpleFlowMergeAggregator : Inside aggregator {"data" : "err"}
2021-05-06 12:43:19.575 INFO 13956 --- [ #3 - Multicast] route3 : Executing PATH_2 - success path
2021-05-06 12:43:21.576 INFO 13956 --- [ #3 - Multicast] route3 : PATH_2
2021-05-06 12:43:21.576 INFO 13956 --- [ #3 - Multicast] c.e.d.m.SimpleFlowMergeAggregator : Inside aggregator DATA_FROM_PATH_2

Observing the log, we found that after the results of the two sub-routes were aggregated, the subsequent log and transform steps were not executed. This is not the desired result.

After many tests, we also found that the subsequent steps are skipped only when the first sub-route to reach the aggregator SimpleFlowMergeAggregator (here "direct:A", since "direct:B" is delayed) fails; if the first sub-route completes successfully, the subsequent steps continue even if the other sub-route ("direct:B") fails.

4 Problem Analysis

Next, we examine the Camel source code to find the cause of this behavior.

In Pipeline.java in the camel-core-processors module, the run() method contains the following code:

@Override
public void run() {
    boolean stop = exchange.isRouteStop();
    int num = index;
    boolean more = num < size;
    boolean first = num == 0;

    if (!stop && more && (first || continueProcessing(exchange, "so breaking out of pipeline", LOG))) {

        // prepare for next run
        if (exchange.hasOut()) {
            exchange.setIn(exchange.getOut());
            exchange.setOut(null);
        }

        // get the next processor
        AsyncProcessor processor = processors.get(index++);

        processor.process(exchange, this);
    } else {
        // copyResults is needed in case MEP is OUT and the message is not an OUT message
        ExchangeHelper.copyResults(exchange, exchange);

        // logging nextExchange as it contains the exchange that might have altered the payload and since
        // we are logging the completion if will be confusing if we log the original instead
        // we could also consider logging the original and the nextExchange then we have *before* and *after* snapshots
        if (LOG.isTraceEnabled()) {
            LOG.trace("Processing complete for exchangeId: {} >>> {}", exchange.getExchangeId(), exchange);
        }

        AsyncCallback cb = callback;
        taskFactory.release(this);
        reactiveExecutor.schedule(cb);
    }
}

In this method, the following if condition determines whether to continue with the subsequent steps:

if (!stop && more && (first || continueProcessing(exchange, "so breaking out of pipeline", LOG)))

It can be seen that the subsequent steps will not be executed in any of the following three cases:

  1. A previous step has marked the exchange to stop routing:

boolean stop = exchange.isRouteStop();

  2. There are no more steps left to execute:

boolean more = num < size;

  3. The continueProcessing() method returns false.

Let's take a look at the code of the continueProcessing() method.

public final class PipelineHelper {
    public static boolean continueProcessing(Exchange exchange, String message, Logger log) {
        ExtendedExchange ee = (ExtendedExchange) exchange;
        boolean stop = ee.isFailed() || ee.isRollbackOnly() || ee.isRollbackOnlyLast()
                || (ee.isErrorHandlerHandledSet() && ee.isErrorHandlerHandled());
        if (stop) {
            if (log.isDebugEnabled()) {
                StringBuilder sb = new StringBuilder();
                sb.append("Message exchange has failed: ").append(message).append(" for exchange: ").append(exchange);
                if (exchange.isRollbackOnly() || exchange.isRollbackOnlyLast()) {
                    sb.append(" Marked as rollback only.");
                }
                if (exchange.getException() != null) {
                    sb.append(" Exception: ").append(exchange.getException());
                }
                if (ee.isErrorHandlerHandledSet() && ee.isErrorHandlerHandled()) {
                    sb.append(" Handled by the error handler.");
                }
                log.debug(sb.toString());
            }
 
            return false;
        }
        if (ee.isRouteStop()) {
            if (log.isDebugEnabled()) {
                log.debug("ExchangeId: {} is marked to stop routing: {}", exchange.getExchangeId(), exchange);
            }
            return false;
        }
 
        return true;
    }
}

It can be seen that when an exception occurs during execution and is handled by the error handler, the continueProcessing() method returns false.

Going back to our case: the exchange from the first sub-route to reach the aggregator SimpleFlowMergeAggregator ("direct:A") becomes the base for the aggregation, and the body data of the other sub-route ("direct:B") is merged into it. In fact, many Camel users implement custom aggregation strategies this way. But there is a problem: during exception handling, a status flag is set on the exchange of the sub-route "direct:A", and this flag is passed downstream to decide whether to continue with the next steps. Since the base exchange of the aggregation is the failed exchange from "direct:A", the continueProcessing() method eventually returns false and the subsequent steps are not executed.
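
To make this concrete, the following sketch (our own helper, not a Camel API) restates the check that the continueProcessing() method shown above performs on the aggregated exchange:

// a minimal sketch of the pipeline's continuation check, assuming the
// aggregated exchange is the base exchange returned by the strategy
private static boolean stopsPipeline(org.apache.camel.Exchange aggregated) {
    org.apache.camel.ExtendedExchange ee = (org.apache.camel.ExtendedExchange) aggregated;
    // in our case the base exchange came from "direct:A", whose exception was
    // marked handled(true) by onException, so this expression evaluates to true
    return ee.isFailed() || ee.isRollbackOnly() || ee.isRollbackOnlyLast()
            || (ee.isErrorHandlerHandledSet() && ee.isErrorHandlerHandled());
}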

5 Solution

There are several ways to handle the state of the exchange object during exception handling. This article adopts the following solution: if the first sub-route completes normally, continue with the subsequent steps as before; if the first sub-route fails, swap the base exchange for that of a successfully executed sub-route, and then continue with the subsequent steps.

The updated custom aggregator SimpleFlowMergeAggregator is as follows:

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.AggregationStrategy;
import org.apache.camel.Exchange;
import org.apache.camel.ExtendedExchange;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SimpleFlowMergeAggregator implements AggregationStrategy {
    private static final Logger LOGGER = LoggerFactory.getLogger(SimpleFlowMergeAggregator.class.getName());

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        LOGGER.info("Inside aggregator " + newExchange.getIn().getBody());
        if (oldExchange == null) {
            String data = newExchange.getIn().getBody(String.class);
            List<String> aggregatedDataList = new ArrayList<>();
            aggregatedDataList.add(data);
            newExchange.getIn().setBody(aggregatedDataList);
            return newExchange;
        }

        if (hadException(oldExchange)) {
            if (!hadException(newExchange)) {
                // the base exchange failed but the new one succeeded:
                // aggregate, then swap the base to the successful exchange
                LOGGER.info("Found new exchange with success. swapping the base exchange");
                List<String> oldData = oldExchange.getIn().getBody(List.class);
                oldData.add(newExchange.getIn().getBody(String.class));
                // swapped the base here
                newExchange.getIn().setBody(oldData);
                return newExchange;
            }
        }

        List<String> oldData = oldExchange.getIn().getBody(List.class);
        oldData.add(newExchange.getIn().getBody(String.class));
        oldExchange.getIn().setBody(oldData);

        return oldExchange;
    }

    private boolean hadException(Exchange exchange) {
        if (exchange.isFailed()) {
            return true;
        }
        if (exchange.isRollbackOnly()) {
            return true;
        }
        if (exchange.isRollbackOnlyLast()) {
            return true;
        }
        ExtendedExchange ee = (ExtendedExchange) exchange;
        return ee.isErrorHandlerHandledSet() && ee.isErrorHandlerHandled();
    }
}
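
Because the exchange returned as the new base comes from the successful sub-route, it carries none of the failure flags checked by continueProcessing(), so the pipeline continues with the subsequent steps.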

Running the case again produces the following log output:

2021-05-06 12:46:19.122 INFO 2576 --- [qtp174245837-45] route1 : received request
2021-05-06 12:46:19.123 INFO 2576 --- [qtp174245837-45] route1 : Entering multicast
2021-05-06 12:46:19.130 INFO 2576 --- [ #3 - Multicast] route2 : Executing PATH_1 - exception path
2021-05-06 12:46:19.130 INFO 2576 --- [ #3 - Multicast] route2 : Starting exception throw
2021-05-06 12:46:19.134 INFO 2576 --- [ #3 - Multicast] route2 : Exception handler invoked
2021-05-06 12:46:19.135 INFO 2576 --- [ #3 - Multicast] c.e.d.m.SimpleFlowMergeAggregator : Inside aggregator {"data" : "err"}
2021-05-06 12:46:20.130 INFO 2576 --- [ #4 - Multicast] route3 : Executing PATH_2 - success path
2021-05-06 12:46:22.132 INFO 2576 --- [ #4 - Multicast] route3 : PATH_2
2021-05-06 12:46:22.132 INFO 2576 --- [ #4 - Multicast] c.e.d.m.SimpleFlowMergeAggregator : Inside aggregator DATA_FROM_PATH_2
2021-05-06 12:46:22.132 INFO 2576 --- [ #4 - Multicast] c.e.d.m.SimpleFlowMergeAggregator : Found new exchange with success. swapping the base exchange
2021-05-06 12:46:22.133 INFO 2576 --- [ #4 - Multicast] route1 : Aggregated results {"data" : "err"},DATA_FROM_PATH_2
2021-05-06 12:46:22.133 INFO 2576 --- [ #4 - Multicast] route1 : Another log

As the log shows, with the new custom aggregation strategy the subsequent log and transform steps are executed successfully.

6 Conclusion

Through a case study, this article identified a problem related to the aggregation strategy of the Camel Multicast component. By examining the Camel source code, the cause of the problem was found and a solution was given.

I hope this article can help Camel users who encounter the same problem.
