Java EE 8 High Performance

Interceptors/decorators

Interceptors are the CDI way of adding custom handlers on top of a bean. For instance, our logging handler becomes the following interceptor in CDI:

@Log
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class LoggingInterceptor implements Serializable {
    @AroundInvoke
    public Object invoke(final InvocationContext context) throws Exception {
        final Logger logger = Logger.getLogger(context.getTarget().getClass().getName());
        logger.info(() -> "Calling " + context.getMethod().getName());
        try {
            return context.proceed();
        } finally {
            logger.info(() -> "Called " + context.getMethod().getName());
        }
    }
}

Decorators do the same job, but they are applied automatically based on the interface(s) they implement, and they get the current implementation injected. They don't require a binding (such as the @Log annotation that activates LoggingInterceptor on a method), but they are specific to a set of types.
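As an illustration, a minimal decorator might look like the following sketch. AccountService, Account, and find() are hypothetical names, not part of this chapter's code; note that, in CDI, a decorator must also be activated, either with @Priority or through a <decorators> entry in beans.xml:

```java
// Hypothetical sketch: a decorator adding the same logging concern as our
// interceptor, but bound by type (AccountService) rather than by @Log.
@Decorator
@Priority(Interceptor.Priority.APPLICATION)
public abstract class LoggedAccountService implements AccountService {
    @Inject
    @Delegate
    private AccountService delegate; // the actual implementation being decorated

    @Override
    public Account find(final String id) {
        Logger.getLogger(AccountService.class.getName())
              .info(() -> "find(" + id + ")");
        return delegate.find(id); // forward to the decorated bean
    }
}
```

Being abstract is allowed for decorators; it lets you override only the methods you care about and leave the container to forward the rest to the delegate.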

In terms of performance, an interceptor/decorator obviously adds some logic and, therefore, some execution time. But it also adds a more vicious overhead: the context creation. This part depends on the CDI implementation your server uses (Weld, OpenWebBeans, CanDI, and so on). However, if you don't have any interceptor, the container doesn't need to create a context, nor to populate it. Most of the context creation is cheap, but the getParameters() method, which exposes the parameters of the intercepted method, can be expensive, since it requires converting the call arguments into an array.

CDI implementations have multiple choices here and we will not go through all of them. What is important to keep in mind here is the following equation:

business_code_execution_time + interceptors_code_execution_time < method_execution_time

If your interceptors don't do much, you can often assume that the container keeps this overhead as low as possible. If you compare this with a framework where you wire everything manually, you will probably see this overhead.
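To get an intuition for this equation, you can measure the gap between a direct call and a reflectively dispatched one, reflection being the mechanism interceptor chains typically rely on. This is only an illustrative micro-measurement, not a rigorous benchmark (JIT warm-up and compilation make single runs noisy); OverheadDemo and business are made-up names:

```java
import java.lang.reflect.Method;

public class OverheadDemo {
    public static int business(final int x) {
        return x * 2; // stand-in for real business logic
    }

    public static void main(final String[] args) throws Exception {
        final int iterations = 1_000_000;

        // direct invocation
        long start = System.nanoTime();
        int direct = 0;
        for (int i = 0; i < iterations; i++) {
            direct += business(i);
        }
        final long directNs = System.nanoTime() - start;

        // "intercepted" invocation: reflective dispatch, as an interceptor chain does
        final Method m = OverheadDemo.class.getMethod("business", int.class);
        start = System.nanoTime();
        int reflective = 0;
        for (int i = 0; i < iterations; i++) {
            reflective += (Integer) m.invoke(null, i);
        }
        final long reflectiveNs = System.nanoTime() - start;

        System.out.println(direct == reflective); // same result either way
        System.out.println("direct(ns)=" + directNs + " reflective(ns)=" + reflectiveNs);
    }
}
```

The functional result is identical; only the time differs, which is exactly the gap the equation above describes.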

By itself, the associated overhead is still acceptable; it is not big enough to give up interceptors, considering the maintenance/complexity versus performance trade-off. However, when you start adding a lot of interceptors, you need to ensure that they are well implemented too. What does this mean? To understand it, we need to step back and look at how interceptors are used.

To link an interceptor to an implementation, you use what we call an interceptor binding: the marker annotation of your interceptor (itself decorated with @InterceptorBinding). No big issue so far, but this binding often holds some configuration, making the interceptor's behavior configurable.

Going back to our logging interceptor, let's make the logger name configurable:

@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface Log {
    /**
     * @return the logger name to use to trace the method invocations.
     */
    @Nonbinding
    String value();
}

Now, LoggingInterceptor needs to read this value and pass it to the logger factory to obtain the logger instance that our interceptor will use to decorate the actual bean invocation. We can modify our previous implementation, as shown in the following snippet, to respect the logger configuration:

@Log("")
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class LoggingInterceptor implements Serializable {
    @AroundInvoke
    public Object invoke(final InvocationContext context) throws Exception {
        final String loggerName = getLoggerName(context);
        final Logger logger = Logger.getLogger(loggerName);
        logger.info(() -> "Calling " + context.getMethod().getName());
        try {
            return context.proceed();
        } finally {
            logger.info(() -> "Called " + context.getMethod().getName());
        }
    }
}

The tricky part is in getLoggerName(). A common but bad and fragile implementation (it relies on plain reflection rather than the CDI metamodel) is as follows:

private String getLoggerName(InvocationContext context) {
    return ofNullable(context.getMethod().getAnnotation(Log.class))
            .orElseGet(() -> context.getTarget().getClass().getAnnotation(Log.class))
            .value();
}

Why is it fragile? Because there is no guarantee that the class lookup works: getTarget() can return a proxy instance whose class does not carry the annotation, and this lookup also ignores bindings inherited through stereotypes. It is bad because it uses reflection at every invocation, and the JVM is not really optimized for such usage; the implementation should call getAnnotation only once.

Regarding performance, a better implementation ensures that we don't use reflection on every invocation, but only once, since the Java model (the Class metadata) doesn't generally change at runtime. To do so, we can use a ConcurrentMap, which holds the already computed names in memory and avoids recomputing them every time the same method is called:

private final ConcurrentMap<Method, String> loggerNamePerMethod = new ConcurrentHashMap<>();

private String getLoggerName(InvocationContext context) {
    return loggerNamePerMethod.computeIfAbsent(context.getMethod(), m -> ofNullable(m.getAnnotation(Log.class))
            .orElseGet(() -> context.getTarget().getClass().getAnnotation(Log.class))
            .value());
}

It simply caches the logger name per method and computes it only once. This way, no reflection is involved after the first call; instead, we rely on the cache. A ConcurrentHashMap is a good candidate for this, and its overhead is negligible compared to a synchronized structure.
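The compute-once behavior can be checked outside any container with a few lines of plain Java. LoggerNameCacheDemo and its counter are illustrative names, and the mapping function stands in for the @Log lookup:

```java
import java.lang.reflect.Method;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LoggerNameCacheDemo {
    static final ConcurrentMap<Method, String> CACHE = new ConcurrentHashMap<>();
    static final AtomicInteger COMPUTATIONS = new AtomicInteger();

    static String loggerName(final Method method) {
        return CACHE.computeIfAbsent(method, m -> {
            COMPUTATIONS.incrementAndGet();        // count actual reflective lookups
            return m.getDeclaringClass().getName(); // stand-in for reading @Log.value()
        });
    }

    public static void main(final String[] args) throws Exception {
        final Method m = String.class.getMethod("length");
        for (int i = 0; i < 1_000; i++) {
            loggerName(m); // simulate 1,000 intercepted invocations of one method
        }
        System.out.println(COMPUTATIONS.get()); // the name was computed only once
    }
}
```

computeIfAbsent guarantees the mapping function runs at most once per key even under concurrent access, which is exactly the property we want for metadata caching.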

To be fast, is it enough to ensure that interceptors cache their metadata? Actually, no. Remember that interceptors are beans with an enforced scope: @Dependent. This scope means a new instance is created every time one is needed; in the context of an interceptor, an instance of the interceptor is created every time an intercepted bean is created.

If you think of a @RequestScoped bean, its interceptors will be created for every request, and so will the cache, which totally defeats its purpose.

To solve this, do not cache in the interceptor itself but in an @ApplicationScoped bean, which is injected into the interceptor:

@ApplicationScoped
class Cache {
    @Inject
    private BeanManager beanManager;

    private final ConcurrentMap<Method, String> loggerNamePerMethod = new ConcurrentHashMap<>();

    String getLoggerName(final InvocationContext context) {
        return loggerNamePerMethod.computeIfAbsent(context.getMethod(), mtd -> {
            // as before
        });
    }
}

@Log("")
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class LoggingInterceptor implements Serializable {
    @Inject
    private Cache cache;

    @AroundInvoke
    public Object invoke(final InvocationContext context) throws Exception {
        final String loggerName = cache.getLoggerName(context);
        final Logger logger = Logger.getLogger(loggerName);
        logger.info(() -> "Calling " + context.getMethod().getName());
        try {
            return context.proceed();
        } finally {
            logger.info(() -> "Called " + context.getMethod().getName());
        }
    }
}

This simple trick ensures that our cache is @ApplicationScoped itself and, therefore, computed only once per application. If you want to make sure you don't compute it at runtime at all, you can even force it to be initialized through a CDI extension, in an observer of the AfterDeploymentValidation event (but this has less impact on performance).
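Such an extension could be sketched as follows. This is only a sketch: it assumes Cache exposes a hypothetical precompute(Method) helper, and the extension would need to be registered in META-INF/services/javax.enterprise.inject.spi.Extension:

```java
// Hypothetical sketch: collect the methods of @Log types during deployment
// scanning, then warm the cache once the deployment is validated, so no
// reflective lookup happens at request time.
public class LogCacheExtension implements Extension {
    private final List<Method> loggedMethods = new ArrayList<>();

    <T> void onProcessAnnotatedType(@Observes @WithAnnotations(Log.class) final ProcessAnnotatedType<T> pat) {
        pat.getAnnotatedType().getMethods().stream()
                .filter(m -> m.isAnnotationPresent(Log.class)
                        || m.getDeclaringType().isAnnotationPresent(Log.class))
                .map(AnnotatedMethod::getJavaMember)
                .forEach(loggedMethods::add);
    }

    void onAfterDeploymentValidation(@Observes final AfterDeploymentValidation adv, final BeanManager bm) {
        final Bean<?> bean = bm.resolve(bm.getBeans(Cache.class));
        final Cache cache = (Cache) bm.getReference(bean, Cache.class, bm.createCreationalContext(bean));
        loggedMethods.forEach(cache::precompute); // hypothetical warm-up method on Cache
    }
}
```

The @WithAnnotations filter keeps the extension cheap by only observing types that actually use @Log.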

To conclude this part, note that the specifications now rely on interceptors to provide their features and integrate with one another (the Security API, JTA, JSF, JAX-RS, and so on). The EJB specification provided the JTA integration until Java EE 7 (replaced by @Transactional) and the security integration until Java EE 8 (replaced by the Security API). These were ad hoc implementations of the integrations (like our Container at the beginning of this chapter), but they are strictly equivalent to the interceptor-based ones in functional terms. And in terms of performance, both implementations (EJB based and CDI based) are often very close.