Coffee Documentation

coff:ee is a Jakarta EE solution set designed to bring together common algorithms from the enterprise world and to provide a basic solution that can be tailored to project needs where required.

Each company develops its own solution set that tries to bring projects together, so that the same modules do not have to be rewritten, copied and maintained over and over.

This solution set is suitable for serving both SOA and microservice architectures. Its architecture is modular, and almost everything can be overridden at project level. The framework is based on the following systems, which are crucial for the whole operation:

1. Architecture structure

Diagram

2. Coffee Core

2.1. coffee-se

The purpose of this module is to collect classes that are as independent as possible from the Jakarta EE API.

2.1.1. coffee-se-logging

Module for logging- and MDC-related Java SE components without any dependencies.

Logging

Contains a basic logging system using java.util.logging.Logger.

sample usage in SE environment
import hu.icellmobilsoft.coffee.se.logging.Logger;

private Logger log = Logger.getLogger(LogSample.class);

public String logMessage() {
    log.trace("sample log message");
    return "something";
}

For more description and usage via CDI, see coffee-cdi/logger.

MDC

The module contains its own framework for MDC (Mapped Diagnostic Context) management. This is because there may be different MDC solutions on specific projects (e.g. jboss, slf4j, logback…​). It is used via the static methods of the hu.icellmobilsoft.coffee.se.logging.mdc.MDC class.

Internally, the MDC class tries to find an MDC solution available on the classpath and delegates the requests to the class found. Currently the org.jboss.logging.MDC and org.slf4j.MDC implementations are supported, but this can be extended at project level using the service loader mechanism.

MDC extension

To use MDC implementations not supported at the coff:ee level, implement the MDCAdapter and MDCAdapterProvider interfaces, then load the implemented MDCAdapterProvider using the service loader mechanism.

Example of CustomMDC connection:
  1. Implement MDCAdapter for CustomMDC:

    com.project.CustomMDCAdapter
    public class CustomMDCAdapter implements MDCAdapter {
    
        @Override
        public String get(String key){
            //The adapter delegates its calls to our CustomMdc
            return CustomMDC.get(key);
        }
    }
  2. Implement MDCAdapterProvider to build CustomMDCAdapter:

    com.project.CustomMDCAdapterProvider
    public class CustomMDCAdapterProvider implements MDCAdapterProvider {
    
        @Override
        public MDCAdapter getAdapter() throws Exception {
            return new CustomMDCAdapter();
        }
    }
  3. Bind CustomMDCAdapterProvider via service loader:

    META-INF/services/hu.icellmobilsoft.coffee.se.logging.mdc.MDCAdapterProvider
    com.project.CustomMDCAdapterProvider
MDC order

The available MDC implementations are probed in the following order; the first one that works is used:

  1. ServiceLoader extensions

  2. org.jboss.logging.MDC

  3. org.slf4j.MDC

  4. CoffeeMDCAdapter

    • coff:ee implementation, fallback only

    • values are stored in ThreadLocal

    • must be handled separately if you want it to be logged.
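The probing order above can be sketched with plain JDK class lookup. This is an illustrative sketch of the mechanism, not the actual MDC class internals (which also consult the ServiceLoader extensions first):

```java
// Sketch of the MDC implementation lookup order described above.
// Assumption: plain Class.forName probing; the fallback label is illustrative.
public class MdcProbe {

    static String resolveMdcImplementation() {
        String[] candidates = { "org.jboss.logging.MDC", "org.slf4j.MDC" };
        for (String className : candidates) {
            try {
                Class.forName(className);
                return className; // first implementation found on the classpath wins
            } catch (ClassNotFoundException e) {
                // not on the classpath, try the next candidate
            }
        }
        return "CoffeeMDCAdapter"; // coff:ee fallback, ThreadLocal based
    }

    public static void main(String[] args) {
        System.out.println("MDC implementation: " + resolveMdcImplementation());
    }
}
```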

2.2. coffee-dto

Module designed to collect the ancestors of the basic DTO, adapter and exception classes, mainly so that some common code can be extracted into the coff:ee Jakarta EE solution set. It should not have any dependencies other than annotations serving documentation and JAXB functions.

It consists of several submodules.

2.2.1. coffee-dto-base

Contains Java classes that serve as ancestors for the entire coff:ee solution set.

This is where the universal adapters for handling the java.time (Java 8+) types in XSDs and the base paths of the REST endpoints live.

The module contains the basic exception classes. All other exceptions that are created must derive from these.

2.2.2. coffee-dto-xsd

Its content should preferably be XSDs that serve as guides on projects. They should contain very universal XSD simpleType and complexType definitions, which projects can bend to their own image.

This module does not generate DTO objects, it only adds XSDs.

2.2.3. coffee-dto-gen

This module is used to generate the Coffee DTO. It is organized separately to be easily replaced from the dependency structure.

2.2.4. coffee-dto-impl

The module serves as a sample implementation for coffee-dto-base, coffee-dto-xsd and coffee-dto-gen. Projects using coff:ee will include it. If the DTOs generated by coff:ee do not match the target project, you will have to exclude them in this module.

By itself it is a universal, usable module.

2.3. coffee-cdi

The purpose of this module is to connect Jakarta EE, the logger, deltaspike and microprofile-config.

All CDI based. Description of components and use cases below:

2.3.1. logger

Coffee uses its own logging system, for several reasons:

  • wraps the actual logging system (currently java.util.logging)

  • collects all logs produced at request level, including entries that are not written to the console (or elsewhere) because the logger is set to a higher level (for example, if the root logger is set to INFO, TRACE and DEBUG entries are not output anywhere); in case of an error these collected entries can still be emitted to help debugging

  • other information can be put into the request-level log container

  • keeps track of the highest log level that was logged during the request

Each class has its own logger; loggers are not inherited or passed around. CDI plus the jboss logger provide the basis for its use:

sample use
(1)
@Inject
@ThisLogger
private hu.icellmobilsoft.coffee.cdi.logger.AppLogger log;


(2)
@Inject
private hu.icellmobilsoft.coffee.se.logging.Logger log;

(3)
import hu.icellmobilsoft.coffee.cdi.logger.LogProducer;

public static String blabla() {
    LogProducer.getStaticDefaultLogger(BlaBla.class).trace("class blabla");
    LogProducer.getStaticDefaultLogger("BlaBla").trace("class name blabla");
    return "blabla";
}

(4)
import hu.icellmobilsoft.coffee.se.logging.Logger;

public static String blabla() {
    Logger.getLogger(BlaBla.class).trace("class blabla");
    Logger.getLogger("BlaBla").trace("class name blabla");
    return "blabla";
}
1 where we work with the class in @RequestScope (or higher scope) (90% of the cases)
2 where there is no RequestScope
3 where inject is not used, e.g. in static methods
4 in Java SE environments, e.g. client jars

If a parameter is included in a log message, it must be placed between "[" and "]" characters, without exception.

parameter logging pattern
log.trace("Generated id: [{0}]", id);

The purpose of this is to be able to identify the variable value immediately when looking at the log, including when it is "" (empty String) or null.
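The effect of the brackets is easy to demonstrate with java.text.MessageFormat, which the {0}-style placeholders resemble; this is a plain JDK illustration, not the coff:ee logger itself:

```java
import java.text.MessageFormat;

public class BracketDemo {
    public static void main(String[] args) {
        // the brackets make empty and null values visible in the log line
        System.out.println(MessageFormat.format("Generated id: [{0}]", "A1B2"));        // Generated id: [A1B2]
        System.out.println(MessageFormat.format("Generated id: [{0}]", ""));            // Generated id: []
        System.out.println(MessageFormat.format("Generated id: [{0}]", (Object) null)); // Generated id: [null]
    }
}
```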

2.3.2. config

As a configuration solution we use the microprofile-config solution.

In short: configuration parameters can be specified in a wide range of ways. It is not enough to burn a configuration parameter into some properties file, because it may have a different value from environment to environment. The values can be specified at separate levels, be it ETCD, properties file, system property, environment property or anything else. microprofile-config looks up a given key in all available sources and uses the highest-priority value. Basic use cases:

  • in code - when not using a CDI container, in static methods and other cases

  • static - the value is read once and stays fixed for the runtime of the program

  • dynamic - the key value is looked up dynamically, each time it is used; configuration changes can be picked up without restarting (for example, setting some system property at runtime)

configuration sample
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import jakarta.inject.Provider;


(1)
public String kodban() {
    Config config = ConfigProvider.getConfig();
    String keyValue = config.getValue("key", String.class);
    return keyValue;
}


(2)
@Inject
@ConfigProperty(name="key")
String keyValue;

public String statikusan() {
    return keyValue;
}


(3)
@Inject
@ConfigProperty(name="key")
Provider<String> keyValue;

public String dynamic() {
    return keyValue.get();
}
1 value retrieved in code
2 static value input
3 dynamic value retrieval

2.3.3. trace

The annotations in the hu.icellmobilsoft.coffee.cdi.trace.annotation package allow the modules of coff:ee to provide trace information. The annotations allow coff:ee modules to plug into an existing trace flow or to start a new one.

The internal logic of coffee trace implementation is independent and selectable within the projects. If no trace implementation is selected, then by default, no trace propagation occurs in the system.

  • Trace usage

    • The @Traced annotation allows a method to become traceable.

      • SpanAttribute - links the span data of coff:ee modules to the values of mp-opentracing or mp-telemetry

        • component - module identifier that is part of the trace, e.g. redis-stream

        • kind - specify type of span, e.g. CONSUMER (default INTERNAL)

        • dbType - database type, e.g. redis

sample ITraceHandler usage
...
@Inject
private ITraceHandler traceHandler;
...

public Object execute(CdiQueryInvocationContext context) {
    // create jpa query ...
    Traced traced = new Traced.Literal(SpanAttribute.Database.COMPONENT, SpanAttribute.Database.KIND, SpanAttribute.Database.DB_TYPE);
    String operation = context.getRepositoryClass() + "." + method.getName();
    return traceHandler.runWithTrace(() -> context.executeQuery(jpaQuery), traced, operation);
}
sample @Traced
@Traced(component = SpanAttribute.Redis.Stream.COMPONENT, kind = SpanAttribute.Redis.Stream.KIND, dbType = SpanAttribute.Redis.DB_TYPE)
@Override
public void onStream(StreamEntry streamEntry) throws BaseException {
...
}

2.3.4. Metrics

The internal logic of coffee has a metric implementation that is independent and selectable within projects. If no metric implementation is selected, then the default Noop*MetricsHandler is activated in the system.

The choice of implementations can be found in the coffee-module-mp-metrics/micrometer documentation.

2.4. coffee-configuration

The purpose of this module is to complement microprofile-config, e.g. with caching.

inject configuration values
// for limited use where speed and high transaction count are not important,
// in such cases it is worth choosing one of the other methods
@Inject
@ConfigProperty(name="key")
Provider<String> keyValue;


// traditional microprofile-config query
public String kodban() {
    Config config = ConfigProvider.getConfig();
    String keyValue = config.getValue("key", String.class);
    return keyValue;
}

2.4.1. ConfigurationHelper class

This class allows you to query type specific configurations.

Query configuration
@Inject
private ConfigurationHelper configurationHelper;
...
Integer ttl = configurationHelper.getInteger("public.login.session.token.validity");

2.4.2. ApplicationConfiguration class

Similar to the ConfigurationHelper class, but uses @ApplicationScope level caching: cached values are stored for 30 minutes. It also allows the cached values to be dropped immediately, which can be used to build additional logic on top (e.g. change the value externally in ETCD, then send a JMS topic message so that the cached values are forgotten and read again immediately).

ApplicationConfiguration example
@Inject
private ApplicationConfiguration applicationConfiguration;

public String kodban() {
    String minVersion = applicationConfiguration.getString(EtcdVal.KEY_PUBLIC_INVOICE_MIN_REQUEST_VERSION);
    return minVersion;
}

2.5. coffee-tool

The purpose of this module is to collect basic util classes.

All date, String, Number or other static classes should be placed here.

MavenURLHandler

Auxiliary class for XSD catalog usage; it allows URLs of the form

maven:hu.icellmobilsoft.coffee.dto.xsd:coffee-dto-xsd:jar::!/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd

to be handled in the code. It needs to be activated separately, so it causes no complications (but it is not active by default either).

AesGcmUtil

AES 256 GCM encryption and decryption helper class. The cipher is based on AES/GCM/NoPadding, the key must be 256 bits long, the IV is 12 bytes long. Includes helper methods for key and IV generation.

Example usage:

String inputText = "test input test";
byte[] key = AesGcmUtil.generateKey(); //B64: HMTpQ4/aEDKoPGMMqMtjeTJ2s26eOEv1aUrE+syjcB8=
byte[] iv = AesGcmUtil.generateIv(); //B64: 5nqOVSjoGYk/oSwj

byte[] encoded = AesGcmUtil.encryptWithAes256GcmNoPadding(key, inputText.getBytes(StandardCharsets.UTF_8), iv);
String encodedB64 = Base64.getEncoder().encodeToString(encoded); //fRCURHp5DWXtrESNHMo1DUoAcejvKDu9Y5wd5zXblg==

byte[] decoded = AesGcmUtil.decryptWithAes256GcmNoPadding(key, encoded, iv);
String decodedString = new String(decoded, StandardCharsets.UTF_8); //test input test
A given key-IV pair must not be reused! For this reason, encrypt/decrypt without an explicit IV should only be called with single-use keys!
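The operations of AesGcmUtil can be reproduced with plain JDK javax.crypto APIs. The following is a sketch under the parameters stated above (256-bit key, 12-byte IV, AES/GCM/NoPadding; the 128-bit auth tag length is an assumption), not the actual AesGcmUtil implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AesGcmSketch {
    public static void main(String[] args) throws Exception {
        SecureRandom random = new SecureRandom();
        byte[] key = new byte[32]; // 256-bit key
        byte[] iv = new byte[12];  // 12-byte IV
        random.nextBytes(key);
        random.nextBytes(iv);

        byte[] plain = "test input test".getBytes(StandardCharsets.UTF_8);

        // encrypt with AES/GCM/NoPadding, 128-bit authentication tag
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, iv));
        byte[] encrypted = cipher.doFinal(plain);

        // decrypt with the same key-IV pair
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, iv));
        byte[] decrypted = cipher.doFinal(encrypted);

        System.out.println(Arrays.equals(plain, decrypted)); // true: round trip succeeds
    }
}
```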
JsonUtil

JsonUtil is a thin wrapper around Gson.

example - deserialization of generic type
String paramString = "[{\"key\":\"testTitleKey\",\"value\":\"testTitleValue\"}]";
Type paramListType = new TypeToken<List<ParameterType>>() {}.getType();
List<ParameterType> list = JsonUtil.toObjectUncheckedEx(paramString, paramListType);
System.out.println(list.get(0).getKey());

2.6. coffee-jpa

The purpose of this module is to connect JPA.

It includes the deltaspike-jpa and hibernate hookup. It contains the paging helper classes, the transaction handling classes, and the ancestor of all *Service classes, the BaseService class.

2.6.1. TransactionHelper

Contains helper methods with @Transactional annotations for @FunctionalInterfaces declared in the FunctionalInterfaces class.

Thus, if the first entry point of a class is not in a transaction but we want to run a code snippet in a transaction within it, we only need to extract the desired snippet into a private method (or into any @FunctionalInterface provided by coff:ee) and call the corresponding method of TransactionHelper, which performs the transactional execution.

This avoids the need to extract the logic to be run in the transaction into a public method annotated with @Transactional and call it via CDI.current(), or to do the same by moving it into a separate class.

TransactionHelper usage example
import jakarta.enterprise.inject.Model;
import jakarta.inject.Inject;

import hu.icellmobilsoft.coffee.dto.exception.BaseException;
import hu.icellmobilsoft.coffee.dto.exception.InvalidParameterException;
import hu.icellmobilsoft.coffee.jpa.helper.TransactionHelper;

@Model
public class TransactionHelperExample {

    @Inject
    private InvoiceService invoiceService;

    @Inject
    private TransactionHelper transactionHelper;

    public void example(Invoice invoice) throws BaseException {
        if (invoice == null) {
            throw new InvalidParameterException("invoice is NULL!");
        }

        // operations outside the transaction
        // ...

        // BaseExceptionFunction running in transaction
        transactionHelper.executeWithTransaction(invoiceService::save, invoice);

        // in-transaction BaseExceptionRunner (e.g.: for void method)
        transactionHelper.executeWithTransaction(() -> saveInvoice(invoice));

        // operations outside the transaction
        // ...
    }

    private void saveInvoice(Invoice invoice) throws BaseException {
        invoiceService.save(invoice);
    }
}

2.6.2. BatchService

BatchService is used to perform bulk database operations (insert, update, delete) in groups.
It mainly contains PreparedStatement-based batch operations, in which the SQL compilation is performed with the support of Hibernate.

Type support

BatchService supports operations on the following types with certainty:

Support type / Java types

Null (treated as value): all types evaluated as null by hibernate.

CustomType (treated as type): currently there is no specific type; it exists to allow extension.

Enum (treated as type): all enum types.

ManyToOneType (treated as type): all types annotated with @ManyToOne.

ConvertedBasicType (treated as type): all types with a converter.

BasicType (treated as type):

  • boolean, Boolean

  • char, Character

  • java.sql.Date, java.sql.Time, java.sql.Timestamp

  • java.util.Date, java.util.Calendar

  • LocalDate, LocalTime, LocalDateTime, OffsetTime, OffsetDateTime, ZonedDateTime, Instant

  • Blob, byte[], Byte[]

  • byte, short, int, long, float, double

  • Byte, Short, Integer, Long, Float, Double

  • BigInteger, BigDecimal

  • String

Managed by JDBC driver: all other types not in the list.

For special types it is recommended to use a custom converter.
Types not listed here are handled by the specific JDBC driver.
Type support has been tested on PostgreSQL and Oracle databases to ensure correct operation. For all other databases, type coverage is in theory mostly guaranteed, but anomalies may occur.
Null value handling

BatchService starts type handling with a null scan. If the value of any type returned by hibernate is null, BatchService sets the value as SqlTypes.NULL JDBC type using the PreparedStatement.setNull() method.

JDBC drivers also use this method.
CustomType handling

BatchService currently does not handle any CustomType directly, but it provides the possibility to extend it. By default, all CustomType types are handled by the JDBC driver used!

Enum types handling

BatchService handles the resolution of Enum types as follows:

Java code / Value inserted by BatchService

@Column(name = "ENUM_DEFAULT")
private EnumType enumDefault;

The ordinal associated with the given enum value; the value is inserted according to the ordinal() method.

@Column(name = "ENUM_ORDINAL")
@Enumerated(value = EnumType.ORDINAL)
private EnumType enumOrdinal;

The ordinal associated with the given enum value; the value is inserted according to the ordinal() method.

@Column(name = "ENUM_STRING")
@Enumerated(value = EnumType.STRING)
private EnumType enumString;

The name associated with the given enum value; the value is inserted according to the name() method.
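The two mapping strategies correspond to the standard java.lang.Enum methods; a minimal JDK-only illustration (the Status enum is hypothetical):

```java
public class EnumMappingDemo {
    // hypothetical enum, standing in for an entity's EnumType field
    enum Status { ACTIVE, INACTIVE, DELETED }

    public static void main(String[] args) {
        Status status = Status.INACTIVE;
        // default and @Enumerated(ORDINAL) mapping: the position is inserted
        System.out.println(status.ordinal()); // 1
        // @Enumerated(STRING) mapping: the constant name is inserted
        System.out.println(status.name());    // INACTIVE
    }
}
```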

ManyToOneType handling

Within an entity, fields that use the @ManyToOne (jakarta.persistence.ManyToOne) annotation are treated by hibernate as ManyToOneType.
These ManyToOneType types are handled by BatchService as follows:

Java code / Value inserted by BatchService
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "MANY_TO_ONE_SAMPLE_ENTITY")
private SampleEntity manyToOneSampleEntity;

BatchService takes the unique identifier of the given entity via EntityHelper.getLazyId() and inserts that.

ConvertedBasicType handling

Fields within an entity that have a converter, i.e. have the @Convert (jakarta.persistence.Convert) annotation placed on them, are treated by hibernate as ConvertedBasicType.

For example:
@Convert(converter = YearMonthAttributeConverter.class)
@Column(name = "YEAR_MONTH")
private YearMonth yearMonth;

For this type, Hibernate provides the JDBC type and all additional settings for the converted value, but the conversion itself must be done manually. Thus, BatchService calls the given converter and then hands the converted value, together with the ConvertedBasicType (a BasicType) set up appropriately by hibernate, over to the BasicType handling process.

BasicType handling

The BasicType type combines the Java and JDBC types, so for each Java type it contains the corresponding JDBC type. The cases are thus separated according to the type code stored in the JDBC type.

Date type BasicType handling

BasicType types with the SqlTypes.DATE jdbc type code are handled by BatchService as follows:

Java code / Value inserted by BatchService
@Column(name = "DATE")
private java.sql.Date date;

Can be set directly, without conversion, using the PreparedStatement.setDate() method.

@Column(name = "LOCAL_DATE")
private LocalDate localDate;

Converted to java.sql.Date type, then set using PreparedStatement.setDate() method.

@Temporal(TemporalType.DATE)
@Column(name = "DATE_TEMPORAL_DATE")
private java.util.Date dateTemporalDate;

Converted to java.sql.Date type, then set using PreparedStatement.setDate() method.

@Temporal(TemporalType.DATE)
@Column(name = "CALENDAR_TEMPORAL_DATE")
private Calendar calendarTemporalDate;

Converted to java.sql.Date type, then set using PreparedStatement.setDate() method.

Types not in the table are set by the JDBC driver.
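The conversions in the table above map to standard JDK calls; the following sketch shows the equivalent conversions (field and variable names are illustrative):

```java
import java.time.LocalDate;
import java.util.Calendar;

public class SqlDateConversions {
    public static void main(String[] args) {
        // LocalDate -> java.sql.Date
        LocalDate localDate = LocalDate.of(2024, 1, 15);
        java.sql.Date fromLocalDate = java.sql.Date.valueOf(localDate);

        // java.util.Date -> java.sql.Date (the epoch millis are carried over)
        java.util.Date utilDate = new java.util.Date();
        java.sql.Date fromUtilDate = new java.sql.Date(utilDate.getTime());

        // Calendar -> java.sql.Date
        Calendar calendar = Calendar.getInstance();
        java.sql.Date fromCalendar = new java.sql.Date(calendar.getTimeInMillis());

        System.out.println(fromLocalDate); // 2024-01-15
    }
}
```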
Time type BasicType handling

BasicType types with the SqlTypes.TIME and SqlTypes.TIME_WITH_TIMEZONE jdbc type codes are handled by BatchService as follows:

Java code / Value inserted by BatchService
@Column(name = "TIME")
private Time time;

Can be set directly, without conversion, using the PreparedStatement.setTime() method.

@Column(name = "LOCAL_TIME")
private LocalTime localTime;

Converted to java.sql.Time type, then set using PreparedStatement.setTime() method.

@Column(name = "OFFSET_TIME")
private OffsetTime offsetTime;

Converted to the system time zone returned by ZoneId.systemDefault(), then converted to java.sql.Time type and set using the PreparedStatement.setTime() method.

@Temporal(TemporalType.TIME)
@Column(name = "DATE_TEMPORAL_TIME")
private java.util.Date dateTemporalTime;

Converted to java.sql.Time type, then set using PreparedStatement.setTime() method.

@Temporal(TemporalType.TIME)
@Column(name = "CALENDAR_TEMPORAL_TIME")
private Calendar calendarTemporalTime;

Converted to java.sql.Time type, then set using PreparedStatement.setTime() method.

For the types listed in the table, if hibernate.jdbc.time_zone is set in persistence.xml, then the time zone is also passed to the PreparedStatement.setTime() method, so that the JDBC driver can perform the appropriate time offset according to the time zone.

It is up to the JDBC driver to set the types not listed in the table.
Timestamp type BasicType handling

BasicType types with the SqlTypes.TIMESTAMP, SqlTypes.TIMESTAMP_UTC and SqlTypes.TIMESTAMP_WITH_TIMEZONE jdbc type codes are handled by BatchService as follows:

Java code / Value inserted by BatchService
@Column(name = "TIMESTAMP_DEFAULT")
private Timestamp timestampDefault;

Can be set directly, without conversion, using the PreparedStatement.setTimestamp() method.

@Column(name = "LOCAL_DATE_TIME")
private LocalDateTime localDateTime;

Converted to java.sql.Timestamp type, then set using PreparedStatement.setTimestamp() method.

@Column(name = "OFFSET_DATE_TIME")
private OffsetDateTime offsetDateTime;

Converted to the system time zone returned by ZoneId.systemDefault(), then converted to java.sql.Timestamp type and set using the PreparedStatement.setTimestamp() method.

@Column(name = "ZONED_DATE_TIME")
private ZonedDateTime zonedDateTime;

Converted to the system time zone returned by ZoneId.systemDefault(), then converted to java.sql.Timestamp type and set using the PreparedStatement.setTimestamp() method.

@Column(name = "INSTANT")
private Instant instant;

Converted to java.sql.Timestamp using the system time zone returned by ZoneId.systemDefault(), then set using the PreparedStatement.setTimestamp() method.

@Column(name = "DATE_DEFAULT")
private java.util.Date dateDefault;

Converted to java.sql.Timestamp type, then set using PreparedStatement.setTimestamp() method.

@Temporal(TemporalType.TIMESTAMP)
@Column(name = "DATE_TEMPORAL_TS")
private java.util.Date dateTemporalTS;

Converted to java.sql.Timestamp type, then set using PreparedStatement.setTimestamp() method.

@Column(name = "CALENDAR_DEFAULT")
private Calendar calendarDefault;

Converted to java.sql.Timestamp type, then set using PreparedStatement.setTimestamp() method.

@Temporal(TemporalType.TIMESTAMP)
@Column(name = "CALENDAR_TEMPORAL_TS")
private Calendar calendarTemporalTS;

Converted to java.sql.Timestamp type, then set using PreparedStatement.setTimestamp() method.

For the types listed in the table, if hibernate.jdbc.time_zone is set in persistence.xml, then the time zone is also passed to the PreparedStatement.setTimestamp() method, so that the JDBC driver can perform the appropriate time offset according to the time zone.

It is up to the JDBC driver to set the types not listed in the table.
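The timestamp conversions above likewise correspond to standard JDK calls; this sketch reproduces the described zone handling for OffsetDateTime (variable names are illustrative):

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class SqlTimestampConversions {
    public static void main(String[] args) {
        // LocalDateTime -> java.sql.Timestamp
        LocalDateTime localDateTime = LocalDateTime.of(2024, 1, 15, 10, 30);
        Timestamp fromLocal = Timestamp.valueOf(localDateTime);

        // OffsetDateTime -> system default time zone -> java.sql.Timestamp,
        // as described for OffsetDateTime/ZonedDateTime in the table above
        OffsetDateTime offsetDateTime = localDateTime.atOffset(ZoneOffset.UTC);
        Timestamp fromOffset = Timestamp.valueOf(
                offsetDateTime.atZoneSameInstant(ZoneId.systemDefault()).toLocalDateTime());

        // the result represents the same instant as Timestamp.from(offsetDateTime.toInstant())
        System.out.println(fromOffset.equals(Timestamp.from(offsetDateTime.toInstant())));
    }
}
```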
Boolean type BasicType handling

BasicType types with SqlTypes.BOOLEAN jdbc type code are handled by BatchService as follows:

Java code / Value inserted by BatchService
@Column(name = "BOOLEAN_PRIMITIVE")
private boolean booleanPrimitive;

Can be set directly, without conversion, using the PreparedStatement.setBoolean() method.

@Column(name = "BOOLEAN_WRAPPER")
private Boolean booleanWrapper;

Can be set directly, without transformation, using the PreparedStatement.setBoolean() method.

For types not listed in the table, it is up to the JDBC driver to set them.
Char type BasicType handling

BasicType types with the SqlTypes.CHAR jdbc type code are handled by BatchService as follows:

Java code / Value inserted by BatchService
@Column(name = "CHAR_PRIMITIVE")
private char charPrimitive;

Converted to String type, then set using PreparedStatement.setString() method.

@Column(name = "CHAR_WRAPPER")
private Character charWrapper;

Converted to String type, then set using PreparedStatement.setString() method.

Types not listed in the table are set by the JDBC driver.
Binary data type BasicType handling

BasicType types with the SqlTypes.BLOB, SqlTypes.VARBINARY and SqlTypes.LONGVARBINARY jdbc type codes are handled by BatchService as follows:

Java code / Value inserted by BatchService
@Lob
@Column(name = "DEFAULT_BLOB")
private Blob defaultBlob;

Converted to InputStream type, then set using PreparedStatement.setBinaryStream() method.

@Column(name = "PRIMITIVE_BYTE_ARRAY")
private byte[] primitiveByteArray;

Can be set directly, without conversion, using the PreparedStatement.setBytes() method.

@Column(name = "WRAPPER_BYTE_ARRAY")
private Byte[] wrapperByteArray;

If the legacy array handling is enabled:

  • Converted to primitive byte[] type, then set using PreparedStatement.setBytes() method.

If the legacy array handling is not enabled:

  • We let it be handled by the used JDBC driver.

If possible, it is recommended to use byte[] instead.
@Lob
@Column(name = "LOB_PRIMITIVE_BYTE_ARRAY")
private byte[] lobPrimitiveByteArray;

Can be set directly, without conversion, using the PreparedStatement.setBytes() method.

@Lob
@Column(name = "LOB_WRAPPER_BYTE_ARRAY")
private Byte[] lobWrapperByteArray;

If the legacy array handling is enabled:

  • Converted to primitive byte[] type, then set using PreparedStatement.setBytes() method.

If the legacy array handling is not enabled:

  • We let it be handled by the used JDBC driver.

If possible, it is recommended to use byte[] instead.
The JDBC driver is responsible for setting types not included in the table.

To enable legacy array handling, persistence.xml needs to be extended with the following property: <property name="hibernate.type.wrapper_array_handling" value="legacy"/>.
See more: hibernate 6.2 migration guide

2.6.3. microprofile-health support

DatabaseHealth can check whether the database is reachable. DatabasePoolHealth can check how loaded the connection pool used for database operations is.

This function is based on metric data, so it is necessary that one of the implementations is activated.

pom.xml
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-micrometer</artifactId> (1)
</dependency>
<!-- or -->
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-metrics</artifactId> (2)
</dependency>
1 Micrometer metric implementation
2 Microprofile-metrics metric implementation
Startup example
@ApplicationScoped
public class DatabaseHealthCheck {

    @Inject
    private DatabaseHealth databaseHealth;

    @Inject
    private Config mpConfig;

    public HealthCheckResponse checkDatabaseConnection() {
        DatabaseHealthResourceConfig config = new DatabaseHealthResourceConfig();
        config.setBuilderName("oracle");
        config.setDatasourceUrl("jdbc:postgresql://service-postgredb:5432/service_db?currentSchema=service");
        String datasourceName = mpConfig.getOptionalValue(IConfigKey.DATASOURCE_DEFAULT_NAME, String.class)
                .orElse(IConfigKey.DATASOURCE_DEFAULT_NAME_VALUE);
        config.setDsName(datasourceName);
        try {
            return databaseHealth.checkDatabaseConnection(config);
        } catch (BaseException e) {
            // need to be careful with exceptions, because the probe check will fail if we don't handle the exception correctly
            return HealthCheckResponse.builder().name("oracle").up().build();
        }
    }

    @Produces
    @Startup
    public HealthCheck produceDataBaseCheck() {
        return this::checkDatabaseConnection;
    }
}
Readiness example
@ApplicationScoped
public class DatabaseHealthCheck {

    @Inject
    private DatabasePoolHealth databasePoolHealth;

    public HealthCheckResponse checkDatabasePoolUsage() {
        try {
            return databasePoolHealth.checkDatabasePoolUsage("oracle");
        } catch (BaseException e) {
            return HealthCheckResponse.builder().name("oracle").up().build();
        }
    }

    @Produces
    @Readiness
    public HealthCheck produceDataBasePoolCheck() {
        return this::checkDatabasePoolUsage;
    }
}

2.7. coffee-rest

Module designed for REST communication and management.

It includes the Apache HTTP client, various REST loggers and filters. It also contains language handling, the REST activator and the Response util class.

2.7.1. BaseRestLogger

This class is used to log the HTTP request-response traffic of the application. It is activated manually at project level using the following pattern:

activate in project
package hu.icellmobilsoft.project.common.rest.logger;

import jakarta.inject.Inject;
import jakarta.ws.rs.ext.Provider;

import hu.icellmobilsoft.coffee.cdi.logger.AppLogger;
import hu.icellmobilsoft.coffee.cdi.logger.ThisLogger;
import hu.icellmobilsoft.coffee.rest.log.BaseRestLogger;
import hu.icellmobilsoft.coffee.rest.log.LogConstants;

@Provider (1)
public class RestLogger extends BaseRestLogger {

    @Inject
    @ThisLogger
    private AppLogger log;

    @Override
    public String sessionKey() { (2)
        return LogConstants.LOG_SESSION_ID;
    }
}
1 JAX-RS activator (this is what activates it)
2 Session ID key name in HTTP header

The HTTP request-response log itself is compiled by the hu.icellmobilsoft.coffee.rest.log.RequestResponseLogger class, and can be used in other situations if needed, for example logging an error.

During request logging, sensitive data is masked both in the headers and in the JSON/XML body (e.g. X-PASSWORD: * is logged instead of X-PASSWORD: 1234). Whether a piece of data is to be protected is determined by its key (key for headers and JSON content, tag for XML), which by default is any key matching the regexes [\w\s]*?secret[\w\s]*? or [\w\s]*?pass[\w\s]*? (e.g. userPassword, secretToken, …​). If needed, the regex can be overridden in the project by specifying the coffee.config.log.sensitive.key.pattern configuration in one of the default microprofile-config sources (system property, environment variable, META-INF/microprofile-config.properties); multiple patterns can be specified separated by commas.
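Whether a given key would be masked can be checked against the quoted default patterns with plain java.util.regex. In this sketch the case-insensitive matching is an assumption (the quoted patterns are lowercase while typical keys are camelCase); it is not the actual coff:ee masking code:

```java
import java.util.regex.Pattern;

public class SensitiveKeyDemo {
    // default patterns quoted in the text; CASE_INSENSITIVE is an assumption of this sketch
    private static final Pattern[] DEFAULT_PATTERNS = {
            Pattern.compile("[\\w\\s]*?secret[\\w\\s]*?", Pattern.CASE_INSENSITIVE),
            Pattern.compile("[\\w\\s]*?pass[\\w\\s]*?", Pattern.CASE_INSENSITIVE) };

    static boolean isSensitive(String key) {
        for (Pattern pattern : DEFAULT_PATTERNS) {
            if (pattern.matcher(key).matches()) {
                return true; // the value belonging to this key would be masked
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isSensitive("userPassword")); // true
        System.out.println(isSensitive("secretToken"));  // true
        System.out.println(isSensitive("requestId"));    // false
    }
}
```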

example request log
2019-02-01 16:31:33.044 INFO  [thread:default task-1] [hu.icellmobilsoft.coffee.rest.log.BaseRestLogger] [sid:2G7XOSOJBCFRMW08] - * Server in-bound request
> POST http://localhost:8083/external/public/sampleService/sample/interface
> -- Path parameters:
> -- Query parameters:
> -- Header parameters:
> accept: text/xml;charset=utf-8
> Connection: keep-alive
> Content-Length: 106420
> content-type: text/xml;charset=utf-8
> Host: localhost:8083
> User-Agent: Apache-HttpClient/4.5.3 (Java/1.8.0_191)
> X-Client-Address: 10.10.20.49
> X-CustomerNumber: 10098990
> X-Password: *
> X-UserName: sample
>
> entity: [<?xml version="1.0" encoding="UTF-8"?>
<SampleRequest xmlns="http://schemas.nav.gov.hu/OSA/1.0/api">
    <header>
        <requestId>RID314802331803</requestId>
        <timestamp>2019-02-01T15:31:32.432Z</timestamp>
        <requestVersion>1.1</requestVersion>
        <headerVersion>1.0</headerVersion>
    </header>
    <user>
        <passwordHash>*</passwordHash>
... // truncated
example response log
2019-02-01 16:31:34.042 INFO  [thread:default task-1] [hu.icellmobilsoft.coffee.rest.log.BaseRestLogger] [sid:2G7XOSOJBCFRMW08] - < Server response from [http://localhost:8083/external/public/sampleService/sample/interface]:
< Status: [200], [OK]
< Media type: [text/xml;charset=UTF-8]
< -- Header parameters:
< Content-Type: [text/xml;charset=UTF-8]
< entity: [{"transactionId":"2G7XOSYJ6VUEJJ09","header":{"requestId":"RID314802331803","timestamp":"2019-02-01T15:31:32.432Z","requestVersion":"1.1","headerVersion":"1.0"},"result":{"funcCode":"OK"},"software":{"softwareId":"123456789123456789","softwareName":"string","softwareOperation":"LOCAL_SOFTWARE","softwareMainVersion":"string","softwareDevName":"string","softwareDevContact":"string","softwareCountryCode":"HU","softwareDescription":"string"}}]

2.7.2. Optimized BaseRestLogger

This class works similarly as the BaseRestLogger. The only difference is that it uses less memory, because it doesn’t copy the streams of the request and response entities for logging, but collects the entities while reading and writing them.

It is activated manually at the project level using the following pattern:

activate in project
package hu.icellmobilsoft.project.common.rest.logger;

import javax.inject.Inject;
import javax.ws.rs.ext.Provider;
import hu.icellmobilsoft.coffee.cdi.logger.AppLogger;
import hu.icellmobilsoft.coffee.cdi.logger.ThisLogger;
import hu.icellmobilsoft.coffee.dto.common.LogConstants;
import hu.icellmobilsoft.coffee.rest.log.optimized.BaseRestLogger;

@Provider (1)
public class RestLogger extends BaseRestLogger {

    @Inject
    @ThisLogger
    private AppLogger log;

    @Override
    public String sessionKey() { (2)
        return LogConstants.LOG_SESSION_ID;
    }
}
1 JAX-RS activator (this is what activates the class)
2 Session ID key name in the HTTP header

The HTTP request-response log itself is compiled by the hu.icellmobilsoft.coffee.rest.log.optimized.RequestResponseLogger class, carrying the temporary @Named("optimized_RequestResponseLogger") annotation. The request and response entity log limits are applied here: if the request or response entity is application/octet-stream or multipart/form-data and the REST interface is not annotated with LogSpecifier, the log size is limited.

2.7.3. LogSpecifier

REST logging can be customized per endpoint with the hu.icellmobilsoft.coffee.rest.log.annotation.LogSpecifier annotation. It can be specified multiple times in one place, and its scope can be limited by the target field, of which more than one can be specified in the annotation (by default it is active for all targets); this makes it possible to customize REST request-response and microprofile-client request-response separately.

Only one LogSpecifier can be used per LogSpecifierTarget on an endpoint.

Specifiable targets are enum values of hu.icellmobilsoft.coffee.rest.log.annotation.enumeration.LogSpecifierTarget:

LogSpecifierTarget   Scope

REQUEST              REST endpoint request

RESPONSE             REST endpoint response

CLIENT_REQUEST       Microprofile REST Client endpoint request

CLIENT_RESPONSE      Microprofile REST Client endpoint response

Currently the LogSpecifier is prepared for the following cases:

  • logging of the request-response on the endpoint can be disabled with the noLog option of the LogSpecifier annotation.

  • on the endpoint, the size of the logged body can be limited by the maxEntityLogSize field of the LogSpecifier annotation.

If maxEntityLogSize is set to a value other than LogSpecifier.NO_LOG, then for the application/octet-stream mediaType received by the REST endpoint only the first 5000 characters of the request are written to the log.
When using the optimized BaseRestLogger class, if the LogSpecifier annotation is not specified, then for the application/octet-stream and multipart/form-data mediaTypes only the first 5000 characters of the request and response entities are logged.
LogSpecifier example
    @POST
    @Produces({ MediaType.APPLICATION_JSON, MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @LogSpecifier(target={LogSpecifierTarget.REQUEST, LogSpecifierTarget.CLIENT_REQUEST}, maxEntityLogSize = 100) (1)
    @LogSpecifier(target=LogSpecifierTarget.RESPONSE, maxEntityLogSize = 5000) (2)
    @LogSpecifier(target=LogSpecifierTarget.CLIENT_RESPONSE, noLog = true) (3)
    WithoutLogResponse postWithoutLog(WithoutLogRequest withoutLogRequest) throws BaseException;
1 Request entity log size is limited to 100 bytes, also for REST calls and microprofile client usage
2 Response entity log size limited to 5000 characters for REST calls
3 Disables response logging for microprofile rest client responses.
LogSpecifiersAnnotationProcessor

The LogSpecifier is associated with hu.icellmobilsoft.coffee.rest.log.annotation.processing.LogSpecifiersAnnotationProcessor, whose purpose is to prevent multiple values from being defined for the same target due to the redefinability of LogSpecifier. To do this, it checks at compile time how many @LogSpecifier have been defined per LogSpecifierTarget, if it finds more than one, it fails the compilation.

Invalid example
    @POST
    @Produces({ MediaType.APPLICATION_JSON, MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @Consumes({ MediaType.APPLICATION_JSON, MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @LogSpecifier(maxEntityLogSize =  100) (1)
    @LogSpecifier(target = LogSpecifierTarget.RESPONSE, maxEntityLogSize =  5000) (2)
    ValidatorResponse postValidatorTest(ValidatorRequest validatorRequest) throws BaseException;
1 Since no target is specified, the log size of each entity is limited to 100 bytes/character, including LogSpecifierTarget.RESPONSE.
2 LogSpecifierTarget.RESPONSE limits entity log size to 5000 characters.

Since in the above example the REST response entity log size would be 100 according to the first annotation and 5000 according to the second, the LogSpecifiersAnnotationProcessor, to avoid hidden logic, fails the compilation with the following error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project project-sample-service: Compilation failure
[ERROR] .../project-sample-service/src/main/java/hu/icellmobilsoft/project/sample/service/rest/ISampleTestRest.java:[43,23] Multiple LogSpecifiers are defined for the [RESPONSE] of [postValidatorTest]! Conflicting LogSpecifiers:[[@hu.icellmobilsoft.coffee.rest.log.annotation.LogSpecifier(noLog=false, maxEntityLogSize=100, target={REQUEST, RESPONSE, CLIENT_REQUEST, CLIENT_RESPONSE}), @hu.icellmobilsoft.coffee.rest.log.annotation.LogSpecifier(noLog=false, maxEntityLogSize=5000, target={RESPONSE})]]

2.7.4. JaxbTool

The purpose of this class is to collect the transformations and manipulations related to XML objects. Its structure is fully modular; you can customize everything to your project’s needs using CDI. Its modules provide the following functionality by default:

Request version determination

This is provided by the IRequestVersionReader interface. Its built-in, replaceable implementation is the hu.icellmobilsoft.coffee.rest.validation.xml.reader.XmlRequestVersionReader class.

By default it reads the version based on the pattern of the

 ...<header>...<requestVersion>1.1</requestVersion>...</header>...

XML structure; of course, you are free to adapt it to another structure or even to read the version from the HTTP header instead.

XSD error collection

In the case of marshal (DTO → XML String) or unmarshal (XML String/Stream → DTO), validation against an XSD can be requested. In this case a hu.icellmobilsoft.coffee.rest.validation.xml.exception.XsdProcessingException is thrown, containing the list of XSD rule violations. These errors are handled and provided by the IXsdValidationErrorCollector interface.

The implementing built-in and replaceable class is hu.icellmobilsoft.coffee.rest.validation.xml.error.XsdValidationErrorCollector.

XSD (schema) file handling

Additional logic is required to handle XSD schema description files, since they can have various bindings. This problem is addressed by the IXsdResourceResolver interface.

The implementor is the built-in, interchangeable class hu.icellmobilsoft.coffee.rest.validation.xml.utils.XsdResourceResolver. XSDs importing each other within a common directory is the basic scenario, but being able to import XSDs from another project requires extra logic; this class handles that situation.

2.7.5. XSD Catalog schema management

XSD generation itself is covered by the XSD Catalog and generation description. This section focuses on its activation in the code — XML validation using the XSD catalog.

The whole function is performed by the JaxbTool class. It is intentionally built in a modular way so that it can be easily adapted to needs. As described above, Coffee includes an implementation of IXsdResourceResolver that can read the schema structure specified in the XSD Catalog. This class is called

hu.icellmobilsoft.coffee.rest.validation.catalog.PublicCatalogResolver
@Alternative
@Priority(100)
public class PublicCatalogResolver implements LSResourceResolver, IXsdResourceResolver {

Since we use maven-bound dependencies to generate the XSD Catalog, such as:

/xxx/super.catalog.xsd
...
<public publicId="http://common.dto.coffee.icellmobilsoft.hu/common" uri="maven:hu.icellmobilsoft.coffee.dto.xsd:coffee-dto-xsd:jar::!/xsd/hu.icellmobilsoft.coffee/dto/common/common.xsd"/>
...

So you need to be prepared to handle the maven: URI protocol. This is done by the hu.icellmobilsoft.coffee.tool.protocol.handler.MavenURLHandler class, which needs to be activated. This can be done in several ways; the recommended solution is the following:

src/main/resources/META-INF/services/java.net.spi.URLStreamHandlerProvider
hu.icellmobilsoft.coffee.rest.validation.catalog.MavenURLStreamHandlerProvider

So you need to create the file src/main/resources/META-INF/services/java.net.spi.URLStreamHandlerProvider and include the class that handles it (Coffee part).

There may be systems (e.g. Thorntail), which are not able to read this file in time for the application to run. In such cases, there is another option via URL.setURLStreamHandlerFactory(factory);.
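
A minimal sketch of that fallback, with a self-contained stub handler standing in for Coffee’s MavenURLHandler (the stub class and its wiring are assumptions for illustration, not the real implementation):

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class MavenProtocolDemo {

    private static boolean registered;

    // Registers a factory for the "maven" protocol. In a real project the handler would be
    // hu.icellmobilsoft.coffee.tool.protocol.handler.MavenURLHandler instead of this stub.
    // The factory may be set only once per JVM, hence the guard flag.
    public static synchronized void register() {
        if (registered) {
            return;
        }
        registered = true;
        URL.setURLStreamHandlerFactory(protocol -> "maven".equals(protocol) ? new URLStreamHandler() {
            @Override
            protected URLConnection openConnection(URL u) throws IOException {
                throw new IOException("stub handler, no real artifact resolution here");
            }
        } : null);
    }

    public static void main(String[] args) throws Exception {
        register();
        // without a registered handler this constructor would throw MalformedURLException
        URL url = new URL("maven:hu.icellmobilsoft.coffee.dto.xsd:coffee-dto-xsd:jar::!/xsd/common.xsd");
        System.out.println(url.getProtocol());
    }
}
```

The guard flag matters because URL.setURLStreamHandlerFactory throws an Error on a second call within the same JVM.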
Catalog JaxbTool activation

After the maven: URI protocol handling setup, there are only 2 things left to do:

  • activate PublicCatalogResolver

  • Specify catalog file

Activating PublicCatalogResolver is done in the classic CDI way:

beans.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://www.oracle.com/webfolder/technetwork/jsc/xml/ns/javaee/beans_1_1.xsd"
    version="1.1" bean-discovery-mode="all">

    <alternatives>
        <class>hu.icellmobilsoft.coffee.rest.validation.catalog.PublicCatalogResolver</class>
    </alternatives>
</beans>

And the catalog xsd file is specified via the configuration, more specifically the key

coffee.config.xml.catalog.path

here is an example:

project-defaults.yml
coffee:
    config:
        xml:
            catalog:
                path: xsd/hu/icellmobilsoft/project/dto/super.catalog.xml

After that we are ready and the XSD Catalog will do the XSD schema reading.

2.7.6. Json support

The framework supports JSON format messages in addition to XML for REST communication. To serialize/deserialize these messages, it uses an external module, Gson, maintained by Google. The framework complements Gson with some custom adapters. Below is an example JSON and the adapters added by the framework. The ISO 8601 standard is used for time-related values, with one exception: for the Date class, the format is the universal UNIX epoch in milliseconds.

example.json
{
    "date": 1549898614051,
    "xmlGregorianCalendar": "2019-02-11T15:23:34.051Z",
    "bytes": "dGVzdFN0cmluZw==",
    "string": "test1",
    "clazz": "hu.icellmobilsoft.coffee.tool.gson.JsonUtilTest",
    "offsetDateTime": "2019-02-11T15:23:34.051Z",
    "offsetTime": "15:23:34.051Z",
    "localDate": "2019-02-11",
    "duration": "P1Y1M1DT1H1M1S"
}
Table 1. Serialization format of the added adapters for each type

Java type              Format

Class                  Return value of the Class.getName() method.

XMLGregorianCalendar   Return value of the XMLGregorianCalendar.toXMLFormat() method. By default, XMLGregorianCalendarImpl is the available descendant of this abstract class.

Date                   The time since 1970-01-01T00:00:00.000 in milliseconds.

OffsetDateTime         Formatted with DateTimeFormatter.ISO_OFFSET_DATE_TIME, where an offset is specified instead of a zone.

OffsetTime             Formatted with DateTimeFormatter.ISO_OFFSET_TIME, where an offset is specified instead of a zone.

LocalDate              Formatted with DateTimeFormatter.ISO_DATE.

Duration               Return value of javax.xml.datatype.Duration.toString().

byte[]                 Return value of Base64.getEncoder().encodeToString(). Default encoder: RFC4648.

Note: Most of the JSON-related operations are of a utility nature and are publicly available under coffee-tool in the JsonUtil class.

2.7.7. OpenApiFilter

Microprofile OpenApi provides the ability to set additional OpenApi configuration via an implementation of the org.eclipse.microprofile.openapi.OASFilter interface. The hu.icellmobilsoft.coffee.rest.filter.OpenAPIFilter implementation contains the generic error codes of coffee error handling and the corresponding response objects, which are applied to every endpoint processed by the filter. This provides more accurate documentation than the openapi.yml config file written in microservices using coffee, since this information is loaded dynamically. To activate the filter, specify the implementing class as the value of the mp.openapi.filter configuration key.

Example in a microprofile default properties config:

microprofile-default.properties
mp.openapi.filter=hu.icellmobilsoft.coffee.rest.filter.OpenAPIFilter
Customizability

The implementation can be further refined by adding a mapping, of which an example is given:

CustomerOpenAPIFilter
package hu.icellmobilsoft.test.rest.filter;

...

@Vetoed
public class CustomerOpenAPIFilter extends OpenAPIFilter {

    private static final String CUSTOM_999_RESPONSE = "#/components/schemas/Custom999Response";

    @Override
    protected Map<Integer, APIResponse> getCommonApiResponseByStatusCodeMap() { (1)
        Map<Integer, APIResponse> apiResponseByStatusCodeMap = super.getCommonApiResponseByStatusCodeMap();
        APIResponse customApiResponse = OASFactory.createAPIResponse() //
                .content(OASFactory.createContent()
                        .addMediaType(MediaType.APPLICATION_JSON,
                                OASFactory.createMediaType().schema(OASFactory.createSchema().ref(CUSTOM_999_RESPONSE)))
                        .addMediaType(MediaType.APPLICATION_XML,
                                OASFactory.createMediaType().schema(OASFactory.createSchema().ref(CUSTOM_999_RESPONSE)))
                        .addMediaType(MediaType.TEXT_XML,
                                OASFactory.createMediaType().schema(OASFactory.createSchema().ref(CUSTOM_999_RESPONSE))))
                .description(Response.Status.BAD_REQUEST.getReasonPhrase() //
                        + "\n" + "* Custom 999 error" //
                        + "\n\t **resultCode** = *OPERATION_FAILED*" //
                );
        apiResponseByStatusCodeMap.put(999, customApiResponse);
        return apiResponseByStatusCodeMap;
    }

    @Override
    protected List<Parameter> getCommonRequestHeaderParameters() { (2)
        Parameter xCustomHeader1 = OASFactory.createObject(Parameter.class).name("X-CUSTOM-HEADER-1").in(Parameter.In.HEADER).required(false)
                .description("Description of custom header 1").schema(OASFactory.createObject(Schema.class).type(Schema.SchemaType.STRING));
        Parameter xCustomHeader2 = OASFactory.createObject(Parameter.class).name("X-CUSTOM-HEADER-2").in(Parameter.In.HEADER).required(false)
                .description("Description of custom header 2").schema(OASFactory.createObject(Schema.class).type(Schema.SchemaType.STRING));
        List<Parameter> headerParams = new ArrayList<>();
        headerParams.add(xCustomHeader1);
        headerParams.add(xCustomHeader2);
        return headerParams;
    }
}
1 Example of adding a custom response with http status code 999. It is important to note that Custom999Response must exist in the DTOs.
2 Example of specifying 2 custom headers with description schema.

and so the configuration of the following is added:

microprofile-default.properties
mp.openapi.filter=hu.icellmobilsoft.test.rest.filter.CustomerOpenAPIFilter

2.7.8. MessageBodyWriter

The module contains an application/octet-stream + BaseResultType writer. This allows the system to send any own BaseResultType DTO object as an octet-stream response. This is very useful, for example, when an error occurs while generating a file.

2.7.9. ProjectStage

The module contains a Deltaspike inspired ProjectStage object which can be injected. Its role is to be able to specify at runtime, via configuration, whether the project is running in production, development or test mode.

It can be used by specifying 2 configurations:

  • coffee.app.projectStage

  • org.apache.deltaspike.ProjectStage

The values that can be specified are converted to hu.icellmobilsoft.coffee.rest.projectstage.ProjectStageEnum. Each enum value lists which config values it represents.

It is important to point out that if no config value is specified, or the specified value does not match any of the enum name lists, ProjectStage behaves as PRODUCTION!

Configurations can be specified from multiple locations using Microprofile Config, but only the first one in the order described above will be considered.
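
For example, the stage could be set in one of the default MicroProfile config sources; the value below assumes the enum-name convention described above:

```properties
# microprofile-config.properties
coffee.app.projectStage=Development
```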

Currently, if the ProjectStage value is not Production, the system returns a more detailed response on errors.

Using this works as follows:

MyBean
@Dependent
public class MyBean {
    private @Inject ProjectStage projectStage;

    public void fn() {
        if (projectStage.isProductionStage()) {
            // do some production stuff...
        }
    }
}

For possible further breakdowns, use as follows:

MyBean
@Dependent
public class MyBean {
    private @Inject ProjectStage projectStage;

    public void fn() {
        if (projectStage.getProjectStageEnum() == ProjectStageEnum.DEVELOPMENT) {
            // do some development stuff...
        }
    }
}

2.7.10. Jsonb configuration

The default implementor of Jsonb is Eclipse Yasson. This can be changed using default configurations:

project-defaults.yml
coffee:
  jsonb:
    config:
      propertyvisibilitystrategyclass: "hu.icellmobilsoft.coffee.rest.provider.FieldOnlyVisibilityStrategy" (1)
      binarydatastrategy: "BASE_64" (2)

In the above configuration, 2 elements can be set:

1 The fully qualified name of a class implementing the jakarta.json.bind.config.PropertyVisibilityStrategy interface.
2 One of the values of the jakarta.json.bind.config.BinaryDataStrategy enum, determining how binary data is handled.

2.8. coffee-grpc

The purpose of this module is to support gRPC communication and handling.

2.8.1. coffee-grpc-api

A collector for general gRPC handling of the Coff:ee API (annotations, version, …​).

2.8.2. coffee-grpc-base

A collector for general Protobuf and gRPC classes. It includes exception handling, status handling, and other general CDI (Contexts and Dependency Injection) and Coff:ee functionalities.

ExceptionMapper and ExceptionHandler

A generic ExceptionMapper interface following the JAX-RS pattern. It allows converting a specific Exception type to a gRPC Status using the capabilities provided by CDI.
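
The idea can be sketched as follows; the interface and method names below are simplified assumptions for illustration (the real contract lives in coffee-grpc-base, is CDI-resolved, and maps to io.grpc.Status rather than a String):

```java
// Simplified stand-in for the Coffee gRPC ExceptionMapper contract (names are assumptions)
interface GrpcExceptionMapper<E extends Exception> {
    String toStatusName(E exception); // the real API would return an io.grpc.Status
}

// Maps invalid input to the corresponding gRPC status, analogous to a JAX-RS ExceptionMapper
public class IllegalArgumentStatusMapper implements GrpcExceptionMapper<IllegalArgumentException> {

    @Override
    public String toStatusName(IllegalArgumentException e) {
        // INVALID_ARGUMENT is the canonical gRPC status for malformed client input
        return "INVALID_ARGUMENT";
    }

    public static void main(String[] args) {
        System.out.println(new IllegalArgumentStatusMapper().toStatusName(new IllegalArgumentException("bad input")));
    }
}
```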

2.8.3. coffee-grpc-protoc

A helper tool used for proto → class generation. The logic uses the Mustache template system and is registered into the com.salesforce.jprotoc.ProtocPlugin system.

Example usage in a pom.xml:

    <build>
        <plugins>
            <plugin>
                <groupId>com.github.os72</groupId>
                <artifactId>protoc-jar-maven-plugin</artifactId>
                <configuration>
...
                    <outputTargets>
...
                        <outputTarget>
                            <type>grpc-coffee</type>
                            <pluginArtifact>hu.icellmobilsoft.coffee:coffee-grpc-protoc:${version.hu.icellmobilsoft.coffee}</pluginArtifact>
                        </outputTarget>
                    </outputTargets>
                </configuration>
...
            </plugin>
        </plugins>
    </build>

A more complex example can be found in the backend-sampler project’s pom.xml.

2.8.4. coffee-grpc-server-extension

Module containing a CDI-compatible implementation of a gRPC server.

It reads all classes implementing IGrpcService and delegates them to the gRPC service through the GrpcServerManager.

Implemented features:

  • gRPC server configuration based on NettyServerBuilder

  • MDC (Mapped Diagnostic Context) handling

  • Request/Response log, applicable with LogSpecifier annotation on GRPC service implementation method/class

  • Exception handling

    • Grpc status code mapping

      • General exception: hu.icellmobilsoft.coffee.grpc.server.mapper.GrpcGeneralExceptionMapper

      • BaseException exception: hu.icellmobilsoft.coffee.grpc.server.mapper.GrpcBaseExceptionMapper

    • Grpc header response additions:

      • Business error code (com.google.rpc.ErrorInfo)

      • Business error code translation by request locale (com.google.rpc.LocalizedMessage)

      • Debug information (com.google.rpc.DebugInfo)

Server thread pool

The thread handling is an important part of the gRPC server. Two solutions have been implemented:

  • ThreadPoolExecutor - default thread pool:

    • Configurable through the coffee.grpc.server.threadpool.default configuration.

  • ManagedExecutorService - Jakarta EE managed thread pool:

    • A thread pool managed by the server, with context propagation support.

gRPC server configuration
coffee:
  grpc:
    server:
      port: 8199 # default 8199
      maxConnectionAge: 60000000 # nanoseconds, default Long.MAX_VALUE
      maxConnectionAgeGrace: 60000000 # nanoseconds, default Long.MAX_VALUE
      maxInboundMessageSize: 4194304 # Bytes, default 4 * 1024 * 1024 (4MiB)
      maxInboundMetadataSize: 8192 # Bytes, default 8192 (8KiB)
      maxConnectionIdle: 60000000 # nanoseconds, default Long.MAX_VALUE
      keepAliveTime: 5 # minutes, default 5
      keepAliveTimeout: 20 # seconds, default 20
      permitKeepAliveTime: 5 # minutes, default 5
      permitKeepAliveWithoutCalls: false # default false
      threadPool:
        default:
          corePoolSize: 64 # default 32
          maximumPoolSize: 64 # default 32
          keepAliveTime: 60000 # milliseconds, default 0
        jakarta:
          active: true # default false (1)
1 if true, then coffee.grpc.server.threadpool.default is ignored.

2.8.5. gRPC client (coffee-grpc-client-extension)

It includes support for implementing a gRPC client. This includes:

  • Configuration management

  • Request logging

  • Response logging

gRPC client configuration
coffee:
  grpc:
    client:
      _configKey_:
        host: localhost # default localhost
        port: 8199 # default 8199
        maxInboundMetadataSize: 8192 # Bytes, default 8192 (8KiB)
CDI inject DummyServiceGrpc usage
@Inject
@GrpcClient(configKey = "_configKey_") (1)
private DummyServiceGrpc.DummyServiceBlockingStub dummyGrpcService; (2)

...
// add header
DummyServiceGrpc.DummyServiceBlockingStub stub = GrpcClientHeaderHelper
    .addHeader(dummyGrpcService, GrpcClientHeaderHelper.headerWithSid(errorLanguage)); (3)

// equivalent with `stub.getDummy(dummyRequest);` + exception handling
DummyResponse helloResponse = GrpcClientWrapper.call(stub::getDummy, dummyRequest); (4)
...
1 Configuration key for connection parameters (e.g., server host and port)
2 Generated service Stub
3 Add custom header
4 gRPC service call + exception handling

2.8.6. gRPC Metrics

The gRPC server and client can optionally activate interceptors to provide metric data. For this, only the inclusion of the Maven dependency is required:

enable gRPC server microprofile-metrics implementation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-server-extension</artifactId>
</dependency>
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-metrics-mpmetrics</artifactId>
</dependency>
enable gRPC client microprofile-metrics implementation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-client-extension</artifactId>
</dependency>
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-metrics-mpmetrics</artifactId>
</dependency>

If the metric module is not included at the dependency level, the server/client operation remains unchanged, only metric data is not provided.

Provided metrics:

  • gRPC server

    • Received request counter

    • Responded response counter

    • Request-response processing per second

  • gRPC Client

    • Sent request counter

    • Responded response counter

    • Request-response processing per second

2.8.7. gRPC Tracing

The gRPC server and client can optionally activate interceptors to provide tracing data. For this, only the inclusion of the Maven dependency is required:

enable gRPC server microprofile-opentracing implementation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-server-extension</artifactId>
</dependency>
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-tracing-opentracing</artifactId>
</dependency>
enable gRPC server https://github.com/eclipse/microprofile-telemetry implementation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-server-extension</artifactId>
</dependency>
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-tracing-telemetry</artifactId>
</dependency>
enable gRPC client microprofile-opentracing implementation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-client-extension</artifactId>
</dependency>
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-tracing-opentracing</artifactId>
</dependency>
enable gRPC client https://github.com/eclipse/microprofile-telemetry implementation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-client-extension</artifactId>
</dependency>
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-grpc-tracing-telemetry</artifactId>
</dependency>

If the tracing module is not included at the dependency level, the server/client operation remains unchanged, only tracing data is not provided.

2.8.8. coffee-dto/coffee-dto-xsd2proto

A collector of the proto files generated by schema2proto from the general XSD descriptors (coffee-dto-xsd module) and of other manually created proto files. This package exists so that projects can use the Coff:ee proto files without having to generate them again.

Unfortunately, the schema2proto plugin used is not compatible with the Windows operating system, so automatic generation during compilation is not configured. If there are any changes to the XSD files, the following command needs to be executed on a Linux-compatible system:

mvn clean install -Dschema2proto -Dcopy-generated-sources

The schema2proto parameter activates XSD → proto generation, and the copy-generated-sources parameter activates copying the generated proto files into the sources. Afterward, the changes will appear in the git diff.

2.8.9. coffee-dto/coffee-dto-stub-gen

Contains all Coff:ee proto files and their generated classes. The plugin generates an interface descriptor that can be implemented in a full CDI environment. It also generates a BindableService implementation that delegates gRPC calls to the implemented interface.

3. Coffee model

A model module can have several submodules containing database tables for a specific purpose.

3.1. coffee-model-base

The purpose of this module is to define the base table structure.

Each table must contain common ID, AUDIT and VERSION columns. The module provides ancestor classes, an id generator and audit field loading for these. It also includes deltaspike DATA to facilitate the criteria API, and maven generates the SingularAttribute classes for these ancestors.

3.1.1. ID generation

The purpose of ID generation is to work with unique and non-sequential ID values (strings) under all circumstances.[1] For generation we use the EntityIdGenerator.java class, by default via an annotation on the entity class, but if needed it can be called directly with the generate() method. Operationally, the generate method generates a new ID for the entity if the ID value is not yet set (null); otherwise it keeps the existing ID.

The algorithm generates a 16-character ID from the characters [0-9a-zA-Z], taking into account the nanosecond part of the current time, e.g. '2ZJMG008YRR4E5NW'.
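
The character set and length can be illustrated with a simplified sketch; this is not the real EntityIdGenerator algorithm (the nanosecond mixing is omitted and plain randomness is used instead):

```java
import java.security.SecureRandom;

public class IdSketch {

    // the [0-9a-zA-Z] alphabet mentioned above
    private static final char[] ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray();
    private static final SecureRandom RANDOM = new SecureRandom();

    // Simplified: the real generator also folds the nanosecond part of the current time into the result
    public static String generateId() {
        StringBuilder sb = new StringBuilder(16);
        for (int i = 0; i < 16; i++) {
            sb.append(ALPHABET[RANDOM.nextInt(ALPHABET.length)]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(generateId()); // e.g. 2ZJMG008YRR4E5NW
    }
}
```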

Example code for an identifier used on the AbstractIdentifiedEntity.java class:

@Id
@Column(name = "X__ID", length = 30)
@GenericGenerator(name = "entity-id-generator", strategy = "hu.icellmobilsoft.coffee.model.base.generator.EntityIdGenerator")
@GeneratedValue(generator = "entity-id-generator", strategy = GenerationType.IDENTITY)
private String id;

Example code for an Entity where an identifier is required:

@Entity
@Table(name = "SIMPLE_TABLE")
public class SimpleEntity extends AbstractIdentifiedEntity {

}

We have the option to set the timezone through environment variable or system property. If we don’t set this, we’ll use the system’s timezone by default.

The variable: COFFEE_MODEL_BASE_JAVA_TIME_TIMEZONE_ID

Docker container
service:
    environment:
        COFFEE_MODEL_BASE_JAVA_TIME_TIMEZONE_ID: Europe/Budapest
Command line
java -Dcoffee.model.base.java.time.timezone.id=Europe/Budapest
java code
System.setProperty("coffee.model.base.java.time.timezone.id","Europe/Budapest");

3.1.2. Audit columns

The audit columns are used to track all table-level movements, e.g. inserting a new record or modifying an existing one. The columns used for this purpose are separated. Items tracking insertions are stored in columns X__INSDATE and X__INSUSER, and the values of these fields are only stored on insert, not on update. The elements used for modification are stored in columns X__MODDATE and X__MODUSER. These are predefined in the AbstractAuditEntity.java class and are automatically loaded using the appropriate annotations (e.g. @CreatedOn, @ModifiedOn).

Example code for an Entity where audit elements are required:

@Entity
@Table(name = "SIMPLE_TABLE")
public class SimpleEntity extends AbstractAuditEntity {

}

3.1.3. Version column

The version column is a strictly technical element defined in the AbstractEntity.java class. It is used by Hibernate via @Version annotation, ensuring that during a merge operation the entity can remain intact using optimistic lock concurrency control.

It is recommended to extend each entity class with the AbstractIdentifiedAuditEntity.java class, so that it already contains the ID, audit and version columns:
@Entity
@Table(name = "SIMPLE_TABLE")
public class SimpleEntity extends AbstractIdentifiedAuditEntity {

}
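What the version column buys can be illustrated with a plain-Java sketch of the optimistic-lock check. Hibernate actually enforces this with an `UPDATE ... WHERE version = ?` statement and throws an OptimisticLockException; the sketch below only models the decision:

```java
public class OptimisticLockSketch {

    /** Minimal stand-in for a persisted row with a @Version column. */
    static class Row {
        long version;
        String data;
    }

    /**
     * Mimics Hibernate's "UPDATE ... SET version = version + 1 WHERE version = ?"
     * check: the update only succeeds if the caller still holds the current version.
     */
    public static boolean update(Row stored, long expectedVersion, String newData) {
        if (stored.version != expectedVersion) {
            return false; // concurrent modification -> OptimisticLockException in JPA
        }
        stored.data = newData;
        stored.version++;
        return true;
    }
}
```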

3.2. coffee-model-security

The purpose of this module is to provide a generic privilege management and collection of related entities.

Based on the entitlement logic of various previous projects, the entity classes of the basic tables have been collected here. A project is free to use them independently; for this reason there are no relationships between the entities, so as not to limit the possible combinations.

4. Coffee modules

4.1. coffee-module-activemq

The purpose of this module is to connect to Apache ActiveMQ.

Contains various service classes and methods for JMS management.

4.2. coffee-module-csv

Module to generate a CSV file from Java beans using binding annotations, or parse a CSV file to produce beans.

The bean annotation looks like this:

public class TestBean {

    @CsvBindByNamePosition(position = 0, column = "IDENTIFIER")
    private long id;

    @CsvBindByNamePosition(position = 4)
    private String name;

    @CsvBindByNamePosition(position = 2)
    private boolean active;

    @CsvDate("yyyy-MM-dd")
    @CsvBindByNamePosition(position = 3)
    private LocalDate creationDate;

    @CsvBindByNamePosition(position = 1)
    private Status status;

    // getters, setters...
}

The module provides the @CsvBindByNamePosition annotation to specify both the position and the name of a CSV column.

A list of instances of such a class can be converted to CSV in the default format with the following call:

String csv = CsvUtil.toCsv(beans, TestBean.class);

The result of the above call:

"IDENTIFIER"; "STATUS"; "ACTIVE"; "CREATIONDATE"; "NAME"
"11";"IN_PROGRESS";"true";"2021-11-23";"foo"
"12";"DONE";"false";"2020-01-02";"bar"

If you want to change the CSV format, use the overloaded method with a config parameter:

CsvWriterConfig csvWriterConfig = new CsvWriterConfig.Builder()
                .withQuotechar('\'')
                .withSeparator(',')
                .build();
String csv = CsvUtil.toCsv(beans, TestBean.class, csvWriterConfig);

The result of the above call:

'IDENTIFIER','STATUS','ACTIVE','CREATIONDATE','NAME'
'11','IN_PROGRESS','true','2021-11-23','foo'
'12','DONE','false','2020-01-02','bar'
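The effect of the separator and quote character settings can be illustrated with a minimal sketch. The module itself delegates to opencsv; this is not its implementation, only a demonstration of the output format:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CsvFormatSketch {

    /** Joins one record into a CSV line, quoting every cell with the given characters. */
    public static String toLine(List<String> cells, char separator, char quote) {
        String q = String.valueOf(quote);
        return cells.stream()
                .map(c -> q + c.replace(q, q + q) + q) // double embedded quote chars
                .collect(Collectors.joining(String.valueOf(separator)));
    }
}
```

With separator `;` and quote `"` this yields the default-style rows; with `,` and `'` it yields the custom format shown above.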

To convert back, the following call can be used for the default CSV format:

List<TestBean> beans = CsvUtil.toBean(csv, TestBean.class);

To convert back, the following call can be used for a custom CSV format:

CSVParserBuilder csvParserBuilder = new CSVParserBuilder()
                .withSeparator(',')
                .withQuoteChar('\'');
List<TestBean> beans = CsvUtil.toBean(csv, TestBean.class, csvParserBuilder);

4.2.1. Localization

The fields whose values we want to localize must be handled by the LocalizationConverter, which can be specified with the annotations starting with @CsvCustomBind:

@CsvCustomBindByNamePosition(position = 0, converter = LocalizationConverter.class)
private Status status;

@CsvCustomBindByNamePosition(position = 1, converter = LocalizationConverter.class)
private boolean active;

If you also want to localize custom types, or change the logic used for the already handled types, you can do so by deriving from LocalizationConverter.

Then we need to provide the localized values for the annotated columns and field values in the language files (e.g. in the messages_en.properties file):

java.lang.Boolean.TRUE=Yes
java.lang.Boolean.FALSE=No
hu.icellmobilsoft.coffee.module.csv.LocalizedTestBean.status=Status
hu.icellmobilsoft.coffee.module.csv.LocalizedTestBean.active=Active
hu.icellmobilsoft.coffee.module.csv.LocalizedTestBean$Status.IN_PROGRESS=Progress
hu.icellmobilsoft.coffee.module.csv.LocalizedTestBean$Status.DONE=Done

Finally, you need to call the CsvUtil#toLocalizedCsv method with the selected language:

String csv = CsvUtil.toLocalizedCsv(beans, TestBean.class, "en");

The code in the examples will result in the following CSV:

"Status"; "Active"
"Progress"; "Yes"
"Done"; "No"

4.3. coffee-module-document

The purpose of this module is to store and manage template texts. So if you have an SMS, email, PDF or other text that needs to be filled with variable parameters in different languages, this module handles it.

It contains a generic (optional) DTO submodule in which the communication objects are predefined. The working principle: given the code of a template and the key-value parameter pairs to be filled in, the parameters are substituted into the template and the result is returned. If not all required parameters are provided, the default values, also stored in the module, are used. The module is also capable of saving files, but this is of limited use because it saves them to a database.

4.4. coffee-module-etcd

Module to manage ETCD, implementing a Microprofile ConfigSource.

It is built on the official

maven
<dependency>
    <groupId>io.etcd</groupId>
    <artifactId>jetcd-core</artifactId>
    <version>0.7.5</version>
</dependency>

driver, extended with enterprise usage options. A more precise description can be found on a separate page.

4.4.1. Deploying configurations, configuring ETCD host

To use ETCD configurations, the coffee-module-etcd module is pulled in as a dependency. It provides the auxiliary classes needed for configuration management, and contains an EtcdConfig implementation defining that the coffee.etcd.default.url property of the configuration specifies the availability of the ETCD. The following optional properties can additionally be used to configure the ETCD connection:

  • coffee.etcd.default.connection.timeout.millis: connection timeout in milliseconds

  • coffee.etcd.default.retry.delay: retry delay

  • coffee.etcd.default.retry.max.delay: maximum retry delay

  • coffee.etcd.default.keepalive.time.seconds: keepalive time in seconds

  • coffee.etcd.default.keepalive.timeout.seconds: keepalive timeout in seconds

  • coffee.etcd.default.keepalive.without.calls: keepalive without calls (true/false)

  • coffee.etcd.default.retry.chrono.unit: retry period unit

  • coffee.etcd.default.retry.max.duration.seconds: maximum retry duration in seconds

  • coffee.etcd.default.wait.for.ready: enable gRPC wait for ready semantics (true/false)

On the backend side, there are several ways to manage configurations. These typically support and complement each other.

4.4.2. Microprofile-config

The microprofile-config @ConfigProperty annotation can be used to inject specific configuration values. In order for microprofile-config to detect the ETCD storage, one of the ConfigSource implementations provided by coffee-module-etcd must be activated in our code:

  • hu.icellmobilsoft.coffee.module.etcd.producer.DefaultEtcdConfigSource - default config source

  • hu.icellmobilsoft.coffee.module.etcd.producer.CachedEtcdConfigSource - cached config source. The cache will remember the retrieved value for 30 minutes, even if the ETCD does not contain it, so we can reduce a lot of repetitive queries. Cache is a thread safe singleton, it is possible to clear it by calling EtcdConfigSourceCache.instance().clear().

  • hu.icellmobilsoft.coffee.module.etcd.producer.RuntimeEtcdConfigSource - Can be activated during runtime as needed (e.g., AfterBeanDiscovery) as follows

    RuntimeEtcdConfigSource.setActive(true);
  • hu.icellmobilsoft.coffee.module.etcd.producer.FilteredEtcdConfigSource - Decides whether to search for the key in ETCD based on configuration regex pattern parameters

    coffee:
      configSource:
        FilteredEtcdConfigSource:
          pattern:
            include: ^(public|private)\\.
            exclude: ^private\\.

    The example works with the following logic:

    1. exclude - is at the beginning of the processing order, so it will be evaluated before include. Optional. If not specified, filtering will not be activated, allowing all keys to pass through. If the searched key matches the pattern, it will not search in ETCD.

    2. include - optional. If not specified, filtering will not be activated, allowing all keys to pass through. It will search for the key in ETCD only if the searched key matches the pattern.

    3. Patterns

      1. "private.sample.key1" - it will not search for the key in ETCD because the exclude pattern filters it out.

      2. "public.sample.key2" - it will search for the key in ETCD because the exclude pattern allows it and the include pattern matches.

      3. "org.sample.key3" - it will not search for the key in ETCD because the exclude pattern allows it but the include pattern filters it out.
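The decision logic of the example above can be sketched with plain regex matching. This is an illustration of the documented exclude-then-include order, not the FilteredEtcdConfigSource source:

```java
import java.util.regex.Pattern;

public class EtcdKeyFilterSketch {

    // patterns from the example configuration above
    private static final Pattern INCLUDE = Pattern.compile("^(public|private)\\.");
    private static final Pattern EXCLUDE = Pattern.compile("^private\\.");

    /** exclude is evaluated first; the key is only looked up in ETCD if include then matches. */
    public static boolean searchInEtcd(String key) {
        if (EXCLUDE.matcher(key).find()) {
            return false; // filtered out by exclude
        }
        return INCLUDE.matcher(key).find();
    }
}
```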

Example ConfigSource activation:

src/main/resources/META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource
hu.icellmobilsoft.coffee.module.etcd.producer.CachedEtcdConfigSource

The priority of ConfigSources is set to 150.

It is possible to inject String, Integer, Boolean, Long, Float and Double configurations. The ETCD always stores String, the parsing to the desired type is done after reading the value. The mechanism uses ConfigEtcdHandler in the background to read the values. See configuration module

4.4.3. ConfigEtcdHandler class

Provides a way to read and write ETCD configuration values in a context. It uses ConfigEtcdService in the background.

Write configuration
@Inject
private ConfigEtcdHandler configEtcdHandler;
...
configEtcdHandler.putValue("public.email.sender", "noreply@sample.teszt.hu");
query configuration
@Inject
private ConfigEtcdHandler configEtcdHandler;
...
String adminEmail = configEtcdHandler.getValue("public.email.sender");
Reference to another configuration

ConfigEtcdHandler, and thus indirectly ConfigurationHelper and the @ConfigProperty annotation, also allows the value of one config to refer to another config. In this case, the { and } characters delimit the referenced configuration key.

reference to another configuration
@Inject
private ConfigEtcdHandler configEtcdHandler;
...
configEtcdHandler.putValue("protected.iop.url.main", "http://sample-sandbox.hu/kr_esb_gateway/services/IOPService?wsdl");
configEtcdHandler.putValue("protected.iop.url.alternate", "http://localhost:8178/SampleMockService/IOPService2?wsdl");
configEtcdHandler.putValue("public.iop.url", "{protected.iop.url.main}");
String contactEmail = configEtcdHandler.getValue("public.iop.url"); // returns "http://sample-sandbox.hu/kr_esb_gateway/services/IOPService?wsdl"

The reference must strictly refer to a specific other configuration, no other content is allowed. For example, the embedded reference will not be resolved (http://{other.etcd.conf}:8178/SampleMockService/IOPService2?wsdl).
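The resolution rule can be sketched as follows. This is an illustration of the documented behavior (the whole value must be exactly one `{key}` reference to be resolved), not the ConfigEtcdHandler source:

```java
import java.util.Map;

public class ConfigRefSketch {

    /**
     * Resolves a value of the exact form "{otherKey}" to the referenced config value;
     * embedded references (e.g. "http://{other.key}:8178/") are intentionally left as-is.
     */
    public static String getValue(Map<String, String> store, String key) {
        String value = store.get(key);
        if (value != null && value.startsWith("{") && value.endsWith("}")) {
            return getValue(store, value.substring(1, value.length() - 1));
        }
        return value;
    }
}
```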

4.4.4. ConfigEtcdService class

Provides the ability to query, write, list and search configuration values. It is the lowest-level class of those listed; all of the above mechanisms implement their functionality through it. You will presumably only need to use it directly to delete or list configurations.

Write, query, delete a configuration
@Inject
private ConfigEtcdService configEtcdService;
...
configEtcdService.putValue("protected.iop.url.main", "http://sample-sandbox.hu/kr_esb_gateway/services/IOPService?wsdl"); //write
String senderEmail = configEtcdService.getValue("protected.iop.url.main"); //read
configEtcdService.delete("protected.iop.url.main"); //delete
list configurations
@Inject
private ConfigEtcdService configEtcdService;
...
Map<String, String> allConfigMap = configEtcdService.getList(); //list all configuration
Map<String, String> publicConfigMap = configEtcdService.searchList("public."); //list configurations with a given prefix key (cannot be an empty String)

When a non-existent configuration is requested or deleted, the service throws a BONotFoundException. Since all of the listed options use this mechanism, this applies to all of them.

4.4.5. Namespaces, configuration naming conventions

The configuration handler does not support separate namespaces, all information stored in etcd is accessible.

Each configuration key starts with a visibility prefix. They are managed according to the following conventions:

Prefix       Description

private.     Configuration available only to the backend

protected.   Accessible to both backend and frontend; read-only for the frontend

public.      Accessible to both backend and frontend; the frontend can also change its value

4.4.6. Configuration management using Command Line Tool

Download and unpack the ETCD package for your system: https://github.com/coreos/etcd/releases/

Set the ETCDCTL_API environment variable to 3:

#Linux
export ETCDCTL_API=3

#Windows
set ETCDCTL_API=3

From the command line, you can use etcdctl to read and write the values in the ETCD configuration:

#Read the whole configuration
etcdctl --endpoints=%ETCD_ENDPOINTS% get "" --from-key

#Read the value of a given configuration
etcdctl --endpoints=%ETCD_ENDPOINTS% get private.sample

#Write the value of a given configuration
etcdctl --endpoints=%ETCD_ENDPOINTS% put private.sample ertek

4.4.7. Logging

The retrieved keys and the resulting values are logged unless the key matches the regular expression [\w\s]*?secret[\w\s]*? or [\w\s]*?pass[\w\s]*?, in which case the value is masked and logged. The default regex can be overridden by specifying coffee.config.log.sensitive.key.pattern in one of the default microprofile-config sources (sys var, env var, META-INF/microprofile-config.properties), multiple patterns can be specified separated by commas.
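The masking decision can be sketched like this. This is an illustration only; the exact matching semantics and mask format of the real implementation may differ:

```java
import java.util.List;
import java.util.regex.Pattern;

public class SensitiveKeyMaskSketch {

    // the default patterns quoted in the documentation
    private static final List<Pattern> SENSITIVE = List.of(
            Pattern.compile("[\\w\\s]*?secret[\\w\\s]*?"),
            Pattern.compile("[\\w\\s]*?pass[\\w\\s]*?"));

    /** Returns the value as it may be logged: masked when the key matches a sensitive pattern. */
    public static String loggableValue(String key, String value) {
        boolean sensitive = SENSITIVE.stream().anyMatch(p -> p.matcher(key).matches());
        return sensitive ? "*" : value;
    }
}
```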

4.4.8. microprofile-health support

The EtcdHealth can check if the etcd server is reachable.

Startup example
@ApplicationScoped
public class EtcdHealthCheck {

    @Inject
    private EtcdHealth etcdHealth;

    public HealthCheckResponse check() {
        try {
            return etcdHealth.checkConnection("etcd");
        } catch (BaseException e) {
            return HealthCheckResponse.builder().name("etcd").down().build();
        }
    }

    @Produces
    @Startup
    public HealthCheck produceEtcdCheck() {
        return this::check;
    }
}

4.5. coffee-module-localization

4.5.1. Localization

Localization functionality is useful in a backend system from several perspectives, for example for error codes, enum translations or even language-independent document generation. For this purpose, the deltaspike Messages and i18n solution is used, enhanced with newer CDI features.

It has three components:

  • language detection (LocaleResolver)

  • language files

  • language localization manager (LocalizedMessage)

Language (LocaleResolver)

By default, deltaspike includes a built-in locale resolver which returns the locale of the running JVM. This is of course not appropriate for a real system, so a CDI @Alternative has to be created, like this:

ProjectLocaleResolver
import java.util.Locale;

import javax.annotation.Priority;
import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;
import javax.inject.Inject;
import javax.interceptor.Interceptor;

import org.apache.commons.lang3.StringUtils;
import org.apache.deltaspike.core.impl.message.DefaultLocaleResolver;

import hu.icellmobilsoft.coffee.cdi.logger.AppLogger;
import hu.icellmobilsoft.coffee.cdi.logger.ThisLogger;
import hu.icellmobilsoft.project.common.rest.header.ProjectHeader;

@Dependent
@Alternative
@Priority(Interceptor.Priority.APPLICATION + 10)
public class ProjectLocaleResolver extends DefaultLocaleResolver {

    private static final long serialVersionUID = 1L;

    public static final String DEFAULT_LANGUAGE = "hu";

    @Inject
    private ProjectHeader header;

    @Inject
    @ThisLogger
    private AppLogger log;

    @Override
    public Locale getLocale() {
        if (header != null) {
            log.debug("header language: [{0}]", header.getLanguage());
            String language = header.getLanguage();
            if (StringUtils.isNotBlank(language)) {
                return new Locale(language);
            }
        }
        return new Locale(DEFAULT_LANGUAGE);
    }
}

In this example, we request the language from a CDI ProjectHeader managed class, which is loaded from, for example, the REST HTTP header data.

Of course, you still need to activate this @Alternative in the beans.xml file.

Language files

The language files follow the standard Java ResourceBundle mechanism: in short, the bundle file name plus a locale postfix determines which file resolves a given language.

The system supports the ResourceBundle name "i18n.messages" by default, example file:

src/main/resources/i18n/messages_hu.properties
pattern.date.full = yyyy-MM-dd HH:mm:ss
pattern.date = yyyy-MM-dd
pattern.time = HH:mm:ss
pattern.date.time = yyyy-MM-dd HH:mm

hu.icellmobilsoft.coffee.dto.exception.enums.CoffeeFaultType.GENERIC_EXCEPTION = Nem várt hiba történt!

From the file you can see that it collects content valid for the "hu" locale.
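The postfix resolution is the standard Java ResourceBundle lookup; the candidate file chain for a given bundle and locale can be computed with the JDK itself:

```java
import java.util.List;
import java.util.Locale;
import java.util.ResourceBundle;
import java.util.stream.Collectors;

public class BundleLookupSketch {

    /** Property file names probed for the given bundle and locale, in lookup order. */
    public static List<String> candidateFiles(String bundle, Locale locale) {
        ResourceBundle.Control control =
                ResourceBundle.Control.getControl(ResourceBundle.Control.FORMAT_DEFAULT);
        return control.getCandidateLocales(bundle, locale).stream()
                .map(l -> bundle.replace('.', '/')
                        + (l.toString().isEmpty() ? "" : "_" + l) + ".properties")
                .collect(Collectors.toList());
    }
}
```

For the "i18n.messages" bundle and the "hu" locale this probes i18n/messages_hu.properties first, then falls back to i18n/messages.properties.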

The set of dictionary files can be freely extended with the coffee.config.resource.bundles configuration key, where you can list more/other bundles:

project-defaults.yml
coffee:
    config:
        resource:
            bundles: i18n.messages,i18n.validations (1)
1 must be specified without space and quotes

As can be seen, there are 2 groups of dictionary bundles, which can be for example:

  • src/main/resources/i18n/messages.properties

  • src/main/resources/i18n/messages_en.properties

  • src/main/resources/i18n/messages_hu.properties

  • src/main/resources/i18n/validations.properties

  • src/main/resources/i18n/validations_en.properties

  • src/main/resources/i18n/validations_hu.properties

Localization Manager (LocalizedMessage)

Deltaspike's type-safe solution can be fully used, but it is mostly not suitable for projects, where the localized message mainly needs to be resolved from dynamic keys (e.g. error codes).

For this purpose the LocalizedMessage class was created. It contains dictionary resolution for enums and classes, among other things, and is freely extensible, even modifiable with @Alternative. Some usage samples:

import hu.icellmobilsoft.coffee.module.localization.LocalizedMessage;

...
    @Inject
    private LocalizedMessage localizedMessage;
...
    protected String createDateTimePattern() {
        return StringUtils.defaultString(localizedMessage.message("{pattern.date.full}"), "yyyy.MM.dd HH:mm:ss");
    }

    protected String localizeEnum(Enum<?> enumValue) {
        return localizedMessage.message(enumValue);
    }

    protected String getMessage(String faultType) {
        return localizedMessage.message(GeneralExceptionMapper.class, faultType);
    }
...

4.6. coffee-module-mongodb

The module is designed to support the management of the MongoDB NoSQL database, for which it contains various service classes and methods. It is based on a CDI extension (cdi-spec).

4.6.1. Implementation in the project

pom.xml
Coffee module activation
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mongodb</artifactId>
</dependency>
Configuration in yml:
project-defaults.yml
coffee:
    mongodb:
        xmlapi: (1)
             database: xmlapi
             uri: mongodb://sample_xmlapi:sample_xmlapi@sample-sandbox.icellmobilsoft.hu:27017/sample_xmlapi?ssl=false
             connectionsPerHost: 150 #default: 100
             minConnectionsPerHost: 1 #default: 1
             connectTimeout: 10000 #default: 10000
             serverSelectionTimeout: 5000 #default: 5000
             socketTimeout: 0 #default: 0
             maxConnectionIdleTime: 20000 #default: 20000
             maxConnectionLifeTime: 20000 #default: 20000
             heartbeatFrequency: 500 #default: 500
             minHeartbeatFrequency: 500 #default: 500
        tada:
            database: tada
            uri: mongodb://sample_tada:sample_tada@sample-sandbox.icellmobilsoft.hu:27017
            connectionsPerHost: 400 #default: 150
1 Unique identifier for the mongoDB connection (configKey). Specifying "database" and "uri" is mandatory. All others have default values. Parameters are described in detail in the MongoConfigHelper class.

You can specify parameters for multiple mongoDB servers; each database connection is identified by its unique configKey.

Usage
@MongoClientConfiguration annotation

This annotation allows you to inject a MongoDbClient object. MongoDB can be accessed through this.

@Inject
@MongoClientConfiguration(configKey = "xmlapi") (1)
private MongoDbClient mongoDbClient;
1 The configKey value defined in the yml. Unique identifier.

The coff:ee mongoDB module builds the MongoDbClient object based on the values specified in the yml. The collection to use must be specified separately. This is the default MongoService implementation, which works with the BasicDBObject type.

Using Service
// specify collection
mongoDbClient.initRepositoryCollection("xmlapi_collection");

// use MongoUtils to manage typed objects
String dtoJson = MongoJsonUtil.toJson(dtoDocumentType);
BasicDBObject dtoDocument = MongoUtil.jsonToBasicDbObject(dtoJson);

// insert the element
mongoDbClient.insertOne(dtoDocument);

// get the id of the inserted element
String id = dtoDocument.getString(MongoConstants.COLUMN_MONGO_ID);

// search with filter
BasicDBObject filter = new BasicDBObject();
filter.put(MongoConstants.COLUMN_MONGO_ID, new ObjectId(mongoId));
BasicDBObject result = mongoDbClient.findFirst(filter);

// search by id
BasicDBObject result = mongoDbClient.findById(mongoId);
@MongoServiceConfiguration annotation

This annotation is used by the CDI extension: for every class extending MongoService&lt;T&gt;, a producer is generated automatically to allow injection.

Using Service
/**
 * MongoService extension, specifying POJO
 */
@Dependent
public class CustomMongoService extends MongoService<MongoEntity> {
    //no need to overwrite anything
}

/**
 * The extension injects CustomMongoService based on the configKey and collectionKey parameters
 */
@Inject
@MongoServiceConfiguration(configKey = "xmlapi", collectionKey = "xmlapi_collection")
private CustomMongoService customMongoService;

The ancestor of CustomMongoService is MongoService&lt;T&gt;, so the extension processes the generic parameter (MongoEntity) in the background and sets the object type CustomMongoService works with. The operations of the ancestor can be overridden. The scope of CustomMongoService can also be overridden if @Dependent is not appropriate. When used from a @Stateless EJB, the @Model scope does not work; it is only applicable on REST interfaces, where the injected MongoService instances are terminated at the end of the HTTP call.

When used this way, there is no need to specify the collection, as the annotation already does that. Any MongoService can use any collection, without restrictions.

// query
MongoEntity entity = customMongoService.findById("mongoId");

// insert
MongoEntity mongoEntity = new MongoEntity();
customMongoService.insertOne(mongoEntity);

4.7. coffee-module-notification

Module for central management of notifications (email, push notification,…​).

The module can be tightly coupled with the coffee-module-document module, from which it collects the messages and texts to be sent. It not only handles the sending, but also stores who sent what, to whom, when and what. Currently it can handle these notifications:

  • email

  • push notification

Includes a generic DTO submodule (optional) with predefined communication objects.

4.8. coffee-module-redis

The purpose of this module is to manage the Redis online key-value store.

The module, based on the "redis.clients:jedis" Java driver, not only serves queries and saves from/to Redis, but can also provide cache management stored in Redis, based on a CDI interceptor.

with the current jedis version 5.1.2, Redis compatibility is backwards to 6.0.x

4.8.1. RedisConnection

The @RedisConnection qualifier has been introduced. Using this, there is no need to define/implement Redis configuration, JedisPool, Jedis, RedisService separately per Redis connection; they are all built and injected via CDI. Configuration in yaml:

coffee:
    redis:
        auth: (1)
            host: sample-sandbox.icellmobilsoft.hu #default: localhost
            port: 6380 #default: 6380
            password: pass1234 #default: null
            database: 1 #default: 0
            pool:
                default: (2)
                    maxtotal: 128 #default: 64
                    maxidle: 32 #default: 16
                custom: (2)
                    maxtotal: 12 #default: 64
                    maxidle: 3 #default: 16
            timeout: 5000 #default: 5000
1 Unique identifier of the redis connection (configKey). All fields are optional.
2 Unique identifier of the pool within the redis connection (poolConfigKey). All fields are optional.

Use the RedisManager associated with the above config:

@Inject
@RedisConnection(configKey = "auth")
private RedisManager authRedisManager;

@Inject
@RedisConnection(configKey = "auth", poolConfigKey = "custom")
private RedisManager authRedisManagerCustomPool;

The first case will use the "default" pool settings, in the second case, the "custom" pool settings.

4.8.2. RedisManager

The RedisManager class and the associated RedisManagerProducer are introduced. The producer produces the RedisManager for the given configKey value, making it possible to retrieve a Jedis instance from the JedisPool when we want to perform an operation on Redis. RedisManager's job is to unify the use of Jedis functions, handling common logging, error handling and connection usage. Its methods take Jedis functions to run; this can be done between initConnection() and closeConnection() calls, or through the runWithConnection methods. This approach keeps Redis connections open only as long as they are needed, saving a lot of resources.
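The runWithConnection idea can be sketched in plain Java with a stand-in connection type. This is an illustration of the pattern, not the RedisManager source, and FakeConnection is a hypothetical type used only for the sketch:

```java
import java.util.function.Function;

public class RedisManagerSketch {

    /** Stand-in for a pooled Jedis connection (hypothetical type for the sketch). */
    static class FakeConnection implements AutoCloseable {
        String get(String key) { return "value-of-" + key; }
        @Override public void close() { /* connection returned to the pool here */ }
    }

    /**
     * Mirrors the runWithConnection idea: borrow a connection, run the operation,
     * and always release the connection - with unified error handling.
     */
    public static <R> R runWithConnection(Function<FakeConnection, R> op, String opName) {
        try (FakeConnection connection = new FakeConnection()) {
            return op.apply(connection);
        } catch (RuntimeException e) {
            throw new IllegalStateException("Redis operation failed: " + opName, e);
        }
    }
}
```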

Example of using multiple Redis operations - typically set/get combined with expire. In this case no Jedis instance is requested from the pool unnecessarily, and it is closed when done.

@Inject
@RedisConnection(configKey = "auth")
private RedisManager redisManager;

try (RedisManagerConnection connection = redisManager.initConnection()) { (1)
    redisManager.run(Jedis::set, "set", "key", "value");
    redisManager.run(Jedis::expire, "expire", "key", 300);
}
1 Initialize the Jedis instance on which the operations are performed, use try-with-resource to automatically close it

Example of performing an operation.

@Inject
@RedisConnection(configKey = "auth")
private RedisManager redisManager;

redisManager.runWithConnection(Jedis::set, "set", "key", "value"); (1)
1 Perform operation, framework handles opening and closing connection.

4.8.3. Redis operations

hscan

It iterates through the objects in the selected Redis db by key. Data may change during the operation; it does not block (this is an advantage over the SMEMBERS operation). There is no guarantee that all items are returned; it depends on the size of the objects stored in Redis. If the size limit is exceeded, the default COUNT of 10 applies and no more elements are returned per request; the COUNT value can be parameterized for this case. For more information about the limits: https://redis.io/commands/scan

redisManager.runWithConnection(Jedis::hscan, "hscan", "key", 10, 100);
rpush

Pushes the value onto the list stored at the given key and sets the expiry of the list in seconds. The response is the number of items in the list. More information: https://redis.io/commands/rpush + https://redis.io/commands/expire

redisManager.runWithConnection(Jedis::rpush, "rpush", "key", "element", 100);
lpop/rpop

(coffee v1.4+) Retrieves and removes the first (lpop) or last (rpop) value of the list stored at the given key. If the list becomes empty, Redis automatically deletes it. If data is requested from a non-existing list (emptied in the meantime and deleted by Redis), the system returns Optional.empty(). For more information about the limits: https://redis.io/commands/lpop

redisManager.runWithConnection(Jedis::lpop, "lpop", "key"); (1)
redisManager.runWithConnection(Jedis::rpop, "rpop", "key"); (1)
1 If no value is found then Optional.empty()
lmove

A combination of pop+push in one operation, which also sets the expiry of the list in seconds. In effect, it moves a list item to another list. If the destination list does not exist, it is created; if the source and destination lists are the same, the item can be rotated from the beginning of the list to its end, or in any direction needed. The response is the moved item. For more information: https://redis.io/commands/lmove + https://redis.io/commands/expire

// from the beginning of the sourceListKey to the end of the destinationListKey
redisManager.runWithConnection(Jedis::lmove, "lmove", "sourceListKey", "destinationListKey", ListDirection.LEFT, ListDirection.RIGHT);
// from the end to the beginning
redisManager.runWithConnection(Jedis::lmove, "lmove", "sourceListKey", "sourceListKey", ListDirection.RIGHT, ListDirection.LEFT);
removeValueFromList

Removes all items matching the parameter from the given list. For more information see: https://redis.io/commands/lrem

redisManager.runWithConnection(Jedis::lrem, "removeValueFromList", listKey, 0, "removeValue");

4.8.4. microprofile-health support

The RedisHealth can check if the Redis server is reachable.

Startup example
@ApplicationScoped
public class RedisHealthCheck {

    @Inject
    private RedisHealth redisHealth;

    public HealthCheckResponse check() {
        ManagedRedisConfig managedRedisConfig = ...
        try {
            return redisHealth.checkConnection(managedRedisConfig, "redis");
        } catch (BaseException e) {
            return HealthCheckResponse.builder().name("redis").down().build();
        }
    }

    @Produces
    @Startup
    public HealthCheck produceRedisCheck() {
        return this::check;
    }
}

4.8.5. microprofile-metrics support

The JedisConnectionProducer provides metrics about the usage of the Jedis pool.

metrics example
# HELP coffee_jedis_pool_active Active connection number
# TYPE coffee_jedis_pool_active gauge
coffee_jedis_pool_active{configKey="redisConfig",poolConfigKey="default"} 10.0
# HELP coffee_jedis_pool_idle Idle connection number
# TYPE coffee_jedis_pool_idle gauge
coffee_jedis_pool_idle{configKey="redisConfig",poolConfigKey="default"} 5.0

The metrics can be overridden using the @Alternative or @Specializes annotations.

metrics override example
@ApplicationScoped
@Alternative
public class CustomJedisMetricsHandler extends JedisMetricsHandler {
  public void addMetric(String configKey, String poolConfigKey, JedisPool jedisPool) throws BaseException {
  ...
  }
}

4.9. coffee-module-redisstream

Module designed to support the increasingly popular streaming concept as realized in Redis. Redis Stream is a new feature introduced in Redis 5+. It combines the classic Redis publisher/subscriber functionality with JMS queue needs, providing an alternative solution to replace JMS. The concept description can be found at Redis streams-intro, from which this implementation was derived, extended with enterprise requirements.

4.9.1. RedisConnection

The coffee-module-redisstream module uses the [coffee-module-redis] module for Redis connection management. The Redis connection setup is the same as described for [coffee-module-redis] and is based on the configuration key defined there, accessed through the module's own annotation class, which also allows additional stream settings.

With the current Jedis version 4.2.1, Redis compatibility goes back to 2.8.x.

4.9.2. Message and content

Since the implementation uses the jedis driver, there is a restriction on the format of the general message frame. Normally this would be an XSD that is part of an API, but here, due to the specifics of the driver (the redis.clients.jedis.StreamEntry object is the carrier), these are just keys. These keys are defined in the hu.icellmobilsoft.coffee.module.redisstream.config.IRedisStreamConstant.Common interface:

  • message - business content of the redis stream message. This is preferably a String ID (DB PK) referring to some data, or, if the need is more complex, a JSON conforming to a custom API. Aim to keep messages minimal and composed of "reusable" structures. Of course, business needs do not always allow this, but practice shows that in most cases a single identifier is enough.

  • ttl - redis stream message expiry time in Long epoch-millisecond format: a timestamp that tells the consumer when the message expires, at which point the consumer just ACKs the message without processing its content.

  • hu.icellmobilsoft.coffee.dto.common.LogConstants.LOG_SESSION_ID key ("extSessionId") - the "process identifier" in the system, which links the REST input and the asynchronous message operations.

    • Attention must be paid to "uniqueness" when forming the value, especially when one process forks into N asynchronous processes. This is the case for failover jobs, for example, so the original process identifier must be extended with a new unique identifier. Expect the possibility of several such levels. This is done via the StreamMessageParameter.FLOW_ID_EXTENSION ("flowIdExtension") variable in the system, currently only used for redis streams.

When browsing Redis content, this may look like this:

sample of simple content:
{
  "extSessionId": "3OXV5ZUSUAF1KA8G_3OCISPU2RW0NWR7M",
  "flowIdExtension": "3OCISPU2RW0NWR7M",
  "message": "sample-10415900/2022-01/",
  "ttl": "1646381700045"
}

The value of extSessionId here is a "composite" process identifier, where "3OXV5ZUSUAF1KA8G" is the original process ID, with "3OCISPU2RW0NWR7M" appended as the unique identifier of the asynchronous branch. When browsing the logs, you can then clearly see where the process split.
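The composite identifier and the ttl key can be sketched in plain Java; the helper names below (compositeSessionId, isExpired) are illustrative only and are not part of coff:ee:

```java
import java.time.Instant;

public class StreamMessageHelper {

    // "3OXV5ZUSUAF1KA8G" + "_" + "3OCISPU2RW0NWR7M" -> composite process identifier
    public static String compositeSessionId(String originalSid, String flowIdExtension) {
        return originalSid + "_" + flowIdExtension;
    }

    // ttl is an epoch-millis timestamp: once it is in the past,
    // the consumer ACKs the message without processing its content
    public static boolean isExpired(String ttlEpochMillis, Instant now) {
        return Long.parseLong(ttlEpochMillis) < now.toEpochMilli();
    }

    public static void main(String[] args) {
        String sid = compositeSessionId("3OXV5ZUSUAF1KA8G", "3OCISPU2RW0NWR7M");
        System.out.println(sid); // 3OXV5ZUSUAF1KA8G_3OCISPU2RW0NWR7M
        // the sample ttl from above (2022-03-04) is expired relative to 2023
        System.out.println(isExpired("1646381700045", Instant.parse("2023-01-01T00:00:00Z"))); // true
    }
}
```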

sample for more complex content:
{
  "extSessionId": "#TEST-SimpleTest5546-3OW013B5CP8CMH07_3OW013Z1JLNPOP09",
  "message": {
    "blacklisted": false,
    "changeDate": "2022-03-03T01:50:38.035812+01:00",
    "identifier": "3OW01426SX6BP5KW",
    "inputDate": "2022-03-02T23:00:00Z",
    "version": 0
  },
  "ttl": "1646268938291"
}

The message shown here is more complex content; it should belong somewhere in the API (XSD), and JSON format is recommended.

4.9.3. Configuration

Configuration is done via the @RedisStreamConsumer and @RedisStreamProducer qualifiers. Configuration in yaml:

yaml config file
coffee:
   redisstream:
       sampleGroup: (1)
           stream:
               read:
                   timeoutmillis: 60000 #default: 60000 (2)
           producer:
               maxlen: 10000 #default none (3)
               ttl: 300000 #millisec, default none (4)
           consumer:
               threadsCount: 2 #default: 1 (5)
               retryCount: 2 #default: 1 (6)
               manualAck: true # default: false (7)
1 Unique name of the stream group. All fields are optional.
2 Stream consumer timeout - how long to wait for a message in one iteration. If no message arrives in the stream within this many milliseconds, the connection is closed and a new connection is opened in the next iteration.
3 Maximum number of messages in the stream. Whenever a new message is inserted, older messages beyond this count are deleted from the stream, regardless of whether they have been processed.
4 (Coff:ee 1.6.0+) Stream message expiration time. Whenever a new message is inserted, messages older than this are discarded from the stream, regardless of whether they have been processed.
5 The number of independent threads to start in a given group (sampleGroup) consumer.
6 (Coff:ee 1.4.0+) The number of times the given group (sampleGroup) consumer should retry on BaseException. For other errors not resulting from BaseException this setting is ignored.
7 If true, an explicit XACK call is required at the end of message processing. If an exception is thrown during processing, the ACK is omitted and the message can be reprocessed manually. If false, the message is acknowledged automatically as it is read. Default: false. See the redis XREADGROUP documentation:

The NOACK subcommand can be used to avoid adding the message to the PEL in cases where reliability is not a requirement and the occasional message loss is acceptable. This is equivalent to acknowledging the message when it is read.

When …​producer.maxlen and …​producer.ttl are specified at the same time, the …​producer.ttl parameter is not taken into account!

This also requires an EE-level setting if the defaults are not enough to start the extra threads (e.g. maximum thread count). These settings vary by application server, e.g.:

MDC

The system logs the number of retry iterations at MDC level under the key "retryCounter" (see the coffee.redisstream.sampleGroup.consumer.retryCount configuration).
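The retry behaviour (retryCount, retrying only on BaseException) together with the retryCounter MDC entry can be sketched as follows; RetrySketch, the nested BaseException stub and the map-based MDC are stand-ins for the real coff:ee classes, not their actual implementations:

```java
import java.util.HashMap;
import java.util.Map;

public class RetrySketch {

    // stand-in for hu.icellmobilsoft.coffee.dto.exception.BaseException
    public static class BaseException extends Exception {
        public BaseException(String message) { super(message); }
    }

    public interface Work { void run() throws BaseException; }

    // stand-in for the MDC: the current attempt is published as "retryCounter"
    public static final Map<String, String> MDC = new HashMap<>();

    // retries only on BaseException; any other exception propagates immediately
    public static void consumeWithRetry(Work work, int retryCount) throws BaseException {
        BaseException last = null;
        for (int attempt = 1; attempt <= retryCount; attempt++) {
            MDC.put("retryCounter", String.valueOf(attempt));
            try {
                work.run();
                return; // success, no further retries
            } catch (BaseException e) {
                last = e;
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // first attempt fails with BaseException, second succeeds
        consumeWithRetry(() -> {
            if (++calls[0] < 2) {
                throw new BaseException("transient");
            }
        }, 2);
        System.out.println(calls[0] + " attempts, retryCounter=" + MDC.get("retryCounter"));
        // prints: 2 attempts, retryCounter=2
    }
}
```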

RedisStreamService

All Redis stream operations are handled by the hu.icellmobilsoft.coffee.module.redisstream.service.RedisStreamService class. If needed, it can be accessed directly via CDI, but it is more practical to use the classes created for Producer and Consumer.

Producer

To send messages to a stream, use the hu.icellmobilsoft.coffee.module.redisstream.publisher.RedisStreamPublisher class, such as:

@Inject
@RedisStreamProducer(configKey = "streamConfigKey", group = "streamGroup") (1)
private RedisStreamPublisher redisStreamPublisher;
...
redisStreamPublisher.publish("message"); (2)
// or
redisStreamPublisher.publish("alternativeGroup", "message");
redisStreamPublisher.publish(List.of("message-1", "message-2"));
redisStreamPublisher.publish("alternativeGroup", List.of("message-1", "message-2"));
redisStreamPublisher.publishPublications(List.of(
        RedisStreamPublication.of("group-1", "message-1"),
        RedisStreamPublication.of("group-2", "message-2")));

// parameterization of the message
long expiry = Instant.now().plus(5, ChronoUnit.MINUTES).toEpochMilli();
Map<String, String> parameters = Map.ofEntries(RedisStreamPublisher.parameterOf(StreamMessageParameter.TTL, expiry));
redisStreamPublisher.publish("message", parameters); (3)

// or
RedisStreamPublication publication = RedisStreamPublication.of(id).withTTL(defaultTTL).withParameter(StreamMessageParameter.FLOW_ID_EXTENSION, id);
redisStreamPublisher.publishPublication(publication); (4)
1 "group" is not mandatory in all cases
2 The "message" content itself will be stored in a kind of coffee stream message structure, which is the key of IRedisStreamConstant.Common.DATA_KEY_MESSAGE. The message itself is supplemented with extra information, such as a process identifier.
3 It is also possible to specify custom project specific parameters. The options provided by the system are described in hu.icellmobilsoft.coffee.module.redisstream.config.StreamMessageParameter enum class
4 RedisStreamPublication plays an all-in-one role in the message sending process, parameters set override the group set in redisStreamPublisher.
Each publish call is made on a separate Jedis connection, so in some cases you may want to collect the messages and pass them as a list.
RedisStreamPublication

If you need to submit several messages at once, you may want to use the hu.icellmobilsoft.coffee.module.redisstream.publisher.RedisStreamPublication class, which allows adding custom parameters to each message, or even sending messages to streams other than the one targeted by the injected RedisStreamPublisher.

Examples are:

  • StreamMessageParameter.TTL - Message expiry time

  • StreamMessageParameter.FLOW_ID_EXTENSION - Role to complement the SID logging for easier browsing of logs

  • + other custom settings

Consumer

Use SampleConsumer for the above config:

IRedisStreamConsumer.class
package hu.icellmobilsoft.redis.consume;

import javax.enterprise.context.Dependent;
import javax.inject.Inject;

import hu.icellmobilsoft.coffee.dto.exception.BaseException;
import hu.icellmobilsoft.coffee.module.redisstream.annotation.RedisStreamConsumer;
import hu.icellmobilsoft.coffee.module.redisstream.consumer.IRedisStreamConsumer;
import hu.icellmobilsoft.coffee.se.logging.Logger;
import hu.icellmobilsoft.sample.requestScope.Counter;
import hu.icellmobilsoft.sample.dependent.CounterDependent;
import hu.icellmobilsoft.sample.applicationScope.CounterApplication;
import redis.clients.jedis.StreamEntry;

@Dependent
@RedisStreamConsumer(configKey = "redisConfigKey", group = "sampleGroup")
public class SampleConsumer implements IRedisStreamConsumer {

    @Inject
    private Logger log;

    @Inject
    private Counter counter; (1)

    @Inject
    private CounterDependent counterDependent; (2)

    @Inject
    private CounterApplication counterApplication; (3)

    @Override
    public void onStream(StreamEntry streamEntry) throws BaseException {
        log.info("Processing streamEntry [{0}]", streamEntry);
        counter.print();
        counterDependent.print();
        counterApplication.print();
    }
}
1 The Counter class works in RequestScope
2 The CounterDependent class works as Dependent
3 The CounterApplication class operates in ApplicationScope
IRedisStreamPipeConsumer.class

There is a more complex IRedisStreamPipeConsumer, designed for extended stream consumption. Compared to IRedisStreamConsumer, the difference is that the return value of Map<String, Object> onStream(StreamEntry streamEntry) is the input of void afterAck(StreamEntry streamEntry, Map<String, Object> onStreamResult). The two methods run completely separately, each in its own RequestScope.
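The onStream/afterAck contract can be illustrated with a minimal stand-alone sketch; the interface and driver below only mirror the shape of IRedisStreamPipeConsumer and are not the real coff:ee classes:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PipeConsumerSketch {

    // mirrors the shape of IRedisStreamPipeConsumer: the onStream result feeds afterAck
    public interface PipeConsumer {
        Map<String, Object> onStream(String streamEntry) throws Exception;
        void afterAck(String streamEntry, Map<String, Object> onStreamResult);
    }

    // driver: process, ACK, then run the after-ack phase with the first phase's result
    public static void drive(PipeConsumer consumer, String entry, StringBuilder log) throws Exception {
        Map<String, Object> result = consumer.onStream(entry); // first phase, own request scope
        log.append("ACK;");                                    // the XACK happens between the two phases
        consumer.afterAck(entry, result);                      // second phase, separate request scope
    }

    public static void main(String[] args) throws Exception {
        StringBuilder log = new StringBuilder();
        drive(new PipeConsumer() {
            public Map<String, Object> onStream(String entry) {
                log.append("onStream;");
                Map<String, Object> result = new LinkedHashMap<>();
                result.put("length", entry.length());
                return result;
            }
            public void afterAck(String entry, Map<String, Object> result) {
                log.append("afterAck:").append(result.get("length"));
            }
        }, "hello", log);
        System.out.println(log); // onStream;ACK;afterAck:5
    }
}
```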

In an EE environment it is necessary to add further logic to the consumer, such as the process identifier and unique metadata; therefore it is recommended to extend hu.icellmobilsoft.coffee.module.redisstream.consumer.AbstractStreamConsumer, which prepares the implementing consumer. This logic is the counterpart of the message sending done by the hu.icellmobilsoft.coffee.module.redisstream.publisher.RedisStreamPublisher class.

import javax.enterprise.inject.Model;
import javax.inject.Inject;
import javax.inject.Provider;

import hu.icellmobilsoft.coffee.dto.exception.BaseException;
import hu.icellmobilsoft.coffee.module.redisstream.annotation.RedisStreamConsumer;
import hu.icellmobilsoft.coffee.module.redisstream.consumer.AbstractStreamConsumer;

@Model
@RedisStreamConsumer(configKey = "redisConfigKey", group = "redisGroup")
public class SampleConsumer extends AbstractStreamConsumer {

    @Inject
    private Provider<Sample> sample;

    @Override
    public void doWork(String text) throws BaseException { (1)
        sample.get().process(text);
    }
}
1 The content can be a string or JSON, which from the StreamEntry is the value of the key RedisStreamConstant.Common#DATA_KEY_MAIN
How does it work?

At application startup, for example via the CDI @Observes @Initialized(ApplicationScoped.class) event (there are several options), it searches for all classes that:

  • implement the hu.icellmobilsoft.coffee.module.redisstream.consumer.IRedisStreamConsumer interface

  • are annotated with hu.icellmobilsoft.coffee.module.redisstream.annotation.RedisStreamConsumer

From the annotations of the classes found, the redis connection key and the stream group name are known, from which the stream key name and the settings are derived. It iterates through the classes and creates as many instances of each as configured, running them in separate threads using hu.icellmobilsoft.coffee.module.redisstream.consumer.RedisStreamConsumerExecutor.

In an infinite loop, each thread queries Redis for messages. First it checks whether the specified group and stream exist, and creates them if not; subsequent rounds skip this check. When a message is received, an automatically handled RequestScope is created to execute the business logic:

  1. so that our usual RequestScope logic can be used to process the message

  2. each message is actually a real request, except that it does not come in REST

  3. this logic also follows the JMS scope handling

After successful message processing, it closes the RequestScope and issues the ACK command.
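The loop described above can be sketched against an in-memory stand-in for Redis; none of the names below come from coff:ee, and a real implementation would use XREADGROUP/XACK via Jedis:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class ConsumerLoopSketch {

    // in-memory stand-in for a Redis stream with a consumer group
    static class FakeStream {
        boolean groupExists = false;
        final Queue<String> entries = new ArrayDeque<>();
        final List<String> acked = new ArrayList<>();

        void createGroupIfMissing() { groupExists = true; } // XGROUP CREATE ... MKSTREAM
        String read() { return entries.poll(); }            // XREADGROUP, null on timeout
        void ack(String entry) { acked.add(entry); }        // XACK
    }

    interface StreamConsumer { void onStream(String entry) throws Exception; }

    // ensure the group exists once, then read/process/ack until the stream is drained
    static void runUntilDrained(FakeStream stream, StreamConsumer consumer) {
        if (!stream.groupExists) {
            stream.createGroupIfMissing(); // only checked on the first round
        }
        String entry;
        while ((entry = stream.read()) != null) {
            try {
                // a real consumer activates a RequestScope around this call
                consumer.onStream(entry);
                stream.ack(entry); // ACK only after successful processing
            } catch (Exception e) {
                // failed entries stay un-ACKed (pending), see "Non-ACKed messages"
            }
        }
    }

    public static void main(String[] args) {
        FakeStream stream = new FakeStream();
        stream.entries.add("msg-1");
        stream.entries.add("msg-2");
        runUntilDrained(stream, entry -> {
            if (entry.equals("msg-2")) {
                throw new Exception("processing failed");
            }
        });
        System.out.println(stream.acked); // [msg-1]
    }
}
```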

Starter

There are several ways to start a consumer, CDI event, CDI extension, manual/delayed start, etc…​

For these, a hu.icellmobilsoft.coffee.module.redisstream.bootstrap.BaseRedisConsumerStarter class and a hu.icellmobilsoft.coffee.module.redisstream.bootstrap.ConsumerStarterExtension CDI extension pattern are provided (the extension can be a problem, for example, for JNDI bindings used in consumers).

coffee does not start consumers by itself, this has to be done by everyone in the project based on their own needs.

4.9.4. Non-ACKed messages

This implementation does not deal with retrieved but not ACKed messages. These need to be handled locally on a case by case basis as to what to do with them. The hu.icellmobilsoft.coffee.module.redisstream.service.RedisStreamService class contains query and handling methods for this purpose, which can be used in the stuck business process.

4.9.5. Graceful shutdown support

Redis consumers used to get stuck during service shutdown, stalling in the middle of processing. To support graceful shutdown, the hu.icellmobilsoft.coffee.module.redisstream.bootstrap.ConsumerLifeCycleManager class was created, which waits for the consumers to complete their ongoing operations.

By default, it is enabled, but it can be disabled in the following way:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.context.BeforeDestroyed;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.inject.Specializes;

import hu.icellmobilsoft.coffee.module.redisstream.bootstrap.ConsumerLifeCycleManager;

@ApplicationScoped
@Specializes
public class ProjectConsumerLifeCycleManager extends ConsumerLifeCycleManager {
    public void stop(@Observes @BeforeDestroyed(ApplicationScoped.class) Object init) {
        //
    }
}

4.10. coffee-module-redispubsub

Module designed to implement "topic" messaging using the Redis Pub/Sub feature, built on microprofile-reactive-messaging.

4.10.1. Redis Pub/Sub

The Pub/Sub feature implements classic publisher-subscriber messaging. Publishers send a message to a given Redis pub/sub channel, where it is received by all currently subscribed clients. Unlike Redis streams, messages are not stored on the channel, not even in memory: a subscriber that subscribes afterwards does not receive earlier messages. Subscription therefore does not work by repeated requests (e.g. XREAD) but over a dedicated, blocked connection.
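The fire-and-forget nature of Pub/Sub, as opposed to streams, can be demonstrated with a toy in-memory channel; nothing below is coff:ee or Redis API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PubSubSketch {

    // a channel keeps no message history: only currently subscribed listeners see a message
    static class Channel {
        private final List<Consumer<String>> subscribers = new ArrayList<>();

        void subscribe(Consumer<String> listener) {
            subscribers.add(listener);
        }

        void publish(String message) {
            subscribers.forEach(s -> s.accept(message)); // delivered now or never
        }
    }

    public static void main(String[] args) {
        Channel channel = new Channel();
        List<String> received = new ArrayList<>();

        channel.publish("early");         // no subscriber yet: the message is lost
        channel.subscribe(received::add); // late subscriber
        channel.publish("late");

        System.out.println(received); // [late]
    }
}
```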

4.10.2. RedisConnection

The coffee-module-redispubsub module uses the [coffee-module-redis] module to handle Redis connection. The Redis connection setup is the same as described in [coffee-module-redis], it is based on the "key" there.

4.10.3. Microprofile-reactive-messaging

The module integrates the Pub/Sub solution via the microprofile reactive messaging API, so configuration and message sending/receiving follow the specification; therefore, with minimal code modification, the module can replace existing connectors (e.g. Kafka, MQTT, Google Pub/Sub…​). Integration happens via the PubSubConnector class, which registers the subscribers and is also used to send messages.

Wildfly configuration

To use this module under WildFly, you need to activate the microprofile-reactive-streams-operators-smallrye and microprofile-reactive-messaging-smallrye subsystems:

jboss-cli.sh
/extension=org.wildfly.extension.microprofile.reactive-messaging-smallrye:add
/extension=org.wildfly.extension.microprofile.reactive-streams-operators-smallrye:add
/subsystem=microprofile-reactive-streams-operators-smallrye:add
/subsystem=microprofile-reactive-messaging-smallrye:add

4.10.4. Subscriber(consumer) creation

Subscriber creation is done with the configurations and annotations defined by microprofile-reactive-messaging.

mp reactive incoming config
coffee:
  redis:
    pubsubredis: (1)
    #...
      pool:
        pubsubpool:
        #...

mp:
  messaging:
    incoming:
      test-in: (2)
        connector: coffee-redis-pubsub (3)
        connection-key: pubsubredis (4)
        pool-key: pubsubpool (5)
        pub-sub-channel: channel1 (6)
        retry-seconds: 60 (7)
1 Redis connection and pool settings to use
2 Incoming mp channel key, in code you can refer to it to process the message.
3 The channel uses the module connector, fixed to coffee-redis-pubsub.
4 Coffee redis module connection key coffee.redis.*, required parameter
5 Coffee redis pool key coffee.redis.*.pool.*, optional, default value default
6 Optional parameter, redis Pub/Sub channel name; if not specified, the incoming mp channel key (key <2> - test-in) is used as the redis channel by default. For a more detailed description of when this parameter might be needed, see: Same channel publisher and subscriber within a service.
7 Optional parameter, in case of subscribe failure, how many seconds to wait before retrying, default 30s
subscriber method
@ApplicationScoped (1)
public class TestListener {


    @Incoming("test-in") (2)
    void consume(String test) {
        //logic
    }
}
1 microprofile-reactive-messaging only allows Dependent or ApplicationScoped beans for consumer
2 work with the mp channel key specified in the config

4.10.5. Publisher creation

Publisher creation is also done with the configurations and annotations defined by microprofile-reactive-messaging.

mp reactive outgoing config
coffee:
  redis:
    pubsubredis: (1)
    #...
      pool:
        pubsubpool:
        #...


mp:
  messaging:
    outgoing:
      test-out: (2)
        connector: coffee-redis-pubsub (3)
        connection-key: pubsubredis (4)
        pool-key: pubsubpool (5)
        pub-sub-channel: channel1 (6)
1 The redis connection and pool settings to use
2 Outgoing mp channel key, in code you can refer to it to process the message.
3 The channel uses the module connector, fixed to coffee-redis-pubsub.
4 Coffee redis module connection key coffee.redis.*, mandatory parameter
5 Coffee redis module pool key coffee.redis.*.pool.*, optional, default value default
6 Optional parameter, redis Pub/Sub channel name; if not specified, the outgoing mp channel key (key <2> - test-out) is used as the redis channel by default. For a more detailed description of when this parameter might be needed, see: Same channel publisher and subscriber within a service.
publishing method
@Model
public class TestAction {


    @Inject
    @Channel("test-out") (1)
    private Emitter<String> emitter;


    void sendMessage(String test) {
        //logic
        emitter.send(test); (2)
    }
}
1 work with the mp channel key specified in the config
2 send message; returns a CompletionStage.

4.10.6. Message

The module wraps every message in a PubSubMessage object, which carries the sender's SID; the consumer reads it and sets it in the MDC. The class implements org.eclipse.microprofile.reactive.messaging.Message, so it can be used as the consumer method parameter as described in the "Methods consuming data" documentation.

message example
{
    "context": {
        "extSessionId": "3VUTBZCQOIHUAM07"
    },
    "payload": "test0"
}
set/override SID in message

If you want to manually set the SID of the message, you have to send PubSubMessage to the emitter instead of payload.

example for own sid
@Model
public class TestAction {

    @Inject
    @Channel("test")
    private Emitter<PubSubMessage> emitter;

    void sendMessage() {
        //logic
        emitter.send(PubSubMessage.of("test", Map.of(LogConstants.LOG_SESSION_ID, "customSID")));
    }
}

4.10.7. mp-reactive-messaging specifics

Same channel publisher and subscriber within a service

Within a microservice, microprofile-reactive-messaging does not allow creating both a publisher and a subscriber for the same key. If such a need arises, the pub-sub-channel attribute can be used to separate the name of the microprofile channel within the service from the name of the associated redis pub/sub channel; see the example: Subscriber and producer on same service.

Using multiple producers on the same channel

By default, a message can be sent to a channel from a single place within the service; if you want several beans to send, you can enable this with the mp.messaging.outgoing.test-out.merge=true configuration.
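For illustration, reusing the test-out channel from the earlier examples, the merge setting could look like this (a sketch, not a complete configuration):

```yaml
mp:
  messaging:
    outgoing:
      test-out:
        connector: coffee-redis-pubsub
        connection-key: pubsubredis
        merge: true # allows several beans to inject an Emitter for "test-out"
```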

Configuration key constraints

If the microprofile-reactive-messaging subsystem is activated and there is any mp.messaging.* entry in mp-config, then there must be a corresponding subscriber or producer in the deployment! This can cause problems with shared config files.

4.10.8. Examples

Subscriber and producer on separate service
Diagram: separated pub/sub
Publisher
publisher config
coffee:
  redis:
    sample: (1)
      database: 0
      host: bs-sample-redis
      port: 6379
mp:
  messaging:
    outgoing:
      test: (2)
        connector: coffee-redis-pubsub
        connection-key: sample (1)
1 redis connection setup
2 emitter key
publishing method
@Model
public class TestAction {

    @Inject
    @Channel("test") (1)
    private Emitter<String> emitter;

    void sendMessage() {
        //logic
        emitter.send("test123");
    }
}
1 mp.messaging.outgoing key
Subscriber
config
coffee:
  redis:
    sample: (1)
      database: 0
      host: bs-sample-redis
      port: 6379
mp:
  messaging:
    incoming:
      test: (2)
        connector: coffee-redis-pubsub
        connection-key: sample (1)
1 redis connection setup
2 subscriber key
subscriber method
@ApplicationScoped
public class TestListener {

    @Incoming("test") (1)
    void consume(String test) {
        //logic
    }
}
1 mp.messaging.incoming key
Subscriber and producer on same service
Diagram: same-service pub/sub
config
coffee:
  redis:
    sample: (1)
      database: 0
      host: bs-sample-redis
      port: 6379
mp:
  messaging:
    incoming:
      test-in: (2)
        connector: coffee-redis-pubsub
        connection-key: sample (1)
        pub-sub-channel: test (4)
    outgoing:
      test-out: (3)
        connector: coffee-redis-pubsub
        connection-key: sample (1)
        pub-sub-channel: test (4)
1 redis connection setup
2 subscriber key
3 emitter key
4 redis channel name
publishing method
@Model
public class TestAction {

    @Inject
    @Channel("test-out") (1)
    private Emitter<String> emitter;

    void sendMessage() {
        //logic
        emitter.send("test");
    }
}
1 mp.messaging.outgoing key
subscriber method
@ApplicationScoped
public class TestListener {

    @Incoming("test-in") (1)
    void consume(String test) {
        //logic
    }
}
1 mp.messaging.incoming key

4.10.9. Shortcomings, possibilities for further development

  • Multi-threaded async processing

    Since all subscribers receive the message, it makes sense to subscribe with one thread per redis channel; currently the logic after message arrival is also implemented on a single thread (similar to JMS topic MDBs). Consumer-side multi-threading could be solved; for this a Util/Helper class could be provided (e.g. for MDC setup, number of threads, etc…​)

    multi-threaded processing
    @ApplicationScoped
    public class TestListener {
    
        @Resource(name = "java:jboss/ee/concurrency/executor/default")
        private ExecutorService executorService;
    
        @Incoming("test")
        CompletionStage<Void> consume(Message<String> test){
            return CompletableFuture.runAsync(() -> {
                //logic
            }, executorService);
        }
    }
  • Support for Redis Pub/Sub PSUBSCRIBE operation, allowing to subscribe to patterns, e.g. with PSUBSCRIBE ch* the subscribing client will also receive messages sent to ch1,ch2,cha channels.

  • Project level override possibility e.g. with service loader mechanism

  • Tracing how to

4.11. coffee-module-ruleng

The purpose of this module is to support a universal rule system evaluation.

There is a growing need to perform multiple evaluations on a single data set, the results of which determine the output of the overall processing. Example use cases:

  • submitted invoice processing

    • invoice header checks (simple data, external dependency)

    • invoice item checks (scroll through list)

    • invoice summary checks (calculated data)

    • special checks (combination of the above)

  • rent, discount checks

    • real owner, maturity, issuer (external dependency)

    • discount rate, eligibility for use (calculated data)

  • loan application

    • check application data

    • reviews

  • many other use cases

4.11.1. Principles

It works entirely on the basis of CDI, focusing on the following needs:

  1. modularizable according to CDI principles, almost all framework operations can be individually modified

  2. 1 rule 1 independent class

  3. the internal logic of the rule follows the KISS (keep it simple, stupid) principle

  4. traditional unlimited EE logic can be used

  5. the rule evaluation must be fault-tolerant and null-safe, focusing only on its own data and not caring about the evaluation of other rules

  6. possibility to sort and group rules (parallel evaluation)

  7. possibility to interrupt the processing in the queue while the processing of the other group is independent

  8. evaluation result can be positive or negative, logic does not matter how it is used

  9. disable/enable rule according to versioned data

  10. all rules must end with the same class type

  11. multiple evaluations are possible within 1 rule (to be avoided, but sometimes required)

4.11.2. Rule

Many types of rule evaluation are possible, starting from the following pattern

Sample Rule
import javax.enterprise.inject.Model;
import javax.inject.Inject;

import hu.icellmobilsoft.coffee.cdi.annotation.Range;
import hu.icellmobilsoft.coffee.cdi.annotation.Version;
import hu.icellmobilsoft.coffee.dto.exception.BaseException;
import hu.icellmobilsoft.coffee.dto.exception.TechnicalException;
import hu.icellmobilsoft.coffee.dto.exception.enums.CoffeeFaultType;
import hu.icellmobilsoft.coffee.module.ruleng.rule.IRule;
import hu.icellmobilsoft.project.enums.ValidatorFaultType;
import hu.icellmobilsoft.project.rule.RuleException;
import hu.icellmobilsoft.project.rule.CustomRuleResult;
import hu.icellmobilsoft.project.rule.DataHelper;
import hu.icellmobilsoft.project.annotation.Ac;
import hu.icellmobilsoft.project.schemas.data.LineType;

@Model
@Version(include = @Range(from = "1.1")) // Optional (1)
@Ac // Rule category (2)
public class AcExampleRule implements
    IRule<LineType, CustomRuleResult>, (3)
    IRuleSelector { // Optional (4)

    @Inject
    private DataHelper dataHelper; (5)

    @Override
    public CustomRuleResult apply(LineType input) throws RuleException, BaseException { (6)
        if (input == null) {
            throw new TechnicalException(CoffeeFaultType.WRONG_OR_MISSING_PARAMETERS, "input is null");
        }

        if (input.getSub() != null && input.getSub().getData() != null (7)
            && input.getSub().getData().compareTo(dataHelper.getValue()) == 0) {
            return new CustomRuleResult(ValidatorFaultType.INVALID_SUB_DATA);
        }
        return null; (8)
    }

    @Override
    public int order() { (9)
        return 0;
    }

    @Override
    public Enum<?> group() { (10)
        return RuleGroup.NONE;
    }
}
1 Rule activation by data version - NOT REQUIRED
2 Rule category. This is an annotation of type @Qualifier
3 IRule<InputType, CustomRuleResult> - input data type and output
4 Rule grouping and ordering options - NOT REQUIRED
5 This is how to input the precalculated data to be used by the rule for evaluation
6 RuleException - thrown when an interrupt is needed in the rule evaluation
7 null-safe checks
8 Can also return a positive result if required
9 Rule order, 0 by default
10 Rule group, by default NONE
How it works:
  • AcExampleRule will be activated on data with version 1.1+. This is determined by the hu.icellmobilsoft.coffee.tool.version.ComparableVersion class.

    • It is possible to specify different version intervals

  • AcExampleRule is an "Ac"-category rule that is evaluated with a LineType input (can be anything) and a CustomRuleResult output (extends hu.icellmobilsoft.coffee.module.ruleng.rule.RuleResult)

  • It also uses the optional IRuleSelector interface; the implemented methods in the example use the default values. If there are multiple rules for the same combination of category and implementation, the system groups and executes them accordingly

    • order() - evaluation is applied in ascending order.

    • group() - rule group. If a rule in the group throws a RuleException, the subsequent rules in that group's order are not executed. Other groups are not affected by the interruption; groups are independent of each other

If a rule does not implement the IRuleSelector interface, rules run by default ordered by Class.getSimpleName() and belong to RuleGroup.NONE
  • Check the input data, focus only on whether the data to check exists

  • The data is evaluated and the role of the rule ends here

The CustomRuleResult evaluation result can be customized to the needs of the project; the only condition is that hu.icellmobilsoft.coffee.module.ruleng.rule.RuleResult is its ancestor

4.11.3. Validator

It is intended to handle rules belonging to the rule category.

Sample validator
import java.lang.annotation.Annotation;

import javax.enterprise.inject.Model;
import javax.enterprise.util.TypeLiteral;

import hu.icellmobilsoft.coffee.module.ruleng.rule.IRule;
import hu.icellmobilsoft.coffee.module.ruleng.evaluator.AbstractEvaluator;
import hu.icellmobilsoft.project.schemas.data.LineType;
import hu.icellmobilsoft.sample.common.system.validator.rule.CustomRuleResult;
import hu.icellmobilsoft.sample.invoice.common.action.evaluator.annotation.Ac;

@Model
public class ACEvaluatorLineType extends AbstractEvaluator<LineType, CustomRuleResult> {

    @Override
    protected Annotation cdiSelectLiteral() {
        return new Ac.Literal(); (1)
    }

    @Override
    protected TypeLiteral<IRule<LineType, CustomRuleResult>> cdiTypeLiteral() {
        return new TypeLiteral<IRule<LineType, CustomRuleResult>>() { (2)
            private static final long serialVersionUID = 1L;
        };
    }
}
1 Rule category Qualifier annotation
2 IRule<LineType, CustomRuleResult> implemented rules' type as a CDI TypeLiteral
How it works:
  • There can also be more than one validator, each rule category and implementation must have its own.

  • The first thing it does is read from the CDI container the category and implementation rules it handles.

    • It groups them by the IRuleSelector mentioned above, then sorts by order and then by class name

  • Runs through the categorized rules, collects the results

  • Returns in response the results of all rules run
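The grouping, ordering and group-level interruption described above can be sketched stand-alone; RuleSketch and its nested types are illustrative stand-ins, not the coff:ee classes:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RuleSketch {

    public static class RuleException extends Exception {}

    public interface Rule {
        String group();                                  // rules in the same group run sequentially
        int order();                                     // ascending within a group
        String apply(String input) throws RuleException; // null = nothing to report
    }

    // sort by order then class name, stop a group on RuleException, collect all results
    public static List<String> evaluate(List<Rule> rules, String input) {
        Map<String, List<Rule>> byGroup = new LinkedHashMap<>();
        rules.stream()
                .sorted(Comparator.comparingInt(Rule::order)
                        .thenComparing(r -> r.getClass().getSimpleName()))
                .forEach(r -> byGroup.computeIfAbsent(r.group(), g -> new ArrayList<>()).add(r));

        List<String> results = new ArrayList<>();
        for (List<Rule> group : byGroup.values()) {
            for (Rule rule : group) {
                try {
                    String result = rule.apply(input);
                    if (result != null) {
                        results.add(result);
                    }
                } catch (RuleException e) {
                    break; // only this group is interrupted, other groups still run
                }
            }
        }
        return results;
    }

    public static void main(String[] args) {
        Rule failing = new Rule() {
            public String group() { return "A"; }
            public int order() { return 0; }
            public String apply(String in) throws RuleException { throw new RuleException(); }
        };
        Rule skipped = new Rule() {
            public String group() { return "A"; }
            public int order() { return 1; }
            public String apply(String in) { return "never"; }
        };
        Rule independent = new Rule() {
            public String group() { return "B"; }
            public int order() { return 0; }
            public String apply(String in) { return "B-result"; }
        };
        // group A stops at the failing rule, group B is unaffected
        System.out.println(evaluate(List.of(failing, skipped, independent), "input")); // [B-result]
    }
}
```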

4.12. coffee-module-mp-opentracing

The purpose of this module is to support microprofile opentracing which includes the following principles:

  • Coffee compatibility - allows coffee modules to provide trace information. The included interceptor can handle the coff:ee modules and can be extended with custom bindings, which are contained in the coffee-cdi module.

  • It complements metrics and logging for full observability.

  • Currently we use our own classes, as it is not possible to control well at extension level which component should provide trace information.

4.12.1. microprofile-opentracing

Based on https://github.com/eclipse/microprofile-opentracing and the https://github.com/opentracing-contrib/java-interceptors project. We use our own binding instead of the org.eclipse.microprofile.opentracing.Traced annotation, as this way we can completely decouple the traceability of our modules.

4.12.2. Core

The bindings in the extension are used to make the different technologies of the modules traceable.

OpenTraceInterceptor

The interceptor handles methods annotated with hu.icellmobilsoft.coffee.cdi.trace.annotation.Traced; the @Traced annotation is what drives the OpenTraceInterceptor. With this solution, it becomes easy to later replace or modify the trace interceptor’s behavior. It automatically handles the trace flow for rest, redis, redis-stream, etcd and gRPC.

coffee-module-redis trace

The OpenTraceInterceptor works with the span values filled in via the @Traced annotation. If @Traced.component in the annotation is 'Jedis', the interceptor takes the functionName parameter as the basis for the span name; otherwise the class name of the annotated method is used. This value decides whether the Jedis client operation should join an existing trace stream rather than start a new one by itself. The operation is implemented via the RedisManager.
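The span-name decision described above can be illustrated with a minimal sketch. The helper below is hypothetical, written only to make the rule explicit; it is not the coffee implementation:

```java
public class SpanNameSketch {

    // Mirrors the naming rule described above: for the 'Jedis' component the
    // functionName parameter wins, otherwise the annotated class's name is used.
    public static String spanName(String component, String functionName, String className) {
        return "Jedis".equals(component) ? functionName : className;
    }
}
```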

coffee-module-redisstream trace

The AbstractStreamConsumer.onStream and AbstractStreamPipeConsumer.onStream methods are annotated. A new trace is started with the tag values specified via the @Traced annotation, and it automatically includes the Rest calls and Jedis operations going out from there. The span operation name will be the name of the class containing the annotated method.

coffee-module-etcd trace

The ConfigEtcdHandler has been annotated, so the method calls in it provide trace data.

coffee-deltaspike-data trace

The OpenTraceHandler manages the channeling of database operations in the repository layer into the trace flow.

OpenTraceErrorResponseFilter

Operations that return an HTTP 500 or higher error code would otherwise not appear as errors in the trace, because the existing handling is exception based and such responses do not throw an exception. This filter is meant to handle that case, similarly to https://github.com/opentracing-contrib/java-jaxrs/blob/master/opentracing-jaxrs2/src/main/java/io/opentracing/contrib/jaxrs2/server/SpanFinishingFilter.java

Thorntail based project config

By default it uses the Jaeger trace implementation. https://docs.thorntail.io/2.5.0.Final/#_jaeger

project-defaults.yml
thorntail:
    jaeger:
        service-name: ${service.name}
        sampler-type: const #4 supported types: 'const', 'probabilistic', 'ratelimiting' and 'remote'.
        sampler-parameter: 1 #For 'Constant' sampler, 0 means no traces and 1 means all traces.
        remote-reporter-http-endpoint: 'http://jaeger-collector.istio-system:14268/api/traces'
pom.xml
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-opentracing</artifactId>
</dependency>
<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>microprofile-opentracing</artifactId>
</dependency>
<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>jaeger</artifactId>
</dependency>
Wildfly based project config
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-opentracing</artifactId>
</dependency>

4.13. coffee-module-mp-metrics/micrometer

The purpose of the modules is to support microprofile metrics and micrometer, which includes the following principles:

  • Coffee Compatibility - provides the opportunity for Coffee modules to provide metric information. The interceptor within it is capable of handling Coffee modules and is extensible.

4.13.1. coffee-core

Contains a metric-independent Noop* implementation that is activated by default when no specific metric implementation is plugged in.

4.13.2. coffee-module-redis

The module contains metrics that can be activated based on the metric implementation:

pom.xml
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-redis</artifactId>
</dependency>

<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-micrometer</artifactId> (1)
</dependency>
<!-- or -->
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-metrics</artifactId> (2)
</dependency>
1 Micrometer metric implementation
2 Microprofile-metrics metric implementation

4.14. coffee-module-mp-restclient

The purpose of this module is to support microprofile restclient, which includes the following principles:

  • Coffee compatibility - copying original request headers, passing log MDC keys, exception handling

  • Business process traceability - request/response logging

  • HTTP communication customization - internal/external HTTP communication can be modified (e.g. using different headers, content or content-type between internal services than when sending a request to an external party), TLS encryption, URL and behaviour (timeout, retry, etc…​) overrides

  • Exception customization - locally modifiable error handling

4.14.1. microprofile-rest-client

The starting point is microprofile-rest-client, which is a member of the microprofile.io group. It takes a lot of work off our shoulders and can be used on its own, but it needs to be extended with some useful features to meet the needs of projects.

4.14.2. Core

Most of the above principles are implemented in microprofile-rest-client itself; the rest is done by the next few classes.

DefaultLoggerClientRequestFilter

This is the default REST client REQUEST logger, which takes into account whether a LogSpecifier with CLIENT_REQUEST target is specified (LogSpecifier). It logs the HTTP request data sent by the client:

  • HTTP method, URL address

  • HTTP headers

  • cookies

  • Entity

It sends all this to the logger at INFO level.

DefaultLoggerClientResponseFilter

This is the default REST client RESPONSE logger, which takes into account whether a LogSpecifier with CLIENT_RESPONSE target is specified (LogSpecifier). It logs the HTTP response data received by the client:

  • HTTP status + accessories

  • HTTP headers

  • cookies

  • locale, location values

  • Entity

All this is sent to the logger at INFO level.

DefaultSettingClientRequestFilter

This is the default REST client "REST setting copy". Its job is to propagate REST settings across service calls so that, for example, logging and authentication can keep working.

In other words, when a REST request comes into the service and processing in the microservice requires a call to another microservice, that HTTP call must include the same HTTP headers that were present in the original service request, so that authentication there can succeed. Specifically, the authentication MDC variables are copied into the outgoing HTTP headers.
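The propagation idea can be sketched with plain maps. The header name below is hypothetical and the real filter works on JAX-RS request contexts, but the copying logic is the same:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HeaderCopySketch {

    // Copies the selected incoming request headers onto an outgoing client
    // request - the propagation idea behind the setting-copy filter above.
    public static Map<String, String> propagate(Map<String, String> incoming, List<String> headersToCopy) {
        Map<String, String> outgoing = new LinkedHashMap<>();
        for (String name : headersToCopy) {
            if (incoming.containsKey(name)) {
                outgoing.put(name, incoming.get(name));
            }
        }
        return outgoing;
    }
}
```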

DefaultBaseExceptionResponseExceptionMapper

Its purpose is to process the error received in the response into the usual coffee BaseException, specifically into the hu.icellmobilsoft.coffee.dto.exception.RestClientResponseException class.

DefaultRestClientBuilderListener

This is a default "activator" for connecting the above listed classes. This class can be used directly or configured freely. Additionally, the HTTP client timeout values are defined in this class:

  • 5 sec connect timeout

  • 1 min read timeout

If there is any operational change request, it can be handled through this central "activator" class; further options, which follow the rules of microprofile-rest-client, are mentioned below.

That "activator" should also be taken into account in the microprofile-rest-client must be registered in the following file:

src/main/resources/META-INF/services/org.eclipse.microprofile.rest.client.spi.RestClientBuilderListener
# coffee default
hu.icellmobilsoft.coffee.module.mp.restclient.provider.DefaultRestClientBuilderListener
# projects customized
#hu.icellmobilsoft.sample.invoice.service.provider.ProjectRestClientBuilderListener

This is a plain text file without an extension.

FaultTypeParserExtension

Gathers enum classes implementing the IFaultType interface and annotated with @hu.icellmobilsoft.coffee.cdi.annotation.FaultTypeCode. Based on this, FaultTypeParser is able to parse the response String faultCode in mp rest client calls to enum and map it to the corresponding exception.

IMPORTANT

It can only parse enums accessible to the container; this requires that beans.xml be present in the implementation module.

The read implementations can be ordered by the @Priority annotation, the default priority is 500.

4.14.3. Implementation in the project

pom.xml
Coffee module activation
<dependency>
    <groupId>hu.icellmobilsoft.coffee.module.mp.restclient</groupId>
    <artifactId>coffee-module-mp-restclient</artifactId>
</dependency>

You may also need to activate the application server extension, which for example in thorntail server is:

thorntail application server activation
<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>microprofile-restclient</artifactId>
</dependency>

4.14.4. Usage examples

For a complete, detailed description of the usage itself, see the microprofile-rest-client releases. We will mention a few examples here.
Sample usage
Initialization

In the class where the REST operations are defined (if you follow the company recommended REST structure, this is the REST interface) you need to add the @RegisterRestClient annotation. This basically tells the microprofile-rest-client system to treat the REST endpoints defined in it as HTTP REST clients. In the client itself you will be able to use the types and annotations used here; the burden of the separate settings for these (e.g. text/xml, application/json, entity class, etc…​) falls on the client.

@Tag(name = IInvoiceTestRest.TAG_TEST, description = "SYSTEM REST test operations required for Invoice Processor")
@Path(InvoicePath.TEST_INVOICE_SERVICE)
@RegisterRestClient (1)
public interface IInvoiceTestRest {

    static final String TAG_TEST = "Test";
    ...
1 add the @RegisterRestClient annotation. Usually nothing else is needed (unless there are some special needs), old functionality is not affected
Using HTTP client

The most used instances of HTTP REST client in the code:

CDI inject
import javax.enterprise.inject.Model;
import javax.inject.Inject;

import org.eclipse.microprofile.rest.client.inject.RestClient;

import hu.icellmobilsoft.coffee.dto.exception.BaseException;
import hu.icellmobilsoft.coffee.module.mp.restclient.util.MPRestClientUtil;

@Model
public class TestAction {

    @Inject
    @RestClient (1)
    private IInvoiceTestRest iInvoiceTestRest; (2)

    public String test() throws BaseException {
        try {
            iInvoiceTestRest.postValidatorTest(entityClass); (3)
        } catch (Exception e) { (4)
            throw MPRestClientUtil.toBaseException(e); (5)
        }
        return null;
    }
}
1 mp-rest-client @Qualifier annotation that creates the HTTP client wrapper
2 interface marked with the @RegisterRestClient annotation
3 HTTP REST client call - this is where the configuration settings (URL, HTTP header, timeout, etc…​) come into play
4 general HTTP error management. The operation itself declares BaseException, but that applies at the service level; we are using it as a client, and at (1) we wrapped it in a wrapper, which may throw other RuntimeException errors
5 Coffee level pre-written Exception compiler

In fact, a boilerplate wrapper will be created for the whole thing to simplify the coding even more.

inline
import java.net.URI;

import javax.enterprise.inject.Model;
import javax.inject.Inject;

import org.eclipse.microprofile.rest.client.RestClientBuilder;

import hu.icellmobilsoft.coffee.dto.exception.BaseException;
import hu.icellmobilsoft.coffee.module.mp.restclient.util.MPRestClientUtil;

@Model
public class TestAction {

    public String doWorkAgainstApi(URI uri, Object entity) throws BaseException {
        try {
            IInvoiceTestRest iInvoiceTestRest = RestClientBuilder //
                    .newBuilder() (1)
                    .baseUri(uri) (2)
                    .build(IInvoiceTestRest.class); (3)
            return iInvoiceTestRest.postValidatorTest(entity);
        } catch (Exception e) { (4)
            throw MPRestClientUtil.toBaseException(e); (5)
        }
    }
}
1 this calls the DefaultRestClientBuilderListener, any setting of which can be overridden
2 override the URI defined in the configs
3 interface marked with the @RegisterRestClient annotation
4 general error handling. The operation itself declares BaseException, but the builder wrapped it in a wrapper, which may throw other RuntimeException errors
5 Exception compiler pre-written in Coffee

This use case is very specific, if possible, strive for a CDI and configuration level solution.

Configuration options

Configurations can be specified with annotations, but of course the options of microprofile-config are also available. Below are some of the most common configuration patterns. The syntax itself is the following:

category-key-name/mp-rest/key

or

full-class-name/mp-rest/key

Where:

  • category-key-name - keyword we choose in our code and used in the @RegisterRestClient(configKey="invoiceService") annotation, which in our case is for example "invoiceService"

  • full-class-name - class (in our case, rather interface) name, where the @RegisterRestClient annotation is placed. Avoid this kind of configuration if possible, as later refactoring may cause hidden errors in the configurations

  • /mp-rest - microprofile-rest-client default keyword

  • /key - the key itself supported by microprofile-rest-client, e.g.: url, providers, readTimeout, etc…​
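The two key forms can be illustrated with a tiny sketch (hypothetical helper methods, written only to make the key structure explicit):

```java
public class MpRestKeySketch {

    // Builds the config key from the configKey chosen in @RegisterRestClient(configKey=...).
    public static String byConfigKey(String configKey, String key) {
        return configKey + "/mp-rest/" + key;
    }

    // Builds the config key from the fully qualified name of the annotated interface.
    public static String byClassName(Class<?> clazz, String key) {
        return clazz.getName() + "/mp-rest/" + key;
    }
}
```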

project-default.yml - sample configuration
"invoiceService/mp-rest/url": http://localhost:8083
"invoiceService/mp-rest/providers": hu.icellmobilsoft.project.invoice.CustomProvider

#or the other option

"hu.icellmobilsoft.project.invoice.service.rest.IInvoiceTestRest/mp-rest/url": http://localhost:8083
"hu.icellmobilsoft.project.invoice.service.rest.IInvoiceTestRest/mp-rest/providers": hu.icellmobilsoft.project.invoice.CustomProvider

4.15. coffee-module-totp

The purpose of this module is to create and verify Time-based One-Time Passwords (TOTP) according to the standard. We have implemented the algorithm described in RFC 6238, supplemented with the microprofile-config based configuration we use.

The algorithm in rfc-6238 https://tools.ietf.org/html/rfc6238

4.15.1. Implementation in the project

pom.xml
Coffee module activation
<dependency>
    <groupId>hu.icellmobilsoft.coffee.module.totp</groupId>
    <artifactId>coffee-module-totp</artifactId>
</dependency>

The table below shows the configuration keys used by the module.

Table 2. configuration parameters

parameter name                        | default value | description
totp.password.digits.length           | 6             | The length of the generated password (max. 9)
totp.password.timestep.millisec       | 30000         | The length, in milliseconds, of the time window associated with the currently generated password
totp.password.hash.algorithm          | HmacSha1      | The hash algorithm used to generate the password
totp.password.secret.length           | 16            | A secret of this length is generated/used
totp.verify.additional.windows.count  | 0             | Extends the password verification to this many adjacent time windows

The default values are set so that the module generates the same OTP as Google Authenticator, but of course you can deviate from this.

The main methods
  • TOTPVerifier.verify

This allows us to validate the received OTP, extending the validity check to additional time windows. All the data needed for the verification can be passed as parameters, but you can also use the overloaded methods, where the configured/default values are used.

  • TOTPGenerator.generateBase32Secret

This generates a (shared) secret for the user, by default 16 bytes long. The desired length can be passed as a parameter.

  • TOTPGenerator.generatePassword

If we need an OTP, we can use the generatePassword method for this, also specifying all the parameters needed to generate it, or based on the configured/default values.
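As an illustration of the algorithm the module implements, here is a minimal, self-contained RFC 6238 sketch (HMAC-SHA1 variant). This is not the coffee implementation, just the standard algorithm:

```java
import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpSketch {

    // Minimal RFC 6238 TOTP (HMAC-SHA1): derive the time-window counter,
    // HMAC it with the shared secret, then apply RFC 4226 dynamic truncation.
    public static String generate(byte[] secret, long epochMillis, long timeStepMillis, int digits) {
        try {
            long counter = epochMillis / timeStepMillis;
            byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            byte[] hash = mac.doFinal(msg);
            // dynamic truncation: low nibble of the last byte selects the offset
            int offset = hash[hash.length - 1] & 0x0f;
            int binary = ((hash[offset] & 0x7f) << 24)
                    | ((hash[offset + 1] & 0xff) << 16)
                    | ((hash[offset + 2] & 0xff) << 8)
                    | (hash[offset + 3] & 0xff);
            int otp = binary % (int) Math.pow(10, digits);
            return String.format("%0" + digits + "d", otp);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

With the defaults above (30000 ms time step, HmacSha1) this produces the same codes as Google Authenticator for a given secret; the call below reproduces the RFC 6238 SHA-1 test vector for T = 59 s.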

Usage:
@Inject
private TOTPVerifier totpVerifier;
@Inject
private TOTPGenerator totpGenerator;

//secretKey generation, you can optionally specify the length of the secret in a parameter, but 16 is usually sufficient
String secretKeyBase32Encoded1 = totpGenerator.generateBase32Secret();
String secretKeyBase32Encoded2 = totpGenerator.generateBase32Secret(32);

//check the received password
//throws TechnicalException with error code CoffeeFaultType.INVALID_ONE_TIME_PASSWORD in case of wrong password
totpVerifier.verify(secretKey, clientOtp, currentUTCTimestamp, codeDigits, hashAlgorithm);
//if we use default configuration, we can call methods with less parameters
totpVerifier.verify(secret, clientOtp);
totpVerifier.verify(secret, clientOtp, currentUTCTimestamp);


//totp generation
//output of the following methods is the generated password
String otp1 = totpGenerator.generatePassword(secret, currentUTCTimestamp, digits, TOtpAlgorithm);
//if we use default configuration, we can call methods with less parameters
String otp2 = totpGenerator.generatePassword(secret);
String otp3 = totpGenerator.generatePassword(secret, currentUTCTimestamp);
It is advisable to persist the last validated and accepted OTP at the project level, so that it cannot be reentered.
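The adjacent-window check controlled by totp.verify.additional.windows.count can be sketched as follows (illustrative only, not the coffee implementation): verification computes the current time-window counter and also tries its neighbours.

```java
import java.util.ArrayList;
import java.util.List;

public class TotpWindowSketch {

    // Returns the time-window counters checked during verification: the
    // current window plus `additionalWindows` neighbours on both sides.
    public static List<Long> countersToCheck(long epochMillis, long timeStepMillis, int additionalWindows) {
        long current = epochMillis / timeStepMillis;
        List<Long> counters = new ArrayList<>();
        for (long c = current - additionalWindows; c <= current + additionalWindows; c++) {
            counters.add(c);
        }
        return counters;
    }
}
```

With the default value 0, only the current window is checked; a value of 1 also accepts the previous and next window, which tolerates small clock drift between client and server.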

Coffee provides a default TOTPGenerator (DefaultTOTPGeneratorImpl) and TOTPVerifier (DefaultTOTPVerifierImpl) implementation, which can be overridden in the project using CDI if required.

4.16. coffee-module-configdoc

The purpose of this module is to generate documentation from classes containing configuration keys.

4.16.1. Usage

To use it, the following dependency must be added to pom.xml:

<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-configdoc</artifactId>
</dependency>
For static configuration keys

Next, the classes containing the configuration keys must be annotated with the @ConfigDoc annotation.

By default, the generated asciidoc is then included in the .jar file at compile time as META-INF/config_keys.adoc. The keys are displayed in separate tables, grouped by the prefix before the first dot character.
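The prefix-based grouping can be sketched as follows (illustrative only, not the annotation processor's code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PrefixGroupingSketch {

    // Groups config keys by the prefix before the first dot - the way the
    // generator splits keys into separate tables.
    public static Map<String, List<String>> groupByPrefix(List<String> keys) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (String key : keys) {
            int dot = key.indexOf('.');
            String prefix = dot >= 0 ? key.substring(0, dot) : key;
            groups.computeIfAbsent(prefix, p -> new ArrayList<>()).add(key);
        }
        return groups;
    }
}
```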

Example
@ConfigDoc (1)
public interface ConfigDocExample {

    /**
     * test prefix
     */
    @ConfigDoc(exclude = true) (2)
    String PREFIX = "test.";

    /**
     * test2
     */
    String test2 = "test2.xxx";

    /**
     * Lorem ipsum dolor sit amet, consectetur adipisicing elit. Illo, placeat!
     */
    String foo = PREFIX + "foo";

    /**
     * Lorem ipsum dolor sit amet, consectetur adipisicing elit. Iusto, sapiente?
     */
    @ConfigDoc(description = "Override...") (3)
    String bar = PREFIX + "bar";

    /**
     * Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ipsam, similique?
     * @since 3.14159 (4)
     */
    @ConfigDoc(defaultValue = "5000") (5)
    String baz = PREFIX + "baz";

    /**
     * Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ipsam, similique?
     *
     * @since 3.14159
     */
    @ConfigDoc(defaultValue = "999", isStartupParam = true, isRuntimeOverridable = true) (6)
    String features = PREFIX + "features";

    /**
     * Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ipsam, similique?
     *
     * @since 3.14159
     */
    @ConfigDoc(defaultValue = "1234", title = "Title Test") (7)
    String titleTest = PREFIX + "title";
}
1 Generation is activated on a class with the annotation @ConfigDoc
2 The exclude field can be used to exclude fields from the generation
3 By default, the javadoc description is included in the generated file, which can be overwritten with the description field of the annotation
4 The since column of the generated table can be retrieved from the @since javadoc tag
5 The default value for the configuration can be specified
6 The isStartupParam marks whether the parameter is a startup parameter. The isRuntimeOverridable marks whether the parameter can be overridden at runtime. Both parameters are shown in the `Features` column, represented as emojis:
  • For isStartupParam true the emoji is: 🚀

  • For isRuntimeOverridable true the emoji is: ⏳

7 With the title parameter we can overwrite the default title of the generated tables (optional).
coffee module configdoc example1
Figure 1. The result of the above example code
For dynamic configuration keys

For configs with dynamic keys (e.g. redis, mongo db), the keys must be included in a MessageFormat format, then the class containing the keys must be annotated with the @DynamicConfigTemplate annotation, and the class or its variables with the already known @ConfigDoc annotation.

From the classes annotated with @DynamicConfigTemplate, a template adoc corresponding to the @ConfigDoc annotation will be created, in the META-INF/config-templates folder, under the name fully-qualified-class-name.adoc.

After that, the qualifier and/or injection point for the config must be annotated in the @DynamicConfigDocs annotation, where templateClass is the class containing the keys.

When processing the @DynamicConfigDocs annotation, both the injection point and the qualifier are read, with preference given to the injection point.

By default, the compiled .jar file then includes the generated asciidoc under META-INF/.

Template class
@ConfigDoc
@DynamicConfigTemplate (1)
public interface DynamicConfigTemplateExample {

    /**
     * test prefix
     */
    @ConfigDoc(exclude = true) (2)
    String PREFIX = "test.";

    /**
     * Lorem ipsum dolor sit amet, consectetur adipisicing elit. Illo, placeat!
     */
    String foo = PREFIX + "{0}.foo"; (3)
}
1 Template generation is activated on a class with the annotation @ConfigDoc and @DynamicConfigTemplate
2 On the fields, @ConfigDoc can be used to generate the template
3 Part of the key variable with MessageFormat placeholders
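The placeholder substitution follows java.text.MessageFormat semantics: the dynamic part (e.g. the qualifier's configKey) is inserted into the {0} placeholder. For illustration:

```java
import java.text.MessageFormat;

public class DynamicKeySketch {

    // Resolves a templated config key by substituting the dynamic part,
    // e.g. the configKey of the qualifier, into the MessageFormat placeholder.
    public static String resolve(String template, String configKey) {
        return MessageFormat.format(template, configKey);
    }
}
```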
qualifier
@DynamicConfigDocs( (1)
        template = DynamicConfigTemplateExample.class, (2)
        title = "Dynamic config {0} config keys", (3)
        description = "Dyn configuration keys" (4)
)
public @interface DynamicConfigurationQualifierExample {

    /**
     * Config key of the desired dynamic configuration
     *
     * @return config key
     */
    String configKey();

}
1 @DynamicConfigDocs annotation containing default values for qualifier
2 The template to use for dynamic config
3 Default title for the config (may contain placeholders)
4 Default description of the config (may contain placeholders)
Injection point
public class DynamicConfigInjectionPointExample {

    @Inject
    @DynamicConfigDocs(templateVariables = "abc") (1)
    @DynamicConfigurationQualifierExample(configKey = "abc")
    private Object injectedConfig;

    @Inject
    @DynamicConfigDocs(templateVariables = "xyz", title = "Title override for config key {0}") (2)
    @DynamicConfigurationQualifierExample(configKey = "xyz")
    private Object otherConfig;
}
1 The config key to insert into the template in the qualifier is abc
2 Second config with a different key (xyz) and an overridden title
coffee module configdoc dynamic example1
Figure 2. result of the above example code

4.16.2. Configuration

Since the generation uses an annotation processor, it can be configured at compile time with -A arguments. For maven, these can be specified via the maven-compiler-plugin:

example pom.xml
<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <compilerArgs>
                    <arg>-Acoffee.configDoc.outputDir=${project.basedir}/../docs/</arg> (1)
                    <arg>-Acoffee.configDoc.outputFileName=${project.name}_config.adoc</arg> (2)
                    <arg>-Acoffee.configDoc.outputToClassPath=false</arg> (3)
                    <arg>-Acoffee.configDoc.dynamicOutputFileName=dynamic_${project.name}_config.adoc</arg> (4)
                    <arg>-Acoffee.configDoc.columns=key,since,description</arg> (5)
                </compilerArgs>
            </configuration>
        </plugin>
    </plugins>
</build>
1 The folder where the generated file will be placed. Default: META-INF/
2 Name of the generated file. Default: config_keys.adoc
3 Whether the generated file should be put on the classpath, i.e. whether we want it to be included in the generated jar file. Default: true
4 Name of the generated file for dynamic configurations. Default: dynamic_config_keys.adoc
5 The columns displayed in the generated table in the order specified. Default: key, source, description, default_value, since (all columns)

5. Coffee Quarkus extensions

5.1. General

In the case of Quarkus, several CDI elements need to be handled differently.

The most important article where it is summarized what is involved is on the quarkus page: Quarkus.io - WRITING YOUR OWN EXTENSION. It describes the philosophy behind Quarkus extensions.

Quarkus extension building basics help: Quarkus.io - BUILDING MY FIRST EXTENSION

List of extensions built specifically for Quarkus: Link

5.2. coffee-module-mp-restclient-extension

The purpose of this module is that if coffee-module-mp-rest-client is used in quarkus, it replaces the elements in it that are not supported by quarkus.

Quarkus does not support the portable CDI extension that gathers the FaultType enum classes in the module, so this needs to be resolved in the Quarkus extension.

Currently Quarkus version 3.2.x is used in the extensions.

To use the module, do not add coffee-module-mp-rest-client to the dependencies; use the extension, which contains the module.

pom.xml
    <dependency>
        <groupId>hu.icellmobilsoft.coffee</groupId>
        <artifactId>coffee-module-mp-restclient-extension</artifactId>
        <version>${version.hu.icellmobilsoft.coffee}</version>
        <type>pom</type>
        <scope>import</scope>
    </dependency>

Migration descriptions

Older migration descriptions are available here

1. v2.0.0

The major version jump was made from coff:ee v1.13.x; later features may differ between the v1 and v2 versions.

coff:ee v1.13.x → v2.0.0 migration description, new features, changes

1.1. Changes

1.1.1. General

This version implements Jakarta EE 10, which entails the javax.* → jakarta.* import change throughout the project.

Almost all dependencies have been updated, so listing them one by one would be redundant. They are mostly cosmetic version changes or EE10 implementations.

The content of the beans.xml files has been updated:

beans.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="https://jakarta.ee/xml/ns/jakartaee"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="
      https://jakarta.ee/xml/ns/jakartaee
      https://jakarta.ee/xml/ns/jakartaee/beans_4_0.xsd">
</beans>

The jandex-maven-plugin has been included on all modules. The use of classLoader has been consolidated across several modules, using Thread.currentThread().getContextClassLoader() everywhere except in the coffee-deltaspike module!

BOM version upgrades:

The major changes were:

  • org.jboss.resteasy.resteasy-jaxrs 3.6.2.Final → org.jboss.resteasy.resteasy-core 6.2.1.Final - resteasy went through a dependency breakdown; this should be noted on projects

  • org.hibernate.hibernate-core 5.4.24.Final → 6.1.5.Final

  • org.apache.activemq.activemq-client 5.15.13 → org.apache.activemq.artemis-jakarta-client 2.27.0 - this is a big jump in drivers and a lot has changed; be very careful when implementing the change on projects.

1.1.2. coffee-dto

  • The jakarta conversion requires modification of the header of the /coffee-dto-gen/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/bindings.xjb file:

<jaxb:bindings version="3.0"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:jaxb="https://jakarta.ee/xml/ns/jaxb"
    xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
    xmlns:annox="http://annox.dev.java.net"
    jaxb:extensionBindingPrefixes="xjc annox">
...
</jaxb:bindings>
IMPORTANT

In the JAXB 4.0 specification, the version of the jaxb XML schema remains 3.0! Jakarta XML Binding 4.0

For XSD → DTO generation, the project uses a forked jaxb4 maven plugin: phax/maven-jaxb2-plugin#v016. The plugin is a fork of the previously used jaxb2 plugin; a dependency swap to com.helger.maven:jaxb40-maven-plugin:0.16.0 is sufficient.

WARNING

No jaxb3 or jaxb4 version of the org.jvnet.jaxb2_commons:jaxb2-basics-annotate plugin is currently available. In coffee, the earlier one works with the 4.0 plugin, but it may cause problems on projects or later.

1.1.3. coffee-deltaspike

The coffee-deltaspike subproject was created to replace the functionality lost by dropping the deltaspike dependency, for which there is nothing else to replace it with. This could be a permanent solution; time will tell.

coffee-deltaspike-message

Contains the classes responsible for internationalized messages, corresponding 1:1 - at class name level - to the solution used so far. It is a "lite" copy of the original (https://deltaspike.apache.org/documentation/core.html#Messagesandi18n), without @MessageBundle and @MessageTemplate. The MessageContext and Locale resolvers are exactly as in the original. The DefaultMessageResolver and DefaultMessageContext had a concurrency issue when used from several threads, which has been fixed in DefaultMessageResolver.

coffee-deltaspike-data

Replaces the original deltaspike-data-module functionality, except for a few things:

  • Deactivating Repositories

  • Transactions

  • Multiple entityManager

  • Auditing

Changes from the original deltaspike code:

  • EntityRepositoryHandler got a @Dependent annotation; it is not yet known why it was not included originally. It was originally on CriteriaSupportHandler for a similar purpose; without it, DelegateQueryBuilder.lookup (line 111) could not find the correct implementation class for the EntityRepository interface.

  • The asm dependency has been removed, instead repository calls are made via dynamic proxies.

  • New extension hu.icellmobilsoft.coffee.deltaspike.data.extension.RepositoryExtension which handles repository interfaces and their associated proxies. The extension’s job is to channel the resulting proxy calls to the central QueryHandler class.

  • QueryHandler interface lookup has been partially changed due to the new proxies. This is necessary in order to be able to use the processed repository metadata, so it is not necessary to rewrite the deltaspike core logic.

Migration

On implementing projects, any deltaspike message and data dependency should be replaced by the coffee-deltaspike-message and coffee-deltaspike-data dependency, respectively. The org.apache.deltaspike.jpa.api.transaction.Transactional annotation is replaced by hu.icellmobilsoft.coffee.jpa.annotation.Transactional and must be replaced everywhere.

1.1.4. coffee-rest

Project stage management

After dropping deltaspike core, org.apache.deltaspike.core.api.projectstage.ProjectStage was removed and replaced by hu.icellmobilsoft.coffee.rest.projectstage.ProjectStage, which searches all configs for the correct keys.

Migration

Replace the org.apache.deltaspike.core.api.projectstage.ProjectStage class with hu.icellmobilsoft.coffee.rest.projectstage.ProjectStage.

1.1.5. coffee-model-base

The deltaspike data dependency has been removed. The former deltaspike data CurrentUser has been replaced by the hu.icellmobilsoft.coffee.model.base.annotation.CurrentUser annotation. The AuditProvider and TimestampsProvider classes no longer implement the former deltaspike data PrePersistAuditListener and PreUpdateAuditListener interfaces; instead they provide methods annotated with the JPA PrePersist and PreUpdate annotations. The deltaspike AuditEntityListener has been removed from AbstractEntity, and the AbstractAuditEntity classes now use the following annotation: @EntityListeners({ TimestampsProvider.class, AuditProvider.class }).

Migration

Change the annotation of deltaspike data org.apache.deltaspike.data.api.audit.CurrentUser to hu.icellmobilsoft.coffee.model.base.annotation.CurrentUser.

1.1.6. coffee-jpa

  • The deltaspike-jpa-module has been discarded, it is no longer needed.

  • BatchService has been updated with the new features of Hibernate 6, including full type conversion.
    BatchService’s type handling itself has been rethought and handles the more problematic types separately. For more information see BatchService.

Migration
  • Since Hibernate 6 has rethought type handling and coffee has done the same for BatchService, special attention must be paid on projects to ensure that all types in the entities work as expected. If any methods are overridden, first check whether the code works without the overrides. This is important because the type changes in Hibernate 6 itself and the rethought BatchService type handling brought many new features and improved type handling. If overrides are still needed for whatever reason, they will probably need to be updated.

1.1.7. coffee-module-artemis

Jakarta EE 10 and the Jakarta Messaging 3.1 changes in it have changed the driver a lot: ActiveMQ Artemis embraces Jakarta EE.

You should test the JmsHandler.createTextMessage and JmsUtil.newJmsException functions, which were specifically affected by the change; the original concept for delayed messages has also changed.

1.1.8. coffee-module-notification

Unfortunately there is no jakarta-compatible release of the Apache commons-email dependency yet, so the coffee-module-notification module has been removed from the coffee modules. The following track this: issue EMAIL-203 and the commons-email GitHub pull request.

Migration

coffee-module-notification module has been removed.

1.1.9. coffee-module-mp-opentracing

Module has been optimized, so some classes (e.g. OpenTraceExtension) have become redundant. The @hu.icellmobilsoft.coffee.cdi.trace.annotation.Traced annotation replaces all functions, which can still be used to trace the individual modules of coffee.

Migration

The former @Traceable annotation should be replaced by @hu.icellmobilsoft.coffee.cdi.trace.annotation.Traced annotation.

1.1.10. junit tests

Parameterized JUnit tests using the @ParameterizedTest annotation (e.g. hu.icellmobilsoft.coffee.rest.projectstage.ProjectStageProducerTest) must also be annotated with @ExplicitParamInjection. Without this, CDI-managed parameter injection will not work.

1.1.11. coffee-module-csv

During CSV generation in CsvUtil, the line separator was replaced: ICSVWriter.DEFAULT_LINE_END (\n) → System.lineSeparator(). Thus, an operating-system-dependent line separator is used.

Migration

The changes are backwards compatible and do not require any migration work.

1.1.12. coffee-se-logging

In JbossMDCAdapter, there was an error in logging the parameter, which has been fixed.

Migration

The changes are backwards compatible and do not require any migration work.

1.2. coffee-module-etcd

  • The CONNECT_TIMEOUT_MILLIS parameter has been introduced in hu.icellmobilsoft.coffee.module.etcd.util.EtcdClientBuilderUtil, this prevents ramp-up timeout errors caused by a mismatch between the query and the timeout parameter to the etcd server.

Migration

The changes are backwards compatible and do not require any migration work.

2. v2.0.0 → v2.1.0

coff:ee v2.0.0 → v2.1.0 migration description, news, changes

2.1. What’s new

gRPC support has been introduced. As a result, a new subproject collector named coffee-grpc was created.

2.1.1. coffee-grpc

The following feature supports have been added to the system:

  • coffee-grpc-api: General gRPC handling for the Coff:ee API, including annotations and versioning.

  • coffee-grpc-base: Collector of protobuf and gRPC stubs for general usage.

  • coffee-grpc-protoc: Support for generating classes from proto files.

  • coffee-dto/coffee-dto-xsd2proto: Generated proto files generated from the coffee-dto-xsd XSD files.

  • coffee-dto/coffee-dto-stub-gen: Generated classes from Coff:ee proto files.

  • coffee-grpc-server-extension: Support for CDI gRPC server implementation.

  • coffee-grpc-client-extension: Support for CDI gRPC client implementation.

  • coffee-grpc-traces-api: Coffee tracing API (annotations…)

  • coffee-grpc-opentracing-impl: gRPC microprofile-opentracing implementation

2.2. Changes

2.2.1. coffee-module-etcd

  • Bump io.etcd:jetcd-core 0.6.1 → 0.7.5

  • In case of multiple classloaders, the DefaultEtcdConfigImpl class was unable to find the values defined in the microprofile-config.properties file.

Migration

The changes are backwards compatible and do not require any migration.

2.2.2. coffee-rest

  • An OptimisticLockException has been introduced, with the default error code CoffeeFaultType.OPTIMISTIC_LOCK_EXCEPTION.

  • The error handling in the DefaultBaseExceptionMapper class has been redesigned:

  • Business error: for BusinessException, status code 422 is returned instead of the previous 500.

  • Technical error: for OptimisticLockException, status code 409 is returned instead of the previous 500.

  • JsonMessageBodyReaderBase uses the charset attribute of the Content-Type HTTP header when deserializing the JSON request. Proper header usage: Content-Type: application/json; charset=utf-8. If charset is not specified, deserialization defaults to UTF-8!
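The charset rule above can be sketched in plain Java. The helper name charsetOf and the parsing below are illustrative only, not the actual JsonMessageBodyReaderBase code:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class ContentTypeCharset {

    // Extract the charset attribute from a Content-Type header value,
    // falling back to UTF-8 when it is absent (the documented default).
    static Charset charsetOf(String contentType) {
        if (contentType != null) {
            for (String part : contentType.split(";")) {
                String p = part.trim();
                if (p.regionMatches(true, 0, "charset=", 0, 8)) {
                    return Charset.forName(p.substring(8).trim());
                }
            }
        }
        return StandardCharsets.UTF_8;
    }

    public static void main(String[] args) {
        System.out.println(charsetOf("application/json; charset=iso-8859-1")); // ISO-8859-1
        System.out.println(charsetOf("application/json"));                     // UTF-8
    }
}
```

An unknown charset name makes Charset.forName throw UnsupportedCharsetException, which a real reader would map to a validation error.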

Migration
  • To keep the exception status code handling as before, it is necessary to create a separate ExceptionMapper on projects.

2.2.3. coffee-module-mp-restclient

  • The DefaultBaseExceptionResponseExceptionMapper sets the HTTP status code in the RestClientResponseException.

Migration

The changes are backwards compatible and do not require any migration.

2.2.4. coffee-tool

  • NPE fix in AnnotationUtil.getAnnotation(Class<?> clazz, Class<A> annotationClass) method

Migration

The changes are backwards compatible and do not require any migration.

2.2.5. coffee-jpa

  • NoSuchMethodException fix in JPAUtil.toNativeSQL(NoEx) which has been caused by the hibernate 6.x upgrade.

  • Possible integer-overflow fix in PagingUtil.getPagingResult* methods. XSD validation and some internal checks have been added.
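The overflow risk fixed above can be illustrated with a minimal sketch; the method names below are illustrative, not PagingUtil's real API. Multiplying page and page-size as plain ints can silently wrap around, while Math.multiplyExact fails fast:

```java
public class PagingMath {

    // Hypothetical sketch of overflow-safe paging arithmetic. Throws
    // ArithmeticException on overflow, which a real implementation could
    // translate into a BaseException.
    static int firstResult(int page, int rowsPerPage) {
        return Math.multiplyExact(page - 1, rowsPerPage); // fails fast instead of wrapping
    }

    static long totalPages(long totalRows, int rowsPerPage) {
        return (totalRows + rowsPerPage - 1) / rowsPerPage; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(firstResult(3, 50));  // 100
        System.out.println(totalPages(101, 50)); // 3
        try {
            firstResult(Integer.MAX_VALUE, 2);   // would overflow int
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```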

Migration

PagingUtil methods may now throw BaseException, which may need to be handled in the calling application (though this is probably not necessary).

The rest of the changes are backwards compatible and do not require any migration.

2.2.6. BatchService

  • Now even the native insert/update methods (batchMergeNative, batchInsertNative, batchUpdateNative) take the insertable and updatable flags into account.

Migration

The changes are backwards compatible and do not require any migration.

2.2.7. coffee-cdi

  • The hu.icellmobilsoft.coffee.cdi.trace.constants.Tags has received new values for passing relational database trace data.

  • New package for modularizing tracing operations, hu.icellmobilsoft.coffee.cdi.trace.spi

  • IOpenTraceHandler: enables the integration of a dynamic tracing implementation.

  • OpenTraceHandlerProducer: provides the activated tracing module and gives default behavior if no tracing module is plugged in.

Migration

The changes are backwards compatible and do not require any migration work.

2.2.8. coffee-deltaspike-data

  • The repository layer received dynamic trace handling, provided that the tracing module is active.

Migration

The changes are backwards compatible and do not require any migration work.

2.2.9. coffee-module-mp-opentracing

  • OpenTraceHandler: facilitates the placement of modules into the trace flow where reliance on the existing OpenTraceInterceptor is not possible.

Migration

The changes are backwards compatible and do not require any migration work.

3. v2.1.0 → v2.2.0

coff:ee v2.1.0 → v2.2.0 migration description, news, changes

3.1. What’s new

3.1.1. coffee-tool

  • JsonUtil is able to deserialize generic types as well.

Migration

The changes are backwards compatible and do not require any migration.

3.1.2. coffee-jpa

  • microprofile-health support

Migration

The changes are backwards compatible and do not require any migration.

3.1.3. coffee-module-redis

  • microprofile-health support

  • Jedis connection metrics support

Migration

The changes are backwards compatible and do not require any migration.

3.1.4. coffee-module-etcd

  • microprofile-health support

Migration

The changes are backwards compatible and do not require any migration.

3.1.5. coffee-module-base

  • The ClassFieldsAndMethodsCache caused a ConcurrentModificationException error under high load.

Migration

The changes are backwards compatible and do not require any migration.

4. v2.2.0 → v2.3.0

coff:ee v2.2.0 → v2.3.0 migration description, news, changes

  • Java 21 support - Coffee runs on Java 21 and is now supported with CI.

  • Bump parent pom 1.2.0 → 1.4.0 - maven plugin versions updated

  • The project assumes that beans.xml is set to the default bean-discovery-mode="annotated". Therefore, all @Vetoed annotations have been removed from previously managed classes.

  • The Javadoc was updated following the introduction of "default constructors", which were flagged by the new warnings introduced in Java 16 and Java 18.

4.1. coffee-model-base

  • The AbstractEntity.toString() function uses the type of the column instead of the value in case of java.sql.Blob and java.sql.Clob. Previously it used the value of the field in such cases, e.g. it read the stream when logging.

Migration

The changes are backwards compatible and do not require any migration.

4.2. coffee-tool

  • New ParamValidatorUtil helper class for uniform validation of parameters of public functions.

Migration

The changes are backwards compatible and do not require any migration.

4.3. coffee-module-etcd

  • The configuration parameters for establishing the ETCD connection used in EtcdClientBuilder have been extracted as microprofile-config parameters.

Migration

The changes are backwards compatible and do not require any migration.

5. v2.3.0 → v2.4.0

coff:ee v2.3.0 → v2.4.0 migration description, news, changes

5.1. coffee-rest

  • RequestResponseLogger memory tuning: Based on the hu.icellmobilsoft.coffee.rest.log.BaseRestLogger class, an optimized version was created: hu.icellmobilsoft.coffee.rest.log.optimized.BaseRestLogger. With its help, applications use less memory while logging request and response bodies. In addition, the hu.icellmobilsoft.coffee.rest.log.optimized.RequestResponseLogger class was created (temporarily with the @Named("optimized_RequestResponseLogger") annotation, so the old implementation is kept for now and will be removed later), based on hu.icellmobilsoft.coffee.rest.log.RequestResponseLogger. In it, the request and response entity log limits are determined as follows: if the request or response entity is application/octet-stream or multipart/form-data and the REST interface is not annotated with LogSpecifier, the log size is limited. Also, in this version, the BYTECODE_MAX_LOG constant has been renamed to ENTITY_MAX_LOG.

Migration
  • When switching to the optimized BaseRestLogger, it is advisable to switch to the ENTITY_MAX_LOG constant instead of BYTECODE_MAX_LOG, if it is used, as BYTECODE_MAX_LOG may be removed over time.
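The limiting rule described above can be sketched in a few lines; the ENTITY_MAX_LOG value and the helper below are illustrative, not the actual optimized RequestResponseLogger code:

```java
import java.nio.charset.StandardCharsets;

public class EntityLogTruncator {

    // Illustrative stand-in for the renamed constant; the real value differs.
    static final int ENTITY_MAX_LOG = 16;

    // Limit the logged entity only for binary-ish media types, mirroring the
    // documented behavior for application/octet-stream and multipart/form-data.
    static String truncateForLog(byte[] entity, String mediaType) {
        boolean binary = "application/octet-stream".equals(mediaType) || "multipart/form-data".equals(mediaType);
        if (binary && entity.length > ENTITY_MAX_LOG) {
            return new String(entity, 0, ENTITY_MAX_LOG, StandardCharsets.UTF_8) + "...[truncated]";
        }
        return new String(entity, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] big = "0123456789ABCDEFGHIJ".getBytes(StandardCharsets.UTF_8);
        System.out.println(truncateForLog(big, "application/octet-stream")); // truncated
        System.out.println(truncateForLog(big, "application/json"));         // full body
    }
}
```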

6. v2.4.0 → v2.5.0

coff:ee v2.4.0 → v2.5.0 migration description, news, changes

6.1. Global

  • jandex-maven-plugin - test classes were excluded from indexing

6.2. Metric independency

Development involving several modules, the goal of which is to allow metrics to function with several implementation types.

6.2.1. coffee-jpa & coffee-module-redis

  • The modules have been made independent of the metric implementation; you can freely choose between microprofile-metrics and micrometer.

  • Moved hu.icellmobilsoft.coffee.jpa.health.DatabaseHealthConstant.DEFAULT_DATASOURCE_NAME → hu.icellmobilsoft.coffee.cdi.config.IConfigKey.DATASOURCE_DEFAULT_NAME_VALUE

Migration

At the dependency level, you need to choose the metric implementation that the service should use. Otherwise, everything else is backward compatible.

<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-redis</artifactId>
</dependency>

<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-micrometer</artifactId> (1)
</dependency>
<!-- or -->
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-metrics</artifactId> (2)
</dependency>
1 Micrometer metric implementation
2 Microprofile-metrics metric implementation

6.2.2. coffee-grpc-metrics-impl

Renamed coffee-grpc-metrics-impl → coffee-grpc-metrics-mpmetrics.

Migration

Apart from the rename, the changes are backwards compatible and do not require any migration.

6.3. Other developments

6.3.1. coffee-tool

  • Added tool class for "AES/CBC/PKCS5PADDING" de/cipher (hu.icellmobilsoft.coffee.tool.utils.crypto.AesCbcCryptoUtil)

  • Added SHA3-512 message digest to hu.icellmobilsoft.coffee.tool.utils.string.EncodeUtil class, deprecating old Sha512(String str).

  • In the MavenURLHandler class, resource loading was fixed: resources are now obtained via the current thread's context class loader instead of the class itself.

  • The ResponseEntityCollectorOutputStream has been modified. Previously, it stored characters represented by multiple bytes (e.g., UTF-8 Hungarian accented characters) by casting them to char, which could result in incorrect text. From now on, we store them as bytes. It is the responsibility of the caller to use the appropriate character encoding when converting these bytes into text.
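The ResponseEntityCollectorOutputStream fix can be illustrated with a minimal stand-in class (not the actual source): bytes are stored unchanged and the caller chooses the encoding when turning them into text:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the described behavior; the real class has more logic.
public class ByteCollectingStream extends OutputStream {

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    @Override
    public void write(int b) {
        buffer.write(b); // store the raw byte, no casting to char
    }

    // The caller supplies the charset; multi-byte sequences stay intact.
    public String toText(Charset charset) {
        return new String(buffer.toByteArray(), charset);
    }

    public static void main(String[] args) throws IOException {
        ByteCollectingStream out = new ByteCollectingStream();
        out.write("árvíztűrő".getBytes(StandardCharsets.UTF_8)); // Hungarian accents are multi-byte in UTF-8
        System.out.println(out.toText(StandardCharsets.UTF_8));  // árvíztűrő
    }
}
```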

Migration

The changes are backwards compatible and do not require any migration.

6.3.2. coffee-module-redisstream

  • The Redis consumers received functionality to assist graceful shutdown (hu.icellmobilsoft.coffee.module.redisstream.bootstrap.ConsumerLifeCycleManager)

Migration

The changes are backwards compatible and do not require any migration.

6.3.3. coffee-rest

  • If no version is defined in the @ValidateXMLs annotation, it won't attempt to read the requestVersion from the request body.

  • The hu.icellmobilsoft.coffee.rest.validation.xml.annotation.ValidateXML and hu.icellmobilsoft.coffee.rest.validation.xml.annotation.ValidateXMLs annotations are moved to the coffee-cdi module.

Migration

As a result of the move, if these annotations are used in the consuming project, the imports must be updated:

hu.icellmobilsoft.coffee.rest.validation.xml.annotation.ValidateXML → hu.icellmobilsoft.coffee.cdi.annotation.xml.ValidateXML

hu.icellmobilsoft.coffee.rest.validation.xml.annotation.ValidateXMLs → hu.icellmobilsoft.coffee.cdi.annotation.xml.ValidateXMLs

6.3.4. coffee-module-etcd

  • Removed classes and methods annotated with @Deprecated(since = "1.3.0", forRemoval = true)

Migration
  • hu.icellmobilsoft.coffee.module.etcd.service.BaseEtcdService removed. Its full-fledged replacement is the hu.icellmobilsoft.coffee.module.etcd.service.EtcdService class, which works explicitly with String values.

  • Method getList() has been removed from hu.icellmobilsoft.coffee.module.etcd.service.ConfigEtcdService. Use the compatible getAll() method.

6.3.5. coffee-jpa

  • Added @Dependent annotation to hu.icellmobilsoft.coffee.jpa.sql.batch.BatchService

Migration

The changes are backwards compatible and do not require any migration.

6.3.6. coffee-module-configdoc

  • Nullpointer thrown during compile time when javadoc was missing before annotation @ConfigDoc.

Migration

The changes are backwards compatible and do not require any migration.

6.3.7. coffee-module-mp-restclient

  • FaultTypeParserExtension has been modified; it now looks for FaultType enums annotated with @hu.icellmobilsoft.coffee.cdi.annotation.FaultTypeCode.

  • The IFaultType interface is marked as deprecated (use @FaultTypeCode instead).

Migration

If you have a FaultType enum in your project, annotate it with the @FaultTypeCode annotation and remove the IFaultType interface. A beans.xml must exist in the META-INF directory!

6.4. Trace detachment

Development involving multiple modules with the aim of enabling tracing to work with various implementations.

6.4.1. coffee-cdi

  • Renaming hu.icellmobilsoft.coffee.cdi.trace.constants.Tags to hu.icellmobilsoft.coffee.cdi.trace.constants.SpanAttribute

  • Received basic OpenTelemetry standard constants.

  • The @Traced annotation provides the default INTERNAL span kind if no kind is specified when using the annotation.

  • Renaming hu.icellmobilsoft.coffee.cdi.trace.spi.IOpenTraceHandler to hu.icellmobilsoft.coffee.cdi.trace.spi.ITraceHandler

Migration

At the dependency level, you need to choose which tracing implementation the service should use; otherwise, everything else is backward compatible.

<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-opentracing</artifactId> (1)
</dependency>
<!-- or -->
<dependency>
    <groupId>hu.icellmobilsoft.coffee</groupId>
    <artifactId>coffee-module-mp-telemetry</artifactId> (2)
</dependency>
1 microprofile-opentracing implementation
2 microprofile-telemetry implementation

If the values of hu.icellmobilsoft.coffee.cdi.trace.constants.Tags were used, you can find the constants in the hu.icellmobilsoft.coffee.cdi.trace.constants.SpanAttribute class instead.

You should inject ITraceHandler instead of IOpenTraceHandler.

6.4.2. coffee-grpc-opentracing-impl

  • Rename coffee-grpc-opentracing-impl to coffee-grpc-traces-mpopentracing

Migration
  • Use the dependency coffee-grpc-traces-mpopentracing instead of coffee-grpc-opentracing-impl

6.4.3. coffee-grpc-traces-api

  • ITracesInterceptor has been removed to simplify interceptor lookup.

Migration
  • When creating a custom interceptor, use the built-in io.grpc.ServerInterceptor instead of ITracesInterceptor.

6.4.4. coffee-module-mongodb

  • Removed classes and method annotated with @Deprecated(forRemoval = true, since = "1.1.0")

  • Removed unimplemented and unused method: hu.icellmobilsoft.coffee.module.mongodb.service.MongoService#getMongoCollectionName()

Migration
  • Instead of hu.icellmobilsoft.coffee.module.mongodb.annotation.MongoConfiguration use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoClientConfiguration

  • Instead of hu.icellmobilsoft.coffee.module.mongodb.config.MongoDbConfig use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoConfigHelper

  • Instead of hu.icellmobilsoft.coffee.module.mongodb.config.MongoDbConfigImpl use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoClientConfiguration

  • Instead of hu.icellmobilsoft.coffee.module.mongodb.handler.MongoDbHandler use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoDbClient

  • Instead of hu.icellmobilsoft.coffee.module.mongodb.producer.MongoFactory use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoDbClientFactory

  • Instead of hu.icellmobilsoft.coffee.module.mongodb.service.MongoServiceImpl use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoDbClient

  • Instead of hu.icellmobilsoft.coffee.module.mongodb.service.MongoService#getMongoCollection() use: hu.icellmobilsoft.coffee.module.mongodb.extension.MongoDbClient#initRepositoryCollection(java.lang.String)

7. v2.5.0 → v2.6.0

coff:ee v2.5.0 → v2.6.0 migration description, news, changes

7.1. Global

7.1.1. Coffee BOM

💥 BREAKING CHANGE 💥

Previously, coffee-bom was used to import coffee dependencies on projects; this has been changed to coffee-bom-project.

You then need to import the project elements as follows:

<dependencyManagement>
    <dependency>
        <groupId>hu.icellmobilsoft.coffee</groupId>
        <artifactId>coffee-bom-project</artifactId> (1)
        <version>${version.hu.icellmobilsoft.coffee}</version>
        <type>pom</type>
        <scope>import</scope>
    </dependency>
</dependencyManagement>

This also means that the transitive dependencies that were previously put on projects from coffee-bom, are now removed.
Because of this, these dependencies will be reported as not found by maven, so they will need to be defined in the project using coffee.

coffee-bom has been changed to coffee-bom-project.

  • Jandex index config fix.

7.1.2. Hibernate

Migration
  • It is necessary to read the Hibernate 6.2.x migration documentation and update the affected parts of the project.

  • Timezone and offset storage now defaults to DEFAULT, therefore time zones are also saved. Since we haven't saved the time zone so far and BatchService is also designed not to save it, this causes problems. To ensure that no time zones are saved, it is necessary to include the following property in persistence.xml: <property name="hibernate.timezone.default_storage" value="NORMALIZE"/>.

  • The Byte[] and Character[] types mapping changed. The Byte[] types were handled until now, but due to hibernate changes, one of the following is required:

    • Replace Byte[] with byte[]

    • For the old behavior, it is necessary to enable the legacy array processing in the persistence.xml with the following property: <property name="hibernate.type.wrapper_array_handling" value="legacy"/>
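Putting the two Hibernate properties above together, a persistence.xml that keeps the pre-6.x behavior could look like this (the persistence-unit name is illustrative):

```xml
<persistence xmlns="https://jakarta.ee/xml/ns/persistence" version="3.0">
    <persistence-unit name="defaultPU"> <!-- illustrative unit name -->
        <properties>
            <!-- do not store time zones, matching the pre-Hibernate-6 behavior -->
            <property name="hibernate.timezone.default_storage" value="NORMALIZE"/>
            <!-- only needed if Byte[]/Character[] entity fields are kept -->
            <property name="hibernate.type.wrapper_array_handling" value="legacy"/>
        </properties>
    </persistence-unit>
</persistence>
```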

7.2. Changes

7.2.1. coffee-jpa

BatchService
  • Based on the Hibernate 6.2.x changes, the Insert, Update and Delete SQL statements have been updated.

  • Since Hibernate 6.2.x, EnumType is deprecated because enum values are processed differently, so the EnumType handling in BatchService has been deleted.

Migration

The changes are backwards compatible and do not require any migration.

JpaUtil
  • Since Hibernate 6.2.x, the JdbcSelect class has been renamed to JdbcOperationQuerySelect and its getSql() method to getSqlString(), so JpaUtil has been updated with these changes.

Migration

The changes are backwards compatible and do not require any migration.

Transactional annotation

The @Stereotype annotation has been removed from the hu.icellmobilsoft.coffee.jpa.annotation.Transactional annotation, and ElementType.TYPE has been removed from its @Target.

Migration

The changes are backwards compatible and do not require any migration.

7.2.2. coffee-rest

  • Fixed the javax.annotation.processing.Processor service file (renamed back to javax.annotation.processing.Processor, because this file name has not changed to jakarta).

RequestVersionReader

RequestVersionReader has been rewritten. The stream mark() and reset() may have caused errors in some cases.

Migration

IXmlRequestVersionReader and IJsonRequestVersionReader have been removed; in case you used them, update the interface to IRequestVersionReader.

Jsonb Config

Added the definition of the Jsonb configuration.

Migration

The changes do not require migration work; backwards-compatible default values are set for correct usage.

7.2.3. coffee-module-mp-restclient

Jsonb Config

Added the definition of the Jsonb configuration.

Migration

The changes do not require migration work; backwards-compatible default values are set for correct usage.

7.2.4. Coffee Quarkus Extensions

A new module group called Coffee Quarkus extensions has been created, which adds elements needed for Quarkus to some of the coffee modules. The first element is coffee-module-mp-restclient-extension; the second is coffee-deltaspike-data-extension.

7.2.5. coffee-deltaspike-message

  • Changed the package of org.apache.deltaspike.core.util to org.apache.deltaspike.core.util.message, since two separate modules shared the same original package, which Quarkus is sensitive to.

Migration
  • org.apache.deltaspike.core.util.ClassUtils → org.apache.deltaspike.core.util.message.ClassUtils

  • org.apache.deltaspike.core.util.PropertyFileUtils → org.apache.deltaspike.core.util.message.PropertyFileUtils

coffee-module-csv

Added new methods for changing the CSV format, e.g. the CSV separator and the escape character.

Migration

The changes are backwards compatible and do not require any migration.

7.2.6. coffee-model-base

  • The AbstractEntity.toString() function uses the type of the property instead of the value in case of java.io.InputStream, java.io.OutputStream, java.io.Reader and java.io.Writer. Previously it used the value of the property in such cases, e.g. it read the stream when logging.
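The rule can be sketched as follows (illustrative helper, not the actual reflection-based AbstractEntity code): for stream-like properties, only the type name is printed, so logging never consumes the stream:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.Reader;
import java.io.Writer;
import java.io.OutputStream;

public class ToStringSketch {

    // Print the type name for stream-like values, the value itself otherwise.
    static String fieldValueForToString(Object value) {
        if (value instanceof InputStream || value instanceof OutputStream
                || value instanceof Reader || value instanceof Writer) {
            return value.getClass().getSimpleName(); // type only, stream stays untouched
        }
        return String.valueOf(value);
    }

    public static void main(String[] args) {
        System.out.println(fieldValueForToString(new ByteArrayInputStream(new byte[0]))); // ByteArrayInputStream
        System.out.println(fieldValueForToString("plain value"));                         // plain value
    }
}
```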

Migration

The changes are backwards compatible and do not require any migration.

7.2.7. coffee-rest

  • The XsdHelper provides the opportunity to delete the schema and JAXB context cache.

  • EmptyRequestVersionReader now has Dependent scope, so Quarkus bean discovery will now be able to find it.

Migration

The changes are backwards compatible and do not require any migration.

7.2.8. coffee-module-redisstream

ConsumerLifeCycleManager
  • If ConsumerLifeCycleManager.CONSUMER_COUNTER is less than one, the ConsumerLifeCycleManager.SEMAPHORE.acquire() call is skipped in the ConsumerLifeCycleManager.stop() method, so in cases where the container did not contain any consumer (e.g. in tests), the shutdown phase won't get stuck.
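The guard can be modeled in a few lines; this is an illustrative stand-in, the real ConsumerLifeCycleManager holds more state:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class LifeCycleGuard {

    static final AtomicInteger CONSUMER_COUNTER = new AtomicInteger();
    static final Semaphore SEMAPHORE = new Semaphore(0);

    // stop() only waits on the semaphore when at least one consumer was started.
    static String stop() {
        if (CONSUMER_COUNTER.get() < 1) {
            return "skipped"; // no consumer ever started (e.g. in tests): don't block
        }
        SEMAPHORE.acquireUninterruptibly(); // wait until the last consumer releases
        return "stopped";
    }

    public static void main(String[] args) {
        System.out.println(stop()); // counter is 0, so no deadlock on shutdown
    }
}
```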

Migration

The changes are backwards compatible and do not require any migration.

8. v2.6.0 → v2.7.0

coff:ee v2.6.0 → v2.7.0 migration description, news, changes

8.1. Global

8.1.1. coffee-se-api

A new module that defines a basic Coffee API such as enums, DTOs, exceptions, which can only have Java SE dependencies.

Contents:

  • hu.icellmobilsoft.coffee.se.api.exception.BaseException (based on hu.icellmobilsoft.coffee.dto.exception.BaseException)

  • hu.icellmobilsoft.coffee.se.api.exception.enums.Severity (based on hu.icellmobilsoft.coffee.dto.exception.enums.Severity)

  • hu.icellmobilsoft.coffee.se.api.exception.BusinessException (based on hu.icellmobilsoft.coffee.dto.exception.BusinessException)

  • hu.icellmobilsoft.coffee.se.api.exception.DtoConversionException (based on hu.icellmobilsoft.coffee.dto.exception.DtoConversionException)

8.1.2. coffee-se-function

A new module containing the functional interfaces used in Coffee; it can depend only on Java SE and on Coffee modules that themselves have only Java SE dependencies.

Contents: (based on hu.icellmobilsoft.coffee.tool.common.FunctionalInterfaces, but declares the new hu.icellmobilsoft.coffee.se.api.exception.BaseException)

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionConsumer

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionFunction

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionFunction2

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionFunction3

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionFunction4

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionFunction5

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionFunction6

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionRunner

  • hu.icellmobilsoft.coffee.se.function.BaseExceptionSupplier

8.1.3. coffee-tool

  • hu.icellmobilsoft.coffee.tool.common.FunctionalInterfaces has become deprecated. The wrapped functional interfaces extend the new interfaces declared in the coffee-se-function module.

Migration

The changes are backwards compatible and do not require any migration. However, if a new functional interface is implemented, then the new hu.icellmobilsoft.coffee.se.api.exception.BaseException must be handled in the given locations, e.g. exception mappers, type checks like e instanceof BaseException, and try-catch blocks.

8.1.4. coffee-dto-base

  • hu.icellmobilsoft.coffee.dto.exception.enums.Severity has become deprecated.

  • hu.icellmobilsoft.coffee.dto.exception.BaseException has become deprecated.

  • hu.icellmobilsoft.coffee.dto.exception.BusinessException has become deprecated.

  • hu.icellmobilsoft.coffee.dto.exception.DtoConversionException has become deprecated.

Migration
  • The hu.icellmobilsoft.coffee.se.api.exception.enums.Severity enum defined in the coffee-se-api module should be used instead of the old hu.icellmobilsoft.coffee.dto.exception.enums.Severity.

  • Deprecated Exceptions are replaced with hu.icellmobilsoft.coffee.se.api.exception.* package (hu.icellmobilsoft.coffee.dto.exception.BaseExceptionhu.icellmobilsoft.coffee.se.api.exception.BaseException).

    • The original getSeverity() function returns with the new hu.icellmobilsoft.coffee.se.api.exception.enums.Severity type. Use the getOldSeverity() function to get the original type.

8.1.5. coffee-cdi

💥 BREAKING CHANGE 💥

  • hu.icellmobilsoft.coffee.cdi.trace.spi.ITraceHandler changes:

    • The runWithTrace() function has been renamed to runWithTraceNoException(), which expects java.util.function.Supplier or java.lang.Runnable as an argument. Functions traced this way can only throw a RuntimeException (RTE).

    • The original runWithTrace() function's argument has been replaced with hu.icellmobilsoft.coffee.se.function.BaseExceptionSupplier or hu.icellmobilsoft.coffee.se.function.BaseExceptionRunner, which can throw hu.icellmobilsoft.coffee.se.api.exception.BaseException.

Migration
  • The original ITraceHandler.runWithTrace() calls must handle the new hu.icellmobilsoft.coffee.se.api.exception.BaseException.

  • If we trace a function that can only throw RTE, then we must use the runWithTraceNoException() function.
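The split between the two variants can be sketched with stand-in types. The real interfaces live in hu.icellmobilsoft.coffee.se.function and hu.icellmobilsoft.coffee.cdi.trace.spi; everything below is illustrative, and the span start/stop logic is omitted:

```java
public class TraceHandlerSketch {

    // Stand-in for hu.icellmobilsoft.coffee.se.api.exception.BaseException
    static class BaseException extends Exception {
        BaseException(String m) { super(m); }
    }

    // Stand-in for hu.icellmobilsoft.coffee.se.function.BaseExceptionSupplier
    @FunctionalInterface
    interface BaseExceptionSupplier<T> {
        T get() throws BaseException; // checked-exception variant
    }

    // runWithTrace: propagates the checked BaseException to the caller
    static <T> T runWithTrace(BaseExceptionSupplier<T> supplier) throws BaseException {
        return supplier.get();
    }

    // runWithTraceNoException: plain java.util.function.Supplier, RTE only
    static <T> T runWithTraceNoException(java.util.function.Supplier<T> supplier) {
        return supplier.get();
    }

    public static void main(String[] args) throws BaseException {
        System.out.println(runWithTrace(() -> "checked path"));
        System.out.println(runWithTraceNoException(() -> "runtime-only path"));
    }
}
```

The compiler now forces callers of the checked variant to handle BaseException, which is exactly the migration work described above.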

8.1.6. coffee-rest

  • The classes found in the hu.icellmobilsoft.coffee.rest.exception package use the new hu.icellmobilsoft.coffee.se.api.exception.BaseException. Such as DefaultBaseExceptionMapper, DefaultExceptionMessageTranslator, DefaultGeneralExceptionMapper and IExceptionMessageTranslator.

  • The hu.icellmobilsoft.coffee.rest.log.optimized.BaseRestLogger has been improved so that HTTP GET calls are now logged.

Migration
  • jakarta.ws.rs.ext.ExceptionMapper and hu.icellmobilsoft.coffee.rest.exception.IExceptionMessageTranslator implementations must use the new hu.icellmobilsoft.coffee.se.api.exception.BaseException.

8.1.7. coffee-grpc

  • New hu.icellmobilsoft.coffee.grpc.base.converter.ProtoConverter interface, which supports anyDto ←→ protoDto conversion

  • New util class to support proto date conversion (hu.icellmobilsoft.coffee.grpc.base.util.ProtoDateUtil)

  • Improved hu.icellmobilsoft.coffee.grpc.server.interceptor.ServerRequestInterceptor and hu.icellmobilsoft.coffee.grpc.server.interceptor.ServerResponseInterceptor logging interceptors, which can be parameterized with @LogSpecifiers and @LogSpecifier annotations.

  • The improved hu.icellmobilsoft.coffee.grpc.server.interceptor.ErrorHandlerInterceptor can now include additional error information in the gRPC response:

    • Business error code (FaultType)

    • Translated error code

    • Debug information (stacktrace)

  • Expanded coffee-grpc-client-extension module:

    • Bugfix in GrpcClientExtension to only produce virtual beans for clients of type AbstractBlockingStub.

    • GrpcClientConfig extended with the maxInboundMetadataSize parameter, which serves to set the size of incoming grpc headers in the possible response (for example, due to the size of debug information).

    • New GrpcClientResponseException, which handles the coffee gRPC server error response

    • New GrpcClientHeaderHelper, which handles sending the gRPC message headers

Migration

The changes are backwards compatible and do not require any migration.

8.1.8. coffee-module-redis

  • Jedis driver version bump 4.2.3 → 5.1.2

The new version supports redis from version 6.0!

Migration

The changes are backwards compatible and do not require any migration.

8.1.9. coffee-module

  • The @ConfigDoc annotation got two new optional parameters, isStartupParam and isRuntimeOverridable. The generated table has a new column named Features, in which each of the new parameters is represented as an emoji when its value is true (the default is false).

    • For isStartupParam true the emoji is: 🚀

    • For isRuntimeOverridable true the emoji is: ⏳

  • The @ConfigDoc annotation got a new optional parameter, title. It makes it possible to override the generated table names if you don't want to use the default.

Migration
  • The changes are backwards compatible and do not require any migration, although it is now possible to use these three new parameters in the @ConfigDoc annotation.

8.1.10. coffee-model

  • The TimestampsProvider got a new optional parameter, which can be set as an ENV variable, for manipulating the time zone:

    • COFFEE_MODEL_BASE_JAVA_TIME_TIMEZONE_ID

Migration
  • The changes are backwards compatible and do not require any migration.

8.1.11. coffee-deltaspike-data

  • Tracing has been added to org.apache.deltaspike.data.impl.builder.MethodQueryBuilder.

Migration

Changes are backwards compatible and do not require any migration.

9. HOWTO

9.1. XML XSD version dependent validation

Implementation of incoming XML validation depending on different XSD versions.

9.1.1. Version dependent validation outside Coffee in projects

Example of how to use validation: The annotated endpoint:

public interface ISampleService {

    @POST
    @Path("/customer/sample")
    @Consumes(value = { MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @Produces(value = { MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @LogSpecifier(maxResponseEntityLogSize = LogSpecifier.NO_LOG)
    SampleResponse postSampleRequest(
        @ValidateXMLs({
            @ValidateXML(version = @Version(include = @Range(from = "1.0", to = "1.9")), xsdPath = ""),
            @ValidateXML(version = @Version(include = @Range(from = "1.10")), xsdPath = "xsd_wsdl/hu/icellmobilsoft/sample/1.0/sample.xsd")
        }) SampleRequest sampleRequest) throws BaseException;
}

or for example:

public interface ISampleService {

    @POST
    @Path("/customer/sample")
    @Consumes(value = { MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @Produces(value = { MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @LogSpecifier(maxResponseEntityLogSize = LogSpecifier.NO_LOG)
    SampleResponse postSampleRequest(
        @ValidateXML(version = @Version(include = @Range(from = "1.10")), xsdPath = "xsd_wsdl/hu/icellmobilsoft/sample/1.0/sample.xsd") SampleRequest sampleRequest) throws BaseException;
}

or for example:

public interface ISampleService {

    @POST
    @Path("/customer/sample")
    @Consumes(value = { MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @Produces(value = { MediaType.TEXT_XML, MediaType.APPLICATION_XML })
    @LogSpecifier(maxResponseEntityLogSize = LogSpecifier.NO_LOG)
    SampleResponse postSampleRequest(
        @ValidateXML(xsdPath = "xsd_wsdl/hu/icellmobilsoft/sample/1.0/sample.xsd") SampleRequest sampleRequest) throws BaseException;
}
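
To make the @Range semantics concrete: versions are compared numerically per segment, so "1.10" sorts above "1.9". A rough illustration of this kind of range matching (a hypothetical helper for this document, not Coffee's implementation):

```java
public class VersionRangeSketch {

    // compare "major.minor" version strings segment by segment, numerically,
    // so that "1.10" is considered greater than "1.9"
    static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }

    // true if version falls inside [from, to]; a null bound means open-ended
    static boolean inRange(String version, String from, String to) {
        return (from == null || compare(version, from) >= 0)
                && (to == null || compare(version, to) <= 0);
    }

    public static void main(String[] args) {
        System.out.println(inRange("1.5", "1.0", "1.9"));  // true
        System.out.println(inRange("1.10", "1.0", "1.9")); // false
        System.out.println(inRange("1.10", "1.10", null)); // true
    }
}
```

With this logic, the annotations above route a request of version 1.5 to the empty (skipped) validation and a request of version 1.10 and above to sample.xsd.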

9.1.2. Also prepare the providers

We need one for XML:

@Provider
@Consumes({MediaType.APPLICATION_XML, MediaType.TEXT_XML})
@Priority(Priorities.ENTITY_CODER)
public class XMLRequestMessageBodyReader extends XmlMessageBodyReaderBase<BasicRequestType> {

}

Another one for JSON:

@Provider
@Consumes({ MediaType.APPLICATION_JSON })
@Priority(Priorities.ENTITY_CODER)
public class JsonRequestMessageBodyReader extends JsonMessageBodyReaderBase<BaseRequestType> {

}

The JSON XSD validation is done by converting the inputStream to a DTO using the JSON parser (this DTO is returned by the provider for further business logic processing), and then running an XML marshaller with XSD validation enabled on it. Any errors that occur during this process are handled at provider level; from then on the errors are treated exactly as in the XML validation.
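
The underlying XSD validation step can be sketched with the JDK's javax.xml.validation API. This is an illustrative, self-contained example (the schema content and class name are made up for the example); like Coffee's XsdValidationErrorCollector, it collects every violation instead of stopping at the first one:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class ValidateSketch {

    // a tiny inline schema restricting userName to at most 5 characters
    static final String XSD = """
            <?xml version="1.0"?>
            <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
                <xsd:element name="userName">
                    <xsd:simpleType>
                        <xsd:restriction base="xsd:string">
                            <xsd:maxLength value="5"/>
                        </xsd:restriction>
                    </xsd:simpleType>
                </xsd:element>
            </xsd:schema>""";

    // validate the xml against the schema, collecting every violation
    public static List<String> validate(String xml) throws Exception {
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new StreamSource(new StringReader(XSD)));
        Validator validator = schema.newValidator();
        List<String> errors = new ArrayList<>();
        validator.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) { errors.add(e.getMessage()); }
            public void error(SAXParseException e) { errors.add(e.getMessage()); }
            public void fatalError(SAXParseException e) throws SAXException { throw e; }
        });
        validator.validate(new StreamSource(new StringReader(xml)));
        return errors;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(validate("<userName>abc</userName>").isEmpty());                 // true: valid
        System.out.println(validate("<userName>definitely-too-long</userName>").isEmpty()); // false: maxLength violated
    }
}
```

The provider does the same in spirit for JSON input after the DTO-to-XML conversion step described above.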

9.1.3. If you want to use your own LSResourceResolver

We need to implement the IXsdResourceResolver interface (annotated with @Alternative), then register the alternative class in beans.xml, e.g.:

<alternatives>
    <class>hu.icellmobilsoft.sample.xmlvalidation.xmlutils.ProjectXsdResourceResolver</class>
</alternatives>

You can also use your own implementations of XsdHelper (IXsdHelper), XmlRequestVersionReader (IXmlRequestVersionReader), XsdValidationErrorCollector (IXsdValidationErrorCollector).
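
As a rough sketch of what such a resolver does (illustrative only; the inline schemas are made up, and this is plain JDK LSResourceResolver usage rather than the IXsdResourceResolver contract): the importing schema refers to common.xsd, and the resolver serves that document from a custom source instead of the file system:

```java
import java.io.StringReader;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

import org.w3c.dom.bootstrap.DOMImplementationRegistry;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSInput;

public class ResolverSketch {

    static final String COMMON_XSD = """
            <?xml version="1.0" encoding="UTF-8"?>
            <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                targetNamespace="urn:common" elementFormDefault="qualified">
                <xsd:simpleType name="SimpleText255Type">
                    <xsd:restriction base="xsd:string">
                        <xsd:maxLength value="255"/>
                    </xsd:restriction>
                </xsd:simpleType>
            </xsd:schema>""";

    static final String SERVICE_XSD = """
            <?xml version="1.0" encoding="UTF-8"?>
            <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:common="urn:common" targetNamespace="urn:service">
                <xsd:import namespace="urn:common" schemaLocation="common.xsd"/>
                <xsd:element name="password" type="common:SimpleText255Type"/>
            </xsd:schema>""";

    public static Schema compile() throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        DOMImplementationLS ls = (DOMImplementationLS)
                DOMImplementationRegistry.newInstance().getDOMImplementation("LS");
        factory.setResourceResolver((type, namespaceURI, publicId, systemId, baseURI) -> {
            // serve the imported schema from memory instead of a file lookup
            if ("common.xsd".equals(systemId)) {
                LSInput input = ls.createLSInput();
                input.setCharacterStream(new StringReader(COMMON_XSD));
                return input;
            }
            return null; // fall back to default resolution
        });
        return factory.newSchema(new StreamSource(new StringReader(SERVICE_XSD)));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compile() != null ? "schema compiled" : "failed");
    }
}
```

A project-level IXsdResourceResolver follows the same pattern, typically resolving the referenced schemas from the classpath or from a maven artifact.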

9.1.4. Troubleshooting at project level

Our ExceptionMapper implementation may also need to be extended:

    private Response handleException(Exception e, ReasonCodeType reasonCode, FunctionCodeType functionCode) {
        if (e instanceof XsdProcessingExceptionWrapper) {
            XsdProcessingExceptionWrapper processingException = (XsdProcessingExceptionWrapper) e;
            if (processingException.getCause() instanceof XsdProcessingException) {
                XsdProcessingException xsdEx = (XsdProcessingException) processingException.getCause();
                return restExceptionMapper.toResponse(xsdEx);
            }
        }
        ... // fall back to the project's generic error handling
    }


    public Response toResponse(BaseException e) {
        ...
        } else if (e instanceof XsdProcessingException) {
            TechnicalFault f = new TechnicalFault();
            // collect the errors in getLocalizedMessage
            f.setMsg(HandleXsdProcessingException.generateDetailedMessage((XsdProcessingException) e));
            f.setReasonCode(ReasonCodeType.INVALID_REQUEST);
            f.setFuncCode(FunctionCodeType.ERROR);
            return Response.status(Response.Status.BAD_REQUEST).entity(f).build();
        }
        ...
    }

Here’s what else to watch out for: Coffee returns all XSD errors. These should be extracted separately, e.g. like this:

public static String generateDetailedMessage(XsdProcessingException invalidRequestException) {
    if (invalidRequestException == null) {
        return null;
    }
    StringBuilder msg = new StringBuilder();
    for (XMLValidationError xmlValidationError : invalidRequestException.getErrors()) {
        if (xmlValidationError != null) {
            if (msg.length() > 0) {
                msg.append('\n');
            }
            msg.append(xmlValidationError.getField()).append(" - ").append(xmlValidationError.getError());
        }
    }
    return msg.length() > 0 ? invalidRequestException.getLocalizedMessage() + " errors:\n" + msg : invalidRequestException.getLocalizedMessage();
}

9.1.5. If there has been no XSD validation so far

If you want to handle the transition period (the old version does not validate, the new one does), give xsdPath an empty ("") String.

9.2. XSD Catalog and generation

Customizable XSD generation and validation.

The XSD catalog itself is a rather complicated topic within the XSD standard. There is no need to go deeper into it here; we only deal with what we need below, and it is worth using it for modularization both in the framework and in projects. The whole "XSD catalog" concept is an OASIS standard, which can be found at xml-catalogs.

9.2.1. General

We generally use XSD to take care of the basic validation of DTO objects, such as:

  • input string length - how long the field value can be (character length)

  • mandatory - mandatory to fill in or not

  • minimum, maximum values - for example minimum 0, 1970-01-01, -3.14, …​

  • type mandatory - date, int, boolean, etc…​

  • defined values - ENUM

  • string, date pattern - "\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(.\d{1,3})?Z", "[a-zA-Z0-9\-@.]{6,30}", …​

  • other XSD options…​
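
The pattern facets above are ordinary regular expressions; for illustration, the timestamp pattern from the list can be checked directly in Java (hypothetical sample values):

```java
import java.util.regex.Pattern;

public class PatternSketch {

    // the timestamp pattern facet quoted in the list above,
    // escaped for a Java string literal
    static final Pattern TS = Pattern.compile("\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(.\\d{1,3})?Z");

    public static void main(String[] args) {
        System.out.println(TS.matcher("2024-05-01T10:15:30.123Z").matches()); // true
        System.out.println(TS.matcher("2024-05-01T10:15:30Z").matches());     // true: millis are optional
        System.out.println(TS.matcher("2024-05-01 10:15:30").matches());      // false: no 'T' and 'Z'
    }
}
```

The generated validators apply the same expressions automatically, so a value rejected here would also be rejected by the XSD validation.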

In addition, it greatly assists object-oriented DTO generation and reuse - a well-written XSD structure can be used for coding very easily.

It is also important that although we use the Java language, the REST interface objects we write are defined in XSD and can be handed to a connected external client, who can generate their own DTOs from the XSD in any language without having to rewrite them manually.

We try to make the XSD as complete and documented as possible, because we use an openapi-jaxb plugin that complements the DTOs with OpenApi annotations carrying all the data, restrictions and descriptions. Thanks to this, Swagger can serve as complete REST interface documentation for the developer, tester, frontend, server and client, without the need to invest extra effort in post-documentation of the product.

Lest it seem that XSD is perfect, I will mention one of its biggest drawbacks - if the input is not XML (e.g. JSON), we can only solve the validation by an extra transformation to XML. But the advantages listed above can save so much time that this is a price we are willing to pay - so far every problem could be solved…​

9.2.2. Sample

Suppose we have a structure consisting of several XSDs. We want a user DTO that has 2 elements, userName and password, which must satisfy the following constraints:

/coffee-dto-xsd/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://common.dto.coffee.icellmobilsoft.hu/common"
    targetNamespace="http://common.dto.coffee.icellmobilsoft.hu/common" elementFormDefault="qualified"
    attributeFormDefault="unqualified">

...

    <!-- SIMPLE TYPES -->
    <xsd:simpleType name="SimpleText255Type">
        <xsd:restriction base="xsd:string">
            <xsd:maxLength value="255" />
        </xsd:restriction>
    </xsd:simpleType>
    <xsd:simpleType name="EmailType">
        <xsd:restriction base="xsd:string">
            <xsd:maxLength value="200" />
            <xsd:pattern value="[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}" />
        </xsd:restriction>
    </xsd:simpleType>

...

</xsd:schema>
Traditional solution
/coffee-dto-xsd/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/common/commonservice.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:common="http://common.dto.coffee.icellmobilsoft.hu/common" (1)
    xmlns="http://common.dto.coffee.icellmobilsoft.hu/commonservice"
    targetNamespace="http://common.dto.coffee.icellmobilsoft.hu/commonservice"
    elementFormDefault="qualified" attributeFormDefault="unqualified">

    <xsd:import namespace="http://common.dto.coffee.icellmobilsoft.hu/common" schemaLocation="common.xsd"/> (2)

...

    <xsd:complexType name="UserType">
        <xsd:sequence>
            <xsd:element name="userName" type="common:EmailType"> (3)
                <xsd:annotation>
                    <xsd:documentation>User login id.
                    </xsd:documentation>
                </xsd:annotation>
            </xsd:element>
            <xsd:element name="password"
                type="common:SimpleText255Type"> (3)
                <xsd:annotation>
                    <xsd:documentation>User login password hash.
                    </xsd:documentation>
                </xsd:annotation>
            </xsd:element>
        </xsd:sequence>
    </xsd:complexType>

...

</xsd:schema>
1 xml namespace definition, named "common" and the import "http://common.dto.coffee.icellmobilsoft.hu/common" will be the source
2 Mount (import) the referenced namespace, using file path.
3 Reference to a type in another XSD.

Advantages:

  • Easy to manage and read dependencies.

  • All XSD/XML management programs can use it natively, code assist works, native XSD validation.

Disadvantages:

  • In code, validation under XSD requires writing a custom resolver for external dependencies.

  • The xsd file defined in the schemaLocation path must be locally available.

  • Within a project, an import from another module can be very cumbersome (e.g. "../../../../../../../../../../target/unpacked-files/coffee-resources/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd").

  • Not customizable (explained in more detail below).

9.2.3. Solution with catalog

In this case 2 files are required:

/coffee-dto-xsd/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/common/commonservice.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:common="http://common.dto.coffee.icellmobilsoft.hu/common" (1)
    xmlns="http://common.dto.coffee.icellmobilsoft.hu/commonservice"
    targetNamespace="http://common.dto.coffee.icellmobilsoft.hu/commonservice"
    elementFormDefault="qualified" attributeFormDefault="unqualified">

    <xsd:import namespace="http://common.dto.coffee.icellmobilsoft.hu/common"/> (2)

...

    <xsd:complexType name="UserType">
        <xsd:sequence>
            <xsd:element name="userName" type="common:EmailType"> (3)
                <xsd:annotation>
                    <xsd:documentation>User login id.
                    </xsd:documentation>
                </xsd:annotation>
            </xsd:element>
            <xsd:element name="password"
                type="common:SimpleText255Type">  (3)
                <xsd:annotation>
                    <xsd:documentation>User login password hash.
                    </xsd:documentation>
                </xsd:annotation>
            </xsd:element>
        </xsd:sequence>
    </xsd:complexType>

...

</xsd:schema>
1 xml namespace definition, named "common" and the import "http://common.dto.coffee.icellmobilsoft.hu/common" will be the source
2 Mount (import) the referenced namespace - only namespace reference (no file path).
3 Reference to a type in another XSD.
/coffee-dto-impl/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/super.catalog.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
    <public publicId="http://common.dto.coffee.icellmobilsoft.hu/common" (1)
        uri="maven:hu.icellmobilsoft.coffee.dto.xsd:coffee-dto-xsd:jar::!/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd" />
</catalog>
1 xml namespace definition, "http://common.dto.coffee.icellmobilsoft.hu/common" import namespace where to find the file source.

Advantages:

  • Customizable

  • Independent XSD files can be imported that are not in the project.

  • One place for all imported XSD files.

  • Validation by XSD in code is easier than in the general solution and is universal.

Disadvantages:

  • Setting required for XSD/XML management programs, or producing your own catalog file.

9.2.4. Use cases

Imagine a case where Coffee generates some very basic DTO objects. This is important so that a common "generic" code base can be created in Coffee, for example generic error handling, logging, Apache client response processing, etc. (where DTOs are involved). If there is no generic class to cast types to, such boilerplate code is not possible in Coffee, since only "Object" itself could be referenced. Nor is it a solution for Coffee to force some fixed types, because then projects would not be able to customize and extend it (e.g. replacing XMLGregorianCalendar with java.time.OffsetDateTime). Traditional XSD import is not appropriate in this situation, because it looks for the imported XSD file at a fixed location, which is not part of our project but part of Coffee, and refers to a relative path within Coffee.

The catalog file provides a solution. The catalog is a separate file, you can make your own version of it. In it, we only use the basic XSDs that meet our needs. Whatever does not suit us, we have to copy the original XSD with everything and extend it with the Coffee DTO type. If the namespace and the complexType names are not changed, it will generate the same DTO class as in Coffee. This will be found by JAVA via the classpath and all Coffee logic can continue to work. If the change is very drastic you can use the CDI to replace the Coffee logic completely.
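
On Java 9+ the catalog lookup itself can be tried out with the built-in javax.xml.catalog API. A minimal sketch (the file name and xsd path are made up; the real super.catalog.xml uses the maven: URI scheme described later):

```java
import java.nio.file.Files;
import java.nio.file.Path;

import javax.xml.catalog.Catalog;
import javax.xml.catalog.CatalogFeatures;
import javax.xml.catalog.CatalogManager;

public class CatalogSketch {

    // builds a tiny catalog mapping the namespace (as publicId) to a local
    // xsd path, analogous to super.catalog.xml above, then resolves it
    public static String lookup() throws Exception {
        String catalogXml = """
                <?xml version="1.0"?>
                <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
                    <public publicId="http://common.dto.coffee.icellmobilsoft.hu/common"
                        uri="xsd/common.xsd"/>
                </catalog>""";
        Path file = Files.createTempFile("sample.catalog", ".xml");
        Files.writeString(file, catalogXml);

        Catalog catalog = CatalogManager.catalog(CatalogFeatures.defaults(), file.toUri());
        // relative uri entries are resolved against the catalog file's location
        return catalog.matchPublic("http://common.dto.coffee.icellmobilsoft.hu/common");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lookup());
    }
}
```

The same mechanism is what lets a project swap in its own catalog: only the publicId has to match, the uri can point anywhere.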

Generation

The generation is done in our case with maven. Example:

    <dependencies>
        <dependency>
            <groupId>hu.icellmobilsoft.coffee.dto.xsd</groupId>
            <artifactId>coffee-dto-xsd</artifactId> (1)
        </dependency>

        ...

        <dependency>
            <groupId>org.eclipse.microprofile.openapi</groupId>
            <artifactId>microprofile-openapi-api</artifactId> (2)
            <version>3.0.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>com.helger.maven</groupId>
                <artifactId>jaxb40-maven-plugin</artifactId> (3)
                <version>0.16.0</version>
                <executions>
                    <execution>
                        <id>coffee-super</id>
                        <goals>
                            <goal>generate</goal>
                        </goals>
                        <configuration>
                            <strict>false</strict>
                            <!-- https://github.com/highsource/maven-jaxb2-plugin/wiki/Catalogs-in-Strict-Mode -->
                            <catalog>src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/super.catalog.xml</catalog>  (4)
                            <schemaIncludes>
                                <include>xsd/hu/icellmobilsoft/coffee/dto/super.xsd</include>  (5)
                            </schemaIncludes>
                            <bindingIncludes>
                                <include>xsd/hu/icellmobilsoft/coffee/dto/bindings.xjb</include>  (6)
                            </bindingIncludes>
                            <generateDirectory>${project.build.directory}/generated-sources/src/main/java</generateDirectory>  (7)
                        </configuration>
                    </execution>
                </executions>
                <configuration>
                    <verbose>true</verbose>
                    <schemaDirectory>src/main/resources</schemaDirectory>
                    <args>
                        <arguments>-openapify</arguments> (8)
                        <arguments>-Xfluent-api</arguments> (9)
                        <arguments>-Xannotate</arguments> (10)
                    </args>
                    <plugins>
                        <plugin>
                            <groupId>hu.icellmobilsoft.jaxb</groupId>
                            <artifactId>openapi-jaxb-plugin</artifactId> (8)
                            <version>2.0.0</version>
                        </plugin>
                        <plugin>
                            <groupId>net.java.dev.jaxb2-commons</groupId>
                            <artifactId>jaxb-fluent-api</artifactId> (9)
                            <version>2.1.8</version>
                        </plugin>
                        <plugin>
                            <groupId>org.jvnet.jaxb2_commons</groupId>
                            <artifactId>jaxb2-basics-annotate</artifactId> (10)
                            <version>1.0.4</version>
                        </plugin>
                        <plugin>
                            <groupId>com.fasterxml.jackson.core</groupId>
                            <artifactId>jackson-databind</artifactId> (10)
                            <version>2.9.9.1</version>
                        </plugin>
                    </plugins>
                </configuration>
            </plugin>
        </plugins>
    </build>
...
1 This package contains the XSD files
2 The generated DTOs contain OpenApi annotations, and to compile the generated classes, it is necessary to include the specification
3 Maven plugin which controls the generation
4 XSD catalog file path
5 Main XSD to be generated. This could actually be several small ones, but then pom.xml would have to be modified whenever a change occurs, and the global settings would have to be handled individually. It is simpler to list them in one central XSD.
6 XJB file, here you can set customized deviations, for example XMLGregorianCalendar → java.time replacement…​
7 Where to generate the classes. The plugin will put this automatically in the source code sources, Eclipse and IDEA will handle it automatically.
8 Switch to generate the OpenApi annotations, hu.icellmobilsoft.jaxb:openapi-jaxb-plugin by plugin.
9 Switch to generate the methods for fluent coding, by the net.java.dev.jaxb2-commons:jaxb-fluent-api plugin. It can be very useful, as it can save a lot of unnecessary lines in the business logic.
10 Switch to use javax.annotation.* annotations in the XSD and also generate according to it. For more details see jaxb2-annotate-plugin and stackoverflow.com
OpenApi, Swagger

As already mentioned, the generated DTOs are part of the complete documentation. Preferably they should contain all the information that may be needed by other developers, testers, organizers, etc…​ To achieve this, the XSD elements should be filled in as completely as possible when creating the XSD, because the generator uses them to generate the documentation annotations.

These annotations (OpenApi) will be displayed in a user interface using a program called Swagger.

xml file source
..
    <xsd:simpleType name="EntityIdType">
        <xsd:restriction base="xsd:string">
            <xsd:maxLength value="30" />
            <xsd:pattern value="[+a-zA-Z0-9_]{1,30}" />
        </xsd:restriction>
    </xsd:simpleType>
    <xsd:group name="BaseAuditUserGroup">
        <xsd:sequence>
            <xsd:element name="creatorUser" type="EntityIdType"
                minOccurs="0" />
            <xsd:element name="modifierUser" type="EntityIdType"
                minOccurs="0" />
        </xsd:sequence>
    </xsd:group>
    <xsd:group name="BaseAuditGroup">
        <xsd:sequence>
            <xsd:group ref="BaseAuditDateGroup" />
            <xsd:group ref="BaseAuditUserGroup" />
        </xsd:sequence>
    </xsd:group>
    <xsd:complexType name="AbstractAuditDtoType">
        <xsd:complexContent>
            <xsd:extension base="AbstractDtoType">
                <xsd:sequence>
                    <xsd:group ref="BaseAuditGroup" />
                </xsd:sequence>
            </xsd:extension>
        </xsd:complexContent>
    </xsd:complexType>
...

This results in the generation of a class - a subset of the following:

part of generated AbstractAuditDtoType.java class
...
    @Schema(name = "creatorUser", title = "creatorUser", description = "creatorUser", maxLength = 30, pattern = "[+a-zA-Z0-9_]{1,30}")
    protected String creatorUser;
...

The documentation on the user interface shows this:

XSD catalog swaggerUI sample
Figure 3. swagger UI

The example doesn’t include xsd:documentation because I couldn’t find a suitably small example, but generating it works.

XSD editors

If no catalog file is used, all XSD editors can usually handle the imports. Using a catalog complicates the situation: the catalog file itself is an additional configuration file and does not necessarily live next to the XSD files.

In XSD management software this must be added separately so that the referenced namespace prefixes can be resolved. Since most of us developers use Eclipse or IDEA, we describe these in more detail.

IDEA

This does not cause any particular problems in the configuration, as IDEA’s XSD handler seems to be able to read the project settings from maven pom.xml, so the catalog file can be read. True, no one has tried it with an external catalog file…​

Eclipse

For the catalog XSD we have to manually create our own catalog file, since the one the generator uses does not work with the Eclipse XSD/XML plugin. The plugin itself cannot read from a maven structure the way the generator does. The Eclipse plugin requires a fixed absolute path in the catalog, which is unique for each developer; it cannot work with relative paths (which is probably ruled out by the linking process itself).

For help, a sample of what you need to manually create:

/coffee-dto-xsd/example.eclipse.catalog.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
    <public publicId="http://common.dto.coffee.icellmobilsoft.hu/common"
        uri="/home/ss/PROJECTS/ICELL/COFFEE/workspace/coffee/coffee-dto/coffee-dto-xsd/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd" />
    <public publicId="http://common.dto.coffee.icellmobilsoft.hu/commonservice"
        uri="/home/ss/PROJECTS/ICELL/COFFEE/workspace/coffee/coffee-dto/coffee-dto-xsd/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/common/commonservice.xsd" />
    <public publicId="http://common.dto.coffee.icellmobilsoft.hu/commonsecurity"
        uri="/home/ss/PROJECTS/ICELL/COFFEE/workspace/coffee/coffee-dto/coffee-dto-xsd/src/main/resources/xsd/hu/icellmobilsoft/coffee/dto/common/commonsecurity.xsd" />
</catalog>

Everyone must set their own paths!

Catalog import

The Eclipse menus are the next step:

  1. File → Import

  2. Within the window, under XML → XML Catalog

  3. Then Next, and enter the manually created catalog file mentioned above.

If the paths are filled correctly Eclipse can resolve the namespace references from now on and the code assist will work.

Catalog delete

If changes are made to the XSD structure, the Catalog import has to be done again, but first the old entry must be deleted.

This should be done as follows:

  1. Window → Preferences

  2. In the window navigate to XML → XML Catalog.

  3. You will see something like this: XSD catalog catalog delete

  4. Select the User Specified Entries items, then Remove

  5. Then the Catalog import can be performed again

Do not attempt to make any changes to the User Specified Entries, because Eclipse will save them incorrectly, and after saving and reopening, the Catalog will not work. This may be an Eclipse bug.
XML Schema validation

In Solution with catalog there is an example of the super.catalog.xml file. In it you can see that the file is accessed via a maven dependency:

uri="maven:hu.icellmobilsoft.coffee.dto.xsd:coffee-dto-xsd:jar::!/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd"

This actually refers to the following:

pom.xml
...
<dependency>
    <groupId>hu.icellmobilsoft.coffee.dto</groupId>
    <artifactId>coffee-dto</artifactId>
</dependency>
...

So it unpacks from the .m2/repository/hu/icellmobilsoft/coffee/dto/coffee-dto/_VERSION_/coffee-dto.jar file the XSD file /xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd.

The following solution requires java 9+!

The maven: protocol itself is unknown to java.net.URL, so we need to write a handler for it:

coffee-tool/src/main/java/hu/icellmobilsoft/coffee/tool/protocol/handler/MavenURLHandler.java content gist
/**
 * Handling URL protocol for this format:
 *
 * <pre>
 * maven:hu.icellmobilsoft.coffee:coffee-dto-xsd:jar::!/xsd/hu/icellmobilsoft/coffee/dto/common/common.xsd
 * </pre>
 *
 * Format is: <code>maven:groupId:artifactId:package:version:!file_path</code>
 * <ul>
 * <li>protocol - URL schema protocol, in this case "maven"</li>
 * <li>hu.icellmobilsoft.coffee.dto.xsd - maven groupId</li>
 * <li>coffee-dto-xsd - maven artifactId</li>
 * <li>jar - maven package</li>
 * <li>version - maven version</li>
 * </ul>
 *

This handler still needs to be registered as follows:

  • Create a text file named java.net.spi.URLStreamHandlerProvider (without extension)

  • Put the fully qualified name of the handler provider class you created into it. If you have more than one, put each one on a new line.

In our case it will look like this:

src/main/resources/META-INF/services/java.net.spi.URLStreamHandlerProvider
hu.icellmobilsoft.coffee.tool.protocol.handler.MavenURLStreamHandlerProvider

From now on, the java 9+ XSD Catalog handler can read the path known by the generator.
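
As a rough sketch of how such a protocol handler plugs into java.net.URL (illustrative only; the class name and the resolver callback are made up, while the real MavenURLHandler resolves the artifact from the local maven repository):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class MavenUrlSketch {

    /** Pluggable resolution of a maven coordinate plus archive entry to bytes. */
    @FunctionalInterface
    interface ArtifactResolver {
        byte[] resolve(String coordinate, String entry) throws IOException;
    }

    /** Builds a URLStreamHandler for "maven:" URLs backed by the given resolver. */
    static URLStreamHandler handler(ArtifactResolver resolver) {
        return new URLStreamHandler() {
            @Override
            protected URLConnection openConnection(URL u) {
                return new URLConnection(u) {
                    @Override
                    public void connect() {
                        // nothing to do, resolution happens in getInputStream()
                    }

                    @Override
                    public InputStream getInputStream() throws IOException {
                        // "maven:coordinate!entry" -> coordinate before '!', entry after it
                        String spec = url.toString().substring("maven:".length());
                        int bang = spec.indexOf('!');
                        return new ByteArrayInputStream(
                                resolver.resolve(spec.substring(0, bang), spec.substring(bang + 1)));
                    }
                };
            }
        };
    }

    public static void main(String[] args) throws Exception {
        // fake resolver: a real one would look the artifact up in ~/.m2/repository
        ArtifactResolver fake = (coordinate, entry) -> ("resolved " + entry).getBytes();
        // the handler is passed explicitly here; with the service-provider file
        // registered as described above, a plain new URL(...) call works as well
        URL url = new URL(null, "maven:hu.icellmobilsoft.coffee.dto.xsd:coffee-dto-xsd:jar::!/xsd/common.xsd",
                handler(fake));
        try (InputStream in = url.openStream()) {
            System.out.println(new String(in.readAllBytes()));
        }
    }
}
```

The service-loader registration simply makes the JDK find an equivalent handler by protocol name, so code that only sees the URL string needs no knowledge of the maven: scheme.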

In order for JAXB itself to be able to validate via the catalog, it was necessary to create some classes which are not yet part of Coffee. This will be documented later; until then, you will have to look at the projects.

1. Why do we use the string type for IDs and not int? For security reasons, IDs should be non-consecutive, otherwise they can be guessed easily. For portability reasons, IDs of int type can easily get mixed up with existing data during a db migration. In terms of extensibility we have more room for manoeuvre, and in terms of quantity we are not limited. In terms of the information carried, the amount of data in the system is not revealed. In terms of synchronisation, string is less expensive, because using a sequence can stall the process (e.g. Oracle RAC) until all nodes respond.