Supported Images
- Our recommended set uses Red Hat's Universal Base Image (UBI) as the operating system and is rebuilt periodically. These images are available from the IBM Container Registry (icr.io) and are listed here.
- Another set, using Ubuntu as the operating system, can be found on Docker Hub.
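For example, to pull one of the UBI-based images from the IBM Container Registry (the tag shown is the one used in the template below):

```sh
docker pull icr.io/appcafe/open-liberty:kernel-slim-java17-openj9-ubi
```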
According to best practices for container images, you should create a new image (`FROM icr.io/appcafe/open-liberty:<tag>`) that adds a single application and its corresponding configuration. You should avoid configuring the container manually once it has started, except for debugging purposes, because such changes won't persist when you spawn a new container from the image.
Your application image template should follow a pattern similar to:
```dockerfile
FROM icr.io/appcafe/open-liberty:kernel-slim-java17-openj9-ubi

# Add Liberty server configuration including all necessary features
COPY --chown=1001:0 server.xml /config/

# Modify feature repository (optional)
# A sample is in the 'Getting Required Features' section below
COPY --chown=1001:0 featureUtility.properties /opt/ol/wlp/etc/

# This script will add the requested XML snippets to enable Liberty features and grow image to be fit-for-purpose using featureUtility.
# Only available in 'kernel-slim'. The 'full' tag already includes all features for convenience.
RUN features.sh

# Add interim fixes (optional)
COPY --chown=1001:0 interim-fixes /opt/ol/fixes/

# Add app
COPY --chown=1001:0 Sample1.war /config/dropins/

# This script will add the requested server configurations, apply any interim fixes and populate caches to optimize runtime
RUN configure.sh
```
This will result in a container image that has your application and configuration pre-loaded, which means you can spawn new fully-configured containers at any time.
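For example, a minimal sketch of building and running such an image (the image name `my-app` is illustrative; the ports match the defaults used elsewhere in this document):

```sh
# Build the application image from the Dockerfile above
docker build -t my-app:1.0 .

# Spawn a fully-configured container from it
docker run -d --name my-app -p 9080:9080 -p 9443:9443 my-app:1.0
```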
Refer to Open Liberty Docs for server configuration (server.xml) information.
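As a rough illustration, a minimal server.xml for the image above might look like the following; the feature and port values are assumptions for the sketch, not requirements:

```xml
<server description="Sample Liberty server">
    <featureManager>
        <!-- List only the features your application needs; features.sh installs them -->
        <feature>restfulWS-3.1</feature>
    </featureManager>
    <!-- Ports shown match the image defaults -->
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080" httpsPort="9443"/>
</server>
```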
The `kernel-slim` tag provides just the bare-minimum server. You can grow it to include the features needed by your application by invoking `features.sh`.
Liberty features are downloaded from the Maven Central repository by default, but you can specify alternatives using `/opt/ol/wlp/etc/featureUtility.properties`:

```properties
remoteRepo.url=https://my-remote-server/secure/maven2
remoteRepo.user=operator
remoteRepo.password={aes}KM8dhwcv892Ss1sawu9R+
```
Refer to Repository and proxy modifications for more information.
This section describes the optional enterprise functionality that can be enabled via the Dockerfile at build time by setting particular build arguments (`ARG`) and calling `RUN configure.sh`. Each of these options triggers the inclusion of specific configuration via XML snippets (except for `VERBOSE`), described below:
- `TLS` (`SSL` is deprecated)
  - Description: Enable Transport Security in Liberty by adding the `transportSecurity-1.0` feature (includes support for SSL).
  - XML Snippet Location: keystore.xml
- `HZ_SESSION_CACHE`
  - Description: Enable the persistence of HTTP sessions using JCache by adding the `sessionCache-1.0` feature.
  - XML Snippet Location: hazelcast-sessioncache.xml
- `VERBOSE`
  - Description: When set to `true`, outputs the commands and results from `configure.sh` to stdout. The default is `false`, which silences `configure.sh`.
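For example, a sketch of enabling one of these options in a Dockerfile (the choice of `TLS` is illustrative):

```dockerfile
# Opt in to Transport Security; configure.sh reads this build argument
ARG TLS=true

# configure.sh adds the corresponding XML snippet (keystore.xml)
RUN configure.sh
```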
The following enterprise functionalities are now deprecated and you should stop using them. They are still available in `full` but not in `kernel-slim`, and they have been removed from the Open Liberty images based on Java 21 and above:
- `HTTP_ENDPOINT`
  - Description: Add configuration properties for an HTTP endpoint.
  - XML Snippet Location: http-ssl-endpoint.xml when SSL is enabled; otherwise http-endpoint.xml
- `MP_HEALTH_CHECK`
  - Description: Check the health of the environment using the Liberty feature `mpHealth-1.0` (implements MicroProfile Health).
  - XML Snippet Location: mp-health-check.xml
- `MP_MONITORING`
  - Description: Monitor the server runtime environment and application metrics by using the Liberty features `mpMetrics-1.1` (implements MicroProfile Metrics) and `monitor-1.0`.
  - XML Snippet Location: mp-monitoring.xml
  - Note: With this option, the `/metrics` endpoint is configured without authentication to support environments that do not yet support scraping secured endpoints.
- `IIOP_ENDPOINT`
  - Description: Add configuration properties for an IIOP endpoint.
  - XML Snippet Location: iiop-ssl-endpoint.xml when SSL is enabled; otherwise iiop-endpoint.xml
  - Note: If using this option, the `env.IIOP_ENDPOINT_HOST` environment variable should be set to the server's host. See IIOP endpoint configuration for more details.
- `JMS_ENDPOINT`
  - Description: Add configuration properties for a JMS endpoint.
  - XML Snippet Location: jms-ssl-endpoint.xml when SSL is enabled; otherwise jms-endpoint.xml
Single Sign-On can be optionally configured by adding Liberty server variables in an XML file, by passing environment variables (less secure), or by passing Liberty server variables in through the Liberty operator. See SECURITY.md.
OpenJ9's shared class cache (SCC) allows the VM to store Java classes in an optimized form that can be loaded very quickly, along with JIT-compiled code and profiling data. Deploying an SCC file together with your application can significantly improve start-up time. The SCC can also be shared by multiple VMs, thereby reducing total memory consumption.
Open Liberty container images contain an SCC and (by default) add your application's specific data to the SCC at image build time when your Dockerfile invokes `RUN configure.sh`.
Note that currently some content in the SCC is sensitive to heap geometry. If the server is started with options that cause the heap geometry to change significantly from when the SCC was created, that content will not be used, and you may observe fluctuations in start-up performance. Specifying a smaller `-Xmx` value increases the chances of obtaining a heap geometry that's compatible with the AOT code.
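As a heavily hedged sketch of keeping the heap size consistent between build and run (the `512m` value is illustrative, and this assumes prepending to the `OPENJ9_JAVA_OPTIONS` variable already defined in the base image is acceptable for your setup):

```dockerfile
# Pin -Xmx so the heap geometry stays compatible with the cached AOT code.
# OPENJ9_JAVA_OPTIONS is already defined in the base image; this prepends to it.
ENV OPENJ9_JAVA_OPTIONS="-Xmx512m ${OPENJ9_JAVA_OPTIONS}"
```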
This feature can be controlled via the following variables:
- `OPENJ9_SCC` (environment variable)
  - Description: If `"true"`, cache application-specific data in an SCC and include it in the image. A new SCC will be created if needed; otherwise data will be added to the existing SCC.
  - Default: `"true"`
- `TRIM_SCC` (environment variable)
  - Description: If `"true"`, the application-specific SCC layer will be sized down to accommodate only the data populated during the image build process. To allow the application to add more data to the SCC at runtime, set this variable to `"false"`, but also ensure the SCC is not marked read-only. This can be done by setting the `OPENJ9_JAVA_OPTIONS` environment variable in your application Dockerfile like so: `ENV OPENJ9_JAVA_OPTIONS="-XX:+IgnoreUnrecognizedVMOptions -XX:+IdleTuningGcOnIdle -Xshareclasses:name=openj9_system_scc,cacheDir=/opt/java/.scc,nonFatal -Dosgi.checkConfiguration=false"`. Note that `OPENJ9_JAVA_OPTIONS` is already defined in the base Liberty image Dockerfile, but includes the `readonly` sub-option.
  - Default: `"true"`
- `SCC_SIZE` (environment variable)
  - Description: The size of the application-specific SCC layer in the image. This value is only used if `TRIM_SCC` is set to `"false"`.
  - Default: `"80m"`
- `WARM_ENDPOINT` (environment variable)
  - Description: If `"true"`, curl will be used to access the `WARM_ENDPOINT_URL` (see below) during the population of the SCC. This increases the amount of information in the SCC and improves first-request time in subsequent starts of the image.
  - Default: `"true"`
- `WARM_ENDPOINT_URL` (environment variable)
  - Description: The URL to access during SCC population if `WARM_ENDPOINT` is true.
  - Default: `"localhost:9080/"`
- `WARM_OPENAPI_ENDPOINT` (environment variable)
  - Description: (24.0.0.4+) If `"true"`, curl will be used to access the `WARM_OPENAPI_ENDPOINT_URL` (see below) during the population of the SCC. This increases the amount of information in the SCC and improves first-request time in subsequent starts of the image.
  - Default: `"true"`
- `WARM_OPENAPI_ENDPOINT_URL` (environment variable)
  - Description: (24.0.0.4+) The URL to access during SCC population if `WARM_OPENAPI_ENDPOINT` is true.
  - Default: `"localhost:9080/openapi"`
To customize one of the built-in XML snippets, make a copy of the snippet from GitHub and edit it locally. Once you have completed your changes, use the `COPY` command inside your Dockerfile to copy the snippet into `/config/configDropins/overrides`. Note that you do not need to set build arguments (`ARG`) for any customized XML snippets. The following Dockerfile snippet is an example of how to include the customized snippet.
```dockerfile
COPY --chown=1001:0 <path_to_customized_snippet> /config/configDropins/overrides
```
It is important to be able to observe the logs emitted by Open Liberty when it is running in a container. A best practice is to emit the logs in JSON format and consume them with a logging stack of your choice.
Configure your Open Liberty container image to emit JSON-formatted logs to the console/standard-out, with your selection of Liberty logging events, by creating a `bootstrap.properties` file with the following properties. You can also disable writing to the messages.log or trace.log files if you don't need them.
```properties
# direct events to console in json format
com.ibm.ws.logging.console.log.level=info
com.ibm.ws.logging.console.format=json
com.ibm.ws.logging.console.source=message,trace,accessLog,ffdc,audit

# disable writing to messages.log by not including any sources (optional)
com.ibm.ws.logging.message.format=json
com.ibm.ws.logging.message.source=

# disable writing to trace.log by only sending trace data to console (optional)
com.ibm.ws.logging.trace.file.name=stdout
```
Make sure to include the file you just created in your Open Liberty Dockerfile.
```dockerfile
COPY --chown=1001:0 bootstrap.properties /config/
```
The equivalent `WLP_LOGGING_*` environment variables can be set when running the container as well, by using the run command's `-e` option to pass in an environment variable value.
```sh
docker run -d -p 80:9080 -p 443:9443 \
  -e WLP_LOGGING_CONSOLE_FORMAT=JSON \
  -e WLP_LOGGING_CONSOLE_LOGLEVEL=info \
  -e WLP_LOGGING_CONSOLE_SOURCE=message,trace,accessLog,ffdc,audit \
  open-liberty:latest
```
For more information regarding the configuration of Open Liberty's logging capabilities see: https://openliberty.io/docs/ref/general/#log-trace-configuration.html
The Liberty session caching feature builds on top of an existing technology called JCache (JSR 107), which provides an API for distributed in-memory caching. There are several providers of JCache implementations. The configurations for two such providers, Infinispan and Hazelcast, are outlined below.
- Infinispan - One JCache provider is the open source project Infinispan, which is the basis for Red Hat Data Grid. Enabling Infinispan session caching retrieves the Infinispan client libraries from the Infinispan JCACHE (JSR 107) Remote Implementation Maven repository, and configures the necessary infinispan.client.hotrod.* properties and the Liberty server feature sessionCache-1.0 by including the XML snippet infinispan-client-sessioncache.xml.
  1. Setup Infinispan Service - Configuring Liberty session caching with Infinispan depends on an Infinispan service being available in your Kubernetes environment. It is preferable to create your Infinispan service by utilizing the Infinispan Operator. The Infinispan Operator Tutorial provides a good example of getting started with Infinispan in OpenShift.
  2. Install Client Jars and Set INFINISPAN_SERVICE_NAME - To enable Infinispan functionality in Liberty, the container image author can use the Dockerfile provided below. This Dockerfile assumes an Infinispan service name of `example-infinispan`, which is the default used in the Infinispan Operator Tutorial. To customize your Infinispan service, see Creating Infinispan Clusters. The `INFINISPAN_SERVICE_NAME` environment variable must be set at build time as shown in the example Dockerfile, or overridden at image deploy time.
     - TIP - If your Infinispan deployment and Liberty deployment are in different namespaces/projects, you will need to set the `INFINISPAN_HOST`, `INFINISPAN_PORT`, `INFINISPAN_USER`, and `INFINISPAN_PASS` environment variables in addition to the `INFINISPAN_SERVICE_NAME` environment variable. This is because the Liberty deployment does not have access to the Infinispan service environment variables it requires.
```dockerfile
### Infinispan Session Caching ###
FROM icr.io/appcafe/open-liberty:kernel-slim-java8-openj9-ubi AS infinispan-client

# Install Infinispan client jars
USER root
RUN infinispan-client-setup.sh
USER 1001

FROM icr.io/appcafe/open-liberty:kernel-slim-java8-openj9-ubi AS open-liberty-infinispan

# Copy Infinispan client jars to Open Liberty shared resources
COPY --chown=1001:0 --from=infinispan-client /opt/ol/wlp/usr/shared/resources/infinispan /opt/ol/wlp/usr/shared/resources/infinispan

# Instruct configure.sh to use Infinispan for session caching.
# This should be set to the Infinispan service name.
# TIP - Run the following oc/kubectl command with admin permissions to determine this value:
#       oc get infinispan -o jsonpath={.items[0].metadata.name}
ENV INFINISPAN_SERVICE_NAME=example-infinispan

# Uncomment and set to override auto-detected values.
# These are normally not needed if running in a Kubernetes environment.
# One such scenario would be when the Infinispan and Liberty deployments are in different namespaces/projects.
#ENV INFINISPAN_HOST=
#ENV INFINISPAN_PORT=
#ENV INFINISPAN_USER=
#ENV INFINISPAN_PASS=

# This script will add the requested XML snippets and grow image to be fit-for-purpose
RUN configure.sh
```
  3. Mount Infinispan Secret - Finally, the Infinispan-generated secret must be mounted as a volume under the mount point of `/platform/bindings/infinispan/secret/` on Liberty containers. For versions latest and 20.0.0.6+, the default of `/platform/bindings/infinispan/secret/` can be overridden by setting the `LIBERTY_INFINISPAN_SECRET_DIR` environment variable. When using the Infinispan Operator, this secret is automatically generated as part of the Infinispan service with the name `<INFINISPAN_CLUSTER_NAME>-generated-secret`. For the mounting of this secret to succeed, the Infinispan Operator and Liberty must share the same namespace. If they do not share the same namespace, the `INFINISPAN_HOST`, `INFINISPAN_PORT`, `INFINISPAN_USER`, and `INFINISPAN_PASS` environment variables can be used instead (see the Dockerfile example above). For an example of mounting this secret, review the `volumes` and `volumeMounts` portions of the YAML below.
```yaml
...
spec:
  volumes:
    - name: infinispan-secret-volume
      secret:
        secretName: example-infinispan-generated-secret
  containers:
    - name: servera-container
      image: ol-runtime-infinispan-client:1.0.0
      ports:
        - containerPort: 9080
      volumeMounts:
        - name: infinispan-secret-volume
          readOnly: true
          mountPath: "/platform/bindings/infinispan/secret"
...
```
- Hazelcast - Another JCache provider is Hazelcast In-Memory Data Grid. Enabling Hazelcast session caching retrieves the Hazelcast client libraries from the hazelcast/hazelcast container image, configures Hazelcast by copying a sample hazelcast.xml, and configures the Liberty server feature sessionCache-1.0 by including the XML snippet hazelcast-sessioncache.xml. By default, the Hazelcast Discovery Plugin for Kubernetes will auto-discover its peers within the same Kubernetes namespace. To enable this functionality, the container image author can include the following Dockerfile snippet, choosing either the client-server or the embedded topology.
```dockerfile
### Hazelcast Session Caching ###
# Copy the Hazelcast libraries from the Hazelcast container image
COPY --from=hazelcast/hazelcast --chown=1001:0 /opt/hazelcast/lib/*.jar /opt/ol/wlp/usr/shared/resources/hazelcast/

# Instruct configure.sh to copy the client topology hazelcast.xml
ARG HZ_SESSION_CACHE=client

# Default setting for the verbose option
ARG VERBOSE=false

# Instruct configure.sh to copy the embedded topology hazelcast.xml and set the required system property
#ARG HZ_SESSION_CACHE=embedded
#ENV JAVA_TOOL_OPTIONS="-Dhazelcast.jcache.provider.type=server ${JAVA_TOOL_OPTIONS}"

# This script will add the requested XML snippets and grow image to be fit-for-purpose
RUN configure.sh
```
The process to apply interim fixes (iFix) is defined here.
When generating a server dump for a Liberty server running in a container in a pod on a Kubernetes cluster (including OpenShift), the server dump command might produce the following error:
```
$ server dump defaultServer --archive=all.dump.zip --include=system
Dumping server defaultServer.
CWWKE0009E: The system cannot find the following file and this file will not be included in the server dump archive: /opt/ibm/wlp/output/defaultServer/The core file created by child process with pid = 252052 was not found. Expected to find core file with name "/opt/ibm/wlp/output/defaultServer/core.252052"
Server defaultServer dump complete in /opt/ibm/wlp/output/defaultServer/all.dump.zip.
```
This issue happens when the server dump command includes `--include=system` and there is a `|` (pipe) contained in the `core_pattern` file in the container:
Example on an OpenShift 4.3 cluster:
```
$ cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
```
Another example on a Kubernetes cluster:
```
$ cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport %p %s %c %d %P %E
```
If the first character of the `/proc/sys/kernel/core_pattern` file is a pipe symbol (`|`), the remainder of the line is interpreted as the command line for a user-space program (or script) that is executed to process the dump.
To access the core dump:

- If the program is `/usr/lib/systemd/systemd-coredump`, then the core dump should go to `/var/lib/systemd/coredump/` by default (the configuration can be overridden in `/etc/systemd/coredump.conf`). To get this core dump, from the host, run `sudo coredumpctl -o core.dmp dump ${PID}` and transfer the `core.dmp` file.
- If the program is `/usr/share/apport/apport`, then the core dump should go to `/var/crash/` by default (the configuration can be overridden in `/etc/default/apport`). To get this core dump, gather the file from `/var/crash` on the host.
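For example, a short sketch of the systemd-coredump case, assuming the PID 252052 from the error message above and a workstation destination that is purely illustrative:

```sh
# On the worker node: export the core dump for the crashed process
sudo coredumpctl -o core.dmp dump 252052

# Copy it off the node for analysis (destination is hypothetical)
scp core.dmp analyst@workstation:/tmp/
```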
If the core dump is not found in these locations, review the host's kernel log (e.g. `journalctl`) to see if there were errors in those programs.
When the issue occurs, the following messages appear in the server logs:
```
[AUDIT ] CWWKE0057I: Introspect request received. The server is dumping status.
JVMDUMP034I User requested System dump using '/opt/ibm/wlp/output/defaultServer/core.20200605.191845.1.0001.dmp' through com.ibm.jvm.Dump.triggerDump
JVMPORT030W /proc/sys/kernel/core_pattern setting "|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e" specifies that the core dump is to be piped to an external program. Attempting to rename either core or core.190.
JVMDUMP012E Error in System dump: The core file created by child process with pid = 190 was not found. Expected to find core file with name "/opt/ibm/wlp/output/defaultServer/core.190"
[AUDIT ] CWWKE0068I: Java dump created: /opt/ibm/wlp/output/defaultServer/The core file created by child process with pid = 190 was not found. Expected to find core file with name "/opt/ibm/wlp/output/defaultServer/core.190"
```
Since the JVM cannot find the system dump, it is not able to add some useful metadata to the core dump, but this metadata is usually not required. An example of this information is the extra memory region metadata for the `info map` command in `jdmpview`, which is useful for native memory leak analysis.
Users generating other types of dumps, such as thread dumps and heap dumps, should not see this issue.
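For example, these dump types can be requested without a system core (a sketch using the default server name from the messages above):

```sh
# Thread dump (javacore) only; no core_pattern involvement
server javadump defaultServer

# Server dump without --include=system also avoids the piped core handling
server dump defaultServer --archive=all.dump.zip
```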