//Defrosting Warmed-up Java [using Rootless CRIU and Project Panama]

I needed a toy project to experiment with JEP 389 of Project Panama (modern JNI), but I also wanted to take a better look at CRIU (Checkpoint/Restore In Userspace). So I thought, let's try to combine both, and created JCRIU. The immediate questions I had were: how fast can it defrost a warmed-up JVM, and can it make a program time travel?

Let's attempt to investigate the first question with this blog entry.

CRIU Crash Course

CRIU can dump process trees to disk (checkpoint) and restore them any time later (implemented in user space) - it's all in the name.

Let's run a minimal test first.


#!/bin/bash
echo my pid: $$
i=0
while true
do
    echo $i && ((i=i+1)) && sleep 1
done

The script above will print its PID initially and then continue to print and increment a number. It isn't important that this is a bash script; it could be any process.

shell 1:


$ sh test.sh 
my pid: 14255
0
1
...
9
Killed

shell 2:


$ criu dump -t 14255 --shell-job -v -D dump/
...
(00.021161) Dumping finished successfully

This command lets CRIU dump (checkpoint) the process with the specified PID and store its image in ./dump (overwriting any older image on the same path). The flag --shell-job tells CRIU that the process is attached to a console. Dumping a process will automatically kill it, as in this example, unless -R is specified.
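
For example, to checkpoint without killing the process, only the -R flag needs to be added (a sketch, reusing the PID from above):


$ criu dump -t 14255 -R --shell-job -v -D dump/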

shell 2:


$ criu restore --shell-job -D dump/
10
11
12
...

To restore, simply replace "dump" with "restore", without specifying the PID. As expected, the program continues counting in shell 2, right where it was stopped in shell 1.

Rootless CRIU

As of now (Nov. 2020) the CRIU commands above still require root permissions, but this might change soon. Linux 5.9 received cap_checkpoint_restore (patch) and CRIU is also already being prepared. To test rootless CRIU, simply build the non-root branch and set cap_checkpoint_restore on the resulting binary (no need to install, you can use criu directly).


sudo setcap cap_checkpoint_restore=eip /path/to/criu/binary

Note: Depending on your Linux distribution you might have to set cap_sys_ptrace too. Some features might not work yet, for example restoring as --shell-job or using the CRIU API. Use a recent kernel (at least 5.9.8) before trying to restore a JVM.
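
To verify that the capability was applied, getcap can be used (the exact output format varies slightly between libcap versions):


$ getcap /path/to/criu/binary
/path/to/criu/binary cap_checkpoint_restore=eip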

CRIU + Java + Panama = JCRIU

JCRIU uses Panama's jextract tool at build time to generate a low-level (1:1) binding directly from the header of the CRIU API. The low-level binding isn't exposed through the public API however; it's just an implementation detail. Both jextract and the foreign function module are part of Project Panama, early access builds are available here. JEP 389: Foreign Linker API has been (today) accepted for inclusion as a JDK 16 incubator module - it might appear in mainline builds soon.

The main entry point is CRIUContext, which implements AutoCloseable to cleanly dispose resources after use. Potential errors are mapped to CRIUExceptions. Checkpointing should be fairly robust since the communication with the actual CRIU process is done over RPC. Crashing CRIU most likely won't take the JVM down with it.


    public static void main(String[] args) throws IOException, InterruptedException {
        
        // create empty dir for images
        Path image = Paths.get("checkpoint_test_image");

        if (!Files.exists(image))
            Files.createDirectory(image);
        
        // checkpoint the JVM every second
        try (CRIUContext criu = CRIUContext.create()
                .logLevel(WARNING).leaveRunning(true).shellJob(true)) {
            
            int n = 0;
            
            while(true) {
                Thread.sleep(1000);

                criu.checkpoint(image); // checkpoint and entry point for a restore

                long pid = ProcessHandle.current().pid();
                System.out.println("my PID: "+pid+" checkpoint# "+n++);
            }
        }
    }

The above example is somewhat similar to the simple bash script. The main difference is that the Java program is checkpointing itself every second. This allows us to CTRL+C any time - the program will keep counting and checkpointing where it left off, once restored.


[mbien@longbow JCRIUTest]$ sudo sh start-demo.sh 
WARNING: Using incubator modules: jdk.incubator.foreign
my PID: 16195 checkpoint# 0
my PID: 16195 checkpoint# 1
my PID: 16195 checkpoint# 2
my PID: 16195 checkpoint# 3
my PID: 16195 checkpoint# 4
my PID: 16195 checkpoint# 5
CTRL+C
[mbien@longbow JCRIUTest]$ sudo criu restore --shell-job -D checkpoint_test_image/
my PID: 16195 checkpoint# 5
my PID: 16195 checkpoint# 6
my PID: 16195 checkpoint# 7
my PID: 16195 checkpoint# 8
my PID: 16195 checkpoint# 9
CTRL+C
[mbien@longbow JCRIUTest]$ sudo criu restore --shell-job -D checkpoint_test_image/
my PID: 16195 checkpoint# 9
my PID: 16195 checkpoint# 10
my PID: 16195 checkpoint# 11
my PID: 16195 checkpoint# 12
my PID: 16195 checkpoint# 13
my PID: 16195 checkpoint# 14
CTRL+C

Note: start-demo.sh just sets environment variables to point to an early access JDK 16 Panama build, enables jdk.incubator.foreign, etc. The project README has the details.

Important Details and Considerations

  • CRIU restores images with the same PIDs the processes had during checkpoint. This won't cause much trouble in containers since the namespace should be quite empty, but it might conflict from time to time on a workstation. If the same image is to be restored multiple times concurrently, each instance has to run in its own PID namespace. This can be achieved with sudo unshare -p -m -f [restore command], see man unshare for details and the sketch after this list.
  • Opened files are not allowed to change (in size) between checkpoint and restore. If they do, the restore operation will fail. (Watch out for log files, JFR repos, JVM perf data or temporary files.)
  • If the application has established TCP connections, you have to tell CRIU via the --tcp-established flag (or the similarly named method in CRIUContext). CRIU will try to restore all connections in their correct states. The CRIU wiki lists more options.
  • The first checkpoint or restore after system boot can take a few seconds because CRIU has to gather information about the system configuration first; this information is cached for subsequent uses.
  • Some application-dependent post-restore tasks might be required, for example keystore/cert replacement or RNG re-initialization (...)
  • CRIU can't checkpoint resources it can't reach. An X window or state stored on a GPU can't be dumped.
  • Migration should probably only be attempted between (very) similar systems and hardware.
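
A minimal sketch of such a restore in its own PID namespace, reusing the dump/ image from the first example:


$ sudo unshare -p -m -f criu restore --shell-job -D dump/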

Instant Defrosting of Warmed-up JVMs

Let's take a look at what you can do with superluminal, absolute-zero, instant-defrosting JCRIU (OK, I'll stop ;)) when applied to my favorite dusty Java web monolith: Apache Roller. I measured the time this very blog needs to start on my workstation when loaded from an NVMe on JDK 16 + Jetty 9.4.34. (I consider it started when the website has loaded in the browser, not when the app server reports that it started.)

classic start: ~6.5 s

(for comparison: it takes about a minute to start on a Raspberry Pi 3b+, which is serving this page you are reading right now)

Now let's try this again. But this time Roller will warm itself up, generate RSS feeds, populate the in-memory cache, give the JIT a chance to compile hot paths, compact the heap by calling System.gc() and finally shock-frost itself via criu.checkpoint(...).


        warmup();    // generates/caches landing page/RSS feeds and first 20 blog entries
        System.gc(); // give the GC a chance to clean up unused objects before checkpoint

        try (CRIUContext criu = CRIUContext.create()
                .logLevel(WARNING).leaveRunning(false).tcpEstablished(true)) {

            criu.checkpoint(imagePath);  // checkpoint + exit

        } catch (CRIUException ex) {
            jfrlog.warn("post warmup checkpoint failed", ex);
        }

(The uncompressed image size was between 500 and 600 MB during my tests; the heap was set to 1 GB with ParallelGC active.)

restore:


$ sudo time criu restore --shell-job --tcp-established -d -D blog_image/

real 0m0,204s
user 0m0,015s
sys  0m0,022s

instant defrosting: 204 ms

Note: -d detaches the shell after the restore operation has completed. An alternative way to measure defrosting time is to enable verbose logging with -v and compare the last timestamp; this is slightly slower (+20 ms) since CRIU tends to log a lot on lower log levels. Let me know if there is a better way of measuring this, but I double-checked everything and the implied image loading speed would be well below the average read speed of my M.2 NVMe.

The blog is immediately reachable in the browser, served by a warmed-up JVM.

Conclusion && Discussion

CRIU is quite interesting for use cases where Java startup time matters. Quarkus, for example, moves slow framework initialization from startup to build time; native images with GraalVM further improve initialization by AOT-compiling the application into a single binary, but this also sacrifices a bit of throughput. CRIU can be another tool in the toolbox to quickly map a running JVM with its application into memory (no noteworthy code changes required).

The Foreign Linker API (JEP 389), a major part of Project Panama, is currently proposed as an incubator module for OpenJDK 16. However, to use JCRIU on older JDKs, another implementation of CRIUContext would be needed. An implementation which communicates with CRIU via Google protocol buffers, for example, would completely avoid binding to the CRIU C API.

The JVM would be in an excellent position to aid CRIU in many ways. It already is an operating system for Java/bytecode-based programs (soon even with its own implementation of threads) and knows how to drive itself to safepoints (checkpointing an application which is under load is probably a bad idea), how to compact or resize the heap, how to invalidate the code cache, etc. - I see great potential there.

Let me know what you think.

Thanks a lot to Adrian Reber (@adrian__reber) who patiently answered all my questions about CRIU.


//Formatting JFR Events with J'Bang [and JFRLog]

Once JFRLog has stored all logs as events in JFR records, you might want to read them out again for inspection, ideally in an easily readable format which resembles classic log files a bit more.

For this I wrote the JFRPrint utility, which was originally a single-file java program (SFJP) but can now be used as a jbang script. The utility can format any JFR event, not only log messages.

setup and example


# add the catalog to jbang
$ jbang catalog add jfrlog https://github.com/mbien/JFRLog/blob/master/cli/jbang-catalog.json

# define a log pattern
$ MSG_PATTERN="{eventName,0d,C} {startTime,dt:yyyy-MM-dd HH:mm:ss.SSS}\
 [{eventThread.javaName}] {origin,1d}: {message} {throwable,o,n}"

# print jfr records using the pattern
$ jbang jfrprint 10h log.* "$MSG_PATTERN" dump.jfr
INFO 2020-09-30 16:12:42.458 [main] jfrlogtest.LogTest: Hello There!
INFO 2020-09-30 16:12:42.460 [main] jfrlogtest.LogTest: 1 2 3 test 
WARN 2020-09-30 16:12:42.461 [main] jfrlogtest.LogTest: don't panic 
java.lang.IllegalArgumentException: test, please ignore
	at dev.mbien.jfrlogtest.LogTest.main(LogTest.java:12)
...

Usage is as follows:


jfrprint timespan event_name pattern [jfr_record_file | jfr_repository_folder]

timespan can be set to only print events which happened after now-timespan. The util will print all events matching event_name (supports the * wildcard as postfix) if a JFR record is passed as an argument. If it is a repository folder however, it is going to behave similarly to tail -f and stream all upcoming events from the live JFR repository, as sketched below.
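
For example, a hypothetical invocation which streams log events live from a repository folder (the path is a placeholder, MSG_PATTERN as defined in the setup above):


$ jbang jfrprint 10m log.* "$MSG_PATTERN" /path/to/jfr_repository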

To print the usage and more examples simply type:


jbang jfrprint help

message pattern

{fieldName, option1, option2, ..}

The message pattern format is fairly simple. The curly brace blocks are replaced with the event field defined by fieldName. Printing the name of the event thread, for example, becomes {eventThread.javaName}.

Options can be appended in a comma-separated list after fieldName.

  • dt: prefix defines a date-time format and supports everything that DateTimeFormatter.ofPattern() is able to parse
  • c sets the String to lower case, C to upper case
  • [0-n]d defines how many dots you want to see. 0d would format "log.Warn" to "Warn". 1d formats "foo.bar.Bazz" to "bar.Bazz"
  • o stands for optional and won't print anything if the field is null. This can be useful for printing exceptions when combined with n which adds a new line before the field
  • for more options check the source

{...} is a special token which will print all fields which haven't been printed yet. This is especially useful for events which aren't log messages and might have unknown fields.

The following pattern will print all events with all their fields which happened in the last hour:


jfrprint 1h * "{eventName} {startTime} {...}" record.jfr

Note: if no pattern is provided the output will match the multi-line output of OpenJDK's jfr print CLI tool which is also the same format as used in jdk.jfr.Event::toString().

it is still very basic

I wrote it originally as an SFJP and tried to keep everything as simple and concise as possible. But since it is now set up as a jbang "script", it would be possible to make the CLI experience a bit nicer - which I might do in the future ;)

Let me know if you find it useful or want to see a particular feature.


//Stopping Containers Correctly

Stopping a container with


$ podman stop container-name
or

$ docker stop container-name

will send SIGTERM to the first process (PID 1) and shut down the container when the process terminates. If this doesn't happen within a certain time frame (default is 10s), the runtime will send SIGKILL to the process and take the container down.
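
If a clean shutdown is expected to take longer, the grace period can be adjusted via the -t flag (timeout in seconds), which both runtimes support:


$ podman stop -t 30 container-name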

So far so good, things are getting interesting when your container process isn't PID 1.

This is already the case if the process is started via a shell script.


#!/bin/bash

... 

java $FLAGS $APP

Attempting to stop this container will terminate the script while the JVM keeps running. The container runtime is usually smart enough to notice that a process is still active after the script has terminated and will wait out the grace period anyway before shutting down the container forcefully. The JVM however won't notice anything and won't have the opportunity to call shutdown hooks, write JFR dumps or finish transactions.

signal delegation

One way to solve this is by delegating the signal from the shell script to the main process:


... 
java $FLAGS $APP & # detach process from script
PID=$!             # remember process ID

trap 'kill -TERM $PID' INT TERM # delegate kill signal to JVM

wait $PID   # attach script to JVM again; note: TERM signal unblocks this wait
trap - TERM INT
wait $PID   # wait for JVM to exit after signal delegation
EXIT_STATUS=$?

The second wait prevents the script from exiting before the JVM has finished termination; it is required since the first wait is unblocked as soon as the script receives the signal.

it still didn't work

Interestingly, after implementing this (and trying out other variations of the same concept) it still didn't work for some reason - debugging showed the trap never fired.

Turns out that nothing was wrong with the signal delegation - signals just never reached the script :). So I searched around a bit and found this article which basically described the same async/wait/delegate method in greater detail (that's where I stole the EXIT_STATUS line from), so I knew it had to work. Another great article gave me the idea to check the Dockerfile again.


FROM ...
...
CMD ./start.sh

The sh shell interpreting the bash script was the first process!


$ podman ps
CONTAINER ID  IMAGE                     COMMAND             ...
de216106ff39  localhost/test:latest  /bin/sh -c ./start.sh  ...

htop (on the host) in tree view shows it more clearly:


$ htop
    1 root       ... /sbin/init
15643 podpilot   ...  - /usr/libexec/podman/conmon --api-version ...
15646 100996     ... |   - /bin/sh -c ./start.sh ...
15648 100996     ... |      - /bin/bash ./start.sh
15662 100996     ... |         - /home/jdk/bin/java -Xshare:on ...

To fix this a different CMD (or ENTRYPOINT) syntax is needed:


FROM ...
...
CMD [ "./start.sh" ]

Let's check again after rebuild + run:


$ podman ps
CONTAINER ID  IMAGE                     COMMAND         ...
72e3e60ed60b  localhost/test:latest  ./start.sh  ...

$ htop
    1 root       ... /sbin/init
15746 podpilot   ...  - /usr/libexec/podman/conmon --api-version ...
15749 100996     ... |   - /bin/bash ./start.sh ...
15771 100996     ... |      - /home/jdk/bin/java -Xshare:on ...

Much better. Since the script is now executed directly, it is able to receive and delegate the signals to the JVM. The Java Flight Recorder records also appeared in the volume, which meant that the JVM had enough time to convert the JFR repository into a single record file. The podman stop command also returned within a fraction of a second.

Since the trap is listening to SIGINT too, even the CTRL+C signal is properly propagated when the container is started in non-detached mode. Nice bonus for manual testing.

alternatives

Starting the JVM with


exec java $FLAGS $APP

will replace the shell process with the JVM process without changing PID or process name. Disadvantage: no java commandline in top and the shell won't execute any lines after the exec line (because it basically doesn't exist anymore).

... and if you don't care about the container life cycle too much, you can always tell the JVM directly to shut down. This will close all parent shells bottom-up until PID 1 terminates, which will finally stop the container.


podman exec -it container sh -c "kill \$(jps | grep -v Jps | cut -f1 -d' ')"

- - -

lessons learned:
sometimes two square brackets make the difference :)


//JFRLog - Logging directly to the Java Flight Recorder

I just wrote JFRLog - a lightweight SLF4J logger implementation which lets you log directly into the Java Flight Recorder. Since SLF4J (Simple Logging Facade for Java) has a binding to basically every logging framework out there, it doesn't even matter how the logging is done or how many frameworks the application or the dependencies use (hopefully not many).

Why? I was mostly curious how practical something like that would be, and it turned out it works great. If you are running Linux, it's likely that most of the logging is already centralized and stored in binary form, and you need tools like journalctl to read it. If you ever had to correlate log files with events of a JFR recording, you probably also asked yourself why everything isn't in the recording in the first place. Having everything in one place simplifies containers too.

JFRLog

The project is available on github under the MIT License. It requires JDK 8+ to run but JDK 14+ to build/test since the JUnit tests rely on the event streaming API. JFRLog's only dependency is the SLF4J API.

quickstart

The basic 2.5 steps to get it working are:

  1. replace your SLF4J compatible logging impl with slf4j-jfr-bridge-x.x.x.jar
  2. start the JVM with JFR enabled (e.g. -XX:StartFlightRecording=filename=logs/dump.jfr,dumponexit=true) or enable recording later
  3. check the flight recorder repository or recording dump for log.* events

update: artifacts of the first release are now available on maven central.

minimal maven dependencies:

    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.30</version>
        </dependency>
        <!-- depends on slf4j-api -->
        <dependency>
            <groupId>dev.mbien.jfrlog</groupId>
            <artifactId>slf4j-jfr-bridge</artifactId>
            <version>0.1.0</version>
        </dependency>
    </dependencies>

(adding slf4j-api is optional since it is already a dependency of JFRLog, but it is good practice to depend on the interface, and it also makes it more obvious what is going on)

minimal example:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloJFRLog {
    private final static Logger log = LoggerFactory.getLogger(HelloJFRLog.class);
    public static void main(String[] args) {
        log.info("does it work?");
    }
}
start with JFR enabled, for example by using the following JVM flag: -XX:StartFlightRecording=filename=test_dump.jfr,dumponexit=true

inspect the flight record:


$ jfr print --events "log.*" test_dump.jfr
log.Info {
  startTime = 11:52:15.954
  message = "does it work?"
  origin = "dev.mbien.jfrlogtestproject.HelloJFRLog"
  throwable = N/A
  eventThread = "main" (javaThreadId = 1)
}

That's it! Log and JFR events all in one place.

configuration

Configuration is straightforward. You can set log levels via java -D arguments or via a jfrlog.properties file placed on the classpath. Java -D args take priority over the configuration file, which simplifies testing.

example jfrlog.properties:

# Sets MyKlass to debug, the rest of the package to error and the default log level
# to info (for everything which isn't explicitly configured). 
# The most specific rule wins (order does not matter).
jfrlog.default=info
jfrlog.dev.cool.app=error
jfrlog.dev.cool.app.MyKlass=debug
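
Since the -D arguments win over the file, a level can be raised temporarily without touching the configuration - a sketch, where app.jar is just a placeholder:


java -Djfrlog.dev.cool.app=debug -jar app.jar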

You can also disable entire log levels by disabling the corresponding events in the JFR recording profile if so desired.
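
Such a recording configuration could, for example, drop all debug messages at the JFR level - a sketch, assuming a log.Debug event name analogous to the log.Info event shown above:


<?xml version="1.0" encoding="UTF-8"?>
<configuration version="2.0" label="App Logging">
    <event name="log.Debug">
        <setting name="enabled">false</setting>
    </event>
</configuration>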

this blog you are reading is using it right now

The Apache Roller blog is a big dusty webapp with many dependencies. Since every dependency uses its own favorite logging framework, it's a nice example of a near-worst-case scenario a.k.a. LoggingHell (extends JarHell). Half of the Apache libs use the old Java Commons Logging layer, struts2 depends on log4j2 for some reason, some webjars use java.util.logging, eclipselink uses its own (!) logging but thankfully with a SLF4J compatibility extension, and Roller used log4j1 at some point (the version in my WIP branch uses log4j2). Ah, and the web server needs some logging too, but thankfully Jetty uses the SLF4J layer so it doesn't care that much.

So, to see some text in a file, all that is needed is:

  • jcl-over-slf4j to connect the JCL abstraction layer to the SLF4J layer for spring etc
  • log4j-to-slf4j to make struts happy
  • jul-to-slf4j for webjars (I think)
  • org.eclipse.persistence.extension to bridge eclipselink's personal logger to SLF4J

Beautiful, all logs lead to SLF4J now. We only forgot about the logging impl itself. Usually you would have a log4j-slf4j-impl or logback-classic there - most likely. But to enable JFRLog, all that is needed is to replace it with slf4j-jfr-bridge - and that is exactly what I did.
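
In Maven terms the bridge setup could look roughly like this (versions are assumptions from that time frame; the eclipselink extension is omitted):

    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>jcl-over-slf4j</artifactId>
        <version>1.7.30</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-to-slf4j</artifactId>
        <version>2.13.3</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>jul-to-slf4j</artifactId>
        <version>1.7.30</version>
    </dependency>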

I also wrote some java (single-file java program) scripts using the JFR streaming API for easy inspection of the logging repository (but that is probably a story for another time - update: CLI tools blog entry).

How many new logging frameworks or abstraction layers will the current pandemic give us?

Let me know what you think.

fly safe (and keep JFR on)


//Configuring Eclipse Jetty to use Virtual Threads

A quick guide about how to configure Jetty to use Project Loom's virtual threads instead of plain old java threads.

Jetty's default thread pool implementation can be swapped out by implementing Jetty's ThreadPool interface and passing an instance to the Server constructor. If you are using Jetty standalone, everything is initialized via XML files.

Assuming you are using the recommended jetty home / jetty base folder structure, all that is needed is to create jetty-threadpool.xml in [jetty-base]/etc containing the following:


<Configure>
<New id="threadPool" class="dev.mbien.virtualthreads4jetty.VirtualThreadExecutor"/>
</Configure>

and put a jar containing the custom VirtualThreadExecutor into [jetty-base]/lib/ext. I uploaded a build to the release section of the vt4jetty github project.

If you don't have a lib/ext folder yet, you can enable it with:


java -jar $JETTY_HOME/start.jar --add-to-start=ext

Here is the code:


package dev.mbien.virtualthreads4jetty;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.util.thread.ThreadPool;

/**
 * Executes each task in a new virtual thread.
 * 
 * <p>Java's default ForkJoinPool is used as scheduler. To influence carrier
 * thread count use -Djdk.defaultScheduler.parallelism=N. Default is
 * {@link Runtime#availableProcessors()}.
 * 
 * @author mbien
 */
public class VirtualThreadExecutor implements ThreadPool {
    
    private final ExecutorService executor;

    public VirtualThreadExecutor() {
        executor = Executors.newThreadExecutor(
                Thread.builder().virtual().name("jetty-vt#", 0).factory());
        // too early for logging libs
        System.out.println("VirtualThreadExecutor is active.");
    }
    
    @Override
    public void execute(Runnable command) {
        executor.execute(command);
    }

    @Override
    public void join() throws InterruptedException {
        executor.shutdown();
        executor.awaitTermination(3, TimeUnit.SECONDS);
    }

    // those are hopefully only used for stats/dashboards etc
    @Override
    public int getThreads() { return -1; }

    @Override
    public int getIdleThreads() { return -1; }

    @Override
    public boolean isLowOnThreads() { return false; }
    
}

Tested with JDK16-loom+4-56 (2020/7/25) early access build from here and latest Jetty.

I encountered some JVM crashes while load testing Apache Roller with virtual threads enabled - keep in mind this is still all very much work in progress.


//[Java in] Rootless Containers with Podman

I have always been a little surprised how quickly it became acceptable to run applications wrapped in containers as root processes. Nobody would have run a web server as root before docker became mainstream if there was some way to avoid it. But with docker it became OK to have the docker daemon and the container processes all running as root. The first item in most docker tutorials became how to elevate your user rights so that you don't have to type sudo before every docker command.

But this doesn't have to be the case, of course. One project I have had an eye on is podman, a container engine implementing the docker command line interface with quite good support for rootless operations. With the release of Podman 2.0.x (and the fact that it is slowly making it into the Debian repositories) I started to experiment with it a bit more. (For the experimental rootless mode of Docker check out this page.)

cgroups v2

Containers rely heavily on kernel namespaces and a feature called control groups. To properly run rootless containers, the kernel must support and run with cgroups v2 enabled. To check whether cgroups v2 is enabled, simply run:


ls /sys/fs/cgroup
cgroup.controllers  cgroup.max.depth  cgroup.max.descendants  cgroup.procs ...

If the files are prefixed with cgroup. you are running cgroups v2; if not, it's still v1.

Many distributions still run with cgroups v1 enabled by default for backwards compatibility, but you can enable cgroups v2 with the systemd kernel flag systemd.unified_cgroup_hierarchy=1. To do this with grub, for example:

  • edit /etc/default/grub and
  • add systemd.unified_cgroup_hierarchy=1 to the key GRUB_CMDLINE_LINUX_DEFAULT (space separated list)
  • then run sudo grub-mkconfig -o /boot/grub/grub.cfg and reboot.

... and make sure you are not running an ancient linux kernel.

crun

The underlying OCI implementation has to support cgroups v2 too. I tested mostly on crun which is a super fast and lightweight alternative to runc. The runtime can be passed to podman via the --runtime flag


podman --runtime /usr/bin/crun <commands>

but it got picked up automatically in my case after I installed the package (manjaro linux, runc is still installed too).


podman info | grep -A5 ociRuntime
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 0.14.1

subordinate uid and gids

The last step required to set up rootless containers is the /etc/subuid and /etc/subgid files. If the files don't exist yet, create them and add a mapping range from your user name to container users. For example, the line:

duke:100000:65536

gives duke the right to create 65536 users in container images, starting from UID 100000. Duke himself will be mapped to root (0) in the container by default. (The same must be done for groups in subgid.) The range should never overlap with UIDs on the host system. Details in man subuid. More on users later in the volumes section.
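
A quick way to inspect the resulting mapping is printing the UID map from inside the user namespace (a sketch, assuming duke's host UID is 1000):


$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536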

rootless containers

Some things to keep in mind:
  • rootless podman runs containers with less privileges than the user which started the container
    • some of these restrictions can be lifted (via --privileged, for example)
    • but rootless containers will never have more privileges than the user that launched them
    • root in the container is the user on the host
  • rootless containers have no IP or MAC address, because network device association requires root privileges
    • podman uses slirp4netns for user mode networking
    • pinging something from within a container won't work out of the box - but don't panic: it can be configured if desired

podman

Podman uses the same command-line interface as Docker and it also understands Dockerfiles. So if everything is configured correctly it should all look familiar:

$ podman version
Version:      2.0.2
API Version:  1
Go Version:   go1.14.4
Git Commit:   201c9505b88f451ca877d29a73ed0f1836bb96c7
Built:        Sun Jul 12 22:46:58 2020
OS/Arch:      linux/amd64

$ podman pull debian:stable-slim
...
$ podman images
REPOSITORY                       TAG          IMAGE ID      CREATED       SIZE
docker.io/library/debian         stable-slim  56fae066253c  4 days ago    72.5 MB
...
$ podman run --rm debian:stable-slim cat /etc/debian_version
10.4

Setting alias docker=podman allows existing scripts to be reused without modification - but I'll stick to podman here to avoid confusion.

container communication

Rootless containers don't have their own IP addresses, but you can bind them to ports (>1024). Host-to-container communication therefore works just like communicating with any other service running on the host.


$ podman run --name wiki --rm -d -p 8443:8443 jspwiki-jetty
$ podman port -a
fd4c06b454ee	8443/tcp -> 0.0.0.0:8443
$ firefox https://localhost:8443/wiki

To set up quick-and-dirty container-to-container communication, you can let them communicate over the IP address of the host (or the host name) and ports, if the firewall is OK with that. But a more maintainable approach is pods. Pods are groups of containers which belong together. A pod is basically an infrastructure container containing the actual containers. All containers in a pod share the same localhost and use it for pod-local communication. The outside world is reached via opened ports on the pod.

Let's say we have a blog and a db. The blog requires the db, but all the host cares about is the https port of the blog container. So we can simply put the blog container and the db container into a blog-pod and let both communicate via the pod-local localhost (podhost?). The https port is opened on the blog-pod for the host, while the db isn't reachable from the outside.


$ podman pod create --name blogpod -p 8443:8443

# note: a pod starts out with one container already in it,
# it is the infrastructure container - basically the pod itself
$ podman pod list
POD ID        NAME     STATUS   CREATED        # OF CONTAINERS  INFRA ID
39ad88b8892f  blogpod  Created  7 seconds ago  1                af7baf0e7fde

$ podman run --pod blogpod --name blogdb --rm -d blog-db
$ podman run --pod blogpod --name apacheroller --rm -d roller-jetty

$ podman pod list
POD ID        NAME     STATUS   CREATED         # OF CONTAINERS  INFRA ID
39ad88b8892f  blogpod  Created  30 seconds ago  3                af7baf0e7fde

$ firefox https://localhost:8443/blog

Now we already have two containers able to communicate with each other and a host which is able to communicate with a container in the pod - and no sudo in sight.

volumes and users

We already know that the user on the outside is root on the inside, but let's quickly check it just to be sure:


$ whoami
duke
$ id -u
1000
$ mkdir /tmp/outside
$ podman run --rm -it -v /tmp/outside:/home/inside debian:stable-slim bash
root@2fbc9edaa0ee:/$ id -u
0
root@2fbc9edaa0ee:/$ touch /home/inside/hello_from_inside && exit
$ ls -l /tmp/outside
-rw-r--r-- 1 duke duke 0 31. Jul 06:55 hello_from_inside

Indeed, duke's UID of 1000 was mapped to 0 on the inside.

Since we are using rootless containers and not half-rootless containers, we can let the blog and the db run under their own users within their containers too. But what if they write logs to mounted volumes? That is when the subuid and subgid files we configured earlier come into play.

Let's say the blog-db container process should run in the namespace of the user dbduke. Since dbduke doesn't have root rights on the inside (as intended), dbduke also won't have the rights to write to the mounted volume, which is owned by duke. The easiest way to solve this problem is to simply chown the volume folder on the host to the mapped user of the container.


# script starts blog-db
# query the UID from the container and chown the volumes folder
UID_INSIDE=$(podman run --name UID_probe --rm blog-db /usr/bin/id -u)
podman unshare chown -R $UID_INSIDE /path/to/volume

podman run -v /path/to/volume:/path/inside ...  blog-db

Podman ships with a tool called unshare (the name is going to make less sense the longer you think about it) which lets you execute commands in the namespace of a different user. The command podman unshare allows us to use the rights of duke to chown a folder to the internal UID of dbduke.

If we check the folder rights from both sides, we see that the UID was mapped from:


podman run --name UID_probe --rm blog-db /usr/bin/id -u
998
to

$ ls -l volume/
drwxr-xr-x 2 100997 100997 4096 31. Jul 07:54 logs
on the outside, which is within the range specified in the /etc/subuid file - so everything works as intended. This allows user namespace isolation between containers (dbduke, wikiduke etc.) and also between the containers and the user on the host who launched them (duke himself).

And still no sudo in sight.

memory and cpu limits [and java]

Memory limits should work out of the box in rootless containers:

$ podman run -it --rm -m 256m blog-db java -Xlog:os+container -version
[0.003s][info][os,container] Memory Limit is: 268435456
...

This allows the JVM to make smarter choices without having to provide absolute -Xmx flags (but you still can).

Setting CPU limits might not work out of the box without root (tested on Manjaro which is basically Arch), since the cgroups config might have user delegation turned off by default. But it is very easy to change:


# assuming your user id is 1000 like duke
$ sudo systemctl edit user@1000.service

# now modify the file so that it contains
[Service]
Delegate=yes

# and check if it worked
$ cat /sys/fs/cgroup/user.slice/user-1000.slice/cgroup.controllers
cpuset cpu io memory pids

You might have to reboot - it worked right away in my case.


# default CPU settings uses all cores
$ podman run -it --rm blog-db sh -c\
 "echo 'Runtime.getRuntime().availableProcessors();/exit' | jshell -q"
jshell> Runtime.getRuntime().availableProcessors()$1 ==> 4

# assign specific cores to container
$ podman run -it --rm --cpuset-cpus 1,2 blog-db sh -c\
 "echo 'Runtime.getRuntime().availableProcessors();/exit' | jshell -q"
jshell> Runtime.getRuntime().availableProcessors()$1 ==> 2

Container CPU core limits should become less relevant in the Java world going forward, especially once projects like Loom [blog post] have been integrated. Since most things in Java will then run on virtual threads on top of a static carrier thread pool, it will be really easy to restrict the parallelism level of a JVM (basically -Djdk.defaultScheduler.parallelism=N and maybe another flag to limit the max GC thread count).

But it works if you need it for rootless containers too.

class data sharing

Podman uses fuse-overlayfs for image management by default, which is overlayfs running in user mode.

$ podman info | grep -A5 overlay.mount_program
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: Unknown
      Version: |-
        fusermount3 version: 3.9.2
        fuse-overlayfs: version 1.1.0

This means that JVM class data sharing is also supported out of the box if the image containing the class data archive is shared in the image graph between multiple rootless containers.

The class data stored in debian-slim-jdk (a local image I created) will be mapped only once into memory and shared between all child containers - in the example below these are blog-db, roller-jetty and wiki-jetty.


$ podman image tree --whatrequires debian-slim-jdk
Image ID: 57c885825969
Tags:     [localhost/debian-slim-jdk:latest]
Size:     340.1MB
Image Layers
└── ID: ab5467b188d7 Size: 267.6MB Top Layer of: [localhost/debian-slim-jdk:latest]
 ├── ID: fa04d6485aa5 Size:  7.68kB
 │   └── ID: 649de1f63ecc Size: 11.53MB Top Layer of: [localhost/debian-slim-jdk-jetty:latest]
 │    ├── ID: 34ce3917399d Size: 8.192kB
 │    │   └── ID: d128b826459d Size:  56.7MB Top Layer of: [localhost/roller-jetty:latest]
 │    ├── ID: 9a9c51927e42 Size: 8.192kB
 │    └── ID: d5c7176c1073 Size: 27.56MB Top Layer of: [localhost/wiki-jetty:latest]
 ├── ID: 06acb45dd590 Size:  7.68kB
 └── ID: 529ff7e97882 Size: 1.789MB Top Layer of: [localhost/blog-db:latest]
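
A sketch of how such a shared base image could be built - the jdk folder and image name are assumptions; java -Xshare:dump regenerates the default class data archive inside the JDK:


FROM debian:stable-slim
COPY jdk /jdk
# regenerate the default class data archive (lib/server/classes.jsa)
RUN /jdk/bin/java -Xshare:dump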

stopping java containers cleanly

A little tip at the end: in my experience JVMs are quite often launched from scripts in containers. This automatically has the side effect that the JVM process won't be PID 1, since it isn't the entry point. Commands like podman stop <container> will post SIGTERM to the shell script (!), wait 10s, and then simply kill the container process without the JVM ever knowing what was going on. More on that in the stopping containers blog entry.

Shutdown actions like dumping the JFR file won't be executed and IO writes might not have completed. So unless you trap the signal in the script and forward it to the JVM somehow, a more direct way to stop a Java container is needed:


# script to cleanly shut down a container
$ podman exec -it $(container) sh -c\
 "kill \$(/jdk/bin/jps | grep -v Jps | cut -f1 -d' ')"

#... or if killall is installed (it usually isn't)
$ podman exec -it $(container) killall java

Once the JVM has cleanly shut down, the launch script will finish, which the container will also notice once PID 1 is gone - and it will cleanly shut down too.

- - -

fly safe podman


//Taking a look at Virtual Threads (Project Loom)

Project Loom introduces lightweight, JVM-managed, virtual threads (old name: fibers) to Java. Let's take a look at how the project is progressing and see how they compare to plain old OS-managed threads.

Loom is currently a separate project based on JDK 15, but since there is no JEP available yet and the deadline is approaching, it is unlikely that it will land in the main repository as a preview feature before JDK 16. Early access binaries are available from jdk.java.net/loom. I used Build 15-loom+7-141 (2020/5/11) for the experiments in this blog entry - be warned, the API is not final.

virtual threads

Virtual threads are lightweight threads scheduled by the java runtime to run on plain old java threads ("real" threads). The threads used to run virtual threads are called carrier threads. While plain old java threads (POTs) can be fairly heavyweight due to the fact that they represent OS threads, millions of virtual threads can be spawned without causing problems.

The main feature however is that a virtual thread doesn't block its current carrier thread on blocking operations, like IO (Sockets, NIO channels...) or blocking java.util.concurrent API calls (Semaphores, Queues, Locks, Future.get()...) and even Thread.sleep(). Instead of blocking, the carrier will mount and resume a different virtual thread while the blocked virtual thread is waiting for a resource or an event to occur. Once the virtual thread is no longer blocked, it simply resumes execution on the next available carrier thread.

This should allow more efficient use of the CPU and additionally reduce the total number of POTs: a thread running on a core which would normally idle while waiting for a resource can now work on something else, by replacing a blocked virtual thread with another one which isn't blocked.

some properties:

  • virtual threads are java entities, independent of OS threads
  • java.lang.Thread is used for both kinds of threads, virtual and OS
  • all virtual threads are daemons
  • spawning and blocking virtual threads is cheap
  • virtual threads require carrier threads to run on
    • a carrier thread runs a virtual thread by mounting it
    • if the VT blocks, the stack is stored and the VT is unmounted to be resumed later
  • a j.u.c.Executor like a ForkJoinPool or ThreadPoolExecutor is used to schedule VTs to carriers
    • custom schedulers can be provided by implementing the Executor interface
  • millions of virtual threads can run on few carrier threads
  • Continuations (basically a VT without a scheduler) won't be in the initial release but might appear later
  • serialization is planned but currently low priority

edge cases:

  • ThreadLocals must be used with care but will still work
    • Thread.Builder#disallowThreadLocals() can be used to prohibit it entirely
    • better solutions like Scopes, Carrier- or ProcessorLocals might be implemented in future
  • some situations will cause pinning which will block the carrier if the virtual thread blocks while pinned
    • native stack frames will cause pinning
    • blocking a VT while holding a monitor (i.e. synchronized block) will currently block the carrier
      • this might be only a temporary limitation
      • doesn't apply to alternatives like j.u.c.ReentrantLock which can be used instead
    • -Djdk.tracePinnedThreads=short or -Djdk.tracePinnedThreads=full will log pinned threads

for more info: [State of Loom 1, 2] [Loom Proposal] [loom-dev mailing list] [ea javadoc]

a quick test

Since Loom is implemented as a preview feature, the flag --enable-preview has to be passed to both javac and the JVM at launch. This will load the preview module and tell the JVM that it is OK to run bytecode which has been compiled with preview features. This should reduce the risk of it accidentally landing on production machines via a maven repository :).
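
For reference, compiling and running a preview-feature program on the JDK 15 based Loom build might look like this (the file name is just an example):


javac --enable-preview --release 15 HelloLoom.java
java --enable-preview HelloLoom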


    public static void main(String[] args) {
        Thread.startVirtualThread(() -> {
            System.out.println("Hello Loom from "+Thread.currentThread()+"!");
        });
    }
output:
Hello Loom from VirtualThread[<unnamed>,ForkJoinPool-1-worker-3,CarrierThreads]!

The code above attaches a Runnable via Thread.startVirtualThread(Runnable task) to a new virtual thread and schedules it for execution on the global carrier thread pool. The output shows that the carrier thread pool in use is in fact a j.u.c.ForkJoinPool which has a work-stealing scheduler. The size of the global carrier pool can be set with the flag -Djdk.defaultScheduler.parallelism=N, the default is set to the available core count (or hardware thread count or whatever the container is configured to return).

a better test

The following example can run tasks on either a classic fixed-size thread pool (with POTs) or on an unbounded virtual thread pool. The virtual thread pool attaches each task to a new virtual thread and uses the fixed-size thread pool as carrier pool. The tasks consist of a simulated IO part and a computational part; the carrier thread count and the number of tasks can be adjusted.

This is no benchmark or load test, but rather an attempt to demonstrate the differences between the two thread types.



    public static void main(String[] args) throws InterruptedException {
       
        final boolean USE_VIRTUAL_THREADS = true;
        final int CARRIER_THREAD_COUNT = 1;
        final int TASK_COUNT = 2;
        
        // plain old thread factory and thread pool using the new builder
        ThreadFactory carrierTF = Thread.builder().name("carrier#", 0)
                                                  .daemon(true).factory();
        ExecutorService carrierPool = Executors.newFixedThreadPool(
                                          CARRIER_THREAD_COUNT, carrierTF);
        
        ExecutorService executor;
        
        if(USE_VIRTUAL_THREADS) {

            // factory for virtual threads scheduled on the carrier pool
            ThreadFactory virtualTF = Thread.builder()
                    .virtual(carrierPool)
                    .name("virtual#", 0).factory();

            // thread executor will spawn a new virtual thread for each task
            executor = Executors.newThreadExecutor(virtualTF);
            
        }else{
            executor = carrierPool;
        }
        
        for (int i = 0; i < TASK_COUNT; i++)
            executor.submit(new WaitAndHurry());
        
        executor.shutdown();
        executor.awaitTermination(20, TimeUnit.SECONDS); // virtual threads are daemons
        
    }

The task itself is less interesting:


    
    private final static class WaitAndHurry implements Runnable {

        private final static long START_TIME = System.nanoTime();
        
        @Override
        public void run() {
            doIO();    // block for 2s
            doWork();  // compute something for ~2s
            print("done");
        }

        private void doIO() {
            print("io");
            try {
                Thread.sleep(2000);
            } catch (InterruptedException ex) {
                throw new RuntimeException(ex);
            }
        }

        private void doWork() {
            print("work");
            long number = 479001599; 
            boolean prime = true;
            for(long i = 2; i <= number/2; ++i) {
                if(number % i == 0)  {
                    prime = false;
                    break;
                }
            }
            if (!prime) {throw new RuntimeException("wrong result");} // prevents the JIT from optimizing everything away
        }

        private void print(String msg) {
            double elapsed = (System.nanoTime()-START_TIME)/1_000_000_000.0d;
            String timestamp = String.format("%.2fs", elapsed);
            System.out.println(timestamp + " " + Thread.currentThread() + " " + msg);
        }
        
    }

output for 1 carrier thread and 2 tasks attached to virtual threads:

0.00s VirtualThread[virtual#0,carrier#0,main] io
0.01s VirtualThread[virtual#1,carrier#0,main] io
2.03s VirtualThread[virtual#0,carrier#0,main] work
3.88s VirtualThread[virtual#0,carrier#0,main] done
3.88s VirtualThread[virtual#1,carrier#0,main] work
5.67s VirtualThread[virtual#1,carrier#0,main] done
Knowing that the IO part of the task takes 2s and the computational part about 1.8s (on my system without warmup) we can put it into a chart by looking at the timestamps:
  VT0: |WAIT||WORK|
  VT1: |WAIT|      |WORK|
If we view the carrier thread as a resource we can draw a less abstract version which is closer to reality:
  CT0: |IDLE||WORK||WORK|
  VT0: |WAIT|     .
  VT1: |WAIT|           .

This shows that virtual threads already have the ability to wait in parallel, even when run on just a single carrier thread. The carrier thread is also the only entity which is able to do work, since it can only mount one virtual thread at a time.

Rule of thumb: virtual threads are concurrent waiters while real threads are concurrent workers.

Classic thread pool for reference using a single thread:
0.00s Thread[carrier#0,5,main] io
2.02s Thread[carrier#0,5,main] work
3.84s Thread[carrier#0,5,main] done
3.84s Thread[carrier#0,5,main] io
5.84s Thread[carrier#0,5,main] work
7.67s Thread[carrier#0,5,main] done

  CT0: |WAIT||WORK||WAIT||WORK|

Sequential as expected.


Let's bump it to 2 carrier threads and 4 tasks:

0.02s VirtualThread[virtual#0,carrier#0,main] io
0.03s VirtualThread[virtual#2,carrier#0,main] io
0.03s VirtualThread[virtual#3,carrier#0,main] io
0.02s VirtualThread[virtual#1,carrier#1,main] io
2.03s VirtualThread[virtual#0,carrier#0,main] work
2.04s VirtualThread[virtual#2,carrier#1,main] work
3.85s VirtualThread[virtual#2,carrier#1,main] done
3.85s VirtualThread[virtual#3,carrier#1,main] work
3.86s VirtualThread[virtual#0,carrier#0,main] done
3.86s VirtualThread[virtual#1,carrier#0,main] work
5.63s VirtualThread[virtual#3,carrier#1,main] done
5.69s VirtualThread[virtual#1,carrier#0,main] done

  VT0: |WAIT||WORK|
  VT1: |WAIT||WORK|
  VT2: |WAIT|      |WORK|
  VT3: |WAIT|      |WORK|
        
  CT0: |IDLE||WORK||WORK|
  CT1: |IDLE||WORK||WORK|
  VT0: |WAIT|     .
  VT1: |WAIT|     .
  VT2: |WAIT|           .
  VT3: |WAIT|           .

Now we gained the ability to work in parallel using two threads while using all virtual threads to wait in parallel - best of both worlds.

Classic thread pool for reference using 2 threads:

0.00s Thread[carrier#1,5,main] io
0.00s Thread[carrier#0,5,main] io
2.03s Thread[carrier#1,5,main] work
2.03s Thread[carrier#0,5,main] work
3.87s Thread[carrier#0,5,main] done
3.87s Thread[carrier#0,5,main] io
3.88s Thread[carrier#1,5,main] done
3.88s Thread[carrier#1,5,main] io
5.87s Thread[carrier#0,5,main] work
5.88s Thread[carrier#1,5,main] work
7.67s Thread[carrier#0,5,main] done
7.70s Thread[carrier#1,5,main] done

  CT0: |WAIT||WORK||WAIT||WORK|
  CT1: |WAIT||WORK||WAIT||WORK|

No surprises.


real threads in a virtual world

Virtual threads implicitly convert blocking APIs into an async/await pattern - and you won't even have to be aware of it as a user of an API (most of the time at least). Entire callback-based frameworks (buzzword: reactive) could be made obsolete, since their main purpose has always been to spare programmers from having to deal with any kind of concurrency problem, often even accepting that nothing can run in parallel in them (only parallel waiting happens behind the scenes, basically like in Python, or like virtual threads on a single carrier in our example). Even Node.js received basic worker_threads in v12, using a language which is single-threaded by design (data is copied to the worker when it starts and copied back again in a callback when the job is done).

Java, on the other hand, has been multi-threaded since the beginning (25 years ago; time flies) and is only now getting virtual threads (if you don't count the green threads of Java 1.1). Since virtual threads use the same java.lang.Thread class as the OS threads do, they are pretty much interchangeable with each other and can keep using established APIs. Asynchronous IO APIs are hopefully going to be used less often in the future, because code which does async IO now can be made less error-prone and easier to read by using simple blocking IO APIs from within virtual threads.

Plain old java threads will most likely still have a purpose (besides being a carrier) in the future, however: not every long-running background task which periodically reads a few bytes from a file will benefit from being virtual, and limitations like pinning due to native stack frames (which also blocks the carrier) will probably always require some additional POTs for special cases.


summary

Project Loom has made significant progress recently and is already in a fairly usable state. I am looking forward to it being integrated into the main repository (hopefully JDK 16 or 17). Virtual threads have the potential to be a big game changer for Java: better concurrency while using fewer OS threads, without significant code changes - what more could you ask for.

Debugging a few million virtual threads is going to be interesting; thread dumps of the future will require a bit more tooling, e.g. hierarchical views, or at least a good scroll wheel on the mouse :)


//Tracking Apache Roller page views using JFR custom events

The Java Flight Recorder is great for monitoring the JVM, but after checking out the new JFR Event Streaming in JDK 14, I was curious how well it would work in practice for basic logging needs. I always wanted to generate some page view statistics for this blog, but never liked the idea of tracking users in their browsers (or at least those few who turned off tracking protection for some reason or used an outdated browser).

So let's take a look at whether we can record custom page view events using JFR instead of logging or storing stats in a database.

The goal is to record the link the anonymous reader clicked to get to the page (or null) and the actual page on the blog which is visited.

Custom Event


    @Label("Incoming Referrer Event")
    @Description("Reader landed on the Blog.")
    @StackTrace(false)
    public class IncomingReferrerEvent extends Event {

        @Label("Referrer URL")
        private final String referrerUrl;
        
        @Label("Request URL")
        private final String requestUrl;

        public IncomingReferrerEvent(String referrerUrl, String requestUrl) {
            this.referrerUrl = referrerUrl;
            this.requestUrl = requestUrl;
        }

    }

Defining a custom JFR event is trivial: simply extend jdk.jfr.Event and annotate the fields; the code should be self-explanatory. @StackTrace(false) disables the recording of stack traces since we don't need them for this event. Custom events are enabled by default. The event name is the fully qualified class name by default, but it can be overwritten with @Name("dev.foo.NewName").


    new IncomingReferrerEvent(referrerUrl, requestUrl).commit();

This records the event (placed somewhere in doGet() of Apache Roller's HttpServlet). Events can have a beginning and an end; by simply calling commit() without begin() or end(), only the timestamp at commit and the event values are stored. Due to the design of the JFR you don't really have to worry about concurrency, because the events are held in thread-local buffers before they are flushed to disk, which should make it fairly performant even in highly parallel settings.
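
If the duration of the request were interesting too, begin() and end() could bracket the measured section - a sketch:


    IncomingReferrerEvent event = new IncomingReferrerEvent(referrerUrl, requestUrl);
    event.begin();
    // work which should be attributed to the event happens here
    event.end();
    event.commit();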

Configuration

The next step is to setup a JFR configuration file.



<?xml version="1.0" encoding="UTF-8"?>
<configuration version="2.0" label="Blog" description="Roller JFR Events" provider="Duke Himself">

    <event name="org.apache.roller.weblogger.jfr.IncomingReferrerEvent">
        <setting name="enabled">true</setting>
    </event>

</configuration>

I could have used the default configuration provided by the JDK, but this would lead to much larger file sizes after some uptime. The configuration above only enables our custom event; everything else is turned off. For a larger blueprint see JDK_HOME/lib/jfr/default.jfc.

Once the server is running we can switch the recording on using jcmd. ($1 is the PID)


jcmd $1 JFR.configure repositorypath="/home/server/jfr" samplethreads=false
jcmd $1 JFR.start name="blogstats" settings="/home/server/jfr/blog.jfc"

First Test - 8 MB, not great, not terrible

After a quick test we can print the summary of the recording file in the repository to check if we see any events.


jfr summary 2020_02_03_04_35_07_519/2020_02_07_16_46_39.jfr 

 Version: 2.1
 Chunks: 1
 Start: 2020-02-07 15:46:39 (UTC)

 Event Type                                             Count  Size (bytes) 
==============================================================================
 jdk.CheckPoint                                        130997      12587784
 org.apache.roller.weblogger.jfr.IncomingReferrerEvent     66          4735
 jdk.Metadata                                               1         81383
 jdk.ThreadEnd                                              0             0
 jdk.TLSHandshake                                           0             0
 jdk.ThreadSleep                                            0             0
 jdk.ThreadPark                                             0             0
 jdk.JavaMonitorEnter                                       0             0
...

There it is, we can see IncomingReferrerEvent in the record. You might be wondering what those jdk.CheckPoint events are. I wasn't sure either since I noticed them for the first time with JDK 14 and a web search resulted in no hits, so I asked on hotspot-jfr-dev. It turns out that they are part of the file format - for more details read the reply or, even better, the rest of the mail thread too.

The check points will create around 8 MB of data per day (one check point per second), which can be seen as static footprint without recording any actual events. 8 MB, not great, not terrible.

More Events

Now having a good estimate for the footprint, it was time to enable more events, since correlating IncomingReferrerEvents with GC events, network/CPU load or thread count might be interesting, given that it all runs on a Raspberry Pi 3b+ with limited resources. I ended up enabling some more events after grep'ing the code base of VisualVM for event Strings (it looks like JFR support is coming soon; the custom build I am using looks fairly complete already) and adding them to the blog.jfc config file, as sketched below.
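
For illustration, additional events are enabled by appending entries like these to blog.jfc (the period setting is an assumption based on JDK_HOME/lib/jfr/default.jfc):


    <event name="jdk.CPULoad">
        <setting name="enabled">true</setting>
        <setting name="period">1 s</setting>
    </event>

    <event name="jdk.GarbageCollection">
        <setting name="enabled">true</setting>
    </event>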


jfr summary blog_dump06.jfr 
...
 Event Type                                              Count  Size (bytes) 
==============================================================================
 jdk.CheckPoint                                        1223057     118202307
 jdk.NetworkUtilization                                 264966       4731868
 jdk.CPULoad                                            242765       6198487
 jdk.ActiveSetting                                        3348        123156
 org.apache.roller.weblogger.jfr.IncomingReferrerEvent    6367         89338
 jdk.GCHeapSummary                                         830         35942
 jdk.GarbageCollection                                     415         12577
 jdk.Metadata                                               13       1038982
 jdk.JVMInformation                                         12         27113
 jdk.GCHeapConfiguration                                    12           353
 jdk.ActiveRecording                                        12           727

Even with the added events and another 14 day test run I ended up with a file size of about 130 MB. That is about 9.28 MB per day - mostly CheckPoints ;) - in fact ~118 MB of them. Luckily the file compresses nicely.

The undocumented Flag

If you take a look at the JDK sources you will notice a flag with the description "flush-interval, Minimum time before flushing buffers, measured in (s)econds..." - which sounds like exactly what we need.

jcmd $1 JFR.start name="blogstats" flush-interval=10s settings="/home/server/jfr/blog.jfc"

Results look promising: as expected, around a 10x reduction in CheckPoint events. (I don't have more data to show since I literally just added the flag and rebooted the server as I am typing this.)

update: the size of the record dump file after ~14.3 days (343 h) is 25.13 MB, which is about 1.8 MB per day - much better.

I can't recommend relying on undocumented or hidden flags, since they can change or go away without notice (@see the hotspot-jfr-dev discussion). The real-time nature of event streaming is also gone with a buffer delay of 10 s, so keep that in mind.

The File Format

The file format for the JFR repository and record dumps isn't standardized and can change. So another thing to keep in mind if JFR is used for logging is that it might not be possible to read old JDK 14 dumps with JDK 20 in the future (although reading JDK 11 dumps with 14 works just fine). But this doesn't sound like a big issue to me since JDKs don't just disappear: it will always be possible to open JDK 14 dumps with JDK 14 (and maybe even with a future JDK 20 if backwards compatibility can be maintained).

Conclusion

Although the static footprint of the JFR in JDK 14 came as a surprise, I am not too concerned about it. Storage is cheap and I am sure the issue will be solved or reduced to a minimum in one way or another.

The benefit of tracking my blog stats with JFR is that I can inspect them with the same tools I would use for JVM performance monitoring (VisualVM, JDK Mission Control, etc). JFR Event Streaming and the OpenJDK command line tools also make it easy to access the data in a structured manner.
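
For example, the jfr command line tool can filter a recording for the custom event (using the dump from above):


jfr print --events org.apache.roller.weblogger.jfr.IncomingReferrerEvent blog_dump06.jfr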

JFR can not only be used for JVM monitoring but certainly also for structured logging in some scenarios. Log files will probably never disappear completely, and I am sure JFR (or the original JRockit Flight Recorder implementation) was never intended to fully replace a logging framework - but maybe one day logs will mostly contain lines produced by Event#toString(), with dual stacks running only where still required :).

fly safe


//JFR Event Streaming with Java 14

The Java Flight Recorder has a long history. It used to be part of the BEA JRockit JVM, became a commercial feature of the Oracle JDK (7+) after Oracle acquired BEA and Sun, and was finally fully open sourced with the release of OpenJDK 11 (backports to 8 exist) (JEP 328). OpenJDK 14 will add some improvements to JFR.

JEP 349 will allow the continuous consumption of Java Flight Recorder events in-memory from within the same JVM, or out-of-process from a different JVM via its JFR repository file.

JEP 349 already made it into the early access builds and can be experimented with since OpenJDK 14 build 22+. Let's check it out.

In-Process Streaming

The base JFR configuration files (XML) can be found in JDK_HOME/lib/jfr. The default configuration (default.jfc) is relatively low overhead, while profile.jfc will provide more data. Java Mission Control can create custom settings based on templates if needed. I am using the default config for the examples.

The first example will start JFR on the local JVM using the default recorder settings and register a few event handlers to check if it is working.


import java.io.IOException;
import java.text.ParseException;
import jdk.jfr.Configuration;
import jdk.jfr.consumer.EventStream;
import jdk.jfr.consumer.RecordingStream;

public class JFRStreamTest {
    
    public static void main(String[] args) throws IOException, ParseException  {
        
        Configuration config = Configuration.getConfiguration("default");
        System.out.println(config.getDescription());
        System.out.println("settings:");
        config.getSettings().forEach((key, value) -> System.out.println(key+": "+value));
        
        // open a stream and start local recording
        try (EventStream es = new RecordingStream(config)) {
            
            // register event handlers
            es.onEvent("jdk.GarbageCollection", System.out::println);
            es.onEvent("jdk.CPULoad", System.out::println);
            es.onEvent("jdk.JVMInformation", System.out::println);
            
            // start and block
            es.start();
        }
    
    }
    
}

The above example should print information about the running JVM once, CPU load periodically and GC events when they occur.

Out-of-Process Streaming

Simply start the flight recorder as usual via jcmd <PID> JFR.start or via the JVM flag -XX:+FlightRecorder at startup. The repository location will be stored in the jdk.jfr.repository system property as soon as JFR is running (new in Java 14). It can also be set at startup via a comma separated list of flight recorder options: -XX:FlightRecorderOptions=repository=./blackbox

Update thanks to Erik from Oracle in the comments section: The repository location can also be set using jcmd <PID> JFR.configure repositorypath=<directory>. If you set it after a recording has started, new data will be written to the new location.


$ jcmd -l | grep netbeans
8492 org.netbeans.Main ... --branding nb

$ jcmd 8492 JFR.start name=streamtest
Started recording 1. No limit specified, using maxsize=250MB as default.
Use jcmd 8492 JFR.dump name=streamtest filename=FILEPATH to copy recording data to file.

$ jinfo -sysprops 8492 | grep jfr
jdk.jfr.repository=/tmp/2019_11_18_02_19_59_8492

Now that the recording is running and we know where the repository is, a second JVM can open a stream to the live JFR repository and monitor the application. Note that we did not dump any JFR records to a file; we connect to the live repository directly.


import java.io.IOException;
import java.nio.file.Path;
import jdk.jfr.consumer.EventStream;

public class JFRStreamTest {
    
    public static void main(String[] args) throws IOException  {
        // connect to JFR repository
        try (EventStream es = EventStream.openRepository(Path.of("/tmp/2019_11_18_02_19_59_8492"))) {
            
            // register some event handlers
            //es.onEvent("jdk.CPULoad", System.out::println);
            es.onEvent("jdk.SocketRead", System.out::println);
            es.onEvent("jdk.SocketWrite", System.out::println);
            
            // start and block
            es.start();
        }
    
    }
    
}

As a quick test I used the example above to monitor a NetBeans instance running on Java 14 and let the IDE check for updates. Since we watch for SocketRead and SocketWrite events the output looked like:

jdk.SocketRead {
  startTime = 04:34:09.571
  duration = 117.764 ms
  host = "netbeans.apache.org"
  address = "40.79.78.1"
  port = 443
  timeout = 30.000 s
  bytesRead = 5 bytes
  endOfStream = false
  eventThread = "pool-5-thread-1" (javaThreadId = 163)
  stackTrace = [
    java.net.Socket$SocketInputStream.read(byte[], int, int) line: 68
    sun.security.ssl.SSLSocketInputRecord.read(InputStream, byte[], int, int) line: 457
    sun.security.ssl.SSLSocketInputRecord.decode(ByteBuffer[], int, int) line: 165
    sun.security.ssl.SSLTransport.decode(TransportContext, ByteBuffer[], int, int, ...
    sun.security.ssl.SSLSocketImpl.decode(ByteBuffer) line: 1460
    ...
  ]
}
...

Streaming Dumped Records

Opening streams (EventStream.openFile(path)) to JFR record dumps (jcmd <PID> JFR.dump filename=foo.jfr) is of course also possible - it works as expected.
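
A minimal sketch (the file name matches the dump command above):


import java.io.IOException;
import java.nio.file.Path;
import jdk.jfr.consumer.EventStream;

public class JFRDumpStreamTest {

    public static void main(String[] args) throws IOException {
        // replay all events of a dumped recording
        try (EventStream es = EventStream.openFile(Path.of("foo.jfr"))) {
            es.onEvent("jdk.CPULoad", System.out::println);
            es.start();
        }
    }

}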

Conclusion

Pretty cool new feature! It is currently not possible to do in-memory but out-of-process streaming without syncing on repository files. But since a ramdisk can work around this issue so easily, I am not even sure this capability would be worth it.

fly safe


//[Dynamic] [Application] Class Data Sharing in Java

Class Data Sharing as a JVM feature has existed for quite some time already, but it became more popular in the context of containers. CDS maps a pre-defined class archive into memory and makes it shareable between JVM processes. This can improve startup time but also reduce per-JVM footprint in some cases.

Although basic CDS was already supported in Java 5, this blog entry assumes that Java 11 or later is used, since JVM flags and overall capabilities changed over time. CDS might not be supported on all architectures or GCs: G1, CMS, ShenandoahGC, Serial GC, Parallel GC, and ParallelOld GC all support CDS on 64-bit Linux, while ZGC for example doesn't support it yet.


JDK Class Data Sharing

The most basic form of CDS is set up to share only JDK class files. It is enabled by default since JDK 12 (JEP 341). If you check the Java version in a shell you will notice that the JVM version string contains "sharing".

[mbien@hulk server]$ java -version
openjdk version "12.0.1" 2019-04-16
OpenJDK Runtime Environment AdoptOpenJDK (build 12.0.1+12)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 12.0.1+12, mixed mode, sharing)

The shared class archive comes preinstalled in ${JAVA_HOME}/lib/server/, called classes.jsa. By temporarily renaming the file or switching to an older JDK, "sharing" won't appear in the version string anymore.

[mbien@hulk server]$ java -version
openjdk version "12.0.1" 2019-04-16
OpenJDK Runtime Environment AdoptOpenJDK (build 12.0.1+12)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 12.0.1+12, mixed mode)

The JDK CDS archive can be manually regenerated by invoking java -Xshare:dump with root privileges. Recent JVMs are configured with -Xshare:auto at startup, which will automatically use CDS if available. Enforcing CDS with -Xshare:on will cause the JVM to fail if no archive is found.
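
For example (a sketch; regenerating the archive needs write access to the JDK directory):


$ sudo java -Xshare:dump
$ java -Xshare:on -version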


Application Class Data Sharing

The basic CDS archive only contains JDK class data. Adding application class data (JEP 310) is fairly easy but comes with some limitations.

  • the classpath used at archive creation time must be the same as (or a prefix of) the classpath used at run time
  • wildcards or exploded JARs are not allowed in the path (no cheating)

1) create a list of classes (JDK and APP; Java 11+) to include in the shared archive:


$ java -Xshare:off -XX:DumpLoadedClassList=classes.list -jar foo.jar

2) create the archive using that list:


$ java -Xshare:dump -XX:SharedClassListFile=classes.list -XX:SharedArchiveFile=foo.jsa -cp foo.jar

3) launch the applications using the shared archive:


$ java -Xshare:on -XX:SharedArchiveFile=foo.jsa -jar foo.jar

-Xlog:class+load helps to verify that it is working properly.

[0.062s][info][class,load] java.lang.Object source: shared objects file
[0.062s][info][class,load] java.io.Serializable source: shared objects file
[0.062s][info][class,load] java.lang.Comparable source: shared objects file
[0.062s][info][class,load] java.lang.CharSequence source: shared objects file
...

The applications (containers) don't have to be identical to make use of custom CDS archives. The archive is based on the class list of step 1, which can be freely modified or merged with other lists. The main limitation is the classpath prefix rule.
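
The class list itself is plain text with one slash-separated class name per line (org/example/Foo stands in for an application class here):


java/lang/Object
java/lang/String
org/example/Foo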

Some Results

As a quick test I used OpenJDK 11 + Eclipse Jetty + Apache JSPWiki and edited a single page (to ensure the classes are loaded), generated a custom archive, and started a few more instances with CDS enabled using the archive.


classes.list size = 4296 entries
wiki.jsa size = 57.6 MB

 VIRT    RES     SHR  %MEM    TIME    COMMAND
2771292 315128  22448  8,3   0:20.54 java -Xmx128m -Xms128m -Xshare:off -cp ... 
2735468 259148  45184  6,9   0:18.38 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...
2735296 270044  45192  7,1   0:19.66 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...
2735468 259812  45060  6,9   0:18.26 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...
2735468 256800  45196  6,8   0:19.10 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...

The first JVM is started with CDS turned off as a reference. The four other instances have sharing enabled using the custom archive.


Dynamic Class Data Archives

JEP 350, which is currently targeted for Java 13, will allow combining a static base archive with a dynamically generated archive.

The dynamic class data archive is created in a setup phase at application exit (-XX:ArchiveClassesAtExit=dynamic.jsa), essentially automating steps 1 and 2. Just like before, -XX:SharedArchiveFile=dynamic.jsa tells the JVM to map the shared archive. Since the dynamic archive references the static JDK archive, both will be used automatically.
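
A minimal sketch of both phases (jar and main class are placeholders):


$ java -XX:ArchiveClassesAtExit=dynamic.jsa -cp foo.jar Foo
$ java -XX:SharedArchiveFile=dynamic.jsa -cp foo.jar Foo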

The main advantage (besides the added convenience) is that the dynamic archive supports both builtin class loaders and user-defined class loaders.

now go forth and share class data :)


//Cleaning Bash History using a Java 11 Single-File Sourcecode Program

With JEP 330, Java 11 adds the ability to launch a Java source file directly from the command line without having to compile it explicitly first. I can see this feature being convenient for some simple scripting or quick tests. Although Java isn't the most concise language, it still has the benefit of being quite readable and having powerful utility APIs.

The following example reads the bash shell history from a file into a LinkedHashSet to retain ordering and keeps the most recently used line if duplicates exist, essentially cleaning up the history.



import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashSet;

public class HistoryCleaner {

    public static void main(String[] args) throws IOException {

        Path path = Path.of(System.getProperty("user.home")+"/.bash_history");

        LinkedHashSet<String> deduplicated = new LinkedHashSet<>(1024);
        // stream the history; re-inserting a line moves it to the end,
        // so the most recent use of a duplicate wins
        try (var lines = Files.lines(path)) {
            lines.forEachOrdered(line -> {
                deduplicated.remove(line);
                deduplicated.add(line);
            });
        }

        Files.write(path, deduplicated);
    }

}


duke@virtual1:~$ java HistoryCleaner.java

will compile and run the "script".


Shebangs are supported too


#!/usr/bin/java --source 11 -Xmx32m -XX:+UseSerialGC

import java.io....

However, since this is not part of the Java language specification, single-file Java programs (SFJP) with a #! (shebang) in the first line are technically not regular Java files and will not compile with javac. This also means that the common file naming conventions are not required for pure SFJPs (i.e. the file name does not have to match the class name).

Using a different file extension is advised. We can't call it .js - so let's call it .sfjp :).


duke@virtual1:~$ ./HistoryCleaner.sfjp

This will compile and run the HistoryCleaner SFJP containing the shebang declaration in the first line. Keeping the .java file extension would result in a compiler error. As a bonus, we can also set JVM flags using the shebang mechanism. Keep in mind that --source [version] must be the first flag, to tell the java launcher that it should expect a source file instead of bytecode.
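
Without a shebang, the same file can still be launched by passing the source flag explicitly, since the .sfjp extension alone won't identify it as a source file:


duke@virtual1:~$ java --source 11 HistoryCleaner.sfjp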


//NetBeans OpenCL Pack

Since I am doing a lot with OpenCL lately I decided to try to improve the tooling around OpenCL a bit. A weekend later the NetBeans OpenCL Pack was born :).

Features include:

  • OpenCL editor with syntax highlighting, code completion and CL reference page integration
  • OpenCL compiler integration
  • In-editor annotations of compiler warnings and errors updated as you type
  • JOCL project template

Technical Details:

The editor uses ANTLR as parser and lexer. This enables simple things like keyword highlighting as well as more complex features like semantic highlighting, formatting and auto completion (formatting is not yet implemented). It can also detect and report syntax errors; however, this feature is automatically disabled if an OpenCL compiler is present on the host system. All OpenCL implementations detected with the help of JOCL can be used as compiler backends.

Instead of using the old OpenGL Pack as a template I decided to write it from scratch using the latest NetBeans 7 and Java 7 APIs. So you will have to start NetBeans with JDK 7 to be able to use it.

Download

You can download it from the NetBeans plugin portal [mirror]; the source code is on GitHub.

Feedback, contributions and bug reports are, as always, appreciated.

Screenshots:

[screenshots: auto completion, editor, project templates]

have fun!


//Many little improvements made it into JOCL recently

Ok, some of them are big, but I will only cover the little things in this blog entry :).

CLKernel

I added multiple utility methods to CLKernel and related classes. It is for example now possible to create a kernel and set its arguments in one line.


CLKernel sha512 = program.createCLKernel("sha512", padBuffer, digestBuffer, rangeBuffer);

Thanks to feedback in the JOCL forums I also added methods to set vector typed arguments directly. In the past you could do this only by setting them via a java.nio.Buffer.


kernel.setArg(index, x, y, z, w);

Another small feature of CLKernel is the ability to enforce 32bit arguments. If you want to switch between single and double floating point precision at runtime, or mix both to improve performance, you will have to compile the program with the double FP extension enabled. By setting kernel.setForce32bitArgs(true), all Java doubles used as kernel arguments will be automatically cast down to 32bit CL floats (see the MultiDeviceFractal demo for an example). This is nothing special but might save you several if(single){setArg((float)foo)}else{setArg(foo)} constructs.
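
For example (a sketch; the argument index and value are arbitrary):


    kernel.setForce32bitArgs(true);
    kernel.setArg(0, 3.14); // the double is cast down and passed to CL as a 32bit float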

CLWork

CLKernel still only represents the function in the OpenCL program you want to call - nothing more. The new CLWork object contains everything required for kernel execution, like the NDRange and the kernel itself.


    int size = buffer.getNIOCapacity();
    CLWork1D work = CLWork.create1D(program.createCLKernel("sum", buffer, size));
    work.setWorkSize(size, 1).optimizeFor(device);

    // execute
    queue.putWriteBuffer(buffer, false)
         .putWork(work)
         .putReadBuffer(buffer, true);

optimizeFor(device) adjusts the workgroup size to meet device specific recommended values. This should ensure that all compute units of your GPU are used by dividing the work into groups (however, this only works if your task does not care about the workgroup size; see the javadoc).

CLSubDevice

Sometimes you don't want to put your CLDevice under 100% load. This might be the case, for example, if the device is the CPU your application is running on, or if you have to share the GPU with an OpenGL context for rendering. One easy way of controlling device load is to limit the amount of compute units used for a task.


    CLPlatform platform = CLPlatform.getDefault(version(CL_1_1), type(CPU));

    CLDevice device = platform.getMaxFLOPSDevice(type(CPU));
    CLSubDevice[] subs = device.createSubDevicesByCount(4, 4);
    // the array now contains two virtual devices with four CPU cores each

    CLContext context = CLContext.create(subs);
    CLCommandQueue queue = subs[0].createCommandQueue();
    ...

CLSubDevice extends CLDevice and can be used for context creation, queue creation and everywhere else you would use a CLDevice. Prior to creating subdevices you should check that device.isFissionSupported() returns true.

CLProgram builder

Ok, this utility is not that new but I haven't blogged about it yet. If program.build() isn't enough, you should take a look at the program builder. CLBuildConfiguration stores everything needed for program compilation and is easily configurable via the builder pattern :).


        // reusable builder
        CLBuildConfiguration builder = CLProgramBuilder.createConfiguration()
                                     .withOption(ENABLE_MAD)
                                     .forDevices(context.getDevices())
                                     .withDefine("RADIUS", 5)
                                     .withDefine("ENABLE_FOOBAR");
        builder.build(programA);
        builder.build(programB);
        ...

CLBuildConfiguration is fully reusable and can be upgraded to a CLProgramConfiguration by combining it with a CLProgram. Both can be serialised, which allows storing the build configuration or the entire prebuilt program on disk or sending it over the network. (Caching binaries on disk can save startup time, for example.)
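
The storing side could look like this (a sketch assuming plain Java serialization, mirroring the load example below):


        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file));
        oos.writeObject(builder);
        oos.close();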


        // program configuration
ObjectInputStream ois = new ObjectInputStream(new FileInputStream(file));
        CLProgramConfiguration programConfig = CLProgramBuilder.loadConfiguration(ois, context);
        assertNotNull(programConfig.getProgram());
        ois.close();
        program = programConfig.build(); // builds from source or loads binaries if possible
        assertTrue(program.isExecutable());

Note: loading binaries and associating them with the right driver/device is currently not trivial with OpenCL. Even if everything works as intended, it is still possible that the driver refuses the binaries for some reason (driver update... etc). That's why it's recommended to add the program source to the configuration before calling build(), to allow an automatic rebuild as a fallback.


        // another entry point for complex builds (prepare() returns CLProgramConfiguration)
        program.prepare().withOption(ENABLE_MAD).forDevice(context.getMaxFlopsDevice()).build();

(all snippets have been stolen from the junit tests)
I am sure I forgot something... but this should cover at least some of the incremental improvements. Expect a few more blog entries for the larger features soon.

- - - - - -
In other news: Nvidia released OpenCL 1.1 drivers, some of us thought this would never happen -> all major vendors (AMD, Intel, NV, IBM, ZiiLABS ...) now support OpenCL 1.1 (screenshot)

have fun!


//Developing with JOCL on AMD, Intel and Nvidia OpenCL platforms

One nice feature of OpenCL is that the platform abstraction was handled in the spec from day one. You can install all OpenCL drivers side by side and let the application choose at runtime on which device and on which platform it should execute the kernels.
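
With JOCL this choice boils down to a simple loop - a sketch using the high level API:


    // enumerate all installed OpenCL platforms and their devices
    for (CLPlatform platform : CLPlatform.listCLPlatforms()) {
        System.out.println(platform.getName() + " (" + platform.getVersion() + ")");
        for (CLDevice device : platform.listCLDevices()) {
            System.out.println("  " + device.getName() + " [" + device.getType() + "]");
        }
    }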

As of today there are four vendors providing OpenCL implementations for the desktop. AMD and Intel support the OpenCL 1.1 specification, while Nvidia apparently tries to stick with 1.0 to encourage their customers to stick with CUDA ;-). [edit] And of course there is also Apple, providing out-of-the-box OpenCL 1.0 support in MacOS 10.6.

JOCL contains a small CLInfo utility which can be used to quickly verify OpenCL installations. Here is the output on my system (Ubuntu was booted) with all three SDKs installed:

CL_PLATFORM_NAME: ATI Stream | NVIDIA CUDA | Intel(R) OpenCL
CL_PLATFORM_VERSION: OpenCL 1.1 ATI-Stream-v2.2 (302) | OpenCL 1.0 CUDA 4.0.1 | OpenCL 1.1 LINUX
CL_PLATFORM_PROFILE: FULL_PROFILE | FULL_PROFILE | FULL_PROFILE
CL_PLATFORM_VENDOR: Advanced Micro Devices, Inc. | NVIDIA Corporation | Intel(R) Corporation
CL_PLATFORM_ICD_SUFFIX_KHR: AMD | NV | Intel
CL_PLATFORM_EXTENSIONS: [cl_khr_icd, cl_amd_event_callback] | [cl_khr_icd, cl_khr_byte_addressable_store, cl_nv_compiler_options, cl_nv_pragma_unroll, cl_nv_device_attribute_query, cl_khr_gl_sharing] | [cl_khr_icd, cl_khr_byte_addressable_store, cl_khr_fp64, cl_khr_local_int32_extended_atomics, cl_khr_local_int32_base_atomics, cl_khr_global_int32_base_atomics, cl_khr_gl_sharing, cl_intel_printf, cl_khr_global_int32_extended_atomics, cl_ext_device_fission]

(device columns: CPU via ATI Stream | GPU 1 via NVIDIA CUDA | GPU 2 via NVIDIA CUDA | CPU via Intel OpenCL)

CL_DEVICE_NAME: Intel(R) Core(TM) i7 CPU 940 @ 2.93GHz | GeForce GTX 295 | GeForce GTX 295 | Intel(R) Core(TM) i7 CPU 940 @ 2.93GHz
CL_DEVICE_TYPE: CPU | GPU | GPU | CPU
CL_DEVICE_AVAILABLE: true | true | true | true
CL_DEVICE_VERSION: OpenCL 1.1 ATI-Stream-v2.2 (302) | OpenCL 1.0 CUDA | OpenCL 1.0 CUDA | OpenCL 1.1
CL_DEVICE_PROFILE: FULL_PROFILE | FULL_PROFILE | FULL_PROFILE | FULL_PROFILE
CL_DEVICE_ENDIAN_LITTLE: true | true | true | true
CL_DEVICE_VENDOR: GenuineIntel | NVIDIA Corporation | NVIDIA Corporation | Intel(R) Corporation
CL_DEVICE_EXTENSIONS: [cl_amd_device_attribute_query, cl_khr_byte_addressable_store, cl_khr_int64_extended_atomics, cl_khr_local_int32_extended_atomics, cl_amd_fp64, cl_amd_printf, cl_khr_local_int32_base_atomics, cl_khr_int64_base_atomics, cl_khr_global_int32_base_atomics, cl_khr_gl_sharing, cl_khr_global_int32_extended_atomics, cl_ext_device_fission] | [cl_khr_icd, cl_khr_byte_addressable_store, cl_khr_fp64, cl_khr_local_int32_extended_atomics, cl_khr_local_int32_base_atomics, cl_nv_compiler_options, cl_nv_pragma_unroll, cl_nv_device_attribute_query, cl_khr_global_int32_base_atomics, cl_khr_gl_sharing, cl_khr_global_int32_extended_atomics] | [cl_khr_icd, cl_khr_byte_addressable_store, cl_khr_fp64, cl_khr_local_int32_extended_atomics, cl_khr_local_int32_base_atomics, cl_nv_compiler_options, cl_nv_pragma_unroll, cl_nv_device_attribute_query, cl_khr_global_int32_base_atomics, cl_khr_gl_sharing, cl_khr_global_int32_extended_atomics] | [cl_khr_byte_addressable_store, cl_khr_fp64, cl_khr_local_int32_extended_atomics, cl_khr_local_int32_base_atomics, cl_khr_global_int32_base_atomics, cl_khr_gl_sharing, cl_intel_printf, cl_khr_global_int32_extended_atomics, cl_ext_device_fission]
CL_DEVICE_MAX_COMPUTE_UNITS: 8 | 30 | 30 | 8
CL_DEVICE_MAX_CLOCK_FREQUENCY: 2934 | 1242 | 1242 | 2930
CL_DEVICE_VENDOR_ID: 4098 | 4318 | 4318 | 32902
CL_DEVICE_OPENCL_C_VERSION: OpenCL C 1.1 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | OpenCL C 1.1
CL_DRIVER_VERSION: 2.0 | 270.41.06 | 270.41.06 | 1.1
CL_DEVICE_ADDRESS_BITS: 64 | 32 | 32 | 64
CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT: 8 | 1 | 1 | 8
CL_DEVICE_PREFERRED_VECTOR_WIDTH_CHAR: 16 | 1 | 1 | 16
CL_DEVICE_PREFERRED_VECTOR_WIDTH_INT: 4 | 1 | 1 | 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_LONG: 2 | 1 | 1 | 2
CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT: 4 | 1 | 1 | 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE: 0 | 1 | 1 | 2
CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR: 16 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 16
CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT: 8 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 8
CL_DEVICE_NATIVE_VECTOR_WIDTH_INT: 4 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG: 2 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 2
CL_DEVICE_NATIVE_VECTOR_WIDTH_HALF: 0 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 0
CL_DEVICE_NATIVE_VECTOR_WIDTH_FLOAT: 4 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_DOUBLE: 0 | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | 2
CL_DEVICE_MAX_WORK_GROUP_SIZE: 1024 | 512 | 512 | 1024
CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS: 3 | 3 | 3 | 3
CL_DEVICE_MAX_WORK_ITEM_SIZES: [1024, 1024, 1024] | [512, 512, 64] | [512, 512, 64] | [1024, 1024, 1024]
CL_DEVICE_MAX_PARAMETER_SIZE: 4096 | 4352 | 4352 | 1024
CL_DEVICE_MAX_MEM_ALLOC_SIZE: 1073741824 | 234831872 | 234700800 | 3154703360
CL_DEVICE_GLOBAL_MEM_SIZE: 3221225472 | 939327488 | 938803200 | 12618813440
CL_DEVICE_LOCAL_MEM_SIZE: 32768 | 16384 | 16384 | 32768
CL_DEVICE_HOST_UNIFIED_MEMORY: true | CLInvalidValueException [CL_INVALID_VALUE] | CLInvalidValueException [CL_INVALID_VALUE] | true
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE: 65536 | 65536 | 65536 | 131072
CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE: 64 | 0 | 0 | 64
CL_DEVICE_GLOBAL_MEM_CACHE_SIZE: 32768 | 0 | 0 | 262144
CL_DEVICE_MAX_CONSTANT_ARGS: 8 | 9 | 9 | 128
CL_DEVICE_IMAGE_SUPPORT: false | true | true | true
CL_DEVICE_MAX_READ_IMAGE_ARGS: 0 | 128 | 128 | 128
CL_DEVICE_MAX_WRITE_IMAGE_ARGS: 0 | 8 | 8 | 128
CL_DEVICE_IMAGE2D_MAX_WIDTH: 0 | 4096 | 4096 | 8192
CL_DEVICE_IMAGE2D_MAX_HEIGHT: 0 | 32768 | 32768 | 8192
CL_DEVICE_IMAGE3D_MAX_WIDTH: 0 | 2048 | 2048 | 2048
CL_DEVICE_IMAGE3D_MAX_HEIGHT: 0 | 2048 | 2048 | 2048
CL_DEVICE_IMAGE3D_MAX_DEPTH: 0 | 2048 | 2048 | 2048
CL_DEVICE_MAX_SAMPLERS: 0 | 16 | 16 | 128
CL_DEVICE_PROFILING_TIMER_RESOLUTION: 1 | 1000 | 1000 | 340831
CL_DEVICE_EXECUTION_CAPABILITIES: [EXEC_KERNEL, EXEC_NATIVE_KERNEL] | [EXEC_KERNEL] | [EXEC_KERNEL] | [EXEC_KERNEL, EXEC_NATIVE_KERNEL]
CL_DEVICE_HALF_FP_CONFIG: [] | [] | [] | []
CL_DEVICE_SINGLE_FP_CONFIG: [DENORM, INF_NAN, ROUND_TO_NEAREST, ROUND_TO_INF, ROUND_TO_ZERO] | [INF_NAN, ROUND_TO_NEAREST, ROUND_TO_INF, ROUND_TO_ZERO, FMA] | [INF_NAN, ROUND_TO_NEAREST, ROUND_TO_INF, ROUND_TO_ZERO, FMA] | [DENORM, INF_NAN, ROUND_TO_NEAREST]
CL_DEVICE_DOUBLE_FP_CONFIG: [] | [DENORM, INF_NAN, ROUND_TO_NEAREST, ROUND_TO_INF, ROUND_TO_ZERO, FMA] | [DENORM, INF_NAN, ROUND_TO_NEAREST, ROUND_TO_INF, ROUND_TO_ZERO, FMA] | [DENORM, INF_NAN, ROUND_TO_NEAREST, ROUND_TO_INF, ROUND_TO_ZERO, FMA]
CL_DEVICE_LOCAL_MEM_TYPE: GLOBAL | LOCAL | LOCAL | GLOBAL
CL_DEVICE_GLOBAL_MEM_CACHE_TYPE: READ_WRITE | NONE | NONE | READ_WRITE
CL_DEVICE_QUEUE_PROPERTIES: [PROFILING_MODE] | [OUT_OF_ORDER_MODE, PROFILING_MODE] | [OUT_OF_ORDER_MODE, PROFILING_MODE] | [OUT_OF_ORDER_MODE, PROFILING_MODE]
CL_DEVICE_COMPILER_AVAILABLE: true | true | true | true
CL_DEVICE_ERROR_CORRECTION_SUPPORT: false | false | false | false
cl_khr_fp16: false | false | false | false
cl_khr_fp64: false | true | true | true
cl_khr_gl_sharing/cl_APPLE_gl_sharing: true | true | true | true

The CLInfo utility is part of the jocl-demos project and is also available via webstart. For a plain text version of the above output you can run:

 java -cp jocl.jar:gluegen-rt.jar \
    -Djava.library.path="path/to/jocl/libs:path/to/gluegen/libs" com.jogamp.opencl.util.CLInfo

(BTW: to install the Intel SDK on Debian based systems, follow this link)

happy coding!


//Java Binding for the OpenCL API

I am currently working on a Java binding for the OpenCL API using GlueGen (as used in JOGL and JOAL). The project started as part of my bachelor's thesis in CS shortly after the release of the first OpenCL specification draft and is now fully feature complete with OpenCL 1.1. JOCL is currently in the stabilization phase; a beta release shouldn't be far away.

Overview - How does it work?

JOCL enables applications running on the JVM to use OpenCL for massively parallel, high performance computing tasks, executed on heterogeneous hardware (GPUs, CPUs, FPGAs etc.) in a platform independent manner. JOCL consists of two parts: the low level and the high level binding.

The low level bindings (LLB) are automatically generated using the official OpenCL headers as input and provide a high performance, JNI based, 1:1 mapping to the C functions.

This has the following advantages:

  • reduces maintenance overhead and ensures spec conformance
  • compile-time JNI bindings are the fastest way to access native libs from the JVM
  • makes translating OpenCL C code into Java + JOCL very easy (e.g. from books or tutorials)
  • flexibility and stability: OpenCL libs are loaded dynamically and accessed via function pointers

The hand written high level binding (HLB) is built on top of the LLB and hides most boilerplate code (like object IDs, pointers and resource management) behind easy to use Java objects. The HLB uses direct NIO buffers internally for fast memory transfers between the JVM and the OpenCL implementation and is very GC friendly. Most of the API is designed for method chaining, but of course you don't have to use it this way if you don't want to. JOCL also seamlessly integrates with JOGL 2 (both are built and tested together): just pass the JOGL context as a parameter to the JOCL context factory and you will receive a shared context. If you already know OpenCL and Java, the HLB should be very intuitive for you.

The project is available on jogamp.org. Please use the mailing list / forum for feedback or questions and the bug tracker if you experience any issues. The JOCL root repository is located on GitHub; you may also want to take a look at the jocl-demos project. (If the demos are not enough, you might also want to look at the junit tests.)

Screenshots (sourcecode in jocl-demos project):

[screenshot: JOCL Julia Set high precision]

More regarding OpenGL interoperability and other features in upcoming blog entries.

The following sample shows basic setup, computation and cleanup using the high level APIs.

Hello World or parallel a+b=c


import java.io.IOException;
import java.nio.FloatBuffer;
import java.util.Random;

import com.jogamp.opencl.CLBuffer;
import com.jogamp.opencl.CLCommandQueue;
import com.jogamp.opencl.CLContext;
import com.jogamp.opencl.CLKernel;
import com.jogamp.opencl.CLProgram;

import static java.lang.System.*;
import static com.jogamp.opencl.CLMemory.Mem.*;

/**
 * Hello Java OpenCL example. Adds all elements of buffer A to buffer B
 * and stores the result in buffer C.
 * Sample was inspired by the Nvidia VectorAdd example written in C/C++
 * which is bundled in the Nvidia OpenCL SDK.
 * @author Michael Bien
 */
public class HelloJOCL {

    public static void main(String[] args) throws IOException {
        // Length of arrays to process (arbitrary number)
        int elementCount = 11444777;
        // Local work size dimensions
        int localWorkSize = 256;
        // rounded up to the nearest multiple of the localWorkSize
        int globalWorkSize = roundUp(localWorkSize, elementCount);

        // setup
        CLContext context = CLContext.create();

        CLProgram program = context.createProgram(
                       HelloJOCL.class.getResourceAsStream("VectorAdd.cl")
                                 ).build();

        CLBuffer<FloatBuffer> clBufferA =
                       context.createFloatBuffer(globalWorkSize, READ_ONLY);
        CLBuffer<FloatBuffer> clBufferB =
                       context.createFloatBuffer(globalWorkSize, READ_ONLY);
        CLBuffer<FloatBuffer> clBufferC =
                       context.createFloatBuffer(globalWorkSize, WRITE_ONLY);

        out.println("used device memory: "
            + (clBufferA.getSize()+clBufferB.getSize()+clBufferC.getSize())/1000000 +"MB");

        // fill read buffers with random numbers (just to have test data).
        fillBuffer(clBufferA.getBuffer(), 12345);
        fillBuffer(clBufferB.getBuffer(), 67890);

        // get a reference to the kernel function with the name 'VectorAdd'
        // and map the buffers to its input parameters.
        CLKernel kernel = program.createCLKernel("VectorAdd");
        kernel.putArgs(clBufferA, clBufferB, clBufferC).putArg(elementCount);

        // create command queue on fastest device.
        CLCommandQueue queue = context.getMaxFlopsDevice().createCommandQueue();

        // asynchronous write to GPU device,
        // blocking read later to get the computed results back.
        long time = nanoTime();
        queue.putWriteBuffer(clBufferA, false)
             .putWriteBuffer(clBufferB, false)
             .put1DRangeKernel(kernel, 0, globalWorkSize, localWorkSize)
             .putReadBuffer(clBufferC, true);
        time = nanoTime() - time;

        // cleanup all resources associated with this context.
        context.release();

        // print first few elements of the resulting buffer to the console.
        out.println("a+b=c results snapshot: ");
        for(int i = 0; i < 10; i++)
            out.print(clBufferC.getBuffer().get() + ", ");
        out.println("...; " + clBufferC.getBuffer().remaining() + " more");

        out.println("computation took: "+(time/1000000)+"ms");

    }

    private static final void fillBuffer(FloatBuffer buffer, int seed) {
        Random rnd = new Random(seed);
        while(buffer.remaining() != 0)
            buffer.put(rnd.nextFloat()*100);
        buffer.rewind();
    }

    private static final int roundUp(int groupSize, int globalSize) {
        int r = globalSize % groupSize;
        if (r == 0) {
            return globalSize;
        } else {
            return globalSize + groupSize - r;
        }
    }

}

VectorAdd.cl


    // OpenCL Kernel Function for element by element vector addition
    kernel void VectorAdd(global const float* a,
                          global const float* b,
                          global float* c, int numElements) {

        // get index into global data array
        int iGID = get_global_id(0);

        // bound check (equivalent to the limit on a 'for' loop)
        if (iGID >= numElements)  {
            return;
        }

        // add the vector elements
        c[iGID] = a[iGID] + b[iGID];
    }