// Java 19's new Map/Set Factory Methods

Besides the big preview features of Java 19 (e.g. virtual threads, which I blogged about over two years ago - time flies), there are also some noteworthy API updates which might fall under the radar. One of those changes is a set of new factory methods for creating mutable, pre-sized Map and Set collections. Let's take a quick look and find out how they differ from their old constructor counterparts and when to use them instead.

Pre-sized Maps/Sets

Many collections can be created with a pre-allocated initial capacity if the number of items they will store is already known or can be estimated. This avoids unnecessary resizing operations, which would otherwise happen in the background as the implementation dynamically expands the collection to fit the item count.

This is all straightforward, but there is one little anomaly: HashMaps (and related collections) have an initialCapacity and a loadFactor (default: 0.75) parameter. The initialCapacity, however, is not the expected entry count - it is the initial size of an internal table (impl. detail: rounded up to the nearest power-of-two), which is larger than the actual entry count in the map, since the table is only filled until the given loadFactor is reached before it is resized. [javadoc]

This detail is very easy to overlook, for example:


        Map<String, String> map = new HashMap<>(4);
        map.put("one", "1");
        map.put("two", "2");
        map.put("three", "3");
        map.put("four", "4");

may look correct at first glance, but it will resize the internal table once more than 0.75*4 entries are added (which is the case above). Resizing a (large) Map can be comparatively expensive - the code responsible for it isn't trivial. Further, the javadoc mentions that "creating a HashMap with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table".

What we want is to create a Map which can actually hold 4 entries; unfortunately, we have to calculate the capacity parameter ourselves for that:


// Java 18
Map<String, String> map = new HashMap<>((int) Math.ceil(4 / 0.75));

The new factory methods simply hide this calculation.


// Java 19+
// analog exists for HashSet, LinkedHashSet, LinkedHashMap and WeakHashMap
Map<String, String> map = HashMap.newHashMap(4);

If you think you have made this mistake before - don't feel bad :) - even OpenJDK code overlooked this detail on some occasions, as can be seen in PR1 and PR2, which introduce the new factory methods and also refactor JDK code to use them (instead of the various ways the capacity was calculated before). Stuart Marks gives a few more examples here of how not to calculate it. The javadoc of the respective constructors (e.g. for HashMap#HashMap(int)) now also contains an API Note which points to the factory methods.

note: ConcurrentHashMap and IdentityHashMap already expect the parameter to be the actual entry count, which is why they didn't receive factory methods.

I might add some jackpot code transformation rules to this collection to make migration a bit more convenient (update: done).

Just for Fun: A Quick Experiment

We can demonstrate the difference by observing the internal table size of the Map while adding entries:


    public static void main(String[] args) throws ReflectiveOperationException {
        System.out.println(Runtime.version());
        
        int entries = 4; // small number to fit the output on a blog entry
        inspect("new HashMap<>(entries)", new HashMap<>(entries), entries);
        inspect("HashMap.newHashMap(entries)", HashMap.newHashMap(entries), entries);
        inspect("new HashMap<>(((int) Math.ceil(entries / 0.75)))", new HashMap<>(((int) Math.ceil(entries / 0.75))), entries);
    }

    private static void inspect(String desc, Map<? super Object, ? super Object> map,
                                int entries) throws ReflectiveOperationException {

        System.out.println();
        System.out.println("filling '"+desc+"' with "+entries+" entries...");
        System.out.println("table size: [content]");
        for (int i = 0; i < entries; i++) {
            map.put("key"+i, "value");

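            // note: this reflective peek into HashMap internals may require
            // --add-opens java.base/java.util=ALL-UNNAMED on recent JDKs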
            Field field = HashMap.class.getDeclaredField("table");
            field.setAccessible(true);
            Object[] table = (Object[]) field.get(map);
            System.out.println(table.length+": "+Arrays.asList(table));
        }

        System.out.println("map size: {content}");
        System.out.println(map.size()+": "+map);
    }

output:


19+36-2238

filling 'new HashMap<>(entries)' with 4 entries...
table size: [content]
4: [null, null, null, key0=value]
4: [key1=value, null, null, key0=value]
4: [key1=value, key2=value, null, key0=value]
8: [key1=value, key2=value, null, key0=value, null, null, key3=value, null]
map size: {content}
4: {key1=value, key2=value, key0=value, key3=value}

filling 'HashMap.newHashMap(entries)' with 4 entries...
table size: [content]
8: [null, null, null, key0=value, null, null, null, null]
8: [key1=value, null, null, key0=value, null, null, null, null]
8: [key1=value, key2=value, null, key0=value, null, null, null, null]
8: [key1=value, key2=value, null, key0=value, null, null, key3=value, null]
map size: {content}
4: {key1=value, key2=value, key0=value, key3=value}

filling 'new HashMap<>(((int) Math.ceil(entries / 0.75)))' with 4 entries...
table size: [content]
8: [null, null, null, key0=value, null, null, null, null]
8: [key1=value, null, null, key0=value, null, null, null, null]
8: [key1=value, key2=value, null, key0=value, null, null, null, null]
8: [key1=value, key2=value, null, key0=value, null, null, key3=value, null]
map size: {content}
4: {key1=value, key2=value, key0=value, key3=value}

Conclusion

In Java 19 and later, we can use the new factory methods to create mutable Sets/Maps pre-sized for an expected item count, without having to calculate the internal capacity. The old constructor counterparts are now only needed when non-standard load factors are chosen.

- - - sidenotes - - -

HashMaps initialize their internal table lazily, so calling it pre-allocation isn't entirely correct. Adding the first entry will however allocate the correctly sized table, even if that happens much later.

Since HashMap's internal table size is rounded up to the nearest power-of-two, the capacity might still be sufficient to avoid resize operations even when the constructor was used by mistake without properly calculating the initial capacity (still no excuse for not fixing it ;)).


// CRaC - Coordinated Restore at Checkpoint

Last year I experimented a little bit with the instant restoration of started and warmed-up Java programs from disk, besides a couple of other potential use cases for checkpoints. To achieve this, I accessed a rootless build of CRIU directly from Java via its C/RPC-API (using Panama as the binding layer). Although it worked surprisingly well, it quickly became clear that a proper implementation would require help from the JVM on a lower level and also an API to coordinate checkpoint/restore events between libraries.

I was pleased to see that there is a decent chance this might actually happen, since a new project with the name CRaC is currently in the voting stage to be officially started as an OpenJDK sub-project. Let's take a look at the prototype.

update: CRaC has been approved (OpenJDK project, github).

With a little Help from the JVM

Why would checkpoint/restore benefit from JVM and OpenJDK support? Several reasons. CRIU does not like it when files change between C/R - a simple log file might spoil the fun if a JVM is restored, shut down and then restored again (which will fail). A JVM is also in an excellent position to run heap cleanup and compaction prior to calling CRIU to dump the process to disk. Checkpointing could also be done after driving the JVM into a safepoint and making sure that everything has stopped.

The CRaC prototype covers all of that already and more:

  • CheckpointException is thrown if files or sockets are open at a checkpoint
  • a simple API allows coordination with C/R events
  • Heap is cleaned and compacted, and the checkpoint is made once the JVM has reached a safepoint
  • CRaC handles some JVM-produced files automatically (no need to set -XX:-UsePerfData for example)
  • The jcmd tool can be used to checkpoint a JVM from a shell
  • CRIU is bundled in the JDK as a bonus - no need to have it installed

Since CRaC would potentially be part of OpenJDK one day, it could manage the files of JFR repositories automatically and help with other tasks like re-seeding SecureRandom instances or updating SSL certificates in the future, which would be difficult (or impossible) to achieve as a third-party library.

Coordinated Restore at Checkpoint

The API is very simple and somewhat similar to what I wrote for JCRIU; the main difference is that the current implementation does not allow the JVM to continue running after a checkpoint is created (but I don't see why this can't change in the future).


Core.checkpointRestore();

currently serves both as checkpoint and program exit. It is also, at the same time, the entry point for a restore.


Core.getGlobalContext().register(resource);

A global context is used to register resources which will be notified before a checkpoint is created and in reverse order after the process is restored.

Minimal Example

Let's say we have a class CRACTest which can write Strings to a file (like a logger). To coordinate with C/R, it would need to close the file before a checkpoint and reopen it after a restore.


public class CRACTest implements Resource, AutoCloseable {

    private OutputStreamWriter writer;

    public CRACTest() {
        writer = newWriter();
        Core.getGlobalContext().register(this); // register as resource
    }
...
...
    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        System.out.println("resource pre-checkpoint");
        writer.close();
        writer = null;
    }

    @Override 
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        System.out.println("resource post-restore");
        writer = newWriter();
    }
    
    public static void main(String[] args) throws IOException {
        System.out.println(Runtime.version());
        
        try (CRACTest writer = new CRACTest()) {
            writer.append("hello");
            try {
                System.out.println("pre-checkpoint PID: "+ProcessHandle.current().pid());
                Core.checkpointRestore();   // exit and restore point
                System.out.println("post-restore PID: "+ProcessHandle.current().pid());
            } catch (CheckpointException | RestoreException ex) {
                throw new RuntimeException("C/R failed", ex);
            }
            writer.append(" there!\n");
        }
    }
}

start + checkpoint + exit:


$CRaC/bin/java -XX:CRaCCheckpointTo=/tmp/cp -cp target/CRACTest-0.1-SNAPSHOT.jar dev.mbien.CRACTest
14-crac+0-adhoc..crac-jdk
pre-checkpoint PID: 12119
resource pre-checkpoint

restore at checkpoint:


$CRaC/bin/java -XX:CRaCRestoreFrom=/tmp/cp -cp target/CRACTest-0.1-SNAPSHOT.jar dev.mbien.CRACTest
resource post-restore
post-restore PID: 12119

let's see what we wrote to the file:


cat /tmp/test/CRACTest/out.txt
hello there!

restore 3 more times as a test:


./restore.sh
resource post-restore
post-restore PID: 12119
./restore.sh
resource post-restore
post-restore PID: 12119
./restore.sh
resource post-restore
post-restore PID: 12119

cat /tmp/test/CRACTest/out.txt
hello there!
 there!
 there!
 there!

works as expected.

What happens when we leave an IO stream open? Let's remove writer.close() from beforeCheckpoint() and attempt to run a fresh instance.


./run.sh
14-crac+0-adhoc..crac-jdk
pre-checkpoint PID: 12431
resource pre-checkpoint
resource post-restore
Exception in thread "main" java.lang.RuntimeException: C/R failed
	at dev.mbien.cractest.CRACTest.main(CRACTest.java:72)
Caused by: jdk.crac.CheckpointException
	at java.base/jdk.crac.Core.checkpointRestore1(Core.java:134)
	at java.base/jdk.crac.Core.checkpointRestore(Core.java:177)
	at dev.mbien.cractest.CRACTest.main(CRACTest.java:69)
	Suppressed: jdk.crac.impl.CheckpointOpenFileException: /tmp/test/CRACTest/out.txt
		at java.base/jdk.crac.Core.translateJVMExceptions(Core.java:76)
		at java.base/jdk.crac.Core.checkpointRestore1(Core.java:137)
		... 2 more

The JVM will detect and tell us which files are still open before a checkpoint is attempted. In this case no checkpoint is made and the JVM continues running. By adding this restriction, CRaC avoids a whole class of potential restore failures.

Tool Integration

Checkpoints can also be triggered externally by using the jcmd tool.


jcmd 15119 JDK.checkpoint
15119:
Command executed successfully

Context and Resources

The Context itself implements Resource. This allows hierarchies of custom contexts to be registered to the global context. Since the context of a resource is passed to the beforeCheckpoint and afterRestore methods, it can be used to carry information to assist in C/R of specific resources.
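
A rough sketch of what a custom context could look like, assuming the prototype's jdk.crac API as used in the example above (this is not the actual JDK implementation): it fans the global notifications out to its own resources, mirroring the ordering described earlier - registration order before a checkpoint, reverse order after a restore.


import java.util.ArrayList;
import java.util.List;
import jdk.crac.Context;
import jdk.crac.Resource;

public class SimpleContext extends Context<Resource> {

    private final List<Resource> resources = new ArrayList<>();

    @Override
    public synchronized void register(Resource r) {
        resources.add(r);
    }

    @Override
    public synchronized void beforeCheckpoint(Context<? extends Resource> ctx) throws Exception {
        for (Resource r : resources)                        // registration order
            r.beforeCheckpoint(this);
    }

    @Override
    public synchronized void afterRestore(Context<? extends Resource> ctx) throws Exception {
        for (int i = resources.size() - 1; i >= 0; i--)     // reverse order
            resources.get(i).afterRestore(this);
    }
}

Registered via Core.getGlobalContext().register(customContext), such a context would receive the global C/R notifications and pass them on to its own resources.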

Performance

As demonstrated with JCRIU, restoring initialized and warmed-up Java applications can be really fast - CRaC however can be even faster, since the process image is much more compact. The average time to restore the JVM running this blog from a checkpoint using JCRIU was ~200 ms, while CRaC can restore JVMs in ~50 ms, although this will depend on the size of the process image and IO read speed.

Potential use-cases beside instant restore

CRaC seems to concentrate mainly on the use case of restoring a started and warmed-up JVM as fast as possible. This of course makes sense - why would someone start a JVM in a container, on-demand, when it could already have been started when the container image was built? The purpose of the container is most likely to run business logic, not to start programs.

However, if CRaC allowed programs to continue running after a checkpoint, it would open up many other possibilities. For example:

  • time traveling debuggers, stepping backwards to past breakpoints (checkpoints)
  • snapshotting of a production JVM to restore and test/inspect it locally, do heap dumps etc
  • maybe some niche use-cases of periodic checkpoints and automatic restoration on failure (incremental dumps)
  • instantly starting IDEs (although this won't be a small task)

in any case... exciting times :)

Thanks to Anton Kozlov from Azul for immediately fixing a bug I encountered during testing.


- - - sidenotes - - -

jdk14-crac/lib/criu and jdk14-crac/lib/action-script might require cap_sys_ptrace to be set on some systems to not fail during restore.

The rootless mode for CRIU hasn't made it into the master branch yet, which means that the JVM or criu has to be run with root privileges for now.

C/R of UI doesn't work at all, since disposing a window will still leave some cached resources behind (opened sockets, file descriptors etc.) - but this is another aspect which could only be solved at the JDK level (although this won't be trivial).


// Java 17's Enhanced Pseudo-Random Number Generators

JEP 356 adds a new set of pseudo-random number generators to Java 17 and a nice new API to list and instantiate them. Let's take a look.

RandomGeneratorFactory

The new main entry point is java.util.random.RandomGeneratorFactory, which can list all available factories (all()), get one by name (of("..")) or return the default factory (getDefault()). Let's first see what JDK 17 ships with.


            RandomGeneratorFactory.all()
                .map(fac -> fac.group()+":"+fac.name()
                                + " {"
                                + (fac.isSplittable()?" splittable":"")
                                + (fac.isStreamable()?" streamable":"")
                                + (fac.isJumpable()?" jumpable":"")
                                + (fac.isArbitrarilyJumpable()?" arbitrary-jumpable":"")
                                + (fac.isLeapable()?" leapable":"")
                                + (fac.isHardware()?" hardware":"")
                                + (fac.isStatistical()?" statistical":"")
                                + (fac.isStochastic()?" stochastic":"")
                                + " stateBits: "+fac.stateBits()
                                + " }"
                    )
                .sorted().forEach(System.out::println);

prints...


LXM:L128X1024MixRandom { splittable streamable statistical stateBits: 1152 }
LXM:L128X128MixRandom { splittable streamable statistical stateBits: 256 }
LXM:L128X256MixRandom { splittable streamable statistical stateBits: 384 }
LXM:L32X64MixRandom { splittable streamable statistical stateBits: 96 }
LXM:L64X1024MixRandom { splittable streamable statistical stateBits: 1088 }
LXM:L64X128MixRandom { splittable streamable statistical stateBits: 192 }
LXM:L64X128StarStarRandom { splittable streamable statistical stateBits: 192 }
LXM:L64X256MixRandom { splittable streamable statistical stateBits: 320 }
Legacy:Random { statistical stateBits: 48 }
Legacy:SecureRandom { stochastic stateBits: 2147483647 }
Legacy:SplittableRandom { splittable streamable statistical stateBits: 64 }
Xoroshiro:Xoroshiro128PlusPlus { streamable jumpable leapable statistical stateBits: 128 }
Xoshiro:Xoshiro256PlusPlus { streamable jumpable leapable statistical stateBits: 256 }

The Legacy group represents the old PRNGs. For example, the "Random" factory will produce java.util.Random instances while "SecureRandom" produces java.security.SecureRandom.


        RandomGenerator rng1 = RandomGeneratorFactory.of("Random").create(42);   // new way
        RandomGenerator rng2 = new Random(42);                                   // old way
        RandomGenerator rng3 = RandomGeneratorFactory.getDefault().create(42);   // new default
        RandomGenerator rng4 = RandomGenerator.getDefault();                     // shortcut to new default

        System.out.println(rng1.getClass()); // class java.util.Random
        System.out.println(rng2.getClass()); // class java.util.Random
        System.out.println(rng3.getClass()); // class jdk.random.L32X64MixRandom
        System.out.println(rng4.getClass()); // class jdk.random.L32X64MixRandom

The default implementation is already a new algorithm of the LXM group, which is fine since the API didn't exist before - existing applications won't be affected. From the doc: "Returns a RandomGenerator meeting the minimal requirement of having an algorithm whose state bits are greater than or equal 64."

No Thread Safety

None of the new implementations are thread safe while both java.util.Random and java.security.SecureRandom are.

Although it is not very common to share the same instance between threads (there is even ThreadLocalRandom for this specific purpose, if it doesn't have to be cryptographically secure), I would advise against blindly refactoring the code into something like


        RandomGenerator threadSafeQuestionMark = RandomGeneratorFactory.all()
                                    .filter(RandomGeneratorFactory::isStochastic)
                                    .sorted((g1, g2) -> g2.stateBits() - g1.stateBits())
                                    .findFirst().get().create();

This will return a thread-safe SecureRandom today, but if a better implementation arrives in the future which isn't thread safe, your application might break if it relied on that property. There is no isThreadSafe() in the API, so there is no good way to filter for it. Make sure you don't rely on the special nature of the legacy implementations before using filters in a forward-incompatible way. See the SplittableGenerator section below for a better alternative to sharing.

Which random to pick?

...if you are using java.security.SecureRandom

If you look at the capability list above you will notice that one algorithm is not quite like the others. SecureRandom is the only stochastic algorithm, and it is initialized by some entropy source, usually the responsibility of your kernel (/dev/random) during boot, or a lava lamp, etc. So if your application used SecureRandom before, keep using it - there is currently only one cryptographically strong RNG in the JDK.

...if you are using java.util.Random

You have several options to pick from now on (as long as you don't share the instance between threads). The javadoc for the java.util.random package has a great description of the new algorithms and mixing functions, and also a section which helps with choosing the right one.

Consider getDefault() before picking a factory at random ;)

...if you are using java.util.SplittableRandom

Consider switching to the new SplittableGenerators, see quote in benchmark section.

SplittableGenerator and Threads

As soon as multiple threads are involved, you want to make sure that individual threads don't generate the same random numbers in parallel. A quick and dirty way of doing this in the past was to simply share a thread-safe java.util.Random. A slightly better approach is ThreadLocalRandom.current() (however, thread locals will face scalability issues once virtual threads arrive). But a much better approach is Java 8's java.util.SplittableRandom (see Legacy group above).

Java 17 adds several LXM implementations which all implement the SplittableGenerator interface of the new java.util.random package. The general idea is to split a new instance from a local source before a new thread (or task) is spawned, without causing any contention. This ensures that the instances are initialized in a way that they don't end up in the same cycle of pseudo-random numbers.


        ExecutorService vte = Executors.newVirtualThreadExecutor();
        
        SplittableGenerator source = RandomGeneratorFactory.<SplittableGenerator>of("L128X1024MixRandom").create();
        source.splits(100).forEach((rng) -> {
            vte.submit(() -> {
                // this is one of 100 virtual threads with its own independent rng instance
                // the instance uses the same "L128X1024MixRandom" algorithm
                long random = rng.nextLong();
                ...
            });
        });

Each split generator is also a SplittableGenerator, so tasks can split more generators for their subtasks recursively on demand (useful for ForkJoinPools), as sketched below.
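
A minimal sketch of the recursive case - the parallel random-sum task and the chosen algorithm are made up for this example: each forked subtask gets its own generator split off from its parent's, so no instance is ever shared between threads.


import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.random.RandomGenerator.SplittableGenerator;
import java.util.random.RandomGeneratorFactory;

public class SplitTaskDemo {

    static class RandomSum extends RecursiveTask<Long> {
        private final SplittableGenerator rng;
        private final int n;

        RandomSum(SplittableGenerator rng, int n) {
            this.rng = rng;
            this.n = n;
        }

        @Override protected Long compute() {
            if (n <= 1_000) {                 // small enough: sum sequentially
                long sum = 0;
                for (int i = 0; i < n; i++)
                    sum += rng.nextLong();
                return sum;
            }
            // fork one half with a split-off generator, keep ours for the other half
            RandomSum forked = new RandomSum(rng.split(), n / 2);
            forked.fork();
            long local = new RandomSum(rng, n - n / 2).compute();
            return local + forked.join();
        }
    }

    public static void main(String[] args) {
        SplittableGenerator rng = RandomGeneratorFactory
                .<SplittableGenerator>of("L64X128MixRandom").create();
        System.out.println(ForkJoinPool.commonPool().invoke(new RandomSum(rng, 1_000_000)));
    }
}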

ScopeLocals from project Loom will be another way to inject context-dependent variables into tasks (but this is beyond JDK 17).


    private final static ScopeLocal<SplittableGenerator> rng_scope = 
                                            ScopeLocal.inheritableForType(SplittableGenerator.class);
    
    public static void main(String[] args) throws InterruptedException {
                
        SplittableGenerator rng1 =
                RandomGeneratorFactory.<SplittableGenerator>of("L128X1024MixRandom").create();
        SplittableGenerator rng2 =
                RandomGeneratorFactory.<SplittableGenerator>of("L32X64MixRandom").create();
                
        try (ExecutorService vte = Executors.newVirtualThreadExecutor()) {
            for (int i = 0; i < 5; i++) {
                ScopeLocal.where(rng_scope, rng1.split(), () -> { vte.submit(new Task()); });
            }   
            for (int i = 0; i < 5; i++) {
                ScopeLocal.where(rng_scope, rng2.split(), () -> { vte.submit(new Task()); });
            }
        }
    }
    
    private static class Task implements Runnable {
        @Override public void run() {
            SplittableGenerator rng = rng_scope.get();
            System.out.println(rng);
        }
    }

prints 5x L128X1024MixRandom and 5x L32X64MixRandom, with every virtual thread having its own instance:

jdk.random.L128X1024MixRandom@2d7b71b1
jdk.random.L128X1024MixRandom@7ab82aa3
jdk.random.L128X1024MixRandom@704041d3
jdk.random.L32X64MixRandom@3542c1bf
jdk.random.L32X64MixRandom@e941886
jdk.random.L32X64MixRandom@43dd13b
jdk.random.L32X64MixRandom@760156b6
jdk.random.L32X64MixRandom@556d3ef0
jdk.random.L128X1024MixRandom@456e8e4d
jdk.random.L128X1024MixRandom@316b0e77

Sources

A SplittableGenerator can also split a new instance of its implementation from a different source. Interestingly, the source has to be a SplittableGenerator as well.


interface SplittableGenerator extends StreamableGenerator {
...
        SplittableGenerator split();
...
        SplittableGenerator split(SplittableGenerator source);
...

After going through the source code I couldn't find a reason why the source couldn't be a different generator type, for example a high-entropy SecureRandom instance. So I asked on the core-libs-dev list, and it turns out it is deliberate:

"(...) You are right that the comment in the JEP was a little loose, and that the implementation(s) of the split/splits methods could in principle draw random values from a RandomGenerator that is not itself splittable. There might even be applications for such functionality.

However, we chose not to support that more general functionality for a fairly subtle reason: there are concerns that if a PNRG is less than perfect, using it as a source of entropy for seeding a PRNG that uses a different algorithm might result in unexpected correlations that could drastically reduce the quality of the output of the new PRNG instance. (...) —Guy Steele " full mail

Basically, implementations of SplittableGenerator have a certain baseline quality which makes them viable as a source for splits of their own or other splittable implementations. Other algorithms which don't implement SplittableGenerator might not necessarily have this quality and could cause problems down the line - interesting.

Benchmark

There can't be a blog post about algorithms without the obligatory benchmark.


# Run complete. Total time: 00:21:44

Benchmark                         (name)  Mode  Cnt    Score    Error  Units
RandomJMH.rngInt      L128X1024MixRandom  avgt    5    5.037 ±  0.035  ns/op
RandomJMH.rngInt       L128X128MixRandom  avgt    5    3.640 ±  0.035  ns/op
RandomJMH.rngInt       L128X256MixRandom  avgt    5    3.948 ±  0.014  ns/op
RandomJMH.rngInt         L32X64MixRandom  avgt    5    1.983 ±  0.001  ns/op
RandomJMH.rngInt       L64X1024MixRandom  avgt    5    2.545 ±  0.001  ns/op
RandomJMH.rngInt        L64X128MixRandom  avgt    5    2.045 ±  0.006  ns/op
RandomJMH.rngInt   L64X128StarStarRandom  avgt    5    2.055 ±  0.023  ns/op
RandomJMH.rngInt        L64X256MixRandom  avgt    5    2.659 ±  1.715  ns/op
RandomJMH.rngInt                  Random  avgt    5    8.979 ±  0.001  ns/op
RandomJMH.rngInt            SecureRandom  avgt    5  183.858 ±  0.798  ns/op
RandomJMH.rngInt        SplittableRandom  avgt    5    1.291 ±  0.003  ns/op
RandomJMH.rngInt    Xoroshiro128PlusPlus  avgt    5    1.771 ±  0.001  ns/op
RandomJMH.rngInt      Xoshiro256PlusPlus  avgt    5    2.063 ±  0.023  ns/op
RandomJMH.rngLong     L128X1024MixRandom  avgt    5    5.035 ±  0.037  ns/op
RandomJMH.rngLong      L128X128MixRandom  avgt    5    3.647 ±  0.046  ns/op
RandomJMH.rngLong      L128X256MixRandom  avgt    5    3.953 ±  0.042  ns/op
RandomJMH.rngLong        L32X64MixRandom  avgt    5    3.003 ±  0.001  ns/op
RandomJMH.rngLong      L64X1024MixRandom  avgt    5    2.589 ±  0.030  ns/op
RandomJMH.rngLong       L64X128MixRandom  avgt    5    2.046 ±  0.005  ns/op
RandomJMH.rngLong  L64X128StarStarRandom  avgt    5    2.052 ±  0.027  ns/op
RandomJMH.rngLong       L64X256MixRandom  avgt    5    2.455 ±  0.001  ns/op
RandomJMH.rngLong                 Random  avgt    5   17.983 ±  0.190  ns/op
RandomJMH.rngLong           SecureRandom  avgt    5  367.623 ±  2.274  ns/op
RandomJMH.rngLong       SplittableRandom  avgt    5    1.296 ±  0.014  ns/op
RandomJMH.rngLong   Xoroshiro128PlusPlus  avgt    5    1.776 ±  0.023  ns/op
RandomJMH.rngLong     Xoshiro256PlusPlus  avgt    5    2.063 ±  0.001  ns/op

linux 5.10.49; jdk-17+28; CPU i7-6700K, HT off, boost time limit off, boost thread limit off. source.

The bad performance of the old Random class can most likely be attributed to its thread-safety promise: it has to work with atomic longs and CAS loops while the new implementations can simply compute and return the next value.
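
For illustration, this is roughly what java.util.Random has to do for every value - a simplified paraphrase of the JDK's linear congruential step, not a drop-in copy:


import java.util.concurrent.atomic.AtomicLong;

class CASRandomSketch {

    private final AtomicLong seed = new AtomicLong(42); // the real thing scrambles the initial seed

    int next(int bits) {
        long oldSeed, nextSeed;
        do {
            oldSeed = seed.get();
            nextSeed = (oldSeed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1);
        } while (!seed.compareAndSet(oldSeed, nextSeed)); // CAS loop: retries under contention
        return (int) (nextSeed >>> (48 - bits));
    }
}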

Keep in mind this is just CPU time; it does not take per-instance memory footprint or the mathematical properties of the algorithms into account (see the javadoc for more info). This benchmark also only tests two methods.

The old SplittableRandom for example, although performing very well, has its own problems. Quoting the JEP: "In 2016, testing revealed two new weaknesses in the algorithm used by class SplittableRandom. On the one hand, a relatively minor revision can avoid those weaknesses. On the other hand, a new class of splittable PRNG algorithms (LXM) has also been discovered that are almost as fast, even easier to implement, and appear to completely avoid the three classes of weakness to which SplittableRandom is prone."

Summary

Java 17 adds the java.util.random package with new APIs and PRNG implementations. Switching to it can be worth it, but be careful when migrating old code due to changes in thread safety. If you are using SecureRandom, keep using it; in all other cases consider getDefault() instead of the legacy Random, or pick a specific implementation which fits your use case best. Take a look at the SplittableGenerators of the LXM group for multi-threaded scenarios. If it doesn't have to be splittable, consider the Xoroshiro and Xoshiro implementations.

That's all for now. Until next time ;)


// Custom Java Runtimes with jlink [and jdeps for classpath applications]

The jlink command line tool can be used to create custom Java runtimes which only include the functionality required by the (modular) Java application. But what if the application isn't modular and still uses the classpath? In this case an extra step is needed to determine which JDK modules are required by the application before jlink can be used.

classic classpaths: finding module dependencies with jdeps

jdeps is excellent for porting classic classpath-based applications to Java modules. It analyzes jars and lists all their dependencies, which can be other jars or modules, with package granularity. Although we don't want to port the dusty application to the module system for this blog post, listing all the module dependencies is exactly what we need to create a custom Java runtime with jlink.

jdeps produces the following output:


# foo.jar depends on bar.jar
foo.jar -> bar.jar
...
# or foo.jar depends on a module
foo.jar -> module.name
...

Since the tool is intended to assist with porting applications to Java modules, the default output is fairly detailed, down to package dependencies. The summary (-s) omits all that and only lists jars or modules.

All we have to do is go recursively through all jars and remember the module names they depend on.


# -s omit detailed package dependencies
# -R analyze recursively through all found dependencies
# --multi-release 16 for the case that there are multi release jars involved
$JDK/bin/jdeps -s -R --multi-release 16 --class-path 'lib/*' dusty-application.jar
jakarta.persistence-2.2.3.jar -> java.base
jakarta.persistence-2.2.3.jar -> java.instrument
jakarta.persistence-2.2.3.jar -> java.logging
jakarta.persistence-2.2.3.jar -> java.sql
foo.jar -> bar.jar
...

Some grepping and deduplication and we have a short list of JDK modules our application depends on.


$JDK/bin/jdeps -s -R --multi-release 16 --class-path 'lib/*' dusty-application.jar\
 | grep -Ev '\.jar$' | cut -d " " -f 3 | sort -u
java.base
java.desktop
java.instrument
java.logging
java.naming
java.net.http
java.security.jgss
java.sql
java.xml
jdk.unsupported

That's it? Not quite. Analyzing an application like that won't show dependencies which are caused via reflection. So you will have to take a good look at the resulting modules and probably add some manually. Good candidates are the jdk.crypto.* modules. jlink can assist with that task too by listing service providers.


$JDK/bin/jlink --suggest-providers java.security.Provider
Suggested providers:
  java.naming provides java.security.Provider used by java.base
  java.security.jgss provides java.security.Provider used by java.base
  jdk.crypto.ec provides java.security.Provider used by java.base
...

You might also want to add modules like jdk.jfr, java.management or jdk.localedata even when the application isn't directly depending on them. You can experiment with options like --compile-time which will usually list more dependencies (the default is runtime analysis). jlink adds transitive dependencies automatically.

Any missing modules should be quickly noticed during integration tests.

custom runtimes with jlink

Once we have the module list we can give it to jlink for the actual heavy lifting.


MODS=...
JDK=/home/mbien/dev/java/jdk-16.0.1+9
DIST=custom-$(basename $JDK)

$JDK/bin/jlink -v --add-modules $MODS\
 --compress=2 --no-header-files --no-man-pages\
 --vendor-version="[mbien.dev pod REv1]"\
 --output $DIST

du -s $DIST

jlink automatically uses the modules of the JDK which contains the tool, which means that the example above will create a runtime based on jdk-16.0.1+9. The --module-path flag would set a path to an alternative module folder. If the application is already modular, the path could also include the application modules, if they should be part of the runtime too.
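
Side note: jlink (like jdeps and jar) is also exposed through the ToolProvider SPI, so the same step can be scripted from Java itself, e.g. in a build-tool plugin. A minimal sketch - the module list and output path are just placeholders:


import java.util.spi.ToolProvider;

public class JLinkRunner {
    public static void main(String[] args) {
        // the jdk.jlink module provides the "jlink" tool when running on a full JDK
        ToolProvider jlink = ToolProvider.findFirst("jlink").orElseThrow();
        int exitCode = jlink.run(System.out, System.err,
                "--add-modules", "java.base,java.logging",
                "--compress=2", "--no-header-files", "--no-man-pages",
                "--output", "custom-runtime");
        System.out.println("jlink exit code: " + exitCode);
    }
}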

some noteworthy flags:

  • --strip-debug this is going to strip debug symbols from both the native binaries and the bytecode. You probably don't want to use this, since it will remove all line numbers from stack traces. It's likely that the binaries of the JDK distribution you are using have most of their symbols already stripped.
  • --strip-native-debug-symbols=objcopy=/usr/bin/objcopy Same as above, but only for native binaries
  • --compress=0|1|2 0 for no compression, 1 for string deduplication, 2 for zip compressed modules. This might influence startup time slightly; see CDS section below
  • --include-locales=langtag[,langtag]* include only a subset of locales instead of the full module
  • --vendor-version="i made this" this looks uninteresting at first glance, but it is very useful if you want to recognize your custom runtime again once you have multiple variants in containers. Adding the domain/project name or the purpose of the base image helps.
    It will appear on the second line of the output of java -version

full JDK as baseline


MODS=ALL-MODULE-PATH

# --compress=1
138372 (151812 with CDS)

# --compress=2
102988 (116428 with CDS)

# --compress=2 --strip-debug
90848 (102904 with CDS)

custom runtime example


MODS=java.base,java.instrument,java.logging,java.naming,java.net.http,\
java.security.jgss,java.sql,java.xml,jdk.jfr,jdk.unsupported,java.rmi,\
java.management,java.datatransfer,java.transaction.xa,\
jdk.crypto.cryptoki,jdk.crypto.ec

# --compress=1
55996 (69036 with CDS)

# --compress=2
45304 (58344 with CDS)

# --compress=2 --strip-debug
40592 (52240 with CDS)

(this is an aarch64 build of OpenJDK, x64 binaries are slightly larger)

Most modules are actually fairly small; the 5 largest are java.base, java.desktop, jdk.localedata, jdk.compiler and jdk.internal.vm.compiler. Since java.base is mandatory anyway, adding more modules won't significantly influence the runtime size unless you can't avoid some of the big ones.

Once you are happy with the custom runtime, you should add it to the test environment of your project and IDE.

CDS - to share or not to share?

I wrote about class data sharing before, so I'll keep this short. A CDS archive is a file which is mapped into memory by the JVM on startup and shared between JVM instances. This even works for co-located containers sharing the same image layer which includes the CDS archive.

Although it adds to the image size, zip compression + CDS always seems to be smaller than uncompressed without CDS. The CDS archive should also eliminate the need to decompress modules during startup, since it should already contain the most important classes. So the decision seems easy: compact size + improved startup time, and potential (small) memory footprint savings as a bonus.

Leaving the CDS archive out frees up ~10 MB of image size. If this matters to your project, benchmark it to see if it makes a difference. It is also possible to put application classes into the shared archive, or to create a separate archive for the application which extends the runtime archive (dynamic class data sharing). Or go a step further and bundle the application and runtime in a single, AOT-compiled, compact, native image with GraalVM (although this might reduce peak throughput due to the lack of JIT and offers a smaller choice of GCs, besides other restrictions) - but this probably won't happen for dusty applications.


# create CDS archive for the custom runtime
$DIST/bin/java -Xshare:dump

# check if it worked, this will fail if it can't map the archive
$DIST/bin/java -Xshare:on -version

# list all modules included in the custom java runtime
$DIST/bin/java --list-modules

summary

Only a single extra step is needed to determine most of the dependencies of an application, even if it hasn't been ported to Java modules yet. Maintaining a module list won't be difficult since it should be fairly static (backend services won't suddenly start using Swing packages when they are updated). Make sure that the custom runtime is used in your automated tests and IDE.

Stop using Java 8, time to move on - even without a modular application :)


- - - sidenotes - - -

If you want to create a runtime which can compile and run single-file source-code programs, adding just jdk.compiler isn't enough. This will result in a slightly misleading "IllegalArgumentException: error: release version 16 not supported" exception. The solution is to add jdk.zipfs too - I haven't investigated it any further.

If jlink has to be run from within a container (which can be useful for building for foreign archs, e.g. aarch64 on x64), you might have to change the process fork mechanism if you run into trouble (java.io.IOException: Cannot run program "objcopy": error=0, Failed to exec spawn helper: pid: 934, exit value: 1). (export JAVA_TOOL_OPTIONS="-Djdk.lang.Process.launchMechanism=vfork" worked for me)


// Defrosting Warmed-up Java [using Rootless CRIU and Project Panama]

I needed a toy project to experiment with JEP 389 of Project Panama (modern JNI), but I also wanted to take a better look at CRIU (Checkpoint/Restore In Userspace). So I thought: let's try to combine both - and created JCRIU. The immediate questions I had were: how fast can it defrost a warmed-up JVM, and can it make a program time travel?

Let's attempt to investigate the first question with this blog entry.

CRIU Crash Course

CRIU can dump process trees to disk (checkpoint) and restore them at any time later (implemented in user space) - it's all in the name.

Let's run a minimal test first.


#!/bin/bash
echo my pid: $$
i=0
while true
do
    echo $i && ((i=i+1)) && sleep 1
done

The script above will print its PID initially and then continue to print and increment a number. It isn't important that this is a bash script, it could be any process.

shell 1:


$ sh test.sh 
my pid: 14255
0
1
...
9
Killed

shell 2:


$ criu dump -t 14255 --shell-job -v -D dump/
...
(00.021161) Dumping finished successfully

This command will let CRIU dump (checkpoint) the process with the specified PID and store its image in ./dump (overwriting any older image on the same path). The flag --shell-job tells CRIU that the process is attached to a console. Dumping a process will automatically kill it, like in this example, unless -R is specified.

shell 2:


$ criu restore --shell-job -D dump/
10
11
12
...

To restore, simply replace "dump" with "restore", without specifying the PID. As expected the program continues counting in shell 2, right where it was stopped in shell 1.

Rootless CRIU

As of now (Nov. 2020), the CRIU commands above still require root permissions. But this might change soon. Linux 5.9 received cap_checkpoint_restore (patch) and CRIU is also already being prepared. To test rootless CRIU, simply build the non-root branch and set cap_checkpoint_restore on the resulting binary (no need to install, you can use criu directly).


sudo setcap cap_checkpoint_restore=eip /path/to/criu/binary

Note: Depending on your Linux distribution you might have to set cap_sys_ptrace too. Some features might not work yet, for example restoring as --shell-job or using the CRIU API. Use a recent kernel (at least 5.9.8) before trying to restore a JVM.

CRIU + Java + Panama = JCRIU

JCRIU uses Panama's jextract tool during build time to generate a low-level (1:1) binding directly from the header of the CRIU API. The low-level binding isn't exposed through the public API however, it's just an implementation detail. Both jextract and the foreign function module are part of Project Panama; early access builds are available here. JEP 389: Foreign Linker API has been (today) accepted for inclusion as a JDK 16 incubator module - it might appear in mainline builds soon.

The main entry point is CRIUContext, which implements AutoCloseable to cleanly dispose resources after use. Potential errors are mapped to CRIUExceptions. Checkpointing should be fairly robust since the communication with the actual CRIU process is done over RPC. Crashing CRIU most likely won't take the JVM down with it.


    public static void main(String[] args) throws IOException, InterruptedException {
        
        // create empty dir for images
        Path image = Paths.get("checkpoint_test_image");

        if (!Files.exists(image))
            Files.createDirectory(image);
        
        // checkpoint the JVM every second
        try (CRIUContext criu = CRIUContext.create()
                .logLevel(WARNING).leaveRunning(true).shellJob(true)) {
            
            int n = 0;
            
            while(true) {
                Thread.sleep(1000);

                criu.checkpoint(image); // checkpoint and entry point for a restore

                long pid = ProcessHandle.current().pid();
                System.out.println("my PID: "+pid+" checkpoint# "+n++);
            }
        }
    }

The above example is somewhat similar to the simple bash script. The main difference is that the Java program is checkpointing itself every second. This allows us to CTRL+C at any time - the program will keep counting and checkpointing where it left off, if restored.


[mbien@longbow JCRIUTest]$ sudo sh start-demo.sh 
WARNING: Using incubator modules: jdk.incubator.foreign
my PID: 16195 checkpoint# 0
my PID: 16195 checkpoint# 1
my PID: 16195 checkpoint# 2
my PID: 16195 checkpoint# 3
my PID: 16195 checkpoint# 4
my PID: 16195 checkpoint# 5
CTRL+C
[mbien@longbow JCRIUTest]$ sudo criu restore --shell-job -D checkpoint_test_image/
my PID: 16195 checkpoint# 5
my PID: 16195 checkpoint# 6
my PID: 16195 checkpoint# 7
my PID: 16195 checkpoint# 8
my PID: 16195 checkpoint# 9
CTRL+C
[mbien@longbow JCRIUTest]$ sudo criu restore --shell-job -D checkpoint_test_image/
my PID: 16195 checkpoint# 9
my PID: 16195 checkpoint# 10
my PID: 16195 checkpoint# 11
my PID: 16195 checkpoint# 12
my PID: 16195 checkpoint# 13
my PID: 16195 checkpoint# 14
CTRL+C

Note: start-demo.sh just sets env variables pointing to an early access JDK 16 Panama build, enables jdk.incubator.foreign, etc. The project README has the details.

Important Details and Considerations

  • CRIU restores images with the same PIDs the processes had during checkpoint. This won't cause much trouble in containers since the namespace should be quite empty, but might conflict from time to time on a workstation. If the same image should be restored multiple times concurrently, it will have to run in its own PID namespace. This can be achieved with sudo unshare -p -m -f [restore command]. See man unshare for details.
  • Opened files are not allowed to change (in size) between checkpoint and restore. If they do, the restore operation will fail. (watch out for log files, JFR repos, JVM perf data or temporary files)
  • If the application established TCP connections you have to tell CRIU that via the --tcp-established flag (or similar named method in CRIUContext). CRIU will try to restore all connections in their correct states. wiki link to more options
  • The first checkpoint or restore after system boot can take a few seconds because CRIU has to gather information about the system configuration first; this information is cached for subsequent uses
  • Some application dependent post-restore tasks might be required, for example keystore/cert replacement or RNG re-initialization (...)
  • CRIU can't checkpoint resources it can't reach. An X window or state stored on a GPU can't be dumped
  • Migration should probably only be attempted between (very) similar systems and hardware

Instant Defrosting of Warmed-up JVMs

Let's take a look at what you can do with super-luminal, absolute-zero, instant-defrosting JCRIU (ok, I'll stop ;)) when applied to my favorite dusty Java web monolith: Apache Roller. I timed how long this blog here takes to start on my workstation when loaded from a NVMe on JDK 16 + Jetty 9.4.34. (I consider it started when the website has loaded in the browser, not when the app server reports it started.)

classic start: ~6.5 s

(for comparison: it takes about a minute to start on a Raspberry Pi 3b+, which is serving this page you are reading right now)

Now let's try this again. But this time Roller will warm itself up, generate RSS feeds, populate the in-memory cache, give the JIT a chance to compile hot paths, compact the heap by calling System.gc() and finally shock-frost itself via criu.checkpoint(...).


        warmup();    // generates/caches landing page/RSS feeds and first 20 blog entries
        System.gc(); // give the GC a chance to clean up unused objects before checkpoint

        try (CRIUContext criu = CRIUContext.create()
                .logLevel(WARNING).leaveRunning(false).tcpEstablished(true)) {

            criu.checkpoint(imagePath);  // checkpoint + exit

        } catch (CRIUException ex) {
            jfrlog.warn("post warmup checkpoint failed", ex);
        }

(The uncompressed image size was between 500-600 MB during my tests, heap was set to 1 GB with ParallelGC active)

restore:


$ sudo time criu restore --shell-job --tcp-established -d -D blog_image/

real 0m0,204s
user 0m0,015s
sys  0m0,022s

instant defrosting: 204 ms

Note: -d detaches the shell after the restore operation completed. An alternative way to measure defrosting time is to enable verbose logging with -v and compare the last timestamps; this is slightly slower (+20 ms) since CRIU tends to log a lot on lower log levels. Let me know if there is a better way of measuring this, but I double-checked everything and the image loading speed would be well below the average read speed of my M.2 NVMe.

The blog is immediately reachable in the browser, served by a warmed-up JVM.

Conclusion && Discussion

CRIU is quite interesting for use cases where Java startup time matters. Quarkus for example moves slow framework initialization from startup to build time; native images with GraalVM further improve initialization by AOT-compiling the application into a single binary, but this also sacrifices a little bit of throughput. CRIU can be another tool in the toolbox, quickly mapping a running JVM with its application back into memory (no noteworthy code changes required).

The Foreign Linker API (JEP 389) is a major part of Project Panama and is currently targeted at OpenJDK 16 as an incubator module. However, to use JCRIU on older JDKs, another implementation of CRIUContext would be needed. An implementation which communicates with CRIU via Google Protocol Buffers would completely avoid binding to the CRIU C-API, for example.

The JVM would be in an excellent position to aid CRIU in many ways. It already is an operating system for Java/bytecode-based programs (soon even with its own implementation of threads) and knows how to drive itself to safepoints (checkpointing an application which is under load is probably a bad idea), how to compact or resize the heap, invalidate the code cache etc. - I see great potential there.

Let me know what you think.

Thanks a lot to Adrian Reber (@adrian__reber) who patiently answered all my questions about CRIU.


// Formatting JFR Events with J'Bang [and JFRLog]

Once JFRLog has stored all logs as events in JFR records, you might want to read them out again for inspection - ideally in an easily readable format which resembles classic log files a bit more.

For this I wrote the JFRPrint utility, which was originally a single-file java program (SFJP) but can now be used as a jbang script. The utility can format any JFR event, not only log messages.

setup and example


# add the catalog to jbang
$ jbang catalog add jfrlog https://github.com/mbien/JFRLog/blob/master/cli/jbang-catalog.json

# define a log pattern
$ MSG_PATTERN="{eventName,0d,C} {startTime,dt:yyyy-MM-dd HH:mm:ss.SSS}\
 [{eventThread.javaName}] {origin,1d}: {message} {throwable,o,n}"

# print jfr records using the pattern
$ jbang jfrprint 10h log.* "$MSG_PATTERN" dump.jfr
INFO 2020-09-30 16:12:42.458 [main] jfrlogtest.LogTest: Hello There!
INFO 2020-09-30 16:12:42.460 [main] jfrlogtest.LogTest: 1 2 3 test 
WARN 2020-09-30 16:12:42.461 [main] jfrlogtest.LogTest: don't panic 
java.lang.IllegalArgumentException: test, please ignore
	at dev.mbien.jfrlogtest.LogTest.main(LogTest.java:12)
...

Usage is as follows:


jfrprint timespan event_name pattern [jfr_record_file | jfr_repository_folder]

timespan can be set to only print events which happened after now-timespan. If a JFR record file is passed as the last argument, the util will print all events matching event_name (the * wildcard is supported as postfix). If a repository folder is passed instead, it will behave similarly to tail -f and stream all upcoming events from the live JFR repository.
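
Under the hood this boils down to OpenJDK's JFR consumer API. A minimal sketch of the record-file case (the file name and the log.* filter are placeholders matching the examples above; the pattern formatting is omitted):


import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class PrintLogEvents {
    public static void main(String[] args) throws Exception {
        // read every event from the dump and print the log.* ones in a single-line format
        for (RecordedEvent event : RecordingFile.readAllEvents(Path.of("dump.jfr"))) {
            if (event.getEventType().getName().startsWith("log.")) {
                System.out.println(event.getStartTime() + " "
                        + event.getEventType().getName() + " "
                        + event.getString("message"));
            }
        }
    }
}

The live-repository case works similarly via the streaming API (jdk.jfr.consumer.EventStream.openRepository(...)), which is what enables the tail -f like behavior.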

To print the usage and more examples simply type:


jbang jfrprint help

message pattern

{fieldName, option1, option2, ..}

The message pattern format is fairly simple. The curly brace blocks are replaced with the event field defined by fieldName. Printing the name of the event thread, for example, becomes {eventThread.javaName}.

Options can be appended in a comma-separated list after fieldName.

  • dt: prefix defines a date-time format and supports everything what DateTimeFormatter.ofPattern() is able to parse
  • c sets the String to lower case, C to upper case
  • [0-n]d defines how many dots you want to see. 0d would format "log.Warn" to "Warn". 1d formats "foo.bar.Bazz" to "bar.Bazz"
  • o stands for optional and won't print anything if the field is null. This can be useful for printing exceptions when combined with n which adds a new line before the field
  • for more options check the source

{...} is a special token which will print all fields which haven't been printed yet. This is especially useful for events which aren't log messages and might have unknown fields.

The following pattern will print all events with all their fields which happened in the last hour:


jfrprint 1h * "{eventName} {startTime} {...}" record.jfr

Note: if no pattern is provided the output will match the multi-line output of OpenJDK's jfr print CLI tool which is also the same format as used in jdk.jfr.Event::toString().

it is still very basic

I originally wrote it as a SFJP and tried to keep everything as simple and concise as possible. But since it is now set up as a jbang "script", this would allow making the CLI experience a bit nicer - which I might do in the future ;)

Let me know if you find it useful or want to see a particular feature.


// Stopping Containers Correctly

Stopping a container with


$ podman stop container-name
or

$ docker stop container-name

will send SIGTERM to the first process (PID 1) and shut down the container when the process terminates. If this doesn't happen within a certain time frame (default is 10s), the runtime will send SIGKILL to the process and take the container down.

So far so good - things get interesting when your container process isn't PID 1.

This is already the case if the process is started via a shell script.


#!/bin/bash

... 

java $FLAGS $APP

Attempting to stop this container will terminate the script while the JVM keeps running. The container runtime is usually smart enough to notice that a process is still active after the script terminated, and will wait out the grace period anyway before shutting down the container forcefully. The JVM however won't notice anything and won't get the opportunity to call shutdown hooks, write JFR dumps or finish transactions.
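
To make the difference visible, here is a tiny hypothetical probe (not part of any project): run it as the container's main process and issue a stop - the hook only prints if SIGTERM actually reaches the JVM.


public class ShutdownProbe {
    public static void main(String[] args) throws InterruptedException {
        // runs on normal termination (e.g. SIGTERM), but not on SIGKILL
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> System.out.println("SIGTERM received, shutting down cleanly")));
        Thread.currentThread().join(); // park the main thread forever
    }
}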

signal delegation

One way to solve this is by delegating the signal from the shell script to the main process:


... 
java $FLAGS $APP & # detach process from script
PID=$!             # remember process ID

trap 'kill -TERM $PID' INT TERM # delegate kill signal to JVM

wait $PID   # attach script to JVM again; note: TERM signal unblocks this wait
trap - TERM INT
wait $PID   # wait for JVM to exit after signal delegation
EXIT_STATUS=$?

The second wait prevents the script from exiting before the JVM has finished termination; it is required since the first wait is unblocked as soon as the script receives the signal.

it still didn't work

Interestingly, after implementing this (and trying out other variations of the same concept) it still didn't work for some reason - debugging showed the trap never fired.

Turns out that nothing was wrong with the signal delegation - the signals just never reached the script :). So I searched around a bit and found this article which basically describes the same async/wait/delegate method in greater detail (that's where I stole the EXIT_STATUS line from), so I knew it had to work. Another great article gave me the idea to check the Dockerfile again.


FROM ...
...
CMD ./start.sh

The sh shell interpreting the bash script was the first process!


$ podman ps
CONTAINER ID  IMAGE                     COMMAND             ...
de216106ff39  localhost/test:latest  /bin/sh -c ./start.sh  ...

htop (on the host) in tree view shows it more clearly:


$ htop
    1 root       ... /sbin/init
15643 podpilot   ...  - /usr/libexec/podman/conmon --api-version ...
15646 100996     ... |   - /bin/sh -c ./start.sh ...
15648 100996     ... |      - /bin/bash ./start.sh
15662 100996     ... |         - /home/jdk/bin/java -Xshare:on ...

To fix this a different CMD (or ENTRYPOINT) syntax is needed:


FROM ...
...
CMD [ "./start.sh" ]

Lets check again after rebuild + run:


$ podman ps
CONTAINER ID  IMAGE                     COMMAND         ...
72e3e60ed60b  localhost/test:latest  ./start.sh  ...

$ htop
    1 root       ... /sbin/init
15746 podpilot   ...  - /usr/libexec/podman/conmon --api-version ...
15749 100996     ... |   - /bin/bash ./start.sh ...
15771 100996     ... |      - /home/jdk/bin/java -Xshare:on ...

Much better. Since the script is now executed directly, it is able to receive the signals and delegate them to the JVM. The Java Flight Recorder records also appeared in the volume, which means that the JVM had enough time to convert the JFR repository into a single record file. The podman stop command also returned within a fraction of a second.

Since the trap is listening to SIGINT too, even the CTRL+C signal is properly propagated when the container is started in non-detached mode. Nice bonus for manual testing.

alternatives

Starting the JVM with


exec java $FLAGS $APP

will replace the shell process with the JVM process without changing the PID or process name. Disadvantage: no java command line in top, and the shell won't execute any lines after the exec line (because it basically doesn't exist anymore).

... and if you don't care about the container life cycle too much, you can always tell the JVM directly to shut down. This will close all parent shells bottom-up until PID 1 has terminated, which will finally stop the container.


podman exec -it container sh -c "kill \$(jps | grep -v Jps | cut -f1 -d' ')"

- - -

lessons learned:
sometimes two square brackets make the difference :)


// JFRLog - Logging directly to the Java Flight Recorder

I just wrote JFRLog - a lightweight SLF4J logger implementation which lets you log directly into the Java Flight Recorder. Since SLF4J (Simple Logging Facade for Java) has a binding to basically every logging framework out there, it doesn't even matter how the logging is done or how many frameworks the application or its dependencies use (hopefully not many).

Why? I was mostly curious how practical something like that would be, and it turned out it works great. If you are running Linux, it's likely that most of the logging is already centralized, stored in binary form, and read with tools like journalctl. If you ever had to correlate log files with the events of a JFR recording, you probably also asked yourself why everything isn't in the recording in the first place. Having everything in one place simplifies containers too.

JFRLog

The project is available on github under the MIT License. It requires JDK 8+ to run but JDK 14+ to build/test since the JUnit tests rely on the event streaming API. JFRLog's only dependency is the SLF4J API.

quickstart

The basic 2.5 steps to get it working are:

  1. replace your SLF4J compatible logging impl with slf4j-jfr-bridge-x.x.x.jar
  2. start the JVM with JFR enabled (e.g. -XX:StartFlightRecording:filename=logs/dump.jfr,dumponexit=true) or enable recording later
  3. check the flight recorder repository or recording dump for log.* events

update: artifacts of the first release are now available on maven central.

minimal maven dependencies:

    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.30</version>
        </dependency>
        <!-- depends on slf4j-api -->
        <dependency>
            <groupId>dev.mbien.jfrlog</groupId>
            <artifactId>slf4j-jfr-bridge</artifactId>
            <version>0.1.0</version>
        </dependency>
    </dependencies>

(adding slf4j-api is optional since it is already a dependency of JFRLog, but it is good practice to depend on the interface and it also makes it more obvious what is going on)

minimal example:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloJFRLog {
    private final static Logger log = LoggerFactory.getLogger(HelloJFRLog.class);
    public static void main(String[] args) {
        log.info("does it work?");
    }
}
Start with JFR enabled, for example by using the following JVM flag: -XX:StartFlightRecording:filename=test_dump.jfr,dumponexit=true

inspect the flight recording:


$ jfr print --events "log.*" test_dump.jfr
log.Info {
  startTime = 11:52:15.954
  message = "does it work?"
  origin = "dev.mbien.jfrlogtestproject.HelloJFRLog"
  throwable = N/A
  eventThread = "main" (javaThreadId = 1)
}

That's it! Log and JFR events all in one place.

configuration

Configuration is straightforward. You can set log levels via java -D arguments or via a jfrlog.properties file placed in the classpath. -D args are treated with higher priority than the configuration file to simplify testing.

example jfrlog.properties:

# Sets MyKlass to debug, the rest of the package to error and the default log level
# to info (for everything which isn't explicitly configured). 
# The most specific rule wins (order does not matter).
jfrlog.default=info
jfrlog.dev.cool.app=error
jfrlog.dev.cool.app.MyKlass=debug

You can also disable entire log levels by disabling the corresponding events in the JFR recording profile if so desired.
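The "most specific rule wins" resolution boils down to a longest-prefix lookup over the logger name. Purely illustrative (this is not JFRLog's actual code), such a lookup could be implemented like this:


import java.util.Map;

public class LevelResolver {

    // walk from the most specific logger name up to the default entry
    static String resolve(Map<String, String> config, String loggerName) {
        String name = loggerName;
        while (!name.isEmpty()) {
            String level = config.get("jfrlog." + name);
            if (level != null) return level;
            int dot = name.lastIndexOf('.');
            name = dot > 0 ? name.substring(0, dot) : "";
        }
        return config.getOrDefault("jfrlog.default", "info");
    }

    public static void main(String[] args) {
        Map<String, String> cfg = Map.of(
                "jfrlog.default", "info",
                "jfrlog.dev.cool.app", "error",
                "jfrlog.dev.cool.app.MyKlass", "debug");
        System.out.println(resolve(cfg, "dev.cool.app.MyKlass")); // debug
        System.out.println(resolve(cfg, "dev.cool.app.Other"));   // error
        System.out.println(resolve(cfg, "dev.other.Thing"));      // info
    }
}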

this blog you are reading is using it right now

The Apache Roller blog is a big dusty webapp with many dependencies. Since every dependency uses its own favorite logging framework, it's a nice example of a near-worst-case scenario a.k.a. the LoggingHell (extends JarHell). Half of the Apache libs use the old Java Commons Logging layer, struts2 depends on log4j2 for some reason, some webjars use java.util.logging, eclipselink uses its own (!) logging but thankfully with a SLF4J compatibility extension, and roller used log4j1 at some point (the version in my WIP branch uses log4j2). Ah, and the web server needs some logging too, but thankfully Jetty uses the SLF4J layer so it doesn't care that much.

So, to see some text in a file, all that is needed is:

  • jcl-over-slf4j to connect the JCL abstraction layer to the SLF4J layer for spring etc
  • log4j-to-slf4j to make struts happy
  • jul-to-slf4j for webjars (I think)
  • org.eclipse.persistence.extension to bridge eclipselink's personal logger to SLF4J

Beautiful - all logs lead to SLF4J now. We only forgot about the logging impl itself. Usually you would have a log4j-slf4j-impl or logback-classic there - most likely. But to enable JFRLog, all that is needed is to replace it with slf4j-jfr-bridge - and that is exactly what I did.

I also wrote some java (single-file java program) scripts using the JFR streaming API for easy inspection of the logging repository (but that is probably a story for another time - update: CLI tools blog entry).
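Just to give an idea, a hedged mini version of such a script could look like the following - it assumes the repository path is passed as an argument and that the level-named event types (log.Info, log.Error, ...) are enabled:


import java.io.IOException;
import java.nio.file.Path;
import jdk.jfr.consumer.EventStream;

public class LogTail {
    public static void main(String[] args) throws IOException {
        // follows the live JFR repository and prints log events as they arrive
        try (EventStream es = EventStream.openRepository(Path.of(args[0]))) {
            es.onEvent("log.Info", e -> System.out.println(
                    e.getStartTime() + " INFO  " + e.getString("message")));
            es.onEvent("log.Error", e -> System.out.println(
                    e.getStartTime() + " ERROR " + e.getString("message")));
            es.start(); // blocks and keeps streaming
        }
    }
}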

How many new logging frameworks or abstraction layers will the current pandemic give us?

Let me know what you think.

fly safe (and keep JFR on)


// Configuring Eclipse Jetty to use Virtual Threads

A quick guide on how to configure Jetty to use Project Loom's virtual threads instead of plain old java threads.

Jetty's default thread pool implementation can be swapped out by implementing Jetty's ThreadPool interface and passing an instance to the Server constructor (see the embedded sketch below). If you are using Jetty standalone, everything is initialized via XML files instead.
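For embedded Jetty, a minimal sketch could look like this (hypothetical class name, assuming the VirtualThreadExecutor shown further below is on the classpath):


import dev.mbien.virtualthreads4jetty.VirtualThreadExecutor;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class EmbeddedJettyDemo {
    public static void main(String[] args) throws Exception {
        // pass the custom thread pool straight to the Server constructor
        Server server = new Server(new VirtualThreadExecutor());
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);
        server.start();
        // note: Server#join() would delegate to our pool's join(), which initiates
        // a shutdown (see below) - so we keep the main thread alive directly instead
        Thread.currentThread().join();
    }
}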

Assuming you are using the recommended jetty home / jetty base folder structure, all that is needed is to create jetty-threadpool.xml in [jetty-base]/etc containing the following:


<Configure>
<New id="threadPool" class="dev.mbien.virtualthreads4jetty.VirtualThreadExecutor"/>
</Configure>

and put a jar containing the custom VirtualThreadExecutor into [jetty-base]/lib/ext. I uploaded a build to the release section of the vt4jetty github project.

If you don't have a lib/ext folder yet, you can enable it with:


java -jar $JETTY_HOME/start.jar --add-to-start=ext

Here is the code:


package dev.mbien.virtualthreads4jetty;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.util.thread.ThreadPool;

/**
 * Executes each task in a new virtual thread.
 * 
 * <p>Java's default ForkJoinPool is used as scheduler. To influence carrier
 * thread count use -Djdk.defaultScheduler.parallelism=N. Default is
 * {@link Runtime#availableProcessors()}.
 * 
 * @author mbien
 */
public class VirtualThreadExecutor implements ThreadPool {
    
    private final ExecutorService executor;

    public VirtualThreadExecutor() {
        executor = Executors.newThreadExecutor(
                Thread.builder().virtual().name("jetty-vt#", 0).factory());
        // too early for logging libs
        System.out.println("VirtualThreadExecutor is active.");
    }
    
    @Override
    public void execute(Runnable command) {
        executor.execute(command);
    }

    @Override
    public void join() throws InterruptedException {
        executor.shutdown();
        executor.awaitTermination(3, TimeUnit.SECONDS);
    }

    // those are hopefully only used for stats/dashboards etc
    @Override
    public int getThreads() { return -1; }

    @Override
    public int getIdleThreads() { return -1; }

    @Override
    public boolean isLowOnThreads() { return false; }
    
}

Tested with the JDK16-loom+4-56 (2020/7/25) early access build from here and the latest Jetty.

I encountered some JVM crashes while load testing Apache Roller with virtual threads enabled - keep in mind this is still all very much work in progress.


// [Java in] Rootless Containers with Podman

I have always been a little surprised how quickly it became acceptable to run applications wrapped in containers as root processes. Nobody would have run a web server as root before docker became mainstream if there was some way to avoid it. But with docker it became OK to have the docker daemon and the container processes all running as root. The first item in most docker tutorials became how to elevate your user rights so that you don't have to type sudo before every docker command.

But this doesn't have to be the case, of course. One project I had an eye on was podman, which is a container engine implementing the docker command line interface, with quite good support for rootless operations. With the release of Podman 2.0.x (and the fact that it is slowly making it into the debian repositories) I started to experiment with it a bit more. (For the experimental rootless mode of Docker check out this page.)

cgroups v2

Containers rely heavily on kernel namespaces and a feature called control groups. To properly run rootless containers, the kernel must support and run with cgroups v2 enabled. To check whether cgroups v2 is enabled, simply run:


ls /sys/fs/cgroup
cgroup.controllers  cgroup.max.depth  cgroup.max.descendants  cgroup.procs ...

If the files are prefixed with cgroup., you are running cgroups v2; if not, it's still v1.

Many distributions will still run with cgroups v1 enabled by default for backwards compatibility. But you can enable cgroups v2 with the systemd kernel flag: systemd.unified_cgroup_hierarchy=1. To do this with grub for example:

  • edit /etc/default/grub and add systemd.unified_cgroup_hierarchy=1 to the key GRUB_CMDLINE_LINUX_DEFAULT (space separated list)
  • then run sudo grub-mkconfig -o /boot/grub/grub.cfg and reboot.

... and make sure you are not running an ancient linux kernel.

crun

The underlying OCI implementation has to support cgroups v2 too. I tested mostly on crun, which is a super fast and lightweight alternative to runc. The runtime can be passed to podman via the --runtime flag


podman --runtime /usr/bin/crun <commands>

but it got picked up automatically in my case after I installed the package (manjaro linux, runc is still installed too).


podman info | grep -A5 ociRuntime
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 0.14.1

subordinate uid and gids

The last step required to set up rootless containers is to configure /etc/subuid and /etc/subgid. If the files don't exist yet, create them and add a mapping range from your user name to container users. For example the line:

duke:100000:65536

Gives duke a range of 65536 UIDs to use in containers, starting from UID 100000. Duke himself will be mapped to root (0) in the container by default. (The same must be done for groups in subgid.) The range should never overlap with UIDs on the host system. Details in man subuid. More on users later in the volumes section.

rootless containers

Some things to keep in mind:
  • rootless podman runs containers with fewer privileges than the user who started the container
    • some of these restrictions can be lifted (via --privileged, for example)
    • but rootless containers will never have more privileges than the user that launched them
    • root in the container is the user on the host
  • rootless containers have no IP or MAC address, because network device association requires root privileges
    • podman uses slirp4netns for user mode networking
    • pinging something from within a container won't work out of the box - but don't panic: it can be configured if desired

podman

Podman uses the same command-line interface as Docker and it also understands Dockerfiles. So if everything is configured correctly it should all look familiar:

$ podman version
Version:      2.0.2
API Version:  1
Go Version:   go1.14.4
Git Commit:   201c9505b88f451ca877d29a73ed0f1836bb96c7
Built:        Sun Jul 12 22:46:58 2020
OS/Arch:      linux/amd64

$ podman pull debian:stable-slim
...
$ podman images
REPOSITORY                       TAG          IMAGE ID      CREATED       SIZE
docker.io/library/debian         stable-slim  56fae066253c  4 days ago    72.5 MB
...
$ podman run --rm debian:stable-slim cat /etc/debian_version
10.4

Setting alias docker=podman allows existing scripts to be reused without modification - but I'll stick to podman here to avoid confusion.

container communication

Rootless containers don't have their own IP addresses, but you can bind them to host ports (>1024). Host-to-container communication therefore works analogously to communicating with any service running on the host.


$ podman run --name wiki --rm -d -p 8443:8443 jspwiki-jetty
$ podman port -a
fd4c06b454ee	8443/tcp -> 0.0.0.0:8443
$ firefox https://localhost:8443/wiki

To set up quick and dirty container-to-container communication you can let them communicate over the IP address of the host (or host name) and ports, if the firewall is OK with that. But a more maintainable approach is pods. Pods are groups of containers which belong together. A pod is basically an infrastructure container containing the actual containers. All containers in a pod share the same localhost and use it for pod-local communication. The outside world is reached via opened ports on the pod.

Let's say we have a blog and a db. The blog requires the db, but all the host cares about is the HTTPS port of the blog container. So we can simply put the blog container and the db container into a blog-pod and let both communicate via the pod-local localhost (podhost?). The HTTPS port is opened on the blog-pod for the host, while the db isn't reachable from the outside.


$ podman pod create --name blogpod -p 8443:8443

# note: a pod starts out with one container already in it,
# it is the infrastructure container - basically the pod itself
$ podman pod list
POD ID        NAME     STATUS   CREATED        # OF CONTAINERS  INFRA ID
39ad88b8892f  blogpod  Created  7 seconds ago  1                af7baf0e7fde

$ podman run --pod blogpod --name blogdb --rm -d blog-db
$ podman run --pod blogpod --name apacheroller --rm -d roller-jetty

$ podman pod list
POD ID        NAME     STATUS   CREATED         # OF CONTAINERS  INFRA ID
39ad88b8892f  blogpod  Created  30 seconds ago  3                af7baf0e7fde

$ firefox https://localhost:8443/blog

Now we already have two containers able to communicate with each other and a host which is able to communicate with a container in the pod - and no sudo in sight.

volumes and users

We already know that the user on the outside is root on the inside, but let's quickly check it just to be sure:


$ whoami
duke
$ id -u
1000
$ mkdir /tmp/outside
$ podman run --rm -it -v /tmp/outside:/home/inside debian:stable-slim bash
root@2fbc9edaa0ee:/$ id -u
0
root@2fbc9edaa0ee:/$ touch /home/inside/hello_from_inside && exit
$ ls -l /tmp/outside
-rw-r--r-- 1 duke duke 0 31. Jul 06:55 hello_from_inside

Indeed, duke's UID of 1000 was mapped to 0 on the inside.

Since we are using rootless containers and not half-rootless containers, we can let the blog and the db run under their own users within their containers too. But what if they write logs to mounted volumes? That is when the subuid and subgid files we configured earlier come into play.

Let's say the blog-db container process should run in the namespace of the user dbduke. Since dbduke doesn't have root rights on the inside (as intended), dbduke also won't have the rights to write to the mounted volume, which is owned by duke. The easiest way to solve this problem is to simply chown the volume folder on the host to the mapped user of the container.


# scripts starts blog-db
# query the UID from the container and chown the volumes folder
UID_INSIDE=$(podman run --name UID_probe --rm blog-db /usr/bin/id -u)
podman unshare chown -R $UID_INSIDE /path/to/volume

podman run -v /path/to/volume:/path/inside ...  blog-db

Podman ships with a tool called unshare (the name is going to make less sense the longer you think about it) which lets you execute commands in the namespace of a different user. The command podman unshare allows us to use the rights of duke to chown a folder to the internal UID of dbduke.

If we check the folder rights from both sides, we can see how the UID was mapped - from:


podman run --name UID_probe --rm blog-db /usr/bin/id -u
998
to

$ ls -l volume/
drwxr-xr-x 2 100997 100997 4096 31. Jul 07:54 logs
on the outside, which is within the range specified in the /etc/subuid file (container UID 0 maps to duke, so container UID 998 lands at 100000 + 998 - 1 = 100997) - everything works as intended. This allows user namespace isolation between containers (dbduke, wikiduke, etc.) and also between containers and the user on the host who launched the containers (duke himself).

And still no sudo in sight.

memory and cpu limits [and java]

Memory limits should work out of the box in rootless containers:

$ podman run -it --rm -m 256m blog-db java -Xlog:os+container -version
[0.003s][info][os,container] Memory Limit is: 268435456
...

This allows the JVM to make smarter choices without having to provide absolute -Xmx flags (but you still can).
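To see what the JVM actually derived from the container limit, a tiny probe like the following can be run inside the limited container (hypothetical class; by default the max heap will be a fraction of the 256m limit, tunable via -XX:MaxRAMPercentage):


public class HeapProbe {
    public static void main(String[] args) {
        // with -m 256m and no -Xmx flag, the max heap is derived from the container limit
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("max heap: " + maxHeap / (1024 * 1024) + " MiB");
    }
}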

Setting CPU limits might not work out of the box without root (tested on Manjaro which is basically Arch), since the cgroups config might have user delegation turned off by default. But it is very easy to change:


# assuming your user id is 1000 like duke
$ sudo systemctl edit user@1000.service

# now modify the file so that it contains
[Service]
Delegate=yes

# and check if it worked
$ cat /sys/fs/cgroup/user.slice/user-1000.slice/cgroup.controllers
cpuset cpu io memory pids

You might have to reboot - it worked right away in my case.


# default CPU settings uses all cores
$ podman run -it --rm blog-db sh -c\
 "echo 'Runtime.getRuntime().availableProcessors();/exit' | jshell -q"
jshell> Runtime.getRuntime().availableProcessors()$1 ==> 4

# assign specific cores to container
$ podman run -it --rm --cpuset-cpus 1,2 blog-db sh -c\
 "echo 'Runtime.getRuntime().availableProcessors();/exit' | jshell -q"
jshell> Runtime.getRuntime().availableProcessors()$1 ==> 2

Container CPU core limits should become less relevant in the java world going forward, especially once projects like Loom [blog post] have been integrated. Since most things in java will run on virtual threads on top of a static carrier thread pool, it will be really easy to restrict the parallelism level of a JVM (basically -Djdk.defaultScheduler.parallelism=N and maybe another flag to limit max GC thread count).

But it works for rootless containers too, if you need it.

class data sharing

Podman uses fuse-overlayfs for image management by default, which is overlayfs running in user mode.

$ podman info | grep -A5 overlay.mount_program
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: Unknown
      Version: |-
        fusermount3 version: 3.9.2
        fuse-overlayfs: version 1.1.0

This means that JVM class data sharing is also supported out of the box if the image containing the class data archive is shared in the image graph between multiple rootless containers.

The class data stored in debian-slim-jdk (a local image I created) will be mapped into memory only once and shared between all child containers - in the example below: blog-db, roller-jetty and wiki-jetty.


$ podman image tree --whatrequires debian-slim-jdk
Image ID: 57c885825969
Tags:     [localhost/debian-slim-jdk:latest]
Size:     340.1MB
Image Layers
└── ID: ab5467b188d7 Size: 267.6MB Top Layer of: [localhost/debian-slim-jdk:latest]
 ├── ID: fa04d6485aa5 Size:  7.68kB
 │   └── ID: 649de1f63ecc Size: 11.53MB Top Layer of: [localhost/debian-slim-jdk-jetty:latest]
 │    ├── ID: 34ce3917399d Size: 8.192kB
 │    │   └── ID: d128b826459d Size:  56.7MB Top Layer of: [localhost/roller-jetty:latest]
 │    ├── ID: 9a9c51927e42 Size: 8.192kB
 │    └── ID: d5c7176c1073 Size: 27.56MB Top Layer of: [localhost/wiki-jetty:latest]
 ├── ID: 06acb45dd590 Size:  7.68kB
 └── ID: 529ff7e97882 Size: 1.789MB Top Layer of: [localhost/blog-db:latest]

stopping java containers cleanly

A little tip at the end: in my experience, JVMs are quite often launched from scripts in containers. This automatically has the side effect that the JVM process won't be PID 1, since it isn't the entry point. Commands like podman stop <container> will post SIGTERM to the shell script (!), wait 10s, and then simply kill the container process without the JVM ever knowing what was going on. More on that in the stopping containers blog entry.

Shutdown actions like dumping the JFR file won't be executed and IO writes might not have completed. So unless you trap the signal in the script and forward it to the JVM somehow, a more direct way to stop a java container is needed:


# script to cleanly shut down a container
$ podman exec -it $(container) sh -c\
 "kill \$(/jdk/bin/jps | grep -v Jps | cut -f1 -d' ')"

#... or if killall is installed (it usually isn't)
$ podman exec -it $(container) killall java

Once the JVM has cleanly shut down, the launch script will finish, which the container will notice once PID 1 is gone - and it will cleanly shut down too.

- - -

fly safe podman


// Taking a look at Virtual Threads (Project Loom)

Project Loom introduces lightweight, JVM-managed, virtual threads (old name: fibers) to java. Let's take a look at how the project is progressing and see how they compare to plain old OS-managed threads.

Loom is currently a separate project based on JDK 15, but since there is no JEP available yet and the deadline is approaching, it is unlikely that it will land in the main repository as a preview feature before JDK 16. Early access binaries are available from jdk.java.net/loom. I used Build 15-loom+7-141 (2020/5/11) for the experiments in this blog entry - be warned, the API is not final.

virtual threads

Virtual threads are lightweight threads scheduled by the java runtime to run on plain old java threads ("real" threads). The threads used to run virtual threads are called carrier threads. While plain old java threads (POTs) can be fairly heavyweight due to the fact that they represent OS threads, millions of virtual threads can be spawned without causing problems.

The main feature however is that a virtual thread doesn't block its current carrier thread on blocking operations, like IO (Sockets, NIO channels...) or blocking java.util.concurrent API calls (Semaphores, Queues, Locks, Future.get()...) and even Thread.sleep(). Instead of blocking, the carrier will mount and resume a different virtual thread while the blocked virtual thread is waiting for a resource or an event to occur. Once the virtual thread is no longer blocked, it simply resumes execution on the next available carrier thread.

This should allow more efficient use of the CPU and additionally reduce the total number of POTs: a thread which would normally sit idle on a core while waiting for a resource can now work on something else, by replacing the blocked virtual thread with another one which isn't blocked.
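To make the "millions" claim a bit more concrete, here is a minimal sketch (using the same EA API as the examples further below) which lets a million virtual threads block in sleep concurrently - trying the same with a million OS threads would bring most systems down:


import java.util.concurrent.CountDownLatch;

public class MillionSleepers {
    public static void main(String[] args) throws InterruptedException {
        int count = 1_000_000;
        CountDownLatch done = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            Thread.startVirtualThread(() -> {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
                done.countDown(); // blocking j.u.c calls are virtual-thread friendly
            });
        }
        done.await(); // all virtual threads slept concurrently on a few carriers
        System.out.println(count + " virtual threads done");
    }
}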

some properties:

  • virtual threads are java entities, independent of OS threads
  • java.lang.Thread is used for both kinds of threads, virtual and OS
  • all virtual threads are daemons
  • spawning and blocking virtual threads is cheap
  • virtual threads require carrier threads to run on
    • a carrier thread runs a virtual thread by mounting it
    • if the VT blocks, the stack is stored and the VT is unmounted to be resumed later
  • a j.u.c.Executor like a ForkJoinPool or ThreadPoolExecutor is used to schedule VTs to carriers
    • custom schedulers can be provided by implementing the Executor interface
  • millions of virtual threads can run on few carrier threads
  • Continuations (basically a VT without a scheduler) won't be in the initial release but might appear later
  • serialization is planned but currently low priority

edge cases:

  • ThreadLocals must be used with care but will still work
    • Thread.Builder#disallowThreadLocals() can be used to prohibit it entirely
    • better solutions like Scopes, Carrier- or ProcessorLocals might be implemented in future
  • some situations will cause pinning which will block the carrier if the virtual thread blocks while pinned
    • native stack frames will cause pinning
    • blocking a VT while holding a monitor (i.e. synchronized block) will currently block the carrier
      • this might be only a temporary limitation
      • doesn't apply to alternatives like j.u.c.ReentrantLock which can be used instead
    • -Djdk.tracePinnedThreads=short or -Djdk.tracePinnedThreads=full will log pinned threads

for more info: [State of Loom 1, 2] [Loom Proposal] [loom-dev mailing list] [ea javadoc]

a quick test

Since Loom is implemented as a preview feature, the flag --enable-preview has to be passed to both javac and also to the JVM at launch. This will load the preview module and tell the JVM that it is ok to run bytecode which has been compiled with preview features. This should reduce the risk of it accidentally landing on production machines via a maven repository :).


    public static void main(String[] args) {
        Thread.startVirtualThread(() -> {
            System.out.println("Hello Loom from "+Thread.currentThread()+"!");
        });
    }
output:
Hello Loom from VirtualThread[<unnamed>,ForkJoinPool-1-worker-3,CarrierThreads]!

The code above attaches a Runnable via Thread.startVirtualThread(Runnable task) to a new virtual thread and schedules it for execution on the global carrier thread pool. The output shows that the carrier thread pool in use is in fact a j.u.c.ForkJoinPool, which has a work-stealing scheduler. The size of the global carrier pool can be set with the flag -Djdk.defaultScheduler.parallelism=N; the default is the available core count (or hardware thread count, or whatever the container is configured to return).

a better test

The following example can run tasks on either a classic fixed-size thread pool (with POTs) or on an unbounded virtual thread pool. The virtual thread pool attaches each task to a new virtual thread and uses the fixed-size thread pool as carrier pool. The tasks consist of a simulated IO part and a computational part; the carrier thread count and the number of tasks can be adjusted.

This is no benchmark or load test, but rather an attempt to demonstrate the differences between the two thread types.



    public static void main(String[] args) throws InterruptedException {
       
        final boolean USE_VIRTUAL_THREADS = true;
        final int CARRIER_THREAD_COUNT = 1;
        final int TASK_COUNT = 2;
        
        // plain old thread factory and thread pool using the new builder
        ThreadFactory carrierTF = Thread.builder().name("carrier#", 0)
                                                  .daemon(true).factory();
        ExecutorService carrierPool = Executors.newFixedThreadPool(
                                          CARRIER_THREAD_COUNT, carrierTF);
        
        ExecutorService executor;
        
        if(USE_VIRTUAL_THREADS) {

            // factory for virtual threads scheduled on the carrier pool
            ThreadFactory virtualTF = Thread.builder()
                    .virtual(carrierPool)
                    .name("virtual#", 0).factory();

            // thread executor will spawn a new virtual thread for each task
            executor = Executors.newThreadExecutor(virtualTF);
            
        }else{
            executor = carrierPool;
        }
        
        for (int i = 0; i < TASK_COUNT; i++)
            executor.submit(new WaitAndHurry());
        
        executor.shutdown();
        executor.awaitTermination(20, TimeUnit.SECONDS); // virtual threads are daemons
        
    }

The task itself is less interesting:


    
    private final static class WaitAndHurry implements Runnable {

        private final static long START_TIME = System.nanoTime();
        
        @Override
        public void run() {
            doIO();    // block for 2s
            doWork();  // compute something for ~2s
            print("done");
        }

        private void doIO() {
            print("io");
            try {
                Thread.sleep(2000);
            } catch (InterruptedException ex) {
                throw new RuntimeException(ex);
            }
        }

        private void doWork() {
            print("work");
            long number = 479001599; 
            boolean prime = true;
            for(long i = 2; i <= number/2; ++i) {
                if(number % i == 0)  {
                    prime = false;
                    break;
                }
            }
            if (!prime) {throw new RuntimeException("wrong result");} // to prevent the JIT from optimizing everything away
        }

        private void print(String msg) {
            double elapsed = (System.nanoTime()-START_TIME)/1_000_000_000.0d;
            String timestamp = String.format("%.2fs", elapsed);
            System.out.println(timestamp + " " + Thread.currentThread() + " " + msg);
        }
        
    }

output for 1 carrier thread and 2 tasks attached to virtual threads:

0.00s VirtualThread[virtual#0,carrier#0,main] io
0.01s VirtualThread[virtual#1,carrier#0,main] io
2.03s VirtualThread[virtual#0,carrier#0,main] work
3.88s VirtualThread[virtual#0,carrier#0,main] done
3.88s VirtualThread[virtual#1,carrier#0,main] work
5.67s VirtualThread[virtual#1,carrier#0,main] done
Knowing that the IO part of the task takes 2s and the computational part about 1.8s (on my system, without warmup), we can put it into a chart by looking at the timestamps:
  VT0: |WAIT||WORK|
  VT1: |WAIT|      |WORK|
If we view the carrier thread as a resource we can draw a less abstract version which is closer to reality:
  CT0: |IDLE||WORK||WORK|
  VT0: |WAIT|     .
  VT1: |WAIT|           .

This shows that virtual threads already have the ability to wait in parallel, even when run on just a single carrier thread. The carrier thread is also the only entity which is able to do work, since it can only mount one virtual thread at a time.

Rule of thumb: virtual threads are concurrent waiters while real threads are concurrent workers.

Classic thread pool for reference using a single thread:
0.00s Thread[carrier#0,5,main] io
2.02s Thread[carrier#0,5,main] work
3.84s Thread[carrier#0,5,main] done
3.84s Thread[carrier#0,5,main] io
5.84s Thread[carrier#0,5,main] work
7.67s Thread[carrier#0,5,main] done

  CT0: |WAIT||WORK||WAIT||WORK|

Sequential as expected.


Let's bump it to 2 carrier threads and 4 tasks:

0.02s VirtualThread[virtual#0,carrier#0,main] io
0.03s VirtualThread[virtual#2,carrier#0,main] io
0.03s VirtualThread[virtual#3,carrier#0,main] io
0.02s VirtualThread[virtual#1,carrier#1,main] io
2.03s VirtualThread[virtual#0,carrier#0,main] work
2.04s VirtualThread[virtual#2,carrier#1,main] work
3.85s VirtualThread[virtual#2,carrier#1,main] done
3.85s VirtualThread[virtual#3,carrier#1,main] work
3.86s VirtualThread[virtual#0,carrier#0,main] done
3.86s VirtualThread[virtual#1,carrier#0,main] work
5.63s VirtualThread[virtual#3,carrier#1,main] done
5.69s VirtualThread[virtual#1,carrier#0,main] done

  VT0: |WAIT||WORK|
  VT1: |WAIT||WORK|
  VT2: |WAIT|      |WORK|
  VT3: |WAIT|      |WORK|
        
  CT0: |IDLE||WORK||WORK|
  CT1: |IDLE||WORK||WORK|
  VT0: |WAIT|     .
  VT1: |WAIT|     .
  VT2: |WAIT|           .
  VT3: |WAIT|           .

Now we gained the ability to work in parallel using two threads while using all virtual threads to wait in parallel - best of both worlds.

Classic thread pool for reference using 2 threads:

0.00s Thread[carrier#1,5,main] io
0.00s Thread[carrier#0,5,main] io
2.03s Thread[carrier#1,5,main] work
2.03s Thread[carrier#0,5,main] work
3.87s Thread[carrier#0,5,main] done
3.87s Thread[carrier#0,5,main] io
3.88s Thread[carrier#1,5,main] done
3.88s Thread[carrier#1,5,main] io
5.87s Thread[carrier#0,5,main] work
5.88s Thread[carrier#1,5,main] work
7.67s Thread[carrier#0,5,main] done
7.70s Thread[carrier#1,5,main] done

  CT0: |WAIT||WORK||WAIT||WORK|
  CT1: |WAIT||WORK||WAIT||WORK|

No surprises.


real threads in a virtual world

Virtual threads implicitly convert blocking APIs into an async/await pattern - and you won't even have to be aware of it as a user of an API (most of the time, at least). Entire callback-based frameworks (buzzword: reactive) could be made obsolete, since their main purpose has always been to spare programmers from having to deal with any kind of concurrency problem, often even accepting that nothing can run in parallel in them (only parallel waiting is happening behind the scenes, basically like in Python, or like virtual threads on a single carrier in our example). Even Node.js received basic worker_threads in v12, using a language which is single threaded by design (data is copied to the worker when it starts and copied back again in a callback when the job is done).

Java, on the other hand, has been multithreaded since the beginning (25 years ago; time flies) and is only now getting virtual threads (if you don't count the green threads of Java 1.1). Since virtual threads use the same java.lang.Thread class as OS threads do, they are pretty much interchangeable and can keep using established APIs. Asynchronous IO APIs will hopefully be used less often in the future, because code which does async IO now can be made less error-prone and easier to read by using simple blocking IO APIs from within virtual threads.
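As a sketch of what that could look like with the current EA API - plain blocking socket code, one virtual thread per connection (a hypothetical echo server, error handling omitted):


import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingEchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept();
                Thread.startVirtualThread(() -> { // one cheap thread per connection
                    try (socket) {
                        // blocks only the virtual thread, the carrier moves on
                        socket.getInputStream().transferTo(socket.getOutputStream());
                    } catch (IOException ignored) {}
                });
            }
        }
    }
}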

Plain old java threads will most likely still have a purpose (besides being carriers) in the future, however: not every long-running background task which periodically reads a few bytes from a file will benefit from being virtual, and limitations like pinning due to native stack frames (which also blocks the carrier) will probably always require some additional POTs for special cases.


summary

Project Loom made significant progress recently and is already in a fairly usable state. I am looking forward to it being integrated into the main repository (hopefully JDK 16 or 17). Virtual threads have the potential to be a big game changer for Java: better concurrency while using fewer OS threads, without significant code changes - what more could you ask for.

Debugging a few million virtual threads is going to be interesting; thread dumps of the future will require a bit more tooling, e.g. hierarchical views etc. - or at least a good scroll wheel on the mouse :)


// Tracking Apache Roller page views using JFR custom events

The Java Flight Recorder is great for monitoring the JVM, but after checking out the new JFR Event Streaming in JDK 14, I was curious how well it would work in practice for basic logging needs. I always wanted to generate some page view statistics for this web blog but never liked the idea of tracking users in their browsers (or at least those few who turned off tracking protection for some reason or used an outdated browser).

So let's take a look at whether we can record custom page view events using JFR instead of logging or storing stats in a database.

The goal is to record the link the anonymous reader clicked to get to the page (or null) and the actual page on the blog which is visited.

Custom Event


    @Label("Incoming Referrer Event")
    @Description("Reader landed on the Blog.")
    @StackTrace(false)
    public class IncomingReferrerEvent extends Event {

        @Label("Referrer URL")
        private final String referrerUrl;
        
        @Label("Request URL")
        private final String requestUrl;

        public IncomingReferrerEvent(String referrerUrl, String requestUrl) {
            this.referrerUrl = referrerUrl;
            this.requestUrl = requestUrl;
        }

    }

Defining a custom JFR event is trivial: simply extend jdk.jfr.Event and annotate the fields - the code should be self-explanatory. @StackTrace(false) disables the recording of stack traces since we don't need them for this event. Custom events are enabled by default. The event name is the fully qualified class name by default, but can be overwritten with @Name("dev.foo.NewName").


    new IncomingReferrerEvent(referrerUrl, requestUrl).commit();

This records the event (placed somewhere in doGet() of Apache Roller's HttpServlet). Events can have a beginning and an end; by simply calling commit() without begin() or end(), only the timestamp at commit and the event values are stored. Due to the design of JFR you don't really have to worry about concurrency, because events are held in thread-local buffers before they are flushed to disk, which should make it fairly performant even in highly parallel settings.
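If an event should also cover a time span, a begin()/commit() pair does the trick - a small sketch (handleRequest is a hypothetical stand-in for the actual servlet logic):


    IncomingReferrerEvent event = new IncomingReferrerEvent(referrerUrl, requestUrl);
    event.begin();                    // stores the start timestamp
    handleRequest(request, response); // the work we want to measure
    event.commit();                   // stores duration = now - begin, plus the field values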

Configuration

The next step is to setup a JFR configuration file.



<?xml version="1.0" encoding="UTF-8"?>
<configuration version="2.0" label="Blog" description="Roller JFR Events" provider="Duke Himself">

    <event name="org.apache.roller.weblogger.jfr.IncomingReferrerEvent">
        <setting name="enabled">true</setting>
    </event>

</configuration>

I could have used the default configuration provided by the JDK, but this would lead to much larger file sizes after some uptime. The configuration above only enables our custom event; everything else is turned off. For a larger blueprint see JDK_HOME/lib/jfr/default.jfc.

Once the server is running we can switch the recording on using jcmd. ($1 is the PID)


jcmd $1 JFR.configure repositorypath="/home/server/jfr" samplethreads=false
jcmd $1 JFR.start name="blogstats" settings="/home/server/jfr/blog.jfc"

First Test - 8 MB, not great, not terrible

After a quick test we can print the summary of the recording file in the repository to check if we see any events.


jfr summary 2020_02_03_04_35_07_519/2020_02_07_16_46_39.jfr 

 Version: 2.1
 Chunks: 1
 Start: 2020-02-07 15:46:39 (UTC)

 Event Type                                             Count  Size (bytes) 
==============================================================================
 jdk.CheckPoint                                        130997      12587784
 org.apache.roller.weblogger.jfr.IncomingReferrerEvent     66          4735
 jdk.Metadata                                               1         81383
 jdk.ThreadEnd                                              0             0
 jdk.TLSHandshake                                           0             0
 jdk.ThreadSleep                                            0             0
 jdk.ThreadPark                                             0             0
 jdk.JavaMonitorEnter                                       0             0
...

There it is - we can see IncomingReferrerEvent in the recording. You might be wondering what those jdk.CheckPoint events are. I wasn't sure either, since I noticed them for the first time with JDK 14 and a web search resulted in no hits, so I asked on hotspot-jfr-dev. It turns out that they are part of the file format - for more details read the reply, or even better, the rest of the mail thread too.

The check points will create around 8 MB of data per day (one check point per second at ~96 bytes each, as in the summary above, times 86400 seconds), which can be seen as a static footprint without recording any actual events. 8 MB, not great, not terrible.

More Events

Having now a good estimate for the footprint, it was time to enable more events. The correlation of IncomingReferrerEvents with GC events, network/CPU load or thread count might be interesting, given that it is all running on a Raspberry Pi 3b+ with limited resources. I ended up enabling some more events after grep'ing the code base of VisualVM for event Strings (it looks like JFR support is coming soon; the custom build I am using looks fairly complete already) and then adding them to the blog.jfc config file.


jfr summary blog_dump06.jfr 
...
 Event Type                                              Count  Size (bytes) 
==============================================================================
 jdk.CheckPoint                                        1223057     118202307
 jdk.NetworkUtilization                                 264966       4731868
 jdk.CPULoad                                            242765       6198487
 jdk.ActiveSetting                                        3348        123156
 org.apache.roller.weblogger.jfr.IncomingReferrerEvent    6367         89338
 jdk.GCHeapSummary                                         830         35942
 jdk.GarbageCollection                                     415         12577
 jdk.Metadata                                               13       1038982
 jdk.JVMInformation                                         12         27113
 jdk.GCHeapConfiguration                                    12           353
 jdk.ActiveRecording                                        12           727

Even with the added events and another 14-day test run I ended up with a file size of about 130 MB. That is about 9.28 MB per day - mostly CheckPoints ;) - in fact, ~118 MB of them. Luckily the file compresses nicely.

The undocumented Flag

If you take a look at the JDK sources you will notice a flag with the description: "flush-interval, Minimum time before flushing buffers, measured in (s)econds..." - sounds exactly like what we need.

jcmd $1 JFR.start name="blogstats" flush-interval=10s settings="/home/server/jfr/blog.jfc"

Results look promising: as expected around 10x reduction in CheckPoint events. (I don't have more data to show since I literally just added the flag and rebooted the server as I am typing this)

update: the size of the recording dump file after ~14.3 days (343h) is 25.13 MB, which is about 1.8 MB per day - much better.

I can't recommend relying on undocumented or hidden flags since they can change or go away without notice (@see hotspot-jfr-dev discussion). The realtime nature of event streaming is also gone with a buffer delay of 10s, so keep that in mind.

The File Format

The file format for the JFR repository and recording dumps isn't standardized and can change. So another thing to keep in mind if JFR is used for logging is that it might not be possible to read old JDK 14 dumps with JDK 20 in the future (although reading JDK 11 dumps with 14 works just fine). But this doesn't sound like a big issue to me since JDKs don't just disappear: it will always be possible to open JDK 14 dumps with JDK 14 (and maybe even with JDK 20 from the future, if backwards compatibility can be maintained).

Conclusion

Although the static footprint of the JFR in JDK 14 came as a surprise, I am not too concerned about it. Storage is cheap and I am sure the issue will be solved or reduced to a minimum in one way or another.

The benefit of tracking my blog stats with JFR is that I can inspect them with the same tools I would use for JVM performance monitoring (VisualVM, JDK Mission Control, etc). JFR Event Streaming and the OpenJDK command line tools make it also easy to access the data in a structured manner.

JFR can not only be used for JVM monitoring but certainly also for structured logging in some scenarios. Log files will probably never disappear completely, and I am sure JFR (or the original JRockit Flight Recorder implementation) was never intended to fully replace a logging framework - but maybe one day logs will mostly contain lines produced by Event#toString(), running dual stacks only where still required :).

fly safe


// JFR Event Streaming with Java 14

The Java Flight Recorder has a long history. It used to be part of the BEA JRockit JVM, became a commercial feature of the Oracle JDK (7+) after Oracle acquired BEA and Sun and was finally fully open sourced with the release of OpenJDK 11 (backports exist for 8) (JEP328). OpenJDK 14 will add some improvements to the JFR.

JEP 349 will allow the continuous consumption of java flight recorder events in-memory from within the same JVM, or out-of-process from a different JVM via its JFR repository file.

JEP 349 already made it into the early access builds and can be experimented with since OpenJDK 14 build 22+. Let's check it out.

In-Process Streaming

The base JFR configuration files (XML) can be found in JDK_HOME/lib/jfr. The default configuration (default.jfc) is relatively low overhead, while profile.jfc will provide more data. Java Mission Control can create custom settings based on templates if needed. I am using the default config for the examples.

The first example will start JFR on the local JVM using the default recorder settings and register a few event handlers to check if it is working.


import java.io.IOException;
import java.text.ParseException;
import jdk.jfr.Configuration;
import jdk.jfr.consumer.EventStream;
import jdk.jfr.consumer.RecordingStream;

public class JFRStreamTest {
    
    public static void main(String[] args) throws IOException, ParseException  {
        
        Configuration config = Configuration.getConfiguration("default");
        System.out.println(config.getDescription());
        System.out.println("settings:");
        config.getSettings().forEach((key, value) -> System.out.println(key+": "+value));
        
        // open a stream and start local recording
        try (EventStream es = new RecordingStream(config)) {
            
            // register event handlers
            es.onEvent("jdk.GarbageCollection", System.out::println);
            es.onEvent("jdk.CPULoad", System.out::println);
            es.onEvent("jdk.JVMInformation", System.out::println);
            
            // start and block
            es.start();
        }
    
    }
    
}

The above example should print information about the running JVM once, CPU load periodically and GC events when they occur.

Out-of-Process Streaming

Simply start the flight recorder as usual via jcmd <PID> JFR.start, or via the JVM flag -XX:+FlightRecorder at startup. The repository location will be stored in the jdk.jfr.repository system property as soon as JFR is running (new in Java 14). It can also be set at startup via a comma-separated list of flight recorder options: -XX:FlightRecorderOptions:repository=./blackbox

Update thanks to Erik from Oracle in the comments section: The repository location can be also set using jcmd <PID> JFR.configure repositorypath=<directory>. If you set it after a recording has started, new data will be written in the new location.


$ jcmd -l | grep netbeans
8492 org.netbeans.Main ... --branding nb

$ jcmd 8492 JFR.start name=streamtest
Started recording 1. No limit specified, using maxsize=250MB as default.
Use jcmd 8492 JFR.dump name=streamtest filename=FILEPATH to copy recording data to file.

$ jinfo -sysprops 8492 | grep jfr
jdk.jfr.repository=/tmp/2019_11_18_02_19_59_8492

Now that the recording is running and we know where the repository is, a second JVM can open a stream to the live JFR repository and monitor the application. Note that we did not dump any JFR recordings to a file - we connect to the live repository directly.


import java.io.IOException;
import java.nio.file.Path;
import jdk.jfr.consumer.EventStream;

public class JFRStreamTest {
    
    public static void main(String[] args) throws IOException  {
        // connect to JFR repository
        try (EventStream es = EventStream.openRepository(Path.of("/tmp/2019_11_18_02_19_59_8492"))) {
            
            // register some event handlers
            //es.onEvent("jdk.CPULoad", System.out::println);
            es.onEvent("jdk.SocketRead", System.out::println);
            es.onEvent("jdk.SocketWrite", System.out::println);
            
            // start and block
            es.start();
        }
    
    }
    
}

As a quick test, I monitored a NetBeans instance running on Java 14 with the example above and let the IDE check for updates. Since we watch for SocketRead and SocketWrite events, the output looked like:

jdk.SocketRead {
  startTime = 04:34:09.571
  duration = 117.764 ms
  host = "netbeans.apache.org"
  address = "40.79.78.1"
  port = 443
  timeout = 30.000 s
  bytesRead = 5 bytes
  endOfStream = false
  eventThread = "pool-5-thread-1" (javaThreadId = 163)
  stackTrace = [
    java.net.Socket$SocketInputStream.read(byte[], int, int) line: 68
    sun.security.ssl.SSLSocketInputRecord.read(InputStream, byte[], int, int) line: 457
    sun.security.ssl.SSLSocketInputRecord.decode(ByteBuffer[], int, int) line: 165
    sun.security.ssl.SSLTransport.decode(TransportContext, ByteBuffer[], int, int, ...
    sun.security.ssl.SSLSocketImpl.decode(ByteBuffer) line: 1460
    ...
  ]
}
...

Streaming Dumped Records

Opening streams (EventStream.openFile(path)) to JFR recording dumps (jcmd <PID> JFR.dump filename=foo.jfr) is of course also possible - and works as expected.
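A minimal sketch, assuming a dump called foo.jfr in the working directory:


import java.io.IOException;
import java.nio.file.Path;
import jdk.jfr.consumer.EventStream;

public class JFRDumpReader {
    public static void main(String[] args) throws IOException {
        // replays all events of a finished recording in order
        try (EventStream es = EventStream.openFile(Path.of("foo.jfr"))) {
            es.onEvent("jdk.SocketRead", System.out::println);
            es.start(); // returns once the end of the file is reached
        }
    }
}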

Conclusion

Pretty cool new feature! It is currently not possible to do in-memory but out-of-process streaming without syncing on repository files. But since a ramdisk can work around this issue so easily, I am not even sure if this capability would be worth it.

fly safe


// [Dynamic] [Application] Class Data Sharing in Java

Class Data Sharing has existed as a JVM feature for quite some time already, but it became more popular in the context of containers. CDS maps a pre-defined class archive into memory and makes it shareable between JVM processes. This can improve startup time but also reduce per-JVM footprint in some cases.

Although basic CDS was already supported in Java 5, this blog entry assumes that Java 11 or later is used, since JVM flags and overall capabilities changed over time. CDS might not be supported on all architectures or GCs: G1, CMS, ShenandoahGC, Serial GC, Parallel GC, and ParallelOld GC all support CDS on 64-bit Linux, while ZGC for example doesn't support it yet.


JDK Class Data Sharing

The most basic form of CDS is set up to share only JDK class files. It has been enabled by default since JDK 12 (JEP 341). If you check the java version in a shell, you will notice that the JVM version string contains "sharing".

[mbien@hulk server]$ java -version
openjdk version "12.0.1" 2019-04-16
OpenJDK Runtime Environment AdoptOpenJDK (build 12.0.1+12)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 12.0.1+12, mixed mode, sharing)
The shared class archive, called classes.jsa, can be found pre-installed in ${JAVA_HOME}/lib/server/. After temporarily renaming the file or switching to an older JDK, "sharing" won't appear in the version string anymore.

[mbien@hulk server]$ java -version
openjdk version "12.0.1" 2019-04-16
OpenJDK Runtime Environment AdoptOpenJDK (build 12.0.1+12)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 12.0.1+12, mixed mode)

The JDK CDS archive can be manually generated by invoking java -Xshare:dump with root privileges. Recent JVMs are configured to use -Xshare:auto at startup, which will automatically use CDS if available. Enforcing CDS with -Xshare:on will cause the JVM to fail if no archive is found.


Application Class Data Sharing

The basic CDS archive only contains JDK class data. Adding application class data (JEP 310) is fairly easy but comes with some limitations.

  • the classpath used at archive creation time must be the same as (or a prefix of) the classpath used at run time
  • wildcards or exploded JARs are not allowed in the path (no cheating)

1) create a list of classes (JDK and APP; Java 11+) to include in the shared archive:


$ java -Xshare:off -XX:DumpLoadedClassList=classes.list -jar foo.jar

2) create the archive using that list:


$ java -Xshare:dump -XX:SharedClassListFile=classes.list -XX:SharedArchiveFile=foo.jsa -cp foo.jar

3) launch the applications using the shared archive:


$ java -Xshare:on -XX:SharedArchiveFile=foo.jsa -jar foo.jar

-Xlog:class+load helps to verify that it is working properly.

[0.062s][info][class,load] java.lang.Object source: shared objects file
[0.062s][info][class,load] java.io.Serializable source: shared objects file
[0.062s][info][class,load] java.lang.Comparable source: shared objects file
[0.062s][info][class,load] java.lang.CharSequence source: shared objects file
...

The applications (containers) don't have to be identical to make use of custom CDS archives. The archive is based on the class list of step 1, which can be freely modified or merged with other lists (see the sketch below). The main limitation is the classpath prefix rule.
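Merging several class lists while dropping duplicates takes only a few lines - a hypothetical little helper (any text processing tool works just as well):


import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashSet;
import java.util.Set;

public class MergeClassLists {
    public static void main(String[] args) throws IOException {
        // merges all class lists passed as arguments, dropping duplicates but keeping order
        Set<String> merged = new LinkedHashSet<>();
        for (String file : args) {
            merged.addAll(Files.readAllLines(Path.of(file)));
        }
        Files.write(Path.of("merged.list"), merged);
    }
}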

Some Results

As a quick test I used OpenJDK 11 + Eclipse Jetty + Apache JSP Wiki, edited a single page (to ensure the classes are loaded), generated a custom archive, and started a few more instances with CDS enabled using the archive.


classes.list size = 4296 entries
wiki.jsa size = 57,6 MB 

 VIRT    RES     SHR  %MEM    TIME    COMMAND
2771292 315128  22448  8,3   0:20.54 java -Xmx128m -Xms128m -Xshare:off -cp ... 
2735468 259148  45184  6,9   0:18.38 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...
2735296 270044  45192  7,1   0:19.66 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...
2735468 259812  45060  6,9   0:18.26 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...
2735468 256800  45196  6,8   0:19.10 java -Xmx128m -Xms128m -XX:SharedArchiveFile=wiki.jsa -cp ...

The first JVM is started with CDS turned off as reference. The four other instances have sharing enabled using the custom archive.


Dynamic Class Data Archives

The proposal in JEP 350, which is currently targeted for Java 13, will allow combining a static base archive with a dynamically generated archive.

The dynamic class data archive is created in a setup phase at application exit (-XX:ArchiveClassesAtExit=dynamic.jsa) and essentially automates steps 1 and 2. Just like before, -XX:SharedArchiveFile=dynamic.jsa will tell the JVM to map the shared archive. Since the dynamic archive references the static JDK archive, both will be used automatically.

The main advantage (besides the added convenience) is that the dynamic archive will support both built-in class loaders and user-defined class loaders.

now go forth and share class data :)


// Cleaning Bash History using a Java 11 Single-File Sourcecode Program

With JEP 330, Java 11 adds the ability to launch a Java source file directly from the command line, without having to explicitly compile it first. I can see this feature being convenient for some simple scripting or quick tests. Although Java isn't the most concise language, it still has the benefit of being quite readable and having powerful utility APIs.

The following example reads the bash shell history from a file into a LinkedHashSet to retain ordering and keeps the most recently used line if duplicates exist, essentially cleaning up the history.


import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Stream;

public class HistoryCleaner {

    public static void main(String[] args) throws IOException {

        Path path = Path.of(System.getProperty("user.home")+"/.bash_history");

        Set<String> deduplicated = new LinkedHashSet<>(1024);
        try (Stream<String> lines = Files.lines(path)) {
            lines.forEach(line -> {
                deduplicated.remove(line);
                deduplicated.add(line);
            });
        }

        Files.write(path, deduplicated);
    }

}


duke@virtual1:~$ java HistoryCleaner.java

will compile and run the "script".


Shebangs are supported too


#!/usr/bin/java --source 11 -Xmx32m -XX:+UseSerialGC

import java.io....

However, since this is not part of the java language specification, single-file java programs (SFJP) with a #! (shebang) in the first line are technically not regular java files and will not compile when used with javac. This also means that the common file naming conventions are not required for pure SFJPs (i.e. the file name does not have to be the same as the class name).

Using a different file extension is advised. We can't call it .js - so let's call it .sfjp :).


duke@virtual1:~$ ./HistoryCleaner.sfjp

This will compile and run the HistoryCleaner SFJP containing the shebang declaration in the first line. Keeping the .java file extension would result in a compiler error. As a bonus, we can also set JVM flags using the shebang mechanism. Keep in mind that --source [version] must be the first flag to tell the java launcher that it should expect a source file instead of bytecode.