
Better entropy in virtual machines

I had the problem that a Tomcat 7 server in a virtual machine would take ages to start, even though only a few rather small applications were deployed in it. I think these problems first appeared after upgrading to Ubuntu 14.04 LTS, but that might just have been a coincidence. Fortunately, the log file hinted at the cause of the problem:

INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [206,789] milliseconds.
[...]
INFO: Server startup in 220309 ms

So the initialization of the random number generator (RNG) was responsible for most of the startup time. When you think about it, this is not that surprising: when the system has just booted, there is virtually no entropy available, so a read from /dev/random might block for a very long time. On a physical system, one can use something like haveged or a hardware RNG to fill the kernel's entropy pool, but what about a virtual machine?
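Out of curiosity, the delay can be reproduced outside of Tomcat: the first use of a freshly created SHA1PRNG instance forces it to seed itself, which may block when the entropy pool is empty. A minimal sketch (assuming the JVM seeds from the blocking /dev/random, which depends on the securerandom.source setting):

import java.security.SecureRandom;

// Minimal sketch: measure how long the self-seeding of a SHA1PRNG instance takes.
// On a freshly booted system with an empty entropy pool, this may block for a long time.
public class SeedTiming {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        SecureRandom rng = SecureRandom.getInstance("SHA1PRNG");
        rng.nextBytes(new byte[16]); // the first use triggers the self-seeding
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("SHA1PRNG seeding took " + elapsedMs + " ms");
    }
}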

Luckily, recent versions of Linux KVM and libvirt provide a way to feed entropy from the virtualization host to a virtual machine. Inside the virtual machine, the device appears as a hardware RNG (/dev/hwrng). Have a look at my wiki for a configuration example.
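For orientation, the mechanism in question is the virtio RNG device; the relevant piece of the guest's libvirt domain XML looks roughly like this (a sketch, the configuration in the wiki may differ in the details):

<!-- Virtio RNG device for the guest; the backend reads from the host's
     /dev/random, which in turn is kept filled by haveged. -->
<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>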

In the virtual machine, one still needs to read data from the virtual RNG and feed it into the kernel's entropy pool. For this purpose, the rngd daemon from the rng-tools package does a good job.
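In practice this amounts to pointing rngd at the virtio device, something along these lines (on Debian/Ubuntu the device is usually configured in /etc/default/rng-tools):

# Feed the kernel's entropy pool from the virtio RNG device
rngd -r /dev/hwrng

# Check how much entropy is available afterwards
cat /proc/sys/kernel/random/entropy_avail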

Using the combination of haveged on the VM host and rng-tools in the VM, I could reduce the startup time of the Tomcat server significantly:

INFO: Server startup in 11831 ms

Non-blocking DatagramChannel and empty UDP packets

I just found out the hard way that there are two problems when using a non-blocking DatagramChannel in Java with empty (zero payload size) UDP packets.

The first one is not so much a bug as an API limitation: when sending an empty UDP packet, you cannot tell whether it has actually been sent. DatagramChannel.send() returns the number of bytes sent, and returns zero when the packet has not been sent; but for an empty packet, the number of bytes sent is zero even if the send operation succeeded. So there is no way to tell whether the send operation was successful.
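The ambiguity is easy to see in a few lines (host and port are made up for the example):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

// Sketch of the API limitation: for an empty packet, send() returns 0 both when
// the packet was sent and when it was dropped because there was no room in the
// socket's output buffer, so the return value does not tell the two cases apart.
public class EmptySend {
    public static void main(String[] args) throws Exception {
        DatagramChannel channel = DatagramChannel.open();
        channel.configureBlocking(false);
        ByteBuffer empty = ByteBuffer.allocate(0);
        int sent = channel.send(empty, new InetSocketAddress("localhost", 9999));
        System.out.println("send() returned " + sent); // always 0 for an empty packet
        channel.close();
    }
}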

The second problem is more serious, and this one clearly is a bug in the implementation: when using a DatagramChannel with a Selector, the selector does not return from its select operation when an empty packet has been received. This means that the select call might block forever, and you will only see the empty packets once a non-empty packet arrives.
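For reference, this is roughly the pattern that is affected (port number made up); with only empty datagrams arriving, the select() call never returns:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Sketch of the affected pattern: a non-blocking DatagramChannel registered with a
// Selector for OP_READ. Empty datagrams do not wake up the selector, so they are
// only noticed once a non-empty datagram arrives.
public class SelectReceive {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(9999));
        channel.configureBlocking(false);
        channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // blocks here, even if empty datagrams have arrived
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {
                    buffer.clear();
                    channel.receive(buffer); // buffer may legitimately stay empty
                }
            }
            selector.selectedKeys().clear();
        }
    }
}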

I describe the two problems and a possible workaround in more detail in my wiki. In the project that I am working on, I can live with not knowing for sure whether a packet was sent (I just try again later if there is no reaction), and for receiving packets I am now using blocking I/O (a sketch follows below). However, I still think that this is a nasty bug that should be fixed in a future release of Java.
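For completeness, the receive-side workaround is simply the plain blocking pattern, where receive() also returns for empty datagrams (port number again made up):

import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

// Sketch of the workaround: keep the channel in blocking mode, so that
// receive() returns for every incoming datagram, including empty ones.
public class BlockingReceive {
    public static void main(String[] args) throws Exception {
        try (DatagramChannel channel = DatagramChannel.open()) {
            channel.bind(new InetSocketAddress(9999));
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (true) {
                buffer.clear();
                SocketAddress sender = channel.receive(buffer); // blocks until a packet arrives
                buffer.flip();
                System.out.println("Received " + buffer.remaining() + " bytes from " + sender);
            }
        }
    }
}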