
Spring's @RequestMapping annotation works on private methods

Recently, I spent a lot of time debugging a nasty problem with Spring WebMVC and Spring Security.

I had a class annotated with @Controller and a method annotated with @RequestMapping. I wanted to protect this method using the @Secured annotation. So I turned on global method security by adding @EnableGlobalMethodSecurity with the right parameters to my @Configuration class, but it did not work: The method could still be called without the proper privileges (or without being authenticated at all).

After hours of debugging, I found out that the AOP advice was not applied to my controller method because Spring could not find the method when processing the controller class. At that moment I realized that the method had been declared package private. AOP proxies are not applied to non-public methods (for CGLIB proxies this would technically be possible, but in general it is not desirable and Spring does not do it).

This left me with the question: Why does the request mapping work at all? The answer is simple: When looking for methods with the @RequestMapping annotation, Spring does not check the method's access modifiers. As the method is invoked using reflection, it works even if the method has been declared private (unless a SecurityManager is in charge, but for most Spring applications there will not be one).

This leaves us with a very awkward situation: Private methods might be called by external code, and if there is a @Secured annotation on them, it is ignored. In my opinion, this is a bug: The @RequestMapping annotation should only work on public methods. There are actually four places in Spring where this could be fixed (Spring 4.1.7):

  1. AbstractHandlerMapping.java: line 172
  2. AbstractHandlerMapping.java: line 207
  3. HandlerMethodSelector.java: line 60
  4. RequestMappingHandlerMapping.java: line 187

It would be completely sufficient to check whether the method is public in one of these places. Until this is fixed in Spring (and it might never get fixed because the fix would break backward compatibility), I use my own RequestMappingHandlerMapping which does the check:

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

import org.springframework.web.servlet.mvc.method.RequestMappingInfo;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping;

public class PublicOnlyRequestMappingHandlerMapping extends RequestMappingHandlerMapping {

    @Override
    protected RequestMappingInfo getMappingForMethod(Method method, Class<?> handlerType) {
        RequestMappingInfo info = super.getMappingForMethod(method, handlerType);
        if (info != null && !Modifier.isPublic(method.getModifiers())) {
            // Refuse to register a mapping for a non-public method and log a warning.
            logger.warn("Ignoring non-public method with @RequestMapping annotation: " + method);
            return null;
        } else {
            return info;
        }
    }

}

As you can see, the implementation is very simple. I first call the super method and then check whether the method is public, so that I can generate a warning message when @RequestMapping has been used on a non-public method. If one does not care about such a message, one can check the method's access modifiers first and only invoke the super method when the investigated method is public.

In order to use the custom RequestMappingHandlerMapping, we have to provide a customized WebMvcConfigurationSupport (when using Java config, this can be done by extending DelegatingWebMvcConfiguration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.DelegatingWebMvcConfiguration;
import org.springframework.web.servlet.config.annotation.PathMatchConfigurer;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping;

@Configuration
public class CustomWebMvcConfiguration extends DelegatingWebMvcConfiguration {

    @Bean
    @Override
    public RequestMappingHandlerAdapter requestMappingHandlerAdapter() {
        RequestMappingHandlerAdapter adapter = super.requestMappingHandlerAdapter();
        adapter.setIgnoreDefaultModelOnRedirect(true);
        return adapter;
    }

    @Bean
    @Override
    public RequestMappingHandlerMapping requestMappingHandlerMapping() {
        // Same logic as the parent implementation, but using our own handler-mapping class.
        RequestMappingHandlerMapping handlerMapping = new PublicOnlyRequestMappingHandlerMapping();
        handlerMapping.setOrder(0);
        handlerMapping.setInterceptors(getInterceptors());
        handlerMapping.setContentNegotiationManager(mvcContentNegotiationManager());

        PathMatchConfigurer configurer = getPathMatchConfigurer();
        if (configurer.isUseSuffixPatternMatch() != null) {
            handlerMapping.setUseSuffixPatternMatch(configurer.isUseSuffixPatternMatch());
        }
        if (configurer.isUseRegisteredSuffixPatternMatch() != null) {
            handlerMapping.setUseRegisteredSuffixPatternMatch(configurer.isUseRegisteredSuffixPatternMatch());
        }
        if (configurer.isUseTrailingSlashMatch() != null) {
            handlerMapping.setUseTrailingSlashMatch(configurer.isUseTrailingSlashMatch());
        }
        if (configurer.getPathMatcher() != null) {
            handlerMapping.setPathMatcher(configurer.getPathMatcher());
        }
        if (configurer.getUrlPathHelper() != null) {
            handlerMapping.setUrlPathHelper(configurer.getUrlPathHelper());
        }

        return handlerMapping;
    }

}

This implementation copies the implementation of requestMappingHandlerMapping() from the parent class, but replaces the handler-mapping class used with our own class. In addition, this configuration also overrides requestMappingHandlerAdapter() in order to set the ignoreDefaultModelOnRedirect attribute. This is the recommended setting for new Spring WebMVC applications, but it cannot be made the default in Spring because it would break backward compatibility. Of course, the two changes are completely independent, so you can choose to implement only either of them.

Using the Red Pitaya with an Asus N10 Nano WiFi dongle

Recently, I got a Red Pitaya, which is a very neat toy for my electronics lab. A few months ago I had already got a BitScope Micro. Back then, the Red Pitaya cost about 450 EUR, while the BitScope Micro (with BNC adapter) cost only about 150 EUR. However, compared to the Red Pitaya, the features of the BitScope Micro are quite limited. In particular, its signal generator is rather limited and the sampling rate (and sampling length) of the two analog inputs is low. Now the price of the Red Pitaya has been reduced, so that it is available for about 250 EUR, and I could not resist any longer.

One of the many neat features of the Red Pitaya is that you do not need a PC with special software and a USB connection to use it. You can simply connect over the network with a browser, so a tablet is enough to use it, which makes it much more flexible. However, by default a wired Ethernet connection is still needed.

Luckily, it can act as a WiFi access point when the right WiFi USB dongle is installed. The Red Pitaya manual recommends the Edimax EW7811Un, but the supplier where I ordered the Red Pitaya did not have this dongle in stock. Basically, the choice of the dongle is limited by the kernel module(s) compiled into the Linux kernel used by the Red Pitaya ecosystem, so any device that is compatible with the rtl8192cu driver should work.

Therefore, I got an Asus N10 nano USB WiFi dongle, which supposedly uses a chip from this family. There seem to be two versions of this device and only one of them uses the Realtek chip. However, the datasheet from the supplier explicitly specified the Realtek chip, so I expected it to work.

When the hardware arrived, I prepared the Micro SD card for the Red Pitaya, plugged everything in, and to my surprise the WiFi did not show up. So I added a wired connection in order to log in via SSH and investigate the situation.

The USB WiFi dongle was detected by Linux (it showed up in /sys/bus/usb), however the corresponding network device was missing and could not be brought up. In the end, I found out that the driver did not handle the device, because the Linux kernel was rather old and the device ID was simply not yet known to the driver.
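
In case you need the ID for a different dongle, it can be read with lsusb:

# The "ID" column shows vendor:product - 0b05:17ba in the case of my Asus dongle.
lsusb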

Such a problem can easily be fixed by explicitly telling the driver to support the device ID (echo "0b05 17ba" >/sys/bus/usb/drivers/rtl8192cu/new_id for my Asus N10 nano WiFi USB dongle), and suddenly I could bring up the network device wlan0. Unfortunately, there was still no "Red Pitaya" WiFi available.

A closer investigation of the startup scripts showed that the software for hosting an access point was only started when the device wlan0 was already available during startup. Luckily, there was a slightly unconventional but rather easy way to ensure this: The necessary code could be added to the file /etc/network/config on the SD card (/opt/etc/network/config in the Red Pitaya's file system). This file is sourced by the initialization script, so any shell code present in this file will be executed before the network is brought up. I simply added the following lines to the beginning of the file:

# Add device ID for Asus N10 Nano to rtl8192cu driver
echo "0b05 17ba" >/sys/bus/usb/drivers/rtl8192cu/new_id

After making this change and rebooting the Red Pitaya, the WiFi access-point mode worked like a charm.

Debian installer with custom proxy and custom repository key

When using the mirror/http/proxy option in a preseed file for the Debian installer, I experienced a problem: The GPG key for a local APT repository could not be loaded, because the installer uses the proxy when downloading the GPG key. This fails if the proxy is an instance of Apt-Cacher NG (which only allows access to certain well-defined paths).

As it turns out, I am not the first one to experience this problem: There is an open bug in both the Ubuntu and the Debian bug trackers. Unfortunately, these bugs have been open for four years without any significant action.

Luckily, there is a workaround: By specifying a URL starting with "file:///" instead of "http://" in the preseed file, this bug can be avoided. Of course, you then have to use an early preseed script to actually download the key file to the specified path. However, this is possible, because the preseed script does not have to go through the Apt-Cacher NG proxy.
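
As a rough sketch (the repository URL, file path, and local-repository option are placeholders and will differ for your setup), the relevant preseed entries could look like this:

# Download the repository key with an early preseed command, bypassing the proxy
d-i preseed/early_command string \
    http_proxy= wget -O /tmp/local-repo-key.asc http://apt.example.org/local-repo-key.asc

# Reference the downloaded file instead of an http:// URL
d-i apt-setup/local0/key string file:///tmp/local-repo-key.asc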

You might want to add unset http_proxy to the start of every shell script that you run from the preseed file. This includes scripts that are run through in-target, because in-target seems to set this environment variable.

Better entropy in virtual machines

I had the problem that a Tomcat 7 server in a virtual machine would take ages to start, even though there were only a few rather small applications deployed in it. I think that these problems first appeared after upgrading to Ubuntu 14.04 LTS, but this might just have been a coincidence. Fortunately, the log file gave a hint to the cause of the problem:

INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [206,789] milliseconds.
[...]
INFO: Server startup in 220309 ms

So the initialization of the random number generator (RNG) was responsible for most of the startup time. When you think about it, this is not that surprising: When the system has just booted, there is virtually no entropy available, so the read from /dev/random might block for a very long time. In a physical system, one can use something like haveged or a hardware RNG to fill the kernel's entropy pool, but what about a virtual machine?

Luckily, in recent versions of Linux KVM and libvirt, there is a way to feed entropy from a virtualization host to a virtual machine. In the virtual machine, the device appears as a hardware RNG (/dev/hwrng). Have a look at my wiki for a configuration example.
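
The essential part is a virtio RNG device in the guest's libvirt domain definition that is backed by the host's /dev/random, roughly like this (the exact syntax depends on the libvirt version; see the wiki page for the complete example):

<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>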

In the virtual machine, one still needs to read the data from the virtual RNG and feed it into the kernel's entropy pool. For this purpose, the daemon from the rng-tools package does a good job.
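
On Debian/Ubuntu systems this basically comes down to installing the package and pointing the daemon at the virtio RNG device; on my virtual machines it looked roughly like this (file locations may differ on other distributions):

apt-get install rng-tools
# In /etc/default/rng-tools, set the source device for rngd:
#   HRNGDEVICE=/dev/hwrng
service rng-tools restart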

Using the combination of haveged on the VM host and rng-tools in the VM, I could significantly reduce the startup time of the Tomcat server:

INFO: Server startup in 11831 ms

Process tracking in Upstart

Recently, I experienced a problem with the process tracking in Upstart:

I wanted to start a daemon process running as a specific user. However, I needed root privileges in the pre-start script, so I could not use the setuid/setgid options. I tried to use su in the exec option, but then Upstart would not track the right process. The expect fork and expect daemon options did not help either. As a nasty side effect, these options cannot be tested easily, because having the wrong option leads to Upstart waiting for an already dead process, and there is no way to reset the status in Upstart. At least there is a workaround for effectively resetting the status without restarting the whole computer.

The problem is that su forks when running the command it is asked to run instead of calling exec from the main process. Unfortunately, the process I had to run would fork again, because I had to run it as a daemon (not running it as a daemon had some undesirable side effects). Finally, I found the solution: Instead of using su, start-stop-daemon can be used. This tool does not fork and therefore does not upset Upstart's process tracking. For example, the line

exec start-stop-daemon --start --chuid daemonuser --exec /bin/server_cmd

will run /bin/server_cmd as daemonuser without forking.

This way, expect fork or expect daemon can be used, just depending on the fork behavior of the final process.
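
Putting everything together, an Upstart job for a daemonizing server process could look roughly like this (job name, user, and paths are placeholders, and expect daemon assumes that the server forks twice when daemonizing):

# /etc/init/myserver.conf (hypothetical example)
description "example server daemon"

start on runlevel [2345]
stop on runlevel [!2345]

# The server detaches itself (forks twice), so Upstart has to expect a daemon.
expect daemon

pre-start script
    # Root-only preparation can go here, e.g. creating a runtime directory.
    mkdir -p /var/run/myserver
    chown daemonuser /var/run/myserver
end script

# start-stop-daemon drops privileges without forking, so process tracking works.
exec start-stop-daemon --start --chuid daemonuser --exec /bin/server_cmd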

Trouble with IPv6 in a KVM guest running the 3.13 kernel

Some time ago, I wrote about two problems with the 3.13 kernel shipping with Ubuntu 14.04 LTS Trusty Tahr: One turned out to be a problem with KSM on NUMA machines acting as Linux KVM hosts and was fixed in later releases of the 3.13 kernel. The other one affected IPv6 routing between virtual machines on the same host. Finally, I figured out the cause of the second problem and how it can be solved.

I use two different kinds of network setups for Linux KVM hosts: For virtual-machine servers in our own network, the virtual machines get direct bridged access to the network (actually I use OpenVSwitch on the VM hosts for bridging specific VLANs, but this is just a technical detail). For this kind of setup, everything works fine, even when using the 3.13 kernel. However, we also have some VM hosts that are actually not in our own network, but are hosted in various data centers. For these VM hosts, I use a routed network configuration. This means that all traffic coming from and going to the virtual machines is routed by the VM host. On layer 2 (Ethernet), the virtual machines only see the VM host and the hosting provider's router only sees the physical machine.

This kind of setup has two advantages: First, it always works, even if the hosting provider expects to only see a single, well-known MAC address (which might be desirable for security reasons). Second, the VM host can act as a firewall, only allowing specific traffic to and from the outside world. In fact, the VM host can also act as a router between different virtual machines, thus protecting them from each other should one be compromised.

The problems with IPv6 only appear when using this kind of setup, where the Linux KVM host acts as a router, not a bridge. The symptoms are that IPv6 packets between two virtual machines are occasionally dropped, while communication with the VM host and the outside world continues to work fine. This is caused by the neighbor-discovery mechanism in IPv6. From the perspective of the VM host, all virtual machines are in the same network. Therefore, it sends an ICMPv6 redirect message in order to indicate that the VM should contact the other VM directly. However, this does not work, because the network setup only allows traffic between the VM host and individual virtual machines, but no traffic between two virtual machines (otherwise it could not act as a firewall). Therefore, the neighbor-discovery mechanism considers the other VM unavailable (it should be on the same network but does not answer). After some time, the entry in the neighbor table (which you can inspect with ip neigh show) expires and communication works again for a short time, until the next redirect message is received and the same story starts again.

There are two possible solutions: The proper one would be to use an individual interface for each guest on the VM host. In this case, the VM host would not expect the virtual machines to be on the same network and would thus stop sending redirect messages. Unfortunately, this makes the setup more complex and - if using a separate /64 for each interface - needs a lot of address space. The simpler albeit sketchy solution is to prevent the redirect messages from having any effect. For IPv4, one could disable the sending of redirect messages through the sysctl option net.ipv4.conf.<interface>.send_redirects. For IPv6, however, this option is not available. So one can either use an ip6tables rule on the OUTPUT chain for blocking those packets or simply configure the KVM guests to ignore such packets. I chose the latter approach and added

# IPv6 redirects cause problems because of our routing scheme.
net.ipv6.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0

to /etc/sysctl.conf in all affected virtual machines.
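
For completeness, the ip6tables-based alternative mentioned above would be a rule on the VM host that drops outgoing redirect messages, something along these lines (I have not tested this variant myself):

# On the VM host: drop outgoing ICMPv6 redirect messages
ip6tables -A OUTPUT -p icmpv6 --icmpv6-type redirect -j DROP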

I do not know why this behavior changed with kernel 3.13. One would expect the same problem to appear with older kernel versions, but I guess there must have been some change in the details of how NDP and redirect messages are handled.

Addendum (2014-11-02):

Adding the suggested options to sysctl.conf does not seem to fix the problem completely. For some reason, an individual network interface can still end up with this setting enabled. Therefore, I now added the following line to the IPv6 configuration of the affected interface in /etc/network/interfaces:

        post-up sysctl net.ipv6.conf.$IFACE.accept_redirects=0

This finally fixes it, even if the other options are not added to sysctl.conf.

Update on KVM problems with kernel 3.13

A few weeks ago, I wrote about problems with kernel 3.13 on Ubuntu 12.04 LTS and 14.04 LTS.

Most likely, the problem that caused the excessive CPU load and occasional high network latency has been fixed by now, and the fix is going to be included in version 3.13.0-33 of the kernel package. I experienced this problem on a multi-processor machine, so it is probable that this was the problem with KSM and NUMA that has been fixed.

I am not sure whether the problems that I had with IPv6 connectivity are also solved by this fix: I had experienced those problems on a single-processor (but multi-core) machine, so it does not sound like a NUMA problem to me.

Anyhow, I will give the 3.13 kernel another try when the updated version is released. For the moment, I have migrated all server machines back to the 3.2 kernel, because the 3.5 kernel's end of life is near and the 3.13 kernel is not ready for production use yet. I do not expect considerable gains from using a newer kernel version on the servers anyway, so for the moment the 3.2 kernel is a good option.


Linux KVM Problems with Ubuntu 14.04 LTS / Kernel 3.13.0-30

A few days ago I upgraded a virtual-machine host from Ubuntu 12.04 LTS (Precise Pangolin) to Ubuntu 14.04 LTS (Trusty Tahr). At first, I thought that everything was working fine.

However, a short time later I noticed funny problems with the network connectivity, particularly (but not only) affecting Windows guests. Occasionally, ICMP echo requests would only be answered with an enormous delay (seconds) or sometimes not be answered at all. TCP connections to guests would stall very often. At the same time, the load on the host system was high even though the CPU usage was not extremely heavy.

After I downgraded the virtual-machine host back to Ubuntu 12.04 LTS (and consequently to kernel 3.5), these problems disappeared immediately.

It seems like this is a bug related to the 3.13 kernel shipped with Ubuntu 14.04 LTS. There is a bug report on Launchpad and a discussion on Server Fault. It might be that the other problems that I experienced with the backported 3.13 kernel are related to this issue.

For the moment I will keep our virtual-machine hosts on Ubuntu 12.04 LTS and kernel 3.5, until the problems with the 3.13 kernel have been sorted out.

Trouble after installing linux-generic-lts-trusty in Ubuntu 12.04 LTS

Yesterday I updated a lot of computers (hosts as well as virtual machines) running Ubuntu 12.04 LTS (Precise Pangolin) to the backported version of the 3.13 kernel. This kernel is provided by the linux-image-generic-lts-trusty package, which is installed (together with the linux-headers-generic-lts-trusty package) when installing linux-generic-lts-trusty. By installing the backported kernel (before the update, all Ubuntu 12.04 LTS systems were running the 3.5 kernel provided by linux-generic-lts-quantal), I wanted to increase the uniformity between the Ubuntu 12.04 LTS and Ubuntu 14.04 LTS systems.

After installing the new kernel and rebooting the machines, funny network problems started to happen. For some virtual machines, IPv6 communication between virtual machines running on the same VM host became very unreliable. For other virtual machines, I experienced occasional huge delays (up to several seconds) for IPv4 packets.

After testing around for a few hours (at the same time I had upgraded a virtual-machine host to Ubuntu 14.04 LTS and first suspected this upgrade, specifically the new version of OpenVSwitch), I found out that these network problems were indeed caused by the new kernel in the virtual machines. If one of two virtual machines running on the same host had the new kernel running, the problems with IPv6 appeared. If both were running the old kernel version, the problems disappeared. The other problem with the massively delayed IPv4 packets was a bit harder to reproduce. Funnily enough, it already became much better when I downgraded just one of the virtual machines on the host.

At the current stage (linux-image-generic-lts-trusty with kernel 3.13.0-30), there seems to be a massive problem with the IP stack of the kernel. For some reason, these problems only seem to be triggered when the kernel is running in a (Linux KVM) virtual machine. For now, I downgraded all virtual machines back to the old kernel version.

I have to do some more tests to find out whether these problems are caused by the newer kernel in general or whether they are specifically caused by the backported version. At the moment I only have one virtual machine with Ubuntu 14.04 LTS, so I will have to set up some test VMs to carry out more tests.

Until then, I can only recommend staying away from the backported 3.13 kernel, at least for virtual machines.

Nagios check_linux_raid missing in Ubuntu 14.04 LTS

I just upgraded a KVM virtual-machine host from Ubuntu 12.04 LTS (Precise Pangolin) to Ubuntu 14.04 LTS (Trusty Tahr). Everything went smoothly except for one problem: The check_linux_raid script is missing from the updated version of the nagios-plugins-standard package.

The nagios-plugins-contrib package seems to contain a script which basically does the same job, but this package has a lot of other plugins that pull in tons of additional dependencies, so I did not want to install it. Luckily, just copying the check_linux_raid script from a system with the older version of Ubuntu worked fine for me.
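
In case you want to do the same, copying the plugin essentially comes down to this (assuming the standard plugin directory and a 12.04 machine called old-host to copy from):

# Copy the plugin from a machine that still runs Ubuntu 12.04 LTS
scp old-host:/usr/lib/nagios/plugins/check_linux_raid /usr/lib/nagios/plugins/
chmod 755 /usr/lib/nagios/plugins/check_linux_raid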

Migrating the EJBCA database from H2 to PostgreSQL

I recently installed EJBCA for managing our internal public-key infrastructure (PKI). Before using EJBCA, I used openssl from the command line, but this became cumbersome, in particular for managing certificate revocation lists (CRLs).

Unfortunately, I made a small but significant mistake when setting up EJBCA: I chose to use the default embedded H2 database. While this database could certainly handle the load of our small PKI, it is inconvenient when making backups: The whole application server needs to be stopped in order to ensure consistent backups, which is rather impractical. Therefore I wanted to migrate the EJBCA database from H2 to PostgreSQL.

However, H2 and PostgreSQL are quite different, and the SQL dump generated by H2 could not easily be imported into PostgreSQL. After trying various approaches, I luckily found the nice tool SQuirreL SQL, which (among other things) can copy tables between databases - even databases of different types. Obviously, this will not solve all migration problems, but for my situation it worked quite well.

I documented the whole migration process in my wiki, in case someone else wants to do the same.

Bare-Metal Recovery of Windows Server 2012 R2 using Bacula

I have been using Bacula as our main backup system for years. While Bacula works perfectly for Linux systems, bare-metal recovery (also known as disaster recovery) of Windows systems has been an open issue ever since.

The Bacula manual describes some procedures, but they only apply to systems running an operating system not newer than Windows Server 2003 R2. Even these procedures remain a bit unclear. If you look for solutions that cover Windows Server 2008 and newer versions of Windows, you will only find a few mailing-list posts that discuss using Windows Server Backup in combination with Bacula. However, none of these solutions sound very appealing.

Because I believe that you do not have a backup unless you have tested the restore, I wanted to find out the best way of backing up a Windows system with Bacula. So I spent some time and installed a Windows Server 2012 R2 system in a virtual machine, made a backup with Bacula, and then tried to restore this backup into a new virtual machine. I actually succeeded without using Windows Server Backup or any other third-party tool. It really seems to work with a Bacula-only solution.

I documented the steps I used in the wiki, just in case I ever have to restore a Windows system from a Bacula backup in the future. Maybe this guide is useful for you as well.

OpenLDAP Server not listening on IPv6 Socket in Zimbra 8

Recently I have been experiencing a strange problem with an installation of the Community Edition of Zimbra Collaboration Server 8: Although all services were running, no e-mails were delivered. In the log file /var/log/zimbra.log I found messages like "zimbra amavis[9323]: (09323-01) (!!)TROUBLE in process_request: connect_to_ldap: unable to connect at (eval 111) line 152.".

The strange thing about this was that the OpenLDAP daemon (slapd) was running and answering requests. After restarting Zimbra (/etc/init.d/zimbra restart), the problem disappeared; however, it reappeared after the next reboot.

After some time I figured out that - right after the reboot - slapd was only listening on an IPv4 socket, not on an IPv6 socket. After restarting the OpenLDAP server (ldap stop && ldap start as user zimbra), the problem disappeared again, and netstat showed that slapd was now also listening on the IPv6 socket.

In the end I could not figure out why the OpenLDAP daemon would only listen on IPv4 when started during system boot, but would listen on both IPv4 and IPv6 when started later. I suspected some problem with name resolution in the early boot process (although both the IPv4 and the IPv6 address were listed in /etc/hosts).

However, I found a workaround for the problem: By setting the local configuration option ldap_bind_url to ldap:/// (zmlocalconfig -e ldap_bind_url=ldap:///), I could configure OpenLDAP to listen on all local interfaces, which apparently fixed the problem.
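
For reference, these are the commands involved (the first two have to be run as the zimbra user); afterwards netstat should show slapd listening on both an IPv4 and an IPv6 socket:

zmlocalconfig -e ldap_bind_url=ldap:///
ldap stop && ldap start
# As root: check on which sockets slapd is listening
netstat -tlnp | grep slapd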

RTFM or better don't...

While I am writing about curious bugs, here is another one, although technically it is not really a bug.

When setting up Icinga with mod_gearman, I wondered why service checks were running on the assigned mod_gearman worker node, but host checks were running on the main Icinga server and were not distributed via mod_gearman. I checked the configuration again and again, but could not find an error. Searching the web did not bring up much useful information either.

The only thing that I could find were hints that do_hostchecks had to be set to "yes" in /etc/mod-gearman/module.conf. But according to the mod_gearman documentation, this option is set to "yes" by default.

Well, as it turns out, the flag is set to "no" by default, at least in the version of mod_gearman that is available in the software repositories of Ubuntu 12.04 LTS (Precise Pangolin). By the way, the manual that is distributed in the source archive of mod_gearman 1.2.2 (the same version that comes with Ubuntu) makes the same claim, so this is not something that changed recently.
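
So on these systems the flag actually has to be enabled explicitly by setting the following line in /etc/mod-gearman/module.conf:

do_hostchecks=yes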

OpenDKIM bug in Zimbra Collaboration Server

Recently I stumbled across a bug in the OpenDKIM configuration of the Zimbra Collaboration Server.

In ZCS 8.0.3 (Community Edition, but I guess the same applies to the Network Edition), the file /opt/zimbra/conf/opendkim.conf.in specifies the socket that OpenDKIM listens on in the following way:

Socket                %%zimbraInetMode%%:8465@[%%zimbraLocalBindAddress%%]

This results in the socket address "inet6:8465@[::1]" in the final file (opendkim.conf). However, the Postfix configuration file /opt/zimbra/postfix/conf/master.cf.in specifies the socket as "inet:localhost:8465". This leads to Postfix trying to connect to an IPv4 socket while OpenDKIM is listening on an IPv6 socket, so the connection cannot be established.

The fix is quite easy: By changing "%%zimbraInetMode%%:8465@[%%zimbraLocalBindAddress%%]" to "inet:8465@[127.0.0.1]" in opendkim.conf.in and restarting Zimbra, OpenDKIM can be made to listen on an IPv4 socket, so that Postfix can connect again.
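
In other words, after the change the Socket line in opendkim.conf.in simply reads:

Socket                inet:8465@[127.0.0.1]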

The curious thing is that this bug was already reported half a year ago and has supposedly been fixed. However, it seems like the fix was only applied to the 9.0 branch of Zimbra and not to Zimbra 8.0.