Hudson reached version 1.300 last Friday. While this release isn't fundamentally different from any past release, it does feel like some kind of a milestone.
The community continues to grow.
I think most of my projects are driven by anger/rage, and this one is no exception. I was doing a hobby project, and I had to write a META-INF/services/Something file and put the fully-qualified name of my class that implements Something in it.
Now, I've done this countless times, and while I hated it every time, I sort of looked the other way and just wrote the file by hand.
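For context, this is the standard JDK service registration mechanism: you put a file named after the interface under META-INF/services/, listing the fully-qualified names of the implementations, and java.util.ServiceLoader discovers them at runtime. A minimal sketch follows; the Something/MyThing names are made-up stand-ins, and the registration file is written to a temp directory at runtime only so the example is self-contained (normally it ships inside your jar):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.ServiceLoader;

public class ServicesDemo {
    // Hypothetical service interface and implementation, for illustration.
    public interface Something {}
    public static class MyThing implements Something {}

    public static void main(String[] args) throws Exception {
        // The registration file is META-INF/services/<binary name of the interface>;
        // each line holds the fully-qualified name of one implementation class.
        Path dir = Files.createTempDirectory("svc");
        Path services = dir.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        Files.write(services.resolve(Something.class.getName()),
                Collections.singletonList(MyThing.class.getName()));

        // ServiceLoader scans that resource on the classpath and
        // instantiates each listed class via its no-arg constructor.
        ClassLoader cl = new URLClassLoader(new URL[]{dir.toUri().toURL()},
                ServicesDemo.class.getClassLoader());
        for (Something s : ServiceLoader.load(Something.class, cl)) {
            System.out.println("loaded " + s.getClass().getName());
        }
    }
}
```

The pain point is exactly that the file content duplicates information the class declaration already carries, which is why writing it by hand feels so tedious.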
Here at my work, I take care of a 30-40 node Hudson cluster for our group.
The java.net Maven2 repository was set up about 2.5 years ago so that people hosting projects on java.net could push artifacts to a Maven repository.
This repository has grown in size to the point that it puts significant strain on the java.net system.
When run on Unix, Hudson can now authenticate users through the operating system, using its user database and group database.
I noticed that many Unix deployments of Hudson choose LDAP for authentication, but the problem with LDAP is that there are too many things you need to configure.
One cannot call oneself a Java geek without having done JVM crash dump analysis. I mean, a C programmer would laugh at you if you told them you don't know how to look at a stack dump.
Starting with 1.281, Hudson can launch itself as a proper Unix daemon. All you have to do is start Hudson as:
$ java -jar hudson.war --daemon
If you run this as root, it'll create /var/run/hudson.pid and record its PID there. Unlike java -jar hudson.war &, this properly detaches the daemon from the shell, so it keeps running even after you exit your shell.
One of the things I recently came across is the Linux kernel's unique ability to have a process-specific file system mount table. In every Unix that I know of, the file system mount table is global to the entire system, but apparently, starting with Linux 2.6.16, you can have multiple mount tables in the system.
Here at Sun, one of my jobs is to maintain our internal Hudson cluster of some 40 nodes. Among other things, part of the administration work involves setting up a new slave every so often, which means installing a new OS, configuring it, and adding it to the cluster. We need to support all kinds of different OSes, and that adds an interesting complexity to the mix.