Worse than static methods or final classes?

Do you know what’s worse than static methods or classes marked as final? I’ll tell you what’s worse: static methods that return final classes that only provide private constructors.

Here I was, merrily testing my way through a piece of software that sends emails. According to the Java Mail documentation, you are supposed to first create an email session as follows:

Session mailSession = Session.getInstance(properties);

(It is worth noting that all the getInstance() method does is call the private constructor of Session: new Session(props, null).)
Had Session been a more normal class, I could have mocked it like this:

Session mockSession = mock(Session.class);

and probably verified how it is passed around, like this:

verify(someService).startEmailSession(mockSession);

What follows is what I have to do instead. Continue reading
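In the meantime, here is a sketch of one common workaround (not necessarily the one described in the full post): hide the static factory call behind a small interface of your own, so that production code never touches Session.getInstance() directly and tests can mock or stub the factory instead. The names MailSessionFactory and JavaMailSessionFactory are hypothetical.

import java.util.Properties;
import javax.mail.Session;

// In its own file
public interface MailSessionFactory {
    Session createSession(Properties properties);
}

// In its own file
public class JavaMailSessionFactory implements MailSessionFactory {
    @Override
    public Session createSession(Properties properties) {
        // the only line in the code base that calls the static factory
        return Session.getInstance(properties);
    }
}

// In a test, the factory (not the final Session class) is what gets mocked or stubbed:
// MailSessionFactory factory = mock(MailSessionFactory.class);
// when(factory.createSession(properties)).thenReturn(Session.getInstance(testProperties));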

Posted in java, tdd | 4 Comments

I avoid method variables in my test methods

Here is a typical example of a test method

@Test
public void should_search_by_path() {
	Searcher searcher = new Searcher();
	Path location = new Path("somewhere");
	String data = "data";

	searcher.putAt(data, location);

	assertThat(searcher.findAt(location), is(data));
}

It seems that many developers consider this good code. I don’t. My main gripe here is that the variables do not add much useful information; instead, they make the code too verbose.

A first rewrite would produce something like this: Continue reading
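The full rewrite is behind the link above, but as a rough idea, simply inlining the variables might give something like the following (this assumes that Path implements equals(), so that two Path("somewhere") instances are interchangeable):

@Test
public void should_search_by_path() {
    Searcher searcher = new Searcher();

    searcher.putAt("data", new Path("somewhere"));

    assertThat(searcher.findAt(new Path("somewhere")), is("data"));
}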

Posted in java | 5 Comments

Java’s varargs are for unit tests

At Devoxx last week, Joshua Bloch argued during his talk “The Evolution of Java: Past, Present, and Future” that varargs are only “somewhat useful”. I think he is overlooking some usages, particularly in tests. Here is my case. Continue reading
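To give a flavour of the argument before the full post: varargs shine in test helpers and builders, where they keep call sites short. A hypothetical example (the helper name and the JUnit-style assertion are mine, not from the talk or the post):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

public class CollectionAssert {
    // Varargs let the test list expected values directly,
    // without Arrays.asList(...) cluttering every call site.
    public static <T> void assertContainsExactly(List<T> actual, T... expected) {
        assertEquals(Arrays.asList(expected), actual);
    }
}

// In a test:
// assertContainsExactly(searcher.findAll(), "first", "second", "third");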

Posted in conferences, java, tdd | 2 Comments

Play Framework and Guice: use providers in Guice modules

Play Framework has a Guice module. Unfortunately, its use is fairly limited compared to what Guice can do. In this post, I describe how it is configured on my current personal project. Continue reading
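As a taste of what the post builds on, here is a minimal sketch of binding through a provider in a plain Guice module; the Play-specific wiring discussed in the full post may differ, and the mail Session is just an example of an object that Guice cannot construct on its own (the property value is an assumption):

import java.util.Properties;

import javax.mail.Session;

import com.google.inject.AbstractModule;
import com.google.inject.Provider;

public class MailModule extends AbstractModule {
    @Override
    protected void configure() {
        // Delegate construction to a provider instead of relying on an injectable constructor
        bind(Session.class).toProvider(MailSessionProvider.class);
    }

    static class MailSessionProvider implements Provider<Session> {
        @Override
        public Session get() {
            Properties properties = new Properties();
            properties.setProperty("mail.smtp.host", "localhost"); // assumed setting
            return Session.getInstance(properties);
        }
    }
}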

Posted in java, test | Tagged , , | 1 Comment

CITCON London 2010

I’m returning from CITCON London 2010. What a great conference (and I’m not saying that just because I helped organize it)!

In fact, I feel it has been the best CITCON so far. I was a bit afraid of the large crowd (150 people registered, a similar number to Paris last year; I’m not sure how many showed up, 120 maybe?), but it turned out to be easier than expected to talk with other participants. Also, and most importantly, there was a feeling of a higher level of experience than usual: few talks about the basics of testing or Continuous Integration (and no “what’s the best CI server” session at all, thank God). Instead, it was “Advanced TDD”, “Share Pair Programming experience”, “Mobile Testing”, etc. All good stuff and, as usual, I just couldn’t attend all the sessions I wanted to.

Break Sponsors

Continue reading

Posted in citcon, java | Comments Off on CITCON London 2010

Why aren’t there more Agile luminaries developing and selling software?

Have you noticed that few Agile luminaries earn a living from writing and selling software? Many do write code as consultants. Others are respected authors of non-commercial open-source development tools. Some do work for software companies such as RallyDev or ThoughtWorks Studios, though it seems that the most visible presenters coming from there are consultants or at least business-facing types. But almost none actually make money directly by doing what they teach others to do.

James Shore mentioned working on his own startup with Arlo Belshee and Kim Wallmark, but that was more than a year ago and we haven’t heard much since. Ward Cunningham is CTO of a website, which comes reasonably close to being a software house. In fact, Kent Beck is the only example I know of someone who actually tries to make a living out of writing and selling software (with mixed results). Tellingly, Ward and Kent are not very visible on the conference circuit anymore (though they are certainly interviewed regularly).

Programmer on "Vacation"

This led to an interesting discussion yesterday on Twitter with Deborah Hartmann Preuss, Alexandru Bolboaca, Willem van den Ende, Brian Marick, and Jeffrey Fredrick (see the transcript at the end of this post). Continue reading

Posted in agile | 13 Comments

How to use LogMeIn under Linux

For my Remote Pair Programming session with Alexandru Bolboaca, I wanted to work on our actual code, not toy programs. It was hard to find a technical solution that allowed this (despite the many suggestions I received on Twitter; the biggest issue is sharing the entire development environment), but I finally settled on LogMeIn. LogMeIn basically lets you create an ad hoc VPN, with them serving as a middle man. The great thing about it is that all the configuration is done on the client machines. There is nothing to change on firewalls (especially important for the other people you are working with).

LogMeIn has a download that seems very simple to use… as long as you are under Windows. It also has a Mac OS X version and a Linux version, but they hardly come with any documentation. What’s worse, it is hard to find additional information on the support site.

So, for your eyes only, here are some instructions on how to get LogMeIn to run under Linux. (This applies only to the client machine; setting up the network can be done entirely on LogMeIn’s website.)

The worlds network

My configuration: Ubuntu 10.04 Lucid Lynx 64 bits with LogMeIn Hamachi 2.0.0.11-1. (Hamachi is a protocol that creates a VPN going through their servers.)

  1. First, create a login on http://www.logmein.com/
  2. Install their Linux client. Just double-clicking it after download should be enough.
  3. Configure the client on the command line:
    1. hamachi login
    2. hamachi attach <your email on logmein>
    3. hamachi set-nick <a human-readable user name for you; any should do>
    4. hamachi do-join <id of the VPN previously created on LogMeIn>
      • The password is the one specified by the domain creator. This is not the password for your login.
  4. Wait for the domain creator to approve your machine on the virtual network (you might need to send an email to remind her of that)

Done! From that point on, you can use VNC or anything else to connect to a remote computer. Run something like ifconfig on the remote computer and use the IP address listed under the ham0 entry (ham is for Hamachi, obviously). The IP address will have an unusual-looking value such as 5.18.76.84.

Posted in pair programming | 15 Comments

How we use Git at Algodeal

I recently talked with the CTO of a small-but-successful company, trying to explain how we do software development. I realized that many things are difficult for them to copy from us, mostly because we have a different approach to implementing features (in particular, we try to limit GUI-intensive features, while they have a very rich Javascript interface).

One thing, however, that I believe they could adopt without changing their code is the source code management tool Git. But they had already considered it (they are currently using Subversion) and figured that it does not solve any problem they actually have.

The Gits

I’d agree that Git doesn’t fix obvious problems. However, Git is powerful enough (once the complexity is mastered) to make lots of little things easier. Here is what it does for us.

  • It makes merges much less painful, even when no branches are involved, for example when two developers modify the same file. With Git, it is possible for someone to move a file to a different package while another developer simultaneously renames it and removes half of its content. Git (almost always) magically resolves the conflicts and maintains the history of the file, all without any special command. No need for “svn move” or “svn rename”. And no code freeze when someone renames a package, so we tend to do refactorings more often.
  • When a regression is detected late, Git can help find the faulty commit. That can be done automatically if you write a script that detects the bug (useful, for example, when the faulty commit came with no failing automated test), though automation is not strictly necessary. For example, it helped us find at what point “mvn eclipse:eclipse” stopped working (it was because we enforced Maven 3 usage in the POMs). The command that does this is “git bisect”.
  • The entire source repository is on the developer’s machine (this is configurable). This gives us free backup across all our machines, and instantaneous access to Git logs. In the same spirit, I frequently use git blame to find out who wrote what on a particular piece of code, so that I can ask them for clarifications.
  • There are no access rights to manage (though user accounts on the server machine, if you choose to have one, as many do, are still necessary).
  • Suppose you’re working on something but must suspend that work temporarily to fix a bug. In SVN, you either cancel all your changes, re-download the entire code base into a separate directory, or count on the fact that the changes won’t overlap (maybe). In Git, you can temporarily put your changes aside, do your fix, then bring them back. This is known as “git stash”.
  • If by any chance it is necessary to roll a fix into production, it is possible to hand-pick changes with “git cherry-pick”. My colleagues often use this.
  • Generally speaking, there is much less mangling of the code base (none of the infamous “svn cleanup“)

None of these features is absolutely necessary. But all together, they make life easier for us. It even lets us do serverless CI.

Sebastien Douche said, during a presentation at a recent Paris JUG evening, that DVCSs are the one thing that all developers should learn in 2010. I think he’s right.

Posted in algodeal, source control | Comments Off on How we use Git at Algodeal

Bob Martin on TDD in Clojure

Robert “Uncle Bob” Martin has just blogged about the differences in TDD style using Clojure, as compared to more traditional languages such as Java. Though I am a Clojure newbie, I mostly disagree with his conclusions.

His main point is that, because Clojure is a functional language, functions have no side-effects and therefore can be used directly in the tests.

For example, the production code

(defn update-all [os]
  (map update os))

would be tested with something like

(testing "update-all"
  (let [
    o1 (make-object ...)
    o2 (make-object ...)
    os [o1 o2]
    us (update-all os)
    ]
    (is (= (nth us 0) (update o1)))
    (is (= (nth us 1) (update o2)))
    )
  )

There is no reason to believe that the (update) function is side-effect-free

Changing internal values is only one way of creating side effects. I admit that Clojure encourages coders to write code that does not change variables (if I got it right, mutating state is definitely possible, but it takes some additional work). However, that guarantee stops at the boundaries of the language. At some point, the code might access the file system or a database, and state clearly might change there.

Correct implementation of the (update-all) function depends on the correct implementation of (update)

Bob Martin says: “this test simply checks that the appropriate three functions are getting called on each element of the list”.
Suppose that the (update) function does not do anything, or does something that does not return a value, such as printing to the console. Then calling it has the same effect as not calling it at all, and the test above will pass even if the (update-all) function provides no implementation whatsoever. When the bug is eventually found, it will be harder to fix.

The test could be clearer (with a more powerful test framework)

One of my biggest concerns is that the test looks a lot like the code itself. To the reader, it looks like a duplication of information.
If there were a mock framework for Clojure, I would expect to see something like

(testing "update-all"
  (let [
    pre-conditions (
      (should-return (update 1) 1.5)
      (should-return (update 3) 3.0) )
    o1 (make-object 1)
    o2 (make-object 3)
    os [o1 o2]
    us (update-all os)
    ]
    (is (= (nth us 0) 1.5))
    (is (= (nth us 1) 3.0))
    )
  )

Bob Martin is right to conclude that “Clojure without TDD is just as much a nightmare as Java or Ruby without TDD.”

But he should also make it clearer that Clojure is lacking a mock framework (he does point to Brian Marick’s work on this).

It should be noted that it is possible to get a similar implementation style in Java as in Clojure, though it takes significant work. In fact, that’s often how we write Java here at Algodeal: mostly relying on immutable objects and stateless methods. Immutable collections from Google Collections help a lot, too. Still, we like to use mocks in our tests (too much for some, probably).
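To illustrate that style, here is a minimal sketch with made-up names (not actual Algodeal code): an immutable value object whose methods return new instances instead of mutating state, built on ImmutableList from Google Collections.

import com.google.common.collect.ImmutableList;

public final class Prices {
    private final ImmutableList<Double> values;

    public Prices(Iterable<Double> values) {
        this.values = ImmutableList.copyOf(values);
    }

    // Returns a new Prices object rather than modifying this one
    public Prices scaledBy(double factor) {
        ImmutableList.Builder<Double> scaled = ImmutableList.builder();
        for (double value : values) {
            scaled.add(value * factor);
        }
        return new Prices(scaled.build());
    }
}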

In the end, Uncle Bob’s post is another aspect of the (almost) age-old debate described by Martin Fowler: classicists vs. mockists. If you haven’t already, read Fowler’s article, it’s worth it.

Posted in tdd | Comments Off on Bob Martin on TDD in Clojure

AppArmor: how to fix the Create New User issue with logprof

We have started to use AppArmor as a way to strengthen the security on our platform. It is a reasonably good tool for which you can find rather straightforward tutorials.

Portrait of a young woman dressed as Boadecia or Mother England
Portrait of a young woman dressed as Boadecia or Mother England by Powerhouse Museum

AppArmor is a tool that can explicitly allow or deny actions by some applications. Those actions are recorded in a profile. Many profiles are already available, for such tools as Firefox, but sometimes it is necessary to create your own. This was the case for us: we wanted to make sure using Mono was not too much of a risky endeavor (we run investment strategies written by our users in .NET).

Creating a profile for AppArmor can be done in a couple of ways. One is to run the application you want to lock down in recording (“complain”) mode, check the logs produced by AppArmor, and select the corresponding rules you want to enforce. The tool that checks the logs and produces the profile containing the rules is logprof, generally run with the aa-logprof command.

Yesterday morning, when I merrily tried to run logprof, it prompted me at the end of the process to create a user.

Create New User?
(Y)es / [(N)o]

I didn’t know what “Creating a user” meant here. And, at this point, there is nothing useful to do. Whether you reply Yes or No, you are always prompted for a username and password, then asked whether you want to save the configuration, which inevitably ends with a Login failure, and you are back to the Create a User question (Ctrl-C to get out). Here is the whole trace:

Updating AppArmor profiles in /etc/apparmor.d.
Create New User?

(Y)es / [(N)o]
Username: noideawhatmyusernameis
Password: noideawhatthepasswordis

Save Configuration?

[(Y)es] / (N)o

Login failure
Please check username and password and try again.
RPC::XML::Client::send_request: HTTP server error: Not Found

Create New User?

It took me a while to understand what was going on, so I’m writing this post in the hope that it will help someone (possibly me, in the not-so-distant future).

The user here refers in fact to a user on the central, public repository for AppArmor profiles. You do not normally need a login for downloading profiles, but logins are required to upload them. Now, I obviously do not want to upload my profiles, so what’s the deal?
In all likelihood, I must have enabled the upload of profiles at some point, possibly when I was trying to figure out what AppArmor was doing. There is a way to undo that, but very little documentation and few discussions about it on the internet.
I finally found it on Novell’s site.

In the end, here is what you need to do:

  1. move to /etc/apparmor (and not /etc/apparmor.d, which is the directory where the profiles are saved)
  2. edit repository.conf
  3. in the [repository] section, replace upload = yes with upload = no

All done!

Footnote: the status of AppArmor is not clear to me. Wikipedia indicates that Novell fired the original team that developed it, and indeed Novell’s site only points to AppArmor v2.1 and earlier. A Google search returns many links to Ubuntu, and indeed Karmic Koala comes with v2.3.1 (the latest, AFAIK), but the Ubuntu pages do not offer very advanced documentation. Novell has the best documentation but is strangely not well referenced on Google, and its documentation only goes up to v2.1, which is not impressive. The official development site is hosted by Novell, but it only mentions v2.3beta and has not seen any release since mid-2008. Finally, a similar tool, Tomoyo, has been merged into the Linux kernel as of v2.6.30, in mid-2009. So I think that, once we have moved all our servers to Karmic, we’ll dump AppArmor.

Posted in unix | 6 Comments

Predictions for CITCON Europe 2009

Last year, at CITCON Amsterdam 2008, a few of us stayed late into the night, drinking beer and discussing the state of the world.

And what to do when you have 21 geeks with time on their hands? Why, predictions, of course! (I want to do it again this year, check out the Google Moderator page I’ve started)

Bar at the Marriott Hotel

We decided to come up with a number of predictions (and bet on them), some serious, some not, that would be verified at CITCON Europe 2009, the prize being beer points. The losers would be named and shamed, while the winners would be glorified (at least until new predictions are made, and for no more than a year, whichever comes first).

Here are the predictions, and the actual outcome (a couple of them were settled by votes at the closing session):

Prediction | Votes | Actual
CITCON Europe has more than 120 attendees (I had voted against!!) | YES | YES
more .NET developers than Java developers | NO | NO
CITCON will take place in Paris | YES | YES
at least 5% of attendees are female (I personally did vote in favor) | YES | NO
at least 20% of participants do Ruby | draw | NO
Java closures are considered too complex | NO | NO
IBM buys ThoughtWorks | NO | NO
IBM buys Valtech | NO | NO
there is a Maven.NET coded in Java, with MS Tools integration | NO | NO
Ivan Moore gives up on build-o-matic | NO | NO
McCain wins the election | draw | NO
CITCON Europe takes place in Frankfurt | NO | NO
Jeffrey Fredrick XOR Tom Sulston (that is, either Jeffrey or Tom, but not both) have short hair | YES | YES
Fewer Agile Consultancies | NO | NO

So, out of 14 predictions, we got 11 right, 1 wrong, and 2 undecided.
Now, you may think that the answers were straightforward. But you need to realize that, for each one of them, someone was willing to bet a beer against the consensus. In other words, at the time when the predictions were made, it was not clear cut.

In the interest of the bets, I shall now reveal the names.
Winners (Glory to Them All!)

  • Andrew Parker (8 rights, 1 wrong)
  • Eric Lefevre (that’s me) (10 rights, 2 wrongs)
  • Guillaume Tardif (6 rights, no wrongs)
  • Jean-Michel Bea (8 rights, 2 wrongs)
  • Pekka Pietikäinen (7 rights, 2 wrongs)

Losers (Boo to Them All!)

  • Julian Simpson (3 rights, 4 wrongs)
  • Jeffrey Fredrick (3 rights, 6 wrongs)
  • Paul Julius (5 rights, 2 wrongs) — PJ is still a loser, ‘cos he has been right on bets with small payoffs
  • Tom Sulston (4 rights, 4 wrongs)

I have started a new series of predictions for CITCON Europe 2010. There are two steps:

  1. suggest predictions & vote for the best ones
  2. when predictions have been selected, vote

To actually win your beers, you’ll have to come to CITCON Europe 2010 (still unannounced).
Please check out the Google Moderator page to propose your own predictions.

Sorting out the bets from 2008

If you want the gritty details, I have a picture of the full spreadsheet.

Posted in citcon | Comments Off on Predictions for CITCON Europe 2009

CITCON Paris 2009, a personal retrospective of the organization

Closing session

As I write this, 3 days after the closing, I have still not fully recovered from CITCON Paris 2009. I was very much involved in organizing this edition, so I would like to indulge in a bit of a personal retrospective, mostly on the organization of the conference. This is basically self-reflection; if that’s not your thing, you can leave. You won’t miss much.

Here goes.

What worked at the conference:

  • we had more than 120 participants, which is in line with CITCON’s goals and the highest number ever in all 11 events. Also, it is very close to the number we had estimated ourselves.
  • all the people on the waiting list were eventually invited to join the main registration list; no one was left behind
  • costs were well under control, especially thanks to the free use of ISEP’s classrooms
  • we got significant money from sponsors; in fact, combined with the well-contained expenses, this event contributed hugely to settling debts from past events
  • quality of food was alright (especially for a free event)
  • there were a number of well-known people, helping make this event special for other participants
  • twittering was big; according to my feed on Google Reader, there were more than 300 tweets, including quite a few from people regretting not having come.

What could have worked better:

  • there was not enough food. I think this is partly because the caterer is not a real professional. Even though we had given good estimates for the number of participants, I think that, as the person in charge of the student foyer, he was used mostly to students eating on a budget. If we use such a semi-professional in the future (likely, since we want to use more free venues such as universities), we would be wise to over-estimate the number of participants as far as food is concerned. Just in case.
  • Even though we arranged the chairs in the main room in circles, we left the other rooms as they were, theater-style. This didn’t help foster involved discussions (as opposed to presentations).
  • I’m wondering if we have not reached the maximum workable number of participants. One of the things that I really enjoyed last year was the late-night drinks with the few who dared stay. This year, there were 30 or 40 of us at the end, and groups started to split up. Guillaume and I led a few to the Ti Jos bar and to the Caveau des Oubliettes. Although nice, it was a bit sad, as there wasn’t really a “closing the closing session” moment. We didn’t even get to make new predictions (or settle last year’s bets, BTW)!
  • Some rooms were lacking a video projector. I wonder if it would be a good investment for the Open Information Foundation to buy one of those small and inexpensive projectors that have appeared recently on the market.

On a more personal note, the conference passed a bit like a blur for me. Despite using Open Space Technology, I still ended up as the contact person for many participants, suppliers and sponsors, which was distracting. Helping my brother with the filming didn’t help either. I even managed to miss out on the (now traditional) “Is Scrum Evil?” session, which had been a favorite of mine last year.

I still had a great time. I met Antony Marcano and Andy Palmer from Pair With Us (they are hoping to join us at the Paris Coding Dojo sometime; looking forward to it) as well as Gojko Adzic, copies of whose book were given away to some lucky participants, and Jason Sankey and Daniel Ostermeier from Zutubi… I reconnected with many former colleagues and friends, too. I also attended a few sessions ;-)

Oh, and last but not least, I’m one of the winners from last year’s bets! What do I win? Well, beer, in theory. But, even better, I get to call PJ, Jeffrey, Tom, Julian and Yegor LOSERS for a year. Priceless.


See you next year, in one of the five cities in our short list (Zürich, Copenhagen, Belgrade, Dublin, and Prague).

Posted in citcon | 1 Comment

Faster tests, at CITCON Paris 2009

"Going nowhere fast"

“Going nowhere fast” by Nathan

The session on Faster Tests (led by David) was interesting, at least to the extent that it made quite clear that we at Algodeal are not doing too badly (Douglas Squirrel from youDevise is another one who seems to be quite cerebral about tests and builds).

Faster tests

Looking at the various options discussed for making tests faster, I think it’s fair to say that the only way to really speed up tests is by compromising their integrity, at least to a degree. In a way, to make tests faster, you’ve got to face reality and move away from their ideal abstraction (very reminiscent of Joel Spolsky’s Law of Leaky Abstractions). The only question is: how confident are you that those (slightly compromised) tests still test something useful?

This leads to the conclusion that we only keep long integration tests because it is difficult for us to really understand what’s going on. If we did have an excellent understanding, we would have unit tests instead. And, interestingly, as we progress in our project, we find ways to convert integration tests into unit tests. In other words, we better understand what’s going on.
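As an illustration of that kind of conversion (a hypothetical QuoteReader, not actual Algodeal code): the first test needs a real file on disk; once it is clear that the class only needs characters, the same behaviour can be covered by a much faster test.

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import java.io.*;

import org.junit.Test;

public class QuoteReaderTest {

    // Hypothetical class under test: counts lines that look like "SYMBOL;PRICE"
    static class QuoteReader {
        private final BufferedReader input;

        QuoteReader(Reader input) {
            this.input = new BufferedReader(input);
        }

        QuoteReader(File file) throws IOException {
            this(new FileReader(file));
        }

        int countQuotes() throws IOException {
            int count = 0;
            for (String line = input.readLine(); line != null; line = input.readLine()) {
                if (line.contains(";")) {
                    count++;
                }
            }
            return count;
        }
    }

    @Test // integration-style: slow, touches the file system
    public void counts_quotes_in_a_real_file() throws IOException {
        File file = File.createTempFile("quotes", ".csv");
        FileWriter out = new FileWriter(file);
        out.write("GOOG;530.0\n");
        out.close();

        assertThat(new QuoteReader(file).countQuotes(), is(1));
    }

    @Test // unit-style: same behaviour, no file system involved
    public void counts_quotes_in_character_data() throws IOException {
        assertThat(new QuoteReader(new StringReader("GOOG;530.0\n")).countQuotes(), is(1));
    }
}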

Also, check out my notes on the session on Mock Objects.

Posted in citcon, test | 1 Comment

Mock objects at CITCON Paris 2009

The session on mock objects, mostly led by Steve Freeman, was a bit messy but interesting. My colleague David got to show some of our code on the screen, which was scary and exciting (he felt the urge to fix some of the tests he had shown immediately afterwards). Also, I think I finally understood the relation between mock objects and interfaces that Steve insists on.

See, I always thought that Steve was in favour of adding interfaces directly on top of concrete classes. For example, if you have a FileManager, you would also have an IFileManager.

Steve made it clearer that the idea is to use interfaces to represent a role, or (more exactly) just one of the roles that a class has. That makes sense. But, to be honest, I still prefer to have a single role per class. So, no interfaces are really needed.
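Here is how I understand the distinction, as a minimal sketch with hypothetical names (my illustration, not Steve’s code):

// Mirror-image interface: repeats every method of the class and adds little information
interface IFileManager {
    void archive(Report report);
    Report latestReport();
}

// Role interfaces: each one names a single responsibility that callers depend on
interface ReportArchiver {
    void archive(Report report);
}

interface ReportSource {
    Report latestReport();
}

// One concrete class can play both roles
class FileManager implements ReportArchiver, ReportSource {
    public void archive(Report report) {
        // write the report to disk
    }

    public Report latestReport() {
        // read the most recent report from disk
        return null; // placeholder for the sketch
    }
}

class Report {
}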

I wish I had more time to talk with Steve. Maybe his upcoming book will have answers for me.

Mock objects

Posted in citcon, java | 5 Comments

Interviewed by François Beauregard

François Beauregard from Pyxis Technologies interviewed me during Agile 2009 for their Vox Agile podcast. The interview is now online.

Toy sampling megaphone

We chatted about a favorite topic of mine: how to expand the horizons of Agile. My point is mostly that the Agile crowd keeps talking about basic issues in software development, including during the Agile 2009 conference. I fear that this may give the wrong impression to beginners (“oh, so we only need to do this and that, and we’re agile? Cool!”) and even to seasoned practitioners (“this Agile thing is not addressing my needs anymore”).
I would much prefer that we talk more about complex problems, whether they relate directly to Agile or not. This can include technical discussions or more touchy-feely ones. As long as we are addressing difficult problems, we will be making progress.

I also want to see more cross-domain talks. Obvious domains are heavy industry (no need to remind anyone how influential Toyota has been on the IT industry) or the performing arts. But that could also include things such as Behavioral Economics.

Or not. I don’t know for sure. However, I do know that we should be taking more risks. And stop presenting Introductions to Retrospectives for the umpteenth time.

At the end of the talk, I mention 2 things for further reading. Here they are, plus a bonus book that I’ve just read:

The podcast is available in French on the Vox Agile site.

Posted in agile, agile2009 | 1 Comment