Trio programming revisited

May 1, 2012 4 comments

Last month I was at the airport browsing random things on my phone and I ran into this text by Jon Evans.
It made me think of how people always try to come up with the perfect set of practices for software development, but there’s always someone reminding them that common sense still prevails over every possible process or practice. But it also reminded me of some thoughts I had many years ago about pair programming, or, actually, trio programming.

I’ve seen many terms for the situation where 3 people are coding together: triple-programming, trio-programming, trairing, etc. – choose at will.
The problem I see with these terms is that usually they are used to describe 3 people working on a single feature, and that’s not very effective in my opinion – except on some rare occasions with extremely complex/important features or problems (common sense still prevails).

Going back in time a little: when I was in college I had to work with a group of 3 other guys to write a web application. It was the final project for one of the courses we were taking, and it was a big deal because we had never written anything like that, with all the integrated bits and layers and shenanigans. One of the guys bailed for some reason I can’t recall – no hard feelings, he had his legitimate reasons – so it fell to the other two and me to do the work.
All three of us were raised in Brazil, and as I usually say, every Brazilian is born agile, because we always seem to “do everything at the last responsible moment”. As expected, 2 days before the deadline to hand in the application we had barely started coding, so we had an overnight development session where we wrote most of the application together.

It worked like this: we were 3 guys and we had 2 machines. There was always one pair and one solo dev, but the pair rotated constantly. Say I was pairing with João while César was going solo, until he stumbled on an issue/doubt/important-decision-making-point, at which point he would call for help and either João or I would join him. If João joined him, now I was the one going solo, until one of them joined me again. We had 2 well-defined streams of work, but neither of them had a clear driver, because all 3 of us had a medium-to-high-level general understanding of what was going on in both.

The most interesting thing is that this process had not been defined previously, in fact we only realized that it had happened once we finished, it was all very natural and organic, and worked very well. We didn’t have any tool to remind us of rotating, in fact, we barely noticed we were rotating. We were just 3 guys working together on a project, all of us helping one another.

Back to the present, whenever I think about that experience, I recall how well it worked, and how interesting it would be to try that again.
I guess I am just trying to put into words a very unstructured way of working, but I think it’s possible.
And here’s why:

A typical pairing session for me usually has 3 distinct actions: the talk, the decision making, and the typing. For the typing bits, given that both people in the pair are proficient with the technology at hand, there’s usually not much the navigator has to add. Most of the comments around typing are going to be things along the lines of “you forgot a semicolon” or “there’s a typo there”, which are usually things the driver had already noticed and was on his way to fixing, or that he would catch quickly enough anyway. The greatest value of pairing is in the other 2 actions.

(Now, there can be typing involved in the talk and in the decision making, typically when one of the developers in the pair goes “here, what if we did something kinda like this?” and starts to type code away – usually not code that is going to end up in the final version of the solution, just a means of expressing his/her ideas. So when I say that “typing” is one of the 3 actions in pairing, it does not include these moments where we write code to discuss our solution.)

That’s why I think triple-pairing in 2 work streams works so well: there’s always a pair talking and making decisions, while the typing can be done solo. Hell, if the typing is so important to you – say you don’t want to have to wait for the tests to run to catch a typo – I could even argue that there’s some just-in-time code review happening there as well: whenever the pair enters typing mode, one of the two goes back to the solo guy to check on how he’s doing and can offer advice on the typing bits.

So how can this practice be formalized? Should we try to assign 2 stories to 3 people?
Well, maybe, why not? I still want to try this out and see what happens. If anyone out there has had experiences in these lines, please share!

But I think that there’s more to it than simply having 3 developers for 2 streams. The project I am currently working on has a small team of 5 developers, which means there’s always one person soloing – but that person is more often than not doing QA work. Anyway, here are 2 situations that I think fit this idea even though they don’t fall into the 3-for-2 category:

– I worked on a story while another dev worked on some other bits of work. We were sitting side by side just as if we were pairing, but our screens were not connected to the same machine. We each knew what the other was doing, we talked about both problems constantly, and we even took each other’s keyboard for a few seconds to type something – but there was clearly one person working on each stream of work;

– 2 pairs working on 2 different stories, but all 4 people involved knew at least something about both stories (they are stories in the same project after all, so guess what? They are related!). One developer goes to the restroom, his pair has a doubt, one of the developers in the other pair joins them, and the refreshed-developer-who’s-just-back-in-the-room hovers over both streams of work for a bit and eventually joins one of them. This happened more than once (and not only triggered by restroom visits, otherwise we would be thinking about a general team visit to the nephrologist).

So, here are my conclusions:

– 3 developers for 2 streams of work may be worth trying. I want to try it out again, and will share my experience when I do. If you have tried it, please share yours!
– When pairing on a story with someone, don’t shut yourselves off from the world – I see this happening very often. Go around the team and talk about the problems you’re facing, ask for advice from a third party, go around offering advice. If you’ve done the talk and made a decision with your pair that results in at least a few minutes of pure typing/refactoring/moving stuff around, choose one of you to do the dirty job and get the other to go poke around and check what the others are doing – maybe they could use some talk themselves. Of course, sometimes it may be worth staying and pairing through the typing too, especially if it involves moving a bunch of existing code around or touching the always-present hairy parts of the codebase – again, common sense.


Don’t use this in JavaScript

April 28, 2012 8 comments

Take a look at this JavaScript:

var o = {
    f1: function() { console.log("first f1"); },
    f2: function() { this.f1(); }
};

o.f2.prototype.f1 = function() { console.log("second f1"); };

var k = {
    f1: function() { console.log("third f1"); }
};
k.f2 = o.f2;

o.f2();
k.f2();
new o.f2();
var x = o.f2;
x();

There. That’s why I never use the this keyword in JavaScript.
And that’s why I think you should stop using it too.
Can you figure out what’s the output of this code?

Read more…

Thoughts on software estimation

June 28, 2011 1 comment

1. Why do we estimate?

Estimations are part of our everyday life. We estimate how long the bus ride to work will be so that we know what time we should wake up. Drivers estimate how big a spot by the curb is to decide if they can park there or not. Engineers estimate how much time they need to get a building done so that they can charge the clients, pay the builders, and so on.

So, looking at these examples, I’d say that we estimate as a means of predicting something that usually can be measured – time, distance, size, etc. – without having to actually measure it. We predict because measuring may not be worth it, or because by the time we were able to measure it, the knowledge wouldn’t be useful anymore.
With this prediction in hand we are able to make decisions. So I think we can say that we estimate stuff in order to plan our actions.

2. Estimating projects

Let’s take the engineer as an example. I am not an engineer, but since my father is a builder, I think I can say some things about it. So the engineer has to estimate a building project. The differences between one project and another will probably boil down to the materials, the size, the number of builders that will be working, and a few more variables that are likely to be easily quantified. Once the engineer has all the information he needs, it’s just a matter of calculations. There aren’t many different ways of building a wall or a pillar, after all. Of course, it is an estimation, so it may still prove itself invalid – that’s why it’s called an estimation and not vision-from-my-crystal-ball.

Now, in software development we also need to estimate how long it will take, how many developers/QAs we will need, and so on, but the tricky thing is: how do we measure software? We don’t have materials, area, size. What units are going to be used as input for our calculations to come up with a reliable estimation?

Well, there aren’t any. Software development, unlike classical engineering, isn’t easily measurable, because it’s an intellectual and creative type of work. Writing software for a bank is not the same thing as writing a mobile phone game. While we work on a development project we learn about the client’s domain, their needs, their specificities. And each client has their own details, even when they belong to the same type of industry. This makes every development project a learning project. If we are learning something, we don’t really have the knowledge before the work is done, so our estimations won’t be very accurate.
Not to mention differences in technology, in the team members’ knowledge, and in how to deal with the client’s people to gather requirements and get completed functionality approved.
If you ever hear anyone saying that software development is an exact science, tell them: “Lies!”. It’s a lot about learning, about finding different solutions for different problems, and *a lot* about dealing with people – not so similar to the classical build-my-house project.

3. So what do we do?

Well, that’s a good question, since we can’t really say “we won’t estimate this software project”. We still need to plan it, and as we agreed, estimations are needed for planning. So, where I come from, we don’t estimate how big a piece of software is, nor how long it’s going to take us to write it. We estimate how complex it seems to be, and the most important thing is: these estimations are relative, not absolute. We split the work into “stories” – small, independent pieces of work – and assign “story points” to each story, a completely abstract unit of measurement.

If I tell you that I will need 4 hours to wash your car, this is an absolute estimation, since 4 hours is a fixed amount of something that we all know how to measure and have a common understanding of (hours). Now, if we agree that a given piece of functionality for your software is worth 2 story points, that is relative: it means that it is twice as complex as another piece of functionality that is worth 1 story point. But a “story point” doesn’t really mean anything on its own.

It’s true that there is a statistical correlation between story points and time, as we can see on the chart below, but how much time a story point is worth on average varies quite a lot depending on the team, the project, the domain, the technology and so on. The relativity between the points is still valid though.

How story points relate with one another when converting them to time units. Taken from Story Points explained.

4. Estimating software

So now that we can measure the effort to write software using its complexity in story points, we can start estimating our work. As we already agreed, development is a lot about learning, so we can infer that the more we work on a project, with the same team and environment, the more accurate our estimations will be, since we will know more about that project’s details. That’s why we ideally avoid estimating a whole project at once in the beginning – it’s better to estimate it in parts. For instance, estimating a bucket of stories for each planned release or each planned iteration.

After a few iterations we can get a grasp of the average number of story points the team can complete in a given amount of time. It’s important to notice that any change in any of these variables – team, project and environment – may and will affect this correlation. Once we have this average number we can start predicting how much time it will take us to complete a big set of stories.
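
To make the arithmetic concrete, here is a toy sketch of that prediction. All the numbers (past iterations, remaining backlog size) are made up for illustration:

```javascript
// Hypothetical data: story points completed in the last four iterations.
var completedPerIteration = [13, 11, 14, 12];

// Velocity: the average number of points the team completes per iteration.
var velocity = completedPerIteration.reduce(function(sum, points) {
  return sum + points;
}, 0) / completedPerIteration.length;

// Points still left in the release backlog (also made up).
var remainingPoints = 75;

// Predicted number of iterations needed to finish the remaining work.
var iterationsLeft = Math.ceil(remainingPoints / velocity);

console.log(velocity);       // 12.5
console.log(iterationsLeft); // 6
```

The average is only meaningful while the team, project and environment stay the same; when any of them change, the velocity has to be re-derived from fresh iterations.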

Now, that sounds very good in theory, but in real life we usually need to give our clients an estimation before we actually start working, depending on the contract and the relationship with the client, so… what do we do? This has got to be the billion-dollar question of software development lately. And the more I talk to people about it, the more it seems like there’s no satisfying answer.

5. Contracts

When the client is known and there is already a strong relationship between the parties it is easier to get a flexible contract, usually in a time-and-materials fashion, allowing us to adapt the complexity x time correlation as the project advances and to negotiate the deadline x scope x team variables to get whatever the client needs the most as soon as we can deliver it. This really requires mutual trust and collaboration, which agile is all about. But the world is not 100% agile, as humans are not 100% trustworthy, so from what I have experienced these contracts are not the most common out there, especially when we are trying to get a new client to partner up with.

On the other side of the rope lies the least flexible type of contract: fixed price and scope. The client tells you: here’s what I need; you tell them: I need X months to get that done, and it will cost you Y money units. Needless to say, there’s a big risk involved in this type of contract. If the estimation is too low, either you will need to work overtime to get the work done, or the deadline won’t be met. In the worst cases, both may happen. If the estimation is too high, the client may prefer a cheaper potential partner and leave you. In case they accept your terms, there will probably be a big waste of money on the client’s part, and of time on both sides (see Parkinson’s Law). Not to mention the requirement changes that, regardless of what the clients say, will come up during development.

Most of the other types of contracts I have seen lie in between these two or are a mix of them. One example I have seen is offering a couple of iterations to develop a prototype for the potential client, for free. After the prototype is ready, if the client likes what they see, the team will have much better knowledge to work on more reliable estimations for a fixed-price contract. Another option could be a contract with a fixed deadline but variable scope, where the team commits to deliver at least X story points. Or fixed scope but variable deadline, depending on what’s more important for that client.

6. Managing estimations

That said, I have had many discussions with colleagues about how to manage existing estimations, mostly regarding whether we should re-estimate stories or not.
This may become a rather hot debate, but my opinion is that we should try to avoid re-estimating stories, unless we’re really far off.
I really like Mike Cohn’s take on the subject – I don’t think I can explain it as well as he does, so just take a look at it!
The basic idea is that we have knowledge-before-the-fact and knowledge-after-the-fact, and we shouldn’t mix them in our backlog, since we need a normalized set of data on which to plan our future work.

The problem arises when we use estimations not only for planning, but also to charge the clients, due to the contract type. In that case, whether or not to re-estimate stories may not be up to us. If the estimations are not tied to the project costs, informing the client that a given story will take longer than planned may suffice – in this case, since the development is in progress, a time-based estimation may even be accurate enough and more useful.

7. Conclusion

In my opinion, software estimation techniques are quite fair nowadays, the problem is not how we estimate software, it’s how we charge our clients.
Estimations are called estimations for a reason, they are not supposed to be the truth written in stone, and contracts based on that are quite risky.
I have the impression that once we all build a common understanding that software development is not a classical engineering type of project, many things will become simpler, and we will stop feeling that we work in the software-estimation industry instead of the software-development one (at least I feel like that at times).
The engineering part of our work is done by the compilers and interpreters, not by the developers.

Testing events on jQuery objects with Jasmine

January 10, 2011 5 comments

Recently we had a piece of JavaScript code that looked roughly like this:

myApp.buttonBinder = {
  bind: function(button) {
    if (myApp.shouldChangeButtonAction()) {
      button.unbind("click").click(function() {
        $(this).closest("form").submit();
      });
    }
  }
};

So we wanted to test this using Jasmine.
Our main goal was to test that when the button was clicked the form that contains it was being submitted.
It would be cool if we could use Jasmine’s spies:

describe("myApp.buttonBinder", function() {
  it("should bind form submit to button", function() {
    var form = $("<form/>");
    var button = $("<input/>");
    spyOn(form, 'submit');

    form.append(button);
    myApp.buttonBinder.bind(button);
    button.click();

    expect(form.submit).toHaveBeenCalled();
  });
});


But unfortunately this does not work. I didn’t look too much into it, but I suspect it is because, when the actual code runs, the jQuery ‘closest’ traversal creates a new jQuery object that represents the same DOM element. But as it is a different object, the spy can’t tell that the submit function has been called.
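
The suspicion above can be illustrated without jQuery at all: two wrapper objects around the same underlying node are different objects, so a spy installed on one wrapper never sees calls made through the other. A minimal sketch, with hypothetical names standing in for the jQuery machinery:

```javascript
// Minimal stand-in for a jQuery-like wrapper (hypothetical, not jQuery itself).
function Wrapper(node) { this.node = node; }
Wrapper.prototype.submit = function() { /* would submit the form */ };

var formNode = { tag: "form" };        // the one underlying DOM element
var wrapper1 = new Wrapper(formNode);  // the object the test holds and spies on
var wrapper2 = new Wrapper(formNode);  // what a closest("form") lookup effectively returns

// A spy replaces submit on wrapper1 only:
var called = false;
wrapper1.submit = function() { called = true; };

wrapper2.submit(); // the code under test goes through the *other* wrapper

console.log(wrapper1 === wrapper2);           // false: different objects...
console.log(wrapper1.node === wrapper2.node); // true: ...wrapping the same element
console.log(called);                          // false: the spy never fired
```
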
One idea is to instead add a listener to the form’s submit event and check if this listener has been called:

describe("myApp.buttonBinder", function() {
  it("should bind form submit to button", function() {
    var form = $("<form/>");
    var button = $("<input/>");
    var formHasBeenSubmitted = false;
    form.submit(function() {
      formHasBeenSubmitted = true;
      return false; // don't let the test page actually submit
    });

    form.append(button);
    myApp.buttonBinder.bind(button);
    button.click();

    expect(formHasBeenSubmitted).toBe(true);
  });
});

And voilà! This works!
But it looks terrible, doesn’t it?
Good thing Jasmine lets us extend it, so we can create a new spy function and a custom matcher:

// SpecHelper.js
var jasmineExtensions = {
  jQuerySpies: {},
  spyOnEvent: function(element, eventName) {
    var control = {
      triggered: false
    };
    element.bind(eventName, function() {
      control.triggered = true;
    });
    jasmineExtensions.jQuerySpies[element[eventName]] = control;
  }
};

var spyOnEvent = jasmineExtensions.spyOnEvent;

beforeEach(function() {
  this.addMatchers({
    toHaveBeenTriggered: function() {
      var control = jasmineExtensions.jQuerySpies[this.actual];
      return control.triggered;
    }
  });
});

And now our test looks like this:

describe("myApp.buttonBinder", function() {
  it("should bind form submit to button", function() {
    var form = $("<form/>");
    var button = $("<input/>");
    spyOnEvent(form, 'submit');

    form.append(button);
    myApp.buttonBinder.bind(button);
    button.click();

    expect(form.submit).toHaveBeenTriggered();
  });
});



Much better \o/

Go learn JavaScript!

December 8, 2010 15 comments

This post comes late, as I have realized this late myself.
But it is not late enough to be useless, because most developers out there still haven’t learned JavaScript.

If you work with development of some sort, chances are you work with web development.
If you work with web development, regardless of the technology stack – be it Java, .NET, Ruby, Python, Perl or Brainfuck – your application runs on a browser, and all browsers run JavaScript, so you can’t escape from it. *

That said, once you have learned JavaScript, you will find out that programming in JavaScript is actually fun!
I have gone through this myself as I have seen colleagues going through this too.
Not only will you find out it is fun, you will also see that JavaScript is a powerful language with lots of good things that you will start wishing other languages also had.

Most important of all: your JavaScript code will start to look good.
And once all developers in your team have learned that learning JavaScript is not only important, but necessary, your project’s JavaScript codebase will stop looking horrendous as it does nowadays. It will look good, modularized and testable. And you won’t ever again go “Oh crap, I have to touch JavaScript in this story…”.

Wait, did I say testable? Yes, I did! You can actually TDD your JavaScript code just as well as you can TDD your <favorite language> code.
For that I recommend Jasmine – I have been using it in our project and love it.
You can even combine Jasmine with Rhino or WebDriver and add your JavaScript tests to your Continuous Integration pipeline. (Your project has a CI server, right?!)

And you know what? Learning JavaScript is EASY!
I believe most developers familiar with any OO language wouldn’t need more than a couple of days to start writing more than decent JavaScript code.
There’s plenty of good websites and books out there for you to learn it, so go on and have fun!

* Ok, actually, you can escape from writing JavaScript.
You could go with GWT for instance. And I remember there were other frameworks that let you develop the whole fancy UI in Java on the server side, sending only a few instructions to the client – I can’t recall any names right now.
The thing is: is it worth it?
I worked with GWT for a considerable time in 2009 and, now that I have had more JavaScript experience, I definitely wouldn’t go back to it. It just seems like a lot more work, not to mention the messy HTML and CSS it generates.
If you’d like to have your JavaScript compiled from something else, take a look at CoffeeScript 😉

Step-by-step selenium tests with page objects, dsl and fun!

September 29, 2010 15 comments

Note: this is a long post!

Not long ago I wrote about functional tests and the page objects pattern in Aqris’ blog. Back then we at Aqris got very excited about page objects, as they were the solution we were looking for to our problems in maintaining our functional test code base, which by that time was based on a set of messy helpers that nobody really understood completely.

Before page objects, whenever a developer in our team had to write a test that performed an action no other test already performed, he or she would go through that bunch of helpers trying to figure out which one should perform such an action. The result of this approach was that the team could never fully agree on what each helper’s responsibilities were. Everybody agreed that the code was not good, but each person had their own view on why.

We first learned about the pattern here, when reading about WebDriver (now Selenium 2). Page objects came as the solution to separate the different actions our helpers contained in an extremely simple and even obvious way that nobody in our team had previously thought: simply creating a specialized helper for each page of the application.

It is in fact so simple that I still wonder how come we didn’t think of it before… I think that we were too busy trying to figure out how to deal with the helpers we had, and we were too used to having them that way. I guess that’s because the previous project we had worked on (and the first one where we had a strong movement towards automated tests with Selenium) was a web app based on one single page with lots of GWT-based ajax.

Anyway, excuses aside, we started using page objects and it was great! But then other doubts started to come up: how to deal with parts of a page that are common to many pages – for instance a side navigation bar? How to make our tests more readable? Should our page objects use selenium directly? If yes, how to resolve the selenium dependency? Can the page objects just encapsulate the locators for the html elements instead? Should page objects be responsible for the tests’ assertions too, depending on xUnit’s API, or should they just provide verification methods to be used by the tests’ code itself?

I think that these are questions that may or may not have a straight correct answer, but here I will write a bit of what worked well for us or for me later on when dealing with that.
To do that I think we can write a test for an imagined scenario.
Let’s try that!

The problem

Let’s say that we have a test case to test the following hypothetical scenario:

We have a hypothetical book store application.
Every page in the application has a navigation bar on the side.
A user goes to our application home page and, by clicking on a link on the navigation bar, she goes to a page to search for books.
On the search books page she fills in a search form, entering “Tolkien” in the author field and “Rings” in the title field, and submits the form.
She is then redirected to a search result page that contains a list of books along with the same search form already filled in with the same search data she had entered – in this case, “Tolkien” in the author field and “Rings” in the title field.

We want to assure that, given our test data, the search result contains the book ‘The Return Of The King’.
We also want to assure that the search form in the result page still has the data she had previously entered.
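
Before the full solution, the page object idea for this scenario can be sketched in a few lines of JavaScript. The names are hypothetical and the browser driver is replaced by a stub that just records interactions, so this is a sketch of the shape of the pattern, not the real Selenium code:

```javascript
// Stub driver that records interactions instead of driving a real browser.
function StubDriver() { this.actions = []; }
StubDriver.prototype.type = function(field, text) {
  this.actions.push("type " + field + "=" + text);
};
StubDriver.prototype.click = function(target) {
  this.actions.push("click " + target);
};

// One page object per page: it knows that page's elements and actions.
function SearchBooksPage(driver) { this.driver = driver; }
SearchBooksPage.prototype.searchFor = function(author, title) {
  this.driver.type("author", author);
  this.driver.type("title", title);
  this.driver.click("searchButton");
  // Navigation methods return the page object for the page we land on.
  return new SearchResultPage(this.driver);
};

function SearchResultPage(driver) { this.driver = driver; }

// The test then reads like the scenario itself:
var driver = new StubDriver();
var resultPage = new SearchBooksPage(driver).searchFor("Tolkien", "Rings");

console.log(resultPage instanceof SearchResultPage); // true
console.log(driver.actions);
// [ 'type author=Tolkien', 'type title=Rings', 'click searchButton' ]
```

Note how each action lives in exactly one page object, and navigation hands the test the next page object – that is the core of the pattern.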

The code solution after the jump 🙂

Read more…

How to switch off Dell XPS 1340’s discrete video card on Linux

June 29, 2010 22 comments

Finally, after months, the discrete video card of my Dell Studio XPS 1340 is switched off on Linux.

The story…

This laptop comes with two video cards and what nvidia calls Hybrid SLI.
The idea is very nice : one video card is integrated to the motherboard and is always switched on.
The other one is a discrete card that is manually switched on by the user when he/she wants more graphical processing power, at the cost of consuming more power. This sounds interesting for laptop owners, who may want to save as much battery as possible.
When the discrete card is on, both cards work together combined to provide a more powerful graphical processing device.
Even though the idea is nice, the implementation is not, and nvidia even gave up moving on with the technology. If I am not mistaken, I read somewhere that they claim the driver implementation to control the devices together is very complicated.
The drivers for Windows Vista and Windows 7 work quite well, and are maintained by the laptop vendors – in this case, Dell.

Now if you use Linux…:
nvidia provides their proprietary driver for Linux, which is very nice, but allows us to use only the integrated card.
At first it was ok for me, as I didn’t need the power of both cards together on Linux anyway. The problem is: the Linux driver does not switch the discrete card off. Yep, that’s right: you can’t use the discrete card, but it is always on, consuming power. Ironic, huh?! The result is that on Linux my laptop always runs hotter and the battery lasts a shorter time than on Windows.

I researched for a long time when I bought the laptop, but I couldn’t find a way to get the discrete card off.
One good piece of news is that the nouveau project, an open source driver for nvidia cards, plans to add full support for switching these cards on/off at will. The driver is still under heavy development though… and many features are not yet implemented.

Then, just a few days ago, I received an update from the sites I have been following on the subject.
avilella has been running a great blog about switchable graphics on Linux, quickly updating it with every new detail that comes up. The address to his blog is:

There is also a bug report on Launchpad regarding the same problem:
On this bug report page they are collecting information about all different laptops with switchable cards, so that they can work on a solution for everyone.
But just yesterday a member from Launchpad named drphd found an ACPI method that can be called on the Dell XPS 1340 to disable the discrete graphics card.

I just made a small modification to the module that avilella posted here, making it specific to the Dell XPS 1340 by using the method indicated by drphd. And, thanks to these guys, we can now switch the discrete card off.

The solution

So, to use it, just download these two files:


Before compiling the module, run: lspci -v
You should see the information for both cards, including the IRQ and the kernel modules related to them.
Now place those two files inside the same folder, cd to this folder, compile the module and install it:

make
sudo cp xps_nv.ko /lib/modules/`uname -r`/kernel/
sudo depmod
sudo modprobe xps_nv

If you run lspci -v again, you should see the detailed information only for the integrated card. On my machine the output after the module is loaded is:

02:00.0 VGA compatible controller: nVidia Corporation G98 [GeForce 9200M GS] (rev ff) (prog-if ff)
	!!! Unknown header type 7f

03:00.0 VGA compatible controller: nVidia Corporation C79 [GeForce 9400M G] (rev b1)
	Subsystem: Dell Device 0271
	Flags: bus master, fast devsel, latency 0, IRQ 23
	Memory at aa000000 (32-bit, non-prefetchable) [size=16M]
	Memory at b0000000 (64-bit, prefetchable) [size=256M]
	Memory at cc000000 (64-bit, prefetchable) [size=32M]
	I/O ports at 5000 [size=128]
	[virtual] Expansion ROM at c0000000 [disabled] [size=128K]
	Kernel driver in use: nvidia
	Kernel modules: nvidia-current, nvidiafb, nouveau

To make sure that the module is loaded every time you boot your laptop, edit the file /etc/modules and add, at the end, a new line with the text:

xps_nv

As you see, I have the Dell XPS 1340 with a GeForce 9400M G and a GeForce 9200M GS.
But I think that the same solution should also work for people who have the combination GeForce 9400M G + GeForce 210M.

After the module is loaded the laptop runs a bit cooler, it uses around 4W less power and the battery is estimated to last around 30-50 minutes longer.

Remember that every time you install a new kernel you will have to re-compile and re-install the module.

