Friday, May 27, 2011

Live Blog from ICSE: Peer review in open source projects

This is my final live blog from ICSE. See recent posts for others.

 
The presenter, Peter Rigby, discussed 460 instances of peer review in open source projects, including interviews with 9 top reviewers.
They examined scalability of the peer review process, techniques for finding patches to review, motivation, patch structure and norms, as well as the effect of too many opinions and ignored reviews.

The presenter noted that large projects can receive hundreds of peer review messages a day. One lead developer received over 2000 emails in a single day, only two of which were personal.

Motivations include intrinsic interest and a sense of responsibility. People are invested in a particular code base; if they don't review changes, then the code quality will deteriorate.

Reviewers filter messages by person and by subsystem, which helps them avoid feeling overwhelmed. However, many messages are posted to multiple lists.
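
As a rough illustration (my own sketch, not from the talk), the kind of filtering described might look like the following Python snippet, which scans a local mbox archive and keeps only messages posted to watched subsystem lists or sent by watched people. The file name, list IDs and addresses are hypothetical.

    import mailbox

    # Hypothetical local archive of the project's mailing-list traffic.
    archive = mailbox.mbox("project-mail.mbox")

    # Subsystem lists and people this reviewer chooses to follow.
    watched_lists = {"netdev.lists.example.org", "scsi.lists.example.org"}
    watched_senders = {"maintainer@example.org"}

    for msg in archive:
        list_id = msg.get("List-Id") or ""
        sender = msg.get("From") or ""
        # Keep a message if it was posted to a watched subsystem list
        # or sent by someone whose patches the reviewer always reads.
        if any(l in list_id for l in watched_lists) or any(s in sender for s in watched_senders):
            print(msg.get("Subject"))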

One developer said that when reviewing potential patches, "a good and descriptive subject line draws immediate attention". A change log in the message gives conceptual understanding.

The developer snips out excess detail in replies; this seems to streamline the process considerably. In my opinion, this should be part of general email etiquette.
 
Another practice is that in a back-and-forth email conversation, people tend to reply to individuals as well as to the list, and more CC recipients are gradually added.

Some review discussions are purely technical (whether the patch works); others focus on scope, politics, necessity, etc. There are thus often too many trivial opinions (Parkinson's law of triviality); people offering trivial comments tend to drown out more serious discussion. The researchers also measured outsider involvement and influence.

One problem they examined is that there are too few developers in open source development. Lack of time tends to result in postponement of reviewing; this puts the onus on the author of the patch to ensure it is eventually included.

Takeaway messages:
  • Reviewers are invested in doing reviews
  • They tend to postpone rather than rush
  • Asynchronous review processes facilitate discussion
  • Politicized discussion is infrequent
  • Scaling in large systems can occur through multiple mailing lists for different subsystems (other methods are discussed in the paper)

The presenter discussed the threats to the validity of his work and how these are mitigated; for example, the data is public.

An audience member asked how this compares to traditional inspection. The presenter said there are many similarities and that the open source process seems to work as well as inspections; the key is reviewing early and in small chunks. Another audience member asked about 'review then commit' versus 'commit then review'; the presenter said the two processes were mostly the same. A related question raised the issue of status-quo bias; the presenter said that patches that are committed first have a greater need for review.

An audience member said that email is a terrible medium for this and that there should be better tools. The presenter replied that the flexibility of email is underestimated. Several projects, however, have started to use review tools that provide more traceability, though forcing people to use tools has some drawbacks. "You have to be careful to say that email is bad ... if you understand the norms of the community then it is really very efficient".
