Some thoughts inspired by "Toward peer review as a group engagement"

These are rough thoughts and should be taken as such. Think of them more as interesting points that deserve consideration in a context the author of the paper (Andrea Bonaccorsi) was not specifically thinking about when writing it. That is a good place to start: the paper was written about academic peer review, not about code review. I personally find the two to be close cousins, but they serve different functions in different disciplines, so I am doing some extrapolating here.

You can find the original paper here; if you find any of this interesting, it is worth a read.

It is possible that there are biasing elements in code review based on the prestige of the author, the prestige of the change, and the relationship between author and reviewer.

This quote, "Given a (relatively) fixed time budget, increasing the number of referee reports means allocating a smaller number of hours to each of them. More subtly, if researchers allocate a fixed proportion of their time to referee work, it is likely that they will accept submissions in a hierarchical order, first from journals of higher reputation." really made me pause. If these similar biases play out in code review, which we can be certain that biases do exist. There are several recent studies on this in regards to gender that point out disparities.

But I haven't seen any around the level of prestige. That is another thing to keep in mind if a code review never seems to get picked up. Anecdotally, I know I have passed over doing code reviews in the past based on who the author was.
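If you wanted to look for this in your own project, one starting point is to compare how long changes wait for a first review depending on who opened them. Below is a minimal sketch of that idea, not anything from the paper; the record fields (author, opened_at, first_review_at) are assumptions about whatever export your code host gives you.

```python
# Hypothetical review records; in practice these would come from your code host's API or export.
from datetime import datetime
from statistics import median
from collections import defaultdict

reviews = [
    {"author": "alice", "opened_at": "2023-01-02T09:00", "first_review_at": "2023-01-02T11:30"},
    {"author": "bob",   "opened_at": "2023-01-02T10:00", "first_review_at": "2023-01-04T16:00"},
    {"author": "alice", "opened_at": "2023-01-03T09:00", "first_review_at": "2023-01-03T10:15"},
]

# Collect each author's wait times (in hours) from opening a change to its first review.
wait_hours = defaultdict(list)
for r in reviews:
    opened = datetime.fromisoformat(r["opened_at"])
    first = datetime.fromisoformat(r["first_review_at"])
    wait_hours[r["author"]].append((first - opened).total_seconds() / 3600)

# A large spread between authors is not proof of bias, but it is a prompt to look closer.
for author, hours in sorted(wait_hours.items()):
    print(f"{author}: median wait {median(hours):.1f}h over {len(hours)} changes")
```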

These biases may mean you have work that is hard to review

Like I said just above... anecdotally, I know this is true from my own experience.

Sometimes extrinsic motivation stacked on intrinsic motivation can have lasting impact

"The authors asked a pool of 1,500 referees of the Journal of Public Economics to review a paper. Referees were assigned to four experimental conditions: the first group was asked to deliver the report in six weeks, the second group in four weeks, while the third was offered a payment of 100 USD for meeting the four week deadline and a final group was informed that their turnaround times would be made public (social pressure). What they find is that cash incentives significantly improve speed and do not crowd out intrinsic motivation, since referees continue delivering reports in four weeks even after the end of the incentive treatment. They suggest that price incentives are complementary to social pressure". Social pressure is probably our most common lever when trying to get code through review. So this is interesting.

What motivates us to do code reviews?

I really don't have anything to say here other than that this seems like an interesting thing to figure out. I have no idea right now. Something worth thinking about.

Incentivize teams not individuals

This one is pretty clear. Measure and incentivize teams, not individuals, to prevent crowding out other work, but also because teams ship code, not individuals.

This comes from the proposal at the end of the paper for how to better structure academic paper review. To be honest, it starts to sound a lot like code review, which is interesting: teams owning reviews in the areas where they have specific knowledge, and working among themselves to figure out how to get the reviews done. A rough sketch of what team-level measurement might look like follows.
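Here is a minimal sketch of the "measure teams, not individuals" idea. The team mapping and the review log are hypothetical stand-ins for your own data, not anything defined in the paper.

```python
# Roll individual review counts up to the team before anyone looks at them,
# so the incentive attaches to the group rather than to any one reviewer.
from collections import Counter

# Hypothetical mapping of reviewers to teams.
team_of = {"alice": "payments", "bob": "payments", "carol": "search"}

# Hypothetical log of completed reviews for the period.
completed_reviews = [
    {"reviewer": "alice"},
    {"reviewer": "bob"},
    {"reviewer": "bob"},
    {"reviewer": "carol"},
]

team_counts = Counter(team_of[r["reviewer"]] for r in completed_reviews)
for team, count in team_counts.most_common():
    print(f"{team}: {count} reviews completed this period")
```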

Team instability and demanding deadlines may provide a disincentive to do code reviews

"...researchers are permenantly worried about the lack of funding..."

Why review when you could be writing code, shipping code, closing tickets, etc.? Uncertainty makes people look out for themselves and their own metrics. So be careful: you could inadvertently encourage people not to review code.

Be careful what you measure and incentivize

This quote, which appears in the body of the paper, comes from a different paper.

"Paying for rejections will incur additional publishing costs and paying solely for acceptance will benefit only reviewers whose comments are always positive. Finally, a paid peer review would distort the selection criteria of peer reviewers, contributing to the emergence of new commercial peer review agencies, cronyism or nepotism reviewing activities, or specialised agencies to provide reviews on demand, similar to paper mills or agencies that write and fabricate data from scratch"

But it is an important reminder that how we incentivize things can have unintended consequences. So be thoughtful.

Conclusion

So yeah, those are just some thoughts. Feel free to dig into the paper and find your own. There are lots of partially formed thoughts and intuitions here that need more thought on my part before they become actionable and useful.

If you enjoyed this article, please share it! Also, I have a newsletter that you might enjoy as well. Thanks! -Daniel

Published: 23 January 2023 | Tags: leadership, measures, statistics, incentives