In Code Reviews and Predictive Impact Analysis, Mike Conley describes an enhancement to existing peer code review tools that provides "a recommendation engine for code reviews… 'by the way, have you checked…?'"
And then he asks:
I wonder if such a tool would be accurate… and, again, would it be useful?
There are two questions there. Would it be useful? Maybe so - more on that in a moment.
Would it be accurate? That's a tougher question. I was discussing this idea with Brandon, and he pointed out that building an "in the past you reviewed file foo.h at the same time as file foo.c" capability requires a substantial body of reviews in which you actually did do that.
In other words, you can't "teach" the algorithm to create good suggestions until you give it good data. And while you are amassing that corpus of data, the suggestions are most likely going to be pretty useless. Which means they will be irritating. And users, especially software developers, tend to turn off irritating/useless features (e.g. the Microsoft Office Assistant is still being mocked, years after its death).
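The kind of suggestion Brandon describes could be sketched as a simple co-occurrence count over past reviews. This is a hypothetical illustration, not how any particular tool works; the review data and the `suggest` helper are invented for the example:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical review history: each review is the set of files it contained.
reviews = [
    {"foo.h", "foo.c"},
    {"foo.h", "foo.c", "util.c"},
    {"bar.c", "bar.h"},
]

# Count how often each pair of files appeared in the same review.
pair_counts = defaultdict(int)
for files in reviews:
    for a, b in combinations(sorted(files), 2):
        pair_counts[(a, b)] += 1

def suggest(filename, min_count=2):
    """Files that co-occurred with `filename` in at least `min_count` reviews."""
    hits = []
    for (a, b), n in pair_counts.items():
        if n >= min_count:
            if a == filename:
                hits.append(b)
            elif b == filename:
                hits.append(a)
    return sorted(hits)

print(suggest("foo.c"))  # → ['foo.h']
```

With only three reviews, the counter already shows the cold-start problem: `foo.h` clears the threshold, but everything else is noise - which is exactly why the suggestions are useless until the corpus is large.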
As Mike suggested, the tool could pull from other sources, particularly version control system data. That's a valid point, but in a way it suggests the check belongs elsewhere - specifically, at commit time (or as part of continuous integration).
There are similar features that peer code review tools can easily offer. Smart Bear's code review tool offers "subscriptions" which allow a user to define simple rules. These can be file-based - when a file with a name matching the specified pattern is added to a review, then the user is automatically added as a reviewer. Or people-based - any review with files authored/modified by user A will automatically include user B as a reviewer.
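A minimal sketch of such a rule engine, assuming glob-style file patterns and invented user names (this is an illustration of the concept, not the tool's actual implementation):

```python
import fnmatch

# Hypothetical subscription rules.
# File-based: a review containing a file matching the pattern adds the reviewer.
file_rules = {"*.sql": "dba_dana", "parser/*": "parser_pat"}
# People-based: a review with files authored by user A adds user B.
people_rules = {"user_a": "user_b"}

def auto_reviewers(review_files, authors):
    """Return the set of reviewers the subscription rules add to this review."""
    reviewers = set()
    for pattern, reviewer in file_rules.items():
        if any(fnmatch.fnmatch(f, pattern) for f in review_files):
            reviewers.add(reviewer)
    for author, reviewer in people_rules.items():
        if author in authors:
            reviewers.add(reviewer)
    return reviewers

print(sorted(auto_reviewers(["parser/lex.c", "schema.sql"], ["user_a"])))
# → ['dba_dana', 'parser_pat', 'user_b']
```

The appeal of this design is that the rules are explicit and user-authored, so they are useful from day one - no training corpus required.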
And I suppose that concept could be extended. For example, in addition to allowing rules that add specific reviewers based on file names, the tool could automatically insert comments into a review. That's essentially what Mike is asking for, except he wants the tool to generate the rules automatically.
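Extending the same rule idea to comments might look like this - again a hypothetical sketch, with invented patterns and reminder text:

```python
import fnmatch

# Hypothetical comment rules: when a file matching the pattern appears in a
# review, the tool attaches a canned reminder comment to it.
comment_rules = {
    "*.sql": "Schema change: has the migration script been updated?",
    "config/*": "Config change: does the deployment checklist cover this?",
}

def auto_comments(review_files):
    """Map each file to any reminder comments triggered by the rules."""
    out = {}
    for f in review_files:
        notes = [msg for pat, msg in comment_rules.items()
                 if fnmatch.fnmatch(f, pat)]
        if notes:
            out[f] = notes
    return out

print(auto_comments(["config/app.yml", "main.c"]))
```

Hand-written rules like these cover the predictable cases; Mike's proposal is to mine the rules themselves from history, which is where the data problem above comes back in.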
My preference would be to leave that to other tools and then perhaps integrate the results into the code review where appropriate.