On the Peer Review System of NSERC’s Discovery Grant Program


This article was written by David Stephens for SSC Liaison, Volume 25, Number 3, August 2011.

At the recent SSC meeting in Wolfville, after a meeting between the SSC Research Committee and NSERC, I was asked, as Section Chair of the NSERC evaluation unit responsible for Statistics, to write a piece for Liaison addressing some issues of concern to members of our community. I want to do this to make the evaluation process as transparent as possible, in the hope that researchers retain faith in the way the system works, and also because I wanted to take the opportunity to point out that NSERC, and everyone involved in the assessment of Discovery Grant applications, is committed to working to make the system serve the community.

Before starting on substantive issues, I thought I should give some brief biographical details. I am a Professor in the Department of Mathematics and Statistics at McGill. I came to Canada in 2006 after eleven years in a faculty position in London (UK). My first (and so far only) Discovery Grant application was funded in 2007, and my current grant level is $22K. I also received a Discovery Accelerator Supplement in that year. I mention these things only because they give some context to my view of the Discovery Grant system, in that I am fairly new to the system, and have been fortunate to have been well supported by it up to this point. For the last three years, I have served on the Grant Selection Committee/Evaluation Group as a Statistics committee member, and I was Section Chair for Statistics in the 2011 Competition. I will stay on as Section Chair for 2012. My tenure on the committee has coincided exactly with the implementation of the new evaluation procedures, so I have no direct knowledge of how the evaluations were carried out under the old system.

I was specifically asked to address issues concerning collaborative research. I regard myself as primarily an applied statistician who works with researchers from other disciplines (for example, in biology and genetics, and in health research) on a regular basis; sometimes this involves methodological research in statistics, sometimes assistance in statistical analysis. In any case, issues surrounding collaborative and interdisciplinary research relate very much to my own work. The Discovery Grant instructions relating to collaborative research are laid out in the various NSERC Guides to Professors and other material, and I will attempt to summarize the most salient points. I apologize in advance if what I write seems to you jargon-filled or too generic; it is tough to cover all possible points of issue in the space available.

I will begin with a brief summary of the way that assessments are carried out, then address collaborative research, and conclude with comments on one aspect of the evaluation process that I get asked about frequently, the assessment of training.

Assessment of Discovery Grant Applications

Assessment of Discovery Grant applications in Mathematics and Statistics is performed by Evaluation Group (EG) 1508, one of twelve EGs spanning the Natural Sciences and Engineering (NSE) disciplines falling under the NSERC remit. The assessment comprises three components: Scientific and Engineering Excellence of the Researcher (EoR), Merit of the Proposal, and Contributions to the Training of Highly Qualified Personnel (HQP). An application is rated independently by five committee members, each of whom, after discussion, provides their own rating for each component on a six-point scale; the median rating for each component is ultimately computed. The total score (18 down to 3) is the sum of the three component scores, and is translated into a rating bin (A for 18, B for 17, etc.). The funding level for each bin is discussed and finalized at the end of the competition by the Executive Committee, which comprises the Group Chair and Section Chairs, in consultation with NSERC; the funding recommendation on bin levels is made by the Executive Committee to NSERC once an appropriate balance between grant levels and success rate has been reached.
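
To make the scoring arithmetic concrete, the following is a minimal sketch in Python of the mechanism just described: the median rating per component, summed to a total out of 18, then mapped to a letter bin. The individual ratings and the full letter sequence used here are illustrative assumptions only; the actual process involves committee discussion and is more nuanced than any short calculation can convey.

    # Illustrative sketch only: hypothetical ratings, not NSERC data.
    from statistics import median

    # Ratings from five committee members on the six-point scale
    # (1 = lowest, 6 = highest), one list per assessment component.
    ratings = {
        "EoR":      [5, 5, 4, 5, 6],
        "Proposal": [4, 5, 4, 4, 5],
        "HQP":      [5, 5, 5, 4, 5],
    }

    # Median rating per component, then summed to a total score of 3..18.
    component_scores = {c: median(r) for c, r in ratings.items()}
    total = int(sum(component_scores.values()))

    # Translate the total into a rating bin: A for 18, B for 17, and so on
    # (assumed letter sequence, purely for illustration).
    bins = "ABCDEFGHIJKLMNOP"  # sixteen bins for totals 18 down to 3
    rating_bin = bins[18 - total]

    print(component_scores, total, rating_bin)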

The NSERC Peer Review Manual (PRM) provides guidance as to how committee members should perform their assessment. I strongly recommend that anyone submitting a Discovery Grant application should read the manual, and pay close attention to it. Committee members will rely upon their own knowledge of the field to inform their assessment of applications; this may include technical aspects of the science involved, the current state of research in the area, and also, say, knowledge of particular journals’ review procedures to assist their assessment of the quality of a publication (see PRM, section 6.8.1.1, p 5 of Chapter 6). The key guiding principles are that the assessment should be based upon the material provided in the application and reviewers’ reports only, and that the onus is on the applicant to provide sufficient information to facilitate the assessment. That is, committee members are not permitted to fill in missing information, or give benefit of the doubt. I highlight these points as they describe how committee members assess applicants in the EoR component, and, importantly, give some clarification on points concerning collaborative research.

In addition to the central aspect of scientific quality, a key concept in all of the components of assessment is that of impact. Committee members are tasked with assessing the impact of an applicant’s past research and training track record, and the potential for future impact of the proposed research and training plans. The PRM provides guidance to members on markers of impact in section 6.8.1, but the applicant is at liberty to use whatever impact indicators they see fit. As ever, the onus is on the applicant to provide the evidence and make their case. The issue of impact is also relevant to collaborative research.

Collaborative Research

Statistics is, for most of us, an inherently collaborative discipline, and collaborations can take many forms. I will refer to three broad classes: (i) purely theoretical/methodological statistical research in collaboration, say, with other statisticians, (ii) collaboration across other scientific or engineering disciplines, and (iii) collaboration in areas not covered by the NSERC remit. In terms of the assessment of EoR, the PRM is quite clear: what is being assessed is Scientific and Engineering excellence, that is, excellence in disciplines covered by NSERC. An implication of this is that all contributions across the NSE disciplines are to be regarded on an equal footing; this is feasible under the conference model, where, even if an application is being reviewed by a particular EG, appropriate expertise is available to review NSE interdisciplinary applications, as members from other EGs can be recruited. In my experience, the conference model works well in this regard. Thus, according to the PRM, collaborations in classes (i) and (ii) are to be treated identically. In practice, of course, as our Evaluation Group is populated by mathematicians and statisticians, it is imperative that, for any application covering multiple NSE disciplines, the applicant explain clearly what their contributions have been and what impact their work has had, so that committee members are in a position to give such research its full credit. For example, whereas committee members will be familiar with the practices and norms of mainstream statistical journals, and with their reputation for high standards and associated high impact, they may not be so cognizant of these aspects for journals outside the field. In such cases, applicants should therefore take care to provide this context.

For class (iii), the situation is somewhat different. It is recognized by NSERC that what the PRM terms “eligible research” may have impact in research fields not covered by NSERC (see PRM, section 6.8.1.1.1, pp 6-8 of Chapter 6), and that this impact should be recognized in the assessment of the excellence of a researcher. However, what EG committee members are tasked to assess is the impact specifically of NSE research. It is not necessarily the case that collaborative statistical work will be regarded as eligible research. For example, the application of well-established statistical methods in a collaborative setting would not in general be considered eligible research, although applicants are free to put forward their arguments, which will be assessed on a case-by-case basis. There may also be instances where an applicant has made a notable methodological contribution in a collaborative setting; in this case, the applicant should explain the context and specific impact of this contribution.

It is also important for appropriate assessment of collaborative research that applicants clearly demarcate their personal role in co-authored papers, or in co-supervision of HQP. The extent to which this is done may depend on the paper/co-supervision concerned: for example, in co-authored papers with one or two other authors, a simple statement of, say, equal contribution might be sufficient, whereas for multi-authored papers, it is in the applicant’s interest to establish clearly what their specific contribution was. I think this is especially relevant to statisticians, who often contribute to publications in mainstream science or health-related areas; in such cases, it is very important that applicants point out clearly what their role in the research was, and specifically how their own work had an impact. Incidentally, it has been put to me that participation by statisticians in, say, multi-authored, collaborative papers is regarded as a negative factor by committee members. This is not the case; in the absence of further explanation, such papers may not provide evidence of NSE excellence, but will also not be regarded as detrimental.

Finally, on collaborative work: in the proposal itself, the proposed research and training must lie within the NSE domain. If the proposed research/training is interdisciplinary within NSE, the applicant needs to lay out clearly what its impact will be in the various fields; if the impact goes beyond NSE, then that should be explained as well. NSERC is sensitive to the challenges associated with interdisciplinary research, and there are specific instructions (PRM, p 7 of Chapter 6) on the assessment of interdisciplinary applications that will be taken into account.

The Training of HQP

Training of HQP at all levels (undergraduate through postdoctoral researcher) is essential to maintain the ongoing success of research in Canada, and is regarded as a vital component of the purpose for which the Discovery Grant is intended. In the assessment process, committee members are tasked to assess the extent, quality, and impact of the applicant’s past training record, and the feasibility and likely impact of the proposed training plan; applicants should therefore elaborate on these points in detail. For the 2012 Competition, a new section of the application form F101 is to be included to facilitate this. In terms of the assessment itself, whereas the number of trainees may speak to the commitment of the applicant to training, it does not on its own quantify impact or quality. Thus the assessment will not depend solely on the raw numbers; indeed, a high number of trainees is neither necessary nor sufficient for a high rating. Page 13 of Chapter 6 of the PRM gives specific examples of markers of training quality and impact, such as the involvement of trainees in research contributions via co-authorship, and trainee career destinations.

The opportunity for training at all levels is not available to applicants from some institutions. For example, there may be no PhD program in Statistics in a particular Department, or restricted access to undergraduate students who might otherwise be recruited. Committee members have been instructed in how to assess the records of applicants in such situations, and again the focus is placed on impact. As a concrete example of how such cases might be handled, the merit ratings grid contains, in the description of the rating Moderate, the wording

Training record is acceptable but may be modest relative to other applicants.

If an applicant does not have the opportunity to train graduate students, then it might appear that their training record is modest relative to other applicants. However, the PRM also states (p 12 of Chapter 6)

A researcher working at a university without a graduate program should not be ranked lower due to limited or no graduate student supervision. If an applicant presents a solid record of supervising trainees at other levels, this must be recognized in the HQP assessment, as described in the merit rating indicators.

This is aimed at ensuring that researchers in such universities are not systematically disadvantaged. As before, any pertinent information about the “institutional context” of an applicant must be included in the application package, as it cannot be introduced by committee members themselves. Also, as before, the particular contribution made by the applicant must be made clear in the application.

The assessment of the HQP component focuses primarily on research training, and as such, the teaching of graduate courses would not normally be considered a contribution for Discovery Grant assessment purposes. The fact that an applicant has taken on an extra teaching load within their Department is not, in itself, considered a contribution in this context. However, there may be exceptional circumstances where the teaching of such courses could be considered a notable contribution; if the course had a particular, tangible impact for a given student or cohort of students, then this could be included in the application in the free-form “Contributions” section of F100 (in this case the student(s) should not be included in the F100 list of trainees, which is restricted to formally supervised trainees). It is up to the applicant to put forward their argument to convince the members of the committee in this regard.

Final comments

Apart from the topics that I have discussed above, there are, I am sure, several other issues of concern to the community. The results of the 2010 and 2011 Competitions, the relationship with Mathematics, and grant levels in general are all the focus of much scrutiny. I cannot address these issues here, but I will say that everyone concerned with the implementation of the evaluation process takes their responsibility seriously, and regards the effectiveness of the process in funding researchers appropriately as a top priority. Preparation for the 2012 Competition has begun already, and, as I write, the new batch of F180s is due in just a few weeks.

Over the next six months, the EG members and NSERC staff will continue working to improve a system that must serve the needs of our community. Recently, members of the Canadian mathematics and statistics community have expressed fundamental doubts about the Discovery Grant peer review system. The members of the EG Executive Committee, as well as EG members and NSERC staff, are all aware of aspects of the review process that could be improved or implemented more effectively, and will work together towards making these improvements. The Discovery Grant system is a critical element in the success of all research in Canada, not merely in science and engineering, but also, through its influence, in the health and social science/humanities domains. Everyone involved in its implementation realizes this, and remains committed to its success.


David Stephens
