Industrial and Government Applications Track Paper

…data and the difference in time zone typically resulted in a two- or three-day turnaround in running the system.
Nonetheless, the system illustrated its utility by finding proposals that were obviously assigned to the wrong panel and suggesting qualified reviewers that had been overlooked by program officers.

In the third year, encouraged by the results of the second year, NSF purchased the appropriate computer equipment and ran Revaide in house. This enabled tighter integration with NSF's databases; e.g., proposal meta-data was accepted in the exact format produced by NSF's systems rather than requiring an intermediate step of manually reformatting the data. Furthermore, processes were put into place to accurately record and maintain the data used by Revaide. This reduced the time required to get results from Revaide from a few days to a few hours. Plans are now being evaluated to have a contractor fully integrate Revaide with NSF's internal systems and build a web interface to Revaide. In the next section, we summarize the experiences in the third year of using Revaide.

4. Evaluation and Lessons Learned

In this section, we report on two experiments that empirically evaluate the utility of the residual term weight approach in assigning reviewers. We also report on the lessons we have learned in deploying Revaide in the government context.

4.1 Selecting Reviewers for Proposals

We consider selecting reviewers independently for proposals. In particular, for each proposal submitted to the 2004 Information Technology Research program in the Division of Information and Intelligent Systems, a total of approximately 1,500 proposals, we compare finding the three closest reviewers as determined by cosine similarity with finding the three reviewers that best reduce the sum of residual term weights (SRTM). In each case, the pool of reviewers is the people who submitted proposals to the division in the prior three years. The average sum of residual term weights (with ε = 0.5) decreases from 0.636 for the three closest reviewers to 0.569 for Revaide's approach. Note that this average does not tell the entire story.
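To make the comparison concrete, the greedy variant of the SRTM criterion can be sketched as follows. This is a minimal illustration, not Revaide's actual implementation: we assume sparse term-to-TF-IDF-weight dictionaries as profiles, treat ε as a minimum term weight worth covering, and count a term as covered up to the best weight any single assigned reviewer has for it — all of which are our assumptions about the residual definition.

```python
from typing import Dict, List

Vector = Dict[str, float]  # assumed sparse term -> TF-IDF weight profile

def residual_weights(proposal: Vector, reviewers: List[Vector],
                     eps: float = 0.5) -> float:
    """Sum of residual term weights (SRTM), under our assumed reading:
    each proposal term is covered up to the best weight any assigned
    reviewer carries on it; terms weighted below eps are ignored."""
    total = 0.0
    for term, w in proposal.items():
        if w < eps:
            continue  # assumed role of epsilon: skip low-weight terms
        best = max((r.get(term, 0.0) for r in reviewers), default=0.0)
        total += max(0.0, w - best)
    return total

def greedy_srtm(proposal: Vector, pool: List[Vector],
                k: int = 3, eps: float = 0.5) -> List[Vector]:
    """Greedily pick k reviewers, each step choosing the one that most
    reduces the remaining residual weight (complementary expertise)."""
    chosen: List[Vector] = []
    candidates = list(pool)
    for _ in range(k):
        best_r = min(candidates,
                     key=lambda r: residual_weights(proposal, chosen + [r], eps))
        chosen.append(best_r)
        candidates.remove(best_r)
    return chosen
```

On a toy proposal mixing three topics, this picks reviewers whose combined profiles cover all heavily weighted terms, rather than two reviewers who both match the dominant topic — the intuition behind preferring SRTM to plain cosine similarity for interdisciplinary proposals.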
For more than five percent of the proposals, perhaps the most interdisciplinary proposals, there was a difference of greater than 0.15 in the sum of residual terms, demonstrating the importance of finding a set of reviewers with complementary expertise. Of course, it may seem like a tautology to show that a system that attempts to minimize SRTM has a lower SRTM. However, this also puts a number behind the intuition that similarity alone isn't sufficient for finding reviewers for interdisciplinary proposals.

4.2 Selecting Panelists

Here we consider alternative strategies for selecting panelists for two panels of proposals submitted to the 2005 Universal Access solicitation. For each panel, we compare using six randomly selected people funded in the prior year as reviewers (analogous to the common conference practice of inviting a program committee before papers are submitted, or the NIH practice of having a standing panel), the six reviewers closest to the centroid of the proposals in the panel, and the six reviewers that best reduce the sum of residual term weights from the centroid of the panel (with ε = 0.5). Once the panelists are selected, four panelists are assigned to each proposal by Revaide using SRTM with ε = 0.5. The mean residual term weight under these conditions is shown in Table 2. It is apparent from this table that both approaches that examine the proposals to select panelists have a benefit over picking panelists who are experts in the general subject area (Standing Panel). Furthermore, selecting panelists with complementary expertise (SRTM) has an advantage over selecting panelists whose expertise is most similar to the central theme of the proposals (Similarity).

    Standing Panel   Similarity   SRTM
    0.783            0.662        0.521

Table 2. Sum of residual term weights with three alternative approaches to selecting panelists.

[5] For example, an unexpected carriage return in a proposal title resulted in an ill-formed tab-separated file.
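Both proposal-driven strategies in Table 2 rest on a panel centroid, the mean of the panel's proposal term vectors. A minimal sketch of that machinery, again assuming sparse dict-based TF-IDF vectors (the representation is our assumption; the paper does not spell it out here):

```python
import math
from collections import defaultdict
from typing import Dict, List

Vector = Dict[str, float]  # assumed sparse term -> TF-IDF weight

def centroid(proposals: List[Vector]) -> Vector:
    """Mean term vector of the proposals assigned to a panel."""
    c: Vector = defaultdict(float)
    for p in proposals:
        for term, w in p.items():
            c[term] += w / len(proposals)
    return dict(c)

def cosine(u: Vector, v: Vector) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_panel(proposals: List[Vector], pool: List[Vector],
                     k: int = 6) -> List[Vector]:
    """The 'Similarity' strategy of Table 2: the k reviewer profiles
    closest (by cosine) to the panel centroid."""
    c = centroid(proposals)
    return sorted(pool, key=lambda r: cosine(c, r), reverse=True)[:k]
```

The SRTM strategy would differ only in the selection step: instead of ranking the pool by cosine similarity to the centroid, it would greedily pick reviewers that most reduce the centroid's residual term weight, favoring complementary rather than redundant expertise.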
4.3 Experiences and Lessons Learned

In the third year of Revaide's development, it was relied upon heavily in the Information and Intelligent Systems (IIS) division for the evaluation of a competition that received slightly over 1,000 proposals and several other competitions with 200-500 proposals. It was also used in competitions in the Computer and Communication Foundations Division and a Computer and Information Science and Engineering interdisciplinary competition. In IIS, Revaide was relied upon to initially dispatch proposals to program officers, to check panels for coherence, to find panels for orphan proposals, and to recommend reviewers for most panels. Some summary results and lessons learned included:

• Revaide greatly reduced the time required to form panels. In one competition, this was essentially completed in two weeks, compared to approximately six weeks for a smaller competition that didn't use Revaide.

• Revaide increased the pool of reviewers beyond those normally called upon by program officers. While some members of the community had been called upon repeatedly, others with similar expertise had been overlooked. In many cases, people who had not reviewed before agreed to review nearly immediately when asked, while those frequently called upon were more reluctant to serve another time.

• Revaide greatly reduced the amount of time needed to find reviewers for panels. One program officer reported it took a week rather than a month to finalize two panels.

• One program officer, after using Revaide, asked panelists to select which proposals they were most interested in reviewing. Frequently, the proposals most desired by the panelists were indeed the proposals that led to the reviewer's invitation.

• In one case, a program officer thought the reviewers suggested had expertise that wasn't relevant to the proposal. However, after the