The field of artificial intelligence (which includes machine learning, computer vision, natural language understanding, etc.) is going through an unprecedented phase of interest and enthusiasm. This is evident in the number of articles written in popular media, the interest of non-experts in the field, the growth in attendance at AI conferences, and the rise in the number of articles submitted to the top conferences. The latter has made it challenging to ensure high-quality reviewing at these conferences.
AI, like other scientific disciplines, relies on a double-blind peer-review process to ensure that quality papers are added to the literature, although alternatives exist, such as arXiv, where anyone can upload their paper. The importance of peer review is evident from the observation that every year several papers appear on the maths arXiv claiming to have solved the Riemann Hypothesis (a famous unsolved problem). Almost all of these papers have glaring mistakes. I picked a maths example because an error in a proof is especially glaring; however, arXiv CS papers can likewise contain errors in proofs or experimental protocols, or raise ethical concerns, that might be caught during peer review.
However, the multiplicative increase in submissions at AI conferences has put this peer-review process under stress. Various methods have been proposed to ensure that only quality submissions reach reviewers, including: preventing authors from posting on arXiv in the month before the conference deadline (to discourage flag-planting papers), adopting an open reviewing policy (to discourage resubmission without revision), and some interesting proposals that place a limit on the number of papers an author can submit every year.
Anecdotally, it appears to me that a large share of the increase in submissions is driven by papers that demonstrate a novel application of an existing technology, or an important but not novel modification of an existing technology. These are perfectly valid and important contributions; however, it might be prudent to publish such papers separately. For example, we could create an applied research/demonstration conference where such papers are published. This conference could be held more frequently to accommodate the high submission volume, and could have a more tailored reviewing process (such as a focus on code review and the application itself). The existing research conferences could then focus on more fundamental research questions. This has several advantages:
- These papers will not face heat from NeurIPS/ICML reviewers who demand technical novelty. A paper can have a genuine application without being technically novel, but the current conference system does not do justice to the distinction between these two types of contributions.
- It encourages engineering contributions, such as hyperparameter tuning, to be published as such. The current conference model inevitably pressures authors to come up with a technical story to sell their paper to reviewers. Often, these papers are forced to pick or add a story when the biggest empirical gain might actually be coming from hyperparameter tuning. Alternatively, such results could be published as short applied papers that claim hyperparameter tuning as their main contribution. For example, suppose someone downloads the state-of-the-art RL algorithm and demonstrates a better result by adding another layer to the model. Such a result should be communicated, but it would be better suited as a short applied AI conference paper than packaged as an 8-page conference paper.
Similarly, if a paper claims to have found a better choice of hyperparameters for an important problem, then this result should be communicated. However, it should be a two-paragraph article in an applied AI magazine instead of being packaged as an 8-page conference paper.
- The load on existing conferences will be reduced. These mainstream conferences can focus on the bigger picture, such as understanding why existing neural network methods work, creating better optimization algorithms, or, in general, creating algorithms that are fundamentally novel rather than minor modifications of existing ones.
- The audience at an applied AI conference might be better suited to these papers than the general NeurIPS/ICML/ICLR audience. For example, an engineer working in the field of machine translation might readily benefit from the knowledge of improved hyperparameter tuning, or from an observation that curriculum learning is helpful. They would not have to extract this detail from the appendix of an 8-page paper whose main story lies elsewhere.
A significant fraction of AI submissions are applied research papers of the type described above. The current conference cycle does not do justice to these results: it demands technical novelty, which creates a perverse incentive system that pushes authors to invent a story to package their results. Moving towards applied conferences/workshops/magazines where such results are appreciated would both decrease the load on existing conferences and provide the right audience and appreciation for these applied AI papers.