If ChatGPT can produce research papers that are indistinguishable from what most scientists can write, then maybe scientists can focus on actually advancing science—something that ChatGPT has thus far proven unable to do.
— Daniel Lemire, "Beyond papers: rethinking science in the era of artificial intelligence"
Looking at the proceedings of our conferences over the past few years, I find that most of the papers are simply uninteresting. Moreover, it seems that every first-year PhD student is now required to write a systematic review on their topic — supposedly to learn about the field while producing a publication.
Let me be blunt: every systematic review I’ve read has felt like a waste of time. I want to read opinionated reviews written by experts — people who have seen enough to have perspective — not by PhD students who have just skimmed the past decade of papers on Google Scholar.
We need far fewer papers (I’m doing my best to contribute to that cause), and the ones we do publish should be bold, revolutionary, and even a little irreverent. We need innovation and the courage to break expectations. Incremental research has its place, but that doesn’t mean it always needs to be published.
To make this possible, evaluation committees, at both the national and university level, must rethink their processes and move away from bean-counting metrics. Our current incentive system discourages genuine peer review, and even when proper reviews do happen, they often expend effort on work that adds little value.
Otherwise, yes: the bean-counting-reinforcement-learning AIs will take our jobs.