The Surprising Performance of Simple Baselines for Misinformation Detection
We examine the performance of a broad set of modern transformer-based language models and show that with basic fine-tuning, these models are competitive with and can even significantly outperform recently proposed state-of-the-art methods.
Pelrine, Kellin, Jacob Danovitch, and Reihaneh Rabbany. "The Surprising Performance of Simple Baselines for Misinformation Detection." arXiv preprint arXiv:2104.06952 (2021). https://arxiv.org/abs/2104.06952
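To make the "basic fine-tuning" baseline concrete, here is a minimal sketch of fine-tuning a pretrained transformer for binary misinformation classification using Hugging Face Transformers and Datasets. The checkpoint name, CSV file paths, column names, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal fine-tuning sketch (assumptions: roberta-base checkpoint, a CSV
# dataset with "text" and "label" columns, and generic hyperparameters).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "roberta-base"  # assumed checkpoint; the paper compares several models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Placeholder data files; any corpus of statements labeled true/false works here.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    # Convert raw text to token IDs with truncation/padding to a fixed length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="misinfo-baseline",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
print(trainer.evaluate())
```

The point of the sketch is that the baseline requires no task-specific architecture: a stock sequence-classification head on top of a pretrained encoder, trained with default settings, is the kind of simple setup the paper reports as competitive with specialized methods.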