Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL 2021)
How is BERT surprised? Layerwise detection of linguistic anomalies
This study investigates how the BERT language model identifies linguistic anomalies, i.e., sentences that are grammatically or semantically strange. The authors find that BERT's ability to detect these anomalies varies across its neural layers, with middle layers often performing best at capturing specific structural inconsistencies.
Bai Li, Zining Zhu, Guillaume Thomas, Frank Rudzicz, and Yang Xu