Do Llamas Work in English? On the Latent Language of Multilingual Transformers. C. Wendler, V. Veselovsky, G. Monea, R. West. arXiv preprint arXiv:2402.10588, 2024. Cited by 44.
PaSS: Parallel Speculative Sampling. G. Monea, A. Joulin, E. Grave. arXiv preprint arXiv:2311.13581, 2023. Cited by 26.
A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia. G. Monea, M. Peyrard, M. Josifoski, V. Chaudhary, J. Eisner, E. Kıcıman, et al. arXiv preprint arXiv:2312.02073, 2023. Cited by 9.
How Do Llamas Process Multilingual Text? A Latent Exploration Through Activation Patching. C. Dumas, V. Veselovsky, G. Monea, R. West, C. Wendler. ICML 2024 Workshop on Mechanistic Interpretability, 2024. Cited by 2.
LLMs Are In-Context Reinforcement Learners. G. Monea, A. Bosselut, K. Brantley, Y. Artzi. arXiv preprint arXiv:2410.05362, 2024. Cited by 1.