Mirella Lapata

Recent years have witnessed the rise of increasingly large and sophisticated language models (LLMs) capable of performing seemingly every task imaginable, sometimes at (super)human level. In this talk, I will argue that in many realistic scenarios relying solely on a single general-purpose LLM is suboptimal: a single model is likely to under-represent real-world data distributions, heterogeneous skills, and task-specific requirements. Instead, I will discuss multi-LLM collaboration as an alternative that enables compositional generative modeling, leading to more effective problem-solving while being more inclusive and explainable. I will focus on narrative story generation and demonstrate how it can be tackled by orchestrating a society of agents, each pursuing individual goals while collectively working toward the overall task objective. Additionally, I will explore how these agent societies leverage reasoning to improve performance.