
StarCoder 2 and The Stack v2: The Next Generation


Authors

Arjun Guha (Roblox + Northeastern University), Anton Lozhkov (HuggingFace), Raymond Li (ServiceNow), Loubna Ben Allal (HuggingFace), Federico Cassano (Northeastern University), Joel Lamy-Poirier (ServiceNow), Nouamane Tazi (HuggingFace), Ao Tang (Nvidia), Dmytro Pykhtar (Nvidia), Jiawei Liu (University of Illinois Urbana-Champaign), Yuxiang Wei (University of Illinois Urbana-Champaign), Tianyang Liu (UC San Diego), Max Tian (ServiceNow), Denis Kocetkov (ServiceNow), Arthur Zucker (HuggingFace), Younes Belkada (HuggingFace), Zijian Wang (Independent), Qian Liu (Sea AI Lab), Dmitry Abulkhanov (Independent), Indraneil Paul (Technical University of Darmstadt), Zhuang Li (Monash University), Wen-Ding Li (Cornell University), Megan Risdal (Kaggle), Jia Li (Independent), Jian Zhu (University of British Columbia), Terry Yue Zhuo (Monash University + CSIRO’s Data61), Evgenii Zheltonozhskii (Technion – Israel Institute of Technology), Nii Osae Osae Dade (Mazzuma), Wenhao Yu (University of Notre Dame), Lucas Krauß (Independent), Naman Jain (UC Berkeley), Yixuan Su (Cohere), Xuanli He (University College London), Manan Dey (Salesforce), Edoardo Abati (Independent), Yekun Chai (Baidu), Niklas Muennighoff (Contextual AI), Xiangru Tang (Yale University), Muhtasham Oblokulov (Technical University of Munich), Christopher Akiki (Leipzig University + ScaDS.AI), Marc Marone (Johns Hopkins University), Chenghao Mou (Independent), Mayank Mishra (IBM Research), Alex Gu (MIT), Binyuan Hui (Independent), Tri Dao (Princeton University), Armel Zebaze (HuggingFace), Olivier Dehaene (HuggingFace), Nicolas Patry (HuggingFace), Canwen Xu (UC San Diego), Julian McAuley (UC San Diego), Han Hu (Monash University), Torsten Scholak (ServiceNow), Sebastien Paquet (ServiceNow), Jennifer Robinson (ServiceNow), Carolyn Jane Anderson (Wellesley College), Nicolas Chapados (ServiceNow), Mostofa Patwary (Nvidia), Nima Tajbakhsh (Nvidia), Yacine Jernite (HuggingFace), Carlos Muñoz Ferrandis (HuggingFace), Lingming Zhang (University of Illinois Urbana-Champaign), Sean Hughes (ServiceNow), Thomas Wolf (HuggingFace), Leandro von Werra (HuggingFace), Harm de Vries (ServiceNow)

Abstract

The BigCode project, an open scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2-15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder-33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as on several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.
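
Because the weights are openly released, they can be loaded with standard open-source tooling. The snippet below is a minimal sketch of how one might try a released checkpoint with the Hugging Face Transformers library; the checkpoint name bigcode/starcoder2-3b and the prompt are illustrative assumptions, not an official example from the paper.

    # Minimal sketch (assumption: the 3B checkpoint is published as "bigcode/starcoder2-3b").
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "bigcode/starcoder2-3b"  # assumed Hub identifier for the released 3B weights
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    # Ask the model to complete a short code prompt.
    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))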