A Conversational Paradigm for Program Synthesis
Created on March 29 | Last edited on March 29

Program synthesis aims to generate a computer program that satisfies a given problem specification. In a new study, the authors propose a conversational program synthesis approach based on large language models, which addresses two challenges faced by prior approaches: searching over a vast program space and capturing user intent.
The authors train a family of large language models, called CodeGen, on both natural language and programming language data. With only weak supervision in the data, and by scaling up data size and model size, conversational capabilities emerge from simple autoregressive language modeling.
To study model behavior in this setting, the authors also develop the Multi-Turn Programming Benchmark (MTPB), in which solving each problem requires multi-step synthesis through a multi-turn conversation between the user and the model.
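The multi-turn setup can be sketched as follows: each user turn (a natural-language instruction, given here as a comment) is appended to a growing prompt, the model completes the next step of the program, and the completion is fed back in before the next turn. The `generate` function below is a hypothetical stand-in for a real model call (e.g. sampling from CodeGen), not the authors' implementation.

```python
def generate(prompt: str) -> str:
    """Stand-in for a language model: returns a canned completion
    for the most recent user turn. A real system would sample from
    an autoregressive model such as CodeGen here."""
    canned = {
        "# Define a list of the first five squares.": "squares = [i * i for i in range(1, 6)]",
        "# Print their sum.": "print(sum(squares))",
    }
    last_turn = prompt.rstrip().splitlines()[-1]
    return canned.get(last_turn, "pass")

def multi_turn_synthesis(turns: list[str]) -> str:
    """Accumulate (user turn, model completion) pairs into one program."""
    prompt = ""
    for turn in turns:
        prompt += turn + "\n"               # user intent as a comment
        prompt += generate(prompt) + "\n"   # model completes this step
    return prompt

program = multi_turn_synthesis([
    "# Define a list of the first five squares.",
    "# Print their sum.",
])
print(program)
```

The key point is that later turns condition on both the earlier instructions and the code generated so far, which is what distinguishes conversational synthesis from single-shot prompting.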
Findings
The findings demonstrate the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, CodeGen (up to 16B parameters, trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark.

Tags: ML News