[Submitted on 22 Nov 2022 (v1), last revised 23 Oct 2023 (this version, v4)]
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks, by Wenhu Chen and 3 other authors
Abstract: Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thought prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-step `thought' process. To disentangle computation from reasoning, we propose `Program of Thoughts' (PoT), which uses language models (mainly Codex) to express the reasoning process as a program. The computation is relegated to an external computer, which executes the generated programs to derive the answer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP, TabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA) in both few-shot and zero-shot setups. Under both few-shot and zero-shot settings, PoT shows an average performance gain over CoT of around 12% across all the evaluated datasets. By combining PoT with self-consistency decoding, we achieve SoTA performance on all math problem datasets and near-SoTA performance on financial datasets. All of our data and code are released on GitHub: this https URL
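The abstract's core idea can be sketched in a few lines of Python: the language model emits a short program rather than a free-form chain of thought, and the computation is handed off to an interpreter. The `generated_program` string below is a hypothetical model output for a toy word problem, and the convention of binding the result to a variable named `ans` is an assumption for illustration, not necessarily the paper's exact protocol.

```python
# Minimal sketch of the Program-of-Thoughts idea: the language model
# expresses its reasoning as a program, and an external interpreter
# (not the model) performs the actual computation.

# Hypothetical model output for a toy math word problem.
generated_program = """
# Roger starts with 5 tennis balls and buys 2 cans of 3 balls each.
initial_balls = 5
cans = 2
balls_per_can = 3
ans = initial_balls + cans * balls_per_can
"""

def execute_program(program: str):
    """Run a model-generated program in a fresh namespace and return
    the value bound to `ans` (our assumed answer variable)."""
    namespace = {}
    exec(program, namespace)  # computation delegated to the Python interpreter
    return namespace.get("ans")

print(execute_program(generated_program))  # prints 11
```

The split matters because language models frequently make arithmetic slips inside a textual chain of thought; delegating the arithmetic to an interpreter removes that failure mode while keeping the model responsible for the reasoning structure.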
From: Wenhu Chen
[v1] Tue, 22 Nov 2022 21:06:00 UTC (8,689 KB)
[v2] Fri, 25 Nov 2022 01:49:50 UTC (8,689 KB)
[v3] Tue, 29 Nov 2022 03:46:29 UTC (8,689 KB)
[v4] Mon, 23 Oct 2023 01:27:38 UTC (4,047 KB)