LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning.
Zifan Xu, Haozhu Wang, Dmitriy Bespalov, Xian Wu, Peter Stone, and Yanjun Qi.
In Findings of Empirical Methods in Natural Language Processing, November 2024.
Chain-of-thought (CoT) prompting is a popular in-context learning (ICL) approach for large language models (LLMs), especially when tackling complex reasoning tasks. Traditional ICL approaches construct prompts using examples that contain questions similar to the input question. However, CoT prompting, which includes crucial intermediate reasoning steps (rationales) within its examples, necessitates selecting examples based on these rationales rather than the questions themselves. Existing methods require human experts or pre-trained LLMs to describe the skill, a high-level abstraction of rationales, to guide the selection. These methods, however, are often costly and difficult to scale. Instead, this paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales, with a latent variable called a reasoning skill. Concurrently, LaRS learns a reasoning policy to determine the required reasoning skill for a given question. Then the ICL examples are selected by aligning the reasoning skills between past examples and the question. This approach is theoretically grounded and compute-efficient, eliminating the need for auxiliary LLM inference or manual prompt design. Empirical results demonstrate that LaRS consistently outperforms SOTA skill-based selection methods, processing example banks four times faster, reducing LLM inferences during the selection stage by half, and showing greater robustness to sub-optimal example banks.
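The abstract describes a selection pipeline: encode each bank example's rationale into a latent "reasoning skill", infer the skill a new question needs via a reasoning policy, then pick the examples whose skills align best with it. The sketch below illustrates only that selection step under stated assumptions; it is not the authors' code. The encoder and policy here are toy, deterministic stand-ins (the paper learns both via unsupervised training), and all function and variable names are illustrative.

    # Minimal sketch of LaRS-style example selection (illustrative, not the paper's implementation).
    # Assumptions: encode_rationale stands in for the learned latent skill encoder,
    # and infer_skill stands in for the learned reasoning policy.
    import hashlib
    import numpy as np

    def _toy_embed(text: str, dim: int = 64) -> np.ndarray:
        # Deterministic toy embedding so the sketch runs without a trained model.
        seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
        v = np.random.default_rng(seed).standard_normal(dim)
        return v / np.linalg.norm(v)

    def encode_rationale(rationale: str) -> np.ndarray:
        # Stand-in for the unsupervised encoder mapping a rationale to a
        # latent "reasoning skill" vector.
        return _toy_embed(rationale)

    def infer_skill(question: str) -> np.ndarray:
        # Stand-in for the reasoning policy predicting the skill a question requires.
        return _toy_embed(question)

    def select_examples(question: str, example_bank: list[dict], k: int = 2) -> list[dict]:
        # Rank bank examples by cosine similarity between the question's inferred
        # skill and each example's rationale skill (vectors are unit-norm, so dot = cosine).
        q_skill = infer_skill(question)
        skills = np.stack([encode_rationale(ex["rationale"]) for ex in example_bank])
        scores = skills @ q_skill
        top = np.argsort(-scores)[:k]
        return [example_bank[i] for i in top]

    if __name__ == "__main__":
        bank = [
            {"question": "What is 17 * 24?", "rationale": "Multiply 17 by 20, then by 4, and add."},
            {"question": "Is 91 prime?", "rationale": "Check divisibility by primes up to sqrt(91)."},
            {"question": "How many legs do 5 spiders have?", "rationale": "Each spider has 8 legs; multiply by 5."},
        ]
        for ex in select_examples("What is 36 * 12?", bank, k=2):
            print(ex["question"])

Because the encoder and policy are trained once offline, selection reduces to a nearest-neighbor search in the latent skill space, which is consistent with the abstract's claim of fewer LLM inferences during the selection stage.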
@InProceedings{zifan_xu_emnlp2024,
  author    = {Zifan Xu and Haozhu Wang and Dmitriy Bespalov and Xian Wu and Peter Stone and Yanjun Qi},
  title     = {LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning},
  booktitle = {Findings of Empirical Methods in Natural Language Processing},
  year      = {2024},
  month     = {November},
  location  = {Miami, Florida},
  abstract  = {Chain-of-thought (CoT) prompting is a popular in-context learning (ICL) approach for large language models (LLMs), especially when tackling complex reasoning tasks. Traditional ICL approaches construct prompts using examples that contain questions similar to the input question. However, CoT prompting, which includes crucial intermediate reasoning steps (rationales) within its examples, necessitates selecting examples based on these rationales rather than the questions themselves. Existing methods require human experts or pre-trained LLMs to describe the skill, a high-level abstraction of rationales, to guide the selection. These methods, however, are often costly and difficult to scale. Instead, this paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales, with a latent variable called a reasoning skill. Concurrently, LaRS learns a reasoning policy to determine the required reasoning skill for a given question. Then the ICL examples are selected by aligning the reasoning skills between past examples and the question. This approach is theoretically grounded and compute-efficient, eliminating the need for auxiliary LLM inference or manual prompt design. Empirical results demonstrate that LaRS consistently outperforms SOTA skill-based selection methods, processing example banks four times faster, reducing LLM inferences during the selection stage by half, and showing greater robustness to sub-optimal example banks.},
}