UTCS Artificial Intelligence
When is Tree Search Useful for LLM Planning? It Depends on the Discriminator (2024)
Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun
In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method. We investigate the practical utility of two advanced planning methods, iterative correction and tree search, and present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that: (1) advanced planning methods demand discriminators with at least 90% accuracy to achieve significant improvements over re-ranking; (2) current LLMs’ discrimination abilities have not met the needs of advanced planning methods to achieve such improvements; (3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared with the other two methods, tree search is at least 10–20 times slower yet yields negligible performance gains, which hinders its real-world application.
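The re-ranking baseline compared above can be sketched in a few lines: a generator proposes candidate solutions and a discriminator scores each one, with the top-scoring candidate returned. The candidate list and scoring function below are hypothetical stand-ins for illustration, not the paper's implementation (where the discriminator is itself an LLM).

```python
def rerank(candidates, discriminator):
    """Return the candidate with the highest discriminator score."""
    return max(candidates, key=discriminator)

# Toy example: candidate SQL parses scored by a stand-in discriminator
# that prefers shorter queries; a real discriminator would be an
# LLM-based scorer of candidate correctness.
candidates = [
    "SELECT name FROM users WHERE age > 30",
    "SELECT name FROM users WHERE users.age > 30 AND 1 = 1",
]
best = rerank(candidates, discriminator=lambda sql: -len(sql))
print(best)
```

As the paper's analysis suggests, this simple strategy is hard to beat: iterative correction and tree search only pay off once the discriminator's accuracy is high enough.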
View: PDF, arXiv
Citation: Association for Computational Linguistics (ACL) (2024).
Bibtex:
@inproceedings{chen:acl24,
  title={When is Tree Search Useful for LLM Planning? It Depends on the Discriminator},
  author={Ziru Chen and Michael White and Raymond Mooney and Ali Payani and Yu Su and Huan Sun},
  booktitle={Association for Computational Linguistics (ACL)},
  month={August},
  year={2024},
  url={http://www.cs.utexas.edu/users/ai-labpub-view.php?PubID=128060}
}
Presentation: Slides (PDF), Poster, Video
People
Raymond J. Mooney
Faculty
mooney [at] cs utexas edu
Areas of Interest
Learning for Semantic Parsing
Natural Language Processing
Labs
Machine Learning