When is Tree Search Useful for LLM Planning? It Depends on the Discriminator (2024)
Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun
In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method. We investigate the practical utility of two advanced planning methods, iterative correction and tree search. We present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that: (1) advanced planning methods demand discriminators with at least 90% accuracy to achieve significant improvements over re-ranking; (2) current LLMs’ discrimination abilities have not met the needs of advanced planning methods to achieve such improvements; (3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared to the other two methods, tree search is at least 10–20 times slower but leads to negligible performance gains, which hinders its real-world applications.
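As a rough illustration of the generator–discriminator–planning framework described above, the sketch below contrasts discriminator-driven re-ranking of complete candidates with a step-wise, beam-style tree search that calls the discriminator at every expansion. The function names, toy scoring, beam width, and depth are illustrative assumptions, not the paper's implementation; the point is only that tree search multiplies the number of discriminator calls, which is the efficiency gap the abstract highlights.

```python
import heapq
from typing import Callable, List

# Hypothetical types: a "candidate" is a list of plan steps (strings), and a
# discriminator maps a (possibly partial) candidate to a score in [0, 1].
Candidate = List[str]
Discriminator = Callable[[Candidate], float]


def rerank(candidates: List[Candidate], discriminator: Discriminator) -> Candidate:
    """Simplest planning method: score each complete candidate once, keep the best."""
    return max(candidates, key=discriminator)


def tree_search(
    expand: Callable[[Candidate], List[Candidate]],
    discriminator: Discriminator,
    max_depth: int = 3,
    beam_width: int = 2,
) -> Candidate:
    """Beam-style search: the discriminator prunes partial plans at every depth,
    so it is invoked far more often than in re-ranking."""
    beam: List[Candidate] = [[]]
    for _ in range(max_depth):
        frontier = [child for cand in beam for child in expand(cand)]
        if not frontier:
            break
        beam = heapq.nlargest(beam_width, frontier, key=discriminator)
    return max(beam, key=discriminator)


if __name__ == "__main__":
    # Toy setup: steps are letters; the (assumed) discriminator rewards plans containing "b".
    toy_discriminator = lambda cand: sum(step == "b" for step in cand) / (len(cand) or 1)
    toy_expand = lambda cand: [cand + [s] for s in ("a", "b")] if len(cand) < 3 else []

    print(rerank([["a", "a"], ["a", "b"]], toy_discriminator))  # ['a', 'b']
    print(tree_search(toy_expand, toy_discriminator))           # ['b', 'b', 'b']
```

In this toy form, an imperfect discriminator misleads tree search at every pruning step rather than only once at the end, which is one way to see why the paper finds high discrimination accuracy necessary before tree search pays off.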
View:
PDF, arXiv
Citation:
Association for Computational Linguistics (ACL), 2024.
Bibtex:
@inproceedings{chen:acl24,
  title     = {When is Tree Search Useful for LLM Planning? It Depends on the Discriminator},
  author    = {Ziru Chen and Michael White and Raymond Mooney and Ali Payani and Yu Su and Huan Sun},
  booktitle = {Association for Computational Linguistics (ACL)},
  year      = {2024}
}
Presentation:
Slides (PDF), Poster, Video