In the rapidly evolving landscape of artificial intelligence, Large Language Models (LLMs) have emerged as potent tools with an impressive aptitude for understanding and generating human-like text. Their relevance to automated planning is particularly noteworthy, given the similarity between planning tasks and code-related tasks, a forte of LLMs. Planning problems are typically specified in the Planning Domain Definition Language (PDDL), whose fully parenthesized, Lisp-like syntax resembles program code, making planning fertile ground for exploring the capabilities of LLMs in devising effective and efficient plans. This lab delves into the nuances of utilizing LLMs for planning, offering participants a comprehensive understanding of the techniques integral to these models. Participants will be introduced to supervised fine-tuning and a range of prompting techniques, fostering a critical analysis of which approaches most enhance planning capabilities. At the heart of this lab is a hands-on session where participants can work closely with “Plansformer”, our proprietary fine-tuned model developed explicitly for planning tasks. This session provides a comparative analysis of current state-of-the-art LLMs, including GPT-4, GPT-3.5, Bard, and Llama 2, offering insights into their respective strengths and weaknesses in planning. We will also briefly explain and demonstrate how neuro-symbolic approaches can complement LLMs by detecting and correcting their faulty plan generations.
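To make the Lisp analogy concrete, here is a minimal PDDL sketch of one action from the classic Blocks World domain (a standard textbook example, not taken from the lab materials); note the parenthesized, Lisp-like structure of predicates, preconditions, and effects:

    ; Blocks World fragment (STRIPS subset of PDDL) -- illustrative only
    (define (domain blocksworld)
      (:requirements :strips)
      (:predicates (on ?x ?y) (ontable ?x) (clear ?x)
                   (holding ?x) (arm-empty))
      ; pick-up: lift a clear block from the table with an empty arm
      (:action pick-up
        :parameters (?x)
        :precondition (and (clear ?x) (ontable ?x) (arm-empty))
        :effect (and (holding ?x)
                     (not (ontable ?x)) (not (clear ?x)) (not (arm-empty)))))

A plan in such a domain is a sequence of grounded actions, e.g. (pick-up b1) followed by (stack b1 b2), which is structurally close to the code that LLMs are adept at generating.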