Session 4
This session will provide an overview of LLM agents and approaches to neuro-symbolic AI, bringing together the topics from the previous days.
Slides from the fourth lecture can be found here.
The discussion of the day will be on the paper by Yao et al. (2023). ReAct: Synergizing reasoning and acting in language models.
Exercise 4.1.: LLM agents & NeSy architectures
Do you think that LLM agents in the sense in which they were discussed today are agents (according to the definitions and properties from day 1)? Why (not)?
Do you think applications that are called LLM agents need to be agentic?
What speaks in favor, what speaks against calling LLM generations “reasoning”? Name two aspects.
Which tools do you think could be used with an LLM to build an LLM+tool-based cognitive architecture?
What is the difference between an LLM agent and a few-shot prompted (vanilla) LLM?
For which tasks do you think the ReAct prompting strategy would (not) work well? Name one example each.
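To make the last question more concrete, here is a minimal sketch of a ReAct-style reasoning-and-acting loop in Python. Everything in it is an assumption for illustration: `call_llm`, `run_tool`, and the Thought/Action/Observation parsing are trivial stubs rather than the actual setup from Yao et al. (2023); the sketch only shows how reasoning traces and tool calls alternate.

```python
# Minimal ReAct-style loop (conceptual sketch only: call_llm and the tool
# dispatch below are trivial stubs, not a real model or real tools).

def call_llm(prompt: str) -> str:
    # Stub: a real agent would send the ReAct prompt to an actual LLM here.
    return "The answer is straightforward. Action: Finish[42]"

def run_tool(name: str, arg: str) -> str:
    # Stub: a real agent would dispatch to real tools (search, calculator, ...).
    tools = {"Search": lambda q: f"(search results for {q!r})"}
    return tools.get(name, lambda q: "unknown tool")(arg)

def react_agent(question: str, max_steps: int = 5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript + "Thought:")       # model produces Thought + Action
        transcript += f"Thought: {step}\n"
        name, _, rest = step.partition("Action: ")[2].partition("[")
        arg = rest.rstrip("]")
        if name == "Finish":                           # model signals it is done
            return arg
        observation = run_tool(name, arg)              # execute the proposed action
        transcript += f"Observation: {observation}\n"  # feed the result back to the model
    return None

print(react_agent("What is 6 times 7?"))  # prints "42" with the stub above
```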
Exercise 4.2.: Conceptualizing an LLM agent
In the following, your task is to conceptualize the structure of a system (agent) designed for a specific task. Please describe the structure of your system in words, or pseudo-code, or a box-and-arrow diagram.
Your task is to write a step-by-step guide / blueprint for an LLM-based agent that will put all important appointments from your email into your Google Calendar, but will filter out spam appointments from emails from your former school. The agent should make sure there are no scheduling conflicts, and inform the user if there are conflicts.
Your agent can be equipped with the following tools: interface to your email (accessing new incoming emails, writing emails), standard LLM calls to a model of your choice which will follow your prompts, access to your calendar (read and write).
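If you prefer to start from pseudo-code, the skeleton below shows one possible way such a blueprint could be organized; it is not a complete answer, and every function in it is a hypothetical placeholder standing in for one of the allowed tools or an LLM call.

```python
# A very rough starting skeleton, not a full solution: every helper below is a
# hypothetical placeholder for one of the allowed tools or an LLM call.

def read_new_emails():
    return []  # placeholder for the email interface (read access)

def is_spam_from_former_school(email):
    return False  # placeholder for an LLM classification call

def extract_appointment(email):
    return None  # placeholder for an LLM extraction call (date, time, title)

def find_conflicts(appointment):
    return []  # placeholder for calendar read access

def add_to_calendar(appointment):
    pass  # placeholder for calendar write access

def notify_user(appointment, conflicts):
    pass  # placeholder for informing the user, e.g. via the email write interface

def process_new_emails():
    for email in read_new_emails():
        if is_spam_from_former_school(email):
            continue
        appointment = extract_appointment(email)
        if appointment is None:
            continue
        conflicts = find_conflicts(appointment)
        if conflicts:
            notify_user(appointment, conflicts)
        else:
            add_to_calendar(appointment)
```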
Exercise 4.3.: Using LangChain
For a basic example of how to use LangChain, see the exercise sheet here. Note that parts of the code involve using API keys for different LLMs, so it might be most useful as a conceptual starting point. There are exercises with open-source models accessed through HuggingFace, too. Disclaimer: since the sheet is from a few months ago and LangChain is under active development, some things might be deprecated.
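As a first orientation (separate from the exercise sheet), a minimal LangChain pipeline might look roughly like the sketch below. It assumes the langchain-core and langchain-openai packages, an OPENAI_API_KEY in the environment, and a model name you have access to; since the library changes quickly, the exact imports and class names may differ in your installed version.

```python
# Minimal LangChain sketch (assumes langchain-core, langchain-openai and an
# OPENAI_API_KEY; imports and class names may have moved in newer versions).
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # any chat model you have access to
prompt = ChatPromptTemplate.from_template(
    "Answer in one sentence: {question}"
)
chain = prompt | llm                                   # LangChain Expression Language pipeline
response = chain.invoke({"question": "What is an LLM agent?"})
print(response.content)
```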