Not long ago, LlamaIndex introduced a new feature called Workflow in one of its recent versions, providing event-driven and logic-decoupling capabilities for LLM applications.
In today's article, we'll take a deep dive into this feature through a hands-on mini-project, exploring what's new and what's still missing. Let's get started.
More and more LLM applications are shifting toward intelligent agent architectures, expecting LLMs to fulfill user requests by calling different APIs or through multiple iterative calls.
This shift, however, brings a problem: as agent applications make more API calls, program responses slow down and code logic becomes more complex.
A typical example is ReActAgent, which involves steps like Thought, Action, Observation, and Final Answer, requiring at least three LLM calls and one tool call. If loops are needed, there will be even more I/O calls.
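To make the cost of that loop concrete, here is a rough sketch of the ReAct cycle, not the actual LlamaIndex implementation; `fake_llm` and `search_tool` are hypothetical stand-ins for a real LLM and tool, and each iteration costs one LLM call plus one tool call per Action:

```python
def fake_llm(prompt: str) -> str:
    # Stub: a real agent would make a network call to an LLM here.
    if "Observation" not in prompt:
        return "Thought: I need more data.\nAction: search[llamaindex workflow]"
    return "Thought: I have enough.\nFinal Answer: Workflow decouples logic via events."

def search_tool(query: str) -> str:
    # Stub tool call; a real one would hit an API (another I/O round trip).
    return f"results for {query}"

def react_agent(question: str, max_steps: int = 3) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(prompt)  # one LLM call per loop iteration
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Parse the Action, run the tool, and feed the Observation back in.
        query = reply.split("Action: search[")[1].rstrip("]")
        prompt += f"\n{reply}\nObservation: {search_tool(query)}"
    return "no answer within step budget"
```

Even in this toy version, every extra Thought/Action round adds sequential I/O, which is exactly the latency and complexity problem that an event-driven design like Workflow aims to address.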