An LLM can handle basic routing. Semantic search handles personal information better. Which one would you choose?
A single prompt can't handle everything, and a single data source won't fit all of your data.
Here's something you often see in production but not in demos:
You need more than one data source to retrieve information from: more than one vector store, a graph DB, or even an SQL database. And you need different prompts to handle different tasks, too.
In that case, we have a problem. Given unstructured, often ambiguous, and poorly formatted user input, how do we decide which database to retrieve data from?
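One common pattern is to let a lightweight LLM call act as the router: describe each data source in a system prompt and ask the model to pick exactly one. Here's a minimal sketch assuming the OpenAI Python SDK; the route names, descriptions, and model are illustrative placeholders you'd swap for your own setup.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and OPENAI_API_KEY are set up

client = OpenAI()

# Illustrative route names and descriptions -- adapt these to your own sources.
ROUTES = {
    "vector_store": "semantic questions over unstructured documents",
    "graph_db": "relationship or path questions, e.g. routes between places",
    "sql_db": "structured lookups such as bookings, prices, or availability",
}

def route_query(user_query: str) -> str:
    """Ask the LLM to pick the single best data source for a query."""
    options = "\n".join(f"- {name}: {desc}" for name, desc in ROUTES.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the user query into exactly one data source. "
                        f"Reply with only the source name.\n{options}"},
            {"role": "user", "content": user_query},
        ],
        temperature=0,
    )
    choice = response.choices[0].message.content.strip()
    # Fall back to the vector store if the model replies with something unexpected.
    return choice if choice in ROUTES else "vector_store"

print(route_query("What is the fastest route between my five stops in Rome?"))
# Expected to print something like: graph_db
```

A single cheap classification call like this keeps the routing logic in one place, and you can always replace it with a fine-tuned classifier or embedding-based matching later without touching the downstream retrieval code.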
If, for some reason, you still think it's easy, here's an example.
Suppose you have a tour-guiding chatbot, and a traveler asks for an optimal travel schedule between five locations. Letting the LLM answer directly may lead to hallucination, as LLMs aren't good at location-based calculations.
Instead, if you store this information in a graph database, the LLM can generate a query to fetch the shortest travel path between the points…
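For instance, with the locations stored in Neo4j, the query the LLM generates might look something like the Cypher below. This is a hand-written sketch against an assumed schema (a `Place` node label and a `CONNECTS_TO` relationship), using hop-count shortest path; a production setup would more likely weight edges by travel time.

```python
from neo4j import GraphDatabase  # assumes the official Neo4j Python driver

# Connection details, node label, and relationship type are assumptions about
# how the travel data might be modeled, not a prescribed schema.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (a:Place {name: $start}), (b:Place {name: $end}),
      p = shortestPath((a)-[:CONNECTS_TO*..15]-(b))
RETURN [n IN nodes(p) | n.name] AS stops
"""

def shortest_route(start: str, end: str) -> list[str]:
    """Fetch the shortest chain of places between two stops from the graph."""
    with driver.session() as session:
        record = session.run(CYPHER, start=start, end=end).single()
        return record["stops"] if record else []

print(shortest_route("Colosseum", "Vatican Museums"))
```

The LLM's job then shrinks to producing (or parameterizing) a query like this, while the graph engine does the actual path-finding, which is exactly the kind of calculation the model would otherwise hallucinate.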