LLM Refinement Layer
After executing the SQL query and retrieving the raw results, Billx-Agent can optionally send the data back to the LLM for natural language summarization. This helps convert technical output into user-friendly answers.
🎯 What It Does
Takes the raw SQL output (rows + columns)
Rephrases it into a plain-English summary
Optionally applies context, units, or comparisons
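A minimal sketch of this step, assuming a hypothetical `refine_result` helper and a stubbed `call_llm` function (the real layer calls Gemini; all names here are illustrative, not Billx-Agent's actual API):

```python
import json

def call_llm(llm_prompt: str) -> str:
    """Stub standing in for the real Gemini call."""
    return f"(summary of: {llm_prompt.splitlines()[0]})"

def refine_result(prompt: str, columns: list[str], rows: list[dict]) -> str:
    """Turn raw SQL output (rows + columns) into a plain-English answer."""
    llm_prompt = (
        f"User question: {prompt}\n"
        f"Columns: {', '.join(columns)}\n"
        f"Rows: {json.dumps(rows)}\n"
        "Answer in one friendly sentence, adding units where obvious."
    )
    return call_llm(llm_prompt)

print(refine_result("Total Q1 sales?", ["sum"], [{"sum": 24320}]))
```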
💡 Why It’s Useful
Raw SQL results like this:
[{ "sum": 24320 }]
…can be refined to:
"Total sales for Q1 were $24,320"
This is especially helpful when:
Embedding results in dashboards or reports
Reading the answer aloud in /audio-chat
Serving non-technical users
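The sales example above comes down to ordinary string formatting once the LLM decides on units and phrasing; this snippet just reproduces the refined sentence from the raw row to make the transformation concrete:

```python
raw = [{"sum": 24320}]           # raw SQL output, as in the example
total = raw[0]["sum"]
refined = f"Total sales for Q1 were ${total:,}"
print(refined)  # → Total sales for Q1 were $24,320
```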
🔁 When Is It Triggered?
Automatically enabled on /chat and /audio-chat
Powered by the same Gemini LLM engine used for SQL generation
Uses the original prompt + result to generate a final answer
📋 Example Refinement Call
Input to LLM:
{
"prompt": "How many orders were placed last month?",
"result": [{ "count": 218 }]
}
LLM Output:
"There were 218 orders placed in the last month."
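The call above can be assembled along these lines. The payload field names follow the example, but the instruction template is a sketch of how a prompt and result might be folded together, not Billx-Agent's actual client code:

```python
import json

payload = {
    "prompt": "How many orders were placed last month?",
    "result": [{"count": 218}],
}

# Fold the original question and the raw result into one LLM instruction.
instruction = (
    "Rephrase this SQL result as a natural-language answer.\n"
    f"Question: {payload['prompt']}\n"
    f"Result: {json.dumps(payload['result'])}"
)
print(instruction)
```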
🧠 Prompt + Result Fusion
The LLM receives:
The original prompt
The SQL query
The raw result
Optionally: the tool name and database context
This results in responses that are accurate, conversational, and safe.
🧑🏫 You can disable refinement if you want pure SQL results only (e.g., for analytics dashboards).