LLM Agent: prevent tool-output leakage; nicer responses (no Tool Result / no JSON dumps)

2025-12-17 14:38:49 +01:00
parent 01d6d24e59
commit fed5b15378


@@ -39,12 +39,17 @@ def _build_system_prompt(path: str) -> str:
     return (
         "You are the Pounce Hunter Companion (domain trading expert). Always respond in English.\n"
         "You have access to internal tools that return live data. Use tools when needed.\n\n"
+        "OUTPUT STYLE:\n"
+        "- Never show raw tool output to the user.\n"
+        "- Never print phrases like 'Tool Result', 'TOOL_RESULT', or code-fenced JSON.\n"
+        "- If you used tools, silently incorporate the data and present ONLY a clean summary.\n"
+        "- Keep formatting simple: short paragraphs and bullet points. Avoid dumping structured data.\n\n"
         "TOOL CALLING PROTOCOL:\n"
         "- If you need data, respond with ONLY a JSON object:\n"
         '  {"tool_calls":[{"name":"tool_name","args":{...}}, ...]}\n'
         "- Do not include any other text when requesting tools.\n"
         "- After tools are executed, you will receive TOOL_RESULT messages.\n"
-        "- When you are ready to answer the user, respond normally (not JSON).\n\n"
+        "- When you are ready to answer the user, respond normally (not JSON) and do NOT mention tools.\n\n"
         "AVAILABLE TOOLS (JSON schemas):\n"
         f"{json.dumps(tools, ensure_ascii=False)}\n\n"
         "RULES:\n"
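
The protocol above asks the model to emit either a bare `{"tool_calls": [...]}` JSON object or a normal answer, so the agent loop needs to tell the two apart. A minimal sketch of such a dispatcher (the helper name and fence-tolerance are illustrative assumptions, not part of the commit):

```python
import json

def parse_tool_calls(reply: str):
    """Return the requested tool calls if the model's reply is a bare
    tool-request JSON object, else None (the reply is a normal answer).
    Hypothetical helper; names are illustrative, not from the commit."""
    text = reply.strip()
    # Tolerate a code fence the model may wrap around the JSON anyway.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
        text = text.strip()
    if not text.startswith("{"):
        return None
    try:
        obj = json.loads(text)
    except json.JSONDecodeError:
        return None
    calls = obj.get("tool_calls")
    if isinstance(calls, list) and all(isinstance(c, dict) and "name" in c for c in calls):
        return calls
    return None
```

Anything that fails to parse as a well-formed tool request is treated as a final answer, which keeps a slightly malformed reply from crashing the loop.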
@@ -132,7 +137,10 @@ async def run_agent(
        convo.append(
            {
                "role": "system",
-                "content": f"TOOL_RESULT name={name} json={_truncate_json(result)}",
+                "content": (
+                    f"TOOL_RESULT_INTERNAL name={name} json={_truncate_json(result)}. "
+                    "This is internal context. Do NOT quote or display this to the user."
+                ),
            }
        )
@@ -153,7 +161,13 @@ async def stream_final_answer(convo: list[dict[str, Any]], *, model: Optional[st
        + [
            {
                "role": "system",
-                "content": "Final step: respond to the user. Do NOT output JSON tool_calls. Do NOT request tools.",
+                "content": (
+                    "Final step: respond to the user.\n"
+                    "- Do NOT output JSON tool_calls.\n"
+                    "- Do NOT request tools.\n"
+                    "- Do NOT include raw tool outputs, internal tags, or code-fenced JSON.\n"
+                    "- If you used tools, present only a clean human summary."
+                ),
            }
        ],
        "temperature": temperature,
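
Prompt instructions alone cannot guarantee the model obeys; a cheap belt-and-braces check on the streamed answer can catch leakage before it reaches the user. A heuristic sketch (illustrative, not part of the commit; the markers mirror the tags this change introduces):

```python
import re

LEAK_MARKERS = ("TOOL_RESULT", "TOOL_RESULT_INTERNAL", "tool_calls")

def looks_leaky(answer: str) -> bool:
    """Heuristic check that a final answer does not echo internal
    tags or a JSON tool request (assumed guard, not from the commit)."""
    if any(marker in answer for marker in LEAK_MARKERS):
        return True
    # Code-fenced JSON dumps are also disallowed by the prompt.
    if re.search(r"```(?:json)?\s*[\[{]", answer):
        return True
    return False
```

A leaky answer could then be retried or post-processed, depending on how strict the product needs to be.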