Hunter Companion: natural greetings + no invented tasks; lower temperature

2025-12-17 14:41:17 +01:00
parent fed5b15378
commit aab2a0c3ad
2 changed files with 52 additions and 2 deletions
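
In short: the backend agent now detects bare greetings and answers conversationally instead of entering the tool loop, the system prompt forbids inventing user preferences or tasks, and the frontend companion samples at a lower temperature. A quick doctest-style illustration of how the new _is_greeting helper behaves (the helper is real; the calls below are illustrative and not part of the diff):

>>> _is_greeting("hi")                        # exact match in the greetings set
True
>>> _is_greeting("Hey!")                      # short message, trailing punctuation stripped
True
>>> _is_greeting("good morning")              # multi-word greetings are listed explicitly
True
>>> _is_greeting("find expiring .io names")   # a real request falls through to the agent
False
>>> _is_greeting("")                          # empty or whitespace-only input
False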

View File

@@ -15,6 +15,32 @@ from app.services.llm_tools import execute_tool, tool_catalog_for_prompt
settings = get_settings()


def _is_greeting(text: str) -> bool:
    t = (text or "").strip().lower()
    if not t:
        return False
    # common minimal greetings
    greetings = {
        "hi",
        "hello",
        "hey",
        "yo",
        "sup",
        "hola",
        "hallo",
        "guten tag",
        "good morning",
        "good evening",
        "good afternoon",
    }
    if t in greetings:
        return True
    # very short greeting-like messages
    if len(t) <= 6 and t.replace("!", "").replace(".", "") in greetings:
        return True
    return False


def _tier_level(tier: str) -> int:
    t = (tier or "").lower()
    if t == "tycoon":
@@ -44,6 +70,13 @@ def _build_system_prompt(path: str) -> str:
        "- Never print phrases like 'Tool Result', 'TOOL_RESULT', or code-fenced JSON.\n"
        "- If you used tools, silently incorporate the data and present ONLY a clean summary.\n"
        "- Keep formatting simple: short paragraphs and bullet points. Avoid dumping structured data.\n\n"
        "BEHAVIOR:\n"
        "- Do NOT invent user preferences, keywords, TLDs, budgets, or tasks.\n"
        "- If the user greets you or sends a minimal message (e.g., 'hi', 'hello'), respond naturally and ask what they want help with.\n"
        "- Ask 1-2 clarifying questions when the user request is ambiguous.\n\n"
        "WHEN TO USE TOOLS:\n"
        "- Use tools only when the user explicitly asks about their account data, their current page, their lists (watchlist/portfolio/listings/inbox/yield), or when they provide a specific domain to analyze.\n"
        "- Never proactively 'fetch' domains or run scans based on guessed keywords.\n\n"
        "TOOL CALLING PROTOCOL:\n"
        "- If you need data, respond with ONLY a JSON object:\n"
        ' {"tool_calls":[{"name":"tool_name","args":{...}}, ...]}\n'
@@ -54,7 +87,7 @@ def _build_system_prompt(path: str) -> str:
        f"{json.dumps(tools, ensure_ascii=False)}\n\n"
        "RULES:\n"
        "- Never claim you checked external sources unless the user provided the data.\n"
-       "- Keep answers practical and decisive. If domain-related: include BUY/CONSIDER/SKIP + bullets.\n"
+       "- Keep answers practical and decisive. If (and only if) the user is asking about a specific domain: include BUY/CONSIDER/SKIP + bullets.\n"
    )
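
The TOOL CALLING PROTOCOL above has the model answer with a bare JSON object whenever it wants data. A minimal sketch of how such a reply could be detected and routed to the existing execute_tool, assuming a hypothetical parse_tool_calls helper that is not part of this commit (the tool name in the comment is likewise illustrative):

import json

def parse_tool_calls(reply: str):
    # Hypothetical helper, not in this diff: returns [(name, args), ...] when the
    # reply follows the tool-calling protocol, otherwise None (plain-text answer).
    try:
        data = json.loads(reply.strip())
    except json.JSONDecodeError:
        return None
    calls = data.get("tool_calls") if isinstance(data, dict) else None
    if not isinstance(calls, list):
        return None
    return [(c.get("name"), c.get("args") or {}) for c in calls if isinstance(c, dict)]

# parse_tool_calls('{"tool_calls":[{"name":"get_watchlist","args":{}}]}')
# -> [("get_watchlist", {})], which would then be dispatched via execute_tool.
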
@@ -111,6 +144,23 @@ async def run_agent(
    ]
    convo = base + (messages or [])

    # If the user just greets, answer naturally without tool-looping.
    last_user = next((m for m in reversed(messages or []) if m.get("role") == "user"), None)
    if last_user and _is_greeting(str(last_user.get("content") or "")):
        convo.append(
            {
                "role": "assistant",
                "content": (
                    "Hey — how can I help?\n\n"
                    "If you want, tell me:\n"
                    "- which Terminal page you're on (or what you're trying to do)\n"
                    "- or a specific domain you're considering\n"
                    "- or what outcome you want (find deals, assess a name, manage leads, etc.)"
                ),
            }
        )
        return convo

    for _ in range(max_steps):
        payload = {
            "model": model or settings.llm_default_model,

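With the new branch in run_agent, a bare greeting never reaches the tool loop: the canned assistant reply is appended and the conversation is returned immediately, without building a model payload. A rough usage sketch, assuming run_agent can be called with just a messages list (the full signature is not shown in this diff):

import asyncio

async def main():
    # Assumed call shape, for illustration only; model/max_steps keep their defaults.
    convo = await run_agent(messages=[{"role": "user", "content": "hey"}])
    # The last entry is the canned assistant greeting; no tool or LLM round-trips occur.
    print(convo[-1]["content"])

asyncio.run(main())
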
View File

@@ -192,7 +192,7 @@ export function HunterCompanion() {
    try {
      await streamChatCompletion({
        messages: [...system, ...history, { role: 'user', content: text }],
-       temperature: 0.7,
+       temperature: 0.5,
        path: pathname || '/terminal/hunt',
        onDelta: (delta) => {
          setMessages((prev) =>