Fix: preserve newlines in non-streaming LLM response formatting #49

Open: wants to merge 2 commits into base `main`
8 changes: 5 additions & 3 deletions packages/python/src/mainframe_orchestra/llm.py
@@ -312,16 +312,18 @@ async def stream_generator():
 # Non-streaming logic
 spinner.text = f"Waiting for {model} response..."
 response: OpenAIChatCompletion = await client.chat.completions.create(**request_params)

 content = response.choices[0].message.content
 spinner.succeed("Request completed")

 try:
     # Attempt to parse the API response as JSON and reformat it as compact, single-line JSON.
     compact_response = json.dumps(json.loads(content.strip()), separators=(',', ':'))
 except ValueError:
-    # If it's not JSON, collapse any extra whitespace (including newlines) into a single space.
-    compact_response = " ".join(content.strip().split())
+    # If it's not JSON, preserve newlines but clean up extra whitespace within lines.
+    lines = content.strip().splitlines()
+    compact_response = "\n".join(line.strip() for line in lines)

 logger.debug(f"API Response: {compact_response}")
 return compact_response, None
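The behavioral change can be sketched as a standalone function (the `format_response` name is hypothetical, introduced only for this illustration; the real logic lives inline in `llm.py`). JSON payloads are still compacted to a single line, while non-JSON text now keeps its line breaks instead of being collapsed into a single space:

```python
import json

def format_response(content: str) -> str:
    # Hypothetical standalone version of the PR's formatting logic.
    try:
        # JSON responses are reformatted as compact, single-line JSON.
        # json.JSONDecodeError subclasses ValueError, so the except clause catches it.
        return json.dumps(json.loads(content.strip()), separators=(',', ':'))
    except ValueError:
        # Non-JSON text: preserve newlines, trim whitespace within each line.
        lines = content.strip().splitlines()
        return "\n".join(line.strip() for line in lines)

# Before this PR, the except branch did: " ".join(content.strip().split()),
# which flattened multi-line answers (e.g. numbered lists) onto one line.
print(format_response("  step one  \n   step two  "))  # → "step one\nstep two"
print(format_response('{"a": 1,\n "b": 2}'))           # → '{"a":1,"b":2}'
```

Note that blank lines inside the response are also preserved (they become empty strings from `splitlines()`), so paragraph breaks in the model's output survive.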
