docs: enhance send-chat-message docs to also show ChatFullResponse #7430
Conversation
Greptile Summary: This PR enhances the Swagger/OpenAPI documentation for the send-chat-message endpoint. The changes are purely documentation-focused.
Confidence Score: 5/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Endpoint as /send-chat-message
    participant Logic as Message Processing
    participant Response
    Note over Client,Response: stream=true (default)
    Client->>Endpoint: POST with stream=true
    Endpoint->>Logic: Handle stream mode
    Logic-->>Endpoint: Packet stream
    Endpoint-->>Response: StreamingResponse (text/event-stream)
    Response-->>Client: SSE stream of packets
    Note over Client,Response: stream=false
    Client->>Endpoint: POST with stream=false
    Endpoint->>Logic: Handle non-stream mode
    Logic-->>Endpoint: Collect all packets
    Endpoint-->>Response: ChatFullResponse (application/json)
    Response-->>Client: Complete JSON response
```
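The two branches in the diagram can be sketched in plain Python. The helper names and packet shape here are illustrative stand-ins, not the actual Onyx implementation:

```python
import json
from typing import Iterator


def iter_packets() -> Iterator[dict]:
    # Stand-in for the real message-processing pipeline.
    yield {"answer_piece": "Hello"}
    yield {"answer_piece": ", world"}


def sse_stream(packets: Iterator[dict]) -> Iterator[str]:
    # stream=true: emit each packet as a Server-Sent Event frame.
    for packet in packets:
        yield f"data: {json.dumps(packet)}\n\n"


def full_response(packets: Iterator[dict]) -> dict:
    # stream=false: collect every packet into one JSON body.
    answer = "".join(p.get("answer_piece", "") for p in packets)
    return {"answer": answer}
```

The streaming branch produces `text/event-stream` frames incrementally, while the non-streaming branch drains the same packet source and returns a single JSON object.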
1 issue found across 1 file
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="backend/onyx/server/query_and_chat/chat_backend.py">
<violation number="1" location="backend/onyx/server/query_and_chat/chat_backend.py:536">
P2: Non-streaming chat response is wrapped in StreamingResponse due to response_class override, breaking JSON ChatFullResponse for stream=false</violation>
</file>
Force-pushed 99c5de3 to 4944c11 (Compare)
@yuhongsun96 @wenxi-onyx is there anything I can do for this?
Force-pushed 4944c11 to 5dc5942 (Compare)
The test failures seem real and appear to be introduced by this change. You can reproduce them locally by running [command], which is a shorthand for [command].
Could you point out which test case(s) are failing? I'm struggling to read the output of that command.
For what it's worth, the command fails for me on main at d8068f0...
Hmmm. That looks like a python version incompatibility (if not actually broken code). We primarily test and develop on python3.11.
Rats, that might be a false alarm on my end; I probably didn't pull from the remote. It's been a day 😂
On Wed, 21 Jan 2026, 18:26, Jamison Lahman (@jmelahman) wrote:
For what it's worth, the command fails for me on main at d8068f0...
```
INFO Generating OpenAPI schema and Python client
INFO Schema output: /Users/ciaran/dev/onyx/backend/generated/openapi.json
INFO Client output: /Users/ciaran/dev/onyx/backend/generated/onyx_openapi_client
Traceback (most recent call last):
  File "<stdin>", line 236, in <module>
  File "<stdin>", line 225, in main
  File "<stdin>", line 36, in generate_schema
  File "/Users/ciaran/dev/onyx/backend/onyx/main.py", line 52, in <module>
    from onyx.server.auth_check import check_router_auth
  File "/Users/ciaran/dev/onyx/backend/onyx/server/auth_check.py", line 14, in <module>
    from onyx.server.onyx_api.ingestion import api_key_dep
  File "/Users/ciaran/dev/onyx/backend/onyx/server/onyx_api/ingestion.py", line 21, in <module>
    from onyx.indexing.indexing_pipeline import build_indexing_pipeline
  File "/Users/ciaran/dev/onyx/backend/onyx/indexing/indexing_pipeline.py", line 72, in <module>
    from onyx.llm.chat_llm import LLMRateLimitError
  File "/Users/ciaran/dev/onyx/backend/onyx/llm/chat_llm.py", line 39, in <module>
    from onyx.llm.llm_provider_options import CREDENTIALS_FILE_CUSTOM_CONFIG_KEY
  File "/Users/ciaran/dev/onyx/backend/onyx/llm/llm_provider_options.py", line 82, in <module>
    for model in litellm.bedrock_models + litellm.bedrock_converse_models
                 ~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for +: 'set' and 'set'
CRITICAL: Exiting due to uncaught exception TypeError: unsupported operand type(s) for +: 'set' and 'set'
FATA Failed to generate OpenAPI schema and client: exit status 1
```
Hmmm. d8068f0 is from 8 months ago.
That looks like a python version incompatibility (if not actually broken code). We primarily test and develop on python3.11. uv also makes it convenient to set up a specific python toolchain:
https://github.com/onyx-dot-app/onyx/blob/main/contributing_guides/dev_setup.md#local-set-up
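The TypeError in the traceback above comes from applying `+` to two `set` objects: lists support `+` concatenation, but sets only support union via `|` (or `.union()`). A quick illustration with toy values standing in for litellm's model collections, which apparently became sets in a newer litellm release:

```python
bedrock_models = {"model-a", "model-b"}
bedrock_converse_models = {"model-b", "model-c"}

# `+` is not defined for sets, which is exactly the failing line's error.
try:
    bedrock_models + bedrock_converse_models
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +: 'set' and 'set'

# Union works for sets, and deduplicates any overlap.
combined = bedrock_models | bedrock_converse_models
```

So the crash is a data-type mismatch between the litellm version installed and what `llm_provider_options.py` expects, consistent with the environment/version diagnosis below.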
Force-pushed 5dc5942 to a0c37fc (Compare)
Hey @jmelahman sorry for the tired responses last night! I dived back in this morning and have resolved the failing spec generation 🙌
Description
When developing integrations with the new `send-chat-message` endpoint, I noticed that currently the Swagger docs only display the stream example ("string"), whereas the endpoint can return a `StreamingResponse`, or a `ChatFullResponse` when `stream=false`. It'd be useful to have this model in the docs for clarity. I updated the `responses` field in the method definition to include both examples, streaming and non-streaming.

How Has This Been Tested?
- CONTRIBUTING_VSCODE.md
- `<api-url>/docs#/public/handle_send_chat_message`

Additional Options