504 Gateway Timeout on /api/quality/conflicts Endpoint · Issue #9348 · cvat-ai/cvat

Closed
nvj1d opened this issue Apr 21, 2025 · 7 comments
Labels
need info (Need more information to investigate the issue)

Comments

@nvj1d commented Apr 21, 2025

I am encountering a persistent 504 Gateway Timeout error when attempting to open a job in CVAT; the failing request goes to the /api/quality/conflicts endpoint.

Environment:

CVAT version: 2.18.0
Deployment: Docker Compose
Docker command: docker-compose -f docker-compose.yml -f docker-compose.https.yml up -d

Issue:
When navigating to a job, the frontend attempts to fetch: GET /api/quality/conflicts?org=test&report_id=3494&page_size=500&page=1

This request returns: 504 Gateway Timeout (from nginx)
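
For reference, the failing request can be replayed outside the browser to measure how long the backend actually takes. This is only a sketch: <cvat-host> and TOKEN are placeholders for the deployment host and an API token.

```
# Replay the failing request and report the status code and total time.
# Add -k if the deployment uses a self-signed certificate.
curl -sS -o /dev/null \
  -w "page_size=500 -> HTTP %{http_code} in %{time_total}s\n" \
  -H "Authorization: Token $TOKEN" \
  "https://<cvat-host>/api/quality/conflicts?org=test&report_id=3494&page_size=500&page=1"
```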

@azhavoro (Contributor) commented:

Please provide:

  1. output of docker ps
  2. logs from the quality report worker container
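
Something along these lines should collect both; the worker container name below is an assumption based on the default Compose setup and may differ in your deployment:

```
# List the running CVAT containers.
docker ps --filter "name=cvat"

# Tail the quality reports worker logs; the container name is assumed
# from the default docker-compose service names.
docker logs --tail 500 cvat_worker_quality_reports
```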

azhavoro added the need info label Apr 22, 2025
@zhiltsov-max (Contributor) commented Apr 28, 2025

Hi, it may be a performance issue related to RAM or CPU required to process a big join in the DB. If you find out that this is the issue, it's likely to be optimized in #8275 and #9116. If you have access to the code, you can try to apply this patch: 0390f7b as a workaround.
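
If you go the patch route, a rough sketch of applying it, assuming a local git checkout of the CVAT sources and that the commit is reachable from the upstream repository's branches, could be:

```
# Pull in the upstream history so the commit object is available locally,
# then cherry-pick it; resolve any conflicts manually.
git remote add upstream https://github.com/cvat-ai/cvat.git
git fetch upstream
git cherry-pick 0390f7b
```

After that, the server images would need to be rebuilt from the patched sources for the change to take effect.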

@nvj1d (Author) commented Apr 28, 2025

> Please provide:
>
> 1. output of docker ps
> 2. logs from the quality report worker container

These are the stats from docker:

[screenshot: docker stats output]

And these are the quality_reports worker logs:

[screenshot: quality_reports container logs]

@nvj1d (Author) commented Apr 28, 2025

> Hi, it may be a performance issue related to RAM or CPU required to process a big join in the DB. If you find out that this is the issue, it's likely to be optimized in #9116. If you have access to the code, you can try to apply this patch: 0390f7b as a workaround.

Thanks a lot for your reply! I'll go ahead and test this workaround.
Also, I had a quick question about the lock timeout on the backend: if I were to increase it, would that help, or could it have any side effects?

[screenshot: backend lock timeout setting]

Alternatively, would it make sense to decrease the number of results per page — for example, from 500 to 300 — in the server-proxy.ts file?
I tested this in Postman and it seems to work!

[screenshot: Postman request with page_size=300 succeeding]
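
For reference, the Postman check corresponds to something like the sketch below, where <cvat-host> and TOKEN are placeholders and the response is assumed to follow the standard paginated layout with results and next fields:

```
# Page through the conflicts with page_size=300 instead of 500.
page=1
while :; do
  resp=$(curl -sS -H "Authorization: Token $TOKEN" \
    "https://<cvat-host>/api/quality/conflicts?org=test&report_id=3494&page_size=300&page=$page")
  echo "page $page: $(echo "$resp" | jq '.results | length') conflicts"
  # Stop once the API reports no further page.
  [ "$(echo "$resp" | jq -r '.next')" = "null" ] && break
  page=$((page + 1))
done
```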

@zhiltsov-max (Contributor) commented:

> Alternatively, would it make sense to decrease the number of results per page — for example, from 500 to 300 — in the server-proxy.ts file?

Yes, it is also a known workaround for the performance issue.

> Also, I had a quick question about the lock timeout on the backend: if I were to increase it, would that help, or could it have any side effects?

The possible outcome is that if the server fails on a request without releasing a lock, the lock will be held for the specified time. This can happen if the server process is killed by the OS due to an out-of-memory (OOM) condition. The default value matches the nginx request timeout.
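
If an OOM kill is suspected, a quick way to check could be the following; the server container name is an assumption based on the default Compose setup:

```
# Check whether Docker recorded an OOM kill for the server container.
docker inspect --format '{{.State.OOMKilled}}' cvat_server

# Look for kernel OOM-killer messages on the host.
dmesg -T | grep -iE "out of memory|killed process"
```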


@nvj1d (Author) commented Apr 28, 2025

> Yes, it is also a known workaround for the performance issue.
>
> The possible outcome is that if the server fails on a request without releasing a lock, the lock will be held for the specified time. This can happen if the server process is killed by the OS due to an out-of-memory (OOM) condition. The default value matches the nginx request timeout.

Thank you so much for your valuable input!

@azhavoro (Contributor) commented May 5, 2025

@nvj1d Hi, can the issue be closed?

azhavoro closed this as completed May 5, 2025
azhavoro reopened this May 5, 2025
nvj1d closed this as completed May 5, 2025