8000 Why didn't process of cleaning come to 100% ? · Issue #58 · dataegret/pgcompacttable · GitHub

Open

Eliss-good opened this issue Apr 23, 2025 · 2 comments

Comments

@Eliss-good commented Apr 23, 2025

Hi!
I've just encountered that the pgcompacttable process sometimes stops much earlier than it should.

Example:
Launch command: pgcompacttable -h XXXX -p 6432 --dbname XXXX -f --delay-ratio=0 --password XXXX -u postgres
Database size on disk: ~1000 GB

Description: the process stopped at 12% progress and did not continue. I also checked the existing pgcompacttable connections on the Postgres server and found no active, idle, or other sessions, so I suppose something terminated the process, but I don't know why it happened.

Last logs:
[Tue Apr 22 16:47:48 2025] () Reindex forced: public.XXXX, initial size 296651 pages(2.263GB), has been reduced by 32% (757.062MB), duration 50 seconds.
[Tue Apr 22 16:48:43 2025] () Reindex forced: public.XXXXX, initial size 296679 pages(2.263GB), has been reduced by 32% (757.281MB), duration 54 seconds.
[Tue Apr 22 16:48:43 2025] () Processing results: 1517860 pages left (1962402 pages including toasts and indexes), size reduced by -24.000KB (1.572GB including toasts and indexes) in total.
[Tue Apr 22 17:01:14 2025] () Statistics: 3442922 pages (8066880 pages including toasts and indexes), it is expected that ~0.290% (9917 pages) can be compacted with the estimated space saving being 77.483MB.
[Tue Apr 22 17:01:14 2025] () Processing forced.
[Tue Apr 22 17:02:15 2025] () Progress: 12%, 1245 pages completed.

Questions:

  1. How can I increase the log level?
  2. Could this be related to the table being large?
  3. Could this be related to my using the --delay-ratio=0 parameter?
  4. Has anybody encountered the same problem?
@Eliss-good Eliss-good changed the title Why didn't process of cleaning go to 100% ? Why didn't process of cleaning come to 100% ? Apr 23, 2025
@alexius2 (Collaborator)

If the pgcompacttable process no longer exists and there is no related database connection from it, then that is strange.

It doesn't look like the pgcompacttable process stopped by itself; it looks more like it was terminated. Is the function pgcompact_clean_pages* still left in the database? (pgcompacttable should remove it after it completes its work.)

Maybe the ssh session where it was running was closed due to inactivity or a network problem (information about that might be in the system logs)? Is there any related ERROR/FATAL message in the PostgreSQL logs?

I recommend running pgcompacttable inside a screen/tmux session, or with nohup, so that this cannot happen.

Otherwise I have no explanation; a large table or --delay-ratio=0 shouldn't be a problem. I almost never use the --force option, but I doubt it can cause this.
There is also the --verbose option, which I usually use.
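The advice above can be sketched as follows. This is a hedged example, not from the thread verbatim: the XXXX host/db/password placeholders are copied from the reporter's command, and the log file name is an arbitrary choice. nohup detaches the process from the controlling terminal so a dropped ssh session (which sends SIGHUP) no longer kills it; tmux keeps the whole interactive session alive on the server instead.

```shell
# Option 1: nohup — the process survives the ssh session closing;
# stdout and stderr go to pgcompact.log for later inspection.
nohup pgcompacttable -h XXXX -p 6432 --dbname XXXX \
    -f --delay-ratio=0 --password XXXX -u postgres --verbose \
    > pgcompact.log 2>&1 &

# Option 2: tmux — start a named session, run the same command inside it,
# detach with Ctrl-b d, and re-attach later to check progress.
tmux new-session -s pgcompact
tmux attach -t pgcompact
```

Either approach avoids the failure mode where a background job started with a plain `&` is killed when its parent shell exits.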

@Eliss-good (Author)

Good morning, @alexius2!

  1. The function pgcompact_clean_pages* doesn't exist.
  2. At the moment pgcompacttable stopped working, the only related log line I see is: 2025-04-23 22:17:13.253 MSK [3791243] LOG: could not receive data from client: Connection reset by peer
  3. By the way, I start the pgcompacttable process as a background task using the & operator in the shell, for example:
    pgcompacttable -h XXXX -p 6432 --dbname XXXX -f --delay-ratio=0 --password XXXX -u postgres > output_res.log 2>&1 &

Next time I'll try launching it inside a screen/tmux session and add the --verbose flag.
