Huge deadlock count on the last-phase ALTER INDEX RENAME TO operations #35
Comments
2020-10-31 21:05:28 MSK [20639] [local] [unknown] postgres@queue2 57014 ALTER INDEX ERROR: canceling statement due to statement timeout
I've set it up to run hourly via crontab and I'm still getting errors:
[postgres@ ~]$ grep -c ERR /u01/postgres/main/log/postgresql-2020-11-01.log
[local]:5432 postgres@queue2=# \d+ queue_message2
It seems strange that the pgcompact_index_**** indexes were not removed; all of them, plus their "master" index (idx_queue_message2_main), lead to deadlock errors on the ALTER INDEX RENAME TO operations.
[local]:5432 postgres@queue2=# select * from pg_stat_user_indexes where indexrelname='idx_queue_message2_main';
[local]:5432 postgres@queue2=# select * from pg_stat_user_indexes where relname='queue_message2';
[postgres@ ~]$ /u01/postgres/pgcompacttable-master/bin/pgcompacttable -V
We're using the latest version of the tool.
One more type of error: canceled autovacuum tasks (arising only while pgcompacttable is running).
PG log:
tool log:
Hello. I didn't actually expect a deadlock here. I think that in this case there is no further error handling, so pgcompacttable leaves the new index behind. I suggest checking all the pgcompact_index_N indexes; most likely they are redundant and should be removed.
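A query along these lines can help with that check. It is only a sketch and assumes the leftover indexes all follow the pgcompact_index_ naming pattern mentioned above; verify each index is really redundant before dropping it.

-- Sketch: list leftover pgcompact_index_N indexes with their usage and size,
-- so they can be reviewed and dropped manually (e.g. with DROP INDEX CONCURRENTLY).
SELECT schemaname,
       relname,
       indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE indexrelname LIKE 'pgcompact_index_%'
ORDER BY pg_relation_size(indexrelid) DESC;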
I doubt this is a good idea at all. For regular maintenance it would be better to tune the autovacuum settings and investigate the root cause of your table (or index) bloat. The usual reasons are long transactions and the default (that is, lazy) autovacuum settings. With appropriately tuned autovacuum, PostgreSQL typically does not need pgcompacttable (except after massive deletes).
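As an illustration only (the parameter values below are placeholders, not recommendations for this particular system), per-table storage parameters are one way to make autovacuum keep up with a heavily updated queue table:

-- Illustrative sketch: make autovacuum fire earlier and work faster on the
-- hot queue table. Adjust the values to the real workload before using them.
ALTER TABLE public.queue_message2 SET (
    autovacuum_vacuum_scale_factor  = 0.01,  -- vacuum when ~1% of rows are dead
    autovacuum_analyze_scale_factor = 0.02,  -- analyze somewhat more often as well
    autovacuum_vacuum_cost_delay    = 2      -- milliseconds; lower means faster vacuum
);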
Yes, this is expected behavior. pgcompacttable runs some commands (e.g. VACUUM, ANALYZE) that definitely conflict with autovacuum; autovacuum notices this and cancels itself with such a log message. If you have a modern PostgreSQL (12 and above), it would be better to use the native REINDEX CONCURRENTLY.
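For example, assuming PostgreSQL 12 or newer, the native REINDEX ... CONCURRENTLY rebuilds a bloated index in place while still allowing concurrent reads and writes, without the separate create/rename steps:

-- Sketch for PostgreSQL 12+: rebuild a single bloated index, or all indexes
-- on the table at once, with only short-lived locks.
REINDEX INDEX CONCURRENTLY public.idx_queue_message2_main;
REINDEX TABLE CONCURRENTLY public.queue_message2;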
Original issue description:
Hello, Maxim!
Thank you for the very useful tool. We are seeing deadlock errors in the PostgreSQL logs related to the last phase of pgcompacttable, the "REINDEX" step.
For example:
2020-10-31 21:05:37 MSK [21303] 172.20.2.144 PostgreSQL JDBC Driver queue2@queue2 40P01 UPDATE ERROR: deadlock detected
2020-10-31 21:05:37 MSK [21303] 172.20.2.144 PostgreSQL JDBC Driver queue2@queue2 40P01 UPDATE DETAIL: Process 21303 waits for RowExclusiveLock on relation 38747785 of database 16619; blocked by process 20639.
Process 20639 waits for AccessExclusiveLock on relation 39109132 of database 16619; blocked by process 21303.
Process 21303: UPDATE public.queue_message2 as x SET sended_at = now(), state='SENDED'
WHERE id = (
SELECT id
FROM public.queue_message2
WHERE queue_id = $1
AND state = 'QUEUED'
AND router_tag = $2
AND expired_at > now()
ORDER BY queue_id, state, router_tag,
priority DESC, queued_at ASC FOR UPDATE SKIP LOCKED LIMIT 1)
RETURNING x.*
Process 20639:
ALTER INDEX "public".pgcompact_index_20528 RENAME TO "pgcompact_index_14090";
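For reference, while pgcompacttable is running, a query along these lines (a sketch relying on pg_blocking_pids, available since PostgreSQL 9.6) shows which sessions are blocked and by whom, which makes it easy to confirm that the rename step is the blocker:

-- Sketch: show blocked sessions together with the PIDs that block them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       state,
       left(query, 100) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;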
Could any of the "Options controlling the behaviour", such as the --reindex-*** options or anything else, have a positive effect here?
Maybe --print-reindex-queries would help in such a situation, so we could perform the reindex manually later?
The problem arises only on huge database shards with heavy DML load.