@lamp You can restart Sidekiq at any time. Jobs are queued in Redis, so as long as the Redis data is not lost, you're good to go.
However, you should carefully examine whether you really need that much concurrency: fewer Sidekiq threads mean fewer database connections and more memory left for each worker, which may actually improve job throughput and let you drain the queue faster.
@noellabo @PeterCxy Well, I increased concurrency to 24 but it doesn't seem to have helped. I've got 20,100 jobs enqueued in the pull queue with a latency of 4 hours, and 2,700 in default with a latency of 12 minutes. CPU, memory, network, and disk bandwidth are all low, so I don't see what's limiting it. Maybe disk latency; that info is hard to find.
On a minimal server, you can reduce the number of database connections by keeping threads to a minimum, e.g. 5. This also reduces Sidekiq's memory usage.
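As a rough sketch of what that looks like in practice (the exact invocation and env vars depend on how your instance is deployed; `DB_POOL` and the `-c` flag shown here are the common Mastodon conventions):

```shell
# Run Sidekiq with 5 threads and a matching database connection pool.
# Keeping DB_POOL equal to the thread count avoids idle connections.
DB_POOL=5 bundle exec sidekiq -c 5
```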
Since PostgreSQL consumes memory in proportion to the number of connections, reducing connections frees up memory for the rest of the system.
That headroom can then be allocated to work_mem to make query execution more efficient.
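For instance, a small-server postgresql.conf might look like this (the numbers are purely illustrative; tune them to your own RAM and workload):

```conf
# postgresql.conf -- illustrative values for a minimal server
max_connections = 50   # fewer Sidekiq threads means fewer connections needed
work_mem = 16MB        # per-sort/hash memory; raise it with the RAM you freed
```

Note that work_mem is allocated per sort or hash operation, not per connection, so lowering the connection count is what makes a higher value safe.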
It's a bit extreme, but try it for yourself.
@lamp @PeterCxy Note that LinkCrawlWorker is a low-importance worker, so if it becomes a burden, one option is simply not to run it.
https://github.com/mastodon/mastodon/blob/bb1ef11c30b19db56b61b0918b176e1459e1f776/app/workers/link_crawl_worker.rb#L9-L10
If you remove those two lines, the worker will exit normally without doing anything.
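Conceptually, the result is a no-op worker. This is a hypothetical sketch, not the actual Mastodon source (the real class includes Sidekiq::Worker and its queue options, commented out here so the snippet stands alone):

```ruby
# Hypothetical sketch: with the crawling logic removed from perform,
# the job completes immediately and Sidekiq records it as a success.
class LinkCrawlWorker
  # include Sidekiq::Worker            # present in the real worker
  # sidekiq_options queue: 'pull'      # present in the real worker

  def perform(status_id)
    # body removed: no link card is fetched, the job just returns
  end
end

LinkCrawlWorker.new.perform(42)  # => nil, no work performed
```

Because perform raises nothing, Sidekiq treats every such job as successful and the pull queue stops accumulating link-crawl work.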