@lamp You can restart Sidekiq at any time. Jobs are queued in Redis, so as long as Redis itself is not lost, you're good to go.
However, you should carefully examine whether you really need that much concurrency: fewer Sidekiq threads mean fewer database connections and more memory headroom per worker, which may improve job throughput and let you work through the queue faster.
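For context, in a typical Mastodon deployment the thread count is set with Sidekiq's `-c` flag in the systemd unit, and the `DB_POOL` environment variable should be at least as large so threads don't wait on free database connections. A hedged sketch (paths and values are illustrative, not a recommendation):

```
# /etc/systemd/system/mastodon-sidekiq.service (excerpt, illustrative)
# -c sets the thread count; DB_POOL should match or exceed it,
# otherwise threads block waiting for a database connection.
Environment="DB_POOL=25"
ExecStart=/usr/bin/bundle exec sidekiq -c 25
```

Lowering `-c` (and `DB_POOL` with it) is how you'd apply the "fewer threads" advice above.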
@noellabo @PeterCxy Well, I increased concurrency up to 24 but it doesn't seem to have helped. I've got 20,100 jobs enqueued in the pull queue with a latency of 4 hours, and 2,700 in default with a latency of 12 minutes. CPU and memory usage and network and disk bandwidth are all low, so I don't see what's limiting it. Maybe disk latency — that info is hard to find.
@lamp @PeterCxy You can go somewhat higher on threads — for example, set it to 15 for LinkCrawlWorker. However, this only helps when several remote servers respond slowly and jobs don't finish quickly. To see the actual execution times, check the logs with journalctl; the worker's execution time is recorded there.
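As a sketch of what to look for: Sidekiq's default logger prints a `done: N.NNN sec` line when a job completes, with a `class=` field naming the worker, so you can pull per-worker timings out of `journalctl -u mastodon-sidekiq` output. The log lines below are made-up samples and the exact format may differ between Sidekiq versions:

```ruby
# Hypothetical sketch: summarize per-worker execution times from Sidekiq
# log lines (as seen via journalctl). Sample lines are illustrative.
SAMPLE = <<~LOG
  pid=100 tid=abc class=LinkCrawlWorker jid=1 INFO: done: 4.000 sec
  pid=100 tid=abd class=DistributionWorker jid=2 INFO: done: 0.052 sec
  pid=100 tid=abe class=LinkCrawlWorker jid=3 INFO: done: 10.000 sec
LOG

# Collect elapsed seconds per worker class.
times = Hash.new { |h, k| h[k] = [] }
SAMPLE.each_line do |line|
  if line =~ /class=(\S+).*done: ([\d.]+) sec/
    times[$1] << $2.to_f
  end
end

# Print job count and average duration for each worker class.
times.each do |klass, secs|
  avg = secs.sum / secs.size
  puts format('%s: %d jobs, avg %.3f sec', klass, secs.size, avg)
end
```

If LinkCrawlWorker's average is dominated by a few slow remote servers, that supports raising its thread count; if every job is fast, the bottleneck is elsewhere.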
@lamp @PeterCxy Note that LinkCrawlWorker is a low-importance worker, so if it becomes a burden, one option is simply not to run it.
https://github.com/mastodon/mastodon/blob/bb1ef11c30b19db56b61b0918b176e1459e1f776/app/workers/link_crawl_worker.rb#L9-L10
If you remove these two lines, the worker will exit normally without doing anything.
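To illustrate the effect (this is a hypothetical sketch, not Mastodon's actual file — `NoopLinkCrawlWorker` is a made-up name, and the real worker also includes `Sidekiq::Worker`, omitted here so the sketch runs without the gem):

```ruby
# Hypothetical sketch of what the worker's perform method reduces to once
# its body is removed: the job is picked up, does nothing, and succeeds.
class NoopLinkCrawlWorker
  def perform(status_id)
    # No service call, no network fetch: the job finishes immediately
    # and Sidekiq records it as a normal successful run.
  end
end
```

Statuses still enqueue the job, but each run completes instantly, so the pull queue stops accumulating link-card work.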