@lamp You can restart Sidekiq at any time. Jobs are queued in Redis, so as long as Redis itself is not lost, you're good to go.
However, you should examine carefully whether you really need that much concurrency: fewer Sidekiq threads mean fewer database connections and less worker memory, which may actually improve job throughput and let you drain the queue faster.
On a minimal server, you can reduce the number of database connections by keeping the thread count low, e.g. 5. This also reduces Sidekiq's memory usage.
Since PostgreSQL's memory consumption scales with the number of connections, reducing them leaves more headroom for the system as a whole.
That headroom can be given to work_mem to make query execution more efficient.
It's a bit extreme, but try it for yourself.
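To make the tuning above concrete, a hypothetical configuration for a small instance might look like the following. All values, file locations, and variable names are assumptions for illustration, not recommendations:

```
# Sidekiq: 5 threads, with a matching database pool
# (systemd unit / .env style, names assumed):
#   ExecStart=/usr/bin/bundle exec sidekiq -c 5
#   DB_POOL=5

# postgresql.conf: hand the freed memory to query execution
#   work_mem = 16MB
```

The key point is keeping the Sidekiq thread count and the database pool size in step: each thread holds one connection, so shrinking one without the other either starves the workers or wastes connections.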
@lamp @PeterCxy Note that LinkCrawlWorker is a low-importance worker, so if it becomes a burden, one option is simply not to let it do any work.
https://github.com/mastodon/mastodon/blob/bb1ef11c30b19db56b61b0918b176e1459e1f776/app/workers/link_crawl_worker.rb#L9-L10
If you remove these two lines, the worker exits normally without doing anything.
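A hedged sketch of what the gutted worker looks like. The class shape follows the Mastodon worker at the link above, but the `Sidekiq::Worker` include is left out here so the snippet runs standalone:

```ruby
# Sketch: LinkCrawlWorker with its two working lines removed.
# The real file includes Sidekiq::Worker and sets sidekiq_options;
# those are omitted here so the snippet is self-contained.
class LinkCrawlWorker
  def perform(status_id)
    # Body removed: with the service call gone, the job is picked up,
    # does nothing, and reports success.
  end
end

LinkCrawlWorker.new.perform(1234)  # returns nil; the job "succeeds" immediately
```

Because the job still completes successfully, Sidekiq records no failures and no retries pile up — the queue just drains instantly.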
@lamp @PeterCxy The thread count can be a bit higher; for LinkCrawlWorker, for example, you could set it to 15. However, that only helps when several slow-responding servers keep jobs from finishing quickly. To see the actual execution time, check the logs with journalctl; each worker's runtime is recorded there.
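Sidekiq logs each job's runtime when it completes ("done: N sec"), so the journal is enough to measure this. A hedged example — on a systemd setup the real command would be something like `journalctl -u mastodon-sidekiq | grep LinkCrawlWorker` (unit name assumed); here it is demonstrated against a sample journal line:

```shell
# Extract the runtime from a Sidekiq completion line as it appears
# in the journal (sample line; pipe journalctl output in practice).
sample='Mar 01 12:00:00 host bundle[123]: LinkCrawlWorker JID-abc INFO: done: 0.132 sec'
echo "$sample" | grep -oE 'done: [0-9.]+ sec'
# prints "done: 0.132 sec"
```

If those times are consistently short, extra threads buy you nothing; if a few jobs hang for many seconds on slow remote servers, more threads let the rest of the queue flow past them.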