A worker is a Python process that typically runs in the background and exists solely as a work horse to perform lengthy or blocking tasks that you don’t want to perform inside web processes.
To start crunching work, simply start a worker from the root of your project directory:
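```console
$ rq worker high normal low
```

The queue names above are only examples; list whichever queues this particular worker should process.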
Workers will read jobs from the given queues (the order is important) in an endless loop, waiting for new work to arrive when all jobs are done.
Each worker will process a single job at a time. Within a worker, there is no concurrent processing going on. If you want to perform jobs concurrently, simply start more workers.
By default, workers will start working immediately and will block and wait for new work when they run out of work. Workers can also be started in burst mode to finish all currently available work and quit as soon as all given queues are emptied.
This can be useful for batch work that needs to be processed periodically, or just to scale up your workers temporarily during peak periods.
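For example, to work through the given queues once and exit as soon as they are empty:

```console
# --burst makes the worker exit once all given queues are drained.
$ rq worker --burst high normal low
```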
The life-cycle of a worker consists of a few phases:
1. Boot. Loading the Python environment.
2. Birth registration. The worker registers itself to the system so it knows of this worker.
3. Start listening. A job is popped from any of the given Redis queues. If all queues are empty and the worker is running in burst mode, it quits now; otherwise, it waits until jobs arrive.
4. Prepare job execution. The worker tells the system that it will begin work by setting its status to busy and registers the job in the StartedJobRegistry.
5. Fork a child process. A child process (the "work horse") is created to perform the actual work in a fail-safe context.
6. Process work. The actual job is performed in the work horse.
7. Cleanup job execution. The worker sets its status to idle and sets both the job and its result to expire based on result_ttl. The job is also removed from the StartedJobRegistry and added to the FinishedJobRegistry in the case of successful execution, or to the FailedQueue in the case of failure.
The rq worker shell script is a simple fetch-fork-execute loop.
When a lot of your jobs do lengthy setups, or they all depend on the same set
of modules, you pay this overhead each time you run a job (since you’re doing
the import after the moment of forking). This is clean, because RQ won’t
ever leak memory this way, but also slow.
A pattern you can use to improve the throughput for these kinds of jobs is to import the necessary modules before the fork. There is no way of telling RQ workers to perform this setup for you, but you can do it yourself before starting the work loop.
To do this, provide your own worker script (instead of using rq worker).
A simple implementation example:
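```python
#!/usr/bin/env python
import sys

from redis import Redis
from rq import Queue, Worker

# Preload libraries here, before the work loop starts, so the import cost is
# paid once per worker rather than after every fork.
# library_that_you_want_preloaded is a placeholder for your own module.
import library_that_you_want_preloaded

# Queue names to listen on are passed as command-line arguments,
# similar to the rq worker script.
redis_conn = Redis()
queue_names = sys.argv[1:] or ['default']
queues = [Queue(name, connection=redis_conn) for name in queue_names]

worker = Worker(queues, connection=redis_conn)
worker.work()
```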
Workers are registered to the system under their names; see monitoring.
By default, the name of a worker is equal to the concatenation of the current hostname and the current PID. To override this default, specify the name when starting the worker, using the --name option.
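For example (my-worker and the queue names are placeholders):

```console
$ rq worker --name my-worker high normal low
```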
Worker instances store their runtime information in Redis. Here's how to query them:
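```python
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis()

# All workers registered with this Redis connection
workers = Worker.all(connection=redis_conn)

# Only the workers listening on a particular queue
# ('queue_name' is a placeholder)
queue = Queue('queue_name', connection=redis_conn)
workers_on_queue = Worker.all(queue=queue)
```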
New in version 0.10.0.
If you only want to know the number of workers for monitoring purposes, using
Worker.count() is much more performant.
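For example:

```python
from redis import Redis
from rq import Worker

redis_conn = Redis()
worker_count = Worker.count(connection=redis_conn)
```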
New in version 0.9.0.
If you want to check the utilization of your queues, Worker instances also store a few useful pieces of information:
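```python
from redis import Redis
from rq import Worker

redis_conn = Redis()

# 'rq:worker:example_worker' is a placeholder for an actual worker key.
worker = Worker.find_by_key('rq:worker:example_worker', connection=redis_conn)

worker.successful_job_count  # Number of jobs finished successfully
worker.failed_job_count      # Number of failed jobs processed by this worker
worker.total_working_time    # Amount of time spent executing jobs
```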
If, at any time, the worker receives SIGINT (via Ctrl+C) or SIGTERM (via kill), the worker waits until the currently running task is finished, stops the work loop and gracefully registers its own death.
If, during this takedown phase, SIGINT or SIGTERM is received again, the worker will forcefully terminate the child process (sending it SIGKILL), but will still try to register its own death.
New in version 0.3.2.
If you'd like to configure rq worker via a configuration file instead of through command line arguments, you can do this by creating a Python file like settings.py:
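```python
# settings.py -- example RQ worker configuration (all values are illustrative)
REDIS_URL = 'redis://localhost:6379/1'

# Alternatively, specify the Redis connection piece by piece
# REDIS_HOST = 'redis.example.com'
# REDIS_PORT = 6380
# REDIS_DB = 3
# REDIS_PASSWORD = 'very secret'

# Queues to listen on
QUEUES = ['high', 'normal', 'low']

# If you're using Sentry to collect your runtime exceptions, you can use this
# to configure RQ for it in a single step
SENTRY_DSN = 'http://public:secret@example.com/1'
```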
The example above shows all the options that are currently supported.
The QUEUES and REDIS_PASSWORD settings are new since 0.3.3.
To specify which module to read settings from, use the -c option:
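```console
# 'settings' refers to the settings.py module shown above.
$ rq worker -c settings
```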
New in version 0.4.0.
There are times when you want to customize the worker's behavior. Some of the more common requests so far are described below.
You can use the -w option to specify a different worker class to use:
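```console
# 'path.to.GeekWorker' is a placeholder dotted path to your own worker class.
$ rq worker -w 'path.to.GeekWorker'
```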
Will be available in the next release.
You can tell the worker to use a custom class for jobs and queues using the --job-class and/or --queue-class options:
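```console
# The dotted paths below are placeholders for your own classes.
$ rq worker --job-class 'custom.JobClass' --queue-class 'custom.QueueClass'
```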
Don’t forget to use those same classes when enqueueing the jobs.
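For instance, a sketch of enqueueing with matching custom classes (CustomJob, CustomQueue and the task path are illustrative names):

```python
from redis import Redis
from rq import Queue
from rq.job import Job

class CustomJob(Job):
    pass

class CustomQueue(Queue):
    job_class = CustomJob

# Enqueue using the same custom classes the worker was started with.
queue = CustomQueue('default', connection=Redis())
queue.enqueue('myapp.tasks.my_task')  # dotted path to a job function; illustrative
```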
New in version 0.5.5.
If you need to handle errors differently for different types of jobs, or simply want to customize
RQ’s default error handling behavior, run
rq worker using the --exception-handler option:
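```console
# 'path.to.my_handler' is a placeholder dotted path to your handler function.
$ rq worker --exception-handler 'path.to.my_handler'

# Multiple handlers can be registered by repeating the option.
$ rq worker --exception-handler 'path.to.my_handler' --exception-handler 'another.path.to.handler'
```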