A worker is a Python process that typically runs in the background and exists solely as a work horse to perform lengthy or blocking tasks that you don’t want to perform inside web processes.
To start crunching work, simply start a worker from the root of your project directory:
```
$ rq worker high default low
*** Listening for work on high, default, low
Got send_newsletter('email@example.com') from default
Job ended normally without result
*** Listening for work on high, default, low
...
```
Workers will read jobs from the given queues (the order is important) in an endless loop, waiting for new work to arrive when all jobs are done.
Each worker will process a single job at a time. Within a worker, there is no concurrent processing going on. If you want to perform jobs concurrently, simply start more workers.
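The queue-ordering behavior described above can be sketched in plain Python (a simplified model for illustration, not RQ's actual implementation): the worker always takes the next job from the first non-empty queue, in the order the queues were given.

```python
from collections import deque

def dequeue_next(queues):
    """Take the next job from the first non-empty queue.
    `queues` is an ordered mapping of queue name -> deque of jobs."""
    for name, jobs in queues.items():
        if jobs:
            return name, jobs.popleft()
    return None  # every queue is empty: wait (or quit, in burst mode)

# Priority order matches the order given on the command line
queues = {
    'high': deque(['send_alert']),
    'default': deque(['send_newsletter']),
    'low': deque(['cleanup']),
}
```

Because `high` is checked first on every iteration, a job enqueued there is always picked up before anything waiting on `default` or `low`.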
You should use process managers like Supervisor or systemd to run RQ workers in production.
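For example, a minimal Supervisor program section might look like the following (the program name, queue names, and process count are placeholders; adapt them to your deployment):

```ini
[program:rqworker]
command=rq worker high default low
; run four worker processes, each handling one job at a time
numprocs=4
process_name=%(program_name)s-%(process_num)s
autostart=true
autorestart=true
```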
By default, workers will start working immediately and will block and wait for new work when they run out of work. Workers can also be started in burst mode to finish all currently available work and quit as soon as all given queues are emptied.
```
$ rq worker --burst high default low
*** Listening for work on high, default, low
Got send_newsletter('firstname.lastname@example.org') from default
Job ended normally without result
No more work, burst finished.
Registering death.
```
This can be useful for batch work that needs to be processed periodically, or just to scale up your workers temporarily during peak periods.
In addition to `--burst`, `rq worker` also accepts these arguments:
- `-u`, `--url`: URL describing Redis connection details (e.g. `rq worker --url redis://:email@example.com:1234/9`)
- `-P`, `--path`: import path; multiple import paths are supported (e.g. `rq worker --path foo --path bar`)
- `-c`: path to module containing RQ settings.
- `--results-ttl`: job results will be kept for this number of seconds (defaults to 500).
- `-w`, `--worker-class`: RQ Worker class to use (e.g. `rq worker --worker-class 'foo.bar.MyWorker'`)
- `-j`, `--job-class`: RQ Job class to use.
- `--queue-class`: RQ Queue class to use.
- `--connection-class`: Redis connection class to use, defaults to `redis.StrictRedis`.
- `--log-format`: Format for the worker logs, defaults to `'%(asctime)s %(message)s'`
- `--date-format`: Datetime format for the worker logs, defaults to `'%H:%M:%S'`
- `--disable-job-desc-logging`: Turn off job description logging.
The life-cycle of a worker consists of a few phases:

1. Boot. Loading the Python environment.
2. Birth registration. The worker registers itself to the system so it knows of this worker.
3. Start listening. A job is popped from any of the given Redis queues. If all queues are empty and the worker is running in burst mode, quit now. Else, wait until jobs arrive.
4. Prepare job execution. The worker tells the system that it will begin work by setting its status to `busy` and registers the job in the `StartedJobRegistry`.
5. Fork a child process. A child process (the "work horse") is forked off to do the actual work in a fail-safe context.
6. Process work. This performs the actual job work in the work horse.
7. Cleanup job execution. The worker sets its status to `idle` and sets both the job and its result to expire based on `result_ttl`. The job is also removed from `StartedJobRegistry` and added to `FinishedJobRegistry` in the case of successful execution, or `FailedJobRegistry` in the case of failure.
The `rq worker` shell script is a simple fetch-fork-execute loop.
When a lot of your jobs do lengthy setups, or they all depend on the same set
of modules, you pay this overhead each time you run a job (since you’re doing
the import after the moment of forking). This is clean, because RQ won’t
ever leak memory this way, but also slow.
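That fetch-fork-execute loop can be sketched as follows (a simplified stand-in for what `rq worker` does, Unix only, for illustration):

```python
import os

def fork_and_perform(job):
    """Minimal sketch of the fork-per-job model: the parent forks a
    "work horse" child for each job, so any setup the job needs is
    re-paid inside every child."""
    pid = os.fork()
    if pid == 0:
        # Child ("work horse"): do the work, then exit without
        # ever returning to the parent's loop.
        try:
            job()
            code = 0
        except Exception:
            code = 1
        os._exit(code)
    # Parent: wait for the work horse and report success or failure.
    _, status = os.waitpid(pid, 0)
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

Anything imported before `os.fork()` is shared with the child via copy-on-write memory, which is exactly what the preloading pattern below takes advantage of.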
A pattern you can use to improve the throughput performance for these kinds of jobs is to import the necessary modules before the fork. There is no way of telling RQ workers to perform this set up for you, but you can do it yourself before starting the work loop.
To do this, provide your own worker script (instead of using `rq worker`). A simple implementation example:
```python
#!/usr/bin/env python
import sys
from rq import Connection, Worker

# Preload libraries
import library_that_you_want_preloaded

# Provide queue names to listen to as arguments to this script,
# similar to rq worker
with Connection():
    qs = sys.argv[1:] or ['default']

    w = Worker(qs)
    w.work()
```
Workers are registered to the system under their names, which are generated
randomly during instantiation (see monitoring). To override this default,
specify the name when starting the worker, or use the `--name` CLI option.
Updated in version 0.10.0.
Worker instances store their runtime information in Redis. Here's how to retrieve them:
```python
from redis import Redis
from rq import Queue, Worker

# Returns all workers registered in this connection
redis = Redis()
workers = Worker.all(connection=redis)

# Returns all workers in this queue (new in version 0.10.0)
queue = Queue('queue_name')
workers = Worker.all(queue=queue)
worker = workers[0]
print(worker.name)
```
Aside from `worker.name`, workers also have the following properties:
- `hostname`: the host where this worker is run
- `pid`: worker's process ID
- `queues`: queues on which this worker is listening for jobs
- `state`: possible states are `suspended`, `started`, `busy` and `idle`
- `current_job`: the job it's currently executing (if any)
- `last_heartbeat`: the last time this worker was seen
- `birth_date`: time of worker's instantiation
- `successful_job_count`: number of jobs finished successfully
- `failed_job_count`: number of failed jobs processed
- `total_working_time`: amount of time spent executing jobs, in seconds
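As an illustration, these counters combine into a simple utilization metric. The helper below is hypothetical, not part of RQ; it only assumes an object exposing the three properties above:

```python
def average_job_duration(worker):
    """Hypothetical helper: mean seconds spent per processed job,
    computed from the worker properties listed above."""
    processed = worker.successful_job_count + worker.failed_job_count
    if processed == 0:
        return 0.0
    return worker.total_working_time / processed
```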
New in version 0.10.0.
If you only want to know the number of workers for monitoring purposes,
Worker.count() is much more performant.
```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()

# Count the number of workers in this Redis connection
workers = Worker.count(connection=redis)

# Count the number of workers for a specific queue
queue = Queue('queue_name', connection=redis)
workers = Worker.count(queue=queue)
```
New in version 0.9.0.
If you want to check the utilization of your queues, `Worker` instances store a few useful pieces of information:
```python
from rq.worker import Worker
worker = Worker.find_by_key('rq:worker:name')

worker.successful_job_count  # Number of jobs finished successfully
worker.failed_job_count      # Number of failed jobs processed by this worker
worker.total_working_time    # Amount of time spent executing jobs (in seconds)
```
If, at any time, the worker receives `SIGINT` (via Ctrl+C) or `SIGTERM` (via `kill`), the worker will wait until the currently running task is finished, stop the work loop and gracefully register its own death.
If, during this takedown phase, `SIGINT` or `SIGTERM` is received again, the worker will forcefully terminate the child process (sending it `SIGKILL`), but will still try to register its own death.
If you'd like to configure `rq worker` via a configuration file instead of through command line arguments, you can do this by creating a Python file like `settings.py`:
```python
REDIS_URL = 'redis://localhost:6379/1'

# You can also specify the Redis DB to use
# REDIS_HOST = 'redis.example.com'
# REDIS_PORT = 6380
# REDIS_DB = 3
# REDIS_PASSWORD = 'very secret'

# Queues to listen on
QUEUES = ['high', 'default', 'low']

# If you're using Sentry to collect your runtime exceptions, you can use this
# to configure RQ for it in a single step
# The 'sync+' prefix is required for raven: https://github.com/nvie/rq/issues/350#issuecomment-43592410
SENTRY_DSN = 'sync+http://public:firstname.lastname@example.org/1'

# If you want custom worker name
# NAME = 'worker-1024'
```
The example above shows all the options that are currently supported.
The `QUEUES` and `REDIS_PASSWORD` settings are new since 0.3.3.
To specify which module to read settings from, use the `-c` option:
```
$ rq worker -c settings
```
There are times when you want to customize the worker’s behavior. Some of the more common requests so far are:
You can use the
-w option to specify a different worker class to use:
```
$ rq worker -w 'path.to.GeventWorker'
```
You can tell the worker to use a custom class for jobs and queues using `--job-class` and/or `--queue-class`:
```
$ rq worker --job-class 'custom.JobClass' --queue-class 'custom.QueueClass'
```
Don’t forget to use those same classes when enqueueing the jobs.
```python
from rq import Queue
from rq.job import Job

class CustomJob(Job):
    pass

class CustomQueue(Queue):
    job_class = CustomJob

queue = CustomQueue('default', connection=redis_conn)
queue.enqueue(some_func)
```
When a Job times out, the worker will try to kill it using the supplied `death_penalty_class` (default: `UnixSignalDeathPenalty`). This can be overridden if you wish to attempt to kill jobs in an application specific or 'cleaner' manner.
`DeathPenalty` classes are constructed with the following arguments:

```python
BaseDeathPenalty(timeout, JobTimeoutException, job_id=job.id)
```
If you need to handle errors differently for different types of jobs, or simply want to customize RQ's default error handling behavior, run `rq worker` using the `--exception-handler` option:
```
$ rq worker --exception-handler 'path.to.my.ErrorHandler'

# Multiple exception handlers are also supported
$ rq worker --exception-handler 'path.to.my.ErrorHandler' --exception-handler 'another.ErrorHandler'
```
If you want to disable RQ's default exception handler, use the `--disable-default-exception-handler` option:
```
$ rq worker --exception-handler 'path.to.my.ErrorHandler' --disable-default-exception-handler
```
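An exception handler is a callable that receives the failing job and the exception info, and returns a boolean: `True` means fall through to the next handler in the chain, `False` means stop. The handler below is a hypothetical sketch (the `requeue` call and the retry policy are illustrative, not guarantees about RQ's API):

```python
def retry_handler(job, exc_type, exc_value, traceback):
    """Hypothetical handler: requeue jobs that failed with a transient
    ConnectionError, up to three times; let everything else fall
    through to the next handler."""
    retries = job.meta.get('retries', 0)
    if issubclass(exc_type, ConnectionError) and retries < 3:
        job.meta['retries'] = retries + 1
        job.requeue()   # illustrative: put the job back for another try
        return False    # stop the chain; this failure is handled
    return True         # continue to the next handler in the chain
```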