This document describes how RQ works internally when enqueuing or dequeueing.
Whenever a function call gets enqueued, RQ does two things:

- It persists the job (the function call and its metadata) in a Redis hash; and
- It pushes the job's ID onto the requested queue.
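As an illustration, the two enqueue steps can be sketched with plain Python dicts and lists standing in for Redis hashes and lists (the `enqueue` helper and the stand-in structures are hypothetical, not RQ's actual code):

```python
# Sketch of the two enqueue steps; dicts/lists stand in for Redis structures.
import pickle
import uuid
from datetime import datetime, timezone

hashes = {}  # stand-in for Redis hashes: 'rq:job:<id>' -> field mapping
queues = {}  # stand-in for Redis lists:  'rq:queue:<name>' -> [job IDs]

def enqueue(queue_name, func, *args):
    job_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).isoformat()
    # Step 1: persist the job as a hash under the rq:job: prefix
    hashes['rq:job:' + job_id] = {
        'created_at': now,
        'enqueued_at': now,
        'origin': queue_name,
        'data': pickle.dumps((func.__name__, args)),
        'description': '%s(%s)' % (func.__name__, ', '.join(map(repr, args))),
    }
    # Step 2: push the job's ID onto the requested queue
    queues.setdefault('rq:queue:' + queue_name, []).append(job_id)
    return job_id
```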
All jobs are stored in Redis under the `rq:job:` prefix, for example:

```
rq:job:2eafc1e6-48c2-464b-a0ff-88fd199d039c
```
The keys of such a job hash are:
```
created_at  => '2012-02-13 14:35:16+0000'
enqueued_at => '2012-02-13 14:35:16+0000'
origin      => 'default'
data        => <pickled representation of the function call>
description => "count_words_at_url('http://nvie.com')"
```
Depending on whether the job ran successfully or failed, the following keys are available, too:
```
ended_at => '2012-02-13 14:41:33+0000'
result   => <pickled return value>
exc_info => <exception information>
```
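As a rough illustration of how these two outcome keys could be produced (a sketch, not RQ's actual implementation; `run_job` is a made-up helper):

```python
# Sketch: the extra hash keys a finished job might carry (illustrative only).
import pickle
import traceback

def run_job(func, *args):
    """Execute a call and return the outcome-dependent hash keys."""
    try:
        rv = func(*args)
        return {'result': pickle.dumps(rv)}          # pickled return value
    except Exception:
        return {'exc_info': traceback.format_exc()}  # exception information

ok = run_job(lambda x: x * 2, 21)    # success: yields a 'result' key
bad = run_job(lambda x: 1 / x, 0)    # failure: yields an 'exc_info' key
```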
Whenever a dequeue is requested, an RQ worker does two things:

- It pops a job ID from the queue and fetches the job data belonging to that job ID; and
- It starts executing the function call.

When the job finishes, one of the following happens:

- If the job succeeded, its return value is written to the `result` hash key and the hash itself is expired after 500 seconds; or
- If the job failed, the exception information is written to the `exc_info` hash key and the job ID is pushed onto the failed queue.
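The dequeue-and-execute flow can be sketched with in-memory stand-ins for the Redis structures (all names here are illustrative, not RQ's real internals):

```python
# Sketch of a worker's dequeue/execute cycle; dicts/lists stand in for Redis.
import pickle
import traceback

hashes = {'rq:job:abc': {'data': pickle.dumps(('double', (21,)))}}
queue = ['abc']                       # job IDs on the work queue
failed = []                           # job IDs on the failed queue
FUNCS = {'double': lambda x: x * 2}   # name -> callable lookup (illustrative)

def work_one():
    job_id = queue.pop(0)                    # pop a job ID from the queue
    job = hashes.get('rq:job:' + job_id)
    if job is None:
        return                               # no hash found: job was cancelled
    name, args = pickle.loads(job['data'])   # fetch and unpickle the call
    try:
        # success: store the pickled return value (expired after 500s in Redis)
        job['result'] = pickle.dumps(FUNCS[name](*args))
    except Exception:
        # failure: store exception info and push the ID onto the failed queue
        job['exc_info'] = traceback.format_exc()
        failed.append(job_id)

work_one()
```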
Any job ID encountered by a worker for which no job hash is found in Redis is simply ignored. This makes it easy to cancel a job: just remove its job hash. In Python:
```python
from rq import cancel_job

cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')
```
Note that it is irrelevant on which queue the job resides. When a worker eventually pops the job ID off the queue and notices that the job hash does not exist (anymore), it simply discards the job ID and continues with the next one.
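A toy illustration of this behaviour, again with plain Python stand-ins for Redis (the `cancel` and `work_one` helpers are hypothetical, not RQ's API):

```python
# Sketch: cancelling a job by deleting its hash; the worker skips the stale ID.
hashes = {'rq:job:xyz': {'description': "count_words_at_url('http://nvie.com')"}}
queue = ['xyz']

def cancel(job_id):
    # Cancelling == deleting the job hash; the ID stays on the queue for now.
    hashes.pop('rq:job:' + job_id, None)

def work_one():
    job_id = queue.pop(0)
    if ('rq:job:' + job_id) not in hashes:
        return 'skipped'    # no job hash found: discard the ID and move on
    return 'executed'
```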