RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis, designed to have a low barrier to entry, and easy to integrate into your web stack.
RQ requires Redis >= 3.0.0.
First, run a Redis server. You can use an existing one. To put jobs on queues, you don't have to do anything special; just define your typically lengthy or blocking function:
```python
import requests

def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())
```
Then, create an RQ queue:
```python
from redis import Redis
from rq import Queue

q = Queue(connection=Redis())
```
And enqueue the function call:
```python
from my_module import count_words_at_url

result = q.enqueue(count_words_at_url, 'http://nvie.com')
```
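Note that enqueue() returns immediately with a Job handle rather than the function's return value. A minimal sketch of checking on such a job once a worker has picked it up, assuming a worker is running and using RQ's Job.get_status() and Job.result accessors:

```python
job = q.enqueue(count_words_at_url, 'http://nvie.com')

# The job has only been put on the queue at this point; a worker runs it later.
print(job.get_status())  # e.g. 'queued', 'started', or 'finished'

# Once a worker has finished the job, its return value becomes available.
print(job.result)        # the word count, e.g. 818
```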
Scheduling jobs is similarly easy:
```python
from datetime import datetime, timedelta

# Schedule job to run at 9:15, October 8th
job = queue.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)

# Schedule job to be run in 10 seconds
job = queue.enqueue_in(timedelta(seconds=10), say_hello)
```
You can also ask RQ to retry failed jobs:
```python
from rq import Retry

# Retry up to 3 times, failed job will be requeued immediately
queue.enqueue(say_hello, retry=Retry(max=3))

# Retry up to 3 times, with configurable intervals between retries
queue.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
```
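Jobs that exhaust their retries end up in the queue's failed job registry, where they can be inspected or requeued. A minimal sketch, assuming RQ's FailedJobRegistry API:

```python
from rq.registry import FailedJobRegistry

registry = FailedJobRegistry(queue=queue)

# List the IDs of jobs that ultimately failed, and put them back on the queue.
for job_id in registry.get_job_ids():
    registry.requeue(job_id)
```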
To start executing enqueued function calls in the background, start a worker from your project’s directory:
```console
$ rq worker --with-scheduler
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
```
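Workers can also be started from Python instead of the command line. A minimal sketch, assuming RQ's Worker class, where with_scheduler=True lets the worker also pick up jobs scheduled with enqueue_at() / enqueue_in():

```python
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis()
queue = Queue(connection=redis_conn)

# Listen on the 'default' queue and also run the scheduler for scheduled jobs.
worker = Worker([queue], connection=redis_conn)
worker.work(with_scheduler=True)
```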
That’s about it.
Simply use the following command to install the latest released version:
```console
pip install rq
```
There are several important concepts in RQ:
- Queue: contains a list of Job instances to be executed in a FIFO manner.
- Job: contains the function to be executed by the worker.
- Worker: responsible for getting Job instances from a Queue and executing them.
- Execution: contains runtime data of a Job, created by a Worker when it executes a Job.
- Result: stores the outcome of an Execution, whether it succeeded or failed.

This project has been inspired by the good parts of Celery, Resque and this snippet, and has been created as a lightweight alternative to existing queueing frameworks, with a low barrier to entry.
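To illustrate how these concepts fit together in code, here is a minimal sketch, assuming a recent RQ version where Job.latest_result() and the Result object are available:

```python
from redis import Redis
from rq import Queue
from my_module import count_words_at_url

queue = Queue(connection=Redis())                           # Queue
job = queue.enqueue(count_words_at_url, 'http://nvie.com')  # Job

# Once a Worker has picked the job up and run it (an Execution),
# the outcome is stored as a Result.
result = job.latest_result()
if result is not None:
    print(result.type)          # e.g. Type.SUCCESSFUL or Type.FAILED
    print(result.return_value)  # the job's return value on success
```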