Celery is a Memory Hog

In my business I need to create lots of low volume Django sites. I do this on Webfaction, where I am limited on how much memory I can use.

In a recent project, I needed to mirror a portion of another database (Viewpoint) in my Django database. Every night a cron job would launch a custom Django command to update the Django database. Nothing too tricky about that… until the customer said he needed to be able to launch that sync command occasionally during the day, maybe one or two times/week.

A quick search of the Internets pulled up Celery. I knew it would be overkill. But I thought it might be useful in other projects. So maybe it was time to give it a try. Ugh. What a mistake. It took forever to get Celery, Redis and Supervisor all working. Then, adding insult to injury, all that code used about 300MB of memory. That's about 3 times as much as a Django site running under Apache.

Next I moved on to Django-cron. A quick read of the docs revealed this:

"The 2 most common ways in which most people go about this is either writing custom python scripts or a management command per cron (leads to too many management commands!)."

But wait! I already had the Django command that runs under cron written and working. I just added a cron job to run every 5 minutes and check for a manual sync.


Here is a post where I dive deeper into memory use.


Celery 3.1, Django and Webfaction

Despite wanting to keep things as simple as possible, occasionally I run into circumstances where I need my Django app to be able to run long processes. While there are lots of different ways to hack this together, Celery offers a clear and well tested option. These are my notes on the install and config process on Webfaction.

About Celery

What is Celery? Celery is a python module that does three things. It provides decorators to make python functions into executable tasks. It provides methods for putting those tasks into a message queue. It also provides a daemon that monitors the message queue and executes tasks. The message queue is a separate piece of software that runs in the background, for example RabbitMQ or Redis.

Configuring Redis

The Celery docs seem to prefer RabbitMQ over Redis. Here is what they say about Redis: "… is more susceptible to data loss in the event of abrupt termination or power failures." However, RabbitMQ is based on Erlang, so in addition to installing RabbitMQ, you have to install Erlang.

Redis depends only on GCC and libc, so the install is really easy. Since my use case could easily tolerate abrupt termination, I chose Redis. The quickstart covered everything I needed to get things working on my Ubuntu 12.04 development machine. This tutorial is also pretty good. It's not necessary reading, but it's pretty concise and contains a lot of useful information that may give you ideas for using Redis outside of Celery.

To get Redis running on Webfaction, check out this post on Stackoverflow. Most notably, you will need to use Webfaction's control panel to create a custom app listening on a port.

Configuring Celery

Here are the Celery docs. Prior to Celery version 3.1, the Django app for interacting with Celery (django-celery) was a separate package. In version 3.1, much of the Django functionality is built in. Keep that in mind if you run into problems and google around for solutions.

I am using Redis to store the results, so there is no need to install django-celery.

I am putting celery in my virtualenv. To install, activate your virtualenv and run:

pip install celery

Do not do the general "First steps with Celery" section. It shows a way to configure Celery that is somewhat different from how the docs show to set up Celery for Django. No doubt you could reconcile the approaches, but who needs that? Instead jump to the Django section of the Celery docs, and for now ignore its suggestion to read the "First steps" section.

One thing that gave me lots of grief: my Django project has the "old" structure, where manage.py is in the same dir as settings.py. This messed up the relative and absolute imports. To solve it, I renamed the celery.py module in the root of my Django project to my_celery.py, and changed __init__.py to reflect this. I also removed the project name from the settings path, so it became:

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')
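For reference, the resulting module is just the standard Django example from the Celery 3.1 docs, renamed and adapted to the old-style layout. Roughly (a sketch; the names are mine and the layout is the old one described above):

```python
# my_celery.py -- sits next to settings.py in the old-style project layout
from __future__ import absolute_import

import os

from celery import Celery

# old layout: settings.py is importable directly as 'settings'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')

from django.conf import settings  # noqa: E402

app = Celery('my_celery')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
```

And __init__.py in the same dir does something like `from my_celery import app as celery_app`, so the app loads when Django starts.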

Redis listens on a pre-defined default port (6379). If you use all the defaults in development, things go pretty smoothly. The problem is, you cannot use that same port on Webfaction. The best solution is to run Redis in development on the same port you will use on Webfaction, which makes it much more likely to work when you deploy. To start Redis on a specific port:

redis-server --port 13789

Next, you need to add the broker setting to your Django settings, using the same custom port:

BROKER_URL = 'redis://localhost:13789'
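Since I am storing results in Redis as well, the Celery-related settings end up looking roughly like this (Celery 3.1 setting names; the port is the custom Webfaction one from the example above):

```python
# settings.py -- Celery settings, assuming Redis on the custom port 13789
BROKER_URL = 'redis://localhost:13789'
# keep task results in Redis too (no django-celery needed)
CELERY_RESULT_BACKEND = 'redis://localhost:13789'
```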

Now cd into your django project root and run:

celery -A my_celery worker -l info

Pay attention to the info Celery displays when it starts. You should see your settings in the "transport" and "results" lines. You should also see your Celery tasks listed under "[tasks]".

Once you get everything running, it's time to use Supervisor to launch Celery as a daemon.


To make all this work, you need to manage some daemons. The consensus choice for that is Supervisor. The docs are pretty good. I used pip to install it:

pip install supervisor

I installed it outside of my virtualenv so that it could be available to all my virtualenvs. Make sure to read the section "Creating a Configuration File". It is short and to the point. In the celery section "[program:celeryworker]" make sure to point the command to your virtualenv:

command=/home/me/.virtualenvs/my_virt/bin/celery worker -A my_celery --loglevel=INFO

To activate your virtualenv, point the PATH environment variable in that same program section at the bin directory of your virtualenv:

environment=PATH="/home/me/.virtualenvs/my_virt/bin"

This does the same thing that the virtualenv activate command does.

Also set directory to point to the root of your project:

directory=/home/me/3s_hts ; directory to cwd to before exec (def no cwd)
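Putting the pieces together, the whole program section in supervisord.conf looks something like this (the paths are mine, and the autostart/autorestart lines are just the usual defaults I'd suggest; adjust to your own virtualenv and project):

```ini
[program:celeryworker]
; full path into the virtualenv, so the right celery binary is used
command=/home/me/.virtualenvs/my_virt/bin/celery worker -A my_celery --loglevel=INFO
; cwd to the Django project root before exec
directory=/home/me/3s_hts
; put the virtualenv's bin dir on PATH (same effect as 'activate')
environment=PATH="/home/me/.virtualenvs/my_virt/bin"
autostart=true
autorestart=true
```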

Once it's installed and configured, you can run it with:

supervisord
To shutdown supervisor first stop all processes with:

supervisorctl stop all

Then kill supervisor with a normal kill command.

When I start Celery this way, I get 9 processes devoted to it. For my needs this is overkill. To get fewer processes, change the "command" line in supervisord.conf to:

command=/home/athena/.virtualenvs/qdb6/bin/celery -A my_celery --concurrency=3 worker


Adding Tasks

So you are working in development mode. You fire up the Celery worker and start writing some tasks in your Python/Django project. You assume your new tasks are being auto-detected, which seems like no big deal because the development server restarts when you save a file. Then you go to run one and you get:

“Received unregistered task of type”

It turns out that each time you add a task, you need to restart Celery. If you look carefully at the messages as Celery loads, you will see a list of tasks that it has found.

Restarting Celery

If you use supervisorctl to stop celery, it will NOT stop the celery workers. To do that, you need a command like:

ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9

This is from the Celery docs. I tried various Supervisor settings, such as stopsignal and killasgroup, but none of them stopped the workers. I posted this solution on Stackoverflow. You might want to check that out to see if someone has come up with a better solution.