Django Development Server with DEBUG=False

When you run the Django development server with DEBUG=True, static files are served straight from their source locations, before collectstatic is ever run. This is what you want in development, because changes you make are immediately reflected in your pages. ALLOWED_HOSTS is also not enforced.

Of course you would never run production with DEBUG=True, which means your development system is not a very good replica of production when it comes to static files. If you are having problems with static files, it would be nice to debug them in development under production-like conditions. It turns out you can. Just run:

python manage.py runserver --insecure

You will also need to set up local directories for collectstatic to write to:

MEDIA_ROOT = 'my local media root'
STATIC_ROOT = 'my local static root'

Then run collectstatic. In my case, I also needed to turn off django-pipeline.
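For completeness, the collection step is the standard management command:

python manage.py collectstatic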

Celery Logging

When you start Celery, there is a flag for setting the logging level:

celery worker -A my_app -E -l WARNING --concurrency=1

Simple enough, and there is a similar flag for Celery beat. The problem is that even with the level set fairly high, if you are running Celery beat you will still get DEBUG- and INFO-level messages every time beat fires. This can create a huge log file of useless information.

It seems there is a bug somewhere in the logging system. One post I found suggests that django-debug-toolbar can be the culprit. Unfortunately for me, I am not running django-debug-toolbar, so apparently some other third-party app is causing the problem.

My solution is simple. I created a cron job to clear the file every hour, using the command:

cat /dev/null > /path/to/logfile
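The crontab entry for this is a one-liner; the log path is a placeholder for your actual file:

0 * * * * cat /dev/null > /path/to/logfile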

Celery Debugging, Tricks, Etc…

I am not sure why, but Celery always takes me forever to get working. Here are some notes on getting it running with Django.

Celery Troubleshooting

Let's say you have Supervisor daemonizing Celery. You can confirm the Celery processes are running with:

ps aux | grep celery

But things are not quite right yet. What to do? Troubleshooting will go a lot quicker if you do not run Celery as a daemon. Just cd to the directory containing manage.py and run:

celery worker -A <my_project> -E -l debug -B

This prints plenty of messages to the console and is much easier to start and stop.
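If you want a quick end-to-end check while the worker is running in the foreground, a trivial task helps. This is just a sketch; ping is a made-up name, not part of any real project:

from celery import shared_task

@shared_task
def ping():
    # A no-op task: if the worker is wired up correctly, this shows
    # up in the worker's console output when called with ping.delay().
    return 'pong'

Call ping.delay() from python manage.py shell and watch the worker console for the task being received and executed.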

Restarting Supervisor

As you debug, you may discover errors in your Supervisor config files. After changing them, you need to restart Supervisor:

service supervisor restart
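If you only changed program definitions, supervisorctl can pick up the changes without bouncing the whole daemon:

supervisorctl reread
supervisorctl update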

Celery Beat

If you want to run periodic tasks, Celery beat is the answer. In the past, I started beat by adding the -B flag to the command that starts the worker, but the docs say not to use -B in production. Instead, run beat as its own process, as described in the Celery daemonization docs.
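Using the same style of command as the worker above, running beat on its own looks something like this (in production you would wrap it in its own Supervisor program):

celery beat -A my_app -l WARNING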

Celery Logging

In the Celery docs, they recommend setting up task logging like this:

from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)

Easy enough. But where do those messages go? As written, you would expect them to go to the root logger, and if you have not configured the root logger, they should fall through to a StreamHandler writing to stdout. I set up Supervisor to redirect stdout to a file, yet no Celery logging appears there.

Others have suggested that Celery mucks with the logger tree. Maybe that's true. In any case, you can "kind of" solve it by adding a 'celery' logger with its own handler. I say "kind of" because I still cannot figure out where Celery beat tasks log to. The logging config is below.

from os.path import join, normpath  # needed for the 'filename' entry below

# SITE_ROOT is assumed to be defined elsewhere in settings.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s',
            'datefmt': '%y %b %d, %H:%M:%S',
        },
    },
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler',
        },
        'celery': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': normpath(join(SITE_ROOT, 'celery.log')),
            'formatter': 'simple',
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'celery': {
            'handlers': ['celery'],
            'level': 'DEBUG',
        },
    },
}
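With this config in place, logging from a task looks like the sketch below; my_task is just a made-up example:

from celery import shared_task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@shared_task
def my_task():
    # get_task_logger parents this logger under 'celery.task', so these
    # records should propagate up to the 'celery' logger configured
    # above and land in celery.log.
    logger.info('my_task ran')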

Celery Beat: Connection refused

In my case, this error was due to RabbitMQ being down. Running:

ps aux | grep rabbit

showed only:

/usr/lib/erlang/erts-5.8.5/bin/epmd -daemon

To start RabbitMQ, run:

sudo rabbitmq-server -detached

Now you should see a bunch of new RabbitMQ processes.
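You can also check the broker directly with:

sudo rabbitmqctl status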

Stopping Celery Worker Processes

If you use Supervisor to start and stop Celery, you will notice that worker processes accumulate. Evidently starting with Supervisor creates workers, but stopping does not shut them all down. I tried various settings in my Supervisor config file with no luck, and searching did not turn up an answer. So for now I am using a shell command to handle this:

ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9

This is from the Celery docs.
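For the record, the Supervisor options usually suggested for this are stopasgroup and killasgroup, which make Supervisor signal the whole process group instead of just the parent. A sketch of the relevant lines in the program section:

[program:celery]
; signal the entire process group on stop/kill so pool
; children do not outlive the parent worker
stopasgroup=true
killasgroup=true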