Migrating django_manage from Ansible 1.9 to Ansible 2.5

Here is one of the errors (to help you get here from Google):

"msg": "\n:stderr: /usr/bin/env: python\r: No such file or directory\n"

I had an old Django manage.py file that worked with Ansible 1.9. It did not have a shebang (e.g. #!/usr/bin/env python). The error it generated was:

"msg": "[Errno 8] Exec format error", "rc": 8

Reading the docs, I found this key tidbit:

As of ansible 2.x, your manage.py application must be executable (rwxr-xr-x), and must have a valid shebang, i.e. “#!/usr/bin/env python”, for invoking the appropriate Python interpreter.

That seemed easy enough, but I was not sure what shebang to use since I was using a virtualenv. The quickest way to find out seemed to be to edit manage.py on the server with nano, run the failing Ansible task, and repeat until it worked.

Then things got weird. Every reasonable shebang led to the error I started this post with. Also, for no obvious reason, nano started warning me that the file was in DOS format, which is strange since this project has always lived on Linux. Maybe when I was cutting and pasting shebangs into the file I ended up with some DOS line endings. Running “dos2unix manage.py” fixed it.

The shebang that worked was:

#!/usr/bin/env python
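
Since manage.py also has to be executable (rwxr-xr-x), it may be worth having Ansible enforce that too. A minimal sketch, assuming a django_manage_path variable that points at the directory containing manage.py:

- name: Make sure manage.py is executable (rwxr-xr-x)
  become: yes
  file:
    # django_manage_path is an assumed variable, not from the original playbook
    path: "{{ django_manage_path }}/manage.py"
    mode: "0755"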

UPDATE

It appears that the DOS line endings entered manage.py a long time ago, possibly when the project was created. I ended up fixing the file in my repo.

 


Logging a Big Process

Let’s say you have a website. When the user clicks a link, it runs a process that generates a huge report, with lots of ins and outs and lots of places where some of the data might be questionable, but not bad enough to give up. What you really want to do is warn the user. The problem is that your code is pretty modular. You could pass around a variable to keep track of the issues, but wouldn’t it be better if there were a more unified approach? Some sort of error accumulator… maybe a logger. Wait, that’s built into Python. This works:

# other_module.py
import logging
logger = logging.getLogger('my logger')

def f3():
    logger.debug('test f3')

and the main module, logging_example.py:

# logging_example.py
import logging
try:
    from cStringIO import StringIO      # Python 2
except ImportError:
    from io import StringIO
    
import other_module

logger = logging.getLogger('my logger')
logger.setLevel(logging.DEBUG)
logger.propagate = False
formatter = logging.Formatter('%(module)s.%(funcName)s:%(lineno)d - %(message)s')
log_stream = StringIO()
handler = logging.StreamHandler(log_stream)
handler.setFormatter(formatter)
logger.addHandler(handler)


def f1():
    logger.error('test f1')


def f2():
    logger.debug('test f2')


def complex_process():
    # Clear stream to limit errors to each call to main
    log_stream.seek(0)
    log_stream.truncate()

    f1()
    f2()
    other_module.f3()
    errors = log_stream.getvalue()
    print(errors)

complex_process()
complex_process()

Results in:

logging_example.f1:21 - test f1
logging_example.f2:25 - test f2
other_module.f3:8 - test f3

logging_example.f1:21 - test f1
logging_example.f2:25 - test f2
other_module.f3:8 - test f3

Warning: do not use logging.basicConfig() for this. As stated in the docs, “it’s intended as a one-off simple configuration facility, only the first call will actually do anything: subsequent calls are effectively no-ops.” If you do use this function, it is likely it will not do anything and the logger you end up using will be the root logger. One big problem with that is that you will then receive log messages from lots of other, unexpected modules, such as third-party libraries.
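
Here is a minimal sketch of that one-off behavior: once the first call has installed a handler on the root logger, a second call is silently ignored.

import logging

# The first call configures the root logger: level DEBUG, custom format.
logging.basicConfig(level=logging.DEBUG, format='%(levelname)s %(message)s')

# The root logger already has a handler, so this call is effectively a no-op.
logging.basicConfig(level=logging.ERROR, format='IGNORED %(message)s')

# Prints "DEBUG still using the first format" via the root logger.
logging.debug('still using the first format')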

Receive Text Messages on a Website Using Twilio

In a nutshell:

  1. Set up an account and get a phone number on Twilio
  2. Configure Twilio
  3. When someone sends a text to the phone number, Twilio packs info about the text into an HTTP POST and sends it to your website at the URL you provided when configuring Twilio

Here is a sample of the POST:

{
    u'Body': [u'Hello world!'],
    u'MessageSid': [u'SMxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'],
    u'FromZip': [u'53706'],
    u'SmsStatus': [u'received'],
    u'SmsMessageSid': [u'SMxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'],
    u'AccountSid': [u'AC63444b3f5817d72cbbadb35a71bdd2e9'],
    u'FromCity': [u'MADISON'],
    u'ApiVersion': [u'2010-04-01'],
    u'To': [u'+16089999999'],
    u'From': [u'+16081234567'],
    u'NumMedia': [u'0'],
    u'ToZip': [u'53703'],
    u'ToCountry': [u'US'],
    u'NumSegments': [u'1'],
    u'ToState': [u'WI'],
    u'SmsSid': [u'SMxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'],
    u'ToCity': [u'MADISON'],
    u'FromState': [u'WI'],
    u'FromCountry': [u'US']
}
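
If your site is Django (as in the rest of this blog), the URL you give Twilio just needs to point at a view that reads those POST fields. Here is a rough sketch; the view name and the empty TwiML reply are placeholders of mine:

# views.py - sketch only; incoming_sms is a placeholder name
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt  # Twilio posts from outside your site, so skip Django's CSRF check
def incoming_sms(request):
    body = request.POST.get('Body', '')
    sender = request.POST.get('From', '')
    # ... do something with the message ...
    # An empty TwiML <Response/> tells Twilio not to send a reply text
    return HttpResponse('<?xml version="1.0" encoding="UTF-8"?><Response></Response>',
                        content_type='text/xml')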

That should get you started. See the Twilio documentation for details.

SSH Stops Working After Ubuntu Upgrade

I recently upgraded from Ubuntu 14.04 to 16.04. It went pretty smoothly, except some of my SSH connections stopped working. It turns out the upgrade automatically updated OpenSSH from version 6 to version 7, and version 7 no longer allows keys that it considers insecure.

To see what version you are running:

ssh -V

The big problem is the server I need to connect to is managed by a “Windows” guy who hates Linux. Getting him to update the key is going to take a while and I need to connect NOW.

The solution is at: http://www.openssh.com/legacy.html

I put this in my ~/.ssh/config file:

Host XXX.XXX.XXX.XXX
    HostKeyAlgorithms +ssh-dss

It’s ugly, but it works.
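
If you only need to connect once, the same option can be passed on the command line instead of editing the config file (user and the address are placeholders):

ssh -o HostKeyAlgorithms=+ssh-dss user@XXX.XXX.XXX.XXX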

Using the Ansible Find Command

The “find” module was added in Ansible 2.0. Here is how I used it to change the ownership and permissions on some log files:

- name: Get all log files in django manage.py dir
  find:
    paths: "{{ django_manage_path }}"
    patterns: "*.log"
    recurse: yes
  register: files_to_change
    

- name: Make sure Django log files in manage.py dir are owned by vagrant
  become: yes
  file: path={{ item.path }} owner=vagrant group=admin mode=0660
  with_items: "{{ files_to_change.files }}"

Ansible Could Not Find Templates After Migration from 1.9 to 2.5

I had a playbook that included a task from another role like this:

tasks:
  - include: roles/django/tasks/create_server_settings.yml

This include stopped working when I migrated from Ansible 1.9 to 2.5. The task used the “template” module and Ansible could not find the template. It looked in:

  • roles/django/tasks/templates/server_settings.py.j2
  • roles/django/tasks/server_settings.py.j2

I am using the recommended directory structure with the tasks and templates directories both at the same level in the directory tree, in this case:

  • roles/django/templates/server_settings.py.j2

Switching to the command:

tasks:
  - include_tasks: roles/django/tasks/create_server_settings.yml

did NOT help.

The solution was to use this command:

tasks:
  - include_role:
      name: django
      tasks_from: create_server_settings

Monitoring Django RQ

In my use case, users used a form to put a long-running process on the queue. I wanted to make a page that would allow each user to see the status of the jobs they had queued. This turned out to be slightly more difficult than it should have been.

The first step involved saving the job information to the user’s session:

from datetime import datetime

from django.conf import settings
from django.views.generic import FormView
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse

import pytz
import django_rq

class MyView(FormView):
    def form_valid(self, form):
        queue = django_rq.get_queue('low')
        job = queue.enqueue(
            generate_report_func,
            int(form.cleaned_data['year']),
            email_list=form.cleaned_data['recipients']
        )

        # Save queued job to session as a list so that order is preserved
        now = pytz.timezone(settings.TIME_ZONE).localize(datetime.now())
        enqueued_jobs = self.request.session.get('enqueued_jobs', [])
        enqueued_jobs.append({
            'job_id': job._id,
            'name': 'Revenue and Open POs Report {}'.format(form.cleaned_data['year']),
            'started': now.isoformat(),
            'started_for_display': now.strftime('%Y-%m-%d %H:%M'),
            'queue': 'low'
        })
        self.request.session['enqueued_jobs'] = enqueued_jobs
        self.request.session.modified = True
        return HttpResponseRedirect(reverse('on_queue'))
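
The generate_report_func being enqueued is not shown; only its signature is implied by the call above (a year as a positional argument and an email_list keyword argument). A hypothetical skeleton:

# tasks.py - hypothetical skeleton; only the signature is implied by the enqueue call
def generate_report_func(year, email_list=None):
    # Long-running report generation for the given year goes here,
    # then the result gets emailed to email_list.
    pass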

 
Here is the code for the view that allows the user to see the status of each job:

from datetime import datetime

from django.conf import settings
from django.views.generic import TemplateView

import pytz
import django_rq
from redis import Redis
from rq.registry import StartedJobRegistry

from dateutil.parser import parse


class OnQueueView(TemplateView):
    template_name = 'on_queue.html'

    def get_context_data(self, **kwargs):
        kwargs = super(OnQueueView, self).get_context_data(**kwargs)

        # Make a list of queued jobs
        queue = django_rq.get_queue('low')
        job_status = {x._id: x.status for x in queue.jobs}

        # Make a list of running jobs
        redis_conn = Redis()
        registry = StartedJobRegistry('low', connection=redis_conn)
        for job_id in registry.get_job_ids():
            job_status[job_id] = 'running'

        # Make a list of failed jobs
        for job_id in django_rq.get_failed_queue().job_ids:
            job_status[job_id] = 'failed'

        # Insert status into list of jobs, remove old jobs
        all_jobs = self.request.session.get('enqueued_jobs', [])
        kwargs['jobs'] = []
        now = pytz.timezone(settings.TIME_ZONE).localize(datetime.now())
        self.request.session.modified = False
        for job in all_jobs:
            dt = (now - parse(job['started'])).total_seconds()
            if dt < 3600 * 4:
                if job['job_id'] in job_status:
                    job['status'] = job_status[job['job_id']]
                else:
                    job['status'] = 'completed'
                kwargs['jobs'].append(job)
                self.request.session.modified = True

        if self.request.session.modified:
            self.request.session['enqueued_jobs'] = kwargs['jobs']

        return kwargs