Testing Django When Using @cached_property

I use the @cached_property decorator quite a bit. It's pretty straightforward: usually it's set it and forget it. As noted in the Django docs, the cached value persists as long as the instance does.

However, it can cause problems during testing if you want to change its value. Here is how to reset the cached value (from Stack Overflow):

from datetime import datetime

from django.utils.functional import cached_property


class SomeClass(object):

    @cached_property
    def expensive_property(self):
        return datetime.now()


obj = SomeClass()
print(obj.expensive_property)
print(obj.expensive_property)  # same value as before; the result is cached on the instance
del obj.expensive_property     # clears the cached value
print(obj.expensive_property)  # outputs a new value
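The same trick is what you need in a test: because @cached_property stores its result as an instance attribute, you can assign a known value to it, or delete it to force a recomputation. A minimal sketch using Django's test classes (the class and property names are just the example above):

from django.test import SimpleTestCase


class SomeClassTests(SimpleTestCase):

    def test_expensive_property_can_be_controlled(self):
        # SomeClass is the example class defined above.
        obj = SomeClass()

        # Assigning to the property simply sets the instance attribute,
        # shadowing the cached_property descriptor for this instance.
        obj.expensive_property = 'fixed value'
        self.assertEqual(obj.expensive_property, 'fixed value')

        # Deleting the attribute clears the cache, so the next access
        # recomputes the property.
        del obj.expensive_property
        self.assertNotEqual(obj.expensive_property, 'fixed value')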

Django, Ajax and HTML Select

This happens all the time: one field in a form determines the options in another select field. One way to handle this is to do an Ajax POST. It's not difficult, but there are lots of parts to remember. Here is one way to do it (using jQuery).

The Django Ajax handler:

from django.http import JsonResponse

from . import models  # the app's models; get_select_choices() is a helper defined elsewhere


def get_options(request):
    project_id = int(request.POST['project_id'])
    project = models.Project.objects.get(id=project_id)

    choices = get_select_choices(project)
    options = []
    for choice_id, choice_label in choices:
        option = '<option value="{}">{}</option>'.format(choice_id, choice_label)
        options.append(option)
    return JsonResponse({'options': options})
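For the Ajax POST to reach this view, it also needs a URL route. A minimal sketch for a Django 1.x urls.py (the URL pattern and import path are assumptions; the name 'my_url' matches the {% url %} tag used in the JavaScript below):

from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^get-options/$', views.get_options, name='my_url'),
]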

The JavaScript:

<script type="text/javascript">
    var project_field, phase_select;

    var get_phases_for_project = function(){
        var project = project_field.val();

        if (project === ""){
            phase_select.empty();
            phase_select.append($("<option></option>").attr("value", '').text('--------------'));
            return;  // nothing selected, so don't post an empty project_id
        }

        $.post(
            "{% url 'my_url' %}",
            {project_id: project, csrfmiddlewaretoken: '{{ csrf_token }}'},
            /**
             * @param data
             * @param data.options
             */
            function(data){
                phase_select.empty();
                $.each(data.options, function (index, value) {
                    phase_select.append(value);
                });
            }
        );
    };

    $(document).ready(function(){
        project_field = $('#id_project');
        project_field.change(get_phases_for_project);
        phase_select = $("#id_phase");
    });
</script>
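One of the parts that is easy to forget: the dynamically loaded phase value still has to validate when the form is submitted. A hedged sketch of one way the Django form might rebuild the phase choices from the posted data (the form, model, and field names here are assumptions; get_select_choices() is the same helper used in the view above):

from django import forms

from . import models


class ProjectPhaseForm(forms.Form):
    project = forms.ModelChoiceField(queryset=models.Project.objects.all())
    phase = forms.ChoiceField(choices=[])

    def __init__(self, *args, **kwargs):
        super(ProjectPhaseForm, self).__init__(*args, **kwargs)
        # Rebuild the phase choices from the posted project so the
        # Ajax-loaded value passes validation.
        project_id = self.data.get('project')
        if project_id:
            project = models.Project.objects.get(id=project_id)
            self.fields['phase'].choices = get_select_choices(project)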

SSL, Django Development Server and Chrome

I am developing a Django site that uses Stripe. Even for testing, Stripe requires HTTPS. In the past, I used django-sslserver version 0.19 and ignored Chrome's complaints about the certificate being self-signed. Today (Sept 2017), none of that worked.

The first thing I did was upgrade django-sslserver to 0.20. This crashed with an error related to:

ssl.PROTOCOL_TLSv1_2

It turns out the ssl module is part of Python's standard library, and that constant is not defined in Python 2.7.6. Reverting to django-sslserver 0.19 solved that problem.
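A quick way to check whether your interpreter's ssl module defines the constant (just a sanity check, not part of the original fix):

import ssl

# PROTOCOL_TLSv1_2 was added in Python 2.7.9 / 3.4; older builds (like 2.7.6) lack it.
print(getattr(ssl, 'PROTOCOL_TLSv1_2', 'not available in this Python build'))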

Next, Chrome/Stripe will no longer let you ignore the SSL certificate warnings. This blog post by Alexander Zeitler does a pretty good job explaining how to solve this problem. If you run into this error:

error on line -1 of /dev/fd/11
140736435860488:error:02001009:system library:fopen:Bad file descriptor:bss_file.c:175:fopen('/dev/fd/11','rb')
140736435860488:error:2006D002:BIO routines:BIO_new_file:system lib:bss_file.c:184:
140736435860488:error:0E078002:configuration file routines:DEF_LOAD:system lib:conf_def.c:197:

remove the sudo calls from inside createselfsignedcertificate.sh and instead run the script itself with sudo.

When all of that is done, you need to tell Chrome to trust your Certificate Authority by going to "Advanced Settings -> Manage Certificates", then "Authorities -> Import". Select the rootCA.pem in the ssl directory created by the scripts above.

This is probably already set up on your machine, but check /etc/hosts to make sure localhost points to the IP address django-sslserver is using (most likely 127.0.0.1). Then, in the browser, go to:

https://localhost:8000/

Launch django-sslserver using something like:

python manage.py runsslserver --certificate ~/ssl/server.crt --key ~/ssl/server.key
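Once the server is up, it can be handy to confirm that the certificate chain actually validates before fighting with Chrome. A minimal sketch using the requests library (the rootCA.pem path is an assumption based on the directory layout above):

import os

import requests

# Point verify= at the root CA created by the scripts above; if the
# certificate chain is wrong, this raises requests.exceptions.SSLError.
ca_path = os.path.expanduser('~/ssl/rootCA.pem')
response = requests.get('https://localhost:8000/', verify=ca_path)
print(response.status_code)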


Headless Chrome with Python and Selenium

In version 59, Google Chrome gained the option to run headlessly! Here is what works for me (Ubuntu 14.04, Chrome 60.0.3112.78, chromedriver 2.31):

from selenium import webdriver


def get_page_html(url, headless=False, screen_shot_path=None):
    """Load a page in Chrome and return its HTML, optionally headlessly."""
    if headless:
        options = webdriver.ChromeOptions()
        options.add_argument('--headless')
        options.add_argument('--window-size=1200,2900')  # the flag expects width,height
        options.add_argument('--disable-gpu')
        selenium = webdriver.Chrome("/usr/local/share/chromedriver", chrome_options=options)
    else:
        selenium = webdriver.Chrome("/usr/local/share/chromedriver")
        selenium.set_window_size(1500, 900)

    selenium.get(url)
    page_html = selenium.page_source
    if screen_shot_path:
        selenium.save_screenshot(screen_shot_path)

    selenium.quit()
    return page_html

Make sure your version of chromedriver is up to date. Also, notice that options.add_argument() expects the arguments to look just as they would if you were running google-chrome from the command line (e.g. prefixed with --).
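A quick usage sketch of the function above (the URL and screenshot path are just examples):

# Fetch a page headlessly and save a screenshot alongside the HTML.
html = get_page_html('https://example.com/', headless=True,
                     screen_shot_path='/tmp/example.png')
print(len(html))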

Daemonizing Django-RQ using Supervisor

I was trying to set up a task to be run from Django-RQ. The task involved scraping a webpage using Selenium and Google Chrome. It worked great in development, but not in production. The error message indicated that there were problems starting Chrome.

One big difference between development and production was that in production I was daemonizing Django-RQ using Supervisor. Some queued tasks would run, just not the ones involving Selenium. The clue came when I stopped Django-RQ using supervisorctl and then started it from the command line: now the Selenium tasks worked.

I tracked down the problem by adding this snippet to the top of the task that used Selenium:


import os
import json

# Dump the PATH the worker actually sees so it can be compared with the shell's PATH.
json.dump(os.environ['PATH'].split(':'), open('debug_file.json', 'w'))

This revealed that the environment PATH when running from the command line was much different from the PATH when running Django-RQ under Supervisor. Adding some of those paths to the Supervisor config solved the problem.

[program:django_rq]
command= {{ virtualenv_path }}/bin/python manage.py rqworker high default low
stdout_logfile = /var/log/redis/redis_6379.log

numprocs=1

directory={{ django_manage_path }}
environment = DJANGO_SETTINGS_MODULE="{{ django_settings_import }}",PATH="{{ virtualenv_path }}/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
user = vagrant
stopsignal=TERM

autostart=true
autorestart=true