The OAuth2 spec cleanly separates the role of Authorization Server (AS) from that of Resource Server (RS). The role of the AS, and the whole OAuth2 dance, is to get an access token that will be accepted by an RS.
It’s puzzling. It should be easy, nay, trivial, to implement the Resource Server side in Django, yet it’s not. There are several libraries whose description can be interpreted as “implementing OAuth2”, yet what they all do is implement the Authorization Server side. I want to consume access tokens, not issue them!
Now, in theory the access token could be anything. But common Authorization Server implementations (keycloak, authentik, various hosted services) have converged on issuing signed JSON Web Tokens. So what the resource server needs is to be configured with the key to verify the tokens. We could conceivably hard code the key in source or configuration, but that is a bit of a hassle, and anyway, this is the third decade of the third millennium, quite frankly we shouldn’t have to. All server implementations offer a JWKS endpoint where the currently valid keys can be queried (and even a full autodiscovery endpoint, to discover the JWKS endpoint). An implementation of a resource server should, in theory, only need to be pointed at the JWKS endpoint and everything should just work.
We want to authorize requests. Every time a new request comes in. That’s what a Resource Server does. It uses the provided token to check authorization. And the documentation seems to suggest that the correct way to do this is to fetch the keys from the JWKS endpoint on every request.
WTF?
We’ll need some caching. The documentation is mum on the topic. The implementation however is not. Turns out, they have implemented a cache. Only, they have implemented it on the PyJWKClient object itself. And there’s no easy way to hook up a real cache (such as Django’s).
The usual flow for normal Python web frameworks is that no object survives from request to request. Each request gets a clean slate. They may run in the same process sequentially. In different threads in the same process. In multiple processes. Or even async in the same thread in the same process. With the given example code we would be hitting the authorization server JWKS endpoint for every incoming request, adding huge latencies to processing.
In order to retain even a shred of sanity, we have no choice but to turn the JWKClient into a kind of singleton. It looks like this:
import jwt
from typing import Optional

from django.conf import settings

_jwks_client: Optional[jwt.PyJWKClient] = None

def get_jwks_client() -> jwt.PyJWKClient:
    # PyJWKClient caches responses from the JWKS endpoint *inside* the
    # PyJWKClient object, so keep one instance alive per process
    global _jwks_client
    if _jwks_client:
        return _jwks_client
    _jwks_client = jwt.PyJWKClient(settings.JWKS_URI)
    return _jwks_client
With this definition in place you can get the signing key as signing_key = get_jwks_client().get_signing_key_from_jwt(token) and will at least get some caching within a process, until the server decides to spawn a new process.
Then, to hook up authentication into Django Rest Framework you’ll do something like this (where User.from_token needs to be something that can turn a verified JWT dict into a User object):
import jwt
from rest_framework.authentication import get_authorization_header

def authenticate_request(request):
    if header := get_authorization_header(request):
        match header.split():
            case b"Bearer", token_bytes:
                token = token_bytes.decode("us-ascii", errors="strict")
                signing_key = get_jwks_client().get_signing_key_from_jwt(token)
                data = jwt.decode(token, signing_key.key, algorithms=["RS256"])
                if data:
                    return User.from_token(data), data
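To actually plug this into DRF, one option is a thin authentication class around authenticate_request; this is a sketch of my own (the class name and error handling are assumptions, not from the original), using PyJWT’s PyJWTError base exception:

import jwt
from rest_framework import authentication, exceptions

class JWKSBearerAuthentication(authentication.BaseAuthentication):
    # Hypothetical wrapper: delegate to authenticate_request() above and
    # translate token errors into a DRF authentication failure
    def authenticate(self, request):
        try:
            return authenticate_request(request)
        except jwt.PyJWTError as exc:
            raise exceptions.AuthenticationFailed(str(exc))

The class can then be listed under DEFAULT_AUTHENTICATION_CLASSES in the REST_FRAMEWORK setting.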
We’ve recently begun dockerizing our applications in an effort to make development and deployment easier. One of the challenges was establishing a good baseline Dockerfile which can maximize the benefits of Docker’s caching mechanism and at the same time produce minimal application images without any superfluous content.
The basic installation flow for any Django project (let’s call it foo) is simple enough:
Preconditions:
The foo project has a Django settings module, which contains suitable default settings, especially with regards to the database connection. A requirements.txt file lists all project dependencies. The pip command should be executed in a Python virtual environment (or it may be executed as root in a Docker container).
(Note: In this blog post we’ll mostly ignore the commands to actually get the Django project running within a web server. We’ll end up using gunicorn with WSGI, but won’t comment further on it.)
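That flow is roughly the following (a sketch; exact commands vary by project):

pip install -r requirements.txt
python manage.py collectstatic
python manage.py compilemessages
python manage.py migrate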
This sequence isn’t suitable for a Dockerfile as-is, because the final command in the sequence creates the database within the container image. Except for very specific circumstances this is likely not desired. In a normal deployment the database is located either on a persistent volume mounted from outside, or in another container completely.
First lesson: The Django migrate command needs to be part of the container start script, as opposed to the container build script. It’s harmless/idempotent if the database is already fully migrated, but necessary on the first container start, and on every subsequent update that includes database migrations.
Baseline Dockerfile
A naive Dockerfile and accompanying start script would look like this:
Preconditions:
A requirements.txt with all required Python packages to install exists, as well as a foo.wsgi file to load the WSGI application.
# Dockerfile
FROM python:slim
ENV DJANGO_SETTINGS_MODULE foo.settings
RUN mkdir -p /app
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt gunicorn
RUN python manage.py collectstatic
RUN python manage.py compilemessages
ENTRYPOINT ["/app/docker-entrypoint.sh"]
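The start script referenced in the ENTRYPOINT isn’t shown here; a minimal sketch (assuming gunicorn serving the foo.wsgi module on port 8000) might be:

#!/bin/sh
# docker-entrypoint.sh
# Apply outstanding migrations on every container start, then run the app server
python manage.py migrate --no-input
exec gunicorn --bind 0.0.0.0:8000 foo.wsgi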
This works, but it has two major problems:

Large image size. The entire source checkout of our application will be in the final Docker image. Also, depending on the package requirements we may need to apt-get install a compiler or development package before executing pip install. These will then also be in the final image (and on our production machine).
Long re-build time. Any change to the source directory will invalidate the Docker cache starting with line 6 in the Dockerfile. The pip install will be executed fully from scratch every time.
(Note: We’re using the slim Python docker image. The alpine image would be even smaller, but its use of the musl C library breaks some Python modules. Depending on your dependencies you might be able to swap in python:alpine instead of python:slim.)
Improved Caching
Docker caches all individual build steps, and can use the cache when the same step is applied to the same current state. In our naive Dockerfile all the expensive commands are dependent on the full state of the source checkout, so the cache cannot be used after even the tiniest code change.
The common solution looks like this:
# Dockerfile
FROM python:slim
ENV DJANGO_SETTINGS_MODULE foo.settings
RUN mkdir -p /app
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt gunicorn
COPY . .
RUN python manage.py collectstatic
RUN python manage.py compilemessages
ENTRYPOINT ["/app/docker-entrypoint.sh"]
In this version the pip install command on line 7 can be cached until the requirements.txt or the base image change. Re-build time is drastically reduced, but the image size is unaffected.
Building with setup.py
If we package up our Django project as a proper Python package with a setup.py, we can use pip to install it directly (and could also publish it to PyPI).
If the setup.py lists all project dependencies (including Django) in install_requires, then we’re able to execute (for example in a virtual environment):
pip install .
This will pre-compile and install all our dependencies, and then pre-compile and install all our code, and install everything into the Python path. The main difference from the previous versions is that our own code is pre-compiled too, instead of just being executed from the source checkout. There is little immediate effect from this: the interpreter startup might be slightly faster, because it doesn’t need to compile our code every time. In a web-app environment this is likely not noticeable.
But because our dependencies and our own code are now properly installed in the same place, we can drop our source code from the final container.
(We’ve also likely introduced a problem with non-code files, such as templates and graphics assets, in our project. They will by default not be installed by setup.py. We’ll take care of this later.)
Due to the way Docker works, all changed files of every build step cumulatively determine the final container size. If we install 150MB of build dependencies, 2MB of source code and docs, generate 1MB of pre-compiled code, then delete the build dependencies and source code, our image has grown by 153MB.
This accumulation is per step: Files that aren’t present after a step don’t count towards the total space usage. A common workaround is to stuff the entire build into one step. This approach completely negates any caching: Any change in the source files (which are necessarily part of the step) also requires a complete redo of all dependencies.
Enter multi-stage build: At any point in the Dockerfile we’re allowed to use a new FROM step to create a whole new image within the same file. Later steps can refer to previous images, but only the last image of the file will be considered the output of the image build process.
How do we get the compiled Python code from one image to the next? The Docker COPY command has an optional --from= argument to specify an image as source.
Which files do we copy over? By default, pip installs everything into /usr/local, so we could copy that. An even better approach is to use pip install --prefix=... to install into an isolated non-standard location. This allows us to grab all the files related to our project and no others.
Preconditions:
The Django configuration must set STATIC_ROOT="/app/static.dist". A setup.py must be present to properly install our project. All dependency information needs to be in setup.py, no requirements.txt is used.
# Dockerfile
FROM python:slim as common-base
ENV DJANGO_SETTINGS_MODULE foo.settings
# Intermediate image, all compilation takes place here
FROM common-base as builder
RUN pip install -U pip setuptools
RUN mkdir -p /app
WORKDIR /app
RUN apt-get update && apt-get install -y build-essential python3-dev
RUN mkdir -p /install
COPY . .
RUN sh -c 'pip install --no-warn-script-location --prefix=/install .'
RUN cp -r /install/* /usr/local
RUN sh -c 'python manage.py collectstatic --no-input'
# Final image, just copy over pre-compiled files
FROM common-base
RUN mkdir -p /app
COPY docker-entrypoint.sh /app/
COPY --from=builder /install /usr/local
COPY --from=builder /app/static.dist /app/static.dist
ENTRYPOINT ["/app/docker-entrypoint.sh"]
This will drastically reduce our final image size since neither the build-essential packages nor any of the source dependencies are part of it. However, we’re back to our cache-invalidation problem: any code change invalidates all caches starting at the COPY . . step, requiring Docker to redo the full Python dependency installation.
One possible solution is to re-use the previous trick of copying the requirements.txt first, in isolation, to only install the dependencies. But that would mean we need to manage dependencies in both requirements.txt and setup.py. Is there an easier way?
Multi-Stage, Cache-Friendly Build
The command setup.py egg_info will create a foo.egg-info directory with various bits of information about the package, including a requires.txt file that lists the dependencies.
We’ll execute egg_info in an isolated image, copy the generated requires.txt to a new image (in order to be independent of changes in setup.py other than the list of requirements), then install dependencies using that file. Up to here these steps are fully cacheable unless the list of project dependencies changes. Afterwards we’ll proceed in the usual fashion by copying over the remaining source code and installing it.
(One snag: the generated requires.txt also contains all the extras listed in setup.py, under bracket-separated sections such as [dev]. pip cannot handle that format, so we’ll use grep to cut the file at the first blank line.)
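For illustration, a generated requires.txt might look like this (hypothetical contents); only the part above the first blank line should be fed to pip:

Django>=2.2
psycopg2

[dev]
pytest
pytest-django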
Preconditions:
The Django configuration must set STATIC_ROOT="/app/static.dist". All dependencies for production use are in the normal setup.py dependencies, and not in extras.
# Dockerfile
FROM python:slim as common-base
ENV DJANGO_SETTINGS_MODULE foo.settings
FROM common-base as base-builder
RUN pip install -U pip setuptools
RUN mkdir -p /app
WORKDIR /app
# Stage 1: Extract dependency information from setup.py alone
# Allows docker caching until setup.py changes
FROM base-builder as dependencies
COPY setup.py .
RUN python setup.py egg_info
# Stage 2: Install dependencies based on the information extracted from the previous step
# Caveat: Expects an empty line between base dependencies and extras, doesn't install extras
# Also installs gunicorn in the same step
FROM base-builder as builder
RUN apt-get update && apt-get install -y build-essential python3-dev
RUN mkdir -p /install
COPY --from=dependencies /app/foo.egg-info/requires.txt /tmp/
RUN sh -c 'pip install --no-warn-script-location --prefix=/install $(grep -e ^$ -m 1 -B 9999 /tmp/requires.txt) gunicorn'
# Everything up to here should be fully cacheable unless dependencies change
# Now copy the application code
COPY . .
# Stage 3: Install application
RUN sh -c 'pip install --no-warn-script-location --prefix=/install .'
# Stage 4: Install application into a temporary container, in order to have both source and compiled files
# Compile static assets
FROM builder as static-builder
RUN cp -r /install/* /usr/local
RUN sh -c 'python manage.py collectstatic --no-input'
# Stage 5: Install compiled static assets and support files into clean image
FROM common-base
RUN mkdir -p /app
COPY docker-entrypoint.sh /app/
COPY --from=builder /install /usr/local
COPY --from=static-builder /app/static.dist /app/static.dist
ENTRYPOINT ["/app/docker-entrypoint.sh"]
Addendum: Handling data files
When converting your project to be installable with setup.py, you should make sure that you’re not missing any files in the final build. Run setup.py egg_info and then check the generated foo.egg-info/SOURCES.txt for missing files.
A common trip-up is the distinction between Python packages and ordinary directories. By definition a Python package is a directory that contains an __init__.py file (can be empty). By default setup.py only installs Python packages. So make sure you’ve got __init__.py files also on all intermediate directory levels of your code (check in management/commands, for example).
If your project uses templates or other data files (not covered by collectstatic), you need to do two things to get setup.py to pick them up:
Set include_package_data=True in the call to setuptools.setup() in setup.py.
Add a MANIFEST.in file next to setup.py that contains instructions to include your data files. The most straightforward way for a template directory is something like recursive-include foo/templates *
The section on Including Data Files in the setuptools documentation covers this in more detail.
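Putting the addendum together, a minimal setup.py could look like this (the package name foo and the dependency list are placeholders):

# setup.py
from setuptools import setup, find_packages

setup(
    name="foo",
    version="1.0",
    packages=find_packages(),
    include_package_data=True,  # pick up the files selected by MANIFEST.in
    install_requires=[
        "Django>=2.2",  # all production dependencies go here, no requirements.txt
        "psycopg2",
    ],
)

together with a MANIFEST.in along the lines of:

recursive-include foo/templates *
recursive-include foo/static *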
I have a Django based project, and am doing unit tests with py.test. To debug a test failure it’s sometimes useful to see the actual SQL queries that Django emitted, which is surprisingly hard. I assumed that that would be such an obvious and common need, that a simple switch (for pytest-django) or easy plugin would exist to simply output SQL queries as they are executed.
It is a common need alright (1, 2, 3), but the correct solution is surprisingly unwieldy1.
For one, there is no existing helper or plugin. There are helpers and plugins to count queries and assert a certain query count, which, as a side effect, track all queries and print the executed queries on query count assertion failure, but I’ve yet to find a case where that would be useful to me. More importantly, it’s useless for the exact case here: the stored list of queries is only printed if the expected query count is not matched, not in any other case, such as, say, a failing unit test which you’d want to debug by inspecting the queries that were executed.
Therefore: Fuck it, let’s do it live. Django tracks all queries in the connection object, but in general only if DEBUG=True. For various reasons, tests are executed with DEBUG=False, which is a good thing, since you want to test close to production. Django does, however, provide a context helper, CaptureQueriesContext, to temporarily enable query tracking on a connection, which is what we’ll use instead.
Putting it together, we need to transform a humble assert foo.frobnicate() test into

import pytest

@pytest.mark.django_db
def test_frobnicate_foo(foo):
    from django.db import connection
    from django.test.utils import CaptureQueriesContext
    with CaptureQueriesContext(connection):
        assert foo.frobnicate(), connection.queries[0]['sql']
in order to see the value of the first SQL query in case of assertion failure.
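If you’d rather see all captured queries instead of just the first, CaptureQueriesContext also exposes them; a variation on the same test:

@pytest.mark.django_db
def test_frobnicate_foo(foo):
    from django.db import connection
    from django.test.utils import CaptureQueriesContext
    with CaptureQueriesContext(connection) as ctx:
        result = foo.frobnicate()
    assert result, "\n".join(q["sql"] for q in ctx.captured_queries)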
At some point someone™ should write a generic plugin to do that.
There are several incorrect solutions on StackOverflow, such as the one that starts with “First, subclass TestCase”, which doesn’t apply to py.test, or the ever helpful “try using django-debug-toolbar”, which doesn’t apply to unit tests in general. ↩
For a project with a Raspberry Pi (Zero W) I needed a simple and easy input device to change a numerical value. So I bought some rotary encoders off Amazon.
If you search the Internet for information/tutorials on how to use a “KY-040” rotary encoder with Linux and the Raspberry Pi, you’ll find a dozen people or so who’ve done that, and written about it. Naturally, I’ll need to do that too — with a twist.
The most often referenced code to operate the KY-040 seems to be from Martin O’Hanlon’s Raspberry Pi and KY040 Rotary Encoder blog post, which even ended up as a Python module, KY040. The downside of this approach is that it’s very unreliable: the code triggers on an edge of one of the GPIO inputs and then reads the other input, all in Python code. To make this sort of work it needs long debounce times. The net result is that the code misses many turn events if the shaft is turned too fast, and sometimes reports the wrong turn direction.
I’m also not positive that Martin’s explanation of the encoder’s signal output is correct. The Keyes KY-040 Arduino Rotary Encoder User Manual has a different explanation of the working principle, and some Arduino code. The difference is that, although the pins on the module are marked “CLK” and “DT” (for clock and data), it’s more common for rotary encoders to simply have a pin “A” and a pin “B”.
This matches what I’ve seen on this module: With pins A and B the most important distinction is the order in which they generate edges. If you only look for edges on one pin (“clock”) and then sample the other pin (“data”) you’ll kind-of also get information about the turns, but depending on the edge rate and sample speed it’s going to be unreliable.
It’s possible to observe both edges in Python with RPi.GPIO, but again, there’s a lot of overhead for what should really be near real-time processing, and it’s not fully reliable.
Good thing we’re using Linux which has device drivers for all sorts of things. Including, of course, a rotary encoder connected to GPIOs: rotary-encoder.txt (includes nice ASCII art on the operational principle).
Good thing also we’re using the Raspberry Pi, which has a matching device tree overlay (README for precompiled firmware).
(Note: If you’re compiling your own kernel, you’ll need the Raspberry Pi kernel starting with 4.9.y, CONFIG_INPUT_GPIO_ROTARY_ENCODER, and you’ll probably also want CONFIG_INPUT_EVDEV. The rpi-firmware with compiled overlays needs to be recent-ish, ~mid January 2018, for these examples to work.)
To enable/configure the rotary-encoder device tree overlay, simply put something like the following into /boot/config.txt (with the encoder connected to pins 23 and 24 on the Raspberry Pi):
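Something along these lines should do (parameter names are those of the rotary-encoder overlay; adjust the pins to your wiring):

dtoverlay=rotary-encoder,pin_a=23,pin_b=24,relative_axis=1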
After a reboot you’ll have a new device in /dev/input/ for the rotary encoder. You can use the evtest tool (as in evtest /dev/input/event0) to look at the events it generates and confirm that it reacts perfectly to every turn of the encoder, without missing a movement or confusing the direction.
While you’re at it, you might also want to add the middle button as a key (mine is connected to pin 22):
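Again as a sketch, using the gpio-key overlay (keycode 28 is KEY_ENTER, which the example program below waits for):

dtoverlay=gpio-key,gpio=22,keycode=28,label=ENTER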
In order to make use of this in your Python programs: Use python-evdev.
# -*- encoding: utf-8 -*-
from __future__ import print_function

import evdev
import select

# Open all input devices and index them by file descriptor
devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
devices = {dev.fd: dev for dev in devices}

value = 1
print("Value: {0}".format(value))

done = False
while not done:
    r, w, x = select.select(devices, [], [])
    for fd in r:
        for event in devices[fd].read():
            event = evdev.util.categorize(event)
            if isinstance(event, evdev.events.RelEvent):
                # Rotary encoder: relative axis event, +1/-1 per detent
                value = value + event.event.value
                print("Value: {0}".format(value))
            elif isinstance(event, evdev.events.KeyEvent):
                # Push button: exit on release of the ENTER key
                if event.keycode == "KEY_ENTER" and event.keystate == event.key_up:
                    done = True
                    break
Interesting problem: You have RSA signatures and the signed data, and want to know the RSA public key that can be used to verify the signatures. For older signature schemes this is possible, if you have at least two signatures (or an oracle that can provide signatures on request).
Math is not my strong suit, but I found the necessary formula in this Cryptography StackExchange post: RSA public key recovery from signatures. It has the general idea, but is light on details and actual code.
Tools:
OpenSSL to generate examples
SageMath for the actual calculations. It has an absolutely wonderful Jupyter notebook interface.
First, let’s generate an example key and two example files. We’ll use 512 bits RSA for this example, which is about the minimum key size we can use, just to keep the examples short (in both screen real estate and calculation size). Don’t worry: while the calculation is ~30 seconds for 512 bits RSA, it’ll only grow to ~2.5 minutes for real-world 2048 bits RSA.
RSA signatures are complicated beasts. In theory, you only have to hash the input and apply the RSA operation with the private key (that is, ‘decrypt’ it), but for various reasons this is highly insecure and never done in practice.
Instead, we’ll let OpenSSL handle the generation of signatures for our examples:
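The exact invocations aren’t reproduced here; a plausible sketch, using the key and file names that show up later (privkey.pem, hallowelt.txt, hallootto.txt), is:

openssl genrsa -out privkey.pem 512
echo "Hallo Welt" > hallowelt.txt
echo "Hallo Otto" > hallootto.txt
openssl dgst -sha256 -sign privkey.pem -out hallowelt.txt.sig hallowelt.txt
openssl dgst -sha256 -sign privkey.pem -out hallootto.txt.sig hallootto.txt
openssl rsautl -verify -raw -hexdump -inkey privkey.pem -in hallowelt.txt.sig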
The last command ‘encrypts’ the signature (that is: applies the RSA operation with the public key) and prints a hexdump of the result. In the hexdump we see:
Some padding: 00 01 ff ff … ff 00
An ASN.1 structure, consisting of
A sequence (tag 30, 49 bytes), of
A sequence (tag 30, 13 bytes), of
An object identifier (tag 06, 9 bytes) for sha256
A NULL value (tag 05, 0 bytes)
An octet string (tag 04, 32 bytes) with
The SHA-256 hash (3e6ff8…a10c2a) of the signed data
The signature follows the PKCS#1 v1.5 standard for RSA signatures. All the extra stuff serves to distinguish signatures with SHA-256 from signatures with other hashes, and to prevent some attacks on the padding. It’s also the reason why we can’t go much below 512-bit RSA if we want to demo with SHA-256. (It must be noted that PKCS#1 v1.5 padding shouldn’t be used anymore. The new standard is RSASSA-PSS, which has a robust security proof, but is also randomized and completely foils the technique in this blog post.)
Let’s define the first set of functions to generate this sort of padding:
import hashlib

def pkcs1_padding(size_bytes, hexdigest, hashfn):
    # DER-encoded DigestInfo (hash OID + digest), wrapped in PKCS#1 v1.5 padding
    oid = {hashlib.sha256: '608648016503040201'}[hashfn]
    result = '06' + ("%02X" % (len(oid)//2)) + oid + '05' + '00'
    result = '30' + ("%02X" % (len(result)//2)) + result
    result = result + '04' + ("%02X" % (len(hexdigest)//2)) + hexdigest
    result = '30' + ("%02X" % (len(result)//2)) + result
    result = '0001' + ('ff' * (size_bytes - 3 - len(result)//2)) + '00' + result
    return result

def hash_pad(size_bytes, data, hashfn):
    hexdigest = hashfn(data).hexdigest()
    return pkcs1_padding(size_bytes, hexdigest, hashfn)
To perform the gcd calculation, you need for each signature the corresponding signed data, the hash function used, and the public exponent of the RSA key pair. Both hash function and public exponent may need to be guessed, but the hash is usually SHA-256, and the exponent is usually 0x10001 (65537) or 3. The underlying observation: since s^e ≡ m (mod n) for a valid signature s over the padded message m, each s^e - m is a multiple of n, so the gcd of two such values is n (or a small multiple of it).
The full code is as follows:
import binascii, hashlib

def message_sig_pair(size_bytes, data, signature, hashfn=hashlib.sha256):
    # Pair of (padded message hash, signature), both as integers
    return ( Integer('0x' + hash_pad(size_bytes, data, hashfn)),
             Integer('0x' + binascii.hexlify(signature).decode()) )

def find_n(*filenames):
    data_raw = []
    signature_raw = []
    for fn in filenames:
        data_raw.append( open(fn, 'rb').read() )
        signature_raw.append( open(fn + '.sig', 'rb').read() )

    size_bytes = len(signature_raw[0])
    if any(len(s) != size_bytes for s in signature_raw):
        raise Exception("All signature sizes must be identical")

    for hashfn in [hashlib.sha256]:
        pairs = [message_sig_pair(size_bytes, m, s, hashfn) for (m, s) in zip(data_raw, signature_raw)]
        for e in [0x10001, 3, 17]:
            gcd_input = [ (s^e - m) for (m, s) in pairs ]  # each is a multiple of n
            result = gcd(*gcd_input)
            if result != 1:
                return (hashfn, e, result)
If we test it, we’ll find:
time hashfn, e, n = find_n('hallowelt.txt', 'hallootto.txt');
CPU times: user 27.3 s, sys: 609 ms, total: 27.9 s
Wall time: 28.4 s
For comparison, the public key belonging to the original private key:
$ openssl rsa -in privkey.pem -pubout
writing RSA key
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANnaxQliHtfye0hoqxh09kl3jGPxEAA2
boJ88Y/XDbHifzmQJSTimqK/sxZ2J8qqQI4X6QfuPETgMh3Hf7iJAHUCAwEAAQ==
-----END PUBLIC KEY-----
Warning: The gcd method may sometimes return not n but a product k * n for a smallish value of k. You may need to check for small prime factors and remove them.
The find_n function is written to accept arbitrary arguments (must be filenames where the file contains the data and the filename appended with .sig contains the signature) but will only work when given exactly two arguments. FactHacks: Batch gcd has a batchgcd_faster function that will work on an arbitrary number of arguments (but is slower in the 2‑argument case).
I have a setup in which a frontend nginx server handles multiple names on one IP for HTTP port 80 and HTTPS port 443 (through SNI) and forwards the requests to distinct backend HTTP servers, based on name and/or path. One of these backends is a WordPress installation, and that one is problematic: WordPress tends to insert absolute URLs into everything. And since it doesn’t know that the frontend was accessed through HTTPS, it inserts HTTP URLs. Worse, when using the WordPress HTTPS plugin and choosing to force administrative logins through HTTPS only, you can end up with an endless redirect loop.
A quasi-standard signal from frontend to backend in this case is the X-Forwarded-Ssl HTTP header, which, when set to on, should indicate that the frontend used HTTPS. Unfortunately, WordPress ignores the header. Instead, is_ssl() only checks the HTTPS server variable (and alternatively checks whether the server port is 443).
The WordPress documentation referenced above offers a simple fix by overwriting the HTTPS variable in the wp-config.php file. Instead, for nginx with FastCGI, I propose this simpler, more generic, and, in my humble opinion, more elegant fix:
The default nginx FastCGI configuration includes a file fastcgi_params, which, among others, has this line:
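In a reasonably recent nginx that line is something like:

fastcgi_param  HTTPS  $https if_not_empty;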
All that’s needed, then, is to override this parameter from the forwarded header in the backend’s FastCGI configuration, and to set the X-Forwarded-Ssl header in the corresponding proxy configuration on the frontend nginx, as sketched below.
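A minimal sketch of both sides (assuming a standard FastCGI location block for WordPress on the backend and a proxy_pass block on the frontend):

# backend nginx, in the FastCGI location block serving WordPress
fastcgi_param  HTTPS  $http_x_forwarded_ssl if_not_empty;

# frontend nginx, in the HTTPS server block that proxies to the backend
proxy_set_header  X-Forwarded-Ssl  on;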
This will set the HTTPS server variable when the X-Forwarded-Ssl HTTP header is set, allowing all kinds of PHP programs to transparently discover that they’re being accessed through HTTPS.
A project I’m newly affiliated with has used Google Groups for all their private group communication so far. Since I’m not a big fan of storing private data in the proprietary data silos of cloud providers, this is a situation I want to rectify. Why use Google Groups when you can set up GNU mailman yourself and not have all data and meta-data pass through Google?
There’s one caveat: While Google provides an export of group member lists, there’s no export functionality for the current archive. Which in this case represented 2 years’ worth of fruitful discussion and organizational knowledge. Some tools exist to try and dump all of a group’s archive, but none really agreed with me. So I rolled my own.
The result is a Python script that uses the lynx browser to access the Google Groups API (so that it can work with a Google login cookie as an authenticated user); it will enumerate all messages in a group’s archive and download each into a separate file as a standard RFC (2)822 message. While programming this I found that some of the messages are returned by the API in a mangled form, so I also wrote a tool (which can be invoked with an option of the downloader) that can partially reverse this mangling.
With the message files from my download tool, formail from the procmail package, and some shell scripting I was able to generate an mbox file with the entire group’s archive, which could then easily be imported into mailman.
I recently set up an owncloud instance for private use and found that the load time was abysmal. Showing the default “Files” page spends ~21 seconds for ~140 HTTP requests1, even though my HTTP setup is already quite pimped (with SPDY and all). What is worse: The time does not reduce on subsequent visits. No cache-control headers are sent and all the Javascript and CSS resources are requested again. There is ETag and If-None-Match in place, so most of the requests just yield a 304 Not Modified response, but they still block the loading process. Which is even less understandable if you look at the requests: All Javascript and CSS resources are using a “?v=md5($owncloud_version)” cache buster, so they would be fully cacheable with no ill effects.
For a standard owncloud installation in /var/www with Apache: Open your /var/www/owncloud/.htaccess in a text editor and append the following lines (Update 2014-10-09 18:35 UTC: Add missing \ before .)
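The lines in question aren’t reproduced here; a sketch that matches the 30-day lifetime mentioned below (and requires mod_headers) would be:

<IfModule mod_headers.c>
  <FilesMatch "\.(css|js)$">
    Header set Cache-Control "max-age=2592000, public"
  </FilesMatch>
</IfModule>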
then in a shell make sure that the headers module is enabled in Apache:
sudo a2enmod headers
and restart Apache as prompted by a2enmod.
The next time you load the owncloud web interface your browser will be told to cache the Javascript and CSS resources for 30 days, and the time after that it won’t request them again. The “Files” app load time dropped from 21 to 6 seconds for me – with 16 instead of ~140 requests. That’s almost reasonable!
In Firefox: Press Ctrl-Shift‑Q to bring up the Network web developer tool to watch the drama unfold in its entirety. ↩
While upgrading my server infrastructure I noticed that I really should be providing IPv6 not only for the services (like this HTTP/HTTPS site) but also for the DNS itself, and also at some point might want to enable DNSSEC for my domain to join in the fight with DANE against the mafia that is the global X.509 certification authority infrastructure.
My DNS servers have been powered by DJB’s most excellent djbdns package1 since I first started hosting these services myself. The software package truly is fire and forget: you set it up once and it will continue working, with no maintenance or pesky software upgrades, year after year. That’s one thing Dan’s software is famous for.
The other thing everyone knows about his software is that if you want to add features, you’ll have to apply third-party patches. A well-known patch set for IPv6 in tinydns is available from my friend Fefe, and is also included in Debian-based distributions in a package called dbndns. Peter Conrad wrote DNSSEC support for tinydns (explicitly basing on Fefe’s IPv6 patches).
When trying to set that up, I quickly became frustrated: applying several patches from several distinct locations, one after the other, doesn’t seem like the way software should be distributed in 2014. Also, Peter’s code has a few easily patched problems. These are the steps I ended up with:
Import djbdns‑1.05.tar.gz. No signature check was made since no signed version is available, but I checked that I was using the same package as Ubuntu/Debian.
Apply djbdns‑1.05-test27.diff.bz2. I checked Fefe’s signature and verified his key’s fingerprint using a separate channel.
Apply 0003-djbdns-misformats-some-long-response-packets-patch‑a.diff from the Ubuntu package.
Apply 0004-dnscache.c‑allow-a-maximum-of-20-concurrent-outgoing.diff from the Ubuntu package.
Apply djbdns-ipv6-make.patch. No signature check was done, but the patch is trivial.
Import tinydnssec‑1.05–1.3.tar.bz2. I checked Peter’s signature and verified his key through the web of trust.
Apply djbdns‑1.05-dnssec.patch from the aforementioned package.
Small fixup for conf-cc and conf-ld: Do not use diet for compilation or linking (was introduced with Fefe’s patch).
Small fixup for tinydns-sign.pl: Use Digest::SHA instead of Digest::SHA1.
Small fixup for run-tests.sh: GNU tail does not understand the +n syntax.
Small fixup for run-tests.sh: Need bash, say so (not all /bin/sh are bash).
The resulting source builds fine, and the tests mostly run fine. Tests 1 and 7 each fail in 50% of cases due to the randomized record ordering in the tinydns output which is not accounted for in the test code.
djbdns is in the public domain, tinydnssec is published under GPL‑3, which means that the combined source also falls under GPL‑3.
The software package is ‘djbdns’; among the servers in it are ‘tinydns’ to host an authoritative UDP DNS server and ‘axfrdns’ to host a TCP DNS server ↩
Historically, baud rates on UNIX (later: POSIX) systems have been manipulated using the tcgetattr()/tcsetattr() functions with a struct termios and a very limited set of possible rates identified by constants such as B0, B50, B75, B110, …, through B9600. These were later extended with select values such as B38400 and B115200. Hardware has since evolved to handle almost any value as a baud rate, even much higher ones. The interface, however, has never been properly repaired.
Linux used a technique called “baud rate aliasing” to circumvent that problem in the past: A special mode can be set so that a request for B38400 would not actually set 38.4kBaud but instead a separately defined other baud rate with names like spd_hi (“high”?) for 57.6kBaud, spd_shi (“super high”?) for 230kBaud or spd_warp for 460kBaud. These names may give you an idea how old and limited this interface is.
For this reason there is a new ioctl interface to set an arbitrary baud rate by actually using an integer to store the requested baud rate: TCGETS2/TCSETS2 using struct termios2.
Both documentation and example code for this method are sparse. A bug report to implement this in libc6 is still open. Thankfully that bug report includes example C code to use the interface directly. The constant to tell the structure that an OTHER Baud rate has been set has unwisely been called BOTHER, which, being a proper English word, makes it completely impossible to find any information on the internet about. So, to be more explicit (and hopefully be found by any future search for this topic): This is an example on how to set a custom baud rate with the BOTHER flag on Linux in Perl.
Transforming the C example into Perl code using the Perl ioctl function should be easy, right? Muahahaha. Every example on how to use Perl ioctl on the Internet (that I’ve reviewed) is wrong and/or broken. Even better: the perl distribution itself is broken in this instance. Quoth /usr/lib/perl/5.18.2/asm-generic/ioctls.ph on Ubuntu 14.04:
(hint: count the number of opening and closing parentheses.)
Even if that Perl code was syntactically correct, it’s wrong in principle: The third argument to the _IOR macro should be the struct termios2 structure size. On x86_64 it’s 44 bytes, not 1.
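For illustration only (not part of the original post), the same 44-byte structure and ioctl dance can be sketched in Python; the constants are the x86_64 values and may differ on other architectures:

import array, fcntl, struct

# x86_64 values (assumed); verify for your architecture
TCGETS2 = 0x802C542A
TCSETS2 = 0x402C542B
BOTHER  = 0o010000   # "other" baud rate, taken from c_ispeed/c_ospeed
CBAUD   = 0o010017   # mask of the legacy baud rate bits in c_cflag

# struct termios2: c_iflag, c_oflag, c_cflag, c_lflag (uint32 each),
# c_line (uint8), c_cc[19], c_ispeed, c_ospeed (uint32 each) -- 44 bytes
TERMIOS2 = "4IB19s2I"

def set_custom_baudrate(fd, baudrate):
    buf = array.array("B", bytes(struct.calcsize(TERMIOS2)))
    fcntl.ioctl(fd, TCGETS2, buf, True)
    iflag, oflag, cflag, lflag, line, cc, _, _ = struct.unpack(TERMIOS2, buf)
    cflag = (cflag & ~CBAUD) | BOTHER
    struct.pack_into(TERMIOS2, buf, 0, iflag, oflag, cflag, lflag, line, cc, baudrate, baudrate)
    fcntl.ioctl(fd, TCSETS2, buf)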
So, I’ve written code with two purposes: to correctly use Perl’s ioctl, and to set a custom serial baud rate under Linux.
The definitions of both TCGETS2 and struct termios2 may be architecture dependent, so there’s a helper program in C to output the parameters for the current architecture.
All the code (set baud rate with TCSETS2 BOTHER in C, set baud rate with TCSETS2 BOTHER in Perl, C helper to output constants for the current architecture, Makefile) I released into the public domain at github.com/henryk/perl-baudrate/.