Understanding Common Table Expressions in SQL

One of the lesser-known features of modern SQL is the so-called “Common Table Expression” (CTE), also known as a “WITH query”. I’ll explain the mental model that helped me make sense of CTEs and how to use them to execute recursive queries. Afterwards I’ll show how to apply these techniques in Django.

Syntactically a CTE consists of one or more statements marked with WITH and placed before the statement they relate to, within the same SQL query. Conceptually these additional statements behave as if defining a view, or temporary table(s), that is valid only within this one SQL query.

The intended use is to simplify complex or repeated operations by pulling them out of the main query. Let’s say you have normalized your database beyond all reason and have the following schema for storing names [1]:

CREATE TABLE name (
    id bigint NOT NULL,
    name character varying(50) NOT NULL,
    CONSTRAINT name_pkey PRIMARY KEY (id)
);
CREATE TABLE person (
    id bigint NOT NULL,
    first_name_id bigint NOT NULL,
    last_name_id bigint NOT NULL,
    CONSTRAINT person_pkey PRIMARY KEY (id),
    CONSTRAINT first_name FOREIGN KEY (first_name_id)
        REFERENCES name (id),
    CONSTRAINT last_name FOREIGN KEY (last_name_id)
        REFERENCES name (id)
);

Given this schema, you’d have to use something like SELECT CONCAT(first.name, ' ', last.name) everywhere you wanted a full name, together with a join along the foreign keys.
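
For instance, searching for a full-name substring without a CTE might look like this (a sketch against the schema above; note that the CONCAT expression appears twice):

SELECT
  p.id AS id,
  CONCAT(first.name, ' ', last.name) AS name
FROM
  person p
  LEFT JOIN name first ON first.id = p.first_name_id
  LEFT JOIN name last ON last.id = p.last_name_id
WHERE
  CONCAT(first.name, ' ', last.name) LIKE '%om Ridd%';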

Even this small example becomes tiresome pretty fast. It’s worse for more complex cases and gets even more complicated when you consider computed or aggregate functions.

The WITH statement lets you extract the complications from your query and get them over with first. A query for full name substrings could look like this:

WITH full (id, name) AS (
  SELECT 
    p.id AS id,
    CONCAT(first.name, ' ', last.name) AS name
  FROM
    person p
    LEFT JOIN name first ON first.id = p.first_name_id
    LEFT JOIN name last ON last.id = p.last_name_id
)
SELECT id, name FROM full
WHERE name LIKE '%om Ridd%';

This behaves as if a temporary table named full with columns id, name is created and filled with the results from the first SELECT statement (the CTE), just before executing the second, main, SELECT statement. In the main SELECT you do not need to worry about the JOINs or other details from the inside of the CTE. It will appear as if it were a normal table.

Multiple CTEs in one query are possible, and neither the CTEs nor the main query is limited to SELECT. The PostgreSQL documentation has an example with DELETE … RETURNING and INSERT INTO … SELECT that moves rows between tables.
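
Here’s a sketch in the same spirit (the tables event_log and event_log_archive are made up for this example and assumed to have identical columns): the CTE deletes old rows and hands them, via RETURNING, to the main statement, which inserts them into the archive table.

WITH moved_rows AS (
  DELETE FROM event_log
    WHERE created_at < '2020-01-01'
    RETURNING *
)
INSERT INTO event_log_archive
SELECT * FROM moved_rows;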

But so far, this is only syntactic sugar, an easier way to write something you could also express otherwise. You could achieve the same result with a (possibly materialized) view, though you’d have to modify the database schema. Or you could inline everything from the CTE into your main query, taking great care not to trip up when using DISTINCT or GROUP BY. Or you might have an ORM that handles the complicated queries for you and allows reuse of complex expressions.
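
For comparison, a sketch of the view-based alternative (same definition, now as a permanent schema object; note that full is a reserved word in some SQL dialects, so you may need to quote or rename it):

CREATE VIEW full AS
  SELECT
    p.id AS id,
    CONCAT(first.name, ' ', last.name) AS name
  FROM
    person p
    LEFT JOIN name first ON first.id = p.first_name_id
    LEFT JOIN name last ON last.id = p.last_name_id;

SELECT id, name FROM full
WHERE name LIKE '%om Ridd%';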

Then everything changes when you learn about WITH RECURSIVE. The magic consists of two building blocks: First, a WITH RECURSIVE statement may refer to itself. That is, the SELECT that fills data into your imaginary temporary table can read from that same imaginary temporary table. This, by itself, is not very useful, since the table starts out empty. The second building block is therefore a UNION with a non-recursive SELECT that seeds the table with initial data.

The canonical use case of a recursive query looks like this:

WITH RECURSIVE my_cte AS (
  SELECT ... /* base case*/
    FROM somewhere_else
    WHERE ...
  UNION
  SELECT ... /* recursive step */
    FROM somewhere_else JOIN my_cte
    WHERE ...
)
SELECT * FROM my_cte;

The two SELECT statements in the CTE perform the functions of base case and recursive step in the definition of recursion. The semantics of WITH RECURSIVE are such that the recursive step is repeated, storing its results in the imaginary temporary table, until no new rows are added. In other words, the stop condition is implicitly fixed and cannot be altered. You have to make sure that the recursion terminates eventually, for example through the WHERE condition of the recursive step, or (in PostgreSQL) with a LIMIT on the outer query, which works because rows are only computed as they are requested.

An artificial example would be to create a sequence of numbers:

WITH RECURSIVE numbers (n) AS (
  SELECT 1 AS n
  UNION
  SELECT numbers.n + 1 AS n
    FROM numbers
)
SELECT * FROM numbers LIMIT 10;

This example shows that the base case doesn’t have to come from another table. You can also see that in most simple cases the final, main SELECT is just a dummy SELECT * (in this case with an added LIMIT) because most of the heavy lifting is done in the CTE.

Recursive CTEs allow SQL to do things that are otherwise not possible. A prime example is operating on trees in adjacency-list form. This is the simplest and most obvious way to represent hierarchical data, but without recursive CTEs some common queries [2], such as retrieving all ancestors of a node, cannot be expressed directly.

To illustrate here’s an example with the required basics of a simple hierarchical tree structure:

CREATE TABLE node (
    id bigint NOT NULL,
    parent_id bigint,
    name character varying (50) NOT NULL,
    CONSTRAINT node_pkey PRIMARY KEY (id),
    CONSTRAINT parent FOREIGN KEY (parent_id)
        REFERENCES node (id)
);

Every node has a name and an optional foreign key reference to its parent node. It’s very easy to query for a node’s parent node, or all child nodes of a specific node. All other queries are more complicated, or impossible, without a recursive CTE. Let’s take this fictional table:

 id | name       | parent_id
----+------------+-----------
  1 | Enoch      | NULL
  2 | Methuselah | 1
  3 | Lamech     | 2
  4 | Noah       | 3
  5 | Shem       | 4
  6 | Ham        | 4
  7 | Japeth     | 4

To get all children of Noah:

SELECT child.name
  FROM node child
    JOIN node parent ON child.parent_id = parent.id
  WHERE parent.name = 'Noah';

To get Lamech’s father:

SELECT parent.name
  FROM node child
    JOIN node parent ON child.parent_id = parent.id
  WHERE child.name = 'Lamech';

You could extend the queries with additional JOINs to also handle grandparents, great-grandparents and so on, but it’s impossible in normal SQL to handle arbitrarily long chains of ancestors or descendants. Now consider this recursive query for all of Japeth’s ancestors:

WITH RECURSIVE ancestors (id, name, parent_id) AS (
  SELECT id, name, parent_id
    FROM node
    WHERE name = 'Japeth'
  UNION
  SELECT parent.id, parent.name, parent.parent_id
    FROM node parent
      JOIN ancestors child ON parent.id = child.parent_id
)
SELECT name FROM ancestors;

The second SELECT is a bit confusing. Here’s what happens: At first the imaginary temporary table contains only one row, corresponding to Japeth, the starting point for the following steps. The second SELECT then, for every node already in ancestors (now aliased to child, because these are all children considered so far), finds its parent from node and adds it to the ancestors.

So at every step the imaginary temporary table ancestors contains the set of all ancestors discovered so far, and their parents (that is, all nodes whose id appears as the parent_id of a row already in the set) are added, until no new parents are discovered.
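
With the sample data above, evaluation proceeds roughly like this:

Step 0: Japeth          (base case)
Step 1: + Noah          (Japeth’s parent)
Step 2: + Lamech        (Noah’s parent)
Step 3: + Methuselah    (Lamech’s parent)
Step 4: + Enoch         (Methuselah’s parent)
Step 5: no new rows, the recursion stops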

A common variant is to include a path-length variable to quantify the degree of relationship (and, for example, to be able to query for all paths of exactly a given length). Another technique is not to pass the entire row through the CTE, but to operate only on the primary keys and JOIN for the remaining columns later. Let’s look at an example for all descendants of Methuselah:

WITH RECURSIVE descendants (id, n) AS (
  SELECT id, 0
    FROM node
    WHERE name = 'Methuselah'
  UNION
  SELECT child.id, parent.n + 1
    FROM node child
      JOIN descendants parent ON parent.id = child.parent_id
)
SELECT descendants.n, node.name
  FROM descendants
    LEFT JOIN node ON descendants.id = node.id;

Note a common pattern worth keeping in mind: In the second SELECT, the row from node (aliased child) conceptually matches our CTE’s name (descendants), while the reference to the CTE itself takes the opposite role (parent)! We’re adding a node child for every parent already in the CTE.

Bonus Round: CTEs in Django

The django-cte package allows using CTEs in the Django ORM with the normal query API. The equivalent Django code for the last example looks like this:

# models.py
from django.db import models
from django_cte import CTEManager

class Node(models.Model):
  objects = CTEManager()
  id = models.AutoField(primary_key=True)
  name = models.CharField(max_length=50, null=False)
  parent = models.ForeignKey("self", null=True, on_delete=models.CASCADE)

# queries (e.g. in a view or management command)
from django.db.models import Value, IntegerField
from django_cte import With
from .models import Node

descendants = With.recursive(
  lambda cte: Node.objects.filter(
      name="Methusaleh"
    ).values(
      "id",
      n=Value(0, output_field=IntegerField())
    ).union(
      cte.join(Node, parent_id=cte.col.id).values(
        "id",
        n=cte.col.n + Value(1, output_field=IntegerField())
      )
    )
)

descendants_set = descendants.join(
  Node, id=descendants.col.id
).with_cte(descendants).annotate(
  n=descendants.col.n
)

A few observations to keep in mind:

  • In Django all database expressions need to start with a QuerySet.
  • The django_cte.With object wraps a QuerySet to allow additional methods, including with_cte() which adds the common table expression. Remember that in SQL the CTE goes before the main query (the QuerySet), which might be confusing here.
  • In order to map the self-referential nature of WITH RECURSIVE to Python syntax, django_cte.With.recursive() takes a callable that produces a QuerySet. The callable receives a reference to the CTE.
  • django_cte.With.recursive() needs to be JOINed to the underlying Model to be useful. You also need to .annotate() any computed columns from your CTE to use them outside of the CTE.
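
Putting it all together, descendants_set behaves like any other queryset; a minimal usage sketch (the ordering and the extra filter are purely illustrative):

# Iterate over all descendants together with their path length.
for node in descendants_set.order_by("n", "name"):
  print(node.n, node.name)

# The annotated path length can be used in further filtering, e.g. only
# children and grandchildren of Methuselah:
close_family = descendants_set.filter(n__range=(1, 2))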


  1. Don’t. Instead, please read Falsehoods Programmers Believe About Names 

  2. Workarounds include nested sets and materialized paths, but with additional requirements on the data structures and some maintenance cost on updates 

An Efficient Multi-Stage Build for Python Django in Docker

[Image: a Docker-brand motorized tricycle, looking fragile and overladen]

We’ve recently begun dockerizing our applications in an effort to make development and deployment easier. One of the challenges was establishing a good baseline Dockerfile which can maximize the benefits of Docker’s caching mechanism and at the same time provide minimal application images without any superfluous contents.

The basic installation flow for any Django project (let’s call it foo) is simple enough:

export DJANGO_SETTINGS_MODULE=foo.settings
pip install -r requirements.txt
python manage.py collectstatic
python manage.py compilemessages
python manage.py migrate

(Note: In this blog post we’ll mostly ignore the commands to actually get the Django project running within a web server. We’ll end up using gunicorn with WSGI, but won’t comment further on it.)

This sequence isn’t suitable for a Dockerfile as-is, because the final command in the sequence creates the database within the container image. Except for very specific circumstances this is likely not desired. In a normal deployment the database is located either on a persistent volume mounted from outside, or in another container completely.

First lesson: The Django migrate command needs to be part of the container start script, as opposed to the container build script. It’s harmless/idempotent if the database is already fully migrated, but necessary on the first container start, and on every subsequent update that includes database migrations.

Baseline Dockerfile

A naive Dockerfile and accompanying start script would look like this:

# Dockerfile
FROM python:slim
ENV DJANGO_SETTINGS_MODULE foo.settings
RUN mkdir -p /app
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt gunicorn
RUN python manage.py collectstatic
RUN python manage.py compilemessages
ENTRYPOINT ["/app/docker-entrypoint.sh"]
# docker-entrypoint.sh
#!/bin/sh
set -e
cd /app
python manage.py migrate
exec gunicorn --bind '[::]:80' --worker-tmp-dir /dev/shm --workers "${GUNICORN_WORKERS:-3}" foo.wsgi:application

(The --worker-tmp-dir bit is a workaround for the way Docker mounts /tmp. See Configuring Gunicorn for Docker.)

This approach does work, but has two drawbacks:

  • Large image size. The entire source checkout of our application will be in the final docker image. Also, depending on the package requirements we may need to apt-get install a compiler or development package before executing pip install. These will then also be in the final image (and on our production machine).
  • Long re-build time. Any change to the source directory will invalidate the Docker cache starting with the COPY . . step (line 6 in the Dockerfile). The pip install will be executed fully from scratch every time.

(Note: We’re using the slim Python docker image. The alpine image would be even smaller, but its use of the musl C library breaks some Python modules. Depending on your dependencies you might be able to swap in python:alpine instead of python:slim.)

Improved Caching

Docker caches all individual build steps, and can use the cache when the same step is applied to the same current state. In our naive Dockerfile all the expensive commands are dependent on the full state of the source checkout, so the cache cannot be used after even the tiniest code change.

The common solution looks like this:

# Dockerfile
FROM python:slim
ENV DJANGO_SETTINGS_MODULE foo.settings
RUN mkdir -p /app
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt gunicorn
COPY . .
RUN python manage.py collectstatic
RUN python manage.py compilemessages
ENTRYPOINT ["/app/docker-entrypoint.sh"]

In this version the pip install command on line 7 can be cached until the requirements.txt or the base image change. Re-build time is drastically reduced, but the image size is unaffected.

Building with setup.py

If we package up our Django project as a proper Python package with a setup.py, we can use pip to install it directly (and could also publish it to PyPI).

If the setup.py lists all project dependencies (including Django) in install_requires, then we’re able to execute (for example in a virtual environment):

pip install .

This will pre-compile and install all our dependencies, and then pre-compile and install all our own code, putting everything onto the Python path. The main difference to the previous versions is that our own code is pre-compiled too, instead of just being executed from the source checkout. There is little immediate effect from this: The interpreter startup might be slightly faster, because it doesn’t need to compile our code every time. In a web-app environment this is likely not noticeable.

But because our dependencies and our own code are now properly installed in the same place, we can drop our source code from the final container.

(We’ve also likely introduced a problem with non-code files, such as templates and graphics assets, in our project. They will by default not be installed by setup.py. We’ll take care of this later.)

Due to the way Docker works, all changed files of every build step cumulatively determine the final container size. If we install 150MB of build dependencies, 2MB of source code and docs, generate 1MB of pre-compiled code, then delete the build dependencies and source code, our image has grown by 153MB.

This accumulation is per step: Files that aren’t present after a step don’t count towards the total space usage. A common workaround is to stuff the entire build into one step. This approach completely negates any caching: Any change in the source files (which are necessarily part of the step) also requires a complete redo of all dependencies.
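
Such a single-step build might look like this (a sketch; it assumes the project is installable with pip install . as discussed above):

# Everything expensive happens in one RUN step: the build dependencies never
# occupy a layer of their own, but the step depends on the full source tree
# and is re-run on every code change.
COPY . .
RUN apt-get update && apt-get install -y build-essential python3-dev \
 && pip install . gunicorn \
 && apt-get purge -y build-essential python3-dev && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*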

Enter multi-stage build: At any point in the Dockerfile we’re allowed to use a new FROM step to create a whole new image within the same file. Later steps can refer to previous images, but only the last image of the file will be considered the output of the image build process.

How do we get the compiled Python code from one image to the next? The Docker COPY command has an optional --from= argument to specify an image as source. 

Which files do we copy over? By default, pip installs everything into /usr/local, so we could copy that. An even better approach is to use pip install --prefix=... to install into an isolated non-standard location. This allows us to grab all the files related to our project and no others.

# Dockerfile
FROM python:slim as common-base

ENV DJANGO_SETTINGS_MODULE foo.settings

# Intermediate image, all compilation takes place here
FROM common-base as builder

RUN pip install -U pip setuptools

RUN mkdir -p /app
WORKDIR /app

RUN apt-get update && apt-get install -y build-essential python3-dev
RUN mkdir -p /install

COPY . .

RUN sh -c 'pip install --no-warn-script-location --prefix=/install .'
RUN cp -r /install/* /usr/local
RUN sh -c 'python manage.py collectstatic --no-input'

# Final image, just copy over pre-compiled files
FROM common-base

RUN mkdir -p /app
COPY docker-entrypoint.sh /app/
COPY --from=builder /install /usr/local
COPY --from=builder /app/static.dist /app/static.dist

ENTRYPOINT ["/app/docker-entrypoint.sh"]

This will drastically reduce our final image size since neither the build-essential packages, nor any of the source dependencies are part of it. However, we’re back to our cache-invalidation problem: Any code change invalidates all caches starting at the COPY . . step (line 17), requiring Docker to redo the full Python dependency installation.

One possible solution is to re-use the previous trick of copying the requirements.txt first, in isolation, to only install the dependencies. But that would mean we need to manage dependencies in both requirements.txt and setup.py. Is there an easier way?

Multi-Stage, Cache-Friendly Build

The command setup.py egg_info will create a foo.egg-info directory with various bits of information about the package, including a requires.txt file listing all dependencies.

We’ll execute egg_info in an isolated image, copy the requires.txt to a new image (in order to be independent from changes in setup.py other than the list of requirements), then install dependencies using the generated requires.txt. Up to here these steps are fully cacheable unless the list of project dependencies changes. Afterwards we’ll proceed in the usual fashion by copying over the remaining source code and installing it.

(One snag: The generated requires.txt also contains all possible extras listed in setup.py, under bracketed sections such as [dev]. pip cannot handle that format, so we’ll use grep to cut the generated requires.txt at the first blank line.)
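
For illustration, a made-up requires.txt could look like this:

Django
psycopg2

[dev]
pytest

The grep -e ^$ -m 1 -B 9999 invocation used below matches the first blank line (-m 1 stops after one match) and prints up to 9999 lines of context before it (-B 9999), so only the base dependencies above the blank line are passed on to pip.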

# Dockerfile
FROM python:slim as common-base

ENV DJANGO_SETTINGS_MODULE foo.settings

FROM common-base as base-builder

RUN pip install -U pip setuptools

RUN mkdir -p /app
WORKDIR /app

# Stage 1: Extract dependency information from setup.py alone
#  Allows docker caching until setup.py changes
FROM base-builder as dependencies

COPY setup.py .
RUN python setup.py egg_info

# Stage 2: Install dependencies based on the information extracted from the previous step
#  Caveat: Expects an empty line between base dependencies and extras, doesn't install extras
# Also installs gunicorn in the same step
FROM base-builder as builder
RUN apt-get update && apt-get install -y build-essential python3-dev
RUN mkdir -p /install
COPY --from=dependencies /app/foo.egg-info/requires.txt /tmp/
RUN sh -c 'pip install --no-warn-script-location --prefix=/install $(grep -e ^$ -m 1 -B 9999 /tmp/requires.txt) gunicorn'

# Everything up to here should be fully cacheable unless dependencies change
# Now copy the application code

COPY . .

# Stage 3: Install application
RUN sh -c 'pip install --no-warn-script-location --prefix=/install .'

# Stage 4: Install application into a temporary container, in order to have both source and compiled files
#  Compile static assets
FROM builder as static-builder

RUN cp -r /install/* /usr/local

RUN sh -c 'python manage.py collectstatic --no-input'

# Stage 5: Install compiled static assets and support files into clean image
FROM common-base

RUN mkdir -p /app
COPY docker-entrypoint.sh /app/
COPY --from=builder /install /usr/local
COPY --from=static-builder /app/static.dist /app/static.dist

ENTRYPOINT ["/app/docker-entrypoint.sh"]

Addendum: Handling data files

When converting your project to be installable with setup.py, you should make sure that you’re not missing any files in the final build. Run setup.py egg_info and then check the generated foo.egg-info/SOURCES.txt for missing files.

A common trip-up is the distinction between Python packages and ordinary directories. By definition a Python package is a directory that contains an __init__.py file (can be empty). By default setup.py only installs Python packages. So make sure you’ve got __init__.py files also on all intermediate directory levels of your code (check in management/commands, for example).

If your project uses templates or other data files (not covered by collectstatic), you need to do two things to get setup.py to pick them up:

  • Set include_package_data=True in the call to setuptools.setup() in setup.py.
  • Add a MANIFEST.in file next to setup.py that contains instructions to include your data files.
    The most straightforward way for a template directory is something like recursive-include foo/templates *

The section on Including Data Files in the setuptools documentation covers this in more detail.
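
A minimal setup.py reflecting these points might look like this (project name, version, dependencies and extras are placeholders):

# setup.py (sketch)
from setuptools import find_packages, setup

setup(
    name="foo",
    version="1.0.0",
    packages=find_packages(),
    include_package_data=True,  # pick up the files matched by MANIFEST.in
    install_requires=[
        "Django",
    ],
    extras_require={
        "dev": ["pytest"],
    },
)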

Understanding Capabilities in Linux

For some time now the Linux kernel has supported a capabilities(7)-based permission control model. In theory this allows assigning fine-grained permissions to processes, so that processes that previously required UID 0/root permissions don’t need them any more. In practice though, uptake of this feature is relatively low, and actually trying to use it is hampered by confusing vocabulary and non-intuitive semantics.

So what’s the story?

All special access permission exemptions that were previously exclusively attached to UID 0 are now associated with a capability. Examples for these are: CAP_FOWNER (bypass file permission checks), CAP_KILL (bypass permission checks for sending signals), CAP_NET_RAW (use raw sockets), CAP_NET_BIND_SERVICE (bind a socket to Internet domain privileged ports).

Capabilities can be bestowed on execution (similar to how SUID operates) or be inherited from a parent process. So in theory it should be possible to, for example, start an Apache web server on port 80 as a normal user with no root access at all, if you can provide it with the CAP_NET_BIND_SERVICE capability. Another example: Wireshark only needs the CAP_NET_RAW and CAP_NET_ADMIN capabilities. It is highly undesirable to run the main UI and protocol parsers as root, and slightly less desirable to run dumpcap, which is the helper tool that Wireshark actually uses to sniff traffic, as root. Instead, the preferred installation method on Debian systems is to set the dumpcap binary up so that it automatically gains the required privileges on execution, and then limit execution of the binary to a certain group of users.
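
For illustration, granting dumpcap its two capabilities at the file level and restricting who may run it could look roughly like this (the path and group name vary by distribution; this is a sketch, not the exact Debian packaging mechanism):

# Add the capabilities to the file permitted and inheritable sets and set
# the file effective flag (+eip), so they take effect on execution:
setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

# Inspect the result:
getcap /usr/bin/dumpcap

# Only allow members of the wireshark group to execute the binary:
chgrp wireshark /usr/bin/dumpcap
chmod 750 /usr/bin/dumpcap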

Gaining and giving capabilities

This is the most confusing part, because a) it doesn’t behave intuitively in the “just like suid-root” mental model, and b) it uses the same words for completely different functions.

Conceptually capabilities are maintained in sets, which are represented as bit masks. For all running processes capability information is maintained per thread; for binaries in the file system it’s stored in extended attributes. Thread capability sets are copied on fork() and specially transformed on execve(), as discussed below.

Several different capability sets and related variables exist. In the documentation these are treated as somewhat symmetrical for files and threads, but in reality they are not, so I’ll describe them one by one:

Thread permitted set
This is a superset of the capabilities that the thread may add to either its thread effective or thread inheritable set. The thread can use the capset() system call to manage capabilities: It may drop any capability from any set, but may only add capabilities to its thread effective and inheritable sets that are also in its thread permitted set. Consequently it cannot add any capability to its thread permitted set, unless it has the CAP_SETPCAP capability in its thread effective set.
Thread effective set
This is the actual set of capabilities that the kernel uses for permission checks.
Thread inheritable set
This is a set that plays a role in bequeathing capabilities to other binaries. It would more properly be called ‘bequeathable’: a capability not in this set cannot be inherited by a different binary through the inheritance process. However, being in this set does not by itself make a different binary inherit the capability. Also note that ‘inheriting’ a capability does not automatically give any thread effective capabilities: ‘inherited’ capabilities only directly influence the new thread permitted set.
File permitted set
This is a set of capabilities that are added to the thread permitted set on binary execution (limited by cap_bset).
File inheritable set
This set plays a role in inheriting capabilities from another binary: the intersection (logical AND) of the thread inheritable and file inheritable sets is added to the thread permitted set after the execve() is successful.
File effective flag
This is actually just a flag: When the flag is set, the thread effective set after execve() is set to the new thread permitted set, otherwise it’s empty.
cap_bset
This is a bounding capability set which can mask out (by ANDing) file permitted capabilities, and some other stuff. I’ll not discuss it further and just assume that it contains everything.

Based on these definitions the documentation gives a concise algorithm for the transformation that is applied on execve() (new and old relate to the thread capability sets before and after the execve(), file refers to the binary file being executed):

  • New thread permitted = (old thread inheritable AND file inheritable) OR (file permitted AND cap_bset)
  • New thread effective = new thread permitted, if file effective flag set, 0 otherwise
  • New thread inheritable = old thread inheritable
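
The same rules, restated as a small Python sketch (each set is modeled as a frozenset of capability names; the function name is made up for this illustration):

def capability_sets_after_execve(old_inheritable, file_inheritable,
                                 file_permitted, file_effective_flag,
                                 cap_bset):
    """Apply the execve() transformation rules listed above."""
    new_permitted = (old_inheritable & file_inheritable) | (file_permitted & cap_bset)
    new_effective = new_permitted if file_effective_flag else frozenset()
    new_inheritable = old_inheritable  # copied unchanged
    return new_permitted, new_effective, new_inheritable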

This simple definition has some surprising (to me) consequences:

  1. The ‘file inheritable set’ is not related to the ‘thread inheritable set’. Having a capability in the file inheritable set of a binary will not put that capability into the resulting process’s thread inheritable set. In other words: A thread that wants to bequeath a capability to a different binary needs to explicitly add the capability to its thread inheritable set through capset().
  2. Conversely the ‘thread inheritable set’ is not solely responsible for bequeathing a capability to a different binary. The binary also needs to be allowed to receive the capability by setting it in the file inheritable set.
  3. Bequeathing a capability to a different binary by default only gives it the theoretical ability to use the capability. To become effective, the target process must add the capability to its effective set using capset(). Or the file effective flag must be set.
  4. A nice side effect of the simple copy operation used for the thread inheritable set: A capability can be passed in the thread inheritable set through multiple intermediate fork() and execve() calls to a target process at the end of a very long chain without becoming effective in the middle.
  5. The relevant file capability sets are those of the binary being executed. When trying to give permitted capabilities to an interpreted script, the capabilities must be in the file inheritable set of the interpreter binary. Additionally: If the script can’t/won’t call capset(), the file effective flag must be set on the interpreter binary.

Summary

I’ve tried to summarize all the possible paths that a capability can take within a Linux thread using capset() or execve(). (Note: fork() isn’t shown here, since all capability information is simply duplicated when forking.)

[Diagram: Linux capabilities, possible capability transmission paths]