Docker Deployment Best Practices

Given: There’s a CI system that automatically builds docker images from your VCS (e.g. git), we use self-hosted gitlab.
Goal: Both initial and subsequent automated deployments to different environments (staging and production).

Rejected Approaches

Most existing blog articles and howtos for this use case, specifically in the context of gitlab, tend to be relatively simple, relatively easy, and very, very wrong. The biggest issue is with root access to the production server. I believe that developers (and the CI/CD system) should not have full root access to the production system(s), to retain a semblance of separation in case of breaches. Yes, sure, a malicious developer could still check in bad code which might eventually get deployed to production, but there is (should be) a review process before that, and traces in the VCS.

And yet, most recommendations on how to do deployment with gitlab circle around one of two approaches:

  1. Install a gitlab “runner” on your production server. That is, an agent which gets commands from the gitlab server and executes them. This runner needs full root access (or, equivalently, docker daemon access), thus giving the gitlab server (and anyone who has/gains control over it) full root access to the production system(s).
    This approach also needs meticulous management of the different runners, since they are now being used not just for build purposes but also have a second, distinct, duty for deployment.
  2. Use your normal gitlab runners that are running somewhere else, but explicitly give them root access to the target servers, e.g. with a remote SSH login.
    Again, this gives everybody in control of the gitlab server full production access, as well as anybody in control of one of the affected runners. Usually this is made less obvious by “only” giving docker daemon access, but that’s still equivalent to full root access.

There are variants on this theme, like using Ansible for some abstraction, but it always boils down to somehow making the gitlab server capable of executing arbitrary commands as root on the production system.

Our Approach

For container management we’re going to use docker compose (the new v2 plugin, not the legacy docker-compose script). A compose.yaml file (with extensions, see below) fully describes the deployment, and compose will take care of container management for updates.

Ideally we want to divide the task into two parts:

  1. Initial setup
  2. Continuous delivery

For the initial setup there’s no pressing reason for full automation. We’re not setting up new environments all the time. There are still some best practices and room for automation (see below), but in general it’s a one-time process executed with high privileges.

The continuous updates on the other hand should be fast, automated, and, above all, restricted. An update to a deployed docker application does exactly one thing: pull new image(s) then restart container(s).

Restricted SSH keys for update deployment

Wouldn’t it be great if we had an agent on the production server that could do that, and only that? Turns out, we do! Using additional configuration in ~/.ssh/authorized_keys we can set up a public key authenticated login that will only execute a (set of) predefined command(s), and nothing else1. And since sshd is already running and exposed to the internet anyway, we don’t add any new attack surface.

The options we need are:

  • restrict to disable, roughly, all other functionality
  • command="cd ...; docker compose pull && docker compose up -d" which will make any login with that key execute only this command (you’ll need to fill in the path to cd into).
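Put together, the resulting authorized_keys entry might look like this (the path, key material, and comment are placeholders, not values from a real deployment):

```
restrict,command="cd /srv/frobnicator; docker compose pull && docker compose up -d" ssh-ed25519 AAAAC3Nza... deploy@ci
```

Any login with this key, regardless of what command the client requests, runs only the forced command.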

Using docker compose

For this to work seamlessly, there are some best practices to follow when creating the environment:

  • All container configuration is handled by the docker compose framework 
    • Specifically: docker compose up just works.
      No weird docker compose -f compose.foo.yaml -f compose.bar.yaml -e WTF_AM_I_DOING=dunno up incantations.
  • The docker compose configuration should itself be version controlled
  • The containers come up by themselves in a usable configuration, and can handle container updates gracefully 
    • For example in Django, the django-admin migrate call must be part of the container startup
    • In general, updates must not require manually executing commands in the containers or the compose environment. At most, one initialization command may be required on first setup, and only under extenuating circumstances.
  • There’s also good container design (topic of a different blog post) with regards to separation of code and data

Good docker compose setup

There are two ways to handle the main compose configuration of a project: as part of the git repository of one of the components, or as a separate git repository by itself.

  • The first approach applies if it’s a very simple project, maybe just one component. If it contains only the code you wrote, and possibly some ancillary containers like the database, then you’ll put the compose.yaml into the root directory of the main git repository. This also applies if your project consists of multiple components maintained by you, but it’s obvious which one is the main one (usually the most complex one).
    Like if you have a backend container (e.g. Python wsgi), a frontend container (statically compiled HTML/JS, hosted by an nginx), a db container (standard PostgreSQL), maybe a cache, and some helper daemon (another Python project). Three of these are maintained by you, but the main one is the backend, so that’s where the compose.yaml lives.
  • For complex projects it makes sense to create a dedicated git repository that only hosts the compose file and associated files. This specifically applies if the compose file needs to be accompanied by additional configuration files to be mounted into the containers. These usually do not belong in your application’s git repository.

The idea here is that the main compose.yaml file (using includes is allowed) handles all the basic configuration and setup of the project, independent of the environment. Doing a docker compose up -d should bring up the project in some default state configured for a default environment (e.g. staging). Additional environment specific configuration should be placed in a compose.override.yaml file, which is not checked into git and which contains all the modifications necessary for a specific environment. Usually this will only set environment variables such as URL paths and API keys.

Additional points of note:

  • All containers should be configured read_only: true, possibly assisted by tmpfs: ["/run","/var/run/someapp"] or similar. If that’s not possible, go yell at the container image creator2.
  • All configuration that is mounted from the outside should be mounted read-only
  • Data paths are handled by volumes
  • The directory name is the compose project name. That’s how you get the ability to deploy more than one instance of a project on the same host. The directory name should be short and to the point (e.g. frobnicator or maybe frobnicator-staging).
  • Ports in the main compose.yaml file are a problem, since port numbers are a global resource. A useful pattern is to not specify a port binding in the compose.yaml file and instead rely on compose.override.yaml for each deployment to specify a unique port for this deployment. That’s one of the few cases where it’s acceptable to absolutely require a compose.override.yaml for correct operation, and it must be noted in the README.
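A minimal sketch pulling these points together (service names, images, ports, and values are illustrative, not taken from a real project):

```yaml
# compose.yaml -- checked into git, environment-independent
services:
  backend:
    image: registry.example.com/transmogrify/frobnicator/backend:latest
    read_only: true
    tmpfs: ["/run"]
    restart: unless-stopped
    depends_on: [db]
  db:
    image: postgres:16
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: {}
```

```yaml
# compose.override.yaml -- per environment, NOT checked into git
services:
  backend:
    ports:
      - "127.0.0.1:8080:8000"   # unique port per deployment on this host
    environment:
      BASE_URL: https://frobnicator-staging.example.com
      API_KEY: change-me
```

With this split, a bare docker compose up -d works, and the override carries only the environment-specific deltas.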

Putting It All Together

This example shows how to set up deployment for project transmogrify/frobnicator, hosted on gitlab at git.example.com, with the registry accessible as registry.example.com, to host deploy-host, using non-root (but still docker daemon capable) user deploy-user.

Preliminaries

On deploy-host, we’ll create a SSH public/private key to be used as deploy key for the git repository containing the main compose.yaml and configure docker pull access. This probably only needs to be done once for each target host.

ssh-keygen -t ed25519

Just hit enter for default filename (~/.ssh/id_ed25519) and no passphrase. Take the resulting public key (in ~/.ssh/id_ed25519.pub) and configure it in gitlab as a read-only deploy key for the project containing the compose.yaml (under https://git.example.com/transmogrify/frobnicator/-/settings/repository).

We’ll also need a deploy token for docker registry access. This should be scoped to access all necessary projects. In general this means you’ll want to keep all related projects in a group and create a group access token under https://git.example.com/groups/transmogrify/-/settings/access_tokens. Create the access token with a name of deploy-user@deploy-host, role Reporter and Scope read_registry.

Caveat 1: docker can only manage one set of login credentials per registry host. Either use non-privileged/user-space docker daemons separated by project (e.g. with different users on the deploy host, each managing only one project; a topic for a different blog post), or use a “Personal” Access Token for a global technical user that has access to all the necessary projects.
(There’s a third option: Create multiple .docker/config.json files and set the DOCKER_CONFIG environment variable accordingly. This violates the “docker compose up should just work” requirement.)

Caveat 2: docker really doesn’t want to store login credentials at all. There’s a couple of layers of stupidity here. Just do the following (note: this will overwrite all previously saved docker logins on this host, but you shouldn’t have any):

mkdir -p ~/.docker
echo '{"auths": {"registry.example.com": {"auth": ""}}}' > ~/.docker/config.json

Then you can do a normal login with the deploy token and it’ll work:

docker login registry.example.com

First deployment / Setup

Clone the repository and configure any overrides necessary. Then start the application.

git clone ssh://git@git.example.com/transmogrify/frobnicator.git
cd frobnicator
vi compose.override.yaml  # Or whatever is necessary
docker compose up -d

Your project should now be running. Finish any remaining steps (set up reverse proxy etc.) and debug whatever mistakes you made.

Set up for autodeploy

  1. On deploy-host (or a developer laptop) generate another SSH key. We’re not going to keep it for very long, so do it like this:
ssh-keygen -t ed25519 -f pull-key  # Hit enter a couple of times for no passphrase
  2. Also retrieve the SSH host keys from deploy-host (possibly by some other means):
ssh-keyscan deploy-host
  3. On deploy-host, add the following line to ~deploy-user/.ssh/authorized_keys (where ssh-ed25519 AAA... is the contents of pull-key.pub from step 1):
restrict,command="cd /home/deploy-user/frobnicator; docker compose pull && docker compose up -d" ssh-ed25519 AAA.... 
  4. In gitlab, on group level, configure variables (https://git.example.com/groups/transmogrify/-/settings/ci_cd):
Name               Value                              Settings
SSH_AUTH           -----BEGIN OPENSSH PRIVATE ...     File, Protected
                   (contents of pull-key from step 1)
SSH_KNOWN_HOSTS    results of step 2
SSH_DEPLOY_TARGET  ssh://deploy-user@deploy-host

You may use the environments feature of gitlab here, which will generally mean a different set of values per environment (and then choosing the environment in the job in the next step). Afterwards, delete the temporary files pull-key and pull-key.pub from step 1.

  5. In your project’s or projects’ .gitlab-ci.yml file (this is in the code projects, not necessarily the project containing compose.yaml), add this (the publish docker job is outside the scope of this post):
stages:
  - build
  - deploy

# ...

deploy docker:
  image: docker:git
  stage: deploy
  cache: []
  needs:
    - publish docker
  before_script: |
    mkdir -p ~/.ssh
    echo "${SSH_KNOWN_HOSTS}" > ~/.ssh/known_hosts
    chmod -R go= ~/.ssh "${SSH_AUTH}"
  script: ssh -i "${SSH_AUTH}" -o StrictHostKeyChecking=yes "${SSH_DEPLOY_TARGET}"

In order to handle multiple environments, you can also add

deploy docker:
  # ...
  rules:
    - if: $CI_COMMIT_BRANCH == "dev"
      variables:
        ENVIRONMENT: staging
    - if: "$CI_COMMIT_BRANCH =~ /^(master|main)$/"
      variables:
        ENVIRONMENT: production  
  environment: "${ENVIRONMENT}"

Voila. Every time after a docker image has been built, a gitlab runner will now trigger a docker compose pull/up, with minimal security impact since that’s the only thing it can do.

Addenda

The preliminaries and initial setup can be automated with Ansible.

You can put the gitlab-ci configuration into a common template file that can be referenced from all projects. For example, we have a common tools/ci repository, so the only thing necessary to get auto deployment is to put

stages:
  - publish
  - deploy

include:
  - project: tools/ci
    file: docker-ssh.yml

into a project and do the deploy-host setup and variable definitions (well, and add the other include files that handle the actual docker image building).

  1. man authorized_keys, section “AUTHORIZED_KEYS FILE FORMAT” ↩︎
  2. Their image is bad and they should feel bad. ↩︎

Should I Use JWTs For Authentication Tokens?

No.


Not satisfied? Fine, fine. I’ll write a longer answer.

Let’s talk about what we’re talking about. JWT stands for JSON Web Tokens, a reasonably well defined standard for authenticated tokens. Specifically they have a header with format information, a payload, and a signature or message authentication code. The core idea is that whoever has the corresponding verification key can verify that the payload is authentic and has not been altered. What they do with that information is up to them.

The JWT spec (RFC 7519) makes suggestions by providing a few well-known registered claim names: issuer, audience, subject, expiration time, etc. A common usage pattern is that, after verifying the authenticity against whatever trust relationship they have with the issuer, the recipient checks whether they are the intended audience (if any is specified) and the expiration time has not yet passed, and then take the subject as an authenticated identity of the bearer of the token.
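To make the mechanics concrete, here is a hand-rolled HS256 sketch (illustration only: the function names are mine, real code should use a vetted library, and only the signature and exp claim are checked):

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> bytes:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=")


def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def jwt_encode(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()


def jwt_decode(token: str, key: bytes) -> dict:
    header, body, sig = token.split(".")
    signing_input = (header + "." + body).encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(b64url_decode(body))
    if "exp" in payload and payload["exp"] < time.time():
        raise ValueError("expired")
    return payload
```

The verifier needs only the key, not a database: that property is the whole selling point, and the rest of this post is about why you probably don't need it.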

It’s perfectly designed for bearer token authentication! Or is it? Let me be clear: JWTs as authentication tokens are constructed for Google/Facebook-scale environments, and absolutely no one who is not Google/Facebook needs to put up with the ensuing tradeoffs. If you process fewer than 10k requests per second, you’re neither Google nor Facebook.

The core benefit, proponents will tell you, is that the recipient of a JWT doesn’t need to connect to the user database to verify the token authenticity and render its service. In a large installation, like Google’s, that means that the JWT issuer, the authentication service, can be a dedicated service that is managed and scaled like other services, and is the only service that needs to access the centralized user database. All other services can act on the information stored in the JWT alone, and don’t need to go through the user database, which would represent a choke point.

What about logout/session invalidation? Well, in order for this model to work, the authentication token should have a fairly short lifetime. Maybe 5 minutes, max. The client is also issued a second token, the so-called refresh token, with which it can request a new authentication token from the authentication service. This gives the authentication service a chance to consult the user database to see whether the user or a specific session has been blocked in the meantime.

Here’s the twist that is rarely, if ever, spelled out: In this setup the refresh token, not the authentication token, is the real session token. The refresh token represents the session with the authentication service (which can be revoked), while the authentication tokens are just derived credentials to be used for a few requests at most. The beauty, from Google’s point of view, is that this delegates keeping the session alive to the client, i.e. not Google’s servers. Oh and by the way, the refresh token can be, and usually is, opaque, since it’s only ever consumed by the same service that creates it. That reduces a lot of complexity, by just using an opaque identifier stored in a database.

Now, let’s assume you are not Google. Check which of these apply to you:

  • You wanted to implement log-out, so now you’re keeping an allowlist of valid JWTs, or a denylist of revoked JWTs. To check this you hit the database on each request.
  • You need to be able to block users entirely, so you check a “user active” flag in the database. You hit the database on each request.
  • You need additional relationships between the user object and other objects in the database. You hit the database on each request.
  • Your service does anything at all with data in the database. You hit the database on each request.

Congratulations, if you confirmed any of the items above, you don’t need JWTs. You’re hitting the database anyway, and I’m pretty sure that you only have one database which stores both your user profiles and your application data. By just using a “normal” opaque session token and storing it in the database, the same way Google does with the refresh token, and dropping all JWT authentication token nonsense, you stand to reap these great benefits:

  • No weird workarounds (allow/denylist) for shortcomings of JWT as authentication token
  • Greatly reduced complexity. No need to manage a secure JWT signing/authentication key
  • You get to pass on some interesting bugs.

Just use the normal session mechanism that comes with your web framework and that you were using before someone told you that Google uses JWT. It has stood the test of time and is probably fine.
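That “normal” mechanism is little more than this (a sketch; the in-memory dict stands in for your database or your framework’s session backend, and the names are mine):

```python
import secrets
import time

SESSION_TTL = 3600
sessions = {}  # stand-in for a database table: token -> (user_id, expiry)


def create_session(user_id: str) -> str:
    token = secrets.token_urlsafe(32)  # opaque; carries no meaning for the client
    sessions[token] = (user_id, time.time() + SESSION_TTL)
    return token


def check_session(token: str):
    entry = sessions.get(token)
    if entry is None or entry[1] < time.time():
        return None  # unknown, revoked, or expired
    return entry[0]


def logout(token: str) -> None:
    sessions.pop(token, None)  # instant revocation, no denylist gymnastics
```

Note that logout is a single delete, which is exactly the property the JWT workarounds try to claw back.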

If you need something to do to make you feel like you’re running a big deployment, you can probably configure your session mechanism to use redis/valkey to store the session data. You’re still going to use the authenticated user id to query the database, but for unauthenticated requests it may be faster/use fewer resources. It might not be. You’ll have to tune and measure that.

Auditing User Intent in Closed Source IoT Applications

Amazon Echo Dot

(Header image under CC-BY by Gregory Varnum)

Hardware-backed voice assistants like Amazon Alexa and Google Assistant have received some criticism for their handling of voice data behind the scenes. The companies had outsourced quality control/machine learning feedback to external contractors who received voice recordings of user commands and were tasked to improve the assistants’ recognition of voice and intent. This came as a surprise to many users, who only expected their voice commands to be processed by automated systems and not listened to by actual humans.

It is worth noting that the user intent for their voice to be recorded and sent to the cloud was generally not called into question: While the devices listen all the time, their recording and sending only starts after they detect a so-called wake word: “Alexa” or “Hey Google”. There are scattered reports of accidental activation with similar sounds (“Alec, say, what’s up?”), but on balance this part of the system appears to be reasonably robust. Accidental activation is always mitigated by the fact that the devices clearly indicate their current mode: Recognition of the wake word triggers a confirmation sound and LEDs to light up.

New reporting shows how a malicious actor can get around this user intent in a limited fashion: Several sets of bugs in the system design allowed the assistant to stay awake and send recordings to the attacker even when a user might reasonably expect them not to be. It is important to note that these bugs are not remote-access vulnerabilities! User intent is still necessary to start the interaction, it’s just that the interaction lasts longer than the user expects. Also, none of the local safeguards against undetected listening are impacted: The LEDs still light up and indicate an attentive assistant.

It is in the companies’ best interest to not be found spying on their users, and the easiest way to achieve that is by not doing it. Amazon, specifically, tries very hard to be seen as privacy-preserving because that enables additional services for them. Their in-home delivery service is absolutely dependent on consumers trusting them to open their doors for the delivery driver (who in turn is instructed not to actually enter the home, but just drop the package right on the other side of the door, and is filmed doing so). Amazon demonstrates their willingness to at least appear privacy-respecting on other fronts too: The microphone-off button on the Amazon Echo devices cuts power to the microphone array and lights up a “mute” LED: it’s impossible to turn on the LED under software control. When the LED is on, the microphone is off.

The primary concern still is an issue of trust: Do I trust the device to only record and transmit audio when I intend for it to do so? In theory the device manufacturer could have the device surreptitiously record everything. There’s no easy way to audit either the device or its connections to the outside world. Some progress has been made to extract and analyse device firmware, but ultimately this cannot rule out a silent firmware update with listening capabilities at a later date.

An Auditing System to Confirm User Intent

This essay proposes a system in which users can gain confidence that they are not surreptitiously monitored, without requiring a device manufacturer to give up any of their proprietary secrets. It assumes cooperation on the part of the manufacturer and a certain technical expertise on the part of at least some of the users.

Step 1: The manufacturer augments their back-end systems to log device activity and TLS session keys, and keeps these logs for a certain number of days.

Step 2: The end user passively records all incoming and outgoing traffic from the device. Obviously only a small percentage of end users will be able to do that and only a fraction of those will actually record the traffic. But since the manufacturer cannot be sure which devices are being monitored, they risk detection if they tamper with any of them.

Step 3: The user requests a list of session keys from the manufacturer and uses it to decrypt the captured connections.

Step 4: The manufacturer provides a machine-readable list of activities of the device, both user-initiated (such as queries to a voice assistant) and automated (such as firmware updates, or normal device telemetry).

Step 5: Analysis software matches the list of device activities to the recorded connections and flags any suspicious activity. The software should be open source, initially provided by the manufacturer, and be extensible by the community at large.

Step 6: The user can cross-check the now-vetted list of device activities to confirm whether it matches their intent.
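The matching in step 5 boils down to checking that every observed connection is explained by a logged activity. As an illustrative sketch only (the data model and the matching heuristic are my assumptions, not part of the proposal):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Connection:
    """One outgoing connection from the user's packet capture."""
    start: int       # unix timestamp
    bytes_up: int    # upstream payload size observed


@dataclass(frozen=True)
class Activity:
    """One entry from the manufacturer's machine-readable activity log."""
    start: int
    kind: str        # e.g. "voice-query", "telemetry", "fw-update"
    max_bytes_up: int  # plausible upper bound on upstream data for this activity


def flag_suspicious(conns, activities, slack=5):
    """Return connections not explained by any logged activity.

    A connection is 'explained' if some activity started within `slack`
    seconds and its upstream volume fits the activity's plausible bound.
    """
    unexplained = []
    for c in conns:
        ok = any(abs(c.start - a.start) <= slack and c.bytes_up <= a.max_bytes_up
                 for a in activities)
        if not ok:
            unexplained.append(c)
    return unexplained
```

A real tool would of course match on TLS endpoints and decrypted contents as well; the point is that the logic is simple enough to be community-auditable.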

The most impractical step is number 2: only a few users would bother to configure their networks in a way that allows the device traffic to be monitored. However, I believe that even the possibility of monitoring should deter malicious behaviour. This step is also the one most easily supported by third-party tools: an OpenWRT extension, for example, would greatly simplify the recording for users of OpenWRT, and other CPE manufacturers could follow suit1.

The IoT device manufacturer may want to keep some data — such as firmware update files, or received audio streams — proprietary and secret. They must do so in a manner that allows the analysis tool to confirm that only downstream data is withheld: Either by using a separate at-rest encryption layer inside the TLS connection, or by using a separate TLS connection to a special endpoint which carries only the absolutely minimal amount of information (one small HTTP request) in the upstream direction. The analysis tool is then able to ignore the contents of this proprietary data while still being able to flag anomalies in the meta data (“Three 150MB firmware updates in a day? Really?”).

Rationale: A scheme that forcibly opens up firmware files or DRM’ed audio streams would be a non-starter for industry adoption. Decrypting this downstream content isn’t necessary for the goal of confirming user intent. Conversely all information carried on the upstream channel by definition belongs to the user, since they generated it. (If they didn’t generate it, it wouldn’t need to be transferred.)

Potential for abuse: When suggesting to store new kinds of data (step 1) it’s important to analyze the potential for abuse this data has, be it from law-enforcement agencies or from vengeful ex-partners. I believe no new threat is introduced here: The current backends of manufacturers with voice assistants already store voice recordings and generally give the option to look at the device history or download recordings (both to LEAs and to anyone with account access). The data recorded in step 1 should give no additional insight into the user behaviour beyond what is already recorded under the status quo — except that it allows the user to confirm the completeness of the log.


  1. This opens up the question of whether one trusts their CPE manufacturer to implement correct logging and not to collude with the IoT device manufacturer. 

On the Security of AES in HomeMatic

HomeMatic is a line of home automation devices that are popular in Germany and use a proprietary radio protocol (BidCoS, Bidirectional Communication Standard) on a frequency of 868MHz. Some devices allow optional use of “AES signing” for message authentication. When enabled, the execution of a command is delayed until a challenge-response process between the initiator and receiver of the command is completed. All AES capable HomeMatic devices ship with a default key, which can optionally be set to a custom value. The signing requirement is disabled by default for most devices, except for the KeyMatic/WinMatic line which includes devices for door lock and window automation which always require AES for all commands.

Before 2014 the common wisdom1 was to leave the AES key at the default value: Setting a custom key and forgetting it renders the device useless and requires it to be sent back to the manufacturer to reset it – for a fee.

Sathya Laufer and Christian Mallas demonstrated that this is trivially dangerous at the 30th Chaos Communication Congress: Within the HomeMatic universe there are LAN gateways that accept commands over Ethernet/IP and forward them through BidCoS to the target device. In this setup, all the BidCoS AES operations are executed by the LAN gateway. So if the target device is using the default key, an attacker can simply use a LAN gateway (which also knows the default AES key) to send arbitrary commands to that device, without bothering with any of the cryptographic details.

Later, the default AES key became known, and a reverse engineering of the authentication protocol is available (see next section), so an attacker can also use custom hardware to send commands to all devices still configured with the default key.

Michael Gernoth did a superb job of reverse engineering the authentication protocol2, but his description is mainly based on the flow to authenticate a known message. I’ll try to re-frame that here in terms of how the authentication token is generated, and also generically for arbitrary message sizes. Where applicable I’ll use the same abbreviations that Michael uses (∥ is concatenation, ⊕ is XOR):

Packets

  m   Original message to be authenticated. Note: m = D0 ∥ D1, if the length is not considered part of the packet.
  c   Challenge message
  r   Response message
  a   ACK message

Data items

  Name  Description                                        Length/bytes  Packet
  D0    Metadata (counter, flags, type, sender, receiver)  10            m
  D1    Parameters                                         varies        m
  C     Challenge                                          6             c
  P     AES response                                       16            r
  A     ACK authentication                                 4             a
  T     Timestamp or counter                               6             –

Under these definitions the calculation of the authentication messages happens as follows:

  • K’ := K ⊕ (C ∥ 00…)
  • Pd’ := AES(K’, T ∥ D0)
  • P := AES(K’, Pd’ ⊕ (D1 ∥ 00…))

Or, in easier terms, and likely representing the original idea, for arbitrary packet lengths:

AES-CBC(IV = 00…, Key = K ⊕ (C ∥ 00…), Payload = T ∥ m)

(with the last block of the CBC calculation being output as P and the first 4 bytes of the first block as A)

While this looks like a standard (ab)use of AES-CBC as an authentication code, as best as I can tell the verification really has to happen in the very strange backwards way that Michael describes, for the simple reason that T is not transmitted and thus not available to the verifier to replicate the same calculation as the initiator.
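The whole construction can be sketched in a few lines. Since the point is the CBC structure and not AES itself, the sketch below substitutes a toy keyed block function for AES (all names are mine; the toy function is not invertible like AES, which doesn't matter for generating the token, and none of this should be used for anything real):

```python
import hashlib

BS = 16  # block size in bytes, as with AES-128


def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES(key, block): any keyed 16-byte pseudorandom function
    # suffices to illustrate the CBC chaining (unlike AES it is not invertible).
    return hashlib.sha256(key + block).digest()[:BS]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def pad(data: bytes) -> bytes:
    return data + b"\x00" * (-len(data) % BS)


def homematic_auth(K: bytes, C: bytes, T: bytes, m: bytes):
    """CBC over T || m with zero IV and key K' = K xor (C || 00...).

    Returns (P, A): the last CBC block as the response P, and the first
    4 bytes of the first block as the ACK authentication A.
    """
    K_prime = xor(K, pad(C))
    payload = pad(T + m)
    block_out = b"\x00" * BS  # zero IV
    first = None
    for i in range(0, len(payload), BS):
        block_out = toy_block_cipher(K_prime, xor(block_out, payload[i:i + BS]))
        if first is None:
            first = block_out
    return block_out, first[:4]
```

With a 6-byte T and a 10-byte D0, the first CBC block is exactly T ∥ D0, matching Pd’ above, and the second block covers D1.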

How secure is the HomeMatic system if a custom AES key is used?

A customary simple measure for security is the number of bits an attacker must guess correctly to violate whatever security properties the system claims to have. This is related to the number of operations to execute in the attack3. Example: For an authentication mechanism with a security of 128 bits, the attacker needs to either correctly guess 128 bits (either key or authentication code) to break the system, or perform 2^127 operations (random authentication attempts) to score a successful attack with a probability of ~50%.4

Under this definition a single block of AES with fully random key has a security of 128 bits. We would hope to find the same security in the HomeMatic use of AES.

CBC-MAC is generally not a recommended way to construct a message authentication code and is very easy to implement wrong; for examples and a longer discussion of this aspect see this article on CBC-MAC by Matthew Green. That being said, XORing the challenge into the key prevents most of the more obvious attacks. They become possible again when a challenge is re-used, though.

Challenge re-use

The challenge is a 6‑byte value generated by the verifier that must be random. Random number generation (RNG) with computers is hard, and RNGs in many (embedded) devices have had spectacular failures with security implications in the past.

Now, I can’t make any assertions as to the quality of HomeMatic’s RNG, so let’s assume that it is stellar and always outputs full 48 bits of randomness. The birthday paradox then tells us that after around 2^24 ≈ 16.7 million tries there’s a better than even (50%) chance that a number repeats. A repeated challenge directly gives the attacker the possibility to replay a previous command.
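The birthday bound is easy to check numerically. The snippet below uses the standard approximation p ≈ 1 − e^(−n(n−1)/2N); at exactly 2^24 draws it gives roughly 39%, with the even-odds point only slightly later, which is what “around 2^24” glosses over:

```python
import math


def collision_probability(n_tries: int, keyspace_bits: int) -> float:
    """Birthday-paradox approximation: p ~= 1 - exp(-n(n-1) / (2 * 2^bits))."""
    N = 2 ** keyspace_bits
    return 1.0 - math.exp(-n_tries * (n_tries - 1) / (2 * N))


# Repeats of a 48-bit challenge become likely around 2^24 draws:
# ~39% at exactly 2^24, and well past even odds by 2^25.
p24 = collision_probability(2 ** 24, 48)
p25 = collision_probability(2 ** 25, 48)
```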

For this attack to work with probability 50% the attacker must capture on the order of 16.7 million identical messages, and then try to get the same command executed 16.7 million times, representing a security factor of around 26 bits. Probably nothing to worry about in a radio protocol that will do at most ~5 authentications per second (the second phase alone will take 6 weeks), but a far cry from 128 bits.

The specific usage of CBC-MAC is also susceptible to an attack where the attacker can craft a valid authentication token for a message that consists of a message sniffed from the radio channel appended with attacker controlled data, see Matthew’s article linked above. However, I couldn’t think of a way to make this stick: The attacker still needs to be able to either coax the system into generating an authentication token for a different attacker chosen message, or be very limited in the message manipulations possible.

Blind guessing

Remember that T is never transmitted but apparently inferred from the protocol? That means, from an attacker’s point of view, that these bits are free. Instead of guessing 128 bits of key or authentication code, the attacker only needs to guess 128 − 6·8 = 80 bits. Even if the verifier checks T to be monotonically increasing, that only adds one additional bit of complexity. It works like this:

  1. Attacker sends arbitrary message m = D0 ∥ D1
  2. Verifier sends challenge C
  3. Attacker sends random answer P
  4. Verifier calculates
    • Pd’ := AES⁻¹(K’, P) ⊕ (D1 ∥ 00…), and
    • Pd’d := AES⁻¹(K’, Pd’).

    Verifier then checks whether the last ten bytes of Pd’d match D0 (and maybe if the first 6 bytes are higher than the last T received).

  5. If in the previous step a match is found, the verifier executes the command and outputs A.

Note that in the protocol D1 is never checked, but used to calculate something that is checked against D0. Now: An attacker that feeds random data into P will cause random data to appear in Pd’d. There’s a chance of 1 in 2⁸⁰ that this random data matches D0 (and possibly: another 1 in 2 that the first 6 bytes are numerically larger than the last T).

Again: 80 is much lower than 128, so cryptographically speaking the mechanism is broken. Practically though, sending 2⁸⁰ requests (giving a success probability of 63% to the attacker) would take 7.6 Pa⁵, so that’s probably nothing to worry about.
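The petayear figure and the 63% follow directly from the numbers above (the ~5 authentications per second rate is this article’s own estimate from earlier):

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Expected work for the blind-guessing attack, at ~5 tries/second:
attempts = 2 ** 80
years = attempts / 5 / SECONDS_PER_YEAR
petayears = years / 1e15          # ≈ 7.7, i.e. the ~7.6 Pa quoted above

# Success probability after 2**80 tries at a per-try chance of 2**-80:
# 1 - (1 - 2**-80)**(2**80) converges to 1 - 1/e ≈ 63%.
p_success = 1.0 - math.exp(-1.0)
```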

Entropy

Internally the AES key is a binary value of 128 bits, but that’s not how it’s presented to the user in the front-end. Setting the HomeMatic security key requests an arbitrary text string from the user, which is then hashed with MD5 and the resulting hash is used as the AES key.

A careless user might not worry too much about this key, and the on-screen prompt only reminds them to use at least 5 characters. Even under the best of circumstances, one typeable character carries only about 6 bits of entropy. The minimum security recommended by the user interface is therefore equivalent to 5·6 = 30 bits. Also: An attacker can execute an offline dictionary attack on the key after intercepting one or a few radio messages. Execution rates for these kinds of attacks typically lie in the millions or billions of operations per second even on regular PC hardware (CPU and GPU), so any 5-character security key will be cracked in seconds.
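Both the key derivation and the attack cost are easy to reproduce. In this sketch the UTF‑8 encoding and the 10⁹ guesses/second rate are my assumptions, not statements from HomeMatic documentation:

```python
import hashlib

def homematic_key(passphrase: str) -> bytes:
    """Key derivation as described above: the 128-bit AES key is the
    MD5 hash of the user's security key text. (UTF-8 is an assumption;
    the front-end's actual encoding isn't documented here.)"""
    return hashlib.md5(passphrase.encode("utf-8")).digest()

# Cost of an offline dictionary attack on a minimum-length key:
keyspace = 2 ** (5 * 6)    # 5 characters at ~6 bits each: 2^30 keys
rate = 1e9                 # guesses/second; a modest GPU-class assumption
seconds = keyspace / rate  # ≈ 1 second to exhaust the whole space
```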

Luckily this isn’t a flaw by design: The user just needs to make sure to choose a fully random key with at least 128 bits of entropy, for example in the form of 32 random hexadecimal characters.
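In Python, such a key can be generated with the standard secrets module:

```python
import secrets

# 32 random hexadecimal characters = 16 random bytes = 128 bits of entropy
key = secrets.token_hex(16)
```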

Summary

The problems are, in order from most to least exploitable:

  1. Low entropy in the security key. Security break in seconds. Easily averted by choosing a strong key.
  2. Challenge re-use. Break may be possible within a few years to decades.
  3. Blind guessing. Break possible within a few petayears.

From a theoretical point of view, the security of the BidCoS protocol with a custom AES key is much worse than it should be. From a practical point of view it’s entirely acceptable, provided the user chooses a long, fully random key and the attacker isn’t present when the key is set⁶.

Note: These considerations only apply to the theoretical protocol and not to any particular implementation. It’s possible, even likely, that there are exploitable bugs in some device firmware and/or that the RNG is not as good as expected. Bugs in this area generally reduce the security to the “break within minutes to seconds” category.


  1. See for example this archived article from October 2012, courtesy of the internet archive. The article was updated to say the polar opposite in January 2014. 

  2. XORing the challenge into the key is somewhat unusual, I’m not sure I would’ve found that 

  3. Theoretical computer science generally doesn’t care about constant factors. 

  4. These two notions are not identical, but close enough that, for the purposes of approximately judging system security, I’ll treat them as interchangeable for this article. 

  5. 7.6 petayears, 7 600 000 000 000 000 years 

  6. Obviously the key is transmitted over the air, encrypted with a key that the attacker already knows by induction. 

Understanding Capabilities in Linux

For some time now the Linux kernel has supported a capabilities(7)-based permission control model. In theory this allows assigning fine-grained permissions to processes, so that processes that previously required UID 0/root permissions don’t need them any more. In practice though, uptake of this feature is relatively low, and actually trying to use it is hampered by confusing vocabulary and non-intuitive semantics.

So what’s the story?

All special access permission exemptions that were previously exclusively attached to UID 0 are now associated with a capability. Examples for these are: CAP_FOWNER (bypass file permission checks), CAP_KILL (bypass permission checks for sending signals), CAP_NET_RAW (use raw sockets), CAP_NET_BIND_SERVICE (bind a socket to Internet domain privileged ports).

Capabilities can be bestowed on execution (similar to how SUID operates) or be inherited from a parent process. So in theory it should be possible to, for example, start an Apache web server on port 80 as a normal user with no root access at all, if you can provide it with the CAP_NET_BIND_SERVICE capability. Another example: Wireshark only needs the CAP_NET_RAW and CAP_NET_ADMIN capabilities. It is highly undesirable to run the main UI and protocol parsers as root, and slightly less desirable to run dumpcap, which is the helper tool that Wireshark actually uses to sniff traffic, as root. Instead, the preferred installation method on Debian systems is to set the dumpcap binary up so that it automatically gains the required privileges on execution, and then limit execution of the binary to a certain group of users.
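For experimentation it helps to look at the CapPrm/CapEff/CapInh lines in /proc/&lt;pid&gt;/status, which show these sets as hexadecimal bit masks. A minimal decoder, with a handful of bit numbers taken from &lt;linux/capability.h&gt;:

```python
# Capability bit numbers from <linux/capability.h>; only a few listed.
CAP_BITS = {
    3: "CAP_FOWNER",
    5: "CAP_KILL",
    10: "CAP_NET_BIND_SERVICE",
    12: "CAP_NET_ADMIN",
    13: "CAP_NET_RAW",
}

def decode_caps(mask: int) -> list:
    """Decode a capability bit mask, as shown in the CapPrm/CapEff/CapInh
    lines of /proc/<pid>/status, into capability names."""
    return [name for bit, name in sorted(CAP_BITS.items()) if mask & (1 << bit)]

# The capabilities dumpcap needs, expressed as a mask:
dumpcap_mask = (1 << 12) | (1 << 13)
decode_caps(dumpcap_mask)   # ['CAP_NET_ADMIN', 'CAP_NET_RAW']
```

On a Debian system set up as described above, `getcap /usr/bin/dumpcap` should accordingly report something like cap_net_admin,cap_net_raw.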

Gaining and giving capabilities

This is the most confusing part, because a) it doesn’t behave intuitively in the “just like suid-root” mental model, and b) it uses the same words for completely different functions.

Conceptually capabilities are maintained in sets, which are represented as bit masks. For all running processes capability information is maintained per thread; for binaries in the file system it’s stored in extended attributes. Thread capability sets are copied on fork() and specially transformed on execve(), as discussed below.

Several different capability sets and related variables exist. In the documentation these are treated as somewhat symmetrical for files and threads, but in reality they are not, so I’ll describe them one by one:

Thread permitted set
This is a superset of the capabilities that the thread may add to its thread effective or thread inheritable sets. The thread can use the capset() system call to manage capabilities: It may drop any capability from any set, but may only add capabilities to its thread effective and inheritable sets that are in its thread permitted set. Consequently it cannot add any capability to its thread permitted set, unless it has the CAP_SETPCAP capability in its thread effective set.
Thread effective set
This is the actual set of capabilities that the kernel uses for permission checks.
Thread inheritable set
This is a set that plays a role in bequeathing capabilities to other binaries. It would more properly be called ‘bequeathable’: a capability not in this set cannot be inherited by a different binary through the inheritance process. However, being in this set also does not automatically make a binary inherit the capability. Also note that ‘inheriting’ a capability does not necessarily give any thread effective capabilities: ‘inherited’ capabilities only directly influence the new thread permitted set.
File permitted set
This is a set of capabilities that are added to the thread permitted set on binary execution (limited by cap_bset).
File inheritable set
This set plays a role in inheriting capabilities from another binary: the intersection (logical AND) of the thread inheritable and file inheritable sets is added to the thread permitted set after the execve() is successful.
File effective flag
This is actually just a flag: When the flag is set, then the thread effective set after execve() is set to the thread permitted set, otherwise it’s empty.
cap_bset
This is a bounding capability set which can mask out (by ANDing) file permitted capabilities, and some other stuff. I’ll not discuss it further and just assume that it contains everything.

Based on these definitions the documentation gives a concise algorithm for the transformation that is applied on execve() (new and old relate to the thread capability sets before and after the execve(), file refers to the binary file being executed):

  • New thread permitted = (old thread inheritable AND file inheritable) OR (file permitted AND cap_bset)
  • New thread effective = new thread permitted, if file effective flag set, 0 otherwise
  • New thread inheritable = old thread inheritable

This simple definition has some surprising (to me) consequences:

  1. The ‘file inheritable set’ is not related to the ‘thread inheritable set’. Having a capability in the file inheritable set of a binary will not put that capability into the resulting process’s thread inheritable set. In other words: A thread that wants to bequeath a capability to a different binary needs to explicitly add the capability to its thread inheritable set through capset().
  2. Conversely the ‘thread inheritable set’ is not solely responsible for bequeathing a capability to a different binary. The binary also needs to be allowed to receive the capability by setting it in the file inheritable set.
  3. Bequeathing a capability to a different binary by default only gives it the theoretical ability to use the capability. To become effective, the target process must add the capability to its effective set using capset(). Or the file effective flag must be set.
  4. A nice side effect of the simple copy operation used for the thread inheritable set: A capability can be passed in the thread inheritable set through multiple intermediate fork() and execve() calls to a target process at the end of a very long chain without becoming effective in the middle.
  5. The relevant file capability sets are those of the binary being executed. When trying to give permitted capabilities to an interpreted script, the capabilities must be in the file inheritable set of the interpreter binary. Additionally: If the script can’t/won’t call capset(), the file effective flag must be set on the interpreter binary.
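Since the transformation is purely bitwise, it can be written down directly. The following sketch models the capability sets as Python integers used as bit masks (the CAP_NET_RAW bit number is from &lt;linux/capability.h&gt;) and reproduces consequences 1 and 2 from the list above:

```python
CAP_NET_RAW = 1 << 13     # bit number from <linux/capability.h>
ALL_CAPS = (1 << 64) - 1  # stand-in for "cap_bset contains everything"

def execve_transform(thread_inheritable, file_permitted,
                     file_inheritable, file_effective_flag,
                     cap_bset=ALL_CAPS):
    """The execve() capability transformation exactly as stated above."""
    new_permitted = (thread_inheritable & file_inheritable) | (file_permitted & cap_bset)
    new_effective = new_permitted if file_effective_flag else 0
    new_inheritable = thread_inheritable   # copied unchanged
    return new_permitted, new_effective, new_inheritable

# Consequences 1 and 2: a file inheritable bit alone grants nothing...
assert execve_transform(0, 0, CAP_NET_RAW, False)[0] == 0
# ...but thread inheritable AND file inheritable together do, and even
# then the capability is only permitted, not effective, until the new
# process calls capset() (or the file effective flag is set):
assert execve_transform(CAP_NET_RAW, 0, CAP_NET_RAW, False) == (CAP_NET_RAW, 0, CAP_NET_RAW)
```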

Summary

I’ve tried to summarize all the possible paths that a capability can take within a Linux thread using capset() or execve(). (Note: fork() isn’t shown here, since all capability information is simply duplicated when forking.)

[Figure: Linux Capabilities – possible capability transmission paths]

On the Difference Between RFID and NFC

What is RFID? What is NFC? What is the difference between RFID and NFC? These questions come up time and again, so let me answer them in some detail.

Both are terms that are almost never used correctly, and both have, in a general sense, something to do with communicating or radioing.

What is RFID?

Let’s start with the older term: RFID is just “radio frequency identification”. It’s not really defined, beyond being a combination of the two attributes, and, if you are so inclined, you could cite the “Identification Friend or Foe” systems invented for military airplanes in the 1930s as one of the earliest RFID systems¹.

In modern times, the term RFID is almost always used to imply a system consisting of few relatively complex ‘readers’ and a larger number of relatively, or very, simple ‘transponders’, with some sort of radio signal being used to indicate the identification, or at least presence², of the latter to the former. Now, that’s still quite abstract, so let’s add further characteristics, at each step going in the direction of the systems that most people actually mean when they say RFID with no further qualification:

  • The transponder could be active (have its own power source) or passive (be energized by the reader using some physical effect); the latter is what’s on most people’s minds in the context of RFID.
  • A passive transponder can be communicated with via radio waves through radar backscatter (ultra-high frequencies, range in the hundreds of meters, very little power available to the transponder) or, more often seen in everyday life, be inductively coupled (low to high frequencies, range less than a couple of meters, possibly high power available).
  • An inductively coupled transponder could operate on a non-standardized low frequency (LF, ~120–140kHz) in a proprietary system, the standard high frequency (HF, 13.56MHz) in a proprietary system, or, most uses of the term RFID, the 13.56MHz frequency using an ISO standardized protocol.
  • The 13.56MHz RFID ISO protocols are ISO 15693, vicinity coupling, defined range less than a meter, and, more often referenced in the context of “RFID”, ISO 14443, proximity coupling, defined range less than ten centimeters.

Different properties of these general approaches lead to a very domain specific understanding of what “a normal RFID system” is: Warehouse management applications sometimes deal with ISO 15693 and more often with Gen 2 EPC (ISO 18000–6, passive backscatter, UHF: 860–960MHz). Average consumers overwhelmingly find themselves confronted with ISO 14443 systems (electronic passports, credit cards, newer corporate IDs) or proprietary HF systems (many corporate IDs). Finally, most very simple or moderately old applications quite often work with proprietary LF systems.

It’s a shaky definition process, but at least once you have determined that you are talking about ISO 14443 you’re on quite firm ground. However, this only gets you to establish communication with a transponder, possibly gather a transponder specific unique identifier, and transmit bytes back and forth. The actual command set for reading and writing, and potentially other functions such as electronic purse applications, is a completely different horseride altogether.

What is NFC?

Now, on the subject of NFC: this is even less well defined – or possibly better, depending on how you look at it. It’s a relatively new term, so there’s no firm default interpretation you could use, besides it having something to do with “near-field” and “communication” (i.e. inductive coupling and some sort of information transfer). There are, however, a couple of well defined things that bear the name NFC – none of which is usually exclusively intended by someone using the term:

  • NFCIP‑1, also known as ISO 18092 (dual-published as ECMA-340), which is an air interface for half-duplex communication between two entities using inductive coupling on 13.56MHz; at least one of the entities must be actively powered.
  • NFC Forum which is an industry association that publishes a set of standards, among them are: 
    • NFC Data Exchange Format (NDEF), which is a compact binary data storage and message serialization format
    • NFC Record Type Definition (RTD) which is a specification format for NDEF message formats
    • A couple of RTDs that define both the message format and expected semantics of common use cases such as smart posters, business cards, etc.
    • NFC Tag Type definitions (1 through 4) that define a set of protocols for passive data storage tags and how to access NDEF messages on them

How do RFID and NFC relate?

Now comes the fun part: NFCIP‑1 is, not by accident, compatible with ISO 14443, where appropriate. Full-on NFCIP‑1 devices generally can implement both sides (now called Initiator and Target) of the communication, and so are compatible both with ISO 14443 readers (by emulating a tag) and ISO 14443 tags (by behaving as a reader). As an aside: Most vendors, while they’re on the 13.56MHz frequency anyway, also implement all the usual 13.56MHz RFID protocols in the things they call NFC chipsets, which is not at all helpful when trying to untangle the standards salad. Just because your “NFC phone” can operate with a certain tag does not mean that it’s “doing NFC” in a certain narrowly defined sense.

And even better: The NFC tag types correspond to existing 13.56MHz RFID tag products, but sometimes in a generalized version. For example tag type 2 is essentially NXP Mifare Ultralight³, but where Ultralight has a fixed 64 bytes of memory, tag type 2 also allows arbitrary sizes bigger than 64B. And indeed, one of the most ubiquitous “NFC tags” that you can buy now is an NFC type 2 tag which is not NXP Mifare Ultralight and has ~160 bytes of memory.

In conclusion, by NFC most people mean, depending on context, a tag type or message format from the NFC ecosystem, or the NFC chip in their phones, even when they are using it with any old ISO 14443 tag⁴, which, closing the loop here, is what most people mean when they are referencing RFID.


  1. I got that example from Dr. Melanie Rieback, who does so in all her talks. 

  2. This is sometimes referred to as ‘1‑bit identification’, and extremely often seen in the context of electronic article surveillance.

  3. The memory map table in the NFC tag type definition is an almost verbatim copy of that in the Ultralight data sheet, however, you will not find the words “mifare” nor “ultralight” anywhere in the tag type definition document. 

  4. The single most widespread ISO 14443 transponder type is Mifare Classic, which is not an NFC Forum tag type, but, confusingly, works with most NFC implementations in mobile phones as if it was.