Auditing User Intent in Closed Source IoT Applications

(Header image: Amazon Echo Dot, under CC-BY by Gregory Varnum)

Hardware-backed voice assistants like Amazon Alexa and Google Assistant have received some criticism for their handling of voice data behind the scenes. The companies outsourced quality control and machine-learning feedback to external contractors, who received voice recordings of user commands and were tasked with improving the assistants’ recognition of voice and intent. This came as a surprise to many users, who expected their voice commands to be processed only by automated systems, not listened to by actual humans.

It is worth noting that user intent for the voice to be recorded and sent to the cloud was generally not called into question: While the devices listen all the time, recording and sending only start after they detect a so-called wake word: “Alexa” or “Hey Google”. There are scattered reports of accidental activation by similar sounds (“Alec, say, what’s up?”), but on balance this part of the system appears to be reasonably robust. Accidental activation is further mitigated by the fact that the devices clearly indicate their current mode: Recognition of the wake word triggers a confirmation sound and lights up the LEDs.

New reporting shows how a malicious actor can get around this user intent in a limited fashion: Several sets of bugs in the system design allowed the assistant to stay awake and send recordings to the attacker even when a user might reasonably expect it not to. It is important to note that these bugs are not remote-access vulnerabilities! User intent is still necessary to start the interaction; it’s just that the interaction lasts longer than the user expects. Also, none of the local safeguards against undetected listening are impacted: The LEDs still light up and indicate an attentive assistant.

It is in the companies’ best interest to not be found spying on their users, and the easiest way to achieve that is by not doing it. Amazon, specifically, tries very hard to be seen as privacy-preserving because that enables additional services for them. Their in-home delivery service is absolutely dependent on consumers trusting them to open their doors for the delivery driver (who in turn is instructed not to actually enter the home, but just drop the package right on the other side of the door, and is filmed doing so). Amazon demonstrates their willingness to at least appear privacy-respecting on other fronts too: The microphone-off button on the Amazon Echo devices cuts power to the microphone array and lights up a “mute” LED: it’s impossible to turn on the LED under software control. When the LED is on, the microphone is off.

The primary concern is still one of trust: Do I trust the device to only record and transmit audio when I intend for it to do so? In theory the device manufacturer could have the device surreptitiously record everything. There’s no easy way to audit either the device or its connections to the outside world. Some progress has been made to extract and analyse device firmware, but ultimately this cannot rule out a silent firmware update with listening capabilities at a later date.

An Auditing System to Confirm User Intent

This essay proposes a system in which users can gain confidence that they are not surreptitiously monitored, without requiring a device manufacturer to give up any of their proprietary secrets. It assumes cooperation on the part of the manufacturer and a certain technical expertise on the part of at least some of the users.

Step 1: The manufacturer augments their back-end systems to log device activity and TLS session keys, and keeps these logs for a certain number of days.

Step 2: The end user passively records all incoming and outgoing traffic of the device. Obviously only a small percentage of end users will be able to do that, and only a fraction of those will actually record the traffic. But since the manufacturer cannot be sure which devices are being monitored, they risk detection if they tamper with any of them.

Step 3: The user requests a list of session keys from the manufacturer and uses it to decrypt the captured connections.

Step 4: The manufacturer provides a machine-readable list of activities of the device, both user-initiated (such as queries to a voice assistant) and automated (such as firmware updates, or normal device telemetry).

Step 5: Analysis software matches the list of device activities to the recorded connections and flags any suspicious activity. The software should be open source, initially provided by the manufacturer, and be extensible by the community at large.

Step 6: The user can cross-check the now-vetted list of device activities to confirm whether it matches their intent.
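To make step 5 concrete, here is a minimal sketch of such an analysis in Python. The field names and the matching heuristic are made up for illustration; a real tool would operate on the decrypted capture and a signed, machine-readable activity log.

    from datetime import timedelta

    MAX_SKEW = timedelta(seconds=30)  # assumed tolerance between log and capture

    def audit(activities, flows):
        """Match logged device activities against observed flows.

        activities: [{"time": datetime, "kind": str}] from the manufacturer (step 4)
        flows:      [{"time": datetime, "up_bytes": int}] from the capture (steps 2/3)
        """
        findings = []
        for flow in flows:
            match = next((a for a in activities
                          if abs(a["time"] - flow["time"]) <= MAX_SKEW), None)
            if match is None:
                findings.append(("flow with no logged activity", flow))
            elif match["kind"] == "firmware_update" and flow["up_bytes"] > 4096:
                # Downstream content may stay opaque (see below), but a fat
                # upstream channel on a download-type activity is suspicious.
                findings.append(("unexpectedly large upstream channel", flow))
        return findings

Anything in findings is a candidate for the cross-check in step 6.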

The most impractical step is number 2: Only a few users would bother to configure their networks in a way that allows the device traffic to be monitored. However, I believe that even the possibility of monitoring should deter malicious behaviour. This step is also the one most easily supported by third-party tools: An OpenWRT extension, for example, would greatly simplify the recording for users of OpenWRT, and other CPE manufacturers could follow suit[1].
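As an illustration of how little is needed for a crude version of the recording, a few lines of Python with scapy suffice on a Linux router; the device address is of course just an example.

    # Passively record all traffic to/from one IoT device (crude sketch).
    # Requires root and scapy; run on the router or on a mirror port.
    from scapy.all import sniff, wrpcap

    DEVICE_IP = "192.168.1.23"  # hypothetical address of the assistant

    packets = sniff(filter=f"host {DEVICE_IP}", timeout=3600)  # one hour
    wrpcap("device-traffic.pcap", packets)  # to be decrypted in step 3

A production-quality tool would capture continuously and rotate files, but the principle is the same.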

The IoT device manufacturer may want to keep some data — such as firmware update files, or received audio streams — proprietary and secret. They must do so in a manner that allows the analysis tool to confirm that only downstream data is withheld: either by using a separate at-rest encryption layer inside the TLS connection, or by using a separate TLS connection to a special endpoint which carries only the absolutely minimal amount of information (one small HTTP request) in the upstream direction. The analysis tool is then able to ignore the contents of this proprietary data while still being able to flag anomalies in the metadata (“Three 150MB firmware updates in a day? Really?”).

Rationale: A scheme that forcibly opens up firmware files or DRM’ed audio streams would be a non-starter for industry adoption. Decrypting this downstream content isn’t necessary for the goal of confirming user intent. Conversely, all information carried on the upstream channel by definition belongs to the user, since they generated it. (If they didn’t generate it, it wouldn’t need to be transferred.)

Potential for abuse: When proposing to store new kinds of data (step 1), it’s important to analyse the potential for abuse of this data, be it by law-enforcement agencies or by vengeful ex-partners. I believe no new threat is introduced here: The current back-ends of manufacturers with voice assistants already store voice recordings and generally give the option to look at the device history or download recordings (both to LEAs and to anyone with account access). The data recorded in step 1 should give no additional insight into user behaviour beyond what is already recorded under the status quo, except that it allows the user to confirm the completeness of the log.


  1. This opens up the question of whether one trusts their CPE manufacturer to build correct logging and not to collude with the IoT device manufacturer. 

On the Security of AES in HomeMatic

HomeMatic is a line of home automation devices that are popular in Germany and use a proprietary radio protocol (BidCoS, Bidirectional Communication Standard) on a frequency of 868MHz. Some devices allow optional use of “AES signing” for message authentication. When enabled, the execution of a command is delayed until a challenge-response process between the initiator and receiver of the command is completed. All AES-capable HomeMatic devices ship with a default key, which can optionally be set to a custom value. The signing requirement is disabled by default for most devices, except for the KeyMatic/WinMatic line of door-lock and window actuators, which always requires AES for all commands.

Before 2014 the common wisdom[1] was to leave the AES key at the default value: Setting a custom key and forgetting it renders the device useless and requires it to be sent back to the manufacturer to be reset – for a fee.

Sathya Laufer and Christian Mallas demonstrated that this is trivially dangerous at the 30th Chaos Communication Congress: Within the HomeMatic universe there are LAN gateways that accept commands over Ethernet/IP and forward them through BidCoS to the target device. In this setup, all the BidCoS AES operations are executed by the LAN gateway. So if the target device is using the default key, an attacker can simply use a LAN gateway (which also knows the default AES key) to send arbitrary commands to that device, without bothering with any of the cryptographic details.

Later, the default AES key became known, and a reverse-engineered description of the authentication protocol is available (see next section), so an attacker can also use custom hardware to send commands to all devices still configured with the default key.

Michael Gernoth did a superb job of reverse engineering the authentication protocol[2], but his description is mainly based on the flow to authenticate a known message. I’ll try to re-frame it here in terms of how the authentication token is generated, and generically for arbitrary message sizes. Where applicable I’ll use the same abbreviations that Michael uses (∥ is concatenation, ⊕ is XOR):

Packets

  m   Original message to be authenticated. Note: m = D0 ∥ D1, if the length is considered not to be part of the packet.
  c   Challenge message
  r   Response message
  a   ACK message

Data items

  Name  Description                                         Length/bytes  Packet
  D0    Metadata (counter, flags, type, sender, receiver)   10            m
  D1    Parameters                                          varies        m
  C     Challenge                                           6             c
  P     AES response                                        16            r
  A     ACK authentication                                  4             a
  T     Timestamp or counter                                6             (not transmitted)

Under these definitions the calculation of the authentication messages happens as follows:

  • K’ := K ⊕ (C ∥ 00…)
  • Pd’ := AES(K’, T ∥ D0)
  • P := AES(K’, Pd’ ⊕ (D1 ∥ 00…))

Or, in easier terms, and likely representing the original idea, for arbitrary packet lengths:

AES-CBC(IV = 00…, Key = K ⊕ (C ∥ 00…), Payload = T ∥ m)

(with the last block of the CBC calculation being output as P and the first 4 bytes of the first block as A)
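For concreteness, here is the same calculation as a small Python sketch using pycryptodome. It follows the description above; I have not verified it against real hardware.

    from Crypto.Cipher import AES  # pycryptodome

    def bidcos_auth(key: bytes, challenge: bytes, timestamp: bytes, m: bytes):
        """Compute (A, P) for message m = D0 ∥ D1 under challenge C."""
        assert len(key) == 16 and len(challenge) == 6 and len(timestamp) == 6
        # K’ := K ⊕ (C ∥ 00…)
        k_prime = bytes(k ^ c for k, c in zip(key, challenge.ljust(16, b"\x00")))
        # AES-CBC with zero IV over T ∥ m, zero-padded to a block boundary
        payload = timestamp + m
        payload += b"\x00" * (-len(payload) % 16)
        blocks = AES.new(k_prime, AES.MODE_CBC, iv=b"\x00" * 16).encrypt(payload)
        A = blocks[:4]    # ACK authentication: first 4 bytes of the first block
        P = blocks[-16:]  # AES response: the last block
        return A, P

Note how the first block is exactly AES(K’, T ∥ D0), since T (6 bytes) and D0 (10 bytes) fill one 16-byte block.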

While this looks like a standard (ab)use of AES-CBC as an authentication code, as best I can tell the verification really has to happen in the very strange backwards way that Michael describes, for the simple reason that T is not transmitted and thus not available to the verifier to replicate the same calculation as the initiator.

How secure is the HomeMatic system if a custom AES key is used?

A customary simple measure for security is the number of bits an attacker must guess correctly to violate whatever security properties the system claims to have. This is related to the number of operations to execute in the attack[3]. Example: For an authentication mechanism with a security of 128 bits, the attacker needs to either correctly guess 128 bits (either key or authentication code) to break the system, or perform 2^127 operations (random authentication attempts) to score a successful attack with a probability of ~50%.[4]

Under this definition a single block of AES with fully random key has a security of 128 bits. We would hope to find the same security in the HomeMatic use of AES.

CBC-MAC is generally not a recommended way to construct a message authentication code and is very easy to implement wrongly; for examples and a longer discussion of this aspect see this article on CBC-MAC by Matthew Green. That being said, XORing the challenge into the key prevents most of the more obvious attacks. They become possible again when a challenge is re-used, though.

Challenge re-use

The challenge is a 6-byte value generated by the verifier that must be random. Random number generation (RNG) with computers is hard, and RNGs in many (embedded) devices have had spectacular failures with security implications in the past.

Now, I can’t make any assertions as to the quality of HomeMatic’s RNG, so let’s assume that it is stellar and always outputs full 48 bits of randomness. The birthday paradox then tells us that after around 2^24 ≈ 16.7 million tries there’s a better than even (50%) chance that a number repeats. A repeated challenge directly gives the attacker the possibility to replay a previous command.

For this attack to work with probability 50% the attacker must capture on the order of 16.7 million identical messages, and then try to get the same command executed 16.7 million times, representing a security factor of around 26 bits. Probably nothing to worry about in a radio protocol that will do at most ~5 authentications per second (the second phase alone will take 6 weeks), but a far cry from 128 bits.
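The 2^24 figure is just the birthday bound for a 48-bit value; as a quick sanity check in Python:

    import math

    # Number of 48-bit challenges for a ~50% chance of a repeat:
    # sqrt(2 * ln 2 * 2**48) ≈ 1.18 * 2**24 ≈ 16.7 million
    print(math.sqrt(2 * math.log(2) * 2**48) / 2**24)  # ≈ 1.1774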

The specific usage of CBC-MAC is also susceptible to an attack where the attacker can craft a valid authentication token for a message that consists of a message sniffed from the radio channel appended with attacker-controlled data; see Matthew’s article linked above. However, I couldn’t think of a way to make this stick: The attacker would still need to either coax the system into generating an authentication token for a different, attacker-chosen message, or be content with very limited message manipulations.

Blind guessing

Remember that T is never transmitted but apparently inferred from the protocol? From an attacker’s point of view, that means these bits are free: Instead of guessing 128 bits of key or authentication code, the attacker only needs to guess 128 − 6×8 = 80 bits. Even if the verifier checks that T is monotonically increasing, that only adds one additional bit of complexity. It works like this:

  1. Attacker sends arbitrary message m = D0 ∥ D1
  2. Verifier sends challenge C
  3. Attacker sends random answer P
  4. Verifier calculates
    • Pd’ := AES⁻¹(K’, P) ⊕ (D1 ∥ 00…), and
    • Pd’d := AES⁻¹(K’, Pd’).

    Verifier then checks whether the last ten bytes of Pd’d match D0 (and maybe if the first 6 bytes are higher than the last T received).

  5. If in the previous step a match is found, the verifier executes the command and outputs A.

Note that in the protocol D1 is never checked, but used to calculate something that is checked against D0. Now: An attacker that feeds random data into P will cause random data to appear in Pd’d. There’s a chance of 1 in 2^80 that this random data matches D0 (and possibly another 1 in 2 that the first 6 bytes are numerically larger than the last T).
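Continuing the Python sketch from above, the verifier’s backwards check might look like this for the two-block case (D1 of at most 16 bytes); again an unverified illustration:

    def bidcos_verify(key, challenge, last_t, m, p):
        """Backwards verification: recovers T from P, since T is never sent."""
        d0, d1 = m[:10], m[10:]
        k_prime = bytes(k ^ c for k, c in zip(key, challenge.ljust(16, b"\x00")))
        ecb = AES.new(k_prime, AES.MODE_ECB)
        pd_prime = bytes(x ^ y for x, y in zip(ecb.decrypt(p), d1.ljust(16, b"\x00")))
        pd_prime_d = ecb.decrypt(pd_prime)
        t, d0_check = pd_prime_d[:6], pd_prime_d[6:]
        # D1 itself is never checked; a random P passes with probability 2^-80
        # (assuming a big-endian interpretation of T for the comparison).
        return d0_check == d0 and t > last_t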

Again: 80 is much lower than 128, so cryptographically speaking the mechanism is broken. Practically though, sending 2^80 requests (giving a success probability of 63% to the attacker) will take 7.6 Pa[5], so that’s probably nothing to worry about.

Entropy

Internally the AES key is a binary value of 128 bits, but that’s not how it’s presented to the user in the front-end. Setting the HomeMatic security key requests an arbitrary text string from the user, which is then hashed with MD5 and the resulting hash is used as the AES key.

A careless user might not worry too much about this key, and the on-screen prompt only reminds them to use at least 5 characters. Even under the best of circumstances, one typeable character has only about 6 bits of entropy. The minimum security recommended by the user interface is therefore equivalent to 5 × 6 = 30 bits. Also, an attacker can execute an offline dictionary attack on the key after intercepting one or a few radio messages. Execution rates for these kinds of attacks typically lie in the millions or billions of operations per second even on regular PC hardware (CPU and GPU), so any 5-character security key will be cracked in seconds.

Luckily this isn’t a flaw by design: The user just needs to make sure to choose a fully random key with at least 128 bits of entropy, for example in the form of 32 random hexadecimal characters.
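For example, Python’s secrets module produces exactly such a key:

    import secrets

    # 32 random hexadecimal characters = 128 bits of entropy
    print(secrets.token_hex(16))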

Summary

The problems are, in order from most to least exploitable:

  1. Low entropy in the security key. Security break in seconds. Easily averted by choosing a strong key.
  2. Challenge re-use. Break may be possible within a few years to decades.
  3. Blind guessing. Break possible within a few petayears.

From a theoretical point of view, the security of the BidCoS protocol with a custom AES key is much worse than it should be. From a practical point of view it’s entirely acceptable, if the user chooses a long, fully random key and the attacker isn’t present when the key is set[6].

Note: These considerations only apply to the theoretical protocol and not to any particular implementation. It’s possible, even likely, that there are exploitable bugs in some device firmware and/or that the RNG is not as good as expected. Bugs in this area generally reduce the security to the “break within minutes to seconds” category.


  1. See for example this archived article from October 2012, courtesy of the internet archive. The article was updated to say the polar opposite in January 2014. 

  2. XORing the challenge into the key is somewhat unusual; I’m not sure I would’ve found that. 

  3. Theoretical computer science generally doesn’t care about constant factors. 

  4. These two notions are not identical, but close enough that, for the purposes of approximately judging system security, I’ll treat them as interchangeable for this article. 

  5. 7.6 petayears, 7 600 000 000 000 000 years 

  6. Obviously the key is transmitted over the air, encrypted with a key that the attacker already knows by induction. 

Understanding Capabilities in Linux

For some time now the Linux kernel has supported a capabilities(7)-based permission control model. In theory this allows assigning fine-grained permissions to processes, so that processes that previously required UID 0/root permissions no longer need them. In practice though, uptake of this feature is relatively low, and actually trying to use it is hampered by confusing vocabulary and non-intuitive semantics.

So what’s the story?

All special access permission exemptions that were previously exclusively attached to UID 0 are now associated with a capability. Examples for these are: CAP_FOWNER (bypass file permission checks), CAP_KILL (bypass permission checks for sending signals), CAP_NET_RAW (use raw sockets), CAP_NET_BIND_SERVICE (bind a socket to Internet domain privileged ports).

Capabilities can be bestowed on execution (similar to how SUID operates) or be inherited from a parent process. So in theory it should be possible to, for example, start an Apache web server on port 80 as a normal user with no root access at all, if you can provide it with the CAP_NET_BIND_SERVICE capability. Another example: Wireshark only needs the CAP_NET_RAW and CAP_NET_ADMIN capabilities. It is highly undesirable to run the main UI and protocol parsers as root, and slightly less undesirable to run dumpcap, the helper tool that Wireshark actually uses to sniff traffic, as root. Instead, the preferred installation method on Debian systems is to set the dumpcap binary up so that it automatically gains the required privileges on execution, and then limit execution of the binary to a certain group of users.
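The file capability sets discussed below are stored in the security.capability extended attribute of the binary. As a sketch, they can be inspected from Python, assuming the revision-2 on-disk layout from linux/capability.h (the path is just an example):

    import os
    import struct

    def file_caps(path):
        """Decode a VFS_CAP_REVISION_2 security.capability xattr (sketch)."""
        raw = os.getxattr(path, "security.capability")
        magic, p_lo, i_lo, p_hi, i_hi = struct.unpack("<5I", raw[:20])
        return {
            "effective_flag": bool(magic & 0x000001),  # VFS_CAP_FLAGS_EFFECTIVE
            "permitted":   p_lo | (p_hi << 32),        # file permitted set
            "inheritable": i_lo | (i_hi << 32),        # file inheritable set
        }

    print(file_caps("/usr/bin/dumpcap"))  # example path on a Debian system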

Gaining and giving capabilities

This is the most confusing part, because (a) it doesn’t behave intuitively in the “just like suid-root” mental model, and (b) it uses the same words for completely different functions.

Conceptually capabilities are maintained in sets, which are represented as bit masks. For all running processes capability information is maintained per thread; for binaries in the file system it’s stored in extended attributes. Thread capability sets are copied on fork() and specially transformed on execve(), as discussed below.

Several different capability sets and related variables exist. In the documentation these are treated as somewhat symmetrical for files and threads, but in reality they are not, so I’ll describe them one by one:

Thread permitted set
This is a limiting superset for the capabilities that the thread may add to its thread effective or thread inheritable sets. The thread can use the capset() system call to manage capabilities: It may drop any capability from any set, but may only add capabilities to its thread effective and inheritable sets if they are in its thread permitted set. Consequently it cannot add any capability to its thread permitted set, unless it has the CAP_SETPCAP capability in its thread effective set.
Thread effective set
This is the actual set of capabilities that the kernel uses for permission checks.
Thread inheritable set
This is a set that plays a role in bequeathing capabilities to other binaries. It would more properly be called ‘bequeathable’: a capability not in this set cannot be inherited by a different binary through the inheritance process. However, being in this set also does not automatically make a binary inherit the capability. Also note that ‘inheriting’ a capability does not necessarily automatically give any thread effective capabilities: ‘inherited’ capabilities only directly influence the new thread permitted set.
File permitted set
This is a set of capabilities that are added to the thread permitted set on binary execution (limited by cap_bset).
File inheritable set
This set plays a role in inheriting capabilities from another binary: the intersection (logical AND) of the thread inheritable and file inheritable sets is added to the thread permitted set after the execve() is successful.
File effective flag
This is actually just a flag: When the flag is set, then the thread effective set after execve() is set to the thread permitted set, otherwise it’s empty.
cap_bset
This is a bounding capability set which can mask out (by ANDing) file permitted capabilities, and some other stuff. I’ll not discuss it further and just assume that it contains everything.

Based on these definitions the documentation gives a concise algorithm for the transformation that is applied on execve() (old and new refer to the thread capability sets before and after the execve(); file refers to the binary file being executed):

  • New thread permitted = (old thread inheritable AND file inheritable) OR (file permitted AND cap_bset)
  • New thread effective = new thread permitted, if file effective flag set, 0 otherwise
  • New thread inheritable = old thread inheritable
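Expressed as Python, with each set as an integer bit mask (a sketch of the algorithm, not kernel code):

    def execve_caps(old_inheritable: int, file_inheritable: int,
                    file_permitted: int, file_effective_flag: bool,
                    cap_bset: int = ~0):
        """Thread capability sets after execve(), per the algorithm above."""
        new_permitted = ((old_inheritable & file_inheritable)
                         | (file_permitted & cap_bset))
        new_effective = new_permitted if file_effective_flag else 0
        new_inheritable = old_inheritable  # copied unchanged
        return new_permitted, new_effective, new_inheritable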

This simple definition has some surprising (to me) consequences:

  1. The ‘file inheritable set’ is not related to the ‘thread inheritable set’. Having a capability in the file inheritable set of a binary will not put that capability into the resulting process’s thread inheritable set. In other words: A thread that wants to bequeath a capability to a different binary needs to explicitly add the capability to its thread inheritable set through capset().
  2. Conversely the ‘thread inheritable set’ is not solely responsible for bequeathing a capability to a different binary. The binary also needs to be allowed to receive the capability by setting it in the file inheritable set.
  3. Bequeathing a capability to a different binary by default only gives it the theoretical ability to use the capability. To become effective, the target process must add the capability to its effective set using capset(), or the file effective flag must be set on the binary.
  4. A nice side effect of the simple copy operation used for the thread inheritable set: A capability can be passed in the thread inheritable set through multiple intermediate fork() and execve() calls to a target process at the end of a very long chain without becoming effective in the middle.
  5. The relevant file capability sets are those of the binary being executed. When trying to give permitted capabilities to an interpreted script, the capabilities must be in the file inheritable set of the interpreter binary. Additionally: If the script can’t/won’t call capset(), the file effective flag must be set on the interpreter binary.

Summary

I’ve tried to summarize all the possible paths that a capability can take within a Linux thread using capset() or execve(). (Note: fork() isn’t shown here, since all capability information is simply duplicated when forking.)

(Figure: Linux capabilities, possible capability transmission paths)

On the Difference Between RFID and NFC

What is RFID? What is NFC? What is the difference between RFID and NFC? These questions come up time and again, so let me answer them in some detail.

Both are terms that are almost never used correctly, and both have, in a general sense, something to do with communicating or radioing.

What is RFID?

Let’s start with the older term: RFID is just “radio frequency identification”. It’s not really defined, beyond being a combination of the two attributes, and, if you are so inclined, you could cite the “Identification Friend or Foe” systems invented for military airplanes in the 1930s as one of the earliest RFID systems[1].

In modern times, the term RFID is almost always used to imply a system consisting of a few relatively complex ‘readers’ and a larger number of relatively, or very, simple ‘transponders’, with some sort of radio signal being used to indicate the identification, or at least presence[2], of the latter to the former. Now, that’s still quite abstract, so let’s add further characteristics, at each step going in the direction of the systems that most people actually mean when they say RFID with no further qualification:

  • The transponder could be active (have its own power source) or passive (be energized by the reader using some physical effect); the latter is what’s on most people’s minds in the context of RFID.
  • A passive transponder can be communicated with through radar backscatter using radio waves (ultra-high frequencies, range in the hundreds of meters, very little power available to the transponder) or, more often seen in everyday life, be inductively coupled (low to high frequencies, range less than a couple of meters, possibly high power available).
  • An inductively coupled transponder could operate on a non-standardized low frequency (LF, ~120–140kHz) in a proprietary system, the standard high frequency (HF, 13.56MHz) in a proprietary system, or, most uses of the term RFID, the 13.56MHz frequency using an ISO standardized protocol.
  • The 13.56MHz RFID ISO protocols are ISO 15693 (vicinity coupling, defined range less than a meter) and, more often referenced in the context of “RFID”, ISO 14443 (proximity coupling, defined range less than ten centimeters).

Different properties of these general approaches lead to a very domain-specific understanding of what “a normal RFID system” is: Warehouse management applications sometimes deal with ISO 15693 and more often with Gen 2 EPC (ISO 18000-6, passive backscatter, UHF: 860–960MHz). Average consumers overwhelmingly find themselves confronted with ISO 14443 systems (electronic passports, credit cards, newer corporate IDs) or proprietary HF systems (many corporate IDs). Finally, most very simple or moderately old applications quite often work with proprietary LF systems.

It’s a shaky definition process, but at least once you have determined that you are talking about ISO 14443, you’re on quite firm ground. However, this only gets you as far as establishing communication with a transponder, possibly gathering a transponder-specific unique identifier, and transmitting bytes back and forth. The actual command set for reading and writing, and potentially other functions such as electronic purse applications, is a completely different horseride altogether.

What is NFC?

Now, on the subject of NFC: this is even less well defined – or possibly better defined, depending on how you look at it. It’s a relatively new term, so there’s no firm default interpretation you could use, besides it having something to do with “near-field” and “communication” (i.e. inductive coupling and some sort of information transfer). There are, however, a couple of well defined things that bear the name NFC – none of which are usually exclusively intended by someone using the term:

  • NFCIP‑1, also known as ISO 18092 (dual-published as ECMA-340), which is an air interface for half-duplex communication between two entities using inductive coupling on 13.56MHz; at least one of the entities must be actively powered.
  • NFC Forum, which is an industry association that publishes a set of standards, among them: 
    • NFC Data Exchange Format (NDEF), which is a compact binary data storage and message serialization format (see the sketch after this list)
    • NFC Record Type Definition (RTD), which is a specification framework for NDEF message formats
    • A couple of RTDs that define both the message format and expected semantics of common use cases such as smart posters, business cards, etc.
    • NFC Tag Type definitions (1 through 4), which define a set of protocols for passive data storage tags and how to access NDEF messages on them
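To give a taste of how compact NDEF is, here is a Python sketch of a single short record of the well-known type ‘U’ (URI), following the NFC Forum RTD-URI specification:

    def ndef_uri_record(rest_of_uri: str) -> bytes:
        """Build one short NDEF record of well-known type 'U' (URI)."""
        payload = b"\x04" + rest_of_uri.encode("utf-8")  # 0x04 = "https://" prefix
        assert len(payload) < 256  # short-record (SR) limit
        # 0xD1: MB=1, ME=1, SR=1, TNF=0x1 (NFC Forum well-known type)
        return bytes([0xD1, 0x01, len(payload)]) + b"U" + payload

    # d1 01 0c 55 04 "example.com" encodes the URI "https://example.com"
    print(ndef_uri_record("example.com").hex())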

How do RFID and NFC relate?

Now comes the fun part: NFCIP‑1 is, not by accident, compatible with ISO 14443, where appropriate. Full-on NFCIP‑1 devices generally can implement both sides (now called Initiator and Target) of the communication, and so are compatible both with ISO 14443 readers (by emulating a tag) and ISO 14443 tags (by behaving as a reader). As an aside: Most vendors, while they’re on the 13.56MHz frequency anyway, also implement all the usual 13.56MHz RFID protocols in the things they call NFC chipsets, which is not at all helpful when trying to untangle the standards salad. Just because your “NFC phone” can operate with a certain tag does not mean that it’s “doing NFC” in a certain narrowly defined sense.

And even better: The NFC tag types correspond to existing 13.56MHz RFID tag products, but sometimes in a generalized version. For example, tag type 2 is essentially NXP Mifare Ultralight[3], but where Ultralight has a fixed 64 bytes of memory, tag type 2 also allows arbitrary sizes bigger than 64B. And indeed, some of the most ubiquitous “NFC tags” that you can buy now are NFC type 2 tags that are not NXP Mifare Ultralight and have ~160 bytes of memory.

In conclusion, by NFC most people mean, depending on context, a tag type or message format from the NFC ecosystem, or the NFC chip in their phones, even when they are using it with any old ISO 14443 tag[4], which, closing the loop here, is what most people mean when they are referencing RFID.


  1. I got that example from Dr. Melanie Rieback, who cites it in all her talks. 

  2. This is sometimes referred to as ‘1‑bit identification’, and is extremely often seen in the context of electronic article surveillance. 

  3. The memory map table in the NFC tag type definition is an almost verbatim copy of that in the Ultralight data sheet, however, you will not find the words “mifare” nor “ultralight” anywhere in the tag type definition document. 

  4. The single most widespread ISO 14443 transponder type is Mifare Classic, which is not an NFC Forum tag type but, confusingly, works with most NFC implementations in mobile phones as if it were.