KY-040 Rotary Encoder with Linux on the Raspberry Pi

For a project with a Raspberry Pi (Zero W) I needed a simple, easy-to-use input device to change a numerical value. So I bought some rotary encoders off Amazon.

If you search the Internet for information/tutorials on how to use a “KY-040” rotary encoder with Linux and the Raspberry Pi, you’ll find a dozen people or so who’ve done that, and written about it. Naturally, I’ll need to do that too — with a twist.

The most often referenced code to operate the KY-040 seems to be from Martin O’Hanlon’s Raspberry Pi and KY040 Rotary Encoder blog post, which even ended up as a Python module, KY040. The downside of this approach is that it’s very unreliable: the code triggers on an edge of one of the GPIO inputs and then reads the other input, all in Python code. To make this even sort of work, it needs long debounce times. The net result is that the code misses many turn events if the shaft is turned too fast, and sometimes reports the wrong turn direction.

I’m also not positive that Martin’s explanation of the encoder’s signal output is correct. The Keyes KY-040 Arduino Rotary Encoder User Manual has a different explanation of the working principle, plus some Arduino code. The difference is that, although the pins on the module are marked “CLK” and “DT” (for clock and data), it’s more common for rotary encoders to simply have a pin “A” and a pin “B”.

This matches what I’ve seen on this module: With pins A and B the most important distinction is the order in which they generate edges. If you only look for edges on one pin (“clock”) and then sample the other pin (“data”) you’ll kind-of also get information about the turns, but depending on the edge rate and sample speed it’s going to be unreliable.

It’s possible to observe both edges in Python with RPi.GPIO, but again there’s a lot of overhead for what should really be real-time processing, and it’s still not fully reliable.
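To illustrate (not to recommend), full quadrature decoding in user space would look roughly like the sketch below. The pin numbers and the direction of the state table are assumptions that depend on the wiring, and the event callbacks are exactly the part Python is too slow for at higher rotation speeds.

import RPi.GPIO as GPIO
import signal

PIN_A, PIN_B = 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup([PIN_A, PIN_B], GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Valid transitions of the 2-bit (A, B) Gray code and their direction
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

state = (GPIO.input(PIN_A) << 1) | GPIO.input(PIN_B)
position = 0

def on_edge(channel):
    global state, position
    new_state = (GPIO.input(PIN_A) << 1) | GPIO.input(PIN_B)
    # Invalid transitions (missed edges) count as 0: this is exactly where
    # a user-space decoder loses steps when the shaft is turned quickly
    position += TRANSITIONS.get((state, new_state), 0)
    state = new_state

GPIO.add_event_detect(PIN_A, GPIO.BOTH, callback=on_edge)
GPIO.add_event_detect(PIN_B, GPIO.BOTH, callback=on_edge)
signal.pause()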

Good thing we’re using Linux which has device drivers for all sorts of things. Including, of course, a rotary encoder connected to GPIOs: rotary-encoder.txt (includes nice ASCII art on the operational principle).

Good thing also we’re using the Raspberry Pi, which has a matching device tree overlay (README for precompiled firmware).

(Note: If you’re compiling your own kernel, you’ll need the Raspberry Pi kernel starting with 4.9.y, CONFIG_INPUT_GPIO_ROTARY_ENCODER, and you’ll probably also want CONFIG_INPUT_EVDEV. The rpi-firmware with compiled overlays needs to be recent-ish, ~mid January 2018, for these examples to work.)

To enable/configure the rotary-encoder device tree overlay, simply put something like the following into /boot/config.txt (with the encoder connected to pins 23 and 24 on the Raspberry Pi):

# enable rotary encoder
dtoverlay=rotary-encoder,pin_a=23,pin_b=24,relative_axis=1

After a reboot you’ll have a new device in /dev/input/ for the rotary encoder. You can use the evtest tool (as in evtest /dev/input/event0) to look at the events it generates and confirm that it reacts perfectly to every turn of the encoder, without missing a movement or confusing the direction.

While you’re at it, you might also want to add the middle button as a key (mine is connected to pin 22):

dtoverlay=gpio-key,gpio=22,keycode=28,label="ENTER"

In order to make use of this in your Python programs: Use python-evdev.

# -*- encoding: utf-8 -*-
from __future__ import print_function

import evdev
import select

# Open all input devices and index them by file descriptor for select()
devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
devices = {dev.fd: dev for dev in devices}

value = 1
print("Value: {0}".format(value))

done = False
while not done:
  # Block until at least one device has events waiting
  r, w, x = select.select(devices, [], [])
  for fd in r:
    for event in devices[fd].read():
      event = evdev.util.categorize(event)
      if isinstance(event, evdev.events.RelEvent):
        # Rotary encoder: a relative event of +1 or -1 per detent
        value = value + event.event.value
        print("Value: {0}".format(value))
      elif isinstance(event, evdev.events.KeyEvent):
        # Push button: quit when KEY_ENTER is released
        if event.keycode == "KEY_ENTER" and event.keystate == event.key_up:
          done = True
          break


Calculating an RSA Public Key from Two Signatures

Interesting problem: You have RSA signatures and the signed data, and want to know the RSA public key that can be used to verify the signatures. For older signature schemes this is possible, if you have at least two signatures (or an oracle that can provide signatures on request).

Math is not my strong suit, but I found the necessary formula in this Cryptography StackExchange post: RSA public key recovery from signatures. It has the general idea, but is light on details and actual code.
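The idea in a nutshell, with $m_i$ the padded message representative and $s_i$ the corresponding signature:

$$s_i \equiv m_i^d \pmod{n} \quad\Longrightarrow\quad s_i^e \equiv m_i \pmod{n} \quad\Longrightarrow\quad n \mid (s_i^e - m_i)$$

Every value $s_i^e - m_i$ is a multiple of $n$, and two such multiples rarely share large factors beyond $n$ itself, so $\gcd(s_1^e - m_1,\, s_2^e - m_2) = k \cdot n$ for some small $k$.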

Tools:

  • OpenSSL to generate examples
  • SageMath for the actual calculations. It has an absolutely wonderful Jupyter notebook interface.

First, let’s generate an example key and two example files. We’ll use 512-bit RSA, which is about the minimum key size we can use, just to keep the examples short (in both screen real estate and calculation size). Don’t worry: while the calculation takes ~30 seconds for 512-bit RSA, it only grows to ~2.5 minutes for real-world 2048-bit RSA.

$ echo "Hallo, Welt" > hallowelt.txt
$ echo "Hallo, Otto" > hallootto.txt
$ openssl genrsa 512 > privkey.pem

RSA signatures are complicated beasts. In theory, you only have to hash the input and apply the RSA operation with the private key (that is, ‘decrypt’ it), but for various reasons this is highly insecure and never done in practice.

Instead, we’ll let OpenSSL handle the generation of signatures for our examples:

$ openssl dgst < hallowelt.txt -out hallowelt.txt.sig -sign privkey.pem
$ openssl dgst < hallootto.txt -out hallootto.txt.sig -sign privkey.pem

The resultant *.sig files are 64 bytes each, matching the 512 bit RSA modulus.

To better understand the RSA signature generation process (and prepare the next step), let’s look ‘into’ the signatures:

$ openssl rsautl -encrypt -inkey privkey.pem -in hallowelt.txt.sig -raw | hd
00000000  00 01 ff ff ff ff ff ff  ff ff ff ff 00 30 31 30  |.............010|
00000010  0d 06 09 60 86 48 01 65  03 04 02 01 05 00 04 20  |...`.H.e....... |
00000020  3e 6f f8 06 a5 b4 e7 e6  d7 4d 26 7f e3 db 90 a2  |>o.......M&.....|
00000030  e2 bc a3 70 e3 db 9b 10  73 fd 55 e1 06 a1 0c 2a  |...p....s.U....*|
$ openssl rsautl -encrypt -inkey privkey.pem -in hallowelt.txt.sig -raw | openssl asn1parse -offset 13 -inform der
    0:d=0  hl=2 l=  49 cons: SEQUENCE
    2:d=1  hl=2 l=  13 cons: SEQUENCE
    4:d=2  hl=2 l=   9 prim: OBJECT            :sha256
   15:d=2  hl=2 l=   0 prim: NULL
   17:d=1  hl=2 l=  32 prim: OCTET STRING      [HEX DUMP]:3E6FF806A5B4E7E6D74D267FE3DB90A2E2BCA370E3DB9B1073FD55E106A10C2A
$ openssl dgst -sha256 hallowelt.txt
SHA256(hallowelt.txt)= 3e6ff806a5b4e7e6d74d267fe3db90a2e2bca370e3db9b1073fd55e106a10c2a

The first step ‘encrypts’ the signature (that is: applies the RSA operation with the public key) and prints a hexdump of the result. In the hexdump we see:

  • Some padding: 00 01 ff ff ... ff 00
  • An ASN.1 structure, consisting of
    • A sequence (tag 30, 49 bytes), of
      • A sequence (tag 30, 13 bytes), of
        • An object identifier (tag 06, 9 bytes) for sha256
        • A NULL value (tag 05, 0 bytes)
      • An octet string (tag 04, 32 bytes) with
        • The SHA-256 hash (3e6ff8...a10c2a) of the signed data

The signature follows the PKCS#1 standard for RSA signatures. All the extra stuff serves to distinguish signatures made with SHA-256 from signatures made with other hashes, and to prevent some attacks on the padding. It’s also the reason why we can’t go much below 512-bit RSA if we want to demo with SHA-256. (It must be noted that PKCS#1 v1.5 padding shouldn’t be used anymore. The newer standard is RSASSA-PSS, which has a robust security proof, but is also randomized, and so completely foils the technique in this blog post.)

Let’s define the first set of functions to generate this sort of padding:

import hashlib

def pkcs1_padding(size_bytes, hexdigest, hashfn):
    # DER-encoded OID of the hash function (2.16.840.1.101.3.4.2.1 = SHA-256)
    oid = {hashlib.sha256: '608648016503040201'}[hashfn]
    # AlgorithmIdentifier: SEQUENCE of OID and NULL parameters
    result = '06' + ("%02X" % (len(oid)/2)) + oid + '05' + '00'
    result = '30' + ("%02X" % (len(result)/2)) + result

    # DigestInfo: SEQUENCE of AlgorithmIdentifier and OCTET STRING (the hash)
    result = result + '04' + ("%02X" % (len(hexdigest)/2)) + hexdigest
    result = '30' + ("%02X" % (len(result)/2)) + result

    # EMSA-PKCS1-v1_5: 00 01 ff .. ff 00, followed by the DigestInfo
    result = '0001' + ('ff' * int(size_bytes - 3 - len(result)/2) ) + '00' + result
    return result

def hash_pad(size_bytes, data, hashfn):
    hexdigest = hashfn(data).hexdigest()
    return pkcs1_padding(size_bytes, hexdigest, hashfn)

A simple test:

hash_pad(64, "Hallo, Welt\n", hashlib.sha256)

'0001ffffffffffffffffffff003031300D0609608648016503040201050004203e6ff806a5b4e7e6d74d267fe3db90a2e2bca370e3db9b1073fd55e106a10c2a'

To perform the gcd calculation you need, for each signature, the corresponding signed data, the hash function used, and the public exponent of the RSA key pair. Both the hash function and the public exponent may need to be guessed, but the hash is usually SHA-256, and the exponent is usually 0x10001 (65537) or 3.

The full code is as follows:

import binascii, hashlib

def message_sig_pair(size_bytes, data, signature, hashfn=hashlib.sha256):
    # Pair the padded message representative and the signature as Sage Integers
    return ( Integer('0x' + hash_pad(size_bytes, data, hashfn)), Integer('0x' + binascii.hexlify(signature)) )

def find_n(*filenames):
    data_raw = []
    signature_raw = []
    for fn in filenames:
        data_raw.append( open(fn, 'rb').read() )
        signature_raw.append( open(fn+'.sig', 'rb').read() )
    size_bytes = len(signature_raw[0])
    if any(len(s) != size_bytes for s in signature_raw):
        raise Exception("All signature sizes must be identical")

    # Try hash function and public exponent combinations until the gcd
    # yields a non-trivial result
    for hashfn in [hashlib.sha256]:
        pairs = [message_sig_pair(size_bytes, m, s, hashfn) for (m,s) in zip(data_raw, signature_raw)]
        for e in [0x10001, 3, 17]:
            gcd_input = [ (s^e - m) for (m,s) in pairs ]  # each is a multiple of n
            result = gcd(*gcd_input)
            if result != 1:
                return (hashfn, e, result)

If we test it, we’ll find:

time hashfn, e, n = find_n('hallowelt.txt', 'hallootto.txt');

CPU times: user 27.3 s, sys: 609 ms, total: 27.9 s
Wall time: 28.4 s

print hex(n)

d9dac509621ed7f27b4868ab1874f649778c63f11000366e827cf18fd70db1e27f39902524e29aa2bfb3167627caaa408e17e907ee3c44e0321dc77fb8890075

And compare to the ground truth of our example:

$ openssl rsa -in privkey.pem -noout -text
Private-Key: (512 bit)
modulus:
    00:d9:da:c5:09:62:1e:d7:f2:7b:48:68:ab:18:74:
    f6:49:77:8c:63:f1:10:00:36:6e:82:7c:f1:8f:d7:
    0d:b1:e2:7f:39:90:25:24:e2:9a:a2:bf:b3:16:76:
    27:ca:aa:40:8e:17:e9:07:ee:3c:44:e0:32:1d:c7:
    7f:b8:89:00:75
publicExponent: 65537 (0x10001)
[...]

Finally, to create a standard format PEM public key from our n and e:

from Crypto.PublicKey import RSA
print RSA.construct( (long(n), long(e)) ).exportKey(format='PEM')

-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANnaxQliHtfye0hoqxh09kl3jGPxEAA2
boJ88Y/XDbHifzmQJSTimqK/sxZ2J8qqQI4X6QfuPETgMh3Hf7iJAHUCAwEAAQ==
-----END PUBLIC KEY-----

Which is exactly what we would get from OpenSSL:

$ openssl rsa -in privkey.pem -pubout
writing RSA key
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANnaxQliHtfye0hoqxh09kl3jGPxEAA2
boJ88Y/XDbHifzmQJSTimqK/sxZ2J8qqQI4X6QfuPETgMh3Hf7iJAHUCAwEAAQ==
-----END PUBLIC KEY-----

Warning: The gcd method may sometimes return not n but a product k * n for a smallish value of k. You may need to check for small prime factors and remove them.
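In SageMath, stripping those small factors off is quick. A sketch (the bound is an arbitrary choice):

def strip_small_factors(n, bound=10000):
    # Divide out any small spurious factors k left over by the gcd
    for p in primes(bound):
        while n % p == 0:
            n = n // p
    return n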

The code is available as a SageMath Jupyter notebook: rsa_find_n.ipynb (all example files).

The find_n function is written to accept an arbitrary number of arguments (filenames, where each file contains the data and the same filename with .sig appended contains the signature), but will only work when given exactly two arguments. FactHacks: Batch gcd has a batchgcd_faster function that will work on an arbitrary number of arguments (but is slower in the 2-argument case).
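As an aside, Sage’s gcd also accepts a list, so replacing the gcd(*gcd_input) call in find_n with the line below should lift the two-argument restriction (untested, and iterated gcd is still slower than batch gcd for many inputs):

result = gcd(gcd_input)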

NGINX, FastCGI, HTTPS, and WordPress

I have a setup in which a frontend nginx server handles multiple names on one IP for HTTP port 80 and HTTPS port 443 (through SNI) and forwards the requests to distinct backend HTTP servers, based on name and/or path. One of these backends is a WordPress installation, and that one is problematic: WordPress tends to insert absolute URLs into everything. And since it doesn’t know that the frontend was accessed through HTTPS, it inserts HTTP URLs. Worse, when using the WordPress HTTPS plugin and choosing to force administrative logins through HTTPS only, you can end up with an endless redirect loop.

A quasi-standard signal from frontend to backend in this case is the X-Forwarded-Ssl HTTP header, which, when set to on, should indicate that the frontend was accessed through HTTPS. Unfortunately, WordPress ignores the header. Instead, is_ssl() only checks the HTTPS server variable (and alternatively checks whether the server port is 443).

The WordPress documentation referenced above offers a simple fix by overwriting the HTTPS variable in the wp-config.php file. Instead, for nginx with FastCGI, I propose this simpler, more generic, and, in my humble opinion, more elegant fix:

The default nginx FastCGI configuration includes a file fastcgi_params, which, among others, has this line:

fastcgi_param   HTTPS                   $https if_not_empty;

Below this simply add

fastcgi_param   HTTPS                   $http_x_forwarded_ssl if_not_empty;

(and of course have something along the lines of

proxy_set_header X-Forwarded-Ssl on;

in the corresponding proxy configuration on the frontend nginx)
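Put together, the frontend part might look something like this (a sketch with placeholder names, not my literal configuration):

server {
    listen 443 ssl;
    server_name blog.example.com;
    # ssl_certificate, ssl_certificate_key etc. omitted

    location / {
        proxy_pass http://wordpress-backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Ssl on;
    }
}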

This will set the HTTPS server variable when the X-Forwarded-Ssl HTTP header is set, allowing all kinds of PHP programs to transparently discover that they’re being accessed through HTTPS.

Rescuing Full Archives From A Google Group

A project I’m newly affiliated with has used Google Groups for all their private group communication so far. Since I’m not a big fan of storing private data in the proprietary data silos of cloud providers, this is a situation I want to rectify. Why use Google Groups when you can set up GNU Mailman yourself, and not have all data and metadata pass through Google?

There’s one caveat: While Google provides an export of group member lists, there’s no export functionality for the message archive, which in this case represented two years’ worth of fruitful discussion and organizational knowledge. Some tools exist that try to dump all of a group’s archive, but none really agreed with me. So I rolled my own.

I give to you: https://github.com/henryk/gggd

Inside you’ll find a Python script that uses the lynx browser to access the Google Groups API (so it can work with a Google login cookie as an authenticated user) and will enumerate all messages in a group’s archive and download each into a separate file as a standard RFC (2)822 message. While programming this I found that some of the messages are returned from the API in a mangled form, so I also wrote a tool (which can be called with an option from the downloader) that can partially reverse this mangling.

With the message files from my download tool, formail from the procmail package, and some shell scripting I was able to generate an mbox file with the entire group’s archive, which could then easily be imported into mailman.
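The shell scripting part boils down to something like this (a sketch, with messages/ standing in for the downloader’s output directory; formail prepends the “From ” separator line that the mbox format requires):

$ for f in messages/*; do formail < "$f"; done > group.mbox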

owncloud – Cache Static Assets (CSS/Javascript)

To whom it might be useful,

I recently set up an owncloud instance for private use and found that the load time was abysmal. Showing the default “Files” page takes ~21 seconds for ~140 HTTP requests1, even though my HTTP setup is already quite pimped (with SPDY and all). What’s worse: the time does not decrease on subsequent visits. No cache-control headers are sent, so all the Javascript and CSS resources are requested again. ETag and If-None-Match are in place, so most of the requests just yield a 304 Not Modified response, but they still block the loading process. That is even less understandable if you look at the requests: all Javascript and CSS resources use a “?v=md5($owncloud_version)” cache buster, so they could be cached indefinitely with no ill effects.

For a standard owncloud installation in /var/www with Apache: Open your /var/www/owncloud/.htaccess in a text editor and append the following lines (Update 2014-10-09 18:35 UTC: Add missing \ before .)

<IfModule mod_headers.c>
<FilesMatch "\.(css|js)$">
Header set Cache-Control "max-age=2592000, public"
</FilesMatch>
</IfModule>

then in a shell make sure that the headers module is enabled in Apache:

sudo a2enmod headers

and restart Apache as prompted by a2enmod.

The next time you load the owncloud web interface your browser will be told to cache the Javascript and CSS resources for 30 days, and the time after that it won’t request them again. The “Files” app load time dropped from 21 to 6 seconds for me – with 16 instead of ~140 requests.  That’s almost reasonable!


  1. In Firefox: Press Ctrl-Shift-Q to bring up the Network web developer tool to watch the drama unfold in its entirety. 

DJB’s tinydns and DNSSEC

While upgrading my server infrastructure I noticed that I really should be providing IPv6 not only for the services (like this HTTP/HTTPS site) but also for the DNS itself, and also at some point might want to enable DNSSEC for my domain to join in the fight with DANE against the mafia that is the global X.509 certification authority infrastructure.

My DNS servers have been powered by DJB’s most excellent djbdns package1 since I first started hosting these services myself. The software package truly is fire-and-forget: you set it up once and it will continue working, with no maintenance or pesky software upgrades, year after year. That’s one thing Dan’s software is famous for.

The other thing everyone knows about his software is that if you want to add features, you’ll have to apply third-party patches. A well-known patch set for IPv6 in tinydns is available from my friend Fefe, and is also included in Debian-based distributions in a package called dbndns. Peter Conrad wrote DNSSEC support for tinydns (explicitly basing on Fefe’s IPv6 patches).

When trying to set that up, I quickly became frustrated: Applying several patches from several distinct locations one after the other doesn’t seem like the way software should be distributed in 2014. Also, Peter’s code has a few easily patched problems.

So I’ve set up github.com/henryk/tinydnssec/tree/dnssec-1.05-test27-8ubuntu1-tinydnssec_1.3. Each commit is either the import of a tarball, the application of a patch, or a fix from me. I have signed the tag with my GPG key. You can simply use the GitHub-provided download link dnssec-1.05-test27-8ubuntu1-tinydnssec_1.3.zip.

The steps I took, in order:

  1. Import djbdns-1.05.tar.gz. No signature check was made since no signed version is available, but I checked that I was using the same package as Ubuntu/Debian.
  2. Apply djbdns-1.05-test27.diff.bz2. I checked Fefe’s signature and verified his key’s fingerprint using a separate channel.
  3. Apply 0003-djbdns-misformats-some-long-response-packets-patch-a.diff from the Ubuntu package.
  4. Apply 0004-dnscache.c-allow-a-maximum-of-20-concurrent-outgoing.diff from the Ubuntu package.
  5. Apply djbdns-ipv6-make.patch. No signature check was done, but the patch is trivial.
  6. Import tinydnssec-1.05-1.3.tar.bz2. I checked Peter’s signature and verified his key through the web of trust.
  7. Apply djbdns-1.05-dnssec.patch from the aforementioned package.
  8. Small fixup for conf-cc and conf-ld: Do not use diet for compilation or linking (was introduced with Fefe’s patch).
  9. Small fixup for tinydns-sign.pl: Use Digest::SHA instead of Digest::SHA1.
  10. Small fixup for run-tests.sh: GNU tail does not understand the +n syntax.
  11. Small fixup for run-tests.sh: Need bash, say so (not all /bin/sh are bash).

The resulting source builds fine, and the tests mostly run fine. Tests 1 and 7 each fail in 50% of cases due to the randomized record ordering in the tinydns output which is not accounted for in the test code.

djbdns is in the public domain, tinydnssec is published under GPL-3, which means that the combined source also falls under GPL-3.


  1. The software package is ‘djbdns’; among the servers in it are ‘tinydns’, which hosts an authoritative UDP DNS server, and ‘axfrdns’, which hosts a TCP DNS server.

Setting Arbitrary Baud Rates On Linux

Historically, baud rates on UNIX (later: POSIX) systems have been manipulated using the tcgetattr()/tcsetattr() functions with a struct termios and a very limited set of possible rates identified by constants such as B0, B50, B75, B110, …, through B9600. These were later extended with select values such as B38400 and B115200. Hardware has since evolved to be able to use almost any value as a baud rate, even much higher ones. The interface, however, has never been properly repaired.

In the past, Linux used a technique called “baud rate aliasing” to circumvent that problem: a special mode can be set so that a request for B38400 would not actually set 38.4 kBaud but instead a separately defined other baud rate, with names like spd_hi (“high”?) for 57.6 kBaud, spd_shi (“super high”?) for 230 kBaud, or spd_warp for 460 kBaud. These names may give you an idea of how old and limited this interface is.

For this reason there is a new ioctl interface to set an arbitrary baud rate by actually using an integer to store the requested baud rate: TCGETS2/TCSETS2 using struct termios2.

Both documentation and example code for this method are sparse. A bug report to implement this in libc6 is still open. Thankfully, that bug report includes example C code to use the interface directly. The constant that tells the structure an “other” baud rate has been set has unwisely been named BOTHER, which, being a proper English word, makes it nearly impossible to find any information about it on the Internet. So, to be more explicit (and hopefully be found by any future search for this topic): this is an example of how to set a custom baud rate with the BOTHER flag on Linux in Perl.

Transforming the C example into Perl code using the Perl ioctl function should be easy, right? Muahahaha. Every example of how to use Perl’s ioctl on the Internet (that I’ve reviewed) is wrong and/or broken. Even better: the Perl distribution itself is broken in this instance. Quoth /usr/lib/perl/5.18.2/asm-generic/ioctls.ph on Ubuntu 14.04:

eval 'sub TCGETS2 () { &_IOR(ord(\'T\'), 0x2a, 1;}' unless defined(&TCGETS2);

(hint: count the number of opening and closing parentheses.)

Even if that Perl code was syntactically correct, it’s wrong in principle: The third argument to the _IOR macro should be the struct termios2 structure size. On x86_64 it’s 44 bytes, not 1.

So, I’ve written code with two purposes:

  1. Correctly use Perl’s ioctl to
  2. set a custom serial baud rate under Linux.

The definitions of both TCGETS2 and struct termios2 may be architecture dependent, so there’s a helper program in C to output the parameters for the current architecture.
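For comparison, the same ioctl dance is compact in Python, too. The following is a sketch with the x86_64 values baked in; the constants and the struct layout are exactly the architecture-dependent parts that the helper program outputs:

import fcntl, struct

# Values for x86_64 Linux (from asm-generic/termbits.h); on another
# architecture, take them from the C helper instead
TCGETS2 = 0x802C542A
TCSETS2 = 0x402C542B
BOTHER  = 0o010000
CBAUD   = 0o010017

# struct termios2: c_iflag, c_oflag, c_cflag, c_lflag, c_line, c_cc[19],
# c_ispeed, c_ospeed; 44 bytes on x86_64
TERMIOS2 = 'IIIIB19sII'

def set_baudrate(fd, baudrate):
    buf = bytearray(struct.calcsize(TERMIOS2))
    fcntl.ioctl(fd, TCGETS2, buf)  # read the current settings into buf
    iflag, oflag, cflag, lflag, line, cc, _, _ = struct.unpack(TERMIOS2, bytes(buf))
    cflag = (cflag & ~CBAUD) | BOTHER  # clear the rate bits, select "other"
    fcntl.ioctl(fd, TCSETS2, struct.pack(
        TERMIOS2, iflag, oflag, cflag, lflag, line, cc, baudrate, baudrate))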

All the code (set baud rate with TCSETS2 BOTHER in C, set baud rate with TCSETS2 BOTHER in Perl, C helper to output constants for the current architecture, Makefile) I released into the public domain at github.com/henryk/perl-baudrate/.