Building a better… tennis ball?

If I had a nickel for every time I pulled the car into the garage, only to find I didn’t quite pull in far enough, and then had to get back in and pull the car forward another six inches… I wouldn’t quite have a quarter. But still, it’s 2016; it is totally unacceptable for this to happen even once!

The traditional method of solving this problem involves suspending a tennis ball from the ceiling of the garage such that it just barely touches the windshield of the car when it is pulled in far enough (if this method would meet your needs, and you want to know more, let Sonya McKinney explain). This plan has one glaring flaw for our usage though… My wife prefers to pull the car in nose first, while I prefer to back in. We drive a super sexy car, so the front and rear windshields have different offsets from their respective ends of the vehicle. So what are we to do? Hang two tennis balls? Come to a consensus on which way to park the car? No. It’s time for a needlessly over-engineered, WiFi-enabled solution.

Here’s the plan… We have a shelving unit that sits directly in front of the car. I’m going to mount an ultrasonic range finder at bumper height, and an LED strip across the second shelf. As the LEDs are individually addressable, the lights will move in from both edges, meeting in the middle when the car is in the right spot. The strip will flash red if you go too far.

Here is a highly detailed diagram of the goal:

[Diagram: img_20161201_172222]

Where does the WiFi come in? I’m going to use an ESP8266 to drive things, and since it will know if the car is parked or not, that information can feed into my Home Assistant instance, which can then be used to make better automation decisions about whether someone is home or not.
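
None of the actual code exists yet (that’s what part two is for), but the core logic is simple enough to sketch now. Purely as an illustration, assuming a MicroPython build on the ESP8266, an HC-SR04 range finder, and a WS2812 strip — the pin numbers, distances, and strip length below are all placeholders I made up:

# Rough MicroPython sketch -- not the project's real code. Pins,
# distances, and strip length are made-up placeholders.
import time
import machine
import neopixel

NUM_LEDS = 60      # assumed strip length
TARGET_CM = 30     # bumper distance when the car is parked correctly
START_CM = 200     # distance at which the first LEDs light up

np = neopixel.NeoPixel(machine.Pin(4), NUM_LEDS)
trig = machine.Pin(5, machine.Pin.OUT)
echo = machine.Pin(12, machine.Pin.IN)

def read_distance_cm():
    # Standard HC-SR04 routine: 10us trigger pulse, then time the echo.
    trig.off()
    time.sleep_us(2)
    trig.on()
    time.sleep_us(10)
    trig.off()
    duration = machine.time_pulse_us(echo, 1, 30000)
    if duration < 0:
        return START_CM + 1          # no echo: treat as nothing in range
    return (duration / 2) / 29.1     # microseconds to centimeters

def show(distance):
    np.fill((0, 0, 0))
    if distance < TARGET_CM:
        np.fill((255, 0, 0))         # too far in: go red (actual flashing is a part-two problem)
    else:
        # Scale the remaining distance into a number of lit LEDs,
        # creeping in symmetrically from both edges of the strip.
        progress = min(1.0, (START_CM - distance) / (START_CM - TARGET_CM))
        lit = int(progress * (NUM_LEDS // 2))
        for i in range(lit):
            np[i] = (0, 255, 0)
            np[NUM_LEDS - 1 - i] = (0, 255, 0)
    np.write()

while True:
    show(read_distance_cm())
    time.sleep_ms(100)

Reporting the parked/not-parked state up to Home Assistant would hang off the same distance reading.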

Stay tuned next week for actual progress (part two will likely be getting the project on a breadboard and getting most of the code written).

Rotating Nginx TLS Session Tickets with Ansible

TLS performance has come a long way over the past few years. Widespread deployment of elliptic curve Diffie–Hellman key exchange protocols has led to significant improvements in both the speed of the initial TLS handshake, as well as the forward secrecy of the encrypted traffic.

Despite this, that initial handshake remains the biggest performance hit from TLS. This is especially painful as it slows down the user’s initial page load. The TLS handshake must be completed before the server can even begin to process the HTTP request. While there are various things that can be done to improve this (like tuning the TCP initial congestion window), the biggest wins come from avoiding it entirely on subsequent connections.

There are two primary mechanisms for avoiding that initial handshake. The first is simply caching the TLS session on the server. When the client shows up again, it says “Hey, last time we were using session xzy123, should we just pick up where we left off?”. If the server has that session ID in its cache, it can resume the session. This has the advantage of being very simple, but has a number of disadvantages in terms of scalability. Caching all of those sessions can start to take a non-trivial amount of resources on a busy server, and if you want to scale out to multiple servers, you need to share the session cache between them (which isn’t an easy task with any of the popular open source web servers).
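
For concreteness, the server-side cache in Nginx is just a couple of directives (the sizes here are arbitrary; the Nginx docs note that 1MB of shared cache holds roughly 4,000 sessions):

# Shared across all worker processes, but only on this one server.
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 1h;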

The other option is to use TLS session tickets. At the end of the handshake process, the server sends the client an encrypted ‘ticket’ which includes all the information the server needs to resume the session. When the client reconnects it sends the ticket to the server, and if the server is able to decrypt it, it can pick up with the same session. This makes the storage of the session information the client’s problem (which is fine, as it only has to worry about its own session), and makes it easy to scale out to multiple servers (it’s just a small matter of programming to distribute the TLS ticket encryption key so any server is able to decrypt the tickets).

Great! Problem solved! We just stick the same ticket encryption key on all our servers and we’re good to go! Not so fast… We have now completely broken forward secrecy for our clients. Any attacker who is able to compromise that ticket encryption key now has everything they need to decrypt the sessions it was used to store (assuming our adversary is playing the long game, recording all historical traffic in the hopes of decrypting it later). Darn…

So, what to do? We need to rotate these keys on a regular basis. How frequently is up to you and your threat model. Hourly, daily, weekly… all would be reasonable choices, based on your particular needs. The exact method for how you would rotate those keys varies depending on your webserver, and your configuration management tooling, but the title of this post includes “Nginx” and “Ansible”, so I guess I’ll use those…

Nginx makes it very easy to rotate these keys. You can specify as many keys as you would like; the first one will be used to encrypt new tickets, and when a ticket is received, Nginx will attempt to decrypt it with each key in turn. To rotate the keys, overwrite the last key with the penultimate key, and work your way up, finally replacing the first key with a brand spanking new one. There are lots of ways to accomplish this, but I like Ansible, so here’s the playbook I use:
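
The playbook embed hasn’t survived in this copy, so what follows is a minimal reconstruction based on the description below; treat the paths, file names, and three-key count as my assumptions rather than the original:

---
# Reconstructed sketch: rotates a fixed set of three ticket keys.
- hosts: webservers
  vars:
    key_dir: /etc/nginx/ticket_keys
  tasks:
    - name: Ensure the key directory exists
      file: path={{ key_dir }} state=directory mode=0700

    - name: Generate a new 48-byte key, base64-encoded so Ansible won't mangle the raw bytes
      command: openssl rand -base64 48
      register: new_key
      delegate_to: localhost
      run_once: true

    - name: Make sure all key files exist (this could be a newly created server)
      command: openssl rand -out {{ key_dir }}/ticket.{{ item }}.key 48
      args:
        creates: "{{ key_dir }}/ticket.{{ item }}.key"
      with_items: [1, 2, 3]

    - name: Rotate keys last-to-first so nothing is clobbered before it moves
      command: mv {{ key_dir }}/ticket.{{ item }}.key {{ key_dir }}/ticket.{{ item + 1 }}.key
      with_items: [2, 1]

    - name: Install the new key as the first (encrypting) key
      shell: echo '{{ new_key.stdout }}' | base64 -d > {{ key_dir }}/ticket.1.key

    - name: Reload nginx to pick up the rotated keys
      service: name=nginx state=reloaded

On the Nginx side, the keys are then listed first-to-last (ssl_session_ticket_key is the real directive; the paths just have to match whatever the playbook writes):

# The first key encrypts new tickets; the rest can still decrypt old ones.
ssl_session_ticket_key ticket_keys/ticket.1.key;
ssl_session_ticket_key ticket_keys/ticket.2.key;
ssl_session_ticket_key ticket_keys/ticket.3.key;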

This is a pretty straightforward playbook. We generate a new key, make sure all the key files exist (this could be a newly created server), then rotate them (key ‘n’ becomes key ‘n + 1’, dropping the last one), and set the new key as the first one. If we need to create new keys, we just create them at random. This means that this server won’t be able to pick up old sessions, but it will catch up with the other servers as the keys continue to get rotated.

The only odd thing in the playbook is the fact that we have to convert the new key to base64 before copying it to the server (only to convert it back to raw bytes on the other end). The issue here is that Ansible tries to be too helpful when it comes to dealing with bytes. Nginx expects a TLS session ticket key to be a file containing exactly 48 bytes. Ansible, however, tries to encode the bytes as text, munging them into something that is almost always not 48 bytes. So, we base64 encode the bytes before Ansible sees them, and then decode that base64 string into the new key file.

One important security consideration this post glosses over is the fact that you really shouldn’t store these keys on persistent storage (if someone gets access to your hardware, they could forensically recover old keys). The simplest solution would just be to store them on an in-memory filesystem. If you don’t reboot your servers that often, this would probably work. If you do reboot them frequently, then you’ll need some means of distributing the last N keys to a server after a reboot; otherwise you’ll lose most of the benefit of rolling the keys in the first place. If you’re interested in solving this problem at scale, check out CloudFlare’s excellent write-up of how they’re handling it.

The other minor detail is that if you are serving multiple hosts from the same server, they should use different sets of keys. If you use this playbook, you could either run it multiple times with different key paths, or just tweak it a bit to handle the different hosts.

This post is intended to lay out the 20% of effort needed to solve 80% of the problem. Even if you store them on persistent storage, simply enabling TLS session tickets and rotating your keys will put you well ahead of most of the TLS deployments out there.

AeroPress Review

So it’s that time of year… A few weeks after Christmas, and the dust has settled. Some new clothes in your dresser, maybe some new books and DVDs on your shelf, and a handful of items destined for a box in the closet.

This year I’ve found there’s been one gift I’ve used just about every day since I got it. Sarah got me an Aerobie AeroPress coffee maker, and it has quickly become part of my daily routine. It makes a fantastic cup of coffee (I know it claims to make espresso, but the pressures really aren’t high enough… it’s just really good, strong coffee).

Here are some things I’ve noticed as I’ve gotten to know my AeroPress:

The grind is very important. I use a Capresso Burr Grinder, which gives me a very consistent grind. I have it set just above the finest setting it will do. The other handy thing about this grinder is that the numbers on its timer happen to line up well with the amount of coffee needed for the AeroPress.

The other very important thing is water temperature. The manufacturer recommends 175 °F, and after some experimentation, I have to agree with them. Much cooler than that and the extraction isn’t as good… much hotter and it starts to extract a noticeable amount of bitterness. I have a standard tea kettle and a milk-steaming thermometer that fits well in the neck of the kettle. I put the kettle on the (electric) stove until the thermometer reads 160 °F. At that point I turn the stove off, and the water coasts up to 175 °F while I prepare the grounds.

After that, I just follow the instructions that came with the AeroPress… Grounds go in, water goes in, stir for a few seconds, then slowly press down with the plunger. Once the coffee is brewed, I top the cup off with hot water to taste (I find I end up doubling the amount of liquid in the cup) and enjoy a great cup of coffee/americano (I haven’t quite decided what I should call it yet…).

Cleanup is extremely easy… Just take the plastic filter holder off the bottom, use the plunger to press the compacted grounds into the trash, and give everything a quick rinse. The whole cleanup process takes maybe 10 seconds.

So, with all that being said… I would heartily recommend the AeroPress to anyone who is looking for a great coffee experience. It has made it almost too easy for me to make good coffee.

Advertising

I realize a major goal of most advertising isn’t to make you think too hard about the product or service being advertised; it’s just to make you feel happy thoughts and buy stuff.

However, it is fun once in a while to take a second and think about some of the information being presented. My personal favorites are car insurance ads that mention the fact that “Drivers who switch to XYZ insurance save an average of $8,483,179 (or something like that)”. Seems like it would be pretty easy to come up with stats like that when you limit the set of data to “drivers who switched”.

Someday I want to find a bunch of people who would end up paying more according to Geico’s formulas and get them to switch just to mess with their averages.

APE, Django, and Signals

In my last post, I built a quick Django application for posting messages, and updating those messages using the Ajax Push Engine (APE). One of the things that bugged me about the design was that the APE update occurred directly in the view that saved the message. That’s fine for the very narrow application as it stands, but what happens if sometime in the future, some other view or method saves messages? You’ll have to duplicate that APE code there too, and duplication is something we want to avoid.

All we really want to do is make sure that whenever Django saves a message, it also spits that message out to any clients listening on the APE channel. Django provides a really easy way to do that with signals. Because I’m lazy, I’ll let the Django docs describe signals to you.

So, this is really easy to implement. You just define the signal handler, then connect it to the post_save signal of the Message model. Like so:
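
(The code embed didn’t survive here, so this is a reconstruction; the channel name, the ‘raw’ name, and the exact inlinepush payload are my guesses — check the APE wiki for the real format.)

# Bottom of models.py, below the Message model (Python 2 / Django 1.x era).
import json
import urllib
import urllib2

from django.conf import settings
from django.db.models.signals import post_save

def notify_ape(sender, instance, created, **kwargs):
    # Only push brand new messages, not edits.
    if not created:
        return
    command = [{
        'cmd': 'inlinepush',
        'params': {
            'password': settings.APE_PASSWORD,
            'raw': 'postmsg',        # hypothetical raw/event name
            'channel': 'messages',   # hypothetical channel name
            'data': {
                'name': instance.name,
                'message': instance.message,
            },
        },
    }]
    urllib2.urlopen(settings.APE_SERVER,
                    urllib.urlencode({'cmd': json.dumps(command)}))

post_save.connect(notify_ape, sender=Message)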

You can just stick that at the bottom of models.py if you’d like.

Now your view for handling message submission is really simple:
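
(Again, the original snippet is gone; reconstructed, it presumably boiled down to something like this, with names carried over from the earlier post. Validation and CSRF handling are omitted, as before.)

import json

from django.http import HttpResponse

from messages.models import Message  # hypothetical app name

def post_message(request):
    # Just save the model; the post_save signal tells APE for us.
    msg = Message.objects.create(
        name=request.POST['name'],
        message=request.POST['message'],
    )
    return HttpResponse(json.dumps({'name': msg.name, 'message': msg.message}),
                        content_type='application/json')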

And there you have it… Any time you save a message, Django will tell APE, and you can forget all about it.

Comet with Django and APE

Recently I’ve been looking at using the AJAX Push Engine (APE) from Weeyla for adding Comet functionality to a Django site. To get a feel for how APE works, I threw together a really trivial message posting app, and figured I’d document my process for those who might also be interested in integrating APE with Django. Note: This post assumes you know how to set up a Django project, and know enough about Django to realize this code is missing lots of really important bits (validation, authentication, etc…). This is going to be a short novel of a blog post, with a lot of code, so bear with me…

So, to start with, I just slapped together a really basic message posting app. Just a message with a name and a timestamp.
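
(The model embed didn’t make it into this copy; a minimal stand-in matching that description would be:)

# models.py -- reconstructed placeholder, not the original code
from django.db import models

class Message(models.Model):
    name = models.CharField(max_length=50)
    message = models.TextField()
    created = models.DateTimeField(auto_now_add=True)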

And here’s the template:
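
(That embed is missing too; something along these lines fits the rest of the post — the element ids are my own invention:)

<!-- messages/list.html (reconstructed) -->
<ul id="messages">
  {% for msg in messages %}
    <li>{{ msg.name }}: {{ msg.message }}</li>
  {% endfor %}
</ul>
<form id="message-form" action="." method="post">
  <input type="text" name="name">
  <input type="text" name="message">
  <input type="submit" value="Post">
</form>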

Ok, so that’s the basic setup… Pretty boring, right? If Sue submits a message, Bob has to refresh his page to see it. So, we’re going to add APE into the mix to update Bob right away. Getting APE up and running is pretty easy; just follow the instructions over at the APE Wiki. We’ll be using APE’s inlinepush server module. Just for good measure we’ll be tossing some jQuery in as well.

So, the basic idea is we need to connect to an APE “channel”, which we’ll use to receive updated messages. We’ll use jQuery to send the messages. So let’s start by using jQuery to submit the form, with the following script:
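
(The script embed is missing; a minimal version would look like the following. It assumes an append_message helper, which shows up with the APE client code below, and a /messages/post/ URL of my own invention.)

// Reconstructed form-submission script, not the original.
$(function() {
    $('#message-form').submit(function(event) {
        event.preventDefault();
        $.post('/messages/post/', $(this).serialize(), function(data) {
            append_message(data.name, data.message);
            $('#message-form input[name=message]').val('');
        }, 'json');
    });
});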

And here are the updated views:
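
(Reconstructed — a list view plus the Ajax endpoint the script above posts to; all names are assumptions:)

# views.py -- Python 2 / Django 1.x era; validation and CSRF omitted
import json

from django.http import HttpResponse
from django.shortcuts import render_to_response

from messages.models import Message

def message_list(request):
    messages = Message.objects.order_by('created')
    return render_to_response('messages/list.html', {'messages': messages})

def post_message(request):
    msg = Message.objects.create(
        name=request.POST['name'],
        message=request.POST['message'],
    )
    return HttpResponse(json.dumps({'name': msg.name, 'message': msg.message}),
                        content_type='application/json')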

And so with that, we can now submit messages without reloading the page. Any other users, however, would need to refresh to see the new messages. That’s where APE comes in. Here’s the APE client code (along with the updated append_message function):
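
(The client code embed is gone as well. From memory of the APE_JSF 1.x client it was in the neighborhood of the following, but every call and event name here should be treated as an assumption and checked against the APE wiki.)

// Rough APE client sketch -- verify the API against the APE wiki.
var client = new APE.Client();
client.load();

client.addEvent('load', function() {
    client.core.start({name: 'guest' + Math.floor(Math.random() * 10000)});
});

client.addEvent('ready', function() {
    client.core.join('messages');   // same hypothetical channel as the server side
});

// Fires when the server inline-pushes a 'postmsg' raw to the channel.
client.onRaw('postmsg', function(raw, pipe) {
    append_message(raw.data.name, raw.data.message);
});

// Updated helper, now shared by the form handler and the APE callback.
function append_message(name, message) {
    $('#messages').append($('<li>').text(name + ': ' + message));
}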

And here’s the updated view to send new posts to APE:
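
(Reconstructed from the description — this is the chunk that the follow-up post moves into a post_save signal; the inlinepush payload fields are guesses:)

import json
import urllib
import urllib2

from django.conf import settings
from django.http import HttpResponse

from messages.models import Message

def post_message(request):
    msg = Message.objects.create(
        name=request.POST['name'],
        message=request.POST['message'],
    )
    # Tell APE about the new message so connected clients update instantly.
    command = [{
        'cmd': 'inlinepush',
        'params': {
            'password': settings.APE_PASSWORD,
            'raw': 'postmsg',
            'channel': 'messages',
            'data': {'name': msg.name, 'message': msg.message},
        },
    }]
    urllib2.urlopen(settings.APE_SERVER,
                    urllib.urlencode({'cmd': json.dumps(command)}))
    return HttpResponse(json.dumps({'name': msg.name, 'message': msg.message}),
                        content_type='application/json')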

You’ll notice I’ve added two settings to settings.py:

APE_PASSWORD = 'testpasswd'
APE_SERVER = 'http://ape-test.local:6969/?'

And that’s all there is to it! Like I said earlier, this is a very incomplete application, and a lot of the “boilerplate” work is left as an exercise for the reader. It should be more than enough to get you on your way though. Perhaps in a later post I’ll move the APE update from inside the view to a post_save signal on the model itself. That’s for another time though…

UPDATE: Well, another time is today. Check out this post about adding a signal handler.

Expanding VirtualBox Dynamic VDIs

A commonly asked question in the VirtualBox IRC channel (#vbox on FreeNode) is some variant of “How can I make my dynamic disk image bigger?”

VirtualBox allows you to create dynamically expanding virtual disk images (VDIs) for your VMs. This means the file containing the virtual hard disk on the host only takes up as much space as the guest is actually using. When you create the dynamic VDI, the size you specify is the maximum size the VDI will expand to. This allows your guest OS to make the assumptions it needs to about drive geometry, etc.

There is no way to increase the maximum size of a dynamic VDI. You can, however, clone a smaller VDI to a larger one. Using this method, you can give your guest OS some more breathing room.

Update: As of version 4.0, VirtualBox has support for resizing dynamic VDIs and VHDs. Read the fine manual for more information. If you’re not using a VDI or a VHD (or if you’re not using a dynamic variant of one of those), read on… The technique described here should work for any image type VirtualBox supports.
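
(For reference, the 4.0+ one-liner looks like this, where the size is the new maximum in MB:)

VBoxManage modifyhd my_filename.vdi --resize 50000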

Update: Windows users may want to check out mpack’s CloneVDI tool. It provides a user-friendly GUI for a lot of VDI management tasks (including this one).

WARNING: This process will not work if your current VDI is smaller than 4GB (Update: the 4GB restriction no longer applies as of version 4.0). Also, do not delete your original VDI before you’re sure the new one works.

The basic process is as follows:

  1. Create a new (larger) VDI
  2. Clone the existing VDI to the new one
  3. Detach the old VDI from the VM and attach the new one
  4. Expand the partition on the VDI
  5. Boot the VM and make sure everything works
  6. Once you’re sure the new VDI is working properly, delete the old one.

So, let’s break those steps down a bit…

To create a new VDI, you can either use the Virtual Media Manager in the GUI (look in the File menu), or you can use VBoxManage in a terminal window, like so (note: if you’re using a Windows host, open a command window and run: cd “C:\Program Files\Sun\xVM VirtualBox”):

VBoxManage createhd --filename my_filename.vdi --size 50000 --remember

This will create a 50GB (50,000MB) dynamic VDI called “my_filename.vdi” and store it in the default location (on Linux, this is usually ~/.VirtualBox/HardDisks). The ‘--remember’ flag will automatically register the VDI with VirtualBox, so that we can use it later.

Now that we’ve created the new VDI, we need to clone the old one to it. You have to use VBoxManage for this step, since cloning isn’t supported in the GUI.

VBoxManage clonehd old.vdi new.vdi --existing

Obviously you should replace “old.vdi” and “new.vdi” with the appropriate filenames. You can either specify absolute paths, or just specify the filenames if the VDIs are stored in the default folder. This process will take some time, so grab some coffee and we’ll pick back up in a few minutes…

Once you’ve cloned the VDI, you need to attach it to the VM (and detach the old one). It’s probably easiest to do this in the settings page for the VM in the GUI, but it can also be done with VBoxManage. If your VDI is attached to the Primary Master IDE slot for the VM (which is the default), then you can use the following commands:

VBoxManage modifyvm MyVMName --hda none

VBoxManage modifyvm MyVMName --hda new.vdi

Obviously you’ll want to replace “MyVMName” and “new.vdi” with the appropriate values for your environment. Once you’ve done this, you’ll need to expand the partition on the new VDI, to let the guest OS know it can use all the new space you’ve given it. The details of how to do this are outside the scope of this post, but the general idea is to boot the VM with some sort of partitioning tool (GParted is a good Free option) and use it to expand the partition.

Once you’ve expanded the partition, boot the VM normally and make sure everything works. Once you’re sure everything works, unregister and delete the old VDI (again, this is easy enough to do in the GUI, but you can use VBoxManage as well).

VBoxManage closemedium disk old.vdi

rm /path/to/old.vdi

That’s all there is to it. The obvious downside to this method is that it takes up some extra disk space during the process, but if you’re looking to expand your VDI by any significant amount anyway, then this probably isn’t an issue.