In preparation for launching Linsides, which provides services via Linode’s private LAN, I wanted to get an idea of how well Linode’s LAN actually performs. I thought other folks might be interested in this information, so I figured I’d post my results.
First up is network latency. How long does it take a packet to travel from one Linode to another, and then back again? To test this, I used mtr, a handy tool for evaluating network latency.
josh@redacted:~$ mtr -rc 100 192.168.140.180
HOST: redacted        Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 192.168.140.180   0.0%   100    0.3   0.3   0.2   0.4   0.0
So, over 100 samples, the average round-trip time was 0.3 milliseconds, and no trip took more than 0.4 milliseconds. This is certainly very fast, and is pretty much what you’d expect from systems that are likely no more than a few dozen feet from each other.
Next up is network throughput. For this, I used iperf. Step one was getting iperf listening on my test Linode:
root@li101-202:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
Step two is to run the test from my primary Linode:
josh@redacted:~$ iperf -c 192.168.140.180
------------------------------------------------------------
Client connecting to 192.168.140.180, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.134.41 port 52600 connected with 192.168.140.180 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
So, 941 Mbits/sec. I probably could have tweaked that up a Mbit/sec or two by adjusting the TCP window size, but 941 Mbits/sec is certainly a respectable number (and more or less what you’d expect from a GigE connection). One thing to note: the private interface is an alias on the primary interface. This means the default cap on outbound bandwidth (which exists to prevent you from accidentally blowing through all your transfer in a couple of hours) applies to the private interface as well. The default cap is 50 Mbit/sec, but all it takes is a ticket to get that bumped up if needed.
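To put that cap in perspective, here’s a quick back-of-the-envelope calculation (the 200 GB quota is just an illustrative number, not a Linode plan figure):

```python
# Rough math: how long a sustained transfer takes to burn through a
# monthly quota at a given rate. The 200 GB quota is illustrative.
def hours_to_burn(quota_gb, rate_mbit):
    gb_per_hour = rate_mbit / 8 * 3600 / 1000  # Mbit/s -> GB/hour (decimal units)
    return quota_gb / gb_per_hour

print(hours_to_burn(200, 941))  # full GigE: under half an hour
print(hours_to_burn(200, 50))   # capped: about nine hours
```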
All in all, no real surprises here. The private network performs pretty much as well as you would expect a modern network to perform. It should be noted that this is just one data point, and it’s possible performance fluctuates a bit over time, but I’ve never noticed anything like that. The private network has always been rock solid for me.
So it’s that time of year… A few weeks after Christmas, and the dust has settled. Some new clothes in your dresser, maybe some new books and DVDs on your shelf, and a handful of items destined for a box in the closet.
This year I’ve found there’s been one gift I’ve used just about every day since I got it. Sarah got me an Aerobie AeroPress coffee maker, and it has quickly become part of my daily routine. It makes a fantastic cup of coffee (I know it claims to make espresso, but the pressures really aren’t high enough… it’s just really good, strong coffee).
Here are some things I’ve noticed as I’ve gotten to know my AeroPress:
The grind is very important. I use a Capresso Burr Grinder, which gives me a very consistent grind. I have it set just above the finest setting it will do. The other handy thing about this grinder is that the numbers on the timer happen to coincide well with the amount of coffee needed for the AeroPress.
The other very important thing is water temperature. The manufacturer recommends 175 degrees, and after some experimentation, I have to agree with them. Much cooler than that and the extraction isn’t as good… much hotter and it starts to extract a noticeable amount of bitterness. I have a standard tea kettle and a milk steaming thermometer that fits well in the neck of the kettle. I put the kettle on the (electric) stove until the thermometer reads 160. At that point I turn the stove off and the water will coast up to 175 while I prepare the grounds.
After that, I just follow the instructions that came with the AeroPress… Grounds go in, water goes in, stir for a few seconds, then slowly press down with the plunger. Once the coffee is brewed, I top the cup off with hot water to taste (I find I end up doubling the amount of liquid in the cup) and enjoy a great cup of coffee/americano (I haven’t quite decided what I should call it yet…).
Cleanup is extremely easy… Just take the plastic filter holder off the bottom, use the plunger to press the compacted grounds into the trash, and give everything a quick rinse. The whole cleanup process takes maybe 10 seconds.
So, with all that being said… I would heartily recommend the AeroPress to anyone who is looking for a great coffee experience. It has made it almost too easy for me to make good coffee.
UPDATE: The services listed in this post are now available at https://linsides.com.
I’ve decided to offer private memcached instances on the Linode private network. Memcached is a very fast, in-memory object cache, used by many websites to reduce load on their database. It’s commonly run on several servers on the same LAN, and most memcached clients use a hashing system to distribute objects across multiple instances.
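For illustration, here’s a minimal sketch of that hashing idea (the server addresses are made up; real clients work along these lines, and fancier ones use consistent hashing so that adding a server relocates fewer keys):

```python
import hashlib

# Hypothetical memcached instance addresses on the private LAN.
SERVERS = ["192.168.0.10:11211", "192.168.0.11:11211", "192.168.0.12:11211"]

def server_for(key, servers=SERVERS):
    # Hash the key and map it onto one of the instances; every client
    # using the same scheme picks the same server for a given key.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```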
One major feature of Linode’s VPS service is the private network that allows Linodes in the same datacenter to communicate with each other without touching your bandwidth quota (another bonus is that it’s _very_ fast; average RTTs are around 0.2 ms for me).
I personally use memcached on a few Linodes in order to provide caching for a number of sites I run (including this humble blog). I’ve found it to be a little cumbersome managing multiple instances across multiple servers, so I developed a simple dashboard to make my life easier. I figured if I had a need for easily managed memcached instances, maybe other folks do as well, so I’ve decided to open it up to let some folks try it out, with the intention of eventually selling it as a service.
Here’s an obligatory screenshot from the dashboard:
Right now the feature set is pretty basic. In the future I’d like to add a number of features, including the ability to schedule a cache flush on a repeating basis, a simple interface to check the value of a key, and perhaps even an API for managing instances.
So, if you’re a Linode user (Newark only for now), and feel like adding another memcached instance or two to your cache, fill out this form and I’ll get you set up with a beta account.
I realize a major goal of most advertising isn’t to make you think too hard about the product or service being advertised, it’s just to make you feel happy thoughts and buy stuff.
However, it is fun once in a while to take a second and think about some of the information being presented. My personal favorites are car insurance ads that mention the fact that “Drivers who switch to XYZ insurance save an average of $8,483,179 (or something like that)”. Seems like it would be pretty easy to come up with stats like that when you limit the data set to “drivers who switched”.
Someday I want to find a bunch of people who would end up paying more according to Geico’s formulas and get them to switch just to mess with their averages.
In my last post, I built a quick Django application for posting messages, and updating those messages using the Ajax Push Engine (APE). One of the things that bugged me about the design was that the APE update occurred directly in the view that saved the message. That’s fine for the very narrow application as it stands, but what happens if sometime in the future, some other view or method saves messages? You’ll have to duplicate that APE code there too, and duplication is something we want to avoid.
All we really want to do is make sure that whenever Django saves a message, it also spits that message out to any clients listening on the APE channel. Django provides a really easy way to do that with signals. Because I’m lazy, I’ll let the Django docs describe signals to you.
So, this is _really_ easy to implement. You just define the signal handler, then connect it to the post_save signal of the Message model. Like so:
You can just stick that at the bottom of models.py if you’d like.
Now your view for handling message submission is really simple:
And there you have it… Any time you save a message, Django will tell APE, and you can forget all about it.
Recently I’ve been looking at using the AJAX Push Engine (APE) from Weelya for adding Comet functionality to a Django site. To get a feel for how APE works, I threw together a really trivial message posting app, and figured I’d document my process for those who might also be interested in integrating APE with Django. Note: This post assumes you know how to set up a Django project, and know enough about Django to realize this code is missing lots of really important bits (validation, authentication, etc…). This is going to be a short novel of a blog post, with a lot of code, so bear with me…
So, to start with, I just slapped together a really basic message posting app. Just a message with a name and a timestamp.
And here’s the template:
Ok, so that’s the basic setup… Pretty boring, right? If Sue submits a message, Bob has to refresh his page to see it. So, we’re going to add APE into the mix to update Bob right away. Getting APE up and running is pretty easy; just follow the instructions over at the APE Wiki. We’ll be using APE’s inlinepush server module. Just for good measure, we’ll be tossing some jQuery in as well.
So, the basic idea is that we need to connect to an APE “channel” which we’ll use to receive updated messages. We’ll use jQuery to send the messages. So let’s start by using jQuery to submit the form, with the following script:
And here are the updated views:
And so with that, we can now submit messages without reloading the page. Any other users, however, would need to refresh to see the new messages. That’s where APE comes in. Here’s the APE client code (along with the updated append_message function):
And here’s the updated view to send new posts to APE:
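The push itself boils down to a helper like the one below, which the view calls after saving the message. Note this is a sketch: the inlinepush command layout is written from memory of APE-era examples, so verify it against your APE server’s documentation, and the channel and field names are illustrative.

```python
import json
import urllib.parse
import urllib.request

APE_SERVER = "http://ape-test.local:6969/?"  # matches settings.py below
APE_PASSWORD = "testpasswd"                  # matches settings.py below

def build_ape_url(channel, data):
    # APE's inlinepush module takes a JSON-encoded command in the URL.
    # The exact structure may differ between APE versions -- verify it.
    cmd = [{
        "cmd": "inlinepush",
        "params": {
            "password": APE_PASSWORD,
            "raw": "postmsg",
            "channel": channel,
            "data": data,
        },
    }]
    return APE_SERVER + urllib.parse.quote(json.dumps(cmd))

def push_to_ape(channel, data):
    # Fire-and-forget from the view; the timeout keeps a down APE
    # server from hanging the whole request.
    urllib.request.urlopen(build_ape_url(channel, data), timeout=2)
```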
You’ll notice I’ve added two settings to settings.py:
APE_PASSWORD = 'testpasswd'
APE_SERVER = 'http://ape-test.local:6969/?'
And that’s all there is to it! Like I said earlier, this is a very incomplete application, and a lot of the “boilerplate” work is left as an exercise for the reader. It should be more than enough to get you on your way though. Perhaps in a later post I’ll move the APE update from inside the view to a post_save signal on the model itself. That’s for another time though… UPDATE: Well, another time is today. Check out this post about adding a signal handler.
Had a chance to go to a fantastic local concert the other night at the Westcott Theater. The Andrew and Noah VanNorstrand Band were their usual high energy selves. They played some stuff off their upcoming album (which should be out this spring), which sounds like it’s going to be really impressive. Andrew and Noah never cease to amaze me with their talent as musicians, poets, and songwriters.
A real treat for me, though, was seeing Mike and Ruthy, who I’d never heard before. Mike and Ruthy are a “song-writing, harmony-singing, banjo and fiddle-slinging duo from the Hudson Valley” who are a lot of fun in concert. Their passion for each other and for their music is obvious. We picked up their CD The Honeymoon Agenda and have been enjoying it very much. The whole album has a fun groove, and Ruthy’s vocals (especially on blues numbers like “Something’s Got A Hold Me”) are phenomenal.
If you’re looking for a fun, folksy, polished sound, I’d definitely recommend checking out Mike and Ruthy.
Neither my wife nor I are terribly good at managing money. We’ve gotten better over time, but budgeting and saving just don’t come naturally to us.
One “trick” I have discovered recently is the value of always asking for a cheaper option. I’ve been amazed at how little negotiation is actually necessary. If you simply balk at the price offered and don’t counter offer, often times the result will surprise you. Twice in the past few weeks this has worked out well for me.
My wife’s car has needed a new muffler for a while, so I finally quit being lazy and took it over to a local muffler shop. Once it was up on the lift, it was pretty clear that a fair amount of the exhaust system needed to be replaced. The quoted price was a little over $400. I suggested that was a bit more than I was looking to pay that day and left it at that. After a moment’s (fairly awkward) silence, the shop manager offered to “see what he could do” and came back with a price $125 lower. I’d been hoping to get down to $350 or so, but keeping my mouth shut saved me another $75.
Just this weekend we were shopping for a mattress and box spring and had a very similar experience. The offered price of $439 became $279 simply by suggesting the first price was more than we wanted to pay.
So, the moral of the story? Whenever you’re spending any significant amount of money, always take a moment to ask for a lower price. The worst outcome is that they don’t budge. However, I’ve found that better than half the time it’ll save you some cash. It’s saved me almost $300 in the last month.
The other day I was reading through Guido van Rossum’s proposal for a moratorium on Python language changes. It’s an interesting read, not only for the technical merits of the idea, but because it’s a good example of how Guido relates to the Python community. It got me to thinking about how his style compares to that of others who lead large open source projects. Perhaps the most obvious example would be Linus Torvalds and the Linux kernel. Obviously there are major differences in the scope and scale of the two projects, but I think enough similarities exist to make a comparison worthwhile.
Both Linus and Guido are often referred to as the Benevolent Dictator For Life (BDFL) of their respective projects. However, there is (at times) a sharp distinction in how much emphasis falls on “benevolent.” To an outside observer, both Guido and Linus seem to get their way most of the time; it’s just the route taken to get there that differs.
Linus tends to have a fairly… “straightforward” communication style. If he disagrees with you, it’s a pretty safe bet he’ll let you know. Likewise, if he feels that the project should go in a certain way, he makes that very plain. His “I’m a bastard” mailing list posting, and recent falling out with Alan Cox are just two examples where tact took a back seat to getting his point across.
Guido, on the other hand, seems to take a much softer approach. His vision for Python is often communicated as a suggestion, or even a question (Should we go this way?). Take a look at the moratorium proposal, or the recent package distribution discussions for some examples.
In my own life, I’ve found a Guido-esque approach to be more effective most of the time. Intuitively I feel like it’s the “better” system. However, I’ve never had to manage a project anywhere near the size of the kernel, and it’s entirely likely that the soft approach simply doesn’t scale.
Obviously it’s easy enough to take a couple excerpts of some e-mails, make some generalizations, and come to some broad conclusions. So, what do you think? Am I nuts? How have you seen this play out in your life? Any pertinent examples I missed?
A commonly asked question in the VirtualBox IRC channel (#vbox on FreeNode) is some variant of “How can I make my dynamic disk image bigger?”
VirtualBox allows you to create dynamically expanding virtual disk images (VDIs) for your VMs. This means the file containing the virtual hard disk on the host only takes up as much space as the guest is actually using. When you create the dynamic VDI, the size you specify is the maximum size the VDI will expand to. This allows your guest OS to make the assumptions it needs to about drive geometry, etc.
There is no way to increase the maximum size of a dynamic VDI. You can, however, clone a smaller VDI to a larger one. Using this method, you can give your guest OS some more breathing room.
Update: As of version 4.0, VirtualBox has support for resizing dynamic VDIs and VHDs. Read the fine manual for more information. If you’re not using a VDI or a VHD (or if you’re not using a dynamic variant of one of those), read on… The technique described here should work for any image type VirtualBox supports.
Update: Windows users may want to check out mpack’s CloneVDI tool. It provides a user-friendly GUI for a lot of VDI management tasks (including this one).
WARNING: This process will not work if your current VDI is smaller than 4GB. If your current VDI is smaller than 4GB, do not attempt this method (Update: The 4GB restriction no longer applies as of version 4.0). Also, do not delete your original VDI before you’re sure the new one works.
The basic process is as follows:
- Create a new (larger) VDI
- Clone the existing VDI to the new one
- Detach the old VDI from the VM and attach the new one
- Expand the partition on the VDI
- Boot the VM and make sure everything works
- Once you’re sure the new VDI is working properly, delete the old one.
So, let’s break those steps down a bit…
To create a new VDI, you can either use the Virtual Media Manager in the GUI (look in the File menu), or you can use VBoxManage in a terminal window, like so (note: if you’re using a Windows host, open a command window and run: cd “C:\Program Files\Sun\xVM VirtualBox”):
VBoxManage createhd --filename my_filename.vdi --size 50000 --remember
This will create a 50GB (50,000 MB) dynamic VDI called “my_filename.vdi” and store it in the default location (on Linux, this is usually ~/.VirtualBox/HardDisks). The ‘--remember’ flag will automatically register the VDI with VirtualBox, so that we can use it later.
Now that we’ve created the new VDI, we need to clone the old one to it. You have to use VBoxManage for this step, since cloning isn’t supported in the GUI.
VBoxManage clonehd old.vdi new.vdi --existing
Obviously you should replace “old.vdi” and “new.vdi” with the appropriate filenames. You can either specify absolute paths, or just specify the filenames if the VDIs are stored in the default folder. This process will take some time, so grab some coffee and we’ll pick back up in a few minutes…
Once you’ve cloned the VDI, you need to attach it to the VM (and detach the old one). It’s probably easiest to do this in the settings page for the VM in the GUI, but it can also be done with VBoxManage. If your VDI is attached to the Primary Master IDE slot for the VM (which is the default), you can use the following commands:
VBoxManage modifyvm MyVMName --hda none
VBoxManage modifyvm MyVMName --hda new.vdi
Obviously you’ll want to replace “MyVMName” and “new.vdi” with the appropriate values for your environment. Once you’ve done this, you’ll need to expand the partition on the new VDI to let the guest know it can use all the new space you’ve given it. The details of how to do this are outside the scope of this post, but the general idea is to boot the VM with some sort of partitioning tool (GParted is a good Free option) and use that to expand the partition.
Once you’ve expanded the partition, boot the VM normally and make sure everything works. Once you’re sure everything works, unregister and delete the old VDI (again, this is easy enough to do in the GUI, but you can use VBoxManage as well).
VBoxManage closemedium disk old.vdi
rm /path/to/old.vdi (Windows users are on your own, I don’t know what the command is to delete files in Windows)
That’s all there is to it. The obvious downside to this method is that it takes up some extra disk space during the process, but if you’re looking to expand your VDI by any significant amount anyway, then this probably isn’t an issue.
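If you find yourself doing this regularly, the VBoxManage steps above are easy to script. Here’s a sketch (names and sizes are placeholders; with dry_run=True it only prints the commands, and the in-guest partition resize still has to be done separately):

```python
import subprocess

def grow_vdi(vm, old_vdi, new_vdi, size_mb, dry_run=True):
    # Mirrors the manual steps: create, clone, detach old, attach new.
    steps = [
        ["VBoxManage", "createhd", "--filename", new_vdi,
         "--size", str(size_mb), "--remember"],
        ["VBoxManage", "clonehd", old_vdi, new_vdi, "--existing"],
        ["VBoxManage", "modifyvm", vm, "--hda", "none"],
        ["VBoxManage", "modifyvm", vm, "--hda", new_vdi],
    ]
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return steps
```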