New Server

Now running on the new virtual server.

Virtual Machine Testing

A while back I built a machine for hosting VMs. It’s not super awesome, but it should be capable. It has a 4-core Sandy Bridge Xeon (E3-1220, 3.1 GHz), 8 GB of memory (2 DIMMs, with room for another pair), and no disk. It boots from a 4 or 8 GB Kingston USB flash drive. All other storage is over gigabit Ethernet to my storage server.

The storage server runs Solaris Express 11, and while there are some things I might change next time, that is for another post. With ~8 TB of raw storage I have a ~6 TB ZFS pool. I’m using NFS and iSCSI to export storage to the VM host.

The power failure (and a subsequent second power failure) has taken a toll on the reliability of my DNS server. It’s a dual-core Atom mini-ITX board and simply a temporary solution that grew permanent. Worse, I began hosting this blog on that machine.

I’m hoping to move the DNS and web server onto a virtual machine. First I wanted to get a handle on performance. Earlier today I created a pair of iSCSI LUNs and shared them. Last night I tuned up my NFS permissions. Today, in about the time it took to write this post, I’ve installed Ubuntu 11.10 (desktop) in three VMs, backed with iSCSI (thick provisioned), iSCSI (thin provisioned), and NFS.

Later today I’m going to see how the virtual disk performance compares between the three and maybe spin up a new DNS server.
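For a first pass at comparing the three backends, a quick sequential write/read timing inside each VM is enough. Here’s a rough Python sketch; the path and sizes are arbitrary, a real benchmark tool like bonnie++ or iozone would be more rigorous, and note the read pass may be served from the page cache rather than the disk:

```python
# Rough sequential-throughput sketch. Writes a file in large blocks,
# fsyncs so the data actually reaches the backing store, then reads
# it back, timing each phase.
import os
import tempfile
import time

def measure_throughput(path, size_mb=64, block_kb=1024):
    """Write then read size_mb MB at path; return (write_mbps, read_mbps)."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb

    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS write cache
    write_s = time.monotonic() - start

    start = time.monotonic()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_s = time.monotonic() - start

    os.remove(path)
    return size_mb / write_s, size_mb / read_s

if __name__ == "__main__":
    tmp = os.path.join(tempfile.gettempdir(), "throughput_test.bin")
    w, r = measure_throughput(tmp, size_mb=16)
    print(f"write: {w:.1f} MB/s  read: {r:.1f} MB/s")
```

Running the same script against a file on each of the three virtual disks gives a crude but consistent comparison.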

Uptime down

Well, my little Solaris-running mini-ITX system finally no longer has a giant uptime. At about 779 days and 45 minutes the power went out for about 5 seconds, bringing this mini system’s big uptime to an end.

That’s what I get for connecting a temporary system to wall power I guess…because in IT temporary is code for permanent or production.

Friday Night Rally

Went to my first Friday Rally. Talked Jeremy into being my navigator and nearly killed my wife with emotions and starvation.

We finally finished at the Round Table Pizza, where I bought us all some pizza and Mac and Jack Amber Ale. Some of the veterans said this wasn’t a very good example of a TSD Rally, and that if we were frustrated we should still give it another chance.

Audi A4 Avant, photo by flickr user mateus27_24-25

While eating our pizza and talking to a member of the sponsoring club, we heard that car 19 should come up for our trophy…it turns out we placed first in the Novice group.

See the sponsoring club’s site for events you can take part in, and soon the results of the April 2011 Friday Nighter.


1500 Days

Yesterday around 4:30 pm an old HP DL360 G3 server running Solaris turned 1500 days of uptime.

$ uptime
3:00pm up 1500 day(s), 21:43, 2 users, load average: 0.00, 0.00, 0.00

It’s getting close to hitting 1501. That however will be it. I’m going to reboot it later today to confirm some network configuration changes.

Selectric II

Turned on my new (to me) IBM Selectric II Correcting Typewriter tonight. Gwyn picked this up for me from a craigslist seller. Gwyn is somewhat enamored with typewriters (among other retro things) and was ecstatic when I finally picked up some ribbons for the portable typewriter she found at a thrift shop.

This Selectric has seen plenty of use. It has the film ribbon, and I think bits from the many ribbons past have all collected in the bottom of the typewriter; well, not the bottom, but inside, on top of the mechanisms. The price was right; at $20 I figured I could turn it around if I got bored with it. The seller claimed it was working but needed a ribbon.

Uh, not quite. It mostly works, but it has an all-but-new ribbon already installed.

Photo from Wikipedia, Etan J. Tal

I guess I need to figure out how to take it apart and clean everything. The carriage is sometimes slow to move or doesn’t move full character spaces as it reaches the right side. It seems to be missing part of the guides for the correction tape. Space works intermittently and backspace/correct are nearly inoperable.

Cosmetically it looks good from 10 feet. This one is black which helps hide the years of use. I am partial to the blue and red colors but black is probably better than tan. The textured finish is worn from many wrist hours and someone wrote on the top with a black pen. I feel like I should put it on a walnut typewriter cart in a dimly lit office with some orange shag carpet.

I have to admit I was only slightly interested when we got this beast, but tonight I was surprised by how much I enjoyed it. The 10 cpi Courier font ball produces the most beautifully sharp text. The exactness of the guides on the carriage is amazing. I need to research the tab mechanism; you can set multiple tabs somehow, which is fascinating to me.

Time for a change.

I propose a change in the way our operating systems handle file management. I think most of us with more than a few files have been there: backing up some files, reorganizing all the digital baggage. You start a copy process, but it’s about 3 GB, so you look around and find another 18 GB of ISO files to move, and then you start moving some 4 GB of MP3s. Pretty soon you have three or four copy operations running in parallel, and your hard drive doesn’t like it.

Photo by Michael Coté, cote on Flickr

If I select a dataset to copy, say 100 large files, the copy process either discovers each file and copies it, or makes a list of all files and directories and copies each file in the list. Either way it is a nice sequential operation. If I happen to have physically different drives this process is quite efficient. One drive reads and the other writes. Sure there are interruptions, but the drives are mostly able to stay on track.

If I run three or four copy operations on the same pair of drives, or worse, a single drive, the drive now spends a lot of time repositioning the read/write heads. Repositioning the heads is a time-consuming process: the read caches empty quickly, any write caches fill up just as fast, and then we wait some more.

For a few small files this isn’t too noticeable, but when the dataset becomes large, like a media library, it shows up quickly. Solid-state drives (SSDs) are better since they don’t have the moving parts, though I’d like to believe that laying down fragmented files is suboptimal even on an SSD. Furthermore, at least for a while, SSDs aren’t replacing spinning hard drives for bulk storage.

Caching is supposed to help, but it can’t compete with datasets many times larger than your memory. Even if you have 20 GB of memory and datasets in the 10 GB range, the OS won’t use a 10 GB buffer. I think larger buffers could help, but that is only part of a bigger solution.

I wrote earlier that a single copy operation from one drive to another is efficient. I’d like to see a copy queue implemented, so that only one copy operation takes place at a time. Knowing that only one copy is running lets the OS use a larger buffer, since it won’t have twenty copy processes all needing their own buffers.
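The idea can be sketched in a few lines. This is purely illustrative, not a real OS facility: one worker thread drains a FIFO of (src, dst) jobs, so only one copy runs at a time and one large buffer serves every job.

```python
# Minimal sketch of a serial copy queue: jobs are accepted at any time
# but copied one at a time by a single worker, reusing one big buffer.
import queue
import shutil
import threading

class CopyQueue:
    def __init__(self, buffer_mb=64):
        self._jobs = queue.Queue()
        self._buffer_len = buffer_mb * 1024 * 1024
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, src, dst):
        """Enqueue a copy; returns immediately."""
        self._jobs.put((src, dst))

    def _run(self):
        while True:
            src, dst = self._jobs.get()
            # copyfileobj with a large chunk size approximates
            # "one big shared buffer" for the single active copy.
            with open(src, "rb") as fin, open(dst, "wb") as fout:
                shutil.copyfileobj(fin, fout, length=self._buffer_len)
            self._jobs.task_done()

    def wait(self):
        """Block until every queued copy has finished."""
        self._jobs.join()
```

Submitting three backup jobs to this queue serializes them automatically, which is exactly the behavior you otherwise have to fake by babysitting one copy dialog at a time.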

Bonus points for parallelizing copies when many spindles are involved; maybe multiple reads don’t hurt performance too much, so run a few readers on one spindle if multiple spindles are being used for writes. Obviously, copies queued up on different drives could safely run in parallel.

Now that we have a queue, why not let the user prioritize or reorder it? Some built-in intelligence should automatically promote a 20 MB copy over a 20 GB copy, especially to removable drives.
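The automatic promotion could be as simple as ordering the queue by source size, so small jobs finish while the big ones wait. A hypothetical sketch (the helper name and tuple shape are my invention):

```python
# Sketch of size-aware prioritization: smaller sources jump ahead of
# larger ones via a heap keyed on file size.
import heapq
import os

def order_jobs(jobs):
    """Given (src, dst) pairs, return them smallest-source-first."""
    sized = [(os.path.getsize(src), src, dst) for src, dst in jobs]
    heapq.heapify(sized)
    return [(src, dst) for _, src, dst
            in (heapq.heappop(sized) for _ in range(len(sized)))]
```

A real implementation would also weigh destination type (promote copies to removable media) and let the user drag jobs around, but size alone already covers the 20 MB vs. 20 GB case.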

Photo by Erik Pitti, epitti on Flickr

Since the queue could know every file being copied, I think it makes sense to optimize for the many-small-files duplication problem. In a dataset with many small files, a duplication operation to the same physical drive is not very efficient if each file is copied one at a time. Let the queue figure out when this is happening and read multiple files before writing them out.
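Batching reads before writes might look like the sketch below: read small files into memory until a budget is hit, then write them all out, so the heads do a run of reads followed by a run of writes instead of alternating per file. Names and the budget are illustrative only.

```python
# Sketch of batched small-file duplication on one drive: accumulate
# file contents up to batch_bytes, then flush them all as writes.
import os

def copy_batched(pairs, batch_bytes=32 * 1024 * 1024):
    """pairs is a list of (src, dst); assumes each file fits in memory."""
    batch, used = [], 0

    def flush():
        for dst, data in batch:
            with open(dst, "wb") as f:
                f.write(data)
        batch.clear()

    for src, dst in pairs:
        with open(src, "rb") as f:
            data = f.read()          # read phase: no writes interleaved
        batch.append((dst, data))
        used += len(data)
        if used >= batch_bytes:
            flush()                  # write phase: drain the whole batch
            used = 0
    flush()
```

On a single spindle this turns thousands of read/write head ping-pongs into a handful of longer sequential runs.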

Little Victories

After weeks of searching I found the source code to an old program I wanted to port to a microcontroller. It would have helped if I had gone into the search knowing the name of the program.

It was written for X11 on Unix systems. The comments in the code say it was created in July 1990 and updated in January 1991.

I had to add a few #includes and tweak the library path in the makefile, but in about 20 lazy minutes I went from never having looked at the code to running it on OS X.

Not bad for code that’s 20 years old!

IR Sensor Network

I was thinking about making a sensor network. I considered just making every sensor fat with Ethernet. Ethernet modules run around $25, plus $0.50 for a MAC address EEPROM. That can really add up fast if you have no budget. Then there are IP addresses, configuration code, local interfaces, cables, and power, and wait…a cable? What if I don’t want cables in the way?

XBees are about the same cost as the Ethernet modules, maybe a little more if you’re using prototype boards and need adapter boards. XBees might be easier to deal with on dumb sensors, without a bunch of interface for configuring network addresses. Not needing buttons and LCDs on your nodes saves a lot of code and cost too.

An IR sensor (38 kHz) is around $2.00 and IR LEDs are cheap; perhaps use a preloaded MAC address EEPROM to give unique addresses, or just code them in, or use a set of jumpers, and you’re around $4.00 for addressing and an emitter/receiver. Range isn’t great, so maybe a mesh/repeater algorithm needs to be developed on the nodes. You still need power, but this might work well in a computer room for logging environmental data.

Seems like this could all be built onto a breakout board like the Evil Mad Science target board. Then maybe some casework, sensors, and power for a total in the range of $30 plus sensors and labor. Temperature is cheap, maybe $2. Humidity seems to start around $7 and go up. Perhaps budget around $50/node for end units; double or triple that for the controlling node with Ethernet, a local interface, and maybe some more microcontroller speed.

Here’s a starting point: just a TX and an RX unit. Not sure if it’d work out, but it’s something to play with later.

Don’t Panic

Scott over at Mostly Networks has a good post on keeping calm in stressful situations.

I think I do a good job staying collected during most “emergencies.” Running around without fully understanding the situation doesn’t solve problems. Stabbing in solutions without collecting enough context usually just makes a mess.

Sometimes you run into something new, and maybe even potentially big. Don’t make it more than it is and try not to get caught up in how big it is.

Unfortunately, if you work in a shop where pretend urgency is applied to every project and every failure, you’re likely to meet resistance when you’re the one who’s calm under fire.

Just remember: most of your systems won’t be creating a life-and-death situation for anyone. Don’t shorten your life with a cloud of stress because of them. If your systems are in a life-safety position, stay calm and don’t rush, so you stay focused and can properly repair them.