Mounting a USB hard drive on startup on Ubuntu Core

A photo of a Raspberry Pi 4 connected to a USB external hard drive

As you’ll be aware from my regular posts about it, I have a Raspberry Pi 4 running Ubuntu Core, which acts as a server for Home Assistant, Plex and Calibre-Web. Here’s how I’ve set it up to mount an external USB hard drive on boot up.

As it’s a Raspberry Pi, the operating system and binaries sit on a microSD card, which in this case is a mere 16 GB. Whilst the me of 20 years ago would have been astounded at the concept of something so tiny holding so much data, 16 GB isn’t much nowadays. So, I have a 1 TB external USB hard drive for storing the media files for Plex and Calibre-Web.

Ubuntu Core doesn’t automatically mount USB storage devices on startup unless you tell it to, and the instructions for doing so are different when compared with a regular Linux distro.

There’s no fstab

Most Linux distros, including regular Ubuntu, include fstab for managing file systems and mounting devices. But Ubuntu Core is designed to be a lightweight distro that acts as firmware for Internet of Things devices, and so it doesn’t include many tools that are common in other Linux distros – fstab among them.

You can, of course, just mount a USB drive manually with the following:

sudo mkdir /media/data
sudo mount /dev/sda1 /media/data

But this won’t persist when the computer restarts. After a bit of searching, I found a solution on StackExchange; it’s for Ubuntu Core 16, but works on 22 as well.

How to tell systemd to mount your USB hard drive

It should go without saying that you should back up your system before doing any of this. If you make a mistake and systemd stops working, your device could become unbootable.

Firstly, you’ll need to run sudo blkid to list all of the file systems that Ubuntu Core can see. Find the one that starts with ‘/dev/sda1’ and make a note of the long hexadecimal string that comes after UUID – it’ll probably look something like ‘2435ba65-f000-234244ac’. Copy and save this, as this identifies your USB hard drive.
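For illustration, the line you’re after will look something like this (the values here are invented):

/dev/sda1: LABEL="data" UUID="2435ba65-f000-234244ac" TYPE="ext4"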

Next, you’ll need to create a text file. Ubuntu Core only seems to offer the Vi text editor, which I haven’t bothered to learn to use properly. My favoured text editor is nano, but it’s not available on Ubuntu Core. Therefore, my recommendation is to create the file on another device and FTP it across. The file should be called media-data.mount; it’s really important that the file name matches the intended mount point. For example, if you’re instead planning to mount the USB hard drive at /mnt/files, the file would need to be called mnt-files.mount.
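
Incidentally, if you’d rather not work out the unit name by hand, systemd can do it for you. The systemd-escape tool ships as part of systemd, so it should be available on Ubuntu Core:

systemd-escape -p --suffix=mount /media/data

This prints ‘media-data.mount’, and is particularly handy if your mount point contains characters that need escaping.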

Here’s the template for the file:

[Unit]
Description=Mount unit for data

[Mount]
What=/dev/disk/by-uuid/[Your UUID]
Where=/media/data
Type=ext4

[Install]
WantedBy=multi-user.target

You’ll need to paste in the UUID for your USB hard drive where it says ‘[Your UUID]’. You’ll also need to match the file system type; I have my external USB hard drive formatted as ext4 for maximum compatibility with Linux, but yours may use ExFAT or NTFS.

This file needs to be saved to /etc/systemd/system/media-data.mount. You can either use vi to create and save the file directly, or FTP it across and copy it into place.

There are three further commands to run in turn:

sudo systemctl daemon-reload
sudo systemctl start media-data.mount
sudo systemctl enable media-data.mount
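
Before restarting, you can check that the mount has taken effect:

systemctl status media-data.mount
df -h /media/data

The first command shows whether systemd considers the unit active; the second confirms that the drive is actually mounted at /media/data.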

If you’ve done this correctly, then the next time you restart your device, your USB hard drive should mount automatically. If not, then you should receive some surprisingly helpful error messages explaining what you’ve done wrong.

There’s another guide at Wimpy’s World which has some additional detail and helped me get this working.

Home Assistant with HTTPS and HomeKit

A screenshot of Home Assistant running in a web browser with HTTPS enabled and no certificate errors

Welcome to the latest chapter of getting Home Assistant working on a Raspberry Pi using Docker. Last time, I’d managed to get it working in Docker, but only over a regular HTTP connection and without HomeKit. The good news is that I’ve solved both of these problems.

Using SWAG to enable HTTPS

Firstly, I recommend reading this paragraph whilst listening to ‘Swagger Jagger’ by Cher Lloyd.

I’ve tried lots of different ways to get Home Assistant working over SSL/TLS. There’s a good reason why this is one of the key selling points of Home Assistant Cloud, as it can be difficult. Thankfully, there’s a Docker image called SWAG (Secure Web Application Gateway) that handles much of the legwork. Once you’ve installed SWAG, follow this guide, and you should find that you can access your Home Assistant setup at https://homeassistant.[yourusername].duckdns.org/. No need to specify a port, or accept any certificate warnings.
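
To give you an idea of what’s involved, here’s a sketch of a docker run command for SWAG using DuckDNS validation – based on my reading of the linuxserver.io documentation, so treat the values as placeholders and check the current docs for the exact parameters:

docker run -d \
  --name=swag \
  --cap-add=NET_ADMIN \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=[yourusername].duckdns.org \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=[your DuckDNS token] \
  -p 443:443 \
  -p 80:80 \
  -v /home/[username]/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag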

Inside SWAG, there’s an ACME client, which will automatically renew the SSL certificates every 90 days for you, using ZeroSSL or Let’s Encrypt. There’s also nginx, which is used to set up a reverse proxy, and support for dynamic DNS services like DuckDNS.

SWAG has sample configurations for lots of different services, including calibre-web, so I have SSL access to my calibre-web image too. Most services, like Home Assistant, need to be mounted as subdomains (as above), but others (like calibre-web) can be mounted as subfolders, e.g. https://[yourusername].duckdns.org/calibre-web. This reduces the number of subdomains that you need SSL certificates for; ZeroSSL only covers three subdomains on a free account, so it’s worth considering subfolders if you want to add more services. My only issue with SWAG so far came last week, when DuckDNS went down on Sunday morning.
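
Enabling one of those sample configurations is just a case of renaming it and restarting SWAG. For calibre-web as a subfolder, it’s something like this (the exact path depends on where you mounted SWAG’s /config volume):

cp /home/[username]/swag/nginx/proxy-confs/calibre-web.subfolder.conf.sample \
  /home/[username]/swag/nginx/proxy-confs/calibre-web.subfolder.conf
docker restart swag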

If you have your own domain, then you can also add a CNAME to it to point it at your DuckDNS account, should you wish to use that rather than a [something].duckdns.org address.

Getting Apple HomeKit working

Carrying on the musical theme, here’s ‘Carry Me Home’ by Gloworm, a 90s dance classic which has only recently become available on digital platforms again.

After getting my swagger jagger on and getting HTTPS working, the final issue I’ve been having with Home Assistant is the HomeKit bridge. Adding Home Assistant devices to Apple’s Home app is something that normally works out of the box if you install Home Assistant OS, but takes more work if you use Docker.

The instructions which helped me were these ones on the Home Assistant forums. You’re going to need to install another Docker image containing avahi; there are several, but this one worked for me. It’s bang up to date, unlike the most common Docker image, which is, um, 8 years out of date and also only works on x86 machines – not much help for my arm64-based Raspberry Pi 4.
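
Alongside avahi, you’ll need a homekit block in Home Assistant’s configuration.yaml. As a rough guide, a minimal block looks like this – the filter is optional, and the domains here are just examples:

homekit:
  filter:
    include_domains:
      - light
      - switch
      - climate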

Once you’ve installed avahi, added the relevant lines to configuration.yaml in Home Assistant and restarted it, HomeKit should work. To get started, add the HomeKit integration to Home Assistant – you may want to specify which devices will show if you don’t want all of them. Then, use your iPhone or iPad to scan the QR code in your Home Assistant notification panel, and add the bridge. If all goes well, it will warn you that it’s an uncertified accessory, but will then let you set up each device in turn.

If it just sits there for several minutes and then gives up, you’ll need to do some more digging. Don’t worry, this happened to me too. I suggest downloading the Discovery app, which shows all of the mDNS devices broadcasting on your network. If you can’t see ‘_hap._tcp’ in the list, then there’s a problem. In my case, this turned out to be because my Raspberry Pi wasn’t connected to the same wifi network. It’s plugged in to my ADSL router with a network cable, but we use Google Wifi which results in a ‘double NAT’ situation. Connecting the Raspberry Pi to both wired and wireless connections seemed to fix the issue.

Indeed, as a side effect Home Assistant managed to autodiscover some additional devices on my network, which was nice.

Home Assistant Core in Docker? Done it, mate

All in all, I’ve successfully managed to get Home Assistant to where I want it to be – self-updating in Docker, secure remote access, and a HomeKit bridge so that I can ask Siri to manage my devices. I’m looking forward to being able to turn my heating on whilst driving, for example.

It’s been a challenge, requiring a lot of skimming through the Home Assistant forums and various StackExchange discussions. Ideally, I would have a spare computer to run Home Assistant OS, which would have taken some of the leg work out of this, but I’m happy with the setup. Finding SWAG and getting it to work was a moment of joy, after all the setbacks I’d had before.

Using Portainer to manage Docker

Screenshot of the Portainer web interface

So you may have noticed that I have a thing going on with Docker at present. I’ve set up Home Assistant in Docker, and more recently also set up calibre-web with Docker. Between these, and other Docker images, it’s quite a lot to manage – especially on a headless remote device. Thankfully, Portainer is a web-based solution to managing multiple Docker containers.

There’s a free Community Edition which offers sufficient features to manage one Docker system, and that’s what I’m using. If you need to manage multiple systems, there’s a paid-for Business Edition, but home users should get by with the Community Edition. You will, though, see lots of greyed-out options that are only available in the Business Edition – something anyone who uses a freemium WordPress plugin will recognise.

The installation instructions are detailed, and there are a number of steps that you’ll need to follow using the command line. Once everything’s set up, you’ll be able to open a web browser and see all of your Docker containers, and their status.
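
To give a flavour, the core of the install boils down to a couple of commands like these – check Portainer’s own documentation for the current version, as ports and image tags change over time:

docker volume create portainer_data
docker run -d \
  --name portainer \
  --restart=always \
  -p 8000:8000 \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

Recent versions serve the web interface over HTTPS on port 9443.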

Portainer lets you start, stop and restart containers from the web interface, and delete any containers no longer needed. The feature that I’ve found most useful is the ‘Duplicate/Edit’ function, which allows you to easily duplicate a container, and optionally replace the original container with a new one with updated variables. This is great for people like me who invariably make a mistake when setting up a Docker Compose file. Logs are also made easily accessible, which helped me when troubleshooting a container that was starting but then wasn’t accessible through a web browser.

You can also create new containers from within Portainer; whilst this is easier than typing out commands by hand, Docker Compose still works better for me, as you can just copy and paste a complete configuration.

If you’ve got a few Docker images up and running, I would recommend Portainer as an easier way of managing them. It’s much nicer than having to type out commands in a ssh session, and is a friendlier way of working with Docker for less experienced users, like myself.

Managing e-books with Calibre-web

Screenshot of the calibre-web interface

If, like me, you’ve picked up a number of e-books over the years, you may use Calibre as your e-book manager. It’s a desktop application with an optional web interface, but it has its drawbacks. The user interface is clunky, and it tries to cram lots of advanced features in – even the latest version 7 is overwhelming for new users. So, if you can forego the desktop application, there’s an alternative called calibre-web that does the same thing in a web browser, and with a much nicer interface.

Once installed, you can migrate your existing metadata.db from Calibre and the e-book folders, and calibre-web will pick up where you left off. I particularly like the ability to download metadata from sources such as Google Books, to get more complete data about each book besides its author and title. There’s a built-in e-reader, or you can use an app that supports OPDS – I used Aldiko.

By far the easiest way to install it is using Docker. There’s a good image on DockerHub; it’s maintained by a third party but recommended by calibre-web’s developers. Once installed, it doesn’t require much additional configuration.
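
If you want a starting point, running that image looks something like this – the paths and time zone are placeholders, and the image’s documentation has the full list of options:

docker run -d \
  --name=calibre-web \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 8083:8083 \
  -v /home/[username]/calibre-web/config:/config \
  -v /home/[username]/books:/books \
  --restart unless-stopped \
  lscr.io/linuxserver/calibre-web

The web interface then runs on port 8083, and you point it at the folder containing your metadata.db on first login.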

By default, calibre-web doesn’t allow uploads, but you can amend this in the Admin settings. The settings toggle is rather buried away, and it took me some time to find. But once uploads are enabled, it allows you to completely replace the desktop Calibre app if you want to. You can also set up multiple user accounts, if you want to share your calibre-web server with others.

I have calibre-web installed on the same Raspberry Pi as my Plex and Home Assistant servers. Indeed, calibre-web essentially offers a kind-of Plex for e-books, seeing as Plex doesn’t offer this itself. Unfortunately, most of my e-books were purchased through Amazon, and so are only accessible through their Kindle apps and devices. But for the handful of books that I’ve picked up through the likes of Unbound and Humble Bundle, it’s helpful to have them in one place.

Running Home Assistant in Docker and Snap

A screenshot of the Home Assistant installation instructions for Docker

So, as I mentioned a couple of weeks ago, I’ve set up Home Assistant (HA) to control the various smart devices that we have around the home. At the time, I just used a snap package, but now I’ve migrated to using Docker, and here’s why.

Firstly, there are some disadvantages of installing Home Assistant using a snap package. Namely:

  1. The snap package isn’t an official release by the Home Assistant project, and is instead built by a third party.
  2. This means that, at time of writing, it’s a couple of releases behind the latest official release.
  3. It also means that it’s not a formally supported way of running Home Assistant, and there are fewer resources out there to help you if you’re stuck.
  4. I had issues updating previously installed custom components from HACS.

Meanwhile, there’s an official Home Assistant Docker image that is updated at the same time as new releases, and it’s mentioned in the installation guide.

So, on the whole, Docker is better for running HA than Snap. But I wanted to run HA on my Raspberry Pi 4, which has Ubuntu Core on it, and that only offers Snap. But wait… you can install Docker via Snap, and the Docker snap package is maintained by Canonical, so it’s regularly updated.

You can see where this is going. What if I install Docker using Snap, and then install Home Assistant into Docker? Well, that’s what I did, and I’m pleased to inform you that it works.

Docker on Snap, step-by-step

If you want to try this yourself, here are the steps that I followed. However, please be aware that you can’t migrate a Home Assistant setup from Snap to Docker. Whilst HA does offer a backup tool, the option to restore a backup is only available on Home Assistant Operating System, and it seems that manually copying the files across won’t work either. So, if you currently use Snap, you’ll have to set up HA again from scratch afterwards. You’ll also, at the very least, need to run snap stop home-assistant-snap before you start.

  1. Install Docker. You can do this by logging into your machine using SSH and typing in snap install docker.
  2. Enable access to the Docker socket. There’s probably a better way of doing this, but for me, just running chmod 777 /var/run/docker.sock worked.
  3. Install Home Assistant. You’ll need to enter quite a long shell command, which is:
    docker run -d \
    --name homeassistant \
    --privileged \
    --restart=unless-stopped \
    -e TZ=MY_TIME_ZONE \
    -v /PATH_TO_YOUR_CONFIG:/config \
    --network=host \
    ghcr.io/home-assistant/home-assistant:stable

    The two placeholder values will need changing. For ‘MY_TIME_ZONE’ you’ll need to type in your time zone, which in my case is ‘Europe/London’, and ‘PATH_TO_YOUR_CONFIG’ is the folder where you want your configuration files to live – I suggest /home/[username]/homeassistant.
  4. Grab a drink, as the installation will take a few minutes, and then open http://[your IP address]:8123 in a web browser. If it’s worked, then you’ll be presented with HA’s onboarding screen.
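
Whilst you’re waiting, you can keep an eye on progress with a couple of standard Docker commands:

docker ps
docker logs -f homeassistant

The first confirms that the container is up; the second tails Home Assistant’s startup log (press Ctrl+C to stop following).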

Again, if you had the HA snap package installed, then once everything’s working with Docker, you’ll need to uninstall any related HA packages (like HACS, toolbox and configurator) and then home-assistant-snap itself. And then you’ll need to set up all of your devices again. The good news is that, if you decide to move your HA installation to a new machine in future, you can just migrate the Docker image.

Wouldn’t it be better just running Docker?

Okay, so you may be wondering why I’ve set up HA this way. After all, it would probably be easier just to install Raspberry Pi OS Lite and put Docker on that, without using Snap. Well, there’s a method to my madness:

  • I like running Ubuntu Core because it’s so minimalist. It comes with the bare minimum of software installed, which means that there’s less risk of your system being compromised if a software vulnerability is found and exploited.
  • I already have Plex running quite happily in Snap, and didn’t want to have to migrate that as well.

In other words, this was the easiest way of running HA in Docker with my current setup. And I’m happy with it – I’m running the latest version of HA and it seems to work better.

There are a couple of additional steps that I still need to complete, which are:

  • Enabling SSL/TLS for remote access
  • Enabling mDNS broadcasts for Apple HomeKit integration

I’m working on these. Home Assistant Cloud is the easiest way of setting up secure access, and I’m considering it. It’s a paid-for service, but it does financially support HA’s development, and seems to be much easier than the alternatives. As for mDNS, I imagine there’ll be things I need to tweak in both Docker and Snap to get it to work.

Adventures in setting up Homebridge for HomeKit

A screenshot of the Homebridge dashboard

A recent project of mine has been to get Homebridge up and running. It’s a server-based program that acts as a bridge between smart devices in the home, and Apple’s Home app on iOS.

One thing, I don’t know why

HomeKit, the technology underpinning Home, is famously limited; whilst most smart devices support Amazon’s Alexa and Google Assistant, very few support HomeKit. Indeed, out of the various smart speakers, plug sockets, dishwasher, thermostat, smoke alarm and TV that we have in our house, it’s only the TV that natively supports HomeKit.

Whilst just about everything else (except the smoke alarm) supports Google Assistant, and the Google Home app, it would be helpful to be able to use these devices with Siri. For example, when I’m driving, I want to be able to use the Hey Siri command to turn the heating on, so that we don’t come home to a cold house.

I tried so hard, and got so far

There are a few ways to run Homebridge. If you have money to spare, then by far the easiest way is to buy a HOOBS box. HOOBS stands for ‘Homebridge Out of Box System’, and you’ll get a plug-in device with a customised version of Homebridge that is simple to set up. You can also buy HOOBS on an SD card that can be slotted into your own Raspberry Pi. Or, you can just download the HOOBS SD card image for a donation of £10.

I have two Raspberry Pis – an RPi 400 which is our seven-year-old’s computer, and an RPi 4 which is my Plex server. The latter runs Plex under Ubuntu Core, a minimal version of Ubuntu Linux which doesn’t include a graphical user interface, or even the APT package manager. Instead, apps are installed using Snap packages, which enforce greater sandboxing and security. There is a Snap package for Homebridge, but I couldn’t actually get it to work; once installed, I couldn’t open the browser page as instructed.

So, I’ve installed it using Apt on our child’s Raspberry Pi 400, and followed the proper instructions.
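
For completeness, the official route adds Homebridge’s own apt repository and installs from there. From memory it’s roughly the following, but do check homebridge.io for the current commands rather than trusting my recollection:

curl -sSfL https://repo.homebridge.io/KEY.gpg | sudo gpg --dearmor | sudo tee /usr/share/keyrings/homebridge.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/homebridge.gpg] https://repo.homebridge.io stable main" | sudo tee /etc/apt/sources.list.d/homebridge.list > /dev/null
sudo apt update
sudo apt install homebridge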

There’s only one thing you should know

When you first start Homebridge, it won’t do much. To get it talking to your devices, you’ll need to install the appropriate plugins, which you can do through the web UI. I suggest going with the ‘verified’ plugins first, as you’ll probably find that there’s more than one plugin for some of the more popular services like Nest. Whilst installing plugins is relatively easy, configuring them can be difficult:

  • The Nest plugin, for example, has you logging into your Google Nest account in Chrome’s Incognito mode, whilst having Developer Tools open. You then have to copy and paste various data from the HTTP headers.
  • I have a series of smart plug sockets which use the Tuya Smart Life platform, but I had them registered under a different app which Homebridge can’t connect to. I had to de-register them and then set them up again on the official Tuya app.
  • Despite following the instructions, I couldn’t get my Bosch smart dishwasher to connect.

Setting up Homebridge is therefore something best reserved for people who are comfortable using the Linux command line and with at least an intermediate understanding of how devices work. However, it does mean that I now have these devices in HomeKit as planned.

Homebridge even supports my Solar Inverter, although in a rather odd way. It appears as 12(!) separate accessories in the Home app, seeing as HomeKit doesn’t ‘know’ what a solar panel is. You can also make the Google Home app talk to Homebridge – again, this is the only way that I can make my Solax system work with Google.

But in the end, it doesn’t even Matter

Those of you who follow news in the smart devices/Internet of Things space will be aware of Matter, a new unified smart device standard with the support of Amazon, Apple, Google and Samsung. Matter will hopefully do away with the separate ecosystems that each company offers, and any Matter-approved device should work with any other. However, the final Matter specification was only agreed last year, and I’m not expecting many of my existing devices to be updated to support it. At best, my Google Nest Mini devices will be updated soon, and my thermostat may be updated. For others, I would probably have to replace them with Matter-enabled devices in due course. In the interim, therefore, Homebridge offers me the flexibility that Matter will hopefully bring.

My favourite add-ons for Thunderbird

A screenshot of the Thunderbird add-ons web page

It’s been some time since I used Mozilla Thunderbird at home – I switched to Sparrow, then Apple’s own Mail app, before settling on Airmail last year. But at work, where I deal with a high volume of email, I prefer to use Thunderbird, instead of the provided Outlook 2010. There are a few add-ons which help me get stuff done, and so here is my list:

Lightning

Unlike Outlook, Lotus Notes or Evolution, Thunderbird doesn’t ship with a calendar. Lightning is an official Mozilla extension which adds a reasonably good calendar pane. Calendars can be local, subscribed .ics files on the internet, or there’s basic CalDAV support as well, and it works well with multiple calendars. A ‘Today’ panel shows up in your email pane so you can quickly glance at upcoming appointments.

Once you have Lightning installed, there are some other calendar extensions you can add. Some people use the Provider for Google Calendar extension – I don’t, as nowadays Google Calendar supports CalDAV, so there’s no need for it. If you need access to Exchange calendars, there’s a Provider for Exchange extension too, although as we’re not (yet) on an Exchange system at work, I haven’t tried it.

There’s also ThunderBirthDay, which shows the birthdays of your contacts as a calendar.

Google Contacts

If you use Gmail and its online address book to synchronise your contacts between devices, then Google Contacts will put these contacts in Thunderbird’s address book. It doesn’t require much setup – if you’ve already set up a Gmail account in Thunderbird then it’ll use those settings.

This is probably of most interest to Windows and Linux users. On Mac OS X, Thunderbird can read (and write, I think) to the global OS X Address Book, which can be synchronised with Google Contacts and therefore this extension isn’t needed. In the past, I used the Zindus extension for this purpose but it’s no longer under development.

Mail Redirect

This is a feature that older email clients like Eudora had, which allowed you to redirect a message to someone else, leaving the message intact. Mail Redirect adds this as a function in Thunderbird.

It’s different to forwarding, where you quote the original message or send it as an attachment – with Redirect, the email appears in the new recipient’s inbox almost exactly as it did in yours. That way, if the new recipient replies, the reply goes to the original sender and not to you.

Thunderbird Conversations

If you like the way that Gmail groups email conversations together in the reading pane, then Thunderbird Conversations is for you. It replaces the standard reading pane, showing any replies, and messages that you have sent – even if they’re in a different folder. You can also use it to compose quick replies from the reading pane rather than opening a new window.

LookOut

LookOut improves compatibility with emails sent from Microsoft Outlook – especially older versions. Sometimes, attachments get encapsulated in a ‘winmail.dat’ file, which Thunderbird doesn’t understand; LookOut makes these attachments available to download as regular files. Unfortunately, the extension apparently no longer works, and there hasn’t been an update since 2011, so I’m guessing it has been abandoned. Hopefully someone will come along and fix it.

Smiley Fixer

Another add-on that will make working alongside Outlook-using colleagues a bit easier. If you’ve ever received emails with a capital letter ‘J’ at the end of a sentence, then this is Microsoft Outlook converting a smiley :) into a character from the Wingdings font. Thunderbird doesn’t really understand this and just displays ‘J’, which is where Smiley Fixer comes in. It will also correct a few other symbols, such as arrows, but you may still see the occasional odd letter in people’s signatures.

Enigmail

If you use GnuPG to encrypt messages, then you’ll probably have the Enigmail extension installed. Though it was originally a pain to set up, nowadays it seems to work quite well without a lot of technical knowledge. It includes a listing of all of the keys in your keychain, and you can ask it to obtain public keys for everyone in your address book, should you wish.

Dropbox for Filelink

Some time ago a feature called ‘Filelink’ was added to Thunderbird, which allowed you to send links to large files, rather than including them as attachments. Whilst most people nowadays have very generous storage limits for their email, sometimes it’s best not to send large files as email attachments. Thunderbird supports Box and the soon-to-be-discontinued Ubuntu One services by default, but you can use the Dropbox for Filelink extension to add the more popular Dropbox service. Another extension will add any service which supports WebDAV which may be helpful if you’re in a corporate environment and don’t want to host files externally.

These are the extensions that I use to get the most out of Thunderbird. Although I’ve tried using Outlook 2010, I still prefer Thunderbird as it’s more flexible and can be set up how I want it.

Fixing high memory usage caused by mds

Screenshot of Activity Monitor on Mac OS X showing mds with high memory usage

Recently my Mac Mini has been running very slowly, with some programs freezing for as much as several minutes. I pruned the list of items that were running on startup but this didn’t seem to make much difference.

So I opened Activity Monitor (the OS X equivalent of Task Manager) and found a process called ‘mds’ was consuming huge amounts of RAM and virtual memory. MDS is the process which builds an index of your disks for use by Spotlight, the tool that lets you search your drives, and also by Time Machine for backups. Sometimes MDS requires a fair amount of RAM, but it was using almost 2 gigabytes of virtual memory and almost a gigabyte of RAM in my case. I only have 4 gigabytes of RAM in total, and so this was causing major problems as OS X had to regularly swap data between RAM and the paging file.

I’d tried looking into this before and got nowhere. Most of the results in Google were discussions on Apple’s support forums, which were devoid of any real solutions. But eventually I found this post on iCan’t Internet which actually had a solution.

Firstly you should run Disk Utility. Repair your hard disk, and also repair the disk permissions. This may fix your problem, but it didn’t in my case so I moved on to the next step.

Open up Terminal, and type in the following command: sudo mdutil -avE. This runs a tool called ‘mdutil’ and tells it to erase and completely rebuild Spotlight’s index on all volumes. It turns out that the index on my hard disk had got corrupted somehow, and this was causing problems with the ‘mds’ process. It took a while for the command to run, but afterwards a huge amount of RAM and virtual memory became free. Unsurprisingly, my Mac ran much more happily after this.

Hopefully, if you have the same problem, this will help. It has certainly breathed new life into my increasingly sluggish computer.

Resurrecting a dead OS with KernelEx

I’ve come across KernelEx – an open source compatibility layer for Windows 98 and Me which allows programs designed for Windows 2000 and XP to run on those older operating systems. I found it via the VLC forums, where there are screenshots of VLC 1.0.1 and Firefox 3.5.2 running, even though these programs normally wouldn’t run on such an old copy of Windows.

I can’t test KernelEx because I don’t have a copy of Windows 98 or Me to hand. In any case, both operating systems have been long abandoned by Microsoft and are probably full of unpatched security holes now. But if you’re feeling nostalgic, or just like the geeky satisfaction of getting something to work that shouldn’t normally work, give it a shot.

How to migrate a Parallels virtual machine to VirtualBox

A screenshot of the web site for VirtualBox

Although Parallels and VirtualBox both run virtual machines on Mac OS X, they use different file formats for storing those machines on disk. Though I believe Parallels will open a VirtualBox disk, VirtualBox cannot automatically import Parallels disks. But it’s not impossible…

If the guest operating system, i.e. the system that is running inside Parallels, is Windows 2000/XP/Vista, then it is possible to use a free tool from VMware to do the conversion. Here’s a step-by-step:

1. Back up your virtual machine

Seriously. We’ll need to modify it a bit before it’s converted, so you’ll want a backup copy in case things go wrong, or in case you want to use Parallels again in future.

2. Uninstall Parallels Tools

This is the modifying bit. Load your Windows virtual machine in Parallels, and uninstall Parallels Tools (the helper program that adds drivers and clipboard sharing, and other stuff). This is important as otherwise your virtual machine won’t boot in VirtualBox – and I know this from experience. You also can’t uninstall Parallels Tools unless you are running Parallels at the time.

3. Close all programs

Close as many running programs in your virtual machine as possible. We’re about to take a snapshot image of it while it is running, so any unsaved data may be lost when you boot the image in VirtualBox. That includes programs with icons in your notification area, such as virus scanners, instant messaging programs etc.

4. Install VMware Converter

Once Parallels Tools has been uninstalled (you may need to reboot the virtual machine for this), we can begin the conversion process using a tool ironically made by VMware. Go to the download page for VMware Converter in whatever web browser you use in your virtual machine (it’s a Windows program). Download it, and then install it.

Run the Converter tool, and click ‘Convert Machine’ – this should pop up a wizard which walks you through the process of setting up a new virtual machine image. You want to tell it to use a ‘Physical Computer’, and then on the next screen choose ‘This Local Machine’. Select the hard disk of the virtual machine and leave ‘Ignore page file and hibernation file’ ticked as this will just bloat the new virtual disk with unnecessary rubbish.

For the type of virtual machine, select ‘Other virtual machine’, and on the next screen, give it a name (e.g. ‘Windows Vista’). Next, you will also need to save it somewhere, and this should not be the existing hard disk of the virtual machine. You can either use your Mac’s main hard disk, mapped to drive ‘Z:’ under Parallels, a network drive or an external drive if you have it forwarded through to the virtual machine. You should be able to use the top option for the type (i.e. ‘Workstation 6.x’) but if it doesn’t work try another option. Keep ‘Allow disk to expand’ checked on the next screen. Click through until you’re ready to complete, and start the conversion.

5. Go and grab a cup of coffee

Or go out shopping. Or read a few chapters of War and Peace. Either way, the machine will take a significant amount of time to convert – mine took around 45 minutes and was only around 15 GB. Bigger disks may well take longer. It helps if you don’t have lots of other programs running on your Mac at the same time as then more of your CPU juice can be used for the conversion.

6. Shut down the machine in Parallels

Now that you’ve exported the machine, shut down Windows and close Parallels. This is mostly so that you can stay within the terms of the license agreement for Windows which won’t allow multiple instances.

7. Import the disk into VirtualBox

Open VirtualBox, choose ‘File’ and then ‘Virtual Disk Manager’. Add the disk file that you created, and click OK. Then click ‘New’ to create a new virtual machine, and select the correct operating system from the list. Try to ensure that you give the virtual machine the same settings (such as RAM size) as you did in Parallels. When asked for a hard disk, click the ‘Existing’ button and choose the disk file that you created from the list. Then click Finish.

8. Boot up in VirtualBox

Hopefully all will have gone to plan, and you will be able to boot into Windows as before. All of your files and programs should be there waiting for you.

If, however, you encounter a blue screen mentioning ‘prlfs.sys’ like I did, boot the machine but press F8 during the boot to enter Safe Mode with Command Prompt. Type in cd c:\windows\system32\drivers and then rename prlfs.sys prlfs.sys.old and then reboot – that should get you up and running.

For the inquisitive, prlfs.sys is part of Parallels Tools and should have been removed as part of step 2; however, muggins here forgot to do this when he tried it himself, and therefore encountered this error.

9. Install VirtualBox Guest Additions

Guest Additions are to VirtualBox what Parallels Tools are to Parallels – in other words, they make Windows sit better in the virtual machine and improve integration with the host operating system. On the main VirtualBox menu, select Devices and then ‘Install Guest Additions’ and follow the on-screen instructions. Though this is optional, it will improve the experience of using Windows in VirtualBox.

Hopefully now you’ll be up and running in VirtualBox. Feel free to post comments below and I’ll try to do what I can to answer them but I’m not the world’s greatest expert in this. I also don’t know how to do this in other versions of Windows or other operating systems.