Playlist of the month: guitar-heavy indie rock

Screenshot of the guitar heavy indie rock playlist on Spotify

Now that I’m blogging regularly again, I’ve decided to start a new monthly feature where I post a playlist of 10 songs, all around a theme. With a few hours to go until the end of the month, here’s this month’s playlist.

These are all indie rock songs with big guitar riffs, and some of my favourites. If you want to listen along, here’s the Spotify playlist.

  • ‘Steve McQueen’ by The Automatic. This was the first single from this band’s second album, ‘This Is A Fix’. It starts with thumping guitars, and never lets up. Whilst The Automatic are best known for their debut single, ‘Monster’, this is my favourite.
  • ‘When We Wake Up’ by Asylums. I can’t quite remember how I came across this song, but it’s great, and only has around 60,000 streams on Spotify so far. There’s a catchy chorus and powerful guitar riffs all the way through.
  • ‘Praise Be’ by The Plea. Another less well-known band who I found out about because they supported Ash on tour. I really like this song and it’s surprising that it’s not more well known.
  • ‘Boulevard of Broken Dreams’ by Green Day. I’m sure people will argue whether this fits the theme, but I would argue that it’s one of their best songs from their best album.
  • ‘Ruby’ by Kaiser Chiefs. The first single from their second album, with strong guitars from the start and a catchy chorus.
  • ‘Nothing’ by A. Calling your band ‘A’ probably made sense in the days when people bought singles from high street record stores, but the move to digital platforms makes this song a little harder to find.
  • ‘Dirty Little Secret’ by The All-American Rejects. Another first single from a second album; whilst it’s not my favourite song by this band, it fits the theme.
  • ‘I Bet You Look Good On The Dancefloor’ by Arctic Monkeys. Their debut single, and their best, in my view. I know it’s a controversial opinion but none of their subsequent singles have been as good as this.
  • ‘Orpheus’ by Ash. I mentioned Ash earlier, and as they’re the band I’ve seen the most (three times) I should include one of theirs.
  • ‘Just A Day’ by Feeder. Originally a B-side to ‘Seven Days In The Sun’, this ended up being a single for a later Greatest Hits album, and is arguably among their best songs.

I’ll do another playlist of 10 songs next month. With it being December, no prizes for guessing the theme.

Ultra Processed Food

Cover images of the books about ultra-processed food mentioned in the article

Something that I’ve become more concerned about in our household is our consumption of so-called ‘ultra-processed food’. My wife has had a few health issues over the past 18 months, including an elevated risk of developing type 2 diabetes, which has seen her cut her sugar intake. This coincided with the publication of several books about ultra-processed food, and has seen us make some changes to reduce our exposure to it.

The books

Before I go into much detail, here are the books I’m talking about:

  1. Ultra-Processed People by Dr Chris van Tulleken
  2. The Way We Eat Now by Dr Bee Wilson
  3. Ravenous by Henry Dimbleby

Note: these are sponsored links, but feel free to purchase these books from your local independent tax-paying bookshop, or borrow them from a library.

If you only read one of these, read Chris van Tulleken’s Ultra-Processed People. Chris is probably better known as ‘Dr Chris’ from the CBBC show Operation Ouch, which he presents with his twin brother Dr Xand (and later Dr Ronx). He’s a triple threat: a GP who is also a research scientist and a TV presenter, and it shows. He’s able to distil academic research into an easily readable format, which isn’t surprising when you consider that this is what he does for his patients and his TV audience. But it also means that there’s academic rigour behind this book.

Dr Xand pops up quite a bit in this book; Chris and Xand are identical twins but have different physiques. Chris puts this down to Xand’s time in the USA, where he was exposed to higher amounts of so-called ‘ultra-processed food’, and so he’s ended up higher on the BMI scale than his brother (although Chris acknowledges that BMI is discredited). When they both contracted Covid-19 in 2020, Xand was more seriously ill than Chris.

Over the course of the book, we discover that there’s increasing evidence that ultra-processed food is linked to obesity, and how the food industry tries to downplay it.

How do we define ultra-processed food?

Chris acknowledges that it can be hard to define what ultra-processed food is. The best model that we have is the Nova classification, developed by Prof Carlos Augusto Monteiro at the University of São Paulo in Brazil. Essentially, this splits food into four groups:

  • Nova 1 – wholefoods, like fruit, vegetables, nuts etc that can be eaten with little to no processing
  • Nova 2 – culinary ingredients, like vinegar, oils, butter and salt, which require some processing
  • Nova 3 – processed food. This is basically anything that’s been cooked, so home-made bread would fall under here. Foods from the Nova 1 and 2 categories are combined to create the foods in the Nova 3 category.
  • Nova 4 – ultra-processed food, which is made from formulations of food additives that may not include any ingredients from the Nova 1 category.

Probably the easiest way to work out whether something fits into the Nova 4 category is to look at the list of ingredients. If one or more ingredients are listed that you wouldn’t expect to find in a typical large supermarket, then it’s probably ultra-processed food. Think emulsifiers, artificial sweeteners, preservatives and ingredients identified only by those dreaded E numbers that my mum used to be wary of back in the 1980s.

And there’s a lot of food that falls into the Nova 4 category. Almost all breakfast cereals, and any bread made in a factory, are examples.

Why are ultra-processed foods so common?

Fundamentally, it’s to do with cost and distribution. For example, a tin of tomatoes that contains some additional ultra-processed ingredients may be cheaper than a tin containing just tomatoes (and perhaps a small amount of acidity regulator). It’s a bit like how drug dealers cut drugs with flour, say, to stretch their supply and make more money.

Distribution is also a factor. A loaf of bread that is baked in a factory may take a couple of days to reach supermarket shelves, where it also needs to be ‘fresh’ for a few days. So the manufacturers will add various preservatives and ingredients to ensure that bread remains soft.

You can bake your own bread using only yeast, flour, salt, olive oil and water. But Tesco will sell you a loaf of Hovis white bread that also contains ‘Soya Flour, Preservative: E282, Emulsifiers: E472e, E471, E481, Rapeseed Oil, and Flour Treatment Agent: Ascorbic Acid’. These keep the bread soft and extend its shelf life; whereas a homemade loaf may start going stale after 2-3 days, a shop-bought loaf may go mouldy before it goes stale.

Other common examples

Breakfast cereals brand themselves as a healthy start to the day, but often contain worryingly high amounts of sugar. And there’s evidence that their ultra-processed ingredients interfere with the body’s ability to regulate appetite, leading to over-eating.

Ice cream is also often ultra-processed, if you buy it in a supermarket. The extra additives ensure that it can survive being stored at varying temperatures whilst in transit. It’s notable that most UK ice cream is manufactured by just two companies – Froneri (the Nestlé, Cadbury’s, Kelly’s, Häagen-Dazs and Mövenpick brands) and Unilever (Wall’s and Ben & Jerry’s). There are many small ice cream producers, but the challenge of transporting ice cream and keeping it at the right temperature means that they have limited reach.

I’m also worried about a lot of newer ‘plant-based’ foods that are designed to have the same taste and texture as meat and dairy products. You can eat a very healthy plant-based diet, but I would argue that some ultra-processed plant-based foods are less healthy than the meat and dairy products they’re mimicking.

What we’re doing to cut our intake of ultra-processed food

We now bake our own bread in a bread machine. Not only do you avoid ultra-processed ingredients, but freshly-baked bread tastes so much nicer than a loaf bought in a shop. It takes a little more planning, but most of the ingredients don’t need to be kept fresh.

We also buy more premium products where we can. Rather than refined vegetable oils, we buy cold-pressed oil for frying, and I’ve mentioned chopped tomatoes above. Of course, these products cost more, and it’s something that both Chris and Henry mention in their books. It should come as no surprise that there’s a link between obesity and poverty, if people on low incomes cannot afford good food.

And we’ve had to give up Pringles. Chris devotes practically a whole chapter to them, and how they trick the brain into wanting more.

You can download the Open Food Facts app to help decipher food labels. It includes a barcode scanner, and will warn you if what you’ve scanned is ultra-processed. The good news is that there are still plenty of convenience foods which are not ultra-processed – there are some suggestions in this Guardian article.

Whilst I haven’t yet given up on artificially-sweetened soft drinks, we reckon that we’ve cut our sugar intake and our exposure to artificial additives. In many cases, we don’t know the long-term health effects of these additives, although we do know that some people struggle to lose weight despite eating a supposedly ‘healthy’ diet and exercising regularly.

Home Assistant with HTTPS and HomeKit

A screenshot of Home Assistant running in a web browser with HTTPS enabled and no certificate errors

Welcome to the latest chapter of getting Home Assistant working on a Raspberry Pi using Docker. Last time, I’d managed to get it working in Docker, but only over a regular HTTP connection and without HomeKit. The good news is that I’ve solved both of these problems.

Using SWAG to enable HTTPS

Firstly, I recommend reading this paragraph whilst listening to ‘Swagger Jagger’ by Cher Lloyd.

I’ve tried lots of different ways to get Home Assistant working over SSL/TLS. There’s a good reason why this is one of the key selling points of Home Assistant Cloud, as it can be difficult. Thankfully, there’s a Docker image called SWAG (Secure Web Application Gateway) that handles much of the legwork. Once you’ve installed SWAG, follow this guide, and you should find that you can access your Home Assistant setup at https://homeassistant.[yourusername].duckdns.org/ . No need to specify a port, or accept any certificate warnings.
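For reference, here’s roughly what creating the SWAG container looks like – a sketch based on the linuxserver.io image’s documentation, with placeholder values that you’ll need to change (and do check the guide for the current options):

    docker run -d \
      --name=swag \
      --cap-add=NET_ADMIN \
      -e PUID=1000 \
      -e PGID=1000 \
      -e TZ=Europe/London \
      -e URL=[yourusername].duckdns.org \
      -e SUBDOMAINS=wildcard \
      -e VALIDATION=duckdns \
      -e DUCKDNSTOKEN=[your DuckDNS token] \
      -p 443:443 \
      -p 80:80 \
      -v /home/[username]/swag:/config \
      --restart unless-stopped \
      lscr.io/linuxserver/swag

The PUID and PGID values map the container to your user account, and the DuckDNS token is used for the DNS-based certificate challenge.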

Inside SWAG, there’s an ACME client, which will automatically renew your SSL certificates for you (they’re only valid for 90 days), using ZeroSSL or Let’s Encrypt. There’s also nginx, which is used to set up a reverse proxy, and support for dynamic DNS services like DuckDNS.

SWAG has sample configurations for lots of different services, including calibre-web, so I have SSL access to my calibre-web image too. Most services, like Home Assistant, need to be mounted as subdomains (as above), but others (like calibre-web) can be mounted as subfolders, e.g. https://[yourusername].duckdns.org/calibre-web. This reduces the number of subdomains that you need SSL certificates for; ZeroSSL only offers 3 subdomains on a free account, so it’s worth considering subfolders if you want to add more services. My only issue with SWAG so far was last week, when DuckDNS went down on Sunday morning.
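Enabling one of those sample configurations is just a case of renaming it and restarting SWAG. Assuming your SWAG config folder is /home/[username]/swag, it’s something like this:

    cd /home/[username]/swag/nginx/proxy-confs
    cp homeassistant.subdomain.conf.sample homeassistant.subdomain.conf
    docker restart swag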

If you have your own domain, you can also add a CNAME record pointing at your DuckDNS address, should you wish to use that rather than a [something].duckdns.org address.
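In your domain’s DNS settings, the record would look something like this (home.example.com being a hypothetical subdomain of your own domain):

    home.example.com.    CNAME    [yourusername].duckdns.org.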

Getting Apple HomeKit working

Carrying on the musical theme, here’s ‘Carry Me Home’ by Gloworm, a 90s dance classic which has only recently become available on digital platforms again.

After getting my swagger jagger on and getting HTTPS working, the final issue I’ve been having with Home Assistant is the HomeKit bridge. Adding Home Assistant devices to Apple’s Home app is something that normally works out of the box if you install Home Assistant OS, but takes more work if you use Docker.

The instructions which helped me were these on the Home Assistant forums. You’re going to need to install another Docker image containing avahi; there are several, but this one worked for me. It’s bang up to date, unlike the most common Docker image, which is, um, 8 years out of date and also only works on x86 machines – not much help for my arm64-based Raspberry Pi 4.

Once you’ve installed avahi, added the relevant lines to configuration.yaml in Home Assistant and restarted it, HomeKit should work. To get started, add the HomeKit integration to Home Assistant – you may want to specify which devices will show, if you don’t want all of them. Then, use your iPhone or iPad to scan the QR code in your Home Assistant notification panel, and add the bridge. If all goes well, it should immediately warn you that it’s an uncertified accessory, but will then let you set up each device in turn.
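For reference, here’s a rough sketch of the sort of thing that ends up in configuration.yaml – the filter is optional, the domains are just examples, and I’m assuming your Home Assistant config lives at /home/[username]/homeassistant (adjust to wherever you mapped /config):

    cat >> /home/[username]/homeassistant/configuration.yaml <<'EOF'
    homekit:
      - filter:
          include_domains:
            - light
            - switch
            - climate
    EOF
    docker restart homeassistant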

If it just sits there for several minutes and then gives up, you’ll need to do some more digging. Don’t worry, this happened to me too. I suggest downloading the Discovery app, which shows all of the mDNS devices broadcasting on your network. If you can’t see ‘_hap._tcp’ in the list, then there’s a problem. In my case, this turned out to be because my Raspberry Pi wasn’t on the same network as my iPhone. The Pi is plugged in to my ADSL router with a network cable, but we use Google Wifi, which results in a ‘double NAT’ situation. Connecting the Raspberry Pi to both wired and wireless networks seemed to fix the issue.
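If you’d rather check from the command line than an app, avahi-utils offers something similar to Discovery. This won’t work on Ubuntu Core itself (no apt there), but any Debian- or Ubuntu-based machine on the same network will do:

    sudo apt install avahi-utils
    avahi-browse --resolve --terminate _hap._tcp

If the HomeKit bridge is broadcasting correctly, it should show up in the output.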

Indeed, as a side effect Home Assistant managed to autodiscover some additional devices on my network, which was nice.

Home Assistant Core in Docker? Done it, mate

All in all, I’ve successfully managed to get Home Assistant to where I want it to be – self-updating in Docker, secure remote access, and a HomeKit bridge so that I can ask Siri to manage my devices. I’m looking forward to being able to turn my heating on whilst driving, for example.

It’s been a challenge, requiring a lot of skimming through the Home Assistant forums and various StackExchange discussions. Ideally, I would have a spare computer to run Home Assistant OS, which would have taken some of the leg work out of this, but I’m happy with the setup. Finding SWAG and getting it to work was a moment of joy, after all the setbacks I’d had before.

Using Portainer to manage Docker

Screenshot of the Portainer web interface

So you may have noticed that I have a thing going on with Docker at present. I’ve set up Home Assistant in Docker, and more recently also set up calibre-web with Docker. Between these and other Docker images, it’s quite a lot to manage – especially on a headless remote device. Thankfully, Portainer is a web-based tool for managing multiple Docker containers.

There’s a free Community Edition, which I’m using; it offers sufficient features to manage one Docker system. If you need to manage multiple systems, there’s a paid-for Business Edition, but home users should get by with the Community Edition – although you will see lots of greyed-out options which are only available in the Business Edition, something anyone who uses a freemium WordPress plugin will recognise.

The installation instructions are detailed, and there are a number of steps that you’ll need to follow using the command line. Once everything’s set up, you’ll be able to open a web browser and see all of your Docker containers, and their status.
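At the time of writing, the core of the installation boils down to creating a volume for Portainer’s data and then running the container – check the official documentation for the current image tag:

    docker volume create portainer_data
    docker run -d \
      -p 8000:8000 \
      -p 9443:9443 \
      --name portainer \
      --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest

Once it’s running, the web interface lives at https://[your IP address]:9443, where you’ll be asked to create an admin account.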

Portainer lets you start, stop and restart containers from the web interface, and delete any containers no longer needed. The feature that I’ve found most useful is the ‘Duplicate/Edit’ function, which allows you to easily duplicate a container, and optionally replace the original container with a new one with updated variables. This is great for people like me who invariably make a mistake when setting up a Docker Compose file. Logs are also made easily accessible, which helped me when troubleshooting a container that was starting but then wasn’t accessible through a web browser.

You can also run new containers in Portainer; whilst this is easier than typing out docker run commands, Docker Compose works better for me, as you can just copy and paste the files.

If you’ve got a few Docker images up and running, I would recommend Portainer as an easier way of managing them. It’s much nicer than having to type out commands in an SSH session, and is a friendlier way of working with Docker for less experienced users, like myself.

Managing e-books with Calibre-web

Screenshot of the calibre-web interface

If, like me, you’ve picked up a number of e-books over the years, you may use Calibre as your e-book manager. It’s a desktop application with an optional web interface, but it has its drawbacks. The user interface is clunky, and it tries to cram lots of advanced features in – even the latest version 7 is overwhelming for new users. So, if you can forego the desktop application, there’s an alternative called calibre-web that does the same thing in a web browser, and with a much nicer interface.

Once installed, you can migrate your existing metadata.db from Calibre and the e-book folders, and calibre-web will pick up where you left off. I particularly like the ability to download metadata from sources such as Google Books, to get more complete data about each book besides its author and title. There’s a built-in e-reader, or you can use an app that supports OPDS – I used Aldiko.

By far the easiest way to install it is using Docker. There’s a good image on DockerHub; it’s maintained by a third party but recommended by calibre-web’s developers. Once installed, it doesn’t require much additional configuration.
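The image I’d guess most people use is linuxserver/calibre-web; assuming that’s the one, the run command looks something like this – the volume paths are placeholders for wherever your config and Calibre library should live:

    docker run -d \
      --name=calibre-web \
      -e PUID=1000 \
      -e PGID=1000 \
      -e TZ=Europe/London \
      -p 8083:8083 \
      -v /home/[username]/calibre-web:/config \
      -v /home/[username]/books:/books \
      --restart unless-stopped \
      lscr.io/linuxserver/calibre-web

calibre-web is then available at http://[your IP address]:8083, and during setup you point it at the metadata.db in the /books folder.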

By default, calibre-web doesn’t allow uploads, but you can amend this in the Admin settings. The settings toggle is rather buried away, and it took me some time to find it. But once uploads are enabled, calibre-web can completely replace the desktop Calibre app, if you want it to. You can also set up multiple user accounts, if you want to share your calibre-web server with others.

I have calibre-web installed on the same Raspberry Pi as my Plex and Home Assistant servers. Indeed, calibre-web essentially offers a kind of Plex for e-books, seeing as Plex doesn’t offer this itself. Unfortunately, most of my e-books were purchased through Amazon, and so are only accessible through their Kindle apps and devices. But for the handful of books that I’ve picked up through the likes of Unbound and Humble Bundle, it’s helpful to have them all in one place.

Comment Spam strikes back

An illustration of a robot turning web pages into canned meat product. Generated using Bing AI Image Generator

So now that I’m blogging again, it’s the return of comment spam on my blog posts.

Comment spam has always been a problem with blogs – ever since blogs first allowed comments, spam has followed. Despite the advent of the rel="nofollow" link attribute, automated bots still crawl web sites and submit comments with links, in the hope that this will boost their rankings in search engines.

In the early days of blogging, blogs often appeared high in Google’s search engine results – by their very nature, they featured lots of links, were updated frequently, and the blogging tools of the time often produced simple HTML which was easily parsed by crawlers. So it was only natural that those wanting to manipulate search engine rankings would try to take advantage of this.

I’ve always used Akismet for spam protection, even before I switched to WordPress, and it does a pretty good job. Even so, I currently have all comments set to be manually approved by me, and last week a few got through Akismet that I had to manually junk.

Humans, or AI?

These five interested me because they were more than the usual generic platitudes along the lines of ‘great post’ and ‘this taught me so much about this topic’. They were all questions about the topic of the blog post in question, with unique names attached. However, as they all came through together, and had the same link in them, it was clear that they were spam – advertising a university in Indonesia, as it happens.

Had it not been for the prominent spam link, and the fact they all came in together, I might not have picked up on them being spam. Either they were actually written by a human, or someone is harnessing an AI to write comment spam now. If it’s the latter, then I wonder how much that’s costing. As many will know already, AI requires a huge amount of processing power, and whilst some services are offering free and low-cost tools, I can’t see this lasting much longer as the costs add up. But it could also just be someone being paid through services like Amazon Mechanical Turk, even though such tasks are almost certainly against their terms of service.

I think I’m a little frustrated that comment spam is still a problem even after a few years’ break from blogging. But then email spam is a problem that we still haven’t got a fix for, despite tools like SPF, DKIM and DMARC. I’m guessing people still do it because, in some small way, it does work?

Asking your friends a question every day

An illustration of a question mark appearing from a wizard's hat. Generated by Bing AI Image Generator

A couple of years ago, I asked my Facebook friends a question – what animal do you think our child wants as a pet? And as an incentive, whoever guessed correctly could nominate a charity to receive a £5 donation. The post got around 60 comments before the correct answer – a parrot – was guessed, and the £5 went to the Bradford Metropolitan Food Bank.

We didn’t buy our child a parrot as a pet – they’re expensive to buy and insure, and can outlive their humans – but it gave me an idea.

So for the whole of 2022, I asked a new, unique question of my Facebook friends each day. I wrote most of these in an Excel spreadsheet over the course of Christmas 2021, and then added to it over the year. Questions were usually posted in the morning, and all got at least one comment – but some got many more.

I have around 300 friends on Facebook and so I tried to come up with questions that were inclusive, or hypothetical, so as not to exclude people. For example, not all my friends drive, so asking lots of questions about cars would exclude people. I also wanted to avoid any questions that could be triggering for people, so most were framed around positive experiences.

I was taking a leaf out of Richard Herring’s book – quite literally, as he has published several Emergency Questions books – but it’s something I enjoyed doing. It also meant that I found out some more facts about my friends and got to know some of them better. And it reminded me of the really early days of blogging, with writing prompts like the Friday Five.

This year, I’ve asked the same questions, but included my answers in the posts, as I didn’t usually get a chance to answer my own questions in 2022. This has also required some re-ordering of questions, as some related to events like Easter which were on different days this year.

And for 2024? Well, I’m slowly working on some brand new questions, although I’m only up to March so far. And I keep thinking of great ideas for questions, only to find that I’ve already asked them before.

Maybe I’ll publish them as a page on here someday.

Bringing back the archives

An illustration of a phoenix rising from the ashes, with a web page. Generated by the Bing AI Image Creator

Last month, I wrote about how I had found peace with myself regarding losing over a decade’s worth of blog posts.

Well, I’ve already sort-of changed my mind, and have brought back some old posts which, until now, were only accessible on the Internet Archive’s Wayback Machine.

This doesn’t mean that all of my old posts will be reinstated – if anything, I’ll be bringing back 1-2% of them at most. My criteria are:

  • Posts which, despite being offline for about 5 years, are still linked to. I’m using the Redirection WordPress plugin to track 404 errors, which you can group by URL to see the most commonly not-found pages.
  • Posts that still offer useful advice, or information that is otherwise not easily accessible on other web sites.
  • Posts that mark important events in my life.

So, here’s a selection of what I’ve brought back already, in chronological order:

  • Media Player Classic (January 2004). A review of a now-defunct lightweight alternative media player for Windows. VLC is probably a better option nowadays.
  • Apple Lossless Encoder (May 2004). A blog post about Apple’s then-new music format which preserves full audio quality when ripping CDs in iTunes, and how it compares to other formats like FLAC and Monkey’s Audio.
  • Knock-off Nigel (August 2008). An anti-piracy advert for TV.
  • How to migrate a Parallels virtual machine to VirtualBox (November 2008). A how-to guide for switching from Parallels Desktop to VirtualBox, which I imagine is still useful for some people.
  • Fixing high memory usage caused by mds (February 2013). A how-to guide for fixing an issue with macOS. I don’t use a Mac anymore, but hopefully this is still useful to someone.
  • Baby update (November 2015). This was actually a draft version of a post that must have somehow survived in Firefox’s local storage, so I re-published it.
  • How to: fix wrong location on iPhone (January 2017). Another how-to guide that fixed an issue I was having at the time with my iPhone’s location randomly jumping around.

There’s more to come, as and when I find time to restore them. I’m also using Google Search Console to find pages that it’s expecting to work, but that result in a 404 error.

I wear glasses now

A photo of me, taken in July 2021, wearing glasses

There’s a few life developments that have happened in the years since I stopped blogging regularly, and one of those was in July 2021 – I started wearing glasses.

I hadn’t noticed that my vision was deteriorating, but it was picked up at a routine eye test. I suspected the optometrist had found I needed glasses when he tweaked the lenses and the last couple of lines on the eye chart suddenly became much clearer. Oh well, I managed 37 years without needing to wear them.

I’m fortunate that I can just wear one pair of glasses for both near and distance vision, so I don’t need to take them on and off for different tasks, or wear bifocals. And they make a difference – as someone who uses screens all day at work, my eyes aren’t as tired at the end of the day as they were before.

Of course, July 2021 was around the time when we still needed to wear facemasks on public transport, so I got the lovely experience of my glasses steaming up.

You may also notice that I’m overdue for my next eye test, so I promise that I’ll book another one soon. I’ve contemplated getting contact lenses next time, but it depends how much my glasses cost. And I don’t mind wearing glasses too much.

Running Home Assistant in Docker and Snap

A screenshot of the Home Assistant installation instructions for Docker

So, as I mentioned a couple of weeks ago, I’ve set up Home Assistant (HA) to control the various smart devices that we have around the home. At the time, I just used a snap package, but now I’ve migrated to using Docker, and here’s why.

Firstly, there are some disadvantages of installing Home Assistant using a snap package. Namely:

  1. The snap package isn’t an official release by the Home Assistant project, and is instead built by a third party.
  2. This means that, at time of writing, it’s a couple of releases behind the latest official release.
  3. It also means that it’s not a formally supported way of running Home Assistant, and there are fewer resources out there to help you if you’re stuck.
  4. I had issues updating previously installed custom components from HACS.

Meanwhile, there’s an official Home Assistant Docker image that is updated at the same time as new releases, and it’s mentioned in the installation guide.

So, on the whole, Docker is better for running HA than Snap. But I wanted to run HA on my Raspberry Pi 4, which has Ubuntu Core on it, and that only offers Snap. But wait… you can install Docker itself as a snap, and the Docker snap package is maintained by Canonical, so it’s regularly updated.

You can see where this is going. What if I install Docker using Snap, and then install Home Assistant into Docker? Well, that’s what I did, and I’m pleased to inform you that it works.

Docker on Snap, step-by-step

If you want to try this yourself, here’s the steps that I followed. However, please be aware that you can’t migrate a Home Assistant setup from Snap to Docker. Whilst HA does offer a backup tool, the option to restore a backup is only available on Home Assistant Operating System, and it seems that manually copying the files across won’t work either. So, if you currently use Snap, you’ll have to set up HA again from scratch afterwards. You’ll also, at the very least, need to run snap stop home-assistant-snap before you start.

  1. Install Docker. You can do this by logging into your machine using SSH and typing in snap install docker.
  2. Enable access to the Docker daemon. There’s probably a better way of doing this (see the sketch after this list), but for me, just running chmod 777 /var/run/docker.sock worked.
  3. Install Home Assistant. You’ll need to enter quite a long shell command, which is:
    docker run -d \
    --name homeassistant \
    --privileged \
    --restart=unless-stopped \
    -e TZ=MY_TIME_ZONE \
    -v /PATH_TO_YOUR_CONFIG:/config \
    --network=host \
    ghcr.io/home-assistant/home-assistant:stable

    The two placeholder values will need changing. For ‘MY_TIME_ZONE‘ you’ll need to type in your time zone, which in my case is ‘Europe/London‘, and ‘PATH_TO_YOUR_CONFIG‘ should be the folder where you want your configuration files to live. I suggest /home/[username]/homeassistant.
  4. Grab a drink, as the installation will take a few minutes, and then open http://[your IP address]:8123 in a web browser. If it’s worked, then you’ll be presented with HA’s onboarding screen.
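As mentioned in step 2, there’s probably a tidier alternative to chmod 777: the Docker snap’s own notes suggest creating a docker group and adding your user to it. I haven’t tested this on Ubuntu Core, so treat it as a sketch:

    sudo addgroup --system docker
    sudo adduser $USER docker
    newgrp docker
    sudo snap disable docker
    sudo snap enable docker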

Again, if you had the HA snap package installed, then once everything’s working with Docker, you’ll need to uninstall any related HA snaps (like HACS, toolbox and configurator) and then home-assistant-snap itself. And then you’ll need to set up all of your devices again. The good news is that, if you decide to move your HA installation to a new machine in future, you can just migrate the Docker container and its config folder.

Wouldn’t it be better just running Docker?

Okay, so you may be wondering why I’ve set up HA this way. After all, it would probably be easier just to install Raspberry Pi OS Lite and put Docker on that, without using Snap. Well, there’s a method to my madness:

  • I like running Ubuntu Core because it’s so minimalist. It comes with the bare minimum of software installed, which means that there’s less risk of your system being compromised if a software vulnerability is found and exploited.
  • I already have Plex running quite happily in Snap, and didn’t want to have to migrate that as well.

In other words, this was the easiest way of running HA in Docker with my current setup. And I’m happy with it – I’m running the latest version of HA and it seems to work better.

There are a couple of additional steps that I still need to complete, which are:

  • Enabling SSL/TLS for remote access
  • Enabling mDNS broadcasts for Apple HomeKit integration

I’m working on these. Home Assistant Cloud is the easiest way of setting up secure access, and I’m considering it. It’s a paid-for service, but it does financially support HA’s development, and seems to be much easier than the alternatives. As for mDNS, I’m still working on it, and I imagine there’ll be things I need to tweak in both Docker and Snap to get it to work.