How to Set Up LXD Containers on Ubuntu 24.04 with ZFS Storage

Cover photo by Eberhard Grossgasteiger

Step-by-step guide to setting up LXD containers on Ubuntu 24.04 with ZFS storage, covering installation, ZFS pool creation for VPS and local machines, container launching, image publishing, and SSH access configuration.

Introduction

As servers evolve, they increasingly serve as development environments — especially for Linux-based tools. Installing languages like Coq on Windows can be difficult, and quick prototyping often leaves behind messy dependencies that are hard to fully clean up. Tools like Vim with LSP support also require multiple steps to set up and tear down. LXD provides lightweight, OS-level containers that support isolated, multi-user environments. It allows users to share common files (such as Vim configurations) while providing a fully isolated environment for each container. Like Docker, you can publish your image and update it further as needed.

Another use case could be in a lab environment where hardware resources like GPUs are limited. You may want each group member to utilize this resource without interfering with others’ work. In this case, you can also use LXD to create a container for each user, sharing the same hardware resources. Additionally, you can mount common files, such as datasets, to reduce memory usage.

How to Install LXD on Ubuntu with Snap

Nothing fancy here: just copy the command below and paste it into your terminal:

bash
      snap install lxd
    

Ubuntu ships with snap preinstalled, but if you’re on another distribution you should install snap through your package manager first.

You should see a message confirming that lxd was installed successfully, like:

bash
      lxd (5.21/stable) 5.21.3-c5ae129 from Canonical installed
    

If you’re not using the root account and don’t want to type sudo every time for lxd or lxc, add your current user to the lxd group:

bash
      sudo usermod -aG lxd $USER
    

To apply the change, either log out and back in, or run:

bash
      newgrp lxd
    
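To confirm the group change took effect in your current shell, you can check whether lxd appears among your active groups (a quick sanity check; the echoed messages are my own wording):

```shell
# Print whether the current shell session already has the lxd group active
if id -nG | tr ' ' '\n' | grep -qx lxd; then
    echo "lxd group active"
else
    echo "not yet active: log out and back in, or run 'newgrp lxd'"
fi
```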

🔧 Avoid Running lxd or lxc as Root

Another tip: don’t run lxd or lxc from a root shell. Once your user is in the lxd group, sudo is no longer needed for these commands, and running the snap-packaged tools as root can itself cause problems.

If you try to run them directly as root, you might see this error:

console
      permanently dropping privs did not work: File exists
    

Setting Up a ZFS Storage Pool for LXD (VPS & Local)

In most cases, when you’re using a VPS as a server, disk space is limited, so I want to manage it cleanly and efficiently. I chose ZFS as my storage backend because it stores only incremental changes when you copy a container, which reduces redundancy and saves space. It also lets you define a pool manually, making it easier to back up the data and to control where it resides.

Install it with the following command:

bash
      apt install zfsutils-linux -y
    

If there’s no error, you’re good to go. To double-check, you can run:

bash
      zfs --version
    

🖥️ Creating a ZFS Pool on a Local Machine

⏩ Using a VPS? You can jump to the VPS section here.

If you’re using a personal computer in a lab, you can directly specify which disk you want to use:

bash
      zpool create pool /dev/sda
    

This command tells zpool to create a storage pool named pool on the disk located at /dev/sda.

If you get an error like:

bash
      cannot resolve path '/dev/sda'
    

This means the disk /dev/sda doesn’t exist on your machine. Check your actual disk name first using:

bash
      df /
    

You should see output like this:

bash
      Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda        62328864 10646208  48947556  18% /
    

The value under Filesystem is the device backing your root filesystem. I ran this command on my VPS, so your output may vary. Note that df only lists mounted filesystems; to see every disk, including unmounted spares, use lsblk.
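If you just want the device name by itself, you can pull the first column of the second line with awk (a small convenience, not required for the steps below):

```shell
# Grab the device backing /: second line of `df /`, first field
ROOTDEV=$(df / | awk 'NR==2 {print $1}')
echo "$ROOTDEV"
```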

Once you confirm the correct disk name, you can create the pool. Be careful: zpool create destroys any existing data on the disk, so point it at a spare disk, never at the one holding your root filesystem:

bash
      zpool create pool /dev/sda
    

👉 If you’re ready, skip ahead to the ZFS dataset creation section.

☁️ Creating a ZFS Pool on a VPS with a Virtual Disk

If you’re using a VPS like me, you usually don’t have access to a whole disk, so you’ll need to create a virtual disk file first.

First, check your available disk space:

bash
      df -h /
    

Sample output:

bash
      Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2        60G   11G   47G  18% /
    

Now, create a directory to store your virtual disk file:

bash
      mkdir -p /var/zfs
    

This command creates the /var/zfs directory. The -p option ensures that all necessary parent directories are created.

You can verify it with:

bash
      ls /var/zfs
    

If the directory doesn’t exist, you’ll see:

bash
      ls: cannot access '/var/zfs': No such file or directory
    

Since I have 47G available, I’ll allocate 24G for this virtual disk. You can choose a different size depending on your available space:

bash
      fallocate -l 24G /var/zfs/zfs.img
    

This creates a 24GB virtual disk file at /var/zfs/zfs.img. The filename is arbitrary.
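fallocate reserves the space without writing zeros, so it returns almost instantly. You can sanity-check the allocated size with stat; here is a throwaway 1 MiB demo (using /tmp so nothing from the real setup is touched):

```shell
# Allocate a small scratch file and confirm its apparent size in bytes
fallocate -l 1M /tmp/zfs-demo.img
stat -c %s /tmp/zfs-demo.img   # 1M = 1048576 bytes
rm /tmp/zfs-demo.img
```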

Next, map this image to a loop device:

bash
      losetup -fP /var/zfs/zfs.img
    

This attaches the image file to a free loop device.

Find the loop device name:

bash
      LOOPDEV=$(losetup -a | grep zfs.img | cut -d: -f1)
    

To verify:

bash
      echo ${LOOPDEV}
    

For example, this might output /dev/loop3.
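The cut -d: -f1 step simply keeps everything before the first colon of each matched line. You can see the effect on a sample line in the format losetup -a prints (the path and device below are illustrative):

```shell
# A sample `losetup -a` line; cut keeps the part before the first ':'
sample='/dev/loop3: []: (/var/zfs/zfs.img)'
echo "$sample" | cut -d: -f1   # prints /dev/loop3
```

(`losetup -j /var/zfs/zfs.img` lists only the entries for that file, if you prefer to avoid the grep.)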

Finally, create the ZFS pool:

bash
      zpool create pool $LOOPDEV
    

This creates a new ZFS storage pool named pool using the virtual disk.

Creating a ZFS Dataset for LXD

Once your ZFS pool is ready, you need to create a dataset inside it:

bash
      zfs create pool/lxd
    

This simple command tells ZFS to create a dataset named lxd under the existing pool.

To enable deduplication for this dataset, set the dedup property to on:

bash
      zfs set dedup=on pool/lxd
    

You can verify that the dataset was created and inspect its properties using:

bash
      zfs list
    

Example output:

bash
      NAME       USED  AVAIL  REFER  MOUNTPOINT
pool       142K  22.8G    24K  /pool
pool/lxd    24K  22.8G    24K  /pool/lxd
    

This output confirms the dataset has been created successfully.

It may look like a lot of steps, but I included detailed verifications and potential error handling to ensure everything works smoothly.

Configuring LXD with lxd init

Now we’re finally ready to initialize LXD. Just run:

bash
      lxd init
    

This will start an interactive setup where you’ll be prompted to answer several configuration questions.

bash
      Would you like to use LXD clustering? (yes/no) [default=no]:
    

Only say “yes” if you have multiple servers or devices and want them to act as a cluster.

bash
      Do you want to configure a new storage pool? (yes/no) [default=yes]:
    

Although we’ve already allocated storage, this step lets us link it with the LXD system.

bash
      Name of the new storage pool [default=default]:
    

This name doesn’t need to match your actual ZFS pool name.

bash
      Name of the storage backend to use (powerflex, zfs, btrfs, ceph, dir, lvm) [default=zfs]:
    

Since we’re using ZFS, make sure it’s selected. If ZFS isn’t installed, this option may not appear.

bash
      Create a new ZFS pool? (yes/no) [default=yes]: no
    

No, thank you. We already created a ZFS pool manually.

bash
      Name of the existing ZFS pool or dataset: pool/lxd
    

Use the exact name of the dataset you created earlier.

bash
      Would you like to connect to a MAAS server? (yes/no) [default=no]:
    

Say no unless you know what MAAS is and intend to use it.

bash
      Would you like to create a new local network bridge? (yes/no) [default=yes]:
    
  • yes — for most VPS environments where nothing is preconfigured.
  • In lab environments, check with your IT or professor about network policies.
bash
      What should the new bridge be called? [default=lxdbr0]:
    

lxdbr0 is fine unless you need something specific.

bash
      What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
    

Press Enter for all — the default values work well for most users.
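If you ever need to repeat this setup, the same answers can be supplied non-interactively as a preseed. Below is a sketch of what a preseed matching the choices above might look like (my reconstruction, not verified on your system; adjust names to match yours). You would feed it with `cat preseed.yaml | lxd init --preseed`:

```yaml
# preseed.yaml: answers matching the interactive run above.
# Assumes the existing ZFS dataset pool/lxd created earlier.
storage_pools:
- name: default
  driver: zfs
  config:
    source: pool/lxd
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```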

Launching Your First LXD Container

Now we can finally launch our first LXD container:

bash
      lxc launch ubuntu:24.04 temp
    

This command launches a container named temp using the ubuntu:24.04 image.

After launching, you can check the list of containers:

bash
      lxc list
    

Example output:

bash
      +------+---------+------+----------------------------------------------+-----------+-----------+
| NAME |  STATE  | IPV4 |                     IPV6                     |   TYPE    | SNAPSHOTS |
+------+---------+------+----------------------------------------------+-----------+-----------+
| temp | RUNNING |      | fd42:c80b:480:4c53:216:3eff:fecd:7a41 (eth0) | CONTAINER | 0         |
+------+---------+------+----------------------------------------------+-----------+-----------+
    

⚠️ No IPv4 Address? Here’s How to Fix It

You might notice that the container doesn’t have an IPv4 address. This means you won’t be able to access it via SSH or expose services externally.

👉 If your container does have an address and you’re ready to publish it, skip ahead to Publishing your container image.

If you’re on a VPS and the address is missing, it’s likely due to firewall rules. By default, many VPS providers block all ports.

To allow your container to obtain an IP and access the outside world, you need to configure your firewall properly. See the Official LXD Guide for more details.

If you’re using UFW, you can use the following commands (from the official docs):

bash
      # allow the guest to get an IP from the LXD host
sudo ufw allow in on lxdbr0 to any port 67 proto udp
sudo ufw allow in on lxdbr0 to any port 547 proto udp

# allow the guest to resolve host names from the LXD host
sudo ufw allow in on lxdbr0 to any port 53

# allow the guest to have access to outbound connections
CIDR4="$(lxc network get lxdbr0 ipv4.address | sed 's|\.[0-9]\+/|.0/|')"
CIDR6="$(lxc network get lxdbr0 ipv6.address | sed 's|:[0-9]\+/|:/|')"

sudo ufw route allow in on lxdbr0 from "${CIDR4}"
sudo ufw route allow in on lxdbr0 from "${CIDR6}"
    
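The sed commands above build the bridge’s subnet by zeroing out the host part of its address. For example, for the IPv4 case:

```shell
# The IPv4 sed from above: replace the final host octet with 0
echo '10.249.126.1/24' | sed 's|\.[0-9]\+/|.0/|'   # prints 10.249.126.0/24
```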

After configuring the firewall, restart the container:

bash
      lxc restart temp
    

Then run:

bash
      lxc list
    

And you should now see an IPv4 address:

bash
      +------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4          |                     IPV6                     |   TYPE    | SNAPSHOTS |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
| temp | RUNNING | 10.249.126.223 (eth0) | fd42:c80b:480:4c53:216:3eff:fecd:7a41 (eth0) | CONTAINER | 0         |
+------+---------+-----------------------+----------------------------------------------+-----------+-----------+
    

You can enter the container using:

bash
      lxc exec temp bash
    

Sample prompt:

bash
      root@temp:~#
    

Notice the prompt has changed to reflect the container’s hostname. You’re now inside the container and can use it like a regular Linux terminal. To exit, just type:

bash
      exit
    

How to Publish and Reuse LXD Container Images

If your IPv4 address is working, you should be able to use the terminal to download packages as usual. Once you’re happy with your container setup, you can publish it as a local image to reuse later.

First, stop the container:

bash
      lxc stop temp
    

Next, publish the container using the publish command:

bash
      lxc publish temp --alias template --public
    

You can list available images with:

bash
      lxc image list
    

The alias you set (e.g., template) will appear in the output:

bash
      +----------+--------------+--------+---------------------------------------------+--------------+-----------+-----------+------------------------------+
|  ALIAS   | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 | ARCHITECTURE |   TYPE    |   SIZE    |         UPLOAD DATE          |
+----------+--------------+--------+---------------------------------------------+--------------+-----------+-----------+------------------------------+
| template | 5c72fbce13bc | yes    | Ubuntu 24.04 LTS server (20250610)          | x86_64       | CONTAINER | 430.77MiB | Jun 18, 2025 at 8:21pm (UTC) |
+----------+--------------+--------+---------------------------------------------+--------------+-----------+-----------+------------------------------+
|          | 9c73fb6ca4c2 | no     | ubuntu 24.04 LTS amd64 (release) (20250610) | x86_64       | CONTAINER | 258.29MiB | Jun 18, 2025 at 7:52pm (UTC) |
+----------+--------------+--------+---------------------------------------------+--------------+-----------+-----------+------------------------------+
    

The unnamed image above is the base image you used to create the temp container — most likely ubuntu:24.04, if you ran:

bash
      lxc launch ubuntu:24.04 temp
    

🔍 How ZFS Incremental Storage Works with LXD

This section is mainly for exploring how ZFS stores and organizes your container data.
If you’re not particularly interested in the details of ZFS and just want to connect to your container via SSH, you can skip ahead to that section here.

And if you don’t need SSH access either, you’ve reached the end of this guide — thanks for following along!

You can inspect your ZFS storage by running:

bash
      zfs list
    

Example output:

bash
      NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
pool                                                                               691M  22.1G    24K  /pool
pool/lxd                                                                           678M  22.1G    24K  legacy
pool/lxd/buckets                                                                    24K  22.1G    24K  legacy
pool/lxd/containers                                                                190M  22.1G    24K  legacy
pool/lxd/containers/temp                                                           190M  22.1G   667M  legacy
pool/lxd/custom                                                                     24K  22.1G    24K  legacy
pool/lxd/deleted                                                                   144K  22.1G    24K  legacy
pool/lxd/deleted/buckets                                                            24K  22.1G    24K  legacy
pool/lxd/deleted/containers                                                         24K  22.1G    24K  legacy
pool/lxd/deleted/custom                                                             24K  22.1G    24K  legacy
pool/lxd/deleted/images                                                             24K  22.1G    24K  legacy
pool/lxd/deleted/virtual-machines                                                   24K  22.1G    24K  legacy
pool/lxd/images                                                                    487M  22.1G    24K  legacy
pool/lxd/images/9c73fb6ca4c2ae7dd357696a2e16ff8ac2f140090deab77b95a24add2386a55a   487M  22.1G   487M  legacy
pool/lxd/virtual-machines                                                           24K  22.1G    24K  legacy
    

You can identify the temp container and the original image (9c73fb6ca4c2) here.

After you’ve published your container, it’s safe to remove the temp container to free up space:

bash
      lxc rm temp
    

The ZFS layout will now look like this, with everything else unchanged, except that temp is gone:

bash
      NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
pool                                                                               499M  22.3G    24K  /pool
pool/lxd                                                                           487M  22.3G    24K  legacy
pool/lxd/buckets                                                                    24K  22.3G    24K  legacy
pool/lxd/containers                                                                 24K  22.3G    24K  legacy
pool/lxd/custom                                                                     24K  22.3G    24K  legacy
...
pool/lxd/images                                                                    487M  22.3G    24K  legacy
pool/lxd/images/9c73fb6ca4c2ae7dd357696a2e16ff8ac2f140090deab77b95a24add2386a55a   487M  22.3G   487M  legacy
pool/lxd/virtual-machines                                                           24K  22.3G    24K  legacy
    

Honestly, it’s a bit magical — it’s not immediately obvious where the published image is stored.

If you launch a new container named cpp using the template image you published:

bash
      lxc launch template cpp
    

You’ll finally see your published image (5c72fbce13bc, aliased as template) occupying its own space — about 667M. That makes sense: your compressed template image was 430.77MiB, and the original image was 258.29MiB, leading to a delta of roughly 172.48MiB.

Interestingly, that delta aligns closely with the previous temp container’s size (190M), so 172.48MiB is a reasonable result.

Meanwhile, the new cpp container itself only takes up about 3.19M:

bash
      NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
pool                                                                              1.15G  22.1G    24K  /pool
pool/lxd                                                                          1.13G  22.1G    24K  legacy
pool/lxd/buckets                                                                    24K  22.1G    24K  legacy
pool/lxd/containers                                                               3.21M  22.1G    24K  legacy
pool/lxd/containers/cpp                                                           3.19M  22.1G   668M  legacy
pool/lxd/custom                                                                     24K  22.1G    24K  legacy
...
pool/lxd/images                                                                   1.13G  22.1G    24K  legacy
pool/lxd/images/5c72fbce13bcbdfa41285d8b3af408a38f824c38c00b6694c10a4cdf814dae46   667M  22.1G   667M  legacy
pool/lxd/images/9c73fb6ca4c2ae7dd357696a2e16ff8ac2f140090deab77b95a24add2386a55a   487M  22.1G   487M  legacy
pool/lxd/virtual-machines                                                           24K  22.1G    24K  legacy
    

After launching a new container another-cpp from the template image:

bash
      lxc launch template another-cpp
    

You can observe the updated ZFS usage:

bash
      NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
pool                                                                              1.15G  22.1G    24K  /pool
pool/lxd                                                                          1.13G  22.1G    24K  legacy
pool/lxd/buckets                                                                    24K  22.1G    24K  legacy
pool/lxd/containers                                                               5.11M  22.1G    24K  legacy
pool/lxd/containers/another-cpp                                                   1.88M  22.1G   667M  legacy
pool/lxd/containers/cpp                                                           3.21M  22.1G   668M  legacy
pool/lxd/custom                                                                     24K  22.1G    24K  legacy
...
pool/lxd/images                                                                   1.13G  22.1G    24K  legacy
pool/lxd/images/5c72fbce13bcbdfa41285d8b3af408a38f824c38c00b6694c10a4cdf814dae46   667M  22.1G   667M  legacy
pool/lxd/images/9c73fb6ca4c2ae7dd357696a2e16ff8ac2f140090deab77b95a24add2386a55a   487M  22.1G   487M  legacy
pool/lxd/virtual-machines                                                           24K  22.1G    24K  legacy
    

At this point, both cpp and another-cpp containers are running and consume very little additional space.

Out of curiosity, I installed neovim inside the another-cpp container. Let’s check how that changed things:

bash
      NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
pool                                                                              1.30G  22.0G    24K  /pool
pool/lxd                                                                          1.28G  22.0G    24K  legacy
pool/lxd/buckets                                                                    24K  22.0G    24K  legacy
pool/lxd/containers                                                                160M  22.0G    24K  legacy
pool/lxd/containers/another-cpp                                                    157M  22.0G   791M  legacy
pool/lxd/containers/cpp                                                           3.21M  22.0G   668M  legacy
...
pool/lxd/images                                                                   1.13G  22.0G    24K  legacy
pool/lxd/images/5c72fbce13bcbdfa41285d8b3af408a38f824c38c00b6694c10a4cdf814dae46   667M  22.0G   667M  legacy
pool/lxd/images/9c73fb6ca4c2ae7dd357696a2e16ff8ac2f140090deab77b95a24add2386a55a   487M  22.0G   487M  legacy
pool/lxd/virtual-machines                                                           24K  22.0G    24K  legacy
    

Now another-cpp uses around 157MB, reflecting the space taken by neovim and any additional dependencies it pulled in.

💡 Why ZFS Incremental Storage Matters

ZFS incremental storage truly shines in this setup:

  • Containers launched from a common image share storage efficiently.
  • Only changes are written, keeping disk usage minimal.
  • Even if a container diverges heavily, space savings are significant compared to full duplication.

So, whether your container stays pristine or gets customized, you’re still benefiting from ZFS.

SSH into an LXD Container: Port Forwarding & Firewall Setup (Optional)

This section is optional, but useful if you want to SSH into your container from a remote machine.

To expose your container’s SSH port to the outside world, use the following command:

bash
      lxc config device add cpp sshproxy proxy listen=tcp:0.0.0.0:3000 connect=tcp:127.0.0.1:22
    

Yes, it looks long — but the only parts you need to modify are:

  • Replace cpp with your container name.
  • Replace 3000 with any port number you want to expose externally.

If successful, you’ll see:

console
      Device sshproxy added to cpp
    

🔥 Don’t Forget the Firewall — Opening Ports with UFW

If you’re new to this, don’t forget to open the port in your firewall so it’s accessible from the outside.

For example, using ufw:

bash
      ufw allow 3000
    

This command allows traffic on port 3000. The output will look like:

console
      Rule added
Rule added (v6)
    

You can confirm the rule was added using:

bash
      ufw status
    

Sample output:

bash
      Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
67/udp on lxdbr0           ALLOW       Anywhere
547/udp on lxdbr0          ALLOW       Anywhere
53 on lxdbr0               ALLOW       Anywhere
3000                       ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
67/udp (v6) on lxdbr0      ALLOW       Anywhere (v6)
547/udp (v6) on lxdbr0     ALLOW       Anywhere (v6)
53 (v6) on lxdbr0          ALLOW       Anywhere (v6)
3000 (v6)                  ALLOW       Anywhere (v6) 

Anywhere                   ALLOW FWD   10.219.247.0/24 on lxdbr0
Anywhere (v6)              ALLOW FWD   fd42:2bf9:abb2:6256::/64 on lxdbr0
    

🧑‍💻 Finally, SSH into Your LXD Container

Note that you need to use your server’s IP address. The username is the one shown before the @ in the terminal prompt—for example, in root@cpp:~#, the username is root, not cpp. The port is the one you specified earlier.

Here’s an example command:

bash
      ssh root@155.xxx.xxx.xxx -p 3000
    

If everything works, it will ask for a password:

bash
      root@155.xxx.xxx.xxx's password:
    

If you haven’t set a password for the user yet, go back to the container and run:

bash
      passwd
    

Example:

console
      root@cpp:~# passwd
New password:
Retype new password:
passwd: password updated successfully
    

💡 Note: When typing the password, it won’t show anything — this is normal in Linux for password input.

🚫 Got “Permission denied (publickey)”? Don’t Panic

If you see this error:

console
      root@155.xxx.xxx.xxx: Permission denied (publickey).
    

It usually means your SSH server is configured to disallow password authentication. To fix this, edit the SSH config inside the container:

bash
      vim /etc/ssh/sshd_config
    

Find and update the following lines:

bash
      PasswordAuthentication yes
    

If you’re logging in as root, also make sure:

bash
      PermitRootLogin yes
    

After saving the file, restart the SSH service:

bash
      systemctl restart ssh
    

Now you should be able to connect via SSH using the password.

🛠️ Still Denied? Fix the sshd_config Override

If you’re still getting denied access, you’re not alone — I ran into this myself.

Inside your /etc/ssh/sshd_config file, you might find a line like this:

console
      Include /etc/ssh/sshd_config.d/*.conf
    

This means that any .conf files in /etc/ssh/sshd_config.d/ can override the settings in the main config.

To investigate, run:

bash
      ls -l /etc/ssh/sshd_config.d/
    

Example output:

console
      total 1
-rw-r--r-- 1 root root 26 Jun 10 12:54 60-cloudimg-settings.conf
    

In this case, there’s only one override file. Check what’s inside:

bash
      cat /etc/ssh/sshd_config.d/*.conf
    

If you see something like:

bash
      PasswordAuthentication no
    

That’s the culprit. It overrides what you set in the main config and disables password logins.

Open the file shown in your directory listing:

bash
      vim /etc/ssh/sshd_config.d/60-cloudimg-settings.conf
    

You have two options: you can either comment out the line by adding a # in front of it, like # PasswordAuthentication no, or explicitly allow password authentication by setting it to yes, as in PasswordAuthentication yes.
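Note that sshd keeps the first value it sees for each option, so included files that sort earlier in the glob win. A third option is therefore to drop in your own file with a low number instead of editing the vendor one (the filename below is my choice):

```
# /etc/ssh/sshd_config.d/10-enable-passwords.conf
PasswordAuthentication yes
PermitRootLogin yes
```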

Then restart the SSH service:

bash
      systemctl restart ssh
    

To confirm your changes:

bash
      sshd -T | grep passwordauthentication
    

You should now see:

bash
      passwordauthentication yes
    

✅ Let’s Try That SSH Connection Again

Now, try logging in once more:

bash
      ssh root@155.xxx.xxx.xxx -p 3000
    

If it works — welcome in!
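Once the connection works, you can save the details in ~/.ssh/config on your local machine so a short alias is enough (the alias cpp is my choice; substitute your real server IP and port):

```
Host cpp
    HostName 155.xxx.xxx.xxx
    Port 3000
    User root
```

After that, `ssh cpp` does the same thing as the full command above.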

🎉 Wrap-Up: Hello World!

To celebrate, let’s compile a classic:

bash
      root@cpp:~# vim main.cpp
 [New] 6L, 71B written
root@cpp:~# g++ main.cpp
root@cpp:~# ./a.out
Hello World!
    

Enjoy your development environment!

© 2026 Rui Jiang
