Expanding ZFS Root Partition on a Live System
The Problem
My server, which I call cprox, was set up with only 64GB of storage on the root partition. I don’t know what I was thinking at the time, but it caused problems just a week after I set the new server up.
In that week, the OS, packages, and whatever else Proxmox does consumed 86% of the drive. Things ran slowly and the command line would sometimes freeze, which was strange given that the OS sits on mirrored SSDs. That mirror, as it turns out, is exactly what let me do this resize live.
For context, I was using a pair of spare Crucial 1TB SSDs I had lying around. I set them up as a ZFS mirror with 64GB for root and left the rest of the space empty. I don’t know why I chose that amount, but it seemed fine at the time.
These mirrored drives hold rpool, my root pool, which I don’t use for VMs, containers, or any other workloads. But as mentioned above, Proxmox uses it for various things, and the OS writes logs and the like to /var.
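If you want to check this on your own system, zpool list reports how full the pool is; the CAP column is where my 86% showed up:
sudo zpool list rpool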
After finding out I was running out of space, I started looking online for how to expand the root partition without rebooting the server.
Finding the Solution
At first I didn’t find any promising results, but one person in this Reddit thread guided me in the right direction.
u/gargravarr2112 had this to say:
> I've done this on a production server (two actually) and it worked perfectly:
>
> 1. Identify one disk at a time by serial number
> 2. **zpool offline zpool0 <disk ID>**
> 3. Remove the drive
> 4. Replace with the larger drive
> 5. **zpool replace zpool0 <old disk ID> <new disk ID>**
> 6. Wait for resilver to complete
> 7. Repeat from 1. until all disks are replaced
> 8. **zpool online -e zpool0 <new disk ID>** for each drive
> 9. The pool automatically expands
>
> I did this on production servers with 480GB SATA SSDs; they took about 25 minutes per drive to resilver. YMMV with HDDs. The pool remained usable throughout, though we took the server out of production temporarily to devote all the I/O.
Since I wasn’t swapping in bigger disks but growing partitions in place on the same mirrored drives, I was able to adapt this process with some changes.
The Process
First, identify your drives:
sudo zpool status rpool
...
        rpool
          mirror-0
            ata-CT1000BX500SSD1_241xxx-part3
            ata-CT1000BX500SSD1_240xxx-part3
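The by-id names already include the serial number, which is how the Reddit steps say to identify each disk. If you want to match them to kernel device names, something like this works (the glob is just based on my drive model):
ls -l /dev/disk/by-id/ata-CT1000BX500SSD1_*   # shows which /dev/sdX each ID points to
sudo lsblk -o NAME,SERIAL,SIZE,TYPE           # serial numbers per kernel device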
Next, check if the pool is set to autoexpand. If not, enable it:
sudo zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     local
sudo zpool set autoexpand=on rpool
sudo zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
With autoexpand on, the pool automatically grows to use the full size of its underlying partitions once every device has been expanded. In my case, that means 100% of the available space.
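As a sanity check, ZFS can also report how much untapped space a pool could grow into via the expandsize property (it reads as a dash until a device actually has room to grow):
sudo zpool list -o name,size,expandsize rpool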
Take the first drive offline:
sudo zpool offline rpool ata-CT1000BX500SSD1_241xxx-part3
Running zpool status rpool again will show the pool as DEGRADED, with that disk marked OFFLINE.
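Trimmed down, the status output looks roughly like this while one half of the mirror is offline (illustrative, not a verbatim capture):
  pool: rpool
 state: DEGRADED
config:
        NAME                                  STATE     READ WRITE CKSUM
        rpool                                 DEGRADED     0     0     0
          mirror-0                            DEGRADED     0     0     0
            ata-CT1000BX500SSD1_241xxx-part3  OFFLINE      0     0     0
            ata-CT1000BX500SSD1_240xxx-part3  ONLINE       0     0     0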
Expand the partition using fdisk:
sudo fdisk /dev/disk/by-id/ata-CT1000BX500SSD1_241xxx
Command (m for help): p # Print the partition table
Command (m for help): d # Delete a partition
Partition number: 3 # Delete partition 3 (the ZFS data partition)
Command (m for help): n # Create a new partition
Partition number: 3
First sector: <enter> # Use default (start right after partition 2)
Last sector: <enter> # Use default (end of disk)
Do you want to remove the signature? [Y]es/[N]o: N # Keep the zfs_member signature!
Command (m for help): t # Change the partition type
Partition number: 3
Hex code: bf # Solaris root (ZFS); on GPT disks fdisk lists types by number instead, so pick "Solaris root" from the L list
Command (m for help): w # Write changes to disk
When fdisk re-creates partition 3 it will notice the existing filesystem and ask about the signature, as shown above; make sure you answer No so the zfs_member signature on partition 3 isn’t erased.
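Before touching the pool again, it’s worth confirming the kernel sees the resized partition; lsblk on the same by-id path will list the partitions with their new sizes:
sudo lsblk -o NAME,SIZE,TYPE /dev/disk/by-id/ata-CT1000BX500SSD1_241xxx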
Clear the partition header:
sudo dd bs=512 if=/dev/zero of=/dev/disk/by-id/ata-CT1000BX500SSD1_241xxx-part3 count=2048 # zero the first 1 MiB (2048 × 512-byte sectors) of the partition
Replace the partition in the pool:
Since we’re using the same drive (just with a larger partition), ZFS will complain that the disk is still in use, so add the -f force flag:
sudo zpool replace -f rpool ata-CT1000BX500SSD1_241xxx-part3 /dev/disk/by-id/ata-CT1000BX500SSD1_241xxx-part3
Monitor the resilver process:
sudo watch zpool status rpool
This shows the current state of the pool and the resilver’s progress as a percentage. On these SSDs it took only about 5 minutes.
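If you’d rather skip watch, zpool status can refresh on its own; the trailing number is an interval in seconds:
sudo zpool status rpool 5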
Repeat for the second drive:
Once zpool status rpool shows the pool is no longer DEGRADED, perform the same steps for the other drive.
The Result
As soon as the second drive finished resilvering, the pool automatically expanded:
sudo zfs list rpool
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  43.2G   858G   208K  /rpool
No reboot needed! From 64GB to 858GB of available space.
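For a second confirmation from the pool side, zpool list should now report a SIZE close to the full disk rather than the old 64GB:
sudo zpool list rpool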

Final Thoughts
This process might seem intimidating, but with ZFS mirroring it’s actually quite safe. The pool stays online and usable throughout the entire process. Even if something goes wrong during the expansion of one drive, your mirror keeps everything running.
The key points to remember:
- Enable autoexpand on the pool first
- Take drives offline one at a time
- Use the force flag when replacing with the same disk ID
- Monitor the resilver process before moving to the next drive
Would I recommend trying this yourself? If you have mirrored drives and a good backup, absolutely. Just take your time and follow the steps carefully.