Ivan Tomica

Moving /home to another ZFS dataset

When I set up ZFS on my main rig I was doing it as fast as possible and didn’t pay much attention to the details. Recently, I wanted to customize my setup a bit, so here’s how I migrated my /home to another ZFS dataset.

The setup was as follows:

  • There is a ZFS pool called storage composed of two mirrored vdevs.
  • One vdev is 2x2 TB drives, while the other is a 2 TB drive and a 1.5 TB drive, giving roughly 3.5 TB of usable space (each mirror contributes the capacity of its smaller drive).
  • The pool’s root dataset had mountpoint=/home and was the only dataset in the pool.

My plan, and the exact commands to accomplish it, was as follows:

  • Set the ZFS pool mountpoint to /, so that when a child dataset is created it is mounted on a sub-directory of /:
    zfs set canmount=off storage
    zfs set mountpoint=/ storage
  • Create the new storage/home dataset and allow it to be mounted. Set the compression algorithm and turn off atime to prevent writes on every file access:
    zfs create storage/home
    zfs set canmount=on storage/home
    zfs set mountpoint=/home storage/home
    zfs set compression=lz4 storage/home
    zfs set atime=off storage/home
  • Create a dataset for my user; it inherits its options from storage/home, so there is nothing special to set:
    zfs create storage/home/ivan

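At this point the new layout can be verified. A quick check could look like the following (a sketch, assuming the pool and dataset names above):

```shell
# List the datasets recursively, showing the properties set in the steps above.
# storage/home/ivan should show compression=lz4 and atime=off inherited
# from storage/home, and storage itself canmount=off.
zfs list -r -o name,mountpoint,canmount,compression,atime storage
```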
Fine, but since storage was already used for storing data, everything lived on that root dataset, so naturally I had to copy all of it over to the new dataset, which I did with rsync:

  • Create a directory for the temporary mount:
    mkdir /oldhome
  • Change the mountpoint of the storage dataset:
    zfs set canmount=noauto storage
    zfs set mountpoint=/oldhome storage
  • Mount the dataset:
    zfs mount storage
  • The other datasets were already mounted in their appropriate places, so I just copied the data over:
    rsync -avh /oldhome/ivan/ /home/ivan

Since I wanted to watch the progress, I ran the command in one tmux pane with verbose mode enabled, as you can see above. In another pane I had df -h and zpool iostat -v running to monitor progress and read/write operations on the pool and its vdevs.
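The monitoring setup can be sketched as the following pair of commands, one per tmux pane (the 5-second refresh interval is my assumption, not from the original setup):

```shell
# Pane 1: refresh free/used space of the target filesystem every 5 seconds
watch -n 5 df -h /home

# Pane 2: per-vdev read/write statistics, sampled every 5 seconds
zpool iostat -v storage 5
```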

Why did I copy all of the data manually instead of cloning the dataset?
– I wanted to apply the new settings. compression=lz4 saved me about 27 GB of disk space on my almost 900 GB of data.
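One way to check how much compression is actually saving is to compare the logical and physical sizes of the dataset (a sketch; the dataset name assumes the layout above):

```shell
# compressratio is the ratio of logicalused (uncompressed size)
# to used (on-disk size) for the dataset
zfs get compressratio,used,logicalused storage/home/ivan
```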

Finally, I changed the storage dataset options back to:

zfs set canmount=off storage
zfs set mountpoint=/ storage

Would you do something differently? Feel free to post suggestions.
