Publish ZFS NAS post
commit 4cb472cb18
parent 770324312d
@@ -1 +1,2 @@
* [Backing up my ZFS NAS to an external drive](./zfs-nas-backup.md)
* [The traditional first software engineer blog post](./blog-start.md)
@@ -0,0 +1,60 @@
---
title: Backing up my ZFS NAS to an external drive
pubdate: 2023-09-29T16:23:49-07:00
---
The SSD on my sole Windows machine died. Fortunately, I didn't lose anything important; I *did* lose some things, but I can live without them. Most of my important documents were already stored on a network server: a Raspberry Pi 4 running NixOS and attached to a drive dock with two 4 TB drives configured in a ZFS mirror. This was a great occasion to make a backup: I could use the drive dock to confirm the SSD was dead, and given the circumstances I definitely wanted a recent backup before taking the NAS drives out of the dock.

Since I wanted to get this done sooner rather than later, I skipped waiting for a convenient sale on storage media and just bought a 5 TB Seagate external drive from a local retailer. Seagate doesn't rank especially highly in my estimation of hard drive vendors, but this will be a cold storage backup, so it won't even be powered up for most of its life. If drive failure odds correlate with uptime, this drive should last a while.
## Creating the ZFS backup drive
Creating a zpool on the drive itself is pretty easy. I plugged it into a computer with ZFS tools installed and ran this:
```
sudo zpool create -o ashift=13 -o autoexpand=on -o autotrim=on -O canmount=off -O mountpoint=none -O compression=on -O checksum=sha256 -O xattr=sa -O acltype=posix -O atime=off -O relatime=on coldstorage /dev/sdb
```
This command complained about the exFAT filesystem the drive came with; rerunning with `-f` wiped it clean.
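
Out of caution, it's worth confirming the new pool actually picked up those settings. This check is my addition rather than something from the original workflow:

```
# Pool health and layout
zpool status coldstorage
# Pool-level properties set with -o
zpool get ashift,autoexpand,autotrim coldstorage
# Dataset-level properties set with -O
zfs get compression,checksum,atime coldstorage
```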
## Creating the snapshot to back up
`man zfs-send` seems to prefer sending snapshots, which makes sense: a snapshot is a stable point-in-time image, so you don't have to worry about data changing partway through the send. Let's check what snapshots we have:
```
zfs list -t snapshot
```
The last snapshot was taken on 2023-01-09. Better make a new one.
```
zfs snapshot -r pool@2023-08-30
```
`-r` makes the snapshot recursive, so all the datasets within `pool` are also snapshotted.
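
To double-check that the recursion caught every dataset (a verification step I'm adding here, not one from the original post), list the snapshots under `pool` and look for the new name:

```
# List every snapshot beneath pool, oldest first
zfs list -t snapshot -r -o name,creation -s creation pool
```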
## Sending the snapshot
We're going to pipe the data over SSH, which means the SSH user account needs permission to run the relevant `zfs` commands. Normally I just `sudo su` to do ZFS things, but that doesn't work easily over a non-interactive SSH connection. Fortunately, ZFS has its own permission-delegation system, `zfs allow`, to handle exactly this.

On the sending side, I had to grant myself these permissions:
```
zfs allow -u tvb send,snapshot,hold pool
```
On the receiving side:
```
zfs allow -u tvb compression,mountpoint,create,mount,receive,sharenfs,userprop,atime,recordsize coldstorage
```
There might have been more; basically, any property set on the source pool or its datasets needs to be allowed on the receiving side, since those properties get created or modified on the destination when the stream is received.
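
If you lose track of what you've granted, running `zfs allow` with just a dataset name prints the current delegations, so you can iterate until the receive stops complaining:

```
# Show which permissions are delegated to which users
zfs allow coldstorage
```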
Now we can send the data:
```
ssh nas.lan zfs send -R pool@2023-08-30 | pv | zfs recv -s coldstorage/pool
```
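
The `-s` on `zfs recv` deserves a mention: if the transfer is interrupted, it keeps the partially received state and records a resume token instead of throwing everything away. The resume flow looks roughly like this sketch from the man pages (not something I actually had to do):

```
# Read the resume token left on the partially received dataset
zfs get -H -o value receive_resume_token coldstorage/pool
# Restart the stream from that token, substituting the printed value for <token>
ssh nas.lan zfs send -t <token> | pv | zfs recv -s coldstorage/pool
```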
And that's all it takes. I am not a ZFS expert by any means, but on the whole the experience here was pretty painless. I spent a lot of time reading the man pages, which did a good job explaining what I needed to know.
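
One follow-up step worth adding (my suggestion, not covered above): since this is cold storage, export the pool before unplugging the drive so it can be imported cleanly later:

```
# Cleanly detach the pool before unplugging the drive
sudo zpool export coldstorage
# Later, with the drive plugged back in, bring the backup online again
sudo zpool import coldstorage
```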
@@ -4,4 +4,5 @@ title: Blog
[RSS](./feed.xml)
* [Backing up my ZFS NAS to an external drive](./2023/zfs-nas-backup.md)
* [The traditional first software engineer blog post](./2023/blog-start.md)