
Playing in the ZFS Pool

A lot of drives for ZFS Storage Pools.  (CC BY 2.0 Licensed Image by Billie Ward, Flickr user wwward0)


Storage Pools are the basic mechanism in ZFS for consolidating storage devices and providing data integrity and redundancy. With just a few commands, you can configure a simple storage pool. In this article, we will explore the basic types of ZFS Storage Pools (stripes, mirrors, and RAID-Z), the trade-offs between available storage and redundancy, and the commands used to create each type. Several example commands are included that you can use to create storage pools for experimentation and testing. Using the ZFS Playground we built in a previous article, we can experiment with ZFS Storage Pools quickly and easily.

Learning Objectives

  1. You will understand the basic types of ZFS Storage Pools.
  2. You will understand the tradeoffs associated with each pool type.
  3. You will be able to create basic storage pools in a virtual environment.

You should review the previous article “Building a ZFS Playground” for details on creating a virtual playground to “play” with ZFS. With this playground, you can use the commands in this article to create and destroy ZFS Storage Pools.

ZFS Storage Pools

The ZFS Storage Pool is the primary mechanism for consolidating storage devices into a unified storage service. Once the storage pool is created, portions of the pool can be allocated for use in datasets and volumes. The pool provides the mass storage and the low-level integrity, redundancy, and performance capabilities upon which specific storage services can be created.

A ZFS Storage Pool can consist of whole disks, partitions (or slices in the Solaris world), or files. Whole disks are the easiest to manage under ZFS and are the recommended practice. Partitions are possible but have limitations and add management complexity. The ability to use individual files is intended primarily for testing and experimentation. Since our playground is virtualized and uses whole (albeit virtual) disks, it provides a more than adequate environment for that kind of experimentation. If you need to experiment on a physical server before implementing something in production, then files would be a good way to do that.
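As a sketch of the file-based approach on a FreeBSD system (the file names and pool name here are hypothetical, and the pool name must not collide with an existing pool):

```shell
# Create two 256 MB sparse files to serve as backing stores for vdevs
truncate -s 256m /tmp/zfsfile1 /tmp/zfsfile2

# Build a mirrored test pool from the files
# (file vdevs must be given as absolute paths)
zpool create testpool mirror /tmp/zfsfile1 /tmp/zfsfile2
```

When you are done experimenting, `zpool destroy testpool` releases the pool and the backing files can be deleted.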

Big, Fast, or Safe: Pick Two

ZFS provides several options for creating your Storage Pool. There are trade-offs, of course, among the amount of storage available, the speed of writing and reading, and the redundancy of the data. These trade-offs are very familiar to administrators who have configured RAID storage. You can choose for your Storage Pool to be big, fast, or safe; however, you can only choose two of those objectives. Striped devices, in which data is spread across all of the disks in the pool, are big and reasonably fast, but the failure of one drive leads to a failure of the whole pool. Mirrored devices, in which data is written identically to two or more devices, are fast and safe, but a two-way mirror cuts your usable storage in half. RAID systems, in which parity data is distributed across the storage devices and used to reconstruct failed drives, are reasonably big and safe, but they can be slow on writes and take a long time to rebuild a failed drive.

Seeing issues with the standard RAID-5 level (mostly the RAID-5 write hole), the ZFS designers implemented a variation on RAID-5 to address those weaknesses. The ZFS RAID-Z implementation uses dynamic stripe widths with copy-on-write to prevent the parity synchronization issue. There are three different RAID-Z types, each with a different level of parity. The RAIDZ1 type, which is similar to RAID-5, is a single-parity system that can handle the failure of one device without losing data. The RAIDZ2 type, which is similar to RAID-6, is a double-parity system that can handle two device failures, and the RAIDZ3 type is a triple-parity system that can handle three device failures.

While we can generally only pick two options from the big, fast, and safe storage objectives, ZFS provides some additional capabilities for storage pools to address failed devices and faster reads and writes. We will look at those in a future article.

Creating Pools

When a ZFS Storage Pool is created, several checks occur. First, the devices to be included in the pool are inspected to ensure that they are not currently in use and do not already contain an existing file system. The devices and the specified replication type are also checked to ensure that everything is compatible.

When creating a storage pool, you need a name for the pool. There are some naming conventions to be observed. The name must start with a letter and can contain letters and numbers, an underscore (“_”), a period (“.”), or a dash (“-“). There are also some reserved names that cannot be used. Those are:

  • mirror
  • raidz
  • log
  • spare

Creating, managing, and destroying a ZFS Storage Pool is handled through the zpool command. The FreeBSD zpool(8) man page provides very detailed information on the command. The zpool command uses subcommands, such as create, along with additional parameters, including the pool name, to operate on pools.
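The general shape of a zpool invocation looks like the following sketch (the pool name "playground" and device name are assumptions for illustration, not fixed by the article):

```shell
# General form: zpool <subcommand> [options] [pool] [devices ...]
zpool create playground /dev/da1   # create a pool named "playground" on one device
zpool status playground            # show the health and layout of the pool
zpool list                         # summarize all pools and their capacity
```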

Caveat

If you are following our article series, then we are creating the storage pools in a virtualized environment to play with ZFS and learn the commands. The virtual disks we created for this purpose are available to incorporate into the storage pools. The problem is that all of the virtual disks map to a single physical drive. While ZFS offers great performance and data integrity capabilities when used with multiple physical disks, we will not have those capabilities available in the virtual environment.

A Note on Cleanup

Since we are experimenting by actually creating the various storage pools, we will need to delete the pools we create in each example before creating the next pool using the same disks. Use the following command to delete the pool:
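A minimal sketch, assuming the pool was named "playground":

```shell
# Destroy the pool and free its devices for reuse in the next example
zpool destroy playground
```

Note that zpool destroy removes the pool and all data in it without further confirmation, so double-check the pool name first.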

Simple Stripes

The default configuration when creating a storage pool in ZFS is a stripe. Data striping spreads data across the storage devices. The total storage available is the combination of all of disks in the pool. There is no redundancy, so a device failure will result in a failure of the storage pool.
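A striped pool is created simply by listing the devices with no replication keyword. This sketch assumes the playground's virtual disks appear as /dev/da1 through /dev/da4:

```shell
# A four-device stripe; total capacity is roughly the sum of all four disks,
# but the loss of any one device destroys the pool
zpool create playground /dev/da1 /dev/da2 /dev/da3 /dev/da4
```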

Magic Mirrors

The mirror configuration copies data identically across two or more devices. The total storage available is the size of a single device in the mirror. A mirrored pool with two devices can survive the loss of one device without negatively impacting the integrity of the data. If there are more than two devices in the mirror, the pool can survive the failure of all but one device.

The first example below creates a storage pool consisting of two devices in a mirrored configuration.
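A sketch of that two-device mirror, again assuming the pool name "playground" and devices /dev/da1 and /dev/da2:

```shell
# A two-way mirror; capacity equals one device, and the pool
# survives the failure of either device
zpool create playground mirror /dev/da1 /dev/da2
```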

This example creates a storage pool consisting of two separate mirrors with data striped across both of the mirrors, which is equivalent to RAID 1+0.
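Repeating the mirror keyword starts a new mirror vdev; ZFS then stripes data across the vdevs. A sketch with four assumed devices:

```shell
# Two two-way mirrors striped together (a RAID 1+0 equivalent):
# capacity of two devices, and each mirror can lose one member
zpool create playground mirror /dev/da1 /dev/da2 mirror /dev/da3 /dev/da4
```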

Resisting RAID-Z

RAID-Z provides similar parity-based protections to RAID-5 and RAID-6 with more complete data integrity protections and better performance. The different RAID-Z levels provide protection against single, double, or triple device failures.

There are recommendations on the appropriate number of disks to use for each RAID-Z level. Delphix recommends at least five disks for RAIDZ1, six disks for RAIDZ2, and eleven disks for RAIDZ3.

In each example below, the raidz keyword and the number that follows it determine the RAID-Z parity level. The raidz and raidz1 keywords are equivalent.
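The following sketch shows all three levels on the same five assumed devices (destroy the previous pool before creating the next, as noted above):

```shell
# Single parity: survives one device failure
zpool create playground raidz1 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5

# Double parity: survives two device failures
zpool destroy playground
zpool create playground raidz2 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5

# Triple parity: survives three device failures
zpool destroy playground
zpool create playground raidz3 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5
```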

Notice that the available storage space for each pool decreases with each increase of the RAID-Z level. This is consistent with the idea that the higher levels provide additional parity storage to handle additional device failures.
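One way to see this difference is to check the pool's capacity after creating each configuration:

```shell
# Compare usable capacity and allocation across configurations
zpool list -o name,size,alloc,free,health
```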

Next Time

In the next article, we will explore configuring additional features for ZFS Storage Pools, such as compression and deduplication. We will also look into configuring additional devices for redundancy (spare) and read (cache) and write (log) performance.
