ZFS Tutorial

This explains how to set up OpenZFS on an Ubuntu system.  My system has an iQstor 15-drive array attached, and that is what I will be using in this example.

  1. apt-get install python-software-properties
  2. add-apt-repository ppa:zfs-native/stable
  3. apt-get update
  4. apt-get install ubuntu-zfs
  5. zpool create -f -o ashift=12 tank raidz \
    ata-ST4000DM000-1111 ata-ST4000DM000-2222 ata-ST4000DM000-3333
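The disk names in step 5 are the /dev/disk/by-id aliases; an easy way to find the right ones on your own system (the names above are just placeholders for this example):

ls -l /dev/disk/by-id/ | grep -v part (filter out the partition entries)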

NAS Build:
zpool create -f -o ashift=12 tank raidz2 \
    <disks by id; the full command with the FC disk paths is shown further below>

SLC SLOG drive
unit MiB
mkpart zfs 1 16385
mkpart zfs 16385 122104
quit
zpool add tank log mirror <part1> <part2> <part3> (SLOG)
zpool add tank cache <part> (L2ARC)
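Putting the above together, a minimal end-to-end sketch using a single SSD instead of a mirrored set (the device name, partition labels, and sizes here are placeholders):

parted -a opt /dev/disk/by-id/ata-EXAMPLE-SSD
(parted) mklabel gpt
(parted) unit MiB
(parted) mkpart slog 1 16385 (first ~16 GiB for the SLOG)
(parted) mkpart l2arc 16385 122104 (the rest for L2ARC)
(parted) quit
zpool add tank log /dev/disk/by-id/ata-EXAMPLE-SSD-part1
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD-part2
zpool status tank (should now show separate logs and cache sections)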

Filesystems are called datasets:
They all share the storage pool's free space equally, rather than each getting a fixed partition.

zfs get compressionratio tank
zfs create tank/databases

zfs set compression=lz4 tank/databases
zfs create -o compression=lz4 tank/databases (or set it at creation time)

Snapshots:
zfs snapshot tank/databases@friday (take a snapshot)

apt-get install zfs-auto-snapshot (creates snapshots at regular intervals)
zfs set com.sun:auto-snapshot=true tank/databases
zfs set snapdir=visible tank/databases
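With snapdir=visible set, a snapshot can be browsed read-only under the dataset's .zfs directory, and the dataset can be rolled back to it. A quick sketch (paths assume the default mountpoint under /tank):

ls /tank/databases/.zfs/snapshot/friday (browse the snapshot read-only)
zfs rollback tank/databases@friday (revert the dataset; changes made since friday are lost)
zfs destroy tank/databases@friday (remove the snapshot when it is no longer needed)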
Why create multiple file systems/datasets? Because properties like compression, quotas, and snapshot schedules are set per dataset.
zpool scrub tank (check for errors)
zpool scrub -s tank (stop a running scrub)
zpool iostat -v 1
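zfs-auto-snapshot only covers snapshots; scrubs still need their own schedule. One common approach is a cron entry, for example a monthly scrub (the file name, timing, and zpool path below are just an example):

echo '0 2 1 * * root /sbin/zpool scrub tank' > /etc/cron.d/zfs-scrub (02:00 on the 1st of each month)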

zpool replace -f tank <disk1> <disk2> (old drive -> new drive)
zpool status (will show the status of resilvering)
Recommended drive counts per vdev (2^n data disks plus parity):
RAIDZ1: 3, 5, 9, 17, 33
RAIDZ2: 4, 6, 10, 18, 34
RAIDZ3: 5, 7, 11, 19, 35


zpool status
zpool list
zpool iostat -v 1
zfs get compressionratio tank/databases
zdb (dumps pool internals, e.g. the cached pool configuration)
zfs get all tank/databases (show all the parameters for a filesystem)
zfs list -t snapshot (show snapshots)

zfsonlinux.org
open-zfs.org/wiki/Main_Page

zfs set quota=10G tank/home/jeff
zfs get quota tank/home/jeff
zfs list -r tank/home
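To see how much of a quota is actually in use, ask zfs list for the relevant columns (any dataset property can be passed to -o):

zfs list -o name,used,avail,quota tank/home/jeff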

zpool create -f -o ashift=12 tank1 raidz2 \
pci-0000:0c:00.0-fc-0x520092b44d01bd04-lun-0 \
pci-0000:0c:00.0-fc-0x520092b4563e6e07-lun-0 \
pci-0000:0c:00.0-fc-0x520092b43a90080d-lun-0 \
pci-0000:0c:00.0-fc-0x520092b43a8c0a07-lun-0 \
pci-0000:0c:00.0-fc-0x520092b44d01bd10-lun-0 \
pci-0000:0c:00.0-fc-0x520092b43bd7ef01-lun-0 \
pci-0000:0c:00.0-fc-0x520092b49d716800-lun-0 \
pci-0000:0c:00.0-fc-0x520092b43a8c0a05-lun-0 \
pci-0000:0c:00.0-fc-0x520092b4353f2600-lun-0 \
pci-0000:0c:00.0-fc-0x520092b43a8c0a09-lun-0 \
pci-0000:0c:00.0-fc-0x520092b44d01bd06-lun-0 \
pci-0000:0c:00.0-fc-0x520092b44d01bd07-lun-0 \
pci-0000:0c:00.0-fc-0x520092b44d01bd0a-lun-0 \
pci-0000:0c:00.0-fc-0x520092b43a8c0a03-lun-0 \
pci-0000:0c:00.0-fc-0x520092b44d01bd1b-lun-0

zfs create -o compression=lz4 tank/databases

Replace a disk with a larger disk:
zpool set autoexpand=on tank1
** pull a disk
** check logs for the drive that was removed
zpool offline tank1 old_1tb_drive
** insert new disk
zpool replace -f tank1 old_1tb_drive new_2tb_drive
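Resilvering onto the new drive can take hours; an easy way to keep an eye on progress (pool name as above):

watch -n 10 zpool status tank1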


Mount on Boot:

You need to edit /etc/default/zfs with your favourite editor (e.g. nano or vim) and change the lines

ZFS_MOUNT='no'
ZFS_UNMOUNT='no'

to

ZFS_MOUNT='yes'
ZFS_UNMOUNT='yes'
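If you prefer to do it non-interactively, a one-liner like this should work, assuming the stock file layout shown above:

sed -i "s/ZFS_MOUNT='no'/ZFS_MOUNT='yes'/; s/ZFS_UNMOUNT='no'/ZFS_UNMOUNT='yes'/" /etc/default/zfs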

Automation tasks:

Reference here

iQstor J2880 FC Switched JBOD (SBOD) System (decom)

Tech Spec:
http://www.iqstor.com/products/j2880/technical-specification

User Manual

Creating a raid6 via lvm2:

pvcreate /dev/sd[abcdefghijklmno]
vgcreate iqstor01 /dev/sd[abcdefghijklmno]
lvcreate --type raid6 -l 80%VG -i 13 -n lvm_iqstor01 iqstor01

This created a 10 TB LV, not the 13 TB I expected.
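To see where the space actually went (the raid6 sub-LVs that hold data and parity), lvs can show the segment layout; a quick check, using the VG name from above:

vgs iqstor01
lvs -a -o lv_name,lv_size,segtype,stripes,devices iqstor01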


Install Webmin:

apt-get -y install libnet-ssleay-perl libauthen-pam-perl libio-pty-perl apt-show-versions libapt-pkg-perl

wget http://prdownloads.sourceforge.net/webadmin/webmin_1.680_all.deb

dpkg -i webmin_1.680_all.deb

Ceph – distributed object storage

http://ceph.com/

What is Ceph?  Ceph is a free software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph’s main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. The data is replicated, making it fault tolerant. (ref)

There was an IBM write-up here, or you can just read the cached copy. (l-ceph-pdf)

References:

  • http://en.wikipedia.org/wiki/Ceph_(storage)

Setting up your owncloud system

I’m going to do this under Ubuntu, but you can use any flavor of Linux really.

OWNCLOUD_DIR="/opt/md2/owncloud"
apt-get -y install apache2 mysql-server libapache2-mod-php5 php5 \
        php5-gd php5-json php5-mysql php5-sqlite php5-curl \
        php5-intl php5-mcrypt php5-imagick php-xml-parser \
        smbclient curl libcurl3
mkdir -p ${OWNCLOUD_DIR}
cd ${OWNCLOUD_DIR}
wget "http://download.owncloud.org/community/owncloud-5.0.15.tar.bz2"
tar -xjf ${OWNCLOUD_DIR}/owncloud-5.0.15.tar.bz2
mv owncloud/* ${OWNCLOUD_DIR}
mv owncloud/.htaccess ${OWNCLOUD_DIR}
mkdir -p ${OWNCLOUD_DIR}/data
chown -R www-data:www-data ${OWNCLOUD_DIR}/data
chown -R www-data:www-data ${OWNCLOUD_DIR}/apps
chown -R www-data:www-data ${OWNCLOUD_DIR}/config
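The files end up outside Apache's default document root, so Apache still has to be pointed at them. A minimal sketch, assuming Apache 2.2-style conf.d (on Apache 2.4, drop the file into conf-available and run a2enconf instead); the path matches OWNCLOUD_DIR above:

cat > /etc/apache2/conf.d/owncloud.conf <<'EOF'
Alias /owncloud /opt/md2/owncloud
<Directory /opt/md2/owncloud>
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
EOF
a2enmod rewrite
service apache2 restart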

Mount the ownCloud share locally via WebDAV (davfs2):

apt-get install davfs2
echo 'https://your-owncloud-server-url.com/owncloud/remote.php/webdav   yourUserName   "your password here"' >> /etc/davfs2/secrets
mkdir /media/owncloud
chown localUserId:localUserId /media/owncloud
echo 'https://your-owncloud-server-url.com/owncloud/remote.php/webdav/  /media/owncloud   davfs   defaults,user,noauto,uid=1000,gid=1000  0  0' >> /etc/fstab
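With the secrets file and the fstab entry in place, the local user can mount and unmount the share on demand:

mount /media/owncloud
umount /media/owncloud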


References:

  • http://doc.owncloud.org/server/5.0/developer_manual/app/gettingstarted.html
  • WebDAV:  http://doc.owncloud.org/server/5.0/admin_manual/installation.html
  • Setting up WebDAV:  http://www.adercon.com/ac/node/100
  • http://forum.owncloud.org/viewtopic.php?f=17&t=7536 (Great for webdav debugging)

parted

parted -a opt /dev/md0
(parted) u MiB
(parted) rm 1
(parted) mkpart primary 1 100%

or, a quick-and-dirty alternative:

(parted) mkpart primary ext4 1 -1

Partition a 4tb external drive:
parted -a opt /dev/sdc
mklabel gpt
unit TB 
mkpart primary 0.00TB 4.00TB
align-check optimal 1 (check if the partition is aligned)
quit
mkfs.ext4 /dev/sdc1

You might want to reduce the reserved block percentage (kept back for root and to help avoid fragmentation) to 1% on such a large drive.

tune2fs -m 1 /dev/sdc1
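If the drive should come back across reboots, mounting by UUID is more reliable than /dev/sdc1, since external drives can change device names. A sketch; the UUID and mount point below are placeholders:

blkid /dev/sdc1 (prints the filesystem UUID)
mkdir -p /mnt/usb4tb
echo 'UUID=<uuid-from-blkid>  /mnt/usb4tb  ext4  defaults,noatime  0  2' >> /etc/fstab
mount /mnt/usb4tb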