SAN Box Used Via iSCSI - Funtoo Linux
http://www.funtoo.org/wiki/SAN_Box_used_via_iSCSI
Contents

1 Introduction
2 The SAN box
   2.1 Hardware and setup considerations
   2.2 Operating system choice and storage pool configuration
   2.3 Networking considerations
   2.4 Architectural overview
3 Preliminary BIOS operations on the SAN Box
   3.1 BIOS Upgrade
   3.2 SAN box BIOS setup
4 Phase 1: Setting up Solaris 11 on the SAN box
   4.1 Activating SSH to work from a remote machine
      4.1.1 DNS servers configuration
      4.1.2 Network configuration
      4.1.3 Bringing up SSH
      4.1.4 Enabling some extra virtual terminals
      4.1.5 Be a Watt-saver!
5 Phase 2: Creating the SAN storage pool on the SAN box
   5.1 Enabling the hotplug service
   5.2 Creating the SAN storage pool (RAID-Z1)
   5.3 Torturing the SAN pool
      5.3.1 Removal of one disk
      5.3.2 Returning to normal operation
      5.3.3 Removal of two disks
   5.4 Creating zvols
6 Phase 3: Bringing a bit of iSCSI magic
   6.1 iSCSI concepts overview
      6.1.1 Initiator, target and LUN
      6.1.2 IQN vs EUI
   6.2 Making the SAN box zvols accessible through iSCSI
      6.2.1 Installing pre-requisites
      6.2.2 Creating the Logical Units (LU)
      6.2.3 Mapping concepts
      6.2.4 Setting up the LU mapping
      6.2.5 Restricting the listening interface of a target (portals and portal groups)
      6.2.6 Testing from the SAN Box
7 Phase 4: Mounting a remote volume from a Funtoo Linux box
   7.1 Installing requirements
   7.2 Linux kernel configuration
   7.3 Configuring the Open iSCSI initiator
   7.4 Spawning iscsid
   7.5 Querying the SAN box for available targets
   7.6 Connecting to the remote target
   7.7 Using the remote LU
   7.8 Unmounting the iSCSI disk and logging out from the target
   7.9 Taking and restoring a zvol snapshot
   7.10 Automating the connection to a remote iSCSI disk
8 Optimizations / security
   8.1 Activating stakeholders authentication (CHAP)
   8.2 Configuring the SAN box to send iSCSI blocks in round-robin
      8.2.1 Activating multi-pathing
   8.3 Other ways to gain transmission speed
      8.3.1 Changing the logical unit block size
      8.3.2 Using Ethernet jumbo frames
9 Final words
10 Footnotes and references
Introduction

In this lab we experiment with a Funtoo Linux machine that mounts, through iSCSI, a remote volume located on a Solaris machine. The remote volume resides on a RAID-Z ZFS pool (zvol).
The SAN box

Hardware and setup considerations

The SAN box is composed of the following:

Jetway NF81 Mini-ITX Motherboard:
   AMD eOntario (G-Series) T56N Dual Core 1.6 GHz APU @ 18W
   2 x DDR3-800/1066 Single Channel SO-DIMM slots (up to 8 GB)
   Integrated ATI Radeon HD 6310 Graphics
   2 x Realtek RTL8111E PCI-E Gigabit Ethernet
   5 x SATA3 6Gb/s Connectors
   LVDS and Inverter connectors (used to connect an LCD panel directly on the motherboard)
Computer case: Chenbro ES34069. For a compact case it offers a great deal:
   4 *hot swappable* hard drive bays plus 1 internal bay for a 2.5" drive. The case is compartmented and two fans ventilate the compartment where the 4 hard drives reside (drives remain cool)
   Lockable front door preventing access to the hard drive bays (mandatory if the machine is in an open place and if you have little devils at home who love playing with funny things behind your back; this can save you the loss of several terabytes of data after hearing a "Dad, what is that stuff?" :-) )
4x Western Digital Caviar Green 3TB (SATA-II) - storage pool, will be divided in several virtual volumes exported via iSCSI
1x OCZ Agility 2 60GB (SATA-II) - system disk
2x 4GB Kingston DDR3-1066 SO-DIMM memory modules
1x external USB DVD drive (Samsung S084, chosen for the aesthetics, personal choice)
Operating system choice and storage pool configuration

Operating system: Oracle Solaris 11 Express (the T56N APU has hardware acceleration for AES; according to Wikipedia, the Solaris Crypto Framework has support for it, Solaris 10 or later required).

Storage: all 4 WD drives will be used to create a big ZFS RAID-Z1 zpool (RAID-5-on-steroids, extremely robust in terms of preserving and automatically restoring data integrity) holding several emulated virtual block devices (zvols) exported via iSCSI.
The Solaris 11 Express licensing terms changed since the acquisition of Sun: check the OTN licence terms to see whether or not you can use OSE 11 in your context. As a general rule of thumb: if your SAN box will hold strictly family data (i.e. you don't use it for supporting commercial activities / generating financial profits) you should be entitled to freely use OSE 11. However we are not lawyers nor Oracle commercial partners, so always check with Oracle to see if they allow you to freely use OSE 11 or if they require you to buy a support contract in your context.
Networking considerations

Each of the two Ethernet NICs will be connected to a different (interlinked) Gigabit switch. For the record, the switches used here are TrendNet TEG-S80Dg, directly connected to a 120V plug with no external PSU.
Preliminary BIOS operations on the SAN Box

Before going on with the installation of Solaris, we have to do two operations:
   Upgrade the BIOS to its latest revision (our NF81 motherboard came with an earlier BIOS revision)
   Set up the BIOS
BIOS Upgrade

At the time of writing, JetWay had published a newer BIOS revision (A03) for the motherboard (http://www.jetwaycomputer.com/NF81.html). Unfortunately the board has no built-in flash utility accessible at machine startup, so the only solution is to build a bootable DOS USB key using QEMU (unetbootin with a FreeDOS image has been tried, but no joy):
Flashing a BIOS is always a risky operation: always connect the computer to a UPS-protected power plug and let the flash complete.
On your Funtoo box:
   Emerge the package app-emulation/qemu
   Download Balder (http://www.finnix.org/Balder) (a minimalistic FreeDOS image); at the time of writing balder10.img is the only version available
   Plug in your USB key, note how the kernel recognizes it (e.g. /dev/sdz) and back up any valuable data on it
   With the partitioning tool of your choice (cfdisk, fdisk...) create a single partition on your USB key, change its type to 0xF (Win95 FAT32) and mark it as bootable
   Run qemu, make it boot on the Balder image and make it use your USB key as a hard drive => qemu -boot a -fda balder10.img -hda /dev/sdz (your USB key is seen as drive C: and the Balder image as floppy drive A:)
   In the boot menu shown in the QEMU window select option "3 Start FreeDOS for i386 + FDXMS"; after a fraction of a second an extremely well known prompt coming from the antiquity of computer science welcomes you :)
   It is now time to install the boot code on the "hard drive" and copy all the files contained in the Balder image. To achieve that, execute in sequence sys c: followed by xcopy /E /N a: c:
   Close QEMU and test your key by running QEMU again: qemu /dev/sdz. Do you see the Balder boot menu? Perfect, close QEMU again
   Mount your USB key within the VFS: mount /dev/sdz /mnt/usbkey
   On the motherboard manufacturer's website, download the update archive (in the case of JetWay, get xxxxxxxx.zip, not xxxxxxxx_w.zip; the latter is for flashing from a Windows instance). In general BIOS update archives come as a ZIP archive containing not only a BIOS image but also the adequate tools (for MS-DOS ;) to reflash the BIOS EEPROM. This is not the case with the JetWay NF81, but some motherboard models require you to change a jumper position on the motherboard before flashing the BIOS EEPROM (read-only/read-write)
   Extract into the above mount point the files contained in the BIOS update ZIP archive
   Now unmount the USB key - DO NOT UNPLUG THE KEY WITHOUT UNMOUNTING IT FIRST OR YOU WILL GET CORRUPTED FILES ON IT
On the SAN Box:
   Plug the USB key into the SAN Box, turn the machine on and go into the BIOS settings to make sure the first boot device is the USB key. For the JetWay NF81 (more or less UEFI BIOS?): if you see your USB key twice in the boot devices list with one instance name starting with "UEFI...", make sure this name is selected first, else the system won't consider the USB key as a bootable device. Save your changes and exit from the BIOS setup program; when your USB key is probed the BIOS should boot on it
   In the Balder boot menu, choose again "3 Start FreeDOS for i386 + FDXMS"
   At the FreeDOS C:> prompt, just run the magic command FLASH.BAT (this DOS batch file runs the BIOS flashing utility with the adequate arguments); when the flashing finishes, power cycle the machine (in our case, for some reason, the flashing utility didn't return to the prompt)
   Go again into the BIOS settings and load the default settings (highly recommended)
The Funtoo-side command sequence is consolidated in the sketch just after this list.
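To recap, a minimal console sketch of the key-preparation steps on the Funtoo box. The device name /dev/sdz, the mount point /mnt/usbkey and the archive name NF81_BIOS.zip are placeholders, adapt them to your system:

# emerge app-emulation/qemu                          # QEMU will "boot" the key like a real machine
# fdisk /dev/sdz                                     # one partition, type 0xF (Win95 FAT32), bootable flag set
# qemu -boot a -fda balder10.img -hda /dev/sdz       # inside FreeDOS run: sys c:  then  xcopy /E /N a: c:
# qemu /dev/sdz                                      # quick test: the Balder boot menu should appear
# mount /dev/sdz1 /mnt/usbkey                        # mount the FAT32 partition...
# unzip NF81_BIOS.zip -d /mnt/usbkey                 # ...and drop the BIOS update files on it
# umount /mnt/usbkey                                 # never unplug before unmounting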
SAN box BIOS setup

Pay attention to the following settings:
   Main: check the date and time, it should be set to your local date and time
   SATA Config: no drives will be listed here but the one connected on the 5th SATA port (assuming you have enabled the SATA-IDE combined mode; if SATA-IDE combined mode has been disabled no drive will show up). The option is a bit misleading, it should have been called "IDE Drives"...
   Advanced:
      CPU Config -> C6 mode: enabled (C6 not shown in powertop?)
      Shutdown temperature: 75C (you can choose a bit lower)
   Chipset - Southbridge:
      Onchip SATA channel: Enabled
      Onchip SATA type: AHCI (mandatory to be able to use hot plug; this also enables the drives' Native Command Queuing (NCQ))
      SATA-IDE Combined mode: Disabled (this option affects *only* the 5th SATA port, turning it into an IDE one even if AHCI has been enabled for "Onchip SATA type"; undocumented in the manual :-( )
   Boot: set the device priority to the USB DVD-ROM drive first, then the 2.5" OCZ SSD drive
Phase 1: Setting up Solaris 11 on the SAN box

Ok, this is not especially Funtoo related but the following notes can definitely save you several hours. The whole process of setting up a Solaris operating system won't be explained because it is far beyond the scope of this article, but here is some pertinent information for the SAN setup. A few points:
   Everything we need is properly recognized out of the box (video drivers untested, Xorg won't be used), especially the Ethernet RTL8111E chipset (prtconf -D reports the kernel is using a driver for the RTL8168; this is normal, both chipsets are very close so an alias has been put in /etc/driver_aliases)
   3TB drives are properly recognized and used under Solaris, no need for the HBA adapter provided by Western Digital
   We won't partition the 4 Western Digital drives with GPT ourselves, we will directly use the whole devices
   According to Wikipedia, AES hardware acceleration of the AMD T56N APU is supported by the Solaris Cryptographic Framework (you won't see it as a hardware crypto provider)
   Gotcha: the Canadian-French keyboard layout gives an AZERTY keyboard on the console :(
Activating SSH to work from a remote machine

Solaris 11 Express comes with a text-only CD-ROM [1]. Because the machine serves as a storage box with no screen connected to it most of the time, and because working from an 80x25 console is not very comfortable (especially with an incorrect keyboard mapping), the best starting point is to enable the network and activate SSH.
DNS servers configuration

If you use ISC BIND version 9, you MUST use multiple A resource records. Define your direct resolution like below:
(...)
sanbox-if0      IN A    192.168.1.100
sanbox-if1      IN A    192.168.1.101
sanbox          IN A    192.168.1.100
sanbox          IN A    192.168.1.101
(...)
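As a quick sanity check, and assuming the zone has been reloaded and dig is available on one of your clients, two consecutive queries should return both addresses with the order rotating (BIND's default round-robin behaviour):

$ dig +short sanbox.mytestdomain.lan
192.168.1.100
192.168.1.101
$ dig +short sanbox.mytestdomain.lan
192.168.1.101
192.168.1.100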
Network configuration
Let's configure the network on the SAN Box. If you are used to Linux, you will see that the process in the Solaris world is quite similar, with some substantial differences however. First, let's see which NICs are recognized by the Solaris kernel at system startup:
# dladm show-phys
LINK         MEDIA         SPEED
rge0         Ethernet      0
rge1         Ethernet      0
Both Ethernet interfaces are connected to a switch (they have their link LEDs lit) but they still need to be "plumbed" (plumbing by itself is not persistent between reboots, see below):
# ifconfig rge0 plumb
# ifconfig rge1 plumb
# dladm show-phys
LINK         MEDIA         STATE
rge0         Ethernet      up
rge1         Ethernet      up
Once plumbed, both interfaces will also be listed when doing an ifconfig:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
rge0: flags=1000843<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 0.0.0.0 netmask 0
        ether 0:30:18:a1:73:c6
rge1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        ether 0:30:18:a1:73:c6
Time to assign a static IP address to both interfaces (we won't use link aggregation or fail-over configuration in that case):
# ifconfig rge0 192.168.1.100/24 up
# ifconfig rge1 192.168.1.101/24 up
Notice the up at the end: if you forget it, Solaris will assign an IP address to the NIC but will consider it inactive (down). Checking again:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
rge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
        ether 0:30:18:a1:73:c6
rge1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        ether 0:30:18:a1:73:c6
To see rge0 and rge1 automatically brought up at system startup, we need to define 2 files named /etc/hostname.rgeX containing the NIC IP addresses:
# echo "192.168.1.100/24" > /etc/hostname.rge0 # echo "192.168.1.100/24" > /etc/hostname.rge1
Now let's add a default route and make it persistent across reboots:
# echo 192.168.1.1 > /etc/defaultrouter
# route add default `cat /etc/defaultrouter`
# ping 192.168.1.1
192.168.1.1 is alive
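As a side note, Solaris can also record a persistent route directly with the -p flag of route; a minimal sketch with the same gateway as above (exact output wording may vary):

# route -p add default 192.168.1.1          # -p makes the route persistent across reboots
add net default: gateway 192.168.1.1
add persistent net default: gateway 192.168.1.1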
Once everything is in place, reboot the SAN box and see if both NICs are up with an IP address assigned and with a default route:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
rge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
        ether 0:30:18:a1:73:c6
rge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.1.101 netmask ffffff00 broadcast 192.168.1.255
        ether 0:30:18:a1:73:c6
(...)
# netstat -rn
Routing Table: IPv4
  Destination           Flags  Ref     Use         Interface
  --------------------  -----  ------  ----------  ---------
  default               UG     2       682
  127.0.0.1             UH     2       172         lo0
  192.168.1.0           U      3       127425      rge1
  192.168.1.0           U      3       1260        rge0
A final step for this paragraph: name resolution configuration. The environment used here has DNS servers (primary 192.168.1.2, secondary 192.168.1.1) so /etc/resolv.conf looks like this:
search mytestdomain.lan
nameserver 192.168.1.2
nameserver 192.168.1.1
By default the hosts resolution is configured (/etc/nsswitch.conf) to use the DNS servers first then the file /etc/hosts so make sure that /etc/nsswitch.conf has the following line:
hosts: dns
(...)
Address: 192.168.1.101

# nslookup sanbox
Server:         192.168.1.2
Address:        192.168.1.2#53

Name:   sanbox.mytestdomain.lan
Address: 192.168.1.101
Name:   sanbox.mytestdomain.lan
Address: 192.168.1.100
Perfect! Two different requests give the list in a different order. Now reboot the box and check that everything is in order:
   Both network interfaces are automatically brought up with an IP address assigned
   The default route is set to the right gateway
   Name resolution is set up (see /etc/resolv.conf)
Bringing up SSH
This is very straightforward: Check if the service is disabled (on a fresh install it is):
# svcs -a | grep ssh
disabled        8:07:01 svc:/network/ssh:default
Edit /etc/ssh/sshd_config (nano and vi are included with Solaris) to match your preferences. Do not bother with remote root access: the RBAC security policies of Solaris do not allow it, and you must log on with a normal user account prior to gaining root privileges with su -. Now enable the SSH server and check its status:
# svcadm enable ssh
# svcs -a | grep ssh
online          8:07:01 svc:/network/ssh:default
If you get maintenance instead of online it means the service encountered an error somewhere; usually it is due to a typo in the configuration file. You can get some information with svcs -x ssh.
SSH is now ready and listening for connections (here coming from any IPv4/IPv6 address, hence the two lines):
# netstat -an | grep "22"
      *.22                 *.*                0      0 128000      0 LISTEN
      *.22                 *.*                0      0 128000      0 LISTEN
Enabling some extra virtual terminals

# svcs -l vtdaemon
fmri         svc:/system/vtdaemon:default
name         vtdaemon for virtual console secure switch
enabled      true
state        online
next_state   none
state_time   June 19, 2011 09:21:53 AM EDT
logfile      /var/svc/log/system-vtdaemon:default.log
restarter    svc:/system/svc/restarter:default
contract_id  97
dependency   require_all/none svc:/system/console-login:default (online)
Another form of service checking (the long form) has been used here just for the sake of demonstration.
Now enable five more virtual consoles (vt2 to vt6) and enable the hotkeys property of vtdaemon, otherwise you won't be able to switch between the virtual consoles:
# for i in 2 3 4 5 6; do svcadm enable console-login:vt$i; done
# svccfg -s vtdaemon setprop options/hotkeys=true
If you want to disable the screen auto-locking capability of your newly added virtual terminals:
# svccfg -s vtdaemon setprop options/secure=false
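If property changes like the two above do not seem to be taken into account, refreshing and restarting the service is the usual SMF way to make them effective; a small sketch, normally not needed if the consoles come up as shown below:

# svcadm refresh vtdaemon          # re-read the service properties
# svcadm restart vtdaemon          # restart the daemon with the new values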
The service vtdaemon and the virtual consoles are now up and running:
# svcs -a | grep "vt" online 9:43:14 svc:/system/vtdaemon:default online 9:43:14 svc:/system/console-login:vt3 online 9:43:14 svc:/system/console-login:vt2 online 9:43:14 svc:/system/console-login:vt5 online 9:43:14 svc:/system/console-login:vt6 online 9:43:14 svc:/system/console-login:vt4
console-login:vtX won't be enabled and online if vtdaemon itself is not enabled and online.
Now try to switch between virtual terminals with Ctrl-Alt-F2 to Ctrl-Alt-F6 (virtual terminals do auto-lock when you switch between them if you did not set the vtdaemon property options/secure to false). You can return to the system console with Ctrl-Alt-F1.
Be a Watt-saver!
When idle with all of the disks spun up, the SAN Box consumes near 40W at the plug, and near 60W when dealing with a lot of I/O activity. 40W may not seem a big figure, yet it represents nearly 1kWh/day and 365kWh/year (for your wallet, at a rate of 10 cents/kWh, an expense of roughly $36.50/year). Not a big deal, but preserving every bit of natural resources for Earth sustainability is welcome and the SAN Box can easily do its part, especially when you are out of your house for several hours or days :-) Solaris has device power management features and they are easy to configure:
Enforcing power management is mandatory on many non Oracle/Sun/Fujitsu machines because Solaris will fail to detect the machine's power management capabilities, hence leaving this feature disabled. Of course this has the consequence of the drives not being automatically spun down by Solaris when not solicited for a couple of minutes.
First run the format command (results can differ in your case):
# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c7t0d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@0,0
       1. c7t1d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@1,0
       2. c7t2d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@2,0
       3. c7t3d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@3,0
       4. c7t4d0 <ATA-OCZ-AGILITY2-1.32 cyl 3914 alt 2 hd 255 sec 63>
          /pci@0,0/pci1002,4393@11/disk@4,0
Specify disk (enter its number):
Note the disks' physical paths (here /pci@0,0/pci1002,4393@11/disk@0,0 to /pci@0,0/pci1002,4393@11/disk@3,0 are of interest) and exit by pressing Ctrl-C. Second, edit the file /etc/power.conf and look for the following line:
autopm auto
Change it for:
autopm enable
Third, at the end of the same file, add one device-thresholds line per storage device you want to spin down (device-thresholds needs the physical path of the device), followed by the delay before the device enters standby mode. In our case we want to wait 10 minutes, so it gives:
device-thresholds    /pci@0,0/pci1002,4393@11/disk@0,0    10m
device-thresholds    /pci@0,0/pci1002,4393@11/disk@1,0    10m
device-thresholds    /pci@0,0/pci1002,4393@11/disk@2,0    10m
device-thresholds    /pci@0,0/pci1002,4393@11/disk@3,0    10m
Finally, run pmconfig (no arguments) to make this new configuration active. This (theoretically) puts the 4 hard drives in standby mode after 10 minutes of inactivity. It is possible that your drives come with preconfigured values preventing them from being spun down before a factory-set delay, or from being spun down at all (in our experiments the WD Caviar Green drives are put in standby with a factory-set delay of ~8 minutes). With the hardware used to build the SAN Box, nothing more than 27W is drained from the power plug when all of the drives are in standby and the CPU is idle (32% more energy savings compared to the original 40W) :-).
Phase 2: Creating the SAN storage pool on the SAN box

At the time of writing (June 2011), Solaris does not support adding additional hard drives to an existing zpool created as a RAID-Z array to increase its size. It is however possible to increase the storage space by replacing all of the drives composing a RAID-Z array one after another (wait for the completion of the resilvering process after each drive replacement).
Enabling the hotplug service

The very first step of this section is not creating the zpool but, instead, checking that the hotplug service is enabled and online, as the computer case we use has hot-pluggable SATA bays:
# svcs -l hotplug
fmri         svc:/system/hotplug:default
name         hotplug daemon
enabled      true
state        online
next_state   none
state_time   June 19, 2011 10:22:22 AM EDT
logfile      /var/svc/log/system-hotplug:default.log
restarter    svc:/system/svc/restarter:default
contract_id  61
dependency   require_all/none svc:/system/device/local (online)
dependency   require_all/none svc:/system/filesystem/local (online)
In case the service is shown as disabled, just enable it by doing svcadm enable hotplug and check that it is stated as being online after that. When this service is online, the Solaris kernel is aware whenever a disk is plugged in or removed from the computer frame (your SATA bays must support hot plugging, which is the case here).
Creating the SAN storage pool (RAID-Z1)

RAID-Z pools can be created in two flavors:
   RAID-Z1 (~RAID-5): supports the failure of only one disk, at the price of "sacrificing" the capacity of one disk (4x 3TB gives 9TB of storage)
   RAID-Z2 (~RAID-6): supports the failure of up to two disks, at the price of "sacrificing" the capacity of two disks (4x 3TB gives 6TB of storage) and requiring more computational power from the CPU
A good trade-off here between computational power / storage (main priority) / reliability is to use RAID-Z1. The first step is to identify the logical device name of the drives that will be used for our RAID-Z1 storage pool. If you have configured the power management of the SAN box (see above paragraphs), you should have noticed that the format command returns something (e.g. c7t0d0) just before the <ATA-...> drive identification string. This is the logical device name we will need to create the storage pool!
0. c7t0d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
   /pci@0,0/pci1002,4393@11/disk@0,0
1. c7t1d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
   /pci@0,0/pci1002,4393@11/disk@1,0
2. c7t2d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
   /pci@0,0/pci1002,4393@11/disk@2,0
3. c7t3d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
   /pci@0,0/pci1002,4393@11/disk@3,0
As each of the SATA drives is connected on its own SATA bus, Solaris sees them as 4 different "SCSI targets" plugged in the same "SCSI HBA" (one LUN per target, hence the d0). The zpool command will automatically create on each of the given disks a GPT label covering the whole disk for you. GPT labels overcome the 2.19TB limit and are redundant structures (one copy is put at the beginning of the disk, one exact copy at the end of the disk).
Creating the storage pool (named san here) as a RAID-Z1 array is simply achieved with the following incantation (of course adapt to your case... here we use c7t0d0, c7t1d0, c7t2d0 and c7t3d0):
# zpool create san raidz c7t0d0 c7t1d0 c7t2d0 c7t3d0
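Had we privileged resilience over capacity (see the RAID-Z1/RAID-Z2 trade-off above), the same four disks could have been assembled as a RAID-Z2 pool instead; a sketch with the same device names:

# zpool create san raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0      # survives the loss of any two disks, ~6TB usable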
Now check status and say hello to your brand new SAN storage pool (notice it has been mounted with its name directly under the VFS root):
# zpool status san
  pool: san
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        san         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
The pool offers roughly 8.1T of usable space. The above just says: everything is in order, the pool is fully functional (not in a DEGRADED state).
Torturing the SAN pool

Removal of one disk

Now let's torture the pool a bit: pull one of the drives (here the one seen as c7t2d0) out of its bay and query the pool status again:

# zpool status san
  pool: san
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        san         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
Nothing?... This is absolutely normal if no write operations have occurred on the pool since the time you removed the hard drive. If you create some activity you will see the pool switch to a degraded state:
# dd if=/dev/random of=/san/test.dd count=1000
# zpool status san
  pool: san
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        san         DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  REMOVED      0     0     0
            c7t3d0  ONLINE       0     0     0
Returning to normal operation

Now push the drive back in its bay and have a look at what the kernel reports (/var/adm/messages):

xxxxxxx sata: [ID 663010 kern.info] /pci@0,0/pci1002,4393@11 :
xxxxxxx        SATA device detected at port 2
xxxxxxx sata: [ID 761595 kern.info]     SATA disk device at port 2
xxxxxxx sata: [ID 846691 kern.info]       model WDC WD30EZRS-00J99B0
xxxxxxx sata: [ID 693010 kern.info]       firmware 80.00A80
xxxxxxx sata: [ID 163988 kern.info]       serial number WD-*************
xxxxxxx sata: [ID 594940 kern.info]       supported features:
xxxxxxx sata: [ID 981177 kern.info]        48-bit LBA, DMA, Native Command Queueing, SMART, SMART self-test
xxxxxxx sata: [ID 643337 kern.info]       SATA Gen2 signaling speed (3.0Gbps)
xxxxxxx sata: [ID 349649 kern.info]       Supported queue depth 32
xxxxxxx sata: [ID 349649 kern.info]       capacity = 5860533168 sectors
Solaris now sees the drive again! What about the storage pool?
# zpool status san
  pool: san
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        san         DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  REMOVED      0     0     0
            c7t3d0  ONLINE       0     0     0
Still in a degraded state, correct. ZFS does not take any initiative when a new drive is connected: it is up to you to handle the operations. Let's start by asking for the list of the SATA ports on the motherboard (we can safely ignore sata0/5 in our case; our motherboard chipset has support for 6 SATA ports but only 5 of them have physical connectors on the motherboard PCB):
# cfgadm -a sata
Ap_Id                          Condition
sata0/0::dsk/c7t0d0            ok
sata0/1::dsk/c7t1d0            ok
sata0/2                        unknown
sata0/3::dsk/c7t3d0            ok
sata0/4::dsk/c7t4d0            ok
sata0/5                        ok
To return to the normal state, just "configure" the drive; this is accomplished by:
# cfgadm -c configure sata0/2
# cfgadm -a sata
Ap_Id                          Condition
sata0/0::dsk/c7t0d0            ok
sata0/1::dsk/c7t1d0            ok
sata0/2::dsk/c7t2d0            ok
sata0/3::dsk/c7t3d0            ok
sata0/4::dsk/c7t4d0            ok
Fully operational again! Note that the system took the initiative of resilvering the storage pool for you :-) This is very quick here because the pool is empty, but it can take several minutes or hours to complete. Here we just pushed the same drive back: ZFS is intelligent enough to scan what the drive contains and to know it has to reattach the drive to the "san" storage pool. Had we replaced the drive with a brand new blank one, we would have had to explicitly tell ZFS to replace the drive (Solaris will resilver the array but you will still see it in a degraded state until the operation completes). Assuming the old and new devices are assigned to the same drive bay (here giving c7t2d0):
# zpool replace san c7t2d0 c7t2d0
Removal of two disks

Now pull two drives out of their bays: this time RAID-Z1 cannot cope with the situation and the console displays an FMA error message ending with:

(...)
REC-ACTION: Make sure the affected devices are connected, then run 'zpool clear'.
Any current I/O operation is suspended (and the processes waiting on them are put in uninterruptible sleep state)... Querying the pool status gives:
# zpool status san
  pool: san
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scan: resilvered 24.6M in 0h0m with 0 errors on Sun Jun 19 12:45:30 2011
config:

        NAME        STATE     READ WRITE CKSUM
        san         UNAVAIL      0     0     0
          raidz1-0  UNAVAIL      0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  REMOVED      0     0     0
            c7t2d0  REMOVED      0     0     0
            c7t3d0  ONLINE       0     0     0
Creating zvols

How will we use the storage pool? Well, this is not a critical question, although it remains important, because you can make a zvol grow on the fly (of course this makes sense only if the filesystem on it also has the capability to grow). For the demonstration we will use the following layout:
   san/os - 2TB - Operating systems local mirror (several Linux, Solaris and *BSD)
   san/homes - 1TB - Home directories of the other Linux boxes
   test - 20GB - Scrapable test data
To use the pool space in a clever manner, we will use a space allocation policy known as thin provisioning (also known as sparse volumes; notice the -s in the commands):
# zfs create -s -V 2.5t san/os
# zfs create -s -V 500g san/home-host1
# zfs create -s -V 20g san/test
# zfs list
(...)
san              3.10T  4.91T   /san
san/home-host1    516G  5.41T   -
san/os           2.58T  7.48T   -
san/test         20.6G  4.93T   -
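As mentioned above, a zvol can be grown on the fly. A hedged sketch of what that would look like later on (volsize is the relevant ZFS property, the 3t figure is just an example; the filesystem living inside the zvol still has to be grown separately):

# zfs get volsize san/os            # current virtual size of the zvol
NAME    PROPERTY  VALUE  SOURCE
san/os  volsize   2.50T  local
# zfs set volsize=3t san/os         # grow the emulated block device to 3TB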
Notice the last column in the last example: it shows no path (only a dash!) And doing ls -la in our SAN pool reports:
# ls -la /san
total 4
drwxr-xr-x   2 root root  2 2011-06-19 13:59 .
drwxr-xr-x  25 root root 27 2011-06-19 11:33 ..
This is absolutely correct: zvols are a bit different from standard datasets. If you peek at what lies under /dev you will notice a pseudo-directory named zvol. Let's see what it contains:
# ls -l /dev/zvol
total 0
drwxr-xr-x 3 root sys 0 2011-06-19 10:22 dsk
drwxr-xr-x 4 root sys 0 2011-06-19 10:34 rdsk
Ah! Interesting, what lies beneath? Let's take the red pill and dive one level deeper (we choose arbitrarily rdsk, dsk will show similar results):
# ls -l /dev/zvol/rdsk
total 0
drwxr-xr-x 6 root sys 0 2011-06-19 10:34 rpool1
drwxr-xr-x 2 root sys 0 2011-06-19 14:04 san
rpool1 corresponds to the system pool (your Solaris installation itself lives in a zpool, so you can do snapshots and rollbacks with it!) and san is our brand new storage pool. What is inside the san directory?
# ls -l /dev/zvol/rdsk/san
lrwxrwxrwx 1 root root 0 2011-06-19 14:11 home-host1 -> ../../../..//devices/pseudo/zfs@0:3,raw
lrwxrwxrwx 1 root root 0 2011-06-19 14:11 os -> ../../../..//devices/pseudo/zfs@0:4,raw
lrwxrwxrwx 1 root root 0 2011-06-19 14:11 test -> ../../../..//devices/pseudo/zfs@0:5,raw
Wow! The three zvols we have created are seen just as if they were three physical disks, but indeed they lie in a storage pool created as a RAID-Z1 array. "Just like" really means what it says, i.e. you can do everything with them just as you would do with real disks (format won't see them, however, because it just scans /dev/rdsk and not /dev/zvol/rdsk). iSCSI will bring the cherry on the sundae!
Phase 3: Bringing a bit of iSCSI magic

Solaris 11 uses a new COMSTAR implementation which differs from the COMSTAR implementation found in Solaris 10. This new version brings some slight changes, like not supporting the shareiscsi property on zvols.
iSCSI concepts overview

Before going further, you must know iSCSI a bit and grasp several fundamental concepts (it is not complicated but you need to get a precise idea of them). First, iSCSI is exactly what its name suggests: a protocol to carry SCSI commands over a network. In the TCP/IP world, iSCSI relies on TCP (and not UDP), thus splitting/retransmission/reordering of iSCSI packets are transparently handled by the magic of the TCP/IP stack. Should the network connection break, TCP, being a stateful protocol, will be aware of the situation. So no gambling on data integrity here: iSCSI disks are as reliable as if they were DAS (Directly Attached Storage) peripherals. Of course using TCP does not prevent packet tampering and man-in-the-middle attacks, just like any other protocol relying on TCP.
Initiator, target and LUN

Although iSCSI is a typical client-server architecture, the stakeholders are given the name of nodes. A node can act either as a server or as a client, or can mix both roles (use remote iSCSI disks and provide its own iSCSI disks to other iSCSI nodes).
Because your virtual SCSI cable is nothing more than a network stream, you can do whatever you want with it, starting with applying QoS (Quality of Service) policies to it in your routers, encrypting it, or sniffing it if you are curious about the details. The drawback for the end user of an iSCSI disk is the network speed: slow or overloaded networks will make I/O operations from/to iSCSI devices extremely irritating... This is not necessarily an issue for our SAN box used in a home gigabit LAN with a few machines, but it will become an issue if you intend to share your iSCSI disks over the Internet or use remote iSCSI disks over the Internet.
A bit schematic, but things work exactly that way. Oh, something the previous diagram does not illustrate: aside from emulating a remote SCSI disk, some iSCSI implementations have the capability to forward the command stream to a directly attached SCSI storage device in total transparency.
IQN vs EUI
Each initiator and target is given a unique identifier, just like a World Wide Name (WWN) in the Fibre Channel world. This identifier can adopt three different formats:
   iSCSI Qualified Name (IQN): described in RFC 3720 (http://tools.ietf.org/html/rfc3720)
   Extended-unique identifier (EUI)
   T11 Network Address Authority (NAA): described in RFC 3980 (http://tools.ietf.org/html/rfc3980), NAA identifiers bring iSCSI identifiers compatible with the naming conventions used in Fibre Channel (FC) and Serial Attached SCSI (SAS) storage technologies
IQN remains probably the most commonly seen identifier format. An IQN identifier can be up to 223 ASCII characters long and is composed of the following four parts:
   the literal iqn
   the date the naming authority took ownership of its internet domain name
   the internet domain (in reversed order) of the naming authority
   optional: a colon ":" prefixing an arbitrary storage designation
For example, all COMSTAR IQN identifiers start with iqn.1986-03.com.sun whereas Open iSCSI IQNs start with iqn.2005-03.org.open-iscsi. The following are all valid IQNs:
   iqn.2005-03.org.open-iscsi:d8d1606ef0cb
   iqn.2005-03.org.open-iscsi:storage-01
   iqn.1986-03.com.sun:01:0003ba2d0f67.46e47e5d
In the case of COMSTAR, the optional part of the IQN can have two forms (http://wikis.sun.com/display/StorageDev/IQN+Name+Format):
   :01:<MAC address>.<timestamp>[.<friendly name>] (used by iSCSI initiators)
   :02:<UUID>.<local target name> (used by iSCSI targets)
Where:
   <MAC address>: the 12 lowercase hexadecimal characters composing a 6-byte MAC address (MAC = EFGHIJ --> 45464748494a)
   <timestamp>: the lowercase hexadecimal representation of the number of seconds elapsed since January 1, 1970 at the time the name is created
   <friendly name>: an extension that provides a name meant to be meaningful to users
   <UUID>: a unique identifier consisting of 36 hexadecimal characters in the format xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
   <local target name>: a name used by the target
Making the SAN box zvols accessible through iSCSI

The following has been tested under Solaris 11 Express; various tutorials around give explanations for Solaris 10. Both versions have important differences (e.g. Solaris 11 no longer supports the shareiscsi property and uses a new COMSTAR implementation).
Installing pre-requisites
The first step is to install the iSCSI COMSTAR packages (they are not present in Solaris 11 Express, you must install them):
# pkg install iscsi/target
               Packages to install:     1
           Create boot environment:    No
                Services to restart:     1
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1       14/14      0.2/0.2

PHASE                                        ACTIONS
Install Phase                                  48/48

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2
Now enable the iSCSI Target service, if you have a look at its dependencies you will see it depends on the SCSI Target Mode Framework (STMF):
# svcs -l iscsi/target
fmri         svc:/network/iscsi/target:default
name         iscsi target
enabled      false
state        disabled
next_state   none
state_time   June 19, 2011 06:42:56 PM EDT
# svcadm -r enable iscsi/target
# svcs -l iscsi/target
fmri         svc:/network/iscsi/target:default
name         iscsi target
enabled      true
state        online
next_state   none
state_time   June 19, 2011 06:56:57 PM EDT
logfile      /var/svc/log/network-iscsi-target:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_any/error svc:/milestone/network (online)
dependency   require_all/none svc:/system/stmf:default (online)
Are the services online? Good! Notice the -r: it just asks to "enable all of the dependencies as well".
Creating the Logical Units (LU)

You have two ways of creating LUs:
   stmfadm (used here, provides more control)
   sbdadm
The very first step of exporting our zvols is to create their corresponding LUs. This is accomplished by:
# stmfadm create-lu /dev/zvol/rdsk/san/os
Logical unit created: 600144F0200ACB0000004E0505940007
# stmfadm create-lu /dev/zvol/rdsk/san/home-host1
Logical unit created: 600144F0200ACB0000004E05059A0008
# stmfadm create-lu /dev/zvol/rdsk/san/test
Logical unit created: 600144F0200ACB0000004E0505A40009
Now, just for the sake of demonstration, we will check the result with two different commands:
# sbdadm list-lu

Found 3 LU(s)

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f0200acb0000004e0505940007  2748779069440        /dev/zvol/rdsk/san/os
600144f0200acb0000004e05059a0008  536870912000         /dev/zvol/rdsk/san/home-host1
600144f0200acb0000004e0505a40009  21474836480          /dev/zvol/rdsk/san/test
# stmfadm list-lu -v
LU Name: 600144F0200ACB0000004E0505940007
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/san/os
    View Entry Count  : 1
    Data File         : /dev/zvol/rdsk/san/os
    Meta File         : not set
    Size              : 2748779069440
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
LU Name: 600144F0200ACB0000004E05059A0008
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/san/home-host1
    View Entry Count  : 1
    Data File         : /dev/zvol/rdsk/san/home-host1
    Meta File         : not set
    Size              : 536870912000
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
LU Name: 600144F0200ACB0000004E0505A40009
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/san/test
    View Entry Count  : 1
    Data File         : /dev/zvol/rdsk/san/test
    Meta File         : not set
    Size              : 21474836480
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
You can change some properties like the alias name if you wish
# stmfadm modify-lu -p alias=operating-systems 600144F0200ACB0000004E0505940007
# stmfadm list-lu -v 600144F0200ACB0000004E0505940007
LU Name: 600144F0200ACB0000004E0505940007
    Operational Status: Online
    Provider Name     : sbd
    Alias             : operating-systems
    View Entry Count  : 0
    Data File         : /dev/zvol/rdsk/san/os
    Meta File         : not set
    Size              : 2748779069440
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
GUID is the key you will use to manipulate the LUs (if you want to delete one for example).
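For instance, deleting an LU uses that same GUID; a hedged sketch (delete-lu only removes the logical unit definition, the underlying zvol and its data stay in place unless you destroy the zvol yourself):

# stmfadm delete-lu 600144F0200ACB0000004E0505A40009        # drop the LU built on top of san/test
# sbdadm list-lu                                            # it no longer appears in the listing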
Mapping concepts
To be seen by iSCSI initiators, our LUs must go through an operation called "mapping". To map our brand new LUs we can adopt two strategies:
   Make a LU visible to ALL iSCSI initiators on EVERY port (simple to set up but poor in terms of security)
   Make a LU visible to certain iSCSI initiators via certain ports (selective mapping)
Under COMSTAR, a LU mapping is done via something called a view. A view is a table consisting of several view entries, each one of those entries consisting of the association of a unique {target group (tg), initiator group (ig), Logical Unit Number (LUN)} triplet with a Logical Unit (of course, a single LU can be used by several triplets). The Solaris SCSI target mode framework command line utilities are smart enough to detect and refuse duplicate {tg,ig,lun} entries, so you don't have to worry about them. The terminology used in the COMSTAR manual pages is a bit confusing because it uses "host group" instead of "initiator group", but one and the other are the same if you read stmfadm(1M) carefully. Is a target group a set of several targets and a host (initiator) group a set of initiators? Yes, absolutely, with the following nuance: an initiator cannot be a member of two host groups. The LUN is just an arbitrary ordinal reference to the LU.
How will the view work? Let's say we have a view defined like this (this is technically not exact because we use targets and initiators rather than groups of them):

Target                                      Initiator                       LUN   Logical Unit
iqn.2015-01.com.myiscsi.server01:storageC   iqn.2015-01.com.myorg.host04    13    600144F0200ACB0000004E0505940007
iqn.2015-01.com.myiscsi.server01:storageC   iqn.2015-01.com.myorg.host04    17    600144F0200ACB0000004E05059A0008
Internally, this iSCSI server knows what real devices (or zvols) are hidden behind 600144F0200ACB0000004E0505940007 and 600144F0200ACB0000004E05059A0008 (see the previous paragraph). Suppose the initiator iqn.2015-01.com.myorg.host04 establishes a connection to the target iqn.2015-01.com.myiscsi.server01:storageC: it will be presented 2 LUNs (13 and 17). On the iSCSI client machine each one of those will appear as a distinct SCSI drive (Linux would show them as, for example, /dev/sdg and /dev/sdh, whereas Solaris would present their logical paths as /dev/c4t1d0 and /dev/c4t2d0). However, the same initiator connecting to the target iqn.2015-01.com.myiscsi.server03:storageA will have no LUNs in its line of sight.
Setting up the LU mapping

First, an iSCSI target must exist on the SAN box; the one used in the rest of this article is iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 (a sketch of its creation is given after the next command). Second, we must create the target group (view entries are composed of target groups and initiator groups), arbitrarily named sbtg-01:
# stmfadm create-tg sbtg-01
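For completeness, a hedged sketch of how the target itself can be created and added to the target group (the IQN shown is the one reused later in this article; COMSTAR requires a target to be offline before it can be added to a target group):

# itadm create-target
Target iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 successfully created
# stmfadm offline-target iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603
# stmfadm add-tg-member -g sbtg-01 iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603
# stmfadm online-target iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603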
Now the turn of the initiator group (aka the host group, the terminology is a bit confusing). To make things simple we will use two initiators:
   1. One located on the SAN Box itself
   2. One located on a remote Funtoo Linux box
The first initiator is already present on the SAN box because it was automatically created when the iSCSI packages were installed. To see it:
# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:809a71be02ff.4df93a1e
Initiator node alias: uranium
        Login Parameters (Default/Configured):
                Header Digest: NONE/-
                Data Digest: NONE/-
        Authentication Type: NONE
        RADIUS Server: NONE
        RADIUS Access: disabled
        Tunable Parameters (Default/Configured):
                Session Login Response Time: 60/-
                Maximum Connection Retry Time: 180/-
                Login Retry Time Interval: 60/-
        Configured Sessions: 1
The IQN of the second initiator (Open iSCSI is used here; the location can differ in your case if you use something else) can be found on the Funtoo box in the file /etc/iscsi/initiatorname.iscsi:
# cat /etc/iscsi/initiatorname.iscsi | grep -e "^InitiatorName"
InitiatorName=iqn.2011-06.lan.mydomain.worsktation01:openiscsi-a902bcc1d45e4795580c06b1d66b2eaf
To create a group encompassing both initiators (again the name sbhg-01 is arbitrary):
# stmfadm create-hg sbhg-01
# stmfadm add-hg-member -g sbhg-01 iqn.1986-03.com.sun:01:809a71be02ff.4df93a1e iqn.2011-06.lan.mydomain.worsktation01:openiscsi-a902bcc1d45e4795580c06b1d66b2eaf
# stmfadm list-hg -v
Host Group: sbhg-01
        Member: iqn.1986-03.com.sun:01:809a71be02ff.4df93a1e
        Member: iqn.2011-06.lan.mydomain.worsktation01:openiscsi-a902bcc1d45e4795580c06b1d66b2eaf
In this simple case we have only one initiator group and one target group, so we will expose all of our LUs through them:
# stmfadm add-view -n 10 -t sbtg-01 -h sbhg-01 600144F0200ACB0000004E0505940007
# stmfadm add-view -n 11 -t sbtg-01 -h sbhg-01 600144F0200ACB0000004E05059A0008
# stmfadm add-view -n 12 -t sbtg-01 -h sbhg-01 600144F0200ACB0000004E0505A40009
If -n were not specified, stmfadm would automatically assign a LU number (LUN). Here again, 10, 11 and 12 are arbitrary numbers (we didn't start with 0, just for the sake of the demonstration).
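You can then verify the view entries attached to a given LU with stmfadm list-view; a hedged sketch for the first LU (output layout approximate):

# stmfadm list-view -l 600144F0200ACB0000004E0505940007
View Entry: 0
    Host group   : sbhg-01
    Target group : sbtg-01
    LUN          : 10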
It is absolutely normal to get a single result here, because a single entry concerning that LU has been created. If the LU were referenced in another {tg,ig,lun} triplet you would have seen it twice. The good news: everything is in order! The bad news: we have not finished (yet) :-)
Testing from the SAN Box

Next, still on the SAN box, we enable static discovery on the local initiator and manually trace a path to the "remote" iSCSI target located on the SAN box itself:
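A hedged sketch of this static discovery configuration with the Solaris initiator (the target IQN is the one created earlier; the portal address 192.168.1.13 is an assumption based on the addresses used later in this article):

# iscsiadm add static-config iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603,192.168.1.13:3260
# iscsiadm modify discovery --static enable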
The command is a bit silent at first glance, but let's be curious and check what the kernel said at the exact moment the static discovery configuration was entered:
Jun 24 17:53:38 **** scsi: [ID 583861 kern.info] sd10 at scsi_vhci0: unit-address g600144f0200acb0000004e0505940007: f_tpgs
Jun 24 17:53:38 **** genunix: [ID 936769 kern.info] sd10 is /scsi_vhci/disk@g600144f0200acb0000004e0505940007
Jun 24 17:53:38 **** genunix: [ID 408114 kern.info] /scsi_vhci/disk@g600144f0200acb0000004e0505940007 (sd10) online
Jun 24 17:53:38 **** genunix: [ID 483743 kern.info] /scsi_vhci/disk@g600144f0200acb0000004e0505940007 (sd10) multipath status: degraded
Jun 24 17:53:38 **** scsi: [ID 583861 kern.info] sd11 at scsi_vhci0: unit-address g600144f0200acb0000004e05059a0008: f_tpgs
Jun 24 17:53:38 **** genunix: [ID 936769 kern.info] sd11 is /scsi_vhci/disk@g600144f0200acb0000004e05059a0008
Jun 24 17:53:38 **** genunix: [ID 408114 kern.info] /scsi_vhci/disk@g600144f0200acb0000004e05059a0008 (sd11) online
Jun 24 17:53:38 **** genunix: [ID 483743 kern.info] /scsi_vhci/disk@g600144f0200acb0000004e05059a0008 (sd11) multipath status: degraded
Jun 24 17:53:38 **** scsi: [ID 583861 kern.info] sd12 at scsi_vhci0: unit-address g600144f0200acb0000004e0505a40009: f_tpgs
Jun 24 17:53:38 **** genunix: [ID 936769 kern.info] sd12 is /scsi_vhci/disk@g600144f0200acb0000004e0505a40009
Jun 24 17:53:38 **** genunix: [ID 408114 kern.info] /scsi_vhci/disk@g600144f0200acb0000004e0505a40009 (sd12) online
Jun 24 17:53:38 **** genunix: [ID 483743 kern.info] /scsi_vhci/disk@g600144f0200acb0000004e0505a40009 (sd12) multipath status: degraded
Wow, a lot of useful information here:
   1. it says that the target iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 is online
   2. it shows the 3 LUNs (disks) for that target (did you notice the numbers 10, 11 and 12? They are the same ones we used when defining the view)
   3. it says the "multipath" status is in a "degraded" state; at this point this is absolutely normal as we have only one path to the 3 "remote" disks
And the ultimate test: what would format say? Test it!
# format
Searching for disks...done

c0t600144F0200ACB0000004E0505940007d0: configured with capacity of 2560.00GB


AVAILABLE DISK SELECTIONS:
       0. c0t600144F0200ACB0000004E0505A40009d0 <SUN-COMSTAR-1.0 cyl 2607 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f0200acb0000004e0505a40009
       1. c0t600144F0200ACB0000004E05059A0008d0 <SUN-COMSTAR-1.0 cyl 65267 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f0200acb0000004e05059a0008
       2. c0t600144F0200ACB0000004E0505940007d0 <SUN-COMSTAR-1.0-2.50TB>
          /scsi_vhci/disk@g600144f0200acb0000004e0505940007
       3. c7t0d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@0,0
       4. c7t1d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@1,0
       5. c7t2d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@2,0
       6. c7t3d0 <ATA-WDC WD30EZRS-00J-0A80-2.73TB>
          /pci@0,0/pci1002,4393@11/disk@3,0
       7. c7t4d0 <ATA-OCZ-AGILITY2-1.32 cyl 3914 alt 2 hd 255 sec 63>
          /pci@0,0/pci1002,4393@11/disk@4,0
Specify disk (enter its number):
Perfect! You can try to select a disk then partition it if you wish.
When selecting an iSCSI disk, you will have to use the fdisk option first if Solaris complains with: "WARNING - This disk may be in use by an application that has modified the fdisk table. Ensure that this disk is not currently in use before proceeding to use fdisk." For some reason (not related to thin provisioning), some iSCSI drives appeared with a corrupted partition table.
Now we have a functional Solaris box with several emulated volumes accessed just as if they were DAS (Direct Attached Storage) SCSI hard disks. Now the $100 question: "Once a filesystem has been created on the remote iSCSI volume, can I mount it from several hosts of my network?". The answer is: yes you can, but only one of them must have the volume mounted read-write if you intend to use a traditional filesystem (ZFS, UFS, EXT2/3/4, Reiser, BTRFS, JFS...) on it. Things are different if you intend to use the remote iSCSI disks for a cluster and put a filesystem like GFS/OCFS2 or Lustre on them. Cluster-dedicated filesystems like GFS/OCFS2/Lustre are designed from the ground up to support concurrent read-write access (distributed locking) from several hosts and are smart enough to avoid data corruption. Just to be sure you are aware:
Do not mount an iSCSI remote disk in read-write mode from several hosts unless you intend to use the disk with a filesystem dedicated to clusters like GFS/OCFS2 or Lustre. Not respecting this principle will get your data corrupted.
If you need to share the content of your newly created volume between several hosts of your network, the safest way is to connect to the iSCSI volume from one of your hosts and then configure an NFS share on this host. An alternative strategy is, on the SAN Box, to set the sharenfs property (zfs set sharenfs=on /path/of/zvol) on the zvol and mount this NFS share from all of your client hosts. This technique supposes you have formatted the zvol with a filesystem readable by Solaris (UFS or ZFS). Remember: UFS is not endianness neutral; a UFS filesystem created on a SPARC machine won't be readable on an x86 machine and vice-versa. For us here, NFS has limited interest in the present context :-)
Phase 4: Mounting a remote volume from a Funtoo Linux box

Installing requirements

You have at least two choices for setting up an iSCSI initiator:
   sys-block/iscsi-initiator-core-tools
   sys-block/open-iscsi (use at least version 2.0.872: it contains several bug fixes and includes changes required to be used with Linux 2.6.39 and Linux 3.0). We will use this alternative here.
Quoting the Open iSCSI README (http://www.open-iscsi.org/docs/README), Open iSCSI is composed of two parts:
   some kernel modules (built-in in the Linux kernel sources)
   several userland bits:
      A management tool (iscsiadm) for the persistent Open iSCSI database
      A daemon (iscsid) that handles all of the initiator iSCSI magic in the background for you. Its role is to implement the control path of the iSCSI protocol, plus to provide some management facilities like automatically re-starting discovery at startup, based on the contents of the persistent iSCSI database
      An IQN identifier generator (iscsi-iname)
Linux kernel configuration

We suppose your Funtoo box is correctly set up and has operational networking. The very first thing to do is to reconfigure your Linux kernel to enable iSCSI:
   under SCSI low-level drivers, activate iSCSI Initiator over TCP/IP (CONFIG_ISCSI_TCP). This is the Open iSCSI kernel part
   under Cryptographic options, activate the Cryptographic API, and under Library routines, CRC32c (Castagnoli, et al) Cyclic Redundancy-Check should already have been enabled
A sketch of the corresponding options is given just after this list.
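A minimal sketch of the matching .config fragment (option names are the standard upstream ones; menu locations may vary slightly between kernel versions, and the module/built-in choice is up to you):

# SCSI low-level drivers
CONFIG_ISCSI_TCP=m          # iSCSI Initiator over TCP/IP (can also be built-in, =y)
# Cryptographic API / library routines
CONFIG_CRYPTO=y
CONFIG_CRYPTO_CRC32C=y      # CRC32c, used for the iSCSI digests
CONFIG_LIBCRC32C=y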
Open iSCSI does not use the term node as defined by the iSCSI RFC, where a node is a single iSCSI initiator or target. Open iSCSI uses the term node to refer to a portal on a target, so tools like iscsiadm require that --targetname and --portal argument be used when in node mode.
Configuring the Open iSCSI initiator

The Open iSCSI configuration lives under /etc/iscsi and contains the following items:
   ifaces: used to bind the Open iSCSI initiator to a specific NIC (created for you by iscsid on its first execution)
   initiatorname.iscsi.example: a template used to configure the name (IQN) of the iSCSI initiator (copy it as initiatorname.iscsi if the latter does not exist)
   iscsid.conf: various configuration parameters (leave the defaults for now)
The following directories store what is called the persistent database in Open iSCSI terminology:
   nodes (aka the nodes database): stores the node connection parameters per remote iSCSI target and portal (created for you; you can then change the values)
   send_targets (aka the discovery database): stores what the iscsi daemon discovers using a discovery protocol, per portal and remote iSCSI target. Most of the files there are symlinks to the files lying in the /etc/iscsi/nodes directory, plus a file named st_config which stores various parameters.
Out of the box, you don't have to do heavy tweaking. Just make sure that initiatorname.iscsi exists and contains the exact same initiator identifier you specified when defining the Logical Units (LUs) mapping.
Spawning iscsid

Now start the Open iSCSI daemon through its init script (/etc/init.d/iscsid start). Most of the startup steps report [ ok ], but the last one ends with [ !! ]:
At this point iscsid complains about not having any record telling about iSCSI targets in its persistent database, this is absolutely normal.
Querying the SAN box for available targets

What will the Funtoo box see when initiating a discovery session on the SAN box? Simple, just ask :-)
# iscsiadm -m discovery -t sendtargets -p 192.168.1.14
192.168.1.14:3260,1 iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603
(...)
Here we get 3 results, because the connection listener used by the remote target listens on the two NICs of the SAN box using its IPv4 stack and also on a public IPv6 address (numbers masked for privacy reasons). In that case, querying one of the other addresses (192.168.1.13 or 2607:fa48:****:****:****:****:****:****) would give identical results. iscsiadm will record the returned information in its persistent database. The ",1" in the above result is just an ordinal number (the target portal group tag); it is not related to the LUNs living inside the target.
We used the "send targets" discovery protocol; it is however possible to use iSNS or a static discovery (the latter is called a custom iSCSI portal in Open iSCSI terminology).
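Just to illustrate the static alternative, here is a sketch of how the same target could be recorded by hand in the nodes database, without running a discovery (reusing the IQN and portal returned above):

# iscsiadm -m node -o new -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 -p 192.168.1.14:3260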
Now the most exciting part! We will initiate an iSCSI session on the remote target ("logging in" to the remote target). Since we didn't activate CHAP authentication on the remote target, the process is straightforward:
# iscsiadm -m node -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 -p 192.168.1.13 --login Logging in to [iface: default, target: iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603, portal: 192.168.1.13,326 Login to [iface: default, target: iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603, portal: 192.168.1.13,3260] su
You must use the portal reference exactly as shown by the discovery command above, because Open iSCSI recorded it and will look it up in its persistent database. Using, for example, an IP address where a hostname was recorded will make Open iSCSI complain ("iscsiadm: no records found!").
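If you have a doubt about what exactly was recorded, list the node records (and, once logged in, the active sessions):

# iscsiadm -m node
# iscsiadm -m session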
Loading iSCSI transport class v2.0-870.
iscsi: registered transport (tcp)
scsi7 : iSCSI Initiator over TCP/IP
scsi 7:0:0:10: Direct-Access     SUN      COMSTAR          1.0  PQ: 0 ANSI: 5
sd 7:0:0:10: Attached scsi generic sg4 type 0
scsi 7:0:0:11: Direct-Access     SUN      COMSTAR          1.0  PQ: 0 ANSI: 5
sd 7:0:0:11: Attached scsi generic sg5 type 0
sd 7:0:0:10: [sdc] 5368709120 512-byte logical blocks: (2.74 TB/2.50 TiB)
sd 7:0:0:11: [sdd] 1048576000 512-byte logical blocks: (536 GB/500 GiB)
scsi 7:0:0:12: Direct-Access     SUN      COMSTAR          1.0  PQ: 0 ANSI: 5
sd 7:0:0:12: Attached scsi generic sg6 type 0
sd 7:0:0:10: [sdc] Write Protect is off
sd 7:0:0:10: [sdc] Mode Sense: 53 00 00 00
sd 7:0:0:11: [sdd] Write Protect is off
sd 7:0:0:11: [sdd] Mode Sense: 53 00 00 00
sd 7:0:0:12: [sde] 41943040 512-byte logical blocks: (21.4 GB/20.0 GiB)
sd 7:0:0:10: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 7:0:0:11: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 7:0:0:12: [sde] Write Protect is off
sd 7:0:0:12: [sde] Mode Sense: 53 00 00 00
sd 7:0:0:12: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdc: unknown partition table
 sdd: sdd1
 sde: sde1
sd 7:0:0:10: [sdc] Attached SCSI disk
sd 7:0:0:11: [sdd] Attached SCSI disk
sd 7:0:0:12: [sde] Attached SCSI disk
If you don't see the logical units appearing, check on the remote side that you have added the initiator to the host group when defining the view entries.
Something we have not dived into so far: if you explore the /dev directory of your Funtoo box, you will notice a sub-directory named disk. The content of this directory is not maintained by Open iSCSI but by udev, and describes what the latter knows about all "SCSI" (iSCSI, SATA, SCSI...) disks:
# ls -l /dev/disk
total 0
drwxr-xr-x 2 root root 700 Jun 25 09:24 by-id
drwxr-xr-x 2 root root  60 Jun 22 04:04 by-label
drwxr-xr-x 2 root root 320 Jun 25 09:24 by-path
drwxr-xr-x 2 root root 140 Jun 22 04:04 by-uuid
The directories are self-explanatory:

by-id: stores the devices by their hardware identifier. You will notice the devices have been identified in several ways, notably as if they were Fibre Channel/SAS devices (presence of WWN entries). For local disks, the WWN is not random but corresponds to the WWN shown when you query the disk with hdparm -I (see the "Logical Unit WWN Device Identifier" section in the hdparm output, and the example after this list). For iSCSI disks, the WWN corresponds to the GUID attributed by COMSTAR when the logical unit was created (see the Creating the Logical Units section in the paragraphs above).
# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Jun 22 04:04 ata-HL-DT-ST_BD-RE_BH10LS30_K9OA4DH1606 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jun 22 04:04 ata-LITE-ON_DVD_SHD-16S1S -> ../../sr1
...
lrwxrwxrwx 1 root root  9 Jun 25 09:24 scsi-3600144f0200acb0000004e0505940007 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 25 09:24 scsi-3600144f0200acb0000004e05059a0008 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 25 09:24 scsi-3600144f0200acb0000004e0505a40009 -> ../../sde
....
lrwxrwxrwx 1 root root  9 Jun 22 04:04 wwn-0x**************** -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 22 08:04 wwn-0x****************-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 22 04:04 wwn-0x****************-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 22 04:04 wwn-0x****************-part4 -> ../../sda4
lrwxrwxrwx 1 root root  9 Jun 25 09:24 wwn-0x600144f0200acb0000004e0505940007 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 25 09:24 wwn-0x600144f0200acb0000004e05059a0008 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 25 09:24 wwn-0x600144f0200acb0000004e0505a40009 -> ../../sde
by-label: stores the storage devices by their label (volume name). Optical media and ext2/3/4 partitions (tune2fs -L ...) are typically labelled. If a device has not been given a label, it won't show up there:
# ls -l /dev/disk/by-label total 0 lrwxrwxrwx 1 root root 9 Jun 22 04:04 DATA_20110621 -> ../../sr1 lrwxrwxrwx 1 root root 10 Jun 25 10:41 boot-partition -> ../../sda1
by-path: stores the storage devices by their physical path. For DAS (Direct Attached Storage) devices, the path refers to something under /sys. For iSCSI devices, the path is built from the name of the target where the logical unit resides and its LUN (notice the LUN number: it is the same one you attributed when creating the view on the remote iSCSI node):
# ls -l /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root  9 Jun 25 09:24 ip-192.168.1.13:3260-iscsi-iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603-lun-10 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jun 25 09:24 ip-192.168.1.13:3260-iscsi-iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603-lun-11 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jun 25 09:24 ip-192.168.1.13:3260-iscsi-iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603-lun-12 -> ../../sde
lrwxrwxrwx 1 root root  9 Jun 22 04:04 pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 25 10:41 pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 22 04:04 pci-0000:00:1f.2-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 22 04:04 pci-0000:00:1f.2-scsi-0:0:0:0-part4 -> ../../sda4
lrwxrwxrwx 1 root root  9 Jun 22 04:04 pci-0000:00:1f.2-scsi-0:0:1:0 -> ../../sr0
by-uuid: a bit similar to /dev/disk/by-label, but disks are reported by their UUID (if defined). In the following example, only /dev/sda has some UUIDs defined (this disk holds several BTRFS partitions which were automatically given a UUID at creation time):
# ls -l /dev/disk/by-uuid total 0 lrwxrwxrwx 1 root root 10 Jun 22 04:04 01178c43-7392-425e-8acf-3ed16ab48813 -> ../../sda4 lrwxrwxrwx 1 root root 10 Jun 22 04:04 1701af39-8ea3-4463-8a77-ec75c59e716a -> ../../sda2 lrwxrwxrwx 1 root root 10 Jun 25 10:41 5432f044-faa2-48d3-901d-249275fa2976 -> ../../sda1
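To illustrate the by-id remark made above about local disks, you can compare a disk's WWN as reported by hdparm with its /dev/disk/by-id entry (a sketch using /dev/sda, the local system disk; the exact output layout depends on your hdparm version):

# hdparm -I /dev/sda | grep -A 3 'WWN'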
For storage devices with a capacity above 2.19 TB, do not use traditional partitioning tools like fdisk/cfdisk; use a GPT partitioning tool like gptfdisk (sys-apps/gptfdisk).
At this point the most difficult part has been done: you can now use the remote iSCSI disks just as if they were DAS devices. Let's demonstrate with one of them:
# gdisk /dev/sdc
GPT fdisk (gdisk) version 0.7.1

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-5368709086, default = 34) or {+-}size{KMGTP}:
Information: Moved requested sector from 34 to 2048 in order to
align on 2048-sector boundaries.
Use 'l' on the experts' menu to adjust alignment
Last sector (2048-5368709086, default = 5368709086) or {+-}size{KMGTP}:
Current type is 'Linux/Windows data'
Hex code or GUID (L to show codes, Enter = 0700):
Changed type of partition to 'Linux/Windows data'

Command (? for help): p
Disk /dev/sdc: 5368709120 sectors, 2.5 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6AEC03B6-0047-44E1-B6C7-1C0CBC7C4CE6
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5368709086
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      5368709086   2.5 TiB     0700  Linux/Windows data

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

Do you want to proceed? (Y/N): yes
OK; writing new GUID partition table (GPT).
The operation has completed successfully.

# gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.7.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.

Disk /dev/sdc: 5368709120 sectors, 2.5 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6AEC03B6-0047-44E1-B6C7-1C0CBC7C4CE6
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5368709086
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      5368709086   2.5 TiB     0700  Linux/Windows data
Notice that COMSTAR emulates a disk with the traditional 512 bytes-per-sector division. Now have a look again at /dev/disk/by-id:
# ls -l /dev/disk/by-id ... lrwxrwxrwx 1 root root 9 Jun 25 13:23 wwn-0x600144f0200acb0000004e0505940007 -> ../../sdc lrwxrwxrwx 1 root root 10 Jun 25 13:33 wwn-0x600144f0200acb0000004e0505940007-part1 -> ../../sdc1 ...
What would be the next step? Creating a filesystem on the brand new partition of course!
# mkfs.ext4 /dev/sdc1
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
134217728 inodes, 536870911 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or 180 days,
whichever comes first. Use tune2fs -c or -i to override.
If you prefer, you can specify the partition through its WWN (it is just a symbolic link to /dev/sdX): mkfs.ext4 /dev/disk/by-id/wwn-0x600144f0200acb0000004e0505940007-part1 in our case.
Just like with any DAS device, you can now mount the partition of the remote iSCSI disk in the VFS:
# mkdir /san/os
# mount /dev/disk/by-id/wwn-0x600144f0200acb0000004e0505940007-part1 /san/os
# df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                406G  216G  150G  59% /
/dev/root             406G  216G  150G  59% /
rc-svcdir             1.0M  140K  884K  14% /libexec/rc/init.d
udev                   10M  440K  9.6M   5% /dev
shm                   5.9G     0  5.9G   0% /dev/shm
...
/dev/sdc1             2.0T  199M  1.9T   1% /san/os
Et voila! :-) You now have an operational remote iSCSI disk, protected against data corruption, in total transparency for your client iSCSI nodes. It will not be as fast as a local disk with the kind of setup we use[4], but it is enough for home usage and not so bad considering the relatively modest hardware we have. The price to pay for a low-power home storage box is speed...
Unmounting is done in the exact same way as with a DAS device, e.g.:
# umount /dev/sde1
Once you no longer use any of the LUs in a target, you may want to "disconnect" from the remote target. This is called "logging out" and is simply accomplished by:
# iscsiadm -m node -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 -p 192.168.1.13 --logout
After logging out, the iSCSI remote disks no longer appear in /dev (check in ls /dev/disk/by-id, for example).
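If you want the session to be re-established automatically the next time iscsid starts (and hence at boot, once iscsid sits in the default runlevel), you can switch the corresponding node record to automatic startup. A sketch, reusing our target and portal:

# iscsiadm -m node -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 -p 192.168.1.13 -o update -n node.startup -v automatic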
An extremely cool feature of ZFS is the very easy way it handles snapshots. If you are not familiar with the concept, a snapshot is just a (coherent) photograph of the content of a filesystem taken at a given time. On copy-on-write filesystems like ZFS and BTRFS, snapshots consume very little physical space at the time you create them and then "grow" due to the activity on the filesystem they relate to (files being created/modified/deleted). In fact it is not the snapshot that grows: the divergence between the filesystem state and the snapshot simply becomes more and more important as time goes on. Another fundamental difference between a ZFS snapshot and the snapshotting capability of Linux LVM: while the first takes nearly zero space, the other is a physical copy of the whole LVM logical volume... The cherry on the sundae with ZFS is that you can not only take a snapshot of a regular dataset, you can also take a snapshot of a zvol! Moreover, although zvols are emulated block devices (ZFS has no idea about their content, it only sees data blocks), ZFS can snapshot them with the same performance level it has with regular ZFS datasets. In short: you can take dozens and dozens of snapshots of a zvol, ZFS won't duplicate any of the zvol data blocks beyond what is strictly needed.
ZFS and iSCSI are two hermetically separated worlds with no communication between them. When taking a snapshot, ZFS has absolutely no idea of what iSCSI is doing and vice versa: you can be in the middle of writing a file while snapshotting it (making it corrupted in the snapshot, a bit paradoxical for something that is supposed to be a lifeboat!). Therefore you must take extra precautions by yourself, like unmounting all of the partitions located on the zvol you want to snapshot from all iSCSI client nodes. That way you have the guarantee that no file will be snapshotted "in the middle" of a write operation.
Taking a snapshot of a zvol is done in the exact same manner as with a regular ZFS dataset. Suppose we want to take, on the SAN box, a snapshot of the zvol san/test (the snapshot is named snap1-20110626T0928EDT):
# zfs snapshot san/test@snap1-20110626T0928EDT
If you now list the datasets on the SAN box with zfs list, you will see the san pool (mounted on /san) and its zvols (which have no mountpoint), but no trace of the snapshot we just took.
No names containing an at sign (@) in the list! This is not a bug but a little change in the behaviour of the zfs command. You can see snapshots by adding -t snapshot, or -t all if you want an all-in-one list:
# zfs list -t snapshot
NAME                              USED  AVAIL  REFER  MOUNTPOINT
...
san/test@snap1-20110626T0928EDT   174K      -  23.9K  -
....
Restoring the iSCSI disk from one of its snapshots (rolling back) is just as simple:
# zfs rollback san/test@snap1-20110626T0928EDT
When rolling back a zvol from a snapshot, unmount all of the iSCSI disk partitions from all iSCSI client nodes first. Not doing so will lead to silent and severe data corruption on the rolled-back zvol! Remember you are working with an emulated raw block device: ZFS sees nothing but raw data blocks and there is no collaboration between iSCSI and ZFS.
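Putting the pieces together, here is a sketch of a safe rollback, assuming (purely for the sake of the example) that the zvol san/test is the one backing the LU we mounted on /san/os earlier:

(on the Funtoo box)
# umount /san/os
# iscsiadm -m node -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 -p 192.168.1.13 --logout

(on the SAN box)
# zfs rollback san/test@snap1-20110626T0928EDT

(on the Funtoo box again)
# iscsiadm -m node -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 -p 192.168.1.13 --login
# mount /dev/disk/by-id/wwn-0x600144f0200acb0000004e0505940007-part1 /san/os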
We used full snapshots in the above lines; it is also possible to use incremental snapshots. If you have a second storage box, you can even stream the snapshots to it, just in case.
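A sketch of what that could look like with zfs send/receive (the host name backupbox and the pool backup are purely hypothetical, and an initial full send must already have seeded backup/test for the incremental stream to be accepted):

# zfs snapshot san/test@snap2-20110627T1000EDT
# zfs send -i san/test@snap1-20110626T0928EDT san/test@snap2-20110627T1000EDT | ssh backupbox zfs receive backup/test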
tbc.
Although our SAN box has two NICs, and several of its client machines also have dual NICs on their motherboard, we didn't use link aggregation on the SAN box and the other hosts, for a precise reason: Solaris is not able to spread layer 2 packets (Ethernet frames) in round-robin between several physical interfaces (Linux can do that). However this is not an issue specific to Solaris: the 802.3ad standard states that all packets composing a transmission must travel in sequence over the same link. For a SAN storage box this is a bit annoying, because only a few connections are made to the box and only very few of them are really active: it is likely that only one NIC at a time will be heavily solicited while the other remains idle. (We could have used a fail-over configuration, but it is a bit overkill for a home storage box.) Fortunately for us, the iSCSI protocol designers were aware of that problem and a solution exists: the iSCSI protocol has the capability of spreading a single iSCSI session over several TCP connections established over several physical interfaces, thus allowing iSCSI blocks to be sent over all of them in round-robin. Of course this requires setting up multipathing both on the SAN box and on every machine of the network that wants that feature.
Activating multi-pathing
So far so good! You may have noticed that iSCSI is a cool concept but can be painfully slow, because it generates a lot of network traffic (professional networks dedicate NICs exclusively to the storage traffic), and worse, this traffic travels over a single link for a given iSCSI session. Link aggregation at layer 2 is of no help here: doing layer 2 round-robin is a violation of the 802.3ad standard, and Solaris does not support round-robin with Ethernet frames anyway. However it is possible to get iSCSI to send its blocks in round-robin. How? Simple: the secret lies in the iSCSI specification itself, which states that an iSCSI session can be spread over several TCP connections in parallel. If those TCP channels are established through several different NICs, thus following physically distinct paths (just like over a parallel bus), it is very likely that both NICs of the SAN box will end up equally used (load balancing). The challenge at this point is to set up something known as multipathing. Multipathing is also very commonly used for fail-over configurations (if a physical link between two nodes breaks for some reason, the communication is switched to the other one; Fibre Channel networks make aggressive use of this). Technically speaking, iSCSI supports multipathing natively; however, iSCSI implementations supporting that feature are rare, not to say nonexistent. Fortunately, Solaris has an exit door: MPxIO, a Solaris system layer that handles multipathed access to a storage device and presents the multiple paths as a single device to the upper layers.
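On the Linux side, Open iSCSI does not implement multiple connections per session; what it offers instead is one session per NIC, which you can then aggregate with dm-multipath. Purely as a sketch of that alternative (eth0 and eth1 are hypothetical interface names, and the Solaris/MPxIO side still has to be configured on the SAN box):

# iscsiadm -m iface -o new -I iface-eth0
# iscsiadm -m iface -o update -I iface-eth0 -n iface.net_ifacename -v eth0
# iscsiadm -m iface -o new -I iface-eth1
# iscsiadm -m iface -o update -I iface-eth1 -n iface.net_ifacename -v eth1
# iscsiadm -m discovery -t sendtargets -p 192.168.1.14
# iscsiadm -m node -T iqn.1986-03.com.sun:02:2e5faacb-4bdf-4f7f-e643-ebc8bf856603 --login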
The LU logical block size is not to be confused with the block size used by the zvol on the zpool (generally 8k by default).
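Both block sizes are chosen at creation time. A sketch of creating a zvol with a larger volume block size and exporting it with a 4 KiB logical unit block size (the zvol name san/bigfiles is hypothetical; blk is the COMSTAR block size property of stmfadm create-lu):

(on the SAN box)
# zfs create -V 100G -o volblocksize=64k san/bigfiles
# stmfadm create-lu -p blk=4096 /dev/zvol/rdsk/san/bigfiles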
If you use Ethernet jumbo frames, make sure every piece of equipment involved, including the Ethernet switches you use, supports them.
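A sketch of what the MTU change looks like on both sides (the interface name eth0 and the data link name net0 are hypothetical; on Solaris, changing the MTU of a link may require its IP interface to be unplumbed first):

(on the Funtoo box)
# ip link set eth0 mtu 9000

(on the SAN box)
# dladm set-linkprop -p mtu=9000 net0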
1. You can, as an alternative, grab the Solaris Express 11 live CD image and install the system from there. The benefit is that you get a pre-installed GNOME environment (it can also be installed with the textual boot CD by doing pkg install slim_install; about 400 MB will be downloaded from the package archives). Note that several alternatives to Oracle Solaris Express 11 exist, notably OpenIndiana (http://openindiana.org) and Illumos (http://www.illumos.org).