Highly Available iSCSI Storage with DRBD and Pacemaker on

RHEL 8
Brian Hellman;Hayley Swimelar;Matt Kereczman;David Thomas
Version 1.0, 2020-03-05
Table of Contents
1. Introduction
2. Assumptions
   2.1. System Configurations
   2.2. Firewall Configuration
   2.3. SELinux
3. Installation and Configuration
   3.1. Register Nodes and Repository Configuration
   3.2. Installing DRBD
   3.3. Configuring DRBD
   3.4. Installing Pacemaker and Corosync
   3.5. Additional Cluster-Level Configuration
   3.6. Installing TargetCLI
   3.7. Creating an Active/Passive iSCSI Configuration
   3.8. Creating an Active/Active iSCSI Configuration
4. Security Considerations and Data Integrity
   4.1. Restricting Target Access by Initiator Address
   4.2. Dual-Primary Multipath iSCSI
5. Setting Configuration Parameters
6. Using Highly Available iSCSI Targets
   6.1. Connecting to iSCSI Targets From Linux
7. Feedback
Appendix A: Additional Information and Resources
Appendix B: Legalese
   B.1. Trademark Notice
   B.2. License Information


Chapter 1. Introduction
This guide outlines the configuration of a highly available iSCSI storage cluster using DRBD, Pacemaker, and Corosync.
iSCSI provides access to block devices via TCP/IP, which allows storage to be accessed remotely using standard
networks.

An iSCSI initiator (client) connects to an iSCSI target (server) and accesses a Logical Unit Number (LUN). Once a LUN is
connected, it will function as a normal block device to the client.

This guide was written for RHEL 8, using the cluster stack software as packaged by LINBIT.


Chapter 2. Assumptions
This guide assumes the following:

2.1. System Configurations


Hostname | LVM Device | Volume Group | DRBD Device | External Interface | External IP | Crossover Interface | Crossover IP
node-a   | /dev/sdb   | vg_drbd      | lv_r0       | eth0               | [Link]      | eth1                | [Link]
node-b   | /dev/sdb   | vg_drbd      | lv_r0       | eth0               | [Link]      | eth1                | [Link]

We’ll need a virtual IP for services to run on. For this guide we will use [Link].

2.2. Firewall Configuration


Refer to your firewall documentation for how to open/allow ports. You will need the following ports open in order for
your cluster to function properly.

Component | Protocol | Port
DRBD      | TCP      | 7788
Corosync  | UDP      | 5404, 5405
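With firewalld (the RHEL 8 default), the table above translates into a handful of firewall-cmd invocations. The sketch below prints the commands for review rather than executing them; drop the echo to apply them:

```shell
# Print the firewall-cmd invocations for the DRBD and Corosync ports.
# Shown as a review step; remove "echo" to actually open the ports.
for port in 7788/tcp 5404/udp 5405/udp; do
  echo firewall-cmd --permanent --add-port="$port"
done
echo firewall-cmd --reload
```

If you use a different firewall, translate the same port list into its own syntax.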

2.3. SELinux
If you have SELinux enabled, and you’re having issues, consult your distribution’s documentation for how to properly
configure it, or disable it (not recommended).


Chapter 3. Installation and Configuration


3.1. Register Nodes and Repository Configuration
We will install DRBD from LINBIT’s repositories. To access those repositories you will need to have been set up in
LINBIT’s system and have access to the LINBIT customer portal.

Once you have access to the customer portal, you can register and configure your node’s repository access by using
the Python command line tool outlined in the "REGISTER NODES" section of the portal.

To register the cluster nodes and configure LINBIT’s repositories, run the following on all nodes, one at a time:

# curl -O [Link]
# chmod +x ./[Link]
# ./[Link]


If no python interpreter found :-( is displayed when running the script, install Python 3 using the following command: # dnf install python3.

The script will prompt you for your LINBIT portal username and password. Once provided, it will list cluster nodes
associated with your account (none at first).

After you tell the script which cluster to register the node with, you will be asked a series of questions regarding which
repositories you’d like to enable.

Be sure to say yes to the questions regarding installing LINBIT’s public key to your keyring and writing the repository
configuration file.

After that, you should be able to run # dnf info kmod-drbd and see dnf pulling package information from LINBIT’s
repository.


Before installing packages, make sure to only pull the cluster stack packages from LINBIT’s
repositories.

To ensure we only pull cluster packages from LINBIT, we will need to add the following exclude line to our repository
files:

exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*
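Editing several repository files by hand is error-prone. As a sketch, the following appends the exclude line to every section of a yum-style .repo file; it is demonstrated on a throwaway sample file, so point it at your real repository files only after reviewing the result:

```shell
# Append the exclude line to every section of a yum-style .repo file.
# Demonstrated on a temporary sample file, not your live configuration.
tmp=$(mktemp -d)
cat > "$tmp/sample.repo" <<'EOF'
[BaseOS]
enabled=1

[AppStream]
enabled=1
EOF
ex='exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*'
awk -v ex="$ex" '
  /^\[/ && NR > 1 { print ex }  # exclude line closes the previous section
  { print }
  END { print ex }              # and the last section
' "$tmp/sample.repo" > "$tmp/sample.repo.new"
grep -c '^exclude=' "$tmp/sample.repo.new"   # prints 2, one per section
```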

3.1.1. RHEL 8 Repository Configuration


The x86_64 architecture repositories are used in the following examples. Adjust accordingly if
your system architecture is different.

Add the exclude line to both the [rhel-8-for-x86_64-baseos-rpms] and [rhel-8-for-x86_64-appstream-rpms]
repositories. The default location for all repositories in RHEL 8 is /etc/[Link].d/[Link]. The modified
repository configuration should look like this:


# '/etc/[Link].d/[Link]' example:

[rhel-8-for-x86_64-baseos-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
baseurl = [Link]
enabled = 1
gpgcheck = 1
gpgkey = [Link]
sslverify = 1
sslcacert = /etc/rhsm/ca/[Link]
sslclientkey = /etc/pki/entitlement/<your_key_here>.pem
sslclientcert = /etc/pki/entitlement/<your_cert_here>.pem
metadata_expire = 86400
enabled_metadata = 1
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

[rhel-8-for-x86_64-appstream-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
baseurl = [Link]
enabled = 1
gpgcheck = 1
gpgkey = [Link]
sslverify = 1
sslcacert = /etc/rhsm/ca/[Link]
sslclientkey = /etc/pki/entitlement/<your_key_here>.pem
sslclientcert = /etc/pki/entitlement/<your_cert_here>.pem
metadata_expire = 86400
enabled_metadata = 1
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

If the Red Hat High Availability Add-On is enabled, either add the exclude line to the
[rhel-8-for-x86_64-highavailability-rpms] section or consider disabling the repository. LINBIT
provides most of the packages available in the HA repository.

3.1.2. CentOS 8 Repository Configuration


Add the exclude line to both the [BaseOS] section of /etc/[Link].d/[Link] and the
[AppStream] section of /etc/[Link].d/[Link]. The modified
repository configuration should look like this:

# /etc/[Link].d/[Link] example:

[BaseOS]
name=CentOS-$releasever - Base
mirrorlist=[Link]
#baseurl=[Link]
gpgcheck=1
enabled=1
gpgkey=[Link]
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*


# /etc/[Link].d/[Link] example:

[AppStream]
name=CentOS-$releasever - AppStream
mirrorlist=[Link]
#baseurl=[Link]
gpgcheck=1
enabled=1
gpgkey=[Link]
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

If the [HighAvailability] repo is enabled in /etc/[Link].d/[Link],
either add the exclude line to the [HighAvailability] section or consider disabling the
repository. LINBIT provides most of the packages available in the HA repository.

3.2. Installing DRBD


Install DRBD using the following command:

# dnf install drbd kmod-drbd crmsh

Now prevent DRBD from starting at boot; Pacemaker will be responsible for starting the DRBD service:

# systemctl disable drbd

3.3. Configuring DRBD


Now that we’ve installed DRBD, we’ll need to create our resource file. To do this, create /etc/drbd.d/[Link] with the
following contents:

resource r0 {
  protocol C;
  device /dev/drbd0;
  disk /dev/vg_drbd/lv_r0;
  meta-disk internal;
  on node-a {
    address [Link]:7788;
  }
  on node-b {
    address [Link]:7788;
  }
}


This is a bare-bones configuration file. There are many settings you can tune to increase
performance. See the DRBD User’s Guide for more information.

Create the metadata by issuing the following command:


# drbdadm create-md r0
initializing activity log
initializing bitmap (32 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success


This command should complete without any warnings. If you get messages about existing data being
detected and choose to proceed anyway, you will lose that data.

Bring the device up on both nodes and verify their state by typing the following:

# drbdadm up r0
# drbdadm status
r0 role:Secondary
  disk:Inconsistent
  node-b role:Secondary
  peer-disk:Inconsistent

You can see in the above output that the resource is connected, but in an inconsistent state. In order to have your data
replicated, you’ll need to put the resource into a consistent state. There are two options:

• Do a full sync of the device, which could potentially take a long time depending upon the size of the disk.
• Skip the initial sync so we can get to business.

We’ll be using option 2. Run the following commands on node-a.

# drbdadm -- --clear-bitmap new-current-uuid r0/0


# drbdadm status
r0 role:Secondary
  disk:UpToDate
  node-b role:Secondary
  peer-disk:UpToDate

3.4. Installing Pacemaker and Corosync


This section will cover installing Pacemaker and Corosync. Issue the following command to install and enable the
necessary packages:

# dnf install pacemaker corosync

# systemctl enable pacemaker


Created symlink /etc/systemd/system/[Link]/[Link] to
/usr/lib/systemd/system/[Link].

# systemctl enable corosync


Created symlink /etc/systemd/system/[Link]/[Link] to
/usr/lib/systemd/system/[Link].


3.4.1. Configuring Corosync


Create and edit the file /etc/corosync/[Link]; it should look like this:

totem {
  version: 2
  secauth: off
  cluster_name: cluster
  transport: knet
  rrp_mode: passive
}

nodelist {
  node {
  ring0_addr: [Link]
  ring1_addr: [Link]
  nodeid: 1
  name: node-a
  }
  node {
  ring0_addr: [Link]
  ring1_addr: [Link]
  nodeid: 2
  name: node-b
  }
}

quorum {
  provider: corosync_votequorum
  two_node: 1
}

logging {
  to_syslog: yes
}
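The two_node: 1 setting tells votequorum to special-case a two-node cluster (it also enables wait_for_all by default). A quick grep can confirm the quorum settings made it into the file; the check below runs against a temporary copy of the example rather than the live configuration:

```shell
# Sanity-check the quorum settings a two-node cluster depends on.
# Demonstrated against a temporary copy; on a live node, grep the real
# Corosync configuration file instead.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
quorum {
  provider: corosync_votequorum
  two_node: 1
}
EOF
grep -q 'provider: corosync_votequorum' "$tmp" && \
grep -q 'two_node: 1' "$tmp" && echo "two-node quorum configured"
```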

Now that Corosync has been configured we can start the Corosync and Pacemaker services:

# systemctl start corosync


# systemctl start pacemaker

Verify that everything has been started and is working correctly by issuing the following command. You should see
output similar to what is below.


# crm_mon -rf -n1


Stack: corosync
Current DC: node-a (version [Link]-2.0+20160622+e174ec8.el7-e174ec8) - partition with
quorum
Last updated: Fri Feb 24 [Link] 2017 Last change: Fri Feb 24 [Link] 2017 by hacluster
via crmd on node-a

2 nodes and 0 resources configured

Node node-a: online


Node node-b: online

Inactive resources:

Migration Summary:
* Node node-a:
* Node node-b:

3.5. Additional Cluster-Level Configuration


Since we only have two nodes, we will need to tell Pacemaker to ignore quorum. Run the following commands from
either cluster node (but not both):

# crm configure property no-quorum-policy=ignore

Furthermore, we will not be configuring node level fencing (aka STONITH) in this guide; disable STONITH using the
following command:

# crm configure property stonith-enabled=false

Fencing/STONITH is an important part of HA clustering and should be used whenever possible.
Disabling STONITH will leave the cluster vulnerable to split-brain and potential data corruption
and/or loss.

For more information on fencing and STONITH, you can review the ClusterLabs page on STONITH
or contact the experts at LINBIT.

3.6. Installing TargetCLI


TargetCLI is an interactive shell used to manage the Linux-IO (LIO) target. LIO is the SCSI target that has been included
with the Linux kernel since 2.6.38.

TargetCLI may be installed using the following command on both nodes:

# dnf install targetcli


3.7. Creating an Active/Passive iSCSI Configuration


An active/passive iSCSI Target consists of the following cluster resources:

• A DRBD resource to replicate data, which is switched from and to the Primary and Secondary roles as deemed
necessary by the cluster resource manager.
• A virtual, floating cluster IP address, allowing initiators to connect to the target no matter which physical node
it is running on.
• The iSCSI Target itself.
• Two port blocking rules to prevent connection errors while the target is being brought up/down: one to block
the iSCSI target and another to unblock it.
• One or more iSCSI Logical Units (LUs), each corresponding to a Logical Volume in the LVM Volume Group.

The following Pacemaker configuration example assumes that [Link] is the virtual IP address to use for a
target with the iSCSI Qualified Name (IQN) [Link]:drbd0.

The target is to contain one Logical Unit: (LUN 0) mapping to /dev/drbd0.

To start configuring these resources, open the crm shell at the configuration prompt as root (or any non-root user that
is part of the haclient group), by issuing the following command:

# crm configure

Now create the DRBD resource primitive by running the following commands:

crm(live)configure# primitive p_drbd_r0 ocf:linbit:drbd \


  params drbd_resource="r0" \
  op start timeout=240 \
  op promote timeout=90 \
  op demote timeout=90 \
  op stop timeout=100 \
  op monitor interval="29" role="Master" \
  op monitor interval="31" role="Slave"

Running verify after configuring each new Pacemaker resource is recommended: it will alert you to
syntax errors in your configuration. To edit the configuration in a text editor, run edit with
an optional resource ID.

Next, create a Master/Slave resource corresponding to the DRBD resource r0:

crm(live)configure# ms ms_drbd_r0 p_drbd_r0 \


  meta master-max=1 master-node-max=1 \
  notify=true clone-max=2 clone-node-max=1

We will now create a floating Virtual IP for the cluster to be managed by Pacemaker:


crm(live)configure# primitive p_iscsi_ip0 ocf:heartbeat:IPaddr2 \


  params ip="[Link]" cidr_netmask="24" \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor interval="10s"

iSCSI IQNs must be unique and should follow RFC 3720 naming standards.
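An IQN takes the form iqn.<year>-<month>.<reversed-domain>[:<identifier>]. As a rough sanity check (a simplification of the RFC 3720 grammar, shown with made-up names), something like the following can catch obvious typos:

```shell
# Rough structural check for IQNs: iqn.<year>-<month>.<reversed-domain>[:<id>].
# This simplifies the RFC 3720 grammar; it is only a typo catcher.
is_iqn() {
  printf '%s\n' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'
}
is_iqn 'iqn.2020-01.com.example:drbd0' && echo "looks valid"
is_iqn 'not-an-iqn'                    || echo "rejected"
```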

crm(live)configure# primitive p_iscsi_target_drbd0 ocf:heartbeat:iSCSITarget \


  params iqn="[Link]:drbd0" \
  implementation=lio-t portals="[Link]:3260" \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor interval=20 timeout=40

Now that the iSCSI target and virtual IP are established, we will configure the Logical Units:

crm(live)configure# primitive p_iscsi_lun_drbd0 ocf:heartbeat:iSCSILogicalUnit \


  params target_iqn="[Link]:drbd0" \
  implementation=lio-t lun=0 path="/dev/drbd0" \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor interval=20 timeout=40

Configure port blocking and unblocking. This will prevent an iSCSI initiator from receiving a "Connection refused" error
during failover before the iSCSI target has successfully started.

crm(live)configure# primitive p_iscsi_portblock_on_drbd0 ocf:heartbeat:portblock \


  params ip=[Link] portno=3260 protocol=tcp action=block \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor timeout=20 interval=20

crm(live)configure# primitive p_iscsi_portblock_off_drbd0 ocf:heartbeat:portblock \


  params ip=[Link] portno=3260 protocol=tcp action=unblock \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor timeout=20 interval=20

To tie all of this together, create a resource group from the resources associated with our iSCSI Target:

crm(live)configure# group g_iscsi_drbd0 \


  p_iscsi_portblock_on_drbd0 \
  p_iscsi_ip0 p_iscsi_target_drbd0 p_iscsi_lun_drbd0 \
  p_iscsi_portblock_off_drbd0

This group, by default, is ordered and colocated, which means that the resources contained therein will always run on
the same physical node, will be started in the order specified, and will be stopped in reverse order.

Finally, we have to make sure that this resource group is started after the DRBD resource has started, and set it to run
on the same node where DRBD is in the Primary role:

crm(live)configure# colocation cl_g_iscsi_drbd0-with-ms_drbd_r0 \


  inf: g_iscsi_drbd0:Started ms_drbd_r0:Master

crm(live)configure# order o_ms_drbd_r0-before-g_iscsi_drbd0 \


  ms_drbd_r0:promote g_iscsi_drbd0:start

Now, our configuration is complete, and may be activated:

crm(live)configure# commit

3.8. Creating an Active/Active iSCSI Configuration


Configuring an active/active iSCSI Target is similar to an active/passive target, with the exception of the following:

• There must be at least two DRBD resources to replicate data, which can run independently on separate nodes.
• One LVM Logical Volume to serve as the backing device for each DRBD resource.
• Each target must have its own floating cluster IP address, allowing initiators to connect to the target no matter
which physical node it is running on.
• Each DRBD resource needs to have a resource group, plus order and colocation constraints similar to the single
DRBD resource in the active/passive section.

Once these resource groups are configured, you may have them run on separate nodes to achieve an active/active
cluster. This can be done by manually migrating the resource groups through the CRM shell, or by configuring the
resources to prefer running on a particular node.
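For example, assuming a second, identically constructed resource group named g_iscsi_drbd1 (the group name and scores here are hypothetical), a pair of location constraints expressing node preferences might look like this:

```
crm(live)configure# location lo_g_iscsi_drbd0 g_iscsi_drbd0 100: node-a
crm(live)configure# location lo_g_iscsi_drbd1 g_iscsi_drbd1 100: node-b
```

With one group preferring each node, both nodes actively serve a target during normal operation, and either node can take over both targets on failover.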


Chapter 4. Security Considerations and Data Integrity


There are a few more points worth mentioning, dealing with access control and data integrity.

4.1. Restricting Target Access by Initiator Address


Access to iSCSI targets should be restricted to specific initiators, identified by their iSCSI Qualified Name (IQN). Use
the allowed_initiators parameter supported by the iSCSI Target Pacemaker resource agent.

Create a resource to include the allowed_initiators parameter, containing a space-separated list of initiator IQNs
allowed to connect to this target. In the example below, access is granted to the initiator IQN
iqn.1994-[Link]:5e6220f26ee.

On Linux, the initiators’s IQN can be found by running:

 cat /etc/iscsi/[Link]
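That file holds a single InitiatorName= line. When scripting, the bare IQN can be extracted as sketched below; the temporary file and IQN are made-up stand-ins, since the real value differs on every host:

```shell
# Pull the bare IQN out of an initiatorname file.
# The sample file and IQN here are hypothetical stand-ins for the
# real per-host values.
tmp=$(mktemp)
echo 'InitiatorName=iqn.2020-01.com.example:initiator01' > "$tmp"
sed -n 's/^InitiatorName=//p' "$tmp"
```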

primitive p_iscsi_drbd0 \
  ocf:heartbeat:iSCSITarget \
  params iqn="[Link]:drbd0" \
  allowed_initiators="[Link]:42615a3677 [Link]:27cac47b0ae" \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor interval="10s"

When you close the editor, the configuration changes are inserted into the CIB configuration. To commit these
changes, enter the following command:

crm(live)configure# commit

After you commit the changes, the target will immediately reconfigure and enable the access restrictions.

If initiators are connected to the target at the time of reconfiguration, and one of the connected
initiators is not included in the allowed_initiators list for this resource, then those initiators will lose access
to the target, possibly resulting in disruption on the initiator node. Use with care.

4.2. Dual-Primary Multipath iSCSI


Do not attempt to use Multipath iSCSI targets with dual-primary DRBD resources. Multipath iSCSI targets do not
coordinate with each other and are not cluster aware.

Multipath iSCSI may appear to work with dual-primary DRBD resources under light testing, but it will
not be able to maintain consistent data under production loads or in the case of some failures, such
as the loss of the replication link.


Chapter 5. Setting Configuration Parameters


This section outlines some of the configuration parameters one may want to set in a highly available iSCSI Target
configuration.

Configuration at the iSCSI target level is performed using the additional_parameters instance attribute defined
for the iSCSITarget resource agent.

For example, to set the DefaultTime2Retain and DefaultTime2Wait session parameters to 60 and 5 seconds,
respectively, modify your target resource as follows:

crm(live)configure# edit p_iscsi_target_drbd0

primitive p_iscsi_target_drbd0 \
  ocf:heartbeat:iSCSITarget \
  params iqn="[Link]:drbd0" \
  additional_parameters="DefaultTime2Retain=60 DefaultTime2Wait=5" \
  op start timeout=20 \
  op stop timeout=20 \
  op monitor interval="10s"

crm(live)configure# commit

Use this command to list which parameters can be set with additional_parameters:

 # ls /sys/kernel/config/target/iscsi/<IQN>/tpgt_1/param/


Chapter 6. Using Highly Available iSCSI Targets


This section describes some common usage scenarios for highly available iSCSI Targets.

6.1. Connecting to iSCSI Targets From Linux


The recommended way of connecting to a highly available iSCSI Target from Linux is to use iscsiadm which can be
installed by running the following command:

# dnf install -y iscsi-initiator-utils

After installing the iscsi-initiator-utils package, start the iscsi service by issuing the
following command:

# systemctl start iscsi

Now you may start a discovery session on your target portal. Assuming your cluster IP address for the target is
[Link], you may do so via the following command:

# iscsiadm -m discovery -p [Link] -t sendtargets

The output from this command should include the names of all targets you have configured.

[Link]:3260,1 [Link]:drbd0

If a configured target does not appear in this list, check whether your initiator has been blocked
from accessing this target via an initiator restriction (see Restricting Target Access by Initiator
Address).

Finally, you may log in to the target, which will make all LUNs configured therein available as local SCSI devices:

# iscsiadm -m node -p [Link] -T [Link]:drbd0 --login
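When automating logins, each line of the discovery output ("portal,tpgt target-IQN") splits cleanly with shell parameter expansion. The portal address and IQN below are placeholders, not values from this guide:

```shell
# Split one line of sendtargets output into portal and target IQN.
# The address and IQN are placeholders; substitute your cluster IP
# and target name.
line='203.0.113.10:3260,1 iqn.2020-01.com.example:drbd0'
portal=${line%%,*}    # everything before the first comma
target=${line##* }    # everything after the last space
echo "portal=$portal target=$target"
```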


Chapter 7. Feedback
Any questions or comments about this document are appreciated and encouraged.

For a public discussion about the concepts mentioned in this white paper, you are invited to subscribe and post to the
drbd-user mailing list.


Appendix A: Additional Information and Resources


• LINBIT’s GitHub Organization: [Link]
• Join LINBIT’s Community on Slack: [Link]
• The DRBD® and LINSTOR® User’s Guide: [Link]
• The DRBD® and LINSTOR® Mailing Lists: [Link]

◦ drbd-announce: Announcements of new releases and critical bugs found
◦ drbd-user: General discussion and community support
◦ drbd-dev: Coordination of development


Appendix B: Legalese
B.1. Trademark Notice
DRBD® and LINBIT® are trademarks or registered trademarks of LINBIT in Austria, the United States, and other
countries. Other names mentioned in this document may be trademarks or registered trademarks of their respective
owners.

B.2. License Information


The text and illustrations in this document are licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported
license ("CC BY-SA").

• A summary of CC BY-SA is available at [Link]
• The full license text is available at [Link]
• In accordance with CC BY-SA, if you modify this document, you must indicate if changes were made. You
may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

