
XtremIO Process restart script (process_restart.py)
This document is purely for development and testing reference; it is not a guarantee that the script has been tested.

Use it at your own risk!!!

Check the 'Scripts publish location' section within the XtremIO Support Scripts page for scripts that have been officially published.

Background
Purpose
Support
Utilisation
Basic use (for field usage)
Restart SYM
Restart PM
Expert use (for L3 only)
Restart gateway
Restart xenv
Enable/check clustering
Restart survivor
Restart sshd (SSH daemon)
Disabling/enabling and checking idle destage

Background
In certain scenarios it is necessary to manually restart certain XtremIO processes.
This manual action requires privileged (root) access to the storage controllers and can harm storage stability if performed incorrectly
and/or when the cluster's overall health status does not allow it.
To prevent the above, a signed script called process_restart has been implemented to perform said actions safely.

Purpose
The script can safely restart the following processes:

SYM
PM
Xenv
gateway
survivor
clustering (enable/check)
sshd
idle destage (enable/disable/check)

Support
The script supports the following XtremIO versions:

All XIOS 4.0 versions
All XIOS 6.0 versions

Utilisation
1. Using the xmsupload user and SFTP, upload the process_restart script to the XMS under /images/scripts.
2. Log in to the XMS as xmsadmin, then enter the xmcli as the tech user, as illustrated below.
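
For illustration only, the upload and login sequence might look like the following; the XMS address is a placeholder and the script file name assumes the v1.4 build used in the examples below:

sftp xmsupload@[xms-address]
sftp> put process_restart-v1.4-s4.0.0.py /images/scripts/
sftp> quit

ssh xmsadmin@[xms-address]
(from the xmsadmin session, enter the xmcli and log in as the tech user)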

Basic use (for field usage)

Restart SYM

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Run the following xmcli command, using the details obtained in step 1: run-script script="process_restart-v1.4-s4.0.0.
py" arguments="--cluster-id=[cluster-id] --restart-sym"

The example below shows restarting SYM for cluster-id 2


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --restart-sym"
15:55:18 - 2017-10-22 15:55:17,246 INFO Cluster 2(xbrick68) is healthy -
PSNT XIO00150200936
15:55:23 - 2017-10-22 15:55:20,552 INFO Successfully authenticated user
tech with technician permission
15:55:23 - 2017-10-22 15:55:20,552 INFO Script Version is 1.4
15:55:23 - 2017-10-22 15:55:20,553 INFO Cluster xbrick68 version 4.0.25-27
was found
15:55:23 - 2017-10-22 15:55:20,553 INFO Found lock file with pid - 32163
15:55:23 - 2017-10-22 15:55:20,553 INFO pid 32163 is not running -
updating lock file with current pid
15:55:23 - 2017-10-22 15:55:20,553 INFO Created lock file with pid - 5619
15:55:38 - 2017-10-22 15:55:37,363 INFO Traffic will be forwarded through
10.82.84.37
15:55:43 - 2017-10-22 15:55:38,980 INFO MetaData load Prefetch is not
running
15:55:43 - 2017-10-22 15:55:41,591 INFO 10.82.84.39 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
15:55:49 - 2017-10-22 15:55:43,438 INFO 10.82.84.37 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
15:55:49 - 2017-10-22 15:55:45,299 INFO 10.82.84.39 clustering status is 5
15:55:49 - 2017-10-22 15:55:45,299 INFO 10.82.84.37 clustering status is 2
15:55:54 - 2017-10-22 15:55:50,988 INFO (10.82.84.39:22)Found SYM pid
41915 - started at Sun Oct 22 15:44:04 2017
15:55:54 - 2017-10-22 15:55:50,989 INFO (10.82.84.39:22)Killing SYM
process 41915 on 10.82.84.39:22
15:56:19 - 2017-10-22 15:56:15,131 INFO New SYM started - 43230 on
10.82.84.39:22
15:56:19 - 2017-10-22 15:56:16,573 INFO Sleeping for 30 seconds before
checking logs
15:56:54 - 2017-10-22 15:56:51,709 INFO Found SYM restart completed in SYM
log - 2017-10-22 15:55:56.595875
Script exited with status: 0

The script will restart the SYM process as long as:

All the I/O modules (xenvs) are healthy, and
All the Storage Controllers are healthy, and
There is no metadata-prefetch ongoing, and
There has been no HA flow in the last 5 minutes, and
The clustering service is running on both X1 brick storage controllers, and
The SYM process was not restarted within the last 5 minutes
The example below shows restarting SYM without waiting for the lazy load (metadata load prefetch) status check, using the --no_lazy_load_wait flag:
xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=4 --restart-sym --no_lazy_load_wait"
13:56:39 - 2017-10-22 13:56:38,616 INFO Cluster 4(xbrick665) is healthy -
PSNT XIO00154604176
13:56:44 - 2017-10-22 13:56:42,085 INFO Successfully authenticated user
tech with technician permission
13:56:44 - 2017-10-22 13:56:42,086 INFO Script Version is 1.4
13:56:44 - 2017-10-22 13:56:42,086 INFO Cluster xbrick665 version 6.0.1-
13_X2 was found
13:56:44 - 2017-10-22 13:56:42,086 INFO Found lock file with pid - 16441
13:56:44 - 2017-10-22 13:56:42,086 INFO pid 16441 is not running -
updating lock file with current pid
13:56:44 - 2017-10-22 13:56:42,086 INFO Created lock file with pid - 16984
13:56:59 - 2017-10-22 13:56:55,646 INFO Traffic will be forwarded through
10.82.75.71
13:56:59 - 2017-10-22 13:56:57,297 INFO MetaData Load Prefetch is running
13:57:04 - 2017-10-22 13:57:01,568 INFO 10.82.75.71 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
13:57:04 - 2017-10-22 13:57:03,482 INFO 10.82.75.73 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
13:57:10 - 2017-10-22 13:57:05,393 INFO 10.82.75.71 clustering status is 5
13:57:10 - 2017-10-22 13:57:05,393 INFO 10.82.75.73 clustering status is 2
13:57:15 - 2017-10-22 13:57:11,104 INFO (10.82.75.71:22)Found SYM pid
38748 - started at Tue Oct 10 17:51:12 2017
13:57:15 - 2017-10-22 13:57:11,104 INFO (10.82.75.71:22)Killing SYM
process 38748 on 10.82.75.71:22
13:57:40 - 2017-10-22 13:57:35,903 INFO New SYM started - 10970 on
10.82.75.71:22
13:57:40 - 2017-10-22 13:57:37,361 INFO Sleeping for 30 seconds before
checking logs
13:58:15 - 2017-10-22 13:58:13,243 INFO Found SYM restart completed in SYM
log - 2017-10-22 13:57:19.349370
Script exited with status: 0

Restart PM

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Obtain the sc-id from the xmcli command show-storage-controllers (sc-id is mandatory) of the storage controller whose
PM needs to be restarted.
3. Run the following xmcli command, using the details obtained in steps 1 and 2: run-script script="process_restart-v1.4-s4.
0.0.py" arguments="--cluster-id=[cluster-id] --restart-pm --sc-id=[sc-id]"

The example below shows restarting PM on X1-SC1 (sc-id 1 from show-storage-controllers) on cluster-id 2

xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --restart-pm --sc-id=1"
07:49:21 - 2017-10-23 07:49:16,788 INFO Cluster 2(xbrick68) is healthy -
PSNT XIO00150200936
07:49:21 - 2017-10-23 07:49:20,600 INFO Successfully authenticated user
tech with technician permission
07:49:26 - 2017-10-23 07:49:20,600 INFO Script Version is 1.4
07:49:26 - 2017-10-23 07:49:20,600 INFO Cluster xbrick68 version 4.0.25-27
was found
07:49:26 - 2017-10-23 07:49:20,601 INFO Found lock file with pid - 10579
07:49:26 - 2017-10-23 07:49:20,601 INFO pid 10579 is not running -
updating lock file with current pid
07:49:26 - 2017-10-23 07:49:20,601 INFO Created lock file with pid - 15604
07:49:41 - 2017-10-23 07:49:37,119 INFO Traffic will be forwarded through
10.82.84.37
07:49:41 - 2017-10-23 07:49:37,261 INFO Executing command on X1 - SC1
using 10.82.84.37:22000
07:49:41 - 2017-10-23 07:49:38,905 INFO MetaData load Prefetch is not
running
07:49:46 - 2017-10-23 07:49:44,655 INFO (X1-SC1-IB1)Found PM pid 44863 -
started at Sun Oct 22 15:47:40 2017
07:49:51 - 2017-10-23 07:49:44,656 INFO (X1-SC1-IB1)Killing PM process
44863 on 10.82.84.37:22000
07:50:12 - 2017-10-23 07:50:07,201 INFO New PM started - 98817 on
10.82.84.37:22000
07:50:12 - 2017-10-23 07:50:09,219 INFO Sleeping for 15 seconds before
checking logs
07:50:42 - 2017-10-23 07:50:39,352 INFO Found PM restart completed in SYM
log - 2017-10-23 07:49:57.421545

The script will restart the PM process as long as:

All the I/O modules (xenvs) are healthy, and
All the Storage Controllers are healthy, and
There is no metadata-prefetch ongoing, and
There has been no HA flow in the last 5 minutes, and
The PM process was not restarted within the last 5 minutes

Expert use (for L3 only)

Restart gateway

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Obtain the sc-id from the xmcli command show-storage-controllers (sc-id is mandatory) of the storage controller whose
gateway needs to be restarted.
3. Run the following xmcli command, using the details obtained in steps 1 and 2: run-script script="process_restart-v1.4-s4.
0.0.py" arguments="--cluster-id=[cluster-id] --restart-gw --sc-id=[sc-id]"

The example below shows restarting gateway on X2-SC1


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=1 --restart-gw --sc-id=3"
08:58:31 - 2017-10-23 08:58:27,010 INFO Cluster 1(xbrickdrm1601-1604) is
healthy - PSNT XIO00171515169
08:58:31 - 2017-10-23 08:58:30,100 INFO Successfully authenticated user
tech with technician permission
08:58:31 - 2017-10-23 08:58:30,101 INFO Script Version is 1.4
08:58:31 - 2017-10-23 08:58:30,101 INFO Cluster xbrickdrm1601-1604 version
6.0.1-11_X2 was found
08:58:31 - 2017-10-23 08:58:30,101 INFO Found lock file with pid - 19961
08:58:31 - 2017-10-23 08:58:30,101 INFO pid 19961 is not running -
updating lock file with current pid
08:58:31 - 2017-10-23 08:58:30,101 INFO Created lock file with pid - 20670
08:58:47 - 2017-10-23 08:58:45,115 INFO Traffic will be forwarded through
10.139.120.122
08:58:52 - 2017-10-23 08:58:45,116 INFO Executing command on X2 - SC1
using 10.139.120.122:22004
08:58:52 - 2017-10-23 08:58:47,261 INFO MetaData load Prefetch is not
running
08:58:52 - 2017-10-23 08:58:48,012 INFO (X2-SC1-IB1)Found GW pid 8029 -
started at Thu Oct 19 14:04:47 2017
08:58:52 - 2017-10-23 08:58:48,012 INFO (X2-SC1-IB1)Killing GW process
8029 on 10.139.120.122:22004
08:59:07 - 2017-10-23 08:59:03,829 INFO New GW started - 51953 on
10.139.120.122:22004
08:59:07 - 2017-10-23 08:59:05,903 INFO Sleeping for 45 seconds before
checking logs
08:59:58 - 2017-10-23 08:59:53,096 INFO Found GW restart completed in log
- 2017-10-23 08:58:48.380774
Script exited with status: 0

Restart xenv

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Obtain the CSID from the xmcli command show-xenvs of the xenv that should be restarted.
3. Obtain the sc-id from the xmcli command show-storage-controllers (sc-id is mandatory) of the storage controller whose
xenv needs to be restarted (mandatory to cross-check the CSID against the sc-id).
4. Run the following xmcli command, using the details obtained in steps 1, 2 and 3: run-script script="process_restart-v1.4-
s4.0.0.py" arguments="--cluster-id=[cluster-id] --restart-xenv [CSID] --sc-id=[sc-id]"

The example below shows restarting xenv 11 on X1-SC1


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py" arguments="
--cluster-id=1 --restart-xenv 11 --sc-id=1"
12:34:45 - 2017-10-23 12:34:42,878 INFO Cluster 1(xbrickdrm1601-1604) is
healthy - PSNT XIO00171515169
12:34:50 - 2017-10-23 12:34:45,562 INFO Successfully authenticated user
tech with technician permission
12:34:50 - 2017-10-23 12:34:45,562 INFO Script Version is 1.4
12:34:50 - 2017-10-23 12:34:45,562 INFO Cluster xbrickdrm1601-1604 version
6.0.1-11_X2 was found
12:34:50 - 2017-10-23 12:34:45,563 INFO Found lock file with pid - 5758
12:34:50 - 2017-10-23 12:34:45,563 INFO pid 5758 is not running - updating
lock file with current pid
12:34:50 - 2017-10-23 12:34:45,563 INFO Created lock file with pid - 7063
12:35:06 - 2017-10-23 12:35:01,353 INFO Traffic will be forwarded through
10.139.120.122
12:35:06 - 2017-10-23 12:35:01,354 INFO Executing command on X1 - SC1
using 10.139.120.122:22000
12:35:06 - 2017-10-23 12:35:03,203 INFO MetaData load Prefetch is not
running
12:35:06 - 2017-10-23 12:35:03,723 INFO (X1-SC1-IB1)Found XENV 11 pid
34572 - started at Sun Oct 22 09:17:57 2017
12:35:06 - 2017-10-23 12:35:03,723 INFO (X1-SC1-IB1)Killing XENV 11
process 34572 on 10.139.120.122:22000
12:35:21 - 2017-10-23 12:35:19,557 INFO New XENV 11 started - 27974 on
10.139.120.122:22000
12:35:26 - 2017-10-23 12:35:21,159 INFO Sleeping for 15 seconds before
checking logs
12:35:51 - 2017-10-23 12:35:48,166 INFO Found XENV restart completed in
SYM log - 2017-10-23 12:35:10.623519
Script exited with status: 0

Enable/check clustering

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Obtain the sc-id from the xmcli command show-storage-controllers (sc-id is mandatory) of the storage controller on the X1 brick.
3. To enable clustering, run the following xmcli command, using the details obtained in steps 1 and 2: run-script script="
process_restart-v1.4-s4.0.0.py" arguments="--cluster-id=[cluster-id] --clustering enable --sc-id=
[sc-id]"
4. To check the clustering status, run the following xmcli command, using the details obtained in steps 1 and 2: run-script script="
process_restart-v1.4-s4.0.0.py" arguments="--cluster-id=[cluster-id] --clustering check --sc-id=
[sc-id]"

The example below shows enabling clustering on X1-SC1


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --clustering enable --sc-id=1"
07:02:50 - 2017-10-23 07:02:46,475 INFO Cluster 2(xbrick68) is healthy -
PSNT XIO00150200936
07:02:50 - 2017-10-23 07:02:50,032 INFO Successfully authenticated user
tech with technician permission
07:02:55 - 2017-10-23 07:02:50,033 INFO Script Version is 1.4
07:02:55 - 2017-10-23 07:02:50,033 INFO Cluster xbrick68 version 4.0.25-27
was found
07:02:55 - 2017-10-23 07:02:50,034 INFO Found lock file with pid - 30991
07:02:55 - 2017-10-23 07:02:50,034 INFO pid 30991 is not running -
updating lock file with current pid
07:02:55 - 2017-10-23 07:02:50,034 INFO Created lock file with pid - 2710
07:03:10 - 2017-10-23 07:03:07,981 INFO Traffic will be forwarded through
10.82.84.37
07:03:15 - 2017-10-23 07:03:08,123 INFO Executing command on X1 - SC1
using 10.82.84.37:22000
07:03:15 - 2017-10-23 07:03:12,575 INFO 10.82.84.39 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
07:03:20 - 2017-10-23 07:03:14,405 INFO 10.82.84.37 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
07:03:20 - 2017-10-23 07:03:16,273 INFO 10.82.84.39 clustering status is 5
07:03:20 - 2017-10-23 07:03:16,273 INFO 10.82.84.37 clustering status is 2
07:03:20 - 2017-10-23 07:03:16,273 INFO 10.82.84.37 - echo 4 1 > /xtremapp
/debuc/127.0.0.1\:33336/commands/clst_control
07:03:20 - 2017-10-23 07:03:19,125 INFO 10.82.84.39 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
07:03:25 - 2017-10-23 07:03:20,999 INFO 10.82.84.37 - cat /xtremapp/debuc
/127.0.0.1\:33336/clst_sym_elect/clst_sym_elect | grep \" state \-\"
07:03:25 - 2017-10-23 07:03:22,853 INFO 10.82.84.39 clustering status is 5
07:03:25 - 2017-10-23 07:03:22,853 INFO 10.82.84.37 clustering status is 2
07:03:25 - 2017-10-23 07:03:24,874 INFO MetaData load Prefetch is not
running
Script exited with status: 0
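
Checking the clustering status uses the same syntax with the check action. A hypothetical invocation would look like the following; the cluster-id and sc-id values are illustrative only, not from a real session:

xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --clustering check --sc-id=1"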

Restart survivor

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Obtain the sc-id from the xmcli command show-storage-controllers (sc-id is mandatory) of the storage controller whose
survivor needs to be restarted.
3. Run the following xmcli command, using the details obtained in steps 1 and 2: run-script script="process_restart-v1.4-s4.
0.0.py" arguments="--cluster-id=[cluster-id] --restart-survivor --sc-id=[sc-id]"

The example below shows restarting survivor on X4-SC1


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=1 --restart-survivor --sc-id=7"
13:40:16 - 2017-10-19 13:40:13,920 INFO Cluster 1(xbrickdrm1601-1604) is
healthy - PSNT XIO00171515169
13:40:21 - 2017-10-19 13:40:17,551 INFO Successfully authenticated user
tech with technician permission
13:40:21 - 2017-10-19 13:40:17,551 INFO Script Version is 1.4
13:40:21 - 2017-10-19 13:40:17,551 INFO Cluster xbrickdrm1601-1604 version
6.0.1-11_X2 was found
13:40:21 - 2017-10-19 13:40:17,551 INFO Found lock file with pid - 19573
13:40:21 - 2017-10-19 13:40:17,552 INFO pid 19573 is not running -
updating lock file with current pid
13:40:21 - 2017-10-19 13:40:17,552 INFO Created lock file with pid - 22179
13:40:36 - 2017-10-19 13:40:35,739 INFO Traffic will be forwarded through
10.139.120.122
13:40:36 - 2017-10-19 13:40:35,740 INFO Executing command on X4 - SC1
using 10.139.120.122:22012
13:40:41 - 2017-10-19 13:40:38,223 INFO MetaData load Prefetch is not
running
13:40:41 - 2017-10-19 13:40:38,766 INFO (X4-SC1-IB1)Found Survivor pid
57613 - started at Sun Oct 15 11:42:33 2017
13:40:41 - 2017-10-19 13:40:38,767 INFO (X4-SC1-IB1)Killing Survivor
process 57613 on 10.139.120.122:22012
13:40:56 - 2017-10-19 13:40:54,589 INFO New Survivor started - 28648 on
10.139.120.122:22012
13:41:02 - 2017-10-19 13:40:58,586 INFO Sleeping for 60 seconds before
checking logs
13:42:02 - 2017-10-19 13:41:58,965 INFO Found Survivor restart completed
in log - 2017-10-19 13:40:38.949717
Script exited with status: 0

Restart sshd (SSH daemon)


1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. Obtain the sc-id from the xmcli command show-storage-controllers (sc-id is mandatory) of the storage controller whose
sshd needs to be restarted.
3. Run the following xmcli command, using the details obtained in steps 1 and 2: run-script script="process_restart-v1.4-s4.
0.0.py" arguments="--cluster-id=[cluster-id] --restart-sshd --sc-id=[sc-id]"

The example below shows restarting sshd on X1-SC1


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --restart-sshd --sc-id=1"
07:39:22 - 2017-10-23 07:39:18,134 INFO Cluster 2(xbrick68) is healthy -
PSNT XIO00150200936
07:39:27 - 2017-10-23 07:39:23,521 INFO Successfully authenticated user
tech with technician permission
07:39:27 - 2017-10-23 07:39:23,521 INFO Script Version is 1.4
07:39:27 - 2017-10-23 07:39:23,521 INFO Cluster xbrick68 version 4.0.25-27
was found
07:39:27 - 2017-10-23 07:39:23,521 INFO Found lock file with pid - 2710
07:39:27 - 2017-10-23 07:39:23,522 INFO pid 2710 is not running - updating
lock file with current pid
07:39:27 - 2017-10-23 07:39:23,522 INFO Created lock file with pid - 5736
07:39:42 - 2017-10-23 07:39:39,181 INFO Traffic will be forwarded through
10.82.84.37
07:39:42 - 2017-10-23 07:39:39,321 INFO Executing command on X1 - SC1
using 10.82.84.37:22000
07:39:42 - 2017-10-23 07:39:40,741 INFO MetaData load Prefetch is not
running
07:39:47 - 2017-10-23 07:39:46,384 INFO (X1-SC1-IB1)Found sshd pid 91107 -
started at Mon Oct 09 15:36:27 2017
07:39:52 - 2017-10-23 07:39:46,385 INFO 10.82.84.37 - service sshd restart
07:39:57 - 2017-10-23 07:39:54,177 INFO (X1-SC1-IB1)Found sshd pid 98159 -
started at Mon Oct 23 07:39:47 2017
Script exited with status: 0

Disabling/enabling and checking idle destage

1. Obtain the cluster-id from the xmcli command show-clusters (cluster-id is mandatory even if only one cluster is being
managed by the XMS).
2. To enable idle destage, run the following xmcli command, using the details obtained in step 1: run-script script="
process_restart-v1.4-s4.0.0.py" arguments="--cluster-id=[cluster-id] --idle-destage enable"
3. To disable idle destage, run: run-script script="
process_restart-v1.4-s4.0.0.py" arguments="--cluster-id=[cluster-id] --idle-destage disable"
4. To check the idle destage status, run: run-script
script="process_restart-v1.4-s4.0.0.py" arguments="--cluster-id=[cluster-id] --idle-destage
check"

The example below shows enabling idle destage


xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --idle-destage enable"
12:07:31 - 2017-10-22 12:07:26,253 INFO Cluster 2(xbrick68) is healthy -
PSNT XIO00150200936
12:07:31 - 2017-10-22 12:07:29,963 INFO Successfully authenticated user
tech with technician permission
12:07:36 - 2017-10-22 12:07:29,964 INFO Script Version is 1.4
12:07:36 - 2017-10-22 12:07:29,964 INFO Cluster xbrick68 version 4.0.25-27
was found
12:07:36 - 2017-10-22 12:07:29,969 INFO Found lock file with pid - 4044
12:07:36 - 2017-10-22 12:07:29,969 INFO pid 4044 is not running - updating
lock file with current pid
12:07:36 - 2017-10-22 12:07:29,969 INFO Created lock file with pid - 6810
12:07:51 - 2017-10-22 12:07:46,344 INFO Traffic will be forwarded through
10.82.84.37
12:07:56 - 2017-10-22 12:07:52,786 INFO SYM SC for cluster 2 is 10.82.84.39
12:07:56 - 2017-10-22 12:07:52,786 INFO Enabling idle destage
12:08:01 - 2017-10-22 12:07:59,650 INFO Idle Destage is currently enabled
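
Disabling and checking idle destage use the same syntax with the disable and check actions. Hypothetical invocations would look like the following; the cluster-id value is illustrative only, not from a real session:

xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --idle-destage disable"
xmcli (tech)> run-script script="process_restart-v1.4-s4.0.0.py"
arguments="--cluster-id=2 --idle-destage check"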
