Cortx monitor single node VM provisioning manual
- Set up the yum repo (required only for installing build-specific RPMs)
curl https://raw.githubusercontent.com/Seagate/cortx-monitor/main/low-level/files/opt/seagate/sspl/setup/sspl_dev_deploy -o sspl_dev_deploy
chmod a+x sspl_dev_deploy
./sspl_dev_deploy --setup_repo -T http://cortx-storage.colo.seagate.com/releases/cortx/github/main/centos-7.8.2003/<build_number>/prod/
Specify <build_number> in the build URL.
Example:
./sspl_dev_deploy --setup_repo -T http://cortx-storage.colo.seagate.com/releases/cortx/github/main/centos-7.8.2003/830/prod/
- Install 3rd party packages
  - RPMs
| Package | Version |
|---------|---------|
| hdparm | 9.43 |
| ipmitool | 1.8.18 |
| lshw | B.02.18 |
| python3 | 3.6.8 |
| python36-dbus | 1.2.4 |
| python36-gobject | 3.22.0 |
| python36-paramiko | 2.1.1 |
| python36-psutil | 5.6.7 |
| shadow-utils | 4.6 |
| smartmontools | 7.0 |
| systemd-python36 | 1.0.0 |
| udisks2 | 2.8.4 |
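If the packages above are not already present, a minimal sketch for installing them with yum follows (assuming they are available from the enabled repositories; the exact versions listed above come from those repos):

# Install the 3rd-party RPMs required by SSPL.
yum install -y hdparm ipmitool lshw python3 python36-dbus python36-gobject \
    python36-paramiko python36-psutil shadow-utils smartmontools systemd-python36 udisks2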
  - Python
| Package | Version |
|---------|---------|
| cryptography | 2.8 |
| jsonschema | 3.2.0 |
| pika | 1.1.0 |
| pyinotify | 0.9.6 |
| python-daemon | 2.2.4 |
| requests | 2.25.1 |
| zope.component | 4.6.2 |
| zope.event | 4.5.0 |
| zope.interface | 5.2.0 |
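Similarly, a hedged sketch for installing the Python packages with pip3, pinned to the versions in the table above (skip any that your environment already provides as RPMs):

# Install the required Python packages with the pinned versions.
pip3 install cryptography==2.8 jsonschema==3.2.0 pika==1.1.0 pyinotify==0.9.6 \
    python-daemon==2.2.4 requests==2.25.1 zope.component==4.6.2 zope.event==4.5.0 \
    zope.interface==5.2.0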
- Run cortx-py-utils pre-requisites
  - Config file for cortx-py-utils
    - Create config file for cortx-py-utils at /tmp/cortx-config-new
    - Description of keys used in config file
- Install cortx-py-utils
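A minimal sketch, assuming the cortx-py-utils RPM is published in the repo set up above:

# Install cortx-py-utils from the configured CORTX repo.
yum install -y cortx-py-utils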
- Install SSPL RPMs
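A minimal sketch, assuming the SSPL RPMs are published in the repo set up above (the package names cortx-sspl and cortx-sspl-test are assumptions; adjust to the RPM names your build produces):

# Install the SSPL service and test RPMs from the configured CORTX repo.
yum install -y cortx-sspl cortx-sspl-test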
SSPL mini-provisioner interfaces must be executed in order: post_install, prepare, config, init. The remaining interfaces (test, reset, cleanup) can be executed in any order.
The input config required at each mini-provisioner stage is cumulative, i.e. a superset of the current stage's config plus that of all previous stages. The input config from the post_install stage is available to the prepare stage; the post_install and prepare stage input configs are available to the config stage; and the post_install, prepare, and config stage input configs are available to the init stage. The init stage is expected to have all configs required for the sspl-ll service to start and run smoothly.
At each mini-provisioner stage, the argument "--config <template_file>" must be passed; it references the config backend required for that stage.
The template file is a key-value mapped YAML file that maintains the config requirement template for each stage. All strings starting with TMPL_ should be replaced with valid values; if no value is identified, the field can be left blank (see the replacement sketch after the table below).
- TMPL_{field} is an identifier that lets the consumer (e.g. a CI/CD pipeline) update variable keys and values.

Template fields used across mini-provisioner interfaces:

Note: BMC information such as IP, USER, and SECRET can be left blank for a VM.
| Stage | Field | Example Value | Description | Required |
|-------|-------|---------------|-------------|----------|
| post_install | TMPL_NODE_NAME | srvnode-1 | Server name of node | Yes |
| post_install | TMPL_MACHINE_ID | 30512e5ae6df9f1ea02327bab45e499d | cat /etc/machine-id | Yes |
| post_install | TMPL_HOSTNAME | ssc-vm-2217.colo.seagate.com | Hostname of node | Yes |
| post_install | TMPL_BMC_IP | | BMC IP Address | No |
| post_install | TMPL_BMC_SECRET | | BMC encrypted password | No |
| post_install | TMPL_BMC_USER | | BMC username | No |
| post_install | TMPL_ENCLOSURE_ID | enc30512e5ae6df9f1ea02327bab45e499d | Storage Enclosure logical name | Yes |
| post_install | TMPL_SERVER_NODE_TYPE | virtual | | Yes |
| post_install | TMPL_CONTROLLER_PASSWORD | '!manage' | Controller access - plaintext password | Yes |
| post_install | TMPL_PRIMARY_CONTROLLER_IP | 10.0.0.2 | Controller A IP Address | Yes |
| post_install | TMPL_PRIMARY_CONTROLLER_PORT | 22 | Controller A Port | Yes |
| post_install | TMPL_SECONDARY_CONTROLLER_IP | 10.0.0.3 | Controller B IP Address | Yes |
| post_install | TMPL_SECONDARY_CONTROLLER_PORT | 22 | Controller B Port | Yes |
| post_install | TMPL_CONTROLLER_TYPE | Gallium | Type of controller | Yes |
| post_install | TMPL_CONTROLLER_USER | manage | Controller access - username | Yes |
| prepare | TMPL_ENCLOSURE_NAME | enclosure-1 | Storage Enclosure logical name | Yes |
| prepare | TMPL_ENCLOSURE_TYPE | RBOD | Type of enclosure | Yes |
| prepare | TMPL_DATA_PRIVATE_FQDN | srvnode-1.data.private.fqdn | FQDN of Private Data Network | Yes |
| prepare | TMPL_DATA_PRIVATE_INTERFACE | | Data network private interfaces | No |
| prepare | TMPL_DATA_PUBLIC_FQDN | srvnode-1.data.public.fqdn | FQDN of Public Data Network | Yes |
| prepare | TMPL_DATA_PUBLIC_INTERFACE | | Data network public interfaces | No |
| prepare | TMPL_MGMT_INTERFACE | eth0 | Management network interfaces | Yes |
| prepare | TMPL_MGMT_PUBLIC_FQDN | srvnode-1.public.fqdn | FQDN of Management Network | Yes |
| prepare | TMPL_NODE_ID | SN01 | Identifies the node in the cluster | Yes |
| prepare | TMPL_RACK_ID | RC01 | Identifies the rack in the cluster | Yes |
| prepare | TMPL_SITE_ID | DC01 | Identifies the site in the cluster | Yes |
| prepare | TMPL_CLUSTER_ID | CC01 | UUID | Yes |
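One way to fill in a template before running a stage is a simple in-place substitution. This is only a sketch: sed is not part of the provisioner, and the substituted values below are examples for a single-node VM.

# Fill TMPL_ placeholders in the post_install template (example values only).
sed -i -e "s|TMPL_NODE_NAME|srvnode-1|g" \
       -e "s|TMPL_MACHINE_ID|$(cat /etc/machine-id)|g" \
       -e "s|TMPL_HOSTNAME|$(hostname -f)|g" \
       /opt/seagate/sspl/conf/sspl.post-install.tmpl.1-node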
- Post Install
  - Run utils post_install
/opt/seagate/cortx/utils/bin/utils_setup post_install --config /tmp/cortx-config-new
  - Run SSPL post_install
/opt/seagate/cortx/sspl/bin/sspl_setup post_install --config /opt/seagate/sspl/conf/sspl.post-install.tmpl.1-node
- Prepare
  - Run SSPL prepare
/opt/seagate/cortx/sspl/bin/sspl_setup prepare --config /opt/seagate/sspl/conf/sspl.prepare.tmpl.1-node
- Config
  - Run utils config
/opt/seagate/cortx/utils/bin/utils_setup config --config /tmp/cortx-config-new
  - Run SSPL config
/opt/seagate/cortx/sspl/bin/sspl_setup config --config /opt/seagate/sspl/conf/sspl.config.tmpl.1-node
- Init
  - Run SSPL init
/opt/seagate/cortx/sspl/bin/sspl_setup init --config /opt/seagate/sspl/conf/sspl.init.tmpl.1-node
- Start SSPL service
systemctl start sspl-ll
- Check service status
systemctl status sspl-ll
Note: Starting the service with systemctl is recommended only when an HA framework is not in place; otherwise the service is expected to be started automatically in the CORTX cluster by HA.
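If the service does not come up cleanly, the systemd journal is one place to look (journalctl is standard systemd tooling, not an SSPL-specific command):

# Inspect recent sspl-ll service logs.
journalctl -u sspl-ll --no-pager -n 100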
- Start SSPL tests with plan
/opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan [sanity|alerts|self_primary|self_secondary]
Example:
/opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan sanity