Lenovo LiCO 6.1.0 Installation Manual

LiCO 6.1.0 Installation Guide (for SLES)
Eighth Edition (December 2020)
© Copyright Lenovo 2018, 2020.
LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services Administration (GSA) contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No. GS-35F-05925.

Reading instructions

• Replace values in angle brackets with the actual values. For example, when you see <*_USERNAME> and <*_PASSWORD>, enter your actual username and password (see the brief example after this list).
• In command lines and configuration files, annotations starting with # are explanatory comments and can be ignored.
Contents
Reading instructions. . . . . . . . . . . ii
Chapter 1. Overview. . . . . . . . . . . 1
Introduction to LiCO . . . . . . . . . . . . . . 1
Typical cluster deployment . . . . . . . . . . . 1
Operating environment . . . . . . . . . . . . . 2
Supported servers and chassis models . . . . . . 3
Prerequisites . . . . . . . . . . . . . . . . . 4
Chapter 2. Deploy the cluster
environment . . . . . . . . . . . . . . . 5
Install an OS on the management node . . . . . . 5
Deploy the OS on other nodes in the cluster. . . . . 5
Configure environment variables . . . . . . . 5
Create a local repository . . . . . . . . . . 7
Install Lenovo xCAT . . . . . . . . . . . 10
Prepare OS mirrors for other nodes . . . . . 10
Set xCAT node information . . . . . . . . 11
Add host resolution . . . . . . . . . . . 11
Configure DHCP and DNS services . . . . . 11
Install a node OS through the network . . . . 12
Create local repository for other nodes . . . . 12
Configure the memory for other nodes . . . . 14
Checkpoint A . . . . . . . . . . . . . . 14
Install infrastructure software for nodes . . . . . 15
List of infrastructure software to be installed . . 15
Configure a local Zypper repository for the management node . . . . . . . . 15
Configure a local Zypper repository for login and compute nodes . . . . . . 15
Configure LiCO dependencies repositories . . 16
Obtain the LiCO installation package. . . . . 17
Configure the local repository for LiCO . . . . 17
Configure the xCAT local repository . . . . . 17
Install Slurm . . . . . . . . . . . . . . 18
Configure NFS . . . . . . . . . . . . . 18
Configure Chrony . . . . . . . . . . . . 19
GPU driver installation . . . . . . . . . . 19
Configure Slurm . . . . . . . . . . . . . 21
Install Icinga2 . . . . . . . . . . . . . . 22
Install MPI . . . . . . . . . . . . . . . 25
Install Singularity . . . . . . . . . . . . 25
Checkpoint B . . . . . . . . . . . . . . 26
Chapter 3. Install LiCO
dependencies . . . . . . . . . . . . . 29
Cluster check. . . . . . . . . . . . . . . . 29
Check environment variables. . . . . . . . 29
Check the LiCO dependencies repository . . . 29
Check the LiCO repository . . . . . . . . . 29
Check the OS installation . . . . . . . . . 30
Check NFS . . . . . . . . . . . . . . . 30
Check Slurm . . . . . . . . . . . . . . 30
Check MPI and Singularity . . . . . . . . . 31
Check OpenHPC installation . . . . . . . . 31
List of LiCO dependencies to be installed. . . . . 31
Install RabbitMQ . . . . . . . . . . . . . . 31
Install MariaDB . . . . . . . . . . . . . . . 32
Install InfluxDB . . . . . . . . . . . . . . . 32
Install Confluent. . . . . . . . . . . . . . . 33
Configure user authentication . . . . . . . . . 33
Install OpenLDAP-server . . . . . . . . . 33
Install libuser . . . . . . . . . . . . . . 34
Install OpenLDAP-client. . . . . . . . . . 34
Install nss-pam-ldapd . . . . . . . . . . 34
Chapter 4. Install LiCO . . . . . . . . 37
List of LiCO components to be installed . . . . . 37
Install LiCO on the management node . . . . . . 37
Install LiCO on the login node . . . . . . . . . 38
Install LiCO on the compute nodes . . . . . . . 38
Configure the LiCO internal key. . . . . . . . . 38
Chapter 5. Configure LiCO . . . . . . 39
Configure the service account . . . . . . . . . 39
Configure cluster nodes . . . . . . . . . . . 39
Room information . . . . . . . . . . . . 39
Logic group information. . . . . . . . . . 39
Room row information . . . . . . . . . . 40
Rack information . . . . . . . . . . . . 40
Chassis information . . . . . . . . . . . 40
Node information . . . . . . . . . . . . 41
Configure generic resources . . . . . . . . . . 42
Gres information. . . . . . . . . . . . . 42
List of cluster services . . . . . . . . . . . . 42
Configure LiCO components. . . . . . . . . . 43
lico-vnc-mond . . . . . . . . . . . . . 43
lico-portal . . . . . . . . . . . . . . . 43
Initialize the system . . . . . . . . . . . . . 44
Initialize users . . . . . . . . . . . . . . . 44
Import system images . . . . . . . . . . . . 45
Chapter 6. Start and log in to LiCO . . 47
Start LiCO . . . . . . . . . . . . . . . . . 47
Log in to LiCO . . . . . . . . . . . . . . . 47
Configure LiCO services . . . . . . . . . . . 47
Chapter 7. Appendix: Important
information . . . . . . . . . . . . . . . 49
Configure VNC . . . . . . . . . . . . . . . 49
Standalone VNC installation . . . . . . . . 49
VNC batch installation . . . . . . . . . . 49
Configure the Confluent Web console . . . . . . 50
LiCO commands . . . . . . . . . . . . . . 50
Change a user’s role . . . . . . . . . . . 50
Resume a user . . . . . . . . . . . . . 50
Delete a user . . . . . . . . . . . . . . 50
Import a user . . . . . . . . . . . . . . 51
Import AI images . . . . . . . . . . . . 51
Generate nodes.csv in confluent . . . . . . 51
Firewall settings. . . . . . . . . . . . . . . 51
Set firewall on the management node . . . . 51
Set firewall on the login node . . . . . . . . 52
SSHD settings . . . . . . . . . . . . . . . 52
Improve SSHD security . . . . . . . . . . 52
Slurm issues troubleshooting . . . . . . . . . 53
Node status check . . . . . . . . . . . . 53
Memory allocation error . . . . . . . . . . 53
Status setting error. . . . . . . . . . . . 53
InfiniBand issues troubleshooting . . . . . . . . 53
Installation issues troubleshooting . . . . . . . 53
XCAT issues troubleshooting . . . . . . . . . 54
MPI issues troubleshooting . . . . . . . . . . 54
Edit nodes.csv from xCAT dumping data . . . . . 55
Notices and trademarks . . . . . . . . . . . 55

Chapter 1. Overview

Introduction to LiCO

Lenovo Intelligent Computing Orchestration (LiCO) is infrastructure management software for high-performance computing (HPC) and artificial intelligence (AI). It provides features such as cluster management and monitoring, job scheduling and management, cluster user management, account management, and file system management.
With LiCO, users can centralize resource allocation in one supercomputing cluster and carry out HPC and AI jobs simultaneously. Users can perform operations by logging in to the management system interface with a browser, or by using the command line after logging in to a cluster login node through a Linux shell.

Typical cluster deployment

This Guide is based on the typical cluster deployment that contains management, login, and compute nodes.
Figure 1. Typical cluster deployment
Elements in the cluster are described in the table below.
Table 1. Description of elements in the typical cluster
• Management node: Core of the HPC/AI cluster, undertaking primary functions such as cluster management, monitoring, scheduling, strategy management, and user & account management.
• Compute node: Completes computing tasks.
• Login node: Connects the cluster to the external network or cluster. Users log in through the login node to upload application data, develop and compile programs, and submit scheduled tasks.
• Parallel file system: Provides a shared storage function. It is connected to the cluster nodes through a high-speed network. Parallel file system setup is beyond the scope of this Guide; a simple NFS setup is used instead.
• Nodes BMC interface: Used to access the node’s BMC system.
• Nodes eth interface: Used to manage nodes in the cluster. It can also be used to transfer computing data.
• High speed network interface: Optional. Used to support the parallel file system. It can also be used to transfer computing data.
Note: LiCO also supports the cluster deployment that only contains the management and compute nodes. In this case, all LiCO modules installed on the login node need to be installed on the management node.

Operating environment

Cluster server:
Lenovo ThinkSystem servers
Operating system:
SUSE Linux Enterprise Server (SLES) 15 SP2
Client requirements:
• Hardware: CPU of 2.0 GHz or above, memory of 8 GB or above
• Browser: Chrome (V 62.0 or higher) or Firefox (V 56.0 or higher) recommended
• Display resolution: 1280 x 800 or above

Supported servers and chassis models

LiCO can be installed on certain servers, as listed in the table below.
Table 2. Supported servers
Product code    Machine type          Product name
sd530           7X21                  Lenovo ThinkSystem SD530 (0.5U)
sd650           7X58                  Lenovo ThinkSystem SD650 (2 nodes per 1U tray)
sr630           7X01, 7X02            Lenovo ThinkSystem SR630 (1U)
sr645           7D2X, 7D2Y            Lenovo ThinkSystem SR645 (1U)
sr650           7X05, 7X06            Lenovo ThinkSystem SR650 (2U)
sr655           7Y00, 7Z01            Lenovo ThinkSystem SR655 (2U)
sr665           7D2V, 7D2W            Lenovo ThinkSystem SR665 (2U)
sr670           7Y36, 7Y37, 7Y38      Lenovo ThinkSystem SR670 (2U)
sr850           7X18, 7X19            Lenovo ThinkSystem SR850 (2U)
sr850p          7D2F, 7D2G, 7D2H      Lenovo ThinkSystem SR850P (2U)
sr950           7X11, 7X12, 7X13      Lenovo ThinkSystem SR950 (4U)
LiCO can be installed on certain chassis models, as listed in the table below.
Table 3. Supported chassis models
Product code    Machine type          Model name
d2              7X20                  D2 Enclosure (2U)
n1200           5456, 5468, 5469      NeXtScale n1200 (6U)

Prerequisites

• Refer to the LiCO best recipe to ensure that the cluster hardware uses proper firmware levels, drivers, and settings: https://support.lenovo.com/us/en/solutions/ht507011
• Refer to the OS part of the LeSI 20B_SI best recipe to install the OS security patches: https://support.lenovo.com/us/en/solutions/HT511104
• The installation described in this Guide is based on SLES 15 SP2.
• A SLE-15-SP2-Full-x86_64-GM-Media1.iso local repository should be added on the management node.
• Unless otherwise stated in this Guide, all commands are executed on the management node.
• To enable the firewall, modify the firewall rules according to “Firewall settings” on page 51.
• It is important to regularly patch and update components and the OS to prevent security vulnerabilities. Additionally, it is recommended that updates known at the time of installation be applied during or immediately after the OS deployment to the managed nodes, and before the rest of the LiCO setup steps.
• LiCO leverages OpenHPC packages, which aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters, including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Lenovo provides a download of the most recent version of OpenHPC, unmodified from what is distributed by OpenHPC. Some open-source components within OpenHPC have known, registered vulnerabilities; none of these issues has been assessed as critical. However, it is recommended that the user update or remove such components using the native package management tools.

Chapter 2. Deploy the cluster environment

If the cluster environment already exists, you can skip this chapter.

Install an OS on the management node

Install an official version of SLES 15 SP2 on the management node.
Run the following commands to configure the memory and restart the OS:
echo '* soft memlock unlimited' >> /etc/security/limits.conf
echo '* hard memlock unlimited' >> /etc/security/limits.conf
reboot

Deploy the OS on other nodes in the cluster

Configure environment variables

Step 1. Log in to the management node.
Step 2. Run the following commands to configure environment variables for the entire installation process:
su root
cd ~
vi lico_env.local
Step 3. Edit the lico_env.local file to contain the following content:
# Management node hostname
sms_name="head"
# Set the domain name
domain_name="hpc.com"
# Set OpenLDAP domain name
lico_ldap_domain_name="dc=hpc,dc=com"
# set OpenLDAP domain component
lico_ldap_domain_component="hpc"
# IP address of management node in the cluster intranet
sms_ip="192.168.0.1"
# The network adapter name corresponding to the management node IP address
sms_eth_internal="eth0"
# Subnet mask in the cluster intranet. If all nodes in the cluster already have
# OS installed, retain the default configurations.
internal_netmask="255.255.0.0"
# BMC username and password
bmc_username="<BMC_USERNAME>"
bmc_password="<BMC_PASSWORD>"
# OS mirror pathway for xCAT
iso_path="/isos"
# Local repository directory for OS
packages_repo_dir="/install/custom/packages"
# Local repository directory for xCAT
xcat_repo_dir="/install/custom/xcat"
# link name of repository directory for Lenovo OpenHPC
link_ohpc_repo_dir="/install/custom/ohpc"
# link name of repository directory for LiCO
link_lico_repo_dir="/install/custom/lico"
# link name of repository directory for LiCO-dep
link_lico_dep_repo_dir="/install/custom/lico-dep"
# Local repository directory for Lenovo OpenHPC, please change it according to
# this version.
ohpc_repo_dir="/install/custom/ohpc-2.0.0"
# LiCO repository directory for LiCO, please change it according to this version.
lico_repo_dir="/install/custom/lico-6.1.0"
# LiCO repository directory for LiCO-dep, please change it according to this version.
lico_dep_repo_dir="/install/custom/lico-dep-6.1.0"
# Total compute nodes
num_computes="2"
# Prefix of compute node hostname. If OS has already been installed on all the nodes
# of the cluster, change the configuration according to actual conditions.
compute_prefix="c"
# Compute node hostname list. If OS has already been installed on all the nodes of the
# cluster, change the configuration according to actual conditions.
c_name[0]=c1
c_name[1]=c2
# Compute node IP list. If OS has already been installed on all the
# nodes of the cluster, change the configuration according to actual conditions.
c_ip[0]=192.168.0.6
c_ip[1]=192.168.0.16
# Network interface card MAC address corresponding to the compute node IP. If OS has
# already been installed on all the nodes of the cluster, change the configuration
# according to actual conditions.
c_mac[0]=fa:16:3e:73:ec:50
c_mac[1]=fa:16:3e:27:32:c6
# Compute node BMC address list.
c_bmc[0]=192.168.1.6
c_bmc[1]=192.168.1.16
# Total login nodes. If there is no login node in the cluster, the number of logins
# must be "0". And the 'l_name', 'l_ip', 'l_mac', and 'l_bmc' lines need to be removed.
num_logins="1"
# Login node hostname list. If OS has already been installed on all nodes in the cluster,
# change the configuration according to actual conditions.
l_name[0]=l1
# Login node IP list. If OS has already been installed on all the nodes of the cluster,
# change the configuration according to actual conditions.
l_ip[0]=192.168.0.15
# Network interface card MAC address corresponding to the login node IP.
# If OS has already been installed on all nodes in the cluster, change the configuration
# according to actual conditions.
l_mac[0]=fa:16:3e:2c:7a:47
# Login node BMC address list.
l_bmc[0]=192.168.1.15
# icinga api listener port
icinga_api_port=5665
Step 4. Save the changes to lico_env.local. This Guide assumes that the BMC username and password are consistent across nodes. If they are inconsistent, modify them during the installation.
Step 5. Run the following commands to make the configuration file take effect:
chmod 600 lico_env.local
source lico_env.local
After the cluster environment is set up, configure the public network IP address on the login or management node so that you can log in to the LiCO Web portal from the external network.

Create a local repository

Step 1. Run the following command to create a directory for ISO storage:
mkdir -p ${iso_path}
Step 2. Download SLE-15-SP2-Full-x86_64-GM-Media1.iso from the official Web site, and record the MD5SUM value shown on the download Web site.
Step 3. Copy the file to ${iso_path}.
Step 4. Run the following commands to compare the md5sum result with the recorded value and verify that the ISO file is valid:
cd ${iso_path}
md5sum SLE-15-SP2-Full-x86_64-GM-Media1.iso
cd ~
Step 5. Run the following commands to mount the image:
mkdir -p ${packages_repo_dir}
mount -o loop ${iso_path}/SLE-15-SP2-Full-x86_64-GM-Media1.iso ${packages_repo_dir}
Step 6. Run the following commands to configure the local repository:
cat << eof > ${iso_path}/SLES15-SP2-15.2.repo
[SLES15-SP2-15.2-PACKAGES]
name=sle15-packages
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}
[SLES15-SP2-15.2-PACKAGES-Module-Basesystem]
name=sle15-packages-basesystem
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Basesystem
[SLES15-SP2-15.2-PACKAGES-Module-Desktop-Applications]
name=sle15-packages-desktop-applications
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Desktop-Applications
[SLES15-SP2-15.2-PACKAGES-Module-Development-Tools]
name=sle15-packages-development-tools
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Development-Tools
[SLES15-SP2-15.2-PACKAGES-Module-HPC]
name=sle15-packages-hpc
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-HPC
[SLES15-SP2-15.2-PACKAGES-Module-Public-Cloud]
name=sle15-packages-public-cloud
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Public-Cloud
[SLES15-SP2-15.2-PACKAGES-Module-Python2]
name=sle15-packages-python2
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Python2
[SLES15-SP2-15.2-PACKAGES-Module-Server-Applications]
name=sle15-packages-server-applications
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Server-Applications
[SLES15-SP2-15.2-PACKAGES-Product-HA]
name=sle15-packages-product-ha
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Product-HA
[SLES15-SP2-15.2-PACKAGES-Module-Legacy]
name=sle15-packages-legacy
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}/Module-Legacy
eof
cp ${iso_path}/SLES15-SP2-15.2.repo /etc/zypp/repos.d

Install Lenovo xCAT

Step 1. Download the package from https://hpc.lenovo.com/downloads/20b/confluent-3.0.6-xcat-2.16.0.lenovo2-suse15.tar.bz2
Step 2. Upload the package to the /root directory on the management node.
Step 3. Run the following commands to create the xCAT local repository:
mkdir -p $xcat_repo_dir
cd /root
zypper in bzip2
tar -xvf confluent-3.0.6-xcat-2.16.0.lenovo2-suse15.tar.bz2 -C $xcat_repo_dir
cd $xcat_repo_dir/lenovo-hpc-suse15
./mklocalrepo.sh
cd ~
Step 4. Run the following commands to install xCAT:
zypper install perl-Expect perl-Net-DNS
zypper --gpg-auto-import-keys install -y --force-resolution xCAT
source /etc/profile.d/xcat.sh

Prepare OS mirrors for other nodes

Step 1. Run the following command to prepare the OS image for the other nodes:
copycds $iso_path/SLE-15-SP2-Full-x86_64-GM-Media1.iso
Step 2. Run the following command to confirm that the OS image has been copied:
lsdef -t osimage
Note: The output should be as follows:
sle15.2-x86_64-install-compute (osimage)
sle15.2-x86_64-netboot-compute (osimage)
sle15.2-x86_64-statelite-compute (osimage)
Step 3. Run the following command to disable the Nouveau module:
chdef -t osimage sle15.2-x86_64-install-compute addkcmdline=\
"rdblacklist=nouveau nouveau.modeset=0 R::modprobe.blacklist=nouveau"
Note: The Nouveau module is an accelerated open-source driver for NVIDIA cards. This module should be disabled before the CUDA driver is installed.

Set xCAT node information

Note: If the ThinkSystem SR635/SR655 server is used in other nodes, change "serialport=0" to "serialport=1" before running the following commands.
Step 1. Run the following commands to import the compute node configuration in the lico_env.local file to xCAT:
for ((i=0; i<$num_computes; i++)); do
mkdef -t node ${c_name[$i]} groups=compute,all arch=x86_64 netboot=xnba mgt=ipmi \
bmcusername=${bmc_username} bmcpassword=${bmc_password} ip=${c_ip[$i]} \
mac=${c_mac[$i]} bmc=${c_bmc[$i]} serialport=0 serialspeed=115200;
done
Step 2. Run the following commands to import the login node configuration in the lico_env.local file to xCAT:
for ((i=0; i<$num_logins; i++)); do
mkdef -t node ${l_name[$i]} groups=login,all arch=x86_64 netboot=xnba mgt=ipmi \
bmcusername=${bmc_username} bmcpassword=${bmc_password} ip=${l_ip[$i]} \
mac=${l_mac[$i]} bmc=${l_bmc[$i]} serialport=0 serialspeed=115200;
done
Step 3. (Optional) If the BMC username and password of the node are inconsistent, run the following command to make them consistent:
tabedit ipmi
Step 4. Run the following command to configure the root account password for the node:
chtab key=system passwd.username=root passwd.password=<ROOT_PASSWORD>

Add host resolution

Note: If the cluster already has the OS installed and can resolve the IP address through the hostname, skip this section.
Run the following commands to add host resolution:
chdef -t site domain=${domain_name}
chdef -t site master=${sms_ip}
chdef -t site nameservers=${sms_ip}
sed -i "/^\s*${sms_ip}\s*.*$/d" /etc/hosts
sed -i "/\s*${sms_name}\s*/d" /etc/hosts
echo "${sms_ip} ${sms_name} ${sms_name}.${domain_name} " >> /etc/hosts
makehosts

Configure DHCP and DNS services

Note: If all nodes in the cluster have the OS installed, skip this step.
Run the following commands to configure DHCP and DNS services:
makenetworks
makedhcp -n
makedhcp -a
makedns -n
echo "search ${domain_name}" > /etc/resolv.conf
echo "nameserver ${sms_ip}" >> /etc/resolv.conf
Note: Refer to the following two links to make sure that the management node points to the same DNS as the other nodes:
https://sourceforge.net/p/xcat/wiki/XCAT_iDataPlex_Cluster_Quick_Start/#install-xcat-on-the-management-node
https://sourceforge.net/p/xcat/wiki/Cluster_Name_Resolution/

Install a node OS through the network

Note: If all nodes in the cluster have the OS installed, skip this section.
Run the following commands to set and install the necessary OS mirror:
nodeset all osimage=sle15.2-x86_64-install-compute
ln -s /install/sles15.2/x86_64/1 /install/sles15.2/x86_64/2
rsetboot all net -u
rpower all reset
Note: It takes several minutes to complete the OS installation. You can use the following command to check the progress:
nodestat all

Create local repository for other nodes

Run the following commands:
cat << eof > /var/tmp/SLES15-SP2-15.2.repo
[SLES15-SP2-15.2-PACKAGES]
name=sle15-packages
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}
[SLES15-SP2-15.2-PACKAGES-Module-Basesystem]
name=sle15-packages-basesystem
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Basesystem
[SLES15-SP2-15.2-PACKAGES-Module-Desktop-Applications]
name=sle15-packages-desktop-applications
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Desktop-Applications
[SLES15-SP2-15.2-PACKAGES-Module-Development-Tools]
name=sle15-packages-development-tools
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Development-Tools
[SLES15-SP2-15.2-PACKAGES-Module-HPC]
name=sle15-packages-hpc
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-HPC
[SLES15-SP2-15.2-PACKAGES-Module-Public-Cloud]
name=sle15-packages-public-cloud
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Public-Cloud
[SLES15-SP2-15.2-PACKAGES-Module-Python2]
name=sle15-packages-python2
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Python2
[SLES15-SP2-15.2-PACKAGES-Module-Server-Applications]
name=sle15-packages-server-applications
enabled=1
autorefresh=0