Lenovo LiCO 6.0.0 Installation Manual

LiCO 6.0.0 Installation Guide (for SLES)
Seventh Edition (August 2020)
© Copyright Lenovo 2018, 2020.
LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services Administration (GSA) contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No. GS-35F-05925.

Reading instructions

• Replace values in angle brackets with the actual values. For example, when you see <*_USERNAME> and <*_PASSWORD>, enter your actual username and password.
• In the command lines and the configuration files, ignore all annotations starting with #.
Contents
Reading instructions. . . . . . . . . . . ii
Chapter 1. Overview. . . . . . . . . . . 1
Introduction to LiCO . . . . . . . . . . . . . . 1
Typical cluster deployment . . . . . . . . . . . 1
Operating environment . . . . . . . . . . . . . 2
Supported servers and chassis models . . . . . . 3
Prerequisites . . . . . . . . . . . . . . . . . 4
Chapter 2. Deploy the cluster
environment . . . . . . . . . . . . . . . 5
Install an OS on the management node . . . . . . 5
Deploy the OS on other nodes in the cluster. . . . . 5
Configure environment variables . . . . . . . 5
Create a local repository . . . . . . . . . . 8
Install Lenovo xCAT . . . . . . . . . . . . 8
Prepare OS mirrors for other nodes . . . . . . 9
Set xCAT node information . . . . . . . . . 9
Add host resolution . . . . . . . . . . . 10
Configure DHCP and DNS services . . . . . 10
Install a node OS through the network . . . . 11
Create local repository for other nodes . . . . 11
Configure the memory for other nodes . . . . 12
Checkpoint A . . . . . . . . . . . . . . 12
Install infrastructure software for nodes . . . . . 12
List of infrastructure software to be installed . . 12
Configure a local Zypper repository for the management node . . . . . . . . . . . . 13
Configure a local Zypper repository for login and compute nodes . . . . . . . . . . . 13
Configure LiCO dependencies repositories . . 14
Obtain the LiCO installation package. . . . . 14
Configure the local repository for LiCO . . . . 15
Configure the xCAT local repository . . . . . 15
Install Slurm . . . . . . . . . . . . . . 15
Configure NFS . . . . . . . . . . . . . 16
Configure Chrony . . . . . . . . . . . . 17
GPU driver installation . . . . . . . . . . 17
Configure Slurm . . . . . . . . . . . . . 18
Install Icinga2 . . . . . . . . . . . . . . 20
Install MPI . . . . . . . . . . . . . . . 22
Install Singularity . . . . . . . . . . . . 23
Checkpoint B . . . . . . . . . . . . . . 23
Chapter 3. Install LiCO
dependencies . . . . . . . . . . . . . 25
Cluster check. . . . . . . . . . . . . . . . 25
Check environment variables. . . . . . . . 25
Check the LiCO dependencies repository . . . 25
Check the LiCO repository . . . . . . . . . 25
Check the OS installation . . . . . . . . . 26
Check NFS . . . . . . . . . . . . . . . 26
Check Slurm . . . . . . . . . . . . . . 26
Check MPI and Singularity . . . . . . . . . 27
Check OpenHPC installation . . . . . . . . 27
List of LiCO dependencies to be installed. . . . . 27
Install RabbitMQ . . . . . . . . . . . . . . 27
Install MariaDB . . . . . . . . . . . . . . . 28
Install InfluxDB . . . . . . . . . . . . . . . 28
Install Confluent. . . . . . . . . . . . . . . 29
Configure user authentication . . . . . . . . . 29
Install OpenLDAP-server . . . . . . . . . 29
Install libuser . . . . . . . . . . . . . . 30
Install OpenLDAP-client. . . . . . . . . . 30
Install nss-pam-ldapd . . . . . . . . . . 30
Chapter 4. Install LiCO . . . . . . . . 33
List of LiCO components to be installed . . . . . 33
Install LiCO on the management node . . . . . . 33
Install LiCO on the login node . . . . . . . . . 34
Install LiCO on the compute nodes . . . . . . . 34
Configure the LiCO internal key. . . . . . . . . 34
Chapter 5. Configure LiCO . . . . . . 35
Configure the service account . . . . . . . . . 35
Configure cluster nodes . . . . . . . . . . . 35
Room information . . . . . . . . . . . . 35
Logic group information. . . . . . . . . . 35
Room row information . . . . . . . . . . 36
Rack information . . . . . . . . . . . . 36
Chassis information . . . . . . . . . . . 36
Node information . . . . . . . . . . . . 37
Configure generic resources . . . . . . . . . . 38
Gres information. . . . . . . . . . . . . 38
List of cluster services . . . . . . . . . . . . 38
Configure LiCO components. . . . . . . . . . 39
lico-vnc-mond . . . . . . . . . . . . . 39
lico-portal . . . . . . . . . . . . . . . 39
Initialize the system . . . . . . . . . . . . . 40
Initialize users . . . . . . . . . . . . . . . 40
Import system images . . . . . . . . . . . . 41
Chapter 6. Start and log in to LiCO . . 43
Start LiCO . . . . . . . . . . . . . . . . . 43
Log in to LiCO . . . . . . . . . . . . . . . 43
Configure LiCO services . . . . . . . . . . . 43
Chapter 7. Appendix: Important
information . . . . . . . . . . . . . . . 45
Configure VNC . . . . . . . . . . . . . . . 45
Standalone VNC installation . . . . . . . . 45
VNC batch installation . . . . . . . . . . 45
Configure the Confluent Web console . . . . . . 46
LiCO commands . . . . . . . . . . . . . . 46
Change a user’s role . . . . . . . . . . . 46
Resume a user . . . . . . . . . . . . . 46
Delete a user . . . . . . . . . . . . . . 46
Import a user . . . . . . . . . . . . . . 47
Import AI images . . . . . . . . . . . . 47
Generate nodes.csv in confluent . . . . . . 47
Firewall settings. . . . . . . . . . . . . . . 47
Set firewall on the management node . . . . 47
Set firewall on the login node . . . . . . . . 48
SSHD settings . . . . . . . . . . . . . . . 48
Improve SSHD security . . . . . . . . . . 48
Slurm issues troubleshooting . . . . . . . . . 49
Node status check . . . . . . . . . . . . 49
Memory allocation error . . . . . . . . . . 49
Status setting error. . . . . . . . . . . . 49
InfiniBand issues troubleshooting . . . . . . . . 49
Installation issues troubleshooting . . . . . . . 49
XCAT issues troubleshooting . . . . . . . . . 50
MPI issues troubleshooting . . . . . . . . . . 50
Edit nodes.csv from xCAT dumping data . . . . . 51
Notices and trademarks . . . . . . . . . . . 51

Chapter 1. Overview


Introduction to LiCO

Lenovo Intelligent Computing Orchestration (LiCO) is infrastructure management software for high-performance computing (HPC) and artificial intelligence (AI). It provides features such as cluster management and monitoring, job scheduling and management, cluster user management, account management, and file system management.
With LiCO, users can centralize resource allocation in one supercomputing cluster and carry out HPC and AI jobs simultaneously. Users can perform operations by logging in to the management system interface with a browser, or by using the command line after logging in to a cluster login node through a Linux shell.

Typical cluster deployment

This Guide is based on the typical cluster deployment that contains management, login, and compute nodes.
Figure 1. Typical cluster deployment
© Copyright Lenovo 2018, 2020 1
Elements in the cluster are described in the table below.
Table 1. Description of elements in the typical cluster
• Management node: Core of the HPC/AI cluster, undertaking primary functions such as cluster management, monitoring, scheduling, strategy management, and user & account management.
• Compute node: Completes computing tasks.
• Login node: Connects the cluster to the external network or cluster. Users must use the login node to log in and upload application data, develop compilers, and submit scheduled tasks.
• Parallel file system: Provides a shared storage function. It is connected to the cluster nodes through a high-speed network. Parallel file system setup is beyond the scope of this Guide. A simple NFS setup is used instead.
• Nodes BMC interface: Used to access the node's BMC system.
• Nodes eth interface: Used to manage nodes in the cluster. It can also be used to transfer computing data.
• High speed network interface: Optional. Used to support the parallel file system. It can also be used to transfer computing data.
Note: LiCO also supports the cluster deployment that only contains the management and compute nodes. In this case, all LiCO modules installed on the login node need to be installed on the management node.

Operating environment

Cluster server:
Lenovo ThinkSystem servers
Operating system:
SUSE Linux Enterprise server (SLES) 15 SP1
Client requirements:
• Hardware: CPU of 2.0 GHz or above, memory of 8 GB or above
• Browser: Chrome (V 62.0 or higher) or Firefox (V 56.0 or higher) recommended
• Display resolution: 1280 x 800 or above

Supported servers and chassis models

LiCO can be installed on certain servers, as listed in the table below.
Table 2. Supported servers
Product code    Machine type          Product name
sd530           7X21                  Lenovo ThinkSystem SD530 (0.5U)
sd650           7X58                  Lenovo ThinkSystem SD650 (2 nodes per 1U tray)
sr630           7X01, 7X02            Lenovo ThinkSystem SR630 (1U)
sr645           7D2X, 7D2Y            Lenovo ThinkSystem SR645 (1U)
sr650           7X05, 7X06            Lenovo ThinkSystem SR650 (2U)
sr655           7Y00, 7Z01            Lenovo ThinkSystem SR655 (2U)
sr665           7D2V, 7D2W            Lenovo ThinkSystem SR665 (2U)
sr670           7Y36, 7Y37, 7Y38      Lenovo ThinkSystem SR670 (2U)
sr850           7X18, 7X19            Lenovo ThinkSystem SR850 (2U)
sr850p          7D2F, 7D2G, 7D2H      Lenovo ThinkSystem SR850P (2U)
sr950           7X11, 7X12, 7X13      Lenovo ThinkSystem SR950 (4U)
LiCO can be installed on certain chassis models, as listed in the table below.
Table 3. Supported chassis models
Product code    Machine type          Model name
d2              7X20                  D2 Enclosure (2U)
n1200           5456, 5468, 5469      NeXtScale n1200 (6U)

Prerequisites

• Refer to the LiCO best recipe to ensure that the cluster hardware uses proper firmware levels, drivers, and settings: https://support.lenovo.com/us/en/solutions/ht507011.
• Refer to the OS part of the LeSI 20A_SI best recipe to install the OS security patch: https://support.lenovo.com/us/en/solutions/HT510293.
• The installation described in this Guide is based on SLES 15 SP1.
• A SLE-15-SP1-Installer or SLE-15-SP1-Packages local repository should be added on the management node.
• Unless otherwise stated in this Guide, all commands are executed on the management node.
• To enable the firewall, modify the firewall rules according to "Firewall settings" on page 47.
• It is important to regularly patch and update components and the OS to prevent security vulnerabilities. It is also recommended that updates known at the time of installation be applied during or immediately after the OS deployment to the managed nodes, before the rest of the LiCO setup steps.

Chapter 2. Deploy the cluster environment

If the cluster environment already exists, you can skip this chapter.

Install an OS on the management node

Install an official version of SLES 15 SP1 on the management node. You can select the minimum installation.
Run the following commands to configure the memory lock limits and restart the OS:
echo '* soft memlock unlimited' >> /etc/security/limits.conf
echo '* hard memlock unlimited' >> /etc/security/limits.conf
reboot
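After the node restarts, you can optionally confirm that the memory lock limits took effect by opening a new root shell and running:
ulimit -l   # should print "unlimited"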

Deploy the OS on other nodes in the cluster

Configure environment variables

Step 1. Log in to the management node.
Step 2. Run the following commands to configure environment variables for the entire installation process:
su root
cd ~
vi lico_env.local
Step 3. Run the following commands to edit the lico_env.local file:
# Management node hostname
sms_name="head"
# Set the domain name
domain_name="hpc.com"
# Set OpenLDAP domain name
lico_ldap_domain_name="dc=hpc,dc=com"
# set OpenLDAP domain component
lico_ldap_domain_component="hpc"
# IP address of management node in the cluster intranet
sms_ip="192.168.0.1"
# The network adapter name corresponding to the management node IP address
sms_eth_internal="eth0"
# Subnet mask in the cluster intranet. If all nodes in the cluster already have
# OS installed, retain the default configurations.
internal_netmask="255.255.0.0"
# BMC username and password
bmc_username="<BMC_USERNAME>"
bmc_password="<BMC_PASSWORD>"
# OS mirror pathway for xCAT
iso_path="/isos"
# Local repository directory for OS
installer_repo_dir="/install/custom/installer"
packages_repo_dir="/install/custom/packages"
# Local repository directory for xCAT
xcat_repo_dir="/install/custom/xcat"
# link name of repository directory for Lenovo OpenHPC
link_ohpc_repo_dir="/install/custom/ohpc"
# link name of repository directory for LiCO
link_lico_repo_dir="/install/custom/lico"
# link name of repository directory for LiCO-dep
link_lico_dep_repo_dir="/install/custom/lico-dep"
# Local repository directory for Lenovo OpenHPC, please change it according to
# this version.
ohpc_repo_dir="/install/custom/ohpc-2.0.0"
# LiCO repository directory for LiCO, please change it according to this version.
lico_repo_dir="/install/custom/lico-6.0.0"
# LiCO repository directory for LiCO-dep, please change it according to this version.
lico_dep_repo_dir="/install/custom/lico-dep-6.0.0"
# Total compute nodes
num_computes="2"
# Prefix of compute node hostname. If OS has already been installed on all the nodes
# of the cluster, change the configuration according to actual conditions.
compute_prefix="c"
# Compute node hostname list. If OS has already been installed on all the nodes of the
# cluster, change the configuration according to actual conditions.
c_name[0]=c1
c_name[1]=c2
# Compute node IP list. If OS has already been installed on all the
# nodes of the cluster, change the configuration according to actual conditions.
c_ip[0]=192.168.0.6
c_ip[1]=192.168.0.16
# Network interface card MAC address corresponding to the compute node IP. If OS has
# already been installed on all the nodes of the cluster, change the configuration
# according to actual conditions.
c_mac[0]=fa:16:3e:73:ec:50
c_mac[1]=fa:16:3e:27:32:c6
# Compute node BMC address list.
c_bmc[0]=192.168.1.6
c_bmc[1]=192.168.1.16
# Total login nodes. If there is no login node in the cluster, the number of logins
# must be "0". And the 'l_name', 'l_ip', 'l_mac', and 'l_bmc' lines need to be removed.
num_logins="1"
# Login node hostname list. If OS has already been installed on all nodes in the cluster,
# change the configuration according to actual conditions.
l_name[0]=l1
# Login node IP list. If OS has already been installed on all the nodes of the cluster,
# change the configuration according to actual conditions.
l_ip[0]=192.168.0.15
# Network interface card MAC address corresponding to the login node IP.
# If OS has already been installed on all nodes in the cluster, change the configuration
# according to actual conditions.
l_mac[0]=fa:16:3e:2c:7a:47
# Login node BMC address list.
l_bmc[0]=192.168.1.15
# icinga api listener port
icinga_api_port=5665
Step 4. Save the changes to lico_env.local. This Guide assumes that the BMC username and password are the same on all nodes. If they are not, modify them during the installation.
Step 5. Run the following commands to make the configuration file take effect:
chmod 600 lico_env.local
source lico_env.local
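As an optional sanity check (using the variable names defined above), confirm that the variables are available in the current shell:
echo "${sms_name} ${sms_ip} ${num_computes}"   # should print: head 192.168.0.1 2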
After the cluster environment is set up, configure the IP address of the public network on the login or management node so that you can log in to the LiCO Web portal from an external network.

Create a local repository

Step 1. Run the following command to create a directory for ISO storage:
mkdir -p ${iso_path}
Step 2. Download SLE-15-SP1-Installer-DVD-x86_64-GM-DVD1.iso and SLE-15-SP1-Packages-x86_64-GM-DVD1.iso from the official Web site. Record the MD5SUM results from the download Web site.
Step 3. Copy the file to ${iso_path}.
Step 4. Run the following commands to compare the md5sum results with the originals to verify that the ISO files are valid:
cd ${iso_path}
md5sum SLE-15-SP1-Installer-DVD-x86_64-GM-DVD1.iso
md5sum SLE-15-SP1-Packages-x86_64-GM-DVD1.iso
cd ~
Step 5. Run the following commands to mount image:
mkdir -p ${installer_repo_dir}
mkdir -p ${packages_repo_dir}
mount -o loop ${iso_path}/SLE-15-SP1-Installer-DVD-x86_64-GM-DVD1.iso ${installer_repo_dir}
mount -o loop ${iso_path}/SLE-15-SP1-Packages-x86_64-GM-DVD1.iso ${packages_repo_dir}
Step 6. Run the following commands to configure local repository:
cat << eof > ${iso_path}/SLES15-SP1-15.1.repo
[SLES15-SP1-15.1-INSTALLER]
name=sle15-installer
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${installer_repo_dir}
[SLES15-SP1-15.1-PACKAGES]
name=sle15-packages
enabled=1
autorefresh=0
gpgcheck=0
baseurl=file://${packages_repo_dir}
eof
cp ${iso_path}/SLES15-SP1-15.1.repo /etc/zypp/repos.d
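To confirm that Zypper sees the new repositories, you can optionally list them:
zypper lr   # SLES15-SP1-15.1-INSTALLER and SLES15-SP1-15.1-PACKAGES should be listed and enabled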

Install Lenovo xCAT

Step 1. Download the package from https://hpc.lenovo.com/downloads/20a.7/xcat-2.15.1.lenovo4_confluent-2.5.2-suse15.tar.bz2.
Step 2. Upload the package to the /root directory on the management node.
Step 3. Run the following commands to create the xCAT local repository:
mkdir -p $xcat_repo_dir
cd /root
zypper in bzip2
tar -xvf xcat-2.15.1.lenovo4_confluent-2.5.2-suse15.tar.bz2 -C $xcat_repo_dir
cd $xcat_repo_dir/lenovo-hpc-suse15
./mklocalrepo.sh
cd ~
Step 4. Run the following commands to install xCAT:
zypper install perl-Expect perl-Net-DNS
zypper --gpg-auto-import-keys install -y --force-resolution xCAT
source /etc/profile.d/xcat.sh
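As an optional check that xCAT is installed and its daemon is running:
systemctl status xcatd   # the xCAT daemon should be active
lsxcatd -v               # prints the installed xCAT version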

Prepare OS mirrors for other nodes

Step 1. Run the following command to prepare the OS image for the other nodes:
copycds $iso_path/SLE-15-SP1-Installer-DVD-x86_64-GM-DVD1.iso \
$iso_path/SLE-15-SP1-Packages-x86_64-GM-DVD1.iso
Step 2. Run the following command to confirm that the OS image has been copied:
lsdef -t osimage
Note: The output should be as follows:
sle15.1-x86_64-install-compute (osimage)
sle15.1-x86_64-netboot-compute (osimage)
sle15.1-x86_64-statelite-compute (osimage)
Step 3. Run the following command to disable the Nouveau module:
chdef -t osimage sle15.1-x86_64-install-compute addkcmdline=\
"rdblacklist=nouveau nouveau.modeset=0 R::modprobe.blacklist=nouveau"
Note: The Nouveau module is an open-source accelerated driver for NVIDIA cards. This module should be disabled before the CUDA driver is installed.

Set xCAT node information

Note: If the ThinkSystem SR635/SR655 server is used in other nodes, change "serialport=0" to "serialport=1" before running the following commands.
Step 1. Run the following commands to import the compute node configuration in the lico_env.local file to xCAT:
for ((i=0; i<$num_computes; i++)); do
mkdef -t node ${c_name[$i]} groups=compute,all arch=x86_64 netboot=xnba mgt=ipmi \
bmcusername=${bmc_username} bmcpassword=${bmc_password} ip=${c_ip[$i]} \
mac=${c_mac[$i]} bmc=${c_bmc[$i]} serialport=0 serialspeed=115200;
done
Step 2. Run the following commands to import the login node configuration in the lico_env.local file to xCAT:
for ((i=0; i<$num_logins; i++)); do
mkdef -t node ${l_name[$i]} groups=login,all arch=x86_64 netboot=xnba mgt=ipmi \
bmcusername=${bmc_username} bmcpassword=${bmc_password} ip=${l_ip[$i]} \
mac=${l_mac[$i]} bmc=${l_bmc[$i]} serialport=0 serialspeed=115200;
done
Step 3. (Optional) If the BMC username and password of the node are inconsistent, run the following command to make them consistent:
tabedit ipmi
Step 4. Run the following command to configure the root account password for the node:
chtab key=system passwd.username=root passwd.password=<ROOT_PASSWORD>
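To verify the node definitions that were just created, you can optionally display one of them (using the first compute node from lico_env.local as an example):
lsdef ${c_name[0]}   # should show the BMC, IP, and MAC attributes defined above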

Add host resolution

Note: If the cluster already has the OS installed and can resolve the IP address through the hostname, skip this section.
Run the following commands to add host resolution:
chdef -t site domain=${domain_name}
chdef -t site master=${sms_ip}
chdef -t site nameservers=${sms_ip}
sed -i "/^\s*${sms_ip}\s*.*$/d" /etc/hosts
sed -i "/\s*${sms_name}\s*/d" /etc/hosts
echo "${sms_ip} ${sms_name} ${sms_name}.${domain_name} " >> /etc/hosts
makehosts
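Optionally, confirm that makehosts added the expected entries:
cat /etc/hosts   # the compute and login node names and IP addresses should now be listed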

Configure DHCP and DNS services

Note: If all nodes in the cluster have the OS installed, skip this step.
Run the following commands to configure DHCP and DNS services:
makenetworks
makedhcp -n
makedhcp -a
makedns -n
echo "search ${domain_name}" > /etc/resolv.conf
echo "nameserver ${sms_ip}" >> /etc/resolv.conf
Note: Refer to the following two links to make sure that the management node points to the same DNS as the other nodes:
https://sourceforge.net/p/xcat/wiki/XCAT_iDataPlex_Cluster_Quick_Start/#install-xcat-on-the-management-node
https://sourceforge.net/p/xcat/wiki/Cluster_Name_Resolution/

Install a node OS through the network

Note: If all nodes in the cluster have the OS installed, skip this section.
Run the following commands to set and install the necessary OS mirror:
nodeset all osimage=sle15.1-x86_64-install-compute
rsetboot all net -u
rpower all reset
Note: It takes several minutes to complete the OS installation. You can use the following command to check the progress:
nodestat all

Create local repository for other nodes

Run the following commands:
cat << eof > /var/tmp/SLES15-SP1-15.1.repo
[SLES15-SP1-15.1-INSTALLER]
name=sle15-installer
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${installer_repo_dir}
[SLES15-SP1-15.1-PACKAGES-Module-Basesystem]
name=sle15-packages-basesystem
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Basesystem
[SLES15-SP1-15.1-PACKAGES-Module-HPC]
name=sle15-packages-hpc
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-HPC
[SLES15-SP1-15.1-PACKAGES-Module-Desktop-Applications]
name=sle15-packages-desktop-applications
enabled=1
autorefresh=0
gpgcheck=0
baseurl=http://${sms_name}${packages_repo_dir}/Module-Desktop-Applications
eof
xdcp all /var/tmp/SLES15-SP1-15.1.repo /etc/zypp/repos.d
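As an optional check (assuming the other nodes are reachable through xCAT's parallel shell), confirm that the repository file is in place:
psh all "zypper lr"   # each node should list the SLES15-SP1-15.1 repositories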

Configure the memory for other nodes

Run the following commands:
xdcp all /etc/security/limits.conf /etc/security/limits.conf
psh all reboot

Checkpoint A

Run the following command to check and ensure that the installation is complete:
psh all uptime
Notes:
• The output should be as follows:
c1: 05:03am up 0:02, 0 users, load average: 0.20, 0.13, 0.05
c2: 05:03am up 0:02, 0 users, load average: 0.20, 0.14, 0.06
l1: 05:03am up 0:02, 0 users, load average: 0.17, 0.13, 0.05
……
• If you cannot run these commands, check whether xCAT is successfully installed on the management node and whether passwordless SSH is set up between the management node and the other nodes. You can copy the id_rsa and id_rsa.pub files from the management node to the other nodes (see the sketch below), and then run these commands again.
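As a minimal sketch (assuming the compute node c1 defined in lico_env.local and an existing root SSH key pair on the management node), passwordless SSH to one node can be set up as follows:
# copy the root public key to the node (prompts for the node's root password once)
ssh-copy-id root@c1
# verify that the node can now be reached without a password
ssh root@c1 uptime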

Install infrastructure software for nodes

List of infrastructure software to be installed

Note: In the Installation node column, M stands for “Management node”, L stands for “Login node”, and C
stands for “Compute node”.
Table 4. Infrastructure software to be installed
Software   Component           Version   Service            Installation node   Notes
nfs        nfs-kernel-server   2.1.1     nfs-server         M
chrony     chrony              3.2       chronyd            M
slurm      ohpc-slurm-server   2.0       munge, slurmctld   M
slurm      ohpc-slurm-client   2.0       munge, slurmd      C, L