
IBM i Virtualization and Open Storage Read-me First

Vess Natchev

Cloud | Virtualization | Power Systems

IBM Rochester, MN

vess@us.ibm.com

July 9th, 2010


This “read-me first” document provides detailed instructions on using IBM i 6.1 virtualization and connecting open storage to IBM i. It covers prerequisites, supported hardware and software, planning considerations, and installation and post-installation tasks such as backups. The document also contains links to many additional information sources.

Table of Contents

1. IBM i virtualization solutions
1.1. IBM i logical partition (LPAR) hosting another IBM i partition
1.2. IBM i using open storage as a client of the Virtual I/O Server (VIOS)
1.3. IBM i on a Power blade
2. IBM i hosting IBM i supported configurations
2.1. Hardware
2.2. Software and firmware
3. IBM i hosting IBM i concepts
3.1. Virtual SCSI and Ethernet adapters
3.2. Storage virtualization
3.3. Optical virtualization
3.4. Network virtualization
4. Prerequisites for implementing IBM i hosted LPARs
4.1. Storage planning
4.2. Performance
4.3. Dual hosting
5. Implementing IBM i client LPARs with an IBM i host
6. Post-install tasks and considerations
6.1. Configure IBM i networking
6.2. How to perform IBM i operator panel functions
6.3. How to display the IBM i partition System Reference Code (SRC) history
6.4. Client IBM i LPARs considerations and limitations
6.5. Configuring Electronic Customer Support (ECS) over LAN
6.6. Copying storage spaces
6.7. Backups
7. IBM i using open storage supported configurations
7.1. Hardware
7.2. Software and firmware
8. IBM i using open storage through VIOS concepts
8.1. Virtual SCSI and Ethernet adapters
8.2. Storage virtualization
8.3. Optical virtualization
8.4. Network virtualization
9. Prerequisites for attaching open storage to IBM i through VIOS
9.1. Storage planning
9.2. Performance
9.3. Dual hosting and multi-path I/O (MPIO)
9.3.1. Dual VIOS LPARs with IBM i mirroring
9.3.2. Path redundancy to a single set of LUNs
9.3.2.1. Path redundancy with a single VIOS
9.3.2.2. Redundant VIOS LPARs with client-side MPIO
9.3.3. Subsystem Device Driver – Path Control Module (SDDPCM)
10. Attaching open storage to IBM i through VIOS
10.1. Open storage configuration
10.2. VIOS installation and configuration
10.3. IBM i installation and configuration
10.4. HMC provisioning of open storage in VIOS
10.5. End-to-end LUN device mapping
11. Post-install tasks and considerations
11.1. Configure IBM i networking
11.2. How to perform IBM i operator panel functions
11.3. How to display the IBM i partition System Reference Code (SRC) history
11.4. Client IBM i LPARs considerations and limitations
11.5. Configuring Electronic Customer Support (ECS) over LAN
11.6. Backups
12. DS4000 and DS5000 Copy Services and IBM i
12.1. FlashCopy and VolumeCopy
12.2. Enhanced Remote Mirroring (ERM)
13. IBM i using SAN Volume Controller (SVC) storage through VIOS
13.1. IBM i and SVC concepts
13.2. Attaching SVC storage to IBM i
14. SVC Copy Services and IBM i
14.1. FlashCopy
14.1.1. Test scenario
14.1.2. FlashCopy statements
14.2. Metro and Global Mirror
14.2.1. Test scenario
14.2.2. Metro and Global Mirror support statements
15. XIV Copy Services and IBM i
16. DS5000 direct attachment to IBM i
16.1. Overview
16.2. Supported hardware and software
16.3. Best practices, limitations and performance
16.4. Sizing and configuration
16.5. Copy Services support
17. N_Port ID Virtualization (NPIV) for IBM i
17.1. Overview
17.2. Supported hardware and software
17.3. Configuration
17.4. Copy Services support
18. Additional resources
19. Trademarks and disclaimers

1. IBM i virtualization solutions

IBM i 6.1 introduces three significant virtualization capabilities that allow faster deployment of IBM i workloads within a larger heterogeneous IT environment. This section will introduce and differentiate these new technologies.

1.1. IBM i logical partition (LPAR) hosting another IBM i partition

An IBM i 6.1 LPAR can host one or more additional IBM i LPARs, known as virtual client LPARs. Virtual client partitions can have no physical I/O hardware assigned and instead leverage virtual I/O resources from the host IBM i partition. The types of hardware resources that can be virtualized by the host LPAR are disk, optical and networking. The capability of IBM i to provide virtual I/O resources has been used successfully for several years to integrate AIX®, Linux® and Windows® workloads on the same platform. The same virtualization technology, which is part of the IBM i operating system, can now be used to host IBM i LPARs. IBM i hosting IBM i is the focus of the first half of this document.

1.2. IBM i using open storage as a client of the Virtual I/O Server (VIOS)

IBM i virtual client partitions can also be hosted by VIOS. VIOS is virtualization software that runs in a separate partition whose purpose is to provide virtual storage, optical, tape and networking resources to one or more client partitions. The most immediate benefit VIOS brings to an IBM i client partition is the ability to expand its storage portfolio to use 512-byte/sector open storage. Open storage volumes (or logical units, LUNs) are physically attached to VIOS via a Fibre Channel or Serial-attached SCSI (SAS) connection and then made available to IBM i. While IBM i does not directly attach to the SAN in this case, once open storage LUNs become available through VIOS, they are managed the same way as integrated disks or LUNs from a directly attached storage system. IBM i using open storage through VIOS is the focus of the second half of this read-me first guide.

1.3. IBM i on a Power blade

The third major virtualization enhancement with IBM i 6.1 is the ability to run an IBM i LPAR and its applications on a Power blade server, such as IBM BladeCenter JS12 or JS22. Running IBM i on a Power blade is beyond the scope of this document. See the IBM i on a Power Blade Readme First for a complete technical overview and implementation instructions: http://www.ibm.com/systems/power/hardware/blades/ibmi.html.

2. IBM i hosting IBM i supported configurations

2.1. Hardware

One of the most significant benefits of this solution is the broad hardware support. Any storage, network and optical adapters and devices supported by the host IBM i partition on a POWER6 processor-based server can be virtualized to the client IBM i partition. Virtualization of tape devices from an IBM i host to an IBM i client is not supported. The following table lists the supported hardware:


Hardware type | Supported for IBM i hosting IBM i | Notes
IBM Power servers | Yes | Includes IBM Power 520 Express, IBM Power 550 Express, IBM Power 560 Express, IBM Power 570 and IBM Power 595. Does not include IBM POWER6 processor-based blade servers, such as IBM BladeCenter JS12 and JS22.
IBM Power 575 | No |
POWER5-based systems or earlier | No |
Storage adapters (Fibre Channel, SAS, SCSI) | Yes | Must be supported by IBM i 6.1 or later and supported on a POWER6-based IBM Power server.
Storage devices and subsystems | Yes | Must be supported by IBM i 6.1 or later and supported on a POWER6-based IBM Power server.
Network adapters | Yes | Must be supported by IBM i 6.1 or later and supported on a POWER6-based IBM Power server.
Optical devices | Yes | Must be supported by IBM i 6.1 or later and supported on a POWER6-based IBM Power server.
Tape devices | No |

To determine the storage, network and optical devices supported on each IBM Power server model, refer to the Sales Manual for each model: http://www.ibm.com/common/ssi/index.wss.

To determine the storage, network and optical devices supported only by IBM i 6.1, refer to the upgrade planning Web site: https://www304.ibm.com/systems/support/i/planning/upgrade/futurehdwr.html.

2.2. Software and firmware

Software or firmware type | Supported for IBM i hosting IBM i | Notes
IBM i 6.1 or later | Yes | Required on both the host and client IBM i partitions.
IBM i 5.4 or earlier | No | Not supported on the host or client partition.
IBM Power server system firmware 320_040_031 or later | Yes | This is the minimum system firmware level required.
HMC firmware HMC V7 R3.2.0 or later | Yes | This is the minimum HMC firmware level required.

3. IBM i hosting IBM i concepts

The capability of an IBM i partition to host another IBM i partition involves hardware and virtualization components. The hardware components are the storage, optical and network adapters and devices physically assigned to the host IBM i LPAR. The virtualization components are the system firmware and IBM i operating system objects necessary to virtualize the physical I/O resources to client partitions. The following diagram shows the full solution and its components:

3.1. Virtual SCSI and Ethernet adapters

IBM i hosting IBM i uses an existing function of the system firmware, or Power Hypervisor: the capability to create virtual SCSI and Ethernet adapters in a partition. Virtual adapters are created for each LPAR in the Hardware Management Console (HMC). Virtual SCSI adapters are used for storage and optical virtualization; virtual Ethernet adapters are used for network virtualization.

Note that using virtual I/O resources from a host partition does not preclude an IBM i client partition from owning physical hardware. A mix of virtual and physical hardware in the same partition is supported for IBM i in this environment, by assigning both types of adapters to the partition in the HMC.
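Virtual adapter pairs are typically created through the HMC graphical interface as described in section 5. As a hedged sketch only (the managed system MYSYSTEM, the partition and profile names, and the slot numbers below are placeholders, and the exact attribute syntax should be verified against the HMC CLI documentation for your release), the same virtual SCSI pair could also be added to the partition profiles from the HMC command line:

    # Server adapter in slot 21 of the host, paired with slot 22 of client partition 3
    chsyscfg -m MYSYSTEM -r prof -i 'name=default,lpar_name=IHOST,"virtual_scsi_adapters+=21/server/3/ICLIENT/22/0"'

    # Client adapter in slot 22 of the client, paired with slot 21 of host partition 2
    chsyscfg -m MYSYSTEM -r prof -i 'name=default,lpar_name=ICLIENT,"virtual_scsi_adapters+=22/client/2/IHOST/21/1"'

Each entry uses the format slot/client-or-server/remote-partition-ID/remote-partition-name/remote-slot/is-required. The profiles must be re-activated (or equivalent dynamic LPAR operations performed) before the adapters exist in the running partitions.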

3.2. Storage virtualization

To virtualize integrated disk (SCSI, SAS or SSD) or LUNs from a SAN system to an IBM i client partition, both HMC and IBM i objects must be created. In the HMC, the minimum required configuration is:

One virtual SCSI server adapter in the host partition


One virtual SCSI client adapter in the client partition

This virtual SCSI adapter pair allows the client partition to send read and write I/O operations to the host partition. More than one virtual SCSI pair can exist for the same client partition in this environment. To minimize performance overhead on the host partition, the virtual SCSI connection is used to send I/O requests, but not for the actual transfer of data. Using the capability of the Power Hypervisor for Logical Remote Direct Memory Access (LRDMA), data are transferred directly from the physical adapter assigned to the host partition to a buffer in memory of the client partition.

There is no additional configuration required in IBM i in the virtual client partition. In the host partition, the minimum required IBM i setup consists of the following:

One Network Server Description (NWSD) object

One Network Server Storage Space (NWSSTG) object

The NWSD object associates a virtual SCSI server adapter in IBM i (which in turn is connected to a virtual SCSI client adapter in the HMC) with one or more NWSSTG objects. At least one NWSD object must be created in the host for each client, though more are supported.

The NWSSTG objects are the virtual disks provided to the client IBM i partition. They are created from available physical storage in the host partition. In the client, they are recognized and managed as standard DDxx disk devices (with a different type and model). The following screenshot shows several storage spaces for a client partition in an IBM i 6.1 host partition:

The next screenshot shows several storage spaces in an IBM i 6.1 client partition:


Storage spaces for an IBM i client partition do not have to match physical disk sizes; they can be created from 160 MB to 1 TB in size, as long as there is available storage in the host. The 160 MB minimum size is a requirement from the storage management Licensed Internal Code (LIC) on the client partition. For an IBM i client partition, up to 16 NWSSTGs can be linked to a single NWSD, and therefore, to a single virtual SCSI connection. Up to 32 outstanding I/O operations from the client to each storage space are supported for IBM i clients. Storage spaces can be created in any existing Auxiliary Storage Pool (ASP) on the host, including Independent ASPs. Through the use of NWSSTGs, any physical storage supported in the IBM i host partition on a POWER6-based system can be virtualized to a client partition.
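As a minimal sketch of the host-side objects described above (the object names, the size and the virtual SCSI server resource name CTL01 are placeholder examples rather than values from this document, and parameters such as TYPE and FORMAT should be verified against the Logical Partitioning Guide for your release), the CL commands could look similar to the following:

    /* Server description over the virtual SCSI server adapter (CTL01);   */
    /* prompt the command (F4) for any additional required parameters     */
    CRTNWSD NWSD(ICLIENT1) RSRCNAME(CTL01) TYPE(*GUEST *OPSYS) ONLINE(*NO)

    /* 40 GB storage space (NWSSIZE is specified in MB) in the system ASP */
    CRTNWSSTG NWSSTG(ICLIENT1D1) NWSSIZE(40960) FORMAT(*OPEN) ASP(1) +
              TEXT('Disk 1 for IBM i client 1')

    /* Link the storage space to the server description                   */
    ADDNWSSTGL NWSSTG(ICLIENT1D1) NWSD(ICLIENT1)

    /* Vary on the server description to present the virtual disk         */
    VRYCFG CFGOBJ(ICLIENT1) CFGTYPE(*NWS) STATUS(*ON)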

3.3. Optical virtualization

Any optical drive supported in the host IBM i LPAR can be virtualized to an IBM i client LPAR. An existing virtual SCSI connection can be used, or a new connection can be created explicitly for optical I/O traffic. By default, if a virtual SCSI connection exists between host and client, all physical OPTxx optical drives in the host will be available to the client, where they will also be recognized as OPTxx devices. The NWSD parameter Restricted device resources can be used to specify which optical devices in the host a client partition cannot access.

A virtualized optical drive in the host partition can be used for a D-mode Initial Program Load (IPL) and install of the client partition, as well as for installing Program Temporary Fixes (PTFs) or applications later. If the optical drive is writeable, the client partition will be able to write to the physical media in the drive.
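For example (ICLIENT1 and OPT02 are placeholder names, and the server description generally must be varied off before it can be changed), the host’s second optical drive could be withheld from a client with the Restricted device resources parameter:

    VRYCFG CFGOBJ(ICLIENT1) CFGTYPE(*NWS) STATUS(*OFF)
    CHGNWSD NWSD(ICLIENT1) RSTDDEVRSC(OPT02)
    VRYCFG CFGOBJ(ICLIENT1) CFGTYPE(*NWS) STATUS(*ON)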

3.4. Network virtualization

Virtualizing a network adapter and using a virtual LAN (VLAN) for partition-to-partition communication within a system are existing IBM i capabilities. In order for a client to use a host’s physical network adapter, a virtual Ethernet adapter must be created in the HMC in both partitions. To be on the same VLAN, the two virtual Ethernet adapters must have the same Port Virtual LAN ID (PVID). This type of adapter is recognized by IBM i as a communications port (CMNxx) with a different type. In the host partition, the virtual Ethernet adapter is then associated with the physical network adapter via a routing configuration – either Proxy ARP or Network Address Translation (NAT). This allows the client partition to send network packets via the VLAN and through the physical adapter to the outside LAN. The physical adapter can be any network adapter supported by IBM i 6.1, including Integrated Virtual Ethernet (IVE) ports, also known as Host Ethernet Adapter (HEA) ports.

4. Prerequisites for implementing IBM i hosted LPARs

4.1. Storage planning

Because virtual disks for the IBM i client LPAR are NWSSTG objects in the host LPAR, the main prerequisite to installing a new client LPAR is having sufficient capacity in the host to create those objects. Note that the host partition is not capable of detecting what percent of the virtual storage is used in the client. For example, if a 500-GB storage space is created, it will occupy that amount of physical storage in the host IBM i LPAR, even if the disk capacity is only 50% utilized in the client LPAR.


It is recommended to closely match the total size of the storage spaces for each client partition to its initial disk requirements. As the storage needs of the client partition grow, additional storage spaces can be dynamically created and linked to it on the host partition. On the client, the new virtual disk will automatically be recognized as a non-configured drive and can be added to any existing ASP. The only restriction to consider in this case is the maximum number of storage spaces allowed per virtual SCSI connection for an IBM i client partition, which is 16. If more than 16 NWSSTGs are needed for a client LPAR, additional virtual SCSI connections can be created dynamically in the HMC.
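A hedged sketch of that growth path, using placeholder names and sizes (the new DDxx drive is then added to an ASP from Service Tools on the client):

    /* Create an additional 100 GB storage space in ASP 2 on the host */
    CRTNWSSTG NWSSTG(ICLIENT1D2) NWSSIZE(102400) FORMAT(*OPEN) ASP(2)

    /* Link it to the existing server description for the client      */
    ADDNWSSTGL NWSSTG(ICLIENT1D2) NWSD(ICLIENT1)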

4.2. Performance

As described in section 3.2, disk I/O operations in an IBM i virtual client partition result in I/O requests to the physical disk adapter(s) and drives assigned to the host partition. Therefore, the best way to ensure good disk performance in the client LPAR is to create a well-performing disk configuration in the host LPAR. Because the host partition is a standard IBM i partition, all the recommendations in the Performance Capabilities Reference manual (http://www.ibm.com/systems/i/solutions/perfmgmt/resource.html) apply to it. Use the manual’s suggestions for maximizing IBM i disk performance for the type of physical storage used in the host, whether it is integrated disk or SAN.

Note that if only the System ASP exists on the host partition, NWSSTG objects are created on the same physical disk units as all other objects. If the host partition is running production applications in addition to providing virtual storage to client partitions, there will be disk I/O contention as both client partitions and IBM i workloads in the host send I/O requests to those disk units. To minimize disk I/O contention, create storage space objects in a separate ASP on the host (Independent ASPs are supported). Performance on the client(s) would then depend on the disk adapter and disk configuration used for that ASP. If the host partition is providing virtual storage to more than one client partition, consider using separate ASPs for the storage space objects for each client. This recommendation should be weighed against the concern of ending up with too few physical disk arms in each ASP to provide good performance.

Disk contention from IBM i workloads in the host LPAR and virtual client LPARs can be eliminated if a separate IBM i LPAR is used just for hosting client LPARs. An additional benefit of this configuration is the fact that an application or OS problem stemming from a different workload on the host cannot negatively affect client partitions. These benefits should be weighed against:

The license cost associated with a separate IBM i partition

The maintenance time required for another partition, such as applying Program Temporary Fixes (PTFs)

The ability to create well-performing physical disk configurations in both partitions that meet the requirements of their workloads

If the host partition runs a heavy-I/O workload and the client partitions also have high disk response requirements, it is strongly recommended to consider using a separate hosting partition, unless separate ASPs on the host are used for storage space objects. If the host partition’s workload is light to moderate with respect to disk requirements and the client partitions are used mostly for development, test or quality assurance (QA), it is acceptable to use one IBM i partition for both tasks.

4.3. Dual hosting

An IBM i client partition has a dependency on its host: if the host partition fails, IBM i on the client will lose contact with its disk units. The virtual disks would also become unavailable if the host partition is brought down to restricted state or shut down for scheduled maintenance or to apply PTFs. To remove this dependency, two host partitions can be used to simultaneously provide virtual storage to one or more client partitions.

The configuration for two hosts for the same client partition uses the same concepts as that for a single host described in section 3.2. In addition, a second virtual SCSI client adapter exists in the client LPAR, connected to a virtual SCSI server adapter in the second host LPAR. The IBM i configuration of the second host mimics that of the first host, with the same number of NWSD and NWSSTG objects, and NWSSTG objects of the same size. As a result, the client partition recognizes a second set of virtual disks of the same number and size. To achieve redundancy, adapter-level mirroring is used between the two sets of storage spaces from the two hosts. Thus, if a host partition fails or is taken down for maintenance, mirroring will be suspended, but the client partition will continue to operate. When the inactive host is either recovered or restarted, mirroring can be resumed.

5. Implementing IBM i client LPARs with an IBM i host

Installing IBM i in a client LPAR with an IBM i host consists of two main phases:

Creating the virtual SCSI configuration in the HMC

Creating the NWSSTG and NWSD objects in the IBM i host partition, and activating the new client partition

The implementation steps are described in detail in the topic Creating an IBM i logical partition that uses IBM i virtual I/O resources using the HMC in the Power Systems Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf. Note that sufficient available capacity is required in the IBM i host partition to create the storage space objects. When following the detailed implementation instructions, keep in mind the performance recommendations in section 4.2 of this document.

6. Post-install tasks and considerations

6.1. Configure IBM i networking

Once the IBM i client partition is installed and running, the first system management step is to configure networking. There are three types of network adapters that can be assigned to an IBM i client partition:

A standard physical network adapter in a PCI slot

A logical port on a Host Ethernet Adapter (HEA)

A virtual Ethernet adapter

Note that both physical and virtual I/O resources can be assigned to an IBM i virtual client partition. If a physical network adapter was not assigned to the IBM i client partition when it was first created, see the topic Managing physical I/O devices and slots dynamically using the HMC in the Power Systems Logical Partitioning Guide (http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf) to assign an available adapter.

An IBM i client partition can also use the new HEA capability of POWER6 processor-based servers. To assign a logical port (LHEA) on an HEA to an IBM i client partition, see the topic Creating a Logical Host Ethernet Adapter for a running logical partition using the HMC in the Power Systems Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf.

Lastly, a virtual Ethernet adapter can also provide network connectivity to an IBM i client partition. To create one, consult the topic Configuring a virtual Ethernet adapter using the HMC in the Power Systems Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf.

In all three cases, the assigned network adapter will be recognized as a communications port (CMNxx) in IBM i. The type of communications port will depend on the network adapter: for example, 5706 for a Gigabit Ethernet adapter, 5623 for an LHEA and 268C for a virtual Ethernet adapter. In the case of a standard PCI network adapter or an LHEA, networking can be configured following the process described in the IBM i networking topic in the Information Center: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/topic/rzajy/rzajyoverview.htm.

If the IBM i client partition is using a virtual Ethernet adapter for networking, additional configuration on the IBM i host is required. The virtual Ethernet adapter allows the client partition to communicate only with other partitions whose virtual Ethernet adapters have the same Port Virtual LAN ID (PVID); in other words, partitions on the same virtual LAN within the system. A routing configuration can be created in the IBM i host partition to allow forwarding of network packets from the outside LAN to the client partition on the virtual LAN. That type of virtual network configuration has been used successfully for several years to provide networking to Linux client partitions with an IBM i host. The two methods for routing traffic from the physical LAN to a client partition on a virtual LAN are Proxy ARP and Network Address Translation (NAT). To configure Proxy ARP or NAT in the IBM i host partition, follow the instructions in section 5.2 of the Redbook Implementing POWER Linux on IBM System i Platform (http://www.redbooks.ibm.com/redbooks/pdfs/sg246388.pdf).
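As a hedged sketch of the Proxy ARP approach (all line descriptions, resource names and IP addresses below are placeholders; the Redbook above documents the full procedure, including choosing an address block that is a subset of the external LAN’s subnet):

    /* Host partition: allow packet forwarding between the LAN and the VLAN    */
    CHGTCPA IPDTGFWD(*YES)

    /* Host partition: line and interface over the virtual Ethernet port CMN03 */
    CRTLINETH LIND(VRTETH01) RSRCNAME(CMN03)
    ADDTCPIFC INTNETADR('10.1.1.177') LIND(VRTETH01) SUBNETMASK('255.255.255.240')

    /* Client partition: interface in the same block, default route to the host */
    CRTLINETH LIND(VRTETH01) RSRCNAME(CMN02)
    ADDTCPIFC INTNETADR('10.1.1.178') LIND(VRTETH01) SUBNETMASK('255.255.255.240')
    ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('10.1.1.177')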

6.2. How to perform IBM i operator panel functions

Operator panel functions in an IBM i client partition are performed in the HMC:

Sign onto the HMC with a profile with sufficient authority to manage the IBM i client partition

Select the partition

Use the open-in-context arrow to select Serviceability → Control Panel Functions, then the desired function.

6.3. How to display the IBM i partition System Reference Code (SRC) history

Sign onto the HMC with a profile with sufficient authority to manage the IBM i client partition

Select the partition

Use the open-in-context arrow to select Serviceability → Reference Code History

To display words 2 through 9 of a reference code, click the radio button for that code.
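The same reference code history can also be retrieved from the HMC command line; as an illustrative sketch (MYSYSTEM and ICLIENT1 are placeholder names), the ten most recent codes for a partition can be listed with:

    lsrefcode -r lpar -m MYSYSTEM --filter "lpar_names=ICLIENT1" -n 10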

6.4. Client IBM i LPARs considerations and limitations

Consult the topic Considerations and limitations for i5/OS client partitions on systems managed by the Integrated Virtualization Manager (IVM) in the Information Center: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/topic/rzahc/rzahcbladei5limits.htm.


While in this case the IBM i client partition is not being managed by IVM, it does use virtual I/O resources, and the limitations outlined in the topic above apply to it.

6.5. Configuring Electronic Customer Support (ECS) over LAN

A supported WAN adapter can be assigned to the IBM i client partition for ECS. Alternatively, ECS over LAN can be configured. Consult the topic Setting up a connection to IBM in the Information Center: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/topic/rzaji/rzaji_setup.htm.

6.6. Copying storage spaces

Because an IBM i client partition is installed into one or more storage space objects in the IBM i host partition, new client partitions can be deployed rapidly by copying the storage space(s). Note that each IBM i partition, client or host, must have valid OS and licensed program product licenses for the number of processors it uses.

To copy one or more storage spaces that contain an installed IBM i client partition, first shut down the partition during an available period of downtime. Next, log into the host IBM i partition with a security officer-level profile and perform the following steps:

Enter WRKNWSSTG

Enter 3 next to the storage space you are going to copy, then press Enter

Enter a name of up to 10 characters for the new storage space

The size of the original storage space will be entered automatically. The new storage space can be the same size or larger (up to 1 TB), but not smaller

Enter the correct ASP ID. The ASP where the original storage space exists is the default

Optionally, enter a text description

Press Enter

To deploy the new client partition, follow the instructions in section 5 to create the necessary virtual SCSI configuration in the HMC and the NWSD object in the host IBM i partition.
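The copy itself can also be scripted rather than taken through WRKNWSSTG option 3. As a hedged sketch (all names are placeholders, and the FROMNWSSTG parameter should be verified on your release), copying the space and attaching it to a new server description could look like this:

    /* Copy the existing storage space into a new one of the same size        */
    CRTNWSSTG NWSSTG(ICLIENT2D1) FROMNWSSTG(ICLIENT1D1)

    /* Server description over the new client's virtual SCSI adapter (CTL02), */
    /* then link the copy and vary on                                          */
    CRTNWSD NWSD(ICLIENT2) RSRCNAME(CTL02) TYPE(*GUEST *OPSYS) ONLINE(*NO)
    ADDNWSSTGL NWSSTG(ICLIENT2D1) NWSD(ICLIENT2)
    VRYCFG CFGOBJ(ICLIENT2) CFGTYPE(*NWS) STATUS(*ON)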

6.7. Backups

As mentioned above, an IBM i client partition with an IBM i host can use a mix of virtual and physical I/O resources. Therefore, the simplest backup and restore approach is to assign an available tape adapter on the system to it and treat it as a standard IBM i partition. The tape adapter can be any adapter supported by IBM i on IBM Power servers and can be shared with other partitions. To assign an available tape adapter to the IBM i client partition, consult the topic Managing physical I/O devices and slots dynamically using the HMC in the Logical Partitioning Guide (http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf).

Once a tape adapter connected to a tape drive or library is available to the client partition, use the Backup and Recovery topic in the Information Center to manage backups: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/index.jsp?topic=/rzahg/rzahgbackup.htm&tocNode=int_215989.

The IBM i host partition can also be used for system-level backups of the client partition. See the topic Saving IBM i server objects in IBM i in the Logical Partitioning Guide (http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf).
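For example (ICLIENT1, ICLIENT1D1 and TAP01 are placeholder names), a client’s storage spaces can be saved from the host after the client partition is shut down and its server description is varied off:

    /* Vary off the server description while the client partition is down */
    VRYCFG CFGOBJ(ICLIENT1) CFGTYPE(*NWS) STATUS(*OFF)

    /* Save the storage space object from the integrated file system      */
    SAV DEV('/QSYS.LIB/TAP01.DEVD') OBJ(('/QFPNWSSTG/ICLIENT1D1'))

    /* Vary the server description back on                                 */
    VRYCFG CFGOBJ(ICLIENT1) CFGTYPE(*NWS) STATUS(*ON)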


7. IBM i using open storage supported configurations

There are three general methods by which IBM i connects to open storage:

Directly through the SAN fabric without VIOS

As a client of VIOS, with VIOS connecting to a storage subsystem through the SAN fabric

As a client of VIOS, with VIOS using storage from a SAN Volume Controller (SVC). SVC in turn connects to one or more storage subsystems through the SAN fabric

The first set of support tables in the hardware and software sections below applies to the first connection method above (IBM i direct attachment), while the second set of tables in both sections applies to the second method (IBM i → VIOS → storage subsystem). Furthermore, the support statements in the second set of tables below apply to the end-to-end solution of IBM i using open storage as a client of VIOS. This document will not attempt to list the full device support of VIOS, nor of any other clients of VIOS, such as AIX and Linux. For the general VIOS support statements, including other clients, see the VIOS Datasheet at: http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html.

When the third connection method to open storage is used (IBM i → VIOS → SVC → storage subsystem), the support statement for IBM i follows that of SVC. For a list of environments supported by SVC, see the data sheet on the SVC overview Web site: http://www.ibm.com/systems/storage/software/virtualization/svc. Note that IBM i cannot connect directly to SVC; it must do so as a client of VIOS.

IBM i as a client of VIOS is also supported on POWER6 or later processor-based blade servers. IBM i cannot attach directly to open storage when running on Power blades. For the full supported configurations statement for the IBM i on Power blade solution, see the Supported Environments document at: http://www.ibm.com/systems/power/hardware/blades/ibmi.html.

7.1. Hardware

Hardware type | Supported by IBM i for direct attachment | Notes
IBM Power servers | Yes | Includes IBM Power 520 Express, IBM Power 550 Express, IBM Power 560 Express, IBM Power 570 and IBM Power 595.
IBM Power 575 | No |
POWER5-based systems or earlier | No |
DS5100 using Fibre Channel or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
DS5300 using Fibre Channel or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
EXP810 expansion unit with Fibre Channel or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
EXP5000 expansion unit with Fibre Channel or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
EXP5060 expansion unit with Fibre Channel or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
DS6800 | Yes |
DS8100 using Fibre Channel or FATA drives | Yes | It is strongly recommended that FATA drives are used only for test or archival IBM i applications for performance reasons.
DS8300 using Fibre Channel or FATA drives | Yes | It is strongly recommended that FATA drives are used only for test or archival IBM i applications for performance reasons.
DS8700 using Fibre Channel or FATA drives | Yes | It is strongly recommended that FATA drives are used only for test or archival IBM i applications for performance reasons.

Hardware type | Supported by IBM i as a client of VIOS | Notes
IBM Power servers | Yes | Includes IBM Power 520 Express, IBM Power 550 Express, IBM Power 560 Express, IBM Power 570 and IBM Power 595.
IBM Power 575 | No |
POWER5-based systems or earlier | No |
DS3950 using Fibre Channel or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
DS4800 using Fibre Channel drives | Yes | Supported by IBM i as a client of VIOS on both IBM Power servers and IBM Power blade servers.
DS4800 using SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
DS4700 using Fibre Channel drives | Yes | Supported by IBM i as a client of VIOS on both IBM Power servers and IBM Power blade servers.
DS4700 using SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
DS3400 using SAS or SATA drives | Yes | It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons.
DS4200 and earlier DS4000 and FAStT models | No |
EXP810 expansion unit attached to DS4800 or DS4700 | Yes | See supported drives comments above.
EXP710 expansion unit attached to DS4800 or DS4700 | Yes | See supported drives comments above.
SAN Volume Controller (SVC) | Yes | SVC is not supported for direct connection to IBM i.
SVC Entry Edition (SVC EE) | Yes | SVC EE is not supported for direct connection to IBM i.
