Several books make up the 1174, 9300 and LINCS library, and include information to install,
customize, operate, and maintain the 1174 and 9300 products. Following is a list and description
of these manuals.
1174 Hardware Reference
The 1174 Hardware Reference manual provides a description of the hardware found in several
of the 1174 hardware platforms. These include the 1174-10R, 1174-10L, 1174-15X, 1174-20R,
1174-25X, 1174-60R, 1174-60C, 1174-65R, 1174-90R, and 1174-90T models. This manual
includes installation planning considerations and front panel operations.
1174 Hardware Reference - 1174-65S/90S Communications Servers
The 1174 Hardware Reference manual provides a description of the hardware found in the
1174-65S and 1174-90S hardware platforms. This manual includes installation planning
considerations and front panel operations.
9300 Hardware Description
The 9300 Hardware Description manual provides a description of the hardware found in the
9300 hardware platforms. This manual includes installation planning considerations and front
panel operations.
LINCS Product Description
The LINCS Product Description manual gives a brief description of the LINCS communications
software capabilities. A reasonably complete list of the functions supported by LINCS is included.
LINCS Features
The LINCS Features manual provides a much more detailed description of many of the LINCS
features. Among those features described in detail are APPN Network Node, SNA PU Gateway
support, IPX Routing, Host Connectivity, 3270 Server capabilities (IPX and TN3270), CUT
Device features including Windowing, Keystroke Record/Playback, Entry Assist and Calculator,
IP routing, IP Channel Bridge, ASCII Device and ASCII Host support, and NetView features.
LINCS Configuration
A description of the LINCS configuration process, as well as details of the configuration panels
used to customize the LINCS software, can be found in this manual.
LINCS Central Control
This manual contains information about the online Central Control panels. The Central Control
mode provides a means to manage the LINCS software and the 1174 and 9300 hardware. A
detailed description of their use is included in the manual.
LINCS Problem Determination
The LINCS Problem Determination manual aids the LINCS administrator by providing useful
information about error codes and how to interpret them. Information is also included for running
offline utilities.
1. Overview
This document is organized into chapters based upon the major features supported by LINCS.
The communications features (APPN NETWORK NODE, DSPU SUPPORT, IPX ROUTING,
IP ROUTING, IP CHANNEL BRIDGE, HOST CONNECTIVITY, and 3270 SERVER
FEATURES), which are used to route data through the LINCS node or to end devices
attached to the LINCS node, are described first.
Subsequent sections discuss the CUT, Network Computing Terminal, and DFT end devices
which are supported, along with the device features, RPQs that have been implemented, and
the management tools which are available to configure and manage your LINCS node.
Three appendices (ASCII Keyboard Control Codes, ASCII Device Setup, and Keyboard Maps
for 3270 Emulation) provide information for users of ASCII hosts and devices.
Embedded throughout the document, there are sections titled Configuration and Management.
These sections are included to direct you to the appropriate Central Control utilities for
configuring and managing a particular feature. Refer to the appropriate utilities in the
Configuration and Central Control Manuals for further details.
2. APPN Network Node
Advanced Peer-to-Peer Networking (APPN) is an enhancement to SNA that supports peer-to-peer
connections. APPN is appropriate for large SNA customers with multiple mainframes,
those with existing AS/400 APPN based networks who are moving to multi-protocol
networking, and those who wish to replace existing fixed predetermined point-to-point links
with APPN’s path selection scheme. The APPN architecture is an open technology, jointly
developed in an industry consortium called the APPN Implementers’ Workshop (AIW).
MTX is a voting member of the AIW. LINCS’ APPN Network node feature provides routing
and network services to other adjacent Network Nodes, end nodes (EN) and low-entry
networking (LEN) nodes with or without the presence of a local IBM mainframe. LINCS’
APPN consists of the following services:
• Intermediate Session Routing (ISR)
• High Performance Routing (HPR)
• Dependent LU Requester (DLUR)
• Connectivity
• Network Management
• Route Selection:
   • Class of Service Definitions (COS)
   • Directory Services
   • Flexible Network Topology
• Safe Store
LINCS APPN implementation is based on Data Connections Ltd’s SNAP APPN, which was
developed according to Version 2 of IBM’s APPN specification, including many optional APPN
function sets. By adhering to this standard, you can be certain that LINCS nodes will interconnect
to End Nodes and Network Nodes from a wide array of vendors, whose interoperability is
tested and proven at the APPN Implementers Workshop’s Connectathon.
Connectivity
APPN utilizes LINCS’ data link control objects for layer 2 connectivity to adjacent nodes. The
APPN feature can be configured for predefined or dynamic connections to adjacent nodes.
Dynamic connections are limited to LLC, and predefined connections can be any of the
following protocols:
• LLC
• Channel/SNA
• Frame Relay
• SDLC
• SDLC/DAP
• TCP/IP
• X.25
Predefined circuits are required to support a LEN node.
Dynamic Connections
LINCS uses dynamic connections to find other APPN nodes. End nodes using a LINCS node
as their server will dynamically connect, as will nodes on the connection network.
Predefined Circuits
Predefined Circuits are for nodes that cannot be located dynamically; Low Entry Nodes and
Channel/SNA and SDLC links are examples. However, links to End Nodes and adjacent NNs
can also be predefined. A user can specify the node type or allow LINCS to learn the node type of
the node that initiates communications.
Connection Networks
A connection network increases session performance by allowing end nodes to communicate
directly without the routing services of a network node. LINCS directs the end node to end
node communication, but does not have to be a part of the connection network. To do this, the
transport facility must be a “Shared Access Transport Facility” which allows end nodes to
communicate directly (i.e., LLC or Frame Relay).
Route Selection
APPN’s dynamic route selection eliminates the complex network definition required by other
protocols. APPN’s route selection separates the search phase from the route selection. First
the requested resource (LU) is located, then a route to the resource is determined.
The path chosen by APPN is based on location, topology information, requested priority, and
class-of-service. APPN chooses routes using:
• Class of Service
• Directory Services
• Network Topology
Class of Service
COS routing defines how different types of data will be routed, using paths optimized for that
specific type. All APPN nodes have several predefined classes of service. The COS used is
determined by the mode name used at session establishment time. LINCS supports four standard
mode names, plus the null or blank name. The four standard names correlate to identical COS
names. A name of all blanks correlates to the #CONNECT COS name. The Mode Name is
used to obtain a COS name and transmission priority requirements for the requested session.
A COS name designates a set of definitions used for route selection. LINCS has five standard
predefined COS names, and allows you to define additional COS names. Because COS names
are data objects in LINCS, they may be copied to other LINCS nodes manually or via Central
Site Change Management, a definite advantage for users needing to add COS names or modify
the standard definitions.
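To illustrate the mode-name lookup described above, the following Python sketch maps a session's mode name to a COS name. It is illustrative only; the standard mode names other than #CONNECT are assumed from common APPN practice and are not taken from this manual.

# Hypothetical sketch of mode-name to COS-name resolution (not LINCS code).
STANDARD_MODES = {"#BATCH", "#BATCHSC", "#INTER", "#INTERSC"}   # assumed standard names

def resolve_cos(mode_name: str) -> str:
    name = mode_name.strip()
    if not name:                  # an all-blank mode name
        return "#CONNECT"         # correlates to the #CONNECT COS
    if name in STANDARD_MODES:
        return name               # standard mode names correlate to identical COS names
    return name                   # user-defined modes map to user-defined COS names

print(resolve_cos(""))            # -> #CONNECT
print(resolve_cos("#INTER"))      # -> #INTER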
Class of Service Definitions (COS)
For a particular COS, APPN determines the importance of eight values that are defined for
every link within the APPN network. The values are:
• Propagation delay
• Cost per byte
• Cost for connect time
• Effective capacity
• Security
• Three optional user defined values
Each COS assigns a particular weight to these values. When an end node requests a route to a
partner, the class-of-service requested is compared against the COSs available along the path.
If the defined weights of a COS at each node meet or exceed the weights for the COS requested,
then the path will be selected. If one node along the path cannot provide the COS requested
(for example: SECURE), the request will be rejected.

Configuration and Management
The “Display/Update APPN COS” utility on the Customization Data Menu is used to display
or define your COS parameters. If you are not extremely knowledgeable about SNA and APPN
route selection procedures, you should probably not define your own COSs. Default values
will be used if no COS is defined.
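A compact way to picture the weight comparison described above is the sketch below. The field names and data layout are assumptions made for illustration; they are not LINCS definitions.

# Illustrative sketch: a path is acceptable only if every link on it meets or
# exceeds what the requested COS demands for each characteristic.
FIELDS = ("propagation_delay", "cost_per_byte", "cost_for_connect_time",
          "effective_capacity", "security")

def path_acceptable(requested, links):
    for link in links:
        for field in FIELDS:
            if link.get(field, 0) < requested.get(field, 0):
                return False      # one node cannot provide the requested COS
    return True

secure_request = {"security": 1}
path = [{"security": 1}, {"security": 0}]     # the second link is not SECURE
print(path_acceptable(secure_request, path))  # False - the request is rejected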
Directory Services
APPN provides a dynamic, automatic network definition capability. In an APPN network,
end nodes and the resources they provide (LUs) need only be known by their serving
network node.
A LEN’s resources are defined during LINCS configuration. End nodes inform network nodes
of their resources upon connecting. Directory services maintains a directory database of this
information, plus information about resources outside its own domain as it learns of them.
Aging algorithms remove inactive entries from the “Safe Stored” directory database to keep it
at a manageable size. LINCS’ APPN supports standard directory services including:
• Network node server – LINCS registers its APPN end node’s resources in a local database.
LENs and ENs use this service to locate remote LUs.
• LU registration – LEN LUs can be registered using LINCS’ Central Control. This enables
remote end nodes to find them.
• Directed and broadcast services – LINCS uses directed search requests to obtain domain
path information from a central directory server, typically VTAM, and uses broadcast search
requests when no central directory service is present.
Locating a Resource
APPN end nodes do not need partner definitions. Instead, an end node asks its network node
server (for example LINCS) to find a partner and to determine the best route to get there. Each
end node tells its network node server which LUs reside in it. By combining all the information
known by all the network nodes, the location of any LU can be determined. When an NN is
requested to find an LU, it first looks within its own directory for the LU’s location. If not
there, the NN sends the request to all of its adjacent network nodes. The send process is
repeated by the adjacent nodes, until the LU is found. At that time, APPN caches the location
of the found LU, so it will not have to go through the search phase if that LU is requested
again. If more than one possible route is found, APPN selects the best path of those available
which meet the requested requirements.
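The search-and-cache behaviour just described can be sketched as follows; the class and method names are hypothetical and are used only to show the flow of a locate request.

# Illustrative sketch of locating an LU: local directory first, then a search
# of adjacent network nodes, with the answer cached for later requests.
class DirectoryService:
    def __init__(self, adjacent_nns):
        self.directory = {}              # lu_name -> owning node
        self.adjacent_nns = adjacent_nns

    def locate(self, lu_name):
        if lu_name in self.directory:    # registered locally or cached earlier
            return self.directory[lu_name]
        for nn in self.adjacent_nns:     # send the request to adjacent NNs
            location = nn.locate(lu_name)
            if location is not None:
                self.directory[lu_name] = location   # cache the found LU
                return location
        return None                      # resource could not be located

remote = DirectoryService(adjacent_nns=[])
remote.directory["LUPAY01"] = "NODE.A"
local = DirectoryService(adjacent_nns=[remote])
print(local.locate("LUPAY01"))           # found via the search, then cached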
Network Topology
APPN allows any network topology. Each NN can be directly connected to every other NN,
connected through a single routing hub, connected in a hierarchical network design, or any
combination of these. LINCS APPN maintains information about all NNs and intermediate
routing Transmission Groups (TG) within the network in a “Safe Stored” Topology Database.
LINCS exchanges network information with other network nodes to maintain up-to-date
topology information, which includes data about NNs, plus their connections to Virtual Routing
Nodes and other NNs. Unlike TCP/IP, topology information is exchanged only when a topology
change occurs, which reduces network management traffic significantly. Also, the topology
database contains information about network nodes only (information about LENs and ENs is
obtained from APPN’s directory service), which reduces the size of the database.
Safe Store
All LINCS nodes are equipped with hard disks, so LINCS can save network information,
which is known as the APPN Safe Store feature. Safe Stores occur only when updates to the
directory or topology database have occurred. LINCS checks whether updates have occurred
at heuristically determined intervals of 1 to 10 minutes. If LINCS APPN is stopped and restarted,
only topology and directory updates that have occurred since the last Safe Store need to be
obtained from the adjacent end node. This greatly reduces the amount of network management
data traffic.
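As a rough sketch of this behaviour (the class and function names are assumptions, not LINCS interfaces), a Safe Store pass writes only the databases that have changed since the last pass; LINCS performs such a pass at intervals of 1 to 10 minutes.

from dataclasses import dataclass, field

@dataclass
class SafeStoredDatabase:
    name: str
    entries: dict = field(default_factory=dict)
    changed_since_last_store: bool = False

    def write_to_disk(self):
        # stand-in for writing the database to the node's hard disk
        print(f"safe-storing {self.name} ({len(self.entries)} entries)")

def safe_store_pass(databases):
    for db in databases:
        if db.changed_since_last_store:     # store only if updates have occurred
            db.write_to_disk()
            db.changed_since_last_store = False

topology = SafeStoredDatabase("topology database", changed_since_last_store=True)
directory = SafeStoredDatabase("directory database")
safe_store_pass([topology, directory])      # only the topology database is written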
Intermediate Session Routing (ISR)
LINCS APPN supports networks using Intermediate Session Routing. ISR provides connection-oriented routing for LEN and EN sessions and for connections to adjacent NNs. Additionally,
ISR provides adaptive pacing and segmentation and reassembly of APPN network message
units when required by partners. ISR routes session traffic at the Path Information Unit level.
High Performance Routing (HPR)
High Performance Routing (HPR) is an extension to APPN, which uses the same link types
that base APPN supports. HPR adds reliability to the network, so link-level error recovery is
only recommended on links with high error rates. HPR requires that LLC operate without
link-level error recovery, and requires that X.25, Channel/SNA, SDLC, and SDLC/DAP operate
with link-level error recovery.
HPR provides dynamic rerouting for APPN, so sessions can survive link failures, while
maintaining the deterministic stability and class-of-service associated with ISR. HPR can
coexist and interoperate with all existing levels of APPN nodes. HPR nodes can distinguish
between base APPN and HPR nodes and links. APPN nodes view HPR nodes and links as
base APPN nodes and links. HPR uses the existing APPN route selection algorithm for
route selection.
HPR provides end-to-end connections, thereby obtaining a significant throughput increase
over ISR. HPR utilizes three new protocols to achieve additional throughput:
• Automatic Network Routing
• Rapid Transport Protocol
• Adaptive Rate Based congestion control
Automatic Network Routing (ANR)
ANR is a connectionless protocol that switches packets on a predetermined path. It improves
switching speed, because of its reduced number of instructions compared to ISR. ANR also
eliminates the 500-byte control block for each session, increasing the number of connections
that the LINCS node can support, similar to Source-Route-Bridging. Labels that represent the
full path between end node partners are carried in the header of each ANR packet. Because
there is no limit to the number of labels in the header, ANR is not limited to 7 hops, like Source
Route Bridging is. ANR selects the highest priority packet, determines the next link from the
first label, deletes that label, and then sends the packet out on the identified link.
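The label handling just described can be shown with a brief sketch; the Link class and the packet layout are hypothetical and serve only to illustrate the step.

# Illustrative sketch: the first ANR label selects the outbound link and is
# deleted before the packet is forwarded.
class Link:
    def __init__(self, name):
        self.name = name
    def send(self, packet):
        print(f"forwarding on {self.name}: {packet}")

def forward_anr_packet(packet, links):
    next_label = packet["anr_labels"].pop(0)   # take and delete the first label
    links[next_label].send(packet)             # send on the identified link

links = {"A1": Link("link A1"), "B7": Link("link B7")}
forward_anr_packet({"anr_labels": ["A1", "B7"], "data": b"RU"}, links)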
Some important features that ANR offers are:
• Fast packet switching – ANR is much faster than base APPN’s ISR. ANR operates at a
lower layer than ISR. (ANR operates at Layer 2; ISR operates at Layer 3.) That minimizes
the storage and processing required to route packets through intermediate nodes. ANR
performs the traffic prioritization and packet forwarding functions found in ISR. Functions
such as flow control, segmentation, link-level error recovery, and congestion control are
NOT performed in the intermediate node (as in HPR). Instead, these functions are executed
only at the endpoints of an RTP connection.
• Source Routing – ANR supports source routing. Each packet has a network layer header
with routing information at the beginning of the packet. This routing information consists
of a string of ANR labels. These labels describe the path of the packet through an HPR
subnet. ANR labels are locally assigned by each HPR node. When an HPR node receives a
packet, it:
1. looks at the first ANR label in the packet
2. selects the corresponding link over which to send the packet
3. deletes this first ANR label from the packet
4. forwards the packet out onto the selected link
• No session awareness – Intermediate HPR nodes have no knowledge of sessions. They
simply route the session traffic based on the routing information. Therefore, intermediate
nodes no longer have to store any session information (routing tables) for sessions that are
routed across it, as in base APPN.
Rapid Transport Protocol (RTP)
RTP is designed for fast, high-quality networks. It is a connection-oriented transport protocol
at OSI Layer 2. Before ANR takes place, RTP determines from the APPN topology and directory
databases what the largest packet size is that can be supported at each node across an entire
route. Before the first ANR hop, RTP segments packets to the minimum packet size, eliminating
any requirement for segmenting and reassembly within the network. The last NN in the path
reassembles and resequences the packets, if necessary. RTP retransmits only missing packets,
which is more efficient than algorithms that retransmit the missing packet and all following
ones. This selective retransmit algorithm preserves packet order, and is the foundation for the
Multiple Link Transmission Group (MLTG) support in HPR. RTP handles link failures by
computing ANR labels for a new path that meets the class-of-service requirements, and switching
to it without notifying or disrupting higher protocol layers.
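The selective retransmit behaviour can be pictured with the short sketch below; the data structures are assumptions made only for illustration.

# Illustrative sketch: only the packets the partner has not received are resent.
def missing_sequence_numbers(received, highest_sent):
    return [seq for seq in range(1, highest_sent + 1) if seq not in received]

def selective_retransmit(send_buffer, received):
    highest = max(send_buffer) if send_buffer else 0
    return [send_buffer[seq]
            for seq in missing_sequence_numbers(received, highest)
            if seq in send_buffer]

buffer = {1: b"a", 2: b"b", 3: b"c"}
print(selective_retransmit(buffer, received={1, 3}))   # only packet 2 is resent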
RTP establishes RTP connections to carry session traffic across an HPR subnet. These
connections are “transport pipes” that connect two HPR nodes over a specific path in an HPR
subnet. Multiple sessions may share the same RTP connection if they are using the same class
of service. Intermediate HPR nodes have no knowledge of the RTP connection. They simply
route the session traffic based on the ANR routing information.
RTP performs the following functions:
• Segmentation to the size needed for the smallest link
• Reassembly of segments
• Fast RTP connection setup and dissolution
• Selective retransmissions – RTP retransmits only missed or corrupted packets, instead of
every packet since the error occurred.
• Sequence checking, in-order delivery
• End-to-end error recovery – HPR takes advantage of high-speed links. Since high-speed
links are more reliable, they do not need the level of error recovery found in base APPN.
RTP performs error recovery on an end-to-end basis, instead of requiring link-level error
recovery on each intermediate link (as in base APPN). By only checking for errors at the
endpoints of an RTP connection, the number of flows required for error recovery is reduced.
• Nondisruptive path switching – If a link or node goes down, RTP automatically reroutes the
data without disrupting the traffic flow. RTP connections are reestablished over a new route
that bypasses the failed link or node. Missed data is automatically recovered using end-to-end error recovery.
Adaptive rate-based flow/congestion control (ARB)
Base APPN performs adaptive session pacing at each node in the network. This method of
flow control works well in networks with various link types operating at different speeds.
Networks with high-speed links, however, can reduce the amount of processing done at each
node by using adaptive rate-based congestion control (ARB) at the RTP endpoints. ARB
attempts to predict when congestion will occur and reduce a node’s sending rate before this
happens. Each node samples the rate at which it receives and sends packets. When buffering
limits are approached, ARB appends rate messages to data packets telling the end nodes to
speed up or slow down, instead of waiting until after congestion develops and packets are
discarded, requiring retransmission. ARB avoids congestion, instead of reacting to congestion,
which yields higher link utilization.
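The rate-message behaviour can be summarized with the sketch below; the thresholds and step sizes are illustrative assumptions, not values taken from LINCS.

# Illustrative sketch: an endpoint asks its partner to slow down before its
# buffering limits are reached, rather than after packets are discarded.
def arb_rate_message(buffer_used, buffer_limit):
    utilization = buffer_used / buffer_limit
    if utilization > 0.8:
        return "SLOW_DOWN"       # appended to a data packet sent to the partner
    if utilization < 0.3:
        return "SPEED_UP"
    return "KEEP_RATE"

def adjust_send_rate(rate_bps, message):
    if message == "SLOW_DOWN":
        return rate_bps * 0.75
    if message == "SPEED_UP":
        return rate_bps * 1.10
    return rate_bps

msg = arb_rate_message(buffer_used=90.0, buffer_limit=100.0)
print(msg, adjust_send_rate(1_000_000.0, msg))   # SLOW_DOWN 750000.0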
HPR Configuration and Management
Configuration – HPR is configured for an APPN circuit by setting the APPN HPR field in
the circuit’s Link Profile to the desired value.
Management – The APPN Menu in Central Control Mode contains the following utilities
which have HPR specific information.
• The Display/Update Circuit Status utility contains an HPR support field.
• The Display Node Topology utility indicates whether or not HPR is supported on APPN
nodes.
• The Display RTP Connections utility displays information about the RTP connections for
which this node is an end point.
Dependent LU Requester (DLUR)
The Dependent LU Requester feature in a LINCS node will accept PU2.1 APPN traffic from
a Dependent LU Server (DLUS), and convert it to PU2.0 traffic for processing by dependent
LUs. This allows dependent LUs to benefit from APPN routing services that are traditionally
unavailable to them.
Some advantages of DLUR over traditional Host and Gateway Circuits are:
• Dynamic routing and directory services without static user definitions.
• Transmission priority
• Class of service
• Reduced system definition
• Dependent LUs can be moved anywhere in the APPN network without problem management
concerns and without changing VTAM definitions.
• Backup and recovery for links and nodes is available without additional definition and
without requiring the user to switch to another logical terminal.
• The DLUR and DLUS do not have to be in the same APPN subnetwork, because LUs can
be routed across subnetworks through border nodes.
• The DLUR/DLUS function supports SSCP takeover and giveback.
• The DLUR LINCS node can also route the Central Site Control Facility (CSCF) traffic to
the DLUR host circuit or the DLUR gateway circuit.
The dependent LUs may reside in the LINCS node or on a downstream node. LUs residing
within the LINCS node are referred to as Internal LUs; those residing in a DSPU are referred
to as External LUs.
• Internal LUs - The dependent LUs will reside in the LINCS node; therefore, a DLUR Host
Circuit should be defined to accept the PU2.0 traffic. A DLUR Host Circuit provides
dependent LU support for the following:
• Direct and Network devices
• TN3270 Clients
• SAA Clients
• DSPU using LU to PU Mapping and APPN mapping feature
• Central Site Change Management (CSCM)
• Local Format Storage
Each DLUR host circuit and DLUR gateway circuit will be treated as an APPN predefined
circuit, and so will be included in the maximum APPN circuits limit. These circuits are also
counted as part of the maximum Host Circuits and maximum Gateway Circuits.
DLUR Configuration and Management
Configuration - In order to use DLUR, you must enable APPN as described in the Configuration
and Management section. The panels which contain DLUR configuration items are:
• The SNA Options panel allows you to define the DLUS (Dependent LU Server).
• If APPN will be used to route data to dependent LUs associated with one or more PUs on
the LINCS node (i.e., internal PUs), a DLUR Host Circuit should be defined for each such
PU. DLUR host circuits are defined by setting both the Line Options and Link Profile
panels to DLUR. Refer to the section titled “3270 Host Connectivity” for more information
on defining Host Circuits.
• If APPN will be used to route data to dependent LUs associated with one or more PUs
which are downstream from the LINCS node (i.e., external PUs), a DLUR Gateway Circuit
should be defined for each such PU. DLUR gateway circuits are defined by setting both the
Line Options and Link Profile panels for the Upstream connection to DLUR. Refer to the
section titled “Gateway Circuits” for more information on defining Gateway Circuits.
Management - Central Control Mode contains the following utilities pertaining to DLUR
circuits:
• The Display/Update Circuit Status utility on the APPN Menu in Central Control Mode
includes information on your DLUR circuits, in addition to all other APPN dynamic and
predefined circuits.
• The Display/Update Gateway Circuits Status utility on the Communications Menu shows
the current status of your DLUR gateway circuits.
• The Display/Update 3270 LU Connections utility on the Device Menu shows the LUs
that are active on your DLUR host circuits.
APPN Configuration and Management
Configuration - The following list defines the Configuration utilities used to define APPN
Circuits. The “APPN Menu” in Configuration can also help guide you through these utilities:
• The Line Options utility defines the lines you will be using to access the APPN network,
and enables the appropriate protocols for that line.
• A Link Profile should be defined for each protocol you will be using to access the APPN
network. The profiles define link parameters, which may be shared by all of your APPN
Circuits. Among these parameters are the APPN Transmission Group characteristics.
• The SNA Options panel contains some APPN Options. Among these are the Network Id
and the APPN CP name, which are required when using APPN.
• The APPN Dynamic Connections utility defines parameters which are used to create APPN
circuits dynamically. It also allows you to define APPN lines as a part of a Connection
Network.
• The APPN Predefined Circuits utility predefines APPN circuits. This is required for some
protocols, such as SDLC and Channel/SNA.
• The APPN LEN LUs utility defines LUs residing in Adjacent LEN nodes. You must define
these LUs for LEN nodes, since a LEN’s LUs cannot be located dynamically.
Management - The “APPN Menu” in Central Control Mode contains many utilities that help
you determine the status of your APPN node, circuits, and sessions. There are also utilities
that display the current APPN directory and topology, and even a utility to APING another
APPN node. Refer to the APPN Menu in Central Control Mode for further details of the
utilities available.
In addition to the Network Management Vector Transport (NMVT) facility and the SNMP
agent available in all LINCS nodes, LINCS Central Control provides on-line management for
APPN. From the Central Control APPN Menu, you can determine the status of the LINCS’
APPN node and adjacent nodes and links. From the APPN Menu, you can use APING to
determine the presence of and link status to end and network nodes, and select any of the
following for current and historical information about the state of the network:
• Node Status - lists the number of configured and active adjacent nodes, intermediate sessions,
and directory entries; whether the node is congested; and the route addition resistance.
From this panel, the APPN feature can be stopped and restarted.
• Circuit Status - lists all defined APPN circuits and their status. Links may be stopped and
started from this panel.
• ISR Sessions - lists the sessions and provides information about the sessions for which this
APPN node is providing intermediate session routing. This panel is used when contemplating
stopping the node or determining the cause for congestion.
• End Point Sessions - provides information about the sessions for which this node is
considered the end point.
• RTP Connections - this panel displays information about the RTP connections for which
this node is an end point. From a subordinate panel a user can request that the current RTP
Connection path-switch to a better route.
• Directory - lists all of the network resources this node knows about. This information is
Safe Stored and recovered when the LINCS node is IMLed.
• Node Topology - displays information about network nodes within the network, gathered
from Topology Database Updates. This information is Safe Stored and recovered when the
LINCS node is IMLed.
• Transmission Group Topology - similar to Node Topology, but about Transmission Groups.
Class-of-Service data for each TG is shown on this panel.
• Problems and Exceptions - a chronological list of problems and exceptions that have
occurred. A problem is a perceptible anomalous event that degrades the system. An exception
is an anomalous event that degrades the system, but is not yet perceptible.
• Audits - a list of normal events that have occurred, such as when circuits are stopped or
started, sessions are activated and deactivated, and CP-CP sessions are established. Used for
tracking activity and accounting.
3. DSPU Support
Gateway Circuits
PU Passthrough
This feature allows a LINCS node to act as a SNA gateway between Downstream Physical
Units (DSPUs) and an upstream SNA host. Each host PU that is mapped to a DSPU is called
a gateway circuit. SNA data for gateway circuits is passed through the LINCS node but the
link layer connections to the host and the DSPU are maintained as separate logical links.
LINCS does not tunnel any protocol and therefore is not susceptible to link level timeouts,
retries, and lost sessions. All combinations of the upstream and downstream protocols listed
below are supported.
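The following sketch illustrates the passthrough idea; the classes are hypothetical and simply show that the upstream and downstream logical links are kept separate while frames are relayed between them.

class LogicalLink:
    def __init__(self, name):
        self.name = name
    def send(self, frame):
        print(f"{self.name} <- {len(frame)} bytes")

class GatewayCircuit:
    def __init__(self, upstream, downstream):
        self.upstream = upstream       # separate logical link to the SNA host
        self.downstream = downstream   # separate logical link to the DSPU

    def from_host(self, frame):
        self.downstream.send(frame)    # host traffic is relayed to the DSPU

    def from_dspu(self, frame):
        self.upstream.send(frame)      # DSPU traffic is relayed to the host

circuit = GatewayCircuit(LogicalLink("host link"), LogicalLink("DSPU link"))
circuit.from_host(b"\x2d\x00\x03")     # example SNA frame relayed downstream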
Configuration and Management
Configuration - The following lists the Configuration utilities which are used to define
Gateway Circuits. The “Gateway Menu” in Configuration will guide you through these
configuration utilities.
• The Line Options utility should be used to define the lines you will be using for your
upstream and downstream connections. The appropriate protocols should be enabled on the
Line Options panels.
• A Link Profile should be defined for each protocol you will be using. The profiles are used
to define link parameters which may be shared by any or all of your Gateway Circuits. A
single profile may be assigned to links with the same protocol, thus eliminating the need to
configure this link type information for each upstream and downstream connection.
• A Gateway Circuit panel must be defined for each connection to a DSPU. A line and
link profile must be assigned to each upstream and downstream connection you define.
These panels are used to define unique information for each connection, such as
addressing information.
Management - The “Communications Menu” in Central Control Mode contains many utilities
to help you determine the status of your Gateway Circuits. From this menu, you can determine
the status of your communication lines or of individual Gateway Circuits. Also, depending
upon the protocol(s) being used, you may access one of the protocol submenus (e.g., TCP/IP
Menu, LLC Menu) to get more specific information. Please refer to the Communications Menu
in Central Control Mode for more information.
Upstream Protocols
Any of the following protocols can be used as an upstream protocol to connect a PU2.0 SNA
DSPU to a 3270 host:
• Channel (Bus and Tag, or ESCON)
• Frame Relay/LLC
• LAN/LLC
• SDLC
• TCP/IP
• X.25
• APPN DLUR
Downstream Protocols
LINCS uses any of the following protocols to communicate with the DSPU:
• Frame Relay/LLC
• LAN/LLC
• SDLC/DAP
• TCP/IP
• X.25
SDLC/DAP
LINCS’ SDLC/DAP feature expands the SNA gateway capabilities into the SDLC environment
by allowing the LINCS node to support DSPUs (PU type 2.0) over SDLC links. DSPUs are
polled by the LINCS node in much the same way that a front-end processor running NCP polls them. SNA traffic
is passed through the LINCS node upstream to a host connection. This host connection can be
through a channel attachment or through a LAN attachment (Token Ring or Ethernet).
Downstream device attachments are made using the standard SCC or HSC card of the LINCS
node. Each card is capable of supporting up to 16 PUs downstream over one or more lines,
running up to 64 Kbps each. Lines may be configured to support full-duplex or half-duplex
operations, point-to-point or multidrop, NRZ or NRZI.
Segmentation
The maximum frame size sent to the host and to the DSPU is configured in the link profiles
associated with each gateway circuit. LINCS will perform SN A segmentation in both directions
as required.
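A minimal sketch of the segmentation step, assuming only that a maximum frame size has been taken from the link profile, is shown below; it is illustrative, not LINCS code.

def segment(data, max_frame_size):
    # split an outbound request unit into frames no larger than the configured size
    return [data[i:i + max_frame_size] for i in range(0, len(data), max_frame_size)]

pieces = segment(bytes(700), 265)      # a 700-byte RU over a 265-byte link
print([len(p) for p in pieces])        # [265, 265, 170]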
Link Termination
LINCS will send a REQDISCONTACT to the host if the link to the DSPU is lost for any
reason. The host is then responsible for the error recovery required to reactivate the link.
LU to PU Mapping
Host logical unit (LU) to LAN physical unit (PU) mapping permits LU traffic destined for a
LINCS node to be redirected to LLC connected LAN DSPUs. Using this feature, also known
as a PU Concentration, the SNA host has access to DSPUs without requiring a separate PU at
the host for each LAN device. To the host, the LINCS node appears to be a standard PU2.0
device, while to the DSPU, the LINCS node appears to be a PU passthrough gateway. This is
achieved by redirecting the datastream for some of the LINCS node’s LUs onto the LAN
according to the configured LU-PU map.
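The mapping can be pictured with the hypothetical sketch below, which shows pooled host LUs being handed to LAN DSPUs on a first-come, first-served basis; the names are illustrative only.

class LuPuMap:
    def __init__(self, pooled_lus):
        self.free_lus = list(pooled_lus)    # LU addresses on the host circuit
        self.assignments = {}               # LU address -> DSPU MAC address

    def attach_dspu(self, mac_address):
        if not self.free_lus:
            return None                     # the pool is exhausted
        lu = self.free_lus.pop(0)
        self.assignments[lu] = mac_address  # traffic for this LU is redirected
        return lu                           # onto the LAN to this DSPU

    def detach_dspu(self, lu):
        self.assignments.pop(lu, None)
        self.free_lus.append(lu)            # the LU returns to the pool

pu_map = LuPuMap(pooled_lus=[2, 3, 4, 5])
print(pu_map.attach_dspu("400012345678"))   # -> 2 (first free LU in the pool)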
This feature has the following benefits:
• By mapping LUs instead of PUs to the LAN clients, the number of PUs required on the host
can be reduced.
• The mapped LUs can be pooled and made available to DSPUs on a first-come, first-served basis.
• LUs from multiple hosts may be mapped to the same DSPU.
• New LAN clients may be added without changing the GEN or reconfiguring the
LINCS node.
Because a PC-Based DSN handles its own keyboard, LINCS functions such as Local Prints,
Device-oriented alerts, and the Response Time Monitor are not supported from the Gateway.
The host LU to LAN PU feature works with any LINCS SNA host circuit.
Configuration and Management
Configuration - The following lists the Configuration utilities which are used to define the
LUs and DSPUs which will be used by the LU to PU mapping feature. The “LU to PU Mapping”
Menu in Configuration will guide you through these configuration utilities.
• You must define at least one SNA host circuit to receive traffic for the LUs which will be
mapped to the DSPUs. Refer to the 3270 Host Connectivity section for information on how
to configure your 3270 host circuits.
• The Line Options utility should be used to define the LLC lines which will be used to
communicate to the DSPU(s).
• An LLC Link Profile must be defined with the link information to be used by the LLC link to
the DSPU(s).
• LU to PU Mapping Profiles are used to associate Upstream LUs with Downstream PUs.
Multiple DSPUs can share a LU to PU Mapping Profile, or, if security is required, a profile
may be defined for each DSPU.
Management - The “Display/Update 3270 LU Connections” utility displays all configured
3270 LUs and indicates whether or not DSPUs currently own the LUs. If a DSPU owns a LU,
the DSPU’s LAN address will be displayed. The LUs may be disconnected using this utility.
4. IPX Routing
The LINCS IPX Router feature allows a LINCS node to act as an IPX router when attached to
a NetWare internetwork via one or more LAN boards. IPX routing is supported between all
LAN connections. The LANs can be any combination of Ethernet and/or Token Ring lines, or
two different frame types on the same line.
To enable IPX routing, enable one of the following protocol combinations:
• one LAN board with multiple IPX/MAC layer protocols
• multiple LAN boards with one or more IPX/MAC layer protocols enabled
• one LAN board with IPX SNA Server enabled
MAC Layer Protocol Conversion
Using IPX protocol, LINCS routes packets to and from various clients, servers, and routers in
the internetwork to their final destination network. MAC layer protocol conversion is performed
when packets are routed between network segments using differing MAC layer protocols.
The LINCS IPX Router feature uses the Routing Information Protocol (RIP) and Service Advertising
Protocol (SAP) to maintain an internal database of network route and service information.
MAC layer protocol conversion is done between different frame types. The following frame
types are supported:
• Token Ring and Ethernet 802.2 LLC
• Token Ring and Ethernet SNAP
• Ethernet 802.3
• Ethernet Version 2
RIP Database
The RIP database is used by IPX to determine the best route to a destination network when
routing IPX packets. The RIP application is used to broadcast internetwork route information
to directly connected network segments, to keep other routers in the internetwork up to date on
network configuration changes. This information is broadcast when network configuration
changes are detected, and periodically during normal operations. The RIP application also
responds to RIP routing information requests from other routers, clients, and servers in the
internetwork. RIP uses an Aging process to remove networks from the network route database
if no broadcasts are received periodically to indicate that a given network is still available.
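The aging step can be sketched as follows; the table layout and interval are assumptions used only to illustrate the removal of stale routes.

import time

def age_routes(route_table, max_age_seconds, now=None):
    now = now if now is not None else time.time()
    for network, entry in list(route_table.items()):
        if now - entry["last_broadcast"] > max_age_seconds:
            del route_table[network]   # no recent broadcast: assume unreachable

routes = {"00000123": {"hops": 2, "last_broadcast": 0.0}}
age_routes(routes, max_age_seconds=180.0, now=240.0)
print(routes)                          # {} - the route has aged out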
SAP Database
The SAP database is used by Servers residing on the LINCS internal IPX network. The SAP
application is used to broadcast internetwork service information to directly connected network
segments, to keep other routers and servers in the internetwork up to date on network service
changes. This information will be broadcast when network service changes are detected, and
periodically during normal operations. The SAP application also responds to SAP service
information requests from other routers and servers in the internetwork. SAP uses an Aging
process to remove services from the network services database if no broadcasts are received
periodically to indicate that a given service is still available. SAP also interacts with RIP to
determine if a path exists to a given service, before registering it in the Service Information
Table. SAP also interacts with RIP to ensure that a path has not been lost to the service after it
has been registered in the Service Information Table.
LINCS vs. Novell
BENEFITS: IPX Routing at no cost: When LINCS IPX SNA Server is providing host access
to clients on two separate networks, IPX packets are routed automatically. There is no
configuration required or any additional expense.
DIFFERENCES: LINCS does not support:
• Routing directly through WAN connections. WAN internetworks can be accessed indirectly
through a LINCS IPX Router when another IPX router on the internetwork is connected to
the WAN network and the LINCS IPX Router has a direct or indirect LAN connection to
that IPX router.
• Source Route Bridging. This is an optional feature of Novell’s IPX Router software (Novell
ROUTER.NLM).
• RIP and SAP filtering options, currently supported by Novell 4.01
Configuration and Management
Configuration: The following configuration panels are used to enable IPX Routing:
• Line Options – Enable IPX protocol on the desired lines. For each line with IPX enabled,
the next panel is the IPX Line Options panel, where you can define additional parameters for
IPX on that line.
• IPX Options – This panel defines global parameters for all IPX lines.
The IPX Router configuration submenu takes you through the configuration panels described
above, or you can use the full menu. Additionally, there are several RPQs found on the RPS
panels associated with the IPX feature.
Resource Requirements: IPX Feature Memory Requirements. If one or more LAN boards
have IPX enabled (one LAN board with multiple IPX/MAC layer protocols enabled, or multiple
LAN boards with one or more IPX/MAC layer protocols enabled), or one LAN board has
IPX enabled and the IPX SNA Server is enabled (one or more IPX SNA Server LUs enabled),
feature memory will be required for IPX Router support. This memory will be used for the
RIP database and the SAP database, as well as other data areas needed by IPX, RIP, and SAP.
See the Feature Memory panel in the Configuration document to see how much memory is
required and how to specify it.
Management: The IPX Menu in Central Control is used to gather information and statistics
about your IPX links.
5. Host Connectivity
LINCS supports connections to host systems of the following types:
• 3270
• Asynchronous
• TCP/IP (via TELNET)
Dynamic Host Connections
The Dynamic Host Connection (DHC) feature allows users to connect to host resources
dynamically. This allows a LINCS node to have more device connections than the number of
resources that are defined on the host, minimizing host configuration. Load Balancing is
provided by allowing users who connect dynamically to specify a wildcard host. If a
connection is made to a wildcard host, LINCS will select the least busy SNA host for connection.
A device in session with a LINCS node can dynamically switch between 3270 hosts (SNA,
BSC, or Non-SNA) and ASCII hosts (TELNET, or Asynchronous).
Host connections are made using the Connect panels, which are discussed later in this section.
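One way to picture the wildcard selection is the sketch below; the notion of "least busy" is interpreted here as the lowest fraction of LUs in use, which is an assumption made for illustration.

def select_wildcard_host(host_classes):
    # host_classes maps a host name to counts of in-use and total LUs
    def busyness(name):
        counts = host_classes[name]
        return counts["in_use"] / counts["total"] if counts["total"] else 1.0
    return min(host_classes, key=busyness)

hosts = {"PRODA": {"in_use": 40, "total": 64},
         "PRODB": {"in_use": 10, "total": 64}}
print(select_wildcard_host(hosts))     # -> PRODB, the least busy host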
Configuration and Management for Dynamic Host Connections
Configuration – The Host Classes utilities (3270, ASCII, IP) in Configuration can be used to
define which host resources will be available for dynamic host connection. Host Classes are
especially useful if you wish to limit access to particular host resources.
Displays are given the authority to dynamically connect to host resources on the Resource
Authorization Matrix in a display’s Device Profile. There you configure whether a display will
connect to specific host classes or to any host resource.
The Device Profile’s Host Assignments At Power On fields determine whether sessions on a
device will pre-connect to a host, or use the connect panels. Pre-connection means host
connection occurs automatically when the device powers on. The connect panels allow
interactive host selection.
Management – Central Control Mode contains two utilities which are useful when using the
Dynamic Host Connection feature:
• The Display/Update Host Connections utility displays all configured devices (Network and
Direct Devices) and shows the host resources being used by those devices. You can also
view the host resources currently being used by all LINCS devices, or disconnect host
resources using this utility.
• The Display/Update 3270 LU Connections utility displays all configured 3270 LUs and
shows the device which owns the LU. 3270 LUs may be owned by LINCS Devices (Network
and Direct), Downstream Nodes using LU to PU Mapping, TN3270 Clients, or SAA Clients.
This utility displays information indicating the device type which currently owns the LUs.
The LUs may also be disconnected using this utility.
Host Classes
Host Classes are used to share a limited number of resources with a greater number of users.
There are two types of host classes:
• 3270: SNA, BSC and Non-SNA hosts
• ASCII: IP and Asynchronous hosts
If you don’t create host classes, you will have to assign individual sessions (LUs, host port
addresses or IP addresses) to each device.
3270 Host Classes
3270 Host Classes allow sharing of or “pooling” LUs. This can be useful in the following
situations:
• Pooling LUs allows a large number of workstations to gain occasional access to a limited
number of LUs. A user may access the LU for a period of time, and then disconnect from
the LU, thus making it available to other users.
• Creating LU pools is the simplest manner of making 3270 LUs available to TN3270 clients
and LU to PU mapped DSPUs.
• 3270 Host Classes are required to make LUs available to IPX SNA clients.
3270 Host Classes can share a common class name, thereby collapsing LUs from multiple
hosts into a single host class. This is useful when a user desires connection to one of several
hosts, but doesn’t necessarily care which host is chosen.
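A simple sketch of such a pooled class is given below; the class and method names are hypothetical and only illustrate LUs from several hosts being shared under one class name.

class HostClass:
    def __init__(self, name):
        self.name = name
        self.available = []              # (host circuit, LU address) pairs

    def add_lus(self, host_circuit, lu_addresses):
        self.available.extend((host_circuit, lu) for lu in lu_addresses)

    def acquire(self):
        # hand out any free LU; the user does not care which host supplies it
        return self.available.pop(0) if self.available else None

    def release(self, lu):
        self.available.append(lu)        # return the LU to the pool

pool = HostClass("PAYROLL")
pool.add_lus("HOSTA", range(2, 6))
pool.add_lus("HOSTB", range(2, 6))
print(pool.acquire())                    # e.g. ('HOSTA', 2)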
ASCII Host Classes
• ASCII host classes allow LINCS devices to share ports to ASCII hosts.
• IP host classes allow LINCS devices to share TELNET connections to TCP/IP hosts.
Host Connection Menus
The Host Connection Menu lists the Host Connection panels which may be used to dynamically
connect to a host resource.
Host Connection Menu
Item    Connection Type
1       3270
2       ASCII
3       TCP/IP
4       LAT Class
5       LAT Dynamic

Select Item: _