Juniper cRPD User Manual

cRPD Deployment Guide for Linux Server

Published

2021-03-31


Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA

408-745-2000 www.juniper.net

Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners.

Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

cRPD Deployment Guide for Linux Server

 

 

Copyright © 2021 Juniper Networks, Inc. All rights reserved.

 

 

The information in this document is current as of the date on the title page.

YEAR 2000 NOTICE

Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.

END USER LICENSE AGREEMENT

The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement ("EULA") posted at https://support.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of that EULA.

 

 

 

 

Table of Contents

About This Guide | vii

1  Overview

   Understanding Containerized RPD | 2
   cRPD Resource Requirements | 9
   Junos OS Features Supported on cRPD | 10

2  Installing and Upgrading cRPD

   Requirements for Deploying cRPD on a Linux Server | 16
   Installing cRPD on Docker | 17
      Before You Install | 18
      Install and Verify Docker | 18
      Download the cRPD Software | 19
      Creating Data Volumes and Running cRPD using Docker | 20
      Configuring Memory | 21
      Configuring cRPD using the CLI | 21
   Installing cRPD on Kubernetes | 25
      Installing Kubernetes | 26
      Kubernetes Cluster | 26
      Downloading cRPD Docker Image | 28
      Creating a cRPD Pod using Deployment | 28
      Creating a cRPD Pod using YAML | 31
      Creating a cRPD Pod using Job Resource | 34
      Creating a cRPD Pod using DaemonSet | 36
      Scaling of cRPD | 41
      Rolling Update of cRPD Deployment | 43
      cRPD Pod Deployment with Allocated Resources | 46
      cRPD Pod Deployment using Mounted Volume | 49
   Upgrading cRPD | 52
      Upgrade Software | 52

3  Managing cRPD

   Managing cRPD | 55
      Networking Docker Containers | 55
      Building Topologies | 56
      Creating an OVS Bridge | 56
      Removing Interfaces and Bridges | 57
      Viewing Container Processes in a Running cRPD | 57
      Accessing cRPD CLI and Bash Shell | 58
      Pausing and Resuming Processes within a cRPD Container | 58
      Removing a cRPD Instance | 58
      Viewing Docker Statistics and Logs | 59
      Viewing Active Containers | 59
      Stopping the Container | 60
   Establishing an SSH Connection for a NETCONF Session and cRPD | 60
      Establishing an SSH Connection | 60
      Enabling SSH | 61
      Port Forwarding Mechanism | 61
      Connecting to a NETCONF Server on Container | 61

4  Programmable Routing

   cRPD Application Development Using JET APIs | 64
      Getting Started with JET | 65
      Configure JET Interaction with Linux OS | 65
      Maximum Number of JET Connections | 65
      Compile IDL Files | 66

5  Configuring cRPD Features

   Configuring Settings on Host OS | 69
      Configuring ARP Scaling | 69
      Configuring OSPFv2/v3 | 70
      Configuring MPLS | 70
      Adding MPLS Routes | 71
      Adding Routes with MPLS label | 71
      Creating a VRF device | 71
      Assigning a Network Interface to a VRF | 72
      Viewing the Devices assigned to VRF | 72
      Viewing Neighbor Entries to VRF | 73
      Viewing Addresses for a VRF | 73
      Viewing Routes for a VRF | 73
      Removing Network Interface from a VRF | 74
      Hash Field Selection for ECMP Load Balancing on Linux | 74
      wECMP using BGP on Linux | 76
   Multitopology Routing in cRPD | 78
      Understanding Multitopology in cRPD | 78
      Example: Configuring Multitopology Routing with BGP in cRPD | 79
         Requirements | 79
         Overview | 79
         Configuration | 80
         Verification | 84
   Layer 3 Overlay Support in cRPD | 86
      Understanding Layer 3 Overlay VRF support in cRPD | 86
      Example: Configuring Layer 3 VPN (VRF) on cRPD Instance | 88
         Requirements | 88
         Overview | 89
         Configuration | 89
         Verification | 97
   MPLS Support in cRPD | 102
      Understanding MPLS support in cRPD | 102
      Example: Configuring Static Label Switched Paths for MPLS in cRPD | 103
         Requirements | 104
         Overview | 104
         Configuration | 105
         Verification | 110
   Sharding and UpdateIO on cRPD | 118
      Understanding Sharding | 118
      Understanding UpdateIO | 119

6  Troubleshooting

   Debugging cRPD Application | 121
      Command-Line Interface | 121
      Fault Handling | 122
      Troubleshooting Container | 122
      Verify Docker | 123
      Viewing Core Files | 124
      Configuring Syslog | 125
   Troubleshooting with Kubectl | 125
      Kubectl Command-Line Interface | 126
      Viewing Pods | 126
      Viewing Container Logs | 127

About This Guide

Use this guide to install the containerized routing protocol process (cRPD) in the Linux environment. This guide also includes basic cRPD container configuration and management procedures.

After completing the installation, management, and basic configuration procedures covered in this guide, refer to the Junos OS documentation for information about further software configuration.

CHAPTER 1

Overview

Understanding Containerized RPD | 2

cRPD Resource Requirements | 9

Junos OS Features Supported on cRPD | 10


Understanding Containerized RPD

IN THIS SECTION

 

 

Routing Protocol Process | 2

Routing Engine Kernel | 3

cRPD Overview | 3

Benefits | 5

Docker Overview | 6

Supported Features on cRPD | 7

Licensing | 7

Use case: Egress Peer Traffic Engineering using BGP Add-Path | 7

 

 

 

 

 

 

Containerized routing protocol daemon (cRPD) is Juniper's routing protocol daemon (rpd) decoupled from Junos OS and packaged as a Docker container to run in Linux-based environments. rpd runs as a user space application, learns route state through various routing protocols, and maintains the complete set in the routing information base (RIB), also known as the routing table. The rpd process is also responsible for downloading routes into the forwarding information base (FIB), also known as the forwarding table, based on local selection criteria. Whereas the Packet Forwarding Engine (PFE) in a Juniper Networks router holds the FIB and does packet forwarding, for cRPD the host Linux kernel stores the FIB and performs packet forwarding. cRPD can also be deployed to provide control plane-only services such as BGP route reflection.

NOTE: The Route Reflection networking service must not depend on the same hardware or the controllers that host the application software containers that are using the reachability learned by using the Route Reflection service. The cRR service must work independently.

 

 

 

 

Routing Protocol Process

Within Junos OS, the routing protocol process controls the routing protocols that run on a router. The rpd process starts all configured routing protocols and handles all routing messages. It maintains one or more routing tables, which consolidate the routing information learned from all routing protocols. From this routing information, the routing protocol process determines the active routes to network destinations and installs these routes into the Routing Engine's forwarding table. Finally, rpd implements routing policy, which enables you to control the routing information that is transferred between the routing protocols and the routing table. Using routing policy, you can filter and limit the transfer of routing information as well as set properties associated with specific routes.

 

 

Routing Engine Kernel

The Routing Engine software consists of several software processes that control router functionality and a kernel that provides the communication among all the processes.

The Routing Engine kernel provides the link between the routing tables and the Routing Engine's forwarding table. The kernel is also responsible for all communication with the Packet Forwarding Engine, which includes keeping the Packet Forwarding Engine's copy of the forwarding table synchronized with the master copy in the Routing Engine.

RPD runs natively on Linux and programs routes into the Linux kernel using Netlink. The Netlink messages are used to install the FIB state generated by RPD into the Linux kernel, to interact with mgd and the CLI for configuration and management, to maintain protocol sessions using PPMD, and to detect liveness using BFD. RPD learns interface attributes such as name, addresses, MTU settings, and link status from the Netlink messages. Netlink acts as an interface to the kernel components.
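For example, once a cRPD container is running (the container name crpd01 used here is illustrative and matches the installation examples later in this guide), you can compare the routes rpd has selected with the routes Netlink has installed in the Linux kernel FIB. In host networking mode, run the ip commands on the host; in bridge mode, run them inside the container:

root@ubuntu-vm18:~# docker exec -it crpd01 cli show route
root@ubuntu-vm18:~# ip route show
root@ubuntu-vm18:~# ip -6 route show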

cRPD Overview

cRPD maintains the route state information in the RIB and downloads routes into the FIB based on local route selection criteria. cRPD contains the RPD, PPMD, CLI, MGD, and BFD processes. cRPD defines how routing protocols such as IS-IS, OSPF, and BGP operate on the device, including selecting routes and maintaining forwarding tables.

 


Figure 1 on page 4 shows the cRPD overview.

Figure 1: cRPD Overview

The network interfaces learned by the underlying OS kernel are exposed to RPD in the Linux container. RPD learns about all the network interfaces and adds route state for all of them. If there are multiple Docker containers running in the system, all the containers and applications running directly on the host can access the same set of network interfaces and state.

When multiple cRPD instances run on a system in bridge mode, the containers are connected to the host network stack through bridges: cRPD is connected to the Linux OS using bridges, and the host interfaces are connected to the container using bridges. Multiple containers can connect to the same bridge and communicate with one another. The default Docker bridge enables NAT. NAT bridges are used as a management port into the container. This means that clients cannot initiate connections to the cRR, and the cRR is in control of which clients it connects to. If the bridge is connected to the host OS network interfaces, external communication is feasible. For routing purposes, it is possible to assign all or a subset of physical interfaces for use by a Docker container. This mode is useful for a containerized Route Reflector and for partitioning the system into multiple routing domains.


Figure 2 on page 5 shows the architecture of cRPD running natively on Linux.

Figure 2: cRPD on Linux Architecture

Benefits

• The use of containers reduces the time required for service boot up from several minutes to a few seconds, which results in faster deployment.

• You can run cRPD on any Linux server that supports Docker.

• With a small footprint and minimum resource reservation requirements, cRPD can easily scale to keep up with customers' peak demand.

• Provides significantly higher density, without requiring resource reservation on the host, than what is offered by VM-based solutions.

• Well-proven and stable routing software on Linux with cRPD.

 


Docker Overview

Docker is an open source software platform that simplifies the creation, management, and teardown of a virtual container that can run on any Linux server. A Docker container is an open source software development platform whose main benefit is to package applications in "containers" to allow them to be portable among any system running the Linux operating system (OS). A container provides an OS-level virtualization approach for an application and its associated dependencies that allows the application to run on a specific platform. Containers are not VMs; rather, they are isolated virtual environments with dedicated CPU, memory, I/O, and networking.

A container image is a lightweight, standalone, executable package of a piece of software. Because containers include all dependencies for an application, multiple containers with conflicting dependencies can run on the same Linux distribution. Containers use the host OS Linux kernel features, such as cgroups and namespace isolation, to allow multiple containers to run in isolation on the same Linux host OS. An application in a container can have a small memory footprint because the container does not require a guest OS, which is required with VMs, because it shares the kernel of its Linux host's OS.

Containers have a high spin-up speed and can take much less time to boot up as compared to VMs. This functionality enables you to install, run, and upgrade applications quickly and efficiently.

Figure 3 on page 6 provides an overview of a typical Docker container environment.

Figure 3: Docker Container Environment


Supported Features on cRPD

cRPD supports the following features:

• BGP Route Reflector in the Linux container

• BGP add-path, multipath, graceful restart helper mode

• BGP, OSPF, OSPFv3, IS-IS, and static routes

• BMP, BFD, and Linux-FIB

• Equal-Cost Multipath (ECMP)

• JET for Programmable RPD

• Junos OS CLI

• Management using open interfaces: NETCONF and SSH

• IPv4 and IPv6 routing

• MPLS routing

Licensing

The cRPD software features require a license to activate the feature. To understand more about cRPD licenses, see Supported Features on cRPD, the Juniper Agile Licensing Guide, and Managing cRPD Licenses.

Use case: Egress Peer Traffic Engineering using BGP Add-Path

Service providers must meet growing traffic demands. They need services that keep their capital expenditure and operational expenditure low. Juniper provides tools and solutions to deploy, configure, manage, and maintain this complexity.

Egress peer traffic engineering (TE) allows a central controller to instruct an ingress router in a domain to direct traffic towards a specific egress router and a specific external interface to reach a particular destination out of the network, for optimal use of the various egress routes.

The Internet, a public global network of networks, is built as a system of interconnected networks of Service Provider (SP) infrastructures. These networks are often represented as Autonomous Systems (ASs), each with a globally unique Autonomous System Number (ASN). The data-plane interconnection link (NNI) and the direct control-plane (eBGP) connection between two ASs allow Internet traffic to travel between the two, usually as part of a formal agreement called peering.

An SP has multiple peering relations with multiple other SPs. They are usually geographically distributed, differ in the number and bandwidth of the NNI links, and use various business or cost models.

Figure 4: Peering Among Service Providers

In the context of AS peering, traffic egress assumes that the destination network address is reachable through a certain peer AS. So, for example, a device in Peer AS#2 can reach a destination IP address in Peer AS#4 through Service Provider AS#1. This reachability information is provided by a peer AS using eBGP Network Layer Reachability Information (NLRI) advertisements. An AS typically advertises IP addresses that belong to it, but an AS may also advertise addresses learned from another AS. For example, Peer AS#2 can advertise to the SP (AS#1) addresses it has received from Peer AS#3, Peer AS#7, and even Peer AS#8, Peer AS#9, Peer AS#4, and Peer AS#5. It all depends on the BGP routing policies between the individual ASs. Therefore, a given destination IP prefix can be reached through multiple peering ASs and over multiple NNIs. It is the role of the routers and network operators in the SP network to select the "best" exit interface for each destination prefix.

The need for engineering the way that traffic exits the service provider AS is critical for ensuring cost efficiency while providing a good end-user experience at the same time. The definition of the "best" exit interface is a combination of cost as well as latency and traffic loss.

For more information, see Fundamentals of Egress Peering Engineering and BGP Labeled Unicast Egress Peer Engineering Using cRPD as Ingress.


RELATED DOCUMENTATION

Docker Overview

What is Docker?

What is a Container?

Get Started With Docker

cRPD Resource Requirements

IN THIS SECTION

cRPD Scaling | 9

Table 1 on page 9 lists the minimum resource requirements for cRPD.

Table 1: cRPD Minimum Resource Requirements

Description     Minimum Value
CPU             1 core
Memory          256 MB
Disk space      256 MB

 

 

cRPD Scaling

You can scale the performance and capacity of a cRPD by increasing the allocated amount of memory and the CPU available on the host hardware or VM resources.

Table 2 on page 10 lists the cRPD scaling information.


Table 2: cRPD Scaling

Instance    RIB/FIB Route Scale    Minimum Memory
cRPD        32,000                 256 MB
            64,000                 512 MB
            128,000                1024 MB
            1,000,000              2048 MB
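As a minimal sketch (values are illustrative; see "Installing cRPD on Docker" on page 17 for the full launch procedure), you can allocate additional memory and CPU to a cRPD container with standard docker run resource flags:

root@ubuntu-vm18:~# docker run --rm --detach --name crpd01 -h crpd01 --privileged --cpus=2 -m 2048MB --memory-swap=2048MB -it crpd:19.2R1.8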

 

 

 

 

Junos OS Features Supported on cRPD

IN THIS SECTION

Features Supported on cRPD | 10

Features Supported on cRPD

cRPD inherits most of the routing features, with the considerations shown in Table 3 on page 11.


Table 3: Supported Features on cRPD

Feature: BGP FlowSpec
Description: Starting in Junos OS Release 20.3R1, the BGP flow specification method is supported to prevent denial-of-service attacks on the cRPD environment.
[See Understanding BGP Flow Routes for Traffic Filtering.]

Feature: EVPN-VPWS
Description: Starting in Junos OS Release 20.3R1, EVPN-VPWS is supported to provide VPWS with EVPN signaling mechanisms on cRPD.
[See Overview of VPWS with EVPN Signaling Mechanisms.]

Feature: EVPN Type 5 with MPLS
Description: Starting in Junos OS Release 20.3R1, EVPN Type 5 is supported for EVPN/MPLS.
[See EVPN Type-5 Route with MPLS Encapsulation for EVPN-MPLS.]

Feature: Segment Routing
Description: Starting in Junos OS Release 20.3R1, Segment Routing is supported for the OSPF and IS-IS protocols to provide basic functionality with Source Packet Routing in Networking (SPRING).
[See Understanding Source Packet Routing in Networking (SPRING).]

Feature: Layer 2 VPN
Description: Starting in Junos OS Release 20.3R1, support for Layer 2 circuit to provide Layer 2 VPN and VPWS with LDP signaling.
[See Configuring Ethernet over MPLS (Layer 2 Circuit).]

Feature: MPLS
Description: Starting in Junos OS Release 20.3R1, support for MPLS to provide LDP signaling protocol configuration with the control plane functionality.
[See Understanding the LDP Signaling Protocol.]

 

 

 

 

 

 

 

 

 

 

 

Feature: Eventd
Description: Starting in Junos OS Release 20.4R1, we support only external event policies. You can enable these policies in cRPD. In cRPD, eventd and rsyslogd run as independent processes. The eventd process provides event notifications to processes such as rpd, auditd, and mgd and supports automated event policy execution.
Use the set event-options policy policy-name events [events] then command to enable an event policy, and restart event-processing to restart event processing.
By default, Python 3.x support is enabled for executing on-box Python or SLAX functions in the cRPD environment.
Use the [edit system scripts language python3] hierarchy level to enable and support Python event automation.
[See event-options and event-policy.]

Feature: Authentication
Description: Starting in cRPD Release 21.1R1, you can configure local and remote authentication and authorization with RADIUS and TACPLUS servers at the [edit system accounting] and [edit system services ssh] hierarchy levels.
We support the following features:
• Local authentication and local authorization
• TACACS+ authentication, authorization, and accounting
• User template support
• Support for operational commands and regular expressions
• Local authentication and remote authorization
[See password-options, "tacplus", and "radius (System)".]

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Feature: SRv6 network programming in IS-IS
Description: Starting in cRPD Release 21.1R1, you can enable basic segment routing functionalities in a core IPv6 network for both the route reflector and host routing roles.
You can enable SRv6 network programming in an IPv6 network at the [edit source-packet-routing] hierarchy level.
A Segment Identifier (SID) consists of the following parts:
• Locator: The locator is the first part of a SID and consists of the most significant bits representing the address of a particular SRv6 node. The locator is very similar to a network address that provides a route to its parent node. The IS-IS protocol installs the locator route in the inet6.0 routing table. IS-IS routes the segment to its parent node, which subsequently performs a function defined in the other part of the SRv6 SID. You can also specify the algorithm associated with this locator.
• Function: The other part of the SID defines a function that is performed locally on the node that is specified by the locator. There are several functions that have already been defined in the Internet draft draft-ietf-spring-srv6-network-programming, SRv6 Network Programming. However, we have implemented the following functions that are signalled in IS-IS. IS-IS installs these function SIDs in the inet6.0 routing table.
  • End: An endpoint function for SRv6 instantiation of a prefix SID. It does not allow for decapsulation of an outer header or the removal of an SRH. Therefore, an End SID cannot be the last SID of a SID list and cannot be the Destination Address (DA) of a packet without an SRH.
  • End.X: An endpoint X function is an SRv6 instantiation of an adjacency SID. It is a variant of the endpoint function with Layer 3 cross-connect to an array of Layer 3 adjacencies.
NOTE: The support for flavors (specifies the End SID behavior) and flexible algorithm options is not available for configuring End SIDs.
[See source-packet-routing.]

 

Feature: Increase ECMP next-hop limit
Description: Starting in cRPD Release 21.1R1, you can specify the maximum next-hop limit at the [edit routing-options maximum-ecmp] hierarchy level. This helps to load-balance the traffic over multiple paths. The default ECMP next-hop limit is 16.
[See routing-options maximum-ecmp and "Hash Field Selection for ECMP Load Balancing on Linux" on page 74.]

 

 

 

 

 

 

 

 

Feature: EVPN Type 5 with VXLAN
Description: Starting in cRPD Release 21.1R1, we support EVPN Type 5 routes over VXLAN for both IPv4 and IPv6 prefix advertisements.
[See EVPN Type-5 Route with VXLAN Encapsulation for EVPN-VXLAN.]

 

 

 

 

 

 

 

 

 

 

CHAPTER 2

Installing and Upgrading cRPD

Requirements for Deploying cRPD on a Linux Server | 16

Installing cRPD on Docker | 17

Installing cRPD on Kubernetes | 25

Upgrading cRPD | 52


Requirements for Deploying cRPD on a Linux Server

IN THIS SECTION

Host Requirements | 16

Interface Naming and Mapping | 17

This section presents an overview of requirements for deploying a cRPD container on a Linux server:

Host Requirements

Table 4 on page 16 lists the Linux host requirement specifications for deploying a cRPD container on a Linux server.

 

 

 

 

Table 4: Host Requirements

Component              Specification
Linux OS support       Ubuntu 18.04
Linux Kernel           4.15
Docker Engine          18.09.1
CPUs                   2 CPU cores
Memory                 4 GB
Disk space             10 GB
Host processor type    x86_64 multicore CPU
Network Interface      Ethernet
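You can quickly check a host against these requirements with standard Linux commands (a minimal sketch; output varies by system):

root@ubuntu-vm18:~# uname -r      # kernel version
root@ubuntu-vm18:~# lscpu         # architecture and CPU core count
root@ubuntu-vm18:~# free -h       # available memory
root@ubuntu-vm18:~# df -h /       # available disk space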

 

 

 

 

 

Interface Naming and Mapping

Table 5 on page 17 lists the supported interfaces on cRPD.

Table 5: Interface Naming and Mapping

Interface Number    cRPD Interfaces
eth0                eth0-mgmt-interface
eth1                eth1-data-interface

 

 

Installing cRPD on Docker

IN THIS SECTION

Before You Install | 18

Install and Verify Docker | 18

Download the cRPD Software | 19

Creating Data Volumes and Running cRPD using Docker | 20

Configuring Memory | 21

Configuring cRPD using the CLI | 21

This section outlines the steps to install the cRPD container in a Linux server environment that is running Ubuntu or Red Hat Enterprise Linux (RHEL). The cRPD container is packaged in a Docker image and runs in the Docker Engine on the Linux host.

This section includes the following topics:

Before You Install

Before you install cRPD as a routing service to achieve routing functionality in a Linux container environment, ensure that you:

• Verify the system requirement specifications for the Linux server to deploy the cRPD; see "Requirements for Deploying cRPD on a Linux Server" on page 16.

 

Install and Verify Docker

• Install and configure Docker on the Linux host platform to implement the Linux container environment. See Install Docker for installation instructions on the supported Linux host operating systems.

• Verify the Docker installation. See "Debugging cRPD Application" on page 121.

 

To install the latest Docker:

 

 

 

 

 

 

 

 

Log in and download the software:

root@ubuntu-vm18:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
root@ubuntu-vm18:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
root@ubuntu-vm18:~# apt-get update
root@ubuntu-vm18:~# apt-get install docker-ce

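You can then confirm that the Docker Engine installed correctly and is running (a minimal check; see the "Verify Docker" section in the Troubleshooting chapter for more detail):

root@ubuntu-vm18:~# docker version
root@ubuntu-vm18:~# systemctl status docker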

Download the cRPD Software

The cRPD software is available as a cRPD Docker image from the Juniper Internal Docker registry. There are two ways to download the software:

• Juniper software download page

• Juniper Docker Registry

Before you import the cRPD software, ensure that Docker is installed on the Linux host and that the Docker Engine is running.

Once the Docker Engine has been installed on the host, perform the following to download and start using the cRPD image:

NOTE: You must include the --privileged option in the docker run command to enable the cRPD container to run in privileged mode.

To download the cRPD software using the Juniper Docker Registry:

1. Log in to the Juniper Internal Docker registry using the login name and password that you received as part of the sales fulfillment process when ordering cRPD.

root@ubuntu-vm18:~# docker login hub.juniper.net -u <username> -p <password>

2. Pull the docker image from the download site using the following command:

 

root@ubuntu-vm18:~# docker pull hub.juniper.net/routing/crpd:19.2R1.8

 

3. Verify images in docker image repository.

 

 

 

 

 

 

 

 

 

root@ubuntu-vm18:~# docker images
REPOSITORY    TAG         IMAGE ID       CREATED       SIZE
crpd          19.2R1.8    4156a807054a   6 days ago    278MB

 

 

 

 

 

 

 

 

 

 

 

To download the cRPD software from the Juniper download URL:

1. Download the cRPD software image from the Juniper Networks website.

root@ubuntu-vm18:~# docker load -i junos-routing-crpd-docker-19.2R1.8.tgz


2. Verify the downloaded images in the docker image repository.

root@ubuntu-vm18:~# docker images
REPOSITORY    TAG         IMAGE ID       CREATED       SIZE
crpd          19.2R1.8    4156a807054a   6 days ago    278MB

 

 

 

 

Creating Data Volumes and Running cRPD using Docker

To create data volumes:

1. Create data volumes for configuration and var logs.

root@ubuntu-vm18:~# docker volume create crpd01-config
crpd01-config
root@ubuntu-vm18:~# docker volume create crpd01-varlog
crpd01-varlog
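You can confirm that the volumes exist and see where Docker stores them on the host (an optional, illustrative check):

root@ubuntu-vm18:~# docker volume ls
root@ubuntu-vm18:~# docker volume inspect crpd01-config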

 

 

 

 

 

 

 

 

Data volumes remain even after containers are destroyed and can be attached to newer containers. Data volumes are not shared between multiple containers at the same time unless they are read-only volumes.

 

 

 

 

2. Download and load the cRPD software.

3. Attach the data volumes to create and launch the container for the cRPD instance.

In bridge mode, containers are connected to the host network stack through bridge(s). Multiple containers can connect to the same bridge and communicate with each other. Communication with external devices is possible if the bridge is connected to the host OS network interfaces.

For routing purposes, it is also possible to assign all or a subset of physical interfaces for exclusive use by a Docker container.

root@ubuntu-vm18:~# docker run --rm --detach --name crpd01 -h crpd01 --net=bridge --privileged -v crpd01-config:/config -v crpd01-varlog:/var/log -it crpd:19.2R1.8


Data volumes remain even after containers are destroyed and can be attached to newer containers. Data volumes are not shared between multiple containers at the same time unless they are read-only volumes.

 

 

 

 

To launch cRPD in host networking mode:

1. Run the command to launch cRPD in host networking mode:

root@ubuntu-vm18:~# docker run --rm --detach --name crpd01 -h crpd01 --privileged --net=host -v crpd01-config:/config -v crpd01-varlog:/var/log -it crpd:19.2R1.8
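In host networking mode the container shares the host's network namespace, so cRPD sees the host interfaces directly. As an optional, illustrative check, list the interfaces that RPD has learned:

root@ubuntu-vm18:~# docker exec -it crpd01 cli show interfaces routing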

Configuring Memory

To limit the amount of memory allocated to the cRPD, you can specify the memory size using the following command:

root@ubuntu-vm18:~# docker run --rm --detach --name crpd01 -h crpd01 --privileged -v crpd01-config:/config -v crpd01-varlog:/var/log -m 2048MB --memory-swap=2048MB -it crpd:19.2R1.8
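To confirm the limit that Docker has applied to the running container (an optional, illustrative check):

root@ubuntu-vm18:~# docker stats --no-stream crpd01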

Configuring cRPD using the CLI

 

 

 

 

cRPD provides Junos command-line configuration and operational commands for the routing service. It provides a subset of routing protocol configuration that enables node reachability in the topology and routing. You can configure interfaces from the Linux shell. Interface configuration is available only for ISO addresses.
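For example, a minimal sketch of configuring an interface from the Linux shell (the interface name and address are illustrative; run the commands on the host in host-networking mode, or inside the container's network namespace in bridge mode):

root@ubuntu-vm18:~# ip addr add 10.1.1.1/24 dev eth1
root@ubuntu-vm18:~# ip link set eth1 up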

 

 

 

 

 

 

 

 

To configure the cRPD container using the CLI:

1. Log in to the cRPD container.

root@ubuntu-vm18:~/# docker exec -it crpd01 cli

2. Enter configuration mode.

root@crpd01> configure
Entering configuration mode

[edit]

3. Set the root authentication password by entering a cleartext password, an encrypted password, or an SSH public key string (DSA or RSA).


root@crpd01# set system root-authentication plain-text-password

New password: password

Retype new password: password

4. Commit the configuration to activate it on the cRPD instance.

 

root@crpd01# commit

 

 

 

 

 

commit complete

 

 

 

5. (Optional) Use the show command to display the configuration and verify that it is correct.

root@crpd01# show

## Last changed: 2019-02-13 19:28:26 UTC
version "19.2I20190125_1733_rbu-builder [rbu-builder]";
system {
    root-authentication {
        encrypted-password "$6$JEc/p$QOUpqi2ew4tVJNKXZYiCKT8CjnlP3SLu16BRIxvtz0CyBMc57WGu2oCyg/lTr0iR8oJMDumtEKi0HVo2NNFEJ."; ## SECRET-DATA
    }
}
routing-options {
    forwarding-table {
        export pplb;
    }
    router-id 90.90.90.20;
    autonomous-system 100;
}
protocols {
    bgp {
        group test {
            type internal;
            local-address 90.90.90.20;
            family inet {
                unicast;
            }
            neighbor 90.90.90.10 {
                bfd-liveness-detection {
                    minimum-interval 100;
                }
            }
            neighbor 90.90.90.30 {
                bfd-liveness-detection {
                    minimum-interval 100;
                }
            }
        }
        group test6 {
            type internal;
            local-address abcd::90:90:90:20;
            family inet6 {
                unicast;
            }
            neighbor abcd::90:90:90:10 {
                bfd-liveness-detection {
                    minimum-interval 100;
                }
            }
            neighbor abcd::90:90:90:30 {
                bfd-liveness-detection {
                    minimum-interval 100;
                }
            }
        }
    }
    isis {
        level 1 disable;
        interface all {
            family inet {
                bfd-liveness-detection {
                    minimum-interval 100;
                }
            }
            family inet6 {
                bfd-liveness-detection {
                    minimum-interval 100;
                }
            }
        }
    }
    ospf {
        area 0.0.0.0 {
            interface all {

+ 104 hidden pages