The information in this document is current as of the date on the title page.
YEAR 2000 NOTICE
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related
limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
END USER LICENSE AGREEMENT
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with)
Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement
(“EULA”) posted at https://support.juniper.net/support/eula/. By downloading, installing or using such software, you
agree to the terms and conditions of that EULA.
Table of Contents
About the Documentation | ix
Documentation and Release Notes | ix
Documentation Conventions | ix
Documentation Feedback | xii
Requesting Technical Support | xii
Self-Help Online Tools and Resources | xiii
Creating a Service Request with JTAC | xiii
Installation and Configuration Overview
Platform and Software Compatibility | 15
Installation Options | 16
NorthStar Controller System Requirements | 18
Server Sizing Guidance | 18
Additional Disk Space for JTI Analytics in ElasticSearch | 21
Additional Disk Space for Network Events in Cassandra | 21
Collector (Celery) Memory Requirements | 22
Firewall Port Guidance | 23
Analytics Requirements | 26
Two-VM Installation Requirements | 27
VM Image Requirements | 27
JunosVM Version Requirements | 27
VM Networking Requirements | 27
Changing Control Packet Classification Using the Mangle Table | 28
Installation on a Physical Server
Using an Ansible Playbook to Automate NorthStar Installation | 31
Before You Begin | 32
Creating the Ansible Inventory File | 33
Executing the Playbook | 34
Installing Data Collectors and Secondary Collectors for Analytics | 35
Variables | 36
Installing the NorthStar Controller | 38
Activate Your NorthStar Software | 40
Download the Software | 40
If Upgrading, Back Up Your JunosVM Configuration and iptables | 41
Install NorthStar Controller | 41
Configure Support for Different JunosVM Versions | 43
Create Passwords | 45
Enable the NorthStar License | 45
Adjust Firewall Policies | 46
Launch the Net Setup Utility | 46
Configure the Host Server | 48
Configure the JunosVM and its Interfaces | 53
Configure Junos cRPD Settings | 58
Set Up the SSH Key for External JunosVM | 60
Upgrade the NorthStar Controller Software in an HA Environment | 63
Configuring NorthStar Settings Using the NorthStar CLI | 66
Accessing the NorthStar CLI | 67
NorthStar Configuration Settings | 71
Uninstalling the NorthStar Controller Application | 75
Uninstall the NorthStar Software | 75
Reinstate the License File | 76
Installation in an OpenStack Environment
Overview of NorthStar Controller Installation in an OpenStack Environment | 78
Testing Environment | 79
Networking Scenarios | 79
HEAT Templates | 80
HEAT Template Input Values | 81
Known Limitations | 82
Virtual IP Limitations from ARP Proxy Being Enabled | 82
Hostname Changes if DHCP is Used Rather than a Static IP Address | 82
Disk Resizing Limitations | 82
OpenStack Resources for NorthStar Controller Installation | 83
NorthStar Controller in an OpenStack Environment Pre-Installation Steps | 84
Installing the NorthStar Controller in Standalone Mode Using a HEAT Template | 85
Launch the Stack | 85
Obtain the Stack Attributes | 86
Resize the Image | 87
Install the NorthStar Controller RPM Bundle | 89
Configure the JunosVM | 89
Configure SSH Key Exchange | 90
Installing a NorthStar Cluster Using a HEAT Template | 91
System Requirements | 91
Launch the Stack | 91
Obtain the Stack Attributes | 91
Configure the Virtual IP Address | 92
Resize the Image | 93
Install the NorthStar Controller RPM Bundle | 96
Configure the JunosVM | 96
Configure SSH Key Exchange | 96
Configure the HA Cluster | 97
Installing and Configuring Optional Features
Installing Data Collectors for Analytics | 99
Overview | 99
Analytics Geo-HA | 101
Single-Server Deployment–No NorthStar HA | 102
External Analytics Node(s)–No NorthStar HA | 103
External Analytics Node(s)–With NorthStar HA | 115
Verifying Data Collection When You Have External Analytics Nodes | 118
Replacing a Failed Node in an External Analytics Cluster | 121
Collectors Installed on the NorthStar HA Cluster Nodes | 126
Troubleshooting Logs | 132
Configuring Routers to Send JTI Telemetry Data and RPM Statistics to the Data Collectors | 133
Collector Worker Installation Customization | 138
Secondary Collector Installation for Distributed Data Collection | 140
Configuring a NorthStar Cluster for High Availability | 143
Before You Begin | 143
Set Up SSH Keys | 145
Access the HA Setup Main Menu | 146
Configure the Three Default Nodes and Their Interfaces | 150
Configure the JunosVM for Each Node | 152
(Optional) Add More Nodes to the Cluster | 153
Configure Cluster Settings | 155
Test and Deploy the HA Configuration | 156
Replace a Failed Node if Necessary | 161
Configure Fast Failure Detection Between JunosVM and PCC | 163
Using a Remote Server for NorthStar Planner | 164
Process Overview: Installing and Configuring Remote Planner Server | 164
Download the Software to the Remote Planner Server | 165
Install the Remote Planner Server | 165
Run the Remote Planner Server Setup Utility | 166
Installing Remote Planner Server at a Later Time | 173
Configuring Topology Acquisition and Connectivity Between the NorthStar Controller and the Path Computation Clients
Understanding Network Topology Acquisition on the NorthStar Controller | 176
Configuring Topology Acquisition | 178
Overview | 178
Before You Begin | 179
Configuring Topology Acquisition Using BGP-LS | 181
Configure BGP-LS Topology Acquisition on the NorthStar Controller | 181
Configure the Peering Router to Support Topology Acquisition | 182
Configuring Topology Acquisition Using OSPF | 183
Configure OSPF on the NorthStar Controller | 183
Configure OSPF over GRE on the NorthStar Controller | 184
Configuring Topology Acquisition Using IS-IS | 184
Configure IS-IS on the NorthStar Controller | 185
Configure IS-IS over GRE on the NorthStar Controller | 185
Configuring PCEP on a PE Router (from the CLI) | 186
Configuring a PE Router as a PCC | 186
Setting the PCC Version for Non-Juniper Devices | 188
Mapping a Path Computation Client PCEP IP Address | 190
Accessing the User Interface
NorthStar Application UI Overview | 194
UI Comparison | 194
Browser Compatibility | 195
The NorthStar Login Window | 195
Accessing the NorthStar Planner from Within NorthStar Controller | 198
User Inactivity Timer | 198
NorthStar Controller Web UI Overview | 198
Appendix
Upgrading from Pre-4.3 NorthStar with Analytics | 206
Export Existing Data from the NorthStar Application Server (Recommended) | 206
Upgrade Procedure with NorthStar Application and NorthStar Analytics on the Same Server | 208
Upgrade Procedure with NorthStar Application and NorthStar Analytics on Separate Servers | 208
Update the Netflow Aggregation Setting | 209
Import Existing Data (Recommended) | 210
About the Documentation
IN THIS SECTION
Documentation and Release Notes | ix
Documentation Conventions | ix
Documentation Feedback | xii
Requesting Technical Support | xii
Use this guide to install the NorthStar Controller application, perform initial configuration tasks, install
optional features, establish connectivity to the network, and access the NorthStar UI. System requirements
and deployment scenario server requirements are included.
Documentation and Release Notes
To obtain the most current version of all Juniper Networks® technical documentation, see the product
documentation page on the Juniper Networks website at https://www.juniper.net/documentation/.
If the information in the latest release notes differs from the information in the documentation, follow the
product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject matter experts.
These books go beyond the technical documentation to explore the nuances of network architecture,
deployment, and administration. The current list can be viewed at https://www.juniper.net/books.
Documentation Conventions
Table 1 on page x defines notice icons used in this guide.
Table 1: Notice Icons
Informational note: Indicates important features or instructions.
Caution: Indicates a situation that might result in loss of data or hardware damage.
Warning: Alerts you to the risk of personal injury or death.
Laser warning: Alerts you to the risk of personal injury from a laser.
Tip: Indicates helpful information.
Best practice: Alerts you to a recommended use or implementation.
Table 2 on page x defines the text and syntax conventions used in this guide.
Table 2: Text and Syntax Conventions

Bold text like this
  Represents text that you type.
  Example: To enter configuration mode, type the configure command:
    user@host> configure

Fixed-width text like this
  Represents output that appears on the terminal screen.
  Example:
    user@host> show chassis alarms
    No alarms currently active

Italic text like this
  Introduces or emphasizes important new terms, identifies guide names, and identifies RFC and Internet draft titles.
  Examples:
  • A policy term is a named structure that defines match conditions and actions.
  • Junos OS CLI User Guide
  • RFC 1997, BGP Communities Attribute

Italic text like this
  Represents variables (options for which you substitute a value) in commands or configuration statements.
  Example: Configure the machine’s domain name:
    [edit]
    root@# set system domain-name domain-name

Text like this
  Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components.
  Examples:
  • To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level.
  • The console port is labeled CONSOLE.

< > (angle brackets)
  Encloses optional keywords or variables.
  Example: stub <default-metric metric>;

| (pipe symbol)
  Indicates a choice between the mutually exclusive keywords or variables on either side of the symbol. The set of choices is often enclosed in parentheses for clarity.
  Examples: broadcast | multicast
            (string1 | string2 | string3)

# (pound sign)
  Indicates a comment specified on the same line as the configuration statement to which it applies.
  Example: rsvp { # Required for dynamic MPLS only

[ ] (square brackets)
  Encloses a variable for which you can substitute one or more values.
  Example: community name members [ community-ids ]

Indention and braces ( { } )
  Identifies a level in the configuration hierarchy.

; (semicolon)
  Identifies a leaf statement at a configuration hierarchy level.
  Example:
    [edit]
    routing-options {
        static {
            route default {
                nexthop address;
                retain;
            }
        }
    }

GUI Conventions

Bold text like this
  Represents graphical user interface (GUI) items you click or select.
  Examples:
  • In the Logical Interfaces box, select All Interfaces.
  • To cancel the configuration, click Cancel.

> (bold right angle bracket)
  Separates levels in a hierarchy of menu selections.
  Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback so that we can improve our documentation. You can use either
of the following methods:

• Online feedback system—Click TechLibrary Feedback, on the lower right of any page on the Juniper
  Networks TechLibrary site, and do one of the following:

  • Click the thumbs-up icon if the information on the page was helpful to you.
  • Click the thumbs-down icon if the information on the page was not helpful to you or if you have
    suggestions for improvement, and use the pop-up form to provide feedback.

• E-mail—Send your comments to techpubs-comments@juniper.net. Include the document or topic name,
  URL or page number, and software version (if applicable).
Requesting Technical Support
Technical product support is available through the Juniper Networks Technical Assistance Center (JTAC).
If you are a customer with an active Juniper Care or Partner Support Services support contract, or are
covered under warranty, and need post-sales technical support, you can access our tools and resources
online or open a case with JTAC.
• JTAC policies—For a complete understanding of our JTAC procedures and policies, review the JTAC User
  Guide located at https://www.juniper.net/us/en/local/pdf/resource-guides/7100059-en.pdf.

• JTAC hours of operation—The JTAC centers have resources available 24 hours a day, 7 days a week,
  365 days a year.
Self-Help Online Tools and Resources
For quick and easy problem resolution, Juniper Networks has designed an online self-service portal called
the Customer Support Center (CSC) that provides you with the following features:
Platform and Software Compatibility
The NorthStar Controller 6.1.0 release is qualified to work with Junos OS Release 18.3R2.4. We recommend
contacting JTAC for information about the compatibility of other Junos OS releases. Table 3 on page 15
lists feature-specific Junos OS requirements. The NorthStar features listed have been qualified with the
specified Junos OS release and are intended to work with that release.
Table 3: Feature-Specific Junos OS Requirements

NorthStar Feature: Junos OS Release

• Analytics: 15.1F6
• Segment Routing (SPRING), MD5 authentication for PCEP, P2MP, Admin groups: 17.2R1
• PCEP-Provisioned P2MP Groups: 18.3R2
• PCEP-Provisioned P2MP Groups with MVPN (S,G) Service Mapping via Flowspec: 19.4R1
• EPE: 19.2R1.8
• Bandwidth sizing and container LSPs for SR-TE LSPs: 19.2R1.2
• PCC Delegated LSP Support for SR LSPs: 19.4R3, 20.1R1
NOTE: The Path Computation Element Protocol (PCEP) configuration on the PCC routers does
not persist across upgrades when the SDN package is not part of the installation binary. Before
upgrading the Junos OS image to this release, save the existing configuration to a file by using
the save command. After you upgrade the Junos OS image on each PCC router, use the load
override command to restore the PCEP configuration.
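A minimal sketch of that save-and-restore sequence on one PCC (the file name is illustrative; note that load override replaces the entire candidate configuration, so save the full configuration rather than only the PCEP stanza):

```
[edit]
user@pcc# save /var/tmp/pre-upgrade.conf

    <upgrade the Junos OS image>

[edit]
user@pcc# load override /var/tmp/pre-upgrade.conf
user@pcc# commit
```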
The NorthStar Controller is supported on the following Juniper platforms: M Series, T Series, MX Series,
PTX Series, and QFX10008. As of Junos OS Release 17.4R1, NorthStar Controller is also supported on
QFX5110, QFX5100, and QFX5200. Please contact JTAC for more information.
Junos OS supports Internet draft draft-crabbe-pce-pce-initiated-lsp-03 for the stateful PCE-initiated LSP
implementation (M Series, MX Series, PTX Series, T Series, and QFX Series).
Installation Options
There are three NorthStar Controller installation options for use with Junos VM as summarized in
Figure 1 on page 16.
Figure 1: NorthStar/Junos VM Installation Options
You can also install NorthStar Controller using cRPD as summarized in Figure 2 on page 17.
Figure 2: NorthStar/cRPD Installation
For installation procedures, see:

• Installing the NorthStar Controller on page 38

  This topic also includes information about installing with NorthStar cRPD.

• Overview of NorthStar Controller Installation in an OpenStack Environment on page 78
RELATED DOCUMENTATION
NorthStar Controller System Requirements | 18
Installing the NorthStar Controller | 38
NorthStar Controller System Requirements
The NorthStar Controller runs on Linux systems running CentOS 7 or Red Hat Enterprise Linux (RHEL) 7.
Ensure that:

• You use a supported version of CentOS Linux or Red Hat Enterprise Linux (RHEL). These are our Linux
  recommendations:

  • CentOS Linux or RHEL 7.6 or 7.7 image. Earlier versions are not supported.
  • Install your choice of supported Linux version using the minimal ISO.

• You use the RAM, number of virtual CPUs, and hard disk specified in “Server Sizing Guidance” on page 18
  for your installation.

• You open the ports listed in “Firewall Port Guidance” on page 23.
NOTE: When upgrading NorthStar Controller, files are backed up to the /opt directory.
Server Sizing Guidance
The guidance in this section should help you to configure your servers with sufficient resources to efficiently
and effectively support the NorthStar Controller functions. The recommendations in this section are the
result of internal testing combined with field data.
A typical NorthStar deployment contains the following systems:

• An application system

  The application system contains the path computation element (PCE), the path computation server (PCS),
  the components for Web access, topology acquisition, and CLI or SNMP message collection, and a
  configuration database.

• An analytics system

  The analytics system is used for telemetry and collecting NetFlow data, and contains the analytics
  database. The analytics system is used in deployments that track traffic levels of a network.

• (Optional) A dedicated or secondary collector

  A secondary collector is used for collecting CLI and SNMP messages from large nodes and is needed
  when heavy data collection is required; see Table 5 on page 20.

• (Optional) A dedicated planner node

  A planner node is required for running offline network simulation on a system other than the application
  system; see Table 5 on page 20.
For high availability deployments, described in “Configuring a NorthStar Cluster for High Availability” on
page 143, a cluster would have 3 or more application and analytics systems, but they would be sized similarly
to a deployment with a single application system and a single analytics system.
Table 4 on page 19 outlines the estimated server requirements of the application and analytics systems
by network size.
Table 4: Server Requirements for Application and Analytics Systems by Network Size

(Columns: Instance Type, then RAM / vCPU / HDD for POC/LAB, Medium (<75 nodes), Large (<300 nodes),
and XL (300+ nodes)* deployments.)

• Application system: 500G HDD. For collecting a large number of SNMP and CLI messages on a single,
  non-high availability (HA) system, you may require an additional 16GB RAM and 8 vCPUs, or a secondary
  collector; see Table 5 on page 20.

• Analytics system: 500G HDD. NetFlow deployments may require an additional 16G to 32G RAM and
  doubling of the virtual CPUs on the analytics system.

* NOTE: Based on the number of devices in your network, check with your Juniper Networks representative
to confirm your specific requirements for networks in the XL category.
When installing the minimal CentOS or RHEL Linux image, the filesystems can be collapsed into a single
root (/) filesystem or kept as separate filesystems. If you are using separate filesystems, you can assign
space to each directory according to the sizes in Table 6 on page 20.
Table 6: Recommended Space for Filesystem

• /boot (1G): Linux kernel and necessary files for boot
• swap (0 to 4G): Not needed, but can have minimal configuration
• / (10G): Operating system (including /usr)
• /var/lib/docker (20G): Containerized processes (application system only)
• /tmp (24G): NorthStar debug files in case of process error (application system only)
• /opt (remaining space in the filesystem): NorthStar components
Additional Disk Space for JTI Analytics in ElasticSearch
Considerable storage space is needed to support JTI analytics in ElasticSearch. Each JTI record event
requires approximately 330 bytes of disk space. A reasonable estimate of the number of events generated
is (<num-of-interfaces> + <number-of-LSPs>) ÷ reporting-interval-in-seconds = events per second.
So for a network with 500 routers, 50K interfaces, and 60K LSPs, with a configured five-minute reporting
interval (300 seconds), you can expect something in the neighborhood of 366 events per second to be
generated. At 330 bytes per event, it comes out to 366 events x 330 bytes x 86,400 seconds in a day =
over 10G of disk space per day or 3.65T per year. For the same size network, but with a one-minute
reporting interval (60 seconds), you would have a much larger disk space requirement—over 50G per day
or 18T per year.
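The arithmetic in these examples can be checked with a short calculation (a Python sketch of the formula above; the constants are the ones stated in the text):

```python
def jti_disk_estimate(num_interfaces, num_lsps, interval_s, bytes_per_event=330):
    """Estimate ElasticSearch disk usage for JTI analytics.

    events/sec = (interfaces + LSPs) / reporting-interval-in-seconds,
    at roughly 330 bytes of disk per event.
    Returns (events_per_second, bytes_per_day).
    """
    events_per_sec = (num_interfaces + num_lsps) / interval_s
    bytes_per_day = events_per_sec * bytes_per_event * 86_400
    return events_per_sec, bytes_per_day

# 500 routers, 50K interfaces, 60K LSPs, five-minute (300 s) reporting interval:
eps, per_day = jti_disk_estimate(50_000, 60_000, 300)
print(round(eps))               # 367 events per second (the text truncates to 366)
print(round(per_day / 1e9, 1))  # 10.5, i.e. "over 10G of disk space per day"
```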
There is an additional roll-up event created per hour per element for data aggregation. In a network with
50K interfaces and 60K LSPs (total of 110K elements), you would have 110K roll-up events per hour. In
terms of disk space, that would be 110K events per hour x 330 bytes per event x 24 hours per day = almost
1G of disk space required per day.
For a typical network of about 100K elements (interfaces + LSPs), we recommend that you allow for an
additional 11G of disk space per day if you have a five-minute reporting interval, or 51G per day if you
have a one-minute reporting interval.
See NorthStar Analytics Raw and Aggregated Data Retention in the NorthStar Controller User Guide for
information about customizing data aggregation and retention parameters to reduce the amount of disk
space required by ElasticSearch.
Additional Disk Space for Network Events in Cassandra
The Cassandra database is another component that requires additional disk space for storage of network
events.
Using that same example of 50K interfaces and 60K LSPs (110K elements) and estimating one event every
15 minutes (900 seconds) per element, there would be 122 events per second. The storage needed would
then be 122 events per second x 300 bytes per event x 86,400 seconds per day = about 3.2 G per day, or
1.2T per year.
Using one event every 5 minutes per element as an estimate instead of every 15 minutes, the additional
storage requirement is more like 9.6G per day or 3.6T per year.
For a typical network of about 100K elements (interfaces + LSPs), we recommend that you allow for an
additional 3-10G of disk space per day, depending on the rate of event generation in your network.
By default, NorthStar keeps event history for 35 days. To customize the number of days event data is
retained:
1. Modify the dbCapacity parameter in /opt/northstar/data/web_config.json
2. Restart the pruneDB process using the supervisorctl restart infra:prunedb command.
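Step 1 can be scripted; the following is a sketch only, and it assumes web_config.json is plain JSON with a top-level dbCapacity key (verify the structure of your file before editing it in place):

```python
import json

def set_event_retention(config_path, days):
    """Set dbCapacity (days of event history NorthStar keeps) in web_config.json."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["dbCapacity"] = days  # default is 35 days
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)

# Example: keep 14 days of events, then restart pruneDB (step 2):
#   set_event_retention("/opt/northstar/data/web_config.json", 14)
#   supervisorctl restart infra:prunedb
```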
Collector (Celery) Memory Requirements
When you use the collector.sh script to install secondary collectors on a server separate from the NorthStar
application (for distributed collection), the script installs the default number of collector workers described
in Table 7 on page 22. The number of celery processes started by each worker is the number of cores in
the CPU plus one. So in a 32-core server (for example), the one installed default worker would start 33
celery processes. Each celery process uses about 50M of RAM.
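The sizing rule above can be expressed directly (a sketch of the stated formula, not NorthStar code):

```python
def celery_memory_mb(cpu_cores, workers=1, mb_per_process=50):
    """Celery processes per worker = CPU cores + 1; each uses about 50M of RAM."""
    processes = workers * (cpu_cores + 1)
    return processes, processes * mb_per_process

# A 32-core server with the single default worker:
procs, mem = celery_memory_mb(32)
print(procs, mem)  # prints: 33 1650  (33 processes, about 1650M of RAM)
```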
Table 7: Default Workers, Processes, and Memory by Number of CPU Cores
See “Secondary Collector Installation for Distributed Data Collection” on page 140 for more information
about distributed data collection and secondary workers.
The default number of workers installed is intended to optimize server resources, but you can change the
number by using the provided config_celery_workers.sh script. See “Collector Worker Installation
Customization” on page 138 for more information. You can use this script to balance the number of workers
installed with the amount of memory available on the server.
NOTE: This script is also available to change the number of workers installed on the NorthStar
application server from the default, which also follows the formulas shown in Table 7 on page 22.
Firewall Port Guidance
The ports listed in Table 8 on page 23 must be allowed by any external firewall being used. The ports with
the word cluster in their purpose descriptions are associated with high availability (HA) functionality. If
you are not planning to configure an HA environment, you can ignore those ports. The ports with the word
Analytics in their purpose descriptions are associated with the Analytics feature. If you are not planning
to use Analytics, you can ignore those ports. The remaining ports listed must be kept open in all
configurations.
Table 8: Ports That Must Be Allowed by External Firewalls

Port: Purpose

• 161: SNMP
• 179: BGP: JunosVM or cRPD for router BGP-LS—not needed if IGP is used for topology acquisition. In
  a cRPD installation, the router connects port 179/TCP (BGP) directly to the NorthStar application
  server. cRPD runs as a process inside the NorthStar application server. JunosVM and cRPD are
  mutually exclusive.
• 450: NTAD
• 830: NETCONF communication between the NorthStar Controller and routers. This is the default port for
  NETCONFD, but in some installations, port 22 is preferred. To change to port 22, access the
  NorthStar CLI as described in “Configuring NorthStar Settings Using the NorthStar CLI” on page 66,
  and modify the value of the port setting using the set northstar netconfd device-connection-pool
  netconf port command.
• 2222: Containerized Management Daemon (cMGD). Used to access the NorthStar CLI.
• 2888: Zookeeper cluster
• 3000: JTI: Default Junos Telemetry Interface reports for IFD, IFL, and LSP (supports NorthStar Analytics).
  In previous NorthStar releases, three JTI ports were required (2000, 2001, 2002). Starting with
  Release 4.3.0, this single port is used instead.
• 3001: Model Driven Telemetry (MDT)
• 3002: MDT
• 3888: Zookeeper cluster
• 4189: PCEP: PCC (router) to NorthStar PCE server
• 5000: cMGD-REST
• 5672: RabbitMQ
• 6379: Redis
• 7000: Communications port to NorthStar Planner
• 7001: Cassandra database cluster
• 8124: Health Monitor
• 8443: Web: Web client/REST to secure web server (https)
• 9000: Netflow
• 9042: Remote Planner Server
• 9201: Elasticsearch
• 9300: Elasticsearch cluster
• 10001: BMP passive mode: By default, the monitor listens on this port for incoming connections from the
  network.
• 17000: Cassandra database cluster
• 50051: PRPD: NorthStar application to router network
Figure 3 on page 25 details the direction of data flow through the ports, when node clusters are not being
used. Figure 4 on page 26 and Figure 5 on page 26 detail the additional flows for NorthStar application
HA clusters and analytics HA clusters, respectively.
Figure 3: NorthStar Main Port Map
Figure 4: NorthStar Application HA Port Map
Figure 5: Analytics HA Port Map
Analytics Requirements
In addition to ensuring that ports 3000 and 1514 are kept open, using the NorthStar analytics features
requires that you counter the effects of Reverse Path Filtering (RPF) if necessary. If your kernel does RPF
by default, you must do one of the following to counter the effects:

• Disable RPF.
• Ensure there is a route to the source IP address of the probes pointing to the interface where those
  probes are received.
• Specify loose mode reverse filtering (if the source address is routable with any of the routes on any of
  the interfaces).
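On a stock Linux kernel, the first and third options correspond to the rp_filter sysctl (shown here as an illustration; whether to apply it globally or per interface is deployment-specific):

```
# Disable RPF entirely (first option) ...
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0

# ... or specify loose mode reverse filtering (third option):
sysctl -w net.ipv4.conf.all.rp_filter=2
```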
Two-VM Installation Requirements
A two-VM installation is one in which the JunosVM is not bundled with the NorthStar Controller software.
VM Image Requirements
• The NorthStar Controller application VM is installed on top of a Linux VM, so a Linux VM is required. You
  can obtain a Linux VM image in either of the following ways:

  • Use the generic version provided by most Linux distributors. Typically, these are cloud-based images
    for use in a cloud-init-enabled environment, and do not require a password. These images are fully
    compatible with OpenStack.
  • Create your own VM image. Some hypervisors, such as generic KVM, allow you to create your own
    VM image. We recommend this approach if you are not using OpenStack and your hypervisor does
    not natively support cloud-init.

• The JunosVM is provided in Qcow2 format when inside the NorthStar Controller bundle. If you download
  the JunosVM separately (not bundled with NorthStar) from the NorthStar download site, it is provided
  in VMDK format.

• The JunosVM image is only compatible with IDE disk controllers. You must configure the hypervisor to
  use the IDE rather than the SATA controller type for the JunosVM disk image.
If you have, and want to continue using, a JunosVM version older than Release 17.2R1, you can change
the NorthStar configuration to support it, but segment routing support will not be available. See “Installing
the NorthStar Controller” on page 38 for the configuration steps.
VM Networking Requirements
The following networking requirements must be met for the two-VM installation approach to be successful:

• Each VM requires the following virtual NICs:

  • One connected to the external network
  • One for the internal connection between the NorthStar application and the JunosVM
  • One connected to the management network if a different interface is required between the
    router-facing and client-facing interfaces

• We recommend a flat or routed network without any NAT for full compatibility.

• A virtual network with one-to-one NAT (usually referenced as a floating IP) can be used as long as BGP-LS
  is used as the topology acquisition mechanism. If IS-IS or OSPF adjacency is required, it should be
  established over a GRE tunnel.

NOTE: A virtual network with n-to-one NAT is not supported.
Changing Control Packet Classification Using the Mangle Table
The NorthStar application uses default classification for control packets. To support a different packet
classification, you can use Linux firewall iptables to reclassify packets to a different priority.
The following sample configuration snippets show how to modify the ToS bits using the mangle table,
changing DSCP values to cs6.
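A representative rule of this kind (an illustration only, assuming the traffic to re-mark is PCEP, which uses TCP port 4189 per Table 8; adapt the match to your own control traffic):

```
# Mangle table, OUTPUT chain: set DSCP class cs6 on locally generated
# PCEP control traffic (TCP source port 4189).
iptables -t mangle -A OUTPUT -p tcp --sport 4189 -j DSCP --set-dscp-class cs6
```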