HP MSR2003, MSR3024, MSR3044, MSR3064, MSR4000 IP Multicast Configuration Guide(V7)

HP MSR Router Series
IP Multicast
Configuration Guide(V7)
Part number: 5998-5679
Software version: CMW710-R0106
Legal and notice information
© Copyright 2014 Hewlett-Packard Development Company, L.P.
No part of this documentation may be reproduced or transmitted in any form or by any means without prior written consent of Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
HEWLETT-PACKARD COMPANY MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Contents

Multicast overview ······················································································································································· 1
Introduction to multicast ···················································································································································· 1
Information transmission techniques ······························································································ 1
Multicast features ····················································································································· 3
Common notations in multicast ······························································································· 4
Multicast benefits and applications ························································································ 4
Multicast models ······························································································································· 5
Multicast architecture ······················································································································· 5
Multicast addresses ·················································································································································· 6
Multicast protocols ··················································································································· 9
Multicast packet forwarding mechanism ····················································································· 11
Multicast support for VPNs ············································································································ 11
Introduction to VPN instances ······························································································································ 11
Multicast application in VPNs ······························································································································ 12
Configuring IGMP snooping ····································································································································· 13
Overview ········································································································································································· 13
Basic IGMP snooping concepts ··························································································································· 13
How IGMP snooping works ································································································································· 15
Protocols and standards ······································································································· 16
Feature and hardware compatibility ···························································································· 16
IGMP snooping configuration task list ························································································· 16
Configuring basic IGMP snooping functions ·············································································· 17
Enabling IGMP snooping ····································································································································· 17
Specifying the IGMP snooping version ··············································································································· 18
Setting the maximum number of IGMP snooping forwarding entries ······························· 19
Configuring IGMP snooping port functions ················································································· 19
Setting aging timers for dynamic ports ··············································································································· 19
Configuring static ports ········································································································································· 20
Configuring a port as a simulated member host ······························································································· 21
Enabling IGMP snooping fast-leave processing ································································································· 21
Disabling a port from becoming a dynamic router port ··················································· 22
Configuring an IGMP snooping querier ······················································································ 23
Configuration prerequisites ·································································································································· 23
Enabling IGMP snooping querier ························································································································ 23
Configuring parameters for IGMP queries and responses ··············································· 23
Configuring parameters for IGMP messages ·············································································· 24
Configuration prerequisites ·································································································································· 24
Configuring source IP addresses for IGMP messages ······················································································· 24
Setting the 802.1p precedence for IGMP messages ························································ 25
Configuring IGMP snooping policies ··························································································· 26
Configuring a multicast group filter ····················································································································· 26
Configuring multicast source port filtering ·········································································································· 27
Enabling dropping unknown multicast data ······································································································· 27
Enabling IGMP report suppression ······················································································································ 28
Setting the maximum number of multicast groups on a port ············································································ 28
Enabling the multicast group replacement function ··························································· 29
Displaying and maintaining IGMP snooping ·············································································· 30
IGMP snooping configuration examples ····················································································· 31
Group policy and simulated joining configuration example ············································································ 31
Static port configuration example ······················································································································· 33
IGMP snooping querier configuration example ································································· 36
Troubleshooting IGMP snooping ·································································································· 38
Layer 2 multicast forwarding cannot function ···································································································· 38
Multicast group filter does not work ···················································································································· 38
Configuring multicast routing and forwarding ········································································································· 40
Overview ········································································································································································· 40
RPF check mechanism ··········································································································································· 40
Static multicast routes ············································································································································ 42
Multicast forwarding across unicast subnets ······································································ 44
Multicast routing and forwarding configuration task list ··························································· 44
Enabling IP multicast routing ········································································································· 45
Configuring multicast routing and forwarding ············································································ 45
Configuring static multicast routes ······················································································································· 45
Configuring the RPF route selection rule ············································································································· 46
Configuring multicast load splitting ····················································································································· 46
Configuring a multicast forwarding boundary ··································································· 46
Displaying and maintaining multicast routing and forwarding ················································· 47
Configuration examples ················································································································ 48
Changing an RPF route ········································································································································· 48
Creating an RPF route ··········································································································································· 50
Multicast forwarding over a GRE tunnel ············································································· 52
Troubleshooting multicast routing and forwarding ····································································· 55
Static multicast route failure ································································································································· 55
Configuring IGMP ······················································································································································ 56
Overview ········································································································································································· 56
IGMPv1 overview ·················································································································································· 56
IGMPv2 enhancements ········································································································································· 58
IGMPv3 enhancements ········································································································································· 58
IGMP SSM mapping ············································································································································· 60
IGMP proxying ······················································································································································ 61
IGMP support for VPNs ········································································································································ 62
Protocols and standards ······································································································· 62
IGMP configuration task list ·········································································································· 62
Configuring basic IGMP functions ······························································································· 62
Enabling IGMP ······················································································································································ 63
Specifying the IGMP version ································································································································ 63
Configuring an interface as a static member interface ····················································································· 63
Configuring a multicast group filter ····················································································· 64
Adjusting IGMP performance ······································································································· 64
Configuring IGMP query parameters ·················································································································· 64
Enabling IGMP fast-leave processing ·················································································· 65
Configuring IGMP SSM mappings ······························································································· 66
Configuration prerequisites ·································································································································· 66
Configuration procedure ······································································································ 66
Configuring IGMP proxying ········································································································· 66
Configuration prerequisites ·································································································································· 66
Enabling IGMP proxying ······································································································································ 66
Configuring multicast forwarding on a downstream interface ········································································· 67
Configuring multicast load splitting on the IGMP proxy ··················································· 67
Displaying and maintaining IGMP ······························································································· 68
IGMP configuration examples ······································································································ 68
Basic IGMP functions configuration examples ··································································································· 68
IGMP SSM mapping configuration example ····································································································· 71
IGMP proxying configuration example ··············································································· 74
Troubleshooting IGMP ··················································································································· 75
No membership information on the receiver-side router ··················································································· 75
Inconsistent membership information on the routers on the same subnet ························································ 76
Configuring PIM ························································································································································· 77
Overview ········································································································································································· 77
PIM-DM overview ·················································································································································· 77
PIM-SM overview ··················································································································································· 79
BIDIR-PIM overview ················································································································································ 86
Administrative scoping overview ························································································································· 89
PIM-SSM overview ················································································································································· 91
Relationship among PIM protocols ······················································································································ 92
PIM support for VPNs ············································································································································ 93
Protocols and standards ······································································································· 93
Configuring PIM-DM ······················································································································ 93
PIM-DM configuration task list ····························································································· 94
Configuration prerequisites ·································································································································· 94
Enabling PIM-DM ··················································································································································· 94
Enabling the state refresh feature ························································································································ 94
Configuring state refresh parameters ·················································································································· 95
Configuring PIM-DM graft retry timer ·················································································· 95
Configuring PIM-SM ······················································································································· 96
PIM-SM configuration task list ······························································································································ 96
Configuration prerequisites ·································································································································· 96
Enabling PIM-SM ··················································································································································· 96
Configuring an RP ················································································································································· 97
Configuring a BSR ················································································································································· 99
Configuring multicast source registration ········································································· 101
Configuring switchover to SPT ··························································································· 102
Configuring BIDIR-PIM ················································································································· 103
BIDIR-PIM configuration task list ························································································································· 103
Configuration prerequisites ································································································································ 103
Enabling BIDIR-PIM ·············································································································································· 103
Configuring an RP ··············································································································································· 104
Configuring a BSR ··············································································································· 105
Configuring PIM-SSM ·················································································································· 108
PIM-SSM configuration task list ·························································································································· 108
Configuration prerequisites ································································································································ 108
Enabling PIM-SM ················································································································································· 108
Configuring the SSM group range ···················································································· 109
Configuring common PIM features ····························································································· 109
Configuration task list ········································································································································· 109
Configuration prerequisites ································································································································ 109
Configuring a multicast data filter ····················································································································· 110
Configuring a hello message filter ···················································································································· 110
Configuring PIM hello message options ··········································································································· 110
Configuring common PIM timers ······················································································································· 112
Setting the maximum size of each join or prune message ············································································· 113
Enabling BFD for PIM ·········································································································································· 114
Enabling SNMP notifications for PIM ················································································ 114
Displaying and maintaining PIM ································································································ 114
PIM configuration examples ······································································································· 115
PIM-DM configuration example ························································································································· 115
PIM-SM non-scoped zone configuration example ··························································································· 119
PIM-SM admin-scoped zone configuration example ······················································································· 122
BIDIR-PIM configuration example ······················································································································· 127
PIM-SSM configuration example ························································································ 131
Troubleshooting PIM ···················································································································· 134
A multicast distribution tree cannot be correctly built ······················································································ 134
Multicast data is abnormally terminated on an intermediate router ······························································ 135
An RP cannot join an SPT in PIM-SM ················································································································ 136
An RPT cannot be built or multicast source registration fails in PIM-SM ························································ 136
Configuring MSDP ·················································································································································· 138
Overview ······································································································································································· 138
How MSDP works ··············································································································································· 138
MSDP support for VPNs ······································································································································ 143
Protocols and standards ····································································································· 143
MSDP configuration task list ······································································································· 144
Configuring basic MSDP functions ····························································································· 144
Configuration prerequisites ································································································································ 144
Enabling MSDP ···················································································································································· 144
Creating an MSDP peering connection ············································································································ 145
Configuring a static RPF peer ···························································································· 145
Configuring an MSDP peering connection ··············································································· 145
Configuration prerequisites ································································································································ 145
Configuring the description for an MSDP peer ································································································ 145
Configuring an MSDP mesh group ··················································································································· 146
Controlling MSDP peering connections ············································································ 146
Configuring SA message related parameters ··········································································· 147
Configuration prerequisites ································································································································ 147
Configuring SA message contents ····················································································································· 148
Configuring SA request messages ····················································································································· 148
Configuring SA message filtering rules ············································································································· 149
Configuring the SA message cache ·················································································· 150
Displaying and maintaining MSDP ···························································································· 150
MSDP configuration examples ···································································································· 151
PIM-SM inter-domain multicast configuration ··································································································· 151
Inter-AS multicast configuration by leveraging static RPF peers ····································································· 156
Anycast RP configuration ···································································································································· 161
SA message filtering configuration ···················································································· 165
Troubleshooting MSDP ················································································································ 169
MSDP peers stay in disabled state ···················································································································· 169
No SA entries exist in the router's SA message cache ··················································································· 169
No exchange of locally registered (S, G) entries between RPs ······································································ 170
Configuring multicast VPN ····································································································································· 171
Overview ······································································································································································· 171
MD-VPN overview ··············································································································································· 172
Protocols and standards ····································································································································· 176
How MD-VPN works ···················································································································································· 176
Default-MDT establishment ································································································································· 176
Default-MDT-based delivery ································································································································ 178
MDT switchover ··················································································································································· 181
Inter-AS MD VPN ················································································································································· 182
Multicast VPN configuration task list ·························································································································· 183
Configuring MD-VPN ··················································································································································· 184
Configuration prerequisites ································································································································ 184
Enabling IP multicast routing in a VPN instance ······························································································ 184
Creating the MD for a VPN instance ················································································································ 185
Specifying the default-group address ················································································································ 185
Specifying the MD source interface ·················································································································· 186
Configuring MDT switchover parameters ········································································································· 186
Enabling data-group reuse logging ··················································································································· 187
Configuring BGP MDT ················································································································································· 187
Configuration prerequisites ································································································································ 187
Enabling BGP MDT peers or peer groups ········································································································ 187
Configuring a BGP MDT route reflector ············································································································ 188
Displaying and maintaining multicast VPN ··············································································································· 189
Multicast VPN configuration examples ······················································································································ 189
Intra-AS MD VPN configuration example ········································································································· 190
Inter-AS MD VPN configuration example ········································································································· 202
Troubleshooting MD-VPN ············································································································································ 216
A default-MDT cannot be established ··············································································································· 216
An MVRF cannot be created ······························································································································ 217
Configuring MLD snooping ···································································································································· 218
Overview ······································································································································································· 218
Basic MLD snooping concepts ··························································································································· 218
How MLD snooping works ································································································································· 220
Protocols and standards ····································································································································· 221
Hardware and feature compatibility ·························································································································· 221
MLD snooping configuration task list ························································································································· 222
Configuring basic MLD snooping functions ·············································································································· 222
Enabling MLD snooping ····································································································································· 222
Specifying the MLD snooping version ··············································································································· 223
Setting the maximum number of MLD snooping forwarding entries ······························································ 224
Configuring MLD snooping port functions ················································································································· 224
Setting aging timers for dynamic ports ············································································································· 224
Configuring static ports ······································································································································· 225
Configuring a port as a simulated member host ····························································································· 226
Enabling MLD snooping fast-leave processing ································································································· 226
Disabling a port from becoming a dynamic router port ················································································· 227
Configuring the MLD snooping querier ····················································································································· 228
Configuration prerequisites ································································································································ 228
Enabling MLD snooping querier ························································································································ 228
Configuring parameters for MLD queries and responses ··············································································· 228
Configuring parameters for MLD messages ·············································································································· 229
Configuration prerequisites ································································································································ 229
Configuring source IPv6 addresses for MLD messages ··················································································· 230
Setting the 802.1p precedence for MLD messages ························································································ 230
Configuring MLD snooping policies ··························································································································· 231
Configuring an IPv6 multicast group filter ········································································································ 231
Configuring IPv6 multicast source port filtering ······························································································· 232
Enabling dropping unknown IPv6 multicast data ···························································································· 233
Enabling MLD report suppression ······················································································································ 233
Setting the maximum number of IPv6 multicast groups on a port ·································································· 234
Enabling the IPv6 multicast group replacement function ················································································· 234
Displaying and maintaining MLD snooping ·············································································································· 235
MLD snooping configuration examples ····················································································································· 236
IPv6 group policy and simulated joining configuration example ·································································· 236
Static port configuration example ····················································································································· 238
MLD snooping querier configuration example ································································································· 241
Troubleshooting MLD snooping ·································································································································· 244
Layer 2 multicast forwarding cannot function ·································································································· 244
IPv6 multicast group filter does not work ·········································································································· 244
Configuring IPv6 multicast routing and forwarding ····························································································· 245
Overview ······································································································································································· 245
RPF check mechanism ········································································································································· 245
IPv6 multicast forwarding across IPv6 unicast subnets ···················································································· 247
Configuration task list ·················································································································································· 248
Enabling IPv6 multicast routing ··································································································································· 248
Configuring IPv6 multicast routing and forwarding ································································································· 248
Configuring the RPF route selection rule ··········································································································· 248
Configuring IPv6 multicast load splitting··········································································································· 249
Configuring an IPv6 multicast forwarding boundary ······················································································ 249
Displaying and maintaining IPv6 multicast routing and forwarding ······································································ 250
IPv6 multicast routing and forwarding configuration examples ·············································································· 251
IPv6 multicast forwarding over a GRE tunnel ··································································································· 251
Configuring MLD ····················································································································································· 255
Overview ······································································································································································· 255
How MLDv1 works ·············································································································································· 255
MLDv2 enhancements ········································································································································· 257
MLD SSM mapping ············································································································································· 258
MLD proxying ······················································································································································ 259
MLD support for VPNs ········································································································································ 260
Protocols and standards ····································································································································· 260
MLD configuration task list ·········································································································································· 260
Configuring basic MLD functions ······························································································································· 260
Enabling MLD ······················································································································································ 261
Specifying the MLD version ································································································································ 261
Configuring an interface as a static member interface ··················································································· 261
Configuring an IPv6 multicast group filter ········································································································ 262
Adjusting MLD performance ······································································································································· 262
Configuring MLD query parameters ·················································································································· 262
Enabling MLD fast-leave processing ·················································································································· 263
Configuring MLD SSM mappings ······························································································································· 263
Configuration prerequisites ································································································································ 264
Configuration procedure ···································································································································· 264
Configuring MLD proxying ········································································································································· 264
Configuration prerequisites ································································································································ 264
Enabling MLD proxying ······································································································································ 264
Configuring IPv6 multicast forwarding on a downstream interface ······························································ 265
Configuring IPv6 multicast load splitting on the MLD proxy ··········································································· 265
Displaying and maintaining MLD ······························································································································· 266
MLD configuration examples ······································································································································ 266
Basic MLD functions configuration examples ··································································································· 266
MLD SSM mapping configuration example ····································································································· 269
MLD proxying configuration example ··············································································································· 272
Troubleshooting MLD ··················································································································································· 273
No member information exists on the receiver-side router ············································································· 273
Inconsistent membership information on the routers on the same subnet ······················································ 274
Configuring IPv6 PIM ·············································································································································· 275
Overview ······································································································································································· 275
IPv6 PIM-DM overview ········································································································································ 275
IPv6 PIM-SM overview ········································································································································ 277
IPv6 BIDIR-PIM overview ····································································································································· 284
IPv6 administrative scoping overview ··············································································································· 287
IPv6 PIM-SSM overview ······································································································································ 289
Relationship among IPv6 PIM protocols ············································································································ 290
IPv6 PIM support for VPNs ································································································································· 291
Protocols and standards ····································································································································· 291
Configuring IPv6 PIM-DM ············································································································································ 291
IPv6 PIM-DM configuration task list ··················································································································· 292
Configuration prerequisites ································································································································ 292
Enabling IPv6 PIM-DM ········································································································································ 292
Enabling the state refresh feature ······················································································································ 292
Configuring state refresh parameters ················································································································ 293
Configuring IPv6 PIM-DM graft retry timer ······································································································· 293
Configuring IPv6 PIM-SM ············································································································································ 294
IPv6 PIM-SM configuration task list ···················································································································· 294
Configuration prerequisites ································································································································ 294
Enabling IPv6 PIM-SM ········································································································································· 294
Configuring an RP ··············································································································································· 295
Configuring a BSR ··············································································································································· 297
Configuring IPv6 multicast source registration ································································································· 299
Configuring switchover to SPT ··························································································································· 300
Configuring IPv6 BIDIR-PIM ········································································································································· 301
IPv6 BIDIR-PIM configuration task list ················································································································ 301
Configuration prerequisites ································································································································ 301
Enabling IPv6 BIDIR-PIM ····································································································································· 301
Configuring an RP ··············································································································································· 302
Configuring a BSR ··············································································································································· 303
Configuring IPv6 PIM-SSM ·········································································································································· 306
IPv6 PIM-SSM configuration task list ················································································································· 306
Configuration prerequisites ································································································································ 306
Enabling IPv6 PIM-SM ········································································································································· 306
Configuring the IPv6 SSM group range ··········································································································· 307
Configuring common IPv6 PIM features ···················································································································· 307
Configuration task list ········································································································································· 307
Configuration prerequisites ································································································································ 307
Configuring an IPv6 multicast data filter ··········································································································· 308
Configuring a hello message filter ···················································································································· 308
Configuring IPv6 PIM hello message options ··································································································· 308
Configuring common IPv6 PIM timers ··············································································································· 310
Setting the maximum size of each join or prune message ············································································· 311
Enabling BFD for IPv6 PIM ································································································································· 312
Enabling SNMP notifications for IPv6 PIM ······································································································· 312
Displaying and maintaining IPv6 PIM ························································································································ 313
IPv6 PIM configuration examples ······························································································································· 313
IPv6 PIM-DM configuration example ················································································································· 313
IPv6 PIM-SM non-scoped zone configuration example ··················································································· 317
IPv6 PIM-SM admin-scoped zone configuration example ··············································································· 320
IPv6 BIDIR-PIM configuration example ·············································································································· 325
IPv6 PIM-SSM configuration example ··············································································································· 330
Troubleshooting IPv6 PIM ············································································································································ 333
A multicast distribution tree cannot be correctly built ······················································································ 333
IPv6 multicast data is abnormally terminated on an intermediate router ······················································ 334
An RP cannot join an SPT in IPv6 PIM-SM ········································································································ 334
An RPT cannot be built or IPv6 multicast source registration fails in IPv6 PIM-SM ······································· 335
Support and other resources ·································································································································· 336
Contacting HP ······························································································································································ 336
Subscription service ············································································································································ 336
Related information ······················································································································································ 336
Documents ···························································································································································· 336
Websites ······························································································································································· 336
Conventions ·································································································································································· 337

Multicast overview

Introduction to multicast

As a technique that coexists with unicast and broadcast, the multicast technique effectively addresses the issue of point-to-multipoint data transmission. By enabling high-efficiency point-to-multipoint data transmission over a network, multicast greatly saves network bandwidth and reduces network load.
By using multicast technology, a network operator can easily provide bandwidth-critical and time-critical information services. These services include live webcasting, Web TV, distance learning, telemedicine, Web radio, and real-time video conferencing.

Information transmission techniques

The information transmission techniques include unicast, broadcast, and multicast.
Unicast
In unicast transmission, the information source must send a separate copy of information to each host that needs the information.
Figure 1 Unicast transmission
(Figure: the source sends a separate packet stream across the IP network to each receiver, Host B, Host D, and Host E; Host A and Host C are not receivers.)
In Figure 1, assume that Host B, Host D, and Host E need the information. A separate transmission channel must be established from the information source to each of these hosts.
In unicast transmission, the traffic transmitted over the network is proportional to the number of hosts that need the information. If a large number of hosts need the information, the information source must send a separate copy of the same information to each of these hosts. Sending many copies can place a tremendous pressure on the information source and the network bandwidth.
Unicast is not suitable for batch transmission of information.
Broadcast
In broadcast transmission, the information source sends information to all hosts on the subnet, even if some hosts do not need the information.
Figure 2 Broadcast transmission
(Figure: the source sends the information to every host on the subnet, whether or not the host needs it.)
In Figure 2, assume that only Host B, Host D, and Host E need the information. If the information is broadcast to the subnet, Host A and Host C also receive it. In addition to information security issues, broadcasting to hosts that do not need the information also causes traffic flooding on the same subnet.
Broadcast is disadvantageous in transmitting data to specific hosts. Moreover, broadcast transmission is a significant waste of network resources.
Multicast
Unicast and broadcast techniques cannot provide point-to-multipoint data transmissions with the minimum network consumption.
Multicast transmission can solve this problem. When some hosts on the network need multicast information, the information sender, or multicast source, sends only one copy of the information. Multicast distribution trees are built through multicast routing protocols, and the packets are replicated only on nodes where the trees branch.
Figure 3 Multicast transmission
The multicast source sends only one copy of the information to a multicast group. Host B, Host D, and Host E, which are information receivers, must join the multicast group. The routers on the network duplicate and forward the information based on the distribution of the group members. Finally, the information is correctly delivered to Host B, Host D, and Host E.
To summarize, multicast has the following advantages:
Advantages over unicast—Multicast traffic is replicated and distributed until it flows to the farthest-possible node from the source. The increase of receiver hosts will not remarkably increase the load of the source or the usage of network resources.
Advantages over broadcast—Multicast data is sent only to the receivers that need it. This saves network bandwidth and enhances network security. In addition, multicast data is not confined to the same subnet.

Multicast features

A multicast group is a multicast receiver set identified by an IP multicast address. Hosts must join a multicast group to become members of the multicast group before they receive the multicast data addressed to that multicast group. Typically, a multicast source does not need to join a multicast group.
An information sender is called a "multicast source." A multicast source can send data to multiple multicast groups at the same time. Multiple multicast sources can send data to the same multicast group at the same time.
The group memberships are dynamic. Hosts can join or leave multicast groups at any time.
Multicast groups are not subject to geographic restrictions.
Routers or Layer 3 switches that support Layer 3 multicast are called "multicast routers" or "Layer 3 multicast devices." In addition to providing the multicast routing function, a multicast router can also manage multicast group memberships on stub subnets with attached group members. A multicast router itself can be a multicast group member.
For a better understanding of the multicast concept, you can compare multicast transmission to the transmission of TV programs.
Table 1 Comparing TV program transmission and multicast transmission
TV program transmission | Multicast transmission
A TV station transmits a TV program through a channel. | A multicast source sends multicast data to a multicast group.
A user tunes the TV set to the channel. | A receiver joins the multicast group.
The user starts to watch the TV program transmitted by the TV station on the channel. | The receiver starts to receive the multicast data that the source is sending to the multicast group.
The user turns off the TV set or tunes to another channel. | The receiver leaves the multicast group or joins another group.

Common notations in multicast

The following notations are commonly used in multicast transmission:
(*, G)—Rendezvous point tree (RPT), or a multicast packet that any multicast source sends to multicast group G. The asterisk (*) represents any multicast source, and "G" represents a specific multicast group.
(S, G)—Shortest path tree (SPT), or a multicast packet that multicast source "S" sends to multicast group "G." "S" represents a specific multicast source, and "G" represents a specific multicast group.
For more information about the concepts RPT and SPT, see "Configuring PIM" and "Configuring IPv6 PIM."

Multicast benefits and applications

Multicast benefits
Enhanced efficiency—Reduces the processor load of information source servers and network
devices.
Optimal performance—Reduces redundant traffic.
Distributed application—Enables point-to-multipoint applications at the price of minimum network
resources.
Multicast applications
Multimedia and streaming applications, such as Web TV, Web radio, and real-time video/audio
conferencing
Communication for training and cooperative operations, such as distance learning and
telemedicine
Data warehouse and financial applications (stock quotes)
Any other point-to-multipoint application for data distribution

Multicast models

Based on how the receivers treat the multicast sources, the multicast models include any-source multicast (ASM), source-filtered multicast (SFM), and source-specific multicast (SSM).

ASM model

In the ASM model, any sender can send information to a multicast group as a multicast source. Receivers can join a multicast group identified by a group address and get multicast information addressed to that multicast group. In this model, receivers do not know the positions of the multicast sources in advance. However, they can join or leave the multicast group at any time.

SFM model

The SFM model is derived from the ASM model. To a sender, the two models appear to have the same multicast membership architecture.
The SFM model functionally extends the ASM model. The upper-layer software checks the source address of received multicast packets and permits or denies multicast traffic from specific sources. Therefore, receivers can receive the multicast data from only some of the multicast sources. To a receiver, not all multicast sources are valid; the sources are filtered.

SSM model

Users might be interested in the multicast data from only certain multicast sources. The SSM model provides a transmission service that enables users to specify at the client side the multicast sources in which they are interested.
In the SSM model, receivers have already determined the locations of the multicast sources. This is the main difference between the SSM model and the ASM model. In addition, the SSM model uses a different multicast address range than the ASM/SFM model. Dedicated multicast forwarding paths are established between receivers and the specified multicast sources.

Multicast architecture

IP multicast addresses the following issues:
Where should the multicast source transmit information to? (Multicast addressing.)
What receivers exist on the network? (Host registration.)
Where is the multicast source that will provide data to the receivers? (Multicast source discovery.)
How should information be transmitted to the receivers? (Multicast routing.)
IP multicast is an end-to-end service. The multicast architecture involves the following parts:
Addressing mechanism—A multicast source sends information to a group of receivers through a multicast address.
Host registration—Receiver hosts can join and leave multicast groups dynamically. This mechanism is the basis for management of group memberships.
Multicast routing—A multicast distribution tree (a forwarding path tree for multicast data on the network) is constructed for delivering multicast data from a multicast source to receivers.
Multicast applications—A software system that supports multicast applications, such as video conferencing, must be installed on multicast sources and receiver hosts. The TCP/IP stack must support reception and transmission of multicast data.

Multicast addresses

IP multicast addresses
IPv4 multicast addresses:
IANA assigned the Class D address block (224.0.0.0 to 239.255.255.255) to IPv4 multicast.
Table 2 Class D IP address blocks and description
Address block | Description
224.0.0.0 to 224.0.0.255 | Reserved permanent group addresses. The IP address 224.0.0.0 is reserved. Other IP addresses can be used by routing protocols and for topology searching, protocol maintenance, and so on. Table 3 lists common permanent group addresses. A packet destined for an address in this block will not be forwarded beyond the local subnet regardless of the TTL value in the IP header.
224.0.1.0 to 238.255.255.255 | Globally scoped group addresses. This block includes the following types of designated group addresses: 232.0.0.0/8—SSM group addresses; 233.0.0.0/8—Glop group addresses.
239.0.0.0 to 239.255.255.255 | Administratively scoped multicast addresses. These addresses are considered locally unique rather than globally unique. You can reuse them in domains administered by different organizations without causing conflicts. For more information, see RFC 2365.
NOTE:
"Glop" is a mechanism for assigning multicast addresses between different ASs. By filling an AS number into the middle two bytes of 233.0.0.0, you get 255 multicast addresses for that AS. For more information, see RFC 2770.
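The glop calculation can be illustrated with a short sketch. This is an illustrative example, not part of the guide; the AS numbers used below are arbitrary:

```python
def glop_block(as_number):
    """Return the glop multicast /24 for a 16-bit AS number.

    The AS number is placed into the middle two bytes of 233.0.0.0,
    which yields one /24 of multicast addresses for that AS.
    """
    if not 0 <= as_number <= 0xFFFF:
        raise ValueError("glop addressing covers 16-bit AS numbers only")
    high, low = as_number >> 8, as_number & 0xFF
    return "233.%d.%d.0/24" % (high, low)

# AS 64500 is 0xFBF4, so its glop block is 233.251.244.0/24.
print(glop_block(64500))
```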
Table 3 Some reserved multicast addresses
Address Description
224.0.0.1 All systems on this subnet, including hosts and routers.
224.0.0.2 All multicast routers on this subnet.
224.0.0.3 Unassigned.
224.0.0.4 DVMRP routers.
224.0.0.5 OSPF routers.
224.0.0.6 OSPF designated routers and backup designated routers.
224.0.0.7 Shared Tree (ST) routers.
224.0.0.8 ST hosts.
224.0.0.9 RIPv2 routers.
224.0.0.11 Mobile agents.
224.0.0.12 DHCP server/relay agent.
224.0.0.13 All Protocol Independent Multicast (PIM) routers.
224.0.0.14 RSVP encapsulation.
224.0.0.15 All Core-Based Tree (CBT) routers.
224.0.0.16 Designated SBM.
224.0.0.17 All SBMs.
224.0.0.18 VRRP.
IPv6 multicast addresses:
Figure 4 IPv6 multicast format
The following describes the fields of an IPv6 multicast address:
{ 0xFF—If the most significant eight bits are 11111111, this address is an IPv6 multicast address.
{ Flags—The Flags field contains four bits.
Figure 5 Flags field format
Table 4 Flags field description
Bit | Description
0 | Reserved, set to 0.
R | When set to 0, this address is an IPv6 multicast address without an embedded RP address. When set to 1, this address is an IPv6 multicast address with an embedded RP address. (The P and T bits must also be set to 1.)
P | When set to 0, this address is an IPv6 multicast address not based on a unicast prefix. When set to 1, this address is an IPv6 multicast address based on a unicast prefix. (The T bit must also be set to 1.)
T | When set to 0, this address is an IPv6 multicast address permanently assigned by IANA. When set to 1, this address is a transient, or dynamically assigned, IPv6 multicast address.
{ Scope—The Scope field contains four bits, which represent the scope of the IPv6 internetwork for which the multicast traffic is intended.
Table 5 Values of the Scope field
Value Meaning
0, F Reserved.
1 Interface-local scope.
2 Link-local scope.
3 Subnet-local scope.
4 Admin-local scope.
5 Site-local scope.
6, 7, 9 through D Unassigned.
8 Organization-local scope.
E Global scope.
{ Group ID—The Group ID field contains 112 bits. It uniquely identifies an IPv6 multicast group in the scope that the Scope field defines.
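The field layout described above can be decoded with a short sketch. This is an illustrative example, not part of the guide; the dictionary keys are invented for clarity:

```python
import ipaddress

def parse_ipv6_multicast(addr):
    """Split an IPv6 multicast address into Flags, Scope, and Group ID."""
    a = int(ipaddress.IPv6Address(addr))
    if a >> 120 != 0xFF:
        raise ValueError("not an IPv6 multicast address")
    flags = (a >> 116) & 0xF          # 4-bit Flags field (0, R, P, T)
    scope = (a >> 112) & 0xF          # 4-bit Scope field
    group_id = a & ((1 << 112) - 1)   # 112-bit Group ID
    return {
        "embedded_rp": bool(flags & 0x4),           # R bit
        "unicast_prefix_based": bool(flags & 0x2),  # P bit
        "transient": bool(flags & 0x1),             # T bit
        "scope": scope,
        "group_id": group_id,
    }

# FF02::1 (all nodes): permanently assigned, link-local scope (2).
print(parse_ipv6_multicast("FF02::1"))
```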
Ethernet multicast MAC addresses
IPv4 multicast MAC addresses:
As defined by IANA, the most significant 24 bits of an IPv4 multicast MAC address are 0x01005E. Bit 25 is 0, and the other 23 bits are the least significant 23 bits of a multicast IPv4 address.
Figure 6 IPv4-to-MAC address mapping
The most significant four bits of a multicast IPv4 address are 1110. Only 23 bits of the remaining 28 bits are mapped to a MAC address, so five bits of the multicast IPv4 address are lost. As a result, 32 multicast IPv4 addresses map to the same IPv4 multicast MAC address. Therefore, a device might receive some unwanted multicast data at Layer 2 processing, which needs to be filtered by the upper layer.
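The mapping and the resulting 32-to-1 address overlap can be demonstrated with a short sketch (an illustrative example, not part of the guide):

```python
def ipv4_to_multicast_mac(ip):
    """Map a multicast IPv4 address to its Ethernet MAC address.

    The MAC is 0x01005E, a 0 bit, then the least significant 23 bits
    of the IP address; the top 5 bits after the fixed 1110 are lost.
    """
    octets = [int(o) for o in ip.split(".")]
    if not 224 <= octets[0] <= 239:
        raise ValueError("not a multicast IPv4 address")
    low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
    return "0100-5e%02x-%04x" % (low23 >> 16, low23 & 0xFFFF)

# Because 5 bits are lost, 32 IPv4 addresses share one MAC address:
print(ipv4_to_multicast_mac("224.1.1.1"))    # 0100-5e01-0101
print(ipv4_to_multicast_mac("225.129.1.1"))  # 0100-5e01-0101 (same MAC)
```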
IPv6 multicast MAC addresses:
As defined by IANA, the most significant 16 bits of an IPv6 multicast MAC address are 0x3333 as its address prefix. The least significant 32 bits are mapped from the least significant 32 bits of a multicast IPv6 address. The problem of duplicate IPv6-to-MAC address mapping also arises like IPv4-to-MAC address mapping.
Figure 7 IPv6-to-MAC address mapping
IMPORTANT:
Because of the duplicate mapping from multicast IP address to multicast MAC address, the device might inadvertently send multicast protocol packets as multicast data in Layer 2 forwarding. To avoid this, do not use the IP multicast addresses that are mapped to multicast MAC addresses 0100-5E00-00xx and 3333-0000-00xx (where "x" represents any hexadecimal number from 0 to F).
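The IPv6-to-MAC mapping is even simpler to sketch, because the prefix is fixed and the low 32 bits are copied unchanged (an illustrative example, not part of the guide):

```python
import ipaddress

def ipv6_to_multicast_mac(addr):
    """Map a multicast IPv6 address to its Ethernet MAC address:
    the fixed 0x3333 prefix followed by the least significant 32 bits
    of the IPv6 address. Addresses that differ only in the upper bits
    therefore collide on the same MAC address."""
    a = int(ipaddress.IPv6Address(addr))
    low32 = a & 0xFFFFFFFF
    return "3333-%04x-%04x" % (low32 >> 16, low32 & 0xFFFF)

print(ipv6_to_multicast_mac("FF02::1"))  # 3333-0000-0001
```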

Multicast protocols

Multicast protocols include the following categories:
Layer 3 and Layer 2 multicast protocols:
{ Layer 3 multicast refers to IP multicast working at the network layer.
Layer 3 multicast protocols—IGMP, MLD, PIM, IPv6 PIM, MSDP, MBGP, and IPv6 MBGP.
{ Layer 2 multicast refers to IP multicast working at the data link layer.
Layer 2 multicast protocols—IGMP snooping and MLD snooping.
IPv4 and IPv6 multicast protocols:
{ For IPv4 networks—IGMP snooping, IGMP, PIM, MSDP, and MBGP.
{ For IPv6 networks—MLD snooping, MLD, IPv6 PIM, and IPv6 MBGP.
This section provides only general descriptions about applications and functions of the Layer 2 and Layer 3 multicast protocols in a network. For more information about these protocols, see the related chapters.
Layer 3 multicast protocols
Layer 3 multicast protocols include multicast group management protocols and multicast routing protocols.
Figure 8 Positions of Layer 3 multicast protocols
Multicast group management protocols:
IGMP and MLD are multicast group management protocols. Typically, they run between hosts and Layer 3 multicast devices that directly connect to the hosts to establish and maintain multicast group memberships.
Multicast routing protocols:
A multicast routing protocol runs on Layer 3 multicast devices to establish and maintain multicast routes and correctly and efficiently forward multicast packets. Multicast routes constitute loop-free data transmission paths (also known as multicast distribution trees) from a data source to multiple receivers.
In the ASM model, multicast routes include intra-domain routes and inter-domain routes.
{ An intra-domain multicast routing protocol discovers multicast sources and builds multicast distribution trees within an AS to deliver multicast data to receivers. Among a variety of mature intra-domain multicast routing protocols, PIM is the most widely used. Based on the forwarding mechanism, PIM has dense mode (often referred to as "PIM-DM") and sparse mode (often referred to as "PIM-SM").
{ An inter-domain multicast routing protocol is used for delivering multicast information between two ASs. So far, mature solutions include Multicast Source Discovery Protocol (MSDP) and MBGP. MSDP propagates multicast source information among different ASs. MBGP is an extension of MP-BGP for exchanging multicast routing information among different ASs.
For the SSM model, multicast routes are not divided into intra-domain routes and inter-domain routes. Because receivers know the position of the multicast source, channels established through PIM-SM are sufficient for the transport of multicast information.
Layer 2 multicast protocols
Layer 2 multicast protocols include IGMP snooping and MLD snooping.
IGMP snooping and MLD snooping are multicast constraining mechanisms that run on Layer 2 devices. They manage and control multicast groups by monitoring and analyzing IGMP or MLD messages exchanged between the hosts and Layer 3 multicast devices. This effectively controls the flooding of multicast data in Layer 2 networks.

Multicast packet forwarding mechanism

In a multicast model, receiver hosts of a multicast group are usually located at different areas on the network. They are identified by the same multicast group address. To deliver multicast packets to these receivers, a multicast source encapsulates the multicast data in an IP packet with the multicast group address as the destination address. Multicast routers on the forwarding paths forward multicast packets that an incoming interface receives through multiple outgoing interfaces. Compared to a unicast model, a multicast model is more complex in the following aspects:
To ensure multicast packet transmission in the network, different routing tables are used to guide multicast forwarding. These routing tables include unicast routing tables, routing tables for multicast (for example, the MBGP routing table), and static multicast routing tables.
To process the same multicast information from different peers received on different interfaces of the same device, the multicast device performs an RPF check on each multicast packet. The RPF check result determines whether the packet will be forwarded or discarded. The RPF check mechanism is the basis for most multicast routing protocols to implement multicast forwarding.
For more information about the RPF mechanism, see "Configuring multicast routing and forwarding" and "Configuring IPv6 multicast routing and forwarding."
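The RPF check can be sketched as follows. This is a simplified illustration of the principle, not the device implementation; the route table contents and interface names below are invented for the example:

```python
import ipaddress

# Invented unicast routing table: destination prefix -> outgoing interface.
UNICAST_ROUTES = {
    "10.1.0.0/16": "GigabitEthernet1/0/1",
    "10.2.0.0/16": "GigabitEthernet1/0/2",
}

def rpf_check(source_ip, incoming_interface):
    """A multicast packet passes the RPF check only if it arrived on the
    interface that the unicast routing table would use to reach its source."""
    src = ipaddress.ip_address(source_ip)
    best = None
    for prefix, iface in UNICAST_ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)  # longest-prefix match toward the source
    return best is not None and best[1] == incoming_interface

print(rpf_check("10.1.2.3", "GigabitEthernet1/0/1"))  # True: forwarded
print(rpf_check("10.1.2.3", "GigabitEthernet1/0/2"))  # False: discarded
```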

Multicast support for VPNs

Multicast support for VPNs refers to multicast applied in VPNs.

Introduction to VPN instances

VPNs must be isolated from one another and from the public network. As shown in Figure 9, VPN A and VPN B separately access the public network through PE devices.
Figure 9 VPN networking diagram
The P device belongs to the public network. The CE devices belong to their respective VPNs. Each CE device serves its own VPN and maintains only one set of forwarding mechanisms.
The PE devices connect to the public network and the VPNs. Each PE device must strictly distinguish the information for different networks, and maintain a separate forwarding mechanism for each network. On a PE device, a set of software and hardware that serve the same network forms an instance. Multiple instances can exist on the same PE device, and an instance can reside on different PE devices. On a PE device, the instance for the public network is called the public network instance, and those for VPNs are called VPN instances.

Multicast application in VPNs

A PE device that supports multicast for VPNs does the following operations:
Maintains an independent set of multicast forwarding mechanisms for each VPN, including the multicast protocols, PIM neighbor information, and multicast routing table. In a VPN, the device forwards multicast data based on the forwarding table or routing table for that VPN.
Implements the isolation between different VPNs.
Implements information exchange and data conversion between the public network and VPN instances.
For example, as shown in Figure 9, a multicast source in VPN A sends multicast data to a multicast group. Only receivers that belong to both the multicast group and VPN A can receive the multicast data. The multicast data is multicast both in VPN A and on the public network.

Configuring IGMP snooping

In this chapter, "MSR2000" refers to MSR2003. "MSR3000" collectively refers to MSR3012, MSR3024, MSR3044, MSR3064. "MSR4000" collectively refers to MSR4060 and MSR4080.

Overview

IGMP snooping runs on a Layer 2 switch as a multicast constraining mechanism to improve multicast forwarding efficiency. It creates Layer 2 multicast forwarding entries from IGMP packets that are exchanged between the hosts and the router.
As shown in Figure 10, when IGMP snooping is not enabled, the Layer 2 switch floods multicast packets to all hosts. When IGMP snooping is enabled, the Layer 2 switch forwards multicast packets of known multicast groups to only the receivers.
Figure 10 Multicast packet transmission without and with IGMP snooping
The switch in this document refers to an MSR router installed with Layer 2 Ethernet interface modules.

Basic IGMP snooping concepts

IGMP snooping related ports
As shown in Figure 11, IGMP snooping runs on Switch A and Switch B, and Host A and Host C are receivers in a multicast group.
Figure 11 IGMP snooping related ports
The following describes the ports involved in IGMP snooping:
Router port—Layer 3 multicast device-side port. Layer 3 multicast devices include DRs and IGMP queriers. In Figure 11, GigabitEthernet 1/0/1 of Switch A and GigabitEthernet 1/0/1 of Switch B are the router ports. A Layer 2 device records all its router ports in a router port list.
Do not confuse the "router port" in IGMP snooping with the "routed interface" commonly known as the "Layer 3 interface." The router port in IGMP snooping is a Layer 2 interface.
Member port—Multicast receiver-side port. In Figure 11, GigabitEthernet 1/0/2 and GigabitEthernet 1/0/3 of Switch A and GigabitEthernet 1/0/2 of Switch B are the member ports. A Layer 2 device records all its member ports in the IGMP snooping forwarding table.
Unless otherwise specified, router ports and member ports in this document include both static and dynamic router ports and member ports.
NOTE:
When IGMP snooping is enabled, all ports that receive PIM hello messages or IGMP general queries with the source addresses other than 0.0.0.0 are considered dynamic router ports. For more information about PIM hello messages, see "Configuring PIM."
Aging timers for dynamic ports in IGMP snooping
Timer | Description | Expected message before the timer expires | Action after the timer expires
Dynamic router port aging timer | When a port receives an expected message, the Layer 2 device starts or resets an aging timer for the port. When the timer expires, the dynamic router port ages out. | IGMP general query with the source address other than 0.0.0.0 or PIM hello message. | The Layer 2 device removes the port from its router port list.
Dynamic member port aging timer | When a port dynamically joins a multicast group, the Layer 2 device starts or resets an aging timer for the port. When the timer expires, the dynamic member port ages out. | IGMP membership report. | The Layer 2 device removes the port from the IGMP snooping forwarding table.
NOTE:
In IGMP snooping, only dynamic ports age out. Static ports never age out.

How IGMP snooping works

The ports in this section are dynamic ports. For information about how to configure and remove static ports, see "Configuring static ports."
IGMP message types include general query, IGMP report, and leave message. An IGMP snooping-enabled Layer 2 device performs differently depending on the message types.
General query
To check for the existence of multicast group members, the IGMP querier periodically sends IGMP general queries to all hosts and routers on the local subnet. All these hosts and routers are identified by the address 224.0.0.1.
After receiving an IGMP general query, the Layer 2 device forwards the query to all ports in the VLAN except the port that received the query. The Layer 2 device also performs one of the following actions:
If the receiving port is a dynamic router port in the router port list, the Layer 2 device restarts the aging timer for the port.
If the receiving port does not exist in the router port list, the Layer 2 device adds the port to the router port list. It also starts an aging timer for the port.
IGMP report
A host sends an IGMP report to the IGMP querier for the following purposes:
Responds to queries if the host is a multicast group member.
Applies for a multicast group membership.
After receiving an IGMP report from the host, the Layer 2 device forwards it through all the router ports in the VLAN. It also resolves the address of the reported multicast group, and looks up the forwarding table for a matching entry as follows:
If no match is found, the Layer 2 device creates a forwarding entry for the group with the receiving port as an outgoing interface. It also marks the receiving port as a dynamic member port and starts an aging timer for the port.
If a match is found but the matching forwarding entry does not contain the receiving port, the Layer 2 device adds the receiving port to the outgoing interface list. It also marks the receiving port as a dynamic member port and starts an aging timer for the port.
If a match is found and the matching forwarding entry contains the receiving port, the Layer 2 device restarts the aging timer for the port.
In an application with a group filter configured on an IGMP snooping-enabled Layer 2 device, when a user requests a multicast program, the user's host initiates an IGMP report. After receiving this report, the Layer 2 device resolves the multicast group address in the report and performs ACL filtering on the report. If the report passes ACL filtering, the Layer 2 device creates an IGMP snooping forwarding entry for the multicast group with the receiving port as an outgoing interface. Otherwise, the Layer 2 device drops this report, in which case the multicast data for the multicast group is not sent to this port, and the user cannot retrieve the program.
A Layer 2 device does not forward an IGMP report through a non-router port because of the IGMP report suppression mechanism. For more information about the IGMP report suppression mechanism, see "Configuring IGMP."
Leave message
An IGMPv1 host does not send any leave messages when it leaves a multicast group. The Layer 2 device cannot immediately update the status of the port that connects to the receiver host. The Layer 2 device does not remove the port from the outgoing interface list in the associated forwarding entry until the aging time for the group expires. For a static member port, this mechanism does not take effect.
An IGMPv2 or IGMPv3 host sends an IGMP leave message to the multicast router when it leaves a multicast group.
When the Layer 2 device receives an IGMP leave message on a dynamic member port, the Layer 2 device first examines whether a forwarding entry matches the group address in the message.
If no match is found, the Layer 2 device directly discards the IGMP leave message.
If a match is found but the matching forwarding entry does not contain the port, the Layer 2 device immediately discards the IGMP leave message.
If a match is found and the matching forwarding entry contains the port, the Layer 2 device forwards the leave message to all router ports in the VLAN. The Layer 2 device does not immediately remove the port from the forwarding entry. Instead, it adjusts the aging timer for the port.
After receiving the IGMP leave message, the IGMP querier resolves the multicast group address in the message. Then, it sends an IGMP group-specific query to the multicast group through the port that received the leave message.
After receiving the IGMP group-specific query, the Layer 2 device forwards it through all its router ports in the VLAN and all member ports of the multicast group. Then, it waits for the responding IGMP reports from the directly connected hosts to check for the existence of members for the multicast group. For the port that receives the leave message (assuming that it is a dynamic member port), the Layer 2 device also performs one of the following actions:
If the port receives an IGMP report before the aging timer expires, the Layer 2 device resets the aging timer.
If the port does not receive any IGMP reports when the aging timer expires, the Layer 2 device removes the port from the forwarding entry for the multicast group.
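The report and leave handling described in this section can be sketched as a simplified model of the IGMP snooping forwarding table. This is an illustrative sketch only, not the device implementation; the data structures are invented, and aging timers are reduced to an explicit timeout event:

```python
# Forwarding table model: group address -> set of dynamic member ports.
forwarding_table = {}

def on_report(group, port):
    """IGMP report received on a port: create the entry or add the port.
    On the real device this also (re)starts the port's aging timer."""
    forwarding_table.setdefault(group, set()).add(port)

def on_leave_timeout(group, port):
    """A leave was received on the port and no report arrived before the
    adjusted aging timer expired: remove the port from the entry."""
    ports = forwarding_table.get(group)
    if ports is not None and port in ports:
        ports.discard(port)
        if not ports:  # last member gone: drop the whole entry
            del forwarding_table[group]

on_report("224.1.1.1", "GigabitEthernet1/0/2")
on_report("224.1.1.1", "GigabitEthernet1/0/3")
on_leave_timeout("224.1.1.1", "GigabitEthernet1/0/3")
print(forwarding_table)
```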

Protocols and standards

RFC 4541, Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches

Feature and hardware compatibility

This feature is supported on the following hardware:
MSR routers installed with the Layer 2 switching module SIC 4GSW, SIC 4GSWP, DSIC 9FSW, DSIC 9FSWP, HMIM 24GSW, HMIM 24GSW-POE, or HMIM 8GSW.

IGMP snooping configuration task list

Task at a glance
Configuring basic IGMP snooping functions:
(Required.) Enabling IGMP snooping
(Optional.) Specifying the IGMP snooping version
(Optional.) Setting the maximum number of IGMP snooping forwarding entries
Configuring IGMP snooping port functions:
(Optional.) Setting aging timers for dynamic ports
(Optional.) Configuring static ports
(Optional.) Configuring a port as a simulated member host
(Optional.) Enabling IGMP snooping fast-leave processing
(Optional.) Disabling a port from becoming a dynamic router port
Configuring an IGMP snooping querier:
(Optional.) Enabling IGMP snooping querier
(Optional.) Configuring parameters for IGMP queries and responses
Configuring parameters for IGMP messages:
(Optional.) Configuring source IP addresses for IGMP messages
(Optional.) Setting the 802.1p precedence for IGMP messages
Configuring IGMP snooping policies:
(Optional.) Configuring a multicast group filter
(Optional.) Configuring multicast source port filtering
(Optional.) Enabling dropping unknown multicast data
(Optional.) Enabling IGMP report suppression
(Optional.) Setting the maximum number of multicast groups on a port
(Optional.) Enabling the multicast group replacement function

Configuring basic IGMP snooping functions

Before you configure basic IGMP snooping functions, complete the following tasks:
Configure the corresponding VLANs.
Determine the IGMP snooping version.
Determine the maximum number of IGMP snooping forwarding entries.

Enabling IGMP snooping

When you enable IGMP snooping, follow these guidelines:
You must enable IGMP snooping globally before you enable it for a VLAN.
IGMP snooping for a VLAN works only on the member ports in that VLAN.
You can enable IGMP snooping for a VLAN in IGMP-snooping view or in VLAN view. These configurations have the same priority level.
To enable IGMP snooping for a VLAN in IGMP-snooping view:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enable IGMP snooping globally and enter IGMP-snooping view. | igmp-snooping | By default, IGMP snooping is disabled.
3. Enable IGMP snooping for specified VLANs. | enable vlan vlan-list | By default, IGMP snooping is disabled for a VLAN.
To enable IGMP snooping for a VLAN in VLAN view:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enable IGMP snooping globally and enter IGMP-snooping view. | igmp-snooping | By default, IGMP snooping is disabled.
3. Return to system view. | quit | N/A
4. Enter VLAN view. | vlan vlan-id | N/A
5. Enable IGMP snooping for the VLAN. | igmp-snooping enable | By default, IGMP snooping is disabled for a VLAN.

Specifying the IGMP snooping version

Different versions of IGMP snooping process different versions of IGMP messages.
IGMPv2 snooping processes IGMPv1 and IGMPv2 messages, but it floods IGMPv3 messages in the VLAN instead of processing them.
IGMPv3 snooping processes IGMPv1, IGMPv2, and IGMPv3 messages.
If you change IGMPv3 snooping to IGMPv2 snooping, the device does the following:
Clears all IGMP snooping forwarding entries that are dynamically added.
Keeps static IGMPv3 snooping forwarding entries (*, G).
Clears static IGMPv3 snooping forwarding entries (S, G), which will be restored when IGMP snooping is switched back to IGMPv3 snooping.
For more information about static IGMP snooping forwarding entries, see "Configuring static ports."
You can specify the IGMP snooping version for a VLAN in IGMP-snooping view or in VLAN view. These configurations have the same priority level.
To specify the IGMP snooping version for a VLAN in IGMP-snooping view:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enable IGMP snooping globally and enter IGMP-snooping view. | igmp-snooping | N/A
3. Specify the IGMP snooping version for the specified VLANs. | version version-number vlan vlan-list | The default setting is IGMPv2 snooping.
To specify the IGMP snooping version for a VLAN in VLAN view:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enter VLAN view. | vlan vlan-id | N/A
3. Specify the version of IGMP snooping. | igmp-snooping version version-number | The default setting is IGMPv2 snooping.

Setting the maximum number of IGMP snooping forwarding entries

You can modify the maximum number of IGMP snooping forwarding entries. When the number of forwarding entries on the device reaches the upper limit, the device does not automatically remove any existing entries, and new entries cannot be created. To avoid this situation, HP recommends that you manually remove some entries.
To set the maximum number of IGMP snooping forwarding entries:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enter IGMP-snooping view. | igmp-snooping | N/A
3. Set the maximum number of IGMP snooping forwarding entries. | entry-limit limit | The default setting is 4294967295.

Configuring IGMP snooping port functions

Before you configure IGMP snooping port functions, complete the following tasks:
Enable IGMP snooping for the VLAN.
Determine the aging timer for dynamic router ports.
Determine the aging timer for dynamic member ports.
Determine the addresses of the multicast group and multicast source.

Setting aging timers for dynamic ports

When you set aging timers for dynamic ports, follow these guidelines:
If the memberships of multicast groups frequently change, you can set a relatively small value for the aging timer of the dynamic member ports. If the memberships of multicast groups rarely change, you can set a relatively large value.
If a dynamic router port receives a PIMv2 hello message, the aging timer for the port is the timer specified in the hello message. In this case, the router-aging-time command or the igmp-snooping router-aging-time command does not take effect on the port.
You can set the aging timers for dynamic ports either for the current VLAN in VLAN view or globally for all VLANs in IGMP-snooping view. If the configurations are made in both VLAN view and IGMP-snooping view, the configuration made in VLAN view takes priority.
Setting the aging timers for dynamic ports globally
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Set the aging timer for dynamic router ports globally.
   Command: router-aging-time interval
   Remarks: The default setting is 260 seconds.
4. Set the aging timer for dynamic member ports globally.
   Command: host-aging-time interval
   Remarks: The default setting is 260 seconds.
Setting the aging timers for the dynamic ports in a VLAN
1. Enter system view.
   Command: system-view
2. Enter VLAN view.
   Command: vlan vlan-id
3. Set the aging timer for the dynamic router ports in the VLAN.
   Command: igmp-snooping router-aging-time interval
   Remarks: The default setting is 260 seconds.
4. Set the aging timer for the dynamic member ports in the VLAN.
   Command: igmp-snooping host-aging-time interval
   Remarks: The default setting is 260 seconds.
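For example, the following commands (a sketch; the device name and timer values are illustrative) set the global aging timers to 500 seconds for dynamic router ports and 300 seconds for dynamic member ports:
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] router-aging-time 500
[Sysname-igmp-snooping] host-aging-time 300
[Sysname-igmp-snooping] quit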

Configuring static ports

You can configure a port as a static port for the following purposes:
To make all the hosts attached to the port receive multicast data addressed to a multicast group,
configure the port as a static member port for the multicast group.
To make the Layer 2 device attached to the port forward all received multicast traffic on the port,
configure the port as a static router port.
When you configure static ports, follow these guidelines:
A static member port does not respond to queries from the IGMP querier. When you configure a
port as a static member port or cancel this configuration on the port, the port does not send unsolicited IGMP reports or IGMP leave messages.
Static member ports and static router ports never age out. To remove such a port, use the undo
igmp-snooping static-group or undo igmp-snooping static-router-port command.
To configure static ports:
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Configure the port as a static member port.
   Command: igmp-snooping static-group group-address [ source-ip source-address ] vlan vlan-id
   Remarks: By default, a port is not a static member port.
4. Configure the port as a static router port.
   Command: igmp-snooping static-router-port vlan vlan-id
   Remarks: By default, a port is not a static router port.

Configuring a port as a simulated member host

Generally, a host that runs IGMP can respond to IGMP queries. If a host fails to respond, the multicast router might consider that no member of this multicast group exists on the subnet, and removes the corresponding forwarding path.
To avoid this situation, you can configure the port as a simulated member host for a multicast group. When the simulated member host receives an IGMP query, it replies with an IGMP report. Therefore, the Layer 2 device can continue receiving multicast data.
When a port is configured as a simulated member host, the Layer 2 device is equivalent to an independent host in the following ways:
It sends an unsolicited IGMP report through the port and responds to IGMP general queries with
IGMP reports through the port.
It sends an IGMP leave message through the port when you remove the simulated joining
configuration.
To configure a port as a simulated member host:
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Configure the port as a simulated member host.
   Command: igmp-snooping host-join group-address [ source-ip source-address ] vlan vlan-id
   Remarks: By default, the port is not a simulated member host.
NOTE:
Unlike a static member port, a port configured as a simulated member host ages out like a dynamic member port.

Enabling IGMP snooping fast-leave processing

The IGMP snooping fast-leave processing feature enables the Layer 2 device to process IGMP leave messages quickly. With this feature, the Layer 2 device immediately removes the port that received an IGMP leave message from the forwarding entry for the multicast group specified in the message. When the Layer 2 device then receives IGMP group-specific queries for that multicast group, it does not forward them to that port.
When you enable the IGMP snooping fast-leave processing feature, follow these guidelines:
In a VLAN, you can enable IGMP snooping fast-leave processing on ports that have only one
receiver host attached. If a port has multiple hosts attached, do not enable IGMP snooping fast-leave processing on this port. Otherwise, other receiver hosts attached to this port in the same multicast group cannot receive the multicast data destined to this group.
You can enable IGMP snooping fast-leave processing either for the current port in interface view or
globally for all ports in IGMP-snooping view. If the configurations are made both in interface view and IGMP-snooping view, the configuration made in interface view takes priority.
Enabling IGMP snooping fast-leave processing globally
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Enable IGMP snooping fast-leave processing globally.
   Command: fast-leave [ vlan vlan-list ]
   Remarks: By default, fast-leave processing is disabled.
Enabling IGMP snooping fast-leave processing on a port
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Enable IGMP snooping fast-leave processing on the port.
   Command: igmp-snooping fast-leave [ vlan vlan-list ]
   Remarks: By default, fast-leave processing is disabled.
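For example, the following commands (a sketch; the device name, interface, and VLAN ID are illustrative) enable fast-leave processing for VLAN 100 on a port that has a single receiver host attached:
<Sysname> system-view
[Sysname] interface gigabitethernet 2/1/1
[Sysname-GigabitEthernet2/1/1] igmp-snooping fast-leave vlan 100
[Sysname-GigabitEthernet2/1/1] quit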

Disabling a port from becoming a dynamic router port

The following problems might exist in a multicast access network:
After receiving an IGMP general query or a PIM hello message from a connected host, a router port
becomes a dynamic router port. Before its timer expires, this dynamic router port receives all multicast packets within the VLAN where the port belongs and forwards them to the host. It adversely affects multicast traffic reception of the host.
The IGMP general query or PIM hello message that the host sends affects the multicast routing
protocol state on Layer 3 devices, such as the IGMP querier or DR election. This might further cause network interruption.
To solve these problems and to improve network security and control over multicast users, you can disable the port from becoming a dynamic router port.
To disable a port from becoming a dynamic router port:
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Disable the port from becoming a dynamic router port.
   Command: igmp-snooping router-port-deny [ vlan vlan-list ]
   Remarks: By default, a port can become a dynamic router port. This configuration does not affect the static router port configuration.
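For example, the following commands (a sketch; the device name, interface, and VLAN ID are illustrative) prevent a host-facing port from becoming a dynamic router port in VLAN 100:
<Sysname> system-view
[Sysname] interface gigabitethernet 2/1/2
[Sysname-GigabitEthernet2/1/2] igmp-snooping router-port-deny vlan 100
[Sysname-GigabitEthernet2/1/2] quit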

Configuring an IGMP snooping querier

This section describes how to configure an IGMP snooping querier.

Configuration prerequisites

Before you configure an IGMP snooping querier, complete the following tasks:
Enable IGMP snooping for the VLAN.
Determine the interval for sending IGMP general queries.
Determine the IGMP last member query interval.
Determine the maximum response time for IGMP general queries.

Enabling IGMP snooping querier

A Layer 2 multicast device does not support IGMP, and by default it cannot send queries. To ensure multicast data forwarding at the data link layer on a network without Layer 3 multicast devices, you can configure the Layer 2 device as an IGMP snooping querier. In this way, the Layer 2 device sends IGMP queries, and establishes and maintains multicast forwarding entries.
Do not configure an IGMP snooping querier in a multicast network that runs IGMP. An IGMP snooping querier does not take part in IGMP querier elections. However, it might affect IGMP querier elections if it sends IGMP general queries with a low source IP address.
To enable the IGMP snooping querier:
1. Enter system view.
   Command: system-view
2. Enter VLAN view.
   Command: vlan vlan-id
3. Enable the IGMP snooping querier.
   Command: igmp-snooping querier
   Remarks: By default, no IGMP snooping querier is configured.
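For example, the following commands (a sketch; the device name and VLAN ID are illustrative) configure the Layer 2 device as the IGMP snooping querier for VLAN 100:
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping querier
[Sysname-vlan100] quit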

Configuring parameters for IGMP queries and responses

CAUTION:
Make sure the interval for sending IGMP general queries is larger than the maximum response time for IGMP general queries. Otherwise, multicast group members might be deleted by mistake.
You can modify the IGMP general query interval based on the actual condition of the network.
A multicast listening host starts a timer for each multicast group that it has joined when it receives an IGMP query (general query or group-specific query). This timer is initialized to a random value in the range of 0 to the maximum response time advertised in the IGMP query message. When the timer value decreases to 0, the host sends an IGMP report to the multicast group.
To speed up the response of hosts to IGMP queries and to avoid simultaneous timer expirations from causing IGMP report traffic bursts, you must correctly set the maximum response time.
The maximum response time for IGMP general queries is set by the max-response-time command.
The maximum response time for IGMP group-specific queries equals the IGMP last member query
interval.
You can configure parameters for IGMP queries and responses for the current VLAN in VLAN view or globally for all VLANs in IGMP-snooping view. If the configurations are made in both VLAN view and IGMP-snooping view, the configuration made in VLAN view takes priority.
Configuring the global parameters for IGMP queries and responses
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Set the maximum response time for IGMP general queries.
   Command: max-response-time interval
   Remarks: The default setting is 10 seconds.
4. Set the IGMP last member query interval.
   Command: last-member-query-interval interval
   Remarks: The default setting is 1 second.
Configuring the parameters for IGMP queries and responses in a VLAN
1. Enter system view.
   Command: system-view
2. Enter VLAN view.
   Command: vlan vlan-id
3. Set the interval for sending IGMP general queries.
   Command: igmp-snooping query-interval interval
   Remarks: The default setting is 125 seconds.
4. Set the maximum response time for IGMP general queries.
   Command: igmp-snooping max-response-time interval
   Remarks: The default setting is 10 seconds.
5. Set the IGMP last member query interval.
   Command: igmp-snooping last-member-query-interval interval
   Remarks: The default setting is 1 second.
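For example, the following commands (a sketch; the device name, VLAN ID, and timer values are illustrative) set a 60-second general query interval and a 5-second maximum response time for VLAN 100, keeping the query interval larger than the maximum response time as required:
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping query-interval 60
[Sysname-vlan100] igmp-snooping max-response-time 5
[Sysname-vlan100] quit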

Configuring parameters for IGMP messages

This section describes how to configure parameters for IGMP messages.

Configuration prerequisites

Before you configure parameters for IGMP messages, complete the following tasks:
Enable IGMP snooping for the VLAN.
Determine the source IP address of IGMP general queries.
Determine the source IP address of IGMP group-specific queries.
Determine the source IP address of IGMP reports.
Determine the source IP address of IGMP leave messages.
Determine the 802.1p precedence of IGMP messages.

Configuring source IP addresses for IGMP messages

After a Layer 2 device receives an IGMP query whose source IP address is 0.0.0.0 on a port, it does not enlist that port as a dynamic router port. This might prevent multicast forwarding entries from being correctly created at the data link layer and eventually cause multicast traffic forwarding failures.
To avoid this problem, when a Layer 2 device acts as the IGMP snooping querier, you can configure a non-all-zero IP address as the source IP address of IGMP queries. You can also change the source IP address of IGMP messages sent by a simulated member host or an IGMP snooping proxy.
Changing the source address of IGMP queries might affect the IGMP querier election within the subnet.
To configure source IP addresses for IGMP messages:
1. Enter system view.
   Command: system-view
2. Enter VLAN view.
   Command: vlan vlan-id
3. Configure the source IP address for IGMP general queries.
   Command: igmp-snooping general-query source-ip ip-address
   Remarks: The default setting is the IP address of the current VLAN interface. If the current VLAN interface has no IP address, the source IP address is 0.0.0.0.
4. Configure the source IP address for IGMP group-specific queries.
   Command: igmp-snooping special-query source-ip ip-address
   Remarks: By default, if the IGMP snooping querier has received IGMP queries, the source IP address of IGMP group-specific queries is the source IP address of those queries. Otherwise, it is the IP address of the current VLAN interface. If the VLAN interface has no IP address, the source IP address is 0.0.0.0.
5. Configure the source IP address for IGMP reports.
   Command: igmp-snooping report source-ip ip-address
   Remarks: The default setting is the IP address of the current VLAN interface. If the current VLAN interface has no IP address, the source IP address is 0.0.0.0.
6. Configure the source IP address for IGMP leave messages.
   Command: igmp-snooping leave source-ip ip-address
   Remarks: The default setting is the IP address of the current VLAN interface. If the current VLAN interface has no IP address, the source IP address is 0.0.0.0.
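For example, the following commands (a sketch; the device name, VLAN ID, and IP address are illustrative) set a non-all-zero source IP address for IGMP general queries in VLAN 100:
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping general-query source-ip 10.1.1.1
[Sysname-vlan100] quit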

Setting the 802.1p precedence for IGMP messages

When congestion occurs on outgoing ports of the Layer 2 device, it forwards IGMP messages in their
802.1p priority order, from highest to lowest. You can assign higher forwarding priority to IGMP messages by changing their 802.1p precedence.
You can configure the 802.1p precedence of IGMP messages for the current VLAN in VLAN view or globally for all VLANs in IGMP-snooping view. If the configurations are made in both VLAN view and IGMP-snooping view, the configuration made in VLAN view takes priority.
Setting the 802.1p precedence for IGMP messages globally
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Set the 802.1p precedence for IGMP messages.
   Command: dot1p-priority priority-number
   Remarks: The default setting is 0. The global configuration takes effect on all VLANs.
Setting the 802.1p precedence for IGMP messages in a VLAN
1. Enter system view.
   Command: system-view
2. Enter VLAN view.
   Command: vlan vlan-id
3. Set the 802.1p precedence for IGMP messages in the VLAN.
   Command: igmp-snooping dot1p-priority priority-number
   Remarks: The default setting is 0.
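For example, the following commands (a sketch; the device name, VLAN ID, and priority value are illustrative) raise the 802.1p precedence of IGMP messages in VLAN 100 to 3:
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping dot1p-priority 3
[Sysname-vlan100] quit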

Configuring IGMP snooping policies

Before you configure IGMP snooping policies, complete the following tasks:
Enable IGMP snooping for the VLAN.
Determine the ACL used as the multicast group filter.
Determine the maximum number of multicast groups that a port can join.

Configuring a multicast group filter

When you configure a multicast group filter, follow these guidelines:
This configuration is effective on the multicast groups that a port dynamically joins. If you configure
the port as a static member port for a multicast group, this configuration is not effective on the multicast group.
You can configure a multicast group filter either for the current port in interface view or globally for
all ports in IGMP-snooping view. If the configurations are made in both interface view and IGMP-snooping view, the configuration made in interface view takes priority.
Configuring a multicast group filter globally
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Configure a multicast group filter globally.
   Command: group-policy acl-number [ vlan vlan-list ]
   Remarks: By default, no multicast group filter is configured. A host can join any valid multicast group.
Configuring a multicast group filter on a port
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Configure a multicast group filter on the port.
   Command: igmp-snooping group-policy acl-number [ vlan vlan-list ]
   Remarks: By default, no multicast group filter is configured on a port. Hosts on the port can join any valid multicast group.

Configuring multicast source port filtering

This feature is supported on the following hardware:
MSR routers installed with the Layer 2 switching module HMIM 24GSW, HMIM 24GSW-POE, or HMIM 8GSW.
When the multicast source port filtering feature is enabled on a port, the port blocks all multicast data packets, but it permits multicast protocol packets to pass. You can connect the port to multicast receivers but not to multicast sources.
If this feature is disabled on the port, you can connect the port to either multicast sources or multicast receivers.
You can enable multicast source port filtering either for the specified ports in IGMP-snooping view or for the current port in interface view. These configurations have the same priority level.
Configuring multicast source port filtering globally
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Enable multicast source port filtering.
   Command: source-deny port interface-list
   Remarks: By default, multicast source port filtering is disabled.
Configuring multicast source port filtering on a port
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Enable multicast source port filtering.
   Command: igmp-snooping source-deny
   Remarks: By default, multicast source port filtering is disabled.
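For example, the following commands (a sketch; the device name and interface are illustrative) enable multicast source port filtering on a receiver-facing port:
<Sysname> system-view
[Sysname] interface gigabitethernet 2/1/3
[Sysname-GigabitEthernet2/1/3] igmp-snooping source-deny
[Sysname-GigabitEthernet2/1/3] quit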

Enabling dropping unknown multicast data

CAUTION:
For MSR routers installed with the Layer 2 switching module SIC 4GSW or SIC 4GSWP, unknown IPv6 multicast data will be dropped if you enable dropping unknown IPv4 multicast data on them.
This feature is supported on the following hardware:
MSR routers installed with the Layer 2 switching module SIC 4GSW, SIC 4GSWP, HMIM 24GSW, HMIM 24GSW-POE, or HMIM 8GSW.
Unknown multicast data refers to multicast data for which no forwarding entries exist in the IGMP snooping forwarding table. When the Layer 2 device receives such multicast data, one of the following occurs:
If the function of dropping unknown multicast data is disabled, the Layer 2 device floods unknown
multicast data in the VLAN to which the data belongs.
If the function of dropping unknown multicast data is enabled, the Layer 2 device drops all received
unknown multicast data.
To enable dropping unknown multicast data for a VLAN:
1. Enter system view.
   Command: system-view
2. Enter VLAN view.
   Command: vlan vlan-id
3. Enable dropping unknown multicast data for the VLAN.
   Command: igmp-snooping drop-unknown
   Remarks: By default, the dropping unknown multicast data feature is disabled, and unknown multicast data is flooded.

Enabling IGMP report suppression

This feature is supported on the following hardware:
MSR routers installed with the Layer 2 switching module HMIM 24GSW, HMIM 24GSW-POE, or HMIM 8GSW.
When a Layer 2 device receives an IGMP report from a multicast group member, the Layer 2 device forwards the IGMP report to the directly connected Layer 3 device. If multiple members of a multicast group are attached to the Layer 2 device, the Layer 3 device might receive duplicate reports for the same multicast group.
When the IGMP report suppression function is enabled, the Layer 2 device in each query interval forwards only the first IGMP report for the multicast group to the Layer 3 device. It does not forward the subsequent IGMP reports for the same multicast group, which reduces the number of packets being transmitted over the network.
To enable IGMP report suppression:
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Enable IGMP report suppression.
   Command: report-aggregation
   Remarks: By default, IGMP report suppression is enabled.
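Because IGMP report suppression is enabled by default, you typically run this command only to restore the feature after it has been disabled. A sketch (the device name is illustrative):
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] report-aggregation
[Sysname-igmp-snooping] quit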

Setting the maximum number of multicast groups on a port

You can set the maximum number of multicast groups on a port to regulate the port traffic.
When you set the maximum number of multicast groups on a port, follow these guidelines:
This configuration is effective on the multicast groups that a port dynamically joins. If you configure
the port as a static member port for a multicast group, this configuration is not effective on the multicast group.
If the number of multicast groups on a port exceeds the limit, the system removes all the forwarding entries related to that port from the IGMP snooping forwarding table. The receiver hosts attached to that port can then join multicast groups again until the number of multicast groups on the port reaches the limit.
To set the maximum number of multicast groups on a port:
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Set the maximum number of multicast groups on the port.
   Command: igmp-snooping group-limit limit [ vlan vlan-list ]
   Remarks: The default setting is 4294967295.
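For example, the following commands (a sketch; the device name, interface, and limit value are illustrative) allow a port to join at most 10 multicast groups:
<Sysname> system-view
[Sysname] interface gigabitethernet 2/1/4
[Sysname-GigabitEthernet2/1/4] igmp-snooping group-limit 10
[Sysname-GigabitEthernet2/1/4] quit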

Enabling the multicast group replacement function

When the number of multicast groups on a Layer 2 device or a port exceeds the limit:
If the multicast group replacement is enabled, the Layer 2 device or the port replaces an existing
multicast group with a newly joined multicast group.
If the multicast group replacement is disabled, the Layer 2 device or the port discards IGMP reports
that are used for joining a new multicast group.
In some specific applications, such as channel switching, a newly joined multicast group must automatically replace an existing multicast group. In this case, the function of multicast group replacement must also be enabled so a user can switch from the current multicast group to a new group.
When you enable the multicast group replacement function, follow these guidelines:
This configuration is effective on the multicast groups that a port dynamically joins. If you configure
the port as a static member port for a multicast group, this configuration is not effective on the multicast group.
You can enable the multicast group replacement function either for the current port in interface view
or globally for all ports in IGMP-snooping view. If the configurations are made in both interface view and IGMP-snooping view, the configuration made in interface view takes priority.
To enable the multicast group replacement function globally:
1. Enter system view.
   Command: system-view
2. Enter IGMP-snooping view.
   Command: igmp-snooping
3. Enable the multicast group replacement function globally.
   Command: overflow-replace [ vlan vlan-list ]
   Remarks: By default, the multicast group replacement function is disabled.
To enable the multicast group replacement function on a port:
1. Enter system view.
   Command: system-view
2. Enter Layer 2 Ethernet interface view.
   Command: interface interface-type interface-number
3. Enable the multicast group replacement function on the port.
   Command: igmp-snooping overflow-replace [ vlan vlan-list ]
   Remarks: By default, the multicast group replacement function is disabled.
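For example, the following commands (a sketch; the device name, interface, and limit value are illustrative) combine a group limit with group replacement on a port, so that a channel-switching user's newly joined group replaces the existing one:
<Sysname> system-view
[Sysname] interface gigabitethernet 2/1/4
[Sysname-GigabitEthernet2/1/4] igmp-snooping group-limit 1
[Sysname-GigabitEthernet2/1/4] igmp-snooping overflow-replace
[Sysname-GigabitEthernet2/1/4] quit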

Displaying and maintaining IGMP snooping

Execute display commands in any view and reset commands in user view.
Task: Display IGMP snooping status.
Command: display igmp-snooping [ global | vlan vlan-id ]

Task: Display information about dynamic IGMP snooping forwarding entries (MSR2000/MSR3000).
Command: display igmp-snooping group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ]

Task: Display information about dynamic IGMP snooping forwarding entries (MSR4000).
Command: display igmp-snooping group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ] [ slot slot-number ]

Task: Display information about dynamic router ports (MSR2000/MSR3000).
Command: display igmp-snooping router-port [ vlan vlan-id ]

Task: Display information about dynamic router ports (MSR4000).
Command: display igmp-snooping router-port [ vlan vlan-id ] [ slot slot-number ]

Task: Display information about static IGMP snooping forwarding entries (MSR2000/MSR3000).
Command: display igmp-snooping static-group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ]

Task: Display information about static IGMP snooping forwarding entries (MSR4000).
Command: display igmp-snooping static-group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ] [ slot slot-number ]

Task: Display information about static router ports (MSR2000/MSR3000).
Command: display igmp-snooping static-router-port [ vlan vlan-id ]

Task: Display information about static router ports (MSR4000).
Command: display igmp-snooping static-router-port [ vlan vlan-id ] [ slot slot-number ]

Task: Display statistics for the IGMP messages learned by IGMP snooping.
Command: display igmp-snooping statistics

Task: Display information about Layer 2 IP multicast groups (MSR2000/MSR3000).
Command: display l2-multicast ip [ group group-address | source source-address ] * [ vlan vlan-id ]

Task: Display information about Layer 2 IP multicast groups (MSR4000).
Command: display l2-multicast ip [ group group-address | source source-address ] * [ vlan vlan-id ] [ slot slot-number ]

Task: Display information about Layer 2 IP multicast group entries (MSR2000/MSR3000).
Command: display l2-multicast ip forwarding [ group group-address | source source-address ] * [ vlan vlan-id ]

Task: Display information about Layer 2 IP multicast group entries (MSR4000).
Command: display l2-multicast ip forwarding [ group group-address | source source-address ] * [ vlan vlan-id ] [ slot slot-number ]

Task: Display information about Layer 2 MAC multicast groups (MSR2000/MSR3000).
Command: display l2-multicast mac [ mac-address ] [ vlan vlan-id ]

Task: Display information about Layer 2 MAC multicast groups (MSR4000).
Command: display l2-multicast mac [ mac-address ] [ vlan vlan-id ] [ slot slot-number ]

Task: Display information about Layer 2 MAC multicast forwarding entries (MSR2000/MSR3000).
Command: display l2-multicast mac forwarding [ mac-address ] [ vlan vlan-id ]

Task: Display information about Layer 2 MAC multicast forwarding entries (MSR4000).
Command: display l2-multicast mac forwarding [ mac-address ] [ vlan vlan-id ] [ slot slot-number ]
Task: Remove the dynamic IGMP snooping forwarding entries for the specified multicast groups.
Command: reset igmp-snooping group { group-address [ source-address ] | all } [ vlan vlan-id ]

Task: Remove dynamic router ports.
Command: reset igmp-snooping router-port { all | vlan vlan-id }

Task: Clear statistics for the IGMP messages learned by IGMP snooping.
Command: reset igmp-snooping statistics

IGMP snooping configuration examples

Group policy and simulated joining configuration example

Network requirements
As shown in Figure 12, Router A runs IGMPv2 and serves as the IGMP querier. Switch A runs IGMPv2 snooping.
Configure a group policy and simulated joining to meet the following requirements:
Host A and Host B receive only the multicast data addressed to the multicast group 224.1.1.1.
Multicast data can be forwarded through GigabitEthernet 2/1/3 and GigabitEthernet 2/1/4 of
Switch A uninterruptedly, even though Host A and Host B fail to receive the multicast data.
Switch A will drop unknown multicast data instead of flooding it in VLAN 100.
Figure 12 Network diagram
Configuration procedure
1. Assign an IP address and subnet mask to each interface according to Figure 12. (Details not
shown.)
2. Configure Router A:
# Enable IP multicast routing.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1.
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] igmp enable
[RouterA-GigabitEthernet2/1/1] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
3. Configure Switch A:
# Enable IGMP snooping globally.
<SwitchA> system-view
[SwitchA] igmp-snooping
[SwitchA-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 through GigabitEthernet 2/1/4 to the VLAN, and enable IGMP snooping and dropping unknown multicast data for the VLAN.
[SwitchA] vlan 100
[SwitchA-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/4
[SwitchA-vlan100] igmp-snooping enable
[SwitchA-vlan100] igmp-snooping drop-unknown
[SwitchA-vlan100] quit
# Configure a multicast group filter so that the hosts in VLAN 100 can join only the multicast group
224.1.1.1.
[SwitchA] acl number 2001
[SwitchA-acl-basic-2001] rule permit source 224.1.1.1 0
[SwitchA-acl-basic-2001] quit
[SwitchA] igmp-snooping
[SwitchA-igmp-snooping] group-policy 2001 vlan 100
[SwitchA-igmp-snooping] quit
# Configure GigabitEthernet 2/1/3 and GigabitEthernet 2/1/4 as simulated member hosts of multicast group 224.1.1.1.
[SwitchA] interface gigabitethernet 2/1/3
[SwitchA-GigabitEthernet2/1/3] igmp-snooping host-join 224.1.1.1 vlan 100
[SwitchA-GigabitEthernet2/1/3] quit
[SwitchA] interface gigabitethernet 2/1/4
[SwitchA-GigabitEthernet2/1/4] igmp-snooping host-join 224.1.1.1 vlan 100
[SwitchA-GigabitEthernet2/1/4] quit
Verifying the configuration
# Send IGMP reports from Host A and Host B to join the multicast groups 224.1.1.1 and 224.2.2.2. (Details not shown.)
# Display information about the dynamic IGMP snooping forwarding entries in VLAN 100 on Switch A.
[SwitchA] display igmp-snooping group vlan 100
Total 1 entries.

VLAN 100: Total 1 entries.
  (0.0.0.0, 224.1.1.1)
    Host slots (0 in total):
    Host ports (2 in total):
      GE2/1/3                              (00:03:23)
      GE2/1/4                              (00:04:10)
The output shows the following information:
Host A and Host B have joined the multicast group 224.1.1.1 through the member ports GigabitEthernet 2/1/4 and GigabitEthernet 2/1/3 on Switch A, respectively.
Host A and Host B have failed to join the multicast group 224.2.2.2.

Static port configuration example

Network requirements
As shown in Figure 13:
Router A runs IGMPv2 and serves as the IGMP querier. Switch A, Switch B, and Switch C run
IGMPv2 snooping.
Host A and host C are permanent receivers of multicast group 224.1.1.1.
Configure static ports to meet the following requirements:
To enhance the reliability of multicast traffic transmission, configure GigabitEthernet 2/1/3 and
GigabitEthernet 2/1/5 on Switch C as static member ports for multicast group 224.1.1.1.
Suppose the STP runs on the network. To avoid data loops, the forwarding path from Switch A to
Switch C is blocked under normal conditions. Multicast data flows to the receivers attached to Switch C only along the path of Switch A—Switch B—Switch C. When this path is blocked, at least one IGMP query-response cycle must be completed before multicast data flows to the receivers along the path of Switch A—Switch C. In this case, the multicast delivery is interrupted during the process. For more information about the STP, see Layer 2—LAN Switching Configuration Guide.
Configure GigabitEthernet 2/1/3 on Switch A as a static router port. Then, multicast data can flow to the receivers nearly uninterruptedly along the path of Switch A—Switch C when the path of Switch A—Switch B—Switch C is blocked.
Figure 13 Network diagram
(The figure shows the multicast source at 1.1.1.1/24 connected to GigabitEthernet 2/1/2 (1.1.1.2/24) on Router A, the IGMP querier. Router A's GigabitEthernet 2/1/1 (10.1.1.1/24) connects to Switch A, which links to Switch B and Switch C. The switches and Host A, Host B, and Host C belong to VLAN 100, with Host A and Host C as receivers.)
Configuration procedure
1. Assign an IP address and subnet mask to each interface according to Figure 13. (Details not
shown.)
2. Configure Router A:
# Enable IP multicast routing.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1.
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] igmp enable
[RouterA-GigabitEthernet2/1/1] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
3. Configure Switch A:
# Enable IGMP snooping globally.
<SwitchA> system-view
[SwitchA] igmp-snooping
[SwitchA-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 through GigabitEthernet 2/1/3 to the VLAN, and enable IGMP snooping for the VLAN.
[SwitchA] vlan 100
[SwitchA-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/3
[SwitchA-vlan100] igmp-snooping enable
[SwitchA-vlan100] quit
# Configure GigabitEthernet 2/1/3 as a static router port.
[SwitchA] interface gigabitethernet 2/1/3
[SwitchA-GigabitEthernet2/1/3] igmp-snooping static-router-port vlan 100
[SwitchA-GigabitEthernet2/1/3] quit
4. Configure Switch B:
# Enable IGMP snooping globally.
<SwitchB> system-view
[SwitchB] igmp-snooping
[SwitchB-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 and GigabitEthernet 2/1/2 to the VLAN, and enable IGMP snooping for the VLAN.
[SwitchB] vlan 100
[SwitchB-vlan100] port gigabitethernet 2/1/1 gigabitethernet 2/1/2
[SwitchB-vlan100] igmp-snooping enable
[SwitchB-vlan100] quit
5. Configure Switch C:
# Enable IGMP snooping globally.
<SwitchC> system-view
[SwitchC] igmp-snooping
[SwitchC-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 through GigabitEthernet 2/1/5 to the VLAN, and enable IGMP snooping for the VLAN.
[SwitchC] vlan 100
[SwitchC-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/5
[SwitchC-vlan100] igmp-snooping enable
[SwitchC-vlan100] quit
# Configure GigabitEthernet 2/1/3 and GigabitEthernet 2/1/5 as static member ports for multicast group 224.1.1.1.
[SwitchC] interface gigabitethernet 2/1/3
[SwitchC-GigabitEthernet2/1/3] igmp-snooping static-group 224.1.1.1 vlan 100
[SwitchC-GigabitEthernet2/1/3] quit
[SwitchC] interface gigabitethernet 2/1/5
[SwitchC-GigabitEthernet2/1/5] igmp-snooping static-group 224.1.1.1 vlan 100
[SwitchC-GigabitEthernet2/1/5] quit
Verifying the configuration
# Display information about the static router ports in VLAN 100 on Switch A.
[SwitchA] display igmp-snooping static-router-port vlan 100
VLAN 100:
  Router slots (0 in total):
  Router ports (1 in total):
    GE2/1/3
The output shows that GigabitEthernet 2/1/3 on Switch A has become a static router port.
# Display information about the static IGMP snooping forwarding entries in VLAN 100 on Switch C.
[SwitchC] display igmp-snooping static-group vlan 100
Total 1 entries.
VLAN 100: Total 1 entries.
  (0.0.0.0, 224.1.1.1)
    Host slots (0 in total):
    Host ports (2 in total):
      GE2/1/3
      GE2/1/5
The output shows that GigabitEthernet 2/1/3 and GigabitEthernet 2/1/5 on Switch C have become static member ports of the multicast group 224.1.1.1.

IGMP snooping querier configuration example

Network requirements
As shown in Figure 14:
The network is a Layer 2-only network.
Source 1 and Source 2 send multicast data to the multicast groups 224.1.1.1 and 225.1.1.1,
respectively.
Host A and Host C are receivers of multicast group 224.1.1.1, and Host B and Host D are receivers of multicast group 225.1.1.1.
All receiver hosts run IGMPv2, and all switches run IGMPv2 snooping. Switch A (which is close to the multicast sources) acts as the IGMP querier.
To prevent the switches from flooding unknown packets in the VLAN, enable dropping unknown multicast packets on all the switches.
Figure 14 Network diagram
Configuration procedure
1. Configure Switch A:
# Enable IGMP snooping globally.
<SwitchA> system-view
[SwitchA] igmp-snooping
[SwitchA-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 through GigabitEthernet 2/1/3 to the VLAN, and enable IGMP snooping and dropping unknown multicast packets for the VLAN.
[SwitchA] vlan 100
[SwitchA-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/3
[SwitchA-vlan100] igmp-snooping enable
[SwitchA-vlan100] igmp-snooping drop-unknown
# Configure Switch A as the IGMP snooping querier.
[SwitchA-vlan100] igmp-snooping querier
[SwitchA-vlan100] quit
2. Configure Switch B:
# Enable IGMP snooping globally.
<SwitchB> system-view
[SwitchB] igmp-snooping
[SwitchB-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 through GigabitEthernet 2/1/4 to the VLAN, and enable IGMP snooping and dropping unknown multicast packets for the VLAN.
[SwitchB] vlan 100
[SwitchB-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/4
[SwitchB-vlan100] igmp-snooping enable
[SwitchB-vlan100] igmp-snooping drop-unknown
[SwitchB-vlan100] quit
3. Configure Switch C:
# Enable IGMP snooping globally.
<SwitchC> system-view
[SwitchC] igmp-snooping
[SwitchC-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 through GigabitEthernet 2/1/3 to the VLAN, and enable IGMP snooping and dropping unknown multicast packets for the VLAN.
[SwitchC] vlan 100
[SwitchC-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/3
[SwitchC-vlan100] igmp-snooping enable
[SwitchC-vlan100] igmp-snooping drop-unknown
[SwitchC-vlan100] quit
4. Configure Switch D:
# Enable IGMP snooping globally.
<SwitchD> system-view
[SwitchD] igmp-snooping
[SwitchD-igmp-snooping] quit
# Create VLAN 100, assign GigabitEthernet 2/1/1 and GigabitEthernet 2/1/2 to the VLAN, and enable IGMP snooping and dropping unknown multicast packets for the VLAN.
[SwitchD] vlan 100
[SwitchD-vlan100] port gigabitethernet 2/1/1 to gigabitethernet 2/1/2
[SwitchD-vlan100] igmp-snooping enable
[SwitchD-vlan100] igmp-snooping drop-unknown
[SwitchD-vlan100] quit
Verifying the configuration
# Display statistics for IGMP messages learned by IGMP snooping on Switch B.
[SwitchB] display igmp-snooping statistics
Received IGMP general queries: 3
Received IGMPv1 reports: 0
Received IGMPv2 reports: 12
Received IGMP leaves: 0
Received IGMPv2 specific queries: 0
Sent IGMPv2 specific queries: 0
Received IGMPv3 reports: 0
Received IGMPv3 reports with right and wrong records: 0
Received IGMPv3 specific queries: 0
Received IGMPv3 specific sg queries: 0
Sent IGMPv3 specific queries: 0
Sent IGMPv3 specific sg queries: 0
Received error IGMP messages: 0
The output shows that all switches except Switch A can receive the IGMP general queries after Switch A acts as the IGMP snooping querier.

Troubleshooting IGMP snooping

Layer 2 multicast forwarding cannot function

Symptom
Layer 2 multicast forwarding cannot function on the Layer 2 device.
Analysis
IGMP snooping is not enabled.
Solution
1. Use the display igmp-snooping command to display IGMP snooping status.
2. If IGMP snooping is not enabled, use the igmp-snooping command in system view to enable IGMP
snooping globally. Then, use the igmp-snooping enable command in VLAN view to enable IGMP snooping for the VLAN.
3. If IGMP snooping is enabled globally but not enabled for the VLAN, use the igmp-snooping enable
command in VLAN view to enable IGMP snooping for the VLAN.
4. If the problem persists, contact HP Support.

Multicast group filter does not work

Symptom
Hosts can receive multicast data from multicast groups that are not permitted by the multicast group filter.
Analysis
The ACL is incorrectly configured.
The multicast group filter is not correctly applied.
The function of dropping unknown multicast data is not enabled, so unknown multicast data is flooded.
Solution
1. Use the display acl command to verify that the configured ACL meets the multicast group filter
requirements.
2. Use the display this command in IGMP-snooping view or in a corresponding interface view to
verify that the correct multicast group filter has been applied. If not, use the group-policy or
igmp-snooping group-policy command to apply the correct multicast group filter.
3. Use the display igmp-snooping command to verify that the function of dropping unknown multicast
data is enabled. If not, use the drop-unknown or igmp-snooping drop-unknown command to enable the function of dropping unknown multicast data.
4. If the problem persists, contact HP Support.

Configuring multicast routing and forwarding

In this chapter, "MSR2000" refers to MSR2003. "MSR3000" collectively refers to MSR3012, MSR3024, MSR3044, and MSR3064. "MSR4000" collectively refers to MSR4060 and MSR4080.

Overview

The following tables are involved in multicast routing and forwarding:
Multicast routing table of each multicast routing protocol, such as the PIM routing table.
General multicast routing table that summarizes multicast routing information generated by different multicast routing protocols. The multicast routing information from multicast sources to multicast groups is stored in a set of (S, G) routing entries.
Multicast forwarding table that guides multicast forwarding. The optimal routing entries in the
multicast routing table are added to the multicast forwarding table.

RPF check mechanism

A multicast routing protocol relies on the existing unicast routes, MBGP routes, or static multicast routes to create multicast routing entries. When creating multicast routing entries, the multicast routing protocol uses reverse path forwarding (RPF) check to ensure the multicast data delivery along the correct path. The RPF check also helps avoid data loops.
A multicast routing protocol uses the following tables to perform an RPF check:
Unicast routing table—Contains unicast routing information.
MBGP multicast routing table—Contains MBGP multicast routing information.
Static multicast routing table—Contains RPF routes that are manually configured.
The MBGP multicast routing table and the static multicast routing table are used for the RPF check rather than for multicast routing.
RPF check process
When performing an RPF check, the router searches its unicast routing table, MBGP routing table, and static multicast routing table at the same time using the following process:
1. The router separately chooses an optimal route from the unicast routing table, MBGP routing table,
and the static multicast routing table:
{ The router looks up its unicast routing table for an optimal unicast route back to the packet
source. The outgoing interface of the route is the RPF interface and the next hop is the RPF neighbor. The router considers the path of the packet that the RPF interface receives from the RPF neighbor as the shortest path that leads back to the source.
{ The router looks up its MBGP routing table for an optimal MBGP route back to the packet source.
The outgoing interface of the route is the RPF interface and the next hop is the RPF neighbor.
{ The router looks up its static multicast routing table for an optimal static multicast route back to
the packet source. The route explicitly defines the RPF interface and the RPF neighbor.
2. The router selects one of the three optimal routes as the RPF route according to the following
principles:
{ If the router uses the longest prefix match principle, the router selects the route that has the
longest prefix match as the RPF route. If the routes have the same mask, the router selects the route that has the highest priority as the RPF route. If the routes have the same priority, the router selects a route as the RPF route in the order of static multicast route, MBGP route, and unicast route.
For more information about the route preference, see Layer 3—IP Routing Configuration Guide.
{ If the router does not use the longest prefix match principle, the router selects the route that has
the highest priority as the RPF route. If the routes have the same priority, the router selects a route as the RPF route in the order of static multicast route, MBGP route, and unicast route.
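The selection logic described above can be sketched as follows. This is a simplified illustration, not device code: routes are modeled as (table, prefix-length, preference) tuples, and a smaller preference value is assumed to mean a higher priority.

```python
# Tie-break order when priorities are equal: static multicast, MBGP, unicast.
TYPE_RANK = {"static": 0, "mbgp": 1, "unicast": 2}

def select_rpf_route(candidates, longest_match=False):
    """Pick the RPF route from the optimal route of each table."""
    if longest_match:
        # Longest prefix first, then highest priority, then table order.
        key = lambda r: (-r[1], r[2], TYPE_RANK[r[0]])
    else:
        # Highest priority (lowest preference value) first, then table order.
        key = lambda r: (r[2], TYPE_RANK[r[0]])
    return min(candidates, key=key)

routes = [("unicast", 24, 10), ("static", 16, 10), ("mbgp", 24, 20)]
print(select_rpf_route(routes))                      # static route wins on table order
print(select_rpf_route(routes, longest_match=True))  # unicast route wins on prefix length
```

With the same candidates, the selected RPF route changes depending on whether the longest prefix match principle is in effect, which is exactly what the longest-match command toggles.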
In RPF checks, a "packet source" means different things in different situations:
For a packet that travels along the SPT from the multicast source to the receivers or to the RP, the
packet source is the multicast source.
For a packet that travels along the RPT from the RP to the receivers, the packet source is the RP.
For a packet that travels along the source-side RPT from the multicast source to the RP, the packet
source is the RP.
For a bootstrap message from the BSR, the packet source is the BSR.
For more information about the concepts of SPT, RPT, source-side RPT, RP, and BSR, see "Configuring PIM."
RPF check implementation in multicast
Performing an RPF check on every received multicast packet would place a heavy burden on the router. The use of a multicast forwarding table is the solution to this issue. When the router creates a multicast forwarding entry for a multicast packet, it sets the RPF interface of the packet as the incoming interface of the forwarding entry. After the router receives a multicast packet, it looks up its multicast forwarding table:
If no match is found, the router first determines the RPF route back to the packet source and the RPF
interface. Then, it creates a forwarding entry with the RPF interface as the incoming interface and performs one of the following actions:
{ If the interface that received the packet is the RPF interface, the RPF check succeeds and the
router forwards the packet out of all the outgoing interfaces.
{ If the interface that received the packet is not the RPF interface, the RPF check fails and the router
discards the packet.
If a match is found and the matching forwarding entry contains the receiving interface, the router
forwards the packet out of all the outgoing interfaces.
If a match is found but the matching forwarding entry does not contain the receiving interface, the
router determines the RPF route back to the packet source. Then, the router performs one of the following actions:
{ If the RPF interface is the incoming interface, it means that the forwarding entry is correct, but
the packet traveled along a wrong path. The router discards the packet.
{ If the RPF interface is not the incoming interface, it means that the forwarding entry has expired.
The router replaces the incoming interface with the RPF interface. In this case, if the interface that received the packet is the RPF interface, the router forwards the packet out of all outgoing interfaces. Otherwise, it discards the packet.
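The forwarding-table logic above can be condensed into the following sketch. The dictionary-based table, the rpf_lookup() callback, and the sample (S, G) flow and outgoing interface are simplified stand-ins for illustration, not the real data plane.

```python
def process_packet(fwd_table, sg, in_iface, rpf_lookup, oifs=None):
    """Return 'forward' or 'discard' for a multicast packet of flow (S, G) = sg."""
    entry = fwd_table.get(sg)
    if entry is None:
        # No match: determine the RPF interface and create a forwarding entry.
        rpf_iface = rpf_lookup(sg)
        fwd_table[sg] = {"iif": rpf_iface, "oifs": list(oifs or [])}
        return "forward" if in_iface == rpf_iface else "discard"
    if entry["iif"] == in_iface:
        # The packet arrived on the entry's incoming interface: RPF check passes.
        return "forward"
    # The receiving interface differs from the entry: re-run the RPF lookup.
    rpf_iface = rpf_lookup(sg)
    if rpf_iface == entry["iif"]:
        return "discard"          # entry is correct; the packet took a wrong path
    entry["iif"] = rpf_iface      # entry has expired; refresh the incoming interface
    return "forward" if in_iface == rpf_iface else "discard"

# Mirrors Figure 15: the RPF interface back to the source is GE1/0/2.
table = {("192.168.0.1", "225.1.1.1"): {"iif": "GE1/0/2", "oifs": ["GE1/0/3"]}}
rpf = lambda sg: "GE1/0/2"
print(process_packet(table, ("192.168.0.1", "225.1.1.1"), "GE1/0/2", rpf))  # forward
print(process_packet(table, ("192.168.0.1", "225.1.1.1"), "GE1/0/1", rpf))  # discard
```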
Figure 15 RPF check process
(The figure shows Source 192.168.0.1/24 reaching Router C over two paths: through Router A, arriving on Router C's GE1/0/2, and through Router B, arriving on Router C's GE1/0/1. Router C's IP routing table lists 192.168.0.0/24 with outgoing interface GE1/0/2, and receivers attach to Router C.)
As shown in Figure 15, assume that unicast routes are available in the network, MBGP is not configured, and no static multicast routes have been configured on Router C. Multicast packets travel along the SPT from the multicast source to the receivers. The multicast forwarding table on Router C contains the (S, G) entry, with GigabitEthernet 1/0/2 as the incoming interface.
If a multicast packet arrives at Router C on GigabitEthernet 1/0/2, Router C forwards the packet
out of all outgoing interfaces.
If a multicast packet arrives at Router C on GigabitEthernet 1/0/1, Router C performs an RPF check
on the packet. Router C searches its unicast routing table and finds that the outgoing interface to the source (the RPF interface) is GigabitEthernet 1/0/2. In this case, the (S, G) entry is correct, but the packet traveled along a wrong path. The packet fails the RPF check, and Router C discards the packet.

Static multicast routes

Depending on the application environment, a static multicast route can change an RPF route or create an RPF route.
Changing an RPF route
Typically, the topology structure of a multicast network is the same as that of a unicast network, and multicast traffic follows the same transmission path as unicast traffic does. You can configure a static multicast route for a multicast source to change the RPF route. As a result, the router creates a transmission path for multicast traffic that is different from the transmission path for unicast traffic.
Figure 16 Changing an RPF route
As shown in Figure 16, when no static multicast route is configured, Router C's RPF neighbor on the path back to the source is Router A. The multicast data from the source travels through Router A to Router C. You can configure a static multicast route on Router C and specify Router B as Router C's RPF neighbor on the path back to the source. The multicast data from the source travels along the path: Router A to Router B and then to Router C.
Creating an RPF route
When a unicast route is blocked, multicast forwarding might be stopped due to lack of an RPF route. You can configure a static multicast route to create an RPF route. In this way, a multicast routing entry is created to guide multicast forwarding.
Figure 17 Creating an RPF route
Static multicast routing table on Router C:
  Source/Mask: 192.168.0.0/24, RPF neighbor/Mask: 1.1.1.1/24, Interface: GE1/0/1
Static multicast routing table on Router D:
  Source/Mask: 192.168.0.0/24, RPF neighbor/Mask: 2.2.2.2/24, Interface: GE1/0/1
(The figure shows Source 192.168.0.1/24 in the RIP domain behind Router A and Router B. Router B's GE1/0/2 (1.1.1.1/24) connects to Router C's GE1/0/1 (1.1.1.2/24) at the edge of the OSPF domain. Router C connects to Router D over 2.2.2.0/24, and receivers attach to Router C and Router D. Static multicast routes carry the multicast packets across the domain boundary.)
As shown in Figure 17, the RIP domain and the OSPF domain are unicast isolated from each other. When no static multicast route is configured, the receiver hosts in the OSPF domain cannot receive the multicast packets from the multicast source in the RIP domain. You can configure a static multicast route on Router
C and Router D and specify Router B and Router C as the RPF neighbors of Router C and Router D, respectively. In this way, the receiver hosts can receive the multicast data from the multicast source.
NOTE:
A static multicast route is effective only on the multicast router on which it is configured, and will not be advertised throughout the network or redistributed to other routers.

Multicast forwarding across unicast subnets

Routers forward the multicast data from a multicast source hop by hop along the forwarding tree, but some routers in a network might not support multicast protocols. When the multicast data is forwarded to a router that does not support IP multicast, the forwarding path is blocked. In this case, you can enable multicast forwarding across two unicast subnets by establishing a tunnel between the routers at the edges of the two unicast subnets.
Figure 18 Multicast data transmission through a tunnel
As shown in Figure 18, a tunnel is established between the multicast routers Router A and Router B. Router A encapsulates the multicast data in unicast IP packets, and forwards them to Router B across the tunnel through unicast routers. Then, Router B strips off the unicast IP header and continues to forward the multicast data to the receiver.
To use this tunnel only for multicast traffic, configure the tunnel as the outgoing interface only for multicast routes.
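As a rough illustration of the encapsulation step, the sketch below prepends a minimal unicast IPv4 header (protocol 4, IP-in-IP) to a multicast packet. Real tunnel types such as GRE add their own headers, and the addresses, TTL, and zeroed checksum here are simplifications for the example.

```python
import struct

def ipv4_header(src, dst, payload_len, proto=4):
    """Build a minimal 20-byte IPv4 header (checksum left at 0 for brevity)."""
    ver_ihl = (4 << 4) | 5                 # version 4, header length 5 x 32-bit words
    src_b = bytes(int(o) for o in src.split("."))
    dst_b = bytes(int(o) for o in dst.split("."))
    return struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, 20 + payload_len,
                       0, 0, 64, proto, 0, src_b, dst_b)

# Router A wraps the multicast packet in a unicast header addressed to Router B.
multicast_packet = b"\x45" + b"\x00" * 19 + b"multicast payload"   # stand-in packet
tunneled = ipv4_header("1.1.1.1", "2.2.2.2", len(multicast_packet)) + multicast_packet
# Router B strips the outer 20-byte header to recover the original multicast packet.
inner = tunneled[20:]
```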

Multicast routing and forwarding configuration task list

Tasks at a glance
(Required.) Enabling IP multicast routing
(Optional.) Configuring multicast routing and forwarding:
(Optional.) Configuring static multicast routes
(Optional.) Configuring the RPF route selection rule
(Optional.) Configuring multicast load splitting
(Optional.) Configuring a multicast forwarding boundary
NOTE:
The device can route and forward multicast data only through the primary IP addresses of interfaces, rather than their secondary addresses or unnumbered IP addresses. For more information about primary and secondary IP addresses, and IP unnumbered, see Layer 3—IP Services Configuration Guide.

Enabling IP multicast routing

Enable IP multicast routing before you configure any Layer 3 multicast functionality on the public network or VPN instance.
To enable IP multicast routing:
1. Enter system view.
   system-view
2. Enable IP multicast routing and enter MRIB view.
   multicast routing [ vpn-instance vpn-instance-name ]
   By default, IP multicast routing is disabled.

Configuring multicast routing and forwarding

Before you configure multicast routing and forwarding, complete the following tasks:
Configure a unicast routing protocol so that all devices in the domain are interoperable at the
network layer.
Enable PIM-DM or PIM-SM.

Configuring static multicast routes

By configuring a static multicast route for a given multicast source, you can specify an RPF interface or an RPF neighbor for the multicast traffic from that source.
To configure a static multicast route:
1. Enter system view.
   system-view
2. Configure a static multicast route.
   ip rpf-route-static [ vpn-instance vpn-instance-name ] source-address { mask-length | mask } { rpf-nbr-address | interface-type interface-number } [ preference preference ]
   By default, no static multicast route exists.
3. (Optional.) Delete static multicast routes.
   Delete a specific static multicast route: undo ip rpf-route-static [ vpn-instance vpn-instance-name ] source-address { mask-length | mask } { rpf-nbr-address | interface-type interface-number }
   Delete all static multicast routes: delete ip rpf-route-static [ vpn-instance vpn-instance-name ]

Configuring the RPF route selection rule

You can configure the router to select the RPF route based on the longest prefix match principle. For more information about RPF route selection, see "RPF check process."
To configure a multicast routing policy:
1. Enter system view.
   system-view
2. Enter MRIB view.
   multicast routing [ vpn-instance vpn-instance-name ]
3. Configure the device to select the RPF route based on the longest prefix match.
   longest-match
   By default, the route with the highest priority is selected as the RPF route.

Configuring multicast load splitting

To optimize the traffic delivery for multiple data flows, you can configure load splitting on a per-source basis or on a per-source-and-group basis.
To configure multicast load splitting:
1. Enter system view.
   system-view
2. Enter MRIB view.
   multicast routing [ vpn-instance vpn-instance-name ]
3. Configure multicast load splitting.
   load-splitting { source | source-group }
   By default, load splitting is disabled.
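The difference between the two load-splitting modes can be sketched as follows. The CRC-based hash is purely illustrative; the device's internal distribution algorithm is not documented here.

```python
from zlib import crc32

def choose_rpf_path(paths, source, group, mode="source"):
    """Distribute flows across equal-cost RPF paths by source or source-group."""
    key = source if mode == "source" else source + "/" + group
    return paths[crc32(key.encode()) % len(paths)]

paths = ["GE1/0/1", "GE1/0/2"]
# Per-source: every group from the same source maps to the same path.
a = choose_rpf_path(paths, "10.1.1.1", "225.1.1.1", mode="source")
b = choose_rpf_path(paths, "10.1.1.1", "226.1.1.1", mode="source")
assert a == b
# Per-source-and-group: flows from one source may use different paths.
c = choose_rpf_path(paths, "10.1.1.1", "225.1.1.1", mode="source-group")
```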

Configuring a multicast forwarding boundary

A multicast forwarding boundary sets the boundary condition for the multicast groups in a specific range. The multicast data for a multicast group travels within a definite boundary in a network. If the destination address of a multicast packet matches the boundary condition, the packet is not forwarded. If an interface is configured as a multicast boundary, it can no longer forward multicast packets (including packets sent from the local device), nor receive multicast packets.
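The boundary test itself is a prefix match on the destination group address, which can be sketched with Python's standard ipaddress module. The 239.0.0.0/8 boundary below is an example value, not part of the original configuration.

```python
import ipaddress

def blocked_by_boundary(group, boundaries):
    """Return True if the destination group falls inside any configured boundary."""
    g = ipaddress.ip_address(group)
    return any(g in net for net in boundaries)

# e.g., after "multicast boundary 239.0.0.0 8" on the interface
boundaries = [ipaddress.ip_network("239.0.0.0/8")]
print(blocked_by_boundary("239.1.1.1", boundaries))  # True: packet is not forwarded
print(blocked_by_boundary("224.1.1.1", boundaries))  # False: packet is forwarded
```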
TIP:
You do not need to enable IP multicast routing before this configuration.
To configure a multicast forwarding boundary:
1. Enter system view.
   system-view
2. Enter interface view.
   interface interface-type interface-number
3. Configure a multicast forwarding boundary.
   multicast boundary group-address { mask-length | mask }
   By default, no forwarding boundary is configured.

Displaying and maintaining multicast routing and forwarding

CAUTION:
The reset commands might cause multicast data transmission failures.
Execute display commands in any view and reset commands in user view.
Task: Display information about the interfaces maintained by the MRIB.
Command: display mrib [ vpn-instance vpn-instance-name ] interface [ interface-type interface-number ]

Task: Display multicast boundary information.
Command: display multicast [ vpn-instance vpn-instance-name ] boundary [ group-address [ mask-length | mask ] ] [ interface interface-type interface-number ]

Task: Display information about the DF for multicast forwarding (MSR2000/MSR3000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding df-info [ group-address ] [ verbose ]

Task: Display information about the DF for multicast forwarding (MSR4000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding df-info [ group-address ] [ verbose ] [ slot slot-number ]

Task: Display statistics for multicast forwarding events (MSR2000/MSR3000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding event

Task: Display statistics for multicast forwarding events (MSR4000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding event [ slot slot-number ]

Task: Display multicast forwarding table information (MSR2000/MSR3000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number | outgoing-interface { exclude | include | match } interface-type interface-number | statistics ] *
Task: Display multicast forwarding table information (MSR4000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number | outgoing-interface { exclude | include | match } interface-type interface-number | slot slot-number | statistics ] *

Task: Display information about the DF list in the multicast forwarding entries (MSR2000/MSR3000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding-table df-list [ group-address ] [ verbose ]

Task: Display information about the DF list in the multicast forwarding entries (MSR4000).
Command: display multicast [ vpn-instance vpn-instance-name ] forwarding-table df-list [ group-address ] [ verbose ] [ slot slot-number ]

Task: Display information about the multicast routing entries.
Command: display multicast [ vpn-instance vpn-instance-name ] routing-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number | outgoing-interface { exclude | include | match } interface-type interface-number ] *

Task: Display information about the static multicast routing entries.
Command: display multicast [ vpn-instance vpn-instance-name ] routing-table static [ source-address { mask-length | mask } ]

Task: Display RPF route information about the multicast source.
Command: display multicast [ vpn-instance vpn-instance-name ] rpf-info source-address [ group-address ]

Task: Clear statistics for multicast forwarding events.
Command: reset multicast [ vpn-instance vpn-instance-name ] forwarding event

Task: Clear forwarding entries from the multicast forwarding table.
Command: reset multicast [ vpn-instance vpn-instance-name ] forwarding-table { { source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface { interface-type interface-number } } * | all }

Task: Clear routing entries from the multicast routing table.
Command: reset multicast [ vpn-instance vpn-instance-name ] routing-table { { source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number } * | all }
NOTE:
When a routing entry is removed, the associated forwarding entry is also removed. When a forwarding entry is removed, the associated routing entry is also removed.

Configuration examples

Changing an RPF route

Network requirements
As shown in Figure 19:
PIM-DM runs in the network. All routers in the network support multicast.
Router A, Router B, and Router C run OSPF.
Typically, the receiver host can receive the multicast data from Source through the path: Router A to
Router B, which is the same as the unicast route.
Configure the routers so that the receiver host can receive the multicast data from Source through the path: Router A to Router C to Router B. This path is different from the unicast route.
Figure 19 Network diagram
(The figure shows Source 50.1.1.100/24 attached to Router A and Receiver 10.1.1.100/24 attached to Router B in a PIM-DM domain. Router A and Router B are directly connected over 30.1.1.0/24, Router A connects to Router C over 40.1.1.0/24, and Router C connects to Router B over 20.1.1.0/24. A static multicast route redirects the RPF path from Router B through Router C.)
Configuration procedure
1. Assign an IP address and subnet mask for each interface according to Figure 19. (Details not
shown.)
2. Enable OSPF on the routers in the PIM-DM domain to make sure the following conditions are met:
(Details not shown.)
{ The routers are interoperable at the network layer.
{ The routers can dynamically update their routing information.
3. Enable IP multicast routing, and enable IGMP and PIM-DM:
# On Router B, enable IP multicast routing.
<RouterB> system-view
[RouterB] multicast routing
[RouterB-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1 (the interface that connects to the receiver host).
[RouterB] interface gigabitethernet 2/1/1
[RouterB-GigabitEthernet2/1/1] igmp enable
[RouterB-GigabitEthernet2/1/1] quit
# Enable PIM-DM on the other interfaces.
[RouterB] interface gigabitethernet 2/1/2
[RouterB-GigabitEthernet2/1/2] pim dm
[RouterB-GigabitEthernet2/1/2] quit
[RouterB] interface gigabitethernet 2/1/3
[RouterB-GigabitEthernet2/1/3] pim dm
[RouterB-GigabitEthernet2/1/3] quit
# On Router A, enable IP multicast routing, and enable PIM-DM on each interface.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] pim dm
[RouterA-GigabitEthernet2/1/1] quit
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
[RouterA] interface gigabitethernet 2/1/3
[RouterA-GigabitEthernet2/1/3] pim dm
[RouterA-GigabitEthernet2/1/3] quit
# Enable IP multicast routing and PIM-DM on Router C in the same way Router A is configured. (Details not shown.)
4. Display the RPF route to Source on Router B.
[RouterB] display multicast rpf-info 50.1.1.100
 RPF information about source 50.1.1.100:
     RPF interface: GigabitEthernet2/1/3, RPF neighbor: 30.1.1.2
     Referenced route/mask: 50.1.1.0/24
     Referenced route type: igp
     Route selection rule: preference-preferred
     Load splitting rule: disable
The output shows that the current RPF route on Router B is contributed by a unicast routing protocol and the RPF neighbor is Router A.
5. Configure a static multicast route on Router B, specifying Router C as its RPF neighbor to Source.
[RouterB] ip rpf-route-static 50.1.1.100 24 20.1.1.2
Verifying the configuration
# Display information about the RPF route to Source on Router B.
[RouterB] display multicast rpf-info 50.1.1.100
 RPF information about source 50.1.1.100:
     RPF interface: GigabitEthernet2/1/2, RPF neighbor: 20.1.1.2
     Referenced route/mask: 50.1.1.0/24
     Referenced route type: multicast static
     Route selection rule: preference-preferred
     Load splitting rule: disable
The output shows the following:
The RPF route on Router B is the configured static multicast route.
The RPF neighbor of Router B is Router C.

Creating an RPF route

Network requirements
As shown in Figure 20:
PIM-DM runs in the network and all routers in the network support IP multicast.
Router B and Router C run OSPF, and have no unicast routes to Router A.
Typically, the receiver host receives the multicast data from Source 1 in the OSPF domain.
Configure the routers so that the receiver host can receive multicast data from Source 2, which is outside the OSPF domain.
Figure 20 Network diagram
Configuration procedure
1. Assign an IP address and subnet mask for each interface according to Figure 20. (Details not
shown.)
2. Enable OSPF on Router B and Router C to make sure the following conditions are met: (Details not
shown.)
{ The routers are interoperable at the network layer.
{ The routers can dynamically update their routing information.
3. Enable IP multicast routing, and enable IGMP and PIM-DM:
# On Router C, enable IP multicast routing.
<RouterC> system-view
[RouterC] multicast routing
[RouterC-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1 (the interface that connects to the receiver host).
[RouterC] interface gigabitethernet 2/1/1
[RouterC-GigabitEthernet2/1/1] igmp enable
[RouterC-GigabitEthernet2/1/1] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterC] interface gigabitethernet 2/1/2
[RouterC-GigabitEthernet2/1/2] pim dm
[RouterC-GigabitEthernet2/1/2] quit
# On Router A, enable IP multicast routing, and enable PIM-DM on each interface.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] pim dm
[RouterA-GigabitEthernet2/1/1] quit
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
# Enable IP multicast routing and PIM-DM on Router B in the same way Router A is configured. (Details not shown.)
4. Display information about their RPF routes to Source 2 on Router B and Router C.
[RouterB] display multicast rpf-info 50.1.1.100
[RouterC] display multicast rpf-info 50.1.1.100
No output is displayed because no RPF routes to Source 2 exist on Router B and Router C.
5. Configure a static multicast route:
# Configure a static multicast route on Router B, specifying Router A as its RPF neighbor to Source
2.
[RouterB] ip rpf-route-static 50.1.1.100 24 30.1.1.2
# Configure a static multicast route on Router C, specifying Router B as its RPF neighbor to Source
2.
[RouterC] ip rpf-route-static 50.1.1.100 24 20.1.1.2
Verifying the configuration
# Display information about their RPF routes to Source 2 on Router B and Router C.
[RouterB] display multicast rpf-info 50.1.1.100
 RPF information about source 50.1.1.100:
     RPF interface: GigabitEthernet2/1/3, RPF neighbor: 30.1.1.2
     Referenced route/mask: 50.1.1.0/24
     Referenced route type: multicast static
     Route selection rule: preference-preferred
     Load splitting rule: disable
[RouterC] display multicast rpf-info 50.1.1.100
 RPF information about source 50.1.1.100:
     RPF interface: GigabitEthernet2/1/2, RPF neighbor: 20.1.1.2
     Referenced route/mask: 50.1.1.0/24
     Referenced route type: multicast static
     Route selection rule: preference-preferred
     Load splitting rule: disable
The output shows that the RPF routes to Source 2 exist on Router B and Router C. These RPF routes are the configured static multicast routes.

Multicast forwarding over a GRE tunnel

Network requirements
As shown in Figure 21:
Multicast routing and PIM-DM are enabled on Router A and Router C. Router B does not support
multicast.
OSPF is running on Router A, Router B, and Router C.
Configure a GRE tunnel so that the receiver host can receive the multicast data from Source.
Figure 21 Network diagram
Configuration procedure
1. Assign an IP address and mask for each interface according to Figure 21. (Details not shown.)
2. Enable OSPF on routers to make sure the following conditions are met: (Details not shown.)
{ The routers are interoperable at the network layer.
{ The routers can dynamically update their routing information.
3. Configure a GRE tunnel:
# On Router A, create interface Tunnel 0, and specify the tunnel encapsulation mode as GRE over IPv4.
<RouterA> system-view
[RouterA] interface tunnel 0 mode gre
# Assign an IP address to interface Tunnel 0, and specify its source and destination addresses.
[RouterA-Tunnel0] ip address 50.1.1.1 24
[RouterA-Tunnel0] source 20.1.1.1
[RouterA-Tunnel0] destination 30.1.1.2
[RouterA-Tunnel0] quit
# Create interface Tunnel 0 on Router C and specify the tunnel encapsulation mode as GRE over IPv4.
<RouterC> system-view
[RouterC] interface tunnel 0 mode gre
# Assign an IP address to interface Tunnel 0, and specify its source and destination addresses.
[RouterC-Tunnel0] ip address 50.1.1.2 24
[RouterC-Tunnel0] source 30.1.1.2
[RouterC-Tunnel0] destination 20.1.1.1
[RouterC-Tunnel0] quit
4. Enable IP multicast routing, PIM-DM, and IGMP:
# On Router A, enable multicast routing, and enable PIM-DM on each interface.
[RouterA] multicast routing
[RouterA-mrib] quit
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] pim dm
[RouterA-GigabitEthernet2/1/1] quit
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
[RouterA] interface tunnel 0
[RouterA-Tunnel0] pim dm
[RouterA-Tunnel0] quit
# On Router C, enable multicast routing.
[RouterC] multicast routing
[RouterC-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1 (the interface that connects to the receiver host).
[RouterC] interface gigabitethernet 2/1/1
[RouterC-GigabitEthernet2/1/1] igmp enable
[RouterC-GigabitEthernet2/1/1] quit
# Enable PIM-DM on other interfaces.
[RouterC] interface gigabitethernet 2/1/2
[RouterC-GigabitEthernet2/1/2] pim dm
[RouterC-GigabitEthernet2/1/2] quit
[RouterC] interface tunnel 0
[RouterC-Tunnel0] pim dm
[RouterC-Tunnel0] quit
5. On Router C, configure a static multicast route, specifying its RPF neighbor to Source as interface
Tunnel 0 on Router A.
[RouterC] ip rpf-route-static 10.1.1.0 24 50.1.1.1
Verifying the configuration
# Send an IGMP report from Receiver to join the multicast group 225.1.1.1. (Details not shown.)
# Send multicast data from Source to the multicast group 225.1.1.1. (Details not shown.)
# Display PIM routing table information on Router C.
[RouterC] display pim routing-table
 Total 1 (*, G) entry; 1 (S, G) entry

 (*, 225.1.1.1)
     Protocol: pim-dm, Flag: WC
     UpTime: 00:04:25
     Upstream interface: NULL
         Upstream neighbor: NULL
         RPF prime neighbor: NULL
     Downstream interface(s) information:
     Total number of downstreams: 1
         1: GigabitEthernet2/1/1
             Protocol: igmp, UpTime: 00:04:25, Expires: -

 (10.1.1.100, 225.1.1.1)
     Protocol: pim-dm, Flag: ACT
     UpTime: 00:06:14
     Upstream interface: Tunnel0
         Upstream neighbor: 50.1.1.1
         RPF prime neighbor: 50.1.1.1
     Downstream interface(s) information:
     Total number of downstreams: 1
         1: GigabitEthernet2/1/1
             Protocol: pim-dm, UpTime: 00:04:25, Expires: -
The output shows that Router A is the RPF neighbor of Router C and the multicast data from Router A is delivered over a GRE tunnel to Router C.
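To make the tunnel encapsulation concrete, the following Python sketch builds a minimal GRE delivery packet as defined in RFC 2784: the multicast IP packet becomes the GRE payload, and the outer (delivery) IPv4 header carries the unicast tunnel endpoints, which is why Router B can forward the traffic without multicast support. This is an illustration, not device code; the outer header omits options and leaves the checksum at zero for brevity.

```python
import struct

def gre_encapsulate(inner_ip_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Wrap an IP packet in GRE plus a minimal outer IPv4 header."""
    # Basic GRE header (RFC 2784): no optional fields, version 0,
    # protocol type 0x0800 = IPv4 payload.
    gre_header = struct.pack("!HH", 0x0000, 0x0800)
    payload = gre_header + inner_ip_packet

    src = bytes(int(octet) for octet in tunnel_src.split("."))
    dst = bytes(int(octet) for octet in tunnel_dst.split("."))
    total_len = 20 + len(payload)
    # Outer IPv4 header: version/IHL, TOS, total length, ID, flags/fragment,
    # TTL, protocol 47 (GRE), checksum (0 here), source, destination.
    outer = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 0, 0, 64, 47, 0, src, dst)
    return outer + payload

# A multicast packet addressed to 225.1.1.1 travels from Router A to
# Router C inside a unicast packet between the tunnel endpoints:
packet = gre_encapsulate(b"\x45" + b"\x00" * 19, "20.1.1.1", "30.1.1.2")
```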

Troubleshooting multicast routing and forwarding

Static multicast route failure

Symptom
No dynamic routing protocol is enabled on the routers, and the physical status and link layer status of interfaces are both up, but the static multicast route fails.
Analysis
If a static multicast route is not correctly configured or updated to match the current network conditions, it does not exist in the static multicast routing table.
If a better route is found, the static multicast route might also fail.
Solution
1. Use the display multicast routing-table static command to display information about static
multicast routes. Verify that the static multicast route has been correctly configured and the route entry exists in the static multicast routing table.
2. Check the type of the interface that connects the static multicast route to the RPF neighbor. If the
interface is not a point-to-point interface, make sure you specify the address for the RPF neighbor.
3. If the problem persists, contact HP Support.

Configuring IGMP

Overview

Internet Group Management Protocol (IGMP) establishes and maintains the multicast group memberships between a Layer 3 multicast device and its directly connected hosts.
IGMP has three versions:
IGMPv1 (defined by RFC 1112)
IGMPv2 (defined by RFC 2236)
IGMPv3 (defined by RFC 3376)
All IGMP versions support the ASM model. In addition to the ASM model, IGMPv3 can directly implement the SSM model. IGMPv1 and IGMPv2 must work with the IGMP SSM mapping function to implement the SSM model. For more information about the ASM and SSM models, see "Multicast overview."

IGMPv1 overview

IGMPv1 manages multicast group memberships based on the query and response mechanism.
All routers that run IGMP on the same subnet can get IGMP membership report messages (often called "reports") from hosts. However, only one router can act as the IGMP querier to send IGMP query messages (often called "queries"). The querier election mechanism determines which router acts as the IGMP querier on the subnet.
In IGMPv1, the designated router (DR) elected by the multicast routing protocol (such as PIM) serves as the IGMP querier. For more information about DR, see "Configuring PIM."
Figure 22 IGMP queries and reports
IP network
DR
Router A Router B
Ethernet
Host A
(G2)
Query
Report
Host B
(G1)
Host C
(G1)
As shown in Figure 22, Host B and Host C are interested in the multicast data addressed to the multicast group G1. Host A is interested in the multicast data addressed to G2. The following process describes how the hosts join the multicast groups and how the IGMP querier (Router B in Figure 22) maintains the multicast group memberships:
1. The hosts send unsolicited IGMP reports to the multicast groups they want to join without having to
wait for the IGMP queries from the IGMP querier.
2. The IGMP querier periodically multicasts IGMP queries (with the destination address of 224.0.0.1)
to all hosts and routers on the local subnet.
3. After receiving a query message, Host B or Host C (the host whose delay timer expires first) sends
an IGMP report to the multicast group G1 to announce its membership for G1. This example assumes that Host B sends the report message. After receiving the report from Host B, Host C suppresses its own report for G1. Because IGMP routers (Router A and Router B) already know that G1 has at least one member (Host B), other members do not need to report their memberships. This mechanism, known as "IGMP report suppression," helps reduce traffic on the local subnet.
4. At the same time, Host A sends a report to the multicast group G2 after receiving a query message.
5. Through the query and response process, the IGMP routers (Router A and Router B) determine that
the local subnet has members of G1 and G2. The multicast routing protocol (PIM, for example) on the routers generates (*, G1) and (*, G2) multicast forwarding entries, where asterisk (*) represents any multicast source. These entries are the basis for subsequent multicast forwarding.
6. When the multicast data addressed to G1 or G2 reaches an IGMP router, the router looks up the
multicast forwarding table. Based on the (*, G1) or (*, G2) entries, the router forwards the multicast data to the local subnet. Then, the receivers on the subnet can receive the data.
IGMPv1 does not define a leave group message (often called a "leave message"). When an IGMPv1 host is leaving a multicast group, it stops sending reports to that multicast group. If the subnet has no members for a multicast group, the IGMP routers will not receive any report addressed to that multicast group. In this case, the routers clear the information for that multicast group after a period of time.
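The report suppression behavior described in step 3 can be sketched as a small simulation. This is an illustrative model, not device code: each member host starts a random delay timer per group when a query arrives, the host whose timer fires first sends the report, and the other members of that group cancel their pending reports.

```python
import random

def reports_after_query(members, max_response_time=10.0, seed=1):
    """members: {host: group}. Return the single host that reports per group.

    Models IGMP report suppression: only the member whose random delay
    timer expires first sends a report for its group.
    """
    rng = random.Random(seed)
    delays = {host: rng.uniform(0, max_response_time) for host in members}
    earliest = {}                      # group -> (delay, host)
    for host, group in members.items():
        d = delays[host]
        if group not in earliest or d < earliest[group][0]:
            earliest[group] = (d, host)
    return {group: host for group, (_, host) in earliest.items()}

# Host B and Host C are members of G1; Host A is a member of G2.
# Exactly one report is sent per group, so the routers learn that G1
# and G2 each have at least one member without hearing every host.
print(reports_after_query({"HostA": "G2", "HostB": "G1", "HostC": "G1"}))
```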
NOTE:
The IGMP report suppression mechanism is not supported on MSR routers installed with the Layer 2 switching module SIC-4FSW, 4FSWP, SIC-9FSW, or 9FSWP.

IGMPv2 enhancements

Backwards-compatible with IGMPv1, IGMPv2 has introduced a querier election mechanism and a leave-group mechanism.
Querier election mechanism
In IGMPv1, the DR elected by the Layer 3 multicast routing protocol (such as PIM) serves as the querier among multiple routers that run IGMP on the same subnet.
IGMPv2 introduced an independent querier election mechanism. The querier election process is as follows:
1. Initially, every IGMPv2 router assumes itself to be the querier and sends IGMP general query
messages (often called "general queries") to all hosts and routers on the local subnet. The destination address is 224.0.0.1.
2. After receiving a general query, every IGMPv2 router compares the source IP address of the query
message with its own interface address. After comparison, the router with the lowest IP address wins the querier election and all the other IGMPv2 routers become non-queriers.
3. All the non-queriers start a timer, known as an "other querier present timer." If a router receives an
IGMP query from the querier before the timer expires, it resets this timer. Otherwise, it considers the querier to have timed out and initiates a new querier election process.
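The election rule above (lowest source IP address wins) can be sketched in a few lines of Python. This is an illustration of the comparison logic only, not device code:

```python
import ipaddress

def elect_querier(router_ips):
    """Return the elected IGMPv2 querier: the lowest interface IP address."""
    return min(router_ips, key=lambda ip: int(ipaddress.IPv4Address(ip)))

def is_querier(my_ip, heard_query_sources):
    """A router stays querier only if every general query it has heard
    came from a higher source address than its own."""
    me = int(ipaddress.IPv4Address(my_ip))
    return all(int(ipaddress.IPv4Address(src)) > me for src in heard_query_sources)

assert elect_querier(["10.1.1.2", "10.1.1.1", "10.1.1.3"]) == "10.1.1.1"
assert is_querier("10.1.1.1", ["10.1.1.2", "10.1.1.3"]) is True
assert is_querier("10.1.1.2", ["10.1.1.1"]) is False
```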
"Leave group" mechanism
In IGMPv1, when a host leaves a multicast group, it does not send any notification to the multicast routers. The multicast routers determine whether a group has members by using the maximum response time. This adds to the leave latency.
In IGMPv2, when a host leaves a multicast group, the following process occurs:
1. The host sends a leave message to all routers on the local subnet. The destination address is
224.0.0.2.
2. After receiving the leave message, the querier sends a configurable number of group-specific
queries to the group that the host is leaving. Both the destination address field and the group address field of the message are the address of the multicast group that is being queried.
3. One of the remaining members (if any on the subnet) of the group should send a membership
report within the maximum response time advertised in the query messages.
4. If the querier receives a membership report for the group before the maximum response timer
expires, it maintains the memberships for the group. Otherwise, the querier assumes that the local subnet has no member hosts for the group and stops maintaining the memberships for the group.
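The querier side of the leave process (steps 2 through 4) can be modeled as follows. This is an illustrative sketch, not device code; the per-query report lists stand in for what the querier hears within each maximum response time window.

```python
def handle_leave(group, reports_per_query, last_member_query_count=2):
    """Model the IGMPv2 querier after receiving a leave message.

    reports_per_query: for each group-specific query sent, the list of
    hosts that answered within the maximum response time.
    Returns True if the querier keeps the group's membership state.
    """
    for round_num in range(last_member_query_count):
        # Send a group-specific query (destination address and group
        # address both set to the group being queried), then wait.
        if round_num < len(reports_per_query) and reports_per_query[round_num]:
            return True            # a remaining member reported: keep state
    return False                   # no reports: stop maintaining the group

assert handle_leave("225.1.1.1", [[], ["HostC"]]) is True   # a member remains
assert handle_leave("225.1.1.1", [[], []]) is False         # last member left
```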

IGMPv3 enhancements

IGMPv3 is based on and is compatible with IGMPv1 and IGMPv2. It provides hosts with enhanced control capabilities and provides enhancements of query and report messages.
Enhancements in control capability of hosts
IGMPv3 introduced two source filtering modes (Include and Exclude). These modes allow a host to join a designated multicast group and to choose whether to receive or reject multicast data from a designated multicast source. When a host joins a multicast group, one of the following occurs:
If the host expects to receive multicast data from specific sources like S1, S2, …, it sends a report
with the Filter-Mode denoted as "Include Sources (S1, S2, …)."
If the host expects to reject multicast data from specific sources like S1, S2, …, it sends a report with
the Filter-Mode denoted as "Exclude Sources (S1, S2, …)."
As shown in Figure 23, the network comprises two multicast sources, Source 1 (S1) and Source 2 (S2), both of which can send multicast data to the multicast group G. Host B is interested in the multicast data that Source 1 sends to G but not in the data from Source 2.
Figure 23 Flow paths of source-and-group-specific multicast traffic
In IGMPv1 or IGMPv2, Host B cannot select multicast sources when it joins the multicast group G. The multicast streams from both Source 1 and Source 2 flow to Host B whether or not it needs them.
When IGMPv3 runs between the hosts and routers, Host B can explicitly express that it needs to receive the multicast data that Source 1 sends to the multicast group G (denoted as (S1, G)). It can also explicitly express that it does not want to receive the multicast data that Source 2 sends to multicast group G (denoted as (S2, G)). As a result, Host B receives only multicast data from Source 1.
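The Include/Exclude filtering decision can be expressed compactly. This Python sketch is an illustration of the host-side rule, not device code: whether traffic for (S, G) is wanted depends only on the filter mode and the source list.

```python
def host_wants(packet_source, group, membership):
    """membership: {group: (mode, set_of_sources)} where mode is
    "INCLUDE" or "EXCLUDE". Return True if the host wants the traffic."""
    if group not in membership:
        return False
    mode, sources = membership[group]
    if mode == "INCLUDE":
        return packet_source in sources      # only the listed sources
    return packet_source not in sources      # all except the listed sources

# Host B joins G in Include mode with Source 1 only:
host_b = {"G": ("INCLUDE", {"S1"})}
assert host_wants("S1", "G", host_b) is True    # data from Source 1 accepted
assert host_wants("S2", "G", host_b) is False   # data from Source 2 rejected
```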
Enhancements in query and report capabilities
Query message carrying the source addresses
IGMPv3 is compatible with IGMPv1 and IGMPv2 and supports general queries and group-specific queries. It also introduces group-and-source-specific queries.
{ A general query does not carry a group address or a source address.
{ A group-specific query carries a group address, but no source address.
{ A group-and-source-specific query carries a group address and one or more source addresses.
Reports containing multiple group records
Unlike an IGMPv1 or IGMPv2 report message, an IGMPv3 report message is destined to
224.0.0.22 and contains one or more group records. Each group record contains a multicast group address and a multicast source address list.
Group records include the following categories:
{ IS_IN—The source filtering mode is Include. The report sender requests the multicast data from
only the sources defined in the specified multicast source list.
{ IS_EX—The source filtering mode is Exclude. The report sender requests the multicast data from
any sources except those defined in the specified multicast source list.
{ TO_IN—The filtering mode has changed from Exclude to Include.
{ TO_EX—The filtering mode has changed from Include to Exclude.
{ ALLOW—The Source Address fields contain a list of additional sources from which the receiver
wants to obtain data. If the current filtering mode is Include, these sources are added to the multicast source list. If the current filtering mode is Exclude, these sources are deleted from the multicast source list.
{ BLOCK—The Source Address fields contain a list of the sources from which the receiver no
longer wants to obtain data. If the current filtering mode is Include, these sources are deleted from the multicast source list. If the current filtering mode is Exclude, these sources are added to the multicast source list.
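The ALLOW and BLOCK rules above can be sketched as a pure function over the source list. This is a simplified illustration of the mode-dependent list updates only (the full IGMPv3 router behavior in RFC 3376 also adjusts source timers, which is omitted here):

```python
def apply_record(mode, sources, record_type, record_sources):
    """Return the updated source set for one (filter mode, record) pair."""
    s = set(sources)
    if record_type == "ALLOW":
        # Include mode: add the wanted sources to the list.
        # Exclude mode: delete them from the excluded list.
        s = s | record_sources if mode == "INCLUDE" else s - record_sources
    elif record_type == "BLOCK":
        # Include mode: delete the unwanted sources from the list.
        # Exclude mode: add them to the excluded list.
        s = s - record_sources if mode == "INCLUDE" else s | record_sources
    return s

assert apply_record("INCLUDE", {"S1"}, "ALLOW", {"S2"}) == {"S1", "S2"}
assert apply_record("INCLUDE", {"S1", "S2"}, "BLOCK", {"S2"}) == {"S1"}
assert apply_record("EXCLUDE", {"S1"}, "ALLOW", {"S1"}) == set()
assert apply_record("EXCLUDE", {"S1"}, "BLOCK", {"S2"}) == {"S1", "S2"}
```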

IGMP SSM mapping

The IGMP SSM mapping feature provides SSM support for receiver hosts that are running IGMPv1 or IGMPv2. This feature is implemented by configuring static IGMP SSM mappings on the IGMP-enabled routers.
The SSM model assumes that the IGMP-enabled routers have identified the desired multicast sources when receivers join multicast groups.
A host running IGMPv3 can explicitly specify multicast source addresses in its reports.
A host running IGMPv1 or IGMPv2, however, cannot specify multicast source addresses in its
reports. In this case, you must configure the IGMP SSM mapping feature to translate the (*, G) information in the IGMPv1 or IGMPv2 reports into (G, INCLUDE, (S1, S2...)) information.
Figure 24 IGMP SSM mapping
As shown in Figure 24, Host A, Host B, and Host C on an SSM network run IGMPv1, IGMPv2, and IGMPv3, respectively. To provide the SSM service for Host A and Host B, you must configure the IGMP SSM mapping feature on Router A.
When the IGMP SSM mapping feature is configured, Router A checks the multicast group address G in the received IGMPv1 or IGMPv2 report, and does the following:
If G is not in the SSM group range, Router A provides the ASM service.
If G is in the SSM group range but does not have relevant IGMP SSM mappings, Router A drops the
message.
If G is in the SSM group range and has relevant IGMP SSM mappings, Router A translates the (*,
G) information in the IGMP report into (G, INCLUDE, (S1, S2...)) information to provide the SSM service.
NOTE:
The IGMP SSM mapping feature does not process IGMPv3 reports.
For more information about SSM group ranges, see "Configuring PIM."
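The three-way decision above can be sketched as follows. This is an illustration, not device code; the 232.0.0.0/8 range (the IANA default SSM range) and the mapping table contents are assumptions for the example, since on the device both come from configuration:

```python
import ipaddress

SSM_RANGE = ipaddress.ip_network("232.0.0.0/8")      # assumed SSM group range
SSM_MAPPINGS = {"232.1.1.1": {"10.1.1.100"}}         # G -> mapped sources

def handle_v1v2_report(group):
    """Classify an IGMPv1/IGMPv2 (*, G) report under IGMP SSM mapping."""
    if ipaddress.ip_address(group) not in SSM_RANGE:
        return ("ASM", group, None)                  # ordinary ASM service
    if group not in SSM_MAPPINGS:
        return ("DROP", group, None)                 # SSM group, no mapping
    # Translate (*, G) into (G, INCLUDE, (S1, S2, ...)).
    return ("SSM", group, sorted(SSM_MAPPINGS[group]))

assert handle_v1v2_report("225.1.1.1")[0] == "ASM"    # outside the SSM range
assert handle_v1v2_report("232.9.9.9")[0] == "DROP"   # in range, no mapping
assert handle_v1v2_report("232.1.1.1") == ("SSM", "232.1.1.1", ["10.1.1.100"])
```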

IGMP proxying

In a simple tree-shaped topology, it is not necessary to run multicast routing protocols, such as PIM, on edge devices. Instead, you can configure IGMP proxying on these devices. With IGMP proxying configured, the edge device serves as an IGMP proxy for the downstream hosts. It sends IGMP messages, maintains group memberships, and implements multicast forwarding based on the memberships. In this case, the IGMP proxy device is a host but no longer a PIM neighbor to the upstream device.
Figure 25 IGMP proxying
As shown in Figure 25, an IGMP proxy device has the following types of interfaces:
Upstream interface—Also called the "proxy interface." A proxy interface is an interface on which
IGMP proxying is configured. It is in the direction toward the root of the multicast forwarding tree. An upstream interface acts as a host that is running IGMP. Therefore, it is also called the "host interface."
Downstream interface—An interface that is running IGMP and is not in the direction toward the
root of the multicast forwarding tree. A downstream interface acts as a router that is running IGMP. Therefore, it is also called the "router interface."
An IGMP proxy device maintains a group membership database, which stores the group memberships on all the downstream interfaces. Each entry comprises the multicast address, filter mode, and source list. Such an entry is a collection of members in the same multicast group on each downstream interface.
An IGMP proxy device performs host functions on the upstream interface based on the database. It responds to queries according to the information in the database or sends join/leave messages when the database changes. On the other hand, the IGMP proxy device performs router functions on the downstream interfaces by participating in the querier election, sending queries, and maintaining memberships based on the reports.
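The group membership database described above can be sketched as a merge over downstream interfaces. This is an illustrative model, not device code; only Include-mode source lists are merged to keep the example short:

```python
def merge_downstream(downstream_state):
    """downstream_state: {interface: {group: set_of_sources}}.

    Build the proxy's membership database: for each group, the union of
    the memberships learned on all downstream interfaces. This is what
    the upstream (host-side) interface reports toward the root.
    """
    database = {}
    for groups in downstream_state.values():
        for group, sources in groups.items():
            database.setdefault(group, set()).update(sources)
    return database

state = {
    "GE2/1/1": {"225.1.1.1": {"10.1.1.100"}},
    "GE2/1/2": {"225.1.1.1": {"10.1.1.200"}, "225.1.1.2": set()},
}
db = merge_downstream(state)
assert db["225.1.1.1"] == {"10.1.1.100", "10.1.1.200"}
assert "225.1.1.2" in db     # a group with no source filter is still joined
```

When the database changes (a group appears or disappears after a merge), the proxy sends the corresponding join or leave on its upstream interface.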

IGMP support for VPNs

IGMP maintains group memberships on a per-interface basis. After receiving an IGMP message on an interface, IGMP processes the message within the VPN to which the interface belongs. IGMP only communicates with other multicast protocols within the same VPN instance.

Protocols and standards

RFC 1112, Host Extensions for IP Multicasting
RFC 2236, Internet Group Management Protocol, Version 2
RFC 3376, Internet Group Management Protocol, Version 3

IGMP configuration task list

Task at a glance
Configuring basic IGMP functions:
(Required.) Enabling IGMP
(Optional.) Specifying the IGMP version
(Optional.) Configuring an interface as a static member interface
(Optional.) Configuring a multicast group filter
Adjusting IGMP performance:
(Optional.) Configuring IGMP query parameters
(Optional.) Enabling IGMP fast-leave processing
(Optional.) Configuring IGMP SSM mappings
Configuring IGMP proxying:
(Optional.) Enabling IGMP proxying
(Optional.) Configuring multicast forwarding on a downstream interface
(Optional.) Configuring multicast load splitting on the IGMP proxy

Configuring basic IGMP functions

Before you configure basic IGMP functions, complete the following tasks:
Configure any unicast routing protocol so that all devices are interoperable at the network layer.
Configure PIM.
Determine the IGMP version.
Determine the multicast group and multicast source addresses for static group member
configuration.
Determine the ACL for multicast group filtering.

Enabling IGMP

To configure IGMP, enable IGMP on the interface where the multicast group memberships are established and maintained.
To enable IGMP:
1. Enter system view:
   system-view
2. Enable IP multicast routing and enter MRIB view (by default, IP multicast routing is disabled):
   multicast routing [ vpn-instance vpn-instance-name ]
3. Return to system view:
   quit
4. Enter interface view:
   interface interface-type interface-number
5. Enable IGMP (by default, IGMP is disabled):
   igmp enable

Specifying the IGMP version

Because the protocol packets of different IGMP versions vary in structure and type, specify the same IGMP version for all routers on the same subnet. Otherwise, IGMP cannot operate correctly.
To specify an IGMP version:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Specify an IGMP version (the default setting is IGMPv2):
   igmp version version-number

Configuring an interface as a static member interface

To test multicast data forwarding, you can configure an interface as a static member of a multicast group. Then, the interface can always receive multicast data addressed to the multicast group.
Configuration guidelines
A static member interface has the following restrictions:
{ If the interface is IGMP and PIM-SM enabled, it must be a PIM-SM DR.
{ If the interface is IGMP enabled but not PIM-SM enabled, it must be an IGMP querier.
For more information about PIM-SM and DR, see "Configuring PIM."
A static member interface does not respond to queries that the IGMP querier sends. When you configure an interface as a static member or cancel this configuration, the interface does not send any IGMP report or IGMP leave message. This is because the interface is not a real member of the multicast group or the multicast source and group.
Configuration procedure
To configure an interface as a static member interface:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Configure the interface as a static member interface (by default, an interface is not a static member of any multicast group or multicast source and group):
   igmp static-group group-address [ source source-address ]

Configuring a multicast group filter

To prevent the hosts attached to an interface from joining certain multicast groups, you can specify an ACL on the interface as a packet filter. As a result, the receiver hosts attached to this interface can join only the multicast groups that the ACL permits.
To configure a multicast group filter:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Configure a multicast group filter (by default, no multicast group filter is configured on an interface, and hosts attached to the interface can join any multicast group):
   igmp group-policy acl-number [ version-number ]
NOTE:
If you configure the interface as a static member interface for a multicast group, this configuration is not effective on the multicast group or the multicast source and group.

Adjusting IGMP performance

Before adjusting IGMP performance, complete the following tasks:
Configure any unicast routing protocol so that all devices are interoperable at the network layer.
Configure basic IGMP functions.

Configuring IGMP query parameters

The IGMP querier periodically sends IGMP general queries at the IGMP general query interval to check for multicast group members on the network. You can modify the IGMP general query interval based on the actual network conditions.
If multiple multicast routers exist on the same subnet, only the IGMP querier sends IGMP queries. When a non-querier receives an IGMP query, it starts an IGMP other querier present timer. If it receives a new IGMP query before the timer expires, the non-querier resets the timer. Otherwise, it considers that the querier has failed and starts a new querier election.
When you configure the IGMP query parameters, follow these guidelines:
To avoid frequent IGMP querier changes, you must set the IGMP other querier present interval
greater than the IGMP general query interval.
The configuration of the IGMP other querier present interval takes effect only on the devices that run
IGMPv2 and IGMPv3.
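The default other querier present timer is derived from the other query parameters (see the remarks in the procedure below), so the first guideline holds automatically with the defaults. The robustness variable of 2 and maximum response time of 10 seconds used here are the common IGMP defaults and are assumptions for this arithmetic sketch:

```python
def other_querier_present_default(query_interval=125, robustness=2,
                                  max_response_time=10):
    """Default other querier present timer:
    [general query interval] x [robustness variable]
      + [max response time for general queries] / 2."""
    return query_interval * robustness + max_response_time / 2

# With the assumed defaults the timer (255 s) comfortably exceeds the
# general query interval (125 s), as the guideline requires.
assert other_querier_present_default() == 255.0
assert other_querier_present_default() > 125
```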
To configure the IGMP query parameters:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Set the IGMP general query interval (the default setting is 125 seconds):
   igmp query-interval interval
4. Set the IGMP other querier present timer (by default, the timer is [ IGMP general query interval ] × [ IGMP robustness variable ] + [ maximum response time for IGMP general queries ] / 2):
   igmp other-querier-present-interval interval

Enabling IGMP fast-leave processing

In some applications, such as ADSL dial-up networking, only one multicast receiver host is attached to an interface of the IGMP querier. To allow fast response to the leave messages of the host when it switches frequently from one multicast group to another, you can enable fast-leave processing on the IGMP querier.
With IGMP fast-leave processing enabled, after receiving an IGMP leave message from a host, the IGMP querier directly sends a leave notification to the upstream device. It does not send IGMP group-specific queries or IGMP group-and-source-specific queries. This reduces leave latency and preserves network bandwidth.
The IGMP fast-leave processing configuration is effective only if the device is running IGMPv2 or IGMPv3.
To enable IGMP fast-leave processing:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Enable IGMP fast-leave processing (disabled by default):
   igmp fast-leave [ group-policy acl-number ]

Configuring IGMP SSM mappings

On an SSM network, some receiver hosts run only IGMPv1 or IGMPv2. To provide the SSM service to these receiver hosts, you can configure the IGMP SSM mapping feature on the IGMP-enabled routers.
The IGMP SSM mapping feature does not process IGMPv3 messages. To provide SSM services for all hosts that run different IGMP versions on a subnet, you must enable IGMPv3 on the interface that forwards multicast traffic onto the subnet.

Configuration prerequisites

Before you configure the IGMP SSM mapping feature, complete the following tasks:
Configure any unicast routing protocol so that all devices in the domain are interoperable at the
network layer.
Configure basic IGMP functions.

Configuration procedure

To configure IGMP SSM mappings:
1. Enter system view:
   system-view
2. Enter IGMP view:
   igmp [ vpn-instance vpn-instance-name ]
3. Configure IGMP SSM mappings (by default, no IGMP SSM mapping is configured):
   ssm-mapping source-address acl-number

Configuring IGMP proxying

This section describes how to configure IGMP proxying.

Configuration prerequisites

Before you configure the IGMP proxying feature, configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Enabling IGMP proxying

You can enable IGMP proxying on the interface in the direction toward the root of the multicast forwarding tree to make the device serve as an IGMP proxy.
Configuration guidelines
You cannot enable IGMP on an interface with IGMP proxying enabled. If you configure other IGMP commands on such an interface, only the igmp version command takes effect.
You cannot enable multicast routing protocols (such as PIM and MSDP) on an interface with IGMP
proxying enabled. In IGMPv1, the IGMP querier is the DR that is elected by PIM. Therefore, a device with its downstream interface running IGMPv1 cannot be elected as the DR and cannot serve as the IGMP querier.
Configuration procedure
To enable IGMP proxying:
1. Enter system view:
   system-view
2. Enable IP multicast routing and enter MRIB view (by default, IP multicast routing is disabled):
   multicast routing [ vpn-instance vpn-instance-name ]
3. Return to system view:
   quit
4. Enter interface view:
   interface interface-type interface-number
5. Enable the IGMP proxying feature (by default, IGMP proxying is disabled):
   igmp proxying enable

Configuring multicast forwarding on a downstream interface

Typically, only IGMP queriers can forward multicast traffic and non-queriers cannot. This prevents multicast data from being repeatedly forwarded. If a downstream interface on the IGMP proxy device has failed in the querier election, you must enable multicast forwarding capability on this interface. Otherwise, downstream hosts cannot receive multicast data.
On a shared-media network, there might exist more than one IGMP proxy device. If the downstream interface of one IGMP proxy device has been elected as the querier, you cannot enable multicast forwarding on any other non-querier downstream interface of the other proxy devices. Otherwise, duplicate multicast traffic might be received on the shared-media network.
To enable multicast forwarding on a downstream interface:
1. Enter system view:
   system-view
2. Enter interface view:
   interface interface-type interface-number
3. Enable multicast forwarding on a non-querier downstream interface (by default, the multicast forwarding capability on a non-querier downstream interface is disabled):
   igmp proxy forwarding

Configuring multicast load splitting on the IGMP proxy

If IGMP proxying is enabled on multiple interfaces of the IGMP proxy device, one of the following occurs:
If you disable the load splitting function, only the interface with the highest IP address forwards the multicast traffic.
If you enable the load splitting function, the IGMP proxying-enabled interfaces share the multicast traffic on a per-group basis.
To enable the load splitting function on the IGMP proxy:
Step 1: Enter system view.
   system-view
Step 2: Enter IGMP view.
   igmp [ vpn-instance vpn-instance-name ]
Step 3: Enable the load splitting function on the IGMP proxy.
   proxy multipath
   By default, the load splitting function is disabled.
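For example, a minimal sketch of enabling load splitting for the public network (no VPN instance specified) might look like this:
<Sysname> system-view
[Sysname] igmp
[Sysname-igmp] proxy multipath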

Displaying and maintaining IGMP

CAUTION:
The reset igmp group command might cause multicast data transmission failures.
Execute display commands in any view and reset commands in user view.
Display IGMP group information.
   display igmp [ vpn-instance vpn-instance-name ] group [ group-address | interface interface-type interface-number ] [ static | verbose ]
Display IGMP information.
   display igmp [ vpn-instance vpn-instance-name ] interface [ interface-type interface-number ] [ proxy ] [ verbose ]
Display multicast group membership information maintained by the IGMP proxy.
   display igmp [ vpn-instance vpn-instance-name ] proxy group [ group-address | interface interface-type interface-number ] [ verbose ]
Display information about the IGMP proxy routing table.
   display igmp [ vpn-instance vpn-instance-name ] proxy routing-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] ] * [ verbose ]
Display IGMP SSM mappings.
   display igmp [ vpn-instance vpn-instance-name ] ssm-mapping group-address
Clear all the dynamic IGMP group entries of the specified IGMP group or all IGMP groups. (This command cannot remove static IGMP group entries.)
   reset igmp [ vpn-instance vpn-instance-name ] group { all | interface interface-type interface-number { all | group-address [ mask { mask | mask-length } ] [ source-address [ mask { mask | mask-length } ] ] } }

IGMP configuration examples

This section provides examples of configuring IGMP on routers.

Basic IGMP functions configuration examples

Network requirements
As shown in Figure 26:
VOD streams are sent to receiver hosts in multicast. Receiver hosts of different organizations form stub networks N1 and N2. Host A and Host C are receiver hosts in N1 and N2, respectively.
IGMPv2 runs between Router A and N1, and between the other two routers and N2. Router A acts as the IGMP querier in N1. Router B acts as the IGMP querier in N2 because it has a lower IP address.
Configure the routers to meet the following requirements:
The hosts in N1 can join only the multicast group 224.1.1.1.
The hosts in N2 can join any multicast groups.
Figure 26 Network diagram
(Figure description: Router A, Router B, and Router C connect to the PIM network through GigabitEthernet 2/1/2. Router A, the querier, connects to stub network N1 through GigabitEthernet 2/1/1 at 10.110.1.1/24. N1 contains receiver Host A and Host B. Router B (GigabitEthernet 2/1/1, 10.110.2.1/24) and Router C (GigabitEthernet 2/1/1, 10.110.2.2/24) connect to stub network N2, which contains receiver Host C and Host D.)
Configuration procedure
1. Assign an IP address and subnet mask to each interface according to Figure 26. (Details not
shown.)
2. Configure OSPF on the routers of the PIM network to make sure the following conditions are met:
(Details not shown.)
{ The routers are interoperable at the network layer.
{ The routers can dynamically update their routing information.
3. Enable IP multicast routing, and enable IGMP and PIM-DM:
# On Router A, enable IP multicast routing.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1.
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] igmp enable
[RouterA-GigabitEthernet2/1/1] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
# On Router B, enable IP multicast routing.
<RouterB> system-view
[RouterB] multicast routing
[RouterB-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1.
[RouterB] interface gigabitethernet 2/1/1
[RouterB-GigabitEthernet2/1/1] igmp enable
[RouterB-GigabitEthernet2/1/1] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterB] interface gigabitethernet 2/1/2
[RouterB-GigabitEthernet2/1/2] pim dm
[RouterB-GigabitEthernet2/1/2] quit
# On Router C, enable IP multicast routing.
<RouterC> system-view
[RouterC] multicast routing
[RouterC-mrib] quit
# Enable IGMP on GigabitEthernet 2/1/1.
[RouterC] interface gigabitethernet 2/1/1
[RouterC-GigabitEthernet2/1/1] igmp enable
[RouterC-GigabitEthernet2/1/1] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterC] interface gigabitethernet 2/1/2
[RouterC-GigabitEthernet2/1/2] pim dm
[RouterC-GigabitEthernet2/1/2] quit
4. Configure a multicast group filter on Router A so that the hosts connected to GigabitEthernet 2/1/1 can join only the multicast group 224.1.1.1.
[RouterA] acl number 2001
[RouterA-acl-basic-2001] rule permit source 224.1.1.1 0
[RouterA-acl-basic-2001] quit
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] igmp group-policy 2001
[RouterA-GigabitEthernet2/1/1] quit
Verifying the configuration
# Display IGMP information on GigabitEthernet 2/1/1 of Router B.
[RouterB] display igmp interface gigabitethernet 2/1/1
 GigabitEthernet2/1/1(10.110.2.1):
   IGMP is enabled.
   IGMP version: 2
   Query interval for IGMP: 125s
   Other querier present time for IGMP: 255s
   Maximum query response time for IGMP: 10s
   Querier for IGMP: 10.110.2.1 (This router)
  IGMP groups reported in total: 1

IGMP SSM mapping configuration example

Network requirements
As shown in Figure 27:
The PIM-SM domain uses both the ASM model and SSM model for multicast delivery.
GigabitEthernet 2/1/3 on Router D serves as the C-BSR and C-RP. The SSM group range is
232.1.1.0/24.
IGMPv3 runs on GigabitEthernet 2/1/1 on Router D. The receiver host runs IGMPv2, and does not
support IGMPv3. Therefore, the receiver host cannot specify expected multicast sources in its membership reports.
Source 1, Source 2, and Source 3 send multicast packets to multicast groups in the SSM group
range.
Configure the IGMP SSM mapping feature on Router D so that the receiver host can receive multicast data from Source 1 and Source 3 only.
Figure 27 Network diagram
(Figure description: Source 1, Source 2, and Source 3 connect to Router A, Router B, and Router C, respectively, through GigabitEthernet 2/1/1. The receiver connects to Router D through GigabitEthernet 2/1/1. The routers interconnect through GigabitEthernet 2/1/2 and GigabitEthernet 2/1/3 to form the PIM-SM domain. See Table 6 for the interface and IP address assignment.)
Table 6 Interface and IP address assignment
Device      Interface                IP address
Source 1    N/A                      133.133.1.1/24
Source 2    N/A                      133.133.2.1/24
Source 3    N/A                      133.133.3.1/24
Receiver    N/A                      133.133.4.1/24
Router A    GigabitEthernet 2/1/1    133.133.1.2/24
Router A    GigabitEthernet 2/1/2    192.168.1.1/24
Router A    GigabitEthernet 2/1/3    192.168.4.2/24
Router B    GigabitEthernet 2/1/1    133.133.2.2/24
Router B    GigabitEthernet 2/1/2    192.168.1.2/24
Router B    GigabitEthernet 2/1/3    192.168.2.1/24
Router C    GigabitEthernet 2/1/1    133.133.3.2/24
Router C    GigabitEthernet 2/1/2    192.168.3.1/24
Router C    GigabitEthernet 2/1/3    192.168.2.2/24
Router D    GigabitEthernet 2/1/1    133.133.4.2/24
Router D    GigabitEthernet 2/1/2    192.168.3.2/24
Router D    GigabitEthernet 2/1/3    192.168.4.1/24
Configuration procedure
1. Assign an IP address and subnet mask to each interface according to Figure 27. (Details not
shown.)
2. Configure OSPF on the routers in the PIM-SM domain to make sure the following conditions are
met: (Details not shown.)
{ The routers are interoperable at the network layer.
{ The routers can dynamically update their routing information.
3. Enable IP multicast routing, PIM-SM, and IGMP:
# On Router D, enable IP multicast routing.
<RouterD> system-view
[RouterD] multicast routing
[RouterD-mrib] quit
# Enable IGMPv3 on GigabitEthernet 2/1/1 (the interface that connects to the receiver host).
[RouterD] interface gigabitethernet 2/1/1
[RouterD-GigabitEthernet2/1/1] igmp enable
[RouterD-GigabitEthernet2/1/1] igmp version 3
[RouterD-GigabitEthernet2/1/1] quit
# Enable PIM-SM on the other interfaces.
[RouterD] interface gigabitethernet 2/1/2
[RouterD-GigabitEthernet2/1/2] pim sm
[RouterD-GigabitEthernet2/1/2] quit
[RouterD] interface gigabitethernet 2/1/3
[RouterD-GigabitEthernet2/1/3] pim sm
[RouterD-GigabitEthernet2/1/3] quit
# On Router A, enable IP multicast routing, and enable PIM-SM on each interface.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] pim sm
[RouterA-GigabitEthernet2/1/1] quit
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim sm
[RouterA-GigabitEthernet2/1/2] quit
[RouterA] interface gigabitethernet 2/1/3
[RouterA-GigabitEthernet2/1/3] pim sm
[RouterA-GigabitEthernet2/1/3] quit
# Configure Router B and Router C in the same way Router A is configured. (Details not shown.)
4. Configure C-BSR and C-RP interfaces on Router D.
[RouterD] pim
[RouterD-pim] c-bsr 192.168.4.1
[RouterD-pim] c-rp 192.168.4.1
[RouterD-pim] quit
5. Configure the SSM group range:
# Configure the SSM group range 232.1.1.0/24 on Router D.
[RouterD] acl number 2000
[RouterD-acl-basic-2000] rule permit source 232.1.1.0 0.0.0.255
[RouterD-acl-basic-2000] quit
[RouterD] pim
[RouterD-pim] ssm-policy 2000
[RouterD-pim] quit
# Configure the SSM group range on Router A, Router B, and Router C in the same way Router D is configured. (Details not shown.)
6. Configure IGMP SSM mappings on Router D.
[RouterD] igmp
[RouterD-igmp] ssm-mapping 133.133.1.1 2000
[RouterD-igmp] ssm-mapping 133.133.3.1 2000
[RouterD-igmp] quit
Verifying the configuration
# On Router D, display IGMP SSM mappings for multicast group 232.1.1.1 on the public network.
[RouterD] display igmp ssm-mapping 232.1.1.1
 Group: 232.1.1.1
 Source list:
133.133.1.1
133.133.3.1
# Display information about the multicast groups created based on the configured IGMP SSM mappings on the public network.
[RouterD] display igmp group
IGMP groups in total: 1
 GigabitEthernet2/1/1(133.133.4.2):
  IGMP groups reported in total: 1
   Group address   Last reporter   Uptime     Expires
232.1.1.1 133.133.4.1 00:02:04 off
# Display PIM routing table information on the public network.
[RouterD] display pim routing-table
 Total 0 (*, G) entry; 2 (S, G) entry

 (133.133.1.1, 232.1.1.1)
     RP: 192.168.4.1
     Protocol: pim-ssm, Flag:
     UpTime: 00:13:25
     Upstream interface: GigabitEthernet2/1/3
         Upstream neighbor: 192.168.4.2
         RPF prime neighbor: 192.168.4.2
     Downstream interface(s) information:
     Total number of downstreams: 1
         1: GigabitEthernet2/1/1
             Protocol: igmp, UpTime: 00:13:25, Expires: -

 (133.133.3.1, 232.1.1.1)
     RP: 192.168.4.1
     Protocol: pim-ssm, Flag:
     UpTime: 00:13:25
     Upstream interface: GigabitEthernet2/1/2
         Upstream neighbor: 192.168.3.1
         RPF prime neighbor: 192.168.3.1
     Downstream interface(s) information:
     Total number of downstreams: 1
         1: GigabitEthernet2/1/1
             Protocol: igmp, UpTime: 00:13:25, Expires: -

IGMP proxying configuration example

Network requirements
As shown in Figure 28, PIM-DM runs on the core network. Host A and Host C in the stub network receive VOD information sent to multicast group 224.1.1.1.
Configure the IGMP proxying feature on Router B so that Router B can maintain group memberships and forward multicast traffic without running PIM-DM.
Figure 28 Network diagram
Configuration procedure
1. Assign an IP address and subnet mask to each interface according to Figure 28. (Details not
shown.)
2. Enable IP multicast routing, PIM-DM, IGMP, and IGMP proxying:
# Enable IP multicast routing on Router A.
<RouterA> system-view
[RouterA] multicast routing
[RouterA-mrib] quit
# Enable PIM-DM on GigabitEthernet 2/1/2.
[RouterA] interface gigabitethernet 2/1/2
[RouterA-GigabitEthernet2/1/2] pim dm
[RouterA-GigabitEthernet2/1/2] quit
# Enable IGMP on GigabitEthernet 2/1/1.
[RouterA] interface gigabitethernet 2/1/1
[RouterA-GigabitEthernet2/1/1] igmp enable
[RouterA-GigabitEthernet2/1/1] quit
# Enable IP multicast routing on Router B.
<RouterB> system-view
[RouterB] multicast routing
[RouterB-mrib] quit
# Enable IGMP proxying on GigabitEthernet 2/1/1.
[RouterB] interface gigabitethernet 2/1/1
[RouterB-GigabitEthernet2/1/1] igmp proxy enable
[RouterB-GigabitEthernet2/1/1] quit
# Enable IGMP on GigabitEthernet 2/1/2.
[RouterB] interface gigabitethernet 2/1/2
[RouterB-GigabitEthernet2/1/2] igmp enable
[RouterB-GigabitEthernet2/1/2] quit
Verifying the configuration
# Display the multicast group membership information maintained by the IGMP proxy on Router B.
[RouterB] display igmp proxy group
IGMP proxy group records in total: 1
 GigabitEthernet2/1/1(192.168.1.2):
  IGMP proxy group records in total: 1
   Group address   Member state   Expires
224.1.1.1 Delay 00:00:02

Troubleshooting IGMP

No membership information on the receiver-side router

Symptom
When a host sends a report for joining the multicast group G, no membership information of the multicast group G exists on the router closest to that host.
Analysis
The correctness of networking and interface connections and whether the protocol layer of the interface is up directly affect the generation of group membership information.
Multicast routing must be enabled on the router. IGMP must be enabled on the interface that connects to the host.
If the IGMP version on the router interface is lower than that on the host, the router cannot recognize the IGMP report from the host.
If you have configured the igmp group-policy command on the interface, the interface discards the report messages that fail to pass the filtering.
Solution
1. Use the display igmp interface command to verify that the networking, interface connection, and
IP address configuration are correct. If the command does not produce output, the interface is in an abnormal state. The reason might be that you have configured the shutdown command on the interface, the interface is not correctly connected, or the IP address configuration is not correctly completed.
2. Use the display current-configuration command to verify that multicast routing is enabled. If it is not
enabled, use the multicast routing command in system view to enable IP multicast routing. In addition, verify that IGMP is enabled on the associated interfaces.
3. Use the display igmp interface command to verify that the IGMP version on the interface is not lower than that on the host.
4. Use the display current-configuration interface command to verify that no ACL rule has been
configured to filter out the reports sent by the host to the multicast group G.
5. If the problem persists, contact HP Support.

Inconsistent membership information on the routers on the same subnet

Symptom
Different memberships are maintained on different IGMP routers on the same subnet.
Analysis
A router running IGMP maintains multiple parameters for each interface. Inconsistent IGMP
interface parameter configurations for routers on the same subnet will result in inconsistency of memberships.
Solution
Although IGMP routers are partially compatible with hosts that separately run different IGMP
versions, all routers on the same subnet must run the same IGMP version. Inconsistent IGMP versions running on routers on the same subnet leads to inconsistency of IGMP memberships.
1. Use the display current-configuration command to verify the IGMP information on the interfaces.
2. Use the display igmp interface command on all routers on the same subnet to verify the
IGMP-related timer settings. Make sure the settings are consistent on all the routers.
3. Use the display igmp interface command to verify that all the routers on the same subnet are
running the same IGMP version.
4. If the problem persists, contact HP Support.
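For example, if one router runs a different IGMP version, you might align it with the other routers on the subnet as follows (the interface name and version number are only illustrations):
[Sysname] interface gigabitethernet 2/1/1
[Sysname-GigabitEthernet2/1/1] igmp version 2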

Configuring PIM

Overview

Protocol Independent Multicast (PIM) provides IP multicast forwarding by leveraging unicast static routes or unicast routing tables generated by any unicast routing protocol, such as RIP, OSPF, IS-IS, or BGP. PIM uses the underlying unicast routing to generate a multicast routing table without relying on any particular unicast routing protocol.
PIM uses the RPF mechanism to implement multicast forwarding. When a multicast packet arrives on an interface of the device, it undergoes an RPF check. If the RPF check succeeds, the device creates a multicast routing entry and forwards the packet. If the RPF check fails, the device discards the packet. For more information about RPF, see "Configuring multicast routing and forwarding."
Based on the implementation mechanism, PIM includes the following categories:
Protocol Independent Multicast–Dense Mode (PIM-DM)
Protocol Independent Multicast–Sparse Mode (PIM-SM)
Bidirectional Protocol Independent Multicast (BIDIR-PIM)
Protocol Independent Multicast Source-Specific Multicast (PIM-SSM)
In this document, a PIM domain refers to a network composed of PIM routers.

PIM-DM overview

PIM-DM uses the push mode for multicast forwarding, and is suitable for small-sized networks with densely distributed multicast members.
The following describes the basic implementation of PIM-DM:
PIM-DM assumes that all downstream nodes want to receive multicast data from a source, so
multicast data is flooded to all downstream nodes on the network.
Branches without downstream receivers are pruned from the forwarding trees, leaving only those
branches that contain receivers.
The pruned state of a branch has a finite holdtime timer. When the timer expires, multicast data is
again forwarded to the pruned branch. This flood-and-prune cycle takes place periodically to maintain the forwarding branches.
The graft mechanism is used to reduce the latency for resuming the forwarding capability of a
previously pruned branch.
In PIM-DM, the multicast forwarding paths for a multicast group constitute a source tree. The source tree is rooted at the multicast source and has multicast group members as its "leaves." Because the source tree consists of the shortest paths from the multicast source to the receivers, it is also called a "shortest path tree (SPT)."
Neighbor discovery
In a PIM domain, each PIM interface on a router periodically multicasts PIM hello messages to all other PIM routers (identified by the address 224.0.0.13) on the local subnet. Through the exchange of hello messages, all PIM routers on the subnet discover their PIM neighbors, maintain PIM neighboring relationships with other routers, and build and maintain SPTs.
SPT building
The process of building an SPT is the flood-and-prune process:
1. In a PIM-DM domain, when the multicast source S sends multicast data to the multicast group G, the multicast data is flooded throughout the domain. A router performs an RPF check for the multicast data. If the RPF check succeeds, the router creates an (S, G) entry and forwards the data to all downstream nodes in the network. In the flooding process, all the routers in the PIM-DM domain create the (S, G) entry.
2. The nodes without downstream receivers are pruned. A router that has no downstream receivers multicasts a prune message to all PIM routers on the subnet. When an upstream node receives the prune message, it removes the receiving interface from the (S, G) entry. In this way, the upstream node stops forwarding subsequent packets addressed to that multicast group down to this node.
NOTE:
An (S, G) entry contains a multicast source address S, a multicast group address G, an outgoing interface list, and an incoming interface.
A prune process is initiated by a leaf router. As shown in Figure 29, the router interface that does not have any downstream receivers initiates a prune process by sending a prune message toward the multicast source. This prune process goes on until only necessary branches are left in the PIM-DM domain, and these necessary branches constitute an SPT.
Figure 29 SPT building
(Figure description: The server acts as the multicast source. The SPT connects the source to receiver Host B and receiver Host C; Host A is not a receiver. Prune messages travel upstream toward the source, and multicast packets flow down the remaining branches.)
The pruned state of a branch has a finite holdtime timer. When the timer expires, multicast data is again forwarded to the pruned branch. The flood-and-prune cycle takes place periodically to maintain the forwarding branches.
Graft
A previously pruned branch might have new downstream receivers. To reduce the latency for resuming the forwarding capability of this branch, a graft mechanism is used as follows:
Assert
1. The node that needs to receive the multicast data sends a graft message to its upstream node,
telling it to rejoin the SPT.
2. After receiving this graft message, the upstream node adds the receiving interface to the outgoing
interface list of the (S, G) entry. It also sends a graft-ack message to the graft sender.
3. If the graft sender receives a graft-ack message, the graft process finishes. Otherwise, the graft
sender continues to send graft messages at a configurable interval until it receives an acknowledgment from its upstream node.
On a subnet with more than one multicast router, the assert mechanism shuts off duplicate multicast flows to the network. It does this by electing a unique multicast forwarder for the subnet.
Figure 30 Assert mechanism
As shown in Figure 30, after Router A and Router B receive an (S, G) packet from the upstream node, they both forward the packet to the local subnet. As a result, the downstream node Router C receives two identical multicast packets. Both Router A and Router B, on their own downstream interfaces, receive a duplicate packet forwarded by the other. After detecting this condition, both routers send an assert message to all PIM routers (224.0.0.13) on the local subnet through the interface that received the packet. The assert message contains the multicast source address (S), the multicast group address (G), and the metric preference and metric of the unicast route/MBGP route/static multicast route to the multicast source. By comparing these parameters, either Router A or Router B becomes the unique forwarder of the subsequent (S, G) packets on the shared-media LAN. The comparison process is as follows:
1. The router with a higher metric preference to the multicast source wins.
2. If both routers have the same metric preference, the router with a smaller metric to the multicast
source wins.
3. If both routers have the same metric, the router with a higher IP address on the downstream
interface wins.

PIM-SM overview

PIM-DM uses the flood-and-prune cycles to build SPTs for multicast data forwarding. Although an SPT has the shortest paths from the multicast source to the receivers, it is built with a low efficiency. PIM-DM is not suitable for large- and medium-sized networks.
PIM-SM uses the pull mode for multicast forwarding, and it is suitable for large- and medium-sized networks with sparsely and widely distributed multicast group members.
The basic implementation of PIM-SM is as follows:
PIM-SM assumes that no hosts need multicast data. In the PIM-SM mode, a host must express its
interest in the multicast data for a multicast group before the data is forwarded to it. PIM-SM implements multicast forwarding by building and maintaining rendezvous point trees (RPTs). An RPT is rooted at a router that has been configured as the rendezvous point (RP) for a multicast group. The multicast data for the group is forwarded by the RP to the receivers along the RPT.
After a receiver host joins a multicast group, the receiver-side designated router (DR) sends a join
message to the RP for the multicast group. The path along which the message goes hop by hop to the RP forms a branch of the RPT.
When a multicast source sends multicast data to a multicast group, the source-side DR must register the multicast source with the RP by unicasting register messages to the RP. The source-side DR stops sending register messages only after it receives a register-stop message from the RP. When the RP receives the register message, it triggers the establishment of an SPT. Then, the subsequent multicast packets travel along the SPT to the RP. After reaching the RP, the multicast packets are duplicated and delivered to the receivers along the RPT.
Multicast data is replicated wherever the RPT branches, and this process automatically repeats until the multicast data reaches the receivers.
Neighbor discovery
PIM-SM uses the same neighbor discovery mechanism as PIM-DM does. For more information, see "Neighbor discovery."
DR election
On a shared-media LAN like Ethernet, only a DR forwards the multicast data. A DR is required in both the source-side network and receiver-side network. A source-side DR acts on behalf of the multicast source to send register messages to the RP. The receiver-side DR acts on behalf of the receiver hosts to send join messages to the RP.
PIM-DM does not require a DR. However, if IGMPv1 runs on any shared-media LAN in a PIM-DM domain, a DR must be elected to act as the IGMPv1 querier for the LAN. For more information about IGMP, see "Configuring IGMP."
IMPORTANT:
IGMP must be enabled on the device that acts as the receiver-side DR. Otherwise, the receiver hosts attached to the DR cannot join any multicast groups.
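As a sketch, you can influence which router wins the DR election by setting the DR priority carried in hello messages (the interface name and priority value are only illustrations; see the PIM configuration tasks and command reference for the exact procedure):
[Sysname] interface gigabitethernet 2/1/1
[Sysname-GigabitEthernet2/1/1] pim hello-option dr-priority 100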
Figure 31 DR election
As shown in Figure 31, the DR election process is as follows:
1. The routers on the shared-media LAN send hello messages to one another. The hello messages
contain the priority for DR election. The router with the highest DR priority is elected as the DR.
2. The router with the highest IP address wins the DR election under either of the following conditions:
{ All the routers have the same DR election priority.
{ A router does not support carrying the DR-election priority in hello messages.
If the DR fails, its PIM neighbor lifetime expires, and the other routers initiate a new DR election.
RP discovery
An RP is the core of a PIM-SM domain. For a small-sized, simple network, one RP is enough for multicast forwarding throughout the network. In this case, you can specify a static RP on each router in the PIM-SM domain.
However, in a PIM-SM network that covers a wide area, a huge amount of multicast data is forwarded by the RP. To lessen the RP burden and optimize the topological structure of the RPT, you can configure multiple candidate-RPs (C-RPs) in a PIM-SM domain. An RP is dynamically elected from the C-RPs by the bootstrap mechanism, and each elected RP provides services for a different multicast group range. For this purpose, you must configure a bootstrap router (BSR). A BSR serves as the administrative core of a PIM-SM domain. A PIM-SM domain has only one BSR, but can have multiple candidate-BSRs (C-BSRs) so that, if the BSR fails, a new BSR can be automatically elected from the C-BSRs to avoid service interruption.
NOTE:
An RP can provide services for multiple multicast groups, but a multicast group only uses one RP.
A device can act as a C-RP and a C-BSR at the same time.
As shown in Figure 32, each C-RP periodically unicasts its advertisement messages (C-RP-Adv messages) to the BSR. An advertisement message contains the address of the advertising C-RP and the multicast group range to which it is designated. The BSR collects these advertisement messages and organizes the C-RP information into an RP-set, which is a database of mappings between multicast groups and RPs. The BSR encapsulates the RP-set information in the bootstrap messages (BSMs) and floods the BSMs to the entire PIM-SM domain.
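The C-RPs and C-BSRs that participate in this mechanism are configured in PIM view. A minimal sketch might look like the following (the address is only an illustration; the full procedure appears in the PIM-SM configuration tasks):
[Sysname] pim
[Sysname-pim] c-bsr 192.168.4.1
[Sysname-pim] c-rp 192.168.4.1
[Sysname-pim] quit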
Figure 32 Information exchange between C-RPs and BSR
Based on the information in the RP-set, all routers in the network can select an RP for a specific multicast group based on the following rules:
1. The C-RP that is designated to the smallest group range wins.
2. If the C-RPs are designated to the same group ranges, the C-RP with the highest priority wins.
3. If the C-RPs have the same priority, the C-RP with the largest hash value wins. The hash value is calculated through the hash algorithm.
4. If the C-RPs have the same hash value, the C-RP with the highest IP address wins.
Anycast RP
PIM-SM requires only one active RP to serve each multicast group. If the active RP fails, the multicast traffic might be interrupted. The Anycast RP mechanism enables redundancy backup among RPs by configuring multiple RPs with the same IP address. A multicast source registers with the nearest RP, or a receiver joins the nearest RP, to implement source information synchronization.
Anycast RP has the following benefits:
Optimal RP path—A multicast source registers with the nearest RP to build an optimal SPT. A receiver joins the nearest RP to build an optimal RPT.
Redundancy backup among RPs—When an RP fails, the RP-related sources and receiver-side DRs will register with or join their nearest available RPs. This achieves redundancy backup among RPs.
Anycast RP is implemented in either of the following methods:
Anycast RP through MSDP—In this method, you can configure multiple RPs with the same IP address for one multicast group and configure MSDP peering relationships between them. For more information about Anycast RP through MSDP, see "Configuring MSDP."
Anycast RP through PIM-SM—In this method, you can configure multiple RPs for one multicast group and add them to an Anycast RP set. This method introduces the following concepts:
{ Anycast RP set—A set of RPs that are designated to the same multicast group.
{ Anycast RP member—Each RP in the Anycast RP set.
{ Anycast RP member address—IP address of each Anycast RP member for communication among the RP members.
{ Anycast RP address—IP address of the Anycast RP set for communication within the PIM-SM
domain. It is also known as RPA.
As shown in Figure 33, RP 1, RP 2, and RP 3 are members of an Anycast RP set.
Figure 33 Anycast RP through PIM-SM
The following describes how Anycast RP through PIM-SM is implemented:
a. RP 1 receives a register message destined to the Anycast RP address (RPA). Because the
message is not from other Anycast RP members (RP 2 or RP 3), RP 1 assumes that the register message is from the DR. RP 1 changes the source IP address of the register message to its own address and sends the message to the other members (RP 2 and RP 3).
If a router acts as both a DR and an RP, it creates a register message, and then forwards the message to the other RP members.
b. After receiving the register message, RP 2 and RP 3 find out that the source address of the
register message is an Anycast RP member address. They stop forwarding the message to other routers.
In Anycast RP implementation, an RP must forward the register message from the DR to other Anycast RP members to synchronize multicast source information.
RPT building
Figure 34 RPT building in a PIM-SM domain
(Figure description: The server acts as the multicast source. The RPT is rooted at the RP and reaches receiver Host B and receiver Host C through their receiver-side DRs; Host A is not a receiver. Join messages travel from the DRs toward the RP, and multicast packets flow down the RPT.)
As shown in Figure 34, the process of building an RPT is as follows:
1. When a receiver wants to join the multicast group G, it uses an IGMP message to inform the
receiver-side DR.
2. After getting the receiver information, the DR sends a join message, which is forwarded hop by
hop to the RP for the multicast group.
3. The routers along the path from the DR to the RP form an RPT branch. Each router on this branch
adds to its forwarding table a (*, G) entry, where the asterisk (*) represents any multicast source. The RP is the root of the RPT, and the DR is a leaf of the RPT.
When the multicast data addressed to the multicast group G reaches the RP, the RP forwards the data to the DR along the established RPT, and finally to the receiver.
When a receiver is no longer interested in the multicast data addressed to the multicast group G, the receiver-side DR sends a prune message. The prune message goes hop by hop along the RPT to the RP. After receiving the prune message, the upstream node deletes the interface that connects to this downstream node from the outgoing interface list. It also checks whether it still has receivers for that multicast group. If not, the router continues to forward the prune message to its upstream router.
Multicast source registration
The multicast source uses the registration process to inform an RP of its presence.
Figure 35 Multicast source registration
As shown in Figure 35, the multicast source registers with the RP as follows:
1. The multicast source S sends the first multicast packet to the multicast group G. When receiving the
multicast packet, the source-side DR encapsulates the packet into a PIM register message and unicasts the message to the RP.
2. After the RP receives the register message, it decapsulates the register message and forwards the
register message down to the RPT. Meanwhile, it sends an (S, G) source-specific join message toward the multicast source. The routers along the path from the RP to the multicast source constitute an SPT branch. Each router on this branch creates an (S, G) entry in its forwarding table.
3. The subsequent multicast data from the multicast source are forwarded to the RP along the
established SPT. When the multicast data reaches the RP along the SPT, the RP forwards the data to the receivers along the RPT. Meanwhile, it unicasts a register-stop message to the source-side DR to prevent the DR from unnecessarily encapsulating the data.
Switchover to SPT
In a PIM-SM domain, only one RP and one RPT provide services for a specific multicast group. Before the switchover to SPT occurs, the source-side DR encapsulates all multicast data addressed to the multicast group in register messages and sends them to the RP. After receiving these register messages, the RP decapsulates them and forwards them to the receiver-side DR along the RPT.
Multicast forwarding along the RPT has the following weaknesses:
Encapsulation and decapsulation are complex on the source-side DR and the RP.
The path for a multicast packet might not be the shortest one.
The RP might be overloaded by multicast traffic bursts.
To eliminate these weaknesses, PIM-SM allows an RP or the receiver-side DR to initiate a switchover to SPT when the traffic rate exceeds a specific threshold.
The RP initiates a switchover to SPT:
The RP periodically checks the multicast packet forwarding rate. If the RP finds that the traffic rate exceeds the specified threshold, it sends an (S, G) source-specific join message toward the multicast source. The routers along the path from the RP to the multicast source constitute an SPT.
The subsequent multicast data is forwarded to the RP along the SPT without being encapsulated into register messages.
For more information about the switchover to SPT initiated by the RP, see "Multicast source
registration."
The receiver-side DR initiates a switchover to SPT:
The receiver-side DR periodically checks the forwarding rate of the multicast packets that the multicast source S sends to the multicast group G. If the forwarding rate exceeds the specified threshold, the DR initiates a switchover to SPT as follows:
a. The receiver-side DR sends an (S, G) source-specific join message toward the multicast source.
The routers along the path create an (S, G) entry in their forwarding table to constitute an SPT branch.
b. When the multicast packets reach the router where the RPT and the SPT diverge, the router
drops the multicast packets that travel along the RPT. It then sends a prune message with the RP bit set toward the RP.
c. After receiving the prune message, the RP forwards it toward the multicast source (assuming
that only one receiver exists). The switchover to SPT is then complete. The subsequent multicast packets travel along the SPT from the multicast source to the receiver hosts.
With the switchover to SPT, PIM-SM builds SPTs more economically than PIM-DM does.
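The rate check that triggers the switchover, on either the RP or the receiver-side DR, can be sketched as follows. The sampling interval and threshold value here are illustrative assumptions, not HP defaults.

```python
# Hedged sketch of the periodic traffic-rate check that triggers the
# switchover to SPT when the forwarding rate exceeds a configured threshold.

THRESHOLD_KBPS = 1024  # example threshold, not a documented default

def should_switch_to_spt(bytes_in_interval, interval_seconds,
                         threshold_kbps=THRESHOLD_KBPS):
    """Return True if the (S, G) traffic rate exceeds the threshold, in
    which case the checking router sends an (S, G) join toward the source."""
    rate_kbps = bytes_in_interval * 8 / 1000 / interval_seconds
    return rate_kbps > threshold_kbps
```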
Assert
PIM-SM uses a similar assert mechanism as PIM-DM does. For more information, see "Assert."

BIDIR-PIM overview

In some many-to-many applications, such as a multi-party video conference, multiple receivers might be interested in the multicast data from multiple multicast sources. With PIM-DM or PIM-SM, each router along the SPT must create an (S, G) entry for each multicast source, consuming a lot of system resources.
BIDIR-PIM addresses the problem. Derived from PIM-SM, BIDIR-PIM builds and maintains a bidirectional RPT, which is rooted at the RP and connects the multicast sources and the receivers. Along the bidirectional RPT, the multicast sources send multicast data to the RP, and the RP forwards the data to the receivers. Each router along the bidirectional RPT needs to maintain only one (*, G) entry, saving system resources.
BIDIR-PIM is suitable for a network with dense multicast sources and receivers.
Neighbor discovery
BIDIR-PIM uses the same neighbor discovery mechanism as PIM-SM does. For more information, see "Neighbor discovery."
RP discovery
BIDIR-PIM uses the same RP discovery mechanism as PIM-SM does. For more information, see "RP
discovery." In BIDIR-PIM, an RPF interface is the interface toward an RP, and an RPF neighbor is the
address of the next hop to the RP.
In PIM-SM, an RP must be specified with a real IP address. In BIDIR-PIM, an RP can be specified with a virtual IP address, which is called the "rendezvous point address (RPA)." The link corresponding to the RPA's subnet is called the "rendezvous point link (RPL)." All interfaces connected to the RPL can act as the RPs, and they back up one another.
DF election
On a subnet with multiple multicast routers, duplicate multicast packets might be forwarded to the RP. To address this issue, BIDIR-PIM uses a designated forwarder (DF) election mechanism to elect a unique DF for each RP on each subnet in the BIDIR-PIM domain. Only the DF can forward multicast data to the RP.
DF election is not necessary for an RPL.
Figure 36 DF election
As shown in Figure 36, without the DF election mechanism, both Router B and Router C can receive multicast packets from Router A. They can also forward the packets to downstream routers on the local subnet. As a result, the RP (Router E) receives duplicate multicast packets. With the DF election mechanism, upon receiving the RP information, Router B and Router C initiate a DF election process for the RP:
1. Router B and Router C multicast a DF election message to all PIM routers (224.0.0.13). The
election message carries the RP's address, and the priority and metric of the unicast route, MBGP route, or static multicast route to the RP.
2. The router whose route has the higher priority becomes the DF.
3. If the priorities tie, the router whose route has the lower metric wins the DF election.
4. If the metrics also tie, the router with the higher IP address wins.
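The tie-breaking rules above can be expressed as a single ordered comparison. This is an illustrative sketch; the candidate tuples and function name are assumptions made for the example.

```python
# Sketch of DF election tie-breaking: higher route priority wins, then
# lower metric, then higher IP address.
import ipaddress

def elect_df(candidates):
    """candidates: iterable of (router_ip, route_priority, route_metric).
    Returns the IP address of the elected DF."""
    return max(
        candidates,
        # Negate the metric so that a lower metric sorts as "better".
        key=lambda c: (c[1], -c[2], int(ipaddress.ip_address(c[0]))),
    )[0]
```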
Bidirectional RPT building
A bidirectional RPT comprises a receiver-side RPT and a source-side RPT. The receiver-side RPT is rooted at the RP and takes the routers that directly connect to the receivers as leaves. The source-side RPT is also rooted at the RP but takes the routers that directly connect to the sources as leaves. The processes for building these two RPTs are different.
Figure 37 RPT building at the receiver side
As shown in Figure 37, the process for building a receiver-side RPT is the same as the process for building an RPT in PIM-SM:
1. When a receiver wants to join the multicast group G, it uses an IGMP message to inform the
directly connected router.
2. After receiving the message, the router sends a join message, which is forwarded hop by hop to
the RP for the multicast group.
3. The routers along the path from the receiver's directly connected router to the RP form an RPT
branch. Each router on this branch adds a (*, G) entry to its forwarding table.
After a receiver leaves the multicast group G, the directly connected router sends a prune message. The prune message goes hop by hop along the reverse direction of the RPT to the RP. After receiving the prune message, an upstream node removes the interface that connects to the downstream node from the outgoing interface list. It also checks whether it has receivers for that multicast group. If not, the router continues to forward the prune message to its upstream router.
Figure 38 RPT building at the multicast source side
As shown in Figure 38, the process for building a source-side RPT is relatively simple:
1. When a multicast source sends multicast packets to the multicast group G, the DF in each subnet
unconditionally forwards the packets to the RP.
2. The routers along the path from the source's directly connected router to the RP constitute an RPT
branch. Each router on this branch adds to its forwarding table a (*, G) entry, where the asterisk (*) represents any multicast source.
After a bidirectional RPT is built, the multicast sources send multicast traffic to the RP along the source-side RPT. When the multicast traffic arrives at the RP, the RP forwards the traffic to the receivers along the receiver-side RPT.
IMPORTANT:
If a receiver and a source are at the same side of the RP, the source-side RPT and the receiver-side RPT might meet at a node before reaching the RP. In this case, the multicast packets from the multicast source to the receiver are directly forwarded by the node, instead of by the RP.
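The forwarding behavior on a bidirectional RPT, including the shortcut described in the note above, can be sketched as follows. The interface names and function are illustrative assumptions, not an HP implementation.

```python
# Hypothetical sketch of per-packet forwarding on a bidirectional RPT:
# a router sends the packet toward the RP and down every (*, G) outgoing
# interface except the one it arrived on. A node where the source-side and
# receiver-side branches meet therefore serves the receiver directly,
# without the packet having to pass through the RP.

def bidir_forward(arrival_if, rpf_if_toward_rp, star_g_oil):
    """Return the set of interfaces the packet is forwarded out of."""
    out = set(star_g_oil) | {rpf_if_toward_rp}
    out.discard(arrival_if)  # never send back where it came from
    return out
```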

Administrative scoping overview

Typically, a PIM-SM domain or a BIDIR-PIM domain contains only one BSR, which is responsible for advertising RP-set information within the entire PIM-SM domain or BIDIR-PIM domain. The information about all multicast groups is forwarded within the network that the BSR administers. This is called the "non-scoped BSR mechanism."
Administrative scoping mechanism
To implement refined management, you can divide a PIM-SM domain or BIDIR-PIM domain into a global-scoped zone and multiple administratively-scoped zones (admin-scoped zones). This is called the "administrative scoping mechanism."
The administrative scoping mechanism effectively releases stress on the management in a single-BSR domain and enables provision of zone-specific services through private group addresses.
Admin-scoped zones are divided for multicast groups. Zone border routers (ZBRs) form the boundary of an admin-scoped zone. Each admin-scoped zone maintains one BSR for multicast groups within a specific range. Multicast protocol packets, such as assert messages and BSMs, for a specific group range cannot cross the boundary of the admin-scoped zone for the group range. Multicast group ranges that are associated with different admin-scoped zones can have intersections. However, the multicast groups in an admin-scoped zone are valid only within the local zone, and these multicast groups are regarded as private group addresses.
The global-scoped zone maintains a BSR for the multicast groups that do not belong to any admin-scoped zones.
Relationship between admin-scoped zones and the global-scoped zone
The global-scoped zone and each admin-scoped zone have their own C-RPs and BSRs. These devices are effective only on their respective zones, and the BSR election and the RP election are implemented independently. Each admin-scoped zone has its own boundary. The multicast information within a zone cannot cross this boundary in either direction. You can have a better understanding of the global-scoped zone and admin-scoped zones based on geographical locations and multicast group address ranges.
In view of geographical locations:
An admin-scoped zone is a logical zone for particular multicast groups. The multicast packets for such multicast groups are confined within the local admin-scoped zone and cannot cross the boundary of the zone.
Figure 39 Relationship in view of geographical locations
As shown in Figure 39, for the multicast groups in a specific group address range, the admin-scoped zones must be geographically separated and isolated. A router cannot belong to multiple admin-scoped zones. An admin-scoped zone contains routers that are different from other admin-scoped zones. However, the global-scoped zone includes all routers in the PIM-SM domain or BIDIR-PIM domain. Multicast packets that do not belong to any admin-scoped zones are forwarded in the entire PIM-SM domain or BIDIR-PIM domain.
In view of multicast group address ranges:
Each admin-scoped zone is designated to specific multicast groups, of which the multicast group addresses are valid only within the local zone. The multicast groups of different admin-scoped zones might have intersections. All the multicast groups other than those of the admin-scoped zones use the global-scoped zone.
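The zone selection described above can be sketched as a group-range lookup. The zone names and address ranges below are example values, not addresses from this guide; overlapping ranges are allowed, which is why the lookup can return more than one zone.

```python
# Illustrative check of which zone(s) serve a multicast group address:
# a group inside an admin-scoped range belongs to that zone; any other
# group falls into the global-scoped zone.
import ipaddress

ADMIN_SCOPED_ZONES = {
    "zone1": ipaddress.ip_network("239.1.0.0/16"),  # example range
    "zone2": ipaddress.ip_network("239.2.0.0/16"),  # example range
}

def serving_zones(group):
    """Return the admin-scoped zones whose group range covers the group,
    or the global-scoped zone if none does."""
    g = ipaddress.ip_address(group)
    zones = [name for name, net in ADMIN_SCOPED_ZONES.items() if g in net]
    return zones or ["global"]
```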