Multicast across BGP domains

Hello

Sometimes we may have multicast sources in one BGP domain and receivers in another BGP domain. In this case we need to exchange information across domains about:

  1. routes to get to the source of the multicast traffic
  2. sources of multicast traffic

 

To do 1), we need BGP to carry multicast (RPF) routes, not just unicast routes.

To do 2), we need MSDP, a TCP-based protocol (port 639) that runs between the RPs and lets domain A tell domain B that it has an active multicast source.

 

  1. To activate sharing of RPF information, that is, how to reach a source of multicast traffic, we need to activate each neighbor under the ipv4 multicast address family, e.g.
    router bgp 100
    bgp log-neighbor-changes
    neighbor IBGP peer-group
    neighbor IBGP remote-as 100
    neighbor IBGP update-source Loopback0
    neighbor 150.1.3.3 peer-group IBGP
    neighbor 150.1.6.6 peer-group IBGP
    neighbor 150.1.9.9 peer-group IBGP
    !
    address-family ipv4
    neighbor IBGP route-reflector-client
    neighbor 150.1.3.3 activate
    neighbor 150.1.6.6 activate
    neighbor 150.1.9.9 activate
    exit-address-family
    !
    address-family ipv4 multicast
    neighbor IBGP route-reflector-client
    neighbor 150.1.3.3 activate
    neighbor 150.1.6.6 activate
    neighbor 150.1.9.9 activate
    exit-address-family
  2. We need to establish an MSDP peering between the RP in domain A and the RP in domain B, e.g.
    ip msdp peer 150.1.8.8 connect-source Loopback0 remote-as 200
    ip msdp peer 150.1.5.5 connect-source Loopback0 remote-as 200

Now we have everything we need to get multicast traffic from domain A to domain B. If a host in domain B joins a multicast stream, this info goes to its RP. When a source in domain A starts streaming, the traffic registers with the RP in domain A, and that RP informs the domain B RP, via an MSDP Source-Active (SA) message, that it has an active source.
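To verify both pieces (a sketch; adjust addresses to your lab), check that the multicast address family carries prefixes and that the MSDP session is up and caching sources:

show ip bgp ipv4 multicast summary

show ip bgp ipv4 multicast

show ip msdp summary

show ip msdp sa-cache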

Multicast helper

Hello

We can use a multicast helper to "transport" broadcast traffic onto a different segment.

(diagram: R1 > R2 > R3, broadcast on R1's segment, multicast between R2 and R3, broadcast again on R3's segment)

Everybody is familiar with the unicast helper (ip helper-address), which changes broadcast into directed unicast traffic, for example when a DHCP server sits outside the broadcast segment. Here the idea is similar: we would like to change broadcast into multicast.

  1. R1 sends broadcast traffic to 155.1.12.255 udp 3000
  2. R2 does 3 things:

a) adds udp 3000 to its list of protocols that it can forward,

ip forward-protocol udp 3000

b) enables the multicast helper on the link R1>R2, mapping the broadcast traffic to a multicast address; an ACL defines which traffic should be mapped (a consolidated interface-level sketch follows these steps).

ip multicast helper-map broadcast 226.0.0.1 MYTRAFFIC

ip access-list extended MYTRAFFIC

permit udp any any eq 3000

permit udp any any eq 53

c) enables ip pim dense-mode on all links

int eth0/0.12

ip pim dense-mode

int eth0/0.23

ip pim dense-mode

3. R3 also does the same three things (and one extra):

a) adds udp 3000 to its list of protocols that it can forward,

b) enables the helper on the interface towards R2 and maps the multicast traffic back onto its broadcast segment, again with an ACL defining which traffic should be mapped

ip multicast helper-map 226.0.0.1 155.1.33.255 MYTRAFFIC (but without the broadcast keyword this time!)

ip access-list extended MYTRAFFIC

permit udp any any eq 3000

permit udp any any eq 53

c) enables dense mode on both links

int eth0/0.23

ip pim dense-mode

int eth0/0.33

ip pim dense-mode

d) additionally, on its broadcast segment it needs two commands:

int eth0/0.33

ip directed-broadcast

ip broadcast-address 155.1.33.255
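For reference, here is a consolidated sketch of where the helper-maps live (my assumption, based on ip multicast helper-map being an interface command applied where the traffic to be converted arrives). On R2:

int eth0/0.12

ip multicast helper-map broadcast 226.0.0.1 MYTRAFFIC

And on R3:

int eth0/0.23

ip multicast helper-map 226.0.0.1 155.1.33.255 MYTRAFFIC

Once traffic flows, show ip mroute 226.0.0.1 on R2 should show an (S,G) entry for the converted stream.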

 

Test this by enabling ip domain-lookup on R1 without specifying a name server. The lookups will go out anyway, sending broadcast traffic to UDP port 53.

This is debug from R3.

access-list 100

permit udp any any eq 53

debug ip packet detail 100

 

Output:

IP: tableid=0, s=155.1.12.1 (Ethernet0/0.23), d=155.1.33.255 (Ethernet0/0.33), routed via RIB
IP: s=155.1.12.1 (Ethernet0/0.23), d=155.1.33.255 (Ethernet0/0.33), len 50, sending full packet
UDP src=53300, dst=53
MFIBv4(0x0): Pkt (155.1.12.1,236.0.0.1) from Ethernet0/0.23 (PS) accepted for forwarding
IP: s=155.1.12.1 (Ethernet0/0.23), d=236.0.0.1 (Ethernet0/0.33), len 50, output feature
R8#
UDP src=53300, dst=53, MFIB Adjacency(86), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
IP: s=155.1.12.1 (Ethernet0/0.23), d=236.0.0.1 (Ethernet0/0.33), len 50, sending full packet

 

Multicast stub router

If we have R5>R8>R10, where R5 is the HQ main router and R8 a remote office router, there is little point in asking R8 to spend resources on multicast state, since all it does is forward packets to and from the HQ.

  1. We should therefore configure R8 to ask R5 to hold its IGMP group states with the command:

ip igmp helper-address <address of the link of R5 towards R8>

(this goes on R8's host-facing interface, e.g. eth0/0.108, so that IGMP reports heard there are relayed to R5)

2. Then let's make R8 as "stupid" as possible by running PIM dense mode towards R5 and R10 (where the hosts are connected).

int eth0/0.58

ip pim dense-mode

int eth0/0.108

ip pim dense-mode

3. Next, let’s prevent R8 from establishing a neighbor relationship with R5 or R10 (it should only forward multicast packets!) with:

int eth0/0.58

ip pim neighbor-filter 58

int eth0/0.108

ip pim neighbor-filter 108

 

access-list 58 deny 155.1.58.5

access-list 58 permit any

access-list 108 deny 155.1.108.10

access-list 108 permit any

4. On R10, PIM DM must also be enabled on the link towards R8. Let’s also join a group.

int eth0/0.108

ip pim dense-mode

ip igmp join-group 225.0.0.10

 

Now R5 will see the hosts as if they were directly connected to it. R5 will use sparse mode to build the tree, while R8 and R10 use dense mode to forward and receive the multicast traffic.
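Two quick checks to confirm the setup (a sketch, using the group and interfaces from above): on R5, show ip igmp groups should list 225.0.0.10 on the interface towards R8, which is the IGMP helper doing its job; on R8, show ip pim neighbor should come back empty, thanks to the neighbor filters.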

IGMP snooping and AutoRP

Hello

Another (reasonably) tough tshooting ticket:

Logical topology

R5>R8>R10

I turned on AutoRP announce on R10 and AutoRP discovery on R8. Result? R8 was not getting announce packets from R10. I tried everything, then had an idea:

Physical topology is actually

R5>SW1>R8>SW1>R10

I turned off IGMP snooping on SW1 for the segments leading to R5, R8 and R10. Everything started working fine. In hindsight it makes sense: AutoRP announcements ride on 224.0.1.39 and discovery on 224.0.1.40, and with snooping on but no IGMP querier or mrouter port on those segments, SW1 never floods those groups towards R8.
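For the record, a sketch of what the fix looks like on SW1 (on most IOS switches the snooping knob is per VLAN; the VLAN numbers here are my assumption):

no ip igmp snooping vlan 58

no ip igmp snooping vlan 108

Disabling snooping globally with no ip igmp snooping also works, at the cost of flooding all multicast in every VLAN.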

 

 

Multicast tshooting fail

Hello

Just found something funny when fighting multicast labs. I joined an IGMP group on R6 and was trying to ping it from R10, but was getting no responses from R6.

I enabled debugging and saw these mysterious entries. Google is not (!!!) helpful at all with this.

R6#debug ip mfib pak
MFIB IPv4 pak debugging enabled for default IPv4 table
R6#
MFIBv4(0x0): Pkt (155.1.108.10,239.6.6.6) (FS) lookup miss, dropping
R6#
MFIBv4(0x0): Pkt (155.1.108.10,239.6.6.6) (FS) lookup miss, dropping
R6#
MFIBv4(0x0): Pkt (155.1.108.10,239.6.6.6) (FS) lookup miss, dropping
R6#
MFIBv4(0x0): Pkt (155.1.108.10,239.6.6.6) (FS) lookup miss, dropping
R6#
MFIBv4(0x0): Pkt (155.1.108.10,239.6.6.6) (FS) lookup miss, dropping
R6#
MFIBv4(0x0): Pkt (155.1.108.10,239.6.6.6) (FS) lookup miss, dropping

It was a multiaccess Ethernet interface, and when I joined the group on another router I was getting responses from that router, so there had to be something wrong with R6.

And of course, there was.

R6#show ip mroute
IP Multicast Forwarding is not enabled.
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group,
G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
Q - Received BGP S-A Route, q - Sent BGP S-A Route,
V - RD & Vector, v - Vector, p - PIM Joins on route
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
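The culprit is right there in the first line of the output: multicast routing was never enabled globally on R6. One command in global configuration fixes it:

ip multicast-routing

(Some platforms expect the distributed keyword on the end.)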

 

I think this is the real value of doing these labs: you learn stuff the hard way. But why oh why does IOS not shout that multicast routing is not enabled when you join an IGMP group on an interface?