Hej everyone,

I'm currently trying to set up FRR with multiple VRFs and ran into some issues that I can't fully understand. I'm hoping someone here with more experience can help clarify what's going wrong.
In my production environment, I have a Cumulus router running FRR and another router with very limited BGP capabilities (e.g., no MP-BGP support). I reproduced the setup using Containerlab, running FRR on one router and BIRD on the other.
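For reference, a minimal Containerlab topology matching this description could look roughly like the sketch below. The file name, image names, and node kinds are my assumptions, not taken from the actual lab:

```yaml
# vrf-leak.clab.yml -- hypothetical topology sketch
name: vrf-leak
topology:
  nodes:
    router1:
      kind: linux
      image: frrouting/frr:v7.5   # assumed image tag
    router2:
      kind: linux
      image: my-bird:latest       # hypothetical BIRD image
  links:
    # eth1: iBGP session in the default VRF
    - endpoints: ["router1:eth1", "router2:eth1"]
    # eth2: second link, placed in VRF "rust" on router1
    - endpoints: ["router1:eth2", "router2:eth2"]
```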
Setup
Let me explain the topology first, though I’ll also include configs below, since they’re probably more useful.
I have two routers: Router 1 (FRR) and Router 2 (BIRD).
They are directly connected over eth1 and exchange routes via iBGP in the default VRF.
I’ve added a second direct connection over eth2, which should be logically separated from the first. This interface is placed in a new VRF called rust on Router 1.
I’ve also created a new BGP instance for this VRF, since Router 1 will eventually advertise these routes to other routers.
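The `rust` VRF device itself is created outside of FRR with iproute2, along these lines (a sketch of a network-config fragment; the kernel routing table number 100 is an assumption):

```shell
# create the VRF device and bind it to a kernel routing table (table id assumed)
ip link add rust type vrf table 100
ip link set dev rust up
# enslave the second interface to the VRF
ip link set dev eth2 master rust
```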
Unfortunately, I can't establish a second BGP session over the new link, and I also cannot use MP-BGP because of Router 2's limitations.
So I was thinking: maybe Router 2 could continue to advertise routes over the BGP session in the default VRF, and Router 1 could leak some of those routes into the rust VRF.
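In FRR, leaking from the default VRF into a named VRF is configured with `import vrf` under the target VRF's address-family; to leak only some of the routes, an import route-map can be attached. A sketch of the filtered variant (the route-map and prefix-list names here are hypothetical):

```
! leak routes from the default VRF into vrf rust, filtered by a route-map
router bgp 4290000211 vrf rust
 address-family ipv4 unicast
  import vrf route-map RM-LEAK
  import vrf default
 exit-address-family
!
ip prefix-list PL-LEAK seq 10 permit 10.10.10.0/24
!
route-map RM-LEAK permit 10
 match ip address prefix-list PL-LEAK
```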
Here’s the relevant configuration:
Router 1
bash-5.1# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if164: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether e6:bb:ad:c0:e0:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.20.2/24 brd 172.20.20.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 3fff:172:20:20::2/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::e4bb:adff:fec0:e0d8/64 scope link
valid_lft forever preferred_lft forever
3: rust: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP group default qlen 1000
link/ether 8a:6e:b6:b6:a5:1e brd ff:ff:ff:ff:ff:ff
167: eth1@if166: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default
link/ether aa:c1:ab:82:ab:ec brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 10.14.33.2/24 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:fe82:abec/64 scope link
valid_lft forever preferred_lft forever
169: eth2@if168: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue master rust state UP group default
link/ether aa:c1:ab:4c:80:dd brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.29.0.2/24 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:fe4c:80dd/64 scope link
valid_lft forever preferred_lft forever
bash-5.1# cat /etc/frr/frr.conf
frr version 7.5.1_git
frr defaults datacenter
hostname router1
no ipv6 forwarding
!
router bgp 65152
 bgp router-id 10.14.32.5
 no bgp default ipv4-unicast
 bgp cluster-id 10.14.20.0
 bgp bestpath as-path multipath-relax
 neighbor pg-test peer-group
 neighbor pg-test remote-as internal
 bgp listen range 10.14.33.0/24 peer-group pg-test
 !
 address-family ipv4 unicast
  neighbor pg-test activate
  neighbor pg-test next-hop-self force
  neighbor pg-test soft-reconfiguration inbound
 exit-address-family
!
router bgp 4290000211 vrf rust
 !
 address-family ipv4 unicast
  import vrf default
 exit-address-family
!
line vty
!
Router 2
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if165: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 32:df:89:29:36:46 brd ff:ff:ff:ff:ff:ff
inet 172.20.20.3/24 brd 172.20.20.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 3fff:172:20:20::3/64 scope global flags 02
valid_lft forever preferred_lft forever
inet6 fe80::30df:89ff:fe29:3646/64 scope link
valid_lft forever preferred_lft forever
166: eth1@if167: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 9500 qdisc noqueue state UP
link/ether aa:c1:ab:f0:81:e5 brd ff:ff:ff:ff:ff:ff
inet 10.14.33.3/24 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:fef0:81e5/64 scope link
valid_lft forever preferred_lft forever
168: eth2@if169: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 9500 qdisc noqueue state UP
link/ether aa:c1:ab:ba:76:7e brd ff:ff:ff:ff:ff:ff
inet 172.29.0.3/24 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:feba:767e/64 scope link
valid_lft forever preferred_lft forever
/ # cat /etc/bird.conf
log stderr all;
router id 10.14.33.3;
protocol device {
  scan time 10;
}
protocol static {
  ipv4 {
    export all;
  };
  route 10.10.10.0/24 via 172.29.0.1;
}
protocol bgp {
  local as 65152;
  neighbor 10.14.33.2 as 65152;
  ipv4 {
    import none;
    export filter {
      print "Exporting ", net, " with next hop: ", bgp_next_hop;
      accept;
    };
  };
}
Problems/Questions
Unfortunately, the setup doesn’t behave as expected:
The route appears as inaccessible in the default VRF
The BGP route is marked as invalid, probably because the next-hop (172.29.0.1) is not reachable from the default VRF:
router1# show bgp ipv4 10.10.10.0/24
BGP routing table entry for 10.10.10.0/24
Paths: (1 available, no best path)
Not advertised to any peer
Local
172.29.0.1 (inaccessible) from 10.14.33.3 (10.14.33.3)
Origin IGP, localpref 100, invalid, internal
Last update: Wed Apr 23 14:09:59 2025
This is expected, in a way, because of the VRF separation. I can work around this by adding a blackhole route in the default VRF:
ip route add blackhole 172.29.0.0/24
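The same workaround can also live in FRR's own configuration instead of the kernel, so it survives restarts. A sketch of the equivalent FRR static route, applied via vtysh:

```shell
# persistent equivalent of the kernel blackhole route, via FRR (sketch)
vtysh -c 'configure terminal' -c 'ip route 172.29.0.0/24 blackhole'
```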
Even with that, FRR prefers the default VRF
Although the route is successfully leaked into the rust VRF and the next-hop is reachable within rust, FRR still prefers the path via the default VRF, and the route remains inactive in the kernel:
router1# show bgp vrf rust ipv4
BGP table version is 1, local router ID is 172.29.0.2, vrf id 3
Default local pref 100, local AS 4290000211
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 10.10.10.0/24 172.29.0.1(router1)@0<
100 0 i
Displayed 1 routes and 1 total paths
router1# show ip route vrf rust
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued, r - rejected, b - backup
VRF rust:
B 10.10.10.0/24 [200/0] via 172.29.0.1 (vrf default) inactive, weight 1, 00:00:06
C>* 172.29.0.0/24 is directly connected, eth2, 00:02:10
I also experimented with setting Route Distinguisher and Route Target values manually, and switching to eBGP between the routers, but nothing really changed.
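For completeness, FRR's VPN-style leaking with manual RD/RT values is configured with knobs along these lines; the RD/RT values shown are hypothetical placeholders, and a matching export on the default-VRF side would also be needed:

```
router bgp 4290000211 vrf rust
 address-family ipv4 unicast
  rd vpn export 65152:100
  rt vpn both 65152:100
  label vpn export auto
  import vpn
  export vpn
 exit-address-family
```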
At this point, I feel like I may be misunderstanding how route leaking between VRFs is supposed to work in this situation. Any insights or advice would be greatly appreciated!
Thanks in advance!