On 06/14/2018 09:22 PM, someone wrote:
> So I have to ask, why is it advantageous to put this in a container
> rather than just run it directly on the container's host?
Almost any host nowadays has plenty of horsepower to run services. All of those services could be run natively in one namespace on the same host, or ...
I tend to gravitate towards running services individually in LXC containers. This creates a bit more overhead than running chroot-style environments, but less than running full-fledged KVM-style virtualization for each service.
I typically automate the provisioning and spin-up of the container and its service. This makes it easy to rebuild/update/upgrade/load-balance services individually and en masse across hosts.
By running BGP within each container, BGP can be used to advertise the loopback address of the service. I go one step further: for certain services I will anycast some addresses into BGP. This provides an easy way to load balance and provide resiliency across like service instances on different hosts.
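For illustration, a minimal sketch of what that looks like with FRR inside a container (the address, ASN, and interface name are made-up documentation values, not anything from a real deployment):

  # Inside the container: bind the service/anycast address to the loopback
  ip addr add 198.51.100.10/32 dev lo

  # frr.conf: originate that /32 into BGP over an unnumbered eBGP session
  router bgp 65010
   neighbor eth0 interface remote-as external
   address-family ipv4 unicast
    network 198.51.100.10/32

For anycast, the same /32 simply goes on the loopback of every container running that service; BGP multipath on the hosts then spreads the load across them.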
Therefore, by running BGP within the container and on the host, routes can be distributed across a network with all the policies available within the BGP protocol. I use Free Range Routing (FRR), a fork of Quagga, to do this. I use the eBGP variant (RFC 7938) for the hosts and containers, which eliminates the extra overhead of OSPF or a similar interior gateway protocol.
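On the host side, the corresponding FRR configuration might look something like this (the interface names and private ASN are invented for the example); BGP unnumbered peering over each container's veth interface avoids having to number the point-to-point links at all:

  router bgp 65001
   ! eBGP down to each container over its veth interface
   neighbor veth-dns interface remote-as external
   neighbor veth-smtp interface remote-as external
   ! eBGP up to the top-of-rack switch
   neighbor eth0 interface remote-as external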
Stepping back a bit, this means that BGP is used in a tiered scenario. There is the regular eBGP with the public ASN for handling DFZ-style public traffic. For internal traffic, private eBGP ASNs are used for routing between and within hosts and containers.
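At the seam between the two tiers, the private ASNs have to be kept out of the public table. Assuming FRR on the DFZ-facing box, that is essentially a one-liner on the public session (peer address and ASNs are documentation values):

  router bgp 64496
   neighbor 203.0.113.1 remote-as 64511
   address-family ipv4 unicast
    neighbor 203.0.113.1 remove-private-AS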
With recent improvements to Free Range Routing and the Linux kernel, various combinations of MPLS, VXLAN, EVPN, and VRF configurations can be used to further segment and compartmentalize traffic within a host and between containers. It is now very easy to run VLAN-less between hosts through various easy-to-configure encapsulation mechanisms. To be explicit, this relies on and contributes to a resilient layer 3 network between hosts, and eliminates the bothersome layer 2 redundancy headaches of spanning tree and the like.
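As a rough sketch of the VLAN-less setup, assuming FRR built with EVPN support and a reasonably recent kernel (the VNI, table number, and addresses are arbitrary examples, and enslaving the VXLAN device to a bridge is omitted for brevity):

  # Kernel side: a VRF for the tenant and a VXLAN device sourced
  # from the host loopback
  ip link add vrf-tenant1 type vrf table 100
  ip link set vrf-tenant1 up
  ip link add vxlan100 type vxlan id 100 local 192.0.2.1 dstport 4789 nolearning
  ip link set vxlan100 up

  # frr.conf: carry the VNIs over the existing eBGP sessions via EVPN
  router bgp 65001
   address-family l2vpn evpn
    neighbor eth0 activate
    advertise-all-vni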
That was a very long-winded way to say: keep a very basic host configuration running a minimal set of functional services, and refactor the remaining functionality across multiple containers, providing easy access to and maintenance of individual services like DNS, SMTP, database, dashboards, public routing, private routing, firewalling, monitoring, management, ...
There is a higher up-front configuration cost, but over the longer term, if configured via automation tools like Salt or similar, maintenance and security are improved.
It does require a different level of sophistication from operational staff.
Other BGP routing daemons:
Other mailing list comments:
Our use case involved both exporting service IPs and receiving routes from ToRs. ExaBGP is more geared towards the former than the latter. Rather than working on getting imports and route installation through ExaBGP, we found it simpler with BIRD: export the service IP bound to a loopback, run local healthchecks on the nodes, and have them yank the service IP from the loopback on failing healthchecks in order to stop exporting it.
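A sketch of that pattern, with made-up addresses and a hypothetical /health endpoint (neither comes from the original post): BIRD exports the loopback-bound /32 for as long as it exists, and a cron-driven healthcheck deletes it on failure:

  # bird.conf: export only the service /32 that lives on the loopback
  protocol direct {
    interface "lo";
  }
  protocol bgp tor {
    local as 65010;
    neighbor 192.0.2.1 as 65000;
    export where net = 198.51.100.10/32;
  }

  # healthcheck, run from cron: yank the IP when the service is unhealthy
  curl -fsS http://127.0.0.1:8080/health >/dev/null \
    || ip addr del 198.51.100.10/32 dev lo

Once the /32 is gone from the loopback, the direct protocol withdraws it and BIRD stops exporting it to the ToR.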
The intent of the original post was vague. Like a lot of people, I would not run a full BGP router in a container. Now, if the purpose is to inject or learn a handful of routes in order to do limited host routing, I can see the need. A route-server or a looking glass in a container would be fine, or something to perform analysis on the routing table, but not anything that has to route actual traffic.
I use ExaBGP to inject routes; it is the perfect tool for that. If routes have to be received (not my use case) it makes more sense, as stated in previous posts, to use Quagga or BIRD. Which one is better? Easy: if you like Cisco better, use Quagga; if you like Juniper better, use BIRD.
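For reference, route injection with ExaBGP really is only a few lines of configuration (the peer, ASNs, and prefix are invented for the example):

  neighbor 192.0.2.1 {
      router-id 192.0.2.10;
      local-address 192.0.2.10;
      local-as 65010;
      peer-as 65000;

      static {
          route 198.51.100.10/32 next-hop self;
      }
  }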
The BIRD looking glass looks very good.