The key is the 'ether' specifier. The -e option shows the mac address in the output:
$ sudo tcpdump -n -i vlan50 ether host e4:54:e8:29:44:2d or ether host ff:ff:ff:ff:ff:ff -vv -e
The country code is not read from ACPI but from the EEPROM of the WiFi-card (0x0 "World Regulatory Domain" by default).
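As a quick check, the currently active regulatory domain can be inspected with iw, and set where the EEPROM and regulatory database allow it (the country code below is only an example):
$ iw reg get
$ sudo iw reg set CA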
How to Build your Own Wireless Router (Part 3) - the kernel requires a code patch to enable 5 GHz operation.
From Phoronix - Linux 5.6 To Bring FQ-PIE Packet Scheduler To Help Fight Bufferbloat, there are some example configurations for using different buffering mechanisms (Implementing the Flow Queue PIE AQM in the Linux Kernel).
For wireless clients:
net.core.default_qdisc=fq and net.ipv4.tcp_congestion_control=bbr, while fq_codel (CoDel) is better suited for a router.
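A minimal sketch of making those client-side settings persistent via sysctl (the file name is arbitrary, and bbr requires the tcp_bbr module to be available); apply with 'sudo sysctl --system' or a reboot:
# /etc/sysctl.d/90-bufferbloat.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr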
Another possibility:
I had pretty amazing results with cake. I prefer it over fq_codel, and it has been part of the mainline kernel since 4.19.
Cake - Common Applications Kept Enhanced
In the past the internet would just grind to a halt when the Apple devices started cloud syncing. They would totally swamp the outgoing bandwidth. Now nobody notices when an Apple device is syncing.
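For reference, a hedged sketch of enabling cake on an egress interface with tc (interface name and bandwidth are placeholders; shape a bit below the real uplink rate, and sch_cake must be available, i.e. kernel 4.19 or newer as noted above):
$ sudo tc qdisc replace dev eth0 root cake bandwidth 18mbit
$ tc -s qdisc show dev eth0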
Open firmware for small routers for congestion management:
The OpenWrt issue with 4 MB flash devices has to do with using newer kernels. For those devices, DD-WRT is a better bet (still on kernel 3.10).
Courtesy of 3 quick ways to reduce your attack surface on Linux, a command to identify open ports and the associated applications:
$ ss -tulnp --no-header | awk '{print($1, $5, $7)}'
udp 0.0.0.0:32770 users:(("vlc",pid=2786557,fd=39))
udp 0.0.0.0:32771 users:(("vlc",pid=2786557,fd=40))
udp 127.0.0.1:123
udp 0.0.0.0:123
udp 0.0.0.0:631
udp 0.0.0.0:37019
udp 0.0.0.0:5353
udp 0.0.0.0:39276 users:(("vlc",pid=2786557,fd=50))
udp 0.0.0.0:39277 users:(("vlc",pid=2786557,fd=51))
tcp 0.0.0.0:61209
tcp 127.0.0.1:25
tcp 127.0.0.1:5433
tcp 0.0.0.0:8794
tcp 127.0.0.1:3493
tcp 127.0.0.1:5101 users:(("ssh",pid=899751,fd=4))
tcp 127.0.0.1:5102 users:(("ssh",pid=2187130,fd=4))
tcp 127.0.0.1:5201 users:(("ssh",pid=2186389,fd=5))
tcp 127.0.0.1:5202 users:(("ssh",pid=2186389,fd=7))
tcp 0.0.0.0:22
tcp 127.0.0.1:631
tcp 127.0.0.1:5432
...
In building an OpenFlow controller, the controller needs to perform some packet interception, inspection, and modification. This is easy from a UDP perspective. But when the OpenFlow engine forwards the complete packet, MAC addresses and all, the packets need to be decoded. And for TCP, there are various connection-oriented states which need to be taken into account when trying to perform deep packet inspection.
This means either creating a purpose-built TCP stack to perform the state management, or making use of existing tools that do it. In my searches, I came across:
The book "TCP/IP Illustrated, Volume 1, 2e" discusses timers extensively, something useful, as I am considering putting in the basic state handling code manually, rather than using one of the above libraries.
I saw a Debian bug report, "Installation was successfully at BananaPi". When I initially looked at the Banana Pi, which has a variant with a number of network interfaces, I didn't readily see the tooling necessary for booting images.
The bug report shows that Debian now has native images available, and similar images can be found in the Buster and Sid distributions. There are also Beagle Bone Black images in there.
Random links I picked up for building custom installs:
Kind of related, Redhat has a summary article of Introduction to Linux interfaces for virtual networking. Excellent summary of many interface types in Linux.
2019/02/18 From a different direction, via the Debian Bugs list, I've come across Lamobo R1 which is another name for the Banana Pi R1. The page has some additional information, and by cruising to the main page, additional AllWinner SoC info can be obtained.
In addition, Armbian Build on GitHub offers the code for building Arm kernels on these types of boards.
Kernel CI shows results of Kernel Builds on various devices, with an example entry of Kernel 5.0 booting on Banana Pi R2
Sometimes, you just don't know what is on your network. A couple ways of finding out include using nmap or arp-scan.
An easy install:
sudo apt install arp-scan nmap
And easy use with some basic defaults (the Unknown entries are LXC containers with manually assigned MAC addresses, and 'Cadmus Computer Systems' is a VirtualBox device):
$ sudo arp-scan --interface=vlan90 --localnet
Interface: vlan90, datalink type: EN10MB (Ethernet)
Starting arp-scan 1.9.5 with 256 hosts (https://github.com/royhills/arp-scan)
10.55.90.1   00:13:3b:0f:59:24  Speed Dragon Multimedia Limited
10.55.90.14  08:00:27:de:fc:9d  Cadmus Computer Systems
10.55.90.21  0a:00:55:90:00:ab  (Unknown)
10.55.90.22  0a:00:55:90:00:0c  (Unknown)
10.55.90.23  0a:00:55:90:00:23  (Unknown)
10.55.90.31  f0:ad:4e:03:64:7f  Globalscale Technologies, Inc.

8 packets received by filter, 0 packets dropped by kernel
Ending arp-scan 1.9.5: 256 hosts scanned in 3.155 seconds (81.14 hosts/sec). 6 responded
nmap, on the other hand, knows the real vendor behind the MAC that arp-scan mis-names:
$ sudo nmap -sn 10.55.90.0/24
Starting Nmap 7.70 ( https://nmap.org ) at 2018-09-29 20:25 MDT
Nmap scan report for 10.55.90.1
Host is up (0.00032s latency).
MAC Address: 00:13:3B:0F:59:24 (Speed Dragon Multimedia Limited)
Nmap scan report for 10.55.90.14
Host is up (-0.10s latency).
MAC Address: 08:00:27:DE:FC:9D (Oracle VirtualBox virtual NIC)
Nmap scan report for 10.55.90.21
Host is up (-0.100s latency).
MAC Address: 0A:00:55:90:00:AB (Unknown)
Nmap scan report for 10.55.90.22
Host is up (-0.100s latency).
MAC Address: 0A:00:55:90:00:0C (Unknown)
Nmap scan report for 10.55.90.23
Host is up (-0.087s latency).
MAC Address: 0A:00:55:90:00:23 (Unknown)
Nmap scan report for 10.55.90.31
Host is up (0.00074s latency).
MAC Address: F0:AD:4E:03:64:7F (Globalscale Technologies)
Nmap scan report for 10.55.90.10
Host is up.
Nmap done: 256 IP addresses (7 hosts up) scanned in 3.31 seconds
With these specific commands, arp-scan may see things that nmap does not, i.e., hosts whose firewalls block pings. nmap does have a mode for scanning similarly to arp-scan, shown below.
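For example, nmap's ARP ping option (-PR) combined with -sn does an arp-scan-like sweep of a directly attached subnet:
$ sudo nmap -sn -PR 10.55.90.0/24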
From the iovisor-dev mailing list:
>I am trying to collect IPFIX flow data from the linux host interface.
Why IPFIX and not sFlow or netflow ?
>Can someone guide me the best way to collect the data using XDP.
It depends a bit on your setup. Assuming you want to do this "inline" on the box receiving the traffic, then you should know that XDP cannot allocate a new packet (that could, e.g., be used to send IPFIX/sFlow info directly). Instead, I would use the perf ring buffer to store sampled packets (via copy), and then code a userspace program that reads from this perf ring buffer and communicates with the central IPFIX/sFlow server.
>Any samples for reference will be a great help.
For how XDP uses the perf ring buffer via bpf_perf_event_output, samples are available here:
- https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_sample_pkts_kern.c
- https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_sample_pkts_user.c
Notice there are also plenty of BCC examples using the perf ring buffer; look for BCC code containing BPF_PERF_OUTPUT(events); and events.perf_submit(ctx, data, sizeof(struct data_t));
On 06/14/2018 09:22 PM, someone wrote:
> So I have to ask, why is it advantageous to put this in a container
> rather than just run it directly on the container's host?
Almost any host nowadays has quite a bit of horsepower to run services. All those services could be run natively in one namespace on the same host, or ...
I tend to gravitate towards running services individually in LXC containers. This creates a bit more overhead than running chroot-style environments, but less than running full-fledged KVM-style virtualization for each service.
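As a rough illustration (the container name, distribution, and release here are arbitrary), creating and starting such a container with the stock LXC tooling looks something like:
$ sudo lxc-create -n smtp1 -t download -- --dist debian --release bookworm --arch amd64
$ sudo lxc-start -n smtp1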
I typically automate the provisioning and spin-up of the container and its service. This makes it easy to rebuild/update/upgrade/load-balance services individually and en masse across hosts.
By running BGP within each container, BGP can be used to advertise the loopback address of the service. I go one step further: for certain services I will anycast some addresses into BGP. This provides an easy way to load balance and provide resiliency across like service instances on multiple hosts.
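A minimal sketch of the anycast idea, with a hypothetical service address added inside each container offering the same service; BGP then advertises the same /32 from every instance and ECMP spreads the load:
$ sudo ip addr add 10.100.0.53/32 dev lo    # address is hypothetical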
Therefore, by running BGP within the container and on the host, routes can be distributed across a network with all the policies available within the BGP protocol. I use Free Range Routing, which is a fork of Quagga, to do this. I use the eBGP variant (RFC 7938) for the hosts and containers, which allows for the elimination of the extra overhead of OSPF or a similar interior gateway protocol.
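A rough frr.conf sketch of what a container's side of this might look like, using BGP unnumbered and a private 32-bit ASN (the ASN, interface, and prefix range are all placeholders, not my actual configuration):
router bgp 4200000101
 neighbor eth0 interface remote-as external
 address-family ipv4 unicast
  redistribute connected route-map LOOPBACKS
 exit-address-family
!
ip prefix-list LOOPBACKS seq 5 permit 10.100.0.0/24 ge 32
!
route-map LOOPBACKS permit 10
 match ip address prefix-list LOOPBACKS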
Stepping away a bit, this means that BGP is used in a tiered scenario. There is the regular eBGP with the public ASN for handling DFZ-style public traffic. For internal traffic, private eBGP ASNs are used for routing traffic between and within hosts and containers.
With recent improvements to Free Range Routing and the Linux kernel, various combinations of MPLS, VxLAN, EVPN, and VRF configurations can be used to further segment and compartmentalize traffic within a host and between containers. It is now very easy to run vlan-less between hosts through various easy-to-configure encapsulation mechanisms. To be explicit, this relies on and contributes to a resilient layer 3 network between hosts, and eliminates the bothersome layer 2 redundancy headaches with spanning tree and the like.
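As one small illustration of running vlan-less, a VXLAN interface attached to a bridge can be created with iproute2 (the names, VNI, and local VTEP address are placeholders; with 'nolearning', remote VTEP/MAC information is expected to come from BGP EVPN rather than flooding):
$ sudo ip link add vxlan100 type vxlan id 100 dstport 4789 local 192.0.2.1 nolearning
$ sudo ip link add br100 type bridge
$ sudo ip link set vxlan100 master br100
$ sudo ip link set vxlan100 up
$ sudo ip link set br100 up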
That was a very long winded way to say: keep a very basic host configuration running a minimal set of functional services, and re-factor the functionality and split it across multiple containers to provide easy access to and maintenance of individual services like dns, smtp, database, dashboards, public routing, private routing, firewalling, monitoring, management, ...
There is a higher up-front configuration cost, but over the longer term, if configured via automation tools like Salt or similar, maintenance and security are improved.
It does require a different level of sophistication from operational staff.
Other BGP routing Daemons:
Other mailing list comments:
Our use case was both exporting service IPs and receiving routes from ToRs. Exa is more geared towards the former than the latter. Rather than working on getting imports and route installation through Exa, we found it simpler to have BIRD export the service IP bound to a loopback, run local health checks on the nodes, and have them yank the service IP from the loopback on failing health checks in order to stop exporting.
The intent of the original post was vague. Like a lot of people, I would not run a full BGP router in a container. Now, if the purpose is to inject or learn a handful of routes in order to do limited host routing, I can see the need. A route-server or a looking glass in a container would be fine, or something to perform analysis on the routing table, but not anything that has to route actual traffic.
I use ExaBGP to inject routes, the perfect tool for that. If routes have to be received (not my use case) it makes more sense, as stated in previous posts, to use Quagga or BIRD. Which one is better? Easy: if you like Cisco better, use Quagga; if you like Juniper better, use BIRD.
The BIRD looking glass looks very good.
When using the BGP module in Free Range Routing, the 'network' statement can be used to advertise connected prefixes. The drawback to advertising connected prefixes this way is that the prefix is advertised even when the related interface is not 'up'. This could lead to a blackhole scenario.
A better way to handle the advertisements of connected prefixes is to use the 'redistribute connected' command.
Even with the use of this command, there may be scenarios (which I need to test at some point) where the prefix is advertised or withdrawn depending upon the link state. Free Range Routing has an additional command which could be used to ensure link state checking: 'bgp network import-check'.
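A hedged frr.conf sketch showing both approaches together (the ASN and prefix are placeholders); with import-check enabled, a 'network' statement is only advertised while a matching route is present in the routing table:
router bgp 65001
 bgp network import-check
 address-family ipv4 unicast
  network 10.55.90.0/24
  redistribute connected
 exit-address-family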
There is more about the Linux state checking flags in the Why Link-State Matters presentation from LinuxCon 2015. In addition, the Free Range Routing developers have brought together some relevant sysctl settings.
This is another collection of random notes, this time, on how to build something on Linux somewhat resembling Cisco's Global Load Balancing capability, basically a continuation of my entry at Linux ifupdown2 VRRP.
Traditionally, one sets up VRRP using keepalived or the simpler vrrpd. This configuration is typically used when setting up two routers in an active/passive arrangement to act as a gateway for a network subnet. In essence, the two (or more) routers negotiate who will hold the gateway MAC and IP address.
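For reference, a minimal keepalived sketch of that traditional setup (the interface, VRID, priority, and VIP are placeholders; the passive router would use state BACKUP and a lower priority):
vrrp_instance GW_V4 {
    state MASTER
    interface eth0
    virtual_router_id 2
    priority 150
    advert_int 1
    virtual_ipaddress {
        10.11.2.254/24
    }
}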
In other circumstances, it might be desired (and possible) to run active/active. This is a possibility when running containers on a host, and there are similar services running across the hosts. In this instance the same address can be assigned as a secondary address across multiple containers to load balance traffic.
And in yet other cases, subnets may be stretched in a layer 2 over layer 3 encapsulated network across multiple hosts. In this case, each host should be able to act as a gateway for the traffic local to it. It is this last example I am currently investigating.
Reynold's Blog has an entry called Configuring Cumulus Linux High Availability Layer 2 Network. The most interesting aspect of this post is reference to using 'address-virtual' commands when using ifupdown2 style /etc/network/interface structures:
address-virtual 00:00:5e:00:01:02 10.11.2.254/24
The IP and MAC addresses are identical across interfaces sharing the gateway role. The MAC address comes from the range 00:00:5e:00:01:00 – 00:00:5e:00:01:ff reserved for VRRP-style operations. The IP address is the virtual IP address (VIP). This style of usage is explained more in Virtual Router Redundancy - VRR.
Or maybe I don't need to worry about this, as Ethernet Virtual Private Network - EVPN has sections on asymmetric routing and symmetric routing which do not need VRRP-style constructs.
Layer 3 routing on Cumulus Linux MLAG talks about VRR, address-virtual, and FRR/route-maps to obtain ECMP-based load balancing. Now the question: how to get things to not need MLAG.
# netstat -rn -f inet
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.7.28.23      0.0.0.0         UG        0 0          0 vlan28
10.7.1.1        10.7.3.7        255.255.255.255 UGH       0 0          0 enp14s0f2
10.7.1.2        10.7.3.4        255.255.255.255 UGH       0 0          0 enp14s0f3
10.7.1.33       10.7.28.23      255.255.255.255 UGH       0 0          0 vlan28
10.7.1.34       10.7.28.23      255.255.255.255 UGH       0 0          0 vlan28
10.7.1.41       10.7.3.7        255.255.255.255 UGH       0 0          0 enp14s0f2
.....
# ls -l /sys/class/net
total 0
lrwxrwxrwx 1 root root    0 Mar  8 04:06 bond0 -> ../../devices/virtual/net/bond0
-rw-r--r-- 1 root root 4096 Mar  8 04:05 bonding_masters
lrwxrwxrwx 1 root root    0 Mar  8 04:06 enp12s0f0 -> ../../devices/pci0000:00/0000:00:03.1/0000:0c:00.0/net/enp12s0f0
lrwxrwxrwx 1 root root    0 Mar  8 04:06 enp12s0f1 -> ../../devices/pci0000:00/0000:00:03.1/0000:0c:00.1/net/enp12s0f1
lrwxrwxrwx 1 root root    0 Mar  8 04:06 enp12s0f2 -> ../../devices/pci0000:00/0000:00:03.1/0000:0c:00.2/net/enp12s0f2
lrwxrwxrwx 1 root root    0 Mar  8 04:06 enp12s0f3 -> ../../devices/pci0000:00/0000:00:03.1/0000:0c:00.3/net/enp12s0f3
lrwxrwxrwx 1 root root    0 Mar  8 04:05 enp14s0f0 -> ../../devices/pci0000:00/0000:00:03.2/0000:0e:00.0/net/enp14s0f0
lrwxrwxrwx 1 root root    0 Mar  8 04:05 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx 1 root root    0 Mar  8 04:05 ovsbr0 -> ../../devices/virtual/net/ovsbr0
lrwxrwxrwx 1 root root    0 Mar  8 04:06 ovs-system -> ../../devices/virtual/net/ovs-system
lrwxrwxrwx 1 root root    0 Mar  8 04:05 vlan19 -> ../../devices/virtual/net/vlan19
# cat /sys/class/net/enp8s0f0/statistics/tx_bytes
308623980
# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.9.0.0        10.9.2.32       255.255.255.0   UG        0 0          0 enp1s0
10.9.1.1        10.9.2.32       255.255.255.255 UGH       0 0          0 enp1s0
ss -ntl
# nstat -a
#kernel
IpInReceives                    156764             0.0
IpInDelivers                    156764             0.0
IpOutRequests                   151083             0.0
IpOutNoRoutes                   40                 0.0
.....
Useful combinations:
I was looking through Cumulus ifupdown2 code and came across references to vrrpd and ifplugd. I have been using keepalived before becoming aware of what ifupdown2 can do. I am starting to fathom how things relate 'under the hood'.
Configuring Cumulus Linux High Availability Layer 2 Network – Part 2 introduces vrrp along with 'address-virtual' in the /etc/network/interfaces file:
auto brvlan.10
iface brvlan.10
    address 10.0.10.1/24
    address-virtual 00:00:5e:00:01:01 10.0.10.254/24
The MAC address range is reserved in an RFC, with the last byte adjustable, which explains why there can only be 255 VRRP instances. My current thinking, something I need to test, is that ifupdown2 sets up ifplugd and vrrpd under the hood to handle migrating the virtual address from interface to interface.
I need to figure out how the configuration above correlates with vrrp specific settings as seen in ifupdown2 vrrpd.py source code.
When evaluating VRRP settings, referring to Configuring, Attacking and Securing VRRP on Linux may be helpful. One useful suggestion for some implementations is to put the virtual address on an untagged interface away from the 'protected' vlans in order to prevent spoofing of the VRRP packets.
Debian packages a very old version of vrrpd. It is best to use the more up-to-date version at fredbcode/Vrrpd.
Right up my alley -- someone who is using SaltStack to provision networks. There are some ideas in it that I might lift for my own work: Building your own sdn with debian linux salt stack and python.