EBPF and Linux Networking

In this session, we'll review how previous efforts, including Netfilter, Berkeley Packet Filter (BPF), Open vSwitch (OVS), and TC, approached the problem of extensibility. We'll show you an open source solution available within the Red Hat Enterprise Linux kernel, where extending and merging some of the existing concepts leads to an extensible framework that satisfies the networking needs of datacenter and cloud virtualization.



  1. Extended BPF and Data Plane Extensibility: An Overview of Networking and Linux
     Fernando Sanchez, Principal SE, PLUMgrid Inc., @fernandosanchez
     Brenden Blanco, Architect, Office of the CTO, PLUMgrid, @brendenblanco
  2. Agenda
     • Lessons from Physical Networks: traditional data center design and the effects of virtualization
     • Hypervisor Networking Layer: virtual switches, distributed virtual switches, and network overlays
     • (E)BPF and its applicability to an extensible networking data plane: from virtual switches to virtual networks
     • Demos, examples, and usage of BPF
  3. Lessons from Physical Networks: Traditional Data Center Design and the Effects of Virtualization
  4. Server Virtualization: How does this affect the network?
  5. Traditional Data Center: Characteristics
     • One host OS per server
     • Three-tier (access, distribution, core) networking design
     • Traditional L2 and L3 protocols (spanning-tree issues, anyone?)
     • HA based on physical server/link deployments
  6. Traditional Data Center: General Issues
     • Costly, complex, and constrained
     • Switch cross-connects waste revenue-generating ports
     • Scalability bound by hardware and rack space
     • Network underutilization
     • Slow L2/L3 failure recovery
     • L4-L7 services centralized at the core layer
     • Quickly reaching hardware limits (number of MACs, VLANs, etc.)
  7. A Modern Data Center: Characteristics
     • Server virtualization: multiple OSes and VMs
     • Efficient network virtualization: multiple-link utilization, fast convergence, increased uptime
     • Storage virtualization: fast and efficient
     • New design requirement: a fully distributed network layer
  8. Effects of Server Virtualization
     Virtualization helped optimize compute but added to the network issues:
     • Traffic flows: east-west and VM-to-VM flows can cause hair-pinning of traffic
     • VM segmentation: more VLAN and MAC address pressure
     • VM management: traditional systems cannot see past the hypervisor
     • Intra-server security: how to secure traffic within a server?
     The answer, again, is a fully distributed network layer.
  9. Hypervisor Networking Layer: Virtual Switches, Distributed Virtual Switches, and Network Overlays
  10. A New Networking Layer
     Your data plane matters... A LOT:
     • vSwitches
     • Distributed vSwitches
     • vRouters
     • Distributed topologies
     • An extensible data plane
  11. Virtual Switches
     • A virtual switch (vSwitch) is a software component within a server that allows inter-virtual machine (VM) communication as well as communication with the external world
     • A vSwitch has a few key advantages:
       • Provides network functionality right inside the hypervisor layer
       • Operations are similar to those of the hypervisor, yet with control over network functionality
       • Compared to a physical switch, it is easy to roll out new functionality that would otherwise be hardware- or firmware-bound
  12. Open vSwitch
     • Open vSwitch is a production-quality, multilayer virtual switch licensed under the Apache 2.0 license
     • Enables massive network automation
     • Supports distribution across multiple physical servers
  13. Inside a Compute Node
     (Diagram: tenant VMs attach via vifs to a vSwitch kernel module, managed by a vSwitch user-space component, with a management Ethernet interface on the compute node.)
  14. From vSwitch to Distributed vSwitch
     • Logically stretches across multiple physical servers
     • Provides L2 connectivity for VMs that belong to the same tenant, within each server and across servers
     • Generally uses IP tunnel overlays (VXLAN, GRE) to create isolated L2 broadcast domains across L3 boundaries
     (Diagram: a single distributed vSwitch spanning VMs on multiple hosts.)
  15. How About L2+ Functions? The "in-kernel switch" approach
     (Diagram: in-kernel functions on the compute node handle L2 for the tenant VMs' vifs, while advanced functions are punted to a dedicated network node.)
  16. Extensible In-Kernel Functions
     (Diagram: advanced functions loaded directly into the compute node kernel, next to the tenant VMs' vifs.)
  17. Extensible Data Plane Architecture
     Is there a way to provide a software networking data plane that is:
     • Able to load and chain virtual network functions dynamically
     • Extensible
     • Fully programmable, able to freely access the raw network devices
     • In-kernel, leveraging all the existing kernel features for hardware support and portability
     • Guaranteed runtime-safe
     • Predictable in performance (delay, jitter, throughput...)?
     The answer: eBPF technology (https://lwn.net/Articles/603983)
  18. (E)BPF and Its Applicability to an Extensible Networking Data Plane: From Virtual Switches to Virtual Networks
  19. Classic BPF
     • BPF (Berkeley Packet Filter): an in-kernel virtual machine with a low-level instruction set for raw access to the data link layer
     • Introduced in Linux in 1997, in kernel version 2.1.75
     • Initially used as a socket filter by the packet capture tool tcpdump (via libpcap)
     Use cases:
     • Socket filters (drop or trim a packet and pass it to user space): used by tcpdump/libpcap, wireshark, nmap, dhcp, arpd, ...
     • In-kernel networking subsystems: cls_bpf (the TC classifier, in the QoS subsystem), xt_bpf, ppp, team, ...
     • seccomp (Chrome sandboxing): introduced in 2012 to filter syscall arguments with a BPF program
  20. Extended BPF
     • Sets of patches merged into the Linux kernel starting with 3.15 (June 8th, 2014) and continuing through 3.19 (Feb 8th, 2015), 4.0 (April 12th, 2015), and 4.1
     • A "universal in-kernel virtual machine"*
     • More registers (64-bit), safety (no crashes, finite execution...), user-space maps
     • In-kernel JIT compiler (safe) → x86, ARM64, s390, powerpc*, MIPS*, ...
     • LLVM backend: any platform that LLVM compiles to will work (a GCC backend is in the works) → PORTABILITY!
     Use cases:
     1. Networking
     2. Tracing (analytics, monitoring, debugging)
     3. In-kernel optimizations
     4. Hardware modeling
     5. Crazy stuff...
     * http://lwn.net/Articles/599755/
  21. Extended BPF Program = BPF Instructions + BPF Maps
     • BPF map: key/value storage of different types
       • value = bpf_table_lookup(table_id, key): look up a key in a table
       • User space can read and modify the tables
       • More on this in a later slide
     • BPF instruction-set improvements:

       Classic BPF program (pre-3.15)   | Extended BPF program
       ---------------------------------|----------------------------------------
       2 registers + stack              | 10 registers + stack
       32-bit registers                 | 64-bit registers
       4-byte load/store to stack       | 1-8 byte load/store to stack
       1-8 byte load from packet        | 1-8 byte load/store to packet
       Conditional jump forward         | Conditional jump forward and backward
       +, -, *, ... instructions        | Same, plus signed shift and bswap
  22. Extended BPF Networking Program Example: Fully Programmable Data Plane Access
     Restricted C program to:
     • obtain the protocol type (UDP, TCP, ICMP, ...) from each packet
     • keep a count for each protocol in a "map"

         int bpf_prog1(struct __sk_buff *skb)
         {
             /* Load the incoming frame and read its IP protocol as "index" */
             int index = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
             long *value;

             /* Look up that IP protocol "index" in an existing map and get the current "value" */
             value = bpf_map_lookup_elem(&my_map, &index);

             /* If found, atomically add 1 to the "value" */
             if (value)
                 __sync_fetch_and_add(value, 1);
             return 0;
         }

     Compiled by LLVM (or GCC, in the works) and JITed into blazing-fast in-kernel machine code; the equivalent eBPF program (struct bpf_insn "pretty print"):

         0: r6 = r1
         1: r0 = *(u8 *)skb[23]
         2: *(u32 *)(r10 -4) = r0
         3: r1 = 0xba933f00
         5: r2 = r10
         6: r2 += -4
         7: call 1
         8: if r0 == 0x0 goto pc+2
         9: r1 = 1
         10: lock *(u64 *)(r0 +0) += r1
         11: r0 = 0
         12: exit

     Full source: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/sockex2_kern.c
  23. DEMO 1: Using BPF kprobes for "Hello World"
     • Write a simple program that prints "Hello World!"
     • Attach it to a kprobe point: sys_clone, sys_fork, ...
     • Every time that event happens, "Hello World!" appears on stdout
     • Dynamically modify the code and update the in-kernel program: "Hello, Red Hat Summit!"
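     A minimal sketch of the kernel side of such a demo, in the style of the kernel's samples/bpf programs (our own reconstruction, not the talk's exact code; the user-space loader is omitted, and the greeting actually lands in the trace pipe, from which a loader or a simple reader prints it to stdout):

         /* Hedged sketch, samples/bpf style: emit a greeting every time
          * the sys_clone kprobe fires. Loaded with the samples/bpf
          * infrastructure or a tool like BCC; not the exact demo code. */
         #include <uapi/linux/bpf.h>
         #include <linux/ptrace.h>
         #include <linux/version.h>
         #include "bpf_helpers.h"

         SEC("kprobe/sys_clone")
         int hello_world(struct pt_regs *ctx)
         {
             char msg[] = "Hello, Red Hat Summit!\n";

             /* Writes to /sys/kernel/debug/tracing/trace_pipe. */
             bpf_trace_printk(msg, sizeof(msg));
             return 0;
         }

         char _license[] SEC("license") = "GPL";
         __u32 _version SEC("version") = LINUX_VERSION_CODE;

     Updating the demo live is then just editing the string and reloading the program, which is what makes the "dynamically modify" step quick.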
  24. Hooking BPF into the Linux Networking Stack (RX)
     • BPF programs can attach to sockets, the traffic control (TC) subsystem, kprobes, syscalls, tracepoints, ...
     • Sockets can be STREAM (L4/TCP), DATAGRAM (L4/UDP), or RAW
     • This allows hooking at different levels of the Linux networking stack, providing the ability to act on traffic that has or hasn't yet been processed by other pieces of the stack
     • Opens up the possibility of implementing network functions at different layers of the stack
     (Diagram: on receive, packets flow from the hardware/veth/container driver through netif_receive_skb(), TC, the bridge/prerouting hooks, and IP routing up to TCP/UDP sockets in user space, with BPF programs attachable at each stage.)
  25. Hooking BPF into the Linux Networking Stack (TX)
     • The same attach points exist in the transmit direction: sockets, IP routing, TC, and the driver via dev_queue_xmit()
     • For simplicity, the following slides collapse this view into a single "kernel networking stack"
     (Diagram: on transmit, packets flow from TCP/UDP sockets in user space through IP routing and TC into dev_queue_xmit() and the driver, again with BPF attach points at each stage.)
  26. Extended BPF System Usage: "Call" and "Helpers"
     • The bpf() syscall and a set of in-kernel helper functions define what BPF programs can do:
       int bpf(BPF_PROG_LOAD, union bpf_attr *attr, unsigned int size);
     • The BPF code itself acts as "glue" between calls to in-kernel helper functions
     • BPF helpers provide additional functionality:
       • ktime_get_ns (timestamps)
       • skb_store_bytes (packet writes)
       • L3/L4 checksum replacement
       • map_lookup/update/delete (more on maps later)
     • This enables "in-kernel VNFs"; a sketch of the "glue" idea follows
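     To illustrate BPF code as "glue" between helpers, here is a hedged sketch of our own (the map name ts_map and the section names are our assumptions, following samples/bpf conventions): one helper produces a timestamp, another stores it in a map, and the program merely connects them.

         #include <uapi/linux/bpf.h>
         #include <uapi/linux/if_ether.h>
         #include <uapi/linux/ip.h>
         #include <stddef.h>
         #include "bpf_helpers.h"

         /* Map: IP protocol number -> timestamp (ns) of the last packet seen. */
         struct bpf_map_def SEC("maps") ts_map = {
             .type = BPF_MAP_TYPE_ARRAY,
             .key_size = sizeof(__u32),
             .value_size = sizeof(__u64),
             .max_entries = 256,
         };

         SEC("socket1")
         int last_seen(struct __sk_buff *skb)
         {
             __u32 proto = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
             __u64 *ts = bpf_map_lookup_elem(&ts_map, &proto);

             /* Glue: one helper produces the value, another one stores it. */
             if (ts)
                 *ts = bpf_ktime_get_ns();
             return 0;
         }

         char _license[] SEC("license") = "GPL";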
  27. Extended BPF "Maps": Tables for "In-Kernel VNFs"
     • Maps are generic storage of different types for sharing data (key/value pairs) between the kernel and user space
     • Maps are accessed from user space via the bpf() syscall, with commands to:
       • Create a map with a given type and attributes, receiving a file descriptor:
         map_fd = bpf(BPF_MAP_CREATE, union bpf_attr *attr, u32 size)
       • Perform operations on the map: look up a key/value, update, delete, iterate, and delete the map
     • User-space programs use this syscall to create and access maps that BPF programs are concurrently updating; a user-space sketch follows
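     A minimal user-space sketch of those commands, using only the raw bpf(2) syscall described above (the wrapper names are ours; glibc had no bpf() wrapper at the time, hence syscall(__NR_bpf, ...)):

         #include <linux/bpf.h>
         #include <string.h>
         #include <sys/syscall.h>
         #include <unistd.h>

         /* Create a map with the given type and attributes; returns an fd. */
         static int bpf_create_map(enum bpf_map_type type, int key_size,
                                   int value_size, int max_entries)
         {
             union bpf_attr attr;

             memset(&attr, 0, sizeof(attr));
             attr.map_type = type;
             attr.key_size = key_size;
             attr.value_size = value_size;
             attr.max_entries = max_entries;
             return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
         }

         /* Look up a key in a map that a BPF program may be updating. */
         static int bpf_lookup_elem(int map_fd, const void *key, void *value)
         {
             union bpf_attr attr;

             memset(&attr, 0, sizeof(attr));
             attr.map_fd = map_fd;
             attr.key = (__u64)(unsigned long)key;
             attr.value = (__u64)(unsigned long)value;
             return syscall(__NR_bpf, BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
         }

     Programs are loaded through the same syscall with BPF_PROG_LOAD, as the next slide shows end to end.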
  28. Putting It All Together: Networking with BPF
     Example: attach a program to a socket
     • The user creates an eBPF program and fills in a union bpf_attr (previous slides) that includes the program's insns BPF instructions
     • A user-space program loads the eBPF program:
       int bpf(BPF_PROG_LOAD, union bpf_attr *attr, unsigned int size);
     • It also creates a map, controlled through a file descriptor:
       map_fd = bpf(BPF_MAP_CREATE, union bpf_attr *attr, u32 size)
     • Create a socket (varies with the socket type):
       sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
     • Attach the BPF program to the socket:
       setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &fd, sizeof(fd));
     • Enjoy in-kernel networking nirvana ☺ (an end-to-end sketch follows)
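     Gluing those steps together, a hedged end-to-end sketch (error handling trimmed; the two-instruction program is a trivial "accept everything" placeholder standing in for real LLVM output such as the program on slide 22):

         #include <linux/bpf.h>
         #include <netinet/in.h>
         #include <string.h>
         #include <sys/socket.h>
         #include <sys/syscall.h>
         #include <unistd.h>

         int main(void)
         {
             /* Placeholder program: r0 = -1 ("keep the whole packet"), exit. */
             struct bpf_insn insns[] = {
                 { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = -1 },
                 { .code = BPF_JMP | BPF_EXIT },
             };

             union bpf_attr attr;
             memset(&attr, 0, sizeof(attr));
             attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
             attr.insns = (__u64)(unsigned long)insns;
             attr.insn_cnt = sizeof(insns) / sizeof(insns[0]);
             attr.license = (__u64)(unsigned long)"GPL";

             /* BPF_PROG_LOAD runs the verifier; raw syscall, no glibc wrapper. */
             int prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
             if (prog_fd < 0)
                 return 1;

             /* Create the socket and attach the program by file descriptor. */
             int sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
             if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF,
                            &prog_fd, sizeof(prog_fd)) < 0)
                 return 1;

             /* Every packet on this socket now runs through the program. */
             return 0;
         }

     Note that SO_ATTACH_BPF takes the loaded program's file descriptor, unlike classic BPF's SO_ATTACH_FILTER, which takes the instruction array itself.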
  29. An eBPF Framework for Networking: Building Virtual Network Infrastructure
     (Diagram: dynamically loaded "IO Modules" (switching, routing, firewall, encap/tunneling, QoS/scheduling) run as in-kernel VNFs inside an eBPF execution container, wired to attachment points in the stack by a user-space µController, with optional IO Module helpers and an open repository of IO Modules.)
  30. Is There an Easier/Safer Way to Use This Technology? Higher-level APIs for producing and using BPF code
     • BPF ensures that programs loaded into the kernel won't crash or loop forever by running them through a "verifier" at load time (BPF_PROG_LOAD)
     • But today it is possible to write C programs that compile into invalid BPF (C is like that), and the user only finds out when trying to load the result (see the sketch below)
     • A BPF-specific frontend would allow the compiler itself to provide feedback on the validity of the code
     • The BPF Compiler Collection (BCC): http://github.com/iovisor/bcc
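     A hedged illustration of that failure mode (our own example, not from the deck): the function below is perfectly valid C and compiles cleanly with the LLVM BPF backend, but the data-dependent loop produces a backward jump, which the verifier of this era rejects, so the error only surfaces at BPF_PROG_LOAD time.

         #include <uapi/linux/bpf.h>
         #include "bpf_helpers.h"

         SEC("socket1")
         int looks_fine_in_c(struct __sk_buff *skb)
         {
             long sum = 0;
             int i;

             /* The bound depends on packet data, so the compiler cannot
              * unroll this loop; the resulting back-edge makes the verifier
              * refuse to load the program, long after compilation succeeded. */
             for (i = 0; i < load_byte(skb, 0); i++)
                 sum += i;

             return sum & 0xffff;
         }

         char _license[] SEC("license") = "GPL";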
  31. Why BCC?
     • Current approaches to converting a C program to BPF involve many custom steps and tools:
       • clang frontend, LLVM backend with BPF support
       • the kernel's samples/bpf/libbpf.c APIs
       • an ELF loader with section rewrites
       • programs use low-level helper functions
     • This can be simplified
  32. Writing a BPF Program: Easy Mode
     • Write your BPF program in C, inline or in a separate file
     • Write a Python script that loads and interacts with your BPF program:
       • Attach to kprobes, sockets, TC filters/actions
       • Read/update maps
       • Handle configuration and complex calculations/correlations
     • Iterate on the above and re-try... in seconds
  33. Demo 1 Redux: Easy Mode
     • Hello again, Red Hat Summit
  34. Demo 2: Using BPF for a Versatile Networking Application
     • Assume a set of applications running on top of a multi-tenant overlay network: think an OpenStack cloud running on top of VXLAN, or an IP VPN running on top of MPLS
     • Let's store statistics for all the endpoints of every "overlay", and also the endpoints of every "underlay", in real time and without added latency: think seeing in real time the traffic between all VMs of an OpenStack cloud (without needing administrative access), or the traffic between every CE router, IP phone, server, or endpoint connected to the IP VPN
     • Write a program that measures the traffic traversing the physical network and dynamically stores measurements of all metadata, independently of whether it is outer (VXLAN, MPLS) or inner (Ethernet/IP), then display each level of depth on demand (a sketch of the core of such a program follows)
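     A hedged sketch of the core of such a program, in the style of the earlier socket-filter examples (the map names and fixed-offset parsing are ours; it assumes untagged Ethernet, IPv4 without options, and VXLAN on UDP port 4789, and it skips the MPLS case):

         #include <uapi/linux/bpf.h>
         #include <uapi/linux/if_ether.h>
         #include <uapi/linux/ip.h>
         #include <uapi/linux/udp.h>
         #include <stddef.h>
         #include "bpf_helpers.h"

         #define VXLAN_PORT 4789   /* IANA port; some deployments use 8472 */
         #define VXLAN_HLEN 8

         /* Packet counts keyed by outer (underlay) and inner (overlay) source IP. */
         struct bpf_map_def SEC("maps") outer_stats = {
             .type = BPF_MAP_TYPE_HASH,
             .key_size = sizeof(__u32),
             .value_size = sizeof(long),
             .max_entries = 4096,
         };
         struct bpf_map_def SEC("maps") inner_stats = {
             .type = BPF_MAP_TYPE_HASH,
             .key_size = sizeof(__u32),
             .value_size = sizeof(long),
             .max_entries = 4096,
         };

         static inline __attribute__((always_inline))
         void count(void *map, __u32 key)
         {
             long one = 1;
             long *val = bpf_map_lookup_elem(map, &key);

             if (val)
                 __sync_fetch_and_add(val, 1);
             else
                 bpf_map_update_elem(map, &key, &one, BPF_ANY);
         }

         SEC("socket1")
         int overlay_stats(struct __sk_buff *skb)
         {
             /* Underlay: count every packet by its outer source IP. */
             __u32 outer_sip = load_word(skb, ETH_HLEN + offsetof(struct iphdr, saddr));
             count(&outer_stats, outer_sip);

             /* Overlay: if it is VXLAN, also count by the inner source IP. */
             if (load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol)) != IPPROTO_UDP)
                 return 0;
             if (load_half(skb, ETH_HLEN + sizeof(struct iphdr) +
                           offsetof(struct udphdr, dest)) != VXLAN_PORT)
                 return 0;
             int inner = ETH_HLEN + sizeof(struct iphdr) +
                         sizeof(struct udphdr) + VXLAN_HLEN;
             __u32 inner_sip = load_word(skb, inner + ETH_HLEN +
                                         offsetof(struct iphdr, saddr));
             count(&inner_stats, inner_sip);
             return 0;
         }

         char _license[] SEC("license") = "GPL";

     A user-space script would then walk outer_stats or inner_stats on demand to display each level of depth.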
  35. Demo 2: Dynamic Analytics on a Multi-Level Encapsulation Network
     (Diagram: three VXLAN segments (VNIs 10001, 10002, 10003) carrying overlay subnets 192.168.0.0/24, 192.168.1.0/24, and 192.168.3.0/24 over an underlay in 172.16.1.0/24.)
  36. Our Vision
     Thank You
