path: root/device

* device: move Queue{In,Out}boundElement Mutex to container type (Jordan Whited, 2023-10-10; 6 files, -111/+121)

    Queue{In,Out}boundElement locking can contribute to significant
    overhead via sync.Mutex.lockSlow() in some environments. These types
    are passed throughout the device package as elements in a slice, so
    move the per-element Mutex to a container around the slice.

    Reviewed-by: Maisem Ali <maisem@tailscale.com>
    Signed-off-by: Jordan Whited <jordan@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

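The shape of that change, as a minimal Go sketch (the type names here are illustrative, not the actual wireguard-go identifiers):

    package device

    import "sync"

    // Before: each element carried its own Mutex, so processing a batch of
    // N packets meant N lock/unlock pairs, with sync.Mutex.lockSlow showing
    // up under contention.
    type elementWithMutex struct {
        sync.Mutex
        packet []byte
    }

    // After: one Mutex on a container wrapping the slice; a whole batch is
    // locked and unlocked once.
    type element struct {
        packet []byte
    }

    type elementsContainer struct {
        sync.Mutex
        elems []*element
    }
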
* device: distribute crypto work as slice of elements (Jordan Whited, 2023-10-10; 3 files, -55/+55)

    After reducing UDP stack traversal overhead via GSO and GRO,
    runtime.chanrecv() began to account for a high percentage (20% in one
    environment) of perf samples during a throughput benchmark. The
    individual packet channel ops with the crypto goroutines were the
    primary contributor to this overhead. Updating these channels to pass
    vectors, which the device package already handles at its ends, reduced
    this overhead substantially and improved throughput.

    The iperf3 results below demonstrate the effect of this commit between
    two Linux computers with i5-12400 CPUs. There is roughly ~13us of
    round trip latency between them.

    The first result is with UDP GSO and GRO, and with single element
    channels.

        Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
        [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
        [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   3.15 MBytes
        - - - - - - - - - - - - - - - - - - - - - - - - -
        Test Complete. Summary Results:
        [ ID] Interval           Transfer     Bitrate         Retr
        [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   sender
        [  5]   0.00-10.04  sec  12.3 GBytes  10.6 Gbits/sec        receiver

    The second result is with channels updated to pass a slice of
    elements.

        Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
        [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
        [  5]   0.00-10.00  sec  13.2 GBytes  11.3 Gbits/sec  182   3.15 MBytes
        - - - - - - - - - - - - - - - - - - - - - - - - -
        Test Complete. Summary Results:
        [ ID] Interval           Transfer     Bitrate         Retr
        [  5]   0.00-10.00  sec  13.2 GBytes  11.3 Gbits/sec  182   sender
        [  5]   0.00-10.04  sec  13.2 GBytes  11.3 Gbits/sec        receiver

    Reviewed-by: Adrian Dewhurst <adrian@tailscale.com>
    Signed-off-by: Jordan Whited <jordan@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

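A sketch of the channel change described above, with invented identifiers standing in for the real queue types:

    package device

    // The encryption queue used to carry one element per channel
    // operation; passing a slice per operation amortizes the
    // runtime.chanrecv cost across a whole batch.
    type queueOutboundElement struct{ packet []byte }

    // Before: c chan *queueOutboundElement   (one chanrecv per packet)
    // After:  c chan []*queueOutboundElement (one chanrecv per batch)
    type outboundQueue struct {
        c chan []*queueOutboundElement
    }

    func encryptionWorker(q *outboundQueue) {
        for batch := range q.c { // one channel op yields many packets
            for _, elem := range batch {
                _ = elem // encrypt elem.packet here
            }
        }
    }
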
* conn, device: use UDP GSO and GRO on Linux (Jordan Whited, 2023-10-10; 1 file, -0/+8)

    StdNetBind probes for UDP GSO and GRO support at runtime. UDP GSO is
    dependent on checksum offload support on the egress netdev. UDP GSO
    will be disabled in the event sendmmsg() returns EIO, which is a
    strong signal that the egress netdev does not support checksum
    offload.

    The iperf3 results below demonstrate the effect of this commit between
    two Linux computers with i5-12400 CPUs. There is roughly ~13us of
    round trip latency between them.

    The first result is from commit 052af4a without UDP GSO or GRO.

        Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
        [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
        [  5]   0.00-10.00  sec  9.85 GBytes  8.46 Gbits/sec  1139  3.01 MBytes
        - - - - - - - - - - - - - - - - - - - - - - - - -
        Test Complete. Summary Results:
        [ ID] Interval           Transfer     Bitrate         Retr
        [  5]   0.00-10.00  sec  9.85 GBytes  8.46 Gbits/sec  1139  sender
        [  5]   0.00-10.04  sec  9.85 GBytes  8.42 Gbits/sec        receiver

    The second result is with UDP GSO and GRO.

        Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
        [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
        [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   3.15 MBytes
        - - - - - - - - - - - - - - - - - - - - - - - - -
        Test Complete. Summary Results:
        [ ID] Interval           Transfer     Bitrate         Retr
        [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   sender
        [  5]   0.00-10.04  sec  12.3 GBytes  10.6 Gbits/sec        receiver

    Reviewed-by: Adrian Dewhurst <adrian@tailscale.com>
    Signed-off-by: Jordan Whited <jordan@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

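A rough illustration of runtime offload probing on Linux using golang.org/x/sys/unix. This is a hedged sketch, not the actual StdNetBind probe: it assumes that a kernel answering getsockopt(UDP_SEGMENT) supports GSO and that setting UDP_GRO enables receive coalescing.

    //go:build linux

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func probeUDPOffload(fd int) (gso, gro bool) {
        // A kernel that understands UDP_SEGMENT answers the getsockopt.
        _, errGSO := unix.GetsockoptInt(fd, unix.IPPROTO_UDP, unix.UDP_SEGMENT)
        gso = errGSO == nil
        // Enabling UDP_GRO asks the kernel to coalesce received datagrams.
        errGRO := unix.SetsockoptInt(fd, unix.IPPROTO_UDP, unix.UDP_GRO, 1)
        gro = errGRO == nil
        return
    }

    func main() {
        fd, err := unix.Socket(unix.AF_INET, unix.SOCK_DGRAM, 0)
        if err != nil {
            panic(err)
        }
        defer unix.Close(fd)
        gso, gro := probeUDPOffload(fd)
        fmt.Printf("UDP GSO: %v, UDP GRO: %v\n", gso, gro)
    }
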
* device: wait for and lock ipc operations during close (James Tucker, 2023-06-27; 1 file, -0/+2)

    If an IPC operation is in flight while close starts, it is possible
    for both processes to deadlock. Prevent this by taking the IPC lock at
    the start of close and holding it for the duration.

    Signed-off-by: James Tucker <jftucker@gmail.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

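The essence of the fix, sketched with a simplified Device (the field and method shapes are approximations of the real ones):

    package device

    import "sync"

    type Device struct {
        ipcMutex sync.RWMutex
    }

    func (device *Device) IpcSet(config string) error {
        device.ipcMutex.Lock()
        defer device.ipcMutex.Unlock()
        // ... parse and apply configuration ...
        return nil
    }

    func (device *Device) Close() {
        device.ipcMutex.Lock() // waits for any in-flight IPC operation
        defer device.ipcMutex.Unlock()
        // ... stop peers, close queues and the bind ...
    }
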
* conn: disable sticky sockets on Android (Jason A. Donenfeld, 2023-03-23; 1 file, -0/+3)

    We can't have the netlink listener socket, so it's not possible to
    support sticky sockets there. Plus, the complexity of the Android
    networking stack makes it a bit tricky anyway, so best to leave it
    disabled.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: buff -> buf (Jason A. Donenfeld, 2023-03-13; 4 files, -48/+48)

    This always struck me as kind of weird and non-standard.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* conn: inch BatchSize toward being non-dynamic (Jason A. Donenfeld, 2023-03-10; 2 files, -2/+2)

    There's not really a use at the moment for making this configurable,
    and once bind_windows.go behaves like bind_std.go, we'll be able to
    use constants everywhere. So begin that simplification now.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* conn, device, tun: implement vectorized I/O on Linux (Jordan Whited, 2023-03-10; 3 files, -9/+13)

    Implement TCP offloading via TSO and GRO for the Linux tun.Device,
    which is made possible by virtio extensions in the kernel's TUN
    driver.

    Delete conn.LinuxSocketEndpoint in favor of a collapsed
    conn.StdNetBind. conn.StdNetBind makes use of recvmmsg() and
    sendmmsg() on Linux. All platforms now fall under conn.StdNetBind,
    except for Windows, which remains in conn.WinRingBind, which still
    needs to be adjusted to handle multiple packets.

    Also refactor sticky sockets support to eventually be applicable on
    platforms other than just Linux. However, Linux remains the sole
    platform that fully implements it for now.

    Co-authored-by: James Tucker <james@tailscale.com>
    Signed-off-by: James Tucker <james@tailscale.com>
    Signed-off-by: Jordan Whited <jordan@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

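For background, a sketch of the kernel facility involved, assuming the golang.org/x/sys/unix constants; wireguard-go's actual tun code is considerably more involved (it must also parse and build the virtio_net_hdr on every packet):

    //go:build linux

    package sketch

    import "golang.org/x/sys/unix"

    // A TUN device created with IFF_VNET_HDR prepends a virtio_net_hdr to
    // each packet, and TUNSETOFFLOAD advertises which offloads userspace
    // is prepared to accept. With TSO enabled, the kernel may hand us
    // "packets" far larger than the MTU, described by that header.
    func enableTunOffloads(fd int) error {
        const offloads = unix.TUN_F_CSUM | unix.TUN_F_TSO4 | unix.TUN_F_TSO6
        return unix.IoctlSetInt(fd, unix.TUNSETOFFLOAD, offloads)
    }
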
* conn, device, tun: implement vectorized I/O plumbing (Jordan Whited, 2023-03-10; 8 files, -260/+562)

    Accept packet vectors for reading and writing in the tun.Device and
    conn.Bind interfaces, so that the internal plumbing between these
    interfaces now passes a vector of packets. Vectors move untouched
    between these interfaces, i.e. if 128 packets are received from
    conn.Bind.Read(), 128 packets are passed to tun.Device.Write(). There
    is no internal buffering.

    Currently, existing implementations are only adjusted to have vectors
    of length one. Subsequent patches will improve that.

    Also, as a related fixup, use the unix and windows packages rather
    than the syscall package when possible.

    Co-authored-by: James Tucker <james@tailscale.com>
    Signed-off-by: James Tucker <james@tailscale.com>
    Signed-off-by: Jordan Whited <jordan@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

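An approximation of the vectorized interface shape this commit introduces (simplified; the real tun.Device and conn.Bind carry more methods and an endpoint type, so treat these signatures as a sketch):

    package sketch

    type Device interface {
        // Read fills bufs with up to len(bufs) packets in one call,
        // writing each packet's length into sizes; n is the number of
        // packets read.
        Read(bufs [][]byte, sizes []int, offset int) (n int, err error)
        // Write sends the packets in bufs in one call.
        Write(bufs [][]byte, offset int) (int, error)
    }
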
* device: uniformly check ECDH output for zeros (Jason A. Donenfeld, 2023-02-16; 5 files, -38/+45)

    For some reason, this check was omitted for response messages.

    Reported-by: z <dzm@unexpl0.red>
    Fixes: 8c34c4c ("First set of code review patches")
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

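A sketch of the check being made uniform; the real code lives in the noise handshake with its own key types, so the function and error names here are invented:

    package sketch

    import (
        "crypto/subtle"
        "errors"
    )

    var errInvalidPublicKey = errors.New("invalid public key")

    // An all-zero X25519 shared secret indicates a low-order public key
    // and must be rejected for response messages exactly as for
    // initiations. The comparison is constant-time so the check itself
    // leaks nothing.
    func validateSharedSecret(ss [32]byte) error {
        var zeros [32]byte
        if subtle.ConstantTimeCompare(ss[:], zeros[:]) == 1 {
            return errInvalidPublicKey
        }
        return nil
    }
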
* global: bump copyright year (Jason A. Donenfeld, 2023-02-07; 36 files, -36/+36)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: bump copyright year (Jason A. Donenfeld, 2022-09-20; 36 files, -36/+36)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* all: use Go 1.19 and its atomic types (Brad Fitzpatrick, 2022-09-04; 15 files, -234/+106)

    Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

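The migration pattern, in miniature (the field name is invented for the example):

    package sketch

    import "sync/atomic"

    type counters struct {
        // Before Go 1.19: txBytes uint64, updated via
        // atomic.AddUint64(&c.txBytes, n), with manual care to keep the
        // field 64-bit aligned on 32-bit platforms.
        // After: the typed atomic handles alignment and atomicity itself.
        txBytes atomic.Uint64
    }

    func (c *counters) addTx(n uint64) uint64 {
        return c.txBytes.Add(n)
    }
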
* conn, device, tun: set CLOEXEC on fds (Brad Fitzpatrick, 2022-07-04; 1 file, -1/+1)

    Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* all: use any in place of interface{} (Josh Bleecher Snyder, 2022-03-16; 4 files, -15/+15)

    Enabled by using Go 1.18. A bit less verbose.

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* all: update to Go 1.18 (Josh Bleecher Snyder, 2022-03-16; 6 files, -15/+9)

    Bump go.mod and README.
    Switch to upstream net/netip.
    Use strings.Cut.

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* global: apply gofumpt (Jason A. Donenfeld, 2021-12-09; 9 files, -18/+9)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: handle peer post config on blank line (Jason A. Donenfeld, 2021-11-29; 1 file, -0/+1)

    We missed a function exit point. This was exacerbated by e3134bf
    ("device: defer state machine transitions until configuration is
    complete"), but the bug existed prior. Minus provided the following
    useful reproducer script:

        #!/usr/bin/env bash
        set -eux
        make wireguard-go || exit 125
        ip netns del test-ns || true
        ip netns add test-ns
        ip link add test-kernel type wireguard
        wg set test-kernel listen-port 0 private-key <(echo "QMCfZcp1KU27kEkpcMCgASEjDnDZDYsfMLHPed7+538=") peer "eDPZJMdfnb8ZcA/VSUnLZvLB2k8HVH12ufCGa7Z7rHI=" allowed-ips 10.51.234.10/32
        ip link set test-kernel netns test-ns up
        ip -n test-ns addr add 10.51.234.1/24 dev test-kernel
        port=$(ip netns exec test-ns wg show test-kernel listen-port)
        ip link del test-go || true
        ./wireguard-go test-go
        wg set test-go private-key <(echo "WBM7qimR3vFk1QtWNfH+F4ggy/hmO+5hfIHKxxI4nF4=") peer "+nj9Dkqpl4phsHo2dQliGm5aEiWJJgBtYKbh7XjeNjg=" allowed-ips 0.0.0.0/0 endpoint 127.0.0.1:$port
        ip addr add 10.51.234.10/24 dev test-go
        ip link set test-go up
        ping -c2 -W1 10.51.234.1

    Reported-by: minus <minus@mnus.de>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: reduce peer lock critical section in UAPI (Josh Bleecher Snyder, 2021-11-23; 1 file, -26/+28)

    The deferred RUnlock calls weren't executing until all peers had been
    processed. Add an anonymous function so that each peer may be unlocked
    as soon as it is completed.

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

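The pattern this commit applies, reduced to a sketch (Peer here is a stand-in, not the real struct): a defer inside a loop body only runs when the whole function returns, so wrapping each iteration in an anonymous function releases each peer's lock as soon as that peer has been handled.

    package device

    import "sync"

    type Peer struct {
        sync.RWMutex
    }

    func dumpPeers(peers []*Peer) {
        for _, peer := range peers {
            func() {
                peer.RLock()
                defer peer.RUnlock() // runs at closure exit: once per iteration
                // ... serialize this peer's configuration ...
            }()
        }
    }
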
* device: remove code using unsafe (Josh Bleecher Snyder, 2021-11-23; 1 file, -33/+13)

    There is no performance impact.

        name                             old time/op  new time/op  delta
        TrieIPv4Peers100Addresses1000-8  78.6ns ± 1%  79.4ns ± 3%    ~     (p=0.604 n=10+9)
        TrieIPv4Peers10Addresses10-8     29.1ns ± 2%  28.8ns ± 1%  -1.12%  (p=0.014 n=10+9)
        TrieIPv6Peers100Addresses1000-8  78.9ns ± 1%  78.6ns ± 1%    ~     (p=0.492 n=10+10)
        TrieIPv6Peers10Addresses10-8     29.3ns ± 2%  28.6ns ± 2%  -2.16%  (p=0.000 n=10+10)

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: use netip where possible now (Jason A. Donenfeld, 2021-11-23; 7 files, -54/+57)

    There are more places where we'll need to add it later, when Go 1.18
    comes out with support for it in the "net" package. Also, allowedips
    still uses slices internally, which might be suboptimal.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: only propagate roaming value before peer is referenced elsewhere (Jason A. Donenfeld, 2021-11-16; 1 file, -1/+3)

    A peer.endpoint never becomes nil after being not-nil, so creation is
    the only time we actually need to set this. This prevents a race when
    the variable is actually used elsewhere, and allows us to avoid an
    expensive atomic.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: align 64-bit atomic member in Device (Jason A. Donenfeld, 2021-11-16; 1 file, -5/+6)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

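Why ordering matters here, sketched with an invented field (this is the pre-Go 1.19 rule; the typed atomics adopted in the Go 1.19 commit above later removed the need for this manual care): sync/atomic documents that only the first word of an allocated struct can be relied on to be 64-bit aligned on 32-bit platforms, so atomically accessed 64-bit fields are grouped at the front.

    package sketch

    import "sync/atomic"

    type Device struct {
        // Accessed with sync/atomic; must stay first for alignment on
        // 32-bit platforms.
        underLoadUntil int64

        // Non-atomic fields can follow in any order.
        isUp bool
    }

    func (d *Device) markUnderLoad(until int64) {
        atomic.StoreInt64(&d.underLoadUntil, until)
    }
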
* device: start peers before running handshake test (Jason A. Donenfeld, 2021-11-16; 1 file, -0/+2)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: fix nil pointer dereference in uapi read (David Anderson, 2021-11-16; 1 file, -2/+2)

    Signed-off-by: David Anderson <danderson@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: make new peers inherit broken mobile semantics (Jason A. Donenfeld, 2021-11-15; 3 files, -0/+5)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: defer state machine transitions until configuration is complete (Jason A. Donenfeld, 2021-11-15; 3 files, -15/+18)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: do not consume handshake messages if not running (Jason A. Donenfeld, 2021-11-15; 1 file, -1/+1)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: timers: use pre-seeded per-thread unlocked fastrandn for jitter (Jason A. Donenfeld, 2021-10-28; 1 file, -10/+5)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: timers: seed unsafe rng before use for jitter (Jason A. Donenfeld, 2021-10-28; 1 file, -3/+11)

    Because we forgot to seed the unsafe rng, the jitter previously
    followed a fixed pattern, which didn't help when a fleet of computers
    all boot at once.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: remove old-style build tags (Jason A. Donenfeld, 2021-10-12; 5 files, -5/+0)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: add new go 1.17 build comments (Jason A. Donenfeld, 2021-09-05; 5 files, -2/+7)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: zero out allowedip node pointers when removing (Jason A. Donenfeld, 2021-06-04; 2 files, -1/+22)

    This should make it a bit easier for the garbage collector.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: limit allowedip fuzzer to 4 times through (Jason A. Donenfeld, 2021-06-03; 1 file, -5/+10)

    Trying this for every peer winds up being very slow and precludes an
    acceptable runtime in CI, so reduce this to 4.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: simplify allowedips lookup signature (Jason A. Donenfeld, 2021-06-03; 5 files, -17/+18)

    The inliner should handle this for us.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: remove nodes by peer in O(1) instead of O(n) (Jason A. Donenfeld, 2021-06-03; 2 files, -72/+82)

    Now that we have parent pointers hooked up, we can simply go right to
    the node and remove it in place, rather than having to recursively
    walk the entire trie.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: remove recursion from insertion and connect parent pointers (Jason A. Donenfeld, 2021-06-03; 3 files, -59/+95)

    This makes the insertion algorithm a bit more efficient, while also
    now taking on the additional task of connecting up parent pointers.
    This will be handy in the following commit.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

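A toy model of the trie machinery the two commits above put in place (heavily simplified; a real removal also has to handle interior nodes with two children by merely clearing their peer): each node records its parent and which child slot it occupies, so it can be unlinked in place instead of re-walked from the root.

    package sketch

    type Peer struct{}

    type trieNode struct {
        child    [2]*trieNode
        parent   *trieNode
        childIdx int // which slot of parent.child points at this node
        peer     *Peer
    }

    // removeNode splices out a node with at most one child, in O(1).
    func removeNode(n *trieNode) {
        next := n.child[0]
        if next == nil {
            next = n.child[1]
        }
        if n.parent != nil {
            n.parent.child[n.childIdx] = next
        }
        if next != nil {
            next.parent = n.parent
            next.childIdx = n.childIdx
        }
        // Zero out pointers so the collector can reclaim promptly, in the
        // spirit of the "zero out allowedip node pointers" commit above.
        n.child[0], n.child[1], n.parent, n.peer = nil, nil, nil, nil
    }
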
* device: reduce size of trie struct (Jason A. Donenfeld, 2021-06-03; 5 files, -53/+45)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: allow reducing queue constants on iOS (Jason A. Donenfeld, 2021-05-22; 3 files, -11/+12)

    Heavier network extensions might require the wireguard-go component to
    use less RAM, so let users of this package reduce these as needed. At
    some point we'll put this behind a configuration method of sorts, but
    for now, just expose the consts as vars.

    Requested-by: Josh Bleecher Snyder <josh@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

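The shape of the change, as a sketch; the values shown are believed to match the defaults of this era but should be treated as illustrative rather than authoritative:

    package device

    // Formerly consts, now vars, so an embedding app (e.g. a memory-
    // constrained iOS network extension) can shrink them before bringing
    // a device up.
    var (
        QueueStagedSize            = 128
        QueueOutboundSize          = 1024
        QueueInboundSize           = 1024
        QueueHandshakeSize         = 1024
        PreallocatedBuffersPerPool = 0 // 0 means dynamic, unbounded pools
    )
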
* tun: linux: account for interface removal from outside (Jason A. Donenfeld, 2021-05-20; 1 file, -1/+5)

    On Linux we can run `ip link del wg0`, in which case the fd becomes
    stale, and we should exit. Since this is an intentional action, don't
    treat it as an error.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: optimize Peer.String even more (Jason A. Donenfeld, 2021-05-18; 1 file, -14/+16)

    This reduces the allocation, branches, and amount of base64 encoding.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

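For reference, the string format in question, computed the straightforward way (this commit, and the earlier optimization below, produce the same output while avoiding a full base64 encoding and its allocations): the first four and last four base64 characters of the 32-byte public key, middle elided.

    package sketch

    import (
        "encoding/base64"
        "fmt"
    )

    func peerString(publicKey [32]byte) string {
        // 32 bytes encode to 44 characters, the last of which is '='.
        b64 := base64.StdEncoding.EncodeToString(publicKey[:])
        return fmt.Sprintf("peer(%s…%s)", b64[:4], b64[39:43])
    }
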
* device: optimize Peer.String (Josh Bleecher Snyder, 2021-05-14; 1 file, -7/+20)

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: add ID to repeated routines (Jason A. Donenfeld, 2021-05-07; 3 files, -13/+13)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: remove unusual ... in messages (Jason A. Donenfeld, 2021-05-07; 1 file, -2/+2)

    We don't use "..." in any other present-progressive messages except
    these.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: avoid verbose log line during ordinary shutdown sequence (Jason A. Donenfeld, 2021-05-07; 1 file, -1/+1)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: log all errors received by RoutineReceiveIncoming (Josh Bleecher Snyder, 2021-05-06; 1 file, -1/+1)

    When debugging, it's useful to know why a receive func exited. We were
    already logging that, but only in the "death spiral" case. Move the
    logging up, to capture it always. Reduce the verbosity, since it is
    not an error case any more. Put the receive func name in the log line.

    Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>

* device: don't defer unlocking from loop (Jason A. Donenfeld, 2021-04-12; 1 file, -1/+1)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* conn: reconstruct v4 vs v6 receive function based on symtab (Jason A. Donenfeld, 2021-04-12; 1 file, -2/+3)

    This is kind of gross but it's better than the alternatives.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: allocate new buffer in receive death spiral (Kristupas Antanavičius, 2021-04-12; 1 file, -0/+1)

    Note: this bug is "hidden" because the "death spiral" code path is
    avoided by 6228659 ("device: handle broader range of errors in
    RoutineReceiveIncoming"). If the code reached the "death spiral"
    mechanism, there would be multiple double frees happening.

    This results in a deadlock on iOS, because the pools are fixed size
    and a goroutine might stall until somebody makes space in the pool.
    This was almost 100% reproducible on the new ARM MacBooks:

    - Build with the 'ios' tag for Mac. This will enable bounded pools.
    - Somehow call device.IpcSet at least a couple of times (update
      config).
    - device.BindUpdate() would be triggered.
    - RoutineReceiveIncoming would enter the "death spiral".
    - RoutineReceiveIncoming would stall on a double free (pool is already
      full).
    - The stuck routine would deadlock the 'device.closeBindLocked()'
      function on the line 'netc.stopping.Wait()'.

    Signed-off-by: Kristupas Antanavičius <kristupas.antanavicius@nordsec.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* all: make conn.Bind.Open return a slice of receive functions (Josh Bleecher Snyder, 2021-04-02; 2 files, -20/+12)

    Instead of hard-coding exactly two sources from which to receive
    packets (an IPv4 source and an IPv6 source), allow the conn.Bind to
    specify a set of sources.

    Beneficial consequences:

    * If there's no IPv6 support on a system, conn.Bind.Open can choose
      not to return a receive function for it, which is simpler than
      tracking that state in the bind. This simplification removes
      existing data races from both conn.StdNetBind and
      bindtest.ChannelBind.
    * If there are more than two sources on a system, the conn.Bind no
      longer needs to add a separate muxing layer.

    Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>

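An approximation of the resulting interface shape (simplified; the real conn.Bind and conn.Endpoint carry more methods, so take the exact signatures as a sketch):

    package sketch

    type Endpoint interface {
        DstToString() string
    }

    type ReceiveFunc func(buf []byte) (n int, ep Endpoint, err error)

    type Bind interface {
        // Open returns one ReceiveFunc per packet source: perhaps only
        // IPv4, perhaps IPv4 and IPv6, perhaps more. The device spawns one
        // receive goroutine per returned function instead of hard-coding
        // two.
        Open(port uint16) (fns []ReceiveFunc, actualPort uint16, err error)
        Close() error
    }
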