path: root/device/device.go

* conn, device, tun: implement vectorized I/O plumbing (Jordan Whited, 2023-03-10; 1 file, -4/+23)
  Accept packet vectors for reading and writing in the tun.Device and conn.Bind
  interfaces, so that the internal plumbing between these interfaces now passes
  a vector of packets. Vectors move untouched between these interfaces, i.e. if
  128 packets are received from conn.Bind.Read(), 128 packets are passed to
  tun.Device.Write(). There is no internal buffering. Currently, existing
  implementations are only adjusted to have vectors of length one. Subsequent
  patches will improve that. Also, as a related fixup, use the unix and windows
  packages rather than the syscall package when possible.
  Co-authored-by: James Tucker <james@tailscale.com>
  Signed-off-by: James Tucker <james@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

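  A minimal Go sketch of what a vector-of-packets interface can look like; the
  method set and signatures below are illustrative, not the project's exact API:

    package vio

    // Device moves batches of packets in a single call, so a batch
    // received from the network can be written to the TUN unchanged.
    type Device interface {
        // Read fills bufs with up to len(bufs) packets, recording each
        // packet's length in sizes, and returns how many were read.
        Read(bufs [][]byte, sizes []int, offset int) (n int, err error)
        // Write sends len(bufs) packets in one call, with each packet's
        // payload beginning offset bytes into its buffer.
        Write(bufs [][]byte, offset int) (n int, err error)
        // BatchSize reports the preferred number of packets per call.
        BatchSize() int
    }
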
* device: uniformly check ECDH output for zeros (Jason A. Donenfeld, 2023-02-16; 1 file, -1/+1)
  For some reason, this was omitted for response messages.
  Reported-by: z <dzm@unexpl0.red>
  Fixes: 8c34c4c ("First set of code review patches")
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

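  A short sketch of the kind of check being applied uniformly here: reject an
  all-zero Curve25519 shared secret (the result of a low-order public key),
  comparing in constant time. Function names are illustrative:

    package handshake

    import (
        "crypto/subtle"
        "errors"
    )

    // isZero reports whether b is all zeros, in constant time.
    func isZero(b []byte) bool {
        zero := make([]byte, len(b))
        return subtle.ConstantTimeCompare(b, zero) == 1
    }

    // checkSharedSecret rejects a handshake whose ECDH output is zero.
    func checkSharedSecret(ss [32]byte) error {
        if isZero(ss[:]) {
            return errors.New("ECDH output is all zeros")
        }
        return nil
    }
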
* global: bump copyright year (Jason A. Donenfeld, 2023-02-07; 1 file, -1/+1)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: bump copyright year (Jason A. Donenfeld, 2022-09-20; 1 file, -1/+1)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* all: use Go 1.19 and its atomic types (Brad Fitzpatrick, 2022-09-04; 1 file, -17/+15)
  Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

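  A brief sketch of the Go 1.19 typed-atomics style adopted here: struct fields
  become atomic.Uint64 and friends, which are always safely aligned and replace
  the free-function sync/atomic calls (field names are illustrative):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    type stats struct {
        // Typed atomics need no manual alignment tricks, unlike a plain
        // uint64 used with atomic.AddUint64 on 32-bit platforms.
        rxBytes atomic.Uint64
        txBytes atomic.Uint64
    }

    func main() {
        var s stats
        s.rxBytes.Add(1420)
        fmt.Println(s.rxBytes.Load()) // 1420
    }
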
* device: align 64-bit atomic member in Device (Jason A. Donenfeld, 2021-11-16; 1 file, -5/+6)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: make new peers inherit broken mobile semantics (Jason A. Donenfeld, 2021-11-15; 1 file, -0/+1)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: defer state machine transitions until configuration is complete (Jason A. Donenfeld, 2021-11-15; 1 file, -0/+5)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: allow reducing queue constants on iOS (Jason A. Donenfeld, 2021-05-22; 1 file, -1/+1)
  Heavier network extensions might require the wireguard-go component to use
  less RAM, so let users of this reduce these as needed. At some point we'll
  put this behind a configuration method of sorts, but for now, just expose
  the consts as vars.
  Requested-by: Josh Bleecher Snyder <josh@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

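  Illustratively, the change amounts to turning tuning constants into
  package-level variables that an embedder can shrink before constructing a
  device; the names and values here are hypothetical:

    package device

    // Exposed as vars rather than consts so that memory-constrained
    // embedders, such as iOS network extensions, can lower them.
    var (
        QueueStagedSize    = 128
        QueueOutboundSize  = 1024
        QueueInboundSize   = 1024
        QueueHandshakeSize = 1024
    )
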
* device: add ID to repeated routines (Jason A. Donenfeld, 2021-05-07; 1 file, -3/+3)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* all: make conn.Bind.Open return a slice of receive functions (Josh Bleecher Snyder, 2021-04-02; 1 file, -9/+8)
  Instead of hard-coding exactly two sources from which to receive packets (an
  IPv4 source and an IPv6 source), allow the conn.Bind to specify a set of
  sources. Beneficial consequences:

  * If there's no IPv6 support on a system, conn.Bind.Open can choose not to
    return a receive function for it, which is simpler than tracking that
    state in the bind. This simplification removes existing data races from
    both conn.StdNetBind and bindtest.ChannelBind.
  * If there are more than two sources on a system, the conn.Bind no longer
    needs to add a separate muxing layer.

  Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>

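  A simplified sketch of such an interface; the actual parameter and endpoint
  types in the repository differ:

    package conn

    import "net/netip"

    // ReceiveFunc reads a single packet from one underlying source.
    type ReceiveFunc func(buf []byte) (n int, src netip.AddrPort, err error)

    // Bind binds sockets and hands back one receive function per source
    // that is actually available on this system.
    type Bind interface {
        Open(port uint16) (fns []ReceiveFunc, actualPort uint16, err error)
        Close() error
    }
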
* device: rename unsafeCloseBind to closeBindLocked (Josh Bleecher Snyder, 2021-03-30; 1 file, -3/+5)
  And document a bit. This name is more idiomatic.
  Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>

* device: get rid of peers.empty boolean in timersActive (Jason A. Donenfeld, 2021-03-06; 1 file, -8/+6)
  There's no way for len(peers)==0 when a current peer has isRunning==false.
  This requires some struct reshuffling so that the uint64 pointer is aligned.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* conn: make binds replaceable (Jason A. Donenfeld, 2021-02-23; 1 file, -10/+3)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: return error from Up() and Down() (Jason A. Donenfeld, 2021-02-10; 1 file, -13/+19)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: handshake routine writes into encryption queue (Jason A. Donenfeld, 2021-02-09; 1 file, -0/+1)
  Since RoutineHandshake calls peer.SendKeepalive(), it potentially is a
  writer into the encryption queue, so we need to bump the wg count.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: make RoutineReadFromTUN keep encryption queue alive (Josh Bleecher Snyder, 2021-02-09; 1 file, -1/+2)
  RoutineReadFromTUN can trigger a call to SendStagedPackets. SendStagedPackets
  attempts to protect against sending on the encryption queue by checking
  peer.isRunning and device.isClosed. However, those are subject to TOCTOU
  bugs. If that happens, we get this:

    goroutine 1254 [running]:
    golang.zx2c4.com/wireguard/device.(*Peer).SendStagedPackets(0xc000798300)
            .../wireguard-go/device/send.go:321 +0x125
    golang.zx2c4.com/wireguard/device.(*Device).RoutineReadFromTUN(0xc000014780)
            .../wireguard-go/device/send.go:271 +0x21c
    created by golang.zx2c4.com/wireguard/device.NewDevice
            .../wireguard-go/device/device.go:315 +0x298

  Fix this with a simple, big hammer: Keep the encryption queue alive as long
  as it might be written to.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: clarify device.state.state docs (again) (Josh Bleecher Snyder, 2021-02-09; 1 file, -2/+4)
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: rename unsafeRemovePeer to removePeerLocked (Jason A. Donenfeld, 2021-02-09; 1 file, -9/+5)
  This matches the new naming scheme of upLocked and downLocked.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: remove deviceStateNew (Jason A. Donenfeld, 2021-02-09; 1 file, -8/+6)
  It's never used and we won't have a use for it. Also, move to go-running
  stringer, for those without GOPATHs.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

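  "Go-running stringer" refers to invoking the stringer code generator via
  go run, so contributors need no separately installed binary. A sketch of
  the idiom; the exact directive and state constants in the repository may
  differ:

    package device

    //go:generate go run golang.org/x/tools/cmd/stringer -type=deviceState -trimprefix=deviceState

    type deviceState uint32

    const (
        deviceStateDown deviceState = iota
        deviceStateUp
        deviceStateClosed
    )
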
* device: fix comment typo and shorten state.mu.Lock to state.Lock (Jason A. Donenfeld, 2021-02-09; 1 file, -8/+7)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: fix typo in comment (Jason A. Donenfeld, 2021-02-09; 1 file, -1/+1)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: fix alignment on 32-bit machines and test for it (Jason A. Donenfeld, 2021-02-09; 1 file, -6/+1)
  The test previously checked the offset within a substruct, not the offset
  within the allocated struct, so this adds the two together. It then fixes an
  alignment crash on 32-bit machines.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

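  A sketch of that kind of alignment test, assuming a 64-bit atomic counter
  nested in a substruct (type and field names are invented for the example):

    package device

    import (
        "testing"
        "unsafe"
    )

    type ratelimiter struct {
        pad   uint32
        count uint64 // accessed with sync/atomic; must be 8-byte aligned
    }

    type device struct {
        flag byte
        rate ratelimiter
    }

    func TestCounterAlignment(t *testing.T) {
        var d device
        // The offset within the allocation is the substruct's offset
        // plus the field's offset inside the substruct, added together.
        off := unsafe.Offsetof(d.rate) + unsafe.Offsetof(d.rate.count)
        if off%8 != 0 {
            t.Errorf("count is not 8-byte aligned: offset %d", off)
        }
    }
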
* device: do not log on idempotent device state change (Jason A. Donenfeld, 2021-02-09; 1 file, -1/+0)
  Part of being actually idempotent is that we shouldn't penalize code that
  takes advantage of this property with a log splat.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: create channels.go (Josh Bleecher Snyder, 2021-02-08; 1 file, -61/+0)
  We have a bunch of stupid channel tricks, and I'm about to add more. Give
  them their own file. This commit is 100% code movement.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: remove device.state.stopping from RoutineTUNEventReader (Josh Bleecher Snyder, 2021-02-08; 1 file, -1/+1)
  The TUN event reader does three things: Change MTU, device up, and device
  down. Changing the MTU after the device is closed does no harm. Device up
  and device down don't make sense after the device is closed, but we can
  check that condition before proceeding with changeState. There's thus no
  reason to block device.Close on RoutineTUNEventReader exiting.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: overhaul device state management (Josh Bleecher Snyder, 2021-02-08; 1 file, -128/+151)
  This commit simplifies device state management. It creates a single unified
  state variable and documents its semantics. It also makes state changes more
  atomic. As an example of the sort of bug that occurred due to non-atomic
  state changes, the following sequence of events used to occur approximately
  every 2.5 million test runs:

  * RoutineTUNEventReader received an EventDown event.
  * It called device.Down, which called device.setUpDown.
  * That set device.state.changing, but did not yet attempt to lock
    device.state.Mutex.
  * Test completion called device.Close.
  * device.Close locked device.state.Mutex.
  * device.Close blocked on a call to device.state.stopping.Wait.
  * device.setUpDown then attempted to lock device.state.Mutex and blocked.

  Deadlock results. setUpDown cannot progress because device.state.Mutex is
  locked. Until setUpDown returns, RoutineTUNEventReader cannot call
  device.state.stopping.Done. Until device.state.stopping.Done gets called,
  device.state.stopping.Wait is blocked. As long as device.state.stopping.Wait
  is blocked, device.state.Mutex cannot be unlocked. This commit fixes that
  deadlock by holding device.state.mu when checking that the device is not
  closed.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

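  A compact sketch of the resulting pattern: one state word that can be read
  atomically anywhere but is only written while holding the state mutex, so
  "check not closed, then transition" is a single critical section. Names are
  illustrative, not the repository's exact code:

    package device

    import (
        "sync"
        "sync/atomic"
    )

    type deviceState uint32

    const (
        deviceStateDown deviceState = iota
        deviceStateUp
        deviceStateClosed
    )

    type Device struct {
        state struct {
            sync.Mutex        // held for every state transition
            state uint32      // a deviceState; read with atomic loads
        }
    }

    func (d *Device) deviceState() deviceState {
        return deviceState(atomic.LoadUint32(&d.state.state))
    }

    // changeState checks for closure and transitions under one lock,
    // so Close cannot interleave between the check and the write.
    func (d *Device) changeState(want deviceState) {
        d.state.Lock()
        defer d.state.Unlock()
        if d.deviceState() == deviceStateClosed {
            return
        }
        atomic.StoreUint32(&d.state.state, uint32(want))
    }
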
* device: remove device.state.stopping from RoutineHandshake (Josh Bleecher Snyder, 2021-02-08; 1 file, -1/+0)
  It is no longer necessary.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: remove device.state.stopping from RoutineDecryption (Josh Bleecher Snyder, 2021-02-08; 1 file, -1/+1)
  It is no longer necessary, as of 454de6f3e64abd2a7bf9201579cd92eea5280996
  (device: use channel close to shut down and drain decryption channel).
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: tie encryption queue lifetime to the peers that write to it (Josh Bleecher Snyder, 2021-02-03; 1 file, -2/+4)
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: use a waiting sync.Pool instead of a channel (Jason A. Donenfeld, 2021-02-02; 1 file, -6/+3)
  Channels are FIFO which means we have guaranteed cache misses.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

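  The rationale: a free list built on a channel hands back the oldest, coldest
  buffer, while a pool tends to return a recently freed, still cache-hot one.
  A plain sync.Pool sketch of the idea (the "waiting" variant in the commit
  additionally bounds how many buffers can be outstanding at once):

    package main

    import (
        "fmt"
        "sync"
    )

    const maxMessageSize = 65535

    var bufferPool = sync.Pool{
        New: func() any { return new([maxMessageSize]byte) },
    }

    func main() {
        buf := bufferPool.Get().(*[maxMessageSize]byte)
        copy(buf[:], "packet bytes")
        fmt.Println(len(buf)) // 65535
        bufferPool.Put(buf)   // likely reused while still cache-hot
    }
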
* device: use int64 instead of atomic.Value for time stamp (Jason A. Donenfeld, 2021-01-29; 1 file, -13/+3)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: use new model queues for handshakes (Jason A. Donenfeld, 2021-01-29; 1 file, -27/+28)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: simplify peer queue locking (Jason A. Donenfeld, 2021-01-29; 1 file, -14/+14)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* global: bump copyright (Jason A. Donenfeld, 2021-01-28; 1 file, -1/+1)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: do not allow get to run while set runs (Jason A. Donenfeld, 2021-01-28; 1 file, -1/+2)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: avoid deadlock when changing private key and removing self peers (Jason A. Donenfeld, 2021-01-27; 1 file, -0/+2)
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: use linked list for per-peer allowed-ip traversal (Jason A. Donenfeld, 2021-01-27; 1 file, -1/+0)
  This makes the IpcGet method much faster. We also refactor the traversal API
  to use a callback so that we don't need to allocate at all. Avoiding
  allocations we do self-masking on insertion, which in turn means that split
  intermediate nodes require a copy of the bits.

    benchmark             old ns/op     new ns/op     delta
    BenchmarkUAPIGet-16   3243          2659          -18.01%

    benchmark             old allocs    new allocs    delta
    BenchmarkUAPIGet-16   35            30            -14.29%

    benchmark             old bytes     new bytes     delta
    BenchmarkUAPIGet-16   1218          737           -39.49%

  This benchmark is good, though it's only for a pair of peers, each with only
  one allowedips. As this grows, the delta expands considerably.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

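  A toy sketch of a callback-style traversal over a per-peer linked list,
  which avoids building and returning a slice of entries; the types are
  invented for the example, and the real trie node carries more state:

    package allowedips

    import "net/netip"

    type node struct {
        prefix netip.Prefix
        next   *node // next entry in this peer's linked list
    }

    // entriesForPeer walks the peer's list, calling cb once per allowed
    // prefix; returning false from cb stops the walk early.
    func entriesForPeer(head *node, cb func(prefix netip.Prefix) bool) {
        for n := head; n != nil; n = n.next {
            if !cb(n.prefix) {
                return
            }
        }
    }
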
* device: combine debug and info log levels into 'verbose' (Jason A. Donenfeld, 2021-01-26; 1 file, -5/+5)
  There are very few cases, if any, in which a user only wants one of these
  levels, so combine it into a single level. While we're at it, reduce
  indirection on the loggers by using an empty function rather than a nil
  function pointer. It's not like we have retpolines anyway, and we were
  always calling through a function with a branch prior, so this seems like a
  net gain.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: change logging interface to use functions (Josh Bleecher Snyder, 2021-01-26; 1 file, -5/+5)
  This commit overhauls wireguard-go's logging. The primary, motivating change
  is to use a function instead of a *log.Logger as the basic unit of logging.
  Using functions provides a lot more flexibility for people to bring their
  own logging system. It also introduces logging helper methods on Device.
  These reduce line noise at the call site. They also allow for log functions
  to be nil; when nil, instead of generating a log line and throwing it away,
  we don't bother generating it at all. This spares allocation and pointless
  work. This is a breaking change, although the fix required of clients is
  fairly straightforward.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

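  A sketch combining the two logging entries above: the logger is a pair of
  printf-style functions, and a shared no-op function stands in for "silent"
  so call sites never need nil checks. The details are illustrative of the
  approach, not the repository's exact API:

    package device

    import (
        "log"
        "os"
    )

    type Logger struct {
        Verbosef func(format string, args ...any)
        Errorf   func(format string, args ...any)
    }

    // DiscardLogf ignores its arguments; an empty function is cheaper to
    // call through than checking a nil function pointer at every site.
    func DiscardLogf(format string, args ...any) {}

    func NewLogger(verbose bool, prepend string) *Logger {
        logger := &Logger{Verbosef: DiscardLogf, Errorf: DiscardLogf}
        logf := func(prefix string) func(string, ...any) {
            return log.New(os.Stderr, prefix+prepend, log.LstdFlags).Printf
        }
        if verbose {
            logger.Verbosef = logf("DEBUG: ")
        }
        logger.Errorf = logf("ERROR: ")
        return logger
    }
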
* device: serialize access to IpcSetOperation (Josh Bleecher Snyder, 2021-01-25; 1 file, -0/+1)
  Interleaved IpcSetOperations would spell trouble.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: remove unnecessary zeroing (Josh Bleecher Snyder, 2021-01-20; 1 file, -5/+0)
  Newly allocated objects are already zeroed.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: put handshake buffer in pool in FlushPacketQueues (Josh Bleecher Snyder, 2021-01-20; 1 file, -1/+2)
  This appears to have been an oversight.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: use channel close to shut down and drain decryption channel (Josh Bleecher Snyder, 2021-01-20; 1 file, -12/+25)
  This is similar to commit e1fa1cc5560020e67d33aa7e74674853671cf0a0, but for
  the decryption channel. It is an alternative fix to
  f9f655567930a4cd78d40fa4ba0d58503335ae6a.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: receive: drain decryption queue before exiting RoutineDecryption (Jason A. Donenfeld, 2021-01-07; 1 file, -1/+4)
  It's possible for RoutineSequentialReceiver to try to lock an elem after
  RoutineDecryption has exited. Before this meant we didn't then unlock the
  elem, so the whole program deadlocked. As well, it looks like the flush code
  (which is now potentially unnecessary?) wasn't properly dropping the buffers
  for the not-already-dropped case.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* all: use ++ to increment (Josh Bleecher Snyder, 2021-01-07; 1 file, -1/+1)
  Make the code slightly more idiomatic. No functional changes.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: add missing colon to error line (Jason A. Donenfeld, 2021-01-07; 1 file, -1/+1)
  People are actually hitting this condition, so make it uniform. Also, change
  a printf into a println, to match the other conventions.
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* device: fix data race in peer.timersActive (Josh Bleecher Snyder, 2021-01-07; 1 file, -2/+4)
  Found by the race detector and existing tests. To avoid introducing a lock
  into this hot path, calculate and cache whether any peers exist.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: fix persistent_keepalive_interval data races (Josh Bleecher Snyder, 2021-01-07; 1 file, -1/+1)
  Co-authored-by: David Anderson <danderson@tailscale.com>
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

* device: use channel close to shut down and drain encryption channel (Josh Bleecher Snyder, 2021-01-07; 1 file, -7/+32)
  The new test introduced in this commit used to deadlock about 1% of the
  time. I believe that the deadlock occurs as follows:

  * The test completes, calling device.Close.
  * device.Close closes device.signals.stop.
  * RoutineEncryption stops.
  * The deferred function in RoutineEncryption drains device.queue.encryption.
  * RoutineEncryption exits.
  * A peer's RoutineNonce processes an element queued in peer.queue.nonce.
  * RoutineNonce puts that element into the outbound and encryption queues.
  * RoutineSequentialSender reads that element from the outbound queue.
  * It waits for that element to get Unlocked by RoutineEncryption.
  * RoutineEncryption has already exited, so RoutineSequentialSender blocks
    forever.
  * device.RemoveAllPeers calls peer.Stop on all peers.
  * peer.Stop waits for peer.routines.stopping, which blocks forever.

  Rather than attempt to add even more ordering to the already complex
  centralized shutdown orchestration, this commit moves towards a
  data-flow-oriented shutdown. The device.queue.encryption gets closed when
  there will be no more writes to it. All device.queue.encryption readers
  always read until the channel is closed and then exit. We thus guarantee
  that any element that enters the encryption queue also exits it. This
  removes the need for central control of the lifetime of RoutineEncryption,
  removes the need to drain the encryption queue on shutdown, and simplifies
  RoutineEncryption.

  This commit also fixes a data race. When RoutineSequentialSender drains its
  queue on shutdown, it needs to lock the elem before operating on it, just as
  the main body does.

  The new test in this commit passed 50k iterations with the race detector
  enabled and 150k iterations with the race detector disabled, with no
  failures.
  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>

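  A self-contained sketch of this data-flow shutdown pattern, assuming a
  writer-refcounted queue: the channel is closed exactly once the last writer
  drops its reference, and readers simply range until close, so every element
  that enters the queue also exits it. Names are illustrative:

    package main

    import (
        "fmt"
        "sync"
    )

    // queue pairs a channel with a count of live writers; the channel is
    // closed exactly once the last writer is done.
    type queue struct {
        c  chan int // element type simplified for the sketch
        wg sync.WaitGroup
    }

    func newQueue() *queue {
        q := &queue{c: make(chan int, 16)}
        q.wg.Add(1) // reference held by the queue's creator
        go func() {
            q.wg.Wait()
            close(q.c) // no writers remain; readers drain and exit
        }()
        return q
    }

    func main() {
        q := newQueue()

        var readers sync.WaitGroup
        readers.Add(1)
        go func() { // stands in for a RoutineEncryption-style reader
            defer readers.Done()
            for v := range q.c { // read until closed: nothing is lost
                fmt.Println("processed", v)
            }
        }()

        q.wg.Add(1) // a writer (e.g. a peer routine) takes a reference
        go func() {
            defer q.wg.Done()
            for i := 0; i < 3; i++ {
                q.c <- i
            }
        }()

        q.wg.Done() // the creator drops its reference
        readers.Wait()
    }
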