path: root/device/pools_test.go
* conn, device, tun: implement vectorized I/O plumbing (Jordan Whited, 2023-03-10, 1 file changed, +48/-0)

    Accept packet vectors for reading and writing in the tun.Device and
    conn.Bind interfaces, so that the internal plumbing between these
    interfaces now passes a vector of packets. Vectors move untouched
    between these interfaces, i.e. if 128 packets are received from
    conn.Bind.Read(), 128 packets are passed to tun.Device.Write(). There
    is no internal buffering. Currently, existing implementations are only
    adjusted to have vectors of length one. Subsequent patches will
    improve that.

    Also, as a related fixup, use the unix and windows packages rather
    than the syscall package when possible.

    Co-authored-by: James Tucker <james@tailscale.com>
    Signed-off-by: James Tucker <james@tailscale.com>
    Signed-off-by: Jordan Whited <jordan@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
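For illustration, the general shape of vectorized I/O looks like the sketch below. The interface and method names here are made up for the example and are not wireguard-go's actual tun.Device / conn.Bind API; the point is that a whole slice of packets crosses each boundary in one call, with no buffering in between.

    package sketch

    // vectorReader fills up to len(bufs) packets in one call, reporting how
    // many packets were read and each packet's length in sizes.
    type vectorReader interface {
        ReadBatch(bufs [][]byte, sizes []int) (n int, err error)
    }

    // vectorWriter sends every packet in bufs in one call.
    type vectorWriter interface {
        WriteBatch(bufs [][]byte) (n int, err error)
    }

    // forward moves one batch from r to w with no internal buffering:
    // however many packets ReadBatch returns is exactly how many packets
    // WriteBatch receives.
    func forward(r vectorReader, w vectorWriter, bufs [][]byte, sizes []int) error {
        n, err := r.ReadBatch(bufs, sizes)
        if err != nil {
            return err
        }
        batch := make([][]byte, n)
        for i := range batch {
            batch[i] = bufs[i][:sizes[i]]
        }
        _, err = w.WriteBatch(batch)
        return err
    }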
* global: bump copyright year (Jason A. Donenfeld, 2023-02-07, 1 file changed, +1/-1)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: bump copyright year (Jason A. Donenfeld, 2022-09-20, 1 file changed, +1/-1)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* all: use Go 1.19 and its atomic types (Brad Fitzpatrick, 2022-09-04, 1 file changed, +13/-10)

    Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
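For context, this kind of migration typically swaps sync/atomic's free functions over a plain field for the dedicated atomic types added in Go 1.19. The counter below is a generic illustration, not code from this file.

    package sketch

    import "sync/atomic"

    // Pre-1.19 style: a plain integer plus free functions, which relies on
    // every access site remembering to go through sync/atomic.
    type oldCounter struct {
        n int64
    }

    func (c *oldCounter) inc() int64 { return atomic.AddInt64(&c.n, 1) }

    // Go 1.19 style: atomic.Int64 makes non-atomic access impossible and
    // documents the intent in the struct definition itself.
    type newCounter struct {
        n atomic.Int64
    }

    func (c *newCounter) inc() int64 { return c.n.Add(1) }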
* all: use any in place of interface{} (Josh Bleecher Snyder, 2022-03-16, 1 file changed, +2/-2)

    Enabled by using Go 1.18. A bit less verbose.

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
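Since Go 1.18, any is a built-in alias for interface{}, so the change is purely cosmetic. The two hypothetical declarations below are the same type; only the spelling differs.

    package sketch

    // Identical signatures, different spelling of the empty interface.
    func getOld() interface{} { return nil }
    func getNew() any         { return nil }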
* device: disable waitpool tests (Jason A. Donenfeld, 2021-02-22, 1 file changed, +1/-0)

    This code is stable, and the test is finicky, especially on high core
    count systems, so just disable it.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
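A one-line change like this usually means an unconditional skip at the top of the test. The sketch below shows the standard testing.T mechanism; the skip message is made up for illustration.

    package device

    import "testing"

    func TestWaitPool(t *testing.T) {
        // Skip unconditionally: the code under test is stable and the test
        // itself is the flaky part.
        t.Skip("disabled: stable code, finicky test on high core count machines")
        // ... original test body left in place below ...
    }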
* device: run fewer trials in TestWaitPool when race detector enabled (Josh Bleecher Snyder, 2021-02-09, 1 file changed, +4/-0)

    On a many-core machine with the race detector enabled, this test can
    take several minutes to complete.

    Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
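Go offers no runtime API for asking whether -race is active, so the usual trick is a pair of build-tagged files exposing a constant that the test consults. The file and identifier names below are illustrative and not necessarily what this commit added.

    // File race_enabled_test.go (only built with -race):

    //go:build race

    package device

    const raceEnabled = true

    // File race_disabled_test.go (built otherwise):

    //go:build !race

    package device

    const raceEnabled = false

The test can then scale its work, e.g. start from trials := 100000 and drop to a much smaller number when raceEnabled is true.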
* device: do not attach finalizer to non-returned object (Jason A. Donenfeld, 2021-02-09, 1 file changed, +1/-1)

    Before, the code attached a finalizer to an object that wasn't
    returned, resulting in immediate garbage collection. Instead return
    the actual pointer.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
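The bug class looks roughly like the generic illustration below (not the actual diff): runtime.SetFinalizer is called on an allocation that is never handed to the caller, so that allocation is immediately collectible and its finalizer can fire at once.

    package sketch

    import "runtime"

    type buffer struct{ data [1500]byte }

    // Buggy shape: the finalizer is attached to a throwaway allocation
    // while a different pointer is returned, so the finalized object is
    // garbage right away.
    func newBufferBroken(onFree func()) *buffer {
        b := new(buffer)
        runtime.SetFinalizer(new(buffer), func(*buffer) { onFree() }) // wrong object
        return b
    }

    // Fixed shape: attach the finalizer to the pointer actually returned.
    func newBufferFixed(onFree func()) *buffer {
        b := new(buffer)
        runtime.SetFinalizer(b, func(*buffer) { onFree() })
        return b
    }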
* device: benchmark the waitpool to compare it to the prior channels (Jason A. Donenfeld, 2021-02-03, 1 file changed, +23/-0)

    Here is the old implementation:

        type WaitPool struct {
            c chan interface{}
        }

        func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
            p := &WaitPool{c: make(chan interface{}, max)}
            for i := uint32(0); i < max; i++ {
                p.c <- new()
            }
            return p
        }

        func (p *WaitPool) Get() interface{} {
            return <-p.c
        }

        func (p *WaitPool) Put(x interface{}) {
            p.c <- x
        }

    It performs worse than the new one:

        name         old time/op  new time/op  delta
        WaitPool-16  16.4µs ± 5%  15.1µs ± 3%  -7.86%  (p=0.008 n=5+5)

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
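An illustrative shape for such a comparison benchmark, assuming the WaitPool API quoted in the commit message above; the actual benchmark added to pools_test.go may be structured differently.

    package device

    import "testing"

    // Hammer Get/Put from all procs so the pool's contention behavior
    // dominates the measurement.
    func BenchmarkWaitPool(b *testing.B) {
        p := NewWaitPool(64, func() interface{} { return make([]byte, 1500) })
        b.ReportAllocs()
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                x := p.Get()
                p.Put(x)
            }
        })
    }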
* device: use a waiting sync.Pool instead of a channel (Jason A. Donenfeld, 2021-02-02, 1 file changed, +60/-0)

    Channels are FIFO which means we have guaranteed cache misses.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
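A rough sketch of the idea: a sync.Pool hands back a recently freed (and therefore cache-warm) object rather than the oldest one a FIFO channel would, while a separate bound makes Get block once max objects are outstanding. The real WaitPool in wireguard-go differs in detail (it tracks the count with atomics, for instance); this only shows the shape.

    package sketch

    import "sync"

    type WaitPool struct {
        pool  sync.Pool
        cond  *sync.Cond
        mu    sync.Mutex
        count uint32 // objects currently handed out
        max   uint32 // 0 means unbounded
    }

    func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
        p := &WaitPool{pool: sync.Pool{New: new}, max: max}
        p.cond = sync.NewCond(&p.mu)
        return p
    }

    // Get blocks while max objects are outstanding, then takes one from
    // the underlying sync.Pool (which prefers recently returned objects).
    func (p *WaitPool) Get() interface{} {
        p.mu.Lock()
        for p.max > 0 && p.count >= p.max {
            p.cond.Wait()
        }
        p.count++
        p.mu.Unlock()
        return p.pool.Get()
    }

    // Put returns an object to the pool and wakes one blocked Get.
    func (p *WaitPool) Put(x interface{}) {
        p.pool.Put(x)
        p.mu.Lock()
        p.count--
        p.mu.Unlock()
        p.cond.Signal()
    }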