* crypto: mips/poly1305 - enable for all MIPS processors [backport-5.4.y]
  Maciej W. Rozycki · 2022-07-07 · 1 file changed, -1/+1

  commit baeae5b993eebbd5d6d62f11ce11461f0a4d74e7 upstream.

  The MIPS Poly1305 implementation is generic MIPS code written such as to support down to the original MIPS I and MIPS III ISA for the 32-bit and 64-bit variant respectively. Lift the current limitation then to enable code for MIPSr1 ISA or newer processors only and have it available for all MIPS processors.

  Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
  Fixes: f9c88283df69 ("crypto: mips/poly1305 - incorporate OpenSSL/CRYPTOGAMS optimized implementation")
  Cc: stable@vger.kernel.org # v5.5+
  Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: Kconfig - CRYPTO_MANAGER_EXTRA_TESTS requires the manager
  Jason A. Donenfeld · 2022-07-07 · 1 file changed, -1/+1

  commit 6a34f3afc26eca1ed55b984d4ac359ec7aa74236 upstream.

  The extra tests in the manager actually require the manager to be selected too. Otherwise the linker gives errors like:

    ld: arch/x86/crypto/chacha_glue.o: in function `chacha_simd_stream_xor':
    chacha_glue.c:(.text+0x422): undefined reference to `crypto_simd_disabled_for_test'

  Fixes: da1db20dc767 ("crypto: Kconfig - allow tests to be disabled when manager is disabled")
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: Kconfig - allow tests to be disabled when manager is disabled
  Jason A. Donenfeld · 2022-07-07 · 1 file changed, -4/+0

  commit da1db20dc7675041106813bea44ce06e74c9a588 upstream.

  The library code uses CRYPTO_MANAGER_DISABLE_TESTS to conditionalize its tests, but the library code can also exist without CRYPTO_MANAGER. That means on minimal configs, the test code winds up being built with no way to disable it.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: poly1305 - add new 32 and 64-bit generic versions
  Jason A. Donenfeld · 2022-07-07 · 3 files changed, -4/+27

  commit ed4df0aaf3f7ea481ed4ad2c441c0fac15581555 upstream.

  These two C implementations from Zinc -- a 32x32 one and a 64x64 one, depending on the platform -- come from Andrew Moon's public domain poly1305-donna portable code, modified for usage in the kernel. The precomputation in the 32-bit version and the use of 64x64 multiplies in the 64-bit version make these perform better than the code it replaces. Moon's code is also very widespread and has received many eyeballs of scrutiny.

  There's a bit of interference between the x86 implementation, which relies on internal details of the old scalar implementation. In the next commit, the x86 implementation will be replaced with a faster one that doesn't rely on this, so none of this matters much. But for now, to keep this passing the tests, we inline the bits of the old implementation that the x86 implementation relied on. Also, since we now support a slightly larger key space, via the union, some offsets had to be fixed up.

  Nonce calculation was folded in with the emit function, to take advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no nonce handling in emit, so this path was conditionalized. We also introduced a new struct, poly1305_core_key, to represent the precise amount of space that particular implementation uses.

  Testing with kbench9000, depending on the CPU, the update function for the 32x32 version has been improved by 4%-7%, and for the 64x64 by 19%-30%. The 32x32 gains are small, but I think there's great value in having a parallel implementation to the 64x64 one so that the two can be compared side-by-side as nice stand-alone units.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: chacha_generic - remove unnecessary setkey() functions
  Eric Biggers · 2022-07-07 · 1 file changed, -15/+3

  commit 0f859c1eb629b50b8ce5a68ab85406360cd2a60e upstream.

  Use chacha20_setkey() and chacha12_setkey() from <crypto/internal/chacha.h> instead of defining them again in chacha_generic.c.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: curve25519 - x86_64 library and KPP implementations
  Jason A. Donenfeld · 2022-07-07 · 1 file changed, -0/+6

  commit bd7e9bbebd331b692b6a1cba73fb8424d37a20ce upstream.

  This implementation is the fastest available x86_64 implementation, and unlike Sandy2x, it doesn't require use of the floating point registers at all. Instead it makes use of BMI2 and ADX, available on recent microarchitectures. The implementation was written by Armando Faz-Hernández with contributions (upstream) from Samuel Neves and me, in addition to further changes in the kernel implementation from us.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
  Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
  [ardb: - move to arch/x86/crypto
         - wire into lib/crypto framework
         - implement crypto API KPP hooks ]
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: curve25519 - implement generic KPP driver
  Ard Biesheuvel · 2022-07-07 · 3 files changed, -0/+96

  commit 2a22f5c1440006ab3ec53111206094c1048410a2 upstream.

  Expose the generic Curve25519 library via the crypto API KPP interface.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: curve25519 - add kpp selftest
  Ard Biesheuvel · 2022-07-07 · 2 files changed, -0/+1231

  commit e86a2eeed689c9432906fcfae6bea663e36f8c46 upstream.

  In preparation of introducing KPP implementations of Curve25519, import the set of test cases proposed by the Zinc patch set, but converted to the KPP format.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: mips/poly1305 - incorporate OpenSSL/CRYPTOGAMS optimized implementation
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -0/+5

  commit f9c88283df6979e2dccafc20ee0b4d7281819d3e upstream.

  This is a straight import of the OpenSSL/CRYPTOGAMS Poly1305 implementation for MIPS authored by Andy Polyakov, a prior 64-bit only version of which has been contributed by him to the OpenSSL project. The file 'poly1305-mips.pl' is taken straight from this upstream GitHub repository [0] at commit d22ade312a7af958ec955620b0d241cf42c37feb, and already contains all the changes required to build it as part of a Linux kernel module.

  [0] https://github.com/dot-asm/cryptogams

  Co-developed-by: Andy Polyakov <appro@cryptogams.org>
  Signed-off-by: Andy Polyakov <appro@cryptogams.org>
  Co-developed-by: René van Dorst <opensource@vdorst.com>
  Signed-off-by: René van Dorst <opensource@vdorst.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: x86/poly1305 - expose existing driver as poly1305 library
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -0/+1

  commit eaa511dd305fc92c64b4a2bc740e80034176376f upstream.

  Implement the arch init/update/final Poly1305 library routines in the accelerated SIMD driver for x86 so they are accessible to users of the Poly1305 library interface as well.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: x86/poly1305 - depend on generic library not generic shash
  Ard Biesheuvel · 2022-07-07 · 2 files changed, -8/+5

  commit 8a388bd4fa17b3765c0e00c42e0bd1bc106f1807 upstream.

  Remove the dependency on the generic Poly1305 driver. Instead, depend on the generic library so that we only reuse code without pulling in the generic skcipher implementation as well.

  While at it, remove the logic that prefers the non-SIMD path for short inputs - this is no longer necessary after recent FPU handling changes on x86.

  Since this removes the last remaining user of the routines exported by the generic shash driver, unexport them and make them static.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: poly1305 - expose init/update/final library interface
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -21/+1

  commit a5a8b462ec8dc4adcd434465303d0ada17530105 upstream.

  Expose the existing generic Poly1305 code via an init/update/final library interface so that callers are not required to go through the crypto API's shash abstraction to access it. At the same time, make some preparations so that the library implementation can be superseded by an accelerated arch-specific version in the future.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
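  As a sketch of the library interface this exposes (assuming the <crypto/poly1305.h> declarations from this series; treat exact signatures as illustrative):

    #include <crypto/poly1305.h>

    /* One-shot MAC over a flat buffer via the library interface,
     * with no crypto API shash allocation involved. */
    static void poly1305_mac_sketch(const u8 key[POLY1305_KEY_SIZE],
                                    const u8 *data, unsigned int len,
                                    u8 digest[POLY1305_DIGEST_SIZE])
    {
        struct poly1305_desc_ctx desc;

        poly1305_init(&desc, key);          /* absorb the 32-byte key */
        poly1305_update(&desc, data, len);  /* stream in the message */
        poly1305_final(&desc, digest);      /* emit the 16-byte tag */
    }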
* crypto: x86/poly1305 - unify Poly1305 state struct with generic code
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -3/+3

  commit cb47ba99195bd0956dc1421df222c4ceb41f2c5c upstream.

  In preparation of exposing a Poly1305 library interface directly from the accelerated x86 driver, align the state descriptor of the x86 code with the one used by the generic driver. This is needed to make the library interface unified between all implementations.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: poly1305 - move core routines into a separate library
  Ard Biesheuvel · 2022-07-07 · 4 files changed, -192/+16

  commit 43269ab83e94bc5621c232e74dd15ab8cebdd99a upstream.

  Move the core Poly1305 routines shared between the generic Poly1305 shash driver and the Adiantum and NHPoly1305 drivers into a separate library so that using just these pieces does not pull in the crypto API pieces of the generic Poly1305 routine.

  In a subsequent patch, we will augment this generic library with init/update/final routines so that the Poly1305 algorithm can be used directly without the need for using the crypto API's shash abstraction.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: chacha - unexport chacha_generic routines
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -18/+8

  commit d94bd9d8cbda847af13a88714122bca6301de68e upstream.

  Now that all users of generic ChaCha code have moved to the core library, there is no longer a need for the generic ChaCha skcipher driver to export parts of its implementation for reuse by other drivers. So drop the exports, and make the symbols static.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: mips/chacha - wire up accelerated 32r2 code from Zinc
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -0/+6

  commit ad261a4d13e1128853932f8f3cdcad4d68d12a92 upstream.

  This integrates the accelerated MIPS 32r2 implementation of ChaCha into both the API and library interfaces of the kernel crypto stack.

  The significance of this is that, in addition to becoming available as an accelerated library implementation, it can also be used by existing crypto API code such as Adiantum (for block encryption on ultra low performance cores) or IPsec using chacha20poly1305. These are use cases that have already opted into using the abstract crypto API.

  In order to support Adiantum, the core assembler routine has been adapted to take the round count as a function argument rather than hardcoding it to 20.

  Co-developed-by: René van Dorst <opensource@vdorst.com>
  Signed-off-by: René van Dorst <opensource@vdorst.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: x86/chacha - expose SIMD ChaCha routine as library function
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -0/+1

  commit 57cc6bc92e387c102c12ad07528c5d33a342ecca upstream.

  Wire the existing x86 SIMD ChaCha code into the new ChaCha library interface, so that users of the library interface will get the accelerated version when available.

  Given that calls into the library API will always go through the routines in this module if it is enabled, switch to static keys to select the optimal implementation available (which may be none at all, in which case we defer to the generic implementation for all invocations).

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
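  The static-key selection mentioned above follows the usual kernel pattern; a rough sketch (the key and SIMD helper names here, chacha_use_simd and chacha_dosimd, stand in for whatever the driver actually uses):

    #include <linux/jump_label.h>
    #include <crypto/internal/simd.h>

    static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_simd);

    static int __init chacha_simd_mod_init(void)
    {
        /* Patched once at init; afterwards the branch costs a no-op. */
        if (boot_cpu_has(X86_FEATURE_SSSE3))
            static_branch_enable(&chacha_use_simd);
        return 0;
    }

    void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
                           unsigned int bytes, int nrounds)
    {
        /* Defer to the generic code when SIMD is unavailable: no CPU
         * support, or a context where the FPU cannot be used. */
        if (!static_branch_likely(&chacha_use_simd) || !crypto_simd_usable())
            return chacha_crypt_generic(state, dst, src, bytes, nrounds);

        kernel_fpu_begin();
        chacha_dosimd(state, dst, src, bytes, nrounds);
        kernel_fpu_end();
    }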
* crypto: x86/chacha - depend on generic chacha library instead of crypto driver
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -1/+1

  commit 47708ff48faa838190a4be50489e08c1ec383b46 upstream.

  In preparation of extending the x86 ChaCha driver to also expose the ChaCha library interface, drop the dependency on the chacha_generic crypto driver as a non-SIMD fallback, and depend on the generic ChaCha library directly. This way, we only pull in the code we actually need, without registering a set of ChaCha skciphers that we will never use.

  Since turning the FPU on and off is cheap these days, simplify the SIMD routine by dropping the per-page yield, which makes for a cleaner switch to the library API as well. This also allows us to invoke the skcipher walk routines in non-atomic mode.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: chacha - move existing library code into lib/crypto
  Ard Biesheuvel · 2022-07-07 · 2 files changed, -55/+6

  commit e0a4e2e022bda4c9d1278d189d1c05b99fc57b6c upstream.

  Currently, our generic ChaCha implementation consists of a permute function in lib/chacha.c that operates on the 64-byte ChaCha state directly [and which is always included into the core kernel since it is used by the /dev/random driver], and the crypto API plumbing to expose it as a skcipher.

  In order to support in-kernel users that need the ChaCha streamcipher but have no need [or tolerance] for going through the abstractions of the crypto API, let's expose the streamcipher bits via a library API as well, in a way that permits the implementation to be superseded by an architecture specific one if provided.

  So move the streamcipher code into a separate module in lib/crypto, and expose the init() and crypt() routines to users of the library.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
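  Usage of the resulting library API, as a minimal sketch (names per <crypto/chacha.h> in this series; assume a 256-bit key and the usual counter-plus-nonce IV layout):

    #include <crypto/chacha.h>

    static void chacha20_crypt_sketch(const u32 key[8], const u8 iv[16],
                                      u8 *dst, const u8 *src,
                                      unsigned int len)
    {
        u32 state[16];

        chacha_init(state, key, iv);             /* constants, key, ctr+nonce */
        chacha_crypt(state, dst, src, len, 20);  /* 20 rounds: ChaCha20 */
    }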
* crypto: lib - tidy up lib/crypto Kconfig and Makefile
  Ard Biesheuvel · 2022-07-07 · 1 file changed, -12/+1

  commit 8a68f716aa88e5e7dc1cfebd6bbbc0d85192c1f7 upstream.

  In preparation of introducing a set of crypto library interfaces, tidy up the Makefile and split off the Kconfig symbols into a separate file.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* crypto: drbg - make reseeding from get_random_bytes() synchronous [gregkh/stable-5.4.y]
  Nicolai Stange · 2022-06-22 · 1 file changed, -50/+11

  commit 074bcd4000e0d812bc253f86fedc40f81ed59ccc upstream.

  get_random_bytes() usually hasn't full entropy available by the time DRBG instances are first getting seeded from it during boot. Thus, the DRBG implementation registers random_ready_callbacks which would in turn schedule some work for reseeding the DRBGs once get_random_bytes() has sufficient entropy available.

  For reference, the relevant history around handling DRBG (re)seeding in the context of a not yet fully seeded get_random_bytes() is:

    commit 16b369a91d0d ("random: Blocking API for accessing nonblocking_pool")
    commit a73db989e365 ("crypto: drbg - add async seeding operation")
    commit 205a525c3342 ("random: Add callback API for random pool readiness")
    commit 023b75dd47bc ("crypto: drbg - Use callback API for random readiness")
    commit c2719503f5e1 ("random: Remove kernel blocking API")

  However, some time later, the initialization state of get_random_bytes() has been made queryable via rng_is_initialized() introduced with commit 9a47249d444d ("random: Make crng state queryable"). This primitive now allows for streamlining the DRBG reseeding from get_random_bytes() by replacing that aforementioned asynchronous work scheduling from random_ready_callbacks with some simpler, synchronous code in drbg_generate() next to the related logic already present therein. Apart from improving overall code readability, this change will also enable DRBG users to rely on wait_for_random_bytes() for ensuring that the initial seeding has completed, if desired.

  The previous patches already laid the grounds by making drbg_seed() record at each DRBG instance whether it was being seeded at a time when rng_is_initialized() still had been false as indicated by ->seeded == DRBG_SEED_STATE_PARTIAL. All that remains to be done now is to make drbg_generate() check for this condition, determine whether rng_is_initialized() has flipped to true in the meanwhile and invoke a reseed from get_random_bytes() if so.

  Make this move:
  - rename the former drbg_async_seed() work handler, i.e. the one in charge of reseeding a DRBG instance from get_random_bytes(), to "drbg_seed_from_random()",
  - change its signature as appropriate, i.e. make it take a struct drbg_state rather than a work_struct and change its return type from "void" to "int" in order to allow for passing error information from e.g. its __drbg_seed() invocation onwards to callers,
  - make drbg_generate() invoke this drbg_seed_from_random() once it encounters a DRBG instance with ->seeded == DRBG_SEED_STATE_PARTIAL by the time rng_is_initialized() has flipped to true and
  - prune everything related to the former, random_ready_callback based mechanism.

  As drbg_seed_from_random() is now getting invoked from drbg_generate() with the ->drbg_mutex being held, it must not attempt to recursively grab it once again. Remove the corresponding mutex operations from what is now drbg_seed_from_random().

  Furthermore, as drbg_seed_from_random() can now report errors directly to its caller, there's no need for it to temporarily switch the DRBG's ->seeded state to DRBG_SEED_STATE_UNSEEDED so that a failure of the subsequently invoked __drbg_seed() will get signaled to drbg_generate(). Don't do it then.

  Signed-off-by: Nicolai Stange <nstange@suse.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  [Jason: for stable, undid the modifications for the backport of 5acd3548.]
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
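  In outline, the synchronous check this adds to drbg_generate() amounts to something like the following paraphrase (a sketch of the described logic, not the verbatim diff):

    /* In drbg_generate(), with drbg->drbg_mutex already held: */
    if (drbg->seeded == DRBG_SEED_STATE_PARTIAL &&
        rng_is_initialized()) {
        /* get_random_bytes() is fully seeded now; promote the
         * partially seeded instance inline, no work scheduling. */
        ret = drbg_seed_from_random(drbg);
        if (ret)
            return ret;
    }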
* crypto: drbg - always try to free Jitter RNG instance
  Stephan Müller · 2022-06-22 · 1 file changed, -2/+4

  commit f4d7cabd9661612f3019866664addbc463854102 upstream.

  The Jitter RNG is unconditionally allocated as a seed source following the patch ffb82d3279a8. Thus, the instance must always be deallocated.

  Reported-by: syzbot+2e635807decef724a1fa@syzkaller.appspotmail.com
  Fixes: ffb82d3279a8 ("crypto: drbg - always seeded with SP800-90B ...")
  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: drbg - move dynamic ->reseed_threshold adjustments to __drbg_seed()
  Nicolai Stange · 2022-06-22 · 1 file changed, -9/+21

  commit 262d83a4290c331cd4f617a457408bdb82fbb738 upstream.

  Since commit d2a5f0e4f727 ("crypto: drbg - reseed often if seedsource is degraded"), the maximum seed lifetime represented by ->reseed_threshold gets temporarily lowered if the get_random_bytes() source cannot provide sufficient entropy yet, as is common during boot, and restored back to the original value again once that has changed.

  More specifically, if the add_random_ready_callback() invoked from drbg_prepare_hrng() in the course of DRBG instantiation does not return -EALREADY, that is, if get_random_bytes() has not been fully initialized at this point yet, drbg_prepare_hrng() will lower ->reseed_threshold to a value of 50. The drbg_async_seed() scheduled from said random_ready_callback will eventually restore the original value.

  A future patch will replace the random_ready_callback based notification mechanism and thus, there will be no add_random_ready_callback() return value anymore which could get compared to -EALREADY.

  However, there's __drbg_seed() which gets invoked in the course of both, the DRBG instantiation as well as the eventual reseeding from get_random_bytes() in aforementioned drbg_async_seed(), if any. Moreover, it knows about the get_random_bytes() initialization state by the time the seed data had been obtained from it: the new_seed_state argument introduced with the previous patch would get set to DRBG_SEED_STATE_PARTIAL in case get_random_bytes() had not been fully initialized yet and to DRBG_SEED_STATE_FULL otherwise. Thus, __drbg_seed() provides a convenient alternative for managing that ->reseed_threshold lowering and restoring at a central place.

  Move all ->reseed_threshold adjustment code from drbg_prepare_hrng() and drbg_async_seed() respectively to __drbg_seed(). Make __drbg_seed() lower the ->reseed_threshold to 50 in case its new_seed_state argument equals DRBG_SEED_STATE_PARTIAL and let it restore the original value otherwise. There is no change in behaviour.

  Signed-off-by: Nicolai Stange <nstange@suse.de>
  Reviewed-by: Stephan Müller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: drbg - track whether DRBG was seeded with !rng_is_initialized()
  Nicolai Stange · 2022-06-22 · 1 file changed, -4/+8

  commit 2bcd25443868aa8863779a6ebc6c9319633025d2 upstream.

  Currently, the DRBG implementation schedules asynchronous works from random_ready_callbacks for reseeding the DRBG instances with output from get_random_bytes() once the latter has sufficient entropy available.

  However, as the get_random_bytes() initialization state can get queried by means of rng_is_initialized() now, there is no real need for this asynchronous reseeding logic anymore and it's better to keep things simple by doing it synchronously when needed instead, i.e. from drbg_generate() once rng_is_initialized() has flipped to true.

  Of course, for this to work, drbg_generate() would need some means by which it can tell whether or not rng_is_initialized() has flipped to true since the last seeding from get_random_bytes(). Or equivalently, whether or not the last seed from get_random_bytes() has happened when rng_is_initialized() was still evaluating to false.

  As it currently stands, enum drbg_seed_state allows for the representation of two different DRBG seeding states: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. The former makes drbg_generate() to invoke a full reseeding operation involving both, the rather expensive jitterentropy as well as the get_random_bytes() randomness sources. The DRBG_SEED_STATE_FULL state on the other hand implies that no reseeding at all is required for a !->pr DRBG variant.

  Introduce the new DRBG_SEED_STATE_PARTIAL state to enum drbg_seed_state for representing the condition that a DRBG was being seeded when rng_is_initialized() had still been false. In particular, this new state implies that
  - the given DRBG instance has been fully seeded from the jitterentropy source (if enabled)
  - and drbg_generate() is supposed to reseed from get_random_bytes() *only* once rng_is_initialized() turns to true.

  Up to now, the __drbg_seed() helper used to set the given DRBG instance's ->seeded state to constant DRBG_SEED_STATE_FULL. Introduce a new argument allowing for the specification of the to be written ->seeded value instead. Make the first of its two callers, drbg_seed(), determine the appropriate value based on rng_is_initialized(). The remaining caller, drbg_async_seed(), is known to get invoked only once rng_is_initialized() is true, hence let it pass constant DRBG_SEED_STATE_FULL for the new argument to __drbg_seed().

  There is no change in behaviour, except for that the pr_devel() in drbg_generate() would now report "unseeded" for ->pr DRBG instances which had last been seeded when rng_is_initialized() was still evaluating to false.

  Signed-off-by: Nicolai Stange <nstange@suse.de>
  Reviewed-by: Stephan Müller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: drbg - prepare for more fine-grained tracking of seeding state
  Nicolai Stange · 2022-06-22 · 1 file changed, -9/+10

  commit ce8ce31b2c5c8b18667784b8c515650c65d57b4e upstream.

  There are two different randomness sources the DRBGs are getting seeded from, namely the jitterentropy source (if enabled) and get_random_bytes(). At initial DRBG seeding time during boot, the latter might not have collected sufficient entropy for seeding itself yet and thus, the DRBG implementation schedules a reseed work from a random_ready_callback once that has happened. This is particularly important for the !->pr DRBG instances, for which (almost) no further reseeds are getting triggered during their lifetime.

  Because collecting data from the jitterentropy source is a rather expensive operation, the aforementioned asynchronously scheduled reseed work restricts itself to get_random_bytes() only. That is, it in some sense amends the initial DRBG seed derived from jitterentropy output at full (estimated) entropy with fresh randomness obtained from get_random_bytes() once that has been seeded with sufficient entropy itself.

  With the advent of rng_is_initialized(), there is no real need for doing the reseed operation from an asynchronously scheduled work anymore and a subsequent patch will make it synchronous by moving it next to related logic already present in drbg_generate().

  However, for tracking whether a full reseed including the jitterentropy source is required or a "partial" reseed involving only get_random_bytes() would be sufficient already, the boolean struct drbg_state's ->seeded member must become a tristate value. Prepare for this by introducing the new enum drbg_seed_state and change struct drbg_state's ->seeded member's type from bool to that type.

  For facilitating review, enum drbg_seed_state is made to only contain two members corresponding to the former ->seeded values of false and true resp. at this point: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. A third one for tracking the intermediate state of "seeded from jitterentropy only" will be introduced with a subsequent patch.

  There is no change in behaviour at this point.

  Signed-off-by: Nicolai Stange <nstange@suse.de>
  Reviewed-by: Stephan Müller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: drbg - always seeded with SP800-90B compliant noise source
  Stephan Müller · 2022-06-22 · 1 file changed, -7/+19

  commit ffb82d3279a8a346e8831c117dc3a7fb65875211 upstream.

  As the Jitter RNG provides an SP800-90B compliant noise source, use this noise source always for the (re)seeding of the DRBG. To make sure the DRBG is always properly seeded, the reseed threshold is reduced to 1<<20 generate operations.

  The Jitter RNG may report health test failures. Such health test failures are treated as transient as follows. The DRBG will not reseed from the Jitter RNG (but from get_random_bytes) in case of a health test failure. Though, it produces the requested random number. The Jitter RNG has a failure counter where at most 1024 consecutive resets due to a health test failure are considered as a transient error. If more consecutive resets are required, the Jitter RNG will return a permanent error which is returned to the caller by the DRBG.

  With this approach, the worst case reseed threshold is significantly lower than mandated by SP800-90A in order to seed with an SP800-90B noise source: the DRBG has a reseed threshold of 2^20 * 1024 = 2^30 generate requests. Yet, in case of a transient Jitter RNG health test failure, the DRBG is seeded with the data obtained from get_random_bytes.

  However, if the Jitter RNG fails during the initial seeding operation even due to a health test error, the DRBG will send an error to the caller because at that time, the DRBG has received no seed that is SP800-90B compliant.

  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* random: replace custom notifier chain with standard one
  Jason A. Donenfeld · 2022-06-22 · 1 file changed, -9/+8

  commit 5acd35487dc911541672b3ffc322851769c32a56 upstream.

  We previously rolled our own randomness readiness notifier, which only has two users in the whole kernel. Replace this with a more standard atomic notifier block that serves the same purpose with less code. Also unexport the symbols, because no modules use it, only unconditional builtins.

  The only drawback is that it's possible for a notification handler returning the "stop" code to prevent further processing, but given that there are only two users, and that we're unexporting this anyway, that doesn't seem like a significant drawback for the simplification we receive here.

  Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Cc: Theodore Ts'o <tytso@mit.edu>
  Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
  [Jason: for stable, also backported to crypto/drbg.c, not unexporting.]
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
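  The standard mechanism being switched to is the atomic notifier chain; the general consumer-side pattern looks like this (a sketch with assumed names, not the actual random.c symbols):

    #include <linux/notifier.h>

    static ATOMIC_NOTIFIER_HEAD(random_ready_chain);

    static int drbg_random_ready(struct notifier_block *nb,
                                 unsigned long action, void *data)
    {
        /* reseed logic would go here */
        return NOTIFY_DONE;
    }

    static struct notifier_block drbg_nb = {
        .notifier_call = drbg_random_ready,
    };

    static int __init drbg_sketch_init(void)
    {
        return atomic_notifier_chain_register(&random_ready_chain, &drbg_nb);
    }

    /* The producer side fires every registered callback once ready:
     * atomic_notifier_call_chain(&random_ready_chain, 0, NULL); */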
* crypto: cryptd - Protect per-CPU resource by disabling BH.
  Sebastian Andrzej Siewior · 2022-06-14 · 1 file changed, -12/+11

  [ Upstream commit 91e8bcd7b4da182e09ea19a2c73167345fe14c98 ]

  The access to cryptd_queue::cpu_queue is synchronized by disabling preemption in cryptd_enqueue_request() and disabling BH in cryptd_queue_worker(). This implies that access is allowed from BH.

  If cryptd_enqueue_request() is invoked from preemptible context _and_ soft interrupt then this can lead to list corruption since cryptd_enqueue_request() is not protected against access from soft interrupt.

  Replace get_cpu() in cryptd_enqueue_request() with local_bh_disable() to ensure BH is always disabled. Remove preempt_disable() from cryptd_queue_worker() since it is not needed because local_bh_disable() ensures synchronisation.

  Fixes: a045a18f0207 ("crypto: cryptd - Per-CPU thread implementation...")
  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
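  The shape of the change, sketched against cryptd's per-CPU queue (structure names as in crypto/cryptd.c; treat the details as illustrative):

    static int cryptd_enqueue_sketch(struct cryptd_queue *queue,
                                     struct crypto_async_request *request)
    {
        struct cryptd_cpu_queue *cpu_queue;
        int err;

        /* local_bh_disable() both pins us to the CPU and excludes
         * softirq users of the same per-CPU queue, which plain
         * get_cpu()/preempt_disable() did not. */
        local_bh_disable();
        cpu_queue = this_cpu_ptr(queue->cpu_queue);
        err = crypto_enqueue_request(&cpu_queue->queue, request);
        queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);
        local_bh_enable();

        return err;
    }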
* crypto: ecrdsa - Fix incorrect use of vli_cmp
  Vitaly Chikunov · 2022-06-06 · 1 file changed, -4/+4

  commit 7cc7ab73f83ee6d50dc9536bc3355495d8600fad upstream.

  Correctly compare values that shall be greater-or-equal and not just greater.

  Fixes: 223475c353b6 ("crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm")
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
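  vli_cmp() follows the memcmp() convention, returning -1, 0 or 1, so a "must be greater or equal" requirement has to accept the 0 case too. Schematically (an illustrative bounds check, not the verbatim ecrdsa diff):

    /* A signature component r must satisfy 0 < r < q. */

    /* Wrong: "== 1" tests strictly-greater only, so the boundary
     * case r == q slips through undetected. */
    if (vli_cmp(r, q, ndigits) == 1)
        return -EKEYREJECTED;

    /* Right: any r >= q is invalid. */
    if (vli_cmp(r, q, ndigits) >= 0)
        return -EKEYREJECTED;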
* crypto: authenc - Fix sleep in atomic context in decrypt_tail
  Herbert Xu · 2022-04-15 · 1 file changed, -1/+1

  [ Upstream commit 66eae850333d639fc278d6f915c6fc01499ea893 ]

  The function crypto_authenc_decrypt_tail discards its flags argument and always relies on the flags from the original request when starting its sub-request. This is clearly wrong as it may cause the SLEEPABLE flag to be set when it shouldn't.

  Fixes: c53e75d5b112 ("crypto: authenc - Convert to new AEAD interface")
  Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* crypto: rsa-pkcs1pad - fix buffer overread in pkcs1pad_verify_complete()
  Eric Biggers · 2022-04-15 · 1 file changed, -0/+2

  commit a24611ea356c7f3f0ec926da11b9482ac1f414fd upstream.

  Before checking whether the expected digest_info is present, we need to check that there are enough bytes remaining.

  Fixes: 39669c60ba49 ("crypto: Add hash param to pkcs1pad")
  Cc: <stable@vger.kernel.org> # v4.6+
  Cc: Tadeusz Struk <tadeusz.struk@linaro.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
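  The fix is a bounds check before the comparison; roughly (local names such as out_buf, pos and dst_len are assumptions for this sketch, standing for the decoded buffer, the current offset, and the bytes remaining):

    if (digest_info) {
        /* New: ensure the encoded DigestInfo actually fits in what
         * remains of the decoded buffer before comparing against it;
         * otherwise the memcmp() below reads past the end. */
        if (dst_len < digest_info->size)
            goto done;      /* verification fails with -EBADMSG */

        if (memcmp(out_buf + pos, digest_info->data, digest_info->size))
            goto done;
        pos += digest_info->size;
    }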
* crypto: rsa-pkcs1pad - restore signature length check
  Eric Biggers · 2022-04-15 · 1 file changed, -1/+1

  commit d3481accd974541e6a5d6a1fb588924a3519c36e upstream.

  RSA PKCS#1 v1.5 signatures are required to be the same length as the RSA key size. RFC8017 specifically requires the verifier to check this (https://datatracker.ietf.org/doc/html/rfc8017#section-8.2.2).

  Commit 39669c60ba49 ("crypto: Add hash param to pkcs1pad") changed the kernel to allow longer signatures, but didn't explain this part of the change; it seems to be unrelated to the rest of the commit. Revert this change, since it doesn't appear to be correct.

  We can be pretty sure that no one is relying on overly-long signatures (which would have to be front-padded with zeroes) being supported, given that they would have been broken since commit 5bb3d56f3518 ("crypto: akcipher - new verify API for public key algorithms").

  Fixes: 39669c60ba49 ("crypto: Add hash param to pkcs1pad")
  Cc: <stable@vger.kernel.org> # v4.6+
  Cc: Tadeusz Struk <tadeusz.struk@linaro.org>
  Suggested-by: Vitaly Chikunov <vt@altlinux.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: rsa-pkcs1pad - correctly get hash from source scatterlist
  Eric Biggers · 2022-04-15 · 1 file changed, -1/+1

  commit e316f7179be22912281ce6331d96d7c121fb2b17 upstream.

  Commit 5bb3d56f3518 ("crypto: akcipher - new verify API for public key algorithms") changed akcipher_alg::verify to take in both the signature and the actual hash and do the signature verification, rather than just return the hash expected by the signature as was the case before. To do this, it implemented a hack where the signature and hash are concatenated with each other in one scatterlist.

  Obviously, for this to work correctly, akcipher_alg::verify needs to correctly extract the two items from the scatterlist it is given. Unfortunately, it doesn't correctly extract the hash in the case where the signature is longer than the RSA key size, as it assumes that the signature's length is equal to the RSA key size. This causes a prefix of the hash, or even the entire hash, to be taken from the *signature*. (Note, the case of a signature longer than the RSA key size should not be allowed in the first place; a separate patch will fix that.)

  It is unclear whether the resulting scheme has any useful security properties.

  Fix this by correctly extracting the hash from the scatterlist.

  Fixes: 5bb3d56f3518 ("crypto: akcipher - new verify API for public key algorithms")
  Cc: <stable@vger.kernel.org> # v5.2+
  Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: pcrypt - Delay write to padata->info
  Daniel Jordan · 2021-11-17 · 1 file changed, -4/+8

  [ Upstream commit 68b6dea802cea0dbdd8bd7ccc60716b5a32a5d8a ]

  These three events can race when pcrypt is used multiple times in a template ("pcrypt(pcrypt(...))"):

    1. [taskA] The caller makes the crypto request via crypto_aead_encrypt()
    2. [kworkerB] padata serializes the inner pcrypt request
    3. [kworkerC] padata serializes the outer pcrypt request

  3 might finish before the call to crypto_aead_encrypt() returns in 1, resulting in two possible issues.

  First, a use-after-free of the crypto request's memory when, for example, taskA writes to the outer pcrypt request's padata->info in pcrypt_aead_enc() after kworkerC completes the request.

  Second, the outer pcrypt request overwrites the inner pcrypt request's return code with -EINPROGRESS, making a successful request appear to fail. For instance, kworkerB writes the outer pcrypt request's padata->info in pcrypt_aead_done() and then taskA overwrites it in pcrypt_aead_enc().

  Avoid both situations by delaying the write of padata->info until after the inner crypto request's return code is checked. This prevents the use-after-free by not touching the crypto request's memory after the next-inner crypto request is made, and stops padata->info from being overwritten.

  Fixes: fa0657180cb85 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper")
  Reported-by: syzbot+b187b77c8474f9648fae@syzkaller.appspotmail.com
  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* crypto: ecc - fix CRYPTO_DEFAULT_RNG dependency
  Arnd Bergmann · 2021-11-17 · 1 file changed, -1/+1

  [ Upstream commit 38aa192a05f22f9778f9420e630f0322525ef12e ]

  The ecc.c file started out as part of the ECDH algorithm but got moved out into a standalone module later. It does not build without CRYPTO_DEFAULT_RNG, so now that other modules are using it as well we can run into this link error:

    aarch64-linux-ld: ecc.c:(.text+0xfc8): undefined reference to `crypto_default_rng'
    aarch64-linux-ld: ecc.c:(.text+0xff4): undefined reference to `crypto_put_default_rng'

  Move the 'select CRYPTO_DEFAULT_RNG' statement into the correct symbol.

  Fixes: 223475c353b6 ("crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm")
  Fixes: ef4af956b9d0 ("crypto: ecdsa - Add support for ECDSA signature verification")
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* crypto: shash - avoid comparing pointers to exported functions under CFI
  Ard Biesheuvel · 2021-07-14 · 1 file changed, -3/+15

  [ Upstream commit 22ca9f4aaf431a9413dcc115dd590123307f274f ]

  crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules.

  Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously.

  Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function.

  Cc: Sami Tolvanen <samitolvanen@google.com>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
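  The resulting out-of-line helper looks essentially like this (moved into crypto/shash.c, per the commit description):

    /* Now static: no longer exported for inline comparison. */
    static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
                               unsigned int keylen)
    {
        return -ENOSYS;
    }

    /* Out of line: under CFI, each module compares against its own
     * private jump-table stub for an exported function, so a
     * cross-module pointer comparison can spuriously fail. Doing the
     * comparison here keeps both pointers in one translation unit. */
    bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
    {
        return alg->setkey != shash_no_setkey;
    }
    EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);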
* crypto: rng - fix crypto_rng_reset() refcounting when !CRYPTO_STATS
  Eric Biggers · 2021-05-11 · 1 file changed, -7/+3

  commit bf1e9b3dff85a0433aaea6755c3f8036840dacfb upstream.

  crypto_stats_get() is a no-op when the kernel is compiled without CONFIG_CRYPTO_STATS, so pairing it with crypto_alg_put() unconditionally (as crypto_rng_reset() does) is wrong.

  Fix this by moving the call to crypto_stats_get() to just before the actual algorithm operation which might need it. This makes it always paired with crypto_stats_rng_seed().

  Fixes: 9b91cc7900ce ("crypto: rng - Fix a refcounting bug in crypto_rng_reset()")
  Cc: stable@vger.kernel.org
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: api - check for ERR pointers in crypto_destroy_tfm()
  Ard Biesheuvel · 2021-05-11 · 1 file changed, -1/+1

  [ Upstream commit 72e2a02cdf58fbe06533cc21f6d7dd518cf01ae5 ]

  Given that crypto_alloc_tfm() may return ERR pointers, and to avoid crashes on obscure error paths where such pointers are presented to crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there before dereferencing the second argument as a struct crypto_tfm pointer.

  [0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/

  Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
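  The fix itself is a one-line guard at the top of crypto_destroy_tfm(); approximately:

    void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
    {
        struct crypto_alg *alg;

        /* Was "if (unlikely(!mem))": an ERR_PTR() returned by a
         * failed crypto_alloc_tfm() would be dereferenced below. */
        if (IS_ERR_OR_NULL(mem))
            return;

        alg = tfm->__crt_alg;
        /* ... run exit hooks, drop the alg refcount, free mem ... */
    }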
* crypto: x86 - Regularize glue function prototypes
  Kees Cook · 2021-03-20 · 2 files changed, -10/+14

  commit 31701000f52600cb1dd335dec9333addf1a69494 upstream.

  The crypto glue performed function prototype casting via macros to make indirect calls to assembly routines. Instead of performing casts at the call sites (which trips Control Flow Integrity prototype checking), switch each prototype to a common standard set of arguments which allows the removal of the existing macros. In order to keep pointer math unchanged, internal casting between u128 pointers and u8 pointers is added.

  Co-developed-by: João Moreira <joao.moreira@intel.com>
  Signed-off-by: João Moreira <joao.moreira@intel.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-by: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Ard Biesheuvel <ardb@google.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: tcrypt - avoid signed overflow in byte count
  Ard Biesheuvel · 2021-03-07 · 1 file changed, -10/+10

  [ Upstream commit cbebe807d8465ef8f541a57c12ed6a3b55f73629 ]

  The signed long type used for printing the number of bytes processed in tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient to cover the performance of common accelerated ciphers such as AES-NI when benchmarked with sec=1. So switch to u64 instead.

  While at it, fix up a missing printk->pr_cont conversion in the AEAD benchmark.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
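  The arithmetic: one second of AES-NI throughput can easily exceed 2^31 bytes, the ceiling of a signed 32-bit long. A sketch of the counter change (variable names assumed for illustration):

    /* Before: on 32-bit targets "long" is 32 bits, so ~3 GiB of
     * data processed in one sec=1 run wraps negative mid-benchmark:
     *
     *     long bcount;  ...  pr_cont("%ld bytes", bcount);
     *
     * After: 64 bits regardless of the target's word size. */
    u64 bcount = 0;

    while (time_before(jiffies, end))
        bcount += bsize;    /* bytes processed per iteration */

    pr_cont("%llu bytes\n", bcount);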
* crypto: ecdh_helper - Ensure 'len >= secret.len' in decode_key()
  Daniele Alessandrelli · 2021-03-04 · 1 file changed, -0/+3

  [ Upstream commit 49397f15495d5f06d87da962adab714720026232 ]

  The length ('len' parameter) passed to crypto_ecdh_decode_key() is never checked against the length encoded in the passed buffer ('buf' parameter). This could lead to an out-of-bounds access when the passed length is less than the encoded length.

  Add a check to prevent that.

  Fixes: e8c7f30611313 ("crypto: ecdh - Add ECDH software support")
  Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
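  The added check, in context (function and helper names as in crypto/ecdh_helper.c; a sketch, not the verbatim diff):

    int crypto_ecdh_decode_key(const char *buf, unsigned int len,
                               struct ecdh *params)
    {
        struct kpp_secret secret;
        const char *ptr = ecdh_unpack_data(&secret, buf, sizeof(secret));

        if (secret.type != CRYPTO_KPP_SECRET_TYPE_ECDH)
            return -EINVAL;

        /* New: the caller-supplied length must cover the length
         * recorded inside the encoding itself, or the unpacking
         * below reads out of bounds. */
        if (unlikely(len < secret.len))
            return -EINVAL;

        /* ... unpack curve_id, key_size and the key via ptr ... */
        return 0;
    }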
* crypto: asym_tpm: correct zero out potential secrets
  Greg Kroah-Hartman · 2021-01-12 · 1 file changed, -1/+1

  commit 26a5d10c7f4f9edc8a09f0301cc5ca55844db375 upstream.

  The function derive_pub_key() should be calling memzero_explicit() instead of memset() in case the compiler decides to optimize away the call to memset() because it "knows" no one is going to touch the memory anymore.

  Cc: stable <stable@vger.kernel.org>
  Reported-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com>
  Tested-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com>
  Link: https://lore.kernel.org/r/X8ns4AfwjKudpyfe@kroah.com
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
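  The distinction, in miniature (a generic sketch, not the asym_tpm code itself):

    static void use_secret_sketch(size_t len)
    {
        u8 *buf = kmalloc(len, GFP_KERNEL);

        if (!buf)
            return;

        /* ... derive and use key material in buf ... */

        /* A plain memset() here is a dead store the compiler is
         * entitled to delete; memzero_explicit() carries a barrier
         * so the wipe actually reaches memory before the free. */
        memzero_explicit(buf, len);
        kfree(buf);
    }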
* crypto: ecdh - avoid buffer overflow in ecdh_set_secret()
  Ard Biesheuvel · 2021-01-12 · 1 file changed, -1/+2

  commit 4204b130b8a4d1cd2c4a0d9a5a09993e8948defe upstream.

  Pavel reports that commit 8319c80ab523 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()") fixes one problem but introduces another: the unconditional memcpy() introduced by that commit may overflow the target buffer if the source data is invalid, which could be the result of intentional tampering.

  So check params.key_size explicitly against the size of the target buffer before validating the key further.

  Fixes: 8319c80ab523 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()")
  Reported-by: Pavel Machek <pavel@denx.de>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
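  Combined with the alignment fix in the entry below, the validation path ends up shaped roughly like this (ctx field names are assumptions for the sketch):

    struct ecdh params;

    if (crypto_ecdh_decode_key(buf, len, &params) < 0 ||
        params.key_size > sizeof(ctx->private_key))
        return -EINVAL;

    /* Copy first (fixes the unaligned access), but only after the
     * size check above (fixes the overflow this commit addresses). */
    memcpy(ctx->private_key, params.key, params.key_size);

    if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
                         ctx->private_key, params.key_size) < 0)
        return -EINVAL;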
* crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()
  Ard Biesheuvel · 2020-12-30 · 1 file changed, -4/+5

  commit 8319c80ab523e5735c07bb0da3bae7a93997c61e upstream.

  ecdh_set_secret() casts a void* pointer to a const u64* in order to feed it into ecc_is_key_valid(). This is not generally permitted by the C standard, and leads to actual misalignment faults on ARMv6 cores. In some cases, these are fixed up in software, but this still leads to performance hits that are entirely avoidable.

  So let's copy the key into the ctx buffer first, which we will do anyway in the common case, and which guarantees correct alignment.

  Cc: <stable@vger.kernel.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: af_alg - avoid undefined behavior accessing salg_name
  Eric Biggers · 2020-12-30 · 1 file changed, -3/+7

  commit e1cbfc28701813b40fe3891526dfc55f475bae8d upstream.

  Commit 7175dac1d936 ("crypto: af_alg - Allow arbitrarily long algorithm names") made the kernel start accepting arbitrarily long algorithm names in sockaddr_alg. However, the actual length of the salg_name field stayed at the original 64 bytes.

  This is broken because the kernel can access indices >= 64 in salg_name, which is undefined behavior -- even though the memory that is accessed is still located within the sockaddr structure. It would only be defined behavior if the array were properly marked as arbitrary-length (either by making it a flexible array, which is the recommended way these days, or by making it an array of length 0 or 1).

  We can't simply change salg_name into a flexible array, since that would break source compatibility with userspace programs that embed sockaddr_alg into another struct, or (more commonly) declare a sockaddr_alg like 'struct sockaddr_alg sa = { .salg_name = "foo" };'.

  One solution would be to change salg_name into a flexible array only when '#ifdef __KERNEL__'. However, that would keep userspace without an easy way to actually use the longer algorithm names.

  Instead, add a new structure 'sockaddr_alg_new' that has the flexible array field, and expose it to both userspace and the kernel. Make the kernel use it correctly in alg_bind().

  This addresses the syzbot report "UBSAN: array-index-out-of-bounds in alg_bind" (https://syzkaller.appspot.com/bug?extid=92ead4eb8e26a26d465e).

  Reported-by: syzbot+92ead4eb8e26a26d465e@syzkaller.appspotmail.com
  Fixes: 7175dac1d936 ("crypto: af_alg - Allow arbitrarily long algorithm names")
  Cc: <stable@vger.kernel.org> # v4.12+
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
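  The new structure, per the commit's description of the uapi change (field layout shown as a sketch; consult include/uapi/linux/if_alg.h for the authoritative definition):

    struct sockaddr_alg_new {
        __u16   salg_family;
        __u8    salg_type[14];
        __u32   salg_feat;
        __u32   salg_mask;
        __u8    salg_name[];    /* flexible array: indexing past the
                                 * old 64-byte bound is now defined */
    };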
* crypto: algif_skcipher - EBUSY on aio should be an error
  Herbert Xu · 2020-10-29 · 1 file changed, -1/+1

  [ Upstream commit 6ccc838ed96261a4fef35aa45f7d951b31aa3d9d ]

  I removed the MAY_BACKLOG flag on the aio path a while ago but the error check still incorrectly interpreted EBUSY as success. This may cause the submitter to wait for a request that will never complete.

  Fixes: 5da64d081413 ("crypto: algif_skcipher - Do not set...")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* crypto: algif_aead - Do not set MAY_BACKLOG on the async path
  Herbert Xu · 2020-10-29 · 1 file changed, -3/+4

  commit f4f2fe30bd59f6b848d0a90b2549441f6f3cc5a9 upstream.

  The async path cannot use MAY_BACKLOG because it is not meant to block, which is what MAY_BACKLOG does. On the other hand, both the sync and async paths can make use of MAY_SLEEP.

  Fixes: a8e3a343aba2 ("crypto: af_alg - add async support to...")
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: af_alg - Work around empty control messages without MSG_MORE
  Herbert Xu · 2020-09-03 · 1 file changed, -3/+10

  commit 113767ab58ba2b4430fa802ccafd7e4c04ae512b upstream.

  The iwd daemon uses libell which sets up the skcipher operation with two separate control messages. As the first control message is sent without MSG_MORE, it is interpreted as an empty request.

  While libell should be fixed to use MSG_MORE where appropriate, this patch works around the bug in the kernel so that existing binaries continue to work. We will print a warning however.

  A separate issue is that the new kernel code no longer allows the control message to be sent twice within the same request. This restriction is obviously incompatible with what iwd was doing (first setting an IV and then sending the real control message). This patch changes the kernel so that this is explicitly allowed.

  Reported-by: Caleb Jorden <caljorden@hotmail.com>
  Fixes: a52eb0489f96 ("crypto: algif_aead - Only wake up when...")
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: algif_aead - fix uninitialized ctx->init
  Ondrej Mosnacek · 2020-08-21 · 2 files changed, -12/+1

  [ Upstream commit 39709b59769d024936bdb204d0b49a6ae9d83b3e ]

  In skcipher_accept_parent_nokey() the whole af_alg_ctx structure is cleared by memset() after allocation, so add such memset() also to aead_accept_parent_nokey() so that the new "init" field is also initialized to zero. Without that the initial ctx->init checks might randomly return true and cause errors.

  While there, also remove the redundant zero assignments in both functions.

  Found via libkcapi testsuite.

  Cc: Stephan Mueller <smueller@chronox.de>
  Fixes: a52eb0489f96 ("crypto: algif_aead - Only wake up when ctx->more is zero")
  Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>
* crypto: af_alg - Fix regression on empty requests
  Herbert Xu · 2020-08-21 · 1 file changed, -1/+1

  [ Upstream commit 673136029221f5b7c0eea9efc9aabc1a081aa148 ]

  Some user-space programs rely on crypto requests that have no control metadata. This broke when a check was added to require the presence of control metadata with the ctx->init flag.

  This patch fixes the regression by setting ctx->init as long as one sendmsg(2) has been made, with or without a control message.

  Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
  Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
  Fixes: a52eb0489f96 ("crypto: algif_aead - Only wake up when...")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <sashal@kernel.org>