path: root/crypto

Each entry below shows the commit subject (author, date; files changed, lines -/+), followed by the commit message, newest first.
* crypto: fcrypt - drop unneeded alignmask (Ard Biesheuvel, 2021-02-10; 1 file, -1/+0)

  The fcrypt implementation uses memcpy() to access the input and output buffers so there is no need to set an alignmask.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast6 - use unaligned accessors instead of alignmask (Ard Biesheuvel, 2021-02-10; 1 file, -22/+17)

  Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the CAST6 input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
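These alignmask-to-unaligned-accessor conversions (cast6 here, cast5/camellia/blowfish/serpent below) all follow the same pattern. A minimal sketch with a made-up two-word block function, not the kernel's actual cipher code (the real ciphers also differ in endianness): instead of setting .cra_alignmask = 3 and doing plain u32 loads, read and write the words through the unaligned accessors.

    #include <asm/unaligned.h>
    #include <linux/types.h>

    /* Toy encrypt routine illustrating the pattern only. */
    static void toy_encrypt_block(u8 *out, const u8 *in)
    {
            u32 l = get_unaligned_le32(in);     /* was: a u32 load assuming alignment */
            u32 r = get_unaligned_le32(in + 4);

            /* ... the real round function would transform l and r here ... */

            put_unaligned_le32(l, out);         /* was: a u32 store assuming alignment */
            put_unaligned_le32(r, out + 4);
    }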
* crypto: cast5 - use unaligned accessors instead of alignmask (Ard Biesheuvel, 2021-02-10; 1 file, -14/+9)

  Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the CAST5 input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: camellia - use unaligned accessors instead of alignmask (Ard Biesheuvel, 2021-02-10; 1 file, -29/+16)

  Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the Camellia input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: blowfish - use unaligned accessors instead of alignmask (Ard Biesheuvel, 2021-02-10; 1 file, -14/+9)

  Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the Blowfish input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: serpent - use unaligned accessors instead of alignmask (Ard Biesheuvel, 2021-02-10; 1 file, -27/+17)

  Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the Serpent input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: serpent - get rid of obsolete tnepres variant (Ard Biesheuvel, 2021-02-10; 5 files, -169/+7)

  It is not trivial to trace back why exactly the tnepres variant of serpent was added ~17 years ago - Google searches come up mostly empty, but it seems to be related to the 'kerneli' version, which was based on an incorrect interpretation of the serpent spec. In other words, nobody is likely to care anymore today, so let's get rid of it.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: michael_mic - fix broken misalignment handling (Ard Biesheuvel, 2021-02-10; 1 file, -19/+12)

  The Michael MIC driver uses the cra_alignmask to ensure that pointers presented to its update and finup/final methods are 32-bit aligned. However, due to the way the shash API works, this is no guarantee that the 32-bit reads occurring in the update method are also aligned, as the buffer presented to update() may be of uneven length. For instance, an update() of 3 bytes followed by a misaligned update() of 4 or more bytes will result in a misaligned access using an accessor that is not suitable for this.

  On most architectures, this does not matter, and so setting the cra_alignmask is pointless. On architectures where this does matter, setting the cra_alignmask does not actually solve the problem. So let's get rid of the cra_alignmask, and use unaligned accessors instead, where appropriate.

  Cc: <stable@vger.kernel.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
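To illustrate why the alignmask cannot help for a streaming hash (hypothetical helper and context fields, not the actual michael_mic.c code): after a 3-byte update(), the word loop in the next update() starts one byte off a word boundary, so the 32-bit reads must go through unaligned accessors.

    #include <asm/unaligned.h>
    #include <linux/types.h>

    struct toy_mic_ctx {            /* simplified stand-in for the real desc ctx */
            u32 l, r;
    };

    /* Consume whole 32-bit words starting at an arbitrarily aligned position. */
    static void toy_mic_words(struct toy_mic_ctx *mctx, const u8 *data,
                              unsigned int len)
    {
            while (len >= 4) {
                    mctx->l ^= get_unaligned_le32(data);
                    /* ... Michael block function on (l, r) omitted ... */
                    data += 4;
                    len -= 4;
            }
    }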
* crypto: salsa20 - remove Salsa20 stream cipher algorithm (Ard Biesheuvel, 2021-01-29; 6 files, -1403/+1)

  Salsa20 is not used anywhere in the kernel, is not suitable for disk encryption, and is widely considered to have been superseded by ChaCha20. So let's remove it.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Acked-by: Mike Snitzer <snitzer@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tgr192 - remove Tiger 128/160/192 hash algorithms (Ard Biesheuvel, 2021-01-29; 6 files, -876/+0)

  Tiger is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rmd320 - remove RIPE-MD 320 hash algorithm (Ard Biesheuvel, 2021-01-29; 6 files, -494/+1)

  RIPE-MD 320 is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: rmd256 - remove RIPE-MD 256 hash algorithm (Ard Biesheuvel, 2021-01-29; 7 files, -441/+1)

  RIPE-MD 256 is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rmd128 - remove RIPE-MD 128 hash algorithm (Ard Biesheuvel, 2021-01-29; 7 files, -506/+1)

  RIPE-MD 128 is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86 - remove glue helper module (Ard Biesheuvel, 2021-01-14; 2 files, -11/+0)

  All dependencies on the x86 glue helper module have been replaced by local instantiations of the new ECB/CBC preprocessor helper macros, so the glue helper module can be retired.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/twofish - drop dependency on glue helper (Ard Biesheuvel, 2021-01-14; 1 file, -2/+0)

  Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/cast6 - drop dependency on glue helper (Ard Biesheuvel, 2021-01-14; 1 file, -1/+0)

  Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/serpent - drop dependency on glue helper (Ard Biesheuvel, 2021-01-14; 1 file, -3/+0)

  Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/camellia - drop dependency on glue helper (Ard Biesheuvel, 2021-01-14; 1 file, -2/+0)

  Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/blowfish - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+1)

  Blowfish in counter mode is never used in the kernel, so there is no point in keeping an accelerated implementation around.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/des - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+1)

  DES or Triple DES in counter mode is never used in the kernel, so there is no point in keeping an accelerated implementation around.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+2)

  Twofish in CTR mode is never used by the kernel directly, and is highly unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop the accelerated CTR mode implementation, and instead, rely on the CTR template and the bare cipher.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
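For callers of the crypto API, this and the following CTR removals change nothing visible: requesting the same algorithm name simply causes the core to instantiate the generic CTR template around whichever bare cipher implementation is available. A minimal sketch of obtaining such a transform (standard skcipher API, error handling trimmed; the function name is illustrative):

    #include <crypto/skcipher.h>
    #include <linux/err.h>

    /* "ctr(twofish)" is built from the CTR template plus the best available
     * single-block twofish cipher.
     */
    static struct crypto_skcipher *get_twofish_ctr(void)
    {
            struct crypto_skcipher *tfm;

            tfm = crypto_alloc_skcipher("ctr(twofish)", 0, 0);
            return IS_ERR(tfm) ? NULL : tfm;
    }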
* crypto: x86/cast6 - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+1)

  CAST6 in CTR mode is never used by the kernel directly, and is highly unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop the accelerated CTR mode implementation, and instead, rely on the CTR template and the bare cipher.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/cast5 - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+1)

  CAST5 in CTR mode is never used by the kernel directly, and is highly unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop the accelerated CTR mode implementation, and instead, rely on the CTR template and the bare cipher.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/serpent - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+3)

  Serpent in CTR mode is never used by the kernel directly, and is highly unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop the accelerated CTR mode implementation, and instead, rely on the CTR template and the bare cipher.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/camellia - drop CTR mode implementation (Ard Biesheuvel, 2021-01-14; 1 file, -0/+1)

  Camellia in CTR mode is never used by the kernel directly, and is highly unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop the accelerated CTR mode implementation, and instead, rely on the CTR template and the bare cipher.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish - switch to XTS template (Ard Biesheuvel, 2021-01-14; 1 file, -0/+1)

  Now that the XTS template can wrap accelerated ECB modes, it can be used to implement Twofish in XTS mode as well, which turns out to be at least as fast, and sometimes even faster.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
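The XTS switches below follow the same idea as the CTR removals above: the generic template now drives the accelerated ECB code. One practical difference worth remembering is that XTS key material is twice the underlying cipher key length. A hedged sketch (illustrative helper, not taken from the patch):

    #include <crypto/skcipher.h>
    #include <linux/err.h>

    /* "xts(twofish)" resolves to the generic XTS template layered on the
     * accelerated twofish ECB implementation; setkey() expects two
     * concatenated twofish keys.
     */
    static int alloc_twofish_xts(struct crypto_skcipher **out)
    {
            struct crypto_skcipher *tfm = crypto_alloc_skcipher("xts(twofish)", 0, 0);

            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);
            *out = tfm;
            return 0;
    }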
* crypto: x86/serpent - switch to XTS template (Ard Biesheuvel, 2021-01-14; 1 file, -1/+1)

  Now that the XTS template can wrap accelerated ECB modes, it can be used to implement Serpent in XTS mode as well, which turns out to be at least as fast, and sometimes even faster.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/cast6 - switch to XTS template (Ard Biesheuvel, 2021-01-14; 1 file, -1/+1)

  Now that the XTS template can wrap accelerated ECB modes, it can be used to implement CAST6 in XTS mode as well, which turns out to be at least as fast, and sometimes even faster.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia - switch to XTS template (Ard Biesheuvel, 2021-01-14; 1 file, -1/+1)

  Now that the XTS template can wrap accelerated ECB modes, it can be used to implement Camellia in XTS mode as well, which turns out to be at least as fast, and sometimes even faster.

  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper (Ard Biesheuvel, 2021-01-08; 1 file, -1/+0)

  The AES-NI driver implements XTS via the glue helper, which consumes a struct with sets of function pointers which are invoked on chunks of input data of the appropriate size, as annotated in the struct.

  Let's get rid of this indirection, so that we can perform direct calls to the assembler helpers. Instead, let's adopt the arm64 strategy, i.e., provide a helper which can consume inputs of any size, provided that the penultimate, full block is passed via the last call if ciphertext stealing needs to be applied.

  This also allows us to enable the XTS mode for i386.

  Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reported-by: kernel test robot <lkp@intel.com>
  Reported-by: kernel test robot <lkp@intel.com>
  Reported-by: kernel test robot <lkp@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2b - update file comment (Eric Biggers, 2021-01-03; 1 file, -13/+10)

  The file comment for blake2b_generic.c makes it sound like it's the reference implementation of BLAKE2b with only minor changes. But it's actually been changed a lot. Update the comment to make this clearer.

  Reviewed-by: David Sterba <dsterba@suse.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: blake2b - sync with blake2s implementation (Eric Biggers, 2021-01-03; 1 file, -178/+48)

  Sync the BLAKE2b code with the BLAKE2s code as much as possible:

  - Move a lot of code into new headers <crypto/blake2b.h> and <crypto/internal/blake2b.h>, and adjust it to be like the corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and <crypto/internal/blake2s.h>.
  - Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.
  - Use a macro BLAKE2B_ALG() to define the shash_alg structs.
  - Export blake2b_compress_generic() for use as a fallback.

  This makes it much easier to add optimized implementations of BLAKE2b, as optimized implementations can use the helper functions crypto_blake2b_{setkey,init,update,final}() and blake2b_compress_generic(). The ARM implementation will use these.

  But this change is also helpful because it eliminates unnecessary differences between the BLAKE2b and BLAKE2s code, so that the same improvements can easily be made to both. (The two algorithms are basically identical, except for the word size and constants.) It also makes it straightforward to add a library API for BLAKE2b in the future if/when it's needed.

  This change does make the BLAKE2b code slightly more complicated than it needs to be, as it doesn't actually provide a library API yet. For example, __blake2b_update() doesn't really need to exist yet; it could just be inlined into crypto_blake2b_update(). But I believe this is outweighed by the benefits of keeping the code in sync.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2s - share the "shash" API boilerplate code (Eric Biggers, 2021-01-03; 1 file, -67/+9)

  Add helper functions for shash implementations of BLAKE2s to include/crypto/internal/blake2s.h, taking advantage of __blake2s_update() and __blake2s_final() that were added by the previous patch to share more code between the library and shash implementations.

  crypto_blake2s_setkey() and crypto_blake2s_init() are usable as shash_alg::setkey and shash_alg::init directly, while crypto_blake2s_update() and crypto_blake2s_final() take an extra 'blake2s_compress_t' function pointer parameter. This allows the implementation of the compression function to be overridden, which is the only part that optimized implementations really care about.

  The new functions are inline functions (similar to those in sha1_base.h, sha256_base.h, and sm3_base.h) because this avoids needing to add a new module blake2s_helpers.ko, they aren't *too* long, and this avoids indirect calls which are expensive these days. Note that they can't go in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency.

  Finally, use these new helper functions in the x86 implementation of BLAKE2s. (This part should be a separate patch, but unfortunately the x86 implementation used the exact same function names like "crypto_blake2s_update()", so it had to be updated at the same time.)

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: blake2s - remove unneeded includes (Eric Biggers, 2021-01-03; 1 file, -2/+0)

  It doesn't make sense for the generic implementation of BLAKE2s to include <crypto/internal/simd.h> and <linux/jump_label.h>, as these are things that would only be useful in an architecture-specific implementation. Remove these unnecessary includes.

  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: blake2s - define shash_alg structs using macros (Eric Biggers, 2021-01-03; 1 file, -61/+27)

  The shash_alg structs for the four variants of BLAKE2s are identical except for the algorithm name, driver name, and digest size. So, avoid code duplication by using a macro to define these structs.

  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
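The macro technique is simple: everything the four variants share is written once, and only the fields that differ become macro parameters. An illustrative sketch (toy field values; this is not the kernel's actual BLAKE2S_ALG definition, and required fields such as descsize and the method pointers are elided):

    #include <crypto/internal/hash.h>

    /* Parameterize only what differs between variants. */
    #define TOY_BLAKE2S_ALG(name, driver_name, digest_size)             \
            {                                                           \
                    .base.cra_name          = name,                     \
                    .base.cra_driver_name   = driver_name,              \
                    .base.cra_priority      = 100,                      \
                    .base.cra_blocksize     = 64,                       \
                    .digestsize             = digest_size,              \
            }

    static struct shash_alg toy_blake2s_algs[] = {
            TOY_BLAKE2S_ALG("blake2s-128", "blake2s-128-toy", 16),
            TOY_BLAKE2S_ALG("blake2s-256", "blake2s-256-toy", 32),
    };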
* crypto: remove cipher routines from public crypto API (Ard Biesheuvel, 2021-01-03; 19 files, -3/+39)

  The cipher routines in the crypto API are mostly intended for templates implementing skcipher modes generically in software, and shouldn't be used outside of the crypto subsystem. So move the prototypes and all related definitions to a new header file under include/crypto/internal. Also, let's use the new module namespace feature to move the symbol exports into a new namespace CRYPTO_INTERNAL.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Acked-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
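In practice, any module that still calls the single-block cipher API now has to include the new internal header and import the symbol namespace, along the lines of the following sketch (header path as created by this series):

    #include <crypto/internal/cipher.h>   /* new home of the crypto_cipher_*() prototypes */
    #include <linux/module.h>

    /* Without this, modpost complains about using symbols exported into the
     * CRYPTO_INTERNAL namespace.
     */
    MODULE_IMPORT_NS(CRYPTO_INTERNAL);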
* crypto: tcrypt - avoid signed overflow in byte count (Ard Biesheuvel, 2021-01-03; 1 file, -10/+10)

  The signed long type used for printing the number of bytes processed in tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient to cover the performance of common accelerated ciphers such as AES-NI when benchmarked with sec=1. So switch to u64 instead. While at it, fix up a missing printk->pr_cont conversion in the AEAD benchmark.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
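The shape of the fix, sketched with illustrative variable names (not tcrypt's actual ones): accumulate the byte count as a u64 and print it with %llu, so a fast cipher processing more than 2 GiB per second no longer wraps a signed long.

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Report throughput without overflowing on > 2 GiB of processed data. */
    static void report_throughput(unsigned long bcount, unsigned int blen,
                                  unsigned int secs)
    {
            u64 bytes = (u64)bcount * blen;   /* blocks completed x bytes per block */

            pr_cont("%llu bytes processed in %u seconds\n", bytes, secs);
    }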
* crypto: ecdh - avoid buffer overflow in ecdh_set_secret() (Ard Biesheuvel, 2021-01-03; 1 file, -1/+2)

  Pavel reports that commit 8319c80ab523 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()") fixes one problem but introduces another: the unconditional memcpy() introduced by that commit may overflow the target buffer if the source data is invalid, which could be the result of intentional tampering. So check params.key_size explicitly against the size of the target buffer before validating the key further.

  Fixes: 8319c80ab523 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()")
  Reported-by: Pavel Machek <pavel@denx.de>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
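Taken together with the earlier unaligned-access fix (further down this log), the resulting pattern looks roughly like the sketch below. The struct, bound and function names are made up for illustration; only the ordering (size check, then copy into an aligned context buffer, then validation) reflects the patches.

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    #define TOY_MAX_KEY_BYTES 66    /* made-up bound standing in for the ctx array size */

    struct toy_ecdh_ctx {
            u64 private_key[(TOY_MAX_KEY_BYTES + 7) / 8];
    };

    /* Reject oversized keys before the memcpy(); validate via the aligned
     * copy in the ctx rather than by casting the caller's buffer.
     */
    static int toy_ecdh_set_secret(struct toy_ecdh_ctx *ctx,
                                   const void *key, unsigned int key_size)
    {
            if (key_size > sizeof(ctx->private_key))
                    return -EINVAL;

            memcpy(ctx->private_key, key, key_size);
            /* ... key validation on ctx->private_key goes here ... */
            return 0;
    }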
* crypto: aegis128 - avoid spurious references crypto_aegis128_update_simd (Ard Biesheuvel, 2020-12-04; 1 file, -2/+2)

  Geert reports that builds where CONFIG_CRYPTO_AEGIS128_SIMD is not set may still emit references to crypto_aegis128_update_simd(), which cannot be satisfied and therefore break the build. These references only exist in functions that can be optimized away, but apparently, the compiler is not always able to prove this. So add some explicit checks for CONFIG_CRYPTO_AEGIS128_SIMD to help the compiler figure this out.

  Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: seed - remove trailing semicolon in macro definition (Tom Rix, 2020-12-04; 1 file, -1/+1)

  The macro use will already have a semicolon.

  Signed-off-by: Tom Rix <trix@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: ecdh - avoid unaligned accesses in ecdh_set_secret() (Ard Biesheuvel, 2020-12-04; 1 file, -4/+5)

  ecdh_set_secret() casts a void* pointer to a const u64* in order to feed it into ecc_is_key_valid(). This is not generally permitted by the C standard, and leads to actual misalignment faults on ARMv6 cores. In some cases, these are fixed up in software, but this still leads to performance hits that are entirely avoidable. So let's copy the key into the ctx buffer first, which we will do anyway in the common case, and which guarantees correct alignment.

  Cc: <stable@vger.kernel.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks (Ard Biesheuvel, 2020-11-27; 1 file, -37/+44)

  WireGuard and IPsec both typically operate on input blocks that are ~1420 bytes in size, given the default Ethernet MTU of 1500 bytes and the overhead of the VPN metadata. Many aead and skcipher implementations are optimized for power-of-2 block sizes, and whether they perform well when operating on 1420 byte blocks cannot be easily extrapolated from the performance on power-of-2 block sizes. So let's add 1420 bytes explicitly, and round it up to the next blocksize multiple of the algo in question if it does not support 1420 byte blocks.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
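The rounding described in the last sentence is a one-liner in kernel C; a hedged sketch (helper name illustrative): round 1420 up to the next multiple of the algorithm's block size, e.g. 1424 for a 16-byte block cipher.

    #include <linux/kernel.h>
    #include <crypto/skcipher.h>

    /* Pick a test length the algorithm can actually process. */
    static unsigned int test_len_for(struct crypto_skcipher *tfm)
    {
            unsigned int bs = crypto_skcipher_blocksize(tfm);

            return roundup(1420, bs);   /* roundup() works for non-power-of-2 divisors */
    }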
* crypto: tcrypt - permit tcrypt.ko to be builtin (Ard Biesheuvel, 2020-11-27; 1 file, -1/+1)

  When working on crypto algorithms, being able to run tcrypt quickly without booting an entire Linux installation can be very useful. For instance, QEMU/kvm can be used to boot a kernel from the command line, and having tcrypt.ko builtin would allow tcrypt to be executed to run benchmarks, or to run tests for algorithms that need to be instantiated from templates, without the need to make it past the point where the rootfs is mounted. So let's relax the requirement that tcrypt can only be built as a module when CONFIG_EXPERT is enabled.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tcrypt - don't initialize at subsys_initcall time (Ard Biesheuvel, 2020-11-27; 1 file, -1/+1)

  Commit 23311b0ed61a5 ("crypto: run initcalls for generic implementations earlier") converted tcrypt.ko's module_init() to subsys_initcall(), but this was unintentional: tcrypt.ko currently cannot be built into the core kernel, and so the subsys_initcall() gets converted into module_init() under the hood.

  Given that tcrypt.ko does not implement a generic version of a crypto algorithm that has to be available early during boot, there is no point in running the tcrypt init code earlier than implied by module_init(). However, for crypto development purposes, we will lift the restriction that tcrypt.ko must be built as a module, and when builtin, it makes sense for tcrypt.ko (which does its work inside the module init function) to run as late as possible. So let's switch to late_initcall() instead.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
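The initcall behaviour being relied on here: when code is built as a module, all the *_initcall() variants collapse to module_init(), so nothing changes for the modular build; only the builtin case moves later in boot. A minimal sketch (function name illustrative, not tcrypt's actual init):

    #include <linux/init.h>
    #include <linux/module.h>

    static int __init toy_bench_init(void)
    {
            /* run benchmarks / instantiate templates here */
            return 0;
    }

    /* As a module this behaves exactly like module_init(); built in, it
     * runs as late as possible, which is what a test harness wants.
     */
    late_initcall(toy_bench_init);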
* crypto: aegis128 - expose SIMD code path as separate driver (Ard Biesheuvel, 2020-11-27; 1 file, -77/+143)

  Wiring the SIMD code into the generic driver has the unfortunate side effect that the tcrypt testing code cannot distinguish them, and will therefore not use the latter to fuzz test the former, as it does for other algorithms. So let's refactor the code a bit so we can register two implementations: aegis128-generic and aegis128-simd.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: aegis128/neon - move final tag check to SIMD domain (Ard Biesheuvel, 2020-11-27; 3 files, -18/+57)

  Instead of calculating the tag and returning it to the caller on decryption, use a SIMD compare and min across vector to perform the comparison. This is slightly more efficient, and removes the need on the caller's part to wipe the tag from memory if the decryption failed.

  While at it, switch to unsigned int when passing cryptlen and assoclen - we don't support input sizes where it matters anyway.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: aegis128/neon - optimize tail block handling (Ard Biesheuvel, 2020-11-27; 1 file, -14/+75)

  Avoid copying the tail block via a stack buffer if the total size exceeds a single AEGIS block. In this case, we can use overlapping loads and stores and NEON permutation instructions instead, which leads to a modest performance improvement on some cores (< 5%), and is slightly cleaner. Note that we still need to use a stack buffer if the entire input is smaller than 16 bytes, given that we cannot use 16 byte NEON loads and stores safely in this case.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aegis128 - wipe plaintext and tag if decryption fails (Ard Biesheuvel, 2020-11-27; 1 file, -6/+26)

  The AEGIS spec mentions explicitly that the security guarantees hold only if the resulting plaintext and tag of a failed decryption are withheld. So ensure that we abide by this. While at it, drop the unused struct aead_request *req parameter from crypto_aegis128_process_crypt().

  Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
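The policy being enforced, sketched with a simplified helper (buffer handling and scatterlist walking omitted; not the actual aegis128 code): compare tags in constant time, and on mismatch scrub both the computed tag and the plaintext that was just produced before reporting the error.

    #include <crypto/algapi.h>      /* crypto_memneq() */
    #include <linux/errno.h>
    #include <linux/string.h>       /* memzero_explicit() */
    #include <linux/types.h>

    static int check_and_scrub(u8 *plaintext, unsigned int len,
                               u8 *computed_tag, const u8 *expected_tag,
                               unsigned int taglen)
    {
            if (crypto_memneq(computed_tag, expected_tag, taglen)) {
                    memzero_explicit(computed_tag, taglen);
                    memzero_explicit(plaintext, len);
                    return -EBADMSG;
            }
            return 0;
    }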
* crypto: sha - split sha.h into sha1.h and sha2.h (Eric Biggers, 2020-11-20; 4 files, -4/+4)

  Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2, and <crypto/sha3.h> contains declarations for SHA-3. This organization is inconsistent, but more importantly SHA-1 is no longer considered to be cryptographically secure. So to the extent possible, SHA-1 shouldn't be grouped together with any of the other SHA versions, and usage of it should be phased out.

  Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and <crypto/sha2.h>, and make everyone explicitly specify whether they want the declarations for SHA-1, SHA-2, or both. This avoids making the SHA-1 declarations visible to files that don't want anything to do with SHA-1. It also prepares for potentially moving sha1.h into a new insecure/ or dangerous/ directory.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: Kconfig - CRYPTO_MANAGER_EXTRA_TESTS requires the manager (Jason A. Donenfeld, 2020-11-13; 1 file, -1/+1)

  The extra tests in the manager actually require the manager to be selected too. Otherwise the linker gives errors like:

    ld: arch/x86/crypto/chacha_glue.o: in function `chacha_simd_stream_xor':
    chacha_glue.c:(.text+0x422): undefined reference to `crypto_simd_disabled_for_test'

  Fixes: da1db20dc767 ("crypto: Kconfig - allow tests to be disabled when manager is disabled")
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>