path: root/crypto/michael_mic.c
Date | Commit message | Author | Files | Lines
2023-04-06 | crypto: hash - Remove maximum statesize limit | Herbert Xu | 1 | -2/+1
Remove the HASH_MAX_STATESIZE limit now that it is unused. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-06 | crypto: algif_hash - Allocate hash state with kmalloc | Herbert Xu | 1 | -4/+15
Allocating the hash state on the stack limits its size. Change this to use kmalloc so the limit can be removed for new drivers. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
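For illustration, the general shape of the change (a minimal sketch; variable names and error handling are simplified assumptions, not the exact algif_hash hunk):

    /* Before: a fixed on-stack buffer capped the exportable state size. */
    /* char state[HASH_MAX_STATESIZE]; */

    /* After (sketch): size the buffer for the actual algorithm and put it
     * on the heap, so no global maximum is needed; req is assumed in scope. */
    void *state = kmalloc(crypto_ahash_statesize(crypto_ahash_reqtfm(req)),
                          GFP_KERNEL);
    if (!state)
        return -ENOMEM;
    err = crypto_ahash_export(req, state);
    /* ... import the state into the accepted socket's request ... */
    kfree_sensitive(state);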
2023-04-06 | crypto: drbg - Only fail when jent is unavailable in FIPS mode | Herbert Xu | 1 | -1/+1
When jent initialisation fails for any reason other than ENOENT, the entire drbg fails to initialise, even when we're not in FIPS mode. This is wrong because we can still use the kernel RNG when we're not in FIPS mode. Change it so that it only fails when we are in FIPS mode. Fixes: 023b75dd47bc ("crypto: drbg - Use callback API for random readiness") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
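A hedged sketch of the resulting behaviour (names follow the generic drbg/jent API; this is not a quote of the patch):

    drbg->jent = crypto_alloc_rng("jitterentropy_rng", 0, 0);
    if (IS_ERR(drbg->jent)) {
        const int err = PTR_ERR(drbg->jent);

        drbg->jent = NULL;
        /* Only fatal in FIPS mode; otherwise fall back to the kernel RNG. */
        if (fips_enabled)
            return err;
        pr_info("DRBG: continuing without Jitter RNG\n");
    }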
2023-04-06 | crypto: jitter - permanent and intermittent health errors | Stephan Müller | 3 | -120/+76
According to SP800-90B, two health failures are allowed: the intermittent and the permanent failure. So far, only the intermittent failure was implemented. The permanent failure was achieved by resetting the entire entropy source including its health test state and waiting for two or more back-to-back health errors. This approach is appropriate for the RCT, but not for the APT, as the APT has a non-linear cutoff value. Thus, this patch implements two cutoff values each for the RCT and the APT. This implies that the health state is left untouched when an intermittent failure occurs. The noise source is reset and a new APT power-up self test is performed. Yet, with the unchanged health test state, the counting of failures continues until a permanent failure is reached. Any non-failing raw entropy value causes the health tests to reset. The intermittent error has an unchanged significance level of 2^-30. The permanent error has a significance level of 2^-60. Considering that this level also indicates a false-positive rate (see SP800-90B section 4.2), a false positive must only be incurred with a low probability when considering a fleet of Linux kernels as a whole. Since hitting the permanent error may cause a panic(), the following calculation applies: assuming that a fleet of 10^9 Linux kernels run concurrently with this patch in FIPS mode, and on each kernel 2 health tests are performed every minute for one year, the chance of a false positive is about 1:1000 based on the binomial distribution. In addition, any power-up health test errors triggered with jent_entropy_init are treated as permanent errors. A permanent failure causes the entire entropy source to permanently return an error. This implies that a caller can only remedy the situation by allocating a new instance of the Jitter RNG. In a subsequent patch, a transparent re-allocation will be provided which also changes the implied heuristic entropy assessment. In addition, when the kernel is booted with fips=1, the Jitter RNG is defined to be part of a FIPS module. The permanent error of the Jitter RNG is translated into a FIPS module error. In this case, the entire FIPS module must cease operation. This is implemented in the kernel by invoking panic(). The patch also fixes an off-by-one in the RCT cutoff value, which is now set to 30 instead of 31 because the counting of the values starts with 0. Reviewed-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Stephan Mueller <smueller@chronox.de> Reviewed-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
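As an illustration of the two-level scheme described above, a repetition-count test (RCT) with separate intermittent and permanent cutoffs might look like this (struct and cutoff names are assumptions for the sketch, not the Jitter RNG's actual code):

    /* Cutoffs roughly corresponding to significance levels 2^-30 and 2^-60. */
    #define RCT_CUTOFF_INTERMITTENT 30
    #define RCT_CUTOFF_PERMANENT    60

    struct rct_state {
        unsigned int last_delta;
        unsigned int count;
    };

    static int rct_check(struct rct_state *s, unsigned int delta)
    {
        if (delta == s->last_delta) {
            s->count++;
            if (s->count >= RCT_CUTOFF_PERMANENT)
                return -EFAULT;  /* permanent: source returns errors from now on */
            if (s->count >= RCT_CUTOFF_INTERMITTENT)
                return -EAGAIN;  /* intermittent: reset the noise source only */
        } else {
            /* Any non-failing value resets the health test state. */
            s->last_delta = delta;
            s->count = 0;
        }
        return 0;
    }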
2023-03-24 | async_tx: fix kernel-doc notation warnings | Randy Dunlap | 2 | -7/+7
Fix kernel-doc warnings by adding "struct" keyword or "enum" keyword. Also fix 2 function parameter descriptions. Change some functions and structs from kernel-doc /** notation to regular /* comment notation. async_pq.c:18: warning: cannot understand function prototype: 'struct page *pq_scribble_page; ' async_pq.c:18: error: Cannot parse struct or union! async_pq.c:40: warning: No description found for return value of 'do_async_gen_syndrome' async_pq.c:109: warning: Function parameter or member 'blocks' not described in 'do_sync_gen_syndrome' async_pq.c:109: warning: Function parameter or member 'offsets' not described in 'do_sync_gen_syndrome' async_pq.c:109: warning: Function parameter or member 'disks' not described in 'do_sync_gen_syndrome' async_pq.c:109: warning: Function parameter or member 'len' not described in 'do_sync_gen_syndrome' async_pq.c:109: warning: Function parameter or member 'submit' not described in 'do_sync_gen_syndrome' async_tx.c:136: warning: cannot understand function prototype: 'enum submit_disposition ' async_tx.c:264: warning: Function parameter or member 'tx' not described in 'async_tx_quiesce' Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
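For reference, the kind of notation being fixed: kernel-doc needs the "struct" keyword in the one-line summary to parse a structure definition (a generic example in kernel-doc style, not the exact async_pq.c hunk):

    /**
     * struct pq_scribble_example - scratch page used by the P+Q routines
     * @page: backing page used as scribble space
     *
     * Comments that are not meant to be kernel-doc should open as plain
     * comments rather than with the double-star marker.
     */
    struct pq_scribble_example {
        struct page *page;
    };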
2023-03-24 | crypto: api - Demote BUG_ON() in crypto_unregister_alg() to a WARN_ON() | Toke Høiland-Jørgensen | 1 | -1/+3
The crypto_unregister_alg() function expects callers to ensure that any algorithm that is unregistered has a refcnt of exactly 1, and issues a BUG_ON() if this is not the case. However, there are in fact drivers that will call crypto_unregister_alg() without ensuring that the refcnt has been lowered first, most notably on system shutdown. This causes the BUG_ON() to trigger, which prevents a clean shutdown and hangs the system. To avoid such hangs on shutdown, demote the BUG_ON() in crypto_unregister_alg() to a WARN_ON() with early return. Cc stable because this problem was observed on a 6.2 kernel, cf the link below. Link: https://lore.kernel.org/r/87r0tyq8ph.fsf@toke.dk Cc: stable@vger.kernel.org Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
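A hedged sketch of the demotion (field and helper names assumed from the generic struct crypto_alg layout, not quoted from the patch):

    /* Before: BUG_ON(refcount_read(&alg->cra_refcnt) != 1); */

    /* After: complain loudly but keep the shutdown path alive. */
    if (WARN_ON(refcount_read(&alg->cra_refcnt) != 1))
        return;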
2023-03-17 | crypto: fips - simplify one-level sysctl registration for crypto_sysctl_table | Luis Chamberlain | 1 | -10/+1
There is no need to declare an extra table just to create a directory; this can easily be done with a prefix path passed to register_sysctl(). Simplify this registration. Signed-off-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
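Roughly, the simplification looks like this (a sketch; the table contents and the variable holding the returned header are assumptions):

    /* Before: a one-entry parent table existed only to create /proc/sys/crypto. */

    /* After: let register_sysctl() create the directory from the path prefix. */
    crypto_sysctl_header = register_sysctl("crypto", crypto_sysctl_table);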
2023-03-14 | crypto: testmgr - fix RNG performance in fuzz tests | Eric Biggers | 1 | -97/+169
The performance of the crypto fuzz tests has greatly regressed since v5.18. When booting a kernel on an arm64 dev board with all software crypto algorithms and CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled, the fuzz tests now take about 200 seconds to run, or about 325 seconds with lockdep enabled, compared to about 5 seconds before. The root cause is that the random number generation has become much slower due to commit d4150779e60f ("random32: use real rng for non-deterministic randomness"). On my same arm64 dev board, at the time the fuzz tests are run, get_random_u8() is about 345x slower than prandom_u32_state(), or about 469x if lockdep is enabled. Lockdep makes a big difference, but much of the rest comes from the get_random_*() functions taking a *very* slow path when the CRNG is not yet initialized. Since the crypto self-tests run early during boot, even having a hardware RNG driver enabled (CONFIG_CRYPTO_DEV_QCOM_RNG in my case) doesn't prevent this. x86 systems don't have this issue, but they still see a significant regression if lockdep is enabled. Converting the "Fully random bytes" case in generate_random_bytes() to use get_random_bytes() helps significantly, improving the test time to about 27 seconds. But that's still over 5x slower than before. This is all a bit silly, though, since the fuzz tests don't actually need cryptographically secure random numbers. So let's just make them use a non-cryptographically-secure RNG as they did before. The original prandom_u32() is gone now, so let's use prandom_u32_state() instead, with an explicitly managed state, like various other self-tests in the kernel source tree (rbtree_test.c, test_scanf.c, etc.) already do. This also has the benefit that no locking is required anymore, so performance should be even better than the original version that used prandom_u32(). Fixes: d4150779e60f ("random32: use real rng for non-deterministic randomness") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
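The prandom_u32_state() pattern referred to above, with an explicitly managed state (a self-contained sketch, not the testmgr code itself):

    #include <linux/prandom.h>

    static struct rnd_state test_rng;

    static void init_test_rng(void)
    {
        /* A fixed seed keeps runs reproducible; no locking is needed as
         * long as the state is only touched from one context. */
        prandom_seed_state(&test_rng, 0x123456789abcdefULL);
    }

    static u8 test_random_byte(void)
    {
        return prandom_u32_state(&test_rng) & 0xff;
    }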
2023-03-14 | crypto: api - Check CRYPTO_USER instead of NET for report | Herbert Xu | 9 | -72/+36
The report function is currently conditionalised on CONFIG_NET. As it's only used by CONFIG_CRYPTO_USER, conditionalising on that instead of CONFIG_NET makes more sense. This gets rid of a rarely used code-path. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: rng - Count error stats differently | Herbert Xu | 3 | -77/+48
Move all stat code specific to rng into the rng code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
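The counting pattern described here (and in the skcipher, kpp, acomp, hash, akcipher and aead patches below) looks roughly like this; the stat field names are illustrative, not the actual rng statistics structure:

    /* Count the operation and the bytes up front ... */
    atomic64_inc(&stat->generate_cnt);
    atomic64_add(dlen, &stat->generate_tlen);

    err = do_generate(tfm, src, slen, dst, dlen);

    /* ... and afterwards only bump the error counter if needed. */
    if (err)
        atomic64_inc(&stat->err_cnt);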
2023-03-14 | crypto: skcipher - Count error stats differently | Herbert Xu | 3 | -55/+87
Move all stat code specific to skcipher into the skcipher code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: kpp - Count error stats differently | Herbert Xu | 3 | -58/+34
Move all stat code specific to kpp into the kpp code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: acomp - Count error stats differently | Herbert Xu | 5 | -74/+101
Move all stat code specific to acomp into the acomp code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: hash - Count error stats differently | Herbert Xu | 5 | -117/+176
Move all stat code specific to hash into the hash code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: akcipher - Count error stats differently | Herbert Xu | 3 | -76/+34
Move all stat code specific to akcipher into the akcipher code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: aead - Count error stats differently | Herbert Xu | 3 | -60/+73
Move all stat code specific to aead into the aead code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 | crypto: algapi - Move stat reporting into algapi | Herbert Xu | 1 | -0/+6
The stats code resurrected the unions from the early days of kernel crypto. This patch starts the process of moving them out to the individual type structures as we do for everything else. In particular, add a report_stat function to cra_type and call that from the stats code if available. This allows us to move the actual code over one-by-one. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-14 | crypto: proc - Print fips status | Herbert Xu | 1 | -0/+6
As FIPS may disable algorithms it is useful to show their status in /proc/crypto. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-14 | crypto: ecc - Silence sparse warning | Herbert Xu | 1 | -2/+4
Rewrite the bitwise operations to silence the sparse warnings: CHECK ../crypto/ecc.c ../crypto/ecc.c:1387:39: warning: dubious: !x | y ../crypto/ecc.c:1397:47: warning: dubious: !x | y Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
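The warning comes from combining logical negation with bitwise OR; one way to keep the non-short-circuiting evaluation while silencing sparse is to spell the comparisons out (an illustration with generic names, not the exact ecc.c change):

    /* Sketch: sparse reports "dubious: !x | y" for the commented-out form. */
    static inline int either_zero_or_set(u64 x, u64 y)
    {
        /* return !x | y; */
        return (x == 0) | (y != 0);  /* same truth value, no warning */
    }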
2023-02-13 | crypto: api - Use data directly in completion function | Herbert Xu | 19 | -139/+124
This patch does the final flag day conversion of all completion functions which are now all contained in the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 | crypto: cryptd - Use request_complete helpers | Herbert Xu | 1 | -108/+126
Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 | crypto: rsa-pkcs1pad - Use akcipher_request_complete | Herbert Xu | 1 | -19/+15
Use the akcipher_request_complete helper instead of calling the completion function directly. In fact the previous code was buggy in that EINPROGRESS was never passed back to the original caller. Fixes: 7185f32fb45d ("crypto: rsa - RSA padding algorithm") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 | crypto: engine - Use crypto_request_complete | Herbert Xu | 1 | -3/+3
Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 | crypto: hash - Use crypto_request_complete | Herbert Xu | 1 | -105/+74
Use the crypto_request_complete helper instead of calling the completion function directly. This patch also removes the voodoo programming previously used for unaligned ahash operations and replaces it with a sub-request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 | crypto: cryptd - Use subreq for AEAD | Herbert Xu | 1 | -4/+16
AEAD reuses the existing request object for its child. This is error-prone and unnecessary. This patch adds a subrequest object just like we do for skcipher and hash. This patch also restores the original completion function as we do for skcipher/hash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 | KEYS: asymmetric: Fix ECDSA use via keyctl uapi | Denis Kenzior | 1 | -2/+22
When support for ECDSA keys was added, constraints for data & signature sizes were never updated. This makes it impossible to use such keys via keyctl API from userspace. Update constraint on max_data_size to 64 bytes in order to support SHA512-based signatures. Also update the signature length constraints per ECDSA signature encoding described in RFC 5480. Fixes: 2ceaf2ab3fcf ("x509: Add support for parsing x509 certs with ECDSA keys") Signed-off-by: Denis Kenzior <denkenz@gmail.com> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2023-02-13 | crypto: certs: fix FIPS selftest dependency | Arnd Bergmann | 2 | -1/+2
The selftest code is built into the x509_key_parser module, and depends on the pkcs7_message_parser module, which in turn has a dependency on the key parser, creating a dependency loop and a resulting link failure when the pkcs7 code is a loadable module: ld: crypto/asymmetric_keys/selftest.o: in function `fips_signature_selftest': crypto/asymmetric_keys/selftest.c:205: undefined reference to `pkcs7_parse_message' ld: crypto/asymmetric_keys/selftest.c:209: undefined reference to `pkcs7_supply_detached_data' ld: crypto/asymmetric_keys/selftest.c:211: undefined reference to `pkcs7_verify' ld: crypto/asymmetric_keys/selftest.c:215: undefined reference to `pkcs7_validate_trust' ld: crypto/asymmetric_keys/selftest.c:219: undefined reference to `pkcs7_free_message' Avoid this by only allowing the selftest to be enabled when either both parts are loadable modules, or both are built-in. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2023-02-10 | crypto: testmgr - add diff-splits of src/dst into default cipher config | Zhang Yiqun | 1 | -0/+8
This type of request often happens in AF_ALG cases, so add this vector to the default cipher config array. Signed-off-by: Zhang Yiqun <zhangyiqun@phytium.com.cn> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-03 | Revert "crypto: rsa-pkcs1pad - Replace GFP_ATOMIC with GFP_KERNEL in pkcs1pad_encrypt_sign_complete" | Herbert Xu | 1 | -1/+1
This reverts commit 4e92247a228c2d2c8a063842aa3dad2b0c9256f3. While the akcipher API as a whole is designed to be called only from thread context, its completion path is still called from softirq context as usual. Therefore we must not use GFP_KERNEL on that path. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
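For context, the constraint behind the revert: completion callbacks may run in softirq context, so allocations there must not sleep (a generic sketch, not the pkcs1pad code):

    static void example_complete(void *data, int err)
    {
        /* Possibly softirq context: GFP_KERNEL (which may sleep) is not
         * allowed here, so a non-sleeping allocation is required. */
        void *buf = kzalloc(64, GFP_ATOMIC);

        if (!buf)
            return;
        /* ... post-process the result ... */
        kfree(buf);
    }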
2023-01-27 | crypto: engine - Fix excess parameter doc warning | Herbert Xu | 1 | -1/+1
The engine parameter should not be marked for kernel doc as it triggers a warning. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-27 | crypto: xts - Handle EBUSY correctly | Herbert Xu | 1 | -4/+4
As it is xts only handles the special return value of EINPROGRESS, which means that in all other cases it will free data related to the request. However, as the caller of xts may specify MAY_BACKLOG, we also need to expect EBUSY and treat it in the same way. Otherwise backlogged requests will trigger a use-after-free. Fixes: 8118cd4cf772 ("crypto: xts - add support for ciphertext stealing") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
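The pattern being fixed (here and in the seqiv and essiv patches below), sketched; illustrative only, the real code also completes or resubmits the request:

    /* Before: only -EINPROGRESS kept the request's resources alive. */
    /* if (err == -EINPROGRESS) return err; */

    /* After: a backlogged request (-EBUSY with MAY_BACKLOG) must be kept
     * alive in the same way, or its data is freed while still in use. */
    if (err == -EINPROGRESS || err == -EBUSY)
        return err;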
2023-01-27 | crypto: cryptd - Remove unnecessary skcipher_request_zero | Herbert Xu | 1 | -2/+0
Previously the child skcipher request was stored on the stack and therefore needed to be zeroed. As it is now dynamically allocated we no longer need to do so. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-27 | crypto: testmgr - disallow certain DRBG hash functions in FIPS mode | Vladis Dronov | 1 | -4/+0
According to FIPS 140-3 IG, section D.R "Hash Functions Acceptable for Use in the SP 800-90A DRBGs", modules certified after May 16th, 2023 must not support the use of: SHA-224, SHA-384, SHA512-224, SHA512-256, SHA3-224, SHA3-384. Disallow HMAC and HASH DRBGs using SHA-384 in FIPS mode. Signed-off-by: Vladis Dronov <vdronov@redhat.com> Reviewed-by: Stephan Müller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-20 | crypto: seqiv - Handle EBUSY correctly | Herbert Xu | 1 | -1/+1
As it is seqiv only handles the special return value of EINPROGRESS, which means that in all other cases it will free data related to the request. However, as the caller of seqiv may specify MAY_BACKLOG, we also need to expect EBUSY and treat it in the same way. Otherwise backlogged requests will trigger a use-after-free. Fixes: 20089c96d6eb ("[CRYPTO] seqiv: Add Sequence Number IV Generator") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-20 | crypto: essiv - Handle EBUSY correctly | Herbert Xu | 1 | -1/+6
As it is essiv only handles the special return value of EINPROGRESS, which means that in all other cases it will free data related to the request. However, as the caller of essiv may specify MAY_BACKLOG, we also need to expect EBUSY and treat it in the same way. Otherwise backlogged requests will trigger a use-after-free. Fixes: 392e2e025cfd ("crypto: essiv - create wrapper template...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-20 | crypto: tcrypt - include larger key sizes in RFC4106 benchmark | Ard Biesheuvel | 2 | -5/+5
RFC4106 wraps AES in GCM mode, and can be used with larger key sizes than 128/160 bits, just like AES itself. So add these to the tcrypt recipe so they will be benchmarked as well. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-19 | wifi: cfg80211: Deduplicate certificate loading | Lukas Wunner | 1 | -0/+1
load_keys_from_buffer() in net/wireless/reg.c duplicates x509_load_certificate_list() in crypto/asymmetric_keys/x509_loader.c for no apparent reason. Deduplicate it. No functional change intended. Signed-off-by: Lukas Wunner <lukas@wunner.de> Acked-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/e7280be84acda02634bc7cb52c97656182b9c700.1673197326.git.lukas@wunner.de Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-01-13 | crypto: skcipher - Use scatterwalk (un)map interface for dst and src buffers | Ard Biesheuvel | 1 | -18/+4
The skcipher walk API implementation avoids scatterwalk_map() for mapping the source and destination buffers, and invokes kmap_atomic() directly if the buffer in question is not in low memory (which can only happen on 32-bit architectures). This avoids some overhead on 64-bit architectures, and most notably, permits the skcipher code to run with preemption enabled. Now that scatterwalk_map() has been updated to use kmap_local(), none of this is needed, so we can simply use scatterwalk_map/unmap instead. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-06 | crypto: x86/aria - do not use magic number offsets of aria_ctx | Taehee Yoo | 1 | -0/+4
The aria-avx assembly code accesses members of aria_ctx with magic-number offsets. If the shape of struct aria_ctx is changed carelessly, aria-avx will no longer work. So we need to ensure that members of aria_ctx are accessed with correct offset values, not magic numbers. This patch adds ARIA_CTX_enc_key, ARIA_CTX_dec_key, and ARIA_CTX_rounds to asm-offsets.c so that correct offset definitions are generated and the aria-avx assembly code can access members of aria_ctx safely. Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
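The asm-offsets mechanism works roughly as below; the OFFSET() helper comes from include/linux/kbuild.h, while the surrounding function name and placement are assumptions for the sketch:

    #include <linux/kbuild.h>
    #include <crypto/aria.h>

    void common(void)
    {
        /* Emits "#define ARIA_CTX_enc_key <byte offset>" etc. at build
         * time, so the assembly uses real offsets instead of magic numbers. */
        OFFSET(ARIA_CTX_enc_key, aria_ctx, enc_key);
        OFFSET(ARIA_CTX_dec_key, aria_ctx, dec_key);
        OFFSET(ARIA_CTX_rounds, aria_ctx, rounds);
    }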
2023-01-06 | crypto: testmgr - allow ecdsa-nist-p256 and -p384 in FIPS mode | Nicolai Stange | 1 | -0/+2
The kernel provides implementations of the NIST ECDSA signature verification primitives. For key sizes of 256 and 384 bits respectively they are approved and can be enabled in FIPS mode. Do so. Signed-off-by: Nicolai Stange <nstange@suse.de> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-06 | crypto: testmgr - disallow plain ghash in FIPS mode | Nicolai Stange | 1 | -1/+0
ghash may be used only as part of the gcm(aes) construction in FIPS mode. Since commit a11311e8eed9 ("crypto: api - allow algs only in specific constructions in FIPS mode") there's support for approved template instantiations using spawns which are themselves marked as non-approved. So simply mark plain ghash as non-approved in testmgr to block any attempts at direct instantiation in FIPS mode. Signed-off-by: Nicolai Stange <nstange@suse.de> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-06 | crypto: testmgr - disallow plain cbcmac(aes) in FIPS mode | Nicolai Stange | 1 | -1/+0
cbcmac(aes) may be used only as part of the ccm(aes) construction in FIPS mode. Since commit a11311e8eed9 ("crypto: api - allow algs only in specific constructions in FIPS mode") there's support for approved template instantiations using spawns which are themselves marked as non-approved. So simply mark plain cbcmac(aes) as non-approved in testmgr to block any attempts at direct instantiation in FIPS mode. Signed-off-by: Nicolai Stange <nstange@suse.de> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-30 | crypto: wp512 - disable kmsan checks in wp512_process_buffer() | Arnd Bergmann | 1 | -1/+1
The memory sanitizer causes excessive register spills in this function: crypto/wp512.c:782:13: error: stack frame size (2104) exceeds limit (2048) in 'wp512_process_buffer' [-Werror,-Wframe-larger-than] Assume that this one is safe, and mark it as needing no checks to get the stack usage back down to the normal level. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-30 | crypto: scatterwalk - use kmap_local() not kmap_atomic() | Ard Biesheuvel | 2 | -4/+4
kmap_atomic() is used to create short-lived mappings of pages that may not be accessible via the kernel direct map. This is only needed on 32-bit architectures that implement CONFIG_HIGHMEM, but it can be used on other, 64-bit architectures too, where the returned mapping is simply the kernel direct address of the page. However, kmap_atomic() does not support migration on CONFIG_HIGHMEM configurations, due to the use of per-CPU kmap slots, and so it disables preemption on all architectures, not just the 32-bit ones. This implies that all scatterwalk based crypto routines essentially execute with preemption disabled all the time, which is less than ideal. So let's switch scatterwalk_map/_unmap and the shash/ahash routines to kmap_local() instead, which serves a similar purpose, but without the resulting impact on preemption on architectures that have no need for CONFIG_HIGHMEM. Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "Elliott, Robert (Servers)" <elliott@hpe.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
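The kmap_local() usage pattern described above (a generic sketch; the scatterwalk code wraps this in its own helpers):

    #include <linux/highmem.h>

    static void xor_pages(struct page *dst_page, struct page *src_page, size_t len)
    {
        u8 *dst = kmap_local_page(dst_page);
        u8 *src = kmap_local_page(src_page);
        size_t i;

        /* Unlike kmap_atomic(), this does not disable preemption on
         * architectures without CONFIG_HIGHMEM. */
        for (i = 0; i < len; i++)
            dst[i] ^= src[i];

        kunmap_local(src);   /* unmap in reverse (LIFO) order */
        kunmap_local(dst);
    }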
2022-12-02 | crypto: api - Increase MAX_ALGAPI_ALIGNMASK to 127 | Herbert Xu | 1 | -2/+7
Previously we limited the maximum alignment mask to 63. This is mostly due to stack usage for shash. This patch introduces a separate limit for shash algorithms and increases the general limit to 127 which is the value that we need for DMA allocations on arm64. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 | crypto: Prepare to move crypto_tfm_ctx | Herbert Xu | 17 | -17/+19
The helper crypto_tfm_ctx is only used by the Crypto API algorithm code and should really be in algapi.h. However, for historical reasons many files relied on it being in crypto.h. This patch changes those files to use algapi.h instead in preparation for the move. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 | crypto: dh - Use helper to set reqsize | Herbert Xu | 1 | -1/+3
The value of reqsize must only be changed through the helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 | crypto: rsa-pkcs1pad - Use helper to set reqsize | Herbert Xu | 1 | -1/+4
The value of reqsize must only be changed through the helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
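The helper-based pattern these two patches switch to, sketched for the akcipher case (assuming the akcipher_set_reqsize() helper from the internal akcipher header; the context structure and variables are illustrative):

    /* Before: the transform's reqsize field was written directly. */

    /* After: go through the helper so the internal layout can change. */
    akcipher_set_reqsize(tfm, sizeof(struct pkcs1pad_request) +
                              crypto_akcipher_reqsize(child_tfm));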
2022-11-25 | use less confusing names for iov_iter direction initializers | Al Viro | 1 | -2/+2
READ/WRITE proved to be actively confusing - the meanings are "data destination, as used with read(2)" and "data source, as used with write(2)", but people keep interpreting those as "we read data from it" and "we write data to it", i.e. exactly the wrong way. Call them ITER_DEST and ITER_SOURCE - at least that is harder to misinterpret... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
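Usage of the renamed initializers, sketched with a kvec-backed iterator (generic example with assumed buf/len variables, not the specific crypto call sites):

    #include <linux/uio.h>

    struct iov_iter iter;
    struct kvec kv = { .iov_base = buf, .iov_len = len };

    /* The buffer is the destination of the transfer (data is read into
     * it), which the old READ constant kept obscuring. */
    iov_iter_kvec(&iter, ITER_DEST, &kv, 1, len);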
2022-11-25 | Revert "crypto: shash - avoid comparing pointers to exported functions under CFI" | Eric Biggers | 1 | -15/+3
This reverts commit d676b48fa74f5fb9d62de1673a63d390fc68bb47 because CFI no longer breaks cross-module function address equality, so crypto_shash_alg_has_setkey() can now be an inline function like before. This commit should not be backported to kernels that don't have the new CFI implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>