path: root/crypto/sig.c
Date | Commit message | Author | Files | Lines
2024-03-13 | Revert "crypto: remove CONFIG_CRYPTO_STATS" | Herbert Xu | 1 | -0/+13
This reverts commit 687f35bd1fc7a325fe6f817af864426e66c288b5. While removing CONFIG_CRYPTO_STATS is a worthy goal, this also removed unrelated infrastructure such as crypto_comp_alg_common. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-03-08 | crypto: scomp - remove memcpy if sg_nents is 1 and pages are lowmem | Barry Song | 1 | -7/+29
When sg_nents is 1, which is always true for the current kernel since the only user, zswap, hits this case, we have a chance to remove the memcpy and thus improve performance. Though sg_nents is 1, the buffer might cross two pages. If those pages are highmem, we have no cheap way to map them to a contiguous virtual address, because kmap doesn't support more than one page (kmap of even a single highmem page can still be expensive for the TLB) and vmap is expensive. So we also test and ensure the page is not highmem in order to safely use page_to_virt before removing the memcpy. The good news is that in the vast majority of cases we are in lowmem, and we are always in lowmem on modern and popular hardware. Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Barry Song <v-songbaohua@oppo.com> Tested-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
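A minimal sketch of the idea described above (the helper name and scratch handling are illustrative, not the exact hunk from this patch): when the source scatterlist has a single entry whose page sits in lowmem, the linear address can be used directly and the copy into the per-CPU scratch buffer skipped.

#include <linux/highmem.h>
#include <linux/scatterlist.h>
#include <crypto/scatterwalk.h>

static void *scomp_src_address(struct scatterlist *src, void *scratch_src,
                               unsigned int slen)
{
        /* Fast path: one SG entry backed by a lowmem page, so
         * page_to_virt() yields a usable contiguous kernel address and
         * no memcpy is needed. A single SG entry is physically
         * contiguous, so spanning two lowmem pages is still fine. */
        if (sg_nents(src) == 1 && !PageHighMem(sg_page(src)))
                return page_to_virt(sg_page(src)) + src->offset;

        /* Slow path: copy into the scratch buffer as before. */
        scatterwalk_map_and_copy(scratch_src, src, 0, slen, 0);
        return scratch_src;
}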
2024-03-08 | crypto: tcrypt - add ffdhe2048(dh) test | Vladis Dronov | 1 | -0/+3
Commit 7a8cb30a6685 ("crypto: dh - implement ffdheXYZ(dh) templates") implemented the said templates. Add ffdhe2048(dh) test as it is the fastest one. This is a requirement for the FIPS certification. Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-03-01 | crypto: remove CONFIG_CRYPTO_STATS | Eric Biggers | 18 | -760/+47
Remove support for the "Crypto usage statistics" feature (CONFIG_CRYPTO_STATS). This feature does not appear to have ever been used, and it is harmful because it significantly reduces performance and is a large maintenance burden. Covering each of these points in detail:

1. The feature is not being used.

Since these generic crypto statistics are only readable using netlink, it's fairly straightforward to look for programs that use them. I'm unable to find any evidence that any such programs exist. For example, Debian Code Search returns no hits except the kernel header, the kernel code itself, and translations of the kernel header: https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1

The patch series that added this feature in 2018 (https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/) said "The goal is to have an ifconfig for crypto device." This doesn't appear to have happened. It's not clear that there is real demand for crypto statistics. Just because the kernel provides other types of statistics, such as I/O and networking statistics, and some people find those useful, does not mean that crypto statistics are useful too. Further evidence that programs are not using CONFIG_CRYPTO_STATS is that it was able to be disabled in RHEL and Fedora as a bug fix (https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947). Even further evidence comes from the fact that there are and have been bugs in how the stats work, but they were never reported. For example, before Linux v6.7 hash stats were double-counted in most cases. There has also never been any documentation for this feature, so it might be hard to use even if someone wanted to.

2. CONFIG_CRYPTO_STATS significantly reduces performance.

Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of the crypto API, even if no program ever retrieves the statistics. This primarily affects systems with a large number of CPUs. For example, https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported that Lustre client encryption performance improved from 21.7GB/s to 48.2GB/s by disabling CONFIG_CRYPTO_STATS. It can be argued that this means CONFIG_CRYPTO_STATS should be optimized with per-cpu counters similar to many of the networking counters. But no one has done this in 5+ years. This is consistent with the fact that the feature appears to be unused, so there seems to be little interest in improving it as opposed to just disabling it. It can be argued that because CONFIG_CRYPTO_STATS is off by default, performance doesn't matter. But Linux distros tend to err on the side of enabling options. The option is enabled in Ubuntu and Arch Linux, and until recently was enabled in RHEL and Fedora (see above). So even just having the option available is harmful to users.

3. CONFIG_CRYPTO_STATS is a large maintenance burden.

There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS, spread among 32 files. It significantly complicates much of the implementation of the crypto API. After the initial submission, many fixes and refactorings have consumed the effort of multiple people to keep this feature "working". We should be spending this effort elsewhere.

Cc: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-03-01 | crypto: dh - Make public key test FIPS-only | Herbert Xu | 1 | -28/+29
The function dh_is_pubkey_valid was added for FIPS but was only partially conditional on fips_enabled. In particular, the first test in the function relies on the last test to work properly, but the last test is only run in FIPS mode. Fix this inconsistency by making the whole function conditional on fips_enabled. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
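A hedged sketch of the resulting shape (heavily simplified; the real function's context structure and the full SP800-56A checks are elided): the whole validation becomes a no-op outside FIPS mode, so the partial range test can no longer run without the final subgroup test it depends on.

#include <linux/errno.h>
#include <linux/fips.h>
#include <linux/mpi.h>

static int dh_is_pubkey_valid_sketch(MPI p, MPI q, MPI y)
{
        /* The entire validation is now FIPS-only. */
        if (!fips_enabled)
                return 0;

        /* First test: reject y <= 1. On its own this is incomplete;
         * it relies on the final y^q mod p == 1 check, which is why
         * running only part of this outside FIPS mode was inconsistent. */
        if (mpi_cmp_ui(y, 1) <= 0)
                return -EINVAL;

        /* ... upper-bound check against p - 1 and, when q is known,
         *     the full y^q mod p == 1 subgroup test (elided) ... */
        return 0;
}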
2024-02-24 | crypto: jitter - fix CRYPTO_JITTERENTROPY help text | Randy Dunlap | 1 | -2/+3
Correct various small problems in the help text: a. change 2 spaces to ", " b. finish an incomplete sentence c. change non-working URL to working URL Fixes: 05d2a9cd8da6 ("crypto: Kconfig - simplify compression/RNG entries") Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218458 Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Bagas Sanjaya <bagasdotme@gmail.com> Cc: Robert Elliott <elliott@hpe.com> Cc: Christoph Biedl <bugzilla.kernel.bpeb@manchmal.in-ulm.de> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Acked-by: Bagas Sanjaya <bagasdotme@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-02-09 | crypto: rsa - restrict plaintext/ciphertext values more | Joachim Vandersmissen | 1 | -4/+32
SP 800-56Br2, Section 7.1.1 [1] specifies that: 1. If m does not satisfy 1 < m < (n – 1), output an indication that m is out of range, and exit without further processing. Similarly, Section 7.1.2 of the same standard specifies that: 1. If the ciphertext c does not satisfy 1 < c < (n – 1), output an indication that the ciphertext is out of range, and exit without further processing. This range is slightly more conservative than RFC3447, as it also excludes RSA fixed points 0, 1, and n - 1. [1] https://doi.org/10.6028/NIST.SP.800-56Br2 Signed-off-by: Joachim Vandersmissen <git@jvdsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-02-02 | crypto: ahash - unexport crypto_hash_alg_has_setkey() | Eric Biggers | 1 | -11/+10
Since crypto_hash_alg_has_setkey() is only called from ahash.c itself, make it a static function. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-01-26 | crypto: testmgr - remove unused xts4096 and xts512 algorithms from testmgr.c | Joachim Vandersmissen | 1 | -8/+0
Commit a93492cae30a ("crypto: ccree - remove data unit size support") removed support for the xts512 and xts4096 algorithms, but left them defined in testmgr.c. This patch removes those definitions. Signed-off-by: Joachim Vandersmissen <git@jvdsn.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-01-26 | crypto: asymmetric_keys - remove redundant pointer secs | Colin Ian King | 1 | -2/+2
The pointer secs is being assigned a value, but secs is never read afterwards. The pointer is redundant and can be removed. This cleans up the clang scan-build warning: warning: Although the value stored to 'secs' is used in the enclosing expression, the value is never actually read from 'secs' [deadcode.DeadStores] Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-01-26 | crypto: pcbc - remove redundant assignment to nbytes | Colin Ian King | 1 | -2/+2
The assignment to nbytes is redundant: the while loop only needs to refer to the value in walk.nbytes, and nbytes is re-assigned inside the loop on both paths of the following if-statement. Remove the redundant assignment. This cleans up the clang scan-build warning: warning: Although the value stored to 'nbytes' is used in the enclosing expression, the value is never actually read from 'nbytes' [deadcode.DeadStores] Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-29 | crypto: scomp - fix req->dst buffer overflow | Chengming Zhou | 1 | -0/+6
The req->dst buffer size should be checked before copying from scomp_scratch->dst, to avoid overflowing the req->dst buffer. Fixes: a2c1712e606f ("crypto: acomp - add driver-side scomp interface") Reported-by: syzbot+3eff5e51bf1db122a16e@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/0000000000000b05cd060d6b5511@google.com/ Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Barry Song <v-songbaohua@oppo.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
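A hedged sketch of the kind of guard this adds (the helper and variable names are illustrative; the real scomp_scratch bookkeeping is elided): verify that the produced output fits in the caller's destination before copying it out of the scratch buffer.

#include <linux/errno.h>
#include <linux/scatterlist.h>
#include <crypto/scatterwalk.h>

/* 'produced' is the number of bytes the (de)compressor wrote into the
 * scratch buffer, 'dst_avail' the capacity the caller provided in
 * req->dst. */
static int scomp_copy_out(struct scatterlist *dst, unsigned int dst_avail,
                          void *scratch_dst, unsigned int produced)
{
        if (produced > dst_avail)
                return -ENOSPC;

        /* Only now copy the result out of the scratch buffer. */
        scatterwalk_map_and_copy(scratch_dst, dst, 0, produced, 1);
        return 0;
}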
2023-12-22 | crypto: skcipher - Pass statesize for simple lskcipher instances | Herbert Xu | 1 | -0/+1
When ecb is used to wrap an lskcipher, the statesize isn't set correctly. Fix this by making the simple instance creator set the statesize. Reported-by: syzbot+8ffb0839a24e9c6bfa76@syzkaller.appspotmail.com Reported-by: Edward Adam Davis <eadavis@qq.com> Fixes: 0707e88b0fc9 ("crypto: skcipher - Make use of internal state") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15 | crypto: api - Disallow identical driver names | Herbert Xu | 1 | -0/+1
Disallow registration of two algorithms with identical driver names. Cc: <stable@vger.kernel.org> Reported-by: Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
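A hedged sketch of the check (simplified: locking and the existing larval handling in the registration path are omitted; the function name here is hypothetical): walk the algorithm list and reject a second registration whose cra_driver_name is already taken.

#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/string.h>

/* Illustrative only; assumes the caller already holds the lock that
 * protects the algorithm list, as the real registration path does. */
static int crypto_check_driver_name(struct crypto_alg *alg,
                                    struct list_head *alg_list)
{
        struct crypto_alg *q;

        list_for_each_entry(q, alg_list, cra_list) {
                if (!strcmp(q->cra_driver_name, alg->cra_driver_name))
                        return -EEXIST;
        }
        return 0;
}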
2023-12-15 | crypto: iaa - Add support for deflate-iaa compression algorithm | Tom Zanussi | 1 | -0/+10
This patch registers the deflate-iaa deflate compression algorithm and hooks it up to the IAA hardware using the 'fixed' compression mode introduced in the previous patch.

Because the IAA hardware has a 4k history-window limitation, only buffers <= 4k, or buffers that have been compressed using a <= 4k history window, are technically compliant with the deflate spec, which allows for a window of up to 32k. Because of this limitation, the IAA fixed-mode deflate algorithm is given its own algorithm name, 'deflate-iaa'.

With this change, the deflate-iaa crypto algorithm is registered and operational, and compression and decompression operations are fully enabled following the successful binding of the first IAA workqueue to the iaa_crypto sub-driver. When there are no IAA workqueues bound to the driver, the IAA crypto algorithm can be unregistered by removing the module.

A new iaa_crypto 'verify_compress' driver attribute is also added, allowing the user to toggle compression verification. If set, each compress will be internally decompressed and the contents verified, returning error codes if unsuccessful. This can be toggled with 0/1:

echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress

The default setting is '1' - verify all compresses. The verify_compress value at the time the algorithm is registered is captured in the algorithm's crypto_ctx and used for all compresses when using the algorithm.

[ Based on work originally by George Powley, Jing Lin and Kyung Min Park ] Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: algif_skcipher - Fix stream cipher chaining | Herbert Xu | 1 | -3/+69
Unlike algif_aead which is always issued in one go (thus limiting the maximum size of the request), algif_skcipher has always allowed unlimited input data by cutting them up as necessary and feeding the fragments to the underlying algorithm one at a time. However, because of deficiencies in the API, this has been broken for most stream ciphers such as arc4 or chacha. This is because they have an internal state in addition to the IV that must be preserved in order to continue processing. Fix this by using the new skcipher state API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: arc4 - Add internal state | Herbert Xu | 1 | -1/+10
The arc4 algorithm has always had internal state. It's been buggy from day one in that the state has been stored in the shared tfm object. That means two users sharing the same tfm will end up affecting each other's output, or worse, they may end up with the same output. Fix this by declaring an internal state and storing the state there instead of within the tfm context. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: skcipher - Make use of internal state | Herbert Xu | 2 | -8/+106
This patch adds code to the skcipher/lskcipher API to make use of the internal state if present. In particular, the skcipher lskcipher wrapper will allocate a buffer for the IV/state and feed that to the underlying lskcipher algorithm. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: skcipher - Add internal state support | Herbert Xu | 4 | -13/+19
Unlike chaining modes such as CBC, stream ciphers other than CTR usually hold an internal state that must be preserved if the operation is to be done piecemeal. This has not been represented in the API, resulting in the inability to split up stream cipher operations. This patch adds the basic representation of an internal state to skcipher and lskcipher. In the interest of backwards compatibility, the default has been set such that existing users are assumed to be operating in one go as opposed to piecemeal.

With the new API, each lskcipher/skcipher algorithm has a new attribute called statesize. For skcipher, this is the size of the buffer that can be exported or imported, similar to ahash. For lskcipher, instead of providing a buffer of ivsize, the user now has to provide a buffer of ivsize + statesize.

Each skcipher operation is assumed to be final as they are now, but this may be overridden with a request flag. When the override occurs, the user may then export the partial state and reimport it later. For lskcipher operations this is reversed: all operations are not final and the state will be exported unless the FINAL bit is set. However, the CONT bit still has to be set for the state to be used.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
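As a hedged illustration of what the new attribute means for an lskcipher caller (the accessor names are assumed from the skcipher API; if crypto_lskcipher_statesize() is named differently in a given tree, read it as the algorithm's statesize attribute; FINAL/CONT flag handling is elided): the IV buffer handed to the algorithm must now be large enough for the IV plus the exported state.

#include <linux/slab.h>
#include <crypto/skcipher.h>

/* Illustrative allocation only: with statesize, an lskcipher user
 * provides ivsize + statesize bytes instead of just ivsize. */
static u8 *lskcipher_alloc_ivbuf(struct crypto_lskcipher *tfm)
{
        unsigned int len = crypto_lskcipher_ivsize(tfm) +
                           crypto_lskcipher_statesize(tfm);

        return kzalloc(len, GFP_KERNEL);
}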
2023-12-08 | crypto: cfb,ofb - Remove cfb and ofb | Herbert Xu | 4 | -385/+0
Remove the unused algorithms CFB/OFB. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: testmgr - Remove cfb and ofb | Herbert Xu | 2 | -1187/+0
Remove test vectors for CFB/OFB. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: tcrypt - Remove cfb and ofb | Herbert Xu | 1 | -76/+0
Remove tests for CFB/OFB. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08 | crypto: af_alg - Disallow multiple in-flight AIO requests | Herbert Xu | 1 | -1/+13
Having multiple in-flight AIO requests results in unpredictable output because they all share the same IV. Fix this by only allowing one request at a time. Fixes: a8e3a343aba2 ("crypto: af_alg - add async support to algif_aead") Fixes: 34c08cc85c38 ("crypto: algif - change algif_skcipher to be asynchronous") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-17 | crypto: drbg - Remove SHA1 from drbg | Dimitri John Ledkov | 2 | -37/+4
SP800-90C 3rd draft states that SHA-1 will be removed from all specifications, including drbg by end of 2030. Given kernels built today will be operating past that date, start complying with upcoming requirements. No functional change, as SHA-256 / SHA-512 based DRBG have always been the preferred ones. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-17 | crypto: drbg - ensure drbg hmac sha512 is used in FIPS selftests | Dimitri John Ledkov | 1 | -6/+6
Update the code comment, self test & healthcheck to use HMAC SHA512 instead of HMAC SHA256. These changes are in dead code or FIPS-enabled code paths only, and have no effect on usual kernel builds. On systems booting in FIPS mode this has the effect of switching the sanity selftest to be HMAC SHA512 based (which has been the default DRBG). This patch updates code from e866d9bcdd ("crypto: DRBG - switch to HMAC SHA512 DRBG as default DRBG"), but is not interesting to cherry-pick for stable updates, because it doesn't affect regular builds, nor has any tangible effect on FIPS certification. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-17 | crypto: drbg - update FIPS CTR self-checks to aes256 | Dimitri John Ledkov | 1 | -3/+3
When drbg was originally introduced, the FIPS self-checks for all types but CTR used the most preferred parameters for each type of DRBG. Update the CTR self-check to use aes256. This patch updates code from d7759a4400 ("crypto: drbg - SP800-90A Deterministic Random Bit Generator"), but is not interesting to cherry-pick for stable updates, because it doesn't affect regular builds, nor has any tangible effect on FIPS certification. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-17 | crypto: drbg - ensure most preferred type is FIPS health checked | Dimitri John Ledkov | 1 | -2/+4
drbg supports multiple types of drbg, and multiple parameters of each. The health check sanity test only checks one drbg of a single type. One can enable all three types of drbg, and instead of checking the most preferred algorithm (last one wins), it currently checks the first one instead. Update the ifdefs to ensure that the healthcheck prefers HMAC, over HASH, over CTR, last one wins, like all other code and functions. This patch updates code from d7759a4400 ("crypto: drbg - SP800-90A Deterministic Random Bit Generator"), but is not interesting to cherry-pick for stable updates, because it doesn't affect regular builds, nor has any tangible effect on FIPS certification. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-17 | crypto: rsa - add a check for allocation failure | Dan Carpenter | 1 | -0/+2
Static checkers insist that the mpi_alloc() allocation can fail so add a check to prevent a NULL dereference. Small allocations like this can't actually fail in current kernels, but adding a check is very simple and makes the static checkers happy. Fixes: 7096d7d6238c ("crypto: rsa - allow only odd e and restrict value in FIPS mode") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
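A hedged sketch of the pattern (context simplified; the variable name follows the FIPS exponent bound check this commit touches, but is otherwise illustrative): treat a NULL return from mpi_alloc() as -ENOMEM before using the result.

#include <linux/errno.h>
#include <linux/mpi.h>

static int rsa_alloc_bound_check(void)
{
        MPI e_max;

        e_max = mpi_alloc(0);
        if (!e_max)
                return -ENOMEM;        /* the added check */

        /* ... use e_max for the comparison, then free it ... */
        mpi_free(e_max);
        return 0;
}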
2023-11-17 | crypto: shash - don't exclude async statuses from error stats | Eric Biggers | 1 | -5/+1
EINPROGRESS and EBUSY have special meaning for async operations. However, shash is always synchronous, so these statuses have no special meaning for shash and don't need to be excluded when handling errors. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-07 | crypto: ahash - Set using_shash for cloned ahash wrapper over shash | Dmitry Safonov | 1 | -0/+1
The cloned child of an ahash that uses shash under the hood should use shash helpers (like crypto_shash_setkey()). The following panic may be observed on TCP-AO selftests:

> ==================================================================
> BUG: KASAN: wild-memory-access in crypto_mod_get+0x1b/0x60
> Write of size 4 at addr 5d5be0ff5c415e14 by task connect_ipv4/1397
>
> CPU: 0 PID: 1397 Comm: connect_ipv4 Tainted: G W 6.6.0+ #47
> Call Trace:
> <TASK>
> dump_stack_lvl+0x46/0x70
> kasan_report+0xc3/0xf0
> kasan_check_range+0xec/0x190
> crypto_mod_get+0x1b/0x60
> crypto_spawn_alg+0x53/0x140
> crypto_spawn_tfm2+0x13/0x60
> hmac_init_tfm+0x25/0x60
> crypto_ahash_setkey+0x8b/0x100
> tcp_ao_add_cmd+0xe7a/0x1120
> do_tcp_setsockopt+0x5ed/0x12a0
> do_sock_setsockopt+0x82/0x100
> __sys_setsockopt+0xe9/0x160
> __x64_sys_setsockopt+0x60/0x70
> do_syscall_64+0x3c/0xe0
> entry_SYSCALL_64_after_hwframe+0x46/0x4e
> ==================================================================
> general protection fault, probably for non-canonical address 0x5d5be0ff5c415e14: 0000 [#1] PREEMPT SMP KASAN
> CPU: 0 PID: 1397 Comm: connect_ipv4 Tainted: G B W 6.6.0+ #47
> Call Trace:
> <TASK>
> ? die_addr+0x3c/0xa0
> ? exc_general_protection+0x144/0x210
> ? asm_exc_general_protection+0x22/0x30
> ? add_taint+0x26/0x90
> ? crypto_mod_get+0x20/0x60
> ? crypto_mod_get+0x1b/0x60
> ? ahash_def_finup_done1+0x58/0x80
> crypto_spawn_alg+0x53/0x140
> crypto_spawn_tfm2+0x13/0x60
> hmac_init_tfm+0x25/0x60
> crypto_ahash_setkey+0x8b/0x100
> tcp_ao_add_cmd+0xe7a/0x1120
> do_tcp_setsockopt+0x5ed/0x12a0
> do_sock_setsockopt+0x82/0x100
> __sys_setsockopt+0xe9/0x160
> __x64_sys_setsockopt+0x60/0x70
> do_syscall_64+0x3c/0xe0
> entry_SYSCALL_64_after_hwframe+0x46/0x4e
> </TASK>
> RIP: 0010:crypto_mod_get+0x20/0x60

Make sure that the child/clone has using_shash set when the parent is an shash user.

Fixes: 890caf03421e ("crypto: ahash - optimize performance when wrapping shash")
Cc: David Ahern <dsahern@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Francesco Ruggeri <fruggeri05@gmail.com>
To: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Salam Noureddine <noureddine@arista.com>
Cc: netdev@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Dmitry Safonov <dima@arista.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
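A hedged sketch of the shape of the fix above (heavily simplified; the real change lives inside crypto_clone_ahash(), which also clones the wrapped shash and handles the native-ahash path): the clone must inherit the parent's using_shash flag before any shash-specific helpers run on it.

#include <crypto/hash.h>

/* Illustrative helper name; shows only the flag propagation. */
static void ahash_clone_copy_shash_flag(struct crypto_ahash *nhash,
                                        const struct crypto_ahash *hash)
{
        /* Without this, a clone of an shash-backed ahash takes the
         * native-ahash paths (e.g. in setkey) and dereferences a spawn
         * that was never set up, as in the KASAN splat above. */
        nhash->using_shash = hash->using_shash;
}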
2023-11-07 | crypto: jitterentropy - Hide esoteric Kconfig options under FIPS and EXPERT | Herbert Xu | 1 | -3/+25
As JITTERENTROPY is selected by default if you enable the CRYPTO API, any Kconfig options added there will show up for every single user. Hide the esoteric options under EXPERT as well as FIPS so that only distro makers will see them. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-01 | crypto: adiantum - flush destination page before unmapping | Eric Biggers | 1 | -1/+3
Upon additional review, the new fast path in adiantum_finish() is missing the call to flush_dcache_page() that scatterwalk_map_and_copy() was doing. It's apparently debatable whether flush_dcache_page() is actually needed, as per the discussion at https://lore.kernel.org/lkml/YYP1lAq46NWzhOf0@casper.infradead.org/T/#u. However, it appears that currently all the helper functions that write to a page, such as scatterwalk_map_and_copy(), memcpy_to_page(), and memzero_page(), do the dcache flush. So do it to be consistent. Fixes: ba95588e2005 ("crypto: adiantum - add fast path for single-page messages") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
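The helpers named above follow a common pattern; a hedged, generic illustration of it (not the adiantum-specific hunk; the function name is hypothetical): after writing to a page through a temporary kernel mapping, flush the dcache before the page is handed back.

#include <linux/highmem.h>
#include <linux/string.h>

/* Mirrors what memcpy_to_page()-style helpers do. */
static void copy_to_page_and_flush(struct page *page, unsigned int offset,
                                   const void *src, size_t len)
{
        void *dst = kmap_local_page(page);

        memcpy(dst + offset, src, len);
        flush_dcache_page(page);  /* keep other mappings coherent on
                                   * architectures with aliasing D-caches */
        kunmap_local(dst);
}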
2023-11-01 | crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place | Eric Biggers | 1 | -5/+5
alg_test_descs[] needs to be in sorted order, since it is used for binary search. This fixes the following boot-time warning: testmgr: alg_test_descs entries in wrong order: 'pkcs1pad(rsa,sha512)' before 'pkcs1pad(rsa,sha3-256)' Fixes: 6347eda6b6d6 ("crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support") Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
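A hedged sketch of the kind of order check that catches this (testmgr performs a boot-time self-check along these lines, which produced the warning quoted above; the names here are illustrative): adjacent entries must compare in strictly increasing order for the binary search over alg_test_descs[] to be valid.

#include <linux/string.h>
#include <linux/types.h>

struct alg_test_desc_sketch {
        const char *alg;
        /* ... test function, vectors, fips flags ... */
};

/* Returns true if the table is sorted (and duplicate-free) by name. */
static bool alg_test_descs_sorted(const struct alg_test_desc_sketch *descs,
                                  size_t n)
{
        size_t i;

        for (i = 1; i < n; i++)
                if (strcmp(descs[i - 1].alg, descs[i].alg) >= 0)
                        return false;
        return true;
}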
2023-10-31 | certs: Only allow certs signed by keys on the builtin keyring | Mimi Zohar | 1 | -0/+4
Originally the secondary trusted keyring provided a keyring to which extra keys may be added, provided those keys were not blacklisted and were vouched for by a key built into the kernel or already in the secondary trusted keyring. On systems with the machine keyring configured, additional keys may also be vouched for by a key on the machine keyring. Prevent loading additional certificates directly onto the secondary keyring, vouched for by keys on the machine keyring, yet allow these certificates to be loaded onto other trusted keyrings. Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
2023-10-27 | crypto: asymmetric_keys - allow FIPS 202 SHA-3 signatures | Dimitri John Ledkov | 4 | -1/+49
Add FIPS 202 SHA-3 hash signature support in x509 certificates, pkcs7 signatures, and authenticode signatures. Supports hashes of size 256 and up, as 224 is too weak for any practical purposes. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support | Dimitri John Ledkov | 2 | -1/+36
Add support in rsa-pkcs1pad for FIPS 202 SHA-3 hashes, sizes 256 and up, as 224 is too weak for any practical purposes. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: FIPS 202 SHA-3 register in hash info for IMA | Dimitri John Ledkov | 1 | -0/+6
Register FIPS 202 SHA-3 hashes in hash info for IMA and other users. Sizes 256 and up, as 224 is too weak for any practical purposes. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ahash - optimize performance when wrapping shash | Eric Biggers | 3 | -141/+162
The "ahash" API provides access to both CPU-based and hardware offload-based implementations of hash algorithms. Typically the former are implemented as "shash" algorithms under the hood, while the latter are implemented as "ahash" algorithms; the "ahash" API provides access to both. Various kernel subsystems use the ahash API because they want to support hashing hardware offload without using a separate API for it. Yet, the common case is that a crypto accelerator is not actually being used, and ahash is just wrapping a CPU-based shash algorithm. This patch optimizes the ahash API for that common case by eliminating the extra indirect call for each ahash operation on top of shash. It also fixes the double-counting of crypto stats in this scenario (though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested in performance anyway...), and it eliminates redundant checking of CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ahash - check for shash type instead of not ahash type | Eric Biggers | 1 | -5/+3
Since the previous patch made crypto_shash_type visible to ahash.c, change checks for '->cra_type != &crypto_ahash_type' to '->cra_type == &crypto_shash_type'. This makes more sense and avoids having to forward-declare crypto_ahash_type. The result is still the same, since the type is either shash or ahash here. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: hash - move "ahash wrapping shash" functions to ahash.c | Eric Biggers | 3 | -191/+188
The functions that are involved in implementing the ahash API on top of an shash algorithm belong better in ahash.c, not in shash.c where they currently are. Move them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ahash - improve file comment | Eric Biggers | 1 | -2/+6
Improve the file comment for crypto/ahash.c. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ahash - remove struct ahash_request_priv | Eric Biggers | 1 | -8/+0
struct ahash_request_priv is unused, so remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: gcm - stop using alignmask of ahash | Eric Biggers | 1 | -2/+1
Now that the alignmask for ahash and shash algorithms is always 0, simplify crypto_gcm_create_common() accordingly. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: chacha20poly1305 - stop using alignmask of ahash | Eric Biggers | 1 | -2/+1
Now that the alignmask for ahash and shash algorithms is always 0, simplify chachapoly_create() accordingly. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ccm - stop using alignmask of ahash | Eric Biggers | 1 | -2/+1
Now that the alignmask for ahash and shash algorithms is always 0, simplify crypto_ccm_create_common() accordingly. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: testmgr - stop checking crypto_ahash_alignmask | Eric Biggers | 1 | -6/+3
Now that the alignmask for ahash and shash algorithms is always 0, crypto_ahash_alignmask() always returns 0 and will be removed. In preparation for this, stop checking crypto_ahash_alignmask() in testmgr. As a result of this change, test_sg_division::offset_relative_to_alignmask and testvec_config::key_offset_relative_to_alignmask no longer have any effect on ahash (or shash) algorithms. Therefore, also stop setting these flags in default_hash_testvec_configs[]. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: authencesn - stop using alignmask of ahash | Eric Biggers | 1 | -14/+6
Now that the alignmask for ahash and shash algorithms is always 0, simplify the code in authencesn accordingly. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: authenc - stop using alignmask of ahash | Eric Biggers | 1 | -10/+2
Now that the alignmask for ahash and shash algorithms is always 0, simplify the code in authenc accordingly. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ahash - remove support for nonzero alignmask | Eric Biggers | 2 | -113/+12
Currently, the ahash API checks the alignment of all key and result buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers. This is virtually useless, however. First, since it does not apply to the message, its effect is much more limited than e.g. is the case for the alignmask for "skcipher". Second, the key and result buffers are given as virtual addresses and cannot (in general) be DMA'ed into, so drivers end up having to copy to/from them in software anyway. As a result it's easy to use memcpy() or the unaligned access helpers. The crypto_hash_walk_*() helper functions do use the alignmask to align the message. But with one exception those are only used for shash algorithms being exposed via the ahash API, not for native ahashes, and aligning the message is not required in this case, especially now that alignmask support has been removed from shash. The exception is the n2_core driver, which doesn't set an alignmask. In any case, no ahash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from ahash. The benefit is that all the code to handle "misaligned" buffers in the ahash API goes away, reducing the overhead of the ahash API. This follows the same change that was made to shash. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | treewide: Add SPDX identifier to IETF ASN.1 modules | Lukas Wunner | 6 | -0/+39
Per section 4.c. of the IETF Trust Legal Provisions, "Code Components" in IETF Documents are licensed on the terms of the BSD-3-Clause license: https://trustee.ietf.org/documents/trust-legal-provisions/tlp-5/ The term "Code Components" specifically includes ASN.1 modules: https://trustee.ietf.org/documents/trust-legal-provisions/code-components-list-3/ Add an SPDX identifier as well as a copyright notice pursuant to section 6.d. of the Trust Legal Provisions to all ASN.1 modules in the tree which are derived from IETF Documents. Section 4.d. of the Trust Legal Provisions requests that each Code Component identify the RFC from which it is taken, so link that RFC in every ASN.1 module. Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>