* crypto: api - Make crypto_alg_lookup static (Herbert Xu, 2018-03-31; 2 files, -3/+2)

    The function crypto_alg_lookup is only used within the crypto API and
    should not be exported to modules. This patch marks it as a static
    function.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Remove unused crypto_type lookup function (Herbert Xu, 2018-03-31; 1 file, -7/+1)

    The lookup function in crypto_type was only used for the implicit IV
    generators, which have been completely removed from the crypto API.
    This patch removes the lookup function as it is now useless.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - add a new test case for CRC-T10DIF (Ard Biesheuvel, 2018-03-16; 1 file, -0/+259)

    In order to be able to test yield support under preempt, add a test
    vector for CRC-T10DIF that is long enough to take multiple iterations
    (and thus possible preemption between them) of the primary loop of the
    accelerated x86 and arm64 implementations.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ecc - Remove stack VLA usage (Kees Cook, 2018-03-16; 1 file, -6/+17)

    On the quest to remove all VLAs from the kernel[1], this switches to a
    pair of kmalloc regions instead of using the stack. This also moves
    the get_random_bytes() after all allocations (and drops the needless
    "nbytes" variable).

    [1] https://lkml.org/lkml/2018/3/7/621

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
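The shape of the fix, as a hedged sketch (do_op() and the size computation
are illustrative, not the actual ecc.c code): heap buffers replace the
VLAs, and get_random_bytes() moves after the last allocation so no
randomness is drawn on a path that can still fail.

    /* Sketch of the VLA-removal pattern; not the literal ecc.c diff. */
    #include <linux/types.h>
    #include <linux/slab.h>
    #include <linux/random.h>

    static int do_op(unsigned int ndigits)
    {
    	size_t nbytes = ndigits * 8;	/* illustrative size computation */
    	u8 *a, *b;
    	int ret = -ENOMEM;

    	a = kmalloc(nbytes, GFP_KERNEL);	/* was: u8 a[nbytes]; */
    	b = kmalloc(nbytes, GFP_KERNEL);	/* was: u8 b[nbytes]; */
    	if (!a || !b)
    		goto out;

    	get_random_bytes(a, nbytes);	/* moved after all allocations */
    	ret = 0;
    out:
    	kfree(a);	/* kfree(NULL) is a no-op, so partial failure is fine */
    	kfree(b);
    	return ret;
    }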
* crypto: testmgr - introduce SM4 tests (Gilad Ben-Yossef, 2018-03-16; 3 files, -0/+143)

    Add testmgr tests for the newly introduced SM4 ECB symmetric cipher.

    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: sm4 - introduce SM4 symmetric cipher algorithm (Gilad Ben-Yossef, 2018-03-16; 3 files, -0/+270)

    Introduce the SM4 cipher algorithm (OSCCA GB/T 32907-2016).

    SM4 (GB/T 32907-2016) is a cryptographic standard issued by the
    Organization of State Commercial Administration of China (OSCCA) as an
    authorized cryptographic algorithm for use within China.

    SMS4, as the cipher was originally named, was created for use in
    protecting wireless networks, and is mandated in the Chinese National
    Standard for Wireless LAN WAPI (Wired Authentication and Privacy
    Infrastructure) (GB.15629.11-2003).

    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
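For context, a minimal hedged sketch of how a kernel user would reach the
new cipher through the crypto API, assuming the generic driver registers
the name "sm4" (so the ECB template yields "ecb(sm4)", matching the
testmgr entry above); try_sm4() is a hypothetical helper:

    #include <crypto/skcipher.h>
    #include <linux/err.h>

    static int try_sm4(const u8 *key)	/* hypothetical helper */
    {
    	struct crypto_skcipher *tfm;
    	int err;

    	tfm = crypto_alloc_skcipher("ecb(sm4)", 0, 0);
    	if (IS_ERR(tfm))
    		return PTR_ERR(tfm);

    	err = crypto_skcipher_setkey(tfm, key, 16);	/* SM4: 128-bit key */
    	crypto_free_skcipher(tfm);
    	return err;
    }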
* crypto: ecdh - fix to allow multi segment scatterlists (James Bottomley, 2018-03-09; 1 file, -6/+17)

    Apparently the ecdh use case was in Bluetooth, which always has
    single-element scatterlists, so the ecdh module was hard-coded to
    expect them. Now that we're using this in TPM, we need multi-element
    scatterlists, so remove this limitation.

    Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
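A hedged sketch of the general linearization pattern such a fix relies on
(sg_to_buf() is a hypothetical helper, not the actual ecdh.c change):
count the entries covering the needed length and copy them out, instead
of assuming a single segment.

    #include <linux/types.h>
    #include <linux/scatterlist.h>

    /* Flatten a possibly multi-segment scatterlist into 'buf'. */
    static int sg_to_buf(struct scatterlist *sg, unsigned int len, u8 *buf)
    {
    	int nents = sg_nents_for_len(sg, len);

    	if (nents < 0)
    		return nents;	/* scatterlist shorter than len */
    	if (sg_copy_to_buffer(sg, nents, buf, len) != len)
    		return -EINVAL;
    	return 0;
    }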
* crypto: cfb - add support for Cipher FeedBack mode (James Bottomley, 2018-03-09; 3 files, -0/+362)

    TPM security routines require encryption and decryption with AES in
    CFB mode, so add it to the Linux Crypto schemes. CFB is basically a
    one-time pad where the pad is generated initially from the encrypted
    IV and then subsequently from the encrypted previous block of
    ciphertext. The pad is XOR'd into the plaintext to get the final
    ciphertext.

    https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CFB

    Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
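The description maps directly to code. A minimal sketch of whole-block CFB
encryption, assuming a 16-byte block and an enc() callback for the block
cipher's encryption direction (an illustration, not the kernel template's
actual implementation):

    #include <linux/types.h>
    #include <linux/string.h>

    #define BS 16	/* AES block size */

    static void cfb_encrypt_blocks(void (*enc)(u8 *dst, const u8 *src),
    			       u8 iv[BS], u8 *data, unsigned int nblocks)
    {
    	u8 pad[BS];
    	unsigned int i;

    	while (nblocks--) {
    		enc(pad, iv);			/* pad = E_K(IV or previous ciphertext) */
    		for (i = 0; i < BS; i++)
    			data[i] ^= pad[i];	/* ciphertext = plaintext XOR pad */
    		memcpy(iv, data, BS);		/* feed this ciphertext block back */
    		data += BS;
    	}
    }

Decryption is the same loop with the feedback taken from the incoming
ciphertext before the XOR, which is why CFB never needs the block cipher's
decryption direction.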
* crypto: ablk_helper - remove ablk_helper (Eric Biggers, 2018-03-03; 3 files, -155/+0)

    All users of ablk_helper have been converted over to crypto_simd, so
    remove ablk_helper.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: lrw - remove lrw_crypt() (Eric Biggers, 2018-03-03; 1 file, -113/+39)

    Now that all users of lrw_crypt() have been removed in favor of the
    LRW template wrapping an ECB mode algorithm, remove lrw_crypt(). Also
    remove crypto/lrw.h as that is no longer needed either; and fold
    'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into
    setkey(), and lrw_free_table() into exit_tfm().

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: xts - remove xts_crypt() (Eric Biggers, 2018-03-03; 1 file, -72/+0)

    Now that all users of xts_crypt() have been removed in favor of the
    XTS template wrapping an ECB mode algorithm, remove xts_crypt().

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia-aesni-avx, avx2 - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -10/+3)

    Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from
    the (deprecated) ablkcipher and blkcipher interfaces over to the
    skcipher interface. Note that this includes replacing the use of
    ablk_helper with crypto_simd.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -1/+1)

    Convert the x86 asm implementation of Camellia from the (deprecated)
    blkcipher interface over to the skcipher interface.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia - remove XTS algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The XTS template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic XTS code themselves via xts_crypt().

    Remove the xts-camellia-asm algorithm which did this. Users who
    request xts(camellia) and previously would have gotten
    xts-camellia-asm will now get xts(ecb-camellia-asm) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
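The same reasoning recurs in the LRW and XTS removals that follow. From a
caller's perspective nothing changes; a hedged sketch of the unchanged
usage (get_xts_camellia() is a hypothetical helper):

    #include <crypto/skcipher.h>

    /* Still just "xts(camellia)"; resolution now composes the XTS
     * template over the fastest available ECB driver, e.g.
     * ecb-camellia-asm, instead of a module-local XTS implementation.
     */
    static struct crypto_skcipher *get_xts_camellia(void)
    {
    	return crypto_alloc_skcipher("xts(camellia)", 0, 0);
    }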
* crypto: x86/camellia - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-camellia-asm algorithm which did this. Users who
    request lrw(camellia) and previously would have gotten
    lrw-camellia-asm will now get lrw(ecb-camellia-asm) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia-aesni-avx2 - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-camellia-aesni-avx2 algorithm which did this. Users
    who request lrw(camellia) and previously would have gotten
    lrw-camellia-aesni-avx2 will now get lrw(ecb-camellia-aesni-avx2)
    instead, which is just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia-aesni-avx - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-camellia-aesni algorithm which did this. Users who
    request lrw(camellia) and previously would have gotten
    lrw-camellia-aesni will now get lrw(ecb-camellia-aesni) instead,
    which is just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/des3_ede - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -1/+1)

    Convert the x86 asm implementation of Triple DES from the (deprecated)
    blkcipher interface over to the skcipher interface.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/blowfish: convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -1/+1)

    Convert the x86 asm implementation of Blowfish from the (deprecated)
    blkcipher interface over to the skcipher interface.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/cast6-avx - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -5/+4)

    Convert the AVX implementation of CAST6 from the (deprecated)
    ablkcipher and blkcipher interfaces over to the skcipher interface.
    Note that this includes replacing the use of ablk_helper with
    crypto_simd.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/cast6-avx - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-cast6-avx algorithm which did this. Users who request
    lrw(cast6) and previously would have gotten lrw-cast6-avx will now
    get lrw(ecb-cast6-avx) instead, which is just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/cast5-avx - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -4/+3)

    Convert the AVX implementation of CAST5 from the (deprecated)
    ablkcipher and blkcipher interfaces over to the skcipher interface.
    Note that this includes replacing the use of ablk_helper with
    crypto_simd.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish-avx - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -4/+2)

    Convert the AVX implementation of Twofish from the (deprecated)
    ablkcipher and blkcipher interfaces over to the skcipher interface.
    Note that this includes replacing the use of ablk_helper with
    crypto_simd.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish-avx - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-twofish-avx algorithm which did this. Users who
    request lrw(twofish) and previously would have gotten lrw-twofish-avx
    will now get lrw(ecb-twofish-avx) instead, which is just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish-3way - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -1/+1)

    Convert the 3-way implementation of Twofish from the (deprecated)
    blkcipher interface over to the skcipher interface.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish-3way - remove XTS algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The XTS template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic XTS code themselves via xts_crypt().

    Remove the xts-twofish-3way algorithm which did this. Users who
    request xts(twofish) and previously would have gotten
    xts-twofish-3way will now get xts(ecb-twofish-3way) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish-3way - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-twofish-3way algorithm which did this. Users who
    request lrw(twofish) and previously would have gotten
    lrw-twofish-3way will now get lrw(ecb-twofish-3way) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent-avx,avx2 - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -9/+2)

    Convert the AVX and AVX2 implementations of Serpent from the
    (deprecated) ablkcipher and blkcipher interfaces over to the skcipher
    interface. Note that this includes replacing the use of ablk_helper
    with crypto_simd.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent-avx - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-serpent-avx algorithm which did this. Users who
    request lrw(serpent) and previously would have gotten lrw-serpent-avx
    will now get lrw(ecb-serpent-avx) instead, which is just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent-avx2 - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -1/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-serpent-avx2 algorithm which did this. Users who
    request lrw(serpent) and previously would have gotten
    lrw-serpent-avx2 will now get lrw(ecb-serpent-avx2) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent-sse2 - convert to skcipher interface (Eric Biggers, 2018-03-03; 1 file, -6/+4)

    Convert the SSE2 implementation of Serpent from the (deprecated)
    ablkcipher and blkcipher interfaces over to the skcipher interface.
    Note that this includes replacing the use of ablk_helper with
    crypto_simd.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent-sse2 - remove XTS algorithm (Eric Biggers, 2018-03-03; 1 file, -2/+0)

    The XTS template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic XTS code themselves via xts_crypt().

    Remove the xts-serpent-sse2 algorithm which did this. Users who
    request xts(serpent) and previously would have gotten
    xts-serpent-sse2 will now get xts(ecb-serpent-sse2) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent-sse2 - remove LRW algorithm (Eric Biggers, 2018-03-03; 1 file, -2/+0)

    The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-serpent-sse2 algorithm which did this. Users who
    request lrw(serpent) and previously would have gotten
    lrw-serpent-sse2 will now get lrw(ecb-serpent-sse2) instead, which is
    just as fast.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: simd - allow registering multiple algorithms at once (Eric Biggers, 2018-03-03; 1 file, -0/+50)

    Add a function to crypto_simd that registers an array of skcipher
    algorithms, then allocates and registers the simd wrapper algorithms
    for them. It assumes the naming scheme where the names of the
    underlying algorithms are prefixed with two underscores.

    Also add the corresponding 'unregister' function.

    Most of the x86 crypto modules will be able to use these.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
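A hedged sketch of how a converted module might use the pair of helpers;
the function names and signatures below reflect my reading of the patch
and should be treated as illustrative:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <crypto/skcipher.h>
    #include <crypto/internal/simd.h>

    /* Underlying algorithm names carry the "__" prefix, e.g. "__ecb(foo)". */
    static struct skcipher_alg my_algs[2];
    static struct simd_skcipher_alg *my_simd_algs[2];

    static int __init my_mod_init(void)
    {
    	/* registers my_algs, then allocates and registers simd wrappers */
    	return simd_register_skciphers_compat(my_algs, ARRAY_SIZE(my_algs),
    					      my_simd_algs);
    }

    static void __exit my_mod_exit(void)
    {
    	simd_unregister_skciphers(my_algs, ARRAY_SIZE(my_algs),
    				  my_simd_algs);
    }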
* crypto: speck - add test vectors for Speck64-XTS (Eric Biggers, 2018-02-22; 2 files, -0/+680)

    Add test vectors for Speck64-XTS, generated in userspace using C code.
    The inputs were borrowed from the AES-XTS test vectors, with key
    lengths adjusted.

    xts-speck64-neon passes these tests. However, they aren't currently
    applicable for the generic XTS template, as that only supports a
    128-bit block size.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: speck - add test vectors for Speck128-XTS (Eric Biggers, 2018-02-22; 2 files, -0/+696)

    Add test vectors for Speck128-XTS, generated in userspace using C
    code. The inputs were borrowed from the AES-XTS test vectors.

    Both xts(speck128-generic) and xts-speck128-neon pass these tests.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: speck - export common helpers (Eric Biggers, 2018-02-22; 1 file, -41/+49)

    Export the Speck constants and transform context and the ->setkey(),
    ->encrypt(), and ->decrypt() functions so that they can be reused by
    the ARM NEON implementation of Speck-XTS. The generic key expansion
    code will be reused because it is not performance-critical and is not
    vectorizable, while the generic encryption and decryption functions
    are needed as fallbacks and for the XTS tweak encryption.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
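The rough shape of the exported interface, as a hedged approximation; the
authoritative declarations live in include/crypto/speck.h in this tree,
and the exact names and signatures should be checked there:

    #include <linux/types.h>

    struct speck128_tfm_ctx;	/* holds the expanded round keys */

    int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx,
    			   const u8 *key, unsigned int keylen);
    void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
    			     u8 *out, const u8 *in);
    void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
    			     u8 *out, const u8 *in);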
* crypto: speck - add support for the Speck block cipher (Eric Biggers, 2018-02-22; 5 files, -0/+460)

    Add a generic implementation of Speck, including the Speck128 and
    Speck64 variants. Speck is a lightweight block cipher that can be
    much faster than AES on processors that don't have AES instructions.

    We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
    option for dm-crypt and fscrypt on Android, for low-end mobile devices
    with older CPUs such as ARMv7 which don't have the Cryptography
    Extensions. Currently, such devices are unencrypted because AES is
    not fast enough, even when the NEON bit-sliced implementation of AES
    is used. Other AES alternatives such as Twofish, Threefish, Camellia,
    CAST6, and Serpent aren't fast enough either; it seems that only a
    modern ARX cipher can provide sufficient performance on these devices.

    This is a replacement for our original proposal
    (https://patchwork.kernel.org/patch/10101451/) which was to offer
    ChaCha20 for these devices. However, the use of a stream cipher for
    disk/file encryption with no space to store nonces would have been
    much more insecure than we thought initially, given that it would be
    used on top of flash storage as well as potentially on top of F2FS,
    neither of which is guaranteed to overwrite data in-place.

    Speck has been somewhat controversial due to its origin.
    Nevertheless, it has a straightforward design (it's an ARX cipher),
    and it appears to be the leading software-optimized lightweight block
    cipher currently, with the most cryptanalysis. It's also easy to
    implement without side channels, unlike AES. Moreover, we only intend
    Speck to be used when the status quo is no encryption, due to AES not
    being fast enough.

    We've also considered a novel length-preserving encryption mode based
    on ChaCha20 and Poly1305. While theoretically attractive, such a mode
    would be a brand new crypto construction and would be more complicated
    and difficult to implement efficiently in comparison to Speck-XTS.

    There is confusion about the byte and word orders of Speck, since the
    original paper doesn't specify them. But we have implemented it using
    the orders the authors recommended in a correspondence with them. The
    test vectors are taken from the original paper but were mapped to byte
    arrays using the recommended byte and word orders.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
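For reference, the ARX structure that makes Speck attractive in software
is tiny. One Speck128 encryption round per the paper's definition (a
sketch, not necessarily the kernel's exact formulation):

    #include <linux/types.h>
    #include <linux/bitops.h>	/* rol64()/ror64() */

    /* One Speck128 round: (x, y) is the 2x64-bit state, k the round key. */
    static inline void speck128_round(u64 *x, u64 *y, u64 k)
    {
    	*x = ror64(*x, 8);	/* rotate */
    	*x += *y;		/* add */
    	*x ^= k;		/* xor in the round key */
    	*y = rol64(*y, 3);
    	*y ^= *x;
    }

Only rotates, adds, and xors: no table lookups or key-dependent branches,
which is why a constant-time implementation is straightforward.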
* crypto: testmgr - Fix incorrect values in PKCS#1 test vector (Conor McLoughlin, 2018-02-22; 1 file, -3/+3)

    The RSA private key for the first form should have the version,
    prime1, prime2, exponent1, exponent2, and coefficient values set to 0.
    With non-zero values for prime1/2, exponent1/2, and coefficient, the
    Intel QAT driver will assume that values are provided for the private
    key second form. This will result in signature verification failures
    for modules where a QAT device is present and the modules are signed
    with rsa,sha256.

    Cc: <stable@vger.kernel.org>
    Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
    Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com>
    Reviewed-by: Stephan Mueller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: engine - Permit to enqueue all async requests (Corentin Labbe, 2018-02-15; 1 file, -137/+164)

    The crypto engine could previously only enqueue hash and ablkcipher
    requests. This patch permits it to enqueue any type of
    crypto_async_request.

    Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
    Tested-by: Fabien Dessenne <fabien.dessenne@st.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - Replace GFP_ATOMIC with GFP_KERNEL in crypto_report (Jia-Ju Bai, 2018-02-15; 1 file, -1/+1)

    After checking all possible call chains to crypto_report here, my tool
    finds that crypto_report is never called in atomic context. Moreover,
    crypto_report calls crypto_alg_match, which calls down_read, which
    again shows that crypto_report can call functions that may sleep.
    Thus GFP_ATOMIC is not necessary, and it can be replaced with
    GFP_KERNEL.

    This is found by a static analysis tool named DCNS written by myself.

    Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rsa-pkcs1pad - Replace GFP_ATOMIC with GFP_KERNEL in pkcs1pad_encrypt_sign_complete (Jia-Ju Bai, 2018-02-15; 1 file, -1/+1)

    After checking all possible call chains to kzalloc here, my tool finds
    that this kzalloc is never called in atomic context. Thus GFP_ATOMIC
    is not necessary, and it can be replaced with GFP_KERNEL.

    This is found by a static analysis tool named DCNS written by myself.

    Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
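Both of these patches are the same one-line pattern; a hedged illustration
(alloc_report_buf() is a hypothetical call site):

    #include <linux/slab.h>

    /* In process context that may sleep, GFP_KERNEL lets the allocator
     * reclaim and wait instead of failing fast.
     */
    static void *alloc_report_buf(size_t len)
    {
    	return kzalloc(len, GFP_KERNEL);	/* was: GFP_ATOMIC */
    }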
* crypto: mcryptd - remove pointless wrapper functions (Eric Biggers, 2018-02-15; 1 file, -30/+4)

    There is no need for ahash_mcryptd_{update,final,finup,digest}(); we
    should just call crypto_ahash_*() directly.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Require export/import in ahash (Kamil Konieczny, 2018-02-15; 1 file, -16/+2)

    Export and import are mandatory in async hash. As drivers have been
    rewritten, drop the empty wrappers and correct the init of the ahash
    transformation.

    Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
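With export/import mandatory, an ahash driver must supply real hooks and a
state size; a hedged skeleton (my_state and the callbacks are hypothetical
placeholders, not a real driver):

    #include <linux/types.h>
    #include <crypto/internal/hash.h>

    struct my_state { u32 partial[8]; };	/* hypothetical exported state */

    static int my_export(struct ahash_request *req, void *out)
    {
    	/* serialize the in-progress hash state into 'out' */
    	return 0;
    }

    static int my_import(struct ahash_request *req, const void *in)
    {
    	/* rebuild the in-progress hash state from 'in' */
    	return 0;
    }

    static struct ahash_alg my_alg = {
    	/* .init/.update/.final/.digest as usual ... */
    	.export = my_export,
    	.import = my_import,
    	.halg.statesize = sizeof(struct my_state),
    };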
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2018-02-12; 1 file, -100/+118)

    Pull crypto fixes from Herbert Xu:
     "This fixes the following issues:

      - oversize stack frames on mn10300 in sha3-generic
      - warning on old compilers in sha3-generic
      - API error in sun4i_ss_prng
      - potential dead-lock in sun4i_ss_prng
      - null-pointer dereference in sha512-mb
      - endless loop when DECO acquire fails in caam
      - kernel oops when hashing empty message in talitos"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
      crypto: sun4i_ss_prng - convert lock to _bh in sun4i_ss_prng_generate
      crypto: sun4i_ss_prng - fix return value of sun4i_ss_prng_generate
      crypto: caam - fix endless loop when DECO acquire fails
      crypto: sha3-generic - Use __optimize to support old compilers
      compiler-gcc.h: __nostackprotector needs gcc-4.4 and up
      compiler-gcc.h: Introduce __optimize function attribute
      crypto: sha3-generic - deal with oversize stack frames
      crypto: talitos - fix Kernel Oops on hashing an empty file
      crypto: sha512-mb - initialize pending lengths correctly
| * crypto: sha3-generic - Use __optimize to support old compilers (Geert Uytterhoeven, 2018-02-08; 1 file, -1/+1)

    With gcc-4.1.2:

        crypto/sha3_generic.c:39: warning: ‘__optimize__’ attribute directive ignored

    Use the newly introduced __optimize macro to fix this.

    Fixes: 116121f60112aefd ("crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize")
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: sha3-generic - deal with oversize stack frames (Ard Biesheuvel, 2018-02-08; 1 file, -100/+118)

    As reported by kbuild test robot, the optimized SHA3 C implementation
    compiles to mn10300 code that uses a disproportionate amount of stack
    space, i.e.,

        crypto/sha3_generic.c: In function 'keccakf':
        crypto/sha3_generic.c:147:1: warning: the frame size of 1232 bytes is larger than 1024 bytes [-Wframe-larger-than=]

    As kindly diagnosed by Arnd, this does not only occur when building
    for the mn10300 architecture (which is what the report was about) but
    also for h8300, and builds for other 32-bit architectures show an
    increase in stack space utilization as well.

    Given that SHA3 operates on 64-bit quantities, and keeps a state
    matrix of 25 64-bit words, it is not surprising that 32-bit
    architectures with few general purpose registers are impacted the most
    by this, and it is therefore reasonable to implement a workaround that
    distinguishes between 32-bit and 64-bit architectures.

    Arnd figured out that taking the round calculation out of the loop,
    and inlining it explicitly but only on 64-bit architectures, preserves
    most of the performance gain achieved by the rewrite, and also gets
    rid of the excessive use of stack space.

    Reported-by: kbuild test robot <fengguang.wu@intel.com>
    Suggested-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
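A hedged sketch of the workaround's shape as described above (the actual
patch differs in detail): keep the per-round body in a helper, and only
force inlining on 64-bit, so 32-bit builds make a real call per round and
reuse a single stack frame.

    /* Sketch only; SHA3_ROUND_INLINE and keccakf_round() are illustrative. */
    #include <linux/types.h>
    #include <linux/compiler.h>

    #ifdef CONFIG_64BIT
    #define SHA3_ROUND_INLINE __always_inline
    #else
    #define SHA3_ROUND_INLINE noinline
    #endif

    static SHA3_ROUND_INLINE void keccakf_round(u64 st[25], int rnd)
    {
    	/* theta, rho, pi, chi and the iota round-constant step go here */
    }

    static void keccakf(u64 st[25])
    {
    	int rnd;

    	/* On 32-bit, each iteration is a call reusing one stack frame. */
    	for (rnd = 0; rnd < 24; rnd++)
    		keccakf_round(st, rnd);
    }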
* | vfs: do bulk POLL* -> EPOLL* replacement (Linus Torvalds, 2018-02-11; 1 file, -8/+8)

    This is the mindless scripted replacement of kernel use of POLL*
    variables as described by Al, done by this script:

        for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
            L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
            for f in $L; do
                sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f
            done
        done

    with de-mangling cleanups yet to come.

    NOTE! On almost all architectures, the EPOLL* constants have the same
    values as the POLL* constants do. But the keyword here is "almost".
    For various bad reasons they aren't the same, and epoll() doesn't
    actually work quite correctly in some cases due to this on Sparc et
    al.

    The next patch from Al will sort out the final differences, and we
    should be all done.

    Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge tag 'for-linus-4.16-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip (Linus Torvalds, 2018-02-09; 0 files, -0/+0)

    Pull xen fixes from Juergen Gross:
     "Only five small fixes for issues when running under Xen"

    * tag 'for-linus-4.16-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
      xen: Fix {set,clear}_foreign_p2m_mapping on autotranslating guests
      pvcalls-back: do not return error on inet_accept EAGAIN
      xen-netfront: Fix race between device setup and open
      xen/grant-table: Use put_page instead of free_page
      x86/xen: init %gs very early to avoid page faults with stack protector
| * \ Merge branch 'master' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/torvalds/linux (Juergen Gross, 2018-01-29; 47 files, -656/+777)