path: root/crypto/ahash.c
Date | Commit message | Author | Files | Lines
2012-06-27 | crypto: arc4 - now arc4 needs blkcipher support | Sebastian Andrzej Siewior | 1 | -1/+1
Since commit c3ee1c83 ("crypto: arc4 - improve performance by adding ecb(arc4)") we need to pull in a blkcipher:

|ERROR: "crypto_blkcipher_type" [crypto/arc4.ko] undefined!
|ERROR: "blkcipher_walk_done" [crypto/arc4.ko] undefined!
|ERROR: "blkcipher_walk_virt" [crypto/arc4.ko] undefined!

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: twofish-avx - remove duplicated glue code and use shared glue code from glue_helper | Jussi Kivilinna | 1 | -0/+1
Now that shared glue code is available, convert twofish-avx to use it. Cc: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: twofish-x86_64-3way - remove duplicated glue code and use shared glue code from glue_helper | Jussi Kivilinna | 1 | -0/+1
Now that shared glue code is available, convert twofish-x86_64-3way to use it. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: camellia-x86_64 - remove duplicated glue code and use shared glue code from glue_helper | Jussi Kivilinna | 1 | -0/+1
Now that shared glue code is available, convert camellia-x86_64 to use it. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: serpent-avx: remove duplicated glue code and use shared glue code from glue_helper | Jussi Kivilinna | 1 | -0/+1
Now that shared glue code is available, convert serpent-avx to use it. Cc: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: serpent-sse2 - split generic glue code to new helper module | Jussi Kivilinna | 1 | -0/+7
Now that the serpent-sse2 glue code has been made generic, it can be split out into a separate module. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: aes_ni - change to use shared ablk_* functions | Jussi Kivilinna | 1 | -0/+1
Remove duplicate ablk_* functions and make use of ablk_helper module instead. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: twofish-avx - change to use shared ablk_* functions | Jussi Kivilinna | 1 | -0/+1
Remove duplicate ablk_* functions and make use of ablk_helper module instead. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27 | crypto: ablk_helper - move ablk_* functions from serpent-sse2/avx glue code to shared module | Jussi Kivilinna | 1 | -0/+8
Move the ablk_* functions to a separate module to share common code between cipher implementations. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-22 | crypto: algapi - Move larval completion into algboss | Herbert Xu | 3 | -26/+9
It has been observed that sometimes the crypto allocation code will get stuck for 60 seconds or multiples thereof. This is usually caused by an algorithm failing to pass the self-test. If an algorithm fails to be constructed, we will immediately notify all larval waiters. However, if it succeeds in construction, but then fails the self-test, we won't notify anyone at all. This patch fixes this by merging the notification in the case where the algorithm fails to be constructed with that of the case where it passes the self-test. This way, regardless of what happens, we'll give the larval waiters an answer. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-14 | crypto: arc4 - improve performance by using u32 for ctx and variables | Jussi Kivilinna | 1 | -6/+6
This patch changes u8 in struct arc4_ctx and the local variables to u32 (as AMD CPUs seem to have trouble with u8 arrays). Below are tcrypt results of the old 1-byte block cipher versus ecb(arc4) with u8 and ecb(arc4) with u32.

tcrypt results, x86-64 (speed ratios: new-u32/old, new-u8/old):
                     u32     u8
AMD Phenom II    : x3.6    x2.7
Intel Core 2     : x2.0    x1.9

tcrypt results, i386 (speed ratios: new-u32/old, new-u8/old):
                     u32     u8
Intel Atom N260  : x1.5    x1.4

Cc: Jon Oberheide <jon@oberheide.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
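For illustration, a minimal sketch of the widened context, assuming the usual in-kernel arc4 state layout (a 256-entry permutation plus the two stream indices); the field names here are illustrative:

  /* RC4 state held in u32 slots to avoid u8 partial-register
   * penalties on some CPUs; values still stay within 0..255. */
  struct arc4_ctx {
          u32 S[256];     /* permutation table, widened from u8 */
          u32 x, y;       /* stream indices, widened from u8 */
  };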
2012-06-14 | crypto: arc4 - improve performance by adding ecb(arc4) | Jussi Kivilinna | 1 | -22/+87
Currently arc4.c provides a simple one-byte-blocksize cipher which is wrapped by the ecb() module, incurring function-call overhead on every encrypted byte. This patch adds ecb(arc4) directly into arc4.c for higher performance.

tcrypt results (speed ratios: new/old):
  AMD Phenom II, x86-64 : x2.7
  Intel Core 2, x86-64  : x1.9
  Intel Atom N260, i386 : x1.4

Cc: Jon Oberheide <jon@oberheide.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
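A hedged sketch of what the direct ecb(arc4) path looks like with the blkcipher walk API of this era (the walk functions match the undefined symbols quoted in the arc4 module-dependency fix above); the arc4_crypt() chunk helper is an assumption, not the literal patch:

  static int ecb_arc4_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                            struct scatterlist *src, unsigned int nbytes)
  {
          struct arc4_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
          struct blkcipher_walk walk;
          int err;

          blkcipher_walk_init(&walk, dst, src, nbytes);
          err = blkcipher_walk_virt(desc, &walk);

          /* Encrypt whole contiguous chunks per walk step instead of
           * going through the cipher layer one byte at a time. */
          while (walk.nbytes > 0) {
                  arc4_crypt(ctx, walk.dst.virt.addr, walk.src.virt.addr,
                             walk.nbytes);
                  err = blkcipher_walk_done(desc, &walk, 0);
          }
          return err;
  }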
2012-06-14 | crypto: testmgr - add ecb(arc4) speed tests | Jussi Kivilinna | 1 | -0/+10
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-12 | crypto: serpent - add x86_64/avx assembler implementation | Johannes Goetzfried | 2 | -0/+80
This patch adds an x86_64/avx assembler implementation of the Serpent block cipher. The implementation is very similar to the sse2 implementation and processes eight blocks in parallel. Because of the new non-destructive three-operand syntax all move instructions can be removed, which provides a small performance increase. The patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results:

Intel Core i5-2500 CPU (fam:6, model:42, step:7)

serpent-avx-x86_64 vs. serpent-sse2-x86_64
128bit key: (lrw:256bit) (xts:256bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     1.03x   1.01x   1.01x   1.01x   1.00x   1.00x   1.00x   1.00x   1.00x   1.01x
64B     1.00x   1.00x   1.00x   1.00x   1.00x   0.99x   1.00x   1.01x   1.00x   1.00x
256B    1.05x   1.03x   1.00x   1.02x   1.05x   1.06x   1.05x   1.02x   1.05x   1.02x
1024B   1.05x   1.02x   1.00x   1.02x   1.05x   1.06x   1.05x   1.03x   1.05x   1.02x
8192B   1.05x   1.02x   1.00x   1.02x   1.06x   1.06x   1.04x   1.03x   1.04x   1.02x

256bit key: (lrw:384bit) (xts:512bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     1.01x   1.00x   1.01x   1.01x   1.00x   1.00x   0.99x   1.03x   1.01x   1.01x
64B     1.00x   1.00x   1.00x   1.00x   1.00x   1.00x   1.00x   1.01x   1.00x   1.02x
256B    1.05x   1.02x   1.00x   1.02x   1.05x   1.02x   1.04x   1.05x   1.05x   1.02x
1024B   1.06x   1.02x   1.00x   1.02x   1.07x   1.06x   1.05x   1.04x   1.05x   1.02x
8192B   1.05x   1.02x   1.00x   1.02x   1.06x   1.06x   1.04x   1.05x   1.05x   1.02x

serpent-avx-x86_64 vs aes-asm (8kB block):
        128bit  256bit
ecb-enc 1.26x   1.73x
ecb-dec 1.20x   1.64x
cbc-enc 0.33x   0.45x
cbc-dec 1.24x   1.67x
ctr-enc 1.32x   1.76x
ctr-dec 1.32x   1.76x
lrw-enc 1.20x   1.60x
lrw-dec 1.15x   1.54x
xts-enc 1.22x   1.64x
xts-dec 1.17x   1.57x

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-12 | crypto: testmgr - expand twofish test vectors | Johannes Goetzfried | 1 | -32/+896
The AVX implementation of the twofish cipher processes 8 blocks in parallel, so we need to make the test vectors larger to check the parallel code paths. The test vectors are also large enough to cover 16-block parallel implementations which may occur in the future. Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-12 | crypto: twofish - add x86_64/avx assembler implementation | Johannes Goetzfried | 3 | -0/+107
This patch adds an x86_64/avx assembler implementation of the Twofish block cipher. The implementation processes eight blocks in parallel (two 4-block-chunk AVX operations). The table lookups are done in general-purpose registers. For small blocksizes the 3way-parallel functions from the twofish-x86_64-3way module are called. A good performance increase is provided for blocksizes greater than or equal to 128B. The patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results:

Intel Core i5-2500 CPU (fam:6, model:42, step:7)

twofish-avx-x86_64 vs. twofish-x86_64-3way
128bit key: (lrw:256bit) (xts:256bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     0.96x   0.97x   1.00x   0.95x   0.97x   0.97x   0.96x   0.95x   0.95x   0.98x
64B     0.99x   0.99x   1.00x   0.99x   0.98x   0.98x   0.99x   0.98x   0.99x   0.98x
256B    1.20x   1.21x   1.00x   1.19x   1.15x   1.14x   1.19x   1.20x   1.18x   1.19x
1024B   1.29x   1.30x   1.00x   1.28x   1.23x   1.24x   1.26x   1.28x   1.26x   1.27x
8192B   1.31x   1.32x   1.00x   1.31x   1.25x   1.25x   1.28x   1.29x   1.28x   1.30x

256bit key: (lrw:384bit) (xts:512bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     0.96x   0.96x   1.00x   0.96x   0.97x   0.98x   0.95x   0.95x   0.95x   0.96x
64B     1.00x   0.99x   1.00x   0.98x   0.98x   1.01x   0.98x   0.98x   0.98x   0.98x
256B    1.20x   1.21x   1.00x   1.21x   1.15x   1.15x   1.19x   1.20x   1.18x   1.19x
1024B   1.29x   1.30x   1.00x   1.28x   1.23x   1.23x   1.26x   1.27x   1.26x   1.27x
8192B   1.31x   1.33x   1.00x   1.31x   1.26x   1.26x   1.29x   1.29x   1.28x   1.30x

twofish-avx-x86_64 vs aes-asm (8kB block):
        128bit  256bit
ecb-enc 1.19x   1.63x
ecb-dec 1.18x   1.62x
cbc-enc 0.75x   1.03x
cbc-dec 1.23x   1.67x
ctr-enc 1.24x   1.65x
ctr-dec 1.24x   1.65x
lrw-enc 1.15x   1.53x
lrw-dec 1.14x   1.52x
xts-enc 1.16x   1.56x
xts-dec 1.16x   1.56x

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-12 | crypto: testmgr - Add new test cases for Blackfin CRC crypto driver | Sonic Zhang | 3 | -0/+102
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com> Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-05-22 | crypto: disable preemption while benchmarking RAID5 xor checksumming | Jim Kukunas | 1 | -0/+5
With CONFIG_PREEMPT=y, we need to disable preemption while benchmarking RAID5 xor checksumming to ensure we're actually measuring what we think we're measuring. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
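The shape of the fix is simply bracketing the timed region (a sketch; the measured loop body is elided):

  preempt_disable();      /* keep the benchmark uninterrupted on one CPU */
  /* ... timed do_2() iterations, as in do_xor_speed() ... */
  preempt_enable();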
2012-05-22 | crypto: wait for a full jiffy in do_xor_speed | Jim Kukunas | 1 | -3/+5
In the existing do_xor_speed(), there is no guarantee that we actually run do_2() for a full jiffy. We get the current jiffy, then run do_2() until the next jiffy. Instead, let's get the current jiffy, then wait until the next jiffy to start our test. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
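A sketch of the boundary wait, using the usual jiffies idiom:

  unsigned long j = jiffies;

  /* Spin until a fresh jiffy begins so the timed window spans a
   * whole tick instead of a partial one. */
  while (j == jiffies)
          cpu_relax();
  /* ... now time do_2() until the next jiffy transition ... */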
2012-04-09 | um: several x86 hw-dependent crypto modules won't build on uml | Al Viro | 1 | -3/+3
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-04-09 | crypto, xor: Sanitize checksumming function selection output | Borislav Petkov | 1 | -2/+3
Currently, it says

  [ 1.015541] xor: automatically using best checksumming function: generic_sse
  [ 1.040769] generic_sse: 6679.000 MB/sec
  [ 1.045377] xor: using function: generic_sse (6679.000 MB/sec)

and repeats the function name three times unnecessarily. Change it into

  [ 1.015115] xor: automatically using best checksumming function:
  [ 1.040794] generic_sse: 6680.000 MB/sec

and save us a line in dmesg. No functional change. Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-04-05 | crypto: sha512 - Fix byte counter overflow in SHA-512 | Kent Yoder | 1 | -1/+1
The current code only increments the upper 64 bits of the SHA-512 byte counter when the number of bytes hashed happens to hit 2^64 exactly. This patch increments the upper 64 bits whenever the lower 64 bits overflows. Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com> Cc: stable@kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
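The carry logic amounts to the following sketch, with count[0]/count[1] assumed to be the low/high 64-bit halves of the byte counter:

  /* Carry into the upper half whenever the lower half wraps, not only
   * when the total happens to land exactly on a 2^64 boundary. */
  sctx->count[0] += len;
  if (sctx->count[0] < len)       /* unsigned wrap-around occurred */
          sctx->count[1]++;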
2012-04-02 | crypto: Stop using NLA_PUT*(). | David S. Miller | 8 | -38/+38
These macros contain a hidden goto, and are thus extremely error prone and make code hard to audit. Signed-off-by: David S. Miller <davem@davemloft.net>
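The mechanical rewrite looks like this (a hedged sketch; the attribute and variable names are illustrative):

  /* Before: the macro hides a 'goto nla_put_failure;' inside itself. */
  NLA_PUT(skb, CRYPTOCFGA_PRIORITY_VAL, sizeof(prio), &prio);

  /* After: the failure path is explicit at the call site. */
  if (nla_put(skb, CRYPTOCFGA_PRIORITY_VAL, sizeof(prio), &prio))
          goto nla_put_failure;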
2012-03-29 | crypto: user - Fix size of netlink dump message | Steffen Klassert | 1 | -0/+8
The default netlink message size limit might be exceeded when dumping a lot of algorithms to userspace. As a result, not all of the instantiated algorithms are dumped to userspace. So calculate an upper bound on the message size and call netlink_dump_start() with that value. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-29 | crypto: user - Fix lookup of algorithms with IV generator | Steffen Klassert | 3 | -5/+75
We look up algorithms with crypto_alg_mod_lookup() when instantiating via crypto_add_alg(). However, algorithms that are wrapped by an IV generator (e.g. aead or genicv type algorithms) need special care. The userspace process hangs until it gets a timeout when we use crypto_alg_mod_lookup() to look up these algorithms. So export the lookup functions for these algorithms and use them in crypto_add_alg(). Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-29 | crypto: pcrypt - Use the online cpumask as the default | Steffen Klassert | 1 | -4/+4
We use the active cpumask to determine the superset of cpus to use for parallelization. However, the active cpumask is for internal usage of the scheduler and therefore not the appropriate cpumask for these purposes. So use the online cpumask instead. Reported-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-23 | crypto: crc32c should use library implementation | Darrick J. Wong | 2 | -91/+4
Since lib/crc32.c now provides crc32c, remove the software implementation here and call the library function instead. Signed-off-by: Darrick J. Wong <djwong@us.ibm.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Bob Pearson <rpearson@systemfabricworks.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
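A sketch of the resulting shash update step, delegating to the library routine from <linux/crc32c.h> (the context layout is assumed):

  #include <linux/crc32c.h>

  static int chksum_update(struct shash_desc *desc, const u8 *data,
                           unsigned int length)
  {
          struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);

          /* The private table-driven implementation is gone;
           * lib/crc32.c does the actual work. */
          ctx->crc = crc32c(ctx->crc, data, length);
          return 0;
  }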
2012-03-20 | crypto: remove the second argument of k[un]map_atomic() | Cong Wang | 6 | -20/+20
Signed-off-by: Cong Wang <amwang@redhat.com>
2012-03-14 | crypto: camellia - add assembler implementation for x86_64 | Jussi Kivilinna | 1 | -0/+18
This patch adds an x86_64 assembler implementation of the Camellia block cipher. Two sets of functions are provided. The first set contains regular one-block-at-a-time encrypt/decrypt functions. The second contains two-block-at-a-time functions, which give a performance increase on out-of-order CPUs; on in-order CPUs the 2-way functions should perform on par with the 1-way functions. The patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results:

AMD Phenom II 1055T (fam:16, model:10):

camellia-asm vs camellia_generic:
128bit key: (lrw:256bit) (xts:256bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     1.27x   1.22x   1.30x   1.42x   1.30x   1.34x   1.19x   1.05x   1.23x   1.24x
64B     1.74x   1.79x   1.43x   1.87x   1.81x   1.87x   1.48x   1.38x   1.55x   1.62x
256B    1.90x   1.87x   1.43x   1.94x   1.94x   1.95x   1.63x   1.62x   1.67x   1.70x
1024B   1.96x   1.93x   1.43x   1.95x   1.98x   2.01x   1.67x   1.69x   1.74x   1.80x
8192B   1.96x   1.96x   1.39x   1.93x   2.01x   2.03x   1.72x   1.64x   1.71x   1.76x

256bit key: (lrw:384bit) (xts:512bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     1.23x   1.23x   1.33x   1.39x   1.34x   1.38x   1.04x   1.18x   1.21x   1.29x
64B     1.72x   1.69x   1.42x   1.78x   1.81x   1.89x   1.57x   1.52x   1.56x   1.65x
256B    1.85x   1.88x   1.42x   1.86x   1.93x   1.96x   1.69x   1.65x   1.70x   1.75x
1024B   1.88x   1.86x   1.45x   1.95x   1.96x   1.95x   1.77x   1.71x   1.77x   1.78x
8192B   1.91x   1.86x   1.42x   1.91x   2.03x   1.98x   1.73x   1.71x   1.78x   1.76x

camellia-asm vs aes-asm (8kB block):
        128bit  256bit
ecb-enc 1.15x   1.22x
ecb-dec 1.16x   1.16x
cbc-enc 0.85x   0.90x
cbc-dec 1.20x   1.23x
ctr-enc 1.28x   1.30x
ctr-dec 1.27x   1.28x
lrw-enc 1.12x   1.16x
lrw-dec 1.08x   1.10x
xts-enc 1.11x   1.15x
xts-dec 1.14x   1.15x

Intel Core2 T8100 (fam:6, model:23, step:6):

camellia-asm vs camellia_generic:
128bit key: (lrw:256bit) (xts:256bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     1.10x   1.12x   1.14x   1.16x   1.16x   1.15x   1.02x   1.02x   1.08x   1.08x
64B     1.61x   1.60x   1.17x   1.68x   1.67x   1.66x   1.43x   1.42x   1.44x   1.42x
256B    1.65x   1.73x   1.17x   1.77x   1.81x   1.80x   1.54x   1.53x   1.58x   1.54x
1024B   1.76x   1.74x   1.18x   1.80x   1.85x   1.85x   1.60x   1.59x   1.65x   1.60x
8192B   1.77x   1.75x   1.19x   1.81x   1.85x   1.86x   1.63x   1.61x   1.66x   1.62x

256bit key: (lrw:384bit) (xts:512bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     1.10x   1.07x   1.13x   1.16x   1.11x   1.16x   1.03x   1.02x   1.08x   1.07x
64B     1.61x   1.62x   1.15x   1.66x   1.63x   1.68x   1.47x   1.46x   1.47x   1.44x
256B    1.71x   1.70x   1.16x   1.75x   1.69x   1.79x   1.58x   1.57x   1.59x   1.55x
1024B   1.78x   1.72x   1.17x   1.75x   1.80x   1.80x   1.63x   1.62x   1.65x   1.62x
8192B   1.76x   1.73x   1.17x   1.78x   1.80x   1.81x   1.64x   1.62x   1.68x   1.64x

camellia-asm vs aes-asm (8kB block):
        128bit  256bit
ecb-enc 1.17x   1.21x
ecb-dec 1.17x   1.20x
cbc-enc 0.80x   0.82x
cbc-dec 1.22x   1.24x
ctr-enc 1.25x   1.26x
ctr-dec 1.25x   1.26x
lrw-enc 1.14x   1.18x
lrw-dec 1.13x   1.17x
xts-enc 1.14x   1.18x
xts-dec 1.14x   1.17x

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-14 | crypto: camellia - rename camellia.c to camellia_generic.c | Jussi Kivilinna | 2 | -1/+0
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-14 | crypto: camellia - fix checkpatch warnings | Jussi Kivilinna | 1 | -38/+41
Fix checkpatch warnings before renaming file. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-14 | crypto: camellia - rename camellia module to camellia_generic | Jussi Kivilinna | 2 | -1/+3
Rename camellia module to camellia_generic to allow optimized assembler implementations to autoload with module-alias. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-14 | crypto: tcrypt - add more camellia tests | Jussi Kivilinna | 1 | -0/+12
Add tests for CTR, LRW and XTS modes. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-14 | crypto: testmgr - add more camellia test vectors | Jussi Kivilinna | 2 | -4/+1424
New ECB, CBC, CTR, LRW and XTS test vectors for camellia. Larger ECB/CBC test vectors are needed for the parallel 2-way camellia implementation. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-03-14 | crypto: camellia - simplify key setup and CAMELLIA_ROUNDSM macro | Jussi Kivilinna | 1 | -21/+3
camellia_setup_tail() applies the 'inverse of the last half of the P-function' to subkeys, which is unneeded if keys are applied directly to yl/yr in CAMELLIA_ROUNDSM. The patch speeds up key setup and should speed up CAMELLIA_ROUNDSM, as applying the key to yl/yr early has fewer register dependencies.

Quick tcrypt camellia results:
  x86_64, AMD Phenom II : ~5% faster
  x86_64, Intel Core 2  : ~0.5% faster
  i386, Intel Atom N270 : ~1% faster

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-02-26 | netlink: add netlink_dump_control structure for netlink_dump_start() | Pablo Neira Ayuso | 1 | -3/+7
Davem considers that the argument list of this interface is getting out of control. This patch tries to address this issue following his proposal:

  struct netlink_dump_control c = {
          .dump = dump,
          .done = done,
          ...
  };

  netlink_dump_start(..., &c);

Suggested by David S. Miller. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
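At a crypto_user call site the conversion then takes roughly this shape (a sketch; the handler and socket names follow the crypto netlink code and are assumptions here):

  struct netlink_dump_control c = {
          .dump = crypto_dump_report,
          .done = crypto_dump_report_done,
  };

  /* The fixed argument list is replaced by the control structure. */
  return netlink_dump_start(crypto_nlsk, skb, nlh, &c);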
2012-02-16 | crypto: sha512 - use standard ror64() | Alexey Dobriyan | 1 | -9/+4
Use standard ror64() instead of a hand-written one. There is no standard ror64 yet, so create it. The difference is the shift value being "unsigned int" instead of uint64_t (for which there is no reason). gcc then starts to emit native ROR instructions, which for some reason it doesn't do for the open-coded rotate. This should make the code faster. Patch survives in-tree crypto test and ping flood with hmac(sha512) on. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
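The helper being introduced is the usual rotate idiom, matching the <linux/bitops.h> pattern:

  static inline __u64 ror64(__u64 word, unsigned int shift)
  {
          /* bits shifted out on the right re-enter on the left */
          return (word >> shift) | (word << (64 - shift));
  }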
2012-02-05 | crypto: In crypto_add_alg(), 'exact' wants to be initialized to 0 | Jesper Juhl | 1 | -1/+1
We declare 'exact' without initializing it and then do:

  [...]
  if (strlen(p->cru_driver_name))
          exact = 1;

  if (priority && !exact)
          return -EINVAL;
  [...]

If the first 'if' is not true, then the second will test an uninitialized 'exact'. As far as I can tell, what we want is for 'exact' to be initialized to 0 (zero/false). Signed-off-by: Jesper Juhl <jj@chaosbits.net> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
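The one-line fix, in context (a sketch):

  int exact = 0;  /* default: no exact driver-name match requested */

  if (strlen(p->cru_driver_name))
          exact = 1;

  if (priority && !exact)
          return -EINVAL;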
2012-02-05 | crypto: sha512 - Avoid stack bloat on i386 | Herbert Xu | 1 | -36/+32
Unfortunately in reducing W from 80 to 16 we ended up unrolling the loop twice. As gcc has issues dealing with 64-bit ops on i386 this means that we end up using even more stack space (>1K). This patch solves the W reduction by moving LOAD_OP/BLEND_OP into the loop itself, thus avoiding the need to duplicate it. While the stack space still isn't great (>0.5K) it is at least in the same ball park as the amount of stack used for our C sha1 implementation. Note that this patch basically reverts to the original code so the diff looks bigger than it really is. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-01-26 | crypto: sha512 - Use binary and instead of modulus | Herbert Xu | 1 | -2/+2
The previous patch used the modulus operator over a power of 2 unnecessarily, which may produce suboptimal binary code. This patch changes them to binary ands instead. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
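For a power-of-two table size the two forms agree on non-negative indexes; for the 16-entry schedule:

  /* i % 16 == i & 15 for i >= 0; the mask form never degrades
   * into a division instruction. */
  W[i & 15] = value;      /* instead of W[i % 16] = value */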
2012-01-26 | crypto: Add bulk algorithm registration interface | Mark Brown | 1 | -0/+35
Hardware crypto engines frequently need to register a selection of different algorithms with the core. Simplify their code slightly, especially the error handling, by providing functions to register a number of algorithms in a single call. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
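Typical driver-side use of the bulk interface (a sketch; the algorithm array and driver names are hypothetical):

  static struct crypto_alg my_algs[2];    /* filled in elsewhere */

  static int __init my_driver_init(void)
  {
          /* Registers every entry; on error the helper unwinds the
           * entries already registered, simplifying error handling. */
          return crypto_register_algs(my_algs, ARRAY_SIZE(my_algs));
  }

  static void __exit my_driver_exit(void)
  {
          crypto_unregister_algs(my_algs, ARRAY_SIZE(my_algs));
  }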
2012-01-15 | crypto: sha512 - use standard ror64() | Alexey Dobriyan | 1 | -9/+4
Use standard ror64() instead of a hand-written one. There is no standard ror64 yet, so create it. The difference is the shift value being "unsigned int" instead of uint64_t (for which there is no reason). gcc then starts to emit native ROR instructions, which for some reason it doesn't do for the open-coded rotate. This should make the code faster. Patch survives in-tree crypto test and ping flood with hmac(sha512) on. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-01-15 | crypto: sha512 - reduce stack usage to safe number | Alexey Dobriyan | 1 | -24/+34
For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15] and W[i - 16]. Consequently, keeping the whole W[80] array on stack is unnecessary; only 16 values are really needed. Using W[16] instead of W[80] greatly reduces stack usage (~750 bytes to ~340 bytes on x86_64).

Line by line explanation:

* BLEND_OP array is "circular" now; all indexes have to be modulo 16. The round number is positive, so the remainder operation should be without surprises.

* initial full message scheduling is trimmed to the first 16 values, which come from the data block; the rest is calculated just before it's needed.

* original loop body is an unrolled version of the new SHA512_0_15 and SHA512_16_79 macros; unrolling was done to avoid explicit variable renaming. Otherwise it's the very same code after preprocessing. See sha1_transform() code which does the same trick.

Patch survives in-tree crypto test and the original bug-report test (ping flood with hmac(sha512)).

See FIPS 180-2 for the SHA-512 definition: http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
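The circular indexing follows directly from the FIPS 180-2 recurrence (a sketch; s0/s1 stand for the sha512 sigma macros). Since (t - 16) mod 16 == t mod 16, the W[t - 16] term is simply the slot being overwritten, hence the +=:

  /* W[t] needs only W[t-2], W[t-7], W[t-15], W[t-16] for t >= 16,
   * so 16 slots suffice once every index is reduced mod 16. */
  W[t & 15] += s1(W[(t - 2) & 15]) + W[(t - 7) & 15] + s0(W[(t - 15) & 15]);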
2012-01-15 | crypto: sha512 - make it work, undo percpu message schedule | Alexey Dobriyan | 1 | -5/+1
commit d2188b3bbf2448b4b3f7082e5cffb1c2eb166799 aka "crypto: sha512 - Move message schedule W[80] to static percpu area" created a global message schedule area. If sha512_update is ever entered twice concurrently, the hash will be silently calculated incorrectly. Probably the easiest way to notice incorrect hashes being calculated is to run 2 ping floods over AH with hmac(sha512):

  #!/usr/sbin/setkey -f
  flush;
  spdflush;
  add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025;
  add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052;
  spdadd IP1 IP2 any -P out ipsec ah/transport//require;
  spdadd IP2 IP1 any -P in ipsec ah/transport//require;

XfrmInStateProtoError will start ticking with -EBADMSG being returned from ah_input(). This never happens with, say, hmac(sha1). With the patch applied (on BOTH sides), XfrmInStateProtoError does not tick with multiple bidirectional ping-flood streams, just as it doesn't with SHA-1. After this patch sha512_transform() will use ~750 bytes of stack on x86_64. This is OK for simple loads; for something heavier, stack reduction will be done separately. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-12-20 | crypto: gf128mul - remove leftover "(EXPERIMENTAL)" in Kconfig | Jussi Kivilinna | 1 | -1/+1
CRYPTO_GF128MUL does not select EXPERIMENTAL anymore so remove the "(EXPERIMENTAL)" from its name. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-12-20 | crypto: serpent-sse2 - select LRW and XTS | Jussi Kivilinna | 1 | -0/+4
serpent-sse2 uses functions from the LRW and XTS modules, so selecting them appears to be a better option than using #ifdefs in serpent_sse2_glue.c to enable/disable the LRW and XTS features. This also fixes a build problem when serpent-sse2 is built into the kernel but XTS/LRW are built as modules. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-12-20 | crypto: twofish-x86_64-3way - select LRW and XTS | Jussi Kivilinna | 1 | -0/+2
twofish-x86_64-3way uses functions from the LRW and XTS modules, so selecting them appears to be a better option than using #ifdefs in twofish_glue_3way.c to enable/disable the LRW and XTS features. This also fixes a build problem when twofish-x86_64-3way is built into the kernel but XTS/LRW are built as modules. Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-12-20 | crypto: xts - remove dependency on EXPERIMENTAL | Jussi Kivilinna | 1 | -2/+1
XTS has been EXPERIMENTAL since it was introduced in 2007. I'd say by now it has seen enough testing to justify removal of the EXPERIMENTAL tag. CC: Rik Snel <rsnel@cube.dyndns.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-12-20 | crypto: lrw - remove dependency on EXPERIMENTAL | Jussi Kivilinna | 1 | -2/+1
LRW has been EXPERIMENTAL since it was introduced in 2006. I'd say by now it has seen enough testing to justify removal of the EXPERIMENTAL tag. CC: Rik Snel <rsnel@cube.dyndns.org> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-11-30 | crypto: serpent-sse2 - should select CRYPTO_CRYPTD | Jussi Kivilinna | 1 | -0/+2
Since serpent_sse2_glue.c uses cryptd, CRYPTO_SERPENT_SSE2_X86_64 and CRYPTO_SERPENT_SSE2_586 should select CRYPTO_CRYPTD. Reported-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi> Acked-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>