author	NeilBrown <neilb@suse.com>	2016-01-07 11:02:34 +1100
committer	Vinod Koul <vinod.koul@intel.com>	2016-01-07 11:06:18 +0530
commit	874fbf3b2b51ca47c6a69a8e8ec5e8480c492478 (patch)
tree	e23eae0884d78fba55a41668cf809ae3ecd61159	/crypto/async_tx/async_memcpy.c
parent	aed50612bf09718be8955e224864d7f2e83e0727 (diff)
async_tx: use GFP_NOWAIT rather than GFP_NOIO

These async_XX functions are called from md/raid5 in an atomic section,
between get_cpu() and put_cpu(), so they must not sleep. So use
GFP_NOWAIT rather than GFP_NOIO.

Dan Williams writes: Longer term async_tx needs to be merged into md
directly, as we can allocate this unmap data statically per-stripe
rather than per request.

Fixes: 72d3260bd533 ("async_pq: convert to dmaengine_unmap_data")
Cc: stable@vger.kernel.org (v3.13+)
Reported-and-tested-by: Stanislav Samsonov <slava@annapurnalabs.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
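For context, a minimal sketch (not part of the patch) of the calling pattern the
message describes: code running between get_cpu() and put_cpu() has preemption
disabled, so any allocation on that path must use a non-sleeping flag such as
GFP_NOWAIT. The function name example_atomic_caller() and the kmalloc() size are
illustrative assumptions, not taken from md/raid5.

/*
 * Illustrative only: shows why a sleeping allocation is illegal in the
 * region the commit message refers to.
 */
#include <linux/smp.h>
#include <linux/gfp.h>
#include <linux/slab.h>

static void example_atomic_caller(void)
{
	int cpu = get_cpu();	/* returns current CPU, disables preemption */
	void *buf;

	/*
	 * GFP_NOIO (like GFP_KERNEL) may sleep waiting for memory, which is
	 * not allowed while preemption is disabled; GFP_NOWAIT instead fails
	 * fast and returns NULL when memory is not immediately available.
	 */
	buf = kmalloc(64, GFP_NOWAIT);
	if (buf)
		kfree(buf);

	put_cpu();		/* re-enables preemption */
	(void)cpu;
}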
Diffstat (limited to 'crypto/async_tx/async_memcpy.c')
-rw-r--r--	crypto/async_tx/async_memcpy.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/crypto/async_tx/async_memcpy.c b/crypto/async_tx/async_memcpy.c
index f8c0b8db..88bc8e6b 100644
--- a/crypto/async_tx/async_memcpy.c
+++ b/crypto/async_tx/async_memcpy.c
@@ -53,7 +53,7 @@ async_memcpy(struct page *dest, struct page *src, unsigned int dest_offset,
 	struct dmaengine_unmap_data *unmap = NULL;
 
 	if (device)
-		unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOIO);
+		unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);
 
 	if (unmap && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
 		unsigned long dma_prep_flags = 0;
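
As a follow-up illustration, a hedged sketch of the pattern the fixed hunk relies
on: request the unmap descriptor with GFP_NOWAIT so the call never sleeps, and fall
back to a synchronous CPU copy when the allocation fails. dmaengine_get_unmap_data()
and dmaengine_unmap_put() are the real dmaengine helpers; example_submit_copy() and
the commented-out fallback are hypothetical and only stand in for what a caller
would do.

#include <linux/dmaengine.h>

static void example_submit_copy(struct dma_device *device)
{
	struct dmaengine_unmap_data *unmap = NULL;

	if (device)
		unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);

	if (!unmap) {
		/* No descriptor without sleeping: do the copy on the CPU. */
		/* sync_copy_fallback(); */
		return;
	}

	/* ... map the pages, prepare and submit the DMA descriptor ... */

	dmaengine_unmap_put(unmap);	/* drop our reference when done */
}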