Add num_prefetch_page and gap_page. If selected, prefetch/gap pages
will be added for every DMA mapping request.

Also define alignment per mapping. This alignment is used to
allocate/free IOVA during DMA map/unmap requests.
Bug 1463797
Change-Id: I32b22930b4414e43223287a2555ec50fe3f4ce36
Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com>
Reviewed-on: http://git-master/r/413266
Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
spinlock_t lock;
struct kref kref;
+
+ bool gap_page;
+ int num_prefetch_page;
+ /* FIXME: currently only alignment of 2^n is supported. */
+ size_t alignment;
+
};
struct dma_iommu_mapping *