Arun Swain [Wed, 5 Nov 2014 19:01:23 +0000 (11:01 -0800)]
dc: tegra: enable/disable prism conditionally
Enable/disable prism depending on the brightness
value, but only if it is enabled from user space.
Otherwise it will affect test cases which require
prism to be off.
When the AF bit is zero for a user-space buffer, access faults are observed
during address translation in the secure world.
Lock userspace buffers using sys_mlock to ensure that AF bit is not
cleared once they are passed to the secure world. Also invoke
fixup_user_fault API to set AF bit to 1 to handle the case when AF bit
is zero even before calling sys_mlock.
Mahantesh Kumbar [Tue, 14 Oct 2014 12:14:34 +0000 (17:44 +0530)]
gk20a: Moved bind fecs to init_gr_support
-Moved bind fecs from work queue to init_gr_support.
-It makes all CPU->FECS communication happen before
booting the PMU; after we boot the PMU, only the PMU talks to
the FECS. This removes the possibility of a race between the CPU
and the PMU talking to the FECS.
Bibhay Ranjan [Fri, 24 Oct 2014 21:08:01 +0000 (14:08 -0700)]
bcmdhd:fix kernel panic due to early free of ndev
Because of http://git-master/r/555458, the net_device
structure was moved to another context, which
eventually called free_netdev and released the
memory early, resulting in a kernel panic.
Move free_netdev back to its original context.
However, free_netdev has to be synchronized with
the wl_event_handler thread as per Bug 200040067, so
this synchronization is done via the netif_sem
semaphore using a writers lock.
Manish Bansal [Tue, 28 Oct 2014 11:55:31 +0000 (17:25 +0530)]
net:wireless:bcmdhd: Don't enable Mon mode for p2p
The P2P CERT 6.1.9 test case is failing because monitor mode
creates an issue in registering the Action frame for P2P-GO in
firmware, which led to an error in receiving action frames
on the GO interface. Do not enable monitor mode for P2P in the
driver, as the DUT does not support the mode.
Sai Gurrappadi [Wed, 4 Jun 2014 00:46:23 +0000 (17:46 -0700)]
nohz: stat: Fix decreasing idle/iowait times
Always read nohz idle/iowait counters if nohz is enabled even if the cpu
is offline. This prevents a decreasing counter if a reader reads
/proc/stat before and after a cpu is offlined. Currently /proc/stat
switches between using the nohz counters and the sched-tick counters
both of which are updated independently and could therefore be out of
sync.
Commit "7386cdb nohz: Fix idle ticks in cpu summary line of /proc/stat"
introduced the check to fall back onto using the sched-tick counters
because the nohz counters updated incorrectly when a cpu was offlined.
However, commit "4b0c0f2 tick: Cleanup NOHZ per cpu data on cpu down"
properly fixes the issue by clearing nohz state on cpu-down thereby
preventing faulty nohz counter updates. So we can now safely remove the
cpu_online() checks introduced by 7386cdb.
Default route changes were missing as part of
dbed723911c6ce4c1b9b3d3b8a9ac7ed681b646a, so
ipv6 global address acquisition fails most of the time.
Adding the missing default route changes resolves the issue.
This ioctl is intended to be used for the purpose
of changing the panel gamma and saturation while the
panel is on, without the visual artefacts that would
result from disabling CMU during the update, and while
minimizing risk of CPU/DC CMU access collisions.
The ioctl TEGRA_DC_EXT_SET_CMU_ALIGNED updates only
the entries in CMU CSC and LUT2 that have changed since
the last CMU update, keeps CMU enabled while doing
so, and aligns the update with the next FRAME_END_INT.
Vandana Salve [Mon, 20 Oct 2014 16:32:20 +0000 (22:02 +0530)]
dma: coherent: fix VPR dma allocation path
Fixed VPR allocation path by passing DMA_MEMORY_EXCLUSIVE
which will only allocate memory from the VPR region.
Do not allow dma_alloc_coherent() to fall back to
system memory when it's out of memory in the VPR region.
Michael Frydrych [Thu, 10 Jul 2014 11:13:07 +0000 (14:13 +0300)]
video: tegra: dc: Keep shadow syncpt vals in sync.
dc keeps its own shadow copy of max syncpt value stored
by nvhost. Both copies must be kept in sync, otherwise
dc may advance a syncpt beyond the value which it will
return for subsequent flip.
Theoretically, only one copy of the syncpt needs to be
maintained. This fix adds maintenance of the second copy
rather than removing one copy altogether, to stay consistent
with the original code.
Disable the vbus irq at device suspend if an OTG cable is connected
and re-enable the vbus irq on device resume.
Set the extcon usb state to false if the VBUS state is disconnected
in resume.
Lorenzo Colitti [Wed, 26 Mar 2014 10:35:41 +0000 (19:35 +0900)]
net: ipv6: autoconf routes into per-device tables
Currently, IPv6 router discovery always puts routes into
RT6_TABLE_MAIN. This causes problems for connection managers
that want to support multiple simultaneous network connections
and want control over which one is used by default (e.g., wifi
and wired).
To work around this, connection managers typically take the routes
they prefer and copy them to static routes with low metrics in
the main table. This puts the burden on the connection manager
to watch netlink to see if the routes have changed, delete the
routes when their lifetime expires, etc.
Instead, this patch adds a per-interface sysctl to have the
kernel put autoconf routes into different tables. This allows
each interface to have its own autoconf table, and choosing the
default interface (or using different interfaces at the same
time for different types of traffic) can be done using
appropriate ip rules.
The sysctl behaves as follows:
- = 0: default. Put routes into RT6_TABLE_MAIN as before.
- > 0: manual. Put routes into the specified table.
- < 0: automatic. Add the absolute value of the sysctl to the
device's ifindex, and use that table.
The automatic mode is most useful in conjunction with
net.ipv6.conf.default.accept_ra_rt_table. A connection manager
or distribution could set it to, say, -100 on boot, and
thereafter just use IP rules.
Lorenzo Colitti [Wed, 26 Mar 2014 04:03:12 +0000 (13:03 +0900)]
net: support marking accepting TCP sockets
When using mark-based routing, sockets returned from accept()
may need to be marked differently depending on the incoming
connection request.
This is the case, for example, if different socket marks identify
different networks: a listening socket may want to accept
connections from all networks, but each connection should be
marked with the network that the request came in on, so that
subsequent packets are sent on the correct network.
This patch adds a sysctl to mark TCP sockets based on the fwmark
of the incoming SYN packet. If enabled, and an unmarked socket
receives a SYN, then the SYN packet's fwmark is written to the
connection's inet_request_sock, and later written back to the
accepted socket when the connection is established. If the
socket already has a nonzero mark, then the behaviour is the same
as it is today, i.e., the listening socket's fwmark is used.
Black-box tested using user-mode linux:
- IPv4/IPv6 SYN+ACK, FIN, etc. packets are routed based on the
mark of the incoming SYN packet.
- The socket returned by accept() is marked with the mark of the
incoming SYN packet.
- Tested with syncookies=1 and syncookies=2.
Lorenzo Colitti [Tue, 18 Mar 2014 11:52:27 +0000 (20:52 +0900)]
net: add a sysctl to reflect the fwmark on replies
Kernel-originated IP packets that have no user socket associated
with them (e.g., ICMP errors and echo replies, TCP RSTs, etc.)
are emitted with a mark of zero. Add a sysctl to make them have
the same mark as the packet they are replying to.
This allows an administrator that wishes to do so to use
mark-based routing, firewalling, etc. for these replies by
marking the original packets inbound.
Tested using user-mode linux:
- ICMP/ICMPv6 echo replies and errors.
- TCP RST packets (IPv4 and IPv6).
xerox_lin [Thu, 14 Aug 2014 06:48:44 +0000 (14:48 +0800)]
usb: Add support for rndis uplink aggregation
The RNDIS protocol supports data aggregation on the uplink and can help
reduce MIPS by reducing the number of interrupts on the device. Throughput
is also improved by 20-30%. Aggregation is disabled by setting the
aggregation packet size to 1. To help UL throughput, set
UL aggregation support to 3 RNDIS packets by default. It can be
configured via the module parameter rndis_ul_max_pkt_per_xfer.
Gagan Grover [Thu, 16 Oct 2014 07:25:15 +0000 (12:55 +0530)]
futex-prevent-requeue-pi-on-same-futex.patch
futex: Forbid uaddr == uaddr2 in futex_requeue(..., requeue_pi=1)
If uaddr == uaddr2, then we have broken the rule of only requeueing
from a non-pi futex to a pi futex with this call. If we attempt this,
then dangling pointers may be left for rt_waiter, resulting in an
exploitable condition.
This change brings futex_requeue() in line with
futex_wait_requeue_pi() which performs the same check as per
commit 6f7b0a2a5c0f
("futex: Forbid uaddr == uaddr2 in futex_wait_requeue_pi()")
[ tglx: Compare the resulting keys as well, as uaddrs might be
different depending on the mapping ]
Jordan Nien [Wed, 8 Oct 2014 09:10:41 +0000 (17:10 +0800)]
input: touchscreen: raydium: update to 66.9
66.9 Change list:
[1]. Based on firmware 66.7.
[2]. Fixed line broken with palm.
[3]. Keep the original setting of event report mode after
suspend/resume.
[4]. System Suspend / Resume with palm, will not enter the Auto Scan.
[5]. Fix Touch reported speed not matching the actual speed in some
particular areas 3 to 5mm away from the edge.
[6]. Fix deadlock between rm_tch_cmd_process and rm_timer_work_handler
during lp0.
[7]. Add solution for rm_ts_server issues with SELinux domain.
video: tegra: nvmap: remove support for deprecated GET_ID/FROM_ID ioctls
Remove support and add a warning message for the deprecated ioctls
NVMAP_IOC_FROM_ID and NVMAP_IOC_GET_ID. These ioctl calls
are deprecated by the corresponding FD ioctl calls.
Incremented the nvmap_handle ref count in the utility function
nvmap_get_id_from_dmabuf_fd() before the function releases its reference
to the dma buffer. This is required to avoid race conditions in nvmap
code where the nvmap_handle returned by this function could be freed
concurrently while the caller is still using it.
As a side effect of above change, every caller of this utility
function must decrement nvmap_handle ref count after using the
returned nvmap_handle.
Daniel Solomon [Fri, 15 Aug 2014 00:50:15 +0000 (17:50 -0700)]
video: tegra: dc: Avoid FRAME_END_INT conflict
Allowing for dc->lock to be acquired by the
caller in function tegra_dc_config_frame_end_intr
can result in FRAME_END_INT mask register being
overwritten if the lock is actually acquired by
another thread.
Refactor the critical section into its own function
and allow callers to call either function. Also
change the name of tegra_dc_wait_for_frame_end
to indicate that it should be called with dc->lock
locked.
Daniel Solomon [Tue, 5 Aug 2014 21:48:42 +0000 (14:48 -0700)]
video: tegra: dc: Fix and refactor FRAME_END_INT
- Fix a conflict with other DC interrupt masks
when the DSI driver waits on FRAME_END_INT
- Move generic FRAME_END_INT mask/unmask and
wait-for functions to dc.c
Tune the vic scaling parameter to make vic more sensitive to load and stay longer
at a slightly higher frequency.
- This tuning does not increase power in regular use cases.
- This will help fix CTS1 testPreviewFpsRange.
Bibhay Ranjan [Fri, 10 Oct 2014 12:25:53 +0000 (17:55 +0530)]
net: wireless: bcmdhd: synchronize 3 contexts
net_device and net_info are used in three different
contexts: wl_event_handler, _dhd_sysioc_thread and
wl_dealloc_netinfo. These contexts are
triggered when AP association is happening
and P2P interface deletion happens simultaneously.
In this particular scenario, these two structs
get corrupted as they are not synchronized.
Synchronizing them with a readers/writers semaphore
makes sure we do not free the memory in one context
while another context is still using it.
Jon McCaffrey [Fri, 10 Oct 2014 23:08:42 +0000 (16:08 -0700)]
nvmap: set background allocator to SCHED_IDLE
Set background allocator to SCHED_IDLE, so that it only runs when no
other processes wish to run. Otherwise, it can run for 20-100ms with only
occasional interruption, significantly disrupting other processing.
Alex Waterman [Mon, 14 Apr 2014 23:17:27 +0000 (16:17 -0700)]
video: tegra: nvmap: Fix zero page support
In the case that the zeroed-page kernel config is set, the
userspace zeroed-memory module param was also required to be set;
otherwise non-zero memory could be placed back into the page
pools.
Alex Waterman [Mon, 28 Apr 2014 18:27:17 +0000 (11:27 -0700)]
video: tegra: nvmap: Consolidate zeroed mem config
Consolidate the NVMAP_FORCE_ZEROED_USER_PAGES config to only
two locations. Both are in nvmap_handle.c and ensure that
the module param zero_memory is enabled and unchangeable when
NVMAP_FORCE_ZEROED_USER_PAGES is set.
Krishna Reddy [Tue, 5 Aug 2014 21:43:37 +0000 (14:43 -0700)]
video: tegra: nvmap: clean cache during page allocations into page pool
Clean cache during page allocations into page pool to
avoid cache clean overhead at the time of allocation.
Increase page pool refill size to 1MB from 512KB.
Alex Waterman [Thu, 3 Apr 2014 01:21:18 +0000 (18:21 -0700)]
video: tegra: nvmap: Remove old ZP support
Remove the old foreground page zeroing support. This is replaced
by using the background zeroed page pool support instead. If page
pools are empty clearing happens in the allocation context.
Alex Waterman [Thu, 3 Apr 2014 01:20:14 +0000 (18:20 -0700)]
video: tegra: nvmap: Add background allocator
Add a background kernel thread that allocates memory into the
page pool.
This allows zeroed pages to be allocated directly into the page
pool. In turn this avoids that overhead in the allocation path
itself (for page pool hits at least).
Pre-flushing the pages being placed into the page pool will be
implemented later.
Allen Yu [Mon, 21 Jul 2014 05:04:23 +0000 (13:04 +0800)]
video: tegra: dc: protect vsync code with lp_lock
There is a gap in vsycn code (wait for user vblank completion) that is not
and can not be protected by dc->lock. So there might be races between vsync
code and PM code. For example, if tegra_dsi_host_suspend() or tegra_dc_disable()
is called while vsync thread is waiting for the completion, dc clock will be
disabled as we drop all references to dc clock in _suspend() or _disable().
Fix description:
- Rename one_shot_lp_lock to lp_lock as we need it for continuous mode as well.
- Protect vsync code with lp_lock to eliminate races with PM code path.
Jon Mayo [Thu, 10 Jul 2014 18:07:05 +0000 (11:07 -0700)]
video: tegra: dc: eliminate races in vsync code
Fix races in vsync code, don't touch data structure outside of locks.
Support continuous-mode panels as well, not just one-shot panels.
Use a bit flag so we don't clobber vblank settings needed by other modules.
Scott Long [Fri, 29 Aug 2014 23:18:18 +0000 (16:18 -0700)]
security: tlk_driver: free tmp memrefs
Release temporary memory parameter references at the conclusion
of a launch operation to ensure pages are unpinned and
other resources are properly cleaned up.
Gaurav Sarode [Mon, 4 Aug 2014 21:24:04 +0000 (14:24 -0700)]
video: tegra: nvmap: Fix sleeping while atomic warning
When reading /d/nvmap/iovmm/procrank, we first take clients_lock
spin_lock and then take ref_lock mutex inside nvmap_iovmm_get_client_mss.
This creates mutex inside spin_lock situation. To fix this,
clients_lock is converted to mutex.
Shital Jaju [Thu, 14 Aug 2014 17:13:26 +0000 (10:13 -0700)]
Update the cfg layer with new channel info
The upper layer is not notified of the change in channel after roaming.
This results in AGO creation on an incorrect channel (the previous AP channel)
after roaming. This change updates the cfg layer with the new channel
info after receiving the roaming event.
Martin Chabot [Thu, 3 Jul 2014 13:47:31 +0000 (15:47 +0200)]
usb: host: tegra: no delay for boost frequency
Apply the frequency boost as soon as bus_resume is done
to avoid a no-scheduling situation when there are a
lot of ehci_irq interrupts.
Move the frequency boost after ehci_resume to keep the boost
for high-speed devices only.
Make sure only to decrement the PM counters if they were actually
incremented.
Note that the USB PM counter, but not necessarily the driver core PM
counter, is reset when the interface is unbound.
Fixes: 11ea859d64b6 ("USB: additional power savings for cdc-acm devices
that support remote wakeup")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 4421a014e97c6669db6eb8600ce83d29e5484842)
Change-Id: I9038def962389acfca7f6a583e719d15f0f8d758
Signed-off-by: Neil Patel <neilp@nvidia.com>
Reviewed-on: http://git-master/r/553907
GVS: Gerrit_Virtual_Submit
Reviewed-by: Steve Lin <stlin@nvidia.com>
Added new file "maps" for nvmap heaps. In addition to data given by
existing "allocations" file, this also shows the client's virtual
mappings and total amount of handle physical memory that is actually
mapped to a client's virtual address space.
This change will help in tracking nvmap memory usage of processes.
Patch includes following nvmap changes:
- added a "pid" field in nvmap_vma_list, so by looking at a handle's vma list
we can tell which vma belongs to which process.
- sorted handle's vma list in ascending order of handle offsets.
Change-Id: If7e25ca2ef43c036558c9c9ead5f67ee8eef6b42
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/426734
(cherry picked from commit c1ddad1b13332386857f9f2964aa8968094e7e8c)
Reviewed-on: http://git-master/r/553676
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Krishna Reddy [Fri, 20 Jun 2014 21:33:55 +0000 (14:33 -0700)]
video: tegra: nvmap: unify debug stats code
Unify debug stats code for iovmm and carveouts.
Change-Id: Ief800587870845ed6f566cb7afb2c91000d177ca
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/426733
(cherry picked from commit 0c0f7e5a9ef459d7940cc66af0a00321bb54d389)
Reviewed-on: http://git-master/r/553675
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Krishna Reddy [Thu, 19 Jun 2014 23:10:23 +0000 (16:10 -0700)]
video: tegra: nvmap: don't count shared memory in full
Don't count shared memory in full in iovmm stats.
Add SHARE field to allocations info to show how many
processes are sharing the handle.
Update a few comments in the code.
Remove unnecessary iovm_commit accounting.
Change-Id: I49650bf081d652dedc7139f639aae6da06965ecd
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/426274
(cherry picked from commit 92d47c10fbf7a315d4c953bafb71ee23032b7f65)
Reviewed-on: http://git-master/r/553673
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Krishna Reddy [Tue, 17 Jun 2014 19:30:16 +0000 (12:30 -0700)]
video: tegra: nvmap: set handle dmabuf to NULL early
This allows catching any use of the handle's dmabuf during its free.
Change-Id: Ie20c7b860ca5194a190ff7005302bf50602d16ed
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/424329
(cherry picked from commit 10e648c2e2f9760c97ce55a1091d9e7097d2504d)
Reviewed-on: http://git-master/r/553669
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Krishna Reddy [Fri, 20 Jun 2014 00:34:01 +0000 (17:34 -0700)]
video: tegra: nvmap: add handle share count to debug stats
handle share count provides info on how many processes are sharing
the handle. IOW, how many processes are holding a ref on handle.
Update the comments for umap/kmap_count.
Change-Id: I9f543ebf51842dad6ecd3bfeb7480496c98963be
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/426302
(cherry picked from commit 244c41508be0705cc232942b9403e17611f63e45)
Reviewed-on: http://git-master/r/553668
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Change-Id: I0180f59ced7d070d1952e66cc7f1b21510a53c0e
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/418555
(cherry picked from commit bb20ce7cb828f4e64c64f538cace5f414d9e74fc)
Reviewed-on: http://git-master/r/553651
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Krishna Reddy [Mon, 2 Jun 2014 23:18:56 +0000 (16:18 -0700)]
video: tegra: nvmap: track kernel and user map count
Track kernel and user map counts and add these to debug info.
Bug 1519700
Change-Id: I9b06bd748737dbfe57f531af4f9b61a48429d01a
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/417980
(cherry picked from commit e57544e6a284d228548ee555e6e1aff0f0a494e8)
Reviewed-on: http://git-master/r/553650
Reviewed-by: Harry Lin <harlin@nvidia.com>
Tested-by: Harry Lin <harlin@nvidia.com>
GVS: Gerrit_Virtual_Submit
Krishna Reddy [Wed, 4 Jun 2014 21:50:05 +0000 (14:50 -0700)]
video: tegra: nvmap: track vma for all handles
Clean up the code related to mmap, and handle nvmap_map_info_caller_ptr
failures gracefully.
Initialize h->vmas in the right place.
Add sanity checks in nvmap_vma_open/_close.
Vinayak Menon [Wed, 26 Feb 2014 19:06:22 +0000 (00:36 +0530)]
staging: android: lowmemorykiller: neglect swap cached pages in other_file
With ZRAM enabled it is observed that lowmemory killer
doesn't trigger properly. swap cached pages are
accounted in NR_FILE, and lowmemorykiller considers
this as reclaimable and adds to other_file. But these
pages can't be reclaimed unless lowmemorykiller triggers.
So subtract swap pages from other_file.
Processes residing in memory are much faster than processes
in swap. For better user experience in a multiprocessing
environment, it is preferable not to allow too many processes
to reside in memory.
Allen Yu [Wed, 20 Aug 2014 03:39:59 +0000 (11:39 +0800)]
media: tegra: nvavp: avoid racing in nvavp_uninit
nvavp_init() might be called when open_lock is dropped in nvavp_uninit(),
which will mess up the _init/_uninit sequence. To eliminate the race,
remove the unnecessary cancel_work_sync() and also the _unlock/_lock
around it. It is safe to do so since nvavp_uninit() sets nvavp->pending
to false in nvavp_halt_vde(), and the work handler will do nothing if
nvavp->pending is false.
This change adds the LP1 Low Core Voltage feature to all platforms.
It enables the core voltage to be lowered during the voice call (LP1)
state. It also rearranges the sequence for reducing the core voltage.