Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
Date: Tue, 29 Oct 2019 14:00:28
Message-Id: 1572357542.9fd97c132540e56fdedd554fa601f0bb905da89a.mpagano@gentoo
commit:     9fd97c132540e56fdedd554fa601f0bb905da89a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 21 14:40:03 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Oct 29 13:59:02 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9fd97c13

Linux patch 4.14.134

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |    4 +
 1133_linux-4.14.134.patch | 3972 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3976 insertions(+)

diff --git a/0000_README b/0000_README
index dd56b3e..befc228 100644
--- a/0000_README
+++ b/0000_README
@@ -575,6 +575,10 @@ Patch: 1132_linux-4.14.133.patch
 From: https://www.kernel.org
 Desc: Linux 4.14.133
 
+Patch: 1133_linux-4.14.134.patch
+From: https://www.kernel.org
+Desc: Linux 4.14.134
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
 
diff --git a/1133_linux-4.14.134.patch b/1133_linux-4.14.134.patch
new file mode 100644
index 0000000..3ba280f
--- /dev/null
+++ b/1133_linux-4.14.134.patch
@@ -0,0 +1,3972 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-net-qmi b/Documentation/ABI/testing/sysfs-class-net-qmi
+index 7122d6264c49..c310db4ccbc2 100644
+--- a/Documentation/ABI/testing/sysfs-class-net-qmi
++++ b/Documentation/ABI/testing/sysfs-class-net-qmi
+@@ -29,7 +29,7 @@ Contact: Bjørn Mork <bjorn@××××.no>
+ Description:
+ Unsigned integer.
+
+- Write a number ranging from 1 to 127 to add a qmap mux
++ Write a number ranging from 1 to 254 to add a qmap mux
+ based network device, supported by recent Qualcomm based
+ modems.
+
+@@ -46,5 +46,5 @@ Contact: Bjørn Mork <bjorn@××××.no>
+ Description:
+ Unsigned integer.
+
+- Write a number ranging from 1 to 127 to delete a previously
++ Write a number ranging from 1 to 254 to delete a previously
+ created qmap mux based network device.
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index ffc064c1ec68..49311f3da6f2 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -9,5 +9,6 @@ are configurable at compile, boot or run time.
+ .. toctree::
+ :maxdepth: 1
+
++ spectre
+ l1tf
+ mds
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+new file mode 100644
+index 000000000000..25f3b2532198
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -0,0 +1,697 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++Spectre Side Channels
++=====================
++
++Spectre is a class of side channel attacks that exploit branch prediction
++and speculative execution on modern CPUs to read memory, possibly
++bypassing access controls. Speculative execution side channel exploits
++do not modify memory but attempt to infer privileged data in memory.
++
++This document covers Spectre variant 1 and Spectre variant 2.
++
++Affected processors
++-------------------
++
++Speculative execution side channel methods affect a wide range of modern
++high performance processors, since most modern high speed processors
++use branch prediction and speculative execution.
++
++The following CPUs are vulnerable:
++
++ - Intel Core, Atom, Pentium, and Xeon processors
++
++ - AMD Phenom, EPYC, and Zen processors
++
++ - IBM POWER and zSeries processors
++
++ - Higher end ARM processors
++
++ - Apple CPUs
++
++ - Higher end MIPS CPUs
++
++ - Likely most other high performance CPUs. Contact your CPU vendor for details.
++
++Whether a processor is affected or not can be read out from the Spectre
++vulnerability files in sysfs. See :ref:`spectre_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries describe Spectre variants:
++
++ ============= ======================= =================
++ CVE-2017-5753 Bounds check bypass     Spectre variant 1
++ CVE-2017-5715 Branch target injection Spectre variant 2
++ ============= ======================= =================
++
++Problem
++-------
++
++CPUs use speculative operations to improve performance. That may leave
++traces of memory accesses or computations in the processor's caches,
++buffers, and branch predictors. Malicious software may be able to
++influence the speculative execution paths, and then use the side effects
++of the speculative execution in the CPUs' caches and buffers to infer
++privileged data touched during the speculative execution.
++
++Spectre variant 1 attacks take advantage of speculative execution of
++conditional branches, while Spectre variant 2 attacks use speculative
++execution of indirect branches to leak privileged memory.
++See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
++:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
++
++Spectre variant 1 (Bounds Check Bypass)
++---------------------------------------
++
++The bounds check bypass attack :ref:`[2] <spec_ref2>` takes advantage
++of speculative execution that bypasses conditional branch instructions
++used for memory access bounds checks (e.g. checking whether the index
++into an array stays within a valid range). Memory accesses with an
++out-of-bounds index can thus be performed speculatively before the
++validation checks resolve. Such speculative memory accesses can leave
++side effects, creating side channels which leak information to the
++attacker.
++
++There are some extensions of Spectre variant 1 attacks for reading data
++over the network, see :ref:`[12] <spec_ref12>`. However such attacks
++are difficult, low bandwidth, fragile, and are considered low risk.
++
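++A minimal sketch of the vulnerable pattern and its "bounds clipping"
++fix. The array, index, and size names are illustrative rather than
++taken from kernel code; array_index_nospec() is the nospec accessor
++macro from include/linux/nospec.h::
++
++	if (idx < size)			/* branch may be predicted taken... */
++		val = array[idx];	/* ...so this load can run speculatively
++					 * even when idx is out of bounds */
++
++	if (idx < size) {
++		/* clamp idx to [0, size) without a mispredictable branch */
++		idx = array_index_nospec(idx, size);
++		val = array[idx];
++	}
++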
++Spectre variant 2 (Branch Target Injection)
++-------------------------------------------
++
++The branch target injection attack takes advantage of speculative
++execution of indirect branches :ref:`[3] <spec_ref3>`. The indirect
++branch predictors inside the processor used to guess the target of
++indirect branches can be influenced by an attacker, causing gadget code
++to be speculatively executed, thus exposing sensitive data touched by
++the victim. The side effects left in the CPU's caches during speculative
++execution can be measured to infer data values.
++
++.. _poison_btb:
++
++In Spectre variant 2 attacks, the attacker can steer speculative indirect
++branches in the victim to gadget code by poisoning the branch target
++buffer of a CPU used for predicting indirect branch addresses. Such
++poisoning could be done by indirect branching into existing code,
++with the address offset of the indirect branch under the attacker's
++control. Since the branch prediction on impacted hardware does not
++fully disambiguate branch address and uses the offset for prediction,
++this could cause privileged code's indirect branch to jump to gadget
++code with the same offset.
++
++The most useful gadgets take an attacker-controlled input parameter (such
++as a register value) so that the memory read can be controlled. Gadgets
++without input parameters might be possible, but the attacker would have
++very little control over what memory can be read, reducing the risk of
++the attack revealing useful data.
++
++One other variant 2 attack vector is for the attacker to poison the
++return stack buffer (RSB) :ref:`[13] <spec_ref13>` to cause speculative
++subroutine return instruction execution to go to a gadget. An attacker's
++imbalanced subroutine call instructions might "poison" entries in the
++return stack buffer which are later consumed by a victim's subroutine
++return instructions. This attack can be mitigated by flushing the return
++stack buffer on context switch, or virtual machine (VM) exit.
++
++On systems with simultaneous multi-threading (SMT), attacks are possible
++from the sibling thread, as level 1 cache and branch target buffer
++(BTB) may be shared between hardware threads in a CPU core. A malicious
++program running on the sibling thread may influence its peer's BTB to
++steer its indirect branch speculations to gadget code, and measure the
++speculative execution's side effects left in level 1 cache to infer the
++victim's data.
++
++Attack scenarios
++----------------
++
++The following list of attack scenarios has been anticipated, but may
++not cover all possible attack vectors.
++
++1. A user process attacking the kernel
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ The attacker passes a parameter to the kernel via a register or
++ via a known address in memory during a syscall. Such a parameter may
++ later be used by the kernel as an index into an array or to derive
++ a pointer for a Spectre variant 1 attack. The index or pointer
++ is invalid, but bounds checks are bypassed in the code branch taken
++ for speculative execution. This could cause privileged memory to be
++ accessed and leaked.
++
++ For kernel code that has been identified where data pointers could
++ potentially be influenced for Spectre attacks, new "nospec" accessor
++ macros are used to prevent speculative loading of data.
++
++ A Spectre variant 2 attacker can :ref:`poison <poison_btb>` the branch
++ target buffer (BTB) before issuing a syscall to launch an attack.
++ After entering the kernel, the kernel could use the poisoned branch
++ target buffer on an indirect jump and jump to gadget code in
++ speculative execution.
++
++ If an attacker tries to control the memory addresses leaked during
++ speculative execution, they would also need to pass a parameter to the
++ gadget, either through a register or a known address in memory. After
++ the gadget has executed, they can measure the side effects.
++
++ The kernel can protect itself against consuming poisoned branch
++ target buffer entries by using return trampolines (also known as
++ "retpoline") :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` for all
++ indirect branches. Return trampolines trap speculative execution paths
++ to prevent jumping to gadget code during speculative execution.
++ x86 CPUs with Enhanced Indirect Branch Restricted Speculation
++ (Enhanced IBRS) available in hardware should use the feature to
++ mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is
++ more efficient than retpoline.
++
++ There may be gadget code in firmware which could be exploited with
++ a Spectre variant 2 attack by a rogue user process. To mitigate such
++ attacks on x86, the Indirect Branch Restricted Speculation (IBRS)
++ feature is turned on before the kernel invokes any firmware code.
++
++2. A user process attacking another user process
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ A malicious user process can try to attack another user process,
++ either via a context switch on the same hardware thread, or from the
++ sibling hyperthread sharing a physical processor core on a simultaneous
++ multi-threading (SMT) system.
++
++ Spectre variant 1 attacks generally require passing parameters
++ between the processes, which needs a data passing relationship, such
++ as remote procedure calls (RPC). Those parameters are used in gadget
++ code to derive invalid data pointers accessing privileged memory in
++ the attacked process.
++
++ Spectre variant 2 attacks can be launched from a rogue process by
++ :ref:`poisoning <poison_btb>` the branch target buffer. This can
++ influence the indirect branch targets for a victim process that either
++ runs later on the same hardware thread, or runs concurrently on
++ a sibling hardware thread sharing the same physical core.
++
++ A user process can protect itself against Spectre variant 2 attacks
++ by using the prctl() syscall to disable indirect branch speculation
++ for itself. An administrator can also cordon off an unsafe process
++ from polluting the branch target buffer by disabling the process's
++ indirect branch speculation. This comes with a performance cost
++ from not using indirect branch speculation and clearing the branch
++ target buffer. When SMT is enabled on x86, for a process that has
++ indirect branch speculation disabled, Single Threaded Indirect Branch
++ Predictors (STIBP) :ref:`[4] <spec_ref4>` are turned on to prevent the
++ sibling thread from controlling the branch target buffer. In addition,
++ the Indirect Branch Prediction Barrier (IBPB) is issued to clear the
++ branch target buffer when context switching to and from such a process.
++
++ On x86, the return stack buffer is stuffed on context switch.
++ This prevents the branch target buffer from being used for branch
++ prediction when the return stack buffer underflows while switching to
++ a deeper call stack. Any poisoned entries in the return stack buffer
++ left by the previous process will also be cleared.
++
++ User programs should use address space randomization to make attacks
++ more difficult (set /proc/sys/kernel/randomize_va_space = 1 or 2).
++
++3. A virtualized guest attacking the host
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ The attack mechanism is similar to how user processes attack the
++ kernel. The kernel is entered via hyper-calls or other virtualization
++ exit paths.
++
++ For Spectre variant 1 attacks, rogue guests can pass parameters
++ (e.g. in registers) via hyper-calls to derive invalid pointers to
++ speculate into privileged memory after entering the kernel. For places
++ where such kernel code has been identified, nospec accessor macros
++ are used to stop speculative memory access.
++
++ For Spectre variant 2 attacks, rogue guests can :ref:`poison
++ <poison_btb>` the branch target buffer or return stack buffer, causing
++ the kernel to jump to gadget code in the speculative execution paths.
++
++ To mitigate variant 2, the host kernel can use return trampolines
++ for indirect branches to bypass the poisoned branch target buffer,
++ and flush the return stack buffer on VM exit. This prevents rogue
++ guests from affecting indirect branching in the host kernel.
++
++ To protect host processes from rogue guests, host processes can have
++ indirect branch speculation disabled via prctl(). The branch target
++ buffer is cleared before context switching to such processes.
++
++4. A virtualized guest attacking another guest
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ A rogue guest may attack another guest to get data accessible by the
++ other guest.
++
++ Spectre variant 1 attacks are possible if parameters can be passed
++ between guests. This may be done via mechanisms such as shared memory
++ or message passing. Such parameters could be used to derive data
++ pointers to privileged data in the guest. The privileged data could be
++ accessed by gadget code in the victim's speculation paths.
++
++ Spectre variant 2 attacks can be launched from a rogue guest by
++ :ref:`poisoning <poison_btb>` the branch target buffer or the return
++ stack buffer. Such poisoned entries could be used to influence
++ speculative execution paths in the victim guest.
++
++ The Linux kernel mitigates attacks on other guests running on the same
++ CPU hardware thread by flushing the return stack buffer on VM exit,
++ and clearing the branch target buffer before switching to a new guest.
++
++ If SMT is used, Spectre variant 2 attacks from an untrusted guest
++ in the sibling hyperthread can be mitigated by the administrator,
++ by turning off the unsafe guest's indirect branch speculation via
++ prctl(). A guest can also protect itself by turning on microcode
++ based mitigations (such as IBPB or STIBP on x86) within the guest.
++
++.. _spectre_sys_info:
++
++Spectre system information
++--------------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current
++mitigation status of the system for Spectre: whether the system is
++vulnerable, and which mitigations are active.
++
++The sysfs file showing Spectre variant 1 mitigation status is:
++
++ /sys/devices/system/cpu/vulnerabilities/spectre_v1
++
++The possible values in this file are:
++
++ =========================================  =================================
++ 'Mitigation: __user pointer sanitization'  Protection in kernel on a case by
++                                            case basis with explicit pointer
++                                            sanitization.
++ =========================================  =================================
++
++However, the protections are put in place on a case by case basis,
++and there is no guarantee that all possible attack vectors for Spectre
++variant 1 are covered.
++
++The spectre_v2 kernel file reports whether the kernel has been compiled
++with the retpoline mitigation or the CPU has a hardware mitigation, and
++whether the CPU supports additional process-specific mitigations.
++
++This file also reports CPU features enabled by microcode to mitigate
++attacks between user processes:
++
++1. Indirect Branch Prediction Barrier (IBPB) to add additional
++ isolation between processes of different users.
++2. Single Thread Indirect Branch Predictors (STIBP) to add additional
++ isolation between CPU threads running on the same core.
++
++These CPU features may impact performance when used and can be enabled
++per process on a case-by-case basis.
++
++The sysfs file showing Spectre variant 2 mitigation status is:
++
++ /sys/devices/system/cpu/vulnerabilities/spectre_v2
++
++The possible values in this file are:
++
++ - Kernel status:
++
++ ====================================  =================================
++ 'Not affected'                        The processor is not vulnerable
++ 'Vulnerable'                          Vulnerable, no mitigation
++ 'Mitigation: Full generic retpoline'  Software-focused mitigation
++ 'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
++ 'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
++ ====================================  =================================
++
++ - Firmware status: Shows whether Indirect Branch Restricted Speculation
++ (IBRS) is used to protect against Spectre variant 2 attacks when
++ calling firmware (x86 only).
++
++ ==========  =============================================================
++ 'IBRS_FW'   Protection against user program attacks when calling firmware
++ ==========  =============================================================
++
++ - Indirect branch prediction barrier (IBPB) status for protection between
++ processes of different users. This feature can be controlled through
++ prctl() per process, or through kernel command line options. This is
++ an x86 only feature. For more details see below.
++
++ ===================  ========================================================
++ 'IBPB: disabled'     IBPB unused
++ 'IBPB: always-on'    Use IBPB on all tasks
++ 'IBPB: conditional'  Use IBPB on SECCOMP or indirect branch restricted tasks
++ ===================  ========================================================
++
++ - Single threaded indirect branch prediction (STIBP) status for protection
++ between different hyper threads. This feature can be controlled through
++ prctl() per process, or through kernel command line options. This is
++ an x86 only feature. For more details see below.
++
++ ====================  ========================================================
++ 'STIBP: disabled'     STIBP unused
++ 'STIBP: forced'       Use STIBP on all tasks
++ 'STIBP: conditional'  Use STIBP on SECCOMP or indirect branch restricted tasks
++ ====================  ========================================================
++
++ - Return stack buffer (RSB) protection status:
++
++ =============  ===========================================
++ 'RSB filling'  Protection of RSB on context switch enabled
++ =============  ===========================================
++
++Full mitigation might require a microcode update from the CPU
++vendor. When the necessary microcode is not available, the kernel will
++report the system as vulnerable.
++
++Turning on mitigation for Spectre variant 1 and Spectre variant 2
++-----------------------------------------------------------------
++
++1. Kernel mitigation
++^^^^^^^^^^^^^^^^^^^^
++
++ For Spectre variant 1, vulnerable kernel code (as determined
++ by code audit or scanning tools) is annotated on a case by case
++ basis to use nospec accessor macros for bounds clipping :ref:`[2]
++ <spec_ref2>` to avoid any usable disclosure gadgets. However, this
++ may not cover all attack vectors for Spectre variant 1.
++
++ For Spectre variant 2 mitigation, the compiler turns indirect calls or
++ jumps in the kernel into equivalent return trampolines (retpolines)
++ :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
++ addresses. Speculative execution paths under retpolines are trapped
++ in an infinite loop to prevent any speculative execution jumping to
++ a gadget.
++
++ To turn on retpoline mitigation on a vulnerable CPU, the kernel
++ needs to be compiled with a gcc version that supports the
++ -mindirect-branch=thunk-extern -mindirect-branch-register options.
++ If the kernel is compiled with Clang, the compiler needs to
++ support the -mretpoline-external-thunk option. The kernel config
++ CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
++ the latest updated microcode.
++
++ On Intel Skylake-era systems the mitigation covers most, but not all,
++ cases. See :ref:`[3] <spec_ref3>` for more details.
++
++ On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
++ IBRS on x86), retpoline is automatically disabled at run time.
++
++ The retpoline mitigation is turned on by default on vulnerable
++ CPUs. It can be forced on or off by the administrator
++ via the kernel command line and sysfs control files. See
++ :ref:`spectre_mitigation_control_command_line`.
++
++ On x86, indirect branch restricted speculation is turned on by default
++ before invoking any firmware code to prevent Spectre variant 2 exploits
++ using the firmware.
++
++ Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y
++ and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
++ attacks on the kernel generally more difficult.
++
++2. User program mitigation
++^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ User programs can mitigate Spectre variant 1 using LFENCE or "bounds
++ clipping". For more details see :ref:`[2] <spec_ref2>`.
++
++ For Spectre variant 2 mitigation, individual user programs
++ can be compiled with return trampolines for indirect branches.
++ This protects them from consuming poisoned entries in the branch
++ target buffer left by malicious software. Alternatively, the
++ programs can disable their indirect branch speculation via prctl()
++ (see :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++ On x86, this will turn on STIBP to guard against attacks from the
++ sibling thread when the user program is running, and use IBPB to
++ flush the branch target buffer when switching to/from the program.
++
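++ A minimal sketch of a program disabling its own indirect branch
++ speculation this way. The prctl() constants are the documented ones
++ from linux/prctl.h; error handling is trimmed, and the call fails
++ with ENXIO on kernels without per-task speculation control::
++
++	#include <sys/prctl.h>
++	#include <linux/prctl.h>
++	#include <stdio.h>
++
++	int main(void)
++	{
++		/* Ask the kernel to disable indirect branch speculation
++		 * for this task; the state is inherited on fork(). */
++		if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
++			  PR_SPEC_DISABLE, 0, 0))
++			perror("PR_SET_SPECULATION_CTRL");
++
++		/* ... run the security-sensitive work here ... */
++		return 0;
++	}
++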
++ Restricting indirect branch speculation on a user program will
++ also prevent the program from launching a variant 2 attack
++ on x86. All sandboxed SECCOMP programs have indirect branch
++ speculation restricted by default. Administrators can change
++ that behavior via the kernel command line and sysfs control files.
++ See :ref:`spectre_mitigation_control_command_line`.
++
++ Programs that disable their indirect branch speculation will have
++ more overhead and run slower.
++
++ User programs should use address space randomization
++ (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
++ difficult.
++
++3. VM mitigation
++^^^^^^^^^^^^^^^^
++
++ Within the kernel, Spectre variant 1 attacks from rogue guests are
++ mitigated on a case by case basis in VM exit paths. Vulnerable code
++ uses nospec accessor macros for "bounds clipping", to avoid any
++ usable disclosure gadgets. However, this may not cover all variant
++ 1 attack vectors.
++
++ For Spectre variant 2 attacks from rogue guests to the kernel, the
++ Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of
++ poisoned entries in the branch target buffer left by rogue guests. It
++ also flushes the return stack buffer on every VM exit to prevent a
++ return stack buffer underflow, which could otherwise allow the poisoned
++ branch target buffer to be used, and to clear poisoned return stack
++ buffer entries left by attacker guests.
++
++ To mitigate guest-to-guest attacks in the same CPU hardware thread,
++ the branch target buffer is sanitized by flushing it before switching
++ to a new guest on a CPU.
++
++ The above mitigations are turned on by default on vulnerable CPUs.
++
++ To mitigate guest-to-guest attacks from a sibling thread when SMT is
++ in use, an untrusted guest running in the sibling thread can have
++ its indirect branch speculation disabled by the administrator via
++ prctl().
++
++ The kernel also allows guests to use any microcode based mitigation
++ they choose (such as IBPB or STIBP on x86) to protect themselves.
++
++.. _spectre_mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++Spectre variant 2 mitigation can be disabled or force enabled on the
++kernel command line.
++
++ nospectre_v2
++
++   [X86] Disable all mitigations for the Spectre variant 2
++   (indirect branch prediction) vulnerability. The system may
++   allow data leaks with this option, which is equivalent
++   to spectre_v2=off.
++
++
++ spectre_v2=
++
++   [X86] Control mitigation of Spectre variant 2
++   (indirect branch speculation) vulnerability.
++   The default operation protects the kernel from
++   user space attacks.
++
++   on
++     unconditionally enable, implies
++     spectre_v2_user=on
++   off
++     unconditionally disable, implies
++     spectre_v2_user=off
++   auto
++     kernel detects whether your CPU model is
++     vulnerable
++
++   Selecting 'on' will, and 'auto' may, choose a
++   mitigation method at run time according to the
++   CPU, the available microcode, the setting of the
++   CONFIG_RETPOLINE configuration option, and the
++   compiler with which the kernel was built.
++
++   Selecting 'on' will also enable the mitigation
++   against user space to user space task attacks.
++
++   Selecting 'off' will disable both the kernel and
++   the user space protections.
++
++   Specific mitigations can also be selected manually:
++
++   retpoline
++     replace indirect branches
++   retpoline,generic
++     Google's original retpoline
++   retpoline,amd
++     AMD-specific minimal thunk
++
++   Not specifying this option is equivalent to
++   spectre_v2=auto.
++
++For user space mitigation:
++
++ spectre_v2_user=
++
++   [X86] Control mitigation of Spectre variant 2
++   (indirect branch speculation) vulnerability between
++   user space tasks
++
++   on
++     Unconditionally enable mitigations. Is
++     enforced by spectre_v2=on
++
++   off
++     Unconditionally disable mitigations. Is
++     enforced by spectre_v2=off
++
++   prctl
++     Indirect branch speculation is enabled,
++     but mitigation can be enabled via prctl()
++     per thread. The mitigation control state
++     is inherited on fork.
++
++   prctl,ibpb
++     Like "prctl" above, but only STIBP is
++     controlled per thread. IBPB is always
++     issued when switching between different user
++     space processes.
++
++   seccomp
++     Same as "prctl" above, but all seccomp
++     threads will enable the mitigation unless
++     they explicitly opt out.
++
++   seccomp,ibpb
++     Like "seccomp" above, but only STIBP is
++     controlled per thread. IBPB is always
++     issued when switching between different
++     user space processes.
++
++   auto
++     Kernel selects the mitigation depending on
++     the available CPU features and vulnerability.
++
++   Default mitigation:
++   If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
++
++   Not specifying this option is equivalent to
++   spectre_v2_user=auto.
++
++In general, the kernel by default selects reasonable mitigations for
++the current CPU. To disable Spectre variant 2 mitigations, boot with
++spectre_v2=off. Spectre variant 1 mitigations cannot be disabled.
++
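++As an illustration only (policy choices, not recommendations): a machine
++that runs nothing but trusted local code might boot with::
++
++	spectre_v2=off
++
++while a multi-tenant host running untrusted workloads might force the
++mitigations on unconditionally::
++
++	spectre_v2=on
++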
++Mitigation selection guide
++--------------------------
++
++1. Trusted userspace
++^^^^^^^^^^^^^^^^^^^^
++
++ If all userspace applications are from trusted sources and do not
++ execute externally supplied untrusted code, then the mitigations can
++ be disabled.
++
++2. Protect sensitive programs
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ For security-sensitive programs that have secrets (e.g. crypto
++ keys), protection against Spectre variant 2 can be put in place by
++ disabling indirect branch speculation when the program is running
++ (see :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++
++3. Sandbox untrusted programs
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++ Untrusted programs that could be a source of attacks can be cordoned
++ off by disabling their indirect branch speculation when they are run
++ (see :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++ This prevents untrusted programs from polluting the branch target
++ buffer. All programs running in SECCOMP sandboxes have indirect
++ branch speculation restricted by default. This behavior can be
++ changed via the kernel command line and sysfs control files. See
++ :ref:`spectre_mitigation_control_command_line`.
++
++4. High security mode
++^^^^^^^^^^^^^^^^^^^^^
++
++ All Spectre variant 2 mitigations can be forced on
++ at boot time for all programs (see the "on" option in
++ :ref:`spectre_mitigation_control_command_line`). This will add
++ overhead as indirect branch speculations for all programs will be
++ restricted.
++
++ On x86, the branch target buffer will be flushed with IBPB when
++ switching to a new program. STIBP is left on all the time to protect
++ programs against variant 2 attacks originating from programs running
++ on sibling threads.
++
++ Alternatively, STIBP can be used only when running programs
++ whose indirect branch speculation is explicitly disabled,
++ while IBPB is still used all the time when switching to a new
++ program to clear the branch target buffer (see the "ibpb" option in
++ :ref:`spectre_mitigation_control_command_line`). This "ibpb" option
++ has less performance cost than the "on" option, which leaves STIBP
++ on all the time.
++
++References on Spectre
++---------------------
++
++Intel white papers:
++
++.. _spec_ref1:
++
++[1] `Intel analysis of speculative execution side channels <https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf>`_.
++
++.. _spec_ref2:
++
++[2] `Bounds check bypass <https://software.intel.com/security-software-guidance/software-guidance/bounds-check-bypass>`_.
++
++.. _spec_ref3:
++
++[3] `Deep dive: Retpoline: A branch target injection mitigation <https://software.intel.com/security-software-guidance/insights/deep-dive-retpoline-branch-target-injection-mitigation>`_.
++
++.. _spec_ref4:
++
++[4] `Deep Dive: Single Thread Indirect Branch Predictors <https://software.intel.com/security-software-guidance/insights/deep-dive-single-thread-indirect-branch-predictors>`_.
++
++AMD white papers:
++
++.. _spec_ref5:
++
++[5] `AMD64 technology indirect branch control extension <https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf>`_.
++
++.. _spec_ref6:
++
++[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
++
++ARM white papers:
++
++.. _spec_ref7:
++
++[7] `Cache speculation side-channels <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/download-the-whitepaper>`_.
++
++.. _spec_ref8:
++
++[8] `Cache speculation issues update <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/latest-updates/cache-speculation-issues-update>`_.
++
++Google white paper:
++
++.. _spec_ref9:
++
++[9] `Retpoline: a software construct for preventing branch-target-injection <https://support.google.com/faqs/answer/7625886>`_.
++
++MIPS white paper:
++
++.. _spec_ref10:
++
++[10] `MIPS: response on speculative execution and side channel vulnerabilities <https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
++
++Academic papers:
++
++.. _spec_ref11:
++
++[11] `Spectre Attacks: Exploiting Speculative Execution <https://spectreattack.com/spectre.pdf>`_.
++
++.. _spec_ref12:
++
++[12] `NetSpectre: Read Arbitrary Memory over Network <https://arxiv.org/abs/1807.10535>`_.
++
++.. _spec_ref13:
++
++[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer <https://www.usenix.org/system/files/conference/woot18/woot18-paper-koruyeh.pdf>`_.
+diff --git a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
+index ee3723beb701..33b38716b77f 100644
+--- a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
++++ b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
+@@ -4,6 +4,7 @@ Required properties:
+ - compatible: Should be one of the following:
+ - "microchip,mcp2510" for MCP2510.
+ - "microchip,mcp2515" for MCP2515.
++ - "microchip,mcp25625" for MCP25625.
+ - reg: SPI chip select.
+ - clocks: The clock feeding the CAN controller.
+ - interrupt-parent: The parent interrupt controller.
+diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
+index c4dbe6f7cdae..0fda8f614110 100644
+--- a/Documentation/userspace-api/spec_ctrl.rst
++++ b/Documentation/userspace-api/spec_ctrl.rst
+@@ -47,6 +47,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is
+ available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
+ misfeature will fail.
+
++.. _set_spec_ctrl:
++
+ PR_SET_SPECULATION_CTRL
+ -----------------------
+
+diff --git a/Makefile b/Makefile
+index c36e64bd9ae7..97c744513af0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 14
+-SUBLEVEL = 133
++SUBLEVEL = 134
+ EXTRAVERSION =
+ NAME = Petit Gorille
+
+diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c
+index 333daab7def0..93453fa48193 100644
+--- a/arch/arc/kernel/unwind.c
++++ b/arch/arc/kernel/unwind.c
+@@ -185,11 +185,6 @@ static void *__init unw_hdr_alloc_early(unsigned long sz)
+ MAX_DMA_ADDRESS);
+ }
+
+-static void *unw_hdr_alloc(unsigned long sz)
+-{
+- return kmalloc(sz, GFP_KERNEL);
+-}
+-
+ static void init_unwind_table(struct unwind_table *table, const char *name,
+ const void *core_start, unsigned long core_size,
+ const void *init_start, unsigned long init_size,
+@@ -370,6 +365,10 @@ ret_err:
+ }
+
+ #ifdef CONFIG_MODULES
++static void *unw_hdr_alloc(unsigned long sz)
++{
++ return kmalloc(sz, GFP_KERNEL);
++}
+
+ static struct unwind_table *last_table;
+
+diff --git a/arch/arm/boot/dts/am335x-pcm-953.dtsi b/arch/arm/boot/dts/am335x-pcm-953.dtsi
+index 1ec8e0d80191..572fbd254690 100644
+--- a/arch/arm/boot/dts/am335x-pcm-953.dtsi
++++ b/arch/arm/boot/dts/am335x-pcm-953.dtsi
+@@ -197,7 +197,7 @@
+ bus-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc1_pins>;
+- cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/am335x-wega.dtsi b/arch/arm/boot/dts/am335x-wega.dtsi
+index 8ce541739b24..83e4fe595e37 100644
+--- a/arch/arm/boot/dts/am335x-wega.dtsi
++++ b/arch/arm/boot/dts/am335x-wega.dtsi
+@@ -157,7 +157,7 @@
+ bus-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc1_pins>;
+- cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index 036aeba4f02c..49f4bdc0d864 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -342,7 +342,7 @@
+ pwm1: pwm@02080000 {
+ compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ reg = <0x02080000 0x4000>;
+- interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_PWM1>,
+ <&clks IMX6UL_CLK_PWM1>;
+ clock-names = "ipg", "per";
+@@ -353,7 +353,7 @@
+ pwm2: pwm@02084000 {
+ compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ reg = <0x02084000 0x4000>;
+- interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_PWM2>,
+ <&clks IMX6UL_CLK_PWM2>;
+ clock-names = "ipg", "per";
+@@ -364,7 +364,7 @@
+ pwm3: pwm@02088000 {
+ compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ reg = <0x02088000 0x4000>;
+- interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_PWM3>,
+ <&clks IMX6UL_CLK_PWM3>;
+ clock-names = "ipg", "per";
+@@ -375,7 +375,7 @@
+ pwm4: pwm@0208c000 {
+ compatible = "fsl,imx6ul-pwm", "fsl,imx27-pwm";
+ reg = <0x0208c000 0x4000>;
+- interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6UL_CLK_PWM4>,
+ <&clks IMX6UL_CLK_PWM4>;
+ clock-names = "ipg", "per";
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index 2f6ac1afa804..686e7e6f2eb3 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -1464,6 +1464,8 @@ static __init void da850_evm_init(void)
+ if (ret)
+ pr_warn("%s: dsp/rproc registration failed: %d\n",
+ __func__, ret);
++
++ regulator_has_full_constraints();
+ }
+
+ #ifdef CONFIG_SERIAL_8250_CONSOLE
+diff --git a/arch/arm/mach-davinci/devices-da8xx.c b/arch/arm/mach-davinci/devices-da8xx.c
+index 22440c05d66a..7120f93eab0b 100644
+--- a/arch/arm/mach-davinci/devices-da8xx.c
++++ b/arch/arm/mach-davinci/devices-da8xx.c
+@@ -699,6 +699,9 @@ static struct platform_device da8xx_lcdc_device = {
+ .id = 0,
+ .num_resources = ARRAY_SIZE(da8xx_lcdc_resources),
+ .resource = da8xx_lcdc_resources,
++ .dev = {
++ .coherent_dma_mask = DMA_BIT_MASK(32),
++ }
+ };
+
+ int __init da8xx_register_lcdc(struct da8xx_lcdc_platform_data *pdata)
+diff --git a/arch/arm/mach-omap2/prm3xxx.c b/arch/arm/mach-omap2/prm3xxx.c
+index a2dd13217c89..2819c43fe754 100644
+--- a/arch/arm/mach-omap2/prm3xxx.c
++++ b/arch/arm/mach-omap2/prm3xxx.c
+@@ -433,7 +433,7 @@ static void omap3_prm_reconfigure_io_chain(void)
+ * registers, and omap3xxx_prm_reconfigure_io_chain() must be called.
+ * No return value.
+ */
+-static void __init omap3xxx_prm_enable_io_wakeup(void)
++static void omap3xxx_prm_enable_io_wakeup(void)
+ {
+ if (prm_features & PRM_HAS_IO_WAKEUP)
+ omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD,
+diff --git a/arch/mips/include/uapi/asm/sgidefs.h b/arch/mips/include/uapi/asm/sgidefs.h
+index 26143e3b7c26..69c3de90c536 100644
+--- a/arch/mips/include/uapi/asm/sgidefs.h
++++ b/arch/mips/include/uapi/asm/sgidefs.h
+@@ -11,14 +11,6 @@
+ #ifndef __ASM_SGIDEFS_H
+ #define __ASM_SGIDEFS_H
+
+-/*
+- * Using a Linux compiler for building Linux seems logic but not to
+- * everybody.
+- */
+-#ifndef __linux__
+-#error Use a Linux compiler or give up.
+-#endif
+-
+ /*
+ * Definitions for the ISA levels
+ *
+diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h
+index 2d58478c2745..9fee469d7130 100644
+--- a/arch/s390/include/asm/facility.h
++++ b/arch/s390/include/asm/facility.h
+@@ -59,6 +59,18 @@ static inline int test_facility(unsigned long nr)
+ return __test_facility(nr, &S390_lowcore.stfle_fac_list);
+ }
+
++static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
++{
++ register unsigned long reg0 asm("0") = size - 1;
++
++ asm volatile(
++ ".insn s,0xb2b00000,0(%1)" /* stfle */
++ : "+d" (reg0)
++ : "a" (stfle_fac_list)
++ : "memory", "cc");
++ return reg0;
++}
++
+ /**
+ * stfle - Store facility list extended
+ * @stfle_fac_list: array where facility list can be stored
+@@ -76,13 +88,8 @@ static inline void stfle(u64 *stfle_fac_list, int size)
+ memcpy(stfle_fac_list, &S390_lowcore.stfl_fac_list, 4);
+ if (S390_lowcore.stfl_fac_list & 0x01000000) {
+ /* More facility bits available with stfle */
+- register unsigned long reg0 asm("0") = size - 1;
+-
+- asm volatile(".insn s,0xb2b00000,0(%1)" /* stfle */
+- : "+d" (reg0)
+- : "a" (stfle_fac_list)
+- : "memory", "cc");
+- nr = (reg0 + 1) * 8; /* # bytes stored by stfle */
++ nr = __stfle_asm(stfle_fac_list, size);
++ nr = min_t(unsigned long, (nr + 1) * 8, size * 8);
+ }
+ memset((char *) stfle_fac_list + nr, 0, size * 8 - nr);
+ preempt_enable();
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 45b5c6c4a55e..7c67d8939f3e 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -117,26 +117,27 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ pgd[i + 0] = (pgdval_t)p4d + pgtable_flags;
+ pgd[i + 1] = (pgdval_t)p4d + pgtable_flags;
+
+- i = (physaddr >> P4D_SHIFT) % PTRS_PER_P4D;
+- p4d[i + 0] = (pgdval_t)pud + pgtable_flags;
+- p4d[i + 1] = (pgdval_t)pud + pgtable_flags;
++ i = physaddr >> P4D_SHIFT;
++ p4d[(i + 0) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
++ p4d[(i + 1) % PTRS_PER_P4D] = (pgdval_t)pud + pgtable_flags;
+ } else {
+ i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+ pgd[i + 0] = (pgdval_t)pud + pgtable_flags;
+ pgd[i + 1] = (pgdval_t)pud + pgtable_flags;
+ }
+
+- i = (physaddr >> PUD_SHIFT) % PTRS_PER_PUD;
+- pud[i + 0] = (pudval_t)pmd + pgtable_flags;
+- pud[i + 1] = (pudval_t)pmd + pgtable_flags;
++ i = physaddr >> PUD_SHIFT;
++ pud[(i + 0) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
++ pud[(i + 1) % PTRS_PER_PUD] = (pudval_t)pmd + pgtable_flags;
+
+ pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL;
+ pmd_entry += sme_get_me_mask();
+ pmd_entry += physaddr;
+
+ for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
+- int idx = i + (physaddr >> PMD_SHIFT) % PTRS_PER_PMD;
+- pmd[idx] = pmd_entry + i * PMD_SIZE;
++ int idx = i + (physaddr >> PMD_SHIFT);
++
++ pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
+ }
+
+ /*
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+index ed5c4cdf0a34..2a65ab291312 100644
+--- a/arch/x86/kernel/ptrace.c
++++ b/arch/x86/kernel/ptrace.c
+@@ -24,6 +24,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/export.h>
+ #include <linux/context_tracking.h>
++#include <linux/nospec.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/pgtable.h>
+@@ -651,9 +652,13 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n)
+ {
+ struct thread_struct *thread = &tsk->thread;
+ unsigned long val = 0;
++ int index = n;
+
+ if (n < HBP_NUM) {
+- struct perf_event *bp = thread->ptrace_bps[n];
++ struct perf_event *bp;
++
++ index = array_index_nospec(index, HBP_NUM);
++ bp = thread->ptrace_bps[index];
+
+ if (bp)
+ val = bp->hw.info.address;
+diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
+index a5b802a12212..71d3fef1edc9 100644
+--- a/arch/x86/kernel/tls.c
++++ b/arch/x86/kernel/tls.c
+@@ -5,6 +5,7 @@
+ #include <linux/user.h>
+ #include <linux/regset.h>
+ #include <linux/syscalls.h>
++#include <linux/nospec.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/desc.h>
+@@ -220,6 +221,7 @@ int do_get_thread_area(struct task_struct *p, int idx,
+ struct user_desc __user *u_info)
+ {
+ struct user_desc info;
++ int index;
+
+ if (idx == -1 && get_user(idx, &u_info->entry_number))
+ return -EFAULT;
+@@ -227,8 +229,11 @@ int do_get_thread_area(struct task_struct *p, int idx,
+ if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+ return -EINVAL;
+
+- fill_user_desc(&info, idx,
+- &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]);
++ index = idx - GDT_ENTRY_TLS_MIN;
++ index = array_index_nospec(index,
++ GDT_ENTRY_TLS_MAX - GDT_ENTRY_TLS_MIN + 1);
++
++ fill_user_desc(&info, idx, &p->thread.tls_array[index]);
+
+ if (copy_to_user(u_info, &info, sizeof(info)))
+ return -EFAULT;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 7d45ac451745..e65b0da1007b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -3760,6 +3760,7 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
+ unsigned long flags;
+
+ spin_lock_irqsave(&bfqd->lock, flags);
++ bfqq->bic = NULL;
+ bfq_exit_bfqq(bfqd, bfqq);
+ bic_set_bfqq(bic, NULL, is_sync);
+ spin_unlock_irqrestore(&bfqd->lock, flags);
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 96a0f940e54d..1af9f36f89cf 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3876,6 +3876,8 @@ retry:
+ case BINDER_WORK_TRANSACTION_COMPLETE: {
+ binder_inner_proc_unlock(proc);
+ cmd = BR_TRANSACTION_COMPLETE;
++ kfree(w);
++ binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ if (put_user(cmd, (uint32_t __user *)ptr))
+ return -EFAULT;
+ ptr += sizeof(uint32_t);
+@@ -3884,8 +3886,6 @@ retry:
+ binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
+ "%d:%d BR_TRANSACTION_COMPLETE\n",
+ proc->pid, thread->pid);
+- kfree(w);
+- binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ } break;
+ case BINDER_WORK_NODE: {
+ struct binder_node *node = container_of(w, struct binder_node, work);
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 07532d83be0b..e405ea3ca8d8 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -669,7 +669,8 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
+
+ static int __init cacheinfo_sysfs_init(void)
+ {
+- return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "base/cacheinfo:online",
++ return cpuhp_setup_state(CPUHP_AP_BASE_CACHEINFO_ONLINE,
++ "base/cacheinfo:online",
+ cacheinfo_cpu_online, cacheinfo_cpu_pre_down);
+ }
+ device_initcall(cacheinfo_sysfs_init);
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 82e4d5cccf84..2df8564f08a0 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -215,6 +215,7 @@ static struct clk_hw *_ti_omap4_clkctrl_xlate(struct of_phandle_args *clkspec,
+ {
+ struct omap_clkctrl_provider *provider = data;
+ struct omap_clkctrl_clk *entry;
++ bool found = false;
+
+ if (clkspec->args_count != 2)
+ return ERR_PTR(-EINVAL);
+@@ -224,11 +225,13 @@ static struct clk_hw *_ti_omap4_clkctrl_xlate(struct of_phandle_args *clkspec,
+
+ list_for_each_entry(entry, &provider->clocks, node) {
+ if (entry->reg_offset == clkspec->args[0] &&
+- entry->bit_offset == clkspec->args[1])
++ entry->bit_offset == clkspec->args[1]) {
++ found = true;
+ break;
++ }
+ }
+
+- if (!entry)
++ if (!found)
+ return ERR_PTR(-EINVAL);
+
+ return entry->clk;
+diff --git a/drivers/crypto/nx/nx-842-powernv.c b/drivers/crypto/nx/nx-842-powernv.c
+index 874ddf5e9087..dbf80b55c2a4 100644
+--- a/drivers/crypto/nx/nx-842-powernv.c
++++ b/drivers/crypto/nx/nx-842-powernv.c
+@@ -34,8 +34,6 @@ MODULE_ALIAS_CRYPTO("842-nx");
+ #define WORKMEM_ALIGN (CRB_ALIGN)
+ #define CSB_WAIT_MAX (5000) /* ms */
+ #define VAS_RETRIES (10)
+-/* # of requests allowed per RxFIFO at a time. 0 for unlimited */
+-#define MAX_CREDITS_PER_RXFIFO (1024)
+
+ struct nx842_workmem {
+ /* Below fields must be properly aligned */
+@@ -801,7 +799,11 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id,
+ rxattr.lnotify_lpid = lpid;
+ rxattr.lnotify_pid = pid;
+ rxattr.lnotify_tid = tid;
+- rxattr.wcreds_max = MAX_CREDITS_PER_RXFIFO;
++ /*
++ * Maximum RX window credits can not be more than #CRBs in
++ * RxFIFO. Otherwise, can get checkstop if RxFIFO overruns.
++ */
++ rxattr.wcreds_max = fifo_size / CRB_SIZE;
+
+ /*
+ * Open a VAS receice window which is used to configure RxFIFO
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index 4388f4e3840c..1f8fe1795964 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -2185,7 +2185,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .base = {
+ .cra_name = "authenc(hmac(sha1),cbc(aes))",
+ .cra_driver_name = "authenc-hmac-sha1-"
+- "cbc-aes-talitos",
++ "cbc-aes-talitos-hsna",
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2229,7 +2229,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .cra_name = "authenc(hmac(sha1),"
+ "cbc(des3_ede))",
+ .cra_driver_name = "authenc-hmac-sha1-"
+- "cbc-3des-talitos",
++ "cbc-3des-talitos-hsna",
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2271,7 +2271,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .base = {
+ .cra_name = "authenc(hmac(sha224),cbc(aes))",
+ .cra_driver_name = "authenc-hmac-sha224-"
+- "cbc-aes-talitos",
++ "cbc-aes-talitos-hsna",
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2315,7 +2315,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .cra_name = "authenc(hmac(sha224),"
+ "cbc(des3_ede))",
+ .cra_driver_name = "authenc-hmac-sha224-"
+- "cbc-3des-talitos",
++ "cbc-3des-talitos-hsna",
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2357,7 +2357,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .base = {
+ .cra_name = "authenc(hmac(sha256),cbc(aes))",
+ .cra_driver_name = "authenc-hmac-sha256-"
+- "cbc-aes-talitos",
++ "cbc-aes-talitos-hsna",
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2401,7 +2401,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .cra_name = "authenc(hmac(sha256),"
+ "cbc(des3_ede))",
+ .cra_driver_name = "authenc-hmac-sha256-"
+- "cbc-3des-talitos",
++ "cbc-3des-talitos-hsna",
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2527,7 +2527,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .base = {
+ .cra_name = "authenc(hmac(md5),cbc(aes))",
+ .cra_driver_name = "authenc-hmac-md5-"
+- "cbc-aes-talitos",
++ "cbc-aes-talitos-hsna",
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+@@ -2569,7 +2569,7 @@ static struct talitos_alg_template driver_algs[] = {
+ .base = {
+ .cra_name = "authenc(hmac(md5),cbc(des3_ede))",
+ .cra_driver_name = "authenc-hmac-md5-"
+- "cbc-3des-talitos",
++ "cbc-3des-talitos-hsna",
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ },
+diff --git a/drivers/firmware/efi/efi-bgrt.c b/drivers/firmware/efi/efi-bgrt.c
+index 50793fda7819..e3d86aa1ad5d 100644
+--- a/drivers/firmware/efi/efi-bgrt.c
++++ b/drivers/firmware/efi/efi-bgrt.c
+@@ -50,11 +50,6 @@ void __init efi_bgrt_init(struct acpi_table_header *table)
+ bgrt->version);
+ goto out;
+ }
+- if (bgrt->status & 0xfe) {
+- pr_notice("Ignoring BGRT: reserved status bits are non-zero %u\n",
+- bgrt->status);
+- goto out;
+- }
+ if (bgrt->image_type != 0) {
+ pr_notice("Ignoring BGRT: invalid image type %u (expected 0)\n",
+ bgrt->image_type);
+diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c
+index 0f05b8d8fefa..b829fde80f7b 100644
+--- a/drivers/gpu/drm/drm_bufs.c
++++ b/drivers/gpu/drm/drm_bufs.c
+@@ -1321,7 +1321,10 @@ static int copy_one_buf(void *data, int count, struct drm_buf_entry *from)
+ .size = from->buf_size,
+ .low_mark = from->low_mark,
+ .high_mark = from->high_mark};
+- return copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags));
++
++ if (copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags)))
++ return -EFAULT;
++ return 0;
+ }
+
+ int drm_legacy_infobufs(struct drm_device *dev, void *data,
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index f8e96e648acf..bfeeb6a56135 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -372,7 +372,10 @@ static int copy_one_buf32(void *data, int count, struct drm_buf_entry *from)
+ .size = from->buf_size,
+ .low_mark = from->low_mark,
+ .high_mark = from->high_mark};
+- return copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags));
++
++ if (copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags)))
++ return -EFAULT;
++ return 0;
+ }
+
+ static int drm_legacy_infobufs32(struct drm_device *dev, void *data,
1326 +diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
1327 +index b45ac6bc8add..b428c3da7576 100644
1328 +--- a/drivers/gpu/drm/udl/udl_drv.c
1329 ++++ b/drivers/gpu/drm/udl/udl_drv.c
1330 +@@ -43,10 +43,16 @@ static const struct file_operations udl_driver_fops = {
1331 + .llseek = noop_llseek,
1332 + };
1333 +
1334 ++static void udl_driver_release(struct drm_device *dev)
1335 ++{
1336 ++ udl_fini(dev);
1337 ++ udl_modeset_cleanup(dev);
1338 ++ drm_dev_fini(dev);
1339 ++ kfree(dev);
1340 ++}
1341 ++
1342 + static struct drm_driver driver = {
1343 + .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
1344 +- .load = udl_driver_load,
1345 +- .unload = udl_driver_unload,
1346 + .release = udl_driver_release,
1347 +
1348 + /* gem hooks */
1349 +@@ -70,28 +76,56 @@ static struct drm_driver driver = {
1350 + .patchlevel = DRIVER_PATCHLEVEL,
1351 + };
1352 +
1353 ++static struct udl_device *udl_driver_create(struct usb_interface *interface)
1354 ++{
1355 ++ struct usb_device *udev = interface_to_usbdev(interface);
1356 ++ struct udl_device *udl;
1357 ++ int r;
1358 ++
1359 ++ udl = kzalloc(sizeof(*udl), GFP_KERNEL);
1360 ++ if (!udl)
1361 ++ return ERR_PTR(-ENOMEM);
1362 ++
1363 ++ r = drm_dev_init(&udl->drm, &driver, &interface->dev);
1364 ++ if (r) {
1365 ++ kfree(udl);
1366 ++ return ERR_PTR(r);
1367 ++ }
1368 ++
1369 ++ udl->udev = udev;
1370 ++ udl->drm.dev_private = udl;
1371 ++
1372 ++ r = udl_init(udl);
1373 ++ if (r) {
1374 ++ drm_dev_fini(&udl->drm);
1375 ++ kfree(udl);
1376 ++ return ERR_PTR(r);
1377 ++ }
1378 ++
1379 ++ usb_set_intfdata(interface, udl);
1380 ++ return udl;
1381 ++}
1382 ++
1383 + static int udl_usb_probe(struct usb_interface *interface,
1384 + const struct usb_device_id *id)
1385 + {
1386 +- struct usb_device *udev = interface_to_usbdev(interface);
1387 +- struct drm_device *dev;
1388 + int r;
1389 ++ struct udl_device *udl;
1390 +
1391 +- dev = drm_dev_alloc(&driver, &interface->dev);
1392 +- if (IS_ERR(dev))
1393 +- return PTR_ERR(dev);
1394 ++ udl = udl_driver_create(interface);
1395 ++ if (IS_ERR(udl))
1396 ++ return PTR_ERR(udl);
1397 +
1398 +- r = drm_dev_register(dev, (unsigned long)udev);
1399 ++ r = drm_dev_register(&udl->drm, 0);
1400 + if (r)
1401 + goto err_free;
1402 +
1403 +- usb_set_intfdata(interface, dev);
1404 +- DRM_INFO("Initialized udl on minor %d\n", dev->primary->index);
1405 ++ DRM_INFO("Initialized udl on minor %d\n", udl->drm.primary->index);
1406 +
1407 + return 0;
1408 +
1409 + err_free:
1410 +- drm_dev_unref(dev);
1411 ++ drm_dev_unref(&udl->drm);
1412 + return r;
1413 + }
1414 +
1415 +diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
1416 +index 307455dd6526..d5a5dcd15dd8 100644
1417 +--- a/drivers/gpu/drm/udl/udl_drv.h
1418 ++++ b/drivers/gpu/drm/udl/udl_drv.h
1419 +@@ -49,8 +49,8 @@ struct urb_list {
1420 + struct udl_fbdev;
1421 +
1422 + struct udl_device {
1423 ++ struct drm_device drm;
1424 + struct device *dev;
1425 +- struct drm_device *ddev;
1426 + struct usb_device *udev;
1427 + struct drm_crtc *crtc;
1428 +
1429 +@@ -68,6 +68,8 @@ struct udl_device {
1430 + atomic_t cpu_kcycles_used; /* transpired during pixel processing */
1431 + };
1432 +
1433 ++#define to_udl(x) container_of(x, struct udl_device, drm)
1434 ++
1435 + struct udl_gem_object {
1436 + struct drm_gem_object base;
1437 + struct page **pages;
1438 +@@ -99,9 +101,8 @@ struct urb *udl_get_urb(struct drm_device *dev);
1439 + int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len);
1440 + void udl_urb_completion(struct urb *urb);
1441 +
1442 +-int udl_driver_load(struct drm_device *dev, unsigned long flags);
1443 +-void udl_driver_unload(struct drm_device *dev);
1444 +-void udl_driver_release(struct drm_device *dev);
1445 ++int udl_init(struct udl_device *udl);
1446 ++void udl_fini(struct drm_device *dev);
1447 +
1448 + int udl_fbdev_init(struct drm_device *dev);
1449 + void udl_fbdev_cleanup(struct drm_device *dev);
1450 +diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
1451 +index 491f1892b50e..f41fd0684ce4 100644
1452 +--- a/drivers/gpu/drm/udl/udl_fb.c
1453 ++++ b/drivers/gpu/drm/udl/udl_fb.c
1454 +@@ -82,7 +82,7 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
1455 + int width, int height)
1456 + {
1457 + struct drm_device *dev = fb->base.dev;
1458 +- struct udl_device *udl = dev->dev_private;
1459 ++ struct udl_device *udl = to_udl(dev);
1460 + int i, ret;
1461 + char *cmd;
1462 + cycles_t start_cycles, end_cycles;
1463 +@@ -210,10 +210,10 @@ static int udl_fb_open(struct fb_info *info, int user)
1464 + {
1465 + struct udl_fbdev *ufbdev = info->par;
1466 + struct drm_device *dev = ufbdev->ufb.base.dev;
1467 +- struct udl_device *udl = dev->dev_private;
1468 ++ struct udl_device *udl = to_udl(dev);
1469 +
1470 + /* If the USB device is gone, we don't accept new opens */
1471 +- if (drm_dev_is_unplugged(udl->ddev))
1472 ++ if (drm_dev_is_unplugged(&udl->drm))
1473 + return -ENODEV;
1474 +
1475 + ufbdev->fb_count++;
1476 +@@ -441,7 +441,7 @@ static void udl_fbdev_destroy(struct drm_device *dev,
1477 +
1478 + int udl_fbdev_init(struct drm_device *dev)
1479 + {
1480 +- struct udl_device *udl = dev->dev_private;
1481 ++ struct udl_device *udl = to_udl(dev);
1482 + int bpp_sel = fb_bpp;
1483 + struct udl_fbdev *ufbdev;
1484 + int ret;
1485 +@@ -480,7 +480,7 @@ free:
1486 +
1487 + void udl_fbdev_cleanup(struct drm_device *dev)
1488 + {
1489 +- struct udl_device *udl = dev->dev_private;
1490 ++ struct udl_device *udl = to_udl(dev);
1491 + if (!udl->fbdev)
1492 + return;
1493 +
1494 +@@ -491,7 +491,7 @@ void udl_fbdev_cleanup(struct drm_device *dev)
1495 +
1496 + void udl_fbdev_unplug(struct drm_device *dev)
1497 + {
1498 +- struct udl_device *udl = dev->dev_private;
1499 ++ struct udl_device *udl = to_udl(dev);
1500 + struct udl_fbdev *ufbdev;
1501 + if (!udl->fbdev)
1502 + return;
1503 +diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
1504 +index 60866b422f81..124428f33e1e 100644
1505 +--- a/drivers/gpu/drm/udl/udl_main.c
1506 ++++ b/drivers/gpu/drm/udl/udl_main.c
1507 +@@ -28,7 +28,7 @@
1508 + static int udl_parse_vendor_descriptor(struct drm_device *dev,
1509 + struct usb_device *usbdev)
1510 + {
1511 +- struct udl_device *udl = dev->dev_private;
1512 ++ struct udl_device *udl = to_udl(dev);
1513 + char *desc;
1514 + char *buf;
1515 + char *desc_end;
1516 +@@ -164,7 +164,7 @@ void udl_urb_completion(struct urb *urb)
1517 +
1518 + static void udl_free_urb_list(struct drm_device *dev)
1519 + {
1520 +- struct udl_device *udl = dev->dev_private;
1521 ++ struct udl_device *udl = to_udl(dev);
1522 + int count = udl->urbs.count;
1523 + struct list_head *node;
1524 + struct urb_node *unode;
1525 +@@ -198,7 +198,7 @@ static void udl_free_urb_list(struct drm_device *dev)
1526 +
1527 + static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
1528 + {
1529 +- struct udl_device *udl = dev->dev_private;
1530 ++ struct udl_device *udl = to_udl(dev);
1531 + struct urb *urb;
1532 + struct urb_node *unode;
1533 + char *buf;
1534 +@@ -262,7 +262,7 @@ retry:
1535 +
1536 + struct urb *udl_get_urb(struct drm_device *dev)
1537 + {
1538 +- struct udl_device *udl = dev->dev_private;
1539 ++ struct udl_device *udl = to_udl(dev);
1540 + int ret = 0;
1541 + struct list_head *entry;
1542 + struct urb_node *unode;
1543 +@@ -296,7 +296,7 @@ error:
1544 +
1545 + int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len)
1546 + {
1547 +- struct udl_device *udl = dev->dev_private;
1548 ++ struct udl_device *udl = to_udl(dev);
1549 + int ret;
1550 +
1551 + BUG_ON(len > udl->urbs.size);
1552 +@@ -311,20 +311,12 @@ int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len)
1553 + return ret;
1554 + }
1555 +
1556 +-int udl_driver_load(struct drm_device *dev, unsigned long flags)
1557 ++int udl_init(struct udl_device *udl)
1558 + {
1559 +- struct usb_device *udev = (void*)flags;
1560 +- struct udl_device *udl;
1561 ++ struct drm_device *dev = &udl->drm;
1562 + int ret = -ENOMEM;
1563 +
1564 + DRM_DEBUG("\n");
1565 +- udl = kzalloc(sizeof(struct udl_device), GFP_KERNEL);
1566 +- if (!udl)
1567 +- return -ENOMEM;
1568 +-
1569 +- udl->udev = udev;
1570 +- udl->ddev = dev;
1571 +- dev->dev_private = udl;
1572 +
1573 + if (!udl_parse_vendor_descriptor(dev, udl->udev)) {
1574 + ret = -ENODEV;
1575 +@@ -359,7 +351,6 @@ err_fb:
1576 + err:
1577 + if (udl->urbs.count)
1578 + udl_free_urb_list(dev);
1579 +- kfree(udl);
1580 + DRM_ERROR("%d\n", ret);
1581 + return ret;
1582 + }
1583 +@@ -370,20 +361,12 @@ int udl_drop_usb(struct drm_device *dev)
1584 + return 0;
1585 + }
1586 +
1587 +-void udl_driver_unload(struct drm_device *dev)
1588 ++void udl_fini(struct drm_device *dev)
1589 + {
1590 +- struct udl_device *udl = dev->dev_private;
1591 ++ struct udl_device *udl = to_udl(dev);
1592 +
1593 + if (udl->urbs.count)
1594 + udl_free_urb_list(dev);
1595 +
1596 + udl_fbdev_cleanup(dev);
1597 +- kfree(udl);
1598 +-}
1599 +-
1600 +-void udl_driver_release(struct drm_device *dev)
1601 +-{
1602 +- udl_modeset_cleanup(dev);
1603 +- drm_dev_fini(dev);
1604 +- kfree(dev);
1605 + }
1606 +diff --git a/drivers/input/keyboard/imx_keypad.c b/drivers/input/keyboard/imx_keypad.c
1607 +index 2165f3dd328b..842c0235471d 100644
1608 +--- a/drivers/input/keyboard/imx_keypad.c
1609 ++++ b/drivers/input/keyboard/imx_keypad.c
1610 +@@ -530,11 +530,12 @@ static int imx_keypad_probe(struct platform_device *pdev)
1611 + return 0;
1612 + }
1613 +
1614 +-static int __maybe_unused imx_kbd_suspend(struct device *dev)
1615 ++static int __maybe_unused imx_kbd_noirq_suspend(struct device *dev)
1616 + {
1617 + struct platform_device *pdev = to_platform_device(dev);
1618 + struct imx_keypad *kbd = platform_get_drvdata(pdev);
1619 + struct input_dev *input_dev = kbd->input_dev;
1620 ++ unsigned short reg_val = readw(kbd->mmio_base + KPSR);
1621 +
1622 + /* imx kbd can wake up system even clock is disabled */
1623 + mutex_lock(&input_dev->mutex);
1624 +@@ -544,13 +545,20 @@ static int __maybe_unused imx_kbd_suspend(struct device *dev)
1625 +
1626 + mutex_unlock(&input_dev->mutex);
1627 +
1628 +- if (device_may_wakeup(&pdev->dev))
1629 ++ if (device_may_wakeup(&pdev->dev)) {
1630 ++ if (reg_val & KBD_STAT_KPKD)
1631 ++ reg_val |= KBD_STAT_KRIE;
1632 ++ if (reg_val & KBD_STAT_KPKR)
1633 ++ reg_val |= KBD_STAT_KDIE;
1634 ++ writew(reg_val, kbd->mmio_base + KPSR);
1635 ++
1636 + enable_irq_wake(kbd->irq);
1637 ++ }
1638 +
1639 + return 0;
1640 + }
1641 +
1642 +-static int __maybe_unused imx_kbd_resume(struct device *dev)
1643 ++static int __maybe_unused imx_kbd_noirq_resume(struct device *dev)
1644 + {
1645 + struct platform_device *pdev = to_platform_device(dev);
1646 + struct imx_keypad *kbd = platform_get_drvdata(pdev);
1647 +@@ -574,7 +582,9 @@ err_clk:
1648 + return ret;
1649 + }
1650 +
1651 +-static SIMPLE_DEV_PM_OPS(imx_kbd_pm_ops, imx_kbd_suspend, imx_kbd_resume);
1652 ++static const struct dev_pm_ops imx_kbd_pm_ops = {
1653 ++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_kbd_noirq_suspend, imx_kbd_noirq_resume)
1654 ++};
1655 +
1656 + static struct platform_driver imx_keypad_driver = {
1657 + .driver = {
1658 +diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
1659 +index fda33fc3ffcc..ab4888d043f0 100644
1660 +--- a/drivers/input/mouse/elantech.c
1661 ++++ b/drivers/input/mouse/elantech.c
1662 +@@ -1191,6 +1191,8 @@ static const char * const middle_button_pnp_ids[] = {
1663 + "LEN2132", /* ThinkPad P52 */
1664 + "LEN2133", /* ThinkPad P72 w/ NFC */
1665 + "LEN2134", /* ThinkPad P72 */
1666 ++ "LEN0407",
1667 ++ "LEN0408",
1668 + NULL
1669 + };
1670 +
1671 +diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
1672 +index a5f279da83a1..1a6a05c45ee7 100644
1673 +--- a/drivers/input/mouse/synaptics.c
1674 ++++ b/drivers/input/mouse/synaptics.c
1675 +@@ -176,6 +176,7 @@ static const char * const smbus_pnp_ids[] = {
1676 + "LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */
1677 + "LEN0073", /* X1 Carbon G5 (Elantech) */
1678 + "LEN0092", /* X1 Carbon 6 */
1679 ++ "LEN0093", /* T480 */
1680 + "LEN0096", /* X280 */
1681 + "LEN0097", /* X280 -> ALPS trackpoint */
1682 + "LEN200f", /* T450s */
1683 +diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
1684 +index 8573c70a1880..e705799976c2 100644
1685 +--- a/drivers/md/dm-verity-target.c
1686 ++++ b/drivers/md/dm-verity-target.c
1687 +@@ -276,8 +276,8 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type,
1688 + BUG();
1689 + }
1690 +
1691 +- DMERR("%s: %s block %llu is corrupted", v->data_dev->name, type_str,
1692 +- block);
1693 ++ DMERR_LIMIT("%s: %s block %llu is corrupted", v->data_dev->name,
1694 ++ type_str, block);
1695 +
1696 + if (v->corrupted_errs == DM_VERITY_MAX_CORRUPTED_ERRS)
1697 + DMERR("%s: reached maximum errors", v->data_dev->name);
1698 +diff --git a/drivers/md/md.c b/drivers/md/md.c
1699 +index b27a69388dcd..764ed9c46629 100644
1700 +--- a/drivers/md/md.c
1701 ++++ b/drivers/md/md.c
1702 +@@ -7605,9 +7605,9 @@ static void status_unused(struct seq_file *seq)
1703 + static int status_resync(struct seq_file *seq, struct mddev *mddev)
1704 + {
1705 + sector_t max_sectors, resync, res;
1706 +- unsigned long dt, db;
1707 +- sector_t rt;
1708 +- int scale;
1709 ++ unsigned long dt, db = 0;
1710 ++ sector_t rt, curr_mark_cnt, resync_mark_cnt;
1711 ++ int scale, recovery_active;
1712 + unsigned int per_milli;
1713 +
1714 + if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
1715 +@@ -7677,22 +7677,30 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
1716 + * db: blocks written from mark until now
1717 + * rt: remaining time
1718 + *
1719 +- * rt is a sector_t, so could be 32bit or 64bit.
1720 +- * So we divide before multiply in case it is 32bit and close
1721 +- * to the limit.
1722 +- * We scale the divisor (db) by 32 to avoid losing precision
1723 +- * near the end of resync when the number of remaining sectors
1724 +- * is close to 'db'.
1725 +- * We then divide rt by 32 after multiplying by db to compensate.
1726 +- * The '+1' avoids division by zero if db is very small.
1727 ++ * rt is a sector_t, which is always 64bit now. We are keeping
1728 ++ * the original algorithm, but it is not really necessary.
1729 ++ *
1730 ++ * Original algorithm:
1731 ++ * So we divide before multiply in case it is 32bit and close
1732 ++ * to the limit.
1733 ++ * We scale the divisor (db) by 32 to avoid losing precision
1734 ++ * near the end of resync when the number of remaining sectors
1735 ++ * is close to 'db'.
1736 ++ * We then divide rt by 32 after multiplying by db to compensate.
1737 ++ * The '+1' avoids division by zero if db is very small.
1738 + */
1739 + dt = ((jiffies - mddev->resync_mark) / HZ);
1740 + if (!dt) dt++;
1741 +- db = (mddev->curr_mark_cnt - atomic_read(&mddev->recovery_active))
1742 +- - mddev->resync_mark_cnt;
1743 ++
1744 ++ curr_mark_cnt = mddev->curr_mark_cnt;
1745 ++ recovery_active = atomic_read(&mddev->recovery_active);
1746 ++ resync_mark_cnt = mddev->resync_mark_cnt;
1747 ++
1748 ++ if (curr_mark_cnt >= (recovery_active + resync_mark_cnt))
1749 ++ db = curr_mark_cnt - (recovery_active + resync_mark_cnt);
1750 +
1751 + rt = max_sectors - resync; /* number of remaining sectors */
1752 +- sector_div(rt, db/32+1);
1753 ++ rt = div64_u64(rt, db/32+1);
1754 + rt *= dt;
1755 + rt >>= 5;
1756 +
1757 +diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
1758 +index 21d0fa592145..bc089e634a75 100644
1759 +--- a/drivers/misc/vmw_vmci/vmci_context.c
1760 ++++ b/drivers/misc/vmw_vmci/vmci_context.c
1761 +@@ -29,6 +29,9 @@
1762 + #include "vmci_driver.h"
1763 + #include "vmci_event.h"
1764 +
1765 ++/* Use a wide upper bound for the maximum contexts. */
1766 ++#define VMCI_MAX_CONTEXTS 2000
1767 ++
1768 + /*
1769 + * List of current VMCI contexts. Contexts can be added by
1770 + * vmci_ctx_create() and removed via vmci_ctx_destroy().
1771 +@@ -125,19 +128,22 @@ struct vmci_ctx *vmci_ctx_create(u32 cid, u32 priv_flags,
1772 + /* Initialize host-specific VMCI context. */
1773 + init_waitqueue_head(&context->host_context.wait_queue);
1774 +
1775 +- context->queue_pair_array = vmci_handle_arr_create(0);
1776 ++ context->queue_pair_array =
1777 ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT);
1778 + if (!context->queue_pair_array) {
1779 + error = -ENOMEM;
1780 + goto err_free_ctx;
1781 + }
1782 +
1783 +- context->doorbell_array = vmci_handle_arr_create(0);
1784 ++ context->doorbell_array =
1785 ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1786 + if (!context->doorbell_array) {
1787 + error = -ENOMEM;
1788 + goto err_free_qp_array;
1789 + }
1790 +
1791 +- context->pending_doorbell_array = vmci_handle_arr_create(0);
1792 ++ context->pending_doorbell_array =
1793 ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1794 + if (!context->pending_doorbell_array) {
1795 + error = -ENOMEM;
1796 + goto err_free_db_array;
1797 +@@ -212,7 +218,7 @@ static int ctx_fire_notification(u32 context_id, u32 priv_flags)
1798 + * We create an array to hold the subscribers we find when
1799 + * scanning through all contexts.
1800 + */
1801 +- subscriber_array = vmci_handle_arr_create(0);
1802 ++ subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS);
1803 + if (subscriber_array == NULL)
1804 + return VMCI_ERROR_NO_MEM;
1805 +
1806 +@@ -631,20 +637,26 @@ int vmci_ctx_add_notification(u32 context_id, u32 remote_cid)
1807 +
1808 + spin_lock(&context->lock);
1809 +
1810 +- list_for_each_entry(n, &context->notifier_list, node) {
1811 +- if (vmci_handle_is_equal(n->handle, notifier->handle)) {
1812 +- exists = true;
1813 +- break;
1814 ++ if (context->n_notifiers < VMCI_MAX_CONTEXTS) {
1815 ++ list_for_each_entry(n, &context->notifier_list, node) {
1816 ++ if (vmci_handle_is_equal(n->handle, notifier->handle)) {
1817 ++ exists = true;
1818 ++ break;
1819 ++ }
1820 + }
1821 +- }
1822 +
1823 +- if (exists) {
1824 +- kfree(notifier);
1825 +- result = VMCI_ERROR_ALREADY_EXISTS;
1826 ++ if (exists) {
1827 ++ kfree(notifier);
1828 ++ result = VMCI_ERROR_ALREADY_EXISTS;
1829 ++ } else {
1830 ++ list_add_tail_rcu(&notifier->node,
1831 ++ &context->notifier_list);
1832 ++ context->n_notifiers++;
1833 ++ result = VMCI_SUCCESS;
1834 ++ }
1835 + } else {
1836 +- list_add_tail_rcu(&notifier->node, &context->notifier_list);
1837 +- context->n_notifiers++;
1838 +- result = VMCI_SUCCESS;
1839 ++ kfree(notifier);
1840 ++ result = VMCI_ERROR_NO_MEM;
1841 + }
1842 +
1843 + spin_unlock(&context->lock);
1844 +@@ -729,8 +741,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context,
1845 + u32 *buf_size, void **pbuf)
1846 + {
1847 + struct dbell_cpt_state *dbells;
1848 +- size_t n_doorbells;
1849 +- int i;
1850 ++ u32 i, n_doorbells;
1851 +
1852 + n_doorbells = vmci_handle_arr_get_size(context->doorbell_array);
1853 + if (n_doorbells > 0) {
1854 +@@ -868,7 +879,8 @@ int vmci_ctx_rcv_notifications_get(u32 context_id,
1855 + spin_lock(&context->lock);
1856 +
1857 + *db_handle_array = context->pending_doorbell_array;
1858 +- context->pending_doorbell_array = vmci_handle_arr_create(0);
1859 ++ context->pending_doorbell_array =
1860 ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1861 + if (!context->pending_doorbell_array) {
1862 + context->pending_doorbell_array = *db_handle_array;
1863 + *db_handle_array = NULL;
1864 +@@ -950,12 +962,11 @@ int vmci_ctx_dbell_create(u32 context_id, struct vmci_handle handle)
1865 + return VMCI_ERROR_NOT_FOUND;
1866 +
1867 + spin_lock(&context->lock);
1868 +- if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) {
1869 +- vmci_handle_arr_append_entry(&context->doorbell_array, handle);
1870 +- result = VMCI_SUCCESS;
1871 +- } else {
1872 ++ if (!vmci_handle_arr_has_entry(context->doorbell_array, handle))
1873 ++ result = vmci_handle_arr_append_entry(&context->doorbell_array,
1874 ++ handle);
1875 ++ else
1876 + result = VMCI_ERROR_DUPLICATE_ENTRY;
1877 +- }
1878 +
1879 + spin_unlock(&context->lock);
1880 + vmci_ctx_put(context);
1881 +@@ -1091,15 +1102,16 @@ int vmci_ctx_notify_dbell(u32 src_cid,
1882 + if (!vmci_handle_arr_has_entry(
1883 + dst_context->pending_doorbell_array,
1884 + handle)) {
1885 +- vmci_handle_arr_append_entry(
1886 ++ result = vmci_handle_arr_append_entry(
1887 + &dst_context->pending_doorbell_array,
1888 + handle);
1889 +-
1890 +- ctx_signal_notify(dst_context);
1891 +- wake_up(&dst_context->host_context.wait_queue);
1892 +-
1893 ++ if (result == VMCI_SUCCESS) {
1894 ++ ctx_signal_notify(dst_context);
1895 ++ wake_up(&dst_context->host_context.wait_queue);
1896 ++ }
1897 ++ } else {
1898 ++ result = VMCI_SUCCESS;
1899 + }
1900 +- result = VMCI_SUCCESS;
1901 + }
1902 + spin_unlock(&dst_context->lock);
1903 + }
1904 +@@ -1126,13 +1138,11 @@ int vmci_ctx_qp_create(struct vmci_ctx *context, struct vmci_handle handle)
1905 + if (context == NULL || vmci_handle_is_invalid(handle))
1906 + return VMCI_ERROR_INVALID_ARGS;
1907 +
1908 +- if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) {
1909 +- vmci_handle_arr_append_entry(&context->queue_pair_array,
1910 +- handle);
1911 +- result = VMCI_SUCCESS;
1912 +- } else {
1913 ++ if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle))
1914 ++ result = vmci_handle_arr_append_entry(
1915 ++ &context->queue_pair_array, handle);
1916 ++ else
1917 + result = VMCI_ERROR_DUPLICATE_ENTRY;
1918 +- }
1919 +
1920 + return result;
1921 + }
1922 +diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c
1923 +index 344973a0fb0a..917e18a8af95 100644
1924 +--- a/drivers/misc/vmw_vmci/vmci_handle_array.c
1925 ++++ b/drivers/misc/vmw_vmci/vmci_handle_array.c
1926 +@@ -16,24 +16,29 @@
1927 + #include <linux/slab.h>
1928 + #include "vmci_handle_array.h"
1929 +
1930 +-static size_t handle_arr_calc_size(size_t capacity)
1931 ++static size_t handle_arr_calc_size(u32 capacity)
1932 + {
1933 +- return sizeof(struct vmci_handle_arr) +
1934 ++ return VMCI_HANDLE_ARRAY_HEADER_SIZE +
1935 + capacity * sizeof(struct vmci_handle);
1936 + }
1937 +
1938 +-struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity)
1939 ++struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity)
1940 + {
1941 + struct vmci_handle_arr *array;
1942 +
1943 ++ if (max_capacity == 0 || capacity > max_capacity)
1944 ++ return NULL;
1945 ++
1946 + if (capacity == 0)
1947 +- capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE;
1948 ++ capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY,
1949 ++ max_capacity);
1950 +
1951 + array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC);
1952 + if (!array)
1953 + return NULL;
1954 +
1955 + array->capacity = capacity;
1956 ++ array->max_capacity = max_capacity;
1957 + array->size = 0;
1958 +
1959 + return array;
1960 +@@ -44,27 +49,34 @@ void vmci_handle_arr_destroy(struct vmci_handle_arr *array)
1961 + kfree(array);
1962 + }
1963 +
1964 +-void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1965 +- struct vmci_handle handle)
1966 ++int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1967 ++ struct vmci_handle handle)
1968 + {
1969 + struct vmci_handle_arr *array = *array_ptr;
1970 +
1971 + if (unlikely(array->size >= array->capacity)) {
1972 + /* reallocate. */
1973 + struct vmci_handle_arr *new_array;
1974 +- size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT;
1975 +- size_t new_size = handle_arr_calc_size(new_capacity);
1976 ++ u32 capacity_bump = min(array->max_capacity - array->capacity,
1977 ++ array->capacity);
1978 ++ size_t new_size = handle_arr_calc_size(array->capacity +
1979 ++ capacity_bump);
1980 ++
1981 ++ if (array->size >= array->max_capacity)
1982 ++ return VMCI_ERROR_NO_MEM;
1983 +
1984 + new_array = krealloc(array, new_size, GFP_ATOMIC);
1985 + if (!new_array)
1986 +- return;
1987 ++ return VMCI_ERROR_NO_MEM;
1988 +
1989 +- new_array->capacity = new_capacity;
1990 ++ new_array->capacity += capacity_bump;
1991 + *array_ptr = array = new_array;
1992 + }
1993 +
1994 + array->entries[array->size] = handle;
1995 + array->size++;
1996 ++
1997 ++ return VMCI_SUCCESS;
1998 + }
1999 +
2000 + /*
2001 +@@ -74,7 +86,7 @@ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
2002 + struct vmci_handle entry_handle)
2003 + {
2004 + struct vmci_handle handle = VMCI_INVALID_HANDLE;
2005 +- size_t i;
2006 ++ u32 i;
2007 +
2008 + for (i = 0; i < array->size; i++) {
2009 + if (vmci_handle_is_equal(array->entries[i], entry_handle)) {
2010 +@@ -109,7 +121,7 @@ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array)
2011 + * Handle at given index, VMCI_INVALID_HANDLE if invalid index.
2012 + */
2013 + struct vmci_handle
2014 +-vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
2015 ++vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index)
2016 + {
2017 + if (unlikely(index >= array->size))
2018 + return VMCI_INVALID_HANDLE;
2019 +@@ -120,7 +132,7 @@ vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
2020 + bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
2021 + struct vmci_handle entry_handle)
2022 + {
2023 +- size_t i;
2024 ++ u32 i;
2025 +
2026 + for (i = 0; i < array->size; i++)
2027 + if (vmci_handle_is_equal(array->entries[i], entry_handle))
2028 +diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h
2029 +index b5f3a7f98cf1..0fc58597820e 100644
2030 +--- a/drivers/misc/vmw_vmci/vmci_handle_array.h
2031 ++++ b/drivers/misc/vmw_vmci/vmci_handle_array.h
2032 +@@ -17,32 +17,41 @@
2033 + #define _VMCI_HANDLE_ARRAY_H_
2034 +
2035 + #include <linux/vmw_vmci_defs.h>
2036 ++#include <linux/limits.h>
2037 + #include <linux/types.h>
2038 +
2039 +-#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4
2040 +-#define VMCI_ARR_CAP_MULT 2 /* Array capacity multiplier */
2041 +-
2042 + struct vmci_handle_arr {
2043 +- size_t capacity;
2044 +- size_t size;
2045 ++ u32 capacity;
2046 ++ u32 max_capacity;
2047 ++ u32 size;
2048 ++ u32 pad;
2049 + struct vmci_handle entries[];
2050 + };
2051 +
2052 +-struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity);
2053 ++#define VMCI_HANDLE_ARRAY_HEADER_SIZE \
2054 ++ offsetof(struct vmci_handle_arr, entries)
2055 ++/* Select a default capacity that results in a 64 byte sized array */
2056 ++#define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY 6
2057 ++/* Make sure that the max array size can be expressed by a u32 */
2058 ++#define VMCI_HANDLE_ARRAY_MAX_CAPACITY \
2059 ++ ((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) / \
2060 ++ sizeof(struct vmci_handle))
2061 ++
2062 ++struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity);
2063 + void vmci_handle_arr_destroy(struct vmci_handle_arr *array);
2064 +-void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
2065 +- struct vmci_handle handle);
2066 ++int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
2067 ++ struct vmci_handle handle);
2068 + struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
2069 + struct vmci_handle
2070 + entry_handle);
2071 + struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array);
2072 + struct vmci_handle
2073 +-vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index);
2074 ++vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index);
2075 + bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
2076 + struct vmci_handle entry_handle);
2077 + struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array);
2078 +
2079 +-static inline size_t vmci_handle_arr_get_size(
2080 ++static inline u32 vmci_handle_arr_get_size(
2081 + const struct vmci_handle_arr *array)
2082 + {
2083 + return array->size;
2084 +diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
2085 +index d3ce904e929e..ebad93ac8f11 100644
2086 +--- a/drivers/net/can/m_can/m_can.c
2087 ++++ b/drivers/net/can/m_can/m_can.c
2088 +@@ -818,6 +818,27 @@ static int m_can_poll(struct napi_struct *napi, int quota)
2089 + if (!irqstatus)
2090 + goto end;
2091 +
2092 ++ /* Errata workaround for issue "Needless activation of MRAF irq"
2093 ++ * During frame reception while the MCAN is in Error Passive state
2094 ++ * and the Receive Error Counter has the value MCAN_ECR.REC = 127,
2095 ++ * it may happen that MCAN_IR.MRAF is set although there was no
2096 ++ * Message RAM access failure.
2097 ++ * If MCAN_IR.MRAF is enabled, an interrupt to the Host CPU is generated
2098 ++ * The Message RAM Access Failure interrupt routine needs to check
2099 ++ * whether MCAN_ECR.RP = ’1’ and MCAN_ECR.REC = 127.
2100 ++ * In this case, reset MCAN_IR.MRAF. No further action is required.
2101 ++ */
2102 ++ if ((priv->version <= 31) && (irqstatus & IR_MRAF) &&
2103 ++ (m_can_read(priv, M_CAN_ECR) & ECR_RP)) {
2104 ++ struct can_berr_counter bec;
2105 ++
2106 ++ __m_can_get_berr_counter(dev, &bec);
2107 ++ if (bec.rxerr == 127) {
2108 ++ m_can_write(priv, M_CAN_IR, IR_MRAF);
2109 ++ irqstatus &= ~IR_MRAF;
2110 ++ }
2111 ++ }
2112 ++
2113 + psr = m_can_read(priv, M_CAN_PSR);
2114 + if (irqstatus & IR_ERR_STATE)
2115 + work_done += m_can_handle_state_errors(dev, psr);
2116 +diff --git a/drivers/net/can/spi/Kconfig b/drivers/net/can/spi/Kconfig
2117 +index 8f2e0dd7b756..792e9c6c4a2f 100644
2118 +--- a/drivers/net/can/spi/Kconfig
2119 ++++ b/drivers/net/can/spi/Kconfig
2120 +@@ -8,9 +8,10 @@ config CAN_HI311X
2121 + Driver for the Holt HI311x SPI CAN controllers.
2122 +
2123 + config CAN_MCP251X
2124 +- tristate "Microchip MCP251x SPI CAN controllers"
2125 ++ tristate "Microchip MCP251x and MCP25625 SPI CAN controllers"
2126 + depends on HAS_DMA
2127 + ---help---
2128 +- Driver for the Microchip MCP251x SPI CAN controllers.
2129 ++ Driver for the Microchip MCP251x and MCP25625 SPI CAN
2130 ++ controllers.
2131 +
2132 + endmenu
2133 +diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
2134 +index f3f05fea8e1f..d8c448beab24 100644
2135 +--- a/drivers/net/can/spi/mcp251x.c
2136 ++++ b/drivers/net/can/spi/mcp251x.c
2137 +@@ -1,5 +1,5 @@
2138 + /*
2139 +- * CAN bus driver for Microchip 251x CAN Controller with SPI Interface
2140 ++ * CAN bus driver for Microchip 251x/25625 CAN Controller with SPI Interface
2141 + *
2142 + * MCP2510 support and bug fixes by Christian Pellegrin
2143 + * <chripell@××××××××.org>
2144 +@@ -41,7 +41,7 @@
2145 + * static struct spi_board_info spi_board_info[] = {
2146 + * {
2147 + * .modalias = "mcp2510",
2148 +- * // or "mcp2515" depending on your controller
2149 ++ * // "mcp2515" or "mcp25625" depending on your controller
2150 + * .platform_data = &mcp251x_info,
2151 + * .irq = IRQ_EINT13,
2152 + * .max_speed_hz = 2*1000*1000,
2153 +@@ -238,6 +238,7 @@ static const struct can_bittiming_const mcp251x_bittiming_const = {
2154 + enum mcp251x_model {
2155 + CAN_MCP251X_MCP2510 = 0x2510,
2156 + CAN_MCP251X_MCP2515 = 0x2515,
2157 ++ CAN_MCP251X_MCP25625 = 0x25625,
2158 + };
2159 +
2160 + struct mcp251x_priv {
2161 +@@ -280,7 +281,6 @@ static inline int mcp251x_is_##_model(struct spi_device *spi) \
2162 + }
2163 +
2164 + MCP251X_IS(2510);
2165 +-MCP251X_IS(2515);
2166 +
2167 + static void mcp251x_clean(struct net_device *net)
2168 + {
2169 +@@ -640,7 +640,7 @@ static int mcp251x_hw_reset(struct spi_device *spi)
2170 +
2171 + /* Wait for oscillator startup timer after reset */
2172 + mdelay(MCP251X_OST_DELAY_MS);
2173 +-
2174 ++
2175 + reg = mcp251x_read_reg(spi, CANSTAT);
2176 + if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF)
2177 + return -ENODEV;
2178 +@@ -821,9 +821,8 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
2179 + /* receive buffer 0 */
2180 + if (intf & CANINTF_RX0IF) {
2181 + mcp251x_hw_rx(spi, 0);
2182 +- /*
2183 +- * Free one buffer ASAP
2184 +- * (The MCP2515 does this automatically.)
2185 ++ /* Free one buffer ASAP
2186 ++ * (The MCP2515/25625 does this automatically.)
2187 + */
2188 + if (mcp251x_is_2510(spi))
2189 + mcp251x_write_bits(spi, CANINTF, CANINTF_RX0IF, 0x00);
2190 +@@ -832,7 +831,7 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
2191 + /* receive buffer 1 */
2192 + if (intf & CANINTF_RX1IF) {
2193 + mcp251x_hw_rx(spi, 1);
2194 +- /* the MCP2515 does this automatically */
2195 ++ /* The MCP2515/25625 does this automatically. */
2196 + if (mcp251x_is_2510(spi))
2197 + clear_intf |= CANINTF_RX1IF;
2198 + }
2199 +@@ -1007,6 +1006,10 @@ static const struct of_device_id mcp251x_of_match[] = {
2200 + .compatible = "microchip,mcp2515",
2201 + .data = (void *)CAN_MCP251X_MCP2515,
2202 + },
2203 ++ {
2204 ++ .compatible = "microchip,mcp25625",
2205 ++ .data = (void *)CAN_MCP251X_MCP25625,
2206 ++ },
2207 + { }
2208 + };
2209 + MODULE_DEVICE_TABLE(of, mcp251x_of_match);
2210 +@@ -1020,6 +1023,10 @@ static const struct spi_device_id mcp251x_id_table[] = {
2211 + .name = "mcp2515",
2212 + .driver_data = (kernel_ulong_t)CAN_MCP251X_MCP2515,
2213 + },
2214 ++ {
2215 ++ .name = "mcp25625",
2216 ++ .driver_data = (kernel_ulong_t)CAN_MCP251X_MCP25625,
2217 ++ },
2218 + { }
2219 + };
2220 + MODULE_DEVICE_TABLE(spi, mcp251x_id_table);
2221 +@@ -1260,5 +1267,5 @@ module_spi_driver(mcp251x_can_driver);
2222 +
2223 + MODULE_AUTHOR("Chris Elston <celston@×××××××.com>, "
2224 + "Christian Pellegrin <chripell@××××××××.org>");
2225 +-MODULE_DESCRIPTION("Microchip 251x CAN driver");
2226 ++MODULE_DESCRIPTION("Microchip 251x/25625 CAN driver");
2227 + MODULE_LICENSE("GPL v2");
2228 +diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
2229 +index 8c8a0ec3d6e9..f260bd30c73a 100644
2230 +--- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c
2231 ++++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
2232 +@@ -416,7 +416,7 @@ int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
2233 + * VTU DBNum[7:4] are located in VTU Operation 11:8
2234 + */
2235 + op |= entry->fid & 0x000f;
2236 +- op |= (entry->fid & 0x00f0) << 8;
2237 ++ op |= (entry->fid & 0x00f0) << 4;
2238 + }
2239 +
2240 + return mv88e6xxx_g1_vtu_op(chip, op);
2241 +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
2242 +index 3fd1085a093f..65bc1929d1a8 100644
2243 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
2244 ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
2245 +@@ -1581,7 +1581,8 @@ static int bnx2x_get_module_info(struct net_device *dev,
2246 + }
2247 +
2248 + if (!sff8472_comp ||
2249 +- (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ)) {
2250 ++ (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ) ||
2251 ++ !(diag_type & SFP_EEPROM_DDM_IMPLEMENTED)) {
2252 + modinfo->type = ETH_MODULE_SFF_8079;
2253 + modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
2254 + } else {
2255 +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
2256 +index b7d251108c19..7115f5025664 100644
2257 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
2258 ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
2259 +@@ -62,6 +62,7 @@
2260 + #define SFP_EEPROM_DIAG_TYPE_ADDR 0x5c
2261 + #define SFP_EEPROM_DIAG_TYPE_SIZE 1
2262 + #define SFP_EEPROM_DIAG_ADDR_CHANGE_REQ (1<<2)
2263 ++#define SFP_EEPROM_DDM_IMPLEMENTED (1<<6)
2264 + #define SFP_EEPROM_SFF_8472_COMP_ADDR 0x5e
2265 + #define SFP_EEPROM_SFF_8472_COMP_SIZE 1
2266 +
2267 +diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c
2268 +index 23f6b60030c5..8c16298a252d 100644
2269 +--- a/drivers/net/ethernet/cavium/liquidio/lio_core.c
2270 ++++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c
2271 +@@ -854,7 +854,7 @@ static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct)
2272 +
2273 + if (droq->ops.poll_mode) {
2274 + droq->ops.napi_fn(droq);
2275 +- oct_priv->napi_mask |= (1 << oq_no);
2276 ++ oct_priv->napi_mask |= BIT_ULL(oq_no);
2277 + } else {
2278 + tasklet_schedule(&oct_priv->droq_tasklet);
2279 + }
2280 +diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
2281 +index 6ce7b8435ace..f66b246acaea 100644
2282 +--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
2283 ++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
2284 +@@ -893,7 +893,7 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test,
2285 + u64 *data)
2286 + {
2287 + struct be_adapter *adapter = netdev_priv(netdev);
2288 +- int status;
2289 ++ int status, cnt;
2290 + u8 link_status = 0;
2291 +
2292 + if (adapter->function_caps & BE_FUNCTION_CAPS_SUPER_NIC) {
2293 +@@ -904,6 +904,9 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test,
2294 +
2295 + memset(data, 0, sizeof(u64) * ETHTOOL_TESTS_NUM);
2296 +
2297 ++ /* check link status before offline tests */
2298 ++ link_status = netif_carrier_ok(netdev);
2299 ++
2300 + if (test->flags & ETH_TEST_FL_OFFLINE) {
2301 + if (be_loopback_test(adapter, BE_MAC_LOOPBACK, &data[0]) != 0)
2302 + test->flags |= ETH_TEST_FL_FAILED;
2303 +@@ -924,13 +927,26 @@ static void be_self_test(struct net_device *netdev, struct ethtool_test *test,
2304 + test->flags |= ETH_TEST_FL_FAILED;
2305 + }
2306 +
2307 +- status = be_cmd_link_status_query(adapter, NULL, &link_status, 0);
2308 +- if (status) {
2309 +- test->flags |= ETH_TEST_FL_FAILED;
2310 +- data[4] = -1;
2311 +- } else if (!link_status) {
2312 ++ /* link status was down prior to test */
2313 ++ if (!link_status) {
2314 + test->flags |= ETH_TEST_FL_FAILED;
2315 + data[4] = 1;
2316 ++ return;
2317 ++ }
2318 ++
2319 ++ for (cnt = 10; cnt; cnt--) {
2320 ++ status = be_cmd_link_status_query(adapter, NULL, &link_status,
2321 ++ 0);
2322 ++ if (status) {
2323 ++ test->flags |= ETH_TEST_FL_FAILED;
2324 ++ data[4] = -1;
2325 ++ break;
2326 ++ }
2327 ++
2328 ++ if (link_status)
2329 ++ break;
2330 ++
2331 ++ msleep_interruptible(500);
2332 + }
2333 + }
2334 +
2335 +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
2336 +index c914b338691b..956fbb164e6f 100644
2337 +--- a/drivers/net/ethernet/ibm/ibmvnic.c
2338 ++++ b/drivers/net/ethernet/ibm/ibmvnic.c
2339 +@@ -1489,6 +1489,9 @@ static int do_reset(struct ibmvnic_adapter *adapter,
2340 + return 0;
2341 + }
2342 +
2343 ++ /* refresh device's multicast list */
2344 ++ ibmvnic_set_multi(netdev);
2345 ++
2346 + /* kick napi */
2347 + for (i = 0; i < adapter->req_rx_queues; i++)
2348 + napi_schedule(&adapter->napi[i]);
2349 +diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
2350 +index 3c214a47c1c4..1ad345796e80 100644
2351 +--- a/drivers/net/ethernet/intel/e1000e/netdev.c
2352 ++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
2353 +@@ -4228,7 +4228,7 @@ void e1000e_up(struct e1000_adapter *adapter)
2354 + e1000_configure_msix(adapter);
2355 + e1000_irq_enable(adapter);
2356 +
2357 +- netif_start_queue(adapter->netdev);
2358 ++ /* Tx queue started by watchdog timer when link is up */
2359 +
2360 + e1000e_trigger_lsc(adapter);
2361 + }
2362 +@@ -4604,6 +4604,7 @@ int e1000e_open(struct net_device *netdev)
2363 + pm_runtime_get_sync(&pdev->dev);
2364 +
2365 + netif_carrier_off(netdev);
2366 ++ netif_stop_queue(netdev);
2367 +
2368 + /* allocate transmit descriptors */
2369 + err = e1000e_setup_tx_resources(adapter->tx_ring);
2370 +@@ -4664,7 +4665,6 @@ int e1000e_open(struct net_device *netdev)
2371 + e1000_irq_enable(adapter);
2372 +
2373 + adapter->tx_hang_recheck = false;
2374 +- netif_start_queue(netdev);
2375 +
2376 + hw->mac.get_link_status = true;
2377 + pm_runtime_put(&pdev->dev);
2378 +@@ -5286,6 +5286,7 @@ static void e1000_watchdog_task(struct work_struct *work)
2379 + if (phy->ops.cfg_on_link_up)
2380 + phy->ops.cfg_on_link_up(hw);
2381 +
2382 ++ netif_wake_queue(netdev);
2383 + netif_carrier_on(netdev);
2384 +
2385 + if (!test_bit(__E1000_DOWN, &adapter->state))
2386 +@@ -5299,6 +5300,7 @@ static void e1000_watchdog_task(struct work_struct *work)
2387 + /* Link status message must follow this format */
2388 + pr_info("%s NIC Link is Down\n", adapter->netdev->name);
2389 + netif_carrier_off(netdev);
2390 ++ netif_stop_queue(netdev);
2391 + if (!test_bit(__E1000_DOWN, &adapter->state))
2392 + mod_timer(&adapter->phy_info_timer,
2393 + round_jiffies(jiffies + 2 * HZ));
2394 +@@ -5306,13 +5308,8 @@ static void e1000_watchdog_task(struct work_struct *work)
2395 + /* 8000ES2LAN requires a Rx packet buffer work-around
2396 + * on link down event; reset the controller to flush
2397 + * the Rx packet buffer.
2398 +- *
2399 +- * If the link is lost the controller stops DMA, but
2400 +- * if there is queued Tx work it cannot be done. So
2401 +- * reset the controller to flush the Tx packet buffers.
2402 + */
2403 +- if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
2404 +- e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
2405 ++ if (adapter->flags & FLAG_RX_NEEDS_RESTART)
2406 + adapter->flags |= FLAG_RESTART_NOW;
2407 + else
2408 + pm_schedule_suspend(netdev->dev.parent,
2409 +@@ -5335,6 +5332,14 @@ link_up:
2410 + adapter->gotc_old = adapter->stats.gotc;
2411 + spin_unlock(&adapter->stats64_lock);
2412 +
2413 ++ /* If the link is lost the controller stops DMA, but
2414 ++ * if there is queued Tx work it cannot be done. So
2415 ++ * reset the controller to flush the Tx packet buffers.
2416 ++ */
2417 ++ if (!netif_carrier_ok(netdev) &&
2418 ++ (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
2419 ++ adapter->flags |= FLAG_RESTART_NOW;
2420 ++
2421 + /* If reset is necessary, do it outside of interrupt context. */
2422 + if (adapter->flags & FLAG_RESTART_NOW) {
2423 + schedule_work(&adapter->reset_task);
2424 +diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
2425 +index 5acfbe5b8b9d..8ab7a4f98a07 100644
2426 +--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
2427 ++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
2428 +@@ -911,7 +911,7 @@ static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port,
2429 + MLXSW_REG_ZERO(spaft, payload);
2430 + mlxsw_reg_spaft_local_port_set(payload, local_port);
2431 + mlxsw_reg_spaft_allow_untagged_set(payload, allow_untagged);
2432 +- mlxsw_reg_spaft_allow_prio_tagged_set(payload, true);
2433 ++ mlxsw_reg_spaft_allow_prio_tagged_set(payload, allow_untagged);
2434 + mlxsw_reg_spaft_allow_tagged_set(payload, true);
2435 + }
2436 +
2437 +diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c
2438 +index 40bd88362e3d..693f9582173b 100644
2439 +--- a/drivers/net/ethernet/sis/sis900.c
2440 ++++ b/drivers/net/ethernet/sis/sis900.c
2441 +@@ -1057,7 +1057,7 @@ sis900_open(struct net_device *net_dev)
2442 + sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED);
2443 +
2444 + /* Enable all known interrupts by setting the interrupt mask. */
2445 +- sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE);
2446 ++ sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
2447 + sw32(cr, RxENA | sr32(cr));
2448 + sw32(ier, IE);
2449 +
2450 +@@ -1580,7 +1580,7 @@ static void sis900_tx_timeout(struct net_device *net_dev)
2451 + sw32(txdp, sis_priv->tx_ring_dma);
2452 +
2453 + /* Enable all known interrupts by setting the interrupt mask. */
2454 +- sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE);
2455 ++ sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
2456 + }
2457 +
2458 + /**
2459 +@@ -1620,7 +1620,7 @@ sis900_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
2460 + spin_unlock_irqrestore(&sis_priv->lock, flags);
2461 + return NETDEV_TX_OK;
2462 + }
2463 +- sis_priv->tx_ring[entry].cmdsts = (OWN | skb->len);
2464 ++ sis_priv->tx_ring[entry].cmdsts = (OWN | INTR | skb->len);
2465 + sw32(cr, TxENA | sr32(cr));
2466 +
2467 + sis_priv->cur_tx ++;
2468 +@@ -1676,7 +1676,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
2469 + do {
2470 + status = sr32(isr);
2471 +
2472 +- if ((status & (HIBERR|TxURN|TxERR|TxIDLE|RxORN|RxERR|RxOK)) == 0)
2473 ++ if ((status & (HIBERR|TxURN|TxERR|TxIDLE|TxDESC|RxORN|RxERR|RxOK)) == 0)
2474 + /* nothing intresting happened */
2475 + break;
2476 + handled = 1;
2477 +@@ -1686,7 +1686,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
2478 + /* Rx interrupt */
2479 + sis900_rx(net_dev);
2480 +
2481 +- if (status & (TxURN | TxERR | TxIDLE))
2482 ++ if (status & (TxURN | TxERR | TxIDLE | TxDESC))
2483 + /* Tx interrupt */
2484 + sis900_finish_xmit(net_dev);
2485 +
2486 +@@ -1898,8 +1898,8 @@ static void sis900_finish_xmit (struct net_device *net_dev)
2487 +
2488 + if (tx_status & OWN) {
2489 + /* The packet is not transmitted yet (owned by hardware) !
2490 +- * Note: the interrupt is generated only when Tx Machine
2491 +- * is idle, so this is an almost impossible case */
2492 ++ * Note: this is an almost impossible condition
2493 ++ * in case of TxDESC ('descriptor interrupt') */
2494 + break;
2495 + }
2496 +
2497 +@@ -2475,7 +2475,7 @@ static int sis900_resume(struct pci_dev *pci_dev)
2498 + sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED);
2499 +
2500 + /* Enable all known interrupts by setting the interrupt mask. */
2501 +- sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE);
2502 ++ sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
2503 + sw32(cr, RxENA | sr32(cr));
2504 + sw32(ier, IE);
2505 +
2506 +diff --git a/drivers/net/ppp/ppp_mppe.c b/drivers/net/ppp/ppp_mppe.c
2507 +index 6c7fd98cb00a..d9eda7c217e9 100644
2508 +--- a/drivers/net/ppp/ppp_mppe.c
2509 ++++ b/drivers/net/ppp/ppp_mppe.c
2510 +@@ -63,6 +63,7 @@ MODULE_AUTHOR("Frank Cusack <fcusack@×××××××.com>");
2511 + MODULE_DESCRIPTION("Point-to-Point Protocol Microsoft Point-to-Point Encryption support");
2512 + MODULE_LICENSE("Dual BSD/GPL");
2513 + MODULE_ALIAS("ppp-compress-" __stringify(CI_MPPE));
2514 ++MODULE_SOFTDEP("pre: arc4");
2515 + MODULE_VERSION("1.0.2");
2516 +
2517 + static unsigned int
2518 +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
2519 +index 063daa3435e4..4b0144b2a252 100644
2520 +--- a/drivers/net/usb/qmi_wwan.c
2521 ++++ b/drivers/net/usb/qmi_wwan.c
2522 +@@ -153,7 +153,7 @@ static bool qmimux_has_slaves(struct usbnet *dev)
2523 +
2524 + static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
2525 + {
2526 +- unsigned int len, offset = 0;
2527 ++ unsigned int len, offset = 0, pad_len, pkt_len;
2528 + struct qmimux_hdr *hdr;
2529 + struct net_device *net;
2530 + struct sk_buff *skbn;
2531 +@@ -171,10 +171,16 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
2532 + if (hdr->pad & 0x80)
2533 + goto skip;
2534 +
2535 ++ /* extract padding length and check for valid length info */
2536 ++ pad_len = hdr->pad & 0x3f;
2537 ++ if (len == 0 || pad_len >= len)
2538 ++ goto skip;
2539 ++ pkt_len = len - pad_len;
2540 ++
2541 + net = qmimux_find_dev(dev, hdr->mux_id);
2542 + if (!net)
2543 + goto skip;
2544 +- skbn = netdev_alloc_skb(net, len);
2545 ++ skbn = netdev_alloc_skb(net, pkt_len);
2546 + if (!skbn)
2547 + return 0;
2548 + skbn->dev = net;
2549 +@@ -191,7 +197,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
2550 + goto skip;
2551 + }
2552 +
2553 +- skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, len);
2554 ++ skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, pkt_len);
2555 + if (netif_rx(skbn) != NET_RX_SUCCESS)
2556 + return 0;
2557 +
2558 +@@ -241,13 +247,14 @@ out_free_newdev:
2559 + return err;
2560 + }
2561 +
2562 +-static void qmimux_unregister_device(struct net_device *dev)
2563 ++static void qmimux_unregister_device(struct net_device *dev,
2564 ++ struct list_head *head)
2565 + {
2566 + struct qmimux_priv *priv = netdev_priv(dev);
2567 + struct net_device *real_dev = priv->real_dev;
2568 +
2569 + netdev_upper_dev_unlink(real_dev, dev);
2570 +- unregister_netdevice(dev);
2571 ++ unregister_netdevice_queue(dev, head);
2572 +
2573 + /* Get rid of the reference to real_dev */
2574 + dev_put(real_dev);
2575 +@@ -356,8 +363,8 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
2576 + if (kstrtou8(buf, 0, &mux_id))
2577 + return -EINVAL;
2578 +
2579 +- /* mux_id [1 - 0x7f] range empirically found */
2580 +- if (mux_id < 1 || mux_id > 0x7f)
2581 ++ /* mux_id [1 - 254] for compatibility with ip(8) and the rmnet driver */
2582 ++ if (mux_id < 1 || mux_id > 254)
2583 + return -EINVAL;
2584 +
2585 + if (!rtnl_trylock())
2586 +@@ -418,7 +425,7 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
2587 + ret = -EINVAL;
2588 + goto err;
2589 + }
2590 +- qmimux_unregister_device(del_dev);
2591 ++ qmimux_unregister_device(del_dev, NULL);
2592 +
2593 + if (!qmimux_has_slaves(dev))
2594 + info->flags &= ~QMI_WWAN_FLAG_MUX;
2595 +@@ -1417,6 +1424,7 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
2596 + struct qmi_wwan_state *info;
2597 + struct list_head *iter;
2598 + struct net_device *ldev;
2599 ++ LIST_HEAD(list);
2600 +
2601 + /* called twice if separate control and data intf */
2602 + if (!dev)
2603 +@@ -1429,8 +1437,9 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
2604 + }
2605 + rcu_read_lock();
2606 + netdev_for_each_upper_dev_rcu(dev->net, ldev, iter)
2607 +- qmimux_unregister_device(ldev);
2608 ++ qmimux_unregister_device(ldev, &list);
2609 + rcu_read_unlock();
2610 ++ unregister_netdevice_many(&list);
2611 + rtnl_unlock();
2612 + info->flags &= ~QMI_WWAN_FLAG_MUX;
2613 + }
2614 +diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
2615 +index e7c3f3b8457d..99f1897a775d 100644
2616 +--- a/drivers/net/wireless/ath/carl9170/usb.c
2617 ++++ b/drivers/net/wireless/ath/carl9170/usb.c
2618 +@@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = {
2619 + };
2620 + MODULE_DEVICE_TABLE(usb, carl9170_usb_ids);
2621 +
2622 ++static struct usb_driver carl9170_driver;
2623 ++
2624 + static void carl9170_usb_submit_data_urb(struct ar9170 *ar)
2625 + {
2626 + struct urb *urb;
2627 +@@ -966,32 +968,28 @@ err_out:
2628 +
2629 + static void carl9170_usb_firmware_failed(struct ar9170 *ar)
2630 + {
2631 +- struct device *parent = ar->udev->dev.parent;
2632 +- struct usb_device *udev;
2633 +-
2634 +- /*
2635 +- * Store a copy of the usb_device pointer locally.
2636 +- * This is because device_release_driver initiates
2637 +- * carl9170_usb_disconnect, which in turn frees our
2638 +- * driver context (ar).
2639 ++ /* Store a copies of the usb_interface and usb_device pointer locally.
2640 ++ * This is because release_driver initiates carl9170_usb_disconnect,
2641 ++ * which in turn frees our driver context (ar).
2642 + */
2643 +- udev = ar->udev;
2644 ++ struct usb_interface *intf = ar->intf;
2645 ++ struct usb_device *udev = ar->udev;
2646 +
2647 + complete(&ar->fw_load_wait);
2648 ++ /* at this point 'ar' could be already freed. Don't use it anymore */
2649 ++ ar = NULL;
2650 +
2651 + /* unbind anything failed */
2652 +- if (parent)
2653 +- device_lock(parent);
2654 +-
2655 +- device_release_driver(&udev->dev);
2656 +- if (parent)
2657 +- device_unlock(parent);
2658 ++ usb_lock_device(udev);
2659 ++ usb_driver_release_interface(&carl9170_driver, intf);
2660 ++ usb_unlock_device(udev);
2661 +
2662 +- usb_put_dev(udev);
2663 ++ usb_put_intf(intf);
2664 + }
2665 +
2666 + static void carl9170_usb_firmware_finish(struct ar9170 *ar)
2667 + {
2668 ++ struct usb_interface *intf = ar->intf;
2669 + int err;
2670 +
2671 + err = carl9170_parse_firmware(ar);
2672 +@@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar)
2673 + goto err_unrx;
2674 +
2675 + complete(&ar->fw_load_wait);
2676 +- usb_put_dev(ar->udev);
2677 ++ usb_put_intf(intf);
2678 + return;
2679 +
2680 + err_unrx:
2681 +@@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf,
2682 + return PTR_ERR(ar);
2683 +
2684 + udev = interface_to_usbdev(intf);
2685 +- usb_get_dev(udev);
2686 + ar->udev = udev;
2687 + ar->intf = intf;
2688 + ar->features = id->driver_info;
2689 +@@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf,
2690 + atomic_set(&ar->rx_anch_urbs, 0);
2691 + atomic_set(&ar->rx_pool_urbs, 0);
2692 +
2693 +- usb_get_dev(ar->udev);
2694 ++ usb_get_intf(intf);
2695 +
2696 + carl9170_set_state(ar, CARL9170_STOPPED);
2697 +
2698 + err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME,
2699 + &ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2);
2700 + if (err) {
2701 +- usb_put_dev(udev);
2702 +- usb_put_dev(udev);
2703 ++ usb_put_intf(intf);
2704 + carl9170_free(ar);
2705 + }
2706 + return err;
2707 +@@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf)
2708 +
2709 + carl9170_release_firmware(ar);
2710 + carl9170_free(ar);
2711 +- usb_put_dev(udev);
2712 + }
2713 +
2714 + #ifdef CONFIG_PM
2715 +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
2716 +index 99676d6c4713..6c10b8c4ddbe 100644
2717 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
2718 ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
2719 +@@ -1509,7 +1509,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
2720 + goto free;
2721 +
2722 + out_free_fw:
2723 +- iwl_dealloc_ucode(drv);
2724 + release_firmware(ucode_raw);
2725 + out_unbind:
2726 + complete(&drv->request_firmware_complete);
2727 +diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
2728 +index b0b86f701061..15661da6eedc 100644
2729 +--- a/drivers/net/wireless/intersil/p54/p54usb.c
2730 ++++ b/drivers/net/wireless/intersil/p54/p54usb.c
2731 +@@ -33,6 +33,8 @@ MODULE_ALIAS("prism54usb");
2732 + MODULE_FIRMWARE("isl3886usb");
2733 + MODULE_FIRMWARE("isl3887usb");
2734 +
2735 ++static struct usb_driver p54u_driver;
2736 ++
2737 + /*
2738 + * Note:
2739 + *
2740 +@@ -921,9 +923,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
2741 + {
2742 + struct p54u_priv *priv = context;
2743 + struct usb_device *udev = priv->udev;
2744 ++ struct usb_interface *intf = priv->intf;
2745 + int err;
2746 +
2747 +- complete(&priv->fw_wait_load);
2748 + if (firmware) {
2749 + priv->fw = firmware;
2750 + err = p54u_start_ops(priv);
2751 +@@ -932,26 +934,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
2752 + dev_err(&udev->dev, "Firmware not found.\n");
2753 + }
2754 +
2755 +- if (err) {
2756 +- struct device *parent = priv->udev->dev.parent;
2757 +-
2758 +- dev_err(&udev->dev, "failed to initialize device (%d)\n", err);
2759 +-
2760 +- if (parent)
2761 +- device_lock(parent);
2762 ++ complete(&priv->fw_wait_load);
2763 ++ /*
2764 ++ * At this point p54u_disconnect may have already freed
2765 ++ * the "priv" context. Do not use it anymore!
2766 ++ */
2767 ++ priv = NULL;
2768 +
2769 +- device_release_driver(&udev->dev);
2770 +- /*
2771 +- * At this point p54u_disconnect has already freed
2772 +- * the "priv" context. Do not use it anymore!
2773 +- */
2774 +- priv = NULL;
2775 ++ if (err) {
2776 ++ dev_err(&intf->dev, "failed to initialize device (%d)\n", err);
2777 +
2778 +- if (parent)
2779 +- device_unlock(parent);
2780 ++ usb_lock_device(udev);
2781 ++ usb_driver_release_interface(&p54u_driver, intf);
2782 ++ usb_unlock_device(udev);
2783 + }
2784 +
2785 +- usb_put_dev(udev);
2786 ++ usb_put_intf(intf);
2787 + }
2788 +
2789 + static int p54u_load_firmware(struct ieee80211_hw *dev,
2790 +@@ -972,14 +970,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev,
2791 + dev_info(&priv->udev->dev, "Loading firmware file %s\n",
2792 + p54u_fwlist[i].fw);
2793 +
2794 +- usb_get_dev(udev);
2795 ++ usb_get_intf(intf);
2796 + err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw,
2797 + device, GFP_KERNEL, priv,
2798 + p54u_load_firmware_cb);
2799 + if (err) {
2800 + dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
2801 + "(%d)!\n", p54u_fwlist[i].fw, err);
2802 +- usb_put_dev(udev);
2803 ++ usb_put_intf(intf);
2804 + }
2805 +
2806 + return err;
2807 +@@ -1011,8 +1009,6 @@ static int p54u_probe(struct usb_interface *intf,
2808 + skb_queue_head_init(&priv->rx_queue);
2809 + init_usb_anchor(&priv->submitted);
2810 +
2811 +- usb_get_dev(udev);
2812 +-
2813 + /* really lazy and simple way of figuring out if we're a 3887 */
2814 + /* TODO: should just stick the identification in the device table */
2815 + i = intf->altsetting->desc.bNumEndpoints;
2816 +@@ -1053,10 +1049,8 @@ static int p54u_probe(struct usb_interface *intf,
2817 + priv->upload_fw = p54u_upload_firmware_net2280;
2818 + }
2819 + err = p54u_load_firmware(dev, intf);
2820 +- if (err) {
2821 +- usb_put_dev(udev);
2822 ++ if (err)
2823 + p54_free_common(dev);
2824 +- }
2825 + return err;
2826 + }
2827 +
2828 +@@ -1072,7 +1066,6 @@ static void p54u_disconnect(struct usb_interface *intf)
2829 + wait_for_completion(&priv->fw_wait_load);
2830 + p54_unregister_common(dev);
2831 +
2832 +- usb_put_dev(interface_to_usbdev(intf));
2833 + release_firmware(priv->fw);
2834 + p54_free_common(dev);
2835 + }
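
The p54usb hunks change two things: complete(&priv->fw_wait_load) now fires only after the last access to priv, because disconnect() may free priv as soon as the completion is signalled; and a failed firmware load detaches the driver with usb_driver_release_interface() under usb_lock_device() rather than device_release_driver() on the parent. A hedged sketch of that unbind step (names are illustrative; the forward declaration mirrors the one the patch adds):

    #include <linux/usb.h>

    static struct usb_driver my_driver;    /* forward declaration, as in the patch */

    static void my_unbind_self(struct usb_interface *intf)
    {
        struct usb_device *udev = interface_to_usbdev(intf);

        /* Taking the same lock that serializes probe()/disconnect()
         * prevents a race with a concurrent unbind of this interface.
         */
        usb_lock_device(udev);
        usb_driver_release_interface(&my_driver, intf);
        usb_unlock_device(udev);
    }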
2836 +diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
2837 +index 9e75522d248a..342555ebafd7 100644
2838 +--- a/drivers/net/wireless/marvell/mwifiex/fw.h
2839 ++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
2840 +@@ -1744,9 +1744,10 @@ struct mwifiex_ie_types_wmm_queue_status {
2841 + struct ieee_types_vendor_header {
2842 + u8 element_id;
2843 + u8 len;
2844 +- u8 oui[4]; /* 0~2: oui, 3: oui_type */
2845 +- u8 oui_subtype;
2846 +- u8 version;
2847 ++ struct {
2848 ++ u8 oui[3];
2849 ++ u8 oui_type;
2850 ++ } __packed oui;
2851 + } __packed;
2852 +
2853 + struct ieee_types_wmm_parameter {
2854 +@@ -1760,6 +1761,9 @@ struct ieee_types_wmm_parameter {
2855 + * Version [1]
2856 + */
2857 + struct ieee_types_vendor_header vend_hdr;
2858 ++ u8 oui_subtype;
2859 ++ u8 version;
2860 ++
2861 + u8 qos_info_bitmap;
2862 + u8 reserved;
2863 + struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS];
2864 +@@ -1777,6 +1781,8 @@ struct ieee_types_wmm_info {
2865 + * Version [1]
2866 + */
2867 + struct ieee_types_vendor_header vend_hdr;
2868 ++ u8 oui_subtype;
2869 ++ u8 version;
2870 +
2871 + u8 qos_info_bitmap;
2872 + } __packed;
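
Splitting the old u8 oui[4] ("0~2: oui, 3: oui_type") into a nested packed pair, and moving oui_subtype/version out into the WMM structures, makes the header describe exactly the bytes every vendor element is guaranteed to carry, so a parser can length-check before it compares. A sketch of the idea (struct and helper are illustrative, not the driver's definitions):

    #include <linux/string.h>
    #include <linux/types.h>

    struct vendor_hdr {
        u8 element_id;
        u8 len;
        struct {
            u8 oui[3];
            u8 oui_type;
        } __packed oui;               /* 4 bytes total, like the old oui[4] */
    } __packed;

    static bool is_wpa_ie(const struct vendor_hdr *hdr, size_t element_len)
    {
        static const u8 wpa_oui[] = { 0x00, 0x50, 0xf2, 0x01 };

        /* Refuse to compare bytes the element does not actually contain. */
        if (element_len < sizeof(wpa_oui))
            return false;
        return !memcmp(&hdr->oui, wpa_oui, sizeof(wpa_oui));
    }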
2873 +diff --git a/drivers/net/wireless/marvell/mwifiex/ie.c b/drivers/net/wireless/marvell/mwifiex/ie.c
2874 +index 922e3d69fd84..32853496fe8c 100644
2875 +--- a/drivers/net/wireless/marvell/mwifiex/ie.c
2876 ++++ b/drivers/net/wireless/marvell/mwifiex/ie.c
2877 +@@ -329,6 +329,8 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2878 + struct ieee80211_vendor_ie *vendorhdr;
2879 + u16 gen_idx = MWIFIEX_AUTO_IDX_MASK, ie_len = 0;
2880 + int left_len, parsed_len = 0;
2881 ++ unsigned int token_len;
2882 ++ int err = 0;
2883 +
2884 + if (!info->tail || !info->tail_len)
2885 + return 0;
2886 +@@ -344,6 +346,12 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2887 + */
2888 + while (left_len > sizeof(struct ieee_types_header)) {
2889 + hdr = (void *)(info->tail + parsed_len);
2890 ++ token_len = hdr->len + sizeof(struct ieee_types_header);
2891 ++ if (token_len > left_len) {
2892 ++ err = -EINVAL;
2893 ++ goto out;
2894 ++ }
2895 ++
2896 + switch (hdr->element_id) {
2897 + case WLAN_EID_SSID:
2898 + case WLAN_EID_SUPP_RATES:
2899 +@@ -357,13 +365,16 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2900 + case WLAN_EID_VENDOR_SPECIFIC:
2901 + break;
2902 + default:
2903 +- memcpy(gen_ie->ie_buffer + ie_len, hdr,
2904 +- hdr->len + sizeof(struct ieee_types_header));
2905 +- ie_len += hdr->len + sizeof(struct ieee_types_header);
2906 ++ if (ie_len + token_len > IEEE_MAX_IE_SIZE) {
2907 ++ err = -EINVAL;
2908 ++ goto out;
2909 ++ }
2910 ++ memcpy(gen_ie->ie_buffer + ie_len, hdr, token_len);
2911 ++ ie_len += token_len;
2912 + break;
2913 + }
2914 +- left_len -= hdr->len + sizeof(struct ieee_types_header);
2915 +- parsed_len += hdr->len + sizeof(struct ieee_types_header);
2916 ++ left_len -= token_len;
2917 ++ parsed_len += token_len;
2918 + }
2919 +
2920 + /* parse only WPA vendor IE from tail, WMM IE is configured by
2921 +@@ -373,15 +384,17 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2922 + WLAN_OUI_TYPE_MICROSOFT_WPA,
2923 + info->tail, info->tail_len);
2924 + if (vendorhdr) {
2925 +- memcpy(gen_ie->ie_buffer + ie_len, vendorhdr,
2926 +- vendorhdr->len + sizeof(struct ieee_types_header));
2927 +- ie_len += vendorhdr->len + sizeof(struct ieee_types_header);
2928 ++ token_len = vendorhdr->len + sizeof(struct ieee_types_header);
2929 ++ if (ie_len + token_len > IEEE_MAX_IE_SIZE) {
2930 ++ err = -EINVAL;
2931 ++ goto out;
2932 ++ }
2933 ++ memcpy(gen_ie->ie_buffer + ie_len, vendorhdr, token_len);
2934 ++ ie_len += token_len;
2935 + }
2936 +
2937 +- if (!ie_len) {
2938 +- kfree(gen_ie);
2939 +- return 0;
2940 +- }
2941 ++ if (!ie_len)
2942 ++ goto out;
2943 +
2944 + gen_ie->ie_index = cpu_to_le16(gen_idx);
2945 + gen_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_BEACON |
2946 +@@ -391,13 +404,15 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2947 +
2948 + if (mwifiex_update_uap_custom_ie(priv, gen_ie, &gen_idx, NULL, NULL,
2949 + NULL, NULL)) {
2950 +- kfree(gen_ie);
2951 +- return -1;
2952 ++ err = -EINVAL;
2953 ++ goto out;
2954 + }
2955 +
2956 + priv->gen_idx = gen_idx;
2957 ++
2958 ++ out:
2959 + kfree(gen_ie);
2960 +- return 0;
2961 ++ return err;
2962 + }
2963 +
2964 + /* This function parses different IEs-head & tail IEs, beacon IEs,
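
The tail-IE fix above is an instance of a general rule for TLV buffers: validate each element's claimed length against both the remaining input and the destination's capacity before copying. Roughly, as a sketch (names and kernel context assumed, not mwifiex's actual helpers):

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct tlv_hdr {
        u8 id;
        u8 len;
    } __packed;

    static int copy_tlvs(u8 *dst, size_t dst_size, const u8 *src, size_t src_len)
    {
        size_t off = 0, out = 0;

        while (src_len - off > sizeof(struct tlv_hdr)) {
            const struct tlv_hdr *hdr = (const void *)(src + off);
            size_t token = sizeof(*hdr) + hdr->len;

            if (token > src_len - off)     /* truncated element */
                return -EINVAL;
            if (out + token > dst_size)    /* would overflow destination */
                return -EINVAL;
            memcpy(dst + out, hdr, token);
            out += token;
            off += token;
        }
        return 0;
    }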
2965 +diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
2966 +index c9d41ed77fc7..29284f9a0646 100644
2967 +--- a/drivers/net/wireless/marvell/mwifiex/scan.c
2968 ++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
2969 +@@ -1244,6 +1244,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2970 + }
2971 + switch (element_id) {
2972 + case WLAN_EID_SSID:
2973 ++ if (element_len > IEEE80211_MAX_SSID_LEN)
2974 ++ return -EINVAL;
2975 + bss_entry->ssid.ssid_len = element_len;
2976 + memcpy(bss_entry->ssid.ssid, (current_ptr + 2),
2977 + element_len);
2978 +@@ -1253,6 +1255,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2979 + break;
2980 +
2981 + case WLAN_EID_SUPP_RATES:
2982 ++ if (element_len > MWIFIEX_SUPPORTED_RATES)
2983 ++ return -EINVAL;
2984 + memcpy(bss_entry->data_rates, current_ptr + 2,
2985 + element_len);
2986 + memcpy(bss_entry->supported_rates, current_ptr + 2,
2987 +@@ -1262,6 +1266,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2988 + break;
2989 +
2990 + case WLAN_EID_FH_PARAMS:
2991 ++ if (element_len + 2 < sizeof(*fh_param_set))
2992 ++ return -EINVAL;
2993 + fh_param_set =
2994 + (struct ieee_types_fh_param_set *) current_ptr;
2995 + memcpy(&bss_entry->phy_param_set.fh_param_set,
2996 +@@ -1270,6 +1276,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2997 + break;
2998 +
2999 + case WLAN_EID_DS_PARAMS:
3000 ++ if (element_len + 2 < sizeof(*ds_param_set))
3001 ++ return -EINVAL;
3002 + ds_param_set =
3003 + (struct ieee_types_ds_param_set *) current_ptr;
3004 +
3005 +@@ -1281,6 +1289,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
3006 + break;
3007 +
3008 + case WLAN_EID_CF_PARAMS:
3009 ++ if (element_len + 2 < sizeof(*cf_param_set))
3010 ++ return -EINVAL;
3011 + cf_param_set =
3012 + (struct ieee_types_cf_param_set *) current_ptr;
3013 + memcpy(&bss_entry->ss_param_set.cf_param_set,
3014 +@@ -1289,6 +1299,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
3015 + break;
3016 +
3017 + case WLAN_EID_IBSS_PARAMS:
3018 ++ if (element_len + 2 < sizeof(*ibss_param_set))
3019 ++ return -EINVAL;
3020 + ibss_param_set =
3021 + (struct ieee_types_ibss_param_set *)
3022 + current_ptr;
3023 +@@ -1298,10 +1310,14 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
3024 + break;
3025 +
3026 + case WLAN_EID_ERP_INFO:
3027 ++ if (!element_len)
3028 ++ return -EINVAL;
3029 + bss_entry->erp_flags = *(current_ptr + 2);
3030 + break;
3031 +
3032 + case WLAN_EID_PWR_CONSTRAINT:
3033 ++ if (!element_len)
3034 ++ return -EINVAL;
3035 + bss_entry->local_constraint = *(current_ptr + 2);
3036 + bss_entry->sensed_11h = true;
3037 + break;
3038 +@@ -1344,15 +1360,22 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
3039 + vendor_ie = (struct ieee_types_vendor_specific *)
3040 + current_ptr;
3041 +
3042 +- if (!memcmp
3043 +- (vendor_ie->vend_hdr.oui, wpa_oui,
3044 +- sizeof(wpa_oui))) {
3045 ++ /* 802.11 requires at least 3-byte OUI. */
3046 ++ if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui))
3047 ++ return -EINVAL;
3048 ++
3049 ++ /* Not long enough for a match? Skip it. */
3050 ++ if (element_len < sizeof(wpa_oui))
3051 ++ break;
3052 ++
3053 ++ if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui,
3054 ++ sizeof(wpa_oui))) {
3055 + bss_entry->bcn_wpa_ie =
3056 + (struct ieee_types_vendor_specific *)
3057 + current_ptr;
3058 + bss_entry->wpa_offset = (u16)
3059 + (current_ptr - bss_entry->beacon_buf);
3060 +- } else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui,
3061 ++ } else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui,
3062 + sizeof(wmm_oui))) {
3063 + if (total_ie_len ==
3064 + sizeof(struct ieee_types_wmm_parameter) ||
3065 +diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
3066 +index a6077ab3efc3..82828a207963 100644
3067 +--- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
3068 ++++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
3069 +@@ -1388,7 +1388,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
3070 + /* Test to see if it is a WPA IE, if not, then
3071 + * it is a gen IE
3072 + */
3073 +- if (!memcmp(pvendor_ie->oui, wpa_oui,
3074 ++ if (!memcmp(&pvendor_ie->oui, wpa_oui,
3075 + sizeof(wpa_oui))) {
3076 + /* IE is a WPA/WPA2 IE so call set_wpa function
3077 + */
3078 +@@ -1398,7 +1398,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
3079 + goto next_ie;
3080 + }
3081 +
3082 +- if (!memcmp(pvendor_ie->oui, wps_oui,
3083 ++ if (!memcmp(&pvendor_ie->oui, wps_oui,
3084 + sizeof(wps_oui))) {
3085 + /* Test to see if it is a WPS IE,
3086 + * if so, enable wps session flag
3087 +diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c
3088 +index 0edd26881321..7fba4d940131 100644
3089 +--- a/drivers/net/wireless/marvell/mwifiex/wmm.c
3090 ++++ b/drivers/net/wireless/marvell/mwifiex/wmm.c
3091 +@@ -240,7 +240,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv,
3092 + mwifiex_dbg(priv->adapter, INFO,
3093 + "info: WMM Parameter IE: version=%d,\t"
3094 + "qos_info Parameter Set Count=%d, Reserved=%#x\n",
3095 +- wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap &
3096 ++ wmm_ie->version, wmm_ie->qos_info_bitmap &
3097 + IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK,
3098 + wmm_ie->reserved);
3099 +
3100 +diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c
3101 +index 35286907c636..d0090c5c88e7 100644
3102 +--- a/drivers/s390/cio/qdio_setup.c
3103 ++++ b/drivers/s390/cio/qdio_setup.c
3104 +@@ -150,6 +150,7 @@ static int __qdio_allocate_qs(struct qdio_q **irq_ptr_qs, int nr_queues)
3105 + return -ENOMEM;
3106 + }
3107 + irq_ptr_qs[i] = q;
3108 ++ INIT_LIST_HEAD(&q->entry);
3109 + }
3110 + return 0;
3111 + }
3112 +@@ -178,6 +179,7 @@ static void setup_queues_misc(struct qdio_q *q, struct qdio_irq *irq_ptr,
3113 + q->mask = 1 << (31 - i);
3114 + q->nr = i;
3115 + q->handler = handler;
3116 ++ INIT_LIST_HEAD(&q->entry);
3117 + }
3118 +
3119 + static void setup_storage_lists(struct qdio_q *q, struct qdio_irq *irq_ptr,
3120 +diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
3121 +index a739bdf9630e..831a3a0a2837 100644
3122 +--- a/drivers/s390/cio/qdio_thinint.c
3123 ++++ b/drivers/s390/cio/qdio_thinint.c
3124 +@@ -83,7 +83,6 @@ void tiqdio_add_input_queues(struct qdio_irq *irq_ptr)
3125 + mutex_lock(&tiq_list_lock);
3126 + list_add_rcu(&irq_ptr->input_qs[0]->entry, &tiq_list);
3127 + mutex_unlock(&tiq_list_lock);
3128 +- xchg(irq_ptr->dsci, 1 << 7);
3129 + }
3130 +
3131 + void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr)
3132 +@@ -91,14 +90,14 @@ void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr)
3133 + struct qdio_q *q;
3134 +
3135 + q = irq_ptr->input_qs[0];
3136 +- /* if establish triggered an error */
3137 +- if (!q || !q->entry.prev || !q->entry.next)
3138 ++ if (!q)
3139 + return;
3140 +
3141 + mutex_lock(&tiq_list_lock);
3142 + list_del_rcu(&q->entry);
3143 + mutex_unlock(&tiq_list_lock);
3144 + synchronize_rcu();
3145 ++ INIT_LIST_HEAD(&q->entry);
3146 + }
3147 +
3148 + static inline int has_multiple_inq_on_dsci(struct qdio_irq *irq_ptr)
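
Initializing each queue's list entry at setup time, and again after removal, replaces the old NULL probe of ->prev/->next: a self-pointing node can always be unlinked unconditionally, which is the property the simplified tiqdio_remove_input_queues() relies on. The generic shape, sketched:

    #include <linux/list.h>

    struct qnode {
        struct list_head entry;
    };

    static void qnode_setup(struct qnode *q)
    {
        INIT_LIST_HEAD(&q->entry);    /* node points at itself: "not queued" */
    }

    static void qnode_remove(struct qnode *q)
    {
        /* Safe whether or not the node was ever added to a list,
         * and it leaves the node back in its "not queued" state.
         */
        list_del_init(&q->entry);
    }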
3149 +diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c
3150 +index 48c7890c3007..2b0b757dc626 100644
3151 +--- a/drivers/staging/comedi/drivers/amplc_pci230.c
3152 ++++ b/drivers/staging/comedi/drivers/amplc_pci230.c
3153 +@@ -2339,7 +2339,8 @@ static irqreturn_t pci230_interrupt(int irq, void *d)
3154 + devpriv->intr_running = false;
3155 + spin_unlock_irqrestore(&devpriv->isr_spinlock, irqflags);
3156 +
3157 +- comedi_handle_events(dev, s_ao);
3158 ++ if (s_ao)
3159 ++ comedi_handle_events(dev, s_ao);
3160 + comedi_handle_events(dev, s_ai);
3161 +
3162 + return IRQ_HANDLED;
3163 +diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c
3164 +index d5295bbdd28c..37133d54dda1 100644
3165 +--- a/drivers/staging/comedi/drivers/dt282x.c
3166 ++++ b/drivers/staging/comedi/drivers/dt282x.c
3167 +@@ -566,7 +566,8 @@ static irqreturn_t dt282x_interrupt(int irq, void *d)
3168 + }
3169 + #endif
3170 + comedi_handle_events(dev, s);
3171 +- comedi_handle_events(dev, s_ao);
3172 ++ if (s_ao)
3173 ++ comedi_handle_events(dev, s_ao);
3174 +
3175 + return IRQ_RETVAL(handled);
3176 + }
3177 +diff --git a/drivers/staging/iio/cdc/ad7150.c b/drivers/staging/iio/cdc/ad7150.c
3178 +index a6f249e9c1e1..4d218d554878 100644
3179 +--- a/drivers/staging/iio/cdc/ad7150.c
3180 ++++ b/drivers/staging/iio/cdc/ad7150.c
3181 +@@ -6,6 +6,7 @@
3182 + * Licensed under the GPL-2 or later.
3183 + */
3184 +
3185 ++#include <linux/bitfield.h>
3186 + #include <linux/interrupt.h>
3187 + #include <linux/device.h>
3188 + #include <linux/kernel.h>
3189 +@@ -129,7 +130,7 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev,
3190 + {
3191 + int ret;
3192 + u8 threshtype;
3193 +- bool adaptive;
3194 ++ bool thrfixed;
3195 + struct ad7150_chip_info *chip = iio_priv(indio_dev);
3196 +
3197 + ret = i2c_smbus_read_byte_data(chip->client, AD7150_CFG);
3198 +@@ -137,21 +138,23 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev,
3199 + return ret;
3200 +
3201 + threshtype = (ret >> 5) & 0x03;
3202 +- adaptive = !!(ret & 0x80);
3203 ++
3204 ++ /* check if threshold mode is fixed or adaptive */
3205 ++ thrfixed = FIELD_GET(AD7150_CFG_FIX, ret);
3206 +
3207 + switch (type) {
3208 + case IIO_EV_TYPE_MAG_ADAPTIVE:
3209 + if (dir == IIO_EV_DIR_RISING)
3210 +- return adaptive && (threshtype == 0x1);
3211 +- return adaptive && (threshtype == 0x0);
3212 ++ return !thrfixed && (threshtype == 0x1);
3213 ++ return !thrfixed && (threshtype == 0x0);
3214 + case IIO_EV_TYPE_THRESH_ADAPTIVE:
3215 + if (dir == IIO_EV_DIR_RISING)
3216 +- return adaptive && (threshtype == 0x3);
3217 +- return adaptive && (threshtype == 0x2);
3218 ++ return !thrfixed && (threshtype == 0x3);
3219 ++ return !thrfixed && (threshtype == 0x2);
3220 + case IIO_EV_TYPE_THRESH:
3221 + if (dir == IIO_EV_DIR_RISING)
3222 +- return !adaptive && (threshtype == 0x1);
3223 +- return !adaptive && (threshtype == 0x0);
3224 ++ return thrfixed && (threshtype == 0x1);
3225 ++ return thrfixed && (threshtype == 0x0);
3226 + default:
3227 + break;
3228 + }
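
The ad7150 fix is twofold: the configuration bit actually means "threshold fixed", so the old adaptive flag had its sense inverted, and FIELD_GET() spells out which mask is being extracted. A small sketch with an illustrative register layout (not the chip's actual map):

    #include <linux/bitfield.h>
    #include <linux/bits.h>
    #include <linux/types.h>

    #define CFG_FIX        BIT(7)          /* 1 = fixed threshold, 0 = adaptive */
    #define CFG_THRESHTYPE GENMASK(6, 5)

    static bool is_rising_adaptive_mag(unsigned int cfg)
    {
        bool thrfixed = FIELD_GET(CFG_FIX, cfg);
        unsigned int type = FIELD_GET(CFG_THRESHTYPE, cfg);

        return !thrfixed && type == 0x1;   /* adaptive modes have FIX clear */
    }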
3229 +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
3230 +index ecf3d631bc09..ab0796d14ac1 100644
3231 +--- a/drivers/tty/serial/8250/8250_port.c
3232 ++++ b/drivers/tty/serial/8250/8250_port.c
3233 +@@ -1873,8 +1873,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
3234 +
3235 + status = serial_port_in(port, UART_LSR);
3236 +
3237 +- if (status & (UART_LSR_DR | UART_LSR_BI) &&
3238 +- iir & UART_IIR_RDI) {
3239 ++ if (status & (UART_LSR_DR | UART_LSR_BI)) {
3240 + if (!up->dma || handle_rx_dma(up, iir))
3241 + status = serial8250_rx_chars(up, status);
3242 + }
3243 +diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
3244 +index 3a0e4f5d7b83..81d84e0c3c6c 100644
3245 +--- a/drivers/usb/gadget/function/u_ether.c
3246 ++++ b/drivers/usb/gadget/function/u_ether.c
3247 +@@ -190,11 +190,12 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
3248 + out = dev->port_usb->out_ep;
3249 + else
3250 + out = NULL;
3251 +- spin_unlock_irqrestore(&dev->lock, flags);
3252 +
3253 + if (!out)
3254 ++ {
3255 ++ spin_unlock_irqrestore(&dev->lock, flags);
3256 + return -ENOTCONN;
3257 +-
3258 ++ }
3259 +
3260 + /* Padding up to RX_EXTRA handles minor disagreements with host.
3261 + * Normally we use the USB "terminate on short read" convention;
3262 +@@ -218,6 +219,7 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
3263 +
3264 + if (dev->port_usb->is_fixed)
3265 + size = max_t(size_t, size, dev->port_usb->fixed_out_len);
3266 ++ spin_unlock_irqrestore(&dev->lock, flags);
3267 +
3268 + skb = __netdev_alloc_skb(dev->net, size + NET_IP_ALIGN, gfp_flags);
3269 + if (skb == NULL) {
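
The rx_submit() change widens the critical section so that every dereference of dev->port_usb happens under dev->lock, dropping the lock only before the allocation that may sleep. The pattern in a self-contained sketch (struct names are illustrative):

    #include <linux/errno.h>
    #include <linux/spinlock.h>

    struct port { int out_ep; };

    struct edev {
        spinlock_t lock;
        struct port *port_usb;    /* cleared by disconnect, under lock */
    };

    static int pick_out_ep(struct edev *dev)
    {
        unsigned long flags;
        int out;

        spin_lock_irqsave(&dev->lock, flags);
        if (!dev->port_usb) {
            spin_unlock_irqrestore(&dev->lock, flags);
            return -ENOTCONN;          /* disconnected: nothing to read */
        }
        out = dev->port_usb->out_ep;   /* safe only while the lock is held */
        spin_unlock_irqrestore(&dev->lock, flags);
        return out;
    }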
3270 +diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
3271 +index 5d369b38868a..b6d9308d43ba 100644
3272 +--- a/drivers/usb/renesas_usbhs/fifo.c
3273 ++++ b/drivers/usb/renesas_usbhs/fifo.c
3274 +@@ -818,9 +818,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
3275 + }
3276 +
3277 + static void usbhsf_dma_complete(void *arg);
3278 +-static void xfer_work(struct work_struct *work)
3279 ++static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
3280 + {
3281 +- struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
3282 + struct usbhs_pipe *pipe = pkt->pipe;
3283 + struct usbhs_fifo *fifo;
3284 + struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
3285 +@@ -828,12 +827,10 @@ static void xfer_work(struct work_struct *work)
3286 + struct dma_chan *chan;
3287 + struct device *dev = usbhs_priv_to_dev(priv);
3288 + enum dma_transfer_direction dir;
3289 +- unsigned long flags;
3290 +
3291 +- usbhs_lock(priv, flags);
3292 + fifo = usbhs_pipe_to_fifo(pipe);
3293 + if (!fifo)
3294 +- goto xfer_work_end;
3295 ++ return;
3296 +
3297 + chan = usbhsf_dma_chan_get(fifo, pkt);
3298 + dir = usbhs_pipe_is_dir_in(pipe) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
3299 +@@ -842,7 +839,7 @@ static void xfer_work(struct work_struct *work)
3300 + pkt->trans, dir,
3301 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
3302 + if (!desc)
3303 +- goto xfer_work_end;
3304 ++ return;
3305 +
3306 + desc->callback = usbhsf_dma_complete;
3307 + desc->callback_param = pipe;
3308 +@@ -850,7 +847,7 @@ static void xfer_work(struct work_struct *work)
3309 + pkt->cookie = dmaengine_submit(desc);
3310 + if (pkt->cookie < 0) {
3311 + dev_err(dev, "Failed to submit dma descriptor\n");
3312 +- goto xfer_work_end;
3313 ++ return;
3314 + }
3315 +
3316 + dev_dbg(dev, " %s %d (%d/ %d)\n",
3317 +@@ -861,8 +858,17 @@ static void xfer_work(struct work_struct *work)
3318 + dma_async_issue_pending(chan);
3319 + usbhsf_dma_start(pipe, fifo);
3320 + usbhs_pipe_enable(pipe);
3321 ++}
3322 ++
3323 ++static void xfer_work(struct work_struct *work)
3324 ++{
3325 ++ struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
3326 ++ struct usbhs_pipe *pipe = pkt->pipe;
3327 ++ struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
3328 ++ unsigned long flags;
3329 +
3330 +-xfer_work_end:
3331 ++ usbhs_lock(priv, flags);
3332 ++ usbhsf_dma_xfer_preparing(pkt);
3333 + usbhs_unlock(priv, flags);
3334 + }
3335 +
3336 +@@ -915,8 +921,13 @@ static int usbhsf_dma_prepare_push(struct usbhs_pkt *pkt, int *is_done)
3337 + pkt->trans = len;
3338 +
3339 + usbhsf_tx_irq_ctrl(pipe, 0);
3340 +- INIT_WORK(&pkt->work, xfer_work);
3341 +- schedule_work(&pkt->work);
3342 ++ /* FIXME: Workaround for usb dmac so that the driver can be used in atomic context */
3343 ++ if (usbhs_get_dparam(priv, has_usb_dmac)) {
3344 ++ usbhsf_dma_xfer_preparing(pkt);
3345 ++ } else {
3346 ++ INIT_WORK(&pkt->work, xfer_work);
3347 ++ schedule_work(&pkt->work);
3348 ++ }
3349 +
3350 + return 0;
3351 +
3352 +@@ -1022,8 +1033,7 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt,
3353 +
3354 + pkt->trans = pkt->length;
3355 +
3356 +- INIT_WORK(&pkt->work, xfer_work);
3357 +- schedule_work(&pkt->work);
3358 ++ usbhsf_dma_xfer_preparing(pkt);
3359 +
3360 + return 0;
3361 +
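
The refactor pulls the DMA programming out of xfer_work() into usbhsf_dma_xfer_preparing(), which expects the driver lock to be held. The push path can then call it directly when the controller is a USB-DMAC, whose prep callbacks are atomic-safe, while other controllers keep the workqueue detour. The shape of that split, sketched generically:

    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    struct pkt {
        struct work_struct work;
        spinlock_t *lock;
    };

    static void dma_prepare(struct pkt *pkt)
    {
        /* caller holds pkt->lock; program and kick the DMA here */
    }

    static void dma_prepare_work(struct work_struct *work)
    {
        struct pkt *pkt = container_of(work, struct pkt, work);
        unsigned long flags;

        spin_lock_irqsave(pkt->lock, flags);
        dma_prepare(pkt);
        spin_unlock_irqrestore(pkt->lock, flags);
    }

    static void submit(struct pkt *pkt, bool atomic_ok)
    {
        if (atomic_ok) {
            dma_prepare(pkt);     /* already under the driver lock */
        } else {
            INIT_WORK(&pkt->work, dma_prepare_work);
            schedule_work(&pkt->work);
        }
    }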
3362 +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
3363 +index e76395d7f17d..d2349c094767 100644
3364 +--- a/drivers/usb/serial/ftdi_sio.c
3365 ++++ b/drivers/usb/serial/ftdi_sio.c
3366 +@@ -1024,6 +1024,7 @@ static const struct usb_device_id id_table_combined[] = {
3367 + { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
3368 + /* EZPrototypes devices */
3369 + { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
3370 ++ { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) },
3371 + { } /* Terminating entry */
3372 + };
3373 +
3374 +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
3375 +index 5755f0df0025..f12d806220b4 100644
3376 +--- a/drivers/usb/serial/ftdi_sio_ids.h
3377 ++++ b/drivers/usb/serial/ftdi_sio_ids.h
3378 +@@ -1543,3 +1543,9 @@
3379 + #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */
3380 + #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */
3381 + #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */
3382 ++
3383 ++/*
3384 ++ * Unjo AB
3385 ++ */
3386 ++#define UNJO_VID 0x22B7
3387 ++#define UNJO_ISODEBUG_V1_PID 0x150D
3388 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
3389 +index 3c8e4970876c..8b9e12ab1fe6 100644
3390 +--- a/drivers/usb/serial/option.c
3391 ++++ b/drivers/usb/serial/option.c
3392 +@@ -1346,6 +1346,7 @@ static const struct usb_device_id option_ids[] = {
3393 + .driver_info = RSVD(4) },
3394 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) },
3395 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) },
3396 ++ { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0601, 0xff) }, /* GosunCn ZTE WeLink ME3630 (RNDIS mode) */
3397 + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) }, /* GosunCn ZTE WeLink ME3630 (MBIM mode) */
3398 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff),
3399 + .driver_info = RSVD(4) },
3400 +diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
3401 +index a120649beeca..d13a154c8424 100644
3402 +--- a/fs/crypto/policy.c
3403 ++++ b/fs/crypto/policy.c
3404 +@@ -81,6 +81,8 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
3405 + if (ret == -ENODATA) {
3406 + if (!S_ISDIR(inode->i_mode))
3407 + ret = -ENOTDIR;
3408 ++ else if (IS_DEADDIR(inode))
3409 ++ ret = -ENOENT;
3410 + else if (!inode->i_sb->s_cop->empty_dir(inode))
3411 + ret = -ENOTEMPTY;
3412 + else
3413 +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
3414 +index 4cd0c2336624..9c81fd973418 100644
3415 +--- a/fs/quota/dquot.c
3416 ++++ b/fs/quota/dquot.c
3417 +@@ -1989,8 +1989,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
3418 + &warn_to[cnt]);
3419 + if (ret)
3420 + goto over_quota;
3421 +- ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space, 0,
3422 +- &warn_to[cnt]);
3423 ++ ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space,
3424 ++ DQUOT_SPACE_WARN, &warn_to[cnt]);
3425 + if (ret) {
3426 + spin_lock(&transfer_to[cnt]->dq_dqb_lock);
3427 + dquot_decr_inodes(transfer_to[cnt], inode_usage);
3428 +diff --git a/fs/udf/inode.c b/fs/udf/inode.c
3429 +index 28b9d7cca29b..3c1b54091d6c 100644
3430 +--- a/fs/udf/inode.c
3431 ++++ b/fs/udf/inode.c
3432 +@@ -470,13 +470,15 @@ static struct buffer_head *udf_getblk(struct inode *inode, long block,
3433 + return NULL;
3434 + }
3435 +
3436 +-/* Extend the file by 'blocks' blocks, return the number of extents added */
3437 ++/* Extend the file with new blocks totaling 'new_block_bytes',
3438 ++ * return the number of extents added
3439 ++ */
3440 + static int udf_do_extend_file(struct inode *inode,
3441 + struct extent_position *last_pos,
3442 + struct kernel_long_ad *last_ext,
3443 +- sector_t blocks)
3444 ++ loff_t new_block_bytes)
3445 + {
3446 +- sector_t add;
3447 ++ uint32_t add;
3448 + int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
3449 + struct super_block *sb = inode->i_sb;
3450 + struct kernel_lb_addr prealloc_loc = {};
3451 +@@ -486,7 +488,7 @@ static int udf_do_extend_file(struct inode *inode,
3452 +
3453 + /* The previous extent is fake and we should not extend by anything
3454 + * - there's nothing to do... */
3455 +- if (!blocks && fake)
3456 ++ if (!new_block_bytes && fake)
3457 + return 0;
3458 +
3459 + iinfo = UDF_I(inode);
3460 +@@ -517,13 +519,12 @@ static int udf_do_extend_file(struct inode *inode,
3461 + /* Can we merge with the previous extent? */
3462 + if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
3463 + EXT_NOT_RECORDED_NOT_ALLOCATED) {
3464 +- add = ((1 << 30) - sb->s_blocksize -
3465 +- (last_ext->extLength & UDF_EXTENT_LENGTH_MASK)) >>
3466 +- sb->s_blocksize_bits;
3467 +- if (add > blocks)
3468 +- add = blocks;
3469 +- blocks -= add;
3470 +- last_ext->extLength += add << sb->s_blocksize_bits;
3471 ++ add = (1 << 30) - sb->s_blocksize -
3472 ++ (last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
3473 ++ if (add > new_block_bytes)
3474 ++ add = new_block_bytes;
3475 ++ new_block_bytes -= add;
3476 ++ last_ext->extLength += add;
3477 + }
3478 +
3479 + if (fake) {
3480 +@@ -544,28 +545,27 @@ static int udf_do_extend_file(struct inode *inode,
3481 + }
3482 +
3483 + /* Managed to do everything necessary? */
3484 +- if (!blocks)
3485 ++ if (!new_block_bytes)
3486 + goto out;
3487 +
3488 + /* All further extents will be NOT_RECORDED_NOT_ALLOCATED */
3489 + last_ext->extLocation.logicalBlockNum = 0;
3490 + last_ext->extLocation.partitionReferenceNum = 0;
3491 +- add = (1 << (30-sb->s_blocksize_bits)) - 1;
3492 +- last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
3493 +- (add << sb->s_blocksize_bits);
3494 ++ add = (1 << 30) - sb->s_blocksize;
3495 ++ last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | add;
3496 +
3497 + /* Create enough extents to cover the whole hole */
3498 +- while (blocks > add) {
3499 +- blocks -= add;
3500 ++ while (new_block_bytes > add) {
3501 ++ new_block_bytes -= add;
3502 + err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
3503 + last_ext->extLength, 1);
3504 + if (err)
3505 + return err;
3506 + count++;
3507 + }
3508 +- if (blocks) {
3509 ++ if (new_block_bytes) {
3510 + last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
3511 +- (blocks << sb->s_blocksize_bits);
3512 ++ new_block_bytes;
3513 + err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
3514 + last_ext->extLength, 1);
3515 + if (err)
3516 +@@ -596,6 +596,24 @@ out:
3517 + return count;
3518 + }
3519 +
3520 ++/* Extend the final block of the file to final_block_len bytes */
3521 ++static void udf_do_extend_final_block(struct inode *inode,
3522 ++ struct extent_position *last_pos,
3523 ++ struct kernel_long_ad *last_ext,
3524 ++ uint32_t final_block_len)
3525 ++{
3526 ++ struct super_block *sb = inode->i_sb;
3527 ++ uint32_t added_bytes;
3528 ++
3529 ++ added_bytes = final_block_len -
3530 ++ (last_ext->extLength & (sb->s_blocksize - 1));
3531 ++ last_ext->extLength += added_bytes;
3532 ++ UDF_I(inode)->i_lenExtents += added_bytes;
3533 ++
3534 ++ udf_write_aext(inode, last_pos, &last_ext->extLocation,
3535 ++ last_ext->extLength, 1);
3536 ++}
3537 ++
3538 + static int udf_extend_file(struct inode *inode, loff_t newsize)
3539 + {
3540 +
3541 +@@ -605,10 +623,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3542 + int8_t etype;
3543 + struct super_block *sb = inode->i_sb;
3544 + sector_t first_block = newsize >> sb->s_blocksize_bits, offset;
3545 ++ unsigned long partial_final_block;
3546 + int adsize;
3547 + struct udf_inode_info *iinfo = UDF_I(inode);
3548 + struct kernel_long_ad extent;
3549 +- int err;
3550 ++ int err = 0;
3551 ++ int within_final_block;
3552 +
3553 + if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
3554 + adsize = sizeof(struct short_ad);
3555 +@@ -618,18 +638,8 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3556 + BUG();
3557 +
3558 + etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
3559 ++ within_final_block = (etype != -1);
3560 +
3561 +- /* File has extent covering the new size (could happen when extending
3562 +- * inside a block)? */
3563 +- if (etype != -1)
3564 +- return 0;
3565 +- if (newsize & (sb->s_blocksize - 1))
3566 +- offset++;
3567 +- /* Extended file just to the boundary of the last file block? */
3568 +- if (offset == 0)
3569 +- return 0;
3570 +-
3571 +- /* Truncate is extending the file by 'offset' blocks */
3572 + if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) ||
3573 + (epos.bh && epos.offset == sizeof(struct allocExtDesc))) {
3574 + /* File has no extents at all or has empty last
3575 +@@ -643,7 +653,22 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3576 + &extent.extLength, 0);
3577 + extent.extLength |= etype << 30;
3578 + }
3579 +- err = udf_do_extend_file(inode, &epos, &extent, offset);
3580 ++
3581 ++ partial_final_block = newsize & (sb->s_blocksize - 1);
3582 ++
3583 ++ /* File has extent covering the new size (could happen when extending
3584 ++ * inside a block)?
3585 ++ */
3586 ++ if (within_final_block) {
3587 ++ /* Extending file within the last file block */
3588 ++ udf_do_extend_final_block(inode, &epos, &extent,
3589 ++ partial_final_block);
3590 ++ } else {
3591 ++ loff_t add = ((loff_t)offset << sb->s_blocksize_bits) |
3592 ++ partial_final_block;
3593 ++ err = udf_do_extend_file(inode, &epos, &extent, add);
3594 ++ }
3595 ++
3596 + if (err < 0)
3597 + goto out;
3598 + err = 0;
3599 +@@ -745,6 +770,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
3600 + /* Are we beyond EOF? */
3601 + if (etype == -1) {
3602 + int ret;
3603 ++ loff_t hole_len;
3604 + isBeyondEOF = true;
3605 + if (count) {
3606 + if (c)
3607 +@@ -760,7 +786,8 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
3608 + startnum = (offset > 0);
3609 + }
3610 + /* Create extents for the hole between EOF and offset */
3611 +- ret = udf_do_extend_file(inode, &prev_epos, laarr, offset);
3612 ++ hole_len = (loff_t)offset << inode->i_blkbits;
3613 ++ ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len);
3614 + if (ret < 0) {
3615 + *err = ret;
3616 + newblock = 0;
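
Switching udf_do_extend_file() from block counts to byte counts is what lets it represent a partially used final block: the caller now folds the whole-block hole and the partial tail into one byte length. A quick userspace-runnable check of that arithmetic (values are illustrative):

    #include <stdio.h>

    int main(void)
    {
        unsigned int blocksize_bits = 11;               /* 2048-byte blocks */
        unsigned long long blocksize = 1ULL << blocksize_bits;
        unsigned long long offset = 3;                  /* whole blocks of hole */
        unsigned long long newsize = 4 * blocksize + 100;

        unsigned long long partial = newsize & (blocksize - 1);    /* 100 */
        unsigned long long add = (offset << blocksize_bits) | partial;

        printf("extend by %llu bytes\n", add);          /* 6244 = 3*2048 + 100 */
        return 0;
    }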
3617 +diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
3618 +index 6376e2dcb0b7..0c78ad0cc515 100644
3619 +--- a/include/linux/cpuhotplug.h
3620 ++++ b/include/linux/cpuhotplug.h
3621 +@@ -163,6 +163,7 @@ enum cpuhp_state {
3622 + CPUHP_AP_PERF_POWERPC_THREAD_IMC_ONLINE,
3623 + CPUHP_AP_WORKQUEUE_ONLINE,
3624 + CPUHP_AP_RCUTREE_ONLINE,
3625 ++ CPUHP_AP_BASE_CACHEINFO_ONLINE,
3626 + CPUHP_AP_ONLINE_DYN,
3627 + CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30,
3628 + CPUHP_AP_X86_HPET_ONLINE,
3629 +diff --git a/include/linux/kernel.h b/include/linux/kernel.h
3630 +index 1c5469adaa85..bb7baecef002 100644
3631 +--- a/include/linux/kernel.h
3632 ++++ b/include/linux/kernel.h
3633 +@@ -101,7 +101,8 @@
3634 + #define DIV_ROUND_DOWN_ULL(ll, d) \
3635 + ({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })
3636 +
3637 +-#define DIV_ROUND_UP_ULL(ll, d) DIV_ROUND_DOWN_ULL((ll) + (d) - 1, (d))
3638 ++#define DIV_ROUND_UP_ULL(ll, d) \
3639 ++ DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))
3640 +
3641 + #if BITS_PER_LONG == 32
3642 + # define DIV_ROUND_UP_SECTOR_T(ll,d) DIV_ROUND_UP_ULL(ll, d)
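
The DIV_ROUND_UP_ULL fix is about integer promotion: with a 32-bit ll, the old (ll) + (d) - 1 is evaluated in 32-bit arithmetic and can wrap before the value ever reaches the 64-bit division; the cast forces the addition itself into 64 bits. A userspace demonstration (plain division stands in for the kernel's do_div, and the statement expression assumes gcc/clang):

    #include <stdio.h>

    #define DIV_ROUND_DOWN_ULL(ll, d) \
        ({ unsigned long long _tmp = (ll); _tmp /= (d); _tmp; })

    #define DIV_ROUND_UP_ULL_OLD(ll, d)  DIV_ROUND_DOWN_ULL((ll) + (d) - 1, (d))
    #define DIV_ROUND_UP_ULL_NEW(ll, d) \
        DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))

    int main(void)
    {
        unsigned int ll = 0xFFFFFFF0u;    /* 32-bit value near UINT_MAX */
        unsigned int d  = 0x100;

        printf("old: %llu\n", DIV_ROUND_UP_ULL_OLD(ll, d));  /* 0: sum wrapped */
        printf("new: %llu\n", DIV_ROUND_UP_ULL_NEW(ll, d));  /* 16777216 */
        return 0;
    }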
3643 +diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
3644 +index b724ef7005de..53c5e40a2a8f 100644
3645 +--- a/include/linux/vmw_vmci_defs.h
3646 ++++ b/include/linux/vmw_vmci_defs.h
3647 +@@ -68,9 +68,18 @@ enum {
3648 +
3649 + /*
3650 + * A single VMCI device has an upper limit of 128MB on the amount of
3651 +- * memory that can be used for queue pairs.
3652 ++ * memory that can be used for queue pairs. Since each queue pair
3653 ++ * consists of at least two pages, the memory limit also dictates the
3654 ++ * number of queue pairs a guest can create.
3655 + */
3656 + #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
3657 ++#define VMCI_MAX_GUEST_QP_COUNT (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2)
3658 ++
3659 ++/*
3660 ++ * There can be at most PAGE_SIZE doorbells since there is one doorbell
3661 ++ * per byte in the doorbell bitmap page.
3662 ++ */
3663 ++#define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE
3664 +
3665 + /*
3666 + * Queues with pre-mapped data pages must be small, so that we don't pin
3667 +diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
3668 +index d66f70f63734..3b0e3cdee1c3 100644
3669 +--- a/include/net/ip6_tunnel.h
3670 ++++ b/include/net/ip6_tunnel.h
3671 +@@ -152,9 +152,12 @@ static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb,
3672 + memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
3673 + pkt_len = skb->len - skb_inner_network_offset(skb);
3674 + err = ip6_local_out(dev_net(skb_dst(skb)->dev), sk, skb);
3675 +- if (unlikely(net_xmit_eval(err)))
3676 +- pkt_len = -1;
3677 +- iptunnel_xmit_stats(dev, pkt_len);
3678 ++
3679 ++ if (dev) {
3680 ++ if (unlikely(net_xmit_eval(err)))
3681 ++ pkt_len = -1;
3682 ++ iptunnel_xmit_stats(dev, pkt_len);
3683 ++ }
3684 + }
3685 + #endif
3686 + #endif
3687 +diff --git a/include/uapi/linux/nilfs2_ondisk.h b/include/uapi/linux/nilfs2_ondisk.h
3688 +index a7e66ab11d1d..c23f91ae5fe8 100644
3689 +--- a/include/uapi/linux/nilfs2_ondisk.h
3690 ++++ b/include/uapi/linux/nilfs2_ondisk.h
3691 +@@ -29,7 +29,7 @@
3692 +
3693 + #include <linux/types.h>
3694 + #include <linux/magic.h>
3695 +-
3696 ++#include <asm/byteorder.h>
3697 +
3698 + #define NILFS_INODE_BMAP_SIZE 7
3699 +
3700 +@@ -533,19 +533,19 @@ enum {
3701 + static inline void \
3702 + nilfs_checkpoint_set_##name(struct nilfs_checkpoint *cp) \
3703 + { \
3704 +- cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \
3705 +- (1UL << NILFS_CHECKPOINT_##flag)); \
3706 ++ cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) | \
3707 ++ (1UL << NILFS_CHECKPOINT_##flag)); \
3708 + } \
3709 + static inline void \
3710 + nilfs_checkpoint_clear_##name(struct nilfs_checkpoint *cp) \
3711 + { \
3712 +- cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) & \
3713 ++ cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) & \
3714 + ~(1UL << NILFS_CHECKPOINT_##flag)); \
3715 + } \
3716 + static inline int \
3717 + nilfs_checkpoint_##name(const struct nilfs_checkpoint *cp) \
3718 + { \
3719 +- return !!(le32_to_cpu(cp->cp_flags) & \
3720 ++ return !!(__le32_to_cpu(cp->cp_flags) & \
3721 + (1UL << NILFS_CHECKPOINT_##flag)); \
3722 + }
3723 +
3724 +@@ -595,20 +595,20 @@ enum {
3725 + static inline void \
3726 + nilfs_segment_usage_set_##name(struct nilfs_segment_usage *su) \
3727 + { \
3728 +- su->su_flags = cpu_to_le32(le32_to_cpu(su->su_flags) | \
3729 ++ su->su_flags = __cpu_to_le32(__le32_to_cpu(su->su_flags) | \
3730 + (1UL << NILFS_SEGMENT_USAGE_##flag));\
3731 + } \
3732 + static inline void \
3733 + nilfs_segment_usage_clear_##name(struct nilfs_segment_usage *su) \
3734 + { \
3735 + su->su_flags = \
3736 +- cpu_to_le32(le32_to_cpu(su->su_flags) & \
3737 ++ __cpu_to_le32(__le32_to_cpu(su->su_flags) & \
3738 + ~(1UL << NILFS_SEGMENT_USAGE_##flag)); \
3739 + } \
3740 + static inline int \
3741 + nilfs_segment_usage_##name(const struct nilfs_segment_usage *su) \
3742 + { \
3743 +- return !!(le32_to_cpu(su->su_flags) & \
3744 ++ return !!(__le32_to_cpu(su->su_flags) & \
3745 + (1UL << NILFS_SEGMENT_USAGE_##flag)); \
3746 + }
3747 +
3748 +@@ -619,15 +619,15 @@ NILFS_SEGMENT_USAGE_FNS(ERROR, error)
3749 + static inline void
3750 + nilfs_segment_usage_set_clean(struct nilfs_segment_usage *su)
3751 + {
3752 +- su->su_lastmod = cpu_to_le64(0);
3753 +- su->su_nblocks = cpu_to_le32(0);
3754 +- su->su_flags = cpu_to_le32(0);
3755 ++ su->su_lastmod = __cpu_to_le64(0);
3756 ++ su->su_nblocks = __cpu_to_le32(0);
3757 ++ su->su_flags = __cpu_to_le32(0);
3758 + }
3759 +
3760 + static inline int
3761 + nilfs_segment_usage_clean(const struct nilfs_segment_usage *su)
3762 + {
3763 +- return !le32_to_cpu(su->su_flags);
3764 ++ return !__le32_to_cpu(su->su_flags);
3765 + }
3766 +
3767 + /**
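
The nilfs2 hunk is UAPI hygiene rather than a behavioral change: cpu_to_le32() and friends are kernel-internal, so an exported header must use the double-underscore variants that <asm/byteorder.h> provides to userspace as well (hence the added include). In sketch form, the portable spelling for a header under include/uapi is:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Builds identically in kernel and userspace translation units. */
    static inline void set_flag_le32(__le32 *flags, unsigned int bit)
    {
        *flags = __cpu_to_le32(__le32_to_cpu(*flags) | (1UL << bit));
    }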
3768 +diff --git a/kernel/cpu.c b/kernel/cpu.c
3769 +index f370a0f43005..d768e15bef83 100644
3770 +--- a/kernel/cpu.c
3771 ++++ b/kernel/cpu.c
3772 +@@ -1944,6 +1944,9 @@ static ssize_t write_cpuhp_fail(struct device *dev,
3773 + if (ret)
3774 + return ret;
3775 +
3776 ++ if (fail < CPUHP_OFFLINE || fail > CPUHP_ONLINE)
3777 ++ return -EINVAL;
3778 ++
3779 + /*
3780 + * Cannot fail STARTING/DYING callbacks.
3781 + */
3782 +diff --git a/kernel/events/core.c b/kernel/events/core.c
3783 +index 580616e6fcee..3d4eb6f840eb 100644
3784 +--- a/kernel/events/core.c
3785 ++++ b/kernel/events/core.c
3786 +@@ -5630,7 +5630,7 @@ static void perf_sample_regs_user(struct perf_regs *regs_user,
3787 + if (user_mode(regs)) {
3788 + regs_user->abi = perf_reg_abi(current);
3789 + regs_user->regs = regs;
3790 +- } else if (current->mm) {
3791 ++ } else if (!(current->flags & PF_KTHREAD)) {
3792 + perf_get_regs_user(regs_user, regs, regs_user_copy);
3793 + } else {
3794 + regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
3795 +diff --git a/net/can/af_can.c b/net/can/af_can.c
3796 +index 9de9678fa7d0..46c85731d16f 100644
3797 +--- a/net/can/af_can.c
3798 ++++ b/net/can/af_can.c
3799 +@@ -959,6 +959,8 @@ static struct pernet_operations can_pernet_ops __read_mostly = {
3800 +
3801 + static __init int can_init(void)
3802 + {
3803 ++ int err;
3804 ++
3805 + /* check for correct padding to be able to use the structs similarly */
3806 + BUILD_BUG_ON(offsetof(struct can_frame, can_dlc) !=
3807 + offsetof(struct canfd_frame, len) ||
3808 +@@ -972,15 +974,31 @@ static __init int can_init(void)
3809 + if (!rcv_cache)
3810 + return -ENOMEM;
3811 +
3812 +- register_pernet_subsys(&can_pernet_ops);
3813 ++ err = register_pernet_subsys(&can_pernet_ops);
3814 ++ if (err)
3815 ++ goto out_pernet;
3816 +
3817 + /* protocol register */
3818 +- sock_register(&can_family_ops);
3819 +- register_netdevice_notifier(&can_netdev_notifier);
3820 ++ err = sock_register(&can_family_ops);
3821 ++ if (err)
3822 ++ goto out_sock;
3823 ++ err = register_netdevice_notifier(&can_netdev_notifier);
3824 ++ if (err)
3825 ++ goto out_notifier;
3826 ++
3827 + dev_add_pack(&can_packet);
3828 + dev_add_pack(&canfd_packet);
3829 +
3830 + return 0;
3831 ++
3832 ++out_notifier:
3833 ++ sock_unregister(PF_CAN);
3834 ++out_sock:
3835 ++ unregister_pernet_subsys(&can_pernet_ops);
3836 ++out_pernet:
3837 ++ kmem_cache_destroy(rcv_cache);
3838 ++
3839 ++ return err;
3840 + }
3841 +
3842 + static __exit void can_exit(void)
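
can_init() previously ignored the return values of its registration calls; the fix gives it the canonical goto-unwind shape, where each label tears down everything registered before the failure point, in reverse order. The shape as a compile-ready generic sketch (register_a() and friends stand in for register_pernet_subsys(), sock_register() and the netdevice notifier):

    static int register_a(void) { return 0; }
    static int register_b(void) { return 0; }
    static int register_c(void) { return 0; }
    static void unregister_a(void) { }
    static void unregister_b(void) { }

    static int my_init(void)
    {
        int err;

        err = register_a();
        if (err)
            return err;          /* nothing to undo yet */
        err = register_b();
        if (err)
            goto out_b;
        err = register_c();
        if (err)
            goto out_c;
        return 0;

    out_c:
        unregister_b();          /* undo in reverse registration order */
    out_b:
        unregister_a();
        return err;
    }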
3843 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
3844 +index 2b3b0307dd89..6d9fd7d4bdfa 100644
3845 +--- a/net/core/skbuff.c
3846 ++++ b/net/core/skbuff.c
3847 +@@ -2299,6 +2299,7 @@ do_frag_list:
3848 + kv.iov_base = skb->data + offset;
3849 + kv.iov_len = slen;
3850 + memset(&msg, 0, sizeof(msg));
3851 ++ msg.msg_flags = MSG_DONTWAIT;
3852 +
3853 + ret = kernel_sendmsg_locked(sk, &msg, &kv, 1, slen);
3854 + if (ret <= 0)
3855 +diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
3856 +index cb1b4772dac0..35d5a76867d0 100644
3857 +--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
3858 ++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
3859 +@@ -265,8 +265,14 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
3860 +
3861 + prev = fq->q.fragments_tail;
3862 + err = inet_frag_queue_insert(&fq->q, skb, offset, end);
3863 +- if (err)
3864 ++ if (err) {
3865 ++ if (err == IPFRAG_DUP) {
3866 ++ /* No error for duplicates, pretend they got queued. */
3867 ++ kfree_skb(skb);
3868 ++ return -EINPROGRESS;
3869 ++ }
3870 + goto insert_error;
3871 ++ }
3872 +
3873 + if (dev)
3874 + fq->iif = dev->ifindex;
3875 +@@ -293,15 +299,17 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb,
3876 + skb->_skb_refdst = 0UL;
3877 + err = nf_ct_frag6_reasm(fq, skb, prev, dev);
3878 + skb->_skb_refdst = orefdst;
3879 +- return err;
3880 ++
3881 ++ /* After queue has assumed skb ownership, only 0 or
3882 ++ * -EINPROGRESS must be returned.
3883 ++ */
3884 ++ return err ? -EINPROGRESS : 0;
3885 + }
3886 +
3887 + skb_dst_drop(skb);
3888 + return -EINPROGRESS;
3889 +
3890 + insert_error:
3891 +- if (err == IPFRAG_DUP)
3892 +- goto err;
3893 + inet_frag_kill(&fq->q);
3894 + err:
3895 + skb_dst_drop(skb);
3896 +@@ -481,12 +489,6 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
3897 + ret = 0;
3898 + }
3899 +
3900 +- /* after queue has assumed skb ownership, only 0 or -EINPROGRESS
3901 +- * must be returned.
3902 +- */
3903 +- if (ret)
3904 +- ret = -EINPROGRESS;
3905 +-
3906 + spin_unlock_bh(&fq->q.lock);
3907 + inet_frag_put(&fq->q);
3908 + return ret;
3909 +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
3910 +index a133acb43eb1..0e209a88d88a 100644
3911 +--- a/net/mac80211/ieee80211_i.h
3912 ++++ b/net/mac80211/ieee80211_i.h
3913 +@@ -1405,7 +1405,7 @@ ieee80211_get_sband(struct ieee80211_sub_if_data *sdata)
3914 + rcu_read_lock();
3915 + chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
3916 +
3917 +- if (WARN_ON(!chanctx_conf)) {
3918 ++ if (WARN_ON_ONCE(!chanctx_conf)) {
3919 + rcu_read_unlock();
3920 + return NULL;
3921 + }
3922 +diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
3923 +index 96e57d7c2872..c6edae051e9b 100644
3924 +--- a/net/mac80211/mesh.c
3925 ++++ b/net/mac80211/mesh.c
3926 +@@ -922,6 +922,7 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata)
3927 +
3928 + /* flush STAs and mpaths on this iface */
3929 + sta_info_flush(sdata);
3930 ++ ieee80211_free_keys(sdata, true);
3931 + mesh_path_flush_by_iface(sdata);
3932 +
3933 + /* stop the beacon */
3934 +@@ -1209,7 +1210,8 @@ int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata)
3935 + ifmsh->chsw_ttl = 0;
3936 +
3937 + /* Remove the CSA and MCSP elements from the beacon */
3938 +- tmp_csa_settings = rcu_dereference(ifmsh->csa);
3939 ++ tmp_csa_settings = rcu_dereference_protected(ifmsh->csa,
3940 ++ lockdep_is_held(&sdata->wdev.mtx));
3941 + RCU_INIT_POINTER(ifmsh->csa, NULL);
3942 + if (tmp_csa_settings)
3943 + kfree_rcu(tmp_csa_settings, rcu_head);
3944 +@@ -1231,6 +1233,8 @@ int ieee80211_mesh_csa_beacon(struct ieee80211_sub_if_data *sdata,
3945 + struct mesh_csa_settings *tmp_csa_settings;
3946 + int ret = 0;
3947 +
3948 ++ lockdep_assert_held(&sdata->wdev.mtx);
3949 ++
3950 + tmp_csa_settings = kmalloc(sizeof(*tmp_csa_settings),
3951 + GFP_ATOMIC);
3952 + if (!tmp_csa_settings)
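
On the mesh CSA teardown path the ifmsh->csa pointer is read for update under the wdev mutex, not inside an RCU read-side section, so plain rcu_dereference() would trip lockdep; rcu_dereference_protected() names the lock that makes the access safe, and the setter now asserts it. A sketch of the idiom:

    #include <linux/mutex.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct cfg {
        struct rcu_head rcu;
        int val;
    };

    struct obj {
        struct mutex mtx;
        struct cfg __rcu *cfg;
    };

    static void obj_clear_cfg(struct obj *o)
    {
        struct cfg *old;

        lockdep_assert_held(&o->mtx);
        old = rcu_dereference_protected(o->cfg, lockdep_is_held(&o->mtx));
        RCU_INIT_POINTER(o->cfg, NULL);
        if (old)
            kfree_rcu(old, rcu);    /* readers may still be using it */
    }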
3953 +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
3954 +index 6d118357d9dc..9259529e0412 100644
3955 +--- a/net/sunrpc/clnt.c
3956 ++++ b/net/sunrpc/clnt.c
3957 +@@ -2706,6 +2706,7 @@ int rpc_clnt_add_xprt(struct rpc_clnt *clnt,
3958 + xprt = xprt_iter_xprt(&clnt->cl_xpi);
3959 + if (xps == NULL || xprt == NULL) {
3960 + rcu_read_unlock();
3961 ++ xprt_switch_put(xps);
3962 + return -EAGAIN;
3963 + }
3964 + resvport = xprt->resvport;
3965 +diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
3966 +index 2325d7ad76df..e8e8b756dc52 100644
3967 +--- a/samples/bpf/bpf_load.c
3968 ++++ b/samples/bpf/bpf_load.c
3969 +@@ -613,7 +613,7 @@ void read_trace_pipe(void)
3970 + static char buf[4096];
3971 + ssize_t sz;
3972 +
3973 +- sz = read(trace_fd, buf, sizeof(buf));
3974 ++ sz = read(trace_fd, buf, sizeof(buf) - 1);
3975 + if (sz > 0) {
3976 + buf[sz] = 0;
3977 + puts(buf);
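
read() never NUL-terminates, so reading a full sizeof(buf) leaves no room for the buf[sz] = 0 that follows; reserving one byte keeps the terminator in bounds even when the trace pipe fills the buffer. The same pattern in isolation (userspace-runnable):

    #include <stdio.h>
    #include <unistd.h>

    static void dump_fd(int fd)
    {
        static char buf[4096];
        ssize_t sz;

        /* read at most sizeof(buf) - 1 so the terminator below
         * can never land one past the end of buf */
        sz = read(fd, buf, sizeof(buf) - 1);
        if (sz > 0) {
            buf[sz] = 0;
            fputs(buf, stdout);
        }
    }

    int main(void)
    {
        dump_fd(0);    /* e.g. pipe some text into stdin */
        return 0;
    }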
3978 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
3979 +index 3552b4b1f902..20914a33ca5d 100644
3980 +--- a/sound/pci/hda/patch_realtek.c
3981 ++++ b/sound/pci/hda/patch_realtek.c
3982 +@@ -3114,6 +3114,7 @@ static void alc256_init(struct hda_codec *codec)
3983 + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
3984 + alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
3985 + alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
3986 ++ alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
3987 + }
3988 +
3989 + static void alc256_shutup(struct hda_codec *codec)
3990 +@@ -7218,7 +7219,6 @@ static int patch_alc269(struct hda_codec *codec)
3991 + spec->shutup = alc256_shutup;
3992 + spec->init_hook = alc256_init;
3993 + spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
3994 +- alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
3995 + break;
3996 + case 0x10ec0257:
3997 + spec->codec_variant = ALC269_TYPE_ALC257;
3998 +diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
3999 +index dc06f5e40041..526d808ecbbd 100644
4000 +--- a/virt/kvm/arm/vgic/vgic-its.c
4001 ++++ b/virt/kvm/arm/vgic/vgic-its.c
4002 +@@ -1677,6 +1677,7 @@ static void vgic_its_destroy(struct kvm_device *kvm_dev)
4003 + mutex_unlock(&its->its_lock);
4004 +
4005 + kfree(its);
4006 ++ kfree(kvm_dev); /* allocated by kvm_ioctl_create_device, freed by .destroy */
4007 + }
4008 +
4009 + int vgic_its_has_attr_regs(struct kvm_device *dev,