
From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:4.1 commit in: /
Date: Mon, 26 Oct 2015 20:49:45
Message-Id: 1445892569.a96b0651fc6a971fe0c2d4a77f574c77dfbddd0b.mpagano@gentoo
commit: a96b0651fc6a971fe0c2d4a77f574c77dfbddd0b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 26 20:49:29 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 26 20:49:29 2015 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a96b0651

Linux patch 4.1.11

0000_README | 4 +
1010_linux-4.1.11.patch | 8151 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8155 insertions(+)
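The diffstat above is the usual shape of a version bump in this series: one new entry in 0000_README plus the incremental patch itself. As an illustrative sketch only (the directory paths below are assumptions, not part of this commit), the new patch applies with patch -p1 on top of a tree that already carries the series up through 1009_linux-4.1.10.patch:

    # Hypothetical paths; 1010_linux-4.1.11.patch is the file this commit adds.
    cd /usr/src/linux-4.1.10
    patch -p1 < /path/to/linux-patches/1010_linux-4.1.11.patch

In normal use the gentoo-sources ebuilds apply the whole series automatically; a manual patch -p1 run like this is only for inspection or testing.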
diff --git a/0000_README b/0000_README
index b9b941a..fa3fbdb 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-4.1.10.patch
From: http://www.kernel.org
Desc: Linux 4.1.10

+Patch: 1010_linux-4.1.11.patch
+From: http://www.kernel.org
+Desc: Linux 4.1.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-4.1.11.patch b/1010_linux-4.1.11.patch
new file mode 100644
index 0000000..0200b32
--- /dev/null
+++ b/1010_linux-4.1.11.patch
@@ -0,0 +1,8151 @@
+diff --git a/Documentation/HOWTO b/Documentation/HOWTO
+index 93aa8604630e..21152d397b88 100644
+--- a/Documentation/HOWTO
++++ b/Documentation/HOWTO
+@@ -218,16 +218,16 @@ The development process
+ Linux kernel development process currently consists of a few different
+ main kernel "branches" and lots of different subsystem-specific kernel
+ branches. These different branches are:
+- - main 3.x kernel tree
+- - 3.x.y -stable kernel tree
+- - 3.x -git kernel patches
++ - main 4.x kernel tree
++ - 4.x.y -stable kernel tree
++ - 4.x -git kernel patches
+ - subsystem specific kernel trees and patches
+- - the 3.x -next kernel tree for integration tests
++ - the 4.x -next kernel tree for integration tests
+
+-3.x kernel tree
++4.x kernel tree
+ -----------------
+-3.x kernels are maintained by Linus Torvalds, and can be found on
+-kernel.org in the pub/linux/kernel/v3.x/ directory. Its development
++4.x kernels are maintained by Linus Torvalds, and can be found on
++kernel.org in the pub/linux/kernel/v4.x/ directory. Its development
+ process is as follows:
+ - As soon as a new kernel is released a two weeks window is open,
+ during this period of time maintainers can submit big diffs to
+@@ -262,20 +262,20 @@ mailing list about kernel releases:
+ released according to perceived bug status, not according to a
+ preconceived timeline."
+
+-3.x.y -stable kernel tree
++4.x.y -stable kernel tree
+ ---------------------------
+ Kernels with 3-part versions are -stable kernels. They contain
+ relatively small and critical fixes for security problems or significant
+-regressions discovered in a given 3.x kernel.
++regressions discovered in a given 4.x kernel.
+
+ This is the recommended branch for users who want the most recent stable
+ kernel and are not interested in helping test development/experimental
+ versions.
+
+-If no 3.x.y kernel is available, then the highest numbered 3.x
++If no 4.x.y kernel is available, then the highest numbered 4.x
+ kernel is the current stable kernel.
+
+-3.x.y are maintained by the "stable" team <stable@×××××××××××.org>, and
++4.x.y are maintained by the "stable" team <stable@×××××××××××.org>, and
+ are released as needs dictate. The normal release period is approximately
+ two weeks, but it can be longer if there are no pressing problems. A
+ security-related problem, instead, can cause a release to happen almost
+@@ -285,7 +285,7 @@ The file Documentation/stable_kernel_rules.txt in the kernel tree
+ documents what kinds of changes are acceptable for the -stable tree, and
+ how the release process works.
+
+-3.x -git patches
++4.x -git patches
+ ------------------
+ These are daily snapshots of Linus' kernel tree which are managed in a
+ git repository (hence the name.) These patches are usually released
+@@ -317,9 +317,9 @@ revisions to it, and maintainers can mark patches as under review,
+ accepted, or rejected. Most of these patchwork sites are listed at
+ http://patchwork.kernel.org/.
+
+-3.x -next kernel tree for integration tests
++4.x -next kernel tree for integration tests
+ ---------------------------------------------
+-Before updates from subsystem trees are merged into the mainline 3.x
++Before updates from subsystem trees are merged into the mainline 4.x
+ tree, they need to be integration-tested. For this purpose, a special
107 + tree, they need to be integration-tested. For this purpose, a special
108 + testing repository exists into which virtually all subsystem trees are
109 + pulled on an almost daily basis:
110 +diff --git a/Makefile b/Makefile
111 +index d02f16b510dc..c7d877b1c248 100644
112 +--- a/Makefile
113 ++++ b/Makefile
114 +@@ -1,6 +1,6 @@
115 + VERSION = 4
116 + PATCHLEVEL = 1
117 +-SUBLEVEL = 10
118 ++SUBLEVEL = 11
119 + EXTRAVERSION =
120 + NAME = Series 4800
121 +
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index 985227cbbd1b..47f10e7ad1f6 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -50,6 +50,14 @@ AS += -EL
+ LD += -EL
+ endif
+
++#
++# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
++# later may result in code being generated that handles signed short and signed
++# char struct members incorrectly. So disable it.
++# (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932)
++#
++KBUILD_CFLAGS += $(call cc-option,-fno-ipa-sra)
++
+ # This selects which instruction set is used.
+ # Note that GCC does not numerically define an architecture version
+ # macro, but instead defines a whole series of macros which makes
+diff --git a/arch/arm/boot/dts/imx25-pdk.dts b/arch/arm/boot/dts/imx25-pdk.dts
+index dd45e6971bc3..9351296356dc 100644
+--- a/arch/arm/boot/dts/imx25-pdk.dts
++++ b/arch/arm/boot/dts/imx25-pdk.dts
+@@ -10,6 +10,7 @@
+ */
+
+ /dts-v1/;
++#include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/input/input.h>
+ #include "imx25.dtsi"
+
+@@ -114,8 +115,8 @@
+ &esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+- cd-gpios = <&gpio2 1 0>;
+- wp-gpios = <&gpio2 0 0>;
++ cd-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
++ wp-gpios = <&gpio2 0 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/imx51-apf51dev.dts b/arch/arm/boot/dts/imx51-apf51dev.dts
+index 93d3ea12328c..0f3fe29b816e 100644
+--- a/arch/arm/boot/dts/imx51-apf51dev.dts
++++ b/arch/arm/boot/dts/imx51-apf51dev.dts
+@@ -98,7 +98,7 @@
+ &esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+- cd-gpios = <&gpio2 29 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&gpio2 29 GPIO_ACTIVE_LOW>;
+ bus-width = <4>;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx53-ard.dts b/arch/arm/boot/dts/imx53-ard.dts
+index e9337ad52f59..3bc18835fb4b 100644
+--- a/arch/arm/boot/dts/imx53-ard.dts
++++ b/arch/arm/boot/dts/imx53-ard.dts
+@@ -103,8 +103,8 @@
+ &esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+- cd-gpios = <&gpio1 1 0>;
+- wp-gpios = <&gpio1 9 0>;
++ cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
++ wp-gpios = <&gpio1 9 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/imx53-m53evk.dts b/arch/arm/boot/dts/imx53-m53evk.dts
+index d0e0f57eb432..53f40885c530 100644
+--- a/arch/arm/boot/dts/imx53-m53evk.dts
++++ b/arch/arm/boot/dts/imx53-m53evk.dts
+@@ -124,8 +124,8 @@
+ &esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+- cd-gpios = <&gpio1 1 0>;
+- wp-gpios = <&gpio1 9 0>;
++ cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
++ wp-gpios = <&gpio1 9 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/imx53-qsb-common.dtsi b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+index 181ae5ebf23f..1f55187ed9ce 100644
+--- a/arch/arm/boot/dts/imx53-qsb-common.dtsi
++++ b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+@@ -147,8 +147,8 @@
+ &esdhc3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc3>;
+- cd-gpios = <&gpio3 11 0>;
+- wp-gpios = <&gpio3 12 0>;
++ cd-gpios = <&gpio3 11 GPIO_ACTIVE_LOW>;
++ wp-gpios = <&gpio3 12 GPIO_ACTIVE_HIGH>;
+ bus-width = <8>;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx53-smd.dts b/arch/arm/boot/dts/imx53-smd.dts
+index 1d325576bcc0..fc89ce1e5763 100644
+--- a/arch/arm/boot/dts/imx53-smd.dts
++++ b/arch/arm/boot/dts/imx53-smd.dts
+@@ -41,8 +41,8 @@
+ &esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+- cd-gpios = <&gpio3 13 0>;
+- wp-gpios = <&gpio4 11 0>;
++ cd-gpios = <&gpio3 13 GPIO_ACTIVE_LOW>;
++ wp-gpios = <&gpio4 11 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/imx53-tqma53.dtsi b/arch/arm/boot/dts/imx53-tqma53.dtsi
+index 4f1f0e2868bf..e03373a58760 100644
+--- a/arch/arm/boot/dts/imx53-tqma53.dtsi
++++ b/arch/arm/boot/dts/imx53-tqma53.dtsi
+@@ -41,8 +41,8 @@
+ pinctrl-0 = <&pinctrl_esdhc2>,
+ <&pinctrl_esdhc2_cdwp>;
+ vmmc-supply = <&reg_3p3v>;
+- wp-gpios = <&gpio1 2 0>;
+- cd-gpios = <&gpio1 4 0>;
++ wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/imx53-tx53.dtsi b/arch/arm/boot/dts/imx53-tx53.dtsi
+index 704bd72cbfec..d3e50b22064f 100644
+--- a/arch/arm/boot/dts/imx53-tx53.dtsi
++++ b/arch/arm/boot/dts/imx53-tx53.dtsi
+@@ -183,7 +183,7 @@
+ };
+
+ &esdhc1 {
+- cd-gpios = <&gpio3 24 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>;
+ fsl,wp-controller;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+@@ -191,7 +191,7 @@
+ };
+
+ &esdhc2 {
+- cd-gpios = <&gpio3 25 GPIO_ACTIVE_HIGH>;
++ cd-gpios = <&gpio3 25 GPIO_ACTIVE_LOW>;
+ fsl,wp-controller;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc2>;
+diff --git a/arch/arm/boot/dts/imx53-voipac-bsb.dts b/arch/arm/boot/dts/imx53-voipac-bsb.dts
+index c17d3ad6dba5..fc51b87ad208 100644
+--- a/arch/arm/boot/dts/imx53-voipac-bsb.dts
++++ b/arch/arm/boot/dts/imx53-voipac-bsb.dts
+@@ -119,8 +119,8 @@
+ &esdhc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc2>;
+- cd-gpios = <&gpio3 25 0>;
+- wp-gpios = <&gpio2 19 0>;
++ cd-gpios = <&gpio3 25 GPIO_ACTIVE_LOW>;
++ wp-gpios = <&gpio2 19 GPIO_ACTIVE_HIGH>;
+ vmmc-supply = <&reg_3p3v>;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-rex.dtsi b/arch/arm/boot/dts/imx6qdl-rex.dtsi
+index 488a640796ac..394a4ace351a 100644
+--- a/arch/arm/boot/dts/imx6qdl-rex.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-rex.dtsi
+@@ -35,7 +35,6 @@
+ compatible = "regulator-fixed";
+ reg = <1>;
+ pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_usbh1>;
+ regulator-name = "usbh1_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+@@ -47,7 +46,6 @@
+ compatible = "regulator-fixed";
+ reg = <2>;
+ pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_usbotg>;
+ regulator-name = "usb_otg_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
+index a5474113cd50..67659a0ed13e 100644
+--- a/arch/arm/boot/dts/omap3-beagle.dts
++++ b/arch/arm/boot/dts/omap3-beagle.dts
+@@ -202,7 +202,7 @@
+
+ tfp410_pins: pinmux_tfp410_pins {
+ pinctrl-single,pins = <
+- 0x194 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
++ 0x196 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
+ >;
+ };
+
+diff --git a/arch/arm/boot/dts/omap5-uevm.dts b/arch/arm/boot/dts/omap5-uevm.dts
+index 74777a6e200a..1b958e92d674 100644
+--- a/arch/arm/boot/dts/omap5-uevm.dts
++++ b/arch/arm/boot/dts/omap5-uevm.dts
+@@ -174,8 +174,8 @@
+
+ i2c5_pins: pinmux_i2c5_pins {
+ pinctrl-single,pins = <
+- 0x184 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */
+- 0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */
++ 0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */
++ 0x188 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */
+ >;
+ };
+
+diff --git a/arch/arm/kernel/kgdb.c b/arch/arm/kernel/kgdb.c
+index a6ad93c9bce3..fd9eefce0a7b 100644
+--- a/arch/arm/kernel/kgdb.c
++++ b/arch/arm/kernel/kgdb.c
+@@ -259,15 +259,17 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
+ if (err)
+ return err;
+
+- patch_text((void *)bpt->bpt_addr,
+- *(unsigned int *)arch_kgdb_ops.gdb_bpt_instr);
++ /* Machine is already stopped, so we can use __patch_text() directly */
++ __patch_text((void *)bpt->bpt_addr,
++ *(unsigned int *)arch_kgdb_ops.gdb_bpt_instr);
+
+ return err;
+ }
+
+ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
+ {
+- patch_text((void *)bpt->bpt_addr, *(unsigned int *)bpt->saved_instr);
++ /* Machine is already stopped, so we can use __patch_text() directly */
++ __patch_text((void *)bpt->bpt_addr, *(unsigned int *)bpt->saved_instr);
+
+ return 0;
+ }
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index 423663e23791..586eef26203d 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -343,12 +343,17 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig,
+ */
+ thumb = handler & 1;
+
+-#if __LINUX_ARM_ARCH__ >= 7
++#if __LINUX_ARM_ARCH__ >= 6
+ /*
+- * Clear the If-Then Thumb-2 execution state
+- * ARM spec requires this to be all 000s in ARM mode
+- * Snapdragon S4/Krait misbehaves on a Thumb=>ARM
+- * signal transition without this.
++ * Clear the If-Then Thumb-2 execution state. ARM spec
++ * requires this to be all 000s in ARM mode. Snapdragon
++ * S4/Krait misbehaves on a Thumb=>ARM signal transition
++ * without this.
++ *
++ * We must do this whenever we are running on a Thumb-2
++ * capable CPU, which includes ARMv6T2. However, we elect
++ * to do this whenever we're on an ARMv6 or later CPU for
++ * simplicity.
+ */
+ cpsr &= ~PSR_IT_MASK;
+ #endif
+diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
+index 48efe2ee452c..58048b333d31 100644
+--- a/arch/arm/kvm/interrupts_head.S
++++ b/arch/arm/kvm/interrupts_head.S
+@@ -518,8 +518,7 @@ ARM_BE8(rev r6, r6 )
+
+ mrc p15, 0, r2, c14, c3, 1 @ CNTV_CTL
+ str r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
+- bic r2, #1 @ Clear ENABLE
+- mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL
++
+ isb
+
+ mrrc p15, 3, rr_lo_hi(r2, r3), c14 @ CNTV_CVAL
+@@ -532,6 +531,9 @@ ARM_BE8(rev r6, r6 )
+ mcrr p15, 4, r2, r2, c14 @ CNTVOFF
+
+ 1:
++ mov r2, #0 @ Clear ENABLE
++ mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL
++
+ @ Allow physical timer/counter access for the host
+ mrc p15, 4, r2, c14, c1, 0 @ CNTHCTL
+ orr r2, r2, #(CNTHCTL_PL1PCEN | CNTHCTL_PL1PCTEN)
+diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
+index 1d5accbd3dcf..191dcfab9f60 100644
+--- a/arch/arm/kvm/mmu.c
++++ b/arch/arm/kvm/mmu.c
+@@ -1790,8 +1790,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ if (vma->vm_flags & VM_PFNMAP) {
+ gpa_t gpa = mem->guest_phys_addr +
+ (vm_start - mem->userspace_addr);
+- phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) +
+- vm_start - vma->vm_start;
++ phys_addr_t pa;
++
++ pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
++ pa += vm_start - vma->vm_start;
+
+ /* IO region dirty page logging not allowed */
+ if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES)
+diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
+index 9bdf54795f05..56978199c479 100644
+--- a/arch/arm/mach-exynos/mcpm-exynos.c
++++ b/arch/arm/mach-exynos/mcpm-exynos.c
+@@ -20,6 +20,7 @@
+ #include <asm/cputype.h>
+ #include <asm/cp15.h>
+ #include <asm/mcpm.h>
++#include <asm/smp_plat.h>
+
+ #include "regs-pmu.h"
+ #include "common.h"
+@@ -70,7 +71,31 @@ static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
+ cluster >= EXYNOS5420_NR_CLUSTERS)
+ return -EINVAL;
+
+- exynos_cpu_power_up(cpunr);
++ if (!exynos_cpu_power_state(cpunr)) {
++ exynos_cpu_power_up(cpunr);
++
++ /*
++ * This assumes the cluster number of the big cores(Cortex A15)
++ * is 0 and the Little cores(Cortex A7) is 1.
++ * When the system was booted from the Little core,
++ * they should be reset during power up cpu.
++ */
++ if (cluster &&
++ cluster == MPIDR_AFFINITY_LEVEL(cpu_logical_map(0), 1)) {
++ /*
++ * Before we reset the Little cores, we should wait
++ * the SPARE2 register is set to 1 because the init
++ * codes of the iROM will set the register after
++ * initialization.
++ */
++ while (!pmu_raw_readl(S5P_PMU_SPARE2))
++ udelay(10);
++
++ pmu_raw_writel(EXYNOS5420_KFC_CORE_RESET(cpu),
++ EXYNOS_SWRESET);
++ }
++ }
++
+ return 0;
+ }
+
+diff --git a/arch/arm/mach-exynos/regs-pmu.h b/arch/arm/mach-exynos/regs-pmu.h
+index b7614333d296..fba9068ed260 100644
+--- a/arch/arm/mach-exynos/regs-pmu.h
++++ b/arch/arm/mach-exynos/regs-pmu.h
+@@ -513,6 +513,12 @@ static inline unsigned int exynos_pmu_cpunr(unsigned int mpidr)
+ #define SPREAD_ENABLE 0xF
+ #define SPREAD_USE_STANDWFI 0xF
+
++#define EXYNOS5420_KFC_CORE_RESET0 BIT(8)
++#define EXYNOS5420_KFC_ETM_RESET0 BIT(20)
++
++#define EXYNOS5420_KFC_CORE_RESET(_nr) \
++ ((EXYNOS5420_KFC_CORE_RESET0 | EXYNOS5420_KFC_ETM_RESET0) << (_nr))
++
+ #define EXYNOS5420_BB_CON1 0x0784
+ #define EXYNOS5420_BB_SEL_EN BIT(31)
+ #define EXYNOS5420_BB_PMOS_EN BIT(7)
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index 352962bc2e78..5170fd5c8e97 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -257,7 +257,8 @@ static bool __init efi_virtmap_init(void)
+ */
+ if (!is_normal_ram(md))
+ prot = __pgprot(PROT_DEVICE_nGnRE);
+- else if (md->type == EFI_RUNTIME_SERVICES_CODE)
++ else if (md->type == EFI_RUNTIME_SERVICES_CODE ||
++ !PAGE_ALIGNED(md->phys_addr))
+ prot = PAGE_KERNEL_EXEC;
+ else
+ prot = PAGE_KERNEL;
+diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
+index 08cafc518b9a..0f03a8fe2314 100644
+--- a/arch/arm64/kernel/entry-ftrace.S
++++ b/arch/arm64/kernel/entry-ftrace.S
+@@ -178,6 +178,24 @@ ENTRY(ftrace_stub)
+ ENDPROC(ftrace_stub)
+
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
++ /* save return value regs*/
++ .macro save_return_regs
++ sub sp, sp, #64
++ stp x0, x1, [sp]
++ stp x2, x3, [sp, #16]
++ stp x4, x5, [sp, #32]
++ stp x6, x7, [sp, #48]
++ .endm
++
++ /* restore return value regs*/
++ .macro restore_return_regs
++ ldp x0, x1, [sp]
++ ldp x2, x3, [sp, #16]
++ ldp x4, x5, [sp, #32]
++ ldp x6, x7, [sp, #48]
++ add sp, sp, #64
++ .endm
++
+ /*
+ * void ftrace_graph_caller(void)
+ *
+@@ -204,11 +222,11 @@ ENDPROC(ftrace_graph_caller)
+ * only when CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST is enabled.
+ */
+ ENTRY(return_to_handler)
+- str x0, [sp, #-16]!
++ save_return_regs
+ mov x0, x29 // parent's fp
+ bl ftrace_return_to_handler // addr = ftrace_return_to_hander(fp);
+ mov x30, x0 // restore the original return address
+- ldr x0, [sp], #16
++ restore_return_regs
+ ret
+ END(return_to_handler)
+ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 96da13167d4a..fa5efaa5c3ac 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -279,6 +279,7 @@ retry:
+ * starvation.
+ */
+ mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
++ mm_flags |= FAULT_FLAG_TRIED;
+ goto retry;
+ }
+ }
+diff --git a/arch/m68k/include/asm/linkage.h b/arch/m68k/include/asm/linkage.h
+index 5a822bb790f7..066e74f666ae 100644
+--- a/arch/m68k/include/asm/linkage.h
++++ b/arch/m68k/include/asm/linkage.h
+@@ -4,4 +4,34 @@
+ #define __ALIGN .align 4
+ #define __ALIGN_STR ".align 4"
+
++/*
++ * Make sure the compiler doesn't do anything stupid with the
++ * arguments on the stack - they are owned by the *caller*, not
++ * the callee. This just fools gcc into not spilling into them,
++ * and keeps it from doing tailcall recursion and/or using the
++ * stack slots for temporaries, since they are live and "used"
++ * all the way to the end of the function.
++ */
++#define asmlinkage_protect(n, ret, args...) \
++ __asmlinkage_protect##n(ret, ##args)
++#define __asmlinkage_protect_n(ret, args...) \
++ __asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args)
++#define __asmlinkage_protect0(ret) \
++ __asmlinkage_protect_n(ret)
++#define __asmlinkage_protect1(ret, arg1) \
++ __asmlinkage_protect_n(ret, "m" (arg1))
++#define __asmlinkage_protect2(ret, arg1, arg2) \
++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2))
++#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \
++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3))
++#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \
++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
++ "m" (arg4))
++#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \
++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
++ "m" (arg4), "m" (arg5))
++#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \
++ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
++ "m" (arg4), "m" (arg5), "m" (arg6))
++
+ #endif
+diff --git a/arch/mips/loongson/common/env.c b/arch/mips/loongson/common/env.c
+index 22f04ca2ff3e..2efb18aafa4f 100644
+--- a/arch/mips/loongson/common/env.c
++++ b/arch/mips/loongson/common/env.c
+@@ -64,6 +64,9 @@ void __init prom_init_env(void)
+ }
+ if (memsize == 0)
+ memsize = 256;
++
++ loongson_sysconf.nr_uarts = 1;
++
+ pr_info("memsize=%u, highmemsize=%u\n", memsize, highmemsize);
+ #else
+ struct boot_params *boot_p;
+diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
+index 609d1241b0c4..371eec113659 100644
+--- a/arch/mips/mm/dma-default.c
++++ b/arch/mips/mm/dma-default.c
+@@ -100,7 +100,7 @@ static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
+ else
+ #endif
+ #if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32)
+- if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
++ if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
+ dma_flag = __GFP_DMA;
+ else
+ #endif
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 453a8a47a467..964c0ce584ce 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -826,12 +826,15 @@ int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu)
+ unsigned long size = kvmppc_get_gpr(vcpu, 4);
+ unsigned long addr = kvmppc_get_gpr(vcpu, 5);
+ u64 buf;
++ int srcu_idx;
+ int ret;
+
+ if (!is_power_of_2(size) || (size > sizeof(buf)))
+ return H_TOO_HARD;
+
++ srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+ ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf);
++ srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+ if (ret != 0)
+ return H_TOO_HARD;
+
+@@ -866,6 +869,7 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu)
+ unsigned long addr = kvmppc_get_gpr(vcpu, 5);
+ unsigned long val = kvmppc_get_gpr(vcpu, 6);
+ u64 buf;
++ int srcu_idx;
+ int ret;
+
+ switch (size) {
+@@ -889,7 +893,9 @@ int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu)
+ return H_TOO_HARD;
+ }
+
++ srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+ ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, size, &buf);
++ srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+ if (ret != 0)
+ return H_TOO_HARD;
+
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 3b2d2c5b6376..ffd98b2bfa16 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -1171,6 +1171,7 @@ mc_cont:
+ bl kvmhv_accumulate_time
+ #endif
+
++ mr r3, r12
+ /* Increment exit count, poke other threads to exit */
+ bl kvmhv_commence_exit
+ nop
+diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
+index bca2aeb6e4b6..3ff29cf6d05c 100644
+--- a/arch/powerpc/platforms/powernv/pci.c
++++ b/arch/powerpc/platforms/powernv/pci.c
+@@ -99,6 +99,7 @@ static void pnv_teardown_msi_irqs(struct pci_dev *pdev)
+ struct pci_controller *hose = pci_bus_to_host(pdev->bus);
+ struct pnv_phb *phb = hose->private_data;
+ struct msi_desc *entry;
++ irq_hw_number_t hwirq;
+
+ if (WARN_ON(!phb))
+ return;
+@@ -106,10 +107,10 @@ static void pnv_teardown_msi_irqs(struct pci_dev *pdev)
+ list_for_each_entry(entry, &pdev->msi_list, list) {
+ if (entry->irq == NO_IRQ)
+ continue;
++ hwirq = virq_to_hw(entry->irq);
+ irq_set_msi_desc(entry->irq, NULL);
+- msi_bitmap_free_hwirqs(&phb->msi_bmp,
+- virq_to_hw(entry->irq) - phb->msi_base, 1);
+ irq_dispose_mapping(entry->irq);
++ msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq - phb->msi_base, 1);
+ }
+ }
+ #endif /* CONFIG_PCI_MSI */
+diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
+index f086c6f22dc9..fd16cb5d83f3 100644
+--- a/arch/powerpc/sysdev/fsl_msi.c
++++ b/arch/powerpc/sysdev/fsl_msi.c
+@@ -128,15 +128,16 @@ static void fsl_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ struct msi_desc *entry;
+ struct fsl_msi *msi_data;
++ irq_hw_number_t hwirq;
+
+ list_for_each_entry(entry, &pdev->msi_list, list) {
+ if (entry->irq == NO_IRQ)
+ continue;
++ hwirq = virq_to_hw(entry->irq);
+ msi_data = irq_get_chip_data(entry->irq);
+ irq_set_msi_desc(entry->irq, NULL);
+- msi_bitmap_free_hwirqs(&msi_data->bitmap,
+- virq_to_hw(entry->irq), 1);
+ irq_dispose_mapping(entry->irq);
++ msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
+ }
+
+ return;
+diff --git a/arch/powerpc/sysdev/mpic_pasemi_msi.c b/arch/powerpc/sysdev/mpic_pasemi_msi.c
+index a3f660eed6de..89496cf4e04d 100644
+--- a/arch/powerpc/sysdev/mpic_pasemi_msi.c
++++ b/arch/powerpc/sysdev/mpic_pasemi_msi.c
+@@ -65,6 +65,7 @@ static struct irq_chip mpic_pasemi_msi_chip = {
+ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ struct msi_desc *entry;
++ irq_hw_number_t hwirq;
+
+ pr_debug("pasemi_msi_teardown_msi_irqs, pdev %p\n", pdev);
+
+@@ -72,10 +73,11 @@ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
+ if (entry->irq == NO_IRQ)
+ continue;
+
++ hwirq = virq_to_hw(entry->irq);
+ irq_set_msi_desc(entry->irq, NULL);
+- msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
+- virq_to_hw(entry->irq), ALLOC_CHUNK);
+ irq_dispose_mapping(entry->irq);
++ msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
++ hwirq, ALLOC_CHUNK);
+ }
+
+ return;
+diff --git a/arch/powerpc/sysdev/mpic_u3msi.c b/arch/powerpc/sysdev/mpic_u3msi.c
+index b2cef1809389..13a34b237559 100644
+--- a/arch/powerpc/sysdev/mpic_u3msi.c
++++ b/arch/powerpc/sysdev/mpic_u3msi.c
+@@ -107,15 +107,16 @@ static u64 find_u4_magic_addr(struct pci_dev *pdev, unsigned int hwirq)
+ static void u3msi_teardown_msi_irqs(struct pci_dev *pdev)
+ {
+ struct msi_desc *entry;
++ irq_hw_number_t hwirq;
+
+ list_for_each_entry(entry, &pdev->msi_list, list) {
+ if (entry->irq == NO_IRQ)
+ continue;
+
++ hwirq = virq_to_hw(entry->irq);
+ irq_set_msi_desc(entry->irq, NULL);
+- msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
+- virq_to_hw(entry->irq), 1);
+ irq_dispose_mapping(entry->irq);
++ msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, 1);
+ }
+
+ return;
+diff --git a/arch/powerpc/sysdev/ppc4xx_msi.c b/arch/powerpc/sysdev/ppc4xx_msi.c
+index 6e2e6aa378bb..02a137daa182 100644
+--- a/arch/powerpc/sysdev/ppc4xx_msi.c
++++ b/arch/powerpc/sysdev/ppc4xx_msi.c
+@@ -124,16 +124,17 @@ void ppc4xx_teardown_msi_irqs(struct pci_dev *dev)
+ {
+ struct msi_desc *entry;
+ struct ppc4xx_msi *msi_data = &ppc4xx_msi;
++ irq_hw_number_t hwirq;
+
+ dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n");
+
+ list_for_each_entry(entry, &dev->msi_list, list) {
+ if (entry->irq == NO_IRQ)
+ continue;
++ hwirq = virq_to_hw(entry->irq);
+ irq_set_msi_desc(entry->irq, NULL);
+- msi_bitmap_free_hwirqs(&msi_data->bitmap,
+- virq_to_hw(entry->irq), 1);
+ irq_dispose_mapping(entry->irq);
++ msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
+ }
+ }
+
+diff --git a/arch/s390/boot/compressed/Makefile b/arch/s390/boot/compressed/Makefile
+index d4788111c161..fac6ac9790fa 100644
+--- a/arch/s390/boot/compressed/Makefile
++++ b/arch/s390/boot/compressed/Makefile
+@@ -10,7 +10,7 @@ targets += misc.o piggy.o sizes.h head.o
+
+ KBUILD_CFLAGS := -m64 -D__KERNEL__ $(LINUX_INCLUDE) -O2
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+-KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks
++KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float
+ KBUILD_CFLAGS += $(call cc-option,-mpacked-stack)
+ KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
+
+diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c
+index fe8d6924efaa..c78ba51ae285 100644
+--- a/arch/s390/kernel/compat_signal.c
++++ b/arch/s390/kernel/compat_signal.c
+@@ -48,6 +48,19 @@ typedef struct
+ struct ucontext32 uc;
+ } rt_sigframe32;
+
++static inline void sigset_to_sigset32(unsigned long *set64,
++ compat_sigset_word *set32)
++{
++ set32[0] = (compat_sigset_word) set64[0];
++ set32[1] = (compat_sigset_word)(set64[0] >> 32);
++}
++
++static inline void sigset32_to_sigset(compat_sigset_word *set32,
++ unsigned long *set64)
++{
++ set64[0] = (unsigned long) set32[0] | ((unsigned long) set32[1] << 32);
++}
++
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
+ {
+ int err;
+@@ -303,10 +316,12 @@ COMPAT_SYSCALL_DEFINE0(sigreturn)
+ {
+ struct pt_regs *regs = task_pt_regs(current);
+ sigframe32 __user *frame = (sigframe32 __user *)regs->gprs[15];
++ compat_sigset_t cset;
+ sigset_t set;
+
+- if (__copy_from_user(&set.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32))
++ if (__copy_from_user(&cset.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32))
+ goto badframe;
++ sigset32_to_sigset(cset.sig, set.sig);
+ set_current_blocked(&set);
+ if (restore_sigregs32(regs, &frame->sregs))
+ goto badframe;
+@@ -323,10 +338,12 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
+ {
+ struct pt_regs *regs = task_pt_regs(current);
+ rt_sigframe32 __user *frame = (rt_sigframe32 __user *)regs->gprs[15];
++ compat_sigset_t cset;
+ sigset_t set;
+
+- if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
++ if (__copy_from_user(&cset, &frame->uc.uc_sigmask, sizeof(cset)))
+ goto badframe;
++ sigset32_to_sigset(cset.sig, set.sig);
+ set_current_blocked(&set);
+ if (compat_restore_altstack(&frame->uc.uc_stack))
+ goto badframe;
+@@ -397,7 +414,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
+ return -EFAULT;
+
+ /* Create struct sigcontext32 on the signal stack */
+- memcpy(&sc.oldmask, &set->sig, _SIGMASK_COPY_SIZE32);
++ sigset_to_sigset32(set->sig, sc.oldmask);
+ sc.sregs = (__u32)(unsigned long __force) &frame->sregs;
+ if (__copy_to_user(&frame->sc, &sc, sizeof(frame->sc)))
+ return -EFAULT;
+@@ -458,6 +475,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
+ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
+ struct pt_regs *regs)
+ {
++ compat_sigset_t cset;
+ rt_sigframe32 __user *frame;
+ unsigned long restorer;
+ size_t frame_size;
+@@ -505,11 +523,12 @@ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
+ store_sigregs();
+
+ /* Create ucontext on the signal stack. */
++ sigset_to_sigset32(set->sig, cset.sig);
+ if (__put_user(uc_flags, &frame->uc.uc_flags) ||
+ __put_user(0, &frame->uc.uc_link) ||
+ __compat_save_altstack(&frame->uc.uc_stack, regs->gprs[15]) ||
+ save_sigregs32(regs, &frame->uc.uc_mcontext) ||
+- __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)) ||
++ __copy_to_user(&frame->uc.uc_sigmask, &cset, sizeof(cset)) ||
+ save_sigregs_ext32(regs, &frame->uc.uc_mcontext_ext))
+ return -EFAULT;
+
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index aef653193160..d1918a8c4393 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -325,10 +325,15 @@ done:
+
+ static void __init_or_module optimize_nops(struct alt_instr *a, u8 *instr)
+ {
++ unsigned long flags;
++
+ if (instr[0] != 0x90)
+ return;
+
++ local_irq_save(flags);
+ add_nops(instr + (a->instrlen - a->padlen), a->padlen);
++ sync_core();
++ local_irq_restore(flags);
+
+ DUMP_BYTES(instr, a->instrlen, "%p: [%d:%d) optimized NOPs: ",
+ instr, a->instrlen - a->padlen, a->padlen);
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index cde732c1b495..307a49828826 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -336,6 +336,13 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
+ apic_write(APIC_LVTT, lvtt_value);
+
+ if (lvtt_value & APIC_LVT_TIMER_TSCDEADLINE) {
++ /*
++ * See Intel SDM: TSC-Deadline Mode chapter. In xAPIC mode,
++ * writing to the APIC LVTT and TSC_DEADLINE MSR isn't serialized.
++ * According to Intel, MFENCE can do the serialization here.
++ */
++ asm volatile("mfence" : : : "memory");
++
+ printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
+ return;
+ }
+diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
+index 2813ea0f142e..22212615a137 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel.c
++++ b/arch/x86/kernel/cpu/perf_event_intel.c
+@@ -2098,9 +2098,12 @@ static struct event_constraint *
+ intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ struct perf_event *event)
+ {
+- struct event_constraint *c1 = cpuc->event_constraint[idx];
++ struct event_constraint *c1 = NULL;
+ struct event_constraint *c2;
+
++ if (idx >= 0) /* fake does < 0 */
++ c1 = cpuc->event_constraint[idx];
++
+ /*
+ * first time only
+ * - static constraint: no change across incremental scheduling calls
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index c76d3e37c6e1..403ace539b73 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -184,10 +184,9 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ }
+
+ #ifdef CONFIG_KEXEC_FILE
+-static int get_nr_ram_ranges_callback(unsigned long start_pfn,
+- unsigned long nr_pfn, void *arg)
++static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg)
+ {
+- int *nr_ranges = arg;
++ unsigned int *nr_ranges = arg;
+
+ (*nr_ranges)++;
+ return 0;
+@@ -213,7 +212,7 @@ static void fill_up_crash_elf_data(struct crash_elf_data *ced,
+
+ ced->image = image;
+
+- walk_system_ram_range(0, -1, &nr_ranges,
++ walk_system_ram_res(0, -1, &nr_ranges,
+ get_nr_ram_ranges_callback);
+
+ ced->max_nr_ranges = nr_ranges;
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 4bd6c197563d..6c9cb6073832 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -1393,7 +1393,18 @@ END(error_exit)
+ /* Runs on exception stack */
+ ENTRY(nmi)
+ INTR_FRAME
++ /*
++ * Fix up the exception frame if we're on Xen.
++ * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most
++ * one value to the stack on native, so it may clobber the rdx
++ * scratch slot, but it won't clobber any of the important
++ * slots past it.
++ *
++ * Xen is a different story, because the Xen frame itself overlaps
++ * the "NMI executing" variable.
++ */
+ PARAVIRT_ADJUST_EXCEPTION_FRAME
++
+ /*
+ * We allow breakpoints in NMIs. If a breakpoint occurs, then
+ * the iretq it performs will take us out of NMI context.
+@@ -1445,9 +1456,12 @@ ENTRY(nmi)
+ * we don't want to enable interrupts, because then we'll end
+ * up in an awkward situation in which IRQs are on but NMIs
+ * are off.
++ *
++ * We also must not push anything to the stack before switching
++ * stacks lest we corrupt the "NMI executing" variable.
+ */
+
+- SWAPGS
++ SWAPGS_UNSAFE_STACK
+ cld
+ movq %rsp, %rdx
+ movq PER_CPU_VAR(kernel_stack), %rsp
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index c614dd492f5f..1f316f066c49 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -41,10 +41,18 @@
+ #include <asm/timer.h>
+ #include <asm/special_insns.h>
+
+-/* nop stub */
+-void _paravirt_nop(void)
+-{
+-}
++/*
++ * nop stub, which must not clobber anything *including the stack* to
++ * avoid confusing the entry prologues.
++ */
++extern void _paravirt_nop(void);
++asm (".pushsection .entry.text, \"ax\"\n"
++ ".global _paravirt_nop\n"
++ "_paravirt_nop:\n\t"
++ "ret\n\t"
++ ".size _paravirt_nop, . - _paravirt_nop\n\t"
++ ".type _paravirt_nop, @function\n\t"
++ ".popsection");
+
+ /* identity function, which can be inlined */
+ u32 _paravirt_ident_32(u32 x)
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 5e0bf57d9944..58e02d938218 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -499,27 +499,59 @@ void set_personality_ia32(bool x32)
+ }
+ EXPORT_SYMBOL_GPL(set_personality_ia32);
+
++/*
++ * Called from fs/proc with a reference on @p to find the function
++ * which called into schedule(). This needs to be done carefully
++ * because the task might wake up and we might look at a stack
++ * changing under us.
++ */
+ unsigned long get_wchan(struct task_struct *p)
+ {
+- unsigned long stack;
+- u64 fp, ip;
++ unsigned long start, bottom, top, sp, fp, ip;
+ int count = 0;
+
+ if (!p || p == current || p->state == TASK_RUNNING)
+ return 0;
+- stack = (unsigned long)task_stack_page(p);
+- if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE)
++
++ start = (unsigned long)task_stack_page(p);
++ if (!start)
++ return 0;
++
++ /*
++ * Layout of the stack page:
++ *
++ * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
++ * PADDING
++ * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
++ * stack
++ * ----------- bottom = start + sizeof(thread_info)
++ * thread_info
++ * ----------- start
++ *
++ * The tasks stack pointer points at the location where the
++ * framepointer is stored. The data on the stack is:
++ * ... IP FP ... IP FP
++ *
++ * We need to read FP and IP, so we need to adjust the upper
++ * bound by another unsigned long.
++ */
++ top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;
++ top -= 2 * sizeof(unsigned long);
++ bottom = start + sizeof(struct thread_info);
++
++ sp = READ_ONCE(p->thread.sp);
++ if (sp < bottom || sp > top)
+ return 0;
+- fp = *(u64 *)(p->thread.sp);
++
++ fp = READ_ONCE(*(unsigned long *)sp);
+ do {
+- if (fp < (unsigned long)stack ||
+- fp >= (unsigned long)stack+THREAD_SIZE)
++ if (fp < bottom || fp > top)
+ return 0;
+- ip = *(u64 *)(fp+8);
++ ip = READ_ONCE(*(unsigned long *)(fp + sizeof(unsigned long)));
+ if (!in_sched_functions(ip))
+ return ip;
+- fp = *(u64 *)fp;
+- } while (count++ < 16);
++ fp = READ_ONCE(*(unsigned long *)fp);
++ } while (count++ < 16 && p->state != TASK_RUNNING);
+ return 0;
+ }
+
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 505449700e0c..21187ebee7d0 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -21,6 +21,7 @@
+ #include <asm/hypervisor.h>
+ #include <asm/nmi.h>
+ #include <asm/x86_init.h>
++#include <asm/geode.h>
+
+ unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */
+ EXPORT_SYMBOL(cpu_khz);
+@@ -1004,15 +1005,17 @@ EXPORT_SYMBOL_GPL(mark_tsc_unstable);
+
+ static void __init check_system_tsc_reliable(void)
+ {
+-#ifdef CONFIG_MGEODE_LX
+- /* RTSC counts during suspend */
++#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
++ if (is_geode_lx()) {
++ /* RTSC counts during suspend */
+ #define RTSC_SUSP 0x100
+- unsigned long res_low, res_high;
++ unsigned long res_low, res_high;
+
+- rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
+- /* Geode_LX - the OLPC CPU has a very reliable TSC */
+- if (res_low & RTSC_SUSP)
+- tsc_clocksource_reliable = 1;
++ rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
++ /* Geode_LX - the OLPC CPU has a very reliable TSC */
++ if (res_low & RTSC_SUSP)
++ tsc_clocksource_reliable = 1;
++ }
+ #endif
+ if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
+ tsc_clocksource_reliable = 1;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 4911bf19122b..7858cd9acfe4 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -512,7 +512,7 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (svm->vmcb->control.next_rip != 0) {
+- WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS));
++ WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));
+ svm->next_rip = svm->vmcb->control.next_rip;
+ }
+
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 2d73807f0d31..bc3041e1abbc 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -6144,6 +6144,8 @@ static __init int hardware_setup(void)
+ memcpy(vmx_msr_bitmap_longmode_x2apic,
+ vmx_msr_bitmap_longmode, PAGE_SIZE);
+
++ set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
++
+ if (enable_apicv) {
+ for (msr = 0x800; msr <= 0x8ff; msr++)
+ vmx_disable_intercept_msr_read_x2apic(msr);
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 3fba623e3ba5..f9977a7a9444 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1132,7 +1132,7 @@ void mark_rodata_ro(void)
+ * has been zapped already via cleanup_highmem().
+ */
+ all_end = roundup((unsigned long)_brk_end, PMD_SIZE);
+- set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT);
++ set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT);
+
+ rodata_test();
+
+diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
+index 841ea05e1b02..477384985ac9 100644
+--- a/arch/x86/platform/efi/efi.c
++++ b/arch/x86/platform/efi/efi.c
+@@ -679,6 +679,70 @@ out:
+ }
+
+ /*
++ * Iterate the EFI memory map in reverse order because the regions
++ * will be mapped top-down. The end result is the same as if we had
++ * mapped things forward, but doesn't require us to change the
++ * existing implementation of efi_map_region().
++ */
++static inline void *efi_map_next_entry_reverse(void *entry)
++{
++ /* Initial call */
++ if (!entry)
++ return memmap.map_end - memmap.desc_size;
++
++ entry -= memmap.desc_size;
++ if (entry < memmap.map)
++ return NULL;
++
++ return entry;
++}
++
++/*
++ * efi_map_next_entry - Return the next EFI memory map descriptor
++ * @entry: Previous EFI memory map descriptor
++ *
++ * This is a helper function to iterate over the EFI memory map, which
++ * we do in different orders depending on the current configuration.
++ *
++ * To begin traversing the memory map @entry must be %NULL.
++ *
++ * Returns %NULL when we reach the end of the memory map.
++ */
++static void *efi_map_next_entry(void *entry)
++{
++ if (!efi_enabled(EFI_OLD_MEMMAP) && efi_enabled(EFI_64BIT)) {
++ /*
++ * Starting in UEFI v2.5 the EFI_PROPERTIES_TABLE
++ * config table feature requires us to map all entries
++ * in the same order as they appear in the EFI memory
++ * map. That is to say, entry N must have a lower
++ * virtual address than entry N+1. This is because the
++ * firmware toolchain leaves relative references in
++ * the code/data sections, which are split and become
++ * separate EFI memory regions. Mapping things
++ * out-of-order leads to the firmware accessing
++ * unmapped addresses.
++ *
++ * Since we need to map things this way whether or not
++ * the kernel actually makes use of
++ * EFI_PROPERTIES_TABLE, let's just switch to this
++ * scheme by default for 64-bit.
++ */
++ return efi_map_next_entry_reverse(entry);
++ }
++
++ /* Initial call */
++ if (!entry)
++ return memmap.map;
++
++ entry += memmap.desc_size;
++ if (entry >= memmap.map_end)
++ return NULL;
++
++ return entry;
++}
++
++/*
+ * Map the efi memory ranges of the runtime services and update new_mmap with
+ * virtual addresses.
+ */
+@@ -688,7 +752,8 @@ static void * __init efi_map_regions(int *count, int *pg_shift)
+ unsigned long left = 0;
+ efi_memory_desc_t *md;
+
+- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
++ p = NULL;
++ while ((p = efi_map_next_entry(p))) {
+ md = p;
+ if (!(md->attribute & EFI_MEMORY_RUNTIME)) {
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index a671e837228d..0cc657160cb6 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -33,6 +33,10 @@
+ #include <linux/memblock.h>
+ #include <linux/edd.h>
+
++#ifdef CONFIG_KEXEC_CORE
++#include <linux/kexec.h>
++#endif
++
+ #include <xen/xen.h>
+ #include <xen/events.h>
+ #include <xen/interface/xen.h>
+@@ -1798,6 +1802,21 @@ static struct notifier_block xen_hvm_cpu_notifier = {
+ .notifier_call = xen_hvm_cpu_notify,
+ };
+
++#ifdef CONFIG_KEXEC_CORE
++static void xen_hvm_shutdown(void)
++{
++ native_machine_shutdown();
++ if (kexec_in_progress)
++ xen_reboot(SHUTDOWN_soft_reset);
++}
++
++static void xen_hvm_crash_shutdown(struct pt_regs *regs)
++{
++ native_machine_crash_shutdown(regs);
++ xen_reboot(SHUTDOWN_soft_reset);
++}
++#endif
++
+ static void __init xen_hvm_guest_init(void)
+ {
+ if (xen_pv_domain())
+@@ -1817,6 +1836,10 @@ static void __init xen_hvm_guest_init(void)
+ x86_init.irqs.intr_init = xen_init_IRQ;
+ xen_hvm_init_time_ops();
+ xen_hvm_init_mmu_ops();
++#ifdef CONFIG_KEXEC_CORE
++ machine_ops.shutdown = xen_hvm_shutdown;
++ machine_ops.crash_shutdown = xen_hvm_crash_shutdown;
++#endif
+ }
+ #endif
+
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index df0c66cb7ad3..fdba441457ec 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -148,7 +148,11 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
+
+ if (sibling == cpu) /* skip itself */
+ continue;
++
+ sib_cpu_ci = get_cpu_cacheinfo(sibling);
++ if (!sib_cpu_ci->info_list)
++ continue;
++
+ sib_leaf = sib_cpu_ci->info_list + index;
+ cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
+ cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
+@@ -159,6 +163,9 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
+
+ static void free_cache_attributes(unsigned int cpu)
+ {
++ if (!per_cpu_cacheinfo(cpu))
++ return;
++
+ cache_shared_cpu_map_remove(cpu);
+
+ kfree(per_cpu_cacheinfo(cpu));
+@@ -514,8 +521,7 @@ static int cacheinfo_cpu_callback(struct notifier_block *nfb,
+ break;
+ case CPU_DEAD:
+ cache_remove_dev(cpu);
+- if (per_cpu_cacheinfo(cpu))
+- free_cache_attributes(cpu);
++ free_cache_attributes(cpu);
+ break;
+ }
+ return notifier_from_errno(rc);
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index 1d0b116cae95..0a60ef1500cd 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -26,9 +26,10 @@
+ */
+ void device_add_property_set(struct device *dev, struct property_set *pset)
+ {
+- if (pset)
+- pset->fwnode.type = FWNODE_PDATA;
++ if (!pset)
++ return;
+
++ pset->fwnode.type = FWNODE_PDATA;
+ set_secondary_fwnode(dev, &pset->fwnode);
+ }
+ EXPORT_SYMBOL_GPL(device_add_property_set);
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index 5799a0b9e6cc..c8941f39c919 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -32,8 +32,7 @@ static DEFINE_MUTEX(regmap_debugfs_early_lock);
+ /* Calculate the length of a fixed format */
+ static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size)
+ {
+- snprintf(buf, buf_size, "%x", max_val);
+- return strlen(buf);
++ return snprintf(NULL, 0, "%x", max_val);
+ }
+
+ static ssize_t regmap_name_read_file(struct file *file,
+@@ -432,7 +431,7 @@ static ssize_t regmap_access_read_file(struct file *file,
+ /* If we're in the region the user is trying to read */
+ if (p >= *ppos) {
+ /* ...but not beyond it */
+- if (buf_pos >= count - 1 - tot_len)
++ if (buf_pos + tot_len + 1 >= count)
+ break;
+
+ /* Format the register */
+diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c
+index 757636d166cf..4ab28cfb8d2a 100644
+--- a/drivers/clk/ti/clk-3xxx.c
++++ b/drivers/clk/ti/clk-3xxx.c
+@@ -163,7 +163,6 @@ static struct ti_dt_clk omap3xxx_clks[] = {
+ DT_CLK(NULL, "gpio2_ick", "gpio2_ick"),
+ DT_CLK(NULL, "wdt3_ick", "wdt3_ick"),
+ DT_CLK(NULL, "uart3_ick", "uart3_ick"),
+- DT_CLK(NULL, "uart4_ick", "uart4_ick"),
+ DT_CLK(NULL, "gpt9_ick", "gpt9_ick"),
+ DT_CLK(NULL, "gpt8_ick", "gpt8_ick"),
+ DT_CLK(NULL, "gpt7_ick", "gpt7_ick"),
+@@ -308,6 +307,7 @@ static struct ti_dt_clk am35xx_clks[] = {
+ static struct ti_dt_clk omap36xx_clks[] = {
+ DT_CLK(NULL, "omap_192m_alwon_fck", "omap_192m_alwon_fck"),
+ DT_CLK(NULL, "uart4_fck", "uart4_fck"),
++ DT_CLK(NULL, "uart4_ick", "uart4_ick"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index bab67db54b7e..663045ce6fac 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -255,7 +255,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ rcu_read_unlock();
+
+ tol_uV = opp_uV * priv->voltage_tolerance / 100;
+- if (regulator_is_supported_voltage(cpu_reg, opp_uV,
++ if (regulator_is_supported_voltage(cpu_reg,
++ opp_uV - tol_uV,
+ opp_uV + tol_uV)) {
+ if (opp_uV < min_uV)
+ min_uV = opp_uV;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 6f9d27f9001c..e8d16997c5cb 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -48,9 +48,9 @@ static inline int32_t mul_fp(int32_t x, int32_t y)
+ return ((int64_t)x * (int64_t)y) >> FRAC_BITS;
+ }
+
+-static inline int32_t div_fp(int32_t x, int32_t y)
++static inline int32_t div_fp(s64 x, s64 y)
+ {
+- return div_s64((int64_t)x << FRAC_BITS, y);
++ return div64_s64((int64_t)x << FRAC_BITS, y);
+ }
+
+ static inline int ceiling_fp(int32_t x)
+@@ -795,7 +795,7 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
+ static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
+ {
+ int32_t core_busy, max_pstate, current_pstate, sample_ratio;
+- u32 duration_us;
++ s64 duration_us;
+ u32 sample_time;
+
+ /*
+@@ -822,8 +822,8 @@ static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
+ * to adjust our busyness.
+ */
+ sample_time = pid_params.sample_rate_ms * USEC_PER_MSEC;
+- duration_us = (u32) ktime_us_delta(cpu->sample.time,
+- cpu->last_sample_time);
++ duration_us = ktime_us_delta(cpu->sample.time,
++ cpu->last_sample_time);
+ if (duration_us > sample_time * 3) {
+ sample_ratio = div_fp(int_tofp(sample_time),
+ int_tofp(duration_us));
+diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
+index 1022c2e1a2b0..9e504d3b0d4f 100644
+--- a/drivers/dma/dw/core.c
++++ b/drivers/dma/dw/core.c
+@@ -1591,7 +1591,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+ INIT_LIST_HEAD(&dw->dma.channels);
+ for (i = 0; i < nr_channels; i++) {
+ struct dw_dma_chan *dwc = &dw->chan[i];
+- int r = nr_channels - i - 1;
+
+ dwc->chan.device = &dw->dma;
+ dma_cookie_init(&dwc->chan);
+@@ -1603,7 +1602,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+
+ /* 7 is highest priority & 0 is lowest. */
+ if (pdata->chan_priority == CHAN_PRIORITY_ASCENDING)
+- dwc->priority = r;
++ dwc->priority = nr_channels - i - 1;
+ else
+ dwc->priority = i;
+
+@@ -1622,6 +1621,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+ /* Hardware configuration */
+ if (autocfg) {
+ unsigned int dwc_params;
++ unsigned int r = DW_DMA_MAX_NR_CHANNELS - i - 1;
+ void __iomem *addr = chip->regs + r * sizeof(u32);
+
+ dwc_params = dma_read_byaddr(addr, DWC_PARAMS);
1498 +diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
1499 +index e29560e6b40b..950c87f5d279 100644
1500 +--- a/drivers/firmware/efi/libstub/arm-stub.c
1501 ++++ b/drivers/firmware/efi/libstub/arm-stub.c
1502 +@@ -13,6 +13,7 @@
1503 + */
1504 +
1505 + #include <linux/efi.h>
1506 ++#include <linux/sort.h>
1507 + #include <asm/efi.h>
1508 +
1509 + #include "efistub.h"
1510 +@@ -305,6 +306,44 @@ fail:
1511 + */
1512 + #define EFI_RT_VIRTUAL_BASE 0x40000000
1513 +
1514 ++static int cmp_mem_desc(const void *l, const void *r)
1515 ++{
1516 ++ const efi_memory_desc_t *left = l, *right = r;
1517 ++
1518 ++ return (left->phys_addr > right->phys_addr) ? 1 : -1;
1519 ++}
1520 ++
1521 ++/*
1522 ++ * Returns whether region @left ends exactly where region @right starts,
1523 ++ * or false if either argument is NULL.
1524 ++ */
1525 ++static bool regions_are_adjacent(efi_memory_desc_t *left,
1526 ++ efi_memory_desc_t *right)
1527 ++{
1528 ++ u64 left_end;
1529 ++
1530 ++ if (left == NULL || right == NULL)
1531 ++ return false;
1532 ++
1533 ++ left_end = left->phys_addr + left->num_pages * EFI_PAGE_SIZE;
1534 ++
1535 ++ return left_end == right->phys_addr;
1536 ++}
1537 ++
1538 ++/*
1539 ++ * Returns whether region @left and region @right have compatible memory type
1540 ++ * mapping attributes, and are both EFI_MEMORY_RUNTIME regions.
1541 ++ */
1542 ++static bool regions_have_compatible_memory_type_attrs(efi_memory_desc_t *left,
1543 ++ efi_memory_desc_t *right)
1544 ++{
1545 ++ static const u64 mem_type_mask = EFI_MEMORY_WB | EFI_MEMORY_WT |
1546 ++ EFI_MEMORY_WC | EFI_MEMORY_UC |
1547 ++ EFI_MEMORY_RUNTIME;
1548 ++
1549 ++ return ((left->attribute ^ right->attribute) & mem_type_mask) == 0;
1550 ++}
1551 ++
1552 + /*
1553 + * efi_get_virtmap() - create a virtual mapping for the EFI memory map
1554 + *
1555 +@@ -317,33 +356,52 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
1556 + int *count)
1557 + {
1558 + u64 efi_virt_base = EFI_RT_VIRTUAL_BASE;
1559 +- efi_memory_desc_t *out = runtime_map;
1560 ++ efi_memory_desc_t *in, *prev = NULL, *out = runtime_map;
1561 + int l;
1562 +
1563 +- for (l = 0; l < map_size; l += desc_size) {
1564 +- efi_memory_desc_t *in = (void *)memory_map + l;
1565 ++ /*
1566 ++ * To work around potential issues with the Properties Table feature
1567 ++ * introduced in UEFI 2.5, which may split PE/COFF executable images
1568 ++ * in memory into several RuntimeServicesCode and RuntimeServicesData
1569 ++ * regions, we need to preserve the relative offsets between adjacent
1570 ++ * EFI_MEMORY_RUNTIME regions with the same memory type attributes.
1571 ++ * The easiest way to find adjacent regions is to sort the memory map
1572 ++ * before traversing it.
1573 ++ */
1574 ++ sort(memory_map, map_size / desc_size, desc_size, cmp_mem_desc, NULL);
1575 ++
1576 ++ for (l = 0; l < map_size; l += desc_size, prev = in) {
1577 + u64 paddr, size;
1578 +
1579 ++ in = (void *)memory_map + l;
1580 + if (!(in->attribute & EFI_MEMORY_RUNTIME))
1581 + continue;
1582 +
1583 ++ paddr = in->phys_addr;
1584 ++ size = in->num_pages * EFI_PAGE_SIZE;
1585 ++
1586 + /*
1587 + * Make the mapping compatible with 64k pages: this allows
1588 + * a 4k page size kernel to kexec a 64k page size kernel and
1589 + * vice versa.
1590 + */
1591 +- paddr = round_down(in->phys_addr, SZ_64K);
1592 +- size = round_up(in->num_pages * EFI_PAGE_SIZE +
1593 +- in->phys_addr - paddr, SZ_64K);
1594 +-
1595 +- /*
1596 +- * Avoid wasting memory on PTEs by choosing a virtual base that
1597 +- * is compatible with section mappings if this region has the
1598 +- * appropriate size and physical alignment. (Sections are 2 MB
1599 +- * on 4k granule kernels)
1600 +- */
1601 +- if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M)
1602 +- efi_virt_base = round_up(efi_virt_base, SZ_2M);
1603 ++ if (!regions_are_adjacent(prev, in) ||
1604 ++ !regions_have_compatible_memory_type_attrs(prev, in)) {
1605 ++
1606 ++ paddr = round_down(in->phys_addr, SZ_64K);
1607 ++ size += in->phys_addr - paddr;
1608 ++
1609 ++ /*
1610 ++ * Avoid wasting memory on PTEs by choosing a virtual
1611 ++ * base that is compatible with section mappings if this
1612 ++ * region has the appropriate size and physical
1613 ++ * alignment. (Sections are 2 MB on 4k granule kernels)
1614 ++ */
1615 ++ if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M)
1616 ++ efi_virt_base = round_up(efi_virt_base, SZ_2M);
1617 ++ else
1618 ++ efi_virt_base = round_up(efi_virt_base, SZ_64K);
1619 ++ }
1620 +
1621 + in->virt_addr = efi_virt_base + in->phys_addr - paddr;
1622 + efi_virt_base += size;
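
The sort-then-map logic above is self-contained enough to model in userspace. A sketch under simplified assumptions — struct desc, the attribute field, and the example regions stand in for the real EFI descriptors — showing how two adjacent halves of a split image keep their relative offset while an unrelated region is re-rounded to a fresh 64 KiB-aligned virtual base:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ 0x1000ULL
#define SZ_64K  0x10000ULL

struct desc { uint64_t phys, pages, attr; };

static int cmp_desc(const void *l, const void *r)
{
	const struct desc *a = l, *b = r;

	return (a->phys > b->phys) ? 1 : -1;
}

static bool adjacent(const struct desc *l, const struct desc *r)
{
	return l && l->phys + l->pages * PAGE_SZ == r->phys;
}

int main(void)
{
	/* two halves of one split image, plus an unrelated region */
	struct desc map[] = {
		{ 0x80042000, 2, 1 },
		{ 0x80040000, 2, 1 },
		{ 0x90000000, 4, 1 },
	};
	uint64_t virt = 0x40000000, paddr;
	const struct desc *prev = NULL;
	int i;

	qsort(map, 3, sizeof(map[0]), cmp_desc);
	for (i = 0; i < 3; prev = &map[i], i++) {
		uint64_t size = map[i].pages * PAGE_SZ;

		paddr = map[i].phys;
		if (!adjacent(prev, &map[i]) || prev->attr != map[i].attr) {
			paddr = map[i].phys & ~(SZ_64K - 1); /* round down */
			size += map[i].phys - paddr;
			virt = (virt + SZ_64K - 1) & ~(SZ_64K - 1);
		}
		printf("phys %#llx -> virt %#llx\n",
		       (unsigned long long)map[i].phys,
		       (unsigned long long)(virt + map[i].phys - paddr));
		virt += size;
	}
	return 0;
}

The second region lands exactly PAGE_SZ * 2 above the first in virtual space, mirroring their physical adjacency, while the third starts at the next 64 KiB boundary.
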
1623 +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
1624 +index b0487c9f018c..7f467fdc9107 100644
1625 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c
1626 ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
1627 +@@ -804,8 +804,6 @@ static void drm_dp_destroy_mst_branch_device(struct kref *kref)
1628 + struct drm_dp_mst_port *port, *tmp;
1629 + bool wake_tx = false;
1630 +
1631 +- cancel_work_sync(&mstb->mgr->work);
1632 +-
1633 + /*
1634 + * destroy all ports - don't need lock
1635 + * as there are no more references to the mst branch
1636 +@@ -1977,6 +1975,8 @@ void drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr)
1637 + drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
1638 + DP_MST_EN | DP_UPSTREAM_IS_SRC);
1639 + mutex_unlock(&mgr->lock);
1640 ++ flush_work(&mgr->work);
1641 ++ flush_work(&mgr->destroy_connector_work);
1642 + }
1643 + EXPORT_SYMBOL(drm_dp_mst_topology_mgr_suspend);
1644 +
1645 +@@ -2730,6 +2730,7 @@ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);
1646 + */
1647 + void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
1648 + {
1649 ++ flush_work(&mgr->work);
1650 + flush_work(&mgr->destroy_connector_work);
1651 + mutex_lock(&mgr->payload_lock);
1652 + kfree(mgr->payloads);
1653 +diff --git a/drivers/gpu/drm/drm_lock.c b/drivers/gpu/drm/drm_lock.c
1654 +index f861361a635e..4924d381b664 100644
1655 +--- a/drivers/gpu/drm/drm_lock.c
1656 ++++ b/drivers/gpu/drm/drm_lock.c
1657 +@@ -61,6 +61,9 @@ int drm_legacy_lock(struct drm_device *dev, void *data,
1658 + struct drm_master *master = file_priv->master;
1659 + int ret = 0;
1660 +
1661 ++ if (drm_core_check_feature(dev, DRIVER_MODESET))
1662 ++ return -EINVAL;
1663 ++
1664 + ++file_priv->lock_count;
1665 +
1666 + if (lock->context == DRM_KERNEL_CONTEXT) {
1667 +@@ -153,6 +156,9 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_
1668 + struct drm_lock *lock = data;
1669 + struct drm_master *master = file_priv->master;
1670 +
1671 ++ if (drm_core_check_feature(dev, DRIVER_MODESET))
1672 ++ return -EINVAL;
1673 ++
1674 + if (lock->context == DRM_KERNEL_CONTEXT) {
1675 + DRM_ERROR("Process %d using kernel context %d\n",
1676 + task_pid_nr(current), lock->context);
1677 +diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
1678 +index c684085cb56a..fadf9865709e 100644
1679 +--- a/drivers/gpu/drm/i915/intel_bios.c
1680 ++++ b/drivers/gpu/drm/i915/intel_bios.c
1681 +@@ -41,7 +41,7 @@ find_section(struct bdb_header *bdb, int section_id)
1682 + {
1683 + u8 *base = (u8 *)bdb;
1684 + int index = 0;
1685 +- u16 total, current_size;
1686 ++ u32 total, current_size;
1687 + u8 current_id;
1688 +
1689 + /* skip to first section */
1690 +@@ -56,6 +56,10 @@ find_section(struct bdb_header *bdb, int section_id)
1691 + current_size = *((u16 *)(base + index));
1692 + index += 2;
1693 +
1694 ++ /* The MIPI Sequence Block v3+ has a separate size field. */
1695 ++ if (current_id == BDB_MIPI_SEQUENCE && *(base + index) >= 3)
1696 ++ current_size = *((const u32 *)(base + index + 1));
1697 ++
1698 + if (index + current_size > total)
1699 + return NULL;
1700 +
1701 +@@ -845,6 +849,12 @@ parse_mipi(struct drm_i915_private *dev_priv, struct bdb_header *bdb)
1702 + return;
1703 + }
1704 +
1705 ++ /* Fail gracefully for forward incompatible sequence block. */
1706 ++ if (sequence->version >= 3) {
1707 ++ DRM_ERROR("Unable to parse MIPI Sequence Block v3+\n");
1708 ++ return;
1709 ++ }
1710 ++
1711 + DRM_DEBUG_DRIVER("Found MIPI sequence block\n");
1712 +
1713 + block_size = get_blocksize(sequence);
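
The two-size parsing rule above (a 16-bit size for ordinary blocks, a u32 size after the version byte for MIPI Sequence Blocks v3+) can be sketched against a flat buffer. The id value 53 for BDB_MIPI_SEQUENCE matches the driver's header as far as I can tell; the walk below and its test data are otherwise invented:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BDB_MIPI_SEQUENCE 53

static int find_section(const uint8_t *base, uint32_t total, uint8_t id)
{
	uint32_t index = 0;

	while (index + 3 <= total) {
		uint8_t current_id = base[index++];
		uint32_t current_size;
		uint16_t size16;

		memcpy(&size16, base + index, 2);	/* may be unaligned */
		index += 2;
		current_size = size16;

		/* v3+ MIPI sequence blocks override the 16-bit size */
		if (current_id == BDB_MIPI_SEQUENCE && base[index] >= 3)
			memcpy(&current_size, base + index + 1, 4);

		if (index + current_size > total)
			return -1;		/* truncated block: bail out */
		if (current_id == id)
			return (int)index;	/* offset of the payload */
		index += current_size;
	}
	return -1;
}

int main(void)
{
	/* one dummy block (id 1, 2 payload bytes), then a v3 target whose
	 * real size (5) lives in the u32 after its version byte */
	const uint8_t vbt[] = { 1, 2, 0, 0xaa, 0xbb,
				BDB_MIPI_SEQUENCE, 5, 0, 3, 5, 0, 0, 0 };

	printf("payload at %d\n", find_section(vbt, sizeof(vbt), 53));
	return 0;
}
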
1714 +diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
1715 +index 32248791bc4b..52921a871230 100644
1716 +--- a/drivers/gpu/drm/qxl/qxl_display.c
1717 ++++ b/drivers/gpu/drm/qxl/qxl_display.c
1718 +@@ -618,7 +618,7 @@ static int qxl_crtc_mode_set(struct drm_crtc *crtc,
1719 + adjusted_mode->hdisplay,
1720 + adjusted_mode->vdisplay);
1721 +
1722 +- if (qcrtc->index == 0)
1723 ++ if (bo->is_primary == false)
1724 + recreate_primary = true;
1725 +
1726 + if (bo->surf.stride * bo->surf.height > qdev->vram_size) {
1727 +@@ -886,13 +886,15 @@ static enum drm_connector_status qxl_conn_detect(
1728 + drm_connector_to_qxl_output(connector);
1729 + struct drm_device *ddev = connector->dev;
1730 + struct qxl_device *qdev = ddev->dev_private;
1731 +- int connected;
1732 ++ bool connected = false;
1733 +
1734 + /* The first monitor is always connected */
1735 +- connected = (output->index == 0) ||
1736 +- (qdev->client_monitors_config &&
1737 +- qdev->client_monitors_config->count > output->index &&
1738 +- qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]));
1739 ++ if (!qdev->client_monitors_config) {
1740 ++ if (output->index == 0)
1741 ++ connected = true;
1742 ++ } else
1743 ++ connected = qdev->client_monitors_config->count > output->index &&
1744 ++ qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]);
1745 +
1746 + DRM_DEBUG("#%d connected: %d\n", output->index, connected);
1747 + if (!connected)
1748 +diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
1749 +index dd39f434b4a7..b4ff4c134fbb 100644
1750 +--- a/drivers/gpu/drm/radeon/atombios_encoders.c
1751 ++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
1752 +@@ -1624,8 +1624,9 @@ radeon_atom_encoder_dpms_avivo(struct drm_encoder *encoder, int mode)
1753 + } else
1754 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
1755 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
1756 +- args.ucAction = ATOM_LCD_BLON;
1757 +- atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
1758 ++ struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
1759 ++
1760 ++ atombios_set_backlight_level(radeon_encoder, dig->backlight_level);
1761 + }
1762 + break;
1763 + case DRM_MODE_DPMS_STANDBY:
1764 +@@ -1706,8 +1707,7 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode)
1765 + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0);
1766 + }
1767 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
1768 +- atombios_dig_transmitter_setup(encoder,
1769 +- ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0);
1770 ++ atombios_set_backlight_level(radeon_encoder, dig->backlight_level);
1771 + if (ext_encoder)
1772 + atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE);
1773 + break;
1774 +diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
1775 +index bd1c99deac71..2aaedbe0b023 100644
1776 +--- a/drivers/hwmon/nct6775.c
1777 ++++ b/drivers/hwmon/nct6775.c
1778 +@@ -354,6 +354,10 @@ static const u16 NCT6775_REG_TEMP_CRIT[ARRAY_SIZE(nct6775_temp_label) - 1]
1779 +
1780 + /* NCT6776 specific data */
1781 +
1782 ++/* STEP_UP_TIME and STEP_DOWN_TIME regs are swapped for all chips but NCT6775 */
1783 ++#define NCT6776_REG_FAN_STEP_UP_TIME NCT6775_REG_FAN_STEP_DOWN_TIME
1784 ++#define NCT6776_REG_FAN_STEP_DOWN_TIME NCT6775_REG_FAN_STEP_UP_TIME
1785 ++
1786 + static const s8 NCT6776_ALARM_BITS[] = {
1787 + 0, 1, 2, 3, 8, 21, 20, 16, /* in0.. in7 */
1788 + 17, -1, -1, -1, -1, -1, -1, /* in8..in14 */
1789 +@@ -3528,8 +3532,8 @@ static int nct6775_probe(struct platform_device *pdev)
1790 + data->REG_FAN_PULSES = NCT6776_REG_FAN_PULSES;
1791 + data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
1792 + data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
1793 +- data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
1794 +- data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
1795 ++ data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
1796 ++ data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
1797 + data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
1798 + data->REG_PWM[0] = NCT6775_REG_PWM;
1799 + data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
1800 +@@ -3600,8 +3604,8 @@ static int nct6775_probe(struct platform_device *pdev)
1801 + data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
1802 + data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
1803 + data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
1804 +- data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
1805 +- data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
1806 ++ data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
1807 ++ data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
1808 + data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
1809 + data->REG_PWM[0] = NCT6775_REG_PWM;
1810 + data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
1811 +@@ -3677,8 +3681,8 @@ static int nct6775_probe(struct platform_device *pdev)
1812 + data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
1813 + data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
1814 + data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
1815 +- data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
1816 +- data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
1817 ++ data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
1818 ++ data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
1819 + data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
1820 + data->REG_PWM[0] = NCT6775_REG_PWM;
1821 + data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
1822 +diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
1823 +index 575a072d765f..c32a934f7693 100644
1824 +--- a/drivers/infiniband/ulp/isert/ib_isert.c
1825 ++++ b/drivers/infiniband/ulp/isert/ib_isert.c
1826 +@@ -2996,9 +2996,16 @@ isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery)
1827 + static int
1828 + isert_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state)
1829 + {
1830 +- int ret;
1831 ++ struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
1832 ++ int ret = 0;
1833 +
1834 + switch (state) {
1835 ++ case ISTATE_REMOVE:
1836 ++ spin_lock_bh(&conn->cmd_lock);
1837 ++ list_del_init(&cmd->i_conn_node);
1838 ++ spin_unlock_bh(&conn->cmd_lock);
1839 ++ isert_put_cmd(isert_cmd, true);
1840 ++ break;
1841 + case ISTATE_SEND_NOPIN_WANT_RESPONSE:
1842 + ret = isert_put_nopin(cmd, conn, false);
1843 + break;
1844 +@@ -3363,6 +3370,41 @@ isert_wait4flush(struct isert_conn *isert_conn)
1845 + wait_for_completion(&isert_conn->wait_comp_err);
1846 + }
1847 +
1848 ++/**
1849 ++ * isert_put_unsol_pending_cmds() - Drop commands waiting for
1850 ++ * unsolicited dataout
1851 ++ * @conn: iscsi connection
1852 ++ *
1853 ++ * We might still have commands that are waiting for unsolicited
1854 ++ * dataout messages. We must put the extra reference on those
1855 ++ * before blocking on the target_wait_for_session_cmds
1856 ++ */
1857 ++static void
1858 ++isert_put_unsol_pending_cmds(struct iscsi_conn *conn)
1859 ++{
1860 ++ struct iscsi_cmd *cmd, *tmp;
1861 ++ static LIST_HEAD(drop_cmd_list);
1862 ++
1863 ++ spin_lock_bh(&conn->cmd_lock);
1864 ++ list_for_each_entry_safe(cmd, tmp, &conn->conn_cmd_list, i_conn_node) {
1865 ++ if ((cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA) &&
1866 ++ (cmd->write_data_done < conn->sess->sess_ops->FirstBurstLength) &&
1867 ++ (cmd->write_data_done < cmd->se_cmd.data_length))
1868 ++ list_move_tail(&cmd->i_conn_node, &drop_cmd_list);
1869 ++ }
1870 ++ spin_unlock_bh(&conn->cmd_lock);
1871 ++
1872 ++ list_for_each_entry_safe(cmd, tmp, &drop_cmd_list, i_conn_node) {
1873 ++ list_del_init(&cmd->i_conn_node);
1874 ++ if (cmd->i_state != ISTATE_REMOVE) {
1875 ++ struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
1876 ++
1877 ++ isert_info("conn %p dropping cmd %p\n", conn, cmd);
1878 ++ isert_put_cmd(isert_cmd, true);
1879 ++ }
1880 ++ }
1881 ++}
1882 ++
1883 + static void isert_wait_conn(struct iscsi_conn *conn)
1884 + {
1885 + struct isert_conn *isert_conn = conn->context;
1886 +@@ -3381,8 +3423,9 @@ static void isert_wait_conn(struct iscsi_conn *conn)
1887 + isert_conn_terminate(isert_conn);
1888 + mutex_unlock(&isert_conn->mutex);
1889 +
1890 +- isert_wait4cmds(conn);
1891 + isert_wait4flush(isert_conn);
1892 ++ isert_put_unsol_pending_cmds(conn);
1893 ++ isert_wait4cmds(conn);
1894 + isert_wait4logout(isert_conn);
1895 +
1896 + queue_work(isert_release_wq, &isert_conn->release_work);
1897 +diff --git a/drivers/irqchip/irq-atmel-aic5.c b/drivers/irqchip/irq-atmel-aic5.c
1898 +index a2e8c3f876cb..c2c578f0b268 100644
1899 +--- a/drivers/irqchip/irq-atmel-aic5.c
1900 ++++ b/drivers/irqchip/irq-atmel-aic5.c
1901 +@@ -88,28 +88,36 @@ static void aic5_mask(struct irq_data *d)
1902 + {
1903 + struct irq_domain *domain = d->domain;
1904 + struct irq_domain_chip_generic *dgc = domain->gc;
1905 +- struct irq_chip_generic *gc = dgc->gc[0];
1906 ++ struct irq_chip_generic *bgc = dgc->gc[0];
1907 ++ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
1908 +
1909 +- /* Disable interrupt on AIC5 */
1910 +- irq_gc_lock(gc);
1911 ++ /*
1912 ++ * Disable interrupt on AIC5. We always take the lock of the
1913 ++ * first irq chip as all chips share the same registers.
1914 ++ */
1915 ++ irq_gc_lock(bgc);
1916 + irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR);
1917 + irq_reg_writel(gc, 1, AT91_AIC5_IDCR);
1918 + gc->mask_cache &= ~d->mask;
1919 +- irq_gc_unlock(gc);
1920 ++ irq_gc_unlock(bgc);
1921 + }
1922 +
1923 + static void aic5_unmask(struct irq_data *d)
1924 + {
1925 + struct irq_domain *domain = d->domain;
1926 + struct irq_domain_chip_generic *dgc = domain->gc;
1927 +- struct irq_chip_generic *gc = dgc->gc[0];
1928 ++ struct irq_chip_generic *bgc = dgc->gc[0];
1929 ++ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
1930 +
1931 +- /* Enable interrupt on AIC5 */
1932 +- irq_gc_lock(gc);
1933 ++ /*
1934 ++ * Enable interrupt on AIC5. We always take the lock of the
1935 ++ * first irq chip as all chips share the same registers.
1936 ++ */
1937 ++ irq_gc_lock(bgc);
1938 + irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR);
1939 + irq_reg_writel(gc, 1, AT91_AIC5_IECR);
1940 + gc->mask_cache |= d->mask;
1941 +- irq_gc_unlock(gc);
1942 ++ irq_gc_unlock(bgc);
1943 + }
1944 +
1945 + static int aic5_retrigger(struct irq_data *d)
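
The pattern above — take the first chip's lock while updating the current chip's cache — is worth seeing outside the irq machinery. A toy pthreads sketch, with struct chip standing in for irq_chip_generic and a comment marking where the shared SSR/IDCR register writes would go:

#include <pthread.h>
#include <stdint.h>

struct chip {
	pthread_mutex_t lock;
	uint32_t mask_cache;
};

static struct chip chips[4] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
};

static void mask_irq(struct chip *gc, uint32_t bit)
{
	struct chip *bgc = &chips[0];	/* all banks share one register
					 * file, so all serialize here */

	pthread_mutex_lock(&bgc->lock);
	/* the shared SSR select + IDCR write would happen here */
	gc->mask_cache &= ~bit;		/* per-chip bookkeeping */
	pthread_mutex_unlock(&bgc->lock);
}

int main(void)
{
	mask_irq(&chips[2], 1u << 5);
	return 0;
}
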
1946 +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
1947 +index c00e2db351ba..9a791dd52199 100644
1948 +--- a/drivers/irqchip/irq-gic-v3-its.c
1949 ++++ b/drivers/irqchip/irq-gic-v3-its.c
1950 +@@ -921,8 +921,10 @@ retry_baser:
1951 + * non-cacheable as well.
1952 + */
1953 + shr = tmp & GITS_BASER_SHAREABILITY_MASK;
1954 +- if (!shr)
1955 ++ if (!shr) {
1956 + cache = GITS_BASER_nC;
1957 ++ __flush_dcache_area(base, alloc_size);
1958 ++ }
1959 + goto retry_baser;
1960 + }
1961 +
1962 +@@ -1163,6 +1165,8 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
1963 + return NULL;
1964 + }
1965 +
1966 ++ __flush_dcache_area(itt, sz);
1967 ++
1968 + dev->its = its;
1969 + dev->itt = itt;
1970 + dev->nr_ites = nr_ites;
1971 +diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
1972 +index 7fb2a19ac649..557f8a53a062 100644
1973 +--- a/drivers/leds/led-class.c
1974 ++++ b/drivers/leds/led-class.c
1975 +@@ -223,12 +223,15 @@ static int led_classdev_next_name(const char *init_name, char *name,
1976 + {
1977 + unsigned int i = 0;
1978 + int ret = 0;
1979 ++ struct device *dev;
1980 +
1981 + strlcpy(name, init_name, len);
1982 +
1983 +- while (class_find_device(leds_class, NULL, name, match_name) &&
1984 +- (ret < len))
1985 ++ while ((ret < len) &&
1986 ++ (dev = class_find_device(leds_class, NULL, name, match_name))) {
1987 ++ put_device(dev);
1988 + ret = snprintf(name, len, "%s_%u", init_name, ++i);
1989 ++ }
1990 +
1991 + if (ret >= len)
1992 + return -ENOMEM;
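
The renaming loop above probes candidate names until one is free, now dropping the device reference each class_find_device() call took. A userspace sketch of the same loop shape, with exists() standing in for the class lookup and snprintf swapped in for strlcpy:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool exists(const char *name)
{
	/* pretend "led0" and "led0_1" are already registered */
	return !strcmp(name, "led0") || !strcmp(name, "led0_1");
}

static int next_name(const char *init_name, char *name, size_t len)
{
	unsigned int i = 0;
	int ret = 0;

	snprintf(name, len, "%s", init_name);
	while ((size_t)ret < len && exists(name))
		ret = snprintf(name, len, "%s_%u", init_name, ++i);
	if ((size_t)ret >= len)
		return -1;	/* candidate no longer fits the buffer */
	return 0;
}

int main(void)
{
	char name[16];

	if (!next_name("led0", name, sizeof(name)))
		printf("%s\n", name);	/* prints led0_2 */
	return 0;
}
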
1993 +diff --git a/drivers/macintosh/windfarm_core.c b/drivers/macintosh/windfarm_core.c
1994 +index 3ee198b65843..cc7ece1712b5 100644
1995 +--- a/drivers/macintosh/windfarm_core.c
1996 ++++ b/drivers/macintosh/windfarm_core.c
1997 +@@ -435,7 +435,7 @@ int wf_unregister_client(struct notifier_block *nb)
1998 + {
1999 + mutex_lock(&wf_lock);
2000 + blocking_notifier_chain_unregister(&wf_client_list, nb);
2001 +- wf_client_count++;
2002 ++ wf_client_count--;
2003 + if (wf_client_count == 0)
2004 + wf_stop_thread();
2005 + mutex_unlock(&wf_lock);
2006 +diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
2007 +index c90118e90708..a7621a258936 100644
2008 +--- a/drivers/md/bitmap.c
2009 ++++ b/drivers/md/bitmap.c
2010 +@@ -2000,7 +2000,8 @@ int bitmap_resize(struct bitmap *bitmap, sector_t blocks,
2011 + if (bitmap->mddev->bitmap_info.offset || bitmap->mddev->bitmap_info.file)
2012 + ret = bitmap_storage_alloc(&store, chunks,
2013 + !bitmap->mddev->bitmap_info.external,
2014 +- bitmap->cluster_slot);
2015 ++ mddev_is_clustered(bitmap->mddev)
2016 ++ ? bitmap->cluster_slot : 0);
2017 + if (ret)
2018 + goto err;
2019 +
2020 +diff --git a/drivers/md/dm-cache-policy-cleaner.c b/drivers/md/dm-cache-policy-cleaner.c
2021 +index 004e463c9423..8308f4b434ec 100644
2022 +--- a/drivers/md/dm-cache-policy-cleaner.c
2023 ++++ b/drivers/md/dm-cache-policy-cleaner.c
2024 +@@ -435,7 +435,7 @@ static struct dm_cache_policy *wb_create(dm_cblock_t cache_size,
2025 + static struct dm_cache_policy_type wb_policy_type = {
2026 + .name = "cleaner",
2027 + .version = {1, 0, 0},
2028 +- .hint_size = 0,
2029 ++ .hint_size = 4,
2030 + .owner = THIS_MODULE,
2031 + .create = wb_create
2032 + };
2033 +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
2034 +index 5503e43e5f28..049282e6482f 100644
2035 +--- a/drivers/md/dm-crypt.c
2036 ++++ b/drivers/md/dm-crypt.c
2037 +@@ -955,7 +955,8 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone);
2038 +
2039 + /*
2040 + * Generate a new unfragmented bio with the given size
2041 +- * This should never violate the device limitations
2042 ++ * This should never violate the device limitations (but only because
2043 ++ * max_segment_size is being constrained to PAGE_SIZE).
2044 + *
2045 + * This function may be called concurrently. If we allocate from the mempool
2046 + * concurrently, there is a possibility of deadlock. For example, if we have
2047 +@@ -2040,9 +2041,20 @@ static int crypt_iterate_devices(struct dm_target *ti,
2048 + return fn(ti, cc->dev, cc->start, ti->len, data);
2049 + }
2050 +
2051 ++static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
2052 ++{
2053 ++ /*
2054 ++ * Unfortunate constraint that is required to avoid the potential
2055 ++ * for exceeding underlying device's max_segments limits -- due to
2056 ++ * crypt_alloc_buffer() possibly allocating pages for the encryption
2057 ++ * bio that are not as physically contiguous as the original bio.
2058 ++ */
2059 ++ limits->max_segment_size = PAGE_SIZE;
2060 ++}
2061 ++
2062 + static struct target_type crypt_target = {
2063 + .name = "crypt",
2064 +- .version = {1, 14, 0},
2065 ++ .version = {1, 14, 1},
2066 + .module = THIS_MODULE,
2067 + .ctr = crypt_ctr,
2068 + .dtr = crypt_dtr,
2069 +@@ -2054,6 +2066,7 @@ static struct target_type crypt_target = {
2070 + .message = crypt_message,
2071 + .merge = crypt_merge,
2072 + .iterate_devices = crypt_iterate_devices,
2073 ++ .io_hints = crypt_io_hints,
2074 + };
2075 +
2076 + static int __init dm_crypt_init(void)
2077 +diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
2078 +index 88e4c7f24986..2c1f2e13719e 100644
2079 +--- a/drivers/md/dm-raid.c
2080 ++++ b/drivers/md/dm-raid.c
2081 +@@ -327,8 +327,7 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
2082 + */
2083 + if (min_region_size > (1 << 13)) {
2084 + /* If not a power of 2, make it the next power of 2 */
2085 +- if (min_region_size & (min_region_size - 1))
2086 +- region_size = 1 << fls(region_size);
2087 ++ region_size = roundup_pow_of_two(min_region_size);
2088 + DMINFO("Choosing default region size of %lu sectors",
2089 + region_size);
2090 + } else {
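
The replacement above also repairs a variable mix-up: the old expression tested min_region_size but shifted region_size, which in the surrounding function appears to still be zero on this path. A small arithmetic check, with userspace stand-ins for the kernel's 1-based fls() and roundup_pow_of_two():

#include <stdio.h>

static int fls(unsigned long x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

static unsigned long roundup_pow_of_two(unsigned long x)
{
	return 1UL << fls(x - 1);
}

int main(void)
{
	unsigned long min_region_size = 9216;	/* not a power of two */
	unsigned long region_size = 0;		/* not yet chosen */

	printf("old: %lu\n", 1UL << fls(region_size));	   /* 1: nonsense */
	printf("new: %lu\n", roundup_pow_of_two(min_region_size)); /* 16384 */
	return 0;
}
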
2091 +diff --git a/drivers/md/dm.c b/drivers/md/dm.c
2092 +index 697f34fba06b..8b72ceee0f61 100644
2093 +--- a/drivers/md/dm.c
2094 ++++ b/drivers/md/dm.c
2095 +@@ -2925,8 +2925,6 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
2096 +
2097 + might_sleep();
2098 +
2099 +- map = dm_get_live_table(md, &srcu_idx);
2100 +-
2101 + spin_lock(&_minor_lock);
2102 + idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md))));
2103 + set_bit(DMF_FREEING, &md->flags);
2104 +@@ -2940,14 +2938,14 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
2105 + * do not race with internal suspend.
2106 + */
2107 + mutex_lock(&md->suspend_lock);
2108 ++ map = dm_get_live_table(md, &srcu_idx);
2109 + if (!dm_suspended_md(md)) {
2110 + dm_table_presuspend_targets(map);
2111 + dm_table_postsuspend_targets(map);
2112 + }
2113 +- mutex_unlock(&md->suspend_lock);
2114 +-
2115 + /* dm_put_live_table must be before msleep, otherwise deadlock is possible */
2116 + dm_put_live_table(md, srcu_idx);
2117 ++ mutex_unlock(&md->suspend_lock);
2118 +
2119 + /*
2120 + * Rare, but there may be I/O requests still going to complete,
2121 +diff --git a/drivers/md/persistent-data/dm-btree-internal.h b/drivers/md/persistent-data/dm-btree-internal.h
2122 +index bf2b80d5c470..8731b6ea026b 100644
2123 +--- a/drivers/md/persistent-data/dm-btree-internal.h
2124 ++++ b/drivers/md/persistent-data/dm-btree-internal.h
2125 +@@ -138,4 +138,10 @@ int lower_bound(struct btree_node *n, uint64_t key);
2126 +
2127 + extern struct dm_block_validator btree_node_validator;
2128 +
2129 ++/*
2130 ++ * Value type for upper levels of multi-level btrees.
2131 ++ */
2132 ++extern void init_le64_type(struct dm_transaction_manager *tm,
2133 ++ struct dm_btree_value_type *vt);
2134 ++
2135 + #endif /* DM_BTREE_INTERNAL_H */
2136 +diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
2137 +index a03178e91a79..7c0d75547ccf 100644
2138 +--- a/drivers/md/persistent-data/dm-btree-remove.c
2139 ++++ b/drivers/md/persistent-data/dm-btree-remove.c
2140 +@@ -544,14 +544,6 @@ static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
2141 + return r;
2142 + }
2143 +
2144 +-static struct dm_btree_value_type le64_type = {
2145 +- .context = NULL,
2146 +- .size = sizeof(__le64),
2147 +- .inc = NULL,
2148 +- .dec = NULL,
2149 +- .equal = NULL
2150 +-};
2151 +-
2152 + int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
2153 + uint64_t *keys, dm_block_t *new_root)
2154 + {
2155 +@@ -559,12 +551,14 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
2156 + int index = 0, r = 0;
2157 + struct shadow_spine spine;
2158 + struct btree_node *n;
2159 ++ struct dm_btree_value_type le64_vt;
2160 +
2161 ++ init_le64_type(info->tm, &le64_vt);
2162 + init_shadow_spine(&spine, info);
2163 + for (level = 0; level < info->levels; level++) {
2164 + r = remove_raw(&spine, info,
2165 + (level == last_level ?
2166 +- &info->value_type : &le64_type),
2167 ++ &info->value_type : &le64_vt),
2168 + root, keys[level], (unsigned *)&index);
2169 + if (r < 0)
2170 + break;
2171 +diff --git a/drivers/md/persistent-data/dm-btree-spine.c b/drivers/md/persistent-data/dm-btree-spine.c
2172 +index 1b5e13ec7f96..0dee514ba4c5 100644
2173 +--- a/drivers/md/persistent-data/dm-btree-spine.c
2174 ++++ b/drivers/md/persistent-data/dm-btree-spine.c
2175 +@@ -249,3 +249,40 @@ int shadow_root(struct shadow_spine *s)
2176 + {
2177 + return s->root;
2178 + }
2179 ++
2180 ++static void le64_inc(void *context, const void *value_le)
2181 ++{
2182 ++ struct dm_transaction_manager *tm = context;
2183 ++ __le64 v_le;
2184 ++
2185 ++ memcpy(&v_le, value_le, sizeof(v_le));
2186 ++ dm_tm_inc(tm, le64_to_cpu(v_le));
2187 ++}
2188 ++
2189 ++static void le64_dec(void *context, const void *value_le)
2190 ++{
2191 ++ struct dm_transaction_manager *tm = context;
2192 ++ __le64 v_le;
2193 ++
2194 ++ memcpy(&v_le, value_le, sizeof(v_le));
2195 ++ dm_tm_dec(tm, le64_to_cpu(v_le));
2196 ++}
2197 ++
2198 ++static int le64_equal(void *context, const void *value1_le, const void *value2_le)
2199 ++{
2200 ++ __le64 v1_le, v2_le;
2201 ++
2202 ++ memcpy(&v1_le, value1_le, sizeof(v1_le));
2203 ++ memcpy(&v2_le, value2_le, sizeof(v2_le));
2204 ++ return v1_le == v2_le;
2205 ++}
2206 ++
2207 ++void init_le64_type(struct dm_transaction_manager *tm,
2208 ++ struct dm_btree_value_type *vt)
2209 ++{
2210 ++ vt->context = tm;
2211 ++ vt->size = sizeof(__le64);
2212 ++ vt->inc = le64_inc;
2213 ++ vt->dec = le64_dec;
2214 ++ vt->equal = le64_equal;
2215 ++}
2216 +diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
2217 +index fdd3793e22f9..c7726cebc495 100644
2218 +--- a/drivers/md/persistent-data/dm-btree.c
2219 ++++ b/drivers/md/persistent-data/dm-btree.c
2220 +@@ -667,12 +667,7 @@ static int insert(struct dm_btree_info *info, dm_block_t root,
2221 + struct btree_node *n;
2222 + struct dm_btree_value_type le64_type;
2223 +
2224 +- le64_type.context = NULL;
2225 +- le64_type.size = sizeof(__le64);
2226 +- le64_type.inc = NULL;
2227 +- le64_type.dec = NULL;
2228 +- le64_type.equal = NULL;
2229 +-
2230 ++ init_le64_type(info->tm, &le64_type);
2231 + init_shadow_spine(&spine, info);
2232 +
2233 + for (level = 0; level < (info->levels - 1); level++) {
2234 +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
2235 +index efb654eb5399..0875e5e7e09a 100644
2236 +--- a/drivers/md/raid0.c
2237 ++++ b/drivers/md/raid0.c
2238 +@@ -83,7 +83,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
2239 + char b[BDEVNAME_SIZE];
2240 + char b2[BDEVNAME_SIZE];
2241 + struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL);
2242 +- bool discard_supported = false;
2243 ++ unsigned short blksize = 512;
2244 +
2245 + if (!conf)
2246 + return -ENOMEM;
2247 +@@ -98,6 +98,9 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
2248 + sector_div(sectors, mddev->chunk_sectors);
2249 + rdev1->sectors = sectors * mddev->chunk_sectors;
2250 +
2251 ++ blksize = max(blksize, queue_logical_block_size(
2252 ++ rdev1->bdev->bd_disk->queue));
2253 ++
2254 + rdev_for_each(rdev2, mddev) {
2255 + pr_debug("md/raid0:%s: comparing %s(%llu)"
2256 + " with %s(%llu)\n",
2257 +@@ -134,6 +137,18 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
2258 + }
2259 + pr_debug("md/raid0:%s: FINAL %d zones\n",
2260 + mdname(mddev), conf->nr_strip_zones);
2261 ++ /*
2262 ++ * now since we have the hard sector sizes, we can make sure
2263 ++ * chunk size is a multiple of that sector size
2264 ++ */
2265 ++ if ((mddev->chunk_sectors << 9) % blksize) {
2266 ++ printk(KERN_ERR "md/raid0:%s: chunk_size of %d not multiple of block size %d\n",
2267 ++ mdname(mddev),
2268 ++ mddev->chunk_sectors << 9, blksize);
2269 ++ err = -EINVAL;
2270 ++ goto abort;
2271 ++ }
2272 ++
2273 + err = -ENOMEM;
2274 + conf->strip_zone = kzalloc(sizeof(struct strip_zone)*
2275 + conf->nr_strip_zones, GFP_KERNEL);
2276 +@@ -188,19 +203,12 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
2277 + }
2278 + dev[j] = rdev1;
2279 +
2280 +- if (mddev->queue)
2281 +- disk_stack_limits(mddev->gendisk, rdev1->bdev,
2282 +- rdev1->data_offset << 9);
2283 +-
2284 + if (rdev1->bdev->bd_disk->queue->merge_bvec_fn)
2285 + conf->has_merge_bvec = 1;
2286 +
2287 + if (!smallest || (rdev1->sectors < smallest->sectors))
2288 + smallest = rdev1;
2289 + cnt++;
2290 +-
2291 +- if (blk_queue_discard(bdev_get_queue(rdev1->bdev)))
2292 +- discard_supported = true;
2293 + }
2294 + if (cnt != mddev->raid_disks) {
2295 + printk(KERN_ERR "md/raid0:%s: too few disks (%d of %d) - "
2296 +@@ -261,28 +269,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
2297 + (unsigned long long)smallest->sectors);
2298 + }
2299 +
2300 +- /*
2301 +- * now since we have the hard sector sizes, we can make sure
2302 +- * chunk size is a multiple of that sector size
2303 +- */
2304 +- if ((mddev->chunk_sectors << 9) % queue_logical_block_size(mddev->queue)) {
2305 +- printk(KERN_ERR "md/raid0:%s: chunk_size of %d not valid\n",
2306 +- mdname(mddev),
2307 +- mddev->chunk_sectors << 9);
2308 +- goto abort;
2309 +- }
2310 +-
2311 +- if (mddev->queue) {
2312 +- blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
2313 +- blk_queue_io_opt(mddev->queue,
2314 +- (mddev->chunk_sectors << 9) * mddev->raid_disks);
2315 +-
2316 +- if (!discard_supported)
2317 +- queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
2318 +- else
2319 +- queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
2320 +- }
2321 +-
2322 + pr_debug("md/raid0:%s: done.\n", mdname(mddev));
2323 + *private_conf = conf;
2324 +
2325 +@@ -433,12 +419,6 @@ static int raid0_run(struct mddev *mddev)
2326 + if (md_check_no_bitmap(mddev))
2327 + return -EINVAL;
2328 +
2329 +- if (mddev->queue) {
2330 +- blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
2331 +- blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors);
2332 +- blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors);
2333 +- }
2334 +-
2335 + /* if private is not null, we are here after takeover */
2336 + if (mddev->private == NULL) {
2337 + ret = create_strip_zones(mddev, &conf);
2338 +@@ -447,6 +427,29 @@ static int raid0_run(struct mddev *mddev)
2339 + mddev->private = conf;
2340 + }
2341 + conf = mddev->private;
2342 ++ if (mddev->queue) {
2343 ++ struct md_rdev *rdev;
2344 ++ bool discard_supported = false;
2345 ++
2346 ++ blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
2347 ++ blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors);
2348 ++ blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors);
2349 ++
2350 ++ blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
2351 ++ blk_queue_io_opt(mddev->queue,
2352 ++ (mddev->chunk_sectors << 9) * mddev->raid_disks);
2353 ++
2354 ++ rdev_for_each(rdev, mddev) {
2355 ++ disk_stack_limits(mddev->gendisk, rdev->bdev,
2356 ++ rdev->data_offset << 9);
2357 ++ if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
2358 ++ discard_supported = true;
2359 ++ }
2360 ++ if (!discard_supported)
2361 ++ queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
2362 ++ else
2363 ++ queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
2364 ++ }
2365 +
2366 + /* calculate array device size */
2367 + md_set_array_sectors(mddev, raid0_size(mddev, 0, 0));
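
The relocated validation above is plain arithmetic: chunk_sectors counts 512-byte sectors, so shifting left by 9 yields bytes, which must divide evenly by the largest member device's logical block size. The same check in isolation, with invented values:

#include <stdio.h>

int main(void)
{
	unsigned int chunk_sectors = 4;		/* 2048-byte chunks */
	unsigned int blksize = 4096;		/* a 4Kn member disk */

	if ((chunk_sectors << 9) % blksize)
		printf("chunk_size of %u not multiple of block size %u\n",
		       chunk_sectors << 9, blksize);
	return 0;
}
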
2368 +diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
2369 +index 8be0df758e68..a0b1b460377d 100644
2370 +--- a/drivers/mmc/core/host.c
2371 ++++ b/drivers/mmc/core/host.c
2372 +@@ -373,7 +373,7 @@ int mmc_of_parse(struct mmc_host *host)
2373 + 0, &cd_gpio_invert);
2374 + if (!ret)
2375 + dev_info(host->parent, "Got CD GPIO\n");
2376 +- else if (ret != -ENOENT)
2377 ++ else if (ret != -ENOENT && ret != -ENOSYS)
2378 + return ret;
2379 +
2380 + /*
2381 +@@ -397,7 +397,7 @@ int mmc_of_parse(struct mmc_host *host)
2382 + ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &ro_gpio_invert);
2383 + if (!ret)
2384 + dev_info(host->parent, "Got WP GPIO\n");
2385 +- else if (ret != -ENOENT)
2386 ++ else if (ret != -ENOENT && ret != -ENOSYS)
2387 + return ret;
2388 +
2389 + /* See the comment on CD inversion above */
2390 +diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
2391 +index 5f5adafb253a..b354c8bffb9e 100644
2392 +--- a/drivers/mmc/host/dw_mmc.c
2393 ++++ b/drivers/mmc/host/dw_mmc.c
2394 +@@ -99,6 +99,9 @@ struct idmac_desc {
2395 +
2396 + __le32 des3; /* buffer 2 physical address */
2397 + };
2398 ++
2399 ++/* Each descriptor can transfer up to 4KB of data in chained mode */
2400 ++#define DW_MCI_DESC_DATA_LENGTH 0x1000
2401 + #endif /* CONFIG_MMC_DW_IDMAC */
2402 +
2403 + static bool dw_mci_reset(struct dw_mci *host);
2404 +@@ -462,66 +465,96 @@ static void dw_mci_idmac_complete_dma(struct dw_mci *host)
2405 + static void dw_mci_translate_sglist(struct dw_mci *host, struct mmc_data *data,
2406 + unsigned int sg_len)
2407 + {
2408 ++ unsigned int desc_len;
2409 + int i;
2410 + if (host->dma_64bit_address == 1) {
2411 +- struct idmac_desc_64addr *desc = host->sg_cpu;
2412 ++ struct idmac_desc_64addr *desc_first, *desc_last, *desc;
2413 ++
2414 ++ desc_first = desc_last = desc = host->sg_cpu;
2415 +
2416 +- for (i = 0; i < sg_len; i++, desc++) {
2417 ++ for (i = 0; i < sg_len; i++) {
2418 + unsigned int length = sg_dma_len(&data->sg[i]);
2419 + u64 mem_addr = sg_dma_address(&data->sg[i]);
2420 +
2421 +- /*
2422 +- * Set the OWN bit and disable interrupts for this
2423 +- * descriptor
2424 +- */
2425 +- desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC |
2426 +- IDMAC_DES0_CH;
2427 +- /* Buffer length */
2428 +- IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, length);
2429 +-
2430 +- /* Physical address to DMA to/from */
2431 +- desc->des4 = mem_addr & 0xffffffff;
2432 +- desc->des5 = mem_addr >> 32;
2433 ++ for ( ; length ; desc++) {
2434 ++ desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ?
2435 ++ length : DW_MCI_DESC_DATA_LENGTH;
2436 ++
2437 ++ length -= desc_len;
2438 ++
2439 ++ /*
2440 ++ * Set the OWN bit and disable interrupts
2441 ++ * for this descriptor
2442 ++ */
2443 ++ desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC |
2444 ++ IDMAC_DES0_CH;
2445 ++
2446 ++ /* Buffer length */
2447 ++ IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, desc_len);
2448 ++
2449 ++ /* Physical address to DMA to/from */
2450 ++ desc->des4 = mem_addr & 0xffffffff;
2451 ++ desc->des5 = mem_addr >> 32;
2452 ++
2453 ++ /* Update physical address for the next desc */
2454 ++ mem_addr += desc_len;
2455 ++
2456 ++ /* Save pointer to the last descriptor */
2457 ++ desc_last = desc;
2458 ++ }
2459 + }
2460 +
2461 + /* Set first descriptor */
2462 +- desc = host->sg_cpu;
2463 +- desc->des0 |= IDMAC_DES0_FD;
2464 ++ desc_first->des0 |= IDMAC_DES0_FD;
2465 +
2466 + /* Set last descriptor */
2467 +- desc = host->sg_cpu + (i - 1) *
2468 +- sizeof(struct idmac_desc_64addr);
2469 +- desc->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC);
2470 +- desc->des0 |= IDMAC_DES0_LD;
2471 ++ desc_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC);
2472 ++ desc_last->des0 |= IDMAC_DES0_LD;
2473 +
2474 + } else {
2475 +- struct idmac_desc *desc = host->sg_cpu;
2476 ++ struct idmac_desc *desc_first, *desc_last, *desc;
2477 ++
2478 ++ desc_first = desc_last = desc = host->sg_cpu;
2479 +
2480 +- for (i = 0; i < sg_len; i++, desc++) {
2481 ++ for (i = 0; i < sg_len; i++) {
2482 + unsigned int length = sg_dma_len(&data->sg[i]);
2483 + u32 mem_addr = sg_dma_address(&data->sg[i]);
2484 +
2485 +- /*
2486 +- * Set the OWN bit and disable interrupts for this
2487 +- * descriptor
2488 +- */
2489 +- desc->des0 = cpu_to_le32(IDMAC_DES0_OWN |
2490 +- IDMAC_DES0_DIC | IDMAC_DES0_CH);
2491 +- /* Buffer length */
2492 +- IDMAC_SET_BUFFER1_SIZE(desc, length);
2493 ++ for ( ; length ; desc++) {
2494 ++ desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ?
2495 ++ length : DW_MCI_DESC_DATA_LENGTH;
2496 ++
2497 ++ length -= desc_len;
2498 ++
2499 ++ /*
2500 ++ * Set the OWN bit and disable interrupts
2501 ++ * for this descriptor
2502 ++ */
2503 ++ desc->des0 = cpu_to_le32(IDMAC_DES0_OWN |
2504 ++ IDMAC_DES0_DIC |
2505 ++ IDMAC_DES0_CH);
2506 ++
2507 ++ /* Buffer length */
2508 ++ IDMAC_SET_BUFFER1_SIZE(desc, desc_len);
2509 +
2510 +- /* Physical address to DMA to/from */
2511 +- desc->des2 = cpu_to_le32(mem_addr);
2512 ++ /* Physical address to DMA to/from */
2513 ++ desc->des2 = cpu_to_le32(mem_addr);
2514 ++
2515 ++ /* Update physical address for the next desc */
2516 ++ mem_addr += desc_len;
2517 ++
2518 ++ /* Save pointer to the last descriptor */
2519 ++ desc_last = desc;
2520 ++ }
2521 + }
2522 +
2523 + /* Set first descriptor */
2524 +- desc = host->sg_cpu;
2525 +- desc->des0 |= cpu_to_le32(IDMAC_DES0_FD);
2526 ++ desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD);
2527 +
2528 + /* Set last descriptor */
2529 +- desc = host->sg_cpu + (i - 1) * sizeof(struct idmac_desc);
2530 +- desc->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | IDMAC_DES0_DIC));
2531 +- desc->des0 |= cpu_to_le32(IDMAC_DES0_LD);
2532 ++ desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH |
2533 ++ IDMAC_DES0_DIC));
2534 ++ desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD);
2535 + }
2536 +
2537 + wmb();
2538 +@@ -2406,7 +2439,7 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
2539 + #ifdef CONFIG_MMC_DW_IDMAC
2540 + mmc->max_segs = host->ring_size;
2541 + mmc->max_blk_size = 65536;
2542 +- mmc->max_seg_size = 0x1000;
2543 ++ mmc->max_seg_size = DW_MCI_DESC_DATA_LENGTH;
2544 + mmc->max_req_size = mmc->max_seg_size * host->ring_size;
2545 + mmc->max_blk_count = mmc->max_req_size / 512;
2546 + #else
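
The splitting loops above bound each IDMAC descriptor at DW_MCI_DESC_DATA_LENGTH and walk mem_addr forward as they go. The inner loop in isolation, with an invented 9 KiB segment, printing the three descriptors (4 KiB + 4 KiB + 1 KiB) it would produce:

#include <stdio.h>

#define DW_MCI_DESC_DATA_LENGTH 0x1000

int main(void)
{
	unsigned int length = 0x2400;		/* one 9 KiB sg entry */
	unsigned long long mem_addr = 0x80000000ULL;

	while (length) {
		unsigned int desc_len =
			length <= DW_MCI_DESC_DATA_LENGTH ?
			length : DW_MCI_DESC_DATA_LENGTH;

		printf("desc: addr %#llx len %#x\n", mem_addr, desc_len);
		length -= desc_len;
		mem_addr += desc_len;	/* next descriptor continues here */
	}
	return 0;
}
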
2547 +diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
2548 +index 82f512d87cb8..461698b038f7 100644
2549 +--- a/drivers/mmc/host/sdhci-esdhc-imx.c
2550 ++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
2551 +@@ -868,6 +868,7 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
2552 + struct esdhc_platform_data *boarddata)
2553 + {
2554 + struct device_node *np = pdev->dev.of_node;
2555 ++ int ret;
2556 +
2557 + if (!np)
2558 + return -ENODEV;
2559 +@@ -903,6 +904,14 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
2560 +
2561 + mmc_of_parse_voltage(np, &host->ocr_mask);
2562 +
2563 ++ /* call to generic mmc_of_parse to support additional capabilities */
2564 ++ ret = mmc_of_parse(host->mmc);
2565 ++ if (ret)
2566 ++ return ret;
2567 ++
2568 ++ if (!IS_ERR_VALUE(mmc_gpio_get_cd(host->mmc)))
2569 ++ host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
2570 ++
2571 + return 0;
2572 + }
2573 + #else
2574 +@@ -924,6 +933,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
2575 + struct esdhc_platform_data *boarddata;
2576 + int err;
2577 + struct pltfm_imx_data *imx_data;
2578 ++ bool dt = true;
2579 +
2580 + host = sdhci_pltfm_init(pdev, &sdhci_esdhc_imx_pdata, 0);
2581 + if (IS_ERR(host))
2582 +@@ -1011,11 +1021,44 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
2583 + }
2584 + imx_data->boarddata = *((struct esdhc_platform_data *)
2585 + host->mmc->parent->platform_data);
2586 ++ dt = false;
2587 ++ }
2588 ++ /* write_protect */
2589 ++ if (boarddata->wp_type == ESDHC_WP_GPIO && !dt) {
2590 ++ err = mmc_gpio_request_ro(host->mmc, boarddata->wp_gpio);
2591 ++ if (err) {
2592 ++ dev_err(mmc_dev(host->mmc),
2593 ++ "failed to request write-protect gpio!\n");
2594 ++ goto disable_clk;
2595 ++ }
2596 ++ host->mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH;
2597 + }
2598 +
2599 + /* card_detect */
2600 +- if (boarddata->cd_type == ESDHC_CD_CONTROLLER)
2601 ++ switch (boarddata->cd_type) {
2602 ++ case ESDHC_CD_GPIO:
2603 ++ if (dt)
2604 ++ break;
2605 ++ err = mmc_gpio_request_cd(host->mmc, boarddata->cd_gpio, 0);
2606 ++ if (err) {
2607 ++ dev_err(mmc_dev(host->mmc),
2608 ++ "failed to request card-detect gpio!\n");
2609 ++ goto disable_clk;
2610 ++ }
2611 ++ /* fall through */
2612 ++
2613 ++ case ESDHC_CD_CONTROLLER:
2614 ++ /* we have a working card_detect back */
2615 + host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
2616 ++ break;
2617 ++
2618 ++ case ESDHC_CD_PERMANENT:
2619 ++ host->mmc->caps |= MMC_CAP_NONREMOVABLE;
2620 ++ break;
2621 ++
2622 ++ case ESDHC_CD_NONE:
2623 ++ break;
2624 ++ }
2625 +
2626 + switch (boarddata->max_bus_width) {
2627 + case 8:
2628 +@@ -1048,11 +1091,6 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
2629 + host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
2630 + }
2631 +
2632 +- /* call to generic mmc_of_parse to support additional capabilities */
2633 +- err = mmc_of_parse(host->mmc);
2634 +- if (err)
2635 +- goto disable_clk;
2636 +-
2637 + err = sdhci_add_host(host);
2638 + if (err)
2639 + goto disable_clk;
2640 +diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
2641 +index fd41b91436ec..cbaf3df3ebd9 100644
2642 +--- a/drivers/mmc/host/sdhci.c
2643 ++++ b/drivers/mmc/host/sdhci.c
2644 +@@ -55,8 +55,7 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode);
2645 + static void sdhci_tuning_timer(unsigned long data);
2646 + static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable);
2647 + static int sdhci_pre_dma_transfer(struct sdhci_host *host,
2648 +- struct mmc_data *data,
2649 +- struct sdhci_host_next *next);
2650 ++ struct mmc_data *data);
2651 + static int sdhci_do_get_cd(struct sdhci_host *host);
2652 +
2653 + #ifdef CONFIG_PM
2654 +@@ -510,7 +509,7 @@ static int sdhci_adma_table_pre(struct sdhci_host *host,
2655 + goto fail;
2656 + BUG_ON(host->align_addr & host->align_mask);
2657 +
2658 +- host->sg_count = sdhci_pre_dma_transfer(host, data, NULL);
2659 ++ host->sg_count = sdhci_pre_dma_transfer(host, data);
2660 + if (host->sg_count < 0)
2661 + goto unmap_align;
2662 +
2663 +@@ -649,9 +648,11 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
2664 + }
2665 + }
2666 +
2667 +- if (!data->host_cookie)
2668 ++ if (data->host_cookie == COOKIE_MAPPED) {
2669 + dma_unmap_sg(mmc_dev(host->mmc), data->sg,
2670 + data->sg_len, direction);
2671 ++ data->host_cookie = COOKIE_UNMAPPED;
2672 ++ }
2673 + }
2674 +
2675 + static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
2676 +@@ -847,7 +848,7 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
2677 + } else {
2678 + int sg_cnt;
2679 +
2680 +- sg_cnt = sdhci_pre_dma_transfer(host, data, NULL);
2681 ++ sg_cnt = sdhci_pre_dma_transfer(host, data);
2682 + if (sg_cnt <= 0) {
2683 + /*
2684 + * This only happens when someone fed
2685 +@@ -963,11 +964,13 @@ static void sdhci_finish_data(struct sdhci_host *host)
2686 + if (host->flags & SDHCI_USE_ADMA)
2687 + sdhci_adma_table_post(host, data);
2688 + else {
2689 +- if (!data->host_cookie)
2690 ++ if (data->host_cookie == COOKIE_MAPPED) {
2691 + dma_unmap_sg(mmc_dev(host->mmc),
2692 + data->sg, data->sg_len,
2693 + (data->flags & MMC_DATA_READ) ?
2694 + DMA_FROM_DEVICE : DMA_TO_DEVICE);
2695 ++ data->host_cookie = COOKIE_UNMAPPED;
2696 ++ }
2697 + }
2698 + }
2699 +
2700 +@@ -2131,49 +2134,36 @@ static void sdhci_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
2701 + struct mmc_data *data = mrq->data;
2702 +
2703 + if (host->flags & SDHCI_REQ_USE_DMA) {
2704 +- if (data->host_cookie)
2705 ++ if (data->host_cookie == COOKIE_GIVEN ||
2706 ++ data->host_cookie == COOKIE_MAPPED)
2707 + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
2708 + data->flags & MMC_DATA_WRITE ?
2709 + DMA_TO_DEVICE : DMA_FROM_DEVICE);
2710 +- mrq->data->host_cookie = 0;
2711 ++ data->host_cookie = COOKIE_UNMAPPED;
2712 + }
2713 + }
2714 +
2715 + static int sdhci_pre_dma_transfer(struct sdhci_host *host,
2716 +- struct mmc_data *data,
2717 +- struct sdhci_host_next *next)
2718 ++ struct mmc_data *data)
2719 + {
2720 + int sg_count;
2721 +
2722 +- if (!next && data->host_cookie &&
2723 +- data->host_cookie != host->next_data.cookie) {
2724 +- pr_debug(DRIVER_NAME "[%s] invalid cookie: %d, next-cookie %d\n",
2725 +- __func__, data->host_cookie, host->next_data.cookie);
2726 +- data->host_cookie = 0;
2727 ++ if (data->host_cookie == COOKIE_MAPPED) {
2728 ++ data->host_cookie = COOKIE_GIVEN;
2729 ++ return data->sg_count;
2730 + }
2731 +
2732 +- /* Check if next job is already prepared */
2733 +- if (next ||
2734 +- (!next && data->host_cookie != host->next_data.cookie)) {
2735 +- sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg,
2736 +- data->sg_len,
2737 +- data->flags & MMC_DATA_WRITE ?
2738 +- DMA_TO_DEVICE : DMA_FROM_DEVICE);
2739 +-
2740 +- } else {
2741 +- sg_count = host->next_data.sg_count;
2742 +- host->next_data.sg_count = 0;
2743 +- }
2744 ++ WARN_ON(data->host_cookie == COOKIE_GIVEN);
2745 +
2746 ++ sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
2747 ++ data->flags & MMC_DATA_WRITE ?
2748 ++ DMA_TO_DEVICE : DMA_FROM_DEVICE);
2749 +
2750 + if (sg_count == 0)
2751 +- return -EINVAL;
2752 ++ return -ENOSPC;
2753 +
2754 +- if (next) {
2755 +- next->sg_count = sg_count;
2756 +- data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie;
2757 +- } else
2758 +- host->sg_count = sg_count;
2759 ++ data->sg_count = sg_count;
2760 ++ data->host_cookie = COOKIE_MAPPED;
2761 +
2762 + return sg_count;
2763 + }
2764 +@@ -2183,16 +2173,10 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
2765 + {
2766 + struct sdhci_host *host = mmc_priv(mmc);
2767 +
2768 +- if (mrq->data->host_cookie) {
2769 +- mrq->data->host_cookie = 0;
2770 +- return;
2771 +- }
2772 ++ mrq->data->host_cookie = COOKIE_UNMAPPED;
2773 +
2774 + if (host->flags & SDHCI_REQ_USE_DMA)
2775 +- if (sdhci_pre_dma_transfer(host,
2776 +- mrq->data,
2777 +- &host->next_data) < 0)
2778 +- mrq->data->host_cookie = 0;
2779 ++ sdhci_pre_dma_transfer(host, mrq->data);
2780 + }
2781 +
2782 + static void sdhci_card_event(struct mmc_host *mmc)
2783 +@@ -3090,7 +3074,6 @@ int sdhci_add_host(struct sdhci_host *host)
2784 + host->max_clk = host->ops->get_max_clock(host);
2785 + }
2786 +
2787 +- host->next_data.cookie = 1;
2788 + /*
2789 + * In case of Host Controller v3.00, find out whether clock
2790 + * multiplier is supported.
2791 +diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
2792 +index e639b7f435e5..eea23f62356a 100644
2793 +--- a/drivers/mmc/host/sdhci.h
2794 ++++ b/drivers/mmc/host/sdhci.h
2795 +@@ -309,9 +309,10 @@ struct sdhci_adma2_64_desc {
2796 + */
2797 + #define SDHCI_MAX_SEGS 128
2798 +
2799 +-struct sdhci_host_next {
2800 +- unsigned int sg_count;
2801 +- s32 cookie;
2802 ++enum sdhci_cookie {
2803 ++ COOKIE_UNMAPPED,
2804 ++ COOKIE_MAPPED,
2805 ++ COOKIE_GIVEN,
2806 + };
2807 +
2808 + struct sdhci_host {
2809 +@@ -506,7 +507,6 @@ struct sdhci_host {
2810 + #define SDHCI_TUNING_MODE_1 0
2811 + struct timer_list tuning_timer; /* Timer for tuning */
2812 +
2813 +- struct sdhci_host_next next_data;
2814 + unsigned long private[0] ____cacheline_aligned;
2815 + };
2816 +
2817 +diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c
2818 +index a4615fcc3d00..94a357d93bab 100644
2819 +--- a/drivers/mtd/nand/pxa3xx_nand.c
2820 ++++ b/drivers/mtd/nand/pxa3xx_nand.c
2821 +@@ -1475,6 +1475,9 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
2822 + if (pdata->keep_config && !pxa3xx_nand_detect_config(info))
2823 + goto KEEP_CONFIG;
2824 +
2825 ++ /* Set a default chunk size */
2826 ++ info->chunk_size = 512;
2827 ++
2828 + ret = pxa3xx_nand_sensing(info);
2829 + if (ret) {
2830 + dev_info(&info->pdev->dev, "There is no chip on cs %d!\n",
2831 +diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
2832 +index 6f93b2990d25..499b8e433d3d 100644
2833 +--- a/drivers/mtd/nand/sunxi_nand.c
2834 ++++ b/drivers/mtd/nand/sunxi_nand.c
2835 +@@ -138,6 +138,10 @@
2836 + #define NFC_ECC_MODE GENMASK(15, 12)
2837 + #define NFC_RANDOM_SEED GENMASK(30, 16)
2838 +
2839 ++/* NFC_USER_DATA helper macros */
2840 ++#define NFC_BUF_TO_USER_DATA(buf) ((buf)[0] | ((buf)[1] << 8) | \
2841 ++ ((buf)[2] << 16) | ((buf)[3] << 24))
2842 ++
2843 + #define NFC_DEFAULT_TIMEOUT_MS 1000
2844 +
2845 + #define NFC_SRAM_SIZE 1024
2846 +@@ -632,15 +636,9 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
2847 + offset = layout->eccpos[i * ecc->bytes] - 4 + mtd->writesize;
2848 +
2849 + /* Fill OOB data in */
2850 +- if (oob_required) {
2851 +- tmp = 0xffffffff;
2852 +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, &tmp,
2853 +- 4);
2854 +- } else {
2855 +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE,
2856 +- chip->oob_poi + offset - mtd->writesize,
2857 +- 4);
2858 +- }
2859 ++ writel(NFC_BUF_TO_USER_DATA(chip->oob_poi +
2860 ++ layout->oobfree[i].offset),
2861 ++ nfc->regs + NFC_REG_USER_DATA_BASE);
2862 +
2863 + chip->cmdfunc(mtd, NAND_CMD_RNDIN, offset, -1);
2864 +
2865 +@@ -770,14 +768,8 @@ static int sunxi_nfc_hw_syndrome_ecc_write_page(struct mtd_info *mtd,
2866 + offset += ecc->size;
2867 +
2868 + /* Fill OOB data in */
2869 +- if (oob_required) {
2870 +- tmp = 0xffffffff;
2871 +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, &tmp,
2872 +- 4);
2873 +- } else {
2874 +- memcpy_toio(nfc->regs + NFC_REG_USER_DATA_BASE, oob,
2875 +- 4);
2876 +- }
2877 ++ writel(NFC_BUF_TO_USER_DATA(oob),
2878 ++ nfc->regs + NFC_REG_USER_DATA_BASE);
2879 +
2880 + tmp = NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ACCESS_DIR |
2881 + (1 << 30);
2882 +@@ -1312,6 +1304,7 @@ static void sunxi_nand_chips_cleanup(struct sunxi_nfc *nfc)
2883 + node);
2884 + nand_release(&chip->mtd);
2885 + sunxi_nand_ecc_cleanup(&chip->nand.ecc);
2886 ++ list_del(&chip->node);
2887 + }
2888 + }
2889 +
2890 +diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c
2891 +index 5bbd1f094f4e..1fc23e48fe8e 100644
2892 +--- a/drivers/mtd/ubi/io.c
2893 ++++ b/drivers/mtd/ubi/io.c
2894 +@@ -926,6 +926,11 @@ static int validate_vid_hdr(const struct ubi_device *ubi,
2895 + goto bad;
2896 + }
2897 +
2898 ++ if (data_size > ubi->leb_size) {
2899 ++ ubi_err(ubi, "bad data_size");
2900 ++ goto bad;
2901 ++ }
2902 ++
2903 + if (vol_type == UBI_VID_STATIC) {
2904 + /*
2905 + * Although from high-level point of view static volumes may
2906 +diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
2907 +index 68c9c5ea676f..bf2f916df4e2 100644
2908 +--- a/drivers/mtd/ubi/vtbl.c
2909 ++++ b/drivers/mtd/ubi/vtbl.c
2910 +@@ -646,6 +646,7 @@ static int init_volumes(struct ubi_device *ubi,
2911 + if (ubi->corr_peb_count)
2912 + ubi_err(ubi, "%d PEBs are corrupted and not used",
2913 + ubi->corr_peb_count);
2914 ++ return -ENOSPC;
2915 + }
2916 + ubi->rsvd_pebs += reserved_pebs;
2917 + ubi->avail_pebs -= reserved_pebs;
2918 +diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
2919 +index 16214d3d57a4..18fef94542f8 100644
2920 +--- a/drivers/mtd/ubi/wl.c
2921 ++++ b/drivers/mtd/ubi/wl.c
2922 +@@ -1601,6 +1601,7 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
2923 + if (ubi->corr_peb_count)
2924 + ubi_err(ubi, "%d PEBs are corrupted and not used",
2925 + ubi->corr_peb_count);
2926 ++ err = -ENOSPC;
2927 + goto out_free;
2928 + }
2929 + ubi->avail_pebs -= reserved_pebs;
2930 +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
2931 +index 4f6bf996851e..7dfbcde34509 100644
2932 +--- a/drivers/net/ethernet/intel/igb/igb_main.c
2933 ++++ b/drivers/net/ethernet/intel/igb/igb_main.c
2934 +@@ -2864,7 +2864,7 @@ static void igb_probe_vfs(struct igb_adapter *adapter)
2935 + return;
2936 +
2937 + pci_sriov_set_totalvfs(pdev, 7);
2938 +- igb_pci_enable_sriov(pdev, max_vfs);
2939 ++ igb_enable_sriov(pdev, max_vfs);
2940 +
2941 + #endif /* CONFIG_PCI_IOV */
2942 + }
2943 +diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
2944 +index 2fd9e180272b..c5dc6b57212e 100644
2945 +--- a/drivers/net/wireless/ath/ath10k/htc.c
2946 ++++ b/drivers/net/wireless/ath/ath10k/htc.c
2947 +@@ -163,8 +163,10 @@ int ath10k_htc_send(struct ath10k_htc *htc,
2948 + skb_cb->eid = eid;
2949 + skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
2950 + ret = dma_mapping_error(dev, skb_cb->paddr);
2951 +- if (ret)
2952 ++ if (ret) {
2953 ++ ret = -EIO;
2954 + goto err_credits;
2955 ++ }
2956 +
2957 + sg_item.transfer_id = ep->eid;
2958 + sg_item.transfer_context = skb;
2959 +diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
2960 +index cbd2bc9e6202..7f4854a52a7c 100644
2961 +--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
2962 ++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
2963 +@@ -371,8 +371,10 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
2964 + skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
2965 + DMA_TO_DEVICE);
2966 + res = dma_mapping_error(dev, skb_cb->paddr);
2967 +- if (res)
2968 ++ if (res) {
2969 ++ res = -EIO;
2970 + goto err_free_txdesc;
2971 ++ }
2972 +
2973 + skb_put(txdesc, len);
2974 + cmd = (struct htt_cmd *)txdesc->data;
2975 +@@ -463,8 +465,10 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
2976 + skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
2977 + DMA_TO_DEVICE);
2978 + res = dma_mapping_error(dev, skb_cb->paddr);
2979 +- if (res)
2980 ++ if (res) {
2981 ++ res = -EIO;
2982 + goto err_free_txbuf;
2983 ++ }
2984 +
2985 + if (likely(use_frags)) {
2986 + frags = skb_cb->htt.txbuf->frags;
2987 +diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
2988 +index 973485bd4121..5e021b0b3f9e 100644
2989 +--- a/drivers/net/wireless/ath/ath10k/mac.c
2990 ++++ b/drivers/net/wireless/ath/ath10k/mac.c
2991 +@@ -4464,6 +4464,21 @@ static int ath10k_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
2992 + return ret;
2993 + }
2994 +
2995 ++static int ath10k_mac_op_set_frag_threshold(struct ieee80211_hw *hw, u32 value)
2996 ++{
2997 ++ /* Even though there's a WMI enum for the fragmentation threshold, no known
2998 ++ * firmware actually implements it. Moreover, it is not possible to defer
2999 ++ * frame fragmentation to mac80211 because firmware clears the "more
3000 ++ * fragments" bit in frame control, making it impossible for remote
3001 ++ * devices to reassemble frames.
3002 ++ *
3003 ++ * Hence implement a dummy callback just to say fragmentation isn't
3004 ++ * supported. This effectively prevents mac80211 from doing frame
3005 ++ * fragmentation in software.
3006 ++ */
3007 ++ return -EOPNOTSUPP;
3008 ++}
3009 ++
3010 + static void ath10k_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
3011 + u32 queues, bool drop)
3012 + {
3013 +@@ -5108,6 +5123,7 @@ static const struct ieee80211_ops ath10k_ops = {
3014 + .remain_on_channel = ath10k_remain_on_channel,
3015 + .cancel_remain_on_channel = ath10k_cancel_remain_on_channel,
3016 + .set_rts_threshold = ath10k_set_rts_threshold,
3017 ++ .set_frag_threshold = ath10k_mac_op_set_frag_threshold,
3018 + .flush = ath10k_flush,
3019 + .tx_last_beacon = ath10k_tx_last_beacon,
3020 + .set_antenna = ath10k_set_antenna,
3021 +diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
3022 +index ead543282128..3c4c800ab505 100644
3023 +--- a/drivers/net/wireless/ath/ath10k/pci.c
3024 ++++ b/drivers/net/wireless/ath/ath10k/pci.c
3025 +@@ -1378,8 +1378,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
3026 +
3027 + req_paddr = dma_map_single(ar->dev, treq, req_len, DMA_TO_DEVICE);
3028 + ret = dma_mapping_error(ar->dev, req_paddr);
3029 +- if (ret)
3030 ++ if (ret) {
3031 ++ ret = -EIO;
3032 + goto err_dma;
3033 ++ }
3034 +
3035 + if (resp && resp_len) {
3036 + tresp = kzalloc(*resp_len, GFP_KERNEL);
3037 +@@ -1391,8 +1393,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
3038 + resp_paddr = dma_map_single(ar->dev, tresp, *resp_len,
3039 + DMA_FROM_DEVICE);
3040 + ret = dma_mapping_error(ar->dev, resp_paddr);
3041 +- if (ret)
3042 ++ if (ret) {
3043 ++ ret = -EIO;
3044 + goto err_req;
3045 ++ }
3046 +
3047 + xfer.wait_for_resp = true;
3048 + xfer.resp_len = 0;
3049 +diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
3050 +index c7ea77edce24..408ecd98e61b 100644
3051 +--- a/drivers/net/wireless/ath/ath10k/wmi.c
3052 ++++ b/drivers/net/wireless/ath/ath10k/wmi.c
3053 +@@ -2517,6 +2517,7 @@ void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
3054 + ath10k_warn(ar, "failed to map beacon: %d\n",
3055 + ret);
3056 + dev_kfree_skb_any(bcn);
3057 ++ ret = -EIO;
3058 + goto skip;
3059 + }
3060 +
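
Every ath10k hunk above applies one idiom: dma_mapping_error() only reports whether the mapping failed, so the caller substitutes a real errno (-EIO) before jumping to its unwind label instead of letting the helper's raw value escape as the function's return code. In sketch form, with illustrative names:

    skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
    ret = dma_mapping_error(dev, skb_cb->paddr);
    if (ret) {
            ret = -EIO;             /* normalize to a proper errno */
            goto err_credits;       /* unwind whatever was set up above */
    }
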
3061 +diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
3062 +index 1c6788aecc62..40d72312f3df 100644
3063 +--- a/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
3064 ++++ b/drivers/net/wireless/rsi/rsi_91x_sdio_ops.c
3065 +@@ -203,8 +203,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
3066 +
3067 + /* Copy firmware into DMA-accessible memory */
3068 + fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
3069 +- if (!fw)
3070 +- return -ENOMEM;
3071 ++ if (!fw) {
3072 ++ status = -ENOMEM;
3073 ++ goto out;
3074 ++ }
3075 + len = fw_entry->size;
3076 +
3077 + if (len % 4)
3078 +@@ -217,6 +219,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
3079 +
3080 + status = rsi_copy_to_card(common, fw, len, num_blocks);
3081 + kfree(fw);
3082 ++
3083 ++out:
3084 + release_firmware(fw_entry);
3085 + return status;
3086 + }
3087 +diff --git a/drivers/net/wireless/rsi/rsi_91x_usb_ops.c b/drivers/net/wireless/rsi/rsi_91x_usb_ops.c
3088 +index 30c2cf7fa93b..de4900862836 100644
3089 +--- a/drivers/net/wireless/rsi/rsi_91x_usb_ops.c
3090 ++++ b/drivers/net/wireless/rsi/rsi_91x_usb_ops.c
3091 +@@ -148,8 +148,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
3092 +
3093 + /* Copy firmware into DMA-accessible memory */
3094 + fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
3095 +- if (!fw)
3096 +- return -ENOMEM;
3097 ++ if (!fw) {
3098 ++ status = -ENOMEM;
3099 ++ goto out;
3100 ++ }
3101 + len = fw_entry->size;
3102 +
3103 + if (len % 4)
3104 +@@ -162,6 +164,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
3105 +
3106 + status = rsi_copy_to_card(common, fw, len, num_blocks);
3107 + kfree(fw);
3108 ++
3109 ++out:
3110 + release_firmware(fw_entry);
3111 + return status;
3112 + }
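
Both rsi_load_ta_instructions() hunks make the same correction: once request_firmware() has succeeded, every exit path must pass through release_firmware(), so the allocation failure now jumps to a common label instead of returning directly. A hedged, self-contained sketch of the pattern (load_fw() and the firmware name are illustrative):

    static int load_fw(struct device *dev)
    {
            const struct firmware *fw_entry;
            u8 *fw;
            int status;

            status = request_firmware(&fw_entry, "example.fw", dev);
            if (status < 0)
                    return status;  /* nothing acquired yet */

            fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
            if (!fw) {
                    status = -ENOMEM;
                    goto out;       /* was: return -ENOMEM, leaking fw_entry */
            }

            /* ... copy the image to the card ... */

            kfree(fw);
    out:
            release_firmware(fw_entry);
            return status;
    }
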
3113 +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
3114 +index e031c943286e..52f081f4dfd5 100644
3115 +--- a/drivers/net/xen-netfront.c
3116 ++++ b/drivers/net/xen-netfront.c
3117 +@@ -1353,7 +1353,8 @@ static void xennet_disconnect_backend(struct netfront_info *info)
3118 + queue->tx_evtchn = queue->rx_evtchn = 0;
3119 + queue->tx_irq = queue->rx_irq = 0;
3120 +
3121 +- napi_synchronize(&queue->napi);
3122 ++ if (netif_running(info->netdev))
3123 ++ napi_synchronize(&queue->napi);
3124 +
3125 + xennet_release_tx_bufs(queue);
3126 + xennet_release_rx_bufs(queue);
3127 +diff --git a/drivers/pci/access.c b/drivers/pci/access.c
3128 +index b965c12168b7..502a82ca1db0 100644
3129 +--- a/drivers/pci/access.c
3130 ++++ b/drivers/pci/access.c
3131 +@@ -442,7 +442,8 @@ static const struct pci_vpd_ops pci_vpd_pci22_ops = {
3132 + static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
3133 + void *arg)
3134 + {
3135 +- struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
3136 ++ struct pci_dev *tdev = pci_get_slot(dev->bus,
3137 ++ PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
3138 + ssize_t ret;
3139 +
3140 + if (!tdev)
3141 +@@ -456,7 +457,8 @@ static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
3142 + static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
3143 + const void *arg)
3144 + {
3145 +- struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
3146 ++ struct pci_dev *tdev = pci_get_slot(dev->bus,
3147 ++ PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
3148 + ssize_t ret;
3149 +
3150 + if (!tdev)
3151 +@@ -473,22 +475,6 @@ static const struct pci_vpd_ops pci_vpd_f0_ops = {
3152 + .release = pci_vpd_pci22_release,
3153 + };
3154 +
3155 +-static int pci_vpd_f0_dev_check(struct pci_dev *dev)
3156 +-{
3157 +- struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
3158 +- int ret = 0;
3159 +-
3160 +- if (!tdev)
3161 +- return -ENODEV;
3162 +- if (!tdev->vpd || !tdev->multifunction ||
3163 +- dev->class != tdev->class || dev->vendor != tdev->vendor ||
3164 +- dev->device != tdev->device)
3165 +- ret = -ENODEV;
3166 +-
3167 +- pci_dev_put(tdev);
3168 +- return ret;
3169 +-}
3170 +-
3171 + int pci_vpd_pci22_init(struct pci_dev *dev)
3172 + {
3173 + struct pci_vpd_pci22 *vpd;
3174 +@@ -497,12 +483,7 @@ int pci_vpd_pci22_init(struct pci_dev *dev)
3175 + cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
3176 + if (!cap)
3177 + return -ENODEV;
3178 +- if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
3179 +- int ret = pci_vpd_f0_dev_check(dev);
3180 +
3181 +- if (ret)
3182 +- return ret;
3183 +- }
3184 + vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC);
3185 + if (!vpd)
3186 + return -ENOMEM;
3187 +diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
3188 +index 6fbd3f2b5992..d3346d23963b 100644
3189 +--- a/drivers/pci/bus.c
3190 ++++ b/drivers/pci/bus.c
3191 +@@ -256,6 +256,8 @@ bool pci_bus_clip_resource(struct pci_dev *dev, int idx)
3192 +
3193 + res->start = start;
3194 + res->end = end;
3195 ++ res->flags &= ~IORESOURCE_UNSET;
3196 ++ orig_res.flags &= ~IORESOURCE_UNSET;
3197 + dev_printk(KERN_DEBUG, &dev->dev, "%pR clipped to %pR\n",
3198 + &orig_res, res);
3199 +
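
The flag clearing above primarily affects the debug message that follows: %pR renders a resource carrying IORESOURCE_UNSET as a size-only form rather than a start/end range, so the flag is dropped on both copies to make the "clipped to" output show comparable address ranges. A small illustration (addresses are made up):

    struct resource r = DEFINE_RES_MEM(0x80000000, 0x20000);

    r.flags |= IORESOURCE_UNSET;
    pr_info("%pR\n", &r);   /* "[mem size 0x20000]" */
    r.flags &= ~IORESOURCE_UNSET;
    pr_info("%pR\n", &r);   /* "[mem 0x80000000-0x8001ffff]" */
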
3200 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
3201 +index 804cd3b02c66..4a6933f02cd0 100644
3202 +--- a/drivers/pci/quirks.c
3203 ++++ b/drivers/pci/quirks.c
3204 +@@ -1915,11 +1915,27 @@ static void quirk_netmos(struct pci_dev *dev)
3205 + DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NETMOS, PCI_ANY_ID,
3206 + PCI_CLASS_COMMUNICATION_SERIAL, 8, quirk_netmos);
3207 +
3208 ++/*
3209 ++ * Quirk non-zero PCI functions to route VPD access through function 0 for
3210 ++ * devices that share VPD resources between functions. The functions are
3211 ++ * expected to be identical devices.
3212 ++ */
3213 + static void quirk_f0_vpd_link(struct pci_dev *dev)
3214 + {
3215 +- if (!dev->multifunction || !PCI_FUNC(dev->devfn))
3216 ++ struct pci_dev *f0;
3217 ++
3218 ++ if (!PCI_FUNC(dev->devfn))
3219 + return;
3220 +- dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
3221 ++
3222 ++ f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
3223 ++ if (!f0)
3224 ++ return;
3225 ++
3226 ++ if (f0->vpd && dev->class == f0->class &&
3227 ++ dev->vendor == f0->vendor && dev->device == f0->device)
3228 ++ dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
3229 ++
3230 ++ pci_dev_put(f0);
3231 + }
3232 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
3233 + PCI_CLASS_NETWORK_ETHERNET, 8, quirk_f0_vpd_link);
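
Both the access.c and quirks.c changes turn on the same devfn arithmetic. pci_get_slot() takes a devfn, not a slot number, so the old pci_get_slot(dev->bus, PCI_SLOT(dev->devfn)) actually looked up the wrong device; PCI_DEVFN(PCI_SLOT(devfn), 0) builds the devfn of function 0 in the same slot. A worked example with made-up numbers:

    unsigned int devfn = PCI_DEVFN(3, 2);   /* slot 3, function 2 = 0x1a */
    unsigned int wrong = PCI_SLOT(devfn);   /* 3, read as a devfn this is
                                               slot 0, function 3 */
    unsigned int f0 = PCI_DEVFN(PCI_SLOT(devfn), 0); /* 0x18: slot 3, fn 0 */
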
3234 +diff --git a/drivers/pcmcia/sa1100_generic.c b/drivers/pcmcia/sa1100_generic.c
3235 +index 803945259da8..42861cc70158 100644
3236 +--- a/drivers/pcmcia/sa1100_generic.c
3237 ++++ b/drivers/pcmcia/sa1100_generic.c
3238 +@@ -93,7 +93,6 @@ static int sa11x0_drv_pcmcia_remove(struct platform_device *dev)
3239 + for (i = 0; i < sinfo->nskt; i++)
3240 + soc_pcmcia_remove_one(&sinfo->skt[i]);
3241 +
3242 +- clk_put(sinfo->clk);
3243 + kfree(sinfo);
3244 + return 0;
3245 + }
3246 +diff --git a/drivers/pcmcia/sa11xx_base.c b/drivers/pcmcia/sa11xx_base.c
3247 +index cf6de2c2b329..553d70a67f80 100644
3248 +--- a/drivers/pcmcia/sa11xx_base.c
3249 ++++ b/drivers/pcmcia/sa11xx_base.c
3250 +@@ -222,7 +222,7 @@ int sa11xx_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops,
3251 + int i, ret = 0;
3252 + struct clk *clk;
3253 +
3254 +- clk = clk_get(dev, NULL);
3255 ++ clk = devm_clk_get(dev, NULL);
3256 + if (IS_ERR(clk))
3257 + return PTR_ERR(clk);
3258 +
3259 +@@ -251,7 +251,6 @@ int sa11xx_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops,
3260 + if (ret) {
3261 + while (--i >= 0)
3262 + soc_pcmcia_remove_one(&sinfo->skt[i]);
3263 +- clk_put(clk);
3264 + kfree(sinfo);
3265 + } else {
3266 + dev_set_drvdata(dev, sinfo);
3267 +diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
3268 +index 9956b9902bb4..93e54a0f471a 100644
3269 +--- a/drivers/platform/x86/toshiba_acpi.c
3270 ++++ b/drivers/platform/x86/toshiba_acpi.c
3271 +@@ -2525,11 +2525,9 @@ static int toshiba_acpi_setup_keyboard(struct toshiba_acpi_dev *dev)
3272 + if (error)
3273 + return error;
3274 +
3275 +- error = toshiba_hotkey_event_type_get(dev, &events_type);
3276 +- if (error) {
3277 +- pr_err("Unable to query Hotkey Event Type\n");
3278 +- return error;
3279 +- }
3280 ++ if (toshiba_hotkey_event_type_get(dev, &events_type))
3281 ++ pr_notice("Unable to query Hotkey Event Type\n");
3282 ++
3283 + dev->hotkey_event_type = events_type;
3284 +
3285 + dev->hotkey_dev = input_allocate_device();
3286 +diff --git a/drivers/power/avs/Kconfig b/drivers/power/avs/Kconfig
3287 +index 7f3d389bd601..a67eeace6a89 100644
3288 +--- a/drivers/power/avs/Kconfig
3289 ++++ b/drivers/power/avs/Kconfig
3290 +@@ -13,7 +13,7 @@ menuconfig POWER_AVS
3291 +
3292 + config ROCKCHIP_IODOMAIN
3293 + tristate "Rockchip IO domain support"
3294 +- depends on ARCH_ROCKCHIP && OF
3295 ++ depends on POWER_AVS && ARCH_ROCKCHIP && OF
3296 + help
3297 + Say y here to enable support for IO domains on Rockchip SoCs. It is
3298 + necessary for the io domain setting of the SoC to match the
3299 +diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
3300 +index add419d6ff34..a56a7b243e91 100644
3301 +--- a/drivers/scsi/3w-9xxx.c
3302 ++++ b/drivers/scsi/3w-9xxx.c
3303 +@@ -212,6 +212,17 @@ static const struct file_operations twa_fops = {
3304 + .llseek = noop_llseek,
3305 + };
3306 +
3307 ++/*
3308 ++ * The controllers use an inline buffer instead of a mapped SGL for small,
3309 ++ * single-entry buffers. Note that we treat a zero-length transfer like
3310 ++ * a mapped SGL.
3311 ++ */
3312 ++static bool twa_command_mapped(struct scsi_cmnd *cmd)
3313 ++{
3314 ++ return scsi_sg_count(cmd) != 1 ||
3315 ++ scsi_bufflen(cmd) >= TW_MIN_SGL_LENGTH;
3316 ++}
3317 ++
3318 + /* This function will complete an aen request from the isr */
3319 + static int twa_aen_complete(TW_Device_Extension *tw_dev, int request_id)
3320 + {
3321 +@@ -1339,7 +1350,8 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance)
3322 + }
3323 +
3324 + /* Now complete the io */
3325 +- scsi_dma_unmap(cmd);
3326 ++ if (twa_command_mapped(cmd))
3327 ++ scsi_dma_unmap(cmd);
3328 + cmd->scsi_done(cmd);
3329 + tw_dev->state[request_id] = TW_S_COMPLETED;
3330 + twa_free_request_id(tw_dev, request_id);
3331 +@@ -1582,7 +1594,8 @@ static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
3332 + struct scsi_cmnd *cmd = tw_dev->srb[i];
3333 +
3334 + cmd->result = (DID_RESET << 16);
3335 +- scsi_dma_unmap(cmd);
3336 ++ if (twa_command_mapped(cmd))
3337 ++ scsi_dma_unmap(cmd);
3338 + cmd->scsi_done(cmd);
3339 + }
3340 + }
3341 +@@ -1765,12 +1778,14 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
3342 + retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
3343 + switch (retval) {
3344 + case SCSI_MLQUEUE_HOST_BUSY:
3345 +- scsi_dma_unmap(SCpnt);
3346 ++ if (twa_command_mapped(SCpnt))
3347 ++ scsi_dma_unmap(SCpnt);
3348 + twa_free_request_id(tw_dev, request_id);
3349 + break;
3350 + case 1:
3351 + SCpnt->result = (DID_ERROR << 16);
3352 +- scsi_dma_unmap(SCpnt);
3353 ++ if (twa_command_mapped(SCpnt))
3354 ++ scsi_dma_unmap(SCpnt);
3355 + done(SCpnt);
3356 + tw_dev->state[request_id] = TW_S_COMPLETED;
3357 + twa_free_request_id(tw_dev, request_id);
3358 +@@ -1831,8 +1846,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
3359 + /* Map sglist from scsi layer to cmd packet */
3360 +
3361 + if (scsi_sg_count(srb)) {
3362 +- if ((scsi_sg_count(srb) == 1) &&
3363 +- (scsi_bufflen(srb) < TW_MIN_SGL_LENGTH)) {
3364 ++ if (!twa_command_mapped(srb)) {
3365 + if (srb->sc_data_direction == DMA_TO_DEVICE ||
3366 + srb->sc_data_direction == DMA_BIDIRECTIONAL)
3367 + scsi_sg_copy_to_buffer(srb,
3368 +@@ -1905,7 +1919,7 @@ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int re
3369 + {
3370 + struct scsi_cmnd *cmd = tw_dev->srb[request_id];
3371 +
3372 +- if (scsi_bufflen(cmd) < TW_MIN_SGL_LENGTH &&
3373 ++ if (!twa_command_mapped(cmd) &&
3374 + (cmd->sc_data_direction == DMA_FROM_DEVICE ||
3375 + cmd->sc_data_direction == DMA_BIDIRECTIONAL)) {
3376 + if (scsi_sg_count(cmd) == 1) {
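
The 3w-9xxx fix funnels one question, "was this command DMA-mapped at all?", through a single predicate so that submission and completion can never disagree: a command with one scatter-gather entry shorter than TW_MIN_SGL_LENGTH is copied through the controller's inline buffer and is never mapped, so the old unconditional scsi_dma_unmap() was unbalanced for it. The symmetric shape, in sketch form:

    /* completion mirrors submission: only undo what was actually done */
    if (twa_command_mapped(cmd))
            scsi_dma_unmap(cmd);
    cmd->scsi_done(cmd);
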
3377 +diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
3378 +index a9aa38903efe..cccab6188328 100644
3379 +--- a/drivers/scsi/ipr.c
3380 ++++ b/drivers/scsi/ipr.c
3381 +@@ -4554,7 +4554,7 @@ static ssize_t ipr_store_raw_mode(struct device *dev,
3382 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
3383 + res = (struct ipr_resource_entry *)sdev->hostdata;
3384 + if (res) {
3385 +- if (ioa_cfg->sis64 && ipr_is_af_dasd_device(res)) {
3386 ++ if (ipr_is_af_dasd_device(res)) {
3387 + res->raw_mode = simple_strtoul(buf, NULL, 10);
3388 + len = strlen(buf);
3389 + if (res->sdev)
3390 +diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
3391 +index ce6c770d74d5..c6b93d273799 100644
3392 +--- a/drivers/scsi/scsi_error.c
3393 ++++ b/drivers/scsi/scsi_error.c
3394 +@@ -2169,8 +2169,17 @@ int scsi_error_handler(void *data)
3395 + * We never actually get interrupted because kthread_run
3396 + * disables signal delivery for the created thread.
3397 + */
3398 +- while (!kthread_should_stop()) {
3399 ++ while (true) {
3400 ++ /*
3401 ++ * The sequence in kthread_stop() sets the stop flag first,
3402 ++ * then wakes the process. To avoid missed wakeups, the task
3403 ++ * should always be in a non-running state before the stop
3404 ++ * flag is checked.
3405 ++ */
3406 + set_current_state(TASK_INTERRUPTIBLE);
3407 ++ if (kthread_should_stop())
3408 ++ break;
3409 ++
3410 + if ((shost->host_failed == 0 && shost->host_eh_scheduled == 0) ||
3411 + shost->host_failed != atomic_read(&shost->host_busy)) {
3412 + SCSI_LOG_ERROR_RECOVERY(1,
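
The scsi_error fix is an instance of the canonical kthread loop: the task must mark itself TASK_INTERRUPTIBLE before testing kthread_should_stop(), because kthread_stop() sets the flag and only then wakes the thread; testing first leaves a window where the wakeup lands while the thread is still runnable and it then sleeps forever. A hedged, self-contained sketch (have_work() is an illustrative placeholder):

    #include <linux/kthread.h>
    #include <linux/sched.h>

    static bool have_work(void);    /* illustrative placeholder */

    static int worker(void *data)
    {
            while (true) {
                    set_current_state(TASK_INTERRUPTIBLE);
                    if (kthread_should_stop())
                            break;
                    if (!have_work())
                            schedule();     /* sleep; wakeup can't be lost */
                    __set_current_state(TASK_RUNNING);
                    /* ... handle work ... */
            }
            __set_current_state(TASK_RUNNING);
            return 0;
    }
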
3413 +diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
3414 +index e3223ac75a7c..f089082c00e1 100644
3415 +--- a/drivers/spi/spi-pxa2xx.c
3416 ++++ b/drivers/spi/spi-pxa2xx.c
3417 +@@ -624,6 +624,10 @@ static irqreturn_t ssp_int(int irq, void *dev_id)
3418 + if (!(sccr1_reg & SSCR1_TIE))
3419 + mask &= ~SSSR_TFS;
3420 +
3421 ++ /* Ignore RX timeout interrupt if it is disabled */
3422 ++ if (!(sccr1_reg & SSCR1_TINTE))
3423 ++ mask &= ~SSSR_TINT;
3424 ++
3425 + if (!(status & mask))
3426 + return IRQ_NONE;
3427 +
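
The one-line spi-pxa2xx fix is an instance of a general rule for shared interrupt handlers: a status bit only counts as "ours" if its interrupt source is actually enabled, so the handler builds a mask of enabled causes and bails out with IRQ_NONE when nothing relevant is set. The general shape, with illustrative names:

    status = read_status_reg(port);
    mask = enabled_causes(port);    /* drops SSSR_TINT when TINTE is off */
    if (!(status & mask))
            return IRQ_NONE;        /* let other handlers claim the line */
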
3428 +diff --git a/drivers/spi/spi-xtensa-xtfpga.c b/drivers/spi/spi-xtensa-xtfpga.c
3429 +index 2e32ea2f194f..be6155cba9de 100644
3430 +--- a/drivers/spi/spi-xtensa-xtfpga.c
3431 ++++ b/drivers/spi/spi-xtensa-xtfpga.c
3432 +@@ -34,13 +34,13 @@ struct xtfpga_spi {
3433 + static inline void xtfpga_spi_write32(const struct xtfpga_spi *spi,
3434 + unsigned addr, u32 val)
3435 + {
3436 +- iowrite32(val, spi->regs + addr);
3437 ++ __raw_writel(val, spi->regs + addr);
3438 + }
3439 +
3440 + static inline unsigned int xtfpga_spi_read32(const struct xtfpga_spi *spi,
3441 + unsigned addr)
3442 + {
3443 +- return ioread32(spi->regs + addr);
3444 ++ return __raw_readl(spi->regs + addr);
3445 + }
3446 +
3447 + static inline void xtfpga_spi_wait_busy(struct xtfpga_spi *xspi)
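
The xtfpga substitution is about byte order rather than style: ioread32()/iowrite32() are defined as little-endian accessors and byte-swap on big-endian CPUs, while the __raw_readl()/__raw_writel() forms perform the access in native endianness (and without the implied ordering barriers), which is what this native-endian FPGA register block expects. Illustrative effect on a big-endian Xtensa core:

    iowrite32(0x12345678, regs + ADDR);     /* device sees 0x78563412 */
    __raw_writel(0x12345678, regs + ADDR);  /* device sees 0x12345678 */
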
3448 +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
3449 +index d35c1a13217c..029dbd33b4b2 100644
3450 +--- a/drivers/spi/spi.c
3451 ++++ b/drivers/spi/spi.c
3452 +@@ -1427,8 +1427,7 @@ static struct class spi_master_class = {
3453 + *
3454 + * The caller is responsible for assigning the bus number and initializing
3455 + * the master's methods before calling spi_register_master(); and (after errors
3456 +- * adding the device) calling spi_master_put() and kfree() to prevent a memory
3457 +- * leak.
3458 ++ * adding the device) calling spi_master_put() to prevent a memory leak.
3459 + */
3460 + struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
3461 + {
3462 +diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
3463 +index 92c909eed6b5..8fab566e0f0b 100644
3464 +--- a/drivers/spi/spidev.c
3465 ++++ b/drivers/spi/spidev.c
3466 +@@ -664,7 +664,8 @@ static int spidev_release(struct inode *inode, struct file *filp)
3467 + kfree(spidev->rx_buffer);
3468 + spidev->rx_buffer = NULL;
3469 +
3470 +- spidev->speed_hz = spidev->spi->max_speed_hz;
3471 ++ if (spidev->spi)
3472 ++ spidev->speed_hz = spidev->spi->max_speed_hz;
3473 +
3474 + /* ... after we unbound from the underlying device? */
3475 + spin_lock_irq(&spidev->spi_lock);
3476 +diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
3477 +index b0b96ab31954..abbc42a56e7c 100644
3478 +--- a/drivers/staging/android/ion/ion.c
3479 ++++ b/drivers/staging/android/ion/ion.c
3480 +@@ -1179,13 +1179,13 @@ struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd)
3481 + mutex_unlock(&client->lock);
3482 + goto end;
3483 + }
3484 +- mutex_unlock(&client->lock);
3485 +
3486 + handle = ion_handle_create(client, buffer);
3487 +- if (IS_ERR(handle))
3488 ++ if (IS_ERR(handle)) {
3489 ++ mutex_unlock(&client->lock);
3490 + goto end;
3491 ++ }
3492 +
3493 +- mutex_lock(&client->lock);
3494 + ret = ion_handle_add(client, handle);
3495 + mutex_unlock(&client->lock);
3496 + if (ret) {
3497 +diff --git a/drivers/staging/speakup/fakekey.c b/drivers/staging/speakup/fakekey.c
3498 +index 4299cf45f947..5e1f16c36b49 100644
3499 +--- a/drivers/staging/speakup/fakekey.c
3500 ++++ b/drivers/staging/speakup/fakekey.c
3501 +@@ -81,6 +81,7 @@ void speakup_fake_down_arrow(void)
3502 + __this_cpu_write(reporting_keystroke, true);
3503 + input_report_key(virt_keyboard, KEY_DOWN, PRESSED);
3504 + input_report_key(virt_keyboard, KEY_DOWN, RELEASED);
3505 ++ input_sync(virt_keyboard);
3506 + __this_cpu_write(reporting_keystroke, false);
3507 +
3508 + /* reenable preemption */
3509 +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
3510 +index 0ab6e2efd28c..330bbe831066 100644
3511 +--- a/drivers/target/iscsi/iscsi_target.c
3512 ++++ b/drivers/target/iscsi/iscsi_target.c
3513 +@@ -341,7 +341,6 @@ static struct iscsi_np *iscsit_get_np(
3514 +
3515 + struct iscsi_np *iscsit_add_np(
3516 + struct __kernel_sockaddr_storage *sockaddr,
3517 +- char *ip_str,
3518 + int network_transport)
3519 + {
3520 + struct sockaddr_in *sock_in;
3521 +@@ -370,11 +369,9 @@ struct iscsi_np *iscsit_add_np(
3522 + np->np_flags |= NPF_IP_NETWORK;
3523 + if (sockaddr->ss_family == AF_INET6) {
3524 + sock_in6 = (struct sockaddr_in6 *)sockaddr;
3525 +- snprintf(np->np_ip, IPV6_ADDRESS_SPACE, "%s", ip_str);
3526 + np->np_port = ntohs(sock_in6->sin6_port);
3527 + } else {
3528 + sock_in = (struct sockaddr_in *)sockaddr;
3529 +- sprintf(np->np_ip, "%s", ip_str);
3530 + np->np_port = ntohs(sock_in->sin_port);
3531 + }
3532 +
3533 +@@ -411,8 +408,8 @@ struct iscsi_np *iscsit_add_np(
3534 + list_add_tail(&np->np_list, &g_np_list);
3535 + mutex_unlock(&np_lock);
3536 +
3537 +- pr_debug("CORE[0] - Added Network Portal: %s:%hu on %s\n",
3538 +- np->np_ip, np->np_port, np->np_transport->name);
3539 ++ pr_debug("CORE[0] - Added Network Portal: %pISc:%hu on %s\n",
3540 ++ &np->np_sockaddr, np->np_port, np->np_transport->name);
3541 +
3542 + return np;
3543 + }
3544 +@@ -481,8 +478,8 @@ int iscsit_del_np(struct iscsi_np *np)
3545 + list_del(&np->np_list);
3546 + mutex_unlock(&np_lock);
3547 +
3548 +- pr_debug("CORE[0] - Removed Network Portal: %s:%hu on %s\n",
3549 +- np->np_ip, np->np_port, np->np_transport->name);
3550 ++ pr_debug("CORE[0] - Removed Network Portal: %pISc:%hu on %s\n",
3551 ++ &np->np_sockaddr, np->np_port, np->np_transport->name);
3552 +
3553 + iscsit_put_transport(np->np_transport);
3554 + kfree(np);
3555 +@@ -3467,7 +3464,6 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd,
3556 + tpg_np_list) {
3557 + struct iscsi_np *np = tpg_np->tpg_np;
3558 + bool inaddr_any = iscsit_check_inaddr_any(np);
3559 +- char *fmt_str;
3560 +
3561 + if (np->np_network_transport != network_transport)
3562 + continue;
3563 +@@ -3495,15 +3491,18 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd,
3564 + }
3565 + }
3566 +
3567 +- if (np->np_sockaddr.ss_family == AF_INET6)
3568 +- fmt_str = "TargetAddress=[%s]:%hu,%hu";
3569 +- else
3570 +- fmt_str = "TargetAddress=%s:%hu,%hu";
3571 +-
3572 +- len = sprintf(buf, fmt_str,
3573 +- inaddr_any ? conn->local_ip : np->np_ip,
3574 +- np->np_port,
3575 +- tpg->tpgt);
3576 ++ if (inaddr_any) {
3577 ++ len = sprintf(buf, "TargetAddress="
3578 ++ "%s:%hu,%hu",
3579 ++ conn->local_ip,
3580 ++ np->np_port,
3581 ++ tpg->tpgt);
3582 ++ } else {
3583 ++ len = sprintf(buf, "TargetAddress="
3584 ++ "%pISpc,%hu",
3585 ++ &np->np_sockaddr,
3586 ++ tpg->tpgt);
3587 ++ }
3588 + len += 1;
3589 +
3590 + if ((len + payload_len) > buffer_len) {
3591 +diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
3592 +index 7d0f9c00d9c2..d294f030a097 100644
3593 +--- a/drivers/target/iscsi/iscsi_target.h
3594 ++++ b/drivers/target/iscsi/iscsi_target.h
3595 +@@ -13,7 +13,7 @@ extern int iscsit_deaccess_np(struct iscsi_np *, struct iscsi_portal_group *,
3596 + extern bool iscsit_check_np_match(struct __kernel_sockaddr_storage *,
3597 + struct iscsi_np *, int);
3598 + extern struct iscsi_np *iscsit_add_np(struct __kernel_sockaddr_storage *,
3599 +- char *, int);
3600 ++ int);
3601 + extern int iscsit_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *,
3602 + struct iscsi_portal_group *, bool);
3603 + extern int iscsit_del_np(struct iscsi_np *);
3604 +diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
3605 +index 469fce44ebad..6f2fb546477e 100644
3606 +--- a/drivers/target/iscsi/iscsi_target_configfs.c
3607 ++++ b/drivers/target/iscsi/iscsi_target_configfs.c
3608 +@@ -100,7 +100,7 @@ static ssize_t lio_target_np_store_sctp(
3609 + * Use existing np->np_sockaddr for SCTP network portal reference
3610 + */
3611 + tpg_np_sctp = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
3612 +- np->np_ip, tpg_np, ISCSI_SCTP_TCP);
3613 ++ tpg_np, ISCSI_SCTP_TCP);
3614 + if (!tpg_np_sctp || IS_ERR(tpg_np_sctp))
3615 + goto out;
3616 + } else {
3617 +@@ -178,7 +178,7 @@ static ssize_t lio_target_np_store_iser(
3618 + }
3619 +
3620 + tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
3621 +- np->np_ip, tpg_np, ISCSI_INFINIBAND);
3622 ++ tpg_np, ISCSI_INFINIBAND);
3623 + if (IS_ERR(tpg_np_iser)) {
3624 + rc = PTR_ERR(tpg_np_iser);
3625 + goto out;
3626 +@@ -249,8 +249,8 @@ static struct se_tpg_np *lio_target_call_addnptotpg(
3627 + return ERR_PTR(-EINVAL);
3628 + }
3629 + str++; /* Skip over leading "[" */
3630 +- *str2 = '\0'; /* Terminate the IPv6 address */
3631 +- str2++; /* Skip over the "]" */
3632 ++ *str2 = '\0'; /* Terminate the unbracketed IPv6 address */
3633 ++ str2++; /* Skip over the \0 */
3634 + port_str = strstr(str2, ":");
3635 + if (!port_str) {
3636 + pr_err("Unable to locate \":port\""
3637 +@@ -317,7 +317,7 @@ static struct se_tpg_np *lio_target_call_addnptotpg(
3638 + * sys/kernel/config/iscsi/$IQN/$TPG/np/$IP:$PORT/
3639 + *
3640 + */
3641 +- tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, str, NULL,
3642 ++ tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, NULL,
3643 + ISCSI_TCP);
3644 + if (IS_ERR(tpg_np)) {
3645 + iscsit_put_tpg(tpg);
3646 +@@ -345,8 +345,8 @@ static void lio_target_call_delnpfromtpg(
3647 +
3648 + se_tpg = &tpg->tpg_se_tpg;
3649 + pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s TPGT: %hu"
3650 +- " PORTAL: %s:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
3651 +- tpg->tpgt, tpg_np->tpg_np->np_ip, tpg_np->tpg_np->np_port);
3652 ++ " PORTAL: %pISc:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
3653 ++ tpg->tpgt, &tpg_np->tpg_np->np_sockaddr, tpg_np->tpg_np->np_port);
3654 +
3655 + ret = iscsit_tpg_del_network_portal(tpg, tpg_np);
3656 + if (ret < 0)
3657 +diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
3658 +index c3bccaddb592..39654e917cd8 100644
3659 +--- a/drivers/target/iscsi/iscsi_target_login.c
3660 ++++ b/drivers/target/iscsi/iscsi_target_login.c
3661 +@@ -879,8 +879,8 @@ static void iscsi_handle_login_thread_timeout(unsigned long data)
3662 + struct iscsi_np *np = (struct iscsi_np *) data;
3663 +
3664 + spin_lock_bh(&np->np_thread_lock);
3665 +- pr_err("iSCSI Login timeout on Network Portal %s:%hu\n",
3666 +- np->np_ip, np->np_port);
3667 ++ pr_err("iSCSI Login timeout on Network Portal %pISc:%hu\n",
3668 ++ &np->np_sockaddr, np->np_port);
3669 +
3670 + if (np->np_login_timer_flags & ISCSI_TF_STOP) {
3671 + spin_unlock_bh(&np->np_thread_lock);
3672 +@@ -1358,8 +1358,8 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
3673 + spin_lock_bh(&np->np_thread_lock);
3674 + if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
3675 + spin_unlock_bh(&np->np_thread_lock);
3676 +- pr_err("iSCSI Network Portal on %s:%hu currently not"
3677 +- " active.\n", np->np_ip, np->np_port);
3678 ++ pr_err("iSCSI Network Portal on %pISc:%hu currently not"
3679 ++ " active.\n", &np->np_sockaddr, np->np_port);
3680 + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
3681 + ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
3682 + goto new_sess_out;
3683 +diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
3684 +index 5e3295fe404d..3bc7d62c0a65 100644
3685 +--- a/drivers/target/iscsi/iscsi_target_tpg.c
3686 ++++ b/drivers/target/iscsi/iscsi_target_tpg.c
3687 +@@ -460,7 +460,6 @@ static bool iscsit_tpg_check_network_portal(
3688 + struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
3689 + struct iscsi_portal_group *tpg,
3690 + struct __kernel_sockaddr_storage *sockaddr,
3691 +- char *ip_str,
3692 + struct iscsi_tpg_np *tpg_np_parent,
3693 + int network_transport)
3694 + {
3695 +@@ -470,8 +469,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
3696 + if (!tpg_np_parent) {
3697 + if (iscsit_tpg_check_network_portal(tpg->tpg_tiqn, sockaddr,
3698 + network_transport)) {
3699 +- pr_err("Network Portal: %s already exists on a"
3700 +- " different TPG on %s\n", ip_str,
3701 ++ pr_err("Network Portal: %pISc already exists on a"
3702 ++ " different TPG on %s\n", sockaddr,
3703 + tpg->tpg_tiqn->tiqn);
3704 + return ERR_PTR(-EEXIST);
3705 + }
3706 +@@ -484,7 +483,7 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
3707 + return ERR_PTR(-ENOMEM);
3708 + }
3709 +
3710 +- np = iscsit_add_np(sockaddr, ip_str, network_transport);
3711 ++ np = iscsit_add_np(sockaddr, network_transport);
3712 + if (IS_ERR(np)) {
3713 + kfree(tpg_np);
3714 + return ERR_CAST(np);
3715 +@@ -514,8 +513,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
3716 + spin_unlock(&tpg_np_parent->tpg_np_parent_lock);
3717 + }
3718 +
3719 +- pr_debug("CORE[%s] - Added Network Portal: %s:%hu,%hu on %s\n",
3720 +- tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt,
3721 ++ pr_debug("CORE[%s] - Added Network Portal: %pISc:%hu,%hu on %s\n",
3722 ++ tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt,
3723 + np->np_transport->name);
3724 +
3725 + return tpg_np;
3726 +@@ -528,8 +527,8 @@ static int iscsit_tpg_release_np(
3727 + {
3728 + iscsit_clear_tpg_np_login_thread(tpg_np, tpg, true);
3729 +
3730 +- pr_debug("CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s\n",
3731 +- tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt,
3732 ++ pr_debug("CORE[%s] - Removed Network Portal: %pISc:%hu,%hu on %s\n",
3733 ++ tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt,
3734 + np->np_transport->name);
3735 +
3736 + tpg_np->tpg_np = NULL;
3737 +diff --git a/drivers/target/iscsi/iscsi_target_tpg.h b/drivers/target/iscsi/iscsi_target_tpg.h
3738 +index 95ff5bdecd71..28abda89ea98 100644
3739 +--- a/drivers/target/iscsi/iscsi_target_tpg.h
3740 ++++ b/drivers/target/iscsi/iscsi_target_tpg.h
3741 +@@ -22,7 +22,7 @@ extern struct iscsi_node_attrib *iscsit_tpg_get_node_attrib(struct iscsi_session
3742 + extern void iscsit_tpg_del_external_nps(struct iscsi_tpg_np *);
3743 + extern struct iscsi_tpg_np *iscsit_tpg_locate_child_np(struct iscsi_tpg_np *, int);
3744 + extern struct iscsi_tpg_np *iscsit_tpg_add_network_portal(struct iscsi_portal_group *,
3745 +- struct __kernel_sockaddr_storage *, char *, struct iscsi_tpg_np *,
3746 ++ struct __kernel_sockaddr_storage *, struct iscsi_tpg_np *,
3747 + int);
3748 + extern int iscsit_tpg_del_network_portal(struct iscsi_portal_group *,
3749 + struct iscsi_tpg_np *);
3750 +diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
3751 +index a15411c79ae9..08aa7cc58694 100644
3752 +--- a/drivers/target/target_core_pr.c
3753 ++++ b/drivers/target/target_core_pr.c
3754 +@@ -328,6 +328,9 @@ static int core_scsi3_pr_seq_non_holder(
3755 + int legacy = 0; /* Act like a legacy device and return
3756 + * RESERVATION CONFLICT on some CDBs */
3757 +
3758 ++ if (!se_sess->se_node_acl->device_list)
3759 ++ return 0;
3760 ++
3761 + se_deve = se_sess->se_node_acl->device_list[cmd->orig_fe_lun];
3762 + /*
3763 + * Determine if the registration should be ignored due to
3764 +diff --git a/drivers/target/target_core_ua.c b/drivers/target/target_core_ua.c
3765 +index 1738b1646988..9fc33e84439a 100644
3766 +--- a/drivers/target/target_core_ua.c
3767 ++++ b/drivers/target/target_core_ua.c
3768 +@@ -48,7 +48,7 @@ target_scsi3_ua_check(struct se_cmd *cmd)
3769 + return 0;
3770 +
3771 + nacl = sess->se_node_acl;
3772 +- if (!nacl)
3773 ++ if (!nacl || !nacl->device_list)
3774 + return 0;
3775 +
3776 + deve = nacl->device_list[cmd->orig_fe_lun];
3777 +@@ -90,7 +90,7 @@ int core_scsi3_ua_allocate(
3778 + /*
3779 + * PASSTHROUGH OPS
3780 + */
3781 +- if (!nacl)
3782 ++ if (!nacl || !nacl->device_list)
3783 + return -EINVAL;
3784 +
3785 + ua = kmem_cache_zalloc(se_ua_cache, GFP_ATOMIC);
3786 +@@ -208,7 +208,7 @@ void core_scsi3_ua_for_check_condition(
3787 + return;
3788 +
3789 + nacl = sess->se_node_acl;
3790 +- if (!nacl)
3791 ++ if (!nacl || !nacl->device_list)
3792 + return;
3793 +
3794 + spin_lock_irq(&nacl->device_list_lock);
3795 +@@ -276,7 +276,7 @@ int core_scsi3_ua_clear_for_request_sense(
3796 + return -EINVAL;
3797 +
3798 + nacl = sess->se_node_acl;
3799 +- if (!nacl)
3800 ++ if (!nacl || !nacl->device_list)
3801 + return -EINVAL;
3802 +
3803 + spin_lock_irq(&nacl->device_list_lock);
3804 +diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
3805 +index 8fd680ac941b..4609305a1591 100644
3806 +--- a/drivers/target/target_core_xcopy.c
3807 ++++ b/drivers/target/target_core_xcopy.c
3808 +@@ -465,6 +465,8 @@ int target_xcopy_setup_pt(void)
3809 + memset(&xcopy_pt_sess, 0, sizeof(struct se_session));
3810 + INIT_LIST_HEAD(&xcopy_pt_sess.sess_list);
3811 + INIT_LIST_HEAD(&xcopy_pt_sess.sess_acl_list);
3812 ++ INIT_LIST_HEAD(&xcopy_pt_sess.sess_cmd_list);
3813 ++ spin_lock_init(&xcopy_pt_sess.sess_cmd_lock);
3814 +
3815 + xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg;
3816 + xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess;
3817 +@@ -666,7 +668,7 @@ static int target_xcopy_read_source(
3818 + pr_debug("XCOPY: Built READ_16: LBA: %llu Sectors: %u Length: %u\n",
3819 + (unsigned long long)src_lba, src_sectors, length);
3820 +
3821 +- transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
3822 ++ transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, length,
3823 + DMA_FROM_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
3824 + xop->src_pt_cmd = xpt_cmd;
3825 +
3826 +@@ -726,7 +728,7 @@ static int target_xcopy_write_destination(
3827 + pr_debug("XCOPY: Built WRITE_16: LBA: %llu Sectors: %u Length: %u\n",
3828 + (unsigned long long)dst_lba, dst_sectors, length);
3829 +
3830 +- transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
3831 ++ transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, &xcopy_pt_sess, length,
3832 + DMA_TO_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
3833 + xop->dst_pt_cmd = xpt_cmd;
3834 +
3835 +diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
3836 +index 16ed0b6c7f9c..6b6c6606af5f 100644
3837 +--- a/drivers/tty/n_tty.c
3838 ++++ b/drivers/tty/n_tty.c
3839 +@@ -343,8 +343,7 @@ static void n_tty_packet_mode_flush(struct tty_struct *tty)
3840 + spin_lock_irqsave(&tty->ctrl_lock, flags);
3841 + tty->ctrl_status |= TIOCPKT_FLUSHREAD;
3842 + spin_unlock_irqrestore(&tty->ctrl_lock, flags);
3843 +- if (waitqueue_active(&tty->link->read_wait))
3844 +- wake_up_interruptible(&tty->link->read_wait);
3845 ++ wake_up_interruptible(&tty->link->read_wait);
3846 + }
3847 + }
3848 +
3849 +@@ -1383,8 +1382,7 @@ handle_newline:
3850 + put_tty_queue(c, ldata);
3851 + smp_store_release(&ldata->canon_head, ldata->read_head);
3852 + kill_fasync(&tty->fasync, SIGIO, POLL_IN);
3853 +- if (waitqueue_active(&tty->read_wait))
3854 +- wake_up_interruptible_poll(&tty->read_wait, POLLIN);
3855 ++ wake_up_interruptible_poll(&tty->read_wait, POLLIN);
3856 + return 0;
3857 + }
3858 + }
3859 +@@ -1670,8 +1668,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp,
3860 +
3861 + if ((read_cnt(ldata) >= ldata->minimum_to_wake) || L_EXTPROC(tty)) {
3862 + kill_fasync(&tty->fasync, SIGIO, POLL_IN);
3863 +- if (waitqueue_active(&tty->read_wait))
3864 +- wake_up_interruptible_poll(&tty->read_wait, POLLIN);
3865 ++ wake_up_interruptible_poll(&tty->read_wait, POLLIN);
3866 + }
3867 + }
3868 +
3869 +@@ -1890,10 +1887,8 @@ static void n_tty_set_termios(struct tty_struct *tty, struct ktermios *old)
3870 + }
3871 +
3872 + /* The termios change makes the tty ready for I/O */
3873 +- if (waitqueue_active(&tty->write_wait))
3874 +- wake_up_interruptible(&tty->write_wait);
3875 +- if (waitqueue_active(&tty->read_wait))
3876 +- wake_up_interruptible(&tty->read_wait);
3877 ++ wake_up_interruptible(&tty->write_wait);
3878 ++ wake_up_interruptible(&tty->read_wait);
3879 + }
3880 +
3881 + /**
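
Every n_tty hunk above deletes the same guard. An unlocked waitqueue_active() peek can race with a task that is still in the middle of adding itself to the wait queue: without a memory barrier between the sleeper's enqueue and the waker's test, the waker may see an empty queue and skip the wakeup, stalling the reader. Since a wakeup on an empty queue is cheap, the guard is simply dropped:

    /* racy: the peek is not ordered against a concurrent enqueue */
    if (waitqueue_active(&tty->read_wait))
            wake_up_interruptible(&tty->read_wait);

    /* safe: a wakeup on an empty queue is a cheap no-op */
    wake_up_interruptible(&tty->read_wait);
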
3882 +diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
3883 +index 4506e405c8f3..b4fd8debf941 100644
3884 +--- a/drivers/tty/serial/8250/8250_core.c
3885 ++++ b/drivers/tty/serial/8250/8250_core.c
3886 +@@ -339,6 +339,14 @@ configured less than Maximum supported fifo bytes */
3887 + UART_FCR7_64BYTE,
3888 + .flags = UART_CAP_FIFO,
3889 + },
3890 ++ [PORT_RT2880] = {
3891 ++ .name = "Palmchip BK-3103",
3892 ++ .fifo_size = 16,
3893 ++ .tx_loadsz = 16,
3894 ++ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
3895 ++ .rxtrig_bytes = {1, 4, 8, 14},
3896 ++ .flags = UART_CAP_FIFO,
3897 ++ },
3898 + };
3899 +
3900 + /* Uart divisor latch read */
3901 +diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
3902 +index 763eb20fe321..0cc622afb67d 100644
3903 +--- a/drivers/tty/serial/amba-pl011.c
3904 ++++ b/drivers/tty/serial/amba-pl011.c
3905 +@@ -1360,9 +1360,9 @@ static void pl011_tx_softirq(struct work_struct *work)
3906 + struct uart_amba_port *uap =
3907 + container_of(dwork, struct uart_amba_port, tx_softirq_work);
3908 +
3909 +- spin_lock(&uap->port.lock);
3910 ++ spin_lock_irq(&uap->port.lock);
3911 + while (pl011_tx_chars(uap)) ;
3912 +- spin_unlock(&uap->port.lock);
3913 ++ spin_unlock_irq(&uap->port.lock);
3914 + }
3915 +
3916 + static void pl011_tx_irq_seen(struct uart_amba_port *uap)
3917 +diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
3918 +index 5ca1dfb0561c..85323ff75edf 100644
3919 +--- a/drivers/tty/serial/atmel_serial.c
3920 ++++ b/drivers/tty/serial/atmel_serial.c
3921 +@@ -2640,7 +2640,7 @@ static int atmel_serial_probe(struct platform_device *pdev)
3922 + ret = atmel_init_gpios(port, &pdev->dev);
3923 + if (ret < 0) {
3924 + dev_err(&pdev->dev, "Failed to initialize GPIOs.");
3925 +- goto err;
3926 ++ goto err_clear_bit;
3927 + }
3928 +
3929 + ret = atmel_init_port(port, pdev);
3930 +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
3931 +index e5695467598f..21837f14a403 100644
3932 +--- a/drivers/tty/tty_io.c
3933 ++++ b/drivers/tty/tty_io.c
3934 +@@ -2144,8 +2144,24 @@ retry_open:
3935 + if (!noctty &&
3936 + current->signal->leader &&
3937 + !current->signal->tty &&
3938 +- tty->session == NULL)
3939 +- __proc_set_tty(tty);
3940 ++ tty->session == NULL) {
3941 ++ /*
3942 ++ * Don't let a process that only has write access to the tty
3943 ++ * obtain the privileges associated with having a tty as
3944 ++ * controlling terminal (being able to reopen it with full
3945 ++ * access through /dev/tty, being able to perform pushback).
3946 ++ * Many distributions set the group of all ttys to "tty" and
3947 ++ * grant write-only access to all terminals for setgid tty
3948 ++ * binaries, which should not imply full privileges on all ttys.
3949 ++ *
3950 ++ * This could theoretically break old code that performs open()
3951 ++ * on a write-only file descriptor. In that case, it might be
3952 ++ * necessary to also permit this if
3953 ++ * inode_permission(inode, MAY_READ) == 0.
3954 ++ */
3955 ++ if (filp->f_mode & FMODE_READ)
3956 ++ __proc_set_tty(tty);
3957 ++ }
3958 + spin_unlock_irq(&current->sighand->siglock);
3959 + read_unlock(&tasklist_lock);
3960 + tty_unlock(tty);
3961 +@@ -2434,7 +2450,7 @@ static int fionbio(struct file *file, int __user *p)
3962 + * Takes ->siglock() when updating signal->tty
3963 + */
3964 +
3965 +-static int tiocsctty(struct tty_struct *tty, int arg)
3966 ++static int tiocsctty(struct tty_struct *tty, struct file *file, int arg)
3967 + {
3968 + int ret = 0;
3969 +
3970 +@@ -2468,6 +2484,13 @@ static int tiocsctty(struct tty_struct *tty, int arg)
3971 + goto unlock;
3972 + }
3973 + }
3974 ++
3975 ++ /* See the comment in tty_open(). */
3976 ++ if ((file->f_mode & FMODE_READ) == 0 && !capable(CAP_SYS_ADMIN)) {
3977 ++ ret = -EPERM;
3978 ++ goto unlock;
3979 ++ }
3980 ++
3981 + proc_set_tty(tty);
3982 + unlock:
3983 + read_unlock(&tasklist_lock);
3984 +@@ -2860,7 +2883,7 @@ long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
3985 + no_tty();
3986 + return 0;
3987 + case TIOCSCTTY:
3988 +- return tiocsctty(tty, arg);
3989 ++ return tiocsctty(tty, file, arg);
3990 + case TIOCGPGRP:
3991 + return tiocgpgrp(tty, real_tty, p);
3992 + case TIOCSPGRP:
3993 +diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
3994 +index 389f0e034259..fa774323ebda 100644
3995 +--- a/drivers/usb/chipidea/ci_hdrc_imx.c
3996 ++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
3997 +@@ -56,7 +56,7 @@ static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
3998 + { .compatible = "fsl,imx27-usb", .data = &imx27_usb_data},
3999 + { .compatible = "fsl,imx6q-usb", .data = &imx6q_usb_data},
4000 + { .compatible = "fsl,imx6sl-usb", .data = &imx6sl_usb_data},
4001 +- { .compatible = "fsl,imx6sx-usb", .data = &imx6sl_usb_data},
4002 ++ { .compatible = "fsl,imx6sx-usb", .data = &imx6sx_usb_data},
4003 + { /* sentinel */ }
4004 + };
4005 + MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids);
4006 +diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
4007 +index 764f668d45a9..6e53c24fa1cb 100644
4008 +--- a/drivers/usb/chipidea/udc.c
4009 ++++ b/drivers/usb/chipidea/udc.c
4010 +@@ -656,6 +656,44 @@ __acquires(hwep->lock)
4011 + return 0;
4012 + }
4013 +
4014 ++static int _ep_set_halt(struct usb_ep *ep, int value, bool check_transfer)
4015 ++{
4016 ++ struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep);
4017 ++ int direction, retval = 0;
4018 ++ unsigned long flags;
4019 ++
4020 ++ if (ep == NULL || hwep->ep.desc == NULL)
4021 ++ return -EINVAL;
4022 ++
4023 ++ if (usb_endpoint_xfer_isoc(hwep->ep.desc))
4024 ++ return -EOPNOTSUPP;
4025 ++
4026 ++ spin_lock_irqsave(hwep->lock, flags);
4027 ++
4028 ++ if (value && hwep->dir == TX && check_transfer &&
4029 ++ !list_empty(&hwep->qh.queue) &&
4030 ++ !usb_endpoint_xfer_control(hwep->ep.desc)) {
4031 ++ spin_unlock_irqrestore(hwep->lock, flags);
4032 ++ return -EAGAIN;
4033 ++ }
4034 ++
4035 ++ direction = hwep->dir;
4036 ++ do {
4037 ++ retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value);
4038 ++
4039 ++ if (!value)
4040 ++ hwep->wedge = 0;
4041 ++
4042 ++ if (hwep->type == USB_ENDPOINT_XFER_CONTROL)
4043 ++ hwep->dir = (hwep->dir == TX) ? RX : TX;
4044 ++
4045 ++ } while (hwep->dir != direction);
4046 ++
4047 ++ spin_unlock_irqrestore(hwep->lock, flags);
4048 ++ return retval;
4049 ++}
4050 ++
4051 ++
4052 + /**
4053 + * _gadget_stop_activity: stops all USB activity, flushes & disables all endpts
4054 + * @gadget: gadget
4055 +@@ -1051,7 +1089,7 @@ __acquires(ci->lock)
4056 + num += ci->hw_ep_max / 2;
4057 +
4058 + spin_unlock(&ci->lock);
4059 +- err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep);
4060 ++ err = _ep_set_halt(&ci->ci_hw_ep[num].ep, 1, false);
4061 + spin_lock(&ci->lock);
4062 + if (!err)
4063 + isr_setup_status_phase(ci);
4064 +@@ -1110,8 +1148,8 @@ delegate:
4065 +
4066 + if (err < 0) {
4067 + spin_unlock(&ci->lock);
4068 +- if (usb_ep_set_halt(&hwep->ep))
4069 +- dev_err(ci->dev, "error: ep_set_halt\n");
4070 ++ if (_ep_set_halt(&hwep->ep, 1, false))
4071 ++ dev_err(ci->dev, "error: _ep_set_halt\n");
4072 + spin_lock(&ci->lock);
4073 + }
4074 + }
4075 +@@ -1142,9 +1180,9 @@ __acquires(ci->lock)
4076 + err = isr_setup_status_phase(ci);
4077 + if (err < 0) {
4078 + spin_unlock(&ci->lock);
4079 +- if (usb_ep_set_halt(&hwep->ep))
4080 ++ if (_ep_set_halt(&hwep->ep, 1, false))
4081 + dev_err(ci->dev,
4082 +- "error: ep_set_halt\n");
4083 ++ "error: _ep_set_halt\n");
4084 + spin_lock(&ci->lock);
4085 + }
4086 + }
4087 +@@ -1390,41 +1428,7 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
4088 + */
4089 + static int ep_set_halt(struct usb_ep *ep, int value)
4090 + {
4091 +- struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep);
4092 +- int direction, retval = 0;
4093 +- unsigned long flags;
4094 +-
4095 +- if (ep == NULL || hwep->ep.desc == NULL)
4096 +- return -EINVAL;
4097 +-
4098 +- if (usb_endpoint_xfer_isoc(hwep->ep.desc))
4099 +- return -EOPNOTSUPP;
4100 +-
4101 +- spin_lock_irqsave(hwep->lock, flags);
4102 +-
4103 +-#ifndef STALL_IN
4104 +- /* g_file_storage MS compliant but g_zero fails chapter 9 compliance */
4105 +- if (value && hwep->type == USB_ENDPOINT_XFER_BULK && hwep->dir == TX &&
4106 +- !list_empty(&hwep->qh.queue)) {
4107 +- spin_unlock_irqrestore(hwep->lock, flags);
4108 +- return -EAGAIN;
4109 +- }
4110 +-#endif
4111 +-
4112 +- direction = hwep->dir;
4113 +- do {
4114 +- retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value);
4115 +-
4116 +- if (!value)
4117 +- hwep->wedge = 0;
4118 +-
4119 +- if (hwep->type == USB_ENDPOINT_XFER_CONTROL)
4120 +- hwep->dir = (hwep->dir == TX) ? RX : TX;
4121 +-
4122 +- } while (hwep->dir != direction);
4123 +-
4124 +- spin_unlock_irqrestore(hwep->lock, flags);
4125 +- return retval;
4126 ++ return _ep_set_halt(ep, value, true);
4127 + }
4128 +
4129 + /**
4130 +diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
4131 +index b2a540b43f97..b9ddf0c1ffe5 100644
4132 +--- a/drivers/usb/core/config.c
4133 ++++ b/drivers/usb/core/config.c
4134 +@@ -112,7 +112,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
4135 + cfgno, inum, asnum, ep->desc.bEndpointAddress);
4136 + ep->ss_ep_comp.bmAttributes = 16;
4137 + } else if (usb_endpoint_xfer_isoc(&ep->desc) &&
4138 +- desc->bmAttributes > 2) {
4139 ++ USB_SS_MULT(desc->bmAttributes) > 3) {
4140 + dev_warn(ddev, "Isoc endpoint has Mult of %d in "
4141 + "config %d interface %d altsetting %d ep %d: "
4142 + "setting to 3\n", desc->bmAttributes + 1,
4143 +@@ -121,7 +121,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
4144 + }
4145 +
4146 + if (usb_endpoint_xfer_isoc(&ep->desc))
4147 +- max_tx = (desc->bMaxBurst + 1) * (desc->bmAttributes + 1) *
4148 ++ max_tx = (desc->bMaxBurst + 1) *
4149 ++ (USB_SS_MULT(desc->bmAttributes)) *
4150 + usb_endpoint_maxp(&ep->desc);
4151 + else if (usb_endpoint_xfer_int(&ep->desc))
4152 + max_tx = usb_endpoint_maxp(&ep->desc) *
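
USB_SS_MULT() decodes the Mult field as (1 + (bmAttributes & 0x3)), so the warning above now compares the decoded value against its architectural maximum of 3, and the isoc bandwidth math uses the decoded multiplier rather than raw bmAttributes. Worked example with assumed descriptor values:

    /* assumed: bMaxBurst = 2, bmAttributes = 1, maxp = 1024 */
    max_tx = (2 + 1) * USB_SS_MULT(1) * 1024;   /* 3 * 2 * 1024 = 6144 */

The old (desc->bmAttributes + 1) form only agreed with this while the reserved upper bits of bmAttributes happened to be zero.
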
4153 +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
4154 +index d85abfed84cc..f5a381945db2 100644
4155 +--- a/drivers/usb/core/quirks.c
4156 ++++ b/drivers/usb/core/quirks.c
4157 +@@ -54,6 +54,13 @@ static const struct usb_device_id usb_quirk_list[] = {
4158 + { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
4159 + { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT },
4160 +
4161 ++ /* Logitech ConferenceCam CC3000e */
4162 ++ { USB_DEVICE(0x046d, 0x0847), .driver_info = USB_QUIRK_DELAY_INIT },
4163 ++ { USB_DEVICE(0x046d, 0x0848), .driver_info = USB_QUIRK_DELAY_INIT },
4164 ++
4165 ++ /* Logitech PTZ Pro Camera */
4166 ++ { USB_DEVICE(0x046d, 0x0853), .driver_info = USB_QUIRK_DELAY_INIT },
4167 ++
4168 + /* Logitech Quickcam Fusion */
4169 + { USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME },
4170 +
4171 +@@ -78,6 +85,12 @@ static const struct usb_device_id usb_quirk_list[] = {
4172 + /* Philips PSC805 audio device */
4173 + { USB_DEVICE(0x0471, 0x0155), .driver_info = USB_QUIRK_RESET_RESUME },
4174 +
4175 ++ /* Plantronics Audio 655 DSP */
4176 ++ { USB_DEVICE(0x047f, 0xc008), .driver_info = USB_QUIRK_RESET_RESUME },
4177 ++
4178 ++ /* Plantronics Audio 648 USB */
4179 ++ { USB_DEVICE(0x047f, 0xc013), .driver_info = USB_QUIRK_RESET_RESUME },
4180 ++
4181 + /* Artisman Watchdog Dongle */
4182 + { USB_DEVICE(0x04b4, 0x0526), .driver_info =
4183 + USB_QUIRK_CONFIG_INTF_STRINGS },
4184 +diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
4185 +index 9a8c936cd42c..41f841fa6c4d 100644
4186 +--- a/drivers/usb/host/xhci-mem.c
4187 ++++ b/drivers/usb/host/xhci-mem.c
4188 +@@ -1498,10 +1498,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
4189 + * use Event Data TRBs, and we don't chain in a link TRB on short
4190 + * transfers, we're basically dividing by 1.
4191 + *
4192 +- * xHCI 1.0 specification indicates that the Average TRB Length should
4193 +- * be set to 8 for control endpoints.
4194 ++ * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length
4195 ++ * should be set to 8 for control endpoints.
4196 + */
4197 +- if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100)
4198 ++ if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
4199 + ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8));
4200 + else
4201 + ep_ctx->tx_info |=
4202 +@@ -1792,8 +1792,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
4203 + int size;
4204 + int i, j, num_ports;
4205 +
4206 +- if (timer_pending(&xhci->cmd_timer))
4207 +- del_timer_sync(&xhci->cmd_timer);
4208 ++ del_timer_sync(&xhci->cmd_timer);
4209 +
4210 + /* Free the Event Ring Segment Table and the actual Event Ring */
4211 + size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);
4212 +@@ -2321,6 +2320,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
4213 +
4214 + INIT_LIST_HEAD(&xhci->cmd_list);
4215 +
4216 ++ /* init command timeout timer */
4217 ++ setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
4218 ++ (unsigned long)xhci);
4219 ++
4220 + page_size = readl(&xhci->op_regs->page_size);
4221 + xhci_dbg_trace(xhci, trace_xhci_dbg_init,
4222 + "Supported page size register = 0x%x", page_size);
4223 +@@ -2505,10 +2508,6 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
4224 + "Wrote ERST address to ir_set 0.");
4225 + xhci_print_ir_set(xhci, 0);
4226 +
4227 +- /* init command timeout timer */
4228 +- setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
4229 +- (unsigned long)xhci);
4230 +-
4231 + /*
4232 + * XXX: Might need to set the Interrupter Moderation Register to
4233 + * something other than the default (~1ms minimum between interrupts).
4234 +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
4235 +index b3a0a2275f5a..ad975a2975ca 100644
4236 +--- a/drivers/usb/host/xhci-ring.c
4237 ++++ b/drivers/usb/host/xhci-ring.c
4238 +@@ -302,6 +302,15 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci)
4239 + ret = xhci_handshake(&xhci->op_regs->cmd_ring,
4240 + CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
4241 + if (ret < 0) {
4242 ++ /* we are about to kill xhci, give it one more chance */
4243 ++ xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
4244 ++ &xhci->op_regs->cmd_ring);
4245 ++ udelay(1000);
4246 ++ ret = xhci_handshake(&xhci->op_regs->cmd_ring,
4247 ++ CMD_RING_RUNNING, 0, 3 * 1000 * 1000);
4248 ++ if (ret == 0)
4249 ++ return 0;
4250 ++
4251 + xhci_err(xhci, "Stopped the command ring failed, "
4252 + "maybe the host is dead\n");
4253 + xhci->xhc_state |= XHCI_STATE_DYING;
4254 +@@ -3041,9 +3050,11 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4255 + struct xhci_td *td;
4256 + struct scatterlist *sg;
4257 + int num_sgs;
4258 +- int trb_buff_len, this_sg_len, running_total;
4259 ++ int trb_buff_len, this_sg_len, running_total, ret;
4260 + unsigned int total_packet_count;
4261 ++ bool zero_length_needed;
4262 + bool first_trb;
4263 ++ int last_trb_num;
4264 + u64 addr;
4265 + bool more_trbs_coming;
4266 +
4267 +@@ -3059,13 +3070,27 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4268 + total_packet_count = DIV_ROUND_UP(urb->transfer_buffer_length,
4269 + usb_endpoint_maxp(&urb->ep->desc));
4270 +
4271 +- trb_buff_len = prepare_transfer(xhci, xhci->devs[slot_id],
4272 ++ ret = prepare_transfer(xhci, xhci->devs[slot_id],
4273 + ep_index, urb->stream_id,
4274 + num_trbs, urb, 0, mem_flags);
4275 +- if (trb_buff_len < 0)
4276 +- return trb_buff_len;
4277 ++ if (ret < 0)
4278 ++ return ret;
4279 +
4280 + urb_priv = urb->hcpriv;
4281 ++
4282 ++ /* Deal with URB_ZERO_PACKET - need one more td/trb */
4283 ++ zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET &&
4284 ++ urb_priv->length == 2;
4285 ++ if (zero_length_needed) {
4286 ++ num_trbs++;
4287 ++ xhci_dbg(xhci, "Creating zero length td.\n");
4288 ++ ret = prepare_transfer(xhci, xhci->devs[slot_id],
4289 ++ ep_index, urb->stream_id,
4290 ++ 1, urb, 1, mem_flags);
4291 ++ if (ret < 0)
4292 ++ return ret;
4293 ++ }
4294 ++
4295 + td = urb_priv->td[0];
4296 +
4297 + /*
4298 +@@ -3095,6 +3120,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4299 + trb_buff_len = urb->transfer_buffer_length;
4300 +
4301 + first_trb = true;
4302 ++ last_trb_num = zero_length_needed ? 2 : 1;
4303 + /* Queue the first TRB, even if it's zero-length */
4304 + do {
4305 + u32 field = 0;
4306 +@@ -3112,12 +3138,15 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4307 + /* Chain all the TRBs together; clear the chain bit in the last
4308 + * TRB to indicate it's the last TRB in the chain.
4309 + */
4310 +- if (num_trbs > 1) {
4311 ++ if (num_trbs > last_trb_num) {
4312 + field |= TRB_CHAIN;
4313 +- } else {
4314 +- /* FIXME - add check for ZERO_PACKET flag before this */
4315 ++ } else if (num_trbs == last_trb_num) {
4316 + td->last_trb = ep_ring->enqueue;
4317 + field |= TRB_IOC;
4318 ++ } else if (zero_length_needed && num_trbs == 1) {
4319 ++ trb_buff_len = 0;
4320 ++ urb_priv->td[1]->last_trb = ep_ring->enqueue;
4321 ++ field |= TRB_IOC;
4322 + }
4323 +
4324 + /* Only set interrupt on short packet for IN endpoints */
4325 +@@ -3179,7 +3208,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4326 + if (running_total + trb_buff_len > urb->transfer_buffer_length)
4327 + trb_buff_len =
4328 + urb->transfer_buffer_length - running_total;
4329 +- } while (running_total < urb->transfer_buffer_length);
4330 ++ } while (num_trbs > 0);
4331 +
4332 + check_trb_math(urb, num_trbs, running_total);
4333 + giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
4334 +@@ -3197,7 +3226,9 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4335 + int num_trbs;
4336 + struct xhci_generic_trb *start_trb;
4337 + bool first_trb;
4338 ++ int last_trb_num;
4339 + bool more_trbs_coming;
4340 ++ bool zero_length_needed;
4341 + int start_cycle;
4342 + u32 field, length_field;
4343 +
4344 +@@ -3228,7 +3259,6 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4345 + num_trbs++;
4346 + running_total += TRB_MAX_BUFF_SIZE;
4347 + }
4348 +- /* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */
4349 +
4350 + ret = prepare_transfer(xhci, xhci->devs[slot_id],
4351 + ep_index, urb->stream_id,
4352 +@@ -3237,6 +3267,20 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4353 + return ret;
4354 +
4355 + urb_priv = urb->hcpriv;
4356 ++
4357 ++ /* Deal with URB_ZERO_PACKET - need one more td/trb */
4358 ++ zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET &&
4359 ++ urb_priv->length == 2;
4360 ++ if (zero_length_needed) {
4361 ++ num_trbs++;
4362 ++ xhci_dbg(xhci, "Creating zero length td.\n");
4363 ++ ret = prepare_transfer(xhci, xhci->devs[slot_id],
4364 ++ ep_index, urb->stream_id,
4365 ++ 1, urb, 1, mem_flags);
4366 ++ if (ret < 0)
4367 ++ return ret;
4368 ++ }
4369 ++
4370 + td = urb_priv->td[0];
4371 +
4372 + /*
4373 +@@ -3258,7 +3302,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4374 + trb_buff_len = urb->transfer_buffer_length;
4375 +
4376 + first_trb = true;
4377 +-
4378 ++ last_trb_num = zero_length_needed ? 2 : 1;
4379 + /* Queue the first TRB, even if it's zero-length */
4380 + do {
4381 + u32 remainder = 0;
4382 +@@ -3275,12 +3319,15 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4383 + /* Chain all the TRBs together; clear the chain bit in the last
4384 + * TRB to indicate it's the last TRB in the chain.
4385 + */
4386 +- if (num_trbs > 1) {
4387 ++ if (num_trbs > last_trb_num) {
4388 + field |= TRB_CHAIN;
4389 +- } else {
4390 +- /* FIXME - add check for ZERO_PACKET flag before this */
4391 ++ } else if (num_trbs == last_trb_num) {
4392 + td->last_trb = ep_ring->enqueue;
4393 + field |= TRB_IOC;
4394 ++ } else if (zero_length_needed && num_trbs == 1) {
4395 ++ trb_buff_len = 0;
4396 ++ urb_priv->td[1]->last_trb = ep_ring->enqueue;
4397 ++ field |= TRB_IOC;
4398 + }
4399 +
4400 + /* Only set interrupt on short packet for IN endpoints */
4401 +@@ -3318,7 +3365,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4402 + trb_buff_len = urb->transfer_buffer_length - running_total;
4403 + if (trb_buff_len > TRB_MAX_BUFF_SIZE)
4404 + trb_buff_len = TRB_MAX_BUFF_SIZE;
4405 +- } while (running_total < urb->transfer_buffer_length);
4406 ++ } while (num_trbs > 0);
4407 +
4408 + check_trb_math(urb, num_trbs, running_total);
4409 + giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
4410 +@@ -3385,8 +3432,8 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4411 + if (start_cycle == 0)
4412 + field |= 0x1;
4413 +
4414 +- /* xHCI 1.0 6.4.1.2.1: Transfer Type field */
4415 +- if (xhci->hci_version == 0x100) {
4416 ++ /* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */
4417 ++ if (xhci->hci_version >= 0x100) {
4418 + if (urb->transfer_buffer_length > 0) {
4419 + if (setup->bRequestType & USB_DIR_IN)
4420 + field |= TRB_TX_TYPE(TRB_DATA_IN);
4421 +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
4422 +index c502c2277aeb..26f62b2b33f8 100644
4423 +--- a/drivers/usb/host/xhci.c
4424 ++++ b/drivers/usb/host/xhci.c
4425 +@@ -146,7 +146,8 @@ static int xhci_start(struct xhci_hcd *xhci)
4426 + "waited %u microseconds.\n",
4427 + XHCI_MAX_HALT_USEC);
4428 + if (!ret)
4429 +- xhci->xhc_state &= ~XHCI_STATE_HALTED;
4430 ++ xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING);
4431 ++
4432 + return ret;
4433 + }
4434 +
4435 +@@ -683,8 +684,11 @@ void xhci_stop(struct usb_hcd *hcd)
4436 + u32 temp;
4437 + struct xhci_hcd *xhci = hcd_to_xhci(hcd);
4438 +
4439 ++ mutex_lock(&xhci->mutex);
4440 ++
4441 + if (!usb_hcd_is_primary_hcd(hcd)) {
4442 + xhci_only_stop_hcd(xhci->shared_hcd);
4443 ++ mutex_unlock(&xhci->mutex);
4444 + return;
4445 + }
4446 +
4447 +@@ -723,6 +727,7 @@ void xhci_stop(struct usb_hcd *hcd)
4448 + xhci_dbg_trace(xhci, trace_xhci_dbg_init,
4449 + "xhci_stop completed - status = %x",
4450 + readl(&xhci->op_regs->status));
4451 ++ mutex_unlock(&xhci->mutex);
4452 + }
4453 +
4454 + /*
4455 +@@ -1340,6 +1345,11 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
4456 +
4457 + if (usb_endpoint_xfer_isoc(&urb->ep->desc))
4458 + size = urb->number_of_packets;
4459 ++ else if (usb_endpoint_is_bulk_out(&urb->ep->desc) &&
4460 ++ urb->transfer_buffer_length > 0 &&
4461 ++ urb->transfer_flags & URB_ZERO_PACKET &&
4462 ++ !(urb->transfer_buffer_length % usb_endpoint_maxp(&urb->ep->desc)))
4463 ++ size = 2;
4464 + else
4465 + size = 1;
4466 +
4467 +@@ -3790,6 +3800,9 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
4468 +
4469 + mutex_lock(&xhci->mutex);
4470 +
4471 ++ if (xhci->xhc_state) /* dying or halted */
4472 ++ goto out;
4473 ++
4474 + if (!udev->slot_id) {
4475 + xhci_dbg_trace(xhci, trace_xhci_dbg_address,
4476 + "Bad Slot ID %d", udev->slot_id);
4477 +diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
4478 +index 3ad5d19e4d04..23c794813e6a 100644
4479 +--- a/drivers/usb/misc/chaoskey.c
4480 ++++ b/drivers/usb/misc/chaoskey.c
4481 +@@ -472,7 +472,7 @@ static int chaoskey_rng_read(struct hwrng *rng, void *data,
4482 + if (this_time > max)
4483 + this_time = max;
4484 +
4485 +- memcpy(data, dev->buf, this_time);
4486 ++ memcpy(data, dev->buf + dev->used, this_time);
4487 +
4488 + dev->used += this_time;
4489 +
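
The one-line chaoskey change is a classic draining bug: every call copied from
the start of the refill buffer instead of from the read cursor, handing out the
same bytes repeatedly. The corrected pattern in miniature (struct and names are
illustrative, not the driver's):

    #include <string.h>

    struct pool {
            unsigned char buf[64];
            size_t valid;   /* bytes filled by the device */
            size_t used;    /* bytes already handed out */
    };

    size_t drain(struct pool *p, void *dst, size_t max)
    {
            size_t n = p->valid - p->used;

            if (n > max)
                    n = max;
            memcpy(dst, p->buf + p->used, n); /* offset by the cursor, not buf[0] */
            p->used += n;
            return n;
    }
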
4490 +diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c
4491 +index 8bd8c5e26921..d5a140745640 100644
4492 +--- a/drivers/usb/musb/musb_cppi41.c
4493 ++++ b/drivers/usb/musb/musb_cppi41.c
4494 +@@ -614,7 +614,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller)
4495 + {
4496 + struct musb *musb = controller->musb;
4497 + struct device *dev = musb->controller;
4498 +- struct device_node *np = dev->of_node;
4499 ++ struct device_node *np = dev->parent->of_node;
4500 + struct cppi41_dma_channel *cppi41_channel;
4501 + int count;
4502 + int i;
4503 +@@ -664,7 +664,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller)
4504 + musb_dma->status = MUSB_DMA_STATUS_FREE;
4505 + musb_dma->max_len = SZ_4M;
4506 +
4507 +- dc = dma_request_slave_channel(dev, str);
4508 ++ dc = dma_request_slave_channel(dev->parent, str);
4509 + if (!dc) {
4510 + dev_err(dev, "Failed to request %s.\n", str);
4511 + ret = -EPROBE_DEFER;
4512 +@@ -694,7 +694,7 @@ struct dma_controller *dma_controller_create(struct musb *musb,
4513 + struct cppi41_dma_controller *controller;
4514 + int ret = 0;
4515 +
4516 +- if (!musb->controller->of_node) {
4517 ++ if (!musb->controller->parent->of_node) {
4518 + dev_err(musb->controller, "Need DT for the DMA engine.\n");
4519 + return NULL;
4520 + }
4521 +diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
4522 +index 65d931a28a14..dcac5e7f19e0 100644
4523 +--- a/drivers/usb/musb/musb_dsps.c
4524 ++++ b/drivers/usb/musb/musb_dsps.c
4525 +@@ -225,8 +225,11 @@ static void dsps_musb_enable(struct musb *musb)
4526 +
4527 + dsps_writel(reg_base, wrp->epintr_set, epmask);
4528 + dsps_writel(reg_base, wrp->coreintr_set, coremask);
4529 +- /* start polling for ID change. */
4530 +- mod_timer(&glue->timer, jiffies + msecs_to_jiffies(wrp->poll_timeout));
4531 ++ /* start polling for ID change in dual-role idle mode */
4532 ++ if (musb->xceiv->otg->state == OTG_STATE_B_IDLE &&
4533 ++ musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE)
4534 ++ mod_timer(&glue->timer, jiffies +
4535 ++ msecs_to_jiffies(wrp->poll_timeout));
4536 + dsps_musb_try_idle(musb, 0);
4537 + }
4538 +
4539 +diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
4540 +index deee68eafb72..0cd85f2ccddd 100644
4541 +--- a/drivers/usb/phy/phy-generic.c
4542 ++++ b/drivers/usb/phy/phy-generic.c
4543 +@@ -230,7 +230,8 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop,
4544 + clk_rate = pdata->clk_rate;
4545 + needs_vcc = pdata->needs_vcc;
4546 + if (gpio_is_valid(pdata->gpio_reset)) {
4547 +- err = devm_gpio_request_one(dev, pdata->gpio_reset, 0,
4548 ++ err = devm_gpio_request_one(dev, pdata->gpio_reset,
4549 ++ GPIOF_ACTIVE_LOW,
4550 + dev_name(dev));
4551 + if (!err)
4552 + nop->gpiod_reset =
4553 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
4554 +index 876423b8892c..7c8eb4c4c175 100644
4555 +--- a/drivers/usb/serial/option.c
4556 ++++ b/drivers/usb/serial/option.c
4557 +@@ -278,6 +278,10 @@ static void option_instat_callback(struct urb *urb);
4558 + #define ZTE_PRODUCT_MF622 0x0001
4559 + #define ZTE_PRODUCT_MF628 0x0015
4560 + #define ZTE_PRODUCT_MF626 0x0031
4561 ++#define ZTE_PRODUCT_ZM8620_X 0x0396
4562 ++#define ZTE_PRODUCT_ME3620_MBIM 0x0426
4563 ++#define ZTE_PRODUCT_ME3620_X 0x1432
4564 ++#define ZTE_PRODUCT_ME3620_L 0x1433
4565 + #define ZTE_PRODUCT_AC2726 0xfff1
4566 + #define ZTE_PRODUCT_MG880 0xfffd
4567 + #define ZTE_PRODUCT_CDMA_TECH 0xfffe
4568 +@@ -544,6 +548,18 @@ static const struct option_blacklist_info zte_mc2716_z_blacklist = {
4569 + .sendsetup = BIT(1) | BIT(2) | BIT(3),
4570 + };
4571 +
4572 ++static const struct option_blacklist_info zte_me3620_mbim_blacklist = {
4573 ++ .reserved = BIT(2) | BIT(3) | BIT(4),
4574 ++};
4575 ++
4576 ++static const struct option_blacklist_info zte_me3620_xl_blacklist = {
4577 ++ .reserved = BIT(3) | BIT(4) | BIT(5),
4578 ++};
4579 ++
4580 ++static const struct option_blacklist_info zte_zm8620_x_blacklist = {
4581 ++ .reserved = BIT(3) | BIT(4) | BIT(5),
4582 ++};
4583 ++
4584 + static const struct option_blacklist_info huawei_cdc12_blacklist = {
4585 + .reserved = BIT(1) | BIT(2),
4586 + };
4587 +@@ -1591,6 +1607,14 @@ static const struct usb_device_id option_ids[] = {
4588 + .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist },
4589 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff),
4590 + .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist },
4591 ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_L),
4592 ++ .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
4593 ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_MBIM),
4594 ++ .driver_info = (kernel_ulong_t)&zte_me3620_mbim_blacklist },
4595 ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_X),
4596 ++ .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
4597 ++ { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ZM8620_X),
4598 ++ .driver_info = (kernel_ulong_t)&zte_zm8620_x_blacklist },
4599 + { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
4600 + { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
4601 + { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
4602 +diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
4603 +index 6c3734d2b45a..d3ea90bef84d 100644
4604 +--- a/drivers/usb/serial/whiteheat.c
4605 ++++ b/drivers/usb/serial/whiteheat.c
4606 +@@ -80,6 +80,8 @@ static int whiteheat_firmware_download(struct usb_serial *serial,
4607 + static int whiteheat_firmware_attach(struct usb_serial *serial);
4608 +
4609 + /* function prototypes for the Connect Tech WhiteHEAT serial converter */
4610 ++static int whiteheat_probe(struct usb_serial *serial,
4611 ++ const struct usb_device_id *id);
4612 + static int whiteheat_attach(struct usb_serial *serial);
4613 + static void whiteheat_release(struct usb_serial *serial);
4614 + static int whiteheat_port_probe(struct usb_serial_port *port);
4615 +@@ -116,6 +118,7 @@ static struct usb_serial_driver whiteheat_device = {
4616 + .description = "Connect Tech - WhiteHEAT",
4617 + .id_table = id_table_std,
4618 + .num_ports = 4,
4619 ++ .probe = whiteheat_probe,
4620 + .attach = whiteheat_attach,
4621 + .release = whiteheat_release,
4622 + .port_probe = whiteheat_port_probe,
4623 +@@ -217,6 +220,34 @@ static int whiteheat_firmware_attach(struct usb_serial *serial)
4624 + /*****************************************************************************
4625 + * Connect Tech's White Heat serial driver functions
4626 + *****************************************************************************/
4627 ++
4628 ++static int whiteheat_probe(struct usb_serial *serial,
4629 ++ const struct usb_device_id *id)
4630 ++{
4631 ++ struct usb_host_interface *iface_desc;
4632 ++ struct usb_endpoint_descriptor *endpoint;
4633 ++ size_t num_bulk_in = 0;
4634 ++ size_t num_bulk_out = 0;
4635 ++ size_t min_num_bulk;
4636 ++ unsigned int i;
4637 ++
4638 ++ iface_desc = serial->interface->cur_altsetting;
4639 ++
4640 ++ for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
4641 ++ endpoint = &iface_desc->endpoint[i].desc;
4642 ++ if (usb_endpoint_is_bulk_in(endpoint))
4643 ++ ++num_bulk_in;
4644 ++ if (usb_endpoint_is_bulk_out(endpoint))
4645 ++ ++num_bulk_out;
4646 ++ }
4647 ++
4648 ++ min_num_bulk = COMMAND_PORT + 1;
4649 ++ if (num_bulk_in < min_num_bulk || num_bulk_out < min_num_bulk)
4650 ++ return -ENODEV;
4651 ++
4652 ++ return 0;
4653 ++}
4654 ++
4655 + static int whiteheat_attach(struct usb_serial *serial)
4656 + {
4657 + struct usb_serial_port *command_port;
4658 +diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
4659 +index 109462303087..d1e1e1704da1 100644
4660 +--- a/drivers/video/fbdev/Kconfig
4661 ++++ b/drivers/video/fbdev/Kconfig
4662 +@@ -298,7 +298,7 @@ config FB_ARMCLCD
4663 +
4664 + # Helper logic selected only by the ARM Versatile platform family.
4665 + config PLAT_VERSATILE_CLCD
4666 +- def_bool ARCH_VERSATILE || ARCH_REALVIEW || ARCH_VEXPRESS
4667 ++ def_bool ARCH_VERSATILE || ARCH_REALVIEW || ARCH_VEXPRESS || ARCH_INTEGRATOR
4668 + depends on ARM
4669 + depends on FB_ARMCLCD && FB=y
4670 +
4671 +diff --git a/drivers/watchdog/sunxi_wdt.c b/drivers/watchdog/sunxi_wdt.c
4672 +index a29afb37c48c..47bd8a14d01f 100644
4673 +--- a/drivers/watchdog/sunxi_wdt.c
4674 ++++ b/drivers/watchdog/sunxi_wdt.c
4675 +@@ -184,7 +184,7 @@ static int sunxi_wdt_start(struct watchdog_device *wdt_dev)
4676 + /* Set system reset function */
4677 + reg = readl(wdt_base + regs->wdt_cfg);
4678 + reg &= ~(regs->wdt_reset_mask);
4679 +- reg |= ~(regs->wdt_reset_val);
4680 ++ reg |= regs->wdt_reset_val;
4681 + writel(reg, wdt_base + regs->wdt_cfg);
4682 +
4683 + /* Enable watchdog */
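
The sunxi_wdt hunk is a one-character read-modify-write bug: "reg |= ~(val)"
sets nearly every bit in the register instead of programming the reset-function
field. The intended idiom, standalone (mask and value here are hypothetical,
not the real register layout):

    #include <stdint.h>
    #include <stdio.h>

    uint32_t set_field(uint32_t reg, uint32_t mask, uint32_t val)
    {
            reg &= ~mask;           /* clear the field... */
            reg |= val & mask;      /* ...then OR in the value itself, never ~val */
            return reg;
    }

    int main(void)
    {
            printf("0x%08x\n", set_field(0x000000f5, 0x0000000c, 0x00000008)); /* 0x000000f9 */
            return 0;
    }
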
4684 +diff --git a/fs/block_dev.c b/fs/block_dev.c
4685 +index c7e4163ede87..ccfd31f1df3a 100644
4686 +--- a/fs/block_dev.c
4687 ++++ b/fs/block_dev.c
4688 +@@ -1234,6 +1234,13 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, int for_part)
4689 + goto out_clear;
4690 + }
4691 + bd_set_size(bdev, (loff_t)bdev->bd_part->nr_sects << 9);
4692 ++ /*
4693 ++ * If the partition is not aligned on a page
4694 ++ * boundary, we can't do dax I/O to it.
4695 ++ */
4696 ++ if ((bdev->bd_part->start_sect % (PAGE_SIZE / 512)) ||
4697 ++ (bdev->bd_part->nr_sects % (PAGE_SIZE / 512)))
4698 ++ bdev->bd_inode->i_flags &= ~S_DAX;
4699 + }
4700 + } else {
4701 + if (bdev->bd_contains == bdev) {
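
The block_dev.c addition clears S_DAX on partitions that are not page-aligned,
since DAX maps whole pages of the device directly. Both the start sector and
the length must be multiples of PAGE_SIZE/512 sectors; a standalone restatement
of the check (assuming 4K pages, as on x86):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE   4096u
    #define SECTOR_SIZE  512u

    bool partition_dax_capable(uint64_t start_sect, uint64_t nr_sects)
    {
            const uint64_t spp = PAGE_SIZE / SECTOR_SIZE; /* sectors per page */

            return (start_sect % spp) == 0 && (nr_sects % spp) == 0;
    }
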
4702 +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
4703 +index c32d226bfecc..885f533a34d9 100644
4704 +--- a/fs/btrfs/extent_io.c
4705 ++++ b/fs/btrfs/extent_io.c
4706 +@@ -2795,7 +2795,8 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
4707 + bio_end_io_t end_io_func,
4708 + int mirror_num,
4709 + unsigned long prev_bio_flags,
4710 +- unsigned long bio_flags)
4711 ++ unsigned long bio_flags,
4712 ++ bool force_bio_submit)
4713 + {
4714 + int ret = 0;
4715 + struct bio *bio;
4716 +@@ -2813,6 +2814,7 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
4717 + contig = bio_end_sector(bio) == sector;
4718 +
4719 + if (prev_bio_flags != bio_flags || !contig ||
4720 ++ force_bio_submit ||
4721 + merge_bio(rw, tree, page, offset, page_size, bio, bio_flags) ||
4722 + bio_add_page(bio, page, page_size, offset) < page_size) {
4723 + ret = submit_one_bio(rw, bio, mirror_num,
4724 +@@ -2906,7 +2908,8 @@ static int __do_readpage(struct extent_io_tree *tree,
4725 + get_extent_t *get_extent,
4726 + struct extent_map **em_cached,
4727 + struct bio **bio, int mirror_num,
4728 +- unsigned long *bio_flags, int rw)
4729 ++ unsigned long *bio_flags, int rw,
4730 ++ u64 *prev_em_start)
4731 + {
4732 + struct inode *inode = page->mapping->host;
4733 + u64 start = page_offset(page);
4734 +@@ -2954,6 +2957,7 @@ static int __do_readpage(struct extent_io_tree *tree,
4735 + }
4736 + while (cur <= end) {
4737 + unsigned long pnr = (last_byte >> PAGE_CACHE_SHIFT) + 1;
4738 ++ bool force_bio_submit = false;
4739 +
4740 + if (cur >= last_byte) {
4741 + char *userpage;
4742 +@@ -3004,6 +3008,49 @@ static int __do_readpage(struct extent_io_tree *tree,
4743 + block_start = em->block_start;
4744 + if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
4745 + block_start = EXTENT_MAP_HOLE;
4746 ++
4747 ++ /*
4748 ++ * If we have a file range that points to a compressed extent
4749 ++ * and it's followed by a consecutive file range that points to
4750 ++ * to the same compressed extent (possibly with a different
4751 ++ * offset and/or length, so it either points to the whole extent
4752 ++ * or only part of it), we must make sure we do not submit a
4753 ++ * single bio to populate the pages for the 2 ranges because
4754 ++ * this makes the compressed extent read zero out the pages
4755 ++ * belonging to the 2nd range. Imagine the following scenario:
4756 ++ *
4757 ++ * File layout
4758 ++ * [0 - 8K] [8K - 24K]
4759 ++ * | |
4760 ++ * | |
4761 ++ * points to extent X, points to extent X,
4762 ++ * offset 4K, length of 8K offset 0, length 16K
4763 ++ *
4764 ++ * [extent X, compressed length = 4K uncompressed length = 16K]
4765 ++ *
4766 ++ * If the bio to read the compressed extent covers both ranges,
4767 ++ * it will decompress extent X into the pages belonging to the
4768 ++ * first range and then it will stop, zeroing out the remaining
4769 ++ * pages that belong to the other range that points to extent X.
4770 ++ * So here we make sure we submit 2 bios, one for the first
4771 ++ * range and another one for the third range. Both will target
4772 ++ * the same physical extent from disk, but we can't currently
4773 ++ * make the compressed bio endio callback populate the pages
4774 ++ * for both ranges because each compressed bio is tightly
4775 ++ * coupled with a single extent map, and each range can have
4776 ++ * an extent map with a different offset value relative to the
4777 ++ * uncompressed data of our extent and different lengths. This
4778 ++ * is a corner case so we prioritize correctness over
4779 ++ * non-optimal behavior (submitting 2 bios for the same extent).
4780 ++ */
4781 ++ if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
4782 ++ prev_em_start && *prev_em_start != (u64)-1 &&
4783 ++ *prev_em_start != em->orig_start)
4784 ++ force_bio_submit = true;
4785 ++
4786 ++ if (prev_em_start)
4787 ++ *prev_em_start = em->orig_start;
4788 ++
4789 + free_extent_map(em);
4790 + em = NULL;
4791 +
4792 +@@ -3053,7 +3100,8 @@ static int __do_readpage(struct extent_io_tree *tree,
4793 + bdev, bio, pnr,
4794 + end_bio_extent_readpage, mirror_num,
4795 + *bio_flags,
4796 +- this_bio_flag);
4797 ++ this_bio_flag,
4798 ++ force_bio_submit);
4799 + if (!ret) {
4800 + nr++;
4801 + *bio_flags = this_bio_flag;
4802 +@@ -3080,7 +3128,8 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
4803 + get_extent_t *get_extent,
4804 + struct extent_map **em_cached,
4805 + struct bio **bio, int mirror_num,
4806 +- unsigned long *bio_flags, int rw)
4807 ++ unsigned long *bio_flags, int rw,
4808 ++ u64 *prev_em_start)
4809 + {
4810 + struct inode *inode;
4811 + struct btrfs_ordered_extent *ordered;
4812 +@@ -3100,7 +3149,7 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
4813 +
4814 + for (index = 0; index < nr_pages; index++) {
4815 + __do_readpage(tree, pages[index], get_extent, em_cached, bio,
4816 +- mirror_num, bio_flags, rw);
4817 ++ mirror_num, bio_flags, rw, prev_em_start);
4818 + page_cache_release(pages[index]);
4819 + }
4820 + }
4821 +@@ -3110,7 +3159,8 @@ static void __extent_readpages(struct extent_io_tree *tree,
4822 + int nr_pages, get_extent_t *get_extent,
4823 + struct extent_map **em_cached,
4824 + struct bio **bio, int mirror_num,
4825 +- unsigned long *bio_flags, int rw)
4826 ++ unsigned long *bio_flags, int rw,
4827 ++ u64 *prev_em_start)
4828 + {
4829 + u64 start = 0;
4830 + u64 end = 0;
4831 +@@ -3131,7 +3181,7 @@ static void __extent_readpages(struct extent_io_tree *tree,
4832 + index - first_index, start,
4833 + end, get_extent, em_cached,
4834 + bio, mirror_num, bio_flags,
4835 +- rw);
4836 ++ rw, prev_em_start);
4837 + start = page_start;
4838 + end = start + PAGE_CACHE_SIZE - 1;
4839 + first_index = index;
4840 +@@ -3142,7 +3192,8 @@ static void __extent_readpages(struct extent_io_tree *tree,
4841 + __do_contiguous_readpages(tree, &pages[first_index],
4842 + index - first_index, start,
4843 + end, get_extent, em_cached, bio,
4844 +- mirror_num, bio_flags, rw);
4845 ++ mirror_num, bio_flags, rw,
4846 ++ prev_em_start);
4847 + }
4848 +
4849 + static int __extent_read_full_page(struct extent_io_tree *tree,
4850 +@@ -3168,7 +3219,7 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
4851 + }
4852 +
4853 + ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
4854 +- bio_flags, rw);
4855 ++ bio_flags, rw, NULL);
4856 + return ret;
4857 + }
4858 +
4859 +@@ -3194,7 +3245,7 @@ int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page,
4860 + int ret;
4861 +
4862 + ret = __do_readpage(tree, page, get_extent, NULL, &bio, mirror_num,
4863 +- &bio_flags, READ);
4864 ++ &bio_flags, READ, NULL);
4865 + if (bio)
4866 + ret = submit_one_bio(READ, bio, mirror_num, bio_flags);
4867 + return ret;
4868 +@@ -3447,7 +3498,7 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode,
4869 + sector, iosize, pg_offset,
4870 + bdev, &epd->bio, max_nr,
4871 + end_bio_extent_writepage,
4872 +- 0, 0, 0);
4873 ++ 0, 0, 0, false);
4874 + if (ret)
4875 + SetPageError(page);
4876 + }
4877 +@@ -3749,7 +3800,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
4878 + ret = submit_extent_page(rw, tree, p, offset >> 9,
4879 + PAGE_CACHE_SIZE, 0, bdev, &epd->bio,
4880 + -1, end_bio_extent_buffer_writepage,
4881 +- 0, epd->bio_flags, bio_flags);
4882 ++ 0, epd->bio_flags, bio_flags, false);
4883 + epd->bio_flags = bio_flags;
4884 + if (ret) {
4885 + set_btree_ioerr(p);
4886 +@@ -4153,6 +4204,7 @@ int extent_readpages(struct extent_io_tree *tree,
4887 + struct page *page;
4888 + struct extent_map *em_cached = NULL;
4889 + int nr = 0;
4890 ++ u64 prev_em_start = (u64)-1;
4891 +
4892 + for (page_idx = 0; page_idx < nr_pages; page_idx++) {
4893 + page = list_entry(pages->prev, struct page, lru);
4894 +@@ -4169,12 +4221,12 @@ int extent_readpages(struct extent_io_tree *tree,
4895 + if (nr < ARRAY_SIZE(pagepool))
4896 + continue;
4897 + __extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
4898 +- &bio, 0, &bio_flags, READ);
4899 ++ &bio, 0, &bio_flags, READ, &prev_em_start);
4900 + nr = 0;
4901 + }
4902 + if (nr)
4903 + __extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
4904 +- &bio, 0, &bio_flags, READ);
4905 ++ &bio, 0, &bio_flags, READ, &prev_em_start);
4906 +
4907 + if (em_cached)
4908 + free_extent_map(em_cached);
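
The long comment in __do_readpage above carries the whole argument: two
consecutive file ranges may reference the same compressed extent at different
offsets, and a single bio covering both would let the decompression end-io
zero out the second range's pages. prev_em_start threads the previously seen
extent's orig_start down the readahead path so the check can fire. The shape
of that decision, isolated (not btrfs code; (u64)-1 is the "nothing seen yet"
sentinel from the patch):

    #include <stdbool.h>
    #include <stdint.h>

    bool must_start_new_bio(bool compressed, uint64_t em_orig_start,
                            uint64_t *prev_em_start)
    {
            bool split = compressed &&
                         *prev_em_start != (uint64_t)-1 &&
                         *prev_em_start != em_orig_start;

            *prev_em_start = em_orig_start; /* remember for the next page */
            return split;
    }
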
4909 +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
4910 +index 8bb013672aee..e3b39f0c4666 100644
4911 +--- a/fs/btrfs/inode.c
4912 ++++ b/fs/btrfs/inode.c
4913 +@@ -5035,7 +5035,8 @@ void btrfs_evict_inode(struct inode *inode)
4914 + goto no_delete;
4915 + }
4916 + /* do we really want it for ->i_nlink > 0 and zero btrfs_root_refs? */
4917 +- btrfs_wait_ordered_range(inode, 0, (u64)-1);
4918 ++ if (!special_file(inode->i_mode))
4919 ++ btrfs_wait_ordered_range(inode, 0, (u64)-1);
4920 +
4921 + btrfs_free_io_failure_record(inode, 0, (u64)-1);
4922 +
4923 +diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
4924 +index aa0dc2573374..afa09fce8151 100644
4925 +--- a/fs/cifs/cifsencrypt.c
4926 ++++ b/fs/cifs/cifsencrypt.c
4927 +@@ -444,6 +444,48 @@ find_domain_name(struct cifs_ses *ses, const struct nls_table *nls_cp)
4928 + return 0;
4929 + }
4930 +
4931 ++/* Server has provided av pairs/target info in the type 2 challenge
4932 ++ * packet and we have plucked it and stored within smb session.
4933 ++ * We parse that blob here to find the server given timestamp
4934 ++ * as part of ntlmv2 authentication (or local current time as
4935 ++ * default in case of failure)
4936 ++ */
4937 ++static __le64
4938 ++find_timestamp(struct cifs_ses *ses)
4939 ++{
4940 ++ unsigned int attrsize;
4941 ++ unsigned int type;
4942 ++ unsigned int onesize = sizeof(struct ntlmssp2_name);
4943 ++ unsigned char *blobptr;
4944 ++ unsigned char *blobend;
4945 ++ struct ntlmssp2_name *attrptr;
4946 ++
4947 ++ if (!ses->auth_key.len || !ses->auth_key.response)
4948 ++ return 0;
4949 ++
4950 ++ blobptr = ses->auth_key.response;
4951 ++ blobend = blobptr + ses->auth_key.len;
4952 ++
4953 ++ while (blobptr + onesize < blobend) {
4954 ++ attrptr = (struct ntlmssp2_name *) blobptr;
4955 ++ type = le16_to_cpu(attrptr->type);
4956 ++ if (type == NTLMSSP_AV_EOL)
4957 ++ break;
4958 ++ blobptr += 2; /* advance attr type */
4959 ++ attrsize = le16_to_cpu(attrptr->length);
4960 ++ blobptr += 2; /* advance attr size */
4961 ++ if (blobptr + attrsize > blobend)
4962 ++ break;
4963 ++ if (type == NTLMSSP_AV_TIMESTAMP) {
4964 ++ if (attrsize == sizeof(u64))
4965 ++ return *((__le64 *)blobptr);
4966 ++ }
4967 ++ blobptr += attrsize; /* advance attr value */
4968 ++ }
4969 ++
4970 ++ return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
4971 ++}
4972 ++
4973 + static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
4974 + const struct nls_table *nls_cp)
4975 + {
4976 +@@ -641,6 +683,7 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
4977 + struct ntlmv2_resp *ntlmv2;
4978 + char ntlmv2_hash[16];
4979 + unsigned char *tiblob = NULL; /* target info blob */
4980 ++ __le64 rsp_timestamp;
4981 +
4982 + if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) {
4983 + if (!ses->domainName) {
4984 +@@ -659,6 +702,12 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
4985 + }
4986 + }
4987 +
4988 ++ /* Must be within 5 minutes of the server (or in range +/-2h
4989 ++ * in case of Mac OS X), so simply carry over server timestamp
4990 ++ * (as Windows 7 does)
4991 ++ */
4992 ++ rsp_timestamp = find_timestamp(ses);
4993 ++
4994 + baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
4995 + tilen = ses->auth_key.len;
4996 + tiblob = ses->auth_key.response;
4997 +@@ -675,8 +724,8 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
4998 + (ses->auth_key.response + CIFS_SESS_KEY_SIZE);
4999 + ntlmv2->blob_signature = cpu_to_le32(0x00000101);
5000 + ntlmv2->reserved = 0;
5001 +- /* Must be within 5 minutes of the server */
5002 +- ntlmv2->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
5003 ++ ntlmv2->time = rsp_timestamp;
5004 ++
5005 + get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal));
5006 + ntlmv2->reserved2 = 0;
5007 +
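
find_timestamp() above walks the NTLMSSP target-info blob, a sequence of
little-endian { u16 type, u16 length, value } records terminated by AV_EOL
(type 0). A standalone walker with the same bounds discipline (assumes a
little-endian host for brevity; the kernel code goes through le16_to_cpu):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    const uint8_t *find_av(const uint8_t *blob, size_t len,
                           uint16_t want, uint16_t *out_len)
    {
            size_t off = 0;

            while (off + 4 <= len) {                /* room for a header? */
                    uint16_t type, alen;

                    memcpy(&type, blob + off, 2);
                    memcpy(&alen, blob + off + 2, 2);
                    off += 4;
                    if (type == 0)                  /* AV_EOL ends the list */
                            break;
                    if (off + alen > len)           /* truncated record */
                            break;
                    if (type == want) {
                            *out_len = alen;
                            return blob + off;
                    }
                    off += alen;                    /* skip the value */
            }
            return NULL;    /* caller falls back, as the patch does with
                               the local CURRENT_TIME */
    }
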
5008 +diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
5009 +index f621b44cb800..6b66dd5d1540 100644
5010 +--- a/fs/cifs/inode.c
5011 ++++ b/fs/cifs/inode.c
5012 +@@ -2034,7 +2034,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
5013 + struct tcon_link *tlink = NULL;
5014 + struct cifs_tcon *tcon = NULL;
5015 + struct TCP_Server_Info *server;
5016 +- struct cifs_io_parms io_parms;
5017 +
5018 + /*
5019 + * To avoid spurious oplock breaks from server, in the case of
5020 +@@ -2056,18 +2055,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
5021 + rc = -ENOSYS;
5022 + cifsFileInfo_put(open_file);
5023 + cifs_dbg(FYI, "SetFSize for attrs rc = %d\n", rc);
5024 +- if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
5025 +- unsigned int bytes_written;
5026 +-
5027 +- io_parms.netfid = open_file->fid.netfid;
5028 +- io_parms.pid = open_file->pid;
5029 +- io_parms.tcon = tcon;
5030 +- io_parms.offset = 0;
5031 +- io_parms.length = attrs->ia_size;
5032 +- rc = CIFSSMBWrite(xid, &io_parms, &bytes_written,
5033 +- NULL, NULL, 1);
5034 +- cifs_dbg(FYI, "Wrt seteof rc %d\n", rc);
5035 +- }
5036 + } else
5037 + rc = -EINVAL;
5038 +
5039 +@@ -2093,28 +2080,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
5040 + else
5041 + rc = -ENOSYS;
5042 + cifs_dbg(FYI, "SetEOF by path (setattrs) rc = %d\n", rc);
5043 +- if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
5044 +- __u16 netfid;
5045 +- int oplock = 0;
5046 +
5047 +- rc = SMBLegacyOpen(xid, tcon, full_path, FILE_OPEN,
5048 +- GENERIC_WRITE, CREATE_NOT_DIR, &netfid,
5049 +- &oplock, NULL, cifs_sb->local_nls,
5050 +- cifs_remap(cifs_sb));
5051 +- if (rc == 0) {
5052 +- unsigned int bytes_written;
5053 +-
5054 +- io_parms.netfid = netfid;
5055 +- io_parms.pid = current->tgid;
5056 +- io_parms.tcon = tcon;
5057 +- io_parms.offset = 0;
5058 +- io_parms.length = attrs->ia_size;
5059 +- rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, NULL,
5060 +- NULL, 1);
5061 +- cifs_dbg(FYI, "wrt seteof rc %d\n", rc);
5062 +- CIFSSMBClose(xid, tcon, netfid);
5063 +- }
5064 +- }
5065 + if (tlink)
5066 + cifs_put_tlink(tlink);
5067 +
5068 +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
5069 +index 54daee5ad4c1..1678b9cb94c7 100644
5070 +--- a/fs/cifs/smb2ops.c
5071 ++++ b/fs/cifs/smb2ops.c
5072 +@@ -50,9 +50,13 @@ change_conf(struct TCP_Server_Info *server)
5073 + break;
5074 + default:
5075 + server->echoes = true;
5076 +- server->oplocks = true;
5077 ++ if (enable_oplocks) {
5078 ++ server->oplocks = true;
5079 ++ server->oplock_credits = 1;
5080 ++ } else
5081 ++ server->oplocks = false;
5082 ++
5083 + server->echo_credits = 1;
5084 +- server->oplock_credits = 1;
5085 + }
5086 + server->credits -= server->echo_credits + server->oplock_credits;
5087 + return 0;
5088 +diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
5089 +index 54cbe19d9c08..894f259d3989 100644
5090 +--- a/fs/cifs/smb2pdu.c
5091 ++++ b/fs/cifs/smb2pdu.c
5092 +@@ -46,6 +46,7 @@
5093 + #include "smb2status.h"
5094 + #include "smb2glob.h"
5095 + #include "cifspdu.h"
5096 ++#include "cifs_spnego.h"
5097 +
5098 + /*
5099 + * The following table defines the expected "StructureSize" of SMB2 requests
5100 +@@ -427,19 +428,15 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
5101 + cifs_dbg(FYI, "missing security blob on negprot\n");
5102 +
5103 + rc = cifs_enable_signing(server, ses->sign);
5104 +-#ifdef CONFIG_SMB2_ASN1 /* BB REMOVEME when updated asn1.c ready */
5105 + if (rc)
5106 + goto neg_exit;
5107 +- if (blob_length)
5108 ++ if (blob_length) {
5109 + rc = decode_negTokenInit(security_blob, blob_length, server);
5110 +- if (rc == 1)
5111 +- rc = 0;
5112 +- else if (rc == 0) {
5113 +- rc = -EIO;
5114 +- goto neg_exit;
5115 ++ if (rc == 1)
5116 ++ rc = 0;
5117 ++ else if (rc == 0)
5118 ++ rc = -EIO;
5119 + }
5120 +-#endif
5121 +-
5122 + neg_exit:
5123 + free_rsp_buf(resp_buftype, rsp);
5124 + return rc;
5125 +@@ -533,7 +530,8 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
5126 + __le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */
5127 + struct TCP_Server_Info *server = ses->server;
5128 + u16 blob_length = 0;
5129 +- char *security_blob;
5130 ++ struct key *spnego_key = NULL;
5131 ++ char *security_blob = NULL;
5132 + char *ntlmssp_blob = NULL;
5133 + bool use_spnego = false; /* else use raw ntlmssp */
5134 +
5135 +@@ -561,7 +559,8 @@ SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,
5136 + ses->ntlmssp->sesskey_per_smbsess = true;
5137 +
5138 + /* FIXME: allow for other auth types besides NTLMSSP (e.g. krb5) */
5139 +- ses->sectype = RawNTLMSSP;
5140 ++ if (ses->sectype != Kerberos && ses->sectype != RawNTLMSSP)
5141 ++ ses->sectype = RawNTLMSSP;
5142 +
5143 + ssetup_ntlmssp_authenticate:
5144 + if (phase == NtLmChallenge)
5145 +@@ -590,7 +589,48 @@ ssetup_ntlmssp_authenticate:
5146 + iov[0].iov_base = (char *)req;
5147 + /* 4 for rfc1002 length field and 1 for pad */
5148 + iov[0].iov_len = get_rfc1002_length(req) + 4 - 1;
5149 +- if (phase == NtLmNegotiate) {
5150 ++
5151 ++ if (ses->sectype == Kerberos) {
5152 ++#ifdef CONFIG_CIFS_UPCALL
5153 ++ struct cifs_spnego_msg *msg;
5154 ++
5155 ++ spnego_key = cifs_get_spnego_key(ses);
5156 ++ if (IS_ERR(spnego_key)) {
5157 ++ rc = PTR_ERR(spnego_key);
5158 ++ spnego_key = NULL;
5159 ++ goto ssetup_exit;
5160 ++ }
5161 ++
5162 ++ msg = spnego_key->payload.data;
5163 ++ /*
5164 ++ * check version field to make sure that cifs.upcall is
5165 ++ * sending us a response in an expected form
5166 ++ */
5167 ++ if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) {
5168 ++ cifs_dbg(VFS,
5169 ++ "bad cifs.upcall version. Expected %d got %d",
5170 ++ CIFS_SPNEGO_UPCALL_VERSION, msg->version);
5171 ++ rc = -EKEYREJECTED;
5172 ++ goto ssetup_exit;
5173 ++ }
5174 ++ ses->auth_key.response = kmemdup(msg->data, msg->sesskey_len,
5175 ++ GFP_KERNEL);
5176 ++ if (!ses->auth_key.response) {
5177 ++ cifs_dbg(VFS,
5178 ++ "Kerberos can't allocate (%u bytes) memory",
5179 ++ msg->sesskey_len);
5180 ++ rc = -ENOMEM;
5181 ++ goto ssetup_exit;
5182 ++ }
5183 ++ ses->auth_key.len = msg->sesskey_len;
5184 ++ blob_length = msg->secblob_len;
5185 ++ iov[1].iov_base = msg->data + msg->sesskey_len;
5186 ++ iov[1].iov_len = blob_length;
5187 ++#else
5188 ++ rc = -EOPNOTSUPP;
5189 ++ goto ssetup_exit;
5190 ++#endif /* CONFIG_CIFS_UPCALL */
5191 ++ } else if (phase == NtLmNegotiate) { /* if not krb5 must be ntlmssp */
5192 + ntlmssp_blob = kmalloc(sizeof(struct _NEGOTIATE_MESSAGE),
5193 + GFP_KERNEL);
5194 + if (ntlmssp_blob == NULL) {
5195 +@@ -613,6 +653,8 @@ ssetup_ntlmssp_authenticate:
5196 + /* with raw NTLMSSP we don't encapsulate in SPNEGO */
5197 + security_blob = ntlmssp_blob;
5198 + }
5199 ++ iov[1].iov_base = security_blob;
5200 ++ iov[1].iov_len = blob_length;
5201 + } else if (phase == NtLmAuthenticate) {
5202 + req->hdr.SessionId = ses->Suid;
5203 + ntlmssp_blob = kzalloc(sizeof(struct _NEGOTIATE_MESSAGE) + 500,
5204 +@@ -640,6 +682,8 @@ ssetup_ntlmssp_authenticate:
5205 + } else {
5206 + security_blob = ntlmssp_blob;
5207 + }
5208 ++ iov[1].iov_base = security_blob;
5209 ++ iov[1].iov_len = blob_length;
5210 + } else {
5211 + cifs_dbg(VFS, "illegal ntlmssp phase\n");
5212 + rc = -EIO;
5213 +@@ -651,8 +695,6 @@ ssetup_ntlmssp_authenticate:
5214 + cpu_to_le16(sizeof(struct smb2_sess_setup_req) -
5215 + 1 /* pad */ - 4 /* rfc1001 len */);
5216 + req->SecurityBufferLength = cpu_to_le16(blob_length);
5217 +- iov[1].iov_base = security_blob;
5218 +- iov[1].iov_len = blob_length;
5219 +
5220 + inc_rfc1001_len(req, blob_length - 1 /* pad */);
5221 +
5222 +@@ -663,6 +705,7 @@ ssetup_ntlmssp_authenticate:
5223 +
5224 + kfree(security_blob);
5225 + rsp = (struct smb2_sess_setup_rsp *)iov[0].iov_base;
5226 ++ ses->Suid = rsp->hdr.SessionId;
5227 + if (resp_buftype != CIFS_NO_BUFFER &&
5228 + rsp->hdr.Status == STATUS_MORE_PROCESSING_REQUIRED) {
5229 + if (phase != NtLmNegotiate) {
5230 +@@ -680,7 +723,6 @@ ssetup_ntlmssp_authenticate:
5231 + /* NTLMSSP Negotiate sent now processing challenge (response) */
5232 + phase = NtLmChallenge; /* process ntlmssp challenge */
5233 + rc = 0; /* MORE_PROCESSING is not an error here but expected */
5234 +- ses->Suid = rsp->hdr.SessionId;
5235 + rc = decode_ntlmssp_challenge(rsp->Buffer,
5236 + le16_to_cpu(rsp->SecurityBufferLength), ses);
5237 + }
5238 +@@ -737,6 +779,10 @@ keygen_exit:
5239 + kfree(ses->auth_key.response);
5240 + ses->auth_key.response = NULL;
5241 + }
5242 ++ if (spnego_key) {
5243 ++ key_invalidate(spnego_key);
5244 ++ key_put(spnego_key);
5245 ++ }
5246 + kfree(ses->ntlmssp);
5247 +
5248 + return rc;
5249 +diff --git a/fs/dcache.c b/fs/dcache.c
5250 +index 5d03eb0ec0ac..0046ab7d4f3d 100644
5251 +--- a/fs/dcache.c
5252 ++++ b/fs/dcache.c
5253 +@@ -1676,7 +1676,8 @@ void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op)
5254 + DCACHE_OP_COMPARE |
5255 + DCACHE_OP_REVALIDATE |
5256 + DCACHE_OP_WEAK_REVALIDATE |
5257 +- DCACHE_OP_DELETE ));
5258 ++ DCACHE_OP_DELETE |
5259 ++ DCACHE_OP_SELECT_INODE));
5260 + dentry->d_op = op;
5261 + if (!op)
5262 + return;
5263 +@@ -1692,6 +1693,8 @@ void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op)
5264 + dentry->d_flags |= DCACHE_OP_DELETE;
5265 + if (op->d_prune)
5266 + dentry->d_flags |= DCACHE_OP_PRUNE;
5267 ++ if (op->d_select_inode)
5268 ++ dentry->d_flags |= DCACHE_OP_SELECT_INODE;
5269 +
5270 + }
5271 + EXPORT_SYMBOL(d_set_d_op);
5272 +@@ -2923,6 +2926,13 @@ restart:
5273 +
5274 + if (dentry == vfsmnt->mnt_root || IS_ROOT(dentry)) {
5275 + struct mount *parent = ACCESS_ONCE(mnt->mnt_parent);
5276 ++ /* Escaped? */
5277 ++ if (dentry != vfsmnt->mnt_root) {
5278 ++ bptr = *buffer;
5279 ++ blen = *buflen;
5280 ++ error = 3;
5281 ++ break;
5282 ++ }
5283 + /* Global root? */
5284 + if (mnt != parent) {
5285 + dentry = ACCESS_ONCE(mnt->mnt_mountpoint);
5286 +diff --git a/fs/internal.h b/fs/internal.h
5287 +index 01dce1d1476b..4d5af583ab03 100644
5288 +--- a/fs/internal.h
5289 ++++ b/fs/internal.h
5290 +@@ -107,6 +107,7 @@ extern struct file *do_file_open_root(struct dentry *, struct vfsmount *,
5291 + extern long do_handle_open(int mountdirfd,
5292 + struct file_handle __user *ufh, int open_flag);
5293 + extern int open_check_o_direct(struct file *f);
5294 ++extern int vfs_open(const struct path *, struct file *, const struct cred *);
5295 +
5296 + /*
5297 + * inode.c
5298 +diff --git a/fs/namei.c b/fs/namei.c
5299 +index fe30d3be43a8..ccd7f98d85b9 100644
5300 +--- a/fs/namei.c
5301 ++++ b/fs/namei.c
5302 +@@ -505,6 +505,24 @@ struct nameidata {
5303 + char *saved_names[MAX_NESTED_LINKS + 1];
5304 + };
5305 +
5306 ++/**
5307 ++ * path_connected - Verify that a path->dentry is below path->mnt.mnt_root
5308 ++ * @path: nameidate to verify
5309 ++ *
5310 ++ * Rename can sometimes move a file or directory outside of a bind
5311 ++ * mount, path_connected allows those cases to be detected.
5312 ++ */
5313 ++static bool path_connected(const struct path *path)
5314 ++{
5315 ++ struct vfsmount *mnt = path->mnt;
5316 ++
5317 ++ /* Only bind mounts can have disconnected paths */
5318 ++ if (mnt->mnt_root == mnt->mnt_sb->s_root)
5319 ++ return true;
5320 ++
5321 ++ return is_subdir(path->dentry, mnt->mnt_root);
5322 ++}
5323 ++
5324 + /*
5325 + * Path walking has 2 modes, rcu-walk and ref-walk (see
5326 + * Documentation/filesystems/path-lookup.txt). In situations when we can't
5327 +@@ -1194,6 +1212,8 @@ static int follow_dotdot_rcu(struct nameidata *nd)
5328 + goto failed;
5329 + nd->path.dentry = parent;
5330 + nd->seq = seq;
5331 ++ if (unlikely(!path_connected(&nd->path)))
5332 ++ goto failed;
5333 + break;
5334 + }
5335 + if (!follow_up_rcu(&nd->path))
5336 +@@ -1290,7 +1310,7 @@ static void follow_mount(struct path *path)
5337 + }
5338 + }
5339 +
5340 +-static void follow_dotdot(struct nameidata *nd)
5341 ++static int follow_dotdot(struct nameidata *nd)
5342 + {
5343 + if (!nd->root.mnt)
5344 + set_root(nd);
5345 +@@ -1306,6 +1326,10 @@ static void follow_dotdot(struct nameidata *nd)
5346 + /* rare case of legitimate dget_parent()... */
5347 + nd->path.dentry = dget_parent(nd->path.dentry);
5348 + dput(old);
5349 ++ if (unlikely(!path_connected(&nd->path))) {
5350 ++ path_put(&nd->path);
5351 ++ return -ENOENT;
5352 ++ }
5353 + break;
5354 + }
5355 + if (!follow_up(&nd->path))
5356 +@@ -1313,6 +1337,7 @@ static void follow_dotdot(struct nameidata *nd)
5357 + }
5358 + follow_mount(&nd->path);
5359 + nd->inode = nd->path.dentry->d_inode;
5360 ++ return 0;
5361 + }
5362 +
5363 + /*
5364 +@@ -1428,8 +1453,6 @@ static int lookup_fast(struct nameidata *nd,
5365 + negative = d_is_negative(dentry);
5366 + if (read_seqcount_retry(&dentry->d_seq, seq))
5367 + return -ECHILD;
5368 +- if (negative)
5369 +- return -ENOENT;
5370 +
5371 + /*
5372 + * This sequence count validates that the parent had no
5373 +@@ -1450,6 +1473,12 @@ static int lookup_fast(struct nameidata *nd,
5374 + goto unlazy;
5375 + }
5376 + }
5377 ++ /*
5378 ++ * Note: do negative dentry check after revalidation in
5379 ++ * case that drops it.
5380 ++ */
5381 ++ if (negative)
5382 ++ return -ENOENT;
5383 + path->mnt = mnt;
5384 + path->dentry = dentry;
5385 + if (likely(__follow_mount_rcu(nd, path, inode)))
5386 +@@ -1541,7 +1570,7 @@ static inline int handle_dots(struct nameidata *nd, int type)
5387 + if (follow_dotdot_rcu(nd))
5388 + return -ECHILD;
5389 + } else
5390 +- follow_dotdot(nd);
5391 ++ return follow_dotdot(nd);
5392 + }
5393 + return 0;
5394 + }
5395 +@@ -2290,7 +2319,7 @@ mountpoint_last(struct nameidata *nd, struct path *path)
5396 + if (unlikely(nd->last_type != LAST_NORM)) {
5397 + error = handle_dots(nd, nd->last_type);
5398 + if (error)
5399 +- goto out;
5400 ++ return error;
5401 + dentry = dget(nd->path.dentry);
5402 + goto done;
5403 + }
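
path_connected() guards ".." traversal: a concurrent rename can move a
directory out from under a bind mount, so after stepping to the parent the
walk must verify that the mount root is still an ancestor. Stripped of the
RCU and mount machinery, the ancestry test is just a walk up the parent
chain, roughly what is_subdir() provides:

    #include <stdbool.h>
    #include <stddef.h>

    struct node {
            struct node *parent;    /* NULL at the filesystem root */
    };

    bool connected(const struct node *d, const struct node *root)
    {
            for (; d != NULL; d = d->parent)
                    if (d == root)
                            return true;
            return false;   /* escaped the bind mount: fail with -ENOENT */
    }
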
5404 +diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
5405 +index a46bf6de9ce4..fb1fb2774d34 100644
5406 +--- a/fs/nfs/filelayout/filelayout.c
5407 ++++ b/fs/nfs/filelayout/filelayout.c
5408 +@@ -628,23 +628,18 @@ out_put:
5409 + goto out;
5410 + }
5411 +
5412 +-static void filelayout_free_fh_array(struct nfs4_filelayout_segment *fl)
5413 ++static void _filelayout_free_lseg(struct nfs4_filelayout_segment *fl)
5414 + {
5415 + int i;
5416 +
5417 +- for (i = 0; i < fl->num_fh; i++) {
5418 +- if (!fl->fh_array[i])
5419 +- break;
5420 +- kfree(fl->fh_array[i]);
5421 ++ if (fl->fh_array) {
5422 ++ for (i = 0; i < fl->num_fh; i++) {
5423 ++ if (!fl->fh_array[i])
5424 ++ break;
5425 ++ kfree(fl->fh_array[i]);
5426 ++ }
5427 ++ kfree(fl->fh_array);
5428 + }
5429 +- kfree(fl->fh_array);
5430 +- fl->fh_array = NULL;
5431 +-}
5432 +-
5433 +-static void
5434 +-_filelayout_free_lseg(struct nfs4_filelayout_segment *fl)
5435 +-{
5436 +- filelayout_free_fh_array(fl);
5437 + kfree(fl);
5438 + }
5439 +
5440 +@@ -715,21 +710,21 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
5441 + /* Do we want to use a mempool here? */
5442 + fl->fh_array[i] = kmalloc(sizeof(struct nfs_fh), gfp_flags);
5443 + if (!fl->fh_array[i])
5444 +- goto out_err_free;
5445 ++ goto out_err;
5446 +
5447 + p = xdr_inline_decode(&stream, 4);
5448 + if (unlikely(!p))
5449 +- goto out_err_free;
5450 ++ goto out_err;
5451 + fl->fh_array[i]->size = be32_to_cpup(p++);
5452 + if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
5453 + printk(KERN_ERR "NFS: Too big fh %d received %d\n",
5454 + i, fl->fh_array[i]->size);
5455 +- goto out_err_free;
5456 ++ goto out_err;
5457 + }
5458 +
5459 + p = xdr_inline_decode(&stream, fl->fh_array[i]->size);
5460 + if (unlikely(!p))
5461 +- goto out_err_free;
5462 ++ goto out_err;
5463 + memcpy(fl->fh_array[i]->data, p, fl->fh_array[i]->size);
5464 + dprintk("DEBUG: %s: fh len %d\n", __func__,
5465 + fl->fh_array[i]->size);
5466 +@@ -738,8 +733,6 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
5467 + __free_page(scratch);
5468 + return 0;
5469 +
5470 +-out_err_free:
5471 +- filelayout_free_fh_array(fl);
5472 + out_err:
5473 + __free_page(scratch);
5474 + return -EIO;
5475 +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
5476 +index 069914ce7641..93d355c8b467 100644
5477 +--- a/fs/nfs/pagelist.c
5478 ++++ b/fs/nfs/pagelist.c
5479 +@@ -508,7 +508,7 @@ size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc,
5480 + * for it without upsetting the slab allocator.
5481 + */
5482 + if (((mirror->pg_count + req->wb_bytes) >> PAGE_SHIFT) *
5483 +- sizeof(struct page) > PAGE_SIZE)
5484 ++ sizeof(struct page *) > PAGE_SIZE)
5485 + return 0;
5486 +
5487 + return min(mirror->pg_bsize - mirror->pg_count, (size_t)req->wb_bytes);
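
The pagelist.c change is a one-token sizeof bug: the bound protects an
allocation of page *pointers*, but the check multiplied by the size of
struct page itself, which is far larger, so coalescing was cut off much too
early. The difference, made concrete (struct page here is a small stand-in):

    #include <stdio.h>

    struct page { char pad[64]; };  /* stand-in; the real struct is bigger still */

    int main(void)
    {
            unsigned int n = 100;

            printf("array of pointers: %zu bytes\n", n * sizeof(struct page *)); /* 800 on LP64 */
            printf("array of structs:  %zu bytes\n", n * sizeof(struct page));   /* 6400 */
            return 0;
    }
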
5488 +diff --git a/fs/nfs/read.c b/fs/nfs/read.c
5489 +index ae0ff7a11b40..01b8cc8e8cfc 100644
5490 +--- a/fs/nfs/read.c
5491 ++++ b/fs/nfs/read.c
5492 +@@ -72,6 +72,9 @@ void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio)
5493 + {
5494 + struct nfs_pgio_mirror *mirror;
5495 +
5496 ++ if (pgio->pg_ops && pgio->pg_ops->pg_cleanup)
5497 ++ pgio->pg_ops->pg_cleanup(pgio);
5498 ++
5499 + pgio->pg_ops = &nfs_pgio_rw_ops;
5500 +
5501 + /* read path should never have more than one mirror */
5502 +diff --git a/fs/nfs/write.c b/fs/nfs/write.c
5503 +index 07115b9b1ad2..d9851a6a2813 100644
5504 +--- a/fs/nfs/write.c
5505 ++++ b/fs/nfs/write.c
5506 +@@ -1203,7 +1203,7 @@ static int nfs_can_extend_write(struct file *file, struct page *page, struct ino
5507 + return 1;
5508 + if (!flctx || (list_empty_careful(&flctx->flc_flock) &&
5509 + list_empty_careful(&flctx->flc_posix)))
5510 +- return 0;
5511 ++ return 1;
5512 +
5513 + /* Check to see if there are whole file write locks */
5514 + ret = 0;
5515 +@@ -1331,6 +1331,9 @@ void nfs_pageio_reset_write_mds(struct nfs_pageio_descriptor *pgio)
5516 + {
5517 + struct nfs_pgio_mirror *mirror;
5518 +
5519 ++ if (pgio->pg_ops && pgio->pg_ops->pg_cleanup)
5520 ++ pgio->pg_ops->pg_cleanup(pgio);
5521 ++
5522 + pgio->pg_ops = &nfs_pgio_rw_ops;
5523 +
5524 + nfs_pageio_stop_mirroring(pgio);
5525 +diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
5526 +index fdf4b41d0609..482cfd34472d 100644
5527 +--- a/fs/ocfs2/dlm/dlmmaster.c
5528 ++++ b/fs/ocfs2/dlm/dlmmaster.c
5529 +@@ -1439,6 +1439,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data,
5530 + int found, ret;
5531 + int set_maybe;
5532 + int dispatch_assert = 0;
5533 ++ int dispatched = 0;
5534 +
5535 + if (!dlm_grab(dlm))
5536 + return DLM_MASTER_RESP_NO;
5537 +@@ -1658,15 +1659,18 @@ send_response:
5538 + mlog(ML_ERROR, "failed to dispatch assert master work\n");
5539 + response = DLM_MASTER_RESP_ERROR;
5540 + dlm_lockres_put(res);
5541 +- } else
5542 ++ } else {
5543 ++ dispatched = 1;
5544 + __dlm_lockres_grab_inflight_worker(dlm, res);
5545 ++ }
5546 + spin_unlock(&res->spinlock);
5547 + } else {
5548 + if (res)
5549 + dlm_lockres_put(res);
5550 + }
5551 +
5552 +- dlm_put(dlm);
5553 ++ if (!dispatched)
5554 ++ dlm_put(dlm);
5555 + return response;
5556 + }
5557 +
5558 +@@ -2090,7 +2094,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm,
5559 +
5560 +
5561 + /* queue up work for dlm_assert_master_worker */
5562 +- dlm_grab(dlm); /* get an extra ref for the work item */
5563 + dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL);
5564 + item->u.am.lockres = res; /* already have a ref */
5565 + /* can optionally ignore node numbers higher than this node */
5566 +diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
5567 +index ce12e0b1a31f..3d90ad7ff91f 100644
5568 +--- a/fs/ocfs2/dlm/dlmrecovery.c
5569 ++++ b/fs/ocfs2/dlm/dlmrecovery.c
5570 +@@ -1694,6 +1694,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
5571 + unsigned int hash;
5572 + int master = DLM_LOCK_RES_OWNER_UNKNOWN;
5573 + u32 flags = DLM_ASSERT_MASTER_REQUERY;
5574 ++ int dispatched = 0;
5575 +
5576 + if (!dlm_grab(dlm)) {
5577 + /* since the domain has gone away on this
5578 +@@ -1719,8 +1720,10 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
5579 + dlm_put(dlm);
5580 + /* sender will take care of this and retry */
5581 + return ret;
5582 +- } else
5583 ++ } else {
5584 ++ dispatched = 1;
5585 + __dlm_lockres_grab_inflight_worker(dlm, res);
5586 ++ }
5587 + spin_unlock(&res->spinlock);
5588 + } else {
5589 + /* put.. incase we are not the master */
5590 +@@ -1730,7 +1733,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
5591 + }
5592 + spin_unlock(&dlm->spinlock);
5593 +
5594 +- dlm_put(dlm);
5595 ++ if (!dispatched)
5596 ++ dlm_put(dlm);
5597 + return master;
5598 + }
5599 +
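
Both dlm hunks fix the same reference leak from opposite ends:
dlm_dispatch_assert_master() no longer takes an extra dlm_grab() for the work
item, so the handlers may only drop their reference when no worker was
dispatched; a successfully queued worker inherits it. The ownership-transfer
pattern in miniature (refcount and queue are illustrative stubs, not dlm code):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int refs = ATOMIC_VAR_INIT(1);

    static void put_ref(void)
    {
            if (atomic_fetch_sub(&refs, 1) == 1)
                    ;       /* last reference: free the object here */
    }

    static bool queue_worker(void)
    {
            return true;    /* pretend the work item was queued; it now
                               owns the caller's reference */
    }

    static void handler(void)
    {
            int dispatched = 0;

            if (queue_worker())
                    dispatched = 1;
            if (!dispatched)        /* drop only if nobody inherited it */
                    put_ref();
    }

    int main(void)
    {
            handler();
            return 0;
    }
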
5600 +diff --git a/fs/open.c b/fs/open.c
5601 +index 98e5a52dc68c..f9d2bf935099 100644
5602 +--- a/fs/open.c
5603 ++++ b/fs/open.c
5604 +@@ -678,18 +678,18 @@ int open_check_o_direct(struct file *f)
5605 + }
5606 +
5607 + static int do_dentry_open(struct file *f,
5608 ++ struct inode *inode,
5609 + int (*open)(struct inode *, struct file *),
5610 + const struct cred *cred)
5611 + {
5612 + static const struct file_operations empty_fops = {};
5613 +- struct inode *inode;
5614 + int error;
5615 +
5616 + f->f_mode = OPEN_FMODE(f->f_flags) | FMODE_LSEEK |
5617 + FMODE_PREAD | FMODE_PWRITE;
5618 +
5619 + path_get(&f->f_path);
5620 +- inode = f->f_inode = f->f_path.dentry->d_inode;
5621 ++ f->f_inode = inode;
5622 + f->f_mapping = inode->i_mapping;
5623 +
5624 + if (unlikely(f->f_flags & O_PATH)) {
5625 +@@ -793,7 +793,8 @@ int finish_open(struct file *file, struct dentry *dentry,
5626 + BUG_ON(*opened & FILE_OPENED); /* once it's opened, it's opened */
5627 +
5628 + file->f_path.dentry = dentry;
5629 +- error = do_dentry_open(file, open, current_cred());
5630 ++ error = do_dentry_open(file, d_backing_inode(dentry), open,
5631 ++ current_cred());
5632 + if (!error)
5633 + *opened |= FILE_OPENED;
5634 +
5635 +@@ -822,6 +823,28 @@ int finish_no_open(struct file *file, struct dentry *dentry)
5636 + }
5637 + EXPORT_SYMBOL(finish_no_open);
5638 +
5639 ++/**
5640 ++ * vfs_open - open the file at the given path
5641 ++ * @path: path to open
5642 ++ * @file: newly allocated file with f_flag initialized
5643 ++ * @cred: credentials to use
5644 ++ */
5645 ++int vfs_open(const struct path *path, struct file *file,
5646 ++ const struct cred *cred)
5647 ++{
5648 ++ struct dentry *dentry = path->dentry;
5649 ++ struct inode *inode = dentry->d_inode;
5650 ++
5651 ++ file->f_path = *path;
5652 ++ if (dentry->d_flags & DCACHE_OP_SELECT_INODE) {
5653 ++ inode = dentry->d_op->d_select_inode(dentry, file->f_flags);
5654 ++ if (IS_ERR(inode))
5655 ++ return PTR_ERR(inode);
5656 ++ }
5657 ++
5658 ++ return do_dentry_open(file, inode, NULL, cred);
5659 ++}
5660 ++
5661 + struct file *dentry_open(const struct path *path, int flags,
5662 + const struct cred *cred)
5663 + {
5664 +@@ -853,26 +876,6 @@ struct file *dentry_open(const struct path *path, int flags,
5665 + }
5666 + EXPORT_SYMBOL(dentry_open);
5667 +
5668 +-/**
5669 +- * vfs_open - open the file at the given path
5670 +- * @path: path to open
5671 +- * @filp: newly allocated file with f_flag initialized
5672 +- * @cred: credentials to use
5673 +- */
5674 +-int vfs_open(const struct path *path, struct file *filp,
5675 +- const struct cred *cred)
5676 +-{
5677 +- struct inode *inode = path->dentry->d_inode;
5678 +-
5679 +- if (inode->i_op->dentry_open)
5680 +- return inode->i_op->dentry_open(path->dentry, filp, cred);
5681 +- else {
5682 +- filp->f_path = *path;
5683 +- return do_dentry_open(filp, NULL, cred);
5684 +- }
5685 +-}
5686 +-EXPORT_SYMBOL(vfs_open);
5687 +-
5688 + static inline int build_open_flags(int flags, umode_t mode, struct open_flags *op)
5689 + {
5690 + int lookup_flags = 0;
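
The open.c and dcache.c changes replace the short-lived i_op->dentry_open hook
with a dentry operation: vfs_open() now asks d_select_inode() (when
DCACHE_OP_SELECT_INODE is set) which inode to open, letting overlayfs hand back
the upper inode after copy-up while do_dentry_open() stays a plain open on a
given inode. The dispatch in toy form (a bare function pointer standing in for
the dentry_operations table; error-pointer handling elided):

    struct inode;

    struct dentry {
            struct inode *d_inode;
            struct inode *(*d_select_inode)(struct dentry *, unsigned int);
    };

    struct inode *inode_to_open(struct dentry *d, unsigned int f_flags)
    {
            if (d->d_select_inode)
                    return d->d_select_inode(d, f_flags); /* may copy up;
                                                             may return an error */
            return d->d_inode;
    }
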
5691 +diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
5692 +index 04f124884687..ba0db2638946 100644
5693 +--- a/fs/overlayfs/inode.c
5694 ++++ b/fs/overlayfs/inode.c
5695 +@@ -336,37 +336,33 @@ static bool ovl_open_need_copy_up(int flags, enum ovl_path_type type,
5696 + return true;
5697 + }
5698 +
5699 +-static int ovl_dentry_open(struct dentry *dentry, struct file *file,
5700 +- const struct cred *cred)
5701 ++struct inode *ovl_d_select_inode(struct dentry *dentry, unsigned file_flags)
5702 + {
5703 + int err;
5704 + struct path realpath;
5705 + enum ovl_path_type type;
5706 +- bool want_write = false;
5707 ++
5708 ++ if (d_is_dir(dentry))
5709 ++ return d_backing_inode(dentry);
5710 +
5711 + type = ovl_path_real(dentry, &realpath);
5712 +- if (ovl_open_need_copy_up(file->f_flags, type, realpath.dentry)) {
5713 +- want_write = true;
5714 ++ if (ovl_open_need_copy_up(file_flags, type, realpath.dentry)) {
5715 + err = ovl_want_write(dentry);
5716 + if (err)
5717 +- goto out;
5718 ++ return ERR_PTR(err);
5719 +
5720 +- if (file->f_flags & O_TRUNC)
5721 ++ if (file_flags & O_TRUNC)
5722 + err = ovl_copy_up_last(dentry, NULL, true);
5723 + else
5724 + err = ovl_copy_up(dentry);
5725 ++ ovl_drop_write(dentry);
5726 + if (err)
5727 +- goto out_drop_write;
5728 ++ return ERR_PTR(err);
5729 +
5730 + ovl_path_upper(dentry, &realpath);
5731 + }
5732 +
5733 +- err = vfs_open(&realpath, file, cred);
5734 +-out_drop_write:
5735 +- if (want_write)
5736 +- ovl_drop_write(dentry);
5737 +-out:
5738 +- return err;
5739 ++ return d_backing_inode(realpath.dentry);
5740 + }
5741 +
5742 + static const struct inode_operations ovl_file_inode_operations = {
5743 +@@ -377,7 +373,6 @@ static const struct inode_operations ovl_file_inode_operations = {
5744 + .getxattr = ovl_getxattr,
5745 + .listxattr = ovl_listxattr,
5746 + .removexattr = ovl_removexattr,
5747 +- .dentry_open = ovl_dentry_open,
5748 + };
5749 +
5750 + static const struct inode_operations ovl_symlink_inode_operations = {
5751 +diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
5752 +index 17ac5afc9ffb..ea5a40b06e3a 100644
5753 +--- a/fs/overlayfs/overlayfs.h
5754 ++++ b/fs/overlayfs/overlayfs.h
5755 +@@ -173,6 +173,7 @@ ssize_t ovl_getxattr(struct dentry *dentry, const char *name,
5756 + void *value, size_t size);
5757 + ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size);
5758 + int ovl_removexattr(struct dentry *dentry, const char *name);
5759 ++struct inode *ovl_d_select_inode(struct dentry *dentry, unsigned file_flags);
5760 +
5761 + struct inode *ovl_new_inode(struct super_block *sb, umode_t mode,
5762 + struct ovl_entry *oe);
5763 +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
5764 +index 155989455a72..33f2d27a6792 100644
5765 +--- a/fs/overlayfs/super.c
5766 ++++ b/fs/overlayfs/super.c
5767 +@@ -275,6 +275,7 @@ static void ovl_dentry_release(struct dentry *dentry)
5768 +
5769 + static const struct dentry_operations ovl_dentry_operations = {
5770 + .d_release = ovl_dentry_release,
5771 ++ .d_select_inode = ovl_d_select_inode,
5772 + };
5773 +
5774 + static struct ovl_entry *ovl_alloc_entry(unsigned int numlower)
5775 +diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
5776 +index 96f3448b6eb4..fd65b3f1923c 100644
5777 +--- a/fs/ubifs/xattr.c
5778 ++++ b/fs/ubifs/xattr.c
5779 +@@ -652,11 +652,8 @@ int ubifs_init_security(struct inode *dentry, struct inode *inode,
5780 + {
5781 + int err;
5782 +
5783 +- mutex_lock(&inode->i_mutex);
5784 + err = security_inode_init_security(inode, dentry, qstr,
5785 + &init_xattrs, 0);
5786 +- mutex_unlock(&inode->i_mutex);
5787 +-
5788 + if (err) {
5789 + struct ubifs_info *c = dentry->i_sb->s_fs_info;
5790 + ubifs_err(c, "cannot initialize security for inode %lu, error %d",
5791 +diff --git a/include/linux/dcache.h b/include/linux/dcache.h
5792 +index df334cbacc6d..167ec0934049 100644
5793 +--- a/include/linux/dcache.h
5794 ++++ b/include/linux/dcache.h
5795 +@@ -160,6 +160,7 @@ struct dentry_operations {
5796 + char *(*d_dname)(struct dentry *, char *, int);
5797 + struct vfsmount *(*d_automount)(struct path *);
5798 + int (*d_manage)(struct dentry *, bool);
5799 ++ struct inode *(*d_select_inode)(struct dentry *, unsigned);
5800 + } ____cacheline_aligned;
5801 +
5802 + /*
5803 +@@ -225,6 +226,7 @@ struct dentry_operations {
5804 +
5805 + #define DCACHE_MAY_FREE 0x00800000
5806 + #define DCACHE_FALLTHRU 0x01000000 /* Fall through to lower layer */
5807 ++#define DCACHE_OP_SELECT_INODE 0x02000000 /* Unioned entry: dcache op selects inode */
5808 +
5809 + extern seqlock_t rename_lock;
5810 +
5811 +diff --git a/include/linux/fs.h b/include/linux/fs.h
5812 +index 571aab91bfc0..f93192333b37 100644
5813 +--- a/include/linux/fs.h
5814 ++++ b/include/linux/fs.h
5815 +@@ -1641,7 +1641,6 @@ struct inode_operations {
5816 + int (*set_acl)(struct inode *, struct posix_acl *, int);
5817 +
5818 + /* WARNING: probably going away soon, do not use! */
5819 +- int (*dentry_open)(struct dentry *, struct file *, const struct cred *);
5820 + } ____cacheline_aligned;
5821 +
5822 + ssize_t rw_copy_check_uvector(int type, const struct iovec __user * uvector,
5823 +@@ -2193,7 +2192,6 @@ extern struct file *file_open_name(struct filename *, int, umode_t);
5824 + extern struct file *filp_open(const char *, int, umode_t);
5825 + extern struct file *file_open_root(struct dentry *, struct vfsmount *,
5826 + const char *, int);
5827 +-extern int vfs_open(const struct path *, struct file *, const struct cred *);
5828 + extern struct file * dentry_open(const struct path *, int, const struct cred *);
5829 + extern int filp_close(struct file *, fl_owner_t id);
5830 +
5831 +diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
5832 +index de722d4e9d61..258daf914c6d 100644
5833 +--- a/include/linux/mmc/core.h
5834 ++++ b/include/linux/mmc/core.h
5835 +@@ -121,6 +121,7 @@ struct mmc_data {
5836 + struct mmc_request *mrq; /* associated request */
5837 +
5838 + unsigned int sg_len; /* size of scatter list */
5839 ++ int sg_count; /* mapped sg entries */
5840 + struct scatterlist *sg; /* I/O scatter list */
5841 + s32 host_cookie; /* host private data */
5842 + };
5843 +diff --git a/include/linux/security.h b/include/linux/security.h
5844 +index 18264ea9e314..5d45b4fd91d2 100644
5845 +--- a/include/linux/security.h
5846 ++++ b/include/linux/security.h
5847 +@@ -2527,7 +2527,7 @@ static inline int security_task_prctl(int option, unsigned long arg2,
5848 + unsigned long arg4,
5849 + unsigned long arg5)
5850 + {
5851 +- return cap_task_prctl(option, arg2, arg3, arg3, arg5);
5852 ++ return cap_task_prctl(option, arg2, arg3, arg4, arg5);
5853 + }
5854 +
5855 + static inline void security_task_to_inode(struct task_struct *p, struct inode *inode)
5856 +diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
5857 +index d81d584157e1..e8635854a55b 100644
5858 +--- a/include/net/netfilter/nf_queue.h
5859 ++++ b/include/net/netfilter/nf_queue.h
5860 +@@ -24,6 +24,8 @@ struct nf_queue_entry {
5861 + struct nf_queue_handler {
5862 + int (*outfn)(struct nf_queue_entry *entry,
5863 + unsigned int queuenum);
5864 ++ void (*nf_hook_drop)(struct net *net,
5865 ++ struct nf_hook_ops *ops);
5866 + };
5867 +
5868 + void nf_register_queue_handler(const struct nf_queue_handler *qh);
5869 +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
5870 +index e6bcf55dcf20..fd0ca42b1d63 100644
5871 +--- a/include/net/netfilter/nf_tables.h
5872 ++++ b/include/net/netfilter/nf_tables.h
5873 +@@ -125,7 +125,7 @@ static inline enum nft_data_types nft_dreg_to_type(enum nft_registers reg)
5874 +
5875 + static inline enum nft_registers nft_type_to_reg(enum nft_data_types type)
5876 + {
5877 +- return type == NFT_DATA_VERDICT ? NFT_REG_VERDICT : NFT_REG_1;
5878 ++ return type == NFT_DATA_VERDICT ? NFT_REG_VERDICT : NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE;
5879 + }
5880 +
5881 + unsigned int nft_parse_register(const struct nlattr *attr);
5882 +diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
5883 +index 73abbc54063d..7bd03f867fca 100644
5884 +--- a/include/target/iscsi/iscsi_target_core.h
5885 ++++ b/include/target/iscsi/iscsi_target_core.h
5886 +@@ -787,7 +787,6 @@ struct iscsi_np {
5887 + enum iscsi_timer_flags_table np_login_timer_flags;
5888 + u32 np_exports;
5889 + enum np_flags_table np_flags;
5890 +- unsigned char np_ip[IPV6_ADDRESS_SPACE];
5891 + u16 np_port;
5892 + spinlock_t np_thread_lock;
5893 + struct completion np_restart_comp;
5894 +diff --git a/include/xen/interface/sched.h b/include/xen/interface/sched.h
5895 +index 9ce083960a25..f18490985fc8 100644
5896 +--- a/include/xen/interface/sched.h
5897 ++++ b/include/xen/interface/sched.h
5898 +@@ -107,5 +107,13 @@ struct sched_watchdog {
5899 + #define SHUTDOWN_suspend 2 /* Clean up, save suspend info, kill. */
5900 + #define SHUTDOWN_crash 3 /* Tell controller we've crashed. */
5901 + #define SHUTDOWN_watchdog 4 /* Restart because watchdog time expired. */
5902 ++/*
5903 ++ * The domain has asked to perform a 'soft reset'. The expected behavior
5904 ++ * is to reset internal Xen state for the domain, returning it to the
5905 ++ * point where it was created, but leaving the domain's memory contents
5906 ++ * and vCPU contexts intact. This allows the domain to start over and
5907 ++ * set up all Xen-specific interfaces again.
5908 ++ */
5909 ++#define SHUTDOWN_soft_reset 5
5910 +
5911 + #endif /* __XEN_PUBLIC_SCHED_H__ */
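As a usage sketch, a guest would request the new reason through the existing SCHEDOP_shutdown hypercall; the wrapper below follows Linux's usual Xen hypercall conventions and is illustrative only, not part of this patch:

#include <xen/interface/sched.h>
#include <asm/xen/hypercall.h>

/* Sketch: ask the hypervisor to soft-reset the current domain. */
static int xen_request_soft_reset(void)
{
	struct sched_shutdown r = { .reason = SHUTDOWN_soft_reset };

	/* Hypervisors without soft reset support return an error. */
	return HYPERVISOR_sched_op(SCHEDOP_shutdown, &r);
}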
5912 +diff --git a/ipc/msg.c b/ipc/msg.c
5913 +index 2b6fdbb9e0e9..652540613d26 100644
5914 +--- a/ipc/msg.c
5915 ++++ b/ipc/msg.c
5916 +@@ -137,13 +137,6 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
5917 + return retval;
5918 + }
5919 +
5920 +- /* ipc_addid() locks msq upon success. */
5921 +- id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
5922 +- if (id < 0) {
5923 +- ipc_rcu_putref(msq, msg_rcu_free);
5924 +- return id;
5925 +- }
5926 +-
5927 + msq->q_stime = msq->q_rtime = 0;
5928 + msq->q_ctime = get_seconds();
5929 + msq->q_cbytes = msq->q_qnum = 0;
5930 +@@ -153,6 +146,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
5931 + INIT_LIST_HEAD(&msq->q_receivers);
5932 + INIT_LIST_HEAD(&msq->q_senders);
5933 +
5934 ++ /* ipc_addid() locks msq upon success. */
5935 ++ id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
5936 ++ if (id < 0) {
5937 ++ ipc_rcu_putref(msq, msg_rcu_free);
5938 ++ return id;
5939 ++ }
5940 ++
5941 + ipc_unlock_object(&msq->q_perm);
5942 + rcu_read_unlock();
5943 +
5944 +diff --git a/ipc/shm.c b/ipc/shm.c
5945 +index 6d767071c367..499a8bd22fad 100644
5946 +--- a/ipc/shm.c
5947 ++++ b/ipc/shm.c
5948 +@@ -550,12 +550,6 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
5949 + if (IS_ERR(file))
5950 + goto no_file;
5951 +
5952 +- id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
5953 +- if (id < 0) {
5954 +- error = id;
5955 +- goto no_id;
5956 +- }
5957 +-
5958 + shp->shm_cprid = task_tgid_vnr(current);
5959 + shp->shm_lprid = 0;
5960 + shp->shm_atim = shp->shm_dtim = 0;
5961 +@@ -564,6 +558,13 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
5962 + shp->shm_nattch = 0;
5963 + shp->shm_file = file;
5964 + shp->shm_creator = current;
5965 ++
5966 ++ id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
5967 ++ if (id < 0) {
5968 ++ error = id;
5969 ++ goto no_id;
5970 ++ }
5971 ++
5972 + list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
5973 +
5974 + /*
5975 +diff --git a/ipc/util.c b/ipc/util.c
5976 +index ff3323ef8d8b..c917e9fd10b1 100644
5977 +--- a/ipc/util.c
5978 ++++ b/ipc/util.c
5979 +@@ -237,6 +237,10 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
5980 + rcu_read_lock();
5981 + spin_lock(&new->lock);
5982 +
5983 ++ current_euid_egid(&euid, &egid);
5984 ++ new->cuid = new->uid = euid;
5985 ++ new->gid = new->cgid = egid;
5986 ++
5987 + id = idr_alloc(&ids->ipcs_idr, new,
5988 + (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0,
5989 + GFP_NOWAIT);
5990 +@@ -249,10 +253,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
5991 +
5992 + ids->in_use++;
5993 +
5994 +- current_euid_egid(&euid, &egid);
5995 +- new->cuid = new->uid = euid;
5996 +- new->gid = new->cgid = egid;
5997 +-
5998 + if (next_id < 0) {
5999 + new->seq = ids->seq++;
6000 + if (ids->seq > IPCID_SEQ_MAX)
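The three ipc fixes above all apply the same idiom: finish initializing an object before publishing it where concurrent lookups can reach it. A minimal sketch of the pattern (obj and obj_publish are illustrative; idr_alloc is the real allocator ipc_addid uses):

#include <linux/idr.h>
#include <linux/gfp.h>

struct obj {
	int data;
};

/* Sketch: initialize fully, then make the object reachable by ID. */
static int obj_publish(struct idr *idr, struct obj *o, int val)
{
	o->data = val;				/* initialize first ... */
	return idr_alloc(idr, o, 0, 0, GFP_KERNEL); /* ... publish last */
}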
6001 +diff --git a/kernel/events/core.c b/kernel/events/core.c
6002 +index 94817491407b..e1af58e23bee 100644
6003 +--- a/kernel/events/core.c
6004 ++++ b/kernel/events/core.c
6005 +@@ -4411,14 +4411,6 @@ static void ring_buffer_wakeup(struct perf_event *event)
6006 + rcu_read_unlock();
6007 + }
6008 +
6009 +-static void rb_free_rcu(struct rcu_head *rcu_head)
6010 +-{
6011 +- struct ring_buffer *rb;
6012 +-
6013 +- rb = container_of(rcu_head, struct ring_buffer, rcu_head);
6014 +- rb_free(rb);
6015 +-}
6016 +-
6017 + struct ring_buffer *ring_buffer_get(struct perf_event *event)
6018 + {
6019 + struct ring_buffer *rb;
6020 +diff --git a/kernel/events/internal.h b/kernel/events/internal.h
6021 +index 9f6ce9ba4a04..a6adc36a3732 100644
6022 +--- a/kernel/events/internal.h
6023 ++++ b/kernel/events/internal.h
6024 +@@ -11,6 +11,7 @@
6025 + struct ring_buffer {
6026 + atomic_t refcount;
6027 + struct rcu_head rcu_head;
6028 ++ struct irq_work irq_work;
6029 + #ifdef CONFIG_PERF_USE_VMALLOC
6030 + struct work_struct work;
6031 + int page_order; /* allocation order */
6032 +@@ -55,6 +56,15 @@ struct ring_buffer {
6033 + };
6034 +
6035 + extern void rb_free(struct ring_buffer *rb);
6036 ++
6037 ++static inline void rb_free_rcu(struct rcu_head *rcu_head)
6038 ++{
6039 ++ struct ring_buffer *rb;
6040 ++
6041 ++ rb = container_of(rcu_head, struct ring_buffer, rcu_head);
6042 ++ rb_free(rb);
6043 ++}
6044 ++
6045 + extern struct ring_buffer *
6046 + rb_alloc(int nr_pages, long watermark, int cpu, int flags);
6047 + extern void perf_event_wakeup(struct perf_event *event);
6048 +diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
6049 +index a7604c81168e..7f63ad978cb8 100644
6050 +--- a/kernel/events/ring_buffer.c
6051 ++++ b/kernel/events/ring_buffer.c
6052 +@@ -221,6 +221,8 @@ void perf_output_end(struct perf_output_handle *handle)
6053 + rcu_read_unlock();
6054 + }
6055 +
6056 ++static void rb_irq_work(struct irq_work *work);
6057 ++
6058 + static void
6059 + ring_buffer_init(struct ring_buffer *rb, long watermark, int flags)
6060 + {
6061 +@@ -241,6 +243,16 @@ ring_buffer_init(struct ring_buffer *rb, long watermark, int flags)
6062 +
6063 + INIT_LIST_HEAD(&rb->event_list);
6064 + spin_lock_init(&rb->event_lock);
6065 ++ init_irq_work(&rb->irq_work, rb_irq_work);
6066 ++}
6067 ++
6068 ++static void ring_buffer_put_async(struct ring_buffer *rb)
6069 ++{
6070 ++ if (!atomic_dec_and_test(&rb->refcount))
6071 ++ return;
6072 ++
6073 ++ rb->rcu_head.next = (void *)rb;
6074 ++ irq_work_queue(&rb->irq_work);
6075 + }
6076 +
6077 + /*
6078 +@@ -319,7 +331,7 @@ err_put:
6079 + rb_free_aux(rb);
6080 +
6081 + err:
6082 +- ring_buffer_put(rb);
6083 ++ ring_buffer_put_async(rb);
6084 + handle->event = NULL;
6085 +
6086 + return NULL;
6087 +@@ -370,7 +382,7 @@ void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size,
6088 +
6089 + local_set(&rb->aux_nest, 0);
6090 + rb_free_aux(rb);
6091 +- ring_buffer_put(rb);
6092 ++ ring_buffer_put_async(rb);
6093 + }
6094 +
6095 + /*
6096 +@@ -559,7 +571,18 @@ static void __rb_free_aux(struct ring_buffer *rb)
6097 + void rb_free_aux(struct ring_buffer *rb)
6098 + {
6099 + if (atomic_dec_and_test(&rb->aux_refcount))
6100 ++ irq_work_queue(&rb->irq_work);
6101 ++}
6102 ++
6103 ++static void rb_irq_work(struct irq_work *work)
6104 ++{
6105 ++ struct ring_buffer *rb = container_of(work, struct ring_buffer, irq_work);
6106 ++
6107 ++ if (!atomic_read(&rb->aux_refcount))
6108 + __rb_free_aux(rb);
6109 ++
6110 ++ if (rb->rcu_head.next == (void *)rb)
6111 ++ call_rcu(&rb->rcu_head, rb_free_rcu);
6112 + }
6113 +
6114 + #ifndef CONFIG_PERF_USE_VMALLOC
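The perf change defers the final free out of NMI context: the last put only queues irq_work, and the actual cleanup runs later in hard-irq context where it is safe. A condensed sketch of that deferral pattern (struct buf is illustrative, not the perf ring buffer itself):

#include <linux/irq_work.h>
#include <linux/slab.h>

struct buf {
	atomic_t refcount;
	struct irq_work irq_work;	/* set up with init_irq_work() */
};

static void buf_free_work(struct irq_work *work)
{
	struct buf *b = container_of(work, struct buf, irq_work);

	kfree(b);			/* safe here, not in NMI context */
}

/* Sketch: the last reference only queues work; freeing happens later. */
static void buf_put_async(struct buf *b)
{
	if (atomic_dec_and_test(&b->refcount))
		irq_work_queue(&b->irq_work);
}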
6115 +diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
6116 +index df2f4642d1e7..5c38f59741e2 100644
6117 +--- a/kernel/irq/proc.c
6118 ++++ b/kernel/irq/proc.c
6119 +@@ -12,6 +12,7 @@
6120 + #include <linux/seq_file.h>
6121 + #include <linux/interrupt.h>
6122 + #include <linux/kernel_stat.h>
6123 ++#include <linux/mutex.h>
6124 +
6125 + #include "internals.h"
6126 +
6127 +@@ -323,18 +324,29 @@ void register_handler_proc(unsigned int irq, struct irqaction *action)
6128 +
6129 + void register_irq_proc(unsigned int irq, struct irq_desc *desc)
6130 + {
6131 ++ static DEFINE_MUTEX(register_lock);
6132 + char name [MAX_NAMELEN];
6133 +
6134 +- if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip) || desc->dir)
6135 ++ if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip))
6136 + return;
6137 +
6138 ++ /*
6139 ++ * irq directories are registered only when a handler is
6140 ++ * added, not when the descriptor is created, so multiple
6141 ++ * tasks might try to register at the same time.
6142 ++ */
6143 ++ mutex_lock(&register_lock);
6144 ++
6145 ++ if (desc->dir)
6146 ++ goto out_unlock;
6147 ++
6148 + memset(name, 0, MAX_NAMELEN);
6149 + sprintf(name, "%d", irq);
6150 +
6151 + /* create /proc/irq/1234 */
6152 + desc->dir = proc_mkdir(name, root_irq_dir);
6153 + if (!desc->dir)
6154 +- return;
6155 ++ goto out_unlock;
6156 +
6157 + #ifdef CONFIG_SMP
6158 + /* create /proc/irq/<irq>/smp_affinity */
6159 +@@ -355,6 +367,9 @@ void register_irq_proc(unsigned int irq, struct irq_desc *desc)
6160 +
6161 + proc_create_data("spurious", 0444, desc->dir,
6162 + &irq_spurious_proc_fops, (void *)(long)irq);
6163 ++
6164 ++out_unlock:
6165 ++ mutex_unlock(&register_lock);
6166 + }
6167 +
6168 + void unregister_irq_proc(unsigned int irq, struct irq_desc *desc)
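The race closed above is a double registration: two tasks both observe desc->dir == NULL and both call proc_mkdir(). Re-checking under a lock serializes them; a stripped-down sketch with hypothetical names:

#include <linux/mutex.h>
#include <linux/proc_fs.h>

struct thing {
	const char *name;
	struct proc_dir_entry *dir;
};

static DEFINE_MUTEX(reg_lock);

/* Sketch: re-check the "already registered" condition under the lock. */
static void thing_register_proc(struct thing *t)
{
	mutex_lock(&reg_lock);
	if (!t->dir)
		t->dir = proc_mkdir(t->name, NULL);
	mutex_unlock(&reg_lock);
}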
6169 +diff --git a/kernel/sched/core.c b/kernel/sched/core.c
6170 +index e6910526c84b..8476206a1e19 100644
6171 +--- a/kernel/sched/core.c
6172 ++++ b/kernel/sched/core.c
6173 +@@ -2217,11 +2217,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
6174 + * If a task dies, then it sets TASK_DEAD in tsk->state and calls
6175 + * schedule one last time. The schedule call will never return, and
6176 + * the scheduled task must drop that reference.
6177 +- * The test for TASK_DEAD must occur while the runqueue locks are
6178 +- * still held, otherwise prev could be scheduled on another cpu, die
6179 +- * there before we look at prev->state, and then the reference would
6180 +- * be dropped twice.
6181 +- * Manfred Spraul <manfred@××××××××××××.com>
6182 ++ *
6183 ++ * We must observe prev->state before clearing prev->on_cpu (in
6184 ++ * finish_lock_switch), otherwise a concurrent wakeup can get prev
6185 ++ * running on another CPU and we could race with its RUNNING -> DEAD
6186 ++ * transition, resulting in a double drop.
6187 + */
6188 + prev_state = prev->state;
6189 + vtime_task_switch(prev);
6190 +@@ -2358,13 +2358,20 @@ unsigned long nr_running(void)
6191 +
6192 + /*
6193 + * Check if only the current task is running on the cpu.
6194 ++ *
6195 ++ * Caution: this function does not check that the caller has disabled
6196 ++ * preemption, thus the result might have a time-of-check-to-time-of-use
6197 ++ * race. The caller is responsible for using it correctly, for example:
6198 ++ *
6199 ++ * - from a non-preemptible section (of course)
6200 ++ *
6201 ++ * - from a thread that is bound to a single CPU
6202 ++ *
6203 ++ * - in a loop with very short iterations (e.g. a polling loop)
6204 + */
6205 + bool single_task_running(void)
6206 + {
6207 +- if (cpu_rq(smp_processor_id())->nr_running == 1)
6208 +- return true;
6209 +- else
6210 +- return false;
6211 ++ return raw_rq()->nr_running == 1;
6212 + }
6213 + EXPORT_SYMBOL(single_task_running);
6214 +
6215 +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
6216 +index c2980e8733bc..77690b653ca9 100644
6217 +--- a/kernel/sched/fair.c
6218 ++++ b/kernel/sched/fair.c
6219 +@@ -5126,18 +5126,21 @@ again:
6220 + * entity, update_curr() will update its vruntime, otherwise
6221 + * forget we've ever seen it.
6222 + */
6223 +- if (curr && curr->on_rq)
6224 +- update_curr(cfs_rq);
6225 +- else
6226 +- curr = NULL;
6227 ++ if (curr) {
6228 ++ if (curr->on_rq)
6229 ++ update_curr(cfs_rq);
6230 ++ else
6231 ++ curr = NULL;
6232 +
6233 +- /*
6234 +- * This call to check_cfs_rq_runtime() will do the throttle and
6235 +- * dequeue its entity in the parent(s). Therefore the 'simple'
6236 +- * nr_running test will indeed be correct.
6237 +- */
6238 +- if (unlikely(check_cfs_rq_runtime(cfs_rq)))
6239 +- goto simple;
6240 ++ /*
6241 ++ * This call to check_cfs_rq_runtime() will do the
6242 ++ * throttle and dequeue its entity in the parent(s).
6243 ++ * Therefore the 'simple' nr_running test will indeed
6244 ++ * be correct.
6245 ++ */
6246 ++ if (unlikely(check_cfs_rq_runtime(cfs_rq)))
6247 ++ goto simple;
6248 ++ }
6249 +
6250 + se = pick_next_entity(cfs_rq, curr);
6251 + cfs_rq = group_cfs_rq(se);
6252 +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
6253 +index e0e129993958..aa1f059de4f7 100644
6254 +--- a/kernel/sched/sched.h
6255 ++++ b/kernel/sched/sched.h
6256 +@@ -1068,9 +1068,10 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
6257 + * After ->on_cpu is cleared, the task can be moved to a different CPU.
6258 + * We must ensure this doesn't happen until the switch is completely
6259 + * finished.
6260 ++ *
6261 ++ * Pairs with the control dependency and rmb in try_to_wake_up().
6262 + */
6263 +- smp_wmb();
6264 +- prev->on_cpu = 0;
6265 ++ smp_store_release(&prev->on_cpu, 0);
6266 + #endif
6267 + #ifdef CONFIG_DEBUG_SPINLOCK
6268 + /* this is a valid case when another task releases the spinlock */
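The smp_store_release() pairs with an acquire (or the control dependency plus rmb in try_to_wake_up()): a reader that observes on_cpu == 0 is guaranteed to also observe every store made before the release, prev->state included. A two-sided sketch with illustrative variables:

static int prev_state;
static int on_cpu = 1;

/* writer (context-switch side): publish state, then clear the flag */
static void writer(void)
{
	prev_state = TASK_DEAD;		/* ordinary store */
	smp_store_release(&on_cpu, 0);	/* orders the store above before it */
}

/* reader (waker side): once on_cpu reads 0, prev_state is visible */
static void reader(void)
{
	while (smp_load_acquire(&on_cpu))
		cpu_relax();
	BUG_ON(prev_state != TASK_DEAD);
}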
6269 +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
6270 +index 946acb72179f..414d9df94724 100644
6271 +--- a/kernel/time/timekeeping.c
6272 ++++ b/kernel/time/timekeeping.c
6273 +@@ -1615,7 +1615,7 @@ static __always_inline void timekeeping_freqadjust(struct timekeeper *tk,
6274 + negative = (tick_error < 0);
6275 +
6276 + /* Sort out the magnitude of the correction */
6277 +- tick_error = abs(tick_error);
6278 ++ tick_error = abs64(tick_error);
6279 + for (adj = 0; tick_error > interval; adj++)
6280 + tick_error >>= 1;
6281 +
6282 +diff --git a/lib/iommu-common.c b/lib/iommu-common.c
6283 +index df30632f0bef..4fdeee02e0a9 100644
6284 +--- a/lib/iommu-common.c
6285 ++++ b/lib/iommu-common.c
6286 +@@ -21,8 +21,7 @@ static DEFINE_PER_CPU(unsigned int, iommu_hash_common);
6287 +
6288 + static inline bool need_flush(struct iommu_map_table *iommu)
6289 + {
6290 +- return (iommu->lazy_flush != NULL &&
6291 +- (iommu->flags & IOMMU_NEED_FLUSH) != 0);
6292 ++ return ((iommu->flags & IOMMU_NEED_FLUSH) != 0);
6293 + }
6294 +
6295 + static inline void set_flush(struct iommu_map_table *iommu)
6296 +@@ -211,7 +210,8 @@ unsigned long iommu_tbl_range_alloc(struct device *dev,
6297 + goto bail;
6298 + }
6299 + }
6300 +- if (n < pool->hint || need_flush(iommu)) {
6301 ++ if (iommu->lazy_flush &&
6302 ++ (n < pool->hint || need_flush(iommu))) {
6303 + clear_flush(iommu);
6304 + iommu->lazy_flush(iommu);
6305 + }
6306 +diff --git a/mm/hugetlb.c b/mm/hugetlb.c
6307 +index 8c4c1f9f9a9a..a6ff935476e3 100644
6308 +--- a/mm/hugetlb.c
6309 ++++ b/mm/hugetlb.c
6310 +@@ -2897,6 +2897,14 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
6311 + continue;
6312 +
6313 + /*
6314 ++ * Shared VMAs have their own reserves and do not affect
6315 ++ * MAP_PRIVATE accounting, but it is possible that a shared
6316 ++ * VMA is using the same page, so check and skip such VMAs.
6317 ++ */
6318 ++ if (iter_vma->vm_flags & VM_MAYSHARE)
6319 ++ continue;
6320 ++
6321 ++ /*
6322 + * Unmap the page from other VMAs without their own reserves.
6323 + * They get marked to be SIGKILLed if they fault in these
6324 + * areas. This is because a future no-page fault on this VMA
6325 +diff --git a/mm/migrate.c b/mm/migrate.c
6326 +index f53838fe3dfe..2c37b1a44a8c 100644
6327 +--- a/mm/migrate.c
6328 ++++ b/mm/migrate.c
6329 +@@ -1062,7 +1062,7 @@ out:
6330 + if (rc != MIGRATEPAGE_SUCCESS && put_new_page)
6331 + put_new_page(new_hpage, private);
6332 + else
6333 +- put_page(new_hpage);
6334 ++ putback_active_hugepage(new_hpage);
6335 +
6336 + if (result) {
6337 + if (rc)
6338 +diff --git a/mm/slab.c b/mm/slab.c
6339 +index 3dd2d1ff9d5d..330039fdcf18 100644
6340 +--- a/mm/slab.c
6341 ++++ b/mm/slab.c
6342 +@@ -2189,9 +2189,16 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
6343 + size += BYTES_PER_WORD;
6344 + }
6345 + #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
6346 +- if (size >= kmalloc_size(INDEX_NODE + 1)
6347 +- && cachep->object_size > cache_line_size()
6348 +- && ALIGN(size, cachep->align) < PAGE_SIZE) {
6349 ++ /*
6350 ++ * To activate debug pagealloc, off-slab management is a necessary
6351 ++ * requirement. In the early phase of initialization, small sized
6352 ++ * slabs aren't initialized yet, so off-slab is not possible there.
6353 ++ * Hence the size >= 256 check: it guarantees that all necessary
6354 ++ * small sized slabs are initialized in this initialization sequence.
6355 ++ */
6356 ++ if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
6357 ++ size >= 256 && cachep->object_size > cache_line_size() &&
6358 ++ ALIGN(size, cachep->align) < PAGE_SIZE) {
6359 + cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
6360 + size = PAGE_SIZE;
6361 + }
6362 +diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
6363 +index aad022dd15df..95b3167cf036 100644
6364 +--- a/net/batman-adv/distributed-arp-table.c
6365 ++++ b/net/batman-adv/distributed-arp-table.c
6366 +@@ -15,6 +15,7 @@
6367 + * along with this program; if not, see <http://www.gnu.org/licenses/>.
6368 + */
6369 +
6370 ++#include <linux/bitops.h>
6371 + #include <linux/if_ether.h>
6372 + #include <linux/if_arp.h>
6373 + #include <linux/if_vlan.h>
6374 +@@ -422,7 +423,7 @@ static bool batadv_is_orig_node_eligible(struct batadv_dat_candidate *res,
6375 + int j;
6376 +
6377 + /* check if orig node candidate is running DAT */
6378 +- if (!(candidate->capabilities & BATADV_ORIG_CAPA_HAS_DAT))
6379 ++ if (!test_bit(BATADV_ORIG_CAPA_HAS_DAT, &candidate->capabilities))
6380 + goto out;
6381 +
6382 + /* Check if this node has already been selected... */
6383 +@@ -682,9 +683,9 @@ static void batadv_dat_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
6384 + uint16_t tvlv_value_len)
6385 + {
6386 + if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND)
6387 +- orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_DAT;
6388 ++ clear_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities);
6389 + else
6390 +- orig->capabilities |= BATADV_ORIG_CAPA_HAS_DAT;
6391 ++ set_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities);
6392 + }
6393 +
6394 + /**
6395 +diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
6396 +index b24e4bb64fb5..8653c1a506f4 100644
6397 +--- a/net/batman-adv/multicast.c
6398 ++++ b/net/batman-adv/multicast.c
6399 +@@ -15,6 +15,8 @@
6400 + * along with this program; if not, see <http://www.gnu.org/licenses/>.
6401 + */
6402 +
6403 ++#include <linux/bitops.h>
6404 ++#include <linux/bug.h>
6405 + #include "main.h"
6406 + #include "multicast.h"
6407 + #include "originator.h"
6408 +@@ -565,19 +567,26 @@ batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
6409 + *
6410 + * If the BATADV_MCAST_WANT_ALL_UNSNOOPABLES flag of this originator,
6411 + * orig, has toggled then this method updates counter and list accordingly.
6412 ++ *
6413 ++ * Caller needs to hold orig->mcast_handler_lock.
6414 + */
6415 + static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv,
6416 + struct batadv_orig_node *orig,
6417 + uint8_t mcast_flags)
6418 + {
6419 ++ struct hlist_node *node = &orig->mcast_want_all_unsnoopables_node;
6420 ++ struct hlist_head *head = &bat_priv->mcast.want_all_unsnoopables_list;
6421 ++
6422 + /* switched from flag unset to set */
6423 + if (mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES &&
6424 + !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES)) {
6425 + atomic_inc(&bat_priv->mcast.num_want_all_unsnoopables);
6426 +
6427 + spin_lock_bh(&bat_priv->mcast.want_lists_lock);
6428 +- hlist_add_head_rcu(&orig->mcast_want_all_unsnoopables_node,
6429 +- &bat_priv->mcast.want_all_unsnoopables_list);
6430 ++ /* flag checks above + mcast_handler_lock prevents this */
6431 ++ WARN_ON(!hlist_unhashed(node));
6432 ++
6433 ++ hlist_add_head_rcu(node, head);
6434 + spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
6435 + /* switched from flag set to unset */
6436 + } else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES) &&
6437 +@@ -585,7 +594,10 @@ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv,
6438 + atomic_dec(&bat_priv->mcast.num_want_all_unsnoopables);
6439 +
6440 + spin_lock_bh(&bat_priv->mcast.want_lists_lock);
6441 +- hlist_del_rcu(&orig->mcast_want_all_unsnoopables_node);
6442 ++ /* flag checks above + mcast_handler_lock prevents this */
6443 ++ WARN_ON(hlist_unhashed(node));
6444 ++
6445 ++ hlist_del_init_rcu(node);
6446 + spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
6447 + }
6448 + }
6449 +@@ -598,19 +610,26 @@ static void batadv_mcast_want_unsnoop_update(struct batadv_priv *bat_priv,
6450 + *
6451 + * If the BATADV_MCAST_WANT_ALL_IPV4 flag of this originator, orig, has
6452 + * toggled then this method updates counter and list accordingly.
6453 ++ *
6454 ++ * Caller needs to hold orig->mcast_handler_lock.
6455 + */
6456 + static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv,
6457 + struct batadv_orig_node *orig,
6458 + uint8_t mcast_flags)
6459 + {
6460 ++ struct hlist_node *node = &orig->mcast_want_all_ipv4_node;
6461 ++ struct hlist_head *head = &bat_priv->mcast.want_all_ipv4_list;
6462 ++
6463 + /* switched from flag unset to set */
6464 + if (mcast_flags & BATADV_MCAST_WANT_ALL_IPV4 &&
6465 + !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_IPV4)) {
6466 + atomic_inc(&bat_priv->mcast.num_want_all_ipv4);
6467 +
6468 + spin_lock_bh(&bat_priv->mcast.want_lists_lock);
6469 +- hlist_add_head_rcu(&orig->mcast_want_all_ipv4_node,
6470 +- &bat_priv->mcast.want_all_ipv4_list);
6471 ++ /* flag checks above + mcast_handler_lock prevents this */
6472 ++ WARN_ON(!hlist_unhashed(node));
6473 ++
6474 ++ hlist_add_head_rcu(node, head);
6475 + spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
6476 + /* switched from flag set to unset */
6477 + } else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_IPV4) &&
6478 +@@ -618,7 +637,10 @@ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv,
6479 + atomic_dec(&bat_priv->mcast.num_want_all_ipv4);
6480 +
6481 + spin_lock_bh(&bat_priv->mcast.want_lists_lock);
6482 +- hlist_del_rcu(&orig->mcast_want_all_ipv4_node);
6483 ++ /* flag checks above + mcast_handler_lock prevents this */
6484 ++ WARN_ON(hlist_unhashed(node));
6485 ++
6486 ++ hlist_del_init_rcu(node);
6487 + spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
6488 + }
6489 + }
6490 +@@ -631,19 +653,26 @@ static void batadv_mcast_want_ipv4_update(struct batadv_priv *bat_priv,
6491 + *
6492 + * If the BATADV_MCAST_WANT_ALL_IPV6 flag of this originator, orig, has
6493 + * toggled then this method updates counter and list accordingly.
6494 ++ *
6495 ++ * Caller needs to hold orig->mcast_handler_lock.
6496 + */
6497 + static void batadv_mcast_want_ipv6_update(struct batadv_priv *bat_priv,
6498 + struct batadv_orig_node *orig,
6499 + uint8_t mcast_flags)
6500 + {
6501 ++ struct hlist_node *node = &orig->mcast_want_all_ipv6_node;
6502 ++ struct hlist_head *head = &bat_priv->mcast.want_all_ipv6_list;
6503 ++
6504 + /* switched from flag unset to set */
6505 + if (mcast_flags & BATADV_MCAST_WANT_ALL_IPV6 &&
6506 + !(orig->mcast_flags & BATADV_MCAST_WANT_ALL_IPV6)) {
6507 + atomic_inc(&bat_priv->mcast.num_want_all_ipv6);
6508 +
6509 + spin_lock_bh(&bat_priv->mcast.want_lists_lock);
6510 +- hlist_add_head_rcu(&orig->mcast_want_all_ipv6_node,
6511 +- &bat_priv->mcast.want_all_ipv6_list);
6512 ++ /* flag checks above + mcast_handler_lock prevents this */
6513 ++ WARN_ON(!hlist_unhashed(node));
6514 ++
6515 ++ hlist_add_head_rcu(node, head);
6516 + spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
6517 + /* switched from flag set to unset */
6518 + } else if (!(mcast_flags & BATADV_MCAST_WANT_ALL_IPV6) &&
6519 +@@ -651,7 +680,10 @@ static void batadv_mcast_want_ipv6_update(struct batadv_priv *bat_priv,
6520 + atomic_dec(&bat_priv->mcast.num_want_all_ipv6);
6521 +
6522 + spin_lock_bh(&bat_priv->mcast.want_lists_lock);
6523 +- hlist_del_rcu(&orig->mcast_want_all_ipv6_node);
6524 ++ /* flag checks above + mcast_handler_lock prevents this */
6525 ++ WARN_ON(hlist_unhashed(node));
6526 ++
6527 ++ hlist_del_init_rcu(node);
6528 + spin_unlock_bh(&bat_priv->mcast.want_lists_lock);
6529 + }
6530 + }
6531 +@@ -674,39 +706,42 @@ static void batadv_mcast_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
6532 + uint8_t mcast_flags = BATADV_NO_FLAGS;
6533 + bool orig_initialized;
6534 +
6535 +- orig_initialized = orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST;
6536 ++ if (orig_mcast_enabled && tvlv_value &&
6537 ++ (tvlv_value_len >= sizeof(mcast_flags)))
6538 ++ mcast_flags = *(uint8_t *)tvlv_value;
6539 ++
6540 ++ spin_lock_bh(&orig->mcast_handler_lock);
6541 ++ orig_initialized = test_bit(BATADV_ORIG_CAPA_HAS_MCAST,
6542 ++ &orig->capa_initialized);
6543 +
6544 + /* If mcast support is turned on decrease the disabled mcast node
6545 + * counter only if we had increased it for this node before. If this
6546 + * is a completely new orig_node no need to decrease the counter.
6547 + */
6548 + if (orig_mcast_enabled &&
6549 +- !(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST)) {
6550 ++ !test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities)) {
6551 + if (orig_initialized)
6552 + atomic_dec(&bat_priv->mcast.num_disabled);
6553 +- orig->capabilities |= BATADV_ORIG_CAPA_HAS_MCAST;
6554 ++ set_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities);
6555 + /* If mcast support is being switched off or if this is an initial
6556 + * OGM without mcast support then increase the disabled mcast
6557 + * node counter.
6558 + */
6559 + } else if (!orig_mcast_enabled &&
6560 +- (orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST ||
6561 ++ (test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities) ||
6562 + !orig_initialized)) {
6563 + atomic_inc(&bat_priv->mcast.num_disabled);
6564 +- orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_MCAST;
6565 ++ clear_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities);
6566 + }
6567 +
6568 +- orig->capa_initialized |= BATADV_ORIG_CAPA_HAS_MCAST;
6569 +-
6570 +- if (orig_mcast_enabled && tvlv_value &&
6571 +- (tvlv_value_len >= sizeof(mcast_flags)))
6572 +- mcast_flags = *(uint8_t *)tvlv_value;
6573 ++ set_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capa_initialized);
6574 +
6575 + batadv_mcast_want_unsnoop_update(bat_priv, orig, mcast_flags);
6576 + batadv_mcast_want_ipv4_update(bat_priv, orig, mcast_flags);
6577 + batadv_mcast_want_ipv6_update(bat_priv, orig, mcast_flags);
6578 +
6579 + orig->mcast_flags = mcast_flags;
6580 ++ spin_unlock_bh(&orig->mcast_handler_lock);
6581 + }
6582 +
6583 + /**
6584 +@@ -740,11 +775,15 @@ void batadv_mcast_purge_orig(struct batadv_orig_node *orig)
6585 + {
6586 + struct batadv_priv *bat_priv = orig->bat_priv;
6587 +
6588 +- if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) &&
6589 +- orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST)
6590 ++ spin_lock_bh(&orig->mcast_handler_lock);
6591 ++
6592 ++ if (!test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capabilities) &&
6593 ++ test_bit(BATADV_ORIG_CAPA_HAS_MCAST, &orig->capa_initialized))
6594 + atomic_dec(&bat_priv->mcast.num_disabled);
6595 +
6596 + batadv_mcast_want_unsnoop_update(bat_priv, orig, BATADV_NO_FLAGS);
6597 + batadv_mcast_want_ipv4_update(bat_priv, orig, BATADV_NO_FLAGS);
6598 + batadv_mcast_want_ipv6_update(bat_priv, orig, BATADV_NO_FLAGS);
6599 ++
6600 ++ spin_unlock_bh(&orig->mcast_handler_lock);
6601 + }
6602 +diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
6603 +index 127cc4d7380a..a449195c5b2b 100644
6604 +--- a/net/batman-adv/network-coding.c
6605 ++++ b/net/batman-adv/network-coding.c
6606 +@@ -15,6 +15,7 @@
6607 + * along with this program; if not, see <http://www.gnu.org/licenses/>.
6608 + */
6609 +
6610 ++#include <linux/bitops.h>
6611 + #include <linux/debugfs.h>
6612 +
6613 + #include "main.h"
6614 +@@ -105,9 +106,9 @@ static void batadv_nc_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
6615 + uint16_t tvlv_value_len)
6616 + {
6617 + if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND)
6618 +- orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_NC;
6619 ++ clear_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities);
6620 + else
6621 +- orig->capabilities |= BATADV_ORIG_CAPA_HAS_NC;
6622 ++ set_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities);
6623 + }
6624 +
6625 + /**
6626 +@@ -871,7 +872,7 @@ void batadv_nc_update_nc_node(struct batadv_priv *bat_priv,
6627 + goto out;
6628 +
6629 + /* check if orig node is network coding enabled */
6630 +- if (!(orig_node->capabilities & BATADV_ORIG_CAPA_HAS_NC))
6631 ++ if (!test_bit(BATADV_ORIG_CAPA_HAS_NC, &orig_node->capabilities))
6632 + goto out;
6633 +
6634 + /* accept ogms from 'good' neighbors and single hop neighbors */
6635 +diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
6636 +index 90e805aba379..dfae97408628 100644
6637 +--- a/net/batman-adv/originator.c
6638 ++++ b/net/batman-adv/originator.c
6639 +@@ -678,8 +678,13 @@ struct batadv_orig_node *batadv_orig_node_new(struct batadv_priv *bat_priv,
6640 + orig_node->last_seen = jiffies;
6641 + reset_time = jiffies - 1 - msecs_to_jiffies(BATADV_RESET_PROTECTION_MS);
6642 + orig_node->bcast_seqno_reset = reset_time;
6643 ++
6644 + #ifdef CONFIG_BATMAN_ADV_MCAST
6645 + orig_node->mcast_flags = BATADV_NO_FLAGS;
6646 ++ INIT_HLIST_NODE(&orig_node->mcast_want_all_unsnoopables_node);
6647 ++ INIT_HLIST_NODE(&orig_node->mcast_want_all_ipv4_node);
6648 ++ INIT_HLIST_NODE(&orig_node->mcast_want_all_ipv6_node);
6649 ++ spin_lock_init(&orig_node->mcast_handler_lock);
6650 + #endif
6651 +
6652 + /* create a vlan object for the "untagged" LAN */
6653 +diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
6654 +index 5ec31d7de24f..a0b1b861b968 100644
6655 +--- a/net/batman-adv/soft-interface.c
6656 ++++ b/net/batman-adv/soft-interface.c
6657 +@@ -172,6 +172,7 @@ static int batadv_interface_tx(struct sk_buff *skb,
6658 + int gw_mode;
6659 + enum batadv_forw_mode forw_mode;
6660 + struct batadv_orig_node *mcast_single_orig = NULL;
6661 ++ int network_offset = ETH_HLEN;
6662 +
6663 + if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE)
6664 + goto dropped;
6665 +@@ -184,14 +185,18 @@ static int batadv_interface_tx(struct sk_buff *skb,
6666 + case ETH_P_8021Q:
6667 + vhdr = vlan_eth_hdr(skb);
6668 +
6669 +- if (vhdr->h_vlan_encapsulated_proto != ethertype)
6670 ++ if (vhdr->h_vlan_encapsulated_proto != ethertype) {
6671 ++ network_offset += VLAN_HLEN;
6672 + break;
6673 ++ }
6674 +
6675 + /* fall through */
6676 + case ETH_P_BATMAN:
6677 + goto dropped;
6678 + }
6679 +
6680 ++ skb_set_network_header(skb, network_offset);
6681 ++
6682 + if (batadv_bla_tx(bat_priv, skb, vid))
6683 + goto dropped;
6684 +
6685 +@@ -449,6 +454,9 @@ out:
6686 + */
6687 + void batadv_softif_vlan_free_ref(struct batadv_softif_vlan *vlan)
6688 + {
6689 ++ if (!vlan)
6690 ++ return;
6691 ++
6692 + if (atomic_dec_and_test(&vlan->refcount)) {
6693 + spin_lock_bh(&vlan->bat_priv->softif_vlan_list_lock);
6694 + hlist_del_rcu(&vlan->list);
6695 +diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
6696 +index 07b263a437d1..4f2a9d2c56db 100644
6697 +--- a/net/batman-adv/translation-table.c
6698 ++++ b/net/batman-adv/translation-table.c
6699 +@@ -15,6 +15,7 @@
6700 + * along with this program; if not, see <http://www.gnu.org/licenses/>.
6701 + */
6702 +
6703 ++#include <linux/bitops.h>
6704 + #include "main.h"
6705 + #include "translation-table.h"
6706 + #include "soft-interface.h"
6707 +@@ -575,6 +576,9 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const uint8_t *addr,
6708 +
6709 + /* increase the refcounter of the related vlan */
6710 + vlan = batadv_softif_vlan_get(bat_priv, vid);
6711 ++ if (WARN(!vlan, "adding TT local entry %pM to non-existent VLAN %d",
6712 ++ addr, BATADV_PRINT_VID(vid)))
6713 ++ goto out;
6714 +
6715 + batadv_dbg(BATADV_DBG_TT, bat_priv,
6716 + "Creating new local tt entry: %pM (vid: %d, ttvn: %d)\n",
6717 +@@ -1015,6 +1019,7 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv,
6718 + struct batadv_tt_local_entry *tt_local_entry;
6719 + uint16_t flags, curr_flags = BATADV_NO_FLAGS;
6720 + struct batadv_softif_vlan *vlan;
6721 ++ void *tt_entry_exists;
6722 +
6723 + tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid);
6724 + if (!tt_local_entry)
6725 +@@ -1042,11 +1047,22 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv,
6726 + * immediately purge it
6727 + */
6728 + batadv_tt_local_event(bat_priv, tt_local_entry, BATADV_TT_CLIENT_DEL);
6729 +- hlist_del_rcu(&tt_local_entry->common.hash_entry);
6730 ++
6731 ++ tt_entry_exists = batadv_hash_remove(bat_priv->tt.local_hash,
6732 ++ batadv_compare_tt,
6733 ++ batadv_choose_tt,
6734 ++ &tt_local_entry->common);
6735 ++ if (!tt_entry_exists)
6736 ++ goto out;
6737 ++
6738 ++ /* extra call to free the local tt entry */
6739 + batadv_tt_local_entry_free_ref(tt_local_entry);
6740 +
6741 + /* decrease the reference held for this vlan */
6742 + vlan = batadv_softif_vlan_get(bat_priv, vid);
6743 ++ if (!vlan)
6744 ++ goto out;
6745 ++
6746 + batadv_softif_vlan_free_ref(vlan);
6747 + batadv_softif_vlan_free_ref(vlan);
6748 +
6749 +@@ -1147,8 +1163,10 @@ static void batadv_tt_local_table_free(struct batadv_priv *bat_priv)
6750 + /* decrease the reference held for this vlan */
6751 + vlan = batadv_softif_vlan_get(bat_priv,
6752 + tt_common_entry->vid);
6753 +- batadv_softif_vlan_free_ref(vlan);
6754 +- batadv_softif_vlan_free_ref(vlan);
6755 ++ if (vlan) {
6756 ++ batadv_softif_vlan_free_ref(vlan);
6757 ++ batadv_softif_vlan_free_ref(vlan);
6758 ++ }
6759 +
6760 + batadv_tt_local_entry_free_ref(tt_local);
6761 + }
6762 +@@ -1843,7 +1861,7 @@ void batadv_tt_global_del_orig(struct batadv_priv *bat_priv,
6763 + }
6764 + spin_unlock_bh(list_lock);
6765 + }
6766 +- orig_node->capa_initialized &= ~BATADV_ORIG_CAPA_HAS_TT;
6767 ++ clear_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized);
6768 + }
6769 +
6770 + static bool batadv_tt_global_to_purge(struct batadv_tt_global_entry *tt_global,
6771 +@@ -2802,7 +2820,7 @@ static void _batadv_tt_update_changes(struct batadv_priv *bat_priv,
6772 + return;
6773 + }
6774 + }
6775 +- orig_node->capa_initialized |= BATADV_ORIG_CAPA_HAS_TT;
6776 ++ set_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized);
6777 + }
6778 +
6779 + static void batadv_tt_fill_gtable(struct batadv_priv *bat_priv,
6780 +@@ -3188,8 +3206,10 @@ static void batadv_tt_local_purge_pending_clients(struct batadv_priv *bat_priv)
6781 +
6782 + /* decrease the reference held for this vlan */
6783 + vlan = batadv_softif_vlan_get(bat_priv, tt_common->vid);
6784 +- batadv_softif_vlan_free_ref(vlan);
6785 +- batadv_softif_vlan_free_ref(vlan);
6786 ++ if (vlan) {
6787 ++ batadv_softif_vlan_free_ref(vlan);
6788 ++ batadv_softif_vlan_free_ref(vlan);
6789 ++ }
6790 +
6791 + batadv_tt_local_entry_free_ref(tt_local);
6792 + }
6793 +@@ -3302,7 +3322,8 @@ static void batadv_tt_update_orig(struct batadv_priv *bat_priv,
6794 + bool has_tt_init;
6795 +
6796 + tt_vlan = (struct batadv_tvlv_tt_vlan_data *)tt_buff;
6797 +- has_tt_init = orig_node->capa_initialized & BATADV_ORIG_CAPA_HAS_TT;
6798 ++ has_tt_init = test_bit(BATADV_ORIG_CAPA_HAS_TT,
6799 ++ &orig_node->capa_initialized);
6800 +
6801 + /* orig table not initialised AND first diff is in the OGM OR the ttvn
6802 + * increased by one -> we can apply the attached changes
6803 +diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
6804 +index 9398c3fb4174..26c37be2aa05 100644
6805 +--- a/net/batman-adv/types.h
6806 ++++ b/net/batman-adv/types.h
6807 +@@ -204,6 +204,7 @@ struct batadv_orig_bat_iv {
6808 + * @batadv_dat_addr_t: address of the orig node in the distributed hash
6809 + * @last_seen: time when last packet from this node was received
6810 + * @bcast_seqno_reset: time when the broadcast seqno window was reset
6811 ++ * @mcast_handler_lock: synchronizes mcast-capability and -flag changes
6812 + * @mcast_flags: multicast flags announced by the orig node
6813 + * @mcast_want_all_unsnoop_node: a list node for the
6814 + * mcast.want_all_unsnoopables list
6815 +@@ -251,13 +252,15 @@ struct batadv_orig_node {
6816 + unsigned long last_seen;
6817 + unsigned long bcast_seqno_reset;
6818 + #ifdef CONFIG_BATMAN_ADV_MCAST
6819 ++ /* synchronizes mcast tvlv specific orig changes */
6820 ++ spinlock_t mcast_handler_lock;
6821 + uint8_t mcast_flags;
6822 + struct hlist_node mcast_want_all_unsnoopables_node;
6823 + struct hlist_node mcast_want_all_ipv4_node;
6824 + struct hlist_node mcast_want_all_ipv6_node;
6825 + #endif
6826 +- uint8_t capabilities;
6827 +- uint8_t capa_initialized;
6828 ++ unsigned long capabilities;
6829 ++ unsigned long capa_initialized;
6830 + atomic_t last_ttvn;
6831 + unsigned char *tt_buff;
6832 + int16_t tt_buff_len;
6833 +@@ -296,10 +299,10 @@ struct batadv_orig_node {
6834 + * (= orig node announces a tvlv of type BATADV_TVLV_MCAST)
6835 + */
6836 + enum batadv_orig_capabilities {
6837 +- BATADV_ORIG_CAPA_HAS_DAT = BIT(0),
6838 +- BATADV_ORIG_CAPA_HAS_NC = BIT(1),
6839 +- BATADV_ORIG_CAPA_HAS_TT = BIT(2),
6840 +- BATADV_ORIG_CAPA_HAS_MCAST = BIT(3),
6841 ++ BATADV_ORIG_CAPA_HAS_DAT,
6842 ++ BATADV_ORIG_CAPA_HAS_NC,
6843 ++ BATADV_ORIG_CAPA_HAS_TT,
6844 ++ BATADV_ORIG_CAPA_HAS_MCAST,
6845 + };
6846 +
6847 + /**
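The enum change is what the conversion to atomic bitops requires: set_bit(), clear_bit() and test_bit() take a bit index into an unsigned long, not a BIT(n) mask. A short usage sketch (the flags word is illustrative):

#include <linux/bitops.h>

static unsigned long caps;	/* stands in for orig->capabilities */

static void caps_demo(void)
{
	set_bit(BATADV_ORIG_CAPA_HAS_DAT, &caps);	/* atomic set */
	if (test_bit(BATADV_ORIG_CAPA_HAS_DAT, &caps))	/* atomic test */
		clear_bit(BATADV_ORIG_CAPA_HAS_DAT, &caps);
}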
6848 +diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
6849 +index 7b815bcc8c9b..69ad5091e2ce 100644
6850 +--- a/net/bluetooth/smp.c
6851 ++++ b/net/bluetooth/smp.c
6852 +@@ -2294,12 +2294,6 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
6853 + if (!conn)
6854 + return 1;
6855 +
6856 +- chan = conn->smp;
6857 +- if (!chan) {
6858 +- BT_ERR("SMP security requested but not available");
6859 +- return 1;
6860 +- }
6861 +-
6862 + if (!hci_dev_test_flag(hcon->hdev, HCI_LE_ENABLED))
6863 + return 1;
6864 +
6865 +@@ -2313,6 +2307,12 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
6866 + if (smp_ltk_encrypt(conn, hcon->pending_sec_level))
6867 + return 0;
6868 +
6869 ++ chan = conn->smp;
6870 ++ if (!chan) {
6871 ++ BT_ERR("SMP security requested but not available");
6872 ++ return 1;
6873 ++ }
6874 ++
6875 + l2cap_chan_lock(chan);
6876 +
6877 + /* If SMP is already in progress ignore this request */
6878 +diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
6879 +index b27fc401c6a9..e664706b350c 100644
6880 +--- a/net/ipv4/inet_connection_sock.c
6881 ++++ b/net/ipv4/inet_connection_sock.c
6882 +@@ -584,7 +584,7 @@ static bool reqsk_queue_unlink(struct request_sock_queue *queue,
6883 + }
6884 +
6885 + spin_unlock(&queue->syn_wait_lock);
6886 +- if (del_timer_sync(&req->rsk_timer))
6887 ++ if (timer_pending(&req->rsk_timer) && del_timer_sync(&req->rsk_timer))
6888 + reqsk_put(req);
6889 + return found;
6890 + }
6891 +diff --git a/net/netfilter/core.c b/net/netfilter/core.c
6892 +index e6163017c42d..5d0c6fd59475 100644
6893 +--- a/net/netfilter/core.c
6894 ++++ b/net/netfilter/core.c
6895 +@@ -89,6 +89,7 @@ void nf_unregister_hook(struct nf_hook_ops *reg)
6896 + static_key_slow_dec(&nf_hooks_needed[reg->pf][reg->hooknum]);
6897 + #endif
6898 + synchronize_net();
6899 ++ nf_queue_nf_hook_drop(reg);
6900 + }
6901 + EXPORT_SYMBOL(nf_unregister_hook);
6902 +
6903 +diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
6904 +index 5d2b806a862e..38fbc194b9cb 100644
6905 +--- a/net/netfilter/ipvs/ip_vs_core.c
6906 ++++ b/net/netfilter/ipvs/ip_vs_core.c
6907 +@@ -319,7 +319,13 @@ ip_vs_sched_persist(struct ip_vs_service *svc,
6908 + * return *ignored=0 i.e. ICMP and NF_DROP
6909 + */
6910 + sched = rcu_dereference(svc->scheduler);
6911 +- dest = sched->schedule(svc, skb, iph);
6912 ++ if (sched) {
6913 ++ /* read svc->sched_data after svc->scheduler */
6914 ++ smp_rmb();
6915 ++ dest = sched->schedule(svc, skb, iph);
6916 ++ } else {
6917 ++ dest = NULL;
6918 ++ }
6919 + if (!dest) {
6920 + IP_VS_DBG(1, "p-schedule: no dest found.\n");
6921 + kfree(param.pe_data);
6922 +@@ -467,7 +473,13 @@ ip_vs_schedule(struct ip_vs_service *svc, struct sk_buff *skb,
6923 + }
6924 +
6925 + sched = rcu_dereference(svc->scheduler);
6926 +- dest = sched->schedule(svc, skb, iph);
6927 ++ if (sched) {
6928 ++ /* read svc->sched_data after svc->scheduler */
6929 ++ smp_rmb();
6930 ++ dest = sched->schedule(svc, skb, iph);
6931 ++ } else {
6932 ++ dest = NULL;
6933 ++ }
6934 + if (dest == NULL) {
6935 + IP_VS_DBG(1, "Schedule: no dest found.\n");
6936 + return NULL;
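Both call sites now tolerate a NULL scheduler (the new "none" service case), and the smp_rmb() orders the read of sched_data against the earlier read of svc->scheduler. A compact sketch of the reader side (types are illustrative):

struct sched;
struct svc {
	struct sched __rcu *scheduler;
	void *sched_data;
};

static void *svc_pick(struct svc *svc)
{
	struct sched *s = rcu_dereference(svc->scheduler);

	if (!s)
		return NULL;	/* service configured with scheduler "none" */
	smp_rmb();		/* read sched_data after scheduler */
	return svc->sched_data;	/* illustrative use of the ordered data */
}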
6937 +diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
6938 +index 285eae3a1454..24c554201a76 100644
6939 +--- a/net/netfilter/ipvs/ip_vs_ctl.c
6940 ++++ b/net/netfilter/ipvs/ip_vs_ctl.c
6941 +@@ -842,15 +842,16 @@ __ip_vs_update_dest(struct ip_vs_service *svc, struct ip_vs_dest *dest,
6942 + __ip_vs_dst_cache_reset(dest);
6943 + spin_unlock_bh(&dest->dst_lock);
6944 +
6945 +- sched = rcu_dereference_protected(svc->scheduler, 1);
6946 + if (add) {
6947 + ip_vs_start_estimator(svc->net, &dest->stats);
6948 + list_add_rcu(&dest->n_list, &svc->destinations);
6949 + svc->num_dests++;
6950 +- if (sched->add_dest)
6951 ++ sched = rcu_dereference_protected(svc->scheduler, 1);
6952 ++ if (sched && sched->add_dest)
6953 + sched->add_dest(svc, dest);
6954 + } else {
6955 +- if (sched->upd_dest)
6956 ++ sched = rcu_dereference_protected(svc->scheduler, 1);
6957 ++ if (sched && sched->upd_dest)
6958 + sched->upd_dest(svc, dest);
6959 + }
6960 + }
6961 +@@ -1084,7 +1085,7 @@ static void __ip_vs_unlink_dest(struct ip_vs_service *svc,
6962 + struct ip_vs_scheduler *sched;
6963 +
6964 + sched = rcu_dereference_protected(svc->scheduler, 1);
6965 +- if (sched->del_dest)
6966 ++ if (sched && sched->del_dest)
6967 + sched->del_dest(svc, dest);
6968 + }
6969 + }
6970 +@@ -1175,11 +1176,14 @@ ip_vs_add_service(struct net *net, struct ip_vs_service_user_kern *u,
6971 + ip_vs_use_count_inc();
6972 +
6973 + /* Lookup the scheduler by 'u->sched_name' */
6974 +- sched = ip_vs_scheduler_get(u->sched_name);
6975 +- if (sched == NULL) {
6976 +- pr_info("Scheduler module ip_vs_%s not found\n", u->sched_name);
6977 +- ret = -ENOENT;
6978 +- goto out_err;
6979 ++ if (strcmp(u->sched_name, "none")) {
6980 ++ sched = ip_vs_scheduler_get(u->sched_name);
6981 ++ if (!sched) {
6982 ++ pr_info("Scheduler module ip_vs_%s not found\n",
6983 ++ u->sched_name);
6984 ++ ret = -ENOENT;
6985 ++ goto out_err;
6986 ++ }
6987 + }
6988 +
6989 + if (u->pe_name && *u->pe_name) {
6990 +@@ -1240,10 +1244,12 @@ ip_vs_add_service(struct net *net, struct ip_vs_service_user_kern *u,
6991 + spin_lock_init(&svc->stats.lock);
6992 +
6993 + /* Bind the scheduler */
6994 +- ret = ip_vs_bind_scheduler(svc, sched);
6995 +- if (ret)
6996 +- goto out_err;
6997 +- sched = NULL;
6998 ++ if (sched) {
6999 ++ ret = ip_vs_bind_scheduler(svc, sched);
7000 ++ if (ret)
7001 ++ goto out_err;
7002 ++ sched = NULL;
7003 ++ }
7004 +
7005 + /* Bind the ct retriever */
7006 + RCU_INIT_POINTER(svc->pe, pe);
7007 +@@ -1291,17 +1297,20 @@ ip_vs_add_service(struct net *net, struct ip_vs_service_user_kern *u,
7008 + static int
7009 + ip_vs_edit_service(struct ip_vs_service *svc, struct ip_vs_service_user_kern *u)
7010 + {
7011 +- struct ip_vs_scheduler *sched, *old_sched;
7012 ++ struct ip_vs_scheduler *sched = NULL, *old_sched;
7013 + struct ip_vs_pe *pe = NULL, *old_pe = NULL;
7014 + int ret = 0;
7015 +
7016 + /*
7017 + * Lookup the scheduler, by 'u->sched_name'
7018 + */
7019 +- sched = ip_vs_scheduler_get(u->sched_name);
7020 +- if (sched == NULL) {
7021 +- pr_info("Scheduler module ip_vs_%s not found\n", u->sched_name);
7022 +- return -ENOENT;
7023 ++ if (strcmp(u->sched_name, "none")) {
7024 ++ sched = ip_vs_scheduler_get(u->sched_name);
7025 ++ if (!sched) {
7026 ++ pr_info("Scheduler module ip_vs_%s not found\n",
7027 ++ u->sched_name);
7028 ++ return -ENOENT;
7029 ++ }
7030 + }
7031 + old_sched = sched;
7032 +
7033 +@@ -1329,14 +1338,20 @@ ip_vs_edit_service(struct ip_vs_service *svc, struct ip_vs_service_user_kern *u)
7034 +
7035 + old_sched = rcu_dereference_protected(svc->scheduler, 1);
7036 + if (sched != old_sched) {
7037 ++ if (old_sched) {
7038 ++ ip_vs_unbind_scheduler(svc, old_sched);
7039 ++ RCU_INIT_POINTER(svc->scheduler, NULL);
7040 ++ /* Wait all svc->sched_data users */
7041 ++ synchronize_rcu();
7042 ++ }
7043 + /* Bind the new scheduler */
7044 +- ret = ip_vs_bind_scheduler(svc, sched);
7045 +- if (ret) {
7046 +- old_sched = sched;
7047 +- goto out;
7048 ++ if (sched) {
7049 ++ ret = ip_vs_bind_scheduler(svc, sched);
7050 ++ if (ret) {
7051 ++ ip_vs_scheduler_put(sched);
7052 ++ goto out;
7053 ++ }
7054 + }
7055 +- /* Unbind the old scheduler on success */
7056 +- ip_vs_unbind_scheduler(svc, old_sched);
7057 + }
7058 +
7059 + /*
7060 +@@ -1982,6 +1997,7 @@ static int ip_vs_info_seq_show(struct seq_file *seq, void *v)
7061 + const struct ip_vs_iter *iter = seq->private;
7062 + const struct ip_vs_dest *dest;
7063 + struct ip_vs_scheduler *sched = rcu_dereference(svc->scheduler);
7064 ++ char *sched_name = sched ? sched->name : "none";
7065 +
7066 + if (iter->table == ip_vs_svc_table) {
7067 + #ifdef CONFIG_IP_VS_IPV6
7068 +@@ -1990,18 +2006,18 @@ static int ip_vs_info_seq_show(struct seq_file *seq, void *v)
7069 + ip_vs_proto_name(svc->protocol),
7070 + &svc->addr.in6,
7071 + ntohs(svc->port),
7072 +- sched->name);
7073 ++ sched_name);
7074 + else
7075 + #endif
7076 + seq_printf(seq, "%s %08X:%04X %s %s ",
7077 + ip_vs_proto_name(svc->protocol),
7078 + ntohl(svc->addr.ip),
7079 + ntohs(svc->port),
7080 +- sched->name,
7081 ++ sched_name,
7082 + (svc->flags & IP_VS_SVC_F_ONEPACKET)?"ops ":"");
7083 + } else {
7084 + seq_printf(seq, "FWM %08X %s %s",
7085 +- svc->fwmark, sched->name,
7086 ++ svc->fwmark, sched_name,
7087 + (svc->flags & IP_VS_SVC_F_ONEPACKET)?"ops ":"");
7088 + }
7089 +
7090 +@@ -2427,13 +2443,15 @@ ip_vs_copy_service(struct ip_vs_service_entry *dst, struct ip_vs_service *src)
7091 + {
7092 + struct ip_vs_scheduler *sched;
7093 + struct ip_vs_kstats kstats;
7094 ++ char *sched_name;
7095 +
7096 + sched = rcu_dereference_protected(src->scheduler, 1);
7097 ++ sched_name = sched ? sched->name : "none";
7098 + dst->protocol = src->protocol;
7099 + dst->addr = src->addr.ip;
7100 + dst->port = src->port;
7101 + dst->fwmark = src->fwmark;
7102 +- strlcpy(dst->sched_name, sched->name, sizeof(dst->sched_name));
7103 ++ strlcpy(dst->sched_name, sched_name, sizeof(dst->sched_name));
7104 + dst->flags = src->flags;
7105 + dst->timeout = src->timeout / HZ;
7106 + dst->netmask = src->netmask;
7107 +@@ -2892,6 +2910,7 @@ static int ip_vs_genl_fill_service(struct sk_buff *skb,
7108 + struct ip_vs_flags flags = { .flags = svc->flags,
7109 + .mask = ~0 };
7110 + struct ip_vs_kstats kstats;
7111 ++ char *sched_name;
7112 +
7113 + nl_service = nla_nest_start(skb, IPVS_CMD_ATTR_SERVICE);
7114 + if (!nl_service)
7115 +@@ -2910,8 +2929,9 @@ static int ip_vs_genl_fill_service(struct sk_buff *skb,
7116 + }
7117 +
7118 + sched = rcu_dereference_protected(svc->scheduler, 1);
7119 ++ sched_name = sched ? sched->name : "none";
7120 + pe = rcu_dereference_protected(svc->pe, 1);
7121 +- if (nla_put_string(skb, IPVS_SVC_ATTR_SCHED_NAME, sched->name) ||
7122 ++ if (nla_put_string(skb, IPVS_SVC_ATTR_SCHED_NAME, sched_name) ||
7123 + (pe && nla_put_string(skb, IPVS_SVC_ATTR_PE_NAME, pe->name)) ||
7124 + nla_put(skb, IPVS_SVC_ATTR_FLAGS, sizeof(flags), &flags) ||
7125 + nla_put_u32(skb, IPVS_SVC_ATTR_TIMEOUT, svc->timeout / HZ) ||
7126 +diff --git a/net/netfilter/ipvs/ip_vs_sched.c b/net/netfilter/ipvs/ip_vs_sched.c
7127 +index 199760c71f39..7e8141647943 100644
7128 +--- a/net/netfilter/ipvs/ip_vs_sched.c
7129 ++++ b/net/netfilter/ipvs/ip_vs_sched.c
7130 +@@ -74,7 +74,7 @@ void ip_vs_unbind_scheduler(struct ip_vs_service *svc,
7131 +
7132 + if (sched->done_service)
7133 + sched->done_service(svc);
7134 +- /* svc->scheduler can not be set to NULL */
7135 ++ /* svc->scheduler can be set to NULL only by the caller */
7136 + }
7137 +
7138 +
7139 +@@ -147,21 +147,21 @@ void ip_vs_scheduler_put(struct ip_vs_scheduler *scheduler)
7140 +
7141 + void ip_vs_scheduler_err(struct ip_vs_service *svc, const char *msg)
7142 + {
7143 +- struct ip_vs_scheduler *sched;
7144 ++ struct ip_vs_scheduler *sched = rcu_dereference(svc->scheduler);
7145 ++ char *sched_name = sched ? sched->name : "none";
7146 +
7147 +- sched = rcu_dereference(svc->scheduler);
7148 + if (svc->fwmark) {
7149 + IP_VS_ERR_RL("%s: FWM %u 0x%08X - %s\n",
7150 +- sched->name, svc->fwmark, svc->fwmark, msg);
7151 ++ sched_name, svc->fwmark, svc->fwmark, msg);
7152 + #ifdef CONFIG_IP_VS_IPV6
7153 + } else if (svc->af == AF_INET6) {
7154 + IP_VS_ERR_RL("%s: %s [%pI6c]:%d - %s\n",
7155 +- sched->name, ip_vs_proto_name(svc->protocol),
7156 ++ sched_name, ip_vs_proto_name(svc->protocol),
7157 + &svc->addr.in6, ntohs(svc->port), msg);
7158 + #endif
7159 + } else {
7160 + IP_VS_ERR_RL("%s: %s %pI4:%d - %s\n",
7161 +- sched->name, ip_vs_proto_name(svc->protocol),
7162 ++ sched_name, ip_vs_proto_name(svc->protocol),
7163 + &svc->addr.ip, ntohs(svc->port), msg);
7164 + }
7165 + }
7166 +diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
7167 +index 19b9cce6c210..150047c739fa 100644
7168 +--- a/net/netfilter/ipvs/ip_vs_sync.c
7169 ++++ b/net/netfilter/ipvs/ip_vs_sync.c
7170 +@@ -612,7 +612,7 @@ static void ip_vs_sync_conn_v0(struct net *net, struct ip_vs_conn *cp,
7171 + pkts = atomic_add_return(1, &cp->in_pkts);
7172 + else
7173 + pkts = sysctl_sync_threshold(ipvs);
7174 +- ip_vs_sync_conn(net, cp->control, pkts);
7175 ++ ip_vs_sync_conn(net, cp, pkts);
7176 + }
7177 + }
7178 +
7179 +diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
7180 +index 19986ec5f21a..258f1e05250f 100644
7181 +--- a/net/netfilter/ipvs/ip_vs_xmit.c
7182 ++++ b/net/netfilter/ipvs/ip_vs_xmit.c
7183 +@@ -130,7 +130,6 @@ static struct rtable *do_output_route4(struct net *net, __be32 daddr,
7184 +
7185 + memset(&fl4, 0, sizeof(fl4));
7186 + fl4.daddr = daddr;
7187 +- fl4.saddr = (rt_mode & IP_VS_RT_MODE_CONNECT) ? *saddr : 0;
7188 + fl4.flowi4_flags = (rt_mode & IP_VS_RT_MODE_KNOWN_NH) ?
7189 + FLOWI_FLAG_KNOWN_NH : 0;
7190 +
7191 +@@ -519,10 +518,27 @@ static inline int ip_vs_tunnel_xmit_prepare(struct sk_buff *skb,
7192 + if (ret == NF_ACCEPT) {
7193 + nf_reset(skb);
7194 + skb_forward_csum(skb);
7195 ++ if (!skb->sk)
7196 ++ skb_sender_cpu_clear(skb);
7197 + }
7198 + return ret;
7199 + }
7200 +
7201 ++/* In the event of a remote destination, it's possible that we would have
7202 ++ * matches against an old socket (particularly a TIME-WAIT socket). This
7203 ++ * causes havoc down the line (ip_local_out et al. expect regular sockets
7204 ++ * and invalid memory accesses will happen) so simply drop the association
7205 ++ * in this case.
7206 ++*/
7207 ++static inline void ip_vs_drop_early_demux_sk(struct sk_buff *skb)
7208 ++{
7209 ++ /* If dev is set, the packet came from the LOCAL_IN callback and
7210 ++ * not from a local TCP socket.
7211 ++ */
7212 ++ if (skb->dev)
7213 ++ skb_orphan(skb);
7214 ++}
7215 ++
7216 + /* return NF_STOLEN (sent) or NF_ACCEPT if local=1 (not sent) */
7217 + static inline int ip_vs_nat_send_or_cont(int pf, struct sk_buff *skb,
7218 + struct ip_vs_conn *cp, int local)
7219 +@@ -534,12 +550,23 @@ static inline int ip_vs_nat_send_or_cont(int pf, struct sk_buff *skb,
7220 + ip_vs_notrack(skb);
7221 + else
7222 + ip_vs_update_conntrack(skb, cp, 1);
7223 ++
7224 ++ /* Remove the early_demux association unless it's bound for the
7225 ++ * exact same port and address on this host after translation.
7226 ++ */
7227 ++ if (!local || cp->vport != cp->dport ||
7228 ++ !ip_vs_addr_equal(cp->af, &cp->vaddr, &cp->daddr))
7229 ++ ip_vs_drop_early_demux_sk(skb);
7230 ++
7231 + if (!local) {
7232 + skb_forward_csum(skb);
7233 ++ if (!skb->sk)
7234 ++ skb_sender_cpu_clear(skb);
7235 + NF_HOOK(pf, NF_INET_LOCAL_OUT, NULL, skb,
7236 + NULL, skb_dst(skb)->dev, dst_output_sk);
7237 + } else
7238 + ret = NF_ACCEPT;
7239 ++
7240 + return ret;
7241 + }
7242 +
7243 +@@ -553,7 +580,10 @@ static inline int ip_vs_send_or_cont(int pf, struct sk_buff *skb,
7244 + if (likely(!(cp->flags & IP_VS_CONN_F_NFCT)))
7245 + ip_vs_notrack(skb);
7246 + if (!local) {
7247 ++ ip_vs_drop_early_demux_sk(skb);
7248 + skb_forward_csum(skb);
7249 ++ if (!skb->sk)
7250 ++ skb_sender_cpu_clear(skb);
7251 + NF_HOOK(pf, NF_INET_LOCAL_OUT, NULL, skb,
7252 + NULL, skb_dst(skb)->dev, dst_output_sk);
7253 + } else
7254 +@@ -841,6 +871,8 @@ ip_vs_prepare_tunneled_skb(struct sk_buff *skb, int skb_af,
7255 + struct ipv6hdr *old_ipv6h = NULL;
7256 + #endif
7257 +
7258 ++ ip_vs_drop_early_demux_sk(skb);
7259 ++
7260 + if (skb_headroom(skb) < max_headroom || skb_cloned(skb)) {
7261 + new_skb = skb_realloc_headroom(skb, max_headroom);
7262 + if (!new_skb)
7263 +diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
7264 +index 7a17070c5dab..b45a4223cb05 100644
7265 +--- a/net/netfilter/nf_conntrack_expect.c
7266 ++++ b/net/netfilter/nf_conntrack_expect.c
7267 +@@ -219,7 +219,8 @@ static inline int expect_clash(const struct nf_conntrack_expect *a,
7268 + a->mask.src.u3.all[count] & b->mask.src.u3.all[count];
7269 + }
7270 +
7271 +- return nf_ct_tuple_mask_cmp(&a->tuple, &b->tuple, &intersect_mask);
7272 ++ return nf_ct_tuple_mask_cmp(&a->tuple, &b->tuple, &intersect_mask) &&
7273 ++ nf_ct_zone(a->master) == nf_ct_zone(b->master);
7274 + }
7275 +
7276 + static inline int expect_matches(const struct nf_conntrack_expect *a,
7277 +diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
7278 +index d1c23940a86a..6b8b0abbfab4 100644
7279 +--- a/net/netfilter/nf_conntrack_netlink.c
7280 ++++ b/net/netfilter/nf_conntrack_netlink.c
7281 +@@ -2995,11 +2995,6 @@ ctnetlink_create_expect(struct net *net, u16 zone,
7282 + }
7283 +
7284 + err = nf_ct_expect_related_report(exp, portid, report);
7285 +- if (err < 0)
7286 +- goto err_exp;
7287 +-
7288 +- return 0;
7289 +-err_exp:
7290 + nf_ct_expect_put(exp);
7291 + err_ct:
7292 + nf_ct_put(ct);
7293 +diff --git a/net/netfilter/nf_internals.h b/net/netfilter/nf_internals.h
7294 +index ea7f36784b3d..399210693c2a 100644
7295 +--- a/net/netfilter/nf_internals.h
7296 ++++ b/net/netfilter/nf_internals.h
7297 +@@ -19,6 +19,7 @@ unsigned int nf_iterate(struct list_head *head, struct sk_buff *skb,
7298 + /* nf_queue.c */
7299 + int nf_queue(struct sk_buff *skb, struct nf_hook_ops *elem,
7300 + struct nf_hook_state *state, unsigned int queuenum);
7301 ++void nf_queue_nf_hook_drop(struct nf_hook_ops *ops);
7302 + int __init netfilter_queue_init(void);
7303 +
7304 + /* nf_log.c */
7305 +diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
7306 +index 675d12c69e32..a5d41dfa9f05 100644
7307 +--- a/net/netfilter/nf_log.c
7308 ++++ b/net/netfilter/nf_log.c
7309 +@@ -107,12 +107,17 @@ EXPORT_SYMBOL(nf_log_register);
7310 +
7311 + void nf_log_unregister(struct nf_logger *logger)
7312 + {
7313 ++ const struct nf_logger *log;
7314 + int i;
7315 +
7316 + mutex_lock(&nf_log_mutex);
7317 +- for (i = 0; i < NFPROTO_NUMPROTO; i++)
7318 +- RCU_INIT_POINTER(loggers[i][logger->type], NULL);
7319 ++ for (i = 0; i < NFPROTO_NUMPROTO; i++) {
7320 ++ log = nft_log_dereference(loggers[i][logger->type]);
7321 ++ if (log == logger)
7322 ++ RCU_INIT_POINTER(loggers[i][logger->type], NULL);
7323 ++ }
7324 + mutex_unlock(&nf_log_mutex);
7325 ++ synchronize_rcu();
7326 + }
7327 + EXPORT_SYMBOL(nf_log_unregister);
7328 +
7329 +diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
7330 +index 2e88032cd5ad..cd60d397fe05 100644
7331 +--- a/net/netfilter/nf_queue.c
7332 ++++ b/net/netfilter/nf_queue.c
7333 +@@ -105,6 +105,23 @@ bool nf_queue_entry_get_refs(struct nf_queue_entry *entry)
7334 + }
7335 + EXPORT_SYMBOL_GPL(nf_queue_entry_get_refs);
7336 +
7337 ++void nf_queue_nf_hook_drop(struct nf_hook_ops *ops)
7338 ++{
7339 ++ const struct nf_queue_handler *qh;
7340 ++ struct net *net;
7341 ++
7342 ++ rtnl_lock();
7343 ++ rcu_read_lock();
7344 ++ qh = rcu_dereference(queue_handler);
7345 ++ if (qh) {
7346 ++ for_each_net(net) {
7347 ++ qh->nf_hook_drop(net, ops);
7348 ++ }
7349 ++ }
7350 ++ rcu_read_unlock();
7351 ++ rtnl_unlock();
7352 ++}
7353 ++
7354 + /*
7355 + * Any packet that leaves via this function must come back
7356 + * through nf_reinject().
7357 +diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
7358 +index f153b07073af..f77bad46ac68 100644
7359 +--- a/net/netfilter/nf_tables_core.c
7360 ++++ b/net/netfilter/nf_tables_core.c
7361 +@@ -114,7 +114,8 @@ unsigned int
7362 + nft_do_chain(struct nft_pktinfo *pkt, const struct nf_hook_ops *ops)
7363 + {
7364 + const struct nft_chain *chain = ops->priv, *basechain = chain;
7365 +- const struct net *net = read_pnet(&nft_base_chain(basechain)->pnet);
7366 ++ const struct net *chain_net = read_pnet(&nft_base_chain(basechain)->pnet);
7367 ++ const struct net *net = dev_net(pkt->in ? pkt->in : pkt->out);
7368 + const struct nft_rule *rule;
7369 + const struct nft_expr *expr, *last;
7370 + struct nft_regs regs;
7371 +@@ -124,6 +125,10 @@ nft_do_chain(struct nft_pktinfo *pkt, const struct nf_hook_ops *ops)
7372 + int rulenum;
7373 + unsigned int gencursor = nft_genmask_cur(net);
7374 +
7375 ++ /* Ignore chains that are not for the current network namespace */
7376 ++ if (!net_eq(net, chain_net))
7377 ++ return NF_ACCEPT;
7378 ++
7379 + do_chain:
7380 + rulenum = 0;
7381 + rule = list_entry(&chain->rules, struct nft_rule, list);
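
nft_do_chain() now reads the packet's namespace from its input or output device and returns NF_ACCEPT when it differs from the namespace recorded in the base chain's pnet, so a chain never evaluates another namespace's traffic. The guard in isolation, with both structures reduced to opaque namespace handles:

    #include <stdio.h>

    #define NF_ACCEPT 1
    #define NF_DROP   0

    struct net { int id; };               /* stand-in namespace handle */
    struct pkt { struct net *dev_net; };  /* dev_net(pkt->in or pkt->out) */
    struct chain { struct net *net; };    /* read_pnet(&basechain->pnet) */

    static unsigned int do_chain(const struct pkt *pkt,
                                 const struct chain *chain)
    {
        /* Ignore chains that are not for the packet's network namespace. */
        if (pkt->dev_net != chain->net)
            return NF_ACCEPT;

        /* ... rule evaluation would run here ... */
        return NF_DROP;
    }

    int main(void)
    {
        struct net init_net = { 0 }, other_net = { 1 };
        struct chain c = { &init_net };
        struct pkt p = { &other_net };

        printf("verdict: %u\n", do_chain(&p, &c)); /* 1: foreign ns, accept */
        return 0;
    }
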
7382 +diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
7383 +index 8b117c90ecd7..69e3ceffa14d 100644
7384 +--- a/net/netfilter/nfnetlink.c
7385 ++++ b/net/netfilter/nfnetlink.c
7386 +@@ -432,6 +432,7 @@ done:
7387 + static void nfnetlink_rcv(struct sk_buff *skb)
7388 + {
7389 + struct nlmsghdr *nlh = nlmsg_hdr(skb);
7390 ++ u_int16_t res_id;
7391 + int msglen;
7392 +
7393 + if (nlh->nlmsg_len < NLMSG_HDRLEN ||
7394 +@@ -456,7 +457,12 @@ static void nfnetlink_rcv(struct sk_buff *skb)
7395 +
7396 + nfgenmsg = nlmsg_data(nlh);
7397 + skb_pull(skb, msglen);
7398 +- nfnetlink_rcv_batch(skb, nlh, nfgenmsg->res_id);
7399 ++ /* Work around old nft using host byte order */
7400 ++ if (nfgenmsg->res_id == NFNL_SUBSYS_NFTABLES)
7401 ++ res_id = NFNL_SUBSYS_NFTABLES;
7402 ++ else
7403 ++ res_id = ntohs(nfgenmsg->res_id);
7404 ++ nfnetlink_rcv_batch(skb, nlh, res_id);
7405 + } else {
7406 + netlink_rcv_skb(skb, &nfnetlink_rcv_msg);
7407 + }
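
Old nft binaries wrote the batch resource id in host byte order, so the kernel now accepts the raw value whenever it already equals NFNL_SUBSYS_NFTABLES and byte-swaps in every other case. A self-contained illustration; NFNL_SUBSYS_NFTABLES is 10 in the nfnetlink uapi header and is defined locally here:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    #define NFNL_SUBSYS_NFTABLES 10   /* from linux/netfilter/nfnetlink.h */

    static uint16_t fixup_res_id(uint16_t wire_res_id)
    {
        /* Work around old nft using host byte order: if the raw value
         * already names the nftables subsystem, take it as-is. */
        if (wire_res_id == NFNL_SUBSYS_NFTABLES)
            return NFNL_SUBSYS_NFTABLES;
        return ntohs(wire_res_id);
    }

    int main(void)
    {
        /* old nft: id stored untranslated */
        printf("%u\n", (unsigned)fixup_res_id(NFNL_SUBSYS_NFTABLES));
        /* fixed nft: id stored in network byte order */
        printf("%u\n", (unsigned)fixup_res_id(htons(NFNL_SUBSYS_NFTABLES)));
        return 0;
    }

Both calls print 10 regardless of host endianness, which is the point of the workaround.
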
7408 +diff --git a/net/netfilter/nfnetlink_queue_core.c b/net/netfilter/nfnetlink_queue_core.c
7409 +index 11c7682fa0ea..32d0437abdd8 100644
7410 +--- a/net/netfilter/nfnetlink_queue_core.c
7411 ++++ b/net/netfilter/nfnetlink_queue_core.c
7412 +@@ -824,6 +824,27 @@ static struct notifier_block nfqnl_dev_notifier = {
7413 + .notifier_call = nfqnl_rcv_dev_event,
7414 + };
7415 +
7416 ++static int nf_hook_cmp(struct nf_queue_entry *entry, unsigned long ops_ptr)
7417 ++{
7418 ++ return entry->elem == (struct nf_hook_ops *)ops_ptr;
7419 ++}
7420 ++
7421 ++static void nfqnl_nf_hook_drop(struct net *net, struct nf_hook_ops *hook)
7422 ++{
7423 ++ struct nfnl_queue_net *q = nfnl_queue_pernet(net);
7424 ++ int i;
7425 ++
7426 ++ rcu_read_lock();
7427 ++ for (i = 0; i < INSTANCE_BUCKETS; i++) {
7428 ++ struct nfqnl_instance *inst;
7429 ++ struct hlist_head *head = &q->instance_table[i];
7430 ++
7431 ++ hlist_for_each_entry_rcu(inst, head, hlist)
7432 ++ nfqnl_flush(inst, nf_hook_cmp, (unsigned long)hook);
7433 ++ }
7434 ++ rcu_read_unlock();
7435 ++}
7436 ++
7437 + static int
7438 + nfqnl_rcv_nl_event(struct notifier_block *this,
7439 + unsigned long event, void *ptr)
7440 +@@ -1031,7 +1052,8 @@ static const struct nla_policy nfqa_cfg_policy[NFQA_CFG_MAX+1] = {
7441 + };
7442 +
7443 + static const struct nf_queue_handler nfqh = {
7444 +- .outfn = &nfqnl_enqueue_packet,
7445 ++ .outfn = &nfqnl_enqueue_packet,
7446 ++ .nf_hook_drop = &nfqnl_nf_hook_drop,
7447 + };
7448 +
7449 + static int
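
nfqnl_nf_hook_drop() walks every instance bucket and flushes queued entries whose hook element matches the hook being torn down (nfqnl_flush() reinjects them with NF_DROP), so no packet parked in userspace can be reinjected through a freed hook. A sketch of the same compare-callback flush over singly linked buckets, assuming plain lists instead of RCU hlists:

    #include <stdio.h>
    #include <stdlib.h>

    #define INSTANCE_BUCKETS 4

    struct entry {
        const void *elem;        /* the hook this packet was queued from */
        struct entry *next;
    };

    static struct entry *buckets[INSTANCE_BUCKETS];

    static int hook_cmp(struct entry *e, unsigned long ops_ptr)
    {
        return e->elem == (const void *)ops_ptr;
    }

    /* Drop every queued entry the callback matches, as nfqnl_flush() does. */
    static void flush(int (*cmp)(struct entry *, unsigned long),
                      unsigned long data)
    {
        int i;

        for (i = 0; i < INSTANCE_BUCKETS; i++) {
            struct entry **pp = &buckets[i];

            while (*pp) {
                struct entry *e = *pp;

                if (cmp(e, data)) {
                    *pp = e->next;
                    free(e);     /* kernel: nf_reinject(entry, NF_DROP) */
                } else {
                    pp = &e->next;
                }
            }
        }
    }

    int main(void)
    {
        static int hook;
        struct entry *e = malloc(sizeof(*e));

        e->elem = &hook;
        e->next = NULL;
        buckets[0] = e;
        flush(hook_cmp, (unsigned long)&hook);
        printf("bucket 0 %s\n", buckets[0] ? "non-empty" : "empty");
        return 0;
    }
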
7450 +diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
7451 +index 7f29cfc76349..4d05c7bf5a03 100644
7452 +--- a/net/netfilter/nft_compat.c
7453 ++++ b/net/netfilter/nft_compat.c
7454 +@@ -617,6 +617,13 @@ struct nft_xt {
7455 +
7456 + static struct nft_expr_type nft_match_type;
7457 +
7458 ++static bool nft_match_cmp(const struct xt_match *match,
7459 ++ const char *name, u32 rev, u32 family)
7460 ++{
7461 ++ return strcmp(match->name, name) == 0 && match->revision == rev &&
7462 ++ (match->family == NFPROTO_UNSPEC || match->family == family);
7463 ++}
7464 ++
7465 + static const struct nft_expr_ops *
7466 + nft_match_select_ops(const struct nft_ctx *ctx,
7467 + const struct nlattr * const tb[])
7468 +@@ -624,7 +631,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
7469 + struct nft_xt *nft_match;
7470 + struct xt_match *match;
7471 + char *mt_name;
7472 +- __u32 rev, family;
7473 ++ u32 rev, family;
7474 +
7475 + if (tb[NFTA_MATCH_NAME] == NULL ||
7476 + tb[NFTA_MATCH_REV] == NULL ||
7477 +@@ -639,8 +646,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
7478 + list_for_each_entry(nft_match, &nft_match_list, head) {
7479 + struct xt_match *match = nft_match->ops.data;
7480 +
7481 +- if (strcmp(match->name, mt_name) == 0 &&
7482 +- match->revision == rev && match->family == family) {
7483 ++ if (nft_match_cmp(match, mt_name, rev, family)) {
7484 + if (!try_module_get(match->me))
7485 + return ERR_PTR(-ENOENT);
7486 +
7487 +@@ -691,6 +697,13 @@ static LIST_HEAD(nft_target_list);
7488 +
7489 + static struct nft_expr_type nft_target_type;
7490 +
7491 ++static bool nft_target_cmp(const struct xt_target *tg,
7492 ++ const char *name, u32 rev, u32 family)
7493 ++{
7494 ++ return strcmp(tg->name, name) == 0 && tg->revision == rev &&
7495 ++ (tg->family == NFPROTO_UNSPEC || tg->family == family);
7496 ++}
7497 ++
7498 + static const struct nft_expr_ops *
7499 + nft_target_select_ops(const struct nft_ctx *ctx,
7500 + const struct nlattr * const tb[])
7501 +@@ -698,7 +711,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
7502 + struct nft_xt *nft_target;
7503 + struct xt_target *target;
7504 + char *tg_name;
7505 +- __u32 rev, family;
7506 ++ u32 rev, family;
7507 +
7508 + if (tb[NFTA_TARGET_NAME] == NULL ||
7509 + tb[NFTA_TARGET_REV] == NULL ||
7510 +@@ -713,8 +726,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
7511 + list_for_each_entry(nft_target, &nft_target_list, head) {
7512 + struct xt_target *target = nft_target->ops.data;
7513 +
7514 +- if (strcmp(target->name, tg_name) == 0 &&
7515 +- target->revision == rev && target->family == family) {
7516 ++ if (nft_target_cmp(target, tg_name, rev, family)) {
7517 + if (!try_module_get(target->me))
7518 + return ERR_PTR(-ENOENT);
7519 +
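
The new nft_match_cmp()/nft_target_cmp() helpers let a cached xt extension registered as NFPROTO_UNSPEC satisfy a lookup for any family, instead of requiring an exact family match. A table-lookup sketch; NFPROTO_UNSPEC is 0 and NFPROTO_IPV4 is 2, as in the uapi headers:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define NFPROTO_UNSPEC 0
    #define NFPROTO_IPV4   2

    struct xt_match {
        const char *name;
        uint32_t revision;
        uint32_t family;
    };

    static int match_cmp(const struct xt_match *m,
                         const char *name, uint32_t rev, uint32_t family)
    {
        return strcmp(m->name, name) == 0 && m->revision == rev &&
               (m->family == NFPROTO_UNSPEC || m->family == family);
    }

    int main(void)
    {
        struct xt_match comment = { "comment", 0, NFPROTO_UNSPEC };

        /* a family-agnostic match now satisfies an IPv4 lookup */
        printf("%d\n", match_cmp(&comment, "comment", 0, NFPROTO_IPV4));
        return 0;
    }
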
7520 +diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
7521 +index 7de33d1af9b6..7fa6d78331ed 100644
7522 +--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
7523 ++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
7524 +@@ -382,6 +382,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
7525 + int byte_count)
7526 + {
7527 + struct ib_send_wr send_wr;
7528 ++ u32 xdr_off;
7529 + int sge_no;
7530 + int sge_bytes;
7531 + int page_no;
7532 +@@ -416,8 +417,8 @@ static int send_reply(struct svcxprt_rdma *rdma,
7533 + ctxt->direction = DMA_TO_DEVICE;
7534 +
7535 + /* Map the payload indicated by 'byte_count' */
7536 ++ xdr_off = 0;
7537 + for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
7538 +- int xdr_off = 0;
7539 + sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count);
7540 + byte_count -= sge_bytes;
7541 + ctxt->sge[sge_no].addr =
7542 +@@ -455,6 +456,13 @@ static int send_reply(struct svcxprt_rdma *rdma,
7543 + }
7544 + rqstp->rq_next_page = rqstp->rq_respages + 1;
7545 +
7546 ++ /* The loop above bumps sc_dma_used for each sge. The
7547 ++ * xdr_buf.tail gets a separate sge, but resides in the
7548 ++ * same page as xdr_buf.head. Don't count it twice.
7549 ++ */
7550 ++ if (sge_no > ctxt->count)
7551 ++ atomic_dec(&rdma->sc_dma_used);
7552 ++
7553 + if (sge_no > rdma->sc_max_sge) {
7554 + pr_err("svcrdma: Too many sges (%d)\n", sge_no);
7555 + goto err;
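
The first send_reply() hunk hoists xdr_off out of the sge loop: it was declared, and therefore re-zeroed, on every iteration, which made each sge map offset 0 of the xdr buffer instead of advancing through it. The bug distilled to its loop structure:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t sge_len[3] = { 512, 512, 256 };
        uint32_t xdr_off = 0;  /* fixed: initialized once, outside the loop */
        int sge_no;

        for (sge_no = 0; sge_no < 3; sge_no++) {
            /* the buggy variant declared "int xdr_off = 0;" right here,
             * so every segment mapped the start of the buffer again */
            printf("sge %d maps xdr offset %u..%u\n", sge_no,
                   (unsigned)xdr_off,
                   (unsigned)(xdr_off + sge_len[sge_no] - 1));
            xdr_off += sge_len[sge_no];
        }
        return 0;
    }
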
7556 +diff --git a/sound/arm/Kconfig b/sound/arm/Kconfig
7557 +index 885683a3b0bd..e0406211716b 100644
7558 +--- a/sound/arm/Kconfig
7559 ++++ b/sound/arm/Kconfig
7560 +@@ -9,6 +9,14 @@ menuconfig SND_ARM
7561 + Drivers that are implemented on ASoC can be found in
7562 + "ALSA for SoC audio support" section.
7563 +
7564 ++config SND_PXA2XX_LIB
7565 ++ tristate
7566 ++ select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
7567 ++ select SND_DMAENGINE_PCM
7568 ++
7569 ++config SND_PXA2XX_LIB_AC97
7570 ++ bool
7571 ++
7572 + if SND_ARM
7573 +
7574 + config SND_ARMAACI
7575 +@@ -21,13 +29,6 @@ config SND_PXA2XX_PCM
7576 + tristate
7577 + select SND_PCM
7578 +
7579 +-config SND_PXA2XX_LIB
7580 +- tristate
7581 +- select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
7582 +-
7583 +-config SND_PXA2XX_LIB_AC97
7584 +- bool
7585 +-
7586 + config SND_PXA2XX_AC97
7587 + tristate "AC97 driver for the Intel PXA2xx chip"
7588 + depends on ARCH_PXA
7589 +diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
7590 +index 3a24f7739aaa..b791529bf31c 100644
7591 +--- a/sound/pci/hda/patch_cirrus.c
7592 ++++ b/sound/pci/hda/patch_cirrus.c
7593 +@@ -634,6 +634,7 @@ static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = {
7594 + SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11),
7595 + SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
7596 + SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6),
7597 ++ SND_PCI_QUIRK(0x106b, 0x7b00, "MacBookPro 12,1", CS4208_MBP11),
7598 + {} /* terminator */
7599 + };
7600 +
7601 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
7602 +index 6fe862594e9b..57bb5a559f8e 100644
7603 +--- a/sound/pci/hda/patch_realtek.c
7604 ++++ b/sound/pci/hda/patch_realtek.c
7605 +@@ -4182,6 +4182,24 @@ static void alc_fixup_disable_aamix(struct hda_codec *codec,
7606 + }
7607 + }
7608 +
7609 ++/* fixup for Thinkpad docks: add dock pins, avoid HP parser fixup */
7610 ++static void alc_fixup_tpt440_dock(struct hda_codec *codec,
7611 ++ const struct hda_fixup *fix, int action)
7612 ++{
7613 ++ static const struct hda_pintbl pincfgs[] = {
7614 ++ { 0x16, 0x21211010 }, /* dock headphone */
7615 ++ { 0x19, 0x21a11010 }, /* dock mic */
7616 ++ { }
7617 ++ };
7618 ++ struct alc_spec *spec = codec->spec;
7619 ++
7620 ++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
7621 ++ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
7622 ++ codec->power_save_node = 0; /* avoid click noises */
7623 ++ snd_hda_apply_pincfgs(codec, pincfgs);
7624 ++ }
7625 ++}
7626 ++
7627 + static void alc_shutup_dell_xps13(struct hda_codec *codec)
7628 + {
7629 + struct alc_spec *spec = codec->spec;
7630 +@@ -4507,7 +4525,6 @@ enum {
7631 + ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
7632 + ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
7633 + ALC292_FIXUP_TPT440_DOCK,
7634 +- ALC292_FIXUP_TPT440_DOCK2,
7635 + ALC283_FIXUP_BXBT2807_MIC,
7636 + ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
7637 + ALC282_FIXUP_ASPIRE_V5_PINS,
7638 +@@ -4972,17 +4989,7 @@ static const struct hda_fixup alc269_fixups[] = {
7639 + },
7640 + [ALC292_FIXUP_TPT440_DOCK] = {
7641 + .type = HDA_FIXUP_FUNC,
7642 +- .v.func = alc269_fixup_pincfg_no_hp_to_lineout,
7643 +- .chained = true,
7644 +- .chain_id = ALC292_FIXUP_TPT440_DOCK2
7645 +- },
7646 +- [ALC292_FIXUP_TPT440_DOCK2] = {
7647 +- .type = HDA_FIXUP_PINS,
7648 +- .v.pins = (const struct hda_pintbl[]) {
7649 +- { 0x16, 0x21211010 }, /* dock headphone */
7650 +- { 0x19, 0x21a11010 }, /* dock mic */
7651 +- { }
7652 +- },
7653 ++ .v.func = alc_fixup_tpt440_dock,
7654 + .chained = true,
7655 + .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
7656 + },
7657 +@@ -5226,6 +5233,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
7658 + SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
7659 + SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
7660 + SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
7661 ++ SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK),
7662 + SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
7663 + SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
7664 + SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
7665 +diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
7666 +index 25f0f45e6640..b1bc66783974 100644
7667 +--- a/sound/pci/hda/patch_sigmatel.c
7668 ++++ b/sound/pci/hda/patch_sigmatel.c
7669 +@@ -4522,7 +4522,11 @@ static int patch_stac92hd73xx(struct hda_codec *codec)
7670 + return err;
7671 +
7672 + spec = codec->spec;
7673 +- codec->power_save_node = 1;
7674 ++ /* enable power_save_node only for new 92HD89xx chips, as it causes
7675 ++ * click noises on old 92HD73xx chips.
7676 ++ */
7677 ++ if ((codec->core.vendor_id & 0xfffffff0) != 0x111d7670)
7678 ++ codec->power_save_node = 1;
7679 + spec->linear_tone_beep = 0;
7680 + spec->gen.mixer_nid = 0x1d;
7681 + spec->have_spdif_mux = 1;
7682 +diff --git a/sound/soc/au1x/db1200.c b/sound/soc/au1x/db1200.c
7683 +index c75995f2779c..b914a08258ea 100644
7684 +--- a/sound/soc/au1x/db1200.c
7685 ++++ b/sound/soc/au1x/db1200.c
7686 +@@ -129,6 +129,8 @@ static struct snd_soc_dai_link db1300_i2s_dai = {
7687 + .cpu_dai_name = "au1xpsc_i2s.2",
7688 + .platform_name = "au1xpsc-pcm.2",
7689 + .codec_name = "wm8731.0-001b",
7690 ++ .dai_fmt = SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_NB_NF |
7691 ++ SND_SOC_DAIFMT_CBM_CFM,
7692 + .ops = &db1200_i2s_wm8731_ops,
7693 + };
7694 +
7695 +@@ -146,6 +148,8 @@ static struct snd_soc_dai_link db1550_i2s_dai = {
7696 + .cpu_dai_name = "au1xpsc_i2s.3",
7697 + .platform_name = "au1xpsc-pcm.3",
7698 + .codec_name = "wm8731.0-001b",
7699 ++ .dai_fmt = SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_NB_NF |
7700 ++ SND_SOC_DAIFMT_CBM_CFM,
7701 + .ops = &db1200_i2s_wm8731_ops,
7702 + };
7703 +
7704 +diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
7705 +index 3593a1496056..3a29c0ac5d8a 100644
7706 +--- a/sound/soc/codecs/sgtl5000.c
7707 ++++ b/sound/soc/codecs/sgtl5000.c
7708 +@@ -1339,8 +1339,8 @@ static int sgtl5000_probe(struct snd_soc_codec *codec)
7709 + sgtl5000->micbias_resistor << SGTL5000_BIAS_R_SHIFT);
7710 +
7711 + snd_soc_update_bits(codec, SGTL5000_CHIP_MIC_CTRL,
7712 +- SGTL5000_BIAS_R_MASK,
7713 +- sgtl5000->micbias_voltage << SGTL5000_BIAS_R_SHIFT);
7714 ++ SGTL5000_BIAS_VOLT_MASK,
7715 ++ sgtl5000->micbias_voltage << SGTL5000_BIAS_VOLT_SHIFT);
7716 + /*
7717 + * disable DAP
7718 + * TODO:
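
The sgtl5000 hunk pairs the voltage value with the voltage field's mask and shift; it was previously written through SGTL5000_BIAS_R_MASK/_R_SHIFT and landed in the bias-resistor field. A tiny read-modify-write helper showing why mask and shift must describe the same field; the bit layout below is invented for the demo and is not the codec's actual register map:

    #include <stdio.h>
    #include <stdint.h>

    /* demo layout only: resistor field bits 9:8, voltage field bits 6:4 */
    #define BIAS_R_MASK     0x0300
    #define BIAS_R_SHIFT    8
    #define BIAS_VOLT_MASK  0x0070
    #define BIAS_VOLT_SHIFT 4

    static uint16_t update_bits(uint16_t reg, uint16_t mask, uint16_t val)
    {
        return (uint16_t)((reg & ~mask) | (val & mask));
    }

    int main(void)
    {
        uint16_t voltage = 5;
        uint16_t reg;

        /* buggy: voltage pushed through the resistor field's mask/shift,
         * corrupting that field and even truncating the value */
        reg = update_bits(0, BIAS_R_MASK,
                          (uint16_t)(voltage << BIAS_R_SHIFT));
        printf("buggy reg = 0x%04x\n", (unsigned)reg);

        /* fixed: mask and shift both describe the voltage field */
        reg = update_bits(0, BIAS_VOLT_MASK,
                          (uint16_t)(voltage << BIAS_VOLT_SHIFT));
        printf("fixed reg = 0x%04x\n", (unsigned)reg);
        return 0;
    }
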
7719 +diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c
7720 +index a3e97b46b64e..0d28e3b356f6 100644
7721 +--- a/sound/soc/dwc/designware_i2s.c
7722 ++++ b/sound/soc/dwc/designware_i2s.c
7723 +@@ -131,10 +131,10 @@ static inline void i2s_clear_irqs(struct dw_i2s_dev *dev, u32 stream)
7724 +
7725 + if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
7726 + for (i = 0; i < 4; i++)
7727 +- i2s_write_reg(dev->i2s_base, TOR(i), 0);
7728 ++ i2s_read_reg(dev->i2s_base, TOR(i));
7729 + } else {
7730 + for (i = 0; i < 4; i++)
7731 +- i2s_write_reg(dev->i2s_base, ROR(i), 0);
7732 ++ i2s_read_reg(dev->i2s_base, ROR(i));
7733 + }
7734 + }
7735 +
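
On the DesignWare I2S block the overrun status registers (TOR/ROR) are read-to-clear, so writing zero to them, as the old i2s_clear_irqs() did, clears nothing; the fix reads them and discards the value. A userspace simulation of a read-to-clear register:

    #include <stdio.h>
    #include <stdint.h>

    struct rtc_reg { uint32_t latched; }; /* read-to-clear status register */

    /* Reading returns the latched status and clears it, like TOR/ROR. */
    static uint32_t reg_read(struct rtc_reg *r)
    {
        uint32_t v = r->latched;

        r->latched = 0;
        return v;
    }

    /* Writing such a register does not touch the latched bits. */
    static void reg_write(struct rtc_reg *r, uint32_t v)
    {
        (void)r;
        (void)v;
    }

    int main(void)
    {
        struct rtc_reg tor = { .latched = 1 };   /* overrun pending */

        reg_write(&tor, 0);                      /* old code: no effect */
        printf("after write: %u\n", (unsigned)tor.latched);
        (void)reg_read(&tor);                    /* fixed code: read clears */
        printf("after read:  %u\n", (unsigned)tor.latched);
        return 0;
    }
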
7736 +diff --git a/sound/soc/pxa/Kconfig b/sound/soc/pxa/Kconfig
7737 +index 39cea80846c3..f2bf8661dd21 100644
7738 +--- a/sound/soc/pxa/Kconfig
7739 ++++ b/sound/soc/pxa/Kconfig
7740 +@@ -1,7 +1,6 @@
7741 + config SND_PXA2XX_SOC
7742 + tristate "SoC Audio for the Intel PXA2xx chip"
7743 + depends on ARCH_PXA
7744 +- select SND_ARM
7745 + select SND_PXA2XX_LIB
7746 + help
7747 + Say Y or M if you want to add support for codecs attached to
7748 +@@ -25,7 +24,6 @@ config SND_PXA2XX_AC97
7749 + config SND_PXA2XX_SOC_AC97
7750 + tristate
7751 + select AC97_BUS
7752 +- select SND_ARM
7753 + select SND_PXA2XX_LIB_AC97
7754 + select SND_SOC_AC97_BUS
7755 +
7756 +diff --git a/sound/soc/pxa/pxa2xx-ac97.c b/sound/soc/pxa/pxa2xx-ac97.c
7757 +index 1f6054650991..9e4b04e0fbd1 100644
7758 +--- a/sound/soc/pxa/pxa2xx-ac97.c
7759 ++++ b/sound/soc/pxa/pxa2xx-ac97.c
7760 +@@ -49,7 +49,7 @@ static struct snd_ac97_bus_ops pxa2xx_ac97_ops = {
7761 + .reset = pxa2xx_ac97_cold_reset,
7762 + };
7763 +
7764 +-static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 12;
7765 ++static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 11;
7766 + static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = {
7767 + .addr = __PREG(PCDR),
7768 + .addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
7769 +@@ -57,7 +57,7 @@ static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = {
7770 + .filter_data = &pxa2xx_ac97_pcm_stereo_in_req,
7771 + };
7772 +
7773 +-static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 11;
7774 ++static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 12;
7775 + static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_out = {
7776 + .addr = __PREG(PCDR),
7777 + .addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
7778 +diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c
7779 +index 82e350e9501c..ac75816ada7c 100644
7780 +--- a/sound/synth/emux/emux_oss.c
7781 ++++ b/sound/synth/emux/emux_oss.c
7782 +@@ -69,7 +69,8 @@ snd_emux_init_seq_oss(struct snd_emux *emu)
7783 + struct snd_seq_oss_reg *arg;
7784 + struct snd_seq_device *dev;
7785 +
7786 +- if (snd_seq_device_new(emu->card, 0, SNDRV_SEQ_DEV_ID_OSS,
7787 ++ /* using device#1 here for avoiding conflicts with OPL3 */
7788 ++ if (snd_seq_device_new(emu->card, 1, SNDRV_SEQ_DEV_ID_OSS,
7789 + sizeof(struct snd_seq_oss_reg), &dev) < 0)
7790 + return;
7791 +
7792 +diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
7793 +index 29f94f6f0d9e..ed5461f065bd 100644
7794 +--- a/tools/lib/traceevent/event-parse.c
7795 ++++ b/tools/lib/traceevent/event-parse.c
7796 +@@ -3721,7 +3721,7 @@ static void print_str_arg(struct trace_seq *s, void *data, int size,
7797 + struct format_field *field;
7798 + struct printk_map *printk;
7799 + long long val, fval;
7800 +- unsigned long addr;
7801 ++ unsigned long long addr;
7802 + char *str;
7803 + unsigned char *hex;
7804 + int print;
7805 +@@ -3754,13 +3754,30 @@ static void print_str_arg(struct trace_seq *s, void *data, int size,
7806 + */
7807 + if (!(field->flags & FIELD_IS_ARRAY) &&
7808 + field->size == pevent->long_size) {
7809 +- addr = *(unsigned long *)(data + field->offset);
7810 ++
7811 ++ /* Handle heterogeneous recording and processing
7812 ++ * architectures
7813 ++ *
7814 ++ * CASE I:
7815 ++ * Traces recorded on 32-bit devices (32-bit
7816 ++ * addressing) and processed on 64-bit devices:
7817 ++ * In this case, only 32 bits should be read.
7818 ++ *
7819 ++ * CASE II:
7820 ++ * Traces recorded on 64 bit devices and processed
7821 ++ * on 32-bit devices:
7822 ++ * In this case, 64 bits must be read.
7823 ++ */
7824 ++ addr = (pevent->long_size == 8) ?
7825 ++ *(unsigned long long *)(data + field->offset) :
7826 ++ (unsigned long long)*(unsigned int *)(data + field->offset);
7827 ++
7828 + /* Check if it matches a print format */
7829 + printk = find_printk(pevent, addr);
7830 + if (printk)
7831 + trace_seq_puts(s, printk->printk);
7832 + else
7833 +- trace_seq_printf(s, "%lx", addr);
7834 ++ trace_seq_printf(s, "%llx", addr);
7835 + break;
7836 + }
7837 + str = malloc(len + 1);
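
print_str_arg() now widens the recorded pointer according to the trace's long size rather than the host's, so 32-bit traces decode correctly on 64-bit hosts and 64-bit traces on 32-bit hosts. The two cases as a standalone helper; memcpy stands in for the tool's casts, and the sample bytes assume little-endian data:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Read a pointer-sized field using the recorder's long size, not the
     * host's: 4 bytes zero-extend, 8 bytes are taken whole. */
    static unsigned long long read_addr(const void *data, int long_size)
    {
        unsigned long long v64;
        uint32_t v32;

        if (long_size == 8) {
            memcpy(&v64, data, 8);
            return v64;
        }
        memcpy(&v32, data, 4);
        return v32;
    }

    int main(void)
    {
        unsigned char buf[8] = { 0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0 };

        printf("%llx\n", read_addr(buf, 4));  /* trace from a 32-bit box */
        printf("%llx\n", read_addr(buf, 8));  /* trace from a 64-bit box */
        return 0;
    }
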
7838 +diff --git a/tools/perf/arch/alpha/Build b/tools/perf/arch/alpha/Build
7839 +new file mode 100644
7840 +index 000000000000..1bb8bf6d7fd4
7841 +--- /dev/null
7842 ++++ b/tools/perf/arch/alpha/Build
7843 +@@ -0,0 +1 @@
7844 ++# empty
7845 +diff --git a/tools/perf/arch/mips/Build b/tools/perf/arch/mips/Build
7846 +new file mode 100644
7847 +index 000000000000..1bb8bf6d7fd4
7848 +--- /dev/null
7849 ++++ b/tools/perf/arch/mips/Build
7850 +@@ -0,0 +1 @@
7851 ++# empty
7852 +diff --git a/tools/perf/arch/parisc/Build b/tools/perf/arch/parisc/Build
7853 +new file mode 100644
7854 +index 000000000000..1bb8bf6d7fd4
7855 +--- /dev/null
7856 ++++ b/tools/perf/arch/parisc/Build
7857 +@@ -0,0 +1 @@
7858 ++# empty
7859 +diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
7860 +index f7b8218785f6..a1f3ffc2786d 100644
7861 +--- a/tools/perf/builtin-stat.c
7862 ++++ b/tools/perf/builtin-stat.c
7863 +@@ -1227,7 +1227,7 @@ static void abs_printout(int id, int nr, struct perf_evsel *evsel, double avg)
7864 + static void print_aggr(char *prefix)
7865 + {
7866 + struct perf_evsel *counter;
7867 +- int cpu, cpu2, s, s2, id, nr;
7868 ++ int cpu, s, s2, id, nr;
7869 + double uval;
7870 + u64 ena, run, val;
7871 +
7872 +@@ -1240,8 +1240,7 @@ static void print_aggr(char *prefix)
7873 + val = ena = run = 0;
7874 + nr = 0;
7875 + for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
7876 +- cpu2 = perf_evsel__cpus(counter)->map[cpu];
7877 +- s2 = aggr_get_id(evsel_list->cpus, cpu2);
7878 ++ s2 = aggr_get_id(perf_evsel__cpus(counter), cpu);
7879 + if (s2 != id)
7880 + continue;
7881 + val += counter->counts->cpu[cpu].val;
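
The print_aggr() fix stops translating the counter-local cpu index twice: the old code turned it into a real cpu number via the counter's map and then reused that number as an index into the global evsel_list map, which only works when the global map is the identity. Asking aggr_get_id() with the counter's own map resolves the index once. A compressed illustration, assuming perf was restricted to cpus 2 and 3:

    #include <stdio.h>

    /* cpus 0,1 sit on socket 0; cpus 2,3 on socket 1 */
    static int socket_of_cpu(int cpu) { return cpu >= 2; }

    /* aggr_get_id(): map-local index -> aggregate (socket) id */
    static int aggr_get_id(const int *map, int nr, int idx)
    {
        if (idx < 0 || idx >= nr)
            return -1;         /* demo guard; perf would misaggregate */
        return socket_of_cpu(map[idx]);
    }

    int main(void)
    {
        int evsel_list_cpus[2] = { 2, 3 };  /* perf ... -C 2,3 */
        int counter_cpus[2]    = { 2, 3 };
        int cpu = 0;                        /* counter-local index */

        /* buggy: real cpu number reused as an index into the global map */
        int cpu2 = counter_cpus[cpu];
        printf("buggy: %d\n", aggr_get_id(evsel_list_cpus, 2, cpu2)); /* -1 */

        /* fixed: the local index resolved via the counter's own map */
        printf("fixed: %d\n", aggr_get_id(counter_cpus, 2, cpu));     /* 1 */
        return 0;
    }
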
7882 +diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
7883 +index 918fd8ae2d80..23eea5e7fa94 100644
7884 +--- a/tools/perf/util/header.c
7885 ++++ b/tools/perf/util/header.c
7886 +@@ -1426,7 +1426,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused,
7887 + if (ph->needs_swap)
7888 + nr = bswap_32(nr);
7889 +
7890 +- ph->env.nr_cpus_online = nr;
7891 ++ ph->env.nr_cpus_avail = nr;
7892 +
7893 + ret = readn(fd, &nr, sizeof(nr));
7894 + if (ret != sizeof(nr))
7895 +@@ -1435,7 +1435,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused,
7896 + if (ph->needs_swap)
7897 + nr = bswap_32(nr);
7898 +
7899 +- ph->env.nr_cpus_avail = nr;
7900 ++ ph->env.nr_cpus_online = nr;
7901 + return 0;
7902 + }
7903 +
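
The NRCPUS section of the perf.data header stores the available-cpu count first and the online count second; process_nrcpus() had the two assignments swapped. A sketch of the reader with the on-disk order made explicit:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    struct perf_env { uint32_t nr_cpus_avail, nr_cpus_online; };

    /* On disk the section is: u32 nr_cpus_avail, then u32 nr_cpus_online
     * (byte swapping for cross-endian files is elided here). */
    static void process_nrcpus(const unsigned char *sec, struct perf_env *env)
    {
        memcpy(&env->nr_cpus_avail,  sec,     4);  /* first value on disk */
        memcpy(&env->nr_cpus_online, sec + 4, 4);  /* second value on disk */
    }

    int main(void)
    {
        unsigned char sec[8];
        uint32_t avail = 8, online = 4;
        struct perf_env env;

        memcpy(sec, &avail, 4);
        memcpy(sec + 4, &online, 4);
        process_nrcpus(sec, &env);
        printf("avail=%u online=%u\n", (unsigned)env.nr_cpus_avail,
               (unsigned)env.nr_cpus_online);
        return 0;
    }
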
7904 +diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
7905 +index cc22b9158b93..c7966c0fa13e 100644
7906 +--- a/tools/perf/util/hist.c
7907 ++++ b/tools/perf/util/hist.c
7908 +@@ -151,6 +151,9 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
7909 + hists__new_col_len(hists, HISTC_LOCAL_WEIGHT, 12);
7910 + hists__new_col_len(hists, HISTC_GLOBAL_WEIGHT, 12);
7911 +
7912 ++ if (h->srcline)
7913 ++ hists__new_col_len(hists, HISTC_SRCLINE, strlen(h->srcline));
7914 ++
7915 + if (h->transaction)
7916 + hists__new_col_len(hists, HISTC_TRANSACTION,
7917 + hist_entry__transaction_len());
7918 +diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
7919 +index a7ab6063e038..3ddfab315e19 100644
7920 +--- a/tools/perf/util/symbol-elf.c
7921 ++++ b/tools/perf/util/symbol-elf.c
7922 +@@ -1253,8 +1253,6 @@ out_close:
7923 + static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
7924 + bool temp)
7925 + {
7926 +- GElf_Ehdr *ehdr;
7927 +-
7928 + kcore->elfclass = elfclass;
7929 +
7930 + if (temp)
7931 +@@ -1271,9 +1269,7 @@ static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
7932 + if (!gelf_newehdr(kcore->elf, elfclass))
7933 + goto out_end;
7934 +
7935 +- ehdr = gelf_getehdr(kcore->elf, &kcore->ehdr);
7936 +- if (!ehdr)
7937 +- goto out_end;
7938 ++ memset(&kcore->ehdr, 0, sizeof(GElf_Ehdr));
7939 +
7940 + return 0;
7941 +
7942 +@@ -1330,23 +1326,18 @@ static int kcore__copy_hdr(struct kcore *from, struct kcore *to, size_t count)
7943 + static int kcore__add_phdr(struct kcore *kcore, int idx, off_t offset,
7944 + u64 addr, u64 len)
7945 + {
7946 +- GElf_Phdr gphdr;
7947 +- GElf_Phdr *phdr;
7948 +-
7949 +- phdr = gelf_getphdr(kcore->elf, idx, &gphdr);
7950 +- if (!phdr)
7951 +- return -1;
7952 +-
7953 +- phdr->p_type = PT_LOAD;
7954 +- phdr->p_flags = PF_R | PF_W | PF_X;
7955 +- phdr->p_offset = offset;
7956 +- phdr->p_vaddr = addr;
7957 +- phdr->p_paddr = 0;
7958 +- phdr->p_filesz = len;
7959 +- phdr->p_memsz = len;
7960 +- phdr->p_align = page_size;
7961 +-
7962 +- if (!gelf_update_phdr(kcore->elf, idx, phdr))
7963 ++ GElf_Phdr phdr = {
7964 ++ .p_type = PT_LOAD,
7965 ++ .p_flags = PF_R | PF_W | PF_X,
7966 ++ .p_offset = offset,
7967 ++ .p_vaddr = addr,
7968 ++ .p_paddr = 0,
7969 ++ .p_filesz = len,
7970 ++ .p_memsz = len,
7971 ++ .p_align = page_size,
7972 ++ };
7973 ++
7974 ++ if (!gelf_update_phdr(kcore->elf, idx, &phdr))
7975 + return -1;
7976 +
7977 + return 0;
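
The kcore hunks stop calling gelf_getehdr()/gelf_getphdr() on a file that is still being created (those reads can fail there, notably with elfutils' libelf) and instead zero the ehdr and build the program header locally with a designated initializer. The shape of the phdr change in plain C, with a reduced struct standing in for GElf_Phdr:

    #include <stdio.h>
    #include <stdint.h>

    #define PT_LOAD 1
    #define PF_R 4
    #define PF_W 2
    #define PF_X 1

    struct phdr {          /* reduced stand-in for GElf_Phdr */
        uint32_t p_type, p_flags;
        uint64_t p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align;
    };

    static void add_phdr(uint64_t offset, uint64_t addr, uint64_t len,
                         uint64_t page_size)
    {
        /* Build the header locally instead of reading back a header the
         * output file does not contain yet; unnamed fields become 0. */
        struct phdr phdr = {
            .p_type   = PT_LOAD,
            .p_flags  = PF_R | PF_W | PF_X,
            .p_offset = offset,
            .p_vaddr  = addr,
            .p_filesz = len,
            .p_memsz  = len,
            .p_align  = page_size,
        };

        /* the real code hands &phdr to gelf_update_phdr() here */
        printf("PT_LOAD vaddr=%#llx filesz=%llu\n",
               (unsigned long long)phdr.p_vaddr,
               (unsigned long long)phdr.p_filesz);
    }

    int main(void)
    {
        add_phdr(4096, 0xffffffff81000000ULL, 1 << 20, 4096);
        return 0;
    }
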
7978 +diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
7979 +index 9ff4193dfa49..79db45336e3a 100644
7980 +--- a/virt/kvm/eventfd.c
7981 ++++ b/virt/kvm/eventfd.c
7982 +@@ -771,40 +771,14 @@ static enum kvm_bus ioeventfd_bus_from_flags(__u32 flags)
7983 + return KVM_MMIO_BUS;
7984 + }
7985 +
7986 +-static int
7987 +-kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
7988 ++static int kvm_assign_ioeventfd_idx(struct kvm *kvm,
7989 ++ enum kvm_bus bus_idx,
7990 ++ struct kvm_ioeventfd *args)
7991 + {
7992 +- enum kvm_bus bus_idx;
7993 +- struct _ioeventfd *p;
7994 +- struct eventfd_ctx *eventfd;
7995 +- int ret;
7996 +-
7997 +- bus_idx = ioeventfd_bus_from_flags(args->flags);
7998 +- /* must be natural-word sized, or 0 to ignore length */
7999 +- switch (args->len) {
8000 +- case 0:
8001 +- case 1:
8002 +- case 2:
8003 +- case 4:
8004 +- case 8:
8005 +- break;
8006 +- default:
8007 +- return -EINVAL;
8008 +- }
8009 +-
8010 +- /* check for range overflow */
8011 +- if (args->addr + args->len < args->addr)
8012 +- return -EINVAL;
8013 +
8014 +- /* check for extra flags that we don't understand */
8015 +- if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK)
8016 +- return -EINVAL;
8017 +-
8018 +- /* ioeventfd with no length can't be combined with DATAMATCH */
8019 +- if (!args->len &&
8020 +- args->flags & (KVM_IOEVENTFD_FLAG_PIO |
8021 +- KVM_IOEVENTFD_FLAG_DATAMATCH))
8022 +- return -EINVAL;
8023 ++ struct eventfd_ctx *eventfd;
8024 ++ struct _ioeventfd *p;
8025 ++ int ret;
8026 +
8027 + eventfd = eventfd_ctx_fdget(args->fd);
8028 + if (IS_ERR(eventfd))
8029 +@@ -843,16 +817,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8030 + if (ret < 0)
8031 + goto unlock_fail;
8032 +
8033 +- /* When length is ignored, MMIO is also put on a separate bus, for
8034 +- * faster lookups.
8035 +- */
8036 +- if (!args->len && !(args->flags & KVM_IOEVENTFD_FLAG_PIO)) {
8037 +- ret = kvm_io_bus_register_dev(kvm, KVM_FAST_MMIO_BUS,
8038 +- p->addr, 0, &p->dev);
8039 +- if (ret < 0)
8040 +- goto register_fail;
8041 +- }
8042 +-
8043 + kvm->buses[bus_idx]->ioeventfd_count++;
8044 + list_add_tail(&p->list, &kvm->ioeventfds);
8045 +
8046 +@@ -860,8 +824,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8047 +
8048 + return 0;
8049 +
8050 +-register_fail:
8051 +- kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
8052 + unlock_fail:
8053 + mutex_unlock(&kvm->slots_lock);
8054 +
8055 +@@ -873,14 +835,13 @@ fail:
8056 + }
8057 +
8058 + static int
8059 +-kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8060 ++kvm_deassign_ioeventfd_idx(struct kvm *kvm, enum kvm_bus bus_idx,
8061 ++ struct kvm_ioeventfd *args)
8062 + {
8063 +- enum kvm_bus bus_idx;
8064 + struct _ioeventfd *p, *tmp;
8065 + struct eventfd_ctx *eventfd;
8066 + int ret = -ENOENT;
8067 +
8068 +- bus_idx = ioeventfd_bus_from_flags(args->flags);
8069 + eventfd = eventfd_ctx_fdget(args->fd);
8070 + if (IS_ERR(eventfd))
8071 + return PTR_ERR(eventfd);
8072 +@@ -901,10 +862,6 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8073 + continue;
8074 +
8075 + kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
8076 +- if (!p->length) {
8077 +- kvm_io_bus_unregister_dev(kvm, KVM_FAST_MMIO_BUS,
8078 +- &p->dev);
8079 +- }
8080 + kvm->buses[bus_idx]->ioeventfd_count--;
8081 + ioeventfd_release(p);
8082 + ret = 0;
8083 +@@ -918,6 +875,71 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8084 + return ret;
8085 + }
8086 +
8087 ++static int kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8088 ++{
8089 ++ enum kvm_bus bus_idx = ioeventfd_bus_from_flags(args->flags);
8090 ++ int ret = kvm_deassign_ioeventfd_idx(kvm, bus_idx, args);
8091 ++
8092 ++ if (!args->len && bus_idx == KVM_MMIO_BUS)
8093 ++ kvm_deassign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args);
8094 ++
8095 ++ return ret;
8096 ++}
8097 ++
8098 ++static int
8099 ++kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8100 ++{
8101 ++ enum kvm_bus bus_idx;
8102 ++ int ret;
8103 ++
8104 ++ bus_idx = ioeventfd_bus_from_flags(args->flags);
8105 ++ /* must be natural-word sized, or 0 to ignore length */
8106 ++ switch (args->len) {
8107 ++ case 0:
8108 ++ case 1:
8109 ++ case 2:
8110 ++ case 4:
8111 ++ case 8:
8112 ++ break;
8113 ++ default:
8114 ++ return -EINVAL;
8115 ++ }
8116 ++
8117 ++ /* check for range overflow */
8118 ++ if (args->addr + args->len < args->addr)
8119 ++ return -EINVAL;
8120 ++
8121 ++ /* check for extra flags that we don't understand */
8122 ++ if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK)
8123 ++ return -EINVAL;
8124 ++
8125 ++ /* ioeventfd with no length can't be combined with DATAMATCH */
8126 ++ if (!args->len &&
8127 ++ args->flags & (KVM_IOEVENTFD_FLAG_PIO |
8128 ++ KVM_IOEVENTFD_FLAG_DATAMATCH))
8129 ++ return -EINVAL;
8130 ++
8131 ++ ret = kvm_assign_ioeventfd_idx(kvm, bus_idx, args);
8132 ++ if (ret)
8133 ++ goto fail;
8134 ++
8135 ++ /* When length is ignored, MMIO is also put on a separate bus, for
8136 ++ * faster lookups.
8137 ++ */
8138 ++ if (!args->len && bus_idx == KVM_MMIO_BUS) {
8139 ++ ret = kvm_assign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args);
8140 ++ if (ret < 0)
8141 ++ goto fast_fail;
8142 ++ }
8143 ++
8144 ++ return 0;
8145 ++
8146 ++fast_fail:
8147 ++ kvm_deassign_ioeventfd_idx(kvm, bus_idx, args);
8148 ++fail:
8149 ++ return ret;
8150 ++}
8151 ++
8152 + int
8153 + kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
8154 + {
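
The eventfd refactor above splits per-bus registration into kvm_assign_ioeventfd_idx()/kvm_deassign_ioeventfd_idx() so that a length-0 MMIO ioeventfd is registered on both KVM_MMIO_BUS and KVM_FAST_MMIO_BUS, with the first registration unwound if the second fails. The error-unwind shape reduced to two fallible steps:

    #include <stdio.h>

    static int register_on_bus(const char *bus, int fail)
    {
        if (fail)
            return -1;
        printf("registered on %s\n", bus);
        return 0;
    }

    static void unregister_on_bus(const char *bus)
    {
        printf("unregistered from %s\n", bus);
    }

    /* Register on the primary bus, then optionally on the fast bus; undo
     * the first step if the second fails, as kvm_assign_ioeventfd() does. */
    static int assign(int want_fast, int fast_fails)
    {
        int ret = register_on_bus("KVM_MMIO_BUS", 0);

        if (ret)
            return ret;

        if (want_fast) {
            ret = register_on_bus("KVM_FAST_MMIO_BUS", fast_fails);
            if (ret < 0)
                goto fast_fail;
        }
        return 0;

    fast_fail:
        unregister_on_bus("KVM_MMIO_BUS");
        return ret;
    }

    int main(void)
    {
        assign(1, 1); /* fast-bus failure unwinds the primary registration */
        return 0;
    }
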
8155 +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
8156 +index 90977418aeb6..85422985235f 100644
8157 +--- a/virt/kvm/kvm_main.c
8158 ++++ b/virt/kvm/kvm_main.c
8159 +@@ -2935,10 +2935,25 @@ static void kvm_io_bus_destroy(struct kvm_io_bus *bus)
8160 + static inline int kvm_io_bus_cmp(const struct kvm_io_range *r1,
8161 + const struct kvm_io_range *r2)
8162 + {
8163 +- if (r1->addr < r2->addr)
8164 ++ gpa_t addr1 = r1->addr;
8165 ++ gpa_t addr2 = r2->addr;
8166 ++
8167 ++ if (addr1 < addr2)
8168 + return -1;
8169 +- if (r1->addr + r1->len > r2->addr + r2->len)
8170 ++
8171 ++ /* If r2->len == 0, match the exact address. If r2->len != 0,
8172 ++ * accept any overlapping write. Any order is acceptable for
8173 ++ * overlapping ranges, because kvm_io_bus_get_first_dev ensures
8174 ++ * we process all of them.
8175 ++ */
8176 ++ if (r2->len) {
8177 ++ addr1 += r1->len;
8178 ++ addr2 += r2->len;
8179 ++ }
8180 ++
8181 ++ if (addr1 > addr2)
8182 + return 1;
8183 ++
8184 + return 0;
8185 + }
8186 +