Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Thu, 27 Jan 2022 11:37:55
Message-Id: 1643283457.53b59dae9c32fb59a8c7d1d56391b196ba0ec67e.mpagano@gentoo
commit:     53b59dae9c32fb59a8c7d1d56391b196ba0ec67e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 27 11:37:37 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 27 11:37:37 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=53b59dae

Linux patch 5.10.94

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1093_linux-5.10.94.patch | 21291 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21295 insertions(+)
diff --git a/0000_README b/0000_README
index ababebf8..8c30f470 100644
--- a/0000_README
+++ b/0000_README
@@ -415,6 +415,10 @@ Patch: 1092_linux-5.10.93.patch
From: http://www.kernel.org
Desc: Linux 5.10.93

+Patch: 1093_linux-5.10.94.patch
+From: http://www.kernel.org
+Desc: Linux 5.10.94
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1093_linux-5.10.94.patch b/1093_linux-5.10.94.patch
new file mode 100644
index 00000000..8bbbc313
--- /dev/null
+++ b/1093_linux-5.10.94.patch
@@ -0,0 +1,21291 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
+deleted file mode 100644
+index 73498ff666bd7..0000000000000
+--- a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
++++ /dev/null
+@@ -1,62 +0,0 @@
+-What: /sys/bus/iio/devices/iio:deviceX/in_count0_preset
+-KernelVersion: 4.13
+-Contact: fabrice.gasnier@××.com
+-Description:
+- Reading returns the current preset value. Writing sets the
+- preset value. Encoder counts continuously from 0 to preset
+- value, depending on direction (up/down).
+-
+-What: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
+-KernelVersion: 4.13
+-Contact: fabrice.gasnier@××.com
+-Description:
+- Reading returns the list possible quadrature modes.
+-
+-What: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
+-KernelVersion: 4.13
+-Contact: fabrice.gasnier@××.com
+-Description:
+- Configure the device counter quadrature modes:
+-
+- - non-quadrature:
+- Encoder IN1 input servers as the count input (up
+- direction).
+-
+- - quadrature:
+- Encoder IN1 and IN2 inputs are mixed to get direction
+- and count.
+-
+-What: /sys/bus/iio/devices/iio:deviceX/in_count_polarity_available
+-KernelVersion: 4.13
+-Contact: fabrice.gasnier@××.com
+-Description:
+- Reading returns the list possible active edges.
+-
+-What: /sys/bus/iio/devices/iio:deviceX/in_count0_polarity
+-KernelVersion: 4.13
+-Contact: fabrice.gasnier@××.com
+-Description:
+- Configure the device encoder/counter active edge:
+-
+- - rising-edge
+- - falling-edge
+- - both-edges
+-
+- In non-quadrature mode, device counts up on active edge.
+-
+- In quadrature mode, encoder counting scenarios are as follows:
+-
+- +---------+----------+--------------------+--------------------+
+- | Active | Level on | IN1 signal | IN2 signal |
+- | edge | opposite +----------+---------+----------+---------+
+- | | signal | Rising | Falling | Rising | Falling |
+- +---------+----------+----------+---------+----------+---------+
+- | Rising | High -> | Down | - | Up | - |
+- | edge | Low -> | Up | - | Down | - |
+- +---------+----------+----------+---------+----------+---------+
+- | Falling | High -> | - | Up | - | Down |
+- | edge | Low -> | - | Down | - | Up |
+- +---------+----------+----------+---------+----------+---------+
+- | Both | High -> | Down | Up | Up | Down |
+- | edges | Low -> | Up | Down | Down | Up |
+- +---------+----------+----------+---------+----------+---------+
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index e05e581af5cfe..985181dba0bac 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -468,7 +468,7 @@ Spectre variant 2
+ before invoking any firmware code to prevent Spectre variant 2 exploits
+ using the firmware.
+
+- Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y
++ Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y
+ and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
+ attacks on the kernel generally more difficult.
+
+diff --git a/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml b/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
+index 0da42ab8fd3a5..8a67bb889f18a 100644
+--- a/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
++++ b/Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
+@@ -10,6 +10,9 @@ title: Amlogic specific extensions to the Synopsys Designware HDMI Controller
+ maintainers:
+ - Neil Armstrong <narmstrong@××××××××.com>
+
++allOf:
++ - $ref: /schemas/sound/name-prefix.yaml#
++
+ description: |
+ The Amlogic Meson Synopsys Designware Integration is composed of
+ - A Synopsys DesignWare HDMI Controller IP
+@@ -99,6 +102,8 @@ properties:
+ "#sound-dai-cells":
+ const: 0
+
++ sound-name-prefix: true
++
+ required:
+ - compatible
+ - reg
+diff --git a/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml b/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+index a8d202c9d004c..b8cb1b4dae1ff 100644
+--- a/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
++++ b/Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+@@ -78,6 +78,10 @@ properties:
+ interrupts:
+ maxItems: 1
+
++ amlogic,canvas:
++ description: should point to a canvas provider node
++ $ref: /schemas/types.yaml#/definitions/phandle
++
+ power-domains:
+ maxItems: 1
+ description: phandle to the associated power domain
+@@ -106,6 +110,7 @@ required:
+ - port@1
+ - "#address-cells"
+ - "#size-cells"
++ - amlogic,canvas
+
+ additionalProperties: false
+
+@@ -118,6 +123,7 @@ examples:
+ interrupts = <3>;
+ #address-cells = <1>;
+ #size-cells = <0>;
++ amlogic,canvas = <&canvas>;
+
+ /* CVBS VDAC output port */
+ port@0 {
+diff --git a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+index 164f71598c595..1b3954aa71c15 100644
+--- a/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
++++ b/Documentation/devicetree/bindings/thermal/thermal-zones.yaml
+@@ -199,12 +199,11 @@ patternProperties:
+
+ contribution:
+ $ref: /schemas/types.yaml#/definitions/uint32
+- minimum: 0
+- maximum: 100
+ description:
+- The percentage contribution of the cooling devices at the
+- specific trip temperature referenced in this map
+- to this thermal zone
++ The cooling contribution to the thermal zone of the referred
++ cooling device at the referred trip point. The contribution is
++ a ratio of the sum of all cooling contributions within a
++ thermal zone.
+
+ required:
+ - trip
+diff --git a/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml b/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
+index 76cb9586ee00c..93cd77a6e92c0 100644
+--- a/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
++++ b/Documentation/devicetree/bindings/watchdog/samsung-wdt.yaml
+@@ -39,8 +39,8 @@ properties:
+ samsung,syscon-phandle:
+ $ref: /schemas/types.yaml#/definitions/phandle
+ description:
+- Phandle to the PMU system controller node (in case of Exynos5250
+- and Exynos5420).
++ Phandle to the PMU system controller node (in case of Exynos5250,
++ Exynos5420 and Exynos7).
+
+ required:
+ - compatible
+@@ -58,6 +58,7 @@ allOf:
+ enum:
+ - samsung,exynos5250-wdt
+ - samsung,exynos5420-wdt
++ - samsung,exynos7-wdt
+ then:
+ required:
+ - samsung,syscon-phandle
+diff --git a/Documentation/driver-api/dmaengine/dmatest.rst b/Documentation/driver-api/dmaengine/dmatest.rst
+index ee268d445d38b..d2e1d8b58e7dc 100644
+--- a/Documentation/driver-api/dmaengine/dmatest.rst
++++ b/Documentation/driver-api/dmaengine/dmatest.rst
+@@ -143,13 +143,14 @@ Part 5 - Handling channel allocation
+ Allocating Channels
+ -------------------
+
+-Channels are required to be configured prior to starting the test run.
+-Attempting to run the test without configuring the channels will fail.
++Channels do not need to be configured prior to starting a test run. Attempting
++to run the test without configuring the channels will result in testing any
++channels that are available.
+
+ Example::
+
+ % echo 1 > /sys/module/dmatest/parameters/run
+- dmatest: Could not start test, no channels configured
++ dmatest: No channels configured, continue with any
+
+ Channels are registered using the "channel" parameter. Channels can be requested by their
+ name, once requested, the channel is registered and a pending thread is added to the test list.
+diff --git a/Documentation/driver-api/firewire.rst b/Documentation/driver-api/firewire.rst
+index 94a2d7f01d999..d3cfa73cbb2b4 100644
+--- a/Documentation/driver-api/firewire.rst
++++ b/Documentation/driver-api/firewire.rst
+@@ -19,7 +19,7 @@ of kernel interfaces is available via exported symbols in `firewire-core` module
+ Firewire char device data structures
+ ====================================
+
+-.. include:: /ABI/stable/firewire-cdev
++.. include:: ../ABI/stable/firewire-cdev
+ :literal:
+
+ .. kernel-doc:: include/uapi/linux/firewire-cdev.h
+@@ -28,7 +28,7 @@ Firewire char device data structures
+ Firewire device probing and sysfs interfaces
+ ============================================
+
+-.. include:: /ABI/stable/sysfs-bus-firewire
++.. include:: ../ABI/stable/sysfs-bus-firewire
+ :literal:
+
+ .. kernel-doc:: drivers/firewire/core-device.c
+diff --git a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+index 9b17dc77d18c5..da0e46496fc4d 100644
+--- a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
++++ b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+@@ -5,7 +5,7 @@
+ Referencing hierarchical data nodes
+ ===================================
+
+-:Copyright: |copy| 2018 Intel Corporation
++:Copyright: |copy| 2018, 2021 Intel Corporation
+ :Author: Sakari Ailus <sakari.ailus@×××××××××××.com>
+
+ ACPI in general allows referring to device objects in the tree only.
+@@ -52,12 +52,14 @@ the ANOD object which is also the final target node of the reference.
+ Name (NOD0, Package() {
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
++ Package () { "reg", 0 },
+ Package () { "random-property", 3 },
+ }
+ })
+ Name (NOD1, Package() {
+ ToUUID("dbb8e3e6-5886-4ba6-8795-1319f52a966b"),
+ Package () {
++ Package () { "reg", 1 },
+ Package () { "anothernode", "ANOD" },
+ }
+ })
+@@ -74,7 +76,11 @@ the ANOD object which is also the final target node of the reference.
+ Name (_DSD, Package () {
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
+- Package () { "reference", ^DEV0, "node@1", "anothernode" },
++ Package () {
++ "reference", Package () {
++ ^DEV0, "node@1", "anothernode"
++ }
++ },
+ }
+ })
+ }
+diff --git a/Makefile b/Makefile
+index 993559750df9d..1071ec486aa5b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 93
++SUBLEVEL = 94
+ EXTRAVERSION =
+ NAME = Dare mighty things
+
+diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
+index 8986a91a6f31b..dd1cf70353986 100644
+--- a/arch/arm/Kconfig.debug
++++ b/arch/arm/Kconfig.debug
+@@ -400,12 +400,12 @@ choice
+ Say Y here if you want kernel low-level debugging support
+ on i.MX25.
+
+- config DEBUG_IMX21_IMX27_UART
+- bool "i.MX21 and i.MX27 Debug UART"
+- depends on SOC_IMX21 || SOC_IMX27
++ config DEBUG_IMX27_UART
++ bool "i.MX27 Debug UART"
++ depends on SOC_IMX27
+ help
+ Say Y here if you want kernel low-level debugging support
+- on i.MX21 or i.MX27.
++ on i.MX27.
+
+ config DEBUG_IMX28_UART
+ bool "i.MX28 Debug UART"
+@@ -1523,7 +1523,7 @@ config DEBUG_IMX_UART_PORT
+ int "i.MX Debug UART Port Selection"
+ depends on DEBUG_IMX1_UART || \
+ DEBUG_IMX25_UART || \
+- DEBUG_IMX21_IMX27_UART || \
++ DEBUG_IMX27_UART || \
+ DEBUG_IMX31_UART || \
+ DEBUG_IMX35_UART || \
+ DEBUG_IMX50_UART || \
+@@ -1591,12 +1591,12 @@ config DEBUG_LL_INCLUDE
+ default "debug/icedcc.S" if DEBUG_ICEDCC
+ default "debug/imx.S" if DEBUG_IMX1_UART || \
+ DEBUG_IMX25_UART || \
+- DEBUG_IMX21_IMX27_UART || \
++ DEBUG_IMX27_UART || \
+ DEBUG_IMX31_UART || \
+ DEBUG_IMX35_UART || \
+ DEBUG_IMX50_UART || \
+ DEBUG_IMX51_UART || \
+- DEBUG_IMX53_UART ||\
++ DEBUG_IMX53_UART || \
+ DEBUG_IMX6Q_UART || \
+ DEBUG_IMX6SL_UART || \
+ DEBUG_IMX6SX_UART || \
+diff --git a/arch/arm/boot/compressed/efi-header.S b/arch/arm/boot/compressed/efi-header.S
+index c0e7a745103e2..230030c130853 100644
+--- a/arch/arm/boot/compressed/efi-header.S
++++ b/arch/arm/boot/compressed/efi-header.S
+@@ -9,16 +9,22 @@
+ #include <linux/sizes.h>
+
+ .macro __nop
+-#ifdef CONFIG_EFI_STUB
+- @ This is almost but not quite a NOP, since it does clobber the
+- @ condition flags. But it is the best we can do for EFI, since
+- @ PE/COFF expects the magic string "MZ" at offset 0, while the
+- @ ARM/Linux boot protocol expects an executable instruction
+- @ there.
+- .inst MZ_MAGIC | (0x1310 << 16) @ tstne r0, #0x4d000
+-#else
+ AR_CLASS( mov r0, r0 )
+ M_CLASS( nop.w )
++ .endm
++
++ .macro __initial_nops
++#ifdef CONFIG_EFI_STUB
++ @ This is a two-instruction NOP, which happens to bear the
++ @ PE/COFF signature "MZ" in the first two bytes, so the kernel
++ @ is accepted as an EFI binary. Booting via the UEFI stub
++ @ will not execute those instructions, but the ARM/Linux
++ @ boot protocol does, so we need some NOPs here.
++ .inst MZ_MAGIC | (0xe225 << 16) @ eor r5, r5, 0x4d000
++ eor r5, r5, 0x4d000 @ undo previous insn
++#else
++ __nop
++ __nop
+ #endif
+ .endm
+
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index 247ce90559901..7a38c63b62bf0 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -190,7 +190,8 @@ start:
+ * were patching the initial instructions of the kernel, i.e
+ * had started to exploit this "patch area".
+ */
+- .rept 7
++ __initial_nops
++ .rept 5
+ __nop
+ .endr
+ #ifndef CONFIG_THUMB2_KERNEL
+diff --git a/arch/arm/boot/dts/armada-38x.dtsi b/arch/arm/boot/dts/armada-38x.dtsi
+index 9b1a24cc5e91f..df3c8d1d8f641 100644
+--- a/arch/arm/boot/dts/armada-38x.dtsi
++++ b/arch/arm/boot/dts/armada-38x.dtsi
+@@ -168,7 +168,7 @@
+ };
+
+ uart0: serial@12000 {
+- compatible = "marvell,armada-38x-uart";
++ compatible = "marvell,armada-38x-uart", "ns16550a";
+ reg = <0x12000 0x100>;
+ reg-shift = <2>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+@@ -178,7 +178,7 @@
+ };
+
+ uart1: serial@12100 {
+- compatible = "marvell,armada-38x-uart";
++ compatible = "marvell,armada-38x-uart", "ns16550a";
+ reg = <0x12100 0x100>;
+ reg-shift = <2>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/gemini-nas4220b.dts b/arch/arm/boot/dts/gemini-nas4220b.dts
+index 13112a8a5dd88..6544c730340fa 100644
+--- a/arch/arm/boot/dts/gemini-nas4220b.dts
++++ b/arch/arm/boot/dts/gemini-nas4220b.dts
+@@ -84,7 +84,7 @@
+ partitions {
+ compatible = "redboot-fis";
+ /* Eraseblock at 0xfe0000 */
+- fis-index-block = <0x1fc>;
++ fis-index-block = <0x7f>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index 32335d4ce478b..d40c3d2c4914e 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -8,6 +8,7 @@
+
+ #include "omap34xx.dtsi"
+ #include <dt-bindings/input/input.h>
++#include <dt-bindings/leds/common.h>
+
+ /*
+ * Default secure signed bootloader (Nokia X-Loader) does not enable L3 firewall
+@@ -630,63 +631,92 @@
+ };
+
+ lp5523: lp5523@32 {
++ #address-cells = <1>;
++ #size-cells = <0>;
+ compatible = "national,lp5523";
+ reg = <0x32>;
+ clock-mode = /bits/ 8 <0>; /* LP55XX_CLOCK_AUTO */
+- enable-gpio = <&gpio2 9 GPIO_ACTIVE_HIGH>; /* 41 */
++ enable-gpios = <&gpio2 9 GPIO_ACTIVE_HIGH>; /* 41 */
+
+- chan0 {
++ led@0 {
++ reg = <0>;
+ chan-name = "lp5523:kb1";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_WHITE>;
++ function = LED_FUNCTION_KBD_BACKLIGHT;
+ };
+
+- chan1 {
++ led@1 {
++ reg = <1>;
+ chan-name = "lp5523:kb2";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_WHITE>;
++ function = LED_FUNCTION_KBD_BACKLIGHT;
+ };
+
+- chan2 {
++ led@2 {
++ reg = <2>;
+ chan-name = "lp5523:kb3";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_WHITE>;
++ function = LED_FUNCTION_KBD_BACKLIGHT;
+ };
+
+- chan3 {
++ led@3 {
++ reg = <3>;
+ chan-name = "lp5523:kb4";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_WHITE>;
++ function = LED_FUNCTION_KBD_BACKLIGHT;
+ };
+
+- chan4 {
++ led@4 {
++ reg = <4>;
+ chan-name = "lp5523:b";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_BLUE>;
++ function = LED_FUNCTION_STATUS;
+ };
+
+- chan5 {
++ led@5 {
++ reg = <5>;
+ chan-name = "lp5523:g";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_GREEN>;
++ function = LED_FUNCTION_STATUS;
+ };
+
+- chan6 {
++ led@6 {
++ reg = <6>;
+ chan-name = "lp5523:r";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_RED>;
++ function = LED_FUNCTION_STATUS;
+ };
+
+- chan7 {
++ led@7 {
++ reg = <7>;
+ chan-name = "lp5523:kb5";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_WHITE>;
++ function = LED_FUNCTION_KBD_BACKLIGHT;
+ };
+
+- chan8 {
++ led@8 {
++ reg = <8>;
+ chan-name = "lp5523:kb6";
+ led-cur = /bits/ 8 <50>;
+ max-cur = /bits/ 8 <100>;
++ color = <LED_COLOR_ID_WHITE>;
++ function = LED_FUNCTION_KBD_BACKLIGHT;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/stm32f429-disco.dts b/arch/arm/boot/dts/stm32f429-disco.dts
+index 075ac57d0bf4a..6435e099c6326 100644
+--- a/arch/arm/boot/dts/stm32f429-disco.dts
++++ b/arch/arm/boot/dts/stm32f429-disco.dts
+@@ -192,7 +192,7 @@
+
+ display: display@1{
+ /* Connect panel-ilitek-9341 to ltdc */
+- compatible = "st,sf-tc240t-9370-t";
++ compatible = "st,sf-tc240t-9370-t", "ilitek,ili9341";
+ reg = <1>;
+ spi-3wire;
+ spi-max-frequency = <10000000>;
+diff --git a/arch/arm/include/debug/imx-uart.h b/arch/arm/include/debug/imx-uart.h
+index c8eb83d4b8964..3edbb3c5b42bf 100644
+--- a/arch/arm/include/debug/imx-uart.h
++++ b/arch/arm/include/debug/imx-uart.h
+@@ -11,13 +11,6 @@
+ #define IMX1_UART_BASE_ADDR(n) IMX1_UART##n##_BASE_ADDR
+ #define IMX1_UART_BASE(n) IMX1_UART_BASE_ADDR(n)
+
+-#define IMX21_UART1_BASE_ADDR 0x1000a000
+-#define IMX21_UART2_BASE_ADDR 0x1000b000
+-#define IMX21_UART3_BASE_ADDR 0x1000c000
+-#define IMX21_UART4_BASE_ADDR 0x1000d000
+-#define IMX21_UART_BASE_ADDR(n) IMX21_UART##n##_BASE_ADDR
+-#define IMX21_UART_BASE(n) IMX21_UART_BASE_ADDR(n)
+-
+ #define IMX25_UART1_BASE_ADDR 0x43f90000
+ #define IMX25_UART2_BASE_ADDR 0x43f94000
+ #define IMX25_UART3_BASE_ADDR 0x5000c000
+@@ -26,6 +19,13 @@
+ #define IMX25_UART_BASE_ADDR(n) IMX25_UART##n##_BASE_ADDR
+ #define IMX25_UART_BASE(n) IMX25_UART_BASE_ADDR(n)
+
++#define IMX27_UART1_BASE_ADDR 0x1000a000
++#define IMX27_UART2_BASE_ADDR 0x1000b000
++#define IMX27_UART3_BASE_ADDR 0x1000c000
++#define IMX27_UART4_BASE_ADDR 0x1000d000
++#define IMX27_UART_BASE_ADDR(n) IMX27_UART##n##_BASE_ADDR
++#define IMX27_UART_BASE(n) IMX27_UART_BASE_ADDR(n)
++
+ #define IMX31_UART1_BASE_ADDR 0x43f90000
+ #define IMX31_UART2_BASE_ADDR 0x43f94000
+ #define IMX31_UART3_BASE_ADDR 0x5000c000
+@@ -112,10 +112,10 @@
+
+ #ifdef CONFIG_DEBUG_IMX1_UART
+ #define UART_PADDR IMX_DEBUG_UART_BASE(IMX1)
+-#elif defined(CONFIG_DEBUG_IMX21_IMX27_UART)
+-#define UART_PADDR IMX_DEBUG_UART_BASE(IMX21)
+ #elif defined(CONFIG_DEBUG_IMX25_UART)
+ #define UART_PADDR IMX_DEBUG_UART_BASE(IMX25)
++#elif defined(CONFIG_DEBUG_IMX27_UART)
++#define UART_PADDR IMX_DEBUG_UART_BASE(IMX27)
+ #elif defined(CONFIG_DEBUG_IMX31_UART)
+ #define UART_PADDR IMX_DEBUG_UART_BASE(IMX31)
+ #elif defined(CONFIG_DEBUG_IMX35_UART)
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index ee949255ced3f..09ef73b99dd86 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -154,8 +154,10 @@ static int __init rcar_gen2_regulator_quirk(void)
+ return -ENODEV;
+
+ for_each_matching_node_and_match(np, rcar_gen2_quirk_match, &id) {
+- if (!of_device_is_available(np))
++ if (!of_device_is_available(np)) {
++ of_node_put(np);
+ break;
++ }
+
+ ret = of_property_read_u32(np, "reg", &addr);
+ if (ret) /* Skip invalid entry and continue */
+@@ -164,6 +166,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ quirk = kzalloc(sizeof(*quirk), GFP_KERNEL);
+ if (!quirk) {
+ ret = -ENOMEM;
++ of_node_put(np);
+ goto err_mem;
+ }
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 959b299344e54..7342c8a2b322d 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -52,7 +52,7 @@
+ secure-monitor = <&sm>;
+ };
+
+- gpu_opp_table: gpu-opp-table {
++ gpu_opp_table: opp-table-gpu {
+ compatible = "operating-points-v2";
+
+ opp-124999998 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index 59b5f39088757..b9b8cd4b5ba9d 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -543,7 +543,7 @@
+ pinctrl-0 = <&nor_pins>;
+ pinctrl-names = "default";
+
+- mx25u64: spi-flash@0 {
++ mx25u64: flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "mxicy,mx25u6435f", "jedec,spi-nor";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+index a350fee1264d7..a4d34398da358 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-wetek.dtsi
+@@ -6,6 +6,7 @@
+ */
+
+ #include "meson-gxbb.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+
+ / {
+ aliases {
+@@ -64,6 +65,7 @@
+ regulator-name = "VDDIO_AO18";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
++ regulator-always-on;
+ };
+
+ vcc_3v3: regulator-vcc_3v3 {
+@@ -161,6 +163,7 @@
+ status = "okay";
+ pinctrl-0 = <&hdmi_hpd_pins>, <&hdmi_i2c_pins>;
+ pinctrl-names = "default";
++ hdmi-supply = <&vddio_ao18>;
+ };
+
+ &hdmi_tx_tmds_port {
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+index 13cdc958ba3ea..71858c9376c25 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-qds.dts
+@@ -261,11 +261,6 @@
+ vcc-supply = <&sb_3v3>;
+ };
+
+- rtc@51 {
+- compatible = "nxp,pcf2129";
+- reg = <0x51>;
+- };
+-
+ eeprom@56 {
+ compatible = "atmel,24c512";
+ reg = <0x56>;
+@@ -307,6 +302,15 @@
+
+ };
+
++&i2c1 {
++ status = "okay";
++
++ rtc@51 {
++ compatible = "nxp,pcf2129";
++ reg = <0x51>;
++ };
++};
++
+ &enetc_port1 {
+ phy-handle = <&qds_phy1>;
+ phy-connection-type = "rgmii-id";
+diff --git a/arch/arm64/boot/dts/marvell/cn9130.dtsi b/arch/arm64/boot/dts/marvell/cn9130.dtsi
+index a2b7e5ec979d3..327b04134134f 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130.dtsi
+@@ -11,6 +11,13 @@
+ model = "Marvell Armada CN9130 SoC";
+ compatible = "marvell,cn9130", "marvell,armada-ap807-quad",
+ "marvell,armada-ap807";
++
++ aliases {
++ gpio1 = &cp0_gpio1;
++ gpio2 = &cp0_gpio2;
++ spi1 = &cp0_spi0;
++ spi2 = &cp0_spi1;
++ };
+ };
+
+ /*
+@@ -35,3 +42,11 @@
+ #undef CP11X_PCIE0_BASE
+ #undef CP11X_PCIE1_BASE
+ #undef CP11X_PCIE2_BASE
++
++&cp0_gpio1 {
++ status = "okay";
++};
++
++&cp0_gpio2 {
++ status = "okay";
++};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 0c46ab7bbbf37..eec6418ecdb1a 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -985,7 +985,7 @@
+
+ ccplex@e000000 {
+ compatible = "nvidia,tegra186-ccplex-cluster";
+- reg = <0x0 0x0e000000 0x0 0x3fffff>;
++ reg = <0x0 0x0e000000 0x0 0x400000>;
+
+ nvidia,bpmp = <&bpmp>;
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 9b5007e5f790f..05cf606b85c9f 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -782,13 +782,12 @@
+ reg = <0x3510000 0x10000>;
+ interrupts = <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bpmp TEGRA194_CLK_HDA>,
+- <&bpmp TEGRA194_CLK_HDA2CODEC_2X>,
+- <&bpmp TEGRA194_CLK_HDA2HDMICODEC>;
+- clock-names = "hda", "hda2codec_2x", "hda2hdmi";
++ <&bpmp TEGRA194_CLK_HDA2HDMICODEC>,
++ <&bpmp TEGRA194_CLK_HDA2CODEC_2X>;
++ clock-names = "hda", "hda2hdmi", "hda2codec_2x";
+ resets = <&bpmp TEGRA194_RESET_HDA>,
+- <&bpmp TEGRA194_RESET_HDA2CODEC_2X>,
+ <&bpmp TEGRA194_RESET_HDA2HDMICODEC>;
+- reset-names = "hda", "hda2codec_2x", "hda2hdmi";
++ reset-names = "hda", "hda2hdmi";
+ power-domains = <&bpmp TEGRA194_POWER_DOMAIN_DISP>;
+ interconnects = <&mc TEGRA194_MEMORY_CLIENT_HDAR &emc>,
+ <&mc TEGRA194_MEMORY_CLIENT_HDAW &emc>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 9cb8f7a052df9..2a1f03cdb52c7 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -221,7 +221,7 @@
+ interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- gpio-ranges = <&tlmm 0 80>;
++ gpio-ranges = <&tlmm 0 0 80>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index b1ffc056eea0b..291276a38d7cd 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -18,8 +18,8 @@
+ #size-cells = <2>;
+
+ aliases {
+- sdhc1 = &sdhc_1; /* SDC1 eMMC slot */
+- sdhc2 = &sdhc_2; /* SDC2 SD card slot */
++ mmc0 = &sdhc_1; /* SDC1 eMMC slot */
++ mmc1 = &sdhc_2; /* SDC2 SD card slot */
+ };
+
+ chosen { };
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index eef17434d12ae..ef5d03a150693 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -645,9 +645,6 @@
+ nvmem-cells = <&gpu_speed_bin>;
+ nvmem-cell-names = "speed_bin";
+
+- qcom,gpu-quirk-two-pass-use-wfi;
+- qcom,gpu-quirk-fault-detect-mask;
+-
+ operating-points-v2 = <&gpu_opp_table>;
+
+ gpu_opp_table: opp-table {
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index ad6561843ba28..e080c317b5e3d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -365,6 +365,10 @@
+ dai@1 {
+ reg = <1>;
+ };
++
++ dai@2 {
++ reg = <2>;
++ };
+ };
+
+ &sound {
+@@ -377,6 +381,7 @@
+ "SpkrLeft IN", "SPK1 OUT",
+ "SpkrRight IN", "SPK2 OUT",
+ "MM_DL1", "MultiMedia1 Playback",
++ "MM_DL3", "MultiMedia3 Playback",
+ "MultiMedia2 Capture", "MM_UL2";
+
+ mm1-dai-link {
+@@ -393,6 +398,13 @@
+ };
+ };
+
++ mm3-dai-link {
++ link-name = "MultiMedia3";
++ cpu {
++ sound-dai = <&q6asmdai MSM_FRONTEND_DAI_MULTIMEDIA3>;
++ };
++ };
++
+ slim-dai-link {
+ link-name = "SLIM Playback";
+ cpu {
+@@ -422,6 +434,21 @@
+ sound-dai = <&wcd9340 1>;
+ };
+ };
++
++ slim-wcd-dai-link {
++ link-name = "SLIM WCD Playback";
++ cpu {
++ sound-dai = <&q6afedai SLIMBUS_1_RX>;
++ };
++
++ platform {
++ sound-dai = <&q6routing>;
++ };
++
++ codec {
++ sound-dai = <&wcd9340 2>;
++ };
++ };
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/renesas/cat875.dtsi b/arch/arm64/boot/dts/renesas/cat875.dtsi
+index 801ea54b027c4..20f8adc635e72 100644
+--- a/arch/arm64/boot/dts/renesas/cat875.dtsi
++++ b/arch/arm64/boot/dts/renesas/cat875.dtsi
+@@ -18,6 +18,7 @@
+ pinctrl-names = "default";
+ renesas,no-ether-link;
+ phy-handle = <&phy0>;
++ phy-mode = "rgmii-id";
+ status = "okay";
+
+ phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 5832ad830ed14..1ab9f9604af6c 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -25,7 +25,7 @@
+ #size-cells = <1>;
+ ranges = <0x00 0x00 0x00100000 0x1c000>;
+
+- serdes_ln_ctrl: serdes-ln-ctrl@4080 {
++ serdes_ln_ctrl: mux-controller@4080 {
+ compatible = "mmio-mux";
+ #mux-control-cells = <1>;
+ mux-reg-masks = <0x4080 0x3>, <0x4084 0x3>, /* SERDES0 lane0/1 select */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200.dtsi b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+index 66169bcf7c9a4..03a9623f0f956 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+@@ -60,7 +60,7 @@
+ i-cache-sets = <256>;
+ d-cache-size = <0x8000>;
+ d-cache-line-size = <64>;
+- d-cache-sets = <128>;
++ d-cache-sets = <256>;
+ next-level-cache = <&L2_0>;
+ };
+
+@@ -74,7 +74,7 @@
+ i-cache-sets = <256>;
+ d-cache-size = <0x8000>;
+ d-cache-line-size = <64>;
+- d-cache-sets = <128>;
++ d-cache-sets = <256>;
+ next-level-cache = <&L2_0>;
+ };
+ };
+@@ -84,7 +84,7 @@
+ cache-level = <2>;
+ cache-size = <0x100000>;
+ cache-line-size = <64>;
+- cache-sets = <2048>;
++ cache-sets = <1024>;
+ next-level-cache = <&msmc_l3>;
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e.dtsi b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+index cc483f7344af3..a199227327ed2 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+@@ -61,7 +61,7 @@
+ i-cache-sets = <256>;
+ d-cache-size = <0x8000>;
+ d-cache-line-size = <64>;
+- d-cache-sets = <128>;
958 ++ d-cache-sets = <256>;
959 + next-level-cache = <&L2_0>;
960 + };
961 +
962 +@@ -75,7 +75,7 @@
963 + i-cache-sets = <256>;
964 + d-cache-size = <0x8000>;
965 + d-cache-line-size = <64>;
966 +- d-cache-sets = <128>;
967 ++ d-cache-sets = <256>;
968 + next-level-cache = <&L2_0>;
969 + };
970 + };
971 +@@ -85,7 +85,7 @@
972 + cache-level = <2>;
973 + cache-size = <0x100000>;
974 + cache-line-size = <64>;
975 +- cache-sets = <2048>;
976 ++ cache-sets = <1024>;
977 + next-level-cache = <&msmc_l3>;
978 + };
979 +
980 +diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
981 +index 073acbf02a7c8..1fd5d790ab800 100644
982 +--- a/arch/arm64/lib/clear_page.S
983 ++++ b/arch/arm64/lib/clear_page.S
984 +@@ -14,8 +14,9 @@
985 + * Parameters:
986 + * x0 - dest
987 + */
988 +-SYM_FUNC_START(clear_page)
989 ++SYM_FUNC_START_PI(clear_page)
990 + mrs x1, dczid_el0
991 ++ tbnz x1, #4, 2f /* Branch if DC ZVA is prohibited */
992 + and w1, w1, #0xf
993 + mov x2, #4
994 + lsl x1, x2, x1
995 +@@ -25,5 +26,14 @@ SYM_FUNC_START(clear_page)
996 + tst x0, #(PAGE_SIZE - 1)
997 + b.ne 1b
998 + ret
999 +-SYM_FUNC_END(clear_page)
1000 ++
1001 ++2: stnp xzr, xzr, [x0]
1002 ++ stnp xzr, xzr, [x0, #16]
1003 ++ stnp xzr, xzr, [x0, #32]
1004 ++ stnp xzr, xzr, [x0, #48]
1005 ++ add x0, x0, #64
1006 ++ tst x0, #(PAGE_SIZE - 1)
1007 ++ b.ne 2b
1008 ++ ret
1009 ++SYM_FUNC_END_PI(clear_page)
1010 + EXPORT_SYMBOL(clear_page)
1011 +diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
1012 +index e7a793961408d..29144f4cd4492 100644
1013 +--- a/arch/arm64/lib/copy_page.S
1014 ++++ b/arch/arm64/lib/copy_page.S
1015 +@@ -17,7 +17,7 @@
1016 + * x0 - dest
1017 + * x1 - src
1018 + */
1019 +-SYM_FUNC_START(copy_page)
1020 ++SYM_FUNC_START_PI(copy_page)
1021 + alternative_if ARM64_HAS_NO_HW_PREFETCH
1022 + // Prefetch three cache lines ahead.
1023 + prfm pldl1strm, [x1, #128]
1024 +@@ -75,5 +75,5 @@ alternative_else_nop_endif
1025 + stnp x16, x17, [x0, #112 - 256]
1026 +
1027 + ret
1028 +-SYM_FUNC_END(copy_page)
1029 ++SYM_FUNC_END_PI(copy_page)
1030 + EXPORT_SYMBOL(copy_page)
1031 +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
1032 +index 23d756fe0fd6c..3442bdd4314cb 100644
1033 +--- a/arch/mips/Kconfig
1034 ++++ b/arch/mips/Kconfig
1035 +@@ -1985,6 +1985,10 @@ config SYS_HAS_CPU_MIPS64_R1
1036 + config SYS_HAS_CPU_MIPS64_R2
1037 + bool
1038 +
1039 ++config SYS_HAS_CPU_MIPS64_R5
1040 ++ bool
1041 ++ select ARCH_HAS_SYNC_DMA_FOR_CPU if DMA_NONCOHERENT
1042 ++
1043 + config SYS_HAS_CPU_MIPS64_R6
1044 + bool
1045 + select ARCH_HAS_SYNC_DMA_FOR_CPU if DMA_NONCOHERENT
1046 +@@ -2146,7 +2150,7 @@ config CPU_SUPPORTS_ADDRWINCFG
1047 + bool
1048 + config CPU_SUPPORTS_HUGEPAGES
1049 + bool
1050 +- depends on !(32BIT && (ARCH_PHYS_ADDR_T_64BIT || EVA))
1051 ++ depends on !(32BIT && (PHYS_ADDR_T_64BIT || EVA))
1052 + config MIPS_PGD_C0_CONTEXT
1053 + bool
1054 + default y if 64BIT && (CPU_MIPSR2 || CPU_MIPSR6) && !CPU_XLP
1055 +diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
1056 +index aba6e2d6a736c..dcfa0ea912fe1 100644
1057 +--- a/arch/mips/bcm63xx/clk.c
1058 ++++ b/arch/mips/bcm63xx/clk.c
1059 +@@ -387,6 +387,12 @@ struct clk *clk_get_parent(struct clk *clk)
1060 + }
1061 + EXPORT_SYMBOL(clk_get_parent);
1062 +
1063 ++int clk_set_parent(struct clk *clk, struct clk *parent)
1064 ++{
1065 ++ return 0;
1066 ++}
1067 ++EXPORT_SYMBOL(clk_set_parent);
1068 ++
1069 + unsigned long clk_get_rate(struct clk *clk)
1070 + {
1071 + if (!clk)
1072 +diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
1073 +index d56e9b9d2e434..a994022e32c9f 100644
1074 +--- a/arch/mips/cavium-octeon/octeon-platform.c
1075 ++++ b/arch/mips/cavium-octeon/octeon-platform.c
1076 +@@ -328,6 +328,7 @@ static int __init octeon_ehci_device_init(void)
1077 +
1078 + pd->dev.platform_data = &octeon_ehci_pdata;
1079 + octeon_ehci_hw_start(&pd->dev);
1080 ++ put_device(&pd->dev);
1081 +
1082 + return ret;
1083 + }
1084 +@@ -391,6 +392,7 @@ static int __init octeon_ohci_device_init(void)
1085 +
1086 + pd->dev.platform_data = &octeon_ohci_pdata;
1087 + octeon_ohci_hw_start(&pd->dev);
1088 ++ put_device(&pd->dev);
1089 +
1090 + return ret;
1091 + }
1092 +diff --git a/arch/mips/cavium-octeon/octeon-usb.c b/arch/mips/cavium-octeon/octeon-usb.c
1093 +index 950e6c6e86297..fa87e5aa1811d 100644
1094 +--- a/arch/mips/cavium-octeon/octeon-usb.c
1095 ++++ b/arch/mips/cavium-octeon/octeon-usb.c
1096 +@@ -544,6 +544,7 @@ static int __init dwc3_octeon_device_init(void)
1097 + devm_iounmap(&pdev->dev, base);
1098 + devm_release_mem_region(&pdev->dev, res->start,
1099 + resource_size(res));
1100 ++ put_device(&pdev->dev);
1101 + }
1102 + } while (node != NULL);
1103 +
1104 +diff --git a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
1105 +index 87a5bfbf8cfe9..28572ddfb004a 100644
1106 +--- a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
1107 ++++ b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
1108 +@@ -36,7 +36,7 @@
1109 + nop
1110 + /* Loongson-3A R2/R3 */
1111 + andi t0, (PRID_IMP_MASK | PRID_REV_MASK)
1112 +- slti t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
1113 ++ slti t0, t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
1114 + bnez t0, 2f
1115 + nop
1116 + 1:
1117 +@@ -71,7 +71,7 @@
1118 + nop
1119 + /* Loongson-3A R2/R3 */
1120 + andi t0, (PRID_IMP_MASK | PRID_REV_MASK)
1121 +- slti t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
1122 ++ slti t0, t0, (PRID_IMP_LOONGSON_64C | PRID_REV_LOONGSON3A_R2_0)
1123 + bnez t0, 2f
1124 + nop
1125 + 1:
1126 +diff --git a/arch/mips/include/asm/octeon/cvmx-bootinfo.h b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
1127 +index c114a7ba0badd..e77e8b7c00838 100644
1128 +--- a/arch/mips/include/asm/octeon/cvmx-bootinfo.h
1129 ++++ b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
1130 +@@ -317,7 +317,7 @@ enum cvmx_chip_types_enum {
1131 +
1132 + /* Functions to return string based on type */
1133 + #define ENUM_BRD_TYPE_CASE(x) \
1134 +- case x: return(#x + 16); /* Skip CVMX_BOARD_TYPE_ */
1135 ++ case x: return (&#x[16]); /* Skip CVMX_BOARD_TYPE_ */
1136 + static inline const char *cvmx_board_type_to_string(enum
1137 + cvmx_board_types_enum type)
1138 + {
1139 +@@ -408,7 +408,7 @@ static inline const char *cvmx_board_type_to_string(enum
1140 + }
1141 +
1142 + #define ENUM_CHIP_TYPE_CASE(x) \
1143 +- case x: return(#x + 15); /* Skip CVMX_CHIP_TYPE */
1144 ++ case x: return (&#x[15]); /* Skip CVMX_CHIP_TYPE */
1145 + static inline const char *cvmx_chip_type_to_string(enum
1146 + cvmx_chip_types_enum type)
1147 + {
1148 +diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
1149 +index 4916cccf378fd..7a623684d9b5e 100644
1150 +--- a/arch/mips/lantiq/clk.c
1151 ++++ b/arch/mips/lantiq/clk.c
1152 +@@ -164,6 +164,12 @@ struct clk *clk_get_parent(struct clk *clk)
1153 + }
1154 + EXPORT_SYMBOL(clk_get_parent);
1155 +
1156 ++int clk_set_parent(struct clk *clk, struct clk *parent)
1157 ++{
1158 ++ return 0;
1159 ++}
1160 ++EXPORT_SYMBOL(clk_set_parent);
1161 ++
1162 + static inline u32 get_counter_resolution(void)
1163 + {
1164 + u32 res;
1165 +diff --git a/arch/openrisc/include/asm/syscalls.h b/arch/openrisc/include/asm/syscalls.h
1166 +index 3a7eeae6f56a8..aa1c7e98722e3 100644
1167 +--- a/arch/openrisc/include/asm/syscalls.h
1168 ++++ b/arch/openrisc/include/asm/syscalls.h
1169 +@@ -22,9 +22,11 @@ asmlinkage long sys_or1k_atomic(unsigned long type, unsigned long *v1,
1170 +
1171 + asmlinkage long __sys_clone(unsigned long clone_flags, unsigned long newsp,
1172 + void __user *parent_tid, void __user *child_tid, int tls);
1173 ++asmlinkage long __sys_clone3(struct clone_args __user *uargs, size_t size);
1174 + asmlinkage long __sys_fork(void);
1175 +
1176 + #define sys_clone __sys_clone
1177 ++#define sys_clone3 __sys_clone3
1178 + #define sys_fork __sys_fork
1179 +
1180 + #endif /* __ASM_OPENRISC_SYSCALLS_H */
1181 +diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
1182 +index 98e4f97db5159..b42d32d79b2e6 100644
1183 +--- a/arch/openrisc/kernel/entry.S
1184 ++++ b/arch/openrisc/kernel/entry.S
1185 +@@ -1170,6 +1170,11 @@ ENTRY(__sys_clone)
1186 + l.j _fork_save_extra_regs_and_call
1187 + l.nop
1188 +
1189 ++ENTRY(__sys_clone3)
1190 ++ l.movhi r29,hi(sys_clone3)
1191 ++ l.j _fork_save_extra_regs_and_call
1192 ++ l.ori r29,r29,lo(sys_clone3)
1193 ++
1194 + ENTRY(__sys_fork)
1195 + l.movhi r29,hi(sys_fork)
1196 + l.ori r29,r29,lo(sys_fork)
1197 +diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
1198 +index a303ae9a77f41..16ee41e77174f 100644
1199 +--- a/arch/parisc/include/asm/special_insns.h
1200 ++++ b/arch/parisc/include/asm/special_insns.h
1201 +@@ -2,28 +2,32 @@
1202 + #ifndef __PARISC_SPECIAL_INSNS_H
1203 + #define __PARISC_SPECIAL_INSNS_H
1204 +
1205 +-#define lpa(va) ({ \
1206 +- unsigned long pa; \
1207 +- __asm__ __volatile__( \
1208 +- "copy %%r0,%0\n\t" \
1209 +- "lpa %%r0(%1),%0" \
1210 +- : "=r" (pa) \
1211 +- : "r" (va) \
1212 +- : "memory" \
1213 +- ); \
1214 +- pa; \
1215 ++#define lpa(va) ({ \
1216 ++ unsigned long pa; \
1217 ++ __asm__ __volatile__( \
1218 ++ "copy %%r0,%0\n" \
1219 ++ "8:\tlpa %%r0(%1),%0\n" \
1220 ++ "9:\n" \
1221 ++ ASM_EXCEPTIONTABLE_ENTRY(8b, 9b) \
1222 ++ : "=&r" (pa) \
1223 ++ : "r" (va) \
1224 ++ : "memory" \
1225 ++ ); \
1226 ++ pa; \
1227 + })
1228 +
1229 +-#define lpa_user(va) ({ \
1230 +- unsigned long pa; \
1231 +- __asm__ __volatile__( \
1232 +- "copy %%r0,%0\n\t" \
1233 +- "lpa %%r0(%%sr3,%1),%0" \
1234 +- : "=r" (pa) \
1235 +- : "r" (va) \
1236 +- : "memory" \
1237 +- ); \
1238 +- pa; \
1239 ++#define lpa_user(va) ({ \
1240 ++ unsigned long pa; \
1241 ++ __asm__ __volatile__( \
1242 ++ "copy %%r0,%0\n" \
1243 ++ "8:\tlpa %%r0(%%sr3,%1),%0\n" \
1244 ++ "9:\n" \
1245 ++ ASM_EXCEPTIONTABLE_ENTRY(8b, 9b) \
1246 ++ : "=&r" (pa) \
1247 ++ : "r" (va) \
1248 ++ : "memory" \
1249 ++ ); \
1250 ++ pa; \
1251 + })
1252 +
1253 + #define mfctl(reg) ({ \
1254 +diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
1255 +index 43f56335759a4..269b737d26299 100644
1256 +--- a/arch/parisc/kernel/traps.c
1257 ++++ b/arch/parisc/kernel/traps.c
1258 +@@ -784,7 +784,7 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
1259 + * unless pagefault_disable() was called before.
1260 + */
1261 +
1262 +- if (fault_space == 0 && !faulthandler_disabled())
1263 ++ if (faulthandler_disabled() || fault_space == 0)
1264 + {
1265 + /* Clean up and return if in exception table. */
1266 + if (fixup_exception(regs))
1267 +diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
1268 +index c90702b04a530..48e5cd61599c6 100644
1269 +--- a/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
1270 ++++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
1271 +@@ -79,6 +79,7 @@ fman0: fman@400000 {
1272 + #size-cells = <0>;
1273 + compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
1274 + reg = <0xfc000 0x1000>;
1275 ++ fsl,erratum-a009885;
1276 + };
1277 +
1278 + xmdio0: mdio@fd000 {
1279 +@@ -86,6 +87,7 @@ fman0: fman@400000 {
1280 + #size-cells = <0>;
1281 + compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
1282 + reg = <0xfd000 0x1000>;
1283 ++ fsl,erratum-a009885;
1284 + };
1285 + };
1286 +
1287 +diff --git a/arch/powerpc/include/asm/cpu_setup_power.h b/arch/powerpc/include/asm/cpu_setup_power.h
1288 +new file mode 100644
1289 +index 0000000000000..24be9131f8032
1290 +--- /dev/null
1291 ++++ b/arch/powerpc/include/asm/cpu_setup_power.h
1292 +@@ -0,0 +1,12 @@
1293 ++/* SPDX-License-Identifier: GPL-2.0-or-later */
1294 ++/*
1295 ++ * Copyright (C) 2020 IBM Corporation
1296 ++ */
1297 ++void __setup_cpu_power7(unsigned long offset, struct cpu_spec *spec);
1298 ++void __restore_cpu_power7(void);
1299 ++void __setup_cpu_power8(unsigned long offset, struct cpu_spec *spec);
1300 ++void __restore_cpu_power8(void);
1301 ++void __setup_cpu_power9(unsigned long offset, struct cpu_spec *spec);
1302 ++void __restore_cpu_power9(void);
1303 ++void __setup_cpu_power10(unsigned long offset, struct cpu_spec *spec);
1304 ++void __restore_cpu_power10(void);
1305 +diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
1306 +index 0363734ff56e0..0f2acbb966740 100644
1307 +--- a/arch/powerpc/include/asm/hw_irq.h
1308 ++++ b/arch/powerpc/include/asm/hw_irq.h
1309 +@@ -38,6 +38,8 @@
1310 + #define PACA_IRQ_MUST_HARD_MASK (PACA_IRQ_EE)
1311 + #endif
1312 +
1313 ++#endif /* CONFIG_PPC64 */
1314 ++
1315 + /*
1316 + * flags for paca->irq_soft_mask
1317 + */
1318 +@@ -46,8 +48,6 @@
1319 + #define IRQS_PMI_DISABLED 2
1320 + #define IRQS_ALL_DISABLED (IRQS_DISABLED | IRQS_PMI_DISABLED)
1321 +
1322 +-#endif /* CONFIG_PPC64 */
1323 +-
1324 + #ifndef __ASSEMBLY__
1325 +
1326 + extern void replay_system_reset(void);
1327 +@@ -175,6 +175,42 @@ static inline bool arch_irqs_disabled(void)
1328 + return arch_irqs_disabled_flags(arch_local_save_flags());
1329 + }
1330 +
1331 ++static inline void set_pmi_irq_pending(void)
1332 ++{
1333 ++ /*
1334 ++ * Invoked from PMU callback functions to set PMI bit in the paca.
1335 ++ * This has to be called with irq's disabled (via hard_irq_disable()).
1336 ++ */
1337 ++ if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
1338 ++ WARN_ON_ONCE(mfmsr() & MSR_EE);
1339 ++
1340 ++ get_paca()->irq_happened |= PACA_IRQ_PMI;
1341 ++}
1342 ++
1343 ++static inline void clear_pmi_irq_pending(void)
1344 ++{
1345 ++ /*
1346 ++ * Invoked from PMU callback functions to clear the pending PMI bit
1347 ++ * in the paca.
1348 ++ */
1349 ++ if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
1350 ++ WARN_ON_ONCE(mfmsr() & MSR_EE);
1351 ++
1352 ++ get_paca()->irq_happened &= ~PACA_IRQ_PMI;
1353 ++}
1354 ++
1355 ++static inline bool pmi_irq_pending(void)
1356 ++{
1357 ++ /*
1358 ++ * Invoked from PMU callback functions to check if there is a pending
1359 ++ * PMI bit in the paca.
1360 ++ */
1361 ++ if (get_paca()->irq_happened & PACA_IRQ_PMI)
1362 ++ return true;
1363 ++
1364 ++ return false;
1365 ++}
1366 ++
1367 + #ifdef CONFIG_PPC_BOOK3S
1368 + /*
1369 + * To support disabling and enabling of irq with PMI, set of
1370 +@@ -296,6 +332,10 @@ extern void irq_set_pending_from_srr1(unsigned long srr1);
1371 +
1372 + extern void force_external_irq_replay(void);
1373 +
1374 ++static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
1375 ++{
1376 ++ regs->softe = val;
1377 ++}
1378 + #else /* CONFIG_PPC64 */
1379 +
1380 + static inline unsigned long arch_local_save_flags(void)
1381 +@@ -364,6 +404,13 @@ static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
1382 +
1383 + static inline void may_hard_irq_enable(void) { }
1384 +
1385 ++static inline void clear_pmi_irq_pending(void) { }
1386 ++static inline void set_pmi_irq_pending(void) { }
1387 ++static inline bool pmi_irq_pending(void) { return false; }
1388 ++
1389 ++static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
1390 ++{
1391 ++}
1392 + #endif /* CONFIG_PPC64 */
1393 +
1394 + #define ARCH_IRQ_INIT_FLAGS IRQ_NOREQUEST
1395 +diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
1396 +index f4b98903064f5..6afb14b6bbc26 100644
1397 +--- a/arch/powerpc/include/asm/reg.h
1398 ++++ b/arch/powerpc/include/asm/reg.h
1399 +@@ -865,6 +865,7 @@
1400 + #define MMCR0_BHRBA 0x00200000UL /* BHRB Access allowed in userspace */
1401 + #define MMCR0_EBE 0x00100000UL /* Event based branch enable */
1402 + #define MMCR0_PMCC 0x000c0000UL /* PMC control */
1403 ++#define MMCR0_PMCCEXT ASM_CONST(0x00000200) /* PMCCEXT control */
1404 + #define MMCR0_PMCC_U6 0x00080000UL /* PMC1-6 are R/W by user (PR) */
1405 + #define MMCR0_PMC1CE 0x00008000UL /* PMC1 count enable*/
1406 + #define MMCR0_PMCjCE ASM_CONST(0x00004000) /* PMCj count enable*/
1407 +diff --git a/arch/powerpc/kernel/btext.c b/arch/powerpc/kernel/btext.c
1408 +index 803c2a45b22ac..1cffb5e7c38d6 100644
1409 +--- a/arch/powerpc/kernel/btext.c
1410 ++++ b/arch/powerpc/kernel/btext.c
1411 +@@ -241,8 +241,10 @@ int __init btext_find_display(int allow_nonstdout)
1412 + rc = btext_initialize(np);
1413 + printk("result: %d\n", rc);
1414 + }
1415 +- if (rc == 0)
1416 ++ if (rc == 0) {
1417 ++ of_node_put(np);
1418 + break;
1419 ++ }
1420 + }
1421 + return rc;
1422 + }
1423 +diff --git a/arch/powerpc/kernel/cpu_setup_power.S b/arch/powerpc/kernel/cpu_setup_power.S
1424 +deleted file mode 100644
1425 +index 704e8b9501eee..0000000000000
1426 +--- a/arch/powerpc/kernel/cpu_setup_power.S
1427 ++++ /dev/null
1428 +@@ -1,252 +0,0 @@
1429 +-/* SPDX-License-Identifier: GPL-2.0-or-later */
1430 +-/*
1431 +- * This file contains low level CPU setup functions.
1432 +- * Copyright (C) 2003 Benjamin Herrenschmidt (benh@×××××××××××××××.org)
1433 +- */
1434 +-
1435 +-#include <asm/processor.h>
1436 +-#include <asm/page.h>
1437 +-#include <asm/cputable.h>
1438 +-#include <asm/ppc_asm.h>
1439 +-#include <asm/asm-offsets.h>
1440 +-#include <asm/cache.h>
1441 +-#include <asm/book3s/64/mmu-hash.h>
1442 +-
1443 +-/* Entry: r3 = crap, r4 = ptr to cputable entry
1444 +- *
1445 +- * Note that we can be called twice for pseudo-PVRs
1446 +- */
1447 +-_GLOBAL(__setup_cpu_power7)
1448 +- mflr r11
1449 +- bl __init_hvmode_206
1450 +- mtlr r11
1451 +- beqlr
1452 +- li r0,0
1453 +- mtspr SPRN_LPID,r0
1454 +- LOAD_REG_IMMEDIATE(r0, PCR_MASK)
1455 +- mtspr SPRN_PCR,r0
1456 +- mfspr r3,SPRN_LPCR
1457 +- li r4,(LPCR_LPES1 >> LPCR_LPES_SH)
1458 +- bl __init_LPCR_ISA206
1459 +- mtlr r11
1460 +- blr
1461 +-
1462 +-_GLOBAL(__restore_cpu_power7)
1463 +- mflr r11
1464 +- mfmsr r3
1465 +- rldicl. r0,r3,4,63
1466 +- beqlr
1467 +- li r0,0
1468 +- mtspr SPRN_LPID,r0
1469 +- LOAD_REG_IMMEDIATE(r0, PCR_MASK)
1470 +- mtspr SPRN_PCR,r0
1471 +- mfspr r3,SPRN_LPCR
1472 +- li r4,(LPCR_LPES1 >> LPCR_LPES_SH)
1473 +- bl __init_LPCR_ISA206
1474 +- mtlr r11
1475 +- blr
1476 +-
1477 +-_GLOBAL(__setup_cpu_power8)
1478 +- mflr r11
1479 +- bl __init_FSCR
1480 +- bl __init_PMU
1481 +- bl __init_PMU_ISA207
1482 +- bl __init_hvmode_206
1483 +- mtlr r11
1484 +- beqlr
1485 +- li r0,0
1486 +- mtspr SPRN_LPID,r0
1487 +- LOAD_REG_IMMEDIATE(r0, PCR_MASK)
1488 +- mtspr SPRN_PCR,r0
1489 +- mfspr r3,SPRN_LPCR
1490 +- ori r3, r3, LPCR_PECEDH
1491 +- li r4,0 /* LPES = 0 */
1492 +- bl __init_LPCR_ISA206
1493 +- bl __init_HFSCR
1494 +- bl __init_PMU_HV
1495 +- bl __init_PMU_HV_ISA207
1496 +- mtlr r11
1497 +- blr
1498 +-
1499 +-_GLOBAL(__restore_cpu_power8)
1500 +- mflr r11
1501 +- bl __init_FSCR
1502 +- bl __init_PMU
1503 +- bl __init_PMU_ISA207
1504 +- mfmsr r3
1505 +- rldicl. r0,r3,4,63
1506 +- mtlr r11
1507 +- beqlr
1508 +- li r0,0
1509 +- mtspr SPRN_LPID,r0
1510 +- LOAD_REG_IMMEDIATE(r0, PCR_MASK)
1511 +- mtspr SPRN_PCR,r0
1512 +- mfspr r3,SPRN_LPCR
1513 +- ori r3, r3, LPCR_PECEDH
1514 +- li r4,0 /* LPES = 0 */
1515 +- bl __init_LPCR_ISA206
1516 +- bl __init_HFSCR
1517 +- bl __init_PMU_HV
1518 +- bl __init_PMU_HV_ISA207
1519 +- mtlr r11
1520 +- blr
1521 +-
1522 +-_GLOBAL(__setup_cpu_power10)
1523 +- mflr r11
1524 +- bl __init_FSCR_power10
1525 +- bl __init_PMU
1526 +- bl __init_PMU_ISA31
1527 +- b 1f
1528 +-
1529 +-_GLOBAL(__setup_cpu_power9)
1530 +- mflr r11
1531 +- bl __init_FSCR_power9
1532 +- bl __init_PMU
1533 +-1: bl __init_hvmode_206
1534 +- mtlr r11
1535 +- beqlr
1536 +- li r0,0
1537 +- mtspr SPRN_PSSCR,r0
1538 +- mtspr SPRN_LPID,r0
1539 +- mtspr SPRN_PID,r0
1540 +- LOAD_REG_IMMEDIATE(r0, PCR_MASK)
1541 +- mtspr SPRN_PCR,r0
1542 +- mfspr r3,SPRN_LPCR
1543 +- LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE | LPCR_HEIC)
1544 +- or r3, r3, r4
1545 +- LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
1546 +- andc r3, r3, r4
1547 +- li r4,0 /* LPES = 0 */
1548 +- bl __init_LPCR_ISA300
1549 +- bl __init_HFSCR
1550 +- bl __init_PMU_HV
1551 +- mtlr r11
1552 +- blr
1553 +-
1554 +-_GLOBAL(__restore_cpu_power10)
1555 +- mflr r11
1556 +- bl __init_FSCR_power10
1557 +- bl __init_PMU
1558 +- bl __init_PMU_ISA31
1559 +- b 1f
1560 +-
1561 +-_GLOBAL(__restore_cpu_power9)
1562 +- mflr r11
1563 +- bl __init_FSCR_power9
1564 +- bl __init_PMU
1565 +-1: mfmsr r3
1566 +- rldicl. r0,r3,4,63
1567 +- mtlr r11
1568 +- beqlr
1569 +- li r0,0
1570 +- mtspr SPRN_PSSCR,r0
1571 +- mtspr SPRN_LPID,r0
1572 +- mtspr SPRN_PID,r0
1573 +- LOAD_REG_IMMEDIATE(r0, PCR_MASK)
1574 +- mtspr SPRN_PCR,r0
1575 +- mfspr r3,SPRN_LPCR
1576 +- LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE | LPCR_HEIC)
1577 +- or r3, r3, r4
1578 +- LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
1579 +- andc r3, r3, r4
1580 +- li r4,0 /* LPES = 0 */
1581 +- bl __init_LPCR_ISA300
1582 +- bl __init_HFSCR
1583 +- bl __init_PMU_HV
1584 +- mtlr r11
1585 +- blr
1586 +-
1587 +-__init_hvmode_206:
1588 +- /* Disable CPU_FTR_HVMODE and exit if MSR:HV is not set */
1589 +- mfmsr r3
1590 +- rldicl. r0,r3,4,63
1591 +- bnelr
1592 +- ld r5,CPU_SPEC_FEATURES(r4)
1593 +- LOAD_REG_IMMEDIATE(r6,CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST)
1594 +- andc r5,r5,r6
1595 +- std r5,CPU_SPEC_FEATURES(r4)
1596 +- blr
1597 +-
1598 +-__init_LPCR_ISA206:
1599 +- /* Setup a sane LPCR:
1600 +- * Called with initial LPCR in R3 and desired LPES 2-bit value in R4
1601 +- *
1602 +- * LPES = 0b01 (HSRR0/1 used for 0x500)
1603 +- * PECE = 0b111
1604 +- * DPFD = 4
1605 +- * HDICE = 0
1606 +- * VC = 0b100 (VPM0=1, VPM1=0, ISL=0)
1607 +- * VRMASD = 0b10000 (L=1, LP=00)
1608 +- *
1609 +- * Other bits untouched for now
1610 +- */
1611 +- li r5,0x10
1612 +- rldimi r3,r5, LPCR_VRMASD_SH, 64-LPCR_VRMASD_SH-5
1613 +-
1614 +- /* POWER9 has no VRMASD */
1615 +-__init_LPCR_ISA300:
1616 +- rldimi r3,r4, LPCR_LPES_SH, 64-LPCR_LPES_SH-2
1617 +- ori r3,r3,(LPCR_PECE0|LPCR_PECE1|LPCR_PECE2)
1618 +- li r5,4
1619 +- rldimi r3,r5, LPCR_DPFD_SH, 64-LPCR_DPFD_SH-3
1620 +- clrrdi r3,r3,1 /* clear HDICE */
1621 +- li r5,4
1622 +- rldimi r3,r5, LPCR_VC_SH, 0
1623 +- mtspr SPRN_LPCR,r3
1624 +- isync
1625 +- blr
1626 +-
1627 +-__init_FSCR_power10:
1628 +- mfspr r3, SPRN_FSCR
1629 +- ori r3, r3, FSCR_PREFIX
1630 +- mtspr SPRN_FSCR, r3
1631 +- // fall through
1632 +-
1633 +-__init_FSCR_power9:
1634 +- mfspr r3, SPRN_FSCR
1635 +- ori r3, r3, FSCR_SCV
1636 +- mtspr SPRN_FSCR, r3
1637 +- // fall through
1638 +-
1639 +-__init_FSCR:
1640 +- mfspr r3,SPRN_FSCR
1641 +- ori r3,r3,FSCR_TAR|FSCR_EBB
1642 +- mtspr SPRN_FSCR,r3
1643 +- blr
1644 +-
1645 +-__init_HFSCR:
1646 +- mfspr r3,SPRN_HFSCR
1647 +- ori r3,r3,HFSCR_TAR|HFSCR_TM|HFSCR_BHRB|HFSCR_PM|\
1648 +- HFSCR_DSCR|HFSCR_VECVSX|HFSCR_FP|HFSCR_EBB|HFSCR_MSGP
1649 +- mtspr SPRN_HFSCR,r3
1650 +- blr
1651 +-
1652 +-__init_PMU_HV:
1653 +- li r5,0
1654 +- mtspr SPRN_MMCRC,r5
1655 +- blr
1656 +-
1657 +-__init_PMU_HV_ISA207:
1658 +- li r5,0
1659 +- mtspr SPRN_MMCRH,r5
1660 +- blr
1661 +-
1662 +-__init_PMU:
1663 +- li r5,0
1664 +- mtspr SPRN_MMCRA,r5
1665 +- mtspr SPRN_MMCR0,r5
1666 +- mtspr SPRN_MMCR1,r5
1667 +- mtspr SPRN_MMCR2,r5
1668 +- blr
1669 +-
1670 +-__init_PMU_ISA207:
1671 +- li r5,0
1672 +- mtspr SPRN_MMCRS,r5
1673 +- blr
1674 +-
1675 +-__init_PMU_ISA31:
1676 +- li r5,0
1677 +- mtspr SPRN_MMCR3,r5
1678 +- LOAD_REG_IMMEDIATE(r5, MMCRA_BHRB_DISABLE)
1679 +- mtspr SPRN_MMCRA,r5
1680 +- blr
1681 +diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c
1682 +new file mode 100644
1683 +index 0000000000000..3cca88ee96d71
1684 +--- /dev/null
1685 ++++ b/arch/powerpc/kernel/cpu_setup_power.c
1686 +@@ -0,0 +1,272 @@
1687 ++// SPDX-License-Identifier: GPL-2.0-or-later
1688 ++/*
1689 ++ * Copyright 2020, Jordan Niethe, IBM Corporation.
1690 ++ *
1691 ++ * This file contains low level CPU setup functions.
1692 ++ * Originally written in assembly by Benjamin Herrenschmidt & various other
1693 ++ * authors.
1694 ++ */
1695 ++
1696 ++#include <asm/reg.h>
1697 ++#include <asm/synch.h>
1698 ++#include <linux/bitops.h>
1699 ++#include <asm/cputable.h>
1700 ++#include <asm/cpu_setup_power.h>
1701 ++
1702 ++/* Disable CPU_FTR_HVMODE and return false if MSR:HV is not set */
1703 ++static bool init_hvmode_206(struct cpu_spec *t)
1704 ++{
1705 ++ u64 msr;
1706 ++
1707 ++ msr = mfmsr();
1708 ++ if (msr & MSR_HV)
1709 ++ return true;
1710 ++
1711 ++ t->cpu_features &= ~(CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST);
1712 ++ return false;
1713 ++}
1714 ++
1715 ++static void init_LPCR_ISA300(u64 lpcr, u64 lpes)
1716 ++{
1717 ++ /* POWER9 has no VRMASD */
1718 ++ lpcr |= (lpes << LPCR_LPES_SH) & LPCR_LPES;
1719 ++ lpcr |= LPCR_PECE0|LPCR_PECE1|LPCR_PECE2;
1720 ++ lpcr |= (4ull << LPCR_DPFD_SH) & LPCR_DPFD;
1721 ++ lpcr &= ~LPCR_HDICE; /* clear HDICE */
1722 ++ lpcr |= (4ull << LPCR_VC_SH);
1723 ++ mtspr(SPRN_LPCR, lpcr);
1724 ++ isync();
1725 ++}
1726 ++
1727 ++/*
1728 ++ * Setup a sane LPCR:
1729 ++ * Called with initial LPCR and desired LPES 2-bit value
1730 ++ *
1731 ++ * LPES = 0b01 (HSRR0/1 used for 0x500)
1732 ++ * PECE = 0b111
1733 ++ * DPFD = 4
1734 ++ * HDICE = 0
1735 ++ * VC = 0b100 (VPM0=1, VPM1=0, ISL=0)
1736 ++ * VRMASD = 0b10000 (L=1, LP=00)
1737 ++ *
1738 ++ * Other bits untouched for now
1739 ++ */
1740 ++static void init_LPCR_ISA206(u64 lpcr, u64 lpes)
1741 ++{
1742 ++ lpcr |= (0x10ull << LPCR_VRMASD_SH) & LPCR_VRMASD;
1743 ++ init_LPCR_ISA300(lpcr, lpes);
1744 ++}
1745 ++
1746 ++static void init_FSCR(void)
1747 ++{
1748 ++ u64 fscr;
1749 ++
1750 ++ fscr = mfspr(SPRN_FSCR);
1751 ++ fscr |= FSCR_TAR|FSCR_EBB;
1752 ++ mtspr(SPRN_FSCR, fscr);
1753 ++}
1754 ++
1755 ++static void init_FSCR_power9(void)
1756 ++{
1757 ++ u64 fscr;
1758 ++
1759 ++ fscr = mfspr(SPRN_FSCR);
1760 ++ fscr |= FSCR_SCV;
1761 ++ mtspr(SPRN_FSCR, fscr);
1762 ++ init_FSCR();
1763 ++}
1764 ++
1765 ++static void init_FSCR_power10(void)
1766 ++{
1767 ++ u64 fscr;
1768 ++
1769 ++ fscr = mfspr(SPRN_FSCR);
1770 ++ fscr |= FSCR_PREFIX;
1771 ++ mtspr(SPRN_FSCR, fscr);
1772 ++ init_FSCR_power9();
1773 ++}
1774 ++
1775 ++static void init_HFSCR(void)
1776 ++{
1777 ++ u64 hfscr;
1778 ++
1779 ++ hfscr = mfspr(SPRN_HFSCR);
1780 ++ hfscr |= HFSCR_TAR|HFSCR_TM|HFSCR_BHRB|HFSCR_PM|HFSCR_DSCR|\
1781 ++ HFSCR_VECVSX|HFSCR_FP|HFSCR_EBB|HFSCR_MSGP;
1782 ++ mtspr(SPRN_HFSCR, hfscr);
1783 ++}
1784 ++
1785 ++static void init_PMU_HV(void)
1786 ++{
1787 ++ mtspr(SPRN_MMCRC, 0);
1788 ++}
1789 ++
1790 ++static void init_PMU_HV_ISA207(void)
1791 ++{
1792 ++ mtspr(SPRN_MMCRH, 0);
1793 ++}
1794 ++
1795 ++static void init_PMU(void)
1796 ++{
1797 ++ mtspr(SPRN_MMCRA, 0);
1798 ++ mtspr(SPRN_MMCR0, 0);
1799 ++ mtspr(SPRN_MMCR1, 0);
1800 ++ mtspr(SPRN_MMCR2, 0);
1801 ++}
1802 ++
1803 ++static void init_PMU_ISA207(void)
1804 ++{
1805 ++ mtspr(SPRN_MMCRS, 0);
1806 ++}
1807 ++
1808 ++static void init_PMU_ISA31(void)
1809 ++{
1810 ++ mtspr(SPRN_MMCR3, 0);
1811 ++ mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
1812 ++ mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
1813 ++}
1814 ++
1815 ++/*
1816 ++ * Note that we can be called twice of pseudo-PVRs.
1817 ++ * The parameter offset is not used.
1818 ++ */
1819 ++
1820 ++void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t)
1821 ++{
1822 ++ if (!init_hvmode_206(t))
1823 ++ return;
1824 ++
1825 ++ mtspr(SPRN_LPID, 0);
1826 ++ mtspr(SPRN_PCR, PCR_MASK);
1827 ++ init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
1828 ++}
1829 ++
1830 ++void __restore_cpu_power7(void)
1831 ++{
1832 ++ u64 msr;
1833 ++
1834 ++ msr = mfmsr();
1835 ++ if (!(msr & MSR_HV))
1836 ++ return;
1837 ++
1838 ++ mtspr(SPRN_LPID, 0);
1839 ++ mtspr(SPRN_PCR, PCR_MASK);
1840 ++ init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
1841 ++}
1842 ++
1843 ++void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t)
1844 ++{
1845 ++ init_FSCR();
1846 ++ init_PMU();
1847 ++ init_PMU_ISA207();
1848 ++
1849 ++ if (!init_hvmode_206(t))
1850 ++ return;
1851 ++
1852 ++ mtspr(SPRN_LPID, 0);
1853 ++ mtspr(SPRN_PCR, PCR_MASK);
1854 ++ init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
1855 ++ init_HFSCR();
1856 ++ init_PMU_HV();
1857 ++ init_PMU_HV_ISA207();
1858 ++}
1859 ++
1860 ++void __restore_cpu_power8(void)
1861 ++{
1862 ++ u64 msr;
1863 ++
1864 ++ init_FSCR();
1865 ++ init_PMU();
1866 ++ init_PMU_ISA207();
1867 ++
1868 ++ msr = mfmsr();
1869 ++ if (!(msr & MSR_HV))
1870 ++ return;
1871 ++
1872 ++ mtspr(SPRN_LPID, 0);
1873 ++ mtspr(SPRN_PCR, PCR_MASK);
1874 ++ init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
1875 ++ init_HFSCR();
1876 ++ init_PMU_HV();
1877 ++ init_PMU_HV_ISA207();
1878 ++}
1879 ++
1880 ++void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t)
1881 ++{
1882 ++ init_FSCR_power9();
1883 ++ init_PMU();
1884 ++
1885 ++ if (!init_hvmode_206(t))
1886 ++ return;
1887 ++
1888 ++ mtspr(SPRN_PSSCR, 0);
1889 ++ mtspr(SPRN_LPID, 0);
1890 ++ mtspr(SPRN_PID, 0);
1891 ++ mtspr(SPRN_PCR, PCR_MASK);
1892 ++ init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
1893 ++ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
1894 ++ init_HFSCR();
1895 ++ init_PMU_HV();
1896 ++}
1897 ++
1898 ++void __restore_cpu_power9(void)
1899 ++{
1900 ++ u64 msr;
1901 ++
1902 ++ init_FSCR_power9();
1903 ++ init_PMU();
1904 ++
1905 ++ msr = mfmsr();
1906 ++ if (!(msr & MSR_HV))
1907 ++ return;
1908 ++
1909 ++ mtspr(SPRN_PSSCR, 0);
1910 ++ mtspr(SPRN_LPID, 0);
1911 ++ mtspr(SPRN_PID, 0);
1912 ++ mtspr(SPRN_PCR, PCR_MASK);
1913 ++ init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
1914 ++ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
1915 ++ init_HFSCR();
1916 ++ init_PMU_HV();
1917 ++}
1918 ++
1919 ++void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t)
1920 ++{
1921 ++ init_FSCR_power10();
1922 ++ init_PMU();
1923 ++ init_PMU_ISA31();
1924 ++
1925 ++ if (!init_hvmode_206(t))
1926 ++ return;
1927 ++
1928 ++ mtspr(SPRN_PSSCR, 0);
1929 ++ mtspr(SPRN_LPID, 0);
1930 ++ mtspr(SPRN_PID, 0);
1931 ++ mtspr(SPRN_PCR, PCR_MASK);
1932 ++ init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
1933 ++ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
1934 ++ init_HFSCR();
1935 ++ init_PMU_HV();
1936 ++}
1937 ++
1938 ++void __restore_cpu_power10(void)
1939 ++{
1940 ++ u64 msr;
1941 ++
1942 ++ init_FSCR_power10();
1943 ++ init_PMU();
1944 ++ init_PMU_ISA31();
1945 ++
1946 ++ msr = mfmsr();
1947 ++ if (!(msr & MSR_HV))
1948 ++ return;
1949 ++
1950 ++ mtspr(SPRN_PSSCR, 0);
1951 ++ mtspr(SPRN_LPID, 0);
1952 ++ mtspr(SPRN_PID, 0);
1953 ++ mtspr(SPRN_PCR, PCR_MASK);
1954 ++ init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
1955 ++ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
1956 ++ init_HFSCR();
1957 ++ init_PMU_HV();
1958 ++}
1959 +diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
1960 +index 29de58d4dfb76..8fdb40ee86d11 100644
1961 +--- a/arch/powerpc/kernel/cputable.c
1962 ++++ b/arch/powerpc/kernel/cputable.c
1963 +@@ -60,19 +60,15 @@ extern void __setup_cpu_7410(unsigned long offset, struct cpu_spec* spec);
1964 + extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
1965 + #endif /* CONFIG_PPC32 */
1966 + #ifdef CONFIG_PPC64
1967 ++#include <asm/cpu_setup_power.h>
1968 + extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
1969 + extern void __setup_cpu_ppc970MP(unsigned long offset, struct cpu_spec* spec);
1970 + extern void __setup_cpu_pa6t(unsigned long offset, struct cpu_spec* spec);
1971 + extern void __restore_cpu_pa6t(void);
1972 + extern void __restore_cpu_ppc970(void);
1973 +-extern void __setup_cpu_power7(unsigned long offset, struct cpu_spec* spec);
1974 +-extern void __restore_cpu_power7(void);
1975 +-extern void __setup_cpu_power8(unsigned long offset, struct cpu_spec* spec);
1976 +-extern void __restore_cpu_power8(void);
1977 +-extern void __setup_cpu_power9(unsigned long offset, struct cpu_spec* spec);
1978 +-extern void __restore_cpu_power9(void);
1979 +-extern void __setup_cpu_power10(unsigned long offset, struct cpu_spec* spec);
1980 +-extern void __restore_cpu_power10(void);
1981 ++extern long __machine_check_early_realmode_p7(struct pt_regs *regs);
1982 ++extern long __machine_check_early_realmode_p8(struct pt_regs *regs);
1983 ++extern long __machine_check_early_realmode_p9(struct pt_regs *regs);
1984 + #endif /* CONFIG_PPC64 */
1985 + #if defined(CONFIG_E500)
1986 + extern void __setup_cpu_e5500(unsigned long offset, struct cpu_spec* spec);
1987 +diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
1988 +index 1098863e17ee8..9d079659b24d3 100644
1989 +--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
1990 ++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
1991 +@@ -454,6 +454,7 @@ static void init_pmu_power10(void)
1992 +
1993 + mtspr(SPRN_MMCR3, 0);
1994 + mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
1995 ++ mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
1996 + }
1997 +
1998 + static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f)
1999 +diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
2000 +index eddf362caedce..c3bb800dc4352 100644
2001 +--- a/arch/powerpc/kernel/fadump.c
2002 ++++ b/arch/powerpc/kernel/fadump.c
2003 +@@ -1641,6 +1641,14 @@ int __init setup_fadump(void)
2004 + else if (fw_dump.reserve_dump_area_size)
2005 + fw_dump.ops->fadump_init_mem_struct(&fw_dump);
2006 +
2007 ++ /*
2008 ++ * In case of panic, fadump is triggered via ppc_panic_event()
2009 ++ * panic notifier. Setting crash_kexec_post_notifiers to 'true'
2010 ++ * lets panic() function take crash friendly path before panic
2011 ++ * notifiers are invoked.
2012 ++ */
2013 ++ crash_kexec_post_notifiers = true;
2014 ++
2015 + return 1;
2016 + }
2017 + subsys_initcall(setup_fadump);
2018 +diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
2019 +index a1ae00689e0f4..aeb9bc9958749 100644
2020 +--- a/arch/powerpc/kernel/head_40x.S
2021 ++++ b/arch/powerpc/kernel/head_40x.S
2022 +@@ -27,6 +27,7 @@
2023 +
2024 + #include <linux/init.h>
2025 + #include <linux/pgtable.h>
2026 ++#include <linux/sizes.h>
2027 + #include <asm/processor.h>
2028 + #include <asm/page.h>
2029 + #include <asm/mmu.h>
2030 +@@ -626,7 +627,7 @@ start_here:
2031 + b . /* prevent prefetch past rfi */
2032 +
2033 + /* Set up the initial MMU state so we can do the first level of
2034 +- * kernel initialization. This maps the first 16 MBytes of memory 1:1
2035 ++ * kernel initialization. This maps the first 32 MBytes of memory 1:1
2036 + * virtual to physical and more importantly sets the cache mode.
2037 + */
2038 + initial_mmu:
2039 +@@ -663,6 +664,12 @@ initial_mmu:
2040 + tlbwe r4,r0,TLB_DATA /* Load the data portion of the entry */
2041 + tlbwe r3,r0,TLB_TAG /* Load the tag portion of the entry */
2042 +
2043 ++ li r0,62 /* TLB slot 62 */
2044 ++ addis r4,r4,SZ_16M@h
2045 ++ addis r3,r3,SZ_16M@h
2046 ++ tlbwe r4,r0,TLB_DATA /* Load the data portion of the entry */
2047 ++ tlbwe r3,r0,TLB_TAG /* Load the tag portion of the entry */
2048 ++
2049 + isync
2050 +
2051 + /* Establish the exception vector base
2052 +diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
2053 +index 7e337c570ea6b..9e71c0739f08d 100644
2054 +--- a/arch/powerpc/kernel/prom_init.c
2055 ++++ b/arch/powerpc/kernel/prom_init.c
2056 +@@ -2956,7 +2956,7 @@ static void __init fixup_device_tree_efika_add_phy(void)
2057 +
2058 + /* Check if the phy-handle property exists - bail if it does */
2059 + rv = prom_getprop(node, "phy-handle", prop, sizeof(prop));
2060 +- if (!rv)
2061 ++ if (rv <= 0)
2062 + return;
2063 +
2064 + /*
2065 +diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
2066 +index 452cbf98bfd71..cf99f57aed822 100644
2067 +--- a/arch/powerpc/kernel/smp.c
2068 ++++ b/arch/powerpc/kernel/smp.c
2069 +@@ -60,6 +60,7 @@
2070 + #include <asm/cpu_has_feature.h>
2071 + #include <asm/ftrace.h>
2072 + #include <asm/kup.h>
2073 ++#include <asm/fadump.h>
2074 +
2075 + #ifdef DEBUG
2076 + #include <asm/udbg.h>
2077 +@@ -594,6 +595,45 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
2078 + }
2079 + #endif
2080 +
2081 ++#ifdef CONFIG_NMI_IPI
2082 ++static void crash_stop_this_cpu(struct pt_regs *regs)
2083 ++#else
2084 ++static void crash_stop_this_cpu(void *dummy)
2085 ++#endif
2086 ++{
2087 ++ /*
2088 ++ * Just busy wait here and avoid marking CPU as offline to ensure
2089 ++ * register data is captured appropriately.
2090 ++ */
2091 ++ while (1)
2092 ++ cpu_relax();
2093 ++}
2094 ++
2095 ++void crash_smp_send_stop(void)
2096 ++{
2097 ++ static bool stopped = false;
2098 ++
2099 ++ /*
2100 ++ * In case of fadump, register data for all CPUs is captured by f/w
2101 ++ * on ibm,os-term rtas call. Skip IPI callbacks to other CPUs before
2102 ++ * this rtas call to avoid tricky post processing of those CPUs'
2103 ++ * backtraces.
2104 ++ */
2105 ++ if (should_fadump_crash())
2106 ++ return;
2107 ++
2108 ++ if (stopped)
2109 ++ return;
2110 ++
2111 ++ stopped = true;
2112 ++
2113 ++#ifdef CONFIG_NMI_IPI
2114 ++ smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, crash_stop_this_cpu, 1000000);
2115 ++#else
2116 ++ smp_call_function(crash_stop_this_cpu, NULL, 0);
2117 ++#endif /* CONFIG_NMI_IPI */
2118 ++}
2119 ++
2120 + #ifdef CONFIG_NMI_IPI
2121 + static void nmi_stop_this_cpu(struct pt_regs *regs)
2122 + {
2123 +@@ -1488,10 +1528,12 @@ void start_secondary(void *unused)
2124 + BUG();
2125 + }
2126 +
2127 ++#ifdef CONFIG_PROFILING
2128 + int setup_profiling_timer(unsigned int multiplier)
2129 + {
2130 + return 0;
2131 + }
2132 ++#endif
2133 +
2134 + static void fixup_topology(void)
2135 + {
2136 +diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
2137 +index 77dffea3d5373..069d451240fa4 100644
2138 +--- a/arch/powerpc/kernel/traps.c
2139 ++++ b/arch/powerpc/kernel/traps.c
2140 +@@ -1922,11 +1922,40 @@ void vsx_unavailable_tm(struct pt_regs *regs)
2141 + }
2142 + #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
2143 +
2144 +-void performance_monitor_exception(struct pt_regs *regs)
2145 ++static void performance_monitor_exception_nmi(struct pt_regs *regs)
2146 ++{
2147 ++ nmi_enter();
2148 ++
2149 ++ __this_cpu_inc(irq_stat.pmu_irqs);
2150 ++
2151 ++ perf_irq(regs);
2152 ++
2153 ++ nmi_exit();
2154 ++}
2155 ++
2156 ++static void performance_monitor_exception_async(struct pt_regs *regs)
2157 + {
2158 ++ irq_enter();
2159 ++
2160 + __this_cpu_inc(irq_stat.pmu_irqs);
2161 +
2162 + perf_irq(regs);
2163 ++
2164 ++ irq_exit();
2165 ++}
2166 ++
2167 ++void performance_monitor_exception(struct pt_regs *regs)
2168 ++{
2169 ++ /*
2170 ++ * On 64-bit, if perf interrupts hit in a local_irq_disable
2171 ++ * (soft-masked) region, we consider them as NMIs. This is required to
2172 ++ * prevent hash faults on user addresses when reading callchains (and
2173 ++ * looks better from an irq tracing perspective).
2174 ++ */
2175 ++ if (IS_ENABLED(CONFIG_PPC64) && unlikely(arch_irq_disabled_regs(regs)))
2176 ++ performance_monitor_exception_nmi(regs);
2177 ++ else
2178 ++ performance_monitor_exception_async(regs);
2179 + }
2180 +
2181 + #ifdef CONFIG_PPC_ADV_DEBUG_REGS
2182 +diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
2183 +index af3c15a1d41eb..75b2a6c4db5a5 100644
2184 +--- a/arch/powerpc/kernel/watchdog.c
2185 ++++ b/arch/powerpc/kernel/watchdog.c
2186 +@@ -132,6 +132,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
2187 + {
2188 + cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
2189 + cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
2190 ++ /*
2191 ++ * See wd_smp_clear_cpu_pending()
2192 ++ */
2193 ++ smp_mb();
2194 + if (cpumask_empty(&wd_smp_cpus_pending)) {
2195 + wd_smp_last_reset_tb = tb;
2196 + cpumask_andnot(&wd_smp_cpus_pending,
2197 +@@ -217,13 +221,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
2198 +
2199 + cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
2200 + wd_smp_unlock(&flags);
2201 ++ } else {
2202 ++ /*
2203 ++ * The last CPU to clear pending should have reset the
2204 ++ * watchdog so we generally should not find it empty
2205 ++ * here if our CPU was clear. However it could happen
2206 ++ * due to a rare race with another CPU taking the
2207 ++ * last CPU out of the mask concurrently.
2208 ++ *
2209 ++ * We can't add a warning for it. But just in case
2210 ++ * there is a problem with the watchdog that is causing
2211 ++ * the mask to not be reset, try to kick it along here.
2212 ++ */
2213 ++ if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
2214 ++ goto none_pending;
2215 + }
2216 + return;
2217 + }
2218 ++
2219 + cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
2220 ++
2221 ++ /*
2222 ++ * Order the store to clear pending with the load(s) to check all
2223 ++ * words in the pending mask to check they are all empty. This orders
2224 ++ * with the same barrier on another CPU. This prevents two CPUs
2225 ++ * clearing the last 2 pending bits, but neither seeing the other's
2226 ++ * store when checking if the mask is empty, and missing an empty
2227 ++ * mask, which ends with a false positive.
2228 ++ */
2229 ++ smp_mb();
2230 + if (cpumask_empty(&wd_smp_cpus_pending)) {
2231 + unsigned long flags;
2232 +
2233 ++none_pending:
2234 ++ /*
2235 ++ * Double check under lock because more than one CPU could see
2236 ++ * a clear mask with the lockless check after clearing their
2237 ++ * pending bits.
2238 ++ */
2239 + wd_smp_lock(&flags);
2240 + if (cpumask_empty(&wd_smp_cpus_pending)) {
2241 + wd_smp_last_reset_tb = tb;
2242 +@@ -314,8 +349,12 @@ void arch_touch_nmi_watchdog(void)
2243 + {
2244 + unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
2245 + int cpu = smp_processor_id();
2246 +- u64 tb = get_tb();
2247 ++ u64 tb;
2248 +
2249 ++ if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
2250 ++ return;
2251 ++
2252 ++ tb = get_tb();
2253 + if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
2254 + per_cpu(wd_timer_tb, cpu) = tb;
2255 + wd_smp_clear_cpu_pending(cpu, tb);
2256 +diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
2257 +index 175967a195c44..527c205d5a5f5 100644
2258 +--- a/arch/powerpc/kvm/book3s_hv.c
2259 ++++ b/arch/powerpc/kvm/book3s_hv.c
2260 +@@ -4557,8 +4557,12 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
2261 + unsigned long npages = mem->memory_size >> PAGE_SHIFT;
2262 +
2263 + if (change == KVM_MR_CREATE) {
2264 +- slot->arch.rmap = vzalloc(array_size(npages,
2265 +- sizeof(*slot->arch.rmap)));
2266 ++ unsigned long size = array_size(npages, sizeof(*slot->arch.rmap));
2267 ++
2268 ++ if ((size >> PAGE_SHIFT) > totalram_pages())
2269 ++ return -ENOMEM;
2270 ++
2271 ++ slot->arch.rmap = vzalloc(size);
2272 + if (!slot->arch.rmap)
2273 + return -ENOMEM;
2274 + }
2275 +diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
2276 +index a5f1ae892ba68..d0b6c8c16c48a 100644
2277 +--- a/arch/powerpc/kvm/book3s_hv_nested.c
2278 ++++ b/arch/powerpc/kvm/book3s_hv_nested.c
2279 +@@ -510,7 +510,7 @@ long kvmhv_copy_tofrom_guest_nested(struct kvm_vcpu *vcpu)
2280 + if (eaddr & (0xFFFUL << 52))
2281 + return H_PARAMETER;
2282 +
2283 +- buf = kzalloc(n, GFP_KERNEL);
2284 ++ buf = kzalloc(n, GFP_KERNEL | __GFP_NOWARN);
2285 + if (!buf)
2286 + return H_NO_MEM;
2287 +
2288 +diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
2289 +index 1d5eec847b883..295959487b76d 100644
2290 +--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
2291 ++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
2292 +@@ -1152,7 +1152,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
2293 +
2294 + int pud_clear_huge(pud_t *pud)
2295 + {
2296 +- if (pud_huge(*pud)) {
2297 ++ if (pud_is_leaf(*pud)) {
2298 + pud_clear(pud);
2299 + return 1;
2300 + }
2301 +@@ -1199,7 +1199,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
2302 +
2303 + int pmd_clear_huge(pmd_t *pmd)
2304 + {
2305 +- if (pmd_huge(*pmd)) {
2306 ++ if (pmd_is_leaf(*pmd)) {
2307 + pmd_clear(pmd);
2308 + return 1;
2309 + }
2310 +diff --git a/arch/powerpc/mm/kasan/book3s_32.c b/arch/powerpc/mm/kasan/book3s_32.c
2311 +index 202bd260a0095..35b287b0a8da4 100644
2312 +--- a/arch/powerpc/mm/kasan/book3s_32.c
2313 ++++ b/arch/powerpc/mm/kasan/book3s_32.c
2314 +@@ -19,7 +19,8 @@ int __init kasan_init_region(void *start, size_t size)
2315 + block = memblock_alloc(k_size, k_size_base);
2316 +
2317 + if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
2318 +- int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);
2319 ++ int shift = ffs(k_size - k_size_base);
2320 ++ int k_size_more = shift ? 1 << (shift - 1) : 0;
2321 +
2322 + setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
2323 + if (k_size_more >= SZ_128K)
2324 +diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
2325 +index cc6e2f94517fc..aefc2bfdf1049 100644
2326 +--- a/arch/powerpc/mm/pgtable_64.c
2327 ++++ b/arch/powerpc/mm/pgtable_64.c
2328 +@@ -102,7 +102,8 @@ EXPORT_SYMBOL(__pte_frag_size_shift);
2329 + struct page *p4d_page(p4d_t p4d)
2330 + {
2331 + if (p4d_is_leaf(p4d)) {
2332 +- VM_WARN_ON(!p4d_huge(p4d));
2333 ++ if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
2334 ++ VM_WARN_ON(!p4d_huge(p4d));
2335 + return pte_page(p4d_pte(p4d));
2336 + }
2337 + return virt_to_page(p4d_page_vaddr(p4d));
2338 +@@ -112,7 +113,8 @@ struct page *p4d_page(p4d_t p4d)
2339 + struct page *pud_page(pud_t pud)
2340 + {
2341 + if (pud_is_leaf(pud)) {
2342 +- VM_WARN_ON(!pud_huge(pud));
2343 ++ if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
2344 ++ VM_WARN_ON(!pud_huge(pud));
2345 + return pte_page(pud_pte(pud));
2346 + }
2347 + return virt_to_page(pud_page_vaddr(pud));
2348 +@@ -125,7 +127,13 @@ struct page *pud_page(pud_t pud)
2349 + struct page *pmd_page(pmd_t pmd)
2350 + {
2351 + if (pmd_is_leaf(pmd)) {
2352 +- VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
2353 ++ /*
2354 ++ * vmalloc_to_page may be called on any vmap address (not only
2355 ++ * vmalloc), and it uses pmd_page() etc., when huge vmap is
2356 ++ * enabled so these checks can't be used.
2357 ++ */
2358 ++ if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
2359 ++ VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
2360 + return pte_page(pmd_pte(pmd));
2361 + }
2362 + return virt_to_page(pmd_page_vaddr(pmd));
2363 +diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
2364 +index 91452313489f1..bd34e062bd290 100644
2365 +--- a/arch/powerpc/perf/core-book3s.c
2366 ++++ b/arch/powerpc/perf/core-book3s.c
2367 +@@ -95,6 +95,7 @@ static unsigned int freeze_events_kernel = MMCR0_FCS;
2368 + #define SPRN_SIER3 0
2369 + #define MMCRA_SAMPLE_ENABLE 0
2370 + #define MMCRA_BHRB_DISABLE 0
2371 ++#define MMCR0_PMCCEXT 0
2372 +
2373 + static inline unsigned long perf_ip_adjust(struct pt_regs *regs)
2374 + {
2375 +@@ -109,10 +110,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
2376 + {
2377 + regs->result = 0;
2378 + }
2379 +-static inline int perf_intr_is_nmi(struct pt_regs *regs)
2380 +-{
2381 +- return 0;
2382 +-}
2383 +
2384 + static inline int siar_valid(struct pt_regs *regs)
2385 + {
2386 +@@ -331,15 +328,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
2387 + regs->result = use_siar;
2388 + }
2389 +
2390 +-/*
2391 +- * If interrupts were soft-disabled when a PMU interrupt occurs, treat
2392 +- * it as an NMI.
2393 +- */
2394 +-static inline int perf_intr_is_nmi(struct pt_regs *regs)
2395 +-{
2396 +- return (regs->softe & IRQS_DISABLED);
2397 +-}
2398 +-
2399 + /*
2400 + * On processors like P7+ that have the SIAR-Valid bit, marked instructions
2401 + * must be sampled only if the SIAR-valid bit is set.
2402 +@@ -817,6 +805,19 @@ static void write_pmc(int idx, unsigned long val)
2403 + }
2404 + }
2405 +
2406 ++static int any_pmc_overflown(struct cpu_hw_events *cpuhw)
2407 ++{
2408 ++ int i, idx;
2409 ++
2410 ++ for (i = 0; i < cpuhw->n_events; i++) {
2411 ++ idx = cpuhw->event[i]->hw.idx;
2412 ++ if ((idx) && ((int)read_pmc(idx) < 0))
2413 ++ return idx;
2414 ++ }
2415 ++
2416 ++ return 0;
2417 ++}
2418 ++
2419 + /* Called from sysrq_handle_showregs() */
2420 + void perf_event_print_debug(void)
2421 + {
2422 +@@ -1240,11 +1241,16 @@ static void power_pmu_disable(struct pmu *pmu)
2423 +
2424 + /*
2425 + * Set the 'freeze counters' bit, clear EBE/BHRBA/PMCC/PMAO/FC56
2426 ++ * Also clear PMXE to disable PMI's getting triggered in some
2427 ++ * corner cases during PMU disable.
2428 + */
2429 + val = mmcr0 = mfspr(SPRN_MMCR0);
2430 + val |= MMCR0_FC;
2431 + val &= ~(MMCR0_EBE | MMCR0_BHRBA | MMCR0_PMCC | MMCR0_PMAO |
2432 +- MMCR0_FC56);
2433 ++ MMCR0_PMXE | MMCR0_FC56);
2434 ++ /* Set mmcr0 PMCCEXT for p10 */
2435 ++ if (ppmu->flags & PPMU_ARCH_31)
2436 ++ val |= MMCR0_PMCCEXT;
2437 +
2438 + /*
2439 + * The barrier is to make sure the mtspr has been
2440 +@@ -1255,6 +1261,23 @@ static void power_pmu_disable(struct pmu *pmu)
2441 + mb();
2442 + isync();
2443 +
2444 ++ /*
2445 ++ * Some corner cases could clear the PMU counter overflow
2446 ++ * while a masked PMI is pending. One such case is when
2447 ++ * a PMI happens during interrupt replay and perf counter
2448 ++ * values are cleared by PMU callbacks before replay.
2449 ++ *
2450 ++ * If any PMC corresponding to the active PMU events are
2451 ++ * overflown, disable the interrupt by clearing the paca
2452 ++ * bit for PMI since we are disabling the PMU now.
2453 ++ * Otherwise provide a warning if there is PMI pending, but
2454 ++ * no counter is found overflown.
2455 ++ */
2456 ++ if (any_pmc_overflown(cpuhw))
2457 ++ clear_pmi_irq_pending();
2458 ++ else
2459 ++ WARN_ON(pmi_irq_pending());
2460 ++
2461 + val = mmcra = cpuhw->mmcr.mmcra;
2462 +
2463 + /*
2464 +@@ -1346,6 +1369,15 @@ static void power_pmu_enable(struct pmu *pmu)
2465 + * (possibly updated for removal of events).
2466 + */
2467 + if (!cpuhw->n_added) {
2468 ++ /*
2469 ++ * If there is any active event with an overflown PMC
2470 ++ * value, set back PACA_IRQ_PMI which would have been
2471 ++ * cleared in power_pmu_disable().
2472 ++ */
2473 ++ hard_irq_disable();
2474 ++ if (any_pmc_overflown(cpuhw))
2475 ++ set_pmi_irq_pending();
2476 ++
2477 + mtspr(SPRN_MMCRA, cpuhw->mmcr.mmcra & ~MMCRA_SAMPLE_ENABLE);
2478 + mtspr(SPRN_MMCR1, cpuhw->mmcr.mmcr1);
2479 + if (ppmu->flags & PPMU_ARCH_31)
2480 +@@ -2250,7 +2282,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
2481 + struct perf_event *event;
2482 + unsigned long val[8];
2483 + int found, active;
2484 +- int nmi;
2485 +
2486 + if (cpuhw->n_limited)
2487 + freeze_limited_counters(cpuhw, mfspr(SPRN_PMC5),
2488 +@@ -2258,18 +2289,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
2489 +
2490 + perf_read_regs(regs);
2491 +
2492 +- /*
2493 +- * If perf interrupts hit in a local_irq_disable (soft-masked) region,
2494 +- * we consider them as NMIs. This is required to prevent hash faults on
2495 +- * user addresses when reading callchains. See the NMI test in
2496 +- * do_hash_page.
2497 +- */
2498 +- nmi = perf_intr_is_nmi(regs);
2499 +- if (nmi)
2500 +- nmi_enter();
2501 +- else
2502 +- irq_enter();
2503 +-
2504 + /* Read all the PMCs since we'll need them a bunch of times */
2505 + for (i = 0; i < ppmu->n_counter; ++i)
2506 + val[i] = read_pmc(i + 1);
2507 +@@ -2296,6 +2315,14 @@ static void __perf_event_interrupt(struct pt_regs *regs)
2508 + break;
2509 + }
2510 + }
2511 ++
2512 ++ /*
2513 ++ * Clear PACA_IRQ_PMI in case it was set by
2514 ++ * set_pmi_irq_pending() when PMU was enabled
2515 ++ * after accounting for interrupts.
2516 ++ */
2517 ++ clear_pmi_irq_pending();
2518 ++
2519 + if (!active)
2520 + /* reset non active counters that have overflowed */
2521 + write_pmc(i + 1, 0);
2522 +@@ -2315,8 +2342,15 @@ static void __perf_event_interrupt(struct pt_regs *regs)
2523 + }
2524 + }
2525 + }
2526 +- if (!found && !nmi && printk_ratelimit())
2527 +- printk(KERN_WARNING "Can't find PMC that caused IRQ\n");
2528 ++
2529 ++ /*
2530 +	 * During system wide profiling or while specific CPU is monitored for an
2531 ++ * event, some corner cases could cause PMC to overflow in idle path. This
2532 ++ * will trigger a PMI after waking up from idle. Since counter values are _not_
2533 ++ * saved/restored in idle path, can lead to below "Can't find PMC" message.
2534 ++ */
2535 ++ if (unlikely(!found) && !arch_irq_disabled_regs(regs))
2536 ++ printk_ratelimited(KERN_WARNING "Can't find PMC that caused IRQ\n");
2537 +
2538 + /*
2539 + * Reset MMCR0 to its normal value. This will set PMXE and
2540 +@@ -2326,11 +2360,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
2541 + * we get back out of this interrupt.
2542 + */
2543 + write_mmcr0(cpuhw, cpuhw->mmcr.mmcr0);
2544 +-
2545 +- if (nmi)
2546 +- nmi_exit();
2547 +- else
2548 +- irq_exit();
2549 + }
2550 +
2551 + static void perf_event_interrupt(struct pt_regs *regs)
2552 +diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
2553 +index e0e7e276bfd25..ee721f420a7ba 100644
2554 +--- a/arch/powerpc/perf/core-fsl-emb.c
2555 ++++ b/arch/powerpc/perf/core-fsl-emb.c
2556 +@@ -31,19 +31,6 @@ static atomic_t num_events;
2557 + /* Used to avoid races in calling reserve/release_pmc_hardware */
2558 + static DEFINE_MUTEX(pmc_reserve_mutex);
2559 +
2560 +-/*
2561 +- * If interrupts were soft-disabled when a PMU interrupt occurs, treat
2562 +- * it as an NMI.
2563 +- */
2564 +-static inline int perf_intr_is_nmi(struct pt_regs *regs)
2565 +-{
2566 +-#ifdef __powerpc64__
2567 +- return (regs->softe & IRQS_DISABLED);
2568 +-#else
2569 +- return 0;
2570 +-#endif
2571 +-}
2572 +-
2573 + static void perf_event_interrupt(struct pt_regs *regs);
2574 +
2575 + /*
2576 +@@ -659,13 +646,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
2577 + struct perf_event *event;
2578 + unsigned long val;
2579 + int found = 0;
2580 +- int nmi;
2581 +-
2582 +- nmi = perf_intr_is_nmi(regs);
2583 +- if (nmi)
2584 +- nmi_enter();
2585 +- else
2586 +- irq_enter();
2587 +
2588 + for (i = 0; i < ppmu->n_counter; ++i) {
2589 + event = cpuhw->event[i];
2590 +@@ -690,11 +670,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
2591 + mtmsr(mfmsr() | MSR_PMM);
2592 + mtpmr(PMRN_PMGC0, PMGC0_PMIE | PMGC0_FCECE);
2593 + isync();
2594 +-
2595 +- if (nmi)
2596 +- nmi_exit();
2597 +- else
2598 +- irq_exit();
2599 + }
2600 +
2601 + void hw_perf_event_setup(int cpu)
2602 +diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
2603 +index 5e8eedda45d39..58448f0e47213 100644
2604 +--- a/arch/powerpc/perf/isa207-common.c
2605 ++++ b/arch/powerpc/perf/isa207-common.c
2606 +@@ -561,6 +561,14 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
2607 + if (!(pmc_inuse & 0x60))
2608 + mmcr->mmcr0 |= MMCR0_FC56;
2609 +
2610 ++ /*
2611 ++ * Set mmcr0 (PMCCEXT) for p10 which
2612 ++ * will restrict access to group B registers
2613 ++ * when MMCR0 PMCC=0b00.
2614 ++ */
2615 ++ if (cpu_has_feature(CPU_FTR_ARCH_31))
2616 ++ mmcr->mmcr0 |= MMCR0_PMCCEXT;
2617 ++
2618 + mmcr->mmcr1 = mmcr1;
2619 + mmcr->mmcra = mmcra;
2620 + mmcr->mmcr2 = mmcr2;
2621 +diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
2622 +index 2124831cf57c0..d04079b34d7c2 100644
2623 +--- a/arch/powerpc/platforms/cell/iommu.c
2624 ++++ b/arch/powerpc/platforms/cell/iommu.c
2625 +@@ -976,6 +976,7 @@ static int __init cell_iommu_fixed_mapping_init(void)
2626 + if (hbase < dbase || (hend > (dbase + dsize))) {
2627 + pr_debug("iommu: hash window doesn't fit in"
2628 + "real DMA window\n");
2629 ++ of_node_put(np);
2630 + return -1;
2631 + }
2632 + }
2633 +diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
2634 +index 9068edef71f78..59999902e4a6a 100644
2635 +--- a/arch/powerpc/platforms/cell/pervasive.c
2636 ++++ b/arch/powerpc/platforms/cell/pervasive.c
2637 +@@ -77,6 +77,7 @@ static int cbe_system_reset_exception(struct pt_regs *regs)
2638 + switch (regs->msr & SRR1_WAKEMASK) {
2639 + case SRR1_WAKEDEC:
2640 + set_dec(1);
2641 ++ break;
2642 + case SRR1_WAKEEE:
2643 + /*
2644 + * Handle these when interrupts get re-enabled and we take
2645 +diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
2646 +index a1b7f79a8a152..de10c13de15c6 100644
2647 +--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
2648 ++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
2649 +@@ -215,6 +215,7 @@ void hlwd_pic_probe(void)
2650 + irq_set_chained_handler(cascade_virq,
2651 + hlwd_pic_irq_cascade);
2652 + hlwd_irq_host = host;
2653 ++ of_node_put(np);
2654 + break;
2655 + }
2656 + }
2657 +diff --git a/arch/powerpc/platforms/powermac/low_i2c.c b/arch/powerpc/platforms/powermac/low_i2c.c
2658 +index f77a59b5c2e1a..df89d916236d9 100644
2659 +--- a/arch/powerpc/platforms/powermac/low_i2c.c
2660 ++++ b/arch/powerpc/platforms/powermac/low_i2c.c
2661 +@@ -582,6 +582,7 @@ static void __init kw_i2c_add(struct pmac_i2c_host_kw *host,
2662 + bus->close = kw_i2c_close;
2663 + bus->xfer = kw_i2c_xfer;
2664 + mutex_init(&bus->mutex);
2665 ++ lockdep_register_key(&bus->lock_key);
2666 + lockdep_set_class(&bus->mutex, &bus->lock_key);
2667 + if (controller == busnode)
2668 + bus->flags = pmac_i2c_multibus;
2669 +@@ -810,6 +811,7 @@ static void __init pmu_i2c_probe(void)
2670 + bus->hostdata = bus + 1;
2671 + bus->xfer = pmu_i2c_xfer;
2672 + mutex_init(&bus->mutex);
2673 ++ lockdep_register_key(&bus->lock_key);
2674 + lockdep_set_class(&bus->mutex, &bus->lock_key);
2675 + bus->flags = pmac_i2c_multibus;
2676 + list_add(&bus->link, &pmac_i2c_busses);
2677 +@@ -933,6 +935,7 @@ static void __init smu_i2c_probe(void)
2678 + bus->hostdata = bus + 1;
2679 + bus->xfer = smu_i2c_xfer;
2680 + mutex_init(&bus->mutex);
2681 ++ lockdep_register_key(&bus->lock_key);
2682 + lockdep_set_class(&bus->mutex, &bus->lock_key);
2683 + bus->flags = 0;
2684 + list_add(&bus->link, &pmac_i2c_busses);
2685 +diff --git a/arch/powerpc/platforms/powernv/opal-lpc.c b/arch/powerpc/platforms/powernv/opal-lpc.c
2686 +index 608569082ba0b..123a0e799b7bd 100644
2687 +--- a/arch/powerpc/platforms/powernv/opal-lpc.c
2688 ++++ b/arch/powerpc/platforms/powernv/opal-lpc.c
2689 +@@ -396,6 +396,7 @@ void __init opal_lpc_init(void)
2690 + if (!of_get_property(np, "primary", NULL))
2691 + continue;
2692 + opal_lpc_chip_id = of_get_ibm_chip_id(np);
2693 ++ of_node_put(np);
2694 + break;
2695 + }
2696 + if (opal_lpc_chip_id < 0)
2697 +diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
2698 +index 1e3674d7ea7bc..b57eeaff7bb33 100644
2699 +--- a/arch/powerpc/sysdev/xive/spapr.c
2700 ++++ b/arch/powerpc/sysdev/xive/spapr.c
2701 +@@ -658,6 +658,9 @@ static int xive_spapr_debug_show(struct seq_file *m, void *private)
2702 + struct xive_irq_bitmap *xibm;
2703 + char *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
2704 +
2705 ++ if (!buf)
2706 ++ return -ENOMEM;
2707 ++
2708 + list_for_each_entry(xibm, &xive_irq_bitmaps, list) {
2709 + memset(buf, 0, PAGE_SIZE);
2710 + bitmap_print_to_pagebuf(true, buf, xibm->bitmap, xibm->count);
2711 +diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
2712 +index 11d2c8395e2ae..6d99b1be0082f 100644
2713 +--- a/arch/s390/mm/pgalloc.c
2714 ++++ b/arch/s390/mm/pgalloc.c
2715 +@@ -253,13 +253,15 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
2716 + /* Free 2K page table fragment of a 4K page */
2717 + bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
2718 + spin_lock_bh(&mm->context.lock);
2719 +- mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
2720 ++ mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
2721 + mask >>= 24;
2722 + if (mask & 3)
2723 + list_add(&page->lru, &mm->context.pgtable_list);
2724 + else
2725 + list_del(&page->lru);
2726 + spin_unlock_bh(&mm->context.lock);
2727 ++ mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
2728 ++ mask >>= 24;
2729 + if (mask != 0)
2730 + return;
2731 + } else {
2732 +diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
2733 +index d11b3d41c3785..d5d768188b3ba 100644
2734 +--- a/arch/um/drivers/virtio_uml.c
2735 ++++ b/arch/um/drivers/virtio_uml.c
2736 +@@ -1076,6 +1076,8 @@ static void virtio_uml_release_dev(struct device *d)
2737 + container_of(d, struct virtio_device, dev);
2738 + struct virtio_uml_device *vu_dev = to_virtio_uml_device(vdev);
2739 +
2740 ++ time_travel_propagate_time();
2741 ++
2742 + /* might not have been opened due to not negotiating the feature */
2743 + if (vu_dev->req_fd >= 0) {
2744 + um_free_irq(VIRTIO_IRQ, vu_dev);
2745 +@@ -1109,6 +1111,8 @@ static int virtio_uml_probe(struct platform_device *pdev)
2746 + vu_dev->pdev = pdev;
2747 + vu_dev->req_fd = -1;
2748 +
2749 ++ time_travel_propagate_time();
2750 ++
2751 + do {
2752 + rc = os_connect_socket(pdata->socket_path);
2753 + } while (rc == -EINTR);
2754 +diff --git a/arch/um/include/asm/delay.h b/arch/um/include/asm/delay.h
2755 +index 56fc2b8f2dd01..e79b2ab6f40c8 100644
2756 +--- a/arch/um/include/asm/delay.h
2757 ++++ b/arch/um/include/asm/delay.h
2758 +@@ -14,7 +14,7 @@ static inline void um_ndelay(unsigned long nsecs)
2759 + ndelay(nsecs);
2760 + }
2761 + #undef ndelay
2762 +-#define ndelay um_ndelay
2763 ++#define ndelay(n) um_ndelay(n)
2764 +
2765 + static inline void um_udelay(unsigned long usecs)
2766 + {
2767 +@@ -26,5 +26,5 @@ static inline void um_udelay(unsigned long usecs)
2768 + udelay(usecs);
2769 + }
2770 + #undef udelay
2771 +-#define udelay um_udelay
2772 ++#define udelay(n) um_udelay(n)
2773 + #endif /* __UM_DELAY_H */
2774 +diff --git a/arch/um/include/shared/registers.h b/arch/um/include/shared/registers.h
2775 +index 0c50fa6e8a55b..fbb709a222839 100644
2776 +--- a/arch/um/include/shared/registers.h
2777 ++++ b/arch/um/include/shared/registers.h
2778 +@@ -16,8 +16,8 @@ extern int restore_fp_registers(int pid, unsigned long *fp_regs);
2779 + extern int save_fpx_registers(int pid, unsigned long *fp_regs);
2780 + extern int restore_fpx_registers(int pid, unsigned long *fp_regs);
2781 + extern int save_registers(int pid, struct uml_pt_regs *regs);
2782 +-extern int restore_registers(int pid, struct uml_pt_regs *regs);
2783 +-extern int init_registers(int pid);
2784 ++extern int restore_pid_registers(int pid, struct uml_pt_regs *regs);
2785 ++extern int init_pid_registers(int pid);
2786 + extern void get_safe_registers(unsigned long *regs, unsigned long *fp_regs);
2787 + extern unsigned long get_thread_reg(int reg, jmp_buf *buf);
2788 + extern int get_fp_registers(int pid, unsigned long *regs);
2789 +diff --git a/arch/um/os-Linux/registers.c b/arch/um/os-Linux/registers.c
2790 +index 2d9270508e156..b123955be7acc 100644
2791 +--- a/arch/um/os-Linux/registers.c
2792 ++++ b/arch/um/os-Linux/registers.c
2793 +@@ -21,7 +21,7 @@ int save_registers(int pid, struct uml_pt_regs *regs)
2794 + return 0;
2795 + }
2796 +
2797 +-int restore_registers(int pid, struct uml_pt_regs *regs)
2798 ++int restore_pid_registers(int pid, struct uml_pt_regs *regs)
2799 + {
2800 + int err;
2801 +
2802 +@@ -36,7 +36,7 @@ int restore_registers(int pid, struct uml_pt_regs *regs)
2803 + static unsigned long exec_regs[MAX_REG_NR];
2804 + static unsigned long exec_fp_regs[FP_SIZE];
2805 +
2806 +-int init_registers(int pid)
2807 ++int init_pid_registers(int pid)
2808 + {
2809 + int err;
2810 +
2811 +diff --git a/arch/um/os-Linux/start_up.c b/arch/um/os-Linux/start_up.c
2812 +index f79dc338279e6..b28373a2b8d2d 100644
2813 +--- a/arch/um/os-Linux/start_up.c
2814 ++++ b/arch/um/os-Linux/start_up.c
2815 +@@ -336,7 +336,7 @@ void __init os_early_checks(void)
2816 + check_tmpexec();
2817 +
2818 + pid = start_ptraced_child();
2819 +- if (init_registers(pid))
2820 ++ if (init_pid_registers(pid))
2821 + fatal("Failed to initialize default registers");
2822 + stop_ptraced_child(pid, 1, 1);
2823 + }
2824 +diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
2825 +index 6004047d25fdd..bf91e0a36d77f 100644
2826 +--- a/arch/x86/boot/compressed/Makefile
2827 ++++ b/arch/x86/boot/compressed/Makefile
2828 +@@ -28,7 +28,11 @@ KCOV_INSTRUMENT := n
2829 + targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
2830 + vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
2831 +
2832 +-KBUILD_CFLAGS := -m$(BITS) -O2
2833 ++# CLANG_FLAGS must come before any cc-disable-warning or cc-option calls in
2834 ++# case of cross compiling, as it has the '--target=' flag, which is needed to
2835 ++# avoid errors with '-march=i386', and future flags may depend on the target to
2836 ++# be valid.
2837 ++KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
2838 + KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
2839 + KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
2840 + cflags-$(CONFIG_X86_32) := -march=i386
2841 +@@ -46,7 +50,6 @@ KBUILD_CFLAGS += -D__DISABLE_EXPORTS
2842 + # Disable relocation relaxation in case the link is not PIE.
2843 + KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
2844 + KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
2845 +-KBUILD_CFLAGS += $(CLANG_FLAGS)
2846 +
2847 + # sev-es.c indirectly inludes inat-table.h which is generated during
2848 + # compilation and stored in $(objtree). Add the directory to the includes so
2849 +diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
2850 +index 78210793d357c..38d7acb9610cc 100644
2851 +--- a/arch/x86/configs/i386_defconfig
2852 ++++ b/arch/x86/configs/i386_defconfig
2853 +@@ -264,3 +264,4 @@ CONFIG_BLK_DEV_IO_TRACE=y
2854 + CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
2855 + CONFIG_EARLY_PRINTK_DBGP=y
2856 + CONFIG_DEBUG_BOOT_PARAMS=y
2857 ++CONFIG_KALLSYMS_ALL=y
2858 +diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
2859 +index 9936528e19393..c6e587a9a6f85 100644
2860 +--- a/arch/x86/configs/x86_64_defconfig
2861 ++++ b/arch/x86/configs/x86_64_defconfig
2862 +@@ -260,3 +260,4 @@ CONFIG_BLK_DEV_IO_TRACE=y
2863 + CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
2864 + CONFIG_EARLY_PRINTK_DBGP=y
2865 + CONFIG_DEBUG_BOOT_PARAMS=y
2866 ++CONFIG_KALLSYMS_ALL=y
2867 +diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
2868 +index 5db5d083c8732..331474b150f16 100644
2869 +--- a/arch/x86/include/asm/realmode.h
2870 ++++ b/arch/x86/include/asm/realmode.h
2871 +@@ -89,6 +89,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
2872 + }
2873 +
2874 + void reserve_real_mode(void);
2875 ++void load_trampoline_pgtable(void);
2876 +
2877 + #endif /* __ASSEMBLY__ */
2878 +
2879 +diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
2880 +index 5c95d242f38d7..bb1430283c726 100644
2881 +--- a/arch/x86/include/asm/uaccess.h
2882 ++++ b/arch/x86/include/asm/uaccess.h
2883 +@@ -314,11 +314,12 @@ do { \
2884 + do { \
2885 + __chk_user_ptr(ptr); \
2886 + switch (size) { \
2887 +- unsigned char x_u8__; \
2888 +- case 1: \
2889 ++ case 1: { \
2890 ++ unsigned char x_u8__; \
2891 + __get_user_asm(x_u8__, ptr, "b", "=q", label); \
2892 + (x) = x_u8__; \
2893 + break; \
2894 ++ } \
2895 + case 2: \
2896 + __get_user_asm(x, ptr, "w", "=r", label); \
2897 + break; \
2898 +diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
2899 +index 14b34963eb1f7..5cf1a024408bf 100644
2900 +--- a/arch/x86/kernel/cpu/mce/core.c
2901 ++++ b/arch/x86/kernel/cpu/mce/core.c
2902 +@@ -295,11 +295,17 @@ static void wait_for_panic(void)
2903 + panic("Panicing machine check CPU died");
2904 + }
2905 +
2906 +-static void mce_panic(const char *msg, struct mce *final, char *exp)
2907 ++static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
2908 + {
2909 +- int apei_err = 0;
2910 + struct llist_node *pending;
2911 + struct mce_evt_llist *l;
2912 ++ int apei_err = 0;
2913 ++
2914 ++ /*
2915 ++ * Allow instrumentation around external facilities usage. Not that it
2916 ++ * matters a whole lot since the machine is going to panic anyway.
2917 ++ */
2918 ++ instrumentation_begin();
2919 +
2920 + if (!fake_panic) {
2921 + /*
2922 +@@ -314,7 +320,7 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
2923 + } else {
2924 + /* Don't log too much for fake panic */
2925 + if (atomic_inc_return(&mce_fake_panicked) > 1)
2926 +- return;
2927 ++ goto out;
2928 + }
2929 + pending = mce_gen_pool_prepare_records();
2930 + /* First print corrected ones that are still unlogged */
2931 +@@ -352,6 +358,9 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
2932 + panic(msg);
2933 + } else
2934 + pr_emerg(HW_ERR "Fake kernel panic: %s\n", msg);
2935 ++
2936 ++out:
2937 ++ instrumentation_end();
2938 + }
2939 +
2940 + /* Support code for software error injection */
2941 +@@ -682,7 +691,7 @@ static struct notifier_block mce_default_nb = {
2942 + /*
2943 + * Read ADDR and MISC registers.
2944 + */
2945 +-static void mce_read_aux(struct mce *m, int i)
2946 ++static noinstr void mce_read_aux(struct mce *m, int i)
2947 + {
2948 + if (m->status & MCI_STATUS_MISCV)
2949 + m->misc = mce_rdmsrl(msr_ops.misc(i));
2950 +@@ -1061,10 +1070,13 @@ static int mce_start(int *no_way_out)
2951 + * Synchronize between CPUs after main scanning loop.
2952 + * This invokes the bulk of the Monarch processing.
2953 + */
2954 +-static int mce_end(int order)
2955 ++static noinstr int mce_end(int order)
2956 + {
2957 +- int ret = -1;
2958 + u64 timeout = (u64)mca_cfg.monarch_timeout * NSEC_PER_USEC;
2959 ++ int ret = -1;
2960 ++
2961 ++ /* Allow instrumentation around external facilities. */
2962 ++ instrumentation_begin();
2963 +
2964 + if (!timeout)
2965 + goto reset;
2966 +@@ -1108,7 +1120,8 @@ static int mce_end(int order)
2967 + /*
2968 + * Don't reset anything. That's done by the Monarch.
2969 + */
2970 +- return 0;
2971 ++ ret = 0;
2972 ++ goto out;
2973 + }
2974 +
2975 + /*
2976 +@@ -1123,6 +1136,10 @@ reset:
2977 + * Let others run again.
2978 + */
2979 + atomic_set(&mce_executing, 0);
2980 ++
2981 ++out:
2982 ++ instrumentation_end();
2983 ++
2984 + return ret;
2985 + }
2986 +
2987 +@@ -1443,6 +1460,14 @@ noinstr void do_machine_check(struct pt_regs *regs)
2988 + if (worst != MCE_AR_SEVERITY && !kill_it)
2989 + goto out;
2990 +
2991 ++ /*
2992 ++ * Enable instrumentation around the external facilities like
2993 ++ * task_work_add() (via queue_task_work()), fixup_exception() etc.
2994 ++ * For now, that is. Fixing this properly would need a lot more involved
2995 ++ * reorganization.
2996 ++ */
2997 ++ instrumentation_begin();
2998 ++
2999 + /* Fault was in user mode and we need to take some action */
3000 + if ((m.cs & 3) == 3) {
3001 + /* If this triggers there is no way to recover. Die hard. */
3002 +@@ -1468,6 +1493,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
3003 + if (m.kflags & MCE_IN_KERNEL_COPYIN)
3004 + queue_task_work(&m, msg, kill_it);
3005 + }
3006 ++
3007 ++ instrumentation_end();
3008 ++
3009 + out:
3010 + mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
3011 + }
3012 +diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
3013 +index 3a44346f22766..e7808309d4710 100644
3014 +--- a/arch/x86/kernel/cpu/mce/inject.c
3015 ++++ b/arch/x86/kernel/cpu/mce/inject.c
3016 +@@ -347,7 +347,7 @@ static ssize_t flags_write(struct file *filp, const char __user *ubuf,
3017 + char buf[MAX_FLAG_OPT_SIZE], *__buf;
3018 + int err;
3019 +
3020 +- if (cnt > MAX_FLAG_OPT_SIZE)
3021 ++ if (!cnt || cnt > MAX_FLAG_OPT_SIZE)
3022 + return -EINVAL;
3023 +
3024 + if (copy_from_user(&buf, ubuf, cnt))
3025 +diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
3026 +index 0c6d1dc59fa21..8e27cbefaa4bf 100644
3027 +--- a/arch/x86/kernel/early-quirks.c
3028 ++++ b/arch/x86/kernel/early-quirks.c
3029 +@@ -515,6 +515,7 @@ static const struct intel_early_ops gen11_early_ops __initconst = {
3030 + .stolen_size = gen9_stolen_size,
3031 + };
3032 +
3033 ++/* Intel integrated GPUs for which we need to reserve "stolen memory" */
3034 + static const struct pci_device_id intel_early_ids[] __initconst = {
3035 + INTEL_I830_IDS(&i830_early_ops),
3036 + INTEL_I845G_IDS(&i845_early_ops),
3037 +@@ -588,6 +589,13 @@ static void __init intel_graphics_quirks(int num, int slot, int func)
3038 + u16 device;
3039 + int i;
3040 +
3041 ++ /*
3042 ++ * Reserve "stolen memory" for an integrated GPU. If we've already
3043 ++ * found one, there's nothing to do for other (discrete) GPUs.
3044 ++ */
3045 ++ if (resource_size(&intel_graphics_stolen_res))
3046 ++ return;
3047 ++
3048 + device = read_pci_config_16(num, slot, func, PCI_DEVICE_ID);
3049 +
3050 + for (i = 0; i < ARRAY_SIZE(intel_early_ids); i++) {
3051 +@@ -700,7 +708,7 @@ static struct chipset early_qrk[] __initdata = {
3052 + { PCI_VENDOR_ID_INTEL, 0x3406, PCI_CLASS_BRIDGE_HOST,
3053 + PCI_BASE_CLASS_BRIDGE, 0, intel_remapping_check },
3054 + { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, PCI_CLASS_DISPLAY_VGA, PCI_ANY_ID,
3055 +- QFLAG_APPLY_ONCE, intel_graphics_quirks },
3056 ++ 0, intel_graphics_quirks },
3057 + /*
3058 + * HPET on the current version of the Baytrail platform has accuracy
3059 + * problems: it will halt in deep idle state - so we disable it.
3060 +diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
3061 +index 798a6f73f8946..df3514835b356 100644
3062 +--- a/arch/x86/kernel/reboot.c
3063 ++++ b/arch/x86/kernel/reboot.c
3064 +@@ -113,17 +113,9 @@ void __noreturn machine_real_restart(unsigned int type)
3065 + spin_unlock(&rtc_lock);
3066 +
3067 + /*
3068 +- * Switch back to the initial page table.
3069 ++ * Switch to the trampoline page table.
3070 + */
3071 +-#ifdef CONFIG_X86_32
3072 +- load_cr3(initial_page_table);
3073 +-#else
3074 +- write_cr3(real_mode_header->trampoline_pgd);
3075 +-
3076 +- /* Exiting long mode will fail if CR4.PCIDE is set. */
3077 +- if (boot_cpu_has(X86_FEATURE_PCID))
3078 +- cr4_clear_bits(X86_CR4_PCIDE);
3079 +-#endif
3080 ++ load_trampoline_pgtable();
3081 +
3082 + /* Jump to the identity-mapped low memory code */
3083 + #ifdef CONFIG_X86_32
3084 +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
3085 +index f9f1b45e5ddc4..13d1a0ac8916a 100644
3086 +--- a/arch/x86/kernel/tsc.c
3087 ++++ b/arch/x86/kernel/tsc.c
3088 +@@ -1127,6 +1127,7 @@ static int tsc_cs_enable(struct clocksource *cs)
3089 + static struct clocksource clocksource_tsc_early = {
3090 + .name = "tsc-early",
3091 + .rating = 299,
3092 ++ .uncertainty_margin = 32 * NSEC_PER_MSEC,
3093 + .read = read_tsc,
3094 + .mask = CLOCKSOURCE_MASK(64),
3095 + .flags = CLOCK_SOURCE_IS_CONTINUOUS |
3096 +diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
3097 +index fbd9b10354790..5f8acd2faa7c1 100644
3098 +--- a/arch/x86/kvm/vmx/posted_intr.c
3099 ++++ b/arch/x86/kvm/vmx/posted_intr.c
3100 +@@ -15,7 +15,7 @@
3101 + * can find which vCPU should be waken up.
3102 + */
3103 + static DEFINE_PER_CPU(struct list_head, blocked_vcpu_on_cpu);
3104 +-static DEFINE_PER_CPU(spinlock_t, blocked_vcpu_on_cpu_lock);
3105 ++static DEFINE_PER_CPU(raw_spinlock_t, blocked_vcpu_on_cpu_lock);
3106 +
3107 + static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
3108 + {
3109 +@@ -121,9 +121,9 @@ static void __pi_post_block(struct kvm_vcpu *vcpu)
3110 + new.control) != old.control);
3111 +
3112 + if (!WARN_ON_ONCE(vcpu->pre_pcpu == -1)) {
3113 +- spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3114 ++ raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3115 + list_del(&vcpu->blocked_vcpu_list);
3116 +- spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3117 ++ raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3118 + vcpu->pre_pcpu = -1;
3119 + }
3120 + }
3121 +@@ -154,11 +154,11 @@ int pi_pre_block(struct kvm_vcpu *vcpu)
3122 + local_irq_disable();
3123 + if (!WARN_ON_ONCE(vcpu->pre_pcpu != -1)) {
3124 + vcpu->pre_pcpu = vcpu->cpu;
3125 +- spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3126 ++ raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3127 + list_add_tail(&vcpu->blocked_vcpu_list,
3128 + &per_cpu(blocked_vcpu_on_cpu,
3129 + vcpu->pre_pcpu));
3130 +- spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3131 ++ raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
3132 + }
3133 +
3134 + do {
3135 +@@ -215,7 +215,7 @@ void pi_wakeup_handler(void)
3136 + struct kvm_vcpu *vcpu;
3137 + int cpu = smp_processor_id();
3138 +
3139 +- spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
3140 ++ raw_spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
3141 + list_for_each_entry(vcpu, &per_cpu(blocked_vcpu_on_cpu, cpu),
3142 + blocked_vcpu_list) {
3143 + struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
3144 +@@ -223,13 +223,13 @@ void pi_wakeup_handler(void)
3145 + if (pi_test_on(pi_desc) == 1)
3146 + kvm_vcpu_kick(vcpu);
3147 + }
3148 +- spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
3149 ++ raw_spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
3150 + }
3151 +
3152 + void __init pi_init_cpu(int cpu)
3153 + {
3154 + INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
3155 +- spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
3156 ++ raw_spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
3157 + }
3158 +
3159 + bool pi_has_pending_interrupt(struct kvm_vcpu *vcpu)
3160 +diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
3161 +index 3313bffbecd4d..1a702c6a226ec 100644
3162 +--- a/arch/x86/realmode/init.c
3163 ++++ b/arch/x86/realmode/init.c
3164 +@@ -17,6 +17,32 @@ u32 *trampoline_cr4_features;
3165 + /* Hold the pgd entry used on booting additional CPUs */
3166 + pgd_t trampoline_pgd_entry;
3167 +
3168 ++void load_trampoline_pgtable(void)
3169 ++{
3170 ++#ifdef CONFIG_X86_32
3171 ++ load_cr3(initial_page_table);
3172 ++#else
3173 ++ /*
3174 ++ * This function is called before exiting to real-mode and that will
3175 ++ * fail with CR4.PCIDE still set.
3176 ++ */
3177 ++ if (boot_cpu_has(X86_FEATURE_PCID))
3178 ++ cr4_clear_bits(X86_CR4_PCIDE);
3179 ++
3180 ++ write_cr3(real_mode_header->trampoline_pgd);
3181 ++#endif
3182 ++
3183 ++ /*
3184 ++ * The CR3 write above will not flush global TLB entries.
3185 ++ * Stale, global entries from previous page tables may still be
3186 ++ * present. Flush those stale entries.
3187 ++ *
3188 ++ * This ensures that memory accessed while running with
3189 ++ * trampoline_pgd is *actually* mapped into trampoline_pgd.
3190 ++ */
3191 ++ __flush_tlb_all();
3192 ++}
3193 ++
3194 + void __init reserve_real_mode(void)
3195 + {
3196 + phys_addr_t mem;
3197 +diff --git a/arch/x86/um/syscalls_64.c b/arch/x86/um/syscalls_64.c
3198 +index 58f51667e2e4b..8249685b40960 100644
3199 +--- a/arch/x86/um/syscalls_64.c
3200 ++++ b/arch/x86/um/syscalls_64.c
3201 +@@ -11,6 +11,7 @@
3202 + #include <linux/uaccess.h>
3203 + #include <asm/prctl.h> /* XXX This should get the constants from libc */
3204 + #include <os.h>
3205 ++#include <registers.h>
3206 +
3207 + long arch_prctl(struct task_struct *task, int option,
3208 + unsigned long __user *arg2)
3209 +@@ -35,7 +36,7 @@ long arch_prctl(struct task_struct *task, int option,
3210 + switch (option) {
3211 + case ARCH_SET_FS:
3212 + case ARCH_SET_GS:
3213 +- ret = restore_registers(pid, &current->thread.regs.regs);
3214 ++ ret = restore_pid_registers(pid, &current->thread.regs.regs);
3215 + if (ret)
3216 + return ret;
3217 + break;
3218 +diff --git a/block/blk-flush.c b/block/blk-flush.c
3219 +index 70f1d02135ed6..33b487b5cbf78 100644
3220 +--- a/block/blk-flush.c
3221 ++++ b/block/blk-flush.c
3222 +@@ -236,8 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
3223 + * avoiding use-after-free.
3224 + */
3225 + WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE);
3226 +- if (fq->rq_status != BLK_STS_OK)
3227 ++ if (fq->rq_status != BLK_STS_OK) {
3228 + error = fq->rq_status;
3229 ++ fq->rq_status = BLK_STS_OK;
3230 ++ }
3231 +
3232 + if (!q->elevator) {
3233 + flush_rq->tag = BLK_MQ_NO_TAG;
3234 +diff --git a/block/blk-pm.c b/block/blk-pm.c
3235 +index 17bd020268d42..2dad62cc15727 100644
3236 +--- a/block/blk-pm.c
3237 ++++ b/block/blk-pm.c
3238 +@@ -163,27 +163,19 @@ EXPORT_SYMBOL(blk_pre_runtime_resume);
3239 + /**
3240 + * blk_post_runtime_resume - Post runtime resume processing
3241 + * @q: the queue of the device
3242 +- * @err: return value of the device's runtime_resume function
3243 + *
3244 + * Description:
3245 +- * Update the queue's runtime status according to the return value of the
3246 +- * device's runtime_resume function. If the resume was successful, call
3247 +- * blk_set_runtime_active() to do the real work of restarting the queue.
3248 ++ * For historical reasons, this routine merely calls blk_set_runtime_active()
3249 ++ * to do the real work of restarting the queue. It does this regardless of
3250 ++ * whether the device's runtime-resume succeeded; even if it failed the
3251 ++ * driver or error handler will need to communicate with the device.
3252 + *
3253 + * This function should be called near the end of the device's
3254 + * runtime_resume callback.
3255 + */
3256 +-void blk_post_runtime_resume(struct request_queue *q, int err)
3257 ++void blk_post_runtime_resume(struct request_queue *q)
3258 + {
3259 +- if (!q->dev)
3260 +- return;
3261 +- if (!err) {
3262 +- blk_set_runtime_active(q);
3263 +- } else {
3264 +- spin_lock_irq(&q->queue_lock);
3265 +- q->rpm_status = RPM_SUSPENDED;
3266 +- spin_unlock_irq(&q->queue_lock);
3267 +- }
3268 ++ blk_set_runtime_active(q);
3269 + }
3270 + EXPORT_SYMBOL(blk_post_runtime_resume);
3271 +
3272 +@@ -201,7 +193,7 @@ EXPORT_SYMBOL(blk_post_runtime_resume);
3273 + * runtime PM status and re-enable peeking requests from the queue. It
3274 + * should be called before first request is added to the queue.
3275 + *
3276 +- * This function is also called by blk_post_runtime_resume() for successful
3277 ++ * This function is also called by blk_post_runtime_resume() for
3278 + * runtime resumes. It does everything necessary to restart the queue.
3279 + */
3280 + void blk_set_runtime_active(struct request_queue *q)
3281 +diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
3282 +index 6e147c43fc186..37c4c308339e4 100644
3283 +--- a/crypto/jitterentropy.c
3284 ++++ b/crypto/jitterentropy.c
3285 +@@ -265,7 +265,6 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
3286 + {
3287 + __u64 delta2 = jent_delta(ec->last_delta, current_delta);
3288 + __u64 delta3 = jent_delta(ec->last_delta2, delta2);
3289 +- unsigned int delta_masked = current_delta & JENT_APT_WORD_MASK;
3290 +
3291 + ec->last_delta = current_delta;
3292 + ec->last_delta2 = delta2;
3293 +@@ -274,7 +273,7 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
3294 + * Insert the result of the comparison of two back-to-back time
3295 + * deltas.
3296 + */
3297 +- jent_apt_insert(ec, delta_masked);
3298 ++ jent_apt_insert(ec, current_delta);
3299 +
3300 + if (!current_delta || !delta2 || !delta3) {
3301 + /* RCT with a stuck bit */
3302 +diff --git a/drivers/acpi/acpica/exfield.c b/drivers/acpi/acpica/exfield.c
3303 +index 3323a2ba6a313..b3230e511870a 100644
3304 +--- a/drivers/acpi/acpica/exfield.c
3305 ++++ b/drivers/acpi/acpica/exfield.c
3306 +@@ -326,12 +326,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
3307 + obj_desc->field.base_byte_offset,
3308 + source_desc->buffer.pointer, data_length);
3309 +
3310 +- if ((obj_desc->field.region_obj->region.address ==
3311 +- PCC_MASTER_SUBSPACE
3312 +- && MASTER_SUBSPACE_COMMAND(obj_desc->field.
3313 +- base_byte_offset))
3314 +- || GENERIC_SUBSPACE_COMMAND(obj_desc->field.
3315 +- base_byte_offset)) {
3316 ++ if (MASTER_SUBSPACE_COMMAND(obj_desc->field.base_byte_offset)) {
3317 +
3318 + /* Perform the write */
3319 +
3320 +diff --git a/drivers/acpi/acpica/exoparg1.c b/drivers/acpi/acpica/exoparg1.c
3321 +index a46d685a3ffcf..9d67dfd93d5b6 100644
3322 +--- a/drivers/acpi/acpica/exoparg1.c
3323 ++++ b/drivers/acpi/acpica/exoparg1.c
3324 +@@ -1007,7 +1007,8 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
3325 + (walk_state, return_desc,
3326 + &temp_desc);
3327 + if (ACPI_FAILURE(status)) {
3328 +- goto cleanup;
3329 ++ return_ACPI_STATUS
3330 ++ (status);
3331 + }
3332 +
3333 + return_desc = temp_desc;
3334 +diff --git a/drivers/acpi/acpica/hwesleep.c b/drivers/acpi/acpica/hwesleep.c
3335 +index 4836a4b8b38b8..142a755be6881 100644
3336 +--- a/drivers/acpi/acpica/hwesleep.c
3337 ++++ b/drivers/acpi/acpica/hwesleep.c
3338 +@@ -104,7 +104,9 @@ acpi_status acpi_hw_extended_sleep(u8 sleep_state)
3339 +
3340 + /* Flush caches, as per ACPI specification */
3341 +
3342 +- ACPI_FLUSH_CPU_CACHE();
3343 ++ if (sleep_state < ACPI_STATE_S4) {
3344 ++ ACPI_FLUSH_CPU_CACHE();
3345 ++ }
3346 +
3347 + status = acpi_os_enter_sleep(sleep_state, sleep_control, 0);
3348 + if (status == AE_CTRL_TERMINATE) {
3349 +diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
3350 +index fcc84d196238a..6a20bb5059c1d 100644
3351 +--- a/drivers/acpi/acpica/hwsleep.c
3352 ++++ b/drivers/acpi/acpica/hwsleep.c
3353 +@@ -110,7 +110,9 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
3354 +
3355 + /* Flush caches, as per ACPI specification */
3356 +
3357 +- ACPI_FLUSH_CPU_CACHE();
3358 ++ if (sleep_state < ACPI_STATE_S4) {
3359 ++ ACPI_FLUSH_CPU_CACHE();
3360 ++ }
3361 +
3362 + status = acpi_os_enter_sleep(sleep_state, pm1a_control, pm1b_control);
3363 + if (status == AE_CTRL_TERMINATE) {
3364 +diff --git a/drivers/acpi/acpica/hwxfsleep.c b/drivers/acpi/acpica/hwxfsleep.c
3365 +index f1645d87864c3..3948c34d85830 100644
3366 +--- a/drivers/acpi/acpica/hwxfsleep.c
3367 ++++ b/drivers/acpi/acpica/hwxfsleep.c
3368 +@@ -162,8 +162,6 @@ acpi_status acpi_enter_sleep_state_s4bios(void)
3369 + return_ACPI_STATUS(status);
3370 + }
3371 +
3372 +- ACPI_FLUSH_CPU_CACHE();
3373 +-
3374 + status = acpi_hw_write_port(acpi_gbl_FADT.smi_command,
3375 + (u32)acpi_gbl_FADT.s4_bios_request, 8);
3376 + if (ACPI_FAILURE(status)) {
3377 +diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
3378 +index 72d2c0b656339..cb1750e7a6281 100644
3379 +--- a/drivers/acpi/acpica/utdelete.c
3380 ++++ b/drivers/acpi/acpica/utdelete.c
3381 +@@ -422,6 +422,7 @@ acpi_ut_update_ref_count(union acpi_operand_object *object, u32 action)
3382 + ACPI_WARNING((AE_INFO,
3383 + "Obj %p, Reference Count is already zero, cannot decrement\n",
3384 + object));
3385 ++ return;
3386 + }
3387 +
3388 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_ALLOCATIONS,
3389 +diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
3390 +index e04352c1dc2ce..2376f57b3617a 100644
3391 +--- a/drivers/acpi/battery.c
3392 ++++ b/drivers/acpi/battery.c
3393 +@@ -59,6 +59,7 @@ static int battery_bix_broken_package;
3394 + static int battery_notification_delay_ms;
3395 + static int battery_ac_is_broken;
3396 + static int battery_check_pmic = 1;
3397 ++static int battery_quirk_notcharging;
3398 + static unsigned int cache_time = 1000;
3399 + module_param(cache_time, uint, 0644);
3400 + MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
3401 +@@ -222,6 +223,8 @@ static int acpi_battery_get_property(struct power_supply *psy,
3402 + val->intval = POWER_SUPPLY_STATUS_CHARGING;
3403 + else if (acpi_battery_is_charged(battery))
3404 + val->intval = POWER_SUPPLY_STATUS_FULL;
3405 ++ else if (battery_quirk_notcharging)
3406 ++ val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
3407 + else
3408 + val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
3409 + break;
3410 +@@ -1105,6 +1108,12 @@ battery_do_not_check_pmic_quirk(const struct dmi_system_id *d)
3411 + return 0;
3412 + }
3413 +
3414 ++static int __init battery_quirk_not_charging(const struct dmi_system_id *d)
3415 ++{
3416 ++ battery_quirk_notcharging = 1;
3417 ++ return 0;
3418 ++}
3419 ++
3420 + static const struct dmi_system_id bat_dmi_table[] __initconst = {
3421 + {
3422 + /* NEC LZ750/LS */
3423 +@@ -1149,6 +1158,19 @@ static const struct dmi_system_id bat_dmi_table[] __initconst = {
3424 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
3425 + },
3426 + },
3427 ++ {
3428 ++ /*
3429 ++ * On Lenovo ThinkPads the BIOS specification defines
3430 ++ * a state when the bits for charging and discharging
3431 ++ * are both set to 0. That state is "Not Charging".
3432 ++ */
3433 ++ .callback = battery_quirk_not_charging,
3434 ++ .ident = "Lenovo ThinkPad",
3435 ++ .matches = {
3436 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
3437 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
3438 ++ },
3439 ++ },
3440 + {},
3441 + };
3442 +
3443 +diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
3444 +index e317214aabec5..5e14288fcabe9 100644
3445 +--- a/drivers/acpi/bus.c
3446 ++++ b/drivers/acpi/bus.c
3447 +@@ -98,8 +98,8 @@ int acpi_bus_get_status(struct acpi_device *device)
3448 + acpi_status status;
3449 + unsigned long long sta;
3450 +
3451 +- if (acpi_device_always_present(device)) {
3452 +- acpi_set_device_status(device, ACPI_STA_DEFAULT);
3453 ++ if (acpi_device_override_status(device, &sta)) {
3454 ++ acpi_set_device_status(device, sta);
3455 + return 0;
3456 + }
3457 +
3458 +diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
3459 +index be3e0921a6c00..3f2e5ea9ab6b7 100644
3460 +--- a/drivers/acpi/ec.c
3461 ++++ b/drivers/acpi/ec.c
3462 +@@ -166,6 +166,7 @@ struct acpi_ec_query {
3463 + struct transaction transaction;
3464 + struct work_struct work;
3465 + struct acpi_ec_query_handler *handler;
3466 ++ struct acpi_ec *ec;
3467 + };
3468 +
3469 + static int acpi_ec_query(struct acpi_ec *ec, u8 *data);
3470 +@@ -469,6 +470,7 @@ static void acpi_ec_submit_query(struct acpi_ec *ec)
3471 + ec_dbg_evt("Command(%s) submitted/blocked",
3472 + acpi_ec_cmd_string(ACPI_EC_COMMAND_QUERY));
3473 + ec->nr_pending_queries++;
3474 ++ ec->events_in_progress++;
3475 + queue_work(ec_wq, &ec->work);
3476 + }
3477 + }
3478 +@@ -535,7 +537,7 @@ static void acpi_ec_enable_event(struct acpi_ec *ec)
3479 + #ifdef CONFIG_PM_SLEEP
3480 + static void __acpi_ec_flush_work(void)
3481 + {
3482 +- drain_workqueue(ec_wq); /* flush ec->work */
3483 ++ flush_workqueue(ec_wq); /* flush ec->work */
3484 + flush_workqueue(ec_query_wq); /* flush queries */
3485 + }
3486 +
3487 +@@ -1116,7 +1118,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
3488 + }
3489 + EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
3490 +
3491 +-static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
3492 ++static struct acpi_ec_query *acpi_ec_create_query(struct acpi_ec *ec, u8 *pval)
3493 + {
3494 + struct acpi_ec_query *q;
3495 + struct transaction *t;
3496 +@@ -1124,11 +1126,13 @@ static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
3497 + q = kzalloc(sizeof (struct acpi_ec_query), GFP_KERNEL);
3498 + if (!q)
3499 + return NULL;
3500 ++
3501 + INIT_WORK(&q->work, acpi_ec_event_processor);
3502 + t = &q->transaction;
3503 + t->command = ACPI_EC_COMMAND_QUERY;
3504 + t->rdata = pval;
3505 + t->rlen = 1;
3506 ++ q->ec = ec;
3507 + return q;
3508 + }
3509 +
3510 +@@ -1145,13 +1149,21 @@ static void acpi_ec_event_processor(struct work_struct *work)
3511 + {
3512 + struct acpi_ec_query *q = container_of(work, struct acpi_ec_query, work);
3513 + struct acpi_ec_query_handler *handler = q->handler;
3514 ++ struct acpi_ec *ec = q->ec;
3515 +
3516 + ec_dbg_evt("Query(0x%02x) started", handler->query_bit);
3517 ++
3518 + if (handler->func)
3519 + handler->func(handler->data);
3520 + else if (handler->handle)
3521 + acpi_evaluate_object(handler->handle, NULL, NULL, NULL);
3522 ++
3523 + ec_dbg_evt("Query(0x%02x) stopped", handler->query_bit);
3524 ++
3525 ++ spin_lock_irq(&ec->lock);
3526 ++ ec->queries_in_progress--;
3527 ++ spin_unlock_irq(&ec->lock);
3528 ++
3529 + acpi_ec_delete_query(q);
3530 + }
3531 +
3532 +@@ -1161,7 +1173,7 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
3533 + int result;
3534 + struct acpi_ec_query *q;
3535 +
3536 +- q = acpi_ec_create_query(&value);
3537 ++ q = acpi_ec_create_query(ec, &value);
3538 + if (!q)
3539 + return -ENOMEM;
3540 +
3541 +@@ -1183,19 +1195,20 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
3542 + }
3543 +
3544 + /*
3545 +- * It is reported that _Qxx are evaluated in a parallel way on
3546 +- * Windows:
3547 ++ * It is reported that _Qxx are evaluated in a parallel way on Windows:
3548 + * https://bugzilla.kernel.org/show_bug.cgi?id=94411
3549 + *
3550 +- * Put this log entry before schedule_work() in order to make
3551 +- * it appearing before any other log entries occurred during the
3552 +- * work queue execution.
3553 ++ * Put this log entry before queue_work() to make it appear in the log
3554 ++ * before any other messages emitted during workqueue handling.
3555 + */
3556 + ec_dbg_evt("Query(0x%02x) scheduled", value);
3557 +- if (!queue_work(ec_query_wq, &q->work)) {
3558 +- ec_dbg_evt("Query(0x%02x) overlapped", value);
3559 +- result = -EBUSY;
3560 +- }
3561 ++
3562 ++ spin_lock_irq(&ec->lock);
3563 ++
3564 ++ ec->queries_in_progress++;
3565 ++ queue_work(ec_query_wq, &q->work);
3566 ++
3567 ++ spin_unlock_irq(&ec->lock);
3568 +
3569 + err_exit:
3570 + if (result)
3571 +@@ -1253,6 +1266,10 @@ static void acpi_ec_event_handler(struct work_struct *work)
3572 + ec_dbg_evt("Event stopped");
3573 +
3574 + acpi_ec_check_event(ec);
3575 ++
3576 ++ spin_lock_irqsave(&ec->lock, flags);
3577 ++ ec->events_in_progress--;
3578 ++ spin_unlock_irqrestore(&ec->lock, flags);
3579 + }
3580 +
3581 + static void acpi_ec_handle_interrupt(struct acpi_ec *ec)
3582 +@@ -2034,6 +2051,7 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
3583 +
3584 + bool acpi_ec_dispatch_gpe(void)
3585 + {
3586 ++ bool work_in_progress;
3587 + u32 ret;
3588 +
3589 + if (!first_ec)
3590 +@@ -2054,8 +2072,19 @@ bool acpi_ec_dispatch_gpe(void)
3591 + if (ret == ACPI_INTERRUPT_HANDLED)
3592 + pm_pr_dbg("ACPI EC GPE dispatched\n");
3593 +
3594 +- /* Flush the event and query workqueues. */
3595 +- acpi_ec_flush_work();
3596 ++ /* Drain EC work. */
3597 ++ do {
3598 ++ acpi_ec_flush_work();
3599 ++
3600 ++ pm_pr_dbg("ACPI EC work flushed\n");
3601 ++
3602 ++ spin_lock_irq(&first_ec->lock);
3603 ++
3604 ++ work_in_progress = first_ec->events_in_progress +
3605 ++ first_ec->queries_in_progress > 0;
3606 ++
3607 ++ spin_unlock_irq(&first_ec->lock);
3608 ++ } while (work_in_progress && !pm_wakeup_pending());
3609 +
3610 + return false;
3611 + }
3612 +diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
3613 +index a958ad60a3394..125e4901c9b47 100644
3614 +--- a/drivers/acpi/internal.h
3615 ++++ b/drivers/acpi/internal.h
3616 +@@ -184,6 +184,8 @@ struct acpi_ec {
3617 + struct work_struct work;
3618 + unsigned long timestamp;
3619 + unsigned long nr_pending_queries;
3620 ++ unsigned int events_in_progress;
3621 ++ unsigned int queries_in_progress;
3622 + bool busy_polling;
3623 + unsigned int polling_guard;
3624 + };
3625 +diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
3626 +index de0533bd4e086..67a5ee2fedfd3 100644
3627 +--- a/drivers/acpi/scan.c
3628 ++++ b/drivers/acpi/scan.c
3629 +@@ -1577,6 +1577,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
3630 + {
3631 + struct list_head resource_list;
3632 + bool is_serial_bus_slave = false;
3633 ++ static const struct acpi_device_id ignore_serial_bus_ids[] = {
3634 + /*
3635 + * These devices have multiple I2cSerialBus resources and an i2c-client
3636 + * must be instantiated for each, each with its own i2c_device_id.
3637 +@@ -1585,11 +1586,18 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
3638 + * drivers/platform/x86/i2c-multi-instantiate.c driver, which knows
3639 + * which i2c_device_id to use for each resource.
3640 + */
3641 +- static const struct acpi_device_id i2c_multi_instantiate_ids[] = {
3642 + {"BSG1160", },
3643 + {"BSG2150", },
3644 + {"INT33FE", },
3645 + {"INT3515", },
3646 ++ /*
3647 ++ * HIDs of device with an UartSerialBusV2 resource for which userspace
3648 ++ * expects a regular tty cdev to be created (instead of the in kernel
3649 ++ * serdev) and which have a kernel driver which expects a platform_dev
3650 ++ * such as the rfkill-gpio driver.
3651 ++ */
3652 ++ {"BCM4752", },
3653 ++ {"LNV4752", },
3654 + {}
3655 + };
3656 +
3657 +@@ -1603,8 +1611,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
3658 + fwnode_property_present(&device->fwnode, "baud")))
3659 + return true;
3660 +
3661 +- /* Instantiate a pdev for the i2c-multi-instantiate drv to bind to */
3662 +- if (!acpi_match_device_ids(device, i2c_multi_instantiate_ids))
3663 ++ if (!acpi_match_device_ids(device, ignore_serial_bus_ids))
3664 + return false;
3665 +
3666 + INIT_LIST_HEAD(&resource_list);
3667 +diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
3668 +index bdc1ba00aee9f..3f9a162be84e3 100644
3669 +--- a/drivers/acpi/x86/utils.c
3670 ++++ b/drivers/acpi/x86/utils.c
3671 +@@ -22,58 +22,71 @@
3672 + * Some BIOS-es (temporarily) hide specific APCI devices to work around Windows
3673 + * driver bugs. We use DMI matching to match known cases of this.
3674 + *
3675 +- * We work around this by always reporting ACPI_STA_DEFAULT for these
3676 +- * devices. Note this MUST only be done for devices where this is safe.
3677 ++ * Likewise sometimes some not-actually present devices are sometimes
3678 ++ * reported as present, which may cause issues.
3679 + *
3680 +- * This forcing of devices to be present is limited to specific CPU (SoC)
3681 +- * models both to avoid potentially causing trouble on other models and
3682 +- * because some HIDs are re-used on different SoCs for completely
3683 +- * different devices.
3684 ++ * We work around this by using the below quirk list to override the status
3685 ++ * reported by the _STA method with a fixed value (ACPI_STA_DEFAULT or 0).
3686 ++ * Note this MUST only be done for devices where this is safe.
3687 ++ *
3688 ++ * This status overriding is limited to specific CPU (SoC) models both to
3689 ++ * avoid potentially causing trouble on other models and because some HIDs
3690 ++ * are re-used on different SoCs for completely different devices.
3691 + */
3692 +-struct always_present_id {
3693 ++struct override_status_id {
3694 + struct acpi_device_id hid[2];
3695 + struct x86_cpu_id cpu_ids[2];
3696 + struct dmi_system_id dmi_ids[2]; /* Optional */
3697 + const char *uid;
3698 ++ const char *path;
3699 ++ unsigned long long status;
3700 + };
3701 +
3702 +-#define X86_MATCH(model) X86_MATCH_INTEL_FAM6_MODEL(model, NULL)
3703 +-
3704 +-#define ENTRY(hid, uid, cpu_models, dmi...) { \
3705 ++#define ENTRY(status, hid, uid, path, cpu_model, dmi...) { \
3706 + { { hid, }, {} }, \
3707 +- { cpu_models, {} }, \
3708 ++ { X86_MATCH_INTEL_FAM6_MODEL(cpu_model, NULL), {} }, \
3709 + { { .matches = dmi }, {} }, \
3710 + uid, \
3711 ++ path, \
3712 ++ status, \
3713 + }
3714 +
3715 +-static const struct always_present_id always_present_ids[] = {
3716 ++#define PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \
3717 ++ ENTRY(ACPI_STA_DEFAULT, hid, uid, NULL, cpu_model, dmi)
3718 ++
3719 ++#define NOT_PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \
3720 ++ ENTRY(0, hid, uid, NULL, cpu_model, dmi)
3721 ++
3722 ++#define PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \
3723 ++ ENTRY(ACPI_STA_DEFAULT, "", NULL, path, cpu_model, dmi)
3724 ++
3725 ++#define NOT_PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \
3726 ++ ENTRY(0, "", NULL, path, cpu_model, dmi)
3727 ++
3728 ++static const struct override_status_id override_status_ids[] = {
3729 + /*
3730 + * Bay / Cherry Trail PWM directly poked by GPU driver in win10,
3731 + * but Linux uses a separate PWM driver, harmless if not used.
3732 + */
3733 +- ENTRY("80860F09", "1", X86_MATCH(ATOM_SILVERMONT), {}),
3734 +- ENTRY("80862288", "1", X86_MATCH(ATOM_AIRMONT), {}),
3735 ++ PRESENT_ENTRY_HID("80860F09", "1", ATOM_SILVERMONT, {}),
3736 ++ PRESENT_ENTRY_HID("80862288", "1", ATOM_AIRMONT, {}),
3737 +
3738 +- /* Lenovo Yoga Book uses PWM2 for keyboard backlight control */
3739 +- ENTRY("80862289", "2", X86_MATCH(ATOM_AIRMONT), {
3740 +- DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
3741 +- }),
3742 + /*
3743 + * The INT0002 device is necessary to clear wakeup interrupt sources
3744 + * on Cherry Trail devices, without it we get nobody cared IRQ msgs.
3745 + */
3746 +- ENTRY("INT0002", "1", X86_MATCH(ATOM_AIRMONT), {}),
3747 ++ PRESENT_ENTRY_HID("INT0002", "1", ATOM_AIRMONT, {}),
3748 + /*
3749 + * On the Dell Venue 11 Pro 7130 and 7139, the DSDT hides
3750 + * the touchscreen ACPI device until a certain time
3751 + * after _SB.PCI0.GFX0.LCD.LCD1._ON gets called has passed
3752 + * *and* _STA has been called at least 3 times since.
3753 + */
3754 +- ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), {
3755 ++ PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, {
3756 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
3757 + DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"),
3758 + }),
3759 +- ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), {
3760 ++ PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, {
3761 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
3762 + DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7139"),
3763 + }),
3764 +@@ -81,54 +94,83 @@ static const struct always_present_id always_present_ids[] = {
3765 + /*
3766 + * The GPD win BIOS dated 20170221 has disabled the accelerometer, the
3767 + * drivers sometimes cause crashes under Windows and this is how the
3768 +- * manufacturer has solved this :| Note that the the DMI data is less
3769 +- * generic then it seems, a board_vendor of "AMI Corporation" is quite
3770 +- * rare and a board_name of "Default String" also is rare.
3771 ++ * manufacturer has solved this :| The DMI match may not seem unique,
3772 ++ * but it is. In the 67000+ DMI decode dumps from linux-hardware.org
3773 ++ * only 116 have board_vendor set to "AMI Corporation" and of those 116
3774 ++ * only the GPD win and pocket entries' board_name is "Default string".
3775 + *
3776 + * Unfortunately the GPD pocket also uses these strings and its BIOS
3777 + * was copy-pasted from the GPD win, so it has a disabled KIOX000A
3778 + * node which we should not enable, thus we also check the BIOS date.
3779 + */
3780 +- ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
3781 ++ PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
3782 + DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
3783 + DMI_MATCH(DMI_BOARD_NAME, "Default string"),
3784 + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
3785 + DMI_MATCH(DMI_BIOS_DATE, "02/21/2017")
3786 + }),
3787 +- ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
3788 ++ PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
3789 + DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
3790 + DMI_MATCH(DMI_BOARD_NAME, "Default string"),
3791 + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
3792 + DMI_MATCH(DMI_BIOS_DATE, "03/20/2017")
3793 + }),
3794 +- ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), {
3795 ++ PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, {
3796 + DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
3797 + DMI_MATCH(DMI_BOARD_NAME, "Default string"),
3798 + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
3799 + DMI_MATCH(DMI_BIOS_DATE, "05/25/2017")
3800 + }),
3801 ++
3802 ++ /*
3803 ++ * The GPD win/pocket have a PCI wifi card, but its DSDT has the SDIO
3804 ++ * mmc controller enabled and that has a child-device which _PS3
3805 ++ * method sets a GPIO causing the PCI wifi card to turn off.
3806 ++ * See above remark about uniqueness of the DMI match.
3807 ++ */
3808 ++ NOT_PRESENT_ENTRY_PATH("\\_SB_.PCI0.SDHB.BRC1", ATOM_AIRMONT, {
3809 ++ DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
3810 ++ DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
3811 ++ DMI_EXACT_MATCH(DMI_BOARD_SERIAL, "Default string"),
3812 ++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
3813 ++ }),
3814 + };
3815 +
3816 +-bool acpi_device_always_present(struct acpi_device *adev)
3817 ++bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status)
3818 + {
3819 + bool ret = false;
3820 + unsigned int i;
3821 +
3822 +- for (i = 0; i < ARRAY_SIZE(always_present_ids); i++) {
3823 +- if (acpi_match_device_ids(adev, always_present_ids[i].hid))
3824 ++ for (i = 0; i < ARRAY_SIZE(override_status_ids); i++) {
3825 ++ if (!x86_match_cpu(override_status_ids[i].cpu_ids))
3826 + continue;
3827 +
3828 +- if (!adev->pnp.unique_id ||
3829 +- strcmp(adev->pnp.unique_id, always_present_ids[i].uid))
3830 ++ if (override_status_ids[i].dmi_ids[0].matches[0].slot &&
3831 ++ !dmi_check_system(override_status_ids[i].dmi_ids))
3832 + continue;
3833 +
3834 +- if (!x86_match_cpu(always_present_ids[i].cpu_ids))
3835 +- continue;
3836 ++ if (override_status_ids[i].path) {
3837 ++ struct acpi_buffer path = { ACPI_ALLOCATE_BUFFER, NULL };
3838 ++ bool match;
3839 +
3840 +- if (always_present_ids[i].dmi_ids[0].matches[0].slot &&
3841 +- !dmi_check_system(always_present_ids[i].dmi_ids))
3842 +- continue;
3843 ++ if (acpi_get_name(adev->handle, ACPI_FULL_PATHNAME, &path))
3844 ++ continue;
3845 ++
3846 ++ match = strcmp((char *)path.pointer, override_status_ids[i].path) == 0;
3847 ++ kfree(path.pointer);
3848 ++
3849 ++ if (!match)
3850 ++ continue;
3851 ++ } else {
3852 ++ if (acpi_match_device_ids(adev, override_status_ids[i].hid))
3853 ++ continue;
3854 ++
3855 ++ if (!adev->pnp.unique_id ||
3856 ++ strcmp(adev->pnp.unique_id, override_status_ids[i].uid))
3857 ++ continue;
3858 ++ }
3859 +
3860 ++ *status = override_status_ids[i].status;
3861 + ret = true;
3862 + break;
3863 + }
3864 +diff --git a/drivers/android/binder.c b/drivers/android/binder.c
3865 +index 80e2bbb36422e..366b124057081 100644
3866 +--- a/drivers/android/binder.c
3867 ++++ b/drivers/android/binder.c
3868 +@@ -2657,8 +2657,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
3869 + if (!ret)
3870 + ret = binder_translate_fd(fd, offset, t, thread,
3871 + in_reply_to);
3872 +- if (ret < 0)
3873 +- return ret;
3874 ++ if (ret)
3875 ++ return ret > 0 ? -EINVAL : ret;
3876 + }
3877 + return 0;
3878 + }
3879 +diff --git a/drivers/base/core.c b/drivers/base/core.c
3880 +index 389d13616d1df..c0566aff53551 100644
3881 +--- a/drivers/base/core.c
3882 ++++ b/drivers/base/core.c
3883 +@@ -348,8 +348,7 @@ static void device_link_release_fn(struct work_struct *work)
3884 + /* Ensure that all references to the link object have been dropped. */
3885 + device_link_synchronize_removal();
3886 +
3887 +- while (refcount_dec_not_one(&link->rpm_active))
3888 +- pm_runtime_put(link->supplier);
3889 ++ pm_runtime_release_supplier(link, true);
3890 +
3891 + put_device(link->consumer);
3892 + put_device(link->supplier);
3893 +diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
3894 +index bc649da4899a0..1573319404888 100644
3895 +--- a/drivers/base/power/runtime.c
3896 ++++ b/drivers/base/power/runtime.c
3897 +@@ -305,19 +305,40 @@ static int rpm_get_suppliers(struct device *dev)
3898 + return 0;
3899 + }
3900 +
3901 ++/**
3902 ++ * pm_runtime_release_supplier - Drop references to device link's supplier.
3903 ++ * @link: Target device link.
3904 ++ * @check_idle: Whether or not to check if the supplier device is idle.
3905 ++ *
3906 ++ * Drop all runtime PM references associated with @link to its supplier device
3907 ++ * and if @check_idle is set, check if that device is idle (and so it can be
3908 ++ * suspended).
3909 ++ */
3910 ++void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
3911 ++{
3912 ++ struct device *supplier = link->supplier;
3913 ++
3914 ++ /*
3915 ++ * The additional power.usage_count check is a safety net in case
3916 ++ * the rpm_active refcount becomes saturated, in which case
3917 ++ * refcount_dec_not_one() would return true forever, but it is not
3918 ++ * strictly necessary.
3919 ++ */
3920 ++ while (refcount_dec_not_one(&link->rpm_active) &&
3921 ++ atomic_read(&supplier->power.usage_count) > 0)
3922 ++ pm_runtime_put_noidle(supplier);
3923 ++
3924 ++ if (check_idle)
3925 ++ pm_request_idle(supplier);
3926 ++}
3927 ++
3928 + static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
3929 + {
3930 + struct device_link *link;
3931 +
3932 + list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
3933 +- device_links_read_lock_held()) {
3934 +-
3935 +- while (refcount_dec_not_one(&link->rpm_active))
3936 +- pm_runtime_put_noidle(link->supplier);
3937 +-
3938 +- if (try_to_suspend)
3939 +- pm_request_idle(link->supplier);
3940 +- }
3941 ++ device_links_read_lock_held())
3942 ++ pm_runtime_release_supplier(link, try_to_suspend);
3943 + }
3944 +
3945 + static void rpm_put_suppliers(struct device *dev)
3946 +@@ -1755,9 +1776,7 @@ void pm_runtime_drop_link(struct device_link *link)
3947 + return;
3948 +
3949 + pm_runtime_drop_link_count(link->consumer);
3950 +-
3951 +- while (refcount_dec_not_one(&link->rpm_active))
3952 +- pm_runtime_put(link->supplier);
3953 ++ pm_runtime_release_supplier(link, true);
3954 + }
3955 +
3956 + static bool pm_runtime_need_not_resume(struct device *dev)
3957 +diff --git a/drivers/base/property.c b/drivers/base/property.c
3958 +index 4c43d30145c6b..cf88a5554d9c5 100644
3959 +--- a/drivers/base/property.c
3960 ++++ b/drivers/base/property.c
3961 +@@ -1195,8 +1195,10 @@ fwnode_graph_devcon_match(struct fwnode_handle *fwnode, const char *con_id,
3962 +
3963 + fwnode_graph_for_each_endpoint(fwnode, ep) {
3964 + node = fwnode_graph_get_remote_port_parent(ep);
3965 +- if (!fwnode_device_is_available(node))
3966 ++ if (!fwnode_device_is_available(node)) {
3967 ++ fwnode_handle_put(node);
3968 + continue;
3969 ++ }
3970 +
3971 + ret = match(node, con_id, data);
3972 + fwnode_handle_put(node);
3973 +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
3974 +index 456a1787e18d0..55a30afc14a00 100644
3975 +--- a/drivers/base/regmap/regmap.c
3976 ++++ b/drivers/base/regmap/regmap.c
3977 +@@ -620,6 +620,7 @@ int regmap_attach_dev(struct device *dev, struct regmap *map,
3978 + if (ret)
3979 + return ret;
3980 +
3981 ++ regmap_debugfs_exit(map);
3982 + regmap_debugfs_init(map);
3983 +
3984 + /* Add a devres resource for dev_get_regmap() */
3985 +diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
3986 +index 206bd4d7d7e23..d2fb3eb5816c3 100644
3987 +--- a/drivers/base/swnode.c
3988 ++++ b/drivers/base/swnode.c
3989 +@@ -519,7 +519,7 @@ software_node_get_reference_args(const struct fwnode_handle *fwnode,
3990 + return -ENOENT;
3991 +
3992 + if (nargs_prop) {
3993 +- error = property_entry_read_int_array(swnode->node->properties,
3994 ++ error = property_entry_read_int_array(ref->node->properties,
3995 + nargs_prop, sizeof(u32),
3996 + &nargs_prop_val, 1);
3997 + if (error)
3998 +diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
3999 +index 7df79ae6b0a1e..aaee15058d181 100644
4000 +--- a/drivers/block/floppy.c
4001 ++++ b/drivers/block/floppy.c
4002 +@@ -1015,7 +1015,7 @@ static DECLARE_DELAYED_WORK(fd_timer, fd_timer_workfn);
4003 + static void cancel_activity(void)
4004 + {
4005 + do_floppy = NULL;
4006 +- cancel_delayed_work_sync(&fd_timer);
4007 ++ cancel_delayed_work(&fd_timer);
4008 + cancel_work_sync(&floppy_work);
4009 + }
4010 +
4011 +@@ -3169,6 +3169,8 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
4012 + }
4013 + }
4014 +
4015 ++#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
4016 ++
4017 + static int raw_cmd_copyin(int cmd, void __user *param,
4018 + struct floppy_raw_cmd **rcmd)
4019 + {
4020 +@@ -3198,7 +3200,7 @@ loop:
4021 + ptr->resultcode = 0;
4022 +
4023 + if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) {
4024 +- if (ptr->length <= 0)
4025 ++ if (ptr->length <= 0 || ptr->length >= MAX_LEN)
4026 + return -EINVAL;
4027 + ptr->kernel_data = (char *)fd_dma_mem_alloc(ptr->length);
4028 + fallback_on_nodma_alloc(&ptr->kernel_data, ptr->length);
4029 +diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
4030 +index 5f9f027956317..74856a5862162 100644
4031 +--- a/drivers/bluetooth/btmtksdio.c
4032 ++++ b/drivers/bluetooth/btmtksdio.c
4033 +@@ -1042,6 +1042,8 @@ static int btmtksdio_runtime_suspend(struct device *dev)
4034 + if (!bdev)
4035 + return 0;
4036 +
4037 ++ sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER);
4038 ++
4039 + sdio_claim_host(bdev->func);
4040 +
4041 + sdio_writel(bdev->func, C_FW_OWN_REQ_SET, MTK_REG_CHLPCR, &err);
4042 +diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
4043 +index 8ea5ca8d71d6d..259a643377c24 100644
4044 +--- a/drivers/bluetooth/hci_bcm.c
4045 ++++ b/drivers/bluetooth/hci_bcm.c
4046 +@@ -1164,7 +1164,12 @@ static int bcm_probe(struct platform_device *pdev)
4047 + return -ENOMEM;
4048 +
4049 + dev->dev = &pdev->dev;
4050 +- dev->irq = platform_get_irq(pdev, 0);
4051 ++
4052 ++ ret = platform_get_irq(pdev, 0);
4053 ++ if (ret < 0)
4054 ++ return ret;
4055 ++
4056 ++ dev->irq = ret;
4057 +
4058 + /* Initialize routing field to an unused value */
4059 + dev->pcm_int_params[0] = 0xff;
4060 +diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
4061 +index 4184faef9f169..dc7ee5dd2eeca 100644
4062 +--- a/drivers/bluetooth/hci_qca.c
4063 ++++ b/drivers/bluetooth/hci_qca.c
4064 +@@ -1844,6 +1844,9 @@ static int qca_power_off(struct hci_dev *hdev)
4065 + hu->hdev->hw_error = NULL;
4066 + hu->hdev->cmd_timeout = NULL;
4067 +
4068 ++ del_timer_sync(&qca->wake_retrans_timer);
4069 ++ del_timer_sync(&qca->tx_idle_timer);
4070 ++
4071 + /* Stop sending shutdown command if soc crashes. */
4072 + if (soc_type != QCA_ROME
4073 + && qca->memdump_state == QCA_MEMDUMP_IDLE) {
4074 +@@ -1987,7 +1990,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
4075 +
4076 + qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
4077 + GPIOD_OUT_LOW);
4078 +- if (!qcadev->bt_en) {
4079 ++ if (IS_ERR_OR_NULL(qcadev->bt_en)) {
4080 + dev_warn(&serdev->dev, "failed to acquire enable gpio\n");
4081 + power_ctrl_enabled = false;
4082 + }
4083 +diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
4084 +index 8ab26dec5f6e8..8469f9876dd26 100644
4085 +--- a/drivers/bluetooth/hci_vhci.c
4086 ++++ b/drivers/bluetooth/hci_vhci.c
4087 +@@ -121,6 +121,8 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
4088 + if (opcode & 0x80)
4089 + set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
4090 +
4091 ++ set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
4092 ++
4093 + if (hci_register_dev(hdev) < 0) {
4094 + BT_ERR("Can't register HCI device");
4095 + hci_free_dev(hdev);
4096 +diff --git a/drivers/char/mwave/3780i.h b/drivers/char/mwave/3780i.h
4097 +index 9ccb6b270b071..95164246afd1a 100644
4098 +--- a/drivers/char/mwave/3780i.h
4099 ++++ b/drivers/char/mwave/3780i.h
4100 +@@ -68,7 +68,7 @@ typedef struct {
4101 + unsigned char ClockControl:1; /* RW: Clock control: 0=normal, 1=stop 3780i clocks */
4102 + unsigned char SoftReset:1; /* RW: Soft reset 0=normal, 1=soft reset active */
4103 + unsigned char ConfigMode:1; /* RW: Configuration mode, 0=normal, 1=config mode */
4104 +- unsigned char Reserved:5; /* 0: Reserved */
4105 ++ unsigned short Reserved:13; /* 0: Reserved */
4106 + } DSP_ISA_SLAVE_CONTROL;
4107 +
4108 +
4109 +diff --git a/drivers/char/random.c b/drivers/char/random.c
4110 +index 8c94380e7a463..5444206f35e22 100644
4111 +--- a/drivers/char/random.c
4112 ++++ b/drivers/char/random.c
4113 +@@ -922,12 +922,14 @@ static struct crng_state *select_crng(void)
4114 +
4115 + /*
4116 + * crng_fast_load() can be called by code in the interrupt service
4117 +- * path. So we can't afford to dilly-dally.
4118 ++ * path. So we can't afford to dilly-dally. Returns the number of
4119 ++ * bytes processed from cp.
4120 + */
4121 +-static int crng_fast_load(const char *cp, size_t len)
4122 ++static size_t crng_fast_load(const char *cp, size_t len)
4123 + {
4124 + unsigned long flags;
4125 + char *p;
4126 ++ size_t ret = 0;
4127 +
4128 + if (!spin_trylock_irqsave(&primary_crng.lock, flags))
4129 + return 0;
4130 +@@ -938,7 +940,7 @@ static int crng_fast_load(const char *cp, size_t len)
4131 + p = (unsigned char *) &primary_crng.state[4];
4132 + while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
4133 + p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
4134 +- cp++; crng_init_cnt++; len--;
4135 ++ cp++; crng_init_cnt++; len--; ret++;
4136 + }
4137 + spin_unlock_irqrestore(&primary_crng.lock, flags);
4138 + if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
4139 +@@ -946,7 +948,7 @@ static int crng_fast_load(const char *cp, size_t len)
4140 + crng_init = 1;
4141 + pr_notice("fast init done\n");
4142 + }
4143 +- return 1;
4144 ++ return ret;
4145 + }
4146 +
4147 + /*
4148 +@@ -1299,7 +1301,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
4149 + if (unlikely(crng_init == 0)) {
4150 + if ((fast_pool->count >= 64) &&
4151 + crng_fast_load((char *) fast_pool->pool,
4152 +- sizeof(fast_pool->pool))) {
4153 ++ sizeof(fast_pool->pool)) > 0) {
4154 + fast_pool->count = 0;
4155 + fast_pool->last = now;
4156 + }
4157 +@@ -2319,8 +2321,11 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
4158 + struct entropy_store *poolp = &input_pool;
4159 +
4160 + if (unlikely(crng_init == 0)) {
4161 +- crng_fast_load(buffer, count);
4162 +- return;
4163 ++ size_t ret = crng_fast_load(buffer, count);
4164 ++ count -= ret;
4165 ++ buffer += ret;
4166 ++ if (!count || crng_init == 0)
4167 ++ return;
4168 + }
4169 +
4170 + /* Suspend writing if we're above the trickle threshold.
4171 +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
4172 +index b2659a4c40168..dc56b976d8162 100644
4173 +--- a/drivers/char/tpm/tpm_tis_core.c
4174 ++++ b/drivers/char/tpm/tpm_tis_core.c
4175 +@@ -950,9 +950,11 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
4176 + priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
4177 + priv->phy_ops = phy_ops;
4178 +
4179 ++ dev_set_drvdata(&chip->dev, priv);
4180 ++
4181 + rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
4182 + if (rc < 0)
4183 +- goto out_err;
4184 ++ return rc;
4185 +
4186 + priv->manufacturer_id = vendor;
4187 +
4188 +@@ -962,8 +964,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
4189 + priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
4190 + }
4191 +
4192 +- dev_set_drvdata(&chip->dev, priv);
4193 +-
4194 + if (is_bsw()) {
4195 + priv->ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
4196 + ILB_REMAP_SIZE);
4197 +@@ -994,7 +994,15 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
4198 + intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
4199 + TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
4200 + intmask &= ~TPM_GLOBAL_INT_ENABLE;
4201 ++
4202 ++ rc = request_locality(chip, 0);
4203 ++ if (rc < 0) {
4204 ++ rc = -ENODEV;
4205 ++ goto out_err;
4206 ++ }
4207 ++
4208 + tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
4209 ++ release_locality(chip, 0);
4210 +
4211 + rc = tpm_chip_start(chip);
4212 + if (rc)
4213 +diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
4214 +index 1ac803e14fa3e..178886823b90c 100644
4215 +--- a/drivers/clk/bcm/clk-bcm2835.c
4216 ++++ b/drivers/clk/bcm/clk-bcm2835.c
4217 +@@ -933,8 +933,7 @@ static int bcm2835_clock_is_on(struct clk_hw *hw)
4218 +
4219 + static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
4220 + unsigned long rate,
4221 +- unsigned long parent_rate,
4222 +- bool round_up)
4223 ++ unsigned long parent_rate)
4224 + {
4225 + struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
4226 + const struct bcm2835_clock_data *data = clock->data;
4227 +@@ -946,10 +945,6 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
4228 +
4229 + rem = do_div(temp, rate);
4230 + div = temp;
4231 +-
4232 +- /* Round up and mask off the unused bits */
4233 +- if (round_up && ((div & unused_frac_mask) != 0 || rem != 0))
4234 +- div += unused_frac_mask + 1;
4235 + div &= ~unused_frac_mask;
4236 +
4237 + /* different clamping limits apply for a mash clock */
4238 +@@ -1080,7 +1075,7 @@ static int bcm2835_clock_set_rate(struct clk_hw *hw,
4239 + struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
4240 + struct bcm2835_cprman *cprman = clock->cprman;
4241 + const struct bcm2835_clock_data *data = clock->data;
4242 +- u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate, false);
4243 ++ u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate);
4244 + u32 ctl;
4245 +
4246 + spin_lock(&cprman->regs_lock);
4247 +@@ -1131,7 +1126,7 @@ static unsigned long bcm2835_clock_choose_div_and_prate(struct clk_hw *hw,
4248 +
4249 + if (!(BIT(parent_idx) & data->set_rate_parent)) {
4250 + *prate = clk_hw_get_rate(parent);
4251 +- *div = bcm2835_clock_choose_div(hw, rate, *prate, true);
4252 ++ *div = bcm2835_clock_choose_div(hw, rate, *prate);
4253 +
4254 + *avgrate = bcm2835_clock_rate_from_divisor(clock, *prate, *div);
4255 +
4256 +@@ -1217,7 +1212,7 @@ static int bcm2835_clock_determine_rate(struct clk_hw *hw,
4257 + rate = bcm2835_clock_choose_div_and_prate(hw, i, req->rate,
4258 + &div, &prate,
4259 + &avgrate);
4260 +- if (rate > best_rate && rate <= req->rate) {
4261 ++ if (abs(req->rate - rate) < abs(req->rate - best_rate)) {
4262 + best_parent = parent;
4263 + best_prate = prate;
4264 + best_rate = rate;
4265 +diff --git a/drivers/clk/clk-bm1880.c b/drivers/clk/clk-bm1880.c
4266 +index e6d6599d310a1..fad78a22218e8 100644
4267 +--- a/drivers/clk/clk-bm1880.c
4268 ++++ b/drivers/clk/clk-bm1880.c
4269 +@@ -522,14 +522,6 @@ static struct clk_hw *bm1880_clk_register_pll(struct bm1880_pll_hw_clock *pll_cl
4270 + return hw;
4271 + }
4272 +
4273 +-static void bm1880_clk_unregister_pll(struct clk_hw *hw)
4274 +-{
4275 +- struct bm1880_pll_hw_clock *pll_hw = to_bm1880_pll_clk(hw);
4276 +-
4277 +- clk_hw_unregister(hw);
4278 +- kfree(pll_hw);
4279 +-}
4280 +-
4281 + static int bm1880_clk_register_plls(struct bm1880_pll_hw_clock *clks,
4282 + int num_clks,
4283 + struct bm1880_clock_data *data)
4284 +@@ -555,7 +547,7 @@ static int bm1880_clk_register_plls(struct bm1880_pll_hw_clock *clks,
4285 +
4286 + err_clk:
4287 + while (i--)
4288 +- bm1880_clk_unregister_pll(data->hw_data.hws[clks[i].pll.id]);
4289 ++ clk_hw_unregister(data->hw_data.hws[clks[i].pll.id]);
4290 +
4291 + return PTR_ERR(hw);
4292 + }
4293 +@@ -695,14 +687,6 @@ static struct clk_hw *bm1880_clk_register_div(struct bm1880_div_hw_clock *div_cl
4294 + return hw;
4295 + }
4296 +
4297 +-static void bm1880_clk_unregister_div(struct clk_hw *hw)
4298 +-{
4299 +- struct bm1880_div_hw_clock *div_hw = to_bm1880_div_clk(hw);
4300 +-
4301 +- clk_hw_unregister(hw);
4302 +- kfree(div_hw);
4303 +-}
4304 +-
4305 + static int bm1880_clk_register_divs(struct bm1880_div_hw_clock *clks,
4306 + int num_clks,
4307 + struct bm1880_clock_data *data)
4308 +@@ -729,7 +713,7 @@ static int bm1880_clk_register_divs(struct bm1880_div_hw_clock *clks,
4309 +
4310 + err_clk:
4311 + while (i--)
4312 +- bm1880_clk_unregister_div(data->hw_data.hws[clks[i].div.id]);
4313 ++ clk_hw_unregister(data->hw_data.hws[clks[i].div.id]);
4314 +
4315 + return PTR_ERR(hw);
4316 + }
4317 +diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
4318 +index eb22f4fdbc6b4..772b48ad0cd78 100644
4319 +--- a/drivers/clk/clk-si5341.c
4320 ++++ b/drivers/clk/clk-si5341.c
4321 +@@ -1576,7 +1576,7 @@ static int si5341_probe(struct i2c_client *client,
4322 + clk_prepare(data->clk[i].hw.clk);
4323 + }
4324 +
4325 +- err = of_clk_add_hw_provider(client->dev.of_node, of_clk_si5341_get,
4326 ++ err = devm_of_clk_add_hw_provider(&client->dev, of_clk_si5341_get,
4327 + data);
4328 + if (err) {
4329 + dev_err(&client->dev, "unable to add clk provider\n");
4330 +diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c
4331 +index 5c75e3d906c20..682a18b392f08 100644
4332 +--- a/drivers/clk/clk-stm32f4.c
4333 ++++ b/drivers/clk/clk-stm32f4.c
4334 +@@ -129,7 +129,6 @@ static const struct stm32f4_gate_data stm32f429_gates[] __initconst = {
4335 + { STM32F4_RCC_APB2ENR, 20, "spi5", "apb2_div" },
4336 + { STM32F4_RCC_APB2ENR, 21, "spi6", "apb2_div" },
4337 + { STM32F4_RCC_APB2ENR, 22, "sai1", "apb2_div" },
4338 +- { STM32F4_RCC_APB2ENR, 26, "ltdc", "apb2_div" },
4339 + };
4340 +
4341 + static const struct stm32f4_gate_data stm32f469_gates[] __initconst = {
4342 +@@ -211,7 +210,6 @@ static const struct stm32f4_gate_data stm32f469_gates[] __initconst = {
4343 + { STM32F4_RCC_APB2ENR, 20, "spi5", "apb2_div" },
4344 + { STM32F4_RCC_APB2ENR, 21, "spi6", "apb2_div" },
4345 + { STM32F4_RCC_APB2ENR, 22, "sai1", "apb2_div" },
4346 +- { STM32F4_RCC_APB2ENR, 26, "ltdc", "apb2_div" },
4347 + };
4348 +
4349 + static const struct stm32f4_gate_data stm32f746_gates[] __initconst = {
4350 +@@ -286,7 +284,6 @@ static const struct stm32f4_gate_data stm32f746_gates[] __initconst = {
4351 + { STM32F4_RCC_APB2ENR, 21, "spi6", "apb2_div" },
4352 + { STM32F4_RCC_APB2ENR, 22, "sai1", "apb2_div" },
4353 + { STM32F4_RCC_APB2ENR, 23, "sai2", "apb2_div" },
4354 +- { STM32F4_RCC_APB2ENR, 26, "ltdc", "apb2_div" },
4355 + };
4356 +
4357 + static const struct stm32f4_gate_data stm32f769_gates[] __initconst = {
4358 +@@ -364,7 +361,6 @@ static const struct stm32f4_gate_data stm32f769_gates[] __initconst = {
4359 + { STM32F4_RCC_APB2ENR, 21, "spi6", "apb2_div" },
4360 + { STM32F4_RCC_APB2ENR, 22, "sai1", "apb2_div" },
4361 + { STM32F4_RCC_APB2ENR, 23, "sai2", "apb2_div" },
4362 +- { STM32F4_RCC_APB2ENR, 26, "ltdc", "apb2_div" },
4363 + { STM32F4_RCC_APB2ENR, 30, "mdio", "apb2_div" },
4364 + };
4365 +
4366 +diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
4367 +index 515ef39c4610c..b8a0e3d23698c 100644
4368 +--- a/drivers/clk/clk.c
4369 ++++ b/drivers/clk/clk.c
4370 +@@ -3314,6 +3314,24 @@ static int __init clk_debug_init(void)
4371 + {
4372 + struct clk_core *core;
4373 +
4374 ++#ifdef CLOCK_ALLOW_WRITE_DEBUGFS
4375 ++ pr_warn("\n");
4376 ++ pr_warn("********************************************************************\n");
4377 ++ pr_warn("** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **\n");
4378 ++ pr_warn("** **\n");
4379 ++ pr_warn("** WRITEABLE clk DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL **\n");
4380 ++ pr_warn("** **\n");
4381 ++ pr_warn("** This means that this kernel is built to expose clk operations **\n");
4382 ++ pr_warn("** such as parent or rate setting, enabling, disabling, etc. **\n");
4383 ++ pr_warn("** to userspace, which may compromise security on your system. **\n");
4384 ++ pr_warn("** **\n");
4385 ++ pr_warn("** If you see this message and you are not debugging the **\n");
4386 ++ pr_warn("** kernel, report this immediately to your vendor! **\n");
4387 ++ pr_warn("** **\n");
4388 ++ pr_warn("** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **\n");
4389 ++ pr_warn("********************************************************************\n");
4390 ++#endif
4391 ++
4392 + rootdir = debugfs_create_dir("clk", NULL);
4393 +
4394 + debugfs_create_file("clk_summary", 0444, rootdir, &all_lists,
4395 +diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
4396 +index 33a7ddc23cd24..db122d94db583 100644
4397 +--- a/drivers/clk/imx/clk-imx8mn.c
4398 ++++ b/drivers/clk/imx/clk-imx8mn.c
4399 +@@ -274,9 +274,9 @@ static const char * const imx8mn_pdm_sels[] = {"osc_24m", "sys_pll2_100m", "audi
4400 +
4401 + static const char * const imx8mn_dram_core_sels[] = {"dram_pll_out", "dram_alt_root", };
4402 +
4403 +-static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "osc_27m",
4404 +- "sys_pll1_200m", "audio_pll2_out", "vpu_pll",
4405 +- "sys_pll1_80m", };
4406 ++static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "dummy",
4407 ++ "sys_pll1_200m", "audio_pll2_out", "sys_pll2_500m",
4408 ++ "dummy", "sys_pll1_80m", };
4409 + static const char * const imx8mn_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll1_400m",
4410 + "sys_pll2_166m", "sys_pll3_out", "audio_pll1_out",
4411 + "video_pll1_out", "osc_32k", };
4412 +diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
4413 +index 0a68af6eec3dd..d42551a46ec91 100644
4414 +--- a/drivers/clk/meson/gxbb.c
4415 ++++ b/drivers/clk/meson/gxbb.c
4416 +@@ -712,6 +712,35 @@ static struct clk_regmap gxbb_mpll_prediv = {
4417 + };
4418 +
4419 + static struct clk_regmap gxbb_mpll0_div = {
4420 ++ .data = &(struct meson_clk_mpll_data){
4421 ++ .sdm = {
4422 ++ .reg_off = HHI_MPLL_CNTL7,
4423 ++ .shift = 0,
4424 ++ .width = 14,
4425 ++ },
4426 ++ .sdm_en = {
4427 ++ .reg_off = HHI_MPLL_CNTL,
4428 ++ .shift = 25,
4429 ++ .width = 1,
4430 ++ },
4431 ++ .n2 = {
4432 ++ .reg_off = HHI_MPLL_CNTL7,
4433 ++ .shift = 16,
4434 ++ .width = 9,
4435 ++ },
4436 ++ .lock = &meson_clk_lock,
4437 ++ },
4438 ++ .hw.init = &(struct clk_init_data){
4439 ++ .name = "mpll0_div",
4440 ++ .ops = &meson_clk_mpll_ops,
4441 ++ .parent_hws = (const struct clk_hw *[]) {
4442 ++ &gxbb_mpll_prediv.hw
4443 ++ },
4444 ++ .num_parents = 1,
4445 ++ },
4446 ++};
4447 ++
4448 ++static struct clk_regmap gxl_mpll0_div = {
4449 + .data = &(struct meson_clk_mpll_data){
4450 + .sdm = {
4451 + .reg_off = HHI_MPLL_CNTL7,
4452 +@@ -748,7 +777,16 @@ static struct clk_regmap gxbb_mpll0 = {
4453 + .hw.init = &(struct clk_init_data){
4454 + .name = "mpll0",
4455 + .ops = &clk_regmap_gate_ops,
4456 +- .parent_hws = (const struct clk_hw *[]) { &gxbb_mpll0_div.hw },
4457 ++ .parent_data = &(const struct clk_parent_data) {
4458 ++ /*
4459 ++ * Note:
4460 ++ * GXL and GXBB have different SDM_EN registers. We
4461 ++ * fallback to the global naming string mechanism so
4462 ++ * mpll0_div picks up the appropriate one.
4463 ++ */
4464 ++ .name = "mpll0_div",
4465 ++ .index = -1,
4466 ++ },
4467 + .num_parents = 1,
4468 + .flags = CLK_SET_RATE_PARENT,
4469 + },
4470 +@@ -3043,7 +3081,7 @@ static struct clk_hw_onecell_data gxl_hw_onecell_data = {
4471 + [CLKID_VAPB_1] = &gxbb_vapb_1.hw,
4472 + [CLKID_VAPB_SEL] = &gxbb_vapb_sel.hw,
4473 + [CLKID_VAPB] = &gxbb_vapb.hw,
4474 +- [CLKID_MPLL0_DIV] = &gxbb_mpll0_div.hw,
4475 ++ [CLKID_MPLL0_DIV] = &gxl_mpll0_div.hw,
4476 + [CLKID_MPLL1_DIV] = &gxbb_mpll1_div.hw,
4477 + [CLKID_MPLL2_DIV] = &gxbb_mpll2_div.hw,
4478 + [CLKID_MPLL_PREDIV] = &gxbb_mpll_prediv.hw,
4479 +@@ -3438,7 +3476,7 @@ static struct clk_regmap *const gxl_clk_regmaps[] = {
4480 + &gxbb_mpll0,
4481 + &gxbb_mpll1,
4482 + &gxbb_mpll2,
4483 +- &gxbb_mpll0_div,
4484 ++ &gxl_mpll0_div,
4485 + &gxbb_mpll1_div,
4486 + &gxbb_mpll2_div,
4487 + &gxbb_cts_amclk_div,
4488 +diff --git a/drivers/counter/Kconfig b/drivers/counter/Kconfig
4489 +index 2de53ab0dd252..cbdf84200e278 100644
4490 +--- a/drivers/counter/Kconfig
4491 ++++ b/drivers/counter/Kconfig
4492 +@@ -41,7 +41,7 @@ config STM32_TIMER_CNT
4493 +
4494 + config STM32_LPTIMER_CNT
4495 + tristate "STM32 LP Timer encoder counter driver"
4496 +- depends on (MFD_STM32_LPTIMER || COMPILE_TEST) && IIO
4497 ++ depends on MFD_STM32_LPTIMER || COMPILE_TEST
4498 + help
4499 + Select this option to enable STM32 Low-Power Timer quadrature encoder
4500 + and counter driver.
4501 +diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c
4502 +index fd6828e2d34f5..937439635d53f 100644
4503 +--- a/drivers/counter/stm32-lptimer-cnt.c
4504 ++++ b/drivers/counter/stm32-lptimer-cnt.c
4505 +@@ -12,8 +12,8 @@
4506 +
4507 + #include <linux/bitfield.h>
4508 + #include <linux/counter.h>
4509 +-#include <linux/iio/iio.h>
4510 + #include <linux/mfd/stm32-lptimer.h>
4511 ++#include <linux/mod_devicetable.h>
4512 + #include <linux/module.h>
4513 + #include <linux/pinctrl/consumer.h>
4514 + #include <linux/platform_device.h>
4515 +@@ -107,249 +107,27 @@ static int stm32_lptim_setup(struct stm32_lptim_cnt *priv, int enable)
4516 + return regmap_update_bits(priv->regmap, STM32_LPTIM_CFGR, mask, val);
4517 + }
4518 +
4519 +-static int stm32_lptim_write_raw(struct iio_dev *indio_dev,
4520 +- struct iio_chan_spec const *chan,
4521 +- int val, int val2, long mask)
4522 +-{
4523 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4524 +- int ret;
4525 +-
4526 +- switch (mask) {
4527 +- case IIO_CHAN_INFO_ENABLE:
4528 +- if (val < 0 || val > 1)
4529 +- return -EINVAL;
4530 +-
4531 +- /* Check nobody uses the timer, or already disabled/enabled */
4532 +- ret = stm32_lptim_is_enabled(priv);
4533 +- if ((ret < 0) || (!ret && !val))
4534 +- return ret;
4535 +- if (val && ret)
4536 +- return -EBUSY;
4537 +-
4538 +- ret = stm32_lptim_setup(priv, val);
4539 +- if (ret)
4540 +- return ret;
4541 +- return stm32_lptim_set_enable_state(priv, val);
4542 +-
4543 +- default:
4544 +- return -EINVAL;
4545 +- }
4546 +-}
4547 +-
4548 +-static int stm32_lptim_read_raw(struct iio_dev *indio_dev,
4549 +- struct iio_chan_spec const *chan,
4550 +- int *val, int *val2, long mask)
4551 +-{
4552 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4553 +- u32 dat;
4554 +- int ret;
4555 +-
4556 +- switch (mask) {
4557 +- case IIO_CHAN_INFO_RAW:
4558 +- ret = regmap_read(priv->regmap, STM32_LPTIM_CNT, &dat);
4559 +- if (ret)
4560 +- return ret;
4561 +- *val = dat;
4562 +- return IIO_VAL_INT;
4563 +-
4564 +- case IIO_CHAN_INFO_ENABLE:
4565 +- ret = stm32_lptim_is_enabled(priv);
4566 +- if (ret < 0)
4567 +- return ret;
4568 +- *val = ret;
4569 +- return IIO_VAL_INT;
4570 +-
4571 +- case IIO_CHAN_INFO_SCALE:
4572 +- /* Non-quadrature mode: scale = 1 */
4573 +- *val = 1;
4574 +- *val2 = 0;
4575 +- if (priv->quadrature_mode) {
4576 +- /*
4577 +- * Quadrature encoder mode:
4578 +- * - both edges, quarter cycle, scale is 0.25
4579 +- * - either rising/falling edge scale is 0.5
4580 +- */
4581 +- if (priv->polarity > 1)
4582 +- *val2 = 2;
4583 +- else
4584 +- *val2 = 1;
4585 +- }
4586 +- return IIO_VAL_FRACTIONAL_LOG2;
4587 +-
4588 +- default:
4589 +- return -EINVAL;
4590 +- }
4591 +-}
4592 +-
4593 +-static const struct iio_info stm32_lptim_cnt_iio_info = {
4594 +- .read_raw = stm32_lptim_read_raw,
4595 +- .write_raw = stm32_lptim_write_raw,
4596 +-};
4597 +-
4598 +-static const char *const stm32_lptim_quadrature_modes[] = {
4599 +- "non-quadrature",
4600 +- "quadrature",
4601 +-};
4602 +-
4603 +-static int stm32_lptim_get_quadrature_mode(struct iio_dev *indio_dev,
4604 +- const struct iio_chan_spec *chan)
4605 +-{
4606 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4607 +-
4608 +- return priv->quadrature_mode;
4609 +-}
4610 +-
4611 +-static int stm32_lptim_set_quadrature_mode(struct iio_dev *indio_dev,
4612 +- const struct iio_chan_spec *chan,
4613 +- unsigned int type)
4614 +-{
4615 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4616 +-
4617 +- if (stm32_lptim_is_enabled(priv))
4618 +- return -EBUSY;
4619 +-
4620 +- priv->quadrature_mode = type;
4621 +-
4622 +- return 0;
4623 +-}
4624 +-
4625 +-static const struct iio_enum stm32_lptim_quadrature_mode_en = {
4626 +- .items = stm32_lptim_quadrature_modes,
4627 +- .num_items = ARRAY_SIZE(stm32_lptim_quadrature_modes),
4628 +- .get = stm32_lptim_get_quadrature_mode,
4629 +- .set = stm32_lptim_set_quadrature_mode,
4630 +-};
4631 +-
4632 +-static const char * const stm32_lptim_cnt_polarity[] = {
4633 +- "rising-edge", "falling-edge", "both-edges",
4634 +-};
4635 +-
4636 +-static int stm32_lptim_cnt_get_polarity(struct iio_dev *indio_dev,
4637 +- const struct iio_chan_spec *chan)
4638 +-{
4639 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4640 +-
4641 +- return priv->polarity;
4642 +-}
4643 +-
4644 +-static int stm32_lptim_cnt_set_polarity(struct iio_dev *indio_dev,
4645 +- const struct iio_chan_spec *chan,
4646 +- unsigned int type)
4647 +-{
4648 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4649 +-
4650 +- if (stm32_lptim_is_enabled(priv))
4651 +- return -EBUSY;
4652 +-
4653 +- priv->polarity = type;
4654 +-
4655 +- return 0;
4656 +-}
4657 +-
4658 +-static const struct iio_enum stm32_lptim_cnt_polarity_en = {
4659 +- .items = stm32_lptim_cnt_polarity,
4660 +- .num_items = ARRAY_SIZE(stm32_lptim_cnt_polarity),
4661 +- .get = stm32_lptim_cnt_get_polarity,
4662 +- .set = stm32_lptim_cnt_set_polarity,
4663 +-};
4664 +-
4665 +-static ssize_t stm32_lptim_cnt_get_ceiling(struct stm32_lptim_cnt *priv,
4666 +- char *buf)
4667 +-{
4668 +- return snprintf(buf, PAGE_SIZE, "%u\n", priv->ceiling);
4669 +-}
4670 +-
4671 +-static ssize_t stm32_lptim_cnt_set_ceiling(struct stm32_lptim_cnt *priv,
4672 +- const char *buf, size_t len)
4673 +-{
4674 +- int ret;
4675 +-
4676 +- if (stm32_lptim_is_enabled(priv))
4677 +- return -EBUSY;
4678 +-
4679 +- ret = kstrtouint(buf, 0, &priv->ceiling);
4680 +- if (ret)
4681 +- return ret;
4682 +-
4683 +- if (priv->ceiling > STM32_LPTIM_MAX_ARR)
4684 +- return -EINVAL;
4685 +-
4686 +- return len;
4687 +-}
4688 +-
4689 +-static ssize_t stm32_lptim_cnt_get_preset_iio(struct iio_dev *indio_dev,
4690 +- uintptr_t private,
4691 +- const struct iio_chan_spec *chan,
4692 +- char *buf)
4693 +-{
4694 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4695 +-
4696 +- return stm32_lptim_cnt_get_ceiling(priv, buf);
4697 +-}
4698 +-
4699 +-static ssize_t stm32_lptim_cnt_set_preset_iio(struct iio_dev *indio_dev,
4700 +- uintptr_t private,
4701 +- const struct iio_chan_spec *chan,
4702 +- const char *buf, size_t len)
4703 +-{
4704 +- struct stm32_lptim_cnt *priv = iio_priv(indio_dev);
4705 +-
4706 +- return stm32_lptim_cnt_set_ceiling(priv, buf, len);
4707 +-}
4708 +-
4709 +-/* LP timer with encoder */
4710 +-static const struct iio_chan_spec_ext_info stm32_lptim_enc_ext_info[] = {
4711 +- {
4712 +- .name = "preset",
4713 +- .shared = IIO_SEPARATE,
4714 +- .read = stm32_lptim_cnt_get_preset_iio,
4715 +- .write = stm32_lptim_cnt_set_preset_iio,
4716 +- },
4717 +- IIO_ENUM("polarity", IIO_SEPARATE, &stm32_lptim_cnt_polarity_en),
4718 +- IIO_ENUM_AVAILABLE("polarity", &stm32_lptim_cnt_polarity_en),
4719 +- IIO_ENUM("quadrature_mode", IIO_SEPARATE,
4720 +- &stm32_lptim_quadrature_mode_en),
4721 +- IIO_ENUM_AVAILABLE("quadrature_mode", &stm32_lptim_quadrature_mode_en),
4722 +- {}
4723 +-};
4724 +-
4725 +-static const struct iio_chan_spec stm32_lptim_enc_channels = {
4726 +- .type = IIO_COUNT,
4727 +- .channel = 0,
4728 +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
4729 +- BIT(IIO_CHAN_INFO_ENABLE) |
4730 +- BIT(IIO_CHAN_INFO_SCALE),
4731 +- .ext_info = stm32_lptim_enc_ext_info,
4732 +- .indexed = 1,
4733 +-};
4734 +-
4735 +-/* LP timer without encoder (counter only) */
4736 +-static const struct iio_chan_spec_ext_info stm32_lptim_cnt_ext_info[] = {
4737 +- {
4738 +- .name = "preset",
4739 +- .shared = IIO_SEPARATE,
4740 +- .read = stm32_lptim_cnt_get_preset_iio,
4741 +- .write = stm32_lptim_cnt_set_preset_iio,
4742 +- },
4743 +- IIO_ENUM("polarity", IIO_SEPARATE, &stm32_lptim_cnt_polarity_en),
4744 +- IIO_ENUM_AVAILABLE("polarity", &stm32_lptim_cnt_polarity_en),
4745 +- {}
4746 +-};
4747 +-
4748 +-static const struct iio_chan_spec stm32_lptim_cnt_channels = {
4749 +- .type = IIO_COUNT,
4750 +- .channel = 0,
4751 +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
4752 +- BIT(IIO_CHAN_INFO_ENABLE) |
4753 +- BIT(IIO_CHAN_INFO_SCALE),
4754 +- .ext_info = stm32_lptim_cnt_ext_info,
4755 +- .indexed = 1,
4756 +-};
4757 +-
4758 + /**
4759 + * enum stm32_lptim_cnt_function - enumerates LPTimer counter & encoder modes
4760 + * @STM32_LPTIM_COUNTER_INCREASE: up count on IN1 rising, falling or both edges
4761 + * @STM32_LPTIM_ENCODER_BOTH_EDGE: count on both edges (IN1 & IN2 quadrature)
4762 ++ *
4763 ++ * In non-quadrature mode, device counts up on active edge.
4764 ++ * In quadrature mode, encoder counting scenarios are as follows:
4765 ++ * +---------+----------+--------------------+--------------------+
4766 ++ * | Active | Level on | IN1 signal | IN2 signal |
4767 ++ * | edge | opposite +----------+---------+----------+---------+
4768 ++ * | | signal | Rising | Falling | Rising | Falling |
4769 ++ * +---------+----------+----------+---------+----------+---------+
4770 ++ * | Rising | High -> | Down | - | Up | - |
4771 ++ * | edge | Low -> | Up | - | Down | - |
4772 ++ * +---------+----------+----------+---------+----------+---------+
4773 ++ * | Falling | High -> | - | Up | - | Down |
4774 ++ * | edge | Low -> | - | Down | - | Up |
4775 ++ * +---------+----------+----------+---------+----------+---------+
4776 ++ * | Both | High -> | Down | Up | Up | Down |
4777 ++ * | edges | Low -> | Up | Down | Down | Up |
4778 ++ * +---------+----------+----------+---------+----------+---------+
4779 + */
4780 + enum stm32_lptim_cnt_function {
4781 + STM32_LPTIM_COUNTER_INCREASE,
4782 +@@ -484,7 +262,7 @@ static ssize_t stm32_lptim_cnt_ceiling_read(struct counter_device *counter,
4783 + {
4784 + struct stm32_lptim_cnt *const priv = counter->priv;
4785 +
4786 +- return stm32_lptim_cnt_get_ceiling(priv, buf);
4787 ++ return snprintf(buf, PAGE_SIZE, "%u\n", priv->ceiling);
4788 + }
4789 +
4790 + static ssize_t stm32_lptim_cnt_ceiling_write(struct counter_device *counter,
4791 +@@ -493,8 +271,22 @@ static ssize_t stm32_lptim_cnt_ceiling_write(struct counter_device *counter,
4792 + const char *buf, size_t len)
4793 + {
4794 + struct stm32_lptim_cnt *const priv = counter->priv;
4795 ++ unsigned int ceiling;
4796 ++ int ret;
4797 ++
4798 ++ if (stm32_lptim_is_enabled(priv))
4799 ++ return -EBUSY;
4800 ++
4801 ++ ret = kstrtouint(buf, 0, &ceiling);
4802 ++ if (ret)
4803 ++ return ret;
4804 ++
4805 ++ if (ceiling > STM32_LPTIM_MAX_ARR)
4806 ++ return -EINVAL;
4807 ++
4808 ++ priv->ceiling = ceiling;
4809 +
4810 +- return stm32_lptim_cnt_set_ceiling(priv, buf, len);
4811 ++ return len;
4812 + }
4813 +
4814 + static const struct counter_count_ext stm32_lptim_cnt_ext[] = {
4815 +@@ -630,32 +422,19 @@ static int stm32_lptim_cnt_probe(struct platform_device *pdev)
4816 + {
4817 + struct stm32_lptimer *ddata = dev_get_drvdata(pdev->dev.parent);
4818 + struct stm32_lptim_cnt *priv;
4819 +- struct iio_dev *indio_dev;
4820 +- int ret;
4821 +
4822 + if (IS_ERR_OR_NULL(ddata))
4823 + return -EINVAL;
4824 +
4825 +- indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
4826 +- if (!indio_dev)
4827 ++ priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
4828 ++ if (!priv)
4829 + return -ENOMEM;
4830 +
4831 +- priv = iio_priv(indio_dev);
4832 + priv->dev = &pdev->dev;
4833 + priv->regmap = ddata->regmap;
4834 + priv->clk = ddata->clk;
4835 + priv->ceiling = STM32_LPTIM_MAX_ARR;
4836 +
4837 +- /* Initialize IIO device */
4838 +- indio_dev->name = dev_name(&pdev->dev);
4839 +- indio_dev->dev.of_node = pdev->dev.of_node;
4840 +- indio_dev->info = &stm32_lptim_cnt_iio_info;
4841 +- if (ddata->has_encoder)
4842 +- indio_dev->channels = &stm32_lptim_enc_channels;
4843 +- else
4844 +- indio_dev->channels = &stm32_lptim_cnt_channels;
4845 +- indio_dev->num_channels = 1;
4846 +-
4847 + /* Initialize Counter device */
4848 + priv->counter.name = dev_name(&pdev->dev);
4849 + priv->counter.parent = &pdev->dev;
4850 +@@ -673,10 +452,6 @@ static int stm32_lptim_cnt_probe(struct platform_device *pdev)
4851 +
4852 + platform_set_drvdata(pdev, priv);
4853 +
4854 +- ret = devm_iio_device_register(&pdev->dev, indio_dev);
4855 +- if (ret)
4856 +- return ret;
4857 +-
4858 + return devm_counter_register(&pdev->dev, &priv->counter);
4859 + }
4860 +
4861 +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
4862 +index 8e159fb6af9cd..30dafe8fc5054 100644
4863 +--- a/drivers/cpufreq/cpufreq.c
4864 ++++ b/drivers/cpufreq/cpufreq.c
4865 +@@ -1400,7 +1400,7 @@ static int cpufreq_online(unsigned int cpu)
4866 +
4867 + ret = freq_qos_add_request(&policy->constraints,
4868 + policy->min_freq_req, FREQ_QOS_MIN,
4869 +- policy->min);
4870 ++ FREQ_QOS_MIN_DEFAULT_VALUE);
4871 + if (ret < 0) {
4872 + /*
4873 + * So we don't call freq_qos_remove_request() for an
4874 +@@ -1420,7 +1420,7 @@ static int cpufreq_online(unsigned int cpu)
4875 +
4876 + ret = freq_qos_add_request(&policy->constraints,
4877 + policy->max_freq_req, FREQ_QOS_MAX,
4878 +- policy->max);
4879 ++ FREQ_QOS_MAX_DEFAULT_VALUE);
4880 + if (ret < 0) {
4881 + policy->max_freq_req = NULL;
4882 + goto out_destroy_policy;
4883 +diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
4884 +index a780e627838ae..5a40c7d10cc9a 100644
4885 +--- a/drivers/crypto/caam/caamalg_qi2.c
4886 ++++ b/drivers/crypto/caam/caamalg_qi2.c
4887 +@@ -5467,7 +5467,7 @@ int dpaa2_caam_enqueue(struct device *dev, struct caam_request *req)
4888 + dpaa2_fd_set_len(&fd, dpaa2_fl_get_len(&req->fd_flt[1]));
4889 + dpaa2_fd_set_flc(&fd, req->flc_dma);
4890 +
4891 +- ppriv = this_cpu_ptr(priv->ppriv);
4892 ++ ppriv = raw_cpu_ptr(priv->ppriv);
4893 + for (i = 0; i < (priv->dpseci_attr.num_tx_queues << 1); i++) {
4894 + err = dpaa2_io_service_enqueue_fq(ppriv->dpio, ppriv->req_fqid,
4895 + &fd);
4896 +diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
4897 +index 9b968ac4ee7b6..a196bb8b17010 100644
4898 +--- a/drivers/crypto/omap-aes.c
4899 ++++ b/drivers/crypto/omap-aes.c
4900 +@@ -1302,7 +1302,7 @@ static int omap_aes_suspend(struct device *dev)
4901 +
4902 + static int omap_aes_resume(struct device *dev)
4903 + {
4904 +- pm_runtime_resume_and_get(dev);
4905 ++ pm_runtime_get_sync(dev);
4906 + return 0;
4907 + }
4908 + #endif
4909 +diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
4910 +index d7ca222f0df18..74afafc84c716 100644
4911 +--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
4912 ++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
4913 +@@ -111,37 +111,19 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
4914 +
4915 + mutex_lock(lock);
4916 +
4917 +- /* Check if PF2VF CSR is in use by remote function */
4918 ++ /* Check if the PFVF CSR is in use by remote function */
4919 + val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
4920 + if ((val & remote_in_use_mask) == remote_in_use_pattern) {
4921 + dev_dbg(&GET_DEV(accel_dev),
4922 +- "PF2VF CSR in use by remote function\n");
4923 ++ "PFVF CSR in use by remote function\n");
4924 + ret = -EBUSY;
4925 + goto out;
4926 + }
4927 +
4928 +- /* Attempt to get ownership of PF2VF CSR */
4929 + msg &= ~local_in_use_mask;
4930 + msg |= local_in_use_pattern;
4931 +- ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, msg);
4932 +
4933 +- /* Wait in case remote func also attempting to get ownership */
4934 +- msleep(ADF_IOV_MSG_COLLISION_DETECT_DELAY);
4935 +-
4936 +- val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
4937 +- if ((val & local_in_use_mask) != local_in_use_pattern) {
4938 +- dev_dbg(&GET_DEV(accel_dev),
4939 +- "PF2VF CSR in use by remote - collision detected\n");
4940 +- ret = -EBUSY;
4941 +- goto out;
4942 +- }
4943 +-
4944 +- /*
4945 +- * This function now owns the PV2VF CSR. The IN_USE_BY pattern must
4946 +- * remain in the PF2VF CSR for all writes including ACK from remote
4947 +- * until this local function relinquishes the CSR. Send the message
4948 +- * by interrupting the remote.
4949 +- */
4950 ++ /* Attempt to get ownership of the PFVF CSR */
4951 + ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, msg | int_bit);
4952 +
4953 + /* Wait for confirmation from remote func it received the message */
4954 +@@ -150,6 +132,12 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
4955 + val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
4956 + } while ((val & int_bit) && (count++ < ADF_IOV_MSG_ACK_MAX_RETRY));
4957 +
4958 ++ if (val & int_bit) {
4959 ++ dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
4960 ++ val &= ~int_bit;
4961 ++ ret = -EIO;
4962 ++ }
4963 ++
4964 + if (val != msg) {
4965 + dev_dbg(&GET_DEV(accel_dev),
4966 + "Collision - PFVF CSR overwritten by remote function\n");
4967 +@@ -157,13 +145,7 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
4968 + goto out;
4969 + }
4970 +
4971 +- if (val & int_bit) {
4972 +- dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
4973 +- val &= ~int_bit;
4974 +- ret = -EIO;
4975 +- }
4976 +-
4977 +- /* Finished with PF2VF CSR; relinquish it and leave msg in CSR */
4978 ++ /* Finished with the PFVF CSR; relinquish it and leave msg in CSR */
4979 + ADF_CSR_WR(pmisc_bar_addr, pf2vf_offset, val & ~local_in_use_mask);
4980 + out:
4981 + mutex_unlock(lock);
4982 +@@ -171,12 +153,13 @@ out:
4983 + }
4984 +
4985 + /**
4986 +- * adf_iov_putmsg() - send PF2VF message
4987 ++ * adf_iov_putmsg() - send PFVF message
4988 + * @accel_dev: Pointer to acceleration device.
4989 + * @msg: Message to send
4990 +- * @vf_nr: VF number to which the message will be sent
4991 ++ * @vf_nr: VF number to which the message will be sent if on PF, ignored
4992 ++ * otherwise
4993 + *
4994 +- * Function sends a messge from the PF to a VF
4995 ++ * Function sends a message through the PFVF channel
4996 + *
4997 + * Return: 0 on success, error code otherwise.
4998 + */
4999 +diff --git a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
5000 +index 54b738da829d8..3e25fac051b25 100644
5001 +--- a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
5002 ++++ b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
5003 +@@ -8,7 +8,7 @@
5004 + * adf_vf2pf_notify_init() - send init msg to PF
5005 + * @accel_dev: Pointer to acceleration VF device.
5006 + *
5007 +- * Function sends an init messge from the VF to a PF
5008 ++ * Function sends an init message from the VF to a PF
5009 + *
5010 + * Return: 0 on success, error code otherwise.
5011 + */
5012 +@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(adf_vf2pf_notify_init);
5013 + * adf_vf2pf_notify_shutdown() - send shutdown msg to PF
5014 + * @accel_dev: Pointer to acceleration VF device.
5015 + *
5016 +- * Function sends a shutdown messge from the VF to a PF
5017 ++ * Function sends a shutdown message from the VF to a PF
5018 + *
5019 + * Return: void
5020 + */
5021 +diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
5022 +index 87be96a0b0bba..8b4e79d882af4 100644
5023 +--- a/drivers/crypto/qce/sha.c
5024 ++++ b/drivers/crypto/qce/sha.c
5025 +@@ -533,8 +533,8 @@ static int qce_ahash_register_one(const struct qce_ahash_def *def,
5026 +
5027 + ret = crypto_register_ahash(alg);
5028 + if (ret) {
5029 +- kfree(tmpl);
5030 + dev_err(qce->dev, "%s registration failed\n", base->cra_name);
5031 ++ kfree(tmpl);
5032 + return ret;
5033 + }
5034 +
5035 +diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
5036 +index d8053789c8828..89c7fc3efbd71 100644
5037 +--- a/drivers/crypto/qce/skcipher.c
5038 ++++ b/drivers/crypto/qce/skcipher.c
5039 +@@ -433,8 +433,8 @@ static int qce_skcipher_register_one(const struct qce_skcipher_def *def,
5040 +
5041 + ret = crypto_register_skcipher(alg);
5042 + if (ret) {
5043 +- kfree(tmpl);
5044 + dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name);
5045 ++ kfree(tmpl);
5046 + return ret;
5047 + }
5048 +
5049 +diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
5050 +index 75867c0b00172..be1bf39a317de 100644
5051 +--- a/drivers/crypto/stm32/stm32-crc32.c
5052 ++++ b/drivers/crypto/stm32/stm32-crc32.c
5053 +@@ -279,7 +279,7 @@ static struct shash_alg algs[] = {
5054 + .digestsize = CHKSUM_DIGEST_SIZE,
5055 + .base = {
5056 + .cra_name = "crc32",
5057 +- .cra_driver_name = DRIVER_NAME,
5058 ++ .cra_driver_name = "stm32-crc32-crc32",
5059 + .cra_priority = 200,
5060 + .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
5061 + .cra_blocksize = CHKSUM_BLOCK_SIZE,
5062 +@@ -301,7 +301,7 @@ static struct shash_alg algs[] = {
5063 + .digestsize = CHKSUM_DIGEST_SIZE,
5064 + .base = {
5065 + .cra_name = "crc32c",
5066 +- .cra_driver_name = DRIVER_NAME,
5067 ++ .cra_driver_name = "stm32-crc32-crc32c",
5068 + .cra_priority = 200,
5069 + .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
5070 + .cra_blocksize = CHKSUM_BLOCK_SIZE,
5071 +diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
5072 +index 7999b26a16ed0..81eb136b6c11d 100644
5073 +--- a/drivers/crypto/stm32/stm32-cryp.c
5074 ++++ b/drivers/crypto/stm32/stm32-cryp.c
5075 +@@ -37,7 +37,6 @@
5076 + /* Mode mask = bits [15..0] */
5077 + #define FLG_MODE_MASK GENMASK(15, 0)
5078 + /* Bit [31..16] status */
5079 +-#define FLG_CCM_PADDED_WA BIT(16)
5080 +
5081 + /* Registers */
5082 + #define CRYP_CR 0x00000000
5083 +@@ -105,8 +104,6 @@
5084 + /* Misc */
5085 + #define AES_BLOCK_32 (AES_BLOCK_SIZE / sizeof(u32))
5086 + #define GCM_CTR_INIT 2
5087 +-#define _walked_in (cryp->in_walk.offset - cryp->in_sg->offset)
5088 +-#define _walked_out (cryp->out_walk.offset - cryp->out_sg->offset)
5089 + #define CRYP_AUTOSUSPEND_DELAY 50
5090 +
5091 + struct stm32_cryp_caps {
5092 +@@ -144,26 +141,16 @@ struct stm32_cryp {
5093 + size_t authsize;
5094 + size_t hw_blocksize;
5095 +
5096 +- size_t total_in;
5097 +- size_t total_in_save;
5098 +- size_t total_out;
5099 +- size_t total_out_save;
5100 ++ size_t payload_in;
5101 ++ size_t header_in;
5102 ++ size_t payload_out;
5103 +
5104 +- struct scatterlist *in_sg;
5105 + struct scatterlist *out_sg;
5106 +- struct scatterlist *out_sg_save;
5107 +-
5108 +- struct scatterlist in_sgl;
5109 +- struct scatterlist out_sgl;
5110 +- bool sgs_copied;
5111 +-
5112 +- int in_sg_len;
5113 +- int out_sg_len;
5114 +
5115 + struct scatter_walk in_walk;
5116 + struct scatter_walk out_walk;
5117 +
5118 +- u32 last_ctr[4];
5119 ++ __be32 last_ctr[4];
5120 + u32 gcm_ctr;
5121 + };
5122 +
5123 +@@ -262,6 +249,7 @@ static inline int stm32_cryp_wait_output(struct stm32_cryp *cryp)
5124 + }
5125 +
5126 + static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp);
5127 ++static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err);
5128 +
5129 + static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
5130 + {
5131 +@@ -283,103 +271,6 @@ static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
5132 + return cryp;
5133 + }
5134 +
5135 +-static int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total,
5136 +- size_t align)
5137 +-{
5138 +- int len = 0;
5139 +-
5140 +- if (!total)
5141 +- return 0;
5142 +-
5143 +- if (!IS_ALIGNED(total, align))
5144 +- return -EINVAL;
5145 +-
5146 +- while (sg) {
5147 +- if (!IS_ALIGNED(sg->offset, sizeof(u32)))
5148 +- return -EINVAL;
5149 +-
5150 +- if (!IS_ALIGNED(sg->length, align))
5151 +- return -EINVAL;
5152 +-
5153 +- len += sg->length;
5154 +- sg = sg_next(sg);
5155 +- }
5156 +-
5157 +- if (len != total)
5158 +- return -EINVAL;
5159 +-
5160 +- return 0;
5161 +-}
5162 +-
5163 +-static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp)
5164 +-{
5165 +- int ret;
5166 +-
5167 +- ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in,
5168 +- cryp->hw_blocksize);
5169 +- if (ret)
5170 +- return ret;
5171 +-
5172 +- ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out,
5173 +- cryp->hw_blocksize);
5174 +-
5175 +- return ret;
5176 +-}
5177 +-
5178 +-static void sg_copy_buf(void *buf, struct scatterlist *sg,
5179 +- unsigned int start, unsigned int nbytes, int out)
5180 +-{
5181 +- struct scatter_walk walk;
5182 +-
5183 +- if (!nbytes)
5184 +- return;
5185 +-
5186 +- scatterwalk_start(&walk, sg);
5187 +- scatterwalk_advance(&walk, start);
5188 +- scatterwalk_copychunks(buf, &walk, nbytes, out);
5189 +- scatterwalk_done(&walk, out, 0);
5190 +-}
5191 +-
5192 +-static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp)
5193 +-{
5194 +- void *buf_in, *buf_out;
5195 +- int pages, total_in, total_out;
5196 +-
5197 +- if (!stm32_cryp_check_io_aligned(cryp)) {
5198 +- cryp->sgs_copied = 0;
5199 +- return 0;
5200 +- }
5201 +-
5202 +- total_in = ALIGN(cryp->total_in, cryp->hw_blocksize);
5203 +- pages = total_in ? get_order(total_in) : 1;
5204 +- buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
5205 +-
5206 +- total_out = ALIGN(cryp->total_out, cryp->hw_blocksize);
5207 +- pages = total_out ? get_order(total_out) : 1;
5208 +- buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
5209 +-
5210 +- if (!buf_in || !buf_out) {
5211 +- dev_err(cryp->dev, "Can't allocate pages when unaligned\n");
5212 +- cryp->sgs_copied = 0;
5213 +- return -EFAULT;
5214 +- }
5215 +-
5216 +- sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0);
5217 +-
5218 +- sg_init_one(&cryp->in_sgl, buf_in, total_in);
5219 +- cryp->in_sg = &cryp->in_sgl;
5220 +- cryp->in_sg_len = 1;
5221 +-
5222 +- sg_init_one(&cryp->out_sgl, buf_out, total_out);
5223 +- cryp->out_sg_save = cryp->out_sg;
5224 +- cryp->out_sg = &cryp->out_sgl;
5225 +- cryp->out_sg_len = 1;
5226 +-
5227 +- cryp->sgs_copied = 1;
5228 +-
5229 +- return 0;
5230 +-}
5231 +-
5232 + static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, __be32 *iv)
5233 + {
5234 + if (!iv)
5235 +@@ -481,16 +372,99 @@ static int stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg)
5236 +
5237 + /* Wait for end of processing */
5238 + ret = stm32_cryp_wait_enable(cryp);
5239 +- if (ret)
5240 ++ if (ret) {
5241 + dev_err(cryp->dev, "Timeout (gcm init)\n");
5242 ++ return ret;
5243 ++ }
5244 +
5245 +- return ret;
5246 ++ /* Prepare next phase */
5247 ++ if (cryp->areq->assoclen) {
5248 ++ cfg |= CR_PH_HEADER;
5249 ++ stm32_cryp_write(cryp, CRYP_CR, cfg);
5250 ++ } else if (stm32_cryp_get_input_text_len(cryp)) {
5251 ++ cfg |= CR_PH_PAYLOAD;
5252 ++ stm32_cryp_write(cryp, CRYP_CR, cfg);
5253 ++ }
5254 ++
5255 ++ return 0;
5256 ++}
5257 ++
5258 ++static void stm32_crypt_gcmccm_end_header(struct stm32_cryp *cryp)
5259 ++{
5260 ++ u32 cfg;
5261 ++ int err;
5262 ++
5263 ++ /* Check if whole header written */
5264 ++ if (!cryp->header_in) {
5265 ++ /* Wait for completion */
5266 ++ err = stm32_cryp_wait_busy(cryp);
5267 ++ if (err) {
5268 ++ dev_err(cryp->dev, "Timeout (gcm/ccm header)\n");
5269 ++ stm32_cryp_write(cryp, CRYP_IMSCR, 0);
5270 ++ stm32_cryp_finish_req(cryp, err);
5271 ++ return;
5272 ++ }
5273 ++
5274 ++ if (stm32_cryp_get_input_text_len(cryp)) {
5275 ++ /* Phase 3 : payload */
5276 ++ cfg = stm32_cryp_read(cryp, CRYP_CR);
5277 ++ cfg &= ~CR_CRYPEN;
5278 ++ stm32_cryp_write(cryp, CRYP_CR, cfg);
5279 ++
5280 ++ cfg &= ~CR_PH_MASK;
5281 ++ cfg |= CR_PH_PAYLOAD | CR_CRYPEN;
5282 ++ stm32_cryp_write(cryp, CRYP_CR, cfg);
5283 ++ } else {
5284 ++ /*
5285 ++ * Phase 4 : tag.
5286 ++ * Nothing to read, nothing to write, caller have to
5287 ++ * end request
5288 ++ */
5289 ++ }
5290 ++ }
5291 ++}
5292 ++
5293 ++static void stm32_cryp_write_ccm_first_header(struct stm32_cryp *cryp)
5294 ++{
5295 ++ unsigned int i;
5296 ++ size_t written;
5297 ++ size_t len;
5298 ++ u32 alen = cryp->areq->assoclen;
5299 ++ u32 block[AES_BLOCK_32] = {0};
5300 ++ u8 *b8 = (u8 *)block;
5301 ++
5302 ++ if (alen <= 65280) {
5303 ++ /* Write first u32 of B1 */
5304 ++ b8[0] = (alen >> 8) & 0xFF;
5305 ++ b8[1] = alen & 0xFF;
5306 ++ len = 2;
5307 ++ } else {
5308 ++ /* Build the two first u32 of B1 */
5309 ++ b8[0] = 0xFF;
5310 ++ b8[1] = 0xFE;
5311 ++ b8[2] = (alen & 0xFF000000) >> 24;
5312 ++ b8[3] = (alen & 0x00FF0000) >> 16;
5313 ++ b8[4] = (alen & 0x0000FF00) >> 8;
5314 ++ b8[5] = alen & 0x000000FF;
5315 ++ len = 6;
5316 ++ }
5317 ++
5318 ++ written = min_t(size_t, AES_BLOCK_SIZE - len, alen);
5319 ++
5320 ++ scatterwalk_copychunks((char *)block + len, &cryp->in_walk, written, 0);
5321 ++ for (i = 0; i < AES_BLOCK_32; i++)
5322 ++ stm32_cryp_write(cryp, CRYP_DIN, block[i]);
5323 ++
5324 ++ cryp->header_in -= written;
5325 ++
5326 ++ stm32_crypt_gcmccm_end_header(cryp);
5327 + }
5328 +
5329 + static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
5330 + {
5331 + int ret;
5332 +- u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE];
5333 ++ u32 iv_32[AES_BLOCK_32], b0_32[AES_BLOCK_32];
5334 ++ u8 *iv = (u8 *)iv_32, *b0 = (u8 *)b0_32;
5335 + __be32 *bd;
5336 + u32 *d;
5337 + unsigned int i, textlen;
5338 +@@ -531,10 +505,24 @@ static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg)
5339 +
5340 + /* Wait for end of processing */
5341 + ret = stm32_cryp_wait_enable(cryp);
5342 +- if (ret)
5343 ++ if (ret) {
5344 + dev_err(cryp->dev, "Timeout (ccm init)\n");
5345 ++ return ret;
5346 ++ }
5347 +
5348 +- return ret;
5349 ++ /* Prepare next phase */
5350 ++ if (cryp->areq->assoclen) {
5351 ++ cfg |= CR_PH_HEADER | CR_CRYPEN;
5352 ++ stm32_cryp_write(cryp, CRYP_CR, cfg);
5353 ++
5354 ++ /* Write first (special) block (may move to next phase [payload]) */
5355 ++ stm32_cryp_write_ccm_first_header(cryp);
5356 ++ } else if (stm32_cryp_get_input_text_len(cryp)) {
5357 ++ cfg |= CR_PH_PAYLOAD;
5358 ++ stm32_cryp_write(cryp, CRYP_CR, cfg);
5359 ++ }
5360 ++
5361 ++ return 0;
5362 + }
5363 +
5364 + static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
5365 +@@ -542,7 +530,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
5366 + int ret;
5367 + u32 cfg, hw_mode;
5368 +
5369 +- pm_runtime_resume_and_get(cryp->dev);
5370 ++ pm_runtime_get_sync(cryp->dev);
5371 +
5372 + /* Disable interrupt */
5373 + stm32_cryp_write(cryp, CRYP_IMSCR, 0);
5374 +@@ -605,16 +593,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
5375 + if (ret)
5376 + return ret;
5377 +
5378 +- /* Phase 2 : header (authenticated data) */
5379 +- if (cryp->areq->assoclen) {
5380 +- cfg |= CR_PH_HEADER;
5381 +- } else if (stm32_cryp_get_input_text_len(cryp)) {
5382 +- cfg |= CR_PH_PAYLOAD;
5383 +- stm32_cryp_write(cryp, CRYP_CR, cfg);
5384 +- } else {
5385 +- cfg |= CR_PH_INIT;
5386 +- }
5387 +-
5388 + break;
5389 +
5390 + case CR_DES_CBC:
5391 +@@ -633,8 +611,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
5392 +
5393 + stm32_cryp_write(cryp, CRYP_CR, cfg);
5394 +
5395 +- cryp->flags &= ~FLG_CCM_PADDED_WA;
5396 +-
5397 + return 0;
5398 + }
5399 +
5400 +@@ -644,28 +620,9 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err)
5401 + /* Phase 4 : output tag */
5402 + err = stm32_cryp_read_auth_tag(cryp);
5403 +
5404 +- if (!err && (!(is_gcm(cryp) || is_ccm(cryp))))
5405 ++ if (!err && (!(is_gcm(cryp) || is_ccm(cryp) || is_ecb(cryp))))
5406 + stm32_cryp_get_iv(cryp);
5407 +
5408 +- if (cryp->sgs_copied) {
5409 +- void *buf_in, *buf_out;
5410 +- int pages, len;
5411 +-
5412 +- buf_in = sg_virt(&cryp->in_sgl);
5413 +- buf_out = sg_virt(&cryp->out_sgl);
5414 +-
5415 +- sg_copy_buf(buf_out, cryp->out_sg_save, 0,
5416 +- cryp->total_out_save, 1);
5417 +-
5418 +- len = ALIGN(cryp->total_in_save, cryp->hw_blocksize);
5419 +- pages = len ? get_order(len) : 1;
5420 +- free_pages((unsigned long)buf_in, pages);
5421 +-
5422 +- len = ALIGN(cryp->total_out_save, cryp->hw_blocksize);
5423 +- pages = len ? get_order(len) : 1;
5424 +- free_pages((unsigned long)buf_out, pages);
5425 +- }
5426 +-
5427 + pm_runtime_mark_last_busy(cryp->dev);
5428 + pm_runtime_put_autosuspend(cryp->dev);
5429 +
5430 +@@ -674,8 +631,6 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err)
5431 + else
5432 + crypto_finalize_skcipher_request(cryp->engine, cryp->req,
5433 + err);
5434 +-
5435 +- memset(cryp->ctx->key, 0, cryp->ctx->keylen);
5436 + }
5437 +
5438 + static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
5439 +@@ -801,7 +756,20 @@ static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
5440 + static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
5441 + unsigned int authsize)
5442 + {
5443 +- return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
5444 ++ switch (authsize) {
5445 ++ case 4:
5446 ++ case 8:
5447 ++ case 12:
5448 ++ case 13:
5449 ++ case 14:
5450 ++ case 15:
5451 ++ case 16:
5452 ++ break;
5453 ++ default:
5454 ++ return -EINVAL;
5455 ++ }
5456 ++
5457 ++ return 0;
5458 + }
5459 +
5460 + static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
5461 +@@ -825,31 +793,61 @@ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
5462 +
5463 + static int stm32_cryp_aes_ecb_encrypt(struct skcipher_request *req)
5464 + {
5465 ++ if (req->cryptlen % AES_BLOCK_SIZE)
5466 ++ return -EINVAL;
5467 ++
5468 ++ if (req->cryptlen == 0)
5469 ++ return 0;
5470 ++
5471 + return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
5472 + }
5473 +
5474 + static int stm32_cryp_aes_ecb_decrypt(struct skcipher_request *req)
5475 + {
5476 ++ if (req->cryptlen % AES_BLOCK_SIZE)
5477 ++ return -EINVAL;
5478 ++
5479 ++ if (req->cryptlen == 0)
5480 ++ return 0;
5481 ++
5482 + return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
5483 + }
5484 +
5485 + static int stm32_cryp_aes_cbc_encrypt(struct skcipher_request *req)
5486 + {
5487 ++ if (req->cryptlen % AES_BLOCK_SIZE)
5488 ++ return -EINVAL;
5489 ++
5490 ++ if (req->cryptlen == 0)
5491 ++ return 0;
5492 ++
5493 + return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
5494 + }
5495 +
5496 + static int stm32_cryp_aes_cbc_decrypt(struct skcipher_request *req)
5497 + {
5498 ++ if (req->cryptlen % AES_BLOCK_SIZE)
5499 ++ return -EINVAL;
5500 ++
5501 ++ if (req->cryptlen == 0)
5502 ++ return 0;
5503 ++
5504 + return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
5505 + }
5506 +
5507 + static int stm32_cryp_aes_ctr_encrypt(struct skcipher_request *req)
5508 + {
5509 ++ if (req->cryptlen == 0)
5510 ++ return 0;
5511 ++
5512 + return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
5513 + }
5514 +
5515 + static int stm32_cryp_aes_ctr_decrypt(struct skcipher_request *req)
5516 + {
5517 ++ if (req->cryptlen == 0)
5518 ++ return 0;
5519 ++
5520 + return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
5521 + }
5522 +
5523 +@@ -863,53 +861,122 @@ static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
5524 + return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
5525 + }
5526 +
5527 ++static inline int crypto_ccm_check_iv(const u8 *iv)
5528 ++{
5529 ++ /* 2 <= L <= 8, so 1 <= L' <= 7. */
5530 ++ if (iv[0] < 1 || iv[0] > 7)
5531 ++ return -EINVAL;
5532 ++
5533 ++ return 0;
5534 ++}
5535 ++
5536 + static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
5537 + {
5538 ++ int err;
5539 ++
5540 ++ err = crypto_ccm_check_iv(req->iv);
5541 ++ if (err)
5542 ++ return err;
5543 ++
5544 + return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
5545 + }
5546 +
5547 + static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
5548 + {
5549 ++ int err;
5550 ++
5551 ++ err = crypto_ccm_check_iv(req->iv);
5552 ++ if (err)
5553 ++ return err;
5554 ++
5555 + return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
5556 + }
5557 +
5558 + static int stm32_cryp_des_ecb_encrypt(struct skcipher_request *req)
5559 + {
5560 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5561 ++ return -EINVAL;
5562 ++
5563 ++ if (req->cryptlen == 0)
5564 ++ return 0;
5565 ++
5566 + return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
5567 + }
5568 +
5569 + static int stm32_cryp_des_ecb_decrypt(struct skcipher_request *req)
5570 + {
5571 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5572 ++ return -EINVAL;
5573 ++
5574 ++ if (req->cryptlen == 0)
5575 ++ return 0;
5576 ++
5577 + return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
5578 + }
5579 +
5580 + static int stm32_cryp_des_cbc_encrypt(struct skcipher_request *req)
5581 + {
5582 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5583 ++ return -EINVAL;
5584 ++
5585 ++ if (req->cryptlen == 0)
5586 ++ return 0;
5587 ++
5588 + return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
5589 + }
5590 +
5591 + static int stm32_cryp_des_cbc_decrypt(struct skcipher_request *req)
5592 + {
5593 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5594 ++ return -EINVAL;
5595 ++
5596 ++ if (req->cryptlen == 0)
5597 ++ return 0;
5598 ++
5599 + return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
5600 + }
5601 +
5602 + static int stm32_cryp_tdes_ecb_encrypt(struct skcipher_request *req)
5603 + {
5604 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5605 ++ return -EINVAL;
5606 ++
5607 ++ if (req->cryptlen == 0)
5608 ++ return 0;
5609 ++
5610 + return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
5611 + }
5612 +
5613 + static int stm32_cryp_tdes_ecb_decrypt(struct skcipher_request *req)
5614 + {
5615 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5616 ++ return -EINVAL;
5617 ++
5618 ++ if (req->cryptlen == 0)
5619 ++ return 0;
5620 ++
5621 + return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
5622 + }
5623 +
5624 + static int stm32_cryp_tdes_cbc_encrypt(struct skcipher_request *req)
5625 + {
5626 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5627 ++ return -EINVAL;
5628 ++
5629 ++ if (req->cryptlen == 0)
5630 ++ return 0;
5631 ++
5632 + return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
5633 + }
5634 +
5635 + static int stm32_cryp_tdes_cbc_decrypt(struct skcipher_request *req)
5636 + {
5637 ++ if (req->cryptlen % DES_BLOCK_SIZE)
5638 ++ return -EINVAL;
5639 ++
5640 ++ if (req->cryptlen == 0)
5641 ++ return 0;
5642 ++
5643 + return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
5644 + }
5645 +
5646 +@@ -919,6 +986,7 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req,
5647 + struct stm32_cryp_ctx *ctx;
5648 + struct stm32_cryp *cryp;
5649 + struct stm32_cryp_reqctx *rctx;
5650 ++ struct scatterlist *in_sg;
5651 + int ret;
5652 +
5653 + if (!req && !areq)
5654 +@@ -944,76 +1012,55 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req,
5655 + if (req) {
5656 + cryp->req = req;
5657 + cryp->areq = NULL;
5658 +- cryp->total_in = req->cryptlen;
5659 +- cryp->total_out = cryp->total_in;
5660 ++ cryp->header_in = 0;
5661 ++ cryp->payload_in = req->cryptlen;
5662 ++ cryp->payload_out = req->cryptlen;
5663 ++ cryp->authsize = 0;
5664 + } else {
5665 + /*
5666 + * Length of input and output data:
5667 + * Encryption case:
5668 +- * INPUT = AssocData || PlainText
5669 ++ * INPUT = AssocData || PlainText
5670 + * <- assoclen -> <- cryptlen ->
5671 +- * <------- total_in ----------->
5672 + *
5673 +- * OUTPUT = AssocData || CipherText || AuthTag
5674 +- * <- assoclen -> <- cryptlen -> <- authsize ->
5675 +- * <---------------- total_out ----------------->
5676 ++ * OUTPUT = AssocData || CipherText || AuthTag
5677 ++ * <- assoclen -> <-- cryptlen --> <- authsize ->
5678 + *
5679 + * Decryption case:
5680 +- * INPUT = AssocData || CipherText || AuthTag
5681 +- * <- assoclen -> <--------- cryptlen --------->
5682 +- * <- authsize ->
5683 +- * <---------------- total_in ------------------>
5684 ++ * INPUT = AssocData || CipherTex || AuthTag
5685 ++ * <- assoclen ---> <---------- cryptlen ---------->
5686 + *
5687 +- * OUTPUT = AssocData || PlainText
5688 +- * <- assoclen -> <- crypten - authsize ->
5689 +- * <---------- total_out ----------------->
5690 ++ * OUTPUT = AssocData || PlainText
5691 ++ * <- assoclen -> <- cryptlen - authsize ->
5692 + */
5693 + cryp->areq = areq;
5694 + cryp->req = NULL;
5695 + cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
5696 +- cryp->total_in = areq->assoclen + areq->cryptlen;
5697 +- if (is_encrypt(cryp))
5698 +- /* Append auth tag to output */
5699 +- cryp->total_out = cryp->total_in + cryp->authsize;
5700 +- else
5701 +- /* No auth tag in output */
5702 +- cryp->total_out = cryp->total_in - cryp->authsize;
5703 ++ if (is_encrypt(cryp)) {
5704 ++ cryp->payload_in = areq->cryptlen;
5705 ++ cryp->header_in = areq->assoclen;
5706 ++ cryp->payload_out = areq->cryptlen;
5707 ++ } else {
5708 ++ cryp->payload_in = areq->cryptlen - cryp->authsize;
5709 ++ cryp->header_in = areq->assoclen;
5710 ++ cryp->payload_out = cryp->payload_in;
5711 ++ }
5712 + }
5713 +
5714 +- cryp->total_in_save = cryp->total_in;
5715 +- cryp->total_out_save = cryp->total_out;
5716 ++ in_sg = req ? req->src : areq->src;
5717 ++ scatterwalk_start(&cryp->in_walk, in_sg);
5718 +
5719 +- cryp->in_sg = req ? req->src : areq->src;
5720 + cryp->out_sg = req ? req->dst : areq->dst;
5721 +- cryp->out_sg_save = cryp->out_sg;
5722 +-
5723 +- cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in);
5724 +- if (cryp->in_sg_len < 0) {
5725 +- dev_err(cryp->dev, "Cannot get in_sg_len\n");
5726 +- ret = cryp->in_sg_len;
5727 +- return ret;
5728 +- }
5729 +-
5730 +- cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out);
5731 +- if (cryp->out_sg_len < 0) {
5732 +- dev_err(cryp->dev, "Cannot get out_sg_len\n");
5733 +- ret = cryp->out_sg_len;
5734 +- return ret;
5735 +- }
5736 +-
5737 +- ret = stm32_cryp_copy_sgs(cryp);
5738 +- if (ret)
5739 +- return ret;
5740 +-
5741 +- scatterwalk_start(&cryp->in_walk, cryp->in_sg);
5742 + scatterwalk_start(&cryp->out_walk, cryp->out_sg);
5743 +
5744 + if (is_gcm(cryp) || is_ccm(cryp)) {
5745 + /* In output, jump after assoc data */
5746 +- scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen);
5747 +- cryp->total_out -= cryp->areq->assoclen;
5748 ++ scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->areq->assoclen, 2);
5749 + }
5750 +
5751 ++ if (is_ctr(cryp))
5752 ++ memset(cryp->last_ctr, 0, sizeof(cryp->last_ctr));
5753 ++
5754 + ret = stm32_cryp_hw_init(cryp);
5755 + return ret;
5756 + }
5757 +@@ -1061,8 +1108,7 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq)
5758 + if (!cryp)
5759 + return -ENODEV;
5760 +
5761 +- if (unlikely(!cryp->areq->assoclen &&
5762 +- !stm32_cryp_get_input_text_len(cryp))) {
5763 ++ if (unlikely(!cryp->payload_in && !cryp->header_in)) {
5764 + /* No input data to process: get tag and finish */
5765 + stm32_cryp_finish_req(cryp, 0);
5766 + return 0;
5767 +@@ -1071,43 +1117,10 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq)
5768 + return stm32_cryp_cpu_start(cryp);
5769 + }
5770 +
5771 +-static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst,
5772 +- unsigned int n)
5773 +-{
5774 +- scatterwalk_advance(&cryp->out_walk, n);
5775 +-
5776 +- if (unlikely(cryp->out_sg->length == _walked_out)) {
5777 +- cryp->out_sg = sg_next(cryp->out_sg);
5778 +- if (cryp->out_sg) {
5779 +- scatterwalk_start(&cryp->out_walk, cryp->out_sg);
5780 +- return (sg_virt(cryp->out_sg) + _walked_out);
5781 +- }
5782 +- }
5783 +-
5784 +- return (u32 *)((u8 *)dst + n);
5785 +-}
5786 +-
5787 +-static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src,
5788 +- unsigned int n)
5789 +-{
5790 +- scatterwalk_advance(&cryp->in_walk, n);
5791 +-
5792 +- if (unlikely(cryp->in_sg->length == _walked_in)) {
5793 +- cryp->in_sg = sg_next(cryp->in_sg);
5794 +- if (cryp->in_sg) {
5795 +- scatterwalk_start(&cryp->in_walk, cryp->in_sg);
5796 +- return (sg_virt(cryp->in_sg) + _walked_in);
5797 +- }
5798 +- }
5799 +-
5800 +- return (u32 *)((u8 *)src + n);
5801 +-}
5802 +-
5803 + static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
5804 + {
5805 +- u32 cfg, size_bit, *dst, d32;
5806 +- u8 *d8;
5807 +- unsigned int i, j;
5808 ++ u32 cfg, size_bit;
5809 ++ unsigned int i;
5810 + int ret = 0;
5811 +
5812 + /* Update Config */
5813 +@@ -1130,7 +1143,7 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
5814 + stm32_cryp_write(cryp, CRYP_DIN, size_bit);
5815 +
5816 + size_bit = is_encrypt(cryp) ? cryp->areq->cryptlen :
5817 +- cryp->areq->cryptlen - AES_BLOCK_SIZE;
5818 ++ cryp->areq->cryptlen - cryp->authsize;
5819 + size_bit *= 8;
5820 + if (cryp->caps->swap_final)
5821 + size_bit = (__force u32)cpu_to_be32(size_bit);
5822 +@@ -1139,11 +1152,9 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
5823 + stm32_cryp_write(cryp, CRYP_DIN, size_bit);
5824 + } else {
5825 + /* CCM: write CTR0 */
5826 +- u8 iv[AES_BLOCK_SIZE];
5827 +- u32 *iv32 = (u32 *)iv;
5828 +- __be32 *biv;
5829 +-
5830 +- biv = (void *)iv;
5831 ++ u32 iv32[AES_BLOCK_32];
5832 ++ u8 *iv = (u8 *)iv32;
5833 ++ __be32 *biv = (__be32 *)iv32;
5834 +
5835 + memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE);
5836 + memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1);
5837 +@@ -1165,39 +1176,18 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp)
5838 + }
5839 +
5840 + if (is_encrypt(cryp)) {
5841 ++ u32 out_tag[AES_BLOCK_32];
5842 ++
5843 + /* Get and write tag */
5844 +- dst = sg_virt(cryp->out_sg) + _walked_out;
5845 ++ for (i = 0; i < AES_BLOCK_32; i++)
5846 ++ out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
5847 +
5848 +- for (i = 0; i < AES_BLOCK_32; i++) {
5849 +- if (cryp->total_out >= sizeof(u32)) {
5850 +- /* Read a full u32 */
5851 +- *dst = stm32_cryp_read(cryp, CRYP_DOUT);
5852 +-
5853 +- dst = stm32_cryp_next_out(cryp, dst,
5854 +- sizeof(u32));
5855 +- cryp->total_out -= sizeof(u32);
5856 +- } else if (!cryp->total_out) {
5857 +- /* Empty fifo out (data from input padding) */
5858 +- stm32_cryp_read(cryp, CRYP_DOUT);
5859 +- } else {
5860 +- /* Read less than an u32 */
5861 +- d32 = stm32_cryp_read(cryp, CRYP_DOUT);
5862 +- d8 = (u8 *)&d32;
5863 +-
5864 +- for (j = 0; j < cryp->total_out; j++) {
5865 +- *((u8 *)dst) = *(d8++);
5866 +- dst = stm32_cryp_next_out(cryp, dst, 1);
5867 +- }
5868 +- cryp->total_out = 0;
5869 +- }
5870 +- }
5871 ++ scatterwalk_copychunks(out_tag, &cryp->out_walk, cryp->authsize, 1);
5872 + } else {
5873 + /* Get and check tag */
5874 + u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32];
5875 +
5876 +- scatterwalk_map_and_copy(in_tag, cryp->in_sg,
5877 +- cryp->total_in_save - cryp->authsize,
5878 +- cryp->authsize, 0);
5879 ++ scatterwalk_copychunks(in_tag, &cryp->in_walk, cryp->authsize, 0);
5880 +
5881 + for (i = 0; i < AES_BLOCK_32; i++)
5882 + out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT);
5883 +@@ -1217,115 +1207,59 @@ static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp)
5884 + {
5885 + u32 cr;
5886 +
5887 +- if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) {
5888 +- cryp->last_ctr[3] = 0;
5889 +- cryp->last_ctr[2]++;
5890 +- if (!cryp->last_ctr[2]) {
5891 +- cryp->last_ctr[1]++;
5892 +- if (!cryp->last_ctr[1])
5893 +- cryp->last_ctr[0]++;
5894 +- }
5895 ++ if (unlikely(cryp->last_ctr[3] == cpu_to_be32(0xFFFFFFFF))) {
5896 ++ /*
5897 ++ * In this case, we need to increment manually the ctr counter,
5898 ++ * as HW doesn't handle the U32 carry.
5899 ++ */
5900 ++ crypto_inc((u8 *)cryp->last_ctr, sizeof(cryp->last_ctr));
5901 +
5902 + cr = stm32_cryp_read(cryp, CRYP_CR);
5903 + stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN);
5904 +
5905 +- stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->last_ctr);
5906 ++ stm32_cryp_hw_write_iv(cryp, cryp->last_ctr);
5907 +
5908 + stm32_cryp_write(cryp, CRYP_CR, cr);
5909 + }
5910 +
5911 +- cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR);
5912 +- cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR);
5913 +- cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR);
5914 +- cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR);
5915 ++ /* The IV registers are BE */
5916 ++ cryp->last_ctr[0] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0LR));
5917 ++ cryp->last_ctr[1] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0RR));
5918 ++ cryp->last_ctr[2] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1LR));
5919 ++ cryp->last_ctr[3] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1RR));
5920 + }
5921 +
5922 +-static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
5923 ++static void stm32_cryp_irq_read_data(struct stm32_cryp *cryp)
5924 + {
5925 +- unsigned int i, j;
5926 +- u32 d32, *dst;
5927 +- u8 *d8;
5928 +- size_t tag_size;
5929 +-
5930 +- /* Do no read tag now (if any) */
5931 +- if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
5932 +- tag_size = cryp->authsize;
5933 +- else
5934 +- tag_size = 0;
5935 +-
5936 +- dst = sg_virt(cryp->out_sg) + _walked_out;
5937 ++ unsigned int i;
5938 ++ u32 block[AES_BLOCK_32];
5939 +
5940 +- for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
5941 +- if (likely(cryp->total_out - tag_size >= sizeof(u32))) {
5942 +- /* Read a full u32 */
5943 +- *dst = stm32_cryp_read(cryp, CRYP_DOUT);
5944 ++ for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
5945 ++ block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
5946 +
5947 +- dst = stm32_cryp_next_out(cryp, dst, sizeof(u32));
5948 +- cryp->total_out -= sizeof(u32);
5949 +- } else if (cryp->total_out == tag_size) {
5950 +- /* Empty fifo out (data from input padding) */
5951 +- d32 = stm32_cryp_read(cryp, CRYP_DOUT);
5952 +- } else {
5953 +- /* Read less than an u32 */
5954 +- d32 = stm32_cryp_read(cryp, CRYP_DOUT);
5955 +- d8 = (u8 *)&d32;
5956 +-
5957 +- for (j = 0; j < cryp->total_out - tag_size; j++) {
5958 +- *((u8 *)dst) = *(d8++);
5959 +- dst = stm32_cryp_next_out(cryp, dst, 1);
5960 +- }
5961 +- cryp->total_out = tag_size;
5962 +- }
5963 +- }
5964 +-
5965 +- return !(cryp->total_out - tag_size) || !cryp->total_in;
5966 ++ scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
5967 ++ cryp->payload_out), 1);
5968 ++ cryp->payload_out -= min_t(size_t, cryp->hw_blocksize,
5969 ++ cryp->payload_out);
5970 + }
5971 +
5972 + static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp)
5973 + {
5974 +- unsigned int i, j;
5975 +- u32 *src;
5976 +- u8 d8[4];
5977 +- size_t tag_size;
5978 +-
5979 +- /* Do no write tag (if any) */
5980 +- if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp)))
5981 +- tag_size = cryp->authsize;
5982 +- else
5983 +- tag_size = 0;
5984 +-
5985 +- src = sg_virt(cryp->in_sg) + _walked_in;
5986 ++ unsigned int i;
5987 ++ u32 block[AES_BLOCK_32] = {0};
5988 +
5989 +- for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) {
5990 +- if (likely(cryp->total_in - tag_size >= sizeof(u32))) {
5991 +- /* Write a full u32 */
5992 +- stm32_cryp_write(cryp, CRYP_DIN, *src);
5993 ++ scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize,
5994 ++ cryp->payload_in), 0);
5995 ++ for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
5996 ++ stm32_cryp_write(cryp, CRYP_DIN, block[i]);
5997 +
5998 +- src = stm32_cryp_next_in(cryp, src, sizeof(u32));
5999 +- cryp->total_in -= sizeof(u32);
6000 +- } else if (cryp->total_in == tag_size) {
6001 +- /* Write padding data */
6002 +- stm32_cryp_write(cryp, CRYP_DIN, 0);
6003 +- } else {
6004 +- /* Write less than an u32 */
6005 +- memset(d8, 0, sizeof(u32));
6006 +- for (j = 0; j < cryp->total_in - tag_size; j++) {
6007 +- d8[j] = *((u8 *)src);
6008 +- src = stm32_cryp_next_in(cryp, src, 1);
6009 +- }
6010 +-
6011 +- stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
6012 +- cryp->total_in = tag_size;
6013 +- }
6014 +- }
6015 ++ cryp->payload_in -= min_t(size_t, cryp->hw_blocksize, cryp->payload_in);
6016 + }
6017 +
6018 + static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
6019 + {
6020 + int err;
6021 +- u32 cfg, tmp[AES_BLOCK_32];
6022 +- size_t total_in_ori = cryp->total_in;
6023 +- struct scatterlist *out_sg_ori = cryp->out_sg;
6024 ++ u32 cfg, block[AES_BLOCK_32] = {0};
6025 + unsigned int i;
6026 +
6027 + /* 'Special workaround' procedure described in the datasheet */
6028 +@@ -1350,18 +1284,25 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
6029 +
6030 + /* b) pad and write the last block */
6031 + stm32_cryp_irq_write_block(cryp);
6032 +- cryp->total_in = total_in_ori;
6033 ++ /* wait end of process */
6034 + err = stm32_cryp_wait_output(cryp);
6035 + if (err) {
6036 +- dev_err(cryp->dev, "Timeout (write gcm header)\n");
6037 ++ dev_err(cryp->dev, "Timeout (write gcm last data)\n");
6038 + return stm32_cryp_finish_req(cryp, err);
6039 + }
6040 +
6041 + /* c) get and store encrypted data */
6042 +- stm32_cryp_irq_read_data(cryp);
6043 +- scatterwalk_map_and_copy(tmp, out_sg_ori,
6044 +- cryp->total_in_save - total_in_ori,
6045 +- total_in_ori, 0);
6046 ++ /*
6047 ++ * Same code as stm32_cryp_irq_read_data(), but we want to store
6048 ++ * block value
6049 ++ */
6050 ++ for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
6051 ++ block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
6052 ++
6053 ++ scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
6054 ++ cryp->payload_out), 1);
6055 ++ cryp->payload_out -= min_t(size_t, cryp->hw_blocksize,
6056 ++ cryp->payload_out);
6057 +
6058 + /* d) change mode back to AES GCM */
6059 + cfg &= ~CR_ALGO_MASK;
6060 +@@ -1374,19 +1315,13 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
6061 + stm32_cryp_write(cryp, CRYP_CR, cfg);
6062 +
6063 + /* f) write padded data */
6064 +- for (i = 0; i < AES_BLOCK_32; i++) {
6065 +- if (cryp->total_in)
6066 +- stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
6067 +- else
6068 +- stm32_cryp_write(cryp, CRYP_DIN, 0);
6069 +-
6070 +- cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
6071 +- }
6072 ++ for (i = 0; i < AES_BLOCK_32; i++)
6073 ++ stm32_cryp_write(cryp, CRYP_DIN, block[i]);
6074 +
6075 + /* g) Empty fifo out */
6076 + err = stm32_cryp_wait_output(cryp);
6077 + if (err) {
6078 +- dev_err(cryp->dev, "Timeout (write gcm header)\n");
6079 ++ dev_err(cryp->dev, "Timeout (write gcm padded data)\n");
6080 + return stm32_cryp_finish_req(cryp, err);
6081 + }
6082 +
6083 +@@ -1399,16 +1334,14 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp)
6084 +
6085 + static void stm32_cryp_irq_set_npblb(struct stm32_cryp *cryp)
6086 + {
6087 +- u32 cfg, payload_bytes;
6088 ++ u32 cfg;
6089 +
6090 + /* disable ip, set NPBLB and reneable ip */
6091 + cfg = stm32_cryp_read(cryp, CRYP_CR);
6092 + cfg &= ~CR_CRYPEN;
6093 + stm32_cryp_write(cryp, CRYP_CR, cfg);
6094 +
6095 +- payload_bytes = is_decrypt(cryp) ? cryp->total_in - cryp->authsize :
6096 +- cryp->total_in;
6097 +- cfg |= (cryp->hw_blocksize - payload_bytes) << CR_NBPBL_SHIFT;
6098 ++ cfg |= (cryp->hw_blocksize - cryp->payload_in) << CR_NBPBL_SHIFT;
6099 + cfg |= CR_CRYPEN;
6100 + stm32_cryp_write(cryp, CRYP_CR, cfg);
6101 + }
6102 +@@ -1417,13 +1350,11 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
6103 + {
6104 + int err = 0;
6105 + u32 cfg, iv1tmp;
6106 +- u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
6107 +- size_t last_total_out, total_in_ori = cryp->total_in;
6108 +- struct scatterlist *out_sg_ori = cryp->out_sg;
6109 ++ u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32];
6110 ++ u32 block[AES_BLOCK_32] = {0};
6111 + unsigned int i;
6112 +
6113 + /* 'Special workaround' procedure described in the datasheet */
6114 +- cryp->flags |= FLG_CCM_PADDED_WA;
6115 +
6116 + /* a) disable ip */
6117 + stm32_cryp_write(cryp, CRYP_IMSCR, 0);
6118 +@@ -1453,7 +1384,7 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
6119 +
6120 + /* b) pad and write the last block */
6121 + stm32_cryp_irq_write_block(cryp);
6122 +- cryp->total_in = total_in_ori;
6123 ++ /* wait end of process */
6124 + err = stm32_cryp_wait_output(cryp);
6125 + if (err) {
6126 + dev_err(cryp->dev, "Timeout (wite ccm padded data)\n");
6127 +@@ -1461,13 +1392,16 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
6128 + }
6129 +
6130 + /* c) get and store decrypted data */
6131 +- last_total_out = cryp->total_out;
6132 +- stm32_cryp_irq_read_data(cryp);
6133 ++ /*
6134 ++ * Same code as stm32_cryp_irq_read_data(), but we want to store
6135 ++ * block value
6136 ++ */
6137 ++ for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
6138 ++ block[i] = stm32_cryp_read(cryp, CRYP_DOUT);
6139 +
6140 +- memset(tmp, 0, sizeof(tmp));
6141 +- scatterwalk_map_and_copy(tmp, out_sg_ori,
6142 +- cryp->total_out_save - last_total_out,
6143 +- last_total_out, 0);
6144 ++ scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
6145 ++ cryp->payload_out), 1);
6146 ++ cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out);
6147 +
6148 + /* d) Load again CRYP_CSGCMCCMxR */
6149 + for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
6150 +@@ -1484,10 +1418,10 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
6151 + stm32_cryp_write(cryp, CRYP_CR, cfg);
6152 +
6153 + /* g) XOR and write padded data */
6154 +- for (i = 0; i < ARRAY_SIZE(tmp); i++) {
6155 +- tmp[i] ^= cstmp1[i];
6156 +- tmp[i] ^= cstmp2[i];
6157 +- stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
6158 ++ for (i = 0; i < ARRAY_SIZE(block); i++) {
6159 ++ block[i] ^= cstmp1[i];
6160 ++ block[i] ^= cstmp2[i];
6161 ++ stm32_cryp_write(cryp, CRYP_DIN, block[i]);
6162 + }
6163 +
6164 + /* h) wait for completion */
6165 +@@ -1501,30 +1435,34 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
6166 +
6167 + static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
6168 + {
6169 +- if (unlikely(!cryp->total_in)) {
6170 ++ if (unlikely(!cryp->payload_in)) {
6171 + dev_warn(cryp->dev, "No more data to process\n");
6172 + return;
6173 + }
6174 +
6175 +- if (unlikely(cryp->total_in < AES_BLOCK_SIZE &&
6176 ++ if (unlikely(cryp->payload_in < AES_BLOCK_SIZE &&
6177 + (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
6178 + is_encrypt(cryp))) {
6179 + /* Padding for AES GCM encryption */
6180 +- if (cryp->caps->padding_wa)
6181 ++ if (cryp->caps->padding_wa) {
6182 + /* Special case 1 */
6183 +- return stm32_cryp_irq_write_gcm_padded_data(cryp);
6184 ++ stm32_cryp_irq_write_gcm_padded_data(cryp);
6185 ++ return;
6186 ++ }
6187 +
6188 + /* Setting padding bytes (NBBLB) */
6189 + stm32_cryp_irq_set_npblb(cryp);
6190 + }
6191 +
6192 +- if (unlikely((cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
6193 ++ if (unlikely((cryp->payload_in < AES_BLOCK_SIZE) &&
6194 + (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
6195 + is_decrypt(cryp))) {
6196 + /* Padding for AES CCM decryption */
6197 +- if (cryp->caps->padding_wa)
6198 ++ if (cryp->caps->padding_wa) {
6199 + /* Special case 2 */
6200 +- return stm32_cryp_irq_write_ccm_padded_data(cryp);
6201 ++ stm32_cryp_irq_write_ccm_padded_data(cryp);
6202 ++ return;
6203 ++ }
6204 +
6205 + /* Setting padding bytes (NBBLB) */
6206 + stm32_cryp_irq_set_npblb(cryp);
6207 +@@ -1536,192 +1474,60 @@ static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
6208 + stm32_cryp_irq_write_block(cryp);
6209 + }
6210 +
6211 +-static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
6212 ++static void stm32_cryp_irq_write_gcmccm_header(struct stm32_cryp *cryp)
6213 + {
6214 +- int err;
6215 +- unsigned int i, j;
6216 +- u32 cfg, *src;
6217 +-
6218 +- src = sg_virt(cryp->in_sg) + _walked_in;
6219 +-
6220 +- for (i = 0; i < AES_BLOCK_32; i++) {
6221 +- stm32_cryp_write(cryp, CRYP_DIN, *src);
6222 +-
6223 +- src = stm32_cryp_next_in(cryp, src, sizeof(u32));
6224 +- cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
6225 +-
6226 +- /* Check if whole header written */
6227 +- if ((cryp->total_in_save - cryp->total_in) ==
6228 +- cryp->areq->assoclen) {
6229 +- /* Write padding if needed */
6230 +- for (j = i + 1; j < AES_BLOCK_32; j++)
6231 +- stm32_cryp_write(cryp, CRYP_DIN, 0);
6232 +-
6233 +- /* Wait for completion */
6234 +- err = stm32_cryp_wait_busy(cryp);
6235 +- if (err) {
6236 +- dev_err(cryp->dev, "Timeout (gcm header)\n");
6237 +- return stm32_cryp_finish_req(cryp, err);
6238 +- }
6239 +-
6240 +- if (stm32_cryp_get_input_text_len(cryp)) {
6241 +- /* Phase 3 : payload */
6242 +- cfg = stm32_cryp_read(cryp, CRYP_CR);
6243 +- cfg &= ~CR_CRYPEN;
6244 +- stm32_cryp_write(cryp, CRYP_CR, cfg);
6245 +-
6246 +- cfg &= ~CR_PH_MASK;
6247 +- cfg |= CR_PH_PAYLOAD;
6248 +- cfg |= CR_CRYPEN;
6249 +- stm32_cryp_write(cryp, CRYP_CR, cfg);
6250 +- } else {
6251 +- /* Phase 4 : tag */
6252 +- stm32_cryp_write(cryp, CRYP_IMSCR, 0);
6253 +- stm32_cryp_finish_req(cryp, 0);
6254 +- }
6255 +-
6256 +- break;
6257 +- }
6258 +-
6259 +- if (!cryp->total_in)
6260 +- break;
6261 +- }
6262 +-}
6263 ++ unsigned int i;
6264 ++ u32 block[AES_BLOCK_32] = {0};
6265 ++ size_t written;
6266 +
6267 +-static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
6268 +-{
6269 +- int err;
6270 +- unsigned int i = 0, j, k;
6271 +- u32 alen, cfg, *src;
6272 +- u8 d8[4];
6273 +-
6274 +- src = sg_virt(cryp->in_sg) + _walked_in;
6275 +- alen = cryp->areq->assoclen;
6276 +-
6277 +- if (!_walked_in) {
6278 +- if (cryp->areq->assoclen <= 65280) {
6279 +- /* Write first u32 of B1 */
6280 +- d8[0] = (alen >> 8) & 0xFF;
6281 +- d8[1] = alen & 0xFF;
6282 +- d8[2] = *((u8 *)src);
6283 +- src = stm32_cryp_next_in(cryp, src, 1);
6284 +- d8[3] = *((u8 *)src);
6285 +- src = stm32_cryp_next_in(cryp, src, 1);
6286 +-
6287 +- stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
6288 +- i++;
6289 +-
6290 +- cryp->total_in -= min_t(size_t, 2, cryp->total_in);
6291 +- } else {
6292 +- /* Build the two first u32 of B1 */
6293 +- d8[0] = 0xFF;
6294 +- d8[1] = 0xFE;
6295 +- d8[2] = alen & 0xFF000000;
6296 +- d8[3] = alen & 0x00FF0000;
6297 +-
6298 +- stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
6299 +- i++;
6300 +-
6301 +- d8[0] = alen & 0x0000FF00;
6302 +- d8[1] = alen & 0x000000FF;
6303 +- d8[2] = *((u8 *)src);
6304 +- src = stm32_cryp_next_in(cryp, src, 1);
6305 +- d8[3] = *((u8 *)src);
6306 +- src = stm32_cryp_next_in(cryp, src, 1);
6307 +-
6308 +- stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
6309 +- i++;
6310 +-
6311 +- cryp->total_in -= min_t(size_t, 2, cryp->total_in);
6312 +- }
6313 +- }
6314 ++ written = min_t(size_t, AES_BLOCK_SIZE, cryp->header_in);
6315 +
6316 +- /* Write next u32 */
6317 +- for (; i < AES_BLOCK_32; i++) {
6318 +- /* Build an u32 */
6319 +- memset(d8, 0, sizeof(u32));
6320 +- for (k = 0; k < sizeof(u32); k++) {
6321 +- d8[k] = *((u8 *)src);
6322 +- src = stm32_cryp_next_in(cryp, src, 1);
6323 +-
6324 +- cryp->total_in -= min_t(size_t, 1, cryp->total_in);
6325 +- if ((cryp->total_in_save - cryp->total_in) == alen)
6326 +- break;
6327 +- }
6328 ++ scatterwalk_copychunks(block, &cryp->in_walk, written, 0);
6329 ++ for (i = 0; i < AES_BLOCK_32; i++)
6330 ++ stm32_cryp_write(cryp, CRYP_DIN, block[i]);
6331 +
6332 +- stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
6333 +-
6334 +- if ((cryp->total_in_save - cryp->total_in) == alen) {
6335 +- /* Write padding if needed */
6336 +- for (j = i + 1; j < AES_BLOCK_32; j++)
6337 +- stm32_cryp_write(cryp, CRYP_DIN, 0);
6338 +-
6339 +- /* Wait for completion */
6340 +- err = stm32_cryp_wait_busy(cryp);
6341 +- if (err) {
6342 +- dev_err(cryp->dev, "Timeout (ccm header)\n");
6343 +- return stm32_cryp_finish_req(cryp, err);
6344 +- }
6345 +-
6346 +- if (stm32_cryp_get_input_text_len(cryp)) {
6347 +- /* Phase 3 : payload */
6348 +- cfg = stm32_cryp_read(cryp, CRYP_CR);
6349 +- cfg &= ~CR_CRYPEN;
6350 +- stm32_cryp_write(cryp, CRYP_CR, cfg);
6351 +-
6352 +- cfg &= ~CR_PH_MASK;
6353 +- cfg |= CR_PH_PAYLOAD;
6354 +- cfg |= CR_CRYPEN;
6355 +- stm32_cryp_write(cryp, CRYP_CR, cfg);
6356 +- } else {
6357 +- /* Phase 4 : tag */
6358 +- stm32_cryp_write(cryp, CRYP_IMSCR, 0);
6359 +- stm32_cryp_finish_req(cryp, 0);
6360 +- }
6361 ++ cryp->header_in -= written;
6362 +
6363 +- break;
6364 +- }
6365 +- }
6366 ++ stm32_crypt_gcmccm_end_header(cryp);
6367 + }
6368 +
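The rewritten header path above replaces the two per-mode manual FIFO walkers with a single `scatterwalk_copychunks()` into a zero-padded block. A pure-memory model of that logic (the function name and the flat-buffer stand-ins for the scatterwalk and the `CRYP_DIN` FIFO writes are ours, not the driver's API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define AES_BLOCK_SIZE 16
#define AES_BLOCK_32   (AES_BLOCK_SIZE / 4)

/* Model of stm32_cryp_irq_write_gcmccm_header(): copy at most one AES
 * block of the remaining header bytes into a zeroed block (so the tail
 * is implicitly zero-padded), emit it as 4 x u32, and decrement the
 * remaining-header counter. */
static size_t write_header_block(const uint8_t *hdr, size_t *header_in,
                                 uint32_t out[AES_BLOCK_32])
{
    uint32_t block[AES_BLOCK_32] = {0};
    size_t written = *header_in < AES_BLOCK_SIZE ? *header_in
                                                 : AES_BLOCK_SIZE;

    memcpy(block, hdr, written);        /* scatterwalk_copychunks() */
    memcpy(out, block, AES_BLOCK_SIZE); /* 4 writes to CRYP_DIN */
    *header_in -= written;
    return written;
}
```

Because the staging block is zero-initialized, a short final header chunk is padded with zeros for free, which is what lets one function serve both the GCM and CCM header phases.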
6369 + static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
6370 + {
6371 + struct stm32_cryp *cryp = arg;
6372 + u32 ph;
6373 ++ u32 it_mask = stm32_cryp_read(cryp, CRYP_IMSCR);
6374 +
6375 + if (cryp->irq_status & MISR_OUT)
6376 + /* Output FIFO IRQ: read data */
6377 +- if (unlikely(stm32_cryp_irq_read_data(cryp))) {
6378 +- /* All bytes processed, finish */
6379 +- stm32_cryp_write(cryp, CRYP_IMSCR, 0);
6380 +- stm32_cryp_finish_req(cryp, 0);
6381 +- return IRQ_HANDLED;
6382 +- }
6383 ++ stm32_cryp_irq_read_data(cryp);
6384 +
6385 + if (cryp->irq_status & MISR_IN) {
6386 +- if (is_gcm(cryp)) {
6387 ++ if (is_gcm(cryp) || is_ccm(cryp)) {
6388 + ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
6389 + if (unlikely(ph == CR_PH_HEADER))
6390 + /* Write Header */
6391 +- stm32_cryp_irq_write_gcm_header(cryp);
6392 +- else
6393 +- /* Input FIFO IRQ: write data */
6394 +- stm32_cryp_irq_write_data(cryp);
6395 +- cryp->gcm_ctr++;
6396 +- } else if (is_ccm(cryp)) {
6397 +- ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
6398 +- if (unlikely(ph == CR_PH_HEADER))
6399 +- /* Write Header */
6400 +- stm32_cryp_irq_write_ccm_header(cryp);
6401 ++ stm32_cryp_irq_write_gcmccm_header(cryp);
6402 + else
6403 + /* Input FIFO IRQ: write data */
6404 + stm32_cryp_irq_write_data(cryp);
6405 ++ if (is_gcm(cryp))
6406 ++ cryp->gcm_ctr++;
6407 + } else {
6408 + /* Input FIFO IRQ: write data */
6409 + stm32_cryp_irq_write_data(cryp);
6410 + }
6411 + }
6412 +
6413 ++ /* Mask useless interrupts */
6414 ++ if (!cryp->payload_in && !cryp->header_in)
6415 ++ it_mask &= ~IMSCR_IN;
6416 ++ if (!cryp->payload_out)
6417 ++ it_mask &= ~IMSCR_OUT;
6418 ++ stm32_cryp_write(cryp, CRYP_IMSCR, it_mask);
6419 ++
6420 ++ if (!cryp->payload_in && !cryp->header_in && !cryp->payload_out)
6421 ++ stm32_cryp_finish_req(cryp, 0);
6422 ++
6423 + return IRQ_HANDLED;
6424 + }
6425 +
6426 +@@ -1742,7 +1548,7 @@ static struct skcipher_alg crypto_algs[] = {
6427 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6428 + .base.cra_blocksize = AES_BLOCK_SIZE,
6429 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6430 +- .base.cra_alignmask = 0xf,
6431 ++ .base.cra_alignmask = 0,
6432 + .base.cra_module = THIS_MODULE,
6433 +
6434 + .init = stm32_cryp_init_tfm,
6435 +@@ -1759,7 +1565,7 @@ static struct skcipher_alg crypto_algs[] = {
6436 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6437 + .base.cra_blocksize = AES_BLOCK_SIZE,
6438 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6439 +- .base.cra_alignmask = 0xf,
6440 ++ .base.cra_alignmask = 0,
6441 + .base.cra_module = THIS_MODULE,
6442 +
6443 + .init = stm32_cryp_init_tfm,
6444 +@@ -1777,7 +1583,7 @@ static struct skcipher_alg crypto_algs[] = {
6445 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6446 + .base.cra_blocksize = 1,
6447 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6448 +- .base.cra_alignmask = 0xf,
6449 ++ .base.cra_alignmask = 0,
6450 + .base.cra_module = THIS_MODULE,
6451 +
6452 + .init = stm32_cryp_init_tfm,
6453 +@@ -1795,7 +1601,7 @@ static struct skcipher_alg crypto_algs[] = {
6454 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6455 + .base.cra_blocksize = DES_BLOCK_SIZE,
6456 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6457 +- .base.cra_alignmask = 0xf,
6458 ++ .base.cra_alignmask = 0,
6459 + .base.cra_module = THIS_MODULE,
6460 +
6461 + .init = stm32_cryp_init_tfm,
6462 +@@ -1812,7 +1618,7 @@ static struct skcipher_alg crypto_algs[] = {
6463 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6464 + .base.cra_blocksize = DES_BLOCK_SIZE,
6465 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6466 +- .base.cra_alignmask = 0xf,
6467 ++ .base.cra_alignmask = 0,
6468 + .base.cra_module = THIS_MODULE,
6469 +
6470 + .init = stm32_cryp_init_tfm,
6471 +@@ -1830,7 +1636,7 @@ static struct skcipher_alg crypto_algs[] = {
6472 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6473 + .base.cra_blocksize = DES_BLOCK_SIZE,
6474 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6475 +- .base.cra_alignmask = 0xf,
6476 ++ .base.cra_alignmask = 0,
6477 + .base.cra_module = THIS_MODULE,
6478 +
6479 + .init = stm32_cryp_init_tfm,
6480 +@@ -1847,7 +1653,7 @@ static struct skcipher_alg crypto_algs[] = {
6481 + .base.cra_flags = CRYPTO_ALG_ASYNC,
6482 + .base.cra_blocksize = DES_BLOCK_SIZE,
6483 + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6484 +- .base.cra_alignmask = 0xf,
6485 ++ .base.cra_alignmask = 0,
6486 + .base.cra_module = THIS_MODULE,
6487 +
6488 + .init = stm32_cryp_init_tfm,
6489 +@@ -1877,7 +1683,7 @@ static struct aead_alg aead_algs[] = {
6490 + .cra_flags = CRYPTO_ALG_ASYNC,
6491 + .cra_blocksize = 1,
6492 + .cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6493 +- .cra_alignmask = 0xf,
6494 ++ .cra_alignmask = 0,
6495 + .cra_module = THIS_MODULE,
6496 + },
6497 + },
6498 +@@ -1897,7 +1703,7 @@ static struct aead_alg aead_algs[] = {
6499 + .cra_flags = CRYPTO_ALG_ASYNC,
6500 + .cra_blocksize = 1,
6501 + .cra_ctxsize = sizeof(struct stm32_cryp_ctx),
6502 +- .cra_alignmask = 0xf,
6503 ++ .cra_alignmask = 0,
6504 + .cra_module = THIS_MODULE,
6505 + },
6506 + },
6507 +@@ -2025,8 +1831,6 @@ err_engine1:
6508 + list_del(&cryp->list);
6509 + spin_unlock(&cryp_list.lock);
6510 +
6511 +- pm_runtime_disable(dev);
6512 +- pm_runtime_put_noidle(dev);
6513 + pm_runtime_disable(dev);
6514 + pm_runtime_put_noidle(dev);
6515 +
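The stm32-cryp hunks above rework the IRQ thread so completion is driven purely by the remaining byte counters: each interrupt source is masked once its counter reaches zero, and the request finishes only when all three are zero. A minimal sketch of that tail logic (register bit values and the helper name are stand-ins, not the driver's definitions):

```c
#include <assert.h>
#include <stdint.h>

#define IMSCR_IN  (1u << 0)  /* input FIFO interrupt enable (stand-in value) */
#define IMSCR_OUT (1u << 1)  /* output FIFO interrupt enable (stand-in value) */

struct cryp_state {
    uint32_t payload_in;  /* payload bytes still to feed in */
    uint32_t header_in;   /* AAD bytes still to feed in */
    uint32_t payload_out; /* payload bytes still to read out */
    int finished;         /* set when the request completes */
};

/* Mirror of the tail of stm32_cryp_irq_thread(): mask interrupts whose
 * work is done, and finish the request once nothing remains pending. */
static uint32_t irq_thread_tail(struct cryp_state *c, uint32_t it_mask)
{
    if (!c->payload_in && !c->header_in)
        it_mask &= ~IMSCR_IN;
    if (!c->payload_out)
        it_mask &= ~IMSCR_OUT;
    if (!c->payload_in && !c->header_in && !c->payload_out)
        c->finished = 1;
    return it_mask;
}
```

Mid-transfer, with input fully consumed but output still pending, only the OUT interrupt stays enabled; when the last output block is drained, the mask drops to zero and the request completes, which is why the padded-data special cases now `return;` instead of finishing the request themselves.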
6516 +diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
6517 +index ff5362da118d8..16bb52836b28d 100644
6518 +--- a/drivers/crypto/stm32/stm32-hash.c
6519 ++++ b/drivers/crypto/stm32/stm32-hash.c
6520 +@@ -812,7 +812,7 @@ static void stm32_hash_finish_req(struct ahash_request *req, int err)
6521 + static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
6522 + struct stm32_hash_request_ctx *rctx)
6523 + {
6524 +- pm_runtime_resume_and_get(hdev->dev);
6525 ++ pm_runtime_get_sync(hdev->dev);
6526 +
6527 + if (!(HASH_FLAGS_INIT & hdev->flags)) {
6528 + stm32_hash_write(hdev, HASH_CR, HASH_CR_INIT);
6529 +@@ -961,7 +961,7 @@ static int stm32_hash_export(struct ahash_request *req, void *out)
6530 + u32 *preg;
6531 + unsigned int i;
6532 +
6533 +- pm_runtime_resume_and_get(hdev->dev);
6534 ++ pm_runtime_get_sync(hdev->dev);
6535 +
6536 + while ((stm32_hash_read(hdev, HASH_SR) & HASH_SR_BUSY))
6537 + cpu_relax();
6538 +@@ -999,7 +999,7 @@ static int stm32_hash_import(struct ahash_request *req, const void *in)
6539 +
6540 + preg = rctx->hw_context;
6541 +
6542 +- pm_runtime_resume_and_get(hdev->dev);
6543 ++ pm_runtime_get_sync(hdev->dev);
6544 +
6545 + stm32_hash_write(hdev, HASH_IMR, *preg++);
6546 + stm32_hash_write(hdev, HASH_STR, *preg++);
6547 +diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
6548 +index d3fbd950be944..3e07f961e2f3d 100644
6549 +--- a/drivers/dma-buf/dma-fence-array.c
6550 ++++ b/drivers/dma-buf/dma-fence-array.c
6551 +@@ -104,7 +104,11 @@ static bool dma_fence_array_signaled(struct dma_fence *fence)
6552 + {
6553 + struct dma_fence_array *array = to_dma_fence_array(fence);
6554 +
6555 +- return atomic_read(&array->num_pending) <= 0;
6556 ++ if (atomic_read(&array->num_pending) > 0)
6557 ++ return false;
6558 ++
6559 ++ dma_fence_array_clear_pending_error(array);
6560 ++ return true;
6561 + }
6562 +
6563 + static void dma_fence_array_release(struct dma_fence *fence)
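The dma-fence-array change above makes the signaled check propagate any error stashed by a fence callback before reporting the array as signaled, so a reader never observes "signaled" with the error not yet set on the base fence. A toy model of that ordering (the struct and names are illustrative, not the dma-buf API):

```c
#include <assert.h>

/* Toy model of the dma_fence_array_signaled() fix: only report
 * signaled after moving a pending error onto the base fence. */
struct toy_fence_array {
    int num_pending;   /* member fences not yet signaled */
    int pending_error; /* error noted by a fence callback, 0 if none */
    int base_error;    /* error status visible on the base fence */
};

static int toy_array_signaled(struct toy_fence_array *a)
{
    if (a->num_pending > 0)
        return 0;

    /* dma_fence_array_clear_pending_error() equivalent */
    if (a->pending_error) {
        a->base_error = a->pending_error;
        a->pending_error = 0;
    }
    return 1;
}
```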
6564 +diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
6565 +index 627ad74c879fd..90afba0b36fe9 100644
6566 +--- a/drivers/dma/at_xdmac.c
6567 ++++ b/drivers/dma/at_xdmac.c
6568 +@@ -89,6 +89,7 @@
6569 + #define AT_XDMAC_CNDC_NDE (0x1 << 0) /* Channel x Next Descriptor Enable */
6570 + #define AT_XDMAC_CNDC_NDSUP (0x1 << 1) /* Channel x Next Descriptor Source Update */
6571 + #define AT_XDMAC_CNDC_NDDUP (0x1 << 2) /* Channel x Next Descriptor Destination Update */
6572 ++#define AT_XDMAC_CNDC_NDVIEW_MASK GENMASK(28, 27)
6573 + #define AT_XDMAC_CNDC_NDVIEW_NDV0 (0x0 << 3) /* Channel x Next Descriptor View 0 */
6574 + #define AT_XDMAC_CNDC_NDVIEW_NDV1 (0x1 << 3) /* Channel x Next Descriptor View 1 */
6575 + #define AT_XDMAC_CNDC_NDVIEW_NDV2 (0x2 << 3) /* Channel x Next Descriptor View 2 */
6576 +@@ -220,15 +221,15 @@ struct at_xdmac {
6577 +
6578 + /* Linked List Descriptor */
6579 + struct at_xdmac_lld {
6580 +- dma_addr_t mbr_nda; /* Next Descriptor Member */
6581 +- u32 mbr_ubc; /* Microblock Control Member */
6582 +- dma_addr_t mbr_sa; /* Source Address Member */
6583 +- dma_addr_t mbr_da; /* Destination Address Member */
6584 +- u32 mbr_cfg; /* Configuration Register */
6585 +- u32 mbr_bc; /* Block Control Register */
6586 +- u32 mbr_ds; /* Data Stride Register */
6587 +- u32 mbr_sus; /* Source Microblock Stride Register */
6588 +- u32 mbr_dus; /* Destination Microblock Stride Register */
6589 ++ u32 mbr_nda; /* Next Descriptor Member */
6590 ++ u32 mbr_ubc; /* Microblock Control Member */
6591 ++ u32 mbr_sa; /* Source Address Member */
6592 ++ u32 mbr_da; /* Destination Address Member */
6593 ++ u32 mbr_cfg; /* Configuration Register */
6594 ++ u32 mbr_bc; /* Block Control Register */
6595 ++ u32 mbr_ds; /* Data Stride Register */
6596 ++ u32 mbr_sus; /* Source Microblock Stride Register */
6597 ++ u32 mbr_dus; /* Destination Microblock Stride Register */
6598 + };
6599 +
6600 + /* 64-bit alignment needed to update CNDA and CUBC registers in an atomic way. */
6601 +@@ -338,9 +339,6 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
6602 +
6603 + dev_vdbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, first);
6604 +
6605 +- if (at_xdmac_chan_is_enabled(atchan))
6606 +- return;
6607 +-
6608 + /* Set transfer as active to not try to start it again. */
6609 + first->active_xfer = true;
6610 +
6611 +@@ -356,7 +354,8 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
6612 + */
6613 + if (at_xdmac_chan_is_cyclic(atchan))
6614 + reg = AT_XDMAC_CNDC_NDVIEW_NDV1;
6615 +- else if (first->lld.mbr_ubc & AT_XDMAC_MBR_UBC_NDV3)
6616 ++ else if ((first->lld.mbr_ubc &
6617 ++ AT_XDMAC_CNDC_NDVIEW_MASK) == AT_XDMAC_MBR_UBC_NDV3)
6618 + reg = AT_XDMAC_CNDC_NDVIEW_NDV3;
6619 + else
6620 + reg = AT_XDMAC_CNDC_NDVIEW_NDV2;
6621 +@@ -427,13 +426,12 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
6622 + spin_lock_irqsave(&atchan->lock, irqflags);
6623 + cookie = dma_cookie_assign(tx);
6624 +
6625 ++ list_add_tail(&desc->xfer_node, &atchan->xfers_list);
6626 ++ spin_unlock_irqrestore(&atchan->lock, irqflags);
6627 ++
6628 + dev_vdbg(chan2dev(tx->chan), "%s: atchan 0x%p, add desc 0x%p to xfers_list\n",
6629 + __func__, atchan, desc);
6630 +- list_add_tail(&desc->xfer_node, &atchan->xfers_list);
6631 +- if (list_is_singular(&atchan->xfers_list))
6632 +- at_xdmac_start_xfer(atchan, desc);
6633 +
6634 +- spin_unlock_irqrestore(&atchan->lock, irqflags);
6635 + return cookie;
6636 + }
6637 +
6638 +@@ -1563,14 +1561,17 @@ static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
6639 + struct at_xdmac_desc *desc;
6640 + struct dma_async_tx_descriptor *txd;
6641 +
6642 +- if (!list_empty(&atchan->xfers_list)) {
6643 +- desc = list_first_entry(&atchan->xfers_list,
6644 +- struct at_xdmac_desc, xfer_node);
6645 +- txd = &desc->tx_dma_desc;
6646 +-
6647 +- if (txd->flags & DMA_PREP_INTERRUPT)
6648 +- dmaengine_desc_get_callback_invoke(txd, NULL);
6649 ++ spin_lock_irq(&atchan->lock);
6650 ++ if (list_empty(&atchan->xfers_list)) {
6651 ++ spin_unlock_irq(&atchan->lock);
6652 ++ return;
6653 + }
6654 ++ desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc,
6655 ++ xfer_node);
6656 ++ spin_unlock_irq(&atchan->lock);
6657 ++ txd = &desc->tx_dma_desc;
6658 ++ if (txd->flags & DMA_PREP_INTERRUPT)
6659 ++ dmaengine_desc_get_callback_invoke(txd, NULL);
6660 + }
6661 +
6662 + static void at_xdmac_handle_error(struct at_xdmac_chan *atchan)
6663 +@@ -1724,11 +1725,9 @@ static void at_xdmac_issue_pending(struct dma_chan *chan)
6664 +
6665 + dev_dbg(chan2dev(&atchan->chan), "%s\n", __func__);
6666 +
6667 +- if (!at_xdmac_chan_is_cyclic(atchan)) {
6668 +- spin_lock_irqsave(&atchan->lock, flags);
6669 +- at_xdmac_advance_work(atchan);
6670 +- spin_unlock_irqrestore(&atchan->lock, flags);
6671 +- }
6672 ++ spin_lock_irqsave(&atchan->lock, flags);
6673 ++ at_xdmac_advance_work(atchan);
6674 ++ spin_unlock_irqrestore(&atchan->lock, flags);
6675 +
6676 + return;
6677 + }
6678 +diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
6679 +index b84303be8edf5..4eb63f1ad2247 100644
6680 +--- a/drivers/dma/mmp_pdma.c
6681 ++++ b/drivers/dma/mmp_pdma.c
6682 +@@ -728,12 +728,6 @@ static int mmp_pdma_config_write(struct dma_chan *dchan,
6683 +
6684 + chan->dir = direction;
6685 + chan->dev_addr = addr;
6686 +- /* FIXME: drivers should be ported over to use the filter
6687 +- * function. Once that's done, the following two lines can
6688 +- * be removed.
6689 +- */
6690 +- if (cfg->slave_id)
6691 +- chan->drcmr = cfg->slave_id;
6692 +
6693 + return 0;
6694 + }
6695 +diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
6696 +index 349fb312c8725..b4ef4f19f7dec 100644
6697 +--- a/drivers/dma/pxa_dma.c
6698 ++++ b/drivers/dma/pxa_dma.c
6699 +@@ -911,13 +911,6 @@ static void pxad_get_config(struct pxad_chan *chan,
6700 + *dcmd |= PXA_DCMD_BURST16;
6701 + else if (maxburst == 32)
6702 + *dcmd |= PXA_DCMD_BURST32;
6703 +-
6704 +- /* FIXME: drivers should be ported over to use the filter
6705 +- * function. Once that's done, the following two lines can
6706 +- * be removed.
6707 +- */
6708 +- if (chan->cfg.slave_id)
6709 +- chan->drcmr = chan->cfg.slave_id;
6710 + }
6711 +
6712 + static struct dma_async_tx_descriptor *
6713 +diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
6714 +index 9d473923712ad..fe36738f2dd7e 100644
6715 +--- a/drivers/dma/stm32-mdma.c
6716 ++++ b/drivers/dma/stm32-mdma.c
6717 +@@ -184,7 +184,7 @@
6718 + #define STM32_MDMA_CTBR(x) (0x68 + 0x40 * (x))
6719 + #define STM32_MDMA_CTBR_DBUS BIT(17)
6720 + #define STM32_MDMA_CTBR_SBUS BIT(16)
6721 +-#define STM32_MDMA_CTBR_TSEL_MASK GENMASK(7, 0)
6722 ++#define STM32_MDMA_CTBR_TSEL_MASK GENMASK(5, 0)
6723 + #define STM32_MDMA_CTBR_TSEL(n) STM32_MDMA_SET(n, \
6724 + STM32_MDMA_CTBR_TSEL_MASK)
6725 +
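The one-line stm32-mdma fix above narrows the TSEL field mask from bits 7:0 to 5:0, so programming the request line can no longer clobber the reserved bits above the 6-bit field. A sketch of the mask arithmetic, using a plain-C equivalent of the kernel's `GENMASK()`:

```c
#include <assert.h>
#include <stdint.h>

/* GENMASK(h, l) as in include/linux/bits.h, for 32-bit values */
#define GENMASK32(h, l) \
    ((uint32_t)(((~0u) >> (31 - (h))) & ((~0u) << (l))))

#define TSEL_MASK_OLD GENMASK32(7, 0) /* 0xff: spills into reserved bits */
#define TSEL_MASK_NEW GENMASK32(5, 0) /* 0x3f: the actual 6-bit field */

/* Set the TSEL field in a CTBR register image, preserving other bits. */
static uint32_t ctbr_set_tsel(uint32_t ctbr, uint32_t req, uint32_t mask)
{
    return (ctbr & ~mask) | (req & mask);
}
```

With an out-of-range request value, the old mask would write into bits 7:6 while the new one truncates to the field width, leaving the rest of CTBR untouched.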
6726 +diff --git a/drivers/dma/uniphier-xdmac.c b/drivers/dma/uniphier-xdmac.c
6727 +index d6b8a202474f4..290836b7e1be2 100644
6728 +--- a/drivers/dma/uniphier-xdmac.c
6729 ++++ b/drivers/dma/uniphier-xdmac.c
6730 +@@ -131,8 +131,9 @@ uniphier_xdmac_next_desc(struct uniphier_xdmac_chan *xc)
6731 + static void uniphier_xdmac_chan_start(struct uniphier_xdmac_chan *xc,
6732 + struct uniphier_xdmac_desc *xd)
6733 + {
6734 +- u32 src_mode, src_addr, src_width;
6735 +- u32 dst_mode, dst_addr, dst_width;
6736 ++ u32 src_mode, src_width;
6737 ++ u32 dst_mode, dst_width;
6738 ++ dma_addr_t src_addr, dst_addr;
6739 + u32 val, its, tnum;
6740 + enum dma_slave_buswidth buswidth;
6741 +
6742 +diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
6743 +index 1a801a5d3b08b..92906b56b1a2b 100644
6744 +--- a/drivers/edac/synopsys_edac.c
6745 ++++ b/drivers/edac/synopsys_edac.c
6746 +@@ -1351,8 +1351,7 @@ static int mc_probe(struct platform_device *pdev)
6747 + }
6748 + }
6749 +
6750 +- if (of_device_is_compatible(pdev->dev.of_node,
6751 +- "xlnx,zynqmp-ddrc-2.40a"))
6752 ++ if (priv->p_data->quirks & DDR_ECC_INTR_SUPPORT)
6753 + setup_address_map(priv);
6754 + #endif
6755 +
6756 +diff --git a/drivers/firmware/google/Kconfig b/drivers/firmware/google/Kconfig
6757 +index 97968aece54f8..931544c9f63d4 100644
6758 +--- a/drivers/firmware/google/Kconfig
6759 ++++ b/drivers/firmware/google/Kconfig
6760 +@@ -3,9 +3,9 @@ menuconfig GOOGLE_FIRMWARE
6761 + bool "Google Firmware Drivers"
6762 + default n
6763 + help
6764 +- These firmware drivers are used by Google's servers. They are
6765 +- only useful if you are working directly on one of their
6766 +- proprietary servers. If in doubt, say "N".
6767 ++ These firmware drivers are used by Google servers,
6768 ++ Chromebooks and other devices using coreboot firmware.
6769 ++ If in doubt, say "N".
6770 +
6771 + if GOOGLE_FIRMWARE
6772 +
6773 +diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
6774 +index b966f5e28ebff..e0d5d80ec8e0f 100644
6775 +--- a/drivers/gpio/gpio-aspeed.c
6776 ++++ b/drivers/gpio/gpio-aspeed.c
6777 +@@ -53,7 +53,7 @@ struct aspeed_gpio_config {
6778 + struct aspeed_gpio {
6779 + struct gpio_chip chip;
6780 + struct irq_chip irqc;
6781 +- spinlock_t lock;
6782 ++ raw_spinlock_t lock;
6783 + void __iomem *base;
6784 + int irq;
6785 + const struct aspeed_gpio_config *config;
6786 +@@ -413,14 +413,14 @@ static void aspeed_gpio_set(struct gpio_chip *gc, unsigned int offset,
6787 + unsigned long flags;
6788 + bool copro;
6789 +
6790 +- spin_lock_irqsave(&gpio->lock, flags);
6791 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6792 + copro = aspeed_gpio_copro_request(gpio, offset);
6793 +
6794 + __aspeed_gpio_set(gc, offset, val);
6795 +
6796 + if (copro)
6797 + aspeed_gpio_copro_release(gpio, offset);
6798 +- spin_unlock_irqrestore(&gpio->lock, flags);
6799 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6800 + }
6801 +
6802 + static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
6803 +@@ -435,7 +435,7 @@ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
6804 + if (!have_input(gpio, offset))
6805 + return -ENOTSUPP;
6806 +
6807 +- spin_lock_irqsave(&gpio->lock, flags);
6808 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6809 +
6810 + reg = ioread32(addr);
6811 + reg &= ~GPIO_BIT(offset);
6812 +@@ -445,7 +445,7 @@ static int aspeed_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
6813 + if (copro)
6814 + aspeed_gpio_copro_release(gpio, offset);
6815 +
6816 +- spin_unlock_irqrestore(&gpio->lock, flags);
6817 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6818 +
6819 + return 0;
6820 + }
6821 +@@ -463,7 +463,7 @@ static int aspeed_gpio_dir_out(struct gpio_chip *gc,
6822 + if (!have_output(gpio, offset))
6823 + return -ENOTSUPP;
6824 +
6825 +- spin_lock_irqsave(&gpio->lock, flags);
6826 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6827 +
6828 + reg = ioread32(addr);
6829 + reg |= GPIO_BIT(offset);
6830 +@@ -474,7 +474,7 @@ static int aspeed_gpio_dir_out(struct gpio_chip *gc,
6831 +
6832 + if (copro)
6833 + aspeed_gpio_copro_release(gpio, offset);
6834 +- spin_unlock_irqrestore(&gpio->lock, flags);
6835 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6836 +
6837 + return 0;
6838 + }
6839 +@@ -492,11 +492,11 @@ static int aspeed_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
6840 + if (!have_output(gpio, offset))
6841 + return GPIO_LINE_DIRECTION_IN;
6842 +
6843 +- spin_lock_irqsave(&gpio->lock, flags);
6844 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6845 +
6846 + val = ioread32(bank_reg(gpio, bank, reg_dir)) & GPIO_BIT(offset);
6847 +
6848 +- spin_unlock_irqrestore(&gpio->lock, flags);
6849 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6850 +
6851 + return val ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN;
6852 + }
6853 +@@ -539,14 +539,14 @@ static void aspeed_gpio_irq_ack(struct irq_data *d)
6854 +
6855 + status_addr = bank_reg(gpio, bank, reg_irq_status);
6856 +
6857 +- spin_lock_irqsave(&gpio->lock, flags);
6858 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6859 + copro = aspeed_gpio_copro_request(gpio, offset);
6860 +
6861 + iowrite32(bit, status_addr);
6862 +
6863 + if (copro)
6864 + aspeed_gpio_copro_release(gpio, offset);
6865 +- spin_unlock_irqrestore(&gpio->lock, flags);
6866 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6867 + }
6868 +
6869 + static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
6870 +@@ -565,7 +565,7 @@ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
6871 +
6872 + addr = bank_reg(gpio, bank, reg_irq_enable);
6873 +
6874 +- spin_lock_irqsave(&gpio->lock, flags);
6875 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6876 + copro = aspeed_gpio_copro_request(gpio, offset);
6877 +
6878 + reg = ioread32(addr);
6879 +@@ -577,7 +577,7 @@ static void aspeed_gpio_irq_set_mask(struct irq_data *d, bool set)
6880 +
6881 + if (copro)
6882 + aspeed_gpio_copro_release(gpio, offset);
6883 +- spin_unlock_irqrestore(&gpio->lock, flags);
6884 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6885 + }
6886 +
6887 + static void aspeed_gpio_irq_mask(struct irq_data *d)
6888 +@@ -629,7 +629,7 @@ static int aspeed_gpio_set_type(struct irq_data *d, unsigned int type)
6889 + return -EINVAL;
6890 + }
6891 +
6892 +- spin_lock_irqsave(&gpio->lock, flags);
6893 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6894 + copro = aspeed_gpio_copro_request(gpio, offset);
6895 +
6896 + addr = bank_reg(gpio, bank, reg_irq_type0);
6897 +@@ -649,7 +649,7 @@ static int aspeed_gpio_set_type(struct irq_data *d, unsigned int type)
6898 +
6899 + if (copro)
6900 + aspeed_gpio_copro_release(gpio, offset);
6901 +- spin_unlock_irqrestore(&gpio->lock, flags);
6902 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6903 +
6904 + irq_set_handler_locked(d, handler);
6905 +
6906 +@@ -719,7 +719,7 @@ static int aspeed_gpio_reset_tolerance(struct gpio_chip *chip,
6907 +
6908 + treg = bank_reg(gpio, to_bank(offset), reg_tolerance);
6909 +
6910 +- spin_lock_irqsave(&gpio->lock, flags);
6911 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6912 + copro = aspeed_gpio_copro_request(gpio, offset);
6913 +
6914 + val = readl(treg);
6915 +@@ -733,7 +733,7 @@ static int aspeed_gpio_reset_tolerance(struct gpio_chip *chip,
6916 +
6917 + if (copro)
6918 + aspeed_gpio_copro_release(gpio, offset);
6919 +- spin_unlock_irqrestore(&gpio->lock, flags);
6920 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6921 +
6922 + return 0;
6923 + }
6924 +@@ -859,7 +859,7 @@ static int enable_debounce(struct gpio_chip *chip, unsigned int offset,
6925 + return rc;
6926 + }
6927 +
6928 +- spin_lock_irqsave(&gpio->lock, flags);
6929 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6930 +
6931 + if (timer_allocation_registered(gpio, offset)) {
6932 + rc = unregister_allocated_timer(gpio, offset);
6933 +@@ -919,7 +919,7 @@ static int enable_debounce(struct gpio_chip *chip, unsigned int offset,
6934 + configure_timer(gpio, offset, i);
6935 +
6936 + out:
6937 +- spin_unlock_irqrestore(&gpio->lock, flags);
6938 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6939 +
6940 + return rc;
6941 + }
6942 +@@ -930,13 +930,13 @@ static int disable_debounce(struct gpio_chip *chip, unsigned int offset)
6943 + unsigned long flags;
6944 + int rc;
6945 +
6946 +- spin_lock_irqsave(&gpio->lock, flags);
6947 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6948 +
6949 + rc = unregister_allocated_timer(gpio, offset);
6950 + if (!rc)
6951 + configure_timer(gpio, offset, 0);
6952 +
6953 +- spin_unlock_irqrestore(&gpio->lock, flags);
6954 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6955 +
6956 + return rc;
6957 + }
6958 +@@ -1018,7 +1018,7 @@ int aspeed_gpio_copro_grab_gpio(struct gpio_desc *desc,
6959 + return -EINVAL;
6960 + bindex = offset >> 3;
6961 +
6962 +- spin_lock_irqsave(&gpio->lock, flags);
6963 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6964 +
6965 + /* Sanity check, this shouldn't happen */
6966 + if (gpio->cf_copro_bankmap[bindex] == 0xff) {
6967 +@@ -1039,7 +1039,7 @@ int aspeed_gpio_copro_grab_gpio(struct gpio_desc *desc,
6968 + if (bit)
6969 + *bit = GPIO_OFFSET(offset);
6970 + bail:
6971 +- spin_unlock_irqrestore(&gpio->lock, flags);
6972 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6973 + return rc;
6974 + }
6975 + EXPORT_SYMBOL_GPL(aspeed_gpio_copro_grab_gpio);
6976 +@@ -1063,7 +1063,7 @@ int aspeed_gpio_copro_release_gpio(struct gpio_desc *desc)
6977 + return -EINVAL;
6978 + bindex = offset >> 3;
6979 +
6980 +- spin_lock_irqsave(&gpio->lock, flags);
6981 ++ raw_spin_lock_irqsave(&gpio->lock, flags);
6982 +
6983 + /* Sanity check, this shouldn't happen */
6984 + if (gpio->cf_copro_bankmap[bindex] == 0) {
6985 +@@ -1077,7 +1077,7 @@ int aspeed_gpio_copro_release_gpio(struct gpio_desc *desc)
6986 + aspeed_gpio_change_cmd_source(gpio, bank, bindex,
6987 + GPIO_CMDSRC_ARM);
6988 + bail:
6989 +- spin_unlock_irqrestore(&gpio->lock, flags);
6990 ++ raw_spin_unlock_irqrestore(&gpio->lock, flags);
6991 + return rc;
6992 + }
6993 + EXPORT_SYMBOL_GPL(aspeed_gpio_copro_release_gpio);
6994 +@@ -1151,7 +1151,7 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev)
6995 + if (IS_ERR(gpio->base))
6996 + return PTR_ERR(gpio->base);
6997 +
6998 +- spin_lock_init(&gpio->lock);
6999 ++ raw_spin_lock_init(&gpio->lock);
7000 +
7001 + gpio_id = of_match_node(aspeed_gpio_of_table, pdev->dev.of_node);
7002 + if (!gpio_id)
7003 +diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
7004 +index 6f11714ce0239..55e4f402ec8b6 100644
7005 +--- a/drivers/gpio/gpiolib-acpi.c
7006 ++++ b/drivers/gpio/gpiolib-acpi.c
7007 +@@ -969,10 +969,17 @@ int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int ind
7008 + irq_flags = acpi_dev_get_irq_type(info.triggering,
7009 + info.polarity);
7010 +
7011 +- /* Set type if specified and different than the current one */
7012 +- if (irq_flags != IRQ_TYPE_NONE &&
7013 +- irq_flags != irq_get_trigger_type(irq))
7014 +- irq_set_irq_type(irq, irq_flags);
7015 ++ /*
7016 ++ * If the IRQ is not already in use then set type
7017 ++ * if specified and different than the current one.
7018 ++ */
7019 ++ if (can_request_irq(irq, irq_flags)) {
7020 ++ if (irq_flags != IRQ_TYPE_NONE &&
7021 ++ irq_flags != irq_get_trigger_type(irq))
7022 ++ irq_set_irq_type(irq, irq_flags);
7023 ++ } else {
7024 ++ dev_dbg(&adev->dev, "IRQ %d already in use\n", irq);
7025 ++ }
7026 +
7027 + return irq;
7028 + }
7029 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
7030 +index 0de66f59adb8a..df1f9b88a53f9 100644
7031 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
7032 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
7033 +@@ -387,6 +387,9 @@ amdgpu_connector_lcd_native_mode(struct drm_encoder *encoder)
7034 + native_mode->vdisplay != 0 &&
7035 + native_mode->clock != 0) {
7036 + mode = drm_mode_duplicate(dev, native_mode);
7037 ++ if (!mode)
7038 ++ return NULL;
7039 ++
7040 + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
7041 + drm_mode_set_name(mode);
7042 +
7043 +@@ -401,6 +404,9 @@ amdgpu_connector_lcd_native_mode(struct drm_encoder *encoder)
7044 + * simpler.
7045 + */
7046 + mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false);
7047 ++ if (!mode)
7048 ++ return NULL;
7049 ++
7050 + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
7051 + DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name);
7052 + }
7053 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
7054 +index 2f70fdd6104f2..582055136cdbf 100644
7055 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
7056 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
7057 +@@ -267,7 +267,6 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
7058 + if (!amdgpu_device_has_dc_support(adev)) {
7059 + if (!adev->enable_virtual_display)
7060 + /* Disable vblank IRQs aggressively for power-saving */
7061 +- /* XXX: can this be enabled for DC? */
7062 + adev_to_drm(adev)->vblank_disable_immediate = true;
7063 +
7064 + r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
7065 +diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
7066 +index 9ab65ca7df777..873bc33912e23 100644
7067 +--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
7068 ++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
7069 +@@ -524,10 +524,10 @@ static void gmc_v8_0_mc_program(struct amdgpu_device *adev)
7070 + static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
7071 + {
7072 + int r;
7073 ++ u32 tmp;
7074 +
7075 + adev->gmc.vram_width = amdgpu_atombios_get_vram_width(adev);
7076 + if (!adev->gmc.vram_width) {
7077 +- u32 tmp;
7078 + int chansize, numchan;
7079 +
7080 + /* Get VRAM informations */
7081 +@@ -571,8 +571,15 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
7082 + adev->gmc.vram_width = numchan * chansize;
7083 + }
7084 + /* size in MB on si */
7085 +- adev->gmc.mc_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
7086 +- adev->gmc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
7087 ++ tmp = RREG32(mmCONFIG_MEMSIZE);
7088 ++ /* some boards may have garbage in the upper 16 bits */
7089 ++ if (tmp & 0xffff0000) {
7090 ++ DRM_INFO("Probable bad vram size: 0x%08x\n", tmp);
7091 ++ if (tmp & 0xffff)
7092 ++ tmp &= 0xffff;
7093 ++ }
7094 ++ adev->gmc.mc_vram_size = tmp * 1024ULL * 1024ULL;
7095 ++ adev->gmc.real_vram_size = adev->gmc.mc_vram_size;
7096 +
7097 + if (!(adev->flags & AMD_IS_APU)) {
7098 + r = amdgpu_device_resize_fb_bar(adev);
7099 +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
7100 +index a5b6f36fe1d72..6c8f141103da4 100644
7101 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
7102 ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
7103 +@@ -1069,6 +1069,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
7104 + adev_to_drm(adev)->mode_config.cursor_width = adev->dm.dc->caps.max_cursor_size;
7105 + adev_to_drm(adev)->mode_config.cursor_height = adev->dm.dc->caps.max_cursor_size;
7106 +
7107 ++ /* Disable vblank IRQs aggressively for power-saving */
7108 ++ adev_to_drm(adev)->vblank_disable_immediate = true;
7109 ++
7110 + if (drm_vblank_init(adev_to_drm(adev), adev->dm.display_indexes_num)) {
7111 + DRM_ERROR(
7112 + "amdgpu: failed to initialize sw for display support.\n");
7113 +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
7114 +index 284ed1c8a35ac..93f5229c303e7 100644
7115 +--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
7116 ++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
7117 +@@ -2436,7 +2436,8 @@ static void commit_planes_for_stream(struct dc *dc,
7118 + }
7119 +
7120 + if ((update_type != UPDATE_TYPE_FAST) && stream->update_flags.bits.dsc_changed)
7121 +- if (top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
7122 ++ if (top_pipe_to_program &&
7123 ++ top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
7124 + if (should_use_dmub_lock(stream->link)) {
7125 + union dmub_hw_lock_flags hw_locks = { 0 };
7126 + struct dmub_hw_lock_inst_flags inst_flags = { 0 };
7127 +diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
7128 +index 9f383b9041d28..49109614510b8 100644
7129 +--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
7130 ++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
7131 +@@ -2098,6 +2098,12 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
7132 + }
7133 + }
7134 +
7135 ++ /* setting should not be allowed from VF */
7136 ++ if (amdgpu_sriov_vf(adev)) {
7137 ++ dev_attr->attr.mode &= ~S_IWUGO;
7138 ++ dev_attr->store = NULL;
7139 ++ }
7140 ++
7141 + #undef DEVICE_ATTR_IS
7142 +
7143 + return 0;
7144 +diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
7145 +index 914c569ab8c15..cab3f5c4e2fc8 100644
7146 +--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
7147 ++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c
7148 +@@ -1086,11 +1086,21 @@ int analogix_dp_send_psr_spd(struct analogix_dp_device *dp,
7149 + if (!blocking)
7150 + return 0;
7151 +
7152 ++ /*
7153 ++ * db[1]!=0: entering PSR, wait for fully active remote frame buffer.
7154 ++ * db[1]==0: exiting PSR, wait for either
7155 ++ * (a) ACTIVE_RESYNC - the sink "must display the
7156 ++ * incoming active frames from the Source device with no visible
7157 ++ * glitches and/or artifacts", even though timings may still be
7158 ++ * re-synchronizing; or
7159 ++ * (b) INACTIVE - the transition is fully complete.
7160 ++ */
7161 + ret = readx_poll_timeout(analogix_dp_get_psr_status, dp, psr_status,
7162 + psr_status >= 0 &&
7163 + ((vsc->db[1] && psr_status == DP_PSR_SINK_ACTIVE_RFB) ||
7164 +- (!vsc->db[1] && psr_status == DP_PSR_SINK_INACTIVE)), 1500,
7165 +- DP_TIMEOUT_PSR_LOOP_MS * 1000);
7166 ++ (!vsc->db[1] && (psr_status == DP_PSR_SINK_ACTIVE_RESYNC ||
7167 ++ psr_status == DP_PSR_SINK_INACTIVE))),
7168 ++ 1500, DP_TIMEOUT_PSR_LOOP_MS * 1000);
7169 + if (ret) {
7170 + dev_warn(dp->dev, "Failed to apply PSR %d\n", ret);
7171 + return ret;
7172 +diff --git a/drivers/gpu/drm/bridge/display-connector.c b/drivers/gpu/drm/bridge/display-connector.c
7173 +index 4d278573cdb99..544a47335cac4 100644
7174 +--- a/drivers/gpu/drm/bridge/display-connector.c
7175 ++++ b/drivers/gpu/drm/bridge/display-connector.c
7176 +@@ -104,7 +104,7 @@ static int display_connector_probe(struct platform_device *pdev)
7177 + {
7178 + struct display_connector *conn;
7179 + unsigned int type;
7180 +- const char *label;
7181 ++ const char *label = NULL;
7182 + int ret;
7183 +
7184 + conn = devm_kzalloc(&pdev->dev, sizeof(*conn), GFP_KERNEL);
7185 +diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
7186 +index d2808c4a6fb1c..cce98bf2a4e73 100644
7187 +--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
7188 ++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
7189 +@@ -306,19 +306,10 @@ out:
7190 + mutex_unlock(&ge_b850v3_lvds_dev_mutex);
7191 + }
7192 +
7193 +-static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
7194 +- const struct i2c_device_id *id)
7195 ++static int ge_b850v3_register(void)
7196 + {
7197 ++ struct i2c_client *stdp4028_i2c = ge_b850v3_lvds_ptr->stdp4028_i2c;
7198 + struct device *dev = &stdp4028_i2c->dev;
7199 +- int ret;
7200 +-
7201 +- ret = ge_b850v3_lvds_init(dev);
7202 +-
7203 +- if (ret)
7204 +- return ret;
7205 +-
7206 +- ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
7207 +- i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
7208 +
7209 + /* drm bridge initialization */
7210 + ge_b850v3_lvds_ptr->bridge.funcs = &ge_b850v3_lvds_funcs;
7211 +@@ -343,6 +334,27 @@ static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
7212 + "ge-b850v3-lvds-dp", ge_b850v3_lvds_ptr);
7213 + }
7214 +
7215 ++static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
7216 ++ const struct i2c_device_id *id)
7217 ++{
7218 ++ struct device *dev = &stdp4028_i2c->dev;
7219 ++ int ret;
7220 ++
7221 ++ ret = ge_b850v3_lvds_init(dev);
7222 ++
7223 ++ if (ret)
7224 ++ return ret;
7225 ++
7226 ++ ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
7227 ++ i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
7228 ++
7229 ++ /* Only register after both bridges are probed */
7230 ++ if (!ge_b850v3_lvds_ptr->stdp2690_i2c)
7231 ++ return 0;
7232 ++
7233 ++ return ge_b850v3_register();
7234 ++}
7235 ++
7236 + static int stdp4028_ge_b850v3_fw_remove(struct i2c_client *stdp4028_i2c)
7237 + {
7238 + ge_b850v3_lvds_remove();
7239 +@@ -386,7 +398,11 @@ static int stdp2690_ge_b850v3_fw_probe(struct i2c_client *stdp2690_i2c,
7240 + ge_b850v3_lvds_ptr->stdp2690_i2c = stdp2690_i2c;
7241 + i2c_set_clientdata(stdp2690_i2c, ge_b850v3_lvds_ptr);
7242 +
7243 +- return 0;
7244 ++ /* Only register after both bridges are probed */
7245 ++ if (!ge_b850v3_lvds_ptr->stdp4028_i2c)
7246 ++ return 0;
7247 ++
7248 ++ return ge_b850v3_register();
7249 + }
7250 +
7251 + static int stdp2690_ge_b850v3_fw_remove(struct i2c_client *stdp2690_i2c)
7252 +diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
7253 +index d0db1acf11d73..7d2ed0ed2fe26 100644
7254 +--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
7255 ++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-ahb-audio.c
7256 +@@ -320,13 +320,17 @@ static int dw_hdmi_open(struct snd_pcm_substream *substream)
7257 + struct snd_pcm_runtime *runtime = substream->runtime;
7258 + struct snd_dw_hdmi *dw = substream->private_data;
7259 + void __iomem *base = dw->data.base;
7260 ++ u8 *eld;
7261 + int ret;
7262 +
7263 + runtime->hw = dw_hdmi_hw;
7264 +
7265 +- ret = snd_pcm_hw_constraint_eld(runtime, dw->data.eld);
7266 +- if (ret < 0)
7267 +- return ret;
7268 ++ eld = dw->data.get_eld(dw->data.hdmi);
7269 ++ if (eld) {
7270 ++ ret = snd_pcm_hw_constraint_eld(runtime, eld);
7271 ++ if (ret < 0)
7272 ++ return ret;
7273 ++ }
7274 +
7275 + ret = snd_pcm_limit_hw_rates(runtime);
7276 + if (ret < 0)
7277 +diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
7278 +index cb07dc0da5a70..f72d27208ebef 100644
7279 +--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
7280 ++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-audio.h
7281 +@@ -9,15 +9,15 @@ struct dw_hdmi_audio_data {
7282 + void __iomem *base;
7283 + int irq;
7284 + struct dw_hdmi *hdmi;
7285 +- u8 *eld;
7286 ++ u8 *(*get_eld)(struct dw_hdmi *hdmi);
7287 + };
7288 +
7289 + struct dw_hdmi_i2s_audio_data {
7290 + struct dw_hdmi *hdmi;
7291 +- u8 *eld;
7292 +
7293 + void (*write)(struct dw_hdmi *hdmi, u8 val, int offset);
7294 + u8 (*read)(struct dw_hdmi *hdmi, int offset);
7295 ++ u8 *(*get_eld)(struct dw_hdmi *hdmi);
7296 + };
7297 +
7298 + #endif
7299 +diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
7300 +index 9fef6413741dc..9682416056ed6 100644
7301 +--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
7302 ++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
7303 +@@ -135,8 +135,15 @@ static int dw_hdmi_i2s_get_eld(struct device *dev, void *data, uint8_t *buf,
7304 + size_t len)
7305 + {
7306 + struct dw_hdmi_i2s_audio_data *audio = data;
7307 ++ u8 *eld;
7308 ++
7309 ++ eld = audio->get_eld(audio->hdmi);
7310 ++ if (eld)
7311 ++ memcpy(buf, eld, min_t(size_t, MAX_ELD_BYTES, len));
7312 ++ else
7313 ++ /* Pass en empty ELD if connector not available */
7314 ++ memset(buf, 0, len);
7315 +
7316 +- memcpy(buf, audio->eld, min_t(size_t, MAX_ELD_BYTES, len));
7317 + return 0;
7318 + }
7319 +
7320 +diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
7321 +index 0c79a9ba48bb6..29c0eb4bd7546 100644
7322 +--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
7323 ++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
7324 +@@ -756,6 +756,14 @@ static void hdmi_enable_audio_clk(struct dw_hdmi *hdmi, bool enable)
7325 + hdmi_writeb(hdmi, hdmi->mc_clkdis, HDMI_MC_CLKDIS);
7326 + }
7327 +
7328 ++static u8 *hdmi_audio_get_eld(struct dw_hdmi *hdmi)
7329 ++{
7330 ++ if (!hdmi->curr_conn)
7331 ++ return NULL;
7332 ++
7333 ++ return hdmi->curr_conn->eld;
7334 ++}
7335 ++
7336 + static void dw_hdmi_ahb_audio_enable(struct dw_hdmi *hdmi)
7337 + {
7338 + hdmi_set_cts_n(hdmi, hdmi->audio_cts, hdmi->audio_n);
7339 +@@ -3395,7 +3403,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
7340 + audio.base = hdmi->regs;
7341 + audio.irq = irq;
7342 + audio.hdmi = hdmi;
7343 +- audio.eld = hdmi->connector.eld;
7344 ++ audio.get_eld = hdmi_audio_get_eld;
7345 + hdmi->enable_audio = dw_hdmi_ahb_audio_enable;
7346 + hdmi->disable_audio = dw_hdmi_ahb_audio_disable;
7347 +
7348 +@@ -3408,7 +3416,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
7349 + struct dw_hdmi_i2s_audio_data audio;
7350 +
7351 + audio.hdmi = hdmi;
7352 +- audio.eld = hdmi->connector.eld;
7353 ++ audio.get_eld = hdmi_audio_get_eld;
7354 + audio.write = hdmi_writeb;
7355 + audio.read = hdmi_readb;
7356 + hdmi->enable_audio = dw_hdmi_i2s_audio_enable;
7357 +diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
7358 +index ecdf9b01340f5..1a58481037b3f 100644
7359 +--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
7360 ++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
7361 +@@ -171,6 +171,7 @@ static const struct regmap_config ti_sn_bridge_regmap_config = {
7362 + .val_bits = 8,
7363 + .volatile_table = &ti_sn_bridge_volatile_table,
7364 + .cache_type = REGCACHE_NONE,
7365 ++ .max_register = 0xFF,
7366 + };
7367 +
7368 + static void ti_sn_bridge_write_u16(struct ti_sn_bridge *pdata,
7369 +diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
7370 +index cd162d406078a..006e3b896caea 100644
7371 +--- a/drivers/gpu/drm/drm_drv.c
7372 ++++ b/drivers/gpu/drm/drm_drv.c
7373 +@@ -577,6 +577,7 @@ static int drm_dev_init(struct drm_device *dev,
7374 + struct drm_driver *driver,
7375 + struct device *parent)
7376 + {
7377 ++ struct inode *inode;
7378 + int ret;
7379 +
7380 + if (!drm_core_init_complete) {
7381 +@@ -613,13 +614,15 @@ static int drm_dev_init(struct drm_device *dev,
7382 + if (ret)
7383 + return ret;
7384 +
7385 +- dev->anon_inode = drm_fs_inode_new();
7386 +- if (IS_ERR(dev->anon_inode)) {
7387 +- ret = PTR_ERR(dev->anon_inode);
7388 ++ inode = drm_fs_inode_new();
7389 ++ if (IS_ERR(inode)) {
7390 ++ ret = PTR_ERR(inode);
7391 + DRM_ERROR("Cannot allocate anonymous inode: %d\n", ret);
7392 + goto err;
7393 + }
7394 +
7395 ++ dev->anon_inode = inode;
7396 ++
7397 + if (drm_core_check_feature(dev, DRIVER_RENDER)) {
7398 + ret = drm_minor_alloc(dev, DRM_MINOR_RENDER);
7399 + if (ret)
7400 +diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
7401 +index a950d5db211c5..9d1bd8f491ad7 100644
7402 +--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
7403 ++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
7404 +@@ -248,6 +248,12 @@ static const struct dmi_system_id orientation_data[] = {
7405 + DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
7406 + },
7407 + .driver_data = (void *)&lcd1200x1920_rightside_up,
7408 ++ }, { /* Lenovo Yoga Book X90F / X91F / X91L */
7409 ++ .matches = {
7410 ++ /* Non exact match to match all versions */
7411 ++ DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"),
7412 ++ },
7413 ++ .driver_data = (void *)&lcd1200x1920_rightside_up,
7414 + }, { /* OneGX1 Pro */
7415 + .matches = {
7416 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SYSTEM_MANUFACTURER"),
7417 +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
7418 +index 5f24cc52c2878..ed2c50011d445 100644
7419 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
7420 ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
7421 +@@ -469,6 +469,12 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
7422 + return -EINVAL;
7423 + }
7424 +
7425 ++ if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
7426 ++ args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
7427 ++ DRM_ERROR("submit arguments out of size limits\n");
7428 ++ return -EINVAL;
7429 ++ }
7430 ++
7431 + /*
7432 + * Copy the command submission and bo array to kernel space in
7433 + * one go, and do this outside of any locks.
7434 +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
7435 +index 1c75c8ed5bcea..85eddd492774d 100644
7436 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
7437 ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
7438 +@@ -130,6 +130,7 @@ struct etnaviv_gpu {
7439 +
7440 + /* hang detection */
7441 + u32 hangcheck_dma_addr;
7442 ++ u32 hangcheck_fence;
7443 +
7444 + void __iomem *mmio;
7445 + int irq;
7446 +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
7447 +index cd46c882269cc..026b6c0731198 100644
7448 +--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
7449 ++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
7450 +@@ -106,8 +106,10 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
7451 + */
7452 + dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
7453 + change = dma_addr - gpu->hangcheck_dma_addr;
7454 +- if (change < 0 || change > 16) {
7455 ++ if (gpu->completed_fence != gpu->hangcheck_fence ||
7456 ++ change < 0 || change > 16) {
7457 + gpu->hangcheck_dma_addr = dma_addr;
7458 ++ gpu->hangcheck_fence = gpu->completed_fence;
7459 + goto out_no_timeout;
7460 + }
7461 +
7462 +diff --git a/drivers/gpu/drm/lima/lima_device.c b/drivers/gpu/drm/lima/lima_device.c
7463 +index 65fdca366e41f..36c9905894278 100644
7464 +--- a/drivers/gpu/drm/lima/lima_device.c
7465 ++++ b/drivers/gpu/drm/lima/lima_device.c
7466 +@@ -357,6 +357,7 @@ int lima_device_init(struct lima_device *ldev)
7467 + int err, i;
7468 +
7469 + dma_set_coherent_mask(ldev->dev, DMA_BIT_MASK(32));
7470 ++ dma_set_max_seg_size(ldev->dev, UINT_MAX);
7471 +
7472 + err = lima_clk_init(ldev);
7473 + if (err)
7474 +diff --git a/drivers/gpu/drm/mediatek/mtk_mipi_tx.c b/drivers/gpu/drm/mediatek/mtk_mipi_tx.c
7475 +index 8cee2591e7284..ccc742dc78bd9 100644
7476 +--- a/drivers/gpu/drm/mediatek/mtk_mipi_tx.c
7477 ++++ b/drivers/gpu/drm/mediatek/mtk_mipi_tx.c
7478 +@@ -147,6 +147,8 @@ static int mtk_mipi_tx_probe(struct platform_device *pdev)
7479 + return -ENOMEM;
7480 +
7481 + mipi_tx->driver_data = of_device_get_match_data(dev);
7482 ++ if (!mipi_tx->driver_data)
7483 ++ return -ENODEV;
7484 +
7485 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
7486 + mipi_tx->regs = devm_ioremap_resource(dev, mem);
7487 +diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
7488 +index dabb4a1ccdcf7..1aad34b5ffd7f 100644
7489 +--- a/drivers/gpu/drm/msm/Kconfig
7490 ++++ b/drivers/gpu/drm/msm/Kconfig
7491 +@@ -60,6 +60,7 @@ config DRM_MSM_HDMI_HDCP
7492 + config DRM_MSM_DP
7493 + bool "Enable DisplayPort support in MSM DRM driver"
7494 + depends on DRM_MSM
7495 ++ select RATIONAL
7496 + default y
7497 + help
7498 + Compile in support for DP driver in MSM DRM driver. DP external
7499 +diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
7500 +index b4a2e8eb35dd2..08e082d0443af 100644
7501 +--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
7502 ++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
7503 +@@ -71,8 +71,8 @@ static int _dpu_danger_signal_status(struct seq_file *s,
7504 + &status);
7505 + } else {
7506 + seq_puts(s, "\nSafe signal status:\n");
7507 +- if (kms->hw_mdp->ops.get_danger_status)
7508 +- kms->hw_mdp->ops.get_danger_status(kms->hw_mdp,
7509 ++ if (kms->hw_mdp->ops.get_safe_status)
7510 ++ kms->hw_mdp->ops.get_safe_status(kms->hw_mdp,
7511 + &status);
7512 + }
7513 + pm_runtime_put_sync(&kms->pdev->dev);
7514 +diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.c b/drivers/gpu/drm/nouveau/dispnv04/disp.c
7515 +index 7739f46470d3e..99fee4d8cd318 100644
7516 +--- a/drivers/gpu/drm/nouveau/dispnv04/disp.c
7517 ++++ b/drivers/gpu/drm/nouveau/dispnv04/disp.c
7518 +@@ -205,7 +205,7 @@ nv04_display_destroy(struct drm_device *dev)
7519 + nvif_notify_dtor(&disp->flip);
7520 +
7521 + nouveau_display(dev)->priv = NULL;
7522 +- kfree(disp);
7523 ++ vfree(disp);
7524 +
7525 + nvif_object_unmap(&drm->client.device.object);
7526 + }
7527 +@@ -223,7 +223,7 @@ nv04_display_create(struct drm_device *dev)
7528 + struct nv04_display *disp;
7529 + int i, ret;
7530 +
7531 +- disp = kzalloc(sizeof(*disp), GFP_KERNEL);
7532 ++ disp = vzalloc(sizeof(*disp));
7533 + if (!disp)
7534 + return -ENOMEM;
7535 +
7536 +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
7537 +index a0fe607c9c07f..3bfc55c571b5e 100644
7538 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
7539 ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
7540 +@@ -94,20 +94,13 @@ nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
7541 + return 0;
7542 + }
7543 +
7544 +-static int
7545 ++static void
7546 + nvkm_pmu_reset(struct nvkm_pmu *pmu)
7547 + {
7548 + struct nvkm_device *device = pmu->subdev.device;
7549 +
7550 + if (!pmu->func->enabled(pmu))
7551 +- return 0;
7552 +-
7553 +- /* Inhibit interrupts, and wait for idle. */
7554 +- nvkm_wr32(device, 0x10a014, 0x0000ffff);
7555 +- nvkm_msec(device, 2000,
7556 +- if (!nvkm_rd32(device, 0x10a04c))
7557 +- break;
7558 +- );
7559 ++ return;
7560 +
7561 + /* Reset. */
7562 + if (pmu->func->reset)
7563 +@@ -118,25 +111,37 @@ nvkm_pmu_reset(struct nvkm_pmu *pmu)
7564 + if (!(nvkm_rd32(device, 0x10a10c) & 0x00000006))
7565 + break;
7566 + );
7567 +-
7568 +- return 0;
7569 + }
7570 +
7571 + static int
7572 + nvkm_pmu_preinit(struct nvkm_subdev *subdev)
7573 + {
7574 + struct nvkm_pmu *pmu = nvkm_pmu(subdev);
7575 +- return nvkm_pmu_reset(pmu);
7576 ++ nvkm_pmu_reset(pmu);
7577 ++ return 0;
7578 + }
7579 +
7580 + static int
7581 + nvkm_pmu_init(struct nvkm_subdev *subdev)
7582 + {
7583 + struct nvkm_pmu *pmu = nvkm_pmu(subdev);
7584 +- int ret = nvkm_pmu_reset(pmu);
7585 +- if (ret == 0 && pmu->func->init)
7586 +- ret = pmu->func->init(pmu);
7587 +- return ret;
7588 ++ struct nvkm_device *device = pmu->subdev.device;
7589 ++
7590 ++ if (!pmu->func->init)
7591 ++ return 0;
7592 ++
7593 ++ if (pmu->func->enabled(pmu)) {
7594 ++ /* Inhibit interrupts, and wait for idle. */
7595 ++ nvkm_wr32(device, 0x10a014, 0x0000ffff);
7596 ++ nvkm_msec(device, 2000,
7597 ++ if (!nvkm_rd32(device, 0x10a04c))
7598 ++ break;
7599 ++ );
7600 ++
7601 ++ nvkm_pmu_reset(pmu);
7602 ++ }
7603 ++
7604 ++ return pmu->func->init(pmu);
7605 + }
7606 +
7607 + static void *
7608 +diff --git a/drivers/gpu/drm/panel/panel-innolux-p079zca.c b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
7609 +index aea3162253914..f194b62e290ca 100644
7610 +--- a/drivers/gpu/drm/panel/panel-innolux-p079zca.c
7611 ++++ b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
7612 +@@ -484,6 +484,7 @@ static void innolux_panel_del(struct innolux_panel *innolux)
7613 + static int innolux_panel_probe(struct mipi_dsi_device *dsi)
7614 + {
7615 + const struct panel_desc *desc;
7616 ++ struct innolux_panel *innolux;
7617 + int err;
7618 +
7619 + desc = of_device_get_match_data(&dsi->dev);
7620 +@@ -495,7 +496,14 @@ static int innolux_panel_probe(struct mipi_dsi_device *dsi)
7621 + if (err < 0)
7622 + return err;
7623 +
7624 +- return mipi_dsi_attach(dsi);
7625 ++ err = mipi_dsi_attach(dsi);
7626 ++ if (err < 0) {
7627 ++ innolux = mipi_dsi_get_drvdata(dsi);
7628 ++ innolux_panel_del(innolux);
7629 ++ return err;
7630 ++ }
7631 ++
7632 ++ return 0;
7633 + }
7634 +
7635 + static int innolux_panel_remove(struct mipi_dsi_device *dsi)
7636 +diff --git a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
7637 +index 86e4213e8bb13..daccb1fd5fdad 100644
7638 +--- a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
7639 ++++ b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
7640 +@@ -406,7 +406,13 @@ static int kingdisplay_panel_probe(struct mipi_dsi_device *dsi)
7641 + if (err < 0)
7642 + return err;
7643 +
7644 +- return mipi_dsi_attach(dsi);
7645 ++ err = mipi_dsi_attach(dsi);
7646 ++ if (err < 0) {
7647 ++ kingdisplay_panel_del(kingdisplay);
7648 ++ return err;
7649 ++ }
7650 ++
7651 ++ return 0;
7652 + }
7653 +
7654 + static int kingdisplay_panel_remove(struct mipi_dsi_device *dsi)
7655 +diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
7656 +index 8c0a572940e82..32070e94f6c49 100644
7657 +--- a/drivers/gpu/drm/radeon/radeon_kms.c
7658 ++++ b/drivers/gpu/drm/radeon/radeon_kms.c
7659 +@@ -634,6 +634,8 @@ void radeon_driver_lastclose_kms(struct drm_device *dev)
7660 + int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
7661 + {
7662 + struct radeon_device *rdev = dev->dev_private;
7663 ++ struct radeon_fpriv *fpriv;
7664 ++ struct radeon_vm *vm;
7665 + int r;
7666 +
7667 + file_priv->driver_priv = NULL;
7668 +@@ -646,48 +648,52 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
7669 +
7670 + /* new gpu have virtual address space support */
7671 + if (rdev->family >= CHIP_CAYMAN) {
7672 +- struct radeon_fpriv *fpriv;
7673 +- struct radeon_vm *vm;
7674 +
7675 + fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL);
7676 + if (unlikely(!fpriv)) {
7677 + r = -ENOMEM;
7678 +- goto out_suspend;
7679 ++ goto err_suspend;
7680 + }
7681 +
7682 + if (rdev->accel_working) {
7683 + vm = &fpriv->vm;
7684 + r = radeon_vm_init(rdev, vm);
7685 +- if (r) {
7686 +- kfree(fpriv);
7687 +- goto out_suspend;
7688 +- }
7689 ++ if (r)
7690 ++ goto err_fpriv;
7691 +
7692 + r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
7693 +- if (r) {
7694 +- radeon_vm_fini(rdev, vm);
7695 +- kfree(fpriv);
7696 +- goto out_suspend;
7697 +- }
7698 ++ if (r)
7699 ++ goto err_vm_fini;
7700 +
7701 + /* map the ib pool buffer read only into
7702 + * virtual address space */
7703 + vm->ib_bo_va = radeon_vm_bo_add(rdev, vm,
7704 + rdev->ring_tmp_bo.bo);
7705 ++ if (!vm->ib_bo_va) {
7706 ++ r = -ENOMEM;
7707 ++ goto err_vm_fini;
7708 ++ }
7709 ++
7710 + r = radeon_vm_bo_set_addr(rdev, vm->ib_bo_va,
7711 + RADEON_VA_IB_OFFSET,
7712 + RADEON_VM_PAGE_READABLE |
7713 + RADEON_VM_PAGE_SNOOPED);
7714 +- if (r) {
7715 +- radeon_vm_fini(rdev, vm);
7716 +- kfree(fpriv);
7717 +- goto out_suspend;
7718 +- }
7719 ++ if (r)
7720 ++ goto err_vm_fini;
7721 + }
7722 + file_priv->driver_priv = fpriv;
7723 + }
7724 +
7725 +-out_suspend:
7726 ++ pm_runtime_mark_last_busy(dev->dev);
7727 ++ pm_runtime_put_autosuspend(dev->dev);
7728 ++ return 0;
7729 ++
7730 ++err_vm_fini:
7731 ++ radeon_vm_fini(rdev, vm);
7732 ++err_fpriv:
7733 ++ kfree(fpriv);
7734 ++
7735 ++err_suspend:
7736 + pm_runtime_mark_last_busy(dev->dev);
7737 + pm_runtime_put_autosuspend(dev->dev);
7738 + return r;
7739 +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
7740 +index 1b9738e44909d..065604c5837de 100644
7741 +--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
7742 ++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
7743 +@@ -215,6 +215,7 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
7744 + const struct drm_display_mode *mode = &rcrtc->crtc.state->adjusted_mode;
7745 + struct rcar_du_device *rcdu = rcrtc->dev;
7746 + unsigned long mode_clock = mode->clock * 1000;
7747 ++ unsigned int hdse_offset;
7748 + u32 dsmr;
7749 + u32 escr;
7750 +
7751 +@@ -298,10 +299,15 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
7752 + | DSMR_DIPM_DISP | DSMR_CSPM;
7753 + rcar_du_crtc_write(rcrtc, DSMR, dsmr);
7754 +
7755 ++ hdse_offset = 19;
7756 ++ if (rcrtc->group->cmms_mask & BIT(rcrtc->index % 2))
7757 ++ hdse_offset += 25;
7758 ++
7759 + /* Display timings */
7760 +- rcar_du_crtc_write(rcrtc, HDSR, mode->htotal - mode->hsync_start - 19);
7761 ++ rcar_du_crtc_write(rcrtc, HDSR, mode->htotal - mode->hsync_start -
7762 ++ hdse_offset);
7763 + rcar_du_crtc_write(rcrtc, HDER, mode->htotal - mode->hsync_start +
7764 +- mode->hdisplay - 19);
7765 ++ mode->hdisplay - hdse_offset);
7766 + rcar_du_crtc_write(rcrtc, HSWR, mode->hsync_end -
7767 + mode->hsync_start - 1);
7768 + rcar_du_crtc_write(rcrtc, HCR, mode->htotal - 1);
7769 +@@ -831,6 +837,7 @@ rcar_du_crtc_mode_valid(struct drm_crtc *crtc,
7770 + struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
7771 + struct rcar_du_device *rcdu = rcrtc->dev;
7772 + bool interlaced = mode->flags & DRM_MODE_FLAG_INTERLACE;
7773 ++ unsigned int min_sync_porch;
7774 + unsigned int vbp;
7775 +
7776 + if (interlaced && !rcar_du_has(rcdu, RCAR_DU_FEATURE_INTERLACED))
7777 +@@ -838,9 +845,14 @@ rcar_du_crtc_mode_valid(struct drm_crtc *crtc,
7778 +
7779 + /*
7780 + * The hardware requires a minimum combined horizontal sync and back
7781 +- * porch of 20 pixels and a minimum vertical back porch of 3 lines.
7782 ++ * porch of 20 pixels (when CMM isn't used) or 45 pixels (when CMM is
7783 ++ * used), and a minimum vertical back porch of 3 lines.
7784 + */
7785 +- if (mode->htotal - mode->hsync_start < 20)
7786 ++ min_sync_porch = 20;
7787 ++ if (rcrtc->group->cmms_mask & BIT(rcrtc->index % 2))
7788 ++ min_sync_porch += 25;
7789 ++
7790 ++ if (mode->htotal - mode->hsync_start < min_sync_porch)
7791 + return MODE_HBLANK_NARROW;
7792 +
7793 + vbp = (mode->vtotal - mode->vsync_end) / (interlaced ? 2 : 1);
7794 +diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
7795 +index d0c9610ad2202..b0fb3c3cba596 100644
7796 +--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
7797 ++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
7798 +@@ -243,6 +243,8 @@ struct dw_mipi_dsi_rockchip {
7799 + struct dw_mipi_dsi *dmd;
7800 + const struct rockchip_dw_dsi_chip_data *cdata;
7801 + struct dw_mipi_dsi_plat_data pdata;
7802 ++
7803 ++ bool dsi_bound;
7804 + };
7805 +
7806 + struct dphy_pll_parameter_map {
7807 +@@ -753,10 +755,6 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
7808 + if (mux < 0)
7809 + return;
7810 +
7811 +- pm_runtime_get_sync(dsi->dev);
7812 +- if (dsi->slave)
7813 +- pm_runtime_get_sync(dsi->slave->dev);
7814 +-
7815 + /*
7816 + * For the RK3399, the clk of grf must be enabled before writing grf
7817 + * register. And for RK3288 or other soc, this grf_clk must be NULL,
7818 +@@ -775,20 +773,10 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
7819 + clk_disable_unprepare(dsi->grf_clk);
7820 + }
7821 +
7822 +-static void dw_mipi_dsi_encoder_disable(struct drm_encoder *encoder)
7823 +-{
7824 +- struct dw_mipi_dsi_rockchip *dsi = to_dsi(encoder);
7825 +-
7826 +- if (dsi->slave)
7827 +- pm_runtime_put(dsi->slave->dev);
7828 +- pm_runtime_put(dsi->dev);
7829 +-}
7830 +-
7831 + static const struct drm_encoder_helper_funcs
7832 + dw_mipi_dsi_encoder_helper_funcs = {
7833 + .atomic_check = dw_mipi_dsi_encoder_atomic_check,
7834 + .enable = dw_mipi_dsi_encoder_enable,
7835 +- .disable = dw_mipi_dsi_encoder_disable,
7836 + };
7837 +
7838 + static int rockchip_dsi_drm_create_encoder(struct dw_mipi_dsi_rockchip *dsi,
7839 +@@ -918,10 +906,14 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
7840 + put_device(second);
7841 + }
7842 +
7843 ++ pm_runtime_get_sync(dsi->dev);
7844 ++ if (dsi->slave)
7845 ++ pm_runtime_get_sync(dsi->slave->dev);
7846 ++
7847 + ret = clk_prepare_enable(dsi->pllref_clk);
7848 + if (ret) {
7849 + DRM_DEV_ERROR(dev, "Failed to enable pllref_clk: %d\n", ret);
7850 +- return ret;
7851 ++ goto out_pm_runtime;
7852 + }
7853 +
7854 + /*
7855 +@@ -933,7 +925,7 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
7856 + ret = clk_prepare_enable(dsi->grf_clk);
7857 + if (ret) {
7858 + DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
7859 +- return ret;
7860 ++ goto out_pll_clk;
7861 + }
7862 +
7863 + dw_mipi_dsi_rockchip_config(dsi);
7864 +@@ -945,16 +937,27 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
7865 + ret = rockchip_dsi_drm_create_encoder(dsi, drm_dev);
7866 + if (ret) {
7867 + DRM_DEV_ERROR(dev, "Failed to create drm encoder\n");
7868 +- return ret;
7869 ++ goto out_pll_clk;
7870 + }
7871 +
7872 + ret = dw_mipi_dsi_bind(dsi->dmd, &dsi->encoder);
7873 + if (ret) {
7874 + DRM_DEV_ERROR(dev, "Failed to bind: %d\n", ret);
7875 +- return ret;
7876 ++ goto out_pll_clk;
7877 + }
7878 +
7879 ++ dsi->dsi_bound = true;
7880 ++
7881 + return 0;
7882 ++
7883 ++out_pll_clk:
7884 ++ clk_disable_unprepare(dsi->pllref_clk);
7885 ++out_pm_runtime:
7886 ++ pm_runtime_put(dsi->dev);
7887 ++ if (dsi->slave)
7888 ++ pm_runtime_put(dsi->slave->dev);
7889 ++
7890 ++ return ret;
7891 + }
7892 +
7893 + static void dw_mipi_dsi_rockchip_unbind(struct device *dev,
7894 +@@ -966,9 +969,15 @@ static void dw_mipi_dsi_rockchip_unbind(struct device *dev,
7895 + if (dsi->is_slave)
7896 + return;
7897 +
7898 ++ dsi->dsi_bound = false;
7899 ++
7900 + dw_mipi_dsi_unbind(dsi->dmd);
7901 +
7902 + clk_disable_unprepare(dsi->pllref_clk);
7903 ++
7904 ++ pm_runtime_put(dsi->dev);
7905 ++ if (dsi->slave)
7906 ++ pm_runtime_put(dsi->slave->dev);
7907 + }
7908 +
7909 + static const struct component_ops dw_mipi_dsi_rockchip_ops = {
7910 +@@ -1026,6 +1035,36 @@ static const struct dw_mipi_dsi_host_ops dw_mipi_dsi_rockchip_host_ops = {
7911 + .detach = dw_mipi_dsi_rockchip_host_detach,
7912 + };
7913 +
7914 ++static int __maybe_unused dw_mipi_dsi_rockchip_resume(struct device *dev)
7915 ++{
7916 ++ struct dw_mipi_dsi_rockchip *dsi = dev_get_drvdata(dev);
7917 ++ int ret;
7918 ++
7919 ++ /*
7920 ++ * Re-configure DSI state, if we were previously initialized. We need
7921 ++ * to do this before rockchip_drm_drv tries to re-enable() any panels.
7922 ++ */
7923 ++ if (dsi->dsi_bound) {
7924 ++ ret = clk_prepare_enable(dsi->grf_clk);
7925 ++ if (ret) {
7926 ++ DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
7927 ++ return ret;
7928 ++ }
7929 ++
7930 ++ dw_mipi_dsi_rockchip_config(dsi);
7931 ++ if (dsi->slave)
7932 ++ dw_mipi_dsi_rockchip_config(dsi->slave);
7933 ++
7934 ++ clk_disable_unprepare(dsi->grf_clk);
7935 ++ }
7936 ++
7937 ++ return 0;
7938 ++}
7939 ++
7940 ++static const struct dev_pm_ops dw_mipi_dsi_rockchip_pm_ops = {
7941 ++ SET_LATE_SYSTEM_SLEEP_PM_OPS(NULL, dw_mipi_dsi_rockchip_resume)
7942 ++};
7943 ++
7944 + static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
7945 + {
7946 + struct device *dev = &pdev->dev;
7947 +@@ -1126,14 +1165,10 @@ static int dw_mipi_dsi_rockchip_probe(struct platform_device *pdev)
7948 + if (ret != -EPROBE_DEFER)
7949 + DRM_DEV_ERROR(dev,
7950 + "Failed to probe dw_mipi_dsi: %d\n", ret);
7951 +- goto err_clkdisable;
7952 ++ return ret;
7953 + }
7954 +
7955 + return 0;
7956 +-
7957 +-err_clkdisable:
7958 +- clk_disable_unprepare(dsi->pllref_clk);
7959 +- return ret;
7960 + }
7961 +
7962 + static int dw_mipi_dsi_rockchip_remove(struct platform_device *pdev)
7963 +@@ -1249,6 +1284,7 @@ struct platform_driver dw_mipi_dsi_rockchip_driver = {
7964 + .remove = dw_mipi_dsi_rockchip_remove,
7965 + .driver = {
7966 + .of_match_table = dw_mipi_dsi_rockchip_dt_ids,
7967 ++ .pm = &dw_mipi_dsi_rockchip_pm_ops,
7968 + .name = "dw-mipi-dsi-rockchip",
7969 + },
7970 + };
7971 +diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c
7972 +index b77f726303d89..ec0e4d8f0aade 100644
7973 +--- a/drivers/gpu/drm/tegra/vic.c
7974 ++++ b/drivers/gpu/drm/tegra/vic.c
7975 +@@ -5,6 +5,7 @@
7976 +
7977 + #include <linux/clk.h>
7978 + #include <linux/delay.h>
7979 ++#include <linux/dma-mapping.h>
7980 + #include <linux/host1x.h>
7981 + #include <linux/iommu.h>
7982 + #include <linux/module.h>
7983 +@@ -265,10 +266,8 @@ static int vic_load_firmware(struct vic *vic)
7984 +
7985 + if (!client->group) {
7986 + virt = dma_alloc_coherent(vic->dev, size, &iova, GFP_KERNEL);
7987 +-
7988 +- err = dma_mapping_error(vic->dev, iova);
7989 +- if (err < 0)
7990 +- return err;
7991 ++ if (!virt)
7992 ++ return -ENOMEM;
7993 + } else {
7994 + virt = tegra_drm_alloc(tegra, size, &iova);
7995 + }
7996 +diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
7997 +index eb4b7df02ca03..f673292eec9db 100644
7998 +--- a/drivers/gpu/drm/ttm/ttm_bo.c
7999 ++++ b/drivers/gpu/drm/ttm/ttm_bo.c
8000 +@@ -789,6 +789,8 @@ int ttm_mem_evict_first(struct ttm_bo_device *bdev,
8001 + ret = ttm_bo_evict(bo, ctx);
8002 + if (locked)
8003 + ttm_bo_unreserve(bo);
8004 ++ else
8005 ++ ttm_bo_move_to_lru_tail_unlocked(bo);
8006 +
8007 + ttm_bo_put(bo);
8008 + return ret;
8009 +diff --git a/drivers/gpu/drm/vboxvideo/vbox_main.c b/drivers/gpu/drm/vboxvideo/vbox_main.c
8010 +index d68d9bad76747..c5ea880d17b29 100644
8011 +--- a/drivers/gpu/drm/vboxvideo/vbox_main.c
8012 ++++ b/drivers/gpu/drm/vboxvideo/vbox_main.c
8013 +@@ -123,8 +123,8 @@ int vbox_hw_init(struct vbox_private *vbox)
8014 + /* Create guest-heap mem-pool use 2^4 = 16 byte chunks */
8015 + vbox->guest_pool = devm_gen_pool_create(vbox->ddev.dev, 4, -1,
8016 + "vboxvideo-accel");
8017 +- if (!vbox->guest_pool)
8018 +- return -ENOMEM;
8019 ++ if (IS_ERR(vbox->guest_pool))
8020 ++ return PTR_ERR(vbox->guest_pool);
8021 +
8022 + ret = gen_pool_add_virt(vbox->guest_pool,
8023 + (unsigned long)vbox->guest_heap,
8024 +diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
8025 +index ee293f061f0a8..9392de2679a1d 100644
8026 +--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
8027 ++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
8028 +@@ -79,6 +79,7 @@
8029 + # define VC4_HD_M_SW_RST BIT(2)
8030 + # define VC4_HD_M_ENABLE BIT(0)
8031 +
8032 ++#define HSM_MIN_CLOCK_FREQ 120000000
8033 + #define CEC_CLOCK_FREQ 40000
8034 + #define VC4_HSM_MID_CLOCK 149985000
8035 +
8036 +@@ -1398,8 +1399,14 @@ static int vc4_hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
8037 + struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap);
8038 + /* clock period in microseconds */
8039 + const u32 usecs = 1000000 / CEC_CLOCK_FREQ;
8040 +- u32 val = HDMI_READ(HDMI_CEC_CNTRL_5);
8041 ++ u32 val;
8042 ++ int ret;
8043 +
8044 ++ ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
8045 ++ if (ret)
8046 ++ return ret;
8047 ++
8048 ++ val = HDMI_READ(HDMI_CEC_CNTRL_5);
8049 + val &= ~(VC4_HDMI_CEC_TX_SW_RESET | VC4_HDMI_CEC_RX_SW_RESET |
8050 + VC4_HDMI_CEC_CNT_TO_4700_US_MASK |
8051 + VC4_HDMI_CEC_CNT_TO_4500_US_MASK);
8052 +@@ -1524,6 +1531,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
8053 + if (ret < 0)
8054 + goto err_delete_cec_adap;
8055 +
8056 ++ pm_runtime_put(&vc4_hdmi->pdev->dev);
8057 ++
8058 + return 0;
8059 +
8060 + err_delete_cec_adap:
8061 +@@ -1806,6 +1815,19 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
8062 + vc4_hdmi->disable_wifi_frequencies =
8063 + of_property_read_bool(dev->of_node, "wifi-2.4ghz-coexistence");
8064 +
8065 ++ /*
8066 ++ * If we boot without any cable connected to the HDMI connector,
8067 ++ * the firmware will skip the HSM initialization and leave it
8068 ++ * with a rate of 0, resulting in a bus lockup when we're
8069 ++ * accessing the registers even if it's enabled.
8070 ++ *
8071 ++ * Let's put a sensible default at runtime_resume so that we
8072 ++ * don't end up in this situation.
8073 ++ */
8074 ++ ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ);
8075 ++ if (ret)
8076 ++ goto err_put_ddc;
8077 ++
8078 + if (vc4_hdmi->variant->reset)
8079 + vc4_hdmi->variant->reset(vc4_hdmi);
8080 +
8081 +diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
8082 +index d0ebb70e2fdd6..a2c09dca4eef9 100644
8083 +--- a/drivers/gpu/host1x/dev.c
8084 ++++ b/drivers/gpu/host1x/dev.c
8085 +@@ -18,6 +18,10 @@
8086 + #include <trace/events/host1x.h>
8087 + #undef CREATE_TRACE_POINTS
8088 +
8089 ++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
8090 ++#include <asm/dma-iommu.h>
8091 ++#endif
8092 ++
8093 + #include "bus.h"
8094 + #include "channel.h"
8095 + #include "debug.h"
8096 +@@ -232,6 +236,17 @@ static struct iommu_domain *host1x_iommu_attach(struct host1x *host)
8097 + struct iommu_domain *domain = iommu_get_domain_for_dev(host->dev);
8098 + int err;
8099 +
8100 ++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
8101 ++ if (host->dev->archdata.mapping) {
8102 ++ struct dma_iommu_mapping *mapping =
8103 ++ to_dma_iommu_mapping(host->dev);
8104 ++ arm_iommu_detach_device(host->dev);
8105 ++ arm_iommu_release_mapping(mapping);
8106 ++
8107 ++ domain = iommu_get_domain_for_dev(host->dev);
8108 ++ }
8109 ++#endif
8110 ++
8111 + /*
8112 + * We may not always want to enable IOMMU support (for example if the
8113 + * host1x firewall is already enabled and we don't support addressing
8114 +diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
8115 +index 5c1d33cda863b..e5d2e7e9541b8 100644
8116 +--- a/drivers/hid/hid-apple.c
8117 ++++ b/drivers/hid/hid-apple.c
8118 +@@ -415,7 +415,7 @@ static int apple_input_configured(struct hid_device *hdev,
8119 +
8120 + if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) {
8121 + hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n");
8122 +- asc->quirks = 0;
8123 ++ asc->quirks &= ~APPLE_HAS_FN;
8124 + }
8125 +
8126 + return 0;
8127 +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
8128 +index 580d378342c41..eb53855898c8d 100644
8129 +--- a/drivers/hid/hid-input.c
8130 ++++ b/drivers/hid/hid-input.c
8131 +@@ -1288,6 +1288,12 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
8132 +
8133 + input = field->hidinput->input;
8134 +
8135 ++ if (usage->type == EV_ABS &&
8136 ++ (((*quirks & HID_QUIRK_X_INVERT) && usage->code == ABS_X) ||
8137 ++ ((*quirks & HID_QUIRK_Y_INVERT) && usage->code == ABS_Y))) {
8138 ++ value = field->logical_maximum - value;
8139 ++ }
8140 ++
8141 + if (usage->hat_min < usage->hat_max || usage->hat_dir) {
8142 + int hat_dir = usage->hat_dir;
8143 + if (!hat_dir)
8144 +diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
8145 +index dd05bed4ca53a..38f9bbad81c17 100644
8146 +--- a/drivers/hid/hid-uclogic-params.c
8147 ++++ b/drivers/hid/hid-uclogic-params.c
8148 +@@ -65,7 +65,7 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev,
8149 + __u8 idx, size_t len)
8150 + {
8151 + int rc;
8152 +- struct usb_device *udev = hid_to_usb_dev(hdev);
8153 ++ struct usb_device *udev;
8154 + __u8 *buf = NULL;
8155 +
8156 + /* Check arguments */
8157 +@@ -74,6 +74,8 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev,
8158 + goto cleanup;
8159 + }
8160 +
8161 ++ udev = hid_to_usb_dev(hdev);
8162 ++
8163 + buf = kmalloc(len, GFP_KERNEL);
8164 + if (buf == NULL) {
8165 + rc = -ENOMEM;
8166 +@@ -449,7 +451,7 @@ static int uclogic_params_frame_init_v1_buttonpad(
8167 + {
8168 + int rc;
8169 + bool found = false;
8170 +- struct usb_device *usb_dev = hid_to_usb_dev(hdev);
8171 ++ struct usb_device *usb_dev;
8172 + char *str_buf = NULL;
8173 + const size_t str_len = 16;
8174 +
8175 +@@ -459,6 +461,8 @@ static int uclogic_params_frame_init_v1_buttonpad(
8176 + goto cleanup;
8177 + }
8178 +
8179 ++ usb_dev = hid_to_usb_dev(hdev);
8180 ++
8181 + /*
8182 + * Enable generic button mode
8183 + */
8184 +@@ -705,9 +709,9 @@ static int uclogic_params_huion_init(struct uclogic_params *params,
8185 + struct hid_device *hdev)
8186 + {
8187 + int rc;
8188 +- struct usb_device *udev = hid_to_usb_dev(hdev);
8189 +- struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
8190 +- __u8 bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
8191 ++ struct usb_device *udev;
8192 ++ struct usb_interface *iface;
8193 ++ __u8 bInterfaceNumber;
8194 + bool found;
8195 + /* The resulting parameters (noop) */
8196 + struct uclogic_params p = {0, };
8197 +@@ -721,6 +725,10 @@ static int uclogic_params_huion_init(struct uclogic_params *params,
8198 + goto cleanup;
8199 + }
8200 +
8201 ++ udev = hid_to_usb_dev(hdev);
8202 ++ iface = to_usb_interface(hdev->dev.parent);
8203 ++ bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
8204 ++
8205 + /* If it's not a pen interface */
8206 + if (bInterfaceNumber != 0) {
8207 + /* TODO: Consider marking the interface invalid */
8208 +@@ -832,10 +840,10 @@ int uclogic_params_init(struct uclogic_params *params,
8209 + struct hid_device *hdev)
8210 + {
8211 + int rc;
8212 +- struct usb_device *udev = hid_to_usb_dev(hdev);
8213 +- __u8 bNumInterfaces = udev->config->desc.bNumInterfaces;
8214 +- struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
8215 +- __u8 bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
8216 ++ struct usb_device *udev;
8217 ++ __u8 bNumInterfaces;
8218 ++ struct usb_interface *iface;
8219 ++ __u8 bInterfaceNumber;
8220 + bool found;
8221 + /* The resulting parameters (noop) */
8222 + struct uclogic_params p = {0, };
8223 +@@ -846,6 +854,11 @@ int uclogic_params_init(struct uclogic_params *params,
8224 + goto cleanup;
8225 + }
8226 +
8227 ++ udev = hid_to_usb_dev(hdev);
8228 ++ bNumInterfaces = udev->config->desc.bNumInterfaces;
8229 ++ iface = to_usb_interface(hdev->dev.parent);
8230 ++ bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
8231 ++
8232 + /*
8233 + * Set replacement report descriptor if the original matches the
8234 + * specified size. Otherwise keep interface unchanged.
8235 +diff --git a/drivers/hid/hid-vivaldi.c b/drivers/hid/hid-vivaldi.c
8236 +index 72957a9f71170..576518e704ee6 100644
8237 +--- a/drivers/hid/hid-vivaldi.c
8238 ++++ b/drivers/hid/hid-vivaldi.c
8239 +@@ -74,10 +74,11 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
8240 + struct hid_usage *usage)
8241 + {
8242 + struct vivaldi_data *drvdata = hid_get_drvdata(hdev);
8243 ++ struct hid_report *report = field->report;
8244 + int fn_key;
8245 + int ret;
8246 + u32 report_len;
8247 +- u8 *buf;
8248 ++ u8 *report_data, *buf;
8249 +
8250 + if (field->logical != HID_USAGE_FN_ROW_PHYSMAP ||
8251 + (usage->hid & HID_USAGE_PAGE) != HID_UP_ORDINAL)
8252 +@@ -89,12 +90,24 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
8253 + if (fn_key > drvdata->max_function_row_key)
8254 + drvdata->max_function_row_key = fn_key;
8255 +
8256 +- buf = hid_alloc_report_buf(field->report, GFP_KERNEL);
8257 +- if (!buf)
8258 ++ report_data = buf = hid_alloc_report_buf(report, GFP_KERNEL);
8259 ++ if (!report_data)
8260 + return;
8261 +
8262 +- report_len = hid_report_len(field->report);
8263 +- ret = hid_hw_raw_request(hdev, field->report->id, buf,
8264 ++ report_len = hid_report_len(report);
8265 ++ if (!report->id) {
8266 ++ /*
8267 ++ * hid_hw_raw_request() will stuff report ID (which will be 0)
8268 ++ * into the first byte of the buffer even for unnumbered
8269 ++ * reports, so we need to account for this to avoid getting
8270 ++ * -EOVERFLOW in return.
8271 ++ * Note that hid_alloc_report_buf() adds 7 bytes to the size
8272 ++ * so we can safely say that we have space for an extra byte.
8273 ++ */
8274 ++ report_len++;
8275 ++ }
8276 ++
8277 ++ ret = hid_hw_raw_request(hdev, report->id, report_data,
8278 + report_len, HID_FEATURE_REPORT,
8279 + HID_REQ_GET_REPORT);
8280 + if (ret < 0) {
8281 +@@ -103,7 +116,16 @@ static void vivaldi_feature_mapping(struct hid_device *hdev,
8282 + goto out;
8283 + }
8284 +
8285 +- ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, buf,
8286 ++ if (!report->id) {
8287 ++ /*
8288 ++ * Undo the damage from hid_hw_raw_request() for unnumbered
8289 ++ * reports.
8290 ++ */
8291 ++ report_data++;
8292 ++ report_len--;
8293 ++ }
8294 ++
8295 ++ ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, report_data,
8296 + report_len, 0);
8297 + if (ret) {
8298 + dev_warn(&hdev->dev, "failed to report feature %d\n",
8299 +diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c
8300 +index 8fe3efcb83271..fc06d8bb42e0f 100644
8301 +--- a/drivers/hid/uhid.c
8302 ++++ b/drivers/hid/uhid.c
8303 +@@ -28,11 +28,22 @@
8304 +
8305 + struct uhid_device {
8306 + struct mutex devlock;
8307 ++
8308 ++ /* This flag tracks whether the HID device is usable for commands from
8309 ++ * userspace. The flag is already set before hid_add_device(), which
8310 ++ * runs in workqueue context, to allow hid_add_device() to communicate
8311 ++ * with userspace.
8312 ++ * However, if hid_add_device() fails, the flag is cleared without
8313 ++ * holding devlock.
8314 ++ * We guarantee that if @running changes from true to false while you're
8315 ++ * holding @devlock, it's still fine to access @hid.
8316 ++ */
8317 + bool running;
8318 +
8319 + __u8 *rd_data;
8320 + uint rd_size;
8321 +
8322 ++ /* When this is NULL, userspace may use UHID_CREATE/UHID_CREATE2. */
8323 + struct hid_device *hid;
8324 + struct uhid_event input_buf;
8325 +
8326 +@@ -63,9 +74,18 @@ static void uhid_device_add_worker(struct work_struct *work)
8327 + if (ret) {
8328 + hid_err(uhid->hid, "Cannot register HID device: error %d\n", ret);
8329 +
8330 +- hid_destroy_device(uhid->hid);
8331 +- uhid->hid = NULL;
8332 ++ /* We used to call hid_destroy_device() here, but that's really
8333 ++ * messy to get right because we have to coordinate with
8334 ++ * concurrent writes from userspace that might be in the middle
8335 ++ * of using uhid->hid.
8336 ++ * Just leave uhid->hid as-is for now, and clean it up when
8337 ++ * userspace tries to close or reinitialize the uhid instance.
8338 ++ *
8339 ++ * However, we do have to clear the ->running flag and do a
8340 ++ * wakeup to make sure userspace knows that the device is gone.
8341 ++ */
8342 + uhid->running = false;
8343 ++ wake_up_interruptible(&uhid->report_wait);
8344 + }
8345 + }
8346 +
8347 +@@ -474,7 +494,7 @@ static int uhid_dev_create2(struct uhid_device *uhid,
8348 + void *rd_data;
8349 + int ret;
8350 +
8351 +- if (uhid->running)
8352 ++ if (uhid->hid)
8353 + return -EALREADY;
8354 +
8355 + rd_size = ev->u.create2.rd_size;
8356 +@@ -556,7 +576,7 @@ static int uhid_dev_create(struct uhid_device *uhid,
8357 +
8358 + static int uhid_dev_destroy(struct uhid_device *uhid)
8359 + {
8360 +- if (!uhid->running)
8361 ++ if (!uhid->hid)
8362 + return -EINVAL;
8363 +
8364 + uhid->running = false;
8365 +@@ -565,6 +585,7 @@ static int uhid_dev_destroy(struct uhid_device *uhid)
8366 + cancel_work_sync(&uhid->worker);
8367 +
8368 + hid_destroy_device(uhid->hid);
8369 ++ uhid->hid = NULL;
8370 + kfree(uhid->rd_data);
8371 +
8372 + return 0;
8373 +diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
8374 +index c25274275258f..d90bfa8b7313e 100644
8375 +--- a/drivers/hid/wacom_wac.c
8376 ++++ b/drivers/hid/wacom_wac.c
8377 +@@ -2566,6 +2566,24 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
8378 + }
8379 + }
8380 +
8381 ++static bool wacom_wac_slot_is_active(struct input_dev *dev, int key)
8382 ++{
8383 ++ struct input_mt *mt = dev->mt;
8384 ++ struct input_mt_slot *s;
8385 ++
8386 ++ if (!mt)
8387 ++ return false;
8388 ++
8389 ++ for (s = mt->slots; s != mt->slots + mt->num_slots; s++) {
8390 ++ if (s->key == key &&
8391 ++ input_mt_get_value(s, ABS_MT_TRACKING_ID) >= 0) {
8392 ++ return true;
8393 ++ }
8394 ++ }
8395 ++
8396 ++ return false;
8397 ++}
8398 ++
8399 + static void wacom_wac_finger_event(struct hid_device *hdev,
8400 + struct hid_field *field, struct hid_usage *usage, __s32 value)
8401 + {
8402 +@@ -2613,9 +2631,14 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
8403 + }
8404 +
8405 + if (usage->usage_index + 1 == field->report_count) {
8406 +- if (equivalent_usage == wacom_wac->hid_data.last_slot_field &&
8407 +- wacom_wac->hid_data.confidence)
8408 +- wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
8409 ++ if (equivalent_usage == wacom_wac->hid_data.last_slot_field) {
8410 ++ bool touch_removed = wacom_wac_slot_is_active(wacom_wac->touch_input,
8411 ++ wacom_wac->hid_data.id) && !wacom_wac->hid_data.tipswitch;
8412 ++
8413 ++ if (wacom_wac->hid_data.confidence || touch_removed) {
8414 ++ wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
8415 ++ }
8416 ++ }
8417 + }
8418 + }
8419 +
8420 +@@ -2631,6 +2654,10 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
8421 +
8422 + hid_data->confidence = true;
8423 +
8424 ++ hid_data->cc_report = 0;
8425 ++ hid_data->cc_index = -1;
8426 ++ hid_data->cc_value_index = -1;
8427 ++
8428 + for (i = 0; i < report->maxfield; i++) {
8429 + struct hid_field *field = report->field[i];
8430 + int j;
8431 +@@ -2664,11 +2691,14 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
8432 + hid_data->cc_index >= 0) {
8433 + struct hid_field *field = report->field[hid_data->cc_index];
8434 + int value = field->value[hid_data->cc_value_index];
8435 +- if (value)
8436 ++ if (value) {
8437 + hid_data->num_expected = value;
8438 ++ hid_data->num_received = 0;
8439 ++ }
8440 + }
8441 + else {
8442 + hid_data->num_expected = wacom_wac->features.touch_max;
8443 ++ hid_data->num_received = 0;
8444 + }
8445 + }
8446 +
8447 +@@ -2692,6 +2722,7 @@ static void wacom_wac_finger_report(struct hid_device *hdev,
8448 +
8449 + input_sync(input);
8450 + wacom_wac->hid_data.num_received = 0;
8451 ++ wacom_wac->hid_data.num_expected = 0;
8452 +
8453 + /* keep touch state for pen event */
8454 + wacom_wac->shared->touch_down = wacom_wac_finger_count_touches(wacom_wac);
8455 +diff --git a/drivers/hsi/hsi_core.c b/drivers/hsi/hsi_core.c
8456 +index a5f92e2889cb8..a330f58d45fc6 100644
8457 +--- a/drivers/hsi/hsi_core.c
8458 ++++ b/drivers/hsi/hsi_core.c
8459 +@@ -102,6 +102,7 @@ struct hsi_client *hsi_new_client(struct hsi_port *port,
8460 + if (device_register(&cl->device) < 0) {
8461 + pr_err("hsi: failed to register client: %s\n", info->name);
8462 + put_device(&cl->device);
8463 ++ goto err;
8464 + }
8465 +
8466 + return cl;
8467 +diff --git a/drivers/hwmon/mr75203.c b/drivers/hwmon/mr75203.c
8468 +index 18da5a25e89ab..046523d47c29b 100644
8469 +--- a/drivers/hwmon/mr75203.c
8470 ++++ b/drivers/hwmon/mr75203.c
8471 +@@ -93,7 +93,7 @@
8472 + #define VM_CH_REQ BIT(21)
8473 +
8474 + #define IP_TMR 0x05
8475 +-#define POWER_DELAY_CYCLE_256 0x80
8476 ++#define POWER_DELAY_CYCLE_256 0x100
8477 + #define POWER_DELAY_CYCLE_64 0x40
8478 +
8479 + #define PVT_POLL_DELAY_US 20
8480 +diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
8481 +index 55c83a7a24f36..56c87ade0e89d 100644
8482 +--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
8483 ++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
8484 +@@ -37,10 +37,10 @@ enum dw_pci_ctl_id_t {
8485 + };
8486 +
8487 + struct dw_scl_sda_cfg {
8488 +- u32 ss_hcnt;
8489 +- u32 fs_hcnt;
8490 +- u32 ss_lcnt;
8491 +- u32 fs_lcnt;
8492 ++ u16 ss_hcnt;
8493 ++ u16 fs_hcnt;
8494 ++ u16 ss_lcnt;
8495 ++ u16 fs_lcnt;
8496 + u32 sda_hold;
8497 + };
8498 +
8499 +diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
8500 +index eab6fd6b890eb..5618c1ff34dc3 100644
8501 +--- a/drivers/i2c/busses/i2c-i801.c
8502 ++++ b/drivers/i2c/busses/i2c-i801.c
8503 +@@ -797,6 +797,11 @@ static int i801_block_transaction(struct i801_priv *priv,
8504 + int result = 0;
8505 + unsigned char hostc;
8506 +
8507 ++ if (read_write == I2C_SMBUS_READ && command == I2C_SMBUS_BLOCK_DATA)
8508 ++ data->block[0] = I2C_SMBUS_BLOCK_MAX;
8509 ++ else if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
8510 ++ return -EPROTO;
8511 ++
8512 + if (command == I2C_SMBUS_I2C_BLOCK_DATA) {
8513 + if (read_write == I2C_SMBUS_WRITE) {
8514 + /* set I2C_EN bit in configuration register */
8515 +@@ -810,16 +815,6 @@ static int i801_block_transaction(struct i801_priv *priv,
8516 + }
8517 + }
8518 +
8519 +- if (read_write == I2C_SMBUS_WRITE
8520 +- || command == I2C_SMBUS_I2C_BLOCK_DATA) {
8521 +- if (data->block[0] < 1)
8522 +- data->block[0] = 1;
8523 +- if (data->block[0] > I2C_SMBUS_BLOCK_MAX)
8524 +- data->block[0] = I2C_SMBUS_BLOCK_MAX;
8525 +- } else {
8526 +- data->block[0] = 32; /* max for SMBus block reads */
8527 +- }
8528 +-
8529 + /* Experience has shown that the block buffer can only be used for
8530 + SMBus (not I2C) block transactions, even though the datasheet
8531 + doesn't mention this limitation. */
8532 +diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c
8533 +index af349661fd769..8de8296d25831 100644
8534 +--- a/drivers/i2c/busses/i2c-mpc.c
8535 ++++ b/drivers/i2c/busses/i2c-mpc.c
8536 +@@ -105,23 +105,30 @@ static irqreturn_t mpc_i2c_isr(int irq, void *dev_id)
8537 + /* Sometimes 9th clock pulse isn't generated, and slave doesn't release
8538 + * the bus, because it wants to send ACK.
8539 + * Following sequence of enabling/disabling and sending start/stop generates
8540 +- * the 9 pulses, so it's all OK.
8541 ++ * the 9 pulses, each with a START then ending with STOP, so it's all OK.
8542 + */
8543 + static void mpc_i2c_fixup(struct mpc_i2c *i2c)
8544 + {
8545 + int k;
8546 +- u32 delay_val = 1000000 / i2c->real_clk + 1;
8547 +-
8548 +- if (delay_val < 2)
8549 +- delay_val = 2;
8550 ++ unsigned long flags;
8551 +
8552 + for (k = 9; k; k--) {
8553 + writeccr(i2c, 0);
8554 +- writeccr(i2c, CCR_MSTA | CCR_MTX | CCR_MEN);
8555 ++ writeb(0, i2c->base + MPC_I2C_SR); /* clear any status bits */
8556 ++ writeccr(i2c, CCR_MEN | CCR_MSTA); /* START */
8557 ++ readb(i2c->base + MPC_I2C_DR); /* init xfer */
8558 ++ udelay(15); /* let it hit the bus */
8559 ++ local_irq_save(flags); /* should not be delayed further */
8560 ++ writeccr(i2c, CCR_MEN | CCR_MSTA | CCR_RSTA); /* delay SDA */
8561 + readb(i2c->base + MPC_I2C_DR);
8562 +- writeccr(i2c, CCR_MEN);
8563 +- udelay(delay_val << 1);
8564 ++ if (k != 1)
8565 ++ udelay(5);
8566 ++ local_irq_restore(flags);
8567 + }
8568 ++ writeccr(i2c, CCR_MEN); /* Initiate STOP */
8569 ++ readb(i2c->base + MPC_I2C_DR);
8570 ++ udelay(15); /* Let STOP propagate */
8571 ++ writeccr(i2c, 0);
8572 + }
8573 +
8574 + static int i2c_wait(struct mpc_i2c *i2c, unsigned timeout, int writing)
8575 +diff --git a/drivers/iio/adc/ti-adc081c.c b/drivers/iio/adc/ti-adc081c.c
8576 +index b64718daa2017..c79cd88cd4231 100644
8577 +--- a/drivers/iio/adc/ti-adc081c.c
8578 ++++ b/drivers/iio/adc/ti-adc081c.c
8579 +@@ -19,6 +19,7 @@
8580 + #include <linux/i2c.h>
8581 + #include <linux/module.h>
8582 + #include <linux/mod_devicetable.h>
8583 ++#include <linux/property.h>
8584 +
8585 + #include <linux/iio/iio.h>
8586 + #include <linux/iio/buffer.h>
8587 +@@ -151,13 +152,16 @@ static int adc081c_probe(struct i2c_client *client,
8588 + {
8589 + struct iio_dev *iio;
8590 + struct adc081c *adc;
8591 +- struct adcxx1c_model *model;
8592 ++ const struct adcxx1c_model *model;
8593 + int err;
8594 +
8595 + if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_WORD_DATA))
8596 + return -EOPNOTSUPP;
8597 +
8598 +- model = &adcxx1c_models[id->driver_data];
8599 ++ if (dev_fwnode(&client->dev))
8600 ++ model = device_get_match_data(&client->dev);
8601 ++ else
8602 ++ model = &adcxx1c_models[id->driver_data];
8603 +
8604 + iio = devm_iio_device_alloc(&client->dev, sizeof(*adc));
8605 + if (!iio)
8606 +@@ -224,10 +228,17 @@ static const struct i2c_device_id adc081c_id[] = {
8607 + };
8608 + MODULE_DEVICE_TABLE(i2c, adc081c_id);
8609 +
8610 ++static const struct acpi_device_id adc081c_acpi_match[] = {
8611 ++ /* Used on some AAEON boards */
8612 ++ { "ADC081C", (kernel_ulong_t)&adcxx1c_models[ADC081C] },
8613 ++ { }
8614 ++};
8615 ++MODULE_DEVICE_TABLE(acpi, adc081c_acpi_match);
8616 ++
8617 + static const struct of_device_id adc081c_of_match[] = {
8618 +- { .compatible = "ti,adc081c" },
8619 +- { .compatible = "ti,adc101c" },
8620 +- { .compatible = "ti,adc121c" },
8621 ++ { .compatible = "ti,adc081c", .data = &adcxx1c_models[ADC081C] },
8622 ++ { .compatible = "ti,adc101c", .data = &adcxx1c_models[ADC101C] },
8623 ++ { .compatible = "ti,adc121c", .data = &adcxx1c_models[ADC121C] },
8624 + { }
8625 + };
8626 + MODULE_DEVICE_TABLE(of, adc081c_of_match);
8627 +@@ -236,6 +247,7 @@ static struct i2c_driver adc081c_driver = {
8628 + .driver = {
8629 + .name = "adc081c",
8630 + .of_match_table = adc081c_of_match,
8631 ++ .acpi_match_table = adc081c_acpi_match,
8632 + },
8633 + .probe = adc081c_probe,
8634 + .remove = adc081c_remove,
8635 +diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
8636 +index 8e54184566f7f..4d4ba09f6cf93 100644
8637 +--- a/drivers/infiniband/core/cma.c
8638 ++++ b/drivers/infiniband/core/cma.c
8639 +@@ -775,6 +775,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
8640 + unsigned int p;
8641 + u16 pkey, index;
8642 + enum ib_port_state port_state;
8643 ++ int ret;
8644 + int i;
8645 +
8646 + cma_dev = NULL;
8647 +@@ -793,9 +794,14 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
8648 +
8649 + if (ib_get_cached_port_state(cur_dev->device, p, &port_state))
8650 + continue;
8651 +- for (i = 0; !rdma_query_gid(cur_dev->device,
8652 +- p, i, &gid);
8653 +- i++) {
8654 ++
8655 ++ for (i = 0; i < cur_dev->device->port_data[p].immutable.gid_tbl_len;
8656 ++ ++i) {
8657 ++ ret = rdma_query_gid(cur_dev->device, p, i,
8658 ++ &gid);
8659 ++ if (ret)
8660 ++ continue;
8661 ++
8662 + if (!memcmp(&gid, dgid, sizeof(gid))) {
8663 + cma_dev = cur_dev;
8664 + sgid = gid;
8665 +diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
8666 +index 76b9c436edcd2..aa526c5ca0cf3 100644
8667 +--- a/drivers/infiniband/core/device.c
8668 ++++ b/drivers/infiniband/core/device.c
8669 +@@ -2411,7 +2411,8 @@ int ib_find_gid(struct ib_device *device, union ib_gid *gid,
8670 + ++i) {
8671 + ret = rdma_query_gid(device, port, i, &tmp_gid);
8672 + if (ret)
8673 +- return ret;
8674 ++ continue;
8675 ++
8676 + if (!memcmp(&tmp_gid, gid, sizeof *gid)) {
8677 + *port_num = port;
8678 + if (index)
8679 +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
8680 +index 441eb421e5e59..5759027914b01 100644
8681 +--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
8682 ++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
8683 +@@ -614,8 +614,6 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
8684 + if (!cmdq->cmdq_bitmap)
8685 + goto fail;
8686 +
8687 +- cmdq->bmap_size = bmap_size;
8688 +-
8689 + /* Allocate one extra to hold the QP1 entries */
8690 + rcfw->qp_tbl_size = qp_tbl_sz + 1;
8691 + rcfw->qp_tbl = kcalloc(rcfw->qp_tbl_size, sizeof(struct bnxt_qplib_qp_node),
8692 +@@ -663,8 +661,8 @@ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
8693 + iounmap(cmdq->cmdq_mbox.reg.bar_reg);
8694 + iounmap(creq->creq_db.reg.bar_reg);
8695 +
8696 +- indx = find_first_bit(cmdq->cmdq_bitmap, cmdq->bmap_size);
8697 +- if (indx != cmdq->bmap_size)
8698 ++ indx = find_first_bit(cmdq->cmdq_bitmap, rcfw->cmdq_depth);
8699 ++ if (indx != rcfw->cmdq_depth)
8700 + dev_err(&rcfw->pdev->dev,
8701 + "disabling RCFW with pending cmd-bit %lx\n", indx);
8702 +
8703 +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
8704 +index 5f2f0a5a3560f..6953f4e53dd20 100644
8705 +--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
8706 ++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
8707 +@@ -150,7 +150,6 @@ struct bnxt_qplib_cmdq_ctx {
8708 + wait_queue_head_t waitq;
8709 + unsigned long flags;
8710 + unsigned long *cmdq_bitmap;
8711 +- u32 bmap_size;
8712 + u32 seq_num;
8713 + };
8714 +
8715 +diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
8716 +index 861e19fdfeb46..12e5461581cb4 100644
8717 +--- a/drivers/infiniband/hw/cxgb4/qp.c
8718 ++++ b/drivers/infiniband/hw/cxgb4/qp.c
8719 +@@ -2469,6 +2469,7 @@ int c4iw_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
8720 + memset(attr, 0, sizeof(*attr));
8721 + memset(init_attr, 0, sizeof(*init_attr));
8722 + attr->qp_state = to_ib_qp_state(qhp->attr.state);
8723 ++ attr->cur_qp_state = to_ib_qp_state(qhp->attr.state);
8724 + init_attr->cap.max_send_wr = qhp->attr.sq_num_entries;
8725 + init_attr->cap.max_recv_wr = qhp->attr.rq_num_entries;
8726 + init_attr->cap.max_send_sge = qhp->attr.sq_max_sges;
8727 +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
8728 +index ba65823a5c0bb..1e8b3e4ef1b17 100644
8729 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c
8730 ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
8731 +@@ -279,6 +279,9 @@ static enum rdma_link_layer hns_roce_get_link_layer(struct ib_device *device,
8732 + static int hns_roce_query_pkey(struct ib_device *ib_dev, u8 port, u16 index,
8733 + u16 *pkey)
8734 + {
8735 ++ if (index > 0)
8736 ++ return -EINVAL;
8737 ++
8738 + *pkey = PKEY_ID;
8739 +
8740 + return 0;
8741 +@@ -356,7 +359,7 @@ static int hns_roce_mmap(struct ib_ucontext *context,
8742 + return rdma_user_mmap_io(context, vma,
8743 + to_hr_ucontext(context)->uar.pfn,
8744 + PAGE_SIZE,
8745 +- pgprot_noncached(vma->vm_page_prot),
8746 ++ pgprot_device(vma->vm_page_prot),
8747 + NULL);
8748 +
8749 + /* vm_pgoff: 1 -- TPTR */
8750 +diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
8751 +index 16d5283651894..eeb87f31cd252 100644
8752 +--- a/drivers/infiniband/hw/qedr/verbs.c
8753 ++++ b/drivers/infiniband/hw/qedr/verbs.c
8754 +@@ -1918,6 +1918,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
8755 + /* db offset was calculated in copy_qp_uresp, now set in the user q */
8756 + if (qedr_qp_has_sq(qp)) {
8757 + qp->usq.db_addr = ctx->dpi_addr + uresp.sq_db_offset;
8758 ++ qp->sq.max_wr = attrs->cap.max_send_wr;
8759 + rc = qedr_db_recovery_add(dev, qp->usq.db_addr,
8760 + &qp->usq.db_rec_data->db_data,
8761 + DB_REC_WIDTH_32B,
8762 +@@ -1928,6 +1929,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
8763 +
8764 + if (qedr_qp_has_rq(qp)) {
8765 + qp->urq.db_addr = ctx->dpi_addr + uresp.rq_db_offset;
8766 ++ qp->rq.max_wr = attrs->cap.max_recv_wr;
8767 + rc = qedr_db_recovery_add(dev, qp->urq.db_addr,
8768 + &qp->urq.db_rec_data->db_data,
8769 + DB_REC_WIDTH_32B,
8770 +diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
8771 +index 0cb4b01fd9101..66ffb516bdaf0 100644
8772 +--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
8773 ++++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
8774 +@@ -110,7 +110,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
8775 + }
8776 + },
8777 + [IB_OPCODE_RC_SEND_MIDDLE] = {
8778 +- .name = "IB_OPCODE_RC_SEND_MIDDLE]",
8779 ++ .name = "IB_OPCODE_RC_SEND_MIDDLE",
8780 + .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_SEND_MASK
8781 + | RXE_MIDDLE_MASK,
8782 + .length = RXE_BTH_BYTES,
8783 +diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
8784 +index 28de889aa5164..3f31a52f7044f 100644
8785 +--- a/drivers/iommu/amd/init.c
8786 ++++ b/drivers/iommu/amd/init.c
8787 +@@ -805,16 +805,27 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
8788 + {
8789 + #ifdef CONFIG_IRQ_REMAP
8790 + u32 status, i;
8791 ++ u64 entry;
8792 +
8793 + if (!iommu->ga_log)
8794 + return -EINVAL;
8795 +
8796 +- status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
8797 +-
8798 + /* Check if already running */
8799 +- if (status & (MMIO_STATUS_GALOG_RUN_MASK))
8800 ++ status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
8801 ++ if (WARN_ON(status & (MMIO_STATUS_GALOG_RUN_MASK)))
8802 + return 0;
8803 +
8804 ++ entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
8805 ++ memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
8806 ++ &entry, sizeof(entry));
8807 ++ entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
8808 ++ (BIT_ULL(52)-1)) & ~7ULL;
8809 ++ memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
8810 ++ &entry, sizeof(entry));
8811 ++ writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
8812 ++ writel(0x00, iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
8813 ++
8814 ++
8815 + iommu_feature_enable(iommu, CONTROL_GAINT_EN);
8816 + iommu_feature_enable(iommu, CONTROL_GALOG_EN);
8817 +
8818 +@@ -824,17 +835,15 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
8819 + break;
8820 + }
8821 +
8822 +- if (i >= LOOP_TIMEOUT)
8823 ++ if (WARN_ON(i >= LOOP_TIMEOUT))
8824 + return -EINVAL;
8825 + #endif /* CONFIG_IRQ_REMAP */
8826 + return 0;
8827 + }
8828 +
8829 +-#ifdef CONFIG_IRQ_REMAP
8830 + static int iommu_init_ga_log(struct amd_iommu *iommu)
8831 + {
8832 +- u64 entry;
8833 +-
8834 ++#ifdef CONFIG_IRQ_REMAP
8835 + if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))
8836 + return 0;
8837 +
8838 +@@ -848,32 +857,13 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
8839 + if (!iommu->ga_log_tail)
8840 + goto err_out;
8841 +
8842 +- entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512;
8843 +- memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET,
8844 +- &entry, sizeof(entry));
8845 +- entry = (iommu_virt_to_phys(iommu->ga_log_tail) &
8846 +- (BIT_ULL(52)-1)) & ~7ULL;
8847 +- memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET,
8848 +- &entry, sizeof(entry));
8849 +- writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
8850 +- writel(0x00, iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
8851 +-
8852 + return 0;
8853 + err_out:
8854 + free_ga_log(iommu);
8855 + return -EINVAL;
8856 +-}
8857 +-#endif /* CONFIG_IRQ_REMAP */
8858 +-
8859 +-static int iommu_init_ga(struct amd_iommu *iommu)
8860 +-{
8861 +- int ret = 0;
8862 +-
8863 +-#ifdef CONFIG_IRQ_REMAP
8864 +- ret = iommu_init_ga_log(iommu);
8865 ++#else
8866 ++ return 0;
8867 + #endif /* CONFIG_IRQ_REMAP */
8868 +-
8869 +- return ret;
8870 + }
8871 +
8872 + static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
8873 +@@ -1860,7 +1850,7 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
8874 + if (iommu_feature(iommu, FEATURE_PPR) && alloc_ppr_log(iommu))
8875 + return -ENOMEM;
8876 +
8877 +- ret = iommu_init_ga(iommu);
8878 ++ ret = iommu_init_ga_log(iommu);
8879 + if (ret)
8880 + return ret;
8881 +
8882 +diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
8883 +index a688f22cbe3b5..3bcd3afe97783 100644
8884 +--- a/drivers/iommu/io-pgtable-arm-v7s.c
8885 ++++ b/drivers/iommu/io-pgtable-arm-v7s.c
8886 +@@ -242,13 +242,17 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
8887 + __GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
8888 + else if (lvl == 2)
8889 + table = kmem_cache_zalloc(data->l2_tables, gfp);
8890 ++
8891 ++ if (!table)
8892 ++ return NULL;
8893 ++
8894 + phys = virt_to_phys(table);
8895 + if (phys != (arm_v7s_iopte)phys) {
8896 + /* Doesn't fit in PTE */
8897 + dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
8898 + goto out_free;
8899 + }
8900 +- if (table && !cfg->coherent_walk) {
8901 ++ if (!cfg->coherent_walk) {
8902 + dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
8903 + if (dma_mapping_error(dev, dma))
8904 + goto out_free;
8905 +diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
8906 +index bcfbd0e44a4a0..e1cd31c0e3c19 100644
8907 +--- a/drivers/iommu/io-pgtable-arm.c
8908 ++++ b/drivers/iommu/io-pgtable-arm.c
8909 +@@ -302,11 +302,12 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
8910 + static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table,
8911 + arm_lpae_iopte *ptep,
8912 + arm_lpae_iopte curr,
8913 +- struct io_pgtable_cfg *cfg)
8914 ++ struct arm_lpae_io_pgtable *data)
8915 + {
8916 + arm_lpae_iopte old, new;
8917 ++ struct io_pgtable_cfg *cfg = &data->iop.cfg;
8918 +
8919 +- new = __pa(table) | ARM_LPAE_PTE_TYPE_TABLE;
8920 ++ new = paddr_to_iopte(__pa(table), data) | ARM_LPAE_PTE_TYPE_TABLE;
8921 + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
8922 + new |= ARM_LPAE_PTE_NSTABLE;
8923 +
8924 +@@ -357,7 +358,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
8925 + if (!cptep)
8926 + return -ENOMEM;
8927 +
8928 +- pte = arm_lpae_install_table(cptep, ptep, 0, cfg);
8929 ++ pte = arm_lpae_install_table(cptep, ptep, 0, data);
8930 + if (pte)
8931 + __arm_lpae_free_pages(cptep, tblsz, cfg);
8932 + } else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) {
8933 +@@ -546,7 +547,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
8934 + __arm_lpae_init_pte(data, blk_paddr, pte, lvl, &tablep[i]);
8935 + }
8936 +
8937 +- pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg);
8938 ++ pte = arm_lpae_install_table(tablep, ptep, blk_pte, data);
8939 + if (pte != blk_pte) {
8940 + __arm_lpae_free_pages(tablep, tablesz, cfg);
8941 + /*
8942 +diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
8943 +index 30d969a4c5fde..1164d1a42cbc5 100644
8944 +--- a/drivers/iommu/iova.c
8945 ++++ b/drivers/iommu/iova.c
8946 +@@ -64,8 +64,7 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
8947 + if (!has_iova_flush_queue(iovad))
8948 + return;
8949 +
8950 +- if (timer_pending(&iovad->fq_timer))
8951 +- del_timer(&iovad->fq_timer);
8952 ++ del_timer_sync(&iovad->fq_timer);
8953 +
8954 + fq_destroy_all_entries(iovad);
8955 +
8956 +diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
8957 +index 1bdb7acf445f4..04d1b3963b6ba 100644
8958 +--- a/drivers/irqchip/irq-gic-v3.c
8959 ++++ b/drivers/irqchip/irq-gic-v3.c
8960 +@@ -915,6 +915,22 @@ static int __gic_update_rdist_properties(struct redist_region *region,
8961 + {
8962 + u64 typer = gic_read_typer(ptr + GICR_TYPER);
8963 +
8964 ++	/* Boot-time cleanup */
8965 ++ if ((typer & GICR_TYPER_VLPIS) && (typer & GICR_TYPER_RVPEID)) {
8966 ++ u64 val;
8967 ++
8968 ++ /* Deactivate any present vPE */
8969 ++ val = gicr_read_vpendbaser(ptr + SZ_128K + GICR_VPENDBASER);
8970 ++ if (val & GICR_VPENDBASER_Valid)
8971 ++ gicr_write_vpendbaser(GICR_VPENDBASER_PendingLast,
8972 ++ ptr + SZ_128K + GICR_VPENDBASER);
8973 ++
8974 ++ /* Mark the VPE table as invalid */
8975 ++ val = gicr_read_vpropbaser(ptr + SZ_128K + GICR_VPROPBASER);
8976 ++ val &= ~GICR_VPROPBASER_4_1_VALID;
8977 ++ gicr_write_vpropbaser(val, ptr + SZ_128K + GICR_VPROPBASER);
8978 ++ }
8979 ++
8980 + gic_data.rdists.has_vlpis &= !!(typer & GICR_TYPER_VLPIS);
8981 +
8982 + /* RVPEID implies some form of DirectLPI, no matter what the doc says... :-/ */
8983 +diff --git a/drivers/md/dm.c b/drivers/md/dm.c
8984 +index 19a70f434029b..6030cba5b0382 100644
8985 +--- a/drivers/md/dm.c
8986 ++++ b/drivers/md/dm.c
8987 +@@ -1894,8 +1894,10 @@ static struct mapped_device *alloc_dev(int minor)
8988 + if (IS_ENABLED(CONFIG_DAX_DRIVER)) {
8989 + md->dax_dev = alloc_dax(md, md->disk->disk_name,
8990 + &dm_dax_ops, 0);
8991 +- if (IS_ERR(md->dax_dev))
8992 ++ if (IS_ERR(md->dax_dev)) {
8993 ++ md->dax_dev = NULL;
8994 + goto bad;
8995 ++ }
8996 + }
8997 +
8998 + add_disk_no_queue_reg(md->disk);
8999 +diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
9000 +index ef6e78d45d5b8..ee3e63aa864bf 100644
9001 +--- a/drivers/md/persistent-data/dm-btree.c
9002 ++++ b/drivers/md/persistent-data/dm-btree.c
9003 +@@ -83,14 +83,16 @@ void inc_children(struct dm_transaction_manager *tm, struct btree_node *n,
9004 + }
9005 +
9006 + static int insert_at(size_t value_size, struct btree_node *node, unsigned index,
9007 +- uint64_t key, void *value)
9008 +- __dm_written_to_disk(value)
9009 ++ uint64_t key, void *value)
9010 ++ __dm_written_to_disk(value)
9011 + {
9012 + uint32_t nr_entries = le32_to_cpu(node->header.nr_entries);
9013 ++ uint32_t max_entries = le32_to_cpu(node->header.max_entries);
9014 + __le64 key_le = cpu_to_le64(key);
9015 +
9016 + if (index > nr_entries ||
9017 +- index >= le32_to_cpu(node->header.max_entries)) {
9018 ++ index >= max_entries ||
9019 ++ nr_entries >= max_entries) {
9020 + DMERR("too many entries in btree node for insert");
9021 + __dm_unbless_for_disk(value);
9022 + return -ENOMEM;
9023 +diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
9024 +index a213bf11738fb..85853ab629717 100644
9025 +--- a/drivers/md/persistent-data/dm-space-map-common.c
9026 ++++ b/drivers/md/persistent-data/dm-space-map-common.c
9027 +@@ -281,6 +281,11 @@ int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result)
9028 + struct disk_index_entry ie_disk;
9029 + struct dm_block *blk;
9030 +
9031 ++ if (b >= ll->nr_blocks) {
9032 ++ DMERR_LIMIT("metadata block out of bounds");
9033 ++ return -EINVAL;
9034 ++ }
9035 ++
9036 + b = do_div(index, ll->entries_per_block);
9037 + r = ll->load_ie(ll, index, &ie_disk);
9038 + if (r < 0)
9039 +diff --git a/drivers/media/Kconfig b/drivers/media/Kconfig
9040 +index a6d073f2e036a..d157af63be417 100644
9041 +--- a/drivers/media/Kconfig
9042 ++++ b/drivers/media/Kconfig
9043 +@@ -142,10 +142,10 @@ config MEDIA_TEST_SUPPORT
9044 + prompt "Test drivers" if MEDIA_SUPPORT_FILTER
9045 + default y if !MEDIA_SUPPORT_FILTER
9046 + help
9047 +- Those drivers should not be used on production Kernels, but
9048 +- can be useful on debug ones. It enables several dummy drivers
9049 +- that simulate a real hardware. Very useful to test userspace
9050 +- applications and to validate if the subsystem core is doesn't
9051 ++ These drivers should not be used on production kernels, but
9052 ++ can be useful on debug ones. This option enables several dummy drivers
9053 ++ that simulate real hardware. Very useful to test userspace
9054 ++ applications and to validate if the subsystem core doesn't
9055 + have regressions.
9056 +
9057 + Say Y if you want to use some virtual test driver.
9058 +diff --git a/drivers/media/cec/core/cec-pin.c b/drivers/media/cec/core/cec-pin.c
9059 +index f006bd8eec63c..f8452a1f9fc6c 100644
9060 +--- a/drivers/media/cec/core/cec-pin.c
9061 ++++ b/drivers/media/cec/core/cec-pin.c
9062 +@@ -1033,6 +1033,7 @@ static int cec_pin_thread_func(void *_adap)
9063 + {
9064 + struct cec_adapter *adap = _adap;
9065 + struct cec_pin *pin = adap->pin;
9066 ++ bool irq_enabled = false;
9067 +
9068 + for (;;) {
9069 + wait_event_interruptible(pin->kthread_waitq,
9070 +@@ -1060,6 +1061,7 @@ static int cec_pin_thread_func(void *_adap)
9071 + ns_to_ktime(pin->work_rx_msg.rx_ts));
9072 + msg->len = 0;
9073 + }
9074 ++
9075 + if (pin->work_tx_status) {
9076 + unsigned int tx_status = pin->work_tx_status;
9077 +
9078 +@@ -1083,27 +1085,39 @@ static int cec_pin_thread_func(void *_adap)
9079 + switch (atomic_xchg(&pin->work_irq_change,
9080 + CEC_PIN_IRQ_UNCHANGED)) {
9081 + case CEC_PIN_IRQ_DISABLE:
9082 +- pin->ops->disable_irq(adap);
9083 ++ if (irq_enabled) {
9084 ++ pin->ops->disable_irq(adap);
9085 ++ irq_enabled = false;
9086 ++ }
9087 + cec_pin_high(pin);
9088 + cec_pin_to_idle(pin);
9089 + hrtimer_start(&pin->timer, ns_to_ktime(0),
9090 + HRTIMER_MODE_REL);
9091 + break;
9092 + case CEC_PIN_IRQ_ENABLE:
9093 ++ if (irq_enabled)
9094 ++ break;
9095 + pin->enable_irq_failed = !pin->ops->enable_irq(adap);
9096 + if (pin->enable_irq_failed) {
9097 + cec_pin_to_idle(pin);
9098 + hrtimer_start(&pin->timer, ns_to_ktime(0),
9099 + HRTIMER_MODE_REL);
9100 ++ } else {
9101 ++ irq_enabled = true;
9102 + }
9103 + break;
9104 + default:
9105 + break;
9106 + }
9107 +-
9108 + if (kthread_should_stop())
9109 + break;
9110 + }
9111 ++ if (pin->ops->disable_irq && irq_enabled)
9112 ++ pin->ops->disable_irq(adap);
9113 ++ hrtimer_cancel(&pin->timer);
9114 ++ cec_pin_read(pin);
9115 ++ cec_pin_to_idle(pin);
9116 ++ pin->state = CEC_ST_OFF;
9117 + return 0;
9118 + }
9119 +
9120 +@@ -1130,13 +1144,7 @@ static int cec_pin_adap_enable(struct cec_adapter *adap, bool enable)
9121 + hrtimer_start(&pin->timer, ns_to_ktime(0),
9122 + HRTIMER_MODE_REL);
9123 + } else {
9124 +- if (pin->ops->disable_irq)
9125 +- pin->ops->disable_irq(adap);
9126 +- hrtimer_cancel(&pin->timer);
9127 + kthread_stop(pin->kthread);
9128 +- cec_pin_read(pin);
9129 +- cec_pin_to_idle(pin);
9130 +- pin->state = CEC_ST_OFF;
9131 + }
9132 + return 0;
9133 + }
9134 +@@ -1157,11 +1165,8 @@ void cec_pin_start_timer(struct cec_pin *pin)
9135 + if (pin->state != CEC_ST_RX_IRQ)
9136 + return;
9137 +
9138 +- atomic_set(&pin->work_irq_change, CEC_PIN_IRQ_UNCHANGED);
9139 +- pin->ops->disable_irq(pin->adap);
9140 +- cec_pin_high(pin);
9141 +- cec_pin_to_idle(pin);
9142 +- hrtimer_start(&pin->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
9143 ++ atomic_set(&pin->work_irq_change, CEC_PIN_IRQ_DISABLE);
9144 ++ wake_up_interruptible(&pin->kthread_waitq);
9145 + }
9146 +
9147 + static int cec_pin_adap_transmit(struct cec_adapter *adap, u8 attempts,
9148 +diff --git a/drivers/media/common/saa7146/saa7146_fops.c b/drivers/media/common/saa7146/saa7146_fops.c
9149 +index d6531874faa65..8047e305f3d01 100644
9150 +--- a/drivers/media/common/saa7146/saa7146_fops.c
9151 ++++ b/drivers/media/common/saa7146/saa7146_fops.c
9152 +@@ -523,7 +523,7 @@ int saa7146_vv_init(struct saa7146_dev* dev, struct saa7146_ext_vv *ext_vv)
9153 + ERR("out of memory. aborting.\n");
9154 + kfree(vv);
9155 + v4l2_ctrl_handler_free(hdl);
9156 +- return -1;
9157 ++ return -ENOMEM;
9158 + }
9159 +
9160 + saa7146_video_uops.init(dev,vv);
9161 +diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
9162 +index 2f3a5996d3fc9..fe626109ef4db 100644
9163 +--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
9164 ++++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
9165 +@@ -150,7 +150,7 @@ static void *vb2_dc_alloc(struct device *dev, unsigned long attrs,
9166 + buf->cookie = dma_alloc_attrs(dev, size, &buf->dma_addr,
9167 + GFP_KERNEL | gfp_flags, buf->attrs);
9168 + if (!buf->cookie) {
9169 +- dev_err(dev, "dma_alloc_coherent of size %ld failed\n", size);
9170 ++ dev_err(dev, "dma_alloc_coherent of size %lu failed\n", size);
9171 + kfree(buf);
9172 + return ERR_PTR(-ENOMEM);
9173 + }
9174 +@@ -196,9 +196,9 @@ static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
9175 +
9176 + vma->vm_ops->open(vma);
9177 +
9178 +- pr_debug("%s: mapped dma addr 0x%08lx at 0x%08lx, size %ld\n",
9179 +- __func__, (unsigned long)buf->dma_addr, vma->vm_start,
9180 +- buf->size);
9181 ++ pr_debug("%s: mapped dma addr 0x%08lx at 0x%08lx, size %lu\n",
9182 ++ __func__, (unsigned long)buf->dma_addr, vma->vm_start,
9183 ++ buf->size);
9184 +
9185 + return 0;
9186 + }
9187 +diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
9188 +index f14a872d12687..e58cb8434dafe 100644
9189 +--- a/drivers/media/dvb-core/dmxdev.c
9190 ++++ b/drivers/media/dvb-core/dmxdev.c
9191 +@@ -1413,7 +1413,7 @@ static const struct dvb_device dvbdev_dvr = {
9192 + };
9193 + int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
9194 + {
9195 +- int i;
9196 ++ int i, ret;
9197 +
9198 + if (dmxdev->demux->open(dmxdev->demux) < 0)
9199 + return -EUSERS;
9200 +@@ -1432,14 +1432,26 @@ int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
9201 + DMXDEV_STATE_FREE);
9202 + }
9203 +
9204 +- dvb_register_device(dvb_adapter, &dmxdev->dvbdev, &dvbdev_demux, dmxdev,
9205 ++ ret = dvb_register_device(dvb_adapter, &dmxdev->dvbdev, &dvbdev_demux, dmxdev,
9206 + DVB_DEVICE_DEMUX, dmxdev->filternum);
9207 +- dvb_register_device(dvb_adapter, &dmxdev->dvr_dvbdev, &dvbdev_dvr,
9208 ++ if (ret < 0)
9209 ++ goto err_register_dvbdev;
9210 ++
9211 ++ ret = dvb_register_device(dvb_adapter, &dmxdev->dvr_dvbdev, &dvbdev_dvr,
9212 + dmxdev, DVB_DEVICE_DVR, dmxdev->filternum);
9213 ++ if (ret < 0)
9214 ++ goto err_register_dvr_dvbdev;
9215 +
9216 + dvb_ringbuffer_init(&dmxdev->dvr_buffer, NULL, 8192);
9217 +
9218 + return 0;
9219 ++
9220 ++err_register_dvr_dvbdev:
9221 ++ dvb_unregister_device(dmxdev->dvbdev);
9222 ++err_register_dvbdev:
9223 ++ vfree(dmxdev->filter);
9224 ++ dmxdev->filter = NULL;
9225 ++ return ret;
9226 + }
9227 +
9228 + EXPORT_SYMBOL(dvb_dmxdev_init);
9229 +diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
9230 +index bb02354a48b81..d67f2dd997d06 100644
9231 +--- a/drivers/media/dvb-frontends/dib8000.c
9232 ++++ b/drivers/media/dvb-frontends/dib8000.c
9233 +@@ -4473,8 +4473,10 @@ static struct dvb_frontend *dib8000_init(struct i2c_adapter *i2c_adap, u8 i2c_ad
9234 +
9235 + state->timf_default = cfg->pll->timf;
9236 +
9237 +- if (dib8000_identify(&state->i2c) == 0)
9238 ++ if (dib8000_identify(&state->i2c) == 0) {
9239 ++ kfree(fe);
9240 + goto error;
9241 ++ }
9242 +
9243 + dibx000_init_i2c_master(&state->i2c_master, DIB8000, state->i2c.adap, state->i2c.addr);
9244 +
9245 +diff --git a/drivers/media/pci/b2c2/flexcop-pci.c b/drivers/media/pci/b2c2/flexcop-pci.c
9246 +index a9d9520a94c6d..c9e6c7d663768 100644
9247 +--- a/drivers/media/pci/b2c2/flexcop-pci.c
9248 ++++ b/drivers/media/pci/b2c2/flexcop-pci.c
9249 +@@ -185,6 +185,8 @@ static irqreturn_t flexcop_pci_isr(int irq, void *dev_id)
9250 + dma_addr_t cur_addr =
9251 + fc->read_ibi_reg(fc,dma1_008).dma_0x8.dma_cur_addr << 2;
9252 + u32 cur_pos = cur_addr - fc_pci->dma[0].dma_addr0;
9253 ++ if (cur_pos > fc_pci->dma[0].size * 2)
9254 ++ goto error;
9255 +
9256 + deb_irq("%u irq: %08x cur_addr: %llx: cur_pos: %08x, last_cur_pos: %08x ",
9257 + jiffies_to_usecs(jiffies - fc_pci->last_irq),
9258 +@@ -225,6 +227,7 @@ static irqreturn_t flexcop_pci_isr(int irq, void *dev_id)
9259 + ret = IRQ_NONE;
9260 + }
9261 +
9262 ++error:
9263 + spin_unlock_irqrestore(&fc_pci->irq_lock, flags);
9264 + return ret;
9265 + }
9266 +diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
9267 +index 2214c74bbbf15..3947701cd6c7e 100644
9268 +--- a/drivers/media/pci/saa7146/hexium_gemini.c
9269 ++++ b/drivers/media/pci/saa7146/hexium_gemini.c
9270 +@@ -284,7 +284,12 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d
9271 + hexium_set_input(hexium, 0);
9272 + hexium->cur_input = 0;
9273 +
9274 +- saa7146_vv_init(dev, &vv_data);
9275 ++ ret = saa7146_vv_init(dev, &vv_data);
9276 ++ if (ret) {
9277 ++ i2c_del_adapter(&hexium->i2c_adapter);
9278 ++ kfree(hexium);
9279 ++ return ret;
9280 ++ }
9281 +
9282 + vv_data.vid_ops.vidioc_enum_input = vidioc_enum_input;
9283 + vv_data.vid_ops.vidioc_g_input = vidioc_g_input;
9284 +diff --git a/drivers/media/pci/saa7146/hexium_orion.c b/drivers/media/pci/saa7146/hexium_orion.c
9285 +index 39d14c179d229..2eb4bee16b71f 100644
9286 +--- a/drivers/media/pci/saa7146/hexium_orion.c
9287 ++++ b/drivers/media/pci/saa7146/hexium_orion.c
9288 +@@ -355,10 +355,16 @@ static struct saa7146_ext_vv vv_data;
9289 + static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_data *info)
9290 + {
9291 + struct hexium *hexium = (struct hexium *) dev->ext_priv;
9292 ++ int ret;
9293 +
9294 + DEB_EE("\n");
9295 +
9296 +- saa7146_vv_init(dev, &vv_data);
9297 ++ ret = saa7146_vv_init(dev, &vv_data);
9298 ++ if (ret) {
9299 ++ pr_err("Error in saa7146_vv_init()\n");
9300 ++ return ret;
9301 ++ }
9302 ++
9303 + vv_data.vid_ops.vidioc_enum_input = vidioc_enum_input;
9304 + vv_data.vid_ops.vidioc_g_input = vidioc_g_input;
9305 + vv_data.vid_ops.vidioc_s_input = vidioc_s_input;
9306 +diff --git a/drivers/media/pci/saa7146/mxb.c b/drivers/media/pci/saa7146/mxb.c
9307 +index 73fc901ecf3db..bf0b9b0914cd5 100644
9308 +--- a/drivers/media/pci/saa7146/mxb.c
9309 ++++ b/drivers/media/pci/saa7146/mxb.c
9310 +@@ -683,10 +683,16 @@ static struct saa7146_ext_vv vv_data;
9311 + static int mxb_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_data *info)
9312 + {
9313 + struct mxb *mxb;
9314 ++ int ret;
9315 +
9316 + DEB_EE("dev:%p\n", dev);
9317 +
9318 +- saa7146_vv_init(dev, &vv_data);
9319 ++ ret = saa7146_vv_init(dev, &vv_data);
9320 ++ if (ret) {
9321 ++ ERR("Error in saa7146_vv_init()");
9322 ++ return ret;
9323 ++ }
9324 ++
9325 + if (mxb_probe(dev)) {
9326 + saa7146_vv_release(dev);
9327 + return -1;
9328 +diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
9329 +index 7bb6babdcade0..debc7509c173c 100644
9330 +--- a/drivers/media/platform/aspeed-video.c
9331 ++++ b/drivers/media/platform/aspeed-video.c
9332 +@@ -500,6 +500,10 @@ static void aspeed_video_enable_mode_detect(struct aspeed_video *video)
9333 + aspeed_video_update(video, VE_INTERRUPT_CTRL, 0,
9334 + VE_INTERRUPT_MODE_DETECT);
9335 +
9336 ++ /* Disable mode detect in order to re-trigger */
9337 ++ aspeed_video_update(video, VE_SEQ_CTRL,
9338 ++ VE_SEQ_CTRL_TRIG_MODE_DET, 0);
9339 ++
9340 + /* Trigger mode detect */
9341 + aspeed_video_update(video, VE_SEQ_CTRL, 0, VE_SEQ_CTRL_TRIG_MODE_DET);
9342 + }
9343 +@@ -552,6 +556,8 @@ static void aspeed_video_irq_res_change(struct aspeed_video *video, ulong delay)
9344 + set_bit(VIDEO_RES_CHANGE, &video->flags);
9345 + clear_bit(VIDEO_FRAME_INPRG, &video->flags);
9346 +
9347 ++ video->v4l2_input_status = V4L2_IN_ST_NO_SIGNAL;
9348 ++
9349 + aspeed_video_off(video);
9350 + aspeed_video_bufs_done(video, VB2_BUF_STATE_ERROR);
9351 +
9352 +@@ -786,10 +792,6 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
9353 + return;
9354 + }
9355 +
9356 +- /* Disable mode detect in order to re-trigger */
9357 +- aspeed_video_update(video, VE_SEQ_CTRL,
9358 +- VE_SEQ_CTRL_TRIG_MODE_DET, 0);
9359 +-
9360 + aspeed_video_check_and_set_polarity(video);
9361 +
9362 + aspeed_video_enable_mode_detect(video);
9363 +@@ -1337,7 +1339,6 @@ static void aspeed_video_resolution_work(struct work_struct *work)
9364 + struct delayed_work *dwork = to_delayed_work(work);
9365 + struct aspeed_video *video = container_of(dwork, struct aspeed_video,
9366 + res_work);
9367 +- u32 input_status = video->v4l2_input_status;
9368 +
9369 + aspeed_video_on(video);
9370 +
9371 +@@ -1350,8 +1351,7 @@ static void aspeed_video_resolution_work(struct work_struct *work)
9372 + aspeed_video_get_resolution(video);
9373 +
9374 + if (video->detected_timings.width != video->active_timings.width ||
9375 +- video->detected_timings.height != video->active_timings.height ||
9376 +- input_status != video->v4l2_input_status) {
9377 ++ video->detected_timings.height != video->active_timings.height) {
9378 + static const struct v4l2_event ev = {
9379 + .type = V4L2_EVENT_SOURCE_CHANGE,
9380 + .u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION,
9381 +diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
9382 +index 87a2c706f7477..1eed69d29149f 100644
9383 +--- a/drivers/media/platform/coda/coda-common.c
9384 ++++ b/drivers/media/platform/coda/coda-common.c
9385 +@@ -1537,11 +1537,13 @@ static void coda_pic_run_work(struct work_struct *work)
9386 +
9387 + if (!wait_for_completion_timeout(&ctx->completion,
9388 + msecs_to_jiffies(1000))) {
9389 +- dev_err(dev->dev, "CODA PIC_RUN timeout\n");
9390 ++ if (ctx->use_bit) {
9391 ++ dev_err(dev->dev, "CODA PIC_RUN timeout\n");
9392 +
9393 +- ctx->hold = true;
9394 ++ ctx->hold = true;
9395 +
9396 +- coda_hw_reset(ctx);
9397 ++ coda_hw_reset(ctx);
9398 ++ }
9399 +
9400 + if (ctx->ops->run_timeout)
9401 + ctx->ops->run_timeout(ctx);
9402 +diff --git a/drivers/media/platform/coda/coda-jpeg.c b/drivers/media/platform/coda/coda-jpeg.c
9403 +index b11cfbe166dd3..a72f4655e5ad5 100644
9404 +--- a/drivers/media/platform/coda/coda-jpeg.c
9405 ++++ b/drivers/media/platform/coda/coda-jpeg.c
9406 +@@ -1127,7 +1127,8 @@ static int coda9_jpeg_prepare_encode(struct coda_ctx *ctx)
9407 + coda_write(dev, 0, CODA9_REG_JPEG_GBU_BT_PTR);
9408 + coda_write(dev, 0, CODA9_REG_JPEG_GBU_WD_PTR);
9409 + coda_write(dev, 0, CODA9_REG_JPEG_GBU_BBSR);
9410 +- coda_write(dev, 0, CODA9_REG_JPEG_BBC_STRM_CTRL);
9411 ++ coda_write(dev, BIT(31) | ((end_addr - start_addr - header_len) / 256),
9412 ++ CODA9_REG_JPEG_BBC_STRM_CTRL);
9413 + coda_write(dev, 0, CODA9_REG_JPEG_GBU_CTRL);
9414 + coda_write(dev, 0, CODA9_REG_JPEG_GBU_FF_RPTR);
9415 + coda_write(dev, 127, CODA9_REG_JPEG_GBU_BBER);
9416 +@@ -1257,6 +1258,23 @@ static void coda9_jpeg_finish_encode(struct coda_ctx *ctx)
9417 + coda_hw_reset(ctx);
9418 + }
9419 +
9420 ++static void coda9_jpeg_encode_timeout(struct coda_ctx *ctx)
9421 ++{
9422 ++ struct coda_dev *dev = ctx->dev;
9423 ++ u32 end_addr, wr_ptr;
9424 ++
9425 ++ /* Handle missing BBC overflow interrupt via timeout */
9426 ++ end_addr = coda_read(dev, CODA9_REG_JPEG_BBC_END_ADDR);
9427 ++ wr_ptr = coda_read(dev, CODA9_REG_JPEG_BBC_WR_PTR);
9428 ++ if (wr_ptr >= end_addr - 256) {
9429 ++ v4l2_err(&dev->v4l2_dev, "JPEG too large for capture buffer\n");
9430 ++ coda9_jpeg_finish_encode(ctx);
9431 ++ return;
9432 ++ }
9433 ++
9434 ++ coda_hw_reset(ctx);
9435 ++}
9436 ++
9437 + static void coda9_jpeg_release(struct coda_ctx *ctx)
9438 + {
9439 + int i;
9440 +@@ -1276,6 +1294,7 @@ const struct coda_context_ops coda9_jpeg_encode_ops = {
9441 + .start_streaming = coda9_jpeg_start_encoding,
9442 + .prepare_run = coda9_jpeg_prepare_encode,
9443 + .finish_run = coda9_jpeg_finish_encode,
9444 ++ .run_timeout = coda9_jpeg_encode_timeout,
9445 + .release = coda9_jpeg_release,
9446 + };
9447 +
9448 +diff --git a/drivers/media/platform/coda/imx-vdoa.c b/drivers/media/platform/coda/imx-vdoa.c
9449 +index 8bc0d83718193..dd6e2e320264e 100644
9450 +--- a/drivers/media/platform/coda/imx-vdoa.c
9451 ++++ b/drivers/media/platform/coda/imx-vdoa.c
9452 +@@ -287,7 +287,11 @@ static int vdoa_probe(struct platform_device *pdev)
9453 + struct resource *res;
9454 + int ret;
9455 +
9456 +- dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
9457 ++ ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
9458 ++ if (ret) {
9459 ++ dev_err(&pdev->dev, "DMA enable failed\n");
9460 ++ return ret;
9461 ++ }
9462 +
9463 + vdoa = devm_kzalloc(&pdev->dev, sizeof(*vdoa), GFP_KERNEL);
9464 + if (!vdoa)
9465 +diff --git a/drivers/media/platform/imx-pxp.c b/drivers/media/platform/imx-pxp.c
9466 +index 08d76eb05ed1a..62356adebc39e 100644
9467 +--- a/drivers/media/platform/imx-pxp.c
9468 ++++ b/drivers/media/platform/imx-pxp.c
9469 +@@ -1664,6 +1664,8 @@ static int pxp_probe(struct platform_device *pdev)
9470 + if (irq < 0)
9471 + return irq;
9472 +
9473 ++ spin_lock_init(&dev->irqlock);
9474 ++
9475 + ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, pxp_irq_handler,
9476 + IRQF_ONESHOT, dev_name(&pdev->dev), dev);
9477 + if (ret < 0) {
9478 +@@ -1681,8 +1683,6 @@ static int pxp_probe(struct platform_device *pdev)
9479 + goto err_clk;
9480 + }
9481 +
9482 +- spin_lock_init(&dev->irqlock);
9483 +-
9484 + ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
9485 + if (ret)
9486 + goto err_clk;
9487 +diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
9488 +index 219c2c5b78efc..5f93bc670edb2 100644
9489 +--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
9490 ++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
9491 +@@ -237,11 +237,11 @@ static int fops_vcodec_release(struct file *file)
9492 + mtk_v4l2_debug(1, "[%d] encoder", ctx->id);
9493 + mutex_lock(&dev->dev_mutex);
9494 +
9495 ++ v4l2_m2m_ctx_release(ctx->m2m_ctx);
9496 + mtk_vcodec_enc_release(ctx);
9497 + v4l2_fh_del(&ctx->fh);
9498 + v4l2_fh_exit(&ctx->fh);
9499 + v4l2_ctrl_handler_free(&ctx->ctrl_hdl);
9500 +- v4l2_m2m_ctx_release(ctx->m2m_ctx);
9501 +
9502 + list_del_init(&ctx->list);
9503 + kfree(ctx);
9504 +diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
9505 +index 58ddebbb84468..1d621f7769035 100644
9506 +--- a/drivers/media/platform/qcom/venus/core.c
9507 ++++ b/drivers/media/platform/qcom/venus/core.c
9508 +@@ -222,7 +222,6 @@ static int venus_probe(struct platform_device *pdev)
9509 + return -ENOMEM;
9510 +
9511 + core->dev = dev;
9512 +- platform_set_drvdata(pdev, core);
9513 +
9514 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
9515 + core->base = devm_ioremap_resource(dev, r);
9516 +@@ -252,7 +251,7 @@ static int venus_probe(struct platform_device *pdev)
9517 + return -ENODEV;
9518 +
9519 + if (core->pm_ops->core_get) {
9520 +- ret = core->pm_ops->core_get(dev);
9521 ++ ret = core->pm_ops->core_get(core);
9522 + if (ret)
9523 + return ret;
9524 + }
9525 +@@ -277,6 +276,12 @@ static int venus_probe(struct platform_device *pdev)
9526 + if (ret)
9527 + goto err_core_put;
9528 +
9529 ++ ret = v4l2_device_register(dev, &core->v4l2_dev);
9530 ++ if (ret)
9531 ++ goto err_core_deinit;
9532 ++
9533 ++ platform_set_drvdata(pdev, core);
9534 ++
9535 + pm_runtime_enable(dev);
9536 +
9537 + ret = pm_runtime_get_sync(dev);
9538 +@@ -289,11 +294,11 @@ static int venus_probe(struct platform_device *pdev)
9539 +
9540 + ret = venus_firmware_init(core);
9541 + if (ret)
9542 +- goto err_runtime_disable;
9543 ++ goto err_of_depopulate;
9544 +
9545 + ret = venus_boot(core);
9546 + if (ret)
9547 +- goto err_runtime_disable;
9548 ++ goto err_firmware_deinit;
9549 +
9550 + ret = hfi_core_resume(core, true);
9551 + if (ret)
9552 +@@ -311,10 +316,6 @@ static int venus_probe(struct platform_device *pdev)
9553 + if (ret)
9554 + goto err_venus_shutdown;
9555 +
9556 +- ret = v4l2_device_register(dev, &core->v4l2_dev);
9557 +- if (ret)
9558 +- goto err_core_deinit;
9559 +-
9560 + ret = pm_runtime_put_sync(dev);
9561 + if (ret) {
9562 + pm_runtime_get_noresume(dev);
9563 +@@ -327,18 +328,22 @@ static int venus_probe(struct platform_device *pdev)
9564 +
9565 + err_dev_unregister:
9566 + v4l2_device_unregister(&core->v4l2_dev);
9567 +-err_core_deinit:
9568 +- hfi_core_deinit(core, false);
9569 + err_venus_shutdown:
9570 + venus_shutdown(core);
9571 ++err_firmware_deinit:
9572 ++ venus_firmware_deinit(core);
9573 ++err_of_depopulate:
9574 ++ of_platform_depopulate(dev);
9575 + err_runtime_disable:
9576 + pm_runtime_put_noidle(dev);
9577 + pm_runtime_set_suspended(dev);
9578 + pm_runtime_disable(dev);
9579 + hfi_destroy(core);
9580 ++err_core_deinit:
9581 ++ hfi_core_deinit(core, false);
9582 + err_core_put:
9583 + if (core->pm_ops->core_put)
9584 +- core->pm_ops->core_put(dev);
9585 ++ core->pm_ops->core_put(core);
9586 + return ret;
9587 + }
9588 +
9589 +@@ -364,11 +369,14 @@ static int venus_remove(struct platform_device *pdev)
9590 + pm_runtime_disable(dev);
9591 +
9592 + if (pm_ops->core_put)
9593 +- pm_ops->core_put(dev);
9594 ++ pm_ops->core_put(core);
9595 ++
9596 ++ v4l2_device_unregister(&core->v4l2_dev);
9597 +
9598 + hfi_destroy(core);
9599 +
9600 + v4l2_device_unregister(&core->v4l2_dev);
9601 ++
9602 + mutex_destroy(&core->pm_lock);
9603 + mutex_destroy(&core->lock);
9604 + venus_dbgfs_deinit(core);
9605 +@@ -387,7 +395,7 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
9606 + return ret;
9607 +
9608 + if (pm_ops->core_power) {
9609 +- ret = pm_ops->core_power(dev, POWER_OFF);
9610 ++ ret = pm_ops->core_power(core, POWER_OFF);
9611 + if (ret)
9612 + return ret;
9613 + }
9614 +@@ -405,7 +413,8 @@ static __maybe_unused int venus_runtime_suspend(struct device *dev)
9615 + err_video_path:
9616 + icc_set_bw(core->cpucfg_path, kbps_to_icc(1000), 0);
9617 + err_cpucfg_path:
9618 +- pm_ops->core_power(dev, POWER_ON);
9619 ++ if (pm_ops->core_power)
9620 ++ pm_ops->core_power(core, POWER_ON);
9621 +
9622 + return ret;
9623 + }
9624 +@@ -425,7 +434,7 @@ static __maybe_unused int venus_runtime_resume(struct device *dev)
9625 + return ret;
9626 +
9627 + if (pm_ops->core_power) {
9628 +- ret = pm_ops->core_power(dev, POWER_ON);
9629 ++ ret = pm_ops->core_power(core, POWER_ON);
9630 + if (ret)
9631 + return ret;
9632 + }
9633 +diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
9634 +index 05c9fbd51f0c0..f2a0ef9ee884e 100644
9635 +--- a/drivers/media/platform/qcom/venus/core.h
9636 ++++ b/drivers/media/platform/qcom/venus/core.h
9637 +@@ -123,7 +123,6 @@ struct venus_caps {
9638 + * @clks: an array of struct clk pointers
9639 + * @vcodec0_clks: an array of vcodec0 struct clk pointers
9640 + * @vcodec1_clks: an array of vcodec1 struct clk pointers
9641 +- * @pd_dl_venus: pmdomain device-link for venus domain
9642 + * @pmdomains: an array of pmdomains struct device pointers
9643 + * @vdev_dec: a reference to video device structure for decoder instances
9644 + * @vdev_enc: a reference to video device structure for encoder instances
9645 +@@ -161,7 +160,6 @@ struct venus_core {
9646 + struct icc_path *cpucfg_path;
9647 + struct opp_table *opp_table;
9648 + bool has_opp_table;
9649 +- struct device_link *pd_dl_venus;
9650 + struct device *pmdomains[VIDC_PMDOMAINS_NUM_MAX];
9651 + struct device_link *opp_dl_venus;
9652 + struct device *opp_pmdomain;
9653 +diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
9654 +index 2946547a0df4a..710f9a2b132b0 100644
9655 +--- a/drivers/media/platform/qcom/venus/pm_helpers.c
9656 ++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
9657 +@@ -147,14 +147,12 @@ static u32 load_per_type(struct venus_core *core, u32 session_type)
9658 + struct venus_inst *inst = NULL;
9659 + u32 mbs_per_sec = 0;
9660 +
9661 +- mutex_lock(&core->lock);
9662 + list_for_each_entry(inst, &core->instances, list) {
9663 + if (inst->session_type != session_type)
9664 + continue;
9665 +
9666 + mbs_per_sec += load_per_instance(inst);
9667 + }
9668 +- mutex_unlock(&core->lock);
9669 +
9670 + return mbs_per_sec;
9671 + }
9672 +@@ -203,14 +201,12 @@ static int load_scale_bw(struct venus_core *core)
9673 + struct venus_inst *inst = NULL;
9674 + u32 mbs_per_sec, avg, peak, total_avg = 0, total_peak = 0;
9675 +
9676 +- mutex_lock(&core->lock);
9677 + list_for_each_entry(inst, &core->instances, list) {
9678 + mbs_per_sec = load_per_instance(inst);
9679 + mbs_to_bw(inst, mbs_per_sec, &avg, &peak);
9680 + total_avg += avg;
9681 + total_peak += peak;
9682 + }
9683 +- mutex_unlock(&core->lock);
9684 +
9685 + /*
9686 + * keep minimum bandwidth vote for "video-mem" path,
9687 +@@ -237,8 +233,9 @@ static int load_scale_v1(struct venus_inst *inst)
9688 + struct device *dev = core->dev;
9689 + u32 mbs_per_sec;
9690 + unsigned int i;
9691 +- int ret;
9692 ++ int ret = 0;
9693 +
9694 ++ mutex_lock(&core->lock);
9695 + mbs_per_sec = load_per_type(core, VIDC_SESSION_TYPE_ENC) +
9696 + load_per_type(core, VIDC_SESSION_TYPE_DEC);
9697 +
9698 +@@ -263,29 +260,28 @@ set_freq:
9699 + if (ret) {
9700 + dev_err(dev, "failed to set clock rate %lu (%d)\n",
9701 + freq, ret);
9702 +- return ret;
9703 ++ goto exit;
9704 + }
9705 +
9706 + ret = load_scale_bw(core);
9707 + if (ret) {
9708 + dev_err(dev, "failed to set bandwidth (%d)\n",
9709 + ret);
9710 +- return ret;
9711 ++ goto exit;
9712 + }
9713 +
9714 +- return 0;
9715 ++exit:
9716 ++ mutex_unlock(&core->lock);
9717 ++ return ret;
9718 + }
9719 +
9720 +-static int core_get_v1(struct device *dev)
9721 ++static int core_get_v1(struct venus_core *core)
9722 + {
9723 +- struct venus_core *core = dev_get_drvdata(dev);
9724 +-
9725 + return core_clks_get(core);
9726 + }
9727 +
9728 +-static int core_power_v1(struct device *dev, int on)
9729 ++static int core_power_v1(struct venus_core *core, int on)
9730 + {
9731 +- struct venus_core *core = dev_get_drvdata(dev);
9732 + int ret = 0;
9733 +
9734 + if (on == POWER_ON)
9735 +@@ -752,12 +748,12 @@ static int venc_power_v4(struct device *dev, int on)
9736 + return ret;
9737 + }
9738 +
9739 +-static int vcodec_domains_get(struct device *dev)
9740 ++static int vcodec_domains_get(struct venus_core *core)
9741 + {
9742 + int ret;
9743 + struct opp_table *opp_table;
9744 + struct device **opp_virt_dev;
9745 +- struct venus_core *core = dev_get_drvdata(dev);
9746 ++ struct device *dev = core->dev;
9747 + const struct venus_resources *res = core->res;
9748 + struct device *pd;
9749 + unsigned int i;
9750 +@@ -773,13 +769,6 @@ static int vcodec_domains_get(struct device *dev)
9751 + core->pmdomains[i] = pd;
9752 + }
9753 +
9754 +- core->pd_dl_venus = device_link_add(dev, core->pmdomains[0],
9755 +- DL_FLAG_PM_RUNTIME |
9756 +- DL_FLAG_STATELESS |
9757 +- DL_FLAG_RPM_ACTIVE);
9758 +- if (!core->pd_dl_venus)
9759 +- return -ENODEV;
9760 +-
9761 + skip_pmdomains:
9762 + if (!core->has_opp_table)
9763 + return 0;
9764 +@@ -806,29 +795,23 @@ skip_pmdomains:
9765 + opp_dl_add_err:
9766 + dev_pm_opp_detach_genpd(core->opp_table);
9767 + opp_attach_err:
9768 +- if (core->pd_dl_venus) {
9769 +- device_link_del(core->pd_dl_venus);
9770 +- for (i = 0; i < res->vcodec_pmdomains_num; i++) {
9771 +- if (IS_ERR_OR_NULL(core->pmdomains[i]))
9772 +- continue;
9773 +- dev_pm_domain_detach(core->pmdomains[i], true);
9774 +- }
9775 ++ for (i = 0; i < res->vcodec_pmdomains_num; i++) {
9776 ++ if (IS_ERR_OR_NULL(core->pmdomains[i]))
9777 ++ continue;
9778 ++ dev_pm_domain_detach(core->pmdomains[i], true);
9779 + }
9780 ++
9781 + return ret;
9782 + }
9783 +
9784 +-static void vcodec_domains_put(struct device *dev)
9785 ++static void vcodec_domains_put(struct venus_core *core)
9786 + {
9787 +- struct venus_core *core = dev_get_drvdata(dev);
9788 + const struct venus_resources *res = core->res;
9789 + unsigned int i;
9790 +
9791 + if (!res->vcodec_pmdomains_num)
9792 + goto skip_pmdomains;
9793 +
9794 +- if (core->pd_dl_venus)
9795 +- device_link_del(core->pd_dl_venus);
9796 +-
9797 + for (i = 0; i < res->vcodec_pmdomains_num; i++) {
9798 + if (IS_ERR_OR_NULL(core->pmdomains[i]))
9799 + continue;
9800 +@@ -845,9 +828,9 @@ skip_pmdomains:
9801 + dev_pm_opp_detach_genpd(core->opp_table);
9802 + }
9803 +
9804 +-static int core_get_v4(struct device *dev)
9805 ++static int core_get_v4(struct venus_core *core)
9806 + {
9807 +- struct venus_core *core = dev_get_drvdata(dev);
9808 ++ struct device *dev = core->dev;
9809 + const struct venus_resources *res = core->res;
9810 + int ret;
9811 +
9812 +@@ -886,7 +869,7 @@ static int core_get_v4(struct device *dev)
9813 + }
9814 + }
9815 +
9816 +- ret = vcodec_domains_get(dev);
9817 ++ ret = vcodec_domains_get(core);
9818 + if (ret) {
9819 + if (core->has_opp_table)
9820 + dev_pm_opp_of_remove_table(dev);
9821 +@@ -897,14 +880,14 @@ static int core_get_v4(struct device *dev)
9822 + return 0;
9823 + }
9824 +
9825 +-static void core_put_v4(struct device *dev)
9826 ++static void core_put_v4(struct venus_core *core)
9827 + {
9828 +- struct venus_core *core = dev_get_drvdata(dev);
9829 ++ struct device *dev = core->dev;
9830 +
9831 + if (legacy_binding)
9832 + return;
9833 +
9834 +- vcodec_domains_put(dev);
9835 ++ vcodec_domains_put(core);
9836 +
9837 + if (core->has_opp_table)
9838 + dev_pm_opp_of_remove_table(dev);
9839 +@@ -913,19 +896,33 @@ static void core_put_v4(struct device *dev)
9840 +
9841 + }
9842 +
9843 +-static int core_power_v4(struct device *dev, int on)
9844 ++static int core_power_v4(struct venus_core *core, int on)
9845 + {
9846 +- struct venus_core *core = dev_get_drvdata(dev);
9847 ++ struct device *dev = core->dev;
9848 ++ struct device *pmctrl = core->pmdomains[0];
9849 + int ret = 0;
9850 +
9851 + if (on == POWER_ON) {
9852 ++ if (pmctrl) {
9853 ++ ret = pm_runtime_get_sync(pmctrl);
9854 ++ if (ret < 0) {
9855 ++ pm_runtime_put_noidle(pmctrl);
9856 ++ return ret;
9857 ++ }
9858 ++ }
9859 ++
9860 + ret = core_clks_enable(core);
9861 ++ if (ret < 0 && pmctrl)
9862 ++ pm_runtime_put_sync(pmctrl);
9863 + } else {
9864 + /* Drop the performance state vote */
9865 + if (core->opp_pmdomain)
9866 + dev_pm_opp_set_rate(dev, 0);
9867 +
9868 + core_clks_disable(core);
9869 ++
9870 ++ if (pmctrl)
9871 ++ pm_runtime_put_sync(pmctrl);
9872 + }
9873 +
9874 + return ret;
9875 +@@ -962,13 +959,13 @@ static int load_scale_v4(struct venus_inst *inst)
9876 + struct device *dev = core->dev;
9877 + unsigned long freq = 0, freq_core1 = 0, freq_core2 = 0;
9878 + unsigned long filled_len = 0;
9879 +- int i, ret;
9880 ++ int i, ret = 0;
9881 +
9882 + for (i = 0; i < inst->num_input_bufs; i++)
9883 + filled_len = max(filled_len, inst->payloads[i]);
9884 +
9885 + if (inst->session_type == VIDC_SESSION_TYPE_DEC && !filled_len)
9886 +- return 0;
9887 ++ return ret;
9888 +
9889 + freq = calculate_inst_freq(inst, filled_len);
9890 + inst->clk_data.freq = freq;
9891 +@@ -984,7 +981,6 @@ static int load_scale_v4(struct venus_inst *inst)
9892 + freq_core2 += inst->clk_data.freq;
9893 + }
9894 + }
9895 +- mutex_unlock(&core->lock);
9896 +
9897 + freq = max(freq_core1, freq_core2);
9898 +
9899 +@@ -1008,17 +1004,19 @@ set_freq:
9900 + if (ret) {
9901 + dev_err(dev, "failed to set clock rate %lu (%d)\n",
9902 + freq, ret);
9903 +- return ret;
9904 ++ goto exit;
9905 + }
9906 +
9907 + ret = load_scale_bw(core);
9908 + if (ret) {
9909 + dev_err(dev, "failed to set bandwidth (%d)\n",
9910 + ret);
9911 +- return ret;
9912 ++ goto exit;
9913 + }
9914 +
9915 +- return 0;
9916 ++exit:
9917 ++ mutex_unlock(&core->lock);
9918 ++ return ret;
9919 + }
9920 +
9921 + static const struct venus_pm_ops pm_ops_v4 = {
9922 +diff --git a/drivers/media/platform/qcom/venus/pm_helpers.h b/drivers/media/platform/qcom/venus/pm_helpers.h
9923 +index aa2f6afa23544..a492c50c5543c 100644
9924 +--- a/drivers/media/platform/qcom/venus/pm_helpers.h
9925 ++++ b/drivers/media/platform/qcom/venus/pm_helpers.h
9926 +@@ -4,14 +4,15 @@
9927 + #define __VENUS_PM_HELPERS_H__
9928 +
9929 + struct device;
9930 ++struct venus_core;
9931 +
9932 + #define POWER_ON 1
9933 + #define POWER_OFF 0
9934 +
9935 + struct venus_pm_ops {
9936 +- int (*core_get)(struct device *dev);
9937 +- void (*core_put)(struct device *dev);
9938 +- int (*core_power)(struct device *dev, int on);
9939 ++ int (*core_get)(struct venus_core *core);
9940 ++ void (*core_put)(struct venus_core *core);
9941 ++ int (*core_power)(struct venus_core *core, int on);
9942 +
9943 + int (*vdec_get)(struct device *dev);
9944 + void (*vdec_put)(struct device *dev);
9945 +diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
9946 +index d2d87a204e918..5e8e48a721a04 100644
9947 +--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
9948 ++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
9949 +@@ -436,16 +436,23 @@ static int rcsi2_wait_phy_start(struct rcar_csi2 *priv,
9950 + static int rcsi2_set_phypll(struct rcar_csi2 *priv, unsigned int mbps)
9951 + {
9952 + const struct rcsi2_mbps_reg *hsfreq;
9953 ++ const struct rcsi2_mbps_reg *hsfreq_prev = NULL;
9954 +
9955 +- for (hsfreq = priv->info->hsfreqrange; hsfreq->mbps != 0; hsfreq++)
9956 ++ for (hsfreq = priv->info->hsfreqrange; hsfreq->mbps != 0; hsfreq++) {
9957 + if (hsfreq->mbps >= mbps)
9958 + break;
9959 ++ hsfreq_prev = hsfreq;
9960 ++ }
9961 +
9962 + if (!hsfreq->mbps) {
9963 + dev_err(priv->dev, "Unsupported PHY speed (%u Mbps)", mbps);
9964 + return -ERANGE;
9965 + }
9966 +
9967 ++ if (hsfreq_prev &&
9968 ++ ((mbps - hsfreq_prev->mbps) <= (hsfreq->mbps - mbps)))
9969 ++ hsfreq = hsfreq_prev;
9970 ++
9971 + rcsi2_write(priv, PHYPLL_REG, PHYPLL_HSFREQRANGE(hsfreq->reg));
9972 +
9973 + return 0;
9974 +@@ -969,10 +976,17 @@ static int rcsi2_phtw_write_mbps(struct rcar_csi2 *priv, unsigned int mbps,
9975 + const struct rcsi2_mbps_reg *values, u16 code)
9976 + {
9977 + const struct rcsi2_mbps_reg *value;
9978 ++ const struct rcsi2_mbps_reg *prev_value = NULL;
9979 +
9980 +- for (value = values; value->mbps; value++)
9981 ++ for (value = values; value->mbps; value++) {
9982 + if (value->mbps >= mbps)
9983 + break;
9984 ++ prev_value = value;
9985 ++ }
9986 ++
9987 ++ if (prev_value &&
9988 ++ ((mbps - prev_value->mbps) <= (value->mbps - mbps)))
9989 ++ value = prev_value;
9990 +
9991 + if (!value->mbps) {
9992 + dev_err(priv->dev, "Unsupported PHY speed (%u Mbps)", mbps);
9993 +diff --git a/drivers/media/platform/rcar-vin/rcar-v4l2.c b/drivers/media/platform/rcar-vin/rcar-v4l2.c
9994 +index 3e7a3ae2a6b97..0bbe6f9f92062 100644
9995 +--- a/drivers/media/platform/rcar-vin/rcar-v4l2.c
9996 ++++ b/drivers/media/platform/rcar-vin/rcar-v4l2.c
9997 +@@ -175,20 +175,27 @@ static void rvin_format_align(struct rvin_dev *vin, struct v4l2_pix_format *pix)
9998 + break;
9999 + }
10000 +
10001 +- /* HW limit width to a multiple of 32 (2^5) for NV12/16 else 2 (2^1) */
10002 ++ /* Hardware limits width alignment based on format. */
10003 + switch (pix->pixelformat) {
10004 ++ /* Multiple of 32 (2^5) for NV12/16. */
10005 + case V4L2_PIX_FMT_NV12:
10006 + case V4L2_PIX_FMT_NV16:
10007 + walign = 5;
10008 + break;
10009 +- default:
10010 ++ /* Multiple of 2 (2^1) for YUV. */
10011 ++ case V4L2_PIX_FMT_YUYV:
10012 ++ case V4L2_PIX_FMT_UYVY:
10013 + walign = 1;
10014 + break;
10015 ++ /* No multiple for RGB. */
10016 ++ default:
10017 ++ walign = 0;
10018 ++ break;
10019 + }
10020 +
10021 + /* Limit to VIN capabilities */
10022 +- v4l_bound_align_image(&pix->width, 2, vin->info->max_width, walign,
10023 +- &pix->height, 4, vin->info->max_height, 2, 0);
10024 ++ v4l_bound_align_image(&pix->width, 5, vin->info->max_width, walign,
10025 ++ &pix->height, 2, vin->info->max_height, 0, 0);
10026 +
10027 + pix->bytesperline = rvin_format_bytesperline(vin, pix);
10028 + pix->sizeimage = rvin_format_sizeimage(pix);
10029 +diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
10030 +index a972c0705ac79..76d39e2e87706 100644
10031 +--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
10032 ++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
10033 +@@ -368,7 +368,7 @@ static int si470x_i2c_probe(struct i2c_client *client)
10034 + if (radio->hdl.error) {
10035 + retval = radio->hdl.error;
10036 + dev_err(&client->dev, "couldn't register control\n");
10037 +- goto err_dev;
10038 ++ goto err_all;
10039 + }
10040 +
10041 + /* video device initialization */
10042 +@@ -463,7 +463,6 @@ static int si470x_i2c_probe(struct i2c_client *client)
10043 + return 0;
10044 + err_all:
10045 + v4l2_ctrl_handler_free(&radio->hdl);
10046 +-err_dev:
10047 + v4l2_device_unregister(&radio->v4l2_dev);
10048 + err_initial:
10049 + return retval;
10050 +diff --git a/drivers/media/rc/igorplugusb.c b/drivers/media/rc/igorplugusb.c
10051 +index effaa5751d6c9..3e9988ee785f0 100644
10052 +--- a/drivers/media/rc/igorplugusb.c
10053 ++++ b/drivers/media/rc/igorplugusb.c
10054 +@@ -64,9 +64,11 @@ static void igorplugusb_irdata(struct igorplugusb *ir, unsigned len)
10055 + if (start >= len) {
10056 + dev_err(ir->dev, "receive overflow invalid: %u", overflow);
10057 + } else {
10058 +- if (overflow > 0)
10059 ++ if (overflow > 0) {
10060 + dev_warn(ir->dev, "receive overflow, at least %u lost",
10061 + overflow);
10062 ++ ir_raw_event_reset(ir->rc);
10063 ++ }
10064 +
10065 + do {
10066 + rawir.duration = ir->buf_in[i] * 85;
10067 +diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
10068 +index 8870c4e6c5f44..dbb5a4f44bda5 100644
10069 +--- a/drivers/media/rc/mceusb.c
10070 ++++ b/drivers/media/rc/mceusb.c
10071 +@@ -1430,7 +1430,7 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
10072 + */
10073 + ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
10074 + USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
10075 +- data, USB_CTRL_MSG_SZ, HZ * 3);
10076 ++ data, USB_CTRL_MSG_SZ, 3000);
10077 + dev_dbg(dev, "set address - ret = %d", ret);
10078 + dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
10079 + data[0], data[1]);
10080 +@@ -1438,20 +1438,20 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
10081 + /* set feature: bit rate 38400 bps */
10082 + ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
10083 + USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
10084 +- 0xc04e, 0x0000, NULL, 0, HZ * 3);
10085 ++ 0xc04e, 0x0000, NULL, 0, 3000);
10086 +
10087 + dev_dbg(dev, "set feature - ret = %d", ret);
10088 +
10089 + /* bRequest 4: set char length to 8 bits */
10090 + ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
10091 + 4, USB_TYPE_VENDOR,
10092 +- 0x0808, 0x0000, NULL, 0, HZ * 3);
10093 ++ 0x0808, 0x0000, NULL, 0, 3000);
10094 + dev_dbg(dev, "set char length - retB = %d", ret);
10095 +
10096 + /* bRequest 2: set handshaking to use DTR/DSR */
10097 + ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
10098 + 2, USB_TYPE_VENDOR,
10099 +- 0x0000, 0x0100, NULL, 0, HZ * 3);
10100 ++ 0x0000, 0x0100, NULL, 0, 3000);
10101 + dev_dbg(dev, "set handshake - retC = %d", ret);
10102 +
10103 + /* device resume */
10104 +diff --git a/drivers/media/rc/redrat3.c b/drivers/media/rc/redrat3.c
10105 +index 2cf3377ec63a7..a61f9820ade95 100644
10106 +--- a/drivers/media/rc/redrat3.c
10107 ++++ b/drivers/media/rc/redrat3.c
10108 +@@ -404,7 +404,7 @@ static int redrat3_send_cmd(int cmd, struct redrat3_dev *rr3)
10109 + udev = rr3->udev;
10110 + res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), cmd,
10111 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
10112 +- 0x0000, 0x0000, data, sizeof(u8), HZ * 10);
10113 ++ 0x0000, 0x0000, data, sizeof(u8), 10000);
10114 +
10115 + if (res < 0) {
10116 + dev_err(rr3->dev, "%s: Error sending rr3 cmd res %d, data %d",
10117 +@@ -480,7 +480,7 @@ static u32 redrat3_get_timeout(struct redrat3_dev *rr3)
10118 + pipe = usb_rcvctrlpipe(rr3->udev, 0);
10119 + ret = usb_control_msg(rr3->udev, pipe, RR3_GET_IR_PARAM,
10120 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
10121 +- RR3_IR_IO_SIG_TIMEOUT, 0, tmp, len, HZ * 5);
10122 ++ RR3_IR_IO_SIG_TIMEOUT, 0, tmp, len, 5000);
10123 + if (ret != len)
10124 + dev_warn(rr3->dev, "Failed to read timeout from hardware\n");
10125 + else {
10126 +@@ -510,7 +510,7 @@ static int redrat3_set_timeout(struct rc_dev *rc_dev, unsigned int timeoutus)
10127 + ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), RR3_SET_IR_PARAM,
10128 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
10129 + RR3_IR_IO_SIG_TIMEOUT, 0, timeout, sizeof(*timeout),
10130 +- HZ * 25);
10131 ++ 25000);
10132 + dev_dbg(dev, "set ir parm timeout %d ret 0x%02x\n",
10133 + be32_to_cpu(*timeout), ret);
10134 +
10135 +@@ -542,32 +542,32 @@ static void redrat3_reset(struct redrat3_dev *rr3)
10136 + *val = 0x01;
10137 + rc = usb_control_msg(udev, rxpipe, RR3_RESET,
10138 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
10139 +- RR3_CPUCS_REG_ADDR, 0, val, len, HZ * 25);
10140 ++ RR3_CPUCS_REG_ADDR, 0, val, len, 25000);
10141 + dev_dbg(dev, "reset returned 0x%02x\n", rc);
10142 +
10143 + *val = length_fuzz;
10144 + rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
10145 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
10146 +- RR3_IR_IO_LENGTH_FUZZ, 0, val, len, HZ * 25);
10147 ++ RR3_IR_IO_LENGTH_FUZZ, 0, val, len, 25000);
10148 + dev_dbg(dev, "set ir parm len fuzz %d rc 0x%02x\n", *val, rc);
10149 +
10150 + *val = (65536 - (minimum_pause * 2000)) / 256;
10151 + rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
10152 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
10153 +- RR3_IR_IO_MIN_PAUSE, 0, val, len, HZ * 25);
10154 ++ RR3_IR_IO_MIN_PAUSE, 0, val, len, 25000);
10155 + dev_dbg(dev, "set ir parm min pause %d rc 0x%02x\n", *val, rc);
10156 +
10157 + *val = periods_measure_carrier;
10158 + rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
10159 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
10160 +- RR3_IR_IO_PERIODS_MF, 0, val, len, HZ * 25);
10161 ++ RR3_IR_IO_PERIODS_MF, 0, val, len, 25000);
10162 + dev_dbg(dev, "set ir parm periods measure carrier %d rc 0x%02x", *val,
10163 + rc);
10164 +
10165 + *val = RR3_DRIVER_MAXLENS;
10166 + rc = usb_control_msg(udev, txpipe, RR3_SET_IR_PARAM,
10167 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
10168 +- RR3_IR_IO_MAX_LENGTHS, 0, val, len, HZ * 25);
10169 ++ RR3_IR_IO_MAX_LENGTHS, 0, val, len, 25000);
10170 + dev_dbg(dev, "set ir parm max lens %d rc 0x%02x\n", *val, rc);
10171 +
10172 + kfree(val);
10173 +@@ -585,7 +585,7 @@ static void redrat3_get_firmware_rev(struct redrat3_dev *rr3)
10174 + rc = usb_control_msg(rr3->udev, usb_rcvctrlpipe(rr3->udev, 0),
10175 + RR3_FW_VERSION,
10176 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
10177 +- 0, 0, buffer, RR3_FW_VERSION_LEN, HZ * 5);
10178 ++ 0, 0, buffer, RR3_FW_VERSION_LEN, 5000);
10179 +
10180 + if (rc >= 0)
10181 + dev_info(rr3->dev, "Firmware rev: %s", buffer);
10182 +@@ -825,14 +825,14 @@ static int redrat3_transmit_ir(struct rc_dev *rcdev, unsigned *txbuf,
10183 +
10184 + pipe = usb_sndbulkpipe(rr3->udev, rr3->ep_out->bEndpointAddress);
10185 + ret = usb_bulk_msg(rr3->udev, pipe, irdata,
10186 +- sendbuf_len, &ret_len, 10 * HZ);
10187 ++ sendbuf_len, &ret_len, 10000);
10188 + dev_dbg(dev, "sent %d bytes, (ret %d)\n", ret_len, ret);
10189 +
10190 + /* now tell the hardware to transmit what we sent it */
10191 + pipe = usb_rcvctrlpipe(rr3->udev, 0);
10192 + ret = usb_control_msg(rr3->udev, pipe, RR3_TX_SEND_SIGNAL,
10193 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
10194 +- 0, 0, irdata, 2, HZ * 10);
10195 ++ 0, 0, irdata, 2, 10000);
10196 +
10197 + if (ret < 0)
10198 + dev_err(dev, "Error: control msg send failed, rc %d\n", ret);
10199 +diff --git a/drivers/media/tuners/msi001.c b/drivers/media/tuners/msi001.c
10200 +index 78e6fd600d8ef..44247049a3190 100644
10201 +--- a/drivers/media/tuners/msi001.c
10202 ++++ b/drivers/media/tuners/msi001.c
10203 +@@ -442,6 +442,13 @@ static int msi001_probe(struct spi_device *spi)
10204 + V4L2_CID_RF_TUNER_BANDWIDTH_AUTO, 0, 1, 1, 1);
10205 + dev->bandwidth = v4l2_ctrl_new_std(&dev->hdl, &msi001_ctrl_ops,
10206 + V4L2_CID_RF_TUNER_BANDWIDTH, 200000, 8000000, 1, 200000);
10207 ++ if (dev->hdl.error) {
10208 ++ ret = dev->hdl.error;
10209 ++ dev_err(&spi->dev, "Could not initialize controls\n");
10210 ++ /* control init failed, free handler */
10211 ++ goto err_ctrl_handler_free;
10212 ++ }
10213 ++
10214 + v4l2_ctrl_auto_cluster(2, &dev->bandwidth_auto, 0, false);
10215 + dev->lna_gain = v4l2_ctrl_new_std(&dev->hdl, &msi001_ctrl_ops,
10216 + V4L2_CID_RF_TUNER_LNA_GAIN, 0, 1, 1, 1);
10217 +diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
10218 +index fefb2625f6558..75ddf7ed1faff 100644
10219 +--- a/drivers/media/tuners/si2157.c
10220 ++++ b/drivers/media/tuners/si2157.c
10221 +@@ -90,7 +90,7 @@ static int si2157_init(struct dvb_frontend *fe)
10222 + dev_dbg(&client->dev, "\n");
10223 +
10224 + /* Try to get Xtal trim property, to verify tuner still running */
10225 +- memcpy(cmd.args, "\x15\x00\x04\x02", 4);
10226 ++ memcpy(cmd.args, "\x15\x00\x02\x04", 4);
10227 + cmd.wlen = 4;
10228 + cmd.rlen = 4;
10229 + ret = si2157_cmd_execute(client, &cmd);
10230 +diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
10231 +index e731243267e49..a2563c2540808 100644
10232 +--- a/drivers/media/usb/b2c2/flexcop-usb.c
10233 ++++ b/drivers/media/usb/b2c2/flexcop-usb.c
10234 +@@ -87,7 +87,7 @@ static int flexcop_usb_readwrite_dw(struct flexcop_device *fc, u16 wRegOffsPCI,
10235 + 0,
10236 + fc_usb->data,
10237 + sizeof(u32),
10238 +- B2C2_WAIT_FOR_OPERATION_RDW * HZ);
10239 ++ B2C2_WAIT_FOR_OPERATION_RDW);
10240 +
10241 + if (ret != sizeof(u32)) {
10242 + err("error while %s dword from %d (%d).", read ? "reading" :
10243 +@@ -155,7 +155,7 @@ static int flexcop_usb_v8_memory_req(struct flexcop_usb *fc_usb,
10244 + wIndex,
10245 + fc_usb->data,
10246 + buflen,
10247 +- nWaitTime * HZ);
10248 ++ nWaitTime);
10249 + if (ret != buflen)
10250 + ret = -EIO;
10251 +
10252 +@@ -249,13 +249,13 @@ static int flexcop_usb_i2c_req(struct flexcop_i2c_adapter *i2c,
10253 + /* DKT 020208 - add this to support special case of DiSEqC */
10254 + case USB_FUNC_I2C_CHECKWRITE:
10255 + pipe = B2C2_USB_CTRL_PIPE_OUT;
10256 +- nWaitTime = 2;
10257 ++ nWaitTime = 2000;
10258 + request_type |= USB_DIR_OUT;
10259 + break;
10260 + case USB_FUNC_I2C_READ:
10261 + case USB_FUNC_I2C_REPEATREAD:
10262 + pipe = B2C2_USB_CTRL_PIPE_IN;
10263 +- nWaitTime = 2;
10264 ++ nWaitTime = 2000;
10265 + request_type |= USB_DIR_IN;
10266 + break;
10267 + default:
10268 +@@ -282,7 +282,7 @@ static int flexcop_usb_i2c_req(struct flexcop_i2c_adapter *i2c,
10269 + wIndex,
10270 + fc_usb->data,
10271 + buflen,
10272 +- nWaitTime * HZ);
10273 ++ nWaitTime);
10274 +
10275 + if (ret != buflen)
10276 + ret = -EIO;
10277 +diff --git a/drivers/media/usb/b2c2/flexcop-usb.h b/drivers/media/usb/b2c2/flexcop-usb.h
10278 +index 2f230bf72252b..c7cca1a5ee59d 100644
10279 +--- a/drivers/media/usb/b2c2/flexcop-usb.h
10280 ++++ b/drivers/media/usb/b2c2/flexcop-usb.h
10281 +@@ -91,13 +91,13 @@ typedef enum {
10282 + UTILITY_SRAM_TESTVERIFY = 0x16,
10283 + } flexcop_usb_utility_function_t;
10284 +
10285 +-#define B2C2_WAIT_FOR_OPERATION_RW (1*HZ)
10286 +-#define B2C2_WAIT_FOR_OPERATION_RDW (3*HZ)
10287 +-#define B2C2_WAIT_FOR_OPERATION_WDW (1*HZ)
10288 ++#define B2C2_WAIT_FOR_OPERATION_RW 1000
10289 ++#define B2C2_WAIT_FOR_OPERATION_RDW 3000
10290 ++#define B2C2_WAIT_FOR_OPERATION_WDW 1000
10291 +
10292 +-#define B2C2_WAIT_FOR_OPERATION_V8READ (3*HZ)
10293 +-#define B2C2_WAIT_FOR_OPERATION_V8WRITE (3*HZ)
10294 +-#define B2C2_WAIT_FOR_OPERATION_V8FLASH (3*HZ)
10295 ++#define B2C2_WAIT_FOR_OPERATION_V8READ 3000
10296 ++#define B2C2_WAIT_FOR_OPERATION_V8WRITE 3000
10297 ++#define B2C2_WAIT_FOR_OPERATION_V8FLASH 3000
10298 +
10299 + typedef enum {
10300 + V8_MEMORY_PAGE_DVB_CI = 0x20,
10301 +diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
10302 +index 76aac06f9fb8e..cba03b2864738 100644
10303 +--- a/drivers/media/usb/cpia2/cpia2_usb.c
10304 ++++ b/drivers/media/usb/cpia2/cpia2_usb.c
10305 +@@ -550,7 +550,7 @@ static int write_packet(struct usb_device *udev,
10306 + 0, /* index */
10307 + buf, /* buffer */
10308 + size,
10309 +- HZ);
10310 ++ 1000);
10311 +
10312 + kfree(buf);
10313 + return ret;
10314 +@@ -582,7 +582,7 @@ static int read_packet(struct usb_device *udev,
10315 + 0, /* index */
10316 + buf, /* buffer */
10317 + size,
10318 +- HZ);
10319 ++ 1000);
10320 +
10321 + if (ret >= 0)
10322 + memcpy(registers, buf, size);
10323 +diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
10324 +index 70219b3e85666..7ea8f68b0f458 100644
10325 +--- a/drivers/media/usb/dvb-usb/dib0700_core.c
10326 ++++ b/drivers/media/usb/dvb-usb/dib0700_core.c
10327 +@@ -618,8 +618,6 @@ int dib0700_streaming_ctrl(struct dvb_usb_adapter *adap, int onoff)
10328 + deb_info("the endpoint number (%i) is not correct, use the adapter id instead", adap->fe_adap[0].stream.props.endpoint);
10329 + if (onoff)
10330 + st->channel_state |= 1 << (adap->id);
10331 +- else
10332 +- st->channel_state |= 1 << ~(adap->id);
10333 + } else {
10334 + if (onoff)
10335 + st->channel_state |= 1 << (adap->fe_adap[0].stream.props.endpoint-2);
10336 +diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
10337 +index a27a684403252..aa929db56db1f 100644
10338 +--- a/drivers/media/usb/dvb-usb/dw2102.c
10339 ++++ b/drivers/media/usb/dvb-usb/dw2102.c
10340 +@@ -2148,46 +2148,153 @@ static struct dvb_usb_device_properties s6x0_properties = {
10341 + }
10342 + };
10343 +
10344 +-static const struct dvb_usb_device_description d1100 = {
10345 +- "Prof 1100 USB ",
10346 +- {&dw2102_table[PROF_1100], NULL},
10347 +- {NULL},
10348 +-};
10349 ++static struct dvb_usb_device_properties p1100_properties = {
10350 ++ .caps = DVB_USB_IS_AN_I2C_ADAPTER,
10351 ++ .usb_ctrl = DEVICE_SPECIFIC,
10352 ++ .size_of_priv = sizeof(struct dw2102_state),
10353 ++ .firmware = P1100_FIRMWARE,
10354 ++ .no_reconnect = 1,
10355 +
10356 +-static const struct dvb_usb_device_description d660 = {
10357 +- "TeVii S660 USB",
10358 +- {&dw2102_table[TEVII_S660], NULL},
10359 +- {NULL},
10360 +-};
10361 ++ .i2c_algo = &s6x0_i2c_algo,
10362 ++ .rc.core = {
10363 ++ .rc_interval = 150,
10364 ++ .rc_codes = RC_MAP_TBS_NEC,
10365 ++ .module_name = "dw2102",
10366 ++ .allowed_protos = RC_PROTO_BIT_NEC,
10367 ++ .rc_query = prof_rc_query,
10368 ++ },
10369 +
10370 +-static const struct dvb_usb_device_description d480_1 = {
10371 +- "TeVii S480.1 USB",
10372 +- {&dw2102_table[TEVII_S480_1], NULL},
10373 +- {NULL},
10374 ++ .generic_bulk_ctrl_endpoint = 0x81,
10375 ++ .num_adapters = 1,
10376 ++ .download_firmware = dw2102_load_firmware,
10377 ++ .read_mac_address = s6x0_read_mac_address,
10378 ++ .adapter = {
10379 ++ {
10380 ++ .num_frontends = 1,
10381 ++ .fe = {{
10382 ++ .frontend_attach = stv0288_frontend_attach,
10383 ++ .stream = {
10384 ++ .type = USB_BULK,
10385 ++ .count = 8,
10386 ++ .endpoint = 0x82,
10387 ++ .u = {
10388 ++ .bulk = {
10389 ++ .buffersize = 4096,
10390 ++ }
10391 ++ }
10392 ++ },
10393 ++ } },
10394 ++ }
10395 ++ },
10396 ++ .num_device_descs = 1,
10397 ++ .devices = {
10398 ++ {"Prof 1100 USB ",
10399 ++ {&dw2102_table[PROF_1100], NULL},
10400 ++ {NULL},
10401 ++ },
10402 ++ }
10403 + };
10404 +
10405 +-static const struct dvb_usb_device_description d480_2 = {
10406 +- "TeVii S480.2 USB",
10407 +- {&dw2102_table[TEVII_S480_2], NULL},
10408 +- {NULL},
10409 +-};
10410 ++static struct dvb_usb_device_properties s660_properties = {
10411 ++ .caps = DVB_USB_IS_AN_I2C_ADAPTER,
10412 ++ .usb_ctrl = DEVICE_SPECIFIC,
10413 ++ .size_of_priv = sizeof(struct dw2102_state),
10414 ++ .firmware = S660_FIRMWARE,
10415 ++ .no_reconnect = 1,
10416 +
10417 +-static const struct dvb_usb_device_description d7500 = {
10418 +- "Prof 7500 USB DVB-S2",
10419 +- {&dw2102_table[PROF_7500], NULL},
10420 +- {NULL},
10421 +-};
10422 ++ .i2c_algo = &s6x0_i2c_algo,
10423 ++ .rc.core = {
10424 ++ .rc_interval = 150,
10425 ++ .rc_codes = RC_MAP_TEVII_NEC,
10426 ++ .module_name = "dw2102",
10427 ++ .allowed_protos = RC_PROTO_BIT_NEC,
10428 ++ .rc_query = dw2102_rc_query,
10429 ++ },
10430 +
10431 +-static const struct dvb_usb_device_description d421 = {
10432 +- "TeVii S421 PCI",
10433 +- {&dw2102_table[TEVII_S421], NULL},
10434 +- {NULL},
10435 ++ .generic_bulk_ctrl_endpoint = 0x81,
10436 ++ .num_adapters = 1,
10437 ++ .download_firmware = dw2102_load_firmware,
10438 ++ .read_mac_address = s6x0_read_mac_address,
10439 ++ .adapter = {
10440 ++ {
10441 ++ .num_frontends = 1,
10442 ++ .fe = {{
10443 ++ .frontend_attach = ds3000_frontend_attach,
10444 ++ .stream = {
10445 ++ .type = USB_BULK,
10446 ++ .count = 8,
10447 ++ .endpoint = 0x82,
10448 ++ .u = {
10449 ++ .bulk = {
10450 ++ .buffersize = 4096,
10451 ++ }
10452 ++ }
10453 ++ },
10454 ++ } },
10455 ++ }
10456 ++ },
10457 ++ .num_device_descs = 3,
10458 ++ .devices = {
10459 ++ {"TeVii S660 USB",
10460 ++ {&dw2102_table[TEVII_S660], NULL},
10461 ++ {NULL},
10462 ++ },
10463 ++ {"TeVii S480.1 USB",
10464 ++ {&dw2102_table[TEVII_S480_1], NULL},
10465 ++ {NULL},
10466 ++ },
10467 ++ {"TeVii S480.2 USB",
10468 ++ {&dw2102_table[TEVII_S480_2], NULL},
10469 ++ {NULL},
10470 ++ },
10471 ++ }
10472 + };
10473 +
10474 +-static const struct dvb_usb_device_description d632 = {
10475 +- "TeVii S632 USB",
10476 +- {&dw2102_table[TEVII_S632], NULL},
10477 +- {NULL},
10478 ++static struct dvb_usb_device_properties p7500_properties = {
10479 ++ .caps = DVB_USB_IS_AN_I2C_ADAPTER,
10480 ++ .usb_ctrl = DEVICE_SPECIFIC,
10481 ++ .size_of_priv = sizeof(struct dw2102_state),
10482 ++ .firmware = P7500_FIRMWARE,
10483 ++ .no_reconnect = 1,
10484 ++
10485 ++ .i2c_algo = &s6x0_i2c_algo,
10486 ++ .rc.core = {
10487 ++ .rc_interval = 150,
10488 ++ .rc_codes = RC_MAP_TBS_NEC,
10489 ++ .module_name = "dw2102",
10490 ++ .allowed_protos = RC_PROTO_BIT_NEC,
10491 ++ .rc_query = prof_rc_query,
10492 ++ },
10493 ++
10494 ++ .generic_bulk_ctrl_endpoint = 0x81,
10495 ++ .num_adapters = 1,
10496 ++ .download_firmware = dw2102_load_firmware,
10497 ++ .read_mac_address = s6x0_read_mac_address,
10498 ++ .adapter = {
10499 ++ {
10500 ++ .num_frontends = 1,
10501 ++ .fe = {{
10502 ++ .frontend_attach = prof_7500_frontend_attach,
10503 ++ .stream = {
10504 ++ .type = USB_BULK,
10505 ++ .count = 8,
10506 ++ .endpoint = 0x82,
10507 ++ .u = {
10508 ++ .bulk = {
10509 ++ .buffersize = 4096,
10510 ++ }
10511 ++ }
10512 ++ },
10513 ++ } },
10514 ++ }
10515 ++ },
10516 ++ .num_device_descs = 1,
10517 ++ .devices = {
10518 ++ {"Prof 7500 USB DVB-S2",
10519 ++ {&dw2102_table[PROF_7500], NULL},
10520 ++ {NULL},
10521 ++ },
10522 ++ }
10523 + };
10524 +
10525 + static struct dvb_usb_device_properties su3000_properties = {
10526 +@@ -2267,6 +2374,59 @@ static struct dvb_usb_device_properties su3000_properties = {
10527 + }
10528 + };
10529 +
10530 ++static struct dvb_usb_device_properties s421_properties = {
10531 ++ .caps = DVB_USB_IS_AN_I2C_ADAPTER,
10532 ++ .usb_ctrl = DEVICE_SPECIFIC,
10533 ++ .size_of_priv = sizeof(struct dw2102_state),
10534 ++ .power_ctrl = su3000_power_ctrl,
10535 ++ .num_adapters = 1,
10536 ++ .identify_state = su3000_identify_state,
10537 ++ .i2c_algo = &su3000_i2c_algo,
10538 ++
10539 ++ .rc.core = {
10540 ++ .rc_interval = 150,
10541 ++ .rc_codes = RC_MAP_SU3000,
10542 ++ .module_name = "dw2102",
10543 ++ .allowed_protos = RC_PROTO_BIT_RC5,
10544 ++ .rc_query = su3000_rc_query,
10545 ++ },
10546 ++
10547 ++ .read_mac_address = su3000_read_mac_address,
10548 ++
10549 ++ .generic_bulk_ctrl_endpoint = 0x01,
10550 ++
10551 ++ .adapter = {
10552 ++ {
10553 ++ .num_frontends = 1,
10554 ++ .fe = {{
10555 ++ .streaming_ctrl = su3000_streaming_ctrl,
10556 ++ .frontend_attach = m88rs2000_frontend_attach,
10557 ++ .stream = {
10558 ++ .type = USB_BULK,
10559 ++ .count = 8,
10560 ++ .endpoint = 0x82,
10561 ++ .u = {
10562 ++ .bulk = {
10563 ++ .buffersize = 4096,
10564 ++ }
10565 ++ }
10566 ++ }
10567 ++ } },
10568 ++ }
10569 ++ },
10570 ++ .num_device_descs = 2,
10571 ++ .devices = {
10572 ++ { "TeVii S421 PCI",
10573 ++ { &dw2102_table[TEVII_S421], NULL },
10574 ++ { NULL },
10575 ++ },
10576 ++ { "TeVii S632 USB",
10577 ++ { &dw2102_table[TEVII_S632], NULL },
10578 ++ { NULL },
10579 ++ },
10580 ++ }
10581 ++};
10582 ++
10583 + static struct dvb_usb_device_properties t220_properties = {
10584 + .caps = DVB_USB_IS_AN_I2C_ADAPTER,
10585 + .usb_ctrl = DEVICE_SPECIFIC,
10586 +@@ -2384,101 +2544,33 @@ static struct dvb_usb_device_properties tt_s2_4600_properties = {
10587 + static int dw2102_probe(struct usb_interface *intf,
10588 + const struct usb_device_id *id)
10589 + {
10590 +- int retval = -ENOMEM;
10591 +- struct dvb_usb_device_properties *p1100;
10592 +- struct dvb_usb_device_properties *s660;
10593 +- struct dvb_usb_device_properties *p7500;
10594 +- struct dvb_usb_device_properties *s421;
10595 +-
10596 +- p1100 = kmemdup(&s6x0_properties,
10597 +- sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
10598 +- if (!p1100)
10599 +- goto err0;
10600 +-
10601 +- /* copy default structure */
10602 +- /* fill only different fields */
10603 +- p1100->firmware = P1100_FIRMWARE;
10604 +- p1100->devices[0] = d1100;
10605 +- p1100->rc.core.rc_query = prof_rc_query;
10606 +- p1100->rc.core.rc_codes = RC_MAP_TBS_NEC;
10607 +- p1100->adapter->fe[0].frontend_attach = stv0288_frontend_attach;
10608 +-
10609 +- s660 = kmemdup(&s6x0_properties,
10610 +- sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
10611 +- if (!s660)
10612 +- goto err1;
10613 +-
10614 +- s660->firmware = S660_FIRMWARE;
10615 +- s660->num_device_descs = 3;
10616 +- s660->devices[0] = d660;
10617 +- s660->devices[1] = d480_1;
10618 +- s660->devices[2] = d480_2;
10619 +- s660->adapter->fe[0].frontend_attach = ds3000_frontend_attach;
10620 +-
10621 +- p7500 = kmemdup(&s6x0_properties,
10622 +- sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
10623 +- if (!p7500)
10624 +- goto err2;
10625 +-
10626 +- p7500->firmware = P7500_FIRMWARE;
10627 +- p7500->devices[0] = d7500;
10628 +- p7500->rc.core.rc_query = prof_rc_query;
10629 +- p7500->rc.core.rc_codes = RC_MAP_TBS_NEC;
10630 +- p7500->adapter->fe[0].frontend_attach = prof_7500_frontend_attach;
10631 +-
10632 +-
10633 +- s421 = kmemdup(&su3000_properties,
10634 +- sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
10635 +- if (!s421)
10636 +- goto err3;
10637 +-
10638 +- s421->num_device_descs = 2;
10639 +- s421->devices[0] = d421;
10640 +- s421->devices[1] = d632;
10641 +- s421->adapter->fe[0].frontend_attach = m88rs2000_frontend_attach;
10642 +-
10643 +- if (0 == dvb_usb_device_init(intf, &dw2102_properties,
10644 +- THIS_MODULE, NULL, adapter_nr) ||
10645 +- 0 == dvb_usb_device_init(intf, &dw2104_properties,
10646 +- THIS_MODULE, NULL, adapter_nr) ||
10647 +- 0 == dvb_usb_device_init(intf, &dw3101_properties,
10648 +- THIS_MODULE, NULL, adapter_nr) ||
10649 +- 0 == dvb_usb_device_init(intf, &s6x0_properties,
10650 +- THIS_MODULE, NULL, adapter_nr) ||
10651 +- 0 == dvb_usb_device_init(intf, p1100,
10652 +- THIS_MODULE, NULL, adapter_nr) ||
10653 +- 0 == dvb_usb_device_init(intf, s660,
10654 +- THIS_MODULE, NULL, adapter_nr) ||
10655 +- 0 == dvb_usb_device_init(intf, p7500,
10656 +- THIS_MODULE, NULL, adapter_nr) ||
10657 +- 0 == dvb_usb_device_init(intf, s421,
10658 +- THIS_MODULE, NULL, adapter_nr) ||
10659 +- 0 == dvb_usb_device_init(intf, &su3000_properties,
10660 +- THIS_MODULE, NULL, adapter_nr) ||
10661 +- 0 == dvb_usb_device_init(intf, &t220_properties,
10662 +- THIS_MODULE, NULL, adapter_nr) ||
10663 +- 0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
10664 +- THIS_MODULE, NULL, adapter_nr)) {
10665 +-
10666 +- /* clean up copied properties */
10667 +- kfree(s421);
10668 +- kfree(p7500);
10669 +- kfree(s660);
10670 +- kfree(p1100);
10671 ++ if (!(dvb_usb_device_init(intf, &dw2102_properties,
10672 ++ THIS_MODULE, NULL, adapter_nr) &&
10673 ++ dvb_usb_device_init(intf, &dw2104_properties,
10674 ++ THIS_MODULE, NULL, adapter_nr) &&
10675 ++ dvb_usb_device_init(intf, &dw3101_properties,
10676 ++ THIS_MODULE, NULL, adapter_nr) &&
10677 ++ dvb_usb_device_init(intf, &s6x0_properties,
10678 ++ THIS_MODULE, NULL, adapter_nr) &&
10679 ++ dvb_usb_device_init(intf, &p1100_properties,
10680 ++ THIS_MODULE, NULL, adapter_nr) &&
10681 ++ dvb_usb_device_init(intf, &s660_properties,
10682 ++ THIS_MODULE, NULL, adapter_nr) &&
10683 ++ dvb_usb_device_init(intf, &p7500_properties,
10684 ++ THIS_MODULE, NULL, adapter_nr) &&
10685 ++ dvb_usb_device_init(intf, &s421_properties,
10686 ++ THIS_MODULE, NULL, adapter_nr) &&
10687 ++ dvb_usb_device_init(intf, &su3000_properties,
10688 ++ THIS_MODULE, NULL, adapter_nr) &&
10689 ++ dvb_usb_device_init(intf, &t220_properties,
10690 ++ THIS_MODULE, NULL, adapter_nr) &&
10691 ++ dvb_usb_device_init(intf, &tt_s2_4600_properties,
10692 ++ THIS_MODULE, NULL, adapter_nr))) {
10693 +
10694 + return 0;
10695 + }
10696 +
10697 +- retval = -ENODEV;
10698 +- kfree(s421);
10699 +-err3:
10700 +- kfree(p7500);
10701 +-err2:
10702 +- kfree(s660);
10703 +-err1:
10704 +- kfree(p1100);
10705 +-err0:
10706 +- return retval;
10707 ++ return -ENODEV;
10708 + }
10709 +
10710 + static void dw2102_disconnect(struct usb_interface *intf)
10711 +diff --git a/drivers/media/usb/dvb-usb/m920x.c b/drivers/media/usb/dvb-usb/m920x.c
10712 +index 4bb5b82599a79..691e05833db19 100644
10713 +--- a/drivers/media/usb/dvb-usb/m920x.c
10714 ++++ b/drivers/media/usb/dvb-usb/m920x.c
10715 +@@ -274,6 +274,13 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
10716 + /* Should check for ack here, if we knew how. */
10717 + }
10718 + if (msg[i].flags & I2C_M_RD) {
10719 ++ char *read = kmalloc(1, GFP_KERNEL);
10720 ++ if (!read) {
10721 ++ ret = -ENOMEM;
10722 ++ kfree(read);
10723 ++ goto unlock;
10724 ++ }
10725 ++
10726 + for (j = 0; j < msg[i].len; j++) {
10727 + /* Last byte of transaction?
10728 + * Send STOP, otherwise send ACK. */
10729 +@@ -281,9 +288,12 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
10730 +
10731 + if ((ret = m920x_read(d->udev, M9206_I2C, 0x0,
10732 + 0x20 | stop,
10733 +- &msg[i].buf[j], 1)) != 0)
10734 ++ read, 1)) != 0)
10735 + goto unlock;
10736 ++ msg[i].buf[j] = read[0];
10737 + }
10738 ++
10739 ++ kfree(read);
10740 + } else {
10741 + for (j = 0; j < msg[i].len; j++) {
10742 + /* Last byte of transaction? Then send STOP. */
10743 +diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
10744 +index cf45cc566cbe2..87e375562dbb2 100644
10745 +--- a/drivers/media/usb/em28xx/em28xx-cards.c
10746 ++++ b/drivers/media/usb/em28xx/em28xx-cards.c
10747 +@@ -3575,8 +3575,10 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
10748 +
10749 + if (dev->is_audio_only) {
10750 + retval = em28xx_audio_setup(dev);
10751 +- if (retval)
10752 +- return -ENODEV;
10753 ++ if (retval) {
10754 ++ retval = -ENODEV;
10755 ++ goto err_deinit_media;
10756 ++ }
10757 + em28xx_init_extension(dev);
10758 +
10759 + return 0;
10760 +@@ -3595,7 +3597,7 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
10761 + dev_err(&dev->intf->dev,
10762 + "%s: em28xx_i2c_register bus 0 - error [%d]!\n",
10763 + __func__, retval);
10764 +- return retval;
10765 ++ goto err_deinit_media;
10766 + }
10767 +
10768 + /* register i2c bus 1 */
10769 +@@ -3611,9 +3613,7 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
10770 + "%s: em28xx_i2c_register bus 1 - error [%d]!\n",
10771 + __func__, retval);
10772 +
10773 +- em28xx_i2c_unregister(dev, 0);
10774 +-
10775 +- return retval;
10776 ++ goto err_unreg_i2c;
10777 + }
10778 + }
10779 +
10780 +@@ -3621,6 +3621,12 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
10781 + em28xx_card_setup(dev);
10782 +
10783 + return 0;
10784 ++
10785 ++err_unreg_i2c:
10786 ++ em28xx_i2c_unregister(dev, 0);
10787 ++err_deinit_media:
10788 ++ em28xx_unregister_media_device(dev);
10789 ++ return retval;
10790 + }
10791 +
10792 + static int em28xx_duplicate_dev(struct em28xx *dev)
10793 +diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
10794 +index af9216278024f..308bc029099d9 100644
10795 +--- a/drivers/media/usb/em28xx/em28xx-core.c
10796 ++++ b/drivers/media/usb/em28xx/em28xx-core.c
10797 +@@ -89,7 +89,7 @@ int em28xx_read_reg_req_len(struct em28xx *dev, u8 req, u16 reg,
10798 + mutex_lock(&dev->ctrl_urb_lock);
10799 + ret = usb_control_msg(udev, pipe, req,
10800 + USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
10801 +- 0x0000, reg, dev->urb_buf, len, HZ);
10802 ++ 0x0000, reg, dev->urb_buf, len, 1000);
10803 + if (ret < 0) {
10804 + em28xx_regdbg("(pipe 0x%08x): IN: %02x %02x %02x %02x %02x %02x %02x %02x failed with error %i\n",
10805 + pipe,
10806 +@@ -158,7 +158,7 @@ int em28xx_write_regs_req(struct em28xx *dev, u8 req, u16 reg, char *buf,
10807 + memcpy(dev->urb_buf, buf, len);
10808 + ret = usb_control_msg(udev, pipe, req,
10809 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
10810 +- 0x0000, reg, dev->urb_buf, len, HZ);
10811 ++ 0x0000, reg, dev->urb_buf, len, 1000);
10812 + mutex_unlock(&dev->ctrl_urb_lock);
10813 +
10814 + if (ret < 0) {
10815 +diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
10816 +index d38dee1792e41..3915d551d59e7 100644
10817 +--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
10818 ++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
10819 +@@ -1467,7 +1467,7 @@ static int pvr2_upload_firmware1(struct pvr2_hdw *hdw)
10820 + for (address = 0; address < fwsize; address += 0x800) {
10821 + memcpy(fw_ptr, fw_entry->data + address, 0x800);
10822 + ret += usb_control_msg(hdw->usb_dev, pipe, 0xa0, 0x40, address,
10823 +- 0, fw_ptr, 0x800, HZ);
10824 ++ 0, fw_ptr, 0x800, 1000);
10825 + }
10826 +
10827 + trace_firmware("Upload done, releasing device's CPU");
10828 +@@ -1605,7 +1605,7 @@ int pvr2_upload_firmware2(struct pvr2_hdw *hdw)
10829 + ((u32 *)fw_ptr)[icnt] = swab32(((u32 *)fw_ptr)[icnt]);
10830 +
10831 + ret |= usb_bulk_msg(hdw->usb_dev, pipe, fw_ptr,bcnt,
10832 +- &actual_length, HZ);
10833 ++ &actual_length, 1000);
10834 + ret |= (actual_length != bcnt);
10835 + if (ret) break;
10836 + fw_done += bcnt;
10837 +@@ -3438,7 +3438,7 @@ void pvr2_hdw_cpufw_set_enabled(struct pvr2_hdw *hdw,
10838 + 0xa0,0xc0,
10839 + address,0,
10840 + hdw->fw_buffer+address,
10841 +- 0x800,HZ);
10842 ++ 0x800,1000);
10843 + if (ret < 0) break;
10844 + }
10845 +
10846 +@@ -3977,7 +3977,7 @@ void pvr2_hdw_cpureset_assert(struct pvr2_hdw *hdw,int val)
10847 + /* Write the CPUCS register on the 8051. The lsb of the register
10848 + is the reset bit; a 1 asserts reset while a 0 clears it. */
10849 + pipe = usb_sndctrlpipe(hdw->usb_dev, 0);
10850 +- ret = usb_control_msg(hdw->usb_dev,pipe,0xa0,0x40,0xe600,0,da,1,HZ);
10851 ++ ret = usb_control_msg(hdw->usb_dev,pipe,0xa0,0x40,0xe600,0,da,1,1000);
10852 + if (ret < 0) {
10853 + pvr2_trace(PVR2_TRACE_ERROR_LEGS,
10854 + "cpureset_assert(%d) error=%d",val,ret);
10855 +diff --git a/drivers/media/usb/s2255/s2255drv.c b/drivers/media/usb/s2255/s2255drv.c
10856 +index 4af55e2478be1..cb15eb32d2a6b 100644
10857 +--- a/drivers/media/usb/s2255/s2255drv.c
10858 ++++ b/drivers/media/usb/s2255/s2255drv.c
10859 +@@ -1884,7 +1884,7 @@ static long s2255_vendor_req(struct s2255_dev *dev, unsigned char Request,
10860 + USB_TYPE_VENDOR | USB_RECIP_DEVICE |
10861 + USB_DIR_IN,
10862 + Value, Index, buf,
10863 +- TransferBufferLength, HZ * 5);
10864 ++ TransferBufferLength, USB_CTRL_SET_TIMEOUT);
10865 +
10866 + if (r >= 0)
10867 + memcpy(TransferBuffer, buf, TransferBufferLength);
10868 +@@ -1893,7 +1893,7 @@ static long s2255_vendor_req(struct s2255_dev *dev, unsigned char Request,
10869 + r = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0),
10870 + Request, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
10871 + Value, Index, buf,
10872 +- TransferBufferLength, HZ * 5);
10873 ++ TransferBufferLength, USB_CTRL_SET_TIMEOUT);
10874 + }
10875 + kfree(buf);
10876 + return r;
10877 +diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c
10878 +index b4f8bc5db1389..4e1698f788187 100644
10879 +--- a/drivers/media/usb/stk1160/stk1160-core.c
10880 ++++ b/drivers/media/usb/stk1160/stk1160-core.c
10881 +@@ -65,7 +65,7 @@ int stk1160_read_reg(struct stk1160 *dev, u16 reg, u8 *value)
10882 + return -ENOMEM;
10883 + ret = usb_control_msg(dev->udev, pipe, 0x00,
10884 + USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
10885 +- 0x00, reg, buf, sizeof(u8), HZ);
10886 ++ 0x00, reg, buf, sizeof(u8), 1000);
10887 + if (ret < 0) {
10888 + stk1160_err("read failed on reg 0x%x (%d)\n",
10889 + reg, ret);
10890 +@@ -85,7 +85,7 @@ int stk1160_write_reg(struct stk1160 *dev, u16 reg, u16 value)
10891 +
10892 + ret = usb_control_msg(dev->udev, pipe, 0x01,
10893 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
10894 +- value, reg, NULL, 0, HZ);
10895 ++ value, reg, NULL, 0, 1000);
10896 + if (ret < 0) {
10897 + stk1160_err("write failed on reg 0x%x (%d)\n",
10898 + reg, ret);
10899 +diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
10900 +index a3dfacf069c44..c884020b28784 100644
10901 +--- a/drivers/media/usb/uvc/uvcvideo.h
10902 ++++ b/drivers/media/usb/uvc/uvcvideo.h
10903 +@@ -183,7 +183,7 @@
10904 + /* Maximum status buffer size in bytes of interrupt URB. */
10905 + #define UVC_MAX_STATUS_SIZE 16
10906 +
10907 +-#define UVC_CTRL_CONTROL_TIMEOUT 500
10908 ++#define UVC_CTRL_CONTROL_TIMEOUT 5000
10909 + #define UVC_CTRL_STREAMING_TIMEOUT 5000
10910 +
10911 + /* Maximum allowed number of control mappings per device */
10912 +diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
10913 +index 4ffa14e44efe4..6d6d30dbbe68b 100644
10914 +--- a/drivers/media/v4l2-core/v4l2-ioctl.c
10915 ++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
10916 +@@ -2127,6 +2127,7 @@ static int v4l_prepare_buf(const struct v4l2_ioctl_ops *ops,
10917 + static int v4l_g_parm(const struct v4l2_ioctl_ops *ops,
10918 + struct file *file, void *fh, void *arg)
10919 + {
10920 ++ struct video_device *vfd = video_devdata(file);
10921 + struct v4l2_streamparm *p = arg;
10922 + v4l2_std_id std;
10923 + int ret = check_fmt(file, p->type);
10924 +@@ -2138,7 +2139,8 @@ static int v4l_g_parm(const struct v4l2_ioctl_ops *ops,
10925 + if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
10926 + p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
10927 + return -EINVAL;
10928 +- p->parm.capture.readbuffers = 2;
10929 ++ if (vfd->device_caps & V4L2_CAP_READWRITE)
10930 ++ p->parm.capture.readbuffers = 2;
10931 + ret = ops->vidioc_g_std(file, fh, &std);
10932 + if (ret == 0)
10933 + v4l2_video_std_frame_period(std, &p->parm.capture.timeperframe);
10934 +diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
10935 +index a760ab08256ff..9019121a80f53 100644
10936 +--- a/drivers/memory/renesas-rpc-if.c
10937 ++++ b/drivers/memory/renesas-rpc-if.c
10938 +@@ -245,7 +245,7 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
10939 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
10940 + rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
10941 + if (IS_ERR(rpc->dirmap))
10942 +- rpc->dirmap = NULL;
10943 ++ return PTR_ERR(rpc->dirmap);
10944 + rpc->size = resource_size(res);
10945 +
10946 + rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
10947 +diff --git a/drivers/mfd/atmel-flexcom.c b/drivers/mfd/atmel-flexcom.c
10948 +index d2f5c073fdf31..559eb4d352b68 100644
10949 +--- a/drivers/mfd/atmel-flexcom.c
10950 ++++ b/drivers/mfd/atmel-flexcom.c
10951 +@@ -87,8 +87,7 @@ static const struct of_device_id atmel_flexcom_of_match[] = {
10952 + };
10953 + MODULE_DEVICE_TABLE(of, atmel_flexcom_of_match);
10954 +
10955 +-#ifdef CONFIG_PM_SLEEP
10956 +-static int atmel_flexcom_resume(struct device *dev)
10957 ++static int __maybe_unused atmel_flexcom_resume_noirq(struct device *dev)
10958 + {
10959 + struct atmel_flexcom *ddata = dev_get_drvdata(dev);
10960 + int err;
10961 +@@ -105,16 +104,16 @@ static int atmel_flexcom_resume(struct device *dev)
10962 +
10963 + return 0;
10964 + }
10965 +-#endif
10966 +
10967 +-static SIMPLE_DEV_PM_OPS(atmel_flexcom_pm_ops, NULL,
10968 +- atmel_flexcom_resume);
10969 ++static const struct dev_pm_ops atmel_flexcom_pm_ops = {
10970 ++ .resume_noirq = atmel_flexcom_resume_noirq,
10971 ++};
10972 +
10973 + static struct platform_driver atmel_flexcom_driver = {
10974 + .probe = atmel_flexcom_probe,
10975 + .driver = {
10976 + .name = "atmel_flexcom",
10977 +- .pm = &atmel_flexcom_pm_ops,
10978 ++ .pm = pm_ptr(&atmel_flexcom_pm_ops),
10979 + .of_match_table = atmel_flexcom_of_match,
10980 + },
10981 + };
10982 +diff --git a/drivers/misc/lattice-ecp3-config.c b/drivers/misc/lattice-ecp3-config.c
10983 +index 5eaf74447ca1e..556bb7d705f53 100644
10984 +--- a/drivers/misc/lattice-ecp3-config.c
10985 ++++ b/drivers/misc/lattice-ecp3-config.c
10986 +@@ -76,12 +76,12 @@ static void firmware_load(const struct firmware *fw, void *context)
10987 +
10988 + if (fw == NULL) {
10989 + dev_err(&spi->dev, "Cannot load firmware, aborting\n");
10990 +- return;
10991 ++ goto out;
10992 + }
10993 +
10994 + if (fw->size == 0) {
10995 + dev_err(&spi->dev, "Error: Firmware size is 0!\n");
10996 +- return;
10997 ++ goto out;
10998 + }
10999 +
11000 + /* Fill dummy data (24 stuffing bits for commands) */
11001 +@@ -103,7 +103,7 @@ static void firmware_load(const struct firmware *fw, void *context)
11002 + dev_err(&spi->dev,
11003 + "Error: No supported FPGA detected (JEDEC_ID=%08x)!\n",
11004 + jedec_id);
11005 +- return;
11006 ++ goto out;
11007 + }
11008 +
11009 + dev_info(&spi->dev, "FPGA %s detected\n", ecp3_dev[i].name);
11010 +@@ -116,7 +116,7 @@ static void firmware_load(const struct firmware *fw, void *context)
11011 + buffer = kzalloc(fw->size + 8, GFP_KERNEL);
11012 + if (!buffer) {
11013 + dev_err(&spi->dev, "Error: Can't allocate memory!\n");
11014 +- return;
11015 ++ goto out;
11016 + }
11017 +
11018 + /*
11019 +@@ -155,7 +155,7 @@ static void firmware_load(const struct firmware *fw, void *context)
11020 + "Error: Timeout waiting for FPGA to clear (status=%08x)!\n",
11021 + status);
11022 + kfree(buffer);
11023 +- return;
11024 ++ goto out;
11025 + }
11026 +
11027 + dev_info(&spi->dev, "Configuring the FPGA...\n");
11028 +@@ -181,7 +181,7 @@ static void firmware_load(const struct firmware *fw, void *context)
11029 + release_firmware(fw);
11030 +
11031 + kfree(buffer);
11032 +-
11033 ++out:
11034 + complete(&data->fw_loaded);
11035 + }
11036 +
11037 +diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
11038 +index 30c8ac24635d4..4405fb2bc7a00 100644
11039 +--- a/drivers/misc/lkdtm/Makefile
11040 ++++ b/drivers/misc/lkdtm/Makefile
11041 +@@ -16,7 +16,7 @@ KCOV_INSTRUMENT_rodata.o := n
11042 +
11043 + OBJCOPYFLAGS :=
11044 + OBJCOPYFLAGS_rodata_objcopy.o := \
11045 +- --rename-section .noinstr.text=.rodata,alloc,readonly,load
11046 ++ --rename-section .noinstr.text=.rodata,alloc,readonly,load,contents
11047 + targets += rodata.o rodata_objcopy.o
11048 + $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
11049 + $(call if_changed,objcopy)
11050 +diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
11051 +index 1b0853a82189a..99a4ce68d82f1 100644
11052 +--- a/drivers/mmc/core/sdio.c
11053 ++++ b/drivers/mmc/core/sdio.c
11054 +@@ -708,6 +708,8 @@ try_again:
11055 + if (host->ops->init_card)
11056 + host->ops->init_card(host, card);
11057 +
11058 ++ card->ocr = ocr_card;
11059 ++
11060 + /*
11061 + * If the host and card support UHS-I mode request the card
11062 + * to switch to 1.8V signaling level. No 1.8v signalling if
11063 +@@ -820,7 +822,7 @@ try_again:
11064 + goto mismatch;
11065 + }
11066 + }
11067 +- card->ocr = ocr_card;
11068 ++
11069 + mmc_fixup_device(card, sdio_fixup_methods);
11070 +
11071 + if (card->type == MMC_TYPE_SD_COMBO) {
11072 +diff --git a/drivers/mmc/host/meson-mx-sdhc-mmc.c b/drivers/mmc/host/meson-mx-sdhc-mmc.c
11073 +index 8fdd0bbbfa21f..28aa78aa08f3f 100644
11074 +--- a/drivers/mmc/host/meson-mx-sdhc-mmc.c
11075 ++++ b/drivers/mmc/host/meson-mx-sdhc-mmc.c
11076 +@@ -854,6 +854,11 @@ static int meson_mx_sdhc_probe(struct platform_device *pdev)
11077 + goto err_disable_pclk;
11078 +
11079 + irq = platform_get_irq(pdev, 0);
11080 ++ if (irq < 0) {
11081 ++ ret = irq;
11082 ++ goto err_disable_pclk;
11083 ++ }
11084 ++
11085 + ret = devm_request_threaded_irq(dev, irq, meson_mx_sdhc_irq,
11086 + meson_mx_sdhc_irq_thread, IRQF_ONESHOT,
11087 + NULL, host);
11088 +diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
11089 +index 1c5299cd0cbe1..264aae2a2b0cf 100644
11090 +--- a/drivers/mmc/host/meson-mx-sdio.c
11091 ++++ b/drivers/mmc/host/meson-mx-sdio.c
11092 +@@ -663,6 +663,11 @@ static int meson_mx_mmc_probe(struct platform_device *pdev)
11093 + }
11094 +
11095 + irq = platform_get_irq(pdev, 0);
11096 ++ if (irq < 0) {
11097 ++ ret = irq;
11098 ++ goto error_free_mmc;
11099 ++ }
11100 ++
11101 + ret = devm_request_threaded_irq(host->controller_dev, irq,
11102 + meson_mx_mmc_irq,
11103 + meson_mx_mmc_irq_thread, IRQF_ONESHOT,
11104 +diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
11105 +index ecb050ba95cdf..dc164c18f8429 100644
11106 +--- a/drivers/mtd/hyperbus/rpc-if.c
11107 ++++ b/drivers/mtd/hyperbus/rpc-if.c
11108 +@@ -124,7 +124,9 @@ static int rpcif_hb_probe(struct platform_device *pdev)
11109 + if (!hyperbus)
11110 + return -ENOMEM;
11111 +
11112 +- rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
11113 ++ error = rpcif_sw_init(&hyperbus->rpc, pdev->dev.parent);
11114 ++ if (error)
11115 ++ return error;
11116 +
11117 + platform_set_drvdata(pdev, hyperbus);
11118 +
11119 +@@ -150,9 +152,9 @@ static int rpcif_hb_remove(struct platform_device *pdev)
11120 + {
11121 + struct rpcif_hyperbus *hyperbus = platform_get_drvdata(pdev);
11122 + int error = hyperbus_unregister_device(&hyperbus->hbdev);
11123 +- struct rpcif *rpc = dev_get_drvdata(pdev->dev.parent);
11124 +
11125 +- rpcif_disable_rpm(rpc);
11126 ++ rpcif_disable_rpm(&hyperbus->rpc);
11127 ++
11128 + return error;
11129 + }
11130 +
11131 +diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
11132 +index 95d47422bbf20..5725818fa199f 100644
11133 +--- a/drivers/mtd/mtdpart.c
11134 ++++ b/drivers/mtd/mtdpart.c
11135 +@@ -313,7 +313,7 @@ static int __mtd_del_partition(struct mtd_info *mtd)
11136 + if (err)
11137 + return err;
11138 +
11139 +- list_del(&child->part.node);
11140 ++ list_del(&mtd->part.node);
11141 + free_partition(mtd);
11142 +
11143 + return 0;
11144 +diff --git a/drivers/mtd/nand/bbt.c b/drivers/mtd/nand/bbt.c
11145 +index 044adf9138546..64af6898131d6 100644
11146 +--- a/drivers/mtd/nand/bbt.c
11147 ++++ b/drivers/mtd/nand/bbt.c
11148 +@@ -123,7 +123,7 @@ int nanddev_bbt_set_block_status(struct nand_device *nand, unsigned int entry,
11149 + unsigned int rbits = bits_per_block + offs - BITS_PER_LONG;
11150 +
11151 + pos[1] &= ~GENMASK(rbits - 1, 0);
11152 +- pos[1] |= val >> rbits;
11153 ++ pos[1] |= val >> (bits_per_block - rbits);
11154 + }
11155 +
11156 + return 0;
11157 +diff --git a/drivers/mtd/nand/raw/davinci_nand.c b/drivers/mtd/nand/raw/davinci_nand.c
11158 +index f8c36d19ab47f..bfd3f440aca57 100644
11159 +--- a/drivers/mtd/nand/raw/davinci_nand.c
11160 ++++ b/drivers/mtd/nand/raw/davinci_nand.c
11161 +@@ -372,17 +372,15 @@ correct:
11162 + }
11163 +
11164 + /**
11165 +- * nand_read_page_hwecc_oob_first - hw ecc, read oob first
11166 ++ * nand_davinci_read_page_hwecc_oob_first - Hardware ECC page read with ECC
11167 ++ * data read from OOB area
11168 + * @chip: nand chip info structure
11169 + * @buf: buffer to store read data
11170 + * @oob_required: caller requires OOB data read to chip->oob_poi
11171 + * @page: page number to read
11172 + *
11173 +- * Hardware ECC for large page chips, require OOB to be read first. For this
11174 +- * ECC mode, the write_page method is re-used from ECC_HW. These methods
11175 +- * read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with
11176 +- * multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from
11177 +- * the data area, by overwriting the NAND manufacturer bad block markings.
11178 ++ * Hardware ECC for large page chips, which requires the ECC data to be
11179 ++ * extracted from the OOB before the actual data is read.
11180 + */
11181 + static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
11182 + uint8_t *buf,
11183 +@@ -394,7 +392,6 @@ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
11184 + int eccsteps = chip->ecc.steps;
11185 + uint8_t *p = buf;
11186 + uint8_t *ecc_code = chip->ecc.code_buf;
11187 +- uint8_t *ecc_calc = chip->ecc.calc_buf;
11188 + unsigned int max_bitflips = 0;
11189 +
11190 + /* Read the OOB area first */
11191 +@@ -402,7 +399,8 @@ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
11192 + if (ret)
11193 + return ret;
11194 +
11195 +- ret = nand_read_page_op(chip, page, 0, NULL, 0);
11196 ++ /* Move read cursor to start of page */
11197 ++ ret = nand_change_read_column_op(chip, 0, NULL, 0, false);
11198 + if (ret)
11199 + return ret;
11200 +
11201 +@@ -420,8 +418,6 @@ static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip,
11202 + if (ret)
11203 + return ret;
11204 +
11205 +- chip->ecc.calculate(chip, p, &ecc_calc[i]);
11206 +-
11207 + stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL);
11208 + if (stat == -EBADMSG &&
11209 + (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) {
11210 +diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
11211 +index a6658567d55c0..226d527b6c6b7 100644
11212 +--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
11213 ++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
11214 +@@ -711,14 +711,32 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
11215 + (use_half_period ? BM_GPMI_CTRL1_HALF_PERIOD : 0);
11216 + }
11217 +
11218 +-static void gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
11219 ++static int gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
11220 + {
11221 + struct gpmi_nfc_hardware_timing *hw = &this->hw;
11222 + struct resources *r = &this->resources;
11223 + void __iomem *gpmi_regs = r->gpmi_regs;
11224 + unsigned int dll_wait_time_us;
11225 ++ int ret;
11226 ++
11227 ++ /* Clock dividers do NOT guarantee a clean clock signal on its output
11228 ++ * during the change of the divide factor on i.MX6Q/UL/SX. On i.MX7/8,
11229 ++ * all clock dividers provide these guarantee.
11230 ++ */
11231 ++ if (GPMI_IS_MX6Q(this) || GPMI_IS_MX6SX(this))
11232 ++ clk_disable_unprepare(r->clock[0]);
11233 +
11234 +- clk_set_rate(r->clock[0], hw->clk_rate);
11235 ++ ret = clk_set_rate(r->clock[0], hw->clk_rate);
11236 ++ if (ret) {
11237 ++ dev_err(this->dev, "cannot set clock rate to %lu Hz: %d\n", hw->clk_rate, ret);
11238 ++ return ret;
11239 ++ }
11240 ++
11241 ++ if (GPMI_IS_MX6Q(this) || GPMI_IS_MX6SX(this)) {
11242 ++ ret = clk_prepare_enable(r->clock[0]);
11243 ++ if (ret)
11244 ++ return ret;
11245 ++ }
11246 +
11247 + writel(hw->timing0, gpmi_regs + HW_GPMI_TIMING0);
11248 + writel(hw->timing1, gpmi_regs + HW_GPMI_TIMING1);
11249 +@@ -737,6 +755,8 @@ static void gpmi_nfc_apply_timings(struct gpmi_nand_data *this)
11250 +
11251 + /* Wait for the DLL to settle. */
11252 + udelay(dll_wait_time_us);
11253 ++
11254 ++ return 0;
11255 + }
11256 +
11257 + static int gpmi_setup_interface(struct nand_chip *chip, int chipnr,
11258 +@@ -1032,15 +1052,6 @@ static int gpmi_get_clks(struct gpmi_nand_data *this)
11259 + r->clock[i] = clk;
11260 + }
11261 +
11262 +- if (GPMI_IS_MX6(this))
11263 +- /*
11264 +- * Set the default value for the gpmi clock.
11265 +- *
11266 +- * If you want to use the ONFI nand which is in the
11267 +- * Synchronous Mode, you should change the clock as you need.
11268 +- */
11269 +- clk_set_rate(r->clock[0], 22000000);
11270 +-
11271 + return 0;
11272 +
11273 + err_clock:
11274 +@@ -2278,7 +2289,9 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
11275 + */
11276 + if (this->hw.must_apply_timings) {
11277 + this->hw.must_apply_timings = false;
11278 +- gpmi_nfc_apply_timings(this);
11279 ++ ret = gpmi_nfc_apply_timings(this);
11280 ++ if (ret)
11281 ++ return ret;
11282 + }
11283 +
11284 + dev_dbg(this->dev, "%s: %d instructions\n", __func__, op->ninstrs);
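The gpmi-nand hunks above convert `gpmi_nfc_apply_timings()` from `void` to `int` so that a failing `clk_set_rate()` or `clk_prepare_enable()` propagates to `gpmi_nfc_exec_op()` instead of being silently ignored. A minimal user-space sketch of that check-every-step pattern (the `stub_clk_*` helpers are hypothetical stand-ins, not the kernel clk API):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-ins for clk_set_rate()/clk_prepare_enable(). */
static int stub_clk_fail;   /* set nonzero to force a failure */

static int stub_clk_set_rate(unsigned long hz)
{
    (void)hz;
    return stub_clk_fail ? -22 : 0;   /* -EINVAL-style negative errno */
}

static int stub_clk_prepare_enable(void)
{
    return 0;
}

/* Mirrors the patched flow: check each step and return the first error. */
static int apply_timings(unsigned long hz)
{
    int ret = stub_clk_set_rate(hz);
    if (ret) {
        fprintf(stderr, "cannot set clock rate to %lu Hz: %d\n", hz, ret);
        return ret;
    }
    ret = stub_clk_prepare_enable();
    if (ret)
        return ret;
    /* ...program timing registers, wait for the DLL to settle... */
    return 0;
}
```

The caller-side half of the fix is the same shape: `ret = apply_timings(...); if (ret) return ret;` replacing a bare void call.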
11285 +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
11286 +index 645c7cabcbe4d..99770b1671923 100644
11287 +--- a/drivers/net/bonding/bond_main.c
11288 ++++ b/drivers/net/bonding/bond_main.c
11289 +@@ -1061,9 +1061,6 @@ static bool bond_should_notify_peers(struct bonding *bond)
11290 + slave = rcu_dereference(bond->curr_active_slave);
11291 + rcu_read_unlock();
11292 +
11293 +- netdev_dbg(bond->dev, "bond_should_notify_peers: slave %s\n",
11294 +- slave ? slave->dev->name : "NULL");
11295 +-
11296 + if (!slave || !bond->send_peer_notif ||
11297 + bond->send_peer_notif %
11298 + max(1, bond->params.peer_notif_delay) != 0 ||
11299 +@@ -1071,6 +1068,9 @@ static bool bond_should_notify_peers(struct bonding *bond)
11300 + test_bit(__LINK_STATE_LINKWATCH_PENDING, &slave->dev->state))
11301 + return false;
11302 +
11303 ++ netdev_dbg(bond->dev, "bond_should_notify_peers: slave %s\n",
11304 ++ slave ? slave->dev->name : "NULL");
11305 ++
11306 + return true;
11307 + }
11308 +
11309 +@@ -4562,25 +4562,39 @@ static netdev_tx_t bond_xmit_broadcast(struct sk_buff *skb,
11310 + struct bonding *bond = netdev_priv(bond_dev);
11311 + struct slave *slave = NULL;
11312 + struct list_head *iter;
11313 ++ bool xmit_suc = false;
11314 ++ bool skb_used = false;
11315 +
11316 + bond_for_each_slave_rcu(bond, slave, iter) {
11317 +- if (bond_is_last_slave(bond, slave))
11318 +- break;
11319 +- if (bond_slave_is_up(slave) && slave->link == BOND_LINK_UP) {
11320 +- struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
11321 ++ struct sk_buff *skb2;
11322 ++
11323 ++ if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP))
11324 ++ continue;
11325 +
11326 ++ if (bond_is_last_slave(bond, slave)) {
11327 ++ skb2 = skb;
11328 ++ skb_used = true;
11329 ++ } else {
11330 ++ skb2 = skb_clone(skb, GFP_ATOMIC);
11331 + if (!skb2) {
11332 + net_err_ratelimited("%s: Error: %s: skb_clone() failed\n",
11333 + bond_dev->name, __func__);
11334 + continue;
11335 + }
11336 +- bond_dev_queue_xmit(bond, skb2, slave->dev);
11337 + }
11338 ++
11339 ++ if (bond_dev_queue_xmit(bond, skb2, slave->dev) == NETDEV_TX_OK)
11340 ++ xmit_suc = true;
11341 + }
11342 +- if (slave && bond_slave_is_up(slave) && slave->link == BOND_LINK_UP)
11343 +- return bond_dev_queue_xmit(bond, skb, slave->dev);
11344 +
11345 +- return bond_tx_drop(bond_dev, skb);
11346 ++ if (!skb_used)
11347 ++ dev_kfree_skb_any(skb);
11348 ++
11349 ++ if (xmit_suc)
11350 ++ return NETDEV_TX_OK;
11351 ++
11352 ++ atomic_long_inc(&bond_dev->tx_dropped);
11353 ++ return NET_XMIT_DROP;
11354 + }
11355 +
11356 + /*------------------------- Device initialization ---------------------------*/
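The `bond_xmit_broadcast()` rewrite above changes two things: the original skb is handed to the *last* up slave (clones go to the earlier ones), and the function reports `NETDEV_TX_OK` if at least one slave transmitted. A toy model of that control flow, with counters standing in for `skb_clone()` and the return codes (names here are illustrative, not the bonding driver's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct slave { bool up; bool tx_ok; };

/* Returns 0 (NETDEV_TX_OK analogue) if any up slave transmitted,
 * -1 (NET_XMIT_DROP analogue) otherwise. *clones counts skb_clone()s. */
static int broadcast(const struct slave *slaves, size_t n, int *clones)
{
    bool xmit_suc = false, skb_used = false;
    size_t last_up = n;

    /* Find the last up slave; it gets the original skb, not a clone. */
    for (size_t i = n; i-- > 0; )
        if (slaves[i].up) { last_up = i; break; }

    for (size_t i = 0; i < n; i++) {
        if (!slaves[i].up)
            continue;
        if (i == last_up)
            skb_used = true;   /* hand off the original buffer */
        else
            (*clones)++;       /* skb_clone(skb, GFP_ATOMIC) */
        if (slaves[i].tx_ok)
            xmit_suc = true;
    }

    if (!skb_used) {
        /* dev_kfree_skb_any(skb): no slave consumed the original */
    }
    return xmit_suc ? 0 : -1;
}
```

This also fixes the old behavior where the loop stopped before the last slave and the final transmit was attempted even if that slave was down.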
11357 +diff --git a/drivers/net/can/softing/softing_cs.c b/drivers/net/can/softing/softing_cs.c
11358 +index 2e93ee7923739..e5c939b63fa65 100644
11359 +--- a/drivers/net/can/softing/softing_cs.c
11360 ++++ b/drivers/net/can/softing/softing_cs.c
11361 +@@ -293,7 +293,7 @@ static int softingcs_probe(struct pcmcia_device *pcmcia)
11362 + return 0;
11363 +
11364 + platform_failed:
11365 +- kfree(dev);
11366 ++ platform_device_put(pdev);
11367 + mem_failed:
11368 + pcmcia_bad:
11369 + pcmcia_failed:
11370 +diff --git a/drivers/net/can/softing/softing_fw.c b/drivers/net/can/softing/softing_fw.c
11371 +index ccd649a8e37bd..bad69a4abec10 100644
11372 +--- a/drivers/net/can/softing/softing_fw.c
11373 ++++ b/drivers/net/can/softing/softing_fw.c
11374 +@@ -565,18 +565,19 @@ int softing_startstop(struct net_device *dev, int up)
11375 + if (ret < 0)
11376 + goto failed;
11377 + }
11378 +- /* enable_error_frame */
11379 +- /*
11380 ++
11381 ++ /* enable_error_frame
11382 ++ *
11383 + * Error reporting is switched off at the moment since
11384 + * the receiving of them is not yet 100% verified
11385 + * This should be enabled sooner or later
11386 +- *
11387 +- if (error_reporting) {
11388 ++ */
11389 ++ if (0 && error_reporting) {
11390 + ret = softing_fct_cmd(card, 51, "enable_error_frame");
11391 + if (ret < 0)
11392 + goto failed;
11393 + }
11394 +- */
11395 ++
11396 + /* initialize interface */
11397 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 2]);
11398 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 4]);
11399 +diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
11400 +index 4e13f6dfb91a2..abe00a085f6fc 100644
11401 +--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
11402 ++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
11403 +@@ -1288,7 +1288,7 @@ mcp251xfd_tef_obj_read(const struct mcp251xfd_priv *priv,
11404 + len > tx_ring->obj_num ||
11405 + offset + len > tx_ring->obj_num)) {
11406 + netdev_err(priv->ndev,
11407 +- "Trying to read to many TEF objects (max=%d, offset=%d, len=%d).\n",
11408 ++ "Trying to read too many TEF objects (max=%d, offset=%d, len=%d).\n",
11409 + tx_ring->obj_num, offset, len);
11410 + return -ERANGE;
11411 + }
11412 +@@ -2497,7 +2497,7 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
11413 + if (!mcp251xfd_is_251X(priv) &&
11414 + priv->devtype_data.model != devtype_data->model) {
11415 + netdev_info(ndev,
11416 +- "Detected %s, but firmware specifies a %s. Fixing up.",
11417 ++ "Detected %s, but firmware specifies a %s. Fixing up.\n",
11418 + __mcp251xfd_get_model_str(devtype_data->model),
11419 + mcp251xfd_get_model_str(priv));
11420 + }
11421 +@@ -2534,7 +2534,7 @@ static int mcp251xfd_register_check_rx_int(struct mcp251xfd_priv *priv)
11422 + return 0;
11423 +
11424 + netdev_info(priv->ndev,
11425 +- "RX_INT active after softreset, disabling RX_INT support.");
11426 ++ "RX_INT active after softreset, disabling RX_INT support.\n");
11427 + devm_gpiod_put(&priv->spi->dev, priv->rx_int);
11428 + priv->rx_int = NULL;
11429 +
11430 +diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
11431 +index 48d746e18f302..375998263af7a 100644
11432 +--- a/drivers/net/can/xilinx_can.c
11433 ++++ b/drivers/net/can/xilinx_can.c
11434 +@@ -1762,7 +1762,12 @@ static int xcan_probe(struct platform_device *pdev)
11435 + spin_lock_init(&priv->tx_lock);
11436 +
11437 + /* Get IRQ for the device */
11438 +- ndev->irq = platform_get_irq(pdev, 0);
11439 ++ ret = platform_get_irq(pdev, 0);
11440 ++ if (ret < 0)
11441 ++ goto err_free;
11442 ++
11443 ++ ndev->irq = ret;
11444 ++
11445 + ndev->flags |= IFF_ECHO; /* We support local echo */
11446 +
11447 + platform_set_drvdata(pdev, ndev);
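The xilinx_can hunk above follows the standard `platform_get_irq()` idiom: the function returns the IRQ number on success or a negative errno on failure (never 0), so the return value must be checked before it is stored as `ndev->irq`. A small sketch of that pattern (the `fake_` helper is a stand-in, not the kernel function):

```c
#include <assert.h>

/* Hypothetical stand-in for platform_get_irq(): IRQ number on success,
 * negative errno on failure; it never returns 0. */
static int fake_platform_get_irq(int present)
{
    return present ? 42 : -6;   /* -ENXIO-style error */
}

/* The xcan_probe() fix: capture the return value, bail on error, and
 * only then store it as the device IRQ. */
static int probe_irq(int present, int *irq_out)
{
    int ret = fake_platform_get_irq(present);
    if (ret < 0)
        return ret;   /* propagate the errno instead of storing it */
    *irq_out = ret;
    return 0;
}
```

The sni_82596 hunk further down makes the complementary mistake-fix: it tests `irq < 0` rather than `!irq`, since a negative errno is truthy.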
11448 +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
11449 +index db74241935ab4..e19cf020e5ae1 100644
11450 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
11451 ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
11452 +@@ -3962,10 +3962,12 @@ static int bcmgenet_probe(struct platform_device *pdev)
11453 +
11454 + /* Request the WOL interrupt and advertise suspend if available */
11455 + priv->wol_irq_disabled = true;
11456 +- err = devm_request_irq(&pdev->dev, priv->wol_irq, bcmgenet_wol_isr, 0,
11457 +- dev->name, priv);
11458 +- if (!err)
11459 +- device_set_wakeup_capable(&pdev->dev, 1);
11460 ++ if (priv->wol_irq > 0) {
11461 ++ err = devm_request_irq(&pdev->dev, priv->wol_irq,
11462 ++ bcmgenet_wol_isr, 0, dev->name, priv);
11463 ++ if (!err)
11464 ++ device_set_wakeup_capable(&pdev->dev, 1);
11465 ++ }
11466 +
11467 + /* Set the needed headroom to account for any possible
11468 + * features enabling/disabling at runtime
11469 +diff --git a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
11470 +index d04a6c1634452..da8d10475a08e 100644
11471 +--- a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
11472 ++++ b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
11473 +@@ -32,6 +32,7 @@
11474 +
11475 + #include <linux/tcp.h>
11476 + #include <linux/ipv6.h>
11477 ++#include <net/inet_ecn.h>
11478 + #include <net/route.h>
11479 + #include <net/ip6_route.h>
11480 +
11481 +@@ -99,7 +100,7 @@ cxgb_find_route(struct cxgb4_lld_info *lldi,
11482 +
11483 + rt = ip_route_output_ports(&init_net, &fl4, NULL, peer_ip, local_ip,
11484 + peer_port, local_port, IPPROTO_TCP,
11485 +- tos, 0);
11486 ++ tos & ~INET_ECN_MASK, 0);
11487 + if (IS_ERR(rt))
11488 + return NULL;
11489 + n = dst_neigh_lookup(&rt->dst, &peer_ip);
11490 +diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
11491 +index 8df6f081f2447..d11fcfd927c0b 100644
11492 +--- a/drivers/net/ethernet/cortina/gemini.c
11493 ++++ b/drivers/net/ethernet/cortina/gemini.c
11494 +@@ -305,21 +305,21 @@ static void gmac_speed_set(struct net_device *netdev)
11495 + switch (phydev->speed) {
11496 + case 1000:
11497 + status.bits.speed = GMAC_SPEED_1000;
11498 +- if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
11499 ++ if (phy_interface_mode_is_rgmii(phydev->interface))
11500 + status.bits.mii_rmii = GMAC_PHY_RGMII_1000;
11501 + netdev_dbg(netdev, "connect %s to RGMII @ 1Gbit\n",
11502 + phydev_name(phydev));
11503 + break;
11504 + case 100:
11505 + status.bits.speed = GMAC_SPEED_100;
11506 +- if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
11507 ++ if (phy_interface_mode_is_rgmii(phydev->interface))
11508 + status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
11509 + netdev_dbg(netdev, "connect %s to RGMII @ 100 Mbit\n",
11510 + phydev_name(phydev));
11511 + break;
11512 + case 10:
11513 + status.bits.speed = GMAC_SPEED_10;
11514 +- if (phydev->interface == PHY_INTERFACE_MODE_RGMII)
11515 ++ if (phy_interface_mode_is_rgmii(phydev->interface))
11516 + status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
11517 + netdev_dbg(netdev, "connect %s to RGMII @ 10 Mbit\n",
11518 + phydev_name(phydev));
11519 +@@ -389,6 +389,9 @@ static int gmac_setup_phy(struct net_device *netdev)
11520 + status.bits.mii_rmii = GMAC_PHY_GMII;
11521 + break;
11522 + case PHY_INTERFACE_MODE_RGMII:
11523 ++ case PHY_INTERFACE_MODE_RGMII_ID:
11524 ++ case PHY_INTERFACE_MODE_RGMII_TXID:
11525 ++ case PHY_INTERFACE_MODE_RGMII_RXID:
11526 + netdev_dbg(netdev,
11527 + "RGMII: set GMAC0 and GMAC1 to MII/RGMII mode\n");
11528 + status.bits.mii_rmii = GMAC_PHY_RGMII_100_10;
11529 +diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
11530 +index 901749a7a318b..6eeccc11b76ef 100644
11531 +--- a/drivers/net/ethernet/freescale/fman/mac.c
11532 ++++ b/drivers/net/ethernet/freescale/fman/mac.c
11533 +@@ -94,14 +94,17 @@ static void mac_exception(void *handle, enum fman_mac_exceptions ex)
11534 + __func__, ex);
11535 + }
11536 +
11537 +-static void set_fman_mac_params(struct mac_device *mac_dev,
11538 +- struct fman_mac_params *params)
11539 ++static int set_fman_mac_params(struct mac_device *mac_dev,
11540 ++ struct fman_mac_params *params)
11541 + {
11542 + struct mac_priv_s *priv = mac_dev->priv;
11543 +
11544 + params->base_addr = (typeof(params->base_addr))
11545 + devm_ioremap(priv->dev, mac_dev->res->start,
11546 + resource_size(mac_dev->res));
11547 ++ if (!params->base_addr)
11548 ++ return -ENOMEM;
11549 ++
11550 + memcpy(&params->addr, mac_dev->addr, sizeof(mac_dev->addr));
11551 + params->max_speed = priv->max_speed;
11552 + params->phy_if = mac_dev->phy_if;
11553 +@@ -112,6 +115,8 @@ static void set_fman_mac_params(struct mac_device *mac_dev,
11554 + params->event_cb = mac_exception;
11555 + params->dev_id = mac_dev;
11556 + params->internal_phy_node = priv->internal_phy_node;
11557 ++
11558 ++ return 0;
11559 + }
11560 +
11561 + static int tgec_initialization(struct mac_device *mac_dev)
11562 +@@ -123,7 +128,9 @@ static int tgec_initialization(struct mac_device *mac_dev)
11563 +
11564 + priv = mac_dev->priv;
11565 +
11566 +- set_fman_mac_params(mac_dev, &params);
11567 ++ err = set_fman_mac_params(mac_dev, &params);
11568 ++ if (err)
11569 ++ goto _return;
11570 +
11571 + mac_dev->fman_mac = tgec_config(&params);
11572 + if (!mac_dev->fman_mac) {
11573 +@@ -169,7 +176,9 @@ static int dtsec_initialization(struct mac_device *mac_dev)
11574 +
11575 + priv = mac_dev->priv;
11576 +
11577 +- set_fman_mac_params(mac_dev, &params);
11578 ++ err = set_fman_mac_params(mac_dev, &params);
11579 ++ if (err)
11580 ++ goto _return;
11581 +
11582 + mac_dev->fman_mac = dtsec_config(&params);
11583 + if (!mac_dev->fman_mac) {
11584 +@@ -218,7 +227,9 @@ static int memac_initialization(struct mac_device *mac_dev)
11585 +
11586 + priv = mac_dev->priv;
11587 +
11588 +- set_fman_mac_params(mac_dev, &params);
11589 ++ err = set_fman_mac_params(mac_dev, &params);
11590 ++ if (err)
11591 ++ goto _return;
11592 +
11593 + if (priv->max_speed == SPEED_10000)
11594 + params.phy_if = PHY_INTERFACE_MODE_XGMII;
11595 +diff --git a/drivers/net/ethernet/freescale/xgmac_mdio.c b/drivers/net/ethernet/freescale/xgmac_mdio.c
11596 +index bfa2826c55454..b7984a772e12d 100644
11597 +--- a/drivers/net/ethernet/freescale/xgmac_mdio.c
11598 ++++ b/drivers/net/ethernet/freescale/xgmac_mdio.c
11599 +@@ -49,6 +49,7 @@ struct tgec_mdio_controller {
11600 + struct mdio_fsl_priv {
11601 + struct tgec_mdio_controller __iomem *mdio_base;
11602 + bool is_little_endian;
11603 ++ bool has_a009885;
11604 + bool has_a011043;
11605 + };
11606 +
11607 +@@ -184,10 +185,10 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
11608 + {
11609 + struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv;
11610 + struct tgec_mdio_controller __iomem *regs = priv->mdio_base;
11611 ++ unsigned long flags;
11612 + uint16_t dev_addr;
11613 + uint32_t mdio_stat;
11614 + uint32_t mdio_ctl;
11615 +- uint16_t value;
11616 + int ret;
11617 + bool endian = priv->is_little_endian;
11618 +
11619 +@@ -219,12 +220,18 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
11620 + return ret;
11621 + }
11622 +
11623 ++ if (priv->has_a009885)
11624 ++ /* Once the operation completes, i.e. MDIO_STAT_BSY clears, we
11625 ++ * must read back the data register within 16 MDC cycles.
11626 ++ */
11627 ++ local_irq_save(flags);
11628 ++
11629 + /* Initiate the read */
11630 + xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian);
11631 +
11632 + ret = xgmac_wait_until_done(&bus->dev, regs, endian);
11633 + if (ret)
11634 +- return ret;
11635 ++ goto irq_restore;
11636 +
11637 + /* Return all Fs if nothing was there */
11638 + if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) &&
11639 +@@ -232,13 +239,17 @@ static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
11640 + dev_dbg(&bus->dev,
11641 + "Error while reading PHY%d reg at %d.%hhu\n",
11642 + phy_id, dev_addr, regnum);
11643 +- return 0xffff;
11644 ++ ret = 0xffff;
11645 ++ } else {
11646 ++ ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
11647 ++ dev_dbg(&bus->dev, "read %04x\n", ret);
11648 + }
11649 +
11650 +- value = xgmac_read32(&regs->mdio_data, endian) & 0xffff;
11651 +- dev_dbg(&bus->dev, "read %04x\n", value);
11652 ++irq_restore:
11653 ++ if (priv->has_a009885)
11654 ++ local_irq_restore(flags);
11655 +
11656 +- return value;
11657 ++ return ret;
11658 + }
11659 +
11660 + static int xgmac_mdio_probe(struct platform_device *pdev)
11661 +@@ -282,6 +293,8 @@ static int xgmac_mdio_probe(struct platform_device *pdev)
11662 + priv->is_little_endian = device_property_read_bool(&pdev->dev,
11663 + "little-endian");
11664 +
11665 ++ priv->has_a009885 = device_property_read_bool(&pdev->dev,
11666 ++ "fsl,erratum-a009885");
11667 + priv->has_a011043 = device_property_read_bool(&pdev->dev,
11668 + "fsl,erratum-a011043");
11669 +
11670 +@@ -307,9 +320,10 @@ err_ioremap:
11671 + static int xgmac_mdio_remove(struct platform_device *pdev)
11672 + {
11673 + struct mii_bus *bus = platform_get_drvdata(pdev);
11674 ++ struct mdio_fsl_priv *priv = bus->priv;
11675 +
11676 + mdiobus_unregister(bus);
11677 +- iounmap(bus->priv);
11678 ++ iounmap(priv->mdio_base);
11679 + mdiobus_free(bus);
11680 +
11681 + return 0;
11682 +diff --git a/drivers/net/ethernet/i825xx/sni_82596.c b/drivers/net/ethernet/i825xx/sni_82596.c
11683 +index 27937c5d79567..daec9ce04531b 100644
11684 +--- a/drivers/net/ethernet/i825xx/sni_82596.c
11685 ++++ b/drivers/net/ethernet/i825xx/sni_82596.c
11686 +@@ -117,9 +117,10 @@ static int sni_82596_probe(struct platform_device *dev)
11687 + netdevice->dev_addr[5] = readb(eth_addr + 0x06);
11688 + iounmap(eth_addr);
11689 +
11690 +- if (!netdevice->irq) {
11691 ++ if (netdevice->irq < 0) {
11692 + printk(KERN_ERR "%s: IRQ not found for i82596 at 0x%lx\n",
11693 + __FILE__, netdevice->base_addr);
11694 ++ retval = netdevice->irq;
11695 + goto probe_failed;
11696 + }
11697 +
11698 +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
11699 +index a2d3f04a9ff22..7d7dc0754a3a1 100644
11700 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
11701 ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
11702 +@@ -215,7 +215,7 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
11703 + phylink_config);
11704 + struct mtk_eth *eth = mac->hw;
11705 + u32 mcr_cur, mcr_new, sid, i;
11706 +- int val, ge_mode, err;
11707 ++ int val, ge_mode, err = 0;
11708 +
11709 + /* MT76x8 has no hardware settings between for the MAC */
11710 + if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) &&
11711 +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
11712 +index 2e55e00888715..6af0dd8471691 100644
11713 +--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
11714 ++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
11715 +@@ -147,8 +147,12 @@ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
11716 + if (!refcount_dec_and_test(&ent->refcnt))
11717 + return;
11718 +
11719 +- if (ent->idx >= 0)
11720 +- cmd_free_index(ent->cmd, ent->idx);
11721 ++ if (ent->idx >= 0) {
11722 ++ struct mlx5_cmd *cmd = ent->cmd;
11723 ++
11724 ++ cmd_free_index(cmd, ent->idx);
11725 ++ up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
11726 ++ }
11727 +
11728 + cmd_free_ent(ent);
11729 + }
11730 +@@ -883,25 +887,6 @@ static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
11731 + return cmd->allowed_opcode == opcode;
11732 + }
11733 +
11734 +-static int cmd_alloc_index_retry(struct mlx5_cmd *cmd)
11735 +-{
11736 +- unsigned long alloc_end = jiffies + msecs_to_jiffies(1000);
11737 +- int idx;
11738 +-
11739 +-retry:
11740 +- idx = cmd_alloc_index(cmd);
11741 +- if (idx < 0 && time_before(jiffies, alloc_end)) {
11742 +- /* Index allocation can fail on heavy load of commands. This is a temporary
11743 +- * situation as the current command already holds the semaphore, meaning that
11744 +- * another command completion is being handled and it is expected to release
11745 +- * the entry index soon.
11746 +- */
11747 +- cpu_relax();
11748 +- goto retry;
11749 +- }
11750 +- return idx;
11751 +-}
11752 +-
11753 + bool mlx5_cmd_is_down(struct mlx5_core_dev *dev)
11754 + {
11755 + return pci_channel_offline(dev->pdev) ||
11756 +@@ -926,7 +911,7 @@ static void cmd_work_handler(struct work_struct *work)
11757 + sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
11758 + down(sem);
11759 + if (!ent->page_queue) {
11760 +- alloc_ret = cmd_alloc_index_retry(cmd);
11761 ++ alloc_ret = cmd_alloc_index(cmd);
11762 + if (alloc_ret < 0) {
11763 + mlx5_core_err_rl(dev, "failed to allocate command entry\n");
11764 + if (ent->callback) {
11765 +@@ -1582,8 +1567,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
11766 + vector = vec & 0xffffffff;
11767 + for (i = 0; i < (1 << cmd->log_sz); i++) {
11768 + if (test_bit(i, &vector)) {
11769 +- struct semaphore *sem;
11770 +-
11771 + ent = cmd->ent_arr[i];
11772 +
11773 + /* if we already completed the command, ignore it */
11774 +@@ -1606,10 +1589,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
11775 + dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
11776 + cmd_ent_put(ent);
11777 +
11778 +- if (ent->page_queue)
11779 +- sem = &cmd->pages_sem;
11780 +- else
11781 +- sem = &cmd->sem;
11782 + ent->ts2 = ktime_get_ns();
11783 + memcpy(ent->out->first.data, ent->lay->out, sizeof(ent->lay->out));
11784 + dump_command(dev, ent, 0);
11785 +@@ -1663,7 +1642,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
11786 + */
11787 + complete(&ent->done);
11788 + }
11789 +- up(sem);
11790 + }
11791 + }
11792 + }
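The mlx5 cmd.c hunks above move the semaphore `up()` out of the completion handler and into `cmd_ent_put()`, so the flow-control token is released exactly when the last reference to the entry drops, together with the command index. A toy model of that refcount-gated release (plain counters stand in for the kernel's `refcount_t`, `cmd_free_index()` and `up()`):

```c
#include <assert.h>

struct ent {
    int refcnt;
    int idx;    /* >= 0 while a command slot is held */
};

static int free_index_calls, sem_ups;

/* Mirrors the patched cmd_ent_put(): slot and semaphore are released
 * only on the final put, never earlier. */
static void ent_put(struct ent *e)
{
    if (--e->refcnt)        /* refcount_dec_and_test() analogue */
        return;
    if (e->idx >= 0) {
        free_index_calls++; /* cmd_free_index(cmd, ent->idx) */
        sem_ups++;          /* up(&cmd->sem), now paired with the free */
        e->idx = -1;
    }
}
```

Pairing the `up()` with the index free is what lets the patch drop `cmd_alloc_index_retry()`: once the semaphore is only released when an index is actually free, plain `cmd_alloc_index()` cannot race-fail under load.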
11793 +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
11794 +index 71e8d66fa1509..6692bc8333f73 100644
11795 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
11796 ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
11797 +@@ -11,13 +11,13 @@ static int mlx5e_xsk_map_pool(struct mlx5e_priv *priv,
11798 + {
11799 + struct device *dev = mlx5_core_dma_dev(priv->mdev);
11800 +
11801 +- return xsk_pool_dma_map(pool, dev, 0);
11802 ++ return xsk_pool_dma_map(pool, dev, DMA_ATTR_SKIP_CPU_SYNC);
11803 + }
11804 +
11805 + static void mlx5e_xsk_unmap_pool(struct mlx5e_priv *priv,
11806 + struct xsk_buff_pool *pool)
11807 + {
11808 +- return xsk_pool_dma_unmap(pool, 0);
11809 ++ return xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC);
11810 + }
11811 +
11812 + static int mlx5e_xsk_get_pools(struct mlx5e_xsk *xsk)
11813 +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
11814 +index 2f6c3a5813ed1..16e98ac47624c 100644
11815 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
11816 ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
11817 +@@ -5024,9 +5024,13 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
11818 + }
11819 +
11820 + if (mlx5_vxlan_allowed(mdev->vxlan) || mlx5_geneve_tx_allowed(mdev)) {
11821 +- netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
11822 +- netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
11823 +- netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL;
11824 ++ netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
11825 ++ NETIF_F_GSO_UDP_TUNNEL_CSUM;
11826 ++ netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL |
11827 ++ NETIF_F_GSO_UDP_TUNNEL_CSUM;
11828 ++ netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
11829 ++ netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL |
11830 ++ NETIF_F_GSO_UDP_TUNNEL_CSUM;
11831 + }
11832 +
11833 + if (mlx5e_tunnel_proto_supported(mdev, IPPROTO_GRE)) {
11834 +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
11835 +index 117a593414537..d384403d73f69 100644
11836 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
11837 ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
11838 +@@ -276,8 +276,8 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
11839 + if (unlikely(!dma_info->page))
11840 + return -ENOMEM;
11841 +
11842 +- dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
11843 +- PAGE_SIZE, rq->buff.map_dir);
11844 ++ dma_info->addr = dma_map_page_attrs(rq->pdev, dma_info->page, 0, PAGE_SIZE,
11845 ++ rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
11846 + if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
11847 + page_pool_recycle_direct(rq->page_pool, dma_info->page);
11848 + dma_info->page = NULL;
11849 +@@ -298,7 +298,8 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
11850 +
11851 + void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
11852 + {
11853 +- dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
11854 ++ dma_unmap_page_attrs(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir,
11855 ++ DMA_ATTR_SKIP_CPU_SYNC);
11856 + }
11857 +
11858 + void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
11859 +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
11860 +index 15c3a9058e728..0f0d250bbc150 100644
11861 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
11862 ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
11863 +@@ -265,10 +265,8 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
11864 + fen_info = container_of(info, struct fib_entry_notifier_info,
11865 + info);
11866 + fi = fen_info->fi;
11867 +- if (fi->nh) {
11868 +- NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
11869 +- return notifier_from_errno(-EINVAL);
11870 +- }
11871 ++ if (fi->nh)
11872 ++ return NOTIFY_DONE;
11873 + fib_dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev;
11874 + if (fib_dev != ldev->pf[MLX5_LAG_P1].netdev &&
11875 + fib_dev != ldev->pf[MLX5_LAG_P2].netdev) {
11876 +diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
11877 +index 5ffdfb532cb7f..91f68fb0b420a 100644
11878 +--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h
11879 ++++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
11880 +@@ -905,6 +905,18 @@ static inline int mlxsw_cmd_sw2hw_rdq(struct mlxsw_core *mlxsw_core,
11881 + */
11882 + MLXSW_ITEM32(cmd_mbox, sw2hw_dq, cq, 0x00, 24, 8);
11883 +
11884 ++enum mlxsw_cmd_mbox_sw2hw_dq_sdq_lp {
11885 ++ MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_WQE,
11886 ++ MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_IGNORE_WQE,
11887 ++};
11888 ++
11889 ++/* cmd_mbox_sw2hw_dq_sdq_lp
11890 ++ * SDQ local Processing
11891 ++ * 0: local processing by wqe.lp
11892 ++ * 1: local processing (ignoring wqe.lp)
11893 ++ */
11894 ++MLXSW_ITEM32(cmd_mbox, sw2hw_dq, sdq_lp, 0x00, 23, 1);
11895 ++
11896 + /* cmd_mbox_sw2hw_dq_sdq_tclass
11897 + * SDQ: CPU Egress TClass
11898 + * RDQ: Reserved
11899 +diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
11900 +index ffaeda75eec42..dbb16ce25bdf3 100644
11901 +--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
11902 ++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
11903 +@@ -285,6 +285,7 @@ static int mlxsw_pci_sdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
11904 + struct mlxsw_pci_queue *q)
11905 + {
11906 + int tclass;
11907 ++ int lp;
11908 + int i;
11909 + int err;
11910 +
11911 +@@ -292,9 +293,12 @@ static int mlxsw_pci_sdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
11912 + q->consumer_counter = 0;
11913 + tclass = q->num == MLXSW_PCI_SDQ_EMAD_INDEX ? MLXSW_PCI_SDQ_EMAD_TC :
11914 + MLXSW_PCI_SDQ_CTL_TC;
11915 ++ lp = q->num == MLXSW_PCI_SDQ_EMAD_INDEX ? MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_IGNORE_WQE :
11916 ++ MLXSW_CMD_MBOX_SW2HW_DQ_SDQ_LP_WQE;
11917 +
11918 + /* Set CQ of same number of this SDQ. */
11919 + mlxsw_cmd_mbox_sw2hw_dq_cq_set(mbox, q->num);
11920 ++ mlxsw_cmd_mbox_sw2hw_dq_sdq_lp_set(mbox, lp);
11921 + mlxsw_cmd_mbox_sw2hw_dq_sdq_tclass_set(mbox, tclass);
11922 + mlxsw_cmd_mbox_sw2hw_dq_log2_dq_sz_set(mbox, 3); /* 8 pages */
11923 + for (i = 0; i < MLXSW_PCI_AQ_PAGES; i++) {
11924 +@@ -1599,7 +1603,7 @@ static int mlxsw_pci_skb_transmit(void *bus_priv, struct sk_buff *skb,
11925 +
11926 + wqe = elem_info->elem;
11927 + mlxsw_pci_wqe_c_set(wqe, 1); /* always report completion */
11928 +- mlxsw_pci_wqe_lp_set(wqe, !!tx_info->is_emad);
11929 ++ mlxsw_pci_wqe_lp_set(wqe, 0);
11930 + mlxsw_pci_wqe_type_set(wqe, MLXSW_PCI_WQE_TYPE_ETHERNET);
11931 +
11932 + err = mlxsw_pci_wqe_frag_map(mlxsw_pci, wqe, 0, skb->data,
11933 +@@ -1900,6 +1904,7 @@ int mlxsw_pci_driver_register(struct pci_driver *pci_driver)
11934 + {
11935 + pci_driver->probe = mlxsw_pci_probe;
11936 + pci_driver->remove = mlxsw_pci_remove;
11937 ++ pci_driver->shutdown = mlxsw_pci_remove;
11938 + return pci_register_driver(pci_driver);
11939 + }
11940 + EXPORT_SYMBOL(mlxsw_pci_driver_register);
11941 +diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
11942 +index 3655503352928..217e8333de6c6 100644
11943 +--- a/drivers/net/ethernet/mscc/ocelot_flower.c
11944 ++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
11945 +@@ -462,13 +462,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
11946 + return -EOPNOTSUPP;
11947 + }
11948 +
11949 +- if (filter->block_id == VCAP_IS1 &&
11950 +- !is_zero_ether_addr(match.mask->dst)) {
11951 +- NL_SET_ERR_MSG_MOD(extack,
11952 +- "Key type S1_NORMAL cannot match on destination MAC");
11953 +- return -EOPNOTSUPP;
11954 +- }
11955 +-
11956 + /* The hw support mac matches only for MAC_ETYPE key,
11957 + * therefore if other matches(port, tcp flags, etc) are added
11958 + * then just bail out
11959 +@@ -483,6 +476,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
11960 + return -EOPNOTSUPP;
11961 +
11962 + flow_rule_match_eth_addrs(rule, &match);
11963 ++
11964 ++ if (filter->block_id == VCAP_IS1 &&
11965 ++ !is_zero_ether_addr(match.mask->dst)) {
11966 ++ NL_SET_ERR_MSG_MOD(extack,
11967 ++ "Key type S1_NORMAL cannot match on destination MAC");
11968 ++ return -EOPNOTSUPP;
11969 ++ }
11970 ++
11971 + filter->key_type = OCELOT_VCAP_KEY_ETYPE;
11972 + ether_addr_copy(filter->key.etype.dmac.value,
11973 + match.key->dst);
11974 +diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
11975 +index 7072b249c8bd6..8157666209798 100644
11976 +--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
11977 ++++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
11978 +@@ -2795,7 +2795,8 @@ static void ofdpa_fib4_abort(struct rocker *rocker)
11979 + if (!ofdpa_port)
11980 + continue;
11981 + nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
11982 +- ofdpa_flow_tbl_del(ofdpa_port, OFDPA_OP_FLAG_REMOVE,
11983 ++ ofdpa_flow_tbl_del(ofdpa_port,
11984 ++ OFDPA_OP_FLAG_REMOVE | OFDPA_OP_FLAG_NOWAIT,
11985 + flow_entry);
11986 + }
11987 + spin_unlock_irqrestore(&ofdpa->flow_tbl_lock, flags);
11988 +diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
11989 +index 69c79cc24e6e4..0baf85122f5ac 100644
11990 +--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
11991 ++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
11992 +@@ -41,8 +41,9 @@
11993 + #include "xilinx_axienet.h"
11994 +
11995 + /* Descriptors defines for Tx and Rx DMA */
11996 +-#define TX_BD_NUM_DEFAULT 64
11997 ++#define TX_BD_NUM_DEFAULT 128
11998 + #define RX_BD_NUM_DEFAULT 1024
11999 ++#define TX_BD_NUM_MIN (MAX_SKB_FRAGS + 1)
12000 + #define TX_BD_NUM_MAX 4096
12001 + #define RX_BD_NUM_MAX 4096
12002 +
12003 +@@ -496,7 +497,8 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
12004 +
12005 + static int __axienet_device_reset(struct axienet_local *lp)
12006 + {
12007 +- u32 timeout;
12008 ++ u32 value;
12009 ++ int ret;
12010 +
12011 + /* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
12012 + * process of Axi DMA takes a while to complete as all pending
12013 +@@ -506,15 +508,23 @@ static int __axienet_device_reset(struct axienet_local *lp)
12014 + * they both reset the entire DMA core, so only one needs to be used.
12015 + */
12016 + axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK);
12017 +- timeout = DELAY_OF_ONE_MILLISEC;
12018 +- while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) &
12019 +- XAXIDMA_CR_RESET_MASK) {
12020 +- udelay(1);
12021 +- if (--timeout == 0) {
12022 +- netdev_err(lp->ndev, "%s: DMA reset timeout!\n",
12023 +- __func__);
12024 +- return -ETIMEDOUT;
12025 +- }
12026 ++ ret = read_poll_timeout(axienet_dma_in32, value,
12027 ++ !(value & XAXIDMA_CR_RESET_MASK),
12028 ++ DELAY_OF_ONE_MILLISEC, 50000, false, lp,
12029 ++ XAXIDMA_TX_CR_OFFSET);
12030 ++ if (ret) {
12031 ++ dev_err(lp->dev, "%s: DMA reset timeout!\n", __func__);
12032 ++ return ret;
12033 ++ }
12034 ++
12035 ++ /* Wait for PhyRstCmplt bit to be set, indicating the PHY reset has finished */
12036 ++ ret = read_poll_timeout(axienet_ior, value,
12037 ++ value & XAE_INT_PHYRSTCMPLT_MASK,
12038 ++ DELAY_OF_ONE_MILLISEC, 50000, false, lp,
12039 ++ XAE_IS_OFFSET);
12040 ++ if (ret) {
12041 ++ dev_err(lp->dev, "%s: timeout waiting for PhyRstCmplt\n", __func__);
12042 ++ return ret;
12043 + }
12044 +
12045 + return 0;
12046 +@@ -623,6 +633,8 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
12047 + if (nr_bds == -1 && !(status & XAXIDMA_BD_STS_COMPLETE_MASK))
12048 + break;
12049 +
12050 ++ /* Ensure we see complete descriptor update */
12051 ++ dma_rmb();
12052 + phys = desc_get_phys_addr(lp, cur_p);
12053 + dma_unmap_single(ndev->dev.parent, phys,
12054 + (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
12055 +@@ -631,13 +643,15 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
12056 + if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK))
12057 + dev_consume_skb_irq(cur_p->skb);
12058 +
12059 +- cur_p->cntrl = 0;
12060 + cur_p->app0 = 0;
12061 + cur_p->app1 = 0;
12062 + cur_p->app2 = 0;
12063 + cur_p->app4 = 0;
12064 +- cur_p->status = 0;
12065 + cur_p->skb = NULL;
12066 ++ /* ensure our transmit path and device don't prematurely see status cleared */
12067 ++ wmb();
12068 ++ cur_p->cntrl = 0;
12069 ++ cur_p->status = 0;
12070 +
12071 + if (sizep)
12072 + *sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
12073 +@@ -646,6 +660,32 @@ static int axienet_free_tx_chain(struct net_device *ndev, u32 first_bd,
12074 + return i;
12075 + }
12076 +
12077 ++/**
12078 ++ * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
12079 ++ * @lp: Pointer to the axienet_local structure
12080 ++ * @num_frag: The number of BDs to check for
12081 ++ *
12082 ++ * Return: 0, on success
12083 ++ * NETDEV_TX_BUSY, if any of the descriptors are not free
12084 ++ *
12085 ++ * This function is invoked before BDs are allocated and transmission starts.
12086 ++ * This function returns 0 if a BD or group of BDs can be allocated for
12087 ++ * transmission. If the BD or any of the BDs are not free the function
12088 ++ * returns a busy status. This is invoked from axienet_start_xmit.
12089 ++ */
12090 ++static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
12091 ++ int num_frag)
12092 ++{
12093 ++ struct axidma_bd *cur_p;
12094 ++
12095 ++ /* Ensure we see all descriptor updates from device or TX IRQ path */
12096 ++ rmb();
12097 ++ cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
12098 ++ if (cur_p->cntrl)
12099 ++ return NETDEV_TX_BUSY;
12100 ++ return 0;
12101 ++}
12102 ++
12103 + /**
12104 + * axienet_start_xmit_done - Invoked once a transmit is completed by the
12105 + * Axi DMA Tx channel.
12106 +@@ -675,30 +715,8 @@ static void axienet_start_xmit_done(struct net_device *ndev)
12107 + /* Matches barrier in axienet_start_xmit */
12108 + smp_mb();
12109 +
12110 +- netif_wake_queue(ndev);
12111 +-}
12112 +-
12113 +-/**
12114 +- * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy
12115 +- * @lp: Pointer to the axienet_local structure
12116 +- * @num_frag: The number of BDs to check for
12117 +- *
12118 +- * Return: 0, on success
12119 +- * NETDEV_TX_BUSY, if any of the descriptors are not free
12120 +- *
12121 +- * This function is invoked before BDs are allocated and transmission starts.
12122 +- * This function returns 0 if a BD or group of BDs can be allocated for
12123 +- * transmission. If the BD or any of the BDs are not free the function
12124 +- * returns a busy status. This is invoked from axienet_start_xmit.
12125 +- */
12126 +-static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
12127 +- int num_frag)
12128 +-{
12129 +- struct axidma_bd *cur_p;
12130 +- cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
12131 +- if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
12132 +- return NETDEV_TX_BUSY;
12133 +- return 0;
12134 ++ if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
12135 ++ netif_wake_queue(ndev);
12136 + }
12137 +
12138 + /**
12139 +@@ -730,20 +748,15 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
12140 + num_frag = skb_shinfo(skb)->nr_frags;
12141 + cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
12142 +
12143 +- if (axienet_check_tx_bd_space(lp, num_frag)) {
12144 +- if (netif_queue_stopped(ndev))
12145 +- return NETDEV_TX_BUSY;
12146 +-
12147 ++ if (axienet_check_tx_bd_space(lp, num_frag + 1)) {
12148 ++ /* Should not happen as last start_xmit call should have
12149 ++ * checked for sufficient space and queue should only be
12150 ++ * woken when sufficient space is available.
12151 ++ */
12152 + netif_stop_queue(ndev);
12153 +-
12154 +- /* Matches barrier in axienet_start_xmit_done */
12155 +- smp_mb();
12156 +-
12157 +- /* Space might have just been freed - check again */
12158 +- if (axienet_check_tx_bd_space(lp, num_frag))
12159 +- return NETDEV_TX_BUSY;
12160 +-
12161 +- netif_wake_queue(ndev);
12162 ++ if (net_ratelimit())
12163 ++ netdev_warn(ndev, "TX ring unexpectedly full\n");
12164 ++ return NETDEV_TX_BUSY;
12165 + }
12166 +
12167 + if (skb->ip_summed == CHECKSUM_PARTIAL) {
12168 +@@ -804,6 +817,18 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
12169 + if (++lp->tx_bd_tail >= lp->tx_bd_num)
12170 + lp->tx_bd_tail = 0;
12171 +
12172 ++ /* Stop queue if next transmit may not have space */
12173 ++ if (axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
12174 ++ netif_stop_queue(ndev);
12175 ++
12176 ++ /* Matches barrier in axienet_start_xmit_done */
12177 ++ smp_mb();
12178 ++
12179 ++ /* Space might have just been freed - check again */
12180 ++ if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
12181 ++ netif_wake_queue(ndev);
12182 ++ }
12183 ++
12184 + return NETDEV_TX_OK;
12185 + }
12186 +
12187 +@@ -834,6 +859,8 @@ static void axienet_recv(struct net_device *ndev)
12188 +
12189 + tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
12190 +
12191 ++ /* Ensure we see complete descriptor update */
12192 ++ dma_rmb();
12193 + phys = desc_get_phys_addr(lp, cur_p);
12194 + dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
12195 + DMA_FROM_DEVICE);
12196 +@@ -1355,7 +1382,8 @@ static int axienet_ethtools_set_ringparam(struct net_device *ndev,
12197 + if (ering->rx_pending > RX_BD_NUM_MAX ||
12198 + ering->rx_mini_pending ||
12199 + ering->rx_jumbo_pending ||
12200 +- ering->rx_pending > TX_BD_NUM_MAX)
12201 ++ ering->tx_pending < TX_BD_NUM_MIN ||
12202 ++ ering->tx_pending > TX_BD_NUM_MAX)
12203 + return -EINVAL;
12204 +
12205 + if (netif_running(ndev))
12206 +@@ -2015,6 +2043,11 @@ static int axienet_probe(struct platform_device *pdev)
12207 + lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
12208 + lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
12209 +
12210 ++ /* Reset core now that clocks are enabled, prior to accessing MDIO */
12211 ++ ret = __axienet_device_reset(lp);
12212 ++ if (ret)
12213 ++ goto cleanup_clk;
12214 ++
12215 + lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
12216 + if (lp->phy_node) {
12217 + ret = axienet_mdio_setup(lp);
12218 +diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
12219 +index 91616182c311f..4dda2ab19c265 100644
12220 +--- a/drivers/net/phy/marvell.c
12221 ++++ b/drivers/net/phy/marvell.c
12222 +@@ -1090,6 +1090,12 @@ static int m88e1118_config_init(struct phy_device *phydev)
12223 + if (err < 0)
12224 + return err;
12225 +
12226 ++ if (phy_interface_is_rgmii(phydev)) {
12227 ++ err = m88e1121_config_aneg_rgmii_delays(phydev);
12228 ++ if (err < 0)
12229 ++ return err;
12230 ++ }
12231 ++
12232 + /* Adjust LED Control */
12233 + if (phydev->dev_flags & MARVELL_PHY_M1118_DNS323_LEDS)
12234 + err = phy_write(phydev, 0x10, 0x1100);
12235 +diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
12236 +index 2645ca35103c9..c416ab1d2b008 100644
12237 +--- a/drivers/net/phy/mdio_bus.c
12238 ++++ b/drivers/net/phy/mdio_bus.c
12239 +@@ -588,7 +588,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
12240 + mdiobus_setup_mdiodev_from_board_info(bus, mdiobus_create_device);
12241 +
12242 + bus->state = MDIOBUS_REGISTERED;
12243 +- pr_info("%s: probed\n", bus->name);
12244 ++ dev_dbg(&bus->dev, "probed\n");
12245 + return 0;
12246 +
12247 + error:
12248 +diff --git a/drivers/net/phy/phy-core.c b/drivers/net/phy/phy-core.c
12249 +index 8d333d3084ed3..cccb83dae673b 100644
12250 +--- a/drivers/net/phy/phy-core.c
12251 ++++ b/drivers/net/phy/phy-core.c
12252 +@@ -161,11 +161,11 @@ static const struct phy_setting settings[] = {
12253 + PHY_SETTING( 2500, FULL, 2500baseT_Full ),
12254 + PHY_SETTING( 2500, FULL, 2500baseX_Full ),
12255 + /* 1G */
12256 +- PHY_SETTING( 1000, FULL, 1000baseKX_Full ),
12257 + PHY_SETTING( 1000, FULL, 1000baseT_Full ),
12258 + PHY_SETTING( 1000, HALF, 1000baseT_Half ),
12259 + PHY_SETTING( 1000, FULL, 1000baseT1_Full ),
12260 + PHY_SETTING( 1000, FULL, 1000baseX_Full ),
12261 ++ PHY_SETTING( 1000, FULL, 1000baseKX_Full ),
12262 + /* 100M */
12263 + PHY_SETTING( 100, FULL, 100baseT_Full ),
12264 + PHY_SETTING( 100, FULL, 100baseT1_Full ),
12265 +diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
12266 +index 32c34c728c7a1..efffa65f82143 100644
12267 +--- a/drivers/net/phy/sfp.c
12268 ++++ b/drivers/net/phy/sfp.c
12269 +@@ -1589,17 +1589,20 @@ static int sfp_sm_probe_for_phy(struct sfp *sfp)
12270 + static int sfp_module_parse_power(struct sfp *sfp)
12271 + {
12272 + u32 power_mW = 1000;
12273 ++ bool supports_a2;
12274 +
12275 + if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL))
12276 + power_mW = 1500;
12277 + if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL))
12278 + power_mW = 2000;
12279 +
12280 ++ supports_a2 = sfp->id.ext.sff8472_compliance !=
12281 ++ SFP_SFF8472_COMPLIANCE_NONE ||
12282 ++ sfp->id.ext.diagmon & SFP_DIAGMON_DDM;
12283 ++
12284 + if (power_mW > sfp->max_power_mW) {
12285 + /* Module power specification exceeds the allowed maximum. */
12286 +- if (sfp->id.ext.sff8472_compliance ==
12287 +- SFP_SFF8472_COMPLIANCE_NONE &&
12288 +- !(sfp->id.ext.diagmon & SFP_DIAGMON_DDM)) {
12289 ++ if (!supports_a2) {
12290 + /* The module appears not to implement bus address
12291 + * 0xa2, so assume that the module powers up in the
12292 + * indicated mode.
12293 +@@ -1616,11 +1619,25 @@ static int sfp_module_parse_power(struct sfp *sfp)
12294 + }
12295 + }
12296 +
12297 ++ if (power_mW <= 1000) {
12298 ++ /* Modules below 1W do not require a power change sequence */
12299 ++ sfp->module_power_mW = power_mW;
12300 ++ return 0;
12301 ++ }
12302 ++
12303 ++ if (!supports_a2) {
12304 ++ /* The module power level is below the host maximum and the
12305 ++ * module appears not to implement bus address 0xa2, so assume
12306 ++ * that the module powers up in the indicated mode.
12307 ++ */
12308 ++ return 0;
12309 ++ }
12310 ++
12311 + /* If the module requires a higher power mode, but also requires
12312 + * an address change sequence, warn the user that the module may
12313 + * not be functional.
12314 + */
12315 +- if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE && power_mW > 1000) {
12316 ++ if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE) {
12317 + dev_warn(sfp->dev,
12318 + "Address Change Sequence not supported but module requires %u.%uW, module may not be functional\n",
12319 + power_mW / 1000, (power_mW / 100) % 10);
12320 +diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
12321 +index 33b2e0fb68bbb..2b9815ec4a622 100644
12322 +--- a/drivers/net/ppp/ppp_generic.c
12323 ++++ b/drivers/net/ppp/ppp_generic.c
12324 +@@ -69,6 +69,8 @@
12325 + #define MPHDRLEN 6 /* multilink protocol header length */
12326 + #define MPHDRLEN_SSN 4 /* ditto with short sequence numbers */
12327 +
12328 ++#define PPP_PROTO_LEN 2
12329 ++
12330 + /*
12331 + * An instance of /dev/ppp can be associated with either a ppp
12332 + * interface unit or a ppp channel. In both cases, file->private_data
12333 +@@ -496,6 +498,9 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
12334 +
12335 + if (!pf)
12336 + return -ENXIO;
12337 ++ /* All PPP packets should start with the 2-byte protocol */
12338 ++ if (count < PPP_PROTO_LEN)
12339 ++ return -EINVAL;
12340 + ret = -ENOMEM;
12341 + skb = alloc_skb(count + pf->hdrlen, GFP_KERNEL);
12342 + if (!skb)
12343 +@@ -1632,7 +1637,7 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
12344 + }
12345 +
12346 + ++ppp->stats64.tx_packets;
12347 +- ppp->stats64.tx_bytes += skb->len - 2;
12348 ++ ppp->stats64.tx_bytes += skb->len - PPP_PROTO_LEN;
12349 +
12350 + switch (proto) {
12351 + case PPP_IP:
12352 +diff --git a/drivers/net/usb/mcs7830.c b/drivers/net/usb/mcs7830.c
12353 +index 09bfa6a4dfbc1..7e40e2e2f3723 100644
12354 +--- a/drivers/net/usb/mcs7830.c
12355 ++++ b/drivers/net/usb/mcs7830.c
12356 +@@ -108,8 +108,16 @@ static const char driver_name[] = "MOSCHIP usb-ethernet driver";
12357 +
12358 + static int mcs7830_get_reg(struct usbnet *dev, u16 index, u16 size, void *data)
12359 + {
12360 +- return usbnet_read_cmd(dev, MCS7830_RD_BREQ, MCS7830_RD_BMREQ,
12361 +- 0x0000, index, data, size);
12362 ++ int ret;
12363 ++
12364 ++ ret = usbnet_read_cmd(dev, MCS7830_RD_BREQ, MCS7830_RD_BMREQ,
12365 ++ 0x0000, index, data, size);
12366 ++ if (ret < 0)
12367 ++ return ret;
12368 ++ else if (ret < size)
12369 ++ return -ENODATA;
12370 ++
12371 ++ return ret;
12372 + }
12373 +
12374 + static int mcs7830_set_reg(struct usbnet *dev, u16 index, u16 size, const void *data)
12375 +diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
12376 +index 49cc4b7ed5163..1baec4b412c8d 100644
12377 +--- a/drivers/net/wireless/ath/ar5523/ar5523.c
12378 ++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
12379 +@@ -153,6 +153,10 @@ static void ar5523_cmd_rx_cb(struct urb *urb)
12380 + ar5523_err(ar, "Invalid reply to WDCMSG_TARGET_START");
12381 + return;
12382 + }
12383 ++ if (!cmd->odata) {
12384 ++ ar5523_err(ar, "Unexpected WDCMSG_TARGET_START reply");
12385 ++ return;
12386 ++ }
12387 + memcpy(cmd->odata, hdr + 1, sizeof(u32));
12388 + cmd->olen = sizeof(u32);
12389 + cmd->res = 0;
12390 +diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
12391 +index d73ad60b571c2..d0967bb1f3871 100644
12392 +--- a/drivers/net/wireless/ath/ath10k/core.c
12393 ++++ b/drivers/net/wireless/ath/ath10k/core.c
12394 +@@ -89,6 +89,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12395 + .rri_on_ddr = false,
12396 + .hw_filter_reset_required = true,
12397 + .fw_diag_ce_download = false,
12398 ++ .credit_size_workaround = false,
12399 + .tx_stats_over_pktlog = true,
12400 + },
12401 + {
12402 +@@ -123,6 +124,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12403 + .rri_on_ddr = false,
12404 + .hw_filter_reset_required = true,
12405 + .fw_diag_ce_download = false,
12406 ++ .credit_size_workaround = false,
12407 + .tx_stats_over_pktlog = true,
12408 + },
12409 + {
12410 +@@ -158,6 +160,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12411 + .rri_on_ddr = false,
12412 + .hw_filter_reset_required = true,
12413 + .fw_diag_ce_download = false,
12414 ++ .credit_size_workaround = false,
12415 + .tx_stats_over_pktlog = false,
12416 + },
12417 + {
12418 +@@ -187,6 +190,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12419 + .num_wds_entries = 0x20,
12420 + .uart_pin_workaround = true,
12421 + .tx_stats_over_pktlog = false,
12422 ++ .credit_size_workaround = false,
12423 + .bmi_large_size_download = true,
12424 + .supports_peer_stats_info = true,
12425 + },
12426 +@@ -222,6 +226,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12427 + .rri_on_ddr = false,
12428 + .hw_filter_reset_required = true,
12429 + .fw_diag_ce_download = false,
12430 ++ .credit_size_workaround = false,
12431 + .tx_stats_over_pktlog = false,
12432 + },
12433 + {
12434 +@@ -256,6 +261,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12435 + .rri_on_ddr = false,
12436 + .hw_filter_reset_required = true,
12437 + .fw_diag_ce_download = false,
12438 ++ .credit_size_workaround = false,
12439 + .tx_stats_over_pktlog = false,
12440 + },
12441 + {
12442 +@@ -290,6 +296,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12443 + .rri_on_ddr = false,
12444 + .hw_filter_reset_required = true,
12445 + .fw_diag_ce_download = false,
12446 ++ .credit_size_workaround = false,
12447 + .tx_stats_over_pktlog = false,
12448 + },
12449 + {
12450 +@@ -327,6 +334,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12451 + .rri_on_ddr = false,
12452 + .hw_filter_reset_required = true,
12453 + .fw_diag_ce_download = true,
12454 ++ .credit_size_workaround = false,
12455 + .tx_stats_over_pktlog = false,
12456 + .supports_peer_stats_info = true,
12457 + },
12458 +@@ -368,6 +376,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12459 + .rri_on_ddr = false,
12460 + .hw_filter_reset_required = true,
12461 + .fw_diag_ce_download = false,
12462 ++ .credit_size_workaround = false,
12463 + .tx_stats_over_pktlog = false,
12464 + },
12465 + {
12466 +@@ -415,6 +424,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12467 + .rri_on_ddr = false,
12468 + .hw_filter_reset_required = true,
12469 + .fw_diag_ce_download = false,
12470 ++ .credit_size_workaround = false,
12471 + .tx_stats_over_pktlog = false,
12472 + },
12473 + {
12474 +@@ -459,6 +469,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12475 + .rri_on_ddr = false,
12476 + .hw_filter_reset_required = true,
12477 + .fw_diag_ce_download = false,
12478 ++ .credit_size_workaround = false,
12479 + .tx_stats_over_pktlog = false,
12480 + },
12481 + {
12482 +@@ -493,6 +504,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12483 + .rri_on_ddr = false,
12484 + .hw_filter_reset_required = true,
12485 + .fw_diag_ce_download = false,
12486 ++ .credit_size_workaround = false,
12487 + .tx_stats_over_pktlog = false,
12488 + },
12489 + {
12490 +@@ -529,6 +541,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12491 + .rri_on_ddr = false,
12492 + .hw_filter_reset_required = true,
12493 + .fw_diag_ce_download = true,
12494 ++ .credit_size_workaround = false,
12495 + .tx_stats_over_pktlog = false,
12496 + },
12497 + {
12498 +@@ -557,6 +570,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12499 + .ast_skid_limit = 0x10,
12500 + .num_wds_entries = 0x20,
12501 + .uart_pin_workaround = true,
12502 ++ .credit_size_workaround = true,
12503 + },
12504 + {
12505 + .id = QCA4019_HW_1_0_DEV_VERSION,
12506 +@@ -597,6 +611,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12507 + .rri_on_ddr = false,
12508 + .hw_filter_reset_required = true,
12509 + .fw_diag_ce_download = false,
12510 ++ .credit_size_workaround = false,
12511 + .tx_stats_over_pktlog = false,
12512 + },
12513 + {
12514 +@@ -624,6 +639,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
12515 + .rri_on_ddr = true,
12516 + .hw_filter_reset_required = false,
12517 + .fw_diag_ce_download = false,
12518 ++ .credit_size_workaround = false,
12519 + .tx_stats_over_pktlog = false,
12520 + },
12521 + };
12522 +@@ -697,6 +713,7 @@ static void ath10k_send_suspend_complete(struct ath10k *ar)
12523 +
12524 + static int ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
12525 + {
12526 ++ bool mtu_workaround = ar->hw_params.credit_size_workaround;
12527 + int ret;
12528 + u32 param = 0;
12529 +
12530 +@@ -714,7 +731,7 @@ static int ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
12531 +
12532 + param |= HI_ACS_FLAGS_SDIO_REDUCE_TX_COMPL_SET;
12533 +
12534 +- if (mode == ATH10K_FIRMWARE_MODE_NORMAL)
12535 ++ if (mode == ATH10K_FIRMWARE_MODE_NORMAL && !mtu_workaround)
12536 + param |= HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
12537 + else
12538 + param &= ~HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
12539 +diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
12540 +index 1fc0a312ab587..5f67da47036cf 100644
12541 +--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
12542 ++++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
12543 +@@ -147,6 +147,9 @@ void ath10k_htt_tx_dec_pending(struct ath10k_htt *htt)
12544 + htt->num_pending_tx--;
12545 + if (htt->num_pending_tx == htt->max_num_pending_tx - 1)
12546 + ath10k_mac_tx_unlock(htt->ar, ATH10K_TX_PAUSE_Q_FULL);
12547 ++
12548 ++ if (htt->num_pending_tx == 0)
12549 ++ wake_up(&htt->empty_tx_wq);
12550 + }
12551 +
12552 + int ath10k_htt_tx_inc_pending(struct ath10k_htt *htt)
12553 +diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
12554 +index c6ded21f5ed69..d3ef83ad577da 100644
12555 +--- a/drivers/net/wireless/ath/ath10k/hw.h
12556 ++++ b/drivers/net/wireless/ath/ath10k/hw.h
12557 +@@ -618,6 +618,9 @@ struct ath10k_hw_params {
12558 + */
12559 + bool uart_pin_workaround;
12560 +
12561 ++ /* Workaround for the credit size calculation */
12562 ++ bool credit_size_workaround;
12563 ++
12564 + /* tx stats support over pktlog */
12565 + bool tx_stats_over_pktlog;
12566 +
12567 +diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
12568 +index aefe1f7f906c0..f51f1cf2c6a40 100644
12569 +--- a/drivers/net/wireless/ath/ath10k/txrx.c
12570 ++++ b/drivers/net/wireless/ath/ath10k/txrx.c
12571 +@@ -82,8 +82,6 @@ int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
12572 + flags = skb_cb->flags;
12573 + ath10k_htt_tx_free_msdu_id(htt, tx_done->msdu_id);
12574 + ath10k_htt_tx_dec_pending(htt);
12575 +- if (htt->num_pending_tx == 0)
12576 +- wake_up(&htt->empty_tx_wq);
12577 + spin_unlock_bh(&htt->tx_lock);
12578 +
12579 + rcu_read_lock();
12580 +diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
12581 +index 430723c64adce..9ff6e68533142 100644
12582 +--- a/drivers/net/wireless/ath/ath11k/ahb.c
12583 ++++ b/drivers/net/wireless/ath/ath11k/ahb.c
12584 +@@ -175,8 +175,11 @@ static void __ath11k_ahb_ext_irq_disable(struct ath11k_base *ab)
12585 +
12586 + ath11k_ahb_ext_grp_disable(irq_grp);
12587 +
12588 +- napi_synchronize(&irq_grp->napi);
12589 +- napi_disable(&irq_grp->napi);
12590 ++ if (irq_grp->napi_enabled) {
12591 ++ napi_synchronize(&irq_grp->napi);
12592 ++ napi_disable(&irq_grp->napi);
12593 ++ irq_grp->napi_enabled = false;
12594 ++ }
12595 + }
12596 + }
12597 +
12598 +@@ -206,13 +209,13 @@ static void ath11k_ahb_clearbit32(struct ath11k_base *ab, u8 bit, u32 offset)
12599 +
12600 + static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
12601 + {
12602 +- const struct ce_pipe_config *ce_config;
12603 ++ const struct ce_attr *ce_attr;
12604 +
12605 +- ce_config = &ab->hw_params.target_ce_config[ce_id];
12606 +- if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_OUT)
12607 ++ ce_attr = &ab->hw_params.host_ce_config[ce_id];
12608 ++ if (ce_attr->src_nentries)
12609 + ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
12610 +
12611 +- if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_IN) {
12612 ++ if (ce_attr->dest_nentries) {
12613 + ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
12614 + ath11k_ahb_setbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
12615 + CE_HOST_IE_3_ADDRESS);
12616 +@@ -221,13 +224,13 @@ static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
12617 +
12618 + static void ath11k_ahb_ce_irq_disable(struct ath11k_base *ab, u16 ce_id)
12619 + {
12620 +- const struct ce_pipe_config *ce_config;
12621 ++ const struct ce_attr *ce_attr;
12622 +
12623 +- ce_config = &ab->hw_params.target_ce_config[ce_id];
12624 +- if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_OUT)
12625 ++ ce_attr = &ab->hw_params.host_ce_config[ce_id];
12626 ++ if (ce_attr->src_nentries)
12627 + ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
12628 +
12629 +- if (__le32_to_cpu(ce_config->pipedir) & PIPEDIR_IN) {
12630 ++ if (ce_attr->dest_nentries) {
12631 + ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
12632 + ath11k_ahb_clearbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
12633 + CE_HOST_IE_3_ADDRESS);
12634 +@@ -300,7 +303,10 @@ static void ath11k_ahb_ext_irq_enable(struct ath11k_base *ab)
12635 + for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
12636 + struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
12637 +
12638 +- napi_enable(&irq_grp->napi);
12639 ++ if (!irq_grp->napi_enabled) {
12640 ++ napi_enable(&irq_grp->napi);
12641 ++ irq_grp->napi_enabled = true;
12642 ++ }
12643 + ath11k_ahb_ext_grp_enable(irq_grp);
12644 + }
12645 + }
12646 +diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
12647 +index c8e36251068c9..d2f2898d17b49 100644
12648 +--- a/drivers/net/wireless/ath/ath11k/core.h
12649 ++++ b/drivers/net/wireless/ath/ath11k/core.h
12650 +@@ -124,6 +124,7 @@ struct ath11k_ext_irq_grp {
12651 + u32 num_irq;
12652 + u32 grp_id;
12653 + u64 timestamp;
12654 ++ bool napi_enabled;
12655 + struct napi_struct napi;
12656 + struct net_device napi_ndev;
12657 + };
12658 +@@ -687,7 +688,6 @@ struct ath11k_base {
12659 + u32 wlan_init_status;
12660 + int irq_num[ATH11K_IRQ_NUM_MAX];
12661 + struct ath11k_ext_irq_grp ext_irq_grp[ATH11K_EXT_IRQ_GRP_NUM_MAX];
12662 +- struct napi_struct *napi;
12663 + struct ath11k_targ_cap target_caps;
12664 + u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
12665 + bool pdevs_macaddr_valid;
12666 +diff --git a/drivers/net/wireless/ath/ath11k/dp.h b/drivers/net/wireless/ath/ath11k/dp.h
12667 +index ee8db812589b3..c4972233149f4 100644
12668 +--- a/drivers/net/wireless/ath/ath11k/dp.h
12669 ++++ b/drivers/net/wireless/ath/ath11k/dp.h
12670 +@@ -514,7 +514,8 @@ struct htt_ppdu_stats_cfg_cmd {
12671 + } __packed;
12672 +
12673 + #define HTT_PPDU_STATS_CFG_MSG_TYPE GENMASK(7, 0)
12674 +-#define HTT_PPDU_STATS_CFG_PDEV_ID GENMASK(15, 8)
12675 ++#define HTT_PPDU_STATS_CFG_SOC_STATS BIT(8)
12676 ++#define HTT_PPDU_STATS_CFG_PDEV_ID GENMASK(15, 9)
12677 + #define HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK GENMASK(31, 16)
12678 +
12679 + enum htt_ppdu_stats_tag_type {
12680 +diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
12681 +index 21dfd08d3debb..092eee735da29 100644
12682 +--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
12683 ++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
12684 +@@ -894,7 +894,7 @@ int ath11k_dp_tx_htt_h2t_ppdu_stats_req(struct ath11k *ar, u32 mask)
12685 + cmd->msg = FIELD_PREP(HTT_PPDU_STATS_CFG_MSG_TYPE,
12686 + HTT_H2T_MSG_TYPE_PPDU_STATS_CFG);
12687 +
12688 +- pdev_mask = 1 << (i + 1);
12689 ++ pdev_mask = 1 << (ar->pdev_idx + i);
12690 + cmd->msg |= FIELD_PREP(HTT_PPDU_STATS_CFG_PDEV_ID, pdev_mask);
12691 + cmd->msg |= FIELD_PREP(HTT_PPDU_STATS_CFG_TLV_TYPE_BITMASK, mask);
12692 +
12693 +diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
12694 +index 9904c0eb75875..f3b9108ab6bd0 100644
12695 +--- a/drivers/net/wireless/ath/ath11k/hal.c
12696 ++++ b/drivers/net/wireless/ath/ath11k/hal.c
12697 +@@ -991,6 +991,7 @@ int ath11k_hal_srng_setup(struct ath11k_base *ab, enum hal_ring_type type,
12698 + srng->msi_data = params->msi_data;
12699 + srng->initialized = 1;
12700 + spin_lock_init(&srng->lock);
12701 ++ lockdep_set_class(&srng->lock, hal->srng_key + ring_id);
12702 +
12703 + for (i = 0; i < HAL_SRNG_NUM_REG_GRP; i++) {
12704 + srng->hwreg_base[i] = srng_config->reg_start[i] +
12705 +@@ -1237,6 +1238,24 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
12706 + return 0;
12707 + }
12708 +
12709 ++static void ath11k_hal_register_srng_key(struct ath11k_base *ab)
12710 ++{
12711 ++ struct ath11k_hal *hal = &ab->hal;
12712 ++ u32 ring_id;
12713 ++
12714 ++ for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
12715 ++ lockdep_register_key(hal->srng_key + ring_id);
12716 ++}
12717 ++
12718 ++static void ath11k_hal_unregister_srng_key(struct ath11k_base *ab)
12719 ++{
12720 ++ struct ath11k_hal *hal = &ab->hal;
12721 ++ u32 ring_id;
12722 ++
12723 ++ for (ring_id = 0; ring_id < HAL_SRNG_RING_ID_MAX; ring_id++)
12724 ++ lockdep_unregister_key(hal->srng_key + ring_id);
12725 ++}
12726 ++
12727 + int ath11k_hal_srng_init(struct ath11k_base *ab)
12728 + {
12729 + struct ath11k_hal *hal = &ab->hal;
12730 +@@ -1256,6 +1275,8 @@ int ath11k_hal_srng_init(struct ath11k_base *ab)
12731 + if (ret)
12732 + goto err_free_cont_rdp;
12733 +
12734 ++ ath11k_hal_register_srng_key(ab);
12735 ++
12736 + return 0;
12737 +
12738 + err_free_cont_rdp:
12739 +@@ -1270,6 +1291,7 @@ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
12740 + {
12741 + struct ath11k_hal *hal = &ab->hal;
12742 +
12743 ++ ath11k_hal_unregister_srng_key(ab);
12744 + ath11k_hal_free_cont_rdp(ab);
12745 + ath11k_hal_free_cont_wrp(ab);
12746 + kfree(hal->srng_config);
12747 +diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h
12748 +index 1f1b29cd0aa39..5fbfded8d546c 100644
12749 +--- a/drivers/net/wireless/ath/ath11k/hal.h
12750 ++++ b/drivers/net/wireless/ath/ath11k/hal.h
12751 +@@ -888,6 +888,8 @@ struct ath11k_hal {
12752 + /* shadow register configuration */
12753 + u32 shadow_reg_addr[HAL_SHADOW_NUM_REGS];
12754 + int num_shadow_reg_configured;
12755 ++
12756 ++ struct lock_class_key srng_key[HAL_SRNG_RING_ID_MAX];
12757 + };
12758 +
12759 + u32 ath11k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid);
12760 +diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
12761 +index 66331da350129..f6282e8702923 100644
12762 +--- a/drivers/net/wireless/ath/ath11k/hw.c
12763 ++++ b/drivers/net/wireless/ath/ath11k/hw.c
12764 +@@ -246,8 +246,6 @@ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_ipq8074 = {
12765 + const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qca6390 = {
12766 + .tx = {
12767 + ATH11K_TX_RING_MASK_0,
12768 +- ATH11K_TX_RING_MASK_1,
12769 +- ATH11K_TX_RING_MASK_2,
12770 + },
12771 + .rx_mon_status = {
12772 + 0, 0, 0, 0,
12773 +diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
12774 +index 0924bc8b35205..cc9122f420243 100644
12775 +--- a/drivers/net/wireless/ath/ath11k/mac.c
12776 ++++ b/drivers/net/wireless/ath/ath11k/mac.c
12777 +@@ -1,6 +1,7 @@
12778 + // SPDX-License-Identifier: BSD-3-Clause-Clear
12779 + /*
12780 + * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
12781 ++ * Copyright (c) 2021 Qualcomm Innovation Center, Inc. All rights reserved.
12782 + */
12783 +
12784 + #include <net/mac80211.h>
12785 +@@ -792,11 +793,15 @@ static int ath11k_mac_setup_bcn_tmpl(struct ath11k_vif *arvif)
12786 +
12787 + if (cfg80211_find_ie(WLAN_EID_RSN, ies, (skb_tail_pointer(bcn) - ies)))
12788 + arvif->rsnie_present = true;
12789 ++ else
12790 ++ arvif->rsnie_present = false;
12791 +
12792 + if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
12793 + WLAN_OUI_TYPE_MICROSOFT_WPA,
12794 + ies, (skb_tail_pointer(bcn) - ies)))
12795 + arvif->wpaie_present = true;
12796 ++ else
12797 ++ arvif->wpaie_present = false;
12798 +
12799 + ret = ath11k_wmi_bcn_tmpl(ar, arvif->vdev_id, &offs, bcn);
12800 +
12801 +@@ -2316,9 +2321,12 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
12802 + arg.scan_id = ATH11K_SCAN_ID;
12803 +
12804 + if (req->ie_len) {
12805 ++ arg.extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
12806 ++ if (!arg.extraie.ptr) {
12807 ++ ret = -ENOMEM;
12808 ++ goto exit;
12809 ++ }
12810 + arg.extraie.len = req->ie_len;
12811 +- arg.extraie.ptr = kzalloc(req->ie_len, GFP_KERNEL);
12812 +- memcpy(arg.extraie.ptr, req->ie, req->ie_len);
12813 + }
12814 +
12815 + if (req->n_ssids) {
12816 +@@ -2395,9 +2403,7 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
12817 + return 0;
12818 +
12819 + if (cmd == DISABLE_KEY) {
12820 +- /* TODO: Check if FW expects value other than NONE for del */
12821 +- /* arg.key_cipher = WMI_CIPHER_NONE; */
12822 +- arg.key_len = 0;
12823 ++ arg.key_cipher = WMI_CIPHER_NONE;
12824 + arg.key_data = NULL;
12825 + goto install;
12826 + }
12827 +@@ -2529,7 +2535,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
12828 + /* flush the fragments cache during key (re)install to
12829 + * ensure all frags in the new frag list belong to the same key.
12830 + */
12831 +- if (peer && cmd == SET_KEY)
12832 ++ if (peer && sta && cmd == SET_KEY)
12833 + ath11k_peer_frags_flush(ar, peer);
12834 + spin_unlock_bh(&ab->base_lock);
12835 +
12836 +@@ -3878,23 +3884,32 @@ static int __ath11k_set_antenna(struct ath11k *ar, u32 tx_ant, u32 rx_ant)
12837 + return 0;
12838 + }
12839 +
12840 +-int ath11k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
12841 ++static void ath11k_mac_tx_mgmt_free(struct ath11k *ar, int buf_id)
12842 + {
12843 +- struct sk_buff *msdu = skb;
12844 ++ struct sk_buff *msdu;
12845 + struct ieee80211_tx_info *info;
12846 +- struct ath11k *ar = ctx;
12847 +- struct ath11k_base *ab = ar->ab;
12848 +
12849 + spin_lock_bh(&ar->txmgmt_idr_lock);
12850 +- idr_remove(&ar->txmgmt_idr, buf_id);
12851 ++ msdu = idr_remove(&ar->txmgmt_idr, buf_id);
12852 + spin_unlock_bh(&ar->txmgmt_idr_lock);
12853 +- dma_unmap_single(ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
12854 ++
12855 ++ if (!msdu)
12856 ++ return;
12857 ++
12858 ++ dma_unmap_single(ar->ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
12859 + DMA_TO_DEVICE);
12860 +
12861 + info = IEEE80211_SKB_CB(msdu);
12862 + memset(&info->status, 0, sizeof(info->status));
12863 +
12864 + ieee80211_free_txskb(ar->hw, msdu);
12865 ++}
12866 ++
12867 ++int ath11k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx)
12868 ++{
12869 ++ struct ath11k *ar = ctx;
12870 ++
12871 ++ ath11k_mac_tx_mgmt_free(ar, buf_id);
12872 +
12873 + return 0;
12874 + }
12875 +@@ -3903,17 +3918,10 @@ static int ath11k_mac_vif_txmgmt_idr_remove(int buf_id, void *skb, void *ctx)
12876 + {
12877 + struct ieee80211_vif *vif = ctx;
12878 + struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB((struct sk_buff *)skb);
12879 +- struct sk_buff *msdu = skb;
12880 + struct ath11k *ar = skb_cb->ar;
12881 +- struct ath11k_base *ab = ar->ab;
12882 +
12883 +- if (skb_cb->vif == vif) {
12884 +- spin_lock_bh(&ar->txmgmt_idr_lock);
12885 +- idr_remove(&ar->txmgmt_idr, buf_id);
12886 +- spin_unlock_bh(&ar->txmgmt_idr_lock);
12887 +- dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len,
12888 +- DMA_TO_DEVICE);
12889 +- }
12890 ++ if (skb_cb->vif == vif)
12891 ++ ath11k_mac_tx_mgmt_free(ar, buf_id);
12892 +
12893 + return 0;
12894 + }
12895 +@@ -3928,6 +3936,8 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
12896 + int buf_id;
12897 + int ret;
12898 +
12899 ++ ATH11K_SKB_CB(skb)->ar = ar;
12900 ++
12901 + spin_lock_bh(&ar->txmgmt_idr_lock);
12902 + buf_id = idr_alloc(&ar->txmgmt_idr, skb, 0,
12903 + ATH11K_TX_MGMT_NUM_PENDING_MAX, GFP_ATOMIC);
12904 +diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
12905 +index d7eb6b7160bb4..105e344240c10 100644
12906 +--- a/drivers/net/wireless/ath/ath11k/pci.c
12907 ++++ b/drivers/net/wireless/ath/ath11k/pci.c
12908 +@@ -416,8 +416,11 @@ static void __ath11k_pci_ext_irq_disable(struct ath11k_base *sc)
12909 +
12910 + ath11k_pci_ext_grp_disable(irq_grp);
12911 +
12912 +- napi_synchronize(&irq_grp->napi);
12913 +- napi_disable(&irq_grp->napi);
12914 ++ if (irq_grp->napi_enabled) {
12915 ++ napi_synchronize(&irq_grp->napi);
12916 ++ napi_disable(&irq_grp->napi);
12917 ++ irq_grp->napi_enabled = false;
12918 ++ }
12919 + }
12920 + }
12921 +
12922 +@@ -436,7 +439,10 @@ static void ath11k_pci_ext_irq_enable(struct ath11k_base *ab)
12923 + for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
12924 + struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
12925 +
12926 +- napi_enable(&irq_grp->napi);
12927 ++ if (!irq_grp->napi_enabled) {
12928 ++ napi_enable(&irq_grp->napi);
12929 ++ irq_grp->napi_enabled = true;
12930 ++ }
12931 + ath11k_pci_ext_grp_enable(irq_grp);
12932 + }
12933 + }
12934 +diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
12935 +index b8f9f34408879..e34311516b958 100644
12936 +--- a/drivers/net/wireless/ath/ath11k/reg.c
12937 ++++ b/drivers/net/wireless/ath/ath11k/reg.c
12938 +@@ -456,6 +456,9 @@ ath11k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
12939 + {
12940 + u16 bw;
12941 +
12942 ++ if (end_freq <= start_freq)
12943 ++ return 0;
12944 ++
12945 + bw = end_freq - start_freq;
12946 + bw = min_t(u16, bw, max_bw);
12947 +
12948 +@@ -463,8 +466,10 @@ ath11k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
12949 + bw = 80;
12950 + else if (bw >= 40 && bw < 80)
12951 + bw = 40;
12952 +- else if (bw < 40)
12953 ++ else if (bw >= 20 && bw < 40)
12954 + bw = 20;
12955 ++ else
12956 ++ bw = 0;
12957 +
12958 + return bw;
12959 + }
12960 +@@ -488,73 +493,77 @@ ath11k_reg_update_weather_radar_band(struct ath11k_base *ab,
12961 + struct cur_reg_rule *reg_rule,
12962 + u8 *rule_idx, u32 flags, u16 max_bw)
12963 + {
12964 ++ u32 start_freq;
12965 + u32 end_freq;
12966 + u16 bw;
12967 + u8 i;
12968 +
12969 + i = *rule_idx;
12970 +
12971 ++ /* there might be situations when even the input rule must be dropped */
12972 ++ i--;
12973 ++
12974 ++ /* frequencies below weather radar */
12975 + bw = ath11k_reg_adjust_bw(reg_rule->start_freq,
12976 + ETSI_WEATHER_RADAR_BAND_LOW, max_bw);
12977 ++ if (bw > 0) {
12978 ++ i++;
12979 +
12980 +- ath11k_reg_update_rule(regd->reg_rules + i, reg_rule->start_freq,
12981 +- ETSI_WEATHER_RADAR_BAND_LOW, bw,
12982 +- reg_rule->ant_gain, reg_rule->reg_power,
12983 +- flags);
12984 ++ ath11k_reg_update_rule(regd->reg_rules + i,
12985 ++ reg_rule->start_freq,
12986 ++ ETSI_WEATHER_RADAR_BAND_LOW, bw,
12987 ++ reg_rule->ant_gain, reg_rule->reg_power,
12988 ++ flags);
12989 +
12990 +- ath11k_dbg(ab, ATH11K_DBG_REG,
12991 +- "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
12992 +- i + 1, reg_rule->start_freq, ETSI_WEATHER_RADAR_BAND_LOW,
12993 +- bw, reg_rule->ant_gain, reg_rule->reg_power,
12994 +- regd->reg_rules[i].dfs_cac_ms,
12995 +- flags);
12996 +-
12997 +- if (reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_HIGH)
12998 +- end_freq = ETSI_WEATHER_RADAR_BAND_HIGH;
12999 +- else
13000 +- end_freq = reg_rule->end_freq;
13001 ++ ath11k_dbg(ab, ATH11K_DBG_REG,
13002 ++ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
13003 ++ i + 1, reg_rule->start_freq,
13004 ++ ETSI_WEATHER_RADAR_BAND_LOW, bw, reg_rule->ant_gain,
13005 ++ reg_rule->reg_power, regd->reg_rules[i].dfs_cac_ms,
13006 ++ flags);
13007 ++ }
13008 +
13009 +- bw = ath11k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
13010 +- max_bw);
13011 ++ /* weather radar frequencies */
13012 ++ start_freq = max_t(u32, reg_rule->start_freq,
13013 ++ ETSI_WEATHER_RADAR_BAND_LOW);
13014 ++ end_freq = min_t(u32, reg_rule->end_freq, ETSI_WEATHER_RADAR_BAND_HIGH);
13015 +
13016 +- i++;
13017 ++ bw = ath11k_reg_adjust_bw(start_freq, end_freq, max_bw);
13018 ++ if (bw > 0) {
13019 ++ i++;
13020 +
13021 +- ath11k_reg_update_rule(regd->reg_rules + i,
13022 +- ETSI_WEATHER_RADAR_BAND_LOW, end_freq, bw,
13023 +- reg_rule->ant_gain, reg_rule->reg_power,
13024 +- flags);
13025 ++ ath11k_reg_update_rule(regd->reg_rules + i, start_freq,
13026 ++ end_freq, bw, reg_rule->ant_gain,
13027 ++ reg_rule->reg_power, flags);
13028 +
13029 +- regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
13030 ++ regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
13031 +
13032 +- ath11k_dbg(ab, ATH11K_DBG_REG,
13033 +- "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
13034 +- i + 1, ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
13035 +- bw, reg_rule->ant_gain, reg_rule->reg_power,
13036 +- regd->reg_rules[i].dfs_cac_ms,
13037 +- flags);
13038 +-
13039 +- if (end_freq == reg_rule->end_freq) {
13040 +- regd->n_reg_rules--;
13041 +- *rule_idx = i;
13042 +- return;
13043 ++ ath11k_dbg(ab, ATH11K_DBG_REG,
13044 ++ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
13045 ++ i + 1, start_freq, end_freq, bw,
13046 ++ reg_rule->ant_gain, reg_rule->reg_power,
13047 ++ regd->reg_rules[i].dfs_cac_ms, flags);
13048 + }
13049 +
13050 ++ /* frequencies above weather radar */
13051 + bw = ath11k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_HIGH,
13052 + reg_rule->end_freq, max_bw);
13053 ++ if (bw > 0) {
13054 ++ i++;
13055 +
13056 +- i++;
13057 +-
13058 +- ath11k_reg_update_rule(regd->reg_rules + i, ETSI_WEATHER_RADAR_BAND_HIGH,
13059 +- reg_rule->end_freq, bw,
13060 +- reg_rule->ant_gain, reg_rule->reg_power,
13061 +- flags);
13062 ++ ath11k_reg_update_rule(regd->reg_rules + i,
13063 ++ ETSI_WEATHER_RADAR_BAND_HIGH,
13064 ++ reg_rule->end_freq, bw,
13065 ++ reg_rule->ant_gain, reg_rule->reg_power,
13066 ++ flags);
13067 +
13068 +- ath11k_dbg(ab, ATH11K_DBG_REG,
13069 +- "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
13070 +- i + 1, ETSI_WEATHER_RADAR_BAND_HIGH, reg_rule->end_freq,
13071 +- bw, reg_rule->ant_gain, reg_rule->reg_power,
13072 +- regd->reg_rules[i].dfs_cac_ms,
13073 +- flags);
13074 ++ ath11k_dbg(ab, ATH11K_DBG_REG,
13075 ++ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
13076 ++ i + 1, ETSI_WEATHER_RADAR_BAND_HIGH,
13077 ++ reg_rule->end_freq, bw, reg_rule->ant_gain,
13078 ++ reg_rule->reg_power, regd->reg_rules[i].dfs_cac_ms,
13079 ++ flags);
13080 ++ }
13081 +
13082 + *rule_idx = i;
13083 + }
13084 +diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
13085 +index e84127165d858..53846dc9a5c5a 100644
13086 +--- a/drivers/net/wireless/ath/ath11k/wmi.c
13087 ++++ b/drivers/net/wireless/ath/ath11k/wmi.c
13088 +@@ -1665,7 +1665,8 @@ int ath11k_wmi_vdev_install_key(struct ath11k *ar,
13089 + tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
13090 + tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
13091 + FIELD_PREP(WMI_TLV_LEN, key_len_aligned);
13092 +- memcpy(tlv->value, (u8 *)arg->key_data, key_len_aligned);
13093 ++ if (arg->key_data)
13094 ++ memcpy(tlv->value, (u8 *)arg->key_data, key_len_aligned);
13095 +
13096 + ret = ath11k_wmi_cmd_send(wmi, skb, WMI_VDEV_INSTALL_KEY_CMDID);
13097 + if (ret) {
13098 +@@ -5421,7 +5422,7 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
13099 + ar = ab->pdevs[pdev_idx].ar;
13100 + kfree(ab->new_regd[pdev_idx]);
13101 + ab->new_regd[pdev_idx] = regd;
13102 +- ieee80211_queue_work(ar->hw, &ar->regd_update_work);
13103 ++ queue_work(ab->workqueue, &ar->regd_update_work);
13104 + } else {
13105 + /* This regd would be applied during mac registration and is
13106 + * held constant throughout for regd intersection purpose
13107 +diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
13108 +index 860da13bfb6ac..f06eec99de688 100644
13109 +--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
13110 ++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
13111 +@@ -590,6 +590,13 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
13112 + return;
13113 + }
13114 +
13115 ++ if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
13116 ++ dev_err(&hif_dev->udev->dev,
13117 ++ "ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
13118 ++ RX_STAT_INC(skb_dropped);
13119 ++ return;
13120 ++ }
13121 ++
13122 + pad_len = 4 - (pkt_len & 0x3);
13123 + if (pad_len == 4)
13124 + pad_len = 0;
13125 +diff --git a/drivers/net/wireless/ath/wcn36xx/dxe.c b/drivers/net/wireless/ath/wcn36xx/dxe.c
13126 +index cf4eb0fb28151..6c62ffc799a2b 100644
13127 +--- a/drivers/net/wireless/ath/wcn36xx/dxe.c
13128 ++++ b/drivers/net/wireless/ath/wcn36xx/dxe.c
13129 +@@ -272,6 +272,21 @@ static int wcn36xx_dxe_enable_ch_int(struct wcn36xx *wcn, u16 wcn_ch)
13130 + return 0;
13131 + }
13132 +
13133 ++static void wcn36xx_dxe_disable_ch_int(struct wcn36xx *wcn, u16 wcn_ch)
13134 ++{
13135 ++ int reg_data = 0;
13136 ++
13137 ++ wcn36xx_dxe_read_register(wcn,
13138 ++ WCN36XX_DXE_INT_MASK_REG,
13139 ++ &reg_data);
13140 ++
13141 ++ reg_data &= ~wcn_ch;
13142 ++
13143 ++ wcn36xx_dxe_write_register(wcn,
13144 ++ WCN36XX_DXE_INT_MASK_REG,
13145 ++ (int)reg_data);
13146 ++}
13147 ++
13148 + static int wcn36xx_dxe_fill_skb(struct device *dev,
13149 + struct wcn36xx_dxe_ctl *ctl,
13150 + gfp_t gfp)
13151 +@@ -869,7 +884,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
13152 + WCN36XX_DXE_WQ_TX_L);
13153 +
13154 + wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_REG_CH_EN, &reg_data);
13155 +- wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
13156 +
13157 + /***************************************/
13158 + /* Init descriptors for TX HIGH channel */
13159 +@@ -893,9 +907,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
13160 +
13161 + wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_REG_CH_EN, &reg_data);
13162 +
13163 +- /* Enable channel interrupts */
13164 +- wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
13165 +-
13166 + /***************************************/
13167 + /* Init descriptors for RX LOW channel */
13168 + /***************************************/
13169 +@@ -905,7 +916,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
13170 + goto out_err_rxl_ch;
13171 + }
13172 +
13173 +-
13174 + /* For RX we need to preallocated buffers */
13175 + wcn36xx_dxe_ch_alloc_skb(wcn, &wcn->dxe_rx_l_ch);
13176 +
13177 +@@ -928,9 +938,6 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
13178 + WCN36XX_DXE_REG_CTL_RX_L,
13179 + WCN36XX_DXE_CH_DEFAULT_CTL_RX_L);
13180 +
13181 +- /* Enable channel interrupts */
13182 +- wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
13183 +-
13184 + /***************************************/
13185 + /* Init descriptors for RX HIGH channel */
13186 + /***************************************/
13187 +@@ -962,15 +969,18 @@ int wcn36xx_dxe_init(struct wcn36xx *wcn)
13188 + WCN36XX_DXE_REG_CTL_RX_H,
13189 + WCN36XX_DXE_CH_DEFAULT_CTL_RX_H);
13190 +
13191 +- /* Enable channel interrupts */
13192 +- wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
13193 +-
13194 + ret = wcn36xx_dxe_request_irqs(wcn);
13195 + if (ret < 0)
13196 + goto out_err_irq;
13197 +
13198 + timer_setup(&wcn->tx_ack_timer, wcn36xx_dxe_tx_timer, 0);
13199 +
13200 ++ /* Enable channel interrupts */
13201 ++ wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
13202 ++ wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
13203 ++ wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
13204 ++ wcn36xx_dxe_enable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
13205 ++
13206 + return 0;
13207 +
13208 + out_err_irq:
13209 +@@ -987,6 +997,14 @@ out_err_txh_ch:
13210 +
13211 + void wcn36xx_dxe_deinit(struct wcn36xx *wcn)
13212 + {
13213 ++ int reg_data = 0;
13214 ++
13215 ++ /* Disable channel interrupts */
13216 ++ wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_H);
13217 ++ wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_RX_L);
13218 ++ wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_H);
13219 ++ wcn36xx_dxe_disable_ch_int(wcn, WCN36XX_INT_MASK_CHAN_TX_L);
13220 ++
13221 + free_irq(wcn->tx_irq, wcn);
13222 + free_irq(wcn->rx_irq, wcn);
13223 + del_timer(&wcn->tx_ack_timer);
13224 +@@ -996,6 +1014,15 @@ void wcn36xx_dxe_deinit(struct wcn36xx *wcn)
13225 + wcn->tx_ack_skb = NULL;
13226 + }
13227 +
13228 ++ /* Put the DXE block into reset before freeing memory */
13229 ++ reg_data = WCN36XX_DXE_REG_RESET;
13230 ++ wcn36xx_dxe_write_register(wcn, WCN36XX_DXE_REG_CSR_RESET, reg_data);
13231 ++
13232 + wcn36xx_dxe_ch_free_skbs(wcn, &wcn->dxe_rx_l_ch);
13233 + wcn36xx_dxe_ch_free_skbs(wcn, &wcn->dxe_rx_h_ch);
13234 ++
13235 ++ wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_tx_l_ch);
13236 ++ wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_tx_h_ch);
13237 ++ wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_rx_l_ch);
13238 ++ wcn36xx_dxe_deinit_descs(wcn->dev, &wcn->dxe_rx_h_ch);
13239 + }
13240 +diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
13241 +index 629ddfd74da1a..9aaf6f7473333 100644
13242 +--- a/drivers/net/wireless/ath/wcn36xx/main.c
13243 ++++ b/drivers/net/wireless/ath/wcn36xx/main.c
13244 +@@ -397,6 +397,7 @@ static void wcn36xx_change_opchannel(struct wcn36xx *wcn, int ch)
13245 + static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
13246 + {
13247 + struct wcn36xx *wcn = hw->priv;
13248 ++ int ret;
13249 +
13250 + wcn36xx_dbg(WCN36XX_DBG_MAC, "mac config changed 0x%08x\n", changed);
13251 +
13252 +@@ -412,17 +413,31 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
13253 + * want to receive/transmit regular data packets, then
13254 + * simply stop the scan session and exit PS mode.
13255 + */
13256 +- wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
13257 +- wcn->sw_scan_vif);
13258 +- wcn->sw_scan_channel = 0;
13259 ++ if (wcn->sw_scan_channel)
13260 ++ wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
13261 ++ if (wcn->sw_scan_init) {
13262 ++ wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
13263 ++ wcn->sw_scan_vif);
13264 ++ }
13265 + } else if (wcn->sw_scan) {
13266 + /* A scan is ongoing, do not change the operating
13267 + * channel, but start a scan session on the channel.
13268 + */
13269 +- wcn36xx_smd_init_scan(wcn, HAL_SYS_MODE_SCAN,
13270 +- wcn->sw_scan_vif);
13271 ++ if (wcn->sw_scan_channel)
13272 ++ wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
13273 ++ if (!wcn->sw_scan_init) {
13274 ++ /* This can fail if we are unable to notify the
13275 ++ * operating channel.
13276 ++ */
13277 ++ ret = wcn36xx_smd_init_scan(wcn,
13278 ++ HAL_SYS_MODE_SCAN,
13279 ++ wcn->sw_scan_vif);
13280 ++ if (ret) {
13281 ++ mutex_unlock(&wcn->conf_mutex);
13282 ++ return -EIO;
13283 ++ }
13284 ++ }
13285 + wcn36xx_smd_start_scan(wcn, ch);
13286 +- wcn->sw_scan_channel = ch;
13287 + } else {
13288 + wcn36xx_change_opchannel(wcn, ch);
13289 + }
13290 +@@ -709,7 +724,12 @@ static void wcn36xx_sw_scan_complete(struct ieee80211_hw *hw,
13291 + struct wcn36xx *wcn = hw->priv;
13292 +
13293 + /* ensure that any scan session is finished */
13294 +- wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN, wcn->sw_scan_vif);
13295 ++ if (wcn->sw_scan_channel)
13296 ++ wcn36xx_smd_end_scan(wcn, wcn->sw_scan_channel);
13297 ++ if (wcn->sw_scan_init) {
13298 ++ wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
13299 ++ wcn->sw_scan_vif);
13300 ++ }
13301 + wcn->sw_scan = false;
13302 + wcn->sw_scan_opchannel = 0;
13303 + }
13304 +diff --git a/drivers/net/wireless/ath/wcn36xx/smd.c b/drivers/net/wireless/ath/wcn36xx/smd.c
13305 +index 3793907ace92e..7f00cb6f5e16b 100644
13306 +--- a/drivers/net/wireless/ath/wcn36xx/smd.c
13307 ++++ b/drivers/net/wireless/ath/wcn36xx/smd.c
13308 +@@ -730,6 +730,7 @@ int wcn36xx_smd_init_scan(struct wcn36xx *wcn, enum wcn36xx_hal_sys_mode mode,
13309 + wcn36xx_err("hal_init_scan response failed err=%d\n", ret);
13310 + goto out;
13311 + }
13312 ++ wcn->sw_scan_init = true;
13313 + out:
13314 + mutex_unlock(&wcn->hal_mutex);
13315 + return ret;
13316 +@@ -760,6 +761,7 @@ int wcn36xx_smd_start_scan(struct wcn36xx *wcn, u8 scan_channel)
13317 + wcn36xx_err("hal_start_scan response failed err=%d\n", ret);
13318 + goto out;
13319 + }
13320 ++ wcn->sw_scan_channel = scan_channel;
13321 + out:
13322 + mutex_unlock(&wcn->hal_mutex);
13323 + return ret;
13324 +@@ -790,6 +792,7 @@ int wcn36xx_smd_end_scan(struct wcn36xx *wcn, u8 scan_channel)
13325 + wcn36xx_err("hal_end_scan response failed err=%d\n", ret);
13326 + goto out;
13327 + }
13328 ++ wcn->sw_scan_channel = 0;
13329 + out:
13330 + mutex_unlock(&wcn->hal_mutex);
13331 + return ret;
13332 +@@ -831,6 +834,7 @@ int wcn36xx_smd_finish_scan(struct wcn36xx *wcn,
13333 + wcn36xx_err("hal_finish_scan response failed err=%d\n", ret);
13334 + goto out;
13335 + }
13336 ++ wcn->sw_scan_init = false;
13337 + out:
13338 + mutex_unlock(&wcn->hal_mutex);
13339 + return ret;
13340 +@@ -2603,7 +2607,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
13341 + wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
13342 + tmp->bss_index);
13343 + vif = wcn36xx_priv_to_vif(tmp);
13344 +- ieee80211_connection_loss(vif);
13345 ++ ieee80211_beacon_loss(vif);
13346 + }
13347 + return 0;
13348 + }
13349 +@@ -2618,7 +2622,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
13350 + wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
13351 + rsp->bss_index);
13352 + vif = wcn36xx_priv_to_vif(tmp);
13353 +- ieee80211_connection_loss(vif);
13354 ++ ieee80211_beacon_loss(vif);
13355 + return 0;
13356 + }
13357 + }
13358 +diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
13359 +index bbd7194c82e27..f33e7228a1010 100644
13360 +--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
13361 ++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
13362 +@@ -237,7 +237,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
13363 + const struct wcn36xx_rate *rate;
13364 + struct ieee80211_hdr *hdr;
13365 + struct wcn36xx_rx_bd *bd;
13366 +- struct ieee80211_supported_band *sband;
13367 + u16 fc, sn;
13368 +
13369 + /*
13370 +@@ -259,8 +258,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
13371 + fc = __le16_to_cpu(hdr->frame_control);
13372 + sn = IEEE80211_SEQ_TO_SN(__le16_to_cpu(hdr->seq_ctrl));
13373 +
13374 +- status.freq = WCN36XX_CENTER_FREQ(wcn);
13375 +- status.band = WCN36XX_BAND(wcn);
13376 + status.mactime = 10;
13377 + status.signal = -get_rssi0(bd);
13378 + status.antenna = 1;
13379 +@@ -272,18 +269,36 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
13380 +
13381 + wcn36xx_dbg(WCN36XX_DBG_RX, "status.flags=%x\n", status.flag);
13382 +
13383 ++ if (bd->scan_learn) {
13384 ++ /* If packet originates from hardware scanning, extract the
13385 ++ * band/channel from bd descriptor.
13386 ++ */
13387 ++ u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
13388 ++
13389 ++ if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
13390 ++ status.band = NL80211_BAND_5GHZ;
13391 ++ status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
13392 ++ status.band);
13393 ++ } else {
13394 ++ status.band = NL80211_BAND_2GHZ;
13395 ++ status.freq = ieee80211_channel_to_frequency(hwch, status.band);
13396 ++ }
13397 ++ } else {
13398 ++ status.band = WCN36XX_BAND(wcn);
13399 ++ status.freq = WCN36XX_CENTER_FREQ(wcn);
13400 ++ }
13401 ++
13402 + if (bd->rate_id < ARRAY_SIZE(wcn36xx_rate_table)) {
13403 + rate = &wcn36xx_rate_table[bd->rate_id];
13404 + status.encoding = rate->encoding;
13405 + status.enc_flags = rate->encoding_flags;
13406 + status.bw = rate->bw;
13407 + status.rate_idx = rate->mcs_or_legacy_index;
13408 +- sband = wcn->hw->wiphy->bands[status.band];
13409 + status.nss = 1;
13410 +
13411 + if (status.band == NL80211_BAND_5GHZ &&
13412 + status.encoding == RX_ENC_LEGACY &&
13413 +- status.rate_idx >= sband->n_bitrates) {
13414 ++ status.rate_idx >= 4) {
13415 + /* no dsss rates in 5Ghz rates table */
13416 + status.rate_idx -= 4;
13417 + }
13418 +@@ -298,22 +313,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
13419 + ieee80211_is_probe_resp(hdr->frame_control))
13420 + status.boottime_ns = ktime_get_boottime_ns();
13421 +
13422 +- if (bd->scan_learn) {
13423 +- /* If packet originates from hardware scanning, extract the
13424 +- * band/channel from bd descriptor.
13425 +- */
13426 +- u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
13427 +-
13428 +- if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
13429 +- status.band = NL80211_BAND_5GHZ;
13430 +- status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
13431 +- status.band);
13432 +- } else {
13433 +- status.band = NL80211_BAND_2GHZ;
13434 +- status.freq = ieee80211_channel_to_frequency(hwch, status.band);
13435 +- }
13436 +- }
13437 +-
13438 + memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
13439 +
13440 + if (ieee80211_is_beacon(hdr->frame_control)) {
13441 +diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
13442 +index 9b4dee2fc6483..5c40d0bdee245 100644
13443 +--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
13444 ++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
13445 +@@ -231,6 +231,7 @@ struct wcn36xx {
13446 + struct cfg80211_scan_request *scan_req;
13447 + bool sw_scan;
13448 + u8 sw_scan_opchannel;
13449 ++ bool sw_scan_init;
13450 + u8 sw_scan_channel;
13451 + struct ieee80211_vif *sw_scan_vif;
13452 + struct mutex scan_lock;
13453 +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
13454 +index be214f39f52be..30c6d7b18599a 100644
13455 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
13456 ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
13457 +@@ -185,6 +185,9 @@ static void iwl_dealloc_ucode(struct iwl_drv *drv)
13458 +
13459 + for (i = 0; i < IWL_UCODE_TYPE_MAX; i++)
13460 + iwl_free_fw_img(drv, drv->fw.img + i);
13461 ++
13462 ++ /* clear the data for the aborted load case */
13463 ++ memset(&drv->fw, 0, sizeof(drv->fw));
13464 + }
13465 +
13466 + static int iwl_alloc_fw_desc(struct iwl_drv *drv, struct fw_desc *desc,
13467 +@@ -1365,6 +1368,7 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
13468 + int i;
13469 + bool load_module = false;
13470 + bool usniffer_images = false;
13471 ++ bool failure = true;
13472 +
13473 + fw->ucode_capa.max_probe_length = IWL_DEFAULT_MAX_PROBE_LENGTH;
13474 + fw->ucode_capa.standard_phy_calibration_size =
13475 +@@ -1625,15 +1629,9 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
13476 + * else from proceeding if the module fails to load
13477 + * or hangs loading.
13478 + */
13479 +- if (load_module) {
13480 ++ if (load_module)
13481 + request_module("%s", op->name);
13482 +-#ifdef CONFIG_IWLWIFI_OPMODE_MODULAR
13483 +- if (err)
13484 +- IWL_ERR(drv,
13485 +- "failed to load module %s (error %d), is dynamic loading enabled?\n",
13486 +- op->name, err);
13487 +-#endif
13488 +- }
13489 ++ failure = false;
13490 + goto free;
13491 +
13492 + try_again:
13493 +@@ -1649,6 +1647,9 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
13494 + complete(&drv->request_firmware_complete);
13495 + device_release_driver(drv->trans->dev);
13496 + free:
13497 ++ if (failure)
13498 ++ iwl_dealloc_ucode(drv);
13499 ++
13500 + if (pieces) {
13501 + for (i = 0; i < ARRAY_SIZE(pieces->img); i++)
13502 + kfree(pieces->img[i].sec);
13503 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
13504 +index a0ce761d0c59b..b1335fe3b01a2 100644
13505 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
13506 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
13507 +@@ -967,7 +967,7 @@ static void iwl_mvm_ftm_rtt_smoothing(struct iwl_mvm *mvm,
13508 + overshoot = IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT;
13509 + alpha = IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA;
13510 +
13511 +- rtt_avg = (alpha * rtt + (100 - alpha) * resp->rtt_avg) / 100;
13512 ++ rtt_avg = div_s64(alpha * rtt + (100 - alpha) * resp->rtt_avg, 100);
13513 +
13514 + IWL_DEBUG_INFO(mvm,
13515 + "%pM: prev rtt_avg=%lld, new rtt_avg=%lld, rtt=%lld\n",
13516 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
13517 +index 81cc85a97eb20..922a7ea0cd24e 100644
13518 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
13519 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
13520 +@@ -1739,6 +1739,7 @@ static void iwl_mvm_recalc_multicast(struct iwl_mvm *mvm)
13521 + struct iwl_mvm_mc_iter_data iter_data = {
13522 + .mvm = mvm,
13523 + };
13524 ++ int ret;
13525 +
13526 + lockdep_assert_held(&mvm->mutex);
13527 +
13528 +@@ -1748,6 +1749,22 @@ static void iwl_mvm_recalc_multicast(struct iwl_mvm *mvm)
13529 + ieee80211_iterate_active_interfaces_atomic(
13530 + mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
13531 + iwl_mvm_mc_iface_iterator, &iter_data);
13532 ++
13533 ++ /*
13534 ++ * Send a (synchronous) echo command so that we wait for the
13535 ++ * multiple asynchronous MCAST_FILTER_CMD commands sent by
13536 ++ * the interface iterator. Otherwise, we might get here over
13537 ++ * and over again (by userspace just sending a lot of these)
13538 ++ * and the CPU can send them faster than the firmware can
13539 ++ * process them.
13540 ++ * Note that the CPU is still faster - but with this we'll
13541 ++ * actually send fewer commands overall because the CPU will
13542 ++ * not schedule the work in mac80211 as frequently if it's
13543 ++ * still running when rescheduled (possibly multiple times).
13544 ++ */
13545 ++ ret = iwl_mvm_send_cmd_pdu(mvm, ECHO_CMD, 0, 0, NULL);
13546 ++ if (ret)
13547 ++ IWL_ERR(mvm, "Failed to synchronize multicast groups update\n");
13548 + }
13549 +
13550 + static u64 iwl_mvm_prepare_multicast(struct ieee80211_hw *hw,
13551 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
13552 +index 838734fec5023..86b3fb321dfdd 100644
13553 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
13554 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
13555 +@@ -177,12 +177,39 @@ static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
13556 + struct iwl_rx_mpdu_desc *desc = (void *)pkt->data;
13557 + unsigned int headlen, fraglen, pad_len = 0;
13558 + unsigned int hdrlen = ieee80211_hdrlen(hdr->frame_control);
13559 ++ u8 mic_crc_len = u8_get_bits(desc->mac_flags1,
13560 ++ IWL_RX_MPDU_MFLG1_MIC_CRC_LEN_MASK) << 1;
13561 +
13562 + if (desc->mac_flags2 & IWL_RX_MPDU_MFLG2_PAD) {
13563 + len -= 2;
13564 + pad_len = 2;
13565 + }
13566 +
13567 ++ /*
13568 ++ * For non monitor interface strip the bytes the RADA might not have
13569 ++ * removed. As monitor interface cannot exist with other interfaces
13570 ++ * this removal is safe.
13571 ++ */
13572 ++ if (mic_crc_len && !ieee80211_hw_check(mvm->hw, RX_INCLUDES_FCS)) {
13573 ++ u32 pkt_flags = le32_to_cpu(pkt->len_n_flags);
13574 ++
13575 ++ /*
13576 ++ * If RADA was not enabled then decryption was not performed so
13577 ++ * the MIC cannot be removed.
13578 ++ */
13579 ++ if (!(pkt_flags & FH_RSCSR_RADA_EN)) {
13580 ++ if (WARN_ON(crypt_len > mic_crc_len))
13581 ++ return -EINVAL;
13582 ++
13583 ++ mic_crc_len -= crypt_len;
13584 ++ }
13585 ++
13586 ++ if (WARN_ON(mic_crc_len > len))
13587 ++ return -EINVAL;
13588 ++
13589 ++ len -= mic_crc_len;
13590 ++ }
13591 ++
13592 + /* If frame is small enough to fit in skb->head, pull it completely.
13593 + * If not, only pull ieee80211_hdr (including crypto if present, and
13594 + * an additional 8 bytes for SNAP/ethertype, see below) so that
13595 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
13596 +index a5d90e028833c..46255d2c555b6 100644
13597 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
13598 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
13599 +@@ -2157,7 +2157,7 @@ static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
13600 + return -EIO;
13601 + }
13602 +
13603 +-#define SCAN_TIMEOUT 20000
13604 ++#define SCAN_TIMEOUT 30000
13605 +
13606 + void iwl_mvm_scan_timeout_wk(struct work_struct *work)
13607 + {
13608 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
13609 +index 394598b14a173..3f081cdea09ca 100644
13610 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
13611 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
13612 +@@ -98,14 +98,13 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
13613 + struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm, roc_done_wk);
13614 +
13615 + /*
13616 +- * Clear the ROC_RUNNING /ROC_AUX_RUNNING status bit.
13617 ++ * Clear the ROC_RUNNING status bit.
13618 + * This will cause the TX path to drop offchannel transmissions.
13619 + * That would also be done by mac80211, but it is racy, in particular
13620 + * in the case that the time event actually completed in the firmware
13621 + * (which is handled in iwl_mvm_te_handle_notif).
13622 + */
13623 + clear_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status);
13624 +- clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status);
13625 +
13626 + synchronize_net();
13627 +
13628 +@@ -131,9 +130,19 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
13629 + mvmvif = iwl_mvm_vif_from_mac80211(mvm->p2p_device_vif);
13630 + iwl_mvm_flush_sta(mvm, &mvmvif->bcast_sta, true);
13631 + }
13632 +- } else {
13633 ++ }
13634 ++
13635 ++ /*
13636 ++ * Clear the ROC_AUX_RUNNING status bit.
13637 ++ * This will cause the TX path to drop offchannel transmissions.
13638 ++ * That would also be done by mac80211, but it is racy, in particular
13639 ++ * in the case that the time event actually completed in the firmware
13640 ++ * (which is handled in iwl_mvm_te_handle_notif).
13641 ++ */
13642 ++ if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) {
13643 + /* do the same in case of hot spot 2.0 */
13644 + iwl_mvm_flush_sta(mvm, &mvm->aux_sta, true);
13645 ++
13646 + /* In newer version of this command an aux station is added only
13647 + * in cases of dedicated tx queue and need to be removed in end
13648 + * of use */
13649 +@@ -1157,15 +1166,10 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
13650 + cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
13651 + mvmvif->color)),
13652 + .action = cpu_to_le32(FW_CTXT_ACTION_ADD),
13653 ++ .conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC),
13654 + .duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),
13655 + };
13656 +
13657 +- /* The time_event_data.id field is reused to save session
13658 +- * protection's configuration.
13659 +- */
13660 +- mvmvif->time_event_data.id = SESSION_PROTECT_CONF_ASSOC;
13661 +- cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);
13662 +-
13663 + lockdep_assert_held(&mvm->mutex);
13664 +
13665 + spin_lock_bh(&mvm->time_event_lock);
13666 +@@ -1179,6 +1183,11 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
13667 + }
13668 +
13669 + iwl_mvm_te_clear_data(mvm, te_data);
13670 ++ /*
13671 ++ * The time_event_data.id field is reused to save session
13672 ++ * protection's configuration.
13673 ++ */
13674 ++ te_data->id = le32_to_cpu(cmd.conf_id);
13675 + te_data->duration = le32_to_cpu(cmd.duration_tu);
13676 + spin_unlock_bh(&mvm->time_event_lock);
13677 +
13678 +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
13679 +index 2c13fa8f28200..6aedf5762571d 100644
13680 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
13681 ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
13682 +@@ -2260,7 +2260,12 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
13683 + }
13684 + }
13685 +
13686 +- if (inta_hw & MSIX_HW_INT_CAUSES_REG_WAKEUP) {
13687 ++ /*
13688 ++ * In some rare cases when the HW is in a bad state, we may
13689 ++ * get this interrupt too early, when prph_info is still NULL.
13690 ++ * So make sure that it's not NULL to prevent crashing.
13691 ++ */
13692 ++ if (inta_hw & MSIX_HW_INT_CAUSES_REG_WAKEUP && trans_pcie->prph_info) {
13693 + u32 sleep_notif =
13694 + le32_to_cpu(trans_pcie->prph_info->sleep_notif);
13695 + if (sleep_notif == IWL_D3_SLEEP_STATUS_SUSPEND ||
13696 +diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
13697 +index 9181221a2434d..0136df00ff6a6 100644
13698 +--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
13699 ++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
13700 +@@ -1148,6 +1148,7 @@ int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
13701 + return 0;
13702 + err_free_tfds:
13703 + dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr);
13704 ++ txq->tfds = NULL;
13705 + error:
13706 + if (txq->entries && cmd_queue)
13707 + for (i = 0; i < slots_num; i++)
13708 +diff --git a/drivers/net/wireless/marvell/mwifiex/sta_event.c b/drivers/net/wireless/marvell/mwifiex/sta_event.c
13709 +index bc79ca4cb803c..753458628f86a 100644
13710 +--- a/drivers/net/wireless/marvell/mwifiex/sta_event.c
13711 ++++ b/drivers/net/wireless/marvell/mwifiex/sta_event.c
13712 +@@ -364,10 +364,12 @@ static void mwifiex_process_uap_tx_pause(struct mwifiex_private *priv,
13713 + sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
13714 + if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
13715 + sta_ptr->tx_pause = tp->tx_pause;
13716 ++ spin_unlock_bh(&priv->sta_list_spinlock);
13717 + mwifiex_update_ralist_tx_pause(priv, tp->peermac,
13718 + tp->tx_pause);
13719 ++ } else {
13720 ++ spin_unlock_bh(&priv->sta_list_spinlock);
13721 + }
13722 +- spin_unlock_bh(&priv->sta_list_spinlock);
13723 + }
13724 + }
13725 +
13726 +@@ -399,11 +401,13 @@ static void mwifiex_process_sta_tx_pause(struct mwifiex_private *priv,
13727 + sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
13728 + if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
13729 + sta_ptr->tx_pause = tp->tx_pause;
13730 ++ spin_unlock_bh(&priv->sta_list_spinlock);
13731 + mwifiex_update_ralist_tx_pause(priv,
13732 + tp->peermac,
13733 + tp->tx_pause);
13734 ++ } else {
13735 ++ spin_unlock_bh(&priv->sta_list_spinlock);
13736 + }
13737 +- spin_unlock_bh(&priv->sta_list_spinlock);
13738 + }
13739 + }
13740 + }
13741 +diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
13742 +index 9736aa0ab7fd4..8f01fcbe93961 100644
13743 +--- a/drivers/net/wireless/marvell/mwifiex/usb.c
13744 ++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
13745 +@@ -130,7 +130,8 @@ static int mwifiex_usb_recv(struct mwifiex_adapter *adapter,
13746 + default:
13747 + mwifiex_dbg(adapter, ERROR,
13748 + "unknown recv_type %#x\n", recv_type);
13749 +- return -1;
13750 ++ ret = -1;
13751 ++ goto exit_restore_skb;
13752 + }
13753 + break;
13754 + case MWIFIEX_USB_EP_DATA:
13755 +diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
13756 +index 565efd8806247..2ef1416899f03 100644
13757 +--- a/drivers/net/wireless/realtek/rtw88/main.c
13758 ++++ b/drivers/net/wireless/realtek/rtw88/main.c
13759 +@@ -1652,7 +1652,7 @@ int rtw_core_init(struct rtw_dev *rtwdev)
13760 +
13761 + /* default rx filter setting */
13762 + rtwdev->hal.rcr = BIT_APP_FCS | BIT_APP_MIC | BIT_APP_ICV |
13763 +- BIT_HTC_LOC_CTRL | BIT_APP_PHYSTS |
13764 ++ BIT_PKTCTL_DLEN | BIT_HTC_LOC_CTRL | BIT_APP_PHYSTS |
13765 + BIT_AB | BIT_AM | BIT_APM;
13766 +
13767 + ret = rtw_load_firmware(rtwdev, RTW_NORMAL_FW);
13768 +diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.h b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
13769 +index bd01e82b6bcd0..8d1e8ff71d7ef 100644
13770 +--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.h
13771 ++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
13772 +@@ -131,7 +131,7 @@ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
13773 + #define WLAN_TX_FUNC_CFG2 0x30
13774 + #define WLAN_MAC_OPT_NORM_FUNC1 0x98
13775 + #define WLAN_MAC_OPT_LB_FUNC1 0x80
13776 +-#define WLAN_MAC_OPT_FUNC2 0x30810041
13777 ++#define WLAN_MAC_OPT_FUNC2 0xb0810041
13778 +
13779 + #define WLAN_SIFS_CFG (WLAN_SIFS_CCK_CONT_TX | \
13780 + (WLAN_SIFS_OFDM_CONT_TX << BIT_SHIFT_SIFS_OFDM_CTX) | \
13781 +diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
13782 +index 22d0dd640ac94..dbfd67c3f598c 100644
13783 +--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
13784 ++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
13785 +@@ -204,7 +204,7 @@ static void rtw8822b_phy_set_param(struct rtw_dev *rtwdev)
13786 + #define WLAN_TX_FUNC_CFG2 0x30
13787 + #define WLAN_MAC_OPT_NORM_FUNC1 0x98
13788 + #define WLAN_MAC_OPT_LB_FUNC1 0x80
13789 +-#define WLAN_MAC_OPT_FUNC2 0x30810041
13790 ++#define WLAN_MAC_OPT_FUNC2 0xb0810041
13791 +
13792 + #define WLAN_SIFS_CFG (WLAN_SIFS_CCK_CONT_TX | \
13793 + (WLAN_SIFS_OFDM_CONT_TX << BIT_SHIFT_SIFS_OFDM_CTX) | \
13794 +diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
13795 +index 79ad6232dce83..cee586335552d 100644
13796 +--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
13797 ++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
13798 +@@ -1248,7 +1248,7 @@ static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev)
13799 + #define WLAN_TX_FUNC_CFG2 0x30
13800 + #define WLAN_MAC_OPT_NORM_FUNC1 0x98
13801 + #define WLAN_MAC_OPT_LB_FUNC1 0x80
13802 +-#define WLAN_MAC_OPT_FUNC2 0x30810041
13803 ++#define WLAN_MAC_OPT_FUNC2 0xb0810041
13804 + #define WLAN_MAC_INT_MIG_CFG 0x33330000
13805 +
13806 + #define WLAN_SIFS_CFG (WLAN_SIFS_CCK_CONT_TX | \
13807 +diff --git a/drivers/net/wireless/rsi/rsi_91x_main.c b/drivers/net/wireless/rsi/rsi_91x_main.c
13808 +index 8c638cfeac52f..fe8aed58ac088 100644
13809 +--- a/drivers/net/wireless/rsi/rsi_91x_main.c
13810 ++++ b/drivers/net/wireless/rsi/rsi_91x_main.c
13811 +@@ -23,6 +23,7 @@
13812 + #include "rsi_common.h"
13813 + #include "rsi_coex.h"
13814 + #include "rsi_hal.h"
13815 ++#include "rsi_usb.h"
13816 +
13817 + u32 rsi_zone_enabled = /* INFO_ZONE |
13818 + INIT_ZONE |
13819 +@@ -168,6 +169,9 @@ int rsi_read_pkt(struct rsi_common *common, u8 *rx_pkt, s32 rcv_pkt_len)
13820 + frame_desc = &rx_pkt[index];
13821 + actual_length = *(u16 *)&frame_desc[0];
13822 + offset = *(u16 *)&frame_desc[2];
13823 ++ if (!rcv_pkt_len && offset >
13824 ++ RSI_MAX_RX_USB_PKT_SIZE - FRAME_DESC_SZ)
13825 ++ goto fail;
13826 +
13827 + queueno = rsi_get_queueno(frame_desc, offset);
13828 + length = rsi_get_length(frame_desc, offset);
13829 +diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
13830 +index d881df9ebd0c3..11388a1469621 100644
13831 +--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
13832 ++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
13833 +@@ -269,8 +269,12 @@ static void rsi_rx_done_handler(struct urb *urb)
13834 + struct rsi_91x_usbdev *dev = (struct rsi_91x_usbdev *)rx_cb->data;
13835 + int status = -EINVAL;
13836 +
13837 ++ if (!rx_cb->rx_skb)
13838 ++ return;
13839 ++
13840 + if (urb->status) {
13841 + dev_kfree_skb(rx_cb->rx_skb);
13842 ++ rx_cb->rx_skb = NULL;
13843 + return;
13844 + }
13845 +
13846 +@@ -294,8 +298,10 @@ out:
13847 + if (rsi_rx_urb_submit(dev->priv, rx_cb->ep_num, GFP_ATOMIC))
13848 + rsi_dbg(ERR_ZONE, "%s: Failed in urb submission", __func__);
13849 +
13850 +- if (status)
13851 ++ if (status) {
13852 + dev_kfree_skb(rx_cb->rx_skb);
13853 ++ rx_cb->rx_skb = NULL;
13854 ++ }
13855 + }
13856 +
13857 + static void rsi_rx_urb_kill(struct rsi_hw *adapter, u8 ep_num)
13858 +@@ -322,7 +328,6 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num, gfp_t mem_flags)
13859 + struct sk_buff *skb;
13860 + u8 dword_align_bytes = 0;
13861 +
13862 +-#define RSI_MAX_RX_USB_PKT_SIZE 3000
13863 + skb = dev_alloc_skb(RSI_MAX_RX_USB_PKT_SIZE);
13864 + if (!skb)
13865 + return -ENOMEM;
13866 +diff --git a/drivers/net/wireless/rsi/rsi_usb.h b/drivers/net/wireless/rsi/rsi_usb.h
13867 +index 8702f434b5699..ad88f8c70a351 100644
13868 +--- a/drivers/net/wireless/rsi/rsi_usb.h
13869 ++++ b/drivers/net/wireless/rsi/rsi_usb.h
13870 +@@ -44,6 +44,8 @@
13871 + #define RSI_USB_BUF_SIZE 4096
13872 + #define RSI_USB_CTRL_BUF_SIZE 0x04
13873 +
13874 ++#define RSI_MAX_RX_USB_PKT_SIZE 3000
13875 ++
13876 + struct rx_usb_ctrl_block {
13877 + u8 *data;
13878 + struct urb *rx_urb;
13879 +diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
13880 +index 6b170083cd248..21d89d80d0838 100644
13881 +--- a/drivers/nvmem/core.c
13882 ++++ b/drivers/nvmem/core.c
13883 +@@ -222,6 +222,8 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
13884 + struct device *dev = kobj_to_dev(kobj);
13885 + struct nvmem_device *nvmem = to_nvmem_device(dev);
13886 +
13887 ++ attr->size = nvmem->size;
13888 ++
13889 + return nvmem_bin_attr_get_umode(nvmem);
13890 + }
13891 +
13892 +diff --git a/drivers/of/base.c b/drivers/of/base.c
13893 +index 161a23631472d..a44a0e7ba2510 100644
13894 +--- a/drivers/of/base.c
13895 ++++ b/drivers/of/base.c
13896 +@@ -1328,9 +1328,14 @@ int of_phandle_iterator_next(struct of_phandle_iterator *it)
13897 + * property data length
13898 + */
13899 + if (it->cur + count > it->list_end) {
13900 +- pr_err("%pOF: %s = %d found %d\n",
13901 +- it->parent, it->cells_name,
13902 +- count, it->cell_count);
13903 ++ if (it->cells_name)
13904 ++ pr_err("%pOF: %s = %d found %td\n",
13905 ++ it->parent, it->cells_name,
13906 ++ count, it->list_end - it->cur);
13907 ++ else
13908 ++ pr_err("%pOF: phandle %s needs %d, found %td\n",
13909 ++ it->parent, of_node_full_name(it->node),
13910 ++ count, it->list_end - it->cur);
13911 + goto err;
13912 + }
13913 + }
13914 +diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
13915 +index 1d4b0b7d0cc10..5407bbdb64395 100644
13916 +--- a/drivers/of/unittest.c
13917 ++++ b/drivers/of/unittest.c
13918 +@@ -910,11 +910,18 @@ static void __init of_unittest_dma_ranges_one(const char *path,
13919 + if (!rc) {
13920 + phys_addr_t paddr;
13921 + dma_addr_t dma_addr;
13922 +- struct device dev_bogus;
13923 ++ struct device *dev_bogus;
13924 +
13925 +- dev_bogus.dma_range_map = map;
13926 +- paddr = dma_to_phys(&dev_bogus, expect_dma_addr);
13927 +- dma_addr = phys_to_dma(&dev_bogus, expect_paddr);
13928 ++ dev_bogus = kzalloc(sizeof(struct device), GFP_KERNEL);
13929 ++ if (!dev_bogus) {
13930 ++ unittest(0, "kzalloc() failed\n");
13931 ++ kfree(map);
13932 ++ return;
13933 ++ }
13934 ++
13935 ++ dev_bogus->dma_range_map = map;
13936 ++ paddr = dma_to_phys(dev_bogus, expect_dma_addr);
13937 ++ dma_addr = phys_to_dma(dev_bogus, expect_paddr);
13938 +
13939 + unittest(paddr == expect_paddr,
13940 + "of_dma_get_range: wrong phys addr %pap (expecting %llx) on node %pOF\n",
13941 +@@ -924,6 +931,7 @@ static void __init of_unittest_dma_ranges_one(const char *path,
13942 + &dma_addr, expect_dma_addr, np);
13943 +
13944 + kfree(map);
13945 ++ kfree(dev_bogus);
13946 + }
13947 + of_node_put(np);
13948 + #endif
13949 +@@ -933,8 +941,9 @@ static void __init of_unittest_parse_dma_ranges(void)
13950 + {
13951 + of_unittest_dma_ranges_one("/testcase-data/address-tests/device@70000000",
13952 + 0x0, 0x20000000);
13953 +- of_unittest_dma_ranges_one("/testcase-data/address-tests/bus@80000000/device@1000",
13954 +- 0x100000000, 0x20000000);
13955 ++ if (IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT))
13956 ++ of_unittest_dma_ranges_one("/testcase-data/address-tests/bus@80000000/device@1000",
13957 ++ 0x100000000, 0x20000000);
13958 + of_unittest_dma_ranges_one("/testcase-data/address-tests/pci@90000000",
13959 + 0x80000000, 0x20000000);
13960 + }
13961 +diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c
13962 +index e090978518f1a..4760f82def6ec 100644
13963 +--- a/drivers/parisc/pdc_stable.c
13964 ++++ b/drivers/parisc/pdc_stable.c
13965 +@@ -979,8 +979,10 @@ pdcs_register_pathentries(void)
13966 + entry->kobj.kset = paths_kset;
13967 + err = kobject_init_and_add(&entry->kobj, &ktype_pdcspath, NULL,
13968 + "%s", entry->name);
13969 +- if (err)
13970 ++ if (err) {
13971 ++ kobject_put(&entry->kobj);
13972 + return err;
13973 ++ }
13974 +
13975 + /* kobject is now registered */
13976 + write_lock(&entry->rw_lock);
13977 +diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
13978 +index 0f6a6685ab5b5..f30144c8c0bd2 100644
13979 +--- a/drivers/pci/controller/pci-aardvark.c
13980 ++++ b/drivers/pci/controller/pci-aardvark.c
13981 +@@ -879,7 +879,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
13982 + return PCI_BRIDGE_EMUL_HANDLED;
13983 + }
13984 +
13985 +- case PCI_CAP_LIST_ID:
13986 + case PCI_EXP_DEVCAP:
13987 + case PCI_EXP_DEVCTL:
13988 + *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
13989 +@@ -960,6 +959,9 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
13990 + /* Support interrupt A for MSI feature */
13991 + bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
13992 +
13993 ++ /* Aardvark HW provides PCIe Capability structure in version 2 */
13994 ++ bridge->pcie_conf.cap = cpu_to_le16(2);
13995 ++
13996 + /* Indicates supports for Completion Retry Status */
13997 + bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
13998 +
13999 +diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
14000 +index ed13e81cd691d..2dc6890dbcaa2 100644
14001 +--- a/drivers/pci/controller/pci-mvebu.c
14002 ++++ b/drivers/pci/controller/pci-mvebu.c
14003 +@@ -573,6 +573,8 @@ static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
14004 + static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
14005 + {
14006 + struct pci_bridge_emul *bridge = &port->bridge;
14007 ++ u32 pcie_cap = mvebu_readl(port, PCIE_CAP_PCIEXP);
14008 ++ u8 pcie_cap_ver = ((pcie_cap >> 16) & PCI_EXP_FLAGS_VERS);
14009 +
14010 + bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
14011 + bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
14012 +@@ -585,6 +587,12 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
14013 + bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
14014 + }
14015 +
14016 ++ /*
14017 ++ * Older mvebu hardware provides PCIe Capability structure only in
14018 ++ * version 1. New hardware provides it in version 2.
14019 ++ */
14020 ++ bridge->pcie_conf.cap = cpu_to_le16(pcie_cap_ver);
14021 ++
14022 + bridge->has_pcie = true;
14023 + bridge->data = port;
14024 + bridge->ops = &mvebu_pci_bridge_emul_ops;
14025 +diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
14026 +index c33b385ac918e..b651b6f444691 100644
14027 +--- a/drivers/pci/controller/pci-xgene.c
14028 ++++ b/drivers/pci/controller/pci-xgene.c
14029 +@@ -467,7 +467,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
14030 + return 1;
14031 + }
14032 +
14033 +- if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
14034 ++ if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
14035 + *ib_reg_mask |= (1 << 0);
14036 + return 0;
14037 + }
14038 +diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
14039 +index 4fd200d8b0a9d..f1f789fe0637a 100644
14040 +--- a/drivers/pci/hotplug/pciehp.h
14041 ++++ b/drivers/pci/hotplug/pciehp.h
14042 +@@ -72,6 +72,8 @@ extern int pciehp_poll_time;
14043 + * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
14044 + * Link Status register and to the Presence Detect State bit in the Slot
14045 + * Status register during a slot reset which may cause them to flap
14046 ++ * @depth: Number of additional hotplug ports in the path to the root bus,
14047 ++ * used as lock subclass for @reset_lock
14048 + * @ist_running: flag to keep user request waiting while IRQ thread is running
14049 + * @request_result: result of last user request submitted to the IRQ thread
14050 + * @requester: wait queue to wake up on completion of user request,
14051 +@@ -103,6 +105,7 @@ struct controller {
14052 +
14053 + struct hotplug_slot hotplug_slot; /* hotplug core interface */
14054 + struct rw_semaphore reset_lock;
14055 ++ unsigned int depth;
14056 + unsigned int ist_running;
14057 + int request_result;
14058 + wait_queue_head_t requester;
14059 +diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
14060 +index ad3393930ecb4..e7fe4b42f0394 100644
14061 +--- a/drivers/pci/hotplug/pciehp_core.c
14062 ++++ b/drivers/pci/hotplug/pciehp_core.c
14063 +@@ -166,7 +166,7 @@ static void pciehp_check_presence(struct controller *ctrl)
14064 + {
14065 + int occupied;
14066 +
14067 +- down_read(&ctrl->reset_lock);
14068 ++ down_read_nested(&ctrl->reset_lock, ctrl->depth);
14069 + mutex_lock(&ctrl->state_lock);
14070 +
14071 + occupied = pciehp_card_present_or_link_active(ctrl);
14072 +diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
14073 +index 9d06939736c0f..90da17c6da664 100644
14074 +--- a/drivers/pci/hotplug/pciehp_hpc.c
14075 ++++ b/drivers/pci/hotplug/pciehp_hpc.c
14076 +@@ -583,7 +583,7 @@ static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
14077 + * the corresponding link change may have been ignored above.
14078 + * Synthesize it to ensure that it is acted on.
14079 + */
14080 +- down_read(&ctrl->reset_lock);
14081 ++ down_read_nested(&ctrl->reset_lock, ctrl->depth);
14082 + if (!pciehp_check_link_active(ctrl))
14083 + pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
14084 + up_read(&ctrl->reset_lock);
14085 +@@ -746,7 +746,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
14086 + * Disable requests have higher priority than Presence Detect Changed
14087 + * or Data Link Layer State Changed events.
14088 + */
14089 +- down_read(&ctrl->reset_lock);
14090 ++ down_read_nested(&ctrl->reset_lock, ctrl->depth);
14091 + if (events & DISABLE_SLOT)
14092 + pciehp_handle_disable_request(ctrl);
14093 + else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
14094 +@@ -880,7 +880,7 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
14095 + if (probe)
14096 + return 0;
14097 +
14098 +- down_write(&ctrl->reset_lock);
14099 ++ down_write_nested(&ctrl->reset_lock, ctrl->depth);
14100 +
14101 + if (!ATTN_BUTTN(ctrl)) {
14102 + ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
14103 +@@ -936,6 +936,20 @@ static inline void dbg_ctrl(struct controller *ctrl)
14104 +
14105 + #define FLAG(x, y) (((x) & (y)) ? '+' : '-')
14106 +
14107 ++static inline int pcie_hotplug_depth(struct pci_dev *dev)
14108 ++{
14109 ++ struct pci_bus *bus = dev->bus;
14110 ++ int depth = 0;
14111 ++
14112 ++ while (bus->parent) {
14113 ++ bus = bus->parent;
14114 ++ if (bus->self && bus->self->is_hotplug_bridge)
14115 ++ depth++;
14116 ++ }
14117 ++
14118 ++ return depth;
14119 ++}
14120 ++
14121 + struct controller *pcie_init(struct pcie_device *dev)
14122 + {
14123 + struct controller *ctrl;
14124 +@@ -949,6 +963,7 @@ struct controller *pcie_init(struct pcie_device *dev)
14125 + return NULL;
14126 +
14127 + ctrl->pcie = dev;
14128 ++ ctrl->depth = pcie_hotplug_depth(dev->port);
14129 + pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
14130 +
14131 + if (pdev->hotplug_user_indicators)
14132 +diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
14133 +index 57314fec2261b..3da69b26e6743 100644
14134 +--- a/drivers/pci/msi.c
14135 ++++ b/drivers/pci/msi.c
14136 +@@ -1291,19 +1291,24 @@ EXPORT_SYMBOL(pci_free_irq_vectors);
14137 +
14138 + /**
14139 + * pci_irq_vector - return Linux IRQ number of a device vector
14140 +- * @dev: PCI device to operate on
14141 +- * @nr: device-relative interrupt vector index (0-based).
14142 ++ * @dev: PCI device to operate on
14143 ++ * @nr: Interrupt vector index (0-based)
14144 ++ *
14145 ++ * @nr has the following meanings depending on the interrupt mode:
14146 ++ * MSI-X: The index in the MSI-X vector table
14147 ++ * MSI: The index of the enabled MSI vectors
14148 ++ * INTx: Must be 0
14149 ++ *
14150 ++ * Return: The Linux interrupt number or -EINVAL if @nr is out of range.
14151 + */
14152 + int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
14153 + {
14154 + if (dev->msix_enabled) {
14155 + struct msi_desc *entry;
14156 +- int i = 0;
14157 +
14158 + for_each_pci_msi_entry(entry, dev) {
14159 +- if (i == nr)
14160 ++ if (entry->msi_attrib.entry_nr == nr)
14161 + return entry->irq;
14162 +- i++;
14163 + }
14164 + WARN_ON_ONCE(1);
14165 + return -EINVAL;
14166 +@@ -1327,17 +1332,22 @@ EXPORT_SYMBOL(pci_irq_vector);
14167 + * pci_irq_get_affinity - return the affinity of a particular MSI vector
14168 + * @dev: PCI device to operate on
14169 + * @nr: device-relative interrupt vector index (0-based).
14170 ++ *
14171 ++ * @nr has the following meanings depending on the interrupt mode:
14172 ++ * MSI-X: The index in the MSI-X vector table
14173 ++ * MSI: The index of the enabled MSI vectors
14174 ++ * INTx: Must be 0
14175 ++ *
14176 ++ * Return: A cpumask pointer or NULL if @nr is out of range
14177 + */
14178 + const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
14179 + {
14180 + if (dev->msix_enabled) {
14181 + struct msi_desc *entry;
14182 +- int i = 0;
14183 +
14184 + for_each_pci_msi_entry(entry, dev) {
14185 +- if (i == nr)
14186 ++ if (entry->msi_attrib.entry_nr == nr)
14187 + return &entry->affinity->mask;
14188 +- i++;
14189 + }
14190 + WARN_ON_ONCE(1);
14191 + return NULL;
14192 +diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
14193 +index db97cddfc85e1..37504c2cce9b8 100644
14194 +--- a/drivers/pci/pci-bridge-emul.c
14195 ++++ b/drivers/pci/pci-bridge-emul.c
14196 +@@ -139,8 +139,13 @@ struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
14197 + .ro = GENMASK(7, 0),
14198 + },
14199 +
14200 ++ /*
14201 ++ * If expansion ROM is unsupported then ROM Base Address register must
14202 ++ * be implemented as a read-only register that returns 0 when read, same
14203 ++ * as for unused Base Address registers.
14204 ++ */
14205 + [PCI_ROM_ADDRESS1 / 4] = {
14206 +- .rw = GENMASK(31, 11) | BIT(0),
14207 ++ .ro = ~0,
14208 + },
14209 +
14210 + /*
14211 +@@ -171,41 +176,55 @@ struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] =
14212 + [PCI_CAP_LIST_ID / 4] = {
14213 + /*
14214 + * Capability ID, Next Capability Pointer and
14215 +- * Capabilities register are all read-only.
14216 ++ * bits [14:0] of Capabilities register are all read-only.
14217 ++ * Bit 15 of Capabilities register is reserved.
14218 + */
14219 +- .ro = ~0,
14220 ++ .ro = GENMASK(30, 0),
14221 + },
14222 +
14223 + [PCI_EXP_DEVCAP / 4] = {
14224 +- .ro = ~0,
14225 ++ /*
14226 ++ * Bits [31:29] and [17:16] are reserved.
14227 ++ * Bits [27:18] are reserved for non-upstream ports.
14228 ++ * Bits 28 and [14:6] are reserved for non-endpoint devices.
14229 ++ * Other bits are read-only.
14230 ++ */
14231 ++ .ro = BIT(15) | GENMASK(5, 0),
14232 + },
14233 +
14234 + [PCI_EXP_DEVCTL / 4] = {
14235 +- /* Device control register is RW */
14236 +- .rw = GENMASK(15, 0),
14237 ++ /*
14238 ++ * Device control register is RW, except bit 15 which is
14239 ++ * reserved for non-endpoints or non-PCIe-to-PCI/X bridges.
14240 ++ */
14241 ++ .rw = GENMASK(14, 0),
14242 +
14243 + /*
14244 + * Device status register has bits 6 and [3:0] W1C, [5:4] RO,
14245 +- * the rest is reserved
14246 ++ * the rest is reserved. Also bit 6 is reserved for non-upstream
14247 ++ * ports.
14248 + */
14249 +- .w1c = (BIT(6) | GENMASK(3, 0)) << 16,
14250 ++ .w1c = GENMASK(3, 0) << 16,
14251 + .ro = GENMASK(5, 4) << 16,
14252 + },
14253 +
14254 + [PCI_EXP_LNKCAP / 4] = {
14255 +- /* All bits are RO, except bit 23 which is reserved */
14256 +- .ro = lower_32_bits(~BIT(23)),
14257 ++ /*
14258 ++ * All bits are RO, except bit 23 which is reserved and
14259 ++ * bit 18 which is reserved for non-upstream ports.
14260 ++ */
14261 ++ .ro = lower_32_bits(~(BIT(23) | PCI_EXP_LNKCAP_CLKPM)),
14262 + },
14263 +
14264 + [PCI_EXP_LNKCTL / 4] = {
14265 + /*
14266 + * Link control has bits [15:14], [11:3] and [1:0] RW, the
14267 +- * rest is reserved.
14268 ++ * rest is reserved. Bit 8 is reserved for non-upstream ports.
14269 + *
14270 + * Link status has bits [13:0] RO, and bits [15:14]
14271 + * W1C.
14272 + */
14273 +- .rw = GENMASK(15, 14) | GENMASK(11, 3) | GENMASK(1, 0),
14274 ++ .rw = GENMASK(15, 14) | GENMASK(11, 9) | GENMASK(7, 3) | GENMASK(1, 0),
14275 + .ro = GENMASK(13, 0) << 16,
14276 + .w1c = GENMASK(15, 14) << 16,
14277 + },
14278 +@@ -277,11 +296,9 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
14279 +
14280 + if (bridge->has_pcie) {
14281 + bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
14282 ++ bridge->conf.status |= cpu_to_le16(PCI_STATUS_CAP_LIST);
14283 + bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
14284 +- /* Set PCIe v2, root port, slot support */
14285 +- bridge->pcie_conf.cap =
14286 +- cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
14287 +- PCI_EXP_FLAGS_SLOT);
14288 ++ bridge->pcie_conf.cap |= cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4);
14289 + bridge->pcie_cap_regs_behavior =
14290 + kmemdup(pcie_cap_regs_behavior,
14291 + sizeof(pcie_cap_regs_behavior),
14292 +@@ -290,6 +307,27 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
14293 + kfree(bridge->pci_regs_behavior);
14294 + return -ENOMEM;
14295 + }
14296 ++ /* These bits are applicable only for PCI and reserved on PCIe */
14297 ++ bridge->pci_regs_behavior[PCI_CACHE_LINE_SIZE / 4].ro &=
14298 ++ ~GENMASK(15, 8);
14299 ++ bridge->pci_regs_behavior[PCI_COMMAND / 4].ro &=
14300 ++ ~((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE |
14301 ++ PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT |
14302 ++ PCI_COMMAND_FAST_BACK) |
14303 ++ (PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
14304 ++ PCI_STATUS_DEVSEL_MASK) << 16);
14305 ++ bridge->pci_regs_behavior[PCI_PRIMARY_BUS / 4].ro &=
14306 ++ ~GENMASK(31, 24);
14307 ++ bridge->pci_regs_behavior[PCI_IO_BASE / 4].ro &=
14308 ++ ~((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK |
14309 ++ PCI_STATUS_DEVSEL_MASK) << 16);
14310 ++ bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].rw &=
14311 ++ ~((PCI_BRIDGE_CTL_MASTER_ABORT |
14312 ++ BIT(8) | BIT(9) | BIT(11)) << 16);
14313 ++ bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].ro &=
14314 ++ ~((PCI_BRIDGE_CTL_FAST_BACK) << 16);
14315 ++ bridge->pci_regs_behavior[PCI_INTERRUPT_LINE / 4].w1c &=
14316 ++ ~(BIT(10) << 16);
14317 + }
14318 +
14319 + if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
14320 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
14321 +index bb863ddb59bfc..95fcc735c88e7 100644
14322 +--- a/drivers/pci/quirks.c
14323 ++++ b/drivers/pci/quirks.c
14324 +@@ -4077,6 +4077,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9120,
14325 + quirk_dma_func1_alias);
14326 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9123,
14327 + quirk_dma_func1_alias);
14328 ++/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c136 */
14329 ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9125,
14330 ++ quirk_dma_func1_alias);
14331 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
14332 + quirk_dma_func1_alias);
14333 + /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
14334 +diff --git a/drivers/pcmcia/cs.c b/drivers/pcmcia/cs.c
14335 +index e211e2619680c..f70197154a362 100644
14336 +--- a/drivers/pcmcia/cs.c
14337 ++++ b/drivers/pcmcia/cs.c
14338 +@@ -666,18 +666,16 @@ static int pccardd(void *__skt)
14339 + if (events || sysfs_events)
14340 + continue;
14341 +
14342 ++ set_current_state(TASK_INTERRUPTIBLE);
14343 + if (kthread_should_stop())
14344 + break;
14345 +
14346 +- set_current_state(TASK_INTERRUPTIBLE);
14347 +-
14348 + schedule();
14349 +
14350 +- /* make sure we are running */
14351 +- __set_current_state(TASK_RUNNING);
14352 +-
14353 + try_to_freeze();
14354 + }
14355 ++ /* make sure we are running before we exit */
14356 ++ __set_current_state(TASK_RUNNING);
14357 +
14358 + /* shut down socket, if a device is still present */
14359 + if (skt->state & SOCKET_PRESENT) {
14360 +diff --git a/drivers/pcmcia/rsrc_nonstatic.c b/drivers/pcmcia/rsrc_nonstatic.c
14361 +index 3b05760e69d62..69a6e9a5d6d26 100644
14362 +--- a/drivers/pcmcia/rsrc_nonstatic.c
14363 ++++ b/drivers/pcmcia/rsrc_nonstatic.c
14364 +@@ -690,6 +690,9 @@ static struct resource *__nonstatic_find_io_region(struct pcmcia_socket *s,
14365 + unsigned long min = base;
14366 + int ret;
14367 +
14368 ++ if (!res)
14369 ++ return NULL;
14370 ++
14371 + data.mask = align - 1;
14372 + data.offset = base & data.mask;
14373 + data.map = &s_data->io_db;
14374 +@@ -809,6 +812,9 @@ static struct resource *nonstatic_find_mem_region(u_long base, u_long num,
14375 + unsigned long min, max;
14376 + int ret, i, j;
14377 +
14378 ++ if (!res)
14379 ++ return NULL;
14380 ++
14381 + low = low || !(s->features & SS_CAP_PAGE_REGS);
14382 +
14383 + data.mask = align - 1;
14384 +diff --git a/drivers/phy/socionext/phy-uniphier-usb3ss.c b/drivers/phy/socionext/phy-uniphier-usb3ss.c
14385 +index 6700645bcbe6b..3b5ffc16a6947 100644
14386 +--- a/drivers/phy/socionext/phy-uniphier-usb3ss.c
14387 ++++ b/drivers/phy/socionext/phy-uniphier-usb3ss.c
14388 +@@ -22,11 +22,13 @@
14389 + #include <linux/reset.h>
14390 +
14391 + #define SSPHY_TESTI 0x0
14392 +-#define SSPHY_TESTO 0x4
14393 + #define TESTI_DAT_MASK GENMASK(13, 6)
14394 + #define TESTI_ADR_MASK GENMASK(5, 1)
14395 + #define TESTI_WR_EN BIT(0)
14396 +
14397 ++#define SSPHY_TESTO 0x4
14398 ++#define TESTO_DAT_MASK GENMASK(7, 0)
14399 ++
14400 + #define PHY_F(regno, msb, lsb) { (regno), (msb), (lsb) }
14401 +
14402 + #define CDR_CPD_TRIM PHY_F(7, 3, 0) /* RxPLL charge pump current */
14403 +@@ -84,12 +86,12 @@ static void uniphier_u3ssphy_set_param(struct uniphier_u3ssphy_priv *priv,
14404 + val = FIELD_PREP(TESTI_DAT_MASK, 1);
14405 + val |= FIELD_PREP(TESTI_ADR_MASK, p->field.reg_no);
14406 + uniphier_u3ssphy_testio_write(priv, val);
14407 +- val = readl(priv->base + SSPHY_TESTO);
14408 ++ val = readl(priv->base + SSPHY_TESTO) & TESTO_DAT_MASK;
14409 +
14410 + /* update value */
14411 +- val &= ~FIELD_PREP(TESTI_DAT_MASK, field_mask);
14412 ++ val &= ~field_mask;
14413 + data = field_mask & (p->value << p->field.lsb);
14414 +- val = FIELD_PREP(TESTI_DAT_MASK, data);
14415 ++ val = FIELD_PREP(TESTI_DAT_MASK, data | val);
14416 + val |= FIELD_PREP(TESTI_ADR_MASK, p->field.reg_no);
14417 + uniphier_u3ssphy_testio_write(priv, val);
14418 + uniphier_u3ssphy_testio_write(priv, val | TESTI_WR_EN);
14419 +diff --git a/drivers/power/reset/mt6323-poweroff.c b/drivers/power/reset/mt6323-poweroff.c
14420 +index 0532803e6cbc4..d90e76fcb9383 100644
14421 +--- a/drivers/power/reset/mt6323-poweroff.c
14422 ++++ b/drivers/power/reset/mt6323-poweroff.c
14423 +@@ -57,6 +57,9 @@ static int mt6323_pwrc_probe(struct platform_device *pdev)
14424 + return -ENOMEM;
14425 +
14426 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
14427 ++ if (!res)
14428 ++ return -EINVAL;
14429 ++
14430 + pwrc->base = res->start;
14431 + pwrc->regmap = mt6397_chip->regmap;
14432 + pwrc->dev = &pdev->dev;
14433 +diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
14434 +index bb944ee5fe3b1..03e146e98abd5 100644
14435 +--- a/drivers/regulator/qcom_smd-regulator.c
14436 ++++ b/drivers/regulator/qcom_smd-regulator.c
14437 +@@ -9,6 +9,7 @@
14438 + #include <linux/of_device.h>
14439 + #include <linux/platform_device.h>
14440 + #include <linux/regulator/driver.h>
14441 ++#include <linux/regulator/of_regulator.h>
14442 + #include <linux/soc/qcom/smd-rpm.h>
14443 +
14444 + struct qcom_rpm_reg {
14445 +@@ -1107,52 +1108,91 @@ static const struct of_device_id rpm_of_match[] = {
14446 + };
14447 + MODULE_DEVICE_TABLE(of, rpm_of_match);
14448 +
14449 +-static int rpm_reg_probe(struct platform_device *pdev)
14450 ++/**
14451 ++ * rpm_regulator_init_vreg() - initialize all attributes of a qcom_smd-regulator
14452 ++ * @vreg: Pointer to the individual qcom_smd-regulator resource
14453 ++ * @dev: Pointer to the top level qcom_smd-regulator PMIC device
14454 ++ * @node: Pointer to the individual qcom_smd-regulator resource
14455 ++ * device node
14456 ++ * @rpm: Pointer to the rpm bus node
14457 ++ * @pmic_rpm_data: Pointer to a null-terminated array of qcom_smd-regulator
14458 ++ * resources defined for the top level PMIC device
14459 ++ *
14460 ++ * Return: 0 on success, errno on failure
14461 ++ */
14462 ++static int rpm_regulator_init_vreg(struct qcom_rpm_reg *vreg, struct device *dev,
14463 ++ struct device_node *node, struct qcom_smd_rpm *rpm,
14464 ++ const struct rpm_regulator_data *pmic_rpm_data)
14465 + {
14466 +- const struct rpm_regulator_data *reg;
14467 +- const struct of_device_id *match;
14468 +- struct regulator_config config = { };
14469 ++ struct regulator_config config = {};
14470 ++ const struct rpm_regulator_data *rpm_data;
14471 + struct regulator_dev *rdev;
14472 ++ int ret;
14473 ++
14474 ++ for (rpm_data = pmic_rpm_data; rpm_data->name; rpm_data++)
14475 ++ if (of_node_name_eq(node, rpm_data->name))
14476 ++ break;
14477 ++
14478 ++ if (!rpm_data->name) {
14479 ++ dev_err(dev, "Unknown regulator %pOFn\n", node);
14480 ++ return -EINVAL;
14481 ++ }
14482 ++
14483 ++ vreg->dev = dev;
14484 ++ vreg->rpm = rpm;
14485 ++ vreg->type = rpm_data->type;
14486 ++ vreg->id = rpm_data->id;
14487 ++
14488 ++ memcpy(&vreg->desc, rpm_data->desc, sizeof(vreg->desc));
14489 ++ vreg->desc.name = rpm_data->name;
14490 ++ vreg->desc.supply_name = rpm_data->supply;
14491 ++ vreg->desc.owner = THIS_MODULE;
14492 ++ vreg->desc.type = REGULATOR_VOLTAGE;
14493 ++ vreg->desc.of_match = rpm_data->name;
14494 ++
14495 ++ config.dev = dev;
14496 ++ config.of_node = node;
14497 ++ config.driver_data = vreg;
14498 ++
14499 ++ rdev = devm_regulator_register(dev, &vreg->desc, &config);
14500 ++ if (IS_ERR(rdev)) {
14501 ++ ret = PTR_ERR(rdev);
14502 ++ dev_err(dev, "%pOFn: devm_regulator_register() failed, ret=%d\n", node, ret);
14503 ++ return ret;
14504 ++ }
14505 ++
14506 ++ return 0;
14507 ++}
14508 ++
14509 ++static int rpm_reg_probe(struct platform_device *pdev)
14510 ++{
14511 ++ struct device *dev = &pdev->dev;
14512 ++ const struct rpm_regulator_data *vreg_data;
14513 ++ struct device_node *node;
14514 + struct qcom_rpm_reg *vreg;
14515 + struct qcom_smd_rpm *rpm;
14516 ++ int ret;
14517 +
14518 + rpm = dev_get_drvdata(pdev->dev.parent);
14519 + if (!rpm) {
14520 +- dev_err(&pdev->dev, "unable to retrieve handle to rpm\n");
14521 ++ dev_err(&pdev->dev, "Unable to retrieve handle to rpm\n");
14522 + return -ENODEV;
14523 + }
14524 +
14525 +- match = of_match_device(rpm_of_match, &pdev->dev);
14526 +- if (!match) {
14527 +- dev_err(&pdev->dev, "failed to match device\n");
14528 ++ vreg_data = of_device_get_match_data(dev);
14529 ++ if (!vreg_data)
14530 + return -ENODEV;
14531 +- }
14532 +
14533 +- for (reg = match->data; reg->name; reg++) {
14534 ++ for_each_available_child_of_node(dev->of_node, node) {
14535 + vreg = devm_kzalloc(&pdev->dev, sizeof(*vreg), GFP_KERNEL);
14536 + if (!vreg)
14537 + return -ENOMEM;
14538 +
14539 +- vreg->dev = &pdev->dev;
14540 +- vreg->type = reg->type;
14541 +- vreg->id = reg->id;
14542 +- vreg->rpm = rpm;
14543 +-
14544 +- memcpy(&vreg->desc, reg->desc, sizeof(vreg->desc));
14545 +-
14546 +- vreg->desc.id = -1;
14547 +- vreg->desc.owner = THIS_MODULE;
14548 +- vreg->desc.type = REGULATOR_VOLTAGE;
14549 +- vreg->desc.name = reg->name;
14550 +- vreg->desc.supply_name = reg->supply;
14551 +- vreg->desc.of_match = reg->name;
14552 +-
14553 +- config.dev = &pdev->dev;
14554 +- config.driver_data = vreg;
14555 +- rdev = devm_regulator_register(&pdev->dev, &vreg->desc, &config);
14556 +- if (IS_ERR(rdev)) {
14557 +- dev_err(&pdev->dev, "failed to register %s\n", reg->name);
14558 +- return PTR_ERR(rdev);
14559 ++ ret = rpm_regulator_init_vreg(vreg, dev, node, rpm, vreg_data);
14560 ++
14561 ++ if (ret < 0) {
14562 ++ of_node_put(node);
14563 ++ return ret;
14564 + }
14565 + }
14566 +
14567 +diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
14568 +index 91de940896e3d..028ca5961bc2a 100644
14569 +--- a/drivers/rpmsg/rpmsg_core.c
14570 ++++ b/drivers/rpmsg/rpmsg_core.c
14571 +@@ -473,13 +473,25 @@ static int rpmsg_dev_probe(struct device *dev)
14572 + err = rpdrv->probe(rpdev);
14573 + if (err) {
14574 + dev_err(dev, "%s: failed: %d\n", __func__, err);
14575 +- if (ept)
14576 +- rpmsg_destroy_ept(ept);
14577 +- goto out;
14578 ++ goto destroy_ept;
14579 + }
14580 +
14581 +- if (ept && rpdev->ops->announce_create)
14582 ++ if (ept && rpdev->ops->announce_create) {
14583 + err = rpdev->ops->announce_create(rpdev);
14584 ++ if (err) {
14585 ++ dev_err(dev, "failed to announce creation\n");
14586 ++ goto remove_rpdev;
14587 ++ }
14588 ++ }
14589 ++
14590 ++ return 0;
14591 ++
14592 ++remove_rpdev:
14593 ++ if (rpdrv->remove)
14594 ++ rpdrv->remove(rpdev);
14595 ++destroy_ept:
14596 ++ if (ept)
14597 ++ rpmsg_destroy_ept(ept);
14598 + out:
14599 + return err;
14600 + }
14601 +diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
14602 +index c633319cdb913..58c6382a2807c 100644
14603 +--- a/drivers/rtc/rtc-cmos.c
14604 ++++ b/drivers/rtc/rtc-cmos.c
14605 +@@ -463,7 +463,10 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
14606 + min = t->time.tm_min;
14607 + sec = t->time.tm_sec;
14608 +
14609 ++ spin_lock_irq(&rtc_lock);
14610 + rtc_control = CMOS_READ(RTC_CONTROL);
14611 ++ spin_unlock_irq(&rtc_lock);
14612 ++
14613 + if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
14614 + /* Writing 0xff means "don't care" or "match all". */
14615 + mon = (mon <= 12) ? bin2bcd(mon) : 0xff;
14616 +diff --git a/drivers/rtc/rtc-pxa.c b/drivers/rtc/rtc-pxa.c
14617 +index d2f1d8f754bf3..cf8119b6d3204 100644
14618 +--- a/drivers/rtc/rtc-pxa.c
14619 ++++ b/drivers/rtc/rtc-pxa.c
14620 +@@ -330,6 +330,10 @@ static int __init pxa_rtc_probe(struct platform_device *pdev)
14621 + if (sa1100_rtc->irq_alarm < 0)
14622 + return -ENXIO;
14623 +
14624 ++ sa1100_rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
14625 ++ if (IS_ERR(sa1100_rtc->rtc))
14626 ++ return PTR_ERR(sa1100_rtc->rtc);
14627 ++
14628 + pxa_rtc->base = devm_ioremap(dev, pxa_rtc->ress->start,
14629 + resource_size(pxa_rtc->ress));
14630 + if (!pxa_rtc->base) {
14631 +diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
14632 +index 93e507677bdcb..0273bf3918ff3 100644
14633 +--- a/drivers/scsi/lpfc/lpfc.h
14634 ++++ b/drivers/scsi/lpfc/lpfc.h
14635 +@@ -763,7 +763,6 @@ struct lpfc_hba {
14636 + #define HBA_DEVLOSS_TMO 0x2000 /* HBA in devloss timeout */
14637 + #define HBA_RRQ_ACTIVE 0x4000 /* process the rrq active list */
14638 + #define HBA_IOQ_FLUSH 0x8000 /* FCP/NVME I/O queues being flushed */
14639 +-#define HBA_FW_DUMP_OP 0x10000 /* Skips fn reset before FW dump */
14640 + #define HBA_RECOVERABLE_UE 0x20000 /* Firmware supports recoverable UE */
14641 + #define HBA_FORCED_LINK_SPEED 0x40000 /*
14642 + * Firmware supports Forced Link Speed
14643 +@@ -772,6 +771,7 @@ struct lpfc_hba {
14644 + #define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */
14645 + #define HBA_DEFER_FLOGI 0x800000 /* Defer FLOGI till read_sparm cmpl */
14646 +
14647 ++ struct completion *fw_dump_cmpl; /* cmpl event tracker for fw_dump */
14648 + uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
14649 + struct lpfc_dmabuf slim2p;
14650 +
14651 +diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
14652 +index 2c59a5bf35390..727b7ba4d8f82 100644
14653 +--- a/drivers/scsi/lpfc/lpfc_attr.c
14654 ++++ b/drivers/scsi/lpfc/lpfc_attr.c
14655 +@@ -1536,25 +1536,25 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
14656 + before_fc_flag = phba->pport->fc_flag;
14657 + sriov_nr_virtfn = phba->cfg_sriov_nr_virtfn;
14658 +
14659 +- /* Disable SR-IOV virtual functions if enabled */
14660 +- if (phba->cfg_sriov_nr_virtfn) {
14661 +- pci_disable_sriov(pdev);
14662 +- phba->cfg_sriov_nr_virtfn = 0;
14663 +- }
14664 ++ if (opcode == LPFC_FW_DUMP) {
14665 ++ init_completion(&online_compl);
14666 ++ phba->fw_dump_cmpl = &online_compl;
14667 ++ } else {
14668 ++ /* Disable SR-IOV virtual functions if enabled */
14669 ++ if (phba->cfg_sriov_nr_virtfn) {
14670 ++ pci_disable_sriov(pdev);
14671 ++ phba->cfg_sriov_nr_virtfn = 0;
14672 ++ }
14673 +
14674 +- if (opcode == LPFC_FW_DUMP)
14675 +- phba->hba_flag |= HBA_FW_DUMP_OP;
14676 ++ status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
14677 +
14678 +- status = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
14679 ++ if (status != 0)
14680 ++ return status;
14681 +
14682 +- if (status != 0) {
14683 +- phba->hba_flag &= ~HBA_FW_DUMP_OP;
14684 +- return status;
14685 ++ /* wait for the device to be quiesced before firmware reset */
14686 ++ msleep(100);
14687 + }
14688 +
14689 +- /* wait for the device to be quiesced before firmware reset */
14690 +- msleep(100);
14691 +-
14692 + reg_val = readl(phba->sli4_hba.conf_regs_memmap_p +
14693 + LPFC_CTL_PDEV_CTL_OFFSET);
14694 +
14695 +@@ -1583,24 +1583,42 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
14696 + lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
14697 + "3153 Fail to perform the requested "
14698 + "access: x%x\n", reg_val);
14699 ++ if (phba->fw_dump_cmpl)
14700 ++ phba->fw_dump_cmpl = NULL;
14701 + return rc;
14702 + }
14703 +
14704 + /* keep the original port state */
14705 +- if (before_fc_flag & FC_OFFLINE_MODE)
14706 +- goto out;
14707 +-
14708 +- init_completion(&online_compl);
14709 +- job_posted = lpfc_workq_post_event(phba, &status, &online_compl,
14710 +- LPFC_EVT_ONLINE);
14711 +- if (!job_posted)
14712 ++ if (before_fc_flag & FC_OFFLINE_MODE) {
14713 ++ if (phba->fw_dump_cmpl)
14714 ++ phba->fw_dump_cmpl = NULL;
14715 + goto out;
14716 ++ }
14717 +
14718 +- wait_for_completion(&online_compl);
14719 ++ /* Firmware dump will trigger an HA_ERATT event, and
14720 ++ * lpfc_handle_eratt_s4 routine already handles bringing the port back
14721 ++ * online.
14722 ++ */
14723 ++ if (opcode == LPFC_FW_DUMP) {
14724 ++ wait_for_completion(phba->fw_dump_cmpl);
14725 ++ } else {
14726 ++ init_completion(&online_compl);
14727 ++ job_posted = lpfc_workq_post_event(phba, &status, &online_compl,
14728 ++ LPFC_EVT_ONLINE);
14729 ++ if (!job_posted)
14730 ++ goto out;
14731 +
14732 ++ wait_for_completion(&online_compl);
14733 ++ }
14734 + out:
14735 + /* in any case, restore the virtual functions enabled as before */
14736 + if (sriov_nr_virtfn) {
14737 ++ /* If fw_dump was performed, first disable to clean up */
14738 ++ if (opcode == LPFC_FW_DUMP) {
14739 ++ pci_disable_sriov(pdev);
14740 ++ phba->cfg_sriov_nr_virtfn = 0;
14741 ++ }
14742 ++
14743 + sriov_err =
14744 + lpfc_sli_probe_sriov_nr_virtfn(phba, sriov_nr_virtfn);
14745 + if (!sriov_err)
14746 +diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
14747 +index f4a672e549716..68ff233f936e5 100644
14748 +--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
14749 ++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
14750 +@@ -635,10 +635,16 @@ lpfc_work_done(struct lpfc_hba *phba)
14751 + if (phba->pci_dev_grp == LPFC_PCI_DEV_OC)
14752 + lpfc_sli4_post_async_mbox(phba);
14753 +
14754 +- if (ha_copy & HA_ERATT)
14755 ++ if (ha_copy & HA_ERATT) {
14756 + /* Handle the error attention event */
14757 + lpfc_handle_eratt(phba);
14758 +
14759 ++ if (phba->fw_dump_cmpl) {
14760 ++ complete(phba->fw_dump_cmpl);
14761 ++ phba->fw_dump_cmpl = NULL;
14762 ++ }
14763 ++ }
14764 ++
14765 + if (ha_copy & HA_MBATT)
14766 + lpfc_sli_handle_mb_event(phba);
14767 +
14768 +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
14769 +index 06a23718a7c7f..1a9522baba484 100644
14770 +--- a/drivers/scsi/lpfc/lpfc_sli.c
14771 ++++ b/drivers/scsi/lpfc/lpfc_sli.c
14772 +@@ -4629,12 +4629,6 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
14773 + phba->fcf.fcf_flag = 0;
14774 + spin_unlock_irq(&phba->hbalock);
14775 +
14776 +- /* SLI4 INTF 2: if FW dump is being taken skip INIT_PORT */
14777 +- if (phba->hba_flag & HBA_FW_DUMP_OP) {
14778 +- phba->hba_flag &= ~HBA_FW_DUMP_OP;
14779 +- return rc;
14780 +- }
14781 +-
14782 + /* Now physically reset the device */
14783 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
14784 + "0389 Performing PCI function reset!\n");
14785 +diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
14786 +index 5d751628a6340..9b318958d78cc 100644
14787 +--- a/drivers/scsi/pm8001/pm8001_hwi.c
14788 ++++ b/drivers/scsi/pm8001/pm8001_hwi.c
14789 +@@ -1323,7 +1323,9 @@ int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha,
14790 + int q_index = circularQ - pm8001_ha->inbnd_q_tbl;
14791 + int rv = -1;
14792 +
14793 +- WARN_ON(q_index >= PM8001_MAX_INB_NUM);
14794 ++ if (WARN_ON(q_index >= pm8001_ha->max_q_num))
14795 ++ return -EINVAL;
14796 ++
14797 + spin_lock_irqsave(&circularQ->iq_lock, flags);
14798 + rv = pm8001_mpi_msg_free_get(circularQ, pm8001_ha->iomb_size,
14799 + &pMessage);
14800 +diff --git a/drivers/scsi/scsi_debugfs.c b/drivers/scsi/scsi_debugfs.c
14801 +index c19ea7ab54cbd..d9a18124cfc9d 100644
14802 +--- a/drivers/scsi/scsi_debugfs.c
14803 ++++ b/drivers/scsi/scsi_debugfs.c
14804 +@@ -10,6 +10,7 @@ static const char *const scsi_cmd_flags[] = {
14805 + SCSI_CMD_FLAG_NAME(TAGGED),
14806 + SCSI_CMD_FLAG_NAME(UNCHECKED_ISA_DMA),
14807 + SCSI_CMD_FLAG_NAME(INITIALIZED),
14808 ++ SCSI_CMD_FLAG_NAME(LAST),
14809 + };
14810 + #undef SCSI_CMD_FLAG_NAME
14811 +
14812 +diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
14813 +index 3717eea37ecb3..e91a0a5bc7a3e 100644
14814 +--- a/drivers/scsi/scsi_pm.c
14815 ++++ b/drivers/scsi/scsi_pm.c
14816 +@@ -262,7 +262,7 @@ static int sdev_runtime_resume(struct device *dev)
14817 + blk_pre_runtime_resume(sdev->request_queue);
14818 + if (pm && pm->runtime_resume)
14819 + err = pm->runtime_resume(dev);
14820 +- blk_post_runtime_resume(sdev->request_queue, err);
14821 ++ blk_post_runtime_resume(sdev->request_queue);
14822 +
14823 + return err;
14824 + }
14825 +diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
14826 +index 4cb4ab9c6137e..464418413ced0 100644
14827 +--- a/drivers/scsi/sr.c
14828 ++++ b/drivers/scsi/sr.c
14829 +@@ -917,7 +917,7 @@ static void get_capabilities(struct scsi_cd *cd)
14830 +
14831 +
14832 + /* allocate transfer buffer */
14833 +- buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
14834 ++ buffer = kmalloc(512, GFP_KERNEL);
14835 + if (!buffer) {
14836 + sr_printk(KERN_ERR, cd, "out of memory.\n");
14837 + return;
14838 +diff --git a/drivers/scsi/sr_vendor.c b/drivers/scsi/sr_vendor.c
14839 +index 1f988a1b9166f..a61635326ae0a 100644
14840 +--- a/drivers/scsi/sr_vendor.c
14841 ++++ b/drivers/scsi/sr_vendor.c
14842 +@@ -131,7 +131,7 @@ int sr_set_blocklength(Scsi_CD *cd, int blocklength)
14843 + if (cd->vendor == VENDOR_TOSHIBA)
14844 + density = (blocklength > 2048) ? 0x81 : 0x83;
14845 +
14846 +- buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
14847 ++ buffer = kmalloc(512, GFP_KERNEL);
14848 + if (!buffer)
14849 + return -ENOMEM;
14850 +
14851 +@@ -179,7 +179,7 @@ int sr_cd_check(struct cdrom_device_info *cdi)
14852 + if (cd->cdi.mask & CDC_MULTI_SESSION)
14853 + return 0;
14854 +
14855 +- buffer = kmalloc(512, GFP_KERNEL | GFP_DMA);
14856 ++ buffer = kmalloc(512, GFP_KERNEL);
14857 + if (!buffer)
14858 + return -ENOMEM;
14859 +
14860 +diff --git a/drivers/scsi/ufs/tc-dwc-g210-pci.c b/drivers/scsi/ufs/tc-dwc-g210-pci.c
14861 +index 67a6a61154b71..4e471484539d2 100644
14862 +--- a/drivers/scsi/ufs/tc-dwc-g210-pci.c
14863 ++++ b/drivers/scsi/ufs/tc-dwc-g210-pci.c
14864 +@@ -135,7 +135,6 @@ tc_dwc_g210_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
14865 + return err;
14866 + }
14867 +
14868 +- pci_set_drvdata(pdev, hba);
14869 + pm_runtime_put_noidle(&pdev->dev);
14870 + pm_runtime_allow(&pdev->dev);
14871 +
14872 +diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
14873 +index fadd566025b86..4bf8ec88676ee 100644
14874 +--- a/drivers/scsi/ufs/ufshcd-pci.c
14875 ++++ b/drivers/scsi/ufs/ufshcd-pci.c
14876 +@@ -347,8 +347,6 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
14877 + return err;
14878 + }
14879 +
14880 +- pci_set_drvdata(pdev, hba);
14881 +-
14882 + hba->vops = (struct ufs_hba_variant_ops *)id->driver_data;
14883 +
14884 + err = ufshcd_init(hba, mmio_base, pdev->irq);
14885 +diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
14886 +index 8c92d1bde64be..e49505534d498 100644
14887 +--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
14888 ++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
14889 +@@ -412,8 +412,6 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
14890 + goto dealloc_host;
14891 + }
14892 +
14893 +- platform_set_drvdata(pdev, hba);
14894 +-
14895 + pm_runtime_set_active(&pdev->dev);
14896 + pm_runtime_enable(&pdev->dev);
14897 +
14898 +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
14899 +index e3a9a02cadf5a..bf302776340ce 100644
14900 +--- a/drivers/scsi/ufs/ufshcd.c
14901 ++++ b/drivers/scsi/ufs/ufshcd.c
14902 +@@ -9085,6 +9085,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
14903 + struct device *dev = hba->dev;
14904 + char eh_wq_name[sizeof("ufs_eh_wq_00")];
14905 +
14906 ++ /*
14907 ++ * dev_set_drvdata() must be called before any callbacks are registered
14908 ++ * that use dev_get_drvdata() (frequency scaling, clock scaling, hwmon,
14909 ++ * sysfs).
14910 ++ */
14911 ++ dev_set_drvdata(dev, hba);
14912 ++
14913 + if (!mmio_base) {
14914 + dev_err(hba->dev,
14915 + "Invalid memory reference for mmio_base is NULL\n");
14916 +diff --git a/drivers/soc/mediatek/mtk-scpsys.c b/drivers/soc/mediatek/mtk-scpsys.c
14917 +index ca75b14931ec9..670cc82d17dc2 100644
14918 +--- a/drivers/soc/mediatek/mtk-scpsys.c
14919 ++++ b/drivers/soc/mediatek/mtk-scpsys.c
14920 +@@ -411,12 +411,17 @@ out:
14921 + return ret;
14922 + }
14923 +
14924 +-static void init_clks(struct platform_device *pdev, struct clk **clk)
14925 ++static int init_clks(struct platform_device *pdev, struct clk **clk)
14926 + {
14927 + int i;
14928 +
14929 +- for (i = CLK_NONE + 1; i < CLK_MAX; i++)
14930 ++ for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
14931 + clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
14932 ++ if (IS_ERR(clk[i]))
14933 ++ return PTR_ERR(clk[i]);
14934 ++ }
14935 ++
14936 ++ return 0;
14937 + }
14938 +
14939 + static struct scp *init_scp(struct platform_device *pdev,
14940 +@@ -426,7 +431,7 @@ static struct scp *init_scp(struct platform_device *pdev,
14941 + {
14942 + struct genpd_onecell_data *pd_data;
14943 + struct resource *res;
14944 +- int i, j;
14945 ++ int i, j, ret;
14946 + struct scp *scp;
14947 + struct clk *clk[CLK_MAX];
14948 +
14949 +@@ -481,7 +486,9 @@ static struct scp *init_scp(struct platform_device *pdev,
14950 +
14951 + pd_data->num_domains = num;
14952 +
14953 +- init_clks(pdev, clk);
14954 ++ ret = init_clks(pdev, clk);
14955 ++ if (ret)
14956 ++ return ERR_PTR(ret);
14957 +
14958 + for (i = 0; i < num; i++) {
14959 + struct scp_domain *scpd = &scp->domains[i];
14960 +diff --git a/drivers/soc/qcom/cpr.c b/drivers/soc/qcom/cpr.c
14961 +index b24cc77d1889f..6298561bc29c9 100644
14962 +--- a/drivers/soc/qcom/cpr.c
14963 ++++ b/drivers/soc/qcom/cpr.c
14964 +@@ -1043,7 +1043,7 @@ static int cpr_interpolate(const struct corner *corner, int step_volt,
14965 + return corner->uV;
14966 +
14967 + temp = f_diff * (uV_high - uV_low);
14968 +- do_div(temp, f_high - f_low);
14969 ++ temp = div64_ul(temp, f_high - f_low);
14970 +
14971 + /*
14972 + * max_volt_scale has units of uV/MHz while freq values
14973 +diff --git a/drivers/soc/ti/pruss.c b/drivers/soc/ti/pruss.c
14974 +index cc0b4ad7a3d34..30695172a508f 100644
14975 +--- a/drivers/soc/ti/pruss.c
14976 ++++ b/drivers/soc/ti/pruss.c
14977 +@@ -131,7 +131,7 @@ static int pruss_clk_init(struct pruss *pruss, struct device_node *cfg_node)
14978 +
14979 + clks_np = of_get_child_by_name(cfg_node, "clocks");
14980 + if (!clks_np) {
14981 +- dev_err(dev, "%pOF is missing its 'clocks' node\n", clks_np);
14982 ++ dev_err(dev, "%pOF is missing its 'clocks' node\n", cfg_node);
14983 + return -ENODEV;
14984 + }
14985 +
14986 +diff --git a/drivers/spi/spi-meson-spifc.c b/drivers/spi/spi-meson-spifc.c
14987 +index 8eca6f24cb799..c8ed7815c4ba6 100644
14988 +--- a/drivers/spi/spi-meson-spifc.c
14989 ++++ b/drivers/spi/spi-meson-spifc.c
14990 +@@ -349,6 +349,7 @@ static int meson_spifc_probe(struct platform_device *pdev)
14991 + return 0;
14992 + out_clk:
14993 + clk_disable_unprepare(spifc->clk);
14994 ++ pm_runtime_disable(spifc->dev);
14995 + out_err:
14996 + spi_master_put(master);
14997 + return ret;
14998 +diff --git a/drivers/spi/spi-uniphier.c b/drivers/spi/spi-uniphier.c
14999 +index 6a9ef8ee3cc90..e5c234aecf675 100644
15000 +--- a/drivers/spi/spi-uniphier.c
15001 ++++ b/drivers/spi/spi-uniphier.c
15002 +@@ -767,12 +767,13 @@ out_master_put:
15003 +
15004 + static int uniphier_spi_remove(struct platform_device *pdev)
15005 + {
15006 +- struct uniphier_spi_priv *priv = platform_get_drvdata(pdev);
15007 ++ struct spi_master *master = platform_get_drvdata(pdev);
15008 ++ struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
15009 +
15010 +- if (priv->master->dma_tx)
15011 +- dma_release_channel(priv->master->dma_tx);
15012 +- if (priv->master->dma_rx)
15013 +- dma_release_channel(priv->master->dma_rx);
15014 ++ if (master->dma_tx)
15015 ++ dma_release_channel(master->dma_tx);
15016 ++ if (master->dma_rx)
15017 ++ dma_release_channel(master->dma_rx);
15018 +
15019 + clk_disable_unprepare(priv->clk);
15020 +
15021 +diff --git a/drivers/staging/greybus/audio_topology.c b/drivers/staging/greybus/audio_topology.c
15022 +index 2bb8e7b60e8d5..e1579f356af5c 100644
15023 +--- a/drivers/staging/greybus/audio_topology.c
15024 ++++ b/drivers/staging/greybus/audio_topology.c
15025 +@@ -147,6 +147,9 @@ static const char **gb_generate_enum_strings(struct gbaudio_module_info *gb,
15026 +
15027 + items = le32_to_cpu(gbenum->items);
15028 + strings = devm_kcalloc(gb->dev, items, sizeof(char *), GFP_KERNEL);
15029 ++ if (!strings)
15030 ++ return NULL;
15031 ++
15032 + data = gbenum->names;
15033 +
15034 + for (i = 0; i < items; i++) {
15035 +@@ -655,6 +658,8 @@ static int gbaudio_tplg_create_enum_kctl(struct gbaudio_module_info *gb,
15036 + /* since count=1, and reg is dummy */
15037 + gbe->items = le32_to_cpu(gb_enum->items);
15038 + gbe->texts = gb_generate_enum_strings(gb, gb_enum);
15039 ++ if (!gbe->texts)
15040 ++ return -ENOMEM;
15041 +
15042 + /* debug enum info */
15043 + dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->items,
15044 +@@ -862,6 +867,8 @@ static int gbaudio_tplg_create_enum_ctl(struct gbaudio_module_info *gb,
15045 + /* since count=1, and reg is dummy */
15046 + gbe->items = le32_to_cpu(gb_enum->items);
15047 + gbe->texts = gb_generate_enum_strings(gb, gb_enum);
15048 ++ if (!gbe->texts)
15049 ++ return -ENOMEM;
15050 +
15051 + /* debug enum info */
15052 + dev_dbg(gb->dev, "Max:%d, name_length:%d\n", gbe->items,
15053 +@@ -1072,6 +1079,10 @@ static int gbaudio_tplg_create_widget(struct gbaudio_module_info *module,
15054 + csize += le16_to_cpu(gbenum->names_length);
15055 + control->texts = (const char * const *)
15056 + gb_generate_enum_strings(module, gbenum);
15057 ++ if (!control->texts) {
15058 ++ ret = -ENOMEM;
15059 ++ goto error;
15060 ++ }
15061 + control->items = le32_to_cpu(gbenum->items);
15062 + } else {
15063 + csize = sizeof(struct gb_audio_control);
15064 +@@ -1181,6 +1192,10 @@ static int gbaudio_tplg_process_kcontrols(struct gbaudio_module_info *module,
15065 + csize += le16_to_cpu(gbenum->names_length);
15066 + control->texts = (const char * const *)
15067 + gb_generate_enum_strings(module, gbenum);
15068 ++ if (!control->texts) {
15069 ++ ret = -ENOMEM;
15070 ++ goto error;
15071 ++ }
15072 + control->items = le32_to_cpu(gbenum->items);
15073 + } else {
15074 + csize = sizeof(struct gb_audio_control);
15075 +diff --git a/drivers/staging/media/atomisp/i2c/ov2680.h b/drivers/staging/media/atomisp/i2c/ov2680.h
15076 +index 49920245e0647..cafb798a71abe 100644
15077 +--- a/drivers/staging/media/atomisp/i2c/ov2680.h
15078 ++++ b/drivers/staging/media/atomisp/i2c/ov2680.h
15079 +@@ -289,8 +289,6 @@ static struct ov2680_reg const ov2680_global_setting[] = {
15080 + */
15081 + static struct ov2680_reg const ov2680_QCIF_30fps[] = {
15082 + {0x3086, 0x01},
15083 +- {0x3501, 0x24},
15084 +- {0x3502, 0x40},
15085 + {0x370a, 0x23},
15086 + {0x3801, 0xa0},
15087 + {0x3802, 0x00},
15088 +@@ -334,8 +332,6 @@ static struct ov2680_reg const ov2680_QCIF_30fps[] = {
15089 + */
15090 + static struct ov2680_reg const ov2680_CIF_30fps[] = {
15091 + {0x3086, 0x01},
15092 +- {0x3501, 0x24},
15093 +- {0x3502, 0x40},
15094 + {0x370a, 0x23},
15095 + {0x3801, 0xa0},
15096 + {0x3802, 0x00},
15097 +@@ -377,8 +373,6 @@ static struct ov2680_reg const ov2680_CIF_30fps[] = {
15098 + */
15099 + static struct ov2680_reg const ov2680_QVGA_30fps[] = {
15100 + {0x3086, 0x01},
15101 +- {0x3501, 0x24},
15102 +- {0x3502, 0x40},
15103 + {0x370a, 0x23},
15104 + {0x3801, 0xa0},
15105 + {0x3802, 0x00},
15106 +@@ -420,8 +414,6 @@ static struct ov2680_reg const ov2680_QVGA_30fps[] = {
15107 + */
15108 + static struct ov2680_reg const ov2680_656x496_30fps[] = {
15109 + {0x3086, 0x01},
15110 +- {0x3501, 0x24},
15111 +- {0x3502, 0x40},
15112 + {0x370a, 0x23},
15113 + {0x3801, 0xa0},
15114 + {0x3802, 0x00},
15115 +@@ -463,8 +455,6 @@ static struct ov2680_reg const ov2680_656x496_30fps[] = {
15116 + */
15117 + static struct ov2680_reg const ov2680_720x592_30fps[] = {
15118 + {0x3086, 0x01},
15119 +- {0x3501, 0x26},
15120 +- {0x3502, 0x40},
15121 + {0x370a, 0x23},
15122 + {0x3801, 0x00}, // X_ADDR_START;
15123 + {0x3802, 0x00},
15124 +@@ -508,8 +498,6 @@ static struct ov2680_reg const ov2680_720x592_30fps[] = {
15125 + */
15126 + static struct ov2680_reg const ov2680_800x600_30fps[] = {
15127 + {0x3086, 0x01},
15128 +- {0x3501, 0x26},
15129 +- {0x3502, 0x40},
15130 + {0x370a, 0x23},
15131 + {0x3801, 0x00},
15132 + {0x3802, 0x00},
15133 +@@ -551,8 +539,6 @@ static struct ov2680_reg const ov2680_800x600_30fps[] = {
15134 + */
15135 + static struct ov2680_reg const ov2680_720p_30fps[] = {
15136 + {0x3086, 0x00},
15137 +- {0x3501, 0x48},
15138 +- {0x3502, 0xe0},
15139 + {0x370a, 0x21},
15140 + {0x3801, 0xa0},
15141 + {0x3802, 0x00},
15142 +@@ -594,8 +580,6 @@ static struct ov2680_reg const ov2680_720p_30fps[] = {
15143 + */
15144 + static struct ov2680_reg const ov2680_1296x976_30fps[] = {
15145 + {0x3086, 0x00},
15146 +- {0x3501, 0x48},
15147 +- {0x3502, 0xe0},
15148 + {0x370a, 0x21},
15149 + {0x3801, 0xa0},
15150 + {0x3802, 0x00},
15151 +@@ -637,8 +621,6 @@ static struct ov2680_reg const ov2680_1296x976_30fps[] = {
15152 + */
15153 + static struct ov2680_reg const ov2680_1456x1096_30fps[] = {
15154 + {0x3086, 0x00},
15155 +- {0x3501, 0x48},
15156 +- {0x3502, 0xe0},
15157 + {0x370a, 0x21},
15158 + {0x3801, 0x90},
15159 + {0x3802, 0x00},
15160 +@@ -682,8 +664,6 @@ static struct ov2680_reg const ov2680_1456x1096_30fps[] = {
15161 +
15162 + static struct ov2680_reg const ov2680_1616x916_30fps[] = {
15163 + {0x3086, 0x00},
15164 +- {0x3501, 0x48},
15165 +- {0x3502, 0xe0},
15166 + {0x370a, 0x21},
15167 + {0x3801, 0x00},
15168 + {0x3802, 0x00},
15169 +@@ -726,8 +706,6 @@ static struct ov2680_reg const ov2680_1616x916_30fps[] = {
15170 + #if 0
15171 + static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
15172 + {0x3086, 0x00},
15173 +- {0x3501, 0x48},
15174 +- {0x3502, 0xe0},
15175 + {0x370a, 0x21},
15176 + {0x3801, 0x00},
15177 + {0x3802, 0x00},
15178 +@@ -769,8 +747,6 @@ static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
15179 + */
15180 + static struct ov2680_reg const ov2680_1616x1216_30fps[] = {
15181 + {0x3086, 0x00},
15182 +- {0x3501, 0x48},
15183 +- {0x3502, 0xe0},
15184 + {0x370a, 0x21},
15185 + {0x3801, 0x00},
15186 + {0x3802, 0x00},
15187 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_cmd.c b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
15188 +index 592ea990d4ca4..90d50a693ce57 100644
15189 +--- a/drivers/staging/media/atomisp/pci/atomisp_cmd.c
15190 ++++ b/drivers/staging/media/atomisp/pci/atomisp_cmd.c
15191 +@@ -1138,9 +1138,10 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
15192 + asd->frame_status[vb->i] =
15193 + ATOMISP_FRAME_STATUS_OK;
15194 + }
15195 +- } else
15196 ++ } else {
15197 + asd->frame_status[vb->i] =
15198 + ATOMISP_FRAME_STATUS_OK;
15199 ++ }
15200 + } else {
15201 + asd->frame_status[vb->i] = ATOMISP_FRAME_STATUS_OK;
15202 + }
15203 +@@ -1714,6 +1715,12 @@ void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
15204 + {
15205 + unsigned long next;
15206 +
15207 ++ if (!pipe->asd) {
15208 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15209 ++ __func__, pipe->vdev.name);
15210 ++ return;
15211 ++ }
15212 ++
15213 + if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
15214 + pipe->wdt_duration = delay;
15215 +
15216 +@@ -1776,6 +1783,12 @@ void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay)
15217 + /* ISP2401 */
15218 + void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync)
15219 + {
15220 ++ if (!pipe->asd) {
15221 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15222 ++ __func__, pipe->vdev.name);
15223 ++ return;
15224 ++ }
15225 ++
15226 + if (!atomisp_is_wdt_running(pipe))
15227 + return;
15228 +
15229 +@@ -4108,6 +4121,12 @@ void atomisp_handle_parameter_and_buffer(struct atomisp_video_pipe *pipe)
15230 + unsigned long irqflags;
15231 + bool need_to_enqueue_buffer = false;
15232 +
15233 ++ if (!asd) {
15234 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15235 ++ __func__, pipe->vdev.name);
15236 ++ return;
15237 ++ }
15238 ++
15239 + if (atomisp_is_vf_pipe(pipe))
15240 + return;
15241 +
15242 +@@ -4195,6 +4214,12 @@ int atomisp_set_parameters(struct video_device *vdev,
15243 + struct atomisp_css_params *css_param = &asd->params.css_param;
15244 + int ret;
15245 +
15246 ++ if (!asd) {
15247 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15248 ++ __func__, vdev->name);
15249 ++ return -EINVAL;
15250 ++ }
15251 ++
15252 + if (!asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].stream) {
15253 + dev_err(asd->isp->dev, "%s: internal error!\n", __func__);
15254 + return -EINVAL;
15255 +@@ -4855,6 +4880,12 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_format *f,
15256 + int source_pad = atomisp_subdev_source_pad(vdev);
15257 + int ret;
15258 +
15259 ++ if (!asd) {
15260 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15261 ++ __func__, vdev->name);
15262 ++ return -EINVAL;
15263 ++ }
15264 ++
15265 + if (!isp->inputs[asd->input_curr].camera)
15266 + return -EINVAL;
15267 +
15268 +@@ -4945,9 +4976,9 @@ atomisp_try_fmt_file(struct atomisp_device *isp, struct v4l2_format *f)
15269 +
15270 + depth = get_pixel_depth(pixelformat);
15271 +
15272 +- if (field == V4L2_FIELD_ANY)
15273 ++ if (field == V4L2_FIELD_ANY) {
15274 + field = V4L2_FIELD_NONE;
15275 +- else if (field != V4L2_FIELD_NONE) {
15276 ++ } else if (field != V4L2_FIELD_NONE) {
15277 + dev_err(isp->dev, "Wrong output field\n");
15278 + return -EINVAL;
15279 + }
15280 +@@ -5201,6 +5232,12 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
15281 + const struct atomisp_in_fmt_conv *fc;
15282 + int ret, i;
15283 +
15284 ++ if (!asd) {
15285 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15286 ++ __func__, vdev->name);
15287 ++ return -EINVAL;
15288 ++ }
15289 ++
15290 + v4l2_fh_init(&fh.vfh, vdev);
15291 +
15292 + isp_sink_crop = atomisp_subdev_get_rect(
15293 +@@ -5512,6 +5549,7 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
15294 + unsigned int dvs_env_w, unsigned int dvs_env_h)
15295 + {
15296 + struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
15297 ++ struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
15298 + const struct atomisp_format_bridge *format;
15299 + struct v4l2_subdev_pad_config pad_cfg;
15300 + struct v4l2_subdev_format vformat = {
15301 +@@ -5527,6 +5565,12 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
15302 + struct v4l2_subdev_fh fh;
15303 + int ret;
15304 +
15305 ++ if (!asd) {
15306 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15307 ++ __func__, vdev->name);
15308 ++ return -EINVAL;
15309 ++ }
15310 ++
15311 + v4l2_fh_init(&fh.vfh, vdev);
15312 +
15313 + stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
15314 +@@ -5617,6 +5661,12 @@ int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
15315 + struct v4l2_subdev_fh fh;
15316 + int ret;
15317 +
15318 ++ if (!asd) {
15319 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15320 ++ __func__, vdev->name);
15321 ++ return -EINVAL;
15322 ++ }
15323 ++
15324 + if (source_pad >= ATOMISP_SUBDEV_PADS_NUM)
15325 + return -EINVAL;
15326 +
15327 +@@ -6050,6 +6100,12 @@ int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f)
15328 + struct v4l2_subdev_fh fh;
15329 + int ret;
15330 +
15331 ++ if (!asd) {
15332 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15333 ++ __func__, vdev->name);
15334 ++ return -EINVAL;
15335 ++ }
15336 ++
15337 + v4l2_fh_init(&fh.vfh, vdev);
15338 +
15339 + dev_dbg(isp->dev, "setting fmt %ux%u 0x%x for file inject\n",
15340 +@@ -6374,6 +6430,12 @@ bool atomisp_is_vf_pipe(struct atomisp_video_pipe *pipe)
15341 + {
15342 + struct atomisp_sub_device *asd = pipe->asd;
15343 +
15344 ++ if (!asd) {
15345 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15346 ++ __func__, pipe->vdev.name);
15347 ++ return false;
15348 ++ }
15349 ++
15350 + if (pipe == &asd->video_out_vf)
15351 + return true;
15352 +
15353 +@@ -6587,17 +6649,23 @@ static int atomisp_get_pipe_id(struct atomisp_video_pipe *pipe)
15354 + {
15355 + struct atomisp_sub_device *asd = pipe->asd;
15356 +
15357 +- if (ATOMISP_USE_YUVPP(asd))
15358 ++ if (!asd) {
15359 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15360 ++ __func__, pipe->vdev.name);
15361 ++ return -EINVAL;
15362 ++ }
15363 ++
15364 ++ if (ATOMISP_USE_YUVPP(asd)) {
15365 + return IA_CSS_PIPE_ID_YUVPP;
15366 +- else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER)
15367 ++ } else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
15368 + return IA_CSS_PIPE_ID_VIDEO;
15369 +- else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT)
15370 ++ } else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT) {
15371 + return IA_CSS_PIPE_ID_CAPTURE;
15372 +- else if (pipe == &asd->video_out_video_capture)
15373 ++ } else if (pipe == &asd->video_out_video_capture) {
15374 + return IA_CSS_PIPE_ID_VIDEO;
15375 +- else if (pipe == &asd->video_out_vf)
15376 ++ } else if (pipe == &asd->video_out_vf) {
15377 + return IA_CSS_PIPE_ID_CAPTURE;
15378 +- else if (pipe == &asd->video_out_preview) {
15379 ++ } else if (pipe == &asd->video_out_preview) {
15380 + if (asd->run_mode->val == ATOMISP_RUN_MODE_VIDEO)
15381 + return IA_CSS_PIPE_ID_VIDEO;
15382 + else
15383 +@@ -6624,6 +6692,12 @@ int atomisp_get_invalid_frame_num(struct video_device *vdev,
15384 + struct ia_css_pipe_info p_info;
15385 + int ret;
15386 +
15387 ++ if (!asd) {
15388 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15389 ++ __func__, vdev->name);
15390 ++ return -EINVAL;
15391 ++ }
15392 ++
15393 + if (asd->isp->inputs[asd->input_curr].camera_caps->
15394 + sensor[asd->sensor_curr].stream_num > 1) {
15395 + /* External ISP */
15396 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_fops.c b/drivers/staging/media/atomisp/pci/atomisp_fops.c
15397 +index f1e6b25978534..b751df31cc24c 100644
15398 +--- a/drivers/staging/media/atomisp/pci/atomisp_fops.c
15399 ++++ b/drivers/staging/media/atomisp/pci/atomisp_fops.c
15400 +@@ -877,6 +877,11 @@ done:
15401 + else
15402 + pipe->users++;
15403 + rt_mutex_unlock(&isp->mutex);
15404 ++
15405 ++ /* Ensure that a mode is set */
15406 ++ if (asd)
15407 ++ v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
15408 ++
15409 + return 0;
15410 +
15411 + css_error:
15412 +@@ -1171,6 +1176,12 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
15413 + u32 origin_size, new_size;
15414 + int ret;
15415 +
15416 ++ if (!asd) {
15417 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15418 ++ __func__, vdev->name);
15419 ++ return -EINVAL;
15420 ++ }
15421 ++
15422 + if (!(vma->vm_flags & (VM_WRITE | VM_READ)))
15423 + return -EACCES;
15424 +
15425 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
15426 +index 135994d44802c..34480ca164746 100644
15427 +--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
15428 ++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
15429 +@@ -481,7 +481,7 @@ fail:
15430 +
15431 + static u8 gmin_get_pmic_id_and_addr(struct device *dev)
15432 + {
15433 +- struct i2c_client *power;
15434 ++ struct i2c_client *power = NULL;
15435 + static u8 pmic_i2c_addr;
15436 +
15437 + if (pmic_id)
15438 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
15439 +index 9da82855552de..8a0648fd7c813 100644
15440 +--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
15441 ++++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
15442 +@@ -646,6 +646,12 @@ static int atomisp_g_input(struct file *file, void *fh, unsigned int *input)
15443 + struct atomisp_device *isp = video_get_drvdata(vdev);
15444 + struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
15445 +
15446 ++ if (!asd) {
15447 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15448 ++ __func__, vdev->name);
15449 ++ return -EINVAL;
15450 ++ }
15451 ++
15452 + rt_mutex_lock(&isp->mutex);
15453 + *input = asd->input_curr;
15454 + rt_mutex_unlock(&isp->mutex);
15455 +@@ -665,6 +671,12 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
15456 + struct v4l2_subdev *motor;
15457 + int ret;
15458 +
15459 ++ if (!asd) {
15460 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15461 ++ __func__, vdev->name);
15462 ++ return -EINVAL;
15463 ++ }
15464 ++
15465 + rt_mutex_lock(&isp->mutex);
15466 + if (input >= ATOM_ISP_MAX_INPUTS || input >= isp->input_cnt) {
15467 + dev_dbg(isp->dev, "input_cnt: %d\n", isp->input_cnt);
15468 +@@ -761,18 +773,33 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
15469 + struct video_device *vdev = video_devdata(file);
15470 + struct atomisp_device *isp = video_get_drvdata(vdev);
15471 + struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
15472 +- struct v4l2_subdev_mbus_code_enum code = { 0 };
15473 ++ struct v4l2_subdev_mbus_code_enum code = {
15474 ++ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
15475 ++ };
15476 ++ struct v4l2_subdev *camera;
15477 + unsigned int i, fi = 0;
15478 + int rval;
15479 +
15480 ++ if (!asd) {
15481 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15482 ++ __func__, vdev->name);
15483 ++ return -EINVAL;
15484 ++ }
15485 ++
15486 ++ camera = isp->inputs[asd->input_curr].camera;
15487 ++ if(!camera) {
15488 ++ dev_err(isp->dev, "%s(): camera is NULL, device is %s\n",
15489 ++ __func__, vdev->name);
15490 ++ return -EINVAL;
15491 ++ }
15492 ++
15493 + rt_mutex_lock(&isp->mutex);
15494 +- rval = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, pad,
15495 +- enum_mbus_code, NULL, &code);
15496 ++
15497 ++ rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code);
15498 + if (rval == -ENOIOCTLCMD) {
15499 + dev_warn(isp->dev,
15500 +- "enum_mbus_code pad op not supported. Please fix your sensor driver!\n");
15501 +- // rval = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
15502 +- // video, enum_mbus_fmt, 0, &code.code);
15503 ++ "enum_mbus_code pad op not supported by %s. Please fix your sensor driver!\n",
15504 ++ camera->name);
15505 + }
15506 + rt_mutex_unlock(&isp->mutex);
15507 +
15508 +@@ -802,6 +829,8 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
15509 + f->pixelformat = format->pixelformat;
15510 + return 0;
15511 + }
15512 ++ dev_err(isp->dev, "%s(): format for code %x not found.\n",
15513 ++ __func__, code.code);
15514 +
15515 + return -EINVAL;
15516 + }
15517 +@@ -834,6 +863,72 @@ static int atomisp_g_fmt_file(struct file *file, void *fh,
15518 + return 0;
15519 + }
15520 +
15521 ++static int atomisp_adjust_fmt(struct v4l2_format *f)
15522 ++{
15523 ++ const struct atomisp_format_bridge *format_bridge;
15524 ++ u32 padded_width;
15525 ++
15526 ++ format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
15527 ++
15528 ++ padded_width = f->fmt.pix.width + pad_w;
15529 ++
15530 ++ if (format_bridge->planar) {
15531 ++ f->fmt.pix.bytesperline = padded_width;
15532 ++ f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height *
15533 ++ DIV_ROUND_UP(format_bridge->depth *
15534 ++ padded_width, 8));
15535 ++ } else {
15536 ++ f->fmt.pix.bytesperline = DIV_ROUND_UP(format_bridge->depth *
15537 ++ padded_width, 8);
15538 ++ f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height * f->fmt.pix.bytesperline);
15539 ++ }
15540 ++
15541 ++ if (f->fmt.pix.field == V4L2_FIELD_ANY)
15542 ++ f->fmt.pix.field = V4L2_FIELD_NONE;
15543 ++
15544 ++ format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
15545 ++ if (!format_bridge)
15546 ++ return -EINVAL;
15547 ++
15548 ++ /* Currently, raw formats are broken!!! */
15549 ++ if (format_bridge->sh_fmt == IA_CSS_FRAME_FORMAT_RAW) {
15550 ++ f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
15551 ++
15552 ++ format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
15553 ++ if (!format_bridge)
15554 ++ return -EINVAL;
15555 ++ }
15556 ++
15557 ++ padded_width = f->fmt.pix.width + pad_w;
15558 ++
15559 ++ if (format_bridge->planar) {
15560 ++ f->fmt.pix.bytesperline = padded_width;
15561 ++ f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height *
15562 ++ DIV_ROUND_UP(format_bridge->depth *
15563 ++ padded_width, 8));
15564 ++ } else {
15565 ++ f->fmt.pix.bytesperline = DIV_ROUND_UP(format_bridge->depth *
15566 ++ padded_width, 8);
15567 ++ f->fmt.pix.sizeimage = PAGE_ALIGN(f->fmt.pix.height * f->fmt.pix.bytesperline);
15568 ++ }
15569 ++
15570 ++ if (f->fmt.pix.field == V4L2_FIELD_ANY)
15571 ++ f->fmt.pix.field = V4L2_FIELD_NONE;
15572 ++
15573 ++ /*
15574 ++ * FIXME: do we need to setup this differently, depending on the
15575 ++ * sensor or the pipeline?
15576 ++ */
15577 ++ f->fmt.pix.colorspace = V4L2_COLORSPACE_REC709;
15578 ++ f->fmt.pix.ycbcr_enc = V4L2_YCBCR_ENC_709;
15579 ++ f->fmt.pix.xfer_func = V4L2_XFER_FUNC_709;
15580 ++
15581 ++ f->fmt.pix.width -= pad_w;
15582 ++ f->fmt.pix.height -= pad_h;
15583 ++
15584 ++ return 0;
15585 ++}
15586 ++
15587 + /* This function looks up the closest available resolution. */
15588 + static int atomisp_try_fmt_cap(struct file *file, void *fh,
15589 + struct v4l2_format *f)
15590 +@@ -845,7 +940,11 @@ static int atomisp_try_fmt_cap(struct file *file, void *fh,
15591 + rt_mutex_lock(&isp->mutex);
15592 + ret = atomisp_try_fmt(vdev, f, NULL);
15593 + rt_mutex_unlock(&isp->mutex);
15594 +- return ret;
15595 ++
15596 ++ if (ret)
15597 ++ return ret;
15598 ++
15599 ++ return atomisp_adjust_fmt(f);
15600 + }
15601 +
15602 + static int atomisp_s_fmt_cap(struct file *file, void *fh,
15603 +@@ -1027,6 +1126,12 @@ int __atomisp_reqbufs(struct file *file, void *fh,
15604 + u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
15605 + int ret = 0, i = 0;
15606 +
15607 ++ if (!asd) {
15608 ++ dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
15609 ++ __func__, vdev->name);
15610 ++ return -EINVAL;
15611 ++ }
15612 ++
15613 + if (req->count == 0) {
15614 + mutex_lock(&pipe->capq.vb_lock);
15615 + if (!list_empty(&pipe->capq.stream))
15616 +@@ -1154,6 +1259,12 @@ static int atomisp_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
15617 + u32 pgnr;
15618 + int ret = 0;
15619 +
15620 ++ if (!asd) {
15621 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15622 ++ __func__, vdev->name);
15623 ++ return -EINVAL;
15624 ++ }
15625 ++
15626 + rt_mutex_lock(&isp->mutex);
15627 + if (isp->isp_fatal_error) {
15628 + ret = -EIO;
15629 +@@ -1389,6 +1500,12 @@ static int atomisp_dqbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
15630 + struct atomisp_device *isp = video_get_drvdata(vdev);
15631 + int ret = 0;
15632 +
15633 ++ if (!asd) {
15634 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15635 ++ __func__, vdev->name);
15636 ++ return -EINVAL;
15637 ++ }
15638 ++
15639 + rt_mutex_lock(&isp->mutex);
15640 +
15641 + if (isp->isp_fatal_error) {
15642 +@@ -1640,6 +1757,12 @@ static int atomisp_streamon(struct file *file, void *fh,
15643 + int ret = 0;
15644 + unsigned long irqflags;
15645 +
15646 ++ if (!asd) {
15647 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15648 ++ __func__, vdev->name);
15649 ++ return -EINVAL;
15650 ++ }
15651 ++
15652 + dev_dbg(isp->dev, "Start stream on pad %d for asd%d\n",
15653 + atomisp_subdev_source_pad(vdev), asd->index);
15654 +
15655 +@@ -1901,6 +2024,12 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
15656 + unsigned long flags;
15657 + bool first_streamoff = false;
15658 +
15659 ++ if (!asd) {
15660 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15661 ++ __func__, vdev->name);
15662 ++ return -EINVAL;
15663 ++ }
15664 ++
15665 + dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n",
15666 + atomisp_subdev_source_pad(vdev), asd->index);
15667 +
15668 +@@ -2150,6 +2279,12 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
15669 + struct atomisp_device *isp = video_get_drvdata(vdev);
15670 + int i, ret = -EINVAL;
15671 +
15672 ++ if (!asd) {
15673 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15674 ++ __func__, vdev->name);
15675 ++ return -EINVAL;
15676 ++ }
15677 ++
15678 + for (i = 0; i < ctrls_num; i++) {
15679 + if (ci_v4l2_controls[i].id == control->id) {
15680 + ret = 0;
15681 +@@ -2229,6 +2364,12 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
15682 + struct atomisp_device *isp = video_get_drvdata(vdev);
15683 + int i, ret = -EINVAL;
15684 +
15685 ++ if (!asd) {
15686 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15687 ++ __func__, vdev->name);
15688 ++ return -EINVAL;
15689 ++ }
15690 ++
15691 + for (i = 0; i < ctrls_num; i++) {
15692 + if (ci_v4l2_controls[i].id == control->id) {
15693 + ret = 0;
15694 +@@ -2310,6 +2451,12 @@ static int atomisp_queryctl(struct file *file, void *fh,
15695 + struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
15696 + struct atomisp_device *isp = video_get_drvdata(vdev);
15697 +
15698 ++ if (!asd) {
15699 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15700 ++ __func__, vdev->name);
15701 ++ return -EINVAL;
15702 ++ }
15703 ++
15704 + switch (qc->id) {
15705 + case V4L2_CID_FOCUS_ABSOLUTE:
15706 + case V4L2_CID_FOCUS_RELATIVE:
15707 +@@ -2355,6 +2502,12 @@ static int atomisp_camera_g_ext_ctrls(struct file *file, void *fh,
15708 + int i;
15709 + int ret = 0;
15710 +
15711 ++ if (!asd) {
15712 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15713 ++ __func__, vdev->name);
15714 ++ return -EINVAL;
15715 ++ }
15716 ++
15717 + if (!IS_ISP2401)
15718 + motor = isp->inputs[asd->input_curr].motor;
15719 + else
15720 +@@ -2466,6 +2619,12 @@ static int atomisp_camera_s_ext_ctrls(struct file *file, void *fh,
15721 + int i;
15722 + int ret = 0;
15723 +
15724 ++ if (!asd) {
15725 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15726 ++ __func__, vdev->name);
15727 ++ return -EINVAL;
15728 ++ }
15729 ++
15730 + if (!IS_ISP2401)
15731 + motor = isp->inputs[asd->input_curr].motor;
15732 + else
15733 +@@ -2591,6 +2750,12 @@ static int atomisp_g_parm(struct file *file, void *fh,
15734 + struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
15735 + struct atomisp_device *isp = video_get_drvdata(vdev);
15736 +
15737 ++ if (!asd) {
15738 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15739 ++ __func__, vdev->name);
15740 ++ return -EINVAL;
15741 ++ }
15742 ++
15743 + if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
15744 + dev_err(isp->dev, "unsupported v4l2 buf type\n");
15745 + return -EINVAL;
15746 +@@ -2613,6 +2778,12 @@ static int atomisp_s_parm(struct file *file, void *fh,
15747 + int rval;
15748 + int fps;
15749 +
15750 ++ if (!asd) {
15751 ++ dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
15752 ++ __func__, vdev->name);
15753 ++ return -EINVAL;
15754 ++ }
15755 ++
15756 + if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
15757 + dev_err(isp->dev, "unsupported v4l2 buf type\n");
15758 + return -EINVAL;
15759 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.c b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
15760 +index dcc2dd981ca60..628e85799274d 100644
15761 +--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.c
15762 ++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.c
15763 +@@ -1178,23 +1178,28 @@ static int isp_subdev_init_entities(struct atomisp_sub_device *asd)
15764 +
15765 + atomisp_init_acc_pipe(asd, &asd->video_acc);
15766 +
15767 +- ret = atomisp_video_init(&asd->video_in, "MEMORY");
15768 ++ ret = atomisp_video_init(&asd->video_in, "MEMORY",
15769 ++ ATOMISP_RUN_MODE_SDV);
15770 + if (ret < 0)
15771 + return ret;
15772 +
15773 +- ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE");
15774 ++ ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE",
15775 ++ ATOMISP_RUN_MODE_STILL_CAPTURE);
15776 + if (ret < 0)
15777 + return ret;
15778 +
15779 +- ret = atomisp_video_init(&asd->video_out_vf, "VIEWFINDER");
15780 ++ ret = atomisp_video_init(&asd->video_out_vf, "VIEWFINDER",
15781 ++ ATOMISP_RUN_MODE_CONTINUOUS_CAPTURE);
15782 + if (ret < 0)
15783 + return ret;
15784 +
15785 +- ret = atomisp_video_init(&asd->video_out_preview, "PREVIEW");
15786 ++ ret = atomisp_video_init(&asd->video_out_preview, "PREVIEW",
15787 ++ ATOMISP_RUN_MODE_PREVIEW);
15788 + if (ret < 0)
15789 + return ret;
15790 +
15791 +- ret = atomisp_video_init(&asd->video_out_video_capture, "VIDEO");
15792 ++ ret = atomisp_video_init(&asd->video_out_video_capture, "VIDEO",
15793 ++ ATOMISP_RUN_MODE_VIDEO);
15794 + if (ret < 0)
15795 + return ret;
15796 +
15797 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.h b/drivers/staging/media/atomisp/pci/atomisp_subdev.h
15798 +index 330a77eed8aa6..12215d7406169 100644
15799 +--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.h
15800 ++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.h
15801 +@@ -81,6 +81,9 @@ struct atomisp_video_pipe {
15802 + /* the link list to store per_frame parameters */
15803 + struct list_head per_frame_params;
15804 +
15805 ++ /* Store here the initial run mode */
15806 ++ unsigned int default_run_mode;
15807 ++
15808 + unsigned int buffers_in_css;
15809 +
15810 + /* irq_lock is used to protect video buffer state change operations and
15811 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
15812 +index fa1bd99cd6f17..8aeea74cfd06b 100644
15813 +--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
15814 ++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
15815 +@@ -447,7 +447,8 @@ const struct atomisp_dfs_config dfs_config_cht_soc = {
15816 + .dfs_table_size = ARRAY_SIZE(dfs_rules_cht_soc),
15817 + };
15818 +
15819 +-int atomisp_video_init(struct atomisp_video_pipe *video, const char *name)
15820 ++int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
15821 ++ unsigned int run_mode)
15822 + {
15823 + int ret;
15824 + const char *direction;
15825 +@@ -478,6 +479,7 @@ int atomisp_video_init(struct atomisp_video_pipe *video, const char *name)
15826 + "ATOMISP ISP %s %s", name, direction);
15827 + video->vdev.release = video_device_release_empty;
15828 + video_set_drvdata(&video->vdev, video->isp);
15829 ++ video->default_run_mode = run_mode;
15830 +
15831 + return 0;
15832 + }
15833 +@@ -711,15 +713,15 @@ static int atomisp_mrfld_power(struct atomisp_device *isp, bool enable)
15834 +
15835 + dev_dbg(isp->dev, "IUNIT power-%s.\n", enable ? "on" : "off");
15836 +
15837 +- /*WA:Enable DVFS*/
15838 ++ /* WA for P-Unit, if DVFS enabled, ISP timeout observed */
15839 + if (IS_CHT && enable)
15840 +- punit_ddr_dvfs_enable(true);
15841 ++ punit_ddr_dvfs_enable(false);
15842 +
15843 + /*
15844 + * FIXME:WA for ECS28A, with this sleep, CTS
15845 + * android.hardware.camera2.cts.CameraDeviceTest#testCameraDeviceAbort
15846 + * PASS, no impact on other platforms
15847 +- */
15848 ++ */
15849 + if (IS_BYT && enable)
15850 + msleep(10);
15851 +
15852 +@@ -727,7 +729,7 @@ static int atomisp_mrfld_power(struct atomisp_device *isp, bool enable)
15853 + iosf_mbi_modify(BT_MBI_UNIT_PMC, MBI_REG_READ, MRFLD_ISPSSPM0,
15854 + val, MRFLD_ISPSSPM0_ISPSSC_MASK);
15855 +
15856 +- /*WA:Enable DVFS*/
15857 ++ /* WA:Enable DVFS */
15858 + if (IS_CHT && !enable)
15859 + punit_ddr_dvfs_enable(true);
15860 +
15861 +@@ -1182,6 +1184,7 @@ static void atomisp_unregister_entities(struct atomisp_device *isp)
15862 +
15863 + v4l2_device_unregister(&isp->v4l2_dev);
15864 + media_device_unregister(&isp->media_dev);
15865 ++ media_device_cleanup(&isp->media_dev);
15866 + }
15867 +
15868 + static int atomisp_register_entities(struct atomisp_device *isp)
15869 +diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.h b/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
15870 +index 81bb356b81720..72611b8286a4a 100644
15871 +--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
15872 ++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.h
15873 +@@ -27,7 +27,8 @@ struct v4l2_device;
15874 + struct atomisp_device;
15875 + struct firmware;
15876 +
15877 +-int atomisp_video_init(struct atomisp_video_pipe *video, const char *name);
15878 ++int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
15879 ++ unsigned int run_mode);
15880 + void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name);
15881 + void atomisp_video_unregister(struct atomisp_video_pipe *video);
15882 + void atomisp_acc_unregister(struct atomisp_acc_pipe *video);
15883 +diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
15884 +index ddee04c8248d0..54a18921fbd15 100644
15885 +--- a/drivers/staging/media/atomisp/pci/sh_css.c
15886 ++++ b/drivers/staging/media/atomisp/pci/sh_css.c
15887 +@@ -527,6 +527,7 @@ ia_css_stream_input_format_bits_per_pixel(struct ia_css_stream *stream)
15888 + return bpp;
15889 + }
15890 +
15891 ++/* TODO: move define to proper file in tools */
15892 + #define GP_ISEL_TPG_MODE 0x90058
15893 +
15894 + #if !defined(ISP2401)
15895 +@@ -579,12 +580,8 @@ sh_css_config_input_network(struct ia_css_stream *stream) {
15896 + vblank_cycles = vblank_lines * (width + hblank_cycles);
15897 + sh_css_sp_configure_sync_gen(width, height, hblank_cycles,
15898 + vblank_cycles);
15899 +- if (!IS_ISP2401) {
15900 +- if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG) {
15901 +- /* TODO: move define to proper file in tools */
15902 +- ia_css_device_store_uint32(GP_ISEL_TPG_MODE, 0);
15903 +- }
15904 +- }
15905 ++ if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG)
15906 ++ ia_css_device_store_uint32(GP_ISEL_TPG_MODE, 0);
15907 + }
15908 + ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
15909 + "sh_css_config_input_network() leave:\n");
15910 +@@ -1019,16 +1016,14 @@ static bool sh_css_translate_stream_cfg_to_isys_stream_descr(
15911 + * ia_css_isys_stream_capture_indication() instead of
15912 + * ia_css_pipeline_sp_wait_for_isys_stream_N() as isp processing of
15913 + * capture takes longer than getting an ISYS frame
15914 +- *
15915 +- * Only 2401 relevant ??
15916 + */
15917 +-#if 0 // FIXME: NOT USED on Yocto Aero
15918 +- isys_stream_descr->polling_mode
15919 +- = early_polling ? INPUT_SYSTEM_POLL_ON_CAPTURE_REQUEST
15920 +- : INPUT_SYSTEM_POLL_ON_WAIT_FOR_FRAME;
15921 +- ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
15922 +- "sh_css_translate_stream_cfg_to_isys_stream_descr() leave:\n");
15923 +-#endif
15924 ++ if (IS_ISP2401) {
15925 ++ isys_stream_descr->polling_mode
15926 ++ = early_polling ? INPUT_SYSTEM_POLL_ON_CAPTURE_REQUEST
15927 ++ : INPUT_SYSTEM_POLL_ON_WAIT_FOR_FRAME;
15928 ++ ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
15929 ++ "sh_css_translate_stream_cfg_to_isys_stream_descr() leave:\n");
15930 ++ }
15931 +
15932 + return rc;
15933 + }
15934 +@@ -1451,7 +1446,7 @@ static void start_pipe(
15935 +
15936 + assert(me); /* all callers are in this file and call with non null argument */
15937 +
15938 +- if (!IS_ISP2401) {
15939 ++ if (IS_ISP2401) {
15940 + coord = &me->config.internal_frame_origin_bqs_on_sctbl;
15941 + params = me->stream->isp_params_configs;
15942 + }
15943 +diff --git a/drivers/staging/media/atomisp/pci/sh_css_mipi.c b/drivers/staging/media/atomisp/pci/sh_css_mipi.c
15944 +index d5ae7f0b5864b..651eda0469b23 100644
15945 +--- a/drivers/staging/media/atomisp/pci/sh_css_mipi.c
15946 ++++ b/drivers/staging/media/atomisp/pci/sh_css_mipi.c
15947 +@@ -389,17 +389,17 @@ static bool buffers_needed(struct ia_css_pipe *pipe)
15948 + {
15949 + if (!IS_ISP2401) {
15950 + if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_BUFFERED_SENSOR)
15951 +- return false;
15952 +- else
15953 + return true;
15954 ++ else
15955 ++ return false;
15956 + }
15957 +
15958 + if (pipe->stream->config.mode == IA_CSS_INPUT_MODE_BUFFERED_SENSOR ||
15959 + pipe->stream->config.mode == IA_CSS_INPUT_MODE_TPG ||
15960 + pipe->stream->config.mode == IA_CSS_INPUT_MODE_PRBS)
15961 +- return false;
15962 ++ return true;
15963 +
15964 +- return true;
15965 ++ return false;
15966 + }
15967 +
15968 + int
15969 +@@ -439,14 +439,17 @@ allocate_mipi_frames(struct ia_css_pipe *pipe,
15970 + return 0; /* AM TODO: Check */
15971 + }
15972 +
15973 +- if (!IS_ISP2401)
15974 ++ if (!IS_ISP2401) {
15975 + port = (unsigned int)pipe->stream->config.source.port.port;
15976 +- else
15977 +- err = ia_css_mipi_is_source_port_valid(pipe, &port);
15978 ++ } else {
15979 ++ /* Returns true if port is valid. So, invert it */
15980 ++ err = !ia_css_mipi_is_source_port_valid(pipe, &port);
15981 ++ }
15982 +
15983 + assert(port < N_CSI_PORTS);
15984 +
15985 +- if (port >= N_CSI_PORTS || err) {
15986 ++ if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
15987 ++ (IS_ISP2401 && err)) {
15988 + ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
15989 + "allocate_mipi_frames(%p) exit: error: port is not correct (port=%d).\n",
15990 + pipe, port);
15991 +@@ -571,14 +574,17 @@ free_mipi_frames(struct ia_css_pipe *pipe) {
15992 + return err;
15993 + }
15994 +
15995 +- if (!IS_ISP2401)
15996 ++ if (!IS_ISP2401) {
15997 + port = (unsigned int)pipe->stream->config.source.port.port;
15998 +- else
15999 +- err = ia_css_mipi_is_source_port_valid(pipe, &port);
16000 ++ } else {
16001 ++ /* Returns true if port is valid. So, invert it */
16002 ++ err = !ia_css_mipi_is_source_port_valid(pipe, &port);
16003 ++ }
16004 +
16005 + assert(port < N_CSI_PORTS);
16006 +
16007 +- if (port >= N_CSI_PORTS || err) {
16008 ++ if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
16009 ++ (IS_ISP2401 && err)) {
16010 + ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE_PRIVATE,
16011 + "free_mipi_frames(%p, %d) exit: error: pipe port is not correct.\n",
16012 + pipe, port);
16013 +@@ -683,14 +689,17 @@ send_mipi_frames(struct ia_css_pipe *pipe) {
16014 + /* TODO: AM: maybe this should be returning an error. */
16015 + }
16016 +
16017 +- if (!IS_ISP2401)
16018 ++ if (!IS_ISP2401) {
16019 + port = (unsigned int)pipe->stream->config.source.port.port;
16020 +- else
16021 +- err = ia_css_mipi_is_source_port_valid(pipe, &port);
16022 ++ } else {
16023 ++ /* Returns true if port is valid. So, invert it */
16024 ++ err = !ia_css_mipi_is_source_port_valid(pipe, &port);
16025 ++ }
16026 +
16027 + assert(port < N_CSI_PORTS);
16028 +
16029 +- if (port >= N_CSI_PORTS || err) {
16030 ++ if ((!IS_ISP2401 && port >= N_CSI_PORTS) ||
16031 ++ (IS_ISP2401 && err)) {
16032 + IA_CSS_ERROR("send_mipi_frames(%p) exit: invalid port specified (port=%d).\n",
16033 + pipe, port);
16034 + return err;
16035 +diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
16036 +index 24fc497bd4915..8d6514c45eeb6 100644
16037 +--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
16038 ++++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
16039 +@@ -2437,7 +2437,7 @@ sh_css_create_isp_params(struct ia_css_stream *stream,
16040 + unsigned int i;
16041 + struct sh_css_ddr_address_map *ddr_ptrs;
16042 + struct sh_css_ddr_address_map_size *ddr_ptrs_size;
16043 +- int err = 0;
16044 ++ int err;
16045 + size_t params_size;
16046 + struct ia_css_isp_parameters *params =
16047 + kvmalloc(sizeof(struct ia_css_isp_parameters), GFP_KERNEL);
16048 +@@ -2482,7 +2482,11 @@ sh_css_create_isp_params(struct ia_css_stream *stream,
16049 + succ &= (ddr_ptrs->macc_tbl != mmgr_NULL);
16050 +
16051 + *isp_params_out = params;
16052 +- return err;
16053 ++
16054 ++ if (!succ)
16055 ++ return -ENOMEM;
16056 ++
16057 ++ return 0;
16058 + }
16059 +
16060 + static bool
16061 +diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
16062 +index 7749ca9a8ebbf..bc97ec0a7e4af 100644
16063 +--- a/drivers/staging/media/hantro/hantro_drv.c
16064 ++++ b/drivers/staging/media/hantro/hantro_drv.c
16065 +@@ -829,7 +829,7 @@ static int hantro_probe(struct platform_device *pdev)
16066 + ret = clk_bulk_prepare(vpu->variant->num_clocks, vpu->clocks);
16067 + if (ret) {
16068 + dev_err(&pdev->dev, "Failed to prepare clocks\n");
16069 +- return ret;
16070 ++ goto err_pm_disable;
16071 + }
16072 +
16073 + ret = v4l2_device_register(&pdev->dev, &vpu->v4l2_dev);
16074 +@@ -885,6 +885,7 @@ err_v4l2_unreg:
16075 + v4l2_device_unregister(&vpu->v4l2_dev);
16076 + err_clk_unprepare:
16077 + clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks);
16078 ++err_pm_disable:
16079 + pm_runtime_dont_use_autosuspend(vpu->dev);
16080 + pm_runtime_disable(vpu->dev);
16081 + return ret;
16082 +diff --git a/drivers/staging/rtl8192e/rtllib.h b/drivers/staging/rtl8192e/rtllib.h
16083 +index 4cabaf21c1ca0..367db4acc7852 100644
16084 +--- a/drivers/staging/rtl8192e/rtllib.h
16085 ++++ b/drivers/staging/rtl8192e/rtllib.h
16086 +@@ -1982,7 +1982,7 @@ void rtllib_softmac_xmit(struct rtllib_txb *txb, struct rtllib_device *ieee);
16087 + void rtllib_stop_send_beacons(struct rtllib_device *ieee);
16088 + void notify_wx_assoc_event(struct rtllib_device *ieee);
16089 + void rtllib_start_ibss(struct rtllib_device *ieee);
16090 +-void rtllib_softmac_init(struct rtllib_device *ieee);
16091 ++int rtllib_softmac_init(struct rtllib_device *ieee);
16092 + void rtllib_softmac_free(struct rtllib_device *ieee);
16093 + void rtllib_disassociate(struct rtllib_device *ieee);
16094 + void rtllib_stop_scan(struct rtllib_device *ieee);
16095 +diff --git a/drivers/staging/rtl8192e/rtllib_module.c b/drivers/staging/rtl8192e/rtllib_module.c
16096 +index 64d9feee1f392..f00ac94b2639b 100644
16097 +--- a/drivers/staging/rtl8192e/rtllib_module.c
16098 ++++ b/drivers/staging/rtl8192e/rtllib_module.c
16099 +@@ -88,7 +88,7 @@ struct net_device *alloc_rtllib(int sizeof_priv)
16100 + err = rtllib_networks_allocate(ieee);
16101 + if (err) {
16102 + pr_err("Unable to allocate beacon storage: %d\n", err);
16103 +- goto failed;
16104 ++ goto free_netdev;
16105 + }
16106 + rtllib_networks_initialize(ieee);
16107 +
16108 +@@ -121,11 +121,13 @@ struct net_device *alloc_rtllib(int sizeof_priv)
16109 + ieee->hwsec_active = 0;
16110 +
16111 + memset(ieee->swcamtable, 0, sizeof(struct sw_cam_table) * 32);
16112 +- rtllib_softmac_init(ieee);
16113 ++ err = rtllib_softmac_init(ieee);
16114 ++ if (err)
16115 ++ goto free_crypt_info;
16116 +
16117 + ieee->pHTInfo = kzalloc(sizeof(struct rt_hi_throughput), GFP_KERNEL);
16118 + if (!ieee->pHTInfo)
16119 +- return NULL;
16120 ++ goto free_softmac;
16121 +
16122 + HTUpdateDefaultSetting(ieee);
16123 + HTInitializeHTInfo(ieee);
16124 +@@ -141,8 +143,14 @@ struct net_device *alloc_rtllib(int sizeof_priv)
16125 +
16126 + return dev;
16127 +
16128 +- failed:
16129 ++free_softmac:
16130 ++ rtllib_softmac_free(ieee);
16131 ++free_crypt_info:
16132 ++ lib80211_crypt_info_free(&ieee->crypt_info);
16133 ++ rtllib_networks_free(ieee);
16134 ++free_netdev:
16135 + free_netdev(dev);
16136 ++
16137 + return NULL;
16138 + }
16139 + EXPORT_SYMBOL(alloc_rtllib);
16140 +diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
16141 +index 2c752ba5a802a..e8e72f79ca007 100644
16142 +--- a/drivers/staging/rtl8192e/rtllib_softmac.c
16143 ++++ b/drivers/staging/rtl8192e/rtllib_softmac.c
16144 +@@ -2953,7 +2953,7 @@ void rtllib_start_protocol(struct rtllib_device *ieee)
16145 + }
16146 + }
16147 +
16148 +-void rtllib_softmac_init(struct rtllib_device *ieee)
16149 ++int rtllib_softmac_init(struct rtllib_device *ieee)
16150 + {
16151 + int i;
16152 +
16153 +@@ -2964,7 +2964,8 @@ void rtllib_softmac_init(struct rtllib_device *ieee)
16154 + ieee->seq_ctrl[i] = 0;
16155 + ieee->dot11d_info = kzalloc(sizeof(struct rt_dot11d_info), GFP_ATOMIC);
16156 + if (!ieee->dot11d_info)
16157 +- netdev_err(ieee->dev, "Can't alloc memory for DOT11D\n");
16158 ++ return -ENOMEM;
16159 ++
16160 + ieee->LinkDetectInfo.SlotIndex = 0;
16161 + ieee->LinkDetectInfo.SlotNum = 2;
16162 + ieee->LinkDetectInfo.NumRecvBcnInPeriod = 0;
16163 +@@ -3030,6 +3031,7 @@ void rtllib_softmac_init(struct rtllib_device *ieee)
16164 +
16165 + tasklet_setup(&ieee->ps_task, rtllib_sta_ps);
16166 +
16167 ++ return 0;
16168 + }
16169 +
16170 + void rtllib_softmac_free(struct rtllib_device *ieee)
16171 +diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
16172 +index 6ade4a5c48407..dfc239c64ce3c 100644
16173 +--- a/drivers/tee/tee_core.c
16174 ++++ b/drivers/tee/tee_core.c
16175 +@@ -98,8 +98,10 @@ void teedev_ctx_put(struct tee_context *ctx)
16176 +
16177 + static void teedev_close_context(struct tee_context *ctx)
16178 + {
16179 +- tee_device_put(ctx->teedev);
16180 ++ struct tee_device *teedev = ctx->teedev;
16181 ++
16182 + teedev_ctx_put(ctx);
16183 ++ tee_device_put(teedev);
16184 + }
16185 +
16186 + static int tee_open(struct inode *inode, struct file *filp)
16187 +diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
16188 +index a1e4f9bb4cb01..0f4cabd2a8c62 100644
16189 +--- a/drivers/thermal/imx8mm_thermal.c
16190 ++++ b/drivers/thermal/imx8mm_thermal.c
16191 +@@ -21,6 +21,7 @@
16192 + #define TPS 0x4
16193 + #define TRITSR 0x20 /* TMU immediate temp */
16194 +
16195 ++#define TER_ADC_PD BIT(30)
16196 + #define TER_EN BIT(31)
16197 + #define TRITSR_TEMP0_VAL_MASK 0xff
16198 + #define TRITSR_TEMP1_VAL_MASK 0xff0000
16199 +@@ -113,6 +114,8 @@ static void imx8mm_tmu_enable(struct imx8mm_tmu *tmu, bool enable)
16200 +
16201 + val = readl_relaxed(tmu->base + TER);
16202 + val = enable ? (val | TER_EN) : (val & ~TER_EN);
16203 ++ if (tmu->socdata->version == TMU_VER2)
16204 ++ val = enable ? (val & ~TER_ADC_PD) : (val | TER_ADC_PD);
16205 + writel_relaxed(val, tmu->base + TER);
16206 + }
16207 +
16208 +diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
16209 +index 2c7473d86a59b..16663373b6829 100644
16210 +--- a/drivers/thermal/imx_thermal.c
16211 ++++ b/drivers/thermal/imx_thermal.c
16212 +@@ -15,6 +15,7 @@
16213 + #include <linux/regmap.h>
16214 + #include <linux/thermal.h>
16215 + #include <linux/nvmem-consumer.h>
16216 ++#include <linux/pm_runtime.h>
16217 +
16218 + #define REG_SET 0x4
16219 + #define REG_CLR 0x8
16220 +@@ -194,6 +195,7 @@ static struct thermal_soc_data thermal_imx7d_data = {
16221 + };
16222 +
16223 + struct imx_thermal_data {
16224 ++ struct device *dev;
16225 + struct cpufreq_policy *policy;
16226 + struct thermal_zone_device *tz;
16227 + struct thermal_cooling_device *cdev;
16228 +@@ -252,44 +254,15 @@ static int imx_get_temp(struct thermal_zone_device *tz, int *temp)
16229 + const struct thermal_soc_data *soc_data = data->socdata;
16230 + struct regmap *map = data->tempmon;
16231 + unsigned int n_meas;
16232 +- bool wait, run_measurement;
16233 + u32 val;
16234 ++ int ret;
16235 +
16236 +- run_measurement = !data->irq_enabled;
16237 +- if (!run_measurement) {
16238 +- /* Check if a measurement is currently in progress */
16239 +- regmap_read(map, soc_data->temp_data, &val);
16240 +- wait = !(val & soc_data->temp_valid_mask);
16241 +- } else {
16242 +- /*
16243 +- * Every time we measure the temperature, we will power on the
16244 +- * temperature sensor, enable measurements, take a reading,
16245 +- * disable measurements, power off the temperature sensor.
16246 +- */
16247 +- regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
16248 +- soc_data->power_down_mask);
16249 +- regmap_write(map, soc_data->sensor_ctrl + REG_SET,
16250 +- soc_data->measure_temp_mask);
16251 +-
16252 +- wait = true;
16253 +- }
16254 +-
16255 +- /*
16256 +- * According to the temp sensor designers, it may require up to ~17us
16257 +- * to complete a measurement.
16258 +- */
16259 +- if (wait)
16260 +- usleep_range(20, 50);
16261 ++ ret = pm_runtime_resume_and_get(data->dev);
16262 ++ if (ret < 0)
16263 ++ return ret;
16264 +
16265 + regmap_read(map, soc_data->temp_data, &val);
16266 +
16267 +- if (run_measurement) {
16268 +- regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
16269 +- soc_data->measure_temp_mask);
16270 +- regmap_write(map, soc_data->sensor_ctrl + REG_SET,
16271 +- soc_data->power_down_mask);
16272 +- }
16273 +-
16274 + if ((val & soc_data->temp_valid_mask) == 0) {
16275 + dev_dbg(&tz->device, "temp measurement never finished\n");
16276 + return -EAGAIN;
16277 +@@ -328,6 +301,8 @@ static int imx_get_temp(struct thermal_zone_device *tz, int *temp)
16278 + enable_irq(data->irq);
16279 + }
16280 +
16281 ++ pm_runtime_put(data->dev);
16282 ++
16283 + return 0;
16284 + }
16285 +
16286 +@@ -335,24 +310,16 @@ static int imx_change_mode(struct thermal_zone_device *tz,
16287 + enum thermal_device_mode mode)
16288 + {
16289 + struct imx_thermal_data *data = tz->devdata;
16290 +- struct regmap *map = data->tempmon;
16291 +- const struct thermal_soc_data *soc_data = data->socdata;
16292 +
16293 + if (mode == THERMAL_DEVICE_ENABLED) {
16294 +- regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
16295 +- soc_data->power_down_mask);
16296 +- regmap_write(map, soc_data->sensor_ctrl + REG_SET,
16297 +- soc_data->measure_temp_mask);
16298 ++ pm_runtime_get(data->dev);
16299 +
16300 + if (!data->irq_enabled) {
16301 + data->irq_enabled = true;
16302 + enable_irq(data->irq);
16303 + }
16304 + } else {
16305 +- regmap_write(map, soc_data->sensor_ctrl + REG_CLR,
16306 +- soc_data->measure_temp_mask);
16307 +- regmap_write(map, soc_data->sensor_ctrl + REG_SET,
16308 +- soc_data->power_down_mask);
16309 ++ pm_runtime_put(data->dev);
16310 +
16311 + if (data->irq_enabled) {
16312 + disable_irq(data->irq);
16313 +@@ -393,6 +360,11 @@ static int imx_set_trip_temp(struct thermal_zone_device *tz, int trip,
16314 + int temp)
16315 + {
16316 + struct imx_thermal_data *data = tz->devdata;
16317 ++ int ret;
16318 ++
16319 ++ ret = pm_runtime_resume_and_get(data->dev);
16320 ++ if (ret < 0)
16321 ++ return ret;
16322 +
16323 + /* do not allow changing critical threshold */
16324 + if (trip == IMX_TRIP_CRITICAL)
16325 +@@ -406,6 +378,8 @@ static int imx_set_trip_temp(struct thermal_zone_device *tz, int trip,
16326 +
16327 + imx_set_alarm_temp(data, temp);
16328 +
16329 ++ pm_runtime_put(data->dev);
16330 ++
16331 + return 0;
16332 + }
16333 +
16334 +@@ -681,6 +655,8 @@ static int imx_thermal_probe(struct platform_device *pdev)
16335 + if (!data)
16336 + return -ENOMEM;
16337 +
16338 ++ data->dev = &pdev->dev;
16339 ++
16340 + map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, "fsl,tempmon");
16341 + if (IS_ERR(map)) {
16342 + ret = PTR_ERR(map);
16343 +@@ -800,6 +776,16 @@ static int imx_thermal_probe(struct platform_device *pdev)
16344 + data->socdata->power_down_mask);
16345 + regmap_write(map, data->socdata->sensor_ctrl + REG_SET,
16346 + data->socdata->measure_temp_mask);
16347 ++ /* After power up, we need a delay before first access can be done. */
16348 ++ usleep_range(20, 50);
16349 ++
16350 ++ /* the core was configured and enabled just before */
16351 ++ pm_runtime_set_active(&pdev->dev);
16352 ++ pm_runtime_enable(data->dev);
16353 ++
16354 ++ ret = pm_runtime_resume_and_get(data->dev);
16355 ++ if (ret < 0)
16356 ++ goto disable_runtime_pm;
16357 +
16358 + data->irq_enabled = true;
16359 + ret = thermal_zone_device_enable(data->tz);
16360 +@@ -814,10 +800,15 @@ static int imx_thermal_probe(struct platform_device *pdev)
16361 + goto thermal_zone_unregister;
16362 + }
16363 +
16364 ++ pm_runtime_put(data->dev);
16365 ++
16366 + return 0;
16367 +
16368 + thermal_zone_unregister:
16369 + thermal_zone_device_unregister(data->tz);
16370 ++disable_runtime_pm:
16371 ++ pm_runtime_put_noidle(data->dev);
16372 ++ pm_runtime_disable(data->dev);
16373 + clk_disable:
16374 + clk_disable_unprepare(data->thermal_clk);
16375 + legacy_cleanup:
16376 +@@ -829,13 +820,9 @@ legacy_cleanup:
16377 + static int imx_thermal_remove(struct platform_device *pdev)
16378 + {
16379 + struct imx_thermal_data *data = platform_get_drvdata(pdev);
16380 +- struct regmap *map = data->tempmon;
16381 +
16382 +- /* Disable measurements */
16383 +- regmap_write(map, data->socdata->sensor_ctrl + REG_SET,
16384 +- data->socdata->power_down_mask);
16385 +- if (!IS_ERR(data->thermal_clk))
16386 +- clk_disable_unprepare(data->thermal_clk);
16387 ++ pm_runtime_put_noidle(data->dev);
16388 ++ pm_runtime_disable(data->dev);
16389 +
16390 + thermal_zone_device_unregister(data->tz);
16391 + imx_thermal_unregister_legacy_cooling(data);
16392 +@@ -858,29 +845,79 @@ static int __maybe_unused imx_thermal_suspend(struct device *dev)
16393 + ret = thermal_zone_device_disable(data->tz);
16394 + if (ret)
16395 + return ret;
16396 ++
16397 ++ return pm_runtime_force_suspend(data->dev);
16398 ++}
16399 ++
16400 ++static int __maybe_unused imx_thermal_resume(struct device *dev)
16401 ++{
16402 ++ struct imx_thermal_data *data = dev_get_drvdata(dev);
16403 ++ int ret;
16404 ++
16405 ++ ret = pm_runtime_force_resume(data->dev);
16406 ++ if (ret)
16407 ++ return ret;
16408 ++ /* Enabled thermal sensor after resume */
16409 ++ return thermal_zone_device_enable(data->tz);
16410 ++}
16411 ++
16412 ++static int __maybe_unused imx_thermal_runtime_suspend(struct device *dev)
16413 ++{
16414 ++ struct imx_thermal_data *data = dev_get_drvdata(dev);
16415 ++ const struct thermal_soc_data *socdata = data->socdata;
16416 ++ struct regmap *map = data->tempmon;
16417 ++ int ret;
16418 ++
16419 ++ ret = regmap_write(map, socdata->sensor_ctrl + REG_CLR,
16420 ++ socdata->measure_temp_mask);
16421 ++ if (ret)
16422 ++ return ret;
16423 ++
16424 ++ ret = regmap_write(map, socdata->sensor_ctrl + REG_SET,
16425 ++ socdata->power_down_mask);
16426 ++ if (ret)
16427 ++ return ret;
16428 ++
16429 + clk_disable_unprepare(data->thermal_clk);
16430 +
16431 + return 0;
16432 + }
16433 +
16434 +-static int __maybe_unused imx_thermal_resume(struct device *dev)
16435 ++static int __maybe_unused imx_thermal_runtime_resume(struct device *dev)
16436 + {
16437 + struct imx_thermal_data *data = dev_get_drvdata(dev);
16438 ++ const struct thermal_soc_data *socdata = data->socdata;
16439 ++ struct regmap *map = data->tempmon;
16440 + int ret;
16441 +
16442 + ret = clk_prepare_enable(data->thermal_clk);
16443 + if (ret)
16444 + return ret;
16445 +- /* Enabled thermal sensor after resume */
16446 +- ret = thermal_zone_device_enable(data->tz);
16447 ++
16448 ++ ret = regmap_write(map, socdata->sensor_ctrl + REG_CLR,
16449 ++ socdata->power_down_mask);
16450 ++ if (ret)
16451 ++ return ret;
16452 ++
16453 ++ ret = regmap_write(map, socdata->sensor_ctrl + REG_SET,
16454 ++ socdata->measure_temp_mask);
16455 + if (ret)
16456 + return ret;
16457 +
16458 ++ /*
16459 ++ * According to the temp sensor designers, it may require up to ~17us
16460 ++ * to complete a measurement.
16461 ++ */
16462 ++ usleep_range(20, 50);
16463 ++
16464 + return 0;
16465 + }
16466 +
16467 +-static SIMPLE_DEV_PM_OPS(imx_thermal_pm_ops,
16468 +- imx_thermal_suspend, imx_thermal_resume);
16469 ++static const struct dev_pm_ops imx_thermal_pm_ops = {
16470 ++ SET_SYSTEM_SLEEP_PM_OPS(imx_thermal_suspend, imx_thermal_resume)
16471 ++ SET_RUNTIME_PM_OPS(imx_thermal_runtime_suspend,
16472 ++ imx_thermal_runtime_resume, NULL)
16473 ++};
16474 +
16475 + static struct platform_driver imx_thermal = {
16476 + .driver = {
16477 +diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
16478 +index b5442f979b4d0..6355fdf7d71a3 100644
16479 +--- a/drivers/thunderbolt/acpi.c
16480 ++++ b/drivers/thunderbolt/acpi.c
16481 +@@ -7,6 +7,7 @@
16482 + */
16483 +
16484 + #include <linux/acpi.h>
16485 ++#include <linux/pm_runtime.h>
16486 +
16487 + #include "tb.h"
16488 +
16489 +@@ -74,8 +75,18 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
16490 + pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM))) {
16491 + const struct device_link *link;
16492 +
16493 ++ /*
16494 ++ * Make them both active first to make sure the NHI does
16495 ++ * not runtime suspend before the consumer. The
16496 ++ * pm_runtime_put() below then allows the consumer to
16497 ++ * runtime suspend again (which then allows NHI runtime
16498 ++ * suspend too now that the device link is established).
16499 ++ */
16500 ++ pm_runtime_get_sync(&pdev->dev);
16501 ++
16502 + link = device_link_add(&pdev->dev, &nhi->pdev->dev,
16503 + DL_FLAG_AUTOREMOVE_SUPPLIER |
16504 ++ DL_FLAG_RPM_ACTIVE |
16505 + DL_FLAG_PM_RUNTIME);
16506 + if (link) {
16507 + dev_dbg(&nhi->pdev->dev, "created link from %s\n",
16508 +@@ -84,6 +95,8 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
16509 + dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
16510 + dev_name(&pdev->dev));
16511 + }
16512 ++
16513 ++ pm_runtime_put(&pdev->dev);
16514 + }
16515 +
16516 + out_put:
16517 +diff --git a/drivers/tty/serial/amba-pl010.c b/drivers/tty/serial/amba-pl010.c
16518 +index 3284f34e9dfe1..75d61e038a775 100644
16519 +--- a/drivers/tty/serial/amba-pl010.c
16520 ++++ b/drivers/tty/serial/amba-pl010.c
16521 +@@ -448,14 +448,11 @@ pl010_set_termios(struct uart_port *port, struct ktermios *termios,
16522 + if ((termios->c_cflag & CREAD) == 0)
16523 + uap->port.ignore_status_mask |= UART_DUMMY_RSR_RX;
16524 +
16525 +- /* first, disable everything */
16526 + old_cr = readb(uap->port.membase + UART010_CR) & ~UART010_CR_MSIE;
16527 +
16528 + if (UART_ENABLE_MS(port, termios->c_cflag))
16529 + old_cr |= UART010_CR_MSIE;
16530 +
16531 +- writel(0, uap->port.membase + UART010_CR);
16532 +-
16533 + /* Set baud rate */
16534 + quot -= 1;
16535 + writel((quot & 0xf00) >> 8, uap->port.membase + UART010_LCRM);
16536 +diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
16537 +index b3cddcdcbdad0..61183e7ff0097 100644
16538 +--- a/drivers/tty/serial/amba-pl011.c
16539 ++++ b/drivers/tty/serial/amba-pl011.c
16540 +@@ -2083,32 +2083,13 @@ static const char *pl011_type(struct uart_port *port)
16541 + return uap->port.type == PORT_AMBA ? uap->type : NULL;
16542 + }
16543 +
16544 +-/*
16545 +- * Release the memory region(s) being used by 'port'
16546 +- */
16547 +-static void pl011_release_port(struct uart_port *port)
16548 +-{
16549 +- release_mem_region(port->mapbase, SZ_4K);
16550 +-}
16551 +-
16552 +-/*
16553 +- * Request the memory region(s) being used by 'port'
16554 +- */
16555 +-static int pl011_request_port(struct uart_port *port)
16556 +-{
16557 +- return request_mem_region(port->mapbase, SZ_4K, "uart-pl011")
16558 +- != NULL ? 0 : -EBUSY;
16559 +-}
16560 +-
16561 + /*
16562 + * Configure/autoconfigure the port.
16563 + */
16564 + static void pl011_config_port(struct uart_port *port, int flags)
16565 + {
16566 +- if (flags & UART_CONFIG_TYPE) {
16567 ++ if (flags & UART_CONFIG_TYPE)
16568 + port->type = PORT_AMBA;
16569 +- pl011_request_port(port);
16570 +- }
16571 + }
16572 +
16573 + /*
16574 +@@ -2123,6 +2104,8 @@ static int pl011_verify_port(struct uart_port *port, struct serial_struct *ser)
16575 + ret = -EINVAL;
16576 + if (ser->baud_base < 9600)
16577 + ret = -EINVAL;
16578 ++ if (port->mapbase != (unsigned long) ser->iomem_base)
16579 ++ ret = -EINVAL;
16580 + return ret;
16581 + }
16582 +
16583 +@@ -2140,8 +2123,6 @@ static const struct uart_ops amba_pl011_pops = {
16584 + .flush_buffer = pl011_dma_flush_buffer,
16585 + .set_termios = pl011_set_termios,
16586 + .type = pl011_type,
16587 +- .release_port = pl011_release_port,
16588 +- .request_port = pl011_request_port,
16589 + .config_port = pl011_config_port,
16590 + .verify_port = pl011_verify_port,
16591 + #ifdef CONFIG_CONSOLE_POLL
16592 +@@ -2171,8 +2152,6 @@ static const struct uart_ops sbsa_uart_pops = {
16593 + .shutdown = sbsa_uart_shutdown,
16594 + .set_termios = sbsa_uart_set_termios,
16595 + .type = pl011_type,
16596 +- .release_port = pl011_release_port,
16597 +- .request_port = pl011_request_port,
16598 + .config_port = pl011_config_port,
16599 + .verify_port = pl011_verify_port,
16600 + #ifdef CONFIG_CONSOLE_POLL
16601 +diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
16602 +index a24e5c2b30bc9..602065bfc9bb8 100644
16603 +--- a/drivers/tty/serial/atmel_serial.c
16604 ++++ b/drivers/tty/serial/atmel_serial.c
16605 +@@ -1004,6 +1004,13 @@ static void atmel_tx_dma(struct uart_port *port)
16606 + desc->callback = atmel_complete_tx_dma;
16607 + desc->callback_param = atmel_port;
16608 + atmel_port->cookie_tx = dmaengine_submit(desc);
16609 ++ if (dma_submit_error(atmel_port->cookie_tx)) {
16610 ++ dev_err(port->dev, "dma_submit_error %d\n",
16611 ++ atmel_port->cookie_tx);
16612 ++ return;
16613 ++ }
16614 ++
16615 ++ dma_async_issue_pending(chan);
16616 + }
16617 +
16618 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
16619 +@@ -1264,6 +1271,13 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
16620 + desc->callback_param = port;
16621 + atmel_port->desc_rx = desc;
16622 + atmel_port->cookie_rx = dmaengine_submit(desc);
16623 ++ if (dma_submit_error(atmel_port->cookie_rx)) {
16624 ++ dev_err(port->dev, "dma_submit_error %d\n",
16625 ++ atmel_port->cookie_rx);
16626 ++ goto chan_err;
16627 ++ }
16628 ++
16629 ++ dma_async_issue_pending(atmel_port->chan_rx);
16630 +
16631 + return 0;
16632 +
16633 +diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
16634 +index 28cc328ddb6eb..93cd8ad57f385 100644
16635 +--- a/drivers/tty/serial/imx.c
16636 ++++ b/drivers/tty/serial/imx.c
16637 +@@ -508,18 +508,21 @@ static void imx_uart_stop_tx(struct uart_port *port)
16638 + static void imx_uart_stop_rx(struct uart_port *port)
16639 + {
16640 + struct imx_port *sport = (struct imx_port *)port;
16641 +- u32 ucr1, ucr2;
16642 ++ u32 ucr1, ucr2, ucr4;
16643 +
16644 + ucr1 = imx_uart_readl(sport, UCR1);
16645 + ucr2 = imx_uart_readl(sport, UCR2);
16646 ++ ucr4 = imx_uart_readl(sport, UCR4);
16647 +
16648 + if (sport->dma_is_enabled) {
16649 + ucr1 &= ~(UCR1_RXDMAEN | UCR1_ATDMAEN);
16650 + } else {
16651 + ucr1 &= ~UCR1_RRDYEN;
16652 + ucr2 &= ~UCR2_ATEN;
16653 ++ ucr4 &= ~UCR4_OREN;
16654 + }
16655 + imx_uart_writel(sport, ucr1, UCR1);
16656 ++ imx_uart_writel(sport, ucr4, UCR4);
16657 +
16658 + ucr2 &= ~UCR2_RXEN;
16659 + imx_uart_writel(sport, ucr2, UCR2);
16660 +@@ -1576,7 +1579,7 @@ static void imx_uart_shutdown(struct uart_port *port)
16661 + imx_uart_writel(sport, ucr1, UCR1);
16662 +
16663 + ucr4 = imx_uart_readl(sport, UCR4);
16664 +- ucr4 &= ~(UCR4_OREN | UCR4_TCEN);
16665 ++ ucr4 &= ~UCR4_TCEN;
16666 + imx_uart_writel(sport, ucr4, UCR4);
16667 +
16668 + spin_unlock_irqrestore(&sport->port.lock, flags);
16669 +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
16670 +index 046bedca7b8f5..be0d9922e320e 100644
16671 +--- a/drivers/tty/serial/serial_core.c
16672 ++++ b/drivers/tty/serial/serial_core.c
16673 +@@ -162,7 +162,7 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
16674 + int RTS_after_send = !!(uport->rs485.flags & SER_RS485_RTS_AFTER_SEND);
16675 +
16676 + if (raise) {
16677 +- if (rs485_on && !RTS_after_send) {
16678 ++ if (rs485_on && RTS_after_send) {
16679 + uart_set_mctrl(uport, TIOCM_DTR);
16680 + uart_clear_mctrl(uport, TIOCM_RTS);
16681 + } else {
16682 +@@ -171,7 +171,7 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
16683 + } else {
16684 + unsigned int clear = TIOCM_DTR;
16685 +
16686 +- clear |= (!rs485_on || !RTS_after_send) ? TIOCM_RTS : 0;
16687 ++ clear |= (!rs485_on || RTS_after_send) ? TIOCM_RTS : 0;
16688 + uart_clear_mctrl(uport, clear);
16689 + }
16690 + }
16691 +@@ -2414,7 +2414,8 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
16692 + * We probably don't need a spinlock around this, but
16693 + */
16694 + spin_lock_irqsave(&port->lock, flags);
16695 +- port->ops->set_mctrl(port, port->mctrl & TIOCM_DTR);
16696 ++ port->mctrl &= TIOCM_DTR;
16697 ++ port->ops->set_mctrl(port, port->mctrl);
16698 + spin_unlock_irqrestore(&port->lock, flags);
16699 +
16700 + /*
16701 +diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
16702 +index 7081ab322b402..48923cd8c07d1 100644
16703 +--- a/drivers/tty/serial/uartlite.c
16704 ++++ b/drivers/tty/serial/uartlite.c
16705 +@@ -615,7 +615,7 @@ static struct uart_driver ulite_uart_driver = {
16706 + *
16707 + * Returns: 0 on success, <0 otherwise
16708 + */
16709 +-static int ulite_assign(struct device *dev, int id, u32 base, int irq,
16710 ++static int ulite_assign(struct device *dev, int id, phys_addr_t base, int irq,
16711 + struct uartlite_data *pdata)
16712 + {
16713 + struct uart_port *port;
16714 +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
16715 +index af15dbe6bb141..18ee3914b4686 100644
16716 +--- a/drivers/usb/core/hub.c
16717 ++++ b/drivers/usb/core/hub.c
16718 +@@ -1109,7 +1109,10 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
16719 + } else {
16720 + hub_power_on(hub, true);
16721 + }
16722 +- }
16723 ++ /* Give some time on remote wakeup to let links to transit to U0 */
16724 ++ } else if (hub_is_superspeed(hub->hdev))
16725 ++ msleep(20);
16726 ++
16727 + init2:
16728 +
16729 + /*
16730 +diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
16731 +index 2a29e2f681fe6..504f8af4d0f80 100644
16732 +--- a/drivers/usb/dwc3/dwc3-qcom.c
16733 ++++ b/drivers/usb/dwc3/dwc3-qcom.c
16734 +@@ -764,9 +764,12 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
16735 +
16736 + if (qcom->acpi_pdata->is_urs) {
16737 + qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
16738 +- if (!qcom->urs_usb) {
16739 ++ if (IS_ERR_OR_NULL(qcom->urs_usb)) {
16740 + dev_err(dev, "failed to create URS USB platdev\n");
16741 +- return -ENODEV;
16742 ++ if (!qcom->urs_usb)
16743 ++ return -ENODEV;
16744 ++ else
16745 ++ return PTR_ERR(qcom->urs_usb);
16746 + }
16747 + }
16748 + }
16749 +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
16750 +index cbb7947f366f9..d8652321e15e9 100644
16751 +--- a/drivers/usb/gadget/function/f_fs.c
16752 ++++ b/drivers/usb/gadget/function/f_fs.c
16753 +@@ -614,7 +614,7 @@ static int ffs_ep0_open(struct inode *inode, struct file *file)
16754 + file->private_data = ffs;
16755 + ffs_data_opened(ffs);
16756 +
16757 +- return 0;
16758 ++ return stream_open(inode, file);
16759 + }
16760 +
16761 + static int ffs_ep0_release(struct inode *inode, struct file *file)
16762 +@@ -1152,7 +1152,7 @@ ffs_epfile_open(struct inode *inode, struct file *file)
16763 + file->private_data = epfile;
16764 + ffs_data_opened(epfile->ffs);
16765 +
16766 +- return 0;
16767 ++ return stream_open(inode, file);
16768 + }
16769 +
16770 + static int ffs_aio_cancel(struct kiocb *kiocb)
16771 +diff --git a/drivers/usb/host/uhci-platform.c b/drivers/usb/host/uhci-platform.c
16772 +index 70dbd95c3f063..be9e9db7cad10 100644
16773 +--- a/drivers/usb/host/uhci-platform.c
16774 ++++ b/drivers/usb/host/uhci-platform.c
16775 +@@ -113,7 +113,8 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
16776 + num_ports);
16777 + }
16778 + if (of_device_is_compatible(np, "aspeed,ast2400-uhci") ||
16779 +- of_device_is_compatible(np, "aspeed,ast2500-uhci")) {
16780 ++ of_device_is_compatible(np, "aspeed,ast2500-uhci") ||
16781 ++ of_device_is_compatible(np, "aspeed,ast2600-uhci")) {
16782 + uhci->is_aspeed = 1;
16783 + dev_info(&pdev->dev,
16784 + "Enabled Aspeed implementation workarounds\n");
16785 +diff --git a/drivers/usb/misc/ftdi-elan.c b/drivers/usb/misc/ftdi-elan.c
16786 +index 8a3d9c0c8d8bc..157b31d354ac2 100644
16787 +--- a/drivers/usb/misc/ftdi-elan.c
16788 ++++ b/drivers/usb/misc/ftdi-elan.c
16789 +@@ -202,6 +202,7 @@ static void ftdi_elan_delete(struct kref *kref)
16790 + mutex_unlock(&ftdi_module_lock);
16791 + kfree(ftdi->bulk_in_buffer);
16792 + ftdi->bulk_in_buffer = NULL;
16793 ++ kfree(ftdi);
16794 + }
16795 +
16796 + static void ftdi_elan_put_kref(struct usb_ftdi *ftdi)
16797 +diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
16798 +index fbdc9468818d3..65d6f8fd81e70 100644
16799 +--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
16800 ++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
16801 +@@ -812,8 +812,6 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
16802 + MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
16803 + MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
16804 + MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
16805 +- if (MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, eth_frame_offload_type))
16806 +- MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 1);
16807 +
16808 + err = mlx5_cmd_exec(ndev->mvdev.mdev, in, inlen, out, sizeof(out));
16809 + if (err)
16810 +diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
16811 +index cd11c57764381..486d35da01507 100644
16812 +--- a/drivers/video/backlight/qcom-wled.c
16813 ++++ b/drivers/video/backlight/qcom-wled.c
16814 +@@ -231,14 +231,14 @@ struct wled {
16815 + static int wled3_set_brightness(struct wled *wled, u16 brightness)
16816 + {
16817 + int rc, i;
16818 +- u8 v[2];
16819 ++ __le16 v;
16820 +
16821 +- v[0] = brightness & 0xff;
16822 +- v[1] = (brightness >> 8) & 0xf;
16823 ++ v = cpu_to_le16(brightness & WLED3_SINK_REG_BRIGHT_MAX);
16824 +
16825 + for (i = 0; i < wled->cfg.num_strings; ++i) {
16826 + rc = regmap_bulk_write(wled->regmap, wled->ctrl_addr +
16827 +- WLED3_SINK_REG_BRIGHT(i), v, 2);
16828 ++ WLED3_SINK_REG_BRIGHT(wled->cfg.enabled_strings[i]),
16829 ++ &v, sizeof(v));
16830 + if (rc < 0)
16831 + return rc;
16832 + }
16833 +@@ -250,18 +250,18 @@ static int wled4_set_brightness(struct wled *wled, u16 brightness)
16834 + {
16835 + int rc, i;
16836 + u16 low_limit = wled->max_brightness * 4 / 1000;
16837 +- u8 v[2];
16838 ++ __le16 v;
16839 +
16840 + /* WLED4's lower limit of operation is 0.4% */
16841 + if (brightness > 0 && brightness < low_limit)
16842 + brightness = low_limit;
16843 +
16844 +- v[0] = brightness & 0xff;
16845 +- v[1] = (brightness >> 8) & 0xf;
16846 ++ v = cpu_to_le16(brightness & WLED3_SINK_REG_BRIGHT_MAX);
16847 +
16848 + for (i = 0; i < wled->cfg.num_strings; ++i) {
16849 + rc = regmap_bulk_write(wled->regmap, wled->sink_addr +
16850 +- WLED4_SINK_REG_BRIGHT(i), v, 2);
16851 ++ WLED4_SINK_REG_BRIGHT(wled->cfg.enabled_strings[i]),
16852 ++ &v, sizeof(v));
16853 + if (rc < 0)
16854 + return rc;
16855 + }
16856 +@@ -273,21 +273,20 @@ static int wled5_set_brightness(struct wled *wled, u16 brightness)
16857 + {
16858 + int rc, offset;
16859 + u16 low_limit = wled->max_brightness * 1 / 1000;
16860 +- u8 v[2];
16861 ++ __le16 v;
16862 +
16863 + /* WLED5's lower limit is 0.1% */
16864 + if (brightness < low_limit)
16865 + brightness = low_limit;
16866 +
16867 +- v[0] = brightness & 0xff;
16868 +- v[1] = (brightness >> 8) & 0x7f;
16869 ++ v = cpu_to_le16(brightness & WLED5_SINK_REG_BRIGHT_MAX_15B);
16870 +
16871 + offset = (wled->cfg.mod_sel == MOD_A) ?
16872 + WLED5_SINK_REG_MOD_A_BRIGHTNESS_LSB :
16873 + WLED5_SINK_REG_MOD_B_BRIGHTNESS_LSB;
16874 +
16875 + rc = regmap_bulk_write(wled->regmap, wled->sink_addr + offset,
16876 +- v, 2);
16877 ++ &v, sizeof(v));
16878 + return rc;
16879 + }
16880 +
16881 +@@ -572,7 +571,7 @@ unlock_mutex:
16882 +
16883 + static void wled_auto_string_detection(struct wled *wled)
16884 + {
16885 +- int rc = 0, i, delay_time_us;
16886 ++ int rc = 0, i, j, delay_time_us;
16887 + u32 sink_config = 0;
16888 + u8 sink_test = 0, sink_valid = 0, val;
16889 + bool fault_set;
16890 +@@ -619,14 +618,15 @@ static void wled_auto_string_detection(struct wled *wled)
16891 +
16892 + /* Iterate through the strings one by one */
16893 + for (i = 0; i < wled->cfg.num_strings; i++) {
16894 +- sink_test = BIT((WLED4_SINK_REG_CURR_SINK_SHFT + i));
16895 ++ j = wled->cfg.enabled_strings[i];
16896 ++ sink_test = BIT((WLED4_SINK_REG_CURR_SINK_SHFT + j));
16897 +
16898 + /* Enable feedback control */
16899 + rc = regmap_write(wled->regmap, wled->ctrl_addr +
16900 +- WLED3_CTRL_REG_FEEDBACK_CONTROL, i + 1);
16901 ++ WLED3_CTRL_REG_FEEDBACK_CONTROL, j + 1);
16902 + if (rc < 0) {
16903 + dev_err(wled->dev, "Failed to enable feedback for SINK %d rc = %d\n",
16904 +- i + 1, rc);
16905 ++ j + 1, rc);
16906 + goto failed_detect;
16907 + }
16908 +
16909 +@@ -635,7 +635,7 @@ static void wled_auto_string_detection(struct wled *wled)
16910 + WLED4_SINK_REG_CURR_SINK, sink_test);
16911 + if (rc < 0) {
16912 + dev_err(wled->dev, "Failed to configure SINK %d rc=%d\n",
16913 +- i + 1, rc);
16914 ++ j + 1, rc);
16915 + goto failed_detect;
16916 + }
16917 +
16918 +@@ -662,7 +662,7 @@ static void wled_auto_string_detection(struct wled *wled)
16919 +
16920 + if (fault_set)
16921 + dev_dbg(wled->dev, "WLED OVP fault detected with SINK %d\n",
16922 +- i + 1);
16923 ++ j + 1);
16924 + else
16925 + sink_valid |= sink_test;
16926 +
16927 +@@ -702,15 +702,16 @@ static void wled_auto_string_detection(struct wled *wled)
16928 + /* Enable valid sinks */
16929 + if (wled->version == 4) {
16930 + for (i = 0; i < wled->cfg.num_strings; i++) {
16931 ++ j = wled->cfg.enabled_strings[i];
16932 + if (sink_config &
16933 +- BIT(WLED4_SINK_REG_CURR_SINK_SHFT + i))
16934 ++ BIT(WLED4_SINK_REG_CURR_SINK_SHFT + j))
16935 + val = WLED4_SINK_REG_STR_MOD_MASK;
16936 + else
16937 + /* Disable modulator_en for unused sink */
16938 + val = 0;
16939 +
16940 + rc = regmap_write(wled->regmap, wled->sink_addr +
16941 +- WLED4_SINK_REG_STR_MOD_EN(i), val);
16942 ++ WLED4_SINK_REG_STR_MOD_EN(j), val);
16943 + if (rc < 0) {
16944 + dev_err(wled->dev, "Failed to configure MODULATOR_EN rc=%d\n",
16945 + rc);
16946 +@@ -1256,21 +1257,6 @@ static const struct wled_var_cfg wled5_ovp_cfg = {
16947 + .size = 16,
16948 + };
16949 +
16950 +-static u32 wled3_num_strings_values_fn(u32 idx)
16951 +-{
16952 +- return idx + 1;
16953 +-}
16954 +-
16955 +-static const struct wled_var_cfg wled3_num_strings_cfg = {
16956 +- .fn = wled3_num_strings_values_fn,
16957 +- .size = 3,
16958 +-};
16959 +-
16960 +-static const struct wled_var_cfg wled4_num_strings_cfg = {
16961 +- .fn = wled3_num_strings_values_fn,
16962 +- .size = 4,
16963 +-};
16964 +-
16965 + static u32 wled3_switch_freq_values_fn(u32 idx)
16966 + {
16967 + return 19200 / (2 * (1 + idx));
16968 +@@ -1344,11 +1330,6 @@ static int wled_configure(struct wled *wled)
16969 + .val_ptr = &cfg->switch_freq,
16970 + .cfg = &wled3_switch_freq_cfg,
16971 + },
16972 +- {
16973 +- .name = "qcom,num-strings",
16974 +- .val_ptr = &cfg->num_strings,
16975 +- .cfg = &wled3_num_strings_cfg,
16976 +- },
16977 + };
16978 +
16979 + const struct wled_u32_opts wled4_opts[] = {
16980 +@@ -1372,11 +1353,6 @@ static int wled_configure(struct wled *wled)
16981 + .val_ptr = &cfg->switch_freq,
16982 + .cfg = &wled3_switch_freq_cfg,
16983 + },
16984 +- {
16985 +- .name = "qcom,num-strings",
16986 +- .val_ptr = &cfg->num_strings,
16987 +- .cfg = &wled4_num_strings_cfg,
16988 +- },
16989 + };
16990 +
16991 + const struct wled_u32_opts wled5_opts[] = {
16992 +@@ -1400,11 +1376,6 @@ static int wled_configure(struct wled *wled)
16993 + .val_ptr = &cfg->switch_freq,
16994 + .cfg = &wled3_switch_freq_cfg,
16995 + },
16996 +- {
16997 +- .name = "qcom,num-strings",
16998 +- .val_ptr = &cfg->num_strings,
16999 +- .cfg = &wled4_num_strings_cfg,
17000 +- },
17001 + {
17002 + .name = "qcom,modulator-sel",
17003 + .val_ptr = &cfg->mod_sel,
17004 +@@ -1523,16 +1494,57 @@ static int wled_configure(struct wled *wled)
17005 + *bool_opts[i].val_ptr = true;
17006 + }
17007 +
17008 +- cfg->num_strings = cfg->num_strings + 1;
17009 +-
17010 + string_len = of_property_count_elems_of_size(dev->of_node,
17011 + "qcom,enabled-strings",
17012 + sizeof(u32));
17013 +- if (string_len > 0)
17014 +- of_property_read_u32_array(dev->of_node,
17015 ++ if (string_len > 0) {
17016 ++ if (string_len > wled->max_string_count) {
17017 ++ dev_err(dev, "Cannot have more than %d strings\n",
17018 ++ wled->max_string_count);
17019 ++ return -EINVAL;
17020 ++ }
17021 ++
17022 ++ rc = of_property_read_u32_array(dev->of_node,
17023 + "qcom,enabled-strings",
17024 + wled->cfg.enabled_strings,
17025 +- sizeof(u32));
17026 ++ string_len);
17027 ++ if (rc) {
17028 ++ dev_err(dev, "Failed to read %d elements from qcom,enabled-strings: %d\n",
17029 ++ string_len, rc);
17030 ++ return rc;
17031 ++ }
17032 ++
17033 ++ for (i = 0; i < string_len; ++i) {
17034 ++ if (wled->cfg.enabled_strings[i] >= wled->max_string_count) {
17035 ++ dev_err(dev,
17036 ++ "qcom,enabled-strings index %d at %d is out of bounds\n",
17037 ++ wled->cfg.enabled_strings[i], i);
17038 ++ return -EINVAL;
17039 ++ }
17040 ++ }
17041 ++
17042 ++ cfg->num_strings = string_len;
17043 ++ }
17044 ++
17045 ++ rc = of_property_read_u32(dev->of_node, "qcom,num-strings", &val);
17046 ++ if (!rc) {
17047 ++ if (val < 1 || val > wled->max_string_count) {
17048 ++ dev_err(dev, "qcom,num-strings must be between 1 and %d\n",
17049 ++ wled->max_string_count);
17050 ++ return -EINVAL;
17051 ++ }
17052 ++
17053 ++ if (string_len > 0) {
17054 ++ dev_warn(dev, "Only one of qcom,num-strings or qcom,enabled-strings"
17055 ++ " should be set\n");
17056 ++ if (val > string_len) {
17057 ++ dev_err(dev, "qcom,num-strings exceeds qcom,enabled-strings\n");
17058 ++ return -EINVAL;
17059 ++ }
17060 ++ }
17061 ++
17062 ++ cfg->num_strings = val;
17063 ++ }
17064 +
17065 + return 0;
17066 + }
17067 +diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
17068 +index cce75d3b3ba05..3cc2a4ee7152c 100644
17069 +--- a/drivers/virtio/virtio_ring.c
17070 ++++ b/drivers/virtio/virtio_ring.c
17071 +@@ -1124,8 +1124,10 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
17072 + if (virtqueue_use_indirect(_vq, total_sg)) {
17073 + err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
17074 + in_sgs, data, gfp);
17075 +- if (err != -ENOMEM)
17076 ++ if (err != -ENOMEM) {
17077 ++ END_USE(vq);
17078 + return err;
17079 ++ }
17080 +
17081 + /* fall back on direct */
17082 + }
17083 +diff --git a/drivers/w1/slaves/w1_ds28e04.c b/drivers/w1/slaves/w1_ds28e04.c
17084 +index e4f336111edc6..6cef6e2edb892 100644
17085 +--- a/drivers/w1/slaves/w1_ds28e04.c
17086 ++++ b/drivers/w1/slaves/w1_ds28e04.c
17087 +@@ -32,7 +32,7 @@ static int w1_strong_pullup = 1;
17088 + module_param_named(strong_pullup, w1_strong_pullup, int, 0);
17089 +
17090 + /* enable/disable CRC checking on DS28E04-100 memory accesses */
17091 +-static char w1_enable_crccheck = 1;
17092 ++static bool w1_enable_crccheck = true;
17093 +
17094 + #define W1_EEPROM_SIZE 512
17095 + #define W1_PAGE_COUNT 16
17096 +@@ -339,32 +339,18 @@ static BIN_ATTR_RW(pio, 1);
17097 + static ssize_t crccheck_show(struct device *dev, struct device_attribute *attr,
17098 + char *buf)
17099 + {
17100 +- if (put_user(w1_enable_crccheck + 0x30, buf))
17101 +- return -EFAULT;
17102 +-
17103 +- return sizeof(w1_enable_crccheck);
17104 ++ return sysfs_emit(buf, "%d\n", w1_enable_crccheck);
17105 + }
17106 +
17107 + static ssize_t crccheck_store(struct device *dev, struct device_attribute *attr,
17108 + const char *buf, size_t count)
17109 + {
17110 +- char val;
17111 +-
17112 +- if (count != 1 || !buf)
17113 +- return -EINVAL;
17114 ++ int err = kstrtobool(buf, &w1_enable_crccheck);
17115 +
17116 +- if (get_user(val, buf))
17117 +- return -EFAULT;
17118 ++ if (err)
17119 ++ return err;
17120 +
17121 +- /* convert to decimal */
17122 +- val = val - 0x30;
17123 +- if (val != 0 && val != 1)
17124 +- return -EINVAL;
17125 +-
17126 +- /* set the new value */
17127 +- w1_enable_crccheck = val;
17128 +-
17129 +- return sizeof(w1_enable_crccheck);
17130 ++ return count;
17131 + }
17132 +
17133 + static DEVICE_ATTR_RW(crccheck);
17134 +diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
17135 +index b9651f797676c..54778aadf618d 100644
17136 +--- a/drivers/xen/gntdev.c
17137 ++++ b/drivers/xen/gntdev.c
17138 +@@ -240,13 +240,13 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
17139 + if (!refcount_dec_and_test(&map->users))
17140 + return;
17141 +
17142 ++ if (map->pages && !use_ptemod)
17143 ++ unmap_grant_pages(map, 0, map->count);
17144 ++
17145 + if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
17146 + notify_remote_via_evtchn(map->notify.event);
17147 + evtchn_put(map->notify.event);
17148 + }
17149 +-
17150 +- if (map->pages && !use_ptemod)
17151 +- unmap_grant_pages(map, 0, map->count);
17152 + gntdev_free_map(map);
17153 + }
17154 +
17155 +diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
17156 +index 6e447bdaf9ec8..baff31a147e7d 100644
17157 +--- a/fs/btrfs/backref.c
17158 ++++ b/fs/btrfs/backref.c
17159 +@@ -1213,7 +1213,12 @@ again:
17160 + ret = btrfs_search_slot(trans, fs_info->extent_root, &key, path, 0, 0);
17161 + if (ret < 0)
17162 + goto out;
17163 +- BUG_ON(ret == 0);
17164 ++ if (ret == 0) {
17165 ++ /* This shouldn't happen, indicates a bug or fs corruption. */
17166 ++ ASSERT(ret != 0);
17167 ++ ret = -EUCLEAN;
17168 ++ goto out;
17169 ++ }
17170 +
17171 + #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
17172 + if (trans && likely(trans->type != __TRANS_DUMMY) &&
17173 +@@ -1361,10 +1366,18 @@ again:
17174 + goto out;
17175 + if (!ret && extent_item_pos) {
17176 + /*
17177 +- * we've recorded that parent, so we must extend
17178 +- * its inode list here
17179 ++ * We've recorded that parent, so we must extend
17180 ++ * its inode list here.
17181 ++ *
17182 ++ * However if there was corruption we may not
17183 ++ * have found an eie, return an error in this
17184 ++ * case.
17185 + */
17186 +- BUG_ON(!eie);
17187 ++ ASSERT(eie);
17188 ++ if (!eie) {
17189 ++ ret = -EUCLEAN;
17190 ++ goto out;
17191 ++ }
17192 + while (eie->next)
17193 + eie = eie->next;
17194 + eie->next = ref->inode_list;
17195 +diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
17196 +index 519cf145f9bd1..5addd1e36a8ee 100644
17197 +--- a/fs/btrfs/ctree.c
17198 ++++ b/fs/btrfs/ctree.c
17199 +@@ -2589,12 +2589,9 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
17200 + {
17201 + struct btrfs_fs_info *fs_info = root->fs_info;
17202 + struct extent_buffer *b;
17203 +- int root_lock;
17204 ++ int root_lock = 0;
17205 + int level = 0;
17206 +
17207 +- /* We try very hard to do read locks on the root */
17208 +- root_lock = BTRFS_READ_LOCK;
17209 +-
17210 + if (p->search_commit_root) {
17211 + /*
17212 + * The commit roots are read only so we always do read locks,
17213 +@@ -2632,6 +2629,9 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
17214 + goto out;
17215 + }
17216 +
17217 ++ /* We try very hard to do read locks on the root */
17218 ++ root_lock = BTRFS_READ_LOCK;
17219 ++
17220 + /*
17221 + * If the level is set to maximum, we can skip trying to get the read
17222 + * lock.
17223 +@@ -2658,6 +2658,17 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
17224 + level = btrfs_header_level(b);
17225 +
17226 + out:
17227 ++ /*
17228 ++ * The root may have failed to write out at some point, and thus is no
17229 ++ * longer valid, return an error in this case.
17230 ++ */
17231 ++ if (!extent_buffer_uptodate(b)) {
17232 ++ if (root_lock)
17233 ++ btrfs_tree_unlock_rw(b, root_lock);
17234 ++ free_extent_buffer(b);
17235 ++ return ERR_PTR(-EIO);
17236 ++ }
17237 ++
17238 + p->nodes[level] = b;
17239 + if (!p->skip_locking)
17240 + p->locks[level] = root_lock;
17241 +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
17242 +index ff3f0638cdb90..1d9262a35473c 100644
17243 +--- a/fs/btrfs/inode.c
17244 ++++ b/fs/btrfs/inode.c
17245 +@@ -10094,9 +10094,19 @@ static int btrfs_add_swap_extent(struct swap_info_struct *sis,
17246 + struct btrfs_swap_info *bsi)
17247 + {
17248 + unsigned long nr_pages;
17249 ++ unsigned long max_pages;
17250 + u64 first_ppage, first_ppage_reported, next_ppage;
17251 + int ret;
17252 +
17253 ++ /*
17254 ++ * Our swapfile may have had its size extended after the swap header was
17255 ++ * written. In that case activating the swapfile should not go beyond
17256 ++ * the max size set in the swap header.
17257 ++ */
17258 ++ if (bsi->nr_pages >= sis->max)
17259 ++ return 0;
17260 ++
17261 ++ max_pages = sis->max - bsi->nr_pages;
17262 + first_ppage = ALIGN(bsi->block_start, PAGE_SIZE) >> PAGE_SHIFT;
17263 + next_ppage = ALIGN_DOWN(bsi->block_start + bsi->block_len,
17264 + PAGE_SIZE) >> PAGE_SHIFT;
17265 +@@ -10104,6 +10114,7 @@ static int btrfs_add_swap_extent(struct swap_info_struct *sis,
17266 + if (first_ppage >= next_ppage)
17267 + return 0;
17268 + nr_pages = next_ppage - first_ppage;
17269 ++ nr_pages = min(nr_pages, max_pages);
17270 +
17271 + first_ppage_reported = first_ppage;
17272 + if (bsi->start == 0)
17273 +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
17274 +index 4bac32a274ceb..f65aa4ed5ca1e 100644
17275 +--- a/fs/btrfs/qgroup.c
17276 ++++ b/fs/btrfs/qgroup.c
17277 +@@ -941,6 +941,14 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info)
17278 + int ret = 0;
17279 + int slot;
17280 +
17281 ++ /*
17282 ++ * We need to have subvol_sem write locked, to prevent races between
17283 ++ * concurrent tasks trying to enable quotas, because we will unlock
17284 ++ * and relock qgroup_ioctl_lock before setting fs_info->quota_root
17285 ++ * and before setting BTRFS_FS_QUOTA_ENABLED.
17286 ++ */
17287 ++ lockdep_assert_held_write(&fs_info->subvol_sem);
17288 ++
17289 + mutex_lock(&fs_info->qgroup_ioctl_lock);
17290 + if (fs_info->quota_root)
17291 + goto out;
17292 +@@ -1118,8 +1126,19 @@ out_add_root:
17293 + goto out_free_path;
17294 + }
17295 +
17296 ++ mutex_unlock(&fs_info->qgroup_ioctl_lock);
17297 ++ /*
17298 ++ * Commit the transaction while not holding qgroup_ioctl_lock, to avoid
17299 ++ * a deadlock with tasks concurrently doing other qgroup operations, such
17300 ++ * adding/removing qgroups or adding/deleting qgroup relations for example,
17301 ++ * because all qgroup operations first start or join a transaction and then
17302 ++ * lock the qgroup_ioctl_lock mutex.
17303 ++ * We are safe from a concurrent task trying to enable quotas, by calling
17304 ++ * this function, since we are serialized by fs_info->subvol_sem.
17305 ++ */
17306 + ret = btrfs_commit_transaction(trans);
17307 + trans = NULL;
17308 ++ mutex_lock(&fs_info->qgroup_ioctl_lock);
17309 + if (ret)
17310 + goto out_free_path;
17311 +
17312 +diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
17313 +index 3aa5eb9ce498e..96059af28f508 100644
17314 +--- a/fs/debugfs/file.c
17315 ++++ b/fs/debugfs/file.c
17316 +@@ -147,7 +147,7 @@ static int debugfs_locked_down(struct inode *inode,
17317 + struct file *filp,
17318 + const struct file_operations *real_fops)
17319 + {
17320 +- if ((inode->i_mode & 07777) == 0444 &&
17321 ++ if ((inode->i_mode & 07777 & ~0444) == 0 &&
17322 + !(filp->f_mode & FMODE_WRITE) &&
17323 + !real_fops->unlocked_ioctl &&
17324 + !real_fops->compat_ioctl &&
17325 +diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
17326 +index 002123efc6b05..1e9d8999b9390 100644
17327 +--- a/fs/dlm/lock.c
17328 ++++ b/fs/dlm/lock.c
17329 +@@ -3975,6 +3975,14 @@ static int validate_message(struct dlm_lkb *lkb, struct dlm_message *ms)
17330 + int from = ms->m_header.h_nodeid;
17331 + int error = 0;
17332 +
17333 ++ /* currently mixing of user/kernel locks are not supported */
17334 ++ if (ms->m_flags & DLM_IFL_USER && ~lkb->lkb_flags & DLM_IFL_USER) {
17335 ++ log_error(lkb->lkb_resource->res_ls,
17336 ++ "got user dlm message for a kernel lock");
17337 ++ error = -EINVAL;
17338 ++ goto out;
17339 ++ }
17340 ++
17341 + switch (ms->m_type) {
17342 + case DLM_MSG_CONVERT:
17343 + case DLM_MSG_UNLOCK:
17344 +@@ -4003,6 +4011,7 @@ static int validate_message(struct dlm_lkb *lkb, struct dlm_message *ms)
17345 + error = -EINVAL;
17346 + }
17347 +
17348 ++out:
17349 + if (error)
17350 + log_error(lkb->lkb_resource->res_ls,
17351 + "ignore invalid message %d from %d %x %x %x %d",
17352 +diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
17353 +index 0c78fdfb1f6fa..68b765369c928 100644
17354 +--- a/fs/dlm/lowcomms.c
17355 ++++ b/fs/dlm/lowcomms.c
17356 +@@ -471,8 +471,8 @@ int dlm_lowcomms_connect_node(int nodeid)
17357 + static void lowcomms_error_report(struct sock *sk)
17358 + {
17359 + struct connection *con;
17360 +- struct sockaddr_storage saddr;
17361 + void (*orig_report)(struct sock *) = NULL;
17362 ++ struct inet_sock *inet;
17363 +
17364 + read_lock_bh(&sk->sk_callback_lock);
17365 + con = sock2con(sk);
17366 +@@ -480,34 +480,33 @@ static void lowcomms_error_report(struct sock *sk)
17367 + goto out;
17368 +
17369 + orig_report = listen_sock.sk_error_report;
17370 +- if (con->sock == NULL ||
17371 +- kernel_getpeername(con->sock, (struct sockaddr *)&saddr) < 0) {
17372 +- printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
17373 +- "sending to node %d, port %d, "
17374 +- "sk_err=%d/%d\n", dlm_our_nodeid(),
17375 +- con->nodeid, dlm_config.ci_tcp_port,
17376 +- sk->sk_err, sk->sk_err_soft);
17377 +- } else if (saddr.ss_family == AF_INET) {
17378 +- struct sockaddr_in *sin4 = (struct sockaddr_in *)&saddr;
17379 +
17380 ++ inet = inet_sk(sk);
17381 ++ switch (sk->sk_family) {
17382 ++ case AF_INET:
17383 + printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
17384 +- "sending to node %d at %pI4, port %d, "
17385 ++ "sending to node %d at %pI4, dport %d, "
17386 + "sk_err=%d/%d\n", dlm_our_nodeid(),
17387 +- con->nodeid, &sin4->sin_addr.s_addr,
17388 +- dlm_config.ci_tcp_port, sk->sk_err,
17389 ++ con->nodeid, &inet->inet_daddr,
17390 ++ ntohs(inet->inet_dport), sk->sk_err,
17391 + sk->sk_err_soft);
17392 +- } else {
17393 +- struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&saddr;
17394 +-
17395 ++ break;
17396 ++#if IS_ENABLED(CONFIG_IPV6)
17397 ++ case AF_INET6:
17398 + printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
17399 +- "sending to node %d at %u.%u.%u.%u, "
17400 +- "port %d, sk_err=%d/%d\n", dlm_our_nodeid(),
17401 +- con->nodeid, sin6->sin6_addr.s6_addr32[0],
17402 +- sin6->sin6_addr.s6_addr32[1],
17403 +- sin6->sin6_addr.s6_addr32[2],
17404 +- sin6->sin6_addr.s6_addr32[3],
17405 +- dlm_config.ci_tcp_port, sk->sk_err,
17406 ++ "sending to node %d at %pI6c, "
17407 ++ "dport %d, sk_err=%d/%d\n", dlm_our_nodeid(),
17408 ++ con->nodeid, &sk->sk_v6_daddr,
17409 ++ ntohs(inet->inet_dport), sk->sk_err,
17410 + sk->sk_err_soft);
17411 ++ break;
17412 ++#endif
17413 ++ default:
17414 ++ printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
17415 ++ "invalid socket family %d set, "
17416 ++ "sk_err=%d/%d\n", dlm_our_nodeid(),
17417 ++ sk->sk_family, sk->sk_err, sk->sk_err_soft);
17418 ++ goto out;
17419 + }
17420 + out:
17421 + read_unlock_bh(&sk->sk_callback_lock);
17422 +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
17423 +index 115a77b96e5e1..99d98d1010217 100644
17424 +--- a/fs/ext4/ext4.h
17425 ++++ b/fs/ext4/ext4.h
17426 +@@ -2778,6 +2778,7 @@ bool ext4_fc_replay_check_excluded(struct super_block *sb, ext4_fsblk_t block);
17427 + void ext4_fc_replay_cleanup(struct super_block *sb);
17428 + int ext4_fc_commit(journal_t *journal, tid_t commit_tid);
17429 + int __init ext4_fc_init_dentry_cache(void);
17430 ++void ext4_fc_destroy_dentry_cache(void);
17431 +
17432 + /* mballoc.c */
17433 + extern const struct seq_operations ext4_mb_seq_groups_ops;
17434 +diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
17435 +index 0fd0c42a4f7db..6ff7b4020df8a 100644
17436 +--- a/fs/ext4/ext4_jbd2.c
17437 ++++ b/fs/ext4/ext4_jbd2.c
17438 +@@ -162,6 +162,8 @@ int __ext4_journal_ensure_credits(handle_t *handle, int check_cred,
17439 + {
17440 + if (!ext4_handle_valid(handle))
17441 + return 0;
17442 ++ if (is_handle_aborted(handle))
17443 ++ return -EROFS;
17444 + if (jbd2_handle_buffer_credits(handle) >= check_cred &&
17445 + handle->h_revoke_credits >= revoke_cred)
17446 + return 0;
17447 +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
17448 +index b8c9df6ab67f5..b297b14de7509 100644
17449 +--- a/fs/ext4/extents.c
17450 ++++ b/fs/ext4/extents.c
17451 +@@ -4638,8 +4638,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
17452 + ret = ext4_mark_inode_dirty(handle, inode);
17453 + if (unlikely(ret))
17454 + goto out_handle;
17455 +- ext4_fc_track_range(handle, inode, offset >> inode->i_sb->s_blocksize_bits,
17456 +- (offset + len - 1) >> inode->i_sb->s_blocksize_bits);
17457 + /* Zero out partial block at the edges of the range */
17458 + ret = ext4_zero_partial_blocks(handle, inode, offset, len);
17459 + if (ret >= 0)
17460 +diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
17461 +index 08ca690f928bd..f483abcd5213a 100644
17462 +--- a/fs/ext4/fast_commit.c
17463 ++++ b/fs/ext4/fast_commit.c
17464 +@@ -1764,11 +1764,14 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
17465 + }
17466 + }
17467 +
17468 +- ret = ext4_punch_hole(inode,
17469 +- le32_to_cpu(lrange.fc_lblk) << sb->s_blocksize_bits,
17470 +- le32_to_cpu(lrange.fc_len) << sb->s_blocksize_bits);
17471 +- if (ret)
17472 +- jbd_debug(1, "ext4_punch_hole returned %d", ret);
17473 ++ down_write(&EXT4_I(inode)->i_data_sem);
17474 ++ ret = ext4_ext_remove_space(inode, lrange.fc_lblk,
17475 ++ lrange.fc_lblk + lrange.fc_len - 1);
17476 ++ up_write(&EXT4_I(inode)->i_data_sem);
17477 ++ if (ret) {
17478 ++ iput(inode);
17479 ++ return 0;
17480 ++ }
17481 + ext4_ext_replay_shrink_inode(inode,
17482 + i_size_read(inode) >> sb->s_blocksize_bits);
17483 + ext4_mark_inode_dirty(NULL, inode);
17484 +@@ -2166,3 +2169,8 @@ int __init ext4_fc_init_dentry_cache(void)
17485 +
17486 + return 0;
17487 + }
17488 ++
17489 ++void ext4_fc_destroy_dentry_cache(void)
17490 ++{
17491 ++ kmem_cache_destroy(ext4_fc_dentry_cachep);
17492 ++}
17493 +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
17494 +index 317aa1b90fb95..d59474a541897 100644
17495 +--- a/fs/ext4/inode.c
17496 ++++ b/fs/ext4/inode.c
17497 +@@ -741,10 +741,11 @@ out_sem:
17498 + if (ret)
17499 + return ret;
17500 + }
17501 +- ext4_fc_track_range(handle, inode, map->m_lblk,
17502 +- map->m_lblk + map->m_len - 1);
17503 + }
17504 +-
17505 ++ if (retval > 0 && (map->m_flags & EXT4_MAP_UNWRITTEN ||
17506 ++ map->m_flags & EXT4_MAP_MAPPED))
17507 ++ ext4_fc_track_range(handle, inode, map->m_lblk,
17508 ++ map->m_lblk + map->m_len - 1);
17509 + if (retval < 0)
17510 + ext_debug(inode, "failed with err %d\n", retval);
17511 + return retval;
17512 +@@ -4445,7 +4446,7 @@ has_buffer:
17513 + static int __ext4_get_inode_loc_noinmem(struct inode *inode,
17514 + struct ext4_iloc *iloc)
17515 + {
17516 +- ext4_fsblk_t err_blk;
17517 ++ ext4_fsblk_t err_blk = 0;
17518 + int ret;
17519 +
17520 + ret = __ext4_get_inode_loc(inode->i_sb, inode->i_ino, iloc, 0,
17521 +@@ -4460,7 +4461,7 @@ static int __ext4_get_inode_loc_noinmem(struct inode *inode,
17522 +
17523 + int ext4_get_inode_loc(struct inode *inode, struct ext4_iloc *iloc)
17524 + {
17525 +- ext4_fsblk_t err_blk;
17526 ++ ext4_fsblk_t err_blk = 0;
17527 + int ret;
17528 +
17529 + /* We have all inode data except xattrs in memory here. */
17530 +@@ -5467,8 +5468,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
17531 + ext4_fc_track_range(handle, inode,
17532 + (attr->ia_size > 0 ? attr->ia_size - 1 : 0) >>
17533 + inode->i_sb->s_blocksize_bits,
17534 +- (oldsize > 0 ? oldsize - 1 : 0) >>
17535 +- inode->i_sb->s_blocksize_bits);
17536 ++ EXT_MAX_BLOCKS - 1);
17537 + else
17538 + ext4_fc_track_range(
17539 + handle, inode,
17540 +diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
17541 +index cb54ea6461fd8..413bf3d2f7844 100644
17542 +--- a/fs/ext4/ioctl.c
17543 ++++ b/fs/ext4/ioctl.c
17544 +@@ -1123,8 +1123,6 @@ resizefs_out:
17545 + sizeof(range)))
17546 + return -EFAULT;
17547 +
17548 +- range.minlen = max((unsigned int)range.minlen,
17549 +- q->limits.discard_granularity);
17550 + ret = ext4_trim_fs(sb, &range);
17551 + if (ret < 0)
17552 + return ret;
17553 +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
17554 +index d7cb7d719ee58..e40f87d07783a 100644
17555 +--- a/fs/ext4/mballoc.c
17556 ++++ b/fs/ext4/mballoc.c
17557 +@@ -4234,7 +4234,7 @@ ext4_mb_release_group_pa(struct ext4_buddy *e4b,
17558 + */
17559 + static noinline_for_stack int
17560 + ext4_mb_discard_group_preallocations(struct super_block *sb,
17561 +- ext4_group_t group, int needed)
17562 ++ ext4_group_t group, int *busy)
17563 + {
17564 + struct ext4_group_info *grp = ext4_get_group_info(sb, group);
17565 + struct buffer_head *bitmap_bh = NULL;
17566 +@@ -4242,8 +4242,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
17567 + struct list_head list;
17568 + struct ext4_buddy e4b;
17569 + int err;
17570 +- int busy = 0;
17571 +- int free, free_total = 0;
17572 ++ int free = 0;
17573 +
17574 + mb_debug(sb, "discard preallocation for group %u\n", group);
17575 + if (list_empty(&grp->bb_prealloc_list))
17576 +@@ -4266,19 +4265,14 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
17577 + goto out_dbg;
17578 + }
17579 +
17580 +- if (needed == 0)
17581 +- needed = EXT4_CLUSTERS_PER_GROUP(sb) + 1;
17582 +-
17583 + INIT_LIST_HEAD(&list);
17584 +-repeat:
17585 +- free = 0;
17586 + ext4_lock_group(sb, group);
17587 + list_for_each_entry_safe(pa, tmp,
17588 + &grp->bb_prealloc_list, pa_group_list) {
17589 + spin_lock(&pa->pa_lock);
17590 + if (atomic_read(&pa->pa_count)) {
17591 + spin_unlock(&pa->pa_lock);
17592 +- busy = 1;
17593 ++ *busy = 1;
17594 + continue;
17595 + }
17596 + if (pa->pa_deleted) {
17597 +@@ -4318,22 +4312,13 @@ repeat:
17598 + call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
17599 + }
17600 +
17601 +- free_total += free;
17602 +-
17603 +- /* if we still need more blocks and some PAs were used, try again */
17604 +- if (free_total < needed && busy) {
17605 +- ext4_unlock_group(sb, group);
17606 +- cond_resched();
17607 +- busy = 0;
17608 +- goto repeat;
17609 +- }
17610 + ext4_unlock_group(sb, group);
17611 + ext4_mb_unload_buddy(&e4b);
17612 + put_bh(bitmap_bh);
17613 + out_dbg:
17614 + mb_debug(sb, "discarded (%d) blocks preallocated for group %u bb_free (%d)\n",
17615 +- free_total, group, grp->bb_free);
17616 +- return free_total;
17617 ++ free, group, grp->bb_free);
17618 ++ return free;
17619 + }
17620 +
17621 + /*
17622 +@@ -4875,13 +4860,24 @@ static int ext4_mb_discard_preallocations(struct super_block *sb, int needed)
17623 + {
17624 + ext4_group_t i, ngroups = ext4_get_groups_count(sb);
17625 + int ret;
17626 +- int freed = 0;
17627 ++ int freed = 0, busy = 0;
17628 ++ int retry = 0;
17629 +
17630 + trace_ext4_mb_discard_preallocations(sb, needed);
17631 ++
17632 ++ if (needed == 0)
17633 ++ needed = EXT4_CLUSTERS_PER_GROUP(sb) + 1;
17634 ++ repeat:
17635 + for (i = 0; i < ngroups && needed > 0; i++) {
17636 +- ret = ext4_mb_discard_group_preallocations(sb, i, needed);
17637 ++ ret = ext4_mb_discard_group_preallocations(sb, i, &busy);
17638 + freed += ret;
17639 + needed -= ret;
17640 ++ cond_resched();
17641 ++ }
17642 ++
17643 ++ if (needed > 0 && busy && ++retry < 3) {
17644 ++ busy = 0;
17645 ++ goto repeat;
17646 + }
17647 +
17648 + return freed;
17649 +@@ -5815,6 +5811,7 @@ out:
17650 + */
17651 + int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
17652 + {
17653 ++ struct request_queue *q = bdev_get_queue(sb->s_bdev);
17654 + struct ext4_group_info *grp;
17655 + ext4_group_t group, first_group, last_group;
17656 + ext4_grpblk_t cnt = 0, first_cluster, last_cluster;
17657 +@@ -5833,6 +5830,13 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
17658 + start >= max_blks ||
17659 + range->len < sb->s_blocksize)
17660 + return -EINVAL;
17661 ++ /* No point to try to trim less than discard granularity */
17662 ++ if (range->minlen < q->limits.discard_granularity) {
17663 ++ minlen = EXT4_NUM_B2C(EXT4_SB(sb),
17664 ++ q->limits.discard_granularity >> sb->s_blocksize_bits);
17665 ++ if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
17666 ++ goto out;
17667 ++ }
17668 + if (end >= max_blks)
17669 + end = max_blks - 1;
17670 + if (end <= first_data_blk)
17671 +diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
17672 +index c5e3fc998211a..49912814f3d8d 100644
17673 +--- a/fs/ext4/migrate.c
17674 ++++ b/fs/ext4/migrate.c
17675 +@@ -437,12 +437,12 @@ int ext4_ext_migrate(struct inode *inode)
17676 + percpu_down_write(&sbi->s_writepages_rwsem);
17677 +
17678 + /*
17679 +- * Worst case we can touch the allocation bitmaps, a bgd
17680 +- * block, and a block to link in the orphan list. We do need
17681 +- * need to worry about credits for modifying the quota inode.
17682 ++ * Worst case we can touch the allocation bitmaps and a block
17683 ++ * group descriptor block. We do need need to worry about
17684 ++ * credits for modifying the quota inode.
17685 + */
17686 + handle = ext4_journal_start(inode, EXT4_HT_MIGRATE,
17687 +- 4 + EXT4_MAXQUOTAS_TRANS_BLOCKS(inode->i_sb));
17688 ++ 3 + EXT4_MAXQUOTAS_TRANS_BLOCKS(inode->i_sb));
17689 +
17690 + if (IS_ERR(handle)) {
17691 + retval = PTR_ERR(handle);
17692 +@@ -459,6 +459,13 @@ int ext4_ext_migrate(struct inode *inode)
17693 + ext4_journal_stop(handle);
17694 + goto out_unlock;
17695 + }
17696 ++ /*
17697 ++ * Use the correct seed for checksum (i.e. the seed from 'inode'). This
17698 ++ * is so that the metadata blocks will have the correct checksum after
17699 ++ * the migration.
17700 ++ */
17701 ++ ei = EXT4_I(inode);
17702 ++ EXT4_I(tmp_inode)->i_csum_seed = ei->i_csum_seed;
17703 + i_size_write(tmp_inode, i_size_read(inode));
17704 + /*
17705 + * Set the i_nlink to zero so it will be deleted later
17706 +@@ -467,7 +474,6 @@ int ext4_ext_migrate(struct inode *inode)
17707 + clear_nlink(tmp_inode);
17708 +
17709 + ext4_ext_tree_init(handle, tmp_inode);
17710 +- ext4_orphan_add(handle, tmp_inode);
17711 + ext4_journal_stop(handle);
17712 +
17713 + /*
17714 +@@ -492,17 +498,10 @@ int ext4_ext_migrate(struct inode *inode)
17715 +
17716 + handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1);
17717 + if (IS_ERR(handle)) {
17718 +- /*
17719 +- * It is impossible to update on-disk structures without
17720 +- * a handle, so just rollback in-core changes and live other
17721 +- * work to orphan_list_cleanup()
17722 +- */
17723 +- ext4_orphan_del(NULL, tmp_inode);
17724 + retval = PTR_ERR(handle);
17725 + goto out_tmp_inode;
17726 + }
17727 +
17728 +- ei = EXT4_I(inode);
17729 + i_data = ei->i_data;
17730 + memset(&lb, 0, sizeof(lb));
17731 +
17732 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
17733 +index b1af6588bad01..9e210bc85c817 100644
17734 +--- a/fs/ext4/super.c
17735 ++++ b/fs/ext4/super.c
17736 +@@ -6341,10 +6341,7 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
17737 +
17738 + lockdep_set_quota_inode(path->dentry->d_inode, I_DATA_SEM_QUOTA);
17739 + err = dquot_quota_on(sb, type, format_id, path);
17740 +- if (err) {
17741 +- lockdep_set_quota_inode(path->dentry->d_inode,
17742 +- I_DATA_SEM_NORMAL);
17743 +- } else {
17744 ++ if (!err) {
17745 + struct inode *inode = d_inode(path->dentry);
17746 + handle_t *handle;
17747 +
17748 +@@ -6364,7 +6361,12 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
17749 + ext4_journal_stop(handle);
17750 + unlock_inode:
17751 + inode_unlock(inode);
17752 ++ if (err)
17753 ++ dquot_quota_off(sb, type);
17754 + }
17755 ++ if (err)
17756 ++ lockdep_set_quota_inode(path->dentry->d_inode,
17757 ++ I_DATA_SEM_NORMAL);
17758 + return err;
17759 + }
17760 +
17761 +@@ -6427,8 +6429,19 @@ static int ext4_enable_quotas(struct super_block *sb)
17762 + "Failed to enable quota tracking "
17763 + "(type=%d, err=%d). Please run "
17764 + "e2fsck to fix.", type, err);
17765 +- for (type--; type >= 0; type--)
17766 ++ for (type--; type >= 0; type--) {
17767 ++ struct inode *inode;
17768 ++
17769 ++ inode = sb_dqopt(sb)->files[type];
17770 ++ if (inode)
17771 ++ inode = igrab(inode);
17772 + dquot_quota_off(sb, type);
17773 ++ if (inode) {
17774 ++ lockdep_set_quota_inode(inode,
17775 ++ I_DATA_SEM_NORMAL);
17776 ++ iput(inode);
17777 ++ }
17778 ++ }
17779 +
17780 + return err;
17781 + }
17782 +@@ -6532,7 +6545,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
17783 + struct buffer_head *bh;
17784 + handle_t *handle = journal_current_handle();
17785 +
17786 +- if (EXT4_SB(sb)->s_journal && !handle) {
17787 ++ if (!handle) {
17788 + ext4_msg(sb, KERN_WARNING, "Quota write (off=%llu, len=%llu)"
17789 + " cancelled because transaction is not started",
17790 + (unsigned long long)off, (unsigned long long)len);
17791 +@@ -6716,6 +6729,7 @@ static int __init ext4_init_fs(void)
17792 + out:
17793 + unregister_as_ext2();
17794 + unregister_as_ext3();
17795 ++ ext4_fc_destroy_dentry_cache();
17796 + out05:
17797 + destroy_inodecache();
17798 + out1:
17799 +@@ -6742,6 +6756,7 @@ static void __exit ext4_exit_fs(void)
17800 + unregister_as_ext2();
17801 + unregister_as_ext3();
17802 + unregister_filesystem(&ext4_fs_type);
17803 ++ ext4_fc_destroy_dentry_cache();
17804 + destroy_inodecache();
17805 + ext4_exit_mballoc();
17806 + ext4_exit_sysfs();
17807 +diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
17808 +index 30987ea011f1a..ec542e8c46cc9 100644
17809 +--- a/fs/f2fs/compress.c
17810 ++++ b/fs/f2fs/compress.c
17811 +@@ -1362,25 +1362,38 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
17812 + enum iostat_type io_type)
17813 + {
17814 + struct address_space *mapping = cc->inode->i_mapping;
17815 +- int _submitted, compr_blocks, ret;
17816 +- int i = -1, err = 0;
17817 ++ int _submitted, compr_blocks, ret, i;
17818 +
17819 + compr_blocks = f2fs_compressed_blocks(cc);
17820 +- if (compr_blocks < 0) {
17821 +- err = compr_blocks;
17822 +- goto out_err;
17823 ++
17824 ++ for (i = 0; i < cc->cluster_size; i++) {
17825 ++ if (!cc->rpages[i])
17826 ++ continue;
17827 ++
17828 ++ redirty_page_for_writepage(wbc, cc->rpages[i]);
17829 ++ unlock_page(cc->rpages[i]);
17830 + }
17831 +
17832 ++ if (compr_blocks < 0)
17833 ++ return compr_blocks;
17834 ++
17835 + for (i = 0; i < cc->cluster_size; i++) {
17836 + if (!cc->rpages[i])
17837 + continue;
17838 + retry_write:
17839 ++ lock_page(cc->rpages[i]);
17840 ++
17841 + if (cc->rpages[i]->mapping != mapping) {
17842 ++continue_unlock:
17843 + unlock_page(cc->rpages[i]);
17844 + continue;
17845 + }
17846 +
17847 +- BUG_ON(!PageLocked(cc->rpages[i]));
17848 ++ if (!PageDirty(cc->rpages[i]))
17849 ++ goto continue_unlock;
17850 ++
17851 ++ if (!clear_page_dirty_for_io(cc->rpages[i]))
17852 ++ goto continue_unlock;
17853 +
17854 + ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
17855 + NULL, NULL, wbc, io_type,
17856 +@@ -1395,26 +1408,15 @@ retry_write:
17857 + * avoid deadlock caused by cluster update race
17858 + * from foreground operation.
17859 + */
17860 +- if (IS_NOQUOTA(cc->inode)) {
17861 +- err = 0;
17862 +- goto out_err;
17863 +- }
17864 ++ if (IS_NOQUOTA(cc->inode))
17865 ++ return 0;
17866 + ret = 0;
17867 + cond_resched();
17868 + congestion_wait(BLK_RW_ASYNC,
17869 + DEFAULT_IO_TIMEOUT);
17870 +- lock_page(cc->rpages[i]);
17871 +-
17872 +- if (!PageDirty(cc->rpages[i])) {
17873 +- unlock_page(cc->rpages[i]);
17874 +- continue;
17875 +- }
17876 +-
17877 +- clear_page_dirty_for_io(cc->rpages[i]);
17878 + goto retry_write;
17879 + }
17880 +- err = ret;
17881 +- goto out_err;
17882 ++ return ret;
17883 + }
17884 +
17885 + *submitted += _submitted;
17886 +@@ -1423,14 +1425,6 @@ retry_write:
17887 + f2fs_balance_fs(F2FS_M_SB(mapping), true);
17888 +
17889 + return 0;
17890 +-out_err:
17891 +- for (++i; i < cc->cluster_size; i++) {
17892 +- if (!cc->rpages[i])
17893 +- continue;
17894 +- redirty_page_for_writepage(wbc, cc->rpages[i]);
17895 +- unlock_page(cc->rpages[i]);
17896 +- }
17897 +- return err;
17898 + }
17899 +
17900 + int f2fs_write_multi_pages(struct compress_ctx *cc,
17901 +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
17902 +index bc488a7d01903..6c4bf22a3e83e 100644
17903 +--- a/fs/f2fs/f2fs.h
17904 ++++ b/fs/f2fs/f2fs.h
17905 +@@ -955,6 +955,7 @@ struct f2fs_sm_info {
17906 + unsigned int segment_count; /* total # of segments */
17907 + unsigned int main_segments; /* # of segments in main area */
17908 + unsigned int reserved_segments; /* # of reserved segments */
17909 ++ unsigned int additional_reserved_segments;/* reserved segs for IO align feature */
17910 + unsigned int ovp_segments; /* # of overprovision segments */
17911 +
17912 + /* a threshold to reclaim prefree segments */
17913 +@@ -1984,6 +1985,11 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
17914 +
17915 + if (!__allow_reserved_blocks(sbi, inode, true))
17916 + avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
17917 ++
17918 ++ if (F2FS_IO_ALIGNED(sbi))
17919 ++ avail_user_block_count -= sbi->blocks_per_seg *
17920 ++ SM_I(sbi)->additional_reserved_segments;
17921 ++
17922 + if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
17923 + if (avail_user_block_count > sbi->unusable_block_count)
17924 + avail_user_block_count -= sbi->unusable_block_count;
17925 +@@ -2229,6 +2235,11 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
17926 +
17927 + if (!__allow_reserved_blocks(sbi, inode, false))
17928 + valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
17929 ++
17930 ++ if (F2FS_IO_ALIGNED(sbi))
17931 ++ valid_block_count += sbi->blocks_per_seg *
17932 ++ SM_I(sbi)->additional_reserved_segments;
17933 ++
17934 + user_block_count = sbi->user_block_count;
17935 + if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
17936 + user_block_count -= sbi->unusable_block_count;
17937 +diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
17938 +index 72f227f6ebad0..6b240b71d2e83 100644
17939 +--- a/fs/f2fs/gc.c
17940 ++++ b/fs/f2fs/gc.c
17941 +@@ -998,6 +998,9 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
17942 + set_sbi_flag(sbi, SBI_NEED_FSCK);
17943 + }
17944 +
17945 ++ if (f2fs_check_nid_range(sbi, dni->ino))
17946 ++ return false;
17947 ++
17948 + *nofs = ofs_of_node(node_page);
17949 + source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
17950 + f2fs_put_page(node_page, 1);
17951 +diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
17952 +index 1bf33fc27b8f8..beef833a69604 100644
17953 +--- a/fs/f2fs/segment.h
17954 ++++ b/fs/f2fs/segment.h
17955 +@@ -539,7 +539,8 @@ static inline unsigned int free_segments(struct f2fs_sb_info *sbi)
17956 +
17957 + static inline unsigned int reserved_segments(struct f2fs_sb_info *sbi)
17958 + {
17959 +- return SM_I(sbi)->reserved_segments;
17960 ++ return SM_I(sbi)->reserved_segments +
17961 ++ SM_I(sbi)->additional_reserved_segments;
17962 + }
17963 +
17964 + static inline unsigned int free_sections(struct f2fs_sb_info *sbi)
17965 +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
17966 +index b7287b722e9e1..af98abb17c272 100644
17967 +--- a/fs/f2fs/super.c
17968 ++++ b/fs/f2fs/super.c
17969 +@@ -289,6 +289,46 @@ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
17970 + F2FS_OPTION(sbi).s_resgid));
17971 + }
17972 +
17973 ++static inline int adjust_reserved_segment(struct f2fs_sb_info *sbi)
17974 ++{
17975 ++ unsigned int sec_blks = sbi->blocks_per_seg * sbi->segs_per_sec;
17976 ++ unsigned int avg_vblocks;
17977 ++ unsigned int wanted_reserved_segments;
17978 ++ block_t avail_user_block_count;
17979 ++
17980 ++ if (!F2FS_IO_ALIGNED(sbi))
17981 ++ return 0;
17982 ++
17983 ++ /* average valid block count in section in worst case */
17984 ++ avg_vblocks = sec_blks / F2FS_IO_SIZE(sbi);
17985 ++
17986 ++ /*
17987 ++ * we need enough free space when migrating one section in worst case
17988 ++ */
17989 ++ wanted_reserved_segments = (F2FS_IO_SIZE(sbi) / avg_vblocks) *
17990 ++ reserved_segments(sbi);
17991 ++ wanted_reserved_segments -= reserved_segments(sbi);
17992 ++
17993 ++ avail_user_block_count = sbi->user_block_count -
17994 ++ sbi->current_reserved_blocks -
17995 ++ F2FS_OPTION(sbi).root_reserved_blocks;
17996 ++
17997 ++ if (wanted_reserved_segments * sbi->blocks_per_seg >
17998 ++ avail_user_block_count) {
17999 ++ f2fs_err(sbi, "IO align feature can't grab additional reserved segment: %u, available segments: %u",
18000 ++ wanted_reserved_segments,
18001 ++ avail_user_block_count >> sbi->log_blocks_per_seg);
18002 ++ return -ENOSPC;
18003 ++ }
18004 ++
18005 ++ SM_I(sbi)->additional_reserved_segments = wanted_reserved_segments;
18006 ++
18007 ++ f2fs_info(sbi, "IO align feature needs additional reserved segment: %u",
18008 ++ wanted_reserved_segments);
18009 ++
18010 ++ return 0;
18011 ++}
18012 ++
18013 + static inline void adjust_unusable_cap_perc(struct f2fs_sb_info *sbi)
18014 + {
18015 + if (!F2FS_OPTION(sbi).unusable_cap_perc)
18016 +@@ -3736,6 +3776,10 @@ try_onemore:
18017 + goto free_nm;
18018 + }
18019 +
18020 ++ err = adjust_reserved_segment(sbi);
18021 ++ if (err)
18022 ++ goto free_nm;
18023 ++
18024 + /* For write statistics */
18025 + if (sb->s_bdev->bd_part)
18026 + sbi->sectors_written_start =
18027 +diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
18028 +index b8850c81068a0..7ffd4bb398b0c 100644
18029 +--- a/fs/f2fs/sysfs.c
18030 ++++ b/fs/f2fs/sysfs.c
18031 +@@ -330,7 +330,9 @@ out:
18032 + if (a->struct_type == RESERVED_BLOCKS) {
18033 + spin_lock(&sbi->stat_lock);
18034 + if (t > (unsigned long)(sbi->user_block_count -
18035 +- F2FS_OPTION(sbi).root_reserved_blocks)) {
18036 ++ F2FS_OPTION(sbi).root_reserved_blocks -
18037 ++ sbi->blocks_per_seg *
18038 ++ SM_I(sbi)->additional_reserved_segments)) {
18039 + spin_unlock(&sbi->stat_lock);
18040 + return -EINVAL;
18041 + }
18042 +diff --git a/fs/fuse/file.c b/fs/fuse/file.c
18043 +index 4dd70b53df81a..e81d1c3eb7e11 100644
18044 +--- a/fs/fuse/file.c
18045 ++++ b/fs/fuse/file.c
18046 +@@ -3251,7 +3251,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
18047 +
18048 + static int fuse_writeback_range(struct inode *inode, loff_t start, loff_t end)
18049 + {
18050 +- int err = filemap_write_and_wait_range(inode->i_mapping, start, -1);
18051 ++ int err = filemap_write_and_wait_range(inode->i_mapping, start, LLONG_MAX);
18052 +
18053 + if (!err)
18054 + fuse_sync_writes(inode);
18055 +diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
18056 +index 4fc8cd698d1a4..bd7d58d27bfc6 100644
18057 +--- a/fs/jffs2/file.c
18058 ++++ b/fs/jffs2/file.c
18059 +@@ -136,20 +136,15 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
18060 + struct page *pg;
18061 + struct inode *inode = mapping->host;
18062 + struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
18063 ++ struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
18064 + pgoff_t index = pos >> PAGE_SHIFT;
18065 + uint32_t pageofs = index << PAGE_SHIFT;
18066 + int ret = 0;
18067 +
18068 +- pg = grab_cache_page_write_begin(mapping, index, flags);
18069 +- if (!pg)
18070 +- return -ENOMEM;
18071 +- *pagep = pg;
18072 +-
18073 + jffs2_dbg(1, "%s()\n", __func__);
18074 +
18075 + if (pageofs > inode->i_size) {
18076 + /* Make new hole frag from old EOF to new page */
18077 +- struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
18078 + struct jffs2_raw_inode ri;
18079 + struct jffs2_full_dnode *fn;
18080 + uint32_t alloc_len;
18081 +@@ -160,7 +155,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
18082 + ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
18083 + ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
18084 + if (ret)
18085 +- goto out_page;
18086 ++ goto out_err;
18087 +
18088 + mutex_lock(&f->sem);
18089 + memset(&ri, 0, sizeof(ri));
18090 +@@ -190,7 +185,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
18091 + ret = PTR_ERR(fn);
18092 + jffs2_complete_reservation(c);
18093 + mutex_unlock(&f->sem);
18094 +- goto out_page;
18095 ++ goto out_err;
18096 + }
18097 + ret = jffs2_add_full_dnode_to_inode(c, f, fn);
18098 + if (f->metadata) {
18099 +@@ -205,13 +200,26 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
18100 + jffs2_free_full_dnode(fn);
18101 + jffs2_complete_reservation(c);
18102 + mutex_unlock(&f->sem);
18103 +- goto out_page;
18104 ++ goto out_err;
18105 + }
18106 + jffs2_complete_reservation(c);
18107 + inode->i_size = pageofs;
18108 + mutex_unlock(&f->sem);
18109 + }
18110 +
18111 ++ /*
18112 ++ * While getting a page and reading data in, lock c->alloc_sem until
18113 ++ * the page is Uptodate. Otherwise GC task may attempt to read the same
18114 ++ * page in read_cache_page(), which causes a deadlock.
18115 ++ */
18116 ++ mutex_lock(&c->alloc_sem);
18117 ++ pg = grab_cache_page_write_begin(mapping, index, flags);
18118 ++ if (!pg) {
18119 ++ ret = -ENOMEM;
18120 ++ goto release_sem;
18121 ++ }
18122 ++ *pagep = pg;
18123 ++
18124 + /*
18125 + * Read in the page if it wasn't already present. Cannot optimize away
18126 + * the whole page write case until jffs2_write_end can handle the
18127 +@@ -221,15 +229,17 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
18128 + mutex_lock(&f->sem);
18129 + ret = jffs2_do_readpage_nolock(inode, pg);
18130 + mutex_unlock(&f->sem);
18131 +- if (ret)
18132 +- goto out_page;
18133 ++ if (ret) {
18134 ++ unlock_page(pg);
18135 ++ put_page(pg);
18136 ++ goto release_sem;
18137 ++ }
18138 + }
18139 + jffs2_dbg(1, "end write_begin(). pg->flags %lx\n", pg->flags);
18140 +- return ret;
18141 +
18142 +-out_page:
18143 +- unlock_page(pg);
18144 +- put_page(pg);
18145 ++release_sem:
18146 ++ mutex_unlock(&c->alloc_sem);
18147 ++out_err:
18148 + return ret;
18149 + }
18150 +
18151 +diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
18152 +index cfd46753a6856..6a8f9efc2e2f0 100644
18153 +--- a/fs/ubifs/super.c
18154 ++++ b/fs/ubifs/super.c
18155 +@@ -1853,7 +1853,6 @@ out:
18156 + kthread_stop(c->bgt);
18157 + c->bgt = NULL;
18158 + }
18159 +- free_wbufs(c);
18160 + kfree(c->write_reserve_buf);
18161 + c->write_reserve_buf = NULL;
18162 + vfree(c->ileb_buf);
18163 +diff --git a/fs/udf/ialloc.c b/fs/udf/ialloc.c
18164 +index 84ed23edebfd3..87a77bf70ee19 100644
18165 +--- a/fs/udf/ialloc.c
18166 ++++ b/fs/udf/ialloc.c
18167 +@@ -77,6 +77,7 @@ struct inode *udf_new_inode(struct inode *dir, umode_t mode)
18168 + GFP_KERNEL);
18169 + }
18170 + if (!iinfo->i_data) {
18171 ++ make_bad_inode(inode);
18172 + iput(inode);
18173 + return ERR_PTR(-ENOMEM);
18174 + }
18175 +@@ -86,6 +87,7 @@ struct inode *udf_new_inode(struct inode *dir, umode_t mode)
18176 + dinfo->i_location.partitionReferenceNum,
18177 + start, &err);
18178 + if (err) {
18179 ++ make_bad_inode(inode);
18180 + iput(inode);
18181 + return ERR_PTR(err);
18182 + }
18183 +diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
18184 +index 6ad3b89a8a2e0..0f5366792d22e 100644
18185 +--- a/include/acpi/acpi_bus.h
18186 ++++ b/include/acpi/acpi_bus.h
18187 +@@ -605,9 +605,10 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int state);
18188 + int acpi_disable_wakeup_device_power(struct acpi_device *dev);
18189 +
18190 + #ifdef CONFIG_X86
18191 +-bool acpi_device_always_present(struct acpi_device *adev);
18192 ++bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status);
18193 + #else
18194 +-static inline bool acpi_device_always_present(struct acpi_device *adev)
18195 ++static inline bool acpi_device_override_status(struct acpi_device *adev,
18196 ++ unsigned long long *status)
18197 + {
18198 + return false;
18199 + }
18200 +diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
18201 +index 647cb11d0a0a3..7334037624c5c 100644
18202 +--- a/include/acpi/actypes.h
18203 ++++ b/include/acpi/actypes.h
18204 +@@ -536,8 +536,14 @@ typedef u64 acpi_integer;
18205 + * Can be used with access_width of struct acpi_generic_address and access_size of
18206 + * struct acpi_resource_generic_register.
18207 + */
18208 +-#define ACPI_ACCESS_BIT_WIDTH(size) (1 << ((size) + 2))
18209 +-#define ACPI_ACCESS_BYTE_WIDTH(size) (1 << ((size) - 1))
18210 ++#define ACPI_ACCESS_BIT_SHIFT 2
18211 ++#define ACPI_ACCESS_BYTE_SHIFT -1
18212 ++#define ACPI_ACCESS_BIT_MAX (31 - ACPI_ACCESS_BIT_SHIFT)
18213 ++#define ACPI_ACCESS_BYTE_MAX (31 - ACPI_ACCESS_BYTE_SHIFT)
18214 ++#define ACPI_ACCESS_BIT_DEFAULT (8 - ACPI_ACCESS_BIT_SHIFT)
18215 ++#define ACPI_ACCESS_BYTE_DEFAULT (8 - ACPI_ACCESS_BYTE_SHIFT)
18216 ++#define ACPI_ACCESS_BIT_WIDTH(size) (1 << ((size) + ACPI_ACCESS_BIT_SHIFT))
18217 ++#define ACPI_ACCESS_BYTE_WIDTH(size) (1 << ((size) + ACPI_ACCESS_BYTE_SHIFT))
18218 +
18219 + /*******************************************************************************
18220 + *
18221 +diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
18222 +index b80c65aba2493..2580e05a8ab67 100644
18223 +--- a/include/linux/blk-pm.h
18224 ++++ b/include/linux/blk-pm.h
18225 +@@ -14,7 +14,7 @@ extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
18226 + extern int blk_pre_runtime_suspend(struct request_queue *q);
18227 + extern void blk_post_runtime_suspend(struct request_queue *q, int err);
18228 + extern void blk_pre_runtime_resume(struct request_queue *q);
18229 +-extern void blk_post_runtime_resume(struct request_queue *q, int err);
18230 ++extern void blk_post_runtime_resume(struct request_queue *q);
18231 + extern void blk_set_runtime_active(struct request_queue *q);
18232 + #else
18233 + static inline void blk_pm_runtime_init(struct request_queue *q,
18234 +diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
18235 +index 6e330ff2f28df..391bc1480dfb1 100644
18236 +--- a/include/linux/bpf_verifier.h
18237 ++++ b/include/linux/bpf_verifier.h
18238 +@@ -367,6 +367,13 @@ static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
18239 + log->level == BPF_LOG_KERNEL);
18240 + }
18241 +
18242 ++static inline bool
18243 ++bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
18244 ++{
18245 ++ return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
18246 ++ log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
18247 ++}
18248 ++
18249 + #define BPF_MAX_SUBPROGS 256
18250 +
18251 + struct bpf_subprog_info {
18252 +diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
18253 +index 83a3ebff74560..8f87c1a6f3231 100644
18254 +--- a/include/linux/clocksource.h
18255 ++++ b/include/linux/clocksource.h
18256 +@@ -42,6 +42,8 @@ struct module;
18257 + * @shift: Cycle to nanosecond divisor (power of two)
18258 + * @max_idle_ns: Maximum idle time permitted by the clocksource (nsecs)
18259 + * @maxadj: Maximum adjustment value to mult (~11%)
18260 ++ * @uncertainty_margin: Maximum uncertainty in nanoseconds per half second.
18261 ++ * Zero says to use default WATCHDOG_THRESHOLD.
18262 + * @archdata: Optional arch-specific data
18263 + * @max_cycles: Maximum safe cycle value which won't overflow on
18264 + * multiplication
18265 +@@ -93,6 +95,7 @@ struct clocksource {
18266 + u32 shift;
18267 + u64 max_idle_ns;
18268 + u32 maxadj;
18269 ++ u32 uncertainty_margin;
18270 + #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA
18271 + struct arch_clocksource_data archdata;
18272 + #endif
18273 +diff --git a/include/linux/hid.h b/include/linux/hid.h
18274 +index fc56d53cc68bf..2ba33d708942c 100644
18275 +--- a/include/linux/hid.h
18276 ++++ b/include/linux/hid.h
18277 +@@ -345,6 +345,8 @@ struct hid_item {
18278 + /* BIT(9) reserved for backward compatibility, was NO_INIT_INPUT_REPORTS */
18279 + #define HID_QUIRK_ALWAYS_POLL BIT(10)
18280 + #define HID_QUIRK_INPUT_PER_APP BIT(11)
18281 ++#define HID_QUIRK_X_INVERT BIT(12)
18282 ++#define HID_QUIRK_Y_INVERT BIT(13)
18283 + #define HID_QUIRK_SKIP_OUTPUT_REPORTS BIT(16)
18284 + #define HID_QUIRK_SKIP_OUTPUT_REPORT_ID BIT(17)
18285 + #define HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP BIT(18)
18286 +diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
18287 +index 63b550403317a..c142a152d6a41 100644
18288 +--- a/include/linux/mmzone.h
18289 ++++ b/include/linux/mmzone.h
18290 +@@ -938,6 +938,15 @@ static inline int is_highmem_idx(enum zone_type idx)
18291 + #endif
18292 + }
18293 +
18294 ++#ifdef CONFIG_ZONE_DMA
18295 ++bool has_managed_dma(void);
18296 ++#else
18297 ++static inline bool has_managed_dma(void)
18298 ++{
18299 ++ return false;
18300 ++}
18301 ++#endif
18302 ++
18303 + /**
18304 + * is_highmem - helper function to quickly check if a struct zone is a
18305 + * highmem zone or not. This is an attempt to keep references
18306 +diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
18307 +index 161acd4ede448..30091ab5de287 100644
18308 +--- a/include/linux/pm_runtime.h
18309 ++++ b/include/linux/pm_runtime.h
18310 +@@ -58,6 +58,7 @@ extern void pm_runtime_get_suppliers(struct device *dev);
18311 + extern void pm_runtime_put_suppliers(struct device *dev);
18312 + extern void pm_runtime_new_link(struct device *dev);
18313 + extern void pm_runtime_drop_link(struct device_link *link);
18314 ++extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle);
18315 +
18316 + /**
18317 + * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter.
18318 +@@ -279,6 +280,8 @@ static inline void pm_runtime_get_suppliers(struct device *dev) {}
18319 + static inline void pm_runtime_put_suppliers(struct device *dev) {}
18320 + static inline void pm_runtime_new_link(struct device *dev) {}
18321 + static inline void pm_runtime_drop_link(struct device_link *link) {}
18322 ++static inline void pm_runtime_release_supplier(struct device_link *link,
18323 ++ bool check_idle) {}
18324 +
18325 + #endif /* !CONFIG_PM */
18326 +
18327 +diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
18328 +index bac79e817776c..4cbd413e71a3f 100644
18329 +--- a/include/net/inet_frag.h
18330 ++++ b/include/net/inet_frag.h
18331 +@@ -116,8 +116,15 @@ int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *net);
18332 +
18333 + static inline void fqdir_pre_exit(struct fqdir *fqdir)
18334 + {
18335 +- fqdir->high_thresh = 0; /* prevent creation of new frags */
18336 +- fqdir->dead = true;
18337 ++ /* Prevent creation of new frags.
18338 ++ * Pairs with READ_ONCE() in inet_frag_find().
18339 ++ */
18340 ++ WRITE_ONCE(fqdir->high_thresh, 0);
18341 ++
18342 ++ /* Pairs with READ_ONCE() in inet_frag_kill(), ip_expire()
18343 ++ * and ip6frag_expire_frag_queue().
18344 ++ */
18345 ++ WRITE_ONCE(fqdir->dead, true);
18346 + }
18347 + void fqdir_exit(struct fqdir *fqdir);
18348 +
18349 +diff --git a/include/net/ipv6_frag.h b/include/net/ipv6_frag.h
18350 +index 851029ecff13c..0a4779175a523 100644
18351 +--- a/include/net/ipv6_frag.h
18352 ++++ b/include/net/ipv6_frag.h
18353 +@@ -67,7 +67,8 @@ ip6frag_expire_frag_queue(struct net *net, struct frag_queue *fq)
18354 + struct sk_buff *head;
18355 +
18356 + rcu_read_lock();
18357 +- if (fq->q.fqdir->dead)
18358 ++ /* Paired with the WRITE_ONCE() in fqdir_pre_exit(). */
18359 ++ if (READ_ONCE(fq->q.fqdir->dead))
18360 + goto out_rcu_unlock;
18361 + spin_lock(&fq->q.lock);
18362 +
18363 +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
18364 +index 9226a84dcc14d..1042c449e7db5 100644
18365 +--- a/include/net/sch_generic.h
18366 ++++ b/include/net/sch_generic.h
18367 +@@ -1261,6 +1261,7 @@ struct psched_ratecfg {
18368 + u64 rate_bytes_ps; /* bytes per second */
18369 + u32 mult;
18370 + u16 overhead;
18371 ++ u16 mpu;
18372 + u8 linklayer;
18373 + u8 shift;
18374 + };
18375 +@@ -1270,6 +1271,9 @@ static inline u64 psched_l2t_ns(const struct psched_ratecfg *r,
18376 + {
18377 + len += r->overhead;
18378 +
18379 ++ if (len < r->mpu)
18380 ++ len = r->mpu;
18381 ++
18382 + if (unlikely(r->linklayer == TC_LINKLAYER_ATM))
18383 + return ((u64)(DIV_ROUND_UP(len,48)*53) * r->mult) >> r->shift;
18384 +
18385 +@@ -1292,6 +1296,7 @@ static inline void psched_ratecfg_getrate(struct tc_ratespec *res,
18386 + res->rate = min_t(u64, r->rate_bytes_ps, ~0U);
18387 +
18388 + res->overhead = r->overhead;
18389 ++ res->mpu = r->mpu;
18390 + res->linklayer = (r->linklayer & TC_LINKLAYER_MASK);
18391 + }
18392 +
18393 +diff --git a/include/net/xfrm.h b/include/net/xfrm.h
18394 +index 6232a5f048bde..337d29875e518 100644
18395 +--- a/include/net/xfrm.h
18396 ++++ b/include/net/xfrm.h
18397 +@@ -193,6 +193,11 @@ struct xfrm_state {
18398 + struct xfrm_algo_aead *aead;
18399 + const char *geniv;
18400 +
18401 ++ /* mapping change rate limiting */
18402 ++ __be16 new_mapping_sport;
18403 ++ u32 new_mapping; /* seconds */
18404 ++ u32 mapping_maxage; /* seconds for input SA */
18405 ++
18406 + /* Data for encapsulator */
18407 + struct xfrm_encap_tmpl *encap;
18408 + struct sock __rcu *encap_sk;
18409 +diff --git a/include/trace/events/cgroup.h b/include/trace/events/cgroup.h
18410 +index 7f42a3de59e6b..dd7d7c9efecdf 100644
18411 +--- a/include/trace/events/cgroup.h
18412 ++++ b/include/trace/events/cgroup.h
18413 +@@ -59,8 +59,8 @@ DECLARE_EVENT_CLASS(cgroup,
18414 +
18415 + TP_STRUCT__entry(
18416 + __field( int, root )
18417 +- __field( int, id )
18418 + __field( int, level )
18419 ++ __field( u64, id )
18420 + __string( path, path )
18421 + ),
18422 +
18423 +@@ -71,7 +71,7 @@ DECLARE_EVENT_CLASS(cgroup,
18424 + __assign_str(path, path);
18425 + ),
18426 +
18427 +- TP_printk("root=%d id=%d level=%d path=%s",
18428 ++ TP_printk("root=%d id=%llu level=%d path=%s",
18429 + __entry->root, __entry->id, __entry->level, __get_str(path))
18430 + );
18431 +
18432 +@@ -126,8 +126,8 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
18433 +
18434 + TP_STRUCT__entry(
18435 + __field( int, dst_root )
18436 +- __field( int, dst_id )
18437 + __field( int, dst_level )
18438 ++ __field( u64, dst_id )
18439 + __field( int, pid )
18440 + __string( dst_path, path )
18441 + __string( comm, task->comm )
18442 +@@ -142,7 +142,7 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
18443 + __assign_str(comm, task->comm);
18444 + ),
18445 +
18446 +- TP_printk("dst_root=%d dst_id=%d dst_level=%d dst_path=%s pid=%d comm=%s",
18447 ++ TP_printk("dst_root=%d dst_id=%llu dst_level=%d dst_path=%s pid=%d comm=%s",
18448 + __entry->dst_root, __entry->dst_id, __entry->dst_level,
18449 + __get_str(dst_path), __entry->pid, __get_str(comm))
18450 + );
18451 +@@ -171,8 +171,8 @@ DECLARE_EVENT_CLASS(cgroup_event,
18452 +
18453 + TP_STRUCT__entry(
18454 + __field( int, root )
18455 +- __field( int, id )
18456 + __field( int, level )
18457 ++ __field( u64, id )
18458 + __string( path, path )
18459 + __field( int, val )
18460 + ),
18461 +@@ -185,7 +185,7 @@ DECLARE_EVENT_CLASS(cgroup_event,
18462 + __entry->val = val;
18463 + ),
18464 +
18465 +- TP_printk("root=%d id=%d level=%d path=%s val=%d",
18466 ++ TP_printk("root=%d id=%llu level=%d path=%s val=%d",
18467 + __entry->root, __entry->id, __entry->level, __get_str(path),
18468 + __entry->val)
18469 + );
18470 +diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
18471 +index ffc6a5391bb7b..2290c98b47cf8 100644
18472 +--- a/include/uapi/linux/xfrm.h
18473 ++++ b/include/uapi/linux/xfrm.h
18474 +@@ -308,6 +308,7 @@ enum xfrm_attr_type_t {
18475 + XFRMA_SET_MARK, /* __u32 */
18476 + XFRMA_SET_MARK_MASK, /* __u32 */
18477 + XFRMA_IF_ID, /* __u32 */
18478 ++ XFRMA_MTIMER_THRESH, /* __u32 in seconds for input SA */
18479 + __XFRMA_MAX
18480 +
18481 + #define XFRMA_OUTPUT_MARK XFRMA_SET_MARK /* Compatibility */
18482 +diff --git a/kernel/audit.c b/kernel/audit.c
18483 +index d784000921da3..2a38cbaf3ddb7 100644
18484 +--- a/kernel/audit.c
18485 ++++ b/kernel/audit.c
18486 +@@ -1540,6 +1540,20 @@ static void audit_receive(struct sk_buff *skb)
18487 + nlh = nlmsg_next(nlh, &len);
18488 + }
18489 + audit_ctl_unlock();
18490 ++
18491 ++ /* can't block with the ctrl lock, so penalize the sender now */
18492 ++ if (audit_backlog_limit &&
18493 ++ (skb_queue_len(&audit_queue) > audit_backlog_limit)) {
18494 ++ DECLARE_WAITQUEUE(wait, current);
18495 ++
18496 ++ /* wake kauditd to try and flush the queue */
18497 ++ wake_up_interruptible(&kauditd_wait);
18498 ++
18499 ++ add_wait_queue_exclusive(&audit_backlog_wait, &wait);
18500 ++ set_current_state(TASK_UNINTERRUPTIBLE);
18501 ++ schedule_timeout(audit_backlog_wait_time);
18502 ++ remove_wait_queue(&audit_backlog_wait, &wait);
18503 ++ }
18504 + }
18505 +
18506 + /* Log information about who is connecting to the audit multicast socket */
18507 +@@ -1824,7 +1838,9 @@ struct audit_buffer *audit_log_start(struct audit_context *ctx, gfp_t gfp_mask,
18508 + * task_tgid_vnr() since auditd_pid is set in audit_receive_msg()
18509 + * using a PID anchored in the caller's namespace
18510 + * 2. generator holding the audit_cmd_mutex - we don't want to block
18511 +- * while holding the mutex */
18512 ++ * while holding the mutex, although we do penalize the sender
18513 ++ * later in audit_receive() when it is safe to block
18514 ++ */
18515 + if (!(auditd_test_task(current) || audit_ctl_owner_current())) {
18516 + long stime = audit_backlog_wait_time;
18517 +
18518 +diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
18519 +index aaf2fbaa0cc76..dc497eaf22663 100644
18520 +--- a/kernel/bpf/btf.c
18521 ++++ b/kernel/bpf/btf.c
18522 +@@ -4135,8 +4135,7 @@ static struct btf *btf_parse(void __user *btf_data, u32 btf_data_size,
18523 + log->len_total = log_size;
18524 +
18525 + /* log attributes have to be sane */
18526 +- if (log->len_total < 128 || log->len_total > UINT_MAX >> 8 ||
18527 +- !log->level || !log->ubuf) {
18528 ++ if (!bpf_verifier_log_attr_valid(log)) {
18529 + err = -EINVAL;
18530 + goto errout;
18531 + }
18532 +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
18533 +index b43c9de34a2c2..015bf2ba4a0b6 100644
18534 +--- a/kernel/bpf/verifier.c
18535 ++++ b/kernel/bpf/verifier.c
18536 +@@ -7725,15 +7725,15 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
18537 + {
18538 + if (reg_type_may_be_null(reg->type) && reg->id == id &&
18539 + !WARN_ON_ONCE(!reg->id)) {
18540 +- /* Old offset (both fixed and variable parts) should
18541 +- * have been known-zero, because we don't allow pointer
18542 +- * arithmetic on pointers that might be NULL.
18543 +- */
18544 + if (WARN_ON_ONCE(reg->smin_value || reg->smax_value ||
18545 + !tnum_equals_const(reg->var_off, 0) ||
18546 + reg->off)) {
18547 +- __mark_reg_known_zero(reg);
18548 +- reg->off = 0;
18549 ++ /* Old offset (both fixed and variable parts) should
18550 ++ * have been known-zero, because we don't allow pointer
18551 ++ * arithmetic on pointers that might be NULL. If we
18552 ++ * see this happening, don't convert the register.
18553 ++ */
18554 ++ return;
18555 + }
18556 + if (is_null) {
18557 + reg->type = SCALAR_VALUE;
18558 +@@ -12349,11 +12349,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
18559 + log->ubuf = (char __user *) (unsigned long) attr->log_buf;
18560 + log->len_total = attr->log_size;
18561 +
18562 +- ret = -EINVAL;
18563 + /* log attributes have to be sane */
18564 +- if (log->len_total < 128 || log->len_total > UINT_MAX >> 2 ||
18565 +- !log->level || !log->ubuf || log->level & ~BPF_LOG_MASK)
18566 ++ if (!bpf_verifier_log_attr_valid(log)) {
18567 ++ ret = -EINVAL;
18568 + goto err_unlock;
18569 ++ }
18570 + }
18571 +
18572 + if (IS_ERR(btf_vmlinux)) {
18573 +diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
18574 +index d4637f72239b4..b9082b572e0f8 100644
18575 +--- a/kernel/dma/pool.c
18576 ++++ b/kernel/dma/pool.c
18577 +@@ -206,7 +206,7 @@ static int __init dma_atomic_pool_init(void)
18578 + GFP_KERNEL);
18579 + if (!atomic_pool_kernel)
18580 + ret = -ENOMEM;
18581 +- if (IS_ENABLED(CONFIG_ZONE_DMA)) {
18582 ++ if (has_managed_dma()) {
18583 + atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
18584 + GFP_KERNEL | GFP_DMA);
18585 + if (!atomic_pool_dma)
18586 +@@ -229,7 +229,7 @@ static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
18587 + if (prev == NULL) {
18588 + if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
18589 + return atomic_pool_dma32;
18590 +- if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
18591 ++ if (atomic_pool_dma && (gfp & GFP_DMA))
18592 + return atomic_pool_dma;
18593 + return atomic_pool_kernel;
18594 + }
18595 +diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
18596 +index 0ffe185c1f46a..0dc16345e668c 100644
18597 +--- a/kernel/rcu/tree_exp.h
18598 ++++ b/kernel/rcu/tree_exp.h
18599 +@@ -387,6 +387,7 @@ retry_ipi:
18600 + continue;
18601 + }
18602 + if (get_cpu() == cpu) {
18603 ++ mask_ofl_test |= mask;
18604 + put_cpu();
18605 + continue;
18606 + }
18607 +diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
18608 +index 5a55d23004524..ca0eef7d3852b 100644
18609 +--- a/kernel/sched/cputime.c
18610 ++++ b/kernel/sched/cputime.c
18611 +@@ -147,10 +147,10 @@ void account_guest_time(struct task_struct *p, u64 cputime)
18612 +
18613 + /* Add guest time to cpustat. */
18614 + if (task_nice(p) > 0) {
18615 +- cpustat[CPUTIME_NICE] += cputime;
18616 ++ task_group_account_field(p, CPUTIME_NICE, cputime);
18617 + cpustat[CPUTIME_GUEST_NICE] += cputime;
18618 + } else {
18619 +- cpustat[CPUTIME_USER] += cputime;
18620 ++ task_group_account_field(p, CPUTIME_USER, cputime);
18621 + cpustat[CPUTIME_GUEST] += cputime;
18622 + }
18623 + }
18624 +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
18625 +index c004e3b89c324..2a33cb5a10e59 100644
18626 +--- a/kernel/sched/fair.c
18627 ++++ b/kernel/sched/fair.c
18628 +@@ -6284,8 +6284,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
18629 + * pattern is IO completions.
18630 + */
18631 + if (is_per_cpu_kthread(current) &&
18632 ++ in_task() &&
18633 + prev == smp_processor_id() &&
18634 +- this_rq()->nr_running <= 1) {
18635 ++ this_rq()->nr_running <= 1 &&
18636 ++ asym_fits_capacity(task_util, prev)) {
18637 + return prev;
18638 + }
18639 +
18640 +diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
18641 +index b5cf418e2e3fe..41b14d9242039 100644
18642 +--- a/kernel/sched/rt.c
18643 ++++ b/kernel/sched/rt.c
18644 +@@ -52,11 +52,8 @@ void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
18645 + rt_b->rt_period_timer.function = sched_rt_period_timer;
18646 + }
18647 +
18648 +-static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
18649 ++static inline void do_start_rt_bandwidth(struct rt_bandwidth *rt_b)
18650 + {
18651 +- if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
18652 +- return;
18653 +-
18654 + raw_spin_lock(&rt_b->rt_runtime_lock);
18655 + if (!rt_b->rt_period_active) {
18656 + rt_b->rt_period_active = 1;
18657 +@@ -75,6 +72,14 @@ static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
18658 + raw_spin_unlock(&rt_b->rt_runtime_lock);
18659 + }
18660 +
18661 ++static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
18662 ++{
18663 ++ if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
18664 ++ return;
18665 ++
18666 ++ do_start_rt_bandwidth(rt_b);
18667 ++}
18668 ++
18669 + void init_rt_rq(struct rt_rq *rt_rq)
18670 + {
18671 + struct rt_prio_array *array;
18672 +@@ -1022,13 +1027,17 @@ static void update_curr_rt(struct rq *rq)
18673 +
18674 + for_each_sched_rt_entity(rt_se) {
18675 + struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
18676 ++ int exceeded;
18677 +
18678 + if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
18679 + raw_spin_lock(&rt_rq->rt_runtime_lock);
18680 + rt_rq->rt_time += delta_exec;
18681 +- if (sched_rt_runtime_exceeded(rt_rq))
18682 ++ exceeded = sched_rt_runtime_exceeded(rt_rq);
18683 ++ if (exceeded)
18684 + resched_curr(rq);
18685 + raw_spin_unlock(&rt_rq->rt_runtime_lock);
18686 ++ if (exceeded)
18687 ++ do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq));
18688 + }
18689 + }
18690 + }
18691 +@@ -2727,8 +2736,12 @@ static int sched_rt_global_validate(void)
18692 +
18693 + static void sched_rt_do_global(void)
18694 + {
18695 ++ unsigned long flags;
18696 ++
18697 ++ raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
18698 + def_rt_bandwidth.rt_runtime = global_rt_runtime();
18699 + def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
18700 ++ raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
18701 + }
18702 +
18703 + int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
18704 +diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
18705 +index 74492f08660c4..e34ceb91f4c5a 100644
18706 +--- a/kernel/time/clocksource.c
18707 ++++ b/kernel/time/clocksource.c
18708 +@@ -93,6 +93,20 @@ static char override_name[CS_NAME_LEN];
18709 + static int finished_booting;
18710 + static u64 suspend_start;
18711 +
18712 ++/*
18713 ++ * Threshold: 0.0312s, when doubled: 0.0625s.
18714 ++ * Also a default for cs->uncertainty_margin when registering clocks.
18715 ++ */
18716 ++#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 5)
18717 ++
18718 ++/*
18719 ++ * Maximum permissible delay between two readouts of the watchdog
18720 ++ * clocksource surrounding a read of the clocksource being validated.
18721 ++ * This delay could be due to SMIs, NMIs, or to VCPU preemptions. Used as
18722 ++ * a lower bound for cs->uncertainty_margin values when registering clocks.
18723 ++ */
18724 ++#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
18725 ++
18726 + #ifdef CONFIG_CLOCKSOURCE_WATCHDOG
18727 + static void clocksource_watchdog_work(struct work_struct *work);
18728 + static void clocksource_select(void);
18729 +@@ -119,17 +133,9 @@ static int clocksource_watchdog_kthread(void *data);
18730 + static void __clocksource_change_rating(struct clocksource *cs, int rating);
18731 +
18732 + /*
18733 +- * Interval: 0.5sec Threshold: 0.0625s
18734 ++ * Interval: 0.5sec.
18735 + */
18736 + #define WATCHDOG_INTERVAL (HZ >> 1)
18737 +-#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
18738 +-
18739 +-/*
18740 +- * Maximum permissible delay between two readouts of the watchdog
18741 +- * clocksource surrounding a read of the clocksource being validated.
18742 +- * This delay could be due to SMIs, NMIs, or to VCPU preemptions.
18743 +- */
18744 +-#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
18745 +
18746 + static void clocksource_watchdog_work(struct work_struct *work)
18747 + {
18748 +@@ -194,17 +200,24 @@ void clocksource_mark_unstable(struct clocksource *cs)
18749 + static ulong max_cswd_read_retries = 3;
18750 + module_param(max_cswd_read_retries, ulong, 0644);
18751 +
18752 +-static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
18753 ++enum wd_read_status {
18754 ++ WD_READ_SUCCESS,
18755 ++ WD_READ_UNSTABLE,
18756 ++ WD_READ_SKIP
18757 ++};
18758 ++
18759 ++static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
18760 + {
18761 + unsigned int nretries;
18762 +- u64 wd_end, wd_delta;
18763 +- int64_t wd_delay;
18764 ++ u64 wd_end, wd_end2, wd_delta;
18765 ++ int64_t wd_delay, wd_seq_delay;
18766 +
18767 + for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
18768 + local_irq_disable();
18769 + *wdnow = watchdog->read(watchdog);
18770 + *csnow = cs->read(cs);
18771 + wd_end = watchdog->read(watchdog);
18772 ++ wd_end2 = watchdog->read(watchdog);
18773 + local_irq_enable();
18774 +
18775 + wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
18776 +@@ -215,13 +228,34 @@ static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
18777 + pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
18778 + smp_processor_id(), watchdog->name, nretries);
18779 + }
18780 +- return true;
18781 ++ return WD_READ_SUCCESS;
18782 + }
18783 ++
18784 ++ /*
18785 ++ * Now compute delay in consecutive watchdog read to see if
18786 ++ * there is too much external interferences that cause
18787 ++ * significant delay in reading both clocksource and watchdog.
18788 ++ *
18789 ++ * If consecutive WD read-back delay > WATCHDOG_MAX_SKEW/2,
18790 ++ * report system busy, reinit the watchdog and skip the current
18791 ++ * watchdog test.
18792 ++ */
18793 ++ wd_delta = clocksource_delta(wd_end2, wd_end, watchdog->mask);
18794 ++ wd_seq_delay = clocksource_cyc2ns(wd_delta, watchdog->mult, watchdog->shift);
18795 ++ if (wd_seq_delay > WATCHDOG_MAX_SKEW/2)
18796 ++ goto skip_test;
18797 + }
18798 +
18799 + pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
18800 + smp_processor_id(), watchdog->name, wd_delay, nretries);
18801 +- return false;
18802 ++ return WD_READ_UNSTABLE;
18803 ++
18804 ++skip_test:
18805 ++ pr_info("timekeeping watchdog on CPU%d: %s wd-wd read-back delay of %lldns\n",
18806 ++ smp_processor_id(), watchdog->name, wd_seq_delay);
18807 ++ pr_info("wd-%s-wd read-back delay of %lldns, clock-skew test skipped!\n",
18808 ++ cs->name, wd_delay);
18809 ++ return WD_READ_SKIP;
18810 + }
18811 +
18812 + static u64 csnow_mid;
18813 +@@ -284,6 +318,8 @@ static void clocksource_watchdog(struct timer_list *unused)
18814 + int next_cpu, reset_pending;
18815 + int64_t wd_nsec, cs_nsec;
18816 + struct clocksource *cs;
18817 ++ enum wd_read_status read_ret;
18818 ++ u32 md;
18819 +
18820 + spin_lock(&watchdog_lock);
18821 + if (!watchdog_running)
18822 +@@ -300,9 +336,12 @@ static void clocksource_watchdog(struct timer_list *unused)
18823 + continue;
18824 + }
18825 +
18826 +- if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
18827 +- /* Clock readout unreliable, so give it up. */
18828 +- __clocksource_unstable(cs);
18829 ++ read_ret = cs_watchdog_read(cs, &csnow, &wdnow);
18830 ++
18831 ++ if (read_ret != WD_READ_SUCCESS) {
18832 ++ if (read_ret == WD_READ_UNSTABLE)
18833 ++ /* Clock readout unreliable, so give it up. */
18834 ++ __clocksource_unstable(cs);
18835 + continue;
18836 + }
18837 +
18838 +@@ -330,7 +369,8 @@ static void clocksource_watchdog(struct timer_list *unused)
18839 + continue;
18840 +
18841 + /* Check the deviation from the watchdog clocksource. */
18842 +- if (abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) {
18843 ++ md = cs->uncertainty_margin + watchdog->uncertainty_margin;
18844 ++ if (abs(cs_nsec - wd_nsec) > md) {
18845 + pr_warn("timekeeping watchdog on CPU%d: Marking clocksource '%s' as unstable because the skew is too large:\n",
18846 + smp_processor_id(), cs->name);
18847 + pr_warn(" '%s' wd_now: %llx wd_last: %llx mask: %llx\n",
18848 +@@ -985,6 +1025,26 @@ void __clocksource_update_freq_scale(struct clocksource *cs, u32 scale, u32 freq
18849 + clocks_calc_mult_shift(&cs->mult, &cs->shift, freq,
18850 + NSEC_PER_SEC / scale, sec * scale);
18851 + }
18852 ++
18853 ++ /*
18854 ++ * If the uncertainty margin is not specified, calculate it.
18855 ++ * If both scale and freq are non-zero, calculate the clock
18856 ++ * period, but bound below at 2*WATCHDOG_MAX_SKEW. However,
18857 ++ * if either of scale or freq is zero, be very conservative and
18858 ++ * take the tens-of-milliseconds WATCHDOG_THRESHOLD value for the
18859 ++ * uncertainty margin. Allow stupidly small uncertainty margins
18860 ++ * to be specified by the caller for testing purposes, but warn
18861 ++ * to discourage production use of this capability.
18862 ++ */
18863 ++ if (scale && freq && !cs->uncertainty_margin) {
18864 ++ cs->uncertainty_margin = NSEC_PER_SEC / (scale * freq);
18865 ++ if (cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW)
18866 ++ cs->uncertainty_margin = 2 * WATCHDOG_MAX_SKEW;
18867 ++ } else if (!cs->uncertainty_margin) {
18868 ++ cs->uncertainty_margin = WATCHDOG_THRESHOLD;
18869 ++ }
18870 ++ WARN_ON_ONCE(cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW);
18871 ++
18872 + /*
18873 + * Ensure clocksources that have large 'mult' values don't overflow
18874 + * when adjusted.
18875 +diff --git a/kernel/time/jiffies.c b/kernel/time/jiffies.c
18876 +index eddcf49704445..65409abcca8e1 100644
18877 +--- a/kernel/time/jiffies.c
18878 ++++ b/kernel/time/jiffies.c
18879 +@@ -49,13 +49,14 @@ static u64 jiffies_read(struct clocksource *cs)
18880 + * for "tick-less" systems.
18881 + */
18882 + static struct clocksource clocksource_jiffies = {
18883 +- .name = "jiffies",
18884 +- .rating = 1, /* lowest valid rating*/
18885 +- .read = jiffies_read,
18886 +- .mask = CLOCKSOURCE_MASK(32),
18887 +- .mult = TICK_NSEC << JIFFIES_SHIFT, /* details above */
18888 +- .shift = JIFFIES_SHIFT,
18889 +- .max_cycles = 10,
18890 ++ .name = "jiffies",
18891 ++ .rating = 1, /* lowest valid rating*/
18892 ++ .uncertainty_margin = 32 * NSEC_PER_MSEC,
18893 ++ .read = jiffies_read,
18894 ++ .mask = CLOCKSOURCE_MASK(32),
18895 ++ .mult = TICK_NSEC << JIFFIES_SHIFT, /* details above */
18896 ++ .shift = JIFFIES_SHIFT,
18897 ++ .max_cycles = 10,
18898 + };
18899 +
18900 + __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock);
18901 +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
18902 +index ba644760f5076..a9e074769881f 100644
18903 +--- a/kernel/trace/bpf_trace.c
18904 ++++ b/kernel/trace/bpf_trace.c
18905 +@@ -1517,9 +1517,6 @@ static const struct bpf_func_proto bpf_perf_prog_read_value_proto = {
18906 + BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
18907 + void *, buf, u32, size, u64, flags)
18908 + {
18909 +-#ifndef CONFIG_X86
18910 +- return -ENOENT;
18911 +-#else
18912 + static const u32 br_entry_size = sizeof(struct perf_branch_entry);
18913 + struct perf_branch_stack *br_stack = ctx->data->br_stack;
18914 + u32 to_copy;
18915 +@@ -1528,7 +1525,7 @@ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
18916 + return -EINVAL;
18917 +
18918 + if (unlikely(!br_stack))
18919 +- return -EINVAL;
18920 ++ return -ENOENT;
18921 +
18922 + if (flags & BPF_F_GET_BRANCH_RECORDS_SIZE)
18923 + return br_stack->nr * br_entry_size;
18924 +@@ -1540,7 +1537,6 @@ BPF_CALL_4(bpf_read_branch_records, struct bpf_perf_event_data_kern *, ctx,
18925 + memcpy(buf, br_stack->entries, to_copy);
18926 +
18927 + return to_copy;
18928 +-#endif
18929 + }
18930 +
18931 + static const struct bpf_func_proto bpf_read_branch_records_proto = {
18932 +diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
18933 +index 552dbc9d52260..d8a9fc7941266 100644
18934 +--- a/kernel/trace/trace_kprobe.c
18935 ++++ b/kernel/trace/trace_kprobe.c
18936 +@@ -1183,15 +1183,18 @@ static int probes_profile_seq_show(struct seq_file *m, void *v)
18937 + {
18938 + struct dyn_event *ev = v;
18939 + struct trace_kprobe *tk;
18940 ++ unsigned long nmissed;
18941 +
18942 + if (!is_trace_kprobe(ev))
18943 + return 0;
18944 +
18945 + tk = to_trace_kprobe(ev);
18946 ++ nmissed = trace_kprobe_is_return(tk) ?
18947 ++ tk->rp.kp.nmissed + tk->rp.nmissed : tk->rp.kp.nmissed;
18948 + seq_printf(m, " %-44s %15lu %15lu\n",
18949 + trace_probe_name(&tk->tp),
18950 + trace_kprobe_nhit(tk),
18951 +- tk->rp.kp.nmissed);
18952 ++ nmissed);
18953 +
18954 + return 0;
18955 + }
18956 +diff --git a/kernel/tsacct.c b/kernel/tsacct.c
18957 +index 257ffb993ea23..fd2f7a052fdd9 100644
18958 +--- a/kernel/tsacct.c
18959 ++++ b/kernel/tsacct.c
18960 +@@ -38,11 +38,10 @@ void bacct_add_tsk(struct user_namespace *user_ns,
18961 + stats->ac_btime = clamp_t(time64_t, btime, 0, U32_MAX);
18962 + stats->ac_btime64 = btime;
18963 +
18964 +- if (thread_group_leader(tsk)) {
18965 ++ if (tsk->flags & PF_EXITING)
18966 + stats->ac_exitcode = tsk->exit_code;
18967 +- if (tsk->flags & PF_FORKNOEXEC)
18968 +- stats->ac_flag |= AFORK;
18969 +- }
18970 ++ if (thread_group_leader(tsk) && (tsk->flags & PF_FORKNOEXEC))
18971 ++ stats->ac_flag |= AFORK;
18972 + if (tsk->flags & PF_SUPERPRIV)
18973 + stats->ac_flag |= ASU;
18974 + if (tsk->flags & PF_DUMPCORE)
18975 +diff --git a/lib/mpi/mpi-mod.c b/lib/mpi/mpi-mod.c
18976 +index 47bc59edd4ff9..54fcc01564d9d 100644
18977 +--- a/lib/mpi/mpi-mod.c
18978 ++++ b/lib/mpi/mpi-mod.c
18979 +@@ -40,6 +40,8 @@ mpi_barrett_t mpi_barrett_init(MPI m, int copy)
18980 +
18981 + mpi_normalize(m);
18982 + ctx = kcalloc(1, sizeof(*ctx), GFP_KERNEL);
18983 ++ if (!ctx)
18984 ++ return NULL;
18985 +
18986 + if (copy) {
18987 + ctx->m = mpi_copy(m);
18988 +diff --git a/lib/test_hmm.c b/lib/test_hmm.c
18989 +index 80a78877bd939..a85613068d601 100644
18990 +--- a/lib/test_hmm.c
18991 ++++ b/lib/test_hmm.c
18992 +@@ -965,9 +965,33 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
18993 + return 0;
18994 + }
18995 +
18996 ++static int dmirror_fops_mmap(struct file *file, struct vm_area_struct *vma)
18997 ++{
18998 ++ unsigned long addr;
18999 ++
19000 ++ for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
19001 ++ struct page *page;
19002 ++ int ret;
19003 ++
19004 ++ page = alloc_page(GFP_KERNEL | __GFP_ZERO);
19005 ++ if (!page)
19006 ++ return -ENOMEM;
19007 ++
19008 ++ ret = vm_insert_page(vma, addr, page);
19009 ++ if (ret) {
19010 ++ __free_page(page);
19011 ++ return ret;
19012 ++ }
19013 ++ put_page(page);
19014 ++ }
19015 ++
19016 ++ return 0;
19017 ++}
19018 ++
19019 + static const struct file_operations dmirror_fops = {
19020 + .open = dmirror_fops_open,
19021 + .release = dmirror_fops_release,
19022 ++ .mmap = dmirror_fops_mmap,
19023 + .unlocked_ioctl = dmirror_fops_unlocked_ioctl,
19024 + .llseek = default_llseek,
19025 + .owner = THIS_MODULE,
19026 +diff --git a/lib/test_meminit.c b/lib/test_meminit.c
19027 +index e4f706a404b3a..3ca717f113977 100644
19028 +--- a/lib/test_meminit.c
19029 ++++ b/lib/test_meminit.c
19030 +@@ -337,6 +337,7 @@ static int __init do_kmem_cache_size_bulk(int size, int *total_failures)
19031 + if (num)
19032 + kmem_cache_free_bulk(c, num, objects);
19033 + }
19034 ++ kmem_cache_destroy(c);
19035 + *total_failures += fail;
19036 + return 1;
19037 + }
19038 +diff --git a/mm/hmm.c b/mm/hmm.c
19039 +index fb617054f9631..cbe9d0c666504 100644
19040 +--- a/mm/hmm.c
19041 ++++ b/mm/hmm.c
19042 +@@ -296,7 +296,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
19043 + * Since each architecture defines a struct page for the zero page, just
19044 + * fall through and treat it like a normal page.
19045 + */
19046 +- if (pte_special(pte) && !pte_devmap(pte) &&
19047 ++ if (!vm_normal_page(walk->vma, addr, pte) &&
19048 ++ !pte_devmap(pte) &&
19049 + !is_zero_pfn(pte_pfn(pte))) {
19050 + if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
19051 + pte_unmap(ptep);
19052 +@@ -514,7 +515,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
19053 + struct hmm_range *range = hmm_vma_walk->range;
19054 + struct vm_area_struct *vma = walk->vma;
19055 +
19056 +- if (!(vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) &&
19057 ++ if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
19058 + vma->vm_flags & VM_READ)
19059 + return 0;
19060 +
19061 +diff --git a/mm/page_alloc.c b/mm/page_alloc.c
19062 +index e8e0f1cec8b04..c63656c42e288 100644
19063 +--- a/mm/page_alloc.c
19064 ++++ b/mm/page_alloc.c
19065 +@@ -3964,7 +3964,9 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
19066 + va_list args;
19067 + static DEFINE_RATELIMIT_STATE(nopage_rs, 10*HZ, 1);
19068 +
19069 +- if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
19070 ++ if ((gfp_mask & __GFP_NOWARN) ||
19071 ++ !__ratelimit(&nopage_rs) ||
19072 ++ ((gfp_mask & __GFP_DMA) && !has_managed_dma()))
19073 + return;
19074 +
19075 + va_start(args, fmt);
19076 +@@ -8903,3 +8905,18 @@ bool take_page_off_buddy(struct page *page)
19077 + return ret;
19078 + }
19079 + #endif
19080 ++
19081 ++#ifdef CONFIG_ZONE_DMA
19082 ++bool has_managed_dma(void)
19083 ++{
19084 ++ struct pglist_data *pgdat;
19085 ++
19086 ++ for_each_online_pgdat(pgdat) {
19087 ++ struct zone *zone = &pgdat->node_zones[ZONE_DMA];
19088 ++
19089 ++ if (managed_zone(zone))
19090 ++ return true;
19091 ++ }
19092 ++ return false;
19093 ++}
19094 ++#endif /* CONFIG_ZONE_DMA */
19095 +diff --git a/mm/shmem.c b/mm/shmem.c
19096 +index ae8adca3b56d1..d3d8c5e7a296b 100644
19097 +--- a/mm/shmem.c
19098 ++++ b/mm/shmem.c
19099 +@@ -527,7 +527,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
19100 + struct shmem_inode_info *info;
19101 + struct page *page;
19102 + unsigned long batch = sc ? sc->nr_to_scan : 128;
19103 +- int removed = 0, split = 0;
19104 ++ int split = 0;
19105 +
19106 + if (list_empty(&sbinfo->shrinklist))
19107 + return SHRINK_STOP;
19108 +@@ -542,7 +542,6 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
19109 + /* inode is about to be evicted */
19110 + if (!inode) {
19111 + list_del_init(&info->shrinklist);
19112 +- removed++;
19113 + goto next;
19114 + }
19115 +
19116 +@@ -550,12 +549,12 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
19117 + if (round_up(inode->i_size, PAGE_SIZE) ==
19118 + round_up(inode->i_size, HPAGE_PMD_SIZE)) {
19119 + list_move(&info->shrinklist, &to_remove);
19120 +- removed++;
19121 + goto next;
19122 + }
19123 +
19124 + list_move(&info->shrinklist, &list);
19125 + next:
19126 ++ sbinfo->shrinklist_len--;
19127 + if (!--batch)
19128 + break;
19129 + }
19130 +@@ -575,7 +574,7 @@ next:
19131 + inode = &info->vfs_inode;
19132 +
19133 + if (nr_to_split && split >= nr_to_split)
19134 +- goto leave;
19135 ++ goto move_back;
19136 +
19137 + page = find_get_page(inode->i_mapping,
19138 + (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT);
19139 +@@ -589,38 +588,44 @@ next:
19140 + }
19141 +
19142 + /*
19143 +- * Leave the inode on the list if we failed to lock
19144 +- * the page at this time.
19145 ++ * Move the inode on the list back to shrinklist if we failed
19146 ++ * to lock the page at this time.
19147 + *
19148 + * Waiting for the lock may lead to deadlock in the
19149 + * reclaim path.
19150 + */
19151 + if (!trylock_page(page)) {
19152 + put_page(page);
19153 +- goto leave;
19154 ++ goto move_back;
19155 + }
19156 +
19157 + ret = split_huge_page(page);
19158 + unlock_page(page);
19159 + put_page(page);
19160 +
19161 +- /* If split failed leave the inode on the list */
19162 ++ /* If split failed move the inode on the list back to shrinklist */
19163 + if (ret)
19164 +- goto leave;
19165 ++ goto move_back;
19166 +
19167 + split++;
19168 + drop:
19169 + list_del_init(&info->shrinklist);
19170 +- removed++;
19171 +-leave:
19172 ++ goto put;
19173 ++move_back:
19174 ++ /*
19175 ++ * Make sure the inode is either on the global list or deleted
19176 ++ * from any local list before iput() since it could be deleted
19177 ++ * in another thread once we put the inode (then the local list
19178 ++ * is corrupted).
19179 ++ */
19180 ++ spin_lock(&sbinfo->shrinklist_lock);
19181 ++ list_move(&info->shrinklist, &sbinfo->shrinklist);
19182 ++ sbinfo->shrinklist_len++;
19183 ++ spin_unlock(&sbinfo->shrinklist_lock);
19184 ++put:
19185 + iput(inode);
19186 + }
19187 +
19188 +- spin_lock(&sbinfo->shrinklist_lock);
19189 +- list_splice_tail(&list, &sbinfo->shrinklist);
19190 +- sbinfo->shrinklist_len -= removed;
19191 +- spin_unlock(&sbinfo->shrinklist_lock);
19192 +-
19193 + return split;
19194 + }
19195 +
19196 +diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
19197 +index 22278807b3f36..5e84dce5ff7ae 100644
19198 +--- a/net/ax25/af_ax25.c
19199 ++++ b/net/ax25/af_ax25.c
19200 +@@ -536,7 +536,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
19201 + ax25_cb *ax25;
19202 + struct net_device *dev;
19203 + char devname[IFNAMSIZ];
19204 +- unsigned long opt;
19205 ++ unsigned int opt;
19206 + int res = 0;
19207 +
19208 + if (level != SOL_AX25)
19209 +@@ -568,7 +568,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
19210 + break;
19211 +
19212 + case AX25_T1:
19213 +- if (opt < 1 || opt > ULONG_MAX / HZ) {
19214 ++ if (opt < 1 || opt > UINT_MAX / HZ) {
19215 + res = -EINVAL;
19216 + break;
19217 + }
19218 +@@ -577,7 +577,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
19219 + break;
19220 +
19221 + case AX25_T2:
19222 +- if (opt < 1 || opt > ULONG_MAX / HZ) {
19223 ++ if (opt < 1 || opt > UINT_MAX / HZ) {
19224 + res = -EINVAL;
19225 + break;
19226 + }
19227 +@@ -593,7 +593,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
19228 + break;
19229 +
19230 + case AX25_T3:
19231 +- if (opt < 1 || opt > ULONG_MAX / HZ) {
19232 ++ if (opt < 1 || opt > UINT_MAX / HZ) {
19233 + res = -EINVAL;
19234 + break;
19235 + }
19236 +@@ -601,7 +601,7 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
19237 + break;
19238 +
19239 + case AX25_IDLE:
19240 +- if (opt > ULONG_MAX / (60 * HZ)) {
19241 ++ if (opt > UINT_MAX / (60 * HZ)) {
19242 + res = -EINVAL;
19243 + break;
19244 + }
19245 +diff --git a/net/batman-adv/netlink.c b/net/batman-adv/netlink.c
19246 +index c7a55647b520e..121459704b069 100644
19247 +--- a/net/batman-adv/netlink.c
19248 ++++ b/net/batman-adv/netlink.c
19249 +@@ -1361,21 +1361,21 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
19250 + {
19251 + .cmd = BATADV_CMD_TP_METER,
19252 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19253 +- .flags = GENL_ADMIN_PERM,
19254 ++ .flags = GENL_UNS_ADMIN_PERM,
19255 + .doit = batadv_netlink_tp_meter_start,
19256 + .internal_flags = BATADV_FLAG_NEED_MESH,
19257 + },
19258 + {
19259 + .cmd = BATADV_CMD_TP_METER_CANCEL,
19260 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19261 +- .flags = GENL_ADMIN_PERM,
19262 ++ .flags = GENL_UNS_ADMIN_PERM,
19263 + .doit = batadv_netlink_tp_meter_cancel,
19264 + .internal_flags = BATADV_FLAG_NEED_MESH,
19265 + },
19266 + {
19267 + .cmd = BATADV_CMD_GET_ROUTING_ALGOS,
19268 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19269 +- .flags = GENL_ADMIN_PERM,
19270 ++ .flags = GENL_UNS_ADMIN_PERM,
19271 + .dumpit = batadv_algo_dump,
19272 + },
19273 + {
19274 +@@ -1390,68 +1390,68 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
19275 + {
19276 + .cmd = BATADV_CMD_GET_TRANSTABLE_LOCAL,
19277 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19278 +- .flags = GENL_ADMIN_PERM,
19279 ++ .flags = GENL_UNS_ADMIN_PERM,
19280 + .dumpit = batadv_tt_local_dump,
19281 + },
19282 + {
19283 + .cmd = BATADV_CMD_GET_TRANSTABLE_GLOBAL,
19284 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19285 +- .flags = GENL_ADMIN_PERM,
19286 ++ .flags = GENL_UNS_ADMIN_PERM,
19287 + .dumpit = batadv_tt_global_dump,
19288 + },
19289 + {
19290 + .cmd = BATADV_CMD_GET_ORIGINATORS,
19291 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19292 +- .flags = GENL_ADMIN_PERM,
19293 ++ .flags = GENL_UNS_ADMIN_PERM,
19294 + .dumpit = batadv_orig_dump,
19295 + },
19296 + {
19297 + .cmd = BATADV_CMD_GET_NEIGHBORS,
19298 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19299 +- .flags = GENL_ADMIN_PERM,
19300 ++ .flags = GENL_UNS_ADMIN_PERM,
19301 + .dumpit = batadv_hardif_neigh_dump,
19302 + },
19303 + {
19304 + .cmd = BATADV_CMD_GET_GATEWAYS,
19305 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19306 +- .flags = GENL_ADMIN_PERM,
19307 ++ .flags = GENL_UNS_ADMIN_PERM,
19308 + .dumpit = batadv_gw_dump,
19309 + },
19310 + {
19311 + .cmd = BATADV_CMD_GET_BLA_CLAIM,
19312 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19313 +- .flags = GENL_ADMIN_PERM,
19314 ++ .flags = GENL_UNS_ADMIN_PERM,
19315 + .dumpit = batadv_bla_claim_dump,
19316 + },
19317 + {
19318 + .cmd = BATADV_CMD_GET_BLA_BACKBONE,
19319 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19320 +- .flags = GENL_ADMIN_PERM,
19321 ++ .flags = GENL_UNS_ADMIN_PERM,
19322 + .dumpit = batadv_bla_backbone_dump,
19323 + },
19324 + {
19325 + .cmd = BATADV_CMD_GET_DAT_CACHE,
19326 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19327 +- .flags = GENL_ADMIN_PERM,
19328 ++ .flags = GENL_UNS_ADMIN_PERM,
19329 + .dumpit = batadv_dat_cache_dump,
19330 + },
19331 + {
19332 + .cmd = BATADV_CMD_GET_MCAST_FLAGS,
19333 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19334 +- .flags = GENL_ADMIN_PERM,
19335 ++ .flags = GENL_UNS_ADMIN_PERM,
19336 + .dumpit = batadv_mcast_flags_dump,
19337 + },
19338 + {
19339 + .cmd = BATADV_CMD_SET_MESH,
19340 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19341 +- .flags = GENL_ADMIN_PERM,
19342 ++ .flags = GENL_UNS_ADMIN_PERM,
19343 + .doit = batadv_netlink_set_mesh,
19344 + .internal_flags = BATADV_FLAG_NEED_MESH,
19345 + },
19346 + {
19347 + .cmd = BATADV_CMD_SET_HARDIF,
19348 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19349 +- .flags = GENL_ADMIN_PERM,
19350 ++ .flags = GENL_UNS_ADMIN_PERM,
19351 + .doit = batadv_netlink_set_hardif,
19352 + .internal_flags = BATADV_FLAG_NEED_MESH |
19353 + BATADV_FLAG_NEED_HARDIF,
19354 +@@ -1467,7 +1467,7 @@ static const struct genl_small_ops batadv_netlink_ops[] = {
19355 + {
19356 + .cmd = BATADV_CMD_SET_VLAN,
19357 + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
19358 +- .flags = GENL_ADMIN_PERM,
19359 ++ .flags = GENL_UNS_ADMIN_PERM,
19360 + .doit = batadv_netlink_set_vlan,
19361 + .internal_flags = BATADV_FLAG_NEED_MESH |
19362 + BATADV_FLAG_NEED_VLAN,
19363 +diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
19364 +index 0a2d78e811cf5..83eb84e8e688f 100644
19365 +--- a/net/bluetooth/cmtp/core.c
19366 ++++ b/net/bluetooth/cmtp/core.c
19367 +@@ -501,9 +501,7 @@ static int __init cmtp_init(void)
19368 + {
19369 + BT_INFO("CMTP (CAPI Emulation) ver %s", VERSION);
19370 +
19371 +- cmtp_init_sockets();
19372 +-
19373 +- return 0;
19374 ++ return cmtp_init_sockets();
19375 + }
19376 +
19377 + static void __exit cmtp_exit(void)
19378 +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
19379 +index 2ad66f64879f1..2e7998bad133b 100644
19380 +--- a/net/bluetooth/hci_core.c
19381 ++++ b/net/bluetooth/hci_core.c
19382 +@@ -3810,6 +3810,7 @@ int hci_register_dev(struct hci_dev *hdev)
19383 + return id;
19384 +
19385 + err_wqueue:
19386 ++ debugfs_remove_recursive(hdev->debugfs);
19387 + destroy_workqueue(hdev->workqueue);
19388 + destroy_workqueue(hdev->req_workqueue);
19389 + err:
19390 +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
19391 +index 9f52145bb7b76..7ffcca9ae82a1 100644
19392 +--- a/net/bluetooth/hci_event.c
19393 ++++ b/net/bluetooth/hci_event.c
19394 +@@ -5661,7 +5661,8 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
19395 + struct hci_ev_le_advertising_info *ev = ptr;
19396 + s8 rssi;
19397 +
19398 +- if (ev->length <= HCI_MAX_AD_LENGTH) {
19399 ++ if (ev->length <= HCI_MAX_AD_LENGTH &&
19400 ++ ev->data + ev->length <= skb_tail_pointer(skb)) {
19401 + rssi = ev->data[ev->length];
19402 + process_adv_report(hdev, ev->evt_type, &ev->bdaddr,
19403 + ev->bdaddr_type, NULL, 0, rssi,
19404 +@@ -5671,6 +5672,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb)
19405 + }
19406 +
19407 + ptr += sizeof(*ev) + ev->length + 1;
19408 ++
19409 ++ if (ptr > (void *) skb_tail_pointer(skb) - sizeof(*ev)) {
19410 ++ bt_dev_err(hdev, "Malicious advertising data. Stopping processing");
19411 ++ break;
19412 ++ }
19413 + }
19414 +
19415 + hci_dev_unlock(hdev);
19416 +diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
19417 +index 1a94ed2f8a4f8..d965b7c66bd62 100644
19418 +--- a/net/bluetooth/hci_request.c
19419 ++++ b/net/bluetooth/hci_request.c
19420 +@@ -2118,7 +2118,7 @@ int __hci_req_enable_ext_advertising(struct hci_request *req, u8 instance)
19421 + /* Set duration per instance since controller is responsible for
19422 + * scheduling it.
19423 + */
19424 +- if (adv_instance && adv_instance->duration) {
19425 ++ if (adv_instance && adv_instance->timeout) {
19426 + u16 duration = adv_instance->timeout * MSEC_PER_SEC;
19427 +
19428 + /* Time = N * 10 ms */
19429 +diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
19430 +index 160c016a5dfb9..d2c6785205992 100644
19431 +--- a/net/bluetooth/l2cap_sock.c
19432 ++++ b/net/bluetooth/l2cap_sock.c
19433 +@@ -161,7 +161,11 @@ static int l2cap_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
19434 + break;
19435 + }
19436 +
19437 +- if (chan->psm && bdaddr_type_is_le(chan->src_type))
19438 ++ /* Use L2CAP_MODE_LE_FLOWCTL (CoC) in case of LE address and
19439 ++ * L2CAP_MODE_EXT_FLOWCTL (ECRED) has not been set.
19440 ++ */
19441 ++ if (chan->psm && bdaddr_type_is_le(chan->src_type) &&
19442 ++ chan->mode != L2CAP_MODE_EXT_FLOWCTL)
19443 + chan->mode = L2CAP_MODE_LE_FLOWCTL;
19444 +
19445 + chan->state = BT_BOUND;
19446 +@@ -172,6 +176,21 @@ done:
19447 + return err;
19448 + }
19449 +
19450 ++static void l2cap_sock_init_pid(struct sock *sk)
19451 ++{
19452 ++ struct l2cap_chan *chan = l2cap_pi(sk)->chan;
19453 ++
19454 ++ /* Only L2CAP_MODE_EXT_FLOWCTL ever need to access the PID in order to
19455 ++ * group the channels being requested.
19456 ++ */
19457 ++ if (chan->mode != L2CAP_MODE_EXT_FLOWCTL)
19458 ++ return;
19459 ++
19460 ++ spin_lock(&sk->sk_peer_lock);
19461 ++ sk->sk_peer_pid = get_pid(task_tgid(current));
19462 ++ spin_unlock(&sk->sk_peer_lock);
19463 ++}
19464 ++
19465 + static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
19466 + int alen, int flags)
19467 + {
19468 +@@ -240,9 +259,15 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
19469 + return -EINVAL;
19470 + }
19471 +
19472 +- if (chan->psm && bdaddr_type_is_le(chan->src_type) && !chan->mode)
19473 ++ /* Use L2CAP_MODE_LE_FLOWCTL (CoC) in case of LE address and
19474 ++ * L2CAP_MODE_EXT_FLOWCTL (ECRED) has not been set.
19475 ++ */
19476 ++ if (chan->psm && bdaddr_type_is_le(chan->src_type) &&
19477 ++ chan->mode != L2CAP_MODE_EXT_FLOWCTL)
19478 + chan->mode = L2CAP_MODE_LE_FLOWCTL;
19479 +
19480 ++ l2cap_sock_init_pid(sk);
19481 ++
19482 + err = l2cap_chan_connect(chan, la.l2_psm, __le16_to_cpu(la.l2_cid),
19483 + &la.l2_bdaddr, la.l2_bdaddr_type);
19484 + if (err)
19485 +@@ -298,6 +323,8 @@ static int l2cap_sock_listen(struct socket *sock, int backlog)
19486 + goto done;
19487 + }
19488 +
19489 ++ l2cap_sock_init_pid(sk);
19490 ++
19491 + sk->sk_max_ack_backlog = backlog;
19492 + sk->sk_ack_backlog = 0;
19493 +
19494 +@@ -876,6 +903,8 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
19495 + struct l2cap_conn *conn;
19496 + int len, err = 0;
19497 + u32 opt;
19498 ++ u16 mtu;
19499 ++ u8 mode;
19500 +
19501 + BT_DBG("sk %p", sk);
19502 +
19503 +@@ -1058,16 +1087,16 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
19504 + break;
19505 + }
19506 +
19507 +- if (copy_from_sockptr(&opt, optval, sizeof(u16))) {
19508 ++ if (copy_from_sockptr(&mtu, optval, sizeof(u16))) {
19509 + err = -EFAULT;
19510 + break;
19511 + }
19512 +
19513 + if (chan->mode == L2CAP_MODE_EXT_FLOWCTL &&
19514 + sk->sk_state == BT_CONNECTED)
19515 +- err = l2cap_chan_reconfigure(chan, opt);
19516 ++ err = l2cap_chan_reconfigure(chan, mtu);
19517 + else
19518 +- chan->imtu = opt;
19519 ++ chan->imtu = mtu;
19520 +
19521 + break;
19522 +
19523 +@@ -1089,14 +1118,14 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
19524 + break;
19525 + }
19526 +
19527 +- if (copy_from_sockptr(&opt, optval, sizeof(u8))) {
19528 ++ if (copy_from_sockptr(&mode, optval, sizeof(u8))) {
19529 + err = -EFAULT;
19530 + break;
19531 + }
19532 +
19533 +- BT_DBG("opt %u", opt);
19534 ++ BT_DBG("mode %u", mode);
19535 +
19536 +- err = l2cap_set_mode(chan, opt);
19537 ++ err = l2cap_set_mode(chan, mode);
19538 + if (err)
19539 + break;
19540 +
19541 +diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
19542 +index 8edfb98ae1d58..68c0d0f928908 100644
19543 +--- a/net/bridge/br_netfilter_hooks.c
19544 ++++ b/net/bridge/br_netfilter_hooks.c
19545 +@@ -743,6 +743,9 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
19546 + if (nf_bridge->frag_max_size && nf_bridge->frag_max_size < mtu)
19547 + mtu = nf_bridge->frag_max_size;
19548 +
19549 ++ nf_bridge_update_protocol(skb);
19550 ++ nf_bridge_push_encap_header(skb);
19551 ++
19552 + if (skb_is_gso(skb) || skb->len + mtu_reserved <= mtu) {
19553 + nf_bridge_info_free(skb);
19554 + return br_dev_queue_push_xmit(net, sk, skb);
19555 +@@ -760,8 +763,6 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
19556 +
19557 + IPCB(skb)->frag_max_size = nf_bridge->frag_max_size;
19558 +
19559 +- nf_bridge_update_protocol(skb);
19560 +-
19561 + data = this_cpu_ptr(&brnf_frag_data_storage);
19562 +
19563 + if (skb_vlan_tag_present(skb)) {
19564 +@@ -789,8 +790,6 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
19565 +
19566 + IP6CB(skb)->frag_max_size = nf_bridge->frag_max_size;
19567 +
19568 +- nf_bridge_update_protocol(skb);
19569 +-
19570 + data = this_cpu_ptr(&brnf_frag_data_storage);
19571 + data->encap_size = nf_bridge_encap_header_len(skb);
19572 + data->size = ETH_HLEN + data->encap_size;
19573 +diff --git a/net/core/dev.c b/net/core/dev.c
19574 +index 60cf3cd0c282f..0bab2aca07fd3 100644
19575 +--- a/net/core/dev.c
19576 ++++ b/net/core/dev.c
19577 +@@ -9339,6 +9339,12 @@ static int bpf_xdp_link_update(struct bpf_link *link, struct bpf_prog *new_prog,
19578 + goto out_unlock;
19579 + }
19580 + old_prog = link->prog;
19581 ++ if (old_prog->type != new_prog->type ||
19582 ++ old_prog->expected_attach_type != new_prog->expected_attach_type) {
19583 ++ err = -EINVAL;
19584 ++ goto out_unlock;
19585 ++ }
19586 ++
19587 + if (old_prog == new_prog) {
19588 + /* no-op, don't disturb drivers */
19589 + bpf_prog_put(new_prog);
19590 +diff --git a/net/core/devlink.c b/net/core/devlink.c
19591 +index 442b67c044a9f..646d90f63dafc 100644
19592 +--- a/net/core/devlink.c
19593 ++++ b/net/core/devlink.c
19594 +@@ -7852,8 +7852,6 @@ static const struct genl_small_ops devlink_nl_ops[] = {
19595 + GENL_DONT_VALIDATE_DUMP_STRICT,
19596 + .dumpit = devlink_nl_cmd_health_reporter_dump_get_dumpit,
19597 + .flags = GENL_ADMIN_PERM,
19598 +- .internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT |
19599 +- DEVLINK_NL_FLAG_NO_LOCK,
19600 + },
19601 + {
19602 + .cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR,
19603 +diff --git a/net/core/filter.c b/net/core/filter.c
19604 +index abd58dce49bbc..7fa4283f2a8c0 100644
19605 +--- a/net/core/filter.c
19606 ++++ b/net/core/filter.c
19607 +@@ -4711,12 +4711,14 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
19608 + switch (optname) {
19609 + case SO_RCVBUF:
19610 + val = min_t(u32, val, sysctl_rmem_max);
19611 ++ val = min_t(int, val, INT_MAX / 2);
19612 + sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
19613 + WRITE_ONCE(sk->sk_rcvbuf,
19614 + max_t(int, val * 2, SOCK_MIN_RCVBUF));
19615 + break;
19616 + case SO_SNDBUF:
19617 + val = min_t(u32, val, sysctl_wmem_max);
19618 ++ val = min_t(int, val, INT_MAX / 2);
19619 + sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
19620 + WRITE_ONCE(sk->sk_sndbuf,
19621 + max_t(int, val * 2, SOCK_MIN_SNDBUF));
19622 +@@ -7919,9 +7921,9 @@ void bpf_warn_invalid_xdp_action(u32 act)
19623 + {
19624 + const u32 act_max = XDP_REDIRECT;
19625 +
19626 +- WARN_ONCE(1, "%s XDP return value %u, expect packet loss!\n",
19627 +- act > act_max ? "Illegal" : "Driver unsupported",
19628 +- act);
19629 ++ pr_warn_once("%s XDP return value %u, expect packet loss!\n",
19630 ++ act > act_max ? "Illegal" : "Driver unsupported",
19631 ++ act);
19632 + }
19633 + EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action);
19634 +
19635 +diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
19636 +index af59123601055..99303897b7bb7 100644
19637 +--- a/net/core/net-sysfs.c
19638 ++++ b/net/core/net-sysfs.c
19639 +@@ -1804,6 +1804,9 @@ static void remove_queue_kobjects(struct net_device *dev)
19640 +
19641 + net_rx_queue_update_kobjects(dev, real_rx, 0);
19642 + netdev_queue_update_kobjects(dev, real_tx, 0);
19643 ++
19644 ++ dev->real_num_rx_queues = 0;
19645 ++ dev->real_num_tx_queues = 0;
19646 + #ifdef CONFIG_SYSFS
19647 + kset_unregister(dev->queues_kset);
19648 + #endif
19649 +diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
19650 +index ac852db83de9f..cbff7d94b993e 100644
19651 +--- a/net/core/net_namespace.c
19652 ++++ b/net/core/net_namespace.c
19653 +@@ -183,8 +183,10 @@ static void ops_exit_list(const struct pernet_operations *ops,
19654 + {
19655 + struct net *net;
19656 + if (ops->exit) {
19657 +- list_for_each_entry(net, net_exit_list, exit_list)
19658 ++ list_for_each_entry(net, net_exit_list, exit_list) {
19659 + ops->exit(net);
19660 ++ cond_resched();
19661 ++ }
19662 + }
19663 + if (ops->exit_batch)
19664 + ops->exit_batch(net_exit_list);
19665 +diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
19666 +index ab6a8f35d369d..838a876c168ca 100644
19667 +--- a/net/ipv4/fib_semantics.c
19668 ++++ b/net/ipv4/fib_semantics.c
19669 +@@ -29,6 +29,7 @@
19670 + #include <linux/init.h>
19671 + #include <linux/slab.h>
19672 + #include <linux/netlink.h>
19673 ++#include <linux/hash.h>
19674 +
19675 + #include <net/arp.h>
19676 + #include <net/ip.h>
19677 +@@ -251,7 +252,6 @@ void free_fib_info(struct fib_info *fi)
19678 + pr_warn("Freeing alive fib_info %p\n", fi);
19679 + return;
19680 + }
19681 +- fib_info_cnt--;
19682 +
19683 + call_rcu(&fi->rcu, free_fib_info_rcu);
19684 + }
19685 +@@ -262,6 +262,10 @@ void fib_release_info(struct fib_info *fi)
19686 + spin_lock_bh(&fib_info_lock);
19687 + if (fi && --fi->fib_treeref == 0) {
19688 + hlist_del(&fi->fib_hash);
19689 ++
19690 ++ /* Paired with READ_ONCE() in fib_create_info(). */
19691 ++ WRITE_ONCE(fib_info_cnt, fib_info_cnt - 1);
19692 ++
19693 + if (fi->fib_prefsrc)
19694 + hlist_del(&fi->fib_lhash);
19695 + if (fi->nh) {
19696 +@@ -318,11 +322,15 @@ static inline int nh_comp(struct fib_info *fi, struct fib_info *ofi)
19697 +
19698 + static inline unsigned int fib_devindex_hashfn(unsigned int val)
19699 + {
19700 +- unsigned int mask = DEVINDEX_HASHSIZE - 1;
19701 ++ return hash_32(val, DEVINDEX_HASHBITS);
19702 ++}
19703 ++
19704 ++static struct hlist_head *
19705 ++fib_info_devhash_bucket(const struct net_device *dev)
19706 ++{
19707 ++ u32 val = net_hash_mix(dev_net(dev)) ^ dev->ifindex;
19708 +
19709 +- return (val ^
19710 +- (val >> DEVINDEX_HASHBITS) ^
19711 +- (val >> (DEVINDEX_HASHBITS * 2))) & mask;
19712 ++ return &fib_info_devhash[fib_devindex_hashfn(val)];
19713 + }
19714 +
19715 + static unsigned int fib_info_hashfn_1(int init_val, u8 protocol, u8 scope,
19716 +@@ -432,12 +440,11 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev)
19717 + {
19718 + struct hlist_head *head;
19719 + struct fib_nh *nh;
19720 +- unsigned int hash;
19721 +
19722 + spin_lock(&fib_info_lock);
19723 +
19724 +- hash = fib_devindex_hashfn(dev->ifindex);
19725 +- head = &fib_info_devhash[hash];
19726 ++ head = fib_info_devhash_bucket(dev);
19727 ++
19728 + hlist_for_each_entry(nh, head, nh_hash) {
19729 + if (nh->fib_nh_dev == dev &&
19730 + nh->fib_nh_gw4 == gw &&
19731 +@@ -1431,7 +1438,9 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
19732 + #endif
19733 +
19734 + err = -ENOBUFS;
19735 +- if (fib_info_cnt >= fib_info_hash_size) {
19736 ++
19737 ++ /* Paired with WRITE_ONCE() in fib_release_info() */
19738 ++ if (READ_ONCE(fib_info_cnt) >= fib_info_hash_size) {
19739 + unsigned int new_size = fib_info_hash_size << 1;
19740 + struct hlist_head *new_info_hash;
19741 + struct hlist_head *new_laddrhash;
19742 +@@ -1463,7 +1472,6 @@ struct fib_info *fib_create_info(struct fib_config *cfg,
19743 + return ERR_PTR(err);
19744 + }
19745 +
19746 +- fib_info_cnt++;
19747 + fi->fib_net = net;
19748 + fi->fib_protocol = cfg->fc_protocol;
19749 + fi->fib_scope = cfg->fc_scope;
19750 +@@ -1590,6 +1598,7 @@ link_it:
19751 + fi->fib_treeref++;
19752 + refcount_set(&fi->fib_clntref, 1);
19753 + spin_lock_bh(&fib_info_lock);
19754 ++ fib_info_cnt++;
19755 + hlist_add_head(&fi->fib_hash,
19756 + &fib_info_hash[fib_info_hashfn(fi)]);
19757 + if (fi->fib_prefsrc) {
19758 +@@ -1603,12 +1612,10 @@ link_it:
19759 + } else {
19760 + change_nexthops(fi) {
19761 + struct hlist_head *head;
19762 +- unsigned int hash;
19763 +
19764 + if (!nexthop_nh->fib_nh_dev)
19765 + continue;
19766 +- hash = fib_devindex_hashfn(nexthop_nh->fib_nh_dev->ifindex);
19767 +- head = &fib_info_devhash[hash];
19768 ++ head = fib_info_devhash_bucket(nexthop_nh->fib_nh_dev);
19769 + hlist_add_head(&nexthop_nh->nh_hash, head);
19770 + } endfor_nexthops(fi)
19771 + }
19772 +@@ -1958,8 +1965,7 @@ void fib_nhc_update_mtu(struct fib_nh_common *nhc, u32 new, u32 orig)
19773 +
19774 + void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
19775 + {
19776 +- unsigned int hash = fib_devindex_hashfn(dev->ifindex);
19777 +- struct hlist_head *head = &fib_info_devhash[hash];
19778 ++ struct hlist_head *head = fib_info_devhash_bucket(dev);
19779 + struct fib_nh *nh;
19780 +
19781 + hlist_for_each_entry(nh, head, nh_hash) {
19782 +@@ -1978,12 +1984,11 @@ void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
19783 + */
19784 + int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
19785 + {
19786 +- int ret = 0;
19787 +- int scope = RT_SCOPE_NOWHERE;
19788 ++ struct hlist_head *head = fib_info_devhash_bucket(dev);
19789 + struct fib_info *prev_fi = NULL;
19790 +- unsigned int hash = fib_devindex_hashfn(dev->ifindex);
19791 +- struct hlist_head *head = &fib_info_devhash[hash];
19792 ++ int scope = RT_SCOPE_NOWHERE;
19793 + struct fib_nh *nh;
19794 ++ int ret = 0;
19795 +
19796 + if (force)
19797 + scope = -1;
19798 +@@ -2128,7 +2133,6 @@ out:
19799 + int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
19800 + {
19801 + struct fib_info *prev_fi;
19802 +- unsigned int hash;
19803 + struct hlist_head *head;
19804 + struct fib_nh *nh;
19805 + int ret;
19806 +@@ -2144,8 +2148,7 @@ int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
19807 + }
19808 +
19809 + prev_fi = NULL;
19810 +- hash = fib_devindex_hashfn(dev->ifindex);
19811 +- head = &fib_info_devhash[hash];
19812 ++ head = fib_info_devhash_bucket(dev);
19813 + ret = 0;
19814 +
19815 + hlist_for_each_entry(nh, head, nh_hash) {
19816 +diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
19817 +index 10d31733297d7..e0e8a65d561ec 100644
19818 +--- a/net/ipv4/inet_fragment.c
19819 ++++ b/net/ipv4/inet_fragment.c
19820 +@@ -204,9 +204,9 @@ void inet_frag_kill(struct inet_frag_queue *fq)
19821 + /* The RCU read lock provides a memory barrier
19822 + * guaranteeing that if fqdir->dead is false then
19823 + * the hash table destruction will not start until
19824 +- * after we unlock. Paired with inet_frags_exit_net().
19825 ++ * after we unlock. Paired with fqdir_pre_exit().
19826 + */
19827 +- if (!fqdir->dead) {
19828 ++ if (!READ_ONCE(fqdir->dead)) {
19829 + rhashtable_remove_fast(&fqdir->rhashtable, &fq->node,
19830 + fqdir->f->rhash_params);
19831 + refcount_dec(&fq->refcnt);
19832 +@@ -321,9 +321,11 @@ static struct inet_frag_queue *inet_frag_create(struct fqdir *fqdir,
19833 + /* TODO : call from rcu_read_lock() and no longer use refcount_inc_not_zero() */
19834 + struct inet_frag_queue *inet_frag_find(struct fqdir *fqdir, void *key)
19835 + {
19836 ++ /* This pairs with WRITE_ONCE() in fqdir_pre_exit(). */
19837 ++ long high_thresh = READ_ONCE(fqdir->high_thresh);
19838 + struct inet_frag_queue *fq = NULL, *prev;
19839 +
19840 +- if (!fqdir->high_thresh || frag_mem_limit(fqdir) > fqdir->high_thresh)
19841 ++ if (!high_thresh || frag_mem_limit(fqdir) > high_thresh)
19842 + return NULL;
19843 +
19844 + rcu_read_lock();
19845 +diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
19846 +index cfeb8890f94ee..fad803d2d711e 100644
19847 +--- a/net/ipv4/ip_fragment.c
19848 ++++ b/net/ipv4/ip_fragment.c
19849 +@@ -144,7 +144,8 @@ static void ip_expire(struct timer_list *t)
19850 +
19851 + rcu_read_lock();
19852 +
19853 +- if (qp->q.fqdir->dead)
19854 ++ /* Paired with WRITE_ONCE() in fqdir_pre_exit(). */
19855 ++ if (READ_ONCE(qp->q.fqdir->dead))
19856 + goto out_rcu_unlock;
19857 +
19858 + spin_lock(&qp->q.lock);
19859 +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
19860 +index a9cc05043fa47..e4504dd510c6d 100644
19861 +--- a/net/ipv4/ip_gre.c
19862 ++++ b/net/ipv4/ip_gre.c
19863 +@@ -599,8 +599,9 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
19864 +
19865 + key = &info->key;
19866 + ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
19867 +- tunnel_id_to_key32(key->tun_id), key->tos, 0,
19868 +- skb->mark, skb_get_hash(skb));
19869 ++ tunnel_id_to_key32(key->tun_id),
19870 ++ key->tos & ~INET_ECN_MASK, 0, skb->mark,
19871 ++ skb_get_hash(skb));
19872 + rt = ip_route_output_key(dev_net(dev), &fl4);
19873 + if (IS_ERR(rt))
19874 + return PTR_ERR(rt);
19875 +diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
19876 +index a8b980ad11d4e..1088564d4dbcb 100644
19877 +--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
19878 ++++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
19879 +@@ -505,8 +505,11 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par)
19880 + if (IS_ERR(config))
19881 + return PTR_ERR(config);
19882 + }
19883 +- } else if (memcmp(&config->clustermac, &cipinfo->clustermac, ETH_ALEN))
19884 ++ } else if (memcmp(&config->clustermac, &cipinfo->clustermac, ETH_ALEN)) {
19885 ++ clusterip_config_entry_put(config);
19886 ++ clusterip_config_put(config);
19887 + return -EINVAL;
19888 ++ }
19889 +
19890 + ret = nf_ct_netns_get(par->net, par->family);
19891 + if (ret < 0) {
19892 +diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
19893 +index 09fa49bbf617d..9a0263f252323 100644
19894 +--- a/net/ipv6/ip6_gre.c
19895 ++++ b/net/ipv6/ip6_gre.c
19896 +@@ -755,6 +755,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
19897 + fl6->daddr = key->u.ipv6.dst;
19898 + fl6->flowlabel = key->label;
19899 + fl6->flowi6_uid = sock_net_uid(dev_net(dev), NULL);
19900 ++ fl6->fl6_gre_key = tunnel_id_to_key32(key->tun_id);
19901 +
19902 + dsfield = key->tos;
19903 + flags = key->tun_flags &
19904 +@@ -990,6 +991,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
19905 + fl6.daddr = key->u.ipv6.dst;
19906 + fl6.flowlabel = key->label;
19907 + fl6.flowi6_uid = sock_net_uid(dev_net(dev), NULL);
19908 ++ fl6.fl6_gre_key = tunnel_id_to_key32(key->tun_id);
19909 +
19910 + dsfield = key->tos;
19911 + if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
19912 +@@ -1098,6 +1100,7 @@ static void ip6gre_tnl_link_config_common(struct ip6_tnl *t)
19913 + fl6->flowi6_oif = p->link;
19914 + fl6->flowlabel = 0;
19915 + fl6->flowi6_proto = IPPROTO_GRE;
19916 ++ fl6->fl6_gre_key = t->parms.o_key;
19917 +
19918 + if (!(p->flags&IP6_TNL_F_USE_ORIG_TCLASS))
19919 + fl6->flowlabel |= IPV6_TCLASS_MASK & p->flowinfo;
19920 +@@ -1543,7 +1546,7 @@ static void ip6gre_fb_tunnel_init(struct net_device *dev)
19921 + static struct inet6_protocol ip6gre_protocol __read_mostly = {
19922 + .handler = gre_rcv,
19923 + .err_handler = ip6gre_err,
19924 +- .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
19925 ++ .flags = INET6_PROTO_FINAL,
19926 + };
19927 +
19928 + static void ip6gre_destroy_tunnels(struct net *net, struct list_head *head)
19929 +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
19930 +index 6a24431b90095..d27c444a19ed1 100644
19931 +--- a/net/mac80211/rx.c
19932 ++++ b/net/mac80211/rx.c
19933 +@@ -4800,7 +4800,7 @@ void ieee80211_rx_list(struct ieee80211_hw *hw, struct ieee80211_sta *pubsta,
19934 + goto drop;
19935 + break;
19936 + case RX_ENC_VHT:
19937 +- if (WARN_ONCE(status->rate_idx > 9 ||
19938 ++ if (WARN_ONCE(status->rate_idx > 11 ||
19939 + !status->nss ||
19940 + status->nss > 8,
19941 + "Rate marked as a VHT rate but data is invalid: MCS: %d, NSS: %d\n",
19942 +diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
19943 +index 2d73f265b12c9..f67c4436c5d31 100644
19944 +--- a/net/netfilter/nft_set_pipapo.c
19945 ++++ b/net/netfilter/nft_set_pipapo.c
19946 +@@ -1290,6 +1290,11 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
19947 + if (!new->scratch_aligned)
19948 + goto out_scratch;
19949 + #endif
19950 ++ for_each_possible_cpu(i)
19951 ++ *per_cpu_ptr(new->scratch, i) = NULL;
19952 ++
19953 ++ if (pipapo_realloc_scratch(new, old->bsize_max))
19954 ++ goto out_scratch_realloc;
19955 +
19956 + rcu_head_init(&new->rcu);
19957 +
19958 +@@ -1334,6 +1339,9 @@ out_lt:
19959 + kvfree(dst->lt);
19960 + dst--;
19961 + }
19962 ++out_scratch_realloc:
19963 ++ for_each_possible_cpu(i)
19964 ++ kfree(*per_cpu_ptr(new->scratch, i));
19965 + #ifdef NFT_PIPAPO_ALIGN
19966 + free_percpu(new->scratch_aligned);
19967 + #endif
19968 +diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
19969 +index eef0e3f2f25b0..e5c8a295e6406 100644
19970 +--- a/net/netrom/af_netrom.c
19971 ++++ b/net/netrom/af_netrom.c
19972 +@@ -298,7 +298,7 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
19973 + {
19974 + struct sock *sk = sock->sk;
19975 + struct nr_sock *nr = nr_sk(sk);
19976 +- unsigned long opt;
19977 ++ unsigned int opt;
19978 +
19979 + if (level != SOL_NETROM)
19980 + return -ENOPROTOOPT;
19981 +@@ -306,18 +306,18 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
19982 + if (optlen < sizeof(unsigned int))
19983 + return -EINVAL;
19984 +
19985 +- if (copy_from_sockptr(&opt, optval, sizeof(unsigned long)))
19986 ++ if (copy_from_sockptr(&opt, optval, sizeof(opt)))
19987 + return -EFAULT;
19988 +
19989 + switch (optname) {
19990 + case NETROM_T1:
19991 +- if (opt < 1 || opt > ULONG_MAX / HZ)
19992 ++ if (opt < 1 || opt > UINT_MAX / HZ)
19993 + return -EINVAL;
19994 + nr->t1 = opt * HZ;
19995 + return 0;
19996 +
19997 + case NETROM_T2:
19998 +- if (opt < 1 || opt > ULONG_MAX / HZ)
19999 ++ if (opt < 1 || opt > UINT_MAX / HZ)
20000 + return -EINVAL;
20001 + nr->t2 = opt * HZ;
20002 + return 0;
20003 +@@ -329,13 +329,13 @@ static int nr_setsockopt(struct socket *sock, int level, int optname,
20004 + return 0;
20005 +
20006 + case NETROM_T4:
20007 +- if (opt < 1 || opt > ULONG_MAX / HZ)
20008 ++ if (opt < 1 || opt > UINT_MAX / HZ)
20009 + return -EINVAL;
20010 + nr->t4 = opt * HZ;
20011 + return 0;
20012 +
20013 + case NETROM_IDLE:
20014 +- if (opt > ULONG_MAX / (60 * HZ))
20015 ++ if (opt > UINT_MAX / (60 * HZ))
20016 + return -EINVAL;
20017 + nr->idle = opt * 60 * HZ;
20018 + return 0;
20019 +diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
20020 +index 6cfd30fc07985..0b93a17b9f11f 100644
20021 +--- a/net/nfc/llcp_sock.c
20022 ++++ b/net/nfc/llcp_sock.c
20023 +@@ -789,6 +789,11 @@ static int llcp_sock_sendmsg(struct socket *sock, struct msghdr *msg,
20024 +
20025 + lock_sock(sk);
20026 +
20027 ++ if (!llcp_sock->local) {
20028 ++ release_sock(sk);
20029 ++ return -ENODEV;
20030 ++ }
20031 ++
20032 + if (sk->sk_type == SOCK_DGRAM) {
20033 + DECLARE_SOCKADDR(struct sockaddr_nfc_llcp *, addr,
20034 + msg->msg_name);
20035 +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
20036 +index 6a9c1a39874a0..b5005abc84ec2 100644
20037 +--- a/net/sched/sch_generic.c
20038 ++++ b/net/sched/sch_generic.c
20039 +@@ -1386,6 +1386,7 @@ void psched_ratecfg_precompute(struct psched_ratecfg *r,
20040 + {
20041 + memset(r, 0, sizeof(*r));
20042 + r->overhead = conf->overhead;
20043 ++ r->mpu = conf->mpu;
20044 + r->rate_bytes_ps = max_t(u64, conf->rate, rate64);
20045 + r->linklayer = (conf->linklayer & TC_LINKLAYER_MASK);
20046 + r->mult = 1;
20047 +diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
20048 +index 2a22dc85951ee..4eb9ef9c28003 100644
20049 +--- a/net/smc/smc_core.c
20050 ++++ b/net/smc/smc_core.c
20051 +@@ -1002,16 +1002,11 @@ void smc_smcd_terminate_all(struct smcd_dev *smcd)
20052 + /* Called when an SMCR device is removed or the smc module is unloaded.
20053 + * If smcibdev is given, all SMCR link groups using this device are terminated.
20054 + * If smcibdev is NULL, all SMCR link groups are terminated.
20055 +- *
20056 +- * We must wait here for QPs been destroyed before we destroy the CQs,
20057 +- * or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus
20058 +- * smc_sock cannot be released.
20059 + */
20060 + void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
20061 + {
20062 + struct smc_link_group *lgr, *lg;
20063 + LIST_HEAD(lgr_free_list);
20064 +- LIST_HEAD(lgr_linkdown_list);
20065 + int i;
20066 +
20067 + spin_lock_bh(&smc_lgr_list.lock);
20068 +@@ -1023,7 +1018,7 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
20069 + list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
20070 + for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
20071 + if (lgr->lnk[i].smcibdev == smcibdev)
20072 +- list_move_tail(&lgr->list, &lgr_linkdown_list);
20073 ++ smcr_link_down_cond_sched(&lgr->lnk[i]);
20074 + }
20075 + }
20076 + }
20077 +@@ -1035,16 +1030,6 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
20078 + __smc_lgr_terminate(lgr, false);
20079 + }
20080 +
20081 +- list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
20082 +- for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
20083 +- if (lgr->lnk[i].smcibdev == smcibdev) {
20084 +- mutex_lock(&lgr->llc_conf_mutex);
20085 +- smcr_link_down_cond(&lgr->lnk[i]);
20086 +- mutex_unlock(&lgr->llc_conf_mutex);
20087 +- }
20088 +- }
20089 +- }
20090 +-
20091 + if (smcibdev) {
20092 + if (atomic_read(&smcibdev->lnk_cnt))
20093 + wait_event(smcibdev->lnks_deleted,
20094 +diff --git a/net/unix/garbage.c b/net/unix/garbage.c
20095 +index 12e2ddaf887f2..d45d5366115a7 100644
20096 +--- a/net/unix/garbage.c
20097 ++++ b/net/unix/garbage.c
20098 +@@ -192,8 +192,11 @@ void wait_for_unix_gc(void)
20099 + {
20100 + /* If number of inflight sockets is insane,
20101 + * force a garbage collect right now.
20102 ++ * Paired with the WRITE_ONCE() in unix_inflight(),
20103 ++ * unix_notinflight() and gc_in_progress().
20104 + */
20105 +- if (unix_tot_inflight > UNIX_INFLIGHT_TRIGGER_GC && !gc_in_progress)
20106 ++ if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
20107 ++ !READ_ONCE(gc_in_progress))
20108 + unix_gc();
20109 + wait_event(unix_gc_wait, gc_in_progress == false);
20110 + }
20111 +@@ -213,7 +216,9 @@ void unix_gc(void)
20112 + if (gc_in_progress)
20113 + goto out;
20114 +
20115 +- gc_in_progress = true;
20116 ++ /* Paired with READ_ONCE() in wait_for_unix_gc(). */
20117 ++ WRITE_ONCE(gc_in_progress, true);
20118 ++
20119 + /* First, select candidates for garbage collection. Only
20120 + * in-flight sockets are considered, and from those only ones
20121 + * which don't have any external reference.
20122 +@@ -299,7 +304,10 @@ void unix_gc(void)
20123 +
20124 + /* All candidates should have been detached by now. */
20125 + BUG_ON(!list_empty(&gc_candidates));
20126 +- gc_in_progress = false;
20127 ++
20128 ++ /* Paired with READ_ONCE() in wait_for_unix_gc(). */
20129 ++ WRITE_ONCE(gc_in_progress, false);
20130 ++
20131 + wake_up(&unix_gc_wait);
20132 +
20133 + out:
20134 +diff --git a/net/unix/scm.c b/net/unix/scm.c
20135 +index 052ae709ce289..aa27a02478dc1 100644
20136 +--- a/net/unix/scm.c
20137 ++++ b/net/unix/scm.c
20138 +@@ -60,7 +60,8 @@ void unix_inflight(struct user_struct *user, struct file *fp)
20139 + } else {
20140 + BUG_ON(list_empty(&u->link));
20141 + }
20142 +- unix_tot_inflight++;
20143 ++ /* Paired with READ_ONCE() in wait_for_unix_gc() */
20144 ++ WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
20145 + }
20146 + user->unix_inflight++;
20147 + spin_unlock(&unix_gc_lock);
20148 +@@ -80,7 +81,8 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
20149 +
20150 + if (atomic_long_dec_and_test(&u->inflight))
20151 + list_del_init(&u->link);
20152 +- unix_tot_inflight--;
20153 ++ /* Paired with READ_ONCE() in wait_for_unix_gc() */
20154 ++ WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
20155 + }
20156 + user->unix_inflight--;
20157 + spin_unlock(&unix_gc_lock);
20158 +diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
20159 +index 2bf2693901631..a0f62fa02e06e 100644
20160 +--- a/net/xfrm/xfrm_compat.c
20161 ++++ b/net/xfrm/xfrm_compat.c
20162 +@@ -127,6 +127,7 @@ static const struct nla_policy compat_policy[XFRMA_MAX+1] = {
20163 + [XFRMA_SET_MARK] = { .type = NLA_U32 },
20164 + [XFRMA_SET_MARK_MASK] = { .type = NLA_U32 },
20165 + [XFRMA_IF_ID] = { .type = NLA_U32 },
20166 ++ [XFRMA_MTIMER_THRESH] = { .type = NLA_U32 },
20167 + };
20168 +
20169 + static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
20170 +@@ -274,9 +275,10 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
20171 + case XFRMA_SET_MARK:
20172 + case XFRMA_SET_MARK_MASK:
20173 + case XFRMA_IF_ID:
20174 ++ case XFRMA_MTIMER_THRESH:
20175 + return xfrm_nla_cpy(dst, src, nla_len(src));
20176 + default:
20177 +- BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
20178 ++ BUILD_BUG_ON(XFRMA_MAX != XFRMA_MTIMER_THRESH);
20179 + pr_warn_once("unsupported nla_type %d\n", src->nla_type);
20180 + return -EOPNOTSUPP;
20181 + }
20182 +@@ -431,7 +433,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
20183 + int err;
20184 +
20185 + if (type > XFRMA_MAX) {
20186 +- BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID);
20187 ++ BUILD_BUG_ON(XFRMA_MAX != XFRMA_MTIMER_THRESH);
20188 + NL_SET_ERR_MSG(extack, "Bad attribute");
20189 + return -EOPNOTSUPP;
20190 + }
20191 +diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
20192 +index e9ce23343f5ca..e1fae61a5bb90 100644
20193 +--- a/net/xfrm/xfrm_interface.c
20194 ++++ b/net/xfrm/xfrm_interface.c
20195 +@@ -643,11 +643,16 @@ static int xfrmi_newlink(struct net *src_net, struct net_device *dev,
20196 + struct netlink_ext_ack *extack)
20197 + {
20198 + struct net *net = dev_net(dev);
20199 +- struct xfrm_if_parms p;
20200 ++ struct xfrm_if_parms p = {};
20201 + struct xfrm_if *xi;
20202 + int err;
20203 +
20204 + xfrmi_netlink_parms(data, &p);
20205 ++ if (!p.if_id) {
20206 ++ NL_SET_ERR_MSG(extack, "if_id must be non zero");
20207 ++ return -EINVAL;
20208 ++ }
20209 ++
20210 + xi = xfrmi_locate(net, &p);
20211 + if (xi)
20212 + return -EEXIST;
20213 +@@ -672,7 +677,12 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
20214 + {
20215 + struct xfrm_if *xi = netdev_priv(dev);
20216 + struct net *net = xi->net;
20217 +- struct xfrm_if_parms p;
20218 ++ struct xfrm_if_parms p = {};
20219 ++
20220 ++ if (!p.if_id) {
20221 ++ NL_SET_ERR_MSG(extack, "if_id must be non zero");
20222 ++ return -EINVAL;
20223 ++ }
20224 +
20225 + xfrmi_netlink_parms(data, &p);
20226 + xi = xfrmi_locate(net, &p);
20227 +diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
20228 +index 3a9831c05ec71..c4a195cb36817 100644
20229 +--- a/net/xfrm/xfrm_policy.c
20230 ++++ b/net/xfrm/xfrm_policy.c
20231 +@@ -31,8 +31,10 @@
20232 + #include <linux/if_tunnel.h>
20233 + #include <net/dst.h>
20234 + #include <net/flow.h>
20235 ++#include <net/inet_ecn.h>
20236 + #include <net/xfrm.h>
20237 + #include <net/ip.h>
20238 ++#include <net/gre.h>
20239 + #if IS_ENABLED(CONFIG_IPV6_MIP6)
20240 + #include <net/mip6.h>
20241 + #endif
20242 +@@ -3293,7 +3295,7 @@ decode_session4(struct sk_buff *skb, struct flowi *fl, bool reverse)
20243 + fl4->flowi4_proto = iph->protocol;
20244 + fl4->daddr = reverse ? iph->saddr : iph->daddr;
20245 + fl4->saddr = reverse ? iph->daddr : iph->saddr;
20246 +- fl4->flowi4_tos = iph->tos;
20247 ++ fl4->flowi4_tos = iph->tos & ~INET_ECN_MASK;
20248 +
20249 + if (!ip_is_fragment(iph)) {
20250 + switch (iph->protocol) {
20251 +@@ -3455,6 +3457,26 @@ decode_session6(struct sk_buff *skb, struct flowi *fl, bool reverse)
20252 + }
20253 + fl6->flowi6_proto = nexthdr;
20254 + return;
20255 ++ case IPPROTO_GRE:
20256 ++ if (!onlyproto &&
20257 ++ (nh + offset + 12 < skb->data ||
20258 ++ pskb_may_pull(skb, nh + offset + 12 - skb->data))) {
20259 ++ struct gre_base_hdr *gre_hdr;
20260 ++ __be32 *gre_key;
20261 ++
20262 ++ nh = skb_network_header(skb);
20263 ++ gre_hdr = (struct gre_base_hdr *)(nh + offset);
20264 ++ gre_key = (__be32 *)(gre_hdr + 1);
20265 ++
20266 ++ if (gre_hdr->flags & GRE_KEY) {
20267 ++ if (gre_hdr->flags & GRE_CSUM)
20268 ++ gre_key++;
20269 ++ fl6->fl6_gre_key = *gre_key;
20270 ++ }
20271 ++ }
20272 ++ fl6->flowi6_proto = nexthdr;
20273 ++ return;
20274 ++
20275 + #if IS_ENABLED(CONFIG_IPV6_MIP6)
20276 + case IPPROTO_MH:
20277 + offset += ipv6_optlen(exthdr);
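Besides masking the ECN bits out of the IPv4 tos (`flowi4_tos = iph->tos & ~INET_ECN_MASK`), the xfrm_policy.c hunk teaches decode_session6() to pull the GRE key into the flow so policies can match per-key. A simplified sketch of the key lookup (host-order flag constants and a plain word buffer instead of an skb — both assumptions):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GRE_CSUM 0x8000u
#define GRE_KEY  0x2000u

struct gre_base_hdr { uint16_t flags; uint16_t protocol; };

/* The key word follows the 4-byte base header; a present checksum
 * (GRE_CSUM) pushes it back by one more 32-bit word, exactly as the
 * gre_key++ in the hunk does. Returns 0 when no key is carried. */
static uint32_t gre_key_of(const uint32_t *pkt)
{
	struct gre_base_hdr h;
	const uint32_t *key = pkt + 1;

	memcpy(&h, pkt, sizeof(h));
	if (!(h.flags & GRE_KEY))
		return 0;
	if (h.flags & GRE_CSUM)
		key++;	/* skip the checksum + reserved word */
	return *key;
}
```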
20278 +diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
20279 +index c158e70e8ae10..65e2805fa113a 100644
20280 +--- a/net/xfrm/xfrm_state.c
20281 ++++ b/net/xfrm/xfrm_state.c
20282 +@@ -1557,6 +1557,9 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
20283 + x->km.seq = orig->km.seq;
20284 + x->replay = orig->replay;
20285 + x->preplay = orig->preplay;
20286 ++ x->mapping_maxage = orig->mapping_maxage;
20287 ++ x->new_mapping = 0;
20288 ++ x->new_mapping_sport = 0;
20289 +
20290 + return x;
20291 +
20292 +@@ -2208,7 +2211,7 @@ int km_query(struct xfrm_state *x, struct xfrm_tmpl *t, struct xfrm_policy *pol)
20293 + }
20294 + EXPORT_SYMBOL(km_query);
20295 +
20296 +-int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
20297 ++static int __km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
20298 + {
20299 + int err = -EINVAL;
20300 + struct xfrm_mgr *km;
20301 +@@ -2223,6 +2226,24 @@ int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
20302 + rcu_read_unlock();
20303 + return err;
20304 + }
20305 ++
20306 ++int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport)
20307 ++{
20308 ++ int ret = 0;
20309 ++
20310 ++ if (x->mapping_maxage) {
20311 ++ if ((jiffies / HZ - x->new_mapping) > x->mapping_maxage ||
20312 ++ x->new_mapping_sport != sport) {
20313 ++ x->new_mapping_sport = sport;
20314 ++ x->new_mapping = jiffies / HZ;
20315 ++ ret = __km_new_mapping(x, ipaddr, sport);
20316 ++ }
20317 ++ } else {
20318 ++ ret = __km_new_mapping(x, ipaddr, sport);
20319 ++ }
20320 ++
20321 ++ return ret;
20322 ++}
20323 + EXPORT_SYMBOL(km_new_mapping);
20324 +
20325 + void km_policy_expired(struct xfrm_policy *pol, int dir, int hard, u32 portid)
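The xfrm_state.c hunk splits km_new_mapping() so the notification can be rate limited: with `x->mapping_maxage` set, userspace only hears about a NAT mapping change when the source port differs or the previous report is older than mapping_maxage seconds. A userspace sketch of that gate (a plain seconds counter standing in for `jiffies / HZ`, and a trimmed-down state struct — assumptions):

```c
#include <assert.h>

/* Trimmed-down stand-in for the fields the patch adds to xfrm_state. */
struct mapping_state {
	unsigned long mapping_maxage;	/* seconds; 0 = report every change */
	unsigned long new_mapping;	/* time of last report, seconds */
	unsigned short new_mapping_sport;
};

static int notified;	/* counts calls that reach __km_new_mapping() */
static void do_notify(void) { notified++; }

static void new_mapping(struct mapping_state *x, unsigned short sport,
			unsigned long now)
{
	if (x->mapping_maxage) {
		/* report only on a new port or a stale previous report */
		if (now - x->new_mapping > x->mapping_maxage ||
		    x->new_mapping_sport != sport) {
			x->new_mapping_sport = sport;
			x->new_mapping = now;
			do_notify();
		}
	} else {
		do_notify();	/* old behaviour: always report */
	}
}
```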
20326 +diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
20327 +index 6f97665b632ed..d0fdfbf4c5f72 100644
20328 +--- a/net/xfrm/xfrm_user.c
20329 ++++ b/net/xfrm/xfrm_user.c
20330 +@@ -282,6 +282,10 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
20331 +
20332 + err = 0;
20333 +
20334 ++ if (attrs[XFRMA_MTIMER_THRESH])
20335 ++ if (!attrs[XFRMA_ENCAP])
20336 ++ err = -EINVAL;
20337 ++
20338 + out:
20339 + return err;
20340 + }
20341 +@@ -521,6 +525,7 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
20342 + struct nlattr *lt = attrs[XFRMA_LTIME_VAL];
20343 + struct nlattr *et = attrs[XFRMA_ETIMER_THRESH];
20344 + struct nlattr *rt = attrs[XFRMA_REPLAY_THRESH];
20345 ++ struct nlattr *mt = attrs[XFRMA_MTIMER_THRESH];
20346 +
20347 + if (re) {
20348 + struct xfrm_replay_state_esn *replay_esn;
20349 +@@ -552,6 +557,9 @@ static void xfrm_update_ae_params(struct xfrm_state *x, struct nlattr **attrs,
20350 +
20351 + if (rt)
20352 + x->replay_maxdiff = nla_get_u32(rt);
20353 ++
20354 ++ if (mt)
20355 ++ x->mapping_maxage = nla_get_u32(mt);
20356 + }
20357 +
20358 + static void xfrm_smark_init(struct nlattr **attrs, struct xfrm_mark *m)
20359 +@@ -621,8 +629,13 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
20360 +
20361 + xfrm_smark_init(attrs, &x->props.smark);
20362 +
20363 +- if (attrs[XFRMA_IF_ID])
20364 ++ if (attrs[XFRMA_IF_ID]) {
20365 + x->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
20366 ++ if (!x->if_id) {
20367 ++ err = -EINVAL;
20368 ++ goto error;
20369 ++ }
20370 ++ }
20371 +
20372 + err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV]);
20373 + if (err)
20374 +@@ -964,8 +977,13 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
20375 + if (ret)
20376 + goto out;
20377 + }
20378 +- if (x->security)
20379 ++ if (x->security) {
20380 + ret = copy_sec_ctx(x->security, skb);
20381 ++ if (ret)
20382 ++ goto out;
20383 ++ }
20384 ++ if (x->mapping_maxage)
20385 ++ ret = nla_put_u32(skb, XFRMA_MTIMER_THRESH, x->mapping_maxage);
20386 + out:
20387 + return ret;
20388 + }
20389 +@@ -1353,8 +1371,13 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
20390 +
20391 + mark = xfrm_mark_get(attrs, &m);
20392 +
20393 +- if (attrs[XFRMA_IF_ID])
20394 ++ if (attrs[XFRMA_IF_ID]) {
20395 + if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
20396 ++ if (!if_id) {
20397 ++ err = -EINVAL;
20398 ++ goto out_noput;
20399 ++ }
20400 ++ }
20401 +
20402 + if (p->info.seq) {
20403 + x = xfrm_find_acq_byseq(net, mark, p->info.seq);
20404 +@@ -1667,8 +1690,13 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net, struct xfrm_us
20405 +
20406 + xfrm_mark_get(attrs, &xp->mark);
20407 +
20408 +- if (attrs[XFRMA_IF_ID])
20409 ++ if (attrs[XFRMA_IF_ID]) {
20410 + xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
20411 ++ if (!xp->if_id) {
20412 ++ err = -EINVAL;
20413 ++ goto error;
20414 ++ }
20415 ++ }
20416 +
20417 + return xp;
20418 + error:
20419 +@@ -2898,7 +2926,7 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
20420 + if (x->props.extra_flags)
20421 + l += nla_total_size(sizeof(x->props.extra_flags));
20422 + if (x->xso.dev)
20423 +- l += nla_total_size(sizeof(x->xso));
20424 ++ l += nla_total_size(sizeof(struct xfrm_user_offload));
20425 + if (x->props.smark.v | x->props.smark.m) {
20426 + l += nla_total_size(sizeof(x->props.smark.v));
20427 + l += nla_total_size(sizeof(x->props.smark.m));
20428 +@@ -2909,6 +2937,9 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
20429 + /* Must count x->lastused as it may become non-zero behind our back. */
20430 + l += nla_total_size_64bit(sizeof(u64));
20431 +
20432 ++ if (x->mapping_maxage)
20433 ++ l += nla_total_size(sizeof(x->mapping_maxage));
20434 ++
20435 + return l;
20436 + }
20437 +
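Two small fixes sit in the xfrm_user.c section above: a failed copy_sec_ctx() in copy_to_user_state_extra() now aborts serialization instead of having its error overwritten by the next put, and xfrm_sa_len() sizes the offload attribute by the user-visible `struct xfrm_user_offload` rather than the larger in-kernel `x->xso`. The early-exit shape of the first fix, sketched with hypothetical put steps:

```c
#include <assert.h>

static int put_attr_ok(void)   { return 0; }
static int put_attr_fail(void) { return -1; }

/* Each nla_put-style step must check its result and jump to the common
 * exit; falling through lets a later successful put mask the error. */
static int serialize(int with_failing_attr)
{
	int ret = 0;

	if (with_failing_attr) {
		ret = put_attr_fail();
		if (ret)
			goto out;	/* the added early exit */
	}
	ret = put_attr_ok();
out:
	return ret;
}
```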
20438 +diff --git a/scripts/dtc/dtx_diff b/scripts/dtc/dtx_diff
20439 +index d3422ee15e300..f2bbde4bba86b 100755
20440 +--- a/scripts/dtc/dtx_diff
20441 ++++ b/scripts/dtc/dtx_diff
20442 +@@ -59,12 +59,8 @@ Otherwise DTx is treated as a dts source file (aka .dts).
20443 + or '/include/' to be processed.
20444 +
20445 + If DTx_1 and DTx_2 are in different architectures, then this script
20446 +- may not work since \${ARCH} is part of the include path. Two possible
20447 +- workarounds:
20448 +-
20449 +- `basename $0` \\
20450 +- <(ARCH=arch_of_dtx_1 `basename $0` DTx_1) \\
20451 +- <(ARCH=arch_of_dtx_2 `basename $0` DTx_2)
20452 ++ may not work since \${ARCH} is part of the include path. The following
20453 ++ workaround can be used:
20454 +
20455 + `basename $0` ARCH=arch_of_dtx_1 DTx_1 >tmp_dtx_1.dts
20456 + `basename $0` ARCH=arch_of_dtx_2 DTx_2 >tmp_dtx_2.dts
20457 +diff --git a/scripts/sphinx-pre-install b/scripts/sphinx-pre-install
20458 +index 828a8615a9181..8fcea769d44f5 100755
20459 +--- a/scripts/sphinx-pre-install
20460 ++++ b/scripts/sphinx-pre-install
20461 +@@ -76,6 +76,7 @@ my %texlive = (
20462 + 'ucs.sty' => 'texlive-ucs',
20463 + 'upquote.sty' => 'texlive-upquote',
20464 + 'wrapfig.sty' => 'texlive-wrapfig',
20465 ++ 'ctexhook.sty' => 'texlive-ctex',
20466 + );
20467 +
20468 + #
20469 +@@ -370,6 +371,9 @@ sub give_debian_hints()
20470 + );
20471 +
20472 + if ($pdf) {
20473 ++ check_missing_file(["/usr/share/texlive/texmf-dist/tex/latex/ctex/ctexhook.sty"],
20474 ++ "texlive-lang-chinese", 2);
20475 ++
20476 + check_missing_file(["/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"],
20477 + "fonts-dejavu", 2);
20478 +
20479 +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
20480 +index ff2191ae53528..86159b32921cc 100644
20481 +--- a/security/selinux/hooks.c
20482 ++++ b/security/selinux/hooks.c
20483 +@@ -947,18 +947,22 @@ out:
20484 + static int selinux_add_opt(int token, const char *s, void **mnt_opts)
20485 + {
20486 + struct selinux_mnt_opts *opts = *mnt_opts;
20487 ++ bool is_alloc_opts = false;
20488 +
20489 + if (token == Opt_seclabel) /* eaten and completely ignored */
20490 + return 0;
20491 +
20492 ++ if (!s)
20493 ++ return -ENOMEM;
20494 ++
20495 + if (!opts) {
20496 + opts = kzalloc(sizeof(struct selinux_mnt_opts), GFP_KERNEL);
20497 + if (!opts)
20498 + return -ENOMEM;
20499 + *mnt_opts = opts;
20500 ++ is_alloc_opts = true;
20501 + }
20502 +- if (!s)
20503 +- return -ENOMEM;
20504 ++
20505 + switch (token) {
20506 + case Opt_context:
20507 + if (opts->context || opts->defcontext)
20508 +@@ -983,6 +987,10 @@ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
20509 + }
20510 + return 0;
20511 + Einval:
20512 ++ if (is_alloc_opts) {
20513 ++ kfree(opts);
20514 ++ *mnt_opts = NULL;
20515 ++ }
20516 + pr_warn(SEL_MOUNT_FAIL_MSG);
20517 + return -EINVAL;
20518 + }
20519 +diff --git a/sound/core/jack.c b/sound/core/jack.c
20520 +index d6502dff247a8..dc2e06ae24149 100644
20521 +--- a/sound/core/jack.c
20522 ++++ b/sound/core/jack.c
20523 +@@ -54,10 +54,13 @@ static int snd_jack_dev_free(struct snd_device *device)
20524 + struct snd_card *card = device->card;
20525 + struct snd_jack_kctl *jack_kctl, *tmp_jack_kctl;
20526 +
20527 ++ down_write(&card->controls_rwsem);
20528 + list_for_each_entry_safe(jack_kctl, tmp_jack_kctl, &jack->kctl_list, list) {
20529 + list_del_init(&jack_kctl->list);
20530 + snd_ctl_remove(card, jack_kctl->kctl);
20531 + }
20532 ++ up_write(&card->controls_rwsem);
20533 ++
20534 + if (jack->private_free)
20535 + jack->private_free(jack);
20536 +
20537 +diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
20538 +index 77727a69c3c4e..d79febeebf0c5 100644
20539 +--- a/sound/core/oss/pcm_oss.c
20540 ++++ b/sound/core/oss/pcm_oss.c
20541 +@@ -2056,7 +2056,7 @@ static int snd_pcm_oss_set_trigger(struct snd_pcm_oss_file *pcm_oss_file, int tr
20542 + int err, cmd;
20543 +
20544 + #ifdef OSS_DEBUG
20545 +- pcm_dbg(substream->pcm, "pcm_oss: trigger = 0x%x\n", trigger);
20546 ++ pr_debug("pcm_oss: trigger = 0x%x\n", trigger);
20547 + #endif
20548 +
20549 + psubstream = pcm_oss_file->streams[SNDRV_PCM_STREAM_PLAYBACK];
20550 +diff --git a/sound/core/pcm.c b/sound/core/pcm.c
20551 +index 41cbdac5b1cfa..a8ae5928decda 100644
20552 +--- a/sound/core/pcm.c
20553 ++++ b/sound/core/pcm.c
20554 +@@ -810,7 +810,11 @@ EXPORT_SYMBOL(snd_pcm_new_internal);
20555 + static void free_chmap(struct snd_pcm_str *pstr)
20556 + {
20557 + if (pstr->chmap_kctl) {
20558 +- snd_ctl_remove(pstr->pcm->card, pstr->chmap_kctl);
20559 ++ struct snd_card *card = pstr->pcm->card;
20560 ++
20561 ++ down_write(&card->controls_rwsem);
20562 ++ snd_ctl_remove(card, pstr->chmap_kctl);
20563 ++ up_write(&card->controls_rwsem);
20564 + pstr->chmap_kctl = NULL;
20565 + }
20566 + }
20567 +diff --git a/sound/core/seq/seq_queue.c b/sound/core/seq/seq_queue.c
20568 +index 71a6ea62c3be7..4ff0b927230c2 100644
20569 +--- a/sound/core/seq/seq_queue.c
20570 ++++ b/sound/core/seq/seq_queue.c
20571 +@@ -234,12 +234,15 @@ struct snd_seq_queue *snd_seq_queue_find_name(char *name)
20572 +
20573 + /* -------------------------------------------------------- */
20574 +
20575 ++#define MAX_CELL_PROCESSES_IN_QUEUE 1000
20576 ++
20577 + void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
20578 + {
20579 + unsigned long flags;
20580 + struct snd_seq_event_cell *cell;
20581 + snd_seq_tick_time_t cur_tick;
20582 + snd_seq_real_time_t cur_time;
20583 ++ int processed = 0;
20584 +
20585 + if (q == NULL)
20586 + return;
20587 +@@ -262,6 +265,8 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
20588 + if (!cell)
20589 + break;
20590 + snd_seq_dispatch_event(cell, atomic, hop);
20591 ++ if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
20592 ++ goto out; /* the rest processed at the next batch */
20593 + }
20594 +
20595 + /* Process time queue... */
20596 +@@ -271,14 +276,19 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
20597 + if (!cell)
20598 + break;
20599 + snd_seq_dispatch_event(cell, atomic, hop);
20600 ++ if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
20601 ++ goto out; /* the rest processed at the next batch */
20602 + }
20603 +
20604 ++ out:
20605 + /* free lock */
20606 + spin_lock_irqsave(&q->check_lock, flags);
20607 + if (q->check_again) {
20608 + q->check_again = 0;
20609 +- spin_unlock_irqrestore(&q->check_lock, flags);
20610 +- goto __again;
20611 ++ if (processed < MAX_CELL_PROCESSES_IN_QUEUE) {
20612 ++ spin_unlock_irqrestore(&q->check_lock, flags);
20613 ++ goto __again;
20614 ++ }
20615 + }
20616 + q->check_blocked = 0;
20617 + spin_unlock_irqrestore(&q->check_lock, flags);
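The seq_queue.c hunk caps one pass of snd_seq_check_queue() at MAX_CELL_PROCESSES_IN_QUEUE dispatched cells, so a flooded sequencer queue can no longer monopolize the CPU; leftovers are handled by the next batch. The batching logic in isolation (an int counter standing in for the cell queue — an assumption, the locking is omitted):

```c
#include <assert.h>

#define MAX_CELL_PROCESSES_IN_QUEUE 1000

/* Dispatch at most MAX_CELL_PROCESSES_IN_QUEUE cells per pass and
 * return how many were processed; callers re-run for the remainder. */
static int drain_batch(int *pending)
{
	int processed = 0;

	while (*pending > 0) {
		(*pending)--;	/* snd_seq_dispatch_event() per cell */
		if (++processed >= MAX_CELL_PROCESSES_IN_QUEUE)
			break;	/* the rest processed at the next batch */
	}
	return processed;
}
```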
20618 +diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
20619 +index 6dece719be669..39281106477eb 100644
20620 +--- a/sound/pci/hda/hda_codec.c
20621 ++++ b/sound/pci/hda/hda_codec.c
20622 +@@ -1727,8 +1727,11 @@ void snd_hda_ctls_clear(struct hda_codec *codec)
20623 + {
20624 + int i;
20625 + struct hda_nid_item *items = codec->mixers.list;
20626 ++
20627 ++ down_write(&codec->card->controls_rwsem);
20628 + for (i = 0; i < codec->mixers.used; i++)
20629 + snd_ctl_remove(codec->card, items[i].kctl);
20630 ++ up_write(&codec->card->controls_rwsem);
20631 + snd_array_free(&codec->mixers);
20632 + snd_array_free(&codec->nids);
20633 + }
20634 +diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
20635 +index 619fb9a031e39..db8a41aaa3859 100644
20636 +--- a/sound/soc/codecs/rt5663.c
20637 ++++ b/sound/soc/codecs/rt5663.c
20638 +@@ -3461,6 +3461,7 @@ static void rt5663_calibrate(struct rt5663_priv *rt5663)
20639 + static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
20640 + {
20641 + int table_size;
20642 ++ int ret;
20643 +
20644 + device_property_read_u32(dev, "realtek,dc_offset_l_manual",
20645 + &rt5663->pdata.dc_offset_l_manual);
20646 +@@ -3477,9 +3478,11 @@ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
20647 + table_size = sizeof(struct impedance_mapping_table) *
20648 + rt5663->pdata.impedance_sensing_num;
20649 + rt5663->imp_table = devm_kzalloc(dev, table_size, GFP_KERNEL);
20650 +- device_property_read_u32_array(dev,
20651 ++ ret = device_property_read_u32_array(dev,
20652 + "realtek,impedance_sensing_table",
20653 + (u32 *)rt5663->imp_table, table_size);
20654 ++ if (ret)
20655 ++ return ret;
20656 + }
20657 +
20658 + return 0;
20659 +@@ -3504,8 +3507,11 @@ static int rt5663_i2c_probe(struct i2c_client *i2c,
20660 +
20661 + if (pdata)
20662 + rt5663->pdata = *pdata;
20663 +- else
20664 +- rt5663_parse_dp(rt5663, &i2c->dev);
20665 ++ else {
20666 ++ ret = rt5663_parse_dp(rt5663, &i2c->dev);
20667 ++ if (ret)
20668 ++ return ret;
20669 ++ }
20670 +
20671 + for (i = 0; i < ARRAY_SIZE(rt5663->supplies); i++)
20672 + rt5663->supplies[i].supply = rt5663_supply_names[i];
20673 +diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
20674 +index 02c81d2e34ad0..5e3c71f025f45 100644
20675 +--- a/sound/soc/fsl/fsl_asrc.c
20676 ++++ b/sound/soc/fsl/fsl_asrc.c
20677 +@@ -19,6 +19,7 @@
20678 + #include "fsl_asrc.h"
20679 +
20680 + #define IDEAL_RATIO_DECIMAL_DEPTH 26
20681 ++#define DIVIDER_NUM 64
20682 +
20683 + #define pair_err(fmt, ...) \
20684 + dev_err(&asrc->pdev->dev, "Pair %c: " fmt, 'A' + index, ##__VA_ARGS__)
20685 +@@ -101,6 +102,55 @@ static unsigned char clk_map_imx8qxp[2][ASRC_CLK_MAP_LEN] = {
20686 + },
20687 + };
20688 +
20689 ++/*
20690 ++ * According to RM, the divider range is 1 ~ 8,
20691 ++ * prescaler is power of 2 from 1 ~ 128.
20692 ++ */
20693 ++static int asrc_clk_divider[DIVIDER_NUM] = {
20694 ++ 1, 2, 4, 8, 16, 32, 64, 128, /* divider = 1 */
20695 ++ 2, 4, 8, 16, 32, 64, 128, 256, /* divider = 2 */
20696 ++ 3, 6, 12, 24, 48, 96, 192, 384, /* divider = 3 */
20697 ++ 4, 8, 16, 32, 64, 128, 256, 512, /* divider = 4 */
20698 ++ 5, 10, 20, 40, 80, 160, 320, 640, /* divider = 5 */
20699 ++ 6, 12, 24, 48, 96, 192, 384, 768, /* divider = 6 */
20700 ++ 7, 14, 28, 56, 112, 224, 448, 896, /* divider = 7 */
20701 ++ 8, 16, 32, 64, 128, 256, 512, 1024, /* divider = 8 */
20702 ++};
20703 ++
20704 ++/*
20705 ++ * Check if the divider is available for internal ratio mode
20706 ++ */
20707 ++static bool fsl_asrc_divider_avail(int clk_rate, int rate, int *div)
20708 ++{
20709 ++ u32 rem, i;
20710 ++ u64 n;
20711 ++
20712 ++ if (div)
20713 ++ *div = 0;
20714 ++
20715 ++ if (clk_rate == 0 || rate == 0)
20716 ++ return false;
20717 ++
20718 ++ n = clk_rate;
20719 ++ rem = do_div(n, rate);
20720 ++
20721 ++ if (div)
20722 ++ *div = n;
20723 ++
20724 ++ if (rem != 0)
20725 ++ return false;
20726 ++
20727 ++ for (i = 0; i < DIVIDER_NUM; i++) {
20728 ++ if (n == asrc_clk_divider[i])
20729 ++ break;
20730 ++ }
20731 ++
20732 ++ if (i == DIVIDER_NUM)
20733 ++ return false;
20734 ++
20735 ++ return true;
20736 ++}
20737 ++
20738 + /**
20739 + * fsl_asrc_sel_proc - Select the pre-processing and post-processing options
20740 + * @inrate: input sample rate
20741 +@@ -330,12 +380,12 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
20742 + enum asrc_word_width input_word_width;
20743 + enum asrc_word_width output_word_width;
20744 + u32 inrate, outrate, indiv, outdiv;
20745 +- u32 clk_index[2], div[2], rem[2];
20746 ++ u32 clk_index[2], div[2];
20747 + u64 clk_rate;
20748 + int in, out, channels;
20749 + int pre_proc, post_proc;
20750 + struct clk *clk;
20751 +- bool ideal;
20752 ++ bool ideal, div_avail;
20753 +
20754 + if (!config) {
20755 + pair_err("invalid pair config\n");
20756 +@@ -415,8 +465,7 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
20757 + clk = asrc_priv->asrck_clk[clk_index[ideal ? OUT : IN]];
20758 +
20759 + clk_rate = clk_get_rate(clk);
20760 +- rem[IN] = do_div(clk_rate, inrate);
20761 +- div[IN] = (u32)clk_rate;
20762 ++ div_avail = fsl_asrc_divider_avail(clk_rate, inrate, &div[IN]);
20763 +
20764 + /*
20765 + * The divider range is [1, 1024], defined by the hardware. For non-
20766 +@@ -425,7 +474,7 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
20767 + * only result in different converting speeds. So remainder does not
20768 + * matter, as long as we keep the divider within its valid range.
20769 + */
20770 +- if (div[IN] == 0 || (!ideal && (div[IN] > 1024 || rem[IN] != 0))) {
20771 ++ if (div[IN] == 0 || (!ideal && !div_avail)) {
20772 + pair_err("failed to support input sample rate %dHz by asrck_%x\n",
20773 + inrate, clk_index[ideal ? OUT : IN]);
20774 + return -EINVAL;
20775 +@@ -436,13 +485,12 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
20776 + clk = asrc_priv->asrck_clk[clk_index[OUT]];
20777 + clk_rate = clk_get_rate(clk);
20778 + if (ideal && use_ideal_rate)
20779 +- rem[OUT] = do_div(clk_rate, IDEAL_RATIO_RATE);
20780 ++ div_avail = fsl_asrc_divider_avail(clk_rate, IDEAL_RATIO_RATE, &div[OUT]);
20781 + else
20782 +- rem[OUT] = do_div(clk_rate, outrate);
20783 +- div[OUT] = clk_rate;
20784 ++ div_avail = fsl_asrc_divider_avail(clk_rate, outrate, &div[OUT]);
20785 +
20786 + /* Output divider has the same limitation as the input one */
20787 +- if (div[OUT] == 0 || (!ideal && (div[OUT] > 1024 || rem[OUT] != 0))) {
20788 ++ if (div[OUT] == 0 || (!ideal && !div_avail)) {
20789 + pair_err("failed to support output sample rate %dHz by asrck_%x\n",
20790 + outrate, clk_index[OUT]);
20791 + return -EINVAL;
20792 +@@ -621,8 +669,7 @@ static void fsl_asrc_select_clk(struct fsl_asrc_priv *asrc_priv,
20793 + clk_index = asrc_priv->clk_map[j][i];
20794 + clk_rate = clk_get_rate(asrc_priv->asrck_clk[clk_index]);
20795 + /* Only match a perfect clock source with no remainder */
20796 +- if (clk_rate != 0 && (clk_rate / rate[j]) <= 1024 &&
20797 +- (clk_rate % rate[j]) == 0)
20798 ++ if (fsl_asrc_divider_avail(clk_rate, rate[j], NULL))
20799 + break;
20800 + }
20801 +
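fsl_asrc_divider_avail() above replaces the plain "quotient <= 1024 with no remainder" test: per the comment in the hunk, the hardware clock path is a divider of 1..8 times a power-of-two prescaler up to 128, so only the 64 products listed in asrc_clk_divider[] are achievable. A userspace re-implementation of the same check (integer rates assumed, nested loops instead of the lookup table):

```c
#include <assert.h>
#include <stdbool.h>

/* True when clk_rate/rate divides evenly AND the quotient is
 * divider (1..8) x prescaler (1,2,4,...,128); optionally returns
 * the raw quotient through *div, as the kernel helper does. */
static bool divider_avail(unsigned long clk_rate, unsigned long rate,
			  unsigned long *div)
{
	unsigned long n;
	int d, p;

	if (div)
		*div = 0;
	if (clk_rate == 0 || rate == 0)
		return false;

	n = clk_rate / rate;
	if (div)
		*div = n;
	if (clk_rate % rate != 0)
		return false;

	for (d = 1; d <= 8; d++)
		for (p = 1; p <= 128; p <<= 1)
			if (n == (unsigned long)d * p)
				return true;
	return false;
}
```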
20802 +diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
20803 +index 69aeb0e71844d..0d4efbed41dab 100644
20804 +--- a/sound/soc/fsl/fsl_mqs.c
20805 ++++ b/sound/soc/fsl/fsl_mqs.c
20806 +@@ -337,4 +337,4 @@ module_platform_driver(fsl_mqs_driver);
20807 + MODULE_AUTHOR("Shengjiu Wang <Shengjiu.Wang@×××.com>");
20808 + MODULE_DESCRIPTION("MQS codec driver");
20809 + MODULE_LICENSE("GPL v2");
20810 +-MODULE_ALIAS("platform: fsl-mqs");
20811 ++MODULE_ALIAS("platform:fsl-mqs");
20812 +diff --git a/sound/soc/intel/catpt/dsp.c b/sound/soc/intel/catpt/dsp.c
20813 +index 9e807b9417321..38a92bbc1ed56 100644
20814 +--- a/sound/soc/intel/catpt/dsp.c
20815 ++++ b/sound/soc/intel/catpt/dsp.c
20816 +@@ -65,6 +65,7 @@ static int catpt_dma_memcpy(struct catpt_dev *cdev, struct dma_chan *chan,
20817 + {
20818 + struct dma_async_tx_descriptor *desc;
20819 + enum dma_status status;
20820 ++ int ret;
20821 +
20822 + desc = dmaengine_prep_dma_memcpy(chan, dst_addr, src_addr, size,
20823 + DMA_CTRL_ACK);
20824 +@@ -77,13 +78,22 @@ static int catpt_dma_memcpy(struct catpt_dev *cdev, struct dma_chan *chan,
20825 + catpt_updatel_shim(cdev, HMDC,
20826 + CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id),
20827 + CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id));
20828 +- dmaengine_submit(desc);
20829 ++
20830 ++ ret = dma_submit_error(dmaengine_submit(desc));
20831 ++ if (ret) {
20832 ++ dev_err(cdev->dev, "submit tx failed: %d\n", ret);
20833 ++ goto clear_hdda;
20834 ++ }
20835 ++
20836 + status = dma_wait_for_async_tx(desc);
20837 ++ ret = (status == DMA_COMPLETE) ? 0 : -EPROTO;
20838 ++
20839 ++clear_hdda:
20840 + /* regardless of status, disable access to HOST memory in demand mode */
20841 + catpt_updatel_shim(cdev, HMDC,
20842 + CATPT_HMDC_HDDA(CATPT_DMA_DEVID, chan->chan_id), 0);
20843 +
20844 +- return (status == DMA_COMPLETE) ? 0 : -EPROTO;
20845 ++ return ret;
20846 + }
20847 +
20848 + int catpt_dma_memcpy_todsp(struct catpt_dev *cdev, struct dma_chan *chan,
20849 +diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
20850 +index fc94314bfc02f..3bdd4931316cd 100644
20851 +--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
20852 ++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
20853 +@@ -180,6 +180,9 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
20854 + if (ret)
20855 + dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
20856 + __func__, ret);
20857 ++
20858 ++ of_node_put(codec_node);
20859 ++ of_node_put(platform_node);
20860 + return ret;
20861 + }
20862 +
20863 +diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
20864 +index 0f28dc2217c09..390da5bf727eb 100644
20865 +--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
20866 ++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
20867 +@@ -218,6 +218,8 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
20868 + if (ret)
20869 + dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
20870 + __func__, ret);
20871 ++
20872 ++ of_node_put(platform_node);
20873 + return ret;
20874 + }
20875 +
20876 +diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
20877 +index 077c6ee067806..c8e4e85e10575 100644
20878 +--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
20879 ++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5676.c
20880 +@@ -285,6 +285,8 @@ static int mt8173_rt5650_rt5676_dev_probe(struct platform_device *pdev)
20881 + if (ret)
20882 + dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
20883 + __func__, ret);
20884 ++
20885 ++ of_node_put(platform_node);
20886 + return ret;
20887 + }
20888 +
20889 +diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650.c b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
20890 +index c28ebf891cb05..e168d31f44459 100644
20891 +--- a/sound/soc/mediatek/mt8173/mt8173-rt5650.c
20892 ++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650.c
20893 +@@ -323,6 +323,8 @@ static int mt8173_rt5650_dev_probe(struct platform_device *pdev)
20894 + if (ret)
20895 + dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
20896 + __func__, ret);
20897 ++
20898 ++ of_node_put(platform_node);
20899 + return ret;
20900 + }
20901 +
20902 +diff --git a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
20903 +index 20d31b69a5c00..9cc0f26b08fbc 100644
20904 +--- a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
20905 ++++ b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
20906 +@@ -787,7 +787,11 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
20907 + return ret;
20908 + }
20909 +
20910 +- return devm_snd_soc_register_card(&pdev->dev, card);
20911 ++ ret = devm_snd_soc_register_card(&pdev->dev, card);
20912 ++
20913 ++ of_node_put(platform_node);
20914 ++ of_node_put(hdmi_codec);
20915 ++ return ret;
20916 + }
20917 +
20918 + #ifdef CONFIG_OF
20919 +diff --git a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
20920 +index 79ba2f2d84522..14ce8b93597f3 100644
20921 +--- a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
20922 ++++ b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
20923 +@@ -720,7 +720,12 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
20924 + __func__, ret);
20925 + }
20926 +
20927 +- return devm_snd_soc_register_card(&pdev->dev, card);
20928 ++ ret = devm_snd_soc_register_card(&pdev->dev, card);
20929 ++
20930 ++ of_node_put(platform_node);
20931 ++ of_node_put(ec_codec);
20932 ++ of_node_put(hdmi_codec);
20933 ++ return ret;
20934 + }
20935 +
20936 + #ifdef CONFIG_OF
20937 +diff --git a/sound/soc/samsung/idma.c b/sound/soc/samsung/idma.c
20938 +index 66bcc2f97544b..c3f1b054e2389 100644
20939 +--- a/sound/soc/samsung/idma.c
20940 ++++ b/sound/soc/samsung/idma.c
20941 +@@ -360,6 +360,8 @@ static int preallocate_idma_buffer(struct snd_pcm *pcm, int stream)
20942 + buf->addr = idma.lp_tx_addr;
20943 + buf->bytes = idma_hardware.buffer_bytes_max;
20944 + buf->area = (unsigned char * __force)ioremap(buf->addr, buf->bytes);
20945 ++ if (!buf->area)
20946 ++ return -ENOMEM;
20947 +
20948 + return 0;
20949 + }
20950 +diff --git a/sound/soc/uniphier/Kconfig b/sound/soc/uniphier/Kconfig
20951 +index aa3592ee1358b..ddfa6424c656b 100644
20952 +--- a/sound/soc/uniphier/Kconfig
20953 ++++ b/sound/soc/uniphier/Kconfig
20954 +@@ -23,7 +23,6 @@ config SND_SOC_UNIPHIER_LD11
20955 + tristate "UniPhier LD11/LD20 Device Driver"
20956 + depends on SND_SOC_UNIPHIER
20957 + select SND_SOC_UNIPHIER_AIO
20958 +- select SND_SOC_UNIPHIER_AIO_DMA
20959 + help
20960 + This adds ASoC driver for Socionext UniPhier LD11/LD20
20961 + input and output that can be used with other codecs.
20962 +@@ -34,7 +33,6 @@ config SND_SOC_UNIPHIER_PXS2
20963 + tristate "UniPhier PXs2 Device Driver"
20964 + depends on SND_SOC_UNIPHIER
20965 + select SND_SOC_UNIPHIER_AIO
20966 +- select SND_SOC_UNIPHIER_AIO_DMA
20967 + help
20968 + This adds ASoC driver for Socionext UniPhier PXs2
20969 + input and output that can be used with other codecs.
20970 +diff --git a/sound/usb/format.c b/sound/usb/format.c
20971 +index 4693384db0695..e8a63ea2189d1 100644
20972 +--- a/sound/usb/format.c
20973 ++++ b/sound/usb/format.c
20974 +@@ -365,7 +365,7 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
20975 + for (rate = min; rate <= max; rate += res) {
20976 +
20977 + /* Filter out invalid rates on Presonus Studio 1810c */
20978 +- if (chip->usb_id == USB_ID(0x0194f, 0x010c) &&
20979 ++ if (chip->usb_id == USB_ID(0x194f, 0x010c) &&
20980 + !s1810c_valid_sample_rate(fp, rate))
20981 + goto skip_rate;
20982 +
20983 +diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
20984 +index 8297117f4766e..86fdd669f3fd7 100644
20985 +--- a/sound/usb/mixer_quirks.c
20986 ++++ b/sound/usb/mixer_quirks.c
20987 +@@ -3033,7 +3033,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
20988 + err = snd_rme_controls_create(mixer);
20989 + break;
20990 +
20991 +- case USB_ID(0x0194f, 0x010c): /* Presonus Studio 1810c */
20992 ++ case USB_ID(0x194f, 0x010c): /* Presonus Studio 1810c */
20993 + err = snd_sc1810_init_mixer(mixer);
20994 + break;
20995 + case USB_ID(0x2a39, 0x3fb0): /* RME Babyface Pro FS */
20996 +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
20997 +index 75d4d317b34b6..6333a2ecb848a 100644
20998 +--- a/sound/usb/quirks.c
20999 ++++ b/sound/usb/quirks.c
21000 +@@ -1310,7 +1310,7 @@ int snd_usb_apply_interface_quirk(struct snd_usb_audio *chip,
21001 + if (chip->usb_id == USB_ID(0x0763, 0x2012))
21002 + return fasttrackpro_skip_setting_quirk(chip, iface, altno);
21003 + /* presonus studio 1810c: skip altsets incompatible with device_setup */
21004 +- if (chip->usb_id == USB_ID(0x0194f, 0x010c))
21005 ++ if (chip->usb_id == USB_ID(0x194f, 0x010c))
21006 + return s1810c_skip_setting_quirk(chip, iface, altno);
21007 +
21008 +
21009 +diff --git a/tools/bpf/bpftool/Documentation/Makefile b/tools/bpf/bpftool/Documentation/Makefile
21010 +index f33cb02de95cf..3601b1d1974ca 100644
21011 +--- a/tools/bpf/bpftool/Documentation/Makefile
21012 ++++ b/tools/bpf/bpftool/Documentation/Makefile
21013 +@@ -1,6 +1,5 @@
21014 + # SPDX-License-Identifier: GPL-2.0-only
21015 + include ../../../scripts/Makefile.include
21016 +-include ../../../scripts/utilities.mak
21017 +
21018 + INSTALL ?= install
21019 + RM ?= rm -f
21020 +diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
21021 +index f60e6ad3a1dff..1896ef69b4492 100644
21022 +--- a/tools/bpf/bpftool/Makefile
21023 ++++ b/tools/bpf/bpftool/Makefile
21024 +@@ -1,6 +1,5 @@
21025 + # SPDX-License-Identifier: GPL-2.0-only
21026 + include ../../scripts/Makefile.include
21027 +-include ../../scripts/utilities.mak
21028 +
21029 + ifeq ($(srctree),)
21030 + srctree := $(patsubst %/,%,$(dir $(CURDIR)))
21031 +diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
21032 +index c58a135dc355e..1854d6b978604 100644
21033 +--- a/tools/bpf/bpftool/main.c
21034 ++++ b/tools/bpf/bpftool/main.c
21035 +@@ -396,6 +396,8 @@ int main(int argc, char **argv)
21036 + };
21037 + int opt, ret;
21038 +
21039 ++ setlinebuf(stdout);
21040 ++
21041 + last_do_help = do_help;
21042 + pretty_output = false;
21043 + json_output = false;
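The bpftool main.c hunk calls setlinebuf(stdout) at the top of main() so output is flushed per line even when stdout is a pipe (stdio otherwise switches to full buffering there, which can reorder messages against stderr). setlinebuf() is the BSD shorthand; the portable equivalent:

```c
#include <stdio.h>

/* Same effect as setlinebuf(stream): line-buffer the stream.
 * Must run before the first write to the stream to be reliable. */
static void make_line_buffered(FILE *stream)
{
	setvbuf(stream, NULL, _IOLBF, 0);
}
```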
21044 +diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h
21045 +index 2551e9b71167b..b8cecb66d28b7 100644
21046 +--- a/tools/include/nolibc/nolibc.h
21047 ++++ b/tools/include/nolibc/nolibc.h
21048 +@@ -422,16 +422,22 @@ struct stat {
21049 + })
21050 +
21051 + /* startup code */
21052 ++/*
21053 ++ * x86-64 System V ABI mandates:
21054 ++ * 1) %rsp must be 16-byte aligned right before the function call.
21055 ++ * 2) The deepest stack frame should be zero (the %rbp).
21056 ++ *
21057 ++ */
21058 + asm(".section .text\n"
21059 + ".global _start\n"
21060 + "_start:\n"
21061 + "pop %rdi\n" // argc (first arg, %rdi)
21062 + "mov %rsp, %rsi\n" // argv[] (second arg, %rsi)
21063 + "lea 8(%rsi,%rdi,8),%rdx\n" // then a NULL then envp (third arg, %rdx)
21064 +- "and $-16, %rsp\n" // x86 ABI : esp must be 16-byte aligned when
21065 +- "sub $8, %rsp\n" // entering the callee
21066 ++ "xor %ebp, %ebp\n" // zero the stack frame
21067 ++ "and $-16, %rsp\n" // x86 ABI : esp must be 16-byte aligned before call
21068 + "call main\n" // main() returns the status code, we'll exit with it.
21069 +- "movzb %al, %rdi\n" // retrieve exit code from 8 lower bits
21070 ++ "mov %eax, %edi\n" // retrieve exit code (32 bit)
21071 + "mov $60, %rax\n" // NR_exit == 60
21072 + "syscall\n" // really exit
21073 + "hlt\n" // ensure it does not return
21074 +@@ -600,20 +606,28 @@ struct sys_stat_struct {
21075 + })
21076 +
21077 + /* startup code */
21078 ++/*
21079 ++ * i386 System V ABI mandates:
21080 ++ * 1) last pushed argument must be 16-byte aligned.
21081 ++ * 2) The deepest stack frame should be set to zero
21082 ++ *
21083 ++ */
21084 + asm(".section .text\n"
21085 + ".global _start\n"
21086 + "_start:\n"
21087 + "pop %eax\n" // argc (first arg, %eax)
21088 + "mov %esp, %ebx\n" // argv[] (second arg, %ebx)
21089 + "lea 4(%ebx,%eax,4),%ecx\n" // then a NULL then envp (third arg, %ecx)
21090 +- "and $-16, %esp\n" // x86 ABI : esp must be 16-byte aligned when
21091 ++ "xor %ebp, %ebp\n" // zero the stack frame
21092 ++ "and $-16, %esp\n" // x86 ABI : esp must be 16-byte aligned before
21093 ++ "sub $4, %esp\n" // the call instruction (args are aligned)
21094 + "push %ecx\n" // push all registers on the stack so that we
21095 + "push %ebx\n" // support both regparm and plain stack modes
21096 + "push %eax\n"
21097 + "call main\n" // main() returns the status code in %eax
21098 +- "movzbl %al, %ebx\n" // retrieve exit code from lower 8 bits
21099 +- "movl $1, %eax\n" // NR_exit == 1
21100 +- "int $0x80\n" // exit now
21101 ++ "mov %eax, %ebx\n" // retrieve exit code (32-bit int)
21102 ++ "movl $1, %eax\n" // NR_exit == 1
21103 ++ "int $0x80\n" // exit now
21104 + "hlt\n" // ensure it does not
21105 + "");
21106 +
21107 +@@ -797,7 +811,6 @@ asm(".section .text\n"
21108 + "and %r3, %r1, $-8\n" // AAPCS : sp must be 8-byte aligned in the
21109 + "mov %sp, %r3\n" // callee, an bl doesn't push (lr=pc)
21110 + "bl main\n" // main() returns the status code, we'll exit with it.
21111 +- "and %r0, %r0, $0xff\n" // limit exit code to 8 bits
21112 + "movs r7, $1\n" // NR_exit == 1
21113 + "svc $0x00\n"
21114 + "");
21115 +@@ -994,7 +1007,6 @@ asm(".section .text\n"
21116 + "add x2, x2, x1\n" // + argv
21117 + "and sp, x1, -16\n" // sp must be 16-byte aligned in the callee
21118 + "bl main\n" // main() returns the status code, we'll exit with it.
21119 +- "and x0, x0, 0xff\n" // limit exit code to 8 bits
21120 + "mov x8, 93\n" // NR_exit == 93
21121 + "svc #0\n"
21122 + "");
21123 +@@ -1199,7 +1211,7 @@ asm(".section .text\n"
21124 + "addiu $sp,$sp,-16\n" // the callee expects to save a0..a3 there!
21125 + "jal main\n" // main() returns the status code, we'll exit with it.
21126 + "nop\n" // delayed slot
21127 +- "and $a0, $v0, 0xff\n" // limit exit code to 8 bits
21128 ++ "move $a0, $v0\n" // retrieve 32-bit exit code from v0
21129 + "li $v0, 4001\n" // NR_exit == 4001
21130 + "syscall\n"
21131 + ".end __start\n"
21132 +@@ -1397,7 +1409,6 @@ asm(".section .text\n"
21133 + "add a2,a2,a1\n" // + argv
21134 + "andi sp,a1,-16\n" // sp must be 16-byte aligned
21135 + "call main\n" // main() returns the status code, we'll exit with it.
21136 +- "andi a0, a0, 0xff\n" // limit exit code to 8 bits
21137 + "li a7, 93\n" // NR_exit == 93
21138 + "ecall\n"
21139 + "");
21140 +diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
21141 +index 5cda5565777a0..0af163abaa62b 100644
21142 +--- a/tools/perf/util/debug.c
21143 ++++ b/tools/perf/util/debug.c
21144 +@@ -145,7 +145,7 @@ static int trace_event_printer(enum binary_printer_ops op,
21145 + break;
21146 + case BINARY_PRINT_CHAR_DATA:
21147 + printed += color_fprintf(fp, color, "%c",
21148 +- isprint(ch) ? ch : '.');
21149 ++ isprint(ch) && isascii(ch) ? ch : '.');
21150 + break;
21151 + case BINARY_PRINT_CHAR_PAD:
21152 + printed += color_fprintf(fp, color, " ");
21153 +diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
21154 +index 1cad6051d8b08..1a1cbd16d76d4 100644
21155 +--- a/tools/perf/util/evsel.c
21156 ++++ b/tools/perf/util/evsel.c
21157 +@@ -1014,6 +1014,17 @@ struct evsel_config_term *__evsel__get_config_term(struct evsel *evsel, enum evs
21158 + return found_term;
21159 + }
21160 +
21161 ++static void evsel__set_default_freq_period(struct record_opts *opts,
21162 ++ struct perf_event_attr *attr)
21163 ++{
21164 ++ if (opts->freq) {
21165 ++ attr->freq = 1;
21166 ++ attr->sample_freq = opts->freq;
21167 ++ } else {
21168 ++ attr->sample_period = opts->default_interval;
21169 ++ }
21170 ++}
21171 ++
21172 + /*
21173 + * The enable_on_exec/disabled value strategy:
21174 + *
21175 +@@ -1080,14 +1091,12 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
21176 + * We default some events to have a default interval. But keep
21177 + * it a weak assumption overridable by the user.
21178 + */
21179 +- if (!attr->sample_period) {
21180 +- if (opts->freq) {
21181 +- attr->freq = 1;
21182 +- attr->sample_freq = opts->freq;
21183 +- } else {
21184 +- attr->sample_period = opts->default_interval;
21185 +- }
21186 +- }
21187 ++ if ((evsel->is_libpfm_event && !attr->sample_period) ||
21188 ++ (!evsel->is_libpfm_event && (!attr->sample_period ||
21189 ++ opts->user_freq != UINT_MAX ||
21190 ++ opts->user_interval != ULLONG_MAX)))
21191 ++ evsel__set_default_freq_period(opts, attr);
21192 ++
21193 + /*
21194 + * If attr->freq was set (here or earlier), ask for period
21195 + * to be sampled.
21196 +diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
21197 +index 07db6cfad65b9..d103084fcd56c 100644
21198 +--- a/tools/perf/util/probe-event.c
21199 ++++ b/tools/perf/util/probe-event.c
21200 +@@ -3035,6 +3035,9 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
21201 + for (j = 0; j < num_matched_functions; j++) {
21202 + sym = syms[j];
21203 +
21204 ++ if (sym->type != STT_FUNC)
21205 ++ continue;
21206 ++
21207 + /* There can be duplicated symbols in the map */
21208 + for (i = 0; i < j; i++)
21209 + if (sym->start == syms[i]->start) {
21210 +diff --git a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
21211 +index fafeddaad6a99..23915be6172d6 100644
21212 +--- a/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
21213 ++++ b/tools/testing/selftests/bpf/prog_tests/skb_ctx.c
21214 +@@ -105,4 +105,6 @@ void test_skb_ctx(void)
21215 + "ctx_out_mark",
21216 + "skb->mark == %u, expected %d\n",
21217 + skb.mark, 10);
21218 ++
21219 ++ bpf_object__close(obj);
21220 + }
21221 +diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
21222 +index 42be3b9258301..076cf4325f783 100644
21223 +--- a/tools/testing/selftests/clone3/clone3.c
21224 ++++ b/tools/testing/selftests/clone3/clone3.c
21225 +@@ -52,6 +52,12 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
21226 + size = sizeof(struct __clone_args);
21227 +
21228 + switch (test_mode) {
21229 ++ case CLONE3_ARGS_NO_TEST:
21230 ++ /*
21231 ++ * Uses default 'flags' and 'SIGCHLD'
21232 ++ * assignment.
21233 ++ */
21234 ++ break;
21235 + case CLONE3_ARGS_ALL_0:
21236 + args.flags = 0;
21237 + args.exit_signal = 0;
21238 +diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc b/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
21239 +index 98166fa3eb91c..34fb89b0c61fa 100644
21240 +--- a/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
21241 ++++ b/tools/testing/selftests/ftrace/test.d/kprobe/profile.tc
21242 +@@ -1,6 +1,6 @@
21243 + #!/bin/sh
21244 + # SPDX-License-Identifier: GPL-2.0
21245 +-# description: Kprobe dynamic event - adding and removing
21246 ++# description: Kprobe profile
21247 + # requires: kprobe_events
21248 +
21249 + ! grep -q 'myevent' kprobe_profile
21250 +diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
21251 +index edce85420d193..5ecb9718e1616 100644
21252 +--- a/tools/testing/selftests/kselftest_harness.h
21253 ++++ b/tools/testing/selftests/kselftest_harness.h
21254 +@@ -965,7 +965,7 @@ void __run_test(struct __fixture_metadata *f,
21255 + t->passed = 1;
21256 + t->skip = 0;
21257 + t->trigger = 0;
21258 +- t->step = 0;
21259 ++ t->step = 1;
21260 + t->no_print = 0;
21261 + memset(t->results->reason, 0, sizeof(t->results->reason));
21262 +
21263 +diff --git a/tools/testing/selftests/powerpc/security/spectre_v2.c b/tools/testing/selftests/powerpc/security/spectre_v2.c
21264 +index adc2b7294e5fd..83647b8277e7d 100644
21265 +--- a/tools/testing/selftests/powerpc/security/spectre_v2.c
21266 ++++ b/tools/testing/selftests/powerpc/security/spectre_v2.c
21267 +@@ -193,7 +193,7 @@ int spectre_v2_test(void)
21268 + * We are not vulnerable and reporting otherwise, so
21269 + * missing such a mismatch is safe.
21270 + */
21271 +- if (state == VULNERABLE)
21272 ++ if (miss_percent > 95)
21273 + return 4;
21274 +
21275 + return 1;
21276 +diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
21277 +index c9404ef9698e2..426dccc08f906 100644
21278 +--- a/tools/testing/selftests/vm/hmm-tests.c
21279 ++++ b/tools/testing/selftests/vm/hmm-tests.c
21280 +@@ -1242,6 +1242,48 @@ TEST_F(hmm, anon_teardown)
21281 + }
21282 + }
21283 +
21284 ++/*
21285 ++ * Test memory snapshot without faulting in pages accessed by the device.
21286 ++ */
21287 ++TEST_F(hmm, mixedmap)
21288 ++{
21289 ++ struct hmm_buffer *buffer;
21290 ++ unsigned long npages;
21291 ++ unsigned long size;
21292 ++ unsigned char *m;
21293 ++ int ret;
21294 ++
21295 ++ npages = 1;
21296 ++ size = npages << self->page_shift;
21297 ++
21298 ++ buffer = malloc(sizeof(*buffer));
21299 ++ ASSERT_NE(buffer, NULL);
21300 ++
21301 ++ buffer->fd = -1;
21302 ++ buffer->size = size;
21303 ++ buffer->mirror = malloc(npages);
21304 ++ ASSERT_NE(buffer->mirror, NULL);
21305 ++
21306 ++
21307 ++ /* Reserve a range of addresses. */
21308 ++ buffer->ptr = mmap(NULL, size,
21309 ++ PROT_READ | PROT_WRITE,
21310 ++ MAP_PRIVATE,
21311 ++ self->fd, 0);
21312 ++ ASSERT_NE(buffer->ptr, MAP_FAILED);
21313 ++
21314 ++ /* Simulate a device snapshotting CPU pagetables. */
21315 ++ ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_SNAPSHOT, buffer, npages);
21316 ++ ASSERT_EQ(ret, 0);
21317 ++ ASSERT_EQ(buffer->cpages, npages);
21318 ++
21319 ++ /* Check what the device saw. */
21320 ++ m = buffer->mirror;
21321 ++ ASSERT_EQ(m[0], HMM_DMIRROR_PROT_READ);
21322 ++
21323 ++ hmm_buffer_free(buffer);
21324 ++}
21325 ++
21326 + /*
21327 + * Test memory snapshot without faulting in pages accessed by the device.
21328 + */