Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
Date: Sat, 31 Dec 2022 15:28:44
Message-Id: 1672500455.99cfecfe6b7115f295b20864a9a3292fca78347f.mpagano@gentoo
commit:     99cfecfe6b7115f295b20864a9a3292fca78347f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 31 15:27:35 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 31 15:27:35 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=99cfecfe

Linux patch 6.1.2
Fix for BMQ Patch Bug: https://bugs.gentoo.org/888043

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                       |     4 +
 1001_linux-6.1.2.patch                            | 45529 ++++++++++++++++++++
 5021_sched-alt-missing-rq-lock-irq-function.patch |    30 +
 3 files changed, 45563 insertions(+)
diff --git a/0000_README b/0000_README
index d85dd44f..7f1d2ce2 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-6.1.1.patch
From: http://www.kernel.org
Desc: Linux 6.1.1

+Patch: 1001_linux-6.1.2.patch
+From: http://www.kernel.org
+Desc: Linux 6.1.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-6.1.2.patch b/1001_linux-6.1.2.patch
new file mode 100644
index 00000000..dce6fb3b
--- /dev/null
+++ b/1001_linux-6.1.2.patch
@@ -0,0 +1,45529 @@
+diff --git a/Documentation/ABI/stable/sysfs-driver-dma-idxd b/Documentation/ABI/stable/sysfs-driver-dma-idxd
+index 8e2c2c405db22..3becc9a82bdf6 100644
+--- a/Documentation/ABI/stable/sysfs-driver-dma-idxd
++++ b/Documentation/ABI/stable/sysfs-driver-dma-idxd
+@@ -22,6 +22,7 @@ Date: Oct 25, 2019
+ KernelVersion: 5.6.0
+ Contact: dmaengine@×××××××××××.org
+ Description: The largest number of work descriptors in a batch.
++ It's not visible when the device does not support batch.
+
+ What: /sys/bus/dsa/devices/dsa<m>/max_work_queues_size
+ Date: Oct 25, 2019
+@@ -49,6 +50,8 @@ Description: The total number of read buffers supported by this device.
+ The read buffers represent resources within the DSA
+ implementation, and these resources are allocated by engines to
+ support operations. See DSA spec v1.2 9.2.4 Total Read Buffers.
++ It's not visible when the device does not support Read Buffer
++ allocation control.
+
+ What: /sys/bus/dsa/devices/dsa<m>/max_transfer_size
+ Date: Oct 25, 2019
+@@ -122,6 +125,8 @@ Contact: dmaengine@×××××××××××.org
+ Description: The maximum number of read buffers that may be in use at
+ one time by operations that access low bandwidth memory in the
+ device. See DSA spec v1.2 9.2.8 GENCFG on Global Read Buffer Limit.
++ It's not visible when the device does not support Read Buffer
++ allocation control.
+
+ What: /sys/bus/dsa/devices/dsa<m>/cmd_status
+ Date: Aug 28, 2020
+@@ -205,6 +210,7 @@ KernelVersion: 5.10.0
+ Contact: dmaengine@×××××××××××.org
+ Description: The max batch size for this workqueue. Cannot exceed device
+ max batch size. Configurable parameter.
++ It's not visible when the device does not support batch.
+
+ What: /sys/bus/dsa/devices/wq<m>.<n>/ats_disable
+ Date: Nov 13, 2020
+@@ -250,6 +256,8 @@ KernelVersion: 5.17.0
+ Contact: dmaengine@×××××××××××.org
+ Description: Enable the use of global read buffer limit for the group. See DSA
+ spec v1.2 9.2.18 GRPCFG Use Global Read Buffer Limit.
++ It's not visible when the device does not support Read Buffer
++ allocation control.
+
+ What: /sys/bus/dsa/devices/group<m>.<n>/read_buffers_allowed
+ Date: Dec 10, 2021
+@@ -258,6 +266,8 @@ Contact: dmaengine@×××××××××××.org
+ Description: Indicates max number of read buffers that may be in use at one time
+ by all engines in the group. See DSA spec v1.2 9.2.18 GRPCFG Read
+ Buffers Allowed.
++ It's not visible when the device does not support Read Buffer
++ allocation control.
+
+ What: /sys/bus/dsa/devices/group<m>.<n>/read_buffers_reserved
+ Date: Dec 10, 2021
+@@ -266,6 +276,8 @@ Contact: dmaengine@×××××××××××.org
+ Description: Indicates the number of Read Buffers reserved for the use of
+ engines in the group. See DSA spec v1.2 9.2.18 GRPCFG Read Buffers
+ Reserved.
++ It's not visible when the device does not support Read Buffer
++ allocation control.
+
+ What: /sys/bus/dsa/devices/group<m>.<n>/desc_progress_limit
+ Date: Sept 14, 2022
+diff --git a/Documentation/ABI/testing/sysfs-bus-spi-devices-spi-nor b/Documentation/ABI/testing/sysfs-bus-spi-devices-spi-nor
+index d76cd3946434d..e9ef69aef20b1 100644
+--- a/Documentation/ABI/testing/sysfs-bus-spi-devices-spi-nor
++++ b/Documentation/ABI/testing/sysfs-bus-spi-devices-spi-nor
+@@ -5,6 +5,9 @@ Contact: linux-mtd@×××××××××××××××.org
+ Description: (RO) The JEDEC ID of the SPI NOR flash as reported by the
+ flash device.
+
++ The attribute is not present if the flash doesn't support
++ the "Read JEDEC ID" command (9Fh). This is the case for
++ non-JEDEC compliant flashes.
+
+ What: /sys/bus/spi/devices/.../spi-nor/manufacturer
+ Date: April 2021
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 98d1b198b2b4c..c2c64c1b706ff 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1314,6 +1314,29 @@ watchdog work to be queued by the watchdog timer function, otherwise the NMI
+ watchdog — if enabled — can detect a hard lockup condition.
+
+
++split_lock_mitigate (x86 only)
++==============================
++
++On x86, each "split lock" imposes a system-wide performance penalty. On larger
++systems, large numbers of split locks from unprivileged users can result in
++denials of service to well-behaved and potentially more important users.
++
++The kernel mitigates these bad users by detecting split locks and imposing
++penalties: forcing them to wait and only allowing one core to execute split
++locks at a time.
++
++These mitigations can make those bad applications unbearably slow. Setting
++split_lock_mitigate=0 may restore some application performance, but will also
++increase system exposure to denial of service attacks from split lock users.
++
++= ===================================================================
++0 Disable the mitigation mode - just warns the split lock on kernel log
++ and exposes the system to denials of service from the split lockers.
++1 Enable the mitigation mode (this is the default) - penalizes the split
++ lockers with intentional performance degradation.
++= ===================================================================
++
++
+ stack_erasing
+ =============
+
+diff --git a/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml b/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
+index 02e605fac408d..9ddba7f2e7aa6 100644
+--- a/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
++++ b/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
+@@ -473,9 +473,6 @@ patternProperties:
+ Specifies whether the event is to be interpreted as a key (1)
+ or a switch (5).
+
+- required:
+- - linux,code
+-
+ additionalProperties: false
+
+ dependencies:
+@@ -501,7 +498,7 @@ patternProperties:
+
+ azoteq,slider-size:
+ $ref: /schemas/types.yaml#/definitions/uint32
+- minimum: 0
++ minimum: 1
+ maximum: 65535
+ description:
+ Specifies the slider's one-dimensional resolution, equal to the
+@@ -575,9 +572,9 @@ patternProperties:
+ linux,code: true
+
+ azoteq,gesture-max-ms:
+- multipleOf: 4
++ multipleOf: 16
+ minimum: 0
+- maximum: 1020
++ maximum: 4080
+ description:
+ Specifies the length of time (in ms) within which a tap, swipe
+ or flick gesture must be completed in order to be acknowledged
+ gesture applies to all remaining swipe or flick gestures.
+
+ azoteq,gesture-min-ms:
+- multipleOf: 4
++ multipleOf: 16
+ minimum: 0
+- maximum: 124
++ maximum: 496
+ description:
+ Specifies the length of time (in ms) for which a tap gesture must
+ be held in order to be acknowledged by the device.
+@@ -620,9 +617,6 @@ patternProperties:
+ GPIO, they must all be of the same type (proximity, touch or
+ slider gesture).
+
+- required:
+- - linux,code
+-
+ additionalProperties: false
+
+ required:
+@@ -693,6 +687,7 @@ allOf:
+ properties:
+ azoteq,slider-size:
+ multipleOf: 16
++ minimum: 16
+ maximum: 4080
+
+ azoteq,top-speed:
+@@ -935,14 +930,14 @@ examples:
+
+ event-tap {
+ linux,code = <KEY_PLAYPAUSE>;
+- azoteq,gesture-max-ms = <600>;
+- azoteq,gesture-min-ms = <24>;
++ azoteq,gesture-max-ms = <400>;
++ azoteq,gesture-min-ms = <32>;
+ };
+
+ event-flick-pos {
+ linux,code = <KEY_NEXTSONG>;
+- azoteq,gesture-max-ms = <600>;
+- azoteq,gesture-dist = <816>;
++ azoteq,gesture-max-ms = <800>;
++ azoteq,gesture-dist = <800>;
+ };
+
+ event-flick-neg {
+diff --git a/Documentation/devicetree/bindings/mfd/qcom,spmi-pmic.yaml b/Documentation/devicetree/bindings/mfd/qcom,spmi-pmic.yaml
+index 6a3e3ede1ede7..777f2da52f1ed 100644
+--- a/Documentation/devicetree/bindings/mfd/qcom,spmi-pmic.yaml
++++ b/Documentation/devicetree/bindings/mfd/qcom,spmi-pmic.yaml
+@@ -98,6 +98,10 @@ properties:
+ type: object
+ $ref: /schemas/regulator/qcom,spmi-regulator.yaml#
+
++ pwm:
++ type: object
++ $ref: /schemas/leds/leds-qcom-lpg.yaml#
++
+ patternProperties:
+ "^adc@[0-9a-f]+$":
+ type: object
+@@ -123,10 +127,6 @@ patternProperties:
+ type: object
+ $ref: /schemas/power/reset/qcom,pon.yaml#
+
+- "pwm@[0-9a-f]+$":
+- type: object
+- $ref: /schemas/leds/leds-qcom-lpg.yaml#
+-
+ "^rtc@[0-9a-f]+$":
+ type: object
+ $ref: /schemas/rtc/qcom-pm8xxx-rtc.yaml#
+diff --git a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml
+index 376e739bcad40..49b4f7a32e71e 100644
+--- a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml
+@@ -14,9 +14,6 @@ description: |+
+ This PCIe host controller is based on the Synopsys DesignWare PCIe IP
+ and thus inherits all the common properties defined in snps,dw-pcie.yaml.
+
+-allOf:
+- - $ref: /schemas/pci/snps,dw-pcie.yaml#
+-
+ properties:
+ compatible:
+ enum:
+@@ -61,7 +58,7 @@ properties:
+ - const: pcie
+ - const: pcie_bus
+ - const: pcie_phy
+- - const: pcie_inbound_axi for imx6sx-pcie, pcie_aux for imx8mq-pcie
++ - enum: [ pcie_inbound_axi, pcie_aux ]
+
+ num-lanes:
+ const: 1
+@@ -175,6 +172,47 @@ required:
+ - clocks
+ - clock-names
+
++allOf:
++ - $ref: /schemas/pci/snps,dw-pcie.yaml#
++ - if:
++ properties:
++ compatible:
++ contains:
++ const: fsl,imx6sx-pcie
++ then:
++ properties:
++ clock-names:
++ items:
++ - {}
++ - {}
++ - {}
++ - const: pcie_inbound_axi
++ - if:
++ properties:
++ compatible:
++ contains:
++ const: fsl,imx8mq-pcie
++ then:
++ properties:
++ clock-names:
++ items:
++ - {}
++ - {}
++ - {}
++ - const: pcie_aux
++ - if:
++ properties:
++ compatible:
++ not:
++ contains:
++ enum:
++ - fsl,imx6sx-pcie
++ - fsl,imx8mq-pcie
++ then:
++ properties:
++ clock-names:
++ maxItems: 3
++
+ unevaluatedProperties: false
+
+ examples:
+diff --git a/Documentation/devicetree/bindings/pci/toshiba,visconti-pcie.yaml b/Documentation/devicetree/bindings/pci/toshiba,visconti-pcie.yaml
+index 48ed227fc5b9e..53da2edd7c9ab 100644
+--- a/Documentation/devicetree/bindings/pci/toshiba,visconti-pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/toshiba,visconti-pcie.yaml
+@@ -36,7 +36,7 @@ properties:
+ - const: mpu
+
+ interrupts:
+- maxItems: 1
++ maxItems: 2
+
+ clocks:
+ items:
+@@ -94,8 +94,9 @@ examples:
+ #interrupt-cells = <1>;
+ ranges = <0x81000000 0 0x40000000 0 0x40000000 0 0x00010000>,
+ <0x82000000 0 0x50000000 0 0x50000000 0 0x20000000>;
+- interrupts = <GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-names = "intr";
++ interrupts = <GIC_SPI 211 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-names = "msi", "intr";
+ interrupt-map-mask = <0 0 0 7>;
+ interrupt-map =
+ <0 0 0 1 &gic GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH
+diff --git a/Documentation/devicetree/bindings/pinctrl/mediatek,mt7986-pinctrl.yaml b/Documentation/devicetree/bindings/pinctrl/mediatek,mt7986-pinctrl.yaml
+index 89b8f3dd67a19..3342847dcb19a 100644
+--- a/Documentation/devicetree/bindings/pinctrl/mediatek,mt7986-pinctrl.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/mediatek,mt7986-pinctrl.yaml
+@@ -87,6 +87,8 @@ patternProperties:
+ "wifi_led" "led" 1, 2
+ "i2c" "i2c" 3, 4
+ "uart1_0" "uart" 7, 8, 9, 10
++ "uart1_rx_tx" "uart" 42, 43
++ "uart1_cts_rts" "uart" 44, 45
+ "pcie_clk" "pcie" 9
+ "pcie_wake" "pcie" 10
+ "spi1_0" "spi" 11, 12, 13, 14
+@@ -98,9 +100,11 @@ patternProperties:
+ "emmc_45" "emmc" 22, 23, 24, 25, 26, 27, 28, 29, 30,
+ 31, 32
+ "spi1_1" "spi" 23, 24, 25, 26
+- "uart1_2" "uart" 29, 30, 31, 32
++ "uart1_2_rx_tx" "uart" 29, 30
++ "uart1_2_cts_rts" "uart" 31, 32
+ "uart1_1" "uart" 23, 24, 25, 26
+- "uart2_0" "uart" 29, 30, 31, 32
++ "uart2_0_rx_tx" "uart" 29, 30
++ "uart2_0_cts_rts" "uart" 31, 32
+ "spi0" "spi" 33, 34, 35, 36
+ "spi0_wp_hold" "spi" 37, 38
+ "uart1_3_rx_tx" "uart" 35, 36
+@@ -157,7 +161,7 @@ patternProperties:
+ then:
+ properties:
+ groups:
+- enum: [emmc, emmc_rst]
++ enum: [emmc_45, emmc_51]
+ - if:
+ properties:
+ function:
+@@ -221,8 +225,12 @@ patternProperties:
+ then:
+ properties:
+ groups:
+- enum: [uart1_0, uart1_1, uart1_2, uart1_3_rx_tx,
+- uart1_3_cts_rts, uart2_0, uart2_1, uart0, uart1, uart2]
++ items:
++ enum: [uart1_0, uart1_rx_tx, uart1_cts_rts, uart1_1,
++ uart1_2_rx_tx, uart1_2_cts_rts, uart1_3_rx_tx,
++ uart1_3_cts_rts, uart2_0_rx_tx, uart2_0_cts_rts,
++ uart2_1, uart0, uart1, uart2]
++ maxItems: 2
+ - if:
+ properties:
+ function:
+@@ -356,6 +364,27 @@ examples:
+ interrupt-parent = <&gic>;
+ #interrupt-cells = <2>;
+
++ pcie_pins: pcie-pins {
++ mux {
++ function = "pcie";
++ groups = "pcie_clk", "pcie_wake", "pcie_pereset";
++ };
++ };
++
++ pwm_pins: pwm-pins {
++ mux {
++ function = "pwm";
++ groups = "pwm0", "pwm1_0";
++ };
++ };
++
++ spi0_pins: spi0-pins {
++ mux {
++ function = "spi";
++ groups = "spi0", "spi0_wp_hold";
++ };
++ };
++
+ uart1_pins: uart1-pins {
+ mux {
+ function = "uart";
+@@ -363,6 +392,13 @@ examples:
+ };
+ };
+
++ uart1_3_pins: uart1-3-pins {
++ mux {
++ function = "uart";
++ groups = "uart1_3_rx_tx", "uart1_3_cts_rts";
++ };
++ };
++
+ uart2_pins: uart2-pins {
+ mux {
+ function = "uart";
+diff --git a/Documentation/devicetree/bindings/pwm/microchip,corepwm.yaml b/Documentation/devicetree/bindings/pwm/microchip,corepwm.yaml
+index a7fae1772a81b..cd8e9a8907f84 100644
+--- a/Documentation/devicetree/bindings/pwm/microchip,corepwm.yaml
++++ b/Documentation/devicetree/bindings/pwm/microchip,corepwm.yaml
+@@ -30,7 +30,9 @@ properties:
+ maxItems: 1
+
+ "#pwm-cells":
+- const: 2
++ enum: [2, 3]
++ description:
++ The only flag supported by the controller is PWM_POLARITY_INVERTED.
+
+ microchip,sync-update-mask:
+ description: |
+diff --git a/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt b/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt
+index 5d6ea66a863fe..1f75feec3dec6 100644
+--- a/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt
++++ b/Documentation/devicetree/bindings/sound/qcom,wcd9335.txt
+@@ -109,7 +109,7 @@ audio-codec@1{
+ reg = <1 0>;
+ interrupts = <&msmgpio 54 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "intr2"
+- reset-gpios = <&msmgpio 64 0>;
++ reset-gpios = <&msmgpio 64 GPIO_ACTIVE_LOW>;
+ slim-ifc-dev = <&wc9335_ifd>;
+ clock-names = "mclk", "native";
+ clocks = <&rpmcc RPM_SMD_DIV_CLK1>,
+diff --git a/Documentation/devicetree/bindings/sound/rt5682.txt b/Documentation/devicetree/bindings/sound/rt5682.txt
+index c5f2b8febceec..6b87db68337c2 100644
+--- a/Documentation/devicetree/bindings/sound/rt5682.txt
++++ b/Documentation/devicetree/bindings/sound/rt5682.txt
+@@ -46,7 +46,7 @@ Optional properties:
+
+ - realtek,dmic-clk-driving-high : Set the high driving of the DMIC clock out.
+
+-- #sound-dai-cells: Should be set to '<0>'.
++- #sound-dai-cells: Should be set to '<1>'.
+
+ Pins on the device (for linking into audio routes) for RT5682:
+
+diff --git a/Documentation/driver-api/spi.rst b/Documentation/driver-api/spi.rst
+index f64cb666498aa..f28887045049d 100644
+--- a/Documentation/driver-api/spi.rst
++++ b/Documentation/driver-api/spi.rst
+@@ -25,8 +25,8 @@ hardware, which may be as simple as a set of GPIO pins or as complex as
+ a pair of FIFOs connected to dual DMA engines on the other side of the
+ SPI shift register (maximizing throughput). Such drivers bridge between
+ whatever bus they sit on (often the platform bus) and SPI, and expose
+-the SPI side of their device as a :c:type:`struct spi_master
+-<spi_master>`. SPI devices are children of that master,
++the SPI side of their device as a :c:type:`struct spi_controller
++<spi_controller>`. SPI devices are children of that master,
+ represented as a :c:type:`struct spi_device <spi_device>` and
+ manufactured from :c:type:`struct spi_board_info
+ <spi_board_info>` descriptors which are usually provided by
+diff --git a/Documentation/fault-injection/fault-injection.rst b/Documentation/fault-injection/fault-injection.rst
+index 17779a2772e51..5f6454b9dbd4d 100644
+--- a/Documentation/fault-injection/fault-injection.rst
++++ b/Documentation/fault-injection/fault-injection.rst
+@@ -83,9 +83,7 @@ configuration of fault-injection capabilities.
+ - /sys/kernel/debug/fail*/times:
+
+ specifies how many times failures may happen at most. A value of -1
+- means "no limit". Note, though, that this file only accepts unsigned
+- values. So, if you want to specify -1, you better use 'printf' instead
+- of 'echo', e.g.: $ printf %#x -1 > times
++ means "no limit".
+
+ - /sys/kernel/debug/fail*/space:
+
+@@ -284,7 +282,7 @@ Application Examples
+ echo Y > /sys/kernel/debug/$FAILTYPE/task-filter
+ echo 10 > /sys/kernel/debug/$FAILTYPE/probability
+ echo 100 > /sys/kernel/debug/$FAILTYPE/interval
+- printf %#x -1 > /sys/kernel/debug/$FAILTYPE/times
++ echo -1 > /sys/kernel/debug/$FAILTYPE/times
+ echo 0 > /sys/kernel/debug/$FAILTYPE/space
+ echo 2 > /sys/kernel/debug/$FAILTYPE/verbose
+ echo Y > /sys/kernel/debug/$FAILTYPE/ignore-gfp-wait
+@@ -338,7 +336,7 @@ Application Examples
+ echo N > /sys/kernel/debug/$FAILTYPE/task-filter
+ echo 10 > /sys/kernel/debug/$FAILTYPE/probability
+ echo 100 > /sys/kernel/debug/$FAILTYPE/interval
+- printf %#x -1 > /sys/kernel/debug/$FAILTYPE/times
++ echo -1 > /sys/kernel/debug/$FAILTYPE/times
+ echo 0 > /sys/kernel/debug/$FAILTYPE/space
+ echo 2 > /sys/kernel/debug/$FAILTYPE/verbose
+ echo Y > /sys/kernel/debug/$FAILTYPE/ignore-gfp-wait
+@@ -369,7 +367,7 @@ Application Examples
+ echo N > /sys/kernel/debug/$FAILTYPE/task-filter
+ echo 100 > /sys/kernel/debug/$FAILTYPE/probability
+ echo 0 > /sys/kernel/debug/$FAILTYPE/interval
+- printf %#x -1 > /sys/kernel/debug/$FAILTYPE/times
++ echo -1 > /sys/kernel/debug/$FAILTYPE/times
+ echo 0 > /sys/kernel/debug/$FAILTYPE/space
+ echo 1 > /sys/kernel/debug/$FAILTYPE/verbose
+
+diff --git a/Makefile b/Makefile
+index 7307ae6c2ef72..2ecc568c779fa 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 1
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 8f138e580d1ae..81599f5c17b0f 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -635,7 +635,7 @@ config ARCH_SUPPORTS_SHADOW_CALL_STACK
+ config SHADOW_CALL_STACK
+ bool "Shadow Call Stack"
+ depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
+- depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
++ depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
+ help
+ This option enables the compiler's Shadow Call Stack, which
+ uses a shadow stack to protect function return addresses from
+diff --git a/arch/alpha/include/asm/thread_info.h b/arch/alpha/include/asm/thread_info.h
+index fdc485d7787a6..084c27cb0c707 100644
+--- a/arch/alpha/include/asm/thread_info.h
++++ b/arch/alpha/include/asm/thread_info.h
+@@ -75,7 +75,7 @@ register struct thread_info *__current_thread_info __asm__("$8");
+
+ /* Work to do on interrupt/exception return. */
+ #define _TIF_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
+- _TIF_NOTIFY_RESUME)
++ _TIF_NOTIFY_RESUME | _TIF_NOTIFY_SIGNAL)
+
+ /* Work to do on any return to userspace. */
+ #define _TIF_ALLWORK_MASK (_TIF_WORK_MASK \
+diff --git a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
+index e227f3a29a43c..c41a5a9c3b9f2 100644
+--- a/arch/alpha/kernel/entry.S
++++ b/arch/alpha/kernel/entry.S
+@@ -469,8 +469,10 @@ entSys:
+ #ifdef CONFIG_AUDITSYSCALL
+ lda $6, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT
+ and $3, $6, $3
+-#endif
+ bne $3, strace
++#else
++ blbs $3, strace /* check for SYSCALL_TRACE in disguise */
++#endif
+ beq $4, 1f
+ ldq $27, 0($5)
+ 1: jsr $26, ($27), sys_ni_syscall
+diff --git a/arch/arm/boot/dts/armada-370.dtsi b/arch/arm/boot/dts/armada-370.dtsi
+index 9dc928859ad33..2013a5ccecd31 100644
+--- a/arch/arm/boot/dts/armada-370.dtsi
++++ b/arch/arm/boot/dts/armada-370.dtsi
+@@ -84,7 +84,7 @@
+
+ pcie2: pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x80000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-375.dtsi b/arch/arm/boot/dts/armada-375.dtsi
+index 929deaf312a55..c310ef26d1cce 100644
+--- a/arch/arm/boot/dts/armada-375.dtsi
++++ b/arch/arm/boot/dts/armada-375.dtsi
+@@ -592,7 +592,7 @@
+
+ pcie1: pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-380.dtsi b/arch/arm/boot/dts/armada-380.dtsi
+index ce1dddb2269b0..e94f22b0e9b5e 100644
+--- a/arch/arm/boot/dts/armada-380.dtsi
++++ b/arch/arm/boot/dts/armada-380.dtsi
+@@ -89,7 +89,7 @@
+ /* x1 port */
+ pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -118,7 +118,7 @@
+ /* x1 port */
+ pcie@3,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++ assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ reg = <0x1800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+index 72ac807cae259..0c1f238e4c306 100644
+--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+@@ -23,6 +23,12 @@
+ stdout-path = &uart0;
+ };
+
++ aliases {
++ ethernet0 = &eth0;
++ ethernet1 = &eth1;
++ ethernet2 = &eth2;
++ };
++
+ memory {
+ device_type = "memory";
+ reg = <0x00000000 0x40000000>; /* 1024 MB */
+@@ -483,7 +489,17 @@
+ };
+ };
+
+- /* port 6 is connected to eth0 */
++ ports@6 {
++ reg = <6>;
++ label = "cpu";
++ ethernet = <&eth0>;
++ phy-mode = "rgmii-id";
++
++ fixed-link {
++ speed = <1000>;
++ full-duplex;
++ };
++ };
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/armada-385.dtsi b/arch/arm/boot/dts/armada-385.dtsi
+index 83392b92dae28..be8d607c59b21 100644
+--- a/arch/arm/boot/dts/armada-385.dtsi
++++ b/arch/arm/boot/dts/armada-385.dtsi
+@@ -93,7 +93,7 @@
+ /* x1 port */
+ pcie2: pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -121,7 +121,7 @@
+ /* x1 port */
+ pcie3: pcie@3,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++ assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ reg = <0x1800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -152,7 +152,7 @@
+ */
+ pcie4: pcie@4,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++ assigned-addresses = <0x82002000 0 0x48000 0 0x2000>;
+ reg = <0x2000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-39x.dtsi b/arch/arm/boot/dts/armada-39x.dtsi
+index 923b035a3ab38..9d1cac49c022f 100644
+--- a/arch/arm/boot/dts/armada-39x.dtsi
++++ b/arch/arm/boot/dts/armada-39x.dtsi
+@@ -463,7 +463,7 @@
+ /* x1 port */
+ pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -492,7 +492,7 @@
+ /* x1 port */
+ pcie@3,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++ assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ reg = <0x1800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -524,7 +524,7 @@
+ */
+ pcie@4,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++ assigned-addresses = <0x82002000 0 0x48000 0 0x2000>;
+ reg = <0x2000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78230.dtsi b/arch/arm/boot/dts/armada-xp-mv78230.dtsi
+index bf9360f41e0a6..5ea9d509cd308 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78230.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78230.dtsi
+@@ -107,7 +107,7 @@
+
+ pcie2: pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -135,7 +135,7 @@
+
+ pcie3: pcie@3,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++ assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+ reg = <0x1800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -163,7 +163,7 @@
+
+ pcie4: pcie@4,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x4c000 0 0x2000>;
++ assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+ reg = <0x2000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -191,7 +191,7 @@
+
+ pcie5: pcie@5,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
++ assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+ reg = <0x2800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78260.dtsi b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+index 0714af52e6075..6c6fbb9faf5ac 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78260.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+@@ -122,7 +122,7 @@
+
+ pcie2: pcie@2,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++ assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ reg = <0x1000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -150,7 +150,7 @@
+
+ pcie3: pcie@3,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++ assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+ reg = <0x1800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -178,7 +178,7 @@
+
+ pcie4: pcie@4,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x4c000 0 0x2000>;
++ assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+ reg = <0x2000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -206,7 +206,7 @@
+
+ pcie5: pcie@5,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
++ assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+ reg = <0x2800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -234,7 +234,7 @@
+
+ pcie6: pcie@6,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x84000 0 0x2000>;
++ assigned-addresses = <0x82003000 0 0x84000 0 0x2000>;
+ reg = <0x3000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -262,7 +262,7 @@
+
+ pcie7: pcie@7,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x88000 0 0x2000>;
++ assigned-addresses = <0x82003800 0 0x88000 0 0x2000>;
+ reg = <0x3800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -290,7 +290,7 @@
+
+ pcie8: pcie@8,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x8c000 0 0x2000>;
++ assigned-addresses = <0x82004000 0 0x8c000 0 0x2000>;
+ reg = <0x4000 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -318,7 +318,7 @@
+
+ pcie9: pcie@9,0 {
+ device_type = "pci";
+- assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
++ assigned-addresses = <0x82004800 0 0x42000 0 0x2000>;
+ reg = <0x4800 0 0 0 0>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
+index a6a2bc3b855c2..fcc890e3ad735 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
+@@ -162,16 +162,9 @@
+ #size-cells = <1>;
+ ranges;
+
+- /* LPC FW cycle bridge region requires natural alignment */
+- flash_memory: region@b8000000 {
+- no-map;
+- reg = <0xb8000000 0x04000000>; /* 64M */
+- };
+-
+- /* 48MB region from the end of flash to start of vga memory */
+- ramoops@bc000000 {
++ ramoops@b3e00000 {
+ compatible = "ramoops";
+- reg = <0xbc000000 0x200000>; /* 16 * (4 * 0x8000) */
++ reg = <0xb3e00000 0x200000>; /* 16 * (4 * 0x8000) */
+ record-size = <0x8000>;
+ console-size = <0x8000>;
+ ftrace-size = <0x8000>;
+@@ -179,6 +172,12 @@
+ max-reason = <3>; /* KMSG_DUMP_EMERG */
+ };
+
++ /* LPC FW cycle bridge region requires natural alignment */
++ flash_memory: region@b4000000 {
++ no-map;
++ reg = <0xb4000000 0x04000000>; /* 64M */
++ };
++
+ /* VGA region is dictated by hardware strapping */
+ vga_memory: region@bf000000 {
+ no-map;
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+index bf59a9962379d..4879da4cdbd25 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+@@ -95,14 +95,9 @@
+ #size-cells = <1>;
+ ranges;
+
+- flash_memory: region@b8000000 {
+- no-map;
+- reg = <0xb8000000 0x04000000>; /* 64M */
+- };
+-
+- ramoops@bc000000 {
++ ramoops@b3e00000 {
+ compatible = "ramoops";
+- reg = <0xbc000000 0x200000>; /* 16 * (4 * 0x8000) */
++ reg = <0xb3e00000 0x200000>; /* 16 * (4 * 0x8000) */
921 + record-size = <0x8000>;
922 + console-size = <0x8000>;
923 + ftrace-size = <0x8000>;
924 +@@ -110,6 +105,13 @@
925 + max-reason = <3>; /* KMSG_DUMP_EMERG */
926 + };
927 +
928 ++ /* LPC FW cycle bridge region requires natural alignment */
929 ++ flash_memory: region@b4000000 {
930 ++ no-map;
931 ++ reg = <0xb4000000 0x04000000>; /* 64M */
932 ++ };
933 ++
934 ++ /* VGA region is dictated by hardware strapping */
935 + vga_memory: region@bf000000 {
936 + no-map;
937 + compatible = "shared-dma-pool";
938 +diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
939 +index 00a36fba2fd23..9aee3cfd3e981 100644
940 +--- a/arch/arm/boot/dts/dove.dtsi
941 ++++ b/arch/arm/boot/dts/dove.dtsi
942 +@@ -139,7 +139,7 @@
943 + pcie1: pcie@2 {
944 + device_type = "pci";
945 + status = "disabled";
946 +- assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
947 ++ assigned-addresses = <0x82001000 0 0x80000 0 0x2000>;
948 + reg = <0x1000 0 0 0 0>;
949 + clocks = <&gate_clk 5>;
950 + marvell,pcie-port = <1>;
951 +diff --git a/arch/arm/boot/dts/nuvoton-npcm730-gbs.dts b/arch/arm/boot/dts/nuvoton-npcm730-gbs.dts
952 +index d10669fcd527d..9e9eba8bad5e4 100644
953 +--- a/arch/arm/boot/dts/nuvoton-npcm730-gbs.dts
954 ++++ b/arch/arm/boot/dts/nuvoton-npcm730-gbs.dts
955 +@@ -366,7 +366,7 @@
956 + spi-max-frequency = <20000000>;
957 + spi-rx-bus-width = <2>;
958 + label = "bmc";
959 +- partitions@80000000 {
960 ++ partitions {
961 + compatible = "fixed-partitions";
962 + #address-cells = <1>;
963 + #size-cells = <1>;
964 +diff --git a/arch/arm/boot/dts/nuvoton-npcm730-gsj.dts b/arch/arm/boot/dts/nuvoton-npcm730-gsj.dts
965 +index 491606c4f044d..2a394cc15284c 100644
966 +--- a/arch/arm/boot/dts/nuvoton-npcm730-gsj.dts
967 ++++ b/arch/arm/boot/dts/nuvoton-npcm730-gsj.dts
968 +@@ -142,7 +142,7 @@
969 + reg = <0>;
970 + spi-rx-bus-width = <2>;
971 +
972 +- partitions@80000000 {
973 ++ partitions {
974 + compatible = "fixed-partitions";
975 + #address-cells = <1>;
976 + #size-cells = <1>;
977 +diff --git a/arch/arm/boot/dts/nuvoton-npcm730-kudo.dts b/arch/arm/boot/dts/nuvoton-npcm730-kudo.dts
978 +index a0c2d76526258..f7b38bee039bc 100644
979 +--- a/arch/arm/boot/dts/nuvoton-npcm730-kudo.dts
980 ++++ b/arch/arm/boot/dts/nuvoton-npcm730-kudo.dts
981 +@@ -388,7 +388,7 @@
982 + spi-max-frequency = <5000000>;
983 + spi-rx-bus-width = <2>;
984 + label = "bmc";
985 +- partitions@80000000 {
986 ++ partitions {
987 + compatible = "fixed-partitions";
988 + #address-cells = <1>;
989 + #size-cells = <1>;
990 +@@ -422,7 +422,7 @@
991 + reg = <1>;
992 + spi-max-frequency = <5000000>;
993 + spi-rx-bus-width = <2>;
994 +- partitions@88000000 {
995 ++ partitions {
996 + compatible = "fixed-partitions";
997 + #address-cells = <1>;
998 + #size-cells = <1>;
999 +@@ -447,7 +447,7 @@
1000 + reg = <0>;
1001 + spi-max-frequency = <5000000>;
1002 + spi-rx-bus-width = <2>;
1003 +- partitions@A0000000 {
1004 ++ partitions {
1005 + compatible = "fixed-partitions";
1006 + #address-cells = <1>;
1007 + #size-cells = <1>;
1008 +diff --git a/arch/arm/boot/dts/nuvoton-npcm750-evb.dts b/arch/arm/boot/dts/nuvoton-npcm750-evb.dts
1009 +index 3dad32834e5ea..f53d45fa1de87 100644
1010 +--- a/arch/arm/boot/dts/nuvoton-npcm750-evb.dts
1011 ++++ b/arch/arm/boot/dts/nuvoton-npcm750-evb.dts
1012 +@@ -74,7 +74,7 @@
1013 + spi-rx-bus-width = <2>;
1014 + reg = <0>;
1015 + spi-max-frequency = <5000000>;
1016 +- partitions@80000000 {
1017 ++ partitions {
1018 + compatible = "fixed-partitions";
1019 + #address-cells = <1>;
1020 + #size-cells = <1>;
1021 +@@ -135,7 +135,7 @@
1022 + spi-rx-bus-width = <2>;
1023 + reg = <0>;
1024 + spi-max-frequency = <5000000>;
1025 +- partitions@A0000000 {
1026 ++ partitions {
1027 + compatible = "fixed-partitions";
1028 + #address-cells = <1>;
1029 + #size-cells = <1>;
1030 +diff --git a/arch/arm/boot/dts/nuvoton-npcm750-runbmc-olympus.dts b/arch/arm/boot/dts/nuvoton-npcm750-runbmc-olympus.dts
1031 +index 132e702281fc5..87359ab05db3e 100644
1032 +--- a/arch/arm/boot/dts/nuvoton-npcm750-runbmc-olympus.dts
1033 ++++ b/arch/arm/boot/dts/nuvoton-npcm750-runbmc-olympus.dts
1034 +@@ -107,7 +107,7 @@
1035 + reg = <0>;
1036 + spi-rx-bus-width = <2>;
1037 +
1038 +- partitions@80000000 {
1039 ++ partitions {
1040 + compatible = "fixed-partitions";
1041 + #address-cells = <1>;
1042 + #size-cells = <1>;
1043 +@@ -146,7 +146,7 @@
1044 + reg = <1>;
1045 + npcm,fiu-rx-bus-width = <2>;
1046 +
1047 +- partitions@88000000 {
1048 ++ partitions {
1049 + compatible = "fixed-partitions";
1050 + #address-cells = <1>;
1051 + #size-cells = <1>;
1052 +@@ -173,7 +173,7 @@
1053 + reg = <0>;
1054 + spi-rx-bus-width = <2>;
1055 +
1056 +- partitions@A0000000 {
1057 ++ partitions {
1058 + compatible = "fixed-partitions";
1059 + #address-cells = <1>;
1060 + #size-cells = <1>;
1061 +diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
1062 +index 942aa2278355d..a39b940d58532 100644
1063 +--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
1064 ++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
1065 +@@ -1615,7 +1615,7 @@
1066 + };
1067 +
1068 + etb@1a01000 {
1069 +- compatible = "coresight-etb10", "arm,primecell";
1070 ++ compatible = "arm,coresight-etb10", "arm,primecell";
1071 + reg = <0x1a01000 0x1000>;
1072 +
1073 + clocks = <&rpmcc RPM_QDSS_CLK>;
1074 +diff --git a/arch/arm/boot/dts/spear600.dtsi b/arch/arm/boot/dts/spear600.dtsi
1075 +index fd41243a0b2c0..9d5a04a46b14e 100644
1076 +--- a/arch/arm/boot/dts/spear600.dtsi
1077 ++++ b/arch/arm/boot/dts/spear600.dtsi
1078 +@@ -47,7 +47,7 @@
1079 + compatible = "arm,pl110", "arm,primecell";
1080 + reg = <0xfc200000 0x1000>;
1081 + interrupt-parent = <&vic1>;
1082 +- interrupts = <12>;
1083 ++ interrupts = <13>;
1084 + status = "disabled";
1085 + };
1086 +
1087 +diff --git a/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts b/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts
1088 +index 2e3c9fbb4eb36..275167f26fd9d 100644
1089 +--- a/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts
1090 ++++ b/arch/arm/boot/dts/stm32mp157a-dhcor-avenger96.dts
1091 +@@ -13,7 +13,6 @@
1092 + /dts-v1/;
1093 +
1094 + #include "stm32mp157.dtsi"
1095 +-#include "stm32mp15xc.dtsi"
1096 + #include "stm32mp15xx-dhcor-som.dtsi"
1097 + #include "stm32mp15xx-dhcor-avenger96.dtsi"
1098 +
1099 +diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
1100 +index 90933077d66de..b6957cbdeff5f 100644
1101 +--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
1102 ++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
1103 +@@ -100,7 +100,7 @@
1104 + regulator-min-microvolt = <3300000>;
1105 + regulator-max-microvolt = <3300000>;
1106 +
1107 +- gpios = <&gpioz 3 GPIO_ACTIVE_HIGH>;
1108 ++ gpio = <&gpioz 3 GPIO_ACTIVE_HIGH>;
1109 + enable-active-high;
1110 + };
1111 + };
1112 +diff --git a/arch/arm/mach-mmp/time.c b/arch/arm/mach-mmp/time.c
1113 +index 41b2e8abc9e69..708816caf859c 100644
1114 +--- a/arch/arm/mach-mmp/time.c
1115 ++++ b/arch/arm/mach-mmp/time.c
1116 +@@ -43,18 +43,21 @@
1117 + static void __iomem *mmp_timer_base = TIMERS_VIRT_BASE;
1118 +
1119 + /*
1120 +- * FIXME: the timer needs some delay to stablize the counter capture
1121 ++ * Read the timer through the CVWR register. Delay is required after requesting
1122 ++ * a read. The CR register cannot be directly read due to metastability issues
1123 ++ * documented in the PXA168 software manual.
1124 + */
1125 + static inline uint32_t timer_read(void)
1126 + {
1127 +- int delay = 100;
1128 ++ uint32_t val;
1129 ++ int delay = 3;
1130 +
1131 + __raw_writel(1, mmp_timer_base + TMR_CVWR(1));
1132 +
1133 + while (delay--)
1134 +- cpu_relax();
1135 ++ val = __raw_readl(mmp_timer_base + TMR_CVWR(1));
1136 +
1137 +- return __raw_readl(mmp_timer_base + TMR_CVWR(1));
1138 ++ return val;
1139 + }
1140 +
1141 + static u64 notrace mmp_read_sched_clock(void)
1142 +diff --git a/arch/arm64/boot/dts/apple/t8103.dtsi b/arch/arm64/boot/dts/apple/t8103.dtsi
1143 +index 51a63b29d4045..a4d195e9eb8c8 100644
1144 +--- a/arch/arm64/boot/dts/apple/t8103.dtsi
1145 ++++ b/arch/arm64/boot/dts/apple/t8103.dtsi
1146 +@@ -412,7 +412,7 @@
1147 + resets = <&ps_ans2>;
1148 + };
1149 +
1150 +- pcie0_dart_0: dart@681008000 {
1151 ++ pcie0_dart_0: iommu@681008000 {
1152 + compatible = "apple,t8103-dart";
1153 + reg = <0x6 0x81008000 0x0 0x4000>;
1154 + #iommu-cells = <1>;
1155 +@@ -421,7 +421,7 @@
1156 + power-domains = <&ps_apcie_gp>;
1157 + };
1158 +
1159 +- pcie0_dart_1: dart@682008000 {
1160 ++ pcie0_dart_1: iommu@682008000 {
1161 + compatible = "apple,t8103-dart";
1162 + reg = <0x6 0x82008000 0x0 0x4000>;
1163 + #iommu-cells = <1>;
1164 +@@ -430,7 +430,7 @@
1165 + power-domains = <&ps_apcie_gp>;
1166 + };
1167 +
1168 +- pcie0_dart_2: dart@683008000 {
1169 ++ pcie0_dart_2: iommu@683008000 {
1170 + compatible = "apple,t8103-dart";
1171 + reg = <0x6 0x83008000 0x0 0x4000>;
1172 + #iommu-cells = <1>;
1173 +diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
1174 +index ada164d423f3d..200f97e1c4c9c 100644
1175 +--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
1176 ++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
1177 +@@ -125,9 +125,12 @@
1178 + /delete-property/ mrvl,i2c-fast-mode;
1179 + status = "okay";
1180 +
1181 ++ /* MCP7940MT-I/MNY RTC */
1182 + rtc@6f {
1183 + compatible = "microchip,mcp7940x";
1184 + reg = <0x6f>;
1185 ++ interrupt-parent = <&gpiosb>;
1186 ++ interrupts = <5 0>; /* GPIO2_5 */
1187 + };
1188 + };
1189 +
1190 +diff --git a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
1191 +index 9b1af9c801308..d31a194124c91 100644
1192 +--- a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
1193 ++++ b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
1194 +@@ -26,14 +26,14 @@
1195 + stdout-path = "serial0:921600n8";
1196 + };
1197 +
1198 +- cpus_fixed_vproc0: fixedregulator@0 {
1199 ++ cpus_fixed_vproc0: regulator-vproc-buck0 {
1200 + compatible = "regulator-fixed";
1201 + regulator-name = "vproc_buck0";
1202 + regulator-min-microvolt = <1000000>;
1203 + regulator-max-microvolt = <1000000>;
1204 + };
1205 +
1206 +- cpus_fixed_vproc1: fixedregulator@1 {
1207 ++ cpus_fixed_vproc1: regulator-vproc-buck1 {
1208 + compatible = "regulator-fixed";
1209 + regulator-name = "vproc_buck1";
1210 + regulator-min-microvolt = <1000000>;
1211 +@@ -50,7 +50,7 @@
1212 + id-gpio = <&pio 14 GPIO_ACTIVE_HIGH>;
1213 + };
1214 +
1215 +- usb_p0_vbus: regulator@2 {
1216 ++ usb_p0_vbus: regulator-usb-p0-vbus {
1217 + compatible = "regulator-fixed";
1218 + regulator-name = "p0_vbus";
1219 + regulator-min-microvolt = <5000000>;
1220 +@@ -59,7 +59,7 @@
1221 + enable-active-high;
1222 + };
1223 +
1224 +- usb_p1_vbus: regulator@3 {
1225 ++ usb_p1_vbus: regulator-usb-p1-vbus {
1226 + compatible = "regulator-fixed";
1227 + regulator-name = "p1_vbus";
1228 + regulator-min-microvolt = <5000000>;
1229 +@@ -68,7 +68,7 @@
1230 + enable-active-high;
1231 + };
1232 +
1233 +- usb_p2_vbus: regulator@4 {
1234 ++ usb_p2_vbus: regulator-usb-p2-vbus {
1235 + compatible = "regulator-fixed";
1236 + regulator-name = "p2_vbus";
1237 + regulator-min-microvolt = <5000000>;
1238 +@@ -77,7 +77,7 @@
1239 + enable-active-high;
1240 + };
1241 +
1242 +- usb_p3_vbus: regulator@5 {
1243 ++ usb_p3_vbus: regulator-usb-p3-vbus {
1244 + compatible = "regulator-fixed";
1245 + regulator-name = "p3_vbus";
1246 + regulator-min-microvolt = <5000000>;
1247 +diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
1248 +index e6d7453e56e0e..1ac0b2cf3d406 100644
1249 +--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
1250 ++++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
1251 +@@ -160,70 +160,70 @@
1252 + #clock-cells = <0>;
1253 + };
1254 +
1255 +- clk26m: oscillator@0 {
1256 ++ clk26m: oscillator-26m {
1257 + compatible = "fixed-clock";
1258 + #clock-cells = <0>;
1259 + clock-frequency = <26000000>;
1260 + clock-output-names = "clk26m";
1261 + };
1262 +
1263 +- clk32k: oscillator@1 {
1264 ++ clk32k: oscillator-32k {
1265 + compatible = "fixed-clock";
1266 + #clock-cells = <0>;
1267 + clock-frequency = <32768>;
1268 + clock-output-names = "clk32k";
1269 + };
1270 +
1271 +- clkfpc: oscillator@2 {
1272 ++ clkfpc: oscillator-50m {
1273 + compatible = "fixed-clock";
1274 + #clock-cells = <0>;
1275 + clock-frequency = <50000000>;
1276 + clock-output-names = "clkfpc";
1277 + };
1278 +
1279 +- clkaud_ext_i_0: oscillator@3 {
1280 ++ clkaud_ext_i_0: oscillator-aud0 {
1281 + compatible = "fixed-clock";
1282 + #clock-cells = <0>;
1283 + clock-frequency = <6500000>;
1284 + clock-output-names = "clkaud_ext_i_0";
1285 + };
1286 +
1287 +- clkaud_ext_i_1: oscillator@4 {
1288 ++ clkaud_ext_i_1: oscillator-aud1 {
1289 + compatible = "fixed-clock";
1290 + #clock-cells = <0>;
1291 + clock-frequency = <196608000>;
1292 + clock-output-names = "clkaud_ext_i_1";
1293 + };
1294 +
1295 +- clkaud_ext_i_2: oscillator@5 {
1296 ++ clkaud_ext_i_2: oscillator-aud2 {
1297 + compatible = "fixed-clock";
1298 + #clock-cells = <0>;
1299 + clock-frequency = <180633600>;
1300 + clock-output-names = "clkaud_ext_i_2";
1301 + };
1302 +
1303 +- clki2si0_mck_i: oscillator@6 {
1304 ++ clki2si0_mck_i: oscillator-i2s0 {
1305 + compatible = "fixed-clock";
1306 + #clock-cells = <0>;
1307 + clock-frequency = <30000000>;
1308 + clock-output-names = "clki2si0_mck_i";
1309 + };
1310 +
1311 +- clki2si1_mck_i: oscillator@7 {
1312 ++ clki2si1_mck_i: oscillator-i2s1 {
1313 + compatible = "fixed-clock";
1314 + #clock-cells = <0>;
1315 + clock-frequency = <30000000>;
1316 + clock-output-names = "clki2si1_mck_i";
1317 + };
1318 +
1319 +- clki2si2_mck_i: oscillator@8 {
1320 ++ clki2si2_mck_i: oscillator-i2s2 {
1321 + compatible = "fixed-clock";
1322 + #clock-cells = <0>;
1323 + clock-frequency = <30000000>;
1324 + clock-output-names = "clki2si2_mck_i";
1325 + };
1326 +
1327 +- clktdmin_mclk_i: oscillator@9 {
1328 ++ clktdmin_mclk_i: oscillator-mclk {
1329 + compatible = "fixed-clock";
1330 + #clock-cells = <0>;
1331 + clock-frequency = <30000000>;
1332 +@@ -266,7 +266,7 @@
1333 + reg = <0 0x10005000 0 0x1000>;
1334 + };
1335 +
1336 +- pio: pinctrl@10005000 {
1337 ++ pio: pinctrl@1000b000 {
1338 + compatible = "mediatek,mt2712-pinctrl";
1339 + reg = <0 0x1000b000 0 0x1000>;
1340 + mediatek,pctl-regmap = <&syscfg_pctl_a>;
1341 +diff --git a/arch/arm64/boot/dts/mediatek/mt6779.dtsi b/arch/arm64/boot/dts/mediatek/mt6779.dtsi
1342 +index 9bdf5145966c5..dde9ce137b4f1 100644
1343 +--- a/arch/arm64/boot/dts/mediatek/mt6779.dtsi
1344 ++++ b/arch/arm64/boot/dts/mediatek/mt6779.dtsi
1345 +@@ -88,14 +88,14 @@
1346 + interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW 0>;
1347 + };
1348 +
1349 +- clk26m: oscillator@0 {
1350 ++ clk26m: oscillator-26m {
1351 + compatible = "fixed-clock";
1352 + #clock-cells = <0>;
1353 + clock-frequency = <26000000>;
1354 + clock-output-names = "clk26m";
1355 + };
1356 +
1357 +- clk32k: oscillator@1 {
1358 ++ clk32k: oscillator-32k {
1359 + compatible = "fixed-clock";
1360 + #clock-cells = <0>;
1361 + clock-frequency = <32768>;
1362 +@@ -117,7 +117,7 @@
1363 + compatible = "simple-bus";
1364 + ranges;
1365 +
1366 +- gic: interrupt-controller@0c000000 {
1367 ++ gic: interrupt-controller@c000000 {
1368 + compatible = "arm,gic-v3";
1369 + #interrupt-cells = <4>;
1370 + interrupt-parent = <&gic>;
1371 +@@ -138,7 +138,7 @@
1372 +
1373 + };
1374 +
1375 +- sysirq: intpol-controller@0c53a650 {
1376 ++ sysirq: intpol-controller@c53a650 {
1377 + compatible = "mediatek,mt6779-sysirq",
1378 + "mediatek,mt6577-sysirq";
1379 + interrupt-controller;
1380 +diff --git a/arch/arm64/boot/dts/mediatek/mt6797.dtsi b/arch/arm64/boot/dts/mediatek/mt6797.dtsi
1381 +index 15616231022a2..c3677d77e0a45 100644
1382 +--- a/arch/arm64/boot/dts/mediatek/mt6797.dtsi
1383 ++++ b/arch/arm64/boot/dts/mediatek/mt6797.dtsi
1384 +@@ -95,7 +95,7 @@
1385 + };
1386 + };
1387 +
1388 +- clk26m: oscillator@0 {
1389 ++ clk26m: oscillator-26m {
1390 + compatible = "fixed-clock";
1391 + #clock-cells = <0>;
1392 + clock-frequency = <26000000>;
1393 +diff --git a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
1394 +index 72e0d9722e07a..35e01fa2d314b 100644
1395 +--- a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
1396 ++++ b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
1397 +@@ -14,7 +14,7 @@
1398 + #address-cells = <2>;
1399 + #size-cells = <2>;
1400 +
1401 +- clk40m: oscillator@0 {
1402 ++ clk40m: oscillator-40m {
1403 + compatible = "fixed-clock";
1404 + clock-frequency = <40000000>;
1405 + #clock-cells = <0>;
1406 +@@ -112,6 +112,12 @@
1407 + #clock-cells = <1>;
1408 + };
1409 +
1410 ++ wed_pcie: wed-pcie@10003000 {
1411 ++ compatible = "mediatek,mt7986-wed-pcie",
1412 ++ "syscon";
1413 ++ reg = <0 0x10003000 0 0x10>;
1414 ++ };
1415 ++
1416 + topckgen: topckgen@1001b000 {
1417 + compatible = "mediatek,mt7986-topckgen", "syscon";
1418 + reg = <0 0x1001B000 0 0x1000>;
1419 +@@ -168,7 +174,7 @@
1420 + #clock-cells = <1>;
1421 + };
1422 +
1423 +- trng: trng@1020f000 {
1424 ++ trng: rng@1020f000 {
1425 + compatible = "mediatek,mt7986-rng",
1426 + "mediatek,mt7623-rng";
1427 + reg = <0 0x1020f000 0 0x100>;
1428 +@@ -228,12 +234,6 @@
1429 + #reset-cells = <1>;
1430 + };
1431 +
1432 +- wed_pcie: wed-pcie@10003000 {
1433 +- compatible = "mediatek,mt7986-wed-pcie",
1434 +- "syscon";
1435 +- reg = <0 0x10003000 0 0x10>;
1436 +- };
1437 +-
1438 + wed0: wed@15010000 {
1439 + compatible = "mediatek,mt7986-wed",
1440 + "syscon";
1441 +diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
1442 +index a70b669c49baa..402136bfd5350 100644
1443 +--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
1444 ++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
1445 +@@ -1678,7 +1678,7 @@
1446 + <GIC_SPI 278 IRQ_TYPE_LEVEL_LOW>;
1447 + interrupt-names = "job", "mmu", "gpu";
1448 +
1449 +- clocks = <&topckgen CLK_TOP_MFGPLL_CK>;
1450 ++ clocks = <&mfgcfg CLK_MFG_BG3D>;
1451 +
1452 + power-domains =
1453 + <&spm MT8183_POWER_DOMAIN_MFG_CORE0>,
1454 +diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
1455 +index 905d1a90b406c..0b85b5874a4f9 100644
1456 +--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
1457 ++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
1458 +@@ -36,7 +36,7 @@
1459 + enable-method = "psci";
1460 + performance-domains = <&performance 0>;
1461 + clock-frequency = <1701000000>;
1462 +- capacity-dmips-mhz = <578>;
1463 ++ capacity-dmips-mhz = <308>;
1464 + cpu-idle-states = <&cpu_off_l &cluster_off_l>;
1465 + next-level-cache = <&l2_0>;
1466 + #cooling-cells = <2>;
1467 +@@ -49,7 +49,7 @@
1468 + enable-method = "psci";
1469 + performance-domains = <&performance 0>;
1470 + clock-frequency = <1701000000>;
1471 +- capacity-dmips-mhz = <578>;
1472 ++ capacity-dmips-mhz = <308>;
1473 + cpu-idle-states = <&cpu_off_l &cluster_off_l>;
1474 + next-level-cache = <&l2_0>;
1475 + #cooling-cells = <2>;
1476 +@@ -62,7 +62,7 @@
1477 + enable-method = "psci";
1478 + performance-domains = <&performance 0>;
1479 + clock-frequency = <1701000000>;
1480 +- capacity-dmips-mhz = <578>;
1481 ++ capacity-dmips-mhz = <308>;
1482 + cpu-idle-states = <&cpu_off_l &cluster_off_l>;
1483 + next-level-cache = <&l2_0>;
1484 + #cooling-cells = <2>;
1485 +@@ -75,7 +75,7 @@
1486 + enable-method = "psci";
1487 + performance-domains = <&performance 0>;
1488 + clock-frequency = <1701000000>;
1489 +- capacity-dmips-mhz = <578>;
1490 ++ capacity-dmips-mhz = <308>;
1491 + cpu-idle-states = <&cpu_off_l &cluster_off_l>;
1492 + next-level-cache = <&l2_0>;
1493 + #cooling-cells = <2>;
1494 +diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
1495 +index 8ee1529683a34..ec8dfb3d1c6d6 100644
1496 +--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
1497 ++++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
1498 +@@ -17,7 +17,7 @@
1499 + };
1500 +
1501 + firmware {
1502 +- optee: optee@4fd00000 {
1503 ++ optee: optee {
1504 + compatible = "linaro,optee-tz";
1505 + method = "smc";
1506 + };
1507 +@@ -209,7 +209,7 @@
1508 + };
1509 + };
1510 +
1511 +- i2c0_pins_a: i2c0@0 {
1512 ++ i2c0_pins_a: i2c0 {
1513 + pins1 {
1514 + pinmux = <MT8516_PIN_58_SDA0__FUNC_SDA0_0>,
1515 + <MT8516_PIN_59_SCL0__FUNC_SCL0_0>;
1516 +@@ -217,7 +217,7 @@
1517 + };
1518 + };
1519 +
1520 +- i2c2_pins_a: i2c2@0 {
1521 ++ i2c2_pins_a: i2c2 {
1522 + pins1 {
1523 + pinmux = <MT8516_PIN_60_SDA2__FUNC_SDA2_0>,
1524 + <MT8516_PIN_61_SCL2__FUNC_SCL2_0>;
1525 +diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
1526 +index 0170bfa8a4679..dfe2cf2f4b218 100644
1527 +--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
1528 ++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
1529 +@@ -1965,7 +1965,7 @@
1530 +
1531 + bus-range = <0x0 0xff>;
1532 +
1533 +- ranges = <0x43000000 0x35 0x40000000 0x35 0x40000000 0x2 0xe8000000>, /* prefetchable memory (11904 MB) */
1534 ++ ranges = <0x43000000 0x35 0x40000000 0x35 0x40000000 0x2 0xc0000000>, /* prefetchable memory (11264 MB) */
1535 + <0x02000000 0x0 0x40000000 0x38 0x28000000 0x0 0x08000000>, /* non-prefetchable memory (128 MB) */
1536 + <0x01000000 0x0 0x2c100000 0x00 0x2c100000 0x0 0x00100000>; /* downstream I/O (1 MB) */
1537 +
1538 +@@ -2178,7 +2178,7 @@
1539 + bus-range = <0x0 0xff>;
1540 +
1541 + ranges = <0x43000000 0x21 0x00000000 0x21 0x00000000 0x0 0x28000000>, /* prefetchable memory (640 MB) */
1542 +- <0x02000000 0x0 0x40000000 0x21 0xe8000000 0x0 0x08000000>, /* non-prefetchable memory (128 MB) */
1543 ++ <0x02000000 0x0 0x40000000 0x21 0x28000000 0x0 0x08000000>, /* non-prefetchable memory (128 MB) */
1544 + <0x01000000 0x0 0x34100000 0x00 0x34100000 0x0 0x00100000>; /* downstream I/O (1 MB) */
1545 +
1546 + interconnects = <&mc TEGRA234_MEMORY_CLIENT_PCIE3R &emc>,
1547 +@@ -2336,7 +2336,7 @@
1548 +
1549 + bus-range = <0x0 0xff>;
1550 +
1551 +- ranges = <0x43000000 0x27 0x40000000 0x27 0x40000000 0x3 0xe8000000>, /* prefetchable memory (16000 MB) */
1552 ++ ranges = <0x43000000 0x28 0x00000000 0x28 0x00000000 0x3 0x28000000>, /* prefetchable memory (12928 MB) */
1553 + <0x02000000 0x0 0x40000000 0x2b 0x28000000 0x0 0x08000000>, /* non-prefetchable memory (128 MB) */
1554 + <0x01000000 0x0 0x3a100000 0x00 0x3a100000 0x0 0x00100000>; /* downstream I/O (1 MB) */
1555 +
1556 +@@ -2442,7 +2442,7 @@
1557 +
1558 + bus-range = <0x0 0xff>;
1559 +
1560 +- ranges = <0x43000000 0x2e 0x40000000 0x2e 0x40000000 0x3 0xe8000000>, /* prefetchable memory (16000 MB) */
1561 ++ ranges = <0x43000000 0x30 0x00000000 0x30 0x00000000 0x2 0x28000000>, /* prefetchable memory (8832 MB) */
1562 + <0x02000000 0x0 0x40000000 0x32 0x28000000 0x0 0x08000000>, /* non-prefetchable memory (128 MB) */
1563 + <0x01000000 0x0 0x3e100000 0x00 0x3e100000 0x0 0x00100000>; /* downstream I/O (1 MB) */
1564 +
1565 +diff --git a/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts b/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts
1566 +index 1ba2eca33c7b6..6a716c83e5f1d 100644
1567 +--- a/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts
1568 ++++ b/arch/arm64/boot/dts/qcom/ipq6018-cp01-c1.dts
1569 +@@ -37,6 +37,8 @@
1570 +
1571 + &blsp1_spi1 {
1572 + cs-select = <0>;
1573 ++ pinctrl-0 = <&spi_0_pins>;
1574 ++ pinctrl-names = "default";
1575 + status = "okay";
1576 +
1577 + flash@0 {
1578 +diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
1579 +index a831064700ee8..9743cb270639d 100644
1580 +--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
1581 ++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
1582 +@@ -1345,7 +1345,7 @@
1583 + };
1584 +
1585 + mpss: remoteproc@4080000 {
1586 +- compatible = "qcom,msm8916-mss-pil", "qcom,q6v5-pil";
1587 ++ compatible = "qcom,msm8916-mss-pil";
1588 + reg = <0x04080000 0x100>,
1589 + <0x04020000 0x040>;
1590 +
1591 +diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
1592 +index aba7176443919..1107befc3b091 100644
1593 +--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
1594 ++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
1595 +@@ -144,82 +144,92 @@
1596 + /* Nominal fmax for now */
1597 + opp-307200000 {
1598 + opp-hz = /bits/ 64 <307200000>;
1599 +- opp-supported-hw = <0x77>;
1600 ++ opp-supported-hw = <0x7>;
1601 + clock-latency-ns = <200000>;
1602 + };
1603 + opp-422400000 {
1604 + opp-hz = /bits/ 64 <422400000>;
1605 +- opp-supported-hw = <0x77>;
1606 ++ opp-supported-hw = <0x7>;
1607 + clock-latency-ns = <200000>;
1608 + };
1609 + opp-480000000 {
1610 + opp-hz = /bits/ 64 <480000000>;
1611 +- opp-supported-hw = <0x77>;
1612 ++ opp-supported-hw = <0x7>;
1613 + clock-latency-ns = <200000>;
1614 + };
1615 + opp-556800000 {
1616 + opp-hz = /bits/ 64 <556800000>;
1617 +- opp-supported-hw = <0x77>;
1618 ++ opp-supported-hw = <0x7>;
1619 + clock-latency-ns = <200000>;
1620 + };
1621 + opp-652800000 {
1622 + opp-hz = /bits/ 64 <652800000>;
1623 +- opp-supported-hw = <0x77>;
1624 ++ opp-supported-hw = <0x7>;
1625 + clock-latency-ns = <200000>;
1626 + };
1627 + opp-729600000 {
1628 + opp-hz = /bits/ 64 <729600000>;
1629 +- opp-supported-hw = <0x77>;
1630 ++ opp-supported-hw = <0x7>;
1631 + clock-latency-ns = <200000>;
1632 + };
1633 + opp-844800000 {
1634 + opp-hz = /bits/ 64 <844800000>;
1635 +- opp-supported-hw = <0x77>;
1636 ++ opp-supported-hw = <0x7>;
1637 + clock-latency-ns = <200000>;
1638 + };
1639 + opp-960000000 {
1640 + opp-hz = /bits/ 64 <960000000>;
1641 +- opp-supported-hw = <0x77>;
1642 ++ opp-supported-hw = <0x7>;
1643 + clock-latency-ns = <200000>;
1644 + };
1645 + opp-1036800000 {
1646 + opp-hz = /bits/ 64 <1036800000>;
1647 +- opp-supported-hw = <0x77>;
1648 ++ opp-supported-hw = <0x7>;
1649 + clock-latency-ns = <200000>;
1650 + };
1651 + opp-1113600000 {
1652 + opp-hz = /bits/ 64 <1113600000>;
1653 +- opp-supported-hw = <0x77>;
1654 ++ opp-supported-hw = <0x7>;
1655 + clock-latency-ns = <200000>;
1656 + };
1657 + opp-1190400000 {
1658 + opp-hz = /bits/ 64 <1190400000>;
1659 +- opp-supported-hw = <0x77>;
1660 ++ opp-supported-hw = <0x7>;
1661 + clock-latency-ns = <200000>;
1662 + };
1663 + opp-1228800000 {
1664 + opp-hz = /bits/ 64 <1228800000>;
1665 +- opp-supported-hw = <0x77>;
1666 ++ opp-supported-hw = <0x7>;
1667 + clock-latency-ns = <200000>;
1668 + };
1669 + opp-1324800000 {
1670 + opp-hz = /bits/ 64 <1324800000>;
1671 +- opp-supported-hw = <0x77>;
1672 ++ opp-supported-hw = <0x5>;
1673 ++ clock-latency-ns = <200000>;
1674 ++ };
1675 ++ opp-1363200000 {
1676 ++ opp-hz = /bits/ 64 <1363200000>;
1677 ++ opp-supported-hw = <0x2>;
1678 + clock-latency-ns = <200000>;
1679 + };
1680 + opp-1401600000 {
1681 + opp-hz = /bits/ 64 <1401600000>;
1682 +- opp-supported-hw = <0x77>;
1683 ++ opp-supported-hw = <0x5>;
1684 + clock-latency-ns = <200000>;
1685 + };
1686 + opp-1478400000 {
1687 + opp-hz = /bits/ 64 <1478400000>;
1688 +- opp-supported-hw = <0x77>;
1689 ++ opp-supported-hw = <0x1>;
1690 ++ clock-latency-ns = <200000>;
1691 ++ };
1692 ++ opp-1497600000 {
1693 ++ opp-hz = /bits/ 64 <1497600000>;
1694 ++ opp-supported-hw = <0x04>;
1695 + clock-latency-ns = <200000>;
1696 + };
1697 + opp-1593600000 {
1698 + opp-hz = /bits/ 64 <1593600000>;
1699 +- opp-supported-hw = <0x77>;
1700 ++ opp-supported-hw = <0x1>;
1701 + clock-latency-ns = <200000>;
1702 + };
1703 + };
1704 +@@ -232,127 +242,137 @@
1705 + /* Nominal fmax for now */
1706 + opp-307200000 {
1707 + opp-hz = /bits/ 64 <307200000>;
1708 +- opp-supported-hw = <0x77>;
1709 ++ opp-supported-hw = <0x7>;
1710 + clock-latency-ns = <200000>;
1711 + };
1712 + opp-403200000 {
1713 + opp-hz = /bits/ 64 <403200000>;
1714 +- opp-supported-hw = <0x77>;
1715 ++ opp-supported-hw = <0x7>;
1716 + clock-latency-ns = <200000>;
1717 + };
1718 + opp-480000000 {
1719 + opp-hz = /bits/ 64 <480000000>;
1720 +- opp-supported-hw = <0x77>;
1721 ++ opp-supported-hw = <0x7>;
1722 + clock-latency-ns = <200000>;
1723 + };
1724 + opp-556800000 {
1725 + opp-hz = /bits/ 64 <556800000>;
1726 +- opp-supported-hw = <0x77>;
1727 ++ opp-supported-hw = <0x7>;
1728 + clock-latency-ns = <200000>;
1729 + };
1730 + opp-652800000 {
1731 + opp-hz = /bits/ 64 <652800000>;
1732 +- opp-supported-hw = <0x77>;
1733 ++ opp-supported-hw = <0x7>;
1734 + clock-latency-ns = <200000>;
1735 + };
1736 + opp-729600000 {
1737 + opp-hz = /bits/ 64 <729600000>;
1738 +- opp-supported-hw = <0x77>;
1739 ++ opp-supported-hw = <0x7>;
1740 + clock-latency-ns = <200000>;
1741 + };
1742 + opp-806400000 {
1743 + opp-hz = /bits/ 64 <806400000>;
1744 +- opp-supported-hw = <0x77>;
1745 ++ opp-supported-hw = <0x7>;
1746 + clock-latency-ns = <200000>;
1747 + };
1748 + opp-883200000 {
1749 + opp-hz = /bits/ 64 <883200000>;
1750 +- opp-supported-hw = <0x77>;
1751 ++ opp-supported-hw = <0x7>;
1752 + clock-latency-ns = <200000>;
1753 + };
1754 + opp-940800000 {
1755 + opp-hz = /bits/ 64 <940800000>;
1756 +- opp-supported-hw = <0x77>;
1757 ++ opp-supported-hw = <0x7>;
1758 + clock-latency-ns = <200000>;
1759 + };
1760 + opp-1036800000 {
1761 + opp-hz = /bits/ 64 <1036800000>;
1762 +- opp-supported-hw = <0x77>;
1763 ++ opp-supported-hw = <0x7>;
1764 + clock-latency-ns = <200000>;
1765 + };
1766 + opp-1113600000 {
1767 + opp-hz = /bits/ 64 <1113600000>;
1768 +- opp-supported-hw = <0x77>;
1769 ++ opp-supported-hw = <0x7>;
1770 + clock-latency-ns = <200000>;
1771 + };
1772 + opp-1190400000 {
1773 + opp-hz = /bits/ 64 <1190400000>;
1774 +- opp-supported-hw = <0x77>;
1775 ++ opp-supported-hw = <0x7>;
1776 + clock-latency-ns = <200000>;
1777 + };
1778 + opp-1248000000 {
1779 + opp-hz = /bits/ 64 <1248000000>;
1780 +- opp-supported-hw = <0x77>;
1781 ++ opp-supported-hw = <0x7>;
1782 + clock-latency-ns = <200000>;
1783 + };
1784 + opp-1324800000 {
1785 + opp-hz = /bits/ 64 <1324800000>;
1786 +- opp-supported-hw = <0x77>;
1787 ++ opp-supported-hw = <0x7>;
1788 + clock-latency-ns = <200000>;
1789 + };
1790 + opp-1401600000 {
1791 + opp-hz = /bits/ 64 <1401600000>;
1792 +- opp-supported-hw = <0x77>;
1793 ++ opp-supported-hw = <0x7>;
1794 + clock-latency-ns = <200000>;
1795 + };
1796 + opp-1478400000 {
1797 + opp-hz = /bits/ 64 <1478400000>;
1798 +- opp-supported-hw = <0x77>;
1799 ++ opp-supported-hw = <0x7>;
1800 + clock-latency-ns = <200000>;
1801 + };
1802 + opp-1555200000 {
1803 + opp-hz = /bits/ 64 <1555200000>;
1804 +- opp-supported-hw = <0x77>;
1805 ++ opp-supported-hw = <0x7>;
1806 + clock-latency-ns = <200000>;
1807 + };
1808 + opp-1632000000 {
1809 + opp-hz = /bits/ 64 <1632000000>;
1810 +- opp-supported-hw = <0x77>;
1811 ++ opp-supported-hw = <0x7>;
1812 + clock-latency-ns = <200000>;
1813 + };
1814 + opp-1708800000 {
1815 + opp-hz = /bits/ 64 <1708800000>;
1816 +- opp-supported-hw = <0x77>;
1817 ++ opp-supported-hw = <0x7>;
1818 + clock-latency-ns = <200000>;
1819 + };
1820 + opp-1785600000 {
1821 + opp-hz = /bits/ 64 <1785600000>;
1822 +- opp-supported-hw = <0x77>;
1823 ++ opp-supported-hw = <0x7>;
1824 ++ clock-latency-ns = <200000>;
1825 ++ };
1826 ++ opp-1804800000 {
1827 ++ opp-hz = /bits/ 64 <1804800000>;
1828 ++ opp-supported-hw = <0x6>;
1829 + clock-latency-ns = <200000>;
1830 + };
1831 + opp-1824000000 {
1832 + opp-hz = /bits/ 64 <1824000000>;
1833 +- opp-supported-hw = <0x77>;
1834 ++ opp-supported-hw = <0x1>;
1835 ++ clock-latency-ns = <200000>;
1836 ++ };
1837 ++ opp-1900800000 {
1838 ++ opp-hz = /bits/ 64 <1900800000>;
1839 ++ opp-supported-hw = <0x4>;
1840 + clock-latency-ns = <200000>;
1841 + };
1842 + opp-1920000000 {
1843 + opp-hz = /bits/ 64 <1920000000>;
1844 +- opp-supported-hw = <0x77>;
1845 ++ opp-supported-hw = <0x1>;
1846 + clock-latency-ns = <200000>;
1847 + };
1848 + opp-1996800000 {
1849 + opp-hz = /bits/ 64 <1996800000>;
1850 +- opp-supported-hw = <0x77>;
1851 ++ opp-supported-hw = <0x1>;
1852 + clock-latency-ns = <200000>;
1853 + };
1854 + opp-2073600000 {
1855 + opp-hz = /bits/ 64 <2073600000>;
1856 +- opp-supported-hw = <0x77>;
1857 ++ opp-supported-hw = <0x1>;
1858 + clock-latency-ns = <200000>;
1859 + };
1860 + opp-2150400000 {
1861 + opp-hz = /bits/ 64 <2150400000>;
1862 +- opp-supported-hw = <0x77>;
1863 ++ opp-supported-hw = <0x1>;
1864 + clock-latency-ns = <200000>;
1865 + };
1866 + };
1867 +@@ -1213,17 +1233,17 @@
1868 + compatible = "operating-points-v2";
1869 +
1870 + /*
1871 +- * 624Mhz and 560Mhz are only available on speed
1872 +- * bin (1 << 0). All the rest are available on
1873 +- * all bins of the hardware
1874 ++ * 624Mhz is only available on speed bins 0 and 3.
1875 ++ * 560Mhz is only available on speed bins 0, 2 and 3.
1876 ++ * All the rest are available on all bins of the hardware.
1877 + */
1878 + opp-624000000 {
1879 + opp-hz = /bits/ 64 <624000000>;
1880 +- opp-supported-hw = <0x01>;
1881 ++ opp-supported-hw = <0x09>;
1882 + };
1883 + opp-560000000 {
1884 + opp-hz = /bits/ 64 <560000000>;
1885 +- opp-supported-hw = <0x01>;
1886 ++ opp-supported-hw = <0x0d>;
1887 + };
1888 + opp-510000000 {
1889 + opp-hz = /bits/ 64 <510000000>;
1890 +@@ -3342,7 +3362,7 @@
1891 + interrupt-names = "intr1", "intr2";
1892 + interrupt-controller;
1893 + #interrupt-cells = <1>;
1894 +- reset-gpios = <&tlmm 64 GPIO_ACTIVE_HIGH>;
1895 ++ reset-gpios = <&tlmm 64 GPIO_ACTIVE_LOW>;
1896 +
1897 + slim-ifc-dev = <&tasha_ifd>;
1898 +
1899 +diff --git a/arch/arm64/boot/dts/qcom/msm8996pro.dtsi b/arch/arm64/boot/dts/qcom/msm8996pro.dtsi
1900 +new file mode 100644
1901 +index 0000000000000..63e1b4ec7a360
1902 +--- /dev/null
1903 ++++ b/arch/arm64/boot/dts/qcom/msm8996pro.dtsi
1904 +@@ -0,0 +1,266 @@
1905 ++// SPDX-License-Identifier: BSD-3-Clause
1906 ++/*
1907 ++ * Copyright (c) 2022, Linaro Limited
1908 ++ */
1909 ++
1910 ++#include "msm8996.dtsi"
1911 ++
1912 ++/ {
1913 ++ /delete-node/ opp-table-cluster0;
1914 ++ /delete-node/ opp-table-cluster1;
1915 ++
1916 ++ /*
1917 ++ * On MSM8996 Pro the cpufreq driver shifts speed bins into the high
1918 ++ * nibble of supported hw, so speed bin 0 becomes 0x10, speed bin 1
1919 ++ * becomes 0x20, speed 2 becomes 0x40.
1920 ++ */
1921 ++
1922 ++ cluster0_opp: opp-table-cluster0 {
1923 ++ compatible = "operating-points-v2-kryo-cpu";
1924 ++ nvmem-cells = <&speedbin_efuse>;
1925 ++ opp-shared;
1926 ++
1927 ++ opp-307200000 {
1928 ++ opp-hz = /bits/ 64 <307200000>;
1929 ++ opp-supported-hw = <0x70>;
1930 ++ clock-latency-ns = <200000>;
1931 ++ };
1932 ++ opp-384000000 {
1933 ++ opp-hz = /bits/ 64 <384000000>;
1934 ++ opp-supported-hw = <0x70>;
1935 ++ clock-latency-ns = <200000>;
1936 ++ };
1937 ++ opp-460800000 {
1938 ++ opp-hz = /bits/ 64 <460800000>;
1939 ++ opp-supported-hw = <0x70>;
1940 ++ clock-latency-ns = <200000>;
1941 ++ };
1942 ++ opp-537600000 {
1943 ++ opp-hz = /bits/ 64 <537600000>;
1944 ++ opp-supported-hw = <0x70>;
1945 ++ clock-latency-ns = <200000>;
1946 ++ };
1947 ++ opp-614400000 {
1948 ++ opp-hz = /bits/ 64 <614400000>;
1949 ++ opp-supported-hw = <0x70>;
1950 ++ clock-latency-ns = <200000>;
1951 ++ };
1952 ++ opp-691200000 {
1953 ++ opp-hz = /bits/ 64 <691200000>;
1954 ++ opp-supported-hw = <0x70>;
1955 ++ clock-latency-ns = <200000>;
1956 ++ };
1957 ++ opp-768000000 {
1958 ++ opp-hz = /bits/ 64 <768000000>;
1959 ++ opp-supported-hw = <0x70>;
1960 ++ clock-latency-ns = <200000>;
1961 ++ };
1962 ++ opp-844800000 {
1963 ++ opp-hz = /bits/ 64 <844800000>;
1964 ++ opp-supported-hw = <0x70>;
1965 ++ clock-latency-ns = <200000>;
1966 ++ };
1967 ++ opp-902400000 {
1968 ++ opp-hz = /bits/ 64 <902400000>;
1969 ++ opp-supported-hw = <0x70>;
1970 ++ clock-latency-ns = <200000>;
1971 ++ };
1972 ++ opp-979200000 {
1973 ++ opp-hz = /bits/ 64 <979200000>;
1974 ++ opp-supported-hw = <0x70>;
1975 ++ clock-latency-ns = <200000>;
1976 ++ };
1977 ++ opp-1056000000 {
1978 ++ opp-hz = /bits/ 64 <1056000000>;
1979 ++ opp-supported-hw = <0x70>;
1980 ++ clock-latency-ns = <200000>;
1981 ++ };
1982 ++ opp-1132800000 {
1983 ++ opp-hz = /bits/ 64 <1132800000>;
1984 ++ opp-supported-hw = <0x70>;
1985 ++ clock-latency-ns = <200000>;
1986 ++ };
1987 ++ opp-1209600000 {
1988 ++ opp-hz = /bits/ 64 <1209600000>;
1989 ++ opp-supported-hw = <0x70>;
1990 ++ clock-latency-ns = <200000>;
1991 ++ };
1992 ++ opp-1286400000 {
1993 ++ opp-hz = /bits/ 64 <1286400000>;
1994 ++ opp-supported-hw = <0x70>;
1995 ++ clock-latency-ns = <200000>;
1996 ++ };
1997 ++ opp-1363200000 {
1998 ++ opp-hz = /bits/ 64 <1363200000>;
1999 ++ opp-supported-hw = <0x70>;
2000 ++ clock-latency-ns = <200000>;
2001 ++ };
2002 ++ opp-1440000000 {
2003 ++ opp-hz = /bits/ 64 <1440000000>;
2004 ++ opp-supported-hw = <0x70>;
2005 ++ clock-latency-ns = <200000>;
2006 ++ };
2007 ++ opp-1516800000 {
2008 ++ opp-hz = /bits/ 64 <1516800000>;
2009 ++ opp-supported-hw = <0x70>;
2010 ++ clock-latency-ns = <200000>;
2011 ++ };
2012 ++ opp-1593600000 {
2013 ++ opp-hz = /bits/ 64 <1593600000>;
2014 ++ opp-supported-hw = <0x70>;
2015 ++ clock-latency-ns = <200000>;
2016 ++ };
2017 ++ opp-1996800000 {
2018 ++ opp-hz = /bits/ 64 <1996800000>;
2019 ++ opp-supported-hw = <0x20>;
2020 ++ clock-latency-ns = <200000>;
2021 ++ };
2022 ++ opp-2188800000 {
2023 ++ opp-hz = /bits/ 64 <2188800000>;
2024 ++ opp-supported-hw = <0x10>;
2025 ++ clock-latency-ns = <200000>;
2026 ++ };
2027 ++ };
2028 ++
2029 ++ cluster1_opp: opp-table-cluster1 {
2030 ++ compatible = "operating-points-v2-kryo-cpu";
2031 ++ nvmem-cells = <&speedbin_efuse>;
2032 ++ opp-shared;
2033 ++
2034 ++ opp-307200000 {
2035 ++ opp-hz = /bits/ 64 <307200000>;
2036 ++ opp-supported-hw = <0x70>;
2037 ++ clock-latency-ns = <200000>;
2038 ++ };
2039 ++ opp-384000000 {
2040 ++ opp-hz = /bits/ 64 <384000000>;
2041 ++ opp-supported-hw = <0x70>;
2042 ++ clock-latency-ns = <200000>;
2043 ++ };
2044 ++ opp-460800000 {
2045 ++ opp-hz = /bits/ 64 <460800000>;
2046 ++ opp-supported-hw = <0x70>;
2047 ++ clock-latency-ns = <200000>;
2048 ++ };
2049 ++ opp-537600000 {
2050 ++ opp-hz = /bits/ 64 <537600000>;
2051 ++ opp-supported-hw = <0x70>;
2052 ++ clock-latency-ns = <200000>;
2053 ++ };
2054 ++ opp-614400000 {
2055 ++ opp-hz = /bits/ 64 <614400000>;
2056 ++ opp-supported-hw = <0x70>;
2057 ++ clock-latency-ns = <200000>;
2058 ++ };
2059 ++ opp-691200000 {
2060 ++ opp-hz = /bits/ 64 <691200000>;
2061 ++ opp-supported-hw = <0x70>;
2062 ++ clock-latency-ns = <200000>;
2063 ++ };
2064 ++ opp-748800000 {
2065 ++ opp-hz = /bits/ 64 <748800000>;
2066 ++ opp-supported-hw = <0x70>;
2067 ++ clock-latency-ns = <200000>;
2068 ++ };
2069 ++ opp-825600000 {
2070 ++ opp-hz = /bits/ 64 <825600000>;
2071 ++ opp-supported-hw = <0x70>;
2072 ++ clock-latency-ns = <200000>;
2073 ++ };
2074 ++ opp-902400000 {
2075 ++ opp-hz = /bits/ 64 <902400000>;
2076 ++ opp-supported-hw = <0x70>;
2077 ++ clock-latency-ns = <200000>;
2078 ++ };
2079 ++ opp-979200000 {
2080 ++ opp-hz = /bits/ 64 <979200000>;
2081 ++ opp-supported-hw = <0x70>;
2082 ++ clock-latency-ns = <200000>;
2083 ++ };
2084 ++ opp-1056000000 {
2085 ++ opp-hz = /bits/ 64 <1056000000>;
2086 ++ opp-supported-hw = <0x70>;
2087 ++ clock-latency-ns = <200000>;
2088 ++ };
2089 ++ opp-1132800000 {
2090 ++ opp-hz = /bits/ 64 <1132800000>;
2091 ++ opp-supported-hw = <0x70>;
2092 ++ clock-latency-ns = <200000>;
2093 ++ };
2094 ++ opp-1209600000 {
2095 ++ opp-hz = /bits/ 64 <1209600000>;
2096 ++ opp-supported-hw = <0x70>;
2097 ++ clock-latency-ns = <200000>;
2098 ++ };
2099 ++ opp-1286400000 {
2100 ++ opp-hz = /bits/ 64 <1286400000>;
2101 ++ opp-supported-hw = <0x70>;
2102 ++ clock-latency-ns = <200000>;
2103 ++ };
2104 ++ opp-1363200000 {
2105 ++ opp-hz = /bits/ 64 <1363200000>;
2106 ++ opp-supported-hw = <0x70>;
2107 ++ clock-latency-ns = <200000>;
2108 ++ };
2109 ++ opp-1440000000 {
2110 ++ opp-hz = /bits/ 64 <1440000000>;
2111 ++ opp-supported-hw = <0x70>;
2112 ++ clock-latency-ns = <200000>;
2113 ++ };
2114 ++ opp-1516800000 {
2115 ++ opp-hz = /bits/ 64 <1516800000>;
2116 ++ opp-supported-hw = <0x70>;
2117 ++ clock-latency-ns = <200000>;
2118 ++ };
2119 ++ opp-1593600000 {
2120 ++ opp-hz = /bits/ 64 <1593600000>;
2121 ++ opp-supported-hw = <0x70>;
2122 ++ clock-latency-ns = <200000>;
2123 ++ };
2124 ++ opp-1670400000 {
2125 ++ opp-hz = /bits/ 64 <1670400000>;
2126 ++ opp-supported-hw = <0x70>;
2127 ++ clock-latency-ns = <200000>;
2128 ++ };
2129 ++ opp-1747200000 {
2130 ++ opp-hz = /bits/ 64 <1747200000>;
2131 ++ opp-supported-hw = <0x70>;
2132 ++ clock-latency-ns = <200000>;
2133 ++ };
2134 ++ opp-1824000000 {
2135 ++ opp-hz = /bits/ 64 <1824000000>;
2136 ++ opp-supported-hw = <0x70>;
2137 ++ clock-latency-ns = <200000>;
2138 ++ };
2139 ++ opp-1900800000 {
2140 ++ opp-hz = /bits/ 64 <1900800000>;
2141 ++ opp-supported-hw = <0x70>;
2142 ++ clock-latency-ns = <200000>;
2143 ++ };
2144 ++ opp-1977600000 {
2145 ++ opp-hz = /bits/ 64 <1977600000>;
2146 ++ opp-supported-hw = <0x30>;
2147 ++ clock-latency-ns = <200000>;
2148 ++ };
2149 ++ opp-2054400000 {
2150 ++ opp-hz = /bits/ 64 <2054400000>;
2151 ++ opp-supported-hw = <0x30>;
2152 ++ clock-latency-ns = <200000>;
2153 ++ };
2154 ++ opp-2150400000 {
2155 ++ opp-hz = /bits/ 64 <2150400000>;
2156 ++ opp-supported-hw = <0x30>;
2157 ++ clock-latency-ns = <200000>;
2158 ++ };
2159 ++ opp-2246400000 {
2160 ++ opp-hz = /bits/ 64 <2246400000>;
2161 ++ opp-supported-hw = <0x10>;
2162 ++ clock-latency-ns = <200000>;
2163 ++ };
2164 ++ opp-2342400000 {
2165 ++ opp-hz = /bits/ 64 <2342400000>;
2166 ++ opp-supported-hw = <0x10>;
2167 ++ clock-latency-ns = <200000>;
2168 ++ };
2169 ++ };
2170 ++};
2171 +diff --git a/arch/arm64/boot/dts/qcom/pm6350.dtsi b/arch/arm64/boot/dts/qcom/pm6350.dtsi
2172 +index ecf9b99191828..68245d78d2b93 100644
2173 +--- a/arch/arm64/boot/dts/qcom/pm6350.dtsi
2174 ++++ b/arch/arm64/boot/dts/qcom/pm6350.dtsi
2175 +@@ -3,6 +3,7 @@
2176 + * Copyright (c) 2021, Luca Weiss <luca@×××××.xyz>
2177 + */
2178 +
2179 ++#include <dt-bindings/input/input.h>
2180 + #include <dt-bindings/spmi/spmi.h>
2181 +
2182 + &spmi_bus {
2183 +diff --git a/arch/arm64/boot/dts/qcom/pm660.dtsi b/arch/arm64/boot/dts/qcom/pm660.dtsi
2184 +index e1622b16c08bd..02a69ac0149b2 100644
2185 +--- a/arch/arm64/boot/dts/qcom/pm660.dtsi
2186 ++++ b/arch/arm64/boot/dts/qcom/pm660.dtsi
2187 +@@ -163,7 +163,7 @@
2188 + qcom,pre-scaling = <1 3>;
2189 + };
2190 +
2191 +- vcoin: vcoin@83 {
2192 ++ vcoin: vcoin@85 {
2193 + reg = <ADC5_VCOIN>;
2194 + qcom,decimation = <1024>;
2195 + qcom,pre-scaling = <1 3>;
2196 +diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
2197 +index 1bd6c7dcd9e91..bfab67f4a7c9c 100644
2198 +--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
2199 ++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
2200 +@@ -194,6 +194,12 @@ ap_ts_pen_1v8: &i2c4 {
2201 + pins = "gpio49", "gpio50", "gpio51", "gpio52";
2202 + function = "mi2s_1";
2203 + };
2204 ++
2205 ++ pinconf {
2206 ++ pins = "gpio49", "gpio50", "gpio51", "gpio52";
2207 ++ drive-strength = <2>;
2208 ++ bias-pull-down;
2209 ++ };
2210 + };
2211 +
2212 + &ts_reset_l {
2213 +diff --git a/arch/arm64/boot/dts/qcom/sc7280-idp.dts b/arch/arm64/boot/dts/qcom/sc7280-idp.dts
2214 +index 7559164cdda08..e2e37a0292ad6 100644
2215 +--- a/arch/arm64/boot/dts/qcom/sc7280-idp.dts
2216 ++++ b/arch/arm64/boot/dts/qcom/sc7280-idp.dts
2217 +@@ -10,7 +10,6 @@
2218 + #include <dt-bindings/iio/qcom,spmi-adc7-pmr735a.h>
2219 + #include "sc7280-idp.dtsi"
2220 + #include "pmr735a.dtsi"
2221 +-#include "sc7280-herobrine-lte-sku.dtsi"
2222 +
2223 + / {
2224 + model = "Qualcomm Technologies, Inc. sc7280 IDP SKU1 platform";
2225 +diff --git a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
2226 +index cd432a2856a7b..ca50f0ba9b815 100644
2227 +--- a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
2228 ++++ b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
2229 +@@ -13,6 +13,7 @@
2230 + #include "pmk8350.dtsi"
2231 +
2232 + #include "sc7280-chrome-common.dtsi"
2233 ++#include "sc7280-herobrine-lte-sku.dtsi"
2234 +
2235 + / {
2236 + aliases {
2237 +@@ -34,7 +35,7 @@
2238 + pinctrl-0 = <&wcd_reset_n>;
2239 + pinctrl-1 = <&wcd_reset_n_sleep>;
2240 +
2241 +- reset-gpios = <&tlmm 83 GPIO_ACTIVE_HIGH>;
2242 ++ reset-gpios = <&tlmm 83 GPIO_ACTIVE_LOW>;
2243 +
2244 + qcom,rx-device = <&wcd_rx>;
2245 + qcom,tx-device = <&wcd_tx>;
2246 +diff --git a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
2247 +index 4b8c676b0bb19..f7665b3799233 100644
2248 +--- a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
2249 ++++ b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
2250 +@@ -37,7 +37,7 @@
2251 + pinctrl-0 = <&wcd_reset_n>, <&us_euro_hs_sel>;
2252 + pinctrl-1 = <&wcd_reset_n_sleep>, <&us_euro_hs_sel>;
2253 +
2254 +- reset-gpios = <&tlmm 83 GPIO_ACTIVE_HIGH>;
2255 ++ reset-gpios = <&tlmm 83 GPIO_ACTIVE_LOW>;
2256 + us-euro-gpios = <&tlmm 81 GPIO_ACTIVE_HIGH>;
2257 +
2258 + qcom,rx-device = <&wcd_rx>;
2259 +diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
2260 +index b51b85f583e5d..e119060ac56cb 100644
2261 +--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
2262 ++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
2263 +@@ -779,7 +779,7 @@
2264 + pins = "gpio17", "gpio18", "gpio19";
2265 + function = "gpio";
2266 + drive-strength = <2>;
2267 +- bias-no-pull;
2268 ++ bias-disable;
2269 + };
2270 + };
2271 +
2272 +diff --git a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
2273 +index b5eb8f7eca1d5..b5f11fbcc3004 100644
2274 +--- a/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
2275 ++++ b/arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi
2276 +@@ -1436,7 +1436,7 @@ ap_ts_i2c: &i2c14 {
2277 + config {
2278 + pins = "gpio126";
2279 + function = "gpio";
2280 +- bias-no-pull;
2281 ++ bias-disable;
2282 + drive-strength = <2>;
2283 + output-low;
2284 + };
2285 +@@ -1446,7 +1446,7 @@ ap_ts_i2c: &i2c14 {
2286 + config {
2287 + pins = "gpio126";
2288 + function = "gpio";
2289 +- bias-no-pull;
2290 ++ bias-disable;
2291 + drive-strength = <2>;
2292 + output-high;
2293 + };
2294 +diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
2295 +index afc17e4d403fc..f982594896796 100644
2296 +--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
2297 ++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
2298 +@@ -628,7 +628,7 @@
2299 + };
2300 +
2301 + wcd_intr_default: wcd-intr-default {
2302 +- pins = "goui54";
2303 ++ pins = "gpio54";
2304 + function = "gpio";
2305 + input-enable;
2306 + bias-pull-down;
2307 +diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
2308 +index 1fe3fa3ad8770..7818fb6c5a10a 100644
2309 +--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
2310 ++++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
2311 +@@ -458,7 +458,7 @@
2312 + sdhc_1: mmc@4744000 {
2313 + compatible = "qcom,sm6125-sdhci", "qcom,sdhci-msm-v5";
2314 + reg = <0x04744000 0x1000>, <0x04745000 0x1000>;
2315 +- reg-names = "hc", "core";
2316 ++ reg-names = "hc", "cqhci";
2317 +
2318 + interrupts = <GIC_SPI 348 IRQ_TYPE_LEVEL_HIGH>,
2319 + <GIC_SPI 352 IRQ_TYPE_LEVEL_HIGH>;
2320 +diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
2321 +index c39de7d3ace0b..7be5fc8dec671 100644
2322 +--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
2323 ++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
2324 +@@ -485,6 +485,7 @@
2325 + interrupts = <GIC_SPI 641 IRQ_TYPE_LEVEL_HIGH>,
2326 + <GIC_SPI 644 IRQ_TYPE_LEVEL_HIGH>;
2327 + interrupt-names = "hc_irq", "pwr_irq";
2328 ++ iommus = <&apps_smmu 0x60 0x0>;
2329 +
2330 + clocks = <&gcc GCC_SDCC1_AHB_CLK>,
2331 + <&gcc GCC_SDCC1_APPS_CLK>,
2332 +@@ -1063,6 +1064,7 @@
2333 + interrupts = <GIC_SPI 204 IRQ_TYPE_LEVEL_HIGH>,
2334 + <GIC_SPI 222 IRQ_TYPE_LEVEL_HIGH>;
2335 + interrupt-names = "hc_irq", "pwr_irq";
2336 ++ iommus = <&apps_smmu 0x560 0x0>;
2337 +
2338 + clocks = <&gcc GCC_SDCC2_AHB_CLK>,
2339 + <&gcc GCC_SDCC2_APPS_CLK>,
2340 +@@ -1148,15 +1150,11 @@
2341 + dp_phy: dp-phy@88ea200 {
2342 + reg = <0 0x088ea200 0 0x200>,
2343 + <0 0x088ea400 0 0x200>,
2344 +- <0 0x088eac00 0 0x400>,
2345 ++ <0 0x088eaa00 0 0x200>,
2346 + <0 0x088ea600 0 0x200>,
2347 +- <0 0x088ea800 0 0x200>,
2348 +- <0 0x088eaa00 0 0x100>;
2349 ++ <0 0x088ea800 0 0x200>;
2350 + #phy-cells = <0>;
2351 + #clock-cells = <1>;
2352 +- clocks = <&gcc GCC_USB3_PRIM_PHY_PIPE_CLK>;
2353 +- clock-names = "pipe0";
2354 +- clock-output-names = "usb3_phy_pipe_clk_src";
2355 + };
2356 + };
2357 +
2358 +diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
2359 +index cef8c4f4f0ff2..4a527a64772b4 100644
2360 +--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
2361 ++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
2362 +@@ -2032,11 +2032,11 @@
2363 + status = "disabled";
2364 +
2365 + ufs_mem_phy_lanes: phy@1d87400 {
2366 +- reg = <0 0x01d87400 0 0x108>,
2367 +- <0 0x01d87600 0 0x1e0>,
2368 +- <0 0x01d87c00 0 0x1dc>,
2369 +- <0 0x01d87800 0 0x108>,
2370 +- <0 0x01d87a00 0 0x1e0>;
2371 ++ reg = <0 0x01d87400 0 0x16c>,
2372 ++ <0 0x01d87600 0 0x200>,
2373 ++ <0 0x01d87c00 0 0x200>,
2374 ++ <0 0x01d87800 0 0x16c>,
2375 ++ <0 0x01d87a00 0 0x200>;
2376 + #phy-cells = <0>;
2377 + };
2378 + };
2379 +diff --git a/arch/arm64/boot/dts/qcom/sm8250-mtp.dts b/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
2380 +index a102aa5efa326..a05fe468e0b41 100644
2381 +--- a/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
2382 ++++ b/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
2383 +@@ -635,7 +635,7 @@
2384 + wcd938x: codec {
2385 + compatible = "qcom,wcd9380-codec";
2386 + #sound-dai-cells = <1>;
2387 +- reset-gpios = <&tlmm 32 GPIO_ACTIVE_HIGH>;
2388 ++ reset-gpios = <&tlmm 32 GPIO_ACTIVE_LOW>;
2389 + vdd-buck-supply = <&vreg_s4a_1p8>;
2390 + vdd-rxtx-supply = <&vreg_s4a_1p8>;
2391 + vdd-io-supply = <&vreg_s4a_1p8>;
2392 +diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
2393 +index 5428aab3058dd..e4769dcfaad7b 100644
2394 +--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
2395 ++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
2396 +@@ -619,7 +619,7 @@
2397 + pins = "gpio39";
2398 + function = "gpio";
2399 + drive-strength = <2>;
2400 +- bias-disabled;
2401 ++ bias-disable;
2402 + input-enable;
2403 + };
2404 +
2405 +diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
2406 +index e276eed1f8e2c..29e352a577311 100644
2407 +--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
2408 ++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
2409 +@@ -2180,11 +2180,11 @@
2410 + status = "disabled";
2411 +
2412 + ufs_mem_phy_lanes: phy@1d87400 {
2413 +- reg = <0 0x01d87400 0 0x108>,
2414 +- <0 0x01d87600 0 0x1e0>,
2415 +- <0 0x01d87c00 0 0x1dc>,
2416 +- <0 0x01d87800 0 0x108>,
2417 +- <0 0x01d87a00 0 0x1e0>;
2418 ++ reg = <0 0x01d87400 0 0x16c>,
2419 ++ <0 0x01d87600 0 0x200>,
2420 ++ <0 0x01d87c00 0 0x200>,
2421 ++ <0 0x01d87800 0 0x16c>,
2422 ++ <0 0x01d87a00 0 0x200>;
2423 + #phy-cells = <0>;
2424 + };
2425 + };
2426 +@@ -2455,7 +2455,7 @@
2427 + pins = "gpio7";
2428 + function = "dmic1_data";
2429 + drive-strength = <2>;
2430 +- pull-down;
2431 ++ bias-pull-down;
2432 + input-enable;
2433 + };
2434 + };
2435 +@@ -2892,15 +2892,11 @@
2436 + dp_phy: dp-phy@88ea200 {
2437 + reg = <0 0x088ea200 0 0x200>,
2438 + <0 0x088ea400 0 0x200>,
2439 +- <0 0x088eac00 0 0x400>,
2440 ++ <0 0x088eaa00 0 0x200>,
2441 + <0 0x088ea600 0 0x200>,
2442 +- <0 0x088ea800 0 0x200>,
2443 +- <0 0x088eaa00 0 0x100>;
2444 ++ <0 0x088ea800 0 0x200>;
2445 + #phy-cells = <0>;
2446 + #clock-cells = <1>;
2447 +- clocks = <&gcc GCC_USB3_PRIM_PHY_PIPE_CLK>;
2448 +- clock-names = "pipe0";
2449 +- clock-output-names = "usb3_phy_pipe_clk_src";
2450 + };
2451 + };
2452 +
2453 +diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
2454 +index a86d9ea93b9d4..a6270d97a3192 100644
2455 +--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
2456 ++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
2457 +@@ -2142,11 +2142,11 @@
2458 + status = "disabled";
2459 +
2460 + ufs_mem_phy_lanes: phy@1d87400 {
2461 +- reg = <0 0x01d87400 0 0x108>,
2462 +- <0 0x01d87600 0 0x1e0>,
2463 +- <0 0x01d87c00 0 0x1dc>,
2464 +- <0 0x01d87800 0 0x108>,
2465 +- <0 0x01d87a00 0 0x1e0>;
2466 ++ reg = <0 0x01d87400 0 0x188>,
2467 ++ <0 0x01d87600 0 0x200>,
2468 ++ <0 0x01d87c00 0 0x200>,
2469 ++ <0 0x01d87800 0 0x188>,
2470 ++ <0 0x01d87a00 0 0x200>;
2471 + #phy-cells = <0>;
2472 + };
2473 + };
2474 +diff --git a/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara-pdx223.dts b/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara-pdx223.dts
2475 +index d68765eb6d4f9..6351050bc87f2 100644
2476 +--- a/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara-pdx223.dts
2477 ++++ b/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara-pdx223.dts
2478 +@@ -556,8 +556,6 @@
2479 + pinctrl-1 = <&sdc2_sleep_state &sdc2_card_det_n>;
2480 + vmmc-supply = <&pm8350c_l9>;
2481 + vqmmc-supply = <&pm8350c_l6>;
2482 +- /* Forbid SDR104/SDR50 - broken hw! */
2483 +- sdhci-caps-mask = <0x3 0x0>;
2484 + no-sdio;
2485 + no-mmc;
2486 + status = "okay";
2487 +diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
2488 +index d32f08df743d8..32a37c878a34c 100644
2489 +--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
2490 ++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
2491 +@@ -3161,11 +3161,11 @@
2492 + status = "disabled";
2493 +
2494 + ufs_mem_phy_lanes: phy@1d87400 {
2495 +- reg = <0 0x01d87400 0 0x108>,
2496 +- <0 0x01d87600 0 0x1e0>,
2497 +- <0 0x01d87c00 0 0x1dc>,
2498 +- <0 0x01d87800 0 0x108>,
2499 +- <0 0x01d87a00 0 0x1e0>;
2500 ++ reg = <0 0x01d87400 0 0x188>,
2501 ++ <0 0x01d87600 0 0x200>,
2502 ++ <0 0x01d87c00 0 0x200>,
2503 ++ <0 0x01d87800 0 0x188>,
2504 ++ <0 0x01d87a00 0 0x200>;
2505 + #phy-cells = <0>;
2506 + };
2507 + };
2508 +@@ -3192,6 +3192,9 @@
2509 + bus-width = <4>;
2510 + dma-coherent;
2511 +
2512 ++ /* Forbid SDR104/SDR50 - broken hw! */
2513 ++ sdhci-caps-mask = <0x3 0x0>;
2514 ++
2515 + status = "disabled";
2516 +
2517 + sdhc2_opp_table: opp-table {
2518 +diff --git a/arch/arm64/boot/dts/renesas/r8a779f0.dtsi b/arch/arm64/boot/dts/renesas/r8a779f0.dtsi
2519 +index c2f152bcf10ec..4092c0016035e 100644
2520 +--- a/arch/arm64/boot/dts/renesas/r8a779f0.dtsi
2521 ++++ b/arch/arm64/boot/dts/renesas/r8a779f0.dtsi
2522 +@@ -577,7 +577,7 @@
2523 + reg = <0 0xe6540000 0 0x60>;
2524 + interrupts = <GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>;
2525 + clocks = <&cpg CPG_MOD 514>,
2526 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3>,
2527 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2528 + <&scif_clk>;
2529 + clock-names = "fck", "brg_int", "scif_clk";
2530 + dmas = <&dmac0 0x31>, <&dmac0 0x30>,
2531 +@@ -594,7 +594,7 @@
2532 + reg = <0 0xe6550000 0 0x60>;
2533 + interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
2534 + clocks = <&cpg CPG_MOD 515>,
2535 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3>,
2536 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2537 + <&scif_clk>;
2538 + clock-names = "fck", "brg_int", "scif_clk";
2539 + dmas = <&dmac0 0x33>, <&dmac0 0x32>,
2540 +@@ -611,7 +611,7 @@
2541 + reg = <0 0xe6560000 0 0x60>;
2542 + interrupts = <GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>;
2543 + clocks = <&cpg CPG_MOD 516>,
2544 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3>,
2545 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2546 + <&scif_clk>;
2547 + clock-names = "fck", "brg_int", "scif_clk";
2548 + dmas = <&dmac0 0x35>, <&dmac0 0x34>,
2549 +@@ -628,7 +628,7 @@
2550 + reg = <0 0xe66a0000 0 0x60>;
2551 + interrupts = <GIC_SPI 248 IRQ_TYPE_LEVEL_HIGH>;
2552 + clocks = <&cpg CPG_MOD 517>,
2553 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3>,
2554 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2555 + <&scif_clk>;
2556 + clock-names = "fck", "brg_int", "scif_clk";
2557 + dmas = <&dmac0 0x37>, <&dmac0 0x36>,
2558 +@@ -657,7 +657,7 @@
2559 + reg = <0 0xe6e60000 0 64>;
2560 + interrupts = <GIC_SPI 249 IRQ_TYPE_LEVEL_HIGH>;
2561 + clocks = <&cpg CPG_MOD 702>,
2562 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3_PER>,
2563 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2564 + <&scif_clk>;
2565 + clock-names = "fck", "brg_int", "scif_clk";
2566 + dmas = <&dmac0 0x51>, <&dmac0 0x50>,
2567 +@@ -674,7 +674,7 @@
2568 + reg = <0 0xe6e68000 0 64>;
2569 + interrupts = <GIC_SPI 250 IRQ_TYPE_LEVEL_HIGH>;
2570 + clocks = <&cpg CPG_MOD 703>,
2571 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3_PER>,
2572 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2573 + <&scif_clk>;
2574 + clock-names = "fck", "brg_int", "scif_clk";
2575 + dmas = <&dmac0 0x53>, <&dmac0 0x52>,
2576 +@@ -691,7 +691,7 @@
2577 + reg = <0 0xe6c50000 0 64>;
2578 + interrupts = <GIC_SPI 252 IRQ_TYPE_LEVEL_HIGH>;
2579 + clocks = <&cpg CPG_MOD 704>,
2580 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3_PER>,
2581 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2582 + <&scif_clk>;
2583 + clock-names = "fck", "brg_int", "scif_clk";
2584 + dmas = <&dmac0 0x57>, <&dmac0 0x56>,
2585 +@@ -708,7 +708,7 @@
2586 + reg = <0 0xe6c40000 0 64>;
2587 + interrupts = <GIC_SPI 253 IRQ_TYPE_LEVEL_HIGH>;
2588 + clocks = <&cpg CPG_MOD 705>,
2589 +- <&cpg CPG_CORE R8A779F0_CLK_S0D3_PER>,
2590 ++ <&cpg CPG_CORE R8A779F0_CLK_SASYNCPERD1>,
2591 + <&scif_clk>;
2592 + clock-names = "fck", "brg_int", "scif_clk";
2593 + dmas = <&dmac0 0x59>, <&dmac0 0x58>,
2594 +diff --git a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
2595 +index d70f0600ae5a9..d58b18802cb01 100644
2596 +--- a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
2597 ++++ b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
2598 +@@ -326,7 +326,7 @@
2599 + reg = <0 0xe6540000 0 96>;
2600 + interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
2601 + clocks = <&cpg CPG_MOD 514>,
2602 +- <&cpg CPG_CORE R8A779G0_CLK_S0D3_PER>,
2603 ++ <&cpg CPG_CORE R8A779G0_CLK_SASYNCPERD1>,
2604 + <&scif_clk>;
2605 + clock-names = "fck", "brg_int", "scif_clk";
2606 + power-domains = <&sysc R8A779G0_PD_ALWAYS_ON>;
2607 +diff --git a/arch/arm64/boot/dts/renesas/r9a09g011.dtsi b/arch/arm64/boot/dts/renesas/r9a09g011.dtsi
2608 +index fb1a97202c387..ebaa8cdd747d2 100644
2609 +--- a/arch/arm64/boot/dts/renesas/r9a09g011.dtsi
2610 ++++ b/arch/arm64/boot/dts/renesas/r9a09g011.dtsi
2611 +@@ -48,7 +48,7 @@
2612 + #size-cells = <2>;
2613 + ranges;
2614 +
2615 +- gic: interrupt-controller@82000000 {
2616 ++ gic: interrupt-controller@82010000 {
2617 + compatible = "arm,gic-400";
2618 + #interrupt-cells = <3>;
2619 + #address-cells = <0>;
2620 +@@ -126,7 +126,7 @@
2621 + i2c0: i2c@a4030000 {
2622 + #address-cells = <1>;
2623 + #size-cells = <0>;
2624 +- compatible = "renesas,i2c-r9a09g011", "renesas,rzv2m-i2c";
2625 ++ compatible = "renesas,r9a09g011-i2c", "renesas,rzv2m-i2c";
2626 + reg = <0 0xa4030000 0 0x80>;
2627 + interrupts = <GIC_SPI 232 IRQ_TYPE_EDGE_RISING>,
2628 + <GIC_SPI 236 IRQ_TYPE_EDGE_RISING>;
2629 +@@ -140,7 +140,7 @@
2630 + i2c2: i2c@a4030100 {
2631 + #address-cells = <1>;
2632 + #size-cells = <0>;
2633 +- compatible = "renesas,i2c-r9a09g011", "renesas,rzv2m-i2c";
2634 ++ compatible = "renesas,r9a09g011-i2c", "renesas,rzv2m-i2c";
2635 + reg = <0 0xa4030100 0 0x80>;
2636 + interrupts = <GIC_SPI 234 IRQ_TYPE_EDGE_RISING>,
2637 + <GIC_SPI 238 IRQ_TYPE_EDGE_RISING>;
2638 +diff --git a/arch/arm64/boot/dts/tesla/fsd-pinctrl.dtsi b/arch/arm64/boot/dts/tesla/fsd-pinctrl.dtsi
2639 +index d0abb9aa0e9ed..e3852c9463528 100644
2640 +--- a/arch/arm64/boot/dts/tesla/fsd-pinctrl.dtsi
2641 ++++ b/arch/arm64/boot/dts/tesla/fsd-pinctrl.dtsi
2642 +@@ -55,14 +55,14 @@
2643 + samsung,pins = "gpf5-0";
2644 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2645 + samsung,pin-pud = <FSD_PIN_PULL_NONE>;
2646 +- samsung,pin-drv = <FSD_PIN_DRV_LV2>;
2647 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2648 + };
2649 +
2650 + ufs_refclk_out: ufs-refclk-out-pins {
2651 + samsung,pins = "gpf5-1";
2652 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2653 + samsung,pin-pud = <FSD_PIN_PULL_NONE>;
2654 +- samsung,pin-drv = <FSD_PIN_DRV_LV2>;
2655 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2656 + };
2657 + };
2658 +
2659 +@@ -239,105 +239,105 @@
2660 + samsung,pins = "gpb6-1";
2661 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2662 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2663 +- samsung,pin-drv = <FSD_PIN_DRV_LV2>;
2664 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2665 + };
2666 +
2667 + pwm1_out: pwm1-out-pins {
2668 + samsung,pins = "gpb6-5";
2669 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2670 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2671 +- samsung,pin-drv = <FSD_PIN_DRV_LV2>;
2672 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2673 + };
2674 +
2675 + hs_i2c0_bus: hs-i2c0-bus-pins {
2676 + samsung,pins = "gpb0-0", "gpb0-1";
2677 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2678 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2679 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2680 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2681 + };
2682 +
2683 + hs_i2c1_bus: hs-i2c1-bus-pins {
2684 + samsung,pins = "gpb0-2", "gpb0-3";
2685 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2686 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2687 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2688 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2689 + };
2690 +
2691 + hs_i2c2_bus: hs-i2c2-bus-pins {
2692 + samsung,pins = "gpb0-4", "gpb0-5";
2693 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2694 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2695 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2696 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2697 + };
2698 +
2699 + hs_i2c3_bus: hs-i2c3-bus-pins {
2700 + samsung,pins = "gpb0-6", "gpb0-7";
2701 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2702 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2703 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2704 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2705 + };
2706 +
2707 + hs_i2c4_bus: hs-i2c4-bus-pins {
2708 + samsung,pins = "gpb1-0", "gpb1-1";
2709 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2710 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2711 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2712 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2713 + };
2714 +
2715 + hs_i2c5_bus: hs-i2c5-bus-pins {
2716 + samsung,pins = "gpb1-2", "gpb1-3";
2717 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2718 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2719 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2720 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2721 + };
2722 +
2723 + hs_i2c6_bus: hs-i2c6-bus-pins {
2724 + samsung,pins = "gpb1-4", "gpb1-5";
2725 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2726 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2727 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2728 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2729 + };
2730 +
2731 + hs_i2c7_bus: hs-i2c7-bus-pins {
2732 + samsung,pins = "gpb1-6", "gpb1-7";
2733 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2734 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2735 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2736 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2737 + };
2738 +
2739 + uart0_data: uart0-data-pins {
2740 + samsung,pins = "gpb7-0", "gpb7-1";
2741 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2742 + samsung,pin-pud = <FSD_PIN_PULL_NONE>;
2743 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2744 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2745 + };
2746 +
2747 + uart1_data: uart1-data-pins {
2748 + samsung,pins = "gpb7-4", "gpb7-5";
2749 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2750 + samsung,pin-pud = <FSD_PIN_PULL_NONE>;
2751 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2752 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2753 + };
2754 +
2755 + spi0_bus: spi0-bus-pins {
2756 + samsung,pins = "gpb4-0", "gpb4-2", "gpb4-3";
2757 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2758 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2759 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2760 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2761 + };
2762 +
2763 + spi1_bus: spi1-bus-pins {
2764 + samsung,pins = "gpb4-4", "gpb4-6", "gpb4-7";
2765 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2766 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2767 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2768 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2769 + };
2770 +
2771 + spi2_bus: spi2-bus-pins {
2772 + samsung,pins = "gpb5-0", "gpb5-2", "gpb5-3";
2773 + samsung,pin-function = <FSD_PIN_FUNC_2>;
2774 + samsung,pin-pud = <FSD_PIN_PULL_UP>;
2775 +- samsung,pin-drv = <FSD_PIN_DRV_LV1>;
2776 ++ samsung,pin-drv = <FSD_PIN_DRV_LV4>;
2777 + };
2778 + };
2779 +
2780 +diff --git a/arch/arm64/boot/dts/tesla/fsd-pinctrl.h b/arch/arm64/boot/dts/tesla/fsd-pinctrl.h
2781 +index 6ffbda3624930..c397d02208a08 100644
2782 +--- a/arch/arm64/boot/dts/tesla/fsd-pinctrl.h
2783 ++++ b/arch/arm64/boot/dts/tesla/fsd-pinctrl.h
2784 +@@ -16,9 +16,9 @@
2785 + #define FSD_PIN_PULL_UP 3
2786 +
2787 + #define FSD_PIN_DRV_LV1 0
2788 +-#define FSD_PIN_DRV_LV2 2
2789 +-#define FSD_PIN_DRV_LV3 1
2790 +-#define FSD_PIN_DRV_LV4 3
2791 ++#define FSD_PIN_DRV_LV2 1
2792 ++#define FSD_PIN_DRV_LV4 2
2793 ++#define FSD_PIN_DRV_LV6 3
2794 +
2795 + #define FSD_PIN_FUNC_INPUT 0
2796 + #define FSD_PIN_FUNC_OUTPUT 1
2797 +diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
2798 +index 4005a73cfea99..ebb1c5ce7aece 100644
2799 +--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
2800 ++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
2801 +@@ -120,7 +120,6 @@
2802 + dmas = <&main_udmap 0xc001>, <&main_udmap 0x4002>,
2803 + <&main_udmap 0x4003>;
2804 + dma-names = "tx", "rx1", "rx2";
2805 +- dma-coherent;
2806 +
2807 + rng: rng@4e10000 {
2808 + compatible = "inside-secure,safexcel-eip76";
2809 +diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
2810 +index e5be78a58682d..d3fb86b2ea939 100644
2811 +--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
2812 ++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
2813 +@@ -386,7 +386,6 @@
2814 + dmas = <&mcu_udmap 0xf501>, <&mcu_udmap 0x7502>,
2815 + <&mcu_udmap 0x7503>;
2816 + dma-names = "tx", "rx1", "rx2";
2817 +- dma-coherent;
2818 +
2819 + rng: rng@40910000 {
2820 + compatible = "inside-secure,safexcel-eip76";
2821 +diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
2822 +index 917c9dc99efaa..603ddda5127fa 100644
2823 +--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
2824 ++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
2825 +@@ -337,7 +337,6 @@
2826 + dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>,
2827 + <&main_udmap 0x4001>;
2828 + dma-names = "tx", "rx1", "rx2";
2829 +- dma-coherent;
2830 +
2831 + rng: rng@4e10000 {
2832 + compatible = "inside-secure,safexcel-eip76";
2833 +diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
2834 +index 34e7d577ae13b..c89f28235812a 100644
2835 +--- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
2836 ++++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
2837 +@@ -60,7 +60,7 @@
2838 + #interrupt-cells = <1>;
2839 + ti,sci = <&sms>;
2840 + ti,sci-dev-id = <148>;
2841 +- ti,interrupt-ranges = <8 360 56>;
2842 ++ ti,interrupt-ranges = <8 392 56>;
2843 + };
2844 +
2845 + main_pmx0: pinctrl@11c000 {
2846 +diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
2847 +index 4d1bfabd1313a..f0644851602cd 100644
2848 +--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
2849 ++++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
2850 +@@ -65,7 +65,7 @@
2851 + #interrupt-cells = <1>;
2852 + ti,sci = <&sms>;
2853 + ti,sci-dev-id = <125>;
2854 +- ti,interrupt-ranges = <16 928 16>;
2855 ++ ti,interrupt-ranges = <16 960 16>;
2856 + };
2857 +
2858 + mcu_conf: syscon@40f00000 {
2859 +diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
2860 +index 8bd80508a710d..4b121dc0cfba2 100644
2861 +--- a/arch/arm64/crypto/Kconfig
2862 ++++ b/arch/arm64/crypto/Kconfig
2863 +@@ -96,6 +96,17 @@ config CRYPTO_SHA3_ARM64
2864 + Architecture: arm64 using:
2865 + - ARMv8.2 Crypto Extensions
2866 +
2867 ++config CRYPTO_SM3_NEON
2868 ++ tristate "Hash functions: SM3 (NEON)"
2869 ++ depends on KERNEL_MODE_NEON
2870 ++ select CRYPTO_HASH
2871 ++ select CRYPTO_SM3
2872 ++ help
2873 ++ SM3 (ShangMi 3) secure hash function (OSCCA GM/T 0004-2012)
2874 ++
2875 ++ Architecture: arm64 using:
2876 ++ - NEON (Advanced SIMD) extensions
2877 ++
2878 + config CRYPTO_SM3_ARM64_CE
2879 + tristate "Hash functions: SM3 (ARMv8.2 Crypto Extensions)"
2880 + depends on KERNEL_MODE_NEON
2881 +diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
2882 +index 24bb0c4610de2..087f1625e7751 100644
2883 +--- a/arch/arm64/crypto/Makefile
2884 ++++ b/arch/arm64/crypto/Makefile
2885 +@@ -17,6 +17,9 @@ sha512-ce-y := sha512-ce-glue.o sha512-ce-core.o
2886 + obj-$(CONFIG_CRYPTO_SHA3_ARM64) += sha3-ce.o
2887 + sha3-ce-y := sha3-ce-glue.o sha3-ce-core.o
2888 +
2889 ++obj-$(CONFIG_CRYPTO_SM3_NEON) += sm3-neon.o
2890 ++sm3-neon-y := sm3-neon-glue.o sm3-neon-core.o
2891 ++
2892 + obj-$(CONFIG_CRYPTO_SM3_ARM64_CE) += sm3-ce.o
2893 + sm3-ce-y := sm3-ce-glue.o sm3-ce-core.o
2894 +
2895 +diff --git a/arch/arm64/crypto/sm3-neon-core.S b/arch/arm64/crypto/sm3-neon-core.S
2896 +new file mode 100644
2897 +index 0000000000000..4357e0e51be38
2898 +--- /dev/null
2899 ++++ b/arch/arm64/crypto/sm3-neon-core.S
2900 +@@ -0,0 +1,601 @@
2901 ++// SPDX-License-Identifier: GPL-2.0-or-later
2902 ++/*
2903 ++ * sm3-neon-core.S - SM3 secure hash using NEON instructions
2904 ++ *
2905 ++ * Linux/arm64 port of the libgcrypt SM3 implementation for AArch64
2906 ++ *
2907 ++ * Copyright (C) 2021 Jussi Kivilinna <jussi.kivilinna@iki.fi>
2908 ++ * Copyright (c) 2022 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
2909 ++ */
2910 ++
2911 ++#include <linux/linkage.h>
2912 ++#include <linux/cfi_types.h>
2913 ++#include <asm/assembler.h>
2914 ++
2915 ++/* Context structure */
2916 ++
2917 ++#define state_h0 0
2918 ++#define state_h1 4
2919 ++#define state_h2 8
2920 ++#define state_h3 12
2921 ++#define state_h4 16
2922 ++#define state_h5 20
2923 ++#define state_h6 24
2924 ++#define state_h7 28
2925 ++
2926 ++/* Stack structure */
2927 ++
2928 ++#define STACK_W_SIZE (32 * 2 * 3)
2929 ++
2930 ++#define STACK_W (0)
2931 ++#define STACK_SIZE (STACK_W + STACK_W_SIZE)
2932 ++
2933 ++/* Register macros */
2934 ++
2935 ++#define RSTATE x0
2936 ++#define RDATA x1
2937 ++#define RNBLKS x2
2938 ++#define RKPTR x28
2939 ++#define RFRAME x29
2940 ++
2941 ++#define ra w3
2942 ++#define rb w4
2943 ++#define rc w5
2944 ++#define rd w6
2945 ++#define re w7
2946 ++#define rf w8
2947 ++#define rg w9
2948 ++#define rh w10
2949 ++
2950 ++#define t0 w11
2951 ++#define t1 w12
2952 ++#define t2 w13
2953 ++#define t3 w14
2954 ++#define t4 w15
2955 ++#define t5 w16
2956 ++#define t6 w17
2957 ++
2958 ++#define k_even w19
2959 ++#define k_odd w20
2960 ++
2961 ++#define addr0 x21
2962 ++#define addr1 x22
2963 ++
2964 ++#define s0 w23
2965 ++#define s1 w24
2966 ++#define s2 w25
2967 ++#define s3 w26
2968 ++
2969 ++#define W0 v0
2970 ++#define W1 v1
2971 ++#define W2 v2
2972 ++#define W3 v3
2973 ++#define W4 v4
2974 ++#define W5 v5
2975 ++
2976 ++#define XTMP0 v6
2977 ++#define XTMP1 v7
2978 ++#define XTMP2 v16
2979 ++#define XTMP3 v17
2980 ++#define XTMP4 v18
2981 ++#define XTMP5 v19
2982 ++#define XTMP6 v20
2983 ++
2984 ++/* Helper macros. */
2985 ++
2986 ++#define _(...) /*_*/
2987 ++
2988 ++#define clear_vec(x) \
2989 ++ movi x.8h, #0;
2990 ++
2991 ++#define rolw(o, a, n) \
2992 ++ ror o, a, #(32 - n);
2993 ++
2994 ++/* Round function macros. */
2995 ++
2996 ++#define GG1_1(x, y, z, o, t) \
2997 ++ eor o, x, y;
2998 ++#define GG1_2(x, y, z, o, t) \
2999 ++ eor o, o, z;
3000 ++#define GG1_3(x, y, z, o, t)
3001 ++
3002 ++#define FF1_1(x, y, z, o, t) GG1_1(x, y, z, o, t)
3003 ++#define FF1_2(x, y, z, o, t)
3004 ++#define FF1_3(x, y, z, o, t) GG1_2(x, y, z, o, t)
3005 ++
3006 ++#define GG2_1(x, y, z, o, t) \
3007 ++ bic o, z, x;
3008 ++#define GG2_2(x, y, z, o, t) \
3009 ++ and t, y, x;
3010 ++#define GG2_3(x, y, z, o, t) \
3011 ++ eor o, o, t;
3012 ++
3013 ++#define FF2_1(x, y, z, o, t) \
3014 ++ eor o, x, y;
3015 ++#define FF2_2(x, y, z, o, t) \
3016 ++ and t, x, y; \
3017 ++ and o, o, z;
3018 ++#define FF2_3(x, y, z, o, t) \
3019 ++ eor o, o, t;
3020 ++
3021 ++#define R(i, a, b, c, d, e, f, g, h, k, K_LOAD, round, widx, wtype, IOP, iop_param) \
3022 ++ K_LOAD(round); \
3023 ++ ldr t5, [sp, #(wtype##_W1_ADDR(round, widx))]; \
3024 ++ rolw(t0, a, 12); /* rol(a, 12) => t0 */ \
3025 ++ IOP(1, iop_param); \
3026 ++ FF##i##_1(a, b, c, t1, t2); \
3027 ++ ldr t6, [sp, #(wtype##_W1W2_ADDR(round, widx))]; \
3028 ++ add k, k, e; \
3029 ++ IOP(2, iop_param); \
3030 ++ GG##i##_1(e, f, g, t3, t4); \
3031 ++ FF##i##_2(a, b, c, t1, t2); \
3032 ++ IOP(3, iop_param); \
3033 ++ add k, k, t0; \
3034 ++ add h, h, t5; \
3035 ++ add d, d, t6; /* w1w2 + d => d */ \
3036 ++ IOP(4, iop_param); \
3037 ++ rolw(k, k, 7); /* rol (t0 + e + t), 7) => k */ \
3038 ++ GG##i##_2(e, f, g, t3, t4); \
3039 ++ add h, h, k; /* h + w1 + k => h */ \
3040 ++ IOP(5, iop_param); \
3041 ++ FF##i##_3(a, b, c, t1, t2); \
3042 ++ eor t0, t0, k; /* k ^ t0 => t0 */ \
3043 ++ GG##i##_3(e, f, g, t3, t4); \
3044 ++ add d, d, t1; /* FF(a,b,c) + d => d */ \
3045 ++ IOP(6, iop_param); \
3046 ++ add t3, t3, h; /* GG(e,f,g) + h => t3 */ \
3047 ++ rolw(b, b, 9); /* rol(b, 9) => b */ \
3048 ++ eor h, t3, t3, ror #(32-9); \
3049 ++ IOP(7, iop_param); \
3050 ++ add d, d, t0; /* t0 + d => d */ \
3051 ++ rolw(f, f, 19); /* rol(f, 19) => f */ \
3052 ++ IOP(8, iop_param); \
3053 ++ eor h, h, t3, ror #(32-17); /* P0(t3) => h */
3054 ++
3055 ++#define R1(a, b, c, d, e, f, g, h, k, K_LOAD, round, widx, wtype, IOP, iop_param) \
3056 ++ R(1, ##a, ##b, ##c, ##d, ##e, ##f, ##g, ##h, ##k, K_LOAD, round, widx, wtype, IOP, iop_param)
3057 ++
3058 ++#define R2(a, b, c, d, e, f, g, h, k, K_LOAD, round, widx, wtype, IOP, iop_param) \
3059 ++ R(2, ##a, ##b, ##c, ##d, ##e, ##f, ##g, ##h, ##k, K_LOAD, round, widx, wtype, IOP, iop_param)
3060 ++
3061 ++#define KL(round) \
3062 ++ ldp k_even, k_odd, [RKPTR, #(4*(round))];
3063 ++
3064 ++/* Input expansion macros. */
3065 ++
3066 ++/* Byte-swapped input address. */
3067 ++#define IW_W_ADDR(round, widx, offs) \
3068 ++ (STACK_W + ((round) / 4) * 64 + (offs) + ((widx) * 4))
3069 ++
3070 ++/* Expanded input address. */
3071 ++#define XW_W_ADDR(round, widx, offs) \
3072 ++ (STACK_W + ((((round) / 3) - 4) % 2) * 64 + (offs) + ((widx) * 4))
3073 ++
3074 ++/* Rounds 1-12, byte-swapped input block addresses. */
3075 ++#define IW_W1_ADDR(round, widx) IW_W_ADDR(round, widx, 32)
3076 ++#define IW_W1W2_ADDR(round, widx) IW_W_ADDR(round, widx, 48)
3077 ++
3078 ++/* Rounds 1-12, expanded input block addresses. */
3079 ++#define XW_W1_ADDR(round, widx) XW_W_ADDR(round, widx, 0)
3080 ++#define XW_W1W2_ADDR(round, widx) XW_W_ADDR(round, widx, 16)
3081 ++
3082 ++/* Input block loading.
3083 ++ * Interleaving within round function needed for in-order CPUs. */
3084 ++#define LOAD_W_VEC_1_1() \
3085 ++ add addr0, sp, #IW_W1_ADDR(0, 0);
3086 ++#define LOAD_W_VEC_1_2() \
3087 ++ add addr1, sp, #IW_W1_ADDR(4, 0);
3088 ++#define LOAD_W_VEC_1_3() \
3089 ++ ld1 {W0.16b}, [RDATA], #16;
3090 ++#define LOAD_W_VEC_1_4() \
3091 ++ ld1 {W1.16b}, [RDATA], #16;
3092 ++#define LOAD_W_VEC_1_5() \
3093 ++ ld1 {W2.16b}, [RDATA], #16;
3094 ++#define LOAD_W_VEC_1_6() \
3095 ++ ld1 {W3.16b}, [RDATA], #16;
3096 ++#define LOAD_W_VEC_1_7() \
3097 ++ rev32 XTMP0.16b, W0.16b;
3098 ++#define LOAD_W_VEC_1_8() \
3099 ++ rev32 XTMP1.16b, W1.16b;
3100 ++#define LOAD_W_VEC_2_1() \
3101 ++ rev32 XTMP2.16b, W2.16b;
3102 ++#define LOAD_W_VEC_2_2() \
3103 ++ rev32 XTMP3.16b, W3.16b;
3104 ++#define LOAD_W_VEC_2_3() \
3105 ++ eor XTMP4.16b, XTMP1.16b, XTMP0.16b;
3106 ++#define LOAD_W_VEC_2_4() \
3107 ++ eor XTMP5.16b, XTMP2.16b, XTMP1.16b;
3108 ++#define LOAD_W_VEC_2_5() \
3109 ++ st1 {XTMP0.16b}, [addr0], #16;
3110 ++#define LOAD_W_VEC_2_6() \
3111 ++ st1 {XTMP4.16b}, [addr0]; \
3112 ++ add addr0, sp, #IW_W1_ADDR(8, 0);
3113 ++#define LOAD_W_VEC_2_7() \
3114 ++ eor XTMP6.16b, XTMP3.16b, XTMP2.16b;
3115 ++#define LOAD_W_VEC_2_8() \
3116 ++ ext W0.16b, XTMP0.16b, XTMP0.16b, #8; /* W0: xx, w0, xx, xx */
3117 ++#define LOAD_W_VEC_3_1() \
3118 ++ mov W2.16b, XTMP1.16b; /* W2: xx, w6, w5, w4 */
3119 ++#define LOAD_W_VEC_3_2() \
3120 ++ st1 {XTMP1.16b}, [addr1], #16;
3121 ++#define LOAD_W_VEC_3_3() \
3122 ++ st1 {XTMP5.16b}, [addr1]; \
3123 ++ ext W1.16b, XTMP0.16b, XTMP0.16b, #4; /* W1: xx, w3, w2, w1 */
3124 ++#define LOAD_W_VEC_3_4() \
3125 ++ ext W3.16b, XTMP1.16b, XTMP2.16b, #12; /* W3: xx, w9, w8, w7 */
3126 ++#define LOAD_W_VEC_3_5() \
3127 ++ ext W4.16b, XTMP2.16b, XTMP3.16b, #8; /* W4: xx, w12, w11, w10 */
3128 ++#define LOAD_W_VEC_3_6() \
3129 ++ st1 {XTMP2.16b}, [addr0], #16;
3130 ++#define LOAD_W_VEC_3_7() \
3131 ++ st1 {XTMP6.16b}, [addr0];
3132 ++#define LOAD_W_VEC_3_8() \
3133 ++ ext W5.16b, XTMP3.16b, XTMP3.16b, #4; /* W5: xx, w15, w14, w13 */
3134 ++
3135 ++#define LOAD_W_VEC_1(iop_num, ...) \
3136 ++ LOAD_W_VEC_1_##iop_num()
3137 ++#define LOAD_W_VEC_2(iop_num, ...) \
3138 ++ LOAD_W_VEC_2_##iop_num()
3139 ++#define LOAD_W_VEC_3(iop_num, ...) \
3140 ++ LOAD_W_VEC_3_##iop_num()
3141 ++
3142 ++/* Message scheduling. Note: 3 words per vector register.
3143 ++ * Interleaving within round function needed for in-order CPUs. */
3144 ++#define SCHED_W_1_1(round, w0, w1, w2, w3, w4, w5) \
3145 ++ /* Load (w[i - 16]) => XTMP0 */ \
3146 ++ /* Load (w[i - 13]) => XTMP5 */ \
3147 ++ ext XTMP0.16b, w0.16b, w0.16b, #12; /* XTMP0: w0, xx, xx, xx */
3148 ++#define SCHED_W_1_2(round, w0, w1, w2, w3, w4, w5) \
3149 ++ ext XTMP5.16b, w1.16b, w1.16b, #12;
3150 ++#define SCHED_W_1_3(round, w0, w1, w2, w3, w4, w5) \
3151 ++ ext XTMP0.16b, XTMP0.16b, w1.16b, #12; /* XTMP0: xx, w2, w1, w0 */
3152 ++#define SCHED_W_1_4(round, w0, w1, w2, w3, w4, w5) \
3153 ++ ext XTMP5.16b, XTMP5.16b, w2.16b, #12;
3154 ++#define SCHED_W_1_5(round, w0, w1, w2, w3, w4, w5) \
3155 ++ /* w[i - 9] == w3 */ \
3156 ++ /* W3 ^ XTMP0 => XTMP0 */ \
3157 ++ eor XTMP0.16b, XTMP0.16b, w3.16b;
3158 ++#define SCHED_W_1_6(round, w0, w1, w2, w3, w4, w5) \
3159 ++ /* w[i - 3] == w5 */ \
3160 ++ /* rol(XMM5, 15) ^ XTMP0 => XTMP0 */ \
3161 ++ /* rol(XTMP5, 7) => XTMP1 */ \
3162 ++ add addr0, sp, #XW_W1_ADDR((round), 0); \
3163 ++ shl XTMP2.4s, w5.4s, #15;
3164 ++#define SCHED_W_1_7(round, w0, w1, w2, w3, w4, w5) \
3165 ++ shl XTMP1.4s, XTMP5.4s, #7;
3166 ++#define SCHED_W_1_8(round, w0, w1, w2, w3, w4, w5) \
3167 ++ sri XTMP2.4s, w5.4s, #(32-15);
3168 ++#define SCHED_W_2_1(round, w0, w1, w2, w3, w4, w5) \
3169 ++ sri XTMP1.4s, XTMP5.4s, #(32-7);
3170 ++#define SCHED_W_2_2(round, w0, w1, w2, w3, w4, w5) \
3171 ++ eor XTMP0.16b, XTMP0.16b, XTMP2.16b;
3172 ++#define SCHED_W_2_3(round, w0, w1, w2, w3, w4, w5) \
3173 ++ /* w[i - 6] == W4 */ \
3174 ++ /* W4 ^ XTMP1 => XTMP1 */ \
3175 ++ eor XTMP1.16b, XTMP1.16b, w4.16b;
3176 ++#define SCHED_W_2_4(round, w0, w1, w2, w3, w4, w5) \
3177 ++ /* P1(XTMP0) ^ XTMP1 => W0 */ \
3178 ++ shl XTMP3.4s, XTMP0.4s, #15;
3179 ++#define SCHED_W_2_5(round, w0, w1, w2, w3, w4, w5) \
3180 ++ shl XTMP4.4s, XTMP0.4s, #23;
3181 ++#define SCHED_W_2_6(round, w0, w1, w2, w3, w4, w5) \
3182 ++ eor w0.16b, XTMP1.16b, XTMP0.16b;
3183 ++#define SCHED_W_2_7(round, w0, w1, w2, w3, w4, w5) \
3184 ++ sri XTMP3.4s, XTMP0.4s, #(32-15);
3185 ++#define SCHED_W_2_8(round, w0, w1, w2, w3, w4, w5) \
3186 ++ sri XTMP4.4s, XTMP0.4s, #(32-23);
3187 ++#define SCHED_W_3_1(round, w0, w1, w2, w3, w4, w5) \
3188 ++ eor w0.16b, w0.16b, XTMP3.16b;
3189 ++#define SCHED_W_3_2(round, w0, w1, w2, w3, w4, w5) \
3190 ++ /* Load (w[i - 3]) => XTMP2 */ \
3191 ++ ext XTMP2.16b, w4.16b, w4.16b, #12;
3192 ++#define SCHED_W_3_3(round, w0, w1, w2, w3, w4, w5) \
3193 ++ eor w0.16b, w0.16b, XTMP4.16b;
3194 ++#define SCHED_W_3_4(round, w0, w1, w2, w3, w4, w5) \
3195 ++ ext XTMP2.16b, XTMP2.16b, w5.16b, #12;
3196 ++#define SCHED_W_3_5(round, w0, w1, w2, w3, w4, w5) \
3197 ++ /* W1 ^ W2 => XTMP3 */ \
3198 ++ eor XTMP3.16b, XTMP2.16b, w0.16b;
3199 ++#define SCHED_W_3_6(round, w0, w1, w2, w3, w4, w5)
3200 ++#define SCHED_W_3_7(round, w0, w1, w2, w3, w4, w5) \
3201 ++ st1 {XTMP2.16b-XTMP3.16b}, [addr0];
3202 ++#define SCHED_W_3_8(round, w0, w1, w2, w3, w4, w5)
3203 ++
3204 ++#define SCHED_W_W0W1W2W3W4W5_1(iop_num, round) \
3205 ++ SCHED_W_1_##iop_num(round, W0, W1, W2, W3, W4, W5)
3206 ++#define SCHED_W_W0W1W2W3W4W5_2(iop_num, round) \
3207 ++ SCHED_W_2_##iop_num(round, W0, W1, W2, W3, W4, W5)
3208 ++#define SCHED_W_W0W1W2W3W4W5_3(iop_num, round) \
3209 ++ SCHED_W_3_##iop_num(round, W0, W1, W2, W3, W4, W5)
3210 ++
3211 ++#define SCHED_W_W1W2W3W4W5W0_1(iop_num, round) \
3212 ++ SCHED_W_1_##iop_num(round, W1, W2, W3, W4, W5, W0)
3213 ++#define SCHED_W_W1W2W3W4W5W0_2(iop_num, round) \
3214 ++ SCHED_W_2_##iop_num(round, W1, W2, W3, W4, W5, W0)
3215 ++#define SCHED_W_W1W2W3W4W5W0_3(iop_num, round) \
3216 ++ SCHED_W_3_##iop_num(round, W1, W2, W3, W4, W5, W0)
3217 ++
3218 ++#define SCHED_W_W2W3W4W5W0W1_1(iop_num, round) \
3219 ++ SCHED_W_1_##iop_num(round, W2, W3, W4, W5, W0, W1)
3220 ++#define SCHED_W_W2W3W4W5W0W1_2(iop_num, round) \
3221 ++ SCHED_W_2_##iop_num(round, W2, W3, W4, W5, W0, W1)
3222 ++#define SCHED_W_W2W3W4W5W0W1_3(iop_num, round) \
3223 ++ SCHED_W_3_##iop_num(round, W2, W3, W4, W5, W0, W1)
3224 ++
3225 ++#define SCHED_W_W3W4W5W0W1W2_1(iop_num, round) \
3226 ++ SCHED_W_1_##iop_num(round, W3, W4, W5, W0, W1, W2)
3227 ++#define SCHED_W_W3W4W5W0W1W2_2(iop_num, round) \
3228 ++ SCHED_W_2_##iop_num(round, W3, W4, W5, W0, W1, W2)
3229 ++#define SCHED_W_W3W4W5W0W1W2_3(iop_num, round) \
3230 ++ SCHED_W_3_##iop_num(round, W3, W4, W5, W0, W1, W2)
3231 ++
3232 ++#define SCHED_W_W4W5W0W1W2W3_1(iop_num, round) \
3233 ++ SCHED_W_1_##iop_num(round, W4, W5, W0, W1, W2, W3)
3234 ++#define SCHED_W_W4W5W0W1W2W3_2(iop_num, round) \
3235 ++ SCHED_W_2_##iop_num(round, W4, W5, W0, W1, W2, W3)
3236 ++#define SCHED_W_W4W5W0W1W2W3_3(iop_num, round) \
3237 ++ SCHED_W_3_##iop_num(round, W4, W5, W0, W1, W2, W3)
3238 ++
3239 ++#define SCHED_W_W5W0W1W2W3W4_1(iop_num, round) \
3240 ++ SCHED_W_1_##iop_num(round, W5, W0, W1, W2, W3, W4)
3241 ++#define SCHED_W_W5W0W1W2W3W4_2(iop_num, round) \
3242 ++ SCHED_W_2_##iop_num(round, W5, W0, W1, W2, W3, W4)
3243 ++#define SCHED_W_W5W0W1W2W3W4_3(iop_num, round) \
3244 ++ SCHED_W_3_##iop_num(round, W5, W0, W1, W2, W3, W4)
3245 ++
3246 ++
3247 ++ /*
3248 ++ * Transform blocks*64 bytes (blocks*16 32-bit words) at 'src'.
3249 ++ *
3250 ++ * void sm3_neon_transform(struct sm3_state *sst, u8 const *src,
3251 ++ * int blocks)
3252 ++ */
3253 ++ .text
3254 ++.align 3
3255 ++SYM_TYPED_FUNC_START(sm3_neon_transform)
3256 ++ ldp ra, rb, [RSTATE, #0]
3257 ++ ldp rc, rd, [RSTATE, #8]
3258 ++ ldp re, rf, [RSTATE, #16]
3259 ++ ldp rg, rh, [RSTATE, #24]
3260 ++
3261 ++ stp x28, x29, [sp, #-16]!
3262 ++ stp x19, x20, [sp, #-16]!
3263 ++ stp x21, x22, [sp, #-16]!
3264 ++ stp x23, x24, [sp, #-16]!
3265 ++ stp x25, x26, [sp, #-16]!
3266 ++ mov RFRAME, sp
3267 ++
3268 ++ sub addr0, sp, #STACK_SIZE
3269 ++ adr_l RKPTR, .LKtable
3270 ++ and sp, addr0, #(~63)
3271 ++
3272 ++ /* Preload first block. */
3273 ++ LOAD_W_VEC_1(1, 0)
3274 ++ LOAD_W_VEC_1(2, 0)
3275 ++ LOAD_W_VEC_1(3, 0)
3276 ++ LOAD_W_VEC_1(4, 0)
3277 ++ LOAD_W_VEC_1(5, 0)
3278 ++ LOAD_W_VEC_1(6, 0)
3279 ++ LOAD_W_VEC_1(7, 0)
3280 ++ LOAD_W_VEC_1(8, 0)
3281 ++ LOAD_W_VEC_2(1, 0)
3282 ++ LOAD_W_VEC_2(2, 0)
3283 ++ LOAD_W_VEC_2(3, 0)
3284 ++ LOAD_W_VEC_2(4, 0)
3285 ++ LOAD_W_VEC_2(5, 0)
3286 ++ LOAD_W_VEC_2(6, 0)
3287 ++ LOAD_W_VEC_2(7, 0)
3288 ++ LOAD_W_VEC_2(8, 0)
3289 ++ LOAD_W_VEC_3(1, 0)
3290 ++ LOAD_W_VEC_3(2, 0)
3291 ++ LOAD_W_VEC_3(3, 0)
3292 ++ LOAD_W_VEC_3(4, 0)
3293 ++ LOAD_W_VEC_3(5, 0)
3294 ++ LOAD_W_VEC_3(6, 0)
3295 ++ LOAD_W_VEC_3(7, 0)
3296 ++ LOAD_W_VEC_3(8, 0)
3297 ++
3298 ++.balign 16
3299 ++.Loop:
3300 ++ /* Transform 0-3 */
3301 ++ R1(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 0, 0, IW, _, 0)
3302 ++ R1(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 1, 1, IW, _, 0)
3303 ++ R1(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 2, 2, IW, _, 0)
3304 ++ R1(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 3, 3, IW, _, 0)
3305 ++
3306 ++ /* Transform 4-7 + Precalc 12-14 */
3307 ++ R1(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 4, 0, IW, _, 0)
3308 ++ R1(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 5, 1, IW, _, 0)
3309 ++ R1(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 6, 2, IW, SCHED_W_W0W1W2W3W4W5_1, 12)
3310 ++ R1(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 7, 3, IW, SCHED_W_W0W1W2W3W4W5_2, 12)
3311 ++
3312 ++ /* Transform 8-11 + Precalc 12-17 */
3313 ++ R1(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 8, 0, IW, SCHED_W_W0W1W2W3W4W5_3, 12)
3314 ++ R1(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 9, 1, IW, SCHED_W_W1W2W3W4W5W0_1, 15)
3315 ++ R1(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 10, 2, IW, SCHED_W_W1W2W3W4W5W0_2, 15)
3316 ++ R1(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 11, 3, IW, SCHED_W_W1W2W3W4W5W0_3, 15)
3317 ++
3318 ++ /* Transform 12-14 + Precalc 18-20 */
3319 ++ R1(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 12, 0, XW, SCHED_W_W2W3W4W5W0W1_1, 18)
3320 ++ R1(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 13, 1, XW, SCHED_W_W2W3W4W5W0W1_2, 18)
3321 ++ R1(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 14, 2, XW, SCHED_W_W2W3W4W5W0W1_3, 18)
3322 ++
3323 ++ /* Transform 15-17 + Precalc 21-23 */
3324 ++ R1(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 15, 0, XW, SCHED_W_W3W4W5W0W1W2_1, 21)
3325 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 16, 1, XW, SCHED_W_W3W4W5W0W1W2_2, 21)
3326 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 17, 2, XW, SCHED_W_W3W4W5W0W1W2_3, 21)
3327 ++
3328 ++ /* Transform 18-20 + Precalc 24-26 */
3329 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 18, 0, XW, SCHED_W_W4W5W0W1W2W3_1, 24)
3330 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 19, 1, XW, SCHED_W_W4W5W0W1W2W3_2, 24)
3331 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 20, 2, XW, SCHED_W_W4W5W0W1W2W3_3, 24)
3332 ++
3333 ++ /* Transform 21-23 + Precalc 27-29 */
3334 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 21, 0, XW, SCHED_W_W5W0W1W2W3W4_1, 27)
3335 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 22, 1, XW, SCHED_W_W5W0W1W2W3W4_2, 27)
3336 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 23, 2, XW, SCHED_W_W5W0W1W2W3W4_3, 27)
3337 ++
3338 ++ /* Transform 24-26 + Precalc 30-32 */
3339 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 24, 0, XW, SCHED_W_W0W1W2W3W4W5_1, 30)
3340 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 25, 1, XW, SCHED_W_W0W1W2W3W4W5_2, 30)
3341 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 26, 2, XW, SCHED_W_W0W1W2W3W4W5_3, 30)
3342 ++
3343 ++ /* Transform 27-29 + Precalc 33-35 */
3344 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 27, 0, XW, SCHED_W_W1W2W3W4W5W0_1, 33)
3345 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 28, 1, XW, SCHED_W_W1W2W3W4W5W0_2, 33)
3346 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 29, 2, XW, SCHED_W_W1W2W3W4W5W0_3, 33)
3347 ++
3348 ++ /* Transform 30-32 + Precalc 36-38 */
3349 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 30, 0, XW, SCHED_W_W2W3W4W5W0W1_1, 36)
3350 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 31, 1, XW, SCHED_W_W2W3W4W5W0W1_2, 36)
3351 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 32, 2, XW, SCHED_W_W2W3W4W5W0W1_3, 36)
3352 ++
3353 ++ /* Transform 33-35 + Precalc 39-41 */
3354 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 33, 0, XW, SCHED_W_W3W4W5W0W1W2_1, 39)
3355 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 34, 1, XW, SCHED_W_W3W4W5W0W1W2_2, 39)
3356 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 35, 2, XW, SCHED_W_W3W4W5W0W1W2_3, 39)
3357 ++
3358 ++ /* Transform 36-38 + Precalc 42-44 */
3359 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 36, 0, XW, SCHED_W_W4W5W0W1W2W3_1, 42)
3360 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 37, 1, XW, SCHED_W_W4W5W0W1W2W3_2, 42)
3361 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 38, 2, XW, SCHED_W_W4W5W0W1W2W3_3, 42)
3362 ++
3363 ++ /* Transform 39-41 + Precalc 45-47 */
3364 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 39, 0, XW, SCHED_W_W5W0W1W2W3W4_1, 45)
3365 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 40, 1, XW, SCHED_W_W5W0W1W2W3W4_2, 45)
3366 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 41, 2, XW, SCHED_W_W5W0W1W2W3W4_3, 45)
3367 ++
3368 ++ /* Transform 42-44 + Precalc 48-50 */
3369 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 42, 0, XW, SCHED_W_W0W1W2W3W4W5_1, 48)
3370 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 43, 1, XW, SCHED_W_W0W1W2W3W4W5_2, 48)
3371 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 44, 2, XW, SCHED_W_W0W1W2W3W4W5_3, 48)
3372 ++
3373 ++ /* Transform 45-47 + Precalc 51-53 */
3374 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 45, 0, XW, SCHED_W_W1W2W3W4W5W0_1, 51)
3375 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 46, 1, XW, SCHED_W_W1W2W3W4W5W0_2, 51)
3376 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 47, 2, XW, SCHED_W_W1W2W3W4W5W0_3, 51)
3377 ++
3378 ++ /* Transform 48-50 + Precalc 54-56 */
3379 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 48, 0, XW, SCHED_W_W2W3W4W5W0W1_1, 54)
3380 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 49, 1, XW, SCHED_W_W2W3W4W5W0W1_2, 54)
3381 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 50, 2, XW, SCHED_W_W2W3W4W5W0W1_3, 54)
3382 ++
3383 ++ /* Transform 51-53 + Precalc 57-59 */
3384 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 51, 0, XW, SCHED_W_W3W4W5W0W1W2_1, 57)
3385 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 52, 1, XW, SCHED_W_W3W4W5W0W1W2_2, 57)
3386 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 53, 2, XW, SCHED_W_W3W4W5W0W1W2_3, 57)
3387 ++
3388 ++ /* Transform 54-56 + Precalc 60-62 */
3389 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 54, 0, XW, SCHED_W_W4W5W0W1W2W3_1, 60)
3390 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 55, 1, XW, SCHED_W_W4W5W0W1W2W3_2, 60)
3391 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 56, 2, XW, SCHED_W_W4W5W0W1W2W3_3, 60)
3392 ++
3393 ++ /* Transform 57-59 + Precalc 63 */
3394 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 57, 0, XW, SCHED_W_W5W0W1W2W3W4_1, 63)
3395 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 58, 1, XW, SCHED_W_W5W0W1W2W3W4_2, 63)
3396 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 59, 2, XW, SCHED_W_W5W0W1W2W3W4_3, 63)
3397 ++
3398 ++ /* Transform 60 */
3399 ++ R2(ra, rb, rc, rd, re, rf, rg, rh, k_even, KL, 60, 0, XW, _, _)
3400 ++ subs RNBLKS, RNBLKS, #1
3401 ++ b.eq .Lend
3402 ++
3403 ++ /* Transform 61-63 + Preload next block */
3404 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 61, 1, XW, LOAD_W_VEC_1, _)
3405 ++ ldp s0, s1, [RSTATE, #0]
3406 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 62, 2, XW, LOAD_W_VEC_2, _)
3407 ++ ldp s2, s3, [RSTATE, #8]
3408 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 63, 0, XW, LOAD_W_VEC_3, _)
3409 ++
3410 ++ /* Update the chaining variables. */
3411 ++ eor ra, ra, s0
3412 ++ eor rb, rb, s1
3413 ++ ldp s0, s1, [RSTATE, #16]
3414 ++ eor rc, rc, s2
3415 ++ ldp k_even, k_odd, [RSTATE, #24]
3416 ++ eor rd, rd, s3
3417 ++ eor re, re, s0
3418 ++ stp ra, rb, [RSTATE, #0]
3419 ++ eor rf, rf, s1
3420 ++ stp rc, rd, [RSTATE, #8]
3421 ++ eor rg, rg, k_even
3422 ++ stp re, rf, [RSTATE, #16]
3423 ++ eor rh, rh, k_odd
3424 ++ stp rg, rh, [RSTATE, #24]
3425 ++ b .Loop
3426 ++
3427 ++.Lend:
3428 ++ /* Transform 61-63 */
3429 ++ R2(rd, ra, rb, rc, rh, re, rf, rg, k_odd, _, 61, 1, XW, _, _)
3430 ++ ldp s0, s1, [RSTATE, #0]
3431 ++ R2(rc, rd, ra, rb, rg, rh, re, rf, k_even, KL, 62, 2, XW, _, _)
3432 ++ ldp s2, s3, [RSTATE, #8]
3433 ++ R2(rb, rc, rd, ra, rf, rg, rh, re, k_odd, _, 63, 0, XW, _, _)
3434 ++
3435 ++ /* Update the chaining variables. */
3436 ++ eor ra, ra, s0
3437 ++ clear_vec(W0)
3438 ++ eor rb, rb, s1
3439 ++ clear_vec(W1)
3440 ++ ldp s0, s1, [RSTATE, #16]
3441 ++ clear_vec(W2)
3442 ++ eor rc, rc, s2
3443 ++ clear_vec(W3)
3444 ++ ldp k_even, k_odd, [RSTATE, #24]
3445 ++ clear_vec(W4)
3446 ++ eor rd, rd, s3
3447 ++ clear_vec(W5)
3448 ++ eor re, re, s0
3449 ++ clear_vec(XTMP0)
3450 ++ stp ra, rb, [RSTATE, #0]
3451 ++ clear_vec(XTMP1)
3452 ++ eor rf, rf, s1
3453 ++ clear_vec(XTMP2)
3454 ++ stp rc, rd, [RSTATE, #8]
3455 ++ clear_vec(XTMP3)
3456 ++ eor rg, rg, k_even
3457 ++ clear_vec(XTMP4)
3458 ++ stp re, rf, [RSTATE, #16]
3459 ++ clear_vec(XTMP5)
3460 ++ eor rh, rh, k_odd
3461 ++ clear_vec(XTMP6)
3462 ++ stp rg, rh, [RSTATE, #24]
3463 ++
3464 ++ /* Clear message expansion area */
3465 ++ add addr0, sp, #STACK_W
3466 ++ st1 {W0.16b-W3.16b}, [addr0], #64
3467 ++ st1 {W0.16b-W3.16b}, [addr0], #64
3468 ++ st1 {W0.16b-W3.16b}, [addr0]
3469 ++
3470 ++ mov sp, RFRAME
3471 ++
3472 ++ ldp x25, x26, [sp], #16
3473 ++ ldp x23, x24, [sp], #16
3474 ++ ldp x21, x22, [sp], #16
3475 ++ ldp x19, x20, [sp], #16
3476 ++ ldp x28, x29, [sp], #16
3477 ++
3478 ++ ret
3479 ++SYM_FUNC_END(sm3_neon_transform)
3480 ++
3481 ++
3482 ++ .section ".rodata", "a"
3483 ++
3484 ++ .align 4
3485 ++.LKtable:
3486 ++ .long 0x79cc4519, 0xf3988a32, 0xe7311465, 0xce6228cb
3487 ++ .long 0x9cc45197, 0x3988a32f, 0x7311465e, 0xe6228cbc
3488 ++ .long 0xcc451979, 0x988a32f3, 0x311465e7, 0x6228cbce
3489 ++ .long 0xc451979c, 0x88a32f39, 0x11465e73, 0x228cbce6
3490 ++ .long 0x9d8a7a87, 0x3b14f50f, 0x7629ea1e, 0xec53d43c
3491 ++ .long 0xd8a7a879, 0xb14f50f3, 0x629ea1e7, 0xc53d43ce
3492 ++ .long 0x8a7a879d, 0x14f50f3b, 0x29ea1e76, 0x53d43cec
3493 ++ .long 0xa7a879d8, 0x4f50f3b1, 0x9ea1e762, 0x3d43cec5
3494 ++ .long 0x7a879d8a, 0xf50f3b14, 0xea1e7629, 0xd43cec53
3495 ++ .long 0xa879d8a7, 0x50f3b14f, 0xa1e7629e, 0x43cec53d
3496 ++ .long 0x879d8a7a, 0x0f3b14f5, 0x1e7629ea, 0x3cec53d4
3497 ++ .long 0x79d8a7a8, 0xf3b14f50, 0xe7629ea1, 0xcec53d43
3498 ++ .long 0x9d8a7a87, 0x3b14f50f, 0x7629ea1e, 0xec53d43c
3499 ++ .long 0xd8a7a879, 0xb14f50f3, 0x629ea1e7, 0xc53d43ce
3500 ++ .long 0x8a7a879d, 0x14f50f3b, 0x29ea1e76, 0x53d43cec
3501 ++ .long 0xa7a879d8, 0x4f50f3b1, 0x9ea1e762, 0x3d43cec5
3502 +diff --git a/arch/arm64/crypto/sm3-neon-glue.c b/arch/arm64/crypto/sm3-neon-glue.c
3503 +new file mode 100644
3504 +index 0000000000000..7182ee683f14a
3505 +--- /dev/null
3506 ++++ b/arch/arm64/crypto/sm3-neon-glue.c
3507 +@@ -0,0 +1,103 @@
3508 ++// SPDX-License-Identifier: GPL-2.0-or-later
3509 ++/*
3510 ++ * sm3-neon-glue.c - SM3 secure hash using NEON instructions
3511 ++ *
3512 ++ * Copyright (C) 2022 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
3513 ++ */
3514 ++
3515 ++#include <asm/neon.h>
3516 ++#include <asm/simd.h>
3517 ++#include <asm/unaligned.h>
3518 ++#include <crypto/internal/hash.h>
3519 ++#include <crypto/internal/simd.h>
3520 ++#include <crypto/sm3.h>
3521 ++#include <crypto/sm3_base.h>
3522 ++#include <linux/cpufeature.h>
3523 ++#include <linux/crypto.h>
3524 ++#include <linux/module.h>
3525 ++
3526 ++
3527 ++asmlinkage void sm3_neon_transform(struct sm3_state *sst, u8 const *src,
3528 ++ int blocks);
3529 ++
3530 ++static int sm3_neon_update(struct shash_desc *desc, const u8 *data,
3531 ++ unsigned int len)
3532 ++{
3533 ++ if (!crypto_simd_usable()) {
3534 ++ sm3_update(shash_desc_ctx(desc), data, len);
3535 ++ return 0;
3536 ++ }
3537 ++
3538 ++ kernel_neon_begin();
3539 ++ sm3_base_do_update(desc, data, len, sm3_neon_transform);
3540 ++ kernel_neon_end();
3541 ++
3542 ++ return 0;
3543 ++}
3544 ++
3545 ++static int sm3_neon_final(struct shash_desc *desc, u8 *out)
3546 ++{
3547 ++ if (!crypto_simd_usable()) {
3548 ++ sm3_final(shash_desc_ctx(desc), out);
3549 ++ return 0;
3550 ++ }
3551 ++
3552 ++ kernel_neon_begin();
3553 ++ sm3_base_do_finalize(desc, sm3_neon_transform);
3554 ++ kernel_neon_end();
3555 ++
3556 ++ return sm3_base_finish(desc, out);
3557 ++}
3558 ++
3559 ++static int sm3_neon_finup(struct shash_desc *desc, const u8 *data,
3560 ++ unsigned int len, u8 *out)
3561 ++{
3562 ++ if (!crypto_simd_usable()) {
3563 ++ struct sm3_state *sctx = shash_desc_ctx(desc);
3564 ++
3565 ++ if (len)
3566 ++ sm3_update(sctx, data, len);
3567 ++ sm3_final(sctx, out);
3568 ++ return 0;
3569 ++ }
3570 ++
3571 ++ kernel_neon_begin();
3572 ++ if (len)
3573 ++ sm3_base_do_update(desc, data, len, sm3_neon_transform);
3574 ++ sm3_base_do_finalize(desc, sm3_neon_transform);
3575 ++ kernel_neon_end();
3576 ++
3577 ++ return sm3_base_finish(desc, out);
3578 ++}
3579 ++
3580 ++static struct shash_alg sm3_alg = {
3581 ++ .digestsize = SM3_DIGEST_SIZE,
3582 ++ .init = sm3_base_init,
3583 ++ .update = sm3_neon_update,
3584 ++ .final = sm3_neon_final,
3585 ++ .finup = sm3_neon_finup,
3586 ++ .descsize = sizeof(struct sm3_state),
3587 ++ .base.cra_name = "sm3",
3588 ++ .base.cra_driver_name = "sm3-neon",
3589 ++ .base.cra_blocksize = SM3_BLOCK_SIZE,
3590 ++ .base.cra_module = THIS_MODULE,
3591 ++ .base.cra_priority = 200,
3592 ++};
3593 ++
3594 ++static int __init sm3_neon_init(void)
3595 ++{
3596 ++ return crypto_register_shash(&sm3_alg);
3597 ++}
3598 ++
3599 ++static void __exit sm3_neon_fini(void)
3600 ++{
3601 ++ crypto_unregister_shash(&sm3_alg);
3602 ++}
3603 ++
3604 ++module_init(sm3_neon_init);
3605 ++module_exit(sm3_neon_fini);
3606 ++
3607 ++MODULE_DESCRIPTION("SM3 secure hash using NEON instructions");
3608 ++MODULE_AUTHOR("Jussi Kivilinna <jussi.kivilinna@×××.fi>");
3609 ++MODULE_AUTHOR("Tianjia Zhang <tianjia.zhang@×××××××××××××.com>");
3610 ++MODULE_LICENSE("GPL v2");
3611 +diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
3612 +index 445aa3af3b762..400f8956328b9 100644
3613 +--- a/arch/arm64/include/asm/processor.h
3614 ++++ b/arch/arm64/include/asm/processor.h
3615 +@@ -308,13 +308,13 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
3616 + }
3617 + #endif
3618 +
3619 +-static inline bool is_ttbr0_addr(unsigned long addr)
3620 ++static __always_inline bool is_ttbr0_addr(unsigned long addr)
3621 + {
3622 + /* entry assembly clears tags for TTBR0 addrs */
3623 + return addr < TASK_SIZE;
3624 + }
3625 +
3626 +-static inline bool is_ttbr1_addr(unsigned long addr)
3627 ++static __always_inline bool is_ttbr1_addr(unsigned long addr)
3628 + {
3629 + /* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
3630 + return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
3631 +diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
3632 +index 5b391490e045b..74f76514a48d0 100644
3633 +--- a/arch/arm64/mm/fault.c
3634 ++++ b/arch/arm64/mm/fault.c
3635 +@@ -353,6 +353,11 @@ static bool is_el1_mte_sync_tag_check_fault(unsigned long esr)
3636 + return false;
3637 + }
3638 +
3639 ++static bool is_translation_fault(unsigned long esr)
3640 ++{
3641 ++ return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
3642 ++}
3643 ++
3644 + static void __do_kernel_fault(unsigned long addr, unsigned long esr,
3645 + struct pt_regs *regs)
3646 + {
3647 +@@ -385,7 +390,8 @@ static void __do_kernel_fault(unsigned long addr, unsigned long esr,
3648 + } else if (addr < PAGE_SIZE) {
3649 + msg = "NULL pointer dereference";
3650 + } else {
3651 +- if (kfence_handle_page_fault(addr, esr & ESR_ELx_WNR, regs))
3652 ++ if (is_translation_fault(esr) &&
3653 ++ kfence_handle_page_fault(addr, esr & ESR_ELx_WNR, regs))
3654 + return;
3655 +
3656 + msg = "paging request";
3657 +diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
3658 +index 6e6756e8fa0a9..86a6e25908664 100644
3659 +--- a/arch/mips/bcm63xx/clk.c
3660 ++++ b/arch/mips/bcm63xx/clk.c
3661 +@@ -361,6 +361,8 @@ static struct clk clk_periph = {
3662 + */
3663 + int clk_enable(struct clk *clk)
3664 + {
3665 ++ if (!clk)
3666 ++ return 0;
3667 + mutex_lock(&clocks_mutex);
3668 + clk_enable_unlocked(clk);
3669 + mutex_unlock(&clocks_mutex);
3670 +diff --git a/arch/mips/boot/dts/ingenic/ci20.dts b/arch/mips/boot/dts/ingenic/ci20.dts
3671 +index 37c46720c719a..f38c39572a9e8 100644
3672 +--- a/arch/mips/boot/dts/ingenic/ci20.dts
3673 ++++ b/arch/mips/boot/dts/ingenic/ci20.dts
3674 +@@ -438,7 +438,7 @@
3675 + ingenic,nemc-tAW = <50>;
3676 + ingenic,nemc-tSTRV = <100>;
3677 +
3678 +- reset-gpios = <&gpf 12 GPIO_ACTIVE_HIGH>;
3679 ++ reset-gpios = <&gpf 12 GPIO_ACTIVE_LOW>;
3680 + vcc-supply = <&eth0_power>;
3681 +
3682 + interrupt-parent = <&gpe>;
3683 +diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper-board.c b/arch/mips/cavium-octeon/executive/cvmx-helper-board.c
3684 +index d09d0769f5496..0fd9ac76eb742 100644
3685 +--- a/arch/mips/cavium-octeon/executive/cvmx-helper-board.c
3686 ++++ b/arch/mips/cavium-octeon/executive/cvmx-helper-board.c
3687 +@@ -211,7 +211,7 @@ union cvmx_helper_link_info __cvmx_helper_board_link_get(int ipd_port)
3688 + {
3689 + union cvmx_helper_link_info result;
3690 +
3691 +- WARN(!octeon_is_simulation(),
3692 ++ WARN_ONCE(!octeon_is_simulation(),
3693 + "Using deprecated link status - please update your DT");
3694 +
3695 + /* Unless we fix it later, all links are defaulted to down */
3696 +diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
3697 +index 6f49fd9be1f3c..9abfc4bf9bd83 100644
3698 +--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
3699 ++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
3700 +@@ -1096,7 +1096,7 @@ union cvmx_helper_link_info cvmx_helper_link_get(int ipd_port)
3701 + if (index == 0)
3702 + result = __cvmx_helper_rgmii_link_get(ipd_port);
3703 + else {
3704 +- WARN(1, "Using deprecated link status - please update your DT");
3705 ++ WARN_ONCE(1, "Using deprecated link status - please update your DT");
3706 + result.s.full_duplex = 1;
3707 + result.s.link_up = 1;
3708 + result.s.speed = 1000;
3709 +diff --git a/arch/mips/kernel/vpe-cmp.c b/arch/mips/kernel/vpe-cmp.c
3710 +index e673603e11e5d..92140edb3ce3e 100644
3711 +--- a/arch/mips/kernel/vpe-cmp.c
3712 ++++ b/arch/mips/kernel/vpe-cmp.c
3713 +@@ -75,7 +75,6 @@ ATTRIBUTE_GROUPS(vpe);
3714 +
3715 + static void vpe_device_release(struct device *cd)
3716 + {
3717 +- kfree(cd);
3718 + }
3719 +
3720 + static struct class vpe_class = {
3721 +@@ -157,6 +156,7 @@ out_dev:
3722 + device_del(&vpe_device);
3723 +
3724 + out_class:
3725 ++ put_device(&vpe_device);
3726 + class_unregister(&vpe_class);
3727 +
3728 + out_chrdev:
3729 +@@ -169,7 +169,7 @@ void __exit vpe_module_exit(void)
3730 + {
3731 + struct vpe *v, *n;
3732 +
3733 +- device_del(&vpe_device);
3734 ++ device_unregister(&vpe_device);
3735 + class_unregister(&vpe_class);
3736 + unregister_chrdev(major, VPE_MODULE_NAME);
3737 +
3738 +diff --git a/arch/mips/kernel/vpe-mt.c b/arch/mips/kernel/vpe-mt.c
3739 +index bad6b0891b2b5..84a82b551ec35 100644
3740 +--- a/arch/mips/kernel/vpe-mt.c
3741 ++++ b/arch/mips/kernel/vpe-mt.c
3742 +@@ -313,7 +313,6 @@ ATTRIBUTE_GROUPS(vpe);
3743 +
3744 + static void vpe_device_release(struct device *cd)
3745 + {
3746 +- kfree(cd);
3747 + }
3748 +
3749 + static struct class vpe_class = {
3750 +@@ -497,6 +496,7 @@ out_dev:
3751 + device_del(&vpe_device);
3752 +
3753 + out_class:
3754 ++ put_device(&vpe_device);
3755 + class_unregister(&vpe_class);
3756 +
3757 + out_chrdev:
3758 +@@ -509,7 +509,7 @@ void __exit vpe_module_exit(void)
3759 + {
3760 + struct vpe *v, *n;
3761 +
3762 +- device_del(&vpe_device);
3763 ++ device_unregister(&vpe_device);
3764 + class_unregister(&vpe_class);
3765 + unregister_chrdev(major, VPE_MODULE_NAME);
3766 +
3767 +diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c
3768 +index ea8072acf8d94..01c132bc33d54 100644
3769 +--- a/arch/mips/ralink/of.c
3770 ++++ b/arch/mips/ralink/of.c
3771 +@@ -21,6 +21,7 @@
3772 + #include <asm/bootinfo.h>
3773 + #include <asm/addrspace.h>
3774 + #include <asm/prom.h>
3775 ++#include <asm/mach-ralink/ralink_regs.h>
3776 +
3777 + #include "common.h"
3778 +
3779 +@@ -81,7 +82,8 @@ static int __init plat_of_setup(void)
3780 + __dt_register_buses(soc_info.compatible, "palmbus");
3781 +
3782 + /* make sure that the reset controller is setup early */
3783 +- ralink_rst_init();
3784 ++ if (ralink_soc != MT762X_SOC_MT7621AT)
3785 ++ ralink_rst_init();
3786 +
3787 + return 0;
3788 + }
3789 +diff --git a/arch/powerpc/boot/dts/turris1x.dts b/arch/powerpc/boot/dts/turris1x.dts
3790 +index 045af668e9284..e9cda34a140e0 100644
3791 +--- a/arch/powerpc/boot/dts/turris1x.dts
3792 ++++ b/arch/powerpc/boot/dts/turris1x.dts
3793 +@@ -69,6 +69,20 @@
3794 + interrupt-parent = <&gpio>;
3795 + interrupts = <12 IRQ_TYPE_LEVEL_LOW>, /* GPIO12 - ALERT pin */
3796 + <13 IRQ_TYPE_LEVEL_LOW>; /* GPIO13 - CRIT pin */
3797 ++ #address-cells = <1>;
3798 ++ #size-cells = <0>;
3799 ++
3800 ++ /* Local temperature sensor (SA56004ED internal) */
3801 ++ channel@0 {
3802 ++ reg = <0>;
3803 ++ label = "board";
3804 ++ };
3805 ++
3806 ++ /* Remote temperature sensor (D+/D- connected to P2020 CPU Temperature Diode) */
3807 ++ channel@1 {
3808 ++ reg = <1>;
3809 ++ label = "cpu";
3810 ++ };
3811 + };
3812 +
3813 + /* DDR3 SPD/EEPROM */
3814 +diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
3815 +index 8abae463f6c12..95fd7f9485d55 100644
3816 +--- a/arch/powerpc/include/asm/hvcall.h
3817 ++++ b/arch/powerpc/include/asm/hvcall.h
3818 +@@ -79,7 +79,7 @@
3819 + #define H_NOT_ENOUGH_RESOURCES -44
3820 + #define H_R_STATE -45
3821 + #define H_RESCINDED -46
3822 +-#define H_P1 -54
3823 ++#define H_ABORTED -54
3824 + #define H_P2 -55
3825 + #define H_P3 -56
3826 + #define H_P4 -57
3827 +@@ -100,7 +100,6 @@
3828 + #define H_COP_HW -74
3829 + #define H_STATE -75
3830 + #define H_IN_USE -77
3831 +-#define H_ABORTED -78
3832 + #define H_UNSUPPORTED_FLAG_START -256
3833 + #define H_UNSUPPORTED_FLAG_END -511
3834 + #define H_MULTI_THREADS_ACTIVE -9005
3835 +diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
3836 +index 082f6d0308a47..8718289c051dd 100644
3837 +--- a/arch/powerpc/perf/callchain.c
3838 ++++ b/arch/powerpc/perf/callchain.c
3839 +@@ -61,6 +61,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
3840 + next_sp = fp[0];
3841 +
3842 + if (next_sp == sp + STACK_INT_FRAME_SIZE &&
3843 ++ validate_sp(sp, current, STACK_INT_FRAME_SIZE) &&
3844 + fp[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
3845 + /*
3846 + * This looks like an interrupt frame for an
3847 +diff --git a/arch/powerpc/perf/hv-gpci-requests.h b/arch/powerpc/perf/hv-gpci-requests.h
3848 +index 8965b4463d433..5e86371a20c78 100644
3849 +--- a/arch/powerpc/perf/hv-gpci-requests.h
3850 ++++ b/arch/powerpc/perf/hv-gpci-requests.h
3851 +@@ -79,6 +79,7 @@ REQUEST(__field(0, 8, partition_id)
3852 + )
3853 + #include I(REQUEST_END)
3854 +
3855 ++#ifdef ENABLE_EVENTS_COUNTERINFO_V6
3856 + /*
3857 + * Not available for counter_info_version >= 0x8, use
3858 + * run_instruction_cycles_by_partition(0x100) instead.
3859 +@@ -92,6 +93,7 @@ REQUEST(__field(0, 8, partition_id)
3860 + __count(0x10, 8, cycles)
3861 + )
3862 + #include I(REQUEST_END)
3863 ++#endif
3864 +
3865 + #define REQUEST_NAME system_performance_capabilities
3866 + #define REQUEST_NUM 0x40
3867 +@@ -103,6 +105,7 @@ REQUEST(__field(0, 1, perf_collect_privileged)
3868 + )
3869 + #include I(REQUEST_END)
3870 +
3871 ++#ifdef ENABLE_EVENTS_COUNTERINFO_V6
3872 + #define REQUEST_NAME processor_bus_utilization_abc_links
3873 + #define REQUEST_NUM 0x50
3874 + #define REQUEST_IDX_KIND "hw_chip_id=?"
3875 +@@ -194,6 +197,7 @@ REQUEST(__field(0, 4, phys_processor_idx)
3876 + __count(0x28, 8, instructions_completed)
3877 + )
3878 + #include I(REQUEST_END)
3879 ++#endif
3880 +
3881 + /* Processor_core_power_mode (0x95) skipped, no counters */
3882 + /* Affinity_domain_information_by_virtual_processor (0xA0) skipped,
3883 +diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
3884 +index 5eb60ed5b5e8a..7ff8ff3509f5f 100644
3885 +--- a/arch/powerpc/perf/hv-gpci.c
3886 ++++ b/arch/powerpc/perf/hv-gpci.c
3887 +@@ -70,9 +70,9 @@ static const struct attribute_group format_group = {
3888 + .attrs = format_attrs,
3889 + };
3890 +
3891 +-static const struct attribute_group event_group = {
3892 ++static struct attribute_group event_group = {
3893 + .name = "events",
3894 +- .attrs = hv_gpci_event_attrs,
3895 ++ /* .attrs is set in init */
3896 + };
3897 +
3898 + #define HV_CAPS_ATTR(_name, _format) \
3899 +@@ -330,6 +330,7 @@ static int hv_gpci_init(void)
3900 + int r;
3901 + unsigned long hret;
3902 + struct hv_perf_caps caps;
3903 ++ struct hv_gpci_request_buffer *arg;
3904 +
3905 + hv_gpci_assert_offsets_correct();
3906 +
3907 +@@ -353,6 +354,36 @@ static int hv_gpci_init(void)
3908 + /* sampling not supported */
3909 + h_gpci_pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
3910 +
3911 ++ arg = (void *)get_cpu_var(hv_gpci_reqb);
3912 ++ memset(arg, 0, HGPCI_REQ_BUFFER_SIZE);
3913 ++
3914 ++ /*
3915 ++ * hcall H_GET_PERF_COUNTER_INFO populates the output
3916 ++ * counter_info_version value based on the system hypervisor.
3917 ++ * Pass the counter request 0x10 corresponds to request type
3918 ++ * 'Dispatch_timebase_by_processor', to get the supported
3919 ++ * counter_info_version.
3920 ++ */
3921 ++ arg->params.counter_request = cpu_to_be32(0x10);
3922 ++
3923 ++ r = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO,
3924 ++ virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE);
3925 ++ if (r) {
3926 ++ pr_devel("hcall failed, can't get supported counter_info_version: 0x%x\n", r);
3927 ++ arg->params.counter_info_version_out = 0x8;
3928 ++ }
3929 ++
3930 ++ /*
3931 ++ * Use counter_info_version_out value to assign
3932 ++ * required hv-gpci event list.
3933 ++ */
3934 ++ if (arg->params.counter_info_version_out >= 0x8)
3935 ++ event_group.attrs = hv_gpci_event_attrs;
3936 ++ else
3937 ++ event_group.attrs = hv_gpci_event_attrs_v6;
3938 ++
3939 ++ put_cpu_var(hv_gpci_reqb);
3940 ++
3941 + r = perf_pmu_register(&h_gpci_pmu, h_gpci_pmu.name, -1);
3942 + if (r)
3943 + return r;
3944 +diff --git a/arch/powerpc/perf/hv-gpci.h b/arch/powerpc/perf/hv-gpci.h
3945 +index 4d108262bed79..c72020912dea5 100644
3946 +--- a/arch/powerpc/perf/hv-gpci.h
3947 ++++ b/arch/powerpc/perf/hv-gpci.h
3948 +@@ -26,6 +26,7 @@ enum {
3949 + #define REQUEST_FILE "../hv-gpci-requests.h"
3950 + #define NAME_LOWER hv_gpci
3951 + #define NAME_UPPER HV_GPCI
3952 ++#define ENABLE_EVENTS_COUNTERINFO_V6
3953 + #include "req-gen/perf.h"
3954 + #undef REQUEST_FILE
3955 + #undef NAME_LOWER
3956 +diff --git a/arch/powerpc/perf/req-gen/perf.h b/arch/powerpc/perf/req-gen/perf.h
3957 +index fa9bc804e67af..6b2a59fefffa7 100644
3958 +--- a/arch/powerpc/perf/req-gen/perf.h
3959 ++++ b/arch/powerpc/perf/req-gen/perf.h
3960 +@@ -139,6 +139,26 @@ PMU_EVENT_ATTR_STRING( \
3961 + #define REQUEST_(r_name, r_value, r_idx_1, r_fields) \
3962 + r_fields
3963 +
3964 ++/* Generate event list for platforms with counter_info_version 0x6 or below */
3965 ++static __maybe_unused struct attribute *hv_gpci_event_attrs_v6[] = {
3966 ++#include REQUEST_FILE
3967 ++ NULL
3968 ++};
3969 ++
3970 ++/*
3971 ++ * Based on getPerfCountInfo v1.018 documentation, some of the hv-gpci
3972 ++ * events were deprecated for platform firmware that supports
3973 ++ * counter_info_version 0x8 or above.
3974 ++ * Those deprecated events are still part of platform firmware that
3975 ++ * support counter_info_version 0x6 and below. As per the getPerfCountInfo
3976 ++ * v1.018 documentation there is no counter_info_version 0x7.
3977 ++ * Undefining macro ENABLE_EVENTS_COUNTERINFO_V6, to disable the addition of
3978 ++ * deprecated events in "hv_gpci_event_attrs" attribute group, for platforms
3979 ++ * that supports counter_info_version 0x8 or above.
3980 ++ */
3981 ++#undef ENABLE_EVENTS_COUNTERINFO_V6
3982 ++
3983 ++/* Generate event list for platforms with counter_info_version 0x8 or above*/
3984 + static __maybe_unused struct attribute *hv_gpci_event_attrs[] = {
3985 + #include REQUEST_FILE
3986 + NULL
3987 +diff --git a/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c b/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
3988 +index 48038aaedbd36..2875c206ac0f8 100644
3989 +--- a/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
3990 ++++ b/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
3991 +@@ -531,6 +531,7 @@ static int mpc52xx_lpbfifo_probe(struct platform_device *op)
3992 + err_bcom_rx_irq:
3993 + bcom_gen_bd_rx_release(lpbfifo.bcom_rx_task);
3994 + err_bcom_rx:
3995 ++ free_irq(lpbfifo.irq, &lpbfifo);
3996 + err_irq:
3997 + iounmap(lpbfifo.regs);
3998 + lpbfifo.regs = NULL;
3999 +diff --git a/arch/powerpc/platforms/83xx/mpc832x_rdb.c b/arch/powerpc/platforms/83xx/mpc832x_rdb.c
4000 +index e12cb44e717f1..caa96edf0e72a 100644
4001 +--- a/arch/powerpc/platforms/83xx/mpc832x_rdb.c
4002 ++++ b/arch/powerpc/platforms/83xx/mpc832x_rdb.c
4003 +@@ -107,7 +107,7 @@ static int __init of_fsl_spi_probe(char *type, char *compatible, u32 sysclk,
4004 +
4005 + goto next;
4006 + unreg:
4007 +- platform_device_del(pdev);
4008 ++ platform_device_put(pdev);
4009 + err:
4010 + pr_err("%pOF: registration failed\n", np);
4011 + next:
4012 +diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c
4013 +index 8e40ccac0f44e..e5a58a9b2fe9f 100644
4014 +--- a/arch/powerpc/platforms/pseries/eeh_pseries.c
4015 ++++ b/arch/powerpc/platforms/pseries/eeh_pseries.c
4016 +@@ -848,16 +848,7 @@ static int __init eeh_pseries_init(void)
4017 + }
4018 +
4019 + /* Initialize error log size */
4020 +- eeh_error_buf_size = rtas_token("rtas-error-log-max");
4021 +- if (eeh_error_buf_size == RTAS_UNKNOWN_SERVICE) {
4022 +- pr_info("%s: unknown EEH error log size\n",
4023 +- __func__);
4024 +- eeh_error_buf_size = 1024;
4025 +- } else if (eeh_error_buf_size > RTAS_ERROR_LOG_MAX) {
4026 +- pr_info("%s: EEH error log size %d exceeds the maximal %d\n",
4027 +- __func__, eeh_error_buf_size, RTAS_ERROR_LOG_MAX);
4028 +- eeh_error_buf_size = RTAS_ERROR_LOG_MAX;
4029 +- }
4030 ++ eeh_error_buf_size = rtas_get_error_log_max();
4031 +
4032 + /* Set EEH probe mode */
4033 + eeh_add_flag(EEH_PROBE_MODE_DEVTREE | EEH_ENABLE_IO_FOR_LOG);
4034 +diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
4035 +index f4b5b5a64db3d..63a1e1fe01851 100644
4036 +--- a/arch/powerpc/platforms/pseries/plpks.c
4037 ++++ b/arch/powerpc/platforms/pseries/plpks.c
4038 +@@ -75,7 +75,7 @@ static int pseries_status_to_err(int rc)
4039 + case H_FUNCTION:
4040 + err = -ENXIO;
4041 + break;
4042 +- case H_P1:
4043 ++ case H_PARAMETER:
4044 + case H_P2:
4045 + case H_P3:
4046 + case H_P4:
4047 +@@ -111,7 +111,7 @@ static int pseries_status_to_err(int rc)
4048 + err = -EEXIST;
4049 + break;
4050 + case H_ABORTED:
4051 +- err = -EINTR;
4052 ++ err = -EIO;
4053 + break;
4054 + default:
4055 + err = -EINVAL;
4056 +@@ -366,22 +366,24 @@ static int plpks_read_var(u8 consumer, struct plpks_var *var)
4057 + {
4058 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE] = { 0 };
4059 + struct plpks_auth *auth;
4060 +- struct label *label;
4061 ++ struct label *label = NULL;
4062 + u8 *output;
4063 + int rc;
4064 +
4065 + if (var->namelen > MAX_NAME_SIZE)
4066 + return -EINVAL;
4067 +
4068 +- auth = construct_auth(PKS_OS_OWNER);
4069 ++ auth = construct_auth(consumer);
4070 + if (IS_ERR(auth))
4071 + return PTR_ERR(auth);
4072 +
4073 +- label = construct_label(var->component, var->os, var->name,
4074 +- var->namelen);
4075 +- if (IS_ERR(label)) {
4076 +- rc = PTR_ERR(label);
4077 +- goto out_free_auth;
4078 ++ if (consumer == PKS_OS_OWNER) {
4079 ++ label = construct_label(var->component, var->os, var->name,
4080 ++ var->namelen);
4081 ++ if (IS_ERR(label)) {
4082 ++ rc = PTR_ERR(label);
4083 ++ goto out_free_auth;
4084 ++ }
4085 + }
4086 +
4087 + output = kzalloc(maxobjsize, GFP_KERNEL);
4088 +@@ -390,9 +392,15 @@ static int plpks_read_var(u8 consumer, struct plpks_var *var)
4089 + goto out_free_label;
4090 + }
4091 +
4092 +- rc = plpar_hcall(H_PKS_READ_OBJECT, retbuf, virt_to_phys(auth),
4093 +- virt_to_phys(label), label->size, virt_to_phys(output),
4094 +- maxobjsize);
4095 ++ if (consumer == PKS_OS_OWNER)
4096 ++ rc = plpar_hcall(H_PKS_READ_OBJECT, retbuf, virt_to_phys(auth),
4097 ++ virt_to_phys(label), label->size, virt_to_phys(output),
4098 ++ maxobjsize);
4099 ++ else
4100 ++ rc = plpar_hcall(H_PKS_READ_OBJECT, retbuf, virt_to_phys(auth),
4101 ++ virt_to_phys(var->name), var->namelen, virt_to_phys(output),
4102 ++ maxobjsize);
4103 ++
4104 +
4105 + if (rc != H_SUCCESS) {
4106 + pr_err("Failed to read variable %s for component %s with error %d\n",
4107 +diff --git a/arch/powerpc/platforms/pseries/plpks.h b/arch/powerpc/platforms/pseries/plpks.h
4108 +index c6a291367bb13..275ccd86bfb5e 100644
4109 +--- a/arch/powerpc/platforms/pseries/plpks.h
4110 ++++ b/arch/powerpc/platforms/pseries/plpks.h
4111 +@@ -17,7 +17,7 @@
4112 + #define WORLDREADABLE 0x08000000
4113 + #define SIGNEDUPDATE 0x01000000
4114 +
4115 +-#define PLPKS_VAR_LINUX 0x01
4116 ++#define PLPKS_VAR_LINUX 0x02
4117 + #define PLPKS_VAR_COMMON 0x04
4118 +
4119 + struct plpks_var {
4120 +diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
4121 +index e2c8f93b535ba..e454192643910 100644
4122 +--- a/arch/powerpc/sysdev/xive/spapr.c
4123 ++++ b/arch/powerpc/sysdev/xive/spapr.c
4124 +@@ -439,6 +439,7 @@ static int xive_spapr_populate_irq_data(u32 hw_irq, struct xive_irq_data *data)
4125 +
4126 + data->trig_mmio = ioremap(data->trig_page, 1u << data->esb_shift);
4127 + if (!data->trig_mmio) {
4128 ++ iounmap(data->eoi_mmio);
4129 + pr_err("Failed to map trigger page for irq 0x%x\n", hw_irq);
4130 + return -ENOMEM;
4131 + }
4132 +diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
4133 +index f51c882bf9023..e34d7809f6c9f 100644
4134 +--- a/arch/powerpc/xmon/xmon.c
4135 ++++ b/arch/powerpc/xmon/xmon.c
4136 +@@ -1525,9 +1525,9 @@ bpt_cmds(void)
4137 + cmd = inchar();
4138 +
4139 + switch (cmd) {
4140 +- static const char badaddr[] = "Only kernel addresses are permitted for breakpoints\n";
4141 +- int mode;
4142 +- case 'd': /* bd - hardware data breakpoint */
4143 ++ case 'd': { /* bd - hardware data breakpoint */
4144 ++ static const char badaddr[] = "Only kernel addresses are permitted for breakpoints\n";
4145 ++ int mode;
4146 + if (xmon_is_ro) {
4147 + printf(xmon_ro_msg);
4148 + break;
4149 +@@ -1560,6 +1560,7 @@ bpt_cmds(void)
4150 +
4151 + force_enable_xmon();
4152 + break;
4153 ++ }
4154 +
4155 + case 'i': /* bi - hardware instr breakpoint */
4156 + if (xmon_is_ro) {
4157 +diff --git a/arch/riscv/boot/dts/microchip/mpfs-icicle-kit-fabric.dtsi b/arch/riscv/boot/dts/microchip/mpfs-icicle-kit-fabric.dtsi
4158 +index 24b1cfb9a73e4..5d3e5240e33ae 100644
4159 +--- a/arch/riscv/boot/dts/microchip/mpfs-icicle-kit-fabric.dtsi
4160 ++++ b/arch/riscv/boot/dts/microchip/mpfs-icicle-kit-fabric.dtsi
4161 +@@ -9,7 +9,7 @@
4162 + compatible = "microchip,corepwm-rtl-v4";
4163 + reg = <0x0 0x40000000 0x0 0xF0>;
4164 + microchip,sync-update-mask = /bits/ 32 <0>;
4165 +- #pwm-cells = <2>;
4166 ++ #pwm-cells = <3>;
4167 + clocks = <&fabric_clk3>;
4168 + status = "disabled";
4169 + };
4170 +diff --git a/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts b/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
4171 +index ec7b7c2a3ce28..8ced67c3b00b2 100644
4172 +--- a/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
4173 ++++ b/arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
4174 +@@ -37,7 +37,7 @@
4175 + status = "okay";
4176 + };
4177 +
4178 +- ddrc_cache_hi: memory@1000000000 {
4179 ++ ddrc_cache_hi: memory@1040000000 {
4180 + device_type = "memory";
4181 + reg = <0x10 0x40000000 0x0 0x40000000>;
4182 + status = "okay";
4183 +diff --git a/arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi b/arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi
4184 +index 8545baf4d1290..39a77df489abf 100644
4185 +--- a/arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi
4186 ++++ b/arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi
4187 +@@ -13,33 +13,4 @@
4188 + #clock-cells = <0>;
4189 + clock-frequency = <125000000>;
4190 + };
4191 +-
4192 +- pcie: pcie@2000000000 {
4193 +- compatible = "microchip,pcie-host-1.0";
4194 +- #address-cells = <0x3>;
4195 +- #interrupt-cells = <0x1>;
4196 +- #size-cells = <0x2>;
4197 +- device_type = "pci";
4198 +- reg = <0x20 0x0 0x0 0x8000000>, <0x0 0x43000000 0x0 0x10000>;
4199 +- reg-names = "cfg", "apb";
4200 +- bus-range = <0x0 0x7f>;
4201 +- interrupt-parent = <&plic>;
4202 +- interrupts = <119>;
4203 +- interrupt-map = <0 0 0 1 &pcie_intc 0>,
4204 +- <0 0 0 2 &pcie_intc 1>,
4205 +- <0 0 0 3 &pcie_intc 2>,
4206 +- <0 0 0 4 &pcie_intc 3>;
4207 +- interrupt-map-mask = <0 0 0 7>;
4208 +- clocks = <&fabric_clk1>, <&fabric_clk1>, <&fabric_clk3>;
4209 +- clock-names = "fic0", "fic1", "fic3";
4210 +- ranges = <0x3000000 0x0 0x8000000 0x20 0x8000000 0x0 0x80000000>;
4211 +- msi-parent = <&pcie>;
4212 +- msi-controller;
4213 +- status = "disabled";
4214 +- pcie_intc: interrupt-controller {
4215 +- #address-cells = <0>;
4216 +- #interrupt-cells = <1>;
4217 +- interrupt-controller;
4218 +- };
4219 +- };
4220 + };
4221 +diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
4222 +index a5c2ca1d1cd8b..ec19d6afc8965 100644
4223 +--- a/arch/riscv/include/asm/hugetlb.h
4224 ++++ b/arch/riscv/include/asm/hugetlb.h
4225 +@@ -5,4 +5,10 @@
4226 + #include <asm-generic/hugetlb.h>
4227 + #include <asm/page.h>
4228 +
4229 ++static inline void arch_clear_hugepage_flags(struct page *page)
4230 ++{
4231 ++ clear_bit(PG_dcache_clean, &page->flags);
4232 ++}
4233 ++#define arch_clear_hugepage_flags arch_clear_hugepage_flags
4234 ++
4235 + #endif /* _ASM_RISCV_HUGETLB_H */
4236 +diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
4237 +index 92080a2279372..42497d487a174 100644
4238 +--- a/arch/riscv/include/asm/io.h
4239 ++++ b/arch/riscv/include/asm/io.h
4240 +@@ -135,4 +135,9 @@ __io_writes_outs(outs, u64, q, __io_pbr(), __io_paw())
4241 +
4242 + #include <asm-generic/io.h>
4243 +
4244 ++#ifdef CONFIG_MMU
4245 ++#define arch_memremap_wb(addr, size) \
4246 ++ ((__force void *)ioremap_prot((addr), (size), _PAGE_KERNEL))
4247 ++#endif
4248 ++
4249 + #endif /* _ASM_RISCV_IO_H */
4250 +diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
4251 +index dc42375c23571..42a042c0e13ed 100644
4252 +--- a/arch/riscv/include/asm/pgtable-64.h
4253 ++++ b/arch/riscv/include/asm/pgtable-64.h
4254 +@@ -25,7 +25,11 @@ extern bool pgtable_l5_enabled;
4255 + #define PGDIR_MASK (~(PGDIR_SIZE - 1))
4256 +
4257 + /* p4d is folded into pgd in case of 4-level page table */
4258 +-#define P4D_SHIFT 39
4259 ++#define P4D_SHIFT_L3 30
4260 ++#define P4D_SHIFT_L4 39
4261 ++#define P4D_SHIFT_L5 39
4262 ++#define P4D_SHIFT (pgtable_l5_enabled ? P4D_SHIFT_L5 : \
4263 ++ (pgtable_l4_enabled ? P4D_SHIFT_L4 : P4D_SHIFT_L3))
4264 + #define P4D_SIZE (_AC(1, UL) << P4D_SHIFT)
4265 + #define P4D_MASK (~(P4D_SIZE - 1))
4266 +
4267 +diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
4268 +index 186abd146eaff..3221a9e5f3724 100644
4269 +--- a/arch/riscv/kernel/entry.S
4270 ++++ b/arch/riscv/kernel/entry.S
4271 +@@ -263,12 +263,11 @@ ret_from_exception:
4272 + #endif
4273 + bnez s0, resume_kernel
4274 +
4275 +-resume_userspace:
4276 + /* Interrupts must be disabled here so flags are checked atomically */
4277 + REG_L s0, TASK_TI_FLAGS(tp) /* current_thread_info->flags */
4278 + andi s1, s0, _TIF_WORK_MASK
4279 +- bnez s1, work_pending
4280 +-
4281 ++ bnez s1, resume_userspace_slow
4282 ++resume_userspace:
4283 + #ifdef CONFIG_CONTEXT_TRACKING_USER
4284 + call user_enter_callable
4285 + #endif
4286 +@@ -368,19 +367,12 @@ resume_kernel:
4287 + j restore_all
4288 + #endif
4289 +
4290 +-work_pending:
4291 ++resume_userspace_slow:
4292 + /* Enter slow path for supplementary processing */
4293 +- la ra, ret_from_exception
4294 +- andi s1, s0, _TIF_NEED_RESCHED
4295 +- bnez s1, work_resched
4296 +-work_notifysig:
4297 +- /* Handle pending signals and notify-resume requests */
4298 +- csrs CSR_STATUS, SR_IE /* Enable interrupts for do_notify_resume() */
4299 + move a0, sp /* pt_regs */
4300 + move a1, s0 /* current_thread_info->flags */
4301 +- tail do_notify_resume
4302 +-work_resched:
4303 +- tail schedule
4304 ++ call do_work_pending
4305 ++ j resume_userspace
4306 +
4307 + /* Slow paths for ptrace. */
4308 + handle_syscall_trace_enter:
4309 +diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
4310 +index 5c591123c4409..bfb2afa4135f8 100644
4311 +--- a/arch/riscv/kernel/signal.c
4312 ++++ b/arch/riscv/kernel/signal.c
4313 +@@ -313,19 +313,27 @@ static void do_signal(struct pt_regs *regs)
4314 + }
4315 +
4316 + /*
4317 +- * notification of userspace execution resumption
4318 +- * - triggered by the _TIF_WORK_MASK flags
4319 ++ * Handle any pending work on the resume-to-userspace path, as indicated by
4320 ++ * _TIF_WORK_MASK. Entered from assembly with IRQs off.
4321 + */
4322 +-asmlinkage __visible void do_notify_resume(struct pt_regs *regs,
4323 +- unsigned long thread_info_flags)
4324 ++asmlinkage __visible void do_work_pending(struct pt_regs *regs,
4325 ++ unsigned long thread_info_flags)
4326 + {
4327 +- if (thread_info_flags & _TIF_UPROBE)
4328 +- uprobe_notify_resume(regs);
4329 +-
4330 +- /* Handle pending signal delivery */
4331 +- if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
4332 +- do_signal(regs);
4333 +-
4334 +- if (thread_info_flags & _TIF_NOTIFY_RESUME)
4335 +- resume_user_mode_work(regs);
4336 ++ do {
4337 ++ if (thread_info_flags & _TIF_NEED_RESCHED) {
4338 ++ schedule();
4339 ++ } else {
4340 ++ local_irq_enable();
4341 ++ if (thread_info_flags & _TIF_UPROBE)
4342 ++ uprobe_notify_resume(regs);
4343 ++ /* Handle pending signal delivery */
4344 ++ if (thread_info_flags & (_TIF_SIGPENDING |
4345 ++ _TIF_NOTIFY_SIGNAL))
4346 ++ do_signal(regs);
4347 ++ if (thread_info_flags & _TIF_NOTIFY_RESUME)
4348 ++ resume_user_mode_work(regs);
4349 ++ }
4350 ++ local_irq_disable();
4351 ++ thread_info_flags = read_thread_flags();
4352 ++ } while (thread_info_flags & _TIF_WORK_MASK);
4353 + }
4354 +diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
4355 +index 7abd8e4c4df63..f77cb8e42bd2a 100644
4356 +--- a/arch/riscv/kernel/traps.c
4357 ++++ b/arch/riscv/kernel/traps.c
4358 +@@ -214,7 +214,7 @@ static DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
4359 + * shadow stack, handled_ kernel_ stack_ overflow(in kernel/entry.S) is used
4360 + * to get per-cpu overflow stack(get_overflow_stack).
4361 + */
4362 +-long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE/sizeof(long)];
4363 ++long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE/sizeof(long)] __aligned(16);
4364 + asmlinkage unsigned long get_overflow_stack(void)
4365 + {
4366 + return (unsigned long)this_cpu_ptr(overflow_stack) +
4367 +diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
4368 +index 71ebbc4821f0e..5174ef54ad1d9 100644
4369 +--- a/arch/riscv/kvm/vcpu.c
4370 ++++ b/arch/riscv/kvm/vcpu.c
4371 +@@ -296,12 +296,15 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
4372 + if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
4373 + return -EFAULT;
4374 +
4375 +- /* This ONE REG interface is only defined for single letter extensions */
4376 +- if (fls(reg_val) >= RISCV_ISA_EXT_BASE)
4377 +- return -EINVAL;
4378 +-
4379 + switch (reg_num) {
4380 + case KVM_REG_RISCV_CONFIG_REG(isa):
4381 ++ /*
4382 ++ * This ONE REG interface is only defined for
4383 ++ * single letter extensions.
4384 ++ */
4385 ++ if (fls(reg_val) >= RISCV_ISA_EXT_BASE)
4386 ++ return -EINVAL;
4387 ++
4388 + if (!vcpu->arch.ran_atleast_once) {
4389 + /* Ignore the enable/disable request for certain extensions */
4390 + for (i = 0; i < RISCV_ISA_EXT_BASE; i++) {
4391 +diff --git a/arch/riscv/mm/physaddr.c b/arch/riscv/mm/physaddr.c
4392 +index 19cf25a74ee29..9b18bda74154e 100644
4393 +--- a/arch/riscv/mm/physaddr.c
4394 ++++ b/arch/riscv/mm/physaddr.c
4395 +@@ -22,7 +22,7 @@ EXPORT_SYMBOL(__virt_to_phys);
4396 + phys_addr_t __phys_addr_symbol(unsigned long x)
4397 + {
4398 + unsigned long kernel_start = kernel_map.virt_addr;
4399 +- unsigned long kernel_end = (unsigned long)_end;
4400 ++ unsigned long kernel_end = kernel_start + kernel_map.size;
4401 +
4402 + /*
4403 + * Boundary checking aginst the kernel image mapping.
4404 +diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
4405 +index 00df3a8f92acd..f2417ac54edd6 100644
4406 +--- a/arch/riscv/net/bpf_jit_comp64.c
4407 ++++ b/arch/riscv/net/bpf_jit_comp64.c
4408 +@@ -136,6 +136,25 @@ static bool in_auipc_jalr_range(s64 val)
4409 + val < ((1L << 31) - (1L << 11));
4410 + }
4411 +
4412 ++/* Emit fixed-length instructions for address */
4413 ++static int emit_addr(u8 rd, u64 addr, bool extra_pass, struct rv_jit_context *ctx)
4414 ++{
4415 ++ u64 ip = (u64)(ctx->insns + ctx->ninsns);
4416 ++ s64 off = addr - ip;
4417 ++ s64 upper = (off + (1 << 11)) >> 12;
4418 ++ s64 lower = off & 0xfff;
4419 ++
4420 ++ if (extra_pass && !in_auipc_jalr_range(off)) {
4421 ++ pr_err("bpf-jit: target offset 0x%llx is out of range\n", off);
4422 ++ return -ERANGE;
4423 ++ }
4424 ++
4425 ++ emit(rv_auipc(rd, upper), ctx);
4426 ++ emit(rv_addi(rd, rd, lower), ctx);
4427 ++ return 0;
4428 ++}
4429 ++
4430 ++/* Emit variable-length instructions for 32-bit and 64-bit imm */
4431 + static void emit_imm(u8 rd, s64 val, struct rv_jit_context *ctx)
4432 + {
4433 + /* Note that the immediate from the add is sign-extended,
4434 +@@ -1050,7 +1069,15 @@ out_be:
4435 + u64 imm64;
4436 +
4437 + imm64 = (u64)insn1.imm << 32 | (u32)imm;
4438 +- emit_imm(rd, imm64, ctx);
4439 ++ if (bpf_pseudo_func(insn)) {
4440 ++ /* fixed-length insns for extra jit pass */
4441 ++ ret = emit_addr(rd, imm64, extra_pass, ctx);
4442 ++ if (ret)
4443 ++ return ret;
4444 ++ } else {
4445 ++ emit_imm(rd, imm64, ctx);
4446 ++ }
4447 ++
4448 + return 1;
4449 + }
4450 +
4451 +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
4452 +index 67745ceab0dbc..b2c0fce3f257c 100644
4453 +--- a/arch/x86/Kconfig
4454 ++++ b/arch/x86/Kconfig
4455 +@@ -462,8 +462,8 @@ config X86_X2APIC
4456 +
4457 + Some Intel systems circa 2022 and later are locked into x2APIC mode
4458 + and can not fall back to the legacy APIC modes if SGX or TDX are
4459 +- enabled in the BIOS. They will be unable to boot without enabling
4460 +- this option.
4461 ++ enabled in the BIOS. They will boot with very reduced functionality
4462 ++ without enabling this option.
4463 +
4464 + If you don't know what to do here, say N.
4465 +
4466 +diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
4467 +index b48ddebb47489..cdf3215ec272c 100644
4468 +--- a/arch/x86/crypto/aegis128-aesni-asm.S
4469 ++++ b/arch/x86/crypto/aegis128-aesni-asm.S
4470 +@@ -7,6 +7,7 @@
4471 + */
4472 +
4473 + #include <linux/linkage.h>
4474 ++#include <linux/cfi_types.h>
4475 + #include <asm/frame.h>
4476 +
4477 + #define STATE0 %xmm0
4478 +@@ -402,7 +403,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_ad)
4479 + * void crypto_aegis128_aesni_enc(void *state, unsigned int length,
4480 + * const void *src, void *dst);
4481 + */
4482 +-SYM_FUNC_START(crypto_aegis128_aesni_enc)
4483 ++SYM_TYPED_FUNC_START(crypto_aegis128_aesni_enc)
4484 + FRAME_BEGIN
4485 +
4486 + cmp $0x10, LEN
4487 +@@ -499,7 +500,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_enc)
4488 + * void crypto_aegis128_aesni_enc_tail(void *state, unsigned int length,
4489 + * const void *src, void *dst);
4490 + */
4491 +-SYM_FUNC_START(crypto_aegis128_aesni_enc_tail)
4492 ++SYM_TYPED_FUNC_START(crypto_aegis128_aesni_enc_tail)
4493 + FRAME_BEGIN
4494 +
4495 + /* load the state: */
4496 +@@ -556,7 +557,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_enc_tail)
4497 + * void crypto_aegis128_aesni_dec(void *state, unsigned int length,
4498 + * const void *src, void *dst);
4499 + */
4500 +-SYM_FUNC_START(crypto_aegis128_aesni_dec)
4501 ++SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec)
4502 + FRAME_BEGIN
4503 +
4504 + cmp $0x10, LEN
4505 +@@ -653,7 +654,7 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec)
4506 + * void crypto_aegis128_aesni_dec_tail(void *state, unsigned int length,
4507 + * const void *src, void *dst);
4508 + */
4509 +-SYM_FUNC_START(crypto_aegis128_aesni_dec_tail)
4510 ++SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail)
4511 + FRAME_BEGIN
4512 +
4513 + /* load the state: */
4514 +diff --git a/arch/x86/crypto/aria-aesni-avx-asm_64.S b/arch/x86/crypto/aria-aesni-avx-asm_64.S
4515 +index c75fd7d015ed8..03ae4cd1d976a 100644
4516 +--- a/arch/x86/crypto/aria-aesni-avx-asm_64.S
4517 ++++ b/arch/x86/crypto/aria-aesni-avx-asm_64.S
4518 +@@ -7,6 +7,7 @@
4519 + */
4520 +
4521 + #include <linux/linkage.h>
4522 ++#include <linux/cfi_types.h>
4523 + #include <asm/frame.h>
4524 +
4525 + /* struct aria_ctx: */
4526 +@@ -913,7 +914,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_crypt_16way)
4527 + RET;
4528 + SYM_FUNC_END(__aria_aesni_avx_crypt_16way)
4529 +
4530 +-SYM_FUNC_START(aria_aesni_avx_encrypt_16way)
4531 ++SYM_TYPED_FUNC_START(aria_aesni_avx_encrypt_16way)
4532 + /* input:
4533 + * %rdi: ctx, CTX
4534 + * %rsi: dst
4535 +@@ -938,7 +939,7 @@ SYM_FUNC_START(aria_aesni_avx_encrypt_16way)
4536 + RET;
4537 + SYM_FUNC_END(aria_aesni_avx_encrypt_16way)
4538 +
4539 +-SYM_FUNC_START(aria_aesni_avx_decrypt_16way)
4540 ++SYM_TYPED_FUNC_START(aria_aesni_avx_decrypt_16way)
4541 + /* input:
4542 + * %rdi: ctx, CTX
4543 + * %rsi: dst
4544 +@@ -1039,7 +1040,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_ctr_gen_keystream_16way)
4545 + RET;
4546 + SYM_FUNC_END(__aria_aesni_avx_ctr_gen_keystream_16way)
4547 +
4548 +-SYM_FUNC_START(aria_aesni_avx_ctr_crypt_16way)
4549 ++SYM_TYPED_FUNC_START(aria_aesni_avx_ctr_crypt_16way)
4550 + /* input:
4551 + * %rdi: ctx
4552 + * %rsi: dst
4553 +@@ -1208,7 +1209,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_gfni_crypt_16way)
4554 + RET;
4555 + SYM_FUNC_END(__aria_aesni_avx_gfni_crypt_16way)
4556 +
4557 +-SYM_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
4558 ++SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
4559 + /* input:
4560 + * %rdi: ctx, CTX
4561 + * %rsi: dst
4562 +@@ -1233,7 +1234,7 @@ SYM_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
4563 + RET;
4564 + SYM_FUNC_END(aria_aesni_avx_gfni_encrypt_16way)
4565 +
4566 +-SYM_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
4567 ++SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
4568 + /* input:
4569 + * %rdi: ctx, CTX
4570 + * %rsi: dst
4571 +@@ -1258,7 +1259,7 @@ SYM_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
4572 + RET;
4573 + SYM_FUNC_END(aria_aesni_avx_gfni_decrypt_16way)
4574 +
4575 +-SYM_FUNC_START(aria_aesni_avx_gfni_ctr_crypt_16way)
4576 ++SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_ctr_crypt_16way)
4577 + /* input:
4578 + * %rdi: ctx
4579 + * %rsi: dst
4580 +diff --git a/arch/x86/crypto/sha1_ni_asm.S b/arch/x86/crypto/sha1_ni_asm.S
4581 +index 2f94ec0e763bf..3cae5a1bb3d6e 100644
4582 +--- a/arch/x86/crypto/sha1_ni_asm.S
4583 ++++ b/arch/x86/crypto/sha1_ni_asm.S
4584 +@@ -54,6 +54,7 @@
4585 + */
4586 +
4587 + #include <linux/linkage.h>
4588 ++#include <linux/cfi_types.h>
4589 +
4590 + #define DIGEST_PTR %rdi /* 1st arg */
4591 + #define DATA_PTR %rsi /* 2nd arg */
4592 +@@ -93,7 +94,7 @@
4593 + */
4594 + .text
4595 + .align 32
4596 +-SYM_FUNC_START(sha1_ni_transform)
4597 ++SYM_TYPED_FUNC_START(sha1_ni_transform)
4598 + push %rbp
4599 + mov %rsp, %rbp
4600 + sub $FRAME_SIZE, %rsp
4601 +diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S
4602 +index 263f916362e02..f54988c80eb40 100644
4603 +--- a/arch/x86/crypto/sha1_ssse3_asm.S
4604 ++++ b/arch/x86/crypto/sha1_ssse3_asm.S
4605 +@@ -25,6 +25,7 @@
4606 + */
4607 +
4608 + #include <linux/linkage.h>
4609 ++#include <linux/cfi_types.h>
4610 +
4611 + #define CTX %rdi // arg1
4612 + #define BUF %rsi // arg2
4613 +@@ -67,7 +68,7 @@
4614 + * param: function's name
4615 + */
4616 + .macro SHA1_VECTOR_ASM name
4617 +- SYM_FUNC_START(\name)
4618 ++ SYM_TYPED_FUNC_START(\name)
4619 +
4620 + push %rbx
4621 + push %r12
4622 +diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/crypto/sha256-avx-asm.S
4623 +index 3baa1ec390974..06ea30c20828d 100644
4624 +--- a/arch/x86/crypto/sha256-avx-asm.S
4625 ++++ b/arch/x86/crypto/sha256-avx-asm.S
4626 +@@ -48,6 +48,7 @@
4627 + ########################################################################
4628 +
4629 + #include <linux/linkage.h>
4630 ++#include <linux/cfi_types.h>
4631 +
4632 + ## assume buffers not aligned
4633 + #define VMOVDQ vmovdqu
4634 +@@ -346,7 +347,7 @@ a = TMP_
4635 + ## arg 3 : Num blocks
4636 + ########################################################################
4637 + .text
4638 +-SYM_FUNC_START(sha256_transform_avx)
4639 ++SYM_TYPED_FUNC_START(sha256_transform_avx)
4640 + .align 32
4641 + pushq %rbx
4642 + pushq %r12
4643 +diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
4644 +index 9bcdbc47b8b4b..2d2be531a11ed 100644
4645 +--- a/arch/x86/crypto/sha256-avx2-asm.S
4646 ++++ b/arch/x86/crypto/sha256-avx2-asm.S
4647 +@@ -49,6 +49,7 @@
4648 + ########################################################################
4649 +
4650 + #include <linux/linkage.h>
4651 ++#include <linux/cfi_types.h>
4652 +
4653 + ## assume buffers not aligned
4654 + #define VMOVDQ vmovdqu
4655 +@@ -523,7 +524,7 @@ STACK_SIZE = _CTX + _CTX_SIZE
4656 + ## arg 3 : Num blocks
4657 + ########################################################################
4658 + .text
4659 +-SYM_FUNC_START(sha256_transform_rorx)
4660 ++SYM_TYPED_FUNC_START(sha256_transform_rorx)
4661 + .align 32
4662 + pushq %rbx
4663 + pushq %r12
4664 +diff --git a/arch/x86/crypto/sha256-ssse3-asm.S b/arch/x86/crypto/sha256-ssse3-asm.S
4665 +index c4a5db612c327..7db28839108dd 100644
4666 +--- a/arch/x86/crypto/sha256-ssse3-asm.S
4667 ++++ b/arch/x86/crypto/sha256-ssse3-asm.S
4668 +@@ -47,6 +47,7 @@
4669 + ########################################################################
4670 +
4671 + #include <linux/linkage.h>
4672 ++#include <linux/cfi_types.h>
4673 +
4674 + ## assume buffers not aligned
4675 + #define MOVDQ movdqu
4676 +@@ -355,7 +356,7 @@ a = TMP_
4677 + ## arg 3 : Num blocks
4678 + ########################################################################
4679 + .text
4680 +-SYM_FUNC_START(sha256_transform_ssse3)
4681 ++SYM_TYPED_FUNC_START(sha256_transform_ssse3)
4682 + .align 32
4683 + pushq %rbx
4684 + pushq %r12
4685 +diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S
4686 +index 94d50dd27cb53..47f93937f798a 100644
4687 +--- a/arch/x86/crypto/sha256_ni_asm.S
4688 ++++ b/arch/x86/crypto/sha256_ni_asm.S
4689 +@@ -54,6 +54,7 @@
4690 + */
4691 +
4692 + #include <linux/linkage.h>
4693 ++#include <linux/cfi_types.h>
4694 +
4695 + #define DIGEST_PTR %rdi /* 1st arg */
4696 + #define DATA_PTR %rsi /* 2nd arg */
4697 +@@ -97,7 +98,7 @@
4698 +
4699 + .text
4700 + .align 32
4701 +-SYM_FUNC_START(sha256_ni_transform)
4702 ++SYM_TYPED_FUNC_START(sha256_ni_transform)
4703 +
4704 + shl $6, NUM_BLKS /* convert to bytes */
4705 + jz .Ldone_hash
4706 +diff --git a/arch/x86/crypto/sha512-avx-asm.S b/arch/x86/crypto/sha512-avx-asm.S
4707 +index 1fefe6dd3a9e2..b0984f19fdb40 100644
4708 +--- a/arch/x86/crypto/sha512-avx-asm.S
4709 ++++ b/arch/x86/crypto/sha512-avx-asm.S
4710 +@@ -48,6 +48,7 @@
4711 + ########################################################################
4712 +
4713 + #include <linux/linkage.h>
4714 ++#include <linux/cfi_types.h>
4715 +
4716 + .text
4717 +
4718 +@@ -273,7 +274,7 @@ frame_size = frame_WK + WK_SIZE
4719 + # of SHA512 message blocks.
4720 + # "blocks" is the message length in SHA512 blocks
4721 + ########################################################################
4722 +-SYM_FUNC_START(sha512_transform_avx)
4723 ++SYM_TYPED_FUNC_START(sha512_transform_avx)
4724 + test msglen, msglen
4725 + je nowork
4726 +
4727 +diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S
4728 +index 5cdaab7d69015..b1ca99055ef99 100644
4729 +--- a/arch/x86/crypto/sha512-avx2-asm.S
4730 ++++ b/arch/x86/crypto/sha512-avx2-asm.S
4731 +@@ -50,6 +50,7 @@
4732 + ########################################################################
4733 +
4734 + #include <linux/linkage.h>
4735 ++#include <linux/cfi_types.h>
4736 +
4737 + .text
4738 +
4739 +@@ -565,7 +566,7 @@ frame_size = frame_CTX + CTX_SIZE
4740 + # of SHA512 message blocks.
4741 + # "blocks" is the message length in SHA512 blocks
4742 + ########################################################################
4743 +-SYM_FUNC_START(sha512_transform_rorx)
4744 ++SYM_TYPED_FUNC_START(sha512_transform_rorx)
4745 + # Save GPRs
4746 + push %rbx
4747 + push %r12
4748 +diff --git a/arch/x86/crypto/sha512-ssse3-asm.S b/arch/x86/crypto/sha512-ssse3-asm.S
4749 +index b84c22e06c5f7..c06afb5270e5f 100644
4750 +--- a/arch/x86/crypto/sha512-ssse3-asm.S
4751 ++++ b/arch/x86/crypto/sha512-ssse3-asm.S
4752 +@@ -48,6 +48,7 @@
4753 + ########################################################################
4754 +
4755 + #include <linux/linkage.h>
4756 ++#include <linux/cfi_types.h>
4757 +
4758 + .text
4759 +
4760 +@@ -274,7 +275,7 @@ frame_size = frame_WK + WK_SIZE
4761 + # of SHA512 message blocks.
4762 + # "blocks" is the message length in SHA512 blocks.
4763 + ########################################################################
4764 +-SYM_FUNC_START(sha512_transform_ssse3)
4765 ++SYM_TYPED_FUNC_START(sha512_transform_ssse3)
4766 +
4767 + test msglen, msglen
4768 + je nowork
4769 +diff --git a/arch/x86/crypto/sm3-avx-asm_64.S b/arch/x86/crypto/sm3-avx-asm_64.S
4770 +index b12b9efb5ec51..8fc5ac681fd63 100644
4771 +--- a/arch/x86/crypto/sm3-avx-asm_64.S
4772 ++++ b/arch/x86/crypto/sm3-avx-asm_64.S
4773 +@@ -12,6 +12,7 @@
4774 + */
4775 +
4776 + #include <linux/linkage.h>
4777 ++#include <linux/cfi_types.h>
4778 + #include <asm/frame.h>
4779 +
4780 + /* Context structure */
4781 +@@ -328,7 +329,7 @@
4782 + * const u8 *data, int nblocks);
4783 + */
4784 + .align 16
4785 +-SYM_FUNC_START(sm3_transform_avx)
4786 ++SYM_TYPED_FUNC_START(sm3_transform_avx)
4787 + /* input:
4788 + * %rdi: ctx, CTX
4789 + * %rsi: data (64*nblks bytes)
4790 +diff --git a/arch/x86/crypto/sm4-aesni-avx-asm_64.S b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
4791 +index 4767ab61ff489..22b6560eb9e1e 100644
4792 +--- a/arch/x86/crypto/sm4-aesni-avx-asm_64.S
4793 ++++ b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
4794 +@@ -14,6 +14,7 @@
4795 + */
4796 +
4797 + #include <linux/linkage.h>
4798 ++#include <linux/cfi_types.h>
4799 + #include <asm/frame.h>
4800 +
4801 + #define rRIP (%rip)
4802 +@@ -420,7 +421,7 @@ SYM_FUNC_END(sm4_aesni_avx_crypt8)
4803 + * const u8 *src, u8 *iv)
4804 + */
4805 + .align 8
4806 +-SYM_FUNC_START(sm4_aesni_avx_ctr_enc_blk8)
4807 ++SYM_TYPED_FUNC_START(sm4_aesni_avx_ctr_enc_blk8)
4808 + /* input:
4809 + * %rdi: round key array, CTX
4810 + * %rsi: dst (8 blocks)
4811 +@@ -495,7 +496,7 @@ SYM_FUNC_END(sm4_aesni_avx_ctr_enc_blk8)
4812 + * const u8 *src, u8 *iv)
4813 + */
4814 + .align 8
4815 +-SYM_FUNC_START(sm4_aesni_avx_cbc_dec_blk8)
4816 ++SYM_TYPED_FUNC_START(sm4_aesni_avx_cbc_dec_blk8)
4817 + /* input:
4818 + * %rdi: round key array, CTX
4819 + * %rsi: dst (8 blocks)
4820 +@@ -545,7 +546,7 @@ SYM_FUNC_END(sm4_aesni_avx_cbc_dec_blk8)
4821 + * const u8 *src, u8 *iv)
4822 + */
4823 + .align 8
4824 +-SYM_FUNC_START(sm4_aesni_avx_cfb_dec_blk8)
4825 ++SYM_TYPED_FUNC_START(sm4_aesni_avx_cfb_dec_blk8)
4826 + /* input:
4827 + * %rdi: round key array, CTX
4828 + * %rsi: dst (8 blocks)
4829 +diff --git a/arch/x86/crypto/sm4-aesni-avx2-asm_64.S b/arch/x86/crypto/sm4-aesni-avx2-asm_64.S
4830 +index 4732fe8bb65b6..23ee39a8ada8c 100644
4831 +--- a/arch/x86/crypto/sm4-aesni-avx2-asm_64.S
4832 ++++ b/arch/x86/crypto/sm4-aesni-avx2-asm_64.S
4833 +@@ -14,6 +14,7 @@
4834 + */
4835 +
4836 + #include <linux/linkage.h>
4837 ++#include <linux/cfi_types.h>
4838 + #include <asm/frame.h>
4839 +
4840 + #define rRIP (%rip)
4841 +@@ -282,7 +283,7 @@ SYM_FUNC_END(__sm4_crypt_blk16)
4842 + * const u8 *src, u8 *iv)
4843 + */
4844 + .align 8
4845 +-SYM_FUNC_START(sm4_aesni_avx2_ctr_enc_blk16)
4846 ++SYM_TYPED_FUNC_START(sm4_aesni_avx2_ctr_enc_blk16)
4847 + /* input:
4848 + * %rdi: round key array, CTX
4849 + * %rsi: dst (16 blocks)
4850 +@@ -395,7 +396,7 @@ SYM_FUNC_END(sm4_aesni_avx2_ctr_enc_blk16)
4851 + * const u8 *src, u8 *iv)
4852 + */
4853 + .align 8
4854 +-SYM_FUNC_START(sm4_aesni_avx2_cbc_dec_blk16)
4855 ++SYM_TYPED_FUNC_START(sm4_aesni_avx2_cbc_dec_blk16)
4856 + /* input:
4857 + * %rdi: round key array, CTX
4858 + * %rsi: dst (16 blocks)
4859 +@@ -449,7 +450,7 @@ SYM_FUNC_END(sm4_aesni_avx2_cbc_dec_blk16)
4860 + * const u8 *src, u8 *iv)
4861 + */
4862 + .align 8
4863 +-SYM_FUNC_START(sm4_aesni_avx2_cfb_dec_blk16)
4864 ++SYM_TYPED_FUNC_START(sm4_aesni_avx2_cfb_dec_blk16)
4865 + /* input:
4866 + * %rdi: round key array, CTX
4867 + * %rsi: dst (16 blocks)
4868 +diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
4869 +index 1ef4f7861e2ec..1f4869227efb9 100644
4870 +--- a/arch/x86/events/intel/uncore_snb.c
4871 ++++ b/arch/x86/events/intel/uncore_snb.c
4872 +@@ -1338,6 +1338,7 @@ static void __uncore_imc_init_box(struct intel_uncore_box *box,
4873 + /* MCHBAR is disabled */
4874 + if (!(mch_bar & BIT(0))) {
4875 + pr_warn("perf uncore: MCHBAR is disabled. Failed to map IMC free-running counters.\n");
4876 ++ pci_dev_put(pdev);
4877 + return;
4878 + }
4879 + mch_bar &= ~BIT(0);
4880 +@@ -1352,6 +1353,8 @@ static void __uncore_imc_init_box(struct intel_uncore_box *box,
4881 + box->io_addr = ioremap(addr, type->mmio_map_size);
4882 + if (!box->io_addr)
4883 + pr_warn("perf uncore: Failed to ioremap for %s.\n", type->name);
4884 ++
4885 ++ pci_dev_put(pdev);
4886 + }
4887 +
4888 + static void tgl_uncore_imc_freerunning_init_box(struct intel_uncore_box *box)
4889 +diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
4890 +index ed869443efb21..fcd95e93f479a 100644
4891 +--- a/arch/x86/events/intel/uncore_snbep.c
4892 ++++ b/arch/x86/events/intel/uncore_snbep.c
4893 +@@ -2891,6 +2891,7 @@ static bool hswep_has_limit_sbox(unsigned int device)
4894 + return false;
4895 +
4896 + pci_read_config_dword(dev, HSWEP_PCU_CAPID4_OFFET, &capid4);
4897 ++ pci_dev_put(dev);
4898 + if (!hswep_get_chop(capid4))
4899 + return true;
4900 +
4901 +@@ -4492,6 +4493,8 @@ static int sad_cfg_iio_topology(struct intel_uncore_type *type, u8 *sad_pmon_map
4902 + type->topology = NULL;
4903 + }
4904 +
4905 ++ pci_dev_put(dev);
4906 ++
4907 + return ret;
4908 + }
4909 +
4910 +@@ -4857,6 +4860,8 @@ static int snr_uncore_mmio_map(struct intel_uncore_box *box,
4911 +
4912 + addr += box_ctl;
4913 +
4914 ++ pci_dev_put(pdev);
4915 ++
4916 + box->io_addr = ioremap(addr, type->mmio_map_size);
4917 + if (!box->io_addr) {
4918 + pr_warn("perf uncore: Failed to ioremap for %s.\n", type->name);
4919 +diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
4920 +index a269049a43ce3..85863b9c9e684 100644
4921 +--- a/arch/x86/hyperv/hv_init.c
4922 ++++ b/arch/x86/hyperv/hv_init.c
4923 +@@ -535,8 +535,6 @@ void hyperv_cleanup(void)
4924 + union hv_x64_msr_hypercall_contents hypercall_msr;
4925 + union hv_reference_tsc_msr tsc_msr;
4926 +
4927 +- unregister_syscore_ops(&hv_syscore_ops);
4928 +-
4929 + /* Reset our OS id */
4930 + wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
4931 + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, 0);
4932 +diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
4933 +index 3415321c8240c..3216da7074bad 100644
4934 +--- a/arch/x86/include/asm/apic.h
4935 ++++ b/arch/x86/include/asm/apic.h
4936 +@@ -249,7 +249,6 @@ static inline u64 native_x2apic_icr_read(void)
4937 + extern int x2apic_mode;
4938 + extern int x2apic_phys;
4939 + extern void __init x2apic_set_max_apicid(u32 apicid);
4940 +-extern void __init check_x2apic(void);
4941 + extern void x2apic_setup(void);
4942 + static inline int x2apic_enabled(void)
4943 + {
4944 +@@ -258,13 +257,13 @@ static inline int x2apic_enabled(void)
4945 +
4946 + #define x2apic_supported() (boot_cpu_has(X86_FEATURE_X2APIC))
4947 + #else /* !CONFIG_X86_X2APIC */
4948 +-static inline void check_x2apic(void) { }
4949 + static inline void x2apic_setup(void) { }
4950 + static inline int x2apic_enabled(void) { return 0; }
4951 +
4952 + #define x2apic_mode (0)
4953 + #define x2apic_supported() (0)
4954 + #endif /* !CONFIG_X86_X2APIC */
4955 ++extern void __init check_x2apic(void);
4956 +
4957 + struct irq_data;
4958 +
4959 +diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
4960 +index fd6f6e5b755a7..a336feef0af14 100644
4961 +--- a/arch/x86/include/asm/realmode.h
4962 ++++ b/arch/x86/include/asm/realmode.h
4963 +@@ -91,6 +91,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
4964 +
4965 + void reserve_real_mode(void);
4966 + void load_trampoline_pgtable(void);
4967 ++void init_real_mode(void);
4968 +
4969 + #endif /* __ASSEMBLY__ */
4970 +
4971 +diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
4972 +index e9170457697e4..c1c8c581759d6 100644
4973 +--- a/arch/x86/include/asm/x86_init.h
4974 ++++ b/arch/x86/include/asm/x86_init.h
4975 +@@ -285,6 +285,8 @@ struct x86_hyper_runtime {
4976 + * possible in x86_early_init_platform_quirks() by
4977 + * only using the current x86_hardware_subarch
4978 + * semantics.
4979 ++ * @realmode_reserve: reserve memory for realmode trampoline
4980 ++ * @realmode_init: initialize realmode trampoline
4981 + * @hyper: x86 hypervisor specific runtime callbacks
4982 + */
4983 + struct x86_platform_ops {
4984 +@@ -301,6 +303,8 @@ struct x86_platform_ops {
4985 + void (*apic_post_init)(void);
4986 + struct x86_legacy_features legacy;
4987 + void (*set_legacy_features)(void);
4988 ++ void (*realmode_reserve)(void);
4989 ++ void (*realmode_init)(void);
4990 + struct x86_hyper_runtime hyper;
4991 + struct x86_guest guest;
4992 + };
4993 +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
4994 +index c6876d3ea4b17..20d9a604da7c4 100644
4995 +--- a/arch/x86/kernel/apic/apic.c
4996 ++++ b/arch/x86/kernel/apic/apic.c
4997 +@@ -1931,16 +1931,19 @@ void __init check_x2apic(void)
4998 + }
4999 + }
5000 + #else /* CONFIG_X86_X2APIC */
5001 +-static int __init validate_x2apic(void)
5002 ++void __init check_x2apic(void)
5003 + {
5004 + if (!apic_is_x2apic_enabled())
5005 +- return 0;
5006 ++ return;
5007 + /*
5008 +- * Checkme: Can we simply turn off x2apic here instead of panic?
5009 ++ * Checkme: Can we simply turn off x2APIC here instead of disabling the APIC?
5010 + */
5011 +- panic("BIOS has enabled x2apic but kernel doesn't support x2apic, please disable x2apic in BIOS.\n");
5012 ++ pr_err("Kernel does not support x2APIC, please recompile with CONFIG_X86_X2APIC.\n");
5013 ++ pr_err("Disabling APIC, expect reduced performance and functionality.\n");
5014 ++
5015 ++ disable_apic = 1;
5016 ++ setup_clear_cpu_cap(X86_FEATURE_APIC);
5017 + }
5018 +-early_initcall(validate_x2apic);
5019 +
5020 + static inline void try_to_enable_x2apic(int remap_mode) { }
5021 + static inline void __x2apic_enable(void) { }
5022 +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
5023 +index 2d7ea5480ec33..4278996504833 100644
5024 +--- a/arch/x86/kernel/cpu/intel.c
5025 ++++ b/arch/x86/kernel/cpu/intel.c
5026 +@@ -1034,8 +1034,32 @@ static const struct {
5027 +
5028 + static struct ratelimit_state bld_ratelimit;
5029 +
5030 ++static unsigned int sysctl_sld_mitigate = 1;
5031 + static DEFINE_SEMAPHORE(buslock_sem);
5032 +
5033 ++#ifdef CONFIG_PROC_SYSCTL
5034 ++static struct ctl_table sld_sysctls[] = {
5035 ++ {
5036 ++ .procname = "split_lock_mitigate",
5037 ++ .data = &sysctl_sld_mitigate,
5038 ++ .maxlen = sizeof(unsigned int),
5039 ++ .mode = 0644,
5040 ++ .proc_handler = proc_douintvec_minmax,
5041 ++ .extra1 = SYSCTL_ZERO,
5042 ++ .extra2 = SYSCTL_ONE,
5043 ++ },
5044 ++ {}
5045 ++};
5046 ++
5047 ++static int __init sld_mitigate_sysctl_init(void)
5048 ++{
5049 ++ register_sysctl_init("kernel", sld_sysctls);
5050 ++ return 0;
5051 ++}
5052 ++
5053 ++late_initcall(sld_mitigate_sysctl_init);
5054 ++#endif
5055 ++
5056 + static inline bool match_option(const char *arg, int arglen, const char *opt)
5057 + {
5058 + int len = strlen(opt), ratelimit;
5059 +@@ -1146,12 +1170,20 @@ static void split_lock_init(void)
5060 + split_lock_verify_msr(sld_state != sld_off);
5061 + }
5062 +
5063 +-static void __split_lock_reenable(struct work_struct *work)
5064 ++static void __split_lock_reenable_unlock(struct work_struct *work)
5065 + {
5066 + sld_update_msr(true);
5067 + up(&buslock_sem);
5068 + }
5069 +
5070 ++static DECLARE_DELAYED_WORK(sl_reenable_unlock, __split_lock_reenable_unlock);
5071 ++
5072 ++static void __split_lock_reenable(struct work_struct *work)
5073 ++{
5074 ++ sld_update_msr(true);
5075 ++}
5076 ++static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);
5077 ++
5078 + /*
5079 + * If a CPU goes offline with pending delayed work to re-enable split lock
5080 + * detection then the delayed work will be executed on some other CPU. That
5081 +@@ -1169,10 +1201,9 @@ static int splitlock_cpu_offline(unsigned int cpu)
5082 + return 0;
5083 + }
5084 +
5085 +-static DECLARE_DELAYED_WORK(split_lock_reenable, __split_lock_reenable);
5086 +-
5087 + static void split_lock_warn(unsigned long ip)
5088 + {
5089 ++ struct delayed_work *work;
5090 + int cpu;
5091 +
5092 + if (!current->reported_split_lock)
5093 +@@ -1180,14 +1211,26 @@ static void split_lock_warn(unsigned long ip)
5094 + current->comm, current->pid, ip);
5095 + current->reported_split_lock = 1;
5096 +
5097 +- /* misery factor #1, sleep 10ms before trying to execute split lock */
5098 +- if (msleep_interruptible(10) > 0)
5099 +- return;
5100 +- /* Misery factor #2, only allow one buslocked disabled core at a time */
5101 +- if (down_interruptible(&buslock_sem) == -EINTR)
5102 +- return;
5103 ++ if (sysctl_sld_mitigate) {
5104 ++ /*
5105 ++ * misery factor #1:
5106 ++ * sleep 10ms before trying to execute split lock.
5107 ++ */
5108 ++ if (msleep_interruptible(10) > 0)
5109 ++ return;
5110 ++ /*
5111 ++ * Misery factor #2:
5112 ++ * only allow one buslocked disabled core at a time.
5113 ++ */
5114 ++ if (down_interruptible(&buslock_sem) == -EINTR)
5115 ++ return;
5116 ++ work = &sl_reenable_unlock;
5117 ++ } else {
5118 ++ work = &sl_reenable;
5119 ++ }
5120 ++
5121 + cpu = get_cpu();
5122 +- schedule_delayed_work_on(cpu, &split_lock_reenable, 2);
5123 ++ schedule_delayed_work_on(cpu, work, 2);
5124 +
5125 + /* Disable split lock detection on this CPU to make progress */
5126 + sld_update_msr(false);
5127 +diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
5128 +index 1ec20807de1e8..2c258255a6296 100644
5129 +--- a/arch/x86/kernel/cpu/sgx/encl.c
5130 ++++ b/arch/x86/kernel/cpu/sgx/encl.c
5131 +@@ -680,11 +680,15 @@ const struct vm_operations_struct sgx_vm_ops = {
5132 + void sgx_encl_release(struct kref *ref)
5133 + {
5134 + struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
5135 ++ unsigned long max_page_index = PFN_DOWN(encl->base + encl->size - 1);
5136 + struct sgx_va_page *va_page;
5137 + struct sgx_encl_page *entry;
5138 +- unsigned long index;
5139 ++ unsigned long count = 0;
5140 ++
5141 ++ XA_STATE(xas, &encl->page_array, PFN_DOWN(encl->base));
5142 +
5143 +- xa_for_each(&encl->page_array, index, entry) {
5144 ++ xas_lock(&xas);
5145 ++ xas_for_each(&xas, entry, max_page_index) {
5146 + if (entry->epc_page) {
5147 + /*
5148 + * The page and its radix tree entry cannot be freed
5149 +@@ -699,9 +703,20 @@ void sgx_encl_release(struct kref *ref)
5150 + }
5151 +
5152 + kfree(entry);
5153 +- /* Invoke scheduler to prevent soft lockups. */
5154 +- cond_resched();
5155 ++ /*
5156 ++ * Invoke scheduler on every XA_CHECK_SCHED iteration
5157 ++ * to prevent soft lockups.
5158 ++ */
5159 ++ if (!(++count % XA_CHECK_SCHED)) {
5160 ++ xas_pause(&xas);
5161 ++ xas_unlock(&xas);
5162 ++
5163 ++ cond_resched();
5164 ++
5165 ++ xas_lock(&xas);
5166 ++ }
5167 + }
5168 ++ xas_unlock(&xas);
5169 +
5170 + xa_destroy(&encl->page_array);
5171 +
5172 +diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
5173 +index 216fee7144eef..892609cde4a20 100644
5174 +--- a/arch/x86/kernel/setup.c
5175 ++++ b/arch/x86/kernel/setup.c
5176 +@@ -1175,7 +1175,7 @@ void __init setup_arch(char **cmdline_p)
5177 + * Moreover, on machines with SandyBridge graphics or in setups that use
5178 + * crashkernel the entire 1M is reserved anyway.
5179 + */
5180 +- reserve_real_mode();
5181 ++ x86_platform.realmode_reserve();
5182 +
5183 + init_mem_mapping();
5184 +
5185 +diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
5186 +index b63cf8f7745ee..6c07f6daaa227 100644
5187 +--- a/arch/x86/kernel/uprobes.c
5188 ++++ b/arch/x86/kernel/uprobes.c
5189 +@@ -722,8 +722,9 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
5190 + switch (opc1) {
5191 + case 0xeb: /* jmp 8 */
5192 + case 0xe9: /* jmp 32 */
5193 +- case 0x90: /* prefix* + nop; same as jmp with .offs = 0 */
5194 + break;
5195 ++ case 0x90: /* prefix* + nop; same as jmp with .offs = 0 */
5196 ++ goto setup;
5197 +
5198 + case 0xe8: /* call relative */
5199 + branch_clear_offset(auprobe, insn);
5200 +@@ -753,6 +754,7 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
5201 + return -ENOTSUPP;
5202 + }
5203 +
5204 ++setup:
5205 + auprobe->branch.opc1 = opc1;
5206 + auprobe->branch.ilen = insn->length;
5207 + auprobe->branch.offs = insn->immediate.value;
5208 +diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
5209 +index 57353519bc119..ef80d361b4632 100644
5210 +--- a/arch/x86/kernel/x86_init.c
5211 ++++ b/arch/x86/kernel/x86_init.c
5212 +@@ -25,6 +25,7 @@
5213 + #include <asm/iommu.h>
5214 + #include <asm/mach_traps.h>
5215 + #include <asm/irqdomain.h>
5216 ++#include <asm/realmode.h>
5217 +
5218 + void x86_init_noop(void) { }
5219 + void __init x86_init_uint_noop(unsigned int unused) { }
5220 +@@ -145,6 +146,8 @@ struct x86_platform_ops x86_platform __ro_after_init = {
5221 + .get_nmi_reason = default_get_nmi_reason,
5222 + .save_sched_clock_state = tsc_save_sched_clock_state,
5223 + .restore_sched_clock_state = tsc_restore_sched_clock_state,
5224 ++ .realmode_reserve = reserve_real_mode,
5225 ++ .realmode_init = init_real_mode,
5226 + .hyper.pin_vcpu = x86_op_int_noop,
5227 +
5228 + .guest = {
5229 +diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
5230 +index 41d7669a97ad1..af565816d2ba6 100644
5231 +--- a/arch/x86/realmode/init.c
5232 ++++ b/arch/x86/realmode/init.c
5233 +@@ -200,14 +200,18 @@ static void __init set_real_mode_permissions(void)
5234 + set_memory_x((unsigned long) text_start, text_size >> PAGE_SHIFT);
5235 + }
5236 +
5237 +-static int __init init_real_mode(void)
5238 ++void __init init_real_mode(void)
5239 + {
5240 + if (!real_mode_header)
5241 + panic("Real mode trampoline was not allocated");
5242 +
5243 + setup_real_mode();
5244 + set_real_mode_permissions();
5245 ++}
5246 +
5247 ++static int __init do_init_real_mode(void)
5248 ++{
5249 ++ x86_platform.realmode_init();
5250 + return 0;
5251 + }
5252 +-early_initcall(init_real_mode);
5253 ++early_initcall(do_init_real_mode);
5254 +diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
5255 +index 038da45f057a7..8944726255c9c 100644
5256 +--- a/arch/x86/xen/enlighten_pv.c
5257 ++++ b/arch/x86/xen/enlighten_pv.c
5258 +@@ -1266,6 +1266,8 @@ asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
5259 + xen_vcpu_info_reset(0);
5260 +
5261 + x86_platform.get_nmi_reason = xen_get_nmi_reason;
5262 ++ x86_platform.realmode_reserve = x86_init_noop;
5263 ++ x86_platform.realmode_init = x86_init_noop;
5264 +
5265 + x86_init.resources.memory_setup = xen_memory_setup;
5266 + x86_init.irqs.intr_mode_select = x86_init_noop;
5267 +diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
5268 +index c3e1f9a7d43aa..4b0d6fff88de5 100644
5269 +--- a/arch/x86/xen/smp.c
5270 ++++ b/arch/x86/xen/smp.c
5271 +@@ -32,30 +32,30 @@ static irqreturn_t xen_reschedule_interrupt(int irq, void *dev_id)
5272 +
5273 + void xen_smp_intr_free(unsigned int cpu)
5274 + {
5275 ++ kfree(per_cpu(xen_resched_irq, cpu).name);
5276 ++ per_cpu(xen_resched_irq, cpu).name = NULL;
5277 + if (per_cpu(xen_resched_irq, cpu).irq >= 0) {
5278 + unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu).irq, NULL);
5279 + per_cpu(xen_resched_irq, cpu).irq = -1;
5280 +- kfree(per_cpu(xen_resched_irq, cpu).name);
5281 +- per_cpu(xen_resched_irq, cpu).name = NULL;
5282 + }
5283 ++ kfree(per_cpu(xen_callfunc_irq, cpu).name);
5284 ++ per_cpu(xen_callfunc_irq, cpu).name = NULL;
5285 + if (per_cpu(xen_callfunc_irq, cpu).irq >= 0) {
5286 + unbind_from_irqhandler(per_cpu(xen_callfunc_irq, cpu).irq, NULL);
5287 + per_cpu(xen_callfunc_irq, cpu).irq = -1;
5288 +- kfree(per_cpu(xen_callfunc_irq, cpu).name);
5289 +- per_cpu(xen_callfunc_irq, cpu).name = NULL;
5290 + }
5291 ++ kfree(per_cpu(xen_debug_irq, cpu).name);
5292 ++ per_cpu(xen_debug_irq, cpu).name = NULL;
5293 + if (per_cpu(xen_debug_irq, cpu).irq >= 0) {
5294 + unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu).irq, NULL);
5295 + per_cpu(xen_debug_irq, cpu).irq = -1;
5296 +- kfree(per_cpu(xen_debug_irq, cpu).name);
5297 +- per_cpu(xen_debug_irq, cpu).name = NULL;
5298 + }
5299 ++ kfree(per_cpu(xen_callfuncsingle_irq, cpu).name);
5300 ++ per_cpu(xen_callfuncsingle_irq, cpu).name = NULL;
5301 + if (per_cpu(xen_callfuncsingle_irq, cpu).irq >= 0) {
5302 + unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu).irq,
5303 + NULL);
5304 + per_cpu(xen_callfuncsingle_irq, cpu).irq = -1;
5305 +- kfree(per_cpu(xen_callfuncsingle_irq, cpu).name);
5306 +- per_cpu(xen_callfuncsingle_irq, cpu).name = NULL;
5307 + }
5308 + }
5309 +
5310 +@@ -65,6 +65,7 @@ int xen_smp_intr_init(unsigned int cpu)
5311 + char *resched_name, *callfunc_name, *debug_name;
5312 +
5313 + resched_name = kasprintf(GFP_KERNEL, "resched%d", cpu);
5314 ++ per_cpu(xen_resched_irq, cpu).name = resched_name;
5315 + rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
5316 + cpu,
5317 + xen_reschedule_interrupt,
5318 +@@ -74,9 +75,9 @@ int xen_smp_intr_init(unsigned int cpu)
5319 + if (rc < 0)
5320 + goto fail;
5321 + per_cpu(xen_resched_irq, cpu).irq = rc;
5322 +- per_cpu(xen_resched_irq, cpu).name = resched_name;
5323 +
5324 + callfunc_name = kasprintf(GFP_KERNEL, "callfunc%d", cpu);
5325 ++ per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
5326 + rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
5327 + cpu,
5328 + xen_call_function_interrupt,
5329 +@@ -86,10 +87,10 @@ int xen_smp_intr_init(unsigned int cpu)
5330 + if (rc < 0)
5331 + goto fail;
5332 + per_cpu(xen_callfunc_irq, cpu).irq = rc;
5333 +- per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
5334 +
5335 + if (!xen_fifo_events) {
5336 + debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
5337 ++ per_cpu(xen_debug_irq, cpu).name = debug_name;
5338 + rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
5339 + xen_debug_interrupt,
5340 + IRQF_PERCPU | IRQF_NOBALANCING,
5341 +@@ -97,10 +98,10 @@ int xen_smp_intr_init(unsigned int cpu)
5342 + if (rc < 0)
5343 + goto fail;
5344 + per_cpu(xen_debug_irq, cpu).irq = rc;
5345 +- per_cpu(xen_debug_irq, cpu).name = debug_name;
5346 + }
5347 +
5348 + callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
5349 ++ per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
5350 + rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
5351 + cpu,
5352 + xen_call_function_single_interrupt,
5353 +@@ -110,7 +111,6 @@ int xen_smp_intr_init(unsigned int cpu)
5354 + if (rc < 0)
5355 + goto fail;
5356 + per_cpu(xen_callfuncsingle_irq, cpu).irq = rc;
5357 +- per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
5358 +
5359 + return 0;
5360 +
5361 +diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
5362 +index 480be82e9b7be..6175f2c5c8224 100644
5363 +--- a/arch/x86/xen/smp_pv.c
5364 ++++ b/arch/x86/xen/smp_pv.c
5365 +@@ -97,18 +97,18 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
5366 +
5367 + void xen_smp_intr_free_pv(unsigned int cpu)
5368 + {
5369 ++ kfree(per_cpu(xen_irq_work, cpu).name);
5370 ++ per_cpu(xen_irq_work, cpu).name = NULL;
5371 + if (per_cpu(xen_irq_work, cpu).irq >= 0) {
5372 + unbind_from_irqhandler(per_cpu(xen_irq_work, cpu).irq, NULL);
5373 + per_cpu(xen_irq_work, cpu).irq = -1;
5374 +- kfree(per_cpu(xen_irq_work, cpu).name);
5375 +- per_cpu(xen_irq_work, cpu).name = NULL;
5376 + }
5377 +
5378 ++ kfree(per_cpu(xen_pmu_irq, cpu).name);
5379 ++ per_cpu(xen_pmu_irq, cpu).name = NULL;
5380 + if (per_cpu(xen_pmu_irq, cpu).irq >= 0) {
5381 + unbind_from_irqhandler(per_cpu(xen_pmu_irq, cpu).irq, NULL);
5382 + per_cpu(xen_pmu_irq, cpu).irq = -1;
5383 +- kfree(per_cpu(xen_pmu_irq, cpu).name);
5384 +- per_cpu(xen_pmu_irq, cpu).name = NULL;
5385 + }
5386 + }
5387 +
5388 +@@ -118,6 +118,7 @@ int xen_smp_intr_init_pv(unsigned int cpu)
5389 + char *callfunc_name, *pmu_name;
5390 +
5391 + callfunc_name = kasprintf(GFP_KERNEL, "irqwork%d", cpu);
5392 ++ per_cpu(xen_irq_work, cpu).name = callfunc_name;
5393 + rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,
5394 + cpu,
5395 + xen_irq_work_interrupt,
5396 +@@ -127,10 +128,10 @@ int xen_smp_intr_init_pv(unsigned int cpu)
5397 + if (rc < 0)
5398 + goto fail;
5399 + per_cpu(xen_irq_work, cpu).irq = rc;
5400 +- per_cpu(xen_irq_work, cpu).name = callfunc_name;
5401 +
5402 + if (is_xen_pmu) {
5403 + pmu_name = kasprintf(GFP_KERNEL, "pmu%d", cpu);
5404 ++ per_cpu(xen_pmu_irq, cpu).name = pmu_name;
5405 + rc = bind_virq_to_irqhandler(VIRQ_XENPMU, cpu,
5406 + xen_pmu_irq_handler,
5407 + IRQF_PERCPU|IRQF_NOBALANCING,
5408 +@@ -138,7 +139,6 @@ int xen_smp_intr_init_pv(unsigned int cpu)
5409 + if (rc < 0)
5410 + goto fail;
5411 + per_cpu(xen_pmu_irq, cpu).irq = rc;
5412 +- per_cpu(xen_pmu_irq, cpu).name = pmu_name;
5413 + }
5414 +
5415 + return 0;
5416 +diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
5417 +index 043c73dfd2c98..5c6fc16e4b925 100644
5418 +--- a/arch/x86/xen/spinlock.c
5419 ++++ b/arch/x86/xen/spinlock.c
5420 +@@ -75,6 +75,7 @@ void xen_init_lock_cpu(int cpu)
5421 + cpu, per_cpu(lock_kicker_irq, cpu));
5422 +
5423 + name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
5424 ++ per_cpu(irq_name, cpu) = name;
5425 + irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
5426 + cpu,
5427 + dummy_handler,
5428 +@@ -85,7 +86,6 @@ void xen_init_lock_cpu(int cpu)
5429 + if (irq >= 0) {
5430 + disable_irq(irq); /* make sure it's never delivered */
5431 + per_cpu(lock_kicker_irq, cpu) = irq;
5432 +- per_cpu(irq_name, cpu) = name;
5433 + }
5434 +
5435 + printk("cpu %d spinlock event irq %d\n", cpu, irq);
5436 +@@ -98,6 +98,8 @@ void xen_uninit_lock_cpu(int cpu)
5437 + if (!xen_pvspin)
5438 + return;
5439 +
5440 ++ kfree(per_cpu(irq_name, cpu));
5441 ++ per_cpu(irq_name, cpu) = NULL;
5442 + /*
5443 + * When booting the kernel with 'mitigations=auto,nosmt', the secondary
5444 + * CPUs are not activated, and lock_kicker_irq is not initialized.
5445 +@@ -108,8 +110,6 @@ void xen_uninit_lock_cpu(int cpu)
5446 +
5447 + unbind_from_irqhandler(irq, NULL);
5448 + per_cpu(lock_kicker_irq, cpu) = -1;
5449 +- kfree(per_cpu(irq_name, cpu));
5450 +- per_cpu(irq_name, cpu) = NULL;
5451 + }
5452 +
5453 + PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
5454 +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
5455 +index 7ea427817f7f5..3e3bd1a466464 100644
5456 +--- a/block/bfq-iosched.c
5457 ++++ b/block/bfq-iosched.c
5458 +@@ -386,6 +386,12 @@ static void bfq_put_stable_ref(struct bfq_queue *bfqq);
5459 +
5460 + void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
5461 + {
5462 ++ struct bfq_queue *old_bfqq = bic->bfqq[is_sync];
5463 ++
5464 ++ /* Clear bic pointer if bfqq is detached from this bic */
5465 ++ if (old_bfqq && old_bfqq->bic == bic)
5466 ++ old_bfqq->bic = NULL;
5467 ++
5468 + /*
5469 + * If bfqq != NULL, then a non-stable queue merge between
5470 + * bic->bfqq and bfqq is happening here. This causes troubles
5471 +@@ -5377,7 +5383,6 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
5472 + unsigned long flags;
5473 +
5474 + spin_lock_irqsave(&bfqd->lock, flags);
5475 +- bfqq->bic = NULL;
5476 + bfq_exit_bfqq(bfqd, bfqq);
5477 + bic_set_bfqq(bic, NULL, is_sync);
5478 + spin_unlock_irqrestore(&bfqd->lock, flags);
5479 +@@ -6784,6 +6789,12 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
5480 + bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio,
5481 + true, is_sync,
5482 + NULL);
5483 ++ if (unlikely(bfqq == &bfqd->oom_bfqq))
5484 ++ bfqq_already_existing = true;
5485 ++ } else
5486 ++ bfqq_already_existing = true;
5487 ++
5488 ++ if (!bfqq_already_existing) {
5489 + bfqq->waker_bfqq = old_bfqq->waker_bfqq;
5490 + bfqq->tentative_waker_bfqq = NULL;
5491 +
5492 +@@ -6797,8 +6808,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
5493 + if (bfqq->waker_bfqq)
5494 + hlist_add_head(&bfqq->woken_list_node,
5495 + &bfqq->waker_bfqq->woken_list);
5496 +- } else
5497 +- bfqq_already_existing = true;
5498 ++ }
5499 + }
5500 + }
5501 +
5502 +diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
5503 +index ed761c62ad0a7..fcf9cf49f5de1 100644
5504 +--- a/block/blk-cgroup.c
5505 ++++ b/block/blk-cgroup.c
5506 +@@ -33,6 +33,7 @@
5507 + #include "blk-cgroup.h"
5508 + #include "blk-ioprio.h"
5509 + #include "blk-throttle.h"
5510 ++#include "blk-rq-qos.h"
5511 +
5512 + /*
5513 + * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation.
5514 +@@ -1275,6 +1276,7 @@ err_unlock:
5515 + void blkcg_exit_disk(struct gendisk *disk)
5516 + {
5517 + blkg_destroy_all(disk);
5518 ++ rq_qos_exit(disk->queue);
5519 + blk_throtl_exit(disk);
5520 + }
5521 +
5522 +diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
5523 +index 93997d297d427..4515288fbe351 100644
5524 +--- a/block/blk-mq-sysfs.c
5525 ++++ b/block/blk-mq-sysfs.c
5526 +@@ -185,7 +185,7 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
5527 + {
5528 + struct request_queue *q = hctx->queue;
5529 + struct blk_mq_ctx *ctx;
5530 +- int i, ret;
5531 ++ int i, j, ret;
5532 +
5533 + if (!hctx->nr_ctx)
5534 + return 0;
5535 +@@ -197,9 +197,16 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
5536 + hctx_for_each_ctx(hctx, ctx, i) {
5537 + ret = kobject_add(&ctx->kobj, &hctx->kobj, "cpu%u", ctx->cpu);
5538 + if (ret)
5539 +- break;
5540 ++ goto out;
5541 + }
5542 +
5543 ++ return 0;
5544 ++out:
5545 ++ hctx_for_each_ctx(hctx, ctx, j) {
5546 ++ if (j < i)
5547 ++ kobject_del(&ctx->kobj);
5548 ++ }
5549 ++ kobject_del(&hctx->kobj);
5550 + return ret;
5551 + }
5552 +
5553 +diff --git a/block/blk-mq.c b/block/blk-mq.c
5554 +index 228a6696d8351..0b855e033a834 100644
5555 +--- a/block/blk-mq.c
5556 ++++ b/block/blk-mq.c
5557 +@@ -1529,7 +1529,13 @@ static void blk_mq_rq_timed_out(struct request *req)
5558 + blk_add_timer(req);
5559 + }
5560 +
5561 +-static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
5562 ++struct blk_expired_data {
5563 ++ bool has_timedout_rq;
5564 ++ unsigned long next;
5565 ++ unsigned long timeout_start;
5566 ++};
5567 ++
5568 ++static bool blk_mq_req_expired(struct request *rq, struct blk_expired_data *expired)
5569 + {
5570 + unsigned long deadline;
5571 +
5572 +@@ -1539,13 +1545,13 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
5573 + return false;
5574 +
5575 + deadline = READ_ONCE(rq->deadline);
5576 +- if (time_after_eq(jiffies, deadline))
5577 ++ if (time_after_eq(expired->timeout_start, deadline))
5578 + return true;
5579 +
5580 +- if (*next == 0)
5581 +- *next = deadline;
5582 +- else if (time_after(*next, deadline))
5583 +- *next = deadline;
5584 ++ if (expired->next == 0)
5585 ++ expired->next = deadline;
5586 ++ else if (time_after(expired->next, deadline))
5587 ++ expired->next = deadline;
5588 + return false;
5589 + }
5590 +
5591 +@@ -1561,7 +1567,7 @@ void blk_mq_put_rq_ref(struct request *rq)
5592 +
5593 + static bool blk_mq_check_expired(struct request *rq, void *priv)
5594 + {
5595 +- unsigned long *next = priv;
5596 ++ struct blk_expired_data *expired = priv;
5597 +
5598 + /*
5599 + * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
5600 +@@ -1570,7 +1576,18 @@ static bool blk_mq_check_expired(struct request *rq, void *priv)
5601 + * it was completed and reallocated as a new request after returning
5602 + * from blk_mq_check_expired().
5603 + */
5604 +- if (blk_mq_req_expired(rq, next))
5605 ++ if (blk_mq_req_expired(rq, expired)) {
5606 ++ expired->has_timedout_rq = true;
5607 ++ return false;
5608 ++ }
5609 ++ return true;
5610 ++}
5611 ++
5612 ++static bool blk_mq_handle_expired(struct request *rq, void *priv)
5613 ++{
5614 ++ struct blk_expired_data *expired = priv;
5615 ++
5616 ++ if (blk_mq_req_expired(rq, expired))
5617 + blk_mq_rq_timed_out(rq);
5618 + return true;
5619 + }
5620 +@@ -1579,7 +1596,9 @@ static void blk_mq_timeout_work(struct work_struct *work)
5621 + {
5622 + struct request_queue *q =
5623 + container_of(work, struct request_queue, timeout_work);
5624 +- unsigned long next = 0;
5625 ++ struct blk_expired_data expired = {
5626 ++ .timeout_start = jiffies,
5627 ++ };
5628 + struct blk_mq_hw_ctx *hctx;
5629 + unsigned long i;
5630 +
5631 +@@ -1599,10 +1618,23 @@ static void blk_mq_timeout_work(struct work_struct *work)
5632 + if (!percpu_ref_tryget(&q->q_usage_counter))
5633 + return;
5634 +
5635 +- blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
5636 ++ /* check if there is any timed-out request */
5637 ++ blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
5638 ++ if (expired.has_timedout_rq) {
5639 ++ /*
5640 ++ * Before walking tags, we must ensure any submit started
5641 ++ * before the current time has finished. Since the submit
5642 ++ * uses srcu or rcu, wait for a synchronization point to
5643 ++ * ensure all running submits have finished
5644 ++ */
5645 ++ blk_mq_wait_quiesce_done(q);
5646 ++
5647 ++ expired.next = 0;
5648 ++ blk_mq_queue_tag_busy_iter(q, blk_mq_handle_expired, &expired);
5649 ++ }
5650 +
5651 +- if (next != 0) {
5652 +- mod_timer(&q->timeout, next);
5653 ++ if (expired.next != 0) {
5654 ++ mod_timer(&q->timeout, expired.next);
5655 + } else {
5656 + /*
5657 + * Request timeouts are handled as a forward rolling timer. If
5658 +diff --git a/block/genhd.c b/block/genhd.c
5659 +index 0f9769db2de83..647f7d8d88312 100644
5660 +--- a/block/genhd.c
5661 ++++ b/block/genhd.c
5662 +@@ -530,6 +530,7 @@ out_unregister_queue:
5663 + rq_qos_exit(disk->queue);
5664 + out_put_slave_dir:
5665 + kobject_put(disk->slave_dir);
5666 ++ disk->slave_dir = NULL;
5667 + out_put_holder_dir:
5668 + kobject_put(disk->part0->bd_holder_dir);
5669 + out_del_integrity:
5670 +@@ -629,6 +630,7 @@ void del_gendisk(struct gendisk *disk)
5671 +
5672 + kobject_put(disk->part0->bd_holder_dir);
5673 + kobject_put(disk->slave_dir);
5674 ++ disk->slave_dir = NULL;
5675 +
5676 + part_stat_set_all(disk->part0, 0);
5677 + disk->part0->bd_stamp = 0;
5678 +diff --git a/crypto/cryptd.c b/crypto/cryptd.c
5679 +index 668095eca0faf..ca3a40fc7da91 100644
5680 +--- a/crypto/cryptd.c
5681 ++++ b/crypto/cryptd.c
5682 +@@ -68,11 +68,12 @@ struct aead_instance_ctx {
5683 +
5684 + struct cryptd_skcipher_ctx {
5685 + refcount_t refcnt;
5686 +- struct crypto_sync_skcipher *child;
5687 ++ struct crypto_skcipher *child;
5688 + };
5689 +
5690 + struct cryptd_skcipher_request_ctx {
5691 + crypto_completion_t complete;
5692 ++ struct skcipher_request req;
5693 + };
5694 +
5695 + struct cryptd_hash_ctx {
5696 +@@ -227,13 +228,13 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent,
5697 + const u8 *key, unsigned int keylen)
5698 + {
5699 + struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(parent);
5700 +- struct crypto_sync_skcipher *child = ctx->child;
5701 ++ struct crypto_skcipher *child = ctx->child;
5702 +
5703 +- crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
5704 +- crypto_sync_skcipher_set_flags(child,
5705 +- crypto_skcipher_get_flags(parent) &
5706 +- CRYPTO_TFM_REQ_MASK);
5707 +- return crypto_sync_skcipher_setkey(child, key, keylen);
5708 ++ crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
5709 ++ crypto_skcipher_set_flags(child,
5710 ++ crypto_skcipher_get_flags(parent) &
5711 ++ CRYPTO_TFM_REQ_MASK);
5712 ++ return crypto_skcipher_setkey(child, key, keylen);
5713 + }
5714 +
5715 + static void cryptd_skcipher_complete(struct skcipher_request *req, int err)
5716 +@@ -258,13 +259,13 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
5717 + struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
5718 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
5719 + struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
5720 +- struct crypto_sync_skcipher *child = ctx->child;
5721 +- SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
5722 ++ struct skcipher_request *subreq = &rctx->req;
5723 ++ struct crypto_skcipher *child = ctx->child;
5724 +
5725 + if (unlikely(err == -EINPROGRESS))
5726 + goto out;
5727 +
5728 +- skcipher_request_set_sync_tfm(subreq, child);
5729 ++ skcipher_request_set_tfm(subreq, child);
5730 + skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
5731 + NULL, NULL);
5732 + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
5733 +@@ -286,13 +287,13 @@ static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
5734 + struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
5735 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
5736 + struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
5737 +- struct crypto_sync_skcipher *child = ctx->child;
5738 +- SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
5739 ++ struct skcipher_request *subreq = &rctx->req;
5740 ++ struct crypto_skcipher *child = ctx->child;
5741 +
5742 + if (unlikely(err == -EINPROGRESS))
5743 + goto out;
5744 +
5745 +- skcipher_request_set_sync_tfm(subreq, child);
5746 ++ skcipher_request_set_tfm(subreq, child);
5747 + skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
5748 + NULL, NULL);
5749 + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
5750 +@@ -343,9 +344,10 @@ static int cryptd_skcipher_init_tfm(struct crypto_skcipher *tfm)
5751 + if (IS_ERR(cipher))
5752 + return PTR_ERR(cipher);
5753 +
5754 +- ctx->child = (struct crypto_sync_skcipher *)cipher;
5755 ++ ctx->child = cipher;
5756 + crypto_skcipher_set_reqsize(
5757 +- tfm, sizeof(struct cryptd_skcipher_request_ctx));
5758 ++ tfm, sizeof(struct cryptd_skcipher_request_ctx) +
5759 ++ crypto_skcipher_reqsize(cipher));
5760 + return 0;
5761 + }
5762 +
5763 +@@ -353,7 +355,7 @@ static void cryptd_skcipher_exit_tfm(struct crypto_skcipher *tfm)
5764 + {
5765 + struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
5766 +
5767 +- crypto_free_sync_skcipher(ctx->child);
5768 ++ crypto_free_skcipher(ctx->child);
5769 + }
5770 +
5771 + static void cryptd_skcipher_free(struct skcipher_instance *inst)
5772 +@@ -931,7 +933,7 @@ struct crypto_skcipher *cryptd_skcipher_child(struct cryptd_skcipher *tfm)
5773 + {
5774 + struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
5775 +
5776 +- return &ctx->child->base;
5777 ++ return ctx->child;
5778 + }
5779 + EXPORT_SYMBOL_GPL(cryptd_skcipher_child);
5780 +
5781 +diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
5782 +index a82679b576bb4..b23235d58a122 100644
5783 +--- a/crypto/tcrypt.c
5784 ++++ b/crypto/tcrypt.c
5785 +@@ -1090,15 +1090,6 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
5786 + goto out_free_tfm;
5787 + }
5788 +
5789 +-
5790 +- for (i = 0; i < num_mb; ++i)
5791 +- if (testmgr_alloc_buf(data[i].xbuf)) {
5792 +- while (i--)
5793 +- testmgr_free_buf(data[i].xbuf);
5794 +- goto out_free_tfm;
5795 +- }
5796 +-
5797 +-
5798 + for (i = 0; i < num_mb; ++i) {
5799 + data[i].req = skcipher_request_alloc(tfm, GFP_KERNEL);
5800 + if (!data[i].req) {
5801 +@@ -1471,387 +1462,387 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
5802 + }
5803 +
5804 + for (i = 1; i < 200; i++)
5805 +- ret += do_test(NULL, 0, 0, i, num_mb);
5806 ++ ret = min(ret, do_test(NULL, 0, 0, i, num_mb));
5807 + break;
5808 +
5809 + case 1:
5810 +- ret += tcrypt_test("md5");
5811 ++ ret = min(ret, tcrypt_test("md5"));
5812 + break;
5813 +
5814 + case 2:
5815 +- ret += tcrypt_test("sha1");
5816 ++ ret = min(ret, tcrypt_test("sha1"));
5817 + break;
5818 +
5819 + case 3:
5820 +- ret += tcrypt_test("ecb(des)");
5821 +- ret += tcrypt_test("cbc(des)");
5822 +- ret += tcrypt_test("ctr(des)");
5823 ++ ret = min(ret, tcrypt_test("ecb(des)"));
5824 ++ ret = min(ret, tcrypt_test("cbc(des)"));
5825 ++ ret = min(ret, tcrypt_test("ctr(des)"));
5826 + break;
5827 +
5828 + case 4:
5829 +- ret += tcrypt_test("ecb(des3_ede)");
5830 +- ret += tcrypt_test("cbc(des3_ede)");
5831 +- ret += tcrypt_test("ctr(des3_ede)");
5832 ++ ret = min(ret, tcrypt_test("ecb(des3_ede)"));
5833 ++ ret = min(ret, tcrypt_test("cbc(des3_ede)"));
5834 ++ ret = min(ret, tcrypt_test("ctr(des3_ede)"));
5835 + break;
5836 +
5837 + case 5:
5838 +- ret += tcrypt_test("md4");
5839 ++ ret = min(ret, tcrypt_test("md4"));
5840 + break;
5841 +
5842 + case 6:
5843 +- ret += tcrypt_test("sha256");
5844 ++ ret = min(ret, tcrypt_test("sha256"));
5845 + break;
5846 +
5847 + case 7:
5848 +- ret += tcrypt_test("ecb(blowfish)");
5849 +- ret += tcrypt_test("cbc(blowfish)");
5850 +- ret += tcrypt_test("ctr(blowfish)");
5851 ++ ret = min(ret, tcrypt_test("ecb(blowfish)"));
5852 ++ ret = min(ret, tcrypt_test("cbc(blowfish)"));
5853 ++ ret = min(ret, tcrypt_test("ctr(blowfish)"));
5854 + break;
5855 +
5856 + case 8:
5857 +- ret += tcrypt_test("ecb(twofish)");
5858 +- ret += tcrypt_test("cbc(twofish)");
5859 +- ret += tcrypt_test("ctr(twofish)");
5860 +- ret += tcrypt_test("lrw(twofish)");
5861 +- ret += tcrypt_test("xts(twofish)");
5862 ++ ret = min(ret, tcrypt_test("ecb(twofish)"));
5863 ++ ret = min(ret, tcrypt_test("cbc(twofish)"));
5864 ++ ret = min(ret, tcrypt_test("ctr(twofish)"));
5865 ++ ret = min(ret, tcrypt_test("lrw(twofish)"));
5866 ++ ret = min(ret, tcrypt_test("xts(twofish)"));
5867 + break;
5868 +
5869 + case 9:
5870 +- ret += tcrypt_test("ecb(serpent)");
5871 +- ret += tcrypt_test("cbc(serpent)");
5872 +- ret += tcrypt_test("ctr(serpent)");
5873 +- ret += tcrypt_test("lrw(serpent)");
5874 +- ret += tcrypt_test("xts(serpent)");
5875 ++ ret = min(ret, tcrypt_test("ecb(serpent)"));
5876 ++ ret = min(ret, tcrypt_test("cbc(serpent)"));
5877 ++ ret = min(ret, tcrypt_test("ctr(serpent)"));
5878 ++ ret = min(ret, tcrypt_test("lrw(serpent)"));
5879 ++ ret = min(ret, tcrypt_test("xts(serpent)"));
5880 + break;
5881 +
5882 + case 10:
5883 +- ret += tcrypt_test("ecb(aes)");
5884 +- ret += tcrypt_test("cbc(aes)");
5885 +- ret += tcrypt_test("lrw(aes)");
5886 +- ret += tcrypt_test("xts(aes)");
5887 +- ret += tcrypt_test("ctr(aes)");
5888 +- ret += tcrypt_test("rfc3686(ctr(aes))");
5889 +- ret += tcrypt_test("ofb(aes)");
5890 +- ret += tcrypt_test("cfb(aes)");
5891 +- ret += tcrypt_test("xctr(aes)");
5892 ++ ret = min(ret, tcrypt_test("ecb(aes)"));
5893 ++ ret = min(ret, tcrypt_test("cbc(aes)"));
5894 ++ ret = min(ret, tcrypt_test("lrw(aes)"));
5895 ++ ret = min(ret, tcrypt_test("xts(aes)"));
5896 ++ ret = min(ret, tcrypt_test("ctr(aes)"));
5897 ++ ret = min(ret, tcrypt_test("rfc3686(ctr(aes))"));
5898 ++ ret = min(ret, tcrypt_test("ofb(aes)"));
5899 ++ ret = min(ret, tcrypt_test("cfb(aes)"));
5900 ++ ret = min(ret, tcrypt_test("xctr(aes)"));
5901 + break;
5902 +
5903 + case 11:
5904 +- ret += tcrypt_test("sha384");
5905 ++ ret = min(ret, tcrypt_test("sha384"));
5906 + break;
5907 +
5908 + case 12:
5909 +- ret += tcrypt_test("sha512");
5910 ++ ret = min(ret, tcrypt_test("sha512"));
5911 + break;
5912 +
5913 + case 13:
5914 +- ret += tcrypt_test("deflate");
5915 ++ ret = min(ret, tcrypt_test("deflate"));
5916 + break;
5917 +
5918 + case 14:
5919 +- ret += tcrypt_test("ecb(cast5)");
5920 +- ret += tcrypt_test("cbc(cast5)");
5921 +- ret += tcrypt_test("ctr(cast5)");
5922 ++ ret = min(ret, tcrypt_test("ecb(cast5)"));
5923 ++ ret = min(ret, tcrypt_test("cbc(cast5)"));
5924 ++ ret = min(ret, tcrypt_test("ctr(cast5)"));
5925 + break;
5926 +
5927 + case 15:
5928 +- ret += tcrypt_test("ecb(cast6)");
5929 +- ret += tcrypt_test("cbc(cast6)");
5930 +- ret += tcrypt_test("ctr(cast6)");
5931 +- ret += tcrypt_test("lrw(cast6)");
5932 +- ret += tcrypt_test("xts(cast6)");
5933 ++ ret = min(ret, tcrypt_test("ecb(cast6)"));
5934 ++ ret = min(ret, tcrypt_test("cbc(cast6)"));
5935 ++ ret = min(ret, tcrypt_test("ctr(cast6)"));
5936 ++ ret = min(ret, tcrypt_test("lrw(cast6)"));
5937 ++ ret = min(ret, tcrypt_test("xts(cast6)"));
5938 + break;
5939 +
5940 + case 16:
5941 +- ret += tcrypt_test("ecb(arc4)");
5942 ++ ret = min(ret, tcrypt_test("ecb(arc4)"));
5943 + break;
5944 +
5945 + case 17:
5946 +- ret += tcrypt_test("michael_mic");
5947 ++ ret = min(ret, tcrypt_test("michael_mic"));
5948 + break;
5949 +
5950 + case 18:
5951 +- ret += tcrypt_test("crc32c");
5952 ++ ret = min(ret, tcrypt_test("crc32c"));
5953 + break;
5954 +
5955 + case 19:
5956 +- ret += tcrypt_test("ecb(tea)");
5957 ++ ret = min(ret, tcrypt_test("ecb(tea)"));
5958 + break;
5959 +
5960 + case 20:
5961 +- ret += tcrypt_test("ecb(xtea)");
5962 ++ ret = min(ret, tcrypt_test("ecb(xtea)"));
5963 + break;
5964 +
5965 + case 21:
5966 +- ret += tcrypt_test("ecb(khazad)");
5967 ++ ret = min(ret, tcrypt_test("ecb(khazad)"));
5968 + break;
5969 +
5970 + case 22:
5971 +- ret += tcrypt_test("wp512");
5972 ++ ret = min(ret, tcrypt_test("wp512"));
5973 + break;
5974 +
5975 + case 23:
5976 +- ret += tcrypt_test("wp384");
5977 ++ ret = min(ret, tcrypt_test("wp384"));
5978 + break;
5979 +
5980 + case 24:
5981 +- ret += tcrypt_test("wp256");
5982 ++ ret = min(ret, tcrypt_test("wp256"));
5983 + break;
5984 +
5985 + case 26:
5986 +- ret += tcrypt_test("ecb(anubis)");
5987 +- ret += tcrypt_test("cbc(anubis)");
5988 ++ ret = min(ret, tcrypt_test("ecb(anubis)"));
5989 ++ ret = min(ret, tcrypt_test("cbc(anubis)"));
5990 + break;
5991 +
5992 + case 30:
5993 +- ret += tcrypt_test("ecb(xeta)");
5994 ++ ret = min(ret, tcrypt_test("ecb(xeta)"));
5995 + break;
5996 +
5997 + case 31:
5998 +- ret += tcrypt_test("pcbc(fcrypt)");
5999 ++ ret = min(ret, tcrypt_test("pcbc(fcrypt)"));
6000 + break;
6001 +
6002 + case 32:
6003 +- ret += tcrypt_test("ecb(camellia)");
6004 +- ret += tcrypt_test("cbc(camellia)");
6005 +- ret += tcrypt_test("ctr(camellia)");
6006 +- ret += tcrypt_test("lrw(camellia)");
6007 +- ret += tcrypt_test("xts(camellia)");
6008 ++ ret = min(ret, tcrypt_test("ecb(camellia)"));
6009 ++ ret = min(ret, tcrypt_test("cbc(camellia)"));
6010 ++ ret = min(ret, tcrypt_test("ctr(camellia)"));
6011 ++ ret = min(ret, tcrypt_test("lrw(camellia)"));
6012 ++ ret = min(ret, tcrypt_test("xts(camellia)"));
6013 + break;
6014 +
6015 + case 33:
6016 +- ret += tcrypt_test("sha224");
6017 ++ ret = min(ret, tcrypt_test("sha224"));
6018 + break;
6019 +
6020 + case 35:
6021 +- ret += tcrypt_test("gcm(aes)");
6022 ++ ret = min(ret, tcrypt_test("gcm(aes)"));
6023 + break;
6024 +
6025 + case 36:
6026 +- ret += tcrypt_test("lzo");
6027 ++ ret = min(ret, tcrypt_test("lzo"));
6028 + break;
6029 +
6030 + case 37:
6031 +- ret += tcrypt_test("ccm(aes)");
6032 ++ ret = min(ret, tcrypt_test("ccm(aes)"));
6033 + break;
6034 +
6035 + case 38:
6036 +- ret += tcrypt_test("cts(cbc(aes))");
6037 ++ ret = min(ret, tcrypt_test("cts(cbc(aes))"));
6038 + break;
6039 +
6040 + case 39:
6041 +- ret += tcrypt_test("xxhash64");
6042 ++ ret = min(ret, tcrypt_test("xxhash64"));
6043 + break;
6044 +
6045 + case 40:
6046 +- ret += tcrypt_test("rmd160");
6047 ++ ret = min(ret, tcrypt_test("rmd160"));
6048 + break;
6049 +
6050 + case 42:
6051 +- ret += tcrypt_test("blake2b-512");
6052 ++ ret = min(ret, tcrypt_test("blake2b-512"));
6053 + break;
6054 +
6055 + case 43:
6056 +- ret += tcrypt_test("ecb(seed)");
6057 ++ ret = min(ret, tcrypt_test("ecb(seed)"));
6058 + break;
6059 +
6060 + case 45:
6061 +- ret += tcrypt_test("rfc4309(ccm(aes))");
6062 ++ ret = min(ret, tcrypt_test("rfc4309(ccm(aes))"));
6063 + break;
6064 +
6065 + case 46:
6066 +- ret += tcrypt_test("ghash");
6067 ++ ret = min(ret, tcrypt_test("ghash"));
6068 + break;
6069 +
6070 + case 47:
6071 +- ret += tcrypt_test("crct10dif");
6072 ++ ret = min(ret, tcrypt_test("crct10dif"));
6073 + break;
6074 +
6075 + case 48:
6076 +- ret += tcrypt_test("sha3-224");
6077 ++ ret = min(ret, tcrypt_test("sha3-224"));
6078 + break;
6079 +
6080 + case 49:
6081 +- ret += tcrypt_test("sha3-256");
6082 ++ ret = min(ret, tcrypt_test("sha3-256"));
6083 + break;
6084 +
6085 + case 50:
6086 +- ret += tcrypt_test("sha3-384");
6087 ++ ret = min(ret, tcrypt_test("sha3-384"));
6088 + break;
6089 +
6090 + case 51:
6091 +- ret += tcrypt_test("sha3-512");
6092 ++ ret = min(ret, tcrypt_test("sha3-512"));
6093 + break;
6094 +
6095 + case 52:
6096 +- ret += tcrypt_test("sm3");
6097 ++ ret = min(ret, tcrypt_test("sm3"));
6098 + break;
6099 +
6100 + case 53:
6101 +- ret += tcrypt_test("streebog256");
6102 ++ ret = min(ret, tcrypt_test("streebog256"));
6103 + break;
6104 +
6105 + case 54:
6106 +- ret += tcrypt_test("streebog512");
6107 ++ ret = min(ret, tcrypt_test("streebog512"));
6108 + break;
6109 +
6110 + case 55:
6111 +- ret += tcrypt_test("gcm(sm4)");
6112 ++ ret = min(ret, tcrypt_test("gcm(sm4)"));
6113 + break;
6114 +
6115 + case 56:
6116 +- ret += tcrypt_test("ccm(sm4)");
6117 ++ ret = min(ret, tcrypt_test("ccm(sm4)"));
6118 + break;
6119 +
6120 + case 57:
6121 +- ret += tcrypt_test("polyval");
6122 ++ ret = min(ret, tcrypt_test("polyval"));
6123 + break;
6124 +
6125 + case 58:
6126 +- ret += tcrypt_test("gcm(aria)");
6127 ++ ret = min(ret, tcrypt_test("gcm(aria)"));
6128 + break;
6129 +
6130 + case 100:
6131 +- ret += tcrypt_test("hmac(md5)");
6132 ++ ret = min(ret, tcrypt_test("hmac(md5)"));
6133 + break;
6134 +
6135 + case 101:
6136 +- ret += tcrypt_test("hmac(sha1)");
6137 ++ ret = min(ret, tcrypt_test("hmac(sha1)"));
6138 + break;
6139 +
6140 + case 102:
6141 +- ret += tcrypt_test("hmac(sha256)");
6142 ++ ret = min(ret, tcrypt_test("hmac(sha256)"));
6143 + break;
6144 +
6145 + case 103:
6146 +- ret += tcrypt_test("hmac(sha384)");
6147 ++ ret = min(ret, tcrypt_test("hmac(sha384)"));
6148 + break;
6149 +
6150 + case 104:
6151 +- ret += tcrypt_test("hmac(sha512)");
6152 ++ ret = min(ret, tcrypt_test("hmac(sha512)"));
6153 + break;
6154 +
6155 + case 105:
6156 +- ret += tcrypt_test("hmac(sha224)");
6157 ++ ret = min(ret, tcrypt_test("hmac(sha224)"));
6158 + break;
6159 +
6160 + case 106:
6161 +- ret += tcrypt_test("xcbc(aes)");
6162 ++ ret = min(ret, tcrypt_test("xcbc(aes)"));
6163 + break;
6164 +
6165 + case 108:
6166 +- ret += tcrypt_test("hmac(rmd160)");
6167 ++ ret = min(ret, tcrypt_test("hmac(rmd160)"));
6168 + break;
6169 +
6170 + case 109:
6171 +- ret += tcrypt_test("vmac64(aes)");
6172 ++ ret = min(ret, tcrypt_test("vmac64(aes)"));
6173 + break;
6174 +
6175 + case 111:
6176 +- ret += tcrypt_test("hmac(sha3-224)");
6177 ++ ret = min(ret, tcrypt_test("hmac(sha3-224)"));
6178 + break;
6179 +
6180 + case 112:
6181 +- ret += tcrypt_test("hmac(sha3-256)");
6182 ++ ret = min(ret, tcrypt_test("hmac(sha3-256)"));
6183 + break;
6184 +
6185 + case 113:
6186 +- ret += tcrypt_test("hmac(sha3-384)");
6187 ++ ret = min(ret, tcrypt_test("hmac(sha3-384)"));
6188 + break;
6189 +
6190 + case 114:
6191 +- ret += tcrypt_test("hmac(sha3-512)");
6192 ++ ret = min(ret, tcrypt_test("hmac(sha3-512)"));
6193 + break;
6194 +
6195 + case 115:
6196 +- ret += tcrypt_test("hmac(streebog256)");
6197 ++ ret = min(ret, tcrypt_test("hmac(streebog256)"));
6198 + break;
6199 +
6200 + case 116:
6201 +- ret += tcrypt_test("hmac(streebog512)");
6202 ++ ret = min(ret, tcrypt_test("hmac(streebog512)"));
6203 + break;
6204 +
6205 + case 150:
6206 +- ret += tcrypt_test("ansi_cprng");
6207 ++ ret = min(ret, tcrypt_test("ansi_cprng"));
6208 + break;
6209 +
6210 + case 151:
6211 +- ret += tcrypt_test("rfc4106(gcm(aes))");
6212 ++ ret = min(ret, tcrypt_test("rfc4106(gcm(aes))"));
6213 + break;
6214 +
6215 + case 152:
6216 +- ret += tcrypt_test("rfc4543(gcm(aes))");
6217 ++ ret = min(ret, tcrypt_test("rfc4543(gcm(aes))"));
6218 + break;
6219 +
6220 + case 153:
6221 +- ret += tcrypt_test("cmac(aes)");
6222 ++ ret = min(ret, tcrypt_test("cmac(aes)"));
6223 + break;
6224 +
6225 + case 154:
6226 +- ret += tcrypt_test("cmac(des3_ede)");
6227 ++ ret = min(ret, tcrypt_test("cmac(des3_ede)"));
6228 + break;
6229 +
6230 + case 155:
6231 +- ret += tcrypt_test("authenc(hmac(sha1),cbc(aes))");
6232 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha1),cbc(aes))"));
6233 + break;
6234 +
6235 + case 156:
6236 +- ret += tcrypt_test("authenc(hmac(md5),ecb(cipher_null))");
6237 ++ ret = min(ret, tcrypt_test("authenc(hmac(md5),ecb(cipher_null))"));
6238 + break;
6239 +
6240 + case 157:
6241 +- ret += tcrypt_test("authenc(hmac(sha1),ecb(cipher_null))");
6242 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha1),ecb(cipher_null))"));
6243 + break;
6244 +
6245 + case 158:
6246 +- ret += tcrypt_test("cbcmac(sm4)");
6247 ++ ret = min(ret, tcrypt_test("cbcmac(sm4)"));
6248 + break;
6249 +
6250 + case 159:
6251 +- ret += tcrypt_test("cmac(sm4)");
6252 ++ ret = min(ret, tcrypt_test("cmac(sm4)"));
6253 + break;
6254 +
6255 + case 181:
6256 +- ret += tcrypt_test("authenc(hmac(sha1),cbc(des))");
6257 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha1),cbc(des))"));
6258 + break;
6259 + case 182:
6260 +- ret += tcrypt_test("authenc(hmac(sha1),cbc(des3_ede))");
6261 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha1),cbc(des3_ede))"));
6262 + break;
6263 + case 183:
6264 +- ret += tcrypt_test("authenc(hmac(sha224),cbc(des))");
6265 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha224),cbc(des))"));
6266 + break;
6267 + case 184:
6268 +- ret += tcrypt_test("authenc(hmac(sha224),cbc(des3_ede))");
6269 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha224),cbc(des3_ede))"));
6270 + break;
6271 + case 185:
6272 +- ret += tcrypt_test("authenc(hmac(sha256),cbc(des))");
6273 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha256),cbc(des))"));
6274 + break;
6275 + case 186:
6276 +- ret += tcrypt_test("authenc(hmac(sha256),cbc(des3_ede))");
6277 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha256),cbc(des3_ede))"));
6278 + break;
6279 + case 187:
6280 +- ret += tcrypt_test("authenc(hmac(sha384),cbc(des))");
6281 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha384),cbc(des))"));
6282 + break;
6283 + case 188:
6284 +- ret += tcrypt_test("authenc(hmac(sha384),cbc(des3_ede))");
6285 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha384),cbc(des3_ede))"));
6286 + break;
6287 + case 189:
6288 +- ret += tcrypt_test("authenc(hmac(sha512),cbc(des))");
6289 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha512),cbc(des))"));
6290 + break;
6291 + case 190:
6292 +- ret += tcrypt_test("authenc(hmac(sha512),cbc(des3_ede))");
6293 ++ ret = min(ret, tcrypt_test("authenc(hmac(sha512),cbc(des3_ede))"));
6294 + break;
6295 + case 191:
6296 +- ret += tcrypt_test("ecb(sm4)");
6297 +- ret += tcrypt_test("cbc(sm4)");
6298 +- ret += tcrypt_test("cfb(sm4)");
6299 +- ret += tcrypt_test("ctr(sm4)");
6300 ++ ret = min(ret, tcrypt_test("ecb(sm4)"));
6301 ++ ret = min(ret, tcrypt_test("cbc(sm4)"));
6302 ++ ret = min(ret, tcrypt_test("cfb(sm4)"));
6303 ++ ret = min(ret, tcrypt_test("ctr(sm4)"));
6304 + break;
6305 + case 192:
6306 +- ret += tcrypt_test("ecb(aria)");
6307 +- ret += tcrypt_test("cbc(aria)");
6308 +- ret += tcrypt_test("cfb(aria)");
6309 +- ret += tcrypt_test("ctr(aria)");
6310 ++ ret = min(ret, tcrypt_test("ecb(aria)"));
6311 ++ ret = min(ret, tcrypt_test("cbc(aria)"));
6312 ++ ret = min(ret, tcrypt_test("cfb(aria)"));
6313 ++ ret = min(ret, tcrypt_test("ctr(aria)"));
6314 + break;
6315 + case 200:
6316 + test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
6317 +diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
6318 +index ae2e768830bfc..9332bc688713c 100644
6319 +--- a/drivers/acpi/acpica/dsmethod.c
6320 ++++ b/drivers/acpi/acpica/dsmethod.c
6321 +@@ -517,7 +517,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
6322 + info = ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_evaluate_info));
6323 + if (!info) {
6324 + status = AE_NO_MEMORY;
6325 +- goto cleanup;
6326 ++ goto pop_walk_state;
6327 + }
6328 +
6329 + info->parameters = &this_walk_state->operands[0];
6330 +@@ -529,7 +529,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
6331 +
6332 + ACPI_FREE(info);
6333 + if (ACPI_FAILURE(status)) {
6334 +- goto cleanup;
6335 ++ goto pop_walk_state;
6336 + }
6337 +
6338 + next_walk_state->method_nesting_depth =
6339 +@@ -575,6 +575,12 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
6340 +
6341 + return_ACPI_STATUS(status);
6342 +
6343 ++pop_walk_state:
6344 ++
6345 ++ /* On error, pop the walk state to be deleted from thread */
6346 ++
6347 ++ acpi_ds_pop_walk_state(thread);
6348 ++
6349 + cleanup:
6350 +
6351 + /* On error, we must terminate the method properly */
6352 +diff --git a/drivers/acpi/acpica/utcopy.c b/drivers/acpi/acpica/utcopy.c
6353 +index 400b9e15a709c..63c17f420fb86 100644
6354 +--- a/drivers/acpi/acpica/utcopy.c
6355 ++++ b/drivers/acpi/acpica/utcopy.c
6356 +@@ -916,13 +916,6 @@ acpi_ut_copy_ipackage_to_ipackage(union acpi_operand_object *source_obj,
6357 + status = acpi_ut_walk_package_tree(source_obj, dest_obj,
6358 + acpi_ut_copy_ielement_to_ielement,
6359 + walk_state);
6360 +- if (ACPI_FAILURE(status)) {
6361 +-
6362 +- /* On failure, delete the destination package object */
6363 +-
6364 +- acpi_ut_remove_reference(dest_obj);
6365 +- }
6366 +-
6367 + return_ACPI_STATUS(status);
6368 + }
6369 +
6370 +diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
6371 +index 9b42628cf21b3..9751b84c1b221 100644
6372 +--- a/drivers/acpi/ec.c
6373 ++++ b/drivers/acpi/ec.c
6374 +@@ -1875,6 +1875,16 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
6375 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Gaming Laptop 15-cx0xxx"),
6376 + },
6377 + },
6378 ++ {
6379 ++ /*
6380 ++ * HP Pavilion Gaming Laptop 15-cx0041ur
6381 ++ */
6382 ++ .callback = ec_honor_dsdt_gpe,
6383 ++ .matches = {
6384 ++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
6385 ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP 15-cx0041ur"),
6386 ++ },
6387 ++ },
6388 + {
6389 + /*
6390 + * Samsung hardware
6391 +diff --git a/drivers/acpi/irq.c b/drivers/acpi/irq.c
6392 +index 1cc4647f78b86..c2c786eb95abc 100644
6393 +--- a/drivers/acpi/irq.c
6394 ++++ b/drivers/acpi/irq.c
6395 +@@ -94,6 +94,7 @@ EXPORT_SYMBOL_GPL(acpi_unregister_gsi);
6396 + /**
6397 + * acpi_get_irq_source_fwhandle() - Retrieve fwhandle from IRQ resource source.
6398 + * @source: acpi_resource_source to use for the lookup.
6399 ++ * @gsi: GSI IRQ number
6400 + *
6401 + * Description:
6402 + * Retrieve the fwhandle of the device referenced by the given IRQ resource
6403 +@@ -297,8 +298,8 @@ EXPORT_SYMBOL_GPL(acpi_irq_get);
6404 + /**
6405 + * acpi_set_irq_model - Setup the GSI irqdomain information
6406 + * @model: the value assigned to acpi_irq_model
6407 +- * @fwnode: the irq_domain identifier for mapping and looking up
6408 +- * GSI interrupts
6409 ++ * @fn: a dispatcher function that will return the domain fwnode
6410 ++ * for a given GSI
6411 + */
6412 + void __init acpi_set_irq_model(enum acpi_irq_model_id model,
6413 + struct fwnode_handle *(*fn)(u32))
6414 +diff --git a/drivers/acpi/pfr_telemetry.c b/drivers/acpi/pfr_telemetry.c
6415 +index 9abf350bd7a5a..27fb6cdad75f9 100644
6416 +--- a/drivers/acpi/pfr_telemetry.c
6417 ++++ b/drivers/acpi/pfr_telemetry.c
6418 +@@ -144,7 +144,7 @@ static int get_pfrt_log_data_info(struct pfrt_log_data_info *data_info,
6419 + ret = 0;
6420 +
6421 + free_acpi_buffer:
6422 +- kfree(out_obj);
6423 ++ ACPI_FREE(out_obj);
6424 +
6425 + return ret;
6426 + }
6427 +@@ -180,7 +180,7 @@ static int set_pfrt_log_level(int level, struct pfrt_log_device *pfrt_log_dev)
6428 + ret = -EBUSY;
6429 + }
6430 +
6431 +- kfree(out_obj);
6432 ++ ACPI_FREE(out_obj);
6433 +
6434 + return ret;
6435 + }
6436 +@@ -218,7 +218,7 @@ static int get_pfrt_log_level(struct pfrt_log_device *pfrt_log_dev)
6437 + ret = obj->integer.value;
6438 +
6439 + free_acpi_buffer:
6440 +- kfree(out_obj);
6441 ++ ACPI_FREE(out_obj);
6442 +
6443 + return ret;
6444 + }
6445 +diff --git a/drivers/acpi/pfr_update.c b/drivers/acpi/pfr_update.c
6446 +index 6bb0b778b5da5..9d2bdc13253a5 100644
6447 +--- a/drivers/acpi/pfr_update.c
6448 ++++ b/drivers/acpi/pfr_update.c
6449 +@@ -178,7 +178,7 @@ static int query_capability(struct pfru_update_cap_info *cap_hdr,
6450 + ret = 0;
6451 +
6452 + free_acpi_buffer:
6453 +- kfree(out_obj);
6454 ++ ACPI_FREE(out_obj);
6455 +
6456 + return ret;
6457 + }
6458 +@@ -224,7 +224,7 @@ static int query_buffer(struct pfru_com_buf_info *info,
6459 + ret = 0;
6460 +
6461 + free_acpi_buffer:
6462 +- kfree(out_obj);
6463 ++ ACPI_FREE(out_obj);
6464 +
6465 + return ret;
6466 + }
6467 +@@ -385,7 +385,7 @@ static int start_update(int action, struct pfru_device *pfru_dev)
6468 + ret = 0;
6469 +
6470 + free_acpi_buffer:
6471 +- kfree(out_obj);
6472 ++ ACPI_FREE(out_obj);
6473 +
6474 + return ret;
6475 + }
6476 +diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
6477 +index acfabfe07c4fa..fc5b5b2c9e819 100644
6478 +--- a/drivers/acpi/processor_idle.c
6479 ++++ b/drivers/acpi/processor_idle.c
6480 +@@ -1134,6 +1134,9 @@ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
6481 + status = acpi_get_parent(handle, &pr_ahandle);
6482 + while (ACPI_SUCCESS(status)) {
6483 + d = acpi_fetch_acpi_dev(pr_ahandle);
6484 ++ if (!d)
6485 ++ break;
6486 ++
6487 + handle = pr_ahandle;
6488 +
6489 + if (strcmp(acpi_device_hid(d), ACPI_PROCESSOR_CONTAINER_HID))
6490 +diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
6491 +index b2a6162876387..ffa19d418847f 100644
6492 +--- a/drivers/acpi/video_detect.c
6493 ++++ b/drivers/acpi/video_detect.c
6494 +@@ -132,6 +132,10 @@ static int video_detect_force_none(const struct dmi_system_id *d)
6495 + }
6496 +
6497 + static const struct dmi_system_id video_detect_dmi_table[] = {
6498 ++ /*
6499 ++ * Models which should use the vendor backlight interface,
6500 ++ * because of broken ACPI video backlight control.
6501 ++ */
6502 + {
6503 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1128309 */
6504 + .callback = video_detect_force_vendor,
6505 +@@ -197,14 +201,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
6506 + DMI_MATCH(DMI_PRODUCT_NAME, "1015CX"),
6507 + },
6508 + },
6509 +- {
6510 +- .callback = video_detect_force_vendor,
6511 +- /* GIGABYTE GB-BXBT-2807 */
6512 +- .matches = {
6513 +- DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
6514 +- DMI_MATCH(DMI_PRODUCT_NAME, "GB-BXBT-2807"),
6515 +- },
6516 +- },
6517 + {
6518 + .callback = video_detect_force_vendor,
6519 + /* Samsung N150/N210/N220 */
6520 +@@ -234,18 +230,23 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
6521 + },
6522 + {
6523 + .callback = video_detect_force_vendor,
6524 +- /* Sony VPCEH3U1E */
6525 ++ /* Xiaomi Mi Pad 2 */
6526 + .matches = {
6527 +- DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
6528 +- DMI_MATCH(DMI_PRODUCT_NAME, "VPCEH3U1E"),
6529 ++ DMI_MATCH(DMI_SYS_VENDOR, "Xiaomi Inc"),
6530 ++ DMI_MATCH(DMI_PRODUCT_NAME, "Mipad2"),
6531 + },
6532 + },
6533 ++
6534 ++ /*
6535 ++ * Models which should use the vendor backlight interface,
6536 ++ * because of broken native backlight control.
6537 ++ */
6538 + {
6539 + .callback = video_detect_force_vendor,
6540 +- /* Xiaomi Mi Pad 2 */
6541 ++ /* Sony Vaio PCG-FRV35 */
6542 + .matches = {
6543 +- DMI_MATCH(DMI_SYS_VENDOR, "Xiaomi Inc"),
6544 +- DMI_MATCH(DMI_PRODUCT_NAME, "Mipad2"),
6545 ++ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
6546 ++ DMI_MATCH(DMI_PRODUCT_NAME, "PCG-FRV35"),
6547 + },
6548 + },
6549 +
6550 +@@ -609,6 +610,23 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
6551 + DMI_MATCH(DMI_BOARD_NAME, "N250P"),
6552 + },
6553 + },
6554 ++ {
6555 ++ /* https://bugzilla.kernel.org/show_bug.cgi?id=202401 */
6556 ++ .callback = video_detect_force_native,
6557 ++ /* Sony Vaio VPCEH3U1E */
6558 ++ .matches = {
6559 ++ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
6560 ++ DMI_MATCH(DMI_PRODUCT_NAME, "VPCEH3U1E"),
6561 ++ },
6562 ++ },
6563 ++ {
6564 ++ .callback = video_detect_force_native,
6565 ++ /* Sony Vaio VPCY11S1E */
6566 ++ .matches = {
6567 ++ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
6568 ++ DMI_MATCH(DMI_PRODUCT_NAME, "VPCY11S1E"),
6569 ++ },
6570 ++ },
6571 +
6572 + /*
6573 + * These Toshibas have a broken acpi-video interface for brightness
6574 +@@ -671,6 +689,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
6575 + DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 9020M"),
6576 + },
6577 + },
6578 ++ {
6579 ++ .callback = video_detect_force_none,
6580 ++ /* GIGABYTE GB-BXBT-2807 */
6581 ++ .matches = {
6582 ++ DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
6583 ++ DMI_MATCH(DMI_PRODUCT_NAME, "GB-BXBT-2807"),
6584 ++ },
6585 ++ },
6586 + {
6587 + .callback = video_detect_force_none,
6588 + /* MSI MS-7721 */
6589 +diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
6590 +index d7d3f1669d4c0..4e816bb402f68 100644
6591 +--- a/drivers/acpi/x86/utils.c
6592 ++++ b/drivers/acpi/x86/utils.c
6593 +@@ -308,7 +308,7 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
6594 + ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
6595 + },
6596 + {
6597 +- /* Lenovo Yoga Tablet 1050F/L */
6598 ++ /* Lenovo Yoga Tablet 2 1050F/L */
6599 + .matches = {
6600 + DMI_MATCH(DMI_SYS_VENDOR, "Intel Corp."),
6601 + DMI_MATCH(DMI_PRODUCT_NAME, "VALLEYVIEW C0 PLATFORM"),
6602 +@@ -319,6 +319,27 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
6603 + .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
6604 + ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
6605 + },
6606 ++ {
6607 ++ /* Lenovo Yoga Tab 3 Pro X90F */
6608 ++ .matches = {
6609 ++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
6610 ++ DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
6611 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
6612 ++ },
6613 ++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
6614 ++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
6615 ++ },
6616 ++ {
6617 ++ /* Medion Lifetab S10346 */
6618 ++ .matches = {
6619 ++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
6620 ++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
6621 ++ /* Way too generic, also match on BIOS data */
6622 ++ DMI_MATCH(DMI_BIOS_DATE, "10/22/2015"),
6623 ++ },
6624 ++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
6625 ++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
6626 ++ },
6627 + {
6628 + /* Nextbook Ares 8 */
6629 + .matches = {
6630 +@@ -348,6 +369,7 @@ static const struct acpi_device_id i2c_acpi_known_good_ids[] = {
6631 + { "10EC5640", 0 }, /* RealTek ALC5640 audio codec */
6632 + { "INT33F4", 0 }, /* X-Powers AXP288 PMIC */
6633 + { "INT33FD", 0 }, /* Intel Crystal Cove PMIC */
6634 ++ { "INT34D3", 0 }, /* Intel Whiskey Cove PMIC */
6635 + { "NPCE69A", 0 }, /* Asus Transformer keyboard dock */
6636 + {}
6637 + };
6638 +diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
6639 +index b6806d41a8c50..fd4dccc253896 100644
6640 +--- a/drivers/ata/libata-sata.c
6641 ++++ b/drivers/ata/libata-sata.c
6642 +@@ -1392,7 +1392,8 @@ static int ata_eh_read_log_10h(struct ata_device *dev,
6643 + tf->hob_lbah = buf[10];
6644 + tf->nsect = buf[12];
6645 + tf->hob_nsect = buf[13];
6646 +- if (dev->class == ATA_DEV_ZAC && ata_id_has_ncq_autosense(dev->id))
6647 ++ if (dev->class == ATA_DEV_ZAC && ata_id_has_ncq_autosense(dev->id) &&
6648 ++ (tf->status & ATA_SENSE))
6649 + tf->auxiliary = buf[14] << 16 | buf[15] << 8 | buf[16];
6650 +
6651 + return 0;
6652 +@@ -1456,8 +1457,12 @@ void ata_eh_analyze_ncq_error(struct ata_link *link)
6653 + memcpy(&qc->result_tf, &tf, sizeof(tf));
6654 + qc->result_tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_LBA | ATA_TFLAG_LBA48;
6655 + qc->err_mask |= AC_ERR_DEV | AC_ERR_NCQ;
6656 +- if (dev->class == ATA_DEV_ZAC &&
6657 +- ((qc->result_tf.status & ATA_SENSE) || qc->result_tf.auxiliary)) {
6658 ++
6659 ++ /*
6660 ++ * If the device supports NCQ autosense, ata_eh_read_log_10h() will have
6661 ++ * stored the sense data in qc->result_tf.auxiliary.
6662 ++ */
6663 ++ if (qc->result_tf.auxiliary) {
6664 + char sense_key, asc, ascq;
6665 +
6666 + sense_key = (qc->result_tf.auxiliary >> 16) & 0xff;
6667 +diff --git a/drivers/base/class.c b/drivers/base/class.c
6668 +index 64f7b9a0970f7..8ceafb7d0203b 100644
6669 +--- a/drivers/base/class.c
6670 ++++ b/drivers/base/class.c
6671 +@@ -192,6 +192,11 @@ int __class_register(struct class *cls, struct lock_class_key *key)
6672 + }
6673 + error = class_add_groups(class_get(cls), cls->class_groups);
6674 + class_put(cls);
6675 ++ if (error) {
6676 ++ kobject_del(&cp->subsys.kobj);
6677 ++ kfree_const(cp->subsys.kobj.name);
6678 ++ kfree(cp);
6679 ++ }
6680 + return error;
6681 + }
6682 + EXPORT_SYMBOL_GPL(__class_register);
6683 +diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
6684 +index b52049098d4ee..14088b5adb556 100644
6685 +--- a/drivers/base/power/runtime.c
6686 ++++ b/drivers/base/power/runtime.c
6687 +@@ -484,7 +484,17 @@ static int rpm_idle(struct device *dev, int rpmflags)
6688 +
6689 + dev->power.idle_notification = true;
6690 +
6691 +- retval = __rpm_callback(callback, dev);
6692 ++ if (dev->power.irq_safe)
6693 ++ spin_unlock(&dev->power.lock);
6694 ++ else
6695 ++ spin_unlock_irq(&dev->power.lock);
6696 ++
6697 ++ retval = callback(dev);
6698 ++
6699 ++ if (dev->power.irq_safe)
6700 ++ spin_lock(&dev->power.lock);
6701 ++ else
6702 ++ spin_lock_irq(&dev->power.lock);
6703 +
6704 + dev->power.idle_notification = false;
6705 + wake_up_all(&dev->power.wait_queue);
6706 +diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
6707 +index 4ef9488d05cde..3de89795f5843 100644
6708 +--- a/drivers/base/regmap/regmap-irq.c
6709 ++++ b/drivers/base/regmap/regmap-irq.c
6710 +@@ -722,6 +722,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
6711 + int i;
6712 + int ret = -ENOMEM;
6713 + int num_type_reg;
6714 ++ int num_regs;
6715 + u32 reg;
6716 +
6717 + if (chip->num_regs <= 0)
6718 +@@ -796,14 +797,20 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
6719 + goto err_alloc;
6720 + }
6721 +
6722 +- num_type_reg = chip->type_in_mask ? chip->num_regs : chip->num_type_reg;
6723 +- if (num_type_reg) {
6724 +- d->type_buf_def = kcalloc(num_type_reg,
6725 ++ /*
6726 ++ * Use num_config_regs if defined, otherwise fall back to num_type_reg
6727 ++ * to maintain backward compatibility.
6728 ++ */
6729 ++ num_type_reg = chip->num_config_regs ? chip->num_config_regs
6730 ++ : chip->num_type_reg;
6731 ++ num_regs = chip->type_in_mask ? chip->num_regs : num_type_reg;
6732 ++ if (num_regs) {
6733 ++ d->type_buf_def = kcalloc(num_regs,
6734 + sizeof(*d->type_buf_def), GFP_KERNEL);
6735 + if (!d->type_buf_def)
6736 + goto err_alloc;
6737 +
6738 +- d->type_buf = kcalloc(num_type_reg, sizeof(*d->type_buf),
6739 ++ d->type_buf = kcalloc(num_regs, sizeof(*d->type_buf),
6740 + GFP_KERNEL);
6741 + if (!d->type_buf)
6742 + goto err_alloc;
6743 +diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
6744 +index 8532b839a3435..6772402326842 100644
6745 +--- a/drivers/block/drbd/drbd_main.c
6746 ++++ b/drivers/block/drbd/drbd_main.c
6747 +@@ -2217,7 +2217,8 @@ void drbd_destroy_device(struct kref *kref)
6748 + kref_put(&peer_device->connection->kref, drbd_destroy_connection);
6749 + kfree(peer_device);
6750 + }
6751 +- memset(device, 0xfd, sizeof(*device));
6752 ++ if (device->submit.wq)
6753 ++ destroy_workqueue(device->submit.wq);
6754 + kfree(device);
6755 + kref_put(&resource->kref, drbd_destroy_resource);
6756 + }
6757 +@@ -2309,7 +2310,6 @@ void drbd_destroy_resource(struct kref *kref)
6758 + idr_destroy(&resource->devices);
6759 + free_cpumask_var(resource->cpu_mask);
6760 + kfree(resource->name);
6761 +- memset(resource, 0xf2, sizeof(*resource));
6762 + kfree(resource);
6763 + }
6764 +
6765 +@@ -2650,7 +2650,6 @@ void drbd_destroy_connection(struct kref *kref)
6766 + drbd_free_socket(&connection->data);
6767 + kfree(connection->int_dig_in);
6768 + kfree(connection->int_dig_vv);
6769 +- memset(connection, 0xfc, sizeof(*connection));
6770 + kfree(connection);
6771 + kref_put(&resource->kref, drbd_destroy_resource);
6772 + }
6773 +@@ -2774,7 +2773,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
6774 +
6775 + err = add_disk(disk);
6776 + if (err)
6777 +- goto out_idr_remove_from_resource;
6778 ++ goto out_destroy_workqueue;
6779 +
6780 + /* inherit the connection state */
6781 + device->state.conn = first_connection(resource)->cstate;
6782 +@@ -2788,6 +2787,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
6783 + drbd_debugfs_device_add(device);
6784 + return NO_ERROR;
6785 +
6786 ++out_destroy_workqueue:
6787 ++ destroy_workqueue(device->submit.wq);
6788 + out_idr_remove_from_resource:
6789 + for_each_connection_safe(connection, n, resource) {
6790 + peer_device = idr_remove(&connection->peer_devices, vnr);
6791 +diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
6792 +index 864c98e748757..249eba7d21c28 100644
6793 +--- a/drivers/block/drbd/drbd_nl.c
6794 ++++ b/drivers/block/drbd/drbd_nl.c
6795 +@@ -1210,6 +1210,7 @@ static void decide_on_discard_support(struct drbd_device *device,
6796 + struct drbd_connection *connection =
6797 + first_peer_device(device)->connection;
6798 + struct request_queue *q = device->rq_queue;
6799 ++ unsigned int max_discard_sectors;
6800 +
6801 + if (bdev && !bdev_max_discard_sectors(bdev->backing_bdev))
6802 + goto not_supported;
6803 +@@ -1230,15 +1231,14 @@ static void decide_on_discard_support(struct drbd_device *device,
6804 + * topology on all peers.
6805 + */
6806 + blk_queue_discard_granularity(q, 512);
6807 +- q->limits.max_discard_sectors = drbd_max_discard_sectors(connection);
6808 +- q->limits.max_write_zeroes_sectors =
6809 +- drbd_max_discard_sectors(connection);
6810 ++ max_discard_sectors = drbd_max_discard_sectors(connection);
6811 ++ blk_queue_max_discard_sectors(q, max_discard_sectors);
6812 ++ blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
6813 + return;
6814 +
6815 + not_supported:
6816 + blk_queue_discard_granularity(q, 0);
6817 +- q->limits.max_discard_sectors = 0;
6818 +- q->limits.max_write_zeroes_sectors = 0;
6819 ++ blk_queue_max_discard_sectors(q, 0);
6820 + }
6821 +
6822 + static void fixup_write_zeroes(struct drbd_device *device, struct request_queue *q)
6823 +diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
6824 +index ccad3d7b3ddd9..487840e3564df 100644
6825 +--- a/drivers/block/floppy.c
6826 ++++ b/drivers/block/floppy.c
6827 +@@ -4593,8 +4593,10 @@ static int __init do_floppy_init(void)
6828 + goto out_put_disk;
6829 +
6830 + err = floppy_alloc_disk(drive, 0);
6831 +- if (err)
6832 ++ if (err) {
6833 ++ blk_mq_free_tag_set(&tag_sets[drive]);
6834 + goto out_put_disk;
6835 ++ }
6836 +
6837 + timer_setup(&motor_off_timer[drive], motor_off_callback, 0);
6838 + }
6839 +diff --git a/drivers/block/loop.c b/drivers/block/loop.c
6840 +index ad92192c7d617..d12d3d171ec4c 100644
6841 +--- a/drivers/block/loop.c
6842 ++++ b/drivers/block/loop.c
6843 +@@ -1773,7 +1773,16 @@ static const struct block_device_operations lo_fops = {
6844 + /*
6845 + * And now the modules code and kernel interface.
6846 + */
6847 +-static int max_loop;
6848 ++
6849 ++/*
6850 ++ * If max_loop is specified, create that many devices upfront.
6851 ++ * This also becomes a hard limit. If max_loop is not specified,
6852 ++ * create CONFIG_BLK_DEV_LOOP_MIN_COUNT loop devices at module
6853 ++ * init time. Loop devices can be requested on-demand with the
6854 ++ * /dev/loop-control interface, or be instantiated by accessing
6855 ++ * a 'dead' device node.
6856 ++ */
6857 ++static int max_loop = CONFIG_BLK_DEV_LOOP_MIN_COUNT;
6858 + module_param(max_loop, int, 0444);
6859 + MODULE_PARM_DESC(max_loop, "Maximum number of loop devices");
6860 + module_param(max_part, int, 0444);
6861 +@@ -2181,7 +2190,7 @@ MODULE_ALIAS("devname:loop-control");
6862 +
6863 + static int __init loop_init(void)
6864 + {
6865 +- int i, nr;
6866 ++ int i;
6867 + int err;
6868 +
6869 + part_shift = 0;
6870 +@@ -2209,19 +2218,6 @@ static int __init loop_init(void)
6871 + goto err_out;
6872 + }
6873 +
6874 +- /*
6875 +- * If max_loop is specified, create that many devices upfront.
6876 +- * This also becomes a hard limit. If max_loop is not specified,
6877 +- * create CONFIG_BLK_DEV_LOOP_MIN_COUNT loop devices at module
6878 +- * init time. Loop devices can be requested on-demand with the
6879 +- * /dev/loop-control interface, or be instantiated by accessing
6880 +- * a 'dead' device node.
6881 +- */
6882 +- if (max_loop)
6883 +- nr = max_loop;
6884 +- else
6885 +- nr = CONFIG_BLK_DEV_LOOP_MIN_COUNT;
6886 +-
6887 + err = misc_register(&loop_misc);
6888 + if (err < 0)
6889 + goto err_out;
6890 +@@ -2233,7 +2229,7 @@ static int __init loop_init(void)
6891 + }
6892 +
6893 + /* pre-create number of devices given by config or max_loop */
6894 +- for (i = 0; i < nr; i++)
6895 ++ for (i = 0; i < max_loop; i++)
6896 + loop_add(i);
6897 +
6898 + printk(KERN_INFO "loop: module loaded\n");
6899 +diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
6900 +index a657e9a3e96a5..f6b4b7a1be4cc 100644
6901 +--- a/drivers/bluetooth/btintel.c
6902 ++++ b/drivers/bluetooth/btintel.c
6903 +@@ -2524,7 +2524,7 @@ static int btintel_setup_combined(struct hci_dev *hdev)
6904 + */
6905 + err = btintel_read_version(hdev, &ver);
6906 + if (err)
6907 +- return err;
6908 ++ break;
6909 +
6910 + /* Apply the device specific HCI quirks
6911 + *
6912 +@@ -2566,7 +2566,8 @@ static int btintel_setup_combined(struct hci_dev *hdev)
6913 + default:
6914 + bt_dev_err(hdev, "Unsupported Intel hw variant (%u)",
6915 + INTEL_HW_VARIANT(ver_tlv.cnvi_bt));
6916 +- return -EINVAL;
6917 ++ err = -EINVAL;
6918 ++ break;
6919 + }
6920 +
6921 + exit_error:
6922 +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
6923 +index f05018988a177..6beafd62d7226 100644
6924 +--- a/drivers/bluetooth/btusb.c
6925 ++++ b/drivers/bluetooth/btusb.c
6926 +@@ -802,13 +802,13 @@ static inline void btusb_free_frags(struct btusb_data *data)
6927 +
6928 + spin_lock_irqsave(&data->rxlock, flags);
6929 +
6930 +- kfree_skb(data->evt_skb);
6931 ++ dev_kfree_skb_irq(data->evt_skb);
6932 + data->evt_skb = NULL;
6933 +
6934 +- kfree_skb(data->acl_skb);
6935 ++ dev_kfree_skb_irq(data->acl_skb);
6936 + data->acl_skb = NULL;
6937 +
6938 +- kfree_skb(data->sco_skb);
6939 ++ dev_kfree_skb_irq(data->sco_skb);
6940 + data->sco_skb = NULL;
6941 +
6942 + spin_unlock_irqrestore(&data->rxlock, flags);
6943 +diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
6944 +index d7e0b75db8a60..2b6c0e1922cb3 100644
6945 +--- a/drivers/bluetooth/hci_bcm.c
6946 ++++ b/drivers/bluetooth/hci_bcm.c
6947 +@@ -53,11 +53,13 @@
6948 + * struct bcm_device_data - device specific data
6949 + * @no_early_set_baudrate: Disallow set baudrate before driver setup()
6950 + * @drive_rts_on_open: drive RTS signal on ->open() when platform requires it
6951 ++ * @no_uart_clock_set: UART clock set command for >3Mbps mode is unavailable
6952 + * @max_autobaud_speed: max baudrate supported by device in autobaud mode
6953 + */
6954 + struct bcm_device_data {
6955 + bool no_early_set_baudrate;
6956 + bool drive_rts_on_open;
6957 ++ bool no_uart_clock_set;
6958 + u32 max_autobaud_speed;
6959 + };
6960 +
6961 +@@ -100,6 +102,7 @@ struct bcm_device_data {
6962 + * @is_suspended: whether flow control is currently disabled
6963 + * @no_early_set_baudrate: don't set_baudrate before setup()
6964 + * @drive_rts_on_open: drive RTS signal on ->open() when platform requires it
6965 ++ * @no_uart_clock_set: UART clock set command for >3Mbps mode is unavailable
6966 + * @pcm_int_params: keep the initial PCM configuration
6967 + * @use_autobaud_mode: start Bluetooth device in autobaud mode
6968 + * @max_autobaud_speed: max baudrate supported by device in autobaud mode
6969 +@@ -140,6 +143,7 @@ struct bcm_device {
6970 + #endif
6971 + bool no_early_set_baudrate;
6972 + bool drive_rts_on_open;
6973 ++ bool no_uart_clock_set;
6974 + bool use_autobaud_mode;
6975 + u8 pcm_int_params[5];
6976 + u32 max_autobaud_speed;
6977 +@@ -172,10 +176,11 @@ static inline void host_set_baudrate(struct hci_uart *hu, unsigned int speed)
6978 + static int bcm_set_baudrate(struct hci_uart *hu, unsigned int speed)
6979 + {
6980 + struct hci_dev *hdev = hu->hdev;
6981 ++ struct bcm_data *bcm = hu->priv;
6982 + struct sk_buff *skb;
6983 + struct bcm_update_uart_baud_rate param;
6984 +
6985 +- if (speed > 3000000) {
6986 ++ if (speed > 3000000 && !bcm->dev->no_uart_clock_set) {
6987 + struct bcm_write_uart_clock_setting clock;
6988 +
6989 + clock.type = BCM_UART_CLOCK_48MHZ;
6990 +@@ -1529,6 +1534,7 @@ static int bcm_serdev_probe(struct serdev_device *serdev)
6991 + bcmdev->max_autobaud_speed = data->max_autobaud_speed;
6992 + bcmdev->no_early_set_baudrate = data->no_early_set_baudrate;
6993 + bcmdev->drive_rts_on_open = data->drive_rts_on_open;
6994 ++ bcmdev->no_uart_clock_set = data->no_uart_clock_set;
6995 + }
6996 +
6997 + return hci_uart_register_device(&bcmdev->serdev_hu, &bcm_proto);
6998 +@@ -1550,6 +1556,10 @@ static struct bcm_device_data bcm43438_device_data = {
6999 + .drive_rts_on_open = true,
7000 + };
7001 +
7002 ++static struct bcm_device_data cyw4373a0_device_data = {
7003 ++ .no_uart_clock_set = true,
7004 ++};
7005 ++
7006 + static struct bcm_device_data cyw55572_device_data = {
7007 + .max_autobaud_speed = 921600,
7008 + };
7009 +@@ -1566,6 +1576,7 @@ static const struct of_device_id bcm_bluetooth_of_match[] = {
7010 + { .compatible = "brcm,bcm4349-bt", .data = &bcm43438_device_data },
7011 + { .compatible = "brcm,bcm43540-bt", .data = &bcm4354_device_data },
7012 + { .compatible = "brcm,bcm4335a0" },
7013 ++ { .compatible = "cypress,cyw4373a0-bt", .data = &cyw4373a0_device_data },
7014 + { .compatible = "infineon,cyw55572-bt", .data = &cyw55572_device_data },
7015 + { },
7016 + };
7017 +diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
7018 +index cf4a560958173..8055f63603f45 100644
7019 +--- a/drivers/bluetooth/hci_bcsp.c
7020 ++++ b/drivers/bluetooth/hci_bcsp.c
7021 +@@ -378,7 +378,7 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
7022 + i++;
7023 +
7024 + __skb_unlink(skb, &bcsp->unack);
7025 +- kfree_skb(skb);
7026 ++ dev_kfree_skb_irq(skb);
7027 + }
7028 +
7029 + if (skb_queue_empty(&bcsp->unack))
7030 +diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
7031 +index c5a0409ef84fd..6455bc4fb5bb3 100644
7032 +--- a/drivers/bluetooth/hci_h5.c
7033 ++++ b/drivers/bluetooth/hci_h5.c
7034 +@@ -313,7 +313,7 @@ static void h5_pkt_cull(struct h5 *h5)
7035 + break;
7036 +
7037 + __skb_unlink(skb, &h5->unack);
7038 +- kfree_skb(skb);
7039 ++ dev_kfree_skb_irq(skb);
7040 + }
7041 +
7042 + if (skb_queue_empty(&h5->unack))
7043 +diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c
7044 +index 4eb420a9ed04e..5abc01a2acf72 100644
7045 +--- a/drivers/bluetooth/hci_ll.c
7046 ++++ b/drivers/bluetooth/hci_ll.c
7047 +@@ -345,7 +345,7 @@ static int ll_enqueue(struct hci_uart *hu, struct sk_buff *skb)
7048 + default:
7049 + BT_ERR("illegal hcill state: %ld (losing packet)",
7050 + ll->hcill_state);
7051 +- kfree_skb(skb);
7052 ++ dev_kfree_skb_irq(skb);
7053 + break;
7054 + }
7055 +
7056 +diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
7057 +index 8df11016fd51b..bae9b2a408d95 100644
7058 +--- a/drivers/bluetooth/hci_qca.c
7059 ++++ b/drivers/bluetooth/hci_qca.c
7060 +@@ -912,7 +912,7 @@ static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
7061 + default:
7062 + BT_ERR("Illegal tx state: %d (losing packet)",
7063 + qca->tx_ibs_state);
7064 +- kfree_skb(skb);
7065 ++ dev_kfree_skb_irq(skb);
7066 + break;
7067 + }
7068 +
7069 +diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
7070 +index c22d4184bb612..0555e3838bce1 100644
7071 +--- a/drivers/char/hw_random/amd-rng.c
7072 ++++ b/drivers/char/hw_random/amd-rng.c
7073 +@@ -143,15 +143,19 @@ static int __init amd_rng_mod_init(void)
7074 + found:
7075 + err = pci_read_config_dword(pdev, 0x58, &pmbase);
7076 + if (err)
7077 +- return err;
7078 ++ goto put_dev;
7079 +
7080 + pmbase &= 0x0000FF00;
7081 +- if (pmbase == 0)
7082 +- return -EIO;
7083 ++ if (pmbase == 0) {
7084 ++ err = -EIO;
7085 ++ goto put_dev;
7086 ++ }
7087 +
7088 + priv = kzalloc(sizeof(*priv), GFP_KERNEL);
7089 +- if (!priv)
7090 +- return -ENOMEM;
7091 ++ if (!priv) {
7092 ++ err = -ENOMEM;
7093 ++ goto put_dev;
7094 ++ }
7095 +
7096 + if (!request_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE, DRV_NAME)) {
7097 + dev_err(&pdev->dev, DRV_NAME " region 0x%x already in use!\n",
7098 +@@ -185,6 +189,8 @@ err_iomap:
7099 + release_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE);
7100 + out:
7101 + kfree(priv);
7102 ++put_dev:
7103 ++ pci_dev_put(pdev);
7104 + return err;
7105 + }
7106 +
7107 +@@ -200,6 +206,8 @@ static void __exit amd_rng_mod_exit(void)
7108 +
7109 + release_region(priv->pmbase + PMBASE_OFFSET, PMBASE_SIZE);
7110 +
7111 ++ pci_dev_put(priv->pcidev);
7112 ++
7113 + kfree(priv);
7114 + }
7115 +
7116 +diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
7117 +index 138ce434f86b2..12fbe80918319 100644
7118 +--- a/drivers/char/hw_random/geode-rng.c
7119 ++++ b/drivers/char/hw_random/geode-rng.c
7120 +@@ -51,6 +51,10 @@ static const struct pci_device_id pci_tbl[] = {
7121 + };
7122 + MODULE_DEVICE_TABLE(pci, pci_tbl);
7123 +
7124 ++struct amd_geode_priv {
7125 ++ struct pci_dev *pcidev;
7126 ++ void __iomem *membase;
7127 ++};
7128 +
7129 + static int geode_rng_data_read(struct hwrng *rng, u32 *data)
7130 + {
7131 +@@ -90,6 +94,7 @@ static int __init geode_rng_init(void)
7132 + const struct pci_device_id *ent;
7133 + void __iomem *mem;
7134 + unsigned long rng_base;
7135 ++ struct amd_geode_priv *priv;
7136 +
7137 + for_each_pci_dev(pdev) {
7138 + ent = pci_match_id(pci_tbl, pdev);
7139 +@@ -97,17 +102,26 @@ static int __init geode_rng_init(void)
7140 + goto found;
7141 + }
7142 + /* Device not found. */
7143 +- goto out;
7144 ++ return err;
7145 +
7146 + found:
7147 ++ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
7148 ++ if (!priv) {
7149 ++ err = -ENOMEM;
7150 ++ goto put_dev;
7151 ++ }
7152 ++
7153 + rng_base = pci_resource_start(pdev, 0);
7154 + if (rng_base == 0)
7155 +- goto out;
7156 ++ goto free_priv;
7157 + err = -ENOMEM;
7158 + mem = ioremap(rng_base, 0x58);
7159 + if (!mem)
7160 +- goto out;
7161 +- geode_rng.priv = (unsigned long)mem;
7162 ++ goto free_priv;
7163 ++
7164 ++ geode_rng.priv = (unsigned long)priv;
7165 ++ priv->membase = mem;
7166 ++ priv->pcidev = pdev;
7167 +
7168 + pr_info("AMD Geode RNG detected\n");
7169 + err = hwrng_register(&geode_rng);
7170 +@@ -116,20 +130,26 @@ found:
7171 + err);
7172 + goto err_unmap;
7173 + }
7174 +-out:
7175 + return err;
7176 +
7177 + err_unmap:
7178 + iounmap(mem);
7179 +- goto out;
7180 ++free_priv:
7181 ++ kfree(priv);
7182 ++put_dev:
7183 ++ pci_dev_put(pdev);
7184 ++ return err;
7185 + }
7186 +
7187 + static void __exit geode_rng_exit(void)
7188 + {
7189 +- void __iomem *mem = (void __iomem *)geode_rng.priv;
7190 ++ struct amd_geode_priv *priv;
7191 +
7192 ++ priv = (struct amd_geode_priv *)geode_rng.priv;
7193 + hwrng_unregister(&geode_rng);
7194 +- iounmap(mem);
7195 ++ iounmap(priv->membase);
7196 ++ pci_dev_put(priv->pcidev);
7197 ++ kfree(priv);
7198 + }
7199 +
7200 + module_init(geode_rng_init);
7201 +diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
7202 +index 49a1707693c9f..d5ee52be176d3 100644
7203 +--- a/drivers/char/ipmi/ipmi_msghandler.c
7204 ++++ b/drivers/char/ipmi/ipmi_msghandler.c
7205 +@@ -3704,12 +3704,16 @@ static void deliver_smi_err_response(struct ipmi_smi *intf,
7206 + struct ipmi_smi_msg *msg,
7207 + unsigned char err)
7208 + {
7209 ++ int rv;
7210 + msg->rsp[0] = msg->data[0] | 4;
7211 + msg->rsp[1] = msg->data[1];
7212 + msg->rsp[2] = err;
7213 + msg->rsp_size = 3;
7214 +- /* It's an error, so it will never requeue, no need to check return. */
7215 +- handle_one_recv_msg(intf, msg);
7216 ++
7217 ++ /* This will never requeue, but it may ask us to free the message. */
7218 ++ rv = handle_one_recv_msg(intf, msg);
7219 ++ if (rv == 0)
7220 ++ ipmi_free_smi_msg(msg);
7221 + }
7222 +
7223 + static void cleanup_smi_msgs(struct ipmi_smi *intf)
7224 +diff --git a/drivers/char/ipmi/kcs_bmc_aspeed.c b/drivers/char/ipmi/kcs_bmc_aspeed.c
7225 +index 19c32bf50e0e9..2dea8cd5a09ac 100644
7226 +--- a/drivers/char/ipmi/kcs_bmc_aspeed.c
7227 ++++ b/drivers/char/ipmi/kcs_bmc_aspeed.c
7228 +@@ -406,13 +406,31 @@ static void aspeed_kcs_check_obe(struct timer_list *timer)
7229 + static void aspeed_kcs_irq_mask_update(struct kcs_bmc_device *kcs_bmc, u8 mask, u8 state)
7230 + {
7231 + struct aspeed_kcs_bmc *priv = to_aspeed_kcs_bmc(kcs_bmc);
7232 ++ int rc;
7233 ++ u8 str;
7234 +
7235 + /* We don't have an OBE IRQ, emulate it */
7236 + if (mask & KCS_BMC_EVENT_TYPE_OBE) {
7237 +- if (KCS_BMC_EVENT_TYPE_OBE & state)
7238 +- mod_timer(&priv->obe.timer, jiffies + OBE_POLL_PERIOD);
7239 +- else
7240 ++ if (KCS_BMC_EVENT_TYPE_OBE & state) {
7241 ++ /*
7242 ++ * Given we don't have an OBE IRQ, delay by polling briefly to see if we can
7243 ++ * observe such an event before returning to the caller. This is not
7244 ++ * incorrect because OBF may have already become clear before enabling the
7245 ++ * IRQ if we had one, under which circumstance no event will be propagated
7246 ++ * anyway.
7247 ++ *
7248 ++ * The onus is on the client to perform a race-free check that it hasn't
7249 ++ * missed the event.
7250 ++ */
7251 ++ rc = read_poll_timeout_atomic(aspeed_kcs_inb, str,
7252 ++ !(str & KCS_BMC_STR_OBF), 1, 100, false,
7253 ++ &priv->kcs_bmc, priv->kcs_bmc.ioreg.str);
7254 ++ /* Time for the slow path? */
7255 ++ if (rc == -ETIMEDOUT)
7256 ++ mod_timer(&priv->obe.timer, jiffies + OBE_POLL_PERIOD);
7257 ++ } else {
7258 + del_timer(&priv->obe.timer);
7259 ++ }
7260 + }
7261 +
7262 + if (mask & KCS_BMC_EVENT_TYPE_IBF) {
7263 +diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
7264 +index 18606651d1aa4..65f8f179a27f0 100644
7265 +--- a/drivers/char/tpm/tpm_crb.c
7266 ++++ b/drivers/char/tpm/tpm_crb.c
7267 +@@ -252,7 +252,7 @@ static int __crb_relinquish_locality(struct device *dev,
7268 + iowrite32(CRB_LOC_CTRL_RELINQUISH, &priv->regs_h->loc_ctrl);
7269 + if (!crb_wait_for_reg_32(&priv->regs_h->loc_state, mask, value,
7270 + TPM2_TIMEOUT_C)) {
7271 +- dev_warn(dev, "TPM_LOC_STATE_x.requestAccess timed out\n");
7272 ++ dev_warn(dev, "TPM_LOC_STATE_x.Relinquish timed out\n");
7273 + return -ETIME;
7274 + }
7275 +
7276 +diff --git a/drivers/char/tpm/tpm_ftpm_tee.c b/drivers/char/tpm/tpm_ftpm_tee.c
7277 +index 5c233423c56fa..deff23bb54bf1 100644
7278 +--- a/drivers/char/tpm/tpm_ftpm_tee.c
7279 ++++ b/drivers/char/tpm/tpm_ftpm_tee.c
7280 +@@ -397,7 +397,13 @@ static int __init ftpm_mod_init(void)
7281 + if (rc)
7282 + return rc;
7283 +
7284 +- return driver_register(&ftpm_tee_driver.driver);
7285 ++ rc = driver_register(&ftpm_tee_driver.driver);
7286 ++ if (rc) {
7287 ++ platform_driver_unregister(&ftpm_tee_plat_driver);
7288 ++ return rc;
7289 ++ }
7290 ++
7291 ++ return 0;
7292 + }
7293 +
7294 + static void __exit ftpm_mod_exit(void)
7295 +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
7296 +index 757623bacfd50..3f98e587b3e84 100644
7297 +--- a/drivers/char/tpm/tpm_tis_core.c
7298 ++++ b/drivers/char/tpm/tpm_tis_core.c
7299 +@@ -682,15 +682,19 @@ static bool tpm_tis_req_canceled(struct tpm_chip *chip, u8 status)
7300 + {
7301 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
7302 +
7303 +- switch (priv->manufacturer_id) {
7304 +- case TPM_VID_WINBOND:
7305 +- return ((status == TPM_STS_VALID) ||
7306 +- (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY)));
7307 +- case TPM_VID_STM:
7308 +- return (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY));
7309 +- default:
7310 +- return (status == TPM_STS_COMMAND_READY);
7311 ++ if (!test_bit(TPM_TIS_DEFAULT_CANCELLATION, &priv->flags)) {
7312 ++ switch (priv->manufacturer_id) {
7313 ++ case TPM_VID_WINBOND:
7314 ++ return ((status == TPM_STS_VALID) ||
7315 ++ (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY)));
7316 ++ case TPM_VID_STM:
7317 ++ return (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY));
7318 ++ default:
7319 ++ break;
7320 ++ }
7321 + }
7322 ++
7323 ++ return status == TPM_STS_COMMAND_READY;
7324 + }
7325 +
7326 + static irqreturn_t tis_int_handler(int dummy, void *dev_id)
7327 +diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
7328 +index 66a5a13cd1df2..b68479e0de10f 100644
7329 +--- a/drivers/char/tpm/tpm_tis_core.h
7330 ++++ b/drivers/char/tpm/tpm_tis_core.h
7331 +@@ -86,6 +86,7 @@ enum tis_defaults {
7332 + enum tpm_tis_flags {
7333 + TPM_TIS_ITPM_WORKAROUND = BIT(0),
7334 + TPM_TIS_INVALID_STATUS = BIT(1),
7335 ++ TPM_TIS_DEFAULT_CANCELLATION = BIT(2),
7336 + };
7337 +
7338 + struct tpm_tis_data {
7339 +diff --git a/drivers/char/tpm/tpm_tis_i2c.c b/drivers/char/tpm/tpm_tis_i2c.c
7340 +index 0692510dfcab9..f3a7251c8e38f 100644
7341 +--- a/drivers/char/tpm/tpm_tis_i2c.c
7342 ++++ b/drivers/char/tpm/tpm_tis_i2c.c
7343 +@@ -49,7 +49,7 @@
7344 +
7345 + /* Masks with bits that must be read zero */
7346 + #define TPM_ACCESS_READ_ZERO 0x48
7347 +-#define TPM_INT_ENABLE_ZERO 0x7FFFFF6
7348 ++#define TPM_INT_ENABLE_ZERO 0x7FFFFF60
7349 + #define TPM_STS_READ_ZERO 0x23
7350 + #define TPM_INTF_CAPABILITY_ZERO 0x0FFFF000
7351 + #define TPM_I2C_INTERFACE_CAPABILITY_ZERO 0x80000000
7352 +@@ -329,6 +329,7 @@ static int tpm_tis_i2c_probe(struct i2c_client *dev,
7353 + if (!phy->io_buf)
7354 + return -ENOMEM;
7355 +
7356 ++ set_bit(TPM_TIS_DEFAULT_CANCELLATION, &phy->priv.flags);
7357 + phy->i2c_client = dev;
7358 +
7359 + /* must precede all communication with the tpm */
7360 +diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
7361 +index d37c45b676abe..2afea905f7f3c 100644
7362 +--- a/drivers/clk/imx/clk-imx8mn.c
7363 ++++ b/drivers/clk/imx/clk-imx8mn.c
7364 +@@ -27,10 +27,10 @@ static u32 share_count_nand;
7365 + static const char * const pll_ref_sels[] = { "osc_24m", "dummy", "dummy", "dummy", };
7366 + static const char * const audio_pll1_bypass_sels[] = {"audio_pll1", "audio_pll1_ref_sel", };
7367 + static const char * const audio_pll2_bypass_sels[] = {"audio_pll2", "audio_pll2_ref_sel", };
7368 +-static const char * const video_pll1_bypass_sels[] = {"video_pll1", "video_pll1_ref_sel", };
7369 ++static const char * const video_pll_bypass_sels[] = {"video_pll", "video_pll_ref_sel", };
7370 + static const char * const dram_pll_bypass_sels[] = {"dram_pll", "dram_pll_ref_sel", };
7371 + static const char * const gpu_pll_bypass_sels[] = {"gpu_pll", "gpu_pll_ref_sel", };
7372 +-static const char * const vpu_pll_bypass_sels[] = {"vpu_pll", "vpu_pll_ref_sel", };
7373 ++static const char * const m7_alt_pll_bypass_sels[] = {"m7_alt_pll", "m7_alt_pll_ref_sel", };
7374 + static const char * const arm_pll_bypass_sels[] = {"arm_pll", "arm_pll_ref_sel", };
7375 + static const char * const sys_pll3_bypass_sels[] = {"sys_pll3", "sys_pll3_ref_sel", };
7376 +
7377 +@@ -40,24 +40,24 @@ static const char * const imx8mn_a53_sels[] = {"osc_24m", "arm_pll_out", "sys_pl
7378 +
7379 + static const char * const imx8mn_a53_core_sels[] = {"arm_a53_div", "arm_pll_out", };
7380 +
7381 +-static const char * const imx8mn_m7_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll2_250m", "vpu_pll_out",
7382 +- "sys_pll1_800m", "audio_pll1_out", "video_pll1_out", "sys_pll3_out", };
7383 ++static const char * const imx8mn_m7_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll2_250m", "m7_alt_pll_out",
7384 ++ "sys_pll1_800m", "audio_pll1_out", "video_pll_out", "sys_pll3_out", };
7385 +
7386 + static const char * const imx8mn_gpu_core_sels[] = {"osc_24m", "gpu_pll_out", "sys_pll1_800m",
7387 + "sys_pll3_out", "sys_pll2_1000m", "audio_pll1_out",
7388 +- "video_pll1_out", "audio_pll2_out", };
7389 ++ "video_pll_out", "audio_pll2_out", };
7390 +
7391 + static const char * const imx8mn_gpu_shader_sels[] = {"osc_24m", "gpu_pll_out", "sys_pll1_800m",
7392 + "sys_pll3_out", "sys_pll2_1000m", "audio_pll1_out",
7393 +- "video_pll1_out", "audio_pll2_out", };
7394 ++ "video_pll_out", "audio_pll2_out", };
7395 +
7396 + static const char * const imx8mn_main_axi_sels[] = {"osc_24m", "sys_pll2_333m", "sys_pll1_800m",
7397 + "sys_pll2_250m", "sys_pll2_1000m", "audio_pll1_out",
7398 +- "video_pll1_out", "sys_pll1_100m",};
7399 ++ "video_pll_out", "sys_pll1_100m",};
7400 +
7401 + static const char * const imx8mn_enet_axi_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll1_800m",
7402 + "sys_pll2_250m", "sys_pll2_200m", "audio_pll1_out",
7403 +- "video_pll1_out", "sys_pll3_out", };
7404 ++ "video_pll_out", "sys_pll3_out", };
7405 +
7406 + static const char * const imx8mn_nand_usdhc_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll1_800m",
7407 + "sys_pll2_200m", "sys_pll1_133m", "sys_pll3_out",
7408 +@@ -77,23 +77,23 @@ static const char * const imx8mn_usb_bus_sels[] = {"osc_24m", "sys_pll2_500m", "
7409 +
7410 + static const char * const imx8mn_gpu_axi_sels[] = {"osc_24m", "sys_pll1_800m", "gpu_pll_out",
7411 + "sys_pll3_out", "sys_pll2_1000m", "audio_pll1_out",
7412 +- "video_pll1_out", "audio_pll2_out", };
7413 ++ "video_pll_out", "audio_pll2_out", };
7414 +
7415 + static const char * const imx8mn_gpu_ahb_sels[] = {"osc_24m", "sys_pll1_800m", "gpu_pll_out",
7416 + "sys_pll3_out", "sys_pll2_1000m", "audio_pll1_out",
7417 +- "video_pll1_out", "audio_pll2_out", };
7418 ++ "video_pll_out", "audio_pll2_out", };
7419 +
7420 + static const char * const imx8mn_noc_sels[] = {"osc_24m", "sys_pll1_800m", "sys_pll3_out",
7421 + "sys_pll2_1000m", "sys_pll2_500m", "audio_pll1_out",
7422 +- "video_pll1_out", "audio_pll2_out", };
7423 ++ "video_pll_out", "audio_pll2_out", };
7424 +
7425 + static const char * const imx8mn_ahb_sels[] = {"osc_24m", "sys_pll1_133m", "sys_pll1_800m",
7426 + "sys_pll1_400m", "sys_pll2_125m", "sys_pll3_out",
7427 +- "audio_pll1_out", "video_pll1_out", };
7428 ++ "audio_pll1_out", "video_pll_out", };
7429 +
7430 + static const char * const imx8mn_audio_ahb_sels[] = {"osc_24m", "sys_pll2_500m", "sys_pll1_800m",
7431 + "sys_pll2_1000m", "sys_pll2_166m", "sys_pll3_out",
7432 +- "audio_pll1_out", "video_pll1_out", };
7433 ++ "audio_pll1_out", "video_pll_out", };
7434 +
7435 + static const char * const imx8mn_dram_alt_sels[] = {"osc_24m", "sys_pll1_800m", "sys_pll1_100m",
7436 + "sys_pll2_500m", "sys_pll2_1000m", "sys_pll3_out",
7437 +@@ -103,49 +103,49 @@ static const char * const imx8mn_dram_apb_sels[] = {"osc_24m", "sys_pll2_200m",
7438 + "sys_pll1_160m", "sys_pll1_800m", "sys_pll3_out",
7439 + "sys_pll2_250m", "audio_pll2_out", };
7440 +
7441 +-static const char * const imx8mn_disp_pixel_sels[] = {"osc_24m", "video_pll1_out", "audio_pll2_out",
7442 ++static const char * const imx8mn_disp_pixel_sels[] = {"osc_24m", "video_pll_out", "audio_pll2_out",
7443 + "audio_pll1_out", "sys_pll1_800m", "sys_pll2_1000m",
7444 + "sys_pll3_out", "clk_ext4", };
7445 +
7446 + static const char * const imx8mn_sai2_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
7447 +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
7448 +- "clk_ext3", "clk_ext4", };
7449 ++ "video_pll_out", "sys_pll1_133m", "dummy",
7450 ++ "clk_ext2", "clk_ext3", };
7451 +
7452 + static const char * const imx8mn_sai3_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
7453 +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
7454 ++ "video_pll_out", "sys_pll1_133m", "dummy",
7455 + "clk_ext3", "clk_ext4", };
7456 +
7457 + static const char * const imx8mn_sai5_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
7458 +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
7459 ++ "video_pll_out", "sys_pll1_133m", "dummy",
7460 + "clk_ext2", "clk_ext3", };
7461 +
7462 + static const char * const imx8mn_sai6_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
7463 +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
7464 ++ "video_pll_out", "sys_pll1_133m", "dummy",
7465 + "clk_ext3", "clk_ext4", };
7466 +
7467 + static const char * const imx8mn_sai7_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
7468 +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
7469 ++ "video_pll_out", "sys_pll1_133m", "dummy",
7470 + "clk_ext3", "clk_ext4", };
7471 +
7472 + static const char * const imx8mn_spdif1_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
7473 +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi",
7474 ++ "video_pll_out", "sys_pll1_133m", "dummy",
7475 + "clk_ext2", "clk_ext3", };
7476 +
7477 + static const char * const imx8mn_enet_ref_sels[] = {"osc_24m", "sys_pll2_125m", "sys_pll2_50m",
7478 + "sys_pll2_100m", "sys_pll1_160m", "audio_pll1_out",
7479 +- "video_pll1_out", "clk_ext4", };
7480 ++ "video_pll_out", "clk_ext4", };
7481 +
7482 + static const char * const imx8mn_enet_timer_sels[] = {"osc_24m", "sys_pll2_100m", "audio_pll1_out",
7483 + "clk_ext1", "clk_ext2", "clk_ext3",
7484 +- "clk_ext4", "video_pll1_out", };
7485 ++ "clk_ext4", "video_pll_out", };
7486 +
7487 + static const char * const imx8mn_enet_phy_sels[] = {"osc_24m", "sys_pll2_50m", "sys_pll2_125m",
7488 +- "sys_pll2_200m", "sys_pll2_500m", "video_pll1_out",
7489 +- "audio_pll2_out", };
7490 ++ "sys_pll2_200m", "sys_pll2_500m", "audio_pll1_out",
7491 ++ "video_pll_out", "audio_pll2_out", };
7492 +
7493 + static const char * const imx8mn_nand_sels[] = {"osc_24m", "sys_pll2_500m", "audio_pll1_out",
7494 + "sys_pll1_400m", "audio_pll2_out", "sys_pll3_out",
7495 +- "sys_pll2_250m", "video_pll1_out", };
7496 ++ "sys_pll2_250m", "video_pll_out", };
7497 +
7498 + static const char * const imx8mn_qspi_sels[] = {"osc_24m", "sys_pll1_400m", "sys_pll2_333m",
7499 + "sys_pll2_500m", "audio_pll2_out", "sys_pll1_266m",
7500 +@@ -160,19 +160,19 @@ static const char * const imx8mn_usdhc2_sels[] = {"osc_24m", "sys_pll1_400m", "s
7501 + "audio_pll2_out", "sys_pll1_100m", };
7502 +
7503 + static const char * const imx8mn_i2c1_sels[] = {"osc_24m", "sys_pll1_160m", "sys_pll2_50m",
7504 +- "sys_pll3_out", "audio_pll1_out", "video_pll1_out",
7505 ++ "sys_pll3_out", "audio_pll1_out", "video_pll_out",
7506 + "audio_pll2_out", "sys_pll1_133m", };
7507 +
7508 + static const char * const imx8mn_i2c2_sels[] = {"osc_24m", "sys_pll1_160m", "sys_pll2_50m",
7509 +- "sys_pll3_out", "audio_pll1_out", "video_pll1_out",
7510 ++ "sys_pll3_out", "audio_pll1_out", "video_pll_out",
7511 + "audio_pll2_out", "sys_pll1_133m", };
7512 +
7513 + static const char * const imx8mn_i2c3_sels[] = {"osc_24m", "sys_pll1_160m", "sys_pll2_50m",
7514 +- "sys_pll3_out", "audio_pll1_out", "video_pll1_out",
7515 ++ "sys_pll3_out", "audio_pll1_out", "video_pll_out",
7516 + "audio_pll2_out", "sys_pll1_133m", };
7517 +
7518 + static const char * const imx8mn_i2c4_sels[] = {"osc_24m", "sys_pll1_160m", "sys_pll2_50m",
7519 +- "sys_pll3_out", "audio_pll1_out", "video_pll1_out",
7520 ++ "sys_pll3_out", "audio_pll1_out", "video_pll_out",
7521 + "audio_pll2_out", "sys_pll1_133m", };
7522 +
7523 + static const char * const imx8mn_uart1_sels[] = {"osc_24m", "sys_pll1_80m", "sys_pll2_200m",
7524 +@@ -213,63 +213,63 @@ static const char * const imx8mn_ecspi2_sels[] = {"osc_24m", "sys_pll2_200m", "s
7525 +
7526 + static const char * const imx8mn_pwm1_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_160m",
7527 + "sys_pll1_40m", "sys_pll3_out", "clk_ext1",
7528 +- "sys_pll1_80m", "video_pll1_out", };
7529 ++ "sys_pll1_80m", "video_pll_out", };
7530 +
7531 + static const char * const imx8mn_pwm2_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_160m",
7532 + "sys_pll1_40m", "sys_pll3_out", "clk_ext1",
7533 +- "sys_pll1_80m", "video_pll1_out", };
7534 ++ "sys_pll1_80m", "video_pll_out", };
7535 +
7536 + static const char * const imx8mn_pwm3_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_160m",
7537 + "sys_pll1_40m", "sys_pll3_out", "clk_ext2",
7538 +- "sys_pll1_80m", "video_pll1_out", };
7539 ++ "sys_pll1_80m", "video_pll_out", };
7540 +
7541 + static const char * const imx8mn_pwm4_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_160m",
7542 + "sys_pll1_40m", "sys_pll3_out", "clk_ext2",
7543 +- "sys_pll1_80m", "video_pll1_out", };
7544 ++ "sys_pll1_80m", "video_pll_out", };
7545 +
7546 + static const char * const imx8mn_gpt1_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_400m",
7547 +- "sys_pll1_40m", "video_pll1_out", "sys_pll1_80m",
7548 ++ "sys_pll1_40m", "video_pll_out", "sys_pll1_80m",
7549 + "audio_pll1_out", "clk_ext1", };
7550 +
7551 + static const char * const imx8mn_gpt2_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_400m",
7552 +- "sys_pll1_40m", "video_pll1_out", "sys_pll1_80m",
7553 ++ "sys_pll1_40m", "video_pll_out", "sys_pll1_80m",
7554 + "audio_pll1_out", "clk_ext1", };
7555 +
7556 + static const char * const imx8mn_gpt3_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_400m",
7557 +- "sys_pll1_40m", "video_pll1_out", "sys_pll1_80m",
7558 ++ "sys_pll1_40m", "video_pll_out", "sys_pll1_80m",
7559 + "audio_pll1_out", "clk_ext1", };
7560 +
7561 + static const char * const imx8mn_gpt4_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_400m",
7562 +- "sys_pll1_40m", "video_pll1_out", "sys_pll1_80m",
7563 ++ "sys_pll1_40m", "video_pll_out", "sys_pll1_80m",
7564 + "audio_pll1_out", "clk_ext1", };
7565 +
7566 + static const char * const imx8mn_gpt5_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_400m",
7567 +- "sys_pll1_40m", "video_pll1_out", "sys_pll1_80m",
7568 ++ "sys_pll1_40m", "video_pll_out", "sys_pll1_80m",
7569 + "audio_pll1_out", "clk_ext1", };
7570 +
7571 + static const char * const imx8mn_gpt6_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_400m",
7572 +- "sys_pll1_40m", "video_pll1_out", "sys_pll1_80m",
7573 ++ "sys_pll1_40m", "video_pll_out", "sys_pll1_80m",
7574 + "audio_pll1_out", "clk_ext1", };
7575 +
7576 + static const char * const imx8mn_wdog_sels[] = {"osc_24m", "sys_pll1_133m", "sys_pll1_160m",
7577 +- "vpu_pll_out", "sys_pll2_125m", "sys_pll3_out",
7578 ++ "m7_alt_pll_out", "sys_pll2_125m", "sys_pll3_out",
7579 + "sys_pll1_80m", "sys_pll2_166m", };
7580 +
7581 +-static const char * const imx8mn_wrclk_sels[] = {"osc_24m", "sys_pll1_40m", "vpu_pll_out",
7582 ++static const char * const imx8mn_wrclk_sels[] = {"osc_24m", "sys_pll1_40m", "m7_alt_pll_out",
7583 + "sys_pll3_out", "sys_pll2_200m", "sys_pll1_266m",
7584 + "sys_pll2_500m", "sys_pll1_100m", };
7585 +
7586 + static const char * const imx8mn_dsi_core_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll2_250m",
7587 + "sys_pll1_800m", "sys_pll2_1000m", "sys_pll3_out",
7588 +- "audio_pll2_out", "video_pll1_out", };
7589 ++ "audio_pll2_out", "video_pll_out", };
7590 +
7591 + static const char * const imx8mn_dsi_phy_sels[] = {"osc_24m", "sys_pll2_125m", "sys_pll2_100m",
7592 + "sys_pll1_800m", "sys_pll2_1000m", "clk_ext2",
7593 +- "audio_pll2_out", "video_pll1_out", };
7594 ++ "audio_pll2_out", "video_pll_out", };
7595 +
7596 + static const char * const imx8mn_dsi_dbi_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll2_100m",
7597 + "sys_pll1_800m", "sys_pll2_1000m", "sys_pll3_out",
7598 +- "audio_pll2_out", "video_pll1_out", };
7599 ++ "audio_pll2_out", "video_pll_out", };
7600 +
7601 + static const char * const imx8mn_usdhc3_sels[] = {"osc_24m", "sys_pll1_400m", "sys_pll1_800m",
7602 + "sys_pll2_500m", "sys_pll3_out", "sys_pll1_266m",
7603 +@@ -277,15 +277,15 @@ static const char * const imx8mn_usdhc3_sels[] = {"osc_24m", "sys_pll1_400m", "s
7604 +
7605 + static const char * const imx8mn_camera_pixel_sels[] = {"osc_24m", "sys_pll1_266m", "sys_pll2_250m",
7606 + "sys_pll1_800m", "sys_pll2_1000m", "sys_pll3_out",
7607 +- "audio_pll2_out", "video_pll1_out", };
7608 ++ "audio_pll2_out", "video_pll_out", };
7609 +
7610 + static const char * const imx8mn_csi1_phy_sels[] = {"osc_24m", "sys_pll2_333m", "sys_pll2_100m",
7611 + "sys_pll1_800m", "sys_pll2_1000m", "clk_ext2",
7612 +- "audio_pll2_out", "video_pll1_out", };
7613 ++ "audio_pll2_out", "video_pll_out", };
7614 +
7615 + static const char * const imx8mn_csi2_phy_sels[] = {"osc_24m", "sys_pll2_333m", "sys_pll2_100m",
7616 + "sys_pll1_800m", "sys_pll2_1000m", "clk_ext2",
7617 +- "audio_pll2_out", "video_pll1_out", };
7618 ++ "audio_pll2_out", "video_pll_out", };
7619 +
7620 + static const char * const imx8mn_csi2_esc_sels[] = {"osc_24m", "sys_pll2_100m", "sys_pll1_80m",
7621 + "sys_pll1_800m", "sys_pll2_1000m", "sys_pll3_out",
7622 +@@ -306,9 +306,9 @@ static const char * const imx8mn_clko1_sels[] = {"osc_24m", "sys_pll1_800m", "du
7623 + "dummy", "sys_pll1_80m", };
7624 + static const char * const imx8mn_clko2_sels[] = {"osc_24m", "sys_pll2_200m", "sys_pll1_400m",
7625 + "sys_pll2_166m", "sys_pll3_out", "audio_pll1_out",
7626 +- "video_pll1_out", "osc_32k", };
7627 ++ "video_pll_out", "osc_32k", };
7628 +
7629 +-static const char * const clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out",
7630 ++static const char * const clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll_out",
7631 + "dummy", "dummy", "gpu_pll_out", "dummy",
7632 + "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3",
7633 + "dummy", "dummy", "osc_24m", "dummy", "osc_32k"};
7634 +@@ -349,19 +349,19 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
7635 +
7636 + hws[IMX8MN_AUDIO_PLL1_REF_SEL] = imx_clk_hw_mux("audio_pll1_ref_sel", base + 0x0, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7637 + hws[IMX8MN_AUDIO_PLL2_REF_SEL] = imx_clk_hw_mux("audio_pll2_ref_sel", base + 0x14, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7638 +- hws[IMX8MN_VIDEO_PLL1_REF_SEL] = imx_clk_hw_mux("video_pll1_ref_sel", base + 0x28, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7639 ++ hws[IMX8MN_VIDEO_PLL_REF_SEL] = imx_clk_hw_mux("video_pll_ref_sel", base + 0x28, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7640 + hws[IMX8MN_DRAM_PLL_REF_SEL] = imx_clk_hw_mux("dram_pll_ref_sel", base + 0x50, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7641 + hws[IMX8MN_GPU_PLL_REF_SEL] = imx_clk_hw_mux("gpu_pll_ref_sel", base + 0x64, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7642 +- hws[IMX8MN_VPU_PLL_REF_SEL] = imx_clk_hw_mux("vpu_pll_ref_sel", base + 0x74, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7643 ++ hws[IMX8MN_M7_ALT_PLL_REF_SEL] = imx_clk_hw_mux("m7_alt_pll_ref_sel", base + 0x74, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7644 + hws[IMX8MN_ARM_PLL_REF_SEL] = imx_clk_hw_mux("arm_pll_ref_sel", base + 0x84, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7645 + hws[IMX8MN_SYS_PLL3_REF_SEL] = imx_clk_hw_mux("sys_pll3_ref_sel", base + 0x114, 0, 2, pll_ref_sels, ARRAY_SIZE(pll_ref_sels));
7646 +
7647 + hws[IMX8MN_AUDIO_PLL1] = imx_clk_hw_pll14xx("audio_pll1", "audio_pll1_ref_sel", base, &imx_1443x_pll);
7648 + hws[IMX8MN_AUDIO_PLL2] = imx_clk_hw_pll14xx("audio_pll2", "audio_pll2_ref_sel", base + 0x14, &imx_1443x_pll);
7649 +- hws[IMX8MN_VIDEO_PLL1] = imx_clk_hw_pll14xx("video_pll1", "video_pll1_ref_sel", base + 0x28, &imx_1443x_pll);
7650 ++ hws[IMX8MN_VIDEO_PLL] = imx_clk_hw_pll14xx("video_pll", "video_pll_ref_sel", base + 0x28, &imx_1443x_pll);
7651 + hws[IMX8MN_DRAM_PLL] = imx_clk_hw_pll14xx("dram_pll", "dram_pll_ref_sel", base + 0x50, &imx_1443x_dram_pll);
7652 + hws[IMX8MN_GPU_PLL] = imx_clk_hw_pll14xx("gpu_pll", "gpu_pll_ref_sel", base + 0x64, &imx_1416x_pll);
7653 +- hws[IMX8MN_VPU_PLL] = imx_clk_hw_pll14xx("vpu_pll", "vpu_pll_ref_sel", base + 0x74, &imx_1416x_pll);
7654 ++ hws[IMX8MN_M7_ALT_PLL] = imx_clk_hw_pll14xx("m7_alt_pll", "m7_alt_pll_ref_sel", base + 0x74, &imx_1416x_pll);
7655 + hws[IMX8MN_ARM_PLL] = imx_clk_hw_pll14xx("arm_pll", "arm_pll_ref_sel", base + 0x84, &imx_1416x_pll);
7656 + hws[IMX8MN_SYS_PLL1] = imx_clk_hw_fixed("sys_pll1", 800000000);
7657 + hws[IMX8MN_SYS_PLL2] = imx_clk_hw_fixed("sys_pll2", 1000000000);
7658 +@@ -370,20 +370,20 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
7659 + /* PLL bypass out */
7660 + hws[IMX8MN_AUDIO_PLL1_BYPASS] = imx_clk_hw_mux_flags("audio_pll1_bypass", base, 16, 1, audio_pll1_bypass_sels, ARRAY_SIZE(audio_pll1_bypass_sels), CLK_SET_RATE_PARENT);
7661 + hws[IMX8MN_AUDIO_PLL2_BYPASS] = imx_clk_hw_mux_flags("audio_pll2_bypass", base + 0x14, 16, 1, audio_pll2_bypass_sels, ARRAY_SIZE(audio_pll2_bypass_sels), CLK_SET_RATE_PARENT);
7662 +- hws[IMX8MN_VIDEO_PLL1_BYPASS] = imx_clk_hw_mux_flags("video_pll1_bypass", base + 0x28, 16, 1, video_pll1_bypass_sels, ARRAY_SIZE(video_pll1_bypass_sels), CLK_SET_RATE_PARENT);
7663 ++ hws[IMX8MN_VIDEO_PLL_BYPASS] = imx_clk_hw_mux_flags("video_pll_bypass", base + 0x28, 16, 1, video_pll_bypass_sels, ARRAY_SIZE(video_pll_bypass_sels), CLK_SET_RATE_PARENT);
7664 + hws[IMX8MN_DRAM_PLL_BYPASS] = imx_clk_hw_mux_flags("dram_pll_bypass", base + 0x50, 16, 1, dram_pll_bypass_sels, ARRAY_SIZE(dram_pll_bypass_sels), CLK_SET_RATE_PARENT);
7665 + hws[IMX8MN_GPU_PLL_BYPASS] = imx_clk_hw_mux_flags("gpu_pll_bypass", base + 0x64, 28, 1, gpu_pll_bypass_sels, ARRAY_SIZE(gpu_pll_bypass_sels), CLK_SET_RATE_PARENT);
7666 +- hws[IMX8MN_VPU_PLL_BYPASS] = imx_clk_hw_mux_flags("vpu_pll_bypass", base + 0x74, 28, 1, vpu_pll_bypass_sels, ARRAY_SIZE(vpu_pll_bypass_sels), CLK_SET_RATE_PARENT);
7667 ++ hws[IMX8MN_M7_ALT_PLL_BYPASS] = imx_clk_hw_mux_flags("m7_alt_pll_bypass", base + 0x74, 28, 1, m7_alt_pll_bypass_sels, ARRAY_SIZE(m7_alt_pll_bypass_sels), CLK_SET_RATE_PARENT);
7668 + hws[IMX8MN_ARM_PLL_BYPASS] = imx_clk_hw_mux_flags("arm_pll_bypass", base + 0x84, 28, 1, arm_pll_bypass_sels, ARRAY_SIZE(arm_pll_bypass_sels), CLK_SET_RATE_PARENT);
7669 + hws[IMX8MN_SYS_PLL3_BYPASS] = imx_clk_hw_mux_flags("sys_pll3_bypass", base + 0x114, 28, 1, sys_pll3_bypass_sels, ARRAY_SIZE(sys_pll3_bypass_sels), CLK_SET_RATE_PARENT);
7670 +
7671 + /* PLL out gate */
7672 + hws[IMX8MN_AUDIO_PLL1_OUT] = imx_clk_hw_gate("audio_pll1_out", "audio_pll1_bypass", base, 13);
7673 + hws[IMX8MN_AUDIO_PLL2_OUT] = imx_clk_hw_gate("audio_pll2_out", "audio_pll2_bypass", base + 0x14, 13);
7674 +- hws[IMX8MN_VIDEO_PLL1_OUT] = imx_clk_hw_gate("video_pll1_out", "video_pll1_bypass", base + 0x28, 13);
7675 ++ hws[IMX8MN_VIDEO_PLL_OUT] = imx_clk_hw_gate("video_pll_out", "video_pll_bypass", base + 0x28, 13);
7676 + hws[IMX8MN_DRAM_PLL_OUT] = imx_clk_hw_gate("dram_pll_out", "dram_pll_bypass", base + 0x50, 13);
7677 + hws[IMX8MN_GPU_PLL_OUT] = imx_clk_hw_gate("gpu_pll_out", "gpu_pll_bypass", base + 0x64, 11);
7678 +- hws[IMX8MN_VPU_PLL_OUT] = imx_clk_hw_gate("vpu_pll_out", "vpu_pll_bypass", base + 0x74, 11);
7679 ++ hws[IMX8MN_M7_ALT_PLL_OUT] = imx_clk_hw_gate("m7_alt_pll_out", "m7_alt_pll_bypass", base + 0x74, 11);
7680 + hws[IMX8MN_ARM_PLL_OUT] = imx_clk_hw_gate("arm_pll_out", "arm_pll_bypass", base + 0x84, 11);
7681 + hws[IMX8MN_SYS_PLL3_OUT] = imx_clk_hw_gate("sys_pll3_out", "sys_pll3_bypass", base + 0x114, 11);
7682 +
7683 +diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
7684 +index 652ae58c2735f..5d68d975b4eb1 100644
7685 +--- a/drivers/clk/imx/clk-imx8mp.c
7686 ++++ b/drivers/clk/imx/clk-imx8mp.c
7687 +@@ -17,6 +17,7 @@
7688 +
7689 + static u32 share_count_nand;
7690 + static u32 share_count_media;
7691 ++static u32 share_count_usb;
7692 +
7693 + static const char * const pll_ref_sels[] = { "osc_24m", "dummy", "dummy", "dummy", };
7694 + static const char * const audio_pll1_bypass_sels[] = {"audio_pll1", "audio_pll1_ref_sel", };
7695 +@@ -673,7 +674,8 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
7696 + hws[IMX8MP_CLK_UART2_ROOT] = imx_clk_hw_gate4("uart2_root_clk", "uart2", ccm_base + 0x44a0, 0);
7697 + hws[IMX8MP_CLK_UART3_ROOT] = imx_clk_hw_gate4("uart3_root_clk", "uart3", ccm_base + 0x44b0, 0);
7698 + hws[IMX8MP_CLK_UART4_ROOT] = imx_clk_hw_gate4("uart4_root_clk", "uart4", ccm_base + 0x44c0, 0);
7699 +- hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0);
7700 ++ hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate2_shared2("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0, &share_count_usb);
7701 ++ hws[IMX8MP_CLK_USB_SUSP] = imx_clk_hw_gate2_shared2("usb_suspend_clk", "osc_32k", ccm_base + 0x44d0, 0, &share_count_usb);
7702 + hws[IMX8MP_CLK_USB_PHY_ROOT] = imx_clk_hw_gate4("usb_phy_root_clk", "usb_phy_ref", ccm_base + 0x44f0, 0);
7703 + hws[IMX8MP_CLK_USDHC1_ROOT] = imx_clk_hw_gate4("usdhc1_root_clk", "usdhc1", ccm_base + 0x4510, 0);
7704 + hws[IMX8MP_CLK_USDHC2_ROOT] = imx_clk_hw_gate4("usdhc2_root_clk", "usdhc2", ccm_base + 0x4520, 0);
7705 +diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
7706 +index 99cff1fd108b5..02d6a9894521d 100644
7707 +--- a/drivers/clk/imx/clk-imx93.c
7708 ++++ b/drivers/clk/imx/clk-imx93.c
7709 +@@ -170,7 +170,7 @@ static const struct imx93_clk_ccgr {
7710 + { IMX93_CLK_MU2_B_GATE, "mu2_b", "bus_wakeup_root", 0x8500, 0, &share_count_mub },
7711 + { IMX93_CLK_EDMA1_GATE, "edma1", "m33_root", 0x8540, },
7712 + { IMX93_CLK_EDMA2_GATE, "edma2", "wakeup_axi_root", 0x8580, },
7713 +- { IMX93_CLK_FLEXSPI1_GATE, "flexspi", "flexspi_root", 0x8640, },
7714 ++ { IMX93_CLK_FLEXSPI1_GATE, "flexspi1", "flexspi1_root", 0x8640, },
7715 + { IMX93_CLK_GPIO1_GATE, "gpio1", "m33_root", 0x8880, },
7716 + { IMX93_CLK_GPIO2_GATE, "gpio2", "bus_wakeup_root", 0x88c0, },
7717 + { IMX93_CLK_GPIO3_GATE, "gpio3", "bus_wakeup_root", 0x8900, },
7718 +@@ -240,7 +240,7 @@ static const struct imx93_clk_ccgr {
7719 + { IMX93_CLK_AUD_XCVR_GATE, "aud_xcvr", "audio_xcvr_root", 0x9b80, },
7720 + { IMX93_CLK_SPDIF_GATE, "spdif", "spdif_root", 0x9c00, },
7721 + { IMX93_CLK_HSIO_32K_GATE, "hsio_32k", "osc_32k", 0x9dc0, },
7722 +- { IMX93_CLK_ENET1_GATE, "enet1", "enet_root", 0x9e00, },
7723 ++ { IMX93_CLK_ENET1_GATE, "enet1", "wakeup_axi_root", 0x9e00, },
7724 + { IMX93_CLK_ENET_QOS_GATE, "enet_qos", "wakeup_axi_root", 0x9e40, },
7725 + { IMX93_CLK_SYS_CNT_GATE, "sys_cnt", "osc_24m", 0x9e80, },
7726 + { IMX93_CLK_TSTMR1_GATE, "tstmr1", "bus_aon_root", 0x9ec0, },
7727 +@@ -258,7 +258,7 @@ static int imx93_clocks_probe(struct platform_device *pdev)
7728 + struct device_node *np = dev->of_node;
7729 + const struct imx93_clk_root *root;
7730 + const struct imx93_clk_ccgr *ccgr;
7731 +- void __iomem *base = NULL;
7732 ++ void __iomem *base, *anatop_base;
7733 + int i, ret;
7734 +
7735 + clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
7736 +@@ -285,20 +285,22 @@ static int imx93_clocks_probe(struct platform_device *pdev)
7737 + "sys_pll_pfd2", 1, 2);
7738 +
7739 + np = of_find_compatible_node(NULL, NULL, "fsl,imx93-anatop");
7740 +- base = of_iomap(np, 0);
7741 ++ anatop_base = of_iomap(np, 0);
7742 + of_node_put(np);
7743 +- if (WARN_ON(!base))
7744 ++ if (WARN_ON(!anatop_base))
7745 + return -ENOMEM;
7746 +
7747 +- clks[IMX93_CLK_AUDIO_PLL] = imx_clk_fracn_gppll("audio_pll", "osc_24m", base + 0x1200,
7748 ++ clks[IMX93_CLK_AUDIO_PLL] = imx_clk_fracn_gppll("audio_pll", "osc_24m", anatop_base + 0x1200,
7749 + &imx_fracn_gppll);
7750 +- clks[IMX93_CLK_VIDEO_PLL] = imx_clk_fracn_gppll("video_pll", "osc_24m", base + 0x1400,
7751 ++ clks[IMX93_CLK_VIDEO_PLL] = imx_clk_fracn_gppll("video_pll", "osc_24m", anatop_base + 0x1400,
7752 + &imx_fracn_gppll);
7753 +
7754 + np = dev->of_node;
7755 + base = devm_platform_ioremap_resource(pdev, 0);
7756 +- if (WARN_ON(IS_ERR(base)))
7757 ++ if (WARN_ON(IS_ERR(base))) {
7758 ++ iounmap(anatop_base);
7759 + return PTR_ERR(base);
7760 ++ }
7761 +
7762 + for (i = 0; i < ARRAY_SIZE(root_array); i++) {
7763 + root = &root_array[i];
7764 +@@ -327,6 +329,7 @@ static int imx93_clocks_probe(struct platform_device *pdev)
7765 +
7766 + unregister_hws:
7767 + imx_unregister_hw_clocks(clks, IMX93_CLK_END);
7768 ++ iounmap(anatop_base);
7769 +
7770 + return ret;
7771 + }
7772 +diff --git a/drivers/clk/imx/clk-imxrt1050.c b/drivers/clk/imx/clk-imxrt1050.c
7773 +index 9539d35588ee9..26108e9f7e67a 100644
7774 +--- a/drivers/clk/imx/clk-imxrt1050.c
7775 ++++ b/drivers/clk/imx/clk-imxrt1050.c
7776 +@@ -140,7 +140,7 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
7777 + hws[IMXRT1050_CLK_USDHC1] = imx_clk_hw_gate2("usdhc1", "usdhc1_podf", ccm_base + 0x80, 2);
7778 + hws[IMXRT1050_CLK_USDHC2] = imx_clk_hw_gate2("usdhc2", "usdhc2_podf", ccm_base + 0x80, 4);
7779 + hws[IMXRT1050_CLK_LPUART1] = imx_clk_hw_gate2("lpuart1", "lpuart_podf", ccm_base + 0x7c, 24);
7780 +- hws[IMXRT1050_CLK_LCDIF_APB] = imx_clk_hw_gate2("lcdif", "lcdif_podf", ccm_base + 0x74, 10);
7781 ++ hws[IMXRT1050_CLK_LCDIF_APB] = imx_clk_hw_gate2("lcdif", "lcdif_podf", ccm_base + 0x70, 28);
7782 + hws[IMXRT1050_CLK_DMA] = imx_clk_hw_gate("dma", "ipg", ccm_base + 0x7C, 6);
7783 + hws[IMXRT1050_CLK_DMA_MUX] = imx_clk_hw_gate("dmamux0", "ipg", ccm_base + 0x7C, 7);
7784 + imx_check_clk_hws(hws, IMXRT1050_CLK_END);
7785 +diff --git a/drivers/clk/mediatek/clk-mt7986-infracfg.c b/drivers/clk/mediatek/clk-mt7986-infracfg.c
7786 +index d90727a53283c..49666047bf0ed 100644
7787 +--- a/drivers/clk/mediatek/clk-mt7986-infracfg.c
7788 ++++ b/drivers/clk/mediatek/clk-mt7986-infracfg.c
7789 +@@ -153,7 +153,7 @@ static const struct mtk_gate infra_clks[] = {
7790 + 18),
7791 + GATE_INFRA1(CLK_INFRA_MSDC_66M_CK, "infra_msdc_66m", "infra_sysaxi_d2",
7792 + 19),
7793 +- GATE_INFRA1(CLK_INFRA_ADC_26M_CK, "infra_adc_26m", "csw_f26m_sel", 20),
7794 ++ GATE_INFRA1(CLK_INFRA_ADC_26M_CK, "infra_adc_26m", "infra_adc_frc", 20),
7795 + GATE_INFRA1(CLK_INFRA_ADC_FRC_CK, "infra_adc_frc", "csw_f26m_sel", 21),
7796 + GATE_INFRA1(CLK_INFRA_FBIST2FPC_CK, "infra_fbist2fpc", "nfi1x_sel", 23),
7797 + /* INFRA2 */
7798 +diff --git a/drivers/clk/microchip/clk-mpfs-ccc.c b/drivers/clk/microchip/clk-mpfs-ccc.c
7799 +index 7be028dced63d..32aae880a14f3 100644
7800 +--- a/drivers/clk/microchip/clk-mpfs-ccc.c
7801 ++++ b/drivers/clk/microchip/clk-mpfs-ccc.c
7802 +@@ -166,6 +166,9 @@ static int mpfs_ccc_register_outputs(struct device *dev, struct mpfs_ccc_out_hw_
7803 + struct mpfs_ccc_out_hw_clock *out_hw = &out_hws[i];
7804 + char *name = devm_kzalloc(dev, 23, GFP_KERNEL);
7805 +
7806 ++ if (!name)
7807 ++ return -ENOMEM;
7808 ++
7809 + snprintf(name, 23, "%s_out%u", parent->name, i);
7810 + out_hw->divider.hw.init = CLK_HW_INIT_HW(name, &parent->hw, &clk_divider_ops, 0);
7811 + out_hw->divider.reg = data->pll_base[i / MPFS_CCC_OUTPUTS_PER_PLL] +
7812 +@@ -200,6 +203,9 @@ static int mpfs_ccc_register_plls(struct device *dev, struct mpfs_ccc_pll_hw_clo
7813 + struct mpfs_ccc_pll_hw_clock *pll_hw = &pll_hws[i];
7814 + char *name = devm_kzalloc(dev, 18, GFP_KERNEL);
7815 +
7816 ++ if (!name)
7817 ++ return -ENOMEM;
7818 ++
7819 + pll_hw->base = data->pll_base[i];
7820 + snprintf(name, 18, "ccc%s_pll%u", strchrnul(dev->of_node->full_name, '@'), i);
7821 + pll_hw->name = (const char *)name;
7822 +diff --git a/drivers/clk/qcom/clk-krait.c b/drivers/clk/qcom/clk-krait.c
7823 +index 45da736bd5f4c..293a9dfa7151a 100644
7824 +--- a/drivers/clk/qcom/clk-krait.c
7825 ++++ b/drivers/clk/qcom/clk-krait.c
7826 +@@ -114,6 +114,8 @@ static int krait_div2_set_rate(struct clk_hw *hw, unsigned long rate,
7827 +
7828 + if (d->lpl)
7829 + mask = mask << (d->shift + LPL_SHIFT) | mask << d->shift;
7830 ++ else
7831 ++ mask <<= d->shift;
7832 +
7833 + spin_lock_irqsave(&krait_clock_reg_lock, flags);
7834 + val = krait_get_l2_indirect_reg(d->offset);
7835 +diff --git a/drivers/clk/qcom/dispcc-sm6350.c b/drivers/clk/qcom/dispcc-sm6350.c
7836 +index 0c3c2e26ede90..ea6f54ed846ec 100644
7837 +--- a/drivers/clk/qcom/dispcc-sm6350.c
7838 ++++ b/drivers/clk/qcom/dispcc-sm6350.c
7839 +@@ -306,7 +306,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk0_clk_src = {
7840 + .name = "disp_cc_mdss_pclk0_clk_src",
7841 + .parent_data = disp_cc_parent_data_5,
7842 + .num_parents = ARRAY_SIZE(disp_cc_parent_data_5),
7843 +- .flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE,
7844 ++ .flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE | CLK_OPS_PARENT_ENABLE,
7845 + .ops = &clk_pixel_ops,
7846 + },
7847 + };
7848 +@@ -385,7 +385,7 @@ static struct clk_branch disp_cc_mdss_byte0_clk = {
7849 + &disp_cc_mdss_byte0_clk_src.clkr.hw,
7850 + },
7851 + .num_parents = 1,
7852 +- .flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE,
7853 ++ .flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE | CLK_OPS_PARENT_ENABLE,
7854 + .ops = &clk_branch2_ops,
7855 + },
7856 + },
7857 +diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
7858 +index 718de17a1e600..6447f3e81b555 100644
7859 +--- a/drivers/clk/qcom/gcc-ipq806x.c
7860 ++++ b/drivers/clk/qcom/gcc-ipq806x.c
7861 +@@ -79,7 +79,9 @@ static struct clk_regmap pll4_vote = {
7862 + .enable_mask = BIT(4),
7863 + .hw.init = &(struct clk_init_data){
7864 + .name = "pll4_vote",
7865 +- .parent_names = (const char *[]){ "pll4" },
7866 ++ .parent_data = &(const struct clk_parent_data){
7867 ++ .fw_name = "pll4", .name = "pll4",
7868 ++ },
7869 + .num_parents = 1,
7870 + .ops = &clk_pll_vote_ops,
7871 + },
7872 +diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
7873 +index 9755ef4888c19..a0ba37656b07b 100644
7874 +--- a/drivers/clk/qcom/gcc-sm8250.c
7875 ++++ b/drivers/clk/qcom/gcc-sm8250.c
7876 +@@ -3267,7 +3267,7 @@ static struct gdsc usb30_prim_gdsc = {
7877 + .pd = {
7878 + .name = "usb30_prim_gdsc",
7879 + },
7880 +- .pwrsts = PWRSTS_OFF_ON,
7881 ++ .pwrsts = PWRSTS_RET_ON,
7882 + };
7883 +
7884 + static struct gdsc usb30_sec_gdsc = {
7885 +@@ -3275,7 +3275,7 @@ static struct gdsc usb30_sec_gdsc = {
7886 + .pd = {
7887 + .name = "usb30_sec_gdsc",
7888 + },
7889 +- .pwrsts = PWRSTS_OFF_ON,
7890 ++ .pwrsts = PWRSTS_RET_ON,
7891 + };
7892 +
7893 + static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc = {
7894 +diff --git a/drivers/clk/qcom/lpassaudiocc-sc7280.c b/drivers/clk/qcom/lpassaudiocc-sc7280.c
7895 +index 063e0365f3119..1339f9211a149 100644
7896 +--- a/drivers/clk/qcom/lpassaudiocc-sc7280.c
7897 ++++ b/drivers/clk/qcom/lpassaudiocc-sc7280.c
7898 +@@ -722,33 +722,17 @@ static const struct of_device_id lpass_audio_cc_sc7280_match_table[] = {
7899 + };
7900 + MODULE_DEVICE_TABLE(of, lpass_audio_cc_sc7280_match_table);
7901 +
7902 +-static void lpassaudio_pm_runtime_disable(void *data)
7903 +-{
7904 +- pm_runtime_disable(data);
7905 +-}
7906 +-
7907 +-static void lpassaudio_pm_clk_destroy(void *data)
7908 +-{
7909 +- pm_clk_destroy(data);
7910 +-}
7911 +-
7912 +-static int lpassaudio_create_pm_clks(struct platform_device *pdev)
7913 ++static int lpass_audio_setup_runtime_pm(struct platform_device *pdev)
7914 + {
7915 + int ret;
7916 +
7917 + pm_runtime_use_autosuspend(&pdev->dev);
7918 + pm_runtime_set_autosuspend_delay(&pdev->dev, 50);
7919 +- pm_runtime_enable(&pdev->dev);
7920 +-
7921 +- ret = devm_add_action_or_reset(&pdev->dev, lpassaudio_pm_runtime_disable, &pdev->dev);
7922 +- if (ret)
7923 +- return ret;
7924 +-
7925 +- ret = pm_clk_create(&pdev->dev);
7926 ++ ret = devm_pm_runtime_enable(&pdev->dev);
7927 + if (ret)
7928 + return ret;
7929 +
7930 +- ret = devm_add_action_or_reset(&pdev->dev, lpassaudio_pm_clk_destroy, &pdev->dev);
7931 ++ ret = devm_pm_clk_create(&pdev->dev);
7932 + if (ret)
7933 + return ret;
7934 +
7935 +@@ -756,7 +740,7 @@ static int lpassaudio_create_pm_clks(struct platform_device *pdev)
7936 + if (ret < 0)
7937 + dev_err(&pdev->dev, "failed to acquire iface clock\n");
7938 +
7939 +- return ret;
7940 ++ return pm_runtime_resume_and_get(&pdev->dev);
7941 + }
7942 +
7943 + static int lpass_audio_cc_sc7280_probe(struct platform_device *pdev)
7944 +@@ -765,7 +749,7 @@ static int lpass_audio_cc_sc7280_probe(struct platform_device *pdev)
7945 + struct regmap *regmap;
7946 + int ret;
7947 +
7948 +- ret = lpassaudio_create_pm_clks(pdev);
7949 ++ ret = lpass_audio_setup_runtime_pm(pdev);
7950 + if (ret)
7951 + return ret;
7952 +
7953 +@@ -775,8 +759,8 @@ static int lpass_audio_cc_sc7280_probe(struct platform_device *pdev)
7954 +
7955 + regmap = qcom_cc_map(pdev, desc);
7956 + if (IS_ERR(regmap)) {
7957 +- pm_runtime_disable(&pdev->dev);
7958 +- return PTR_ERR(regmap);
7959 ++ ret = PTR_ERR(regmap);
7960 ++ goto exit;
7961 + }
7962 +
7963 + clk_zonda_pll_configure(&lpass_audio_cc_pll, regmap, &lpass_audio_cc_pll_config);
7964 +@@ -788,20 +772,18 @@ static int lpass_audio_cc_sc7280_probe(struct platform_device *pdev)
7965 + ret = qcom_cc_really_probe(pdev, &lpass_audio_cc_sc7280_desc, regmap);
7966 + if (ret) {
7967 + dev_err(&pdev->dev, "Failed to register LPASS AUDIO CC clocks\n");
7968 +- pm_runtime_disable(&pdev->dev);
7969 +- return ret;
7970 ++ goto exit;
7971 + }
7972 +
7973 + ret = qcom_cc_probe_by_index(pdev, 1, &lpass_audio_cc_reset_sc7280_desc);
7974 + if (ret) {
7975 + dev_err(&pdev->dev, "Failed to register LPASS AUDIO CC Resets\n");
7976 +- pm_runtime_disable(&pdev->dev);
7977 +- return ret;
7978 ++ goto exit;
7979 + }
7980 +
7981 + pm_runtime_mark_last_busy(&pdev->dev);
7982 ++exit:
7983 + pm_runtime_put_autosuspend(&pdev->dev);
7984 +- pm_runtime_put_sync(&pdev->dev);
7985 +
7986 + return ret;
7987 + }
7988 +@@ -839,14 +821,15 @@ static int lpass_aon_cc_sc7280_probe(struct platform_device *pdev)
7989 + struct regmap *regmap;
7990 + int ret;
7991 +
7992 +- ret = lpassaudio_create_pm_clks(pdev);
7993 ++ ret = lpass_audio_setup_runtime_pm(pdev);
7994 + if (ret)
7995 + return ret;
7996 +
7997 + if (of_property_read_bool(pdev->dev.of_node, "qcom,adsp-pil-mode")) {
7998 + lpass_audio_cc_sc7280_regmap_config.name = "cc";
7999 + desc = &lpass_cc_sc7280_desc;
8000 +- return qcom_cc_probe(pdev, desc);
8001 ++ ret = qcom_cc_probe(pdev, desc);
8002 ++ goto exit;
8003 + }
8004 +
8005 + lpass_audio_cc_sc7280_regmap_config.name = "lpasscc_aon";
8006 +@@ -854,18 +837,22 @@ static int lpass_aon_cc_sc7280_probe(struct platform_device *pdev)
8007 + desc = &lpass_aon_cc_sc7280_desc;
8008 +
8009 + regmap = qcom_cc_map(pdev, desc);
8010 +- if (IS_ERR(regmap))
8011 +- return PTR_ERR(regmap);
8012 ++ if (IS_ERR(regmap)) {
8013 ++ ret = PTR_ERR(regmap);
8014 ++ goto exit;
8015 ++ }
8016 +
8017 + clk_lucid_pll_configure(&lpass_aon_cc_pll, regmap, &lpass_aon_cc_pll_config);
8018 +
8019 + ret = qcom_cc_really_probe(pdev, &lpass_aon_cc_sc7280_desc, regmap);
8020 +- if (ret)
8021 ++ if (ret) {
8022 + dev_err(&pdev->dev, "Failed to register LPASS AON CC clocks\n");
8023 ++ goto exit;
8024 ++ }
8025 +
8026 + pm_runtime_mark_last_busy(&pdev->dev);
8027 ++exit:
8028 + pm_runtime_put_autosuspend(&pdev->dev);
8029 +- pm_runtime_put_sync(&pdev->dev);
8030 +
8031 + return ret;
8032 + }
8033 +diff --git a/drivers/clk/qcom/lpasscorecc-sc7180.c b/drivers/clk/qcom/lpasscorecc-sc7180.c
8034 +index ac09b7b840aba..a5731994cbed1 100644
8035 +--- a/drivers/clk/qcom/lpasscorecc-sc7180.c
8036 ++++ b/drivers/clk/qcom/lpasscorecc-sc7180.c
8037 +@@ -356,7 +356,7 @@ static const struct qcom_cc_desc lpass_audio_hm_sc7180_desc = {
8038 + .num_gdscs = ARRAY_SIZE(lpass_audio_hm_sc7180_gdscs),
8039 + };
8040 +
8041 +-static int lpass_create_pm_clks(struct platform_device *pdev)
8042 ++static int lpass_setup_runtime_pm(struct platform_device *pdev)
8043 + {
8044 + int ret;
8045 +
8046 +@@ -375,7 +375,7 @@ static int lpass_create_pm_clks(struct platform_device *pdev)
8047 + if (ret < 0)
8048 + dev_err(&pdev->dev, "failed to acquire iface clock\n");
8049 +
8050 +- return ret;
8051 ++ return pm_runtime_resume_and_get(&pdev->dev);
8052 + }
8053 +
8054 + static int lpass_core_cc_sc7180_probe(struct platform_device *pdev)
8055 +@@ -384,7 +384,7 @@ static int lpass_core_cc_sc7180_probe(struct platform_device *pdev)
8056 + struct regmap *regmap;
8057 + int ret;
8058 +
8059 +- ret = lpass_create_pm_clks(pdev);
8060 ++ ret = lpass_setup_runtime_pm(pdev);
8061 + if (ret)
8062 + return ret;
8063 +
8064 +@@ -392,12 +392,14 @@ static int lpass_core_cc_sc7180_probe(struct platform_device *pdev)
8065 + desc = &lpass_audio_hm_sc7180_desc;
8066 + ret = qcom_cc_probe_by_index(pdev, 1, desc);
8067 + if (ret)
8068 +- return ret;
8069 ++ goto exit;
8070 +
8071 + lpass_core_cc_sc7180_regmap_config.name = "lpass_core_cc";
8072 + regmap = qcom_cc_map(pdev, &lpass_core_cc_sc7180_desc);
8073 +- if (IS_ERR(regmap))
8074 +- return PTR_ERR(regmap);
8075 ++ if (IS_ERR(regmap)) {
8076 ++ ret = PTR_ERR(regmap);
8077 ++ goto exit;
8078 ++ }
8079 +
8080 + /*
8081 + * Keep the CLK always-ON
8082 +@@ -415,6 +417,7 @@ static int lpass_core_cc_sc7180_probe(struct platform_device *pdev)
8083 + ret = qcom_cc_really_probe(pdev, &lpass_core_cc_sc7180_desc, regmap);
8084 +
8085 + pm_runtime_mark_last_busy(&pdev->dev);
8086 ++exit:
8087 + pm_runtime_put_autosuspend(&pdev->dev);
8088 +
8089 + return ret;
8090 +@@ -425,14 +428,19 @@ static int lpass_hm_core_probe(struct platform_device *pdev)
8091 + const struct qcom_cc_desc *desc;
8092 + int ret;
8093 +
8094 +- ret = lpass_create_pm_clks(pdev);
8095 ++ ret = lpass_setup_runtime_pm(pdev);
8096 + if (ret)
8097 + return ret;
8098 +
8099 + lpass_core_cc_sc7180_regmap_config.name = "lpass_hm_core";
8100 + desc = &lpass_core_hm_sc7180_desc;
8101 +
8102 +- return qcom_cc_probe_by_index(pdev, 0, desc);
8103 ++ ret = qcom_cc_probe_by_index(pdev, 0, desc);
8104 ++
8105 ++ pm_runtime_mark_last_busy(&pdev->dev);
8106 ++ pm_runtime_put_autosuspend(&pdev->dev);
8107 ++
8108 ++ return ret;
8109 + }
8110 +
8111 + static const struct of_device_id lpass_hm_sc7180_match_table[] = {
8112 +diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c
8113 +index d74d46833012f..e02542ca24a06 100644
8114 +--- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c
8115 ++++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c
8116 +@@ -116,7 +116,7 @@ static const struct cpg_core_clk r8a779a0_core_clks[] __initconst = {
8117 + DEF_FIXED("cp", R8A779A0_CLK_CP, CLK_EXTAL, 2, 1),
8118 + DEF_FIXED("cl16mck", R8A779A0_CLK_CL16MCK, CLK_PLL1_DIV2, 64, 1),
8119 +
8120 +- DEF_GEN4_SDH("sdh0", R8A779A0_CLK_SD0H, CLK_SDSRC, 0x870),
8121 ++ DEF_GEN4_SDH("sd0h", R8A779A0_CLK_SD0H, CLK_SDSRC, 0x870),
8122 + DEF_GEN4_SD("sd0", R8A779A0_CLK_SD0, R8A779A0_CLK_SD0H, 0x870),
8123 +
8124 + DEF_BASE("rpc", R8A779A0_CLK_RPC, CLK_TYPE_GEN4_RPC, CLK_RPCSRC),
8125 +diff --git a/drivers/clk/renesas/r8a779f0-cpg-mssr.c b/drivers/clk/renesas/r8a779f0-cpg-mssr.c
8126 +index 4baf355e26d88..27b668def357f 100644
8127 +--- a/drivers/clk/renesas/r8a779f0-cpg-mssr.c
8128 ++++ b/drivers/clk/renesas/r8a779f0-cpg-mssr.c
8129 +@@ -113,7 +113,7 @@ static const struct cpg_core_clk r8a779f0_core_clks[] __initconst = {
8130 + DEF_FIXED("sasyncperd2", R8A779F0_CLK_SASYNCPERD2, R8A779F0_CLK_SASYNCPERD1, 2, 1),
8131 + DEF_FIXED("sasyncperd4", R8A779F0_CLK_SASYNCPERD4, R8A779F0_CLK_SASYNCPERD1, 4, 1),
8132 +
8133 +- DEF_GEN4_SDH("sdh0", R8A779F0_CLK_SD0H, CLK_SDSRC, 0x870),
8134 ++ DEF_GEN4_SDH("sd0h", R8A779F0_CLK_SD0H, CLK_SDSRC, 0x870),
8135 + DEF_GEN4_SD("sd0", R8A779F0_CLK_SD0, R8A779F0_CLK_SD0H, 0x870),
8136 +
8137 + DEF_BASE("rpc", R8A779F0_CLK_RPC, CLK_TYPE_GEN4_RPC, CLK_RPCSRC),
8138 +@@ -126,10 +126,10 @@ static const struct cpg_core_clk r8a779f0_core_clks[] __initconst = {
8139 + };
8140 +
8141 + static const struct mssr_mod_clk r8a779f0_mod_clks[] __initconst = {
8142 +- DEF_MOD("hscif0", 514, R8A779F0_CLK_S0D3),
8143 +- DEF_MOD("hscif1", 515, R8A779F0_CLK_S0D3),
8144 +- DEF_MOD("hscif2", 516, R8A779F0_CLK_S0D3),
8145 +- DEF_MOD("hscif3", 517, R8A779F0_CLK_S0D3),
8146 ++ DEF_MOD("hscif0", 514, R8A779F0_CLK_SASYNCPERD1),
8147 ++ DEF_MOD("hscif1", 515, R8A779F0_CLK_SASYNCPERD1),
8148 ++ DEF_MOD("hscif2", 516, R8A779F0_CLK_SASYNCPERD1),
8149 ++ DEF_MOD("hscif3", 517, R8A779F0_CLK_SASYNCPERD1),
8150 + DEF_MOD("i2c0", 518, R8A779F0_CLK_S0D6_PER),
8151 + DEF_MOD("i2c1", 519, R8A779F0_CLK_S0D6_PER),
8152 + DEF_MOD("i2c2", 520, R8A779F0_CLK_S0D6_PER),
8153 +@@ -142,10 +142,10 @@ static const struct mssr_mod_clk r8a779f0_mod_clks[] __initconst = {
8154 + DEF_MOD("msiof3", 621, R8A779F0_CLK_MSO),
8155 + DEF_MOD("pcie0", 624, R8A779F0_CLK_S0D2),
8156 + DEF_MOD("pcie1", 625, R8A779F0_CLK_S0D2),
8157 +- DEF_MOD("scif0", 702, R8A779F0_CLK_S0D12_PER),
8158 +- DEF_MOD("scif1", 703, R8A779F0_CLK_S0D12_PER),
8159 +- DEF_MOD("scif3", 704, R8A779F0_CLK_S0D12_PER),
8160 +- DEF_MOD("scif4", 705, R8A779F0_CLK_S0D12_PER),
8161 ++ DEF_MOD("scif0", 702, R8A779F0_CLK_SASYNCPERD4),
8162 ++ DEF_MOD("scif1", 703, R8A779F0_CLK_SASYNCPERD4),
8163 ++ DEF_MOD("scif3", 704, R8A779F0_CLK_SASYNCPERD4),
8164 ++ DEF_MOD("scif4", 705, R8A779F0_CLK_SASYNCPERD4),
8165 + DEF_MOD("sdhi0", 706, R8A779F0_CLK_SD0),
8166 + DEF_MOD("sys-dmac0", 709, R8A779F0_CLK_S0D3_PER),
8167 + DEF_MOD("sys-dmac1", 710, R8A779F0_CLK_S0D3_PER),
8168 +diff --git a/drivers/clk/renesas/r9a06g032-clocks.c b/drivers/clk/renesas/r9a06g032-clocks.c
8169 +index 1488c9d6e6394..983faa5707b9c 100644
8170 +--- a/drivers/clk/renesas/r9a06g032-clocks.c
8171 ++++ b/drivers/clk/renesas/r9a06g032-clocks.c
8172 +@@ -412,7 +412,7 @@ static int r9a06g032_attach_dev(struct generic_pm_domain *pd,
8173 + int error;
8174 + int index;
8175 +
8176 +- while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i,
8177 ++ while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i++,
8178 + &clkspec)) {
8179 + if (clkspec.np != pd->dev.of_node)
8180 + continue;
8181 +@@ -425,7 +425,6 @@ static int r9a06g032_attach_dev(struct generic_pm_domain *pd,
8182 + if (error)
8183 + return error;
8184 + }
8185 +- i++;
8186 + }
8187 +
8188 + return 0;
8189 +diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
8190 +index f7827b3b7fc1c..6e5e502be44a6 100644
8191 +--- a/drivers/clk/rockchip/clk-pll.c
8192 ++++ b/drivers/clk/rockchip/clk-pll.c
8193 +@@ -981,6 +981,7 @@ struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
8194 + return mux_clk;
8195 +
8196 + err_pll:
8197 ++ kfree(pll->rate_table);
8198 + clk_unregister(mux_clk);
8199 + mux_clk = pll_clk;
8200 + err_mux:
8201 +diff --git a/drivers/clk/samsung/clk-pll.c b/drivers/clk/samsung/clk-pll.c
8202 +index fe383471c5f0a..0ff28938943f0 100644
8203 +--- a/drivers/clk/samsung/clk-pll.c
8204 ++++ b/drivers/clk/samsung/clk-pll.c
8205 +@@ -1583,6 +1583,7 @@ static void __init _samsung_clk_register_pll(struct samsung_clk_provider *ctx,
8206 + if (ret) {
8207 + pr_err("%s: failed to register pll clock %s : %d\n",
8208 + __func__, pll_clk->name, ret);
8209 ++ kfree(pll->rate_table);
8210 + kfree(pll);
8211 + return;
8212 + }
8213 +diff --git a/drivers/clk/socfpga/clk-gate.c b/drivers/clk/socfpga/clk-gate.c
8214 +index 53d6e3ec4309f..c94b59b80dd43 100644
8215 +--- a/drivers/clk/socfpga/clk-gate.c
8216 ++++ b/drivers/clk/socfpga/clk-gate.c
8217 +@@ -188,8 +188,10 @@ void __init socfpga_gate_init(struct device_node *node)
8218 + return;
8219 +
8220 + ops = kmemdup(&gateclk_ops, sizeof(gateclk_ops), GFP_KERNEL);
8221 +- if (WARN_ON(!ops))
8222 ++ if (WARN_ON(!ops)) {
8223 ++ kfree(socfpga_clk);
8224 + return;
8225 ++ }
8226 +
8227 + rc = of_property_read_u32_array(node, "clk-gate", clk_gate, 2);
8228 + if (rc)
8229 +@@ -243,6 +245,7 @@ void __init socfpga_gate_init(struct device_node *node)
8230 +
8231 + err = clk_hw_register(NULL, hw_clk);
8232 + if (err) {
8233 ++ kfree(ops);
8234 + kfree(socfpga_clk);
8235 + return;
8236 + }
8237 +diff --git a/drivers/clk/st/clkgen-fsyn.c b/drivers/clk/st/clkgen-fsyn.c
8238 +index d820292a381d0..40df1db102a77 100644
8239 +--- a/drivers/clk/st/clkgen-fsyn.c
8240 ++++ b/drivers/clk/st/clkgen-fsyn.c
8241 +@@ -1020,9 +1020,10 @@ static void __init st_of_quadfs_setup(struct device_node *np,
8242 +
8243 + clk = st_clk_register_quadfs_pll(pll_name, clk_parent_name, datac->data,
8244 + reg, lock);
8245 +- if (IS_ERR(clk))
8246 ++ if (IS_ERR(clk)) {
8247 ++ kfree(lock);
8248 + goto err_exit;
8249 +- else
8250 ++ } else
8251 + pr_debug("%s: parent %s rate %u\n",
8252 + __clk_get_name(clk),
8253 + __clk_get_name(clk_get_parent(clk)),
8254 +diff --git a/drivers/clk/visconti/pll.c b/drivers/clk/visconti/pll.c
8255 +index a484cb945d67b..1f3234f226674 100644
8256 +--- a/drivers/clk/visconti/pll.c
8257 ++++ b/drivers/clk/visconti/pll.c
8258 +@@ -277,6 +277,7 @@ static struct clk_hw *visconti_register_pll(struct visconti_pll_provider *ctx,
8259 + ret = clk_hw_register(NULL, &pll->hw);
8260 + if (ret) {
8261 + pr_err("failed to register pll clock %s : %d\n", name, ret);
8262 ++ kfree(pll->rate_table);
8263 + kfree(pll);
8264 + pll_hw_clk = ERR_PTR(ret);
8265 + }
8266 +diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
8267 +index 64dcb082d4cf6..7b952aa52c0b9 100644
8268 +--- a/drivers/clocksource/sh_cmt.c
8269 ++++ b/drivers/clocksource/sh_cmt.c
8270 +@@ -13,6 +13,7 @@
8271 + #include <linux/init.h>
8272 + #include <linux/interrupt.h>
8273 + #include <linux/io.h>
8274 ++#include <linux/iopoll.h>
8275 + #include <linux/ioport.h>
8276 + #include <linux/irq.h>
8277 + #include <linux/module.h>
8278 +@@ -116,6 +117,7 @@ struct sh_cmt_device {
8279 + void __iomem *mapbase;
8280 + struct clk *clk;
8281 + unsigned long rate;
8282 ++ unsigned int reg_delay;
8283 +
8284 + raw_spinlock_t lock; /* Protect the shared start/stop register */
8285 +
8286 +@@ -247,10 +249,17 @@ static inline u32 sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
8287 +
8288 + static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch, u32 value)
8289 + {
8290 +- if (ch->iostart)
8291 +- ch->cmt->info->write_control(ch->iostart, 0, value);
8292 +- else
8293 +- ch->cmt->info->write_control(ch->cmt->mapbase, 0, value);
8294 ++ u32 old_value = sh_cmt_read_cmstr(ch);
8295 ++
8296 ++ if (value != old_value) {
8297 ++ if (ch->iostart) {
8298 ++ ch->cmt->info->write_control(ch->iostart, 0, value);
8299 ++ udelay(ch->cmt->reg_delay);
8300 ++ } else {
8301 ++ ch->cmt->info->write_control(ch->cmt->mapbase, 0, value);
8302 ++ udelay(ch->cmt->reg_delay);
8303 ++ }
8304 ++ }
8305 + }
8306 +
8307 + static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch)
8308 +@@ -260,7 +269,12 @@ static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch)
8309 +
8310 + static inline void sh_cmt_write_cmcsr(struct sh_cmt_channel *ch, u32 value)
8311 + {
8312 +- ch->cmt->info->write_control(ch->ioctrl, CMCSR, value);
8313 ++ u32 old_value = sh_cmt_read_cmcsr(ch);
8314 ++
8315 ++ if (value != old_value) {
8316 ++ ch->cmt->info->write_control(ch->ioctrl, CMCSR, value);
8317 ++ udelay(ch->cmt->reg_delay);
8318 ++ }
8319 + }
8320 +
8321 + static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch)
8322 +@@ -268,14 +282,33 @@ static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch)
8323 + return ch->cmt->info->read_count(ch->ioctrl, CMCNT);
8324 + }
8325 +
8326 +-static inline void sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value)
8327 ++static inline int sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value)
8328 + {
8329 ++ /* Tests showed that we need to wait 3 clocks here */
8330 ++ unsigned int cmcnt_delay = DIV_ROUND_UP(3 * ch->cmt->reg_delay, 2);
8331 ++ u32 reg;
8332 ++
8333 ++ if (ch->cmt->info->model > SH_CMT_16BIT) {
8334 ++ int ret = read_poll_timeout_atomic(sh_cmt_read_cmcsr, reg,
8335 ++ !(reg & SH_CMT32_CMCSR_WRFLG),
8336 ++ 1, cmcnt_delay, false, ch);
8337 ++ if (ret < 0)
8338 ++ return ret;
8339 ++ }
8340 ++
8341 + ch->cmt->info->write_count(ch->ioctrl, CMCNT, value);
8342 ++ udelay(cmcnt_delay);
8343 ++ return 0;
8344 + }
8345 +
8346 + static inline void sh_cmt_write_cmcor(struct sh_cmt_channel *ch, u32 value)
8347 + {
8348 +- ch->cmt->info->write_count(ch->ioctrl, CMCOR, value);
8349 ++ u32 old_value = ch->cmt->info->read_count(ch->ioctrl, CMCOR);
8350 ++
8351 ++ if (value != old_value) {
8352 ++ ch->cmt->info->write_count(ch->ioctrl, CMCOR, value);
8353 ++ udelay(ch->cmt->reg_delay);
8354 ++ }
8355 + }
8356 +
8357 + static u32 sh_cmt_get_counter(struct sh_cmt_channel *ch, u32 *has_wrapped)
8358 +@@ -319,7 +352,7 @@ static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start)
8359 +
8360 + static int sh_cmt_enable(struct sh_cmt_channel *ch)
8361 + {
8362 +- int k, ret;
8363 ++ int ret;
8364 +
8365 + dev_pm_syscore_device(&ch->cmt->pdev->dev, true);
8366 +
8367 +@@ -347,26 +380,9 @@ static int sh_cmt_enable(struct sh_cmt_channel *ch)
8368 + }
8369 +
8370 + sh_cmt_write_cmcor(ch, 0xffffffff);
8371 +- sh_cmt_write_cmcnt(ch, 0);
8372 +-
8373 +- /*
8374 +- * According to the sh73a0 user's manual, as CMCNT can be operated
8375 +- * only by the RCLK (Pseudo 32 kHz), there's one restriction on
8376 +- * modifying CMCNT register; two RCLK cycles are necessary before
8377 +- * this register is either read or any modification of the value
8378 +- * it holds is reflected in the LSI's actual operation.
8379 +- *
8380 +- * While at it, we're supposed to clear out the CMCNT as of this
8381 +- * moment, so make sure it's processed properly here. This will
8382 +- * take RCLKx2 at maximum.
8383 +- */
8384 +- for (k = 0; k < 100; k++) {
8385 +- if (!sh_cmt_read_cmcnt(ch))
8386 +- break;
8387 +- udelay(1);
8388 +- }
8389 ++ ret = sh_cmt_write_cmcnt(ch, 0);
8390 +
8391 +- if (sh_cmt_read_cmcnt(ch)) {
8392 ++ if (ret || sh_cmt_read_cmcnt(ch)) {
8393 + dev_err(&ch->cmt->pdev->dev, "ch%u: cannot clear CMCNT\n",
8394 + ch->index);
8395 + ret = -ETIMEDOUT;
8396 +@@ -995,8 +1011,8 @@ MODULE_DEVICE_TABLE(of, sh_cmt_of_table);
8397 +
8398 + static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
8399 + {
8400 +- unsigned int mask;
8401 +- unsigned int i;
8402 ++ unsigned int mask, i;
8403 ++ unsigned long rate;
8404 + int ret;
8405 +
8406 + cmt->pdev = pdev;
8407 +@@ -1032,10 +1048,16 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
8408 + if (ret < 0)
8409 + goto err_clk_unprepare;
8410 +
8411 +- if (cmt->info->width == 16)
8412 +- cmt->rate = clk_get_rate(cmt->clk) / 512;
8413 +- else
8414 +- cmt->rate = clk_get_rate(cmt->clk) / 8;
8415 ++ rate = clk_get_rate(cmt->clk);
8416 ++ if (!rate) {
8417 ++ ret = -EINVAL;
8418 ++ goto err_clk_disable;
8419 ++ }
8420 ++
8421 ++ /* We shall wait 2 input clks after register writes */
8422 ++ if (cmt->info->model >= SH_CMT_48BIT)
8423 ++ cmt->reg_delay = DIV_ROUND_UP(2UL * USEC_PER_SEC, rate);
8424 ++ cmt->rate = rate / (cmt->info->width == 16 ? 512 : 8);
8425 +
8426 + /* Map the memory resource(s). */
8427 + ret = sh_cmt_map_memory(cmt);
8428 +diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
8429 +index 2737407ff0698..632523c1232f6 100644
8430 +--- a/drivers/clocksource/timer-ti-dm-systimer.c
8431 ++++ b/drivers/clocksource/timer-ti-dm-systimer.c
8432 +@@ -345,8 +345,10 @@ static int __init dmtimer_systimer_init_clock(struct dmtimer_systimer *t,
8433 + return error;
8434 +
8435 + r = clk_get_rate(clock);
8436 +- if (!r)
8437 ++ if (!r) {
8438 ++ clk_disable_unprepare(clock);
8439 + return -ENODEV;
8440 ++ }
8441 +
8442 + if (is_ick)
8443 + t->ick = clock;
8444 +diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c
8445 +index cad29ded3a48f..00af1a8e34fbd 100644
8446 +--- a/drivers/clocksource/timer-ti-dm.c
8447 ++++ b/drivers/clocksource/timer-ti-dm.c
8448 +@@ -1258,7 +1258,7 @@ static struct platform_driver omap_dm_timer_driver = {
8449 + .remove = omap_dm_timer_remove,
8450 + .driver = {
8451 + .name = "omap_timer",
8452 +- .of_match_table = of_match_ptr(omap_timer_match),
8453 ++ .of_match_table = omap_timer_match,
8454 + .pm = &omap_dm_timer_pm_ops,
8455 + },
8456 + };
8457 +diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c
8458 +index d6b80b6dfc287..8439755559b21 100644
8459 +--- a/drivers/counter/stm32-lptimer-cnt.c
8460 ++++ b/drivers/counter/stm32-lptimer-cnt.c
8461 +@@ -69,7 +69,7 @@ static int stm32_lptim_set_enable_state(struct stm32_lptim_cnt *priv,
8462 +
8463 + /* ensure CMP & ARR registers are properly written */
8464 + ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
8465 +- (val & STM32_LPTIM_CMPOK_ARROK),
8466 ++ (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
8467 + 100, 1000);
8468 + if (ret)
8469 + return ret;
8470 +diff --git a/drivers/cpufreq/amd_freq_sensitivity.c b/drivers/cpufreq/amd_freq_sensitivity.c
8471 +index 6448e03bcf488..59b19b9975e8c 100644
8472 +--- a/drivers/cpufreq/amd_freq_sensitivity.c
8473 ++++ b/drivers/cpufreq/amd_freq_sensitivity.c
8474 +@@ -125,6 +125,8 @@ static int __init amd_freq_sensitivity_init(void)
8475 + if (!pcidev) {
8476 + if (!boot_cpu_has(X86_FEATURE_PROC_FEEDBACK))
8477 + return -ENODEV;
8478 ++ } else {
8479 ++ pci_dev_put(pcidev);
8480 + }
8481 +
8482 + if (rdmsrl_safe(MSR_AMD64_FREQ_SENSITIVITY_ACTUAL, &val))
8483 +diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
8484 +index 833589bc95e40..3c623a0bc147f 100644
8485 +--- a/drivers/cpufreq/qcom-cpufreq-hw.c
8486 ++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
8487 +@@ -125,7 +125,35 @@ static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
8488 + return 0;
8489 + }
8490 +
8491 ++static unsigned long qcom_lmh_get_throttle_freq(struct qcom_cpufreq_data *data)
8492 ++{
8493 ++ unsigned int lval;
8494 ++
8495 ++ if (data->soc_data->reg_current_vote)
8496 ++ lval = readl_relaxed(data->base + data->soc_data->reg_current_vote) & 0x3ff;
8497 ++ else
8498 ++ lval = readl_relaxed(data->base + data->soc_data->reg_domain_state) & 0xff;
8499 ++
8500 ++ return lval * xo_rate;
8501 ++}
8502 ++
8503 ++/* Get the current frequency of the CPU (after throttling) */
8504 + static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
8505 ++{
8506 ++ struct qcom_cpufreq_data *data;
8507 ++ struct cpufreq_policy *policy;
8508 ++
8509 ++ policy = cpufreq_cpu_get_raw(cpu);
8510 ++ if (!policy)
8511 ++ return 0;
8512 ++
8513 ++ data = policy->driver_data;
8514 ++
8515 ++ return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
8516 ++}
8517 ++
8518 ++/* Get the frequency requested by the cpufreq core for the CPU */
8519 ++static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
8520 + {
8521 + struct qcom_cpufreq_data *data;
8522 + const struct qcom_cpufreq_soc_data *soc_data;
8523 +@@ -193,6 +221,7 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
8524 + }
8525 + } else if (ret != -ENODEV) {
8526 + dev_err(cpu_dev, "Invalid opp table in device tree\n");
8527 ++ kfree(table);
8528 + return ret;
8529 + } else {
8530 + policy->fast_switch_possible = true;
8531 +@@ -286,18 +315,6 @@ static void qcom_get_related_cpus(int index, struct cpumask *m)
8532 + }
8533 + }
8534 +
8535 +-static unsigned long qcom_lmh_get_throttle_freq(struct qcom_cpufreq_data *data)
8536 +-{
8537 +- unsigned int lval;
8538 +-
8539 +- if (data->soc_data->reg_current_vote)
8540 +- lval = readl_relaxed(data->base + data->soc_data->reg_current_vote) & 0x3ff;
8541 +- else
8542 +- lval = readl_relaxed(data->base + data->soc_data->reg_domain_state) & 0xff;
8543 +-
8544 +- return lval * xo_rate;
8545 +-}
8546 +-
8547 + static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
8548 + {
8549 + struct cpufreq_policy *policy = data->policy;
8550 +@@ -341,7 +358,7 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
8551 + * If h/w throttled frequency is higher than what cpufreq has requested
8552 + * for, then stop polling and switch back to interrupt mechanism.
8553 + */
8554 +- if (throttled_freq >= qcom_cpufreq_hw_get(cpu))
8555 ++ if (throttled_freq >= qcom_cpufreq_get_freq(cpu))
8556 + enable_irq(data->throttle_irq);
8557 + else
8558 + mod_delayed_work(system_highpri_wq, &data->throttle_work,
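In the qcom-cpufreq-hw hunk above, the throttled frequency is recovered by masking a LUT index (`lval`) out of a status register and scaling it by the XO crystal rate. A sketch of that decode, with example register and rate values that are assumptions, not taken from real hardware:

```c
/* Sketch of the lval decode done by qcom_lmh_get_throttle_freq() in the
 * hunk above. The register value and xo_rate below are made-up examples. */
#include <assert.h>

#define HZ_PER_KHZ 1000UL

/* Mask the LUT index out of the status register and scale it by the
 * crystal-oscillator rate to get a frequency in Hz. */
static unsigned long throttle_freq_hz(unsigned int reg, unsigned int mask,
				      unsigned long xo_rate)
{
	return (unsigned long)(reg & mask) * xo_rate;
}
```

Dividing the result by `HZ_PER_KHZ`, as the new `qcom_cpufreq_hw_get()` does, yields the kHz value cpufreq expects.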
8559 +diff --git a/drivers/cpuidle/dt_idle_states.c b/drivers/cpuidle/dt_idle_states.c
8560 +index 252f2a9686a62..448bc796b0b40 100644
8561 +--- a/drivers/cpuidle/dt_idle_states.c
8562 ++++ b/drivers/cpuidle/dt_idle_states.c
8563 +@@ -223,6 +223,6 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
8564 + * also be 0 on platforms with missing DT idle states or legacy DT
8565 + * configuration predating the DT idle states bindings.
8566 + */
8567 +- return i;
8568 ++ return state_idx - start_idx;
8569 + }
8570 + EXPORT_SYMBOL_GPL(dt_init_idle_driver);
8571 +diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
8572 +index 55e75fbb658ee..c30b5a39c2ac2 100644
8573 +--- a/drivers/crypto/Kconfig
8574 ++++ b/drivers/crypto/Kconfig
8575 +@@ -669,7 +669,12 @@ config CRYPTO_DEV_IMGTEC_HASH
8576 + config CRYPTO_DEV_ROCKCHIP
8577 + tristate "Rockchip's Cryptographic Engine driver"
8578 + depends on OF && ARCH_ROCKCHIP
8579 ++ depends on PM
8580 ++ select CRYPTO_ECB
8581 ++ select CRYPTO_CBC
8582 ++ select CRYPTO_DES
8583 + select CRYPTO_AES
8584 ++ select CRYPTO_ENGINE
8585 + select CRYPTO_LIB_DES
8586 + select CRYPTO_MD5
8587 + select CRYPTO_SHA1
8588 +diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
8589 +index 910d6751644cf..902f6be057ec6 100644
8590 +--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
8591 ++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
8592 +@@ -124,7 +124,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
8593 + unsigned int ivsize = crypto_skcipher_ivsize(tfm);
8594 + struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
8595 + int i = 0;
8596 +- u32 a;
8597 ++ dma_addr_t a;
8598 + int err;
8599 +
8600 + rctx->ivlen = ivsize;
8601 +diff --git a/drivers/crypto/amlogic/amlogic-gxl-core.c b/drivers/crypto/amlogic/amlogic-gxl-core.c
8602 +index 6e7ae896717cd..937187027ad57 100644
8603 +--- a/drivers/crypto/amlogic/amlogic-gxl-core.c
8604 ++++ b/drivers/crypto/amlogic/amlogic-gxl-core.c
8605 +@@ -237,7 +237,6 @@ static int meson_crypto_probe(struct platform_device *pdev)
8606 + return err;
8607 + }
8608 +
8609 +- mc->irqs = devm_kcalloc(mc->dev, MAXFLOW, sizeof(int), GFP_KERNEL);
8610 + for (i = 0; i < MAXFLOW; i++) {
8611 + mc->irqs[i] = platform_get_irq(pdev, i);
8612 + if (mc->irqs[i] < 0)
8613 +diff --git a/drivers/crypto/amlogic/amlogic-gxl.h b/drivers/crypto/amlogic/amlogic-gxl.h
8614 +index dc0f142324a3c..8c0746a1d6d43 100644
8615 +--- a/drivers/crypto/amlogic/amlogic-gxl.h
8616 ++++ b/drivers/crypto/amlogic/amlogic-gxl.h
8617 +@@ -95,7 +95,7 @@ struct meson_dev {
8618 + struct device *dev;
8619 + struct meson_flow *chanlist;
8620 + atomic_t flow;
8621 +- int *irqs;
8622 ++ int irqs[MAXFLOW];
8623 + #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
8624 + struct dentry *dbgfs_dir;
8625 + #endif
8626 +diff --git a/drivers/crypto/cavium/nitrox/nitrox_mbx.c b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
8627 +index 9e7308e39b304..d4e06999af9b7 100644
8628 +--- a/drivers/crypto/cavium/nitrox/nitrox_mbx.c
8629 ++++ b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
8630 +@@ -195,6 +195,7 @@ int nitrox_mbox_init(struct nitrox_device *ndev)
8631 + ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", 0, 0);
8632 + if (!ndev->iov.pf2vf_wq) {
8633 + kfree(ndev->iov.vfdev);
8634 ++ ndev->iov.vfdev = NULL;
8635 + return -ENOMEM;
8636 + }
8637 + /* enable pf2vf mailbox interrupts */
8638 +diff --git a/drivers/crypto/ccree/cc_debugfs.c b/drivers/crypto/ccree/cc_debugfs.c
8639 +index 7083767602fcf..8f008f024f8f1 100644
8640 +--- a/drivers/crypto/ccree/cc_debugfs.c
8641 ++++ b/drivers/crypto/ccree/cc_debugfs.c
8642 +@@ -55,7 +55,7 @@ void __init cc_debugfs_global_init(void)
8643 + cc_debugfs_dir = debugfs_create_dir("ccree", NULL);
8644 + }
8645 +
8646 +-void __exit cc_debugfs_global_fini(void)
8647 ++void cc_debugfs_global_fini(void)
8648 + {
8649 + debugfs_remove(cc_debugfs_dir);
8650 + }
8651 +diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
8652 +index cadead18b59e8..d489c6f808925 100644
8653 +--- a/drivers/crypto/ccree/cc_driver.c
8654 ++++ b/drivers/crypto/ccree/cc_driver.c
8655 +@@ -651,9 +651,17 @@ static struct platform_driver ccree_driver = {
8656 +
8657 + static int __init ccree_init(void)
8658 + {
8659 ++ int rc;
8660 ++
8661 + cc_debugfs_global_init();
8662 +
8663 +- return platform_driver_register(&ccree_driver);
8664 ++ rc = platform_driver_register(&ccree_driver);
8665 ++ if (rc) {
8666 ++ cc_debugfs_global_fini();
8667 ++ return rc;
8668 ++ }
8669 ++
8670 ++ return 0;
8671 + }
8672 + module_init(ccree_init);
8673 +
8674 +diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
8675 +index 471e5ca720f57..baf1faec7046f 100644
8676 +--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
8677 ++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
8678 +@@ -1437,18 +1437,12 @@ err_with_qm_init:
8679 + static void hpre_remove(struct pci_dev *pdev)
8680 + {
8681 + struct hisi_qm *qm = pci_get_drvdata(pdev);
8682 +- int ret;
8683 +
8684 + hisi_qm_pm_uninit(qm);
8685 + hisi_qm_wait_task_finish(qm, &hpre_devices);
8686 + hisi_qm_alg_unregister(qm, &hpre_devices);
8687 +- if (qm->fun_type == QM_HW_PF && qm->vfs_num) {
8688 +- ret = hisi_qm_sriov_disable(pdev, true);
8689 +- if (ret) {
8690 +- pci_err(pdev, "Disable SRIOV fail!\n");
8691 +- return;
8692 +- }
8693 +- }
8694 ++ if (qm->fun_type == QM_HW_PF && qm->vfs_num)
8695 ++ hisi_qm_sriov_disable(pdev, true);
8696 +
8697 + hpre_debugfs_exit(qm);
8698 + hisi_qm_stop(qm, QM_NORMAL);
8699 +diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
8700 +index 8b387de69d229..07e1e39a5e378 100644
8701 +--- a/drivers/crypto/hisilicon/qm.c
8702 ++++ b/drivers/crypto/hisilicon/qm.c
8703 +@@ -250,7 +250,6 @@
8704 + #define QM_QOS_MIN_CIR_B 100
8705 + #define QM_QOS_MAX_CIR_U 6
8706 + #define QM_QOS_MAX_CIR_S 11
8707 +-#define QM_QOS_VAL_MAX_LEN 32
8708 + #define QM_DFX_BASE 0x0100000
8709 + #define QM_DFX_STATE1 0x0104000
8710 + #define QM_DFX_STATE2 0x01040C8
8711 +@@ -359,7 +358,7 @@ static const struct hisi_qm_cap_info qm_cap_info_vf[] = {
8712 + static const struct hisi_qm_cap_info qm_basic_info[] = {
8713 + {QM_TOTAL_QP_NUM_CAP, 0x100158, 0, GENMASK(10, 0), 0x1000, 0x400, 0x400},
8714 + {QM_FUNC_MAX_QP_CAP, 0x100158, 11, GENMASK(10, 0), 0x1000, 0x400, 0x400},
8715 +- {QM_XEQ_DEPTH_CAP, 0x3104, 0, GENMASK(15, 0), 0x800, 0x4000800, 0x4000800},
8716 ++ {QM_XEQ_DEPTH_CAP, 0x3104, 0, GENMASK(31, 0), 0x800, 0x4000800, 0x4000800},
8717 + {QM_QP_DEPTH_CAP, 0x3108, 0, GENMASK(31, 0), 0x4000400, 0x4000400, 0x4000400},
8718 + {QM_EQ_IRQ_TYPE_CAP, 0x310c, 0, GENMASK(31, 0), 0x10000, 0x10000, 0x10000},
8719 + {QM_AEQ_IRQ_TYPE_CAP, 0x3110, 0, GENMASK(31, 0), 0x0, 0x10001, 0x10001},
8720 +@@ -909,8 +908,8 @@ static void qm_get_xqc_depth(struct hisi_qm *qm, u16 *low_bits,
8721 + u32 depth;
8722 +
8723 + depth = hisi_qm_get_hw_info(qm, qm_basic_info, type, qm->cap_ver);
8724 +- *high_bits = depth & QM_XQ_DEPTH_MASK;
8725 +- *low_bits = (depth >> QM_XQ_DEPTH_SHIFT) & QM_XQ_DEPTH_MASK;
8726 ++ *low_bits = depth & QM_XQ_DEPTH_MASK;
8727 ++ *high_bits = (depth >> QM_XQ_DEPTH_SHIFT) & QM_XQ_DEPTH_MASK;
8728 + }
8729 +
8730 + static u32 qm_get_irq_num(struct hisi_qm *qm)
8731 +@@ -4614,7 +4613,7 @@ static ssize_t qm_get_qos_value(struct hisi_qm *qm, const char *buf,
8732 + unsigned int *fun_index)
8733 + {
8734 + char tbuf_bdf[QM_DBG_READ_LEN] = {0};
8735 +- char val_buf[QM_QOS_VAL_MAX_LEN] = {0};
8736 ++ char val_buf[QM_DBG_READ_LEN] = {0};
8737 + u32 tmp1, device, function;
8738 + int ret, bus;
8739 +
8740 +@@ -5725,6 +5724,7 @@ static void qm_pf_reset_vf_done(struct hisi_qm *qm)
8741 + cmd = QM_VF_START_FAIL;
8742 + }
8743 +
8744 ++ qm_cmd_init(qm);
8745 + ret = qm_ping_pf(qm, cmd);
8746 + if (ret)
8747 + dev_warn(&pdev->dev, "PF responds timeout in reset done!\n");
8748 +@@ -5786,7 +5786,6 @@ static void qm_pf_reset_vf_process(struct hisi_qm *qm,
8749 + goto err_get_status;
8750 +
8751 + qm_pf_reset_vf_done(qm);
8752 +- qm_cmd_init(qm);
8753 +
8754 + dev_info(dev, "device reset done.\n");
8755 +
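The qm.c hunk above swaps `*high_bits` and `*low_bits` in `qm_get_xqc_depth()`: the low half of the packed depth word belongs in the low-bits output, not the high one. A sketch of the corrected unpacking; the 16-bit split is inferred from the `0x4000800` default in the capability table (low = 0x800, high = 0x400) and the mask/shift names are illustrative:

```c
/* Sketch of the corrected field unpacking in qm_get_xqc_depth() above.
 * Mask and shift values are assumptions mirroring the 16-bit split
 * implied by the 0x4000800 default, not copied from the kernel headers. */
#include <assert.h>
#include <stdint.h>

#define XQ_DEPTH_MASK  0xffffu
#define XQ_DEPTH_SHIFT 16

static void get_xqc_depth(uint32_t depth, uint16_t *low, uint16_t *high)
{
	*low  = depth & XQ_DEPTH_MASK;
	*high = (depth >> XQ_DEPTH_SHIFT) & XQ_DEPTH_MASK;
}
```

With the pre-fix assignments reversed, a default of 0x4000800 would have reported a low depth of 0x400 and a high depth of 0x800, the opposite of what the hardware encodes.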
8756 +diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
8757 +index d8e82d69745d8..9629e98bd68b7 100644
8758 +--- a/drivers/crypto/img-hash.c
8759 ++++ b/drivers/crypto/img-hash.c
8760 +@@ -358,12 +358,16 @@ static int img_hash_dma_init(struct img_hash_dev *hdev)
8761 + static void img_hash_dma_task(unsigned long d)
8762 + {
8763 + struct img_hash_dev *hdev = (struct img_hash_dev *)d;
8764 +- struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
8765 ++ struct img_hash_request_ctx *ctx;
8766 + u8 *addr;
8767 + size_t nbytes, bleft, wsend, len, tbc;
8768 + struct scatterlist tsg;
8769 +
8770 +- if (!hdev->req || !ctx->sg)
8771 ++ if (!hdev->req)
8772 ++ return;
8773 ++
8774 ++ ctx = ahash_request_ctx(hdev->req);
8775 ++ if (!ctx->sg)
8776 + return;
8777 +
8778 + addr = sg_virt(ctx->sg);
8779 +diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
8780 +index 655a7f5a406a1..cbeda59c6b191 100644
8781 +--- a/drivers/crypto/omap-sham.c
8782 ++++ b/drivers/crypto/omap-sham.c
8783 +@@ -2114,7 +2114,7 @@ static int omap_sham_probe(struct platform_device *pdev)
8784 +
8785 + pm_runtime_enable(dev);
8786 +
8787 +- err = pm_runtime_get_sync(dev);
8788 ++ err = pm_runtime_resume_and_get(dev);
8789 + if (err < 0) {
8790 + dev_err(dev, "failed to get sync: %d\n", err);
8791 + goto err_pm;
8792 +diff --git a/drivers/crypto/qat/qat_4xxx/adf_drv.c b/drivers/crypto/qat/qat_4xxx/adf_drv.c
8793 +index 2f212561acc47..670a58b25cb16 100644
8794 +--- a/drivers/crypto/qat/qat_4xxx/adf_drv.c
8795 ++++ b/drivers/crypto/qat/qat_4xxx/adf_drv.c
8796 +@@ -261,6 +261,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
8797 + hw_data->accel_capabilities_mask = hw_data->get_accel_cap(accel_dev);
8798 + if (!hw_data->accel_capabilities_mask) {
8799 + dev_err(&pdev->dev, "Failed to get capabilities mask.\n");
8800 ++ ret = -EINVAL;
8801 + goto out_err;
8802 + }
8803 +
8804 +diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
8805 +index 35d73061d1569..14a0aef18ab13 100644
8806 +--- a/drivers/crypto/rockchip/rk3288_crypto.c
8807 ++++ b/drivers/crypto/rockchip/rk3288_crypto.c
8808 +@@ -65,186 +65,24 @@ static void rk_crypto_disable_clk(struct rk_crypto_info *dev)
8809 + clk_disable_unprepare(dev->sclk);
8810 + }
8811 +
8812 +-static int check_alignment(struct scatterlist *sg_src,
8813 +- struct scatterlist *sg_dst,
8814 +- int align_mask)
8815 +-{
8816 +- int in, out, align;
8817 +-
8818 +- in = IS_ALIGNED((uint32_t)sg_src->offset, 4) &&
8819 +- IS_ALIGNED((uint32_t)sg_src->length, align_mask);
8820 +- if (!sg_dst)
8821 +- return in;
8822 +- out = IS_ALIGNED((uint32_t)sg_dst->offset, 4) &&
8823 +- IS_ALIGNED((uint32_t)sg_dst->length, align_mask);
8824 +- align = in && out;
8825 +-
8826 +- return (align && (sg_src->length == sg_dst->length));
8827 +-}
8828 +-
8829 +-static int rk_load_data(struct rk_crypto_info *dev,
8830 +- struct scatterlist *sg_src,
8831 +- struct scatterlist *sg_dst)
8832 +-{
8833 +- unsigned int count;
8834 +-
8835 +- dev->aligned = dev->aligned ?
8836 +- check_alignment(sg_src, sg_dst, dev->align_size) :
8837 +- dev->aligned;
8838 +- if (dev->aligned) {
8839 +- count = min(dev->left_bytes, sg_src->length);
8840 +- dev->left_bytes -= count;
8841 +-
8842 +- if (!dma_map_sg(dev->dev, sg_src, 1, DMA_TO_DEVICE)) {
8843 +- dev_err(dev->dev, "[%s:%d] dma_map_sg(src) error\n",
8844 +- __func__, __LINE__);
8845 +- return -EINVAL;
8846 +- }
8847 +- dev->addr_in = sg_dma_address(sg_src);
8848 +-
8849 +- if (sg_dst) {
8850 +- if (!dma_map_sg(dev->dev, sg_dst, 1, DMA_FROM_DEVICE)) {
8851 +- dev_err(dev->dev,
8852 +- "[%s:%d] dma_map_sg(dst) error\n",
8853 +- __func__, __LINE__);
8854 +- dma_unmap_sg(dev->dev, sg_src, 1,
8855 +- DMA_TO_DEVICE);
8856 +- return -EINVAL;
8857 +- }
8858 +- dev->addr_out = sg_dma_address(sg_dst);
8859 +- }
8860 +- } else {
8861 +- count = (dev->left_bytes > PAGE_SIZE) ?
8862 +- PAGE_SIZE : dev->left_bytes;
8863 +-
8864 +- if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
8865 +- dev->addr_vir, count,
8866 +- dev->total - dev->left_bytes)) {
8867 +- dev_err(dev->dev, "[%s:%d] pcopy err\n",
8868 +- __func__, __LINE__);
8869 +- return -EINVAL;
8870 +- }
8871 +- dev->left_bytes -= count;
8872 +- sg_init_one(&dev->sg_tmp, dev->addr_vir, count);
8873 +- if (!dma_map_sg(dev->dev, &dev->sg_tmp, 1, DMA_TO_DEVICE)) {
8874 +- dev_err(dev->dev, "[%s:%d] dma_map_sg(sg_tmp) error\n",
8875 +- __func__, __LINE__);
8876 +- return -ENOMEM;
8877 +- }
8878 +- dev->addr_in = sg_dma_address(&dev->sg_tmp);
8879 +-
8880 +- if (sg_dst) {
8881 +- if (!dma_map_sg(dev->dev, &dev->sg_tmp, 1,
8882 +- DMA_FROM_DEVICE)) {
8883 +- dev_err(dev->dev,
8884 +- "[%s:%d] dma_map_sg(sg_tmp) error\n",
8885 +- __func__, __LINE__);
8886 +- dma_unmap_sg(dev->dev, &dev->sg_tmp, 1,
8887 +- DMA_TO_DEVICE);
8888 +- return -ENOMEM;
8889 +- }
8890 +- dev->addr_out = sg_dma_address(&dev->sg_tmp);
8891 +- }
8892 +- }
8893 +- dev->count = count;
8894 +- return 0;
8895 +-}
8896 +-
8897 +-static void rk_unload_data(struct rk_crypto_info *dev)
8898 +-{
8899 +- struct scatterlist *sg_in, *sg_out;
8900 +-
8901 +- sg_in = dev->aligned ? dev->sg_src : &dev->sg_tmp;
8902 +- dma_unmap_sg(dev->dev, sg_in, 1, DMA_TO_DEVICE);
8903 +-
8904 +- if (dev->sg_dst) {
8905 +- sg_out = dev->aligned ? dev->sg_dst : &dev->sg_tmp;
8906 +- dma_unmap_sg(dev->dev, sg_out, 1, DMA_FROM_DEVICE);
8907 +- }
8908 +-}
8909 +-
8910 + static irqreturn_t rk_crypto_irq_handle(int irq, void *dev_id)
8911 + {
8912 + struct rk_crypto_info *dev = platform_get_drvdata(dev_id);
8913 + u32 interrupt_status;
8914 +
8915 +- spin_lock(&dev->lock);
8916 + interrupt_status = CRYPTO_READ(dev, RK_CRYPTO_INTSTS);
8917 + CRYPTO_WRITE(dev, RK_CRYPTO_INTSTS, interrupt_status);
8918 +
8919 ++ dev->status = 1;
8920 + if (interrupt_status & 0x0a) {
8921 + dev_warn(dev->dev, "DMA Error\n");
8922 +- dev->err = -EFAULT;
8923 ++ dev->status = 0;
8924 + }
8925 +- tasklet_schedule(&dev->done_task);
8926 ++ complete(&dev->complete);
8927 +
8928 +- spin_unlock(&dev->lock);
8929 + return IRQ_HANDLED;
8930 + }
8931 +
8932 +-static int rk_crypto_enqueue(struct rk_crypto_info *dev,
8933 +- struct crypto_async_request *async_req)
8934 +-{
8935 +- unsigned long flags;
8936 +- int ret;
8937 +-
8938 +- spin_lock_irqsave(&dev->lock, flags);
8939 +- ret = crypto_enqueue_request(&dev->queue, async_req);
8940 +- if (dev->busy) {
8941 +- spin_unlock_irqrestore(&dev->lock, flags);
8942 +- return ret;
8943 +- }
8944 +- dev->busy = true;
8945 +- spin_unlock_irqrestore(&dev->lock, flags);
8946 +- tasklet_schedule(&dev->queue_task);
8947 +-
8948 +- return ret;
8949 +-}
8950 +-
8951 +-static void rk_crypto_queue_task_cb(unsigned long data)
8952 +-{
8953 +- struct rk_crypto_info *dev = (struct rk_crypto_info *)data;
8954 +- struct crypto_async_request *async_req, *backlog;
8955 +- unsigned long flags;
8956 +- int err = 0;
8957 +-
8958 +- dev->err = 0;
8959 +- spin_lock_irqsave(&dev->lock, flags);
8960 +- backlog = crypto_get_backlog(&dev->queue);
8961 +- async_req = crypto_dequeue_request(&dev->queue);
8962 +-
8963 +- if (!async_req) {
8964 +- dev->busy = false;
8965 +- spin_unlock_irqrestore(&dev->lock, flags);
8966 +- return;
8967 +- }
8968 +- spin_unlock_irqrestore(&dev->lock, flags);
8969 +-
8970 +- if (backlog) {
8971 +- backlog->complete(backlog, -EINPROGRESS);
8972 +- backlog = NULL;
8973 +- }
8974 +-
8975 +- dev->async_req = async_req;
8976 +- err = dev->start(dev);
8977 +- if (err)
8978 +- dev->complete(dev->async_req, err);
8979 +-}
8980 +-
8981 +-static void rk_crypto_done_task_cb(unsigned long data)
8982 +-{
8983 +- struct rk_crypto_info *dev = (struct rk_crypto_info *)data;
8984 +-
8985 +- if (dev->err) {
8986 +- dev->complete(dev->async_req, dev->err);
8987 +- return;
8988 +- }
8989 +-
8990 +- dev->err = dev->update(dev);
8991 +- if (dev->err)
8992 +- dev->complete(dev->async_req, dev->err);
8993 +-}
8994 +-
8995 + static struct rk_crypto_tmp *rk_cipher_algs[] = {
8996 + &rk_ecb_aes_alg,
8997 + &rk_cbc_aes_alg,
8998 +@@ -337,8 +175,6 @@ static int rk_crypto_probe(struct platform_device *pdev)
8999 + if (err)
9000 + goto err_crypto;
9001 +
9002 +- spin_lock_init(&crypto_info->lock);
9003 +-
9004 + crypto_info->reg = devm_platform_ioremap_resource(pdev, 0);
9005 + if (IS_ERR(crypto_info->reg)) {
9006 + err = PTR_ERR(crypto_info->reg);
9007 +@@ -389,18 +225,11 @@ static int rk_crypto_probe(struct platform_device *pdev)
9008 + crypto_info->dev = &pdev->dev;
9009 + platform_set_drvdata(pdev, crypto_info);
9010 +
9011 +- tasklet_init(&crypto_info->queue_task,
9012 +- rk_crypto_queue_task_cb, (unsigned long)crypto_info);
9013 +- tasklet_init(&crypto_info->done_task,
9014 +- rk_crypto_done_task_cb, (unsigned long)crypto_info);
9015 +- crypto_init_queue(&crypto_info->queue, 50);
9016 ++ crypto_info->engine = crypto_engine_alloc_init(&pdev->dev, true);
9017 ++ crypto_engine_start(crypto_info->engine);
9018 ++ init_completion(&crypto_info->complete);
9019 +
9020 +- crypto_info->enable_clk = rk_crypto_enable_clk;
9021 +- crypto_info->disable_clk = rk_crypto_disable_clk;
9022 +- crypto_info->load_data = rk_load_data;
9023 +- crypto_info->unload_data = rk_unload_data;
9024 +- crypto_info->enqueue = rk_crypto_enqueue;
9025 +- crypto_info->busy = false;
9026 ++ rk_crypto_enable_clk(crypto_info);
9027 +
9028 + err = rk_crypto_register(crypto_info);
9029 + if (err) {
9030 +@@ -412,9 +241,9 @@ static int rk_crypto_probe(struct platform_device *pdev)
9031 + return 0;
9032 +
9033 + err_register_alg:
9034 +- tasklet_kill(&crypto_info->queue_task);
9035 +- tasklet_kill(&crypto_info->done_task);
9036 ++ crypto_engine_exit(crypto_info->engine);
9037 + err_crypto:
9038 ++ dev_err(dev, "Crypto Accelerator not successfully registered\n");
9039 + return err;
9040 + }
9041 +
9042 +@@ -423,8 +252,8 @@ static int rk_crypto_remove(struct platform_device *pdev)
9043 + struct rk_crypto_info *crypto_tmp = platform_get_drvdata(pdev);
9044 +
9045 + rk_crypto_unregister();
9046 +- tasklet_kill(&crypto_tmp->done_task);
9047 +- tasklet_kill(&crypto_tmp->queue_task);
9048 ++ rk_crypto_disable_clk(crypto_tmp);
9049 ++ crypto_engine_exit(crypto_tmp->engine);
9050 + return 0;
9051 + }
9052 +
9053 +diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
9054 +index 97278c2574ff9..045e811b4af84 100644
9055 +--- a/drivers/crypto/rockchip/rk3288_crypto.h
9056 ++++ b/drivers/crypto/rockchip/rk3288_crypto.h
9057 +@@ -5,9 +5,11 @@
9058 + #include <crypto/aes.h>
9059 + #include <crypto/internal/des.h>
9060 + #include <crypto/algapi.h>
9061 ++#include <linux/dma-mapping.h>
9062 + #include <linux/interrupt.h>
9063 + #include <linux/delay.h>
9064 + #include <linux/scatterlist.h>
9065 ++#include <crypto/engine.h>
9066 + #include <crypto/internal/hash.h>
9067 + #include <crypto/internal/skcipher.h>
9068 +
9069 +@@ -193,45 +195,15 @@ struct rk_crypto_info {
9070 + struct reset_control *rst;
9071 + void __iomem *reg;
9072 + int irq;
9073 +- struct crypto_queue queue;
9074 +- struct tasklet_struct queue_task;
9075 +- struct tasklet_struct done_task;
9076 +- struct crypto_async_request *async_req;
9077 +- int err;
9078 +- /* device lock */
9079 +- spinlock_t lock;
9080 +-
9081 +- /* the public variable */
9082 +- struct scatterlist *sg_src;
9083 +- struct scatterlist *sg_dst;
9084 +- struct scatterlist sg_tmp;
9085 +- struct scatterlist *first;
9086 +- unsigned int left_bytes;
9087 +- void *addr_vir;
9088 +- int aligned;
9089 +- int align_size;
9090 +- size_t src_nents;
9091 +- size_t dst_nents;
9092 +- unsigned int total;
9093 +- unsigned int count;
9094 +- dma_addr_t addr_in;
9095 +- dma_addr_t addr_out;
9096 +- bool busy;
9097 +- int (*start)(struct rk_crypto_info *dev);
9098 +- int (*update)(struct rk_crypto_info *dev);
9099 +- void (*complete)(struct crypto_async_request *base, int err);
9100 +- int (*enable_clk)(struct rk_crypto_info *dev);
9101 +- void (*disable_clk)(struct rk_crypto_info *dev);
9102 +- int (*load_data)(struct rk_crypto_info *dev,
9103 +- struct scatterlist *sg_src,
9104 +- struct scatterlist *sg_dst);
9105 +- void (*unload_data)(struct rk_crypto_info *dev);
9106 +- int (*enqueue)(struct rk_crypto_info *dev,
9107 +- struct crypto_async_request *async_req);
9108 ++
9109 ++ struct crypto_engine *engine;
9110 ++ struct completion complete;
9111 ++ int status;
9112 + };
9113 +
9114 + /* the private variable of hash */
9115 + struct rk_ahash_ctx {
9116 ++ struct crypto_engine_ctx enginectx;
9117 + struct rk_crypto_info *dev;
9118 + /* for fallback */
9119 + struct crypto_ahash *fallback_tfm;
9120 +@@ -241,14 +213,23 @@ struct rk_ahash_ctx {
9121 + struct rk_ahash_rctx {
9122 + struct ahash_request fallback_req;
9123 + u32 mode;
9124 ++ int nrsg;
9125 + };
9126 +
9127 + /* the private variable of cipher */
9128 + struct rk_cipher_ctx {
9129 ++ struct crypto_engine_ctx enginectx;
9130 + struct rk_crypto_info *dev;
9131 + unsigned int keylen;
9132 +- u32 mode;
9133 ++ u8 key[AES_MAX_KEY_SIZE];
9134 + u8 iv[AES_BLOCK_SIZE];
9135 ++ struct crypto_skcipher *fallback_tfm;
9136 ++};
9137 ++
9138 ++struct rk_cipher_rctx {
9139 ++ u8 backup_iv[AES_BLOCK_SIZE];
9140 ++ u32 mode;
9141 ++ struct skcipher_request fallback_req; // keep at the end
9142 + };
9143 +
9144 + enum alg_type {
9145 +diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
9146 +index ed03058497bc2..edd40e16a3f0a 100644
9147 +--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
9148 ++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
9149 +@@ -9,6 +9,7 @@
9150 + * Some ideas are from marvell/cesa.c and s5p-sss.c driver.
9151 + */
9152 + #include <linux/device.h>
9153 ++#include <asm/unaligned.h>
9154 + #include "rk3288_crypto.h"
9155 +
9156 + /*
9157 +@@ -16,6 +17,40 @@
9158 + * so we put the fixed hash out when met zero message.
9159 + */
9160 +
9161 ++static bool rk_ahash_need_fallback(struct ahash_request *req)
9162 ++{
9163 ++ struct scatterlist *sg;
9164 ++
9165 ++ sg = req->src;
9166 ++ while (sg) {
9167 ++ if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
9168 ++ return true;
9169 ++ }
9170 ++ if (sg->length % 4) {
9171 ++ return true;
9172 ++ }
9173 ++ sg = sg_next(sg);
9174 ++ }
9175 ++ return false;
9176 ++}
9177 ++
9178 ++static int rk_ahash_digest_fb(struct ahash_request *areq)
9179 ++{
9180 ++ struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
9181 ++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
9182 ++ struct rk_ahash_ctx *tfmctx = crypto_ahash_ctx(tfm);
9183 ++
9184 ++ ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
9185 ++ rctx->fallback_req.base.flags = areq->base.flags &
9186 ++ CRYPTO_TFM_REQ_MAY_SLEEP;
9187 ++
9188 ++ rctx->fallback_req.nbytes = areq->nbytes;
9189 ++ rctx->fallback_req.src = areq->src;
9190 ++ rctx->fallback_req.result = areq->result;
9191 ++
9192 ++ return crypto_ahash_digest(&rctx->fallback_req);
9193 ++}
9194 ++
9195 + static int zero_message_process(struct ahash_request *req)
9196 + {
9197 + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
9198 +@@ -38,16 +73,12 @@ static int zero_message_process(struct ahash_request *req)
9199 + return 0;
9200 + }
9201 +
9202 +-static void rk_ahash_crypto_complete(struct crypto_async_request *base, int err)
9203 +-{
9204 +- if (base->complete)
9205 +- base->complete(base, err);
9206 +-}
9207 +-
9208 +-static void rk_ahash_reg_init(struct rk_crypto_info *dev)
9209 ++static void rk_ahash_reg_init(struct ahash_request *req)
9210 + {
9211 +- struct ahash_request *req = ahash_request_cast(dev->async_req);
9212 + struct rk_ahash_rctx *rctx = ahash_request_ctx(req);
9213 ++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
9214 ++ struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
9215 ++ struct rk_crypto_info *dev = tctx->dev;
9216 + int reg_status;
9217 +
9218 + reg_status = CRYPTO_READ(dev, RK_CRYPTO_CTRL) |
9219 +@@ -74,7 +105,7 @@ static void rk_ahash_reg_init(struct rk_crypto_info *dev)
9220 + RK_CRYPTO_BYTESWAP_BRFIFO |
9221 + RK_CRYPTO_BYTESWAP_BTFIFO);
9222 +
9223 +- CRYPTO_WRITE(dev, RK_CRYPTO_HASH_MSG_LEN, dev->total);
9224 ++ CRYPTO_WRITE(dev, RK_CRYPTO_HASH_MSG_LEN, req->nbytes);
9225 + }
9226 +
9227 + static int rk_ahash_init(struct ahash_request *req)
9228 +@@ -167,48 +198,64 @@ static int rk_ahash_digest(struct ahash_request *req)
9229 + struct rk_ahash_ctx *tctx = crypto_tfm_ctx(req->base.tfm);
9230 + struct rk_crypto_info *dev = tctx->dev;
9231 +
9232 ++ if (rk_ahash_need_fallback(req))
9233 ++ return rk_ahash_digest_fb(req);
9234 ++
9235 + if (!req->nbytes)
9236 + return zero_message_process(req);
9237 +- else
9238 +- return dev->enqueue(dev, &req->base);
9239 ++
9240 ++ return crypto_transfer_hash_request_to_engine(dev->engine, req);
9241 + }
9242 +
9243 +-static void crypto_ahash_dma_start(struct rk_crypto_info *dev)
9244 ++static void crypto_ahash_dma_start(struct rk_crypto_info *dev, struct scatterlist *sg)
9245 + {
9246 +- CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAS, dev->addr_in);
9247 +- CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAL, (dev->count + 3) / 4);
9248 ++ CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAS, sg_dma_address(sg));
9249 ++ CRYPTO_WRITE(dev, RK_CRYPTO_HRDMAL, sg_dma_len(sg) / 4);
9250 + CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_HASH_START |
9251 + (RK_CRYPTO_HASH_START << 16));
9252 + }
9253 +
9254 +-static int rk_ahash_set_data_start(struct rk_crypto_info *dev)
9255 ++static int rk_hash_prepare(struct crypto_engine *engine, void *breq)
9256 ++{
9257 ++ struct ahash_request *areq = container_of(breq, struct ahash_request, base);
9258 ++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
9259 ++ struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
9260 ++ struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
9261 ++ int ret;
9262 ++
9263 ++ ret = dma_map_sg(tctx->dev->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE);
9264 ++ if (ret <= 0)
9265 ++ return -EINVAL;
9266 ++
9267 ++ rctx->nrsg = ret;
9268 ++
9269 ++ return 0;
9270 ++}
9271 ++
9272 ++static int rk_hash_unprepare(struct crypto_engine *engine, void *breq)
9273 + {
9274 +- int err;
9275 ++ struct ahash_request *areq = container_of(breq, struct ahash_request, base);
9276 ++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
9277 ++ struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
9278 ++ struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
9279 +
9280 +- err = dev->load_data(dev, dev->sg_src, NULL);
9281 +- if (!err)
9282 +- crypto_ahash_dma_start(dev);
9283 +- return err;
9284 ++ dma_unmap_sg(tctx->dev->dev, areq->src, rctx->nrsg, DMA_TO_DEVICE);
9285 ++ return 0;
9286 + }
9287 +
9288 +-static int rk_ahash_start(struct rk_crypto_info *dev)
9289 ++static int rk_hash_run(struct crypto_engine *engine, void *breq)
9290 + {
9291 +- struct ahash_request *req = ahash_request_cast(dev->async_req);
9292 +- struct crypto_ahash *tfm;
9293 +- struct rk_ahash_rctx *rctx;
9294 +-
9295 +- dev->total = req->nbytes;
9296 +- dev->left_bytes = req->nbytes;
9297 +- dev->aligned = 0;
9298 +- dev->align_size = 4;
9299 +- dev->sg_dst = NULL;
9300 +- dev->sg_src = req->src;
9301 +- dev->first = req->src;
9302 +- dev->src_nents = sg_nents(req->src);
9303 +- rctx = ahash_request_ctx(req);
9304 ++ struct ahash_request *areq = container_of(breq, struct ahash_request, base);
9305 ++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
9306 ++ struct rk_ahash_rctx *rctx = ahash_request_ctx(areq);
9307 ++ struct rk_ahash_ctx *tctx = crypto_ahash_ctx(tfm);
9308 ++ struct scatterlist *sg = areq->src;
9309 ++ int err = 0;
9310 ++ int i;
9311 ++ u32 v;
9312 ++
9313 + rctx->mode = 0;
9314 +
9315 +- tfm = crypto_ahash_reqtfm(req);
9316 + switch (crypto_ahash_digestsize(tfm)) {
9317 + case SHA1_DIGEST_SIZE:
9318 + rctx->mode = RK_CRYPTO_HASH_SHA1;
9319 +@@ -220,32 +267,26 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
9320 + rctx->mode = RK_CRYPTO_HASH_MD5;
9321 + break;
9322 + default:
9323 +- return -EINVAL;
9324 ++ err = -EINVAL;
9325 ++ goto theend;
9326 + }
9327 +
9328 +- rk_ahash_reg_init(dev);
9329 +- return rk_ahash_set_data_start(dev);
9330 +-}
9331 +-
9332 +-static int rk_ahash_crypto_rx(struct rk_crypto_info *dev)
9333 +-{
9334 +- int err = 0;
9335 +- struct ahash_request *req = ahash_request_cast(dev->async_req);
9336 +- struct crypto_ahash *tfm;
9337 +-
9338 +- dev->unload_data(dev);
9339 +- if (dev->left_bytes) {
9340 +- if (dev->aligned) {
9341 +- if (sg_is_last(dev->sg_src)) {
9342 +- dev_warn(dev->dev, "[%s:%d], Lack of data\n",
9343 +- __func__, __LINE__);
9344 +- err = -ENOMEM;
9345 +- goto out_rx;
9346 +- }
9347 +- dev->sg_src = sg_next(dev->sg_src);
9348 ++ rk_ahash_reg_init(areq);
9349 ++
9350 ++ while (sg) {
9351 ++ reinit_completion(&tctx->dev->complete);
9352 ++ tctx->dev->status = 0;
9353 ++ crypto_ahash_dma_start(tctx->dev, sg);
9354 ++ wait_for_completion_interruptible_timeout(&tctx->dev->complete,
9355 ++ msecs_to_jiffies(2000));
9356 ++ if (!tctx->dev->status) {
9357 ++ dev_err(tctx->dev->dev, "DMA timeout\n");
9358 ++ err = -EFAULT;
9359 ++ goto theend;
9360 + }
9361 +- err = rk_ahash_set_data_start(dev);
9362 +- } else {
9363 ++ sg = sg_next(sg);
9364 ++ }
9365 ++
9366 + /*
9367 + * it will take some time to process date after last dma
9368 + * transmission.
9369 +@@ -256,18 +297,20 @@ static int rk_ahash_crypto_rx(struct rk_crypto_info *dev)
9370 + * efficiency, and make it response quickly when dma
9371 + * complete.
9372 + */
9373 +- while (!CRYPTO_READ(dev, RK_CRYPTO_HASH_STS))
9374 +- udelay(10);
9375 +-
9376 +- tfm = crypto_ahash_reqtfm(req);
9377 +- memcpy_fromio(req->result, dev->reg + RK_CRYPTO_HASH_DOUT_0,
9378 +- crypto_ahash_digestsize(tfm));
9379 +- dev->complete(dev->async_req, 0);
9380 +- tasklet_schedule(&dev->queue_task);
9381 ++ while (!CRYPTO_READ(tctx->dev, RK_CRYPTO_HASH_STS))
9382 ++ udelay(10);
9383 ++
9384 ++ for (i = 0; i < crypto_ahash_digestsize(tfm) / 4; i++) {
9385 ++ v = readl(tctx->dev->reg + RK_CRYPTO_HASH_DOUT_0 + i * 4);
9386 ++ put_unaligned_le32(v, areq->result + i * 4);
9387 + }
9388 +
9389 +-out_rx:
9390 +- return err;
9391 ++theend:
9392 ++ local_bh_disable();
9393 ++ crypto_finalize_hash_request(engine, breq, err);
9394 ++ local_bh_enable();
9395 ++
9396 ++ return 0;
9397 + }
9398 +
9399 + static int rk_cra_hash_init(struct crypto_tfm *tfm)
9400 +@@ -281,14 +324,6 @@ static int rk_cra_hash_init(struct crypto_tfm *tfm)
9401 + algt = container_of(alg, struct rk_crypto_tmp, alg.hash);
9402 +
9403 + tctx->dev = algt->dev;
9404 +- tctx->dev->addr_vir = (void *)__get_free_page(GFP_KERNEL);
9405 +- if (!tctx->dev->addr_vir) {
9406 +- dev_err(tctx->dev->dev, "failed to kmalloc for addr_vir\n");
9407 +- return -ENOMEM;
9408 +- }
9409 +- tctx->dev->start = rk_ahash_start;
9410 +- tctx->dev->update = rk_ahash_crypto_rx;
9411 +- tctx->dev->complete = rk_ahash_crypto_complete;
9412 +
9413 + /* for fallback */
9414 + tctx->fallback_tfm = crypto_alloc_ahash(alg_name, 0,
9415 +@@ -297,19 +332,23 @@ static int rk_cra_hash_init(struct crypto_tfm *tfm)
9416 + dev_err(tctx->dev->dev, "Could not load fallback driver.\n");
9417 + return PTR_ERR(tctx->fallback_tfm);
9418 + }
9419 ++
9420 + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
9421 + sizeof(struct rk_ahash_rctx) +
9422 + crypto_ahash_reqsize(tctx->fallback_tfm));
9423 +
9424 +- return tctx->dev->enable_clk(tctx->dev);
9425 ++ tctx->enginectx.op.do_one_request = rk_hash_run;
9426 ++ tctx->enginectx.op.prepare_request = rk_hash_prepare;
9427 ++ tctx->enginectx.op.unprepare_request = rk_hash_unprepare;
9428 ++
9429 ++ return 0;
9430 + }
9431 +
9432 + static void rk_cra_hash_exit(struct crypto_tfm *tfm)
9433 + {
9434 + struct rk_ahash_ctx *tctx = crypto_tfm_ctx(tfm);
9435 +
9436 +- free_page((unsigned long)tctx->dev->addr_vir);
9437 +- return tctx->dev->disable_clk(tctx->dev);
9438 ++ crypto_free_ahash(tctx->fallback_tfm);
9439 + }
9440 +
9441 + struct rk_crypto_tmp rk_ahash_sha1 = {
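The hash hunk above replaces a single `memcpy_fromio()` of the digest with a per-word `readl()` plus `put_unaligned_le32()` loop, so the result buffer no longer has to be 4-byte aligned. The pattern can be sketched in plain userspace C (a stand-in for the kernel helpers, not the driver itself; `copy_digest` and the open-coded `put_unaligned_le32` are illustrative names, assuming a little-endian register layout as in the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's put_unaligned_le32(): store a
 * 32-bit value little-endian at a possibly unaligned address, one
 * byte at a time so alignment never matters. */
static void put_unaligned_le32_sketch(uint32_t v, uint8_t *p)
{
	p[0] = (uint8_t)(v & 0xff);
	p[1] = (uint8_t)((v >> 8) & 0xff);
	p[2] = (uint8_t)((v >> 16) & 0xff);
	p[3] = (uint8_t)((v >> 24) & 0xff);
}

/* Copy a digest out of word-wide "registers" into a byte buffer of
 * arbitrary alignment, one 32-bit word per iteration, mirroring the
 * readl() loop over RK_CRYPTO_HASH_DOUT_0 in rk_hash_run(). */
static void copy_digest(const uint32_t *regs, uint8_t *result,
			size_t digestsize)
{
	for (size_t i = 0; i < digestsize / 4; i++)
		put_unaligned_le32_sketch(regs[i], result + i * 4);
}
```

The byte-at-a-time store is the whole point: the old `memcpy_fromio()` into `req->result` silently assumed the caller's buffer was word-aligned.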
9442 +diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
9443 +index 5bbf0d2722e11..67a7e05d5ae31 100644
9444 +--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
9445 ++++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
9446 +@@ -9,23 +9,77 @@
9447 + * Some ideas are from marvell-cesa.c and s5p-sss.c driver.
9448 + */
9449 + #include <linux/device.h>
9450 ++#include <crypto/scatterwalk.h>
9451 + #include "rk3288_crypto.h"
9452 +
9453 + #define RK_CRYPTO_DEC BIT(0)
9454 +
9455 +-static void rk_crypto_complete(struct crypto_async_request *base, int err)
9456 ++static int rk_cipher_need_fallback(struct skcipher_request *req)
9457 + {
9458 +- if (base->complete)
9459 +- base->complete(base, err);
9460 ++ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9461 ++ unsigned int bs = crypto_skcipher_blocksize(tfm);
9462 ++ struct scatterlist *sgs, *sgd;
9463 ++ unsigned int stodo, dtodo, len;
9464 ++
9465 ++ if (!req->cryptlen)
9466 ++ return true;
9467 ++
9468 ++ len = req->cryptlen;
9469 ++ sgs = req->src;
9470 ++ sgd = req->dst;
9471 ++ while (sgs && sgd) {
9472 ++ if (!IS_ALIGNED(sgs->offset, sizeof(u32))) {
9473 ++ return true;
9474 ++ }
9475 ++ if (!IS_ALIGNED(sgd->offset, sizeof(u32))) {
9476 ++ return true;
9477 ++ }
9478 ++ stodo = min(len, sgs->length);
9479 ++ if (stodo % bs) {
9480 ++ return true;
9481 ++ }
9482 ++ dtodo = min(len, sgd->length);
9483 ++ if (dtodo % bs) {
9484 ++ return true;
9485 ++ }
9486 ++ if (stodo != dtodo) {
9487 ++ return true;
9488 ++ }
9489 ++ len -= stodo;
9490 ++ sgs = sg_next(sgs);
9491 ++ sgd = sg_next(sgd);
9492 ++ }
9493 ++ return false;
9494 ++}
9495 ++
9496 ++static int rk_cipher_fallback(struct skcipher_request *areq)
9497 ++{
9498 ++ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
9499 ++ struct rk_cipher_ctx *op = crypto_skcipher_ctx(tfm);
9500 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(areq);
9501 ++ int err;
9502 ++
9503 ++ skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
9504 ++ skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
9505 ++ areq->base.complete, areq->base.data);
9506 ++ skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
9507 ++ areq->cryptlen, areq->iv);
9508 ++ if (rctx->mode & RK_CRYPTO_DEC)
9509 ++ err = crypto_skcipher_decrypt(&rctx->fallback_req);
9510 ++ else
9511 ++ err = crypto_skcipher_encrypt(&rctx->fallback_req);
9512 ++ return err;
9513 + }
9514 +
9515 + static int rk_handle_req(struct rk_crypto_info *dev,
9516 + struct skcipher_request *req)
9517 + {
9518 +- if (!IS_ALIGNED(req->cryptlen, dev->align_size))
9519 +- return -EINVAL;
9520 +- else
9521 +- return dev->enqueue(dev, &req->base);
9522 ++ struct crypto_engine *engine = dev->engine;
9523 ++
9524 ++ if (rk_cipher_need_fallback(req))
9525 ++ return rk_cipher_fallback(req);
9526 ++
9527 ++ return crypto_transfer_skcipher_request_to_engine(engine, req);
9528 + }
9529 +
9530 + static int rk_aes_setkey(struct crypto_skcipher *cipher,
9531 +@@ -38,8 +92,9 @@ static int rk_aes_setkey(struct crypto_skcipher *cipher,
9532 + keylen != AES_KEYSIZE_256)
9533 + return -EINVAL;
9534 + ctx->keylen = keylen;
9535 +- memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, key, keylen);
9536 +- return 0;
9537 ++ memcpy(ctx->key, key, keylen);
9538 ++
9539 ++ return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
9540 + }
9541 +
9542 + static int rk_des_setkey(struct crypto_skcipher *cipher,
9543 +@@ -53,8 +108,9 @@ static int rk_des_setkey(struct crypto_skcipher *cipher,
9544 + return err;
9545 +
9546 + ctx->keylen = keylen;
9547 +- memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen);
9548 +- return 0;
9549 ++ memcpy(ctx->key, key, keylen);
9550 ++
9551 ++ return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
9552 + }
9553 +
9554 + static int rk_tdes_setkey(struct crypto_skcipher *cipher,
9555 +@@ -68,17 +124,19 @@ static int rk_tdes_setkey(struct crypto_skcipher *cipher,
9556 + return err;
9557 +
9558 + ctx->keylen = keylen;
9559 +- memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen);
9560 +- return 0;
9561 ++ memcpy(ctx->key, key, keylen);
9562 ++
9563 ++ return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
9564 + }
9565 +
9566 + static int rk_aes_ecb_encrypt(struct skcipher_request *req)
9567 + {
9568 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9569 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9570 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9571 + struct rk_crypto_info *dev = ctx->dev;
9572 +
9573 +- ctx->mode = RK_CRYPTO_AES_ECB_MODE;
9574 ++ rctx->mode = RK_CRYPTO_AES_ECB_MODE;
9575 + return rk_handle_req(dev, req);
9576 + }
9577 +
9578 +@@ -86,9 +144,10 @@ static int rk_aes_ecb_decrypt(struct skcipher_request *req)
9579 + {
9580 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9581 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9582 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9583 + struct rk_crypto_info *dev = ctx->dev;
9584 +
9585 +- ctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC;
9586 ++ rctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC;
9587 + return rk_handle_req(dev, req);
9588 + }
9589 +
9590 +@@ -96,9 +155,10 @@ static int rk_aes_cbc_encrypt(struct skcipher_request *req)
9591 + {
9592 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9593 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9594 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9595 + struct rk_crypto_info *dev = ctx->dev;
9596 +
9597 +- ctx->mode = RK_CRYPTO_AES_CBC_MODE;
9598 ++ rctx->mode = RK_CRYPTO_AES_CBC_MODE;
9599 + return rk_handle_req(dev, req);
9600 + }
9601 +
9602 +@@ -106,9 +166,10 @@ static int rk_aes_cbc_decrypt(struct skcipher_request *req)
9603 + {
9604 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9605 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9606 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9607 + struct rk_crypto_info *dev = ctx->dev;
9608 +
9609 +- ctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC;
9610 ++ rctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC;
9611 + return rk_handle_req(dev, req);
9612 + }
9613 +
9614 +@@ -116,9 +177,10 @@ static int rk_des_ecb_encrypt(struct skcipher_request *req)
9615 + {
9616 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9617 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9618 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9619 + struct rk_crypto_info *dev = ctx->dev;
9620 +
9621 +- ctx->mode = 0;
9622 ++ rctx->mode = 0;
9623 + return rk_handle_req(dev, req);
9624 + }
9625 +
9626 +@@ -126,9 +188,10 @@ static int rk_des_ecb_decrypt(struct skcipher_request *req)
9627 + {
9628 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9629 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9630 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9631 + struct rk_crypto_info *dev = ctx->dev;
9632 +
9633 +- ctx->mode = RK_CRYPTO_DEC;
9634 ++ rctx->mode = RK_CRYPTO_DEC;
9635 + return rk_handle_req(dev, req);
9636 + }
9637 +
9638 +@@ -136,9 +199,10 @@ static int rk_des_cbc_encrypt(struct skcipher_request *req)
9639 + {
9640 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9641 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9642 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9643 + struct rk_crypto_info *dev = ctx->dev;
9644 +
9645 +- ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC;
9646 ++ rctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC;
9647 + return rk_handle_req(dev, req);
9648 + }
9649 +
9650 +@@ -146,9 +210,10 @@ static int rk_des_cbc_decrypt(struct skcipher_request *req)
9651 + {
9652 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9653 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9654 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9655 + struct rk_crypto_info *dev = ctx->dev;
9656 +
9657 +- ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC;
9658 ++ rctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC;
9659 + return rk_handle_req(dev, req);
9660 + }
9661 +
9662 +@@ -156,9 +221,10 @@ static int rk_des3_ede_ecb_encrypt(struct skcipher_request *req)
9663 + {
9664 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9665 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9666 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9667 + struct rk_crypto_info *dev = ctx->dev;
9668 +
9669 +- ctx->mode = RK_CRYPTO_TDES_SELECT;
9670 ++ rctx->mode = RK_CRYPTO_TDES_SELECT;
9671 + return rk_handle_req(dev, req);
9672 + }
9673 +
9674 +@@ -166,9 +232,10 @@ static int rk_des3_ede_ecb_decrypt(struct skcipher_request *req)
9675 + {
9676 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9677 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9678 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9679 + struct rk_crypto_info *dev = ctx->dev;
9680 +
9681 +- ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC;
9682 ++ rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC;
9683 + return rk_handle_req(dev, req);
9684 + }
9685 +
9686 +@@ -176,9 +243,10 @@ static int rk_des3_ede_cbc_encrypt(struct skcipher_request *req)
9687 + {
9688 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9689 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9690 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9691 + struct rk_crypto_info *dev = ctx->dev;
9692 +
9693 +- ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC;
9694 ++ rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC;
9695 + return rk_handle_req(dev, req);
9696 + }
9697 +
9698 +@@ -186,43 +254,42 @@ static int rk_des3_ede_cbc_decrypt(struct skcipher_request *req)
9699 + {
9700 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9701 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9702 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9703 + struct rk_crypto_info *dev = ctx->dev;
9704 +
9705 +- ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC |
9706 ++ rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC |
9707 + RK_CRYPTO_DEC;
9708 + return rk_handle_req(dev, req);
9709 + }
9710 +
9711 +-static void rk_ablk_hw_init(struct rk_crypto_info *dev)
9712 ++static void rk_ablk_hw_init(struct rk_crypto_info *dev, struct skcipher_request *req)
9713 + {
9714 +- struct skcipher_request *req =
9715 +- skcipher_request_cast(dev->async_req);
9716 + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req);
9717 + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher);
9718 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
9719 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher);
9720 +- u32 ivsize, block, conf_reg = 0;
9721 ++ u32 block, conf_reg = 0;
9722 +
9723 + block = crypto_tfm_alg_blocksize(tfm);
9724 +- ivsize = crypto_skcipher_ivsize(cipher);
9725 +
9726 + if (block == DES_BLOCK_SIZE) {
9727 +- ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE |
9728 ++ rctx->mode |= RK_CRYPTO_TDES_FIFO_MODE |
9729 + RK_CRYPTO_TDES_BYTESWAP_KEY |
9730 + RK_CRYPTO_TDES_BYTESWAP_IV;
9731 +- CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode);
9732 +- memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->iv, ivsize);
9733 ++ CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, rctx->mode);
9734 ++ memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, ctx->key, ctx->keylen);
9735 + conf_reg = RK_CRYPTO_DESSEL;
9736 + } else {
9737 +- ctx->mode |= RK_CRYPTO_AES_FIFO_MODE |
9738 ++ rctx->mode |= RK_CRYPTO_AES_FIFO_MODE |
9739 + RK_CRYPTO_AES_KEY_CHANGE |
9740 + RK_CRYPTO_AES_BYTESWAP_KEY |
9741 + RK_CRYPTO_AES_BYTESWAP_IV;
9742 + if (ctx->keylen == AES_KEYSIZE_192)
9743 +- ctx->mode |= RK_CRYPTO_AES_192BIT_key;
9744 ++ rctx->mode |= RK_CRYPTO_AES_192BIT_key;
9745 + else if (ctx->keylen == AES_KEYSIZE_256)
9746 +- ctx->mode |= RK_CRYPTO_AES_256BIT_key;
9747 +- CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode);
9748 +- memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->iv, ivsize);
9749 ++ rctx->mode |= RK_CRYPTO_AES_256BIT_key;
9750 ++ CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, rctx->mode);
9751 ++ memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, ctx->key, ctx->keylen);
9752 + }
9753 + conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO |
9754 + RK_CRYPTO_BYTESWAP_BRFIFO;
9755 +@@ -231,146 +298,138 @@ static void rk_ablk_hw_init(struct rk_crypto_info *dev)
9756 + RK_CRYPTO_BCDMA_ERR_ENA | RK_CRYPTO_BCDMA_DONE_ENA);
9757 + }
9758 +
9759 +-static void crypto_dma_start(struct rk_crypto_info *dev)
9760 ++static void crypto_dma_start(struct rk_crypto_info *dev,
9761 ++ struct scatterlist *sgs,
9762 ++ struct scatterlist *sgd, unsigned int todo)
9763 + {
9764 +- CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, dev->addr_in);
9765 +- CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, dev->count / 4);
9766 +- CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, dev->addr_out);
9767 ++ CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, sg_dma_address(sgs));
9768 ++ CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, todo);
9769 ++ CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, sg_dma_address(sgd));
9770 + CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_BLOCK_START |
9771 + _SBF(RK_CRYPTO_BLOCK_START, 16));
9772 + }
9773 +
9774 +-static int rk_set_data_start(struct rk_crypto_info *dev)
9775 ++static int rk_cipher_run(struct crypto_engine *engine, void *async_req)
9776 + {
9777 +- int err;
9778 +- struct skcipher_request *req =
9779 +- skcipher_request_cast(dev->async_req);
9780 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9781 ++ struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
9782 ++ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
9783 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9784 +- u32 ivsize = crypto_skcipher_ivsize(tfm);
9785 +- u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
9786 +- dev->sg_src->offset + dev->sg_src->length - ivsize;
9787 +-
9788 +- /* Store the iv that need to be updated in chain mode.
9789 +- * And update the IV buffer to contain the next IV for decryption mode.
9790 +- */
9791 +- if (ctx->mode & RK_CRYPTO_DEC) {
9792 +- memcpy(ctx->iv, src_last_blk, ivsize);
9793 +- sg_pcopy_to_buffer(dev->first, dev->src_nents, req->iv,
9794 +- ivsize, dev->total - ivsize);
9795 +- }
9796 +-
9797 +- err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
9798 +- if (!err)
9799 +- crypto_dma_start(dev);
9800 +- return err;
9801 +-}
9802 +-
9803 +-static int rk_ablk_start(struct rk_crypto_info *dev)
9804 +-{
9805 +- struct skcipher_request *req =
9806 +- skcipher_request_cast(dev->async_req);
9807 +- unsigned long flags;
9808 ++ struct rk_cipher_rctx *rctx = skcipher_request_ctx(areq);
9809 ++ struct scatterlist *sgs, *sgd;
9810 + int err = 0;
9811 ++ int ivsize = crypto_skcipher_ivsize(tfm);
9812 ++ int offset;
9813 ++ u8 iv[AES_BLOCK_SIZE];
9814 ++ u8 biv[AES_BLOCK_SIZE];
9815 ++ u8 *ivtouse = areq->iv;
9816 ++ unsigned int len = areq->cryptlen;
9817 ++ unsigned int todo;
9818 ++
9819 ++ ivsize = crypto_skcipher_ivsize(tfm);
9820 ++ if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
9821 ++ if (rctx->mode & RK_CRYPTO_DEC) {
9822 ++ offset = areq->cryptlen - ivsize;
9823 ++ scatterwalk_map_and_copy(rctx->backup_iv, areq->src,
9824 ++ offset, ivsize, 0);
9825 ++ }
9826 ++ }
9827 +
9828 +- dev->left_bytes = req->cryptlen;
9829 +- dev->total = req->cryptlen;
9830 +- dev->sg_src = req->src;
9831 +- dev->first = req->src;
9832 +- dev->src_nents = sg_nents(req->src);
9833 +- dev->sg_dst = req->dst;
9834 +- dev->dst_nents = sg_nents(req->dst);
9835 +- dev->aligned = 1;
9836 +-
9837 +- spin_lock_irqsave(&dev->lock, flags);
9838 +- rk_ablk_hw_init(dev);
9839 +- err = rk_set_data_start(dev);
9840 +- spin_unlock_irqrestore(&dev->lock, flags);
9841 +- return err;
9842 +-}
9843 ++ sgs = areq->src;
9844 ++ sgd = areq->dst;
9845 +
9846 +-static void rk_iv_copyback(struct rk_crypto_info *dev)
9847 +-{
9848 +- struct skcipher_request *req =
9849 +- skcipher_request_cast(dev->async_req);
9850 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9851 +- struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9852 +- u32 ivsize = crypto_skcipher_ivsize(tfm);
9853 +-
9854 +- /* Update the IV buffer to contain the next IV for encryption mode. */
9855 +- if (!(ctx->mode & RK_CRYPTO_DEC)) {
9856 +- if (dev->aligned) {
9857 +- memcpy(req->iv, sg_virt(dev->sg_dst) +
9858 +- dev->sg_dst->length - ivsize, ivsize);
9859 ++ while (sgs && sgd && len) {
9860 ++ if (!sgs->length) {
9861 ++ sgs = sg_next(sgs);
9862 ++ sgd = sg_next(sgd);
9863 ++ continue;
9864 ++ }
9865 ++ if (rctx->mode & RK_CRYPTO_DEC) {
9866 ++ /* we backup last block of source to be used as IV at next step */
9867 ++ offset = sgs->length - ivsize;
9868 ++ scatterwalk_map_and_copy(biv, sgs, offset, ivsize, 0);
9869 ++ }
9870 ++ if (sgs == sgd) {
9871 ++ err = dma_map_sg(ctx->dev->dev, sgs, 1, DMA_BIDIRECTIONAL);
9872 ++ if (err <= 0) {
9873 ++ err = -EINVAL;
9874 ++ goto theend_iv;
9875 ++ }
9876 ++ } else {
9877 ++ err = dma_map_sg(ctx->dev->dev, sgs, 1, DMA_TO_DEVICE);
9878 ++ if (err <= 0) {
9879 ++ err = -EINVAL;
9880 ++ goto theend_iv;
9881 ++ }
9882 ++ err = dma_map_sg(ctx->dev->dev, sgd, 1, DMA_FROM_DEVICE);
9883 ++ if (err <= 0) {
9884 ++ err = -EINVAL;
9885 ++ goto theend_sgs;
9886 ++ }
9887 ++ }
9888 ++ err = 0;
9889 ++ rk_ablk_hw_init(ctx->dev, areq);
9890 ++ if (ivsize) {
9891 ++ if (ivsize == DES_BLOCK_SIZE)
9892 ++ memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_IV_0, ivtouse, ivsize);
9893 ++ else
9894 ++ memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_IV_0, ivtouse, ivsize);
9895 ++ }
9896 ++ reinit_completion(&ctx->dev->complete);
9897 ++ ctx->dev->status = 0;
9898 ++
9899 ++ todo = min(sg_dma_len(sgs), len);
9900 ++ len -= todo;
9901 ++ crypto_dma_start(ctx->dev, sgs, sgd, todo / 4);
9902 ++ wait_for_completion_interruptible_timeout(&ctx->dev->complete,
9903 ++ msecs_to_jiffies(2000));
9904 ++ if (!ctx->dev->status) {
9905 ++ dev_err(ctx->dev->dev, "DMA timeout\n");
9906 ++ err = -EFAULT;
9907 ++ goto theend;
9908 ++ }
9909 ++ if (sgs == sgd) {
9910 ++ dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_BIDIRECTIONAL);
9911 ++ } else {
9912 ++ dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_TO_DEVICE);
9913 ++ dma_unmap_sg(ctx->dev->dev, sgd, 1, DMA_FROM_DEVICE);
9914 ++ }
9915 ++ if (rctx->mode & RK_CRYPTO_DEC) {
9916 ++ memcpy(iv, biv, ivsize);
9917 ++ ivtouse = iv;
9918 + } else {
9919 +- memcpy(req->iv, dev->addr_vir +
9920 +- dev->count - ivsize, ivsize);
9921 ++ offset = sgd->length - ivsize;
9922 ++ scatterwalk_map_and_copy(iv, sgd, offset, ivsize, 0);
9923 ++ ivtouse = iv;
9924 + }
9925 ++ sgs = sg_next(sgs);
9926 ++ sgd = sg_next(sgd);
9927 + }
9928 +-}
9929 +-
9930 +-static void rk_update_iv(struct rk_crypto_info *dev)
9931 +-{
9932 +- struct skcipher_request *req =
9933 +- skcipher_request_cast(dev->async_req);
9934 +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
9935 +- struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
9936 +- u32 ivsize = crypto_skcipher_ivsize(tfm);
9937 +- u8 *new_iv = NULL;
9938 +
9939 +- if (ctx->mode & RK_CRYPTO_DEC) {
9940 +- new_iv = ctx->iv;
9941 +- } else {
9942 +- new_iv = page_address(sg_page(dev->sg_dst)) +
9943 +- dev->sg_dst->offset + dev->sg_dst->length - ivsize;
9944 ++ if (areq->iv && ivsize > 0) {
9945 ++ offset = areq->cryptlen - ivsize;
9946 ++ if (rctx->mode & RK_CRYPTO_DEC) {
9947 ++ memcpy(areq->iv, rctx->backup_iv, ivsize);
9948 ++ memzero_explicit(rctx->backup_iv, ivsize);
9949 ++ } else {
9950 ++ scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
9951 ++ ivsize, 0);
9952 ++ }
9953 + }
9954 +
9955 +- if (ivsize == DES_BLOCK_SIZE)
9956 +- memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize);
9957 +- else if (ivsize == AES_BLOCK_SIZE)
9958 +- memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize);
9959 +-}
9960 ++theend:
9961 ++ local_bh_disable();
9962 ++ crypto_finalize_skcipher_request(engine, areq, err);
9963 ++ local_bh_enable();
9964 ++ return 0;
9965 +
9966 +-/* return:
9967 +- * true some err was occurred
9968 +- * fault no err, continue
9969 +- */
9970 +-static int rk_ablk_rx(struct rk_crypto_info *dev)
9971 +-{
9972 +- int err = 0;
9973 +- struct skcipher_request *req =
9974 +- skcipher_request_cast(dev->async_req);
9975 +-
9976 +- dev->unload_data(dev);
9977 +- if (!dev->aligned) {
9978 +- if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
9979 +- dev->addr_vir, dev->count,
9980 +- dev->total - dev->left_bytes -
9981 +- dev->count)) {
9982 +- err = -EINVAL;
9983 +- goto out_rx;
9984 +- }
9985 +- }
9986 +- if (dev->left_bytes) {
9987 +- rk_update_iv(dev);
9988 +- if (dev->aligned) {
9989 +- if (sg_is_last(dev->sg_src)) {
9990 +- dev_err(dev->dev, "[%s:%d] Lack of data\n",
9991 +- __func__, __LINE__);
9992 +- err = -ENOMEM;
9993 +- goto out_rx;
9994 +- }
9995 +- dev->sg_src = sg_next(dev->sg_src);
9996 +- dev->sg_dst = sg_next(dev->sg_dst);
9997 +- }
9998 +- err = rk_set_data_start(dev);
9999 ++theend_sgs:
10000 ++ if (sgs == sgd) {
10001 ++ dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_BIDIRECTIONAL);
10002 + } else {
10003 +- rk_iv_copyback(dev);
10004 +- /* here show the calculation is over without any err */
10005 +- dev->complete(dev->async_req, 0);
10006 +- tasklet_schedule(&dev->queue_task);
10007 ++ dma_unmap_sg(ctx->dev->dev, sgs, 1, DMA_TO_DEVICE);
10008 ++ dma_unmap_sg(ctx->dev->dev, sgd, 1, DMA_FROM_DEVICE);
10009 + }
10010 +-out_rx:
10011 ++theend_iv:
10012 + return err;
10013 + }
10014 +
10015 +@@ -378,26 +437,34 @@ static int rk_ablk_init_tfm(struct crypto_skcipher *tfm)
10016 + {
10017 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
10018 + struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
10019 ++ const char *name = crypto_tfm_alg_name(&tfm->base);
10020 + struct rk_crypto_tmp *algt;
10021 +
10022 + algt = container_of(alg, struct rk_crypto_tmp, alg.skcipher);
10023 +
10024 + ctx->dev = algt->dev;
10025 +- ctx->dev->align_size = crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm)) + 1;
10026 +- ctx->dev->start = rk_ablk_start;
10027 +- ctx->dev->update = rk_ablk_rx;
10028 +- ctx->dev->complete = rk_crypto_complete;
10029 +- ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL);
10030 +
10031 +- return ctx->dev->addr_vir ? ctx->dev->enable_clk(ctx->dev) : -ENOMEM;
10032 ++ ctx->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
10033 ++ if (IS_ERR(ctx->fallback_tfm)) {
10034 ++ dev_err(ctx->dev->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
10035 ++ name, PTR_ERR(ctx->fallback_tfm));
10036 ++ return PTR_ERR(ctx->fallback_tfm);
10037 ++ }
10038 ++
10039 ++ tfm->reqsize = sizeof(struct rk_cipher_rctx) +
10040 ++ crypto_skcipher_reqsize(ctx->fallback_tfm);
10041 ++
10042 ++ ctx->enginectx.op.do_one_request = rk_cipher_run;
10043 ++
10044 ++ return 0;
10045 + }
10046 +
10047 + static void rk_ablk_exit_tfm(struct crypto_skcipher *tfm)
10048 + {
10049 + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
10050 +
10051 +- free_page((unsigned long)ctx->dev->addr_vir);
10052 +- ctx->dev->disable_clk(ctx->dev);
10053 ++ memzero_explicit(ctx->key, ctx->keylen);
10054 ++ crypto_free_skcipher(ctx->fallback_tfm);
10055 + }
10056 +
10057 + struct rk_crypto_tmp rk_ecb_aes_alg = {
10058 +@@ -406,7 +473,7 @@ struct rk_crypto_tmp rk_ecb_aes_alg = {
10059 + .base.cra_name = "ecb(aes)",
10060 + .base.cra_driver_name = "ecb-aes-rk",
10061 + .base.cra_priority = 300,
10062 +- .base.cra_flags = CRYPTO_ALG_ASYNC,
10063 ++ .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
10064 + .base.cra_blocksize = AES_BLOCK_SIZE,
10065 + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx),
10066 + .base.cra_alignmask = 0x0f,
10067 +@@ -428,7 +495,7 @@ struct rk_crypto_tmp rk_cbc_aes_alg = {
10068 + .base.cra_name = "cbc(aes)",
10069 + .base.cra_driver_name = "cbc-aes-rk",
10070 + .base.cra_priority = 300,
10071 +- .base.cra_flags = CRYPTO_ALG_ASYNC,
10072 ++ .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
10073 + .base.cra_blocksize = AES_BLOCK_SIZE,
10074 + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx),
10075 + .base.cra_alignmask = 0x0f,
10076 +@@ -451,7 +518,7 @@ struct rk_crypto_tmp rk_ecb_des_alg = {
10077 + .base.cra_name = "ecb(des)",
10078 + .base.cra_driver_name = "ecb-des-rk",
10079 + .base.cra_priority = 300,
10080 +- .base.cra_flags = CRYPTO_ALG_ASYNC,
10081 ++ .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
10082 + .base.cra_blocksize = DES_BLOCK_SIZE,
10083 + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx),
10084 + .base.cra_alignmask = 0x07,
10085 +@@ -473,7 +540,7 @@ struct rk_crypto_tmp rk_cbc_des_alg = {
10086 + .base.cra_name = "cbc(des)",
10087 + .base.cra_driver_name = "cbc-des-rk",
10088 + .base.cra_priority = 300,
10089 +- .base.cra_flags = CRYPTO_ALG_ASYNC,
10090 ++ .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
10091 + .base.cra_blocksize = DES_BLOCK_SIZE,
10092 + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx),
10093 + .base.cra_alignmask = 0x07,
10094 +@@ -496,7 +563,7 @@ struct rk_crypto_tmp rk_ecb_des3_ede_alg = {
10095 + .base.cra_name = "ecb(des3_ede)",
10096 + .base.cra_driver_name = "ecb-des3-ede-rk",
10097 + .base.cra_priority = 300,
10098 +- .base.cra_flags = CRYPTO_ALG_ASYNC,
10099 ++ .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
10100 + .base.cra_blocksize = DES_BLOCK_SIZE,
10101 + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx),
10102 + .base.cra_alignmask = 0x07,
10103 +@@ -518,7 +585,7 @@ struct rk_crypto_tmp rk_cbc_des3_ede_alg = {
10104 + .base.cra_name = "cbc(des3_ede)",
10105 + .base.cra_driver_name = "cbc-des3-ede-rk",
10106 + .base.cra_priority = 300,
10107 +- .base.cra_flags = CRYPTO_ALG_ASYNC,
10108 ++ .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
10109 + .base.cra_blocksize = DES_BLOCK_SIZE,
10110 + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx),
10111 + .base.cra_alignmask = 0x07,
10112 +diff --git a/drivers/dio/dio.c b/drivers/dio/dio.c
10113 +index 0e5a5662d5a40..0a051d6568800 100644
10114 +--- a/drivers/dio/dio.c
10115 ++++ b/drivers/dio/dio.c
10116 +@@ -109,6 +109,12 @@ static char dio_no_name[] = { 0 };
10117 +
10118 + #endif /* CONFIG_DIO_CONSTANTS */
10119 +
10120 ++static void dio_dev_release(struct device *dev)
10121 ++{
10122 ++ struct dio_dev *ddev = container_of(dev, typeof(struct dio_dev), dev);
10123 ++ kfree(ddev);
10124 ++}
10125 ++
10126 + int __init dio_find(int deviceid)
10127 + {
10128 + /* Called to find a DIO device before the full bus scan has run.
10129 +@@ -225,6 +231,7 @@ static int __init dio_init(void)
10130 + dev->bus = &dio_bus;
10131 + dev->dev.parent = &dio_bus.dev;
10132 + dev->dev.bus = &dio_bus_type;
10133 ++ dev->dev.release = dio_dev_release;
10134 + dev->scode = scode;
10135 + dev->resource.start = pa;
10136 + dev->resource.end = pa + DIO_SIZE(scode, va);
10137 +@@ -252,6 +259,7 @@ static int __init dio_init(void)
10138 + if (error) {
10139 + pr_err("DIO: Error registering device %s\n",
10140 + dev->name);
10141 ++ put_device(&dev->dev);
10142 + continue;
10143 + }
10144 + error = dio_create_sysfs_dev_files(dev);
10145 +diff --git a/drivers/dma/apple-admac.c b/drivers/dma/apple-admac.c
10146 +index a2cc520225d32..90f28bda29c8b 100644
10147 +--- a/drivers/dma/apple-admac.c
10148 ++++ b/drivers/dma/apple-admac.c
10149 +@@ -21,6 +21,12 @@
10150 + #define NCHANNELS_MAX 64
10151 + #define IRQ_NOUTPUTS 4
10152 +
10153 ++/*
10154 ++ * For allocation purposes we split the cache
10155 ++ * memory into blocks of fixed size (given in bytes).
10156 ++ */
10157 ++#define SRAM_BLOCK 2048
10158 ++
10159 + #define RING_WRITE_SLOT GENMASK(1, 0)
10160 + #define RING_READ_SLOT GENMASK(5, 4)
10161 + #define RING_FULL BIT(9)
10162 +@@ -36,6 +42,9 @@
10163 + #define REG_TX_STOP 0x0004
10164 + #define REG_RX_START 0x0008
10165 + #define REG_RX_STOP 0x000c
10166 ++#define REG_IMPRINT 0x0090
10167 ++#define REG_TX_SRAM_SIZE 0x0094
10168 ++#define REG_RX_SRAM_SIZE 0x0098
10169 +
10170 + #define REG_CHAN_CTL(ch) (0x8000 + (ch) * 0x200)
10171 + #define REG_CHAN_CTL_RST_RINGS BIT(0)
10172 +@@ -53,7 +62,9 @@
10173 + #define BUS_WIDTH_FRAME_2_WORDS 0x10
10174 + #define BUS_WIDTH_FRAME_4_WORDS 0x20
10175 +
10176 +-#define CHAN_BUFSIZE 0x8000
10177 ++#define REG_CHAN_SRAM_CARVEOUT(ch) (0x8050 + (ch) * 0x200)
10178 ++#define CHAN_SRAM_CARVEOUT_SIZE GENMASK(31, 16)
10179 ++#define CHAN_SRAM_CARVEOUT_BASE GENMASK(15, 0)
10180 +
10181 + #define REG_CHAN_FIFOCTL(ch) (0x8054 + (ch) * 0x200)
10182 + #define CHAN_FIFOCTL_LIMIT GENMASK(31, 16)
10183 +@@ -76,6 +87,8 @@ struct admac_chan {
10184 + struct dma_chan chan;
10185 + struct tasklet_struct tasklet;
10186 +
10187 ++ u32 carveout;
10188 ++
10189 + spinlock_t lock;
10190 + struct admac_tx *current_tx;
10191 + int nperiod_acks;
10192 +@@ -92,12 +105,24 @@ struct admac_chan {
10193 + struct list_head to_free;
10194 + };
10195 +
10196 ++struct admac_sram {
10197 ++ u32 size;
10198 ++ /*
10199 ++ * SRAM_CARVEOUT has 16-bit fields, so the SRAM cannot be larger than
10200 ++ * 64K and a 32-bit bitfield over 2K blocks covers it.
10201 ++ */
10202 ++ u32 allocated;
10203 ++};
10204 ++
10205 + struct admac_data {
10206 + struct dma_device dma;
10207 + struct device *dev;
10208 + __iomem void *base;
10209 + struct reset_control *rstc;
10210 +
10211 ++ struct mutex cache_alloc_lock;
10212 ++ struct admac_sram txcache, rxcache;
10213 ++
10214 + int irq;
10215 + int irq_index;
10216 + int nchannels;
10217 +@@ -118,6 +143,60 @@ struct admac_tx {
10218 + struct list_head node;
10219 + };
10220 +
10221 ++static int admac_alloc_sram_carveout(struct admac_data *ad,
10222 ++ enum dma_transfer_direction dir,
10223 ++ u32 *out)
10224 ++{
10225 ++ struct admac_sram *sram;
10226 ++ int i, ret = 0, nblocks;
10227 ++
10228 ++ if (dir == DMA_MEM_TO_DEV)
10229 ++ sram = &ad->txcache;
10230 ++ else
10231 ++ sram = &ad->rxcache;
10232 ++
10233 ++ mutex_lock(&ad->cache_alloc_lock);
10234 ++
10235 ++ nblocks = sram->size / SRAM_BLOCK;
10236 ++ for (i = 0; i < nblocks; i++)
10237 ++ if (!(sram->allocated & BIT(i)))
10238 ++ break;
10239 ++
10240 ++ if (i < nblocks) {
10241 ++ *out = FIELD_PREP(CHAN_SRAM_CARVEOUT_BASE, i * SRAM_BLOCK) |
10242 ++ FIELD_PREP(CHAN_SRAM_CARVEOUT_SIZE, SRAM_BLOCK);
10243 ++ sram->allocated |= BIT(i);
10244 ++ } else {
10245 ++ ret = -EBUSY;
10246 ++ }
10247 ++
10248 ++ mutex_unlock(&ad->cache_alloc_lock);
10249 ++
10250 ++ return ret;
10251 ++}
10252 ++
10253 ++static void admac_free_sram_carveout(struct admac_data *ad,
10254 ++ enum dma_transfer_direction dir,
10255 ++ u32 carveout)
10256 ++{
10257 ++ struct admac_sram *sram;
10258 ++ u32 base = FIELD_GET(CHAN_SRAM_CARVEOUT_BASE, carveout);
10259 ++ int i;
10260 ++
10261 ++ if (dir == DMA_MEM_TO_DEV)
10262 ++ sram = &ad->txcache;
10263 ++ else
10264 ++ sram = &ad->rxcache;
10265 ++
10266 ++ if (WARN_ON(base >= sram->size))
10267 ++ return;
10268 ++
10269 ++ mutex_lock(&ad->cache_alloc_lock);
10270 ++ i = base / SRAM_BLOCK;
10271 ++ sram->allocated &= ~BIT(i);
10272 ++ mutex_unlock(&ad->cache_alloc_lock);
10273 ++}
10274 ++
10275 + static void admac_modify(struct admac_data *ad, int reg, u32 mask, u32 val)
10276 + {
10277 + void __iomem *addr = ad->base + reg;
10278 +@@ -466,15 +545,28 @@ static void admac_synchronize(struct dma_chan *chan)
10279 + static int admac_alloc_chan_resources(struct dma_chan *chan)
10280 + {
10281 + struct admac_chan *adchan = to_admac_chan(chan);
10282 ++ struct admac_data *ad = adchan->host;
10283 ++ int ret;
10284 +
10285 + dma_cookie_init(&adchan->chan);
10286 ++ ret = admac_alloc_sram_carveout(ad, admac_chan_direction(adchan->no),
10287 ++ &adchan->carveout);
10288 ++ if (ret < 0)
10289 ++ return ret;
10290 ++
10291 ++ writel_relaxed(adchan->carveout,
10292 ++ ad->base + REG_CHAN_SRAM_CARVEOUT(adchan->no));
10293 + return 0;
10294 + }
10295 +
10296 + static void admac_free_chan_resources(struct dma_chan *chan)
10297 + {
10298 ++ struct admac_chan *adchan = to_admac_chan(chan);
10299 ++
10300 + admac_terminate_all(chan);
10301 + admac_synchronize(chan);
10302 ++ admac_free_sram_carveout(adchan->host, admac_chan_direction(adchan->no),
10303 ++ adchan->carveout);
10304 + }
10305 +
10306 + static struct dma_chan *admac_dma_of_xlate(struct of_phandle_args *dma_spec,
10307 +@@ -712,6 +804,7 @@ static int admac_probe(struct platform_device *pdev)
10308 + platform_set_drvdata(pdev, ad);
10309 + ad->dev = &pdev->dev;
10310 + ad->nchannels = nchannels;
10311 ++ mutex_init(&ad->cache_alloc_lock);
10312 +
10313 + /*
10314 + * The controller has 4 IRQ outputs. Try them all until
10315 +@@ -801,6 +894,13 @@ static int admac_probe(struct platform_device *pdev)
10316 + goto free_irq;
10317 + }
10318 +
10319 ++ ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
10320 ++ ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
10321 ++
10322 ++ dev_info(&pdev->dev, "Audio DMA Controller\n");
10323 ++ dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n",
10324 ++ readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size);
10325 ++
10326 + return 0;
10327 +
10328 + free_irq:
10329 +diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
10330 +index 7269bd54554f6..3229dfc786507 100644
10331 +--- a/drivers/dma/idxd/sysfs.c
10332 ++++ b/drivers/dma/idxd/sysfs.c
10333 +@@ -528,6 +528,22 @@ static bool idxd_group_attr_progress_limit_invisible(struct attribute *attr,
10334 + !idxd->hw.group_cap.progress_limit;
10335 + }
10336 +
10337 ++static bool idxd_group_attr_read_buffers_invisible(struct attribute *attr,
10338 ++ struct idxd_device *idxd)
10339 ++{
10340 ++ /*
10341 ++ * Intel IAA does not support Read Buffer allocation control,
10342 ++ * make these attributes invisible.
10343 ++ */
10344 ++ return (attr == &dev_attr_group_use_token_limit.attr ||
10345 ++ attr == &dev_attr_group_use_read_buffer_limit.attr ||
10346 ++ attr == &dev_attr_group_tokens_allowed.attr ||
10347 ++ attr == &dev_attr_group_read_buffers_allowed.attr ||
10348 ++ attr == &dev_attr_group_tokens_reserved.attr ||
10349 ++ attr == &dev_attr_group_read_buffers_reserved.attr) &&
10350 ++ idxd->data->type == IDXD_TYPE_IAX;
10351 ++}
10352 ++
10353 + static umode_t idxd_group_attr_visible(struct kobject *kobj,
10354 + struct attribute *attr, int n)
10355 + {
10356 +@@ -538,6 +554,9 @@ static umode_t idxd_group_attr_visible(struct kobject *kobj,
10357 + if (idxd_group_attr_progress_limit_invisible(attr, idxd))
10358 + return 0;
10359 +
10360 ++ if (idxd_group_attr_read_buffers_invisible(attr, idxd))
10361 ++ return 0;
10362 ++
10363 + return attr->mode;
10364 + }
10365 +
10366 +@@ -1233,6 +1252,14 @@ static bool idxd_wq_attr_op_config_invisible(struct attribute *attr,
10367 + !idxd->hw.wq_cap.op_config;
10368 + }
10369 +
10370 ++static bool idxd_wq_attr_max_batch_size_invisible(struct attribute *attr,
10371 ++ struct idxd_device *idxd)
10372 ++{
10373 ++ /* Intel IAA does not support batch processing, make it invisible */
10374 ++ return attr == &dev_attr_wq_max_batch_size.attr &&
10375 ++ idxd->data->type == IDXD_TYPE_IAX;
10376 ++}
10377 ++
10378 + static umode_t idxd_wq_attr_visible(struct kobject *kobj,
10379 + struct attribute *attr, int n)
10380 + {
10381 +@@ -1243,6 +1270,9 @@ static umode_t idxd_wq_attr_visible(struct kobject *kobj,
10382 + if (idxd_wq_attr_op_config_invisible(attr, idxd))
10383 + return 0;
10384 +
10385 ++ if (idxd_wq_attr_max_batch_size_invisible(attr, idxd))
10386 ++ return 0;
10387 ++
10388 + return attr->mode;
10389 + }
10390 +
10391 +@@ -1533,6 +1563,43 @@ static ssize_t cmd_status_store(struct device *dev, struct device_attribute *att
10392 + }
10393 + static DEVICE_ATTR_RW(cmd_status);
10394 +
10395 ++static bool idxd_device_attr_max_batch_size_invisible(struct attribute *attr,
10396 ++ struct idxd_device *idxd)
10397 ++{
10398 ++ /* Intel IAA does not support batch processing, make it invisible */
10399 ++ return attr == &dev_attr_max_batch_size.attr &&
10400 ++ idxd->data->type == IDXD_TYPE_IAX;
10401 ++}
10402 ++
10403 ++static bool idxd_device_attr_read_buffers_invisible(struct attribute *attr,
10404 ++ struct idxd_device *idxd)
10405 ++{
10406 ++ /*
10407 ++ * Intel IAA does not support Read Buffer allocation control,
10408 ++ * make these attributes invisible.
10409 ++ */
10410 ++ return (attr == &dev_attr_max_tokens.attr ||
10411 ++ attr == &dev_attr_max_read_buffers.attr ||
10412 ++ attr == &dev_attr_token_limit.attr ||
10413 ++ attr == &dev_attr_read_buffer_limit.attr) &&
10414 ++ idxd->data->type == IDXD_TYPE_IAX;
10415 ++}
10416 ++
10417 ++static umode_t idxd_device_attr_visible(struct kobject *kobj,
10418 ++ struct attribute *attr, int n)
10419 ++{
10420 ++ struct device *dev = container_of(kobj, struct device, kobj);
10421 ++ struct idxd_device *idxd = confdev_to_idxd(dev);
10422 ++
10423 ++ if (idxd_device_attr_max_batch_size_invisible(attr, idxd))
10424 ++ return 0;
10425 ++
10426 ++ if (idxd_device_attr_read_buffers_invisible(attr, idxd))
10427 ++ return 0;
10428 ++
10429 ++ return attr->mode;
10430 ++}
10431 ++
10432 + static struct attribute *idxd_device_attributes[] = {
10433 + &dev_attr_version.attr,
10434 + &dev_attr_max_groups.attr,
10435 +@@ -1560,6 +1627,7 @@ static struct attribute *idxd_device_attributes[] = {
10436 +
10437 + static const struct attribute_group idxd_device_attribute_group = {
10438 + .attrs = idxd_device_attributes,
10439 ++ .is_visible = idxd_device_attr_visible,
10440 + };
10441 +
10442 + static const struct attribute_group *idxd_attribute_groups[] = {
10443 +diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
10444 +index a22ea053f8e1c..8af4d2523194a 100644
10445 +--- a/drivers/edac/i10nm_base.c
10446 ++++ b/drivers/edac/i10nm_base.c
10447 +@@ -304,11 +304,10 @@ static struct pci_dev *pci_get_dev_wrapper(int dom, unsigned int bus,
10448 + if (unlikely(pci_enable_device(pdev) < 0)) {
10449 + edac_dbg(2, "Failed to enable device %02x:%02x.%x\n",
10450 + bus, dev, fun);
10451 ++ pci_dev_put(pdev);
10452 + return NULL;
10453 + }
10454 +
10455 +- pci_dev_get(pdev);
10456 +-
10457 + return pdev;
10458 + }
10459 +
10460 +diff --git a/drivers/extcon/extcon-usbc-tusb320.c b/drivers/extcon/extcon-usbc-tusb320.c
10461 +index 2a120d8d3c272..9dfa545427ca1 100644
10462 +--- a/drivers/extcon/extcon-usbc-tusb320.c
10463 ++++ b/drivers/extcon/extcon-usbc-tusb320.c
10464 +@@ -313,9 +313,9 @@ static void tusb320_typec_irq_handler(struct tusb320_priv *priv, u8 reg9)
10465 + typec_set_pwr_opmode(port, TYPEC_PWR_MODE_USB);
10466 + }
10467 +
10468 +-static irqreturn_t tusb320_irq_handler(int irq, void *dev_id)
10469 ++static irqreturn_t tusb320_state_update_handler(struct tusb320_priv *priv,
10470 ++ bool force_update)
10471 + {
10472 +- struct tusb320_priv *priv = dev_id;
10473 + unsigned int reg;
10474 +
10475 + if (regmap_read(priv->regmap, TUSB320_REG9, &reg)) {
10476 +@@ -323,7 +323,7 @@ static irqreturn_t tusb320_irq_handler(int irq, void *dev_id)
10477 + return IRQ_NONE;
10478 + }
10479 +
10480 +- if (!(reg & TUSB320_REG9_INTERRUPT_STATUS))
10481 ++ if (!force_update && !(reg & TUSB320_REG9_INTERRUPT_STATUS))
10482 + return IRQ_NONE;
10483 +
10484 + tusb320_extcon_irq_handler(priv, reg);
10485 +@@ -340,6 +340,13 @@ static irqreturn_t tusb320_irq_handler(int irq, void *dev_id)
10486 + return IRQ_HANDLED;
10487 + }
10488 +
10489 ++static irqreturn_t tusb320_irq_handler(int irq, void *dev_id)
10490 ++{
10491 ++ struct tusb320_priv *priv = dev_id;
10492 ++
10493 ++ return tusb320_state_update_handler(priv, false);
10494 ++}
10495 ++
10496 + static const struct regmap_config tusb320_regmap_config = {
10497 + .reg_bits = 8,
10498 + .val_bits = 8,
10499 +@@ -466,7 +473,7 @@ static int tusb320_probe(struct i2c_client *client,
10500 + return ret;
10501 +
10502 + /* update initial state */
10503 +- tusb320_irq_handler(client->irq, priv);
10504 ++ tusb320_state_update_handler(priv, true);
10505 +
10506 + /* Reset chip to its default state */
10507 + ret = tusb320_reset(priv);
10508 +@@ -477,7 +484,7 @@ static int tusb320_probe(struct i2c_client *client,
10509 + * State and polarity might change after a reset, so update
10510 + * them again and make sure the interrupt status bit is cleared.
10511 + */
10512 +- tusb320_irq_handler(client->irq, priv);
10513 ++ tusb320_state_update_handler(priv, true);
10514 +
10515 + ret = devm_request_threaded_irq(priv->dev, client->irq, NULL,
10516 + tusb320_irq_handler,
10517 +diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
10518 +index 4b8978b254f9a..dba315f675bc7 100644
10519 +--- a/drivers/firmware/raspberrypi.c
10520 ++++ b/drivers/firmware/raspberrypi.c
10521 +@@ -272,6 +272,7 @@ static int rpi_firmware_probe(struct platform_device *pdev)
10522 + int ret = PTR_ERR(fw->chan);
10523 + if (ret != -EPROBE_DEFER)
10524 + dev_err(dev, "Failed to get mbox channel: %d\n", ret);
10525 ++ kfree(fw);
10526 + return ret;
10527 + }
10528 +
10529 +diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
10530 +index ebc32bbd9b833..6281e7153b475 100644
10531 +--- a/drivers/firmware/ti_sci.c
10532 ++++ b/drivers/firmware/ti_sci.c
10533 +@@ -429,15 +429,14 @@ static inline int ti_sci_do_xfer(struct ti_sci_info *info,
10534 + * during noirq phase, so we must manually poll the completion.
10535 + */
10536 + ret = read_poll_timeout_atomic(try_wait_for_completion, done_state,
10537 +- true, 1,
10538 ++ done_state, 1,
10539 + info->desc->max_rx_timeout_ms * 1000,
10540 + false, &xfer->done);
10541 + }
10542 +
10543 +- if (ret == -ETIMEDOUT || !done_state) {
10544 ++ if (ret == -ETIMEDOUT)
10545 + dev_err(dev, "Mbox timedout in resp(caller: %pS)\n",
10546 + (void *)_RET_IP_);
10547 +- }
10548 +
10549 + /*
10550 + * NOTE: we might prefer not to need the mailbox ticker to manage the
10551 +diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
10552 +index 0cb6b468f364f..6ab1cf489d035 100644
10553 +--- a/drivers/gpio/gpiolib-cdev.c
10554 ++++ b/drivers/gpio/gpiolib-cdev.c
10555 +@@ -55,6 +55,50 @@ static_assert(IS_ALIGNED(sizeof(struct gpio_v2_line_values), 8));
10556 + * interface to gpiolib GPIOs via ioctl()s.
10557 + */
10558 +
10559 ++typedef __poll_t (*poll_fn)(struct file *, struct poll_table_struct *);
10560 ++typedef long (*ioctl_fn)(struct file *, unsigned int, unsigned long);
10561 ++typedef ssize_t (*read_fn)(struct file *, char __user *,
10562 ++ size_t count, loff_t *);
10563 ++
10564 ++static __poll_t call_poll_locked(struct file *file,
10565 ++ struct poll_table_struct *wait,
10566 ++ struct gpio_device *gdev, poll_fn func)
10567 ++{
10568 ++ __poll_t ret;
10569 ++
10570 ++ down_read(&gdev->sem);
10571 ++ ret = func(file, wait);
10572 ++ up_read(&gdev->sem);
10573 ++
10574 ++ return ret;
10575 ++}
10576 ++
10577 ++static long call_ioctl_locked(struct file *file, unsigned int cmd,
10578 ++ unsigned long arg, struct gpio_device *gdev,
10579 ++ ioctl_fn func)
10580 ++{
10581 ++ long ret;
10582 ++
10583 ++ down_read(&gdev->sem);
10584 ++ ret = func(file, cmd, arg);
10585 ++ up_read(&gdev->sem);
10586 ++
10587 ++ return ret;
10588 ++}
10589 ++
10590 ++static ssize_t call_read_locked(struct file *file, char __user *buf,
10591 ++ size_t count, loff_t *f_ps,
10592 ++ struct gpio_device *gdev, read_fn func)
10593 ++{
10594 ++ ssize_t ret;
10595 ++
10596 ++ down_read(&gdev->sem);
10597 ++ ret = func(file, buf, count, f_ps);
10598 ++ up_read(&gdev->sem);
10599 ++
10600 ++ return ret;
10601 ++}
10602 ++
10603 + /*
10604 + * GPIO line handle management
10605 + */
10606 +@@ -191,8 +235,8 @@ static long linehandle_set_config(struct linehandle_state *lh,
10607 + return 0;
10608 + }
10609 +
10610 +-static long linehandle_ioctl(struct file *file, unsigned int cmd,
10611 +- unsigned long arg)
10612 ++static long linehandle_ioctl_unlocked(struct file *file, unsigned int cmd,
10613 ++ unsigned long arg)
10614 + {
10615 + struct linehandle_state *lh = file->private_data;
10616 + void __user *ip = (void __user *)arg;
10617 +@@ -201,6 +245,9 @@ static long linehandle_ioctl(struct file *file, unsigned int cmd,
10618 + unsigned int i;
10619 + int ret;
10620 +
10621 ++ if (!lh->gdev->chip)
10622 ++ return -ENODEV;
10623 ++
10624 + switch (cmd) {
10625 + case GPIOHANDLE_GET_LINE_VALUES_IOCTL:
10626 + /* NOTE: It's okay to read values of output lines */
10627 +@@ -247,6 +294,15 @@ static long linehandle_ioctl(struct file *file, unsigned int cmd,
10628 + }
10629 + }
10630 +
10631 ++static long linehandle_ioctl(struct file *file, unsigned int cmd,
10632 ++ unsigned long arg)
10633 ++{
10634 ++ struct linehandle_state *lh = file->private_data;
10635 ++
10636 ++ return call_ioctl_locked(file, cmd, arg, lh->gdev,
10637 ++ linehandle_ioctl_unlocked);
10638 ++}
10639 ++
10640 + #ifdef CONFIG_COMPAT
10641 + static long linehandle_ioctl_compat(struct file *file, unsigned int cmd,
10642 + unsigned long arg)
10643 +@@ -1378,12 +1434,15 @@ static long linereq_set_config(struct linereq *lr, void __user *ip)
10644 + return ret;
10645 + }
10646 +
10647 +-static long linereq_ioctl(struct file *file, unsigned int cmd,
10648 +- unsigned long arg)
10649 ++static long linereq_ioctl_unlocked(struct file *file, unsigned int cmd,
10650 ++ unsigned long arg)
10651 + {
10652 + struct linereq *lr = file->private_data;
10653 + void __user *ip = (void __user *)arg;
10654 +
10655 ++ if (!lr->gdev->chip)
10656 ++ return -ENODEV;
10657 ++
10658 + switch (cmd) {
10659 + case GPIO_V2_LINE_GET_VALUES_IOCTL:
10660 + return linereq_get_values(lr, ip);
10661 +@@ -1396,6 +1455,15 @@ static long linereq_ioctl(struct file *file, unsigned int cmd,
10662 + }
10663 + }
10664 +
10665 ++static long linereq_ioctl(struct file *file, unsigned int cmd,
10666 ++ unsigned long arg)
10667 ++{
10668 ++ struct linereq *lr = file->private_data;
10669 ++
10670 ++ return call_ioctl_locked(file, cmd, arg, lr->gdev,
10671 ++ linereq_ioctl_unlocked);
10672 ++}
10673 ++
10674 + #ifdef CONFIG_COMPAT
10675 + static long linereq_ioctl_compat(struct file *file, unsigned int cmd,
10676 + unsigned long arg)
10677 +@@ -1404,12 +1472,15 @@ static long linereq_ioctl_compat(struct file *file, unsigned int cmd,
10678 + }
10679 + #endif
10680 +
10681 +-static __poll_t linereq_poll(struct file *file,
10682 +- struct poll_table_struct *wait)
10683 ++static __poll_t linereq_poll_unlocked(struct file *file,
10684 ++ struct poll_table_struct *wait)
10685 + {
10686 + struct linereq *lr = file->private_data;
10687 + __poll_t events = 0;
10688 +
10689 ++ if (!lr->gdev->chip)
10690 ++ return EPOLLHUP | EPOLLERR;
10691 ++
10692 + poll_wait(file, &lr->wait, wait);
10693 +
10694 + if (!kfifo_is_empty_spinlocked_noirqsave(&lr->events,
10695 +@@ -1419,16 +1490,25 @@ static __poll_t linereq_poll(struct file *file,
10696 + return events;
10697 + }
10698 +
10699 +-static ssize_t linereq_read(struct file *file,
10700 +- char __user *buf,
10701 +- size_t count,
10702 +- loff_t *f_ps)
10703 ++static __poll_t linereq_poll(struct file *file,
10704 ++ struct poll_table_struct *wait)
10705 ++{
10706 ++ struct linereq *lr = file->private_data;
10707 ++
10708 ++ return call_poll_locked(file, wait, lr->gdev, linereq_poll_unlocked);
10709 ++}
10710 ++
10711 ++static ssize_t linereq_read_unlocked(struct file *file, char __user *buf,
10712 ++ size_t count, loff_t *f_ps)
10713 + {
10714 + struct linereq *lr = file->private_data;
10715 + struct gpio_v2_line_event le;
10716 + ssize_t bytes_read = 0;
10717 + int ret;
10718 +
10719 ++ if (!lr->gdev->chip)
10720 ++ return -ENODEV;
10721 ++
10722 + if (count < sizeof(le))
10723 + return -EINVAL;
10724 +
10725 +@@ -1473,6 +1553,15 @@ static ssize_t linereq_read(struct file *file,
10726 + return bytes_read;
10727 + }
10728 +
10729 ++static ssize_t linereq_read(struct file *file, char __user *buf,
10730 ++ size_t count, loff_t *f_ps)
10731 ++{
10732 ++ struct linereq *lr = file->private_data;
10733 ++
10734 ++ return call_read_locked(file, buf, count, f_ps, lr->gdev,
10735 ++ linereq_read_unlocked);
10736 ++}
10737 ++
10738 + static void linereq_free(struct linereq *lr)
10739 + {
10740 + unsigned int i;
10741 +@@ -1710,12 +1799,15 @@ struct lineevent_state {
10742 + (GPIOEVENT_REQUEST_RISING_EDGE | \
10743 + GPIOEVENT_REQUEST_FALLING_EDGE)
10744 +
10745 +-static __poll_t lineevent_poll(struct file *file,
10746 +- struct poll_table_struct *wait)
10747 ++static __poll_t lineevent_poll_unlocked(struct file *file,
10748 ++ struct poll_table_struct *wait)
10749 + {
10750 + struct lineevent_state *le = file->private_data;
10751 + __poll_t events = 0;
10752 +
10753 ++ if (!le->gdev->chip)
10754 ++ return EPOLLHUP | EPOLLERR;
10755 ++
10756 + poll_wait(file, &le->wait, wait);
10757 +
10758 + if (!kfifo_is_empty_spinlocked_noirqsave(&le->events, &le->wait.lock))
10759 +@@ -1724,15 +1816,21 @@ static __poll_t lineevent_poll(struct file *file,
10760 + return events;
10761 + }
10762 +
10763 ++static __poll_t lineevent_poll(struct file *file,
10764 ++ struct poll_table_struct *wait)
10765 ++{
10766 ++ struct lineevent_state *le = file->private_data;
10767 ++
10768 ++ return call_poll_locked(file, wait, le->gdev, lineevent_poll_unlocked);
10769 ++}
10770 ++
10771 + struct compat_gpioeevent_data {
10772 + compat_u64 timestamp;
10773 + u32 id;
10774 + };
10775 +
10776 +-static ssize_t lineevent_read(struct file *file,
10777 +- char __user *buf,
10778 +- size_t count,
10779 +- loff_t *f_ps)
10780 ++static ssize_t lineevent_read_unlocked(struct file *file, char __user *buf,
10781 ++ size_t count, loff_t *f_ps)
10782 + {
10783 + struct lineevent_state *le = file->private_data;
10784 + struct gpioevent_data ge;
10785 +@@ -1740,6 +1838,9 @@ static ssize_t lineevent_read(struct file *file,
10786 + ssize_t ge_size;
10787 + int ret;
10788 +
10789 ++ if (!le->gdev->chip)
10790 ++ return -ENODEV;
10791 ++
10792 + /*
10793 + * When compatible system call is being used the struct gpioevent_data,
10794 + * in case of at least ia32, has different size due to the alignment
10795 +@@ -1797,6 +1898,15 @@ static ssize_t lineevent_read(struct file *file,
10796 + return bytes_read;
10797 + }
10798 +
10799 ++static ssize_t lineevent_read(struct file *file, char __user *buf,
10800 ++ size_t count, loff_t *f_ps)
10801 ++{
10802 ++ struct lineevent_state *le = file->private_data;
10803 ++
10804 ++ return call_read_locked(file, buf, count, f_ps, le->gdev,
10805 ++ lineevent_read_unlocked);
10806 ++}
10807 ++
10808 + static void lineevent_free(struct lineevent_state *le)
10809 + {
10810 + if (le->irq)
10811 +@@ -1814,13 +1924,16 @@ static int lineevent_release(struct inode *inode, struct file *file)
10812 + return 0;
10813 + }
10814 +
10815 +-static long lineevent_ioctl(struct file *file, unsigned int cmd,
10816 +- unsigned long arg)
10817 ++static long lineevent_ioctl_unlocked(struct file *file, unsigned int cmd,
10818 ++ unsigned long arg)
10819 + {
10820 + struct lineevent_state *le = file->private_data;
10821 + void __user *ip = (void __user *)arg;
10822 + struct gpiohandle_data ghd;
10823 +
10824 ++ if (!le->gdev->chip)
10825 ++ return -ENODEV;
10826 ++
10827 + /*
10828 + * We can get the value for an event line but not set it,
10829 + * because it is input by definition.
10830 +@@ -1843,6 +1956,15 @@ static long lineevent_ioctl(struct file *file, unsigned int cmd,
10831 + return -EINVAL;
10832 + }
10833 +
10834 ++static long lineevent_ioctl(struct file *file, unsigned int cmd,
10835 ++ unsigned long arg)
10836 ++{
10837 ++ struct lineevent_state *le = file->private_data;
10838 ++
10839 ++ return call_ioctl_locked(file, cmd, arg, le->gdev,
10840 ++ lineevent_ioctl_unlocked);
10841 ++}
10842 ++
10843 + #ifdef CONFIG_COMPAT
10844 + static long lineevent_ioctl_compat(struct file *file, unsigned int cmd,
10845 + unsigned long arg)
10846 +@@ -2401,12 +2523,15 @@ static int lineinfo_changed_notify(struct notifier_block *nb,
10847 + return NOTIFY_OK;
10848 + }
10849 +
10850 +-static __poll_t lineinfo_watch_poll(struct file *file,
10851 +- struct poll_table_struct *pollt)
10852 ++static __poll_t lineinfo_watch_poll_unlocked(struct file *file,
10853 ++ struct poll_table_struct *pollt)
10854 + {
10855 + struct gpio_chardev_data *cdev = file->private_data;
10856 + __poll_t events = 0;
10857 +
10858 ++ if (!cdev->gdev->chip)
10859 ++ return EPOLLHUP | EPOLLERR;
10860 ++
10861 + poll_wait(file, &cdev->wait, pollt);
10862 +
10863 + if (!kfifo_is_empty_spinlocked_noirqsave(&cdev->events,
10864 +@@ -2416,8 +2541,17 @@ static __poll_t lineinfo_watch_poll(struct file *file,
10865 + return events;
10866 + }
10867 +
10868 +-static ssize_t lineinfo_watch_read(struct file *file, char __user *buf,
10869 +- size_t count, loff_t *off)
10870 ++static __poll_t lineinfo_watch_poll(struct file *file,
10871 ++ struct poll_table_struct *pollt)
10872 ++{
10873 ++ struct gpio_chardev_data *cdev = file->private_data;
10874 ++
10875 ++ return call_poll_locked(file, pollt, cdev->gdev,
10876 ++ lineinfo_watch_poll_unlocked);
10877 ++}
10878 ++
10879 ++static ssize_t lineinfo_watch_read_unlocked(struct file *file, char __user *buf,
10880 ++ size_t count, loff_t *off)
10881 + {
10882 + struct gpio_chardev_data *cdev = file->private_data;
10883 + struct gpio_v2_line_info_changed event;
10884 +@@ -2425,6 +2559,9 @@ static ssize_t lineinfo_watch_read(struct file *file, char __user *buf,
10885 + int ret;
10886 + size_t event_size;
10887 +
10888 ++ if (!cdev->gdev->chip)
10889 ++ return -ENODEV;
10890 ++
10891 + #ifndef CONFIG_GPIO_CDEV_V1
10892 + event_size = sizeof(struct gpio_v2_line_info_changed);
10893 + if (count < event_size)
10894 +@@ -2492,6 +2629,15 @@ static ssize_t lineinfo_watch_read(struct file *file, char __user *buf,
10895 + return bytes_read;
10896 + }
10897 +
10898 ++static ssize_t lineinfo_watch_read(struct file *file, char __user *buf,
10899 ++ size_t count, loff_t *off)
10900 ++{
10901 ++ struct gpio_chardev_data *cdev = file->private_data;
10902 ++
10903 ++ return call_read_locked(file, buf, count, off, cdev->gdev,
10904 ++ lineinfo_watch_read_unlocked);
10905 ++}
10906 ++
10907 + /**
10908 + * gpio_chrdev_open() - open the chardev for ioctl operations
10909 + * @inode: inode for this chardev
10910 +@@ -2505,13 +2651,17 @@ static int gpio_chrdev_open(struct inode *inode, struct file *file)
10911 + struct gpio_chardev_data *cdev;
10912 + int ret = -ENOMEM;
10913 +
10914 ++ down_read(&gdev->sem);
10915 ++
10916 + /* Fail on open if the backing gpiochip is gone */
10917 +- if (!gdev->chip)
10918 +- return -ENODEV;
10919 ++ if (!gdev->chip) {
10920 ++ ret = -ENODEV;
10921 ++ goto out_unlock;
10922 ++ }
10923 +
10924 + cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
10925 + if (!cdev)
10926 +- return -ENOMEM;
10927 ++ goto out_unlock;
10928 +
10929 + cdev->watched_lines = bitmap_zalloc(gdev->chip->ngpio, GFP_KERNEL);
10930 + if (!cdev->watched_lines)
10931 +@@ -2534,6 +2684,8 @@ static int gpio_chrdev_open(struct inode *inode, struct file *file)
10932 + if (ret)
10933 + goto out_unregister_notifier;
10934 +
10935 ++ up_read(&gdev->sem);
10936 ++
10937 + return ret;
10938 +
10939 + out_unregister_notifier:
10940 +@@ -2543,6 +2695,8 @@ out_free_bitmap:
10941 + bitmap_free(cdev->watched_lines);
10942 + out_free_cdev:
10943 + kfree(cdev);
10944 ++out_unlock:
10945 ++ up_read(&gdev->sem);
10946 + return ret;
10947 + }
10948 +
10949 +diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
10950 +index a70522aef3557..5974cfc61b417 100644
10951 +--- a/drivers/gpio/gpiolib.c
10952 ++++ b/drivers/gpio/gpiolib.c
10953 +@@ -735,6 +735,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
10954 + spin_unlock_irqrestore(&gpio_lock, flags);
10955 +
10956 + BLOCKING_INIT_NOTIFIER_HEAD(&gdev->notifier);
10957 ++ init_rwsem(&gdev->sem);
10958 +
10959 + #ifdef CONFIG_PINCTRL
10960 + INIT_LIST_HEAD(&gdev->pin_ranges);
10961 +@@ -875,6 +876,8 @@ void gpiochip_remove(struct gpio_chip *gc)
10962 + unsigned long flags;
10963 + unsigned int i;
10964 +
10965 ++ down_write(&gdev->sem);
10966 ++
10967 + /* FIXME: should the legacy sysfs handling be moved to gpio_device? */
10968 + gpiochip_sysfs_unregister(gdev);
10969 + gpiochip_free_hogs(gc);
10970 +@@ -909,6 +912,7 @@ void gpiochip_remove(struct gpio_chip *gc)
10971 + * gone.
10972 + */
10973 + gcdev_unregister(gdev);
10974 ++ up_write(&gdev->sem);
10975 + put_device(&gdev->dev);
10976 + }
10977 + EXPORT_SYMBOL_GPL(gpiochip_remove);
10978 +diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
10979 +index d900ecdbac46d..9ad68a0adf4a8 100644
10980 +--- a/drivers/gpio/gpiolib.h
10981 ++++ b/drivers/gpio/gpiolib.h
10982 +@@ -15,6 +15,7 @@
10983 + #include <linux/device.h>
10984 + #include <linux/module.h>
10985 + #include <linux/cdev.h>
10986 ++#include <linux/rwsem.h>
10987 +
10988 + #define GPIOCHIP_NAME "gpiochip"
10989 +
10990 +@@ -39,6 +40,9 @@
10991 + * @list: links gpio_device:s together for traversal
10992 + * @notifier: used to notify subscribers about lines being requested, released
10993 + * or reconfigured
10994 ++ * @sem: protects the structure from a NULL-pointer dereference of @chip by
10995 ++ * user-space operations when the device gets unregistered during
10996 ++ * a hot-unplug event
10997 + * @pin_ranges: range of pins served by the GPIO driver
10998 + *
10999 + * This state container holds most of the runtime variable data
11000 +@@ -60,6 +64,7 @@ struct gpio_device {
11001 + void *data;
11002 + struct list_head list;
11003 + struct blocking_notifier_head notifier;
11004 ++ struct rw_semaphore sem;
11005 +
11006 + #ifdef CONFIG_PINCTRL
11007 + /*
11008 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
11009 +index 1f76e27f1a354..fe87b3402f06a 100644
11010 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
11011 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
11012 +@@ -2256,7 +2256,7 @@ int amdgpu_amdkfd_gpuvm_import_dmabuf(struct amdgpu_device *adev,
11013 +
11014 + ret = drm_vma_node_allow(&obj->vma_node, drm_priv);
11015 + if (ret) {
11016 +- kfree(mem);
11017 ++ kfree(*mem);
11018 + return ret;
11019 + }
11020 +
11021 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
11022 +index e363f56c72af1..30c28a69e847d 100644
11023 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
11024 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
11025 +@@ -317,6 +317,7 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
11026 +
11027 + if (!found)
11028 + return false;
11029 ++ pci_dev_put(pdev);
11030 +
11031 + adev->bios = kmalloc(size, GFP_KERNEL);
11032 + if (!adev->bios) {
11033 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
11034 +index f1e9663b40510..913f22d41673d 100644
11035 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
11036 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
11037 +@@ -2462,6 +2462,11 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
11038 + if (!amdgpu_sriov_vf(adev)) {
11039 + struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev);
11040 +
11041 ++ if (WARN_ON(!hive)) {
11042 ++ r = -ENOENT;
11043 ++ goto init_failed;
11044 ++ }
11045 ++
11046 + if (!hive->reset_domain ||
11047 + !amdgpu_reset_get_reset_domain(hive->reset_domain)) {
11048 + r = -ENOENT;
11049 +@@ -5027,6 +5032,8 @@ static void amdgpu_device_resume_display_audio(struct amdgpu_device *adev)
11050 + pm_runtime_enable(&(p->dev));
11051 + pm_runtime_resume(&(p->dev));
11052 + }
11053 ++
11054 ++ pci_dev_put(p);
11055 + }
11056 +
11057 + static int amdgpu_device_suspend_display_audio(struct amdgpu_device *adev)
11058 +@@ -5065,6 +5072,7 @@ static int amdgpu_device_suspend_display_audio(struct amdgpu_device *adev)
11059 +
11060 + if (expires < ktime_get_mono_fast_ns()) {
11061 + dev_warn(adev->dev, "failed to suspend display audio\n");
11062 ++ pci_dev_put(p);
11063 + /* TODO: abort the succeeding gpu reset? */
11064 + return -ETIMEDOUT;
11065 + }
11066 +@@ -5072,6 +5080,7 @@ static int amdgpu_device_suspend_display_audio(struct amdgpu_device *adev)
11067 +
11068 + pm_runtime_disable(&(p->dev));
11069 +
11070 ++ pci_dev_put(p);
11071 + return 0;
11072 + }
11073 +
11074 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
11075 +index 49c4347d154ce..2b9d806e23afb 100644
11076 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
11077 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
11078 +@@ -75,6 +75,8 @@ struct amdgpu_vf_error_buffer {
11079 + uint64_t data[AMDGPU_VF_ERROR_ENTRY_SIZE];
11080 + };
11081 +
11082 ++enum idh_request;
11083 ++
11084 + /**
11085 + * struct amdgpu_virt_ops - amdgpu device virt operations
11086 + */
11087 +@@ -84,7 +86,8 @@ struct amdgpu_virt_ops {
11088 + int (*req_init_data)(struct amdgpu_device *adev);
11089 + int (*reset_gpu)(struct amdgpu_device *adev);
11090 + int (*wait_reset)(struct amdgpu_device *adev);
11091 +- void (*trans_msg)(struct amdgpu_device *adev, u32 req, u32 data1, u32 data2, u32 data3);
11092 ++ void (*trans_msg)(struct amdgpu_device *adev, enum idh_request req,
11093 ++ u32 data1, u32 data2, u32 data3);
11094 + };
11095 +
11096 + /*
11097 +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
11098 +index 47159e9a08848..4b9e7b050ccd2 100644
11099 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
11100 ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
11101 +@@ -386,7 +386,6 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev)
11102 + if (ret) {
11103 + dev_err(adev->dev, "XGMI: failed initializing kobject for xgmi hive\n");
11104 + kobject_put(&hive->kobj);
11105 +- kfree(hive);
11106 + hive = NULL;
11107 + goto pro_end;
11108 + }
11109 +@@ -410,7 +409,6 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev)
11110 + dev_err(adev->dev, "XGMI: failed initializing reset domain for xgmi hive\n");
11111 + ret = -ENOMEM;
11112 + kobject_put(&hive->kobj);
11113 +- kfree(hive);
11114 + hive = NULL;
11115 + goto pro_end;
11116 + }
11117 +diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
11118 +index b3fba8dea63ca..6853b93ac82e7 100644
11119 +--- a/drivers/gpu/drm/amd/amdgpu/nv.c
11120 ++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
11121 +@@ -82,10 +82,10 @@ static const struct amdgpu_video_codecs nv_video_codecs_encode =
11122 + /* Navi1x */
11123 + static const struct amdgpu_video_codec_info nv_video_codecs_decode_array[] =
11124 + {
11125 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4906, 3)},
11126 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4906, 5)},
11127 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11128 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)},
11129 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
11130 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
11131 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11132 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
11133 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11134 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11135 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11136 +@@ -100,10 +100,10 @@ static const struct amdgpu_video_codecs nv_video_codecs_decode =
11137 + /* Sienna Cichlid */
11138 + static const struct amdgpu_video_codec_info sc_video_codecs_decode_array[] =
11139 + {
11140 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4906, 3)},
11141 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4906, 5)},
11142 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11143 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)},
11144 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
11145 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
11146 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11147 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
11148 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11149 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11150 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11151 +@@ -125,10 +125,10 @@ static struct amdgpu_video_codec_info sriov_sc_video_codecs_encode_array[] =
11152 +
11153 + static struct amdgpu_video_codec_info sriov_sc_video_codecs_decode_array[] =
11154 + {
11155 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4906, 3)},
11156 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4906, 5)},
11157 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11158 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)},
11159 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
11160 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
11161 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11162 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
11163 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11164 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11165 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11166 +@@ -149,7 +149,7 @@ static struct amdgpu_video_codecs sriov_sc_video_codecs_decode =
11167 +
11168 + /* Beige Goby*/
11169 + static const struct amdgpu_video_codec_info bg_video_codecs_decode_array[] = {
11170 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11171 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11172 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11173 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11174 + };
11175 +@@ -166,7 +166,7 @@ static const struct amdgpu_video_codecs bg_video_codecs_encode = {
11176 +
11177 + /* Yellow Carp*/
11178 + static const struct amdgpu_video_codec_info yc_video_codecs_decode_array[] = {
11179 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11180 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11181 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11182 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11183 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11184 +diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
11185 +index e3b2b6b4f1a66..7cd17dda32ceb 100644
11186 +--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
11187 ++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
11188 +@@ -103,10 +103,10 @@ static const struct amdgpu_video_codecs vega_video_codecs_encode =
11189 + /* Vega */
11190 + static const struct amdgpu_video_codec_info vega_video_codecs_decode_array[] =
11191 + {
11192 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4906, 3)},
11193 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4906, 5)},
11194 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11195 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)},
11196 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
11197 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
11198 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11199 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
11200 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 186)},
11201 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11202 + };
11203 +@@ -120,10 +120,10 @@ static const struct amdgpu_video_codecs vega_video_codecs_decode =
11204 + /* Raven */
11205 + static const struct amdgpu_video_codec_info rv_video_codecs_decode_array[] =
11206 + {
11207 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4906, 3)},
11208 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4906, 5)},
11209 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11210 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)},
11211 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
11212 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
11213 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11214 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
11215 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 186)},
11216 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11217 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 4096, 4096, 0)},
11218 +@@ -138,10 +138,10 @@ static const struct amdgpu_video_codecs rv_video_codecs_decode =
11219 + /* Renoir, Arcturus */
11220 + static const struct amdgpu_video_codec_info rn_video_codecs_decode_array[] =
11221 + {
11222 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4906, 3)},
11223 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4906, 5)},
11224 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11225 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)},
11226 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
11227 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
11228 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11229 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
11230 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11231 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11232 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11233 +diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
11234 +index e08044008186e..8b297ade69a24 100644
11235 +--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
11236 ++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
11237 +@@ -61,7 +61,7 @@ static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode =
11238 +
11239 + static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_decode_array[] =
11240 + {
11241 +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)},
11242 ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
11243 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
11244 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
11245 + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
11246 +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
11247 +index f0b01c8dc4a6b..f72c013d3a5b0 100644
11248 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
11249 ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
11250 +@@ -42,39 +42,6 @@
11251 + #include "dm_helpers.h"
11252 + #include "ddc_service_types.h"
11253 +
11254 +-struct monitor_patch_info {
11255 +- unsigned int manufacturer_id;
11256 +- unsigned int product_id;
11257 +- void (*patch_func)(struct dc_edid_caps *edid_caps, unsigned int param);
11258 +- unsigned int patch_param;
11259 +-};
11260 +-static void set_max_dsc_bpp_limit(struct dc_edid_caps *edid_caps, unsigned int param);
11261 +-
11262 +-static const struct monitor_patch_info monitor_patch_table[] = {
11263 +-{0x6D1E, 0x5BBF, set_max_dsc_bpp_limit, 15},
11264 +-{0x6D1E, 0x5B9A, set_max_dsc_bpp_limit, 15},
11265 +-};
11266 +-
11267 +-static void set_max_dsc_bpp_limit(struct dc_edid_caps *edid_caps, unsigned int param)
11268 +-{
11269 +- if (edid_caps)
11270 +- edid_caps->panel_patch.max_dsc_target_bpp_limit = param;
11271 +-}
11272 +-
11273 +-static int amdgpu_dm_patch_edid_caps(struct dc_edid_caps *edid_caps)
11274 +-{
11275 +- int i, ret = 0;
11276 +-
11277 +- for (i = 0; i < ARRAY_SIZE(monitor_patch_table); i++)
11278 +- if ((edid_caps->manufacturer_id == monitor_patch_table[i].manufacturer_id)
11279 +- && (edid_caps->product_id == monitor_patch_table[i].product_id)) {
11280 +- monitor_patch_table[i].patch_func(edid_caps, monitor_patch_table[i].patch_param);
11281 +- ret++;
11282 +- }
11283 +-
11284 +- return ret;
11285 +-}
11286 +-
11287 + /* dm_helpers_parse_edid_caps
11288 + *
11289 + * Parse edid caps
11290 +@@ -149,8 +116,6 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
11291 + kfree(sads);
11292 + kfree(sadb);
11293 +
11294 +- amdgpu_dm_patch_edid_caps(edid_caps);
11295 +-
11296 + return result;
11297 + }
11298 +
11299 +diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
11300 +index e0c8d6f09bb4b..074e70a5c458e 100644
11301 +--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
11302 ++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
11303 +@@ -462,6 +462,7 @@ static enum bp_result get_gpio_i2c_info(
11304 + uint32_t count = 0;
11305 + unsigned int table_index = 0;
11306 + bool find_valid = false;
11307 ++ struct atom_gpio_pin_assignment *pin;
11308 +
11309 + if (!info)
11310 + return BP_RESULT_BADINPUT;
11311 +@@ -489,20 +490,17 @@ static enum bp_result get_gpio_i2c_info(
11312 + - sizeof(struct atom_common_table_header))
11313 + / sizeof(struct atom_gpio_pin_assignment);
11314 +
11315 ++ pin = (struct atom_gpio_pin_assignment *) header->gpio_pin;
11316 ++
11317 + for (table_index = 0; table_index < count; table_index++) {
11318 +- if (((record->i2c_id & I2C_HW_CAP) == (
11319 +- header->gpio_pin[table_index].gpio_id &
11320 +- I2C_HW_CAP)) &&
11321 +- ((record->i2c_id & I2C_HW_ENGINE_ID_MASK) ==
11322 +- (header->gpio_pin[table_index].gpio_id &
11323 +- I2C_HW_ENGINE_ID_MASK)) &&
11324 +- ((record->i2c_id & I2C_HW_LANE_MUX) ==
11325 +- (header->gpio_pin[table_index].gpio_id &
11326 +- I2C_HW_LANE_MUX))) {
11327 ++ if (((record->i2c_id & I2C_HW_CAP) == (pin->gpio_id & I2C_HW_CAP)) &&
11328 ++ ((record->i2c_id & I2C_HW_ENGINE_ID_MASK) == (pin->gpio_id & I2C_HW_ENGINE_ID_MASK)) &&
11329 ++ ((record->i2c_id & I2C_HW_LANE_MUX) == (pin->gpio_id & I2C_HW_LANE_MUX))) {
11330 + /* still valid */
11331 + find_valid = true;
11332 + break;
11333 + }
11334 ++ pin = (struct atom_gpio_pin_assignment *)((uint8_t *)pin + sizeof(struct atom_gpio_pin_assignment));
11335 + }
11336 +
11337 + /* If we don't find the entry that we are looking for then
11338 +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
11339 +index 6f77d8e538ab1..9eb9fe5b8d2c5 100644
11340 +--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
11341 ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
11342 +@@ -438,7 +438,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
11343 + }
11344 +
11345 + if (!new_clocks->dtbclk_en) {
11346 +- new_clocks->ref_dtbclk_khz = 0;
11347 ++ new_clocks->ref_dtbclk_khz = clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz * 1000;
11348 + }
11349 +
11350 + /* clock limits are received with MHz precision, divide by 1000 to prevent setting clocks at every call */
11351 +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
11352 +index 997ab031f816d..5260ad6de8038 100644
11353 +--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
11354 ++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
11355 +@@ -1070,6 +1070,7 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
11356 + int i, j;
11357 + struct dc_state *dangling_context = dc_create_state(dc);
11358 + struct dc_state *current_ctx;
11359 ++ struct pipe_ctx *pipe;
11360 +
11361 + if (dangling_context == NULL)
11362 + return;
11363 +@@ -1112,6 +1113,16 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
11364 + }
11365 +
11366 + if (should_disable && old_stream) {
11367 ++ pipe = &dc->current_state->res_ctx.pipe_ctx[i];
11368 ++ /* When disabling plane for a phantom pipe, we must turn on the
11369 ++ * phantom OTG so the disable programming gets the double buffer
11370 ++ * update. Otherwise the pipe will be left in a partially disabled
11371 ++ * state that can result in underflow or hang when enabling it
11372 ++ * again for different use.
11373 ++ */
11374 ++ if (old_stream->mall_stream_config.type == SUBVP_PHANTOM) {
11375 ++ pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
11376 ++ }
11377 + dc_rem_all_planes_for_stream(dc, old_stream, dangling_context);
11378 + disable_all_writeback_pipes_for_stream(dc, old_stream, dangling_context);
11379 +
11380 +@@ -1760,6 +1771,12 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
11381 + context->stream_count == 0)
11382 + dc->hwss.prepare_bandwidth(dc, context);
11383 +
11384 ++ /* When SubVP is active, all HW programming must be done while
11385 ++ * SubVP lock is acquired
11386 ++ */
11387 ++ if (dc->hwss.subvp_pipe_control_lock)
11388 ++ dc->hwss.subvp_pipe_control_lock(dc, context, true, true, NULL, subvp_prev_use);
11389 ++
11390 + if (dc->debug.enable_double_buffered_dsc_pg_support)
11391 + dc->hwss.update_dsc_pg(dc, context, false);
11392 +
11393 +@@ -1787,9 +1804,6 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
11394 + dc->hwss.wait_for_mpcc_disconnect(dc, dc->res_pool, pipe);
11395 + }
11396 +
11397 +- if (dc->hwss.subvp_pipe_control_lock)
11398 +- dc->hwss.subvp_pipe_control_lock(dc, context, true, true, NULL, subvp_prev_use);
11399 +-
11400 + result = dc->hwss.apply_ctx_to_hw(dc, context);
11401 +
11402 + if (result != DC_OK) {
11403 +@@ -3576,7 +3590,6 @@ static bool could_mpcc_tree_change_for_active_pipes(struct dc *dc,
11404 +
11405 + struct dc_stream_status *cur_stream_status = stream_get_status(dc->current_state, stream);
11406 + bool force_minimal_pipe_splitting = false;
11407 +- uint32_t i;
11408 +
11409 + *is_plane_addition = false;
11410 +
11411 +@@ -3608,27 +3621,11 @@ static bool could_mpcc_tree_change_for_active_pipes(struct dc *dc,
11412 + }
11413 + }
11414 +
11415 +- /* For SubVP pipe split case when adding MPO video
11416 +- * we need to add a minimal transition. In this case
11417 +- * there will be 2 streams (1 main stream, 1 phantom
11418 +- * stream).
11419 ++ /* For SubVP when adding MPO video we need to add a minimal transition.
11420 + */
11421 +- if (cur_stream_status &&
11422 +- dc->current_state->stream_count == 2 &&
11423 +- stream->mall_stream_config.type == SUBVP_MAIN) {
11424 +- bool is_pipe_split = false;
11425 +-
11426 +- for (i = 0; i < dc->res_pool->pipe_count; i++) {
11427 +- if (dc->current_state->res_ctx.pipe_ctx[i].stream == stream &&
11428 +- (dc->current_state->res_ctx.pipe_ctx[i].bottom_pipe ||
11429 +- dc->current_state->res_ctx.pipe_ctx[i].next_odm_pipe)) {
11430 +- is_pipe_split = true;
11431 +- break;
11432 +- }
11433 +- }
11434 +-
11435 ++ if (cur_stream_status && stream->mall_stream_config.type == SUBVP_MAIN) {
11436 + /* determine if minimal transition is required due to SubVP*/
11437 +- if (surface_count > 0 && is_pipe_split) {
11438 ++ if (surface_count > 0) {
11439 + if (cur_stream_status->plane_count > surface_count) {
11440 + force_minimal_pipe_splitting = true;
11441 + } else if (cur_stream_status->plane_count < surface_count) {
11442 +@@ -3650,10 +3647,32 @@ static bool commit_minimal_transition_state(struct dc *dc,
11443 + bool temp_subvp_policy;
11444 + enum dc_status ret = DC_ERROR_UNEXPECTED;
11445 + unsigned int i, j;
11446 ++ unsigned int pipe_in_use = 0;
11447 +
11448 + if (!transition_context)
11449 + return false;
11450 +
11451 ++ /* check current pipes in use*/
11452 ++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
11453 ++ struct pipe_ctx *pipe = &transition_base_context->res_ctx.pipe_ctx[i];
11454 ++
11455 ++ if (pipe->plane_state)
11456 ++ pipe_in_use++;
11457 ++ }
11458 ++
11459 ++ /* When the OS add a new surface if we have been used all of pipes with odm combine
11460 ++ * and mpc split feature, it need use commit_minimal_transition_state to transition safely.
11461 ++ * After OS exit MPO, it will back to use odm and mpc split with all of pipes, we need
11462 ++ * call it again. Otherwise return true to skip.
11463 ++ *
11464 ++ * Reduce the scenarios to use dc_commit_state_no_check in the stage of flip. Especially
11465 ++ * enter/exit MPO when DCN still have enough resources.
11466 ++ */
11467 ++ if (pipe_in_use != dc->res_pool->pipe_count) {
11468 ++ dc_release_state(transition_context);
11469 ++ return true;
11470 ++ }
11471 ++
11472 + if (!dc->config.is_vmin_only_asic) {
11473 + tmp_mpc_policy = dc->debug.pipe_split_policy;
11474 + dc->debug.pipe_split_policy = MPC_SPLIT_AVOID;
11475 +diff --git a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
11476 +index fc6aa098bda06..8db9f75144662 100644
11477 +--- a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
11478 ++++ b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
11479 +@@ -1128,6 +1128,7 @@ struct resource_pool *dce60_create_resource_pool(
11480 + if (dce60_construct(num_virtual_links, dc, pool))
11481 + return &pool->base;
11482 +
11483 ++ kfree(pool);
11484 + BREAK_TO_DEBUGGER();
11485 + return NULL;
11486 + }
11487 +@@ -1325,6 +1326,7 @@ struct resource_pool *dce61_create_resource_pool(
11488 + if (dce61_construct(num_virtual_links, dc, pool))
11489 + return &pool->base;
11490 +
11491 ++ kfree(pool);
11492 + BREAK_TO_DEBUGGER();
11493 + return NULL;
11494 + }
11495 +@@ -1518,6 +1520,7 @@ struct resource_pool *dce64_create_resource_pool(
11496 + if (dce64_construct(num_virtual_links, dc, pool))
11497 + return &pool->base;
11498 +
11499 ++ kfree(pool);
11500 + BREAK_TO_DEBUGGER();
11501 + return NULL;
11502 + }
11503 +diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
11504 +index b28025960050c..5825e6f412bd7 100644
11505 +--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
11506 ++++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
11507 +@@ -1137,6 +1137,7 @@ struct resource_pool *dce80_create_resource_pool(
11508 + if (dce80_construct(num_virtual_links, dc, pool))
11509 + return &pool->base;
11510 +
11511 ++ kfree(pool);
11512 + BREAK_TO_DEBUGGER();
11513 + return NULL;
11514 + }
11515 +@@ -1336,6 +1337,7 @@ struct resource_pool *dce81_create_resource_pool(
11516 + if (dce81_construct(num_virtual_links, dc, pool))
11517 + return &pool->base;
11518 +
11519 ++ kfree(pool);
11520 + BREAK_TO_DEBUGGER();
11521 + return NULL;
11522 + }
11523 +diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
11524 +index 11e4c4e469473..c06538c37a11f 100644
11525 +--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
11526 ++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
11527 +@@ -867,6 +867,32 @@ static void false_optc_underflow_wa(
11528 + tg->funcs->clear_optc_underflow(tg);
11529 + }
11530 +
11531 ++static int calculate_vready_offset_for_group(struct pipe_ctx *pipe)
11532 ++{
11533 ++ struct pipe_ctx *other_pipe;
11534 ++ int vready_offset = pipe->pipe_dlg_param.vready_offset;
11535 ++
11536 ++ /* Always use the largest vready_offset of all connected pipes */
11537 ++ for (other_pipe = pipe->bottom_pipe; other_pipe != NULL; other_pipe = other_pipe->bottom_pipe) {
11538 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11539 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11540 ++ }
11541 ++ for (other_pipe = pipe->top_pipe; other_pipe != NULL; other_pipe = other_pipe->top_pipe) {
11542 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11543 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11544 ++ }
11545 ++ for (other_pipe = pipe->next_odm_pipe; other_pipe != NULL; other_pipe = other_pipe->next_odm_pipe) {
11546 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11547 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11548 ++ }
11549 ++ for (other_pipe = pipe->prev_odm_pipe; other_pipe != NULL; other_pipe = other_pipe->prev_odm_pipe) {
11550 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11551 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11552 ++ }
11553 ++
11554 ++ return vready_offset;
11555 ++}
11556 ++
11557 + enum dc_status dcn10_enable_stream_timing(
11558 + struct pipe_ctx *pipe_ctx,
11559 + struct dc_state *context,
11560 +@@ -910,7 +936,7 @@ enum dc_status dcn10_enable_stream_timing(
11561 + pipe_ctx->stream_res.tg->funcs->program_timing(
11562 + pipe_ctx->stream_res.tg,
11563 + &stream->timing,
11564 +- pipe_ctx->pipe_dlg_param.vready_offset,
11565 ++ calculate_vready_offset_for_group(pipe_ctx),
11566 + pipe_ctx->pipe_dlg_param.vstartup_start,
11567 + pipe_ctx->pipe_dlg_param.vupdate_offset,
11568 + pipe_ctx->pipe_dlg_param.vupdate_width,
11569 +@@ -2900,7 +2926,7 @@ void dcn10_program_pipe(
11570 +
11571 + pipe_ctx->stream_res.tg->funcs->program_global_sync(
11572 + pipe_ctx->stream_res.tg,
11573 +- pipe_ctx->pipe_dlg_param.vready_offset,
11574 ++ calculate_vready_offset_for_group(pipe_ctx),
11575 + pipe_ctx->pipe_dlg_param.vstartup_start,
11576 + pipe_ctx->pipe_dlg_param.vupdate_offset,
11577 + pipe_ctx->pipe_dlg_param.vupdate_width);
11578 +diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
11579 +index a7e0001a8f46d..f348bc15a9256 100644
11580 +--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
11581 ++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
11582 +@@ -1616,6 +1616,31 @@ static void dcn20_update_dchubp_dpp(
11583 + hubp->funcs->phantom_hubp_post_enable(hubp);
11584 + }
11585 +
11586 ++static int calculate_vready_offset_for_group(struct pipe_ctx *pipe)
11587 ++{
11588 ++ struct pipe_ctx *other_pipe;
11589 ++ int vready_offset = pipe->pipe_dlg_param.vready_offset;
11590 ++
11591 ++ /* Always use the largest vready_offset of all connected pipes */
11592 ++ for (other_pipe = pipe->bottom_pipe; other_pipe != NULL; other_pipe = other_pipe->bottom_pipe) {
11593 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11594 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11595 ++ }
11596 ++ for (other_pipe = pipe->top_pipe; other_pipe != NULL; other_pipe = other_pipe->top_pipe) {
11597 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11598 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11599 ++ }
11600 ++ for (other_pipe = pipe->next_odm_pipe; other_pipe != NULL; other_pipe = other_pipe->next_odm_pipe) {
11601 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11602 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11603 ++ }
11604 ++ for (other_pipe = pipe->prev_odm_pipe; other_pipe != NULL; other_pipe = other_pipe->prev_odm_pipe) {
11605 ++ if (other_pipe->pipe_dlg_param.vready_offset > vready_offset)
11606 ++ vready_offset = other_pipe->pipe_dlg_param.vready_offset;
11607 ++ }
11608 ++
11609 ++ return vready_offset;
11610 ++}
11611 +
11612 + static void dcn20_program_pipe(
11613 + struct dc *dc,
11614 +@@ -1634,16 +1659,14 @@ static void dcn20_program_pipe(
11615 + && !pipe_ctx->prev_odm_pipe) {
11616 + pipe_ctx->stream_res.tg->funcs->program_global_sync(
11617 + pipe_ctx->stream_res.tg,
11618 +- pipe_ctx->pipe_dlg_param.vready_offset,
11619 ++ calculate_vready_offset_for_group(pipe_ctx),
11620 + pipe_ctx->pipe_dlg_param.vstartup_start,
11621 + pipe_ctx->pipe_dlg_param.vupdate_offset,
11622 + pipe_ctx->pipe_dlg_param.vupdate_width);
11623 +
11624 + if (pipe_ctx->stream->mall_stream_config.type != SUBVP_PHANTOM) {
11625 +- pipe_ctx->stream_res.tg->funcs->wait_for_state(
11626 +- pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
11627 +- pipe_ctx->stream_res.tg->funcs->wait_for_state(
11628 +- pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
11629 ++ pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
11630 ++ pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
11631 + }
11632 +
11633 + pipe_ctx->stream_res.tg->funcs->set_vtg_params(
11634 +@@ -2037,7 +2060,7 @@ bool dcn20_update_bandwidth(
11635 +
11636 + pipe_ctx->stream_res.tg->funcs->program_global_sync(
11637 + pipe_ctx->stream_res.tg,
11638 +- pipe_ctx->pipe_dlg_param.vready_offset,
11639 ++ calculate_vready_offset_for_group(pipe_ctx),
11640 + pipe_ctx->pipe_dlg_param.vstartup_start,
11641 + pipe_ctx->pipe_dlg_param.vupdate_offset,
11642 + pipe_ctx->pipe_dlg_param.vupdate_width);
11643 +diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
11644 +index df4f251191424..e4472c6be6c32 100644
11645 +--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
11646 ++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
11647 +@@ -225,11 +225,7 @@ static void dccg32_set_dtbclk_dto(
11648 + } else {
11649 + REG_UPDATE_2(OTG_PIXEL_RATE_CNTL[params->otg_inst],
11650 + DTBCLK_DTO_ENABLE[params->otg_inst], 0,
11651 +- PIPE_DTO_SRC_SEL[params->otg_inst], 1);
11652 +- if (params->is_hdmi)
11653 +- REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
11654 +- PIPE_DTO_SRC_SEL[params->otg_inst], 0);
11655 +-
11656 ++ PIPE_DTO_SRC_SEL[params->otg_inst], params->is_hdmi ? 0 : 1);
11657 + REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0);
11658 + REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0);
11659 + }
11660 +diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
11661 +index d1598e3131f66..33ab6fdc36175 100644
11662 +--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
11663 ++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
11664 +@@ -1901,7 +1901,7 @@ int dcn32_populate_dml_pipes_from_context(
11665 +
11666 + pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal;
11667 + if (context->stream_count == 1 &&
11668 +- context->stream_status[0].plane_count <= 1 &&
11669 ++ context->stream_status[0].plane_count == 1 &&
11670 + !dc_is_hdmi_signal(res_ctx->pipe_ctx[i].stream->signal) &&
11671 + is_h_timing_divisible_by_2(res_ctx->pipe_ctx[i].stream) &&
11672 + pipe->stream->timing.pix_clk_100hz * 100 > DCN3_2_VMIN_DISPCLK_HZ &&
11673 +diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
11674 +index 2abe3967f7fbd..d1bf49d207de4 100644
11675 +--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
11676 ++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
11677 +@@ -531,9 +531,11 @@ void dcn32_set_phantom_stream_timing(struct dc *dc,
11678 + unsigned int i, pipe_idx;
11679 + struct pipe_ctx *pipe;
11680 + uint32_t phantom_vactive, phantom_bp, pstate_width_fw_delay_lines;
11681 ++ unsigned int num_dpp;
11682 + unsigned int vlevel = context->bw_ctx.dml.vba.VoltageLevel;
11683 + unsigned int dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
11684 + unsigned int socclk = context->bw_ctx.dml.vba.SOCCLKPerState[vlevel];
11685 ++ struct vba_vars_st *vba = &context->bw_ctx.dml.vba;
11686 +
11687 + dc_assert_fp_enabled();
11688 +
11689 +@@ -569,6 +571,11 @@ void dcn32_set_phantom_stream_timing(struct dc *dc,
11690 + phantom_vactive = get_subviewport_lines_needed_in_mall(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx) +
11691 + pstate_width_fw_delay_lines + dc->caps.subvp_swath_height_margin_lines;
11692 +
11693 ++ // W/A for DCC corruption with certain high resolution timings.
11694 ++ // Determining if pipesplit is used. If so, add meta_row_height to the phantom vactive.
11695 ++ num_dpp = vba->NoOfDPP[vba->VoltageLevel][vba->maxMpcComb][vba->pipe_plane[pipe_idx]];
11696 ++ phantom_vactive += num_dpp > 1 ? vba->meta_row_height[vba->pipe_plane[pipe_idx]] : 0;
11697 ++
11698 + // For backporch of phantom pipe, use vstartup of the main pipe
11699 + phantom_bp = get_vstartup(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
11700 +
11701 +diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
11702 +index a40ead44778af..d18162e9ed1da 100644
11703 +--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
11704 ++++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
11705 +@@ -354,7 +354,8 @@ struct amd_pm_funcs {
11706 + int (*get_power_profile_mode)(void *handle, char *buf);
11707 + int (*set_power_profile_mode)(void *handle, long *input, uint32_t size);
11708 + int (*set_fine_grain_clk_vol)(void *handle, uint32_t type, long *input, uint32_t size);
11709 +- int (*odn_edit_dpm_table)(void *handle, uint32_t type, long *input, uint32_t size);
11710 ++ int (*odn_edit_dpm_table)(void *handle, enum PP_OD_DPM_TABLE_COMMAND type,
11711 ++ long *input, uint32_t size);
11712 + int (*set_mp1_state)(void *handle, enum pp_mp1_state mp1_state);
11713 + int (*smu_i2c_bus_access)(void *handle, bool acquire);
11714 + int (*gfx_state_change_set)(void *handle, uint32_t state);
11715 +diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
11716 +index ec055858eb95a..1159ae114dd02 100644
11717 +--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
11718 ++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
11719 +@@ -838,7 +838,8 @@ static int pp_set_fine_grain_clk_vol(void *handle, uint32_t type, long *input, u
11720 + return hwmgr->hwmgr_func->set_fine_grain_clk_vol(hwmgr, type, input, size);
11721 + }
11722 +
11723 +-static int pp_odn_edit_dpm_table(void *handle, uint32_t type, long *input, uint32_t size)
11724 ++static int pp_odn_edit_dpm_table(void *handle, enum PP_OD_DPM_TABLE_COMMAND type,
11725 ++ long *input, uint32_t size)
11726 + {
11727 + struct pp_hwmgr *hwmgr = handle;
11728 +
11729 +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
11730 +index 67d7da0b6fed5..1d829402cd2e2 100644
11731 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
11732 ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c
11733 +@@ -75,8 +75,10 @@ int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
11734 + for (i = 0; i < table_entries; i++) {
11735 + result = hwmgr->hwmgr_func->get_pp_table_entry(hwmgr, i, state);
11736 + if (result) {
11737 ++ kfree(hwmgr->current_ps);
11738 + kfree(hwmgr->request_ps);
11739 + kfree(hwmgr->ps);
11740 ++ hwmgr->current_ps = NULL;
11741 + hwmgr->request_ps = NULL;
11742 + hwmgr->ps = NULL;
11743 + return -EINVAL;
11744 +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
11745 +index 190af79f3236f..dad3e3741a4e8 100644
11746 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
11747 ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
11748 +@@ -67,21 +67,22 @@ int vega10_fan_ctrl_get_fan_speed_info(struct pp_hwmgr *hwmgr,
11749 + int vega10_fan_ctrl_get_fan_speed_pwm(struct pp_hwmgr *hwmgr,
11750 + uint32_t *speed)
11751 + {
11752 +- struct amdgpu_device *adev = hwmgr->adev;
11753 +- uint32_t duty100, duty;
11754 +- uint64_t tmp64;
11755 ++ uint32_t current_rpm;
11756 ++ uint32_t percent = 0;
11757 +
11758 +- duty100 = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_FDO_CTRL1),
11759 +- CG_FDO_CTRL1, FMAX_DUTY100);
11760 +- duty = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_THERMAL_STATUS),
11761 +- CG_THERMAL_STATUS, FDO_PWM_DUTY);
11762 ++ if (hwmgr->thermal_controller.fanInfo.bNoFan)
11763 ++ return 0;
11764 +
11765 +- if (!duty100)
11766 +- return -EINVAL;
11767 ++ if (vega10_get_current_rpm(hwmgr, &current_rpm))
11768 ++ return -1;
11769 ++
11770 ++ if (hwmgr->thermal_controller.
11771 ++ advanceFanControlParameters.usMaxFanRPM != 0)
11772 ++ percent = current_rpm * 255 /
11773 ++ hwmgr->thermal_controller.
11774 ++ advanceFanControlParameters.usMaxFanRPM;
11775 +
11776 +- tmp64 = (uint64_t)duty * 255;
11777 +- do_div(tmp64, duty100);
11778 +- *speed = MIN((uint32_t)tmp64, 255);
11779 ++ *speed = MIN(percent, 255);
11780 +
11781 + return 0;
11782 + }
11783 +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
11784 +index 97b3ad3690467..b30684c84e20e 100644
11785 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
11786 ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
11787 +@@ -2961,7 +2961,8 @@ static int vega20_odn_edit_dpm_table(struct pp_hwmgr *hwmgr,
11788 + data->od8_settings.od8_settings_array;
11789 + OverDriveTable_t *od_table =
11790 + &(data->smc_state_table.overdrive_table);
11791 +- int32_t input_index, input_clk, input_vol, i;
11792 ++ int32_t input_clk, input_vol, i;
11793 ++ uint32_t input_index;
11794 + int od8_id;
11795 + int ret;
11796 +
11797 +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
11798 +index 70b560737687e..ad5f6a15a1d7d 100644
11799 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
11800 ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
11801 +@@ -1588,6 +1588,10 @@ bool smu_v11_0_baco_is_support(struct smu_context *smu)
11802 + if (amdgpu_sriov_vf(smu->adev) || !smu_baco->platform_support)
11803 + return false;
11804 +
11805 ++ /* return true if ASIC is in BACO state already */
11806 ++ if (smu_v11_0_baco_get_state(smu) == SMU_BACO_STATE_ENTER)
11807 ++ return true;
11808 ++
11809 + /* Arcturus does not support this bit mask */
11810 + if (smu_cmn_feature_is_supported(smu, SMU_FEATURE_BACO_BIT) &&
11811 + !smu_cmn_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT))
11812 +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
11813 +index d74debc584f89..39deb06a86ba3 100644
11814 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
11815 ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
11816 +@@ -1436,7 +1436,7 @@ static int smu_v13_0_7_get_power_limit(struct smu_context *smu,
11817 +
11818 + static int smu_v13_0_7_get_power_profile_mode(struct smu_context *smu, char *buf)
11819 + {
11820 +- DpmActivityMonitorCoeffIntExternal_t activity_monitor_external[PP_SMC_POWER_PROFILE_COUNT];
11821 ++ DpmActivityMonitorCoeffIntExternal_t *activity_monitor_external;
11822 + uint32_t i, j, size = 0;
11823 + int16_t workload_type = 0;
11824 + int result = 0;
11825 +@@ -1444,6 +1444,12 @@ static int smu_v13_0_7_get_power_profile_mode(struct smu_context *smu, char *buf
11826 + if (!buf)
11827 + return -EINVAL;
11828 +
11829 ++ activity_monitor_external = kcalloc(PP_SMC_POWER_PROFILE_COUNT,
11830 ++ sizeof(*activity_monitor_external),
11831 ++ GFP_KERNEL);
11832 ++ if (!activity_monitor_external)
11833 ++ return -ENOMEM;
11834 ++
11835 + size += sysfs_emit_at(buf, size, " ");
11836 + for (i = 0; i <= PP_SMC_POWER_PROFILE_WINDOW3D; i++)
11837 + size += sysfs_emit_at(buf, size, "%-14s%s", amdgpu_pp_profile_name[i],
11838 +@@ -1456,15 +1462,17 @@ static int smu_v13_0_7_get_power_profile_mode(struct smu_context *smu, char *buf
11839 + workload_type = smu_cmn_to_asic_specific_index(smu,
11840 + CMN2ASIC_MAPPING_WORKLOAD,
11841 + i);
11842 +- if (workload_type < 0)
11843 +- return -EINVAL;
11844 ++ if (workload_type < 0) {
11845 ++ result = -EINVAL;
11846 ++ goto out;
11847 ++ }
11848 +
11849 + result = smu_cmn_update_table(smu,
11850 + SMU_TABLE_ACTIVITY_MONITOR_COEFF, workload_type,
11851 + (void *)(&activity_monitor_external[i]), false);
11852 + if (result) {
11853 + dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
11854 +- return result;
11855 ++ goto out;
11856 + }
11857 + }
11858 +
11859 +@@ -1492,7 +1500,10 @@ do { \
11860 + PRINT_DPM_MONITOR(Fclk_BoosterFreq);
11861 + #undef PRINT_DPM_MONITOR
11862 +
11863 +- return size;
11864 ++ result = size;
11865 ++out:
11866 ++ kfree(activity_monitor_external);
11867 ++ return result;
11868 + }
11869 +
11870 + static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
11871 +diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
11872 +index 94de73cbeb2dd..17445800248dd 100644
11873 +--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
11874 ++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
11875 +@@ -402,7 +402,8 @@ static inline int adv7511_cec_init(struct device *dev, struct adv7511 *adv7511)
11876 +
11877 + void adv7533_dsi_power_on(struct adv7511 *adv);
11878 + void adv7533_dsi_power_off(struct adv7511 *adv);
11879 +-void adv7533_mode_set(struct adv7511 *adv, const struct drm_display_mode *mode);
11880 ++enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
11881 ++ const struct drm_display_mode *mode);
11882 + int adv7533_patch_registers(struct adv7511 *adv);
11883 + int adv7533_patch_cec_registers(struct adv7511 *adv);
11884 + int adv7533_attach_dsi(struct adv7511 *adv);
11885 +diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
11886 +index f887200e8abc9..78b72739e5c3e 100644
11887 +--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
11888 ++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
11889 +@@ -697,7 +697,7 @@ adv7511_detect(struct adv7511 *adv7511, struct drm_connector *connector)
11890 + }
11891 +
11892 + static enum drm_mode_status adv7511_mode_valid(struct adv7511 *adv7511,
11893 +- struct drm_display_mode *mode)
11894 ++ const struct drm_display_mode *mode)
11895 + {
11896 + if (mode->clock > 165000)
11897 + return MODE_CLOCK_HIGH;
11898 +@@ -791,9 +791,6 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
11899 + regmap_update_bits(adv7511->regmap, 0x17,
11900 + 0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
11901 +
11902 +- if (adv7511->type == ADV7533 || adv7511->type == ADV7535)
11903 +- adv7533_mode_set(adv7511, adj_mode);
11904 +-
11905 + drm_mode_copy(&adv7511->curr_mode, adj_mode);
11906 +
11907 + /*
11908 +@@ -913,6 +910,18 @@ static void adv7511_bridge_mode_set(struct drm_bridge *bridge,
11909 + adv7511_mode_set(adv, mode, adj_mode);
11910 + }
11911 +
11912 ++static enum drm_mode_status adv7511_bridge_mode_valid(struct drm_bridge *bridge,
11913 ++ const struct drm_display_info *info,
11914 ++ const struct drm_display_mode *mode)
11915 ++{
11916 ++ struct adv7511 *adv = bridge_to_adv7511(bridge);
11917 ++
11918 ++ if (adv->type == ADV7533 || adv->type == ADV7535)
11919 ++ return adv7533_mode_valid(adv, mode);
11920 ++ else
11921 ++ return adv7511_mode_valid(adv, mode);
11922 ++}
11923 ++
11924 + static int adv7511_bridge_attach(struct drm_bridge *bridge,
11925 + enum drm_bridge_attach_flags flags)
11926 + {
11927 +@@ -960,6 +969,7 @@ static const struct drm_bridge_funcs adv7511_bridge_funcs = {
11928 + .enable = adv7511_bridge_enable,
11929 + .disable = adv7511_bridge_disable,
11930 + .mode_set = adv7511_bridge_mode_set,
11931 ++ .mode_valid = adv7511_bridge_mode_valid,
11932 + .attach = adv7511_bridge_attach,
11933 + .detect = adv7511_bridge_detect,
11934 + .get_edid = adv7511_bridge_get_edid,
11935 +diff --git a/drivers/gpu/drm/bridge/adv7511/adv7533.c b/drivers/gpu/drm/bridge/adv7511/adv7533.c
11936 +index ef6270806d1d3..258c79d4dab0a 100644
11937 +--- a/drivers/gpu/drm/bridge/adv7511/adv7533.c
11938 ++++ b/drivers/gpu/drm/bridge/adv7511/adv7533.c
11939 +@@ -100,26 +100,27 @@ void adv7533_dsi_power_off(struct adv7511 *adv)
11940 + regmap_write(adv->regmap_cec, 0x27, 0x0b);
11941 + }
11942 +
11943 +-void adv7533_mode_set(struct adv7511 *adv, const struct drm_display_mode *mode)
11944 ++enum drm_mode_status adv7533_mode_valid(struct adv7511 *adv,
11945 ++ const struct drm_display_mode *mode)
11946 + {
11947 ++ int lanes;
11948 + struct mipi_dsi_device *dsi = adv->dsi;
11949 +- int lanes, ret;
11950 +-
11951 +- if (adv->num_dsi_lanes != 4)
11952 +- return;
11953 +
11954 + if (mode->clock > 80000)
11955 + lanes = 4;
11956 + else
11957 + lanes = 3;
11958 +
11959 +- if (lanes != dsi->lanes) {
11960 +- mipi_dsi_detach(dsi);
11961 +- dsi->lanes = lanes;
11962 +- ret = mipi_dsi_attach(dsi);
11963 +- if (ret)
11964 +- dev_err(&dsi->dev, "failed to change host lanes\n");
11965 +- }
11966 ++ /*
11967 ++ * TODO: add support for dynamic switching of lanes
11968 ++ * by using the bridge pre_enable() op . Till then filter
11969 ++ * out the modes which shall need different number of lanes
11970 ++ * than what was configured in the device tree.
11971 ++ */
11972 ++ if (lanes != dsi->lanes)
11973 ++ return MODE_BAD;
11974 ++
11975 ++ return MODE_OK;
11976 + }
11977 +
11978 + int adv7533_patch_registers(struct adv7511 *adv)
11979 +diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
11980 +index dfe4351c9bdd3..99123eec45511 100644
11981 +--- a/drivers/gpu/drm/bridge/ite-it6505.c
11982 ++++ b/drivers/gpu/drm/bridge/ite-it6505.c
11983 +@@ -2860,10 +2860,7 @@ static int it6505_bridge_attach(struct drm_bridge *bridge,
11984 + }
11985 +
11986 + /* Register aux channel */
11987 +- it6505->aux.name = "DP-AUX";
11988 +- it6505->aux.dev = dev;
11989 + it6505->aux.drm_dev = bridge->dev;
11990 +- it6505->aux.transfer = it6505_aux_transfer;
11991 +
11992 + ret = drm_dp_aux_register(&it6505->aux);
11993 +
11994 +@@ -3316,6 +3313,11 @@ static int it6505_i2c_probe(struct i2c_client *client,
11995 + DRM_DEV_DEBUG_DRIVER(dev, "it6505 device name: %s", dev_name(dev));
11996 + debugfs_init(it6505);
11997 +
11998 ++ it6505->aux.name = "DP-AUX";
11999 ++ it6505->aux.dev = dev;
12000 ++ it6505->aux.transfer = it6505_aux_transfer;
12001 ++ drm_dp_aux_init(&it6505->aux);
12002 ++
12003 + it6505->bridge.funcs = &it6505_bridge_funcs;
12004 + it6505->bridge.type = DRM_MODE_CONNECTOR_DisplayPort;
12005 + it6505->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID |
12006 +diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
12007 +index 98cc3137c0625..02b4a7dc92f5e 100644
12008 +--- a/drivers/gpu/drm/drm_atomic_helper.c
12009 ++++ b/drivers/gpu/drm/drm_atomic_helper.c
12010 +@@ -945,7 +945,6 @@ int drm_atomic_helper_check_crtc_state(struct drm_crtc_state *crtc_state,
12011 + bool can_disable_primary_planes)
12012 + {
12013 + struct drm_device *dev = crtc_state->crtc->dev;
12014 +- struct drm_atomic_state *state = crtc_state->state;
12015 +
12016 + if (!crtc_state->enable)
12017 + return 0;
12018 +@@ -956,14 +955,7 @@ int drm_atomic_helper_check_crtc_state(struct drm_crtc_state *crtc_state,
12019 + struct drm_plane *plane;
12020 +
12021 + drm_for_each_plane_mask(plane, dev, crtc_state->plane_mask) {
12022 +- struct drm_plane_state *plane_state;
12023 +-
12024 +- if (plane->type != DRM_PLANE_TYPE_PRIMARY)
12025 +- continue;
12026 +- plane_state = drm_atomic_get_plane_state(state, plane);
12027 +- if (IS_ERR(plane_state))
12028 +- return PTR_ERR(plane_state);
12029 +- if (plane_state->fb && plane_state->crtc) {
12030 ++ if (plane->type == DRM_PLANE_TYPE_PRIMARY) {
12031 + has_primary_plane = true;
12032 + break;
12033 + }
12034 +diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
12035 +index 4005dab6147d9..b36abfa915813 100644
12036 +--- a/drivers/gpu/drm/drm_edid.c
12037 ++++ b/drivers/gpu/drm/drm_edid.c
12038 +@@ -87,6 +87,8 @@ static int oui(u8 first, u8 second, u8 third)
12039 + #define EDID_QUIRK_FORCE_10BPC (1 << 11)
12040 + /* Non desktop display (i.e. HMD) */
12041 + #define EDID_QUIRK_NON_DESKTOP (1 << 12)
12042 ++/* Cap the DSC target bitrate to 15bpp */
12043 ++#define EDID_QUIRK_CAP_DSC_15BPP (1 << 13)
12044 +
12045 + #define MICROSOFT_IEEE_OUI 0xca125c
12046 +
12047 +@@ -147,6 +149,12 @@ static const struct edid_quirk {
12048 + EDID_QUIRK('F', 'C', 'M', 13600, EDID_QUIRK_PREFER_LARGE_75 |
12049 + EDID_QUIRK_DETAILED_IN_CM),
12050 +
12051 ++ /* LG 27GP950 */
12052 ++ EDID_QUIRK('G', 'S', 'M', 0x5bbf, EDID_QUIRK_CAP_DSC_15BPP),
12053 ++
12054 ++ /* LG 27GN950 */
12055 ++ EDID_QUIRK('G', 'S', 'M', 0x5b9a, EDID_QUIRK_CAP_DSC_15BPP),
12056 ++
12057 + /* LGD panel of HP zBook 17 G2, eDP 10 bpc, but reports unknown bpc */
12058 + EDID_QUIRK('L', 'G', 'D', 764, EDID_QUIRK_FORCE_10BPC),
12059 +
12060 +@@ -6166,6 +6174,7 @@ static void drm_reset_display_info(struct drm_connector *connector)
12061 +
12062 + info->mso_stream_count = 0;
12063 + info->mso_pixel_overlap = 0;
12064 ++ info->max_dsc_bpp = 0;
12065 + }
12066 +
12067 + static u32 update_display_info(struct drm_connector *connector,
12068 +@@ -6252,6 +6261,9 @@ out:
12069 + info->non_desktop = true;
12070 + }
12071 +
12072 ++ if (quirks & EDID_QUIRK_CAP_DSC_15BPP)
12073 ++ info->max_dsc_bpp = 15;
12074 ++
12075 + return quirks;
12076 + }
12077 +
12078 +diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
12079 +index e09331bb3bc73..6242dfbe92402 100644
12080 +--- a/drivers/gpu/drm/drm_fourcc.c
12081 ++++ b/drivers/gpu/drm/drm_fourcc.c
12082 +@@ -297,12 +297,12 @@ const struct drm_format_info *__drm_format_info(u32 format)
12083 + .vsub = 2, .is_yuv = true },
12084 + { .format = DRM_FORMAT_Q410, .depth = 0,
12085 + .num_planes = 3, .char_per_block = { 2, 2, 2 },
12086 +- .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 0,
12087 +- .vsub = 0, .is_yuv = true },
12088 ++ .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 1,
12089 ++ .vsub = 1, .is_yuv = true },
12090 + { .format = DRM_FORMAT_Q401, .depth = 0,
12091 + .num_planes = 3, .char_per_block = { 2, 2, 2 },
12092 +- .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 0,
12093 +- .vsub = 0, .is_yuv = true },
12094 ++ .block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 1,
12095 ++ .vsub = 1, .is_yuv = true },
12096 + { .format = DRM_FORMAT_P030, .depth = 0, .num_planes = 2,
12097 + .char_per_block = { 4, 8, 0 }, .block_w = { 3, 3, 0 }, .block_h = { 1, 1, 0 },
12098 + .hsub = 2, .vsub = 2, .is_yuv = true},
12099 +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
12100 +index 37018bc55810d..f667e7906d1f4 100644
12101 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
12102 ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
12103 +@@ -416,6 +416,12 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
12104 + if (gpu->identity.model == chipModel_GC700)
12105 + gpu->identity.features &= ~chipFeatures_FAST_CLEAR;
12106 +
12107 ++ /* These models/revisions don't have the 2D pipe bit */
12108 ++ if ((gpu->identity.model == chipModel_GC500 &&
12109 ++ gpu->identity.revision <= 2) ||
12110 ++ gpu->identity.model == chipModel_GC300)
12111 ++ gpu->identity.features |= chipFeatures_PIPE_2D;
12112 ++
12113 + if ((gpu->identity.model == chipModel_GC500 &&
12114 + gpu->identity.revision < 2) ||
12115 + (gpu->identity.model == chipModel_GC300 &&
12116 +@@ -449,8 +455,9 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
12117 + gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_5);
12118 + }
12119 +
12120 +- /* GC600 idle register reports zero bits where modules aren't present */
12121 +- if (gpu->identity.model == chipModel_GC600)
12122 ++ /* GC600/300 idle register reports zero bits where modules aren't present */
12123 ++ if (gpu->identity.model == chipModel_GC600 ||
12124 ++ gpu->identity.model == chipModel_GC300)
12125 + gpu->idle_mask = VIVS_HI_IDLE_STATE_TX |
12126 + VIVS_HI_IDLE_STATE_RA |
12127 + VIVS_HI_IDLE_STATE_SE |
12128 +diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
12129 +index 4d4a715b429d1..2c2b92324a2e9 100644
12130 +--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
12131 ++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
12132 +@@ -60,8 +60,9 @@ static int fsl_dcu_drm_connector_get_modes(struct drm_connector *connector)
12133 + return drm_panel_get_modes(fsl_connector->panel, connector);
12134 + }
12135 +
12136 +-static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
12137 +- struct drm_display_mode *mode)
12138 ++static enum drm_mode_status
12139 ++fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
12140 ++ struct drm_display_mode *mode)
12141 + {
12142 + if (mode->hdisplay & 0xf)
12143 + return MODE_ERROR;
12144 +diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
12145 +index 28bdb936cd1fc..edbdb949b6ced 100644
12146 +--- a/drivers/gpu/drm/i915/display/intel_bios.c
12147 ++++ b/drivers/gpu/drm/i915/display/intel_bios.c
12148 +@@ -414,7 +414,7 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
12149 + ptrs->lvds_entries++;
12150 +
12151 + if (size != 0 || ptrs->lvds_entries != 3) {
12152 +- kfree(ptrs);
12153 ++ kfree(ptrs_block);
12154 + return NULL;
12155 + }
12156 +
12157 +diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
12158 +index 2b5bc95a8b0df..78b3427471bd7 100644
12159 +--- a/drivers/gpu/drm/i915/display/intel_dp.c
12160 ++++ b/drivers/gpu/drm/i915/display/intel_dp.c
12161 +@@ -3675,61 +3675,6 @@ static void intel_dp_phy_pattern_update(struct intel_dp *intel_dp,
12162 + }
12163 + }
12164 +
12165 +-static void
12166 +-intel_dp_autotest_phy_ddi_disable(struct intel_dp *intel_dp,
12167 +- const struct intel_crtc_state *crtc_state)
12168 +-{
12169 +- struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
12170 +- struct drm_device *dev = dig_port->base.base.dev;
12171 +- struct drm_i915_private *dev_priv = to_i915(dev);
12172 +- struct intel_crtc *crtc = to_intel_crtc(dig_port->base.base.crtc);
12173 +- enum pipe pipe = crtc->pipe;
12174 +- u32 trans_ddi_func_ctl_value, trans_conf_value, dp_tp_ctl_value;
12175 +-
12176 +- trans_ddi_func_ctl_value = intel_de_read(dev_priv,
12177 +- TRANS_DDI_FUNC_CTL(pipe));
12178 +- trans_conf_value = intel_de_read(dev_priv, PIPECONF(pipe));
12179 +- dp_tp_ctl_value = intel_de_read(dev_priv, TGL_DP_TP_CTL(pipe));
12180 +-
12181 +- trans_ddi_func_ctl_value &= ~(TRANS_DDI_FUNC_ENABLE |
12182 +- TGL_TRANS_DDI_PORT_MASK);
12183 +- trans_conf_value &= ~PIPECONF_ENABLE;
12184 +- dp_tp_ctl_value &= ~DP_TP_CTL_ENABLE;
12185 +-
12186 +- intel_de_write(dev_priv, PIPECONF(pipe), trans_conf_value);
12187 +- intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(pipe),
12188 +- trans_ddi_func_ctl_value);
12189 +- intel_de_write(dev_priv, TGL_DP_TP_CTL(pipe), dp_tp_ctl_value);
12190 +-}
12191 +-
12192 +-static void
12193 +-intel_dp_autotest_phy_ddi_enable(struct intel_dp *intel_dp,
12194 +- const struct intel_crtc_state *crtc_state)
12195 +-{
12196 +- struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
12197 +- struct drm_device *dev = dig_port->base.base.dev;
12198 +- struct drm_i915_private *dev_priv = to_i915(dev);
12199 +- enum port port = dig_port->base.port;
12200 +- struct intel_crtc *crtc = to_intel_crtc(dig_port->base.base.crtc);
12201 +- enum pipe pipe = crtc->pipe;
12202 +- u32 trans_ddi_func_ctl_value, trans_conf_value, dp_tp_ctl_value;
12203 +-
12204 +- trans_ddi_func_ctl_value = intel_de_read(dev_priv,
12205 +- TRANS_DDI_FUNC_CTL(pipe));
12206 +- trans_conf_value = intel_de_read(dev_priv, PIPECONF(pipe));
12207 +- dp_tp_ctl_value = intel_de_read(dev_priv, TGL_DP_TP_CTL(pipe));
12208 +-
12209 +- trans_ddi_func_ctl_value |= TRANS_DDI_FUNC_ENABLE |
12210 +- TGL_TRANS_DDI_SELECT_PORT(port);
12211 +- trans_conf_value |= PIPECONF_ENABLE;
12212 +- dp_tp_ctl_value |= DP_TP_CTL_ENABLE;
12213 +-
12214 +- intel_de_write(dev_priv, PIPECONF(pipe), trans_conf_value);
12215 +- intel_de_write(dev_priv, TGL_DP_TP_CTL(pipe), dp_tp_ctl_value);
12216 +- intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(pipe),
12217 +- trans_ddi_func_ctl_value);
12218 +-}
12219 +-
12220 + static void intel_dp_process_phy_request(struct intel_dp *intel_dp,
12221 + const struct intel_crtc_state *crtc_state)
12222 + {
12223 +@@ -3748,14 +3693,10 @@ static void intel_dp_process_phy_request(struct intel_dp *intel_dp,
12224 + intel_dp_get_adjust_train(intel_dp, crtc_state, DP_PHY_DPRX,
12225 + link_status);
12226 +
12227 +- intel_dp_autotest_phy_ddi_disable(intel_dp, crtc_state);
12228 +-
12229 + intel_dp_set_signal_levels(intel_dp, crtc_state, DP_PHY_DPRX);
12230 +
12231 + intel_dp_phy_pattern_update(intel_dp, crtc_state);
12232 +
12233 +- intel_dp_autotest_phy_ddi_enable(intel_dp, crtc_state);
12234 +-
12235 + drm_dp_dpcd_write(&intel_dp->aux, DP_TRAINING_LANE0_SET,
12236 + intel_dp->train_set, crtc_state->lane_count);
12237 +
12238 +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
12239 +index 73d9eda1d6b7a..e63329bc80659 100644
12240 +--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
12241 ++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
12242 +@@ -413,7 +413,7 @@ retry:
12243 + vma->mmo = mmo;
12244 +
12245 + if (CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND)
12246 +- intel_wakeref_auto(&to_gt(i915)->userfault_wakeref,
12247 ++ intel_wakeref_auto(&i915->runtime_pm.userfault_wakeref,
12248 + msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
12249 +
12250 + if (write) {
12251 +@@ -557,11 +557,13 @@ void i915_gem_object_runtime_pm_release_mmap_offset(struct drm_i915_gem_object *
12252 +
12253 + drm_vma_node_unmap(&bo->base.vma_node, bdev->dev_mapping);
12254 +
12255 +- if (obj->userfault_count) {
12256 +- /* rpm wakeref provide exclusive access */
12257 +- list_del(&obj->userfault_link);
12258 +- obj->userfault_count = 0;
12259 +- }
12260 ++ /*
12261 ++ * We have exclusive access here via runtime suspend. All other callers
12262 ++ * must first grab the rpm wakeref.
12263 ++ */
12264 ++ GEM_BUG_ON(!obj->userfault_count);
12265 ++ list_del(&obj->userfault_link);
12266 ++ obj->userfault_count = 0;
12267 + }
12268 +
12269 + void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj)
12270 +@@ -587,13 +589,6 @@ void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj)
12271 + spin_lock(&obj->mmo.lock);
12272 + }
12273 + spin_unlock(&obj->mmo.lock);
12274 +-
12275 +- if (obj->userfault_count) {
12276 +- mutex_lock(&to_gt(to_i915(obj->base.dev))->lmem_userfault_lock);
12277 +- list_del(&obj->userfault_link);
12278 +- mutex_unlock(&to_gt(to_i915(obj->base.dev))->lmem_userfault_lock);
12279 +- obj->userfault_count = 0;
12280 +- }
12281 + }
12282 +
12283 + static struct i915_mmap_offset *
12284 +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
12285 +index 3428f735e786c..8d30db5e678c4 100644
12286 +--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
12287 ++++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
12288 +@@ -24,7 +24,7 @@ void i915_gem_suspend(struct drm_i915_private *i915)
12289 + {
12290 + GEM_TRACE("%s\n", dev_name(i915->drm.dev));
12291 +
12292 +- intel_wakeref_auto(&to_gt(i915)->userfault_wakeref, 0);
12293 ++ intel_wakeref_auto(&i915->runtime_pm.userfault_wakeref, 0);
12294 + flush_workqueue(i915->wq);
12295 +
12296 + /*
12297 +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
12298 +index 0d6d640225fc8..be4c081e7e13d 100644
12299 +--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
12300 ++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
12301 +@@ -279,7 +279,7 @@ static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
12302 + struct i915_ttm_tt *i915_tt;
12303 + int ret;
12304 +
12305 +- if (!obj)
12306 ++ if (i915_ttm_is_ghost_object(bo))
12307 + return NULL;
12308 +
12309 + i915_tt = kzalloc(sizeof(*i915_tt), GFP_KERNEL);
12310 +@@ -362,7 +362,7 @@ static bool i915_ttm_eviction_valuable(struct ttm_buffer_object *bo,
12311 + {
12312 + struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
12313 +
12314 +- if (!obj)
12315 ++ if (i915_ttm_is_ghost_object(bo))
12316 + return false;
12317 +
12318 + /*
12319 +@@ -509,18 +509,9 @@ static int i915_ttm_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
12320 + static void i915_ttm_delete_mem_notify(struct ttm_buffer_object *bo)
12321 + {
12322 + struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
12323 +- intel_wakeref_t wakeref = 0;
12324 +-
12325 +- if (bo->resource && likely(obj)) {
12326 +- /* ttm_bo_release() already has dma_resv_lock */
12327 +- if (i915_ttm_cpu_maps_iomem(bo->resource))
12328 +- wakeref = intel_runtime_pm_get(&to_i915(obj->base.dev)->runtime_pm);
12329 +
12330 ++ if (bo->resource && !i915_ttm_is_ghost_object(bo)) {
12331 + __i915_gem_object_pages_fini(obj);
12332 +-
12333 +- if (wakeref)
12334 +- intel_runtime_pm_put(&to_i915(obj->base.dev)->runtime_pm, wakeref);
12335 +-
12336 + i915_ttm_free_cached_io_rsgt(obj);
12337 + }
12338 + }
12339 +@@ -628,7 +619,7 @@ static void i915_ttm_swap_notify(struct ttm_buffer_object *bo)
12340 + struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
12341 + int ret;
12342 +
12343 +- if (!obj)
12344 ++ if (i915_ttm_is_ghost_object(bo))
12345 + return;
12346 +
12347 + ret = i915_ttm_move_notify(bo);
12348 +@@ -661,7 +652,7 @@ static int i915_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource
12349 + struct drm_i915_gem_object *obj = i915_ttm_to_gem(mem->bo);
12350 + bool unknown_state;
12351 +
12352 +- if (!obj)
12353 ++ if (i915_ttm_is_ghost_object(mem->bo))
12354 + return -EINVAL;
12355 +
12356 + if (!kref_get_unless_zero(&obj->base.refcount))
12357 +@@ -694,7 +685,7 @@ static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo,
12358 + unsigned long base;
12359 + unsigned int ofs;
12360 +
12361 +- GEM_BUG_ON(!obj);
12362 ++ GEM_BUG_ON(i915_ttm_is_ghost_object(bo));
12363 + GEM_WARN_ON(bo->ttm);
12364 +
12365 + base = obj->mm.region->iomap.base - obj->mm.region->region.start;
12366 +@@ -994,13 +985,12 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
12367 + struct vm_area_struct *area = vmf->vma;
12368 + struct ttm_buffer_object *bo = area->vm_private_data;
12369 + struct drm_device *dev = bo->base.dev;
12370 +- struct drm_i915_gem_object *obj;
12371 ++ struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
12372 + intel_wakeref_t wakeref = 0;
12373 + vm_fault_t ret;
12374 + int idx;
12375 +
12376 +- obj = i915_ttm_to_gem(bo);
12377 +- if (!obj)
12378 ++ if (i915_ttm_is_ghost_object(bo))
12379 + return VM_FAULT_SIGBUS;
12380 +
12381 + /* Sanity check that we allow writing into this object */
12382 +@@ -1057,16 +1047,19 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
12383 + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
12384 + goto out_rpm;
12385 +
12386 +- /* ttm_bo_vm_reserve() already has dma_resv_lock */
12387 ++ /*
12388 ++ * ttm_bo_vm_reserve() already has dma_resv_lock.
12389 ++ * userfault_count is protected by dma_resv lock and rpm wakeref.
12390 ++ */
12391 + if (ret == VM_FAULT_NOPAGE && wakeref && !obj->userfault_count) {
12392 + obj->userfault_count = 1;
12393 +- mutex_lock(&to_gt(to_i915(obj->base.dev))->lmem_userfault_lock);
12394 +- list_add(&obj->userfault_link, &to_gt(to_i915(obj->base.dev))->lmem_userfault_list);
12395 +- mutex_unlock(&to_gt(to_i915(obj->base.dev))->lmem_userfault_lock);
12396 ++ spin_lock(&to_i915(obj->base.dev)->runtime_pm.lmem_userfault_lock);
12397 ++ list_add(&obj->userfault_link, &to_i915(obj->base.dev)->runtime_pm.lmem_userfault_list);
12398 ++ spin_unlock(&to_i915(obj->base.dev)->runtime_pm.lmem_userfault_lock);
12399 + }
12400 +
12401 + if (wakeref & CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND)
12402 +- intel_wakeref_auto(&to_gt(to_i915(obj->base.dev))->userfault_wakeref,
12403 ++ intel_wakeref_auto(&to_i915(obj->base.dev)->runtime_pm.userfault_wakeref,
12404 + msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
12405 +
12406 + i915_ttm_adjust_lru(obj);
12407 +@@ -1098,7 +1091,7 @@ static void ttm_vm_open(struct vm_area_struct *vma)
12408 + struct drm_i915_gem_object *obj =
12409 + i915_ttm_to_gem(vma->vm_private_data);
12410 +
12411 +- GEM_BUG_ON(!obj);
12412 ++ GEM_BUG_ON(i915_ttm_is_ghost_object(vma->vm_private_data));
12413 + i915_gem_object_get(obj);
12414 + }
12415 +
12416 +@@ -1107,7 +1100,7 @@ static void ttm_vm_close(struct vm_area_struct *vma)
12417 + struct drm_i915_gem_object *obj =
12418 + i915_ttm_to_gem(vma->vm_private_data);
12419 +
12420 +- GEM_BUG_ON(!obj);
12421 ++ GEM_BUG_ON(i915_ttm_is_ghost_object(vma->vm_private_data));
12422 + i915_gem_object_put(obj);
12423 + }
12424 +
12425 +@@ -1128,7 +1121,27 @@ static u64 i915_ttm_mmap_offset(struct drm_i915_gem_object *obj)
12426 +
12427 + static void i915_ttm_unmap_virtual(struct drm_i915_gem_object *obj)
12428 + {
12429 ++ struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
12430 ++ intel_wakeref_t wakeref = 0;
12431 ++
12432 ++ assert_object_held_shared(obj);
12433 ++
12434 ++ if (i915_ttm_cpu_maps_iomem(bo->resource)) {
12435 ++ wakeref = intel_runtime_pm_get(&to_i915(obj->base.dev)->runtime_pm);
12436 ++
12437 ++ /* userfault_count is protected by obj lock and rpm wakeref. */
12438 ++ if (obj->userfault_count) {
12439 ++ spin_lock(&to_i915(obj->base.dev)->runtime_pm.lmem_userfault_lock);
12440 ++ list_del(&obj->userfault_link);
12441 ++ spin_unlock(&to_i915(obj->base.dev)->runtime_pm.lmem_userfault_lock);
12442 ++ obj->userfault_count = 0;
12443 ++ }
12444 ++ }
12445 ++
12446 + ttm_bo_unmap_virtual(i915_gem_to_ttm(obj));
12447 ++
12448 ++ if (wakeref)
12449 ++ intel_runtime_pm_put(&to_i915(obj->base.dev)->runtime_pm, wakeref);
12450 + }
12451 +
12452 + static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
12453 +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
12454 +index e4842b4296fc2..2a94a99ef76b4 100644
12455 +--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
12456 ++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
12457 +@@ -27,19 +27,27 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
12458 + */
12459 + void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
12460 +
12461 ++/**
12462 ++ * i915_ttm_is_ghost_object - Check if the ttm bo is a ghost object.
12463 ++ * @bo: Pointer to the ttm buffer object
12464 ++ *
12465 ++ * Return: True if the ttm bo is not a i915 object but a ghost ttm object,
12466 ++ * False otherwise.
12467 ++ */
12468 ++static inline bool i915_ttm_is_ghost_object(struct ttm_buffer_object *bo)
12469 ++{
12470 ++ return bo->destroy != i915_ttm_bo_destroy;
12471 ++}
12472 ++
12473 + /**
12474 + * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an embedding
12475 + * struct drm_i915_gem_object.
12476 + *
12477 +- * Return: Pointer to the embedding struct ttm_buffer_object, or NULL
12478 +- * if the object was not an i915 ttm object.
12479 ++ * Return: Pointer to the embedding struct ttm_buffer_object.
12480 + */
12481 + static inline struct drm_i915_gem_object *
12482 + i915_ttm_to_gem(struct ttm_buffer_object *bo)
12483 + {
12484 +- if (bo->destroy != i915_ttm_bo_destroy)
12485 +- return NULL;
12486 +-
12487 + return container_of(bo, struct drm_i915_gem_object, __do_not_access);
12488 + }
12489 +
12490 +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
12491 +index 9a7e50534b84b..f59f812dc6d29 100644
12492 +--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
12493 ++++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
12494 +@@ -560,7 +560,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
12495 + bool clear;
12496 + int ret;
12497 +
12498 +- if (GEM_WARN_ON(!obj)) {
12499 ++ if (GEM_WARN_ON(i915_ttm_is_ghost_object(bo))) {
12500 + ttm_bo_move_null(bo, dst_mem);
12501 + return 0;
12502 + }
12503 +diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
12504 +index 04e435bce79bd..cbc8b857d5f7a 100644
12505 +--- a/drivers/gpu/drm/i915/gt/intel_engine.h
12506 ++++ b/drivers/gpu/drm/i915/gt/intel_engine.h
12507 +@@ -348,4 +348,10 @@ intel_engine_get_hung_context(struct intel_engine_cs *engine)
12508 + return engine->hung_ce;
12509 + }
12510 +
12511 ++u64 intel_clamp_heartbeat_interval_ms(struct intel_engine_cs *engine, u64 value);
12512 ++u64 intel_clamp_max_busywait_duration_ns(struct intel_engine_cs *engine, u64 value);
12513 ++u64 intel_clamp_preempt_timeout_ms(struct intel_engine_cs *engine, u64 value);
12514 ++u64 intel_clamp_stop_timeout_ms(struct intel_engine_cs *engine, u64 value);
12515 ++u64 intel_clamp_timeslice_duration_ms(struct intel_engine_cs *engine, u64 value);
12516 ++
12517 + #endif /* _INTEL_RINGBUFFER_H_ */
12518 +diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
12519 +index 1f7188129cd1f..83bfeb872bdaa 100644
12520 +--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
12521 ++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
12522 +@@ -486,6 +486,17 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id,
12523 + engine->logical_mask = BIT(logical_instance);
12524 + __sprint_engine_name(engine);
12525 +
12526 ++ if ((engine->class == COMPUTE_CLASS && !RCS_MASK(engine->gt) &&
12527 ++ __ffs(CCS_MASK(engine->gt)) == engine->instance) ||
12528 ++ engine->class == RENDER_CLASS)
12529 ++ engine->flags |= I915_ENGINE_FIRST_RENDER_COMPUTE;
12530 ++
12531 ++ /* features common between engines sharing EUs */
12532 ++ if (engine->class == RENDER_CLASS || engine->class == COMPUTE_CLASS) {
12533 ++ engine->flags |= I915_ENGINE_HAS_RCS_REG_STATE;
12534 ++ engine->flags |= I915_ENGINE_HAS_EU_PRIORITY;
12535 ++ }
12536 ++
12537 + engine->props.heartbeat_interval_ms =
12538 + CONFIG_DRM_I915_HEARTBEAT_INTERVAL;
12539 + engine->props.max_busywait_duration_ns =
12540 +@@ -498,19 +509,28 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id,
12541 + CONFIG_DRM_I915_TIMESLICE_DURATION;
12542 +
12543 + /* Override to uninterruptible for OpenCL workloads. */
12544 +- if (GRAPHICS_VER(i915) == 12 && engine->class == RENDER_CLASS)
12545 ++ if (GRAPHICS_VER(i915) == 12 && (engine->flags & I915_ENGINE_HAS_RCS_REG_STATE))
12546 + engine->props.preempt_timeout_ms = 0;
12547 +
12548 +- if ((engine->class == COMPUTE_CLASS && !RCS_MASK(engine->gt) &&
12549 +- __ffs(CCS_MASK(engine->gt)) == engine->instance) ||
12550 +- engine->class == RENDER_CLASS)
12551 +- engine->flags |= I915_ENGINE_FIRST_RENDER_COMPUTE;
12552 +-
12553 +- /* features common between engines sharing EUs */
12554 +- if (engine->class == RENDER_CLASS || engine->class == COMPUTE_CLASS) {
12555 +- engine->flags |= I915_ENGINE_HAS_RCS_REG_STATE;
12556 +- engine->flags |= I915_ENGINE_HAS_EU_PRIORITY;
12557 +- }
12558 ++ /* Cap properties according to any system limits */
12559 ++#define CLAMP_PROP(field) \
12560 ++ do { \
12561 ++ u64 clamp = intel_clamp_##field(engine, engine->props.field); \
12562 ++ if (clamp != engine->props.field) { \
12563 ++ drm_notice(&engine->i915->drm, \
12564 ++ "Warning, clamping %s to %lld to prevent overflow\n", \
12565 ++ #field, clamp); \
12566 ++ engine->props.field = clamp; \
12567 ++ } \
12568 ++ } while (0)
12569 ++
12570 ++ CLAMP_PROP(heartbeat_interval_ms);
12571 ++ CLAMP_PROP(max_busywait_duration_ns);
12572 ++ CLAMP_PROP(preempt_timeout_ms);
12573 ++ CLAMP_PROP(stop_timeout_ms);
12574 ++ CLAMP_PROP(timeslice_duration_ms);
12575 ++
12576 ++#undef CLAMP_PROP
12577 +
12578 + engine->defaults = engine->props; /* never to change again */
12579 +
12580 +@@ -534,6 +554,55 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id,
12581 + return 0;
12582 + }
12583 +
12584 ++u64 intel_clamp_heartbeat_interval_ms(struct intel_engine_cs *engine, u64 value)
12585 ++{
12586 ++ value = min_t(u64, value, jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT));
12587 ++
12588 ++ return value;
12589 ++}
12590 ++
12591 ++u64 intel_clamp_max_busywait_duration_ns(struct intel_engine_cs *engine, u64 value)
12592 ++{
12593 ++ value = min(value, jiffies_to_nsecs(2));
12594 ++
12595 ++ return value;
12596 ++}
12597 ++
12598 ++u64 intel_clamp_preempt_timeout_ms(struct intel_engine_cs *engine, u64 value)
12599 ++{
12600 ++ /*
12601 ++ * NB: The GuC API only supports 32bit values. However, the limit is further
12602 ++ * reduced due to internal calculations which would otherwise overflow.
12603 ++ */
12604 ++ if (intel_guc_submission_is_wanted(&engine->gt->uc.guc))
12605 ++ value = min_t(u64, value, guc_policy_max_preempt_timeout_ms());
12606 ++
12607 ++ value = min_t(u64, value, jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT));
12608 ++
12609 ++ return value;
12610 ++}
12611 ++
12612 ++u64 intel_clamp_stop_timeout_ms(struct intel_engine_cs *engine, u64 value)
12613 ++{
12614 ++ value = min_t(u64, value, jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT));
12615 ++
12616 ++ return value;
12617 ++}
12618 ++
12619 ++u64 intel_clamp_timeslice_duration_ms(struct intel_engine_cs *engine, u64 value)
12620 ++{
12621 ++ /*
12622 ++ * NB: The GuC API only supports 32bit values. However, the limit is further
12623 ++ * reduced due to internal calculations which would otherwise overflow.
12624 ++ */
12625 ++ if (intel_guc_submission_is_wanted(&engine->gt->uc.guc))
12626 ++ value = min_t(u64, value, guc_policy_max_exec_quantum_ms());
12627 ++
12628 ++ value = min_t(u64, value, jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT));
12629 ++
12630 ++ return value;
12631 ++}
12632 ++
12633 + static void __setup_engine_capabilities(struct intel_engine_cs *engine)
12634 + {
12635 + struct drm_i915_private *i915 = engine->i915;
12636 +diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
12637 +index 7caa3412a2446..c7db49749a636 100644
12638 +--- a/drivers/gpu/drm/i915/gt/intel_gt.c
12639 ++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
12640 +@@ -40,8 +40,6 @@ void intel_gt_common_init_early(struct intel_gt *gt)
12641 + {
12642 + spin_lock_init(gt->irq_lock);
12643 +
12644 +- INIT_LIST_HEAD(&gt->lmem_userfault_list);
12645 +- mutex_init(&gt->lmem_userfault_lock);
12646 + INIT_LIST_HEAD(&gt->closed_vma);
12647 + spin_lock_init(&gt->closed_lock);
12648 +
12649 +@@ -812,7 +810,6 @@ static int intel_gt_tile_setup(struct intel_gt *gt, phys_addr_t phys_addr)
12650 + }
12651 +
12652 + intel_uncore_init_early(gt->uncore, gt);
12653 +- intel_wakeref_auto_init(&gt->userfault_wakeref, gt->uncore->rpm);
12654 +
12655 + ret = intel_uncore_setup_mmio(gt->uncore, phys_addr);
12656 + if (ret)
12657 +diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
12658 +index f19c2de77ff66..184ee9b11a4da 100644
12659 +--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
12660 ++++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
12661 +@@ -141,20 +141,6 @@ struct intel_gt {
12662 + struct intel_wakeref wakeref;
12663 + atomic_t user_wakeref;
12664 +
12665 +- /**
12666 +- * Protects access to lmem usefault list.
12667 +- * It is required, if we are outside of the runtime suspend path,
12668 +- * access to @lmem_userfault_list requires always first grabbing the
12669 +- * runtime pm, to ensure we can't race against runtime suspend.
12670 +- * Once we have that we also need to grab @lmem_userfault_lock,
12671 +- * at which point we have exclusive access.
12672 +- * The runtime suspend path is special since it doesn't really hold any locks,
12673 +- * but instead has exclusive access by virtue of all other accesses requiring
12674 +- * holding the runtime pm wakeref.
12675 +- */
12676 +- struct mutex lmem_userfault_lock;
12677 +- struct list_head lmem_userfault_list;
12678 +-
12679 + struct list_head closed_vma;
12680 + spinlock_t closed_lock; /* guards the list of closed_vma */
12681 +
12682 +@@ -170,9 +156,6 @@ struct intel_gt {
12683 + */
12684 + intel_wakeref_t awake;
12685 +
12686 +- /* Manual runtime pm autosuspend delay for user GGTT/lmem mmaps */
12687 +- struct intel_wakeref_auto userfault_wakeref;
12688 +-
12689 + u32 clock_frequency;
12690 + u32 clock_period_ns;
12691 +
12692 +diff --git a/drivers/gpu/drm/i915/gt/sysfs_engines.c b/drivers/gpu/drm/i915/gt/sysfs_engines.c
12693 +index 9670310562029..f2d9858d827c2 100644
12694 +--- a/drivers/gpu/drm/i915/gt/sysfs_engines.c
12695 ++++ b/drivers/gpu/drm/i915/gt/sysfs_engines.c
12696 +@@ -144,7 +144,7 @@ max_spin_store(struct kobject *kobj, struct kobj_attribute *attr,
12697 + const char *buf, size_t count)
12698 + {
12699 + struct intel_engine_cs *engine = kobj_to_engine(kobj);
12700 +- unsigned long long duration;
12701 ++ unsigned long long duration, clamped;
12702 + int err;
12703 +
12704 + /*
12705 +@@ -168,7 +168,8 @@ max_spin_store(struct kobject *kobj, struct kobj_attribute *attr,
12706 + if (err)
12707 + return err;
12708 +
12709 +- if (duration > jiffies_to_nsecs(2))
12710 ++ clamped = intel_clamp_max_busywait_duration_ns(engine, duration);
12711 ++ if (duration != clamped)
12712 + return -EINVAL;
12713 +
12714 + WRITE_ONCE(engine->props.max_busywait_duration_ns, duration);
12715 +@@ -203,7 +204,7 @@ timeslice_store(struct kobject *kobj, struct kobj_attribute *attr,
12716 + const char *buf, size_t count)
12717 + {
12718 + struct intel_engine_cs *engine = kobj_to_engine(kobj);
12719 +- unsigned long long duration;
12720 ++ unsigned long long duration, clamped;
12721 + int err;
12722 +
12723 + /*
12724 +@@ -218,7 +219,8 @@ timeslice_store(struct kobject *kobj, struct kobj_attribute *attr,
12725 + if (err)
12726 + return err;
12727 +
12728 +- if (duration > jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT))
12729 ++ clamped = intel_clamp_timeslice_duration_ms(engine, duration);
12730 ++ if (duration != clamped)
12731 + return -EINVAL;
12732 +
12733 + WRITE_ONCE(engine->props.timeslice_duration_ms, duration);
12734 +@@ -256,7 +258,7 @@ stop_store(struct kobject *kobj, struct kobj_attribute *attr,
12735 + const char *buf, size_t count)
12736 + {
12737 + struct intel_engine_cs *engine = kobj_to_engine(kobj);
12738 +- unsigned long long duration;
12739 ++ unsigned long long duration, clamped;
12740 + int err;
12741 +
12742 + /*
12743 +@@ -272,7 +274,8 @@ stop_store(struct kobject *kobj, struct kobj_attribute *attr,
12744 + if (err)
12745 + return err;
12746 +
12747 +- if (duration > jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT))
12748 ++ clamped = intel_clamp_stop_timeout_ms(engine, duration);
12749 ++ if (duration != clamped)
12750 + return -EINVAL;
12751 +
12752 + WRITE_ONCE(engine->props.stop_timeout_ms, duration);
12753 +@@ -306,7 +309,7 @@ preempt_timeout_store(struct kobject *kobj, struct kobj_attribute *attr,
12754 + const char *buf, size_t count)
12755 + {
12756 + struct intel_engine_cs *engine = kobj_to_engine(kobj);
12757 +- unsigned long long timeout;
12758 ++ unsigned long long timeout, clamped;
12759 + int err;
12760 +
12761 + /*
12762 +@@ -322,7 +325,8 @@ preempt_timeout_store(struct kobject *kobj, struct kobj_attribute *attr,
12763 + if (err)
12764 + return err;
12765 +
12766 +- if (timeout > jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT))
12767 ++ clamped = intel_clamp_preempt_timeout_ms(engine, timeout);
12768 ++ if (timeout != clamped)
12769 + return -EINVAL;
12770 +
12771 + WRITE_ONCE(engine->props.preempt_timeout_ms, timeout);
12772 +@@ -362,7 +366,7 @@ heartbeat_store(struct kobject *kobj, struct kobj_attribute *attr,
12773 + const char *buf, size_t count)
12774 + {
12775 + struct intel_engine_cs *engine = kobj_to_engine(kobj);
12776 +- unsigned long long delay;
12777 ++ unsigned long long delay, clamped;
12778 + int err;
12779 +
12780 + /*
12781 +@@ -379,7 +383,8 @@ heartbeat_store(struct kobject *kobj, struct kobj_attribute *attr,
12782 + if (err)
12783 + return err;
12784 +
12785 +- if (delay >= jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT))
12786 ++ clamped = intel_clamp_heartbeat_interval_ms(engine, delay);
12787 ++ if (delay != clamped)
12788 + return -EINVAL;
12789 +
12790 + err = intel_engine_set_heartbeat(engine, delay);
12791 +diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
12792 +index 8f11651460131..685ddccc0f26a 100644
12793 +--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
12794 ++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
12795 +@@ -165,7 +165,7 @@ static const struct __guc_mmio_reg_descr empty_regs_list[] = {
12796 + }
12797 +
12798 + /* List of lists */
12799 +-static struct __guc_mmio_reg_descr_group default_lists[] = {
12800 ++static const struct __guc_mmio_reg_descr_group default_lists[] = {
12801 + MAKE_REGLIST(default_global_regs, PF, GLOBAL, 0),
12802 + MAKE_REGLIST(default_rc_class_regs, PF, ENGINE_CLASS, GUC_RENDER_CLASS),
12803 + MAKE_REGLIST(xe_lpd_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_RENDER_CLASS),
12804 +@@ -419,6 +419,44 @@ guc_capture_get_device_reglist(struct intel_guc *guc)
12805 + return default_lists;
12806 + }
12807 +
12808 ++static const char *
12809 ++__stringify_type(u32 type)
12810 ++{
12811 ++ switch (type) {
12812 ++ case GUC_CAPTURE_LIST_TYPE_GLOBAL:
12813 ++ return "Global";
12814 ++ case GUC_CAPTURE_LIST_TYPE_ENGINE_CLASS:
12815 ++ return "Class";
12816 ++ case GUC_CAPTURE_LIST_TYPE_ENGINE_INSTANCE:
12817 ++ return "Instance";
12818 ++ default:
12819 ++ break;
12820 ++ }
12821 ++
12822 ++ return "unknown";
12823 ++}
12824 ++
12825 ++static const char *
12826 ++__stringify_engclass(u32 class)
12827 ++{
12828 ++ switch (class) {
12829 ++ case GUC_RENDER_CLASS:
12830 ++ return "Render";
12831 ++ case GUC_VIDEO_CLASS:
12832 ++ return "Video";
12833 ++ case GUC_VIDEOENHANCE_CLASS:
12834 ++ return "VideoEnhance";
12835 ++ case GUC_BLITTER_CLASS:
12836 ++ return "Blitter";
12837 ++ case GUC_COMPUTE_CLASS:
12838 ++ return "Compute";
12839 ++ default:
12840 ++ break;
12841 ++ }
12842 ++
12843 ++ return "unknown";
12844 ++}
12845 ++
12846 + static int
12847 + guc_capture_list_init(struct intel_guc *guc, u32 owner, u32 type, u32 classid,
12848 + struct guc_mmio_reg *ptr, u16 num_entries)
12849 +@@ -482,32 +520,55 @@ guc_cap_list_num_regs(struct intel_guc_state_capture *gc, u32 owner, u32 type, u
12850 + return num_regs;
12851 + }
12852 +
12853 +-int
12854 +-intel_guc_capture_getlistsize(struct intel_guc *guc, u32 owner, u32 type, u32 classid,
12855 +- size_t *size)
12856 ++static int
12857 ++guc_capture_getlistsize(struct intel_guc *guc, u32 owner, u32 type, u32 classid,
12858 ++ size_t *size, bool is_purpose_est)
12859 + {
12860 + struct intel_guc_state_capture *gc = guc->capture;
12861 ++ struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
12862 + struct __guc_capture_ads_cache *cache = &gc->ads_cache[owner][type][classid];
12863 + int num_regs;
12864 +
12865 +- if (!gc->reglists)
12866 ++ if (!gc->reglists) {
12867 ++ drm_warn(&i915->drm, "GuC-capture: No reglist on this device\n");
12868 + return -ENODEV;
12869 ++ }
12870 +
12871 + if (cache->is_valid) {
12872 + *size = cache->size;
12873 + return cache->status;
12874 + }
12875 +
12876 ++ if (!is_purpose_est && owner == GUC_CAPTURE_LIST_INDEX_PF &&
12877 ++ !guc_capture_get_one_list(gc->reglists, owner, type, classid)) {
12878 ++ if (type == GUC_CAPTURE_LIST_TYPE_GLOBAL)
12879 ++ drm_warn(&i915->drm, "Missing GuC-Err-Cap reglist Global!\n");
12880 ++ else
12881 ++ drm_warn(&i915->drm, "Missing GuC-Err-Cap reglist %s(%u):%s(%u)!\n",
12882 ++ __stringify_type(type), type,
12883 ++ __stringify_engclass(classid), classid);
12884 ++ return -ENODATA;
12885 ++ }
12886 ++
12887 + num_regs = guc_cap_list_num_regs(gc, owner, type, classid);
12888 ++ /* intentional empty lists can exist depending on hw config */
12889 + if (!num_regs)
12890 + return -ENODATA;
12891 +
12892 +- *size = PAGE_ALIGN((sizeof(struct guc_debug_capture_list)) +
12893 +- (num_regs * sizeof(struct guc_mmio_reg)));
12894 ++ if (size)
12895 ++ *size = PAGE_ALIGN((sizeof(struct guc_debug_capture_list)) +
12896 ++ (num_regs * sizeof(struct guc_mmio_reg)));
12897 +
12898 + return 0;
12899 + }
12900 +
12901 ++int
12902 ++intel_guc_capture_getlistsize(struct intel_guc *guc, u32 owner, u32 type, u32 classid,
12903 ++ size_t *size)
12904 ++{
12905 ++ return guc_capture_getlistsize(guc, owner, type, classid, size, false);
12906 ++}
12907 ++
12908 + static void guc_capture_create_prealloc_nodes(struct intel_guc *guc);
12909 +
12910 + int
12911 +@@ -606,7 +667,7 @@ guc_capture_output_min_size_est(struct intel_guc *guc)
12912 + struct intel_gt *gt = guc_to_gt(guc);
12913 + struct intel_engine_cs *engine;
12914 + enum intel_engine_id id;
12915 +- int worst_min_size = 0, num_regs = 0;
12916 ++ int worst_min_size = 0;
12917 + size_t tmp = 0;
12918 +
12919 + if (!guc->capture)
12920 +@@ -627,21 +688,19 @@ guc_capture_output_min_size_est(struct intel_guc *guc)
12921 + worst_min_size += sizeof(struct guc_state_capture_group_header_t) +
12922 + (3 * sizeof(struct guc_state_capture_header_t));
12923 +
12924 +- if (!intel_guc_capture_getlistsize(guc, 0, GUC_CAPTURE_LIST_TYPE_GLOBAL, 0, &tmp))
12925 +- num_regs += tmp;
12926 ++ if (!guc_capture_getlistsize(guc, 0, GUC_CAPTURE_LIST_TYPE_GLOBAL, 0, &tmp, true))
12927 ++ worst_min_size += tmp;
12928 +
12929 +- if (!intel_guc_capture_getlistsize(guc, 0, GUC_CAPTURE_LIST_TYPE_ENGINE_CLASS,
12930 +- engine->class, &tmp)) {
12931 +- num_regs += tmp;
12932 ++ if (!guc_capture_getlistsize(guc, 0, GUC_CAPTURE_LIST_TYPE_ENGINE_CLASS,
12933 ++ engine->class, &tmp, true)) {
12934 ++ worst_min_size += tmp;
12935 + }
12936 +- if (!intel_guc_capture_getlistsize(guc, 0, GUC_CAPTURE_LIST_TYPE_ENGINE_INSTANCE,
12937 +- engine->class, &tmp)) {
12938 +- num_regs += tmp;
12939 ++ if (!guc_capture_getlistsize(guc, 0, GUC_CAPTURE_LIST_TYPE_ENGINE_INSTANCE,
12940 ++ engine->class, &tmp, true)) {
12941 ++ worst_min_size += tmp;
12942 + }
12943 + }
12944 +
12945 +- worst_min_size += (num_regs * sizeof(struct guc_mmio_reg));
12946 +-
12947 + return worst_min_size;
12948 + }
12949 +
12950 +@@ -658,15 +717,23 @@ static void check_guc_capture_size(struct intel_guc *guc)
12951 + int spare_size = min_size * GUC_CAPTURE_OVERBUFFER_MULTIPLIER;
12952 + u32 buffer_size = intel_guc_log_section_size_capture(&guc->log);
12953 +
12954 ++ /*
12955 ++ * NOTE: min_size is much smaller than the capture region allocation (DG2: <80K vs 1MB)
12956 ++ * Additionally, it's based on space needed to fit all engines getting reset at once
12957 ++ * within the same G2H handler task slot. This is very unlikely. However, if GuC really
12958 ++ * does run out of space for whatever reason, we will see a separate warning message
12959 ++ * when processing the G2H event capture-notification, search for:
12960 ++ * INTEL_GUC_STATE_CAPTURE_EVENT_STATUS_NOSPACE.
12961 ++ */
12962 + if (min_size < 0)
12963 + drm_warn(&i915->drm, "Failed to calculate GuC error state capture buffer minimum size: %d!\n",
12964 + min_size);
12965 + else if (min_size > buffer_size)
12966 +- drm_warn(&i915->drm, "GuC error state capture buffer is too small: %d < %d\n",
12967 ++ drm_warn(&i915->drm, "GuC error state capture buffer may be small: %d < %d\n",
12968 + buffer_size, min_size);
12969 + else if (spare_size > buffer_size)
12970 +- drm_notice(&i915->drm, "GuC error state capture buffer maybe too small: %d < %d (min = %d)\n",
12971 +- buffer_size, spare_size, min_size);
12972 ++ drm_dbg(&i915->drm, "GuC error state capture buffer lacks spare size: %d < %d (min = %d)\n",
12973 ++ buffer_size, spare_size, min_size);
12974 + }
12975 +
12976 + /*
12977 +diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
12978 +index 323b055e5db97..502e7cb5a3025 100644
12979 +--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
12980 ++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
12981 +@@ -305,6 +305,27 @@ struct guc_update_context_policy {
12982 +
12983 + #define GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US 500000
12984 +
12985 ++/*
12986 ++ * GuC converts the timeout to clock ticks internally. Different platforms have
12987 ++ * different GuC clocks. Thus, the maximum value before overflow is platform
12988 ++ * dependent. Current worst case scenario is about 110s. So, the spec says to
12989 ++ * limit to 100s to be safe.
12990 ++ */
12991 ++#define GUC_POLICY_MAX_EXEC_QUANTUM_US (100 * 1000 * 1000UL)
12992 ++#define GUC_POLICY_MAX_PREEMPT_TIMEOUT_US (100 * 1000 * 1000UL)
12993 ++
12994 ++static inline u32 guc_policy_max_exec_quantum_ms(void)
12995 ++{
12996 ++ BUILD_BUG_ON(GUC_POLICY_MAX_EXEC_QUANTUM_US >= UINT_MAX);
12997 ++ return GUC_POLICY_MAX_EXEC_QUANTUM_US / 1000;
12998 ++}
12999 ++
13000 ++static inline u32 guc_policy_max_preempt_timeout_ms(void)
13001 ++{
13002 ++ BUILD_BUG_ON(GUC_POLICY_MAX_PREEMPT_TIMEOUT_US >= UINT_MAX);
13003 ++ return GUC_POLICY_MAX_PREEMPT_TIMEOUT_US / 1000;
13004 ++}
13005 ++
13006 + struct guc_policies {
13007 + u32 submission_queue_depth[GUC_MAX_ENGINE_CLASSES];
13008 + /* In micro seconds. How much time to allow before DPC processing is
13009 +diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
13010 +index 55d3ef93e86f8..68331c538b0a7 100644
13011 +--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
13012 ++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c
13013 +@@ -16,15 +16,15 @@
13014 + #if defined(CONFIG_DRM_I915_DEBUG_GUC)
13015 + #define GUC_LOG_DEFAULT_CRASH_BUFFER_SIZE SZ_2M
13016 + #define GUC_LOG_DEFAULT_DEBUG_BUFFER_SIZE SZ_16M
13017 +-#define GUC_LOG_DEFAULT_CAPTURE_BUFFER_SIZE SZ_4M
13018 ++#define GUC_LOG_DEFAULT_CAPTURE_BUFFER_SIZE SZ_1M
13019 + #elif defined(CONFIG_DRM_I915_DEBUG_GEM)
13020 + #define GUC_LOG_DEFAULT_CRASH_BUFFER_SIZE SZ_1M
13021 + #define GUC_LOG_DEFAULT_DEBUG_BUFFER_SIZE SZ_2M
13022 +-#define GUC_LOG_DEFAULT_CAPTURE_BUFFER_SIZE SZ_4M
13023 ++#define GUC_LOG_DEFAULT_CAPTURE_BUFFER_SIZE SZ_1M
13024 + #else
13025 + #define GUC_LOG_DEFAULT_CRASH_BUFFER_SIZE SZ_8K
13026 + #define GUC_LOG_DEFAULT_DEBUG_BUFFER_SIZE SZ_64K
13027 +-#define GUC_LOG_DEFAULT_CAPTURE_BUFFER_SIZE SZ_2M
13028 ++#define GUC_LOG_DEFAULT_CAPTURE_BUFFER_SIZE SZ_1M
13029 + #endif
13030 +
13031 + static void guc_log_copy_debuglogs_for_relay(struct intel_guc_log *log);
13032 +diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
13033 +index 1db59eeb34db9..1a23e901cc663 100644
13034 +--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
13035 ++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
13036 +@@ -2429,6 +2429,10 @@ static int guc_context_policy_init_v70(struct intel_context *ce, bool loop)
13037 + int ret;
13038 +
13039 + /* NB: For both of these, zero means disabled. */
13040 ++ GEM_BUG_ON(overflows_type(engine->props.timeslice_duration_ms * 1000,
13041 ++ execution_quantum));
13042 ++ GEM_BUG_ON(overflows_type(engine->props.preempt_timeout_ms * 1000,
13043 ++ preemption_timeout));
13044 + execution_quantum = engine->props.timeslice_duration_ms * 1000;
13045 + preemption_timeout = engine->props.preempt_timeout_ms * 1000;
13046 +
13047 +@@ -2462,6 +2466,10 @@ static void guc_context_policy_init_v69(struct intel_engine_cs *engine,
13048 + desc->policy_flags |= CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE_V69;
13049 +
13050 + /* NB: For both of these, zero means disabled. */
13051 ++ GEM_BUG_ON(overflows_type(engine->props.timeslice_duration_ms * 1000,
13052 ++ desc->execution_quantum));
13053 ++ GEM_BUG_ON(overflows_type(engine->props.preempt_timeout_ms * 1000,
13054 ++ desc->preemption_timeout));
13055 + desc->execution_quantum = engine->props.timeslice_duration_ms * 1000;
13056 + desc->preemption_timeout = engine->props.preempt_timeout_ms * 1000;
13057 + }
13058 +diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
13059 +index f2a15d8155f4a..2ce30cff461a0 100644
13060 +--- a/drivers/gpu/drm/i915/i915_driver.c
13061 ++++ b/drivers/gpu/drm/i915/i915_driver.c
13062 +@@ -1662,7 +1662,8 @@ static int intel_runtime_suspend(struct device *kdev)
13063 +
13064 + intel_runtime_pm_enable_interrupts(dev_priv);
13065 +
13066 +- intel_gt_runtime_resume(to_gt(dev_priv));
13067 ++ for_each_gt(gt, dev_priv, i)
13068 ++ intel_gt_runtime_resume(gt);
13069 +
13070 + enable_rpm_wakeref_asserts(rpm);
13071 +
13072 +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
13073 +index 2bdddb61ebd7a..38c26668b9602 100644
13074 +--- a/drivers/gpu/drm/i915/i915_gem.c
13075 ++++ b/drivers/gpu/drm/i915/i915_gem.c
13076 +@@ -843,7 +843,7 @@ void i915_gem_runtime_suspend(struct drm_i915_private *i915)
13077 + __i915_gem_object_release_mmap_gtt(obj);
13078 +
13079 + list_for_each_entry_safe(obj, on,
13080 +- &to_gt(i915)->lmem_userfault_list, userfault_link)
13081 ++ &i915->runtime_pm.lmem_userfault_list, userfault_link)
13082 + i915_gem_object_runtime_pm_release_mmap_offset(obj);
13083 +
13084 + /*
13085 +@@ -1128,6 +1128,8 @@ void i915_gem_drain_workqueue(struct drm_i915_private *i915)
13086 +
13087 + int i915_gem_init(struct drm_i915_private *dev_priv)
13088 + {
13089 ++ struct intel_gt *gt;
13090 ++ unsigned int i;
13091 + int ret;
13092 +
13093 + /* We need to fallback to 4K pages if host doesn't support huge gtt. */
13094 +@@ -1158,9 +1160,11 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
13095 + */
13096 + intel_init_clock_gating(dev_priv);
13097 +
13098 +- ret = intel_gt_init(to_gt(dev_priv));
13099 +- if (ret)
13100 +- goto err_unlock;
13101 ++ for_each_gt(gt, dev_priv, i) {
13102 ++ ret = intel_gt_init(gt);
13103 ++ if (ret)
13104 ++ goto err_unlock;
13105 ++ }
13106 +
13107 + return 0;
13108 +
13109 +@@ -1173,8 +1177,13 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
13110 + err_unlock:
13111 + i915_gem_drain_workqueue(dev_priv);
13112 +
13113 +- if (ret != -EIO)
13114 +- intel_uc_cleanup_firmwares(&to_gt(dev_priv)->uc);
13115 ++ if (ret != -EIO) {
13116 ++ for_each_gt(gt, dev_priv, i) {
13117 ++ intel_gt_driver_remove(gt);
13118 ++ intel_gt_driver_release(gt);
13119 ++ intel_uc_cleanup_firmwares(&gt->uc);
13120 ++ }
13121 ++ }
13122 +
13123 + if (ret == -EIO) {
13124 + /*
13125 +@@ -1182,10 +1191,12 @@ err_unlock:
13126 + * as wedged. But we only want to do this when the GPU is angry,
13127 + * for all other failure, such as an allocation failure, bail.
13128 + */
13129 +- if (!intel_gt_is_wedged(to_gt(dev_priv))) {
13130 +- i915_probe_error(dev_priv,
13131 +- "Failed to initialize GPU, declaring it wedged!\n");
13132 +- intel_gt_set_wedged(to_gt(dev_priv));
13133 ++ for_each_gt(gt, dev_priv, i) {
13134 ++ if (!intel_gt_is_wedged(gt)) {
13135 ++ i915_probe_error(dev_priv,
13136 ++ "Failed to initialize GPU, declaring it wedged!\n");
13137 ++ intel_gt_set_wedged(gt);
13138 ++ }
13139 + }
13140 +
13141 + /* Minimal basic recovery for KMS */
13142 +@@ -1213,10 +1224,12 @@ void i915_gem_driver_unregister(struct drm_i915_private *i915)
13143 +
13144 + void i915_gem_driver_remove(struct drm_i915_private *dev_priv)
13145 + {
13146 +- intel_wakeref_auto_fini(&to_gt(dev_priv)->userfault_wakeref);
13147 ++ struct intel_gt *gt;
13148 ++ unsigned int i;
13149 +
13150 + i915_gem_suspend_late(dev_priv);
13151 +- intel_gt_driver_remove(to_gt(dev_priv));
13152 ++ for_each_gt(gt, dev_priv, i)
13153 ++ intel_gt_driver_remove(gt);
13154 + dev_priv->uabi_engines = RB_ROOT;
13155 +
13156 + /* Flush any outstanding unpin_work. */
13157 +@@ -1227,9 +1240,13 @@ void i915_gem_driver_remove(struct drm_i915_private *dev_priv)
13158 +
13159 + void i915_gem_driver_release(struct drm_i915_private *dev_priv)
13160 + {
13161 +- intel_gt_driver_release(to_gt(dev_priv));
13162 ++ struct intel_gt *gt;
13163 ++ unsigned int i;
13164 +
13165 +- intel_uc_cleanup_firmwares(&to_gt(dev_priv)->uc);
13166 ++ for_each_gt(gt, dev_priv, i) {
13167 ++ intel_gt_driver_release(gt);
13168 ++ intel_uc_cleanup_firmwares(&gt->uc);
13169 ++ }
13170 +
13171 + /* Flush any outstanding work, including i915_gem_context.release_work. */
13172 + i915_gem_drain_workqueue(dev_priv);
13173 +diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
13174 +index 744cca507946b..129746713d072 100644
13175 +--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
13176 ++++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
13177 +@@ -633,6 +633,8 @@ void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm)
13178 + runtime_pm);
13179 + int count = atomic_read(&rpm->wakeref_count);
13180 +
13181 ++ intel_wakeref_auto_fini(&rpm->userfault_wakeref);
13182 ++
13183 + drm_WARN(&i915->drm, count,
13184 + "i915 raw-wakerefs=%d wakelocks=%d on cleanup\n",
13185 + intel_rpm_raw_wakeref_count(count),
13186 +@@ -652,4 +654,7 @@ void intel_runtime_pm_init_early(struct intel_runtime_pm *rpm)
13187 + rpm->available = HAS_RUNTIME_PM(i915);
13188 +
13189 + init_intel_runtime_pm_wakeref(rpm);
13190 ++ INIT_LIST_HEAD(&rpm->lmem_userfault_list);
13191 ++ spin_lock_init(&rpm->lmem_userfault_lock);
13192 ++ intel_wakeref_auto_init(&rpm->userfault_wakeref, rpm);
13193 + }
13194 +diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h
13195 +index d9160e3ff4afc..98b8b28baaa15 100644
13196 +--- a/drivers/gpu/drm/i915/intel_runtime_pm.h
13197 ++++ b/drivers/gpu/drm/i915/intel_runtime_pm.h
13198 +@@ -53,6 +53,28 @@ struct intel_runtime_pm {
13199 + bool irqs_enabled;
13200 + bool no_wakeref_tracking;
13201 +
13202 ++ /*
13203 ++ * Protects access to lmem usefault list.
13204 ++ * It is required, if we are outside of the runtime suspend path,
13205 ++ * access to @lmem_userfault_list requires always first grabbing the
13206 ++ * runtime pm, to ensure we can't race against runtime suspend.
13207 ++ * Once we have that we also need to grab @lmem_userfault_lock,
13208 ++ * at which point we have exclusive access.
13209 ++ * The runtime suspend path is special since it doesn't really hold any locks,
13210 ++ * but instead has exclusive access by virtue of all other accesses requiring
13211 ++ * holding the runtime pm wakeref.
13212 ++ */
13213 ++ spinlock_t lmem_userfault_lock;
13214 ++
13215 ++ /*
13216 ++ * Keep list of userfaulted gem obj, which require to release their
13217 ++ * mmap mappings at runtime suspend path.
13218 ++ */
13219 ++ struct list_head lmem_userfault_list;
13220 ++
13221 ++ /* Manual runtime pm autosuspend delay for user GGTT/lmem mmaps */
13222 ++ struct intel_wakeref_auto userfault_wakeref;
13223 ++
13224 + #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM)
13225 + /*
13226 + * To aide detection of wakeref leaks and general misuse, we
13227 +diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
13228 +index 508a6d994e831..1f5d39a4077cd 100644
13229 +--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
13230 ++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
13231 +@@ -461,9 +461,6 @@ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
13232 + if (--dpi->refcount != 0)
13233 + return;
13234 +
13235 +- if (dpi->pinctrl && dpi->pins_gpio)
13236 +- pinctrl_select_state(dpi->pinctrl, dpi->pins_gpio);
13237 +-
13238 + mtk_dpi_disable(dpi);
13239 + clk_disable_unprepare(dpi->pixel_clk);
13240 + clk_disable_unprepare(dpi->engine_clk);
13241 +@@ -488,9 +485,6 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
13242 + goto err_pixel;
13243 + }
13244 +
13245 +- if (dpi->pinctrl && dpi->pins_dpi)
13246 +- pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
13247 +-
13248 + return 0;
13249 +
13250 + err_pixel:
13251 +@@ -721,12 +715,18 @@ static void mtk_dpi_bridge_disable(struct drm_bridge *bridge)
13252 + struct mtk_dpi *dpi = bridge_to_dpi(bridge);
13253 +
13254 + mtk_dpi_power_off(dpi);
13255 ++
13256 ++ if (dpi->pinctrl && dpi->pins_gpio)
13257 ++ pinctrl_select_state(dpi->pinctrl, dpi->pins_gpio);
13258 + }
13259 +
13260 + static void mtk_dpi_bridge_enable(struct drm_bridge *bridge)
13261 + {
13262 + struct mtk_dpi *dpi = bridge_to_dpi(bridge);
13263 +
13264 ++ if (dpi->pinctrl && dpi->pins_dpi)
13265 ++ pinctrl_select_state(dpi->pinctrl, dpi->pins_dpi);
13266 ++
13267 + mtk_dpi_power_on(dpi);
13268 + mtk_dpi_set_display_mode(dpi, &dpi->mode);
13269 + mtk_dpi_enable(dpi);
13270 +diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
13271 +index 4c80b6896dc3d..6e8f99554f548 100644
13272 +--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
13273 ++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
13274 +@@ -1202,9 +1202,10 @@ static enum drm_connector_status mtk_hdmi_detect(struct mtk_hdmi *hdmi)
13275 + return mtk_hdmi_update_plugged_status(hdmi);
13276 + }
13277 +
13278 +-static int mtk_hdmi_bridge_mode_valid(struct drm_bridge *bridge,
13279 +- const struct drm_display_info *info,
13280 +- const struct drm_display_mode *mode)
13281 ++static enum drm_mode_status
13282 ++mtk_hdmi_bridge_mode_valid(struct drm_bridge *bridge,
13283 ++ const struct drm_display_info *info,
13284 ++ const struct drm_display_mode *mode)
13285 + {
13286 + struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
13287 + struct drm_bridge *next_bridge;
13288 +diff --git a/drivers/gpu/drm/meson/meson_encoder_cvbs.c b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
13289 +index 5675bc2a92cf8..3f73b211fa8e3 100644
13290 +--- a/drivers/gpu/drm/meson/meson_encoder_cvbs.c
13291 ++++ b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
13292 +@@ -116,9 +116,10 @@ static int meson_encoder_cvbs_get_modes(struct drm_bridge *bridge,
13293 + return i;
13294 + }
13295 +
13296 +-static int meson_encoder_cvbs_mode_valid(struct drm_bridge *bridge,
13297 +- const struct drm_display_info *display_info,
13298 +- const struct drm_display_mode *mode)
13299 ++static enum drm_mode_status
13300 ++meson_encoder_cvbs_mode_valid(struct drm_bridge *bridge,
13301 ++ const struct drm_display_info *display_info,
13302 ++ const struct drm_display_mode *mode)
13303 + {
13304 + if (meson_cvbs_get_mode(mode))
13305 + return MODE_OK;
13306 +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
13307 +index fdc578016e0bf..e846e629c00d8 100644
13308 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
13309 ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
13310 +@@ -1906,7 +1906,7 @@ static u32 fuse_to_supp_hw(struct device *dev, struct adreno_rev rev, u32 fuse)
13311 +
13312 + if (val == UINT_MAX) {
13313 + DRM_DEV_ERROR(dev,
13314 +- "missing support for speed-bin: %u. Some OPPs may not be supported by hardware",
13315 ++ "missing support for speed-bin: %u. Some OPPs may not be supported by hardware\n",
13316 + fuse);
13317 + return UINT_MAX;
13318 + }
13319 +@@ -1916,7 +1916,7 @@ static u32 fuse_to_supp_hw(struct device *dev, struct adreno_rev rev, u32 fuse)
13320 +
13321 + static int a6xx_set_supported_hw(struct device *dev, struct adreno_rev rev)
13322 + {
13323 +- u32 supp_hw = UINT_MAX;
13324 ++ u32 supp_hw;
13325 + u32 speedbin;
13326 + int ret;
13327 +
13328 +@@ -1928,15 +1928,13 @@ static int a6xx_set_supported_hw(struct device *dev, struct adreno_rev rev)
13329 + if (ret == -ENOENT) {
13330 + return 0;
13331 + } else if (ret) {
13332 +- DRM_DEV_ERROR(dev,
13333 +- "failed to read speed-bin (%d). Some OPPs may not be supported by hardware",
13334 +- ret);
13335 +- goto done;
13336 ++ dev_err_probe(dev, ret,
13337 ++ "failed to read speed-bin. Some OPPs may not be supported by hardware\n");
13338 ++ return ret;
13339 + }
13340 +
13341 + supp_hw = fuse_to_supp_hw(dev, rev, speedbin);
13342 +
13343 +-done:
13344 + ret = devm_pm_opp_set_supported_hw(dev, &supp_hw, 1);
13345 + if (ret)
13346 + return ret;
13347 +diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
13348 +index f2ddcfb6f7ee6..3662df698dae5 100644
13349 +--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
13350 ++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
13351 +@@ -42,7 +42,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
13352 + u32 initial_lines)
13353 + {
13354 + struct dpu_hw_blk_reg_map *c = &hw_dsc->hw;
13355 +- u32 data, lsb, bpp;
13356 ++ u32 data;
13357 + u32 slice_last_group_size;
13358 + u32 det_thresh_flatness;
13359 + bool is_cmd_mode = !(mode & DSC_MODE_VIDEO);
13360 +@@ -56,14 +56,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
13361 + data = (initial_lines << 20);
13362 + data |= ((slice_last_group_size - 1) << 18);
13363 + /* bpp is 6.4 format, 4 LSBs bits are for fractional part */
13364 +- data |= dsc->bits_per_pixel << 12;
13365 +- lsb = dsc->bits_per_pixel % 4;
13366 +- bpp = dsc->bits_per_pixel / 4;
13367 +- bpp *= 4;
13368 +- bpp <<= 4;
13369 +- bpp |= lsb;
13370 +-
13371 +- data |= bpp << 8;
13372 ++ data |= (dsc->bits_per_pixel << 8);
13373 + data |= (dsc->block_pred_enable << 7);
13374 + data |= (dsc->line_buf_depth << 3);
13375 + data |= (dsc->simple_422 << 2);
13376 +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
13377 +index b0d21838a1343..29ae5c9613f36 100644
13378 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
13379 ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
13380 +@@ -203,7 +203,7 @@ static int mdp5_set_split_display(struct msm_kms *kms,
13381 + slave_encoder);
13382 + }
13383 +
13384 +-static void mdp5_destroy(struct platform_device *pdev);
13385 ++static void mdp5_destroy(struct mdp5_kms *mdp5_kms);
13386 +
13387 + static void mdp5_kms_destroy(struct msm_kms *kms)
13388 + {
13389 +@@ -223,7 +223,7 @@ static void mdp5_kms_destroy(struct msm_kms *kms)
13390 + }
13391 +
13392 + mdp_kms_destroy(&mdp5_kms->base);
13393 +- mdp5_destroy(mdp5_kms->pdev);
13394 ++ mdp5_destroy(mdp5_kms);
13395 + }
13396 +
13397 + #ifdef CONFIG_DEBUG_FS
13398 +@@ -559,6 +559,8 @@ static int mdp5_kms_init(struct drm_device *dev)
13399 + int irq, i, ret;
13400 +
13401 + ret = mdp5_init(to_platform_device(dev->dev), dev);
13402 ++ if (ret)
13403 ++ return ret;
13404 +
13405 + /* priv->kms would have been populated by the MDP5 driver */
13406 + kms = priv->kms;
13407 +@@ -632,9 +634,8 @@ fail:
13408 + return ret;
13409 + }
13410 +
13411 +-static void mdp5_destroy(struct platform_device *pdev)
13412 ++static void mdp5_destroy(struct mdp5_kms *mdp5_kms)
13413 + {
13414 +- struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev);
13415 + int i;
13416 +
13417 + if (mdp5_kms->ctlm)
13418 +@@ -648,7 +649,7 @@ static void mdp5_destroy(struct platform_device *pdev)
13419 + kfree(mdp5_kms->intfs[i]);
13420 +
13421 + if (mdp5_kms->rpm_enabled)
13422 +- pm_runtime_disable(&pdev->dev);
13423 ++ pm_runtime_disable(&mdp5_kms->pdev->dev);
13424 +
13425 + drm_atomic_private_obj_fini(&mdp5_kms->glob_state);
13426 + drm_modeset_lock_fini(&mdp5_kms->glob_state_lock);
13427 +@@ -797,8 +798,6 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
13428 + goto fail;
13429 + }
13430 +
13431 +- platform_set_drvdata(pdev, mdp5_kms);
13432 +-
13433 + spin_lock_init(&mdp5_kms->resource_lock);
13434 +
13435 + mdp5_kms->dev = dev;
13436 +@@ -839,6 +838,9 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
13437 + */
13438 + clk_set_rate(mdp5_kms->core_clk, 200000000);
13439 +
13440 ++ /* set uninit-ed kms */
13441 ++ priv->kms = &mdp5_kms->base.base;
13442 ++
13443 + pm_runtime_enable(&pdev->dev);
13444 + mdp5_kms->rpm_enabled = true;
13445 +
13446 +@@ -890,13 +892,10 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
13447 + if (ret)
13448 + goto fail;
13449 +
13450 +- /* set uninit-ed kms */
13451 +- priv->kms = &mdp5_kms->base.base;
13452 +-
13453 + return 0;
13454 + fail:
13455 + if (mdp5_kms)
13456 +- mdp5_destroy(pdev);
13457 ++ mdp5_destroy(mdp5_kms);
13458 + return ret;
13459 + }
13460 +
13461 +@@ -953,7 +952,8 @@ static int mdp5_dev_remove(struct platform_device *pdev)
13462 + static __maybe_unused int mdp5_runtime_suspend(struct device *dev)
13463 + {
13464 + struct platform_device *pdev = to_platform_device(dev);
13465 +- struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev);
13466 ++ struct msm_drm_private *priv = platform_get_drvdata(pdev);
13467 ++ struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
13468 +
13469 + DBG("");
13470 +
13471 +@@ -963,7 +963,8 @@ static __maybe_unused int mdp5_runtime_suspend(struct device *dev)
13472 + static __maybe_unused int mdp5_runtime_resume(struct device *dev)
13473 + {
13474 + struct platform_device *pdev = to_platform_device(dev);
13475 +- struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev);
13476 ++ struct msm_drm_private *priv = platform_get_drvdata(pdev);
13477 ++ struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
13478 +
13479 + DBG("");
13480 +
13481 +diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
13482 +index a49f6dbbe8883..c9d9b384ddd03 100644
13483 +--- a/drivers/gpu/drm/msm/dp/dp_display.c
13484 ++++ b/drivers/gpu/drm/msm/dp/dp_display.c
13485 +@@ -857,7 +857,7 @@ static int dp_display_set_mode(struct msm_dp *dp_display,
13486 +
13487 + dp = container_of(dp_display, struct dp_display_private, dp_display);
13488 +
13489 +- dp->panel->dp_mode.drm_mode = mode->drm_mode;
13490 ++ drm_mode_copy(&dp->panel->dp_mode.drm_mode, &mode->drm_mode);
13491 + dp->panel->dp_mode.bpp = mode->bpp;
13492 + dp->panel->dp_mode.capabilities = mode->capabilities;
13493 + dp_panel_init_panel_info(dp->panel);
13494 +diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
13495 +index 7fbf391c024f8..89aadd3b3202b 100644
13496 +--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
13497 ++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
13498 +@@ -21,6 +21,7 @@
13499 +
13500 + #include <video/mipi_display.h>
13501 +
13502 ++#include <drm/display/drm_dsc_helper.h>
13503 + #include <drm/drm_of.h>
13504 +
13505 + #include "dsi.h"
13506 +@@ -33,7 +34,7 @@
13507 +
13508 + #define DSI_RESET_TOGGLE_DELAY_MS 20
13509 +
13510 +-static int dsi_populate_dsc_params(struct drm_dsc_config *dsc);
13511 ++static int dsi_populate_dsc_params(struct msm_dsi_host *msm_host, struct drm_dsc_config *dsc);
13512 +
13513 + static int dsi_get_version(const void __iomem *base, u32 *major, u32 *minor)
13514 + {
13515 +@@ -842,17 +843,15 @@ static void dsi_ctrl_config(struct msm_dsi_host *msm_host, bool enable,
13516 + static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mode, u32 hdisplay)
13517 + {
13518 + struct drm_dsc_config *dsc = msm_host->dsc;
13519 +- u32 reg, intf_width, reg_ctrl, reg_ctrl2;
13520 ++ u32 reg, reg_ctrl, reg_ctrl2;
13521 + u32 slice_per_intf, total_bytes_per_intf;
13522 + u32 pkt_per_line;
13523 +- u32 bytes_in_slice;
13524 + u32 eol_byte_num;
13525 +
13526 + /* first calculate dsc parameters and then program
13527 + * compress mode registers
13528 + */
13529 +- intf_width = hdisplay;
13530 +- slice_per_intf = DIV_ROUND_UP(intf_width, dsc->slice_width);
13531 ++ slice_per_intf = DIV_ROUND_UP(hdisplay, dsc->slice_width);
13532 +
13533 + /* If slice_per_pkt is greater than slice_per_intf
13534 + * then default to 1. This can happen during partial
13535 +@@ -861,12 +860,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
13536 + if (slice_per_intf > dsc->slice_count)
13537 + dsc->slice_count = 1;
13538 +
13539 +- slice_per_intf = DIV_ROUND_UP(hdisplay, dsc->slice_width);
13540 +- bytes_in_slice = DIV_ROUND_UP(dsc->slice_width * dsc->bits_per_pixel, 8);
13541 +-
13542 +- dsc->slice_chunk_size = bytes_in_slice;
13543 +-
13544 +- total_bytes_per_intf = bytes_in_slice * slice_per_intf;
13545 ++ total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
13546 +
13547 + eol_byte_num = total_bytes_per_intf % 3;
13548 + pkt_per_line = slice_per_intf / dsc->slice_count;
13549 +@@ -892,7 +886,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
13550 + reg_ctrl |= reg;
13551 +
13552 + reg_ctrl2 &= ~DSI_COMMAND_COMPRESSION_MODE_CTRL2_STREAM0_SLICE_WIDTH__MASK;
13553 +- reg_ctrl2 |= DSI_COMMAND_COMPRESSION_MODE_CTRL2_STREAM0_SLICE_WIDTH(bytes_in_slice);
13554 ++ reg_ctrl2 |= DSI_COMMAND_COMPRESSION_MODE_CTRL2_STREAM0_SLICE_WIDTH(dsc->slice_chunk_size);
13555 +
13556 + dsi_write(msm_host, REG_DSI_COMMAND_COMPRESSION_MODE_CTRL, reg_ctrl);
13557 + dsi_write(msm_host, REG_DSI_COMMAND_COMPRESSION_MODE_CTRL2, reg_ctrl2);
13558 +@@ -915,6 +909,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
13559 + u32 va_end = va_start + mode->vdisplay;
13560 + u32 hdisplay = mode->hdisplay;
13561 + u32 wc;
13562 ++ int ret;
13563 +
13564 + DBG("");
13565 +
13566 +@@ -950,7 +945,9 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
13567 + /* we do the calculations for dsc parameters here so that
13568 + * panel can use these parameters
13569 + */
13570 +- dsi_populate_dsc_params(dsc);
13571 ++ ret = dsi_populate_dsc_params(msm_host, dsc);
13572 ++ if (ret)
13573 ++ return;
13574 +
13575 + /* Divide the display by 3 but keep back/font porch and
13576 + * pulse width same
13577 +@@ -1754,18 +1751,20 @@ static char bpg_offset[DSC_NUM_BUF_RANGES] = {
13578 + 2, 0, 0, -2, -4, -6, -8, -8, -8, -10, -10, -12, -12, -12, -12
13579 + };
13580 +
13581 +-static int dsi_populate_dsc_params(struct drm_dsc_config *dsc)
13582 +-{
13583 +- int mux_words_size;
13584 +- int groups_per_line, groups_total;
13585 +- int min_rate_buffer_size;
13586 +- int hrd_delay;
13587 +- int pre_num_extra_mux_bits, num_extra_mux_bits;
13588 +- int slice_bits;
13589 +- int target_bpp_x16;
13590 +- int data;
13591 +- int final_value, final_scale;
13592 ++static int dsi_populate_dsc_params(struct msm_dsi_host *msm_host, struct drm_dsc_config *dsc)
13593 ++{
13594 + int i;
13595 ++ u16 bpp = dsc->bits_per_pixel >> 4;
13596 ++
13597 ++ if (dsc->bits_per_pixel & 0xf) {
13598 ++ DRM_DEV_ERROR(&msm_host->pdev->dev, "DSI does not support fractional bits_per_pixel\n");
13599 ++ return -EINVAL;
13600 ++ }
13601 ++
13602 ++ if (dsc->bits_per_component != 8) {
13603 ++ DRM_DEV_ERROR(&msm_host->pdev->dev, "DSI does not support bits_per_component != 8 yet\n");
13604 ++ return -EOPNOTSUPP;
13605 ++ }
13606 +
13607 + dsc->rc_model_size = 8192;
13608 + dsc->first_line_bpg_offset = 12;
13609 +@@ -1783,16 +1782,21 @@ static int dsi_populate_dsc_params(struct drm_dsc_config *dsc)
13610 + for (i = 0; i < DSC_NUM_BUF_RANGES; i++) {
13611 + dsc->rc_range_params[i].range_min_qp = min_qp[i];
13612 + dsc->rc_range_params[i].range_max_qp = max_qp[i];
13613 +- dsc->rc_range_params[i].range_bpg_offset = bpg_offset[i];
13614 ++ /*
13615 ++ * Range BPG Offset contains two's-complement signed values that fill
13616 ++ * 8 bits, yet the registers and DCS PPS field are only 6 bits wide.
13617 ++ */
13618 ++ dsc->rc_range_params[i].range_bpg_offset = bpg_offset[i] & DSC_RANGE_BPG_OFFSET_MASK;
13619 + }
13620 +
13621 +- dsc->initial_offset = 6144; /* Not bpp 12 */
13622 +- if (dsc->bits_per_pixel != 8)
13623 ++ dsc->initial_offset = 6144; /* Not bpp 12 */
13624 ++ if (bpp != 8)
13625 + dsc->initial_offset = 2048; /* bpp = 12 */
13626 +
13627 +- mux_words_size = 48; /* bpc == 8/10 */
13628 +- if (dsc->bits_per_component == 12)
13629 +- mux_words_size = 64;
13630 ++ if (dsc->bits_per_component <= 10)
13631 ++ dsc->mux_word_size = DSC_MUX_WORD_SIZE_8_10_BPC;
13632 ++ else
13633 ++ dsc->mux_word_size = DSC_MUX_WORD_SIZE_12_BPC;
13634 +
13635 + dsc->initial_xmit_delay = 512;
13636 + dsc->initial_scale_value = 32;
13637 +@@ -1804,63 +1808,8 @@ static int dsi_populate_dsc_params(struct drm_dsc_config *dsc)
13638 + dsc->flatness_max_qp = 12;
13639 + dsc->rc_quant_incr_limit0 = 11;
13640 + dsc->rc_quant_incr_limit1 = 11;
13641 +- dsc->mux_word_size = DSC_MUX_WORD_SIZE_8_10_BPC;
13642 +-
13643 +- /* FIXME: need to call drm_dsc_compute_rc_parameters() so that rest of
13644 +- * params are calculated
13645 +- */
13646 +- groups_per_line = DIV_ROUND_UP(dsc->slice_width, 3);
13647 +- dsc->slice_chunk_size = dsc->slice_width * dsc->bits_per_pixel / 8;
13648 +- if ((dsc->slice_width * dsc->bits_per_pixel) % 8)
13649 +- dsc->slice_chunk_size++;
13650 +
13651 +- /* rbs-min */
13652 +- min_rate_buffer_size = dsc->rc_model_size - dsc->initial_offset +
13653 +- dsc->initial_xmit_delay * dsc->bits_per_pixel +
13654 +- groups_per_line * dsc->first_line_bpg_offset;
13655 +-
13656 +- hrd_delay = DIV_ROUND_UP(min_rate_buffer_size, dsc->bits_per_pixel);
13657 +-
13658 +- dsc->initial_dec_delay = hrd_delay - dsc->initial_xmit_delay;
13659 +-
13660 +- dsc->initial_scale_value = 8 * dsc->rc_model_size /
13661 +- (dsc->rc_model_size - dsc->initial_offset);
13662 +-
13663 +- slice_bits = 8 * dsc->slice_chunk_size * dsc->slice_height;
13664 +-
13665 +- groups_total = groups_per_line * dsc->slice_height;
13666 +-
13667 +- data = dsc->first_line_bpg_offset * 2048;
13668 +-
13669 +- dsc->nfl_bpg_offset = DIV_ROUND_UP(data, (dsc->slice_height - 1));
13670 +-
13671 +- pre_num_extra_mux_bits = 3 * (mux_words_size + (4 * dsc->bits_per_component + 4) - 2);
13672 +-
13673 +- num_extra_mux_bits = pre_num_extra_mux_bits - (mux_words_size -
13674 +- ((slice_bits - pre_num_extra_mux_bits) % mux_words_size));
13675 +-
13676 +- data = 2048 * (dsc->rc_model_size - dsc->initial_offset + num_extra_mux_bits);
13677 +- dsc->slice_bpg_offset = DIV_ROUND_UP(data, groups_total);
13678 +-
13679 +- /* bpp * 16 + 0.5 */
13680 +- data = dsc->bits_per_pixel * 16;
13681 +- data *= 2;
13682 +- data++;
13683 +- data /= 2;
13684 +- target_bpp_x16 = data;
13685 +-
13686 +- data = (dsc->initial_xmit_delay * target_bpp_x16) / 16;
13687 +- final_value = dsc->rc_model_size - data + num_extra_mux_bits;
13688 +- dsc->final_offset = final_value;
13689 +-
13690 +- final_scale = 8 * dsc->rc_model_size / (dsc->rc_model_size - final_value);
13691 +-
13692 +- data = (final_scale - 9) * (dsc->nfl_bpg_offset + dsc->slice_bpg_offset);
13693 +- dsc->scale_increment_interval = (2048 * dsc->final_offset) / data;
13694 +-
13695 +- dsc->scale_decrement_interval = groups_per_line / (dsc->initial_scale_value - 8);
13696 +-
13697 +- return 0;
13698 ++ return drm_dsc_compute_rc_parameters(dsc);
13699 + }
13700 +
13701 + static int dsi_host_parse_dt(struct msm_dsi_host *msm_host)
13702 +diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
13703 +index f28fb21e38911..8cd5d50639a53 100644
13704 +--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
13705 ++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
13706 +@@ -252,7 +252,7 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
13707 + if (hdmi->hpd_gpiod)
13708 + gpiod_set_consumer_name(hdmi->hpd_gpiod, "HDMI_HPD");
13709 +
13710 +- pm_runtime_enable(&pdev->dev);
13711 ++ devm_pm_runtime_enable(&pdev->dev);
13712 +
13713 + hdmi->workq = alloc_ordered_workqueue("msm_hdmi", 0);
13714 +
13715 +diff --git a/drivers/gpu/drm/mxsfb/lcdif_kms.c b/drivers/gpu/drm/mxsfb/lcdif_kms.c
13716 +index b1092aab14231..71546a5d0a48c 100644
13717 +--- a/drivers/gpu/drm/mxsfb/lcdif_kms.c
13718 ++++ b/drivers/gpu/drm/mxsfb/lcdif_kms.c
13719 +@@ -5,6 +5,7 @@
13720 + * This code is based on drivers/gpu/drm/mxsfb/mxsfb*
13721 + */
13722 +
13723 ++#include <linux/bitfield.h>
13724 + #include <linux/clk.h>
13725 + #include <linux/io.h>
13726 + #include <linux/iopoll.h>
13727 +@@ -52,16 +53,22 @@ static void lcdif_set_formats(struct lcdif_drm_private *lcdif,
13728 + writel(DISP_PARA_LINE_PATTERN_UYVY_H,
13729 + lcdif->base + LCDC_V8_DISP_PARA);
13730 +
13731 +- /* CSC: BT.601 Full Range RGB to YCbCr coefficients. */
13732 +- writel(CSC0_COEF0_A2(0x096) | CSC0_COEF0_A1(0x04c),
13733 ++ /*
13734 ++ * CSC: BT.601 Limited Range RGB to YCbCr coefficients.
13735 ++ *
13736 ++ * |Y | | 0.2568 0.5041 0.0979| |R| |16 |
13737 ++ * |Cb| = |-0.1482 -0.2910 0.4392| * |G| + |128|
13738 ++ * |Cr| | 0.4392 0.4392 -0.3678| |B| |128|
13739 ++ */
13740 ++ writel(CSC0_COEF0_A2(0x081) | CSC0_COEF0_A1(0x041),
13741 + lcdif->base + LCDC_V8_CSC0_COEF0);
13742 +- writel(CSC0_COEF1_B1(0x7d5) | CSC0_COEF1_A3(0x01d),
13743 ++ writel(CSC0_COEF1_B1(0x7db) | CSC0_COEF1_A3(0x019),
13744 + lcdif->base + LCDC_V8_CSC0_COEF1);
13745 +- writel(CSC0_COEF2_B3(0x080) | CSC0_COEF2_B2(0x7ac),
13746 ++ writel(CSC0_COEF2_B3(0x070) | CSC0_COEF2_B2(0x7b6),
13747 + lcdif->base + LCDC_V8_CSC0_COEF2);
13748 +- writel(CSC0_COEF3_C2(0x795) | CSC0_COEF3_C1(0x080),
13749 ++ writel(CSC0_COEF3_C2(0x7a2) | CSC0_COEF3_C1(0x070),
13750 + lcdif->base + LCDC_V8_CSC0_COEF3);
13751 +- writel(CSC0_COEF4_D1(0x000) | CSC0_COEF4_C3(0x7ec),
13752 ++ writel(CSC0_COEF4_D1(0x010) | CSC0_COEF4_C3(0x7ee),
13753 + lcdif->base + LCDC_V8_CSC0_COEF4);
13754 + writel(CSC0_COEF5_D3(0x080) | CSC0_COEF5_D2(0x080),
13755 + lcdif->base + LCDC_V8_CSC0_COEF5);
13756 +@@ -142,14 +149,36 @@ static void lcdif_set_mode(struct lcdif_drm_private *lcdif, u32 bus_flags)
13757 + CTRLDESCL0_1_WIDTH(m->hdisplay),
13758 + lcdif->base + LCDC_V8_CTRLDESCL0_1);
13759 +
13760 +- writel(CTRLDESCL0_3_PITCH(lcdif->crtc.primary->state->fb->pitches[0]),
13761 +- lcdif->base + LCDC_V8_CTRLDESCL0_3);
13762 ++ /*
13763 ++ * Undocumented P_SIZE and T_SIZE register but those written in the
13764 ++ * downstream kernel those registers control the AXI burst size. As of
13765 ++ * now there are two known values:
13766 ++ * 1 - 128Byte
13767 ++ * 2 - 256Byte
13768 ++ * Downstream set it to 256B burst size to improve the memory
13769 ++ * efficiency so set it here too.
13770 ++ */
13771 ++ ctrl = CTRLDESCL0_3_P_SIZE(2) | CTRLDESCL0_3_T_SIZE(2) |
13772 ++ CTRLDESCL0_3_PITCH(lcdif->crtc.primary->state->fb->pitches[0]);
13773 ++ writel(ctrl, lcdif->base + LCDC_V8_CTRLDESCL0_3);
13774 + }
13775 +
13776 + static void lcdif_enable_controller(struct lcdif_drm_private *lcdif)
13777 + {
13778 + u32 reg;
13779 +
13780 ++ /* Set FIFO Panic watermarks, low 1/3, high 2/3 . */
13781 ++ writel(FIELD_PREP(PANIC0_THRES_LOW_MASK, 1 * PANIC0_THRES_MAX / 3) |
13782 ++ FIELD_PREP(PANIC0_THRES_HIGH_MASK, 2 * PANIC0_THRES_MAX / 3),
13783 ++ lcdif->base + LCDC_V8_PANIC0_THRES);
13784 ++
13785 ++ /*
13786 ++ * Enable FIFO Panic, this does not generate interrupt, but
13787 ++ * boosts NoC priority based on FIFO Panic watermarks.
13788 ++ */
13789 ++ writel(INT_ENABLE_D1_PLANE_PANIC_EN,
13790 ++ lcdif->base + LCDC_V8_INT_ENABLE_D1);
13791 ++
13792 + reg = readl(lcdif->base + LCDC_V8_DISP_PARA);
13793 + reg |= DISP_PARA_DISP_ON;
13794 + writel(reg, lcdif->base + LCDC_V8_DISP_PARA);
13795 +@@ -177,6 +206,9 @@ static void lcdif_disable_controller(struct lcdif_drm_private *lcdif)
13796 + reg = readl(lcdif->base + LCDC_V8_DISP_PARA);
13797 + reg &= ~DISP_PARA_DISP_ON;
13798 + writel(reg, lcdif->base + LCDC_V8_DISP_PARA);
13799 ++
13800 ++ /* Disable FIFO Panic NoC priority booster. */
13801 ++ writel(0, lcdif->base + LCDC_V8_INT_ENABLE_D1);
13802 + }
13803 +
13804 + static void lcdif_reset_block(struct lcdif_drm_private *lcdif)
13805 +diff --git a/drivers/gpu/drm/mxsfb/lcdif_regs.h b/drivers/gpu/drm/mxsfb/lcdif_regs.h
13806 +index c70220651e3a5..37f0d9a06b104 100644
13807 +--- a/drivers/gpu/drm/mxsfb/lcdif_regs.h
13808 ++++ b/drivers/gpu/drm/mxsfb/lcdif_regs.h
13809 +@@ -190,6 +190,10 @@
13810 + #define CTRLDESCL0_1_WIDTH(n) ((n) & 0xffff)
13811 + #define CTRLDESCL0_1_WIDTH_MASK GENMASK(15, 0)
13812 +
13813 ++#define CTRLDESCL0_3_P_SIZE(n) (((n) << 20) & CTRLDESCL0_3_P_SIZE_MASK)
13814 ++#define CTRLDESCL0_3_P_SIZE_MASK GENMASK(22, 20)
13815 ++#define CTRLDESCL0_3_T_SIZE(n) (((n) << 16) & CTRLDESCL0_3_T_SIZE_MASK)
13816 ++#define CTRLDESCL0_3_T_SIZE_MASK GENMASK(17, 16)
13817 + #define CTRLDESCL0_3_PITCH(n) ((n) & 0xffff)
13818 + #define CTRLDESCL0_3_PITCH_MASK GENMASK(15, 0)
13819 +
13820 +@@ -248,6 +252,7 @@
13821 +
13822 + #define PANIC0_THRES_LOW_MASK GENMASK(24, 16)
13823 + #define PANIC0_THRES_HIGH_MASK GENMASK(8, 0)
13824 ++#define PANIC0_THRES_MAX 511
13825 +
13826 + #define LCDIF_MIN_XRES 120
13827 + #define LCDIF_MIN_YRES 120
13828 +diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
13829 +index c481daa4bbceb..225b9884f61a9 100644
13830 +--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
13831 ++++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
13832 +@@ -244,7 +244,7 @@ static void st7701_init_sequence(struct st7701 *st7701)
13833 + DSI_CMD2_BK0_INVSEL_ONES_MASK |
13834 + FIELD_PREP(DSI_CMD2_BK0_INVSEL_NLINV_MASK, desc->nlinv),
13835 + FIELD_PREP(DSI_CMD2_BK0_INVSEL_RTNI_MASK,
13836 +- DIV_ROUND_UP(mode->htotal, 16)));
13837 ++ (clamp((u32)mode->htotal, 512U, 1008U) - 512) / 16));
13838 +
13839 + /* Command2, BK1 */
13840 + ST7701_DSI(st7701, DSI_CMD2BKX_SEL,
13841 +@@ -762,7 +762,15 @@ static int st7701_dsi_probe(struct mipi_dsi_device *dsi)
13842 + st7701->dsi = dsi;
13843 + st7701->desc = desc;
13844 +
13845 +- return mipi_dsi_attach(dsi);
13846 ++ ret = mipi_dsi_attach(dsi);
13847 ++ if (ret)
13848 ++ goto err_attach;
13849 ++
13850 ++ return 0;
13851 ++
13852 ++err_attach:
13853 ++ drm_panel_remove(&st7701->panel);
13854 ++ return ret;
13855 + }
13856 +
13857 + static void st7701_dsi_remove(struct mipi_dsi_device *dsi)
13858 +diff --git a/drivers/gpu/drm/radeon/radeon_bios.c b/drivers/gpu/drm/radeon/radeon_bios.c
13859 +index 33121655d50bb..63bdc9f6fc243 100644
13860 +--- a/drivers/gpu/drm/radeon/radeon_bios.c
13861 ++++ b/drivers/gpu/drm/radeon/radeon_bios.c
13862 +@@ -227,6 +227,7 @@ static bool radeon_atrm_get_bios(struct radeon_device *rdev)
13863 +
13864 + if (!found)
13865 + return false;
13866 ++ pci_dev_put(pdev);
13867 +
13868 + rdev->bios = kmalloc(size, GFP_KERNEL);
13869 + if (!rdev->bios) {
13870 +@@ -612,13 +613,14 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
13871 + acpi_size tbl_size;
13872 + UEFI_ACPI_VFCT *vfct;
13873 + unsigned offset;
13874 ++ bool r = false;
13875 +
13876 + if (!ACPI_SUCCESS(acpi_get_table("VFCT", 1, &hdr)))
13877 + return false;
13878 + tbl_size = hdr->length;
13879 + if (tbl_size < sizeof(UEFI_ACPI_VFCT)) {
13880 + DRM_ERROR("ACPI VFCT table present but broken (too short #1)\n");
13881 +- return false;
13882 ++ goto out;
13883 + }
13884 +
13885 + vfct = (UEFI_ACPI_VFCT *)hdr;
13886 +@@ -631,13 +633,13 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
13887 + offset += sizeof(VFCT_IMAGE_HEADER);
13888 + if (offset > tbl_size) {
13889 + DRM_ERROR("ACPI VFCT image header truncated\n");
13890 +- return false;
13891 ++ goto out;
13892 + }
13893 +
13894 + offset += vhdr->ImageLength;
13895 + if (offset > tbl_size) {
13896 + DRM_ERROR("ACPI VFCT image truncated\n");
13897 +- return false;
13898 ++ goto out;
13899 + }
13900 +
13901 + if (vhdr->ImageLength &&
13902 +@@ -649,15 +651,18 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
13903 + rdev->bios = kmemdup(&vbios->VbiosContent,
13904 + vhdr->ImageLength,
13905 + GFP_KERNEL);
13906 ++ if (rdev->bios)
13907 ++ r = true;
13908 +
13909 +- if (!rdev->bios)
13910 +- return false;
13911 +- return true;
13912 ++ goto out;
13913 + }
13914 + }
13915 +
13916 + DRM_ERROR("ACPI VFCT table present but broken (too short #2)\n");
13917 +- return false;
13918 ++
13919 ++out:
13920 ++ acpi_put_table(hdr);
13921 ++ return r;
13922 + }
13923 + #else
13924 + static inline bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
13925 +diff --git a/drivers/gpu/drm/rcar-du/Kconfig b/drivers/gpu/drm/rcar-du/Kconfig
13926 +index fd2c2eaee26ba..a5518e90d6896 100644
13927 +--- a/drivers/gpu/drm/rcar-du/Kconfig
13928 ++++ b/drivers/gpu/drm/rcar-du/Kconfig
13929 +@@ -41,8 +41,6 @@ config DRM_RCAR_LVDS
13930 + depends on DRM_RCAR_USE_LVDS
13931 + select DRM_KMS_HELPER
13932 + select DRM_PANEL
13933 +- select OF_FLATTREE
13934 +- select OF_OVERLAY
13935 +
13936 + config DRM_RCAR_USE_MIPI_DSI
13937 + bool "R-Car DU MIPI DSI Encoder Support"
13938 +diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
13939 +index 518ee13b1d6f4..8526dda919317 100644
13940 +--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
13941 ++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
13942 +@@ -571,7 +571,7 @@ static void cdn_dp_encoder_mode_set(struct drm_encoder *encoder,
13943 + video->v_sync_polarity = !!(mode->flags & DRM_MODE_FLAG_NVSYNC);
13944 + video->h_sync_polarity = !!(mode->flags & DRM_MODE_FLAG_NHSYNC);
13945 +
13946 +- memcpy(&dp->mode, adjusted, sizeof(*mode));
13947 ++ drm_mode_copy(&dp->mode, adjusted);
13948 + }
13949 +
13950 + static bool cdn_dp_check_link_status(struct cdn_dp_device *dp)
13951 +diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
13952 +index f4df9820b295d..912eb4e94c595 100644
13953 +--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
13954 ++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
13955 +@@ -1221,7 +1221,7 @@ static int dw_mipi_dsi_dphy_power_on(struct phy *phy)
13956 + return i;
13957 + }
13958 +
13959 +- ret = pm_runtime_get_sync(dsi->dev);
13960 ++ ret = pm_runtime_resume_and_get(dsi->dev);
13961 + if (ret < 0) {
13962 + DRM_DEV_ERROR(dsi->dev, "failed to enable device: %d\n", ret);
13963 + return ret;
13964 +diff --git a/drivers/gpu/drm/rockchip/inno_hdmi.c b/drivers/gpu/drm/rockchip/inno_hdmi.c
13965 +index 87b2243ea23e3..f51774866f412 100644
13966 +--- a/drivers/gpu/drm/rockchip/inno_hdmi.c
13967 ++++ b/drivers/gpu/drm/rockchip/inno_hdmi.c
13968 +@@ -499,7 +499,7 @@ static void inno_hdmi_encoder_mode_set(struct drm_encoder *encoder,
13969 + inno_hdmi_setup(hdmi, adj_mode);
13970 +
13971 + /* Store the display mode for plugin/DPMS poweron events */
13972 +- memcpy(&hdmi->previous_mode, adj_mode, sizeof(hdmi->previous_mode));
13973 ++ drm_mode_copy(&hdmi->previous_mode, adj_mode);
13974 + }
13975 +
13976 + static void inno_hdmi_encoder_enable(struct drm_encoder *encoder)
13977 +diff --git a/drivers/gpu/drm/rockchip/rk3066_hdmi.c b/drivers/gpu/drm/rockchip/rk3066_hdmi.c
13978 +index cf2cf51091a3e..90145ad969841 100644
13979 +--- a/drivers/gpu/drm/rockchip/rk3066_hdmi.c
13980 ++++ b/drivers/gpu/drm/rockchip/rk3066_hdmi.c
13981 +@@ -395,7 +395,7 @@ rk3066_hdmi_encoder_mode_set(struct drm_encoder *encoder,
13982 + struct rk3066_hdmi *hdmi = encoder_to_rk3066_hdmi(encoder);
13983 +
13984 + /* Store the display mode for plugin/DPMS poweron events. */
13985 +- memcpy(&hdmi->previous_mode, adj_mode, sizeof(hdmi->previous_mode));
13986 ++ drm_mode_copy(&hdmi->previous_mode, adj_mode);
13987 + }
13988 +
13989 + static void rk3066_hdmi_encoder_enable(struct drm_encoder *encoder)
13990 +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
13991 +index c356de5dd2206..fa1f4ee6d1950 100644
13992 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
13993 ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
13994 +@@ -602,7 +602,7 @@ static int vop_enable(struct drm_crtc *crtc, struct drm_crtc_state *old_state)
13995 + struct vop *vop = to_vop(crtc);
13996 + int ret, i;
13997 +
13998 +- ret = pm_runtime_get_sync(vop->dev);
13999 ++ ret = pm_runtime_resume_and_get(vop->dev);
14000 + if (ret < 0) {
14001 + DRM_DEV_ERROR(vop->dev, "failed to get pm runtime: %d\n", ret);
14002 + return ret;
14003 +@@ -1983,7 +1983,7 @@ static int vop_initial(struct vop *vop)
14004 + return PTR_ERR(vop->dclk);
14005 + }
14006 +
14007 +- ret = pm_runtime_get_sync(vop->dev);
14008 ++ ret = pm_runtime_resume_and_get(vop->dev);
14009 + if (ret < 0) {
14010 + DRM_DEV_ERROR(vop->dev, "failed to get pm runtime: %d\n", ret);
14011 + return ret;
14012 +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
14013 +index 105a548d0abeb..8cecf81a5ae03 100644
14014 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
14015 ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
14016 +@@ -822,7 +822,7 @@ static void vop2_enable(struct vop2 *vop2)
14017 + {
14018 + int ret;
14019 +
14020 +- ret = pm_runtime_get_sync(vop2->dev);
14021 ++ ret = pm_runtime_resume_and_get(vop2->dev);
14022 + if (ret < 0) {
14023 + drm_err(vop2->drm, "failed to get pm runtime: %d\n", ret);
14024 + return;
14025 +diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
14026 +index 5a284332ec49e..68f6ebb33460b 100644
14027 +--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
14028 ++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
14029 +@@ -152,7 +152,7 @@ static int rk3288_lvds_poweron(struct rockchip_lvds *lvds)
14030 + DRM_DEV_ERROR(lvds->dev, "failed to enable lvds pclk %d\n", ret);
14031 + return ret;
14032 + }
14033 +- ret = pm_runtime_get_sync(lvds->dev);
14034 ++ ret = pm_runtime_resume_and_get(lvds->dev);
14035 + if (ret < 0) {
14036 + DRM_DEV_ERROR(lvds->dev, "failed to get pm runtime: %d\n", ret);
14037 + clk_disable(lvds->pclk);
14038 +@@ -336,16 +336,20 @@ static int px30_lvds_poweron(struct rockchip_lvds *lvds)
14039 + {
14040 + int ret;
14041 +
14042 +- ret = pm_runtime_get_sync(lvds->dev);
14043 ++ ret = pm_runtime_resume_and_get(lvds->dev);
14044 + if (ret < 0) {
14045 + DRM_DEV_ERROR(lvds->dev, "failed to get pm runtime: %d\n", ret);
14046 + return ret;
14047 + }
14048 +
14049 + /* Enable LVDS mode */
14050 +- return regmap_update_bits(lvds->grf, PX30_LVDS_GRF_PD_VO_CON1,
14051 ++ ret = regmap_update_bits(lvds->grf, PX30_LVDS_GRF_PD_VO_CON1,
14052 + PX30_LVDS_MODE_EN(1) | PX30_LVDS_P2S_EN(1),
14053 + PX30_LVDS_MODE_EN(1) | PX30_LVDS_P2S_EN(1));
14054 ++ if (ret)
14055 ++ pm_runtime_put(lvds->dev);
14056 ++
14057 ++ return ret;
14058 + }
14059 +
14060 + static void px30_lvds_poweroff(struct rockchip_lvds *lvds)
14061 +diff --git a/drivers/gpu/drm/sti/sti_dvo.c b/drivers/gpu/drm/sti/sti_dvo.c
14062 +index b6ee8a82e656c..577c477b5f467 100644
14063 +--- a/drivers/gpu/drm/sti/sti_dvo.c
14064 ++++ b/drivers/gpu/drm/sti/sti_dvo.c
14065 +@@ -288,7 +288,7 @@ static void sti_dvo_set_mode(struct drm_bridge *bridge,
14066 +
14067 + DRM_DEBUG_DRIVER("\n");
14068 +
14069 +- memcpy(&dvo->mode, mode, sizeof(struct drm_display_mode));
14070 ++ drm_mode_copy(&dvo->mode, mode);
14071 +
14072 + /* According to the path used (main or aux), the dvo clocks should
14073 + * have a different parent clock. */
14074 +@@ -346,8 +346,9 @@ static int sti_dvo_connector_get_modes(struct drm_connector *connector)
14075 +
14076 + #define CLK_TOLERANCE_HZ 50
14077 +
14078 +-static int sti_dvo_connector_mode_valid(struct drm_connector *connector,
14079 +- struct drm_display_mode *mode)
14080 ++static enum drm_mode_status
14081 ++sti_dvo_connector_mode_valid(struct drm_connector *connector,
14082 ++ struct drm_display_mode *mode)
14083 + {
14084 + int target = mode->clock * 1000;
14085 + int target_min = target - CLK_TOLERANCE_HZ;
14086 +diff --git a/drivers/gpu/drm/sti/sti_hda.c b/drivers/gpu/drm/sti/sti_hda.c
14087 +index 03cc401ed5934..15097ac679314 100644
14088 +--- a/drivers/gpu/drm/sti/sti_hda.c
14089 ++++ b/drivers/gpu/drm/sti/sti_hda.c
14090 +@@ -524,7 +524,7 @@ static void sti_hda_set_mode(struct drm_bridge *bridge,
14091 +
14092 + DRM_DEBUG_DRIVER("\n");
14093 +
14094 +- memcpy(&hda->mode, mode, sizeof(struct drm_display_mode));
14095 ++ drm_mode_copy(&hda->mode, mode);
14096 +
14097 + if (!hda_get_mode_idx(hda->mode, &mode_idx)) {
14098 + DRM_ERROR("Undefined mode\n");
14099 +@@ -601,8 +601,9 @@ static int sti_hda_connector_get_modes(struct drm_connector *connector)
14100 +
14101 + #define CLK_TOLERANCE_HZ 50
14102 +
14103 +-static int sti_hda_connector_mode_valid(struct drm_connector *connector,
14104 +- struct drm_display_mode *mode)
14105 ++static enum drm_mode_status
14106 ++sti_hda_connector_mode_valid(struct drm_connector *connector,
14107 ++ struct drm_display_mode *mode)
14108 + {
14109 + int target = mode->clock * 1000;
14110 + int target_min = target - CLK_TOLERANCE_HZ;
14111 +diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
14112 +index cb82622877d20..8539fe1fedc4c 100644
14113 +--- a/drivers/gpu/drm/sti/sti_hdmi.c
14114 ++++ b/drivers/gpu/drm/sti/sti_hdmi.c
14115 +@@ -941,7 +941,7 @@ static void sti_hdmi_set_mode(struct drm_bridge *bridge,
14116 + DRM_DEBUG_DRIVER("\n");
14117 +
14118 + /* Copy the drm display mode in the connector local structure */
14119 +- memcpy(&hdmi->mode, mode, sizeof(struct drm_display_mode));
14120 ++ drm_mode_copy(&hdmi->mode, mode);
14121 +
14122 + /* Update clock framerate according to the selected mode */
14123 + ret = clk_set_rate(hdmi->clk_pix, mode->clock * 1000);
14124 +@@ -1004,8 +1004,9 @@ fail:
14125 +
14126 + #define CLK_TOLERANCE_HZ 50
14127 +
14128 +-static int sti_hdmi_connector_mode_valid(struct drm_connector *connector,
14129 +- struct drm_display_mode *mode)
14130 ++static enum drm_mode_status
14131 ++sti_hdmi_connector_mode_valid(struct drm_connector *connector,
14132 ++ struct drm_display_mode *mode)
14133 + {
14134 + int target = mode->clock * 1000;
14135 + int target_min = target - CLK_TOLERANCE_HZ;
14136 +diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
14137 +index bd0f60704467f..a67453cee8832 100644
14138 +--- a/drivers/gpu/drm/tegra/dc.c
14139 ++++ b/drivers/gpu/drm/tegra/dc.c
14140 +@@ -3205,8 +3205,10 @@ static int tegra_dc_probe(struct platform_device *pdev)
14141 + usleep_range(2000, 4000);
14142 +
14143 + err = reset_control_assert(dc->rst);
14144 +- if (err < 0)
14145 ++ if (err < 0) {
14146 ++ clk_disable_unprepare(dc->clk);
14147 + return err;
14148 ++ }
14149 +
14150 + usleep_range(2000, 4000);
14151 +
14152 +diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
14153 +index 8275bba636119..ab125f79408f2 100644
14154 +--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
14155 ++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
14156 +@@ -237,6 +237,10 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
14157 + in_data->sensor_virt_addr[i] = dma_alloc_coherent(dev, sizeof(int) * 8,
14158 + &cl_data->sensor_dma_addr[i],
14159 + GFP_KERNEL);
14160 ++ if (!in_data->sensor_virt_addr[i]) {
14161 ++ rc = -ENOMEM;
14162 ++ goto cleanup;
14163 ++ }
14164 + cl_data->sensor_sts[i] = SENSOR_DISABLED;
14165 + cl_data->sensor_requested_cnt[i] = 0;
14166 + cl_data->cur_hid_dev = i;
14167 +diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
14168 +index 6970797cdc56d..c671ce94671ca 100644
14169 +--- a/drivers/hid/hid-apple.c
14170 ++++ b/drivers/hid/hid-apple.c
14171 +@@ -314,6 +314,7 @@ static const struct apple_key_translation swapped_option_cmd_keys[] = {
14172 +
14173 + static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
14174 + { KEY_FN, KEY_LEFTCTRL },
14175 ++ { KEY_LEFTCTRL, KEY_FN },
14176 + { }
14177 + };
14178 +
14179 +@@ -375,24 +376,40 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
14180 + struct apple_sc *asc = hid_get_drvdata(hid);
14181 + const struct apple_key_translation *trans, *table;
14182 + bool do_translate;
14183 +- u16 code = 0;
14184 ++ u16 code = usage->code;
14185 + unsigned int real_fnmode;
14186 +
14187 +- u16 fn_keycode = (swap_fn_leftctrl) ? (KEY_LEFTCTRL) : (KEY_FN);
14188 +-
14189 +- if (usage->code == fn_keycode) {
14190 +- asc->fn_on = !!value;
14191 +- input_event_with_scancode(input, usage->type, KEY_FN,
14192 +- usage->hid, value);
14193 +- return 1;
14194 +- }
14195 +-
14196 + if (fnmode == 3) {
14197 + real_fnmode = (asc->quirks & APPLE_IS_NON_APPLE) ? 2 : 1;
14198 + } else {
14199 + real_fnmode = fnmode;
14200 + }
14201 +
14202 ++ if (swap_fn_leftctrl) {
14203 ++ trans = apple_find_translation(swapped_fn_leftctrl_keys, code);
14204 ++
14205 ++ if (trans)
14206 ++ code = trans->to;
14207 ++ }
14208 ++
14209 ++ if (iso_layout > 0 || (iso_layout < 0 && (asc->quirks & APPLE_ISO_TILDE_QUIRK) &&
14210 ++ hid->country == HID_COUNTRY_INTERNATIONAL_ISO)) {
14211 ++ trans = apple_find_translation(apple_iso_keyboard, code);
14212 ++
14213 ++ if (trans)
14214 ++ code = trans->to;
14215 ++ }
14216 ++
14217 ++ if (swap_opt_cmd) {
14218 ++ trans = apple_find_translation(swapped_option_cmd_keys, code);
14219 ++
14220 ++ if (trans)
14221 ++ code = trans->to;
14222 ++ }
14223 ++
14224 ++ if (code == KEY_FN)
14225 ++ asc->fn_on = !!value;
14226 ++
14227 + if (real_fnmode) {
14228 + if (hid->product == USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI ||
14229 + hid->product == USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO ||
14230 +@@ -430,15 +447,18 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
14231 + else
14232 + table = apple_fn_keys;
14233 +
14234 +- trans = apple_find_translation (table, usage->code);
14235 ++ trans = apple_find_translation(table, code);
14236 +
14237 + if (trans) {
14238 +- if (test_bit(trans->from, input->key))
14239 ++ bool from_is_set = test_bit(trans->from, input->key);
14240 ++ bool to_is_set = test_bit(trans->to, input->key);
14241 ++
14242 ++ if (from_is_set)
14243 + code = trans->from;
14244 +- else if (test_bit(trans->to, input->key))
14245 ++ else if (to_is_set)
14246 + code = trans->to;
14247 +
14248 +- if (!code) {
14249 ++ if (!(from_is_set || to_is_set)) {
14250 + if (trans->flags & APPLE_FLAG_FKEY) {
14251 + switch (real_fnmode) {
14252 + case 1:
14253 +@@ -455,62 +475,31 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
14254 + do_translate = asc->fn_on;
14255 + }
14256 +
14257 +- code = do_translate ? trans->to : trans->from;
14258 ++ if (do_translate)
14259 ++ code = trans->to;
14260 + }
14261 +-
14262 +- input_event_with_scancode(input, usage->type, code,
14263 +- usage->hid, value);
14264 +- return 1;
14265 + }
14266 +
14267 + if (asc->quirks & APPLE_NUMLOCK_EMULATION &&
14268 +- (test_bit(usage->code, asc->pressed_numlock) ||
14269 ++ (test_bit(code, asc->pressed_numlock) ||
14270 + test_bit(LED_NUML, input->led))) {
14271 +- trans = apple_find_translation(powerbook_numlock_keys,
14272 +- usage->code);
14273 ++ trans = apple_find_translation(powerbook_numlock_keys, code);
14274 +
14275 + if (trans) {
14276 + if (value)
14277 +- set_bit(usage->code,
14278 +- asc->pressed_numlock);
14279 ++ set_bit(code, asc->pressed_numlock);
14280 + else
14281 +- clear_bit(usage->code,
14282 +- asc->pressed_numlock);
14283 ++ clear_bit(code, asc->pressed_numlock);
14284 +
14285 +- input_event_with_scancode(input, usage->type,
14286 +- trans->to, usage->hid, value);
14287 ++ code = trans->to;
14288 + }
14289 +-
14290 +- return 1;
14291 + }
14292 + }
14293 +
14294 +- if (iso_layout > 0 || (iso_layout < 0 && (asc->quirks & APPLE_ISO_TILDE_QUIRK) &&
14295 +- hid->country == HID_COUNTRY_INTERNATIONAL_ISO)) {
14296 +- trans = apple_find_translation(apple_iso_keyboard, usage->code);
14297 +- if (trans) {
14298 +- input_event_with_scancode(input, usage->type,
14299 +- trans->to, usage->hid, value);
14300 +- return 1;
14301 +- }
14302 +- }
14303 +-
14304 +- if (swap_opt_cmd) {
14305 +- trans = apple_find_translation(swapped_option_cmd_keys, usage->code);
14306 +- if (trans) {
14307 +- input_event_with_scancode(input, usage->type,
14308 +- trans->to, usage->hid, value);
14309 +- return 1;
14310 +- }
14311 +- }
14312 ++ if (usage->code != code) {
14313 ++ input_event_with_scancode(input, usage->type, code, usage->hid, value);
14314 +
14315 +- if (swap_fn_leftctrl) {
14316 +- trans = apple_find_translation(swapped_fn_leftctrl_keys, usage->code);
14317 +- if (trans) {
14318 +- input_event_with_scancode(input, usage->type,
14319 +- trans->to, usage->hid, value);
14320 +- return 1;
14321 +- }
14322 ++ return 1;
14323 + }
14324 +
14325 + return 0;
14326 +@@ -640,9 +629,6 @@ static void apple_setup_input(struct input_dev *input)
14327 + apple_setup_key_translation(input, apple2021_fn_keys);
14328 + apple_setup_key_translation(input, macbookpro_no_esc_fn_keys);
14329 + apple_setup_key_translation(input, macbookpro_dedicated_esc_fn_keys);
14330 +-
14331 +- if (swap_fn_leftctrl)
14332 +- apple_setup_key_translation(input, swapped_fn_leftctrl_keys);
14333 + }
14334 +
14335 + static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
14336 +@@ -1011,21 +997,21 @@ static const struct hid_device_id apple_devices[] = {
14337 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING9_JIS),
14338 + .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
14339 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J140K),
14340 +- .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL },
14341 ++ .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL | APPLE_ISO_TILDE_QUIRK },
14342 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J132),
14343 +- .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL },
14344 ++ .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL | APPLE_ISO_TILDE_QUIRK },
14345 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J680),
14346 +- .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL },
14347 ++ .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL | APPLE_ISO_TILDE_QUIRK },
14348 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J213),
14349 +- .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL },
14350 ++ .driver_data = APPLE_HAS_FN | APPLE_BACKLIGHT_CTL | APPLE_ISO_TILDE_QUIRK },
14351 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J214K),
14352 +- .driver_data = APPLE_HAS_FN },
14353 ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
14354 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J223),
14355 +- .driver_data = APPLE_HAS_FN },
14356 ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
14357 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J230K),
14358 +- .driver_data = APPLE_HAS_FN },
14359 ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
14360 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRINGT2_J152F),
14361 +- .driver_data = APPLE_HAS_FN },
14362 ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
14363 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI),
14364 + .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
14365 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ISO),
14366 +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
14367 +index 859aeb07542e3..d728a94c642eb 100644
14368 +--- a/drivers/hid/hid-input.c
14369 ++++ b/drivers/hid/hid-input.c
14370 +@@ -340,6 +340,7 @@ static enum power_supply_property hidinput_battery_props[] = {
14371 + #define HID_BATTERY_QUIRK_PERCENT (1 << 0) /* always reports percent */
14372 + #define HID_BATTERY_QUIRK_FEATURE (1 << 1) /* ask for feature report */
14373 + #define HID_BATTERY_QUIRK_IGNORE (1 << 2) /* completely ignore the battery */
14374 ++#define HID_BATTERY_QUIRK_AVOID_QUERY (1 << 3) /* do not query the battery */
14375 +
14376 + static const struct hid_device_id hid_battery_quirks[] = {
14377 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
14378 +@@ -373,6 +374,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
14379 + HID_BATTERY_QUIRK_IGNORE },
14380 + { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN),
14381 + HID_BATTERY_QUIRK_IGNORE },
14382 ++ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L),
14383 ++ HID_BATTERY_QUIRK_AVOID_QUERY },
14384 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15),
14385 + HID_BATTERY_QUIRK_IGNORE },
14386 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100),
14387 +@@ -554,6 +557,9 @@ static int hidinput_setup_battery(struct hid_device *dev, unsigned report_type,
14388 + dev->battery_avoid_query = report_type == HID_INPUT_REPORT &&
14389 + field->physical == HID_DG_STYLUS;
14390 +
14391 ++ if (quirks & HID_BATTERY_QUIRK_AVOID_QUERY)
14392 ++ dev->battery_avoid_query = true;
14393 ++
14394 + dev->battery = power_supply_register(&dev->dev, psy_desc, &psy_cfg);
14395 + if (IS_ERR(dev->battery)) {
14396 + error = PTR_ERR(dev->battery);
14397 +diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
14398 +index 8a2aac18dcc51..656757c79f6b8 100644
14399 +--- a/drivers/hid/hid-logitech-hidpp.c
14400 ++++ b/drivers/hid/hid-logitech-hidpp.c
14401 +@@ -2548,12 +2548,17 @@ static int hidpp_ff_init(struct hidpp_device *hidpp,
14402 + struct hid_device *hid = hidpp->hid_dev;
14403 + struct hid_input *hidinput;
14404 + struct input_dev *dev;
14405 +- const struct usb_device_descriptor *udesc = &(hid_to_usb_dev(hid)->descriptor);
14406 +- const u16 bcdDevice = le16_to_cpu(udesc->bcdDevice);
14407 ++ struct usb_device_descriptor *udesc;
14408 ++ u16 bcdDevice;
14409 + struct ff_device *ff;
14410 + int error, j, num_slots = data->num_effects;
14411 + u8 version;
14412 +
14413 ++ if (!hid_is_usb(hid)) {
14414 ++ hid_err(hid, "device is not USB\n");
14415 ++ return -ENODEV;
14416 ++ }
14417 ++
14418 + if (list_empty(&hid->inputs)) {
14419 + hid_err(hid, "no inputs found\n");
14420 + return -ENODEV;
14421 +@@ -2567,6 +2572,8 @@ static int hidpp_ff_init(struct hidpp_device *hidpp,
14422 + }
14423 +
14424 + /* Get firmware release */
14425 ++ udesc = &(hid_to_usb_dev(hid)->descriptor);
14426 ++ bcdDevice = le16_to_cpu(udesc->bcdDevice);
14427 + version = bcdDevice & 255;
14428 +
14429 + /* Set supported force feedback capabilities */
14430 +diff --git a/drivers/hid/hid-mcp2221.c b/drivers/hid/hid-mcp2221.c
14431 +index de52e9f7bb8cb..560eeec4035aa 100644
14432 +--- a/drivers/hid/hid-mcp2221.c
14433 ++++ b/drivers/hid/hid-mcp2221.c
14434 +@@ -840,12 +840,19 @@ static int mcp2221_probe(struct hid_device *hdev,
14435 + return ret;
14436 + }
14437 +
14438 +- ret = hid_hw_start(hdev, HID_CONNECT_HIDRAW);
14439 ++ /*
14440 ++ * This driver uses the .raw_event callback and therefore does not need any
14441 ++ * HID_CONNECT_xxx flags.
14442 ++ */
14443 ++ ret = hid_hw_start(hdev, 0);
14444 + if (ret) {
14445 + hid_err(hdev, "can't start hardware\n");
14446 + return ret;
14447 + }
14448 +
14449 ++ hid_info(hdev, "USB HID v%x.%02x Device [%s] on %s\n", hdev->version >> 8,
14450 ++ hdev->version & 0xff, hdev->name, hdev->phys);
14451 ++
14452 + ret = hid_hw_open(hdev);
14453 + if (ret) {
14454 + hid_err(hdev, "can't open device\n");
14455 +@@ -870,8 +877,7 @@ static int mcp2221_probe(struct hid_device *hdev,
14456 + mcp->adapter.retries = 1;
14457 + mcp->adapter.dev.parent = &hdev->dev;
14458 + snprintf(mcp->adapter.name, sizeof(mcp->adapter.name),
14459 +- "MCP2221 usb-i2c bridge on hidraw%d",
14460 +- ((struct hidraw *)hdev->hidraw)->minor);
14461 ++ "MCP2221 usb-i2c bridge");
14462 +
14463 + ret = i2c_add_adapter(&mcp->adapter);
14464 + if (ret) {
14465 +diff --git a/drivers/hid/hid-rmi.c b/drivers/hid/hid-rmi.c
14466 +index bb1f423f4ace3..84e7ba5314d3f 100644
14467 +--- a/drivers/hid/hid-rmi.c
14468 ++++ b/drivers/hid/hid-rmi.c
14469 +@@ -326,6 +326,8 @@ static int rmi_input_event(struct hid_device *hdev, u8 *data, int size)
14470 + if (!(test_bit(RMI_STARTED, &hdata->flags)))
14471 + return 0;
14472 +
14473 ++ pm_wakeup_event(hdev->dev.parent, 0);
14474 ++
14475 + local_irq_save(flags);
14476 +
14477 + rmi_set_attn_data(rmi_dev, data[1], &data[2], size - 2);
14478 +diff --git a/drivers/hid/hid-sensor-custom.c b/drivers/hid/hid-sensor-custom.c
14479 +index 32c2306e240d6..602465ad27458 100644
14480 +--- a/drivers/hid/hid-sensor-custom.c
14481 ++++ b/drivers/hid/hid-sensor-custom.c
14482 +@@ -62,7 +62,7 @@ struct hid_sensor_sample {
14483 + u32 raw_len;
14484 + } __packed;
14485 +
14486 +-static struct attribute hid_custom_attrs[] = {
14487 ++static struct attribute hid_custom_attrs[HID_CUSTOM_TOTAL_ATTRS] = {
14488 + {.name = "name", .mode = S_IRUGO},
14489 + {.name = "units", .mode = S_IRUGO},
14490 + {.name = "unit-expo", .mode = S_IRUGO},
14491 +diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
14492 +index 34fa991e6267e..cd1233d7e2535 100644
14493 +--- a/drivers/hid/hid-uclogic-params.c
14494 ++++ b/drivers/hid/hid-uclogic-params.c
14495 +@@ -18,6 +18,7 @@
14496 + #include "usbhid/usbhid.h"
14497 + #include "hid-ids.h"
14498 + #include <linux/ctype.h>
14499 ++#include <linux/string.h>
14500 + #include <asm/unaligned.h>
14501 +
14502 + /**
14503 +@@ -1211,6 +1212,69 @@ static int uclogic_params_ugee_v2_init_frame_mouse(struct uclogic_params *p)
14504 + return rc;
14505 + }
14506 +
14507 ++/**
14508 ++ * uclogic_params_ugee_v2_has_battery() - check whether a UGEE v2 device has
14509 ++ * battery or not.
14510 ++ * @hdev: The HID device of the tablet interface.
14511 ++ *
14512 ++ * Returns:
14513 ++ * True if the device has battery, false otherwise.
14514 ++ */
14515 ++static bool uclogic_params_ugee_v2_has_battery(struct hid_device *hdev)
14516 ++{
14517 ++ /* The XP-PEN Deco LW vendor, product and version are identical to the
14518 ++ * Deco L. The only difference reported by their firmware is the product
14519 ++ * name. Add a quirk to support battery reporting on the wireless
14520 ++ * version.
14521 ++ */
14522 ++ if (hdev->vendor == USB_VENDOR_ID_UGEE &&
14523 ++ hdev->product == USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L) {
14524 ++ struct usb_device *udev = hid_to_usb_dev(hdev);
14525 ++
14526 ++ if (strstarts(udev->product, "Deco LW"))
14527 ++ return true;
14528 ++ }
14529 ++
14530 ++ return false;
14531 ++}
14532 ++
14533 ++/**
14534 ++ * uclogic_params_ugee_v2_init_battery() - initialize UGEE v2 battery reporting.
14535 ++ * @hdev: The HID device of the tablet interface, cannot be NULL.
14536 ++ * @p: Parameters to fill in, cannot be NULL.
14537 ++ *
14538 ++ * Returns:
14539 ++ * Zero, if successful. A negative errno code on error.
14540 ++ */
14541 ++static int uclogic_params_ugee_v2_init_battery(struct hid_device *hdev,
14542 ++ struct uclogic_params *p)
14543 ++{
14544 ++ int rc = 0;
14545 ++
14546 ++ if (!hdev || !p)
14547 ++ return -EINVAL;
14548 ++
14549 ++ /* Some tablets contain invalid characters in hdev->uniq, throwing a
14550 ++ * "hwmon: '<name>' is not a valid name attribute, please fix" error.
14551 ++ * Use the device vendor and product IDs instead.
14552 ++ */
14553 ++ snprintf(hdev->uniq, sizeof(hdev->uniq), "%x-%x", hdev->vendor,
14554 ++ hdev->product);
14555 ++
14556 ++ rc = uclogic_params_frame_init_with_desc(&p->frame_list[1],
14557 ++ uclogic_rdesc_ugee_v2_battery_template_arr,
14558 ++ uclogic_rdesc_ugee_v2_battery_template_size,
14559 ++ UCLOGIC_RDESC_UGEE_V2_BATTERY_ID);
14560 ++ if (rc)
14561 ++ return rc;
14562 ++
14563 ++ p->frame_list[1].suffix = "Battery";
14564 ++ p->pen.subreport_list[1].value = 0xf2;
14565 ++ p->pen.subreport_list[1].id = UCLOGIC_RDESC_UGEE_V2_BATTERY_ID;
14566 ++
14567 ++ return rc;
14568 ++}
14569 ++
14570 + /**
14571 + * uclogic_params_ugee_v2_init() - initialize a UGEE graphics tablets by
14572 + * discovering their parameters.
14573 +@@ -1334,6 +1398,15 @@ static int uclogic_params_ugee_v2_init(struct uclogic_params *params,
14574 + if (rc)
14575 + goto cleanup;
14576 +
14577 ++ /* Initialize the battery interface*/
14578 ++ if (uclogic_params_ugee_v2_has_battery(hdev)) {
14579 ++ rc = uclogic_params_ugee_v2_init_battery(hdev, &p);
14580 ++ if (rc) {
14581 ++ hid_err(hdev, "error initializing battery: %d\n", rc);
14582 ++ goto cleanup;
14583 ++ }
14584 ++ }
14585 ++
14586 + output:
14587 + /* Output parameters */
14588 + memcpy(params, &p, sizeof(*params));
14589 +diff --git a/drivers/hid/hid-uclogic-rdesc.c b/drivers/hid/hid-uclogic-rdesc.c
14590 +index 6b73eb0df6bd7..fb40775f5f5b3 100644
14591 +--- a/drivers/hid/hid-uclogic-rdesc.c
14592 ++++ b/drivers/hid/hid-uclogic-rdesc.c
14593 +@@ -1035,6 +1035,40 @@ const __u8 uclogic_rdesc_ugee_v2_frame_mouse_template_arr[] = {
14594 + const size_t uclogic_rdesc_ugee_v2_frame_mouse_template_size =
14595 + sizeof(uclogic_rdesc_ugee_v2_frame_mouse_template_arr);
14596 +
14597 ++/* Fixed report descriptor template for UGEE v2 battery reports */
14598 ++const __u8 uclogic_rdesc_ugee_v2_battery_template_arr[] = {
14599 ++ 0x05, 0x01, /* Usage Page (Desktop), */
14600 ++ 0x09, 0x07, /* Usage (Keypad), */
14601 ++ 0xA1, 0x01, /* Collection (Application), */
14602 ++ 0x85, UCLOGIC_RDESC_UGEE_V2_BATTERY_ID,
14603 ++ /* Report ID, */
14604 ++ 0x75, 0x08, /* Report Size (8), */
14605 ++ 0x95, 0x02, /* Report Count (2), */
14606 ++ 0x81, 0x01, /* Input (Constant), */
14607 ++ 0x05, 0x84, /* Usage Page (Power Device), */
14608 ++ 0x05, 0x85, /* Usage Page (Battery System), */
14609 ++ 0x09, 0x65, /* Usage Page (AbsoluteStateOfCharge), */
14610 ++ 0x75, 0x08, /* Report Size (8), */
14611 ++ 0x95, 0x01, /* Report Count (1), */
14612 ++ 0x15, 0x00, /* Logical Minimum (0), */
14613 ++ 0x26, 0xff, 0x00, /* Logical Maximum (255), */
14614 ++ 0x81, 0x02, /* Input (Variable), */
14615 ++ 0x75, 0x01, /* Report Size (1), */
14616 ++ 0x95, 0x01, /* Report Count (1), */
14617 ++ 0x15, 0x00, /* Logical Minimum (0), */
14618 ++ 0x25, 0x01, /* Logical Maximum (1), */
14619 ++ 0x09, 0x44, /* Usage Page (Charging), */
14620 ++ 0x81, 0x02, /* Input (Variable), */
14621 ++ 0x95, 0x07, /* Report Count (7), */
14622 ++ 0x81, 0x01, /* Input (Constant), */
14623 ++ 0x75, 0x08, /* Report Size (8), */
14624 ++ 0x95, 0x07, /* Report Count (7), */
14625 ++ 0x81, 0x01, /* Input (Constant), */
14626 ++ 0xC0 /* End Collection */
14627 ++};
14628 ++const size_t uclogic_rdesc_ugee_v2_battery_template_size =
14629 ++ sizeof(uclogic_rdesc_ugee_v2_battery_template_arr);
14630 ++
14631 + /* Fixed report descriptor for Ugee EX07 frame */
14632 + const __u8 uclogic_rdesc_ugee_ex07_frame_arr[] = {
14633 + 0x05, 0x01, /* Usage Page (Desktop), */
14634 +diff --git a/drivers/hid/hid-uclogic-rdesc.h b/drivers/hid/hid-uclogic-rdesc.h
14635 +index 0502a06564964..a1f78c07293ff 100644
14636 +--- a/drivers/hid/hid-uclogic-rdesc.h
14637 ++++ b/drivers/hid/hid-uclogic-rdesc.h
14638 +@@ -161,6 +161,9 @@ extern const size_t uclogic_rdesc_v2_frame_dial_size;
14639 + /* Device ID byte offset in v2 frame dial reports */
14640 + #define UCLOGIC_RDESC_V2_FRAME_DIAL_DEV_ID_BYTE 0x4
14641 +
14642 ++/* Report ID for tweaked UGEE v2 battery reports */
14643 ++#define UCLOGIC_RDESC_UGEE_V2_BATTERY_ID 0xba
14644 ++
14645 + /* Fixed report descriptor template for UGEE v2 pen reports */
14646 + extern const __u8 uclogic_rdesc_ugee_v2_pen_template_arr[];
14647 + extern const size_t uclogic_rdesc_ugee_v2_pen_template_size;
14648 +@@ -177,6 +180,10 @@ extern const size_t uclogic_rdesc_ugee_v2_frame_dial_template_size;
14649 + extern const __u8 uclogic_rdesc_ugee_v2_frame_mouse_template_arr[];
14650 + extern const size_t uclogic_rdesc_ugee_v2_frame_mouse_template_size;
14651 +
14652 ++/* Fixed report descriptor template for UGEE v2 battery reports */
14653 ++extern const __u8 uclogic_rdesc_ugee_v2_battery_template_arr[];
14654 ++extern const size_t uclogic_rdesc_ugee_v2_battery_template_size;
14655 ++
14656 + /* Fixed report descriptor for Ugee EX07 frame */
14657 + extern const __u8 uclogic_rdesc_ugee_ex07_frame_arr[];
14658 + extern const size_t uclogic_rdesc_ugee_ex07_frame_size;
14659 +diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
14660 +index 0667b6022c3b7..a9428b7f34a46 100644
14661 +--- a/drivers/hid/i2c-hid/i2c-hid-core.c
14662 ++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
14663 +@@ -554,7 +554,8 @@ static void i2c_hid_get_input(struct i2c_hid *ihid)
14664 + i2c_hid_dbg(ihid, "input: %*ph\n", ret_size, ihid->inbuf);
14665 +
14666 + if (test_bit(I2C_HID_STARTED, &ihid->flags)) {
14667 +- pm_wakeup_event(&ihid->client->dev, 0);
14668 ++ if (ihid->hid->group != HID_GROUP_RMI)
14669 ++ pm_wakeup_event(&ihid->client->dev, 0);
14670 +
14671 + hid_input_report(ihid->hid, HID_INPUT_REPORT,
14672 + ihid->inbuf + sizeof(__le16),
14673 +diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
14674 +index 634263e4556b0..fb538a6c4add8 100644
14675 +--- a/drivers/hid/wacom_sys.c
14676 ++++ b/drivers/hid/wacom_sys.c
14677 +@@ -155,6 +155,9 @@ static int wacom_raw_event(struct hid_device *hdev, struct hid_report *report,
14678 + {
14679 + struct wacom *wacom = hid_get_drvdata(hdev);
14680 +
14681 ++ if (wacom->wacom_wac.features.type == BOOTLOADER)
14682 ++ return 0;
14683 ++
14684 + if (size > WACOM_PKGLEN_MAX)
14685 + return 1;
14686 +
14687 +@@ -2785,6 +2788,11 @@ static int wacom_probe(struct hid_device *hdev,
14688 + return error;
14689 + }
14690 +
14691 ++ if (features->type == BOOTLOADER) {
14692 ++ hid_warn(hdev, "Using device in hidraw-only mode");
14693 ++ return hid_hw_start(hdev, HID_CONNECT_HIDRAW);
14694 ++ }
14695 ++
14696 + error = wacom_parse_and_register(wacom, false);
14697 + if (error)
14698 + return error;
14699 +diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
14700 +index 0f3d57b426846..9312d611db8e5 100644
14701 +--- a/drivers/hid/wacom_wac.c
14702 ++++ b/drivers/hid/wacom_wac.c
14703 +@@ -4882,6 +4882,9 @@ static const struct wacom_features wacom_features_0x3dd =
14704 + static const struct wacom_features wacom_features_HID_ANY_ID =
14705 + { "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
14706 +
14707 ++static const struct wacom_features wacom_features_0x94 =
14708 ++ { "Wacom Bootloader", .type = BOOTLOADER };
14709 ++
14710 + #define USB_DEVICE_WACOM(prod) \
14711 + HID_DEVICE(BUS_USB, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
14712 + .driver_data = (kernel_ulong_t)&wacom_features_##prod
14713 +@@ -4955,6 +4958,7 @@ const struct hid_device_id wacom_ids[] = {
14714 + { USB_DEVICE_WACOM(0x84) },
14715 + { USB_DEVICE_WACOM(0x90) },
14716 + { USB_DEVICE_WACOM(0x93) },
14717 ++ { USB_DEVICE_WACOM(0x94) },
14718 + { USB_DEVICE_WACOM(0x97) },
14719 + { USB_DEVICE_WACOM(0x9A) },
14720 + { USB_DEVICE_WACOM(0x9F) },
14721 +diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
14722 +index 5ca6c06d143be..16f221388563d 100644
14723 +--- a/drivers/hid/wacom_wac.h
14724 ++++ b/drivers/hid/wacom_wac.h
14725 +@@ -243,6 +243,7 @@ enum {
14726 + MTTPC,
14727 + MTTPC_B,
14728 + HID_GENERIC,
14729 ++ BOOTLOADER,
14730 + MAX_TYPE
14731 + };
14732 +
14733 +diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
14734 +index eb98201583185..26f2c3c012978 100644
14735 +--- a/drivers/hsi/controllers/omap_ssi_core.c
14736 ++++ b/drivers/hsi/controllers/omap_ssi_core.c
14737 +@@ -502,8 +502,10 @@ static int ssi_probe(struct platform_device *pd)
14738 + platform_set_drvdata(pd, ssi);
14739 +
14740 + err = ssi_add_controller(ssi, pd);
14741 +- if (err < 0)
14742 ++ if (err < 0) {
14743 ++ hsi_put_controller(ssi);
14744 + goto out1;
14745 ++ }
14746 +
14747 + pm_runtime_enable(&pd->dev);
14748 +
14749 +@@ -536,9 +538,9 @@ out3:
14750 + device_for_each_child(&pd->dev, NULL, ssi_remove_ports);
14751 + out2:
14752 + ssi_remove_controller(ssi);
14753 ++ pm_runtime_disable(&pd->dev);
14754 + out1:
14755 + platform_set_drvdata(pd, NULL);
14756 +- pm_runtime_disable(&pd->dev);
14757 +
14758 + return err;
14759 + }
14760 +@@ -629,7 +631,13 @@ static int __init ssi_init(void) {
14761 + if (ret)
14762 + return ret;
14763 +
14764 +- return platform_driver_register(&ssi_port_pdriver);
14765 ++ ret = platform_driver_register(&ssi_port_pdriver);
14766 ++ if (ret) {
14767 ++ platform_driver_unregister(&ssi_pdriver);
14768 ++ return ret;
14769 ++ }
14770 ++
14771 ++ return 0;
14772 + }
14773 + module_init(ssi_init);
14774 +
14775 +diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
14776 +index 59a4aa86d1f35..c6692fd5ab155 100644
14777 +--- a/drivers/hv/ring_buffer.c
14778 ++++ b/drivers/hv/ring_buffer.c
14779 +@@ -280,6 +280,19 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
14780 + ring_info->pkt_buffer_size = 0;
14781 + }
14782 +
14783 ++/*
14784 ++ * Check if the ring buffer spinlock is available to take or not; used on
14785 ++ * atomic contexts, like panic path (see the Hyper-V framebuffer driver).
14786 ++ */
14787 ++
14788 ++bool hv_ringbuffer_spinlock_busy(struct vmbus_channel *channel)
14789 ++{
14790 ++ struct hv_ring_buffer_info *rinfo = &channel->outbound;
14791 ++
14792 ++ return spin_is_locked(&rinfo->ring_lock);
14793 ++}
14794 ++EXPORT_SYMBOL_GPL(hv_ringbuffer_spinlock_busy);
14795 ++
14796 + /* Write to the ring buffer. */
14797 + int hv_ringbuffer_write(struct vmbus_channel *channel,
14798 + const struct kvec *kv_list, u32 kv_count,
14799 +diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
14800 +index 7ac3daaf59ce0..d3bccc8176c51 100644
14801 +--- a/drivers/hwmon/Kconfig
14802 ++++ b/drivers/hwmon/Kconfig
14803 +@@ -799,6 +799,7 @@ config SENSORS_IT87
14804 + config SENSORS_JC42
14805 + tristate "JEDEC JC42.4 compliant memory module temperature sensors"
14806 + depends on I2C
14807 ++ select REGMAP_I2C
14808 + help
14809 + If you say yes here, you get support for JEDEC JC42.4 compliant
14810 + temperature sensors, which are used on many DDR3 memory modules for
14811 +diff --git a/drivers/hwmon/emc2305.c b/drivers/hwmon/emc2305.c
14812 +index aa1f25add0b6b..e42ae43f3de46 100644
14813 +--- a/drivers/hwmon/emc2305.c
14814 ++++ b/drivers/hwmon/emc2305.c
14815 +@@ -16,7 +16,6 @@ static const unsigned short
14816 + emc2305_normal_i2c[] = { 0x27, 0x2c, 0x2d, 0x2e, 0x2f, 0x4c, 0x4d, I2C_CLIENT_END };
14817 +
14818 + #define EMC2305_REG_DRIVE_FAIL_STATUS 0x27
14819 +-#define EMC2305_REG_DEVICE 0xfd
14820 + #define EMC2305_REG_VENDOR 0xfe
14821 + #define EMC2305_FAN_MAX 0xff
14822 + #define EMC2305_FAN_MIN 0x00
14823 +@@ -172,22 +171,12 @@ static int emc2305_get_max_state(struct thermal_cooling_device *cdev, unsigned l
14824 + return 0;
14825 + }
14826 +
14827 +-static int emc2305_set_cur_state(struct thermal_cooling_device *cdev, unsigned long state)
14828 ++static int __emc2305_set_cur_state(struct emc2305_data *data, int cdev_idx, unsigned long state)
14829 + {
14830 +- int cdev_idx, ret;
14831 +- struct emc2305_data *data = cdev->devdata;
14832 ++ int ret;
14833 + struct i2c_client *client = data->client;
14834 + u8 val, i;
14835 +
14836 +- if (state > data->max_state)
14837 +- return -EINVAL;
14838 +-
14839 +- cdev_idx = emc2305_get_cdev_idx(cdev);
14840 +- if (cdev_idx < 0)
14841 +- return cdev_idx;
14842 +-
14843 +- /* Save thermal state. */
14844 +- data->cdev_data[cdev_idx].last_thermal_state = state;
14845 + state = max_t(unsigned long, state, data->cdev_data[cdev_idx].last_hwmon_state);
14846 +
14847 + val = EMC2305_PWM_STATE2DUTY(state, data->max_state, EMC2305_FAN_MAX);
14848 +@@ -212,6 +201,27 @@ static int emc2305_set_cur_state(struct thermal_cooling_device *cdev, unsigned l
14849 + return 0;
14850 + }
14851 +
14852 ++static int emc2305_set_cur_state(struct thermal_cooling_device *cdev, unsigned long state)
14853 ++{
14854 ++ int cdev_idx, ret;
14855 ++ struct emc2305_data *data = cdev->devdata;
14856 ++
14857 ++ if (state > data->max_state)
14858 ++ return -EINVAL;
14859 ++
14860 ++ cdev_idx = emc2305_get_cdev_idx(cdev);
14861 ++ if (cdev_idx < 0)
14862 ++ return cdev_idx;
14863 ++
14864 ++ /* Save thermal state. */
14865 ++ data->cdev_data[cdev_idx].last_thermal_state = state;
14866 ++ ret = __emc2305_set_cur_state(data, cdev_idx, state);
14867 ++ if (ret < 0)
14868 ++ return ret;
14869 ++
14870 ++ return 0;
14871 ++}
14872 ++
14873 + static const struct thermal_cooling_device_ops emc2305_cooling_ops = {
14874 + .get_max_state = emc2305_get_max_state,
14875 + .get_cur_state = emc2305_get_cur_state,
14876 +@@ -402,7 +412,7 @@ emc2305_write(struct device *dev, enum hwmon_sensor_types type, u32 attr, int ch
14877 + */
14878 + if (data->cdev_data[cdev_idx].last_hwmon_state >=
14879 + data->cdev_data[cdev_idx].last_thermal_state)
14880 +- return emc2305_set_cur_state(data->cdev_data[cdev_idx].cdev,
14881 ++ return __emc2305_set_cur_state(data, cdev_idx,
14882 + data->cdev_data[cdev_idx].last_hwmon_state);
14883 + return 0;
14884 + }
14885 +@@ -524,7 +534,7 @@ static int emc2305_probe(struct i2c_client *client, const struct i2c_device_id *
14886 + struct device *dev = &client->dev;
14887 + struct emc2305_data *data;
14888 + struct emc2305_platform_data *pdata;
14889 +- int vendor, device;
14890 ++ int vendor;
14891 + int ret;
14892 + int i;
14893 +
14894 +@@ -535,10 +545,6 @@ static int emc2305_probe(struct i2c_client *client, const struct i2c_device_id *
14895 + if (vendor != EMC2305_VENDOR)
14896 + return -ENODEV;
14897 +
14898 +- device = i2c_smbus_read_byte_data(client, EMC2305_REG_DEVICE);
14899 +- if (device != EMC2305_DEVICE)
14900 +- return -ENODEV;
14901 +-
14902 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
14903 + if (!data)
14904 + return -ENOMEM;
14905 +diff --git a/drivers/hwmon/jc42.c b/drivers/hwmon/jc42.c
14906 +index 30888feaf589b..6593d81cb901b 100644
14907 +--- a/drivers/hwmon/jc42.c
14908 ++++ b/drivers/hwmon/jc42.c
14909 +@@ -19,6 +19,7 @@
14910 + #include <linux/err.h>
14911 + #include <linux/mutex.h>
14912 + #include <linux/of.h>
14913 ++#include <linux/regmap.h>
14914 +
14915 + /* Addresses to scan */
14916 + static const unsigned short normal_i2c[] = {
14917 +@@ -199,31 +200,14 @@ static struct jc42_chips jc42_chips[] = {
14918 + { STM_MANID, STTS3000_DEVID, STTS3000_DEVID_MASK },
14919 + };
14920 +
14921 +-enum temp_index {
14922 +- t_input = 0,
14923 +- t_crit,
14924 +- t_min,
14925 +- t_max,
14926 +- t_num_temp
14927 +-};
14928 +-
14929 +-static const u8 temp_regs[t_num_temp] = {
14930 +- [t_input] = JC42_REG_TEMP,
14931 +- [t_crit] = JC42_REG_TEMP_CRITICAL,
14932 +- [t_min] = JC42_REG_TEMP_LOWER,
14933 +- [t_max] = JC42_REG_TEMP_UPPER,
14934 +-};
14935 +-
14936 + /* Each client has this additional data */
14937 + struct jc42_data {
14938 +- struct i2c_client *client;
14939 + struct mutex update_lock; /* protect register access */
14940 ++ struct regmap *regmap;
14941 + bool extended; /* true if extended range supported */
14942 + bool valid;
14943 +- unsigned long last_updated; /* In jiffies */
14944 + u16 orig_config; /* original configuration */
14945 + u16 config; /* current configuration */
14946 +- u16 temp[t_num_temp];/* Temperatures */
14947 + };
14948 +
14949 + #define JC42_TEMP_MIN_EXTENDED (-40000)
14950 +@@ -248,85 +232,102 @@ static int jc42_temp_from_reg(s16 reg)
14951 + return reg * 125 / 2;
14952 + }
14953 +
14954 +-static struct jc42_data *jc42_update_device(struct device *dev)
14955 +-{
14956 +- struct jc42_data *data = dev_get_drvdata(dev);
14957 +- struct i2c_client *client = data->client;
14958 +- struct jc42_data *ret = data;
14959 +- int i, val;
14960 +-
14961 +- mutex_lock(&data->update_lock);
14962 +-
14963 +- if (time_after(jiffies, data->last_updated + HZ) || !data->valid) {
14964 +- for (i = 0; i < t_num_temp; i++) {
14965 +- val = i2c_smbus_read_word_swapped(client, temp_regs[i]);
14966 +- if (val < 0) {
14967 +- ret = ERR_PTR(val);
14968 +- goto abort;
14969 +- }
14970 +- data->temp[i] = val;
14971 +- }
14972 +- data->last_updated = jiffies;
14973 +- data->valid = true;
14974 +- }
14975 +-abort:
14976 +- mutex_unlock(&data->update_lock);
14977 +- return ret;
14978 +-}
14979 +-
14980 + static int jc42_read(struct device *dev, enum hwmon_sensor_types type,
14981 + u32 attr, int channel, long *val)
14982 + {
14983 +- struct jc42_data *data = jc42_update_device(dev);
14984 +- int temp, hyst;
14985 ++ struct jc42_data *data = dev_get_drvdata(dev);
14986 ++ unsigned int regval;
14987 ++ int ret, temp, hyst;
14988 +
14989 +- if (IS_ERR(data))
14990 +- return PTR_ERR(data);
14991 ++ mutex_lock(&data->update_lock);
14992 +
14993 + switch (attr) {
14994 + case hwmon_temp_input:
14995 +- *val = jc42_temp_from_reg(data->temp[t_input]);
14996 +- return 0;
14997 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
14998 ++ if (ret)
14999 ++ break;
15000 ++
15001 ++ *val = jc42_temp_from_reg(regval);
15002 ++ break;
15003 + case hwmon_temp_min:
15004 +- *val = jc42_temp_from_reg(data->temp[t_min]);
15005 +- return 0;
15006 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP_LOWER, &regval);
15007 ++ if (ret)
15008 ++ break;
15009 ++
15010 ++ *val = jc42_temp_from_reg(regval);
15011 ++ break;
15012 + case hwmon_temp_max:
15013 +- *val = jc42_temp_from_reg(data->temp[t_max]);
15014 +- return 0;
15015 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP_UPPER, &regval);
15016 ++ if (ret)
15017 ++ break;
15018 ++
15019 ++ *val = jc42_temp_from_reg(regval);
15020 ++ break;
15021 + case hwmon_temp_crit:
15022 +- *val = jc42_temp_from_reg(data->temp[t_crit]);
15023 +- return 0;
15024 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP_CRITICAL,
15025 ++ &regval);
15026 ++ if (ret)
15027 ++ break;
15028 ++
15029 ++ *val = jc42_temp_from_reg(regval);
15030 ++ break;
15031 + case hwmon_temp_max_hyst:
15032 +- temp = jc42_temp_from_reg(data->temp[t_max]);
15033 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP_UPPER, &regval);
15034 ++ if (ret)
15035 ++ break;
15036 ++
15037 ++ temp = jc42_temp_from_reg(regval);
15038 + hyst = jc42_hysteresis[(data->config & JC42_CFG_HYST_MASK)
15039 + >> JC42_CFG_HYST_SHIFT];
15040 + *val = temp - hyst;
15041 +- return 0;
15042 ++ break;
15043 + case hwmon_temp_crit_hyst:
15044 +- temp = jc42_temp_from_reg(data->temp[t_crit]);
15045 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP_CRITICAL,
15046 ++ &regval);
15047 ++ if (ret)
15048 ++ break;
15049 ++
15050 ++ temp = jc42_temp_from_reg(regval);
15051 + hyst = jc42_hysteresis[(data->config & JC42_CFG_HYST_MASK)
15052 + >> JC42_CFG_HYST_SHIFT];
15053 + *val = temp - hyst;
15054 +- return 0;
15055 ++ break;
15056 + case hwmon_temp_min_alarm:
15057 +- *val = (data->temp[t_input] >> JC42_ALARM_MIN_BIT) & 1;
15058 +- return 0;
15059 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
15060 ++ if (ret)
15061 ++ break;
15062 ++
15063 ++ *val = (regval >> JC42_ALARM_MIN_BIT) & 1;
15064 ++ break;
15065 + case hwmon_temp_max_alarm:
15066 +- *val = (data->temp[t_input] >> JC42_ALARM_MAX_BIT) & 1;
15067 +- return 0;
15068 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
15069 ++ if (ret)
15070 ++ break;
15071 ++
15072 ++ *val = (regval >> JC42_ALARM_MAX_BIT) & 1;
15073 ++ break;
15074 + case hwmon_temp_crit_alarm:
15075 +- *val = (data->temp[t_input] >> JC42_ALARM_CRIT_BIT) & 1;
15076 +- return 0;
15077 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP, &regval);
15078 ++ if (ret)
15079 ++ break;
15080 ++
15081 ++ *val = (regval >> JC42_ALARM_CRIT_BIT) & 1;
15082 ++ break;
15083 + default:
15084 +- return -EOPNOTSUPP;
15085 ++ ret = -EOPNOTSUPP;
15086 ++ break;
15087 + }
15088 ++
15089 ++ mutex_unlock(&data->update_lock);
15090 ++
15091 ++ return ret;
15092 + }
15093 +
15094 + static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
15095 + u32 attr, int channel, long val)
15096 + {
15097 + struct jc42_data *data = dev_get_drvdata(dev);
15098 +- struct i2c_client *client = data->client;
15099 ++ unsigned int regval;
15100 + int diff, hyst;
15101 + int ret;
15102 +
15103 +@@ -334,21 +335,23 @@ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
15104 +
15105 + switch (attr) {
15106 + case hwmon_temp_min:
15107 +- data->temp[t_min] = jc42_temp_to_reg(val, data->extended);
15108 +- ret = i2c_smbus_write_word_swapped(client, temp_regs[t_min],
15109 +- data->temp[t_min]);
15110 ++ ret = regmap_write(data->regmap, JC42_REG_TEMP_LOWER,
15111 ++ jc42_temp_to_reg(val, data->extended));
15112 + break;
15113 + case hwmon_temp_max:
15114 +- data->temp[t_max] = jc42_temp_to_reg(val, data->extended);
15115 +- ret = i2c_smbus_write_word_swapped(client, temp_regs[t_max],
15116 +- data->temp[t_max]);
15117 ++ ret = regmap_write(data->regmap, JC42_REG_TEMP_UPPER,
15118 ++ jc42_temp_to_reg(val, data->extended));
15119 + break;
15120 + case hwmon_temp_crit:
15121 +- data->temp[t_crit] = jc42_temp_to_reg(val, data->extended);
15122 +- ret = i2c_smbus_write_word_swapped(client, temp_regs[t_crit],
15123 +- data->temp[t_crit]);
15124 ++ ret = regmap_write(data->regmap, JC42_REG_TEMP_CRITICAL,
15125 ++ jc42_temp_to_reg(val, data->extended));
15126 + break;
15127 + case hwmon_temp_crit_hyst:
15128 ++ ret = regmap_read(data->regmap, JC42_REG_TEMP_CRITICAL,
15129 ++ &regval);
15130 ++ if (ret)
15131 ++ break;
15132 ++
15133 + /*
15134 + * JC42.4 compliant chips only support four hysteresis values.
15135 + * Pick best choice and go from there.
15136 +@@ -356,7 +359,7 @@ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
15137 + val = clamp_val(val, (data->extended ? JC42_TEMP_MIN_EXTENDED
15138 + : JC42_TEMP_MIN) - 6000,
15139 + JC42_TEMP_MAX);
15140 +- diff = jc42_temp_from_reg(data->temp[t_crit]) - val;
15141 ++ diff = jc42_temp_from_reg(regval) - val;
15142 + hyst = 0;
15143 + if (diff > 0) {
15144 + if (diff < 2250)
15145 +@@ -368,9 +371,8 @@ static int jc42_write(struct device *dev, enum hwmon_sensor_types type,
15146 + }
15147 + data->config = (data->config & ~JC42_CFG_HYST_MASK) |
15148 + (hyst << JC42_CFG_HYST_SHIFT);
15149 +- ret = i2c_smbus_write_word_swapped(data->client,
15150 +- JC42_REG_CONFIG,
15151 +- data->config);
15152 ++ ret = regmap_write(data->regmap, JC42_REG_CONFIG,
15153 ++ data->config);
15154 + break;
15155 + default:
15156 + ret = -EOPNOTSUPP;
15157 +@@ -470,51 +472,80 @@ static const struct hwmon_chip_info jc42_chip_info = {
15158 + .info = jc42_info,
15159 + };
15160 +
15161 ++static bool jc42_readable_reg(struct device *dev, unsigned int reg)
15162 ++{
15163 ++ return (reg >= JC42_REG_CAP && reg <= JC42_REG_DEVICEID) ||
15164 ++ reg == JC42_REG_SMBUS;
15165 ++}
15166 ++
15167 ++static bool jc42_writable_reg(struct device *dev, unsigned int reg)
15168 ++{
15169 ++ return (reg >= JC42_REG_CONFIG && reg <= JC42_REG_TEMP_CRITICAL) ||
15170 ++ reg == JC42_REG_SMBUS;
15171 ++}
15172 ++
15173 ++static bool jc42_volatile_reg(struct device *dev, unsigned int reg)
15174 ++{
15175 ++ return reg == JC42_REG_CONFIG || reg == JC42_REG_TEMP;
15176 ++}
15177 ++
15178 ++static const struct regmap_config jc42_regmap_config = {
15179 ++ .reg_bits = 8,
15180 ++ .val_bits = 16,
15181 ++ .val_format_endian = REGMAP_ENDIAN_BIG,
15182 ++ .max_register = JC42_REG_SMBUS,
15183 ++ .writeable_reg = jc42_writable_reg,
15184 ++ .readable_reg = jc42_readable_reg,
15185 ++ .volatile_reg = jc42_volatile_reg,
15186 ++ .cache_type = REGCACHE_RBTREE,
15187 ++};
15188 ++
15189 + static int jc42_probe(struct i2c_client *client)
15190 + {
15191 + struct device *dev = &client->dev;
15192 + struct device *hwmon_dev;
15193 ++ unsigned int config, cap;
15194 + struct jc42_data *data;
15195 +- int config, cap;
15196 ++ int ret;
15197 +
15198 + data = devm_kzalloc(dev, sizeof(struct jc42_data), GFP_KERNEL);
15199 + if (!data)
15200 + return -ENOMEM;
15201 +
15202 +- data->client = client;
15203 ++ data->regmap = devm_regmap_init_i2c(client, &jc42_regmap_config);
15204 ++ if (IS_ERR(data->regmap))
15205 ++ return PTR_ERR(data->regmap);
15206 ++
15207 + i2c_set_clientdata(client, data);
15208 + mutex_init(&data->update_lock);
15209 +
15210 +- cap = i2c_smbus_read_word_swapped(client, JC42_REG_CAP);
15211 +- if (cap < 0)
15212 +- return cap;
15213 ++ ret = regmap_read(data->regmap, JC42_REG_CAP, &cap);
15214 ++ if (ret)
15215 ++ return ret;
15216 +
15217 + data->extended = !!(cap & JC42_CAP_RANGE);
15218 +
15219 + if (device_property_read_bool(dev, "smbus-timeout-disable")) {
15220 +- int smbus;
15221 +-
15222 + /*
15223 + * Not all chips support this register, but from a
15224 + * quick read of various datasheets no chip appears
15225 + * incompatible with the below attempt to disable
15226 + * the timeout. And the whole thing is opt-in...
15227 + */
15228 +- smbus = i2c_smbus_read_word_swapped(client, JC42_REG_SMBUS);
15229 +- if (smbus < 0)
15230 +- return smbus;
15231 +- i2c_smbus_write_word_swapped(client, JC42_REG_SMBUS,
15232 +- smbus | SMBUS_STMOUT);
15233 ++ ret = regmap_set_bits(data->regmap, JC42_REG_SMBUS,
15234 ++ SMBUS_STMOUT);
15235 ++ if (ret)
15236 ++ return ret;
15237 + }
15238 +
15239 +- config = i2c_smbus_read_word_swapped(client, JC42_REG_CONFIG);
15240 +- if (config < 0)
15241 +- return config;
15242 ++ ret = regmap_read(data->regmap, JC42_REG_CONFIG, &config);
15243 ++ if (ret)
15244 ++ return ret;
15245 +
15246 + data->orig_config = config;
15247 + if (config & JC42_CFG_SHUTDOWN) {
15248 + config &= ~JC42_CFG_SHUTDOWN;
15249 +- i2c_smbus_write_word_swapped(client, JC42_REG_CONFIG, config);
15250 ++ regmap_write(data->regmap, JC42_REG_CONFIG, config);
15251 + }
15252 + data->config = config;
15253 +
15254 +@@ -535,7 +566,7 @@ static void jc42_remove(struct i2c_client *client)
15255 +
15256 + config = (data->orig_config & ~JC42_CFG_HYST_MASK)
15257 + | (data->config & JC42_CFG_HYST_MASK);
15258 +- i2c_smbus_write_word_swapped(client, JC42_REG_CONFIG, config);
15259 ++ regmap_write(data->regmap, JC42_REG_CONFIG, config);
15260 + }
15261 + }
15262 +
15263 +@@ -546,8 +577,11 @@ static int jc42_suspend(struct device *dev)
15264 + struct jc42_data *data = dev_get_drvdata(dev);
15265 +
15266 + data->config |= JC42_CFG_SHUTDOWN;
15267 +- i2c_smbus_write_word_swapped(data->client, JC42_REG_CONFIG,
15268 +- data->config);
15269 ++ regmap_write(data->regmap, JC42_REG_CONFIG, data->config);
15270 ++
15271 ++ regcache_cache_only(data->regmap, true);
15272 ++ regcache_mark_dirty(data->regmap);
15273 ++
15274 + return 0;
15275 + }
15276 +
15277 +@@ -555,10 +589,13 @@ static int jc42_resume(struct device *dev)
15278 + {
15279 + struct jc42_data *data = dev_get_drvdata(dev);
15280 +
15281 ++ regcache_cache_only(data->regmap, false);
15282 ++
15283 + data->config &= ~JC42_CFG_SHUTDOWN;
15284 +- i2c_smbus_write_word_swapped(data->client, JC42_REG_CONFIG,
15285 +- data->config);
15286 +- return 0;
15287 ++ regmap_write(data->regmap, JC42_REG_CONFIG, data->config);
15288 ++
15289 ++ /* Restore cached register values to hardware */
15290 ++ return regcache_sync(data->regmap);
15291 + }
15292 +
15293 + static const struct dev_pm_ops jc42_dev_pm_ops = {
15294 +diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
15295 +index b347837842139..bf43f73dc835f 100644
15296 +--- a/drivers/hwmon/nct6775-platform.c
15297 ++++ b/drivers/hwmon/nct6775-platform.c
15298 +@@ -1043,7 +1043,9 @@ static struct platform_device *pdev[2];
15299 +
15300 + static const char * const asus_wmi_boards[] = {
15301 + "PRO H410T",
15302 ++ "ProArt B550-CREATOR",
15303 + "ProArt X570-CREATOR WIFI",
15304 ++ "ProArt Z490-CREATOR 10G",
15305 + "Pro B550M-C",
15306 + "Pro WS X570-ACE",
15307 + "PRIME B360-PLUS",
15308 +@@ -1055,8 +1057,10 @@ static const char * const asus_wmi_boards[] = {
15309 + "PRIME X570-P",
15310 + "PRIME X570-PRO",
15311 + "ROG CROSSHAIR VIII DARK HERO",
15312 ++ "ROG CROSSHAIR VIII EXTREME",
15313 + "ROG CROSSHAIR VIII FORMULA",
15314 + "ROG CROSSHAIR VIII HERO",
15315 ++ "ROG CROSSHAIR VIII HERO (WI-FI)",
15316 + "ROG CROSSHAIR VIII IMPACT",
15317 + "ROG STRIX B550-A GAMING",
15318 + "ROG STRIX B550-E GAMING",
15319 +@@ -1080,8 +1084,11 @@ static const char * const asus_wmi_boards[] = {
15320 + "ROG STRIX Z490-G GAMING (WI-FI)",
15321 + "ROG STRIX Z490-H GAMING",
15322 + "ROG STRIX Z490-I GAMING",
15323 ++ "TUF GAMING B550M-E",
15324 ++ "TUF GAMING B550M-E (WI-FI)",
15325 + "TUF GAMING B550M-PLUS",
15326 + "TUF GAMING B550M-PLUS (WI-FI)",
15327 ++ "TUF GAMING B550M-PLUS WIFI II",
15328 + "TUF GAMING B550-PLUS",
15329 + "TUF GAMING B550-PLUS WIFI II",
15330 + "TUF GAMING B550-PRO",
15331 +diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
15332 +index c6e8c6542f24b..d2cf4f4848e1b 100644
15333 +--- a/drivers/hwtracing/coresight/coresight-cti-core.c
15334 ++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
15335 +@@ -564,7 +564,7 @@ static void cti_add_assoc_to_csdev(struct coresight_device *csdev)
15336 + * if we found a matching csdev then update the ECT
15337 + * association pointer for the device with this CTI.
15338 + */
15339 +- coresight_set_assoc_ectdev_mutex(csdev->ect_dev,
15340 ++ coresight_set_assoc_ectdev_mutex(csdev,
15341 + ect_item->csdev);
15342 + break;
15343 + }
15344 +diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
15345 +index 2b386bb848f8d..1fc4fd79a1c69 100644
15346 +--- a/drivers/hwtracing/coresight/coresight-trbe.c
15347 ++++ b/drivers/hwtracing/coresight/coresight-trbe.c
15348 +@@ -1434,6 +1434,7 @@ static int arm_trbe_probe_cpuhp(struct trbe_drvdata *drvdata)
15349 +
15350 + static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
15351 + {
15352 ++ cpuhp_state_remove_instance(drvdata->trbe_online, &drvdata->hotplug_node);
15353 + cpuhp_remove_multi_state(drvdata->trbe_online);
15354 + }
15355 +
15356 +diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c
15357 +index fe2349590f75e..c74985d77b0ec 100644
15358 +--- a/drivers/i2c/busses/i2c-ismt.c
15359 ++++ b/drivers/i2c/busses/i2c-ismt.c
15360 +@@ -509,6 +509,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
15361 + if (read_write == I2C_SMBUS_WRITE) {
15362 + /* Block Write */
15363 + dev_dbg(dev, "I2C_SMBUS_BLOCK_DATA: WRITE\n");
15364 ++ if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
15365 ++ return -EINVAL;
15366 ++
15367 + dma_size = data->block[0] + 1;
15368 + dma_direction = DMA_TO_DEVICE;
15369 + desc->wr_len_cmd = dma_size;
15370 +diff --git a/drivers/i2c/busses/i2c-pxa-pci.c b/drivers/i2c/busses/i2c-pxa-pci.c
15371 +index f614cade432bb..30e38bc8b6db8 100644
15372 +--- a/drivers/i2c/busses/i2c-pxa-pci.c
15373 ++++ b/drivers/i2c/busses/i2c-pxa-pci.c
15374 +@@ -105,7 +105,7 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
15375 + int i;
15376 + struct ce4100_devices *sds;
15377 +
15378 +- ret = pci_enable_device_mem(dev);
15379 ++ ret = pcim_enable_device(dev);
15380 + if (ret)
15381 + return ret;
15382 +
15383 +@@ -114,10 +114,8 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
15384 + return -EINVAL;
15385 + }
15386 + sds = kzalloc(sizeof(*sds), GFP_KERNEL);
15387 +- if (!sds) {
15388 +- ret = -ENOMEM;
15389 +- goto err_mem;
15390 +- }
15391 ++ if (!sds)
15392 ++ return -ENOMEM;
15393 +
15394 + for (i = 0; i < ARRAY_SIZE(sds->pdev); i++) {
15395 + sds->pdev[i] = add_i2c_device(dev, i);
15396 +@@ -133,8 +131,6 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
15397 +
15398 + err_dev_add:
15399 + kfree(sds);
15400 +-err_mem:
15401 +- pci_disable_device(dev);
15402 + return ret;
15403 + }
15404 +
15405 +diff --git a/drivers/i2c/muxes/i2c-mux-reg.c b/drivers/i2c/muxes/i2c-mux-reg.c
15406 +index 0e0679f65cf77..30a6de1694e07 100644
15407 +--- a/drivers/i2c/muxes/i2c-mux-reg.c
15408 ++++ b/drivers/i2c/muxes/i2c-mux-reg.c
15409 +@@ -183,13 +183,12 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
15410 + if (!mux->data.reg) {
15411 + dev_info(&pdev->dev,
15412 + "Register not set, using platform resource\n");
15413 +- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
15414 +- mux->data.reg_size = resource_size(res);
15415 +- mux->data.reg = devm_ioremap_resource(&pdev->dev, res);
15416 ++ mux->data.reg = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
15417 + if (IS_ERR(mux->data.reg)) {
15418 + ret = PTR_ERR(mux->data.reg);
15419 + goto err_put_parent;
15420 + }
15421 ++ mux->data.reg_size = resource_size(res);
15422 + }
15423 +
15424 + if (mux->data.reg_size != 4 && mux->data.reg_size != 2 &&
15425 +diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
15426 +index 261a9a6b45e15..d8570f620785a 100644
15427 +--- a/drivers/iio/adc/ad_sigma_delta.c
15428 ++++ b/drivers/iio/adc/ad_sigma_delta.c
15429 +@@ -281,10 +281,10 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
15430 + unsigned int data_reg;
15431 + int ret = 0;
15432 +
15433 +- if (iio_buffer_enabled(indio_dev))
15434 +- return -EBUSY;
15435 ++ ret = iio_device_claim_direct_mode(indio_dev);
15436 ++ if (ret)
15437 ++ return ret;
15438 +
15439 +- mutex_lock(&indio_dev->mlock);
15440 + ad_sigma_delta_set_channel(sigma_delta, chan->address);
15441 +
15442 + spi_bus_lock(sigma_delta->spi->master);
15443 +@@ -323,7 +323,7 @@ out:
15444 + ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
15445 + sigma_delta->bus_locked = false;
15446 + spi_bus_unlock(sigma_delta->spi->master);
15447 +- mutex_unlock(&indio_dev->mlock);
15448 ++ iio_device_release_direct_mode(indio_dev);
15449 +
15450 + if (ret)
15451 + return ret;
15452 +diff --git a/drivers/iio/adc/ti-adc128s052.c b/drivers/iio/adc/ti-adc128s052.c
15453 +index 622fd384983c7..b3d5b9b7255bc 100644
15454 +--- a/drivers/iio/adc/ti-adc128s052.c
15455 ++++ b/drivers/iio/adc/ti-adc128s052.c
15456 +@@ -181,13 +181,13 @@ static int adc128_probe(struct spi_device *spi)
15457 + }
15458 +
15459 + static const struct of_device_id adc128_of_match[] = {
15460 +- { .compatible = "ti,adc128s052", },
15461 +- { .compatible = "ti,adc122s021", },
15462 +- { .compatible = "ti,adc122s051", },
15463 +- { .compatible = "ti,adc122s101", },
15464 +- { .compatible = "ti,adc124s021", },
15465 +- { .compatible = "ti,adc124s051", },
15466 +- { .compatible = "ti,adc124s101", },
15467 ++ { .compatible = "ti,adc128s052", .data = (void*)0L, },
15468 ++ { .compatible = "ti,adc122s021", .data = (void*)1L, },
15469 ++ { .compatible = "ti,adc122s051", .data = (void*)1L, },
15470 ++ { .compatible = "ti,adc122s101", .data = (void*)1L, },
15471 ++ { .compatible = "ti,adc124s021", .data = (void*)2L, },
15472 ++ { .compatible = "ti,adc124s051", .data = (void*)2L, },
15473 ++ { .compatible = "ti,adc124s101", .data = (void*)2L, },
15474 + { /* sentinel */ },
15475 + };
15476 + MODULE_DEVICE_TABLE(of, adc128_of_match);
15477 +diff --git a/drivers/iio/addac/ad74413r.c b/drivers/iio/addac/ad74413r.c
15478 +index 899bcd83f40bc..e0e130ba9d3ec 100644
15479 +--- a/drivers/iio/addac/ad74413r.c
15480 ++++ b/drivers/iio/addac/ad74413r.c
15481 +@@ -691,7 +691,7 @@ static int ad74413_get_input_current_offset(struct ad74413r_state *st,
15482 + if (ret)
15483 + return ret;
15484 +
15485 +- *val = voltage_offset * AD74413R_ADC_RESULT_MAX / voltage_range;
15486 ++ *val = voltage_offset * (int)AD74413R_ADC_RESULT_MAX / voltage_range;
15487 +
15488 + return IIO_VAL_INT;
15489 + }
15490 +diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
15491 +index f7fcfd04f659d..bc40240b29e26 100644
15492 +--- a/drivers/iio/imu/adis.c
15493 ++++ b/drivers/iio/imu/adis.c
15494 +@@ -270,23 +270,19 @@ EXPORT_SYMBOL_NS(adis_debugfs_reg_access, IIO_ADISLIB);
15495 + #endif
15496 +
15497 + /**
15498 +- * adis_enable_irq() - Enable or disable data ready IRQ
15499 ++ * __adis_enable_irq() - Enable or disable data ready IRQ (unlocked)
15500 + * @adis: The adis device
15501 + * @enable: Whether to enable the IRQ
15502 + *
15503 + * Returns 0 on success, negative error code otherwise
15504 + */
15505 +-int adis_enable_irq(struct adis *adis, bool enable)
15506 ++int __adis_enable_irq(struct adis *adis, bool enable)
15507 + {
15508 +- int ret = 0;
15509 ++ int ret;
15510 + u16 msc;
15511 +
15512 +- mutex_lock(&adis->state_lock);
15513 +-
15514 +- if (adis->data->enable_irq) {
15515 +- ret = adis->data->enable_irq(adis, enable);
15516 +- goto out_unlock;
15517 +- }
15518 ++ if (adis->data->enable_irq)
15519 ++ return adis->data->enable_irq(adis, enable);
15520 +
15521 + if (adis->data->unmasked_drdy) {
15522 + if (enable)
15523 +@@ -294,12 +290,12 @@ int adis_enable_irq(struct adis *adis, bool enable)
15524 + else
15525 + disable_irq(adis->spi->irq);
15526 +
15527 +- goto out_unlock;
15528 ++ return 0;
15529 + }
15530 +
15531 + ret = __adis_read_reg_16(adis, adis->data->msc_ctrl_reg, &msc);
15532 + if (ret)
15533 +- goto out_unlock;
15534 ++ return ret;
15535 +
15536 + msc |= ADIS_MSC_CTRL_DATA_RDY_POL_HIGH;
15537 + msc &= ~ADIS_MSC_CTRL_DATA_RDY_DIO2;
15538 +@@ -308,13 +304,9 @@ int adis_enable_irq(struct adis *adis, bool enable)
15539 + else
15540 + msc &= ~ADIS_MSC_CTRL_DATA_RDY_EN;
15541 +
15542 +- ret = __adis_write_reg_16(adis, adis->data->msc_ctrl_reg, msc);
15543 +-
15544 +-out_unlock:
15545 +- mutex_unlock(&adis->state_lock);
15546 +- return ret;
15547 ++ return __adis_write_reg_16(adis, adis->data->msc_ctrl_reg, msc);
15548 + }
15549 +-EXPORT_SYMBOL_NS(adis_enable_irq, IIO_ADISLIB);
15550 ++EXPORT_SYMBOL_NS(__adis_enable_irq, IIO_ADISLIB);
15551 +
15552 + /**
15553 + * __adis_check_status() - Check the device for error conditions (unlocked)
15554 +@@ -445,7 +437,7 @@ int __adis_initial_startup(struct adis *adis)
15555 + * with 'IRQF_NO_AUTOEN' anyways.
15556 + */
15557 + if (!adis->data->unmasked_drdy)
15558 +- adis_enable_irq(adis, false);
15559 ++ __adis_enable_irq(adis, false);
15560 +
15561 + if (!adis->data->prod_id_reg)
15562 + return 0;
15563 +diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c
15564 +index 3d78da2531a9a..727e2ef66aa4b 100644
15565 +--- a/drivers/iio/industrialio-event.c
15566 ++++ b/drivers/iio/industrialio-event.c
15567 +@@ -556,7 +556,7 @@ int iio_device_register_eventset(struct iio_dev *indio_dev)
15568 +
15569 + ret = iio_device_register_sysfs_group(indio_dev, &ev_int->group);
15570 + if (ret)
15571 +- goto error_free_setup_event_lines;
15572 ++ goto error_free_group_attrs;
15573 +
15574 + ev_int->ioctl_handler.ioctl = iio_event_ioctl;
15575 + iio_device_ioctl_handler_register(&iio_dev_opaque->indio_dev,
15576 +@@ -564,6 +564,8 @@ int iio_device_register_eventset(struct iio_dev *indio_dev)
15577 +
15578 + return 0;
15579 +
15580 ++error_free_group_attrs:
15581 ++ kfree(ev_int->group.attrs);
15582 + error_free_setup_event_lines:
15583 + iio_free_chan_devattr_list(&ev_int->dev_attr_list);
15584 + kfree(ev_int);
15585 +diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
15586 +index a60ccf1836872..1117991ca2ab6 100644
15587 +--- a/drivers/iio/temperature/ltc2983.c
15588 ++++ b/drivers/iio/temperature/ltc2983.c
15589 +@@ -209,6 +209,7 @@ struct ltc2983_data {
15590 + * Holds the converted temperature
15591 + */
15592 + __be32 temp __aligned(IIO_DMA_MINALIGN);
15593 ++ __be32 chan_val;
15594 + };
15595 +
15596 + struct ltc2983_sensor {
15597 +@@ -313,19 +314,18 @@ static int __ltc2983_fault_handler(const struct ltc2983_data *st,
15598 + return 0;
15599 + }
15600 +
15601 +-static int __ltc2983_chan_assign_common(const struct ltc2983_data *st,
15602 ++static int __ltc2983_chan_assign_common(struct ltc2983_data *st,
15603 + const struct ltc2983_sensor *sensor,
15604 + u32 chan_val)
15605 + {
15606 + u32 reg = LTC2983_CHAN_START_ADDR(sensor->chan);
15607 +- __be32 __chan_val;
15608 +
15609 + chan_val |= LTC2983_CHAN_TYPE(sensor->type);
15610 + dev_dbg(&st->spi->dev, "Assign reg:0x%04X, val:0x%08X\n", reg,
15611 + chan_val);
15612 +- __chan_val = cpu_to_be32(chan_val);
15613 +- return regmap_bulk_write(st->regmap, reg, &__chan_val,
15614 +- sizeof(__chan_val));
15615 ++ st->chan_val = cpu_to_be32(chan_val);
15616 ++ return regmap_bulk_write(st->regmap, reg, &st->chan_val,
15617 ++ sizeof(st->chan_val));
15618 + }
15619 +
15620 + static int __ltc2983_chan_custom_sensor_assign(struct ltc2983_data *st,
15621 +diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
15622 +index aa36ac618e729..17a2274152771 100644
15623 +--- a/drivers/infiniband/Kconfig
15624 ++++ b/drivers/infiniband/Kconfig
15625 +@@ -78,6 +78,7 @@ config INFINIBAND_VIRT_DMA
15626 + def_bool !HIGHMEM
15627 +
15628 + if INFINIBAND_USER_ACCESS || !INFINIBAND_USER_ACCESS
15629 ++if !UML
15630 + source "drivers/infiniband/hw/bnxt_re/Kconfig"
15631 + source "drivers/infiniband/hw/cxgb4/Kconfig"
15632 + source "drivers/infiniband/hw/efa/Kconfig"
15633 +@@ -94,6 +95,7 @@ source "drivers/infiniband/hw/qib/Kconfig"
15634 + source "drivers/infiniband/hw/usnic/Kconfig"
15635 + source "drivers/infiniband/hw/vmw_pvrdma/Kconfig"
15636 + source "drivers/infiniband/sw/rdmavt/Kconfig"
15637 ++endif # !UML
15638 + source "drivers/infiniband/sw/rxe/Kconfig"
15639 + source "drivers/infiniband/sw/siw/Kconfig"
15640 + endif
15641 +diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
15642 +index b69e2c4e4d2a4..3c422698a51c1 100644
15643 +--- a/drivers/infiniband/core/device.c
15644 ++++ b/drivers/infiniband/core/device.c
15645 +@@ -2851,8 +2851,8 @@ err:
15646 + static void __exit ib_core_cleanup(void)
15647 + {
15648 + roce_gid_mgmt_cleanup();
15649 +- nldev_exit();
15650 + rdma_nl_unregister(RDMA_NL_LS);
15651 ++ nldev_exit();
15652 + unregister_pernet_device(&rdma_dev_net_ops);
15653 + unregister_blocking_lsm_notifier(&ibdev_lsm_nb);
15654 + ib_sa_cleanup();
15655 +diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
15656 +index 1893aa613ad73..674344eb8e2f4 100644
15657 +--- a/drivers/infiniband/core/mad.c
15658 ++++ b/drivers/infiniband/core/mad.c
15659 +@@ -59,9 +59,6 @@ static void create_mad_addr_info(struct ib_mad_send_wr_private *mad_send_wr,
15660 + struct ib_mad_qp_info *qp_info,
15661 + struct trace_event_raw_ib_mad_send_template *entry)
15662 + {
15663 +- u16 pkey;
15664 +- struct ib_device *dev = qp_info->port_priv->device;
15665 +- u32 pnum = qp_info->port_priv->port_num;
15666 + struct ib_ud_wr *wr = &mad_send_wr->send_wr;
15667 + struct rdma_ah_attr attr = {};
15668 +
15669 +@@ -69,8 +66,6 @@ static void create_mad_addr_info(struct ib_mad_send_wr_private *mad_send_wr,
15670 +
15671 + /* These are common */
15672 + entry->sl = attr.sl;
15673 +- ib_query_pkey(dev, pnum, wr->pkey_index, &pkey);
15674 +- entry->pkey = pkey;
15675 + entry->rqpn = wr->remote_qpn;
15676 + entry->rqkey = wr->remote_qkey;
15677 + entry->dlid = rdma_ah_get_dlid(&attr);
15678 +diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
15679 +index 12dc97067ed2b..222733a83ddb7 100644
15680 +--- a/drivers/infiniband/core/nldev.c
15681 ++++ b/drivers/infiniband/core/nldev.c
15682 +@@ -513,7 +513,7 @@ static int fill_res_qp_entry(struct sk_buff *msg, bool has_cap_net_admin,
15683 +
15684 + /* In create_qp() port is not set yet */
15685 + if (qp->port && nla_put_u32(msg, RDMA_NLDEV_ATTR_PORT_INDEX, qp->port))
15686 +- return -EINVAL;
15687 ++ return -EMSGSIZE;
15688 +
15689 + ret = nla_put_u32(msg, RDMA_NLDEV_ATTR_RES_LQPN, qp->qp_num);
15690 + if (ret)
15691 +@@ -552,7 +552,7 @@ static int fill_res_cm_id_entry(struct sk_buff *msg, bool has_cap_net_admin,
15692 + struct rdma_cm_id *cm_id = &id_priv->id;
15693 +
15694 + if (port && port != cm_id->port_num)
15695 +- return 0;
15696 ++ return -EAGAIN;
15697 +
15698 + if (cm_id->port_num &&
15699 + nla_put_u32(msg, RDMA_NLDEV_ATTR_PORT_INDEX, cm_id->port_num))
15700 +@@ -894,6 +894,8 @@ static int fill_stat_counter_qps(struct sk_buff *msg,
15701 + int ret = 0;
15702 +
15703 + table_attr = nla_nest_start(msg, RDMA_NLDEV_ATTR_RES_QP);
15704 ++ if (!table_attr)
15705 ++ return -EMSGSIZE;
15706 +
15707 + rt = &counter->device->res[RDMA_RESTRACK_QP];
15708 + xa_lock(&rt->xa);
15709 +diff --git a/drivers/infiniband/core/restrack.c b/drivers/infiniband/core/restrack.c
15710 +index 1f935d9f61785..01a499a8b88db 100644
15711 +--- a/drivers/infiniband/core/restrack.c
15712 ++++ b/drivers/infiniband/core/restrack.c
15713 +@@ -343,8 +343,6 @@ void rdma_restrack_del(struct rdma_restrack_entry *res)
15714 + rt = &dev->res[res->type];
15715 +
15716 + old = xa_erase(&rt->xa, res->id);
15717 +- if (res->type == RDMA_RESTRACK_MR)
15718 +- return;
15719 + WARN_ON(old != res);
15720 +
15721 + out:
15722 +diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
15723 +index 84c53bd2a52db..ee59d73915689 100644
15724 +--- a/drivers/infiniband/core/sysfs.c
15725 ++++ b/drivers/infiniband/core/sysfs.c
15726 +@@ -1213,6 +1213,9 @@ static struct ib_port *setup_port(struct ib_core_device *coredev, int port_num,
15727 + p->port_num = port_num;
15728 + kobject_init(&p->kobj, &port_type);
15729 +
15730 ++ if (device->port_data && is_full_dev)
15731 ++ device->port_data[port_num].sysfs = p;
15732 ++
15733 + cur_group = p->groups_list;
15734 + ret = alloc_port_table_group("gids", &p->groups[0], p->attrs_list,
15735 + attr->gid_tbl_len, show_port_gid);
15736 +@@ -1258,9 +1261,6 @@ static struct ib_port *setup_port(struct ib_core_device *coredev, int port_num,
15737 + }
15738 +
15739 + list_add_tail(&p->kobj.entry, &coredev->port_list);
15740 +- if (device->port_data && is_full_dev)
15741 +- device->port_data[port_num].sysfs = p;
15742 +-
15743 + return p;
15744 +
15745 + err_groups:
15746 +@@ -1268,6 +1268,8 @@ err_groups:
15747 + err_del:
15748 + kobject_del(&p->kobj);
15749 + err_put:
15750 ++ if (device->port_data && is_full_dev)
15751 ++ device->port_data[port_num].sysfs = NULL;
15752 + kobject_put(&p->kobj);
15753 + return ERR_PTR(ret);
15754 + }
15755 +@@ -1276,14 +1278,17 @@ static void destroy_port(struct ib_core_device *coredev, struct ib_port *port)
15756 + {
15757 + bool is_full_dev = &port->ibdev->coredev == coredev;
15758 +
15759 +- if (port->ibdev->port_data &&
15760 +- port->ibdev->port_data[port->port_num].sysfs == port)
15761 +- port->ibdev->port_data[port->port_num].sysfs = NULL;
15762 + list_del(&port->kobj.entry);
15763 + if (is_full_dev)
15764 + sysfs_remove_groups(&port->kobj, port->ibdev->ops.port_groups);
15765 ++
15766 + sysfs_remove_groups(&port->kobj, port->groups_list);
15767 + kobject_del(&port->kobj);
15768 ++
15769 ++ if (port->ibdev->port_data &&
15770 ++ port->ibdev->port_data[port->port_num].sysfs == port)
15771 ++ port->ibdev->port_data[port->port_num].sysfs = NULL;
15772 ++
15773 + kobject_put(&port->kobj);
15774 + }
15775 +
15776 +diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
15777 +index 877f8e84a672a..77ee77d4000fb 100644
15778 +--- a/drivers/infiniband/hw/hfi1/affinity.c
15779 ++++ b/drivers/infiniband/hw/hfi1/affinity.c
15780 +@@ -177,6 +177,8 @@ out:
15781 + for (node = 0; node < node_affinity.num_possible_nodes; node++)
15782 + hfi1_per_node_cntr[node] = 1;
15783 +
15784 ++ pci_dev_put(dev);
15785 ++
15786 + return 0;
15787 + }
15788 +
15789 +diff --git a/drivers/infiniband/hw/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
15790 +index 1d77514ebbee0..0c0cef5b1e0e5 100644
15791 +--- a/drivers/infiniband/hw/hfi1/firmware.c
15792 ++++ b/drivers/infiniband/hw/hfi1/firmware.c
15793 +@@ -1743,6 +1743,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
15794 +
15795 + if (!dd->platform_config.data) {
15796 + dd_dev_err(dd, "%s: Missing config file\n", __func__);
15797 ++ ret = -EINVAL;
15798 + goto bail;
15799 + }
15800 + ptr = (u32 *)dd->platform_config.data;
15801 +@@ -1751,6 +1752,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
15802 + ptr++;
15803 + if (magic_num != PLATFORM_CONFIG_MAGIC_NUM) {
15804 + dd_dev_err(dd, "%s: Bad config file\n", __func__);
15805 ++ ret = -EINVAL;
15806 + goto bail;
15807 + }
15808 +
15809 +@@ -1774,6 +1776,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
15810 + if (file_length > dd->platform_config.size) {
15811 + dd_dev_info(dd, "%s:File claims to be larger than read size\n",
15812 + __func__);
15813 ++ ret = -EINVAL;
15814 + goto bail;
15815 + } else if (file_length < dd->platform_config.size) {
15816 + dd_dev_info(dd,
15817 +@@ -1794,6 +1797,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
15818 + dd_dev_err(dd, "%s: Failed validation at offset %ld\n",
15819 + __func__, (ptr - (u32 *)
15820 + dd->platform_config.data));
15821 ++ ret = -EINVAL;
15822 + goto bail;
15823 + }
15824 +
15825 +@@ -1837,6 +1841,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
15826 + __func__, table_type,
15827 + (ptr - (u32 *)
15828 + dd->platform_config.data));
15829 ++ ret = -EINVAL;
15830 + goto bail; /* We don't trust this file now */
15831 + }
15832 + pcfgcache->config_tables[table_type].table = ptr;
15833 +@@ -1856,6 +1861,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
15834 + __func__, table_type,
15835 + (ptr -
15836 + (u32 *)dd->platform_config.data));
15837 ++ ret = -EINVAL;
15838 + goto bail; /* We don't trust this file now */
15839 + }
15840 + pcfgcache->config_tables[table_type].table_metadata =
15841 +diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
15842 +index 723e55a7de8d9..f701cc86896b3 100644
15843 +--- a/drivers/infiniband/hw/hns/hns_roce_device.h
15844 ++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
15845 +@@ -202,6 +202,7 @@ struct hns_roce_ucontext {
15846 + struct list_head page_list;
15847 + struct mutex page_mutex;
15848 + struct hns_user_mmap_entry *db_mmap_entry;
15849 ++ u32 config;
15850 + };
15851 +
15852 + struct hns_roce_pd {
15853 +@@ -334,6 +335,7 @@ struct hns_roce_wq {
15854 + u32 head;
15855 + u32 tail;
15856 + void __iomem *db_reg;
15857 ++ u32 ext_sge_cnt;
15858 + };
15859 +
15860 + struct hns_roce_sge {
15861 +@@ -635,6 +637,7 @@ struct hns_roce_qp {
15862 + struct list_head rq_node; /* all recv qps are on a list */
15863 + struct list_head sq_node; /* all send qps are on a list */
15864 + struct hns_user_mmap_entry *dwqe_mmap_entry;
15865 ++ u32 config;
15866 + };
15867 +
15868 + struct hns_roce_ib_iboe {
15869 +diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
15870 +index 1435fe2ea176f..b2421883993b1 100644
15871 +--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
15872 ++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
15873 +@@ -192,7 +192,6 @@ static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
15874 + unsigned int *sge_idx, u32 msg_len)
15875 + {
15876 + struct ib_device *ibdev = &(to_hr_dev(qp->ibqp.device))->ib_dev;
15877 +- unsigned int ext_sge_sz = qp->sq.max_gs * HNS_ROCE_SGE_SIZE;
15878 + unsigned int left_len_in_pg;
15879 + unsigned int idx = *sge_idx;
15880 + unsigned int i = 0;
15881 +@@ -200,7 +199,7 @@ static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
15882 + void *addr;
15883 + void *dseg;
15884 +
15885 +- if (msg_len > ext_sge_sz) {
15886 ++ if (msg_len > qp->sq.ext_sge_cnt * HNS_ROCE_SGE_SIZE) {
15887 + ibdev_err(ibdev,
15888 + "no enough extended sge space for inline data.\n");
15889 + return -EINVAL;
15890 +@@ -1274,6 +1273,30 @@ static void update_cmdq_status(struct hns_roce_dev *hr_dev)
15891 + hr_dev->cmd.state = HNS_ROCE_CMDQ_STATE_FATAL_ERR;
15892 + }
15893 +
15894 ++static int hns_roce_cmd_err_convert_errno(u16 desc_ret)
15895 ++{
15896 ++ struct hns_roce_cmd_errcode errcode_table[] = {
15897 ++ {CMD_EXEC_SUCCESS, 0},
15898 ++ {CMD_NO_AUTH, -EPERM},
15899 ++ {CMD_NOT_EXIST, -EOPNOTSUPP},
15900 ++ {CMD_CRQ_FULL, -EXFULL},
15901 ++ {CMD_NEXT_ERR, -ENOSR},
15902 ++ {CMD_NOT_EXEC, -ENOTBLK},
15903 ++ {CMD_PARA_ERR, -EINVAL},
15904 ++ {CMD_RESULT_ERR, -ERANGE},
15905 ++ {CMD_TIMEOUT, -ETIME},
15906 ++ {CMD_HILINK_ERR, -ENOLINK},
15907 ++ {CMD_INFO_ILLEGAL, -ENXIO},
15908 ++ {CMD_INVALID, -EBADR},
15909 ++ };
15910 ++ u16 i;
15911 ++
15912 ++ for (i = 0; i < ARRAY_SIZE(errcode_table); i++)
15913 ++ if (desc_ret == errcode_table[i].return_status)
15914 ++ return errcode_table[i].errno;
15915 ++ return -EIO;
15916 ++}
15917 ++
15918 + static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
15919 + struct hns_roce_cmq_desc *desc, int num)
15920 + {
15921 +@@ -1319,7 +1342,7 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
15922 + dev_err_ratelimited(hr_dev->dev,
15923 + "Cmdq IO error, opcode = 0x%x, return = 0x%x.\n",
15924 + desc->opcode, desc_ret);
15925 +- ret = -EIO;
15926 ++ ret = hns_roce_cmd_err_convert_errno(desc_ret);
15927 + }
15928 + } else {
15929 + /* FW/HW reset or incorrect number of desc */
15930 +@@ -2024,13 +2047,14 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
15931 +
15932 + caps->flags |= HNS_ROCE_CAP_FLAG_ATOMIC | HNS_ROCE_CAP_FLAG_MW |
15933 + HNS_ROCE_CAP_FLAG_SRQ | HNS_ROCE_CAP_FLAG_FRMR |
15934 +- HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL | HNS_ROCE_CAP_FLAG_XRC;
15935 ++ HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL;
15936 +
15937 + caps->gid_table_len[0] = HNS_ROCE_V2_GID_INDEX_NUM;
15938 +
15939 + if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09) {
15940 + caps->flags |= HNS_ROCE_CAP_FLAG_STASH |
15941 +- HNS_ROCE_CAP_FLAG_DIRECT_WQE;
15942 ++ HNS_ROCE_CAP_FLAG_DIRECT_WQE |
15943 ++ HNS_ROCE_CAP_FLAG_XRC;
15944 + caps->max_sq_inline = HNS_ROCE_V3_MAX_SQ_INLINE;
15945 + } else {
15946 + caps->max_sq_inline = HNS_ROCE_V2_MAX_SQ_INLINE;
15947 +@@ -2342,6 +2366,9 @@ static int hns_roce_query_pf_caps(struct hns_roce_dev *hr_dev)
15948 + caps->wqe_sge_hop_num = hr_reg_read(resp_d, PF_CAPS_D_EX_SGE_HOP_NUM);
15949 + caps->wqe_rq_hop_num = hr_reg_read(resp_d, PF_CAPS_D_RQWQE_HOP_NUM);
15950 +
15951 ++ if (!(caps->page_size_cap & PAGE_SIZE))
15952 ++ caps->page_size_cap = HNS_ROCE_V2_PAGE_SIZE_SUPPORTED;
15953 ++
15954 + return 0;
15955 + }
15956 +
15957 +@@ -2631,31 +2658,124 @@ static void free_dip_list(struct hns_roce_dev *hr_dev)
15958 + spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
15959 + }
15960 +
15961 +-static void free_mr_exit(struct hns_roce_dev *hr_dev)
15962 ++static struct ib_pd *free_mr_init_pd(struct hns_roce_dev *hr_dev)
15963 ++{
15964 ++ struct hns_roce_v2_priv *priv = hr_dev->priv;
15965 ++ struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
15966 ++ struct ib_device *ibdev = &hr_dev->ib_dev;
15967 ++ struct hns_roce_pd *hr_pd;
15968 ++ struct ib_pd *pd;
15969 ++
15970 ++ hr_pd = kzalloc(sizeof(*hr_pd), GFP_KERNEL);
15971 ++ if (ZERO_OR_NULL_PTR(hr_pd))
15972 ++ return NULL;
15973 ++ pd = &hr_pd->ibpd;
15974 ++ pd->device = ibdev;
15975 ++
15976 ++ if (hns_roce_alloc_pd(pd, NULL)) {
15977 ++ ibdev_err(ibdev, "failed to create pd for free mr.\n");
15978 ++ kfree(hr_pd);
15979 ++ return NULL;
15980 ++ }
15981 ++ free_mr->rsv_pd = to_hr_pd(pd);
15982 ++ free_mr->rsv_pd->ibpd.device = &hr_dev->ib_dev;
15983 ++ free_mr->rsv_pd->ibpd.uobject = NULL;
15984 ++ free_mr->rsv_pd->ibpd.__internal_mr = NULL;
15985 ++ atomic_set(&free_mr->rsv_pd->ibpd.usecnt, 0);
15986 ++
15987 ++ return pd;
15988 ++}
15989 ++
15990 ++static struct ib_cq *free_mr_init_cq(struct hns_roce_dev *hr_dev)
15991 + {
15992 + struct hns_roce_v2_priv *priv = hr_dev->priv;
15993 + struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
15994 ++ struct ib_device *ibdev = &hr_dev->ib_dev;
15995 ++ struct ib_cq_init_attr cq_init_attr = {};
15996 ++ struct hns_roce_cq *hr_cq;
15997 ++ struct ib_cq *cq;
15998 ++
15999 ++ cq_init_attr.cqe = HNS_ROCE_FREE_MR_USED_CQE_NUM;
16000 ++
16001 ++ hr_cq = kzalloc(sizeof(*hr_cq), GFP_KERNEL);
16002 ++ if (ZERO_OR_NULL_PTR(hr_cq))
16003 ++ return NULL;
16004 ++
16005 ++ cq = &hr_cq->ib_cq;
16006 ++ cq->device = ibdev;
16007 ++
16008 ++ if (hns_roce_create_cq(cq, &cq_init_attr, NULL)) {
16009 ++ ibdev_err(ibdev, "failed to create cq for free mr.\n");
16010 ++ kfree(hr_cq);
16011 ++ return NULL;
16012 ++ }
16013 ++ free_mr->rsv_cq = to_hr_cq(cq);
16014 ++ free_mr->rsv_cq->ib_cq.device = &hr_dev->ib_dev;
16015 ++ free_mr->rsv_cq->ib_cq.uobject = NULL;
16016 ++ free_mr->rsv_cq->ib_cq.comp_handler = NULL;
16017 ++ free_mr->rsv_cq->ib_cq.event_handler = NULL;
16018 ++ free_mr->rsv_cq->ib_cq.cq_context = NULL;
16019 ++ atomic_set(&free_mr->rsv_cq->ib_cq.usecnt, 0);
16020 ++
16021 ++ return cq;
16022 ++}
16023 ++
16024 ++static int free_mr_init_qp(struct hns_roce_dev *hr_dev, struct ib_cq *cq,
16025 ++ struct ib_qp_init_attr *init_attr, int i)
16026 ++{
16027 ++ struct hns_roce_v2_priv *priv = hr_dev->priv;
16028 ++ struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
16029 ++ struct ib_device *ibdev = &hr_dev->ib_dev;
16030 ++ struct hns_roce_qp *hr_qp;
16031 ++ struct ib_qp *qp;
16032 + int ret;
16033 ++
16034 ++ hr_qp = kzalloc(sizeof(*hr_qp), GFP_KERNEL);
16035 ++ if (ZERO_OR_NULL_PTR(hr_qp))
16036 ++ return -ENOMEM;
16037 ++
16038 ++ qp = &hr_qp->ibqp;
16039 ++ qp->device = ibdev;
16040 ++
16041 ++ ret = hns_roce_create_qp(qp, init_attr, NULL);
16042 ++ if (ret) {
16043 ++ ibdev_err(ibdev, "failed to create qp for free mr.\n");
16044 ++ kfree(hr_qp);
16045 ++ return ret;
16046 ++ }
16047 ++
16048 ++ free_mr->rsv_qp[i] = hr_qp;
16049 ++ free_mr->rsv_qp[i]->ibqp.recv_cq = cq;
16050 ++ free_mr->rsv_qp[i]->ibqp.send_cq = cq;
16051 ++
16052 ++ return 0;
16053 ++}
16054 ++
16055 ++static void free_mr_exit(struct hns_roce_dev *hr_dev)
16056 ++{
16057 ++ struct hns_roce_v2_priv *priv = hr_dev->priv;
16058 ++ struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
16059 ++ struct ib_qp *qp;
16060 + int i;
16061 +
16062 + for (i = 0; i < ARRAY_SIZE(free_mr->rsv_qp); i++) {
16063 + if (free_mr->rsv_qp[i]) {
16064 +- ret = ib_destroy_qp(free_mr->rsv_qp[i]);
16065 +- if (ret)
16066 +- ibdev_err(&hr_dev->ib_dev,
16067 +- "failed to destroy qp in free mr.\n");
16068 +-
16069 ++ qp = &free_mr->rsv_qp[i]->ibqp;
16070 ++ hns_roce_v2_destroy_qp(qp, NULL);
16071 ++ kfree(free_mr->rsv_qp[i]);
16072 + free_mr->rsv_qp[i] = NULL;
16073 + }
16074 + }
16075 +
16076 + if (free_mr->rsv_cq) {
16077 +- ib_destroy_cq(free_mr->rsv_cq);
16078 ++ hns_roce_destroy_cq(&free_mr->rsv_cq->ib_cq, NULL);
16079 ++ kfree(free_mr->rsv_cq);
16080 + free_mr->rsv_cq = NULL;
16081 + }
16082 +
16083 + if (free_mr->rsv_pd) {
16084 +- ib_dealloc_pd(free_mr->rsv_pd);
16085 ++ hns_roce_dealloc_pd(&free_mr->rsv_pd->ibpd, NULL);
16086 ++ kfree(free_mr->rsv_pd);
16087 + free_mr->rsv_pd = NULL;
16088 + }
16089 + }
16090 +@@ -2664,55 +2784,46 @@ static int free_mr_alloc_res(struct hns_roce_dev *hr_dev)
16091 + {
16092 + struct hns_roce_v2_priv *priv = hr_dev->priv;
16093 + struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
16094 +- struct ib_device *ibdev = &hr_dev->ib_dev;
16095 +- struct ib_cq_init_attr cq_init_attr = {};
16096 + struct ib_qp_init_attr qp_init_attr = {};
16097 + struct ib_pd *pd;
16098 + struct ib_cq *cq;
16099 +- struct ib_qp *qp;
16100 + int ret;
16101 + int i;
16102 +
16103 +- pd = ib_alloc_pd(ibdev, 0);
16104 +- if (IS_ERR(pd)) {
16105 +- ibdev_err(ibdev, "failed to create pd for free mr.\n");
16106 +- return PTR_ERR(pd);
16107 +- }
16108 +- free_mr->rsv_pd = pd;
16109 ++ pd = free_mr_init_pd(hr_dev);
16110 ++ if (!pd)
16111 ++ return -ENOMEM;
16112 +
16113 +- cq_init_attr.cqe = HNS_ROCE_FREE_MR_USED_CQE_NUM;
16114 +- cq = ib_create_cq(ibdev, NULL, NULL, NULL, &cq_init_attr);
16115 +- if (IS_ERR(cq)) {
16116 +- ibdev_err(ibdev, "failed to create cq for free mr.\n");
16117 +- ret = PTR_ERR(cq);
16118 +- goto create_failed;
16119 ++ cq = free_mr_init_cq(hr_dev);
16120 ++ if (!cq) {
16121 ++ ret = -ENOMEM;
16122 ++ goto create_failed_cq;
16123 + }
16124 +- free_mr->rsv_cq = cq;
16125 +
16126 + qp_init_attr.qp_type = IB_QPT_RC;
16127 + qp_init_attr.sq_sig_type = IB_SIGNAL_ALL_WR;
16128 +- qp_init_attr.send_cq = free_mr->rsv_cq;
16129 +- qp_init_attr.recv_cq = free_mr->rsv_cq;
16130 ++ qp_init_attr.send_cq = cq;
16131 ++ qp_init_attr.recv_cq = cq;
16132 + for (i = 0; i < ARRAY_SIZE(free_mr->rsv_qp); i++) {
16133 + qp_init_attr.cap.max_send_wr = HNS_ROCE_FREE_MR_USED_SQWQE_NUM;
16134 + qp_init_attr.cap.max_send_sge = HNS_ROCE_FREE_MR_USED_SQSGE_NUM;
16135 + qp_init_attr.cap.max_recv_wr = HNS_ROCE_FREE_MR_USED_RQWQE_NUM;
16136 + qp_init_attr.cap.max_recv_sge = HNS_ROCE_FREE_MR_USED_RQSGE_NUM;
16137 +
16138 +- qp = ib_create_qp(free_mr->rsv_pd, &qp_init_attr);
16139 +- if (IS_ERR(qp)) {
16140 +- ibdev_err(ibdev, "failed to create qp for free mr.\n");
16141 +- ret = PTR_ERR(qp);
16142 +- goto create_failed;
16143 +- }
16144 +-
16145 +- free_mr->rsv_qp[i] = qp;
16146 ++ ret = free_mr_init_qp(hr_dev, cq, &qp_init_attr, i);
16147 ++ if (ret)
16148 ++ goto create_failed_qp;
16149 + }
16150 +
16151 + return 0;
16152 +
16153 +-create_failed:
16154 +- free_mr_exit(hr_dev);
16155 ++create_failed_qp:
16156 ++ hns_roce_destroy_cq(cq, NULL);
16157 ++ kfree(cq);
16158 ++
16159 ++create_failed_cq:
16160 ++ hns_roce_dealloc_pd(pd, NULL);
16161 ++ kfree(pd);
16162 +
16163 + return ret;
16164 + }
16165 +@@ -2728,14 +2839,17 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
16166 + int mask;
16167 + int ret;
16168 +
16169 +- hr_qp = to_hr_qp(free_mr->rsv_qp[sl_num]);
16170 ++ hr_qp = to_hr_qp(&free_mr->rsv_qp[sl_num]->ibqp);
16171 + hr_qp->free_mr_en = 1;
16172 ++ hr_qp->ibqp.device = ibdev;
16173 ++ hr_qp->ibqp.qp_type = IB_QPT_RC;
16174 +
16175 + mask = IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_PORT | IB_QP_ACCESS_FLAGS;
16176 + attr->qp_state = IB_QPS_INIT;
16177 + attr->port_num = 1;
16178 + attr->qp_access_flags = IB_ACCESS_REMOTE_WRITE;
16179 +- ret = ib_modify_qp(&hr_qp->ibqp, attr, mask);
16180 ++ ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
16181 ++ IB_QPS_INIT);
16182 + if (ret) {
16183 + ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n",
16184 + ret);
16185 +@@ -2756,7 +2870,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
16186 +
16187 + rdma_ah_set_sl(&attr->ah_attr, (u8)sl_num);
16188 +
16189 +- ret = ib_modify_qp(&hr_qp->ibqp, attr, mask);
16190 ++ ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
16191 ++ IB_QPS_RTR);
16192 + hr_dev->loop_idc = loopback;
16193 + if (ret) {
16194 + ibdev_err(ibdev, "failed to modify qp to rtr, ret = %d.\n",
16195 +@@ -2770,7 +2885,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
16196 + attr->sq_psn = HNS_ROCE_FREE_MR_USED_PSN;
16197 + attr->retry_cnt = HNS_ROCE_FREE_MR_USED_QP_RETRY_CNT;
16198 + attr->timeout = HNS_ROCE_FREE_MR_USED_QP_TIMEOUT;
16199 +- ret = ib_modify_qp(&hr_qp->ibqp, attr, mask);
16200 ++ ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_RTR,
16201 ++ IB_QPS_RTS);
16202 + if (ret)
16203 + ibdev_err(ibdev, "failed to modify qp to rts, ret = %d.\n",
16204 + ret);
16205 +@@ -3186,7 +3302,8 @@ static int set_mtpt_pbl(struct hns_roce_dev *hr_dev,
16206 + int i, count;
16207 +
16208 + count = hns_roce_mtr_find(hr_dev, &mr->pbl_mtr, 0, pages,
16209 +- ARRAY_SIZE(pages), &pbl_ba);
16210 ++ min_t(int, ARRAY_SIZE(pages), mr->npages),
16211 ++ &pbl_ba);
16212 + if (count < 1) {
16213 + ibdev_err(ibdev, "failed to find PBL mtr, count = %d.\n",
16214 + count);
16215 +@@ -3414,7 +3531,7 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
16216 + mutex_lock(&free_mr->mutex);
16217 +
16218 + for (i = 0; i < ARRAY_SIZE(free_mr->rsv_qp); i++) {
16219 +- hr_qp = to_hr_qp(free_mr->rsv_qp[i]);
16220 ++ hr_qp = free_mr->rsv_qp[i];
16221 +
16222 + ret = free_mr_post_send_lp_wqe(hr_qp);
16223 + if (ret) {
16224 +@@ -3429,7 +3546,7 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
16225 +
16226 + end = msecs_to_jiffies(HNS_ROCE_V2_FREE_MR_TIMEOUT) + jiffies;
16227 + while (cqe_cnt) {
16228 +- npolled = hns_roce_v2_poll_cq(free_mr->rsv_cq, cqe_cnt, wc);
16229 ++ npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc);
16230 + if (npolled < 0) {
16231 + ibdev_err(ibdev,
16232 + "failed to poll cqe for free mr, remain %d cqe.\n",
16233 +@@ -5375,6 +5492,8 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
16234 +
16235 + rdma_ah_set_sl(&qp_attr->ah_attr,
16236 + hr_reg_read(&context, QPC_SL));
16237 ++ rdma_ah_set_port_num(&qp_attr->ah_attr, hr_qp->port + 1);
16238 ++ rdma_ah_set_ah_flags(&qp_attr->ah_attr, IB_AH_GRH);
16239 + grh->flow_label = hr_reg_read(&context, QPC_FL);
16240 + grh->sgid_index = hr_reg_read(&context, QPC_GMV_IDX);
16241 + grh->hop_limit = hr_reg_read(&context, QPC_HOPLIMIT);
16242 +@@ -5468,7 +5587,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
16243 + return ret;
16244 + }
16245 +
16246 +-static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
16247 ++int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
16248 + {
16249 + struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
16250 + struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
16251 +diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
16252 +index c7bf2d52c1cdb..b1b3e1e0b84e5 100644
16253 +--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
16254 ++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
16255 +@@ -272,6 +272,11 @@ enum hns_roce_cmd_return_status {
16256 + CMD_OTHER_ERR = 0xff
16257 + };
16258 +
16259 ++struct hns_roce_cmd_errcode {
16260 ++ enum hns_roce_cmd_return_status return_status;
16261 ++ int errno;
16262 ++};
16263 ++
16264 + enum hns_roce_sgid_type {
16265 + GID_TYPE_FLAG_ROCE_V1 = 0,
16266 + GID_TYPE_FLAG_ROCE_V2_IPV4,
16267 +@@ -1327,9 +1332,9 @@ struct hns_roce_link_table {
16268 + #define HNS_ROCE_EXT_LLM_MIN_PAGES(que_num) ((que_num) * 4 + 2)
16269 +
16270 + struct hns_roce_v2_free_mr {
16271 +- struct ib_qp *rsv_qp[HNS_ROCE_FREE_MR_USED_QP_NUM];
16272 +- struct ib_cq *rsv_cq;
16273 +- struct ib_pd *rsv_pd;
16274 ++ struct hns_roce_qp *rsv_qp[HNS_ROCE_FREE_MR_USED_QP_NUM];
16275 ++ struct hns_roce_cq *rsv_cq;
16276 ++ struct hns_roce_pd *rsv_pd;
16277 + struct mutex mutex;
16278 + };
16279 +
16280 +@@ -1459,6 +1464,8 @@ struct hns_roce_sccc_clr_done {
16281 + __le32 rsv[5];
16282 + };
16283 +
16284 ++int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata);
16285 ++
16286 + static inline void hns_roce_write64(struct hns_roce_dev *hr_dev, __le32 val[2],
16287 + void __iomem *dest)
16288 + {
16289 +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
16290 +index dcf89689a4c62..8ba68ac12388d 100644
16291 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c
16292 ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
16293 +@@ -354,10 +354,11 @@ static int hns_roce_alloc_uar_entry(struct ib_ucontext *uctx)
16294 + static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
16295 + struct ib_udata *udata)
16296 + {
16297 +- int ret;
16298 + struct hns_roce_ucontext *context = to_hr_ucontext(uctx);
16299 +- struct hns_roce_ib_alloc_ucontext_resp resp = {};
16300 + struct hns_roce_dev *hr_dev = to_hr_dev(uctx->device);
16301 ++ struct hns_roce_ib_alloc_ucontext_resp resp = {};
16302 ++ struct hns_roce_ib_alloc_ucontext ucmd = {};
16303 ++ int ret;
16304 +
16305 + if (!hr_dev->active)
16306 + return -EAGAIN;
16307 +@@ -365,6 +366,19 @@ static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
16308 + resp.qp_tab_size = hr_dev->caps.num_qps;
16309 + resp.srq_tab_size = hr_dev->caps.num_srqs;
16310 +
16311 ++ ret = ib_copy_from_udata(&ucmd, udata,
16312 ++ min(udata->inlen, sizeof(ucmd)));
16313 ++ if (ret)
16314 ++ return ret;
16315 ++
16316 ++ if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
16317 ++ context->config = ucmd.config & HNS_ROCE_EXSGE_FLAGS;
16318 ++
16319 ++ if (context->config & HNS_ROCE_EXSGE_FLAGS) {
16320 ++ resp.config |= HNS_ROCE_RSP_EXSGE_FLAGS;
16321 ++ resp.max_inline_data = hr_dev->caps.max_sq_inline;
16322 ++ }
16323 ++
16324 + ret = hns_roce_uar_alloc(hr_dev, &context->uar);
16325 + if (ret)
16326 + goto error_fail_uar_alloc;
16327 +diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
16328 +index 845ac7d3831f4..37a5cf62f88b4 100644
16329 +--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
16330 ++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
16331 +@@ -392,10 +392,10 @@ struct ib_mr *hns_roce_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type,
16332 +
16333 + return &mr->ibmr;
16334 +
16335 +-err_key:
16336 +- free_mr_key(hr_dev, mr);
16337 + err_pbl:
16338 + free_mr_pbl(hr_dev, mr);
16339 ++err_key:
16340 ++ free_mr_key(hr_dev, mr);
16341 + err_free:
16342 + kfree(mr);
16343 + return ERR_PTR(ret);
16344 +diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
16345 +index f0bd82a18069a..0ae335fb205ca 100644
16346 +--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
16347 ++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
16348 +@@ -476,38 +476,109 @@ static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap,
16349 + return 0;
16350 + }
16351 +
16352 +-static u32 get_wqe_ext_sge_cnt(struct hns_roce_qp *qp)
16353 ++static u32 get_max_inline_data(struct hns_roce_dev *hr_dev,
16354 ++ struct ib_qp_cap *cap)
16355 + {
16356 +- /* GSI/UD QP only has extended sge */
16357 +- if (qp->ibqp.qp_type == IB_QPT_GSI || qp->ibqp.qp_type == IB_QPT_UD)
16358 +- return qp->sq.max_gs;
16359 +-
16360 +- if (qp->sq.max_gs > HNS_ROCE_SGE_IN_WQE)
16361 +- return qp->sq.max_gs - HNS_ROCE_SGE_IN_WQE;
16362 ++ if (cap->max_inline_data) {
16363 ++ cap->max_inline_data = roundup_pow_of_two(cap->max_inline_data);
16364 ++ return min(cap->max_inline_data,
16365 ++ hr_dev->caps.max_sq_inline);
16366 ++ }
16367 +
16368 + return 0;
16369 + }
16370 +
16371 ++static void update_inline_data(struct hns_roce_qp *hr_qp,
16372 ++ struct ib_qp_cap *cap)
16373 ++{
16374 ++ u32 sge_num = hr_qp->sq.ext_sge_cnt;
16375 ++
16376 ++ if (hr_qp->config & HNS_ROCE_EXSGE_FLAGS) {
16377 ++ if (!(hr_qp->ibqp.qp_type == IB_QPT_GSI ||
16378 ++ hr_qp->ibqp.qp_type == IB_QPT_UD))
16379 ++ sge_num = max((u32)HNS_ROCE_SGE_IN_WQE, sge_num);
16380 ++
16381 ++ cap->max_inline_data = max(cap->max_inline_data,
16382 ++ sge_num * HNS_ROCE_SGE_SIZE);
16383 ++ }
16384 ++
16385 ++ hr_qp->max_inline_data = cap->max_inline_data;
16386 ++}
16387 ++
16388 ++static u32 get_sge_num_from_max_send_sge(bool is_ud_or_gsi,
16389 ++ u32 max_send_sge)
16390 ++{
16391 ++ unsigned int std_sge_num;
16392 ++ unsigned int min_sge;
16393 ++
16394 ++ std_sge_num = is_ud_or_gsi ? 0 : HNS_ROCE_SGE_IN_WQE;
16395 ++ min_sge = is_ud_or_gsi ? 1 : 0;
16396 ++ return max_send_sge > std_sge_num ? (max_send_sge - std_sge_num) :
16397 ++ min_sge;
16398 ++}
16399 ++
16400 ++static unsigned int get_sge_num_from_max_inl_data(bool is_ud_or_gsi,
16401 ++ u32 max_inline_data)
16402 ++{
16403 ++ unsigned int inline_sge;
16404 ++
16405 ++ inline_sge = roundup_pow_of_two(max_inline_data) / HNS_ROCE_SGE_SIZE;
16406 ++
16407 ++ /*
16408 ++ * if max_inline_data less than
16409 ++ * HNS_ROCE_SGE_IN_WQE * HNS_ROCE_SGE_SIZE,
16410 ++ * In addition to ud's mode, no need to extend sge.
16411 ++ */
16412 ++ if (!is_ud_or_gsi && inline_sge <= HNS_ROCE_SGE_IN_WQE)
16413 ++ inline_sge = 0;
16414 ++
16415 ++ return inline_sge;
16416 ++}
16417 ++
16418 + static void set_ext_sge_param(struct hns_roce_dev *hr_dev, u32 sq_wqe_cnt,
16419 + struct hns_roce_qp *hr_qp, struct ib_qp_cap *cap)
16420 + {
16421 ++ bool is_ud_or_gsi = (hr_qp->ibqp.qp_type == IB_QPT_GSI ||
16422 ++ hr_qp->ibqp.qp_type == IB_QPT_UD);
16423 ++ unsigned int std_sge_num;
16424 ++ u32 inline_ext_sge = 0;
16425 ++ u32 ext_wqe_sge_cnt;
16426 + u32 total_sge_cnt;
16427 +- u32 wqe_sge_cnt;
16428 ++
16429 ++ cap->max_inline_data = get_max_inline_data(hr_dev, cap);
16430 +
16431 + hr_qp->sge.sge_shift = HNS_ROCE_SGE_SHIFT;
16432 ++ std_sge_num = is_ud_or_gsi ? 0 : HNS_ROCE_SGE_IN_WQE;
16433 ++ ext_wqe_sge_cnt = get_sge_num_from_max_send_sge(is_ud_or_gsi,
16434 ++ cap->max_send_sge);
16435 +
16436 +- hr_qp->sq.max_gs = max(1U, cap->max_send_sge);
16437 ++ if (hr_qp->config & HNS_ROCE_EXSGE_FLAGS) {
16438 ++ inline_ext_sge = max(ext_wqe_sge_cnt,
16439 ++ get_sge_num_from_max_inl_data(is_ud_or_gsi,
16440 ++ cap->max_inline_data));
16441 ++ hr_qp->sq.ext_sge_cnt = inline_ext_sge ?
16442 ++ roundup_pow_of_two(inline_ext_sge) : 0;
16443 +
16444 +- wqe_sge_cnt = get_wqe_ext_sge_cnt(hr_qp);
16445 ++ hr_qp->sq.max_gs = max(1U, (hr_qp->sq.ext_sge_cnt + std_sge_num));
16446 ++ hr_qp->sq.max_gs = min(hr_qp->sq.max_gs, hr_dev->caps.max_sq_sg);
16447 ++
16448 ++ ext_wqe_sge_cnt = hr_qp->sq.ext_sge_cnt;
16449 ++ } else {
16450 ++ hr_qp->sq.max_gs = max(1U, cap->max_send_sge);
16451 ++ hr_qp->sq.max_gs = min(hr_qp->sq.max_gs, hr_dev->caps.max_sq_sg);
16452 ++ hr_qp->sq.ext_sge_cnt = hr_qp->sq.max_gs;
16453 ++ }
16454 +
16455 + /* If the number of extended sge is not zero, they MUST use the
16456 + * space of HNS_HW_PAGE_SIZE at least.
16457 + */
16458 +- if (wqe_sge_cnt) {
16459 +- total_sge_cnt = roundup_pow_of_two(sq_wqe_cnt * wqe_sge_cnt);
16460 ++ if (ext_wqe_sge_cnt) {
16461 ++ total_sge_cnt = roundup_pow_of_two(sq_wqe_cnt * ext_wqe_sge_cnt);
16462 + hr_qp->sge.sge_cnt = max(total_sge_cnt,
16463 + (u32)HNS_HW_PAGE_SIZE / HNS_ROCE_SGE_SIZE);
16464 + }
16465 ++
16466 ++ update_inline_data(hr_qp, cap);
16467 + }
16468 +
16469 + static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
16470 +@@ -556,6 +627,7 @@ static int set_user_sq_size(struct hns_roce_dev *hr_dev,
16471 +
16472 + hr_qp->sq.wqe_shift = ucmd->log_sq_stride;
16473 + hr_qp->sq.wqe_cnt = cnt;
16474 ++ cap->max_send_sge = hr_qp->sq.max_gs;
16475 +
16476 + return 0;
16477 + }
16478 +@@ -986,13 +1058,9 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
16479 + struct hns_roce_ib_create_qp *ucmd)
16480 + {
16481 + struct ib_device *ibdev = &hr_dev->ib_dev;
16482 ++ struct hns_roce_ucontext *uctx;
16483 + int ret;
16484 +
16485 +- if (init_attr->cap.max_inline_data > hr_dev->caps.max_sq_inline)
16486 +- init_attr->cap.max_inline_data = hr_dev->caps.max_sq_inline;
16487 +-
16488 +- hr_qp->max_inline_data = init_attr->cap.max_inline_data;
16489 +-
16490 + if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
16491 + hr_qp->sq_signal_bits = IB_SIGNAL_ALL_WR;
16492 + else
16493 +@@ -1015,12 +1083,17 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
16494 + return ret;
16495 + }
16496 +
16497 ++ uctx = rdma_udata_to_drv_context(udata, struct hns_roce_ucontext,
16498 ++ ibucontext);
16499 ++ hr_qp->config = uctx->config;
16500 + ret = set_user_sq_size(hr_dev, &init_attr->cap, hr_qp, ucmd);
16501 + if (ret)
16502 + ibdev_err(ibdev,
16503 + "failed to set user SQ size, ret = %d.\n",
16504 + ret);
16505 + } else {
16506 ++ if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
16507 ++ hr_qp->config = HNS_ROCE_EXSGE_FLAGS;
16508 + ret = set_kernel_sq_size(hr_dev, &init_attr->cap, hr_qp);
16509 + if (ret)
16510 + ibdev_err(ibdev,
16511 +diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c
16512 +index a6e5d350a94ce..16183e894da77 100644
16513 +--- a/drivers/infiniband/hw/irdma/uk.c
16514 ++++ b/drivers/infiniband/hw/irdma/uk.c
16515 +@@ -566,21 +566,37 @@ static void irdma_set_mw_bind_wqe_gen_1(__le64 *wqe,
16516 +
16517 + /**
16518 + * irdma_copy_inline_data_gen_1 - Copy inline data to wqe
16519 +- * @dest: pointer to wqe
16520 +- * @src: pointer to inline data
16521 +- * @len: length of inline data to copy
16522 ++ * @wqe: pointer to wqe
16523 ++ * @sge_list: table of pointers to inline data
16524 ++ * @num_sges: Total inline data length
16525 + * @polarity: compatibility parameter
16526 + */
16527 +-static void irdma_copy_inline_data_gen_1(u8 *dest, u8 *src, u32 len,
16528 +- u8 polarity)
16529 ++static void irdma_copy_inline_data_gen_1(u8 *wqe, struct ib_sge *sge_list,
16530 ++ u32 num_sges, u8 polarity)
16531 + {
16532 +- if (len <= 16) {
16533 +- memcpy(dest, src, len);
16534 +- } else {
16535 +- memcpy(dest, src, 16);
16536 +- src += 16;
16537 +- dest = dest + 32;
16538 +- memcpy(dest, src, len - 16);
16539 ++ u32 quanta_bytes_remaining = 16;
16540 ++ int i;
16541 ++
16542 ++ for (i = 0; i < num_sges; i++) {
16543 ++ u8 *cur_sge = (u8 *)(uintptr_t)sge_list[i].addr;
16544 ++ u32 sge_len = sge_list[i].length;
16545 ++
16546 ++ while (sge_len) {
16547 ++ u32 bytes_copied;
16548 ++
16549 ++ bytes_copied = min(sge_len, quanta_bytes_remaining);
16550 ++ memcpy(wqe, cur_sge, bytes_copied);
16551 ++ wqe += bytes_copied;
16552 ++ cur_sge += bytes_copied;
16553 ++ quanta_bytes_remaining -= bytes_copied;
16554 ++ sge_len -= bytes_copied;
16555 ++
16556 ++ if (!quanta_bytes_remaining) {
16557 ++ /* Remaining inline bytes reside after hdr */
16558 ++ wqe += 16;
16559 ++ quanta_bytes_remaining = 32;
16560 ++ }
16561 ++ }
16562 + }
16563 + }
16564 +
16565 +@@ -612,35 +628,51 @@ static void irdma_set_mw_bind_wqe(__le64 *wqe,
16566 +
16567 + /**
16568 + * irdma_copy_inline_data - Copy inline data to wqe
16569 +- * @dest: pointer to wqe
16570 +- * @src: pointer to inline data
16571 +- * @len: length of inline data to copy
16572 ++ * @wqe: pointer to wqe
16573 ++ * @sge_list: table of pointers to inline data
16574 ++ * @num_sges: number of SGE's
16575 + * @polarity: polarity of wqe valid bit
16576 + */
16577 +-static void irdma_copy_inline_data(u8 *dest, u8 *src, u32 len, u8 polarity)
16578 ++static void irdma_copy_inline_data(u8 *wqe, struct ib_sge *sge_list,
16579 ++ u32 num_sges, u8 polarity)
16580 + {
16581 + u8 inline_valid = polarity << IRDMA_INLINE_VALID_S;
16582 +- u32 copy_size;
16583 +-
16584 +- dest += 8;
16585 +- if (len <= 8) {
16586 +- memcpy(dest, src, len);
16587 +- return;
16588 +- }
16589 +-
16590 +- *((u64 *)dest) = *((u64 *)src);
16591 +- len -= 8;
16592 +- src += 8;
16593 +- dest += 24; /* point to additional 32 byte quanta */
16594 +-
16595 +- while (len) {
16596 +- copy_size = len < 31 ? len : 31;
16597 +- memcpy(dest, src, copy_size);
16598 +- *(dest + 31) = inline_valid;
16599 +- len -= copy_size;
16600 +- dest += 32;
16601 +- src += copy_size;
16602 ++ u32 quanta_bytes_remaining = 8;
16603 ++ bool first_quanta = true;
16604 ++ int i;
16605 ++
16606 ++ wqe += 8;
16607 ++
16608 ++ for (i = 0; i < num_sges; i++) {
16609 ++ u8 *cur_sge = (u8 *)(uintptr_t)sge_list[i].addr;
16610 ++ u32 sge_len = sge_list[i].length;
16611 ++
16612 ++ while (sge_len) {
16613 ++ u32 bytes_copied;
16614 ++
16615 ++ bytes_copied = min(sge_len, quanta_bytes_remaining);
16616 ++ memcpy(wqe, cur_sge, bytes_copied);
16617 ++ wqe += bytes_copied;
16618 ++ cur_sge += bytes_copied;
16619 ++ quanta_bytes_remaining -= bytes_copied;
16620 ++ sge_len -= bytes_copied;
16621 ++
16622 ++ if (!quanta_bytes_remaining) {
16623 ++ quanta_bytes_remaining = 31;
16624 ++
16625 ++ /* Remaining inline bytes reside after hdr */
16626 ++ if (first_quanta) {
16627 ++ first_quanta = false;
16628 ++ wqe += 16;
16629 ++ } else {
16630 ++ *wqe = inline_valid;
16631 ++ wqe++;
16632 ++ }
16633 ++ }
16634 ++ }
16635 + }
16636 ++ if (!first_quanta && quanta_bytes_remaining < 31)
16637 ++ *(wqe + quanta_bytes_remaining) = inline_valid;
16638 + }
16639 +
16640 + /**
16641 +@@ -679,20 +711,27 @@ int irdma_uk_inline_rdma_write(struct irdma_qp_uk *qp,
16642 + struct irdma_post_sq_info *info, bool post_sq)
16643 + {
16644 + __le64 *wqe;
16645 +- struct irdma_inline_rdma_write *op_info;
16646 ++ struct irdma_rdma_write *op_info;
16647 + u64 hdr = 0;
16648 + u32 wqe_idx;
16649 + bool read_fence = false;
16650 ++ u32 i, total_size = 0;
16651 + u16 quanta;
16652 +
16653 + info->push_wqe = qp->push_db ? true : false;
16654 +- op_info = &info->op.inline_rdma_write;
16655 ++ op_info = &info->op.rdma_write;
16656 ++
16657 ++ if (unlikely(qp->max_sq_frag_cnt < op_info->num_lo_sges))
16658 ++ return -EINVAL;
16659 ++
16660 ++ for (i = 0; i < op_info->num_lo_sges; i++)
16661 ++ total_size += op_info->lo_sg_list[i].length;
16662 +
16663 +- if (op_info->len > qp->max_inline_data)
16664 ++ if (unlikely(total_size > qp->max_inline_data))
16665 + return -EINVAL;
16666 +
16667 +- quanta = qp->wqe_ops.iw_inline_data_size_to_quanta(op_info->len);
16668 +- wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, op_info->len,
16669 ++ quanta = qp->wqe_ops.iw_inline_data_size_to_quanta(total_size);
16670 ++ wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size,
16671 + info);
16672 + if (!wqe)
16673 + return -ENOMEM;
16674 +@@ -705,7 +744,7 @@ int irdma_uk_inline_rdma_write(struct irdma_qp_uk *qp,
16675 +
16676 + hdr = FIELD_PREP(IRDMAQPSQ_REMSTAG, op_info->rem_addr.lkey) |
16677 + FIELD_PREP(IRDMAQPSQ_OPCODE, info->op_type) |
16678 +- FIELD_PREP(IRDMAQPSQ_INLINEDATALEN, op_info->len) |
16679 ++ FIELD_PREP(IRDMAQPSQ_INLINEDATALEN, total_size) |
16680 + FIELD_PREP(IRDMAQPSQ_REPORTRTT, info->report_rtt ? 1 : 0) |
16681 + FIELD_PREP(IRDMAQPSQ_INLINEDATAFLAG, 1) |
16682 + FIELD_PREP(IRDMAQPSQ_IMMDATAFLAG, info->imm_data_valid ? 1 : 0) |
16683 +@@ -719,7 +758,8 @@ int irdma_uk_inline_rdma_write(struct irdma_qp_uk *qp,
16684 + set_64bit_val(wqe, 0,
16685 + FIELD_PREP(IRDMAQPSQ_IMMDATA, info->imm_data));
16686 +
16687 +- qp->wqe_ops.iw_copy_inline_data((u8 *)wqe, op_info->data, op_info->len,
16688 ++ qp->wqe_ops.iw_copy_inline_data((u8 *)wqe, op_info->lo_sg_list,
16689 ++ op_info->num_lo_sges,
16690 + qp->swqe_polarity);
16691 + dma_wmb(); /* make sure WQE is populated before valid bit is set */
16692 +
16693 +@@ -745,20 +785,27 @@ int irdma_uk_inline_send(struct irdma_qp_uk *qp,
16694 + struct irdma_post_sq_info *info, bool post_sq)
16695 + {
16696 + __le64 *wqe;
16697 +- struct irdma_post_inline_send *op_info;
16698 ++ struct irdma_post_send *op_info;
16699 + u64 hdr;
16700 + u32 wqe_idx;
16701 + bool read_fence = false;
16702 ++ u32 i, total_size = 0;
16703 + u16 quanta;
16704 +
16705 + info->push_wqe = qp->push_db ? true : false;
16706 +- op_info = &info->op.inline_send;
16707 ++ op_info = &info->op.send;
16708 ++
16709 ++ if (unlikely(qp->max_sq_frag_cnt < op_info->num_sges))
16710 ++ return -EINVAL;
16711 +
16712 +- if (op_info->len > qp->max_inline_data)
16713 ++ for (i = 0; i < op_info->num_sges; i++)
16714 ++ total_size += op_info->sg_list[i].length;
16715 ++
16716 ++ if (unlikely(total_size > qp->max_inline_data))
16717 + return -EINVAL;
16718 +
16719 +- quanta = qp->wqe_ops.iw_inline_data_size_to_quanta(op_info->len);
16720 +- wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, op_info->len,
16721 ++ quanta = qp->wqe_ops.iw_inline_data_size_to_quanta(total_size);
16722 ++ wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size,
16723 + info);
16724 + if (!wqe)
16725 + return -ENOMEM;
16726 +@@ -773,7 +820,7 @@ int irdma_uk_inline_send(struct irdma_qp_uk *qp,
16727 + hdr = FIELD_PREP(IRDMAQPSQ_REMSTAG, info->stag_to_inv) |
16728 + FIELD_PREP(IRDMAQPSQ_AHID, op_info->ah_id) |
16729 + FIELD_PREP(IRDMAQPSQ_OPCODE, info->op_type) |
16730 +- FIELD_PREP(IRDMAQPSQ_INLINEDATALEN, op_info->len) |
16731 ++ FIELD_PREP(IRDMAQPSQ_INLINEDATALEN, total_size) |
16732 + FIELD_PREP(IRDMAQPSQ_IMMDATAFLAG,
16733 + (info->imm_data_valid ? 1 : 0)) |
16734 + FIELD_PREP(IRDMAQPSQ_REPORTRTT, (info->report_rtt ? 1 : 0)) |
16735 +@@ -789,8 +836,8 @@ int irdma_uk_inline_send(struct irdma_qp_uk *qp,
16736 + if (info->imm_data_valid)
16737 + set_64bit_val(wqe, 0,
16738 + FIELD_PREP(IRDMAQPSQ_IMMDATA, info->imm_data));
16739 +- qp->wqe_ops.iw_copy_inline_data((u8 *)wqe, op_info->data, op_info->len,
16740 +- qp->swqe_polarity);
16741 ++ qp->wqe_ops.iw_copy_inline_data((u8 *)wqe, op_info->sg_list,
16742 ++ op_info->num_sges, qp->swqe_polarity);
16743 +
16744 + dma_wmb(); /* make sure WQE is populated before valid bit is set */
16745 +
16746 +@@ -1002,11 +1049,10 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
16747 + __le64 *cqe;
16748 + struct irdma_qp_uk *qp;
16749 + struct irdma_ring *pring = NULL;
16750 +- u32 wqe_idx, q_type;
16751 ++ u32 wqe_idx;
16752 + int ret_code;
16753 + bool move_cq_head = true;
16754 + u8 polarity;
16755 +- u8 op_type;
16756 + bool ext_valid;
16757 + __le64 *ext_cqe;
16758 +
16759 +@@ -1074,7 +1120,7 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
16760 + info->ud_vlan_valid = false;
16761 + }
16762 +
16763 +- q_type = (u8)FIELD_GET(IRDMA_CQ_SQ, qword3);
16764 ++ info->q_type = (u8)FIELD_GET(IRDMA_CQ_SQ, qword3);
16765 + info->error = (bool)FIELD_GET(IRDMA_CQ_ERROR, qword3);
16766 + info->push_dropped = (bool)FIELD_GET(IRDMACQ_PSHDROP, qword3);
16767 + info->ipv4 = (bool)FIELD_GET(IRDMACQ_IPV4, qword3);
16768 +@@ -1113,8 +1159,9 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
16769 + }
16770 + wqe_idx = (u32)FIELD_GET(IRDMA_CQ_WQEIDX, qword3);
16771 + info->qp_handle = (irdma_qp_handle)(unsigned long)qp;
16772 ++ info->op_type = (u8)FIELD_GET(IRDMA_CQ_SQ, qword3);
16773 +
16774 +- if (q_type == IRDMA_CQE_QTYPE_RQ) {
16775 ++ if (info->q_type == IRDMA_CQE_QTYPE_RQ) {
16776 + u32 array_idx;
16777 +
16778 + array_idx = wqe_idx / qp->rq_wqe_size_multiplier;
16779 +@@ -1134,10 +1181,6 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
16780 +
16781 + info->bytes_xfered = (u32)FIELD_GET(IRDMACQ_PAYLDLEN, qword0);
16782 +
16783 +- if (info->imm_valid)
16784 +- info->op_type = IRDMA_OP_TYPE_REC_IMM;
16785 +- else
16786 +- info->op_type = IRDMA_OP_TYPE_REC;
16787 + if (qword3 & IRDMACQ_STAG) {
16788 + info->stag_invalid_set = true;
16789 + info->inv_stag = (u32)FIELD_GET(IRDMACQ_INVSTAG, qword2);
16790 +@@ -1195,17 +1238,18 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq,
16791 + sw_wqe = qp->sq_base[tail].elem;
16792 + get_64bit_val(sw_wqe, 24,
16793 + &wqe_qword);
16794 +- op_type = (u8)FIELD_GET(IRDMAQPSQ_OPCODE, wqe_qword);
16795 +- info->op_type = op_type;
16796 ++ info->op_type = (u8)FIELD_GET(IRDMAQPSQ_OPCODE,
16797 ++ wqe_qword);
16798 + IRDMA_RING_SET_TAIL(qp->sq_ring,
16799 + tail + qp->sq_wrtrk_array[tail].quanta);
16800 +- if (op_type != IRDMAQP_OP_NOP) {
16801 ++ if (info->op_type != IRDMAQP_OP_NOP) {
16802 + info->wr_id = qp->sq_wrtrk_array[tail].wrid;
16803 + info->bytes_xfered = qp->sq_wrtrk_array[tail].wr_len;
16804 + break;
16805 + }
16806 + } while (1);
16807 +- if (op_type == IRDMA_OP_TYPE_BIND_MW && info->minor_err == FLUSH_PROT_ERR)
16808 ++ if (info->op_type == IRDMA_OP_TYPE_BIND_MW &&
16809 ++ info->minor_err == FLUSH_PROT_ERR)
16810 + info->minor_err = FLUSH_MW_BIND_ERR;
16811 + qp->sq_flush_seen = true;
16812 + if (!IRDMA_RING_MORE_WORK(qp->sq_ring))
16813 +diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h
16814 +index 2ef61923c9268..d0cdf609f5e06 100644
16815 +--- a/drivers/infiniband/hw/irdma/user.h
16816 ++++ b/drivers/infiniband/hw/irdma/user.h
16817 +@@ -173,14 +173,6 @@ struct irdma_post_send {
16818 + u32 ah_id;
16819 + };
16820 +
16821 +-struct irdma_post_inline_send {
16822 +- void *data;
16823 +- u32 len;
16824 +- u32 qkey;
16825 +- u32 dest_qp;
16826 +- u32 ah_id;
16827 +-};
16828 +-
16829 + struct irdma_post_rq_info {
16830 + u64 wr_id;
16831 + struct ib_sge *sg_list;
16832 +@@ -193,12 +185,6 @@ struct irdma_rdma_write {
16833 + struct ib_sge rem_addr;
16834 + };
16835 +
16836 +-struct irdma_inline_rdma_write {
16837 +- void *data;
16838 +- u32 len;
16839 +- struct ib_sge rem_addr;
16840 +-};
16841 +-
16842 + struct irdma_rdma_read {
16843 + struct ib_sge *lo_sg_list;
16844 + u32 num_lo_sges;
16845 +@@ -241,8 +227,6 @@ struct irdma_post_sq_info {
16846 + struct irdma_rdma_read rdma_read;
16847 + struct irdma_bind_window bind_window;
16848 + struct irdma_inv_local_stag inv_local_stag;
16849 +- struct irdma_inline_rdma_write inline_rdma_write;
16850 +- struct irdma_post_inline_send inline_send;
16851 + } op;
16852 + };
16853 +
16854 +@@ -261,6 +245,7 @@ struct irdma_cq_poll_info {
16855 + u16 ud_vlan;
16856 + u8 ud_smac[6];
16857 + u8 op_type;
16858 ++ u8 q_type;
16859 + bool stag_invalid_set:1; /* or L_R_Key set */
16860 + bool push_dropped:1;
16861 + bool error:1;
16862 +@@ -291,7 +276,8 @@ int irdma_uk_stag_local_invalidate(struct irdma_qp_uk *qp,
16863 + bool post_sq);
16864 +
16865 + struct irdma_wqe_uk_ops {
16866 +- void (*iw_copy_inline_data)(u8 *dest, u8 *src, u32 len, u8 polarity);
16867 ++ void (*iw_copy_inline_data)(u8 *dest, struct ib_sge *sge_list,
16868 ++ u32 num_sges, u8 polarity);
16869 + u16 (*iw_inline_data_size_to_quanta)(u32 data_size);
16870 + void (*iw_set_fragment)(__le64 *wqe, u32 offset, struct ib_sge *sge,
16871 + u8 valid);
16872 +diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
16873 +index 8dfc9e154d733..445e69e864097 100644
16874 +--- a/drivers/infiniband/hw/irdma/utils.c
16875 ++++ b/drivers/infiniband/hw/irdma/utils.c
16876 +@@ -2591,6 +2591,7 @@ void irdma_generate_flush_completions(struct irdma_qp *iwqp)
16877 + sw_wqe = qp->sq_base[wqe_idx].elem;
16878 + get_64bit_val(sw_wqe, 24, &wqe_qword);
16879 + cmpl->cpi.op_type = (u8)FIELD_GET(IRDMAQPSQ_OPCODE, IRDMAQPSQ_OPCODE);
16880 ++ cmpl->cpi.q_type = IRDMA_CQE_QTYPE_SQ;
16881 + /* remove the SQ WR by moving SQ tail*/
16882 + IRDMA_RING_SET_TAIL(*sq_ring,
16883 + sq_ring->tail + qp->sq_wrtrk_array[sq_ring->tail].quanta);
16884 +@@ -2629,6 +2630,7 @@ void irdma_generate_flush_completions(struct irdma_qp *iwqp)
16885 +
16886 + cmpl->cpi.wr_id = qp->rq_wrid_array[wqe_idx];
16887 + cmpl->cpi.op_type = IRDMA_OP_TYPE_REC;
16888 ++ cmpl->cpi.q_type = IRDMA_CQE_QTYPE_RQ;
16889 + /* remove the RQ WR by moving RQ tail */
16890 + IRDMA_RING_SET_TAIL(*rq_ring, rq_ring->tail + 1);
16891 + ibdev_dbg(iwqp->iwrcq->ibcq.device,
16892 +diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
16893 +index a22afbb25bc58..f6973ea55eda7 100644
16894 +--- a/drivers/infiniband/hw/irdma/verbs.c
16895 ++++ b/drivers/infiniband/hw/irdma/verbs.c
16896 +@@ -63,36 +63,6 @@ static int irdma_query_device(struct ib_device *ibdev,
16897 + return 0;
16898 + }
16899 +
16900 +-/**
16901 +- * irdma_get_eth_speed_and_width - Get IB port speed and width from netdev speed
16902 +- * @link_speed: netdev phy link speed
16903 +- * @active_speed: IB port speed
16904 +- * @active_width: IB port width
16905 +- */
16906 +-static void irdma_get_eth_speed_and_width(u32 link_speed, u16 *active_speed,
16907 +- u8 *active_width)
16908 +-{
16909 +- if (link_speed <= SPEED_1000) {
16910 +- *active_width = IB_WIDTH_1X;
16911 +- *active_speed = IB_SPEED_SDR;
16912 +- } else if (link_speed <= SPEED_10000) {
16913 +- *active_width = IB_WIDTH_1X;
16914 +- *active_speed = IB_SPEED_FDR10;
16915 +- } else if (link_speed <= SPEED_20000) {
16916 +- *active_width = IB_WIDTH_4X;
16917 +- *active_speed = IB_SPEED_DDR;
16918 +- } else if (link_speed <= SPEED_25000) {
16919 +- *active_width = IB_WIDTH_1X;
16920 +- *active_speed = IB_SPEED_EDR;
16921 +- } else if (link_speed <= SPEED_40000) {
16922 +- *active_width = IB_WIDTH_4X;
16923 +- *active_speed = IB_SPEED_FDR10;
16924 +- } else {
16925 +- *active_width = IB_WIDTH_4X;
16926 +- *active_speed = IB_SPEED_EDR;
16927 +- }
16928 +-}
16929 +-
16930 + /**
16931 + * irdma_query_port - get port attributes
16932 + * @ibdev: device pointer from stack
16933 +@@ -120,8 +90,9 @@ static int irdma_query_port(struct ib_device *ibdev, u32 port,
16934 + props->state = IB_PORT_DOWN;
16935 + props->phys_state = IB_PORT_PHYS_STATE_DISABLED;
16936 + }
16937 +- irdma_get_eth_speed_and_width(SPEED_100000, &props->active_speed,
16938 +- &props->active_width);
16939 ++
16940 ++ ib_get_eth_speed(ibdev, port, &props->active_speed,
16941 ++ &props->active_width);
16942 +
16943 + if (rdma_protocol_roce(ibdev, 1)) {
16944 + props->gid_tbl_len = 32;
16945 +@@ -1242,6 +1213,7 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
16946 + av->attrs = attr->ah_attr;
16947 + rdma_gid2ip((struct sockaddr *)&av->sgid_addr, &sgid_attr->gid);
16948 + rdma_gid2ip((struct sockaddr *)&av->dgid_addr, &attr->ah_attr.grh.dgid);
16949 ++ av->net_type = rdma_gid_attr_network_type(sgid_attr);
16950 + if (av->net_type == RDMA_NETWORK_IPV6) {
16951 + __be32 *daddr =
16952 + av->dgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32;
16953 +@@ -2358,9 +2330,10 @@ static bool irdma_check_mr_contiguous(struct irdma_pble_alloc *palloc,
16954 + * @rf: RDMA PCI function
16955 + * @iwmr: mr pointer for this memory registration
16956 + * @use_pbles: flag if to use pble's
16957 ++ * @lvl_1_only: request only level 1 pble if true
16958 + */
16959 + static int irdma_setup_pbles(struct irdma_pci_f *rf, struct irdma_mr *iwmr,
16960 +- bool use_pbles)
16961 ++ bool use_pbles, bool lvl_1_only)
16962 + {
16963 + struct irdma_pbl *iwpbl = &iwmr->iwpbl;
16964 + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc;
16965 +@@ -2371,7 +2344,7 @@ static int irdma_setup_pbles(struct irdma_pci_f *rf, struct irdma_mr *iwmr,
16966 +
16967 + if (use_pbles) {
16968 + status = irdma_get_pble(rf->pble_rsrc, palloc, iwmr->page_cnt,
16969 +- false);
16970 ++ lvl_1_only);
16971 + if (status)
16972 + return status;
16973 +
16974 +@@ -2414,16 +2387,10 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev,
16975 + bool ret = true;
16976 +
16977 + pg_size = iwmr->page_size;
16978 +- err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles);
16979 ++ err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles, true);
16980 + if (err)
16981 + return err;
16982 +
16983 +- if (use_pbles && palloc->level != PBLE_LEVEL_1) {
16984 +- irdma_free_pble(iwdev->rf->pble_rsrc, palloc);
16985 +- iwpbl->pbl_allocated = false;
16986 +- return -ENOMEM;
16987 +- }
16988 +-
16989 + if (use_pbles)
16990 + arr = palloc->level1.addr;
16991 +
16992 +@@ -2899,7 +2866,7 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
16993 + case IRDMA_MEMREG_TYPE_MEM:
16994 + use_pbles = (iwmr->page_cnt != 1);
16995 +
16996 +- err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles);
16997 ++ err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles, false);
16998 + if (err)
16999 + goto error;
17000 +
17001 +@@ -3165,30 +3132,20 @@ static int irdma_post_send(struct ib_qp *ibqp,
17002 + info.stag_to_inv = ib_wr->ex.invalidate_rkey;
17003 + }
17004 +
17005 +- if (ib_wr->send_flags & IB_SEND_INLINE) {
17006 +- info.op.inline_send.data = (void *)(unsigned long)
17007 +- ib_wr->sg_list[0].addr;
17008 +- info.op.inline_send.len = ib_wr->sg_list[0].length;
17009 +- if (iwqp->ibqp.qp_type == IB_QPT_UD ||
17010 +- iwqp->ibqp.qp_type == IB_QPT_GSI) {
17011 +- ah = to_iwah(ud_wr(ib_wr)->ah);
17012 +- info.op.inline_send.ah_id = ah->sc_ah.ah_info.ah_idx;
17013 +- info.op.inline_send.qkey = ud_wr(ib_wr)->remote_qkey;
17014 +- info.op.inline_send.dest_qp = ud_wr(ib_wr)->remote_qpn;
17015 +- }
17016 ++ info.op.send.num_sges = ib_wr->num_sge;
17017 ++ info.op.send.sg_list = ib_wr->sg_list;
17018 ++ if (iwqp->ibqp.qp_type == IB_QPT_UD ||
17019 ++ iwqp->ibqp.qp_type == IB_QPT_GSI) {
17020 ++ ah = to_iwah(ud_wr(ib_wr)->ah);
17021 ++ info.op.send.ah_id = ah->sc_ah.ah_info.ah_idx;
17022 ++ info.op.send.qkey = ud_wr(ib_wr)->remote_qkey;
17023 ++ info.op.send.dest_qp = ud_wr(ib_wr)->remote_qpn;
17024 ++ }
17025 ++
17026 ++ if (ib_wr->send_flags & IB_SEND_INLINE)
17027 + err = irdma_uk_inline_send(ukqp, &info, false);
17028 +- } else {
17029 +- info.op.send.num_sges = ib_wr->num_sge;
17030 +- info.op.send.sg_list = ib_wr->sg_list;
17031 +- if (iwqp->ibqp.qp_type == IB_QPT_UD ||
17032 +- iwqp->ibqp.qp_type == IB_QPT_GSI) {
17033 +- ah = to_iwah(ud_wr(ib_wr)->ah);
17034 +- info.op.send.ah_id = ah->sc_ah.ah_info.ah_idx;
17035 +- info.op.send.qkey = ud_wr(ib_wr)->remote_qkey;
17036 +- info.op.send.dest_qp = ud_wr(ib_wr)->remote_qpn;
17037 +- }
17038 ++ else
17039 + err = irdma_uk_send(ukqp, &info, false);
17040 +- }
17041 + break;
17042 + case IB_WR_RDMA_WRITE_WITH_IMM:
17043 + if (ukqp->qp_caps & IRDMA_WRITE_WITH_IMM) {
17044 +@@ -3205,22 +3162,15 @@ static int irdma_post_send(struct ib_qp *ibqp,
17045 + else
17046 + info.op_type = IRDMA_OP_TYPE_RDMA_WRITE;
17047 +
17048 +- if (ib_wr->send_flags & IB_SEND_INLINE) {
17049 +- info.op.inline_rdma_write.data = (void *)(uintptr_t)ib_wr->sg_list[0].addr;
17050 +- info.op.inline_rdma_write.len =
17051 +- ib_wr->sg_list[0].length;
17052 +- info.op.inline_rdma_write.rem_addr.addr =
17053 +- rdma_wr(ib_wr)->remote_addr;
17054 +- info.op.inline_rdma_write.rem_addr.lkey =
17055 +- rdma_wr(ib_wr)->rkey;
17056 ++ info.op.rdma_write.num_lo_sges = ib_wr->num_sge;
17057 ++ info.op.rdma_write.lo_sg_list = ib_wr->sg_list;
17058 ++ info.op.rdma_write.rem_addr.addr =
17059 ++ rdma_wr(ib_wr)->remote_addr;
17060 ++ info.op.rdma_write.rem_addr.lkey = rdma_wr(ib_wr)->rkey;
17061 ++ if (ib_wr->send_flags & IB_SEND_INLINE)
17062 + err = irdma_uk_inline_rdma_write(ukqp, &info, false);
17063 +- } else {
17064 +- info.op.rdma_write.lo_sg_list = (void *)ib_wr->sg_list;
17065 +- info.op.rdma_write.num_lo_sges = ib_wr->num_sge;
17066 +- info.op.rdma_write.rem_addr.addr = rdma_wr(ib_wr)->remote_addr;
17067 +- info.op.rdma_write.rem_addr.lkey = rdma_wr(ib_wr)->rkey;
17068 ++ else
17069 + err = irdma_uk_rdma_write(ukqp, &info, false);
17070 +- }
17071 + break;
17072 + case IB_WR_RDMA_READ_WITH_INV:
17073 + inv_stag = true;
17074 +@@ -3380,7 +3330,6 @@ static enum ib_wc_status irdma_flush_err_to_ib_wc_status(enum irdma_flush_opcode
17075 + static void irdma_process_cqe(struct ib_wc *entry,
17076 + struct irdma_cq_poll_info *cq_poll_info)
17077 + {
17078 +- struct irdma_qp *iwqp;
17079 + struct irdma_sc_qp *qp;
17080 +
17081 + entry->wc_flags = 0;
17082 +@@ -3388,7 +3337,6 @@ static void irdma_process_cqe(struct ib_wc *entry,
17083 + entry->wr_id = cq_poll_info->wr_id;
17084 +
17085 + qp = cq_poll_info->qp_handle;
17086 +- iwqp = qp->qp_uk.back_qp;
17087 + entry->qp = qp->qp_uk.back_qp;
17088 +
17089 + if (cq_poll_info->error) {
17090 +@@ -3421,42 +3369,17 @@ static void irdma_process_cqe(struct ib_wc *entry,
17091 + }
17092 + }
17093 +
17094 +- switch (cq_poll_info->op_type) {
17095 +- case IRDMA_OP_TYPE_RDMA_WRITE:
17096 +- case IRDMA_OP_TYPE_RDMA_WRITE_SOL:
17097 +- entry->opcode = IB_WC_RDMA_WRITE;
17098 +- break;
17099 +- case IRDMA_OP_TYPE_RDMA_READ_INV_STAG:
17100 +- case IRDMA_OP_TYPE_RDMA_READ:
17101 +- entry->opcode = IB_WC_RDMA_READ;
17102 +- break;
17103 +- case IRDMA_OP_TYPE_SEND_INV:
17104 +- case IRDMA_OP_TYPE_SEND_SOL:
17105 +- case IRDMA_OP_TYPE_SEND_SOL_INV:
17106 +- case IRDMA_OP_TYPE_SEND:
17107 +- entry->opcode = IB_WC_SEND;
17108 +- break;
17109 +- case IRDMA_OP_TYPE_FAST_REG_NSMR:
17110 +- entry->opcode = IB_WC_REG_MR;
17111 +- break;
17112 +- case IRDMA_OP_TYPE_INV_STAG:
17113 +- entry->opcode = IB_WC_LOCAL_INV;
17114 +- break;
17115 +- case IRDMA_OP_TYPE_REC_IMM:
17116 +- case IRDMA_OP_TYPE_REC:
17117 +- entry->opcode = cq_poll_info->op_type == IRDMA_OP_TYPE_REC_IMM ?
17118 +- IB_WC_RECV_RDMA_WITH_IMM : IB_WC_RECV;
17119 ++ if (cq_poll_info->q_type == IRDMA_CQE_QTYPE_SQ) {
17120 ++ set_ib_wc_op_sq(cq_poll_info, entry);
17121 ++ } else {
17122 ++ set_ib_wc_op_rq(cq_poll_info, entry,
17123 ++ qp->qp_uk.qp_caps & IRDMA_SEND_WITH_IMM ?
17124 ++ true : false);
17125 + if (qp->qp_uk.qp_type != IRDMA_QP_TYPE_ROCE_UD &&
17126 + cq_poll_info->stag_invalid_set) {
17127 + entry->ex.invalidate_rkey = cq_poll_info->inv_stag;
17128 + entry->wc_flags |= IB_WC_WITH_INVALIDATE;
17129 + }
17130 +- break;
17131 +- default:
17132 +- ibdev_err(&iwqp->iwdev->ibdev,
17133 +- "Invalid opcode = %d in CQE\n", cq_poll_info->op_type);
17134 +- entry->status = IB_WC_GENERAL_ERR;
17135 +- return;
17136 + }
17137 +
17138 + if (qp->qp_uk.qp_type == IRDMA_QP_TYPE_ROCE_UD) {
17139 +diff --git a/drivers/infiniband/hw/irdma/verbs.h b/drivers/infiniband/hw/irdma/verbs.h
17140 +index 4309b7159f42c..a536e9fa85ebf 100644
17141 +--- a/drivers/infiniband/hw/irdma/verbs.h
17142 ++++ b/drivers/infiniband/hw/irdma/verbs.h
17143 +@@ -232,6 +232,59 @@ static inline u16 irdma_fw_minor_ver(struct irdma_sc_dev *dev)
17144 + return (u16)FIELD_GET(IRDMA_FW_VER_MINOR, dev->feature_info[IRDMA_FEATURE_FW_INFO]);
17145 + }
17146 +
17147 ++static inline void set_ib_wc_op_sq(struct irdma_cq_poll_info *cq_poll_info,
17148 ++ struct ib_wc *entry)
17149 ++{
17150 ++ switch (cq_poll_info->op_type) {
17151 ++ case IRDMA_OP_TYPE_RDMA_WRITE:
17152 ++ case IRDMA_OP_TYPE_RDMA_WRITE_SOL:
17153 ++ entry->opcode = IB_WC_RDMA_WRITE;
17154 ++ break;
17155 ++ case IRDMA_OP_TYPE_RDMA_READ_INV_STAG:
17156 ++ case IRDMA_OP_TYPE_RDMA_READ:
17157 ++ entry->opcode = IB_WC_RDMA_READ;
17158 ++ break;
17159 ++ case IRDMA_OP_TYPE_SEND_SOL:
17160 ++ case IRDMA_OP_TYPE_SEND_SOL_INV:
17161 ++ case IRDMA_OP_TYPE_SEND_INV:
17162 ++ case IRDMA_OP_TYPE_SEND:
17163 ++ entry->opcode = IB_WC_SEND;
17164 ++ break;
17165 ++ case IRDMA_OP_TYPE_FAST_REG_NSMR:
17166 ++ entry->opcode = IB_WC_REG_MR;
17167 ++ break;
17168 ++ case IRDMA_OP_TYPE_INV_STAG:
17169 ++ entry->opcode = IB_WC_LOCAL_INV;
17170 ++ break;
17171 ++ default:
17172 ++ entry->status = IB_WC_GENERAL_ERR;
17173 ++ }
17174 ++}
17175 ++
17176 ++static inline void set_ib_wc_op_rq(struct irdma_cq_poll_info *cq_poll_info,
17177 ++ struct ib_wc *entry, bool send_imm_support)
17178 ++{
17179 ++ /**
17180 ++ * iWARP does not support sendImm, so the presence of Imm data
17181 ++ * must be WriteImm.
17182 ++ */
17183 ++ if (!send_imm_support) {
17184 ++ entry->opcode = cq_poll_info->imm_valid ?
17185 ++ IB_WC_RECV_RDMA_WITH_IMM :
17186 ++ IB_WC_RECV;
17187 ++ return;
17188 ++ }
17189 ++
17190 ++ switch (cq_poll_info->op_type) {
17191 ++ case IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE:
17192 ++ case IB_OPCODE_RDMA_WRITE_LAST_WITH_IMMEDIATE:
17193 ++ entry->opcode = IB_WC_RECV_RDMA_WITH_IMM;
17194 ++ break;
17195 ++ default:
17196 ++ entry->opcode = IB_WC_RECV;
17197 ++ }
17198 ++}
17199 ++
17200 + void irdma_mcast_mac(u32 *ip_addr, u8 *mac, bool ipv4);
17201 + int irdma_ib_register_device(struct irdma_device *iwdev);
17202 + void irdma_ib_unregister_device(struct irdma_device *iwdev);
17203 +diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
17204 +index 502e9ada99b30..80e2d631fdb24 100644
17205 +--- a/drivers/infiniband/sw/rxe/rxe_mr.c
17206 ++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
17207 +@@ -99,6 +99,7 @@ err2:
17208 + kfree(mr->map[i]);
17209 +
17210 + kfree(mr->map);
17211 ++ mr->map = NULL;
17212 + err1:
17213 + return -ENOMEM;
17214 + }
17215 +@@ -122,7 +123,6 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
17216 + int num_buf;
17217 + void *vaddr;
17218 + int err;
17219 +- int i;
17220 +
17221 + umem = ib_umem_get(&rxe->ib_dev, start, length, access);
17222 + if (IS_ERR(umem)) {
17223 +@@ -163,9 +163,8 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
17224 + pr_warn("%s: Unable to get virtual address\n",
17225 + __func__);
17226 + err = -ENOMEM;
17227 +- goto err_cleanup_map;
17228 ++ goto err_release_umem;
17229 + }
17230 +-
17231 + buf->addr = (uintptr_t)vaddr;
17232 + buf->size = PAGE_SIZE;
17233 + num_buf++;
17234 +@@ -182,10 +181,6 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
17235 +
17236 + return 0;
17237 +
17238 +-err_cleanup_map:
17239 +- for (i = 0; i < mr->num_map; i++)
17240 +- kfree(mr->map[i]);
17241 +- kfree(mr->map);
17242 + err_release_umem:
17243 + ib_umem_release(umem);
17244 + err_out:
17245 +diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
17246 +index a62bab88415cb..e459fb542b83a 100644
17247 +--- a/drivers/infiniband/sw/rxe/rxe_qp.c
17248 ++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
17249 +@@ -829,12 +829,12 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
17250 + if (qp->resp.mr)
17251 + rxe_put(qp->resp.mr);
17252 +
17253 +- if (qp_type(qp) == IB_QPT_RC)
17254 +- sk_dst_reset(qp->sk->sk);
17255 +-
17256 + free_rd_atomic_resources(qp);
17257 +
17258 + if (qp->sk) {
17259 ++ if (qp_type(qp) == IB_QPT_RC)
17260 ++ sk_dst_reset(qp->sk->sk);
17261 ++
17262 + kernel_sock_shutdown(qp->sk, SHUT_RDWR);
17263 + sock_release(qp->sk);
17264 + }
17265 +diff --git a/drivers/infiniband/sw/siw/siw_cq.c b/drivers/infiniband/sw/siw/siw_cq.c
17266 +index d68e37859e73b..403029de6b92d 100644
17267 +--- a/drivers/infiniband/sw/siw/siw_cq.c
17268 ++++ b/drivers/infiniband/sw/siw/siw_cq.c
17269 +@@ -56,8 +56,6 @@ int siw_reap_cqe(struct siw_cq *cq, struct ib_wc *wc)
17270 + if (READ_ONCE(cqe->flags) & SIW_WQE_VALID) {
17271 + memset(wc, 0, sizeof(*wc));
17272 + wc->wr_id = cqe->id;
17273 +- wc->status = map_cqe_status[cqe->status].ib;
17274 +- wc->opcode = map_wc_opcode[cqe->opcode];
17275 + wc->byte_len = cqe->bytes;
17276 +
17277 + /*
17278 +@@ -71,10 +69,32 @@ int siw_reap_cqe(struct siw_cq *cq, struct ib_wc *wc)
17279 + wc->wc_flags = IB_WC_WITH_INVALIDATE;
17280 + }
17281 + wc->qp = cqe->base_qp;
17282 ++ wc->opcode = map_wc_opcode[cqe->opcode];
17283 ++ wc->status = map_cqe_status[cqe->status].ib;
17284 + siw_dbg_cq(cq,
17285 + "idx %u, type %d, flags %2x, id 0x%pK\n",
17286 + cq->cq_get % cq->num_cqe, cqe->opcode,
17287 + cqe->flags, (void *)(uintptr_t)cqe->id);
17288 ++ } else {
17289 ++ /*
17290 ++ * A malicious user may set invalid opcode or
17291 ++ * status in the user mmapped CQE array.
17292 ++ * Sanity check and correct values in that case
17293 ++ * to avoid out-of-bounds access to global arrays
17294 ++ * for opcode and status mapping.
17295 ++ */
17296 ++ u8 opcode = cqe->opcode;
17297 ++ u16 status = cqe->status;
17298 ++
17299 ++ if (opcode >= SIW_NUM_OPCODES) {
17300 ++ opcode = 0;
17301 ++ status = SIW_WC_GENERAL_ERR;
17302 ++ } else if (status >= SIW_NUM_WC_STATUS) {
17303 ++ status = SIW_WC_GENERAL_ERR;
17304 ++ }
17305 ++ wc->opcode = map_wc_opcode[opcode];
17306 ++ wc->status = map_cqe_status[status].ib;
17307 ++
17308 + }
17309 + WRITE_ONCE(cqe->flags, 0);
17310 + cq->cq_get++;
17311 +diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
17312 +index 7d47b521070b1..05052b49107f2 100644
17313 +--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
17314 ++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
17315 +@@ -29,7 +29,7 @@ static struct page *siw_get_pblpage(struct siw_mem *mem, u64 addr, int *idx)
17316 + dma_addr_t paddr = siw_pbl_get_buffer(pbl, offset, NULL, idx);
17317 +
17318 + if (paddr)
17319 +- return virt_to_page((void *)paddr);
17320 ++ return virt_to_page((void *)(uintptr_t)paddr);
17321 +
17322 + return NULL;
17323 + }
17324 +diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
17325 +index 3e814cfb298cf..906fde1a2a0de 100644
17326 +--- a/drivers/infiniband/sw/siw/siw_verbs.c
17327 ++++ b/drivers/infiniband/sw/siw/siw_verbs.c
17328 +@@ -676,13 +676,45 @@ static int siw_copy_inline_sgl(const struct ib_send_wr *core_wr,
17329 + static int siw_sq_flush_wr(struct siw_qp *qp, const struct ib_send_wr *wr,
17330 + const struct ib_send_wr **bad_wr)
17331 + {
17332 +- struct siw_sqe sqe = {};
17333 + int rv = 0;
17334 +
17335 + while (wr) {
17336 +- sqe.id = wr->wr_id;
17337 +- sqe.opcode = wr->opcode;
17338 +- rv = siw_sqe_complete(qp, &sqe, 0, SIW_WC_WR_FLUSH_ERR);
17339 ++ struct siw_sqe sqe = {};
17340 ++
17341 ++ switch (wr->opcode) {
17342 ++ case IB_WR_RDMA_WRITE:
17343 ++ sqe.opcode = SIW_OP_WRITE;
17344 ++ break;
17345 ++ case IB_WR_RDMA_READ:
17346 ++ sqe.opcode = SIW_OP_READ;
17347 ++ break;
17348 ++ case IB_WR_RDMA_READ_WITH_INV:
17349 ++ sqe.opcode = SIW_OP_READ_LOCAL_INV;
17350 ++ break;
17351 ++ case IB_WR_SEND:
17352 ++ sqe.opcode = SIW_OP_SEND;
17353 ++ break;
17354 ++ case IB_WR_SEND_WITH_IMM:
17355 ++ sqe.opcode = SIW_OP_SEND_WITH_IMM;
17356 ++ break;
17357 ++ case IB_WR_SEND_WITH_INV:
17358 ++ sqe.opcode = SIW_OP_SEND_REMOTE_INV;
17359 ++ break;
17360 ++ case IB_WR_LOCAL_INV:
17361 ++ sqe.opcode = SIW_OP_INVAL_STAG;
17362 ++ break;
17363 ++ case IB_WR_REG_MR:
17364 ++ sqe.opcode = SIW_OP_REG_MR;
17365 ++ break;
17366 ++ default:
17367 ++ rv = -EINVAL;
17368 ++ break;
17369 ++ }
17370 ++ if (!rv) {
17371 ++ sqe.id = wr->wr_id;
17372 ++ rv = siw_sqe_complete(qp, &sqe, 0,
17373 ++ SIW_WC_WR_FLUSH_ERR);
17374 ++ }
17375 + if (rv) {
17376 + if (bad_wr)
17377 + *bad_wr = wr;
17378 +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
17379 +index ea16ba5d8da6c..9ad8d98562752 100644
17380 +--- a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
17381 ++++ b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
17382 +@@ -41,6 +41,11 @@ static const struct nla_policy ipoib_policy[IFLA_IPOIB_MAX + 1] = {
17383 + [IFLA_IPOIB_UMCAST] = { .type = NLA_U16 },
17384 + };
17385 +
17386 ++static unsigned int ipoib_get_max_num_queues(void)
17387 ++{
17388 ++ return min_t(unsigned int, num_possible_cpus(), 128);
17389 ++}
17390 ++
17391 + static int ipoib_fill_info(struct sk_buff *skb, const struct net_device *dev)
17392 + {
17393 + struct ipoib_dev_priv *priv = ipoib_priv(dev);
17394 +@@ -172,6 +177,8 @@ static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
17395 + .changelink = ipoib_changelink,
17396 + .get_size = ipoib_get_size,
17397 + .fill_info = ipoib_fill_info,
17398 ++ .get_num_rx_queues = ipoib_get_max_num_queues,
17399 ++ .get_num_tx_queues = ipoib_get_max_num_queues,
17400 + };
17401 +
17402 + struct rtnl_link_ops *ipoib_get_link_ops(void)
17403 +diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
17404 +index 1075c2ac8fe20..b4d6a4a5ae81e 100644
17405 +--- a/drivers/infiniband/ulp/srp/ib_srp.c
17406 ++++ b/drivers/infiniband/ulp/srp/ib_srp.c
17407 +@@ -3410,7 +3410,8 @@ static int srp_parse_options(struct net *net, const char *buf,
17408 + break;
17409 +
17410 + case SRP_OPT_PKEY:
17411 +- if (match_hex(args, &token)) {
17412 ++ ret = match_hex(args, &token);
17413 ++ if (ret) {
17414 + pr_warn("bad P_Key parameter '%s'\n", p);
17415 + goto out;
17416 + }
17417 +@@ -3470,7 +3471,8 @@ static int srp_parse_options(struct net *net, const char *buf,
17418 + break;
17419 +
17420 + case SRP_OPT_MAX_SECT:
17421 +- if (match_int(args, &token)) {
17422 ++ ret = match_int(args, &token);
17423 ++ if (ret) {
17424 + pr_warn("bad max sect parameter '%s'\n", p);
17425 + goto out;
17426 + }
17427 +@@ -3478,8 +3480,15 @@ static int srp_parse_options(struct net *net, const char *buf,
17428 + break;
17429 +
17430 + case SRP_OPT_QUEUE_SIZE:
17431 +- if (match_int(args, &token) || token < 1) {
17432 ++ ret = match_int(args, &token);
17433 ++ if (ret) {
17434 ++ pr_warn("match_int() failed for queue_size parameter '%s', Error %d\n",
17435 ++ p, ret);
17436 ++ goto out;
17437 ++ }
17438 ++ if (token < 1) {
17439 + pr_warn("bad queue_size parameter '%s'\n", p);
17440 ++ ret = -EINVAL;
17441 + goto out;
17442 + }
17443 + target->scsi_host->can_queue = token;
17444 +@@ -3490,25 +3499,40 @@ static int srp_parse_options(struct net *net, const char *buf,
17445 + break;
17446 +
17447 + case SRP_OPT_MAX_CMD_PER_LUN:
17448 +- if (match_int(args, &token) || token < 1) {
17449 ++ ret = match_int(args, &token);
17450 ++ if (ret) {
17451 ++ pr_warn("match_int() failed for max cmd_per_lun parameter '%s', Error %d\n",
17452 ++ p, ret);
17453 ++ goto out;
17454 ++ }
17455 ++ if (token < 1) {
17456 + pr_warn("bad max cmd_per_lun parameter '%s'\n",
17457 + p);
17458 ++ ret = -EINVAL;
17459 + goto out;
17460 + }
17461 + target->scsi_host->cmd_per_lun = token;
17462 + break;
17463 +
17464 + case SRP_OPT_TARGET_CAN_QUEUE:
17465 +- if (match_int(args, &token) || token < 1) {
17466 ++ ret = match_int(args, &token);
17467 ++ if (ret) {
17468 ++ pr_warn("match_int() failed for max target_can_queue parameter '%s', Error %d\n",
17469 ++ p, ret);
17470 ++ goto out;
17471 ++ }
17472 ++ if (token < 1) {
17473 + pr_warn("bad max target_can_queue parameter '%s'\n",
17474 + p);
17475 ++ ret = -EINVAL;
17476 + goto out;
17477 + }
17478 + target->target_can_queue = token;
17479 + break;
17480 +
17481 + case SRP_OPT_IO_CLASS:
17482 +- if (match_hex(args, &token)) {
17483 ++ ret = match_hex(args, &token);
17484 ++ if (ret) {
17485 + pr_warn("bad IO class parameter '%s'\n", p);
17486 + goto out;
17487 + }
17488 +@@ -3517,6 +3541,7 @@ static int srp_parse_options(struct net *net, const char *buf,
17489 + pr_warn("unknown IO class parameter value %x specified (use %x or %x).\n",
17490 + token, SRP_REV10_IB_IO_CLASS,
17491 + SRP_REV16A_IB_IO_CLASS);
17492 ++ ret = -EINVAL;
17493 + goto out;
17494 + }
17495 + target->io_class = token;
17496 +@@ -3539,16 +3564,24 @@ static int srp_parse_options(struct net *net, const char *buf,
17497 + break;
17498 +
17499 + case SRP_OPT_CMD_SG_ENTRIES:
17500 +- if (match_int(args, &token) || token < 1 || token > 255) {
17501 ++ ret = match_int(args, &token);
17502 ++ if (ret) {
17503 ++ pr_warn("match_int() failed for max cmd_sg_entries parameter '%s', Error %d\n",
17504 ++ p, ret);
17505 ++ goto out;
17506 ++ }
17507 ++ if (token < 1 || token > 255) {
17508 + pr_warn("bad max cmd_sg_entries parameter '%s'\n",
17509 + p);
17510 ++ ret = -EINVAL;
17511 + goto out;
17512 + }
17513 + target->cmd_sg_cnt = token;
17514 + break;
17515 +
17516 + case SRP_OPT_ALLOW_EXT_SG:
17517 +- if (match_int(args, &token)) {
17518 ++ ret = match_int(args, &token);
17519 ++ if (ret) {
17520 + pr_warn("bad allow_ext_sg parameter '%s'\n", p);
17521 + goto out;
17522 + }
17523 +@@ -3556,43 +3589,77 @@ static int srp_parse_options(struct net *net, const char *buf,
17524 + break;
17525 +
17526 + case SRP_OPT_SG_TABLESIZE:
17527 +- if (match_int(args, &token) || token < 1 ||
17528 +- token > SG_MAX_SEGMENTS) {
17529 ++ ret = match_int(args, &token);
17530 ++ if (ret) {
17531 ++ pr_warn("match_int() failed for max sg_tablesize parameter '%s', Error %d\n",
17532 ++ p, ret);
17533 ++ goto out;
17534 ++ }
17535 ++ if (token < 1 || token > SG_MAX_SEGMENTS) {
17536 + pr_warn("bad max sg_tablesize parameter '%s'\n",
17537 + p);
17538 ++ ret = -EINVAL;
17539 + goto out;
17540 + }
17541 + target->sg_tablesize = token;
17542 + break;
17543 +
17544 + case SRP_OPT_COMP_VECTOR:
17545 +- if (match_int(args, &token) || token < 0) {
17546 ++ ret = match_int(args, &token);
17547 ++ if (ret) {
17548 ++ pr_warn("match_int() failed for comp_vector parameter '%s', Error %d\n",
17549 ++ p, ret);
17550 ++ goto out;
17551 ++ }
17552 ++ if (token < 0) {
17553 + pr_warn("bad comp_vector parameter '%s'\n", p);
17554 ++ ret = -EINVAL;
17555 + goto out;
17556 + }
17557 + target->comp_vector = token;
17558 + break;
17559 +
17560 + case SRP_OPT_TL_RETRY_COUNT:
17561 +- if (match_int(args, &token) || token < 2 || token > 7) {
17562 ++ ret = match_int(args, &token);
17563 ++ if (ret) {
17564 ++ pr_warn("match_int() failed for tl_retry_count parameter '%s', Error %d\n",
17565 ++ p, ret);
17566 ++ goto out;
17567 ++ }
17568 ++ if (token < 2 || token > 7) {
17569 + pr_warn("bad tl_retry_count parameter '%s' (must be a number between 2 and 7)\n",
17570 + p);
17571 ++ ret = -EINVAL;
17572 + goto out;
17573 + }
17574 + target->tl_retry_count = token;
17575 + break;
17576 +
17577 + case SRP_OPT_MAX_IT_IU_SIZE:
17578 +- if (match_int(args, &token) || token < 0) {
17579 ++ ret = match_int(args, &token);
17580 ++ if (ret) {
17581 ++ pr_warn("match_int() failed for max it_iu_size parameter '%s', Error %d\n",
17582 ++ p, ret);
17583 ++ goto out;
17584 ++ }
17585 ++ if (token < 0) {
17586 + pr_warn("bad maximum initiator to target IU size '%s'\n", p);
17587 ++ ret = -EINVAL;
17588 + goto out;
17589 + }
17590 + target->max_it_iu_size = token;
17591 + break;
17592 +
17593 + case SRP_OPT_CH_COUNT:
17594 +- if (match_int(args, &token) || token < 1) {
17595 ++ ret = match_int(args, &token);
17596 ++ if (ret) {
17597 ++ pr_warn("match_int() failed for channel count parameter '%s', Error %d\n",
17598 ++ p, ret);
17599 ++ goto out;
17600 ++ }
17601 ++ if (token < 1) {
17602 + pr_warn("bad channel count %s\n", p);
17603 ++ ret = -EINVAL;
17604 + goto out;
17605 + }
17606 + target->ch_count = token;
17607 +@@ -3601,6 +3668,7 @@ static int srp_parse_options(struct net *net, const char *buf,
17608 + default:
17609 + pr_warn("unknown parameter or missing value '%s' in target creation request\n",
17610 + p);
17611 ++ ret = -EINVAL;
17612 + goto out;
17613 + }
17614 + }
17615 +diff --git a/drivers/input/joystick/Kconfig b/drivers/input/joystick/Kconfig
17616 +index 9dcf3f51f2dd9..04ca3d1c28162 100644
17617 +--- a/drivers/input/joystick/Kconfig
17618 ++++ b/drivers/input/joystick/Kconfig
17619 +@@ -46,6 +46,7 @@ config JOYSTICK_A3D
17620 + config JOYSTICK_ADC
17621 + tristate "Simple joystick connected over ADC"
17622 + depends on IIO
17623 ++ select IIO_BUFFER
17624 + select IIO_BUFFER_CB
17625 + help
17626 + Say Y here if you have a simple joystick connected over ADC.
17627 +diff --git a/drivers/input/misc/Kconfig b/drivers/input/misc/Kconfig
17628 +index 9f088900f863b..fa942651619d2 100644
17629 +--- a/drivers/input/misc/Kconfig
17630 ++++ b/drivers/input/misc/Kconfig
17631 +@@ -330,7 +330,7 @@ config INPUT_CPCAP_PWRBUTTON
17632 +
17633 + config INPUT_WISTRON_BTNS
17634 + tristate "x86 Wistron laptop button interface"
17635 +- depends on X86_32
17636 ++ depends on X86_32 && !UML
17637 + select INPUT_SPARSEKMAP
17638 + select NEW_LEDS
17639 + select LEDS_CLASS
17640 +diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
17641 +index ddb863bf63eec..e47ab6c1177f5 100644
17642 +--- a/drivers/input/misc/iqs7222.c
17643 ++++ b/drivers/input/misc/iqs7222.c
17644 +@@ -86,7 +86,9 @@ enum iqs7222_reg_key_id {
17645 + IQS7222_REG_KEY_TOUCH,
17646 + IQS7222_REG_KEY_DEBOUNCE,
17647 + IQS7222_REG_KEY_TAP,
17648 ++ IQS7222_REG_KEY_TAP_LEGACY,
17649 + IQS7222_REG_KEY_AXIAL,
17650 ++ IQS7222_REG_KEY_AXIAL_LEGACY,
17651 + IQS7222_REG_KEY_WHEEL,
17652 + IQS7222_REG_KEY_NO_WHEEL,
17653 + IQS7222_REG_KEY_RESERVED
17654 +@@ -105,14 +107,14 @@ enum iqs7222_reg_grp_id {
17655 + IQS7222_NUM_REG_GRPS
17656 + };
17657 +
17658 +-static const char * const iqs7222_reg_grp_names[] = {
17659 ++static const char * const iqs7222_reg_grp_names[IQS7222_NUM_REG_GRPS] = {
17660 + [IQS7222_REG_GRP_CYCLE] = "cycle",
17661 + [IQS7222_REG_GRP_CHAN] = "channel",
17662 + [IQS7222_REG_GRP_SLDR] = "slider",
17663 + [IQS7222_REG_GRP_GPIO] = "gpio",
17664 + };
17665 +
17666 +-static const unsigned int iqs7222_max_cols[] = {
17667 ++static const unsigned int iqs7222_max_cols[IQS7222_NUM_REG_GRPS] = {
17668 + [IQS7222_REG_GRP_STAT] = IQS7222_MAX_COLS_STAT,
17669 + [IQS7222_REG_GRP_CYCLE] = IQS7222_MAX_COLS_CYCLE,
17670 + [IQS7222_REG_GRP_GLBL] = IQS7222_MAX_COLS_GLBL,
17671 +@@ -202,10 +204,68 @@ struct iqs7222_dev_desc {
17672 + int allow_offset;
17673 + int event_offset;
17674 + int comms_offset;
17675 ++ bool legacy_gesture;
17676 + struct iqs7222_reg_grp_desc reg_grps[IQS7222_NUM_REG_GRPS];
17677 + };
17678 +
17679 + static const struct iqs7222_dev_desc iqs7222_devs[] = {
17680 ++ {
17681 ++ .prod_num = IQS7222_PROD_NUM_A,
17682 ++ .fw_major = 1,
17683 ++ .fw_minor = 13,
17684 ++ .sldr_res = U8_MAX * 16,
17685 ++ .touch_link = 1768,
17686 ++ .allow_offset = 9,
17687 ++ .event_offset = 10,
17688 ++ .comms_offset = 12,
17689 ++ .reg_grps = {
17690 ++ [IQS7222_REG_GRP_STAT] = {
17691 ++ .base = IQS7222_SYS_STATUS,
17692 ++ .num_row = 1,
17693 ++ .num_col = 8,
17694 ++ },
17695 ++ [IQS7222_REG_GRP_CYCLE] = {
17696 ++ .base = 0x8000,
17697 ++ .num_row = 7,
17698 ++ .num_col = 3,
17699 ++ },
17700 ++ [IQS7222_REG_GRP_GLBL] = {
17701 ++ .base = 0x8700,
17702 ++ .num_row = 1,
17703 ++ .num_col = 3,
17704 ++ },
17705 ++ [IQS7222_REG_GRP_BTN] = {
17706 ++ .base = 0x9000,
17707 ++ .num_row = 12,
17708 ++ .num_col = 3,
17709 ++ },
17710 ++ [IQS7222_REG_GRP_CHAN] = {
17711 ++ .base = 0xA000,
17712 ++ .num_row = 12,
17713 ++ .num_col = 6,
17714 ++ },
17715 ++ [IQS7222_REG_GRP_FILT] = {
17716 ++ .base = 0xAC00,
17717 ++ .num_row = 1,
17718 ++ .num_col = 2,
17719 ++ },
17720 ++ [IQS7222_REG_GRP_SLDR] = {
17721 ++ .base = 0xB000,
17722 ++ .num_row = 2,
17723 ++ .num_col = 11,
17724 ++ },
17725 ++ [IQS7222_REG_GRP_GPIO] = {
17726 ++ .base = 0xC000,
17727 ++ .num_row = 1,
17728 ++ .num_col = 3,
17729 ++ },
17730 ++ [IQS7222_REG_GRP_SYS] = {
17731 ++ .base = IQS7222_SYS_SETUP,
17732 ++ .num_row = 1,
17733 ++ .num_col = 13,
17734 ++ },
17735 ++ },
17736 ++ },
17737 + {
17738 + .prod_num = IQS7222_PROD_NUM_A,
17739 + .fw_major = 1,
17740 +@@ -215,6 +275,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
17741 + .allow_offset = 9,
17742 + .event_offset = 10,
17743 + .comms_offset = 12,
17744 ++ .legacy_gesture = true,
17745 + .reg_grps = {
17746 + [IQS7222_REG_GRP_STAT] = {
17747 + .base = IQS7222_SYS_STATUS,
17748 +@@ -874,6 +935,16 @@ static const struct iqs7222_prop_desc iqs7222_props[] = {
17749 + .reg_offset = 9,
17750 + .reg_shift = 8,
17751 + .reg_width = 8,
17752 ++ .val_pitch = 16,
17753 ++ .label = "maximum gesture time",
17754 ++ },
17755 ++ {
17756 ++ .name = "azoteq,gesture-max-ms",
17757 ++ .reg_grp = IQS7222_REG_GRP_SLDR,
17758 ++ .reg_key = IQS7222_REG_KEY_TAP_LEGACY,
17759 ++ .reg_offset = 9,
17760 ++ .reg_shift = 8,
17761 ++ .reg_width = 8,
17762 + .val_pitch = 4,
17763 + .label = "maximum gesture time",
17764 + },
17765 +@@ -884,6 +955,16 @@ static const struct iqs7222_prop_desc iqs7222_props[] = {
17766 + .reg_offset = 9,
17767 + .reg_shift = 3,
17768 + .reg_width = 5,
17769 ++ .val_pitch = 16,
17770 ++ .label = "minimum gesture time",
17771 ++ },
17772 ++ {
17773 ++ .name = "azoteq,gesture-min-ms",
17774 ++ .reg_grp = IQS7222_REG_GRP_SLDR,
17775 ++ .reg_key = IQS7222_REG_KEY_TAP_LEGACY,
17776 ++ .reg_offset = 9,
17777 ++ .reg_shift = 3,
17778 ++ .reg_width = 5,
17779 + .val_pitch = 4,
17780 + .label = "minimum gesture time",
17781 + },
17782 +@@ -897,6 +978,16 @@ static const struct iqs7222_prop_desc iqs7222_props[] = {
17783 + .val_pitch = 16,
17784 + .label = "gesture distance",
17785 + },
17786 ++ {
17787 ++ .name = "azoteq,gesture-dist",
17788 ++ .reg_grp = IQS7222_REG_GRP_SLDR,
17789 ++ .reg_key = IQS7222_REG_KEY_AXIAL_LEGACY,
17790 ++ .reg_offset = 10,
17791 ++ .reg_shift = 8,
17792 ++ .reg_width = 8,
17793 ++ .val_pitch = 16,
17794 ++ .label = "gesture distance",
17795 ++ },
17796 + {
17797 + .name = "azoteq,gesture-max-ms",
17798 + .reg_grp = IQS7222_REG_GRP_SLDR,
17799 +@@ -904,6 +995,16 @@ static const struct iqs7222_prop_desc iqs7222_props[] = {
17800 + .reg_offset = 10,
17801 + .reg_shift = 0,
17802 + .reg_width = 8,
17803 ++ .val_pitch = 16,
17804 ++ .label = "maximum gesture time",
17805 ++ },
17806 ++ {
17807 ++ .name = "azoteq,gesture-max-ms",
17808 ++ .reg_grp = IQS7222_REG_GRP_SLDR,
17809 ++ .reg_key = IQS7222_REG_KEY_AXIAL_LEGACY,
17810 ++ .reg_offset = 10,
17811 ++ .reg_shift = 0,
17812 ++ .reg_width = 8,
17813 + .val_pitch = 4,
17814 + .label = "maximum gesture time",
17815 + },
17816 +@@ -1567,56 +1668,17 @@ static int iqs7222_gpio_select(struct iqs7222_private *iqs7222,
17817 + }
17818 +
17819 + static int iqs7222_parse_props(struct iqs7222_private *iqs7222,
17820 +- struct fwnode_handle **child_node,
17821 +- int child_index,
17822 ++ struct fwnode_handle *reg_grp_node,
17823 ++ int reg_grp_index,
17824 + enum iqs7222_reg_grp_id reg_grp,
17825 + enum iqs7222_reg_key_id reg_key)
17826 + {
17827 +- u16 *setup = iqs7222_setup(iqs7222, reg_grp, child_index);
17828 ++ u16 *setup = iqs7222_setup(iqs7222, reg_grp, reg_grp_index);
17829 + struct i2c_client *client = iqs7222->client;
17830 +- struct fwnode_handle *reg_grp_node;
17831 +- char reg_grp_name[16];
17832 + int i;
17833 +
17834 +- switch (reg_grp) {
17835 +- case IQS7222_REG_GRP_CYCLE:
17836 +- case IQS7222_REG_GRP_CHAN:
17837 +- case IQS7222_REG_GRP_SLDR:
17838 +- case IQS7222_REG_GRP_GPIO:
17839 +- case IQS7222_REG_GRP_BTN:
17840 +- /*
17841 +- * These groups derive a child node and return it to the caller
17842 +- * for additional group-specific processing. In some cases, the
17843 +- * child node may have already been derived.
17844 +- */
17845 +- reg_grp_node = *child_node;
17846 +- if (reg_grp_node)
17847 +- break;
17848 +-
17849 +- snprintf(reg_grp_name, sizeof(reg_grp_name), "%s-%d",
17850 +- iqs7222_reg_grp_names[reg_grp], child_index);
17851 +-
17852 +- reg_grp_node = device_get_named_child_node(&client->dev,
17853 +- reg_grp_name);
17854 +- if (!reg_grp_node)
17855 +- return 0;
17856 +-
17857 +- *child_node = reg_grp_node;
17858 +- break;
17859 +-
17860 +- case IQS7222_REG_GRP_GLBL:
17861 +- case IQS7222_REG_GRP_FILT:
17862 +- case IQS7222_REG_GRP_SYS:
17863 +- /*
17864 +- * These groups are not organized beneath a child node, nor are
17865 +- * they subject to any additional processing by the caller.
17866 +- */
17867 +- reg_grp_node = dev_fwnode(&client->dev);
17868 +- break;
17869 +-
17870 +- default:
17871 +- return -EINVAL;
17872 +- }
17873 ++ if (!setup)
17874 ++ return 0;
17875 +
17876 + for (i = 0; i < ARRAY_SIZE(iqs7222_props); i++) {
17877 + const char *name = iqs7222_props[i].name;
17878 +@@ -1686,11 +1748,66 @@ static int iqs7222_parse_props(struct iqs7222_private *iqs7222,
17879 + return 0;
17880 + }
17881 +
17882 +-static int iqs7222_parse_cycle(struct iqs7222_private *iqs7222, int cycle_index)
17883 ++static int iqs7222_parse_event(struct iqs7222_private *iqs7222,
17884 ++ struct fwnode_handle *event_node,
17885 ++ int reg_grp_index,
17886 ++ enum iqs7222_reg_grp_id reg_grp,
17887 ++ enum iqs7222_reg_key_id reg_key,
17888 ++ u16 event_enable, u16 event_link,
17889 ++ unsigned int *event_type,
17890 ++ unsigned int *event_code)
17891 ++{
17892 ++ struct i2c_client *client = iqs7222->client;
17893 ++ int error;
17894 ++
17895 ++ error = iqs7222_parse_props(iqs7222, event_node, reg_grp_index,
17896 ++ reg_grp, reg_key);
17897 ++ if (error)
17898 ++ return error;
17899 ++
17900 ++ error = iqs7222_gpio_select(iqs7222, event_node, event_enable,
17901 ++ event_link);
17902 ++ if (error)
17903 ++ return error;
17904 ++
17905 ++ error = fwnode_property_read_u32(event_node, "linux,code", event_code);
17906 ++ if (error == -EINVAL) {
17907 ++ return 0;
17908 ++ } else if (error) {
17909 ++ dev_err(&client->dev, "Failed to read %s code: %d\n",
17910 ++ fwnode_get_name(event_node), error);
17911 ++ return error;
17912 ++ }
17913 ++
17914 ++ if (!event_type) {
17915 ++ input_set_capability(iqs7222->keypad, EV_KEY, *event_code);
17916 ++ return 0;
17917 ++ }
17918 ++
17919 ++ error = fwnode_property_read_u32(event_node, "linux,input-type",
17920 ++ event_type);
17921 ++ if (error == -EINVAL) {
17922 ++ *event_type = EV_KEY;
17923 ++ } else if (error) {
17924 ++ dev_err(&client->dev, "Failed to read %s input type: %d\n",
17925 ++ fwnode_get_name(event_node), error);
17926 ++ return error;
17927 ++ } else if (*event_type != EV_KEY && *event_type != EV_SW) {
17928 ++ dev_err(&client->dev, "Invalid %s input type: %d\n",
17929 ++ fwnode_get_name(event_node), *event_type);
17930 ++ return -EINVAL;
17931 ++ }
17932 ++
17933 ++ input_set_capability(iqs7222->keypad, *event_type, *event_code);
17934 ++
17935 ++ return 0;
17936 ++}
17937 ++
17938 ++static int iqs7222_parse_cycle(struct iqs7222_private *iqs7222,
17939 ++ struct fwnode_handle *cycle_node, int cycle_index)
17940 + {
17941 + u16 *cycle_setup = iqs7222->cycle_setup[cycle_index];
17942 + struct i2c_client *client = iqs7222->client;
17943 +- struct fwnode_handle *cycle_node = NULL;
17944 + unsigned int pins[9];
17945 + int error, count, i;
17946 +
17947 +@@ -1698,17 +1815,7 @@ static int iqs7222_parse_cycle(struct iqs7222_private *iqs7222, int cycle_index)
17948 + * Each channel shares a cycle with one other channel; the mapping of
17949 + * channels to cycles is fixed. Properties defined for a cycle impact
17950 + * both channels tied to the cycle.
17951 +- */
17952 +- error = iqs7222_parse_props(iqs7222, &cycle_node, cycle_index,
17953 +- IQS7222_REG_GRP_CYCLE,
17954 +- IQS7222_REG_KEY_NONE);
17955 +- if (error)
17956 +- return error;
17957 +-
17958 +- if (!cycle_node)
17959 +- return 0;
17960 +-
17961 +- /*
17962 ++ *
17963 + * Unlike channels which are restricted to a select range of CRx pins
17964 + * based on channel number, any cycle can claim any of the device's 9
17965 + * CTx pins (CTx0-8).
17966 +@@ -1750,11 +1857,11 @@ static int iqs7222_parse_cycle(struct iqs7222_private *iqs7222, int cycle_index)
17967 + return 0;
17968 + }
17969 +
17970 +-static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
17971 ++static int iqs7222_parse_chan(struct iqs7222_private *iqs7222,
17972 ++ struct fwnode_handle *chan_node, int chan_index)
17973 + {
17974 + const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
17975 + struct i2c_client *client = iqs7222->client;
17976 +- struct fwnode_handle *chan_node = NULL;
17977 + int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
17978 + int ext_chan = rounddown(num_chan, 10);
17979 + int error, i;
17980 +@@ -1762,15 +1869,6 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
17981 + u16 *sys_setup = iqs7222->sys_setup;
17982 + unsigned int val;
17983 +
17984 +- error = iqs7222_parse_props(iqs7222, &chan_node, chan_index,
17985 +- IQS7222_REG_GRP_CHAN,
17986 +- IQS7222_REG_KEY_NONE);
17987 +- if (error)
17988 +- return error;
17989 +-
17990 +- if (!chan_node)
17991 +- return 0;
17992 +-
17993 + if (dev_desc->allow_offset &&
17994 + fwnode_property_present(chan_node, "azoteq,ulp-allow"))
17995 + sys_setup[dev_desc->allow_offset] &= ~BIT(chan_index);
17996 +@@ -1810,8 +1908,9 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
17997 + chan_setup[0] |= IQS7222_CHAN_SETUP_0_REF_MODE_FOLLOW;
17998 + chan_setup[4] = val * 42 + 1048;
17999 +
18000 +- if (!fwnode_property_read_u32(chan_node, "azoteq,ref-weight",
18001 +- &val)) {
18002 ++ error = fwnode_property_read_u32(chan_node, "azoteq,ref-weight",
18003 ++ &val);
18004 ++ if (!error) {
18005 + if (val > U16_MAX) {
18006 + dev_err(&client->dev,
18007 + "Invalid %s reference weight: %u\n",
18008 +@@ -1820,6 +1919,11 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
18009 + }
18010 +
18011 + chan_setup[5] = val;
18012 ++ } else if (error != -EINVAL) {
18013 ++ dev_err(&client->dev,
18014 ++ "Failed to read %s reference weight: %d\n",
18015 ++ fwnode_get_name(chan_node), error);
18016 ++ return error;
18017 + }
18018 +
18019 + /*
18020 +@@ -1892,21 +1996,10 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
18021 + if (!event_node)
18022 + continue;
18023 +
18024 +- error = iqs7222_parse_props(iqs7222, &event_node, chan_index,
18025 +- IQS7222_REG_GRP_BTN,
18026 +- iqs7222_kp_events[i].reg_key);
18027 +- if (error)
18028 +- return error;
18029 +-
18030 +- error = iqs7222_gpio_select(iqs7222, event_node,
18031 +- BIT(chan_index),
18032 +- dev_desc->touch_link - (i ? 0 : 2));
18033 +- if (error)
18034 +- return error;
18035 +-
18036 +- if (!fwnode_property_read_u32(event_node,
18037 +- "azoteq,timeout-press-ms",
18038 +- &val)) {
18039 ++ error = fwnode_property_read_u32(event_node,
18040 ++ "azoteq,timeout-press-ms",
18041 ++ &val);
18042 ++ if (!error) {
18043 + /*
18044 + * The IQS7222B employs a global pair of press timeout
18045 + * registers as opposed to channel-specific registers.
18046 +@@ -1919,57 +2012,31 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
18047 + if (val > U8_MAX * 500) {
18048 + dev_err(&client->dev,
18049 + "Invalid %s press timeout: %u\n",
18050 +- fwnode_get_name(chan_node), val);
18051 ++ fwnode_get_name(event_node), val);
18052 ++ fwnode_handle_put(event_node);
18053 + return -EINVAL;
18054 + }
18055 +
18056 + *setup &= ~(U8_MAX << i * 8);
18057 + *setup |= (val / 500 << i * 8);
18058 +- }
18059 +-
18060 +- error = fwnode_property_read_u32(event_node, "linux,code",
18061 +- &val);
18062 +- if (error) {
18063 +- dev_err(&client->dev, "Failed to read %s code: %d\n",
18064 +- fwnode_get_name(chan_node), error);
18065 ++ } else if (error != -EINVAL) {
18066 ++ dev_err(&client->dev,
18067 ++ "Failed to read %s press timeout: %d\n",
18068 ++ fwnode_get_name(event_node), error);
18069 ++ fwnode_handle_put(event_node);
18070 + return error;
18071 + }
18072 +
18073 +- iqs7222->kp_code[chan_index][i] = val;
18074 +- iqs7222->kp_type[chan_index][i] = EV_KEY;
18075 +-
18076 +- if (fwnode_property_present(event_node, "linux,input-type")) {
18077 +- error = fwnode_property_read_u32(event_node,
18078 +- "linux,input-type",
18079 +- &val);
18080 +- if (error) {
18081 +- dev_err(&client->dev,
18082 +- "Failed to read %s input type: %d\n",
18083 +- fwnode_get_name(chan_node), error);
18084 +- return error;
18085 +- }
18086 +-
18087 +- if (val != EV_KEY && val != EV_SW) {
18088 +- dev_err(&client->dev,
18089 +- "Invalid %s input type: %u\n",
18090 +- fwnode_get_name(chan_node), val);
18091 +- return -EINVAL;
18092 +- }
18093 +-
18094 +- iqs7222->kp_type[chan_index][i] = val;
18095 +- }
18096 +-
18097 +- /*
18098 +- * Reference channels can opt out of event reporting by using
18099 +- * KEY_RESERVED in place of a true key or switch code.
18100 +- */
18101 +- if (iqs7222->kp_type[chan_index][i] == EV_KEY &&
18102 +- iqs7222->kp_code[chan_index][i] == KEY_RESERVED)
18103 +- continue;
18104 +-
18105 +- input_set_capability(iqs7222->keypad,
18106 +- iqs7222->kp_type[chan_index][i],
18107 +- iqs7222->kp_code[chan_index][i]);
18108 ++ error = iqs7222_parse_event(iqs7222, event_node, chan_index,
18109 ++ IQS7222_REG_GRP_BTN,
18110 ++ iqs7222_kp_events[i].reg_key,
18111 ++ BIT(chan_index),
18112 ++ dev_desc->touch_link - (i ? 0 : 2),
18113 ++ &iqs7222->kp_type[chan_index][i],
18114 ++ &iqs7222->kp_code[chan_index][i]);
18115 ++ fwnode_handle_put(event_node);
18116 ++ if (error)
18117 ++ return error;
18118 +
18119 + if (!dev_desc->event_offset)
18120 + continue;
18121 +@@ -1981,16 +2048,16 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222, int chan_index)
18122 + * The following call handles a special pair of properties that apply
18123 + * to a channel node, but reside within the button (event) group.
18124 + */
18125 +- return iqs7222_parse_props(iqs7222, &chan_node, chan_index,
18126 ++ return iqs7222_parse_props(iqs7222, chan_node, chan_index,
18127 + IQS7222_REG_GRP_BTN,
18128 + IQS7222_REG_KEY_DEBOUNCE);
18129 + }
18130 +
18131 +-static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18132 ++static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222,
18133 ++ struct fwnode_handle *sldr_node, int sldr_index)
18134 + {
18135 + const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
18136 + struct i2c_client *client = iqs7222->client;
18137 +- struct fwnode_handle *sldr_node = NULL;
18138 + int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
18139 + int ext_chan = rounddown(num_chan, 10);
18140 + int count, error, reg_offset, i;
18141 +@@ -1998,15 +2065,6 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18142 + u16 *sldr_setup = iqs7222->sldr_setup[sldr_index];
18143 + unsigned int chan_sel[4], val;
18144 +
18145 +- error = iqs7222_parse_props(iqs7222, &sldr_node, sldr_index,
18146 +- IQS7222_REG_GRP_SLDR,
18147 +- IQS7222_REG_KEY_NONE);
18148 +- if (error)
18149 +- return error;
18150 +-
18151 +- if (!sldr_node)
18152 +- return 0;
18153 +-
18154 + /*
18155 + * Each slider can be spread across 3 to 4 channels. It is possible to
18156 + * select only 2 channels, but doing so prevents the slider from using
18157 +@@ -2065,8 +2123,9 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18158 + if (fwnode_property_present(sldr_node, "azoteq,use-prox"))
18159 + sldr_setup[4 + reg_offset] -= 2;
18160 +
18161 +- if (!fwnode_property_read_u32(sldr_node, "azoteq,slider-size", &val)) {
18162 +- if (!val || val > dev_desc->sldr_res) {
18163 ++ error = fwnode_property_read_u32(sldr_node, "azoteq,slider-size", &val);
18164 ++ if (!error) {
18165 ++ if (val > dev_desc->sldr_res) {
18166 + dev_err(&client->dev, "Invalid %s size: %u\n",
18167 + fwnode_get_name(sldr_node), val);
18168 + return -EINVAL;
18169 +@@ -2079,9 +2138,21 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18170 + sldr_setup[2] |= (val / 16 <<
18171 + IQS7222_SLDR_SETUP_2_RES_SHIFT);
18172 + }
18173 ++ } else if (error != -EINVAL) {
18174 ++ dev_err(&client->dev, "Failed to read %s size: %d\n",
18175 ++ fwnode_get_name(sldr_node), error);
18176 ++ return error;
18177 + }
18178 +
18179 +- if (!fwnode_property_read_u32(sldr_node, "azoteq,top-speed", &val)) {
18180 ++ if (!(reg_offset ? sldr_setup[3]
18181 ++ : sldr_setup[2] & IQS7222_SLDR_SETUP_2_RES_MASK)) {
18182 ++ dev_err(&client->dev, "Undefined %s size\n",
18183 ++ fwnode_get_name(sldr_node));
18184 ++ return -EINVAL;
18185 ++ }
18186 ++
18187 ++ error = fwnode_property_read_u32(sldr_node, "azoteq,top-speed", &val);
18188 ++ if (!error) {
18189 + if (val > (reg_offset ? U16_MAX : U8_MAX * 4)) {
18190 + dev_err(&client->dev, "Invalid %s top speed: %u\n",
18191 + fwnode_get_name(sldr_node), val);
18192 +@@ -2094,9 +2165,14 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18193 + sldr_setup[2] &= ~IQS7222_SLDR_SETUP_2_TOP_SPEED_MASK;
18194 + sldr_setup[2] |= (val / 4);
18195 + }
18196 ++ } else if (error != -EINVAL) {
18197 ++ dev_err(&client->dev, "Failed to read %s top speed: %d\n",
18198 ++ fwnode_get_name(sldr_node), error);
18199 ++ return error;
18200 + }
18201 +
18202 +- if (!fwnode_property_read_u32(sldr_node, "linux,axis", &val)) {
18203 ++ error = fwnode_property_read_u32(sldr_node, "linux,axis", &val);
18204 ++ if (!error) {
18205 + u16 sldr_max = sldr_setup[3] - 1;
18206 +
18207 + if (!reg_offset) {
18208 +@@ -2110,6 +2186,10 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18209 +
18210 + input_set_abs_params(iqs7222->keypad, val, 0, sldr_max, 0, 0);
18211 + iqs7222->sl_axis[sldr_index] = val;
18212 ++ } else if (error != -EINVAL) {
18213 ++ dev_err(&client->dev, "Failed to read %s axis: %d\n",
18214 ++ fwnode_get_name(sldr_node), error);
18215 ++ return error;
18216 + }
18217 +
18218 + if (dev_desc->wheel_enable) {
18219 +@@ -2130,46 +2210,47 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18220 + for (i = 0; i < ARRAY_SIZE(iqs7222_sl_events); i++) {
18221 + const char *event_name = iqs7222_sl_events[i].name;
18222 + struct fwnode_handle *event_node;
18223 ++ enum iqs7222_reg_key_id reg_key;
18224 +
18225 + event_node = fwnode_get_named_child_node(sldr_node, event_name);
18226 + if (!event_node)
18227 + continue;
18228 +
18229 +- error = iqs7222_parse_props(iqs7222, &event_node, sldr_index,
18230 +- IQS7222_REG_GRP_SLDR,
18231 +- reg_offset ?
18232 +- IQS7222_REG_KEY_RESERVED :
18233 +- iqs7222_sl_events[i].reg_key);
18234 +- if (error)
18235 +- return error;
18236 ++ /*
18237 ++ * Depending on the device, gestures are either offered using
18238 ++ * one of two timing resolutions, or are not supported at all.
18239 ++ */
18240 ++ if (reg_offset)
18241 ++ reg_key = IQS7222_REG_KEY_RESERVED;
18242 ++ else if (dev_desc->legacy_gesture &&
18243 ++ iqs7222_sl_events[i].reg_key == IQS7222_REG_KEY_TAP)
18244 ++ reg_key = IQS7222_REG_KEY_TAP_LEGACY;
18245 ++ else if (dev_desc->legacy_gesture &&
18246 ++ iqs7222_sl_events[i].reg_key == IQS7222_REG_KEY_AXIAL)
18247 ++ reg_key = IQS7222_REG_KEY_AXIAL_LEGACY;
18248 ++ else
18249 ++ reg_key = iqs7222_sl_events[i].reg_key;
18250 +
18251 + /*
18252 + * The press/release event does not expose a direct GPIO link,
18253 + * but one can be emulated by tying each of the participating
18254 + * channels to the same GPIO.
18255 + */
18256 +- error = iqs7222_gpio_select(iqs7222, event_node,
18257 ++ error = iqs7222_parse_event(iqs7222, event_node, sldr_index,
18258 ++ IQS7222_REG_GRP_SLDR, reg_key,
18259 + i ? iqs7222_sl_events[i].enable
18260 + : sldr_setup[3 + reg_offset],
18261 + i ? 1568 + sldr_index * 30
18262 +- : sldr_setup[4 + reg_offset]);
18263 ++ : sldr_setup[4 + reg_offset],
18264 ++ NULL,
18265 ++ &iqs7222->sl_code[sldr_index][i]);
18266 ++ fwnode_handle_put(event_node);
18267 + if (error)
18268 + return error;
18269 +
18270 + if (!reg_offset)
18271 + sldr_setup[9] |= iqs7222_sl_events[i].enable;
18272 +
18273 +- error = fwnode_property_read_u32(event_node, "linux,code",
18274 +- &val);
18275 +- if (error) {
18276 +- dev_err(&client->dev, "Failed to read %s code: %d\n",
18277 +- fwnode_get_name(sldr_node), error);
18278 +- return error;
18279 +- }
18280 +-
18281 +- iqs7222->sl_code[sldr_index][i] = val;
18282 +- input_set_capability(iqs7222->keypad, EV_KEY, val);
18283 +-
18284 + if (!dev_desc->event_offset)
18285 + continue;
18286 +
18287 +@@ -2190,19 +2271,63 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
18288 + * The following call handles a special pair of properties that shift
18289 + * to make room for a wheel enable control in the case of IQS7222C.
18290 + */
18291 +- return iqs7222_parse_props(iqs7222, &sldr_node, sldr_index,
18292 ++ return iqs7222_parse_props(iqs7222, sldr_node, sldr_index,
18293 + IQS7222_REG_GRP_SLDR,
18294 + dev_desc->wheel_enable ?
18295 + IQS7222_REG_KEY_WHEEL :
18296 + IQS7222_REG_KEY_NO_WHEEL);
18297 + }
18298 +
18299 ++static int (*iqs7222_parse_extra[IQS7222_NUM_REG_GRPS])
18300 ++ (struct iqs7222_private *iqs7222,
18301 ++ struct fwnode_handle *reg_grp_node,
18302 ++ int reg_grp_index) = {
18303 ++ [IQS7222_REG_GRP_CYCLE] = iqs7222_parse_cycle,
18304 ++ [IQS7222_REG_GRP_CHAN] = iqs7222_parse_chan,
18305 ++ [IQS7222_REG_GRP_SLDR] = iqs7222_parse_sldr,
18306 ++};
18307 ++
18308 ++static int iqs7222_parse_reg_grp(struct iqs7222_private *iqs7222,
18309 ++ enum iqs7222_reg_grp_id reg_grp,
18310 ++ int reg_grp_index)
18311 ++{
18312 ++ struct i2c_client *client = iqs7222->client;
18313 ++ struct fwnode_handle *reg_grp_node;
18314 ++ int error;
18315 ++
18316 ++ if (iqs7222_reg_grp_names[reg_grp]) {
18317 ++ char reg_grp_name[16];
18318 ++
18319 ++ snprintf(reg_grp_name, sizeof(reg_grp_name), "%s-%d",
18320 ++ iqs7222_reg_grp_names[reg_grp], reg_grp_index);
18321 ++
18322 ++ reg_grp_node = device_get_named_child_node(&client->dev,
18323 ++ reg_grp_name);
18324 ++ } else {
18325 ++ reg_grp_node = fwnode_handle_get(dev_fwnode(&client->dev));
18326 ++ }
18327 ++
18328 ++ if (!reg_grp_node)
18329 ++ return 0;
18330 ++
18331 ++ error = iqs7222_parse_props(iqs7222, reg_grp_node, reg_grp_index,
18332 ++ reg_grp, IQS7222_REG_KEY_NONE);
18333 ++
18334 ++ if (!error && iqs7222_parse_extra[reg_grp])
18335 ++ error = iqs7222_parse_extra[reg_grp](iqs7222, reg_grp_node,
18336 ++ reg_grp_index);
18337 ++
18338 ++ fwnode_handle_put(reg_grp_node);
18339 ++
18340 ++ return error;
18341 ++}
18342 ++
18343 + static int iqs7222_parse_all(struct iqs7222_private *iqs7222)
18344 + {
18345 + const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
18346 + const struct iqs7222_reg_grp_desc *reg_grps = dev_desc->reg_grps;
18347 + u16 *sys_setup = iqs7222->sys_setup;
18348 +- int error, i;
18349 ++ int error, i, j;
18350 +
18351 + if (dev_desc->allow_offset)
18352 + sys_setup[dev_desc->allow_offset] = U16_MAX;
18353 +@@ -2210,32 +2335,13 @@ static int iqs7222_parse_all(struct iqs7222_private *iqs7222)
18354 + if (dev_desc->event_offset)
18355 + sys_setup[dev_desc->event_offset] = IQS7222_EVENT_MASK_ATI;
18356 +
18357 +- for (i = 0; i < reg_grps[IQS7222_REG_GRP_CYCLE].num_row; i++) {
18358 +- error = iqs7222_parse_cycle(iqs7222, i);
18359 +- if (error)
18360 +- return error;
18361 +- }
18362 +-
18363 +- error = iqs7222_parse_props(iqs7222, NULL, 0, IQS7222_REG_GRP_GLBL,
18364 +- IQS7222_REG_KEY_NONE);
18365 +- if (error)
18366 +- return error;
18367 +-
18368 + for (i = 0; i < reg_grps[IQS7222_REG_GRP_GPIO].num_row; i++) {
18369 +- struct fwnode_handle *gpio_node = NULL;
18370 + u16 *gpio_setup = iqs7222->gpio_setup[i];
18371 +- int j;
18372 +
18373 + gpio_setup[0] &= ~IQS7222_GPIO_SETUP_0_GPIO_EN;
18374 + gpio_setup[1] = 0;
18375 + gpio_setup[2] = 0;
18376 +
18377 +- error = iqs7222_parse_props(iqs7222, &gpio_node, i,
18378 +- IQS7222_REG_GRP_GPIO,
18379 +- IQS7222_REG_KEY_NONE);
18380 +- if (error)
18381 +- return error;
18382 +-
18383 + if (reg_grps[IQS7222_REG_GRP_GPIO].num_row == 1)
18384 + continue;
18385 +
18386 +@@ -2258,29 +2364,21 @@ static int iqs7222_parse_all(struct iqs7222_private *iqs7222)
18387 + chan_setup[5] = 0;
18388 + }
18389 +
18390 +- for (i = 0; i < reg_grps[IQS7222_REG_GRP_CHAN].num_row; i++) {
18391 +- error = iqs7222_parse_chan(iqs7222, i);
18392 +- if (error)
18393 +- return error;
18394 +- }
18395 +-
18396 +- error = iqs7222_parse_props(iqs7222, NULL, 0, IQS7222_REG_GRP_FILT,
18397 +- IQS7222_REG_KEY_NONE);
18398 +- if (error)
18399 +- return error;
18400 +-
18401 + for (i = 0; i < reg_grps[IQS7222_REG_GRP_SLDR].num_row; i++) {
18402 + u16 *sldr_setup = iqs7222->sldr_setup[i];
18403 +
18404 + sldr_setup[0] &= ~IQS7222_SLDR_SETUP_0_CHAN_CNT_MASK;
18405 ++ }
18406 +
18407 +- error = iqs7222_parse_sldr(iqs7222, i);
18408 +- if (error)
18409 +- return error;
18410 ++ for (i = 0; i < IQS7222_NUM_REG_GRPS; i++) {
18411 ++ for (j = 0; j < reg_grps[i].num_row; j++) {
18412 ++ error = iqs7222_parse_reg_grp(iqs7222, i, j);
18413 ++ if (error)
18414 ++ return error;
18415 ++ }
18416 + }
18417 +
18418 +- return iqs7222_parse_props(iqs7222, NULL, 0, IQS7222_REG_GRP_SYS,
18419 +- IQS7222_REG_KEY_NONE);
18420 ++ return 0;
18421 + }
18422 +
18423 + static int iqs7222_report(struct iqs7222_private *iqs7222)
18424 +diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
18425 +index 879a4d984c907..e1308e179dd6f 100644
18426 +--- a/drivers/input/touchscreen/elants_i2c.c
18427 ++++ b/drivers/input/touchscreen/elants_i2c.c
18428 +@@ -1329,14 +1329,12 @@ static int elants_i2c_power_on(struct elants_data *ts)
18429 + if (IS_ERR_OR_NULL(ts->reset_gpio))
18430 + return 0;
18431 +
18432 +- gpiod_set_value_cansleep(ts->reset_gpio, 1);
18433 +-
18434 + error = regulator_enable(ts->vcc33);
18435 + if (error) {
18436 + dev_err(&ts->client->dev,
18437 + "failed to enable vcc33 regulator: %d\n",
18438 + error);
18439 +- goto release_reset_gpio;
18440 ++ return error;
18441 + }
18442 +
18443 + error = regulator_enable(ts->vccio);
18444 +@@ -1345,7 +1343,7 @@ static int elants_i2c_power_on(struct elants_data *ts)
18445 + "failed to enable vccio regulator: %d\n",
18446 + error);
18447 + regulator_disable(ts->vcc33);
18448 +- goto release_reset_gpio;
18449 ++ return error;
18450 + }
18451 +
18452 + /*
18453 +@@ -1354,7 +1352,6 @@ static int elants_i2c_power_on(struct elants_data *ts)
18454 + */
18455 + udelay(ELAN_POWERON_DELAY_USEC);
18456 +
18457 +-release_reset_gpio:
18458 + gpiod_set_value_cansleep(ts->reset_gpio, 0);
18459 + if (error)
18460 + return error;
18461 +@@ -1462,7 +1459,7 @@ static int elants_i2c_probe(struct i2c_client *client)
18462 + return error;
18463 + }
18464 +
18465 +- ts->reset_gpio = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_LOW);
18466 ++ ts->reset_gpio = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_HIGH);
18467 + if (IS_ERR(ts->reset_gpio)) {
18468 + error = PTR_ERR(ts->reset_gpio);
18469 +
18470 +diff --git a/drivers/interconnect/qcom/sc7180.c b/drivers/interconnect/qcom/sc7180.c
18471 +index 35cd448efdfbe..82d5e8a8c19ea 100644
18472 +--- a/drivers/interconnect/qcom/sc7180.c
18473 ++++ b/drivers/interconnect/qcom/sc7180.c
18474 +@@ -369,7 +369,7 @@ static const struct qcom_icc_desc sc7180_gem_noc = {
18475 + .num_bcms = ARRAY_SIZE(gem_noc_bcms),
18476 + };
18477 +
18478 +-static struct qcom_icc_bcm *mc_virt_bcms[] = {
18479 ++static struct qcom_icc_bcm * const mc_virt_bcms[] = {
18480 + &bcm_acv,
18481 + &bcm_mc0,
18482 + };
18483 +diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
18484 +index 6a1f02c62dffc..9f7fab49a5a90 100644
18485 +--- a/drivers/iommu/amd/iommu_v2.c
18486 ++++ b/drivers/iommu/amd/iommu_v2.c
18487 +@@ -587,6 +587,7 @@ out_drop_state:
18488 + put_device_state(dev_state);
18489 +
18490 + out:
18491 ++ pci_dev_put(pdev);
18492 + return ret;
18493 + }
18494 +
18495 +diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c
18496 +index 0d03f837a5d4e..7a1a413f75ab2 100644
18497 +--- a/drivers/iommu/fsl_pamu.c
18498 ++++ b/drivers/iommu/fsl_pamu.c
18499 +@@ -868,7 +868,7 @@ static int fsl_pamu_probe(struct platform_device *pdev)
18500 + ret = create_csd(ppaact_phys, mem_size, csd_port_id);
18501 + if (ret) {
18502 + dev_err(dev, "could not create coherence subdomain\n");
18503 +- return ret;
18504 ++ goto error;
18505 + }
18506 + }
18507 +
18508 +diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
18509 +index 65a3b3d886dc0..959d895fc1dff 100644
18510 +--- a/drivers/iommu/iommu.c
18511 ++++ b/drivers/iommu/iommu.c
18512 +@@ -283,13 +283,23 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
18513 + const struct iommu_ops *ops = dev->bus->iommu_ops;
18514 + struct iommu_device *iommu_dev;
18515 + struct iommu_group *group;
18516 ++ static DEFINE_MUTEX(iommu_probe_device_lock);
18517 + int ret;
18518 +
18519 + if (!ops)
18520 + return -ENODEV;
18521 +-
18522 +- if (!dev_iommu_get(dev))
18523 +- return -ENOMEM;
18524 ++ /*
18525 ++ * Serialise to avoid races between IOMMU drivers registering in
18526 ++ * parallel and/or the "replay" calls from ACPI/OF code via client
18527 ++ * driver probe. Once the latter have been cleaned up we should
18528 ++ * probably be able to use device_lock() here to minimise the scope,
18529 ++ * but for now enforcing a simple global ordering is fine.
18530 ++ */
18531 ++ mutex_lock(&iommu_probe_device_lock);
18532 ++ if (!dev_iommu_get(dev)) {
18533 ++ ret = -ENOMEM;
18534 ++ goto err_unlock;
18535 ++ }
18536 +
18537 + if (!try_module_get(ops->owner)) {
18538 + ret = -EINVAL;
18539 +@@ -309,11 +319,14 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
18540 + ret = PTR_ERR(group);
18541 + goto out_release;
18542 + }
18543 +- iommu_group_put(group);
18544 +
18545 ++ mutex_lock(&group->mutex);
18546 + if (group_list && !group->default_domain && list_empty(&group->entry))
18547 + list_add_tail(&group->entry, group_list);
18548 ++ mutex_unlock(&group->mutex);
18549 ++ iommu_group_put(group);
18550 +
18551 ++ mutex_unlock(&iommu_probe_device_lock);
18552 + iommu_device_link(iommu_dev, dev);
18553 +
18554 + return 0;
18555 +@@ -328,6 +341,9 @@ out_module_put:
18556 + err_free:
18557 + dev_iommu_free(dev);
18558 +
18559 ++err_unlock:
18560 ++ mutex_unlock(&iommu_probe_device_lock);
18561 ++
18562 + return ret;
18563 + }
18564 +
18565 +@@ -1799,11 +1815,11 @@ int bus_iommu_probe(struct bus_type *bus)
18566 + return ret;
18567 +
18568 + list_for_each_entry_safe(group, next, &group_list, entry) {
18569 ++ mutex_lock(&group->mutex);
18570 ++
18571 + /* Remove item from the list */
18572 + list_del_init(&group->entry);
18573 +
18574 +- mutex_lock(&group->mutex);
18575 +-
18576 + /* Try to allocate default domain */
18577 + probe_alloc_default_domain(bus, group);
18578 +
18579 +diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
18580 +index 2ab2ecfe01f80..dad2f238ffbf2 100644
18581 +--- a/drivers/iommu/mtk_iommu.c
18582 ++++ b/drivers/iommu/mtk_iommu.c
18583 +@@ -1044,20 +1044,24 @@ static int mtk_iommu_mm_dts_parse(struct device *dev, struct component_match **m
18584 + struct mtk_iommu_data *data)
18585 + {
18586 + struct device_node *larbnode, *smicomm_node, *smi_subcomm_node;
18587 +- struct platform_device *plarbdev;
18588 ++ struct platform_device *plarbdev, *pcommdev;
18589 + struct device_link *link;
18590 + int i, larb_nr, ret;
18591 +
18592 + larb_nr = of_count_phandle_with_args(dev->of_node, "mediatek,larbs", NULL);
18593 + if (larb_nr < 0)
18594 + return larb_nr;
18595 ++ if (larb_nr == 0 || larb_nr > MTK_LARB_NR_MAX)
18596 ++ return -EINVAL;
18597 +
18598 + for (i = 0; i < larb_nr; i++) {
18599 + u32 id;
18600 +
18601 + larbnode = of_parse_phandle(dev->of_node, "mediatek,larbs", i);
18602 +- if (!larbnode)
18603 +- return -EINVAL;
18604 ++ if (!larbnode) {
18605 ++ ret = -EINVAL;
18606 ++ goto err_larbdev_put;
18607 ++ }
18608 +
18609 + if (!of_device_is_available(larbnode)) {
18610 + of_node_put(larbnode);
18611 +@@ -1067,20 +1071,32 @@ static int mtk_iommu_mm_dts_parse(struct device *dev, struct component_match **m
18612 + ret = of_property_read_u32(larbnode, "mediatek,larb-id", &id);
18613 + if (ret)/* The id is consecutive if there is no this property */
18614 + id = i;
18615 ++ if (id >= MTK_LARB_NR_MAX) {
18616 ++ of_node_put(larbnode);
18617 ++ ret = -EINVAL;
18618 ++ goto err_larbdev_put;
18619 ++ }
18620 +
18621 + plarbdev = of_find_device_by_node(larbnode);
18622 ++ of_node_put(larbnode);
18623 + if (!plarbdev) {
18624 +- of_node_put(larbnode);
18625 +- return -ENODEV;
18626 ++ ret = -ENODEV;
18627 ++ goto err_larbdev_put;
18628 + }
18629 +- if (!plarbdev->dev.driver) {
18630 +- of_node_put(larbnode);
18631 +- return -EPROBE_DEFER;
18632 ++ if (data->larb_imu[id].dev) {
18633 ++ platform_device_put(plarbdev);
18634 ++ ret = -EEXIST;
18635 ++ goto err_larbdev_put;
18636 + }
18637 + data->larb_imu[id].dev = &plarbdev->dev;
18638 +
18639 +- component_match_add_release(dev, match, component_release_of,
18640 +- component_compare_of, larbnode);
18641 ++ if (!plarbdev->dev.driver) {
18642 ++ ret = -EPROBE_DEFER;
18643 ++ goto err_larbdev_put;
18644 ++ }
18645 ++
18646 ++ component_match_add(dev, match, component_compare_dev, &plarbdev->dev);
18647 ++ platform_device_put(plarbdev);
18648 + }
18649 +
18650 + /* Get smi-(sub)-common dev from the last larb. */
18651 +@@ -1098,17 +1114,28 @@ static int mtk_iommu_mm_dts_parse(struct device *dev, struct component_match **m
18652 + else
18653 + smicomm_node = smi_subcomm_node;
18654 +
18655 +- plarbdev = of_find_device_by_node(smicomm_node);
18656 ++ pcommdev = of_find_device_by_node(smicomm_node);
18657 + of_node_put(smicomm_node);
18658 +- data->smicomm_dev = &plarbdev->dev;
18659 ++ if (!pcommdev)
18660 ++ return -ENODEV;
18661 ++ data->smicomm_dev = &pcommdev->dev;
18662 +
18663 + link = device_link_add(data->smicomm_dev, dev,
18664 + DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
18665 ++ platform_device_put(pcommdev);
18666 + if (!link) {
18667 + dev_err(dev, "Unable to link %s.\n", dev_name(data->smicomm_dev));
18668 + return -EINVAL;
18669 + }
18670 + return 0;
18671 ++
18672 ++err_larbdev_put:
18673 ++ for (i = MTK_LARB_NR_MAX - 1; i >= 0; i--) {
18674 ++ if (!data->larb_imu[i].dev)
18675 ++ continue;
18676 ++ put_device(data->larb_imu[i].dev);
18677 ++ }
18678 ++ return ret;
18679 + }
18680 +
18681 + static int mtk_iommu_probe(struct platform_device *pdev)
18682 +@@ -1173,6 +1200,8 @@ static int mtk_iommu_probe(struct platform_device *pdev)
18683 +
18684 + banks_num = data->plat_data->banks_num;
18685 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
18686 ++ if (!res)
18687 ++ return -EINVAL;
18688 + if (resource_size(res) < banks_num * MTK_IOMMU_BANK_SZ) {
18689 + dev_err(dev, "banknr %d. res %pR is not enough.\n", banks_num, res);
18690 + return -EINVAL;
18691 +diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
18692 +index a3fc59b814ab5..a68eadd64f38d 100644
18693 +--- a/drivers/iommu/rockchip-iommu.c
18694 ++++ b/drivers/iommu/rockchip-iommu.c
18695 +@@ -280,19 +280,17 @@ static u32 rk_mk_pte(phys_addr_t page, int prot)
18696 + * 11:9 - Page address bit 34:32
18697 + * 8:4 - Page address bit 39:35
18698 + * 3 - Security
18699 +- * 2 - Readable
18700 +- * 1 - Writable
18701 ++ * 2 - Writable
18702 ++ * 1 - Readable
18703 + * 0 - 1 if Page @ Page address is valid
18704 + */
18705 +-#define RK_PTE_PAGE_READABLE_V2 BIT(2)
18706 +-#define RK_PTE_PAGE_WRITABLE_V2 BIT(1)
18707 +
18708 + static u32 rk_mk_pte_v2(phys_addr_t page, int prot)
18709 + {
18710 + u32 flags = 0;
18711 +
18712 +- flags |= (prot & IOMMU_READ) ? RK_PTE_PAGE_READABLE_V2 : 0;
18713 +- flags |= (prot & IOMMU_WRITE) ? RK_PTE_PAGE_WRITABLE_V2 : 0;
18714 ++ flags |= (prot & IOMMU_READ) ? RK_PTE_PAGE_READABLE : 0;
18715 ++ flags |= (prot & IOMMU_WRITE) ? RK_PTE_PAGE_WRITABLE : 0;
18716 +
18717 + return rk_mk_dte_v2(page) | flags;
18718 + }
18719 +diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
18720 +index 3c071782f6f16..c2e5e81d609e1 100644
18721 +--- a/drivers/iommu/s390-iommu.c
18722 ++++ b/drivers/iommu/s390-iommu.c
18723 +@@ -79,10 +79,36 @@ static void s390_domain_free(struct iommu_domain *domain)
18724 + {
18725 + struct s390_domain *s390_domain = to_s390_domain(domain);
18726 +
18727 ++ WARN_ON(!list_empty(&s390_domain->devices));
18728 + dma_cleanup_tables(s390_domain->dma_table);
18729 + kfree(s390_domain);
18730 + }
18731 +
18732 ++static void __s390_iommu_detach_device(struct zpci_dev *zdev)
18733 ++{
18734 ++ struct s390_domain *s390_domain = zdev->s390_domain;
18735 ++ struct s390_domain_device *domain_device, *tmp;
18736 ++ unsigned long flags;
18737 ++
18738 ++ if (!s390_domain)
18739 ++ return;
18740 ++
18741 ++ spin_lock_irqsave(&s390_domain->list_lock, flags);
18742 ++ list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
18743 ++ list) {
18744 ++ if (domain_device->zdev == zdev) {
18745 ++ list_del(&domain_device->list);
18746 ++ kfree(domain_device);
18747 ++ break;
18748 ++ }
18749 ++ }
18750 ++ spin_unlock_irqrestore(&s390_domain->list_lock, flags);
18751 ++
18752 ++ zpci_unregister_ioat(zdev, 0);
18753 ++ zdev->s390_domain = NULL;
18754 ++ zdev->dma_table = NULL;
18755 ++}
18756 ++
18757 + static int s390_iommu_attach_device(struct iommu_domain *domain,
18758 + struct device *dev)
18759 + {
18760 +@@ -90,7 +116,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
18761 + struct zpci_dev *zdev = to_zpci_dev(dev);
18762 + struct s390_domain_device *domain_device;
18763 + unsigned long flags;
18764 +- int cc, rc;
18765 ++ int cc, rc = 0;
18766 +
18767 + if (!zdev)
18768 + return -ENODEV;
18769 +@@ -99,24 +125,18 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
18770 + if (!domain_device)
18771 + return -ENOMEM;
18772 +
18773 +- if (zdev->dma_table && !zdev->s390_domain) {
18774 +- cc = zpci_dma_exit_device(zdev);
18775 +- if (cc) {
18776 +- rc = -EIO;
18777 +- goto out_free;
18778 +- }
18779 +- }
18780 +-
18781 + if (zdev->s390_domain)
18782 +- zpci_unregister_ioat(zdev, 0);
18783 ++ __s390_iommu_detach_device(zdev);
18784 ++ else if (zdev->dma_table)
18785 ++ zpci_dma_exit_device(zdev);
18786 +
18787 +- zdev->dma_table = s390_domain->dma_table;
18788 + cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
18789 +- virt_to_phys(zdev->dma_table));
18790 ++ virt_to_phys(s390_domain->dma_table));
18791 + if (cc) {
18792 + rc = -EIO;
18793 +- goto out_restore;
18794 ++ goto out_free;
18795 + }
18796 ++ zdev->dma_table = s390_domain->dma_table;
18797 +
18798 + spin_lock_irqsave(&s390_domain->list_lock, flags);
18799 + /* First device defines the DMA range limits */
18800 +@@ -127,9 +147,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
18801 + /* Allow only devices with identical DMA range limits */
18802 + } else if (domain->geometry.aperture_start != zdev->start_dma ||
18803 + domain->geometry.aperture_end != zdev->end_dma) {
18804 +- rc = -EINVAL;
18805 + spin_unlock_irqrestore(&s390_domain->list_lock, flags);
18806 +- goto out_restore;
18807 ++ rc = -EINVAL;
18808 ++ goto out_unregister;
18809 + }
18810 + domain_device->zdev = zdev;
18811 + zdev->s390_domain = s390_domain;
18812 +@@ -138,14 +158,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
18813 +
18814 + return 0;
18815 +
18816 +-out_restore:
18817 +- if (!zdev->s390_domain) {
18818 +- zpci_dma_init_device(zdev);
18819 +- } else {
18820 +- zdev->dma_table = zdev->s390_domain->dma_table;
18821 +- zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
18822 +- virt_to_phys(zdev->dma_table));
18823 +- }
18824 ++out_unregister:
18825 ++ zpci_unregister_ioat(zdev, 0);
18826 ++ zdev->dma_table = NULL;
18827 + out_free:
18828 + kfree(domain_device);
18829 +
18830 +@@ -155,32 +170,12 @@ out_free:
18831 + static void s390_iommu_detach_device(struct iommu_domain *domain,
18832 + struct device *dev)
18833 + {
18834 +- struct s390_domain *s390_domain = to_s390_domain(domain);
18835 + struct zpci_dev *zdev = to_zpci_dev(dev);
18836 +- struct s390_domain_device *domain_device, *tmp;
18837 +- unsigned long flags;
18838 +- int found = 0;
18839 +
18840 +- if (!zdev)
18841 +- return;
18842 ++ WARN_ON(zdev->s390_domain != to_s390_domain(domain));
18843 +
18844 +- spin_lock_irqsave(&s390_domain->list_lock, flags);
18845 +- list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
18846 +- list) {
18847 +- if (domain_device->zdev == zdev) {
18848 +- list_del(&domain_device->list);
18849 +- kfree(domain_device);
18850 +- found = 1;
18851 +- break;
18852 +- }
18853 +- }
18854 +- spin_unlock_irqrestore(&s390_domain->list_lock, flags);
18855 +-
18856 +- if (found && (zdev->s390_domain == s390_domain)) {
18857 +- zdev->s390_domain = NULL;
18858 +- zpci_unregister_ioat(zdev, 0);
18859 +- zpci_dma_init_device(zdev);
18860 +- }
18861 ++ __s390_iommu_detach_device(zdev);
18862 ++ zpci_dma_init_device(zdev);
18863 + }
18864 +
18865 + static struct iommu_device *s390_iommu_probe_device(struct device *dev)
18866 +@@ -198,24 +193,13 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
18867 + static void s390_iommu_release_device(struct device *dev)
18868 + {
18869 + struct zpci_dev *zdev = to_zpci_dev(dev);
18870 +- struct iommu_domain *domain;
18871 +
18872 + /*
18873 +- * This is a workaround for a scenario where the IOMMU API common code
18874 +- * "forgets" to call the detach_dev callback: After binding a device
18875 +- * to vfio-pci and completing the VFIO_SET_IOMMU ioctl (which triggers
18876 +- * the attach_dev), removing the device via
18877 +- * "echo 1 > /sys/bus/pci/devices/.../remove" won't trigger detach_dev,
18878 +- * only release_device will be called via the BUS_NOTIFY_REMOVED_DEVICE
18879 +- * notifier.
18880 +- *
18881 +- * So let's call detach_dev from here if it hasn't been called before.
18882 ++ * release_device is expected to detach any domain currently attached
18883 ++ * to the device, but keep it attached to other devices in the group.
18884 + */
18885 +- if (zdev && zdev->s390_domain) {
18886 +- domain = iommu_get_domain_for_dev(dev);
18887 +- if (domain)
18888 +- s390_iommu_detach_device(domain, dev);
18889 +- }
18890 ++ if (zdev)
18891 ++ __s390_iommu_detach_device(zdev);
18892 + }
18893 +
18894 + static int s390_iommu_update_trans(struct s390_domain *s390_domain,
18895 +diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
18896 +index cd9b74ee24def..5b585eace3d46 100644
18897 +--- a/drivers/iommu/sun50i-iommu.c
18898 ++++ b/drivers/iommu/sun50i-iommu.c
18899 +@@ -27,6 +27,7 @@
18900 + #include <linux/types.h>
18901 +
18902 + #define IOMMU_RESET_REG 0x010
18903 ++#define IOMMU_RESET_RELEASE_ALL 0xffffffff
18904 + #define IOMMU_ENABLE_REG 0x020
18905 + #define IOMMU_ENABLE_ENABLE BIT(0)
18906 +
18907 +@@ -92,6 +93,8 @@
18908 + #define NUM_PT_ENTRIES 256
18909 + #define PT_SIZE (NUM_PT_ENTRIES * PT_ENTRY_SIZE)
18910 +
18911 ++#define SPAGE_SIZE 4096
18912 ++
18913 + struct sun50i_iommu {
18914 + struct iommu_device iommu;
18915 +
18916 +@@ -270,7 +273,7 @@ static u32 sun50i_mk_pte(phys_addr_t page, int prot)
18917 + enum sun50i_iommu_aci aci;
18918 + u32 flags = 0;
18919 +
18920 +- if (prot & (IOMMU_READ | IOMMU_WRITE))
18921 ++ if ((prot & (IOMMU_READ | IOMMU_WRITE)) == (IOMMU_READ | IOMMU_WRITE))
18922 + aci = SUN50I_IOMMU_ACI_RD_WR;
18923 + else if (prot & IOMMU_READ)
18924 + aci = SUN50I_IOMMU_ACI_RD;
18925 +@@ -294,6 +297,62 @@ static void sun50i_table_flush(struct sun50i_iommu_domain *sun50i_domain,
18926 + dma_sync_single_for_device(iommu->dev, dma, size, DMA_TO_DEVICE);
18927 + }
18928 +
18929 ++static void sun50i_iommu_zap_iova(struct sun50i_iommu *iommu,
18930 ++ unsigned long iova)
18931 ++{
18932 ++ u32 reg;
18933 ++ int ret;
18934 ++
18935 ++ iommu_write(iommu, IOMMU_TLB_IVLD_ADDR_REG, iova);
18936 ++ iommu_write(iommu, IOMMU_TLB_IVLD_ADDR_MASK_REG, GENMASK(31, 12));
18937 ++ iommu_write(iommu, IOMMU_TLB_IVLD_ENABLE_REG,
18938 ++ IOMMU_TLB_IVLD_ENABLE_ENABLE);
18939 ++
18940 ++ ret = readl_poll_timeout_atomic(iommu->base + IOMMU_TLB_IVLD_ENABLE_REG,
18941 ++ reg, !reg, 1, 2000);
18942 ++ if (ret)
18943 ++ dev_warn(iommu->dev, "TLB invalidation timed out!\n");
18944 ++}
18945 ++
18946 ++static void sun50i_iommu_zap_ptw_cache(struct sun50i_iommu *iommu,
18947 ++ unsigned long iova)
18948 ++{
18949 ++ u32 reg;
18950 ++ int ret;
18951 ++
18952 ++ iommu_write(iommu, IOMMU_PC_IVLD_ADDR_REG, iova);
18953 ++ iommu_write(iommu, IOMMU_PC_IVLD_ENABLE_REG,
18954 ++ IOMMU_PC_IVLD_ENABLE_ENABLE);
18955 ++
18956 ++ ret = readl_poll_timeout_atomic(iommu->base + IOMMU_PC_IVLD_ENABLE_REG,
18957 ++ reg, !reg, 1, 2000);
18958 ++ if (ret)
18959 ++ dev_warn(iommu->dev, "PTW cache invalidation timed out!\n");
18960 ++}
18961 ++
18962 ++static void sun50i_iommu_zap_range(struct sun50i_iommu *iommu,
18963 ++ unsigned long iova, size_t size)
18964 ++{
18965 ++ assert_spin_locked(&iommu->iommu_lock);
18966 ++
18967 ++ iommu_write(iommu, IOMMU_AUTO_GATING_REG, 0);
18968 ++
18969 ++ sun50i_iommu_zap_iova(iommu, iova);
18970 ++ sun50i_iommu_zap_iova(iommu, iova + SPAGE_SIZE);
18971 ++ if (size > SPAGE_SIZE) {
18972 ++ sun50i_iommu_zap_iova(iommu, iova + size);
18973 ++ sun50i_iommu_zap_iova(iommu, iova + size + SPAGE_SIZE);
18974 ++ }
18975 ++ sun50i_iommu_zap_ptw_cache(iommu, iova);
18976 ++ sun50i_iommu_zap_ptw_cache(iommu, iova + SZ_1M);
18977 ++ if (size > SZ_1M) {
18978 ++ sun50i_iommu_zap_ptw_cache(iommu, iova + size);
18979 ++ sun50i_iommu_zap_ptw_cache(iommu, iova + size + SZ_1M);
18980 ++ }
18981 ++
18982 ++ iommu_write(iommu, IOMMU_AUTO_GATING_REG, IOMMU_AUTO_GATING_ENABLE);
18983 ++}
18984 ++
18985 + static int sun50i_iommu_flush_all_tlb(struct sun50i_iommu *iommu)
18986 + {
18987 + u32 reg;
18988 +@@ -343,6 +402,18 @@ static void sun50i_iommu_flush_iotlb_all(struct iommu_domain *domain)
18989 + spin_unlock_irqrestore(&iommu->iommu_lock, flags);
18990 + }
18991 +
18992 ++static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
18993 ++ unsigned long iova, size_t size)
18994 ++{
18995 ++ struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
18996 ++ struct sun50i_iommu *iommu = sun50i_domain->iommu;
18997 ++ unsigned long flags;
18998 ++
18999 ++ spin_lock_irqsave(&iommu->iommu_lock, flags);
19000 ++ sun50i_iommu_zap_range(iommu, iova, size);
19001 ++ spin_unlock_irqrestore(&iommu->iommu_lock, flags);
19002 ++}
19003 ++
19004 + static void sun50i_iommu_iotlb_sync(struct iommu_domain *domain,
19005 + struct iommu_iotlb_gather *gather)
19006 + {
19007 +@@ -511,7 +582,7 @@ static u32 *sun50i_dte_get_page_table(struct sun50i_iommu_domain *sun50i_domain,
19008 + sun50i_iommu_free_page_table(iommu, drop_pt);
19009 + }
19010 +
19011 +- sun50i_table_flush(sun50i_domain, page_table, PT_SIZE);
19012 ++ sun50i_table_flush(sun50i_domain, page_table, NUM_PT_ENTRIES);
19013 + sun50i_table_flush(sun50i_domain, dte_addr, 1);
19014 +
19015 + return page_table;
19016 +@@ -601,7 +672,6 @@ static struct iommu_domain *sun50i_iommu_domain_alloc(unsigned type)
19017 + struct sun50i_iommu_domain *sun50i_domain;
19018 +
19019 + if (type != IOMMU_DOMAIN_DMA &&
19020 +- type != IOMMU_DOMAIN_IDENTITY &&
19021 + type != IOMMU_DOMAIN_UNMANAGED)
19022 + return NULL;
19023 +
19024 +@@ -766,6 +836,7 @@ static const struct iommu_ops sun50i_iommu_ops = {
19025 + .attach_dev = sun50i_iommu_attach_device,
19026 + .detach_dev = sun50i_iommu_detach_device,
19027 + .flush_iotlb_all = sun50i_iommu_flush_iotlb_all,
19028 ++ .iotlb_sync_map = sun50i_iommu_iotlb_sync_map,
19029 + .iotlb_sync = sun50i_iommu_iotlb_sync,
19030 + .iova_to_phys = sun50i_iommu_iova_to_phys,
19031 + .map = sun50i_iommu_map,
19032 +@@ -785,6 +856,8 @@ static void sun50i_iommu_report_fault(struct sun50i_iommu *iommu,
19033 + report_iommu_fault(iommu->domain, iommu->dev, iova, prot);
19034 + else
19035 + dev_err(iommu->dev, "Page fault while iommu not attached to any domain?\n");
19036 ++
19037 ++ sun50i_iommu_zap_range(iommu, iova, SPAGE_SIZE);
19038 + }
19039 +
19040 + static phys_addr_t sun50i_iommu_handle_pt_irq(struct sun50i_iommu *iommu,
19041 +@@ -868,8 +941,8 @@ static phys_addr_t sun50i_iommu_handle_perm_irq(struct sun50i_iommu *iommu)
19042 +
19043 + static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
19044 + {
19045 ++ u32 status, l1_status, l2_status, resets;
19046 + struct sun50i_iommu *iommu = dev_id;
19047 +- u32 status;
19048 +
19049 + spin_lock(&iommu->iommu_lock);
19050 +
19051 +@@ -879,6 +952,9 @@ static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
19052 + return IRQ_NONE;
19053 + }
19054 +
19055 ++ l1_status = iommu_read(iommu, IOMMU_L1PG_INT_REG);
19056 ++ l2_status = iommu_read(iommu, IOMMU_L2PG_INT_REG);
19057 ++
19058 + if (status & IOMMU_INT_INVALID_L2PG)
19059 + sun50i_iommu_handle_pt_irq(iommu,
19060 + IOMMU_INT_ERR_ADDR_L2_REG,
19061 +@@ -892,8 +968,9 @@ static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
19062 +
19063 + iommu_write(iommu, IOMMU_INT_CLR_REG, status);
19064 +
19065 +- iommu_write(iommu, IOMMU_RESET_REG, ~status);
19066 +- iommu_write(iommu, IOMMU_RESET_REG, status);
19067 ++ resets = (status | l1_status | l2_status) & IOMMU_INT_MASTER_MASK;
19068 ++ iommu_write(iommu, IOMMU_RESET_REG, ~resets);
19069 ++ iommu_write(iommu, IOMMU_RESET_REG, IOMMU_RESET_RELEASE_ALL);
19070 +
19071 + spin_unlock(&iommu->iommu_lock);
19072 +
19073 +diff --git a/drivers/irqchip/irq-gic-pm.c b/drivers/irqchip/irq-gic-pm.c
19074 +index b60e1853593f4..3989d16f997b3 100644
19075 +--- a/drivers/irqchip/irq-gic-pm.c
19076 ++++ b/drivers/irqchip/irq-gic-pm.c
19077 +@@ -102,7 +102,7 @@ static int gic_probe(struct platform_device *pdev)
19078 +
19079 + pm_runtime_enable(dev);
19080 +
19081 +- ret = pm_runtime_get_sync(dev);
19082 ++ ret = pm_runtime_resume_and_get(dev);
19083 + if (ret < 0)
19084 + goto rpm_disable;
19085 +
19086 +diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
19087 +index 0da8716f8f24b..c4584e2f0ad3d 100644
19088 +--- a/drivers/irqchip/irq-loongson-liointc.c
19089 ++++ b/drivers/irqchip/irq-loongson-liointc.c
19090 +@@ -207,10 +207,13 @@ static int liointc_init(phys_addr_t addr, unsigned long size, int revision,
19091 + "reg-names", core_reg_names[i]);
19092 +
19093 + if (index < 0)
19094 +- goto out_iounmap;
19095 ++ continue;
19096 +
19097 + priv->core_isr[i] = of_iomap(node, index);
19098 + }
19099 ++
19100 ++ if (!priv->core_isr[0])
19101 ++ goto out_iounmap;
19102 + }
19103 +
19104 + /* Setup IRQ domain */
19105 +diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
19106 +index c01b9c2570053..03493cda65a37 100644
19107 +--- a/drivers/irqchip/irq-loongson-pch-pic.c
19108 ++++ b/drivers/irqchip/irq-loongson-pch-pic.c
19109 +@@ -159,6 +159,9 @@ static int pch_pic_domain_translate(struct irq_domain *d,
19110 + return -EINVAL;
19111 +
19112 + if (of_node) {
19113 ++ if (fwspec->param_count < 2)
19114 ++ return -EINVAL;
19115 ++
19116 + *hwirq = fwspec->param[0] + priv->ht_vec_base;
19117 + *type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK;
19118 + } else {
19119 +diff --git a/drivers/irqchip/irq-wpcm450-aic.c b/drivers/irqchip/irq-wpcm450-aic.c
19120 +index 0dcbeb1a05a1f..91df62a64cd91 100644
19121 +--- a/drivers/irqchip/irq-wpcm450-aic.c
19122 ++++ b/drivers/irqchip/irq-wpcm450-aic.c
19123 +@@ -146,6 +146,7 @@ static int __init wpcm450_aic_of_init(struct device_node *node,
19124 + aic->regs = of_iomap(node, 0);
19125 + if (!aic->regs) {
19126 + pr_err("Failed to map WPCM450 AIC registers\n");
19127 ++ kfree(aic);
19128 + return -ENOMEM;
19129 + }
19130 +
19131 +diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
19132 +index 4f7eaa17fb274..e840609c50eb7 100644
19133 +--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
19134 ++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
19135 +@@ -3217,6 +3217,7 @@ static int
19136 + hfcm_l1callback(struct dchannel *dch, u_int cmd)
19137 + {
19138 + struct hfc_multi *hc = dch->hw;
19139 ++ struct sk_buff_head free_queue;
19140 + u_long flags;
19141 +
19142 + switch (cmd) {
19143 +@@ -3245,6 +3246,7 @@ hfcm_l1callback(struct dchannel *dch, u_int cmd)
19144 + l1_event(dch->l1, HW_POWERUP_IND);
19145 + break;
19146 + case HW_DEACT_REQ:
19147 ++ __skb_queue_head_init(&free_queue);
19148 + /* start deactivation */
19149 + spin_lock_irqsave(&hc->lock, flags);
19150 + if (hc->ctype == HFC_TYPE_E1) {
19151 +@@ -3264,20 +3266,21 @@ hfcm_l1callback(struct dchannel *dch, u_int cmd)
19152 + plxsd_checksync(hc, 0);
19153 + }
19154 + }
19155 +- skb_queue_purge(&dch->squeue);
19156 ++ skb_queue_splice_init(&dch->squeue, &free_queue);
19157 + if (dch->tx_skb) {
19158 +- dev_kfree_skb(dch->tx_skb);
19159 ++ __skb_queue_tail(&free_queue, dch->tx_skb);
19160 + dch->tx_skb = NULL;
19161 + }
19162 + dch->tx_idx = 0;
19163 + if (dch->rx_skb) {
19164 +- dev_kfree_skb(dch->rx_skb);
19165 ++ __skb_queue_tail(&free_queue, dch->rx_skb);
19166 + dch->rx_skb = NULL;
19167 + }
19168 + test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
19169 + if (test_and_clear_bit(FLG_BUSY_TIMER, &dch->Flags))
19170 + del_timer(&dch->timer);
19171 + spin_unlock_irqrestore(&hc->lock, flags);
19172 ++ __skb_queue_purge(&free_queue);
19173 + break;
19174 + case HW_POWERUP_REQ:
19175 + spin_lock_irqsave(&hc->lock, flags);
19176 +@@ -3384,6 +3387,9 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
19177 + case PH_DEACTIVATE_REQ:
19178 + test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
19179 + if (dch->dev.D.protocol != ISDN_P_TE_S0) {
19180 ++ struct sk_buff_head free_queue;
19181 ++
19182 ++ __skb_queue_head_init(&free_queue);
19183 + spin_lock_irqsave(&hc->lock, flags);
19184 + if (debug & DEBUG_HFCMULTI_MSG)
19185 + printk(KERN_DEBUG
19186 +@@ -3405,14 +3411,14 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
19187 + /* deactivate */
19188 + dch->state = 1;
19189 + }
19190 +- skb_queue_purge(&dch->squeue);
19191 ++ skb_queue_splice_init(&dch->squeue, &free_queue);
19192 + if (dch->tx_skb) {
19193 +- dev_kfree_skb(dch->tx_skb);
19194 ++ __skb_queue_tail(&free_queue, dch->tx_skb);
19195 + dch->tx_skb = NULL;
19196 + }
19197 + dch->tx_idx = 0;
19198 + if (dch->rx_skb) {
19199 +- dev_kfree_skb(dch->rx_skb);
19200 ++ __skb_queue_tail(&free_queue, dch->rx_skb);
19201 + dch->rx_skb = NULL;
19202 + }
19203 + test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
19204 +@@ -3424,6 +3430,7 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
19205 + #endif
19206 + ret = 0;
19207 + spin_unlock_irqrestore(&hc->lock, flags);
19208 ++ __skb_queue_purge(&free_queue);
19209 + } else
19210 + ret = l1_event(dch->l1, hh->prim);
19211 + break;
19212 +diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
19213 +index e964a8dd8512a..c0331b2680108 100644
19214 +--- a/drivers/isdn/hardware/mISDN/hfcpci.c
19215 ++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
19216 +@@ -1617,16 +1617,19 @@ hfcpci_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
19217 + test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
19218 + spin_lock_irqsave(&hc->lock, flags);
19219 + if (hc->hw.protocol == ISDN_P_NT_S0) {
19220 ++ struct sk_buff_head free_queue;
19221 ++
19222 ++ __skb_queue_head_init(&free_queue);
19223 + /* prepare deactivation */
19224 + Write_hfc(hc, HFCPCI_STATES, 0x40);
19225 +- skb_queue_purge(&dch->squeue);
19226 ++ skb_queue_splice_init(&dch->squeue, &free_queue);
19227 + if (dch->tx_skb) {
19228 +- dev_kfree_skb(dch->tx_skb);
19229 ++ __skb_queue_tail(&free_queue, dch->tx_skb);
19230 + dch->tx_skb = NULL;
19231 + }
19232 + dch->tx_idx = 0;
19233 + if (dch->rx_skb) {
19234 +- dev_kfree_skb(dch->rx_skb);
19235 ++ __skb_queue_tail(&free_queue, dch->rx_skb);
19236 + dch->rx_skb = NULL;
19237 + }
19238 + test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
19239 +@@ -1639,10 +1642,12 @@ hfcpci_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
19240 + hc->hw.mst_m &= ~HFCPCI_MASTER;
19241 + Write_hfc(hc, HFCPCI_MST_MODE, hc->hw.mst_m);
19242 + ret = 0;
19243 ++ spin_unlock_irqrestore(&hc->lock, flags);
19244 ++ __skb_queue_purge(&free_queue);
19245 + } else {
19246 + ret = l1_event(dch->l1, hh->prim);
19247 ++ spin_unlock_irqrestore(&hc->lock, flags);
19248 + }
19249 +- spin_unlock_irqrestore(&hc->lock, flags);
19250 + break;
19251 + }
19252 + if (!ret)
19253 +diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c
19254 +index 651f2f8f685b7..1efd17979f240 100644
19255 +--- a/drivers/isdn/hardware/mISDN/hfcsusb.c
19256 ++++ b/drivers/isdn/hardware/mISDN/hfcsusb.c
19257 +@@ -326,20 +326,24 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
19258 + test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
19259 +
19260 + if (hw->protocol == ISDN_P_NT_S0) {
19261 ++ struct sk_buff_head free_queue;
19262 ++
19263 ++ __skb_queue_head_init(&free_queue);
19264 + hfcsusb_ph_command(hw, HFC_L1_DEACTIVATE_NT);
19265 + spin_lock_irqsave(&hw->lock, flags);
19266 +- skb_queue_purge(&dch->squeue);
19267 ++ skb_queue_splice_init(&dch->squeue, &free_queue);
19268 + if (dch->tx_skb) {
19269 +- dev_kfree_skb(dch->tx_skb);
19270 ++ __skb_queue_tail(&free_queue, dch->tx_skb);
19271 + dch->tx_skb = NULL;
19272 + }
19273 + dch->tx_idx = 0;
19274 + if (dch->rx_skb) {
19275 +- dev_kfree_skb(dch->rx_skb);
19276 ++ __skb_queue_tail(&free_queue, dch->rx_skb);
19277 + dch->rx_skb = NULL;
19278 + }
19279 + test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
19280 + spin_unlock_irqrestore(&hw->lock, flags);
19281 ++ __skb_queue_purge(&free_queue);
19282 + #ifdef FIXME
19283 + if (test_and_clear_bit(FLG_L1_BUSY, &dch->Flags))
19284 + dchannel_sched_event(&hc->dch, D_CLEARBUSY);
19285 +@@ -1330,7 +1334,7 @@ tx_iso_complete(struct urb *urb)
19286 + printk("\n");
19287 + }
19288 +
19289 +- dev_kfree_skb(tx_skb);
19290 ++ dev_consume_skb_irq(tx_skb);
19291 + tx_skb = NULL;
19292 + if (fifo->dch && get_next_dframe(fifo->dch))
19293 + tx_skb = fifo->dch->tx_skb;
19294 +diff --git a/drivers/leds/leds-is31fl319x.c b/drivers/leds/leds-is31fl319x.c
19295 +index 52b59b62f437c..b2f4c4ec7c567 100644
19296 +--- a/drivers/leds/leds-is31fl319x.c
19297 ++++ b/drivers/leds/leds-is31fl319x.c
19298 +@@ -38,6 +38,7 @@
19299 + #define IS31FL3190_CURRENT_uA_MIN 5000
19300 + #define IS31FL3190_CURRENT_uA_DEFAULT 42000
19301 + #define IS31FL3190_CURRENT_uA_MAX 42000
19302 ++#define IS31FL3190_CURRENT_SHIFT 2
19303 + #define IS31FL3190_CURRENT_MASK GENMASK(4, 2)
19304 + #define IS31FL3190_CURRENT_5_mA 0x02
19305 + #define IS31FL3190_CURRENT_10_mA 0x01
19306 +@@ -553,7 +554,7 @@ static int is31fl319x_probe(struct i2c_client *client)
19307 + is31fl3196_db_to_gain(is31->audio_gain_db));
19308 + else
19309 + regmap_update_bits(is31->regmap, IS31FL3190_CURRENT, IS31FL3190_CURRENT_MASK,
19310 +- is31fl3190_microamp_to_cs(dev, aggregated_led_microamp));
19311 ++ is31fl3190_microamp_to_cs(dev, aggregated_led_microamp) << IS31FL3190_CURRENT_SHIFT);
19312 +
19313 + for (i = 0; i < is31->cdef->num_leds; i++) {
19314 + struct is31fl319x_led *led = &is31->leds[i];
19315 +diff --git a/drivers/leds/rgb/leds-qcom-lpg.c b/drivers/leds/rgb/leds-qcom-lpg.c
19316 +index 02f51cc618376..c1a56259226fb 100644
19317 +--- a/drivers/leds/rgb/leds-qcom-lpg.c
19318 ++++ b/drivers/leds/rgb/leds-qcom-lpg.c
19319 +@@ -602,8 +602,8 @@ static void lpg_brightness_set(struct lpg_led *led, struct led_classdev *cdev,
19320 + lpg_lut_sync(lpg, lut_mask);
19321 + }
19322 +
19323 +-static void lpg_brightness_single_set(struct led_classdev *cdev,
19324 +- enum led_brightness value)
19325 ++static int lpg_brightness_single_set(struct led_classdev *cdev,
19326 ++ enum led_brightness value)
19327 + {
19328 + struct lpg_led *led = container_of(cdev, struct lpg_led, cdev);
19329 + struct mc_subled info;
19330 +@@ -614,10 +614,12 @@ static void lpg_brightness_single_set(struct led_classdev *cdev,
19331 + lpg_brightness_set(led, cdev, &info);
19332 +
19333 + mutex_unlock(&led->lpg->lock);
19334 ++
19335 ++ return 0;
19336 + }
19337 +
19338 +-static void lpg_brightness_mc_set(struct led_classdev *cdev,
19339 +- enum led_brightness value)
19340 ++static int lpg_brightness_mc_set(struct led_classdev *cdev,
19341 ++ enum led_brightness value)
19342 + {
19343 + struct led_classdev_mc *mc = lcdev_to_mccdev(cdev);
19344 + struct lpg_led *led = container_of(mc, struct lpg_led, mcdev);
19345 +@@ -628,6 +630,8 @@ static void lpg_brightness_mc_set(struct led_classdev *cdev,
19346 + lpg_brightness_set(led, cdev, mc->subled_info);
19347 +
19348 + mutex_unlock(&led->lpg->lock);
19349 ++
19350 ++ return 0;
19351 + }
19352 +
19353 + static int lpg_blink_set(struct lpg_led *led,
19354 +@@ -1118,7 +1122,7 @@ static int lpg_add_led(struct lpg *lpg, struct device_node *np)
19355 + led->mcdev.num_colors = num_channels;
19356 +
19357 + cdev = &led->mcdev.led_cdev;
19358 +- cdev->brightness_set = lpg_brightness_mc_set;
19359 ++ cdev->brightness_set_blocking = lpg_brightness_mc_set;
19360 + cdev->blink_set = lpg_blink_mc_set;
19361 +
19362 + /* Register pattern accessors only if we have a LUT block */
19363 +@@ -1132,7 +1136,7 @@ static int lpg_add_led(struct lpg *lpg, struct device_node *np)
19364 + return ret;
19365 +
19366 + cdev = &led->cdev;
19367 +- cdev->brightness_set = lpg_brightness_single_set;
19368 ++ cdev->brightness_set_blocking = lpg_brightness_single_set;
19369 + cdev->blink_set = lpg_blink_single_set;
19370 +
19371 + /* Register pattern accessors only if we have a LUT block */
19372 +@@ -1151,7 +1155,7 @@ static int lpg_add_led(struct lpg *lpg, struct device_node *np)
19373 + else
19374 + cdev->brightness = LED_OFF;
19375 +
19376 +- cdev->brightness_set(cdev, cdev->brightness);
19377 ++ cdev->brightness_set_blocking(cdev, cdev->brightness);
19378 +
19379 + init_data.fwnode = of_fwnode_handle(np);
19380 +
19381 +diff --git a/drivers/macintosh/macio-adb.c b/drivers/macintosh/macio-adb.c
19382 +index 9b63bd2551c63..cd4e34d15c26b 100644
19383 +--- a/drivers/macintosh/macio-adb.c
19384 ++++ b/drivers/macintosh/macio-adb.c
19385 +@@ -108,6 +108,10 @@ int macio_init(void)
19386 + return -ENXIO;
19387 + }
19388 + adb = ioremap(r.start, sizeof(struct adb_regs));
19389 ++ if (!adb) {
19390 ++ of_node_put(adbs);
19391 ++ return -ENOMEM;
19392 ++ }
19393 +
19394 + out_8(&adb->ctrl.r, 0);
19395 + out_8(&adb->intr.r, 0);
19396 +diff --git a/drivers/macintosh/macio_asic.c b/drivers/macintosh/macio_asic.c
19397 +index 1ec1e5984563f..3bc1f374e6577 100644
19398 +--- a/drivers/macintosh/macio_asic.c
19399 ++++ b/drivers/macintosh/macio_asic.c
19400 +@@ -424,7 +424,7 @@ static struct macio_dev * macio_add_one_device(struct macio_chip *chip,
19401 + if (of_device_register(&dev->ofdev) != 0) {
19402 + printk(KERN_DEBUG"macio: device registration error for %s!\n",
19403 + dev_name(&dev->ofdev.dev));
19404 +- kfree(dev);
19405 ++ put_device(&dev->ofdev.dev);
19406 + return NULL;
19407 + }
19408 +
19409 +diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c
19410 +index a47aef8df52fd..c6d4957c4da83 100644
19411 +--- a/drivers/mailbox/arm_mhuv2.c
19412 ++++ b/drivers/mailbox/arm_mhuv2.c
19413 +@@ -1062,8 +1062,8 @@ static int mhuv2_probe(struct amba_device *adev, const struct amba_id *id)
19414 + int ret = -EINVAL;
19415 +
19416 + reg = devm_of_iomap(dev, dev->of_node, 0, NULL);
19417 +- if (!reg)
19418 +- return -ENOMEM;
19419 ++ if (IS_ERR(reg))
19420 ++ return PTR_ERR(reg);
19421 +
19422 + mhu = devm_kzalloc(dev, sizeof(*mhu), GFP_KERNEL);
19423 + if (!mhu)
19424 +diff --git a/drivers/mailbox/mailbox-mpfs.c b/drivers/mailbox/mailbox-mpfs.c
19425 +index cfacb3f320a64..853901acaeec2 100644
19426 +--- a/drivers/mailbox/mailbox-mpfs.c
19427 ++++ b/drivers/mailbox/mailbox-mpfs.c
19428 +@@ -2,7 +2,7 @@
19429 + /*
19430 + * Microchip PolarFire SoC (MPFS) system controller/mailbox controller driver
19431 + *
19432 +- * Copyright (c) 2020 Microchip Corporation. All rights reserved.
19433 ++ * Copyright (c) 2020-2022 Microchip Corporation. All rights reserved.
19434 + *
19435 + * Author: Conor Dooley <conor.dooley@×××××××××.com>
19436 + *
19437 +@@ -56,7 +56,7 @@
19438 + #define SCB_STATUS_NOTIFY_MASK BIT(SCB_STATUS_NOTIFY)
19439 +
19440 + #define SCB_STATUS_POS (16)
19441 +-#define SCB_STATUS_MASK GENMASK_ULL(SCB_STATUS_POS + SCB_MASK_WIDTH, SCB_STATUS_POS)
19442 ++#define SCB_STATUS_MASK GENMASK(SCB_STATUS_POS + SCB_MASK_WIDTH - 1, SCB_STATUS_POS)
19443 +
19444 + struct mpfs_mbox {
19445 + struct mbox_controller controller;
19446 +@@ -130,13 +130,38 @@ static void mpfs_mbox_rx_data(struct mbox_chan *chan)
19447 + struct mpfs_mbox *mbox = (struct mpfs_mbox *)chan->con_priv;
19448 + struct mpfs_mss_response *response = mbox->response;
19449 + u16 num_words = ALIGN((response->resp_size), (4)) / 4U;
19450 +- u32 i;
19451 ++ u32 i, status;
19452 +
19453 + if (!response->resp_msg) {
19454 + dev_err(mbox->dev, "failed to assign memory for response %d\n", -ENOMEM);
19455 + return;
19456 + }
19457 +
19458 ++ /*
19459 ++ * The status is stored in bits 31:16 of the SERVICES_SR register.
19460 ++ * It is only valid when BUSY == 0.
19461 ++ * We should *never* get an interrupt while the controller is
19462 ++ * still in the busy state. If we do, something has gone badly
19463 ++ * wrong & the content of the mailbox would not be valid.
19464 ++ */
19465 ++ if (mpfs_mbox_busy(mbox)) {
19466 ++ dev_err(mbox->dev, "got an interrupt but system controller is busy\n");
19467 ++ response->resp_status = 0xDEAD;
19468 ++ return;
19469 ++ }
19470 ++
19471 ++ status = readl_relaxed(mbox->ctrl_base + SERVICES_SR_OFFSET);
19472 ++
19473 ++ /*
19474 ++ * If the status of the individual service is non-zero, the service has
19475 ++ * failed. The contents of the mailbox at this point are not valid,
19476 ++ * so don't bother reading them. Set the status so that the driver
19477 ++ * implementing the service can handle the result.
19478 ++ */
19479 ++ response->resp_status = (status & SCB_STATUS_MASK) >> SCB_STATUS_POS;
19480 ++ if (response->resp_status)
19481 ++ return;
19482 ++
19483 + if (!mpfs_mbox_busy(mbox)) {
19484 + for (i = 0; i < num_words; i++) {
19485 + response->resp_msg[i] =
19486 +diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
19487 +index 3c2bc0ca454cf..105d46c9801ba 100644
19488 +--- a/drivers/mailbox/pcc.c
19489 ++++ b/drivers/mailbox/pcc.c
19490 +@@ -743,6 +743,7 @@ static int __init pcc_init(void)
19491 +
19492 + if (IS_ERR(pcc_pdev)) {
19493 + pr_debug("Err creating PCC platform bundle\n");
19494 ++ pcc_chan_count = 0;
19495 + return PTR_ERR(pcc_pdev);
19496 + }
19497 +
19498 +diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
19499 +index 31a0fa9142744..12e004ff1a147 100644
19500 +--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
19501 ++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
19502 +@@ -493,6 +493,7 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
19503 + ret = device_register(&ipi_mbox->dev);
19504 + if (ret) {
19505 + dev_err(dev, "Failed to register ipi mbox dev.\n");
19506 ++ put_device(&ipi_mbox->dev);
19507 + return ret;
19508 + }
19509 + mdev = &ipi_mbox->dev;
19510 +@@ -619,7 +620,8 @@ static void zynqmp_ipi_free_mboxes(struct zynqmp_ipi_pdata *pdata)
19511 + ipi_mbox = &pdata->ipi_mboxes[i];
19512 + if (ipi_mbox->dev.parent) {
19513 + mbox_controller_unregister(&ipi_mbox->mbox);
19514 +- device_unregister(&ipi_mbox->dev);
19515 ++ if (device_is_registered(&ipi_mbox->dev))
19516 ++ device_unregister(&ipi_mbox->dev);
19517 + }
19518 + }
19519 + }
19520 +diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
19521 +index 338fc889b357a..b8ad4f16b4acd 100644
19522 +--- a/drivers/mcb/mcb-core.c
19523 ++++ b/drivers/mcb/mcb-core.c
19524 +@@ -71,8 +71,10 @@ static int mcb_probe(struct device *dev)
19525 +
19526 + get_device(dev);
19527 + ret = mdrv->probe(mdev, found_id);
19528 +- if (ret)
19529 ++ if (ret) {
19530 + module_put(carrier_mod);
19531 ++ put_device(dev);
19532 ++ }
19533 +
19534 + return ret;
19535 + }
19536 +diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
19537 +index 0266bfddfbe27..aa6938da0db85 100644
19538 +--- a/drivers/mcb/mcb-parse.c
19539 ++++ b/drivers/mcb/mcb-parse.c
19540 +@@ -108,7 +108,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
19541 + return 0;
19542 +
19543 + err:
19544 +- mcb_free_dev(mdev);
19545 ++ put_device(&mdev->dev);
19546 +
19547 + return ret;
19548 + }
19549 +diff --git a/drivers/md/dm.c b/drivers/md/dm.c
19550 +index 95a1ee3d314eb..e30c2d2bc9c78 100644
19551 +--- a/drivers/md/dm.c
19552 ++++ b/drivers/md/dm.c
19553 +@@ -732,28 +732,48 @@ static char *_dm_claim_ptr = "I belong to device-mapper";
19554 + /*
19555 + * Open a table device so we can use it as a map destination.
19556 + */
19557 +-static int open_table_device(struct table_device *td, dev_t dev,
19558 +- struct mapped_device *md)
19559 ++static struct table_device *open_table_device(struct mapped_device *md,
19560 ++ dev_t dev, fmode_t mode)
19561 + {
19562 ++ struct table_device *td;
19563 + struct block_device *bdev;
19564 + u64 part_off;
19565 + int r;
19566 +
19567 +- BUG_ON(td->dm_dev.bdev);
19568 ++ td = kmalloc_node(sizeof(*td), GFP_KERNEL, md->numa_node_id);
19569 ++ if (!td)
19570 ++ return ERR_PTR(-ENOMEM);
19571 ++ refcount_set(&td->count, 1);
19572 +
19573 +- bdev = blkdev_get_by_dev(dev, td->dm_dev.mode | FMODE_EXCL, _dm_claim_ptr);
19574 +- if (IS_ERR(bdev))
19575 +- return PTR_ERR(bdev);
19576 ++ bdev = blkdev_get_by_dev(dev, mode | FMODE_EXCL, _dm_claim_ptr);
19577 ++ if (IS_ERR(bdev)) {
19578 ++ r = PTR_ERR(bdev);
19579 ++ goto out_free_td;
19580 ++ }
19581 +
19582 +- r = bd_link_disk_holder(bdev, dm_disk(md));
19583 +- if (r) {
19584 +- blkdev_put(bdev, td->dm_dev.mode | FMODE_EXCL);
19585 +- return r;
19586 ++ /*
19587 ++ * We can be called before the dm disk is added. In that case we can't
19588 ++ * register the holder relation here. It will be done once add_disk was
19589 ++ * called.
19590 ++ */
19591 ++ if (md->disk->slave_dir) {
19592 ++ r = bd_link_disk_holder(bdev, md->disk);
19593 ++ if (r)
19594 ++ goto out_blkdev_put;
19595 + }
19596 +
19597 ++ td->dm_dev.mode = mode;
19598 + td->dm_dev.bdev = bdev;
19599 + td->dm_dev.dax_dev = fs_dax_get_by_bdev(bdev, &part_off, NULL, NULL);
19600 +- return 0;
19601 ++ format_dev_t(td->dm_dev.name, dev);
19602 ++ list_add(&td->list, &md->table_devices);
19603 ++ return td;
19604 ++
19605 ++out_blkdev_put:
19606 ++ blkdev_put(bdev, mode | FMODE_EXCL);
19607 ++out_free_td:
19608 ++ kfree(td);
19609 ++ return ERR_PTR(r);
19610 + }
19611 +
19612 + /*
19613 +@@ -761,14 +781,12 @@ static int open_table_device(struct table_device *td, dev_t dev,
19614 + */
19615 + static void close_table_device(struct table_device *td, struct mapped_device *md)
19616 + {
19617 +- if (!td->dm_dev.bdev)
19618 +- return;
19619 +-
19620 +- bd_unlink_disk_holder(td->dm_dev.bdev, dm_disk(md));
19621 ++ if (md->disk->slave_dir)
19622 ++ bd_unlink_disk_holder(td->dm_dev.bdev, md->disk);
19623 + blkdev_put(td->dm_dev.bdev, td->dm_dev.mode | FMODE_EXCL);
19624 + put_dax(td->dm_dev.dax_dev);
19625 +- td->dm_dev.bdev = NULL;
19626 +- td->dm_dev.dax_dev = NULL;
19627 ++ list_del(&td->list);
19628 ++ kfree(td);
19629 + }
19630 +
19631 + static struct table_device *find_table_device(struct list_head *l, dev_t dev,
19632 +@@ -786,31 +804,16 @@ static struct table_device *find_table_device(struct list_head *l, dev_t dev,
19633 + int dm_get_table_device(struct mapped_device *md, dev_t dev, fmode_t mode,
19634 + struct dm_dev **result)
19635 + {
19636 +- int r;
19637 + struct table_device *td;
19638 +
19639 + mutex_lock(&md->table_devices_lock);
19640 + td = find_table_device(&md->table_devices, dev, mode);
19641 + if (!td) {
19642 +- td = kmalloc_node(sizeof(*td), GFP_KERNEL, md->numa_node_id);
19643 +- if (!td) {
19644 ++ td = open_table_device(md, dev, mode);
19645 ++ if (IS_ERR(td)) {
19646 + mutex_unlock(&md->table_devices_lock);
19647 +- return -ENOMEM;
19648 ++ return PTR_ERR(td);
19649 + }
19650 +-
19651 +- td->dm_dev.mode = mode;
19652 +- td->dm_dev.bdev = NULL;
19653 +-
19654 +- if ((r = open_table_device(td, dev, md))) {
19655 +- mutex_unlock(&md->table_devices_lock);
19656 +- kfree(td);
19657 +- return r;
19658 +- }
19659 +-
19660 +- format_dev_t(td->dm_dev.name, dev);
19661 +-
19662 +- refcount_set(&td->count, 1);
19663 +- list_add(&td->list, &md->table_devices);
19664 + } else {
19665 + refcount_inc(&td->count);
19666 + }
19667 +@@ -825,11 +828,8 @@ void dm_put_table_device(struct mapped_device *md, struct dm_dev *d)
19668 + struct table_device *td = container_of(d, struct table_device, dm_dev);
19669 +
19670 + mutex_lock(&md->table_devices_lock);
19671 +- if (refcount_dec_and_test(&td->count)) {
19672 ++ if (refcount_dec_and_test(&td->count))
19673 + close_table_device(td, md);
19674 +- list_del(&td->list);
19675 +- kfree(td);
19676 +- }
19677 + mutex_unlock(&md->table_devices_lock);
19678 + }
19679 +
19680 +@@ -1972,8 +1972,21 @@ static void cleanup_mapped_device(struct mapped_device *md)
19681 + md->disk->private_data = NULL;
19682 + spin_unlock(&_minor_lock);
19683 + if (dm_get_md_type(md) != DM_TYPE_NONE) {
19684 ++ struct table_device *td;
19685 ++
19686 + dm_sysfs_exit(md);
19687 ++ list_for_each_entry(td, &md->table_devices, list) {
19688 ++ bd_unlink_disk_holder(td->dm_dev.bdev,
19689 ++ md->disk);
19690 ++ }
19691 ++
19692 ++ /*
19693 ++ * Hold lock to make sure del_gendisk() won't concurrent
19694 ++ * with open/close_table_device().
19695 ++ */
19696 ++ mutex_lock(&md->table_devices_lock);
19697 + del_gendisk(md->disk);
19698 ++ mutex_unlock(&md->table_devices_lock);
19699 + }
19700 + dm_queue_destroy_crypto_profile(md->queue);
19701 + put_disk(md->disk);
19702 +@@ -2305,6 +2318,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
19703 + {
19704 + enum dm_queue_mode type = dm_table_get_type(t);
19705 + struct queue_limits limits;
19706 ++ struct table_device *td;
19707 + int r;
19708 +
19709 + switch (type) {
19710 +@@ -2333,17 +2347,40 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
19711 + if (r)
19712 + return r;
19713 +
19714 ++ /*
19715 ++ * Hold lock to make sure add_disk() and del_gendisk() won't concurrent
19716 ++ * with open_table_device() and close_table_device().
19717 ++ */
19718 ++ mutex_lock(&md->table_devices_lock);
19719 + r = add_disk(md->disk);
19720 ++ mutex_unlock(&md->table_devices_lock);
19721 + if (r)
19722 + return r;
19723 +
19724 +- r = dm_sysfs_init(md);
19725 +- if (r) {
19726 +- del_gendisk(md->disk);
19727 +- return r;
19728 ++ /*
19729 ++ * Register the holder relationship for devices added before the disk
19730 ++ * was live.
19731 ++ */
19732 ++ list_for_each_entry(td, &md->table_devices, list) {
19733 ++ r = bd_link_disk_holder(td->dm_dev.bdev, md->disk);
19734 ++ if (r)
19735 ++ goto out_undo_holders;
19736 + }
19737 ++
19738 ++ r = dm_sysfs_init(md);
19739 ++ if (r)
19740 ++ goto out_undo_holders;
19741 ++
19742 + md->type = type;
19743 + return 0;
19744 ++
19745 ++out_undo_holders:
19746 ++ list_for_each_entry_continue_reverse(td, &md->table_devices, list)
19747 ++ bd_unlink_disk_holder(td->dm_dev.bdev, md->disk);
19748 ++ mutex_lock(&md->table_devices_lock);
19749 ++ del_gendisk(md->disk);
19750 ++ mutex_unlock(&md->table_devices_lock);
19751 ++ return r;
19752 + }
19753 +
19754 + struct mapped_device *dm_get_md(dev_t dev)
19755 +diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
19756 +index bf6dffadbe6f6..63ece30114e53 100644
19757 +--- a/drivers/md/md-bitmap.c
19758 ++++ b/drivers/md/md-bitmap.c
19759 +@@ -2195,20 +2195,23 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
19760 +
19761 + if (set) {
19762 + bmc_new = md_bitmap_get_counter(&bitmap->counts, block, &new_blocks, 1);
19763 +- if (*bmc_new == 0) {
19764 +- /* need to set on-disk bits too. */
19765 +- sector_t end = block + new_blocks;
19766 +- sector_t start = block >> chunkshift;
19767 +- start <<= chunkshift;
19768 +- while (start < end) {
19769 +- md_bitmap_file_set_bit(bitmap, block);
19770 +- start += 1 << chunkshift;
19771 ++ if (bmc_new) {
19772 ++ if (*bmc_new == 0) {
19773 ++ /* need to set on-disk bits too. */
19774 ++ sector_t end = block + new_blocks;
19775 ++ sector_t start = block >> chunkshift;
19776 ++
19777 ++ start <<= chunkshift;
19778 ++ while (start < end) {
19779 ++ md_bitmap_file_set_bit(bitmap, block);
19780 ++ start += 1 << chunkshift;
19781 ++ }
19782 ++ *bmc_new = 2;
19783 ++ md_bitmap_count_page(&bitmap->counts, block, 1);
19784 ++ md_bitmap_set_pending(&bitmap->counts, block);
19785 + }
19786 +- *bmc_new = 2;
19787 +- md_bitmap_count_page(&bitmap->counts, block, 1);
19788 +- md_bitmap_set_pending(&bitmap->counts, block);
19789 ++ *bmc_new |= NEEDED_MASK;
19790 + }
19791 +- *bmc_new |= NEEDED_MASK;
19792 + if (new_blocks < old_blocks)
19793 + old_blocks = new_blocks;
19794 + }
19795 +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
19796 +index 857c49399c28e..b536befd88988 100644
19797 +--- a/drivers/md/raid0.c
19798 ++++ b/drivers/md/raid0.c
19799 +@@ -398,7 +398,6 @@ static int raid0_run(struct mddev *mddev)
19800 +
19801 + blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
19802 + blk_queue_max_write_zeroes_sectors(mddev->queue, mddev->chunk_sectors);
19803 +- blk_queue_max_discard_sectors(mddev->queue, UINT_MAX);
19804 +
19805 + blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
19806 + blk_queue_io_opt(mddev->queue,
19807 +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
19808 +index 05d8438cfec88..58f705f429480 100644
19809 +--- a/drivers/md/raid1.c
19810 ++++ b/drivers/md/raid1.c
19811 +@@ -3159,6 +3159,7 @@ static int raid1_run(struct mddev *mddev)
19812 + * RAID1 needs at least one disk in active
19813 + */
19814 + if (conf->raid_disks - mddev->degraded < 1) {
19815 ++ md_unregister_thread(&conf->thread);
19816 + ret = -EINVAL;
19817 + goto abort;
19818 + }
19819 +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
19820 +index 3aa8b6e11d585..9a6503f5cb982 100644
19821 +--- a/drivers/md/raid10.c
19822 ++++ b/drivers/md/raid10.c
19823 +@@ -4145,8 +4145,6 @@ static int raid10_run(struct mddev *mddev)
19824 + conf->thread = NULL;
19825 +
19826 + if (mddev->queue) {
19827 +- blk_queue_max_discard_sectors(mddev->queue,
19828 +- UINT_MAX);
19829 + blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
19830 + blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
19831 + raid10_set_io_opt(conf);
19832 +diff --git a/drivers/media/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb-core/dvb_ca_en50221.c
19833 +index 15a08d8c69ef8..c2d2792227f86 100644
19834 +--- a/drivers/media/dvb-core/dvb_ca_en50221.c
19835 ++++ b/drivers/media/dvb-core/dvb_ca_en50221.c
19836 +@@ -157,7 +157,7 @@ static void dvb_ca_private_free(struct dvb_ca_private *ca)
19837 + {
19838 + unsigned int i;
19839 +
19840 +- dvb_free_device(ca->dvbdev);
19841 ++ dvb_device_put(ca->dvbdev);
19842 + for (i = 0; i < ca->slot_count; i++)
19843 + vfree(ca->slot_info[i].rx_buffer.data);
19844 +
19845 +diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
19846 +index 48e735cdbe6bb..c41a7e5c2b928 100644
19847 +--- a/drivers/media/dvb-core/dvb_frontend.c
19848 ++++ b/drivers/media/dvb-core/dvb_frontend.c
19849 +@@ -136,7 +136,7 @@ static void __dvb_frontend_free(struct dvb_frontend *fe)
19850 + struct dvb_frontend_private *fepriv = fe->frontend_priv;
19851 +
19852 + if (fepriv)
19853 +- dvb_free_device(fepriv->dvbdev);
19854 ++ dvb_device_put(fepriv->dvbdev);
19855 +
19856 + dvb_frontend_invoke_release(fe, fe->ops.release);
19857 +
19858 +@@ -2986,6 +2986,7 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
19859 + .name = fe->ops.info.name,
19860 + #endif
19861 + };
19862 ++ int ret;
19863 +
19864 + dev_dbg(dvb->device, "%s:\n", __func__);
19865 +
19866 +@@ -3019,8 +3020,13 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
19867 + "DVB: registering adapter %i frontend %i (%s)...\n",
19868 + fe->dvb->num, fe->id, fe->ops.info.name);
19869 +
19870 +- dvb_register_device(fe->dvb, &fepriv->dvbdev, &dvbdev_template,
19871 ++ ret = dvb_register_device(fe->dvb, &fepriv->dvbdev, &dvbdev_template,
19872 + fe, DVB_DEVICE_FRONTEND, 0);
19873 ++ if (ret) {
19874 ++ dvb_frontend_put(fe);
19875 ++ mutex_unlock(&frontend_mutex);
19876 ++ return ret;
19877 ++ }
19878 +
19879 + /*
19880 + * Initialize the cache to the proper values according with the
19881 +diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
19882 +index 675d877a67b25..9934728734af9 100644
19883 +--- a/drivers/media/dvb-core/dvbdev.c
19884 ++++ b/drivers/media/dvb-core/dvbdev.c
19885 +@@ -97,7 +97,7 @@ static int dvb_device_open(struct inode *inode, struct file *file)
19886 + new_fops = fops_get(dvbdev->fops);
19887 + if (!new_fops)
19888 + goto fail;
19889 +- file->private_data = dvbdev;
19890 ++ file->private_data = dvb_device_get(dvbdev);
19891 + replace_fops(file, new_fops);
19892 + if (file->f_op->open)
19893 + err = file->f_op->open(inode, file);
19894 +@@ -161,6 +161,9 @@ int dvb_generic_release(struct inode *inode, struct file *file)
19895 + }
19896 +
19897 + dvbdev->users++;
19898 ++
19899 ++ dvb_device_put(dvbdev);
19900 ++
19901 + return 0;
19902 + }
19903 + EXPORT_SYMBOL(dvb_generic_release);
19904 +@@ -478,6 +481,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
19905 + }
19906 +
19907 + memcpy(dvbdev, template, sizeof(struct dvb_device));
19908 ++ kref_init(&dvbdev->ref);
19909 + dvbdev->type = type;
19910 + dvbdev->id = id;
19911 + dvbdev->adapter = adap;
19912 +@@ -508,7 +512,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
19913 + #endif
19914 +
19915 + dvbdev->minor = minor;
19916 +- dvb_minors[minor] = dvbdev;
19917 ++ dvb_minors[minor] = dvb_device_get(dvbdev);
19918 + up_write(&minor_rwsem);
19919 +
19920 + ret = dvb_register_media_device(dvbdev, type, minor, demux_sink_pads);
19921 +@@ -553,6 +557,7 @@ void dvb_remove_device(struct dvb_device *dvbdev)
19922 +
19923 + down_write(&minor_rwsem);
19924 + dvb_minors[dvbdev->minor] = NULL;
19925 ++ dvb_device_put(dvbdev);
19926 + up_write(&minor_rwsem);
19927 +
19928 + dvb_media_device_free(dvbdev);
19929 +@@ -564,21 +569,34 @@ void dvb_remove_device(struct dvb_device *dvbdev)
19930 + EXPORT_SYMBOL(dvb_remove_device);
19931 +
19932 +
19933 +-void dvb_free_device(struct dvb_device *dvbdev)
19934 ++static void dvb_free_device(struct kref *ref)
19935 + {
19936 +- if (!dvbdev)
19937 +- return;
19938 ++ struct dvb_device *dvbdev = container_of(ref, struct dvb_device, ref);
19939 +
19940 + kfree (dvbdev->fops);
19941 + kfree (dvbdev);
19942 + }
19943 +-EXPORT_SYMBOL(dvb_free_device);
19944 ++
19945 ++
19946 ++struct dvb_device *dvb_device_get(struct dvb_device *dvbdev)
19947 ++{
19948 ++ kref_get(&dvbdev->ref);
19949 ++ return dvbdev;
19950 ++}
19951 ++EXPORT_SYMBOL(dvb_device_get);
19952 ++
19953 ++
19954 ++void dvb_device_put(struct dvb_device *dvbdev)
19955 ++{
19956 ++ if (dvbdev)
19957 ++ kref_put(&dvbdev->ref, dvb_free_device);
19958 ++}
19959 +
19960 +
19961 + void dvb_unregister_device(struct dvb_device *dvbdev)
19962 + {
19963 + dvb_remove_device(dvbdev);
19964 +- dvb_free_device(dvbdev);
19965 ++ dvb_device_put(dvbdev);
19966 + }
19967 + EXPORT_SYMBOL(dvb_unregister_device);
19968 +
19969 +diff --git a/drivers/media/dvb-frontends/bcm3510.c b/drivers/media/dvb-frontends/bcm3510.c
19970 +index da0ff7b44da41..68b92b4419cff 100644
19971 +--- a/drivers/media/dvb-frontends/bcm3510.c
19972 ++++ b/drivers/media/dvb-frontends/bcm3510.c
19973 +@@ -649,6 +649,7 @@ static int bcm3510_download_firmware(struct dvb_frontend* fe)
19974 + deb_info("firmware chunk, addr: 0x%04x, len: 0x%04x, total length: 0x%04zx\n",addr,len,fw->size);
19975 + if ((ret = bcm3510_write_ram(st,addr,&b[i+4],len)) < 0) {
19976 + err("firmware download failed: %d\n",ret);
19977 ++ release_firmware(fw);
19978 + return ret;
19979 + }
19980 + i += 4 + len;
19981 +diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c
19982 +index 516de278cc493..a12fedcc3a1ce 100644
19983 +--- a/drivers/media/i2c/ad5820.c
19984 ++++ b/drivers/media/i2c/ad5820.c
19985 +@@ -327,18 +327,18 @@ static int ad5820_probe(struct i2c_client *client,
19986 +
19987 + ret = media_entity_pads_init(&coil->subdev.entity, 0, NULL);
19988 + if (ret < 0)
19989 +- goto cleanup2;
19990 ++ goto clean_mutex;
19991 +
19992 + ret = v4l2_async_register_subdev(&coil->subdev);
19993 + if (ret < 0)
19994 +- goto cleanup;
19995 ++ goto clean_entity;
19996 +
19997 + return ret;
19998 +
19999 +-cleanup2:
20000 +- mutex_destroy(&coil->power_lock);
20001 +-cleanup:
20002 ++clean_entity:
20003 + media_entity_cleanup(&coil->subdev.entity);
20004 ++clean_mutex:
20005 ++ mutex_destroy(&coil->power_lock);
20006 + return ret;
20007 + }
20008 +
20009 +diff --git a/drivers/media/i2c/adv748x/adv748x-afe.c b/drivers/media/i2c/adv748x/adv748x-afe.c
20010 +index 02eabe10ab970..00095c7762c24 100644
20011 +--- a/drivers/media/i2c/adv748x/adv748x-afe.c
20012 ++++ b/drivers/media/i2c/adv748x/adv748x-afe.c
20013 +@@ -521,6 +521,10 @@ int adv748x_afe_init(struct adv748x_afe *afe)
20014 + }
20015 + }
20016 +
20017 ++ adv748x_afe_s_input(afe, afe->input);
20018 ++
20019 ++ adv_dbg(state, "AFE Default input set to %d\n", afe->input);
20020 ++
20021 + /* Entity pads and sinks are 0-indexed to match the pads */
20022 + for (i = ADV748X_AFE_SINK_AIN0; i <= ADV748X_AFE_SINK_AIN7; i++)
20023 + afe->pads[i].flags = MEDIA_PAD_FL_SINK;
20024 +diff --git a/drivers/media/i2c/dw9768.c b/drivers/media/i2c/dw9768.c
20025 +index 0f47ef015a1d3..83a3ee275bbe8 100644
20026 +--- a/drivers/media/i2c/dw9768.c
20027 ++++ b/drivers/media/i2c/dw9768.c
20028 +@@ -414,6 +414,7 @@ static int dw9768_probe(struct i2c_client *client)
20029 + {
20030 + struct device *dev = &client->dev;
20031 + struct dw9768 *dw9768;
20032 ++ bool full_power;
20033 + unsigned int i;
20034 + int ret;
20035 +
20036 +@@ -469,13 +470,23 @@ static int dw9768_probe(struct i2c_client *client)
20037 +
20038 + dw9768->sd.entity.function = MEDIA_ENT_F_LENS;
20039 +
20040 ++ /*
20041 ++ * Figure out whether we're going to power up the device here. Generally
20042 ++ * this is done if CONFIG_PM is disabled in a DT system or the device is
20043 ++ * to be powered on in an ACPI system. Similarly for power off in
20044 ++ * remove.
20045 ++ */
20046 + pm_runtime_enable(dev);
20047 +- if (!pm_runtime_enabled(dev)) {
20048 ++ full_power = (is_acpi_node(dev_fwnode(dev)) &&
20049 ++ acpi_dev_state_d0(dev)) ||
20050 ++ (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev));
20051 ++ if (full_power) {
20052 + ret = dw9768_runtime_resume(dev);
20053 + if (ret < 0) {
20054 + dev_err(dev, "failed to power on: %d\n", ret);
20055 + goto err_clean_entity;
20056 + }
20057 ++ pm_runtime_set_active(dev);
20058 + }
20059 +
20060 + ret = v4l2_async_register_subdev(&dw9768->sd);
20061 +@@ -484,14 +495,17 @@ static int dw9768_probe(struct i2c_client *client)
20062 + goto err_power_off;
20063 + }
20064 +
20065 ++ pm_runtime_idle(dev);
20066 ++
20067 + return 0;
20068 +
20069 + err_power_off:
20070 +- if (pm_runtime_enabled(dev))
20071 +- pm_runtime_disable(dev);
20072 +- else
20073 ++ if (full_power) {
20074 + dw9768_runtime_suspend(dev);
20075 ++ pm_runtime_set_suspended(dev);
20076 ++ }
20077 + err_clean_entity:
20078 ++ pm_runtime_disable(dev);
20079 + media_entity_cleanup(&dw9768->sd.entity);
20080 + err_free_handler:
20081 + v4l2_ctrl_handler_free(&dw9768->ctrls);
20082 +@@ -503,14 +517,17 @@ static void dw9768_remove(struct i2c_client *client)
20083 + {
20084 + struct v4l2_subdev *sd = i2c_get_clientdata(client);
20085 + struct dw9768 *dw9768 = sd_to_dw9768(sd);
20086 ++ struct device *dev = &client->dev;
20087 +
20088 + v4l2_async_unregister_subdev(&dw9768->sd);
20089 + v4l2_ctrl_handler_free(&dw9768->ctrls);
20090 + media_entity_cleanup(&dw9768->sd.entity);
20091 +- pm_runtime_disable(&client->dev);
20092 +- if (!pm_runtime_status_suspended(&client->dev))
20093 +- dw9768_runtime_suspend(&client->dev);
20094 +- pm_runtime_set_suspended(&client->dev);
20095 ++ if ((is_acpi_node(dev_fwnode(dev)) && acpi_dev_state_d0(dev)) ||
20096 ++ (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev))) {
20097 ++ dw9768_runtime_suspend(dev);
20098 ++ pm_runtime_set_suspended(dev);
20099 ++ }
20100 ++ pm_runtime_disable(dev);
20101 + }
20102 +
20103 + static const struct of_device_id dw9768_of_table[] = {
20104 +diff --git a/drivers/media/i2c/hi846.c b/drivers/media/i2c/hi846.c
20105 +index c5b69823f257e..7c61873b71981 100644
20106 +--- a/drivers/media/i2c/hi846.c
20107 ++++ b/drivers/media/i2c/hi846.c
20108 +@@ -2008,22 +2008,24 @@ static int hi846_parse_dt(struct hi846 *hi846, struct device *dev)
20109 + bus_cfg.bus.mipi_csi2.num_data_lanes != 4) {
20110 + dev_err(dev, "number of CSI2 data lanes %d is not supported",
20111 + bus_cfg.bus.mipi_csi2.num_data_lanes);
20112 +- v4l2_fwnode_endpoint_free(&bus_cfg);
20113 +- return -EINVAL;
20114 ++ ret = -EINVAL;
20115 ++ goto check_hwcfg_error;
20116 + }
20117 +
20118 + hi846->nr_lanes = bus_cfg.bus.mipi_csi2.num_data_lanes;
20119 +
20120 + if (!bus_cfg.nr_of_link_frequencies) {
20121 + dev_err(dev, "link-frequency property not found in DT\n");
20122 +- return -EINVAL;
20123 ++ ret = -EINVAL;
20124 ++ goto check_hwcfg_error;
20125 + }
20126 +
20127 + /* Check that link frequences for all the modes are in device tree */
20128 + fq = hi846_check_link_freqs(hi846, &bus_cfg);
20129 + if (fq) {
20130 + dev_err(dev, "Link frequency of %lld is not supported\n", fq);
20131 +- return -EINVAL;
20132 ++ ret = -EINVAL;
20133 ++ goto check_hwcfg_error;
20134 + }
20135 +
20136 + v4l2_fwnode_endpoint_free(&bus_cfg);
20137 +@@ -2044,6 +2046,10 @@ static int hi846_parse_dt(struct hi846 *hi846, struct device *dev)
20138 + }
20139 +
20140 + return 0;
20141 ++
20142 ++check_hwcfg_error:
20143 ++ v4l2_fwnode_endpoint_free(&bus_cfg);
20144 ++ return ret;
20145 + }
20146 +
20147 + static int hi846_probe(struct i2c_client *client)
20148 +diff --git a/drivers/media/i2c/mt9p031.c b/drivers/media/i2c/mt9p031.c
20149 +index 45f7b5e52bc39..b69db6fc82618 100644
20150 +--- a/drivers/media/i2c/mt9p031.c
20151 ++++ b/drivers/media/i2c/mt9p031.c
20152 +@@ -702,7 +702,6 @@ static int mt9p031_init_cfg(struct v4l2_subdev *subdev,
20153 + V4L2_SUBDEV_FORMAT_TRY;
20154 +
20155 + crop = __mt9p031_get_pad_crop(mt9p031, sd_state, 0, which);
20156 +- v4l2_subdev_get_try_crop(subdev, sd_state, 0);
20157 + crop->left = MT9P031_COLUMN_START_DEF;
20158 + crop->top = MT9P031_ROW_START_DEF;
20159 + crop->width = MT9P031_WINDOW_WIDTH_DEF;
20160 +diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
20161 +index 2d740397a5d4d..3f6d715efa823 100644
20162 +--- a/drivers/media/i2c/ov5640.c
20163 ++++ b/drivers/media/i2c/ov5640.c
20164 +@@ -3817,7 +3817,8 @@ static int ov5640_probe(struct i2c_client *client)
20165 + sensor->current_mode =
20166 + &ov5640_mode_data[OV5640_MODE_VGA_640_480];
20167 + sensor->last_mode = sensor->current_mode;
20168 +- sensor->current_link_freq = OV5640_DEFAULT_LINK_FREQ;
20169 ++ sensor->current_link_freq =
20170 ++ ov5640_csi2_link_freqs[OV5640_DEFAULT_LINK_FREQ];
20171 +
20172 + sensor->ae_target = 52;
20173 +
20174 +diff --git a/drivers/media/i2c/ov5648.c b/drivers/media/i2c/ov5648.c
20175 +index 84604ea7bdf9e..17465fcf28e33 100644
20176 +--- a/drivers/media/i2c/ov5648.c
20177 ++++ b/drivers/media/i2c/ov5648.c
20178 +@@ -2597,6 +2597,7 @@ static void ov5648_remove(struct i2c_client *client)
20179 + v4l2_ctrl_handler_free(&sensor->ctrls.handler);
20180 + mutex_destroy(&sensor->mutex);
20181 + media_entity_cleanup(&subdev->entity);
20182 ++ v4l2_fwnode_endpoint_free(&sensor->endpoint);
20183 + }
20184 +
20185 + static const struct dev_pm_ops ov5648_pm_ops = {
20186 +diff --git a/drivers/media/pci/saa7164/saa7164-core.c b/drivers/media/pci/saa7164/saa7164-core.c
20187 +index d5f32e3ff5441..754c8be1b6d8b 100644
20188 +--- a/drivers/media/pci/saa7164/saa7164-core.c
20189 ++++ b/drivers/media/pci/saa7164/saa7164-core.c
20190 +@@ -1259,7 +1259,7 @@ static int saa7164_initdev(struct pci_dev *pci_dev,
20191 +
20192 + if (saa7164_dev_setup(dev) < 0) {
20193 + err = -EINVAL;
20194 +- goto fail_free;
20195 ++ goto fail_dev;
20196 + }
20197 +
20198 + /* print pci info */
20199 +@@ -1427,6 +1427,8 @@ fail_fw:
20200 +
20201 + fail_irq:
20202 + saa7164_dev_unregister(dev);
20203 ++fail_dev:
20204 ++ pci_disable_device(pci_dev);
20205 + fail_free:
20206 + v4l2_device_unregister(&dev->v4l2_dev);
20207 + kfree(dev);
20208 +diff --git a/drivers/media/pci/solo6x10/solo6x10-core.c b/drivers/media/pci/solo6x10/solo6x10-core.c
20209 +index 4a546eeefe38f..6d87fbb0ee04a 100644
20210 +--- a/drivers/media/pci/solo6x10/solo6x10-core.c
20211 ++++ b/drivers/media/pci/solo6x10/solo6x10-core.c
20212 +@@ -420,6 +420,7 @@ static int solo_sysfs_init(struct solo_dev *solo_dev)
20213 + solo_dev->nr_chans);
20214 +
20215 + if (device_register(dev)) {
20216 ++ put_device(dev);
20217 + dev->parent = NULL;
20218 + return -ENOMEM;
20219 + }
20220 +diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
20221 +index feb75dc204de8..b27e6bed85f0f 100644
20222 +--- a/drivers/media/platform/amphion/vdec.c
20223 ++++ b/drivers/media/platform/amphion/vdec.c
20224 +@@ -286,6 +286,7 @@ static int vdec_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
20225 + struct vpu_format *cur_fmt;
20226 + int i;
20227 +
20228 ++ vpu_inst_lock(inst);
20229 + cur_fmt = vpu_get_format(inst, f->type);
20230 +
20231 + pixmp->pixelformat = cur_fmt->pixfmt;
20232 +@@ -303,6 +304,7 @@ static int vdec_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
20233 + f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
20234 + f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
20235 + f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
20236 ++ vpu_inst_unlock(inst);
20237 +
20238 + return 0;
20239 + }
20240 +@@ -753,6 +755,9 @@ static bool vdec_check_source_change(struct vpu_inst *inst)
20241 + if (!inst->fh.m2m_ctx)
20242 + return false;
20243 +
20244 ++ if (vdec->reset_codec)
20245 ++ return false;
20246 ++
20247 + if (!vb2_is_streaming(v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx)))
20248 + return true;
20249 + fmt = vpu_helper_find_format(inst, inst->cap_format.type, vdec->codec_info.pixfmt);
20250 +@@ -1088,7 +1093,8 @@ static void vdec_event_seq_hdr(struct vpu_inst *inst, struct vpu_dec_codec_info
20251 + vdec->seq_tag = vdec->codec_info.tag;
20252 + if (vdec->is_source_changed) {
20253 + vdec_update_state(inst, VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE, 0);
20254 +- vpu_notify_source_change(inst);
20255 ++ vdec->source_change++;
20256 ++ vdec_handle_resolution_change(inst);
20257 + vdec->is_source_changed = false;
20258 + }
20259 + }
20260 +@@ -1335,6 +1341,8 @@ static void vdec_abort(struct vpu_inst *inst)
20261 + vdec->decoded_frame_count,
20262 + vdec->display_frame_count,
20263 + vdec->sequence);
20264 ++ if (!vdec->seq_hdr_found)
20265 ++ vdec->reset_codec = true;
20266 + vdec->params.end_flag = 0;
20267 + vdec->drain = 0;
20268 + vdec->params.frame_count = 0;
20269 +@@ -1342,6 +1350,7 @@ static void vdec_abort(struct vpu_inst *inst)
20270 + vdec->display_frame_count = 0;
20271 + vdec->sequence = 0;
20272 + vdec->aborting = false;
20273 ++ inst->extra_size = 0;
20274 + }
20275 +
20276 + static void vdec_stop(struct vpu_inst *inst, bool free)
20277 +@@ -1464,8 +1473,7 @@ static int vdec_start_session(struct vpu_inst *inst, u32 type)
20278 + }
20279 +
20280 + if (V4L2_TYPE_IS_OUTPUT(type)) {
20281 +- if (inst->state == VPU_CODEC_STATE_SEEK)
20282 +- vdec_update_state(inst, vdec->state, 1);
20283 ++ vdec_update_state(inst, vdec->state, 1);
20284 + vdec->eos_received = 0;
20285 + vpu_process_output_buffer(inst);
20286 + } else {
20287 +@@ -1629,6 +1637,7 @@ static int vdec_open(struct file *file)
20288 + return ret;
20289 +
20290 + vdec->fixed_fmt = false;
20291 ++ vdec->state = VPU_CODEC_STATE_ACTIVE;
20292 + inst->min_buffer_cap = VDEC_MIN_BUFFER_CAP;
20293 + inst->min_buffer_out = VDEC_MIN_BUFFER_OUT;
20294 + vdec_init(file);
20295 +diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h
20296 +index beac0309ca8d9..048c23c2bf4db 100644
20297 +--- a/drivers/media/platform/amphion/vpu.h
20298 ++++ b/drivers/media/platform/amphion/vpu.h
20299 +@@ -13,6 +13,7 @@
20300 + #include <linux/mailbox_controller.h>
20301 + #include <linux/kfifo.h>
20302 +
20303 ++#define VPU_TIMEOUT_WAKEUP msecs_to_jiffies(200)
20304 + #define VPU_TIMEOUT msecs_to_jiffies(1000)
20305 + #define VPU_INST_NULL_ID (-1L)
20306 + #define VPU_MSG_BUFFER_SIZE (8192)
20307 +diff --git a/drivers/media/platform/amphion/vpu_cmds.c b/drivers/media/platform/amphion/vpu_cmds.c
20308 +index f4d7ca78a6212..fa581ba6bab2d 100644
20309 +--- a/drivers/media/platform/amphion/vpu_cmds.c
20310 ++++ b/drivers/media/platform/amphion/vpu_cmds.c
20311 +@@ -269,7 +269,7 @@ exit:
20312 + return flag;
20313 + }
20314 +
20315 +-static int sync_session_response(struct vpu_inst *inst, unsigned long key)
20316 ++static int sync_session_response(struct vpu_inst *inst, unsigned long key, long timeout, int try)
20317 + {
20318 + struct vpu_core *core;
20319 +
20320 +@@ -279,10 +279,12 @@ static int sync_session_response(struct vpu_inst *inst, unsigned long key)
20321 + core = inst->core;
20322 +
20323 + call_void_vop(inst, wait_prepare);
20324 +- wait_event_timeout(core->ack_wq, check_is_responsed(inst, key), VPU_TIMEOUT);
20325 ++ wait_event_timeout(core->ack_wq, check_is_responsed(inst, key), timeout);
20326 + call_void_vop(inst, wait_finish);
20327 +
20328 + if (!check_is_responsed(inst, key)) {
20329 ++ if (try)
20330 ++ return -EINVAL;
20331 + dev_err(inst->dev, "[%d] sync session timeout\n", inst->id);
20332 + set_bit(inst->id, &core->hang_mask);
20333 + mutex_lock(&inst->core->cmd_lock);
20334 +@@ -294,6 +296,19 @@ static int sync_session_response(struct vpu_inst *inst, unsigned long key)
20335 + return 0;
20336 + }
20337 +
20338 ++static void vpu_core_keep_active(struct vpu_core *core)
20339 ++{
20340 ++ struct vpu_rpc_event pkt;
20341 ++
20342 ++ memset(&pkt, 0, sizeof(pkt));
20343 ++ vpu_iface_pack_cmd(core, &pkt, 0, VPU_CMD_ID_NOOP, NULL);
20344 ++
20345 ++ dev_dbg(core->dev, "try to wake up\n");
20346 ++ mutex_lock(&core->cmd_lock);
20347 ++ vpu_cmd_send(core, &pkt);
20348 ++ mutex_unlock(&core->cmd_lock);
20349 ++}
20350 ++
20351 + static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data)
20352 + {
20353 + unsigned long key;
20354 +@@ -304,9 +319,25 @@ static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data)
20355 + return -EINVAL;
20356 +
20357 + ret = vpu_request_cmd(inst, id, data, &key, &sync);
20358 +- if (!ret && sync)
20359 +- ret = sync_session_response(inst, key);
20360 ++ if (ret)
20361 ++ goto exit;
20362 ++
20363 ++	/* Workaround for a firmware issue:
20364 ++	 * the firmware should be woken up by a start or configure command,
20365 ++	 * but there is a very small chance that it fails to wake up.
20366 ++	 * In that case, try to wake it again by sending a noop command.
20367 ++	 */
20368 ++ if (sync && (id == VPU_CMD_ID_CONFIGURE_CODEC || id == VPU_CMD_ID_START)) {
20369 ++ if (sync_session_response(inst, key, VPU_TIMEOUT_WAKEUP, 1))
20370 ++ vpu_core_keep_active(inst->core);
20371 ++ else
20372 ++ goto exit;
20373 ++ }
20374 ++
20375 ++ if (sync)
20376 ++ ret = sync_session_response(inst, key, VPU_TIMEOUT, 0);
20377 +
20378 ++exit:
20379 + if (ret)
20380 + dev_err(inst->dev, "[%d] send cmd(0x%x) fail\n", inst->id, id);
20381 +
20382 +diff --git a/drivers/media/platform/amphion/vpu_drv.c b/drivers/media/platform/amphion/vpu_drv.c
20383 +index 9d5a5075343d3..f01ce49d27e80 100644
20384 +--- a/drivers/media/platform/amphion/vpu_drv.c
20385 ++++ b/drivers/media/platform/amphion/vpu_drv.c
20386 +@@ -245,7 +245,11 @@ static int __init vpu_driver_init(void)
20387 + if (ret)
20388 + return ret;
20389 +
20390 +- return vpu_core_driver_init();
20391 ++ ret = vpu_core_driver_init();
20392 ++ if (ret)
20393 ++ platform_driver_unregister(&amphion_vpu_driver);
20394 ++
20395 ++ return ret;
20396 + }
20397 +
20398 + static void __exit vpu_driver_exit(void)
20399 +diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c
20400 +index 51e0702f9ae17..9f2890730fd70 100644
20401 +--- a/drivers/media/platform/amphion/vpu_malone.c
20402 ++++ b/drivers/media/platform/amphion/vpu_malone.c
20403 +@@ -692,6 +692,7 @@ int vpu_malone_set_decode_params(struct vpu_shared_addr *shared,
20404 + }
20405 +
20406 + static struct vpu_pair malone_cmds[] = {
20407 ++ {VPU_CMD_ID_NOOP, VID_API_CMD_NULL},
20408 + {VPU_CMD_ID_START, VID_API_CMD_START},
20409 + {VPU_CMD_ID_STOP, VID_API_CMD_STOP},
20410 + {VPU_CMD_ID_ABORT, VID_API_CMD_ABORT},
20411 +diff --git a/drivers/media/platform/amphion/vpu_msgs.c b/drivers/media/platform/amphion/vpu_msgs.c
20412 +index d8247f36d84ba..92672a802b492 100644
20413 +--- a/drivers/media/platform/amphion/vpu_msgs.c
20414 ++++ b/drivers/media/platform/amphion/vpu_msgs.c
20415 +@@ -43,6 +43,7 @@ static void vpu_session_handle_mem_request(struct vpu_inst *inst, struct vpu_rpc
20416 + req_data.ref_frame_num,
20417 + req_data.act_buf_size,
20418 + req_data.act_buf_num);
20419 ++ vpu_inst_lock(inst);
20420 + call_void_vop(inst, mem_request,
20421 + req_data.enc_frame_size,
20422 + req_data.enc_frame_num,
20423 +@@ -50,6 +51,7 @@ static void vpu_session_handle_mem_request(struct vpu_inst *inst, struct vpu_rpc
20424 + req_data.ref_frame_num,
20425 + req_data.act_buf_size,
20426 + req_data.act_buf_num);
20427 ++ vpu_inst_unlock(inst);
20428 + }
20429 +
20430 + static void vpu_session_handle_stop_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
20431 +diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
20432 +index b779e0ba916ca..590d1084e5a5d 100644
20433 +--- a/drivers/media/platform/amphion/vpu_v4l2.c
20434 ++++ b/drivers/media/platform/amphion/vpu_v4l2.c
20435 +@@ -65,18 +65,11 @@ unsigned int vpu_get_buffer_state(struct vb2_v4l2_buffer *vbuf)
20436 +
20437 + void vpu_v4l2_set_error(struct vpu_inst *inst)
20438 + {
20439 +- struct vb2_queue *src_q;
20440 +- struct vb2_queue *dst_q;
20441 +-
20442 + vpu_inst_lock(inst);
20443 + dev_err(inst->dev, "some error occurs in codec\n");
20444 + if (inst->fh.m2m_ctx) {
20445 +- src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
20446 +- dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
20447 +- src_q->error = 1;
20448 +- dst_q->error = 1;
20449 +- wake_up(&src_q->done_wq);
20450 +- wake_up(&dst_q->done_wq);
20451 ++ vb2_queue_error(v4l2_m2m_get_src_vq(inst->fh.m2m_ctx));
20452 ++ vb2_queue_error(v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx));
20453 + }
20454 + vpu_inst_unlock(inst);
20455 + }
20456 +@@ -249,8 +242,12 @@ int vpu_process_capture_buffer(struct vpu_inst *inst)
20457 +
20458 + struct vb2_v4l2_buffer *vpu_next_src_buf(struct vpu_inst *inst)
20459 + {
20460 +- struct vb2_v4l2_buffer *src_buf = v4l2_m2m_next_src_buf(inst->fh.m2m_ctx);
20461 ++ struct vb2_v4l2_buffer *src_buf = NULL;
20462 ++
20463 ++ if (!inst->fh.m2m_ctx)
20464 ++ return NULL;
20465 +
20466 ++ src_buf = v4l2_m2m_next_src_buf(inst->fh.m2m_ctx);
20467 + if (!src_buf || vpu_get_buffer_state(src_buf) == VPU_BUF_STATE_IDLE)
20468 + return NULL;
20469 +
20470 +@@ -273,7 +270,7 @@ void vpu_skip_frame(struct vpu_inst *inst, int count)
20471 + enum vb2_buffer_state state;
20472 + int i = 0;
20473 +
20474 +- if (count <= 0)
20475 ++ if (count <= 0 || !inst->fh.m2m_ctx)
20476 + return;
20477 +
20478 + while (i < count) {
20479 +@@ -603,10 +600,6 @@ static int vpu_v4l2_release(struct vpu_inst *inst)
20480 + inst->workqueue = NULL;
20481 + }
20482 +
20483 +- if (inst->fh.m2m_ctx) {
20484 +- v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
20485 +- inst->fh.m2m_ctx = NULL;
20486 +- }
20487 + v4l2_ctrl_handler_free(&inst->ctrl_handler);
20488 + mutex_destroy(&inst->lock);
20489 + v4l2_fh_del(&inst->fh);
20490 +@@ -689,6 +682,13 @@ int vpu_v4l2_close(struct file *file)
20491 +
20492 + vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n", inst->tgid, inst->pid, inst);
20493 +
20494 ++ vpu_inst_lock(inst);
20495 ++ if (inst->fh.m2m_ctx) {
20496 ++ v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
20497 ++ inst->fh.m2m_ctx = NULL;
20498 ++ }
20499 ++ vpu_inst_unlock(inst);
20500 ++
20501 + call_void_vop(inst, release);
20502 + vpu_inst_unregister(inst);
20503 + vpu_inst_put(inst);
20504 +diff --git a/drivers/media/platform/amphion/vpu_windsor.c b/drivers/media/platform/amphion/vpu_windsor.c
20505 +index 1526af2ef9da4..b93c8cfdee7f5 100644
20506 +--- a/drivers/media/platform/amphion/vpu_windsor.c
20507 ++++ b/drivers/media/platform/amphion/vpu_windsor.c
20508 +@@ -658,6 +658,7 @@ int vpu_windsor_get_stream_buffer_size(struct vpu_shared_addr *shared)
20509 + }
20510 +
20511 + static struct vpu_pair windsor_cmds[] = {
20512 ++ {VPU_CMD_ID_NOOP, GTB_ENC_CMD_NOOP},
20513 + {VPU_CMD_ID_CONFIGURE_CODEC, GTB_ENC_CMD_CONFIGURE_CODEC},
20514 + {VPU_CMD_ID_START, GTB_ENC_CMD_STREAM_START},
20515 + {VPU_CMD_ID_STOP, GTB_ENC_CMD_STREAM_STOP},
20516 +diff --git a/drivers/media/platform/chips-media/coda-bit.c b/drivers/media/platform/chips-media/coda-bit.c
20517 +index 2736a902e3df3..ed47d5bd8d61e 100644
20518 +--- a/drivers/media/platform/chips-media/coda-bit.c
20519 ++++ b/drivers/media/platform/chips-media/coda-bit.c
20520 +@@ -854,7 +854,7 @@ static void coda_setup_iram(struct coda_ctx *ctx)
20521 + /* Only H.264BP and H.263P3 are considered */
20522 + iram_info->buf_dbk_y_use = coda_iram_alloc(iram_info, w64);
20523 + iram_info->buf_dbk_c_use = coda_iram_alloc(iram_info, w64);
20524 +- if (!iram_info->buf_dbk_c_use)
20525 ++ if (!iram_info->buf_dbk_y_use || !iram_info->buf_dbk_c_use)
20526 + goto out;
20527 + iram_info->axi_sram_use |= dbk_bits;
20528 +
20529 +@@ -878,7 +878,7 @@ static void coda_setup_iram(struct coda_ctx *ctx)
20530 +
20531 + iram_info->buf_dbk_y_use = coda_iram_alloc(iram_info, w128);
20532 + iram_info->buf_dbk_c_use = coda_iram_alloc(iram_info, w128);
20533 +- if (!iram_info->buf_dbk_c_use)
20534 ++ if (!iram_info->buf_dbk_y_use || !iram_info->buf_dbk_c_use)
20535 + goto out;
20536 + iram_info->axi_sram_use |= dbk_bits;
20537 +
20538 +@@ -1084,10 +1084,16 @@ static int coda_start_encoding(struct coda_ctx *ctx)
20539 + }
20540 +
20541 + if (dst_fourcc == V4L2_PIX_FMT_JPEG) {
20542 +- if (!ctx->params.jpeg_qmat_tab[0])
20543 ++ if (!ctx->params.jpeg_qmat_tab[0]) {
20544 + ctx->params.jpeg_qmat_tab[0] = kmalloc(64, GFP_KERNEL);
20545 +- if (!ctx->params.jpeg_qmat_tab[1])
20546 ++ if (!ctx->params.jpeg_qmat_tab[0])
20547 ++ return -ENOMEM;
20548 ++ }
20549 ++ if (!ctx->params.jpeg_qmat_tab[1]) {
20550 + ctx->params.jpeg_qmat_tab[1] = kmalloc(64, GFP_KERNEL);
20551 ++ if (!ctx->params.jpeg_qmat_tab[1])
20552 ++ return -ENOMEM;
20553 ++ }
20554 + coda_set_jpeg_compression_quality(ctx, ctx->params.jpeg_quality);
20555 + }
20556 +
20557 +diff --git a/drivers/media/platform/chips-media/coda-jpeg.c b/drivers/media/platform/chips-media/coda-jpeg.c
20558 +index 435e7030fc2a8..ba8f410029172 100644
20559 +--- a/drivers/media/platform/chips-media/coda-jpeg.c
20560 ++++ b/drivers/media/platform/chips-media/coda-jpeg.c
20561 +@@ -1052,10 +1052,16 @@ static int coda9_jpeg_start_encoding(struct coda_ctx *ctx)
20562 + v4l2_err(&dev->v4l2_dev, "error loading Huffman tables\n");
20563 + return ret;
20564 + }
20565 +- if (!ctx->params.jpeg_qmat_tab[0])
20566 ++ if (!ctx->params.jpeg_qmat_tab[0]) {
20567 + ctx->params.jpeg_qmat_tab[0] = kmalloc(64, GFP_KERNEL);
20568 +- if (!ctx->params.jpeg_qmat_tab[1])
20569 ++ if (!ctx->params.jpeg_qmat_tab[0])
20570 ++ return -ENOMEM;
20571 ++ }
20572 ++ if (!ctx->params.jpeg_qmat_tab[1]) {
20573 + ctx->params.jpeg_qmat_tab[1] = kmalloc(64, GFP_KERNEL);
20574 ++ if (!ctx->params.jpeg_qmat_tab[1])
20575 ++ return -ENOMEM;
20576 ++ }
20577 + coda_set_jpeg_compression_quality(ctx, ctx->params.jpeg_quality);
20578 +
20579 + return 0;
20580 +diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-cmdq.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-cmdq.c
20581 +index 86c054600a08c..124c1b96e96bd 100644
20582 +--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-cmdq.c
20583 ++++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-cmdq.c
20584 +@@ -252,10 +252,9 @@ static int mdp_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *pkt,
20585 + dma_addr_t dma_addr;
20586 +
20587 + pkt->va_base = kzalloc(size, GFP_KERNEL);
20588 +- if (!pkt->va_base) {
20589 +- kfree(pkt);
20590 ++ if (!pkt->va_base)
20591 + return -ENOMEM;
20592 +- }
20593 ++
20594 + pkt->buf_size = size;
20595 + pkt->cl = (void *)client;
20596 +
20597 +@@ -368,25 +367,30 @@ int mdp_cmdq_send(struct mdp_dev *mdp, struct mdp_cmdq_param *param)
20598 + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
20599 + if (!cmd) {
20600 + ret = -ENOMEM;
20601 +- goto err_cmdq_data;
20602 ++ goto err_cancel_job;
20603 + }
20604 +
20605 +- if (mdp_cmdq_pkt_create(mdp->cmdq_clt, &cmd->pkt, SZ_16K)) {
20606 +- ret = -ENOMEM;
20607 +- goto err_cmdq_data;
20608 +- }
20609 ++ ret = mdp_cmdq_pkt_create(mdp->cmdq_clt, &cmd->pkt, SZ_16K);
20610 ++ if (ret)
20611 ++ goto err_free_cmd;
20612 +
20613 + comps = kcalloc(param->config->num_components, sizeof(*comps),
20614 + GFP_KERNEL);
20615 + if (!comps) {
20616 + ret = -ENOMEM;
20617 +- goto err_cmdq_data;
20618 ++ goto err_destroy_pkt;
20619 + }
20620 +
20621 + path = kzalloc(sizeof(*path), GFP_KERNEL);
20622 + if (!path) {
20623 + ret = -ENOMEM;
20624 +- goto err_cmdq_data;
20625 ++ goto err_free_comps;
20626 ++ }
20627 ++
20628 ++ ret = mtk_mutex_prepare(mdp->mdp_mutex[MDP_PIPE_RDMA0]);
20629 ++ if (ret) {
20630 ++ dev_err(dev, "Fail to enable mutex clk\n");
20631 ++ goto err_free_path;
20632 + }
20633 +
20634 + path->mdp_dev = mdp;
20635 +@@ -406,15 +410,13 @@ int mdp_cmdq_send(struct mdp_dev *mdp, struct mdp_cmdq_param *param)
20636 + ret = mdp_path_ctx_init(mdp, path);
20637 + if (ret) {
20638 + dev_err(dev, "mdp_path_ctx_init error\n");
20639 +- goto err_cmdq_data;
20640 ++ goto err_free_path;
20641 + }
20642 +
20643 +- mtk_mutex_prepare(mdp->mdp_mutex[MDP_PIPE_RDMA0]);
20644 +-
20645 + ret = mdp_path_config(mdp, cmd, path);
20646 + if (ret) {
20647 + dev_err(dev, "mdp_path_config error\n");
20648 +- goto err_cmdq_data;
20649 ++ goto err_free_path;
20650 + }
20651 + cmdq_pkt_finalize(&cmd->pkt);
20652 +
20653 +@@ -431,10 +433,8 @@ int mdp_cmdq_send(struct mdp_dev *mdp, struct mdp_cmdq_param *param)
20654 + cmd->mdp_ctx = param->mdp_ctx;
20655 +
20656 + ret = mdp_comp_clocks_on(&mdp->pdev->dev, cmd->comps, cmd->num_comps);
20657 +- if (ret) {
20658 +- dev_err(dev, "comp %d failed to enable clock!\n", ret);
20659 +- goto err_clock_off;
20660 +- }
20661 ++ if (ret)
20662 ++ goto err_free_path;
20663 +
20664 + dma_sync_single_for_device(mdp->cmdq_clt->chan->mbox->dev,
20665 + cmd->pkt.pa_base, cmd->pkt.cmd_buf_size,
20666 +@@ -450,17 +450,20 @@ int mdp_cmdq_send(struct mdp_dev *mdp, struct mdp_cmdq_param *param)
20667 + return 0;
20668 +
20669 + err_clock_off:
20670 +- mtk_mutex_unprepare(mdp->mdp_mutex[MDP_PIPE_RDMA0]);
20671 + mdp_comp_clocks_off(&mdp->pdev->dev, cmd->comps,
20672 + cmd->num_comps);
20673 +-err_cmdq_data:
20674 ++err_free_path:
20675 ++ mtk_mutex_unprepare(mdp->mdp_mutex[MDP_PIPE_RDMA0]);
20676 + kfree(path);
20677 +- atomic_dec(&mdp->job_count);
20678 +- wake_up(&mdp->callback_wq);
20679 +- if (cmd && cmd->pkt.buf_size > 0)
20680 +- mdp_cmdq_pkt_destroy(&cmd->pkt);
20681 ++err_free_comps:
20682 + kfree(comps);
20683 ++err_destroy_pkt:
20684 ++ mdp_cmdq_pkt_destroy(&cmd->pkt);
20685 ++err_free_cmd:
20686 + kfree(cmd);
20687 ++err_cancel_job:
20688 ++ atomic_dec(&mdp->job_count);
20689 ++
20690 + return ret;
20691 + }
20692 + EXPORT_SYMBOL_GPL(mdp_cmdq_send);
20693 +diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-comp.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-comp.c
20694 +index d3eaf8884412d..7bc05f42a23c1 100644
20695 +--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-comp.c
20696 ++++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-comp.c
20697 +@@ -699,12 +699,22 @@ int mdp_comp_clock_on(struct device *dev, struct mdp_comp *comp)
20698 + dev_err(dev,
20699 + "Failed to enable clk %d. type:%d id:%d\n",
20700 + i, comp->type, comp->id);
20701 +- pm_runtime_put(comp->comp_dev);
20702 +- return ret;
20703 ++ goto err_revert;
20704 + }
20705 + }
20706 +
20707 + return 0;
20708 ++
20709 ++err_revert:
20710 ++ while (--i >= 0) {
20711 ++ if (IS_ERR_OR_NULL(comp->clks[i]))
20712 ++ continue;
20713 ++ clk_disable_unprepare(comp->clks[i]);
20714 ++ }
20715 ++ if (comp->comp_dev)
20716 ++ pm_runtime_put_sync(comp->comp_dev);
20717 ++
20718 ++ return ret;
20719 + }
20720 +
20721 + void mdp_comp_clock_off(struct device *dev, struct mdp_comp *comp)
20722 +@@ -723,11 +733,13 @@ void mdp_comp_clock_off(struct device *dev, struct mdp_comp *comp)
20723 +
20724 + int mdp_comp_clocks_on(struct device *dev, struct mdp_comp *comps, int num)
20725 + {
20726 +- int i;
20727 ++ int i, ret;
20728 +
20729 +- for (i = 0; i < num; i++)
20730 +- if (mdp_comp_clock_on(dev, &comps[i]) != 0)
20731 +- return ++i;
20732 ++ for (i = 0; i < num; i++) {
20733 ++ ret = mdp_comp_clock_on(dev, &comps[i]);
20734 ++ if (ret)
20735 ++ return ret;
20736 ++ }
20737 +
20738 + return 0;
20739 + }
20740 +diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
20741 +index c413e59d42860..2d1f6ae9f0802 100644
20742 +--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
20743 ++++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
20744 +@@ -196,27 +196,27 @@ static int mdp_probe(struct platform_device *pdev)
20745 + mm_pdev = __get_pdev_by_id(pdev, MDP_INFRA_MMSYS);
20746 + if (!mm_pdev) {
20747 + ret = -ENODEV;
20748 +- goto err_return;
20749 ++ goto err_destroy_device;
20750 + }
20751 + mdp->mdp_mmsys = &mm_pdev->dev;
20752 +
20753 + mm_pdev = __get_pdev_by_id(pdev, MDP_INFRA_MUTEX);
20754 + if (WARN_ON(!mm_pdev)) {
20755 + ret = -ENODEV;
20756 +- goto err_return;
20757 ++ goto err_destroy_device;
20758 + }
20759 + for (i = 0; i < MDP_PIPE_MAX; i++) {
20760 + mdp->mdp_mutex[i] = mtk_mutex_get(&mm_pdev->dev);
20761 + if (!mdp->mdp_mutex[i]) {
20762 + ret = -ENODEV;
20763 +- goto err_return;
20764 ++ goto err_free_mutex;
20765 + }
20766 + }
20767 +
20768 + ret = mdp_comp_config(mdp);
20769 + if (ret) {
20770 + dev_err(dev, "Failed to config mdp components\n");
20771 +- goto err_return;
20772 ++ goto err_free_mutex;
20773 + }
20774 +
20775 + mdp->job_wq = alloc_workqueue(MDP_MODULE_NAME, WQ_FREEZABLE, 0);
20776 +@@ -287,11 +287,12 @@ err_destroy_job_wq:
20777 + destroy_workqueue(mdp->job_wq);
20778 + err_deinit_comp:
20779 + mdp_comp_destroy(mdp);
20780 +-err_return:
20781 ++err_free_mutex:
20782 + for (i = 0; i < MDP_PIPE_MAX; i++)
20783 +- if (mdp)
20784 +- mtk_mutex_put(mdp->mdp_mutex[i]);
20785 ++ mtk_mutex_put(mdp->mdp_mutex[i]);
20786 ++err_destroy_device:
20787 + kfree(mdp);
20788 ++err_return:
20789 + dev_dbg(dev, "Errno %d\n", ret);
20790 + return ret;
20791 + }
20792 +diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
20793 +index c45bd2599bb2d..ffbcee04dc26f 100644
20794 +--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
20795 ++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateless.c
20796 +@@ -138,10 +138,13 @@ static void mtk_vdec_stateless_cap_to_disp(struct mtk_vcodec_ctx *ctx, int error
20797 + state = VB2_BUF_STATE_DONE;
20798 +
20799 + vb2_dst = v4l2_m2m_dst_buf_remove(ctx->m2m_ctx);
20800 +- v4l2_m2m_buf_done(vb2_dst, state);
20801 +-
20802 +- mtk_v4l2_debug(2, "free frame buffer id:%d to done list",
20803 +- vb2_dst->vb2_buf.index);
20804 ++ if (vb2_dst) {
20805 ++ v4l2_m2m_buf_done(vb2_dst, state);
20806 ++ mtk_v4l2_debug(2, "free frame buffer id:%d to done list",
20807 ++ vb2_dst->vb2_buf.index);
20808 ++ } else {
20809 ++ mtk_v4l2_err("dst buffer is NULL");
20810 ++ }
20811 +
20812 + if (src_buf_req)
20813 + v4l2_ctrl_request_complete(src_buf_req, &ctx->ctrl_hdl);
20814 +@@ -250,7 +253,7 @@ static void mtk_vdec_worker(struct work_struct *work)
20815 +
20816 + state = ret ? VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE;
20817 + if (!IS_VDEC_LAT_ARCH(dev->vdec_pdata->hw_arch) ||
20818 +- ctx->current_codec == V4L2_PIX_FMT_VP8_FRAME || ret) {
20819 ++ ctx->current_codec == V4L2_PIX_FMT_VP8_FRAME) {
20820 + v4l2_m2m_buf_done_and_job_finish(dev->m2m_dev_dec, ctx->m2m_ctx, state);
20821 + if (src_buf_req)
20822 + v4l2_ctrl_request_complete(src_buf_req, &ctx->ctrl_hdl);
20823 +diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c
20824 +index 4cc92700692b3..955b2d0c8f53f 100644
20825 +--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c
20826 ++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_h264_req_multi_if.c
20827 +@@ -471,14 +471,19 @@ static int vdec_h264_slice_core_decode(struct vdec_lat_buf *lat_buf)
20828 + sizeof(share_info->h264_slice_params));
20829 +
20830 + fb = ctx->dev->vdec_pdata->get_cap_buffer(ctx);
20831 +- y_fb_dma = fb ? (u64)fb->base_y.dma_addr : 0;
20832 +- vdec_fb_va = (unsigned long)fb;
20833 ++ if (!fb) {
20834 ++ err = -EBUSY;
20835 ++ mtk_vcodec_err(inst, "fb buffer is NULL");
20836 ++ goto vdec_dec_end;
20837 ++ }
20838 +
20839 ++ vdec_fb_va = (unsigned long)fb;
20840 ++ y_fb_dma = (u64)fb->base_y.dma_addr;
20841 + if (ctx->q_data[MTK_Q_DATA_DST].fmt->num_planes == 1)
20842 + c_fb_dma =
20843 + y_fb_dma + inst->ctx->picinfo.buf_w * inst->ctx->picinfo.buf_h;
20844 + else
20845 +- c_fb_dma = fb ? (u64)fb->base_c.dma_addr : 0;
20846 ++ c_fb_dma = (u64)fb->base_c.dma_addr;
20847 +
20848 + mtk_vcodec_debug(inst, "[h264-core] y/c addr = 0x%llx 0x%llx", y_fb_dma,
20849 + c_fb_dma);
20850 +@@ -539,6 +544,29 @@ vdec_dec_end:
20851 + return 0;
20852 + }
20853 +
20854 ++static void vdec_h264_insert_startcode(struct mtk_vcodec_dev *vcodec_dev, unsigned char *buf,
20855 ++ size_t *bs_size, struct mtk_h264_pps_param *pps)
20856 ++{
20857 ++ struct device *dev = &vcodec_dev->plat_dev->dev;
20858 ++
20859 ++ /* Need to add pending data at the end of bitstream when bs_sz is small than
20860 ++ * 20 bytes for cavlc bitstream, or lat will decode fail. This pending data is
20861 ++ * useful for mt8192 and mt8195 platform.
20862 ++ *
20863 ++ * cavlc bitstream when entropy_coding_mode_flag is false.
20864 ++ */
20865 ++ if (pps->entropy_coding_mode_flag || *bs_size > 20 ||
20866 ++ !(of_device_is_compatible(dev->of_node, "mediatek,mt8192-vcodec-dec") ||
20867 ++ of_device_is_compatible(dev->of_node, "mediatek,mt8195-vcodec-dec")))
20868 ++ return;
20869 ++
20870 ++ buf[*bs_size] = 0;
20871 ++ buf[*bs_size + 1] = 0;
20872 ++ buf[*bs_size + 2] = 1;
20873 ++ buf[*bs_size + 3] = 0xff;
20874 ++ (*bs_size) += 4;
20875 ++}
20876 ++
20877 + static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20878 + struct vdec_fb *fb, bool *res_chg)
20879 + {
20880 +@@ -582,9 +610,6 @@ static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20881 + }
20882 +
20883 + inst->vsi->dec.nal_info = buf[nal_start_idx];
20884 +- inst->vsi->dec.bs_buf_addr = (u64)bs->dma_addr;
20885 +- inst->vsi->dec.bs_buf_size = bs->size;
20886 +-
20887 + lat_buf->src_buf_req = src_buf_info->m2m_buf.vb.vb2_buf.req_obj.req;
20888 + v4l2_m2m_buf_copy_metadata(&src_buf_info->m2m_buf.vb, &lat_buf->ts_info, true);
20889 +
20890 +@@ -592,6 +617,12 @@ static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20891 + if (err)
20892 + goto err_free_fb_out;
20893 +
20894 ++ vdec_h264_insert_startcode(inst->ctx->dev, buf, &bs->size,
20895 ++ &share_info->h264_slice_params.pps);
20896 ++
20897 ++ inst->vsi->dec.bs_buf_addr = (uint64_t)bs->dma_addr;
20898 ++ inst->vsi->dec.bs_buf_size = bs->size;
20899 ++
20900 + *res_chg = inst->resolution_changed;
20901 + if (inst->resolution_changed) {
20902 + mtk_vcodec_debug(inst, "- resolution changed -");
20903 +@@ -630,7 +661,7 @@ static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20904 + err = vpu_dec_start(vpu, data, 2);
20905 + if (err) {
20906 + mtk_vcodec_debug(inst, "lat decode err: %d", err);
20907 +- goto err_scp_decode;
20908 ++ goto err_free_fb_out;
20909 + }
20910 +
20911 + share_info->trans_end = inst->ctx->msg_queue.wdma_addr.dma_addr +
20912 +@@ -647,12 +678,17 @@ static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20913 + /* wait decoder done interrupt */
20914 + timeout = mtk_vcodec_wait_for_done_ctx(inst->ctx, MTK_INST_IRQ_RECEIVED,
20915 + WAIT_INTR_TIMEOUT_MS, MTK_VDEC_LAT0);
20916 ++ if (timeout)
20917 ++ mtk_vcodec_err(inst, "lat decode timeout: pic_%d", inst->slice_dec_num);
20918 + inst->vsi->dec.timeout = !!timeout;
20919 +
20920 + err = vpu_dec_end(vpu);
20921 +- if (err == SLICE_HEADER_FULL || timeout || err == TRANS_BUFFER_FULL) {
20922 +- err = -EINVAL;
20923 +- goto err_scp_decode;
20924 ++ if (err == SLICE_HEADER_FULL || err == TRANS_BUFFER_FULL) {
20925 ++ if (!IS_VDEC_INNER_RACING(inst->ctx->dev->dec_capability))
20926 ++ vdec_msg_queue_qbuf(&inst->ctx->msg_queue.lat_ctx, lat_buf);
20927 ++ inst->slice_dec_num++;
20928 ++ mtk_vcodec_err(inst, "lat dec fail: pic_%d err:%d", inst->slice_dec_num, err);
20929 ++ return -EINVAL;
20930 + }
20931 +
20932 + share_info->trans_end = inst->ctx->msg_queue.wdma_addr.dma_addr +
20933 +@@ -669,10 +705,6 @@ static int vdec_h264_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20934 +
20935 + inst->slice_dec_num++;
20936 + return 0;
20937 +-
20938 +-err_scp_decode:
20939 +- if (!IS_VDEC_INNER_RACING(inst->ctx->dev->dec_capability))
20940 +- vdec_msg_queue_qbuf(&inst->ctx->msg_queue.lat_ctx, lat_buf);
20941 + err_free_fb_out:
20942 + vdec_msg_queue_qbuf(&inst->ctx->msg_queue.lat_ctx, lat_buf);
20943 + mtk_vcodec_err(inst, "slice dec number: %d err: %d", inst->slice_dec_num, err);
20944 +diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c
20945 +index fb1c36a3592d1..cbb6728b8a40b 100644
20946 +--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c
20947 ++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_req_lat_if.c
20948 +@@ -2073,21 +2073,23 @@ static int vdec_vp9_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20949 + return -EBUSY;
20950 + }
20951 + pfc = (struct vdec_vp9_slice_pfc *)lat_buf->private_data;
20952 +- if (!pfc)
20953 +- return -EINVAL;
20954 ++ if (!pfc) {
20955 ++ ret = -EINVAL;
20956 ++ goto err_free_fb_out;
20957 ++ }
20958 + vsi = &pfc->vsi;
20959 +
20960 + ret = vdec_vp9_slice_setup_lat(instance, bs, lat_buf, pfc);
20961 + if (ret) {
20962 + mtk_vcodec_err(instance, "Failed to setup VP9 lat ret %d\n", ret);
20963 +- return ret;
20964 ++ goto err_free_fb_out;
20965 + }
20966 + vdec_vp9_slice_vsi_to_remote(vsi, instance->vsi);
20967 +
20968 + ret = vpu_dec_start(&instance->vpu, NULL, 0);
20969 + if (ret) {
20970 + mtk_vcodec_err(instance, "Failed to dec VP9 ret %d\n", ret);
20971 +- return ret;
20972 ++ goto err_free_fb_out;
20973 + }
20974 +
20975 + if (instance->irq) {
20976 +@@ -2107,7 +2109,7 @@ static int vdec_vp9_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20977 + /* LAT trans full, no more UBE or decode timeout */
20978 + if (ret) {
20979 + mtk_vcodec_err(instance, "VP9 decode error: %d\n", ret);
20980 +- return ret;
20981 ++ goto err_free_fb_out;
20982 + }
20983 +
20984 + mtk_vcodec_debug(instance, "lat dma addr: 0x%lx 0x%lx\n",
20985 +@@ -2120,6 +2122,9 @@ static int vdec_vp9_slice_lat_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20986 + vdec_msg_queue_qbuf(&ctx->dev->msg_queue_core_ctx, lat_buf);
20987 +
20988 + return 0;
20989 ++err_free_fb_out:
20990 ++ vdec_msg_queue_qbuf(&ctx->msg_queue.lat_ctx, lat_buf);
20991 ++ return ret;
20992 + }
20993 +
20994 + static int vdec_vp9_slice_decode(void *h_vdec, struct mtk_vcodec_mem *bs,
20995 +diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
20996 +index ae500980ad45c..dc2004790a472 100644
20997 +--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
20998 ++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
20999 +@@ -221,7 +221,7 @@ static void vdec_msg_queue_core_work(struct work_struct *work)
21000 + mtk_vcodec_dec_disable_hardware(ctx, MTK_VDEC_CORE);
21001 + vdec_msg_queue_qbuf(&ctx->msg_queue.lat_ctx, lat_buf);
21002 +
21003 +- if (!list_empty(&ctx->msg_queue.lat_ctx.ready_queue)) {
21004 ++ if (!list_empty(&dev->msg_queue_core_ctx.ready_queue)) {
21005 + mtk_v4l2_debug(3, "re-schedule to decode for core: %d",
21006 + dev->msg_queue_core_ctx.ready_num);
21007 + queue_work(dev->core_workqueue, &msg_queue->core_work);
21008 +diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c
21009 +index 9418fcf740a82..ef28122a5ed49 100644
21010 +--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c
21011 ++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg-hw.c
21012 +@@ -76,12 +76,14 @@ void print_wrapper_info(struct device *dev, void __iomem *reg)
21013 +
21014 + void mxc_jpeg_enable_irq(void __iomem *reg, int slot)
21015 + {
21016 +- writel(0xFFFFFFFF, reg + MXC_SLOT_OFFSET(slot, SLOT_IRQ_EN));
21017 ++ writel(0xFFFFFFFF, reg + MXC_SLOT_OFFSET(slot, SLOT_STATUS));
21018 ++ writel(0xF0C, reg + MXC_SLOT_OFFSET(slot, SLOT_IRQ_EN));
21019 + }
21020 +
21021 + void mxc_jpeg_disable_irq(void __iomem *reg, int slot)
21022 + {
21023 + writel(0x0, reg + MXC_SLOT_OFFSET(slot, SLOT_IRQ_EN));
21024 ++ writel(0xFFFFFFFF, reg + MXC_SLOT_OFFSET(slot, SLOT_STATUS));
21025 + }
21026 +
21027 + void mxc_jpeg_sw_reset(void __iomem *reg)
21028 +diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c
21029 +index 81fb3a5bc1d51..41deda232e4a1 100644
21030 +--- a/drivers/media/platform/qcom/camss/camss-video.c
21031 ++++ b/drivers/media/platform/qcom/camss/camss-video.c
21032 +@@ -495,7 +495,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
21033 +
21034 + ret = video_device_pipeline_start(vdev, &video->pipe);
21035 + if (ret < 0)
21036 +- return ret;
21037 ++ goto flush_buffers;
21038 +
21039 + ret = video_check_format(video);
21040 + if (ret < 0)
21041 +@@ -524,6 +524,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
21042 + error:
21043 + video_device_pipeline_stop(vdev);
21044 +
21045 ++flush_buffers:
21046 + video->ops->flush_buffers(video, VB2_BUF_STATE_QUEUED);
21047 +
21048 + return ret;
21049 +diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
21050 +index 1118c40886d52..a157cac72e0ab 100644
21051 +--- a/drivers/media/platform/qcom/camss/camss.c
21052 ++++ b/drivers/media/platform/qcom/camss/camss.c
21053 +@@ -1465,6 +1465,14 @@ static int camss_configure_pd(struct camss *camss)
21054 + return camss->genpd_num;
21055 + }
21056 +
21057 ++ /*
21058 ++ * If a platform device has just one power domain, then it is attached
21059 ++ * at platform_probe() level, thus there shall be no need and even no
21060 ++ * option to attach it again, this is the case for CAMSS on MSM8916.
21061 ++ */
21062 ++ if (camss->genpd_num == 1)
21063 ++ return 0;
21064 ++
21065 + camss->genpd = devm_kmalloc_array(dev, camss->genpd_num,
21066 + sizeof(*camss->genpd), GFP_KERNEL);
21067 + if (!camss->genpd)
21068 +@@ -1698,6 +1706,9 @@ void camss_delete(struct camss *camss)
21069 +
21070 + pm_runtime_disable(camss->dev);
21071 +
21072 ++ if (camss->genpd_num == 1)
21073 ++ return;
21074 ++
21075 + for (i = 0; i < camss->genpd_num; i++) {
21076 + device_link_del(camss->genpd_link[i]);
21077 + dev_pm_domain_detach(camss->genpd[i], true);
21078 +diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
21079 +index c93d2906e4c7d..48c9084bb4dba 100644
21080 +--- a/drivers/media/platform/qcom/venus/pm_helpers.c
21081 ++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
21082 +@@ -869,8 +869,8 @@ static int vcodec_domains_get(struct venus_core *core)
21083 + for (i = 0; i < res->vcodec_pmdomains_num; i++) {
21084 + pd = dev_pm_domain_attach_by_name(dev,
21085 + res->vcodec_pmdomains[i]);
21086 +- if (IS_ERR(pd))
21087 +- return PTR_ERR(pd);
21088 ++ if (IS_ERR_OR_NULL(pd))
21089 ++ return PTR_ERR(pd) ? : -ENODATA;
21090 + core->pmdomains[i] = pd;
21091 + }
21092 +
21093 +diff --git a/drivers/media/platform/samsung/exynos4-is/fimc-core.c b/drivers/media/platform/samsung/exynos4-is/fimc-core.c
21094 +index 91cc8d58a663b..1791100b69353 100644
21095 +--- a/drivers/media/platform/samsung/exynos4-is/fimc-core.c
21096 ++++ b/drivers/media/platform/samsung/exynos4-is/fimc-core.c
21097 +@@ -1173,7 +1173,7 @@ int __init fimc_register_driver(void)
21098 + return platform_driver_register(&fimc_driver);
21099 + }
21100 +
21101 +-void __exit fimc_unregister_driver(void)
21102 ++void fimc_unregister_driver(void)
21103 + {
21104 + platform_driver_unregister(&fimc_driver);
21105 + }
21106 +diff --git a/drivers/media/platform/samsung/exynos4-is/media-dev.c b/drivers/media/platform/samsung/exynos4-is/media-dev.c
21107 +index 52b43ea040302..2f3071acb9c97 100644
21108 +--- a/drivers/media/platform/samsung/exynos4-is/media-dev.c
21109 ++++ b/drivers/media/platform/samsung/exynos4-is/media-dev.c
21110 +@@ -1380,9 +1380,7 @@ static int subdev_notifier_bound(struct v4l2_async_notifier *notifier,
21111 +
21112 + /* Find platform data for this sensor subdev */
21113 + for (i = 0; i < ARRAY_SIZE(fmd->sensor); i++)
21114 +- if (fmd->sensor[i].asd &&
21115 +- fmd->sensor[i].asd->match.fwnode ==
21116 +- of_fwnode_handle(subdev->dev->of_node))
21117 ++ if (fmd->sensor[i].asd == asd)
21118 + si = &fmd->sensor[i];
21119 +
21120 + if (si == NULL)
21121 +@@ -1474,7 +1472,7 @@ static int fimc_md_probe(struct platform_device *pdev)
21122 + pinctrl = devm_pinctrl_get(dev);
21123 + if (IS_ERR(pinctrl)) {
21124 + ret = PTR_ERR(pinctrl);
21125 +- if (ret != EPROBE_DEFER)
21126 ++ if (ret != -EPROBE_DEFER)
21127 + dev_err(dev, "Failed to get pinctrl: %d\n", ret);
21128 + goto err_clk;
21129 + }
21130 +@@ -1586,7 +1584,11 @@ static int __init fimc_md_init(void)
21131 + if (ret)
21132 + return ret;
21133 +
21134 +- return platform_driver_register(&fimc_md_driver);
21135 ++ ret = platform_driver_register(&fimc_md_driver);
21136 ++ if (ret)
21137 ++ fimc_unregister_driver();
21138 ++
21139 ++ return ret;
21140 + }
21141 +
21142 + static void __exit fimc_md_exit(void)
21143 +diff --git a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
21144 +index fca5c6405eec3..007c7dbee0377 100644
21145 +--- a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
21146 ++++ b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
21147 +@@ -1576,8 +1576,18 @@ static struct s5p_mfc_variant mfc_drvdata_v7 = {
21148 + .port_num = MFC_NUM_PORTS_V7,
21149 + .buf_size = &buf_size_v7,
21150 + .fw_name[0] = "s5p-mfc-v7.fw",
21151 +- .clk_names = {"mfc", "sclk_mfc"},
21152 +- .num_clocks = 2,
21153 ++ .clk_names = {"mfc"},
21154 ++ .num_clocks = 1,
21155 ++};
21156 ++
21157 ++static struct s5p_mfc_variant mfc_drvdata_v7_3250 = {
21158 ++ .version = MFC_VERSION_V7,
21159 ++ .version_bit = MFC_V7_BIT,
21160 ++ .port_num = MFC_NUM_PORTS_V7,
21161 ++ .buf_size = &buf_size_v7,
21162 ++ .fw_name[0] = "s5p-mfc-v7.fw",
21163 ++ .clk_names = {"mfc", "sclk_mfc"},
21164 ++ .num_clocks = 2,
21165 + };
21166 +
21167 + static struct s5p_mfc_buf_size_v6 mfc_buf_size_v8 = {
21168 +@@ -1647,6 +1657,9 @@ static const struct of_device_id exynos_mfc_match[] = {
21169 + }, {
21170 + .compatible = "samsung,mfc-v7",
21171 + .data = &mfc_drvdata_v7,
21172 ++ }, {
21173 ++ .compatible = "samsung,exynos3250-mfc",
21174 ++ .data = &mfc_drvdata_v7_3250,
21175 + }, {
21176 + .compatible = "samsung,mfc-v8",
21177 + .data = &mfc_drvdata_v8,
21178 +diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
21179 +index cefe6b7bfdc4e..1dbb89f0ddb8c 100644
21180 +--- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
21181 ++++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
21182 +@@ -925,6 +925,7 @@ static int configure_channels(struct c8sectpfei *fei)
21183 + if (ret) {
21184 + dev_err(fei->dev,
21185 + "configure_memdma_and_inputblock failed\n");
21186 ++ of_node_put(child);
21187 + goto err_unmap;
21188 + }
21189 + index++;
21190 +diff --git a/drivers/media/platform/sunxi/sun6i-mipi-csi2/sun6i_mipi_csi2.c b/drivers/media/platform/sunxi/sun6i-mipi-csi2/sun6i_mipi_csi2.c
21191 +index 30d6c0c5161f4..484ac5f054d53 100644
21192 +--- a/drivers/media/platform/sunxi/sun6i-mipi-csi2/sun6i_mipi_csi2.c
21193 ++++ b/drivers/media/platform/sunxi/sun6i-mipi-csi2/sun6i_mipi_csi2.c
21194 +@@ -498,6 +498,7 @@ static int sun6i_mipi_csi2_bridge_setup(struct sun6i_mipi_csi2_device *csi2_dev)
21195 + struct v4l2_async_notifier *notifier = &bridge->notifier;
21196 + struct media_pad *pads = bridge->pads;
21197 + struct device *dev = csi2_dev->dev;
21198 ++ bool notifier_registered = false;
21199 + int ret;
21200 +
21201 + mutex_init(&bridge->lock);
21202 +@@ -519,8 +520,10 @@ static int sun6i_mipi_csi2_bridge_setup(struct sun6i_mipi_csi2_device *csi2_dev)
21203 +
21204 + /* Media Pads */
21205 +
21206 +- pads[SUN6I_MIPI_CSI2_PAD_SINK].flags = MEDIA_PAD_FL_SINK;
21207 +- pads[SUN6I_MIPI_CSI2_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
21208 ++ pads[SUN6I_MIPI_CSI2_PAD_SINK].flags = MEDIA_PAD_FL_SINK |
21209 ++ MEDIA_PAD_FL_MUST_CONNECT;
21210 ++ pads[SUN6I_MIPI_CSI2_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE |
21211 ++ MEDIA_PAD_FL_MUST_CONNECT;
21212 +
21213 + ret = media_entity_pads_init(&subdev->entity, SUN6I_MIPI_CSI2_PAD_COUNT,
21214 + pads);
21215 +@@ -533,12 +536,17 @@ static int sun6i_mipi_csi2_bridge_setup(struct sun6i_mipi_csi2_device *csi2_dev)
21216 + notifier->ops = &sun6i_mipi_csi2_notifier_ops;
21217 +
21218 + ret = sun6i_mipi_csi2_bridge_source_setup(csi2_dev);
21219 +- if (ret)
21220 ++ if (ret && ret != -ENODEV)
21221 + goto error_v4l2_notifier_cleanup;
21222 +
21223 +- ret = v4l2_async_subdev_nf_register(subdev, notifier);
21224 +- if (ret < 0)
21225 +- goto error_v4l2_notifier_cleanup;
21226 ++ /* Only register the notifier when a sensor is connected. */
21227 ++ if (ret != -ENODEV) {
21228 ++ ret = v4l2_async_subdev_nf_register(subdev, notifier);
21229 ++ if (ret < 0)
21230 ++ goto error_v4l2_notifier_cleanup;
21231 ++
21232 ++ notifier_registered = true;
21233 ++ }
21234 +
21235 + /* V4L2 Subdev */
21236 +
21237 +@@ -549,7 +557,8 @@ static int sun6i_mipi_csi2_bridge_setup(struct sun6i_mipi_csi2_device *csi2_dev)
21238 + return 0;
21239 +
21240 + error_v4l2_notifier_unregister:
21241 +- v4l2_async_nf_unregister(notifier);
21242 ++ if (notifier_registered)
21243 ++ v4l2_async_nf_unregister(notifier);
21244 +
21245 + error_v4l2_notifier_cleanup:
21246 + v4l2_async_nf_cleanup(notifier);
21247 +diff --git a/drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/sun8i_a83t_mipi_csi2.c b/drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/sun8i_a83t_mipi_csi2.c
21248 +index b032ec13a683a..d993c09a48202 100644
21249 +--- a/drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/sun8i_a83t_mipi_csi2.c
21250 ++++ b/drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/sun8i_a83t_mipi_csi2.c
21251 +@@ -536,6 +536,7 @@ sun8i_a83t_mipi_csi2_bridge_setup(struct sun8i_a83t_mipi_csi2_device *csi2_dev)
21252 + struct v4l2_async_notifier *notifier = &bridge->notifier;
21253 + struct media_pad *pads = bridge->pads;
21254 + struct device *dev = csi2_dev->dev;
21255 ++ bool notifier_registered = false;
21256 + int ret;
21257 +
21258 + mutex_init(&bridge->lock);
21259 +@@ -557,8 +558,10 @@ sun8i_a83t_mipi_csi2_bridge_setup(struct sun8i_a83t_mipi_csi2_device *csi2_dev)
21260 +
21261 + /* Media Pads */
21262 +
21263 +- pads[SUN8I_A83T_MIPI_CSI2_PAD_SINK].flags = MEDIA_PAD_FL_SINK;
21264 +- pads[SUN8I_A83T_MIPI_CSI2_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
21265 ++ pads[SUN8I_A83T_MIPI_CSI2_PAD_SINK].flags = MEDIA_PAD_FL_SINK |
21266 ++ MEDIA_PAD_FL_MUST_CONNECT;
21267 ++ pads[SUN8I_A83T_MIPI_CSI2_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE |
21268 ++ MEDIA_PAD_FL_MUST_CONNECT;
21269 +
21270 + ret = media_entity_pads_init(&subdev->entity,
21271 + SUN8I_A83T_MIPI_CSI2_PAD_COUNT, pads);
21272 +@@ -571,12 +574,17 @@ sun8i_a83t_mipi_csi2_bridge_setup(struct sun8i_a83t_mipi_csi2_device *csi2_dev)
21273 + notifier->ops = &sun8i_a83t_mipi_csi2_notifier_ops;
21274 +
21275 + ret = sun8i_a83t_mipi_csi2_bridge_source_setup(csi2_dev);
21276 +- if (ret)
21277 ++ if (ret && ret != -ENODEV)
21278 + goto error_v4l2_notifier_cleanup;
21279 +
21280 +- ret = v4l2_async_subdev_nf_register(subdev, notifier);
21281 +- if (ret < 0)
21282 +- goto error_v4l2_notifier_cleanup;
21283 ++ /* Only register the notifier when a sensor is connected. */
21284 ++ if (ret != -ENODEV) {
21285 ++ ret = v4l2_async_subdev_nf_register(subdev, notifier);
21286 ++ if (ret < 0)
21287 ++ goto error_v4l2_notifier_cleanup;
21288 ++
21289 ++ notifier_registered = true;
21290 ++ }
21291 +
21292 + /* V4L2 Subdev */
21293 +
21294 +@@ -587,7 +595,8 @@ sun8i_a83t_mipi_csi2_bridge_setup(struct sun8i_a83t_mipi_csi2_device *csi2_dev)
21295 + return 0;
21296 +
21297 + error_v4l2_notifier_unregister:
21298 +- v4l2_async_nf_unregister(notifier);
21299 ++ if (notifier_registered)
21300 ++ v4l2_async_nf_unregister(notifier);
21301 +
21302 + error_v4l2_notifier_cleanup:
21303 + v4l2_async_nf_cleanup(notifier);
21304 +diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c
21305 +index 6b2768623c883..aa7a580dbecc0 100644
21306 +--- a/drivers/media/radio/si470x/radio-si470x-usb.c
21307 ++++ b/drivers/media/radio/si470x/radio-si470x-usb.c
21308 +@@ -727,8 +727,10 @@ static int si470x_usb_driver_probe(struct usb_interface *intf,
21309 +
21310 + /* start radio */
21311 + retval = si470x_start_usb(radio);
21312 +- if (retval < 0)
21313 ++ if (retval < 0 && !radio->int_in_running)
21314 + goto err_buf;
21315 ++ else if (retval < 0) /* in case of radio->int_in_running == 1 */
21316 ++ goto err_all;
21317 +
21318 + /* set initial frequency */
21319 + si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */
21320 +diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
21321 +index 5edfd8a9e8494..74546f7e34691 100644
21322 +--- a/drivers/media/rc/imon.c
21323 ++++ b/drivers/media/rc/imon.c
21324 +@@ -646,15 +646,14 @@ static int send_packet(struct imon_context *ictx)
21325 + pr_err_ratelimited("error submitting urb(%d)\n", retval);
21326 + } else {
21327 + /* Wait for transmission to complete (or abort) */
21328 +- mutex_unlock(&ictx->lock);
21329 + retval = wait_for_completion_interruptible(
21330 + &ictx->tx.finished);
21331 + if (retval) {
21332 + usb_kill_urb(ictx->tx_urb);
21333 + pr_err_ratelimited("task interrupted\n");
21334 + }
21335 +- mutex_lock(&ictx->lock);
21336 +
21337 ++ ictx->tx.busy = false;
21338 + retval = ictx->tx.status;
21339 + if (retval)
21340 + pr_err_ratelimited("packet tx failed (%d)\n", retval);
21341 +@@ -953,7 +952,8 @@ static ssize_t vfd_write(struct file *file, const char __user *buf,
21342 + if (ictx->disconnected)
21343 + return -ENODEV;
21344 +
21345 +- mutex_lock(&ictx->lock);
21346 ++ if (mutex_lock_interruptible(&ictx->lock))
21347 ++ return -ERESTARTSYS;
21348 +
21349 + if (!ictx->dev_present_intf0) {
21350 + pr_err_ratelimited("no iMON device present\n");
21351 +diff --git a/drivers/media/test-drivers/vidtv/vidtv_bridge.c b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
21352 +index 82620613d56b8..dff7265a42ca2 100644
21353 +--- a/drivers/media/test-drivers/vidtv/vidtv_bridge.c
21354 ++++ b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
21355 +@@ -459,26 +459,20 @@ fail_dmx_conn:
21356 + for (j = j - 1; j >= 0; --j)
21357 + dvb->demux.dmx.remove_frontend(&dvb->demux.dmx,
21358 + &dvb->dmx_fe[j]);
21359 +-fail_dmx_dev:
21360 + dvb_dmxdev_release(&dvb->dmx_dev);
21361 +-fail_dmx:
21362 ++fail_dmx_dev:
21363 + dvb_dmx_release(&dvb->demux);
21364 ++fail_dmx:
21365 ++fail_demod_probe:
21366 ++ for (i = i - 1; i >= 0; --i) {
21367 ++ dvb_unregister_frontend(dvb->fe[i]);
21368 + fail_fe:
21369 +- for (j = i; j >= 0; --j)
21370 +- dvb_unregister_frontend(dvb->fe[j]);
21371 ++ dvb_module_release(dvb->i2c_client_tuner[i]);
21372 + fail_tuner_probe:
21373 +- for (j = i; j >= 0; --j)
21374 +- if (dvb->i2c_client_tuner[j])
21375 +- dvb_module_release(dvb->i2c_client_tuner[j]);
21376 +-
21377 +-fail_demod_probe:
21378 +- for (j = i; j >= 0; --j)
21379 +- if (dvb->i2c_client_demod[j])
21380 +- dvb_module_release(dvb->i2c_client_demod[j]);
21381 +-
21382 ++ dvb_module_release(dvb->i2c_client_demod[i]);
21383 ++ }
21384 + fail_adapter:
21385 + dvb_unregister_adapter(&dvb->adapter);
21386 +-
21387 + fail_i2c:
21388 + i2c_del_adapter(&dvb->i2c_adapter);
21389 +
21390 +diff --git a/drivers/media/test-drivers/vimc/vimc-core.c b/drivers/media/test-drivers/vimc/vimc-core.c
21391 +index 2ae7a0f11ebfc..e82cfa5ffbf47 100644
21392 +--- a/drivers/media/test-drivers/vimc/vimc-core.c
21393 ++++ b/drivers/media/test-drivers/vimc/vimc-core.c
21394 +@@ -433,7 +433,7 @@ static int __init vimc_init(void)
21395 + if (ret) {
21396 + dev_err(&vimc_pdev.dev,
21397 + "platform driver registration failed (err=%d)\n", ret);
21398 +- platform_driver_unregister(&vimc_pdrv);
21399 ++ platform_device_unregister(&vimc_pdev);
21400 + return ret;
21401 + }
21402 +
21403 +diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
21404 +index 11620eaf941e3..c0999581c599b 100644
21405 +--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
21406 ++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
21407 +@@ -973,6 +973,7 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
21408 + if (dev->has_compose_cap) {
21409 + v4l2_rect_set_min_size(compose, &min_rect);
21410 + v4l2_rect_set_max_size(compose, &max_rect);
21411 ++ v4l2_rect_map_inside(compose, &fmt);
21412 + }
21413 + dev->fmt_cap_rect = fmt;
21414 + tpg_s_buf_height(&dev->tpg, fmt.height);
21415 +diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
21416 +index cf15988dfb510..7d78ee09be5e1 100644
21417 +--- a/drivers/media/usb/dvb-usb/az6027.c
21418 ++++ b/drivers/media/usb/dvb-usb/az6027.c
21419 +@@ -975,6 +975,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
21420 + if (msg[i].addr == 0x99) {
21421 + req = 0xBE;
21422 + index = 0;
21423 ++ if (msg[i].len < 1) {
21424 ++ i = -EOPNOTSUPP;
21425 ++ break;
21426 ++ }
21427 + value = msg[i].buf[0] & 0x00ff;
21428 + length = 1;
21429 + az6027_usb_out_op(d, req, value, index, data, length);
21430 +diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
21431 +index 61439c8f33cab..58eea8ab54779 100644
21432 +--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
21433 ++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
21434 +@@ -81,7 +81,7 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
21435 +
21436 + ret = dvb_usb_adapter_stream_init(adap);
21437 + if (ret)
21438 +- return ret;
21439 ++ goto stream_init_err;
21440 +
21441 + ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs);
21442 + if (ret)
21443 +@@ -114,6 +114,8 @@ frontend_init_err:
21444 + dvb_usb_adapter_dvb_exit(adap);
21445 + dvb_init_err:
21446 + dvb_usb_adapter_stream_exit(adap);
21447 ++stream_init_err:
21448 ++ kfree(adap->priv);
21449 + return ret;
21450 + }
21451 +
21452 +diff --git a/drivers/media/v4l2-core/v4l2-ctrls-api.c b/drivers/media/v4l2-core/v4l2-ctrls-api.c
21453 +index d0a3aa3806fbd..3d3b6dc24ca63 100644
21454 +--- a/drivers/media/v4l2-core/v4l2-ctrls-api.c
21455 ++++ b/drivers/media/v4l2-core/v4l2-ctrls-api.c
21456 +@@ -150,6 +150,7 @@ static int user_to_new(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl)
21457 + * then return an error.
21458 + */
21459 + if (strlen(ctrl->p_new.p_char) == ctrl->maximum && last)
21460 ++ ctrl->is_new = 1;
21461 + return -ERANGE;
21462 + }
21463 + return ret;
21464 +diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
21465 +index 0dab1d7b90f0e..29169170880a6 100644
21466 +--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
21467 ++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
21468 +@@ -1827,7 +1827,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_std_menu(struct v4l2_ctrl_handler *hdl,
21469 + else if (type == V4L2_CTRL_TYPE_INTEGER_MENU)
21470 + qmenu_int = v4l2_ctrl_get_int_menu(id, &qmenu_int_len);
21471 +
21472 +- if ((!qmenu && !qmenu_int) || (qmenu_int && max > qmenu_int_len)) {
21473 ++ if ((!qmenu && !qmenu_int) || (qmenu_int && max >= qmenu_int_len)) {
21474 + handler_set_err(hdl, -EINVAL);
21475 + return NULL;
21476 + }
21477 +diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
21478 +index fddba75d90745..6876ec25bc512 100644
21479 +--- a/drivers/media/v4l2-core/v4l2-ioctl.c
21480 ++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
21481 +@@ -1347,23 +1347,23 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
21482 + case V4L2_PIX_FMT_YUV420: descr = "Planar YUV 4:2:0"; break;
21483 + case V4L2_PIX_FMT_HI240: descr = "8-bit Dithered RGB (BTTV)"; break;
21484 + case V4L2_PIX_FMT_M420: descr = "YUV 4:2:0 (M420)"; break;
21485 +- case V4L2_PIX_FMT_NV12: descr = "Y/CbCr 4:2:0"; break;
21486 +- case V4L2_PIX_FMT_NV21: descr = "Y/CrCb 4:2:0"; break;
21487 +- case V4L2_PIX_FMT_NV16: descr = "Y/CbCr 4:2:2"; break;
21488 +- case V4L2_PIX_FMT_NV61: descr = "Y/CrCb 4:2:2"; break;
21489 +- case V4L2_PIX_FMT_NV24: descr = "Y/CbCr 4:4:4"; break;
21490 +- case V4L2_PIX_FMT_NV42: descr = "Y/CrCb 4:4:4"; break;
21491 +- case V4L2_PIX_FMT_P010: descr = "10-bit Y/CbCr 4:2:0"; break;
21492 +- case V4L2_PIX_FMT_NV12_4L4: descr = "Y/CbCr 4:2:0 (4x4 Linear)"; break;
21493 +- case V4L2_PIX_FMT_NV12_16L16: descr = "Y/CbCr 4:2:0 (16x16 Linear)"; break;
21494 +- case V4L2_PIX_FMT_NV12_32L32: descr = "Y/CbCr 4:2:0 (32x32 Linear)"; break;
21495 +- case V4L2_PIX_FMT_P010_4L4: descr = "10-bit Y/CbCr 4:2:0 (4x4 Linear)"; break;
21496 +- case V4L2_PIX_FMT_NV12M: descr = "Y/CbCr 4:2:0 (N-C)"; break;
21497 +- case V4L2_PIX_FMT_NV21M: descr = "Y/CrCb 4:2:0 (N-C)"; break;
21498 +- case V4L2_PIX_FMT_NV16M: descr = "Y/CbCr 4:2:2 (N-C)"; break;
21499 +- case V4L2_PIX_FMT_NV61M: descr = "Y/CrCb 4:2:2 (N-C)"; break;
21500 +- case V4L2_PIX_FMT_NV12MT: descr = "Y/CbCr 4:2:0 (64x32 MB, N-C)"; break;
21501 +- case V4L2_PIX_FMT_NV12MT_16X16: descr = "Y/CbCr 4:2:0 (16x16 MB, N-C)"; break;
21502 ++ case V4L2_PIX_FMT_NV12: descr = "Y/UV 4:2:0"; break;
21503 ++ case V4L2_PIX_FMT_NV21: descr = "Y/VU 4:2:0"; break;
21504 ++ case V4L2_PIX_FMT_NV16: descr = "Y/UV 4:2:2"; break;
21505 ++ case V4L2_PIX_FMT_NV61: descr = "Y/VU 4:2:2"; break;
21506 ++ case V4L2_PIX_FMT_NV24: descr = "Y/UV 4:4:4"; break;
21507 ++ case V4L2_PIX_FMT_NV42: descr = "Y/VU 4:4:4"; break;
21508 ++ case V4L2_PIX_FMT_P010: descr = "10-bit Y/UV 4:2:0"; break;
21509 ++ case V4L2_PIX_FMT_NV12_4L4: descr = "Y/UV 4:2:0 (4x4 Linear)"; break;
21510 ++ case V4L2_PIX_FMT_NV12_16L16: descr = "Y/UV 4:2:0 (16x16 Linear)"; break;
21511 ++ case V4L2_PIX_FMT_NV12_32L32: descr = "Y/UV 4:2:0 (32x32 Linear)"; break;
21512 ++ case V4L2_PIX_FMT_P010_4L4: descr = "10-bit Y/UV 4:2:0 (4x4 Linear)"; break;
21513 ++ case V4L2_PIX_FMT_NV12M: descr = "Y/UV 4:2:0 (N-C)"; break;
21514 ++ case V4L2_PIX_FMT_NV21M: descr = "Y/VU 4:2:0 (N-C)"; break;
21515 ++ case V4L2_PIX_FMT_NV16M: descr = "Y/UV 4:2:2 (N-C)"; break;
21516 ++ case V4L2_PIX_FMT_NV61M: descr = "Y/VU 4:2:2 (N-C)"; break;
21517 ++ case V4L2_PIX_FMT_NV12MT: descr = "Y/UV 4:2:0 (64x32 MB, N-C)"; break;
21518 ++ case V4L2_PIX_FMT_NV12MT_16X16: descr = "Y/UV 4:2:0 (16x16 MB, N-C)"; break;
21519 + case V4L2_PIX_FMT_YUV420M: descr = "Planar YUV 4:2:0 (N-C)"; break;
21520 + case V4L2_PIX_FMT_YVU420M: descr = "Planar YVU 4:2:0 (N-C)"; break;
21521 + case V4L2_PIX_FMT_YUV422M: descr = "Planar YUV 4:2:2 (N-C)"; break;
21522 +diff --git a/drivers/media/v4l2-core/videobuf-dma-contig.c b/drivers/media/v4l2-core/videobuf-dma-contig.c
21523 +index 52312ce2ba056..f2c4393595574 100644
21524 +--- a/drivers/media/v4l2-core/videobuf-dma-contig.c
21525 ++++ b/drivers/media/v4l2-core/videobuf-dma-contig.c
21526 +@@ -36,12 +36,11 @@ struct videobuf_dma_contig_memory {
21527 +
21528 + static int __videobuf_dc_alloc(struct device *dev,
21529 + struct videobuf_dma_contig_memory *mem,
21530 +- unsigned long size, gfp_t flags)
21531 ++ unsigned long size)
21532 + {
21533 + mem->size = size;
21534 +- mem->vaddr = dma_alloc_coherent(dev, mem->size,
21535 +- &mem->dma_handle, flags);
21536 +-
21537 ++ mem->vaddr = dma_alloc_coherent(dev, mem->size, &mem->dma_handle,
21538 ++ GFP_KERNEL);
21539 + if (!mem->vaddr) {
21540 + dev_err(dev, "memory alloc size %ld failed\n", mem->size);
21541 + return -ENOMEM;
21542 +@@ -258,8 +257,7 @@ static int __videobuf_iolock(struct videobuf_queue *q,
21543 + return videobuf_dma_contig_user_get(mem, vb);
21544 +
21545 + /* allocate memory for the read() method */
21546 +- if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(vb->size),
21547 +- GFP_KERNEL))
21548 ++ if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(vb->size)))
21549 + return -ENOMEM;
21550 + break;
21551 + case V4L2_MEMORY_OVERLAY:
21552 +@@ -295,22 +293,18 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
21553 + BUG_ON(!mem);
21554 + MAGIC_CHECK(mem->magic, MAGIC_DC_MEM);
21555 +
21556 +- if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(buf->bsize),
21557 +- GFP_KERNEL | __GFP_COMP))
21558 ++ if (__videobuf_dc_alloc(q->dev, mem, PAGE_ALIGN(buf->bsize)))
21559 + goto error;
21560 +
21561 +- /* Try to remap memory */
21562 +- vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
21563 +-
21564 + /* the "vm_pgoff" is just used in v4l2 to find the
21565 + * corresponding buffer data structure which is allocated
21566 + * earlier and it does not mean the offset from the physical
21567 + * buffer start address as usual. So set it to 0 to pass
21568 +- * the sanity check in vm_iomap_memory().
21569 ++ * the sanity check in dma_mmap_coherent().
21570 + */
21571 + vma->vm_pgoff = 0;
21572 +-
21573 +- retval = vm_iomap_memory(vma, mem->dma_handle, mem->size);
21574 ++ retval = dma_mmap_coherent(q->dev, vma, mem->vaddr, mem->dma_handle,
21575 ++ mem->size);
21576 + if (retval) {
21577 + dev_err(q->dev, "mmap: remap failed with error %d. ",
21578 + retval);
21579 +diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
21580 +index 4316988d791a5..61c288d403750 100644
21581 +--- a/drivers/memory/renesas-rpc-if.c
21582 ++++ b/drivers/memory/renesas-rpc-if.c
21583 +@@ -317,6 +317,9 @@ int rpcif_hw_init(struct rpcif *rpc, bool hyperflash)
21584 + regmap_update_bits(rpc->regmap, RPCIF_PHYCNT, RPCIF_PHYCNT_PHYMEM_MASK,
21585 + RPCIF_PHYCNT_PHYMEM(hyperflash ? 3 : 0));
21586 +
21587 ++ /* DMA Transfer is not supported */
21588 ++ regmap_update_bits(rpc->regmap, RPCIF_PHYCNT, RPCIF_PHYCNT_HS, 0);
21589 ++
21590 + if (rpc->type == RPCIF_RCAR_GEN3)
21591 + regmap_update_bits(rpc->regmap, RPCIF_PHYCNT,
21592 + RPCIF_PHYCNT_STRTIM(7), RPCIF_PHYCNT_STRTIM(7));
21593 +diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
21594 +index ba84145195158..04115cd92433b 100644
21595 +--- a/drivers/memstick/core/ms_block.c
21596 ++++ b/drivers/memstick/core/ms_block.c
21597 +@@ -2116,6 +2116,11 @@ static int msb_init_disk(struct memstick_dev *card)
21598 + dbg("Set total disk size to %lu sectors", capacity);
21599 +
21600 + msb->io_queue = alloc_ordered_workqueue("ms_block", WQ_MEM_RECLAIM);
21601 ++ if (!msb->io_queue) {
21602 ++ rc = -ENOMEM;
21603 ++ goto out_cleanup_disk;
21604 ++ }
21605 ++
21606 + INIT_WORK(&msb->io_work, msb_io_work);
21607 + sg_init_table(msb->prealloc_sg, MS_BLOCK_MAX_SEGS+1);
21608 +
21609 +@@ -2125,10 +2130,12 @@ static int msb_init_disk(struct memstick_dev *card)
21610 + msb_start(card);
21611 + rc = device_add_disk(&card->dev, msb->disk, NULL);
21612 + if (rc)
21613 +- goto out_cleanup_disk;
21614 ++ goto out_destroy_workqueue;
21615 + dbg("Disk added");
21616 + return 0;
21617 +
21618 ++out_destroy_workqueue:
21619 ++ destroy_workqueue(msb->io_queue);
21620 + out_cleanup_disk:
21621 + put_disk(msb->disk);
21622 + out_free_tag_set:
21623 +diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
21624 +index 8b93856de432a..9940e2724c05d 100644
21625 +--- a/drivers/mfd/Kconfig
21626 ++++ b/drivers/mfd/Kconfig
21627 +@@ -2027,6 +2027,7 @@ config MFD_ROHM_BD957XMUF
21628 + depends on I2C=y
21629 + depends on OF
21630 + select REGMAP_I2C
21631 ++ select REGMAP_IRQ
21632 + select MFD_CORE
21633 + help
21634 + Select this option to get support for the ROHM BD9576MUF and
21635 +diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
21636 +index 88a212a8168cf..880c41fa7021b 100644
21637 +--- a/drivers/mfd/axp20x.c
21638 ++++ b/drivers/mfd/axp20x.c
21639 +@@ -842,7 +842,7 @@ static void axp20x_power_off(void)
21640 + AXP20X_OFF);
21641 +
21642 + /* Give capacitors etc. time to drain to avoid kernel panic msg. */
21643 +- msleep(500);
21644 ++ mdelay(500);
21645 + }
21646 +
21647 + int axp20x_match_device(struct axp20x_dev *axp20x)
21648 +diff --git a/drivers/mfd/qcom-pm8008.c b/drivers/mfd/qcom-pm8008.c
21649 +index 4b8ff947762f2..9f3c4a01b4c1c 100644
21650 +--- a/drivers/mfd/qcom-pm8008.c
21651 ++++ b/drivers/mfd/qcom-pm8008.c
21652 +@@ -215,8 +215,8 @@ static int pm8008_probe(struct i2c_client *client)
21653 +
21654 + dev = &client->dev;
21655 + regmap = devm_regmap_init_i2c(client, &qcom_mfd_regmap_cfg);
21656 +- if (!regmap)
21657 +- return -ENODEV;
21658 ++ if (IS_ERR(regmap))
21659 ++ return PTR_ERR(regmap);
21660 +
21661 + i2c_set_clientdata(client, regmap);
21662 +
21663 +diff --git a/drivers/mfd/qcom_rpm.c b/drivers/mfd/qcom_rpm.c
21664 +index 71bc34b74bc9c..8fea0e511550a 100644
21665 +--- a/drivers/mfd/qcom_rpm.c
21666 ++++ b/drivers/mfd/qcom_rpm.c
21667 +@@ -547,7 +547,7 @@ static int qcom_rpm_probe(struct platform_device *pdev)
21668 + init_completion(&rpm->ack);
21669 +
21670 + /* Enable message RAM clock */
21671 +- rpm->ramclk = devm_clk_get(&pdev->dev, "ram");
21672 ++ rpm->ramclk = devm_clk_get_enabled(&pdev->dev, "ram");
21673 + if (IS_ERR(rpm->ramclk)) {
21674 + ret = PTR_ERR(rpm->ramclk);
21675 + if (ret == -EPROBE_DEFER)
21676 +@@ -558,7 +558,6 @@ static int qcom_rpm_probe(struct platform_device *pdev)
21677 + */
21678 + rpm->ramclk = NULL;
21679 + }
21680 +- clk_prepare_enable(rpm->ramclk); /* Accepts NULL */
21681 +
21682 + irq_ack = platform_get_irq_byname(pdev, "ack");
21683 + if (irq_ack < 0)
21684 +@@ -673,22 +672,11 @@ static int qcom_rpm_probe(struct platform_device *pdev)
21685 + if (ret)
21686 + dev_warn(&pdev->dev, "failed to mark wakeup irq as wakeup\n");
21687 +
21688 +- return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
21689 +-}
21690 +-
21691 +-static int qcom_rpm_remove(struct platform_device *pdev)
21692 +-{
21693 +- struct qcom_rpm *rpm = dev_get_drvdata(&pdev->dev);
21694 +-
21695 +- of_platform_depopulate(&pdev->dev);
21696 +- clk_disable_unprepare(rpm->ramclk);
21697 +-
21698 +- return 0;
21699 ++ return devm_of_platform_populate(&pdev->dev);
21700 + }
21701 +
21702 + static struct platform_driver qcom_rpm_driver = {
21703 + .probe = qcom_rpm_probe,
21704 +- .remove = qcom_rpm_remove,
21705 + .driver = {
21706 + .name = "qcom_rpm",
21707 + .of_match_table = qcom_rpm_of_match,
21708 +diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
21709 +index 375f692ae9d68..fb95a2d5cef48 100644
21710 +--- a/drivers/misc/cxl/guest.c
21711 ++++ b/drivers/misc/cxl/guest.c
21712 +@@ -965,10 +965,10 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
21713 + * if it returns an error!
21714 + */
21715 + if ((rc = cxl_register_afu(afu)))
21716 +- goto err_put1;
21717 ++ goto err_put_dev;
21718 +
21719 + if ((rc = cxl_sysfs_afu_add(afu)))
21720 +- goto err_put1;
21721 ++ goto err_del_dev;
21722 +
21723 + /*
21724 + * pHyp doesn't expose the programming models supported by the
21725 +@@ -984,7 +984,7 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
21726 + afu->modes_supported = CXL_MODE_DIRECTED;
21727 +
21728 + if ((rc = cxl_afu_select_best_mode(afu)))
21729 +- goto err_put2;
21730 ++ goto err_remove_sysfs;
21731 +
21732 + adapter->afu[afu->slice] = afu;
21733 +
21734 +@@ -1004,10 +1004,12 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
21735 +
21736 + return 0;
21737 +
21738 +-err_put2:
21739 ++err_remove_sysfs:
21740 + cxl_sysfs_afu_remove(afu);
21741 +-err_put1:
21742 +- device_unregister(&afu->dev);
21743 ++err_del_dev:
21744 ++ device_del(&afu->dev);
21745 ++err_put_dev:
21746 ++ put_device(&afu->dev);
21747 + free = false;
21748 + guest_release_serr_irq(afu);
21749 + err2:
21750 +@@ -1141,18 +1143,20 @@ struct cxl *cxl_guest_init_adapter(struct device_node *np, struct platform_devic
21751 + * even if it returns an error!
21752 + */
21753 + if ((rc = cxl_register_adapter(adapter)))
21754 +- goto err_put1;
21755 ++ goto err_put_dev;
21756 +
21757 + if ((rc = cxl_sysfs_adapter_add(adapter)))
21758 +- goto err_put1;
21759 ++ goto err_del_dev;
21760 +
21761 + /* release the context lock as the adapter is configured */
21762 + cxl_adapter_context_unlock(adapter);
21763 +
21764 + return adapter;
21765 +
21766 +-err_put1:
21767 +- device_unregister(&adapter->dev);
21768 ++err_del_dev:
21769 ++ device_del(&adapter->dev);
21770 ++err_put_dev:
21771 ++ put_device(&adapter->dev);
21772 + free = false;
21773 + cxl_guest_remove_chardev(adapter);
21774 + err1:
21775 +diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
21776 +index 3de0aea62ade4..0ff944860dda9 100644
21777 +--- a/drivers/misc/cxl/pci.c
21778 ++++ b/drivers/misc/cxl/pci.c
21779 +@@ -387,6 +387,7 @@ int cxl_calc_capp_routing(struct pci_dev *dev, u64 *chipid,
21780 + rc = get_phb_index(np, phb_index);
21781 + if (rc) {
21782 + pr_err("cxl: invalid phb index\n");
21783 ++ of_node_put(np);
21784 + return rc;
21785 + }
21786 +
21787 +@@ -1164,10 +1165,10 @@ static int pci_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev)
21788 + * if it returns an error!
21789 + */
21790 + if ((rc = cxl_register_afu(afu)))
21791 +- goto err_put1;
21792 ++ goto err_put_dev;
21793 +
21794 + if ((rc = cxl_sysfs_afu_add(afu)))
21795 +- goto err_put1;
21796 ++ goto err_del_dev;
21797 +
21798 + adapter->afu[afu->slice] = afu;
21799 +
21800 +@@ -1176,10 +1177,12 @@ static int pci_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev)
21801 +
21802 + return 0;
21803 +
21804 +-err_put1:
21805 ++err_del_dev:
21806 ++ device_del(&afu->dev);
21807 ++err_put_dev:
21808 + pci_deconfigure_afu(afu);
21809 + cxl_debugfs_afu_remove(afu);
21810 +- device_unregister(&afu->dev);
21811 ++ put_device(&afu->dev);
21812 + return rc;
21813 +
21814 + err_free_native:
21815 +@@ -1667,23 +1670,25 @@ static struct cxl *cxl_pci_init_adapter(struct pci_dev *dev)
21816 + * even if it returns an error!
21817 + */
21818 + if ((rc = cxl_register_adapter(adapter)))
21819 +- goto err_put1;
21820 ++ goto err_put_dev;
21821 +
21822 + if ((rc = cxl_sysfs_adapter_add(adapter)))
21823 +- goto err_put1;
21824 ++ goto err_del_dev;
21825 +
21826 + /* Release the context lock as adapter is configured */
21827 + cxl_adapter_context_unlock(adapter);
21828 +
21829 + return adapter;
21830 +
21831 +-err_put1:
21832 ++err_del_dev:
21833 ++ device_del(&adapter->dev);
21834 ++err_put_dev:
21835 + /* This should mirror cxl_remove_adapter, except without the
21836 + * sysfs parts
21837 + */
21838 + cxl_debugfs_adapter_remove(adapter);
21839 + cxl_deconfigure_adapter(adapter);
21840 +- device_unregister(&adapter->dev);
21841 ++ put_device(&adapter->dev);
21842 + return ERR_PTR(rc);
21843 +
21844 + err_release:
21845 +diff --git a/drivers/misc/habanalabs/common/firmware_if.c b/drivers/misc/habanalabs/common/firmware_if.c
21846 +index 2de6a9bd564de..f18e53bbba6bb 100644
21847 +--- a/drivers/misc/habanalabs/common/firmware_if.c
21848 ++++ b/drivers/misc/habanalabs/common/firmware_if.c
21849 +@@ -2983,7 +2983,7 @@ static int hl_fw_get_sec_attest_data(struct hl_device *hdev, u32 packet_id, void
21850 + int rc;
21851 +
21852 + req_cpu_addr = hl_cpu_accessible_dma_pool_alloc(hdev, size, &req_dma_addr);
21853 +- if (!data) {
21854 ++ if (!req_cpu_addr) {
21855 + dev_err(hdev->dev,
21856 + "Failed to allocate DMA memory for CPU-CP packet %u\n", packet_id);
21857 + return -ENOMEM;
21858 +diff --git a/drivers/misc/lkdtm/cfi.c b/drivers/misc/lkdtm/cfi.c
21859 +index 5245cf6013c95..fc28714ae3a61 100644
21860 +--- a/drivers/misc/lkdtm/cfi.c
21861 ++++ b/drivers/misc/lkdtm/cfi.c
21862 +@@ -54,7 +54,11 @@ static void lkdtm_CFI_FORWARD_PROTO(void)
21863 + # ifdef CONFIG_ARM64_BTI_KERNEL
21864 + # define __no_pac "branch-protection=bti"
21865 + # else
21866 +-# define __no_pac "branch-protection=none"
21867 ++# ifdef CONFIG_CC_HAS_BRANCH_PROT_PAC_RET
21868 ++# define __no_pac "branch-protection=none"
21869 ++# else
21870 ++# define __no_pac "sign-return-address=none"
21871 ++# endif
21872 + # endif
21873 + # define __no_ret_protection __noscs __attribute__((__target__(__no_pac)))
21874 + #else
21875 +diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c
21876 +index e401a51596b9c..92ab49705f645 100644
21877 +--- a/drivers/misc/ocxl/config.c
21878 ++++ b/drivers/misc/ocxl/config.c
21879 +@@ -193,6 +193,18 @@ static int read_dvsec_vendor(struct pci_dev *dev)
21880 + return 0;
21881 + }
21882 +
21883 ++/**
21884 ++ * get_dvsec_vendor0() - Find a related PCI device (function 0)
21885 ++ * @dev: PCI device to match
21886 ++ * @dev0: The PCI device (function 0) found
21887 ++ * @out_pos: The position of PCI device (function 0)
21888 ++ *
21889 ++ * Returns 0 on success, negative on failure.
21890 ++ *
21891 ++ * NOTE: If it's successful, the reference of dev0 is increased,
21892 ++ * so after using it, the callers must call pci_dev_put() to give
21893 ++ * up the reference.
21894 ++ */
21895 + static int get_dvsec_vendor0(struct pci_dev *dev, struct pci_dev **dev0,
21896 + int *out_pos)
21897 + {
21898 +@@ -202,10 +214,14 @@ static int get_dvsec_vendor0(struct pci_dev *dev, struct pci_dev **dev0,
21899 + dev = get_function_0(dev);
21900 + if (!dev)
21901 + return -1;
21902 ++ } else {
21903 ++ dev = pci_dev_get(dev);
21904 + }
21905 + pos = find_dvsec(dev, OCXL_DVSEC_VENDOR_ID);
21906 +- if (!pos)
21907 ++ if (!pos) {
21908 ++ pci_dev_put(dev);
21909 + return -1;
21910 ++ }
21911 + *dev0 = dev;
21912 + *out_pos = pos;
21913 + return 0;
21914 +@@ -222,6 +238,7 @@ int ocxl_config_get_reset_reload(struct pci_dev *dev, int *val)
21915 +
21916 + pci_read_config_dword(dev0, pos + OCXL_DVSEC_VENDOR_RESET_RELOAD,
21917 + &reset_reload);
21918 ++ pci_dev_put(dev0);
21919 + *val = !!(reset_reload & BIT(0));
21920 + return 0;
21921 + }
21922 +@@ -243,6 +260,7 @@ int ocxl_config_set_reset_reload(struct pci_dev *dev, int val)
21923 + reset_reload &= ~BIT(0);
21924 + pci_write_config_dword(dev0, pos + OCXL_DVSEC_VENDOR_RESET_RELOAD,
21925 + reset_reload);
21926 ++ pci_dev_put(dev0);
21927 + return 0;
21928 + }
21929 +
21930 +diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
21931 +index d46dba2df5a10..452d5777a0e4c 100644
21932 +--- a/drivers/misc/ocxl/file.c
21933 ++++ b/drivers/misc/ocxl/file.c
21934 +@@ -541,8 +541,11 @@ int ocxl_file_register_afu(struct ocxl_afu *afu)
21935 + goto err_put;
21936 +
21937 + rc = device_register(&info->dev);
21938 +- if (rc)
21939 +- goto err_put;
21940 ++ if (rc) {
21941 ++ free_minor(info);
21942 ++ put_device(&info->dev);
21943 ++ return rc;
21944 ++ }
21945 +
21946 + rc = ocxl_sysfs_register_afu(info);
21947 + if (rc)
21948 +diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
21949 +index d7ef61e602ede..b836936e97471 100644
21950 +--- a/drivers/misc/sgi-gru/grufault.c
21951 ++++ b/drivers/misc/sgi-gru/grufault.c
21952 +@@ -648,6 +648,7 @@ int gru_handle_user_call_os(unsigned long cb)
21953 + if ((cb & (GRU_HANDLE_STRIDE - 1)) || ucbnum >= GRU_NUM_CB)
21954 + return -EINVAL;
21955 +
21956 ++again:
21957 + gts = gru_find_lock_gts(cb);
21958 + if (!gts)
21959 + return -EINVAL;
21960 +@@ -656,7 +657,11 @@ int gru_handle_user_call_os(unsigned long cb)
21961 + if (ucbnum >= gts->ts_cbr_au_count * GRU_CBR_AU_SIZE)
21962 + goto exit;
21963 +
21964 +- gru_check_context_placement(gts);
21965 ++ if (gru_check_context_placement(gts)) {
21966 ++ gru_unlock_gts(gts);
21967 ++ gru_unload_context(gts, 1);
21968 ++ goto again;
21969 ++ }
21970 +
21971 + /*
21972 + * CCH may contain stale data if ts_force_cch_reload is set.
21973 +@@ -874,7 +879,11 @@ int gru_set_context_option(unsigned long arg)
21974 + } else {
21975 + gts->ts_user_blade_id = req.val1;
21976 + gts->ts_user_chiplet_id = req.val0;
21977 +- gru_check_context_placement(gts);
21978 ++ if (gru_check_context_placement(gts)) {
21979 ++ gru_unlock_gts(gts);
21980 ++ gru_unload_context(gts, 1);
21981 ++ return ret;
21982 ++ }
21983 + }
21984 + break;
21985 + case sco_gseg_owner:
21986 +diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
21987 +index 6706ef3c59776..4eb4b94551390 100644
21988 +--- a/drivers/misc/sgi-gru/grumain.c
21989 ++++ b/drivers/misc/sgi-gru/grumain.c
21990 +@@ -716,9 +716,10 @@ static int gru_check_chiplet_assignment(struct gru_state *gru,
21991 + * chiplet. Misassignment can occur if the process migrates to a different
21992 + * blade or if the user changes the selected blade/chiplet.
21993 + */
21994 +-void gru_check_context_placement(struct gru_thread_state *gts)
21995 ++int gru_check_context_placement(struct gru_thread_state *gts)
21996 + {
21997 + struct gru_state *gru;
21998 ++ int ret = 0;
21999 +
22000 + /*
22001 + * If the current task is the context owner, verify that the
22002 +@@ -726,15 +727,23 @@ void gru_check_context_placement(struct gru_thread_state *gts)
22003 + * references. Pthread apps use non-owner references to the CBRs.
22004 + */
22005 + gru = gts->ts_gru;
22006 ++ /*
22007 ++ * If gru or gts->ts_tgid_owner isn't initialized properly, return
22008 ++ * success to indicate that the caller does not need to unload the
22009 ++ * gru context.The caller is responsible for their inspection and
22010 ++ * reinitialization if needed.
22011 ++ */
22012 + if (!gru || gts->ts_tgid_owner != current->tgid)
22013 +- return;
22014 ++ return ret;
22015 +
22016 + if (!gru_check_chiplet_assignment(gru, gts)) {
22017 + STAT(check_context_unload);
22018 +- gru_unload_context(gts, 1);
22019 ++ ret = -EINVAL;
22020 + } else if (gru_retarget_intr(gts)) {
22021 + STAT(check_context_retarget_intr);
22022 + }
22023 ++
22024 ++ return ret;
22025 + }
22026 +
22027 +
22028 +@@ -934,7 +943,12 @@ again:
22029 + mutex_lock(&gts->ts_ctxlock);
22030 + preempt_disable();
22031 +
22032 +- gru_check_context_placement(gts);
22033 ++ if (gru_check_context_placement(gts)) {
22034 ++ preempt_enable();
22035 ++ mutex_unlock(&gts->ts_ctxlock);
22036 ++ gru_unload_context(gts, 1);
22037 ++ return VM_FAULT_NOPAGE;
22038 ++ }
22039 +
22040 + if (!gts->ts_gru) {
22041 + STAT(load_user_context);
22042 +diff --git a/drivers/misc/sgi-gru/grutables.h b/drivers/misc/sgi-gru/grutables.h
22043 +index 8c52776db2341..640daf1994df7 100644
22044 +--- a/drivers/misc/sgi-gru/grutables.h
22045 ++++ b/drivers/misc/sgi-gru/grutables.h
22046 +@@ -632,7 +632,7 @@ extern int gru_user_flush_tlb(unsigned long arg);
22047 + extern int gru_user_unload_context(unsigned long arg);
22048 + extern int gru_get_exception_detail(unsigned long arg);
22049 + extern int gru_set_context_option(unsigned long address);
22050 +-extern void gru_check_context_placement(struct gru_thread_state *gts);
22051 ++extern int gru_check_context_placement(struct gru_thread_state *gts);
22052 + extern int gru_cpu_fault_map_id(void);
22053 + extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
22054 + extern void gru_flush_all_tlb(struct gru_state *gru);
22055 +diff --git a/drivers/misc/tifm_7xx1.c b/drivers/misc/tifm_7xx1.c
22056 +index 017c2f7d62871..7dd86a9858aba 100644
22057 +--- a/drivers/misc/tifm_7xx1.c
22058 ++++ b/drivers/misc/tifm_7xx1.c
22059 +@@ -190,7 +190,7 @@ static void tifm_7xx1_switch_media(struct work_struct *work)
22060 + spin_unlock_irqrestore(&fm->lock, flags);
22061 + }
22062 + if (sock)
22063 +- tifm_free_device(&sock->dev);
22064 ++ put_device(&sock->dev);
22065 + }
22066 + spin_lock_irqsave(&fm->lock, flags);
22067 + }
22068 +diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
22069 +index 3662bf5320ce5..72b664ed90cf6 100644
22070 +--- a/drivers/mmc/core/sd.c
22071 ++++ b/drivers/mmc/core/sd.c
22072 +@@ -1259,7 +1259,7 @@ static int sd_read_ext_regs(struct mmc_card *card)
22073 + */
22074 + err = sd_read_ext_reg(card, 0, 0, 0, 512, gen_info_buf);
22075 + if (err) {
22076 +- pr_warn("%s: error %d reading general info of SD ext reg\n",
22077 ++ pr_err("%s: error %d reading general info of SD ext reg\n",
22078 + mmc_hostname(card->host), err);
22079 + goto out;
22080 + }
22081 +@@ -1273,7 +1273,12 @@ static int sd_read_ext_regs(struct mmc_card *card)
22082 + /* Number of extensions to be find. */
22083 + num_ext = gen_info_buf[4];
22084 +
22085 +- /* We support revision 0, but limit it to 512 bytes for simplicity. */
22086 ++ /*
22087 ++ * We only support revision 0 and limit it to 512 bytes for simplicity.
22088 ++ * No matter what, let's return zero to allow us to continue using the
22089 ++ * card, even if we can't support the features from the SD function
22090 ++ * extensions registers.
22091 ++ */
22092 + if (rev != 0 || len > 512) {
22093 + pr_warn("%s: non-supported SD ext reg layout\n",
22094 + mmc_hostname(card->host));
22095 +@@ -1288,7 +1293,7 @@ static int sd_read_ext_regs(struct mmc_card *card)
22096 + for (i = 0; i < num_ext; i++) {
22097 + err = sd_parse_ext_reg(card, gen_info_buf, &next_ext_addr);
22098 + if (err) {
22099 +- pr_warn("%s: error %d parsing SD ext reg\n",
22100 ++ pr_err("%s: error %d parsing SD ext reg\n",
22101 + mmc_hostname(card->host), err);
22102 + goto out;
22103 + }
22104 +diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
22105 +index bfb8efeb7eb80..d01df01d4b4d1 100644
22106 +--- a/drivers/mmc/host/alcor.c
22107 ++++ b/drivers/mmc/host/alcor.c
22108 +@@ -1114,7 +1114,10 @@ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
22109 + alcor_hw_init(host);
22110 +
22111 + dev_set_drvdata(&pdev->dev, host);
22112 +- mmc_add_host(mmc);
22113 ++ ret = mmc_add_host(mmc);
22114 ++ if (ret)
22115 ++ goto free_host;
22116 ++
22117 + return 0;
22118 +
22119 + free_host:
22120 +diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
22121 +index 91d52ba7a39fc..bb9bbf1c927b6 100644
22122 +--- a/drivers/mmc/host/atmel-mci.c
22123 ++++ b/drivers/mmc/host/atmel-mci.c
22124 +@@ -2222,6 +2222,7 @@ static int atmci_init_slot(struct atmel_mci *host,
22125 + {
22126 + struct mmc_host *mmc;
22127 + struct atmel_mci_slot *slot;
22128 ++ int ret;
22129 +
22130 + mmc = mmc_alloc_host(sizeof(struct atmel_mci_slot), &host->pdev->dev);
22131 + if (!mmc)
22132 +@@ -2305,11 +2306,13 @@ static int atmci_init_slot(struct atmel_mci *host,
22133 +
22134 + host->slot[id] = slot;
22135 + mmc_regulator_get_supply(mmc);
22136 +- mmc_add_host(mmc);
22137 ++ ret = mmc_add_host(mmc);
22138 ++ if (ret) {
22139 ++ mmc_free_host(mmc);
22140 ++ return ret;
22141 ++ }
22142 +
22143 + if (gpio_is_valid(slot->detect_pin)) {
22144 +- int ret;
22145 +-
22146 + timer_setup(&slot->detect_timer, atmci_detect_change, 0);
22147 +
22148 + ret = request_irq(gpio_to_irq(slot->detect_pin),
22149 +diff --git a/drivers/mmc/host/litex_mmc.c b/drivers/mmc/host/litex_mmc.c
22150 +index 6ba0d63b8c078..39c6707fdfdbc 100644
22151 +--- a/drivers/mmc/host/litex_mmc.c
22152 ++++ b/drivers/mmc/host/litex_mmc.c
22153 +@@ -502,6 +502,7 @@ static int litex_mmc_irq_init(struct platform_device *pdev,
22154 +
22155 + use_polling:
22156 + host->mmc->caps |= MMC_CAP_NEEDS_POLL;
22157 ++ host->irq = 0;
22158 + return 0;
22159 + }
22160 +
22161 +diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
22162 +index df05e60bed9a2..6e5ea0213b477 100644
22163 +--- a/drivers/mmc/host/meson-gx-mmc.c
22164 ++++ b/drivers/mmc/host/meson-gx-mmc.c
22165 +@@ -1335,7 +1335,9 @@ static int meson_mmc_probe(struct platform_device *pdev)
22166 + }
22167 +
22168 + mmc->ops = &meson_mmc_ops;
22169 +- mmc_add_host(mmc);
22170 ++ ret = mmc_add_host(mmc);
22171 ++ if (ret)
22172 ++ goto err_free_irq;
22173 +
22174 + return 0;
22175 +
22176 +diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
22177 +index 012aa85489d86..b9e5dfe74e5c7 100644
22178 +--- a/drivers/mmc/host/mmci.c
22179 ++++ b/drivers/mmc/host/mmci.c
22180 +@@ -2256,7 +2256,9 @@ static int mmci_probe(struct amba_device *dev,
22181 + pm_runtime_set_autosuspend_delay(&dev->dev, 50);
22182 + pm_runtime_use_autosuspend(&dev->dev);
22183 +
22184 +- mmc_add_host(mmc);
22185 ++ ret = mmc_add_host(mmc);
22186 ++ if (ret)
22187 ++ goto clk_disable;
22188 +
22189 + pm_runtime_put(&dev->dev);
22190 + return 0;
22191 +diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
22192 +index dfc3ffd5b1f8c..52ed30f2d9f4f 100644
22193 +--- a/drivers/mmc/host/moxart-mmc.c
22194 ++++ b/drivers/mmc/host/moxart-mmc.c
22195 +@@ -665,7 +665,9 @@ static int moxart_probe(struct platform_device *pdev)
22196 + goto out;
22197 +
22198 + dev_set_drvdata(dev, mmc);
22199 +- mmc_add_host(mmc);
22200 ++ ret = mmc_add_host(mmc);
22201 ++ if (ret)
22202 ++ goto out;
22203 +
22204 + dev_dbg(dev, "IRQ=%d, FIFO is %d bytes\n", irq, host->fifo_width);
22205 +
22206 +diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
22207 +index 2cf0413407ea2..668f865f3efb0 100644
22208 +--- a/drivers/mmc/host/mxcmmc.c
22209 ++++ b/drivers/mmc/host/mxcmmc.c
22210 +@@ -1143,7 +1143,9 @@ static int mxcmci_probe(struct platform_device *pdev)
22211 +
22212 + timer_setup(&host->watchdog, mxcmci_watchdog, 0);
22213 +
22214 +- mmc_add_host(mmc);
22215 ++ ret = mmc_add_host(mmc);
22216 ++ if (ret)
22217 ++ goto out_free_dma;
22218 +
22219 + return 0;
22220 +
22221 +diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
22222 +index fca30add563e9..4bd7447552055 100644
22223 +--- a/drivers/mmc/host/omap_hsmmc.c
22224 ++++ b/drivers/mmc/host/omap_hsmmc.c
22225 +@@ -1946,7 +1946,9 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
22226 + if (!ret)
22227 + mmc->caps |= MMC_CAP_SDIO_IRQ;
22228 +
22229 +- mmc_add_host(mmc);
22230 ++ ret = mmc_add_host(mmc);
22231 ++ if (ret)
22232 ++ goto err_irq;
22233 +
22234 + if (mmc_pdata(host)->name != NULL) {
22235 + ret = device_create_file(&mmc->class_dev, &dev_attr_slot_name);
22236 +diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
22237 +index e4003f6058eb5..2a988f942b6ca 100644
22238 +--- a/drivers/mmc/host/pxamci.c
22239 ++++ b/drivers/mmc/host/pxamci.c
22240 +@@ -763,7 +763,12 @@ static int pxamci_probe(struct platform_device *pdev)
22241 + dev_warn(dev, "gpio_ro and get_ro() both defined\n");
22242 + }
22243 +
22244 +- mmc_add_host(mmc);
22245 ++ ret = mmc_add_host(mmc);
22246 ++ if (ret) {
22247 ++ if (host->pdata && host->pdata->exit)
22248 ++ host->pdata->exit(dev, mmc);
22249 ++ goto out;
22250 ++ }
22251 +
22252 + return 0;
22253 +
22254 +diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h
22255 +index c4abfee1ebae1..e4c490729c98e 100644
22256 +--- a/drivers/mmc/host/renesas_sdhi.h
22257 ++++ b/drivers/mmc/host/renesas_sdhi.h
22258 +@@ -44,6 +44,7 @@ struct renesas_sdhi_quirks {
22259 + bool fixed_addr_mode;
22260 + bool dma_one_rx_only;
22261 + bool manual_tap_correction;
22262 ++ bool old_info1_layout;
22263 + u32 hs400_bad_taps;
22264 + const u8 (*hs400_calib_table)[SDHI_CALIB_TABLE_MAX];
22265 + };
22266 +diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
22267 +index b970699743e0a..e38d0e8b8e0ed 100644
22268 +--- a/drivers/mmc/host/renesas_sdhi_core.c
22269 ++++ b/drivers/mmc/host/renesas_sdhi_core.c
22270 +@@ -546,7 +546,7 @@ static void renesas_sdhi_reset_hs400_mode(struct tmio_mmc_host *host,
22271 + SH_MOBILE_SDHI_SCC_TMPPORT2_HS400OSEL) &
22272 + sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_TMPPORT2));
22273 +
22274 +- if (priv->adjust_hs400_calib_table)
22275 ++ if (priv->quirks && (priv->quirks->hs400_calib_table || priv->quirks->hs400_bad_taps))
22276 + renesas_sdhi_adjust_hs400_mode_disable(host);
22277 +
22278 + sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
22279 +@@ -1068,11 +1068,14 @@ int renesas_sdhi_probe(struct platform_device *pdev,
22280 + if (ver >= SDHI_VER_GEN3_SD)
22281 + host->get_timeout_cycles = renesas_sdhi_gen3_get_cycles;
22282 +
22283 ++ /* Check for SCC so we can reset it if needed */
22284 ++ if (of_data && of_data->scc_offset && ver >= SDHI_VER_GEN2_SDR104)
22285 ++ priv->scc_ctl = host->ctl + of_data->scc_offset;
22286 ++
22287 + /* Enable tuning iff we have an SCC and a supported mode */
22288 +- if (of_data && of_data->scc_offset &&
22289 +- (host->mmc->caps & MMC_CAP_UHS_SDR104 ||
22290 +- host->mmc->caps2 & (MMC_CAP2_HS200_1_8V_SDR |
22291 +- MMC_CAP2_HS400_1_8V))) {
22292 ++ if (priv->scc_ctl && (host->mmc->caps & MMC_CAP_UHS_SDR104 ||
22293 ++ host->mmc->caps2 & (MMC_CAP2_HS200_1_8V_SDR |
22294 ++ MMC_CAP2_HS400_1_8V))) {
22295 + const struct renesas_sdhi_scc *taps = of_data->taps;
22296 + bool use_4tap = quirks && quirks->hs400_4taps;
22297 + bool hit = false;
22298 +@@ -1092,7 +1095,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
22299 + if (!hit)
22300 + dev_warn(&host->pdev->dev, "Unknown clock rate for tuning\n");
22301 +
22302 +- priv->scc_ctl = host->ctl + of_data->scc_offset;
22303 + host->check_retune = renesas_sdhi_check_scc_error;
22304 + host->ops.execute_tuning = renesas_sdhi_execute_tuning;
22305 + host->ops.prepare_hs400_tuning = renesas_sdhi_prepare_hs400_tuning;
22306 +diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
22307 +index 42937596c4c41..7c81c2680701f 100644
22308 +--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
22309 ++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
22310 +@@ -49,7 +49,8 @@
22311 + /* DM_CM_INFO1 and DM_CM_INFO1_MASK */
22312 + #define INFO1_CLEAR 0
22313 + #define INFO1_MASK_CLEAR GENMASK_ULL(31, 0)
22314 +-#define INFO1_DTRANEND1 BIT(17)
22315 ++#define INFO1_DTRANEND1 BIT(20)
22316 ++#define INFO1_DTRANEND1_OLD BIT(17)
22317 + #define INFO1_DTRANEND0 BIT(16)
22318 +
22319 + /* DM_CM_INFO2 and DM_CM_INFO2_MASK */
22320 +@@ -165,6 +166,7 @@ static const struct renesas_sdhi_quirks sdhi_quirks_4tap_nohs400_one_rx = {
22321 + .hs400_disabled = true,
22322 + .hs400_4taps = true,
22323 + .dma_one_rx_only = true,
22324 ++ .old_info1_layout = true,
22325 + };
22326 +
22327 + static const struct renesas_sdhi_quirks sdhi_quirks_4tap = {
22328 +diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
22329 +index e1580f78c6b2d..8098726dcc0bf 100644
22330 +--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
22331 ++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
22332 +@@ -1474,6 +1474,7 @@ static int rtsx_pci_sdmmc_drv_probe(struct platform_device *pdev)
22333 + struct realtek_pci_sdmmc *host;
22334 + struct rtsx_pcr *pcr;
22335 + struct pcr_handle *handle = pdev->dev.platform_data;
22336 ++ int ret;
22337 +
22338 + if (!handle)
22339 + return -ENXIO;
22340 +@@ -1511,7 +1512,13 @@ static int rtsx_pci_sdmmc_drv_probe(struct platform_device *pdev)
22341 + pm_runtime_mark_last_busy(&pdev->dev);
22342 + pm_runtime_use_autosuspend(&pdev->dev);
22343 +
22344 +- mmc_add_host(mmc);
22345 ++ ret = mmc_add_host(mmc);
22346 ++ if (ret) {
22347 ++ pm_runtime_dont_use_autosuspend(&pdev->dev);
22348 ++ pm_runtime_disable(&pdev->dev);
22349 ++ mmc_free_host(mmc);
22350 ++ return ret;
22351 ++ }
22352 +
22353 + return 0;
22354 + }
22355 +diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
22356 +index 5798aee066531..2c650cd58693e 100644
22357 +--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
22358 ++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
22359 +@@ -1329,6 +1329,7 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev)
22360 + #ifdef RTSX_USB_USE_LEDS_CLASS
22361 + int err;
22362 + #endif
22363 ++ int ret;
22364 +
22365 + ucr = usb_get_intfdata(to_usb_interface(pdev->dev.parent));
22366 + if (!ucr)
22367 +@@ -1365,7 +1366,15 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev)
22368 + INIT_WORK(&host->led_work, rtsx_usb_update_led);
22369 +
22370 + #endif
22371 +- mmc_add_host(mmc);
22372 ++ ret = mmc_add_host(mmc);
22373 ++ if (ret) {
22374 ++#ifdef RTSX_USB_USE_LEDS_CLASS
22375 ++ led_classdev_unregister(&host->led);
22376 ++#endif
22377 ++ mmc_free_host(mmc);
22378 ++ pm_runtime_disable(&pdev->dev);
22379 ++ return ret;
22380 ++ }
22381 +
22382 + return 0;
22383 + }
22384 +diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
22385 +index c71000a07656e..1adaa94c31aca 100644
22386 +--- a/drivers/mmc/host/sdhci-tegra.c
22387 ++++ b/drivers/mmc/host/sdhci-tegra.c
22388 +@@ -1526,7 +1526,8 @@ static const struct sdhci_pltfm_data sdhci_tegra186_pdata = {
22389 + SDHCI_QUIRK_NO_HISPD_BIT |
22390 + SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
22391 + SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
22392 +- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
22393 ++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
22394 ++ SDHCI_QUIRK2_ISSUE_CMD_DAT_RESET_TOGETHER,
22395 + .ops = &tegra186_sdhci_ops,
22396 + };
22397 +
22398 +diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
22399 +index c7ad32a75b570..632341911b6e7 100644
22400 +--- a/drivers/mmc/host/sdhci.c
22401 ++++ b/drivers/mmc/host/sdhci.c
22402 +@@ -270,6 +270,11 @@ enum sdhci_reset_reason {
22403 +
22404 + static void sdhci_reset_for_reason(struct sdhci_host *host, enum sdhci_reset_reason reason)
22405 + {
22406 ++ if (host->quirks2 & SDHCI_QUIRK2_ISSUE_CMD_DAT_RESET_TOGETHER) {
22407 ++ sdhci_do_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
22408 ++ return;
22409 ++ }
22410 ++
22411 + switch (reason) {
22412 + case SDHCI_RESET_FOR_INIT:
22413 + sdhci_do_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
22414 +diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
22415 +index 87a3aaa074387..5ce7cdcc192fd 100644
22416 +--- a/drivers/mmc/host/sdhci.h
22417 ++++ b/drivers/mmc/host/sdhci.h
22418 +@@ -478,6 +478,8 @@ struct sdhci_host {
22419 + * block count.
22420 + */
22421 + #define SDHCI_QUIRK2_USE_32BIT_BLK_CNT (1<<18)
22422 ++/* Issue CMD and DATA reset together */
22423 ++#define SDHCI_QUIRK2_ISSUE_CMD_DAT_RESET_TOGETHER (1<<19)
22424 +
22425 + int irq; /* Device IRQ */
22426 + void __iomem *ioaddr; /* Mapped address */
22427 +diff --git a/drivers/mmc/host/sdhci_f_sdh30.c b/drivers/mmc/host/sdhci_f_sdh30.c
22428 +index 3f5977979cf25..6c4f43e112826 100644
22429 +--- a/drivers/mmc/host/sdhci_f_sdh30.c
22430 ++++ b/drivers/mmc/host/sdhci_f_sdh30.c
22431 +@@ -168,6 +168,9 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
22432 + if (reg & SDHCI_CAN_DO_8BIT)
22433 + priv->vendor_hs200 = F_SDH30_EMMC_HS200;
22434 +
22435 ++ if (!(reg & SDHCI_TIMEOUT_CLK_MASK))
22436 ++ host->quirks |= SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK;
22437 ++
22438 + ret = sdhci_add_host(host);
22439 + if (ret)
22440 + goto err_add_host;
22441 +diff --git a/drivers/mmc/host/toshsd.c b/drivers/mmc/host/toshsd.c
22442 +index 8d037c2071abc..497791ffada6d 100644
22443 +--- a/drivers/mmc/host/toshsd.c
22444 ++++ b/drivers/mmc/host/toshsd.c
22445 +@@ -651,7 +651,9 @@ static int toshsd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
22446 + if (ret)
22447 + goto unmap;
22448 +
22449 +- mmc_add_host(mmc);
22450 ++ ret = mmc_add_host(mmc);
22451 ++ if (ret)
22452 ++ goto free_irq;
22453 +
22454 + base = pci_resource_start(pdev, 0);
22455 + dev_dbg(&pdev->dev, "MMIO %pa, IRQ %d\n", &base, pdev->irq);
22456 +@@ -660,6 +662,8 @@ static int toshsd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
22457 +
22458 + return 0;
22459 +
22460 ++free_irq:
22461 ++ free_irq(pdev->irq, host);
22462 + unmap:
22463 + pci_iounmap(pdev, host->ioaddr);
22464 + release:
22465 +diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
22466 +index 88662a90ed960..a2b0d9461665b 100644
22467 +--- a/drivers/mmc/host/via-sdmmc.c
22468 ++++ b/drivers/mmc/host/via-sdmmc.c
22469 +@@ -1151,7 +1151,9 @@ static int via_sd_probe(struct pci_dev *pcidev,
22470 + pcidev->subsystem_device == 0x3891)
22471 + sdhost->quirks = VIA_CRDR_QUIRK_300MS_PWRDELAY;
22472 +
22473 +- mmc_add_host(mmc);
22474 ++ ret = mmc_add_host(mmc);
22475 ++ if (ret)
22476 ++ goto unmap;
22477 +
22478 + return 0;
22479 +
22480 +diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
22481 +index 97beece62fec4..ab36ec4797478 100644
22482 +--- a/drivers/mmc/host/vub300.c
22483 ++++ b/drivers/mmc/host/vub300.c
22484 +@@ -2299,14 +2299,14 @@ static int vub300_probe(struct usb_interface *interface,
22485 + 0x0000, 0x0000, &vub300->system_port_status,
22486 + sizeof(vub300->system_port_status), 1000);
22487 + if (retval < 0) {
22488 +- goto error4;
22489 ++ goto error5;
22490 + } else if (sizeof(vub300->system_port_status) == retval) {
22491 + vub300->card_present =
22492 + (0x0001 & vub300->system_port_status.port_flags) ? 1 : 0;
22493 + vub300->read_only =
22494 + (0x0010 & vub300->system_port_status.port_flags) ? 1 : 0;
22495 + } else {
22496 +- goto error4;
22497 ++ goto error5;
22498 + }
22499 + usb_set_intfdata(interface, vub300);
22500 + INIT_DELAYED_WORK(&vub300->pollwork, vub300_pollwork_thread);
22501 +@@ -2329,8 +2329,13 @@ static int vub300_probe(struct usb_interface *interface,
22502 + "USB vub300 remote SDIO host controller[%d]"
22503 + "connected with no SD/SDIO card inserted\n",
22504 + interface_to_InterfaceNumber(interface));
22505 +- mmc_add_host(mmc);
22506 ++ retval = mmc_add_host(mmc);
22507 ++ if (retval)
22508 ++ goto error6;
22509 ++
22510 + return 0;
22511 ++error6:
22512 ++ del_timer_sync(&vub300->inactivity_timer);
22513 + error5:
22514 + mmc_free_host(mmc);
22515 + /*
22516 +diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
22517 +index 67ecd342fe5f1..7c7ec8d10232b 100644
22518 +--- a/drivers/mmc/host/wbsd.c
22519 ++++ b/drivers/mmc/host/wbsd.c
22520 +@@ -1698,7 +1698,17 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
22521 + */
22522 + wbsd_init_device(host);
22523 +
22524 +- mmc_add_host(mmc);
22525 ++ ret = mmc_add_host(mmc);
22526 ++ if (ret) {
22527 ++ if (!pnp)
22528 ++ wbsd_chip_poweroff(host);
22529 ++
22530 ++ wbsd_release_resources(host);
22531 ++ wbsd_free_mmc(dev);
22532 ++
22533 ++ mmc_free_host(mmc);
22534 ++ return ret;
22535 ++ }
22536 +
22537 + pr_info("%s: W83L51xD", mmc_hostname(mmc));
22538 + if (host->chip_id != 0)
22539 +diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
22540 +index 9b5c503e3a3fc..9aa3027ca25e4 100644
22541 +--- a/drivers/mmc/host/wmt-sdmmc.c
22542 ++++ b/drivers/mmc/host/wmt-sdmmc.c
22543 +@@ -856,11 +856,15 @@ static int wmt_mci_probe(struct platform_device *pdev)
22544 + /* configure the controller to a known 'ready' state */
22545 + wmt_reset_hardware(mmc);
22546 +
22547 +- mmc_add_host(mmc);
22548 ++ ret = mmc_add_host(mmc);
22549 ++ if (ret)
22550 ++ goto fail7;
22551 +
22552 + dev_info(&pdev->dev, "WMT SDHC Controller initialized\n");
22553 +
22554 + return 0;
22555 ++fail7:
22556 ++ clk_disable_unprepare(priv->clk_sdmmc);
22557 + fail6:
22558 + clk_put(priv->clk_sdmmc);
22559 + fail5_and_a_half:
22560 +diff --git a/drivers/mtd/lpddr/lpddr2_nvm.c b/drivers/mtd/lpddr/lpddr2_nvm.c
22561 +index 367e2d906de02..e71af4c490969 100644
22562 +--- a/drivers/mtd/lpddr/lpddr2_nvm.c
22563 ++++ b/drivers/mtd/lpddr/lpddr2_nvm.c
22564 +@@ -433,6 +433,8 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
22565 +
22566 + /* lpddr2_nvm address range */
22567 + add_range = platform_get_resource(pdev, IORESOURCE_MEM, 0);
22568 ++ if (!add_range)
22569 ++ return -ENODEV;
22570 +
22571 + /* Populate map_info data structure */
22572 + *map = (struct map_info) {
22573 +diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c
22574 +index 1749dbbacc135..62a5bf41a6d72 100644
22575 +--- a/drivers/mtd/maps/pxa2xx-flash.c
22576 ++++ b/drivers/mtd/maps/pxa2xx-flash.c
22577 +@@ -64,6 +64,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
22578 + if (!info->map.virt) {
22579 + printk(KERN_WARNING "Failed to ioremap %s\n",
22580 + info->map.name);
22581 ++ kfree(info);
22582 + return -ENOMEM;
22583 + }
22584 + info->map.cached = ioremap_cache(info->map.phys, info->map.size);
22585 +@@ -85,6 +86,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
22586 + iounmap((void *)info->map.virt);
22587 + if (info->map.cached)
22588 + iounmap(info->map.cached);
22589 ++ kfree(info);
22590 + return -EIO;
22591 + }
22592 + info->mtd->dev.parent = &pdev->dev;
22593 +diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
22594 +index 0b4ca0aa41321..686ada1a63e9a 100644
22595 +--- a/drivers/mtd/mtdcore.c
22596 ++++ b/drivers/mtd/mtdcore.c
22597 +@@ -723,8 +723,10 @@ int add_mtd_device(struct mtd_info *mtd)
22598 + mtd_check_of_node(mtd);
22599 + of_node_get(mtd_get_of_node(mtd));
22600 + error = device_register(&mtd->dev);
22601 +- if (error)
22602 ++ if (error) {
22603 ++ put_device(&mtd->dev);
22604 + goto fail_added;
22605 ++ }
22606 +
22607 + /* Add the nvmem provider */
22608 + error = mtd_nvmem_add(mtd);
22609 +@@ -774,6 +776,7 @@ int del_mtd_device(struct mtd_info *mtd)
22610 + {
22611 + int ret;
22612 + struct mtd_notifier *not;
22613 ++ struct device_node *mtd_of_node;
22614 +
22615 + mutex_lock(&mtd_table_mutex);
22616 +
22617 +@@ -792,6 +795,7 @@ int del_mtd_device(struct mtd_info *mtd)
22618 + mtd->index, mtd->name, mtd->usecount);
22619 + ret = -EBUSY;
22620 + } else {
22621 ++ mtd_of_node = mtd_get_of_node(mtd);
22622 + debugfs_remove_recursive(mtd->dbg.dfs_dir);
22623 +
22624 + /* Try to remove the NVMEM provider */
22625 +@@ -803,7 +807,7 @@ int del_mtd_device(struct mtd_info *mtd)
22626 + memset(&mtd->dev, 0, sizeof(mtd->dev));
22627 +
22628 + idr_remove(&mtd_idr, mtd->index);
22629 +- of_node_put(mtd_get_of_node(mtd));
22630 ++ of_node_put(mtd_of_node);
22631 +
22632 + module_put(THIS_MODULE);
22633 + ret = 0;
22634 +@@ -2483,6 +2487,7 @@ static int __init init_mtd(void)
22635 + out_procfs:
22636 + if (proc_mtd)
22637 + remove_proc_entry("mtd", NULL);
22638 ++ bdi_unregister(mtd_bdi);
22639 + bdi_put(mtd_bdi);
22640 + err_bdi:
22641 + class_unregister(&mtd_class);
22642 +diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
22643 +index bee8fc4c9f078..0cf1a1797ea32 100644
22644 +--- a/drivers/mtd/spi-nor/core.c
22645 ++++ b/drivers/mtd/spi-nor/core.c
22646 +@@ -1914,7 +1914,8 @@ static int spi_nor_spimem_check_readop(struct spi_nor *nor,
22647 + spi_nor_spimem_setup_op(nor, &op, read->proto);
22648 +
22649 + /* convert the dummy cycles to the number of bytes */
22650 +- op.dummy.nbytes = (nor->read_dummy * op.dummy.buswidth) / 8;
22651 ++ op.dummy.nbytes = (read->num_mode_clocks + read->num_wait_states) *
22652 ++ op.dummy.buswidth / 8;
22653 + if (spi_nor_protocol_is_dtr(nor->read_proto))
22654 + op.dummy.nbytes *= 2;
22655 +
22656 +diff --git a/drivers/mtd/spi-nor/sysfs.c b/drivers/mtd/spi-nor/sysfs.c
22657 +index 9aec9d8a98ada..4c3b351aef245 100644
22658 +--- a/drivers/mtd/spi-nor/sysfs.c
22659 ++++ b/drivers/mtd/spi-nor/sysfs.c
22660 +@@ -67,6 +67,19 @@ static struct bin_attribute *spi_nor_sysfs_bin_entries[] = {
22661 + NULL
22662 + };
22663 +
22664 ++static umode_t spi_nor_sysfs_is_visible(struct kobject *kobj,
22665 ++ struct attribute *attr, int n)
22666 ++{
22667 ++ struct spi_device *spi = to_spi_device(kobj_to_dev(kobj));
22668 ++ struct spi_mem *spimem = spi_get_drvdata(spi);
22669 ++ struct spi_nor *nor = spi_mem_get_drvdata(spimem);
22670 ++
22671 ++ if (attr == &dev_attr_jedec_id.attr && !nor->info->id_len)
22672 ++ return 0;
22673 ++
22674 ++ return 0444;
22675 ++}
22676 ++
22677 + static umode_t spi_nor_sysfs_is_bin_visible(struct kobject *kobj,
22678 + struct bin_attribute *attr, int n)
22679 + {
22680 +@@ -82,6 +95,7 @@ static umode_t spi_nor_sysfs_is_bin_visible(struct kobject *kobj,
22681 +
22682 + static const struct attribute_group spi_nor_sysfs_group = {
22683 + .name = "spi-nor",
22684 ++ .is_visible = spi_nor_sysfs_is_visible,
22685 + .is_bin_visible = spi_nor_sysfs_is_bin_visible,
22686 + .attrs = spi_nor_sysfs_entries,
22687 + .bin_attrs = spi_nor_sysfs_bin_entries,
22688 +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
22689 +index b9a882f182d29..b108f2f4adc20 100644
22690 +--- a/drivers/net/bonding/bond_main.c
22691 ++++ b/drivers/net/bonding/bond_main.c
22692 +@@ -2531,12 +2531,21 @@ static int bond_slave_info_query(struct net_device *bond_dev, struct ifslave *in
22693 + /* called with rcu_read_lock() */
22694 + static int bond_miimon_inspect(struct bonding *bond)
22695 + {
22696 ++ bool ignore_updelay = false;
22697 + int link_state, commit = 0;
22698 + struct list_head *iter;
22699 + struct slave *slave;
22700 +- bool ignore_updelay;
22701 +
22702 +- ignore_updelay = !rcu_dereference(bond->curr_active_slave);
22703 ++ if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP) {
22704 ++ ignore_updelay = !rcu_dereference(bond->curr_active_slave);
22705 ++ } else {
22706 ++ struct bond_up_slave *usable_slaves;
22707 ++
22708 ++ usable_slaves = rcu_dereference(bond->usable_slaves);
22709 ++
22710 ++ if (usable_slaves && usable_slaves->count == 0)
22711 ++ ignore_updelay = true;
22712 ++ }
22713 +
22714 + bond_for_each_slave_rcu(bond, slave, iter) {
22715 + bond_propose_link_state(slave, BOND_LINK_NOCHANGE);
22716 +@@ -2644,8 +2653,9 @@ static void bond_miimon_link_change(struct bonding *bond,
22717 +
22718 + static void bond_miimon_commit(struct bonding *bond)
22719 + {
22720 +- struct list_head *iter;
22721 + struct slave *slave, *primary;
22722 ++ bool do_failover = false;
22723 ++ struct list_head *iter;
22724 +
22725 + bond_for_each_slave(bond, slave, iter) {
22726 + switch (slave->link_new_state) {
22727 +@@ -2689,8 +2699,9 @@ static void bond_miimon_commit(struct bonding *bond)
22728 +
22729 + bond_miimon_link_change(bond, slave, BOND_LINK_UP);
22730 +
22731 +- if (!bond->curr_active_slave || slave == primary)
22732 +- goto do_failover;
22733 ++ if (!rcu_access_pointer(bond->curr_active_slave) || slave == primary ||
22734 ++ slave->prio > rcu_dereference(bond->curr_active_slave)->prio)
22735 ++ do_failover = true;
22736 +
22737 + continue;
22738 +
22739 +@@ -2711,7 +2722,7 @@ static void bond_miimon_commit(struct bonding *bond)
22740 + bond_miimon_link_change(bond, slave, BOND_LINK_DOWN);
22741 +
22742 + if (slave == rcu_access_pointer(bond->curr_active_slave))
22743 +- goto do_failover;
22744 ++ do_failover = true;
22745 +
22746 + continue;
22747 +
22748 +@@ -2722,8 +2733,9 @@ static void bond_miimon_commit(struct bonding *bond)
22749 +
22750 + continue;
22751 + }
22752 ++ }
22753 +
22754 +-do_failover:
22755 ++ if (do_failover) {
22756 + block_netpoll_tx();
22757 + bond_select_active_slave(bond);
22758 + unblock_netpoll_tx();
22759 +@@ -3521,6 +3533,7 @@ static int bond_ab_arp_inspect(struct bonding *bond)
22760 + */
22761 + static void bond_ab_arp_commit(struct bonding *bond)
22762 + {
22763 ++ bool do_failover = false;
22764 + struct list_head *iter;
22765 + unsigned long last_tx;
22766 + struct slave *slave;
22767 +@@ -3550,8 +3563,9 @@ static void bond_ab_arp_commit(struct bonding *bond)
22768 + slave_info(bond->dev, slave->dev, "link status definitely up\n");
22769 +
22770 + if (!rtnl_dereference(bond->curr_active_slave) ||
22771 +- slave == rtnl_dereference(bond->primary_slave))
22772 +- goto do_failover;
22773 ++ slave == rtnl_dereference(bond->primary_slave) ||
22774 ++ slave->prio > rtnl_dereference(bond->curr_active_slave)->prio)
22775 ++ do_failover = true;
22776 +
22777 + }
22778 +
22779 +@@ -3570,7 +3584,7 @@ static void bond_ab_arp_commit(struct bonding *bond)
22780 +
22781 + if (slave == rtnl_dereference(bond->curr_active_slave)) {
22782 + RCU_INIT_POINTER(bond->current_arp_slave, NULL);
22783 +- goto do_failover;
22784 ++ do_failover = true;
22785 + }
22786 +
22787 + continue;
22788 +@@ -3594,8 +3608,9 @@ static void bond_ab_arp_commit(struct bonding *bond)
22789 + slave->link_new_state);
22790 + continue;
22791 + }
22792 ++ }
22793 +
22794 +-do_failover:
22795 ++ if (do_failover) {
22796 + block_netpoll_tx();
22797 + bond_select_active_slave(bond);
22798 + unblock_netpoll_tx();
22799 +diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
22800 +index e5575d2755e4b..2de998b98cb5e 100644
22801 +--- a/drivers/net/can/m_can/m_can.c
22802 ++++ b/drivers/net/can/m_can/m_can.c
22803 +@@ -1233,10 +1233,17 @@ static int m_can_set_bittiming(struct net_device *dev)
22804 + * - setup bittiming
22805 + * - configure timestamp generation
22806 + */
22807 +-static void m_can_chip_config(struct net_device *dev)
22808 ++static int m_can_chip_config(struct net_device *dev)
22809 + {
22810 + struct m_can_classdev *cdev = netdev_priv(dev);
22811 + u32 cccr, test;
22812 ++ int err;
22813 ++
22814 ++ err = m_can_init_ram(cdev);
22815 ++ if (err) {
22816 ++ dev_err(cdev->dev, "Message RAM configuration failed\n");
22817 ++ return err;
22818 ++ }
22819 +
22820 + m_can_config_endisable(cdev, true);
22821 +
22822 +@@ -1360,18 +1367,25 @@ static void m_can_chip_config(struct net_device *dev)
22823 +
22824 + if (cdev->ops->init)
22825 + cdev->ops->init(cdev);
22826 ++
22827 ++ return 0;
22828 + }
22829 +
22830 +-static void m_can_start(struct net_device *dev)
22831 ++static int m_can_start(struct net_device *dev)
22832 + {
22833 + struct m_can_classdev *cdev = netdev_priv(dev);
22834 ++ int ret;
22835 +
22836 + /* basic m_can configuration */
22837 +- m_can_chip_config(dev);
22838 ++ ret = m_can_chip_config(dev);
22839 ++ if (ret)
22840 ++ return ret;
22841 +
22842 + cdev->can.state = CAN_STATE_ERROR_ACTIVE;
22843 +
22844 + m_can_enable_all_interrupts(cdev);
22845 ++
22846 ++ return 0;
22847 + }
22848 +
22849 + static int m_can_set_mode(struct net_device *dev, enum can_mode mode)
22850 +@@ -1799,7 +1813,9 @@ static int m_can_open(struct net_device *dev)
22851 + }
22852 +
22853 + /* start the m_can controller */
22854 +- m_can_start(dev);
22855 ++ err = m_can_start(dev);
22856 ++ if (err)
22857 ++ goto exit_irq_fail;
22858 +
22859 + if (!cdev->is_peripheral)
22860 + napi_enable(&cdev->napi);
22861 +@@ -2058,9 +2074,13 @@ int m_can_class_resume(struct device *dev)
22862 + ret = m_can_clk_start(cdev);
22863 + if (ret)
22864 + return ret;
22865 ++ ret = m_can_start(ndev);
22866 ++ if (ret) {
22867 ++ m_can_clk_stop(cdev);
22868 ++
22869 ++ return ret;
22870 ++ }
22871 +
22872 +- m_can_init_ram(cdev);
22873 +- m_can_start(ndev);
22874 + netif_device_attach(ndev);
22875 + netif_start_queue(ndev);
22876 + }
22877 +diff --git a/drivers/net/can/m_can/m_can_platform.c b/drivers/net/can/m_can/m_can_platform.c
22878 +index eee47bad05920..de6d8e01bf2e8 100644
22879 +--- a/drivers/net/can/m_can/m_can_platform.c
22880 ++++ b/drivers/net/can/m_can/m_can_platform.c
22881 +@@ -140,10 +140,6 @@ static int m_can_plat_probe(struct platform_device *pdev)
22882 +
22883 + platform_set_drvdata(pdev, mcan_class);
22884 +
22885 +- ret = m_can_init_ram(mcan_class);
22886 +- if (ret)
22887 +- goto probe_fail;
22888 +-
22889 + pm_runtime_enable(mcan_class->dev);
22890 + ret = m_can_class_register(mcan_class);
22891 + if (ret)
22892 +diff --git a/drivers/net/can/m_can/tcan4x5x-core.c b/drivers/net/can/m_can/tcan4x5x-core.c
22893 +index 41645a24384ce..2342aa011647c 100644
22894 +--- a/drivers/net/can/m_can/tcan4x5x-core.c
22895 ++++ b/drivers/net/can/m_can/tcan4x5x-core.c
22896 +@@ -10,7 +10,7 @@
22897 + #define TCAN4X5X_DEV_ID1 0x04
22898 + #define TCAN4X5X_REV 0x08
22899 + #define TCAN4X5X_STATUS 0x0C
22900 +-#define TCAN4X5X_ERROR_STATUS 0x10
22901 ++#define TCAN4X5X_ERROR_STATUS_MASK 0x10
22902 + #define TCAN4X5X_CONTROL 0x14
22903 +
22904 + #define TCAN4X5X_CONFIG 0x800
22905 +@@ -204,17 +204,7 @@ static int tcan4x5x_clear_interrupts(struct m_can_classdev *cdev)
22906 + if (ret)
22907 + return ret;
22908 +
22909 +- ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_MCAN_INT_REG,
22910 +- TCAN4X5X_ENABLE_MCAN_INT);
22911 +- if (ret)
22912 +- return ret;
22913 +-
22914 +- ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_INT_FLAGS,
22915 +- TCAN4X5X_CLEAR_ALL_INT);
22916 +- if (ret)
22917 +- return ret;
22918 +-
22919 +- return tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_ERROR_STATUS,
22920 ++ return tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_INT_FLAGS,
22921 + TCAN4X5X_CLEAR_ALL_INT);
22922 + }
22923 +
22924 +@@ -234,8 +224,8 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
22925 + if (ret)
22926 + return ret;
22927 +
22928 +- /* Zero out the MCAN buffers */
22929 +- ret = m_can_init_ram(cdev);
22930 ++ ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_ERROR_STATUS_MASK,
22931 ++ TCAN4X5X_CLEAR_ALL_INT);
22932 + if (ret)
22933 + return ret;
22934 +
22935 +diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
22936 +index f6c0938027ece..ff10b3790d844 100644
22937 +--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
22938 ++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
22939 +@@ -76,6 +76,14 @@ struct kvaser_usb_tx_urb_context {
22940 + u32 echo_index;
22941 + };
22942 +
22943 ++struct kvaser_usb_busparams {
22944 ++ __le32 bitrate;
22945 ++ u8 tseg1;
22946 ++ u8 tseg2;
22947 ++ u8 sjw;
22948 ++ u8 nsamples;
22949 ++} __packed;
22950 ++
22951 + struct kvaser_usb {
22952 + struct usb_device *udev;
22953 + struct usb_interface *intf;
22954 +@@ -104,13 +112,19 @@ struct kvaser_usb_net_priv {
22955 + struct can_priv can;
22956 + struct can_berr_counter bec;
22957 +
22958 ++ /* subdriver-specific data */
22959 ++ void *sub_priv;
22960 ++
22961 + struct kvaser_usb *dev;
22962 + struct net_device *netdev;
22963 + int channel;
22964 +
22965 +- struct completion start_comp, stop_comp, flush_comp;
22966 ++ struct completion start_comp, stop_comp, flush_comp,
22967 ++ get_busparams_comp;
22968 + struct usb_anchor tx_submitted;
22969 +
22970 ++ struct kvaser_usb_busparams busparams_nominal, busparams_data;
22971 ++
22972 + spinlock_t tx_contexts_lock; /* lock for active_tx_contexts */
22973 + int active_tx_contexts;
22974 + struct kvaser_usb_tx_urb_context tx_contexts[];
22975 +@@ -120,11 +134,15 @@ struct kvaser_usb_net_priv {
22976 + * struct kvaser_usb_dev_ops - Device specific functions
22977 + * @dev_set_mode: used for can.do_set_mode
22978 + * @dev_set_bittiming: used for can.do_set_bittiming
22979 ++ * @dev_get_busparams: readback arbitration busparams
22980 + * @dev_set_data_bittiming: used for can.do_set_data_bittiming
22981 ++ * @dev_get_data_busparams: readback data busparams
22982 + * @dev_get_berr_counter: used for can.do_get_berr_counter
22983 + *
22984 + * @dev_setup_endpoints: setup USB in and out endpoints
22985 + * @dev_init_card: initialize card
22986 ++ * @dev_init_channel: initialize channel
22987 ++ * @dev_remove_channel: uninitialize channel
22988 + * @dev_get_software_info: get software info
22989 + * @dev_get_software_details: get software details
22990 + * @dev_get_card_info: get card info
22991 +@@ -140,12 +158,18 @@ struct kvaser_usb_net_priv {
22992 + */
22993 + struct kvaser_usb_dev_ops {
22994 + int (*dev_set_mode)(struct net_device *netdev, enum can_mode mode);
22995 +- int (*dev_set_bittiming)(struct net_device *netdev);
22996 +- int (*dev_set_data_bittiming)(struct net_device *netdev);
22997 ++ int (*dev_set_bittiming)(const struct net_device *netdev,
22998 ++ const struct kvaser_usb_busparams *busparams);
22999 ++ int (*dev_get_busparams)(struct kvaser_usb_net_priv *priv);
23000 ++ int (*dev_set_data_bittiming)(const struct net_device *netdev,
23001 ++ const struct kvaser_usb_busparams *busparams);
23002 ++ int (*dev_get_data_busparams)(struct kvaser_usb_net_priv *priv);
23003 + int (*dev_get_berr_counter)(const struct net_device *netdev,
23004 + struct can_berr_counter *bec);
23005 + int (*dev_setup_endpoints)(struct kvaser_usb *dev);
23006 + int (*dev_init_card)(struct kvaser_usb *dev);
23007 ++ int (*dev_init_channel)(struct kvaser_usb_net_priv *priv);
23008 ++ void (*dev_remove_channel)(struct kvaser_usb_net_priv *priv);
23009 + int (*dev_get_software_info)(struct kvaser_usb *dev);
23010 + int (*dev_get_software_details)(struct kvaser_usb *dev);
23011 + int (*dev_get_card_info)(struct kvaser_usb *dev);
23012 +diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
23013 +index 802e27c0ecedb..3a2bfaad14065 100644
23014 +--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
23015 ++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
23016 +@@ -440,10 +440,6 @@ static int kvaser_usb_open(struct net_device *netdev)
23017 + if (err)
23018 + return err;
23019 +
23020 +- err = kvaser_usb_setup_rx_urbs(dev);
23021 +- if (err)
23022 +- goto error;
23023 +-
23024 + err = ops->dev_set_opt_mode(priv);
23025 + if (err)
23026 + goto error;
23027 +@@ -534,6 +530,93 @@ static int kvaser_usb_close(struct net_device *netdev)
23028 + return 0;
23029 + }
23030 +
23031 ++static int kvaser_usb_set_bittiming(struct net_device *netdev)
23032 ++{
23033 ++ struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
23034 ++ struct kvaser_usb *dev = priv->dev;
23035 ++ const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
23036 ++ struct can_bittiming *bt = &priv->can.bittiming;
23037 ++
23038 ++ struct kvaser_usb_busparams busparams;
23039 ++ int tseg1 = bt->prop_seg + bt->phase_seg1;
23040 ++ int tseg2 = bt->phase_seg2;
23041 ++ int sjw = bt->sjw;
23042 ++ int err = -EOPNOTSUPP;
23043 ++
23044 ++ busparams.bitrate = cpu_to_le32(bt->bitrate);
23045 ++ busparams.sjw = (u8)sjw;
23046 ++ busparams.tseg1 = (u8)tseg1;
23047 ++ busparams.tseg2 = (u8)tseg2;
23048 ++ if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
23049 ++ busparams.nsamples = 3;
23050 ++ else
23051 ++ busparams.nsamples = 1;
23052 ++
23053 ++ err = ops->dev_set_bittiming(netdev, &busparams);
23054 ++ if (err)
23055 ++ return err;
23056 ++
23057 ++ err = kvaser_usb_setup_rx_urbs(priv->dev);
23058 ++ if (err)
23059 ++ return err;
23060 ++
23061 ++ err = ops->dev_get_busparams(priv);
23062 ++ if (err) {
23063 ++ /* Treat EOPNOTSUPP as success */
23064 ++ if (err == -EOPNOTSUPP)
23065 ++ err = 0;
23066 ++ return err;
23067 ++ }
23068 ++
23069 ++ if (memcmp(&busparams, &priv->busparams_nominal,
23070 ++ sizeof(priv->busparams_nominal)) != 0)
23071 ++ err = -EINVAL;
23072 ++
23073 ++ return err;
23074 ++}
23075 ++
23076 ++static int kvaser_usb_set_data_bittiming(struct net_device *netdev)
23077 ++{
23078 ++ struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
23079 ++ struct kvaser_usb *dev = priv->dev;
23080 ++ const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
23081 ++ struct can_bittiming *dbt = &priv->can.data_bittiming;
23082 ++
23083 ++ struct kvaser_usb_busparams busparams;
23084 ++ int tseg1 = dbt->prop_seg + dbt->phase_seg1;
23085 ++ int tseg2 = dbt->phase_seg2;
23086 ++ int sjw = dbt->sjw;
23087 ++ int err;
23088 ++
23089 ++ if (!ops->dev_set_data_bittiming ||
23090 ++ !ops->dev_get_data_busparams)
23091 ++ return -EOPNOTSUPP;
23092 ++
23093 ++ busparams.bitrate = cpu_to_le32(dbt->bitrate);
23094 ++ busparams.sjw = (u8)sjw;
23095 ++ busparams.tseg1 = (u8)tseg1;
23096 ++ busparams.tseg2 = (u8)tseg2;
23097 ++ busparams.nsamples = 1;
23098 ++
23099 ++ err = ops->dev_set_data_bittiming(netdev, &busparams);
23100 ++ if (err)
23101 ++ return err;
23102 ++
23103 ++ err = kvaser_usb_setup_rx_urbs(priv->dev);
23104 ++ if (err)
23105 ++ return err;
23106 ++
23107 ++ err = ops->dev_get_data_busparams(priv);
23108 ++ if (err)
23109 ++ return err;
23110 ++
23111 ++ if (memcmp(&busparams, &priv->busparams_data,
23112 ++ sizeof(priv->busparams_data)) != 0)
23113 ++ err = -EINVAL;
23114 ++
23115 ++ return err;
23116 ++}
23117 ++
23118 + static void kvaser_usb_write_bulk_callback(struct urb *urb)
23119 + {
23120 + struct kvaser_usb_tx_urb_context *context = urb->context;
23121 +@@ -684,6 +767,7 @@ static const struct ethtool_ops kvaser_usb_ethtool_ops_hwts = {
23122 +
23123 + static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
23124 + {
23125 ++ const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
23126 + int i;
23127 +
23128 + for (i = 0; i < dev->nchannels; i++) {
23129 +@@ -699,6 +783,9 @@ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
23130 + if (!dev->nets[i])
23131 + continue;
23132 +
23133 ++ if (ops->dev_remove_channel)
23134 ++ ops->dev_remove_channel(dev->nets[i]);
23135 ++
23136 + free_candev(dev->nets[i]->netdev);
23137 + }
23138 + }
23139 +@@ -730,6 +817,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
23140 + init_completion(&priv->start_comp);
23141 + init_completion(&priv->stop_comp);
23142 + init_completion(&priv->flush_comp);
23143 ++ init_completion(&priv->get_busparams_comp);
23144 + priv->can.ctrlmode_supported = 0;
23145 +
23146 + priv->dev = dev;
23147 +@@ -742,7 +830,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
23148 + priv->can.state = CAN_STATE_STOPPED;
23149 + priv->can.clock.freq = dev->cfg->clock.freq;
23150 + priv->can.bittiming_const = dev->cfg->bittiming_const;
23151 +- priv->can.do_set_bittiming = ops->dev_set_bittiming;
23152 ++ priv->can.do_set_bittiming = kvaser_usb_set_bittiming;
23153 + priv->can.do_set_mode = ops->dev_set_mode;
23154 + if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) ||
23155 + (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP))
23156 +@@ -754,7 +842,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
23157 +
23158 + if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) {
23159 + priv->can.data_bittiming_const = dev->cfg->data_bittiming_const;
23160 +- priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming;
23161 ++ priv->can.do_set_data_bittiming = kvaser_usb_set_data_bittiming;
23162 + }
23163 +
23164 + netdev->flags |= IFF_ECHO;
23165 +@@ -772,17 +860,26 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
23166 +
23167 + dev->nets[channel] = priv;
23168 +
23169 ++ if (ops->dev_init_channel) {
23170 ++ err = ops->dev_init_channel(priv);
23171 ++ if (err)
23172 ++ goto err;
23173 ++ }
23174 ++
23175 + err = register_candev(netdev);
23176 + if (err) {
23177 + dev_err(&dev->intf->dev, "Failed to register CAN device\n");
23178 +- free_candev(netdev);
23179 +- dev->nets[channel] = NULL;
23180 +- return err;
23181 ++ goto err;
23182 + }
23183 +
23184 + netdev_dbg(netdev, "device registered\n");
23185 +
23186 + return 0;
23187 ++
23188 ++err:
23189 ++ free_candev(netdev);
23190 ++ dev->nets[channel] = NULL;
23191 ++ return err;
23192 + }
23193 +
23194 + static int kvaser_usb_probe(struct usb_interface *intf,
23195 +diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
23196 +index 66f672ea631b8..f688124d6d669 100644
23197 +--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
23198 ++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
23199 +@@ -45,6 +45,8 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_rt;
23200 +
23201 + /* Minihydra command IDs */
23202 + #define CMD_SET_BUSPARAMS_REQ 16
23203 ++#define CMD_GET_BUSPARAMS_REQ 17
23204 ++#define CMD_GET_BUSPARAMS_RESP 18
23205 + #define CMD_GET_CHIP_STATE_REQ 19
23206 + #define CMD_CHIP_STATE_EVENT 20
23207 + #define CMD_SET_DRIVERMODE_REQ 21
23208 +@@ -196,21 +198,26 @@ struct kvaser_cmd_chip_state_event {
23209 + #define KVASER_USB_HYDRA_BUS_MODE_CANFD_ISO 0x01
23210 + #define KVASER_USB_HYDRA_BUS_MODE_NONISO 0x02
23211 + struct kvaser_cmd_set_busparams {
23212 +- __le32 bitrate;
23213 +- u8 tseg1;
23214 +- u8 tseg2;
23215 +- u8 sjw;
23216 +- u8 nsamples;
23217 ++ struct kvaser_usb_busparams busparams_nominal;
23218 + u8 reserved0[4];
23219 +- __le32 bitrate_d;
23220 +- u8 tseg1_d;
23221 +- u8 tseg2_d;
23222 +- u8 sjw_d;
23223 +- u8 nsamples_d;
23224 ++ struct kvaser_usb_busparams busparams_data;
23225 + u8 canfd_mode;
23226 + u8 reserved1[7];
23227 + } __packed;
23228 +
23229 ++/* Busparam type */
23230 ++#define KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN 0x00
23231 ++#define KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD 0x01
23232 ++struct kvaser_cmd_get_busparams_req {
23233 ++ u8 type;
23234 ++ u8 reserved[27];
23235 ++} __packed;
23236 ++
23237 ++struct kvaser_cmd_get_busparams_res {
23238 ++ struct kvaser_usb_busparams busparams;
23239 ++ u8 reserved[20];
23240 ++} __packed;
23241 ++
23242 + /* Ctrl modes */
23243 + #define KVASER_USB_HYDRA_CTRLMODE_NORMAL 0x01
23244 + #define KVASER_USB_HYDRA_CTRLMODE_LISTEN 0x02
23245 +@@ -281,6 +288,8 @@ struct kvaser_cmd {
23246 + struct kvaser_cmd_error_event error_event;
23247 +
23248 + struct kvaser_cmd_set_busparams set_busparams_req;
23249 ++ struct kvaser_cmd_get_busparams_req get_busparams_req;
23250 ++ struct kvaser_cmd_get_busparams_res get_busparams_res;
23251 +
23252 + struct kvaser_cmd_chip_state_event chip_state_event;
23253 +
23254 +@@ -363,6 +372,10 @@ struct kvaser_cmd_ext {
23255 + } __packed;
23256 + } __packed;
23257 +
23258 ++struct kvaser_usb_net_hydra_priv {
23259 ++ int pending_get_busparams_type;
23260 ++};
23261 ++
23262 + static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = {
23263 + .name = "kvaser_usb_kcan",
23264 + .tseg1_min = 1,
23265 +@@ -840,6 +853,39 @@ static void kvaser_usb_hydra_flush_queue_reply(const struct kvaser_usb *dev,
23266 + complete(&priv->flush_comp);
23267 + }
23268 +
23269 ++static void kvaser_usb_hydra_get_busparams_reply(const struct kvaser_usb *dev,
23270 ++ const struct kvaser_cmd *cmd)
23271 ++{
23272 ++ struct kvaser_usb_net_priv *priv;
23273 ++ struct kvaser_usb_net_hydra_priv *hydra;
23274 ++
23275 ++ priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
23276 ++ if (!priv)
23277 ++ return;
23278 ++
23279 ++ hydra = priv->sub_priv;
23280 ++ if (!hydra)
23281 ++ return;
23282 ++
23283 ++ switch (hydra->pending_get_busparams_type) {
23284 ++ case KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN:
23285 ++ memcpy(&priv->busparams_nominal, &cmd->get_busparams_res.busparams,
23286 ++ sizeof(priv->busparams_nominal));
23287 ++ break;
23288 ++ case KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD:
23289 ++ memcpy(&priv->busparams_data, &cmd->get_busparams_res.busparams,
23290 ++ sizeof(priv->busparams_nominal));
23291 ++ break;
23292 ++ default:
23293 ++ dev_warn(&dev->intf->dev, "Unknown get_busparams_type %d\n",
23294 ++ hydra->pending_get_busparams_type);
23295 ++ break;
23296 ++ }
23297 ++ hydra->pending_get_busparams_type = -1;
23298 ++
23299 ++ complete(&priv->get_busparams_comp);
23300 ++}
23301 ++
23302 + static void
23303 + kvaser_usb_hydra_bus_status_to_can_state(const struct kvaser_usb_net_priv *priv,
23304 + u8 bus_status,
23305 +@@ -1326,6 +1372,10 @@ static void kvaser_usb_hydra_handle_cmd_std(const struct kvaser_usb *dev,
23306 + kvaser_usb_hydra_state_event(dev, cmd);
23307 + break;
23308 +
23309 ++ case CMD_GET_BUSPARAMS_RESP:
23310 ++ kvaser_usb_hydra_get_busparams_reply(dev, cmd);
23311 ++ break;
23312 ++
23313 + case CMD_ERROR_EVENT:
23314 + kvaser_usb_hydra_error_event(dev, cmd);
23315 + break;
23316 +@@ -1522,15 +1572,58 @@ static int kvaser_usb_hydra_set_mode(struct net_device *netdev,
23317 + return err;
23318 + }
23319 +
23320 +-static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
23321 ++static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
23322 ++ int busparams_type)
23323 ++{
23324 ++ struct kvaser_usb *dev = priv->dev;
23325 ++ struct kvaser_usb_net_hydra_priv *hydra = priv->sub_priv;
23326 ++ struct kvaser_cmd *cmd;
23327 ++ int err;
23328 ++
23329 ++ if (!hydra)
23330 ++ return -EINVAL;
23331 ++
23332 ++ cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
23333 ++ if (!cmd)
23334 ++ return -ENOMEM;
23335 ++
23336 ++ cmd->header.cmd_no = CMD_GET_BUSPARAMS_REQ;
23337 ++ kvaser_usb_hydra_set_cmd_dest_he
23338 ++ (cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
23339 ++ kvaser_usb_hydra_set_cmd_transid
23340 ++ (cmd, kvaser_usb_hydra_get_next_transid(dev));
23341 ++ cmd->get_busparams_req.type = busparams_type;
23342 ++ hydra->pending_get_busparams_type = busparams_type;
23343 ++
23344 ++ reinit_completion(&priv->get_busparams_comp);
23345 ++
23346 ++ err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
23347 ++ if (err)
23348 ++ return err;
23349 ++
23350 ++ if (!wait_for_completion_timeout(&priv->get_busparams_comp,
23351 ++ msecs_to_jiffies(KVASER_USB_TIMEOUT)))
23352 ++ return -ETIMEDOUT;
23353 ++
23354 ++ return err;
23355 ++}
23356 ++
23357 ++static int kvaser_usb_hydra_get_nominal_busparams(struct kvaser_usb_net_priv *priv)
23358 ++{
23359 ++ return kvaser_usb_hydra_get_busparams(priv, KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN);
23360 ++}
23361 ++
23362 ++static int kvaser_usb_hydra_get_data_busparams(struct kvaser_usb_net_priv *priv)
23363 ++{
23364 ++ return kvaser_usb_hydra_get_busparams(priv, KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD);
23365 ++}
23366 ++
23367 ++static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
23368 ++ const struct kvaser_usb_busparams *busparams)
23369 + {
23370 + struct kvaser_cmd *cmd;
23371 + struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
23372 +- struct can_bittiming *bt = &priv->can.bittiming;
23373 + struct kvaser_usb *dev = priv->dev;
23374 +- int tseg1 = bt->prop_seg + bt->phase_seg1;
23375 +- int tseg2 = bt->phase_seg2;
23376 +- int sjw = bt->sjw;
23377 + int err;
23378 +
23379 + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
23380 +@@ -1538,11 +1631,8 @@ static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
23381 + return -ENOMEM;
23382 +
23383 + cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
23384 +- cmd->set_busparams_req.bitrate = cpu_to_le32(bt->bitrate);
23385 +- cmd->set_busparams_req.sjw = (u8)sjw;
23386 +- cmd->set_busparams_req.tseg1 = (u8)tseg1;
23387 +- cmd->set_busparams_req.tseg2 = (u8)tseg2;
23388 +- cmd->set_busparams_req.nsamples = 1;
23389 ++ memcpy(&cmd->set_busparams_req.busparams_nominal, busparams,
23390 ++ sizeof(cmd->set_busparams_req.busparams_nominal));
23391 +
23392 + kvaser_usb_hydra_set_cmd_dest_he
23393 + (cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
23394 +@@ -1556,15 +1646,12 @@ static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
23395 + return err;
23396 + }
23397 +
23398 +-static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
23399 ++static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
23400 ++ const struct kvaser_usb_busparams *busparams)
23401 + {
23402 + struct kvaser_cmd *cmd;
23403 + struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
23404 +- struct can_bittiming *dbt = &priv->can.data_bittiming;
23405 + struct kvaser_usb *dev = priv->dev;
23406 +- int tseg1 = dbt->prop_seg + dbt->phase_seg1;
23407 +- int tseg2 = dbt->phase_seg2;
23408 +- int sjw = dbt->sjw;
23409 + int err;
23410 +
23411 + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
23412 +@@ -1572,11 +1659,8 @@ static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
23413 + return -ENOMEM;
23414 +
23415 + cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
23416 +- cmd->set_busparams_req.bitrate_d = cpu_to_le32(dbt->bitrate);
23417 +- cmd->set_busparams_req.sjw_d = (u8)sjw;
23418 +- cmd->set_busparams_req.tseg1_d = (u8)tseg1;
23419 +- cmd->set_busparams_req.tseg2_d = (u8)tseg2;
23420 +- cmd->set_busparams_req.nsamples_d = 1;
23421 ++ memcpy(&cmd->set_busparams_req.busparams_data, busparams,
23422 ++ sizeof(cmd->set_busparams_req.busparams_data));
23423 +
23424 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) {
23425 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
23426 +@@ -1683,6 +1767,19 @@ static int kvaser_usb_hydra_init_card(struct kvaser_usb *dev)
23427 + return 0;
23428 + }
23429 +
23430 ++static int kvaser_usb_hydra_init_channel(struct kvaser_usb_net_priv *priv)
23431 ++{
23432 ++ struct kvaser_usb_net_hydra_priv *hydra;
23433 ++
23434 ++ hydra = devm_kzalloc(&priv->dev->intf->dev, sizeof(*hydra), GFP_KERNEL);
23435 ++ if (!hydra)
23436 ++ return -ENOMEM;
23437 ++
23438 ++ priv->sub_priv = hydra;
23439 ++
23440 ++ return 0;
23441 ++}
23442 ++
23443 + static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
23444 + {
23445 + struct kvaser_cmd cmd;
23446 +@@ -2027,10 +2124,13 @@ kvaser_usb_hydra_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
23447 + const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops = {
23448 + .dev_set_mode = kvaser_usb_hydra_set_mode,
23449 + .dev_set_bittiming = kvaser_usb_hydra_set_bittiming,
23450 ++ .dev_get_busparams = kvaser_usb_hydra_get_nominal_busparams,
23451 + .dev_set_data_bittiming = kvaser_usb_hydra_set_data_bittiming,
23452 ++ .dev_get_data_busparams = kvaser_usb_hydra_get_data_busparams,
23453 + .dev_get_berr_counter = kvaser_usb_hydra_get_berr_counter,
23454 + .dev_setup_endpoints = kvaser_usb_hydra_setup_endpoints,
23455 + .dev_init_card = kvaser_usb_hydra_init_card,
23456 ++ .dev_init_channel = kvaser_usb_hydra_init_channel,
23457 + .dev_get_software_info = kvaser_usb_hydra_get_software_info,
23458 + .dev_get_software_details = kvaser_usb_hydra_get_software_details,
23459 + .dev_get_card_info = kvaser_usb_hydra_get_card_info,
23460 +diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
23461 +index 19958037720f4..b423fd4c79890 100644
23462 +--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
23463 ++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
23464 +@@ -21,6 +21,7 @@
23465 + #include <linux/types.h>
23466 + #include <linux/units.h>
23467 + #include <linux/usb.h>
23468 ++#include <linux/workqueue.h>
23469 +
23470 + #include <linux/can.h>
23471 + #include <linux/can/dev.h>
23472 +@@ -56,6 +57,9 @@
23473 + #define CMD_RX_EXT_MESSAGE 14
23474 + #define CMD_TX_EXT_MESSAGE 15
23475 + #define CMD_SET_BUS_PARAMS 16
23476 ++#define CMD_GET_BUS_PARAMS 17
23477 ++#define CMD_GET_BUS_PARAMS_REPLY 18
23478 ++#define CMD_GET_CHIP_STATE 19
23479 + #define CMD_CHIP_STATE_EVENT 20
23480 + #define CMD_SET_CTRL_MODE 21
23481 + #define CMD_RESET_CHIP 24
23482 +@@ -70,10 +74,13 @@
23483 + #define CMD_GET_CARD_INFO_REPLY 35
23484 + #define CMD_GET_SOFTWARE_INFO 38
23485 + #define CMD_GET_SOFTWARE_INFO_REPLY 39
23486 ++#define CMD_ERROR_EVENT 45
23487 + #define CMD_FLUSH_QUEUE 48
23488 + #define CMD_TX_ACKNOWLEDGE 50
23489 + #define CMD_CAN_ERROR_EVENT 51
23490 + #define CMD_FLUSH_QUEUE_REPLY 68
23491 ++#define CMD_GET_CAPABILITIES_REQ 95
23492 ++#define CMD_GET_CAPABILITIES_RESP 96
23493 +
23494 + #define CMD_LEAF_LOG_MESSAGE 106
23495 +
23496 +@@ -83,6 +90,8 @@
23497 + #define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
23498 + #define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)
23499 +
23500 ++#define KVASER_USB_LEAF_SWOPTION_EXT_CAP BIT(12)
23501 ++
23502 + /* error factors */
23503 + #define M16C_EF_ACKE BIT(0)
23504 + #define M16C_EF_CRCE BIT(1)
23505 +@@ -157,11 +166,7 @@ struct usbcan_cmd_softinfo {
23506 + struct kvaser_cmd_busparams {
23507 + u8 tid;
23508 + u8 channel;
23509 +- __le32 bitrate;
23510 +- u8 tseg1;
23511 +- u8 tseg2;
23512 +- u8 sjw;
23513 +- u8 no_samp;
23514 ++ struct kvaser_usb_busparams busparams;
23515 + } __packed;
23516 +
23517 + struct kvaser_cmd_tx_can {
23518 +@@ -230,7 +235,7 @@ struct kvaser_cmd_tx_acknowledge_header {
23519 + u8 tid;
23520 + } __packed;
23521 +
23522 +-struct leaf_cmd_error_event {
23523 ++struct leaf_cmd_can_error_event {
23524 + u8 tid;
23525 + u8 flags;
23526 + __le16 time[3];
23527 +@@ -242,7 +247,7 @@ struct leaf_cmd_error_event {
23528 + u8 error_factor;
23529 + } __packed;
23530 +
23531 +-struct usbcan_cmd_error_event {
23532 ++struct usbcan_cmd_can_error_event {
23533 + u8 tid;
23534 + u8 padding;
23535 + u8 tx_errors_count_ch0;
23536 +@@ -254,6 +259,28 @@ struct usbcan_cmd_error_event {
23537 + __le16 time;
23538 + } __packed;
23539 +
23540 ++/* CMD_ERROR_EVENT error codes */
23541 ++#define KVASER_USB_LEAF_ERROR_EVENT_TX_QUEUE_FULL 0x8
23542 ++#define KVASER_USB_LEAF_ERROR_EVENT_PARAM 0x9
23543 ++
23544 ++struct leaf_cmd_error_event {
23545 ++ u8 tid;
23546 ++ u8 error_code;
23547 ++ __le16 timestamp[3];
23548 ++ __le16 padding;
23549 ++ __le16 info1;
23550 ++ __le16 info2;
23551 ++} __packed;
23552 ++
23553 ++struct usbcan_cmd_error_event {
23554 ++ u8 tid;
23555 ++ u8 error_code;
23556 ++ __le16 info1;
23557 ++ __le16 info2;
23558 ++ __le16 timestamp;
23559 ++ __le16 padding;
23560 ++} __packed;
23561 ++
23562 + struct kvaser_cmd_ctrl_mode {
23563 + u8 tid;
23564 + u8 channel;
23565 +@@ -278,6 +305,28 @@ struct leaf_cmd_log_message {
23566 + u8 data[8];
23567 + } __packed;
23568 +
23569 ++/* Sub commands for cap_req and cap_res */
23570 ++#define KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE 0x02
23571 ++#define KVASER_USB_LEAF_CAP_CMD_ERR_REPORT 0x05
23572 ++struct kvaser_cmd_cap_req {
23573 ++ __le16 padding0;
23574 ++ __le16 cap_cmd;
23575 ++ __le16 padding1;
23576 ++ __le16 channel;
23577 ++} __packed;
23578 ++
23579 ++/* Status codes for cap_res */
23580 ++#define KVASER_USB_LEAF_CAP_STAT_OK 0x00
23581 ++#define KVASER_USB_LEAF_CAP_STAT_NOT_IMPL 0x01
23582 ++#define KVASER_USB_LEAF_CAP_STAT_UNAVAIL 0x02
23583 ++struct kvaser_cmd_cap_res {
23584 ++ __le16 padding;
23585 ++ __le16 cap_cmd;
23586 ++ __le16 status;
23587 ++ __le32 mask;
23588 ++ __le32 value;
23589 ++} __packed;
23590 ++
23591 + struct kvaser_cmd {
23592 + u8 len;
23593 + u8 id;
23594 +@@ -293,14 +342,18 @@ struct kvaser_cmd {
23595 + struct leaf_cmd_softinfo softinfo;
23596 + struct leaf_cmd_rx_can rx_can;
23597 + struct leaf_cmd_chip_state_event chip_state_event;
23598 +- struct leaf_cmd_error_event error_event;
23599 ++ struct leaf_cmd_can_error_event can_error_event;
23600 + struct leaf_cmd_log_message log_message;
23601 ++ struct leaf_cmd_error_event error_event;
23602 ++ struct kvaser_cmd_cap_req cap_req;
23603 ++ struct kvaser_cmd_cap_res cap_res;
23604 + } __packed leaf;
23605 +
23606 + union {
23607 + struct usbcan_cmd_softinfo softinfo;
23608 + struct usbcan_cmd_rx_can rx_can;
23609 + struct usbcan_cmd_chip_state_event chip_state_event;
23610 ++ struct usbcan_cmd_can_error_event can_error_event;
23611 + struct usbcan_cmd_error_event error_event;
23612 + } __packed usbcan;
23613 +
23614 +@@ -323,7 +376,10 @@ static const u8 kvaser_usb_leaf_cmd_sizes_leaf[] = {
23615 + [CMD_RX_EXT_MESSAGE] = kvaser_fsize(u.leaf.rx_can),
23616 + [CMD_LEAF_LOG_MESSAGE] = kvaser_fsize(u.leaf.log_message),
23617 + [CMD_CHIP_STATE_EVENT] = kvaser_fsize(u.leaf.chip_state_event),
23618 +- [CMD_CAN_ERROR_EVENT] = kvaser_fsize(u.leaf.error_event),
23619 ++ [CMD_CAN_ERROR_EVENT] = kvaser_fsize(u.leaf.can_error_event),
23620 ++ [CMD_GET_CAPABILITIES_RESP] = kvaser_fsize(u.leaf.cap_res),
23621 ++ [CMD_GET_BUS_PARAMS_REPLY] = kvaser_fsize(u.busparams),
23622 ++ [CMD_ERROR_EVENT] = kvaser_fsize(u.leaf.error_event),
23623 + /* ignored events: */
23624 + [CMD_FLUSH_QUEUE_REPLY] = CMD_SIZE_ANY,
23625 + };
23626 +@@ -337,7 +393,8 @@ static const u8 kvaser_usb_leaf_cmd_sizes_usbcan[] = {
23627 + [CMD_RX_STD_MESSAGE] = kvaser_fsize(u.usbcan.rx_can),
23628 + [CMD_RX_EXT_MESSAGE] = kvaser_fsize(u.usbcan.rx_can),
23629 + [CMD_CHIP_STATE_EVENT] = kvaser_fsize(u.usbcan.chip_state_event),
23630 +- [CMD_CAN_ERROR_EVENT] = kvaser_fsize(u.usbcan.error_event),
23631 ++ [CMD_CAN_ERROR_EVENT] = kvaser_fsize(u.usbcan.can_error_event),
23632 ++ [CMD_ERROR_EVENT] = kvaser_fsize(u.usbcan.error_event),
23633 + /* ignored events: */
23634 + [CMD_USBCAN_CLOCK_OVERFLOW_EVENT] = CMD_SIZE_ANY,
23635 + };
23636 +@@ -365,6 +422,12 @@ struct kvaser_usb_err_summary {
23637 + };
23638 + };
23639 +
23640 ++struct kvaser_usb_net_leaf_priv {
23641 ++ struct kvaser_usb_net_priv *net;
23642 ++
23643 ++ struct delayed_work chip_state_req_work;
23644 ++};
23645 ++
23646 + static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = {
23647 + .name = "kvaser_usb_ucii",
23648 + .tseg1_min = 4,
23649 +@@ -606,6 +669,9 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
23650 + dev->fw_version = le32_to_cpu(softinfo->fw_version);
23651 + dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
23652 +
23653 ++ if (sw_options & KVASER_USB_LEAF_SWOPTION_EXT_CAP)
23654 ++ dev->card_data.capabilities |= KVASER_USB_CAP_EXT_CAP;
23655 ++
23656 + if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) {
23657 + /* Firmware expects bittiming parameters calculated for 16MHz
23658 + * clock, regardless of the actual clock
23659 +@@ -693,6 +759,116 @@ static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev)
23660 + return 0;
23661 + }
23662 +
23663 ++static int kvaser_usb_leaf_get_single_capability(struct kvaser_usb *dev,
23664 ++ u16 cap_cmd_req, u16 *status)
23665 ++{
23666 ++ struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
23667 ++ struct kvaser_cmd *cmd;
23668 ++ u32 value = 0;
23669 ++ u32 mask = 0;
23670 ++ u16 cap_cmd_res;
23671 ++ int err;
23672 ++ int i;
23673 ++
23674 ++ cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
23675 ++ if (!cmd)
23676 ++ return -ENOMEM;
23677 ++
23678 ++ cmd->id = CMD_GET_CAPABILITIES_REQ;
23679 ++ cmd->u.leaf.cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);
23680 ++ cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_cap_req);
23681 ++
23682 ++ err = kvaser_usb_send_cmd(dev, cmd, cmd->len);
23683 ++ if (err)
23684 ++ goto end;
23685 ++
23686 ++ err = kvaser_usb_leaf_wait_cmd(dev, CMD_GET_CAPABILITIES_RESP, cmd);
23687 ++ if (err)
23688 ++ goto end;
23689 ++
23690 ++ *status = le16_to_cpu(cmd->u.leaf.cap_res.status);
23691 ++
23692 ++ if (*status != KVASER_USB_LEAF_CAP_STAT_OK)
23693 ++ goto end;
23694 ++
23695 ++ cap_cmd_res = le16_to_cpu(cmd->u.leaf.cap_res.cap_cmd);
23696 ++ switch (cap_cmd_res) {
23697 ++ case KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE:
23698 ++ case KVASER_USB_LEAF_CAP_CMD_ERR_REPORT:
23699 ++ value = le32_to_cpu(cmd->u.leaf.cap_res.value);
23700 ++ mask = le32_to_cpu(cmd->u.leaf.cap_res.mask);
23701 ++ break;
23702 ++ default:
23703 ++ dev_warn(&dev->intf->dev, "Unknown capability command %u\n",
23704 ++ cap_cmd_res);
23705 ++ break;
23706 ++ }
23707 ++
23708 ++ for (i = 0; i < dev->nchannels; i++) {
23709 ++ if (BIT(i) & (value & mask)) {
23710 ++ switch (cap_cmd_res) {
23711 ++ case KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE:
23712 ++ card_data->ctrlmode_supported |=
23713 ++ CAN_CTRLMODE_LISTENONLY;
23714 ++ break;
23715 ++ case KVASER_USB_LEAF_CAP_CMD_ERR_REPORT:
23716 ++ card_data->capabilities |=
23717 ++ KVASER_USB_CAP_BERR_CAP;
23718 ++ break;
23719 ++ }
23720 ++ }
23721 ++ }
23722 ++
23723 ++end:
23724 ++ kfree(cmd);
23725 ++
23726 ++ return err;
23727 ++}
23728 ++
23729 ++static int kvaser_usb_leaf_get_capabilities_leaf(struct kvaser_usb *dev)
23730 ++{
23731 ++ int err;
23732 ++ u16 status;
23733 ++
23734 ++ if (!(dev->card_data.capabilities & KVASER_USB_CAP_EXT_CAP)) {
23735 ++ dev_info(&dev->intf->dev,
23736 ++ "No extended capability support. Upgrade device firmware.\n");
23737 ++ return 0;
23738 ++ }
23739 ++
23740 ++ err = kvaser_usb_leaf_get_single_capability(dev,
23741 ++ KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE,
23742 ++ &status);
23743 ++ if (err)
23744 ++ return err;
23745 ++ if (status)
23746 ++ dev_info(&dev->intf->dev,
23747 ++ "KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE failed %u\n",
23748 ++ status);
23749 ++
23750 ++ err = kvaser_usb_leaf_get_single_capability(dev,
23751 ++ KVASER_USB_LEAF_CAP_CMD_ERR_REPORT,
23752 ++ &status);
23753 ++ if (err)
23754 ++ return err;
23755 ++ if (status)
23756 ++ dev_info(&dev->intf->dev,
23757 ++ "KVASER_USB_LEAF_CAP_CMD_ERR_REPORT failed %u\n",
23758 ++ status);
23759 ++
23760 ++ return 0;
23761 ++}
23762 ++
23763 ++static int kvaser_usb_leaf_get_capabilities(struct kvaser_usb *dev)
23764 ++{
23765 ++ int err = 0;
23766 ++
23767 ++ if (dev->driver_info->family == KVASER_LEAF)
23768 ++ err = kvaser_usb_leaf_get_capabilities_leaf(dev);
23769 ++
23770 ++ return err;
23771 ++}
23772 ++
23773 + static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
23774 + const struct kvaser_cmd *cmd)
23775 + {
23776 +@@ -721,7 +897,7 @@ static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
23777 + context = &priv->tx_contexts[tid % dev->max_tx_urbs];
23778 +
23779 + /* Sometimes the state change doesn't come after a bus-off event */
23780 +- if (priv->can.restart_ms && priv->can.state >= CAN_STATE_BUS_OFF) {
23781 ++ if (priv->can.restart_ms && priv->can.state == CAN_STATE_BUS_OFF) {
23782 + struct sk_buff *skb;
23783 + struct can_frame *cf;
23784 +
23785 +@@ -774,6 +950,16 @@ static int kvaser_usb_leaf_simple_cmd_async(struct kvaser_usb_net_priv *priv,
23786 + return err;
23787 + }
23788 +
23789 ++static void kvaser_usb_leaf_chip_state_req_work(struct work_struct *work)
23790 ++{
23791 ++ struct kvaser_usb_net_leaf_priv *leaf =
23792 ++ container_of(work, struct kvaser_usb_net_leaf_priv,
23793 ++ chip_state_req_work.work);
23794 ++ struct kvaser_usb_net_priv *priv = leaf->net;
23795 ++
23796 ++ kvaser_usb_leaf_simple_cmd_async(priv, CMD_GET_CHIP_STATE);
23797 ++}
23798 ++
23799 + static void
23800 + kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
23801 + const struct kvaser_usb_err_summary *es,
23802 +@@ -792,20 +978,16 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
23803 + new_state = CAN_STATE_BUS_OFF;
23804 + } else if (es->status & M16C_STATE_BUS_PASSIVE) {
23805 + new_state = CAN_STATE_ERROR_PASSIVE;
23806 +- } else if (es->status & M16C_STATE_BUS_ERROR) {
23807 ++ } else if ((es->status & M16C_STATE_BUS_ERROR) &&
23808 ++ cur_state >= CAN_STATE_BUS_OFF) {
23809 + /* Guard against spurious error events after a busoff */
23810 +- if (cur_state < CAN_STATE_BUS_OFF) {
23811 +- if (es->txerr >= 128 || es->rxerr >= 128)
23812 +- new_state = CAN_STATE_ERROR_PASSIVE;
23813 +- else if (es->txerr >= 96 || es->rxerr >= 96)
23814 +- new_state = CAN_STATE_ERROR_WARNING;
23815 +- else if (cur_state > CAN_STATE_ERROR_ACTIVE)
23816 +- new_state = CAN_STATE_ERROR_ACTIVE;
23817 +- }
23818 +- }
23819 +-
23820 +- if (!es->status)
23821 ++ } else if (es->txerr >= 128 || es->rxerr >= 128) {
23822 ++ new_state = CAN_STATE_ERROR_PASSIVE;
23823 ++ } else if (es->txerr >= 96 || es->rxerr >= 96) {
23824 ++ new_state = CAN_STATE_ERROR_WARNING;
23825 ++ } else {
23826 + new_state = CAN_STATE_ERROR_ACTIVE;
23827 ++ }
23828 +
23829 + if (new_state != cur_state) {
23830 + tx_state = (es->txerr >= es->rxerr) ? new_state : 0;
23831 +@@ -815,7 +997,7 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
23832 + }
23833 +
23834 + if (priv->can.restart_ms &&
23835 +- cur_state >= CAN_STATE_BUS_OFF &&
23836 ++ cur_state == CAN_STATE_BUS_OFF &&
23837 + new_state < CAN_STATE_BUS_OFF)
23838 + priv->can.can_stats.restarts++;
23839 +
23840 +@@ -849,6 +1031,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
23841 + struct sk_buff *skb;
23842 + struct net_device_stats *stats;
23843 + struct kvaser_usb_net_priv *priv;
23844 ++ struct kvaser_usb_net_leaf_priv *leaf;
23845 + enum can_state old_state, new_state;
23846 +
23847 + if (es->channel >= dev->nchannels) {
23848 +@@ -858,8 +1041,13 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
23849 + }
23850 +
23851 + priv = dev->nets[es->channel];
23852 ++ leaf = priv->sub_priv;
23853 + stats = &priv->netdev->stats;
23854 +
23855 ++ /* Ignore e.g. state change to bus-off reported just after stopping */
23856 ++ if (!netif_running(priv->netdev))
23857 ++ return;
23858 ++
23859 + /* Update all of the CAN interface's state and error counters before
23860 + * trying any memory allocation that can actually fail with -ENOMEM.
23861 + *
23862 +@@ -874,6 +1062,14 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
23863 + kvaser_usb_leaf_rx_error_update_can_state(priv, es, &tmp_cf);
23864 + new_state = priv->can.state;
23865 +
23866 ++ /* If there are errors, request status updates periodically as we do
23867 ++ * not get automatic notifications of improved state.
23868 ++ */
23869 ++ if (new_state < CAN_STATE_BUS_OFF &&
23870 ++ (es->rxerr || es->txerr || new_state == CAN_STATE_ERROR_PASSIVE))
23871 ++ schedule_delayed_work(&leaf->chip_state_req_work,
23872 ++ msecs_to_jiffies(500));
23873 ++
23874 + skb = alloc_can_err_skb(priv->netdev, &cf);
23875 + if (!skb) {
23876 + stats->rx_dropped++;
23877 +@@ -891,7 +1087,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
23878 + }
23879 +
23880 + if (priv->can.restart_ms &&
23881 +- old_state >= CAN_STATE_BUS_OFF &&
23882 ++ old_state == CAN_STATE_BUS_OFF &&
23883 + new_state < CAN_STATE_BUS_OFF) {
23884 + cf->can_id |= CAN_ERR_RESTARTED;
23885 + netif_carrier_on(priv->netdev);
23886 +@@ -990,11 +1186,11 @@ static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
23887 +
23888 + case CMD_CAN_ERROR_EVENT:
23889 + es.channel = 0;
23890 +- es.status = cmd->u.usbcan.error_event.status_ch0;
23891 +- es.txerr = cmd->u.usbcan.error_event.tx_errors_count_ch0;
23892 +- es.rxerr = cmd->u.usbcan.error_event.rx_errors_count_ch0;
23893 ++ es.status = cmd->u.usbcan.can_error_event.status_ch0;
23894 ++ es.txerr = cmd->u.usbcan.can_error_event.tx_errors_count_ch0;
23895 ++ es.rxerr = cmd->u.usbcan.can_error_event.rx_errors_count_ch0;
23896 + es.usbcan.other_ch_status =
23897 +- cmd->u.usbcan.error_event.status_ch1;
23898 ++ cmd->u.usbcan.can_error_event.status_ch1;
23899 + kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
23900 +
23901 + /* The USBCAN firmware supports up to 2 channels.
23902 +@@ -1002,13 +1198,13 @@ static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
23903 + */
23904 + if (dev->nchannels == MAX_USBCAN_NET_DEVICES) {
23905 + es.channel = 1;
23906 +- es.status = cmd->u.usbcan.error_event.status_ch1;
23907 ++ es.status = cmd->u.usbcan.can_error_event.status_ch1;
23908 + es.txerr =
23909 +- cmd->u.usbcan.error_event.tx_errors_count_ch1;
23910 ++ cmd->u.usbcan.can_error_event.tx_errors_count_ch1;
23911 + es.rxerr =
23912 +- cmd->u.usbcan.error_event.rx_errors_count_ch1;
23913 ++ cmd->u.usbcan.can_error_event.rx_errors_count_ch1;
23914 + es.usbcan.other_ch_status =
23915 +- cmd->u.usbcan.error_event.status_ch0;
23916 ++ cmd->u.usbcan.can_error_event.status_ch0;
23917 + kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
23918 + }
23919 + break;
23920 +@@ -1025,11 +1221,11 @@ static void kvaser_usb_leaf_leaf_rx_error(const struct kvaser_usb *dev,
23921 +
23922 + switch (cmd->id) {
23923 + case CMD_CAN_ERROR_EVENT:
23924 +- es.channel = cmd->u.leaf.error_event.channel;
23925 +- es.status = cmd->u.leaf.error_event.status;
23926 +- es.txerr = cmd->u.leaf.error_event.tx_errors_count;
23927 +- es.rxerr = cmd->u.leaf.error_event.rx_errors_count;
23928 +- es.leaf.error_factor = cmd->u.leaf.error_event.error_factor;
23929 ++ es.channel = cmd->u.leaf.can_error_event.channel;
23930 ++ es.status = cmd->u.leaf.can_error_event.status;
23931 ++ es.txerr = cmd->u.leaf.can_error_event.tx_errors_count;
23932 ++ es.rxerr = cmd->u.leaf.can_error_event.rx_errors_count;
23933 ++ es.leaf.error_factor = cmd->u.leaf.can_error_event.error_factor;
23934 + break;
23935 + case CMD_LEAF_LOG_MESSAGE:
23936 + es.channel = cmd->u.leaf.log_message.channel;
23937 +@@ -1162,6 +1358,74 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
23938 + netif_rx(skb);
23939 + }
23940 +
23941 ++static void kvaser_usb_leaf_error_event_parameter(const struct kvaser_usb *dev,
23942 ++ const struct kvaser_cmd *cmd)
23943 ++{
23944 ++ u16 info1 = 0;
23945 ++
23946 ++ switch (dev->driver_info->family) {
23947 ++ case KVASER_LEAF:
23948 ++ info1 = le16_to_cpu(cmd->u.leaf.error_event.info1);
23949 ++ break;
23950 ++ case KVASER_USBCAN:
23951 ++ info1 = le16_to_cpu(cmd->u.usbcan.error_event.info1);
23952 ++ break;
23953 ++ }
23954 ++
23955 ++ /* info1 will contain the offending cmd_no */
23956 ++ switch (info1) {
23957 ++ case CMD_SET_CTRL_MODE:
23958 ++ dev_warn(&dev->intf->dev,
23959 ++ "CMD_SET_CTRL_MODE error in parameter\n");
23960 ++ break;
23961 ++
23962 ++ case CMD_SET_BUS_PARAMS:
23963 ++ dev_warn(&dev->intf->dev,
23964 ++ "CMD_SET_BUS_PARAMS error in parameter\n");
23965 ++ break;
23966 ++
23967 ++ default:
23968 ++ dev_warn(&dev->intf->dev,
23969 ++ "Unhandled parameter error event cmd_no (%u)\n",
23970 ++ info1);
23971 ++ break;
23972 ++ }
23973 ++}
23974 ++
23975 ++static void kvaser_usb_leaf_error_event(const struct kvaser_usb *dev,
23976 ++ const struct kvaser_cmd *cmd)
23977 ++{
23978 ++ u8 error_code = 0;
23979 ++
23980 ++ switch (dev->driver_info->family) {
23981 ++ case KVASER_LEAF:
23982 ++ error_code = cmd->u.leaf.error_event.error_code;
23983 ++ break;
23984 ++ case KVASER_USBCAN:
23985 ++ error_code = cmd->u.usbcan.error_event.error_code;
23986 ++ break;
23987 ++ }
23988 ++
23989 ++ switch (error_code) {
23990 ++ case KVASER_USB_LEAF_ERROR_EVENT_TX_QUEUE_FULL:
23991 ++ /* Received additional CAN message, when firmware TX queue is
23992 ++ * already full. Something is wrong with the driver.
23993 ++ * This should never happen!
23994 ++ */
23995 ++ dev_err(&dev->intf->dev,
23996 ++ "Received error event TX_QUEUE_FULL\n");
23997 ++ break;
23998 ++ case KVASER_USB_LEAF_ERROR_EVENT_PARAM:
23999 ++ kvaser_usb_leaf_error_event_parameter(dev, cmd);
24000 ++ break;
24001 ++
24002 ++ default:
24003 ++ dev_warn(&dev->intf->dev,
24004 ++ "Unhandled error event (%d)\n", error_code);
24005 ++ break;
24006 ++ }
24007 ++}
24008 ++
24009 + static void kvaser_usb_leaf_start_chip_reply(const struct kvaser_usb *dev,
24010 + const struct kvaser_cmd *cmd)
24011 + {
24012 +@@ -1202,6 +1466,25 @@ static void kvaser_usb_leaf_stop_chip_reply(const struct kvaser_usb *dev,
24013 + complete(&priv->stop_comp);
24014 + }
24015 +
24016 ++static void kvaser_usb_leaf_get_busparams_reply(const struct kvaser_usb *dev,
24017 ++ const struct kvaser_cmd *cmd)
24018 ++{
24019 ++ struct kvaser_usb_net_priv *priv;
24020 ++ u8 channel = cmd->u.busparams.channel;
24021 ++
24022 ++ if (channel >= dev->nchannels) {
24023 ++ dev_err(&dev->intf->dev,
24024 ++ "Invalid channel number (%d)\n", channel);
24025 ++ return;
24026 ++ }
24027 ++
24028 ++ priv = dev->nets[channel];
24029 ++ memcpy(&priv->busparams_nominal, &cmd->u.busparams.busparams,
24030 ++ sizeof(priv->busparams_nominal));
24031 ++
24032 ++ complete(&priv->get_busparams_comp);
24033 ++}
24034 ++
24035 + static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
24036 + const struct kvaser_cmd *cmd)
24037 + {
24038 +@@ -1240,6 +1523,14 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
24039 + kvaser_usb_leaf_tx_acknowledge(dev, cmd);
24040 + break;
24041 +
24042 ++ case CMD_ERROR_EVENT:
24043 ++ kvaser_usb_leaf_error_event(dev, cmd);
24044 ++ break;
24045 ++
24046 ++ case CMD_GET_BUS_PARAMS_REPLY:
24047 ++ kvaser_usb_leaf_get_busparams_reply(dev, cmd);
24048 ++ break;
24049 ++
24050 + /* Ignored commands */
24051 + case CMD_USBCAN_CLOCK_OVERFLOW_EVENT:
24052 + if (dev->driver_info->family != KVASER_USBCAN)
24053 +@@ -1336,10 +1627,13 @@ static int kvaser_usb_leaf_start_chip(struct kvaser_usb_net_priv *priv)
24054 +
24055 + static int kvaser_usb_leaf_stop_chip(struct kvaser_usb_net_priv *priv)
24056 + {
24057 ++ struct kvaser_usb_net_leaf_priv *leaf = priv->sub_priv;
24058 + int err;
24059 +
24060 + reinit_completion(&priv->stop_comp);
24061 +
24062 ++ cancel_delayed_work(&leaf->chip_state_req_work);
24063 ++
24064 + err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
24065 + priv->channel);
24066 + if (err)
24067 +@@ -1386,10 +1680,35 @@ static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
24068 + return 0;
24069 + }
24070 +
24071 +-static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
24072 ++static int kvaser_usb_leaf_init_channel(struct kvaser_usb_net_priv *priv)
24073 ++{
24074 ++ struct kvaser_usb_net_leaf_priv *leaf;
24075 ++
24076 ++ leaf = devm_kzalloc(&priv->dev->intf->dev, sizeof(*leaf), GFP_KERNEL);
24077 ++ if (!leaf)
24078 ++ return -ENOMEM;
24079 ++
24080 ++ leaf->net = priv;
24081 ++ INIT_DELAYED_WORK(&leaf->chip_state_req_work,
24082 ++ kvaser_usb_leaf_chip_state_req_work);
24083 ++
24084 ++ priv->sub_priv = leaf;
24085 ++
24086 ++ return 0;
24087 ++}
24088 ++
24089 ++static void kvaser_usb_leaf_remove_channel(struct kvaser_usb_net_priv *priv)
24090 ++{
24091 ++ struct kvaser_usb_net_leaf_priv *leaf = priv->sub_priv;
24092 ++
24093 ++ if (leaf)
24094 ++ cancel_delayed_work_sync(&leaf->chip_state_req_work);
24095 ++}
24096 ++
24097 ++static int kvaser_usb_leaf_set_bittiming(const struct net_device *netdev,
24098 ++ const struct kvaser_usb_busparams *busparams)
24099 + {
24100 + struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
24101 +- struct can_bittiming *bt = &priv->can.bittiming;
24102 + struct kvaser_usb *dev = priv->dev;
24103 + struct kvaser_cmd *cmd;
24104 + int rc;
24105 +@@ -1402,15 +1721,8 @@ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
24106 + cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_busparams);
24107 + cmd->u.busparams.channel = priv->channel;
24108 + cmd->u.busparams.tid = 0xff;
24109 +- cmd->u.busparams.bitrate = cpu_to_le32(bt->bitrate);
24110 +- cmd->u.busparams.sjw = bt->sjw;
24111 +- cmd->u.busparams.tseg1 = bt->prop_seg + bt->phase_seg1;
24112 +- cmd->u.busparams.tseg2 = bt->phase_seg2;
24113 +-
24114 +- if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
24115 +- cmd->u.busparams.no_samp = 3;
24116 +- else
24117 +- cmd->u.busparams.no_samp = 1;
24118 ++ memcpy(&cmd->u.busparams.busparams, busparams,
24119 ++ sizeof(cmd->u.busparams.busparams));
24120 +
24121 + rc = kvaser_usb_send_cmd(dev, cmd, cmd->len);
24122 +
24123 +@@ -1418,6 +1730,27 @@ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
24124 + return rc;
24125 + }
24126 +
24127 ++static int kvaser_usb_leaf_get_busparams(struct kvaser_usb_net_priv *priv)
24128 ++{
24129 ++ int err;
24130 ++
24131 ++ if (priv->dev->driver_info->family == KVASER_USBCAN)
24132 ++ return -EOPNOTSUPP;
24133 ++
24134 ++ reinit_completion(&priv->get_busparams_comp);
24135 ++
24136 ++ err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_GET_BUS_PARAMS,
24137 ++ priv->channel);
24138 ++ if (err)
24139 ++ return err;
24140 ++
24141 ++ if (!wait_for_completion_timeout(&priv->get_busparams_comp,
24142 ++ msecs_to_jiffies(KVASER_USB_TIMEOUT)))
24143 ++ return -ETIMEDOUT;
24144 ++
24145 ++ return 0;
24146 ++}
24147 ++
24148 + static int kvaser_usb_leaf_set_mode(struct net_device *netdev,
24149 + enum can_mode mode)
24150 + {
24151 +@@ -1479,14 +1812,18 @@ static int kvaser_usb_leaf_setup_endpoints(struct kvaser_usb *dev)
24152 + const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
24153 + .dev_set_mode = kvaser_usb_leaf_set_mode,
24154 + .dev_set_bittiming = kvaser_usb_leaf_set_bittiming,
24155 ++ .dev_get_busparams = kvaser_usb_leaf_get_busparams,
24156 + .dev_set_data_bittiming = NULL,
24157 ++ .dev_get_data_busparams = NULL,
24158 + .dev_get_berr_counter = kvaser_usb_leaf_get_berr_counter,
24159 + .dev_setup_endpoints = kvaser_usb_leaf_setup_endpoints,
24160 + .dev_init_card = kvaser_usb_leaf_init_card,
24161 ++ .dev_init_channel = kvaser_usb_leaf_init_channel,
24162 ++ .dev_remove_channel = kvaser_usb_leaf_remove_channel,
24163 + .dev_get_software_info = kvaser_usb_leaf_get_software_info,
24164 + .dev_get_software_details = NULL,
24165 + .dev_get_card_info = kvaser_usb_leaf_get_card_info,
24166 +- .dev_get_capabilities = NULL,
24167 ++ .dev_get_capabilities = kvaser_usb_leaf_get_capabilities,
24168 + .dev_set_opt_mode = kvaser_usb_leaf_set_opt_mode,
24169 + .dev_start_chip = kvaser_usb_leaf_start_chip,
24170 + .dev_stop_chip = kvaser_usb_leaf_stop_chip,
24171 +diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
24172 +index 80f07bd205934..2e270b4791432 100644
24173 +--- a/drivers/net/dsa/lan9303-core.c
24174 ++++ b/drivers/net/dsa/lan9303-core.c
24175 +@@ -1005,9 +1005,11 @@ static void lan9303_get_ethtool_stats(struct dsa_switch *ds, int port,
24176 + ret = lan9303_read_switch_port(
24177 + chip, port, lan9303_mib[u].offset, &reg);
24178 +
24179 +- if (ret)
24180 ++ if (ret) {
24181 + dev_warn(chip->dev, "Reading status port %d reg %u failed\n",
24182 + port, lan9303_mib[u].offset);
24183 ++ reg = 0;
24184 ++ }
24185 + data[u] = reg;
24186 + }
24187 + }
24188 +diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
24189 +index d612181b3226e..c68f48cd1ec08 100644
24190 +--- a/drivers/net/dsa/microchip/ksz_common.c
24191 ++++ b/drivers/net/dsa/microchip/ksz_common.c
24192 +@@ -1883,8 +1883,7 @@ static int ksz_irq_common_setup(struct ksz_device *dev, struct ksz_irq *kirq)
24193 + irq_create_mapping(kirq->domain, n);
24194 +
24195 + ret = request_threaded_irq(kirq->irq_num, NULL, ksz_irq_thread_fn,
24196 +- IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
24197 +- kirq->name, kirq);
24198 ++ IRQF_ONESHOT, kirq->name, kirq);
24199 + if (ret)
24200 + goto out;
24201 +
24202 +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
24203 +index 937cb22cb3d48..3b8b2d0fbafaf 100644
24204 +--- a/drivers/net/dsa/mv88e6xxx/chip.c
24205 ++++ b/drivers/net/dsa/mv88e6xxx/chip.c
24206 +@@ -689,13 +689,12 @@ static void mv88e6352_phylink_get_caps(struct mv88e6xxx_chip *chip, int port,
24207 +
24208 + /* Port 4 supports automedia if the serdes is associated with it. */
24209 + if (port == 4) {
24210 +- mv88e6xxx_reg_lock(chip);
24211 + err = mv88e6352_g2_scratch_port_has_serdes(chip, port);
24212 + if (err < 0)
24213 + dev_err(chip->dev, "p%d: failed to read scratch\n",
24214 + port);
24215 + if (err <= 0)
24216 +- goto unlock;
24217 ++ return;
24218 +
24219 + cmode = mv88e6352_get_port4_serdes_cmode(chip);
24220 + if (cmode < 0)
24221 +@@ -703,8 +702,6 @@ static void mv88e6352_phylink_get_caps(struct mv88e6xxx_chip *chip, int port,
24222 + port);
24223 + else
24224 + mv88e6xxx_translate_cmode(cmode, supported);
24225 +-unlock:
24226 +- mv88e6xxx_reg_unlock(chip);
24227 + }
24228 + }
24229 +
24230 +@@ -831,7 +828,9 @@ static void mv88e6xxx_get_caps(struct dsa_switch *ds, int port,
24231 + {
24232 + struct mv88e6xxx_chip *chip = ds->priv;
24233 +
24234 ++ mv88e6xxx_reg_lock(chip);
24235 + chip->info->ops->phylink_get_caps(chip, port, config);
24236 ++ mv88e6xxx_reg_unlock(chip);
24237 +
24238 + if (mv88e6xxx_phy_is_internal(ds, port)) {
24239 + __set_bit(PHY_INTERFACE_MODE_INTERNAL,
24240 +@@ -3307,7 +3306,7 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
24241 + struct phylink_config pl_config = {};
24242 + unsigned long caps;
24243 +
24244 +- mv88e6xxx_get_caps(ds, port, &pl_config);
24245 ++ chip->info->ops->phylink_get_caps(chip, port, &pl_config);
24246 +
24247 + caps = pl_config.mac_capabilities;
24248 +
24249 +diff --git a/drivers/net/ethernet/adi/adin1110.c b/drivers/net/ethernet/adi/adin1110.c
24250 +index 606c976108085..9d8dfe1729948 100644
24251 +--- a/drivers/net/ethernet/adi/adin1110.c
24252 ++++ b/drivers/net/ethernet/adi/adin1110.c
24253 +@@ -196,7 +196,7 @@ static int adin1110_read_reg(struct adin1110_priv *priv, u16 reg, u32 *val)
24254 + {
24255 + u32 header_len = ADIN1110_RD_HEADER_LEN;
24256 + u32 read_len = ADIN1110_REG_LEN;
24257 +- struct spi_transfer t[2] = {0};
24258 ++ struct spi_transfer t = {0};
24259 + int ret;
24260 +
24261 + priv->data[0] = ADIN1110_CD | FIELD_GET(GENMASK(12, 8), reg);
24262 +@@ -209,17 +209,15 @@ static int adin1110_read_reg(struct adin1110_priv *priv, u16 reg, u32 *val)
24263 + header_len++;
24264 + }
24265 +
24266 +- t[0].tx_buf = &priv->data[0];
24267 +- t[0].len = header_len;
24268 +-
24269 + if (priv->append_crc)
24270 + read_len++;
24271 +
24272 + memset(&priv->data[header_len], 0, read_len);
24273 +- t[1].rx_buf = &priv->data[header_len];
24274 +- t[1].len = read_len;
24275 ++ t.tx_buf = &priv->data[0];
24276 ++ t.rx_buf = &priv->data[0];
24277 ++ t.len = read_len + header_len;
24278 +
24279 +- ret = spi_sync_transfer(priv->spidev, t, 2);
24280 ++ ret = spi_sync_transfer(priv->spidev, &t, 1);
24281 + if (ret)
24282 + return ret;
24283 +
24284 +@@ -296,7 +294,7 @@ static int adin1110_read_fifo(struct adin1110_port_priv *port_priv)
24285 + {
24286 + struct adin1110_priv *priv = port_priv->priv;
24287 + u32 header_len = ADIN1110_RD_HEADER_LEN;
24288 +- struct spi_transfer t[2] = {0};
24289 ++ struct spi_transfer t;
24290 + u32 frame_size_no_fcs;
24291 + struct sk_buff *rxb;
24292 + u32 frame_size;
24293 +@@ -327,12 +325,7 @@ static int adin1110_read_fifo(struct adin1110_port_priv *port_priv)
24294 + return ret;
24295 +
24296 + frame_size_no_fcs = frame_size - ADIN1110_FRAME_HEADER_LEN - ADIN1110_FEC_LEN;
24297 +-
24298 +- rxb = netdev_alloc_skb(port_priv->netdev, round_len);
24299 +- if (!rxb)
24300 +- return -ENOMEM;
24301 +-
24302 +- memset(priv->data, 0, round_len + ADIN1110_RD_HEADER_LEN);
24303 ++ memset(priv->data, 0, ADIN1110_RD_HEADER_LEN);
24304 +
24305 + priv->data[0] = ADIN1110_CD | FIELD_GET(GENMASK(12, 8), reg);
24306 + priv->data[1] = FIELD_GET(GENMASK(7, 0), reg);
24307 +@@ -342,21 +335,23 @@ static int adin1110_read_fifo(struct adin1110_port_priv *port_priv)
24308 + header_len++;
24309 + }
24310 +
24311 +- skb_put(rxb, frame_size_no_fcs + ADIN1110_FRAME_HEADER_LEN);
24312 ++ rxb = netdev_alloc_skb(port_priv->netdev, round_len + header_len);
24313 ++ if (!rxb)
24314 ++ return -ENOMEM;
24315 +
24316 +- t[0].tx_buf = &priv->data[0];
24317 +- t[0].len = header_len;
24318 ++ skb_put(rxb, frame_size_no_fcs + header_len + ADIN1110_FRAME_HEADER_LEN);
24319 +
24320 +- t[1].rx_buf = &rxb->data[0];
24321 +- t[1].len = round_len;
24322 ++ t.tx_buf = &priv->data[0];
24323 ++ t.rx_buf = &rxb->data[0];
24324 ++ t.len = header_len + round_len;
24325 +
24326 +- ret = spi_sync_transfer(priv->spidev, t, 2);
24327 ++ ret = spi_sync_transfer(priv->spidev, &t, 1);
24328 + if (ret) {
24329 + kfree_skb(rxb);
24330 + return ret;
24331 + }
24332 +
24333 +- skb_pull(rxb, ADIN1110_FRAME_HEADER_LEN);
24334 ++ skb_pull(rxb, header_len + ADIN1110_FRAME_HEADER_LEN);
24335 + rxb->protocol = eth_type_trans(rxb, port_priv->netdev);
24336 +
24337 + if ((port_priv->flags & IFF_ALLMULTI && rxb->pkt_type == PACKET_MULTICAST) ||
24338 +diff --git a/drivers/net/ethernet/amd/atarilance.c b/drivers/net/ethernet/amd/atarilance.c
24339 +index 3222c48ce6ae4..ec704222925d8 100644
24340 +--- a/drivers/net/ethernet/amd/atarilance.c
24341 ++++ b/drivers/net/ethernet/amd/atarilance.c
24342 +@@ -824,7 +824,7 @@ lance_start_xmit(struct sk_buff *skb, struct net_device *dev)
24343 + lp->memcpy_f( PKTBUF_ADDR(head), (void *)skb->data, skb->len );
24344 + head->flag = TMD1_OWN_CHIP | TMD1_ENP | TMD1_STP;
24345 + dev->stats.tx_bytes += skb->len;
24346 +- dev_kfree_skb( skb );
24347 ++ dev_consume_skb_irq(skb);
24348 + lp->cur_tx++;
24349 + while( lp->cur_tx >= TX_RING_SIZE && lp->dirty_tx >= TX_RING_SIZE ) {
24350 + lp->cur_tx -= TX_RING_SIZE;
24351 +diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c
24352 +index fb8686214a327..8971665a4b2ac 100644
24353 +--- a/drivers/net/ethernet/amd/lance.c
24354 ++++ b/drivers/net/ethernet/amd/lance.c
24355 +@@ -1001,7 +1001,7 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
24356 + skb_copy_from_linear_data(skb, &lp->tx_bounce_buffs[entry], skb->len);
24357 + lp->tx_ring[entry].base =
24358 + ((u32)isa_virt_to_bus((lp->tx_bounce_buffs + entry)) & 0xffffff) | 0x83000000;
24359 +- dev_kfree_skb(skb);
24360 ++ dev_consume_skb_irq(skb);
24361 + } else {
24362 + lp->tx_skbuff[entry] = skb;
24363 + lp->tx_ring[entry].base = ((u32)isa_virt_to_bus(skb->data) & 0xffffff) | 0x83000000;
24364 +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
24365 +index 4064c3e3dd492..c731a04731f83 100644
24366 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
24367 ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
24368 +@@ -189,6 +189,7 @@ enum xgbe_sfp_cable {
24369 + XGBE_SFP_CABLE_UNKNOWN = 0,
24370 + XGBE_SFP_CABLE_ACTIVE,
24371 + XGBE_SFP_CABLE_PASSIVE,
24372 ++ XGBE_SFP_CABLE_FIBER,
24373 + };
24374 +
24375 + enum xgbe_sfp_base {
24376 +@@ -236,10 +237,7 @@ enum xgbe_sfp_speed {
24377 +
24378 + #define XGBE_SFP_BASE_BR 12
24379 + #define XGBE_SFP_BASE_BR_1GBE_MIN 0x0a
24380 +-#define XGBE_SFP_BASE_BR_1GBE_MAX 0x0d
24381 + #define XGBE_SFP_BASE_BR_10GBE_MIN 0x64
24382 +-#define XGBE_SFP_BASE_BR_10GBE_MAX 0x68
24383 +-#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX 0x78
24384 +
24385 + #define XGBE_SFP_BASE_CU_CABLE_LEN 18
24386 +
24387 +@@ -826,29 +824,22 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
24388 + static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
24389 + enum xgbe_sfp_speed sfp_speed)
24390 + {
24391 +- u8 *sfp_base, min, max;
24392 ++ u8 *sfp_base, min;
24393 +
24394 + sfp_base = sfp_eeprom->base;
24395 +
24396 + switch (sfp_speed) {
24397 + case XGBE_SFP_SPEED_1000:
24398 + min = XGBE_SFP_BASE_BR_1GBE_MIN;
24399 +- max = XGBE_SFP_BASE_BR_1GBE_MAX;
24400 + break;
24401 + case XGBE_SFP_SPEED_10000:
24402 + min = XGBE_SFP_BASE_BR_10GBE_MIN;
24403 +- if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
24404 +- XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
24405 +- max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
24406 +- else
24407 +- max = XGBE_SFP_BASE_BR_10GBE_MAX;
24408 + break;
24409 + default:
24410 + return false;
24411 + }
24412 +
24413 +- return ((sfp_base[XGBE_SFP_BASE_BR] >= min) &&
24414 +- (sfp_base[XGBE_SFP_BASE_BR] <= max));
24415 ++ return sfp_base[XGBE_SFP_BASE_BR] >= min;
24416 + }
24417 +
24418 + static void xgbe_phy_free_phy_device(struct xgbe_prv_data *pdata)
24419 +@@ -1149,16 +1140,18 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
24420 + phy_data->sfp_tx_fault = xgbe_phy_check_sfp_tx_fault(phy_data);
24421 + phy_data->sfp_rx_los = xgbe_phy_check_sfp_rx_los(phy_data);
24422 +
24423 +- /* Assume ACTIVE cable unless told it is PASSIVE */
24424 ++ /* Assume FIBER cable unless told otherwise */
24425 + if (sfp_base[XGBE_SFP_BASE_CABLE] & XGBE_SFP_BASE_CABLE_PASSIVE) {
24426 + phy_data->sfp_cable = XGBE_SFP_CABLE_PASSIVE;
24427 + phy_data->sfp_cable_len = sfp_base[XGBE_SFP_BASE_CU_CABLE_LEN];
24428 +- } else {
24429 ++ } else if (sfp_base[XGBE_SFP_BASE_CABLE] & XGBE_SFP_BASE_CABLE_ACTIVE) {
24430 + phy_data->sfp_cable = XGBE_SFP_CABLE_ACTIVE;
24431 ++ } else {
24432 ++ phy_data->sfp_cable = XGBE_SFP_CABLE_FIBER;
24433 + }
24434 +
24435 + /* Determine the type of SFP */
24436 +- if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
24437 ++ if (phy_data->sfp_cable != XGBE_SFP_CABLE_FIBER &&
24438 + xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
24439 + phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
24440 + else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
24441 +diff --git a/drivers/net/ethernet/apple/bmac.c b/drivers/net/ethernet/apple/bmac.c
24442 +index 334de0d93c899..9e653e2925f78 100644
24443 +--- a/drivers/net/ethernet/apple/bmac.c
24444 ++++ b/drivers/net/ethernet/apple/bmac.c
24445 +@@ -1510,7 +1510,7 @@ static void bmac_tx_timeout(struct timer_list *t)
24446 + i = bp->tx_empty;
24447 + ++dev->stats.tx_errors;
24448 + if (i != bp->tx_fill) {
24449 +- dev_kfree_skb(bp->tx_bufs[i]);
24450 ++ dev_kfree_skb_irq(bp->tx_bufs[i]);
24451 + bp->tx_bufs[i] = NULL;
24452 + if (++i >= N_TX_RING) i = 0;
24453 + bp->tx_empty = i;
24454 +diff --git a/drivers/net/ethernet/apple/mace.c b/drivers/net/ethernet/apple/mace.c
24455 +index d0a771b65e888..fd1b008b7208c 100644
24456 +--- a/drivers/net/ethernet/apple/mace.c
24457 ++++ b/drivers/net/ethernet/apple/mace.c
24458 +@@ -846,7 +846,7 @@ static void mace_tx_timeout(struct timer_list *t)
24459 + if (mp->tx_bad_runt) {
24460 + mp->tx_bad_runt = 0;
24461 + } else if (i != mp->tx_fill) {
24462 +- dev_kfree_skb(mp->tx_bufs[i]);
24463 ++ dev_kfree_skb_irq(mp->tx_bufs[i]);
24464 + if (++i >= N_TX_RING)
24465 + i = 0;
24466 + mp->tx_empty = i;
24467 +diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
24468 +index fec57f1982c86..dbe3101447804 100644
24469 +--- a/drivers/net/ethernet/broadcom/bnx2.c
24470 ++++ b/drivers/net/ethernet/broadcom/bnx2.c
24471 +@@ -5415,8 +5415,9 @@ bnx2_set_rx_ring_size(struct bnx2 *bp, u32 size)
24472 +
24473 + bp->rx_buf_use_size = rx_size;
24474 + /* hw alignment + build_skb() overhead*/
24475 +- bp->rx_buf_size = SKB_DATA_ALIGN(bp->rx_buf_use_size + BNX2_RX_ALIGN) +
24476 +- NET_SKB_PAD + SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
24477 ++ bp->rx_buf_size = kmalloc_size_roundup(
24478 ++ SKB_DATA_ALIGN(bp->rx_buf_use_size + BNX2_RX_ALIGN) +
24479 ++ NET_SKB_PAD + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
24480 + bp->rx_jumbo_thresh = rx_size - BNX2_RX_OFFSET;
24481 + bp->rx_ring_size = size;
24482 + bp->rx_max_ring = bnx2_find_max_ring(size, BNX2_MAX_RX_RINGS);
24483 +diff --git a/drivers/net/ethernet/dnet.c b/drivers/net/ethernet/dnet.c
24484 +index 08184f20f5104..151ca9573be97 100644
24485 +--- a/drivers/net/ethernet/dnet.c
24486 ++++ b/drivers/net/ethernet/dnet.c
24487 +@@ -550,11 +550,11 @@ static netdev_tx_t dnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
24488 +
24489 + skb_tx_timestamp(skb);
24490 +
24491 ++ spin_unlock_irqrestore(&bp->lock, flags);
24492 ++
24493 + /* free the buffer */
24494 + dev_kfree_skb(skb);
24495 +
24496 +- spin_unlock_irqrestore(&bp->lock, flags);
24497 +-
24498 + return NETDEV_TX_OK;
24499 + }
24500 +
24501 +diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
24502 +index 8671591cb7501..3a79ead5219ae 100644
24503 +--- a/drivers/net/ethernet/freescale/enetc/enetc.c
24504 ++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
24505 +@@ -1489,23 +1489,6 @@ static void enetc_xdp_drop(struct enetc_bdr *rx_ring, int rx_ring_first,
24506 + rx_ring->stats.xdp_drops++;
24507 + }
24508 +
24509 +-static void enetc_xdp_free(struct enetc_bdr *rx_ring, int rx_ring_first,
24510 +- int rx_ring_last)
24511 +-{
24512 +- while (rx_ring_first != rx_ring_last) {
24513 +- struct enetc_rx_swbd *rx_swbd = &rx_ring->rx_swbd[rx_ring_first];
24514 +-
24515 +- if (rx_swbd->page) {
24516 +- dma_unmap_page(rx_ring->dev, rx_swbd->dma, PAGE_SIZE,
24517 +- rx_swbd->dir);
24518 +- __free_page(rx_swbd->page);
24519 +- rx_swbd->page = NULL;
24520 +- }
24521 +- enetc_bdr_idx_inc(rx_ring, &rx_ring_first);
24522 +- }
24523 +- rx_ring->stats.xdp_redirect_failures++;
24524 +-}
24525 +-
24526 + static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
24527 + struct napi_struct *napi, int work_limit,
24528 + struct bpf_prog *prog)
24529 +@@ -1527,8 +1510,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
24530 + int orig_i, orig_cleaned_cnt;
24531 + struct xdp_buff xdp_buff;
24532 + struct sk_buff *skb;
24533 +- int tmp_orig_i, err;
24534 + u32 bd_status;
24535 ++ int err;
24536 +
24537 + rxbd = enetc_rxbd(rx_ring, i);
24538 + bd_status = le32_to_cpu(rxbd->r.lstatus);
24539 +@@ -1615,18 +1598,16 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
24540 + break;
24541 + }
24542 +
24543 +- tmp_orig_i = orig_i;
24544 +-
24545 +- while (orig_i != i) {
24546 +- enetc_flip_rx_buff(rx_ring,
24547 +- &rx_ring->rx_swbd[orig_i]);
24548 +- enetc_bdr_idx_inc(rx_ring, &orig_i);
24549 +- }
24550 +-
24551 + err = xdp_do_redirect(rx_ring->ndev, &xdp_buff, prog);
24552 + if (unlikely(err)) {
24553 +- enetc_xdp_free(rx_ring, tmp_orig_i, i);
24554 ++ enetc_xdp_drop(rx_ring, orig_i, i);
24555 ++ rx_ring->stats.xdp_redirect_failures++;
24556 + } else {
24557 ++ while (orig_i != i) {
24558 ++ enetc_flip_rx_buff(rx_ring,
24559 ++ &rx_ring->rx_swbd[orig_i]);
24560 ++ enetc_bdr_idx_inc(rx_ring, &orig_i);
24561 ++ }
24562 + xdp_redirect_frm_cnt++;
24563 + rx_ring->stats.xdp_redirect++;
24564 + }
24565 +diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
24566 +index 23e1a94b9ce45..f250b0df27fbb 100644
24567 +--- a/drivers/net/ethernet/freescale/fec_main.c
24568 ++++ b/drivers/net/ethernet/freescale/fec_main.c
24569 +@@ -1642,6 +1642,14 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
24570 + * bridging applications.
24571 + */
24572 + skb = build_skb(page_address(page), PAGE_SIZE);
24573 ++ if (unlikely(!skb)) {
24574 ++ page_pool_recycle_direct(rxq->page_pool, page);
24575 ++ ndev->stats.rx_dropped++;
24576 ++
24577 ++ netdev_err_once(ndev, "build_skb failed!\n");
24578 ++ goto rx_processing_done;
24579 ++ }
24580 ++
24581 + skb_reserve(skb, FEC_ENET_XDP_HEADROOM);
24582 + skb_put(skb, pkt_len - 4);
24583 + skb_mark_for_recycle(skb);
24584 +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
24585 +index 6416322d7c18b..e6e349f0c9457 100644
24586 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
24587 ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
24588 +@@ -3693,6 +3693,24 @@ static int i40e_vsi_configure_tx(struct i40e_vsi *vsi)
24589 + return err;
24590 + }
24591 +
24592 ++/**
24593 ++ * i40e_calculate_vsi_rx_buf_len - Calculates buffer length
24594 ++ *
24595 ++ * @vsi: VSI to calculate rx_buf_len from
24596 ++ */
24597 ++static u16 i40e_calculate_vsi_rx_buf_len(struct i40e_vsi *vsi)
24598 ++{
24599 ++ if (!vsi->netdev || (vsi->back->flags & I40E_FLAG_LEGACY_RX))
24600 ++ return I40E_RXBUFFER_2048;
24601 ++
24602 ++#if (PAGE_SIZE < 8192)
24603 ++ if (!I40E_2K_TOO_SMALL_WITH_PADDING && vsi->netdev->mtu <= ETH_DATA_LEN)
24604 ++ return I40E_RXBUFFER_1536 - NET_IP_ALIGN;
24605 ++#endif
24606 ++
24607 ++ return PAGE_SIZE < 8192 ? I40E_RXBUFFER_3072 : I40E_RXBUFFER_2048;
24608 ++}
24609 ++
24610 + /**
24611 + * i40e_vsi_configure_rx - Configure the VSI for Rx
24612 + * @vsi: the VSI being configured
24613 +@@ -3704,20 +3722,14 @@ static int i40e_vsi_configure_rx(struct i40e_vsi *vsi)
24614 + int err = 0;
24615 + u16 i;
24616 +
24617 +- if (!vsi->netdev || (vsi->back->flags & I40E_FLAG_LEGACY_RX)) {
24618 +- vsi->max_frame = I40E_MAX_RXBUFFER;
24619 +- vsi->rx_buf_len = I40E_RXBUFFER_2048;
24620 ++ vsi->max_frame = I40E_MAX_RXBUFFER;
24621 ++ vsi->rx_buf_len = i40e_calculate_vsi_rx_buf_len(vsi);
24622 ++
24623 + #if (PAGE_SIZE < 8192)
24624 +- } else if (!I40E_2K_TOO_SMALL_WITH_PADDING &&
24625 +- (vsi->netdev->mtu <= ETH_DATA_LEN)) {
24626 ++ if (vsi->netdev && !I40E_2K_TOO_SMALL_WITH_PADDING &&
24627 ++ vsi->netdev->mtu <= ETH_DATA_LEN)
24628 + vsi->max_frame = I40E_RXBUFFER_1536 - NET_IP_ALIGN;
24629 +- vsi->rx_buf_len = I40E_RXBUFFER_1536 - NET_IP_ALIGN;
24630 + #endif
24631 +- } else {
24632 +- vsi->max_frame = I40E_MAX_RXBUFFER;
24633 +- vsi->rx_buf_len = (PAGE_SIZE < 8192) ? I40E_RXBUFFER_3072 :
24634 +- I40E_RXBUFFER_2048;
24635 +- }
24636 +
24637 + /* set up individual rings */
24638 + for (i = 0; i < vsi->num_queue_pairs && !err; i++)
24639 +@@ -13282,7 +13294,7 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
24640 + int i;
24641 +
24642 + /* Don't allow frames that span over multiple buffers */
24643 +- if (frame_size > vsi->rx_buf_len) {
24644 ++ if (frame_size > i40e_calculate_vsi_rx_buf_len(vsi)) {
24645 + NL_SET_ERR_MSG_MOD(extack, "MTU too large to enable XDP");
24646 + return -EINVAL;
24647 + }
24648 +diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
24649 +index 0f668468d1414..53fec5bbe6e00 100644
24650 +--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
24651 ++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
24652 +@@ -639,7 +639,7 @@ static u64 ice_ptp_extend_40b_ts(struct ice_pf *pf, u64 in_tstamp)
24653 + static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
24654 + {
24655 + struct ice_ptp_port *ptp_port;
24656 +- bool ts_handled = true;
24657 ++ bool more_timestamps;
24658 + struct ice_pf *pf;
24659 + u8 idx;
24660 +
24661 +@@ -701,11 +701,10 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
24662 + * poll for remaining timestamps.
24663 + */
24664 + spin_lock(&tx->lock);
24665 +- if (!bitmap_empty(tx->in_use, tx->len))
24666 +- ts_handled = false;
24667 ++ more_timestamps = tx->init && !bitmap_empty(tx->in_use, tx->len);
24668 + spin_unlock(&tx->lock);
24669 +
24670 +- return ts_handled;
24671 ++ return !more_timestamps;
24672 + }
24673 +
24674 + /**
24675 +@@ -776,6 +775,9 @@ ice_ptp_release_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
24676 + {
24677 + tx->init = 0;
24678 +
24679 ++ /* wait for potentially outstanding interrupt to complete */
24680 ++ synchronize_irq(pf->msix_entries[pf->oicr_idx].vector);
24681 ++
24682 + ice_ptp_flush_tx_tracker(pf, tx);
24683 +
24684 + kfree(tx->tstamps);
24685 +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
24686 +index 473158c09f1d7..24a6ae19ad8ed 100644
24687 +--- a/drivers/net/ethernet/intel/igb/igb_main.c
24688 ++++ b/drivers/net/ethernet/intel/igb/igb_main.c
24689 +@@ -1202,8 +1202,12 @@ static int igb_alloc_q_vector(struct igb_adapter *adapter,
24690 + if (!q_vector) {
24691 + q_vector = kzalloc(size, GFP_KERNEL);
24692 + } else if (size > ksize(q_vector)) {
24693 +- kfree_rcu(q_vector, rcu);
24694 +- q_vector = kzalloc(size, GFP_KERNEL);
24695 ++ struct igb_q_vector *new_q_vector;
24696 ++
24697 ++ new_q_vector = kzalloc(size, GFP_KERNEL);
24698 ++ if (new_q_vector)
24699 ++ kfree_rcu(q_vector, rcu);
24700 ++ q_vector = new_q_vector;
24701 + } else {
24702 + memset(q_vector, 0, size);
24703 + }
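The igb hunk above converts an unconditional `kfree_rcu()` + `kzalloc()` into an allocate-before-free sequence, so a failed allocation no longer leaves `q_vector` dangling. A minimal stand-alone sketch of that pattern (names are illustrative, not from the driver):

```c
#include <stdlib.h>

/* Allocate-before-free: only release the old buffer once the
 * replacement allocation has succeeded. On failure the caller
 * keeps the (smaller but still valid) old buffer.
 */
void *grow_buffer(void *old, size_t new_size)
{
	void *fresh = calloc(1, new_size);

	if (!fresh)
		return old;	/* allocation failed: keep the old buffer */

	free(old);		/* safe to drop the old buffer now */
	return fresh;
}
```

The same ordering is what the fix restores in `igb_alloc_q_vector()`: `kfree_rcu()` is deferred until `kzalloc()` is known to have succeeded.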
24704 +diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
24705 +index 1e7e7071f64d2..df3e26c0cf01a 100644
24706 +--- a/drivers/net/ethernet/intel/igc/igc.h
24707 ++++ b/drivers/net/ethernet/intel/igc/igc.h
24708 +@@ -94,6 +94,8 @@ struct igc_ring {
24709 + u8 queue_index; /* logical index of the ring*/
24710 + u8 reg_idx; /* physical index of the ring */
24711 + bool launchtime_enable; /* true if LaunchTime is enabled */
24712 ++ ktime_t last_tx_cycle; /* end of the cycle with a launchtime transmission */
24713 ++ ktime_t last_ff_cycle; /* Last cycle with an active first flag */
24714 +
24715 + u32 start_time;
24716 + u32 end_time;
24717 +@@ -182,6 +184,7 @@ struct igc_adapter {
24718 +
24719 + ktime_t base_time;
24720 + ktime_t cycle_time;
24721 ++ bool qbv_enable;
24722 +
24723 + /* OS defined structs */
24724 + struct pci_dev *pdev;
24725 +diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
24726 +index 4f9d7f013a958..4ad35fbdc02e8 100644
24727 +--- a/drivers/net/ethernet/intel/igc/igc_defines.h
24728 ++++ b/drivers/net/ethernet/intel/igc/igc_defines.h
24729 +@@ -321,6 +321,8 @@
24730 + #define IGC_ADVTXD_L4LEN_SHIFT 8 /* Adv ctxt L4LEN shift */
24731 + #define IGC_ADVTXD_MSS_SHIFT 16 /* Adv ctxt MSS shift */
24732 +
24733 ++#define IGC_ADVTXD_TSN_CNTX_FIRST 0x00000080
24734 ++
24735 + /* Transmit Control */
24736 + #define IGC_TCTL_EN 0x00000002 /* enable Tx */
24737 + #define IGC_TCTL_PSP 0x00000008 /* pad short packets */
24738 +diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
24739 +index 34889be63e788..34db1c006b20a 100644
24740 +--- a/drivers/net/ethernet/intel/igc/igc_main.c
24741 ++++ b/drivers/net/ethernet/intel/igc/igc_main.c
24742 +@@ -1000,25 +1000,118 @@ static int igc_write_mc_addr_list(struct net_device *netdev)
24743 + return netdev_mc_count(netdev);
24744 + }
24745 +
24746 +-static __le32 igc_tx_launchtime(struct igc_adapter *adapter, ktime_t txtime)
24747 ++static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
24748 ++ bool *first_flag, bool *insert_empty)
24749 + {
24750 ++ struct igc_adapter *adapter = netdev_priv(ring->netdev);
24751 + ktime_t cycle_time = adapter->cycle_time;
24752 + ktime_t base_time = adapter->base_time;
24753 ++ ktime_t now = ktime_get_clocktai();
24754 ++ ktime_t baset_est, end_of_cycle;
24755 + u32 launchtime;
24756 ++ s64 n;
24757 +
24758 +- /* FIXME: when using ETF together with taprio, we may have a
24759 +- * case where 'delta' is larger than the cycle_time, this may
24760 +- * cause problems if we don't read the current value of
24761 +- * IGC_BASET, as the value writen into the launchtime
24762 +- * descriptor field may be misinterpreted.
24763 ++ n = div64_s64(ktime_sub_ns(now, base_time), cycle_time);
24764 ++
24765 ++ baset_est = ktime_add_ns(base_time, cycle_time * (n));
24766 ++ end_of_cycle = ktime_add_ns(baset_est, cycle_time);
24767 ++
24768 ++ if (ktime_compare(txtime, end_of_cycle) >= 0) {
24769 ++ if (baset_est != ring->last_ff_cycle) {
24770 ++ *first_flag = true;
24771 ++ ring->last_ff_cycle = baset_est;
24772 ++
24773 ++ if (ktime_compare(txtime, ring->last_tx_cycle) > 0)
24774 ++ *insert_empty = true;
24775 ++ }
24776 ++ }
24777 ++
24778 ++ /* Introducing a window at end of cycle on which packets
24779 ++ * potentially not honor launchtime. Window of 5us chosen
24780 ++ * considering software update the tail pointer and packets
24781 ++ * are dma'ed to packet buffer.
24782 + */
24783 +- div_s64_rem(ktime_sub_ns(txtime, base_time), cycle_time, &launchtime);
24784 ++ if ((ktime_sub_ns(end_of_cycle, now) < 5 * NSEC_PER_USEC))
24785 ++ netdev_warn(ring->netdev, "Packet with txtime=%llu may not be honoured\n",
24786 ++ txtime);
24787 ++
24788 ++ ring->last_tx_cycle = end_of_cycle;
24789 ++
24790 ++ launchtime = ktime_sub_ns(txtime, baset_est);
24791 ++ if (launchtime > 0)
24792 ++ div_s64_rem(launchtime, cycle_time, &launchtime);
24793 ++ else
24794 ++ launchtime = 0;
24795 +
24796 + return cpu_to_le32(launchtime);
24797 + }
24798 +
24799 ++static int igc_init_empty_frame(struct igc_ring *ring,
24800 ++ struct igc_tx_buffer *buffer,
24801 ++ struct sk_buff *skb)
24802 ++{
24803 ++ unsigned int size;
24804 ++ dma_addr_t dma;
24805 ++
24806 ++ size = skb_headlen(skb);
24807 ++
24808 ++ dma = dma_map_single(ring->dev, skb->data, size, DMA_TO_DEVICE);
24809 ++ if (dma_mapping_error(ring->dev, dma)) {
24810 ++ netdev_err_once(ring->netdev, "Failed to map DMA for TX\n");
24811 ++ return -ENOMEM;
24812 ++ }
24813 ++
24814 ++ buffer->skb = skb;
24815 ++ buffer->protocol = 0;
24816 ++ buffer->bytecount = skb->len;
24817 ++ buffer->gso_segs = 1;
24818 ++ buffer->time_stamp = jiffies;
24819 ++ dma_unmap_len_set(buffer, len, skb->len);
24820 ++ dma_unmap_addr_set(buffer, dma, dma);
24821 ++
24822 ++ return 0;
24823 ++}
24824 ++
24825 ++static int igc_init_tx_empty_descriptor(struct igc_ring *ring,
24826 ++ struct sk_buff *skb,
24827 ++ struct igc_tx_buffer *first)
24828 ++{
24829 ++ union igc_adv_tx_desc *desc;
24830 ++ u32 cmd_type, olinfo_status;
24831 ++ int err;
24832 ++
24833 ++ if (!igc_desc_unused(ring))
24834 ++ return -EBUSY;
24835 ++
24836 ++ err = igc_init_empty_frame(ring, first, skb);
24837 ++ if (err)
24838 ++ return err;
24839 ++
24840 ++ cmd_type = IGC_ADVTXD_DTYP_DATA | IGC_ADVTXD_DCMD_DEXT |
24841 ++ IGC_ADVTXD_DCMD_IFCS | IGC_TXD_DCMD |
24842 ++ first->bytecount;
24843 ++ olinfo_status = first->bytecount << IGC_ADVTXD_PAYLEN_SHIFT;
24844 ++
24845 ++ desc = IGC_TX_DESC(ring, ring->next_to_use);
24846 ++ desc->read.cmd_type_len = cpu_to_le32(cmd_type);
24847 ++ desc->read.olinfo_status = cpu_to_le32(olinfo_status);
24848 ++ desc->read.buffer_addr = cpu_to_le64(dma_unmap_addr(first, dma));
24849 ++
24850 ++ netdev_tx_sent_queue(txring_txq(ring), skb->len);
24851 ++
24852 ++ first->next_to_watch = desc;
24853 ++
24854 ++ ring->next_to_use++;
24855 ++ if (ring->next_to_use == ring->count)
24856 ++ ring->next_to_use = 0;
24857 ++
24858 ++ return 0;
24859 ++}
24860 ++
24861 ++#define IGC_EMPTY_FRAME_SIZE 60
24862 ++
24863 + static void igc_tx_ctxtdesc(struct igc_ring *tx_ring,
24864 +- struct igc_tx_buffer *first,
24865 ++ __le32 launch_time, bool first_flag,
24866 + u32 vlan_macip_lens, u32 type_tucmd,
24867 + u32 mss_l4len_idx)
24868 + {
24869 +@@ -1037,26 +1130,17 @@ static void igc_tx_ctxtdesc(struct igc_ring *tx_ring,
24870 + if (test_bit(IGC_RING_FLAG_TX_CTX_IDX, &tx_ring->flags))
24871 + mss_l4len_idx |= tx_ring->reg_idx << 4;
24872 +
24873 ++ if (first_flag)
24874 ++ mss_l4len_idx |= IGC_ADVTXD_TSN_CNTX_FIRST;
24875 ++
24876 + context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens);
24877 + context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd);
24878 + context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
24879 +-
24880 +- /* We assume there is always a valid Tx time available. Invalid times
24881 +- * should have been handled by the upper layers.
24882 +- */
24883 +- if (tx_ring->launchtime_enable) {
24884 +- struct igc_adapter *adapter = netdev_priv(tx_ring->netdev);
24885 +- ktime_t txtime = first->skb->tstamp;
24886 +-
24887 +- skb_txtime_consumed(first->skb);
24888 +- context_desc->launch_time = igc_tx_launchtime(adapter,
24889 +- txtime);
24890 +- } else {
24891 +- context_desc->launch_time = 0;
24892 +- }
24893 ++ context_desc->launch_time = launch_time;
24894 + }
24895 +
24896 +-static void igc_tx_csum(struct igc_ring *tx_ring, struct igc_tx_buffer *first)
24897 ++static void igc_tx_csum(struct igc_ring *tx_ring, struct igc_tx_buffer *first,
24898 ++ __le32 launch_time, bool first_flag)
24899 + {
24900 + struct sk_buff *skb = first->skb;
24901 + u32 vlan_macip_lens = 0;
24902 +@@ -1096,7 +1180,8 @@ no_csum:
24903 + vlan_macip_lens |= skb_network_offset(skb) << IGC_ADVTXD_MACLEN_SHIFT;
24904 + vlan_macip_lens |= first->tx_flags & IGC_TX_FLAGS_VLAN_MASK;
24905 +
24906 +- igc_tx_ctxtdesc(tx_ring, first, vlan_macip_lens, type_tucmd, 0);
24907 ++ igc_tx_ctxtdesc(tx_ring, launch_time, first_flag,
24908 ++ vlan_macip_lens, type_tucmd, 0);
24909 + }
24910 +
24911 + static int __igc_maybe_stop_tx(struct igc_ring *tx_ring, const u16 size)
24912 +@@ -1320,6 +1405,7 @@ dma_error:
24913 +
24914 + static int igc_tso(struct igc_ring *tx_ring,
24915 + struct igc_tx_buffer *first,
24916 ++ __le32 launch_time, bool first_flag,
24917 + u8 *hdr_len)
24918 + {
24919 + u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
24920 +@@ -1406,8 +1492,8 @@ static int igc_tso(struct igc_ring *tx_ring,
24921 + vlan_macip_lens |= (ip.hdr - skb->data) << IGC_ADVTXD_MACLEN_SHIFT;
24922 + vlan_macip_lens |= first->tx_flags & IGC_TX_FLAGS_VLAN_MASK;
24923 +
24924 +- igc_tx_ctxtdesc(tx_ring, first, vlan_macip_lens,
24925 +- type_tucmd, mss_l4len_idx);
24926 ++ igc_tx_ctxtdesc(tx_ring, launch_time, first_flag,
24927 ++ vlan_macip_lens, type_tucmd, mss_l4len_idx);
24928 +
24929 + return 1;
24930 + }
24931 +@@ -1415,11 +1501,14 @@ static int igc_tso(struct igc_ring *tx_ring,
24932 + static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
24933 + struct igc_ring *tx_ring)
24934 + {
24935 ++ bool first_flag = false, insert_empty = false;
24936 + u16 count = TXD_USE_COUNT(skb_headlen(skb));
24937 + __be16 protocol = vlan_get_protocol(skb);
24938 + struct igc_tx_buffer *first;
24939 ++ __le32 launch_time = 0;
24940 + u32 tx_flags = 0;
24941 + unsigned short f;
24942 ++ ktime_t txtime;
24943 + u8 hdr_len = 0;
24944 + int tso = 0;
24945 +
24946 +@@ -1433,11 +1522,40 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
24947 + count += TXD_USE_COUNT(skb_frag_size(
24948 + &skb_shinfo(skb)->frags[f]));
24949 +
24950 +- if (igc_maybe_stop_tx(tx_ring, count + 3)) {
24951 ++ if (igc_maybe_stop_tx(tx_ring, count + 5)) {
24952 + /* this is a hard error */
24953 + return NETDEV_TX_BUSY;
24954 + }
24955 +
24956 ++ if (!tx_ring->launchtime_enable)
24957 ++ goto done;
24958 ++
24959 ++ txtime = skb->tstamp;
24960 ++ skb->tstamp = ktime_set(0, 0);
24961 ++ launch_time = igc_tx_launchtime(tx_ring, txtime, &first_flag, &insert_empty);
24962 ++
24963 ++ if (insert_empty) {
24964 ++ struct igc_tx_buffer *empty_info;
24965 ++ struct sk_buff *empty;
24966 ++ void *data;
24967 ++
24968 ++ empty_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
24969 ++ empty = alloc_skb(IGC_EMPTY_FRAME_SIZE, GFP_ATOMIC);
24970 ++ if (!empty)
24971 ++ goto done;
24972 ++
24973 ++ data = skb_put(empty, IGC_EMPTY_FRAME_SIZE);
24974 ++ memset(data, 0, IGC_EMPTY_FRAME_SIZE);
24975 ++
24976 ++ igc_tx_ctxtdesc(tx_ring, 0, false, 0, 0, 0);
24977 ++
24978 ++ if (igc_init_tx_empty_descriptor(tx_ring,
24979 ++ empty,
24980 ++ empty_info) < 0)
24981 ++ dev_kfree_skb_any(empty);
24982 ++ }
24983 ++
24984 ++done:
24985 + /* record the location of the first descriptor for this packet */
24986 + first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
24987 + first->type = IGC_TX_BUFFER_TYPE_SKB;
24988 +@@ -1474,11 +1592,11 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
24989 + first->tx_flags = tx_flags;
24990 + first->protocol = protocol;
24991 +
24992 +- tso = igc_tso(tx_ring, first, &hdr_len);
24993 ++ tso = igc_tso(tx_ring, first, launch_time, first_flag, &hdr_len);
24994 + if (tso < 0)
24995 + goto out_drop;
24996 + else if (!tso)
24997 +- igc_tx_csum(tx_ring, first);
24998 ++ igc_tx_csum(tx_ring, first, launch_time, first_flag);
24999 +
25000 + igc_tx_map(tx_ring, first, hdr_len);
25001 +
25002 +@@ -5918,10 +6036,16 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
25003 + bool queue_configured[IGC_MAX_TX_QUEUES] = { };
25004 + u32 start_time = 0, end_time = 0;
25005 + size_t n;
25006 ++ int i;
25007 ++
25008 ++ adapter->qbv_enable = qopt->enable;
25009 +
25010 + if (!qopt->enable)
25011 + return igc_tsn_clear_schedule(adapter);
25012 +
25013 ++ if (qopt->base_time < 0)
25014 ++ return -ERANGE;
25015 ++
25016 + if (adapter->base_time)
25017 + return -EALREADY;
25018 +
25019 +@@ -5933,10 +6057,24 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
25020 +
25021 + for (n = 0; n < qopt->num_entries; n++) {
25022 + struct tc_taprio_sched_entry *e = &qopt->entries[n];
25023 +- int i;
25024 +
25025 + end_time += e->interval;
25026 +
25027 ++ /* If any of the conditions below are true, we need to manually
25028 ++ * control the end time of the cycle.
25029 ++ * 1. Qbv users can specify a cycle time that is not equal
25030 ++ * to the total GCL intervals. Hence, recalculation is
25031 ++ * necessary here to exclude the time interval that
25032 ++ * exceeds the cycle time.
25033 ++ * 2. According to IEEE Std. 802.1Q-2018 section 8.6.9.2,
25034 ++ * once the end of the list is reached, it will switch
25035 ++ * to the END_OF_CYCLE state and leave the gates in the
25036 ++ * same state until the next cycle is started.
25037 ++ */
25038 ++ if (end_time > adapter->cycle_time ||
25039 ++ n + 1 == qopt->num_entries)
25040 ++ end_time = adapter->cycle_time;
25041 ++
25042 + for (i = 0; i < adapter->num_tx_queues; i++) {
25043 + struct igc_ring *ring = adapter->tx_ring[i];
25044 +
25045 +@@ -5957,6 +6095,18 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
25046 + start_time += e->interval;
25047 + }
25048 +
25049 ++ /* Check whether a queue gets configured.
25050 ++ * If not, set the start and end time to be end time.
25051 ++ */
25052 ++ for (i = 0; i < adapter->num_tx_queues; i++) {
25053 ++ if (!queue_configured[i]) {
25054 ++ struct igc_ring *ring = adapter->tx_ring[i];
25055 ++
25056 ++ ring->start_time = end_time;
25057 ++ ring->end_time = end_time;
25058 ++ }
25059 ++ }
25060 ++
25061 + return 0;
25062 + }
25063 +
25064 +diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c
25065 +index 0fce22de2ab85..356c7455c5cee 100644
25066 +--- a/drivers/net/ethernet/intel/igc/igc_tsn.c
25067 ++++ b/drivers/net/ethernet/intel/igc/igc_tsn.c
25068 +@@ -36,7 +36,7 @@ static unsigned int igc_tsn_new_flags(struct igc_adapter *adapter)
25069 + {
25070 + unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED;
25071 +
25072 +- if (adapter->base_time)
25073 ++ if (adapter->qbv_enable)
25074 + new_flags |= IGC_FLAG_TSN_QBV_ENABLED;
25075 +
25076 + if (is_any_launchtime(adapter))
25077 +@@ -110,15 +110,8 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
25078 + wr32(IGC_STQT(i), ring->start_time);
25079 + wr32(IGC_ENDQT(i), ring->end_time);
25080 +
25081 +- if (adapter->base_time) {
25082 +- /* If we have a base_time we are in "taprio"
25083 +- * mode and we need to be strict about the
25084 +- * cycles: only transmit a packet if it can be
25085 +- * completed during that cycle.
25086 +- */
25087 +- txqctl |= IGC_TXQCTL_STRICT_CYCLE |
25088 +- IGC_TXQCTL_STRICT_END;
25089 +- }
25090 ++ txqctl |= IGC_TXQCTL_STRICT_CYCLE |
25091 ++ IGC_TXQCTL_STRICT_END;
25092 +
25093 + if (ring->launchtime_enable)
25094 + txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT;
25095 +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
25096 +index c0bedf402da93..f68a6a0e3aa41 100644
25097 +--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
25098 ++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
25099 +@@ -1184,10 +1184,13 @@ static int mcs_register_interrupts(struct mcs *mcs)
25100 + mcs->tx_sa_active = alloc_mem(mcs, mcs->hw->sc_entries);
25101 + if (!mcs->tx_sa_active) {
25102 + ret = -ENOMEM;
25103 +- goto exit;
25104 ++ goto free_irq;
25105 + }
25106 +
25107 + return ret;
25108 ++
25109 ++free_irq:
25110 ++ free_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), mcs);
25111 + exit:
25112 + pci_free_irq_vectors(mcs->pdev);
25113 + mcs->num_vec = 0;
25114 +@@ -1589,6 +1592,7 @@ static void mcs_remove(struct pci_dev *pdev)
25115 +
25116 + /* Set MCS to external bypass */
25117 + mcs_set_external_bypass(mcs, true);
25118 ++ free_irq(pci_irq_vector(pdev, MCS_INT_VEC_IP), mcs);
25119 + pci_free_irq_vectors(pdev);
25120 + pci_release_regions(pdev);
25121 + pci_disable_device(pdev);
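The mcs.c hunk above adds a missing `free_irq()` step to the function's unwind path. The underlying idiom is the kernel's goto-based error unwinding: each failure label releases exactly the resources acquired before that point, in reverse order. A hypothetical sketch with two generic resources (not the driver's actual calls):

```c
#include <stdlib.h>

/* Goto-unwind sketch: acquire a, then b; on failure of b, release a
 * before returning. fail_second simulates the second allocation failing.
 */
int setup_two_resources(int fail_second, void **a, void **b)
{
	*a = malloc(16);
	if (!*a)
		goto err;

	*b = fail_second ? NULL : malloc(16);
	if (!*b)
		goto free_a;	/* unwind only what was acquired */

	return 0;

free_a:
	free(*a);
	*a = NULL;
err:
	return -1;
}
```

The bug the patch fixes is the classic variant of this: a new resource (the IRQ) was acquired mid-function, but the later error path jumped to a label that skipped its release.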
25122 +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
25123 +index 1d36619c5ec91..9aa1892a609c7 100644
25124 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
25125 ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
25126 +@@ -3229,6 +3229,30 @@ static void mtk_dim_tx(struct work_struct *work)
25127 + dim->state = DIM_START_MEASURE;
25128 + }
25129 +
25130 ++static void mtk_set_mcr_max_rx(struct mtk_mac *mac, u32 val)
25131 ++{
25132 ++ struct mtk_eth *eth = mac->hw;
25133 ++ u32 mcr_cur, mcr_new;
25134 ++
25135 ++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628))
25136 ++ return;
25137 ++
25138 ++ mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
25139 ++ mcr_new = mcr_cur & ~MAC_MCR_MAX_RX_MASK;
25140 ++
25141 ++ if (val <= 1518)
25142 ++ mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_1518);
25143 ++ else if (val <= 1536)
25144 ++ mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_1536);
25145 ++ else if (val <= 1552)
25146 ++ mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_1552);
25147 ++ else
25148 ++ mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_2048);
25149 ++
25150 ++ if (mcr_new != mcr_cur)
25151 ++ mtk_w32(mac->hw, mcr_new, MTK_MAC_MCR(mac->id));
25152 ++}
25153 ++
25154 + static int mtk_hw_init(struct mtk_eth *eth)
25155 + {
25156 + u32 dma_mask = ETHSYS_DMA_AG_MAP_PDMA | ETHSYS_DMA_AG_MAP_QDMA |
25157 +@@ -3268,16 +3292,17 @@ static int mtk_hw_init(struct mtk_eth *eth)
25158 + return 0;
25159 + }
25160 +
25161 +- val = RSTCTRL_FE | RSTCTRL_PPE;
25162 + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
25163 + regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, 0);
25164 +-
25165 +- val |= RSTCTRL_ETH;
25166 +- if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
25167 +- val |= RSTCTRL_PPE1;
25168 ++ val = RSTCTRL_PPE0_V2;
25169 ++ } else {
25170 ++ val = RSTCTRL_PPE0;
25171 + }
25172 +
25173 +- ethsys_reset(eth, val);
25174 ++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1))
25175 ++ val |= RSTCTRL_PPE1;
25176 ++
25177 ++ ethsys_reset(eth, RSTCTRL_ETH | RSTCTRL_FE | val);
25178 +
25179 + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
25180 + regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN,
25181 +@@ -3303,8 +3328,16 @@ static int mtk_hw_init(struct mtk_eth *eth)
25182 + * up with the more appropriate value when mtk_mac_config call is being
25183 + * invoked.
25184 + */
25185 +- for (i = 0; i < MTK_MAC_COUNT; i++)
25186 ++ for (i = 0; i < MTK_MAC_COUNT; i++) {
25187 ++ struct net_device *dev = eth->netdev[i];
25188 ++
25189 + mtk_w32(eth, MAC_MCR_FORCE_LINK_DOWN, MTK_MAC_MCR(i));
25190 ++ if (dev) {
25191 ++ struct mtk_mac *mac = netdev_priv(dev);
25192 ++
25193 ++ mtk_set_mcr_max_rx(mac, dev->mtu + MTK_RX_ETH_HLEN);
25194 ++ }
25195 ++ }
25196 +
25197 + /* Indicates CDM to parse the MTK special tag from CPU
25198 + * which also is working out for untag packets.
25199 +@@ -3331,9 +3364,12 @@ static int mtk_hw_init(struct mtk_eth *eth)
25200 + mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);
25201 +
25202 + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
25203 +- /* PSE should not drop port8 and port9 packets */
25204 ++ /* PSE should not drop port8 and port9 packets from WDMA Tx */
25205 + mtk_w32(eth, 0x00000300, PSE_DROP_CFG);
25206 +
25207 ++ /* PSE should drop packets to port 8/9 on WDMA Rx ring full */
25208 ++ mtk_w32(eth, 0x00000300, PSE_PPE0_DROP);
25209 ++
25210 + /* PSE Free Queue Flow Control */
25211 + mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
25212 +
25213 +@@ -3420,7 +3456,6 @@ static int mtk_change_mtu(struct net_device *dev, int new_mtu)
25214 + int length = new_mtu + MTK_RX_ETH_HLEN;
25215 + struct mtk_mac *mac = netdev_priv(dev);
25216 + struct mtk_eth *eth = mac->hw;
25217 +- u32 mcr_cur, mcr_new;
25218 +
25219 + if (rcu_access_pointer(eth->prog) &&
25220 + length > MTK_PP_MAX_BUF_SIZE) {
25221 +@@ -3428,23 +3463,7 @@ static int mtk_change_mtu(struct net_device *dev, int new_mtu)
25222 + return -EINVAL;
25223 + }
25224 +
25225 +- if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) {
25226 +- mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
25227 +- mcr_new = mcr_cur & ~MAC_MCR_MAX_RX_MASK;
25228 +-
25229 +- if (length <= 1518)
25230 +- mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_1518);
25231 +- else if (length <= 1536)
25232 +- mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_1536);
25233 +- else if (length <= 1552)
25234 +- mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_1552);
25235 +- else
25236 +- mcr_new |= MAC_MCR_MAX_RX(MAC_MCR_MAX_RX_2048);
25237 +-
25238 +- if (mcr_new != mcr_cur)
25239 +- mtk_w32(mac->hw, mcr_new, MTK_MAC_MCR(mac->id));
25240 +- }
25241 +-
25242 ++ mtk_set_mcr_max_rx(mac, length);
25243 + dev->mtu = new_mtu;
25244 +
25245 + return 0;
25246 +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
25247 +index b52f3b0177efb..306fdc2c608a4 100644
25248 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
25249 ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
25250 +@@ -120,6 +120,7 @@
25251 + #define PSE_FQFC_CFG1 0x100
25252 + #define PSE_FQFC_CFG2 0x104
25253 + #define PSE_DROP_CFG 0x108
25254 ++#define PSE_PPE0_DROP 0x110
25255 +
25256 + /* PSE Input Queue Reservation Register*/
25257 + #define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
25258 +@@ -447,18 +448,14 @@
25259 + /* ethernet reset control register */
25260 + #define ETHSYS_RSTCTRL 0x34
25261 + #define RSTCTRL_FE BIT(6)
25262 +-#define RSTCTRL_PPE BIT(31)
25263 +-#define RSTCTRL_PPE1 BIT(30)
25264 ++#define RSTCTRL_PPE0 BIT(31)
25265 ++#define RSTCTRL_PPE0_V2 BIT(30)
25266 ++#define RSTCTRL_PPE1 BIT(31)
25267 + #define RSTCTRL_ETH BIT(23)
25268 +
25269 + /* ethernet reset check idle register */
25270 + #define ETHSYS_FE_RST_CHK_IDLE_EN 0x28
25271 +
25272 +-/* ethernet reset control register */
25273 +-#define ETHSYS_RSTCTRL 0x34
25274 +-#define RSTCTRL_FE BIT(6)
25275 +-#define RSTCTRL_PPE BIT(31)
25276 +-
25277 + /* ethernet dma channel agent map */
25278 + #define ETHSYS_DMA_AG_MAP 0x408
25279 + #define ETHSYS_DMA_AG_MAP_PDMA BIT(0)
25280 +diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
25281 +index 9063e2e22cd5c..9a9341a348c00 100644
25282 +--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
25283 ++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
25284 +@@ -3913,6 +3913,7 @@ abort_with_slices:
25285 + myri10ge_free_slices(mgp);
25286 +
25287 + abort_with_firmware:
25288 ++ kfree(mgp->msix_vectors);
25289 + myri10ge_dummy_rdma(mgp, 0);
25290 +
25291 + abort_with_ioremap:
25292 +diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
25293 +index 1d3c4474b7cb4..700c05fb05b97 100644
25294 +--- a/drivers/net/ethernet/neterion/s2io.c
25295 ++++ b/drivers/net/ethernet/neterion/s2io.c
25296 +@@ -2386,7 +2386,7 @@ static void free_tx_buffers(struct s2io_nic *nic)
25297 + skb = s2io_txdl_getskb(&mac_control->fifos[i], txdp, j);
25298 + if (skb) {
25299 + swstats->mem_freed += skb->truesize;
25300 +- dev_kfree_skb(skb);
25301 ++ dev_kfree_skb_irq(skb);
25302 + cnt++;
25303 + }
25304 + }
25305 +diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
25306 +index 5250d1d1e49ca..86ecb080b1536 100644
25307 +--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
25308 ++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
25309 +@@ -1972,9 +1972,10 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn,
25310 + u8 split_id)
25311 + {
25312 + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
25313 +- u8 port_id = 0, pf_id = 0, vf_id = 0, fid = 0;
25314 ++ u8 port_id = 0, pf_id = 0, vf_id = 0;
25315 + bool read_using_dmae = false;
25316 + u32 thresh;
25317 ++ u16 fid;
25318 +
25319 + if (!dump)
25320 + return len;
25321 +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
25322 +index 9282321c2e7fb..f9dd50152b1e3 100644
25323 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
25324 ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
25325 +@@ -221,6 +221,8 @@ int qlcnic_sriov_init(struct qlcnic_adapter *adapter, int num_vfs)
25326 + return 0;
25327 +
25328 + qlcnic_destroy_async_wq:
25329 ++ while (i--)
25330 ++ kfree(sriov->vf_info[i].vp);
25331 + destroy_workqueue(bc->bc_async_wq);
25332 +
25333 + qlcnic_destroy_trans_wq:
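The qlcnic hunk above inserts a `while (i--)` loop so that a failure mid-way through per-VF allocation frees only the entries that were actually allocated. That partial-unwind loop can be sketched in isolation (array and indices are illustrative):

```c
#include <stdlib.h>

/* Partial-unwind sketch: if allocation fails at index i, free entries
 * 0..i-1 in reverse order and leave every slot NULL. fail_at < 0 means
 * all allocations succeed.
 */
int alloc_array(void **slots, int n, int fail_at)
{
	int i;

	for (i = 0; i < n; i++) {
		slots[i] = (i == fail_at) ? NULL : malloc(8);
		if (!slots[i])
			goto unwind;
	}
	return 0;

unwind:
	while (i--) {		/* frees indices i-1 down to 0 */
		free(slots[i]);
		slots[i] = NULL;
	}
	return -1;
}
```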
25334 +diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c
25335 +index eecd52ed1ed21..f4d434c379e7c 100644
25336 +--- a/drivers/net/ethernet/rdc/r6040.c
25337 ++++ b/drivers/net/ethernet/rdc/r6040.c
25338 +@@ -1159,10 +1159,12 @@ static int r6040_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
25339 + err = register_netdev(dev);
25340 + if (err) {
25341 + dev_err(&pdev->dev, "Failed to register net device\n");
25342 +- goto err_out_mdio_unregister;
25343 ++ goto err_out_phy_disconnect;
25344 + }
25345 + return 0;
25346 +
25347 ++err_out_phy_disconnect:
25348 ++ phy_disconnect(dev->phydev);
25349 + err_out_mdio_unregister:
25350 + mdiobus_unregister(lp->mii_bus);
25351 + err_out_mdio:
25352 +@@ -1186,6 +1188,7 @@ static void r6040_remove_one(struct pci_dev *pdev)
25353 + struct r6040_private *lp = netdev_priv(dev);
25354 +
25355 + unregister_netdev(dev);
25356 ++ phy_disconnect(dev->phydev);
25357 + mdiobus_unregister(lp->mii_bus);
25358 + mdiobus_free(lp->mii_bus);
25359 + netif_napi_del(&lp->napi);
25360 +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
25361 +index 764832f4dae1a..8b50f03056b7b 100644
25362 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
25363 ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
25364 +@@ -47,7 +47,8 @@ static void config_sub_second_increment(void __iomem *ioaddr,
25365 + if (!(value & PTP_TCR_TSCTRLSSR))
25366 + data = (data * 1000) / 465;
25367 +
25368 +- data &= PTP_SSIR_SSINC_MASK;
25369 ++ if (data > PTP_SSIR_SSINC_MAX)
25370 ++ data = PTP_SSIR_SSINC_MAX;
25371 +
25372 + reg_value = data;
25373 + if (gmac4)
25374 +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
25375 +index 23ec0a9e396c6..feb209d4b991e 100644
25376 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
25377 ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
25378 +@@ -7097,7 +7097,8 @@ int stmmac_dvr_probe(struct device *device,
25379 + priv->wq = create_singlethread_workqueue("stmmac_wq");
25380 + if (!priv->wq) {
25381 + dev_err(priv->device, "failed to create workqueue\n");
25382 +- return -ENOMEM;
25383 ++ ret = -ENOMEM;
25384 ++ goto error_wq_init;
25385 + }
25386 +
25387 + INIT_WORK(&priv->service_task, stmmac_service_task);
25388 +@@ -7325,6 +7326,7 @@ error_mdio_register:
25389 + stmmac_napi_del(ndev);
25390 + error_hw_init:
25391 + destroy_workqueue(priv->wq);
25392 ++error_wq_init:
25393 + bitmap_free(priv->af_xdp_zc_qps);
25394 +
25395 + return ret;
25396 +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
25397 +index 53172a4398101..bf619295d079f 100644
25398 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
25399 ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
25400 +@@ -64,7 +64,7 @@
25401 + #define PTP_TCR_TSENMACADDR BIT(18)
25402 +
25403 + /* SSIR defines */
25404 +-#define PTP_SSIR_SSINC_MASK 0xff
25405 ++#define PTP_SSIR_SSINC_MAX 0xff
25406 + #define GMAC4_PTP_SSIR_SSINC_SHIFT 16
25407 +
25408 + /* Auxiliary Control defines */
25409 +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
25410 +index 49af7e78b7f59..687f43cd466c6 100644
25411 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
25412 ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
25413 +@@ -1654,12 +1654,16 @@ static int stmmac_test_arpoffload(struct stmmac_priv *priv)
25414 + }
25415 +
25416 + ret = stmmac_set_arp_offload(priv, priv->hw, true, ip_addr);
25417 +- if (ret)
25418 ++ if (ret) {
25419 ++ kfree_skb(skb);
25420 + goto cleanup;
25421 ++ }
25422 +
25423 + ret = dev_set_promiscuity(priv->dev, 1);
25424 +- if (ret)
25425 ++ if (ret) {
25426 ++ kfree_skb(skb);
25427 + goto cleanup;
25428 ++ }
25429 +
25430 + ret = dev_direct_xmit(skb, 0);
25431 + if (ret)
25432 +diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
25433 +index b3b0ba842541d..4ff1cfdb9730c 100644
25434 +--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
25435 ++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
25436 +@@ -564,13 +564,13 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
25437 + ret = netif_set_real_num_tx_queues(ndev, common->tx_ch_num);
25438 + if (ret) {
25439 + dev_err(common->dev, "cannot set real number of tx queues\n");
25440 +- return ret;
25441 ++ goto runtime_put;
25442 + }
25443 +
25444 + ret = netif_set_real_num_rx_queues(ndev, AM65_CPSW_MAX_RX_QUEUES);
25445 + if (ret) {
25446 + dev_err(common->dev, "cannot set real number of rx queues\n");
25447 +- return ret;
25448 ++ goto runtime_put;
25449 + }
25450 +
25451 + for (i = 0; i < common->tx_ch_num; i++)
25452 +@@ -578,7 +578,7 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
25453 +
25454 + ret = am65_cpsw_nuss_common_open(common);
25455 + if (ret)
25456 +- return ret;
25457 ++ goto runtime_put;
25458 +
25459 + common->usage_count++;
25460 +
25461 +@@ -606,6 +606,10 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
25462 + error_cleanup:
25463 + am65_cpsw_nuss_ndo_slave_stop(ndev);
25464 + return ret;
25465 ++
25466 ++runtime_put:
25467 ++ pm_runtime_put(common->dev);
25468 ++ return ret;
25469 + }
25470 +
25471 + static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
25472 +diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
25473 +index aba70bef48945..9eb9eaff4dc90 100644
25474 +--- a/drivers/net/ethernet/ti/netcp_core.c
25475 ++++ b/drivers/net/ethernet/ti/netcp_core.c
25476 +@@ -1261,7 +1261,7 @@ out:
25477 + }
25478 +
25479 + /* Submit the packet */
25480 +-static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
25481 ++static netdev_tx_t netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
25482 + {
25483 + struct netcp_intf *netcp = netdev_priv(ndev);
25484 + struct netcp_stats *tx_stats = &netcp->stats;
25485 +diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
25486 +index a3967f8de417d..ad2c30d9a4824 100644
25487 +--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
25488 ++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
25489 +@@ -536,7 +536,7 @@ static void xemaclite_tx_timeout(struct net_device *dev, unsigned int txqueue)
25490 + xemaclite_enable_interrupts(lp);
25491 +
25492 + if (lp->deferred_skb) {
25493 +- dev_kfree_skb(lp->deferred_skb);
25494 ++ dev_kfree_skb_irq(lp->deferred_skb);
25495 + lp->deferred_skb = NULL;
25496 + dev->stats.tx_errors++;
25497 + }
25498 +diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
25499 +index b584ffe38ad68..1fef8a9b1a0fd 100644
25500 +--- a/drivers/net/fddi/defxx.c
25501 ++++ b/drivers/net/fddi/defxx.c
25502 +@@ -3831,10 +3831,24 @@ static int dfx_init(void)
25503 + int status;
25504 +
25505 + status = pci_register_driver(&dfx_pci_driver);
25506 +- if (!status)
25507 +- status = eisa_driver_register(&dfx_eisa_driver);
25508 +- if (!status)
25509 +- status = tc_register_driver(&dfx_tc_driver);
25510 ++ if (status)
25511 ++ goto err_pci_register;
25512 ++
25513 ++ status = eisa_driver_register(&dfx_eisa_driver);
25514 ++ if (status)
25515 ++ goto err_eisa_register;
25516 ++
25517 ++ status = tc_register_driver(&dfx_tc_driver);
25518 ++ if (status)
25519 ++ goto err_tc_register;
25520 ++
25521 ++ return 0;
25522 ++
25523 ++err_tc_register:
25524 ++ eisa_driver_unregister(&dfx_eisa_driver);
25525 ++err_eisa_register:
25526 ++ pci_unregister_driver(&dfx_pci_driver);
25527 ++err_pci_register:
25528 + return status;
25529 + }
25530 +
25531 +diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
25532 +index 791b4a53d69fd..bd3b0c2655a28 100644
25533 +--- a/drivers/net/hamradio/baycom_epp.c
25534 ++++ b/drivers/net/hamradio/baycom_epp.c
25535 +@@ -758,7 +758,7 @@ static void epp_bh(struct work_struct *work)
25536 + * ===================== network driver interface =========================
25537 + */
25538 +
25539 +-static int baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
25540 ++static netdev_tx_t baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
25541 + {
25542 + struct baycom_state *bc = netdev_priv(dev);
25543 +
25544 +diff --git a/drivers/net/hamradio/scc.c b/drivers/net/hamradio/scc.c
25545 +index f90830d3dfa69..a9184a78650b0 100644
25546 +--- a/drivers/net/hamradio/scc.c
25547 ++++ b/drivers/net/hamradio/scc.c
25548 +@@ -302,12 +302,12 @@ static inline void scc_discard_buffers(struct scc_channel *scc)
25549 + spin_lock_irqsave(&scc->lock, flags);
25550 + if (scc->tx_buff != NULL)
25551 + {
25552 +- dev_kfree_skb(scc->tx_buff);
25553 ++ dev_kfree_skb_irq(scc->tx_buff);
25554 + scc->tx_buff = NULL;
25555 + }
25556 +
25557 + while (!skb_queue_empty(&scc->tx_queue))
25558 +- dev_kfree_skb(skb_dequeue(&scc->tx_queue));
25559 ++ dev_kfree_skb_irq(skb_dequeue(&scc->tx_queue));
25560 +
25561 + spin_unlock_irqrestore(&scc->lock, flags);
25562 + }
25563 +@@ -1668,7 +1668,7 @@ static netdev_tx_t scc_net_tx(struct sk_buff *skb, struct net_device *dev)
25564 + if (skb_queue_len(&scc->tx_queue) > scc->dev->tx_queue_len) {
25565 + struct sk_buff *skb_del;
25566 + skb_del = skb_dequeue(&scc->tx_queue);
25567 +- dev_kfree_skb(skb_del);
25568 ++ dev_kfree_skb_irq(skb_del);
25569 + }
25570 + skb_queue_tail(&scc->tx_queue, skb);
25571 + netif_trans_update(dev);
25572 +diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
25573 +index 2fbac51b9b19e..038a787943927 100644
25574 +--- a/drivers/net/macsec.c
25575 ++++ b/drivers/net/macsec.c
25576 +@@ -2593,7 +2593,7 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
25577 + const struct macsec_ops *ops;
25578 + struct macsec_context ctx;
25579 + struct macsec_dev *macsec;
25580 +- int ret;
25581 ++ int ret = 0;
25582 +
25583 + if (!attrs[MACSEC_ATTR_IFINDEX])
25584 + return -EINVAL;
25585 +@@ -2606,28 +2606,36 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
25586 + macsec_genl_offload_policy, NULL))
25587 + return -EINVAL;
25588 +
25589 ++ rtnl_lock();
25590 ++
25591 + dev = get_dev_from_nl(genl_info_net(info), attrs);
25592 +- if (IS_ERR(dev))
25593 +- return PTR_ERR(dev);
25594 ++ if (IS_ERR(dev)) {
25595 ++ ret = PTR_ERR(dev);
25596 ++ goto out;
25597 ++ }
25598 + macsec = macsec_priv(dev);
25599 +
25600 +- if (!tb_offload[MACSEC_OFFLOAD_ATTR_TYPE])
25601 +- return -EINVAL;
25602 ++ if (!tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]) {
25603 ++ ret = -EINVAL;
25604 ++ goto out;
25605 ++ }
25606 +
25607 + offload = nla_get_u8(tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]);
25608 + if (macsec->offload == offload)
25609 +- return 0;
25610 ++ goto out;
25611 +
25612 + /* Check if the offloading mode is supported by the underlying layers */
25613 + if (offload != MACSEC_OFFLOAD_OFF &&
25614 +- !macsec_check_offload(offload, macsec))
25615 +- return -EOPNOTSUPP;
25616 ++ !macsec_check_offload(offload, macsec)) {
25617 ++ ret = -EOPNOTSUPP;
25618 ++ goto out;
25619 ++ }
25620 +
25621 + /* Check if the net device is busy. */
25622 +- if (netif_running(dev))
25623 +- return -EBUSY;
25624 +-
25625 +- rtnl_lock();
25626 ++ if (netif_running(dev)) {
25627 ++ ret = -EBUSY;
25628 ++ goto out;
25629 ++ }
25630 +
25631 + prev_offload = macsec->offload;
25632 + macsec->offload = offload;
25633 +@@ -2662,7 +2670,7 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
25634 +
25635 + rollback:
25636 + macsec->offload = prev_offload;
25637 +-
25638 ++out:
25639 + rtnl_unlock();
25640 + return ret;
25641 + }
25642 +diff --git a/drivers/net/mctp/mctp-serial.c b/drivers/net/mctp/mctp-serial.c
25643 +index 7cd103fd34ef7..9f9eaf896047c 100644
25644 +--- a/drivers/net/mctp/mctp-serial.c
25645 ++++ b/drivers/net/mctp/mctp-serial.c
25646 +@@ -35,6 +35,8 @@
25647 + #define BYTE_FRAME 0x7e
25648 + #define BYTE_ESC 0x7d
25649 +
25650 ++#define FCS_INIT 0xffff
25651 ++
25652 + static DEFINE_IDA(mctp_serial_ida);
25653 +
25654 + enum mctp_serial_state {
25655 +@@ -123,7 +125,7 @@ static void mctp_serial_tx_work(struct work_struct *work)
25656 + buf[2] = dev->txlen;
25657 +
25658 + if (!dev->txpos)
25659 +- dev->txfcs = crc_ccitt(0, buf + 1, 2);
25660 ++ dev->txfcs = crc_ccitt(FCS_INIT, buf + 1, 2);
25661 +
25662 + txlen = write_chunk(dev, buf + dev->txpos, 3 - dev->txpos);
25663 + if (txlen <= 0) {
25664 +@@ -303,7 +305,7 @@ static void mctp_serial_push_header(struct mctp_serial *dev, unsigned char c)
25665 + case 1:
25666 + if (c == MCTP_SERIAL_VERSION) {
25667 + dev->rxpos++;
25668 +- dev->rxfcs = crc_ccitt_byte(0, c);
25669 ++ dev->rxfcs = crc_ccitt_byte(FCS_INIT, c);
25670 + } else {
25671 + dev->rxstate = STATE_ERR;
25672 + }
25673 +diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
25674 +index a4abea921046b..85dbe7f73e319 100644
25675 +--- a/drivers/net/ntb_netdev.c
25676 ++++ b/drivers/net/ntb_netdev.c
25677 +@@ -137,7 +137,7 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
25678 + enqueue_again:
25679 + rc = ntb_transport_rx_enqueue(qp, skb, skb->data, ndev->mtu + ETH_HLEN);
25680 + if (rc) {
25681 +- dev_kfree_skb(skb);
25682 ++ dev_kfree_skb_any(skb);
25683 + ndev->stats.rx_errors++;
25684 + ndev->stats.rx_fifo_errors++;
25685 + }
25686 +@@ -192,7 +192,7 @@ static void ntb_netdev_tx_handler(struct ntb_transport_qp *qp, void *qp_data,
25687 + ndev->stats.tx_aborted_errors++;
25688 + }
25689 +
25690 +- dev_kfree_skb(skb);
25691 ++ dev_kfree_skb_any(skb);
25692 +
25693 + if (ntb_transport_tx_free_entry(dev->qp) >= tx_start) {
25694 + /* Make sure anybody stopping the queue after this sees the new
25695 +diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
25696 +index 9206c660a72ed..d4c821c8cf57c 100644
25697 +--- a/drivers/net/ppp/ppp_generic.c
25698 ++++ b/drivers/net/ppp/ppp_generic.c
25699 +@@ -1743,6 +1743,8 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
25700 + int len;
25701 + unsigned char *cp;
25702 +
25703 ++ skb->dev = ppp->dev;
25704 ++
25705 + if (proto < 0x8000) {
25706 + #ifdef CONFIG_PPP_FILTER
25707 + /* check if we should pass this packet */
25708 +diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
25709 +index 6a212c085435b..5b01642ca44e0 100644
25710 +--- a/drivers/net/wan/farsync.c
25711 ++++ b/drivers/net/wan/farsync.c
25712 +@@ -2545,6 +2545,7 @@ fst_remove_one(struct pci_dev *pdev)
25713 + struct net_device *dev = port_to_dev(&card->ports[i]);
25714 +
25715 + unregister_hdlc_device(dev);
25716 ++ free_netdev(dev);
25717 + }
25718 +
25719 + fst_disable_intr(card);
25720 +@@ -2564,6 +2565,7 @@ fst_remove_one(struct pci_dev *pdev)
25721 + card->tx_dma_handle_card);
25722 + }
25723 + fst_card_array[card->card_no] = NULL;
25724 ++ kfree(card);
25725 + }
25726 +
25727 + static struct pci_driver fst_driver = {
25728 +diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
25729 +index 6f937d2cc1263..ce3d613fa36c4 100644
25730 +--- a/drivers/net/wireless/ath/ar5523/ar5523.c
25731 ++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
25732 +@@ -241,6 +241,11 @@ static void ar5523_cmd_tx_cb(struct urb *urb)
25733 + }
25734 + }
25735 +
25736 ++static void ar5523_cancel_tx_cmd(struct ar5523 *ar)
25737 ++{
25738 ++ usb_kill_urb(ar->tx_cmd.urb_tx);
25739 ++}
25740 ++
25741 + static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
25742 + int ilen, void *odata, int olen, int flags)
25743 + {
25744 +@@ -280,6 +285,7 @@ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
25745 + }
25746 +
25747 + if (!wait_for_completion_timeout(&cmd->done, 2 * HZ)) {
25748 ++ ar5523_cancel_tx_cmd(ar);
25749 + cmd->odata = NULL;
25750 + ar5523_err(ar, "timeout waiting for command %02x reply\n",
25751 + code);
25752 +diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
25753 +index 400f332a7ff01..5eb131ab916fd 100644
25754 +--- a/drivers/net/wireless/ath/ath10k/core.c
25755 ++++ b/drivers/net/wireless/ath/ath10k/core.c
25756 +@@ -99,6 +99,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25757 + .dynamic_sar_support = false,
25758 + .hw_restart_disconnect = false,
25759 + .use_fw_tx_credits = true,
25760 ++ .delay_unmap_buffer = false,
25761 + },
25762 + {
25763 + .id = QCA988X_HW_2_0_VERSION,
25764 +@@ -138,6 +139,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25765 + .dynamic_sar_support = false,
25766 + .hw_restart_disconnect = false,
25767 + .use_fw_tx_credits = true,
25768 ++ .delay_unmap_buffer = false,
25769 + },
25770 + {
25771 + .id = QCA9887_HW_1_0_VERSION,
25772 +@@ -178,6 +180,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25773 + .dynamic_sar_support = false,
25774 + .hw_restart_disconnect = false,
25775 + .use_fw_tx_credits = true,
25776 ++ .delay_unmap_buffer = false,
25777 + },
25778 + {
25779 + .id = QCA6174_HW_3_2_VERSION,
25780 +@@ -213,6 +216,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25781 + .dynamic_sar_support = true,
25782 + .hw_restart_disconnect = false,
25783 + .use_fw_tx_credits = true,
25784 ++ .delay_unmap_buffer = false,
25785 + },
25786 + {
25787 + .id = QCA6174_HW_2_1_VERSION,
25788 +@@ -252,6 +256,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25789 + .dynamic_sar_support = false,
25790 + .hw_restart_disconnect = false,
25791 + .use_fw_tx_credits = true,
25792 ++ .delay_unmap_buffer = false,
25793 + },
25794 + {
25795 + .id = QCA6174_HW_2_1_VERSION,
25796 +@@ -291,6 +296,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25797 + .dynamic_sar_support = false,
25798 + .hw_restart_disconnect = false,
25799 + .use_fw_tx_credits = true,
25800 ++ .delay_unmap_buffer = false,
25801 + },
25802 + {
25803 + .id = QCA6174_HW_3_0_VERSION,
25804 +@@ -330,6 +336,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25805 + .dynamic_sar_support = false,
25806 + .hw_restart_disconnect = false,
25807 + .use_fw_tx_credits = true,
25808 ++ .delay_unmap_buffer = false,
25809 + },
25810 + {
25811 + .id = QCA6174_HW_3_2_VERSION,
25812 +@@ -373,6 +380,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25813 + .dynamic_sar_support = true,
25814 + .hw_restart_disconnect = false,
25815 + .use_fw_tx_credits = true,
25816 ++ .delay_unmap_buffer = false,
25817 + },
25818 + {
25819 + .id = QCA99X0_HW_2_0_DEV_VERSION,
25820 +@@ -418,6 +426,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25821 + .dynamic_sar_support = false,
25822 + .hw_restart_disconnect = false,
25823 + .use_fw_tx_credits = true,
25824 ++ .delay_unmap_buffer = false,
25825 + },
25826 + {
25827 + .id = QCA9984_HW_1_0_DEV_VERSION,
25828 +@@ -470,6 +479,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25829 + .dynamic_sar_support = false,
25830 + .hw_restart_disconnect = false,
25831 + .use_fw_tx_credits = true,
25832 ++ .delay_unmap_buffer = false,
25833 + },
25834 + {
25835 + .id = QCA9888_HW_2_0_DEV_VERSION,
25836 +@@ -519,6 +529,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25837 + .dynamic_sar_support = false,
25838 + .hw_restart_disconnect = false,
25839 + .use_fw_tx_credits = true,
25840 ++ .delay_unmap_buffer = false,
25841 + },
25842 + {
25843 + .id = QCA9377_HW_1_0_DEV_VERSION,
25844 +@@ -558,6 +569,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25845 + .dynamic_sar_support = false,
25846 + .hw_restart_disconnect = false,
25847 + .use_fw_tx_credits = true,
25848 ++ .delay_unmap_buffer = false,
25849 + },
25850 + {
25851 + .id = QCA9377_HW_1_1_DEV_VERSION,
25852 +@@ -599,6 +611,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25853 + .dynamic_sar_support = false,
25854 + .hw_restart_disconnect = false,
25855 + .use_fw_tx_credits = true,
25856 ++ .delay_unmap_buffer = false,
25857 + },
25858 + {
25859 + .id = QCA9377_HW_1_1_DEV_VERSION,
25860 +@@ -631,6 +644,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25861 + .dynamic_sar_support = false,
25862 + .hw_restart_disconnect = false,
25863 + .use_fw_tx_credits = true,
25864 ++ .delay_unmap_buffer = false,
25865 + },
25866 + {
25867 + .id = QCA4019_HW_1_0_DEV_VERSION,
25868 +@@ -677,6 +691,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25869 + .dynamic_sar_support = false,
25870 + .hw_restart_disconnect = false,
25871 + .use_fw_tx_credits = true,
25872 ++ .delay_unmap_buffer = false,
25873 + },
25874 + {
25875 + .id = WCN3990_HW_1_0_DEV_VERSION,
25876 +@@ -709,6 +724,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
25877 + .dynamic_sar_support = true,
25878 + .hw_restart_disconnect = true,
25879 + .use_fw_tx_credits = false,
25880 ++ .delay_unmap_buffer = true,
25881 + },
25882 + };
25883 +
25884 +diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
25885 +index 6d1784f74bea4..5bfeecb95fca2 100644
25886 +--- a/drivers/net/wireless/ath/ath10k/htc.c
25887 ++++ b/drivers/net/wireless/ath/ath10k/htc.c
25888 +@@ -56,6 +56,15 @@ void ath10k_htc_notify_tx_completion(struct ath10k_htc_ep *ep,
25889 + ath10k_dbg(ar, ATH10K_DBG_HTC, "%s: ep %d skb %pK\n", __func__,
25890 + ep->eid, skb);
25891 +
25892 ++ /* A corner case where the copy completion is reaching to host but still
25893 ++ * copy engine is processing it due to which host unmaps corresponding
25894 ++ * memory and causes SMMU fault, hence as workaround adding delay
25895 ++ * the unmapping memory to avoid SMMU faults.
25896 ++ */
25897 ++ if (ar->hw_params.delay_unmap_buffer &&
25898 ++ ep->ul_pipe_id == 3)
25899 ++ mdelay(2);
25900 ++
25901 + hdr = (struct ath10k_htc_hdr *)skb->data;
25902 + ath10k_htc_restore_tx_skb(ep->htc, skb);
25903 +
25904 +diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
25905 +index 1b99f3a39a113..9643031a4427a 100644
25906 +--- a/drivers/net/wireless/ath/ath10k/hw.h
25907 ++++ b/drivers/net/wireless/ath/ath10k/hw.h
25908 +@@ -637,6 +637,8 @@ struct ath10k_hw_params {
25909 + bool hw_restart_disconnect;
25910 +
25911 + bool use_fw_tx_credits;
25912 ++
25913 ++ bool delay_unmap_buffer;
25914 + };
25915 +
25916 + struct htt_resp;
25917 +diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
25918 +index e56c6a6b13791..728d607289c36 100644
25919 +--- a/drivers/net/wireless/ath/ath10k/pci.c
25920 ++++ b/drivers/net/wireless/ath/ath10k/pci.c
25921 +@@ -3792,18 +3792,22 @@ static struct pci_driver ath10k_pci_driver = {
25922 +
25923 + static int __init ath10k_pci_init(void)
25924 + {
25925 +- int ret;
25926 ++ int ret1, ret2;
25927 +
25928 +- ret = pci_register_driver(&ath10k_pci_driver);
25929 +- if (ret)
25930 ++ ret1 = pci_register_driver(&ath10k_pci_driver);
25931 ++ if (ret1)
25932 + printk(KERN_ERR "failed to register ath10k pci driver: %d\n",
25933 +- ret);
25934 ++ ret1);
25935 +
25936 +- ret = ath10k_ahb_init();
25937 +- if (ret)
25938 +- printk(KERN_ERR "ahb init failed: %d\n", ret);
25939 ++ ret2 = ath10k_ahb_init();
25940 ++ if (ret2)
25941 ++ printk(KERN_ERR "ahb init failed: %d\n", ret2);
25942 +
25943 +- return ret;
25944 ++ if (ret1 && ret2)
25945 ++ return ret1;
25946 ++
25947 ++ /* registered to at least one bus */
25948 ++ return 0;
25949 + }
25950 + module_init(ath10k_pci_init);
25951 +
25952 +diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
25953 +index cf2f52cc4e30d..c20e84e031fad 100644
25954 +--- a/drivers/net/wireless/ath/ath11k/core.h
25955 ++++ b/drivers/net/wireless/ath/ath11k/core.h
25956 +@@ -505,6 +505,8 @@ struct ath11k_sta {
25957 + u64 ps_start_jiffies;
25958 + u64 ps_total_duration;
25959 + bool peer_current_ps_valid;
25960 ++
25961 ++ u32 bw_prev;
25962 + };
25963 +
25964 + #define ATH11K_MIN_5G_FREQ 4150
25965 +diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
25966 +index 2d1e3fd9b526c..ef7617802491e 100644
25967 +--- a/drivers/net/wireless/ath/ath11k/mac.c
25968 ++++ b/drivers/net/wireless/ath/ath11k/mac.c
25969 +@@ -4215,10 +4215,11 @@ static void ath11k_sta_rc_update_wk(struct work_struct *wk)
25970 + const u8 *ht_mcs_mask;
25971 + const u16 *vht_mcs_mask;
25972 + const u16 *he_mcs_mask;
25973 +- u32 changed, bw, nss, smps;
25974 ++ u32 changed, bw, nss, smps, bw_prev;
25975 + int err, num_vht_rates, num_he_rates;
25976 + const struct cfg80211_bitrate_mask *mask;
25977 + struct peer_assoc_params peer_arg;
25978 ++ enum wmi_phy_mode peer_phymode;
25979 +
25980 + arsta = container_of(wk, struct ath11k_sta, update_wk);
25981 + sta = container_of((void *)arsta, struct ieee80211_sta, drv_priv);
25982 +@@ -4239,6 +4240,7 @@ static void ath11k_sta_rc_update_wk(struct work_struct *wk)
25983 + arsta->changed = 0;
25984 +
25985 + bw = arsta->bw;
25986 ++ bw_prev = arsta->bw_prev;
25987 + nss = arsta->nss;
25988 + smps = arsta->smps;
25989 +
25990 +@@ -4252,26 +4254,57 @@ static void ath11k_sta_rc_update_wk(struct work_struct *wk)
25991 + ath11k_mac_max_he_nss(he_mcs_mask)));
25992 +
25993 + if (changed & IEEE80211_RC_BW_CHANGED) {
25994 +- /* Send peer assoc command before set peer bandwidth param to
25995 +- * avoid the mismatch between the peer phymode and the peer
25996 +- * bandwidth.
25997 +- */
25998 +- ath11k_peer_assoc_prepare(ar, arvif->vif, sta, &peer_arg, true);
25999 +-
26000 +- peer_arg.is_assoc = false;
26001 +- err = ath11k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
26002 +- if (err) {
26003 +- ath11k_warn(ar->ab, "failed to send peer assoc for STA %pM vdev %i: %d\n",
26004 +- sta->addr, arvif->vdev_id, err);
26005 +- } else if (wait_for_completion_timeout(&ar->peer_assoc_done, 1 * HZ)) {
26006 ++ /* Get the peer phymode */
26007 ++ ath11k_peer_assoc_h_phymode(ar, arvif->vif, sta, &peer_arg);
26008 ++ peer_phymode = peer_arg.peer_phymode;
26009 ++
26010 ++ ath11k_dbg(ar->ab, ATH11K_DBG_MAC, "mac update sta %pM peer bw %d phymode %d\n",
26011 ++ sta->addr, bw, peer_phymode);
26012 ++
26013 ++ if (bw > bw_prev) {
26014 ++ /* BW is upgraded. In this case we send WMI_PEER_PHYMODE
26015 ++ * followed by WMI_PEER_CHWIDTH
26016 ++ */
26017 ++ ath11k_dbg(ar->ab, ATH11K_DBG_MAC, "mac BW upgrade for sta %pM new BW %d, old BW %d\n",
26018 ++ sta->addr, bw, bw_prev);
26019 ++
26020 ++ err = ath11k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
26021 ++ WMI_PEER_PHYMODE, peer_phymode);
26022 ++
26023 ++ if (err) {
26024 ++ ath11k_warn(ar->ab, "failed to update STA %pM peer phymode %d: %d\n",
26025 ++ sta->addr, peer_phymode, err);
26026 ++ goto err_rc_bw_changed;
26027 ++ }
26028 ++
26029 + err = ath11k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
26030 + WMI_PEER_CHWIDTH, bw);
26031 ++
26032 + if (err)
26033 + ath11k_warn(ar->ab, "failed to update STA %pM peer bw %d: %d\n",
26034 + sta->addr, bw, err);
26035 + } else {
26036 +- ath11k_warn(ar->ab, "failed to get peer assoc conf event for %pM vdev %i\n",
26037 +- sta->addr, arvif->vdev_id);
26038 ++ /* BW is downgraded. In this case we send WMI_PEER_CHWIDTH
26039 ++ * followed by WMI_PEER_PHYMODE
26040 ++ */
26041 ++ ath11k_dbg(ar->ab, ATH11K_DBG_MAC, "mac BW downgrade for sta %pM new BW %d,old BW %d\n",
26042 ++ sta->addr, bw, bw_prev);
26043 ++
26044 ++ err = ath11k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
26045 ++ WMI_PEER_CHWIDTH, bw);
26046 ++
26047 ++ if (err) {
26048 ++ ath11k_warn(ar->ab, "failed to update STA %pM peer bw %d: %d\n",
26049 ++ sta->addr, bw, err);
26050 ++ goto err_rc_bw_changed;
26051 ++ }
26052 ++
26053 ++ err = ath11k_wmi_set_peer_param(ar, sta->addr, arvif->vdev_id,
26054 ++ WMI_PEER_PHYMODE, peer_phymode);
26055 ++
26056 ++ if (err)
26057 ++ ath11k_warn(ar->ab, "failed to update STA %pM peer phymode %d: %d\n",
26058 ++ sta->addr, peer_phymode, err);
26059 + }
26060 + }
26061 +
26062 +@@ -4352,6 +4385,7 @@ static void ath11k_sta_rc_update_wk(struct work_struct *wk)
26063 + }
26064 + }
26065 +
26066 ++err_rc_bw_changed:
26067 + mutex_unlock(&ar->conf_mutex);
26068 + }
26069 +
26070 +@@ -4505,6 +4539,34 @@ exit:
26071 + return ret;
26072 + }
26073 +
26074 ++static u32 ath11k_mac_ieee80211_sta_bw_to_wmi(struct ath11k *ar,
26075 ++ struct ieee80211_sta *sta)
26076 ++{
26077 ++ u32 bw = WMI_PEER_CHWIDTH_20MHZ;
26078 ++
26079 ++ switch (sta->deflink.bandwidth) {
26080 ++ case IEEE80211_STA_RX_BW_20:
26081 ++ bw = WMI_PEER_CHWIDTH_20MHZ;
26082 ++ break;
26083 ++ case IEEE80211_STA_RX_BW_40:
26084 ++ bw = WMI_PEER_CHWIDTH_40MHZ;
26085 ++ break;
26086 ++ case IEEE80211_STA_RX_BW_80:
26087 ++ bw = WMI_PEER_CHWIDTH_80MHZ;
26088 ++ break;
26089 ++ case IEEE80211_STA_RX_BW_160:
26090 ++ bw = WMI_PEER_CHWIDTH_160MHZ;
26091 ++ break;
26092 ++ default:
26093 ++ ath11k_warn(ar->ab, "Invalid bandwidth %d for %pM\n",
26094 ++ sta->deflink.bandwidth, sta->addr);
26095 ++ bw = WMI_PEER_CHWIDTH_20MHZ;
26096 ++ break;
26097 ++ }
26098 ++
26099 ++ return bw;
26100 ++}
26101 ++
26102 + static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw,
26103 + struct ieee80211_vif *vif,
26104 + struct ieee80211_sta *sta,
26105 +@@ -4590,6 +4652,12 @@ static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw,
26106 + if (ret)
26107 + ath11k_warn(ar->ab, "Failed to associate station: %pM\n",
26108 + sta->addr);
26109 ++
26110 ++ spin_lock_bh(&ar->data_lock);
26111 ++ /* Set arsta bw and prev bw */
26112 ++ arsta->bw = ath11k_mac_ieee80211_sta_bw_to_wmi(ar, sta);
26113 ++ arsta->bw_prev = arsta->bw;
26114 ++ spin_unlock_bh(&ar->data_lock);
26115 + } else if (old_state == IEEE80211_STA_ASSOC &&
26116 + new_state == IEEE80211_STA_AUTHORIZED) {
26117 + spin_lock_bh(&ar->ab->base_lock);
26118 +@@ -4713,28 +4781,8 @@ static void ath11k_mac_op_sta_rc_update(struct ieee80211_hw *hw,
26119 + spin_lock_bh(&ar->data_lock);
26120 +
26121 + if (changed & IEEE80211_RC_BW_CHANGED) {
26122 +- bw = WMI_PEER_CHWIDTH_20MHZ;
26123 +-
26124 +- switch (sta->deflink.bandwidth) {
26125 +- case IEEE80211_STA_RX_BW_20:
26126 +- bw = WMI_PEER_CHWIDTH_20MHZ;
26127 +- break;
26128 +- case IEEE80211_STA_RX_BW_40:
26129 +- bw = WMI_PEER_CHWIDTH_40MHZ;
26130 +- break;
26131 +- case IEEE80211_STA_RX_BW_80:
26132 +- bw = WMI_PEER_CHWIDTH_80MHZ;
26133 +- break;
26134 +- case IEEE80211_STA_RX_BW_160:
26135 +- bw = WMI_PEER_CHWIDTH_160MHZ;
26136 +- break;
26137 +- default:
26138 +- ath11k_warn(ar->ab, "Invalid bandwidth %d in rc update for %pM\n",
26139 +- sta->deflink.bandwidth, sta->addr);
26140 +- bw = WMI_PEER_CHWIDTH_20MHZ;
26141 +- break;
26142 +- }
26143 +-
26144 ++ bw = ath11k_mac_ieee80211_sta_bw_to_wmi(ar, sta);
26145 ++ arsta->bw_prev = arsta->bw;
26146 + arsta->bw = bw;
26147 + }
26148 +
26149 +diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
26150 +index 51de2208b7899..8358fe08c2344 100644
26151 +--- a/drivers/net/wireless/ath/ath11k/qmi.c
26152 ++++ b/drivers/net/wireless/ath/ath11k/qmi.c
26153 +@@ -3087,6 +3087,9 @@ static const struct qmi_msg_handler ath11k_qmi_msg_handlers[] = {
26154 + sizeof(struct qmi_wlfw_fw_init_done_ind_msg_v01),
26155 + .fn = ath11k_qmi_msg_fw_init_done_cb,
26156 + },
26157 ++
26158 ++ /* end of list */
26159 ++ {},
26160 + };
26161 +
26162 + static int ath11k_qmi_ops_new_server(struct qmi_handle *qmi_hdl,
26163 +diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
26164 +index 4d9002a9d082c..1a2e0c7eeb023 100644
26165 +--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
26166 ++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
26167 +@@ -708,14 +708,13 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
26168 + struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
26169 + struct hif_device_usb *hif_dev = rx_buf->hif_dev;
26170 + struct sk_buff *skb = rx_buf->skb;
26171 +- struct sk_buff *nskb;
26172 + int ret;
26173 +
26174 + if (!skb)
26175 + return;
26176 +
26177 + if (!hif_dev)
26178 +- goto free;
26179 ++ goto free_skb;
26180 +
26181 + switch (urb->status) {
26182 + case 0:
26183 +@@ -724,7 +723,7 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
26184 + case -ECONNRESET:
26185 + case -ENODEV:
26186 + case -ESHUTDOWN:
26187 +- goto free;
26188 ++ goto free_skb;
26189 + default:
26190 + skb_reset_tail_pointer(skb);
26191 + skb_trim(skb, 0);
26192 +@@ -735,25 +734,27 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
26193 + if (likely(urb->actual_length != 0)) {
26194 + skb_put(skb, urb->actual_length);
26195 +
26196 +- /* Process the command first */
26197 ++ /*
26198 ++ * Process the command first.
26199 ++ * skb is either freed here or passed to be
26200 ++ * managed to another callback function.
26201 ++ */
26202 + ath9k_htc_rx_msg(hif_dev->htc_handle, skb,
26203 + skb->len, USB_REG_IN_PIPE);
26204 +
26205 +-
26206 +- nskb = alloc_skb(MAX_REG_IN_BUF_SIZE, GFP_ATOMIC);
26207 +- if (!nskb) {
26208 ++ skb = alloc_skb(MAX_REG_IN_BUF_SIZE, GFP_ATOMIC);
26209 ++ if (!skb) {
26210 + dev_err(&hif_dev->udev->dev,
26211 + "ath9k_htc: REG_IN memory allocation failure\n");
26212 +- urb->context = NULL;
26213 +- return;
26214 ++ goto free_rx_buf;
26215 + }
26216 +
26217 +- rx_buf->skb = nskb;
26218 ++ rx_buf->skb = skb;
26219 +
26220 + usb_fill_int_urb(urb, hif_dev->udev,
26221 + usb_rcvintpipe(hif_dev->udev,
26222 + USB_REG_IN_PIPE),
26223 +- nskb->data, MAX_REG_IN_BUF_SIZE,
26224 ++ skb->data, MAX_REG_IN_BUF_SIZE,
26225 + ath9k_hif_usb_reg_in_cb, rx_buf, 1);
26226 + }
26227 +
26228 +@@ -762,12 +763,13 @@ resubmit:
26229 + ret = usb_submit_urb(urb, GFP_ATOMIC);
26230 + if (ret) {
26231 + usb_unanchor_urb(urb);
26232 +- goto free;
26233 ++ goto free_skb;
26234 + }
26235 +
26236 + return;
26237 +-free:
26238 ++free_skb:
26239 + kfree_skb(skb);
26240 ++free_rx_buf:
26241 + kfree(rx_buf);
26242 + urb->context = NULL;
26243 + }
26244 +@@ -780,14 +782,10 @@ static void ath9k_hif_usb_dealloc_tx_urbs(struct hif_device_usb *hif_dev)
26245 + spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
26246 + list_for_each_entry_safe(tx_buf, tx_buf_tmp,
26247 + &hif_dev->tx.tx_buf, list) {
26248 +- usb_get_urb(tx_buf->urb);
26249 +- spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
26250 +- usb_kill_urb(tx_buf->urb);
26251 + list_del(&tx_buf->list);
26252 + usb_free_urb(tx_buf->urb);
26253 + kfree(tx_buf->buf);
26254 + kfree(tx_buf);
26255 +- spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
26256 + }
26257 + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
26258 +
26259 +@@ -1329,10 +1327,24 @@ static int send_eject_command(struct usb_interface *interface)
26260 + static int ath9k_hif_usb_probe(struct usb_interface *interface,
26261 + const struct usb_device_id *id)
26262 + {
26263 ++ struct usb_endpoint_descriptor *bulk_in, *bulk_out, *int_in, *int_out;
26264 + struct usb_device *udev = interface_to_usbdev(interface);
26265 ++ struct usb_host_interface *alt;
26266 + struct hif_device_usb *hif_dev;
26267 + int ret = 0;
26268 +
26269 ++ /* Verify the expected endpoints are present */
26270 ++ alt = interface->cur_altsetting;
26271 ++ if (usb_find_common_endpoints(alt, &bulk_in, &bulk_out, &int_in, &int_out) < 0 ||
++ usb_endpoint_num(bulk_in) != USB_WLAN_RX_PIPE ||
++ usb_endpoint_num(bulk_out) != USB_WLAN_TX_PIPE ||
++ usb_endpoint_num(int_in) != USB_REG_IN_PIPE ||
++ usb_endpoint_num(int_out) != USB_REG_OUT_PIPE) {
++ dev_err(&udev->dev,
++ "ath9k_htc: Device endpoint numbers are not the expected ones\n");
++ return -ENODEV;
++ }
++
+ if (id->driver_info == STORAGE_DEVICE)
+ return send_eject_command(interface);
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index 74020fa100659..22344e68fd597 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -305,8 +305,12 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
+ brcmf_info("Firmware: %s %s\n", ri->chipname, buf);
+
+ /* locate firmware version number for ethtool */
+- ptr = strrchr(buf, ' ') + 1;
+- strscpy(ifp->drvr->fwver, ptr, sizeof(ifp->drvr->fwver));
++ ptr = strrchr(buf, ' ');
++ if (!ptr) {
++ bphy_err(drvr, "Retrieving version number failed");
++ goto done;
++ }
++ strscpy(ifp->drvr->fwver, ptr + 1, sizeof(ifp->drvr->fwver));
+
+ /* Query for 'clmver' to get CLM version info from firmware */
+ memset(buf, 0, sizeof(buf));
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index f2207793f6e27..09d2f2dc2b46f 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -803,6 +803,11 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev,
+ u32 i, j;
+ char end = '\0';
+
++ if (chiprev >= BITS_PER_TYPE(u32)) {
++ brcmf_err("Invalid chip revision %u\n", chiprev);
++ return NULL;
++ }
++
+ for (i = 0; i < table_size; i++) {
+ if (mapping_table[i].chipid == chip &&
+ mapping_table[i].revmask & BIT(chiprev))
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 80083f9ea3116..5630f6e718e12 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -726,7 +726,7 @@ static int brcmf_pcie_exit_download_state(struct brcmf_pciedev_info *devinfo,
+ }
+
+ if (!brcmf_chip_set_active(devinfo->ci, resetintr))
+- return -EINVAL;
++ return -EIO;
+ return 0;
+ }
+
+@@ -1218,6 +1218,10 @@ static int brcmf_pcie_init_ringbuffers(struct brcmf_pciedev_info *devinfo)
+ BRCMF_NROF_H2D_COMMON_MSGRINGS;
+ max_completionrings = BRCMF_NROF_D2H_COMMON_MSGRINGS;
+ }
++ if (max_flowrings > 256) {
++ brcmf_err(bus, "invalid max_flowrings(%d)\n", max_flowrings);
++ return -EIO;
++ }
+
+ if (devinfo->dma_idx_sz != 0) {
+ bufsz = (max_submissionrings + max_completionrings) *
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 465d95d837592..e265a2e411a09 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -3414,6 +3414,7 @@ static int brcmf_sdio_download_firmware(struct brcmf_sdio *bus,
+ /* Take arm out of reset */
+ if (!brcmf_chip_set_active(bus->ci, rstvec)) {
+ brcmf_err("error getting out of ARM core reset\n");
++ bcmerror = -EIO;
+ goto err;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h b/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h
+index 67122cfa22920..5409699c9a1fd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h
++++ b/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h
+@@ -446,9 +446,10 @@ void iwl_mei_host_associated(const struct iwl_mei_conn_info *conn_info,
+ void iwl_mei_host_disassociated(void);
+
+ /**
+- * iwl_mei_device_down() - must be called when the device is down
++ * iwl_mei_device_state() - must be called when the device changes up/down state
++ * @up: true if the device is up, false otherwise.
+ */
+-void iwl_mei_device_down(void);
++void iwl_mei_device_state(bool up);
+
+ #else
+
+@@ -497,7 +498,7 @@ static inline void iwl_mei_host_associated(const struct iwl_mei_conn_info *conn_
+ static inline void iwl_mei_host_disassociated(void)
+ {}
+
+-static inline void iwl_mei_device_down(void)
++static inline void iwl_mei_device_state(bool up)
+ {}
+
+ #endif /* CONFIG_IWLMEI */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mei/main.c b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+index 357f14626cf43..c0142093c7682 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mei/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+@@ -147,9 +147,13 @@ struct iwl_mei_filters {
+ * to send CSME_OWNERSHIP_CONFIRMED when the driver completes its down
+ * flow.
+ * @link_prot_state: true when we are in link protection PASSIVE
++ * @device_down: true if the device is down. Used to remember to send
++ * CSME_OWNERSHIP_CONFIRMED when the driver is already down.
+ * @csa_throttle_end_wk: used when &csa_throttled is true
+ * @data_q_lock: protects the access to the data queues which are
+ * accessed without the mutex.
++ * @netdev_work: used to defer registering and unregistering of the netdev to
++ * avoid taking the rtnl lock in the SAP messages handlers.
+ * @sap_seq_no: the sequence number for the SAP messages
+ * @seq_no: the sequence number for the SAP messages
+ * @dbgfs_dir: the debugfs dir entry
+@@ -167,8 +171,10 @@ struct iwl_mei {
+ bool csa_throttled;
+ bool csme_taking_ownership;
+ bool link_prot_state;
++ bool device_down;
+ struct delayed_work csa_throttle_end_wk;
+ spinlock_t data_q_lock;
++ struct work_struct netdev_work;
+
+ atomic_t sap_seq_no;
+ atomic_t seq_no;
+@@ -588,13 +594,38 @@ static rx_handler_result_t iwl_mei_rx_handler(struct sk_buff **pskb)
+ return res;
+ }
+
++static void iwl_mei_netdev_work(struct work_struct *wk)
++{
++ struct iwl_mei *mei =
++ container_of(wk, struct iwl_mei, netdev_work);
++ struct net_device *netdev;
++
++ /*
++ * First take rtnl and only then the mutex to avoid an ABBA
++ * with iwl_mei_set_netdev()
++ */
++ rtnl_lock();
++ mutex_lock(&iwl_mei_mutex);
++
++ netdev = rcu_dereference_protected(iwl_mei_cache.netdev,
++ lockdep_is_held(&iwl_mei_mutex));
++ if (netdev) {
++ if (mei->amt_enabled)
++ netdev_rx_handler_register(netdev, iwl_mei_rx_handler,
++ mei);
++ else
++ netdev_rx_handler_unregister(netdev);
++ }
++
++ mutex_unlock(&iwl_mei_mutex);
++ rtnl_unlock();
++}
++
+ static void
+ iwl_mei_handle_rx_start_ok(struct mei_cl_device *cldev,
+ const struct iwl_sap_me_msg_start_ok *rsp,
+ ssize_t len)
+ {
+- struct iwl_mei *mei = mei_cldev_get_drvdata(cldev);
+-
+ if (len != sizeof(*rsp)) {
+ dev_err(&cldev->dev,
+ "got invalid SAP_ME_MSG_START_OK from CSME firmware\n");
+@@ -613,13 +644,10 @@ iwl_mei_handle_rx_start_ok(struct mei_cl_device *cldev,
+
+ mutex_lock(&iwl_mei_mutex);
+ set_bit(IWL_MEI_STATUS_SAP_CONNECTED, &iwl_mei_status);
+- /* wifi driver has registered already */
+- if (iwl_mei_cache.ops) {
+- iwl_mei_send_sap_msg(mei->cldev,
+- SAP_MSG_NOTIF_WIFIDR_UP);
+- iwl_mei_cache.ops->sap_connected(iwl_mei_cache.priv);
+- }
+-
++ /*
++ * We'll receive AMT_STATE SAP message in a bit and
++ * that will continue the flow
++ */
+ mutex_unlock(&iwl_mei_mutex);
+ }
+
+@@ -712,6 +740,13 @@ static void iwl_mei_set_init_conf(struct iwl_mei *mei)
+ .val = cpu_to_le32(iwl_mei_cache.rf_kill),
+ };
+
++ /* wifi driver has registered already */
++ if (iwl_mei_cache.ops) {
++ iwl_mei_send_sap_msg(mei->cldev,
++ SAP_MSG_NOTIF_WIFIDR_UP);
++ iwl_mei_cache.ops->sap_connected(iwl_mei_cache.priv);
++ }
++
+ iwl_mei_send_sap_msg(mei->cldev, SAP_MSG_NOTIF_WHO_OWNS_NIC);
+
+ if (iwl_mei_cache.conn_info) {
+@@ -738,38 +773,23 @@ static void iwl_mei_handle_amt_state(struct mei_cl_device *cldev,
+ const struct iwl_sap_msg_dw *dw)
+ {
+ struct iwl_mei *mei = mei_cldev_get_drvdata(cldev);
+- struct net_device *netdev;
+
+- /*
+- * First take rtnl and only then the mutex to avoid an ABBA
+- * with iwl_mei_set_netdev()
+- */
+- rtnl_lock();
+ mutex_lock(&iwl_mei_mutex);
+
+- netdev = rcu_dereference_protected(iwl_mei_cache.netdev,
+- lockdep_is_held(&iwl_mei_mutex));
+-
+ if (mei->amt_enabled == !!le32_to_cpu(dw->val))
+ goto out;
+
+ mei->amt_enabled = dw->val;
+
+- if (mei->amt_enabled) {
+- if (netdev)
+- netdev_rx_handler_register(netdev, iwl_mei_rx_handler, mei);
+-
++ if (mei->amt_enabled)
+ iwl_mei_set_init_conf(mei);
+- } else {
+- if (iwl_mei_cache.ops)
+- iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, false);
+- if (netdev)
+- netdev_rx_handler_unregister(netdev);
+- }
++ else if (iwl_mei_cache.ops)
++ iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, false, false);
++
++ schedule_work(&mei->netdev_work);
+
+ out:
+ mutex_unlock(&iwl_mei_mutex);
+- rtnl_unlock();
+ }
+
+ static void iwl_mei_handle_nic_owner(struct mei_cl_device *cldev,
+@@ -798,14 +818,18 @@ static void iwl_mei_handle_csme_taking_ownership(struct mei_cl_device *cldev,
+
+ mei->got_ownership = false;
+
+- /*
+- * Remember to send CSME_OWNERSHIP_CONFIRMED when the wifi driver
+- * is finished taking the device down.
+- */
+- mei->csme_taking_ownership = true;
++ if (iwl_mei_cache.ops && !mei->device_down) {
++ /*
++ * Remember to send CSME_OWNERSHIP_CONFIRMED when the wifi
++ * driver is finished taking the device down.
++ */
++ mei->csme_taking_ownership = true;
+
+- if (iwl_mei_cache.ops)
+- iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, true);
++ iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, true, true);
++ } else {
++ iwl_mei_send_sap_msg(cldev,
++ SAP_MSG_NOTIF_CSME_OWNERSHIP_CONFIRMED);
++ }
+ }
+
+ static void iwl_mei_handle_nvm(struct mei_cl_device *cldev,
+@@ -1413,10 +1437,7 @@ void iwl_mei_host_associated(const struct iwl_mei_conn_info *conn_info,
+
+ mei = mei_cldev_get_drvdata(iwl_mei_global_cldev);
+
+- if (!mei)
+- goto out;
+-
+- if (!mei->amt_enabled)
++ if (!mei && !mei->amt_enabled)
+ goto out;
+
+ iwl_mei_send_sap_msg_payload(mei->cldev, &msg.hdr);
+@@ -1445,7 +1466,7 @@ void iwl_mei_host_disassociated(void)
+
+ mei = mei_cldev_get_drvdata(iwl_mei_global_cldev);
+
+- if (!mei)
++ if (!mei && !mei->amt_enabled)
+ goto out;
+
+ iwl_mei_send_sap_msg_payload(mei->cldev, &msg.hdr);
+@@ -1481,7 +1502,7 @@ void iwl_mei_set_rfkill_state(bool hw_rfkill, bool sw_rfkill)
+
+ mei = mei_cldev_get_drvdata(iwl_mei_global_cldev);
+
+- if (!mei)
++ if (!mei && !mei->amt_enabled)
+ goto out;
+
+ iwl_mei_send_sap_msg_payload(mei->cldev, &msg.hdr);
+@@ -1510,7 +1531,7 @@ void iwl_mei_set_nic_info(const u8 *mac_address, const u8 *nvm_address)
+
+ mei = mei_cldev_get_drvdata(iwl_mei_global_cldev);
+
+- if (!mei)
++ if (!mei && !mei->amt_enabled)
+ goto out;
+
+ iwl_mei_send_sap_msg_payload(mei->cldev, &msg.hdr);
+@@ -1538,7 +1559,7 @@ void iwl_mei_set_country_code(u16 mcc)
+
+ mei = mei_cldev_get_drvdata(iwl_mei_global_cldev);
+
+- if (!mei)
++ if (!mei && !mei->amt_enabled)
+ goto out;
+
+ iwl_mei_send_sap_msg_payload(mei->cldev, &msg.hdr);
+@@ -1564,7 +1585,7 @@ void iwl_mei_set_power_limit(const __le16 *power_limit)
+
+ mei = mei_cldev_get_drvdata(iwl_mei_global_cldev);
+
+- if (!mei)
++ if (!mei && !mei->amt_enabled)
+ goto out;
+
+ memcpy(msg.sar_chain_info_table, power_limit, sizeof(msg.sar_chain_info_table));
+@@ -1616,7 +1637,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(iwl_mei_set_netdev);
+
+-void iwl_mei_device_down(void)
++void iwl_mei_device_state(bool up)
+ {
+ struct iwl_mei *mei;
+
+@@ -1630,7 +1651,9 @@ void iwl_mei_device_down(void)
+ if (!mei)
+ goto out;
+
+- if (!mei->csme_taking_ownership)
++ mei->device_down = !up;
++
++ if (up || !mei->csme_taking_ownership)
+ goto out;
+
+ iwl_mei_send_sap_msg(mei->cldev,
+@@ -1639,7 +1662,7 @@ void iwl_mei_device_down(void)
+ out:
+ mutex_unlock(&iwl_mei_mutex);
+ }
+-EXPORT_SYMBOL_GPL(iwl_mei_device_down);
++EXPORT_SYMBOL_GPL(iwl_mei_device_state);
+
+ int iwl_mei_register(void *priv, const struct iwl_mei_ops *ops)
+ {
+@@ -1669,9 +1692,10 @@ int iwl_mei_register(void *priv, const struct iwl_mei_ops *ops)
+
+ /* we have already a SAP connection */
+ if (iwl_mei_is_connected()) {
+- iwl_mei_send_sap_msg(mei->cldev,
+- SAP_MSG_NOTIF_WIFIDR_UP);
+- ops->rfkill(priv, mei->link_prot_state);
++ if (mei->amt_enabled)
++ iwl_mei_send_sap_msg(mei->cldev,
++ SAP_MSG_NOTIF_WIFIDR_UP);
++ ops->rfkill(priv, mei->link_prot_state, false);
+ }
+ }
+ ret = 0;
+@@ -1818,9 +1842,11 @@ static int iwl_mei_probe(struct mei_cl_device *cldev,
+ iwl_mei_csa_throttle_end_wk);
+ init_waitqueue_head(&mei->get_ownership_wq);
+ spin_lock_init(&mei->data_q_lock);
++ INIT_WORK(&mei->netdev_work, iwl_mei_netdev_work);
+
+ mei_cldev_set_drvdata(cldev, mei);
+ mei->cldev = cldev;
++ mei->device_down = true;
+
+ do {
+ ret = iwl_mei_alloc_shared_mem(cldev);
+@@ -1921,29 +1947,32 @@ static void iwl_mei_remove(struct mei_cl_device *cldev)
+
+ mutex_lock(&iwl_mei_mutex);
+
+- /*
+- * Tell CSME that we are going down so that it won't access the
+- * memory anymore, make sure this message goes through immediately.
+- */
+- mei->csa_throttled = false;
+- iwl_mei_send_sap_msg(mei->cldev,
+- SAP_MSG_NOTIF_HOST_GOES_DOWN);
++ if (mei->amt_enabled) {
++ /*
++ * Tell CSME that we are going down so that it won't access the
++ * memory anymore, make sure this message goes through immediately.
++ */
++ mei->csa_throttled = false;
++ iwl_mei_send_sap_msg(mei->cldev,
++ SAP_MSG_NOTIF_HOST_GOES_DOWN);
+
+- for (i = 0; i < SEND_SAP_MAX_WAIT_ITERATION; i++) {
+- if (!iwl_mei_host_to_me_data_pending(mei))
+- break;
++ for (i = 0; i < SEND_SAP_MAX_WAIT_ITERATION; i++) {
++ if (!iwl_mei_host_to_me_data_pending(mei))
++ break;
+
+- msleep(5);
+- }
++ msleep(20);
++ }
+
+- /*
+- * If we couldn't make sure that CSME saw the HOST_GOES_DOWN message,
+- * it means that it will probably keep reading memory that we are going
+- * to unmap and free, expect IOMMU error messages.
+- */
+- if (i == SEND_SAP_MAX_WAIT_ITERATION)
+- dev_err(&mei->cldev->dev,
+- "Couldn't get ACK from CSME on HOST_GOES_DOWN message\n");
++ /*
++ * If we couldn't make sure that CSME saw the HOST_GOES_DOWN
++ * message, it means that it will probably keep reading memory
++ * that we are going to unmap and free, expect IOMMU error
++ * messages.
++ */
++ if (i == SEND_SAP_MAX_WAIT_ITERATION)
++ dev_err(&mei->cldev->dev,
++ "Couldn't get ACK from CSME on HOST_GOES_DOWN message\n");
++ }
+
+ mutex_unlock(&iwl_mei_mutex);
+
+@@ -1976,6 +2005,7 @@ static void iwl_mei_remove(struct mei_cl_device *cldev)
+ */
+ cancel_work_sync(&mei->send_csa_msg_wk);
+ cancel_delayed_work_sync(&mei->csa_throttle_end_wk);
++ cancel_work_sync(&mei->netdev_work);
+
+ /*
+ * If someone waits for the ownership, let him know that we are going
+diff --git a/drivers/net/wireless/intel/iwlwifi/mei/net.c b/drivers/net/wireless/intel/iwlwifi/mei/net.c
+index 3472167c83707..eac46d1a397a8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mei/net.c
++++ b/drivers/net/wireless/intel/iwlwifi/mei/net.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (C) 2021 Intel Corporation
++ * Copyright (C) 2021-2022 Intel Corporation
+ */
+
+ #include <uapi/linux/if_ether.h>
+@@ -337,10 +337,14 @@ rx_handler_result_t iwl_mei_rx_filter(struct sk_buff *orig_skb,
+ if (!*pass_to_csme)
+ return RX_HANDLER_PASS;
+
+- if (ret == RX_HANDLER_PASS)
++ if (ret == RX_HANDLER_PASS) {
+ skb = skb_copy(orig_skb, GFP_ATOMIC);
+- else
++
++ if (!skb)
++ return RX_HANDLER_PASS;
++ } else {
+ skb = orig_skb;
++ }
+
+ /* CSME wants the MAC header as well, push it back */
+ skb_push(skb, skb->data - skb_mac_header(skb));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index f041e77af059e..5de34edc51fe9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1665,6 +1665,8 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ iwl_rfi_send_config_cmd(mvm, NULL);
+ }
+
++ iwl_mvm_mei_device_state(mvm, true);
++
+ IWL_DEBUG_INFO(mvm, "RT uCode started.\n");
+ return 0;
+ error:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index 97cba526e4651..1ccb3cad7cdc1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -2201,10 +2201,10 @@ static inline void iwl_mvm_mei_host_disassociated(struct iwl_mvm *mvm)
+ iwl_mei_host_disassociated();
+ }
+
+-static inline void iwl_mvm_mei_device_down(struct iwl_mvm *mvm)
++static inline void iwl_mvm_mei_device_state(struct iwl_mvm *mvm, bool up)
+ {
+ if (mvm->mei_registered)
+- iwl_mei_device_down();
++ iwl_mei_device_state(up);
+ }
+
+ static inline void iwl_mvm_mei_set_sw_rfkill_state(struct iwl_mvm *mvm)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index d2d42cd48af22..5b8e9a06f6d4a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1375,7 +1375,7 @@ void iwl_mvm_stop_device(struct iwl_mvm *mvm)
+ iwl_trans_stop_device(mvm->trans);
+ iwl_free_fw_paging(&mvm->fwrt);
+ iwl_fw_dump_conf_clear(&mvm->fwrt);
+- iwl_mvm_mei_device_down(mvm);
++ iwl_mvm_mei_device_state(mvm, false);
+ }
+
+ static void iwl_op_mode_mvm_stop(struct iwl_op_mode *op_mode)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 86d20e13bf47a..ba944175546d4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1171,9 +1171,15 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ /* From now on, we cannot access info->control */
+ iwl_mvm_skb_prepare_status(skb, dev_cmd);
+
++ /*
++ * The IV is introduced by the HW for new tx api, and it is not present
++ * in the skb, hence, don't tell iwl_mvm_mei_tx_copy_to_csme about the
++ * IV for those devices.
++ */
+ if (ieee80211_is_data(fc))
+ iwl_mvm_mei_tx_copy_to_csme(mvm, skb,
+- info->control.hw_key ?
++ info->control.hw_key &&
++ !iwl_mvm_has_new_tx_api(mvm) ?
+ info->control.hw_key->iv_len : 0);
+
+ if (iwl_trans_tx(mvm->trans, skb, dev_cmd, txq_id))
+@@ -1206,6 +1212,7 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct sk_buff_head mpdus_skbs;
+ unsigned int payload_len;
+ int ret;
++ struct sk_buff *orig_skb = skb;
+
+ if (WARN_ON_ONCE(!mvmsta))
+ return -1;
+@@ -1238,8 +1245,17 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+
+ ret = iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
+ if (ret) {
++ /* Free skbs created as part of TSO logic that have not yet been dequeued */
+ __skb_queue_purge(&mpdus_skbs);
+- return ret;
++ /* skb here is not necessarily same as skb that entered this method,
++ * so free it explicitly.
++ */
++ if (skb == orig_skb)
++ ieee80211_free_txskb(mvm->hw, skb);
++ else
++ kfree_skb(skb);
++ /* there was error, but we consumed skb one way or another, so return 0 */
++ return 0;
+ }
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 87db9498dea44..7bcf7a6b67df3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -1107,8 +1107,9 @@ static inline bool mt76_is_skb_pktid(u8 pktid)
+ static inline u8 mt76_tx_power_nss_delta(u8 nss)
+ {
+ static const u8 nss_delta[4] = { 0, 6, 9, 12 };
++ u8 idx = nss - 1;
+
+- return nss_delta[nss - 1];
++ return (idx < ARRAY_SIZE(nss_delta)) ? nss_delta[idx] : 0;
+ }
+
+ static inline bool mt76_testmode_enabled(struct mt76_phy *phy)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 011fc9729b38c..025a237c1cce8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -2834,6 +2834,9 @@ mt76_connac_mcu_send_ram_firmware(struct mt76_dev *dev,
+ len = le32_to_cpu(region->len);
+ addr = le32_to_cpu(region->addr);
+
++ if (region->feature_set & FW_FEATURE_NON_DL)
++ goto next;
++
+ if (region->feature_set & FW_FEATURE_OVERRIDE_ADDR)
+ override = addr;
+
+@@ -2850,6 +2853,7 @@ mt76_connac_mcu_send_ram_firmware(struct mt76_dev *dev,
+ return err;
+ }
+
++next:
+ offset += len;
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index 4b1a9811646fd..0bce0ce51be00 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -173,60 +173,50 @@ static void mt7915_eeprom_parse_band_config(struct mt7915_phy *phy)
+ void mt7915_eeprom_parse_hw_cap(struct mt7915_dev *dev,
+ struct mt7915_phy *phy)
+ {
+- u8 nss, nss_band, nss_band_max, *eeprom = dev->mt76.eeprom.data;
++ u8 path, nss, nss_max = 4, *eeprom = dev->mt76.eeprom.data;
+ struct mt76_phy *mphy = phy->mt76;
+- bool ext_phy = phy != &dev->phy;
+
+ mt7915_eeprom_parse_band_config(phy);
+
+- /* read tx/rx mask from eeprom */
++ /* read tx/rx path from eeprom */
+ if (is_mt7915(&dev->mt76)) {
+- nss = FIELD_GET(MT_EE_WIFI_CONF0_TX_PATH,
+- eeprom[MT_EE_WIFI_CONF]);
++ path = FIELD_GET(MT_EE_WIFI_CONF0_TX_PATH,
++ eeprom[MT_EE_WIFI_CONF]);
+ } else {
+- nss = FIELD_GET(MT_EE_WIFI_CONF0_TX_PATH,
+- eeprom[MT_EE_WIFI_CONF + phy->band_idx]);
++ path = FIELD_GET(MT_EE_WIFI_CONF0_TX_PATH,
++ eeprom[MT_EE_WIFI_CONF + phy->band_idx]);
+ }
+
+- if (!nss || nss > 4)
+- nss = 4;
++ if (!path || path > 4)
++ path = 4;
+
+ /* read tx/rx stream */
+- nss_band = nss;
+-
++ nss = path;
+ if (dev->dbdc_support) {
+ if (is_mt7915(&dev->mt76)) {
+- nss_band = FIELD_GET(MT_EE_WIFI_CONF3_TX_PATH_B0,
+- eeprom[MT_EE_WIFI_CONF + 3]);
++ path = min_t(u8, path, 2);
++ nss = FIELD_GET(MT_EE_WIFI_CONF3_TX_PATH_B0,
++ eeprom[MT_EE_WIFI_CONF + 3]);
+ if (phy->band_idx)
+- nss_band = FIELD_GET(MT_EE_WIFI_CONF3_TX_PATH_B1,
+- eeprom[MT_EE_WIFI_CONF + 3]);
++ nss = FIELD_GET(MT_EE_WIFI_CONF3_TX_PATH_B1,
++ eeprom[MT_EE_WIFI_CONF + 3]);
+ } else {
+- nss_band = FIELD_GET(MT_EE_WIFI_CONF_STREAM_NUM,
+- eeprom[MT_EE_WIFI_CONF + 2 + phy->band_idx]);
++ nss = FIELD_GET(MT_EE_WIFI_CONF_STREAM_NUM,
++ eeprom[MT_EE_WIFI_CONF + 2 + phy->band_idx]);
+ }
+
+- nss_band_max = is_mt7986(&dev->mt76) ?
+- MT_EE_NSS_MAX_DBDC_MA7986 : MT_EE_NSS_MAX_DBDC_MA7915;
+- } else {
+- nss_band_max = is_mt7986(&dev->mt76) ?
+- MT_EE_NSS_MAX_MA7986 : MT_EE_NSS_MAX_MA7915;
++ if (!is_mt7986(&dev->mt76))
++ nss_max = 2;
+ }
+
+- if (!nss_band || nss_band > nss_band_max)
+- nss_band = nss_band_max;
+-
+- if (nss_band > nss) {
+- dev_warn(dev->mt76.dev,
+- "nss mismatch, nss(%d) nss_band(%d) band(%d) ext_phy(%d)\n",
+- nss, nss_band, phy->band_idx, ext_phy);
+- nss = nss_band;
+- }
++ if (!nss)
++ nss = nss_max;
++ nss = min_t(u8, min_t(u8, nss_max, nss), path);
+
+- mphy->chainmask = BIT(nss) - 1;
+- if (ext_phy)
++ mphy->chainmask = BIT(path) - 1;
++ if (phy->band_idx)
+ mphy->chainmask <<= dev->chainshift;
+- mphy->antenna_mask = BIT(nss_band) - 1;
++ mphy->antenna_mask = BIT(nss) - 1;
+ dev->chainmask |= mphy->chainmask;
+ dev->chainshift = hweight8(dev->mphy.chainmask);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+index 7578ac6d0be62..f3e56817d36e9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+@@ -58,11 +58,6 @@ enum mt7915_eeprom_field {
+ #define MT_EE_RATE_DELTA_SIGN BIT(6)
+ #define MT_EE_RATE_DELTA_EN BIT(7)
+
+-#define MT_EE_NSS_MAX_MA7915 4
+-#define MT_EE_NSS_MAX_DBDC_MA7915 2
+-#define MT_EE_NSS_MAX_MA7986 4
+-#define MT_EE_NSS_MAX_DBDC_MA7986 4
+-
+ enum mt7915_adie_sku {
+ MT7976_ONE_ADIE_DBDC = 0x7,
+ MT7975_ONE_ADIE = 0x8,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index a4bcc617c1a34..e6bf6e04d4b9c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1151,7 +1151,7 @@ void mt7915_mac_set_timing(struct mt7915_phy *phy)
+ FIELD_PREP(MT_TIMEOUT_VAL_CCA, 48);
+ u32 ofdm = FIELD_PREP(MT_TIMEOUT_VAL_PLCP, 60) |
+ FIELD_PREP(MT_TIMEOUT_VAL_CCA, 28);
+- int offset;
++ int eifs_ofdm = 360, sifs = 10, offset;
+ bool a_band = !(phy->mt76->chandef.chan->band == NL80211_BAND_2GHZ);
+
+ if (!test_bit(MT76_STATE_RUNNING, &phy->mt76->state))
+@@ -1169,17 +1169,26 @@ void mt7915_mac_set_timing(struct mt7915_phy *phy)
+ reg_offset = FIELD_PREP(MT_TIMEOUT_VAL_PLCP, offset) |
+ FIELD_PREP(MT_TIMEOUT_VAL_CCA, offset);
+
++ if (!is_mt7915(&dev->mt76)) {
++ if (!a_band) {
++ mt76_wr(dev, MT_TMAC_ICR1(phy->band_idx),
++ FIELD_PREP(MT_IFS_EIFS_CCK, 314));
++ eifs_ofdm = 78;
++ } else {
++ eifs_ofdm = 84;
++ }
++ } else if (a_band) {
++ sifs = 16;
++ }
++
+ mt76_wr(dev, MT_TMAC_CDTR(phy->band_idx), cck + reg_offset);
+ mt76_wr(dev, MT_TMAC_ODTR(phy->band_idx), ofdm + reg_offset);
+ mt76_wr(dev, MT_TMAC_ICR0(phy->band_idx),
+- FIELD_PREP(MT_IFS_EIFS_OFDM, a_band ? 84 : 78) |
++ FIELD_PREP(MT_IFS_EIFS_OFDM, eifs_ofdm) |
+ FIELD_PREP(MT_IFS_RIFS, 2) |
+- FIELD_PREP(MT_IFS_SIFS, 10) |
++ FIELD_PREP(MT_IFS_SIFS, sifs) |
+ FIELD_PREP(MT_IFS_SLOT, phy->slottime));
+
+- mt76_wr(dev, MT_TMAC_ICR1(phy->band_idx),
+- FIELD_PREP(MT_IFS_EIFS_CCK, 314));
+-
+ if (phy->slottime < 20 || a_band)
+ val = MT7915_CFEND_RATE_DEFAULT;
+ else
+@@ -1600,7 +1609,7 @@ void mt7915_mac_update_stats(struct mt7915_phy *phy)
+
+ aggr0 = phy->band_idx ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
+ if (is_mt7915(&dev->mt76)) {
+- for (i = 0, aggr1 = aggr0 + 4; i < 4; i++) {
++ for (i = 0, aggr1 = aggr0 + 8; i < 4; i++) {
+ val = mt76_rr(dev, MT_MIB_MB_SDR1(phy->band_idx, (i << 4)));
+ mib->ba_miss_cnt +=
+ FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+index 728a879c3b008..3808ce1647d9e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+@@ -65,10 +65,17 @@ static void mt7915_put_hif2(struct mt7915_hif *hif)
+
+ static struct mt7915_hif *mt7915_pci_init_hif2(struct pci_dev *pdev)
+ {
++ struct pci_dev *tmp_pdev;
++
+ hif_idx++;
+- if (!pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x7916, NULL) &&
+- !pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x790a, NULL))
+- return NULL;
++
++ tmp_pdev = pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x7916, NULL);
++ if (!tmp_pdev) {
++ tmp_pdev = pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x790a, NULL);
++ if (!tmp_pdev)
++ return NULL;
++ }
++ pci_dev_put(tmp_pdev);
+
+ writel(hif_idx | MT_PCIE_RECOG_ID_SEM,
+ pcim_iomap_table(pdev)[0] + MT_PCIE_RECOG_ID);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index dcdb3cf04ac1b..4ad66b3443838 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -37,6 +37,7 @@ mt7921_regd_notifier(struct wiphy *wiphy,
+
+ memcpy(dev->mt76.alpha2, request->alpha2, sizeof(dev->mt76.alpha2));
+ dev->mt76.region = request->dfs_region;
++ dev->country_ie_env = request->country_ie_env;
+
+ mt7921_mutex_acquire(dev);
+ mt7921_mcu_set_clc(dev, request->alpha2, request->country_ie_env);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 650ab97ae0524..1c0d8cf19b8eb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -396,6 +396,27 @@ mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ if (v0 & MT_PRXV_HT_AD_CODE)
+ status->enc_flags |= RX_ENC_FLAG_LDPC;
+
++ ret = mt76_connac2_mac_fill_rx_rate(&dev->mt76, status, sband,
++ rxv, &mode);
++ if (ret < 0)
++ return ret;
++
++ if (rxd1 & MT_RXD1_NORMAL_GROUP_5) {
++ rxd += 6;
++ if ((u8 *)rxd - skb->data >= skb->len)
++ return -EINVAL;
++
++ rxv = rxd;
++ /* Monitor mode would use RCPI described in GROUP 5
++ * instead.
++ */
++ v1 = le32_to_cpu(rxv[0]);
++
++ rxd += 12;
++ if ((u8 *)rxd - skb->data >= skb->len)
++ return -EINVAL;
++ }
++
+ status->chains = mphy->antenna_mask;
+ status->chain_signal[0] = to_rssi(MT_PRXV_RCPI0, v1);
+ status->chain_signal[1] = to_rssi(MT_PRXV_RCPI1, v1);
+@@ -410,17 +431,6 @@ mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ status->signal = max(status->signal,
+ status->chain_signal[i]);
+ }
+-
+- ret = mt76_connac2_mac_fill_rx_rate(&dev->mt76, status, sband,
+- rxv, &mode);
+- if (ret < 0)
+- return ret;
+-
+- if (rxd1 & MT_RXD1_NORMAL_GROUP_5) {
+- rxd += 18;
+- if ((u8 *)rxd - skb->data >= skb->len)
+- return -EINVAL;
+- }
+ }
+
+ amsdu_info = FIELD_GET(MT_RXD4_NORMAL_PAYLOAD_FORMAT, rxd4);
+@@ -974,7 +984,7 @@ void mt7921_mac_update_mib_stats(struct mt7921_phy *phy)
+ mib->tx_amsdu_cnt += val;
+ }
+
+- for (i = 0, aggr1 = aggr0 + 4; i < 4; i++) {
++ for (i = 0, aggr1 = aggr0 + 8; i < 4; i++) {
+ u32 val2;
+
+ val = mt76_rr(dev, MT_TX_AGG_CNT(0, i));
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 7e409ac7d9a82..111d9221b94f5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -1504,7 +1504,13 @@ static int mt7921_set_sar_specs(struct ieee80211_hw *hw,
+ int err;
+
+ mt7921_mutex_acquire(dev);
++ err = mt7921_mcu_set_clc(dev, dev->mt76.alpha2,
++ dev->country_ie_env);
++ if (err < 0)
++ goto out;
++
+ err = mt7921_set_tx_sar_pwr(hw, sar);
++out:
+ mt7921_mutex_release(dev);
+
+ return err;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index eaba114a9c7e4..d36b940c0a07a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -171,7 +171,7 @@ struct mt7921_clc {
+ u8 type;
+ u8 rsv[8];
+ u8 data[];
+-};
++} __packed;
+
+ struct mt7921_phy {
+ struct mt76_phy *mt76;
+@@ -244,6 +244,8 @@ struct mt7921_dev {
+ struct work_struct ipv6_ns_work;
+ /* IPv6 addresses for WoWLAN */
+ struct sk_buff_head ipv6_ns_list;
++
++ enum environment_cap country_ie_env;
+ };
+
+ enum {
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index 4c4033bb1bb35..0597df2729a62 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -766,6 +766,9 @@ static void mt76u_status_worker(struct mt76_worker *w)
27183 + struct mt76_queue *q;
27184 + int i;
27185 +
27186 ++ if (!test_bit(MT76_STATE_RUNNING, &dev->phy.state))
27187 ++ return;
27188 ++
27189 + for (i = 0; i < IEEE80211_NUM_ACS; i++) {
27190 + q = dev->phy.q_tx[i];
27191 + if (!q)
27192 +@@ -785,11 +788,11 @@ static void mt76u_status_worker(struct mt76_worker *w)
27193 + wake_up(&dev->tx_wait);
27194 +
27195 + mt76_worker_schedule(&dev->tx_worker);
27196 +-
27197 +- if (dev->drv->tx_status_data &&
27198 +- !test_and_set_bit(MT76_READING_STATS, &dev->phy.state))
27199 +- queue_work(dev->wq, &dev->usb.stat_work);
27200 + }
27201 ++
27202 ++ if (dev->drv->tx_status_data &&
27203 ++ !test_and_set_bit(MT76_READING_STATS, &dev->phy.state))
27204 ++ queue_work(dev->wq, &dev->usb.stat_work);
27205 + }
27206 +
27207 + static void mt76u_tx_status_data(struct work_struct *work)
27208 +diff --git a/drivers/net/wireless/purelifi/plfxlc/usb.c b/drivers/net/wireless/purelifi/plfxlc/usb.c
27209 +index 39e54b3787d6a..76d0a778636a4 100644
27210 +--- a/drivers/net/wireless/purelifi/plfxlc/usb.c
27211 ++++ b/drivers/net/wireless/purelifi/plfxlc/usb.c
27212 +@@ -247,6 +247,7 @@ error:
27213 + for (i = 0; i < RX_URBS_COUNT; i++)
27214 + free_rx_urb(urbs[i]);
27215 + }
27216 ++ kfree(urbs);
27217 + return r;
27218 + }
27219 +
27220 +diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
27221 +index 782b089a2e1ba..1ba66b8f70c95 100644
27222 +--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
27223 ++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
27224 +@@ -1190,7 +1190,7 @@ struct rtl8723bu_c2h {
27225 + u8 bw;
27226 + } __packed ra_report;
27227 + };
27228 +-};
27229 ++} __packed;
27230 +
27231 + struct rtl8xxxu_fileops;
27232 +
27233 +diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
27234 +index ac641a56efb09..e9c1b62c9c3c2 100644
27235 +--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
27236 ++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
27237 +@@ -1608,18 +1608,18 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
27238 + {
27239 + struct device *dev = &priv->udev->dev;
27240 + struct ieee80211_hw *hw = priv->hw;
27241 +- u32 val32, bonding;
27242 ++ u32 val32, bonding, sys_cfg;
27243 + u16 val16;
27244 +
27245 +- val32 = rtl8xxxu_read32(priv, REG_SYS_CFG);
27246 +- priv->chip_cut = (val32 & SYS_CFG_CHIP_VERSION_MASK) >>
27247 ++ sys_cfg = rtl8xxxu_read32(priv, REG_SYS_CFG);
27248 ++ priv->chip_cut = (sys_cfg & SYS_CFG_CHIP_VERSION_MASK) >>
27249 + SYS_CFG_CHIP_VERSION_SHIFT;
27250 +- if (val32 & SYS_CFG_TRP_VAUX_EN) {
27251 ++ if (sys_cfg & SYS_CFG_TRP_VAUX_EN) {
27252 + dev_info(dev, "Unsupported test chip\n");
27253 + return -ENOTSUPP;
27254 + }
27255 +
27256 +- if (val32 & SYS_CFG_BT_FUNC) {
27257 ++ if (sys_cfg & SYS_CFG_BT_FUNC) {
27258 + if (priv->chip_cut >= 3) {
27259 + sprintf(priv->chip_name, "8723BU");
27260 + priv->rtl_chip = RTL8723B;
27261 +@@ -1641,7 +1641,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
27262 + if (val32 & MULTI_GPS_FUNC_EN)
27263 + priv->has_gps = 1;
27264 + priv->is_multi_func = 1;
27265 +- } else if (val32 & SYS_CFG_TYPE_ID) {
27266 ++ } else if (sys_cfg & SYS_CFG_TYPE_ID) {
27267 + bonding = rtl8xxxu_read32(priv, REG_HPON_FSM);
27268 + bonding &= HPON_FSM_BONDING_MASK;
27269 + if (priv->fops->tx_desc_size ==
27270 +@@ -1692,7 +1692,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
27271 + case RTL8188E:
27272 + case RTL8192E:
27273 + case RTL8723B:
27274 +- switch (val32 & SYS_CFG_VENDOR_EXT_MASK) {
27275 ++ switch (sys_cfg & SYS_CFG_VENDOR_EXT_MASK) {
27276 + case SYS_CFG_VENDOR_ID_TSMC:
27277 + sprintf(priv->chip_vendor, "TSMC");
27278 + break;
27279 +@@ -1709,7 +1709,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
27280 + }
27281 + break;
27282 + default:
27283 +- if (val32 & SYS_CFG_VENDOR_ID) {
27284 ++ if (sys_cfg & SYS_CFG_VENDOR_ID) {
27285 + sprintf(priv->chip_vendor, "UMC");
27286 + priv->vendor_umc = 1;
27287 + } else {
27288 +@@ -4654,7 +4654,6 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
27289 + if (sta->deflink.ht_cap.cap &
27290 + (IEEE80211_HT_CAP_SGI_40 | IEEE80211_HT_CAP_SGI_20))
27291 + sgi = 1;
27292 +- rcu_read_unlock();
27293 +
27294 + highest_rate = fls(ramask) - 1;
27295 + if (highest_rate < DESC_RATE_MCS0) {
27296 +@@ -4679,6 +4678,7 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
27297 + else
27298 + rarpt->txrate.bw = RATE_INFO_BW_20;
27299 + }
27300 ++ rcu_read_unlock();
27301 + bit_rate = cfg80211_calculate_bitrate(&rarpt->txrate);
27302 + rarpt->bit_rate = bit_rate;
27303 + rarpt->desc_rate = highest_rate;
27304 +@@ -5574,7 +5574,6 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
27305 + rarpt->txrate.flags = 0;
27306 + rate = c2h->ra_report.rate;
27307 + sgi = c2h->ra_report.sgi;
27308 +- bw = c2h->ra_report.bw;
27309 +
27310 + if (rate < DESC_RATE_MCS0) {
27311 + rarpt->txrate.legacy =
27312 +@@ -5591,8 +5590,13 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
27313 + RATE_INFO_FLAGS_SHORT_GI;
27314 + }
27315 +
27316 +- if (bw == RATE_INFO_BW_20)
27317 +- rarpt->txrate.bw |= RATE_INFO_BW_20;
27318 ++ if (skb->len >= offsetofend(typeof(*c2h), ra_report.bw)) {
27319 ++ if (c2h->ra_report.bw == RTL8XXXU_CHANNEL_WIDTH_40)
27320 ++ bw = RATE_INFO_BW_40;
27321 ++ else
27322 ++ bw = RATE_INFO_BW_20;
27323 ++ rarpt->txrate.bw = bw;
27324 ++ }
27325 + }
27326 + bit_rate = cfg80211_calculate_bitrate(&rarpt->txrate);
27327 + rarpt->bit_rate = bit_rate;
27328 +diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
27329 +index bc2994865372b..ad420d7ec8af9 100644
27330 +--- a/drivers/net/wireless/realtek/rtw89/core.c
27331 ++++ b/drivers/net/wireless/realtek/rtw89/core.c
27332 +@@ -2527,7 +2527,7 @@ int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
27333 + }
27334 +
27335 + /* update cam aid mac_id net_type */
27336 +- rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
27337 ++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
27338 + if (ret) {
27339 + rtw89_warn(rtwdev, "failed to send h2c cam\n");
27340 + return ret;
27341 +diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
27342 +index 0508dfca8edf7..077fddc5fa1ea 100644
27343 +--- a/drivers/net/wireless/realtek/rtw89/mac.c
27344 ++++ b/drivers/net/wireless/realtek/rtw89/mac.c
27345 +@@ -1429,10 +1429,8 @@ static int dle_mix_cfg(struct rtw89_dev *rtwdev, const struct rtw89_dle_mem *cfg
27346 + #define INVALID_QT_WCPU U16_MAX
27347 + #define SET_QUOTA_VAL(_min_x, _max_x, _module, _idx) \
27348 + do { \
27349 +- val = ((_min_x) & \
27350 +- B_AX_ ## _module ## _MIN_SIZE_MASK) | \
27351 +- (((_max_x) << 16) & \
27352 +- B_AX_ ## _module ## _MAX_SIZE_MASK); \
27353 ++ val = u32_encode_bits(_min_x, B_AX_ ## _module ## _MIN_SIZE_MASK) | \
27354 ++ u32_encode_bits(_max_x, B_AX_ ## _module ## _MAX_SIZE_MASK); \
27355 + rtw89_write32(rtwdev, \
27356 + R_AX_ ## _module ## _QTA ## _idx ## _CFG, \
27357 + val); \
27358 +diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
27359 +index 6a6bdc652e09e..c894a2b614eb1 100644
27360 +--- a/drivers/net/wireless/realtek/rtw89/phy.c
27361 ++++ b/drivers/net/wireless/realtek/rtw89/phy.c
27362 +@@ -3139,7 +3139,7 @@ void rtw89_phy_env_monitor_track(struct rtw89_dev *rtwdev)
27363 +
27364 + static bool rtw89_physts_ie_page_valid(enum rtw89_phy_status_bitmap *ie_page)
27365 + {
27366 +- if (*ie_page > RTW89_PHYSTS_BITMAP_NUM ||
27367 ++ if (*ie_page >= RTW89_PHYSTS_BITMAP_NUM ||
27368 + *ie_page == RTW89_RSVD_9)
27369 + return false;
27370 + else if (*ie_page > RTW89_RSVD_9)
27371 +diff --git a/drivers/net/wireless/rsi/rsi_91x_core.c b/drivers/net/wireless/rsi/rsi_91x_core.c
27372 +index 0f3a80f66b61c..ead4d4e043280 100644
27373 +--- a/drivers/net/wireless/rsi/rsi_91x_core.c
27374 ++++ b/drivers/net/wireless/rsi/rsi_91x_core.c
27375 +@@ -466,7 +466,9 @@ void rsi_core_xmit(struct rsi_common *common, struct sk_buff *skb)
27376 + tid, 0);
27377 + }
27378 + }
27379 +- if (skb->protocol == cpu_to_be16(ETH_P_PAE)) {
27380 ++
27381 ++ if (IEEE80211_SKB_CB(skb)->control.flags &
27382 ++ IEEE80211_TX_CTRL_PORT_CTRL_PROTO) {
27383 + q_num = MGMT_SOFT_Q;
27384 + skb->priority = q_num;
27385 + }
27386 +diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
27387 +index c61f83a7333b6..c7460fbba0142 100644
27388 +--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
27389 ++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
27390 +@@ -162,12 +162,16 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
27391 + u8 header_size;
27392 + u8 vap_id = 0;
27393 + u8 dword_align_bytes;
27394 ++ bool tx_eapol;
27395 + u16 seq_num;
27396 +
27397 + info = IEEE80211_SKB_CB(skb);
27398 + vif = info->control.vif;
27399 + tx_params = (struct skb_info *)info->driver_data;
27400 +
27401 ++ tx_eapol = IEEE80211_SKB_CB(skb)->control.flags &
27402 ++ IEEE80211_TX_CTRL_PORT_CTRL_PROTO;
27403 ++
27404 + header_size = FRAME_DESC_SZ + sizeof(struct rsi_xtended_desc);
27405 + if (header_size > skb_headroom(skb)) {
27406 + rsi_dbg(ERR_ZONE, "%s: Unable to send pkt\n", __func__);
27407 +@@ -231,7 +235,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
27408 + }
27409 + }
27410 +
27411 +- if (skb->protocol == cpu_to_be16(ETH_P_PAE)) {
27412 ++ if (tx_eapol) {
27413 + rsi_dbg(INFO_ZONE, "*** Tx EAPOL ***\n");
27414 +
27415 + data_desc->frame_info = cpu_to_le16(RATE_INFO_ENABLE);
27416 +diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
27417 +index d9f6367b9993d..f0cac19005527 100644
27418 +--- a/drivers/nfc/pn533/pn533.c
27419 ++++ b/drivers/nfc/pn533/pn533.c
27420 +@@ -1295,6 +1295,8 @@ static int pn533_poll_dep_complete(struct pn533 *dev, void *arg,
27421 + if (IS_ERR(resp))
27422 + return PTR_ERR(resp);
27423 +
27424 ++ memset(&nfc_target, 0, sizeof(struct nfc_target));
27425 ++
27426 + rsp = (struct pn533_cmd_jump_dep_response *)resp->data;
27427 +
27428 + rc = rsp->status & PN533_CMD_RET_MASK;
27429 +@@ -1926,6 +1928,8 @@ static int pn533_in_dep_link_up_complete(struct pn533 *dev, void *arg,
27430 +
27431 + dev_dbg(dev->dev, "Creating new target\n");
27432 +
27433 ++ memset(&nfc_target, 0, sizeof(struct nfc_target));
27434 ++
27435 + nfc_target.supported_protocols = NFC_PROTO_NFC_DEP_MASK;
27436 + nfc_target.nfcid1_len = 10;
27437 + memcpy(nfc_target.nfcid1, rsp->nfcid3t, nfc_target.nfcid1_len);
27438 +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
27439 +index 7e3893d06babd..108b5022ceadc 100644
27440 +--- a/drivers/nvme/host/core.c
27441 ++++ b/drivers/nvme/host/core.c
27442 +@@ -3049,7 +3049,7 @@ static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
27443 +
27444 + id = kzalloc(sizeof(*id), GFP_KERNEL);
27445 + if (!id)
27446 +- return 0;
27447 ++ return -ENOMEM;
27448 +
27449 + c.identify.opcode = nvme_admin_identify;
27450 + c.identify.cns = NVME_ID_CNS_CS_CTRL;
27451 +@@ -3745,13 +3745,17 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
27452 + memcpy(dhchap_secret, buf, count);
27453 + nvme_auth_stop(ctrl);
27454 + if (strcmp(dhchap_secret, opts->dhchap_secret)) {
27455 ++ struct nvme_dhchap_key *key, *host_key;
27456 + int ret;
27457 +
27458 +- ret = nvme_auth_generate_key(dhchap_secret, &ctrl->host_key);
27459 ++ ret = nvme_auth_generate_key(dhchap_secret, &key);
27460 + if (ret)
27461 + return ret;
27462 + kfree(opts->dhchap_secret);
27463 + opts->dhchap_secret = dhchap_secret;
27464 ++ host_key = ctrl->host_key;
27465 ++ ctrl->host_key = key;
27466 ++ nvme_auth_free_key(host_key);
27467 + /* Key has changed; re-authentication with new key */
27468 + nvme_auth_reset(ctrl);
27469 + }
27470 +@@ -3795,13 +3799,17 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
27471 + memcpy(dhchap_secret, buf, count);
27472 + nvme_auth_stop(ctrl);
27473 + if (strcmp(dhchap_secret, opts->dhchap_ctrl_secret)) {
27474 ++ struct nvme_dhchap_key *key, *ctrl_key;
27475 + int ret;
27476 +
27477 +- ret = nvme_auth_generate_key(dhchap_secret, &ctrl->ctrl_key);
27478 ++ ret = nvme_auth_generate_key(dhchap_secret, &key);
27479 + if (ret)
27480 + return ret;
27481 + kfree(opts->dhchap_ctrl_secret);
27482 + opts->dhchap_ctrl_secret = dhchap_secret;
27483 ++ ctrl_key = ctrl->ctrl_key;
27484 ++ ctrl->ctrl_key = key;
27485 ++ nvme_auth_free_key(ctrl_key);
27486 + /* Key has changed; re-authentication with new key */
27487 + nvme_auth_reset(ctrl);
27488 + }
27489 +@@ -4867,7 +4875,7 @@ EXPORT_SYMBOL_GPL(nvme_remove_admin_tag_set);
27490 +
27491 + int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
27492 + const struct blk_mq_ops *ops, unsigned int flags,
27493 +- unsigned int cmd_size)
27494 ++ unsigned int nr_maps, unsigned int cmd_size)
27495 + {
27496 + int ret;
27497 +
27498 +@@ -4881,8 +4889,7 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
27499 + set->driver_data = ctrl;
27500 + set->nr_hw_queues = ctrl->queue_count - 1;
27501 + set->timeout = NVME_IO_TIMEOUT;
27502 +- if (ops->map_queues)
27503 +- set->nr_maps = ctrl->opts->nr_poll_queues ? HCTX_MAX_TYPES : 2;
27504 ++ set->nr_maps = nr_maps;
27505 + ret = blk_mq_alloc_tag_set(set);
27506 + if (ret)
27507 + return ret;
27508 +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
27509 +index 5d57a042dbcad..20b0c29a9a341 100644
27510 +--- a/drivers/nvme/host/fc.c
27511 ++++ b/drivers/nvme/host/fc.c
27512 +@@ -2903,7 +2903,7 @@ nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl)
27513 + nvme_fc_init_io_queues(ctrl);
27514 +
27515 + ret = nvme_alloc_io_tag_set(&ctrl->ctrl, &ctrl->tag_set,
27516 +- &nvme_fc_mq_ops, BLK_MQ_F_SHOULD_MERGE,
27517 ++ &nvme_fc_mq_ops, BLK_MQ_F_SHOULD_MERGE, 1,
27518 + struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv,
27519 + ctrl->lport->ops->fcprqst_priv_sz));
27520 + if (ret)
27521 +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
27522 +index a29877217ee65..8a0db9e06dc65 100644
27523 +--- a/drivers/nvme/host/nvme.h
27524 ++++ b/drivers/nvme/host/nvme.h
27525 +@@ -743,7 +743,7 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
27526 + void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl);
27527 + int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
27528 + const struct blk_mq_ops *ops, unsigned int flags,
27529 +- unsigned int cmd_size);
27530 ++ unsigned int nr_maps, unsigned int cmd_size);
27531 + void nvme_remove_io_tag_set(struct nvme_ctrl *ctrl);
27532 +
27533 + void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
27534 +diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
27535 +index 6e079abb22ee9..a55d3e8b607d5 100644
27536 +--- a/drivers/nvme/host/rdma.c
27537 ++++ b/drivers/nvme/host/rdma.c
27538 +@@ -798,7 +798,9 @@ static int nvme_rdma_alloc_tag_set(struct nvme_ctrl *ctrl)
27539 + NVME_RDMA_METADATA_SGL_SIZE;
27540 +
27541 + return nvme_alloc_io_tag_set(ctrl, &to_rdma_ctrl(ctrl)->tag_set,
27542 +- &nvme_rdma_mq_ops, BLK_MQ_F_SHOULD_MERGE, cmd_size);
27543 ++ &nvme_rdma_mq_ops, BLK_MQ_F_SHOULD_MERGE,
27544 ++ ctrl->opts->nr_poll_queues ? HCTX_MAX_TYPES : 2,
27545 ++ cmd_size);
27546 + }
27547 +
27548 + static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl)
27549 +diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
27550 +index 9b47dcb2a7d97..83735c52d34a0 100644
27551 +--- a/drivers/nvme/host/tcp.c
27552 ++++ b/drivers/nvme/host/tcp.c
27553 +@@ -1868,6 +1868,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
27554 + ret = nvme_alloc_io_tag_set(ctrl, &to_tcp_ctrl(ctrl)->tag_set,
27555 + &nvme_tcp_mq_ops,
27556 + BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING,
27557 ++ ctrl->opts->nr_poll_queues ? HCTX_MAX_TYPES : 2,
27558 + sizeof(struct nvme_tcp_request));
27559 + if (ret)
27560 + goto out_free_io_queues;
27561 +diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
27562 +index aecb5853f8da4..683b75a992b3d 100644
27563 +--- a/drivers/nvme/target/core.c
27564 ++++ b/drivers/nvme/target/core.c
27565 +@@ -15,6 +15,7 @@
27566 +
27567 + #include "nvmet.h"
27568 +
27569 ++struct kmem_cache *nvmet_bvec_cache;
27570 + struct workqueue_struct *buffered_io_wq;
27571 + struct workqueue_struct *zbd_wq;
27572 + static const struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
27573 +@@ -1631,26 +1632,28 @@ void nvmet_subsys_put(struct nvmet_subsys *subsys)
27574 +
27575 + static int __init nvmet_init(void)
27576 + {
27577 +- int error;
27578 ++ int error = -ENOMEM;
27579 +
27580 + nvmet_ana_group_enabled[NVMET_DEFAULT_ANA_GRPID] = 1;
27581 +
27582 ++ nvmet_bvec_cache = kmem_cache_create("nvmet-bvec",
27583 ++ NVMET_MAX_MPOOL_BVEC * sizeof(struct bio_vec), 0,
27584 ++ SLAB_HWCACHE_ALIGN, NULL);
27585 ++ if (!nvmet_bvec_cache)
27586 ++ return -ENOMEM;
27587 ++
27588 + zbd_wq = alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM, 0);
27589 + if (!zbd_wq)
27590 +- return -ENOMEM;
27591 ++ goto out_destroy_bvec_cache;
27592 +
27593 + buffered_io_wq = alloc_workqueue("nvmet-buffered-io-wq",
27594 + WQ_MEM_RECLAIM, 0);
27595 +- if (!buffered_io_wq) {
27596 +- error = -ENOMEM;
27597 ++ if (!buffered_io_wq)
27598 + goto out_free_zbd_work_queue;
27599 +- }
27600 +
27601 + nvmet_wq = alloc_workqueue("nvmet-wq", WQ_MEM_RECLAIM, 0);
27602 +- if (!nvmet_wq) {
27603 +- error = -ENOMEM;
27604 ++ if (!nvmet_wq)
27605 + goto out_free_buffered_work_queue;
27606 +- }
27607 +
27608 + error = nvmet_init_discovery();
27609 + if (error)
27610 +@@ -1669,6 +1672,8 @@ out_free_buffered_work_queue:
27611 + destroy_workqueue(buffered_io_wq);
27612 + out_free_zbd_work_queue:
27613 + destroy_workqueue(zbd_wq);
27614 ++out_destroy_bvec_cache:
27615 ++ kmem_cache_destroy(nvmet_bvec_cache);
27616 + return error;
27617 + }
27618 +
27619 +@@ -1680,6 +1685,7 @@ static void __exit nvmet_exit(void)
27620 + destroy_workqueue(nvmet_wq);
27621 + destroy_workqueue(buffered_io_wq);
27622 + destroy_workqueue(zbd_wq);
27623 ++ kmem_cache_destroy(nvmet_bvec_cache);
27624 +
27625 + BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_entry) != 1024);
27626 + BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_hdr) != 1024);
27627 +diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
27628 +index 64b47e2a46330..e55ec6fefd7f4 100644
27629 +--- a/drivers/nvme/target/io-cmd-file.c
27630 ++++ b/drivers/nvme/target/io-cmd-file.c
27631 +@@ -11,7 +11,6 @@
27632 + #include <linux/fs.h>
27633 + #include "nvmet.h"
27634 +
27635 +-#define NVMET_MAX_MPOOL_BVEC 16
27636 + #define NVMET_MIN_MPOOL_OBJ 16
27637 +
27638 + void nvmet_file_ns_revalidate(struct nvmet_ns *ns)
27639 +@@ -26,8 +25,6 @@ void nvmet_file_ns_disable(struct nvmet_ns *ns)
27640 + flush_workqueue(buffered_io_wq);
27641 + mempool_destroy(ns->bvec_pool);
27642 + ns->bvec_pool = NULL;
27643 +- kmem_cache_destroy(ns->bvec_cache);
27644 +- ns->bvec_cache = NULL;
27645 + fput(ns->file);
27646 + ns->file = NULL;
27647 + }
27648 +@@ -59,16 +56,8 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
27649 + ns->blksize_shift = min_t(u8,
27650 + file_inode(ns->file)->i_blkbits, 12);
27651 +
27652 +- ns->bvec_cache = kmem_cache_create("nvmet-bvec",
27653 +- NVMET_MAX_MPOOL_BVEC * sizeof(struct bio_vec),
27654 +- 0, SLAB_HWCACHE_ALIGN, NULL);
27655 +- if (!ns->bvec_cache) {
27656 +- ret = -ENOMEM;
27657 +- goto err;
27658 +- }
27659 +-
27660 + ns->bvec_pool = mempool_create(NVMET_MIN_MPOOL_OBJ, mempool_alloc_slab,
27661 +- mempool_free_slab, ns->bvec_cache);
27662 ++ mempool_free_slab, nvmet_bvec_cache);
27663 +
27664 + if (!ns->bvec_pool) {
27665 + ret = -ENOMEM;
27666 +@@ -77,9 +66,10 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
27667 +
27668 + return ret;
27669 + err:
27670 ++ fput(ns->file);
27671 ++ ns->file = NULL;
27672 + ns->size = 0;
27673 + ns->blksize_shift = 0;
27674 +- nvmet_file_ns_disable(ns);
27675 + return ret;
27676 + }
27677 +
27678 +diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
27679 +index b45fe3adf015f..08c583258e90f 100644
27680 +--- a/drivers/nvme/target/loop.c
27681 ++++ b/drivers/nvme/target/loop.c
27682 +@@ -494,7 +494,7 @@ static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
27683 + return ret;
27684 +
27685 + ret = nvme_alloc_io_tag_set(&ctrl->ctrl, &ctrl->tag_set,
27686 +- &nvme_loop_mq_ops, BLK_MQ_F_SHOULD_MERGE,
27687 ++ &nvme_loop_mq_ops, BLK_MQ_F_SHOULD_MERGE, 1,
27688 + sizeof(struct nvme_loop_iod) +
27689 + NVME_INLINE_SG_CNT * sizeof(struct scatterlist));
27690 + if (ret)
27691 +diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
27692 +index dfe3894205aa7..bda1c1f71f394 100644
27693 +--- a/drivers/nvme/target/nvmet.h
27694 ++++ b/drivers/nvme/target/nvmet.h
27695 +@@ -77,7 +77,6 @@ struct nvmet_ns {
27696 +
27697 + struct completion disable_done;
27698 + mempool_t *bvec_pool;
27699 +- struct kmem_cache *bvec_cache;
27700 +
27701 + int use_p2pmem;
27702 + struct pci_dev *p2p_dev;
27703 +@@ -393,6 +392,8 @@ struct nvmet_req {
27704 + u64 error_slba;
27705 + };
27706 +
27707 ++#define NVMET_MAX_MPOOL_BVEC 16
27708 ++extern struct kmem_cache *nvmet_bvec_cache;
27709 + extern struct workqueue_struct *buffered_io_wq;
27710 + extern struct workqueue_struct *zbd_wq;
27711 + extern struct workqueue_struct *nvmet_wq;
27712 +diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
27713 +index bd8ff4df723da..ed4e6c144a681 100644
27714 +--- a/drivers/of/overlay.c
27715 ++++ b/drivers/of/overlay.c
27716 +@@ -545,7 +545,7 @@ static int find_dup_cset_node_entry(struct overlay_changeset *ovcs,
27717 +
27718 + fn_1 = kasprintf(GFP_KERNEL, "%pOF", ce_1->np);
27719 + fn_2 = kasprintf(GFP_KERNEL, "%pOF", ce_2->np);
27720 +- node_path_match = !strcmp(fn_1, fn_2);
27721 ++ node_path_match = !fn_1 || !fn_2 || !strcmp(fn_1, fn_2);
27722 + kfree(fn_1);
27723 + kfree(fn_2);
27724 + if (node_path_match) {
27725 +@@ -580,7 +580,7 @@ static int find_dup_cset_prop(struct overlay_changeset *ovcs,
27726 +
27727 + fn_1 = kasprintf(GFP_KERNEL, "%pOF", ce_1->np);
27728 + fn_2 = kasprintf(GFP_KERNEL, "%pOF", ce_2->np);
27729 +- node_path_match = !strcmp(fn_1, fn_2);
27730 ++ node_path_match = !fn_1 || !fn_2 || !strcmp(fn_1, fn_2);
27731 + kfree(fn_1);
27732 + kfree(fn_2);
27733 + if (node_path_match &&
27734 +diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
27735 +index 2616585ca5f8a..1dde5c579edc8 100644
27736 +--- a/drivers/pci/controller/dwc/pci-imx6.c
27737 ++++ b/drivers/pci/controller/dwc/pci-imx6.c
27738 +@@ -952,12 +952,6 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
27739 + }
27740 + }
27741 +
27742 +- ret = imx6_pcie_deassert_core_reset(imx6_pcie);
27743 +- if (ret < 0) {
27744 +- dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
27745 +- goto err_phy_off;
27746 +- }
27747 +-
27748 + if (imx6_pcie->phy) {
27749 + ret = phy_power_on(imx6_pcie->phy);
27750 + if (ret) {
27751 +@@ -965,6 +959,13 @@ static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
27752 + goto err_phy_off;
27753 + }
27754 + }
27755 ++
27756 ++ ret = imx6_pcie_deassert_core_reset(imx6_pcie);
27757 ++ if (ret < 0) {
27758 ++ dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
27759 ++ goto err_phy_off;
27760 ++ }
27761 ++
27762 + imx6_setup_phy_mpll(imx6_pcie);
27763 +
27764 + return 0;
27765 +diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
27766 +index c6725c519a479..9e4d96e5a3f5a 100644
27767 +--- a/drivers/pci/controller/dwc/pcie-designware.c
27768 ++++ b/drivers/pci/controller/dwc/pcie-designware.c
27769 +@@ -641,7 +641,7 @@ void dw_pcie_setup(struct dw_pcie *pci)
27770 + if (pci->n_fts[1]) {
27771 + val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
27772 + val &= ~PORT_LOGIC_N_FTS_MASK;
27773 +- val |= pci->n_fts[pci->link_gen - 1];
27774 ++ val |= pci->n_fts[1];
27775 + dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
27776 + }
27777 +
27778 +diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
27779 +index e06e9f4fc50f7..769eedeb8802a 100644
27780 +--- a/drivers/pci/controller/vmd.c
27781 ++++ b/drivers/pci/controller/vmd.c
27782 +@@ -719,6 +719,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
27783 + resource_size_t offset[2] = {0};
27784 + resource_size_t membar2_offset = 0x2000;
27785 + struct pci_bus *child;
27786 ++ struct pci_dev *dev;
27787 + int ret;
27788 +
27789 + /*
27790 +@@ -859,8 +860,25 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
27791 +
27792 + pci_scan_child_bus(vmd->bus);
27793 + vmd_domain_reset(vmd);
27794 +- list_for_each_entry(child, &vmd->bus->children, node)
27795 +- pci_reset_bus(child->self);
27796 ++
27797 ++ /* When Intel VMD is enabled, the OS does not discover the Root Ports
27798 ++ * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
27799 ++ * a reset to the parent of the PCI device supplied as argument. This
27800 ++ * is why we pass a child device, so the reset can be triggered at
27801 ++ * the Intel bridge level and propagated to all the children in the
27802 ++ * hierarchy.
27803 ++ */
27804 ++ list_for_each_entry(child, &vmd->bus->children, node) {
27805 ++ if (!list_empty(&child->devices)) {
27806 ++ dev = list_first_entry(&child->devices,
27807 ++ struct pci_dev, bus_list);
27808 ++ if (pci_reset_bus(dev))
27809 ++ pci_warn(dev, "can't reset device: %d\n", ret);
27810 ++
27811 ++ break;
27812 ++ }
27813 ++ }
27814 ++
27815 + pci_assign_unassigned_bus_resources(vmd->bus);
27816 +
27817 + /*
27818 +@@ -980,6 +998,11 @@ static int vmd_resume(struct device *dev)
27819 + struct vmd_dev *vmd = pci_get_drvdata(pdev);
27820 + int err, i;
27821 +
27822 ++ if (vmd->irq_domain)
27823 ++ vmd_set_msi_remapping(vmd, true);
27824 ++ else
27825 ++ vmd_set_msi_remapping(vmd, false);
27826 ++
27827 + for (i = 0; i < vmd->msix_count; i++) {
27828 + err = devm_request_irq(dev, vmd->irqs[i].virq,
27829 + vmd_irq, IRQF_NO_THREAD,
27830 +diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
27831 +index 36b1801a061b7..55283d2379a6a 100644
27832 +--- a/drivers/pci/endpoint/functions/pci-epf-test.c
27833 ++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
27834 +@@ -979,7 +979,7 @@ static int pci_epf_test_bind(struct pci_epf *epf)
27835 + if (ret)
27836 + epf_test->dma_supported = false;
27837 +
27838 +- if (linkup_notifier) {
27839 ++ if (linkup_notifier || core_init_notifier) {
27840 + epf->nb.notifier_call = pci_epf_test_notifier;
27841 + pci_epc_register_notifier(epc, &epf->nb);
27842 + } else {
27843 +diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
27844 +index 0ea85e1d292ec..fba0179939b8f 100644
27845 +--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
27846 ++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
27847 +@@ -557,7 +557,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
27848 + return ret;
27849 +
27850 + err_alloc_peer_mem:
27851 +- pci_epc_mem_free_addr(ntb->epf->epc, epf_bar->phys_addr, mw_addr, epf_bar->size);
27852 ++ pci_epf_free_space(ntb->epf, mw_addr, barno, 0);
27853 + return -1;
27854 + }
27855 +
27856 +diff --git a/drivers/pci/irq.c b/drivers/pci/irq.c
27857 +index 12ecd0aaa28d6..0050e8f6814ed 100644
27858 +--- a/drivers/pci/irq.c
27859 ++++ b/drivers/pci/irq.c
27860 +@@ -44,6 +44,8 @@ int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler,
27861 + va_start(ap, fmt);
27862 + devname = kvasprintf(GFP_KERNEL, fmt, ap);
27863 + va_end(ap);
27864 ++ if (!devname)
27865 ++ return -ENOMEM;
27866 +
27867 + ret = request_threaded_irq(pci_irq_vector(dev, nr), handler, thread_fn,
27868 + irqflags, devname, dev_id);
27869 +diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
27870 +index b66fa42c4b1fa..1d6f7b502020d 100644
27871 +--- a/drivers/pci/probe.c
27872 ++++ b/drivers/pci/probe.c
27873 +@@ -1891,9 +1891,6 @@ int pci_setup_device(struct pci_dev *dev)
27874 +
27875 + dev->broken_intx_masking = pci_intx_mask_broken(dev);
27876 +
27877 +- /* Clear errors left from system firmware */
27878 +- pci_write_config_word(dev, PCI_STATUS, 0xffff);
27879 +-
27880 + switch (dev->hdr_type) { /* header type */
27881 + case PCI_HEADER_TYPE_NORMAL: /* standard header */
27882 + if (class == PCI_CLASS_BRIDGE_PCI)
27883 +diff --git a/drivers/perf/arm_dmc620_pmu.c b/drivers/perf/arm_dmc620_pmu.c
27884 +index 280a6ae3e27cf..54aa4658fb36e 100644
27885 +--- a/drivers/perf/arm_dmc620_pmu.c
27886 ++++ b/drivers/perf/arm_dmc620_pmu.c
27887 +@@ -725,6 +725,8 @@ static struct platform_driver dmc620_pmu_driver = {
27888 +
27889 + static int __init dmc620_pmu_init(void)
27890 + {
27891 ++ int ret;
27892 ++
27893 + cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
27894 + DMC620_DRVNAME,
27895 + NULL,
27896 +@@ -732,7 +734,11 @@ static int __init dmc620_pmu_init(void)
27897 + if (cpuhp_state_num < 0)
27898 + return cpuhp_state_num;
27899 +
27900 +- return platform_driver_register(&dmc620_pmu_driver);
27901 ++ ret = platform_driver_register(&dmc620_pmu_driver);
27902 ++ if (ret)
27903 ++ cpuhp_remove_multi_state(cpuhp_state_num);
27904 ++
27905 ++ return ret;
27906 + }
27907 +
27908 + static void __exit dmc620_pmu_exit(void)
27909 +diff --git a/drivers/perf/arm_dsu_pmu.c b/drivers/perf/arm_dsu_pmu.c
27910 +index 4a15c86f45efb..fe2abb412c004 100644
27911 +--- a/drivers/perf/arm_dsu_pmu.c
27912 ++++ b/drivers/perf/arm_dsu_pmu.c
27913 +@@ -858,7 +858,11 @@ static int __init dsu_pmu_init(void)
27914 + if (ret < 0)
27915 + return ret;
27916 + dsu_pmu_cpuhp_state = ret;
27917 +- return platform_driver_register(&dsu_pmu_driver);
27918 ++ ret = platform_driver_register(&dsu_pmu_driver);
27919 ++ if (ret)
27920 ++ cpuhp_remove_multi_state(dsu_pmu_cpuhp_state);
27921 ++
27922 ++ return ret;
27923 + }
27924 +
27925 + static void __exit dsu_pmu_exit(void)
27926 +diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
27927 +index 00d4c45a8017d..25a269d431e45 100644
27928 +--- a/drivers/perf/arm_smmuv3_pmu.c
27929 ++++ b/drivers/perf/arm_smmuv3_pmu.c
27930 +@@ -959,6 +959,8 @@ static struct platform_driver smmu_pmu_driver = {
27931 +
27932 + static int __init arm_smmu_pmu_init(void)
27933 + {
27934 ++ int ret;
27935 ++
27936 + cpuhp_state_num = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
27937 + "perf/arm/pmcg:online",
27938 + NULL,
27939 +@@ -966,7 +968,11 @@ static int __init arm_smmu_pmu_init(void)
27940 + if (cpuhp_state_num < 0)
27941 + return cpuhp_state_num;
27942 +
27943 +- return platform_driver_register(&smmu_pmu_driver);
27944 ++ ret = platform_driver_register(&smmu_pmu_driver);
27945 ++ if (ret)
27946 ++ cpuhp_remove_multi_state(cpuhp_state_num);
27947 ++
27948 ++ return ret;
27949 + }
27950 + module_init(arm_smmu_pmu_init);
27951 +
27952 +diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
27953 +index 21771708597db..071e63d9a9ac6 100644
27954 +--- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
27955 ++++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
27956 +@@ -693,10 +693,10 @@ static struct attribute *hisi_pcie_pmu_events_attr[] = {
27957 + HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_cnt, 0x10210),
27958 + HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_latency, 0x0011),
27959 + HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_cnt, 0x10011),
27960 +- HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_flux, 0x1005),
27961 +- HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_time, 0x11005),
27962 +- HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_flux, 0x2004),
27963 +- HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_time, 0x12004),
27964 ++ HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_flux, 0x0804),
27965 ++ HISI_PCIE_PMU_EVENT_ATTR(rx_mrd_time, 0x10804),
27966 ++ HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_flux, 0x0405),
27967 ++ HISI_PCIE_PMU_EVENT_ATTR(tx_mrd_time, 0x10405),
27968 + NULL
27969 + };
27970 +
27971 +diff --git a/drivers/perf/marvell_cn10k_tad_pmu.c b/drivers/perf/marvell_cn10k_tad_pmu.c
27972 +index 69c3050a4348b..a1166afb37024 100644
27973 +--- a/drivers/perf/marvell_cn10k_tad_pmu.c
27974 ++++ b/drivers/perf/marvell_cn10k_tad_pmu.c
27975 +@@ -408,7 +408,11 @@ static int __init tad_pmu_init(void)
27976 + if (ret < 0)
27977 + return ret;
27978 + tad_pmu_cpuhp_state = ret;
27979 +- return platform_driver_register(&tad_pmu_driver);
27980 ++ ret = platform_driver_register(&tad_pmu_driver);
27981 ++ if (ret)
27982 ++ cpuhp_remove_multi_state(tad_pmu_cpuhp_state);
27983 ++
27984 ++ return ret;
27985 + }
27986 +
27987 + static void __exit tad_pmu_exit(void)
27988 +diff --git a/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c b/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
27989 +index d2524b70ea161..3b374b37b965b 100644
27990 +--- a/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
27991 ++++ b/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
27992 +@@ -331,13 +331,12 @@ static void usb_uninit_common_7216(struct brcm_usb_init_params *params)
27993 +
27994 + pr_debug("%s\n", __func__);
27995 +
27996 +- if (!params->wake_enabled) {
27997 +- USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
27998 +-
27999 ++ if (params->wake_enabled) {
28000 + /* Switch to using slower clock during suspend to save power */
28001 + USB_CTRL_SET(ctrl, USB_PM, XHC_S2_CLK_SWITCH_EN);
28002 +- } else {
28003 + usb_wake_enable_7216(params, true);
28004 ++ } else {
28005 ++ USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
28006 + }
28007 + }
28008 +
28009 +@@ -425,7 +424,6 @@ void brcm_usb_dvr_init_7216(struct brcm_usb_init_params *params)
28010 +
28011 + params->family_name = "7216";
28012 + params->ops = &bcm7216_ops;
28013 +- params->suspend_with_clocks = true;
28014 + }
28015 +
28016 + void brcm_usb_dvr_init_7211b0(struct brcm_usb_init_params *params)
28017 +@@ -435,5 +433,4 @@ void brcm_usb_dvr_init_7211b0(struct brcm_usb_init_params *params)
28018 +
28019 + params->family_name = "7211";
28020 + params->ops = &bcm7211b0_ops;
28021 +- params->suspend_with_clocks = true;
28022 + }
28023 +diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.h b/drivers/phy/broadcom/phy-brcm-usb-init.h
28024 +index 1ccb5ddab865c..3236e94988428 100644
28025 +--- a/drivers/phy/broadcom/phy-brcm-usb-init.h
28026 ++++ b/drivers/phy/broadcom/phy-brcm-usb-init.h
28027 +@@ -61,7 +61,6 @@ struct brcm_usb_init_params {
28028 + const struct brcm_usb_init_ops *ops;
28029 + struct regmap *syscon_piarbctl;
28030 + bool wake_enabled;
28031 +- bool suspend_with_clocks;
28032 + };
28033 +
28034 + void brcm_usb_dvr_init_4908(struct brcm_usb_init_params *params);
28035 +diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
28036 +index 2cb3779fcdf82..2bfd78e2d8fd6 100644
28037 +--- a/drivers/phy/broadcom/phy-brcm-usb.c
28038 ++++ b/drivers/phy/broadcom/phy-brcm-usb.c
28039 +@@ -102,9 +102,9 @@ static int brcm_pm_notifier(struct notifier_block *notifier,
28040 +
28041 + static irqreturn_t brcm_usb_phy_wake_isr(int irq, void *dev_id)
28042 + {
28043 +- struct phy *gphy = dev_id;
28044 ++ struct device *dev = dev_id;
28045 +
28046 +- pm_wakeup_event(&gphy->dev, 0);
28047 ++ pm_wakeup_event(dev, 0);
28048 +
28049 + return IRQ_HANDLED;
28050 + }
28051 +@@ -451,7 +451,7 @@ static int brcm_usb_phy_dvr_init(struct platform_device *pdev,
28052 + if (priv->wake_irq >= 0) {
28053 + err = devm_request_irq(dev, priv->wake_irq,
28054 + brcm_usb_phy_wake_isr, 0,
28055 +- dev_name(dev), gphy);
28056 ++ dev_name(dev), dev);
28057 + if (err < 0)
28058 + return err;
28059 + device_set_wakeup_capable(dev, 1);
28060 +@@ -598,7 +598,7 @@ static int brcm_usb_phy_suspend(struct device *dev)
28061 + * and newer XHCI->2.0-clks/3.0-clks.
28062 + */
28063 +
28064 +- if (!priv->ini.suspend_with_clocks) {
28065 ++ if (!priv->ini.wake_enabled) {
28066 + if (priv->phys[BRCM_USB_PHY_3_0].inited)
28067 + clk_disable_unprepare(priv->usb_30_clk);
28068 + if (priv->phys[BRCM_USB_PHY_2_0].inited ||
28069 +@@ -615,8 +615,10 @@ static int brcm_usb_phy_resume(struct device *dev)
28070 + {
28071 + struct brcm_usb_phy_data *priv = dev_get_drvdata(dev);
28072 +
28073 +- clk_prepare_enable(priv->usb_20_clk);
28074 +- clk_prepare_enable(priv->usb_30_clk);
28075 ++ if (!priv->ini.wake_enabled) {
28076 ++ clk_prepare_enable(priv->usb_20_clk);
28077 ++ clk_prepare_enable(priv->usb_30_clk);
28078 ++ }
28079 + brcm_usb_init_ipp(&priv->ini);
28080 +
28081 + /*
28082 +diff --git a/drivers/phy/marvell/phy-mvebu-a3700-comphy.c b/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
28083 +index 67712c77d806f..d641b345afa35 100644
28084 +--- a/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
28085 ++++ b/drivers/phy/marvell/phy-mvebu-a3700-comphy.c
28086 +@@ -826,6 +826,9 @@ mvebu_a3700_comphy_usb3_power_on(struct mvebu_a3700_comphy_lane *lane)
28087 + if (ret)
28088 + return ret;
28089 +
28090 ++ /* COMPHY register reset (cleared automatically) */
28091 ++ comphy_lane_reg_set(lane, COMPHY_SFT_RESET, SFT_RST, SFT_RST);
28092 ++
28093 + /*
28094 + * 0. Set PHY OTG Control(0x5d034), bit 4, Power up OTG module The
28095 + * register belong to UTMI module, so it is set in UTMI phy driver.
28096 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
28097 +index 5be5348fbb26b..bb40172e23d49 100644
28098 +--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
28099 ++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
28100 +@@ -14,6 +14,7 @@
28101 + #include <linux/of.h>
28102 + #include <linux/of_device.h>
28103 + #include <linux/of_address.h>
28104 ++#include <linux/phy/pcie.h>
28105 + #include <linux/phy/phy.h>
28106 + #include <linux/platform_device.h>
28107 + #include <linux/regulator/consumer.h>
28108 +@@ -505,6 +506,13 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_gen3_pcs_tbl[] = {
28109 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_FLL_CNTRL1, 0x01),
28110 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_P2U3_WAKEUP_DLY_TIME_AUXCLK_H, 0x0),
28111 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_P2U3_WAKEUP_DLY_TIME_AUXCLK_L, 0x1),
28112 ++ QMP_PHY_INIT_CFG(QPHY_V4_PCS_G12S1_TXDEEMPH_M3P5DB, 0x10),
28113 ++ QMP_PHY_INIT_CFG(QPHY_V4_PCS_RX_DCC_CAL_CONFIG, 0x01),
28114 ++ QMP_PHY_INIT_CFG(QPHY_V4_PCS_RX_SIGDET_LVL, 0xaa),
28115 ++ QMP_PHY_INIT_CFG(QPHY_V4_PCS_REFGEN_REQ_CONFIG1, 0x0d),
28116 ++};
28117 ++
28118 ++static const struct qmp_phy_init_tbl ipq8074_pcie_gen3_pcs_misc_tbl[] = {
28119 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_OSC_DTCT_ACTIONS, 0x0),
28120 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_L1P1_WAKEUP_DLY_TIME_AUXCLK_H, 0x00),
28121 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_L1P1_WAKEUP_DLY_TIME_AUXCLK_L, 0x01),
28122 +@@ -517,11 +525,7 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_gen3_pcs_tbl[] = {
28123 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_OSC_DTCT_MODE2_CONFIG2, 0x50),
28124 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_OSC_DTCT_MODE2_CONFIG4, 0x1a),
28125 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_OSC_DTCT_MODE2_CONFIG5, 0x6),
28126 +- QMP_PHY_INIT_CFG(QPHY_V4_PCS_G12S1_TXDEEMPH_M3P5DB, 0x10),
28127 + QMP_PHY_INIT_CFG(QPHY_V4_PCS_PCIE_ENDPOINT_REFCLK_DRIVE, 0xc1),
28128 +- QMP_PHY_INIT_CFG(QPHY_V4_PCS_RX_DCC_CAL_CONFIG, 0x01),
28129 +- QMP_PHY_INIT_CFG(QPHY_V4_PCS_RX_SIGDET_LVL, 0xaa),
28130 +- QMP_PHY_INIT_CFG(QPHY_V4_PCS_REFGEN_REQ_CONFIG1, 0x0d),
28131 + };
28132 +
28133 + static const struct qmp_phy_init_tbl sdm845_qmp_pcie_serdes_tbl[] = {
28134 +@@ -1184,15 +1188,29 @@ static const struct qmp_phy_init_tbl sm8450_qmp_gen3x1_pcie_pcs_misc_tbl[] = {
28135 + };
28136 +
28137 + static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_serdes_tbl[] = {
28138 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_BIAS_EN_CLKBUFLR_EN, 0x14),
28139 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_IVCO, 0x0f),
28140 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP_EN, 0x46),
28141 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP_CFG, 0x04),
28142 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_VCO_TUNE_MAP, 0x02),
28143 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_HSCLK_SEL, 0x12),
28144 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_HSCLK_HS_SWITCH_SEL, 0x00),
28145 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CORECLK_DIV_MODE0, 0x0a),
28146 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CORECLK_DIV_MODE1, 0x04),
28147 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CMN_MISC1, 0x88),
28148 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CMN_CONFIG, 0x06),
28149 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CMN_MODE, 0x14),
28150 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_VCO_DC_LEVEL_CTRL, 0x0f),
28151 ++};
28152 ++
28153 ++static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_rc_serdes_tbl[] = {
28154 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SSC_PER1, 0x31),
28155 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SSC_PER2, 0x01),
28156 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SSC_STEP_SIZE1_MODE0, 0xde),
28157 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SSC_STEP_SIZE2_MODE0, 0x07),
28158 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SSC_STEP_SIZE1_MODE1, 0x97),
28159 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SSC_STEP_SIZE2_MODE1, 0x0c),
28160 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_BIAS_EN_CLKBUFLR_EN, 0x14),
28161 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_CLK_ENABLE1, 0x90),
28162 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_IVCO, 0x0f),
28163 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_CP_CTRL_MODE0, 0x06),
28164 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_CP_CTRL_MODE1, 0x06),
28165 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_RCTRL_MODE0, 0x16),
28166 +@@ -1200,8 +1218,6 @@ static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_serdes_tbl[] = {
28167 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_CCTRL_MODE0, 0x36),
28168 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_CCTRL_MODE1, 0x36),
28169 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_SYSCLK_EN_SEL, 0x08),
28170 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP_EN, 0x46),
28171 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP_CFG, 0x04),
28172 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP1_MODE0, 0x0a),
28173 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP2_MODE0, 0x1a),
28174 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP1_MODE1, 0x14),
28175 +@@ -1214,17 +1230,8 @@ static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_serdes_tbl[] = {
28176 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_DIV_FRAC_START1_MODE1, 0x55),
28177 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_DIV_FRAC_START2_MODE1, 0x55),
28178 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_DIV_FRAC_START3_MODE1, 0x05),
28179 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_VCO_TUNE_MAP, 0x02),
28180 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_CLK_SELECT, 0x34),
28181 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_HSCLK_SEL, 0x12),
28182 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_HSCLK_HS_SWITCH_SEL, 0x00),
28183 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_CORECLK_DIV_MODE0, 0x0a),
28184 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_CORECLK_DIV_MODE1, 0x04),
28185 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_CMN_MISC1, 0x88),
28186 + QMP_PHY_INIT_CFG(QSERDES_V5_COM_CORE_CLK_EN, 0x20),
28187 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_CMN_CONFIG, 0x06),
28188 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_CMN_MODE, 0x14),
28189 +- QMP_PHY_INIT_CFG(QSERDES_V5_COM_VCO_DC_LEVEL_CTRL, 0x0f),
28190 + };
28191 +
28192 + static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_tx_tbl[] = {
28193 +@@ -1285,46 +1292,80 @@ static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_rx_tbl[] = {
28194 + };
28195 +
28196 + static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_pcs_tbl[] = {
28197 +- QMP_PHY_INIT_CFG(QPHY_V5_PCS_EQ_CONFIG2, 0x16),
28198 +- QMP_PHY_INIT_CFG(QPHY_V5_PCS_EQ_CONFIG3, 0x22),
28199 +- QMP_PHY_INIT_CFG(QPHY_V5_PCS_G3S2_PRE_GAIN, 0x2e),
28200 +- QMP_PHY_INIT_CFG(QPHY_V5_PCS_RX_SIGDET_LVL, 0x99),
28201 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_EQ_CONFIG4, 0x16),
28202 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_EQ_CONFIG5, 0x22),
28203 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_G3S2_PRE_GAIN, 0x2e),
28204 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_RX_SIGDET_LVL, 0x99),
28205 + };
28206 +
28207 + static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_pcs_misc_tbl[] = {
28208 +- QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_ENDPOINT_REFCLK_DRIVE, 0xc1),
28209 +- QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_OSC_DTCT_ACTIONS, 0x00),
28210 + QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_G4_EQ_CONFIG5, 0x02),
28211 + QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_EQ_CONFIG1, 0x16),
28212 + QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_RX_MARGINING_CONFIG3, 0x28),
28213 + QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_G4_PRE_GAIN, 0x2e),
28214 + };
28215 +
28216 ++static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_rc_pcs_misc_tbl[] = {
28217 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_ENDPOINT_REFCLK_DRIVE, 0xc1),
28218 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_OSC_DTCT_ACTIONS, 0x00),
28219 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_PRESET_P10_POST, 0x00),
28220 ++};
28221 ++
28222 ++static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_ep_serdes_tbl[] = {
28223 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_BG_TIMER, 0x02),
28224 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_SYS_CLK_CTRL, 0x07),
28225 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CP_CTRL_MODE0, 0x27),
28226 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CP_CTRL_MODE1, 0x0a),
28227 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_RCTRL_MODE0, 0x17),
28228 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_RCTRL_MODE1, 0x19),
28229 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_CCTRL_MODE0, 0x00),
28230 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_PLL_CCTRL_MODE1, 0x03),
28231 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_SYSCLK_EN_SEL, 0x00),
28232 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP1_MODE0, 0xff),
28233 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP2_MODE0, 0x04),
28234 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP1_MODE1, 0xff),
28235 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_LOCK_CMP2_MODE1, 0x09),
28236 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_DEC_START_MODE0, 0x19),
28237 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_DEC_START_MODE1, 0x28),
28238 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_INTEGLOOP_GAIN0_MODE0, 0xfb),
28239 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_INTEGLOOP_GAIN1_MODE0, 0x01),
28240 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_INTEGLOOP_GAIN0_MODE1, 0xfb),
28241 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_INTEGLOOP_GAIN1_MODE1, 0x01),
28242 ++ QMP_PHY_INIT_CFG(QSERDES_V5_COM_CORE_CLK_EN, 0x60),
28243 ++};
28244 ++
28245 ++static const struct qmp_phy_init_tbl sm8450_qmp_gen4x2_pcie_ep_pcs_misc_tbl[] = {
28246 ++ QMP_PHY_INIT_CFG(QPHY_V5_20_PCS_PCIE_OSC_DTCT_MODE2_CONFIG5, 0x08),
28247 ++};
28248 ++
28249 ++struct qmp_phy_cfg_tables {
28250 ++ const struct qmp_phy_init_tbl *serdes;
28251 ++ int serdes_num;
28252 ++ const struct qmp_phy_init_tbl *tx;
28253 ++ int tx_num;
28254 ++ const struct qmp_phy_init_tbl *rx;
28255 ++ int rx_num;
28256 ++ const struct qmp_phy_init_tbl *pcs;
28257 ++ int pcs_num;
28258 ++ const struct qmp_phy_init_tbl *pcs_misc;
28259 ++ int pcs_misc_num;
28260 ++};
28261 ++
28262 + /* struct qmp_phy_cfg - per-PHY initialization config */
28263 + struct qmp_phy_cfg {
28264 + int lanes;
28265 +
28266 +- /* Init sequence for PHY blocks - serdes, tx, rx, pcs */
28267 +- const struct qmp_phy_init_tbl *serdes_tbl;
28268 +- int serdes_tbl_num;
28269 +- const struct qmp_phy_init_tbl *serdes_tbl_sec;
28270 +- int serdes_tbl_num_sec;
28271 +- const struct qmp_phy_init_tbl *tx_tbl;
28272 +- int tx_tbl_num;
28273 +- const struct qmp_phy_init_tbl *tx_tbl_sec;
28274 +- int tx_tbl_num_sec;
28275 +- const struct qmp_phy_init_tbl *rx_tbl;
28276 +- int rx_tbl_num;
28277 +- const struct qmp_phy_init_tbl *rx_tbl_sec;
28278 +- int rx_tbl_num_sec;
28279 +- const struct qmp_phy_init_tbl *pcs_tbl;
28280 +- int pcs_tbl_num;
28281 +- const struct qmp_phy_init_tbl *pcs_tbl_sec;
28282 +- int pcs_tbl_num_sec;
28283 +- const struct qmp_phy_init_tbl *pcs_misc_tbl;
28284 +- int pcs_misc_tbl_num;
28285 +- const struct qmp_phy_init_tbl *pcs_misc_tbl_sec;
28286 +- int pcs_misc_tbl_num_sec;
28287 ++ /* Main init sequence for PHY blocks - serdes, tx, rx, pcs */
28288 ++ const struct qmp_phy_cfg_tables tables;
28289 ++ /*
28290 ++ * Additional init sequences for PHY blocks, providing additional
28291 ++ * register programming. They are used for providing separate sequences
28292 ++ * for the Root Complex and End Point use cases.
28293 ++ *
28294 ++ * If EP mode is not supported, both tables can be left unset.
28295 ++ */
28296 ++ const struct qmp_phy_cfg_tables *tables_rc;
28297 ++ const struct qmp_phy_cfg_tables *tables_ep;
28298 +
28299 + /* clock ids to be requested */
28300 + const char * const *clk_list;
28301 +@@ -1344,11 +1385,7 @@ struct qmp_phy_cfg {
28302 + /* bit offset of PHYSTATUS in QPHY_PCS_STATUS register */
28303 + unsigned int phy_status;
28304 +
28305 +- /* true, if PHY needs delay after POWER_DOWN */
28306 +- bool has_pwrdn_delay;
28307 +- /* power_down delay in usec */
28308 +- int pwrdn_delay_min;
28309 +- int pwrdn_delay_max;
28310 ++ bool skip_start_delay;
28311 +
28312 + /* QMP PHY pipe clock interface rate */
28313 + unsigned long pipe_clock_rate;
28314 +@@ -1368,6 +1405,7 @@ struct qmp_phy_cfg {
28315 + * @pcs_misc: iomapped memory space for lane's pcs_misc
28316 + * @pipe_clk: pipe clock
28317 + * @qmp: QMP phy to which this lane belongs
28318 ++ * @mode: currently selected PHY mode
28319 + */
28320 + struct qmp_phy {
28321 + struct phy *phy;
28322 +@@ -1381,6 +1419,7 @@ struct qmp_phy {
28323 + void __iomem *pcs_misc;
28324 + struct clk *pipe_clk;
28325 + struct qcom_qmp *qmp;
28326 ++ int mode;
28327 + };
28328 +
28329 + /**
28330 +@@ -1459,14 +1498,16 @@ static const char * const sdm845_pciephy_reset_l[] = {
28331 + static const struct qmp_phy_cfg ipq8074_pciephy_cfg = {
28332 + .lanes = 1,
28333 +
28334 +- .serdes_tbl = ipq8074_pcie_serdes_tbl,
28335 +- .serdes_tbl_num = ARRAY_SIZE(ipq8074_pcie_serdes_tbl),
28336 +- .tx_tbl = ipq8074_pcie_tx_tbl,
28337 +- .tx_tbl_num = ARRAY_SIZE(ipq8074_pcie_tx_tbl),
28338 +- .rx_tbl = ipq8074_pcie_rx_tbl,
28339 +- .rx_tbl_num = ARRAY_SIZE(ipq8074_pcie_rx_tbl),
28340 +- .pcs_tbl = ipq8074_pcie_pcs_tbl,
28341 +- .pcs_tbl_num = ARRAY_SIZE(ipq8074_pcie_pcs_tbl),
28342 ++ .tables = {
28343 ++ .serdes = ipq8074_pcie_serdes_tbl,
28344 ++ .serdes_num = ARRAY_SIZE(ipq8074_pcie_serdes_tbl),
28345 ++ .tx = ipq8074_pcie_tx_tbl,
28346 ++ .tx_num = ARRAY_SIZE(ipq8074_pcie_tx_tbl),
28347 ++ .rx = ipq8074_pcie_rx_tbl,
28348 ++ .rx_num = ARRAY_SIZE(ipq8074_pcie_rx_tbl),
28349 ++ .pcs = ipq8074_pcie_pcs_tbl,
28350 ++ .pcs_num = ARRAY_SIZE(ipq8074_pcie_pcs_tbl),
28351 ++ },
28352 + .clk_list = ipq8074_pciephy_clk_l,
28353 + .num_clks = ARRAY_SIZE(ipq8074_pciephy_clk_l),
28354 + .reset_list = ipq8074_pciephy_reset_l,
28355 +@@ -1478,23 +1519,23 @@ static const struct qmp_phy_cfg ipq8074_pciephy_cfg = {
28356 + .start_ctrl = SERDES_START | PCS_START,
28357 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28358 + .phy_status = PHYSTATUS,
28359 +-
28360 +- .has_pwrdn_delay = true,
28361 +- .pwrdn_delay_min = 995, /* us */
28362 +- .pwrdn_delay_max = 1005, /* us */
28363 + };
28364 +
28365 + static const struct qmp_phy_cfg ipq8074_pciephy_gen3_cfg = {
28366 + .lanes = 1,
28367 +
28368 +- .serdes_tbl = ipq8074_pcie_gen3_serdes_tbl,
28369 +- .serdes_tbl_num = ARRAY_SIZE(ipq8074_pcie_gen3_serdes_tbl),
28370 +- .tx_tbl = ipq8074_pcie_gen3_tx_tbl,
28371 +- .tx_tbl_num = ARRAY_SIZE(ipq8074_pcie_gen3_tx_tbl),
28372 +- .rx_tbl = ipq8074_pcie_gen3_rx_tbl,
28373 +- .rx_tbl_num = ARRAY_SIZE(ipq8074_pcie_gen3_rx_tbl),
28374 +- .pcs_tbl = ipq8074_pcie_gen3_pcs_tbl,
28375 +- .pcs_tbl_num = ARRAY_SIZE(ipq8074_pcie_gen3_pcs_tbl),
28376 ++ .tables = {
28377 ++ .serdes = ipq8074_pcie_gen3_serdes_tbl,
28378 ++ .serdes_num = ARRAY_SIZE(ipq8074_pcie_gen3_serdes_tbl),
28379 ++ .tx = ipq8074_pcie_gen3_tx_tbl,
28380 ++ .tx_num = ARRAY_SIZE(ipq8074_pcie_gen3_tx_tbl),
28381 ++ .rx = ipq8074_pcie_gen3_rx_tbl,
28382 ++ .rx_num = ARRAY_SIZE(ipq8074_pcie_gen3_rx_tbl),
28383 ++ .pcs = ipq8074_pcie_gen3_pcs_tbl,
28384 ++ .pcs_num = ARRAY_SIZE(ipq8074_pcie_gen3_pcs_tbl),
28385 ++ .pcs_misc = ipq8074_pcie_gen3_pcs_misc_tbl,
28386 ++ .pcs_misc_num = ARRAY_SIZE(ipq8074_pcie_gen3_pcs_misc_tbl),
28387 ++ },
28388 + .clk_list = ipq8074_pciephy_clk_l,
28389 + .num_clks = ARRAY_SIZE(ipq8074_pciephy_clk_l),
28390 + .reset_list = ipq8074_pciephy_reset_l,
28391 +@@ -1505,10 +1546,7 @@ static const struct qmp_phy_cfg ipq8074_pciephy_gen3_cfg = {
28392 +
28393 + .start_ctrl = SERDES_START | PCS_START,
28394 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28395 +-
28396 +- .has_pwrdn_delay = true,
28397 +- .pwrdn_delay_min = 995, /* us */
28398 +- .pwrdn_delay_max = 1005, /* us */
28399 ++ .phy_status = PHYSTATUS,
28400 +
28401 + .pipe_clock_rate = 250000000,
28402 + };
28403 +@@ -1516,16 +1554,18 @@ static const struct qmp_phy_cfg ipq8074_pciephy_gen3_cfg = {
28404 + static const struct qmp_phy_cfg ipq6018_pciephy_cfg = {
28405 + .lanes = 1,
28406 +
28407 +- .serdes_tbl = ipq6018_pcie_serdes_tbl,
28408 +- .serdes_tbl_num = ARRAY_SIZE(ipq6018_pcie_serdes_tbl),
28409 +- .tx_tbl = ipq6018_pcie_tx_tbl,
28410 +- .tx_tbl_num = ARRAY_SIZE(ipq6018_pcie_tx_tbl),
28411 +- .rx_tbl = ipq6018_pcie_rx_tbl,
28412 +- .rx_tbl_num = ARRAY_SIZE(ipq6018_pcie_rx_tbl),
28413 +- .pcs_tbl = ipq6018_pcie_pcs_tbl,
28414 +- .pcs_tbl_num = ARRAY_SIZE(ipq6018_pcie_pcs_tbl),
28415 +- .pcs_misc_tbl = ipq6018_pcie_pcs_misc_tbl,
28416 +- .pcs_misc_tbl_num = ARRAY_SIZE(ipq6018_pcie_pcs_misc_tbl),
28417 ++ .tables = {
28418 ++ .serdes = ipq6018_pcie_serdes_tbl,
28419 ++ .serdes_num = ARRAY_SIZE(ipq6018_pcie_serdes_tbl),
28420 ++ .tx = ipq6018_pcie_tx_tbl,
28421 ++ .tx_num = ARRAY_SIZE(ipq6018_pcie_tx_tbl),
28422 ++ .rx = ipq6018_pcie_rx_tbl,
28423 ++ .rx_num = ARRAY_SIZE(ipq6018_pcie_rx_tbl),
28424 ++ .pcs = ipq6018_pcie_pcs_tbl,
28425 ++ .pcs_num = ARRAY_SIZE(ipq6018_pcie_pcs_tbl),
28426 ++ .pcs_misc = ipq6018_pcie_pcs_misc_tbl,
28427 ++ .pcs_misc_num = ARRAY_SIZE(ipq6018_pcie_pcs_misc_tbl),
28428 ++ },
28429 + .clk_list = ipq8074_pciephy_clk_l,
28430 + .num_clks = ARRAY_SIZE(ipq8074_pciephy_clk_l),
28431 + .reset_list = ipq8074_pciephy_reset_l,
28432 +@@ -1536,25 +1576,24 @@ static const struct qmp_phy_cfg ipq6018_pciephy_cfg = {
28433 +
28434 + .start_ctrl = SERDES_START | PCS_START,
28435 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28436 +-
28437 +- .has_pwrdn_delay = true,
28438 +- .pwrdn_delay_min = 995, /* us */
28439 +- .pwrdn_delay_max = 1005, /* us */
28440 ++ .phy_status = PHYSTATUS,
28441 + };
28442 +
28443 + static const struct qmp_phy_cfg sdm845_qmp_pciephy_cfg = {
28444 + .lanes = 1,
28445 +
28446 +- .serdes_tbl = sdm845_qmp_pcie_serdes_tbl,
28447 +- .serdes_tbl_num = ARRAY_SIZE(sdm845_qmp_pcie_serdes_tbl),
28448 +- .tx_tbl = sdm845_qmp_pcie_tx_tbl,
28449 +- .tx_tbl_num = ARRAY_SIZE(sdm845_qmp_pcie_tx_tbl),
28450 +- .rx_tbl = sdm845_qmp_pcie_rx_tbl,
28451 +- .rx_tbl_num = ARRAY_SIZE(sdm845_qmp_pcie_rx_tbl),
28452 +- .pcs_tbl = sdm845_qmp_pcie_pcs_tbl,
28453 +- .pcs_tbl_num = ARRAY_SIZE(sdm845_qmp_pcie_pcs_tbl),
28454 +- .pcs_misc_tbl = sdm845_qmp_pcie_pcs_misc_tbl,
28455 +- .pcs_misc_tbl_num = ARRAY_SIZE(sdm845_qmp_pcie_pcs_misc_tbl),
28456 ++ .tables = {
28457 ++ .serdes = sdm845_qmp_pcie_serdes_tbl,
28458 ++ .serdes_num = ARRAY_SIZE(sdm845_qmp_pcie_serdes_tbl),
28459 ++ .tx = sdm845_qmp_pcie_tx_tbl,
28460 ++ .tx_num = ARRAY_SIZE(sdm845_qmp_pcie_tx_tbl),
28461 ++ .rx = sdm845_qmp_pcie_rx_tbl,
28462 ++ .rx_num = ARRAY_SIZE(sdm845_qmp_pcie_rx_tbl),
28463 ++ .pcs = sdm845_qmp_pcie_pcs_tbl,
28464 ++ .pcs_num = ARRAY_SIZE(sdm845_qmp_pcie_pcs_tbl),
28465 ++ .pcs_misc = sdm845_qmp_pcie_pcs_misc_tbl,
28466 ++ .pcs_misc_num = ARRAY_SIZE(sdm845_qmp_pcie_pcs_misc_tbl),
28467 ++ },
28468 + .clk_list = sdm845_pciephy_clk_l,
28469 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28470 + .reset_list = sdm845_pciephy_reset_l,
28471 +@@ -1566,23 +1605,21 @@ static const struct qmp_phy_cfg sdm845_qmp_pciephy_cfg = {
28472 + .start_ctrl = PCS_START | SERDES_START,
28473 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28474 + .phy_status = PHYSTATUS,
28475 +-
28476 +- .has_pwrdn_delay = true,
28477 +- .pwrdn_delay_min = 995, /* us */
28478 +- .pwrdn_delay_max = 1005, /* us */
28479 + };
28480 +
28481 + static const struct qmp_phy_cfg sdm845_qhp_pciephy_cfg = {
28482 + .lanes = 1,
28483 +
28484 +- .serdes_tbl = sdm845_qhp_pcie_serdes_tbl,
28485 +- .serdes_tbl_num = ARRAY_SIZE(sdm845_qhp_pcie_serdes_tbl),
28486 +- .tx_tbl = sdm845_qhp_pcie_tx_tbl,
28487 +- .tx_tbl_num = ARRAY_SIZE(sdm845_qhp_pcie_tx_tbl),
28488 +- .rx_tbl = sdm845_qhp_pcie_rx_tbl,
28489 +- .rx_tbl_num = ARRAY_SIZE(sdm845_qhp_pcie_rx_tbl),
28490 +- .pcs_tbl = sdm845_qhp_pcie_pcs_tbl,
28491 +- .pcs_tbl_num = ARRAY_SIZE(sdm845_qhp_pcie_pcs_tbl),
28492 ++ .tables = {
28493 ++ .serdes = sdm845_qhp_pcie_serdes_tbl,
28494 ++ .serdes_num = ARRAY_SIZE(sdm845_qhp_pcie_serdes_tbl),
28495 ++ .tx = sdm845_qhp_pcie_tx_tbl,
28496 ++ .tx_num = ARRAY_SIZE(sdm845_qhp_pcie_tx_tbl),
28497 ++ .rx = sdm845_qhp_pcie_rx_tbl,
28498 ++ .rx_num = ARRAY_SIZE(sdm845_qhp_pcie_rx_tbl),
28499 ++ .pcs = sdm845_qhp_pcie_pcs_tbl,
28500 ++ .pcs_num = ARRAY_SIZE(sdm845_qhp_pcie_pcs_tbl),
28501 ++ },
28502 + .clk_list = sdm845_pciephy_clk_l,
28503 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28504 + .reset_list = sdm845_pciephy_reset_l,
28505 +@@ -1594,33 +1631,33 @@ static const struct qmp_phy_cfg sdm845_qhp_pciephy_cfg = {
28506 + .start_ctrl = PCS_START | SERDES_START,
28507 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28508 + .phy_status = PHYSTATUS,
28509 +-
28510 +- .has_pwrdn_delay = true,
28511 +- .pwrdn_delay_min = 995, /* us */
28512 +- .pwrdn_delay_max = 1005, /* us */
28513 + };
28514 +
28515 + static const struct qmp_phy_cfg sm8250_qmp_gen3x1_pciephy_cfg = {
28516 + .lanes = 1,
28517 +
28518 +- .serdes_tbl = sm8250_qmp_pcie_serdes_tbl,
28519 +- .serdes_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_serdes_tbl),
28520 +- .serdes_tbl_sec = sm8250_qmp_gen3x1_pcie_serdes_tbl,
28521 +- .serdes_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_serdes_tbl),
28522 +- .tx_tbl = sm8250_qmp_pcie_tx_tbl,
28523 +- .tx_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_tx_tbl),
28524 +- .rx_tbl = sm8250_qmp_pcie_rx_tbl,
28525 +- .rx_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_rx_tbl),
28526 +- .rx_tbl_sec = sm8250_qmp_gen3x1_pcie_rx_tbl,
28527 +- .rx_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_rx_tbl),
28528 +- .pcs_tbl = sm8250_qmp_pcie_pcs_tbl,
28529 +- .pcs_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_tbl),
28530 +- .pcs_tbl_sec = sm8250_qmp_gen3x1_pcie_pcs_tbl,
28531 +- .pcs_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_pcs_tbl),
28532 +- .pcs_misc_tbl = sm8250_qmp_pcie_pcs_misc_tbl,
28533 +- .pcs_misc_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_misc_tbl),
28534 +- .pcs_misc_tbl_sec = sm8250_qmp_gen3x1_pcie_pcs_misc_tbl,
28535 +- .pcs_misc_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_pcs_misc_tbl),
28536 ++ .tables = {
28537 ++ .serdes = sm8250_qmp_pcie_serdes_tbl,
28538 ++ .serdes_num = ARRAY_SIZE(sm8250_qmp_pcie_serdes_tbl),
28539 ++ .tx = sm8250_qmp_pcie_tx_tbl,
28540 ++ .tx_num = ARRAY_SIZE(sm8250_qmp_pcie_tx_tbl),
28541 ++ .rx = sm8250_qmp_pcie_rx_tbl,
28542 ++ .rx_num = ARRAY_SIZE(sm8250_qmp_pcie_rx_tbl),
28543 ++ .pcs = sm8250_qmp_pcie_pcs_tbl,
28544 ++ .pcs_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_tbl),
28545 ++ .pcs_misc = sm8250_qmp_pcie_pcs_misc_tbl,
28546 ++ .pcs_misc_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_misc_tbl),
28547 ++ },
28548 ++ .tables_rc = &(const struct qmp_phy_cfg_tables) {
28549 ++ .serdes = sm8250_qmp_gen3x1_pcie_serdes_tbl,
28550 ++ .serdes_num = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_serdes_tbl),
28551 ++ .rx = sm8250_qmp_gen3x1_pcie_rx_tbl,
28552 ++ .rx_num = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_rx_tbl),
28553 ++ .pcs = sm8250_qmp_gen3x1_pcie_pcs_tbl,
28554 ++ .pcs_num = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_pcs_tbl),
28555 ++ .pcs_misc = sm8250_qmp_gen3x1_pcie_pcs_misc_tbl,
28556 ++ .pcs_misc_num = ARRAY_SIZE(sm8250_qmp_gen3x1_pcie_pcs_misc_tbl),
28557 ++ },
28558 + .clk_list = sdm845_pciephy_clk_l,
28559 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28560 + .reset_list = sdm845_pciephy_reset_l,
28561 +@@ -1632,33 +1669,33 @@ static const struct qmp_phy_cfg sm8250_qmp_gen3x1_pciephy_cfg = {
28562 + .start_ctrl = PCS_START | SERDES_START,
28563 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28564 + .phy_status = PHYSTATUS,
28565 +-
28566 +- .has_pwrdn_delay = true,
28567 +- .pwrdn_delay_min = 995, /* us */
28568 +- .pwrdn_delay_max = 1005, /* us */
28569 + };
28570 +
28571 + static const struct qmp_phy_cfg sm8250_qmp_gen3x2_pciephy_cfg = {
28572 + .lanes = 2,
28573 +
28574 +- .serdes_tbl = sm8250_qmp_pcie_serdes_tbl,
28575 +- .serdes_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_serdes_tbl),
28576 +- .tx_tbl = sm8250_qmp_pcie_tx_tbl,
28577 +- .tx_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_tx_tbl),
28578 +- .tx_tbl_sec = sm8250_qmp_gen3x2_pcie_tx_tbl,
28579 +- .tx_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_tx_tbl),
28580 +- .rx_tbl = sm8250_qmp_pcie_rx_tbl,
28581 +- .rx_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_rx_tbl),
28582 +- .rx_tbl_sec = sm8250_qmp_gen3x2_pcie_rx_tbl,
28583 +- .rx_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_rx_tbl),
28584 +- .pcs_tbl = sm8250_qmp_pcie_pcs_tbl,
28585 +- .pcs_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_tbl),
28586 +- .pcs_tbl_sec = sm8250_qmp_gen3x2_pcie_pcs_tbl,
28587 +- .pcs_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_pcs_tbl),
28588 +- .pcs_misc_tbl = sm8250_qmp_pcie_pcs_misc_tbl,
28589 +- .pcs_misc_tbl_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_misc_tbl),
28590 +- .pcs_misc_tbl_sec = sm8250_qmp_gen3x2_pcie_pcs_misc_tbl,
28591 +- .pcs_misc_tbl_num_sec = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_pcs_misc_tbl),
28592 ++ .tables = {
28593 ++ .serdes = sm8250_qmp_pcie_serdes_tbl,
28594 ++ .serdes_num = ARRAY_SIZE(sm8250_qmp_pcie_serdes_tbl),
28595 ++ .tx = sm8250_qmp_pcie_tx_tbl,
28596 ++ .tx_num = ARRAY_SIZE(sm8250_qmp_pcie_tx_tbl),
28597 ++ .rx = sm8250_qmp_pcie_rx_tbl,
28598 ++ .rx_num = ARRAY_SIZE(sm8250_qmp_pcie_rx_tbl),
28599 ++ .pcs = sm8250_qmp_pcie_pcs_tbl,
28600 ++ .pcs_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_tbl),
28601 ++ .pcs_misc = sm8250_qmp_pcie_pcs_misc_tbl,
28602 ++ .pcs_misc_num = ARRAY_SIZE(sm8250_qmp_pcie_pcs_misc_tbl),
28603 ++ },
28604 ++ .tables_rc = &(const struct qmp_phy_cfg_tables) {
28605 ++ .tx = sm8250_qmp_gen3x2_pcie_tx_tbl,
28606 ++ .tx_num = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_tx_tbl),
28607 ++ .rx = sm8250_qmp_gen3x2_pcie_rx_tbl,
28608 ++ .rx_num = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_rx_tbl),
28609 ++ .pcs = sm8250_qmp_gen3x2_pcie_pcs_tbl,
28610 ++ .pcs_num = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_pcs_tbl),
28611 ++ .pcs_misc = sm8250_qmp_gen3x2_pcie_pcs_misc_tbl,
28612 ++ .pcs_misc_num = ARRAY_SIZE(sm8250_qmp_gen3x2_pcie_pcs_misc_tbl),
28613 ++ },
28614 + .clk_list = sdm845_pciephy_clk_l,
28615 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28616 + .reset_list = sdm845_pciephy_reset_l,
28617 +@@ -1670,23 +1707,21 @@ static const struct qmp_phy_cfg sm8250_qmp_gen3x2_pciephy_cfg = {
28618 + .start_ctrl = PCS_START | SERDES_START,
28619 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28620 + .phy_status = PHYSTATUS,
28621 +-
28622 +- .has_pwrdn_delay = true,
28623 +- .pwrdn_delay_min = 995, /* us */
28624 +- .pwrdn_delay_max = 1005, /* us */
28625 + };
28626 +
28627 + static const struct qmp_phy_cfg msm8998_pciephy_cfg = {
28628 + .lanes = 1,
28629 +
28630 +- .serdes_tbl = msm8998_pcie_serdes_tbl,
28631 +- .serdes_tbl_num = ARRAY_SIZE(msm8998_pcie_serdes_tbl),
28632 +- .tx_tbl = msm8998_pcie_tx_tbl,
28633 +- .tx_tbl_num = ARRAY_SIZE(msm8998_pcie_tx_tbl),
28634 +- .rx_tbl = msm8998_pcie_rx_tbl,
28635 +- .rx_tbl_num = ARRAY_SIZE(msm8998_pcie_rx_tbl),
28636 +- .pcs_tbl = msm8998_pcie_pcs_tbl,
28637 +- .pcs_tbl_num = ARRAY_SIZE(msm8998_pcie_pcs_tbl),
28638 ++ .tables = {
28639 ++ .serdes = msm8998_pcie_serdes_tbl,
28640 ++ .serdes_num = ARRAY_SIZE(msm8998_pcie_serdes_tbl),
28641 ++ .tx = msm8998_pcie_tx_tbl,
28642 ++ .tx_num = ARRAY_SIZE(msm8998_pcie_tx_tbl),
28643 ++ .rx = msm8998_pcie_rx_tbl,
28644 ++ .rx_num = ARRAY_SIZE(msm8998_pcie_rx_tbl),
28645 ++ .pcs = msm8998_pcie_pcs_tbl,
28646 ++ .pcs_num = ARRAY_SIZE(msm8998_pcie_pcs_tbl),
28647 ++ },
28648 + .clk_list = msm8996_phy_clk_l,
28649 + .num_clks = ARRAY_SIZE(msm8996_phy_clk_l),
28650 + .reset_list = ipq8074_pciephy_reset_l,
28651 +@@ -1698,21 +1733,25 @@ static const struct qmp_phy_cfg msm8998_pciephy_cfg = {
28652 + .start_ctrl = SERDES_START | PCS_START,
28653 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28654 + .phy_status = PHYSTATUS,
28655 ++
28656 ++ .skip_start_delay = true,
28657 + };
28658 +
28659 + static const struct qmp_phy_cfg sc8180x_pciephy_cfg = {
28660 + .lanes = 1,
28661 +
28662 +- .serdes_tbl = sc8180x_qmp_pcie_serdes_tbl,
28663 +- .serdes_tbl_num = ARRAY_SIZE(sc8180x_qmp_pcie_serdes_tbl),
28664 +- .tx_tbl = sc8180x_qmp_pcie_tx_tbl,
28665 +- .tx_tbl_num = ARRAY_SIZE(sc8180x_qmp_pcie_tx_tbl),
28666 +- .rx_tbl = sc8180x_qmp_pcie_rx_tbl,
28667 +- .rx_tbl_num = ARRAY_SIZE(sc8180x_qmp_pcie_rx_tbl),
28668 +- .pcs_tbl = sc8180x_qmp_pcie_pcs_tbl,
28669 +- .pcs_tbl_num = ARRAY_SIZE(sc8180x_qmp_pcie_pcs_tbl),
28670 +- .pcs_misc_tbl = sc8180x_qmp_pcie_pcs_misc_tbl,
28671 +- .pcs_misc_tbl_num = ARRAY_SIZE(sc8180x_qmp_pcie_pcs_misc_tbl),
28672 ++ .tables = {
28673 ++ .serdes = sc8180x_qmp_pcie_serdes_tbl,
28674 ++ .serdes_num = ARRAY_SIZE(sc8180x_qmp_pcie_serdes_tbl),
28675 ++ .tx = sc8180x_qmp_pcie_tx_tbl,
28676 ++ .tx_num = ARRAY_SIZE(sc8180x_qmp_pcie_tx_tbl),
28677 ++ .rx = sc8180x_qmp_pcie_rx_tbl,
28678 ++ .rx_num = ARRAY_SIZE(sc8180x_qmp_pcie_rx_tbl),
28679 ++ .pcs = sc8180x_qmp_pcie_pcs_tbl,
28680 ++ .pcs_num = ARRAY_SIZE(sc8180x_qmp_pcie_pcs_tbl),
28681 ++ .pcs_misc = sc8180x_qmp_pcie_pcs_misc_tbl,
28682 ++ .pcs_misc_num = ARRAY_SIZE(sc8180x_qmp_pcie_pcs_misc_tbl),
28683 ++ },
28684 + .clk_list = sdm845_pciephy_clk_l,
28685 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28686 + .reset_list = sdm845_pciephy_reset_l,
28687 +@@ -1723,25 +1762,24 @@ static const struct qmp_phy_cfg sc8180x_pciephy_cfg = {
28688 +
28689 + .start_ctrl = PCS_START | SERDES_START,
28690 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28691 +-
28692 +- .has_pwrdn_delay = true,
28693 +- .pwrdn_delay_min = 995, /* us */
28694 +- .pwrdn_delay_max = 1005, /* us */
28695 ++ .phy_status = PHYSTATUS,
28696 + };
28697 +
28698 + static const struct qmp_phy_cfg sdx55_qmp_pciephy_cfg = {
28699 + .lanes = 2,
28700 +
28701 +- .serdes_tbl = sdx55_qmp_pcie_serdes_tbl,
28702 +- .serdes_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_serdes_tbl),
28703 +- .tx_tbl = sdx55_qmp_pcie_tx_tbl,
28704 +- .tx_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_tx_tbl),
28705 +- .rx_tbl = sdx55_qmp_pcie_rx_tbl,
28706 +- .rx_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_rx_tbl),
28707 +- .pcs_tbl = sdx55_qmp_pcie_pcs_tbl,
28708 +- .pcs_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_pcs_tbl),
28709 +- .pcs_misc_tbl = sdx55_qmp_pcie_pcs_misc_tbl,
28710 +- .pcs_misc_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_pcs_misc_tbl),
28711 ++ .tables = {
28712 ++ .serdes = sdx55_qmp_pcie_serdes_tbl,
28713 ++ .serdes_num = ARRAY_SIZE(sdx55_qmp_pcie_serdes_tbl),
28714 ++ .tx = sdx55_qmp_pcie_tx_tbl,
28715 ++ .tx_num = ARRAY_SIZE(sdx55_qmp_pcie_tx_tbl),
28716 ++ .rx = sdx55_qmp_pcie_rx_tbl,
28717 ++ .rx_num = ARRAY_SIZE(sdx55_qmp_pcie_rx_tbl),
28718 ++ .pcs = sdx55_qmp_pcie_pcs_tbl,
28719 ++ .pcs_num = ARRAY_SIZE(sdx55_qmp_pcie_pcs_tbl),
28720 ++ .pcs_misc = sdx55_qmp_pcie_pcs_misc_tbl,
28721 ++ .pcs_misc_num = ARRAY_SIZE(sdx55_qmp_pcie_pcs_misc_tbl),
28722 ++ },
28723 + .clk_list = sdm845_pciephy_clk_l,
28724 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28725 + .reset_list = sdm845_pciephy_reset_l,
28726 +@@ -1753,25 +1791,23 @@ static const struct qmp_phy_cfg sdx55_qmp_pciephy_cfg = {
28727 + .start_ctrl = PCS_START | SERDES_START,
28728 + .pwrdn_ctrl = SW_PWRDN,
28729 + .phy_status = PHYSTATUS_4_20,
28730 +-
28731 +- .has_pwrdn_delay = true,
28732 +- .pwrdn_delay_min = 995, /* us */
28733 +- .pwrdn_delay_max = 1005, /* us */
28734 + };
28735 +
28736 + static const struct qmp_phy_cfg sm8450_qmp_gen3x1_pciephy_cfg = {
28737 + .lanes = 1,
28738 +
28739 +- .serdes_tbl = sm8450_qmp_gen3x1_pcie_serdes_tbl,
28740 +- .serdes_tbl_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_serdes_tbl),
28741 +- .tx_tbl = sm8450_qmp_gen3x1_pcie_tx_tbl,
28742 +- .tx_tbl_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_tx_tbl),
28743 +- .rx_tbl = sm8450_qmp_gen3x1_pcie_rx_tbl,
28744 +- .rx_tbl_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_rx_tbl),
28745 +- .pcs_tbl = sm8450_qmp_gen3x1_pcie_pcs_tbl,
28746 +- .pcs_tbl_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_pcs_tbl),
28747 +- .pcs_misc_tbl = sm8450_qmp_gen3x1_pcie_pcs_misc_tbl,
28748 +- .pcs_misc_tbl_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_pcs_misc_tbl),
28749 ++ .tables = {
28750 ++ .serdes = sm8450_qmp_gen3x1_pcie_serdes_tbl,
28751 ++ .serdes_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_serdes_tbl),
28752 ++ .tx = sm8450_qmp_gen3x1_pcie_tx_tbl,
28753 ++ .tx_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_tx_tbl),
28754 ++ .rx = sm8450_qmp_gen3x1_pcie_rx_tbl,
28755 ++ .rx_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_rx_tbl),
28756 ++ .pcs = sm8450_qmp_gen3x1_pcie_pcs_tbl,
28757 ++ .pcs_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_pcs_tbl),
28758 ++ .pcs_misc = sm8450_qmp_gen3x1_pcie_pcs_misc_tbl,
28759 ++ .pcs_misc_num = ARRAY_SIZE(sm8450_qmp_gen3x1_pcie_pcs_misc_tbl),
28760 ++ },
28761 + .clk_list = sdm845_pciephy_clk_l,
28762 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28763 + .reset_list = sdm845_pciephy_reset_l,
28764 +@@ -1783,25 +1819,38 @@ static const struct qmp_phy_cfg sm8450_qmp_gen3x1_pciephy_cfg = {
28765 + .start_ctrl = SERDES_START | PCS_START,
28766 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28767 + .phy_status = PHYSTATUS,
28768 +-
28769 +- .has_pwrdn_delay = true,
28770 +- .pwrdn_delay_min = 995, /* us */
28771 +- .pwrdn_delay_max = 1005, /* us */
28772 + };
28773 +
28774 + static const struct qmp_phy_cfg sm8450_qmp_gen4x2_pciephy_cfg = {
28775 + .lanes = 2,
28776 +
28777 +- .serdes_tbl = sm8450_qmp_gen4x2_pcie_serdes_tbl,
28778 +- .serdes_tbl_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_serdes_tbl),
28779 +- .tx_tbl = sm8450_qmp_gen4x2_pcie_tx_tbl,
28780 +- .tx_tbl_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_tx_tbl),
28781 +- .rx_tbl = sm8450_qmp_gen4x2_pcie_rx_tbl,
28782 +- .rx_tbl_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_rx_tbl),
28783 +- .pcs_tbl = sm8450_qmp_gen4x2_pcie_pcs_tbl,
28784 +- .pcs_tbl_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_pcs_tbl),
28785 +- .pcs_misc_tbl = sm8450_qmp_gen4x2_pcie_pcs_misc_tbl,
28786 +- .pcs_misc_tbl_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_pcs_misc_tbl),
28787 ++ .tables = {
28788 ++ .serdes = sm8450_qmp_gen4x2_pcie_serdes_tbl,
28789 ++ .serdes_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_serdes_tbl),
28790 ++ .tx = sm8450_qmp_gen4x2_pcie_tx_tbl,
28791 ++ .tx_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_tx_tbl),
28792 ++ .rx = sm8450_qmp_gen4x2_pcie_rx_tbl,
28793 ++ .rx_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_rx_tbl),
28794 ++ .pcs = sm8450_qmp_gen4x2_pcie_pcs_tbl,
28795 ++ .pcs_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_pcs_tbl),
28796 ++ .pcs_misc = sm8450_qmp_gen4x2_pcie_pcs_misc_tbl,
28797 ++ .pcs_misc_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_pcs_misc_tbl),
28798 ++ },
28799 ++
28800 ++ .tables_rc = &(const struct qmp_phy_cfg_tables) {
28801 ++ .serdes = sm8450_qmp_gen4x2_pcie_rc_serdes_tbl,
28802 ++ .serdes_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_rc_serdes_tbl),
28803 ++ .pcs_misc = sm8450_qmp_gen4x2_pcie_rc_pcs_misc_tbl,
28804 ++ .pcs_misc_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_rc_pcs_misc_tbl),
28805 ++ },
28806 ++
28807 ++ .tables_ep = &(const struct qmp_phy_cfg_tables) {
28808 ++ .serdes = sm8450_qmp_gen4x2_pcie_ep_serdes_tbl,
28809 ++ .serdes_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_ep_serdes_tbl),
28810 ++ .pcs_misc = sm8450_qmp_gen4x2_pcie_ep_pcs_misc_tbl,
28811 ++ .pcs_misc_num = ARRAY_SIZE(sm8450_qmp_gen4x2_pcie_ep_pcs_misc_tbl),
28812 ++ },
28813 ++
28814 + .clk_list = sdm845_pciephy_clk_l,
28815 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l),
28816 + .reset_list = sdm845_pciephy_reset_l,
28817 +@@ -1813,10 +1862,6 @@ static const struct qmp_phy_cfg sm8450_qmp_gen4x2_pciephy_cfg = {
28818 + .start_ctrl = SERDES_START | PCS_START,
28819 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL,
28820 + .phy_status = PHYSTATUS_4_20,
28821 +-
28822 +- .has_pwrdn_delay = true,
28823 +- .pwrdn_delay_min = 995, /* us */
28824 +- .pwrdn_delay_max = 1005, /* us */
28825 + };
28826 +
28827 + static void qmp_pcie_configure_lane(void __iomem *base,
28828 +@@ -1850,17 +1895,49 @@ static void qmp_pcie_configure(void __iomem *base,
28829 + qmp_pcie_configure_lane(base, regs, tbl, num, 0xff);
28830 + }
28831 +
28832 +-static int qmp_pcie_serdes_init(struct qmp_phy *qphy)
28833 ++static void qmp_pcie_serdes_init(struct qmp_phy *qphy, const struct qmp_phy_cfg_tables *tables)
28834 + {
28835 + const struct qmp_phy_cfg *cfg = qphy->cfg;
28836 + void __iomem *serdes = qphy->serdes;
28837 +- const struct qmp_phy_init_tbl *serdes_tbl = cfg->serdes_tbl;
28838 +- int serdes_tbl_num = cfg->serdes_tbl_num;
28839 +
28840 +- qmp_pcie_configure(serdes, cfg->regs, serdes_tbl, serdes_tbl_num);
28841 +- qmp_pcie_configure(serdes, cfg->regs, cfg->serdes_tbl_sec, cfg->serdes_tbl_num_sec);
28842 ++ if (!tables)
28843 ++ return;
28844 +
28845 +- return 0;
28846 ++ qmp_pcie_configure(serdes, cfg->regs, tables->serdes, tables->serdes_num);
28847 ++}
28848 ++
28849 ++static void qmp_pcie_lanes_init(struct qmp_phy *qphy, const struct qmp_phy_cfg_tables *tables)
28850 ++{
28851 ++ const struct qmp_phy_cfg *cfg = qphy->cfg;
28852 ++ void __iomem *tx = qphy->tx;
28853 ++ void __iomem *rx = qphy->rx;
28854 ++
28855 ++ if (!tables)
28856 ++ return;
28857 ++
28858 ++ qmp_pcie_configure_lane(tx, cfg->regs, tables->tx, tables->tx_num, 1);
28859 ++
28860 ++ if (cfg->lanes >= 2)
28861 ++ qmp_pcie_configure_lane(qphy->tx2, cfg->regs, tables->tx, tables->tx_num, 2);
28862 ++
28863 ++ qmp_pcie_configure_lane(rx, cfg->regs, tables->rx, tables->rx_num, 1);
28864 ++ if (cfg->lanes >= 2)
28865 ++ qmp_pcie_configure_lane(qphy->rx2, cfg->regs, tables->rx, tables->rx_num, 2);
28866 ++}
28867 ++
28868 ++static void qmp_pcie_pcs_init(struct qmp_phy *qphy, const struct qmp_phy_cfg_tables *tables)
28869 ++{
28870 ++ const struct qmp_phy_cfg *cfg = qphy->cfg;
28871 ++ void __iomem *pcs = qphy->pcs;
28872 ++ void __iomem *pcs_misc = qphy->pcs_misc;
28873 ++
28874 ++ if (!tables)
28875 ++ return;
28876 ++
28877 ++ qmp_pcie_configure(pcs, cfg->regs,
28878 ++ tables->pcs, tables->pcs_num);
28879 ++ qmp_pcie_configure(pcs_misc, cfg->regs,
28880 ++ tables->pcs_misc, tables->pcs_misc_num);
28881 + }
28882 +
28883 + static int qmp_pcie_init(struct phy *phy)
28884 +@@ -1932,15 +2009,19 @@ static int qmp_pcie_power_on(struct phy *phy)
28885 + struct qmp_phy *qphy = phy_get_drvdata(phy);
28886 + struct qcom_qmp *qmp = qphy->qmp;
28887 + const struct qmp_phy_cfg *cfg = qphy->cfg;
28888 +- void __iomem *tx = qphy->tx;
28889 +- void __iomem *rx = qphy->rx;
28890 ++ const struct qmp_phy_cfg_tables *mode_tables;
28891 + void __iomem *pcs = qphy->pcs;
28892 +- void __iomem *pcs_misc = qphy->pcs_misc;
28893 + void __iomem *status;
28894 + unsigned int mask, val, ready;
28895 + int ret;
28896 +
28897 +- qmp_pcie_serdes_init(qphy);
28898 ++ if (qphy->mode == PHY_MODE_PCIE_RC)
28899 ++ mode_tables = cfg->tables_rc;
28900 ++ else
28901 ++ mode_tables = cfg->tables_ep;
28902 ++
28903 ++ qmp_pcie_serdes_init(qphy, &cfg->tables);
28904 ++ qmp_pcie_serdes_init(qphy, mode_tables);
28905 +
28906 + ret = clk_prepare_enable(qphy->pipe_clk);
28907 + if (ret) {
28908 +@@ -1949,40 +2030,11 @@ static int qmp_pcie_power_on(struct phy *phy)
28909 + }
28910 +
28911 + /* Tx, Rx, and PCS configurations */
28912 +- qmp_pcie_configure_lane(tx, cfg->regs, cfg->tx_tbl, cfg->tx_tbl_num, 1);
28913 +- qmp_pcie_configure_lane(tx, cfg->regs, cfg->tx_tbl_sec, cfg->tx_tbl_num_sec, 1);
28914 +-
28915 +- if (cfg->lanes >= 2) {
28916 +- qmp_pcie_configure_lane(qphy->tx2, cfg->regs, cfg->tx_tbl,
28917 +- cfg->tx_tbl_num, 2);
28918 +- qmp_pcie_configure_lane(qphy->tx2, cfg->regs, cfg->tx_tbl_sec,
28919 +- cfg->tx_tbl_num_sec, 2);
28920 +- }
28921 +-
28922 +- qmp_pcie_configure_lane(rx, cfg->regs, cfg->rx_tbl, cfg->rx_tbl_num, 1);
28923 +- qmp_pcie_configure_lane(rx, cfg->regs, cfg->rx_tbl_sec, cfg->rx_tbl_num_sec, 1);
28924 +-
28925 +- if (cfg->lanes >= 2) {
28926 +- qmp_pcie_configure_lane(qphy->rx2, cfg->regs, cfg->rx_tbl,
28927 +- cfg->rx_tbl_num, 2);
28928 +- qmp_pcie_configure_lane(qphy->rx2, cfg->regs, cfg->rx_tbl_sec,
28929 +- cfg->rx_tbl_num_sec, 2);
28930 +- }
28931 ++ qmp_pcie_lanes_init(qphy, &cfg->tables);
28932 ++ qmp_pcie_lanes_init(qphy, mode_tables);
28933 +
28934 +- qmp_pcie_configure(pcs, cfg->regs, cfg->pcs_tbl, cfg->pcs_tbl_num);
28935 +- qmp_pcie_configure(pcs, cfg->regs, cfg->pcs_tbl_sec, cfg->pcs_tbl_num_sec);
28936 +-
28937 +- qmp_pcie_configure(pcs_misc, cfg->regs, cfg->pcs_misc_tbl, cfg->pcs_misc_tbl_num);
28938 +- qmp_pcie_configure(pcs_misc, cfg->regs, cfg->pcs_misc_tbl_sec, cfg->pcs_misc_tbl_num_sec);
28939 +-
28940 +- /*
28941 +- * Pull out PHY from POWER DOWN state.
28942 +- * This is active low enable signal to power-down PHY.
28943 +- */
28944 +- qphy_setbits(pcs, QPHY_V2_PCS_POWER_DOWN_CONTROL, cfg->pwrdn_ctrl);
28945 +-
28946 +- if (cfg->has_pwrdn_delay)
28947 +- usleep_range(cfg->pwrdn_delay_min, cfg->pwrdn_delay_max);
28948 ++ qmp_pcie_pcs_init(qphy, &cfg->tables);
28949 ++ qmp_pcie_pcs_init(qphy, mode_tables);
28950 +
28951 + /* Pull PHY out of reset state */
28952 + qphy_clrbits(pcs, cfg->regs[QPHY_SW_RESET], SW_RESET);
28953 +@@ -1990,6 +2042,9 @@ static int qmp_pcie_power_on(struct phy *phy)
28954 + /* start SerDes and Phy-Coding-Sublayer */
28955 + qphy_setbits(pcs, cfg->regs[QPHY_START_CTRL], cfg->start_ctrl);
28956 +
28957 ++ if (!cfg->skip_start_delay)
28958 ++ usleep_range(1000, 1200);
28959 ++
28960 + status = pcs + cfg->regs[QPHY_PCS_STATUS];
28961 + mask = cfg->phy_status;
28962 + ready = 0;
28963 +@@ -2060,6 +2115,23 @@ static int qmp_pcie_disable(struct phy *phy)
28964 + return qmp_pcie_exit(phy);
28965 + }
28966 +
28967 ++static int qmp_pcie_set_mode(struct phy *phy, enum phy_mode mode, int submode)
28968 ++{
28969 ++ struct qmp_phy *qphy = phy_get_drvdata(phy);
28970 ++
28971 ++ switch (submode) {
28972 ++ case PHY_MODE_PCIE_RC:
28973 ++ case PHY_MODE_PCIE_EP:
28974 ++ qphy->mode = submode;
28975 ++ break;
28976 ++ default:
28977 ++ dev_err(&phy->dev, "Unsupported submode %d\n", submode);
28978 ++ return -EINVAL;
28979 ++ }
28980 ++
28981 ++ return 0;
28982 ++}
28983 ++
28984 + static int qmp_pcie_vreg_init(struct device *dev, const struct qmp_phy_cfg *cfg)
28985 + {
28986 + struct qcom_qmp *qmp = dev_get_drvdata(dev);
28987 +@@ -2183,6 +2255,7 @@ static int phy_pipe_clk_register(struct qcom_qmp *qmp, struct device_node *np)
28988 + static const struct phy_ops qmp_pcie_ops = {
28989 + .power_on = qmp_pcie_enable,
28990 + .power_off = qmp_pcie_disable,
28991 ++ .set_mode = qmp_pcie_set_mode,
28992 + .owner = THIS_MODULE,
28993 + };
28994 +
28995 +@@ -2198,6 +2271,8 @@ static int qmp_pcie_create(struct device *dev, struct device_node *np, int id,
28996 + if (!qphy)
28997 + return -ENOMEM;
28998 +
28999 ++ qphy->mode = PHY_MODE_PCIE_RC;
29000 ++
29001 + qphy->cfg = cfg;
29002 + qphy->serdes = serdes;
29003 + /*
29004 +@@ -2240,7 +2315,9 @@ static int qmp_pcie_create(struct device *dev, struct device_node *np, int id,
29005 + qphy->pcs_misc = qphy->pcs + 0x400;
29006 +
29007 + if (IS_ERR(qphy->pcs_misc)) {
29008 +- if (cfg->pcs_misc_tbl || cfg->pcs_misc_tbl_sec)
29009 ++ if (cfg->tables.pcs_misc ||
29010 ++ (cfg->tables_rc && cfg->tables_rc->pcs_misc) ||
29011 ++ (cfg->tables_ep && cfg->tables_ep->pcs_misc))
29012 + return PTR_ERR(qphy->pcs_misc);
29013 + }
29014 +
29015 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcs-pcie-v5_20.h b/drivers/phy/qualcomm/phy-qcom-qmp-pcs-pcie-v5_20.h
29016 +index 1eedf50cf9cbc..3d9713d348fe6 100644
29017 +--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcs-pcie-v5_20.h
29018 ++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcs-pcie-v5_20.h
29019 +@@ -8,8 +8,10 @@
29020 +
29021 + /* Only for QMP V5_20 PHY - PCIe PCS registers */
29022 + #define QPHY_V5_20_PCS_PCIE_ENDPOINT_REFCLK_DRIVE 0x01c
29023 ++#define QPHY_V5_20_PCS_PCIE_OSC_DTCT_MODE2_CONFIG5 0x084
29024 + #define QPHY_V5_20_PCS_PCIE_OSC_DTCT_ACTIONS 0x090
29025 + #define QPHY_V5_20_PCS_PCIE_EQ_CONFIG1 0x0a0
29026 ++#define QPHY_V5_20_PCS_PCIE_PRESET_P10_POST 0x0e0
29027 + #define QPHY_V5_20_PCS_PCIE_G4_EQ_CONFIG5 0x108
29028 + #define QPHY_V5_20_PCS_PCIE_G4_PRE_GAIN 0x15c
29029 + #define QPHY_V5_20_PCS_PCIE_RX_MARGINING_CONFIG3 0x184
29030 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcs-v5_20.h b/drivers/phy/qualcomm/phy-qcom-qmp-pcs-v5_20.h
29031 +new file mode 100644
29032 +index 0000000000000..9a5a20daf62cd
29033 +--- /dev/null
29034 ++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcs-v5_20.h
29035 +@@ -0,0 +1,14 @@
29036 ++/* SPDX-License-Identifier: GPL-2.0 */
29037 ++/*
29038 ++ * Copyright (c) 2022, Linaro Ltd.
29039 ++ */
29040 ++
29041 ++#ifndef QCOM_PHY_QMP_PCS_V5_20_H_
29042 ++#define QCOM_PHY_QMP_PCS_V5_20_H_
29043 ++
29044 ++#define QPHY_V5_20_PCS_G3S2_PRE_GAIN 0x170
29045 ++#define QPHY_V5_20_PCS_RX_SIGDET_LVL 0x188
29046 ++#define QPHY_V5_20_PCS_EQ_CONFIG4 0x1e0
29047 ++#define QPHY_V5_20_PCS_EQ_CONFIG5 0x1e4
29048 ++
29049 ++#endif
29050 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
29051 +index b84c0d4b57541..f0ba35bb73c1b 100644
29052 +--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
29053 ++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
29054 +@@ -63,8 +63,6 @@
29055 + #define CLAMP_EN BIT(0) /* enables i/o clamp_n */
29056 +
29057 + #define PHY_INIT_COMPLETE_TIMEOUT 10000
29058 +-#define POWER_DOWN_DELAY_US_MIN 10
29059 +-#define POWER_DOWN_DELAY_US_MAX 11
29060 +
29061 + struct qmp_phy_init_tbl {
29062 + unsigned int offset;
29063 +@@ -126,6 +124,7 @@ static const unsigned int usb3phy_regs_layout[QPHY_LAYOUT_SIZE] = {
29064 + [QPHY_PCS_AUTONOMOUS_MODE_CTRL] = 0x0d4,
29065 + [QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR] = 0x0d8,
29066 + [QPHY_PCS_LFPS_RXTERM_IRQ_STATUS] = 0x178,
29067 ++ [QPHY_PCS_POWER_DOWN_CONTROL] = 0x04,
29068 + };
29069 +
29070 + static const unsigned int qmp_v3_usb3phy_regs_layout[QPHY_LAYOUT_SIZE] = {
29071 +@@ -135,6 +134,7 @@ static const unsigned int qmp_v3_usb3phy_regs_layout[QPHY_LAYOUT_SIZE] = {
29072 + [QPHY_PCS_AUTONOMOUS_MODE_CTRL] = 0x0d8,
29073 + [QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR] = 0x0dc,
29074 + [QPHY_PCS_LFPS_RXTERM_IRQ_STATUS] = 0x170,
29075 ++ [QPHY_PCS_POWER_DOWN_CONTROL] = 0x04,
29076 + };
29077 +
29078 + static const unsigned int qmp_v4_usb3phy_regs_layout[QPHY_LAYOUT_SIZE] = {
29079 +@@ -1456,16 +1456,8 @@ struct qmp_phy_cfg {
29080 + /* array of registers with different offsets */
29081 + const unsigned int *regs;
29082 +
29083 +- unsigned int start_ctrl;
29084 +- unsigned int pwrdn_ctrl;
29085 +- /* bit offset of PHYSTATUS in QPHY_PCS_STATUS register */
29086 +- unsigned int phy_status;
29087 +-
29088 + /* true, if PHY needs delay after POWER_DOWN */
29089 + bool has_pwrdn_delay;
29090 +- /* power_down delay in usec */
29091 +- int pwrdn_delay_min;
29092 +- int pwrdn_delay_max;
29093 +
29094 + /* true, if PHY has a separate DP_COM control block */
29095 + bool has_phy_dp_com_ctrl;
29096 +@@ -1616,11 +1608,7 @@ static const struct qmp_phy_cfg ipq8074_usb3phy_cfg = {
29097 + .num_resets = ARRAY_SIZE(msm8996_usb3phy_reset_l),
29098 + .vreg_list = qmp_phy_vreg_l,
29099 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29100 +- .regs = usb3phy_regs_layout,
29101 +-
29102 +- .start_ctrl = SERDES_START | PCS_START,
29103 +- .pwrdn_ctrl = SW_PWRDN,
29104 +- .phy_status = PHYSTATUS,
29105 ++ .regs = qmp_v3_usb3phy_regs_layout,
29106 + };
29107 +
29108 + static const struct qmp_phy_cfg msm8996_usb3phy_cfg = {
29109 +@@ -1641,10 +1629,6 @@ static const struct qmp_phy_cfg msm8996_usb3phy_cfg = {
29110 + .vreg_list = qmp_phy_vreg_l,
29111 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29112 + .regs = usb3phy_regs_layout,
29113 +-
29114 +- .start_ctrl = SERDES_START | PCS_START,
29115 +- .pwrdn_ctrl = SW_PWRDN,
29116 +- .phy_status = PHYSTATUS,
29117 + };
29118 +
29119 + static const struct qmp_phy_cfg qmp_v3_usb3phy_cfg = {
29120 +@@ -1666,14 +1650,7 @@ static const struct qmp_phy_cfg qmp_v3_usb3phy_cfg = {
29121 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29122 + .regs = qmp_v3_usb3phy_regs_layout,
29123 +
29124 +- .start_ctrl = SERDES_START | PCS_START,
29125 +- .pwrdn_ctrl = SW_PWRDN,
29126 +- .phy_status = PHYSTATUS,
29127 +-
29128 + .has_pwrdn_delay = true,
29129 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29130 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29131 +-
29132 + .has_phy_dp_com_ctrl = true,
29133 + };
29134 +
29135 +@@ -1696,14 +1673,7 @@ static const struct qmp_phy_cfg sc7180_usb3phy_cfg = {
29136 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29137 + .regs = qmp_v3_usb3phy_regs_layout,
29138 +
29139 +- .start_ctrl = SERDES_START | PCS_START,
29140 +- .pwrdn_ctrl = SW_PWRDN,
29141 +- .phy_status = PHYSTATUS,
29142 +-
29143 + .has_pwrdn_delay = true,
29144 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29145 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29146 +-
29147 + .has_phy_dp_com_ctrl = true,
29148 + };
29149 +
29150 +@@ -1725,14 +1695,7 @@ static const struct qmp_phy_cfg sc8280xp_usb3_uniphy_cfg = {
29151 + .vreg_list = qmp_phy_vreg_l,
29152 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29153 + .regs = qmp_v4_usb3phy_regs_layout,
29154 +-
29155 +- .start_ctrl = SERDES_START | PCS_START,
29156 +- .pwrdn_ctrl = SW_PWRDN,
29157 +- .phy_status = PHYSTATUS,
29158 +-
29159 +- .has_pwrdn_delay = true,
29160 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29161 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29162 ++ .pcs_usb_offset = 0x1000,
29163 + };
29164 +
29165 + static const struct qmp_phy_cfg qmp_v3_usb3_uniphy_cfg = {
29166 +@@ -1754,13 +1717,7 @@ static const struct qmp_phy_cfg qmp_v3_usb3_uniphy_cfg = {
29167 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29168 + .regs = qmp_v3_usb3phy_regs_layout,
29169 +
29170 +- .start_ctrl = SERDES_START | PCS_START,
29171 +- .pwrdn_ctrl = SW_PWRDN,
29172 +- .phy_status = PHYSTATUS,
29173 +-
29174 + .has_pwrdn_delay = true,
29175 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29176 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29177 + };
29178 +
29179 + static const struct qmp_phy_cfg msm8998_usb3phy_cfg = {
29180 +@@ -1781,10 +1738,6 @@ static const struct qmp_phy_cfg msm8998_usb3phy_cfg = {
29181 + .vreg_list = qmp_phy_vreg_l,
29182 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29183 + .regs = qmp_v3_usb3phy_regs_layout,
29184 +-
29185 +- .start_ctrl = SERDES_START | PCS_START,
29186 +- .pwrdn_ctrl = SW_PWRDN,
29187 +- .phy_status = PHYSTATUS,
29188 + };
29189 +
29190 + static const struct qmp_phy_cfg sm8150_usb3phy_cfg = {
29191 +@@ -1809,15 +1762,7 @@ static const struct qmp_phy_cfg sm8150_usb3phy_cfg = {
29192 + .regs = qmp_v4_usb3phy_regs_layout,
29193 + .pcs_usb_offset = 0x300,
29194 +
29195 +- .start_ctrl = SERDES_START | PCS_START,
29196 +- .pwrdn_ctrl = SW_PWRDN,
29197 +- .phy_status = PHYSTATUS,
29198 +-
29199 +-
29200 + .has_pwrdn_delay = true,
29201 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29202 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29203 +-
29204 + .has_phy_dp_com_ctrl = true,
29205 + };
29206 +
29207 +@@ -1843,13 +1788,7 @@ static const struct qmp_phy_cfg sm8150_usb3_uniphy_cfg = {
29208 + .regs = qmp_v4_usb3phy_regs_layout,
29209 + .pcs_usb_offset = 0x600,
29210 +
29211 +- .start_ctrl = SERDES_START | PCS_START,
29212 +- .pwrdn_ctrl = SW_PWRDN,
29213 +- .phy_status = PHYSTATUS,
29214 +-
29215 + .has_pwrdn_delay = true,
29216 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29217 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29218 + };
29219 +
29220 + static const struct qmp_phy_cfg sm8250_usb3phy_cfg = {
29221 +@@ -1874,14 +1813,7 @@ static const struct qmp_phy_cfg sm8250_usb3phy_cfg = {
29222 + .regs = qmp_v4_usb3phy_regs_layout,
29223 + .pcs_usb_offset = 0x300,
29224 +
29225 +- .start_ctrl = SERDES_START | PCS_START,
29226 +- .pwrdn_ctrl = SW_PWRDN,
29227 +- .phy_status = PHYSTATUS,
29228 +-
29229 + .has_pwrdn_delay = true,
29230 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29231 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29232 +-
29233 + .has_phy_dp_com_ctrl = true,
29234 + };
29235 +
29236 +@@ -1907,13 +1839,7 @@ static const struct qmp_phy_cfg sm8250_usb3_uniphy_cfg = {
29237 + .regs = qmp_v4_usb3phy_regs_layout,
29238 + .pcs_usb_offset = 0x600,
29239 +
29240 +- .start_ctrl = SERDES_START | PCS_START,
29241 +- .pwrdn_ctrl = SW_PWRDN,
29242 +- .phy_status = PHYSTATUS,
29243 +-
29244 + .has_pwrdn_delay = true,
29245 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29246 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29247 + };
29248 +
29249 + static const struct qmp_phy_cfg sdx55_usb3_uniphy_cfg = {
29250 +@@ -1938,13 +1864,7 @@ static const struct qmp_phy_cfg sdx55_usb3_uniphy_cfg = {
29251 + .regs = qmp_v4_usb3phy_regs_layout,
29252 + .pcs_usb_offset = 0x600,
29253 +
29254 +- .start_ctrl = SERDES_START | PCS_START,
29255 +- .pwrdn_ctrl = SW_PWRDN,
29256 +- .phy_status = PHYSTATUS,
29257 +-
29258 + .has_pwrdn_delay = true,
29259 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29260 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29261 + };
29262 +
29263 + static const struct qmp_phy_cfg sdx65_usb3_uniphy_cfg = {
29264 +@@ -1969,13 +1889,7 @@ static const struct qmp_phy_cfg sdx65_usb3_uniphy_cfg = {
29265 + .regs = qmp_v4_usb3phy_regs_layout,
29266 + .pcs_usb_offset = 0x1000,
29267 +
29268 +- .start_ctrl = SERDES_START | PCS_START,
29269 +- .pwrdn_ctrl = SW_PWRDN,
29270 +- .phy_status = PHYSTATUS,
29271 +-
29272 + .has_pwrdn_delay = true,
29273 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29274 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29275 + };
29276 +
29277 + static const struct qmp_phy_cfg sm8350_usb3phy_cfg = {
29278 +@@ -2000,14 +1914,7 @@ static const struct qmp_phy_cfg sm8350_usb3phy_cfg = {
29279 + .regs = qmp_v4_usb3phy_regs_layout,
29280 + .pcs_usb_offset = 0x300,
29281 +
29282 +- .start_ctrl = SERDES_START | PCS_START,
29283 +- .pwrdn_ctrl = SW_PWRDN,
29284 +- .phy_status = PHYSTATUS,
29285 +-
29286 + .has_pwrdn_delay = true,
29287 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29288 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29289 +-
29290 + .has_phy_dp_com_ctrl = true,
29291 + };
29292 +
29293 +@@ -2033,13 +1940,7 @@ static const struct qmp_phy_cfg sm8350_usb3_uniphy_cfg = {
29294 + .regs = qmp_v4_usb3phy_regs_layout,
29295 + .pcs_usb_offset = 0x1000,
29296 +
29297 +- .start_ctrl = SERDES_START | PCS_START,
29298 +- .pwrdn_ctrl = SW_PWRDN,
29299 +- .phy_status = PHYSTATUS,
29300 +-
29301 + .has_pwrdn_delay = true,
29302 +- .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN,
29303 +- .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX,
29304 + };
29305 +
29306 + static const struct qmp_phy_cfg qcm2290_usb3phy_cfg = {
29307 +@@ -2060,10 +1961,6 @@ static const struct qmp_phy_cfg qcm2290_usb3phy_cfg = {
29308 + .vreg_list = qmp_phy_vreg_l,
29309 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l),
29310 + .regs = qcm2290_usb3phy_regs_layout,
29311 +-
29312 +- .start_ctrl = SERDES_START | PCS_START,
29313 +- .pwrdn_ctrl = SW_PWRDN,
29314 +- .phy_status = PHYSTATUS,
29315 + };
29316 +
29317 + static void qmp_usb_configure_lane(void __iomem *base,
29318 +@@ -2164,13 +2061,7 @@ static int qmp_usb_init(struct phy *phy)
29319 + qphy_clrbits(dp_com, QPHY_V3_DP_COM_SW_RESET, SW_RESET);
29320 + }
29321 +
29322 +- if (cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL])
29323 +- qphy_setbits(pcs,
29324 +- cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL],
29325 +- cfg->pwrdn_ctrl);
29326 +- else
29327 +- qphy_setbits(pcs, QPHY_V2_PCS_POWER_DOWN_CONTROL,
29328 +- cfg->pwrdn_ctrl);
29329 ++ qphy_setbits(pcs, cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL], SW_PWRDN);
29330 +
29331 + return 0;
29332 +
29333 +@@ -2206,7 +2097,7 @@ static int qmp_usb_power_on(struct phy *phy)
29334 + void __iomem *rx = qphy->rx;
29335 + void __iomem *pcs = qphy->pcs;
29336 + void __iomem *status;
29337 +- unsigned int mask, val, ready;
29338 ++ unsigned int val;
29339 + int ret;
29340 +
29341 + qmp_usb_serdes_init(qphy);
29342 +@@ -2236,19 +2127,16 @@ static int qmp_usb_power_on(struct phy *phy)
29343 + qmp_usb_configure(pcs, cfg->regs, cfg->pcs_tbl, cfg->pcs_tbl_num);
29344 +
29345 + if (cfg->has_pwrdn_delay)
29346 +- usleep_range(cfg->pwrdn_delay_min, cfg->pwrdn_delay_max);
29347 ++ usleep_range(10, 20);
29348 +
29349 + /* Pull PHY out of reset state */
29350 + qphy_clrbits(pcs, cfg->regs[QPHY_SW_RESET], SW_RESET);
29351 +
29352 + /* start SerDes and Phy-Coding-Sublayer */
29353 +- qphy_setbits(pcs, cfg->regs[QPHY_START_CTRL], cfg->start_ctrl);
29354 ++ qphy_setbits(pcs, cfg->regs[QPHY_START_CTRL], SERDES_START | PCS_START);
29355 +
29356 + status = pcs + cfg->regs[QPHY_PCS_STATUS];
29357 +- mask = cfg->phy_status;
29358 +- ready = 0;
29359 +-
29360 +- ret = readl_poll_timeout(status, val, (val & mask) == ready, 10,
29361 ++ ret = readl_poll_timeout(status, val, !(val & PHYSTATUS), 10,
29362 + PHY_INIT_COMPLETE_TIMEOUT);
29363 + if (ret) {
29364 + dev_err(qmp->dev, "phy initialization timed-out\n");
29365 +@@ -2274,16 +2162,12 @@ static int qmp_usb_power_off(struct phy *phy)
29366 + qphy_setbits(qphy->pcs, cfg->regs[QPHY_SW_RESET], SW_RESET);
29367 +
29368 + /* stop SerDes and Phy-Coding-Sublayer */
29369 +- qphy_clrbits(qphy->pcs, cfg->regs[QPHY_START_CTRL], cfg->start_ctrl);
29370 ++ qphy_clrbits(qphy->pcs, cfg->regs[QPHY_START_CTRL],
29371 ++ SERDES_START | PCS_START);
29372 +
29373 + /* Put PHY into POWER DOWN state: active low */
29374 +- if (cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL]) {
29375 +- qphy_clrbits(qphy->pcs, cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL],
29376 +- cfg->pwrdn_ctrl);
29377 +- } else {
29378 +- qphy_clrbits(qphy->pcs, QPHY_V2_PCS_POWER_DOWN_CONTROL,
29379 +- cfg->pwrdn_ctrl);
29380 +- }
29381 ++ qphy_clrbits(qphy->pcs, cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL],
29382 ++ SW_PWRDN);
29383 +
29384 + return 0;
29385 + }
29386 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.h b/drivers/phy/qualcomm/phy-qcom-qmp.h
29387 +index 26274e3c0cf95..29a48f0436d2a 100644
29388 +--- a/drivers/phy/qualcomm/phy-qcom-qmp.h
29389 ++++ b/drivers/phy/qualcomm/phy-qcom-qmp.h
29390 +@@ -38,6 +38,7 @@
29391 + #include "phy-qcom-qmp-pcs-pcie-v4_20.h"
29392 +
29393 + #include "phy-qcom-qmp-pcs-v5.h"
29394 ++#include "phy-qcom-qmp-pcs-v5_20.h"
29395 + #include "phy-qcom-qmp-pcs-pcie-v5.h"
29396 + #include "phy-qcom-qmp-pcs-usb-v5.h"
29397 + #include "phy-qcom-qmp-pcs-ufs-v5.h"
29398 +diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7986.c b/drivers/pinctrl/mediatek/pinctrl-mt7986.c
29399 +index 50cb736f9f116..b587299697481 100644
29400 +--- a/drivers/pinctrl/mediatek/pinctrl-mt7986.c
29401 ++++ b/drivers/pinctrl/mediatek/pinctrl-mt7986.c
29402 +@@ -316,10 +316,10 @@ static const struct mtk_pin_field_calc mt7986_pin_pupd_range[] = {
29403 + PIN_FIELD_BASE(38, 38, IOCFG_LT_BASE, 0x30, 0x10, 9, 1),
29404 + PIN_FIELD_BASE(39, 40, IOCFG_RB_BASE, 0x60, 0x10, 18, 1),
29405 + PIN_FIELD_BASE(41, 41, IOCFG_RB_BASE, 0x60, 0x10, 12, 1),
29406 +- PIN_FIELD_BASE(42, 43, IOCFG_RB_BASE, 0x60, 0x10, 22, 1),
29407 +- PIN_FIELD_BASE(44, 45, IOCFG_RB_BASE, 0x60, 0x10, 20, 1),
29408 +- PIN_FIELD_BASE(46, 47, IOCFG_RB_BASE, 0x60, 0x10, 26, 1),
29409 +- PIN_FIELD_BASE(48, 49, IOCFG_RB_BASE, 0x60, 0x10, 24, 1),
29410 ++ PIN_FIELD_BASE(42, 43, IOCFG_RB_BASE, 0x60, 0x10, 23, 1),
29411 ++ PIN_FIELD_BASE(44, 45, IOCFG_RB_BASE, 0x60, 0x10, 21, 1),
29412 ++ PIN_FIELD_BASE(46, 47, IOCFG_RB_BASE, 0x60, 0x10, 27, 1),
29413 ++ PIN_FIELD_BASE(48, 49, IOCFG_RB_BASE, 0x60, 0x10, 25, 1),
29414 + PIN_FIELD_BASE(50, 57, IOCFG_RT_BASE, 0x40, 0x10, 2, 1),
29415 + PIN_FIELD_BASE(58, 58, IOCFG_RT_BASE, 0x40, 0x10, 1, 1),
29416 + PIN_FIELD_BASE(59, 59, IOCFG_RT_BASE, 0x40, 0x10, 0, 1),
29417 +@@ -354,10 +354,10 @@ static const struct mtk_pin_field_calc mt7986_pin_r0_range[] = {
29418 + PIN_FIELD_BASE(38, 38, IOCFG_LT_BASE, 0x40, 0x10, 9, 1),
29419 + PIN_FIELD_BASE(39, 40, IOCFG_RB_BASE, 0x70, 0x10, 18, 1),
29420 + PIN_FIELD_BASE(41, 41, IOCFG_RB_BASE, 0x70, 0x10, 12, 1),
29421 +- PIN_FIELD_BASE(42, 43, IOCFG_RB_BASE, 0x70, 0x10, 22, 1),
29422 +- PIN_FIELD_BASE(44, 45, IOCFG_RB_BASE, 0x70, 0x10, 20, 1),
29423 +- PIN_FIELD_BASE(46, 47, IOCFG_RB_BASE, 0x70, 0x10, 26, 1),
29424 +- PIN_FIELD_BASE(48, 49, IOCFG_RB_BASE, 0x70, 0x10, 24, 1),
29425 ++ PIN_FIELD_BASE(42, 43, IOCFG_RB_BASE, 0x70, 0x10, 23, 1),
29426 ++ PIN_FIELD_BASE(44, 45, IOCFG_RB_BASE, 0x70, 0x10, 21, 1),
29427 ++ PIN_FIELD_BASE(46, 47, IOCFG_RB_BASE, 0x70, 0x10, 27, 1),
29428 ++ PIN_FIELD_BASE(48, 49, IOCFG_RB_BASE, 0x70, 0x10, 25, 1),
29429 + PIN_FIELD_BASE(50, 57, IOCFG_RT_BASE, 0x50, 0x10, 2, 1),
29430 + PIN_FIELD_BASE(58, 58, IOCFG_RT_BASE, 0x50, 0x10, 1, 1),
29431 + PIN_FIELD_BASE(59, 59, IOCFG_RT_BASE, 0x50, 0x10, 0, 1),
29432 +@@ -392,10 +392,10 @@ static const struct mtk_pin_field_calc mt7986_pin_r1_range[] = {
29433 + PIN_FIELD_BASE(38, 38, IOCFG_LT_BASE, 0x50, 0x10, 9, 1),
29434 + PIN_FIELD_BASE(39, 40, IOCFG_RB_BASE, 0x80, 0x10, 18, 1),
29435 + PIN_FIELD_BASE(41, 41, IOCFG_RB_BASE, 0x80, 0x10, 12, 1),
29436 +- PIN_FIELD_BASE(42, 43, IOCFG_RB_BASE, 0x80, 0x10, 22, 1),
29437 +- PIN_FIELD_BASE(44, 45, IOCFG_RB_BASE, 0x80, 0x10, 20, 1),
29438 +- PIN_FIELD_BASE(46, 47, IOCFG_RB_BASE, 0x80, 0x10, 26, 1),
29439 +- PIN_FIELD_BASE(48, 49, IOCFG_RB_BASE, 0x80, 0x10, 24, 1),
29440 ++ PIN_FIELD_BASE(42, 43, IOCFG_RB_BASE, 0x80, 0x10, 23, 1),
29441 ++ PIN_FIELD_BASE(44, 45, IOCFG_RB_BASE, 0x80, 0x10, 21, 1),
29442 ++ PIN_FIELD_BASE(46, 47, IOCFG_RB_BASE, 0x80, 0x10, 27, 1),
29443 ++ PIN_FIELD_BASE(48, 49, IOCFG_RB_BASE, 0x80, 0x10, 25, 1),
29444 + PIN_FIELD_BASE(50, 57, IOCFG_RT_BASE, 0x60, 0x10, 2, 1),
29445 + PIN_FIELD_BASE(58, 58, IOCFG_RT_BASE, 0x60, 0x10, 1, 1),
29446 + PIN_FIELD_BASE(59, 59, IOCFG_RT_BASE, 0x60, 0x10, 0, 1),
29447 +diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
29448 +index 415d1df8f46a5..365c4b0ca4654 100644
29449 +--- a/drivers/pinctrl/pinconf-generic.c
29450 ++++ b/drivers/pinctrl/pinconf-generic.c
29451 +@@ -395,8 +395,10 @@ int pinconf_generic_dt_node_to_map(struct pinctrl_dev *pctldev,
29452 + for_each_available_child_of_node(np_config, np) {
29453 + ret = pinconf_generic_dt_subnode_to_map(pctldev, np, map,
29454 + &reserved_maps, num_maps, type);
29455 +- if (ret < 0)
29456 ++ if (ret < 0) {
29457 ++ of_node_put(np);
29458 + goto exit;
29459 ++ }
29460 + }
29461 + return 0;
29462 +
29463 +diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
29464 +index ecab6bf63dc6d..ad4db99094a79 100644
29465 +--- a/drivers/pinctrl/pinctrl-k210.c
29466 ++++ b/drivers/pinctrl/pinctrl-k210.c
29467 +@@ -862,8 +862,10 @@ static int k210_pinctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
29468 + for_each_available_child_of_node(np_config, np) {
29469 + ret = k210_pinctrl_dt_subnode_to_map(pctldev, np, map,
29470 + &reserved_maps, num_maps);
29471 +- if (ret < 0)
29472 ++ if (ret < 0) {
29473 ++ of_node_put(np);
29474 + goto err;
29475 ++ }
29476 + }
29477 + return 0;
29478 +
29479 +diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
29480 +index 687aaa6015555..3d5995cbcb782 100644
29481 +--- a/drivers/pinctrl/pinctrl-ocelot.c
29482 ++++ b/drivers/pinctrl/pinctrl-ocelot.c
29483 +@@ -2047,6 +2047,11 @@ static struct regmap *ocelot_pinctrl_create_pincfg(struct platform_device *pdev,
29484 + return devm_regmap_init_mmio(&pdev->dev, base, &regmap_config);
29485 + }
29486 +
29487 ++static void ocelot_destroy_workqueue(void *data)
29488 ++{
29489 ++ destroy_workqueue(data);
29490 ++}
29491 ++
29492 + static int ocelot_pinctrl_probe(struct platform_device *pdev)
29493 + {
29494 + const struct ocelot_match_data *data;
29495 +@@ -2078,6 +2083,11 @@ static int ocelot_pinctrl_probe(struct platform_device *pdev)
29496 + if (!info->wq)
29497 + return -ENOMEM;
29498 +
29499 ++ ret = devm_add_action_or_reset(dev, ocelot_destroy_workqueue,
29500 ++ info->wq);
29501 ++ if (ret)
29502 ++ return ret;
29503 ++
29504 + info->pincfg_data = &data->pincfg_data;
29505 +
29506 + reset = devm_reset_control_get_optional_shared(dev, "switch");
29507 +@@ -2119,15 +2129,6 @@ static int ocelot_pinctrl_probe(struct platform_device *pdev)
29508 + return 0;
29509 + }
29510 +
29511 +-static int ocelot_pinctrl_remove(struct platform_device *pdev)
29512 +-{
29513 +- struct ocelot_pinctrl *info = platform_get_drvdata(pdev);
29514 +-
29515 +- destroy_workqueue(info->wq);
29516 +-
29517 +- return 0;
29518 +-}
29519 +-
29520 + static struct platform_driver ocelot_pinctrl_driver = {
29521 + .driver = {
29522 + .name = "pinctrl-ocelot",
29523 +@@ -2135,7 +2136,6 @@ static struct platform_driver ocelot_pinctrl_driver = {
29524 + .suppress_bind_attrs = true,
29525 + },
29526 + .probe = ocelot_pinctrl_probe,
29527 +- .remove = ocelot_pinctrl_remove,
29528 + };
29529 + module_platform_driver(ocelot_pinctrl_driver);
29530 +
29531 +diff --git a/drivers/pinctrl/pinctrl-thunderbay.c b/drivers/pinctrl/pinctrl-thunderbay.c
29532 +index 9328b17485cf0..590bbbf619afc 100644
29533 +--- a/drivers/pinctrl/pinctrl-thunderbay.c
29534 ++++ b/drivers/pinctrl/pinctrl-thunderbay.c
29535 +@@ -808,7 +808,7 @@ static int thunderbay_add_functions(struct thunderbay_pinctrl *tpc, struct funct
29536 + funcs[i].num_group_names,
29537 + funcs[i].data);
29538 + }
29539 +- kfree(funcs);
29540 ++
29541 + return 0;
29542 + }
29543 +
29544 +@@ -817,6 +817,7 @@ static int thunderbay_build_functions(struct thunderbay_pinctrl *tpc)
29545 + struct function_desc *thunderbay_funcs;
29546 + void *ptr;
29547 + int pin;
29548 ++ int ret;
29549 +
29550 + /*
29551 + * Allocate maximum possible number of functions. Assume every pin
29552 +@@ -860,7 +861,10 @@ static int thunderbay_build_functions(struct thunderbay_pinctrl *tpc)
29553 + return -ENOMEM;
29554 +
29555 + thunderbay_funcs = ptr;
29556 +- return thunderbay_add_functions(tpc, thunderbay_funcs);
29557 ++ ret = thunderbay_add_functions(tpc, thunderbay_funcs);
29558 ++
29559 ++ kfree(thunderbay_funcs);
29560 ++ return ret;
29561 + }
29562 +
29563 + static int thunderbay_pinconf_set_tristate(struct thunderbay_pinctrl *tpc,
29564 +diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
29565 +index 2a7ff14dc37e9..59de4ce01faba 100644
29566 +--- a/drivers/platform/chrome/cros_ec_typec.c
29567 ++++ b/drivers/platform/chrome/cros_ec_typec.c
29568 +@@ -173,10 +173,13 @@ static int cros_typec_get_switch_handles(struct cros_typec_port *port,
29569 +
29570 + role_sw_err:
29571 + typec_switch_put(port->ori_sw);
29572 ++ port->ori_sw = NULL;
29573 + ori_sw_err:
29574 + typec_retimer_put(port->retimer);
29575 ++ port->retimer = NULL;
29576 + retimer_sw_err:
29577 + typec_mux_put(port->mux);
29578 ++ port->mux = NULL;
29579 + mux_err:
29580 + return -ENODEV;
29581 + }
29582 +diff --git a/drivers/platform/chrome/cros_usbpd_notify.c b/drivers/platform/chrome/cros_usbpd_notify.c
29583 +index 4b5a81c9dc6da..10670b6588e3e 100644
29584 +--- a/drivers/platform/chrome/cros_usbpd_notify.c
29585 ++++ b/drivers/platform/chrome/cros_usbpd_notify.c
29586 +@@ -239,7 +239,11 @@ static int __init cros_usbpd_notify_init(void)
29587 + return ret;
29588 +
29589 + #ifdef CONFIG_ACPI
29590 +- platform_driver_register(&cros_usbpd_notify_acpi_driver);
29591 ++ ret = platform_driver_register(&cros_usbpd_notify_acpi_driver);
29592 ++ if (ret) {
29593 ++ platform_driver_unregister(&cros_usbpd_notify_plat_driver);
29594 ++ return ret;
29595 ++ }
29596 + #endif
29597 + return 0;
29598 + }
29599 +diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
29600 +index 65b4a819f1bdf..c2c9b0d3244cb 100644
29601 +--- a/drivers/platform/mellanox/mlxbf-pmc.c
29602 ++++ b/drivers/platform/mellanox/mlxbf-pmc.c
29603 +@@ -358,7 +358,7 @@ static const struct mlxbf_pmc_events mlxbf_pmc_hnfnet_events[] = {
29604 + { 0x32, "DDN_DIAG_W_INGRESS" },
29605 + { 0x33, "DDN_DIAG_C_INGRESS" },
29606 + { 0x34, "DDN_DIAG_CORE_SENT" },
29607 +- { 0x35, "NDN_DIAG_S_OUT_OF_CRED" },
29608 ++ { 0x35, "NDN_DIAG_N_OUT_OF_CRED" },
29609 + { 0x36, "NDN_DIAG_S_OUT_OF_CRED" },
29610 + { 0x37, "NDN_DIAG_E_OUT_OF_CRED" },
29611 + { 0x38, "NDN_DIAG_W_OUT_OF_CRED" },
29612 +diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c
29613 +index 5873c2663a65b..b85050e4a0d65 100644
29614 +--- a/drivers/platform/x86/huawei-wmi.c
29615 ++++ b/drivers/platform/x86/huawei-wmi.c
29616 +@@ -760,6 +760,9 @@ static int huawei_wmi_input_setup(struct device *dev,
29617 + const char *guid,
29618 + struct input_dev **idev)
29619 + {
29620 ++ acpi_status status;
29621 ++ int err;
29622 ++
29623 + *idev = devm_input_allocate_device(dev);
29624 + if (!*idev)
29625 + return -ENOMEM;
29626 +@@ -769,10 +772,19 @@ static int huawei_wmi_input_setup(struct device *dev,
29627 + (*idev)->id.bustype = BUS_HOST;
29628 + (*idev)->dev.parent = dev;
29629 +
29630 +- return sparse_keymap_setup(*idev, huawei_wmi_keymap, NULL) ||
29631 +- input_register_device(*idev) ||
29632 +- wmi_install_notify_handler(guid, huawei_wmi_input_notify,
29633 +- *idev);
29634 ++ err = sparse_keymap_setup(*idev, huawei_wmi_keymap, NULL);
29635 ++ if (err)
29636 ++ return err;
29637 ++
29638 ++ err = input_register_device(*idev);
29639 ++ if (err)
29640 ++ return err;
29641 ++
29642 ++ status = wmi_install_notify_handler(guid, huawei_wmi_input_notify, *idev);
29643 ++ if (ACPI_FAILURE(status))
29644 ++ return -EIO;
29645 ++
29646 ++ return 0;
29647 + }
29648 +
29649 + static void huawei_wmi_input_exit(struct device *dev, const char *guid)
29650 +diff --git a/drivers/platform/x86/intel/int3472/clk_and_regulator.c b/drivers/platform/x86/intel/int3472/clk_and_regulator.c
29651 +index 1cf958983e868..b2342b3d78c72 100644
29652 +--- a/drivers/platform/x86/intel/int3472/clk_and_regulator.c
29653 ++++ b/drivers/platform/x86/intel/int3472/clk_and_regulator.c
29654 +@@ -185,7 +185,8 @@ int skl_int3472_register_regulator(struct int3472_discrete_device *int3472,
29655 + cfg.init_data = &init_data;
29656 + cfg.ena_gpiod = int3472->regulator.gpio;
29657 +
29658 +- int3472->regulator.rdev = regulator_register(&int3472->regulator.rdesc,
29659 ++ int3472->regulator.rdev = regulator_register(int3472->dev,
29660 ++ &int3472->regulator.rdesc,
29661 + &cfg);
29662 + if (IS_ERR(int3472->regulator.rdev)) {
29663 + ret = PTR_ERR(int3472->regulator.rdev);
29664 +diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
29665 +index 7cc9089d1e14f..e7a3e34028178 100644
29666 +--- a/drivers/platform/x86/intel_scu_ipc.c
29667 ++++ b/drivers/platform/x86/intel_scu_ipc.c
29668 +@@ -583,7 +583,6 @@ __intel_scu_ipc_register(struct device *parent,
29669 + scu->dev.parent = parent;
29670 + scu->dev.class = &intel_scu_ipc_class;
29671 + scu->dev.release = intel_scu_ipc_release;
29672 +- dev_set_name(&scu->dev, "intel_scu_ipc");
29673 +
29674 + if (!request_mem_region(scu_data->mem.start, resource_size(&scu_data->mem),
29675 + "intel_scu_ipc")) {
29676 +@@ -612,6 +611,7 @@ __intel_scu_ipc_register(struct device *parent,
29677 + * After this point intel_scu_ipc_release() takes care of
29678 + * releasing the SCU IPC resources once refcount drops to zero.
29679 + */
29680 ++ dev_set_name(&scu->dev, "intel_scu_ipc");
29681 + err = device_register(&scu->dev);
29682 + if (err) {
29683 + put_device(&scu->dev);
29684 +diff --git a/drivers/platform/x86/mxm-wmi.c b/drivers/platform/x86/mxm-wmi.c
29685 +index 9a19fbd2f7341..9a457956025a5 100644
29686 +--- a/drivers/platform/x86/mxm-wmi.c
29687 ++++ b/drivers/platform/x86/mxm-wmi.c
29688 +@@ -35,13 +35,11 @@ int mxm_wmi_call_mxds(int adapter)
29689 + .xarg = 1,
29690 + };
29691 + struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
29692 +- struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
29693 + acpi_status status;
29694 +
29695 + printk("calling mux switch %d\n", adapter);
29696 +
29697 +- status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input,
29698 +- &output);
29699 ++ status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input, NULL);
29700 +
29701 + if (ACPI_FAILURE(status))
29702 + return status;
29703 +@@ -60,13 +58,11 @@ int mxm_wmi_call_mxmx(int adapter)
29704 + .xarg = 1,
29705 + };
29706 + struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
29707 +- struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
29708 + acpi_status status;
29709 +
29710 + printk("calling mux switch %d\n", adapter);
29711 +
29712 +- status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input,
29713 +- &output);
29714 ++ status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input, NULL);
29715 +
29716 + if (ACPI_FAILURE(status))
29717 + return status;
29718 +diff --git a/drivers/pnp/core.c b/drivers/pnp/core.c
29719 +index 4df5aa6a309c3..6a60c5d83383b 100644
29720 +--- a/drivers/pnp/core.c
29721 ++++ b/drivers/pnp/core.c
29722 +@@ -148,14 +148,14 @@ struct pnp_dev *pnp_alloc_dev(struct pnp_protocol *protocol, int id,
29723 + dev->dev.coherent_dma_mask = dev->dma_mask;
29724 + dev->dev.release = &pnp_release_device;
29725 +
29726 +- dev_set_name(&dev->dev, "%02x:%02x", dev->protocol->number, dev->number);
29727 +-
29728 + dev_id = pnp_add_id(dev, pnpid);
29729 + if (!dev_id) {
29730 + kfree(dev);
29731 + return NULL;
29732 + }
29733 +
29734 ++ dev_set_name(&dev->dev, "%02x:%02x", dev->protocol->number, dev->number);
29735 ++
29736 + return dev;
29737 + }
29738 +
29739 +diff --git a/drivers/power/supply/ab8500_charger.c b/drivers/power/supply/ab8500_charger.c
29740 +index c19c50442761d..58757a5799f8b 100644
29741 +--- a/drivers/power/supply/ab8500_charger.c
29742 ++++ b/drivers/power/supply/ab8500_charger.c
29743 +@@ -3719,7 +3719,14 @@ static int __init ab8500_charger_init(void)
29744 + if (ret)
29745 + return ret;
29746 +
29747 +- return platform_driver_register(&ab8500_charger_driver);
29748 ++ ret = platform_driver_register(&ab8500_charger_driver);
29749 ++ if (ret) {
29750 ++ platform_unregister_drivers(ab8500_charger_component_drivers,
29751 ++ ARRAY_SIZE(ab8500_charger_component_drivers));
29752 ++ return ret;
29753 ++ }
29754 ++
29755 ++ return 0;
29756 + }
29757 +
29758 + static void __exit ab8500_charger_exit(void)
29759 +diff --git a/drivers/power/supply/bq25890_charger.c b/drivers/power/supply/bq25890_charger.c
29760 +index 6020b58c641d2..0e15302b8df22 100644
29761 +--- a/drivers/power/supply/bq25890_charger.c
29762 ++++ b/drivers/power/supply/bq25890_charger.c
29763 +@@ -1049,6 +1049,36 @@ static const struct regulator_desc bq25890_vbus_desc = {
29764 + .fixed_uV = 5000000,
29765 + .n_voltages = 1,
29766 + };
29767 ++
29768 ++static int bq25890_register_regulator(struct bq25890_device *bq)
29769 ++{
29770 ++ struct bq25890_platform_data *pdata = dev_get_platdata(bq->dev);
29771 ++ struct regulator_config cfg = {
29772 ++ .dev = bq->dev,
29773 ++ .driver_data = bq,
29774 ++ };
29775 ++ struct regulator_dev *reg;
29776 ++
29777 ++ if (!IS_ERR_OR_NULL(bq->usb_phy))
29778 ++ return 0;
29779 ++
29780 ++ if (pdata)
29781 ++ cfg.init_data = pdata->regulator_init_data;
29782 ++
29783 ++ reg = devm_regulator_register(bq->dev, &bq25890_vbus_desc, &cfg);
29784 ++ if (IS_ERR(reg)) {
29785 ++ return dev_err_probe(bq->dev, PTR_ERR(reg),
29786 ++ "registering vbus regulator");
29787 ++ }
29788 ++
29789 ++ return 0;
29790 ++}
29791 ++#else
29792 ++static inline int
29793 ++bq25890_register_regulator(struct bq25890_device *bq)
29794 ++{
29795 ++ return 0;
29796 ++}
29797 + #endif
29798 +
29799 + static int bq25890_get_chip_version(struct bq25890_device *bq)
29800 +@@ -1189,8 +1219,14 @@ static int bq25890_fw_probe(struct bq25890_device *bq)
29801 + return 0;
29802 + }
29803 +
29804 +-static int bq25890_probe(struct i2c_client *client,
29805 +- const struct i2c_device_id *id)
29806 ++static void bq25890_non_devm_cleanup(void *data)
29807 ++{
29808 ++ struct bq25890_device *bq = data;
29809 ++
29810 ++ cancel_delayed_work_sync(&bq->pump_express_work);
29811 ++}
29812 ++
29813 ++static int bq25890_probe(struct i2c_client *client)
29814 + {
29815 + struct device *dev = &client->dev;
29816 + struct bq25890_device *bq;
29817 +@@ -1244,27 +1280,24 @@ static int bq25890_probe(struct i2c_client *client,
29818 +
29819 + /* OTG reporting */
29820 + bq->usb_phy = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);
29821 ++
29822 ++ /*
29823 ++ * This must be before bq25890_power_supply_init(), so that it runs
29824 ++ * after devm unregisters the power_supply.
29825 ++ */
29826 ++ ret = devm_add_action_or_reset(dev, bq25890_non_devm_cleanup, bq);
29827 ++ if (ret)
29828 ++ return ret;
29829 ++
29830 ++ ret = bq25890_register_regulator(bq);
29831 ++ if (ret)
29832 ++ return ret;
29833 ++
29834 + if (!IS_ERR_OR_NULL(bq->usb_phy)) {
29835 + INIT_WORK(&bq->usb_work, bq25890_usb_work);
29836 + bq->usb_nb.notifier_call = bq25890_usb_notifier;
29837 + usb_register_notifier(bq->usb_phy, &bq->usb_nb);
29838 + }
29839 +-#ifdef CONFIG_REGULATOR
29840 +- else {
29841 +- struct bq25890_platform_data *pdata = dev_get_platdata(dev);
29842 +- struct regulator_config cfg = { };
29843 +- struct regulator_dev *reg;
29844 +-
29845 +- cfg.dev = dev;
29846 +- cfg.driver_data = bq;
29847 +- if (pdata)
29848 +- cfg.init_data = pdata->regulator_init_data;
29849 +-
29850 +- reg = devm_regulator_register(dev, &bq25890_vbus_desc, &cfg);
29851 +- if (IS_ERR(reg))
29852 +- return dev_err_probe(dev, PTR_ERR(reg), "registering regulator");
29853 +- }
29854 +-#endif
29855 +
29856 + ret = bq25890_power_supply_init(bq);
29857 + if (ret < 0) {
29858 +@@ -1400,7 +1433,7 @@ static struct i2c_driver bq25890_driver = {
29859 + .acpi_match_table = ACPI_PTR(bq25890_acpi_match),
29860 + .pm = &bq25890_pm,
29861 + },
29862 +- .probe = bq25890_probe,
29863 ++ .probe_new = bq25890_probe,
29864 + .remove = bq25890_remove,
29865 + .shutdown = bq25890_shutdown,
29866 + .id_table = bq25890_i2c_ids,
29867 +diff --git a/drivers/power/supply/cw2015_battery.c b/drivers/power/supply/cw2015_battery.c
29868 +index 6d52641151d9a..473522b4326ad 100644
29869 +--- a/drivers/power/supply/cw2015_battery.c
29870 ++++ b/drivers/power/supply/cw2015_battery.c
29871 +@@ -699,6 +699,9 @@ static int cw_bat_probe(struct i2c_client *client)
29872 + }
29873 +
29874 + cw_bat->battery_workqueue = create_singlethread_workqueue("rk_battery");
29875 ++ if (!cw_bat->battery_workqueue)
29876 ++ return -ENOMEM;
29877 ++
29878 + devm_delayed_work_autocancel(&client->dev,
29879 + &cw_bat->battery_delay_work, cw_bat_work);
29880 + queue_delayed_work(cw_bat->battery_workqueue,
29881 +diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
29882 +index 4b5fb172fa994..01d1ac79d982e 100644
29883 +--- a/drivers/power/supply/power_supply_core.c
29884 ++++ b/drivers/power/supply/power_supply_core.c
29885 +@@ -750,6 +750,11 @@ int power_supply_get_battery_info(struct power_supply *psy,
29886 + int i, tab_len, size;
29887 +
29888 + propname = kasprintf(GFP_KERNEL, "ocv-capacity-table-%d", index);
29889 ++ if (!propname) {
29890 ++ power_supply_put_battery_info(psy, info);
29891 ++ err = -ENOMEM;
29892 ++ goto out_put_node;
29893 ++ }
29894 + list = of_get_property(battery_np, propname, &size);
29895 + if (!list || !size) {
29896 + dev_err(&psy->dev, "failed to get %s\n", propname);
29897 +@@ -1387,8 +1392,8 @@ create_triggers_failed:
29898 + register_cooler_failed:
29899 + psy_unregister_thermal(psy);
29900 + register_thermal_failed:
29901 +- device_del(dev);
29902 + wakeup_init_failed:
29903 ++ device_del(dev);
29904 + device_add_failed:
29905 + check_supplies_failed:
29906 + dev_set_name_failed:
29907 +diff --git a/drivers/power/supply/rk817_charger.c b/drivers/power/supply/rk817_charger.c
29908 +index f20a6ac584ccd..4f9c1c4179165 100644
29909 +--- a/drivers/power/supply/rk817_charger.c
29910 ++++ b/drivers/power/supply/rk817_charger.c
29911 +@@ -1060,8 +1060,10 @@ static int rk817_charger_probe(struct platform_device *pdev)
29912 + return -ENODEV;
29913 +
29914 + charger = devm_kzalloc(&pdev->dev, sizeof(*charger), GFP_KERNEL);
29915 +- if (!charger)
29916 ++ if (!charger) {
29917 ++ of_node_put(node);
29918 + return -ENOMEM;
29919 ++ }
29920 +
29921 + charger->rk808 = rk808;
29922 +
29923 +diff --git a/drivers/power/supply/z2_battery.c b/drivers/power/supply/z2_battery.c
29924 +index 1897c29848600..d033c1d3ee42a 100644
29925 +--- a/drivers/power/supply/z2_battery.c
29926 ++++ b/drivers/power/supply/z2_battery.c
29927 +@@ -206,10 +206,12 @@ static int z2_batt_probe(struct i2c_client *client,
29928 +
29929 + charger->charge_gpiod = devm_gpiod_get_optional(&client->dev,
29930 + NULL, GPIOD_IN);
29931 +- if (IS_ERR(charger->charge_gpiod))
29932 +- return dev_err_probe(&client->dev,
29933 ++ if (IS_ERR(charger->charge_gpiod)) {
29934 ++ ret = dev_err_probe(&client->dev,
29935 + PTR_ERR(charger->charge_gpiod),
29936 + "failed to get charge GPIO\n");
29937 ++ goto err;
29938 ++ }
29939 +
29940 + if (charger->charge_gpiod) {
29941 + gpiod_set_consumer_name(charger->charge_gpiod, "BATT CHRG");
29942 +diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
29943 +index 6901a44dc428d..a337b47dc2f7d 100644
29944 +--- a/drivers/pwm/pwm-mediatek.c
29945 ++++ b/drivers/pwm/pwm-mediatek.c
29946 +@@ -296,7 +296,7 @@ static const struct pwm_mediatek_of_data mt6795_pwm_data = {
29947 + static const struct pwm_mediatek_of_data mt7622_pwm_data = {
29948 + .num_pwms = 6,
29949 + .pwm45_fixup = false,
29950 +- .has_ck_26m_sel = false,
29951 ++ .has_ck_26m_sel = true,
29952 + };
29953 +
29954 + static const struct pwm_mediatek_of_data mt7623_pwm_data = {
29955 +diff --git a/drivers/pwm/pwm-mtk-disp.c b/drivers/pwm/pwm-mtk-disp.c
29956 +index c605013e4114c..3fbb4bae93a4e 100644
29957 +--- a/drivers/pwm/pwm-mtk-disp.c
29958 ++++ b/drivers/pwm/pwm-mtk-disp.c
29959 +@@ -178,7 +178,7 @@ static void mtk_disp_pwm_get_state(struct pwm_chip *chip,
29960 + {
29961 + struct mtk_disp_pwm *mdp = to_mtk_disp_pwm(chip);
29962 + u64 rate, period, high_width;
29963 +- u32 clk_div, con0, con1;
29964 ++ u32 clk_div, pwm_en, con0, con1;
29965 + int err;
29966 +
29967 + err = clk_prepare_enable(mdp->clk_main);
29968 +@@ -197,7 +197,8 @@ static void mtk_disp_pwm_get_state(struct pwm_chip *chip,
29969 + rate = clk_get_rate(mdp->clk_main);
29970 + con0 = readl(mdp->base + mdp->data->con0);
29971 + con1 = readl(mdp->base + mdp->data->con1);
29972 +- state->enabled = !!(con0 & BIT(0));
29973 ++ pwm_en = readl(mdp->base + DISP_PWM_EN);
29974 ++ state->enabled = !!(pwm_en & mdp->data->enable_mask);
29975 + clk_div = FIELD_GET(PWM_CLKDIV_MASK, con0);
29976 + period = FIELD_GET(PWM_PERIOD_MASK, con1);
29977 + /*
29978 +diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
29979 +index 2d4fa5e5fdd46..bb72393134016 100644
29980 +--- a/drivers/pwm/pwm-sifive.c
29981 ++++ b/drivers/pwm/pwm-sifive.c
29982 +@@ -204,8 +204,11 @@ static int pwm_sifive_clock_notifier(struct notifier_block *nb,
29983 + struct pwm_sifive_ddata *ddata =
29984 + container_of(nb, struct pwm_sifive_ddata, notifier);
29985 +
29986 +- if (event == POST_RATE_CHANGE)
29987 ++ if (event == POST_RATE_CHANGE) {
29988 ++ mutex_lock(&ddata->lock);
29989 + pwm_sifive_update_clock(ddata, ndata->new_rate);
29990 ++ mutex_unlock(&ddata->lock);
29991 ++ }
29992 +
29993 + return NOTIFY_OK;
29994 + }
29995 +diff --git a/drivers/pwm/pwm-tegra.c b/drivers/pwm/pwm-tegra.c
29996 +index dad9978c91861..249dc01932979 100644
29997 +--- a/drivers/pwm/pwm-tegra.c
29998 ++++ b/drivers/pwm/pwm-tegra.c
29999 +@@ -145,8 +145,19 @@ static int tegra_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
30000 + * source clock rate as required_clk_rate, PWM controller will
30001 + * be able to configure the requested period.
30002 + */
30003 +- required_clk_rate =
30004 +- (NSEC_PER_SEC / period_ns) << PWM_DUTY_WIDTH;
30005 ++ required_clk_rate = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC << PWM_DUTY_WIDTH,
30006 ++ period_ns);
30007 ++
30008 ++ if (required_clk_rate > clk_round_rate(pc->clk, required_clk_rate))
30009 ++ /*
30010 ++ * required_clk_rate is a lower bound for the input
30011 ++ * rate; for lower rates there is no value for PWM_SCALE
30012 ++ * that yields a period less than or equal to the
30013 ++ * requested period. Hence, for lower rates, double the
30014 ++ * required_clk_rate to get a clock rate that can meet
30015 ++ * the requested period.
30016 ++ */
30017 ++ required_clk_rate *= 2;
30018 +
30019 + err = dev_pm_opp_set_rate(pc->dev, required_clk_rate);
30020 + if (err < 0)
30021 +diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
30022 +index 2cdc054e53a53..43db495f19861 100644
30023 +--- a/drivers/rapidio/devices/rio_mport_cdev.c
30024 ++++ b/drivers/rapidio/devices/rio_mport_cdev.c
30025 +@@ -1804,8 +1804,11 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
30026 + rio_init_dbell_res(&rdev->riores[RIO_DOORBELL_RESOURCE],
30027 + 0, 0xffff);
30028 + err = rio_add_device(rdev);
30029 +- if (err)
30030 +- goto cleanup;
30031 ++ if (err) {
30032 ++ put_device(&rdev->dev);
30033 ++ return err;
30034 ++ }
30035 ++
30036 + rio_dev_get(rdev);
30037 +
30038 + return 0;
30039 +@@ -1901,10 +1904,6 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
30040 +
30041 + priv->md = chdev;
30042 +
30043 +- mutex_lock(&chdev->file_mutex);
30044 +- list_add_tail(&priv->list, &chdev->file_list);
30045 +- mutex_unlock(&chdev->file_mutex);
30046 +-
30047 + INIT_LIST_HEAD(&priv->db_filters);
30048 + INIT_LIST_HEAD(&priv->pw_filters);
30049 + spin_lock_init(&priv->fifo_lock);
30050 +@@ -1913,6 +1912,7 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
30051 + sizeof(struct rio_event) * MPORT_EVENT_DEPTH,
30052 + GFP_KERNEL);
30053 + if (ret < 0) {
30054 ++ put_device(&chdev->dev);
30055 + dev_err(&chdev->dev, DRV_NAME ": kfifo_alloc failed\n");
30056 + ret = -ENOMEM;
30057 + goto err_fifo;
30058 +@@ -1923,6 +1923,9 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
30059 + spin_lock_init(&priv->req_lock);
30060 + mutex_init(&priv->dma_lock);
30061 + #endif
30062 ++ mutex_lock(&chdev->file_mutex);
30063 ++ list_add_tail(&priv->list, &chdev->file_list);
30064 ++ mutex_unlock(&chdev->file_mutex);
30065 +
30066 + filp->private_data = priv;
30067 + goto out;
30068 +diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
30069 +index 19b0c33f4a62a..fdcf742b2adbc 100644
30070 +--- a/drivers/rapidio/rio-scan.c
30071 ++++ b/drivers/rapidio/rio-scan.c
30072 +@@ -454,8 +454,12 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
30073 + 0, 0xffff);
30074 +
30075 + ret = rio_add_device(rdev);
30076 +- if (ret)
30077 +- goto cleanup;
30078 ++ if (ret) {
30079 ++ if (rswitch)
30080 ++ kfree(rswitch->route_table);
30081 ++ put_device(&rdev->dev);
30082 ++ return NULL;
30083 ++ }
30084 +
30085 + rio_dev_get(rdev);
30086 +
30087 +diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
30088 +index e74cf09eeff07..9544b8ee0c963 100644
30089 +--- a/drivers/rapidio/rio.c
30090 ++++ b/drivers/rapidio/rio.c
30091 +@@ -2186,11 +2186,16 @@ int rio_register_mport(struct rio_mport *port)
30092 + atomic_set(&port->state, RIO_DEVICE_RUNNING);
30093 +
30094 + res = device_register(&port->dev);
30095 +- if (res)
30096 ++ if (res) {
30097 + dev_err(&port->dev, "RIO: mport%d registration failed ERR=%d\n",
30098 + port->id, res);
30099 +- else
30100 ++ mutex_lock(&rio_mport_list_lock);
30101 ++ list_del(&port->node);
30102 ++ mutex_unlock(&rio_mport_list_lock);
30103 ++ put_device(&port->dev);
30104 ++ } else {
30105 + dev_dbg(&port->dev, "RIO: registered mport%d\n", port->id);
30106 ++ }
30107 +
30108 + return res;
30109 + }
30110 +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
30111 +index e8c00a884f1f1..3716ba060368c 100644
30112 +--- a/drivers/regulator/core.c
30113 ++++ b/drivers/regulator/core.c
30114 +@@ -1002,7 +1002,7 @@ static int drms_uA_update(struct regulator_dev *rdev)
30115 + /* get input voltage */
30116 + input_uV = 0;
30117 + if (rdev->supply)
30118 +- input_uV = regulator_get_voltage(rdev->supply);
30119 ++ input_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
30120 + if (input_uV <= 0)
30121 + input_uV = rdev->constraints->input_uV;
30122 +
30123 +@@ -1596,7 +1596,13 @@ static int set_machine_constraints(struct regulator_dev *rdev)
30124 + if (rdev->supply_name && !rdev->supply)
30125 + return -EPROBE_DEFER;
30126 +
30127 +- if (rdev->supply) {
30128 ++ /* If supplying regulator has already been enabled,
30129 ++ * it's not intended to have use_count increment
30130 ++ * when rdev is only boot-on.
30131 ++ */
30132 ++ if (rdev->supply &&
30133 ++ (rdev->constraints->always_on ||
30134 ++ !regulator_is_enabled(rdev->supply))) {
30135 + ret = regulator_enable(rdev->supply);
30136 + if (ret < 0) {
30137 + _regulator_put(rdev->supply);
30138 +@@ -1640,6 +1646,7 @@ static int set_supply(struct regulator_dev *rdev,
30139 +
30140 + rdev->supply = create_regulator(supply_rdev, &rdev->dev, "SUPPLY");
30141 + if (rdev->supply == NULL) {
30142 ++ module_put(supply_rdev->owner);
30143 + err = -ENOMEM;
30144 + return err;
30145 + }
30146 +@@ -1813,7 +1820,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
30147 +
30148 + regulator = kzalloc(sizeof(*regulator), GFP_KERNEL);
30149 + if (regulator == NULL) {
30150 +- kfree(supply_name);
30151 ++ kfree_const(supply_name);
30152 + return NULL;
30153 + }
30154 +
30155 +@@ -1943,6 +1950,7 @@ static struct regulator_dev *regulator_dev_lookup(struct device *dev,
30156 + node = of_get_regulator(dev, supply);
30157 + if (node) {
30158 + r = of_find_regulator_by_node(node);
30159 ++ of_node_put(node);
30160 + if (r)
30161 + return r;
30162 +
30163 +@@ -5396,6 +5404,7 @@ static struct regulator_coupler generic_regulator_coupler = {
30164 +
30165 + /**
30166 + * regulator_register - register regulator
30167 ++ * @dev: the device that drive the regulator
30168 + * @regulator_desc: regulator to register
30169 + * @cfg: runtime configuration for regulator
30170 + *
30171 +@@ -5404,7 +5413,8 @@ static struct regulator_coupler generic_regulator_coupler = {
30172 + * or an ERR_PTR() on error.
30173 + */
30174 + struct regulator_dev *
30175 +-regulator_register(const struct regulator_desc *regulator_desc,
30176 ++regulator_register(struct device *dev,
30177 ++ const struct regulator_desc *regulator_desc,
30178 + const struct regulator_config *cfg)
30179 + {
30180 + const struct regulator_init_data *init_data;
30181 +@@ -5413,7 +5423,6 @@ regulator_register(const struct regulator_desc *regulator_desc,
30182 + struct regulator_dev *rdev;
30183 + bool dangling_cfg_gpiod = false;
30184 + bool dangling_of_gpiod = false;
30185 +- struct device *dev;
30186 + int ret, i;
30187 + bool resolved_early = false;
30188 +
30189 +@@ -5426,8 +5435,7 @@ regulator_register(const struct regulator_desc *regulator_desc,
30190 + goto rinse;
30191 + }
30192 +
30193 +- dev = cfg->dev;
30194 +- WARN_ON(!dev);
30195 ++ WARN_ON(!dev || !cfg->dev);
30196 +
30197 + if (regulator_desc->name == NULL || regulator_desc->ops == NULL) {
30198 + ret = -EINVAL;
30199 +@@ -5526,7 +5534,7 @@ regulator_register(const struct regulator_desc *regulator_desc,
30200 +
30201 + /* register with sysfs */
30202 + rdev->dev.class = &regulator_class;
30203 +- rdev->dev.parent = dev;
30204 ++ rdev->dev.parent = config->dev;
30205 + dev_set_name(&rdev->dev, "regulator.%lu",
30206 + (unsigned long) atomic_inc_return(&regulator_no));
30207 + dev_set_drvdata(&rdev->dev, rdev);
30208 +@@ -5641,6 +5649,7 @@ unset_supplies:
30209 + regulator_remove_coupling(rdev);
30210 + mutex_unlock(&regulator_list_mutex);
30211 + wash:
30212 ++ regulator_put(rdev->supply);
30213 + kfree(rdev->coupling_desc.coupled_rdevs);
30214 + mutex_lock(&regulator_list_mutex);
30215 + regulator_ena_gpio_free(rdev);
30216 +diff --git a/drivers/regulator/devres.c b/drivers/regulator/devres.c
30217 +index 3265e75e97ab4..5c7ff9b3e8a79 100644
30218 +--- a/drivers/regulator/devres.c
30219 ++++ b/drivers/regulator/devres.c
30220 +@@ -385,7 +385,7 @@ struct regulator_dev *devm_regulator_register(struct device *dev,
30221 + if (!ptr)
30222 + return ERR_PTR(-ENOMEM);
30223 +
30224 +- rdev = regulator_register(regulator_desc, config);
30225 ++ rdev = regulator_register(dev, regulator_desc, config);
30226 + if (!IS_ERR(rdev)) {
30227 + *ptr = rdev;
30228 + devres_add(dev, ptr);
30229 +diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
30230 +index 0aff1c2886b5d..cd726d4e8fbfb 100644
30231 +--- a/drivers/regulator/of_regulator.c
30232 ++++ b/drivers/regulator/of_regulator.c
30233 +@@ -505,7 +505,7 @@ struct regulator_init_data *regulator_of_get_init_data(struct device *dev,
30234 + struct device_node *child;
30235 + struct regulator_init_data *init_data = NULL;
30236 +
30237 +- child = regulator_of_get_init_node(dev, desc);
30238 ++ child = regulator_of_get_init_node(config->dev, desc);
30239 + if (!child)
30240 + return NULL;
30241 +
30242 +diff --git a/drivers/regulator/qcom-labibb-regulator.c b/drivers/regulator/qcom-labibb-regulator.c
30243 +index 639b71eb41ffe..bcf7140f3bc98 100644
30244 +--- a/drivers/regulator/qcom-labibb-regulator.c
30245 ++++ b/drivers/regulator/qcom-labibb-regulator.c
30246 +@@ -822,6 +822,7 @@ static int qcom_labibb_regulator_probe(struct platform_device *pdev)
30247 + if (irq == 0)
30248 + irq = -EINVAL;
30249 +
30250 ++ of_node_put(reg_node);
30251 + return dev_err_probe(vreg->dev, irq,
30252 + "Short-circuit irq not found.\n");
30253 + }
30254 +diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
30255 +index 4158ff126a67a..f90bcdeecea58 100644
30256 +--- a/drivers/regulator/qcom-rpmh-regulator.c
30257 ++++ b/drivers/regulator/qcom-rpmh-regulator.c
30258 +@@ -1187,7 +1187,7 @@ static const struct rpmh_vreg_init_data pm7325_vreg_data[] = {
30259 + static const struct rpmh_vreg_init_data pmr735a_vreg_data[] = {
30260 + RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps520, "vdd-s1"),
30261 + RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps520, "vdd-s2"),
30262 +- RPMH_VREG("smps3", "smp%s3", &pmic5_hfsmps510, "vdd-s3"),
30263 ++ RPMH_VREG("smps3", "smp%s3", &pmic5_hfsmps515, "vdd-s3"),
30264 + RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo, "vdd-l1-l2"),
30265 + RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo, "vdd-l1-l2"),
30266 + RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"),
30267 +diff --git a/drivers/regulator/stm32-vrefbuf.c b/drivers/regulator/stm32-vrefbuf.c
30268 +index 30ea3bc8ca192..7a454b7b6eab9 100644
30269 +--- a/drivers/regulator/stm32-vrefbuf.c
30270 ++++ b/drivers/regulator/stm32-vrefbuf.c
30271 +@@ -210,7 +210,7 @@ static int stm32_vrefbuf_probe(struct platform_device *pdev)
30272 + pdev->dev.of_node,
30273 + &stm32_vrefbuf_regu);
30274 +
30275 +- rdev = regulator_register(&stm32_vrefbuf_regu, &config);
30276 ++ rdev = regulator_register(&pdev->dev, &stm32_vrefbuf_regu, &config);
30277 + if (IS_ERR(rdev)) {
30278 + ret = PTR_ERR(rdev);
30279 + dev_err(&pdev->dev, "register failed with error %d\n", ret);
30280 +diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
30281 +index 6afd0941e5524..dc6f07ca83410 100644
30282 +--- a/drivers/remoteproc/qcom_q6v5_pas.c
30283 ++++ b/drivers/remoteproc/qcom_q6v5_pas.c
30284 +@@ -449,6 +449,7 @@ static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
30285 + }
30286 +
30287 + ret = of_address_to_resource(node, 0, &r);
30288 ++ of_node_put(node);
30289 + if (ret)
30290 + return ret;
30291 +
30292 +@@ -556,6 +557,7 @@ static int adsp_probe(struct platform_device *pdev)
30293 + detach_proxy_pds:
30294 + adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
30295 + free_rproc:
30296 ++ device_init_wakeup(adsp->dev, false);
30297 + rproc_free(rproc);
30298 +
30299 + return ret;
30300 +@@ -572,6 +574,8 @@ static int adsp_remove(struct platform_device *pdev)
30301 + qcom_remove_sysmon_subdev(adsp->sysmon);
30302 + qcom_remove_smd_subdev(adsp->rproc, &adsp->smd_subdev);
30303 + qcom_remove_ssr_subdev(adsp->rproc, &adsp->ssr_subdev);
30304 ++ adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
30305 ++ device_init_wakeup(adsp->dev, false);
30306 + rproc_free(adsp->rproc);
30307 +
30308 + return 0;
30309 +diff --git a/drivers/remoteproc/qcom_q6v5_wcss.c b/drivers/remoteproc/qcom_q6v5_wcss.c
30310 +index bb0947f7770ea..ba24d745b2d65 100644
30311 +--- a/drivers/remoteproc/qcom_q6v5_wcss.c
30312 ++++ b/drivers/remoteproc/qcom_q6v5_wcss.c
30313 +@@ -351,7 +351,7 @@ static int q6v5_wcss_qcs404_power_on(struct q6v5_wcss *wcss)
30314 + if (ret) {
30315 + dev_err(wcss->dev,
30316 + "xo cbcr enabling timed out (rc:%d)\n", ret);
30317 +- return ret;
30318 ++ goto disable_xo_cbcr_clk;
30319 + }
30320 +
30321 + writel(0, wcss->reg_base + Q6SS_CGC_OVERRIDE);
30322 +@@ -417,6 +417,7 @@ disable_sleep_cbcr_clk:
30323 + val = readl(wcss->reg_base + Q6SS_SLEEP_CBCR);
30324 + val &= ~Q6SS_CLK_ENABLE;
30325 + writel(val, wcss->reg_base + Q6SS_SLEEP_CBCR);
30326 ++disable_xo_cbcr_clk:
30327 + val = readl(wcss->reg_base + Q6SS_XO_CBCR);
30328 + val &= ~Q6SS_CLK_ENABLE;
30329 + writel(val, wcss->reg_base + Q6SS_XO_CBCR);
30330 +@@ -827,6 +828,9 @@ static int q6v5_wcss_init_mmio(struct q6v5_wcss *wcss,
30331 + int ret;
30332 +
30333 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qdsp6");
30334 ++ if (!res)
30335 ++ return -EINVAL;
30336 ++
30337 + wcss->reg_base = devm_ioremap(&pdev->dev, res->start,
30338 + resource_size(res));
30339 + if (!wcss->reg_base)
30340 +diff --git a/drivers/remoteproc/qcom_sysmon.c b/drivers/remoteproc/qcom_sysmon.c
30341 +index 57dde2a69b9dd..15af52f8499eb 100644
30342 +--- a/drivers/remoteproc/qcom_sysmon.c
30343 ++++ b/drivers/remoteproc/qcom_sysmon.c
30344 +@@ -652,7 +652,9 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
30345 + if (sysmon->shutdown_irq != -ENODATA) {
30346 + dev_err(sysmon->dev,
30347 + "failed to retrieve shutdown-ack IRQ\n");
30348 +- return ERR_PTR(sysmon->shutdown_irq);
30349 ++ ret = sysmon->shutdown_irq;
30350 ++ kfree(sysmon);
30351 ++ return ERR_PTR(ret);
30352 + }
30353 + } else {
30354 + ret = devm_request_threaded_irq(sysmon->dev,
30355 +@@ -663,6 +665,7 @@ struct qcom_sysmon *qcom_add_sysmon_subdev(struct rproc *rproc,
30356 + if (ret) {
30357 + dev_err(sysmon->dev,
30358 + "failed to acquire shutdown-ack IRQ\n");
30359 ++ kfree(sysmon);
30360 + return ERR_PTR(ret);
30361 + }
30362 + }
30363 +diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
30364 +index 8768cb64f560c..cb1d414a23896 100644
30365 +--- a/drivers/remoteproc/remoteproc_core.c
30366 ++++ b/drivers/remoteproc/remoteproc_core.c
30367 +@@ -509,7 +509,13 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
30368 + rvdev_data.rsc_offset = offset;
30369 + rvdev_data.rsc = rsc;
30370 +
30371 +- pdev = platform_device_register_data(dev, "rproc-virtio", rvdev_data.index, &rvdev_data,
30372 ++ /*
30373 ++ * When there is more than one remote processor, rproc->nb_vdev number is
30374 ++ * same for each separate instances of "rproc". If rvdev_data.index is used
30375 ++ * as device id, then we get duplication in sysfs, so need to use
30376 ++ * PLATFORM_DEVID_AUTO to auto select device id.
30377 ++ */
30378 ++ pdev = platform_device_register_data(dev, "rproc-virtio", PLATFORM_DEVID_AUTO, &rvdev_data,
30379 + sizeof(rvdev_data));
30380 + if (IS_ERR(pdev)) {
30381 + dev_err(dev, "failed to create rproc-virtio device\n");
30382 +diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
30383 +index e48223c00c672..e5b7b48cffac0 100644
30384 +--- a/drivers/rtc/class.c
30385 ++++ b/drivers/rtc/class.c
30386 +@@ -374,11 +374,11 @@ struct rtc_device *devm_rtc_allocate_device(struct device *dev)
30387 +
30388 + rtc->id = id;
30389 + rtc->dev.parent = dev;
30390 +- err = dev_set_name(&rtc->dev, "rtc%d", id);
30391 ++ err = devm_add_action_or_reset(dev, devm_rtc_release_device, rtc);
30392 + if (err)
30393 + return ERR_PTR(err);
30394 +
30395 +- err = devm_add_action_or_reset(dev, devm_rtc_release_device, rtc);
30396 ++ err = dev_set_name(&rtc->dev, "rtc%d", id);
30397 + if (err)
30398 + return ERR_PTR(err);
30399 +
30400 +diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
30401 +index 58cc2bae2f8a0..00e2ca7374ecf 100644
30402 +--- a/drivers/rtc/rtc-cmos.c
30403 ++++ b/drivers/rtc/rtc-cmos.c
30404 +@@ -744,6 +744,168 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
30405 + return IRQ_NONE;
30406 + }
30407 +
30408 ++#ifdef CONFIG_ACPI
30409 ++
30410 ++#include <linux/acpi.h>
30411 ++
30412 ++static u32 rtc_handler(void *context)
30413 ++{
30414 ++ struct device *dev = context;
30415 ++ struct cmos_rtc *cmos = dev_get_drvdata(dev);
30416 ++ unsigned char rtc_control = 0;
30417 ++ unsigned char rtc_intr;
30418 ++ unsigned long flags;
30419 ++
30420 ++
30421 ++ /*
30422 ++ * Always update rtc irq when ACPI is used as RTC Alarm.
30423 ++ * Or else, ACPI SCI is enabled during suspend/resume only,
30424 ++ * update rtc irq in that case.
30425 ++ */
30426 ++ if (cmos_use_acpi_alarm())
30427 ++ cmos_interrupt(0, (void *)cmos->rtc);
30428 ++ else {
30429 ++ /* Fix me: can we use cmos_interrupt() here as well? */
30430 ++ spin_lock_irqsave(&rtc_lock, flags);
30431 ++ if (cmos_rtc.suspend_ctrl)
30432 ++ rtc_control = CMOS_READ(RTC_CONTROL);
30433 ++ if (rtc_control & RTC_AIE) {
30434 ++ cmos_rtc.suspend_ctrl &= ~RTC_AIE;
30435 ++ CMOS_WRITE(rtc_control, RTC_CONTROL);
30436 ++ rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
30437 ++ rtc_update_irq(cmos->rtc, 1, rtc_intr);
30438 ++ }
30439 ++ spin_unlock_irqrestore(&rtc_lock, flags);
30440 ++ }
30441 ++
30442 ++ pm_wakeup_hard_event(dev);
30443 ++ acpi_clear_event(ACPI_EVENT_RTC);
30444 ++ acpi_disable_event(ACPI_EVENT_RTC, 0);
30445 ++ return ACPI_INTERRUPT_HANDLED;
30446 ++}
30447 ++
30448 ++static void acpi_rtc_event_setup(struct device *dev)
30449 ++{
30450 ++ if (acpi_disabled)
30451 ++ return;
30452 ++
30453 ++ acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, dev);
30454 ++ /*
30455 ++ * After the RTC handler is installed, the Fixed_RTC event should
30456 ++ * be disabled. Only when the RTC alarm is set will it be enabled.
30457 ++ */
30458 ++ acpi_clear_event(ACPI_EVENT_RTC);
30459 ++ acpi_disable_event(ACPI_EVENT_RTC, 0);
30460 ++}
30461 ++
30462 ++static void acpi_rtc_event_cleanup(void)
30463 ++{
30464 ++ if (acpi_disabled)
30465 ++ return;
30466 ++
30467 ++ acpi_remove_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler);
30468 ++}
30469 ++
30470 ++static void rtc_wake_on(struct device *dev)
30471 ++{
30472 ++ acpi_clear_event(ACPI_EVENT_RTC);
30473 ++ acpi_enable_event(ACPI_EVENT_RTC, 0);
30474 ++}
30475 ++
30476 ++static void rtc_wake_off(struct device *dev)
30477 ++{
30478 ++ acpi_disable_event(ACPI_EVENT_RTC, 0);
30479 ++}
30480 ++
30481 ++#ifdef CONFIG_X86
30482 ++/* Enable use_acpi_alarm mode for Intel platforms no earlier than 2015 */
30483 ++static void use_acpi_alarm_quirks(void)
30484 ++{
30485 ++ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
30486 ++ return;
30487 ++
30488 ++ if (!is_hpet_enabled())
30489 ++ return;
30490 ++
30491 ++ if (dmi_get_bios_year() < 2015)
30492 ++ return;
30493 ++
30494 ++ use_acpi_alarm = true;
30495 ++}
30496 ++#else
30497 ++static inline void use_acpi_alarm_quirks(void) { }
30498 ++#endif
30499 ++
30500 ++static void acpi_cmos_wake_setup(struct device *dev)
30501 ++{
30502 ++ if (acpi_disabled)
30503 ++ return;
30504 ++
30505 ++ use_acpi_alarm_quirks();
30506 ++
30507 ++ cmos_rtc.wake_on = rtc_wake_on;
30508 ++ cmos_rtc.wake_off = rtc_wake_off;
30509 ++
30510 ++ /* ACPI tables bug workaround. */
30511 ++ if (acpi_gbl_FADT.month_alarm && !acpi_gbl_FADT.day_alarm) {
30512 ++ dev_dbg(dev, "bogus FADT month_alarm (%d)\n",
30513 ++ acpi_gbl_FADT.month_alarm);
30514 ++ acpi_gbl_FADT.month_alarm = 0;
30515 ++ }
30516 ++
30517 ++ cmos_rtc.day_alrm = acpi_gbl_FADT.day_alarm;
30518 ++ cmos_rtc.mon_alrm = acpi_gbl_FADT.month_alarm;
30519 ++ cmos_rtc.century = acpi_gbl_FADT.century;
30520 ++
30521 ++ if (acpi_gbl_FADT.flags & ACPI_FADT_S4_RTC_WAKE)
30522 ++ dev_info(dev, "RTC can wake from S4\n");
30523 ++
30524 ++ /* RTC always wakes from S1/S2/S3, and often S4/STD */
30525 ++ device_init_wakeup(dev, 1);
30526 ++}
30527 ++
30528 ++static void cmos_check_acpi_rtc_status(struct device *dev,
30529 ++ unsigned char *rtc_control)
30530 ++{
30531 ++ struct cmos_rtc *cmos = dev_get_drvdata(dev);
30532 ++ acpi_event_status rtc_status;
30533 ++ acpi_status status;
30534 ++
30535 ++ if (acpi_gbl_FADT.flags & ACPI_FADT_FIXED_RTC)
30536 ++ return;
30537 ++
30538 ++ status = acpi_get_event_status(ACPI_EVENT_RTC, &rtc_status);
30539 ++ if (ACPI_FAILURE(status)) {
30540 ++ dev_err(dev, "Could not get RTC status\n");
30541 ++ } else if (rtc_status & ACPI_EVENT_FLAG_SET) {
30542 ++ unsigned char mask;
30543 ++ *rtc_control &= ~RTC_AIE;
30544 ++ CMOS_WRITE(*rtc_control, RTC_CONTROL);
30545 ++ mask = CMOS_READ(RTC_INTR_FLAGS);
30546 ++ rtc_update_irq(cmos->rtc, 1, mask);
30547 ++ }
30548 ++}
30549 ++
30550 ++#else /* !CONFIG_ACPI */
30551 ++
30552 ++static inline void acpi_rtc_event_setup(struct device *dev)
30553 ++{
30554 ++}
30555 ++
30556 ++static inline void acpi_rtc_event_cleanup(void)
30557 ++{
30558 ++}
30559 ++
30560 ++static inline void acpi_cmos_wake_setup(struct device *dev)
30561 ++{
30562 ++}
30563 ++
30564 ++static inline void cmos_check_acpi_rtc_status(struct device *dev,
30565 ++ unsigned char *rtc_control)
30566 ++{
30567 ++}
30568 ++#endif /* CONFIG_ACPI */
30569 ++
30570 + #ifdef CONFIG_PNP
30571 + #define INITSECTION
30572 +
30573 +@@ -827,19 +989,27 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
30574 + if (info->address_space)
30575 + address_space = info->address_space;
30576 +
30577 +- if (info->rtc_day_alarm && info->rtc_day_alarm < 128)
30578 +- cmos_rtc.day_alrm = info->rtc_day_alarm;
30579 +- if (info->rtc_mon_alarm && info->rtc_mon_alarm < 128)
30580 +- cmos_rtc.mon_alrm = info->rtc_mon_alarm;
30581 +- if (info->rtc_century && info->rtc_century < 128)
30582 +- cmos_rtc.century = info->rtc_century;
30583 ++ cmos_rtc.day_alrm = info->rtc_day_alarm;
30584 ++ cmos_rtc.mon_alrm = info->rtc_mon_alarm;
30585 ++ cmos_rtc.century = info->rtc_century;
30586 +
30587 + if (info->wake_on && info->wake_off) {
30588 + cmos_rtc.wake_on = info->wake_on;
30589 + cmos_rtc.wake_off = info->wake_off;
30590 + }
30591 ++ } else {
30592 ++ acpi_cmos_wake_setup(dev);
30593 + }
30594 +
30595 ++ if (cmos_rtc.day_alrm >= 128)
30596 ++ cmos_rtc.day_alrm = 0;
30597 ++
30598 ++ if (cmos_rtc.mon_alrm >= 128)
30599 ++ cmos_rtc.mon_alrm = 0;
30600 ++
30601 ++ if (cmos_rtc.century >= 128)
30602 ++ cmos_rtc.century = 0;
30603 ++
30604 + cmos_rtc.dev = dev;
30605 + dev_set_drvdata(dev, &cmos_rtc);
30606 +
30607 +@@ -928,6 +1098,13 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
30608 + nvmem_cfg.size = address_space - NVRAM_OFFSET;
30609 + devm_rtc_nvmem_register(cmos_rtc.rtc, &nvmem_cfg);
30610 +
30611 ++ /*
30612 ++ * Everything has gone well so far, so by default register a handler for
30613 ++ * the ACPI RTC fixed event.
30614 ++ */
30615 ++ if (!info)
30616 ++ acpi_rtc_event_setup(dev);
30617 ++
30618 + dev_info(dev, "%s%s, %d bytes nvram%s\n",
30619 + !is_valid_irq(rtc_irq) ? "no alarms" :
30620 + cmos_rtc.mon_alrm ? "alarms up to one year" :
30621 +@@ -973,6 +1150,9 @@ static void cmos_do_remove(struct device *dev)
30622 + hpet_unregister_irq_handler(cmos_interrupt);
30623 + }
30624 +
30625 ++ if (!dev_get_platdata(dev))
30626 ++ acpi_rtc_event_cleanup();
30627 ++
30628 + cmos->rtc = NULL;
30629 +
30630 + ports = cmos->iomem;
30631 +@@ -1122,9 +1302,6 @@ static void cmos_check_wkalrm(struct device *dev)
30632 + }
30633 + }
30634 +
30635 +-static void cmos_check_acpi_rtc_status(struct device *dev,
30636 +- unsigned char *rtc_control);
30637 +-
30638 + static int __maybe_unused cmos_resume(struct device *dev)
30639 + {
30640 + struct cmos_rtc *cmos = dev_get_drvdata(dev);
30641 +@@ -1191,175 +1368,13 @@ static SIMPLE_DEV_PM_OPS(cmos_pm_ops, cmos_suspend, cmos_resume);
30642 + * predate even PNPBIOS should set up platform_bus devices.
30643 + */
30644 +
30645 +-#ifdef CONFIG_ACPI
30646 +-
30647 +-#include <linux/acpi.h>
30648 +-
30649 +-static u32 rtc_handler(void *context)
30650 +-{
30651 +- struct device *dev = context;
30652 +- struct cmos_rtc *cmos = dev_get_drvdata(dev);
30653 +- unsigned char rtc_control = 0;
30654 +- unsigned char rtc_intr;
30655 +- unsigned long flags;
30656 +-
30657 +-
30658 +- /*
30659 +- * Always update rtc irq when ACPI is used as RTC Alarm.
30660 +- * Or else, ACPI SCI is enabled during suspend/resume only,
30661 +- * update rtc irq in that case.
30662 +- */
30663 +- if (cmos_use_acpi_alarm())
30664 +- cmos_interrupt(0, (void *)cmos->rtc);
30665 +- else {
30666 +- /* Fix me: can we use cmos_interrupt() here as well? */
30667 +- spin_lock_irqsave(&rtc_lock, flags);
30668 +- if (cmos_rtc.suspend_ctrl)
30669 +- rtc_control = CMOS_READ(RTC_CONTROL);
30670 +- if (rtc_control & RTC_AIE) {
30671 +- cmos_rtc.suspend_ctrl &= ~RTC_AIE;
30672 +- CMOS_WRITE(rtc_control, RTC_CONTROL);
30673 +- rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
30674 +- rtc_update_irq(cmos->rtc, 1, rtc_intr);
30675 +- }
30676 +- spin_unlock_irqrestore(&rtc_lock, flags);
30677 +- }
30678 +-
30679 +- pm_wakeup_hard_event(dev);
30680 +- acpi_clear_event(ACPI_EVENT_RTC);
30681 +- acpi_disable_event(ACPI_EVENT_RTC, 0);
30682 +- return ACPI_INTERRUPT_HANDLED;
30683 +-}
30684 +-
30685 +-static inline void rtc_wake_setup(struct device *dev)
30686 +-{
30687 +- if (acpi_disabled)
30688 +- return;
30689 +-
30690 +- acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, dev);
30691 +- /*
30692 +- * After the RTC handler is installed, the Fixed_RTC event should
30693 +- * be disabled. Only when the RTC alarm is set will it be enabled.
30694 +- */
30695 +- acpi_clear_event(ACPI_EVENT_RTC);
30696 +- acpi_disable_event(ACPI_EVENT_RTC, 0);
30697 +-}
30698 +-
30699 +-static void rtc_wake_on(struct device *dev)
30700 +-{
30701 +- acpi_clear_event(ACPI_EVENT_RTC);
30702 +- acpi_enable_event(ACPI_EVENT_RTC, 0);
30703 +-}
30704 +-
30705 +-static void rtc_wake_off(struct device *dev)
30706 +-{
30707 +- acpi_disable_event(ACPI_EVENT_RTC, 0);
30708 +-}
30709 +-
30710 +-#ifdef CONFIG_X86
30711 +-/* Enable use_acpi_alarm mode for Intel platforms no earlier than 2015 */
30712 +-static void use_acpi_alarm_quirks(void)
30713 +-{
30714 +- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
30715 +- return;
30716 +-
30717 +- if (!is_hpet_enabled())
30718 +- return;
30719 +-
30720 +- if (dmi_get_bios_year() < 2015)
30721 +- return;
30722 +-
30723 +- use_acpi_alarm = true;
30724 +-}
30725 +-#else
30726 +-static inline void use_acpi_alarm_quirks(void) { }
30727 +-#endif
30728 +-
30729 +-/* Every ACPI platform has a mc146818 compatible "cmos rtc". Here we find
30730 +- * its device node and pass extra config data. This helps its driver use
30731 +- * capabilities that the now-obsolete mc146818 didn't have, and informs it
30732 +- * that this board's RTC is wakeup-capable (per ACPI spec).
30733 +- */
30734 +-static struct cmos_rtc_board_info acpi_rtc_info;
30735 +-
30736 +-static void cmos_wake_setup(struct device *dev)
30737 +-{
30738 +- if (acpi_disabled)
30739 +- return;
30740 +-
30741 +- use_acpi_alarm_quirks();
30742 +-
30743 +- acpi_rtc_info.wake_on = rtc_wake_on;
30744 +- acpi_rtc_info.wake_off = rtc_wake_off;
30745 +-
30746 +- /* workaround bug in some ACPI tables */
30747 +- if (acpi_gbl_FADT.month_alarm && !acpi_gbl_FADT.day_alarm) {
30748 +- dev_dbg(dev, "bogus FADT month_alarm (%d)\n",
30749 +- acpi_gbl_FADT.month_alarm);
30750 +- acpi_gbl_FADT.month_alarm = 0;
30751 +- }
30752 +-
30753 +- acpi_rtc_info.rtc_day_alarm = acpi_gbl_FADT.day_alarm;
30754 +- acpi_rtc_info.rtc_mon_alarm = acpi_gbl_FADT.month_alarm;
30755 +- acpi_rtc_info.rtc_century = acpi_gbl_FADT.century;
30756 +-
30757 +- /* NOTE: S4_RTC_WAKE is NOT currently useful to Linux */
30758 +- if (acpi_gbl_FADT.flags & ACPI_FADT_S4_RTC_WAKE)
30759 +- dev_info(dev, "RTC can wake from S4\n");
30760 +-
30761 +- dev->platform_data = &acpi_rtc_info;
30762 +-
30763 +- /* RTC always wakes from S1/S2/S3, and often S4/STD */
30764 +- device_init_wakeup(dev, 1);
30765 +-}
30766 +-
30767 +-static void cmos_check_acpi_rtc_status(struct device *dev,
30768 +- unsigned char *rtc_control)
30769 +-{
30770 +- struct cmos_rtc *cmos = dev_get_drvdata(dev);
30771 +- acpi_event_status rtc_status;
30772 +- acpi_status status;
30773 +-
30774 +- if (acpi_gbl_FADT.flags & ACPI_FADT_FIXED_RTC)
30775 +- return;
30776 +-
30777 +- status = acpi_get_event_status(ACPI_EVENT_RTC, &rtc_status);
30778 +- if (ACPI_FAILURE(status)) {
30779 +- dev_err(dev, "Could not get RTC status\n");
30780 +- } else if (rtc_status & ACPI_EVENT_FLAG_SET) {
30781 +- unsigned char mask;
30782 +- *rtc_control &= ~RTC_AIE;
30783 +- CMOS_WRITE(*rtc_control, RTC_CONTROL);
30784 +- mask = CMOS_READ(RTC_INTR_FLAGS);
30785 +- rtc_update_irq(cmos->rtc, 1, mask);
30786 +- }
30787 +-}
30788 +-
30789 +-#else
30790 +-
30791 +-static void cmos_wake_setup(struct device *dev)
30792 +-{
30793 +-}
30794 +-
30795 +-static void cmos_check_acpi_rtc_status(struct device *dev,
30796 +- unsigned char *rtc_control)
30797 +-{
30798 +-}
30799 +-
30800 +-static void rtc_wake_setup(struct device *dev)
30801 +-{
30802 +-}
30803 +-#endif
30804 +-
30805 + #ifdef CONFIG_PNP
30806 +
30807 + #include <linux/pnp.h>
30808 +
30809 + static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
30810 + {
30811 +- int irq, ret;
30812 +-
30813 +- cmos_wake_setup(&pnp->dev);
30814 ++ int irq;
30815 +
30816 + if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) {
30817 + irq = 0;
30818 +@@ -1375,13 +1390,7 @@ static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
30819 + irq = pnp_irq(pnp, 0);
30820 + }
30821 +
30822 +- ret = cmos_do_probe(&pnp->dev, pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
30823 +- if (ret)
30824 +- return ret;
30825 +-
30826 +- rtc_wake_setup(&pnp->dev);
30827 +-
30828 +- return 0;
30829 ++ return cmos_do_probe(&pnp->dev, pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
30830 + }
30831 +
30832 + static void cmos_pnp_remove(struct pnp_dev *pnp)
30833 +@@ -1465,10 +1474,9 @@ static inline void cmos_of_init(struct platform_device *pdev) {}
30834 + static int __init cmos_platform_probe(struct platform_device *pdev)
30835 + {
30836 + struct resource *resource;
30837 +- int irq, ret;
30838 ++ int irq;
30839 +
30840 + cmos_of_init(pdev);
30841 +- cmos_wake_setup(&pdev->dev);
30842 +
30843 + if (RTC_IOMAPPED)
30844 + resource = platform_get_resource(pdev, IORESOURCE_IO, 0);
30845 +@@ -1478,13 +1486,7 @@ static int __init cmos_platform_probe(struct platform_device *pdev)
30846 + if (irq < 0)
30847 + irq = -1;
30848 +
30849 +- ret = cmos_do_probe(&pdev->dev, resource, irq);
30850 +- if (ret)
30851 +- return ret;
30852 +-
30853 +- rtc_wake_setup(&pdev->dev);
30854 +-
30855 +- return 0;
30856 ++ return cmos_do_probe(&pdev->dev, resource, irq);
30857 + }
30858 +
30859 + static int cmos_platform_remove(struct platform_device *pdev)
30860 +diff --git a/drivers/rtc/rtc-mxc_v2.c b/drivers/rtc/rtc-mxc_v2.c
30861 +index 5e03834016294..f6d2ad91ff7a9 100644
30862 +--- a/drivers/rtc/rtc-mxc_v2.c
30863 ++++ b/drivers/rtc/rtc-mxc_v2.c
30864 +@@ -336,8 +336,10 @@ static int mxc_rtc_probe(struct platform_device *pdev)
30865 + }
30866 +
30867 + pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
30868 +- if (IS_ERR(pdata->rtc))
30869 ++ if (IS_ERR(pdata->rtc)) {
30870 ++ clk_disable_unprepare(pdata->clk);
30871 + return PTR_ERR(pdata->rtc);
30872 ++ }
30873 +
30874 + pdata->rtc->ops = &mxc_rtc_ops;
30875 + pdata->rtc->range_max = U32_MAX;
30876 +diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
30877 +index 63b275b014bd6..87f4fc9df68b4 100644
30878 +--- a/drivers/rtc/rtc-pcf2127.c
30879 ++++ b/drivers/rtc/rtc-pcf2127.c
30880 +@@ -885,9 +885,17 @@ static const struct regmap_bus pcf2127_i2c_regmap = {
30881 +
30882 + static struct i2c_driver pcf2127_i2c_driver;
30883 +
30884 +-static int pcf2127_i2c_probe(struct i2c_client *client,
30885 +- const struct i2c_device_id *id)
30886 ++static const struct i2c_device_id pcf2127_i2c_id[] = {
30887 ++ { "pcf2127", 1 },
30888 ++ { "pcf2129", 0 },
30889 ++ { "pca2129", 0 },
30890 ++ { }
30891 ++};
30892 ++MODULE_DEVICE_TABLE(i2c, pcf2127_i2c_id);
30893 ++
30894 ++static int pcf2127_i2c_probe(struct i2c_client *client)
30895 + {
30896 ++ const struct i2c_device_id *id = i2c_match_id(pcf2127_i2c_id, client);
30897 + struct regmap *regmap;
30898 + static const struct regmap_config config = {
30899 + .reg_bits = 8,
30900 +@@ -910,20 +918,12 @@ static int pcf2127_i2c_probe(struct i2c_client *client,
30901 + pcf2127_i2c_driver.driver.name, id->driver_data);
30902 + }
30903 +
30904 +-static const struct i2c_device_id pcf2127_i2c_id[] = {
30905 +- { "pcf2127", 1 },
30906 +- { "pcf2129", 0 },
30907 +- { "pca2129", 0 },
30908 +- { }
30909 +-};
30910 +-MODULE_DEVICE_TABLE(i2c, pcf2127_i2c_id);
30911 +-
30912 + static struct i2c_driver pcf2127_i2c_driver = {
30913 + .driver = {
30914 + .name = "rtc-pcf2127-i2c",
30915 + .of_match_table = of_match_ptr(pcf2127_of_match),
30916 + },
30917 +- .probe = pcf2127_i2c_probe,
30918 ++ .probe_new = pcf2127_i2c_probe,
30919 + .id_table = pcf2127_i2c_id,
30920 + };
30921 +
30922 +diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
30923 +index 095891999da11..754e03984f986 100644
30924 +--- a/drivers/rtc/rtc-pcf85063.c
30925 ++++ b/drivers/rtc/rtc-pcf85063.c
30926 +@@ -169,10 +169,10 @@ static int pcf85063_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
30927 + if (ret)
30928 + return ret;
30929 +
30930 +- alrm->time.tm_sec = bcd2bin(buf[0]);
30931 +- alrm->time.tm_min = bcd2bin(buf[1]);
30932 +- alrm->time.tm_hour = bcd2bin(buf[2]);
30933 +- alrm->time.tm_mday = bcd2bin(buf[3]);
30934 ++ alrm->time.tm_sec = bcd2bin(buf[0] & 0x7f);
30935 ++ alrm->time.tm_min = bcd2bin(buf[1] & 0x7f);
30936 ++ alrm->time.tm_hour = bcd2bin(buf[2] & 0x3f);
30937 ++ alrm->time.tm_mday = bcd2bin(buf[3] & 0x3f);
30938 +
30939 + ret = regmap_read(pcf85063->regmap, PCF85063_REG_CTRL2, &val);
30940 + if (ret)
30941 +@@ -424,7 +424,7 @@ static int pcf85063_clkout_control(struct clk_hw *hw, bool enable)
30942 + unsigned int buf;
30943 + int ret;
30944 +
30945 +- ret = regmap_read(pcf85063->regmap, PCF85063_REG_OFFSET, &buf);
30946 ++ ret = regmap_read(pcf85063->regmap, PCF85063_REG_CTRL2, &buf);
30947 + if (ret < 0)
30948 + return ret;
30949 + buf &= PCF85063_REG_CLKO_F_MASK;
30950 +diff --git a/drivers/rtc/rtc-pic32.c b/drivers/rtc/rtc-pic32.c
30951 +index 7fb9145c43bd5..fa351ac201587 100644
30952 +--- a/drivers/rtc/rtc-pic32.c
30953 ++++ b/drivers/rtc/rtc-pic32.c
30954 +@@ -324,16 +324,16 @@ static int pic32_rtc_probe(struct platform_device *pdev)
30955 +
30956 + spin_lock_init(&pdata->alarm_lock);
30957 +
30958 ++ pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
30959 ++ if (IS_ERR(pdata->rtc))
30960 ++ return PTR_ERR(pdata->rtc);
30961 ++
30962 + clk_prepare_enable(pdata->clk);
30963 +
30964 + pic32_rtc_enable(pdata, 1);
30965 +
30966 + device_init_wakeup(&pdev->dev, 1);
30967 +
30968 +- pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
30969 +- if (IS_ERR(pdata->rtc))
30970 +- return PTR_ERR(pdata->rtc);
30971 +-
30972 + pdata->rtc->ops = &pic32_rtcops;
30973 + pdata->rtc->range_min = RTC_TIMESTAMP_BEGIN_2000;
30974 + pdata->rtc->range_max = RTC_TIMESTAMP_END_2099;
30975 +diff --git a/drivers/rtc/rtc-rzn1.c b/drivers/rtc/rtc-rzn1.c
30976 +index ac788799c8e3e..0d36bc50197c1 100644
30977 +--- a/drivers/rtc/rtc-rzn1.c
30978 ++++ b/drivers/rtc/rtc-rzn1.c
30979 +@@ -355,7 +355,9 @@ static int rzn1_rtc_probe(struct platform_device *pdev)
30980 + set_bit(RTC_FEATURE_ALARM_RES_MINUTE, rtc->rtcdev->features);
30981 + clear_bit(RTC_FEATURE_UPDATE_INTERRUPT, rtc->rtcdev->features);
30982 +
30983 +- devm_pm_runtime_enable(&pdev->dev);
30984 ++ ret = devm_pm_runtime_enable(&pdev->dev);
30985 ++ if (ret < 0)
30986 ++ return ret;
30987 + ret = pm_runtime_resume_and_get(&pdev->dev);
30988 + if (ret < 0)
30989 + return ret;
30990 +diff --git a/drivers/rtc/rtc-snvs.c b/drivers/rtc/rtc-snvs.c
30991 +index bd929b0e7d7de..d82acf1af1fae 100644
30992 +--- a/drivers/rtc/rtc-snvs.c
30993 ++++ b/drivers/rtc/rtc-snvs.c
30994 +@@ -32,6 +32,14 @@
30995 + #define SNVS_LPPGDR_INIT 0x41736166
30996 + #define CNTR_TO_SECS_SH 15
30997 +
30998 ++/* The maximum RTC clock cycles that are allowed to pass between two
30999 ++ * consecutive clock counter register reads. If the values are corrupted a
31000 ++ * bigger difference is expected. The RTC frequency is 32kHz. With 320 cycles
31001 ++ * we end at 10ms which should be enough for most cases. If it once takes
31002 ++ * longer than expected we do a retry.
31003 ++ */
31004 ++#define MAX_RTC_READ_DIFF_CYCLES 320
31005 ++
31006 + struct snvs_rtc_data {
31007 + struct rtc_device *rtc;
31008 + struct regmap *regmap;
31009 +@@ -56,6 +64,7 @@ static u64 rtc_read_lpsrt(struct snvs_rtc_data *data)
31010 + static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
31011 + {
31012 + u64 read1, read2;
31013 ++ s64 diff;
31014 + unsigned int timeout = 100;
31015 +
31016 + /* As expected, the registers might update between the read of the LSB
31017 +@@ -66,7 +75,8 @@ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
31018 + do {
31019 + read2 = read1;
31020 + read1 = rtc_read_lpsrt(data);
31021 +- } while (read1 != read2 && --timeout);
31022 ++ diff = read1 - read2;
31023 ++ } while (((diff < 0) || (diff > MAX_RTC_READ_DIFF_CYCLES)) && --timeout);
31024 + if (!timeout)
31025 + dev_err(&data->rtc->dev, "Timeout trying to get valid LPSRT Counter read\n");
31026 +
31027 +@@ -78,13 +88,15 @@ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
31028 + static int rtc_read_lp_counter_lsb(struct snvs_rtc_data *data, u32 *lsb)
31029 + {
31030 + u32 count1, count2;
31031 ++ s32 diff;
31032 + unsigned int timeout = 100;
31033 +
31034 + regmap_read(data->regmap, data->offset + SNVS_LPSRTCLR, &count1);
31035 + do {
31036 + count2 = count1;
31037 + regmap_read(data->regmap, data->offset + SNVS_LPSRTCLR, &count1);
31038 +- } while (count1 != count2 && --timeout);
31039 ++ diff = count1 - count2;
31040 ++ } while (((diff < 0) || (diff > MAX_RTC_READ_DIFF_CYCLES)) && --timeout);
31041 + if (!timeout) {
31042 + dev_err(&data->rtc->dev, "Timeout trying to get valid LPSRT Counter read\n");
31043 + return -ETIMEDOUT;
31044 +diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
31045 +index bdb20f63254e2..0f8e4231098ef 100644
31046 +--- a/drivers/rtc/rtc-st-lpc.c
31047 ++++ b/drivers/rtc/rtc-st-lpc.c
31048 +@@ -238,6 +238,7 @@ static int st_rtc_probe(struct platform_device *pdev)
31049 +
31050 + rtc->clkrate = clk_get_rate(rtc->clk);
31051 + if (!rtc->clkrate) {
31052 ++ clk_disable_unprepare(rtc->clk);
31053 + dev_err(&pdev->dev, "Unable to fetch clock rate\n");
31054 + return -EINVAL;
31055 + }
31056 +diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c
31057 +index 37b551bd43bff..bdfab9ea00464 100644
31058 +--- a/drivers/s390/net/ctcm_main.c
31059 ++++ b/drivers/s390/net/ctcm_main.c
31060 +@@ -825,16 +825,9 @@ done:
31061 + /*
31062 + * Start transmission of a packet.
31063 + * Called from generic network device layer.
31064 +- *
31065 +- * skb Pointer to buffer containing the packet.
31066 +- * dev Pointer to interface struct.
31067 +- *
31068 +- * returns 0 if packet consumed, !0 if packet rejected.
31069 +- * Note: If we return !0, then the packet is free'd by
31070 +- * the generic network layer.
31071 + */
31072 + /* first merge version - leaving both functions separated */
31073 +-static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
31074 ++static netdev_tx_t ctcm_tx(struct sk_buff *skb, struct net_device *dev)
31075 + {
31076 + struct ctcm_priv *priv = dev->ml_priv;
31077 +
31078 +@@ -877,7 +870,7 @@ static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
31079 + }
31080 +
31081 + /* unmerged MPC variant of ctcm_tx */
31082 +-static int ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
31083 ++static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
31084 + {
31085 + int len = 0;
31086 + struct ctcm_priv *priv = dev->ml_priv;
31087 +diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
31088 +index 84c8981317b46..38f312664ce72 100644
31089 +--- a/drivers/s390/net/lcs.c
31090 ++++ b/drivers/s390/net/lcs.c
31091 +@@ -1519,9 +1519,8 @@ lcs_txbuffer_cb(struct lcs_channel *channel, struct lcs_buffer *buffer)
31092 + /*
31093 + * Packet transmit function called by network stack
31094 + */
31095 +-static int
31096 +-__lcs_start_xmit(struct lcs_card *card, struct sk_buff *skb,
31097 +- struct net_device *dev)
31098 ++static netdev_tx_t __lcs_start_xmit(struct lcs_card *card, struct sk_buff *skb,
31099 ++ struct net_device *dev)
31100 + {
31101 + struct lcs_header *header;
31102 + int rc = NETDEV_TX_OK;
31103 +@@ -1582,8 +1581,7 @@ out:
31104 + return rc;
31105 + }
31106 +
31107 +-static int
31108 +-lcs_start_xmit(struct sk_buff *skb, struct net_device *dev)
31109 ++static netdev_tx_t lcs_start_xmit(struct sk_buff *skb, struct net_device *dev)
31110 + {
31111 + struct lcs_card *card;
31112 + int rc;
31113 +diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
31114 +index 65aa0a96c21de..66076cada8ae4 100644
31115 +--- a/drivers/s390/net/netiucv.c
31116 ++++ b/drivers/s390/net/netiucv.c
31117 +@@ -1248,15 +1248,8 @@ static int netiucv_close(struct net_device *dev)
31118 + /*
31119 + * Start transmission of a packet.
31120 + * Called from generic network device layer.
31121 +- *
31122 +- * @param skb Pointer to buffer containing the packet.
31123 +- * @param dev Pointer to interface struct.
31124 +- *
31125 +- * @return 0 if packet consumed, !0 if packet rejected.
31126 +- * Note: If we return !0, then the packet is free'd by
31127 +- * the generic network layer.
31128 + */
31129 +-static int netiucv_tx(struct sk_buff *skb, struct net_device *dev)
31130 ++static netdev_tx_t netiucv_tx(struct sk_buff *skb, struct net_device *dev)
31131 + {
31132 + struct netiucv_priv *privptr = netdev_priv(dev);
31133 + int rc;
31134 +diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
31135 +index b08fc8839808d..49fd2cfed70c7 100644
31136 +--- a/drivers/scsi/elx/efct/efct_driver.c
31137 ++++ b/drivers/scsi/elx/efct/efct_driver.c
31138 +@@ -42,6 +42,7 @@ efct_device_init(void)
31139 +
31140 + rc = efct_scsi_reg_fc_transport();
31141 + if (rc) {
31142 ++ efct_scsi_tgt_driver_exit();
31143 + pr_err("failed to register to FC host\n");
31144 + return rc;
31145 + }
31146 +diff --git a/drivers/scsi/elx/libefc/efclib.h b/drivers/scsi/elx/libefc/efclib.h
31147 +index dde20891c2dd7..57e3386128127 100644
31148 +--- a/drivers/scsi/elx/libefc/efclib.h
31149 ++++ b/drivers/scsi/elx/libefc/efclib.h
31150 +@@ -58,10 +58,12 @@ enum efc_node_send_ls_acc {
31151 + #define EFC_LINK_STATUS_UP 0
31152 + #define EFC_LINK_STATUS_DOWN 1
31153 +
31154 ++enum efc_sm_event;
31155 ++
31156 + /* State machine context header */
31157 + struct efc_sm_ctx {
31158 + void (*current_state)(struct efc_sm_ctx *ctx,
31159 +- u32 evt, void *arg);
31160 ++ enum efc_sm_event evt, void *arg);
31161 +
31162 + const char *description;
31163 + void *app;
31164 +@@ -365,7 +367,7 @@ struct efc_node {
31165 + int prev_evt;
31166 +
31167 + void (*nodedb_state)(struct efc_sm_ctx *ctx,
31168 +- u32 evt, void *arg);
31169 ++ enum efc_sm_event evt, void *arg);
31170 + struct timer_list gidpt_delay_timer;
31171 + u64 time_last_gidpt_msec;
31172 +
31173 +diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
31174 +index 6ec296321ffc1..38774a272e627 100644
31175 +--- a/drivers/scsi/fcoe/fcoe.c
31176 ++++ b/drivers/scsi/fcoe/fcoe.c
31177 +@@ -2491,6 +2491,7 @@ static int __init fcoe_init(void)
31178 +
31179 + out_free:
31180 + mutex_unlock(&fcoe_config_mutex);
31181 ++ fcoe_transport_detach(&fcoe_sw_transport);
31182 + out_destroy:
31183 + destroy_workqueue(fcoe_wq);
31184 + return rc;
31185 +diff --git a/drivers/scsi/fcoe/fcoe_sysfs.c b/drivers/scsi/fcoe/fcoe_sysfs.c
31186 +index af658aa38fedf..6260aa5ea6af8 100644
31187 +--- a/drivers/scsi/fcoe/fcoe_sysfs.c
31188 ++++ b/drivers/scsi/fcoe/fcoe_sysfs.c
31189 +@@ -830,14 +830,15 @@ struct fcoe_ctlr_device *fcoe_ctlr_device_add(struct device *parent,
31190 +
31191 + dev_set_name(&ctlr->dev, "ctlr_%d", ctlr->id);
31192 + error = device_register(&ctlr->dev);
31193 +- if (error)
31194 +- goto out_del_q2;
31195 ++ if (error) {
31196 ++ destroy_workqueue(ctlr->devloss_work_q);
31197 ++ destroy_workqueue(ctlr->work_q);
31198 ++ put_device(&ctlr->dev);
31199 ++ return NULL;
31200 ++ }
31201 +
31202 + return ctlr;
31203 +
31204 +-out_del_q2:
31205 +- destroy_workqueue(ctlr->devloss_work_q);
31206 +- ctlr->devloss_work_q = NULL;
31207 + out_del_q:
31208 + destroy_workqueue(ctlr->work_q);
31209 + ctlr->work_q = NULL;
31210 +@@ -1036,16 +1037,16 @@ struct fcoe_fcf_device *fcoe_fcf_device_add(struct fcoe_ctlr_device *ctlr,
31211 + fcf->selected = new_fcf->selected;
31212 +
31213 + error = device_register(&fcf->dev);
31214 +- if (error)
31215 +- goto out_del;
31216 ++ if (error) {
31217 ++ put_device(&fcf->dev);
31218 ++ goto out;
31219 ++ }
31220 +
31221 + fcf->state = FCOE_FCF_STATE_CONNECTED;
31222 + list_add_tail(&fcf->peers, &ctlr->fcfs);
31223 +
31224 + return fcf;
31225 +
31226 +-out_del:
31227 +- kfree(fcf);
31228 + out:
31229 + return NULL;
31230 + }
31231 +diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
31232 +index f8e832b1bc46a..4dbf51e2623ad 100644
31233 +--- a/drivers/scsi/hpsa.c
31234 ++++ b/drivers/scsi/hpsa.c
31235 +@@ -8925,7 +8925,7 @@ clean1: /* wq/aer/h */
31236 + destroy_workqueue(h->monitor_ctlr_wq);
31237 + h->monitor_ctlr_wq = NULL;
31238 + }
31239 +- kfree(h);
31240 ++ hpda_free_ctlr_info(h);
31241 + return rc;
31242 + }
31243 +
31244 +@@ -9786,7 +9786,8 @@ static int hpsa_add_sas_host(struct ctlr_info *h)
31245 + return 0;
31246 +
31247 + free_sas_phy:
31248 +- hpsa_free_sas_phy(hpsa_sas_phy);
31249 ++ sas_phy_free(hpsa_sas_phy->phy);
31250 ++ kfree(hpsa_sas_phy);
31251 + free_sas_port:
31252 + hpsa_free_sas_port(hpsa_sas_port);
31253 + free_sas_node:
31254 +@@ -9822,10 +9823,12 @@ static int hpsa_add_sas_device(struct hpsa_sas_node *hpsa_sas_node,
31255 +
31256 + rc = hpsa_sas_port_add_rphy(hpsa_sas_port, rphy);
31257 + if (rc)
31258 +- goto free_sas_port;
31259 ++ goto free_sas_rphy;
31260 +
31261 + return 0;
31262 +
31263 ++free_sas_rphy:
31264 ++ sas_rphy_free(rphy);
31265 + free_sas_port:
31266 + hpsa_free_sas_port(hpsa_sas_port);
31267 + device->sas_port = NULL;
31268 +diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
31269 +index 9d01a3e3c26aa..2022ffb450417 100644
31270 +--- a/drivers/scsi/ipr.c
31271 ++++ b/drivers/scsi/ipr.c
31272 +@@ -10872,11 +10872,19 @@ static struct notifier_block ipr_notifier = {
31273 + **/
31274 + static int __init ipr_init(void)
31275 + {
31276 ++ int rc;
31277 ++
31278 + ipr_info("IBM Power RAID SCSI Device Driver version: %s %s\n",
31279 + IPR_DRIVER_VERSION, IPR_DRIVER_DATE);
31280 +
31281 + register_reboot_notifier(&ipr_notifier);
31282 +- return pci_register_driver(&ipr_driver);
31283 ++ rc = pci_register_driver(&ipr_driver);
31284 ++ if (rc) {
31285 ++ unregister_reboot_notifier(&ipr_notifier);
31286 ++ return rc;
31287 ++ }
31288 ++
31289 ++ return 0;
31290 + }
31291 +
31292 + /**
31293 +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
31294 +index 99d06dc7ddf6b..21c52154626f1 100644
31295 +--- a/drivers/scsi/lpfc/lpfc_sli.c
31296 ++++ b/drivers/scsi/lpfc/lpfc_sli.c
31297 +@@ -8150,10 +8150,10 @@ u32 lpfc_rx_monitor_report(struct lpfc_hba *phba,
31298 + "IO_cnt", "Info", "BWutil(ms)");
31299 + }
31300 +
31301 +- /* Needs to be _bh because record is called from timer interrupt
31302 ++ /* Needs to be _irq because record is called from timer interrupt
31303 + * context
31304 + */
31305 +- spin_lock_bh(ring_lock);
31306 ++ spin_lock_irq(ring_lock);
31307 + while (*head_idx != *tail_idx) {
31308 + entry = &ring[*head_idx];
31309 +
31310 +@@ -8197,7 +8197,7 @@ u32 lpfc_rx_monitor_report(struct lpfc_hba *phba,
31311 + if (cnt >= max_read_entries)
31312 + break;
31313 + }
31314 +- spin_unlock_bh(ring_lock);
31315 ++ spin_unlock_irq(ring_lock);
31316 +
31317 + return cnt;
31318 + }
31319 +diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
31320 +index 0681daee6c149..e5ecd6ada6cdd 100644
31321 +--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
31322 ++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
31323 +@@ -829,6 +829,8 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
31324 + if ((sas_rphy_add(rphy))) {
31325 + ioc_err(ioc, "failure at %s:%d/%s()!\n",
31326 + __FILE__, __LINE__, __func__);
31327 ++ sas_rphy_free(rphy);
31328 ++ rphy = NULL;
31329 + }
31330 +
31331 + if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
31332 +diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
31333 +index 802eec6407d9a..a26a373be9da3 100644
31334 +--- a/drivers/scsi/qla2xxx/qla_def.h
31335 ++++ b/drivers/scsi/qla2xxx/qla_def.h
31336 +@@ -5136,17 +5136,17 @@ struct secure_flash_update_block_pk {
31337 + (test_bit(ISP_ABORT_NEEDED, &ha->dpc_flags) || \
31338 + test_bit(LOOP_RESYNC_NEEDED, &ha->dpc_flags))
31339 +
31340 +-#define QLA_VHA_MARK_BUSY(__vha, __bail) do { \
31341 +- atomic_inc(&__vha->vref_count); \
31342 +- mb(); \
31343 +- if (__vha->flags.delete_progress) { \
31344 +- atomic_dec(&__vha->vref_count); \
31345 +- wake_up(&__vha->vref_waitq); \
31346 +- __bail = 1; \
31347 +- } else { \
31348 +- __bail = 0; \
31349 +- } \
31350 +-} while (0)
31351 ++static inline bool qla_vha_mark_busy(scsi_qla_host_t *vha)
31352 ++{
31353 ++ atomic_inc(&vha->vref_count);
31354 ++ mb();
31355 ++ if (vha->flags.delete_progress) {
31356 ++ atomic_dec(&vha->vref_count);
31357 ++ wake_up(&vha->vref_waitq);
31358 ++ return true;
31359 ++ }
31360 ++ return false;
31361 ++}
31362 +
31363 + #define QLA_VHA_MARK_NOT_BUSY(__vha) do { \
31364 + atomic_dec(&__vha->vref_count); \
31365 +diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
31366 +index e12db95de6883..432f47fc5e1f3 100644
31367 +--- a/drivers/scsi/qla2xxx/qla_init.c
31368 ++++ b/drivers/scsi/qla2xxx/qla_init.c
31369 +@@ -110,6 +110,7 @@ static void qla24xx_abort_iocb_timeout(void *data)
31370 + struct qla_qpair *qpair = sp->qpair;
31371 + u32 handle;
31372 + unsigned long flags;
31373 ++ int sp_found = 0, cmdsp_found = 0;
31374 +
31375 + if (sp->cmd_sp)
31376 + ql_dbg(ql_dbg_async, sp->vha, 0x507c,
31377 +@@ -124,18 +125,21 @@ static void qla24xx_abort_iocb_timeout(void *data)
31378 + spin_lock_irqsave(qpair->qp_lock_ptr, flags);
31379 + for (handle = 1; handle < qpair->req->num_outstanding_cmds; handle++) {
31380 + if (sp->cmd_sp && (qpair->req->outstanding_cmds[handle] ==
31381 +- sp->cmd_sp))
31382 ++ sp->cmd_sp)) {
31383 + qpair->req->outstanding_cmds[handle] = NULL;
31384 ++ cmdsp_found = 1;
31385 ++ }
31386 +
31387 + /* removing the abort */
31388 + if (qpair->req->outstanding_cmds[handle] == sp) {
31389 + qpair->req->outstanding_cmds[handle] = NULL;
31390 ++ sp_found = 1;
31391 + break;
31392 + }
31393 + }
31394 + spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
31395 +
31396 +- if (sp->cmd_sp) {
31397 ++ if (cmdsp_found && sp->cmd_sp) {
31398 + /*
31399 + * This done function should take care of
31400 + * original command ref: INIT
31401 +@@ -143,8 +147,10 @@ static void qla24xx_abort_iocb_timeout(void *data)
31402 + sp->cmd_sp->done(sp->cmd_sp, QLA_OS_TIMER_EXPIRED);
31403 + }
31404 +
31405 +- abt->u.abt.comp_status = cpu_to_le16(CS_TIMEOUT);
31406 +- sp->done(sp, QLA_OS_TIMER_EXPIRED);
31407 ++ if (sp_found) {
31408 ++ abt->u.abt.comp_status = cpu_to_le16(CS_TIMEOUT);
31409 ++ sp->done(sp, QLA_OS_TIMER_EXPIRED);
31410 ++ }
31411 + }
31412 +
31413 + static void qla24xx_abort_sp_done(srb_t *sp, int res)
31414 +@@ -168,7 +174,6 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
31415 + struct srb_iocb *abt_iocb;
31416 + srb_t *sp;
31417 + int rval = QLA_FUNCTION_FAILED;
31418 +- uint8_t bail;
31419 +
31420 + /* ref: INIT for ABTS command */
31421 + sp = qla2xxx_get_qpair_sp(cmd_sp->vha, cmd_sp->qpair, cmd_sp->fcport,
31422 +@@ -176,7 +181,7 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
31423 + if (!sp)
31424 + return QLA_MEMORY_ALLOC_FAILED;
31425 +
31426 +- QLA_VHA_MARK_BUSY(vha, bail);
31427 ++ qla_vha_mark_busy(vha);
31428 + abt_iocb = &sp->u.iocb_cmd;
31429 + sp->type = SRB_ABT_CMD;
31430 + sp->name = "abort";
31431 +@@ -2020,14 +2025,13 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
31432 + struct srb_iocb *tm_iocb;
31433 + srb_t *sp;
31434 + int rval = QLA_FUNCTION_FAILED;
31435 +- uint8_t bail;
31436 +
31437 + /* ref: INIT */
31438 + sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
31439 + if (!sp)
31440 + goto done;
31441 +
31442 +- QLA_VHA_MARK_BUSY(vha, bail);
31443 ++ qla_vha_mark_busy(vha);
31444 + sp->type = SRB_TM_CMD;
31445 + sp->name = "tmf";
31446 + qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha),
31447 +diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
31448 +index db17f7f410cdd..5185dc5daf80d 100644
31449 +--- a/drivers/scsi/qla2xxx/qla_inline.h
31450 ++++ b/drivers/scsi/qla2xxx/qla_inline.h
31451 +@@ -225,11 +225,9 @@ static inline srb_t *
31452 + qla2x00_get_sp(scsi_qla_host_t *vha, fc_port_t *fcport, gfp_t flag)
31453 + {
31454 + srb_t *sp = NULL;
31455 +- uint8_t bail;
31456 + struct qla_qpair *qpair;
31457 +
31458 +- QLA_VHA_MARK_BUSY(vha, bail);
31459 +- if (unlikely(bail))
31460 ++ if (unlikely(qla_vha_mark_busy(vha)))
31461 + return NULL;
31462 +
31463 + qpair = vha->hw->base_qpair;
31464 +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
31465 +index 2c85f3cce7264..96ba1398f20c1 100644
31466 +--- a/drivers/scsi/qla2xxx/qla_os.c
31467 ++++ b/drivers/scsi/qla2xxx/qla_os.c
31468 +@@ -5069,13 +5069,11 @@ struct qla_work_evt *
31469 + qla2x00_alloc_work(struct scsi_qla_host *vha, enum qla_work_type type)
31470 + {
31471 + struct qla_work_evt *e;
31472 +- uint8_t bail;
31473 +
31474 + if (test_bit(UNLOADING, &vha->dpc_flags))
31475 + return NULL;
31476 +
31477 +- QLA_VHA_MARK_BUSY(vha, bail);
31478 +- if (bail)
31479 ++ if (qla_vha_mark_busy(vha))
31480 + return NULL;
31481 +
31482 + e = kzalloc(sizeof(struct qla_work_evt), GFP_ATOMIC);
31483 +diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
31484 +index bebda917b1383..b77035ddc9440 100644
31485 +--- a/drivers/scsi/scsi_debug.c
31486 ++++ b/drivers/scsi/scsi_debug.c
31487 +@@ -3785,7 +3785,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
31488 + mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
31489 + return illegal_condition_result;
31490 + }
31491 +- lrdp = kzalloc(lbdof_blen, GFP_ATOMIC);
31492 ++ lrdp = kzalloc(lbdof_blen, GFP_ATOMIC | __GFP_NOWARN);
31493 + if (lrdp == NULL)
31494 + return SCSI_MLQUEUE_HOST_BUSY;
31495 + if (sdebug_verbose)
31496 +@@ -4436,7 +4436,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
31497 + if (ret)
31498 + return ret;
31499 +
31500 +- arr = kcalloc(lb_size, vnum, GFP_ATOMIC);
31501 ++ arr = kcalloc(lb_size, vnum, GFP_ATOMIC | __GFP_NOWARN);
31502 + if (!arr) {
31503 + mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC,
31504 + INSUFF_RES_ASCQ);
31505 +@@ -4504,7 +4504,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
31506 +
31507 + rep_max_zones = (alloc_len - 64) >> ilog2(RZONES_DESC_HD);
31508 +
31509 +- arr = kzalloc(alloc_len, GFP_ATOMIC);
31510 ++ arr = kzalloc(alloc_len, GFP_ATOMIC | __GFP_NOWARN);
31511 + if (!arr) {
31512 + mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC,
31513 + INSUFF_RES_ASCQ);
31514 +@@ -7340,7 +7340,10 @@ clean:
31515 + kfree(sdbg_devinfo->zstate);
31516 + kfree(sdbg_devinfo);
31517 + }
31518 +- kfree(sdbg_host);
31519 ++ if (sdbg_host->dev.release)
31520 ++ put_device(&sdbg_host->dev);
31521 ++ else
31522 ++ kfree(sdbg_host);
31523 + pr_warn("%s: failed, errno=%d\n", __func__, -error);
31524 + return error;
31525 + }
31526 +diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
31527 +index 6995c89792300..02520f9123066 100644
31528 +--- a/drivers/scsi/scsi_error.c
31529 ++++ b/drivers/scsi/scsi_error.c
31530 +@@ -343,19 +343,11 @@ enum blk_eh_timer_return scsi_timeout(struct request *req)
31531 +
31532 + if (rtn == BLK_EH_DONE) {
31533 + /*
31534 +- * Set the command to complete first in order to prevent a real
31535 +- * completion from releasing the command while error handling
31536 +- * is using it. If the command was already completed, then the
31537 +- * lower level driver beat the timeout handler, and it is safe
31538 +- * to return without escalating error recovery.
31539 +- *
31540 +- * If timeout handling lost the race to a real completion, the
31541 +- * block layer may ignore that due to a fake timeout injection,
31542 +- * so return RESET_TIMER to allow error handling another shot
31543 +- * at this command.
31544 ++ * If scsi_done() has already set SCMD_STATE_COMPLETE, do not
31545 ++ * modify *scmd.
31546 + */
31547 + if (test_and_set_bit(SCMD_STATE_COMPLETE, &scmd->state))
31548 +- return BLK_EH_RESET_TIMER;
31549 ++ return BLK_EH_DONE;
31550 + if (scsi_abort_command(scmd) != SUCCESS) {
31551 + set_host_byte(scmd, DID_TIME_OUT);
31552 + scsi_eh_scmd_add(scmd);
31553 +diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h
31554 +index e550b12e525a1..c8235f15728bb 100644
31555 +--- a/drivers/scsi/smartpqi/smartpqi.h
31556 ++++ b/drivers/scsi/smartpqi/smartpqi.h
31557 +@@ -1130,7 +1130,7 @@ struct pqi_scsi_dev {
31558 + u8 phy_id;
31559 + u8 ncq_prio_enable;
31560 + u8 ncq_prio_support;
31561 +- u8 multi_lun_device_lun_count;
31562 ++ u8 lun_count;
31563 + bool raid_bypass_configured; /* RAID bypass configured */
31564 + bool raid_bypass_enabled; /* RAID bypass enabled */
31565 + u32 next_bypass_group[RAID_MAP_MAX_DATA_DISKS_PER_ROW];
31566 +diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
31567 +index b971fbe3b3a17..9f0f69c1ed665 100644
31568 +--- a/drivers/scsi/smartpqi/smartpqi_init.c
31569 ++++ b/drivers/scsi/smartpqi/smartpqi_init.c
31570 +@@ -1610,9 +1610,7 @@ static int pqi_get_physical_device_info(struct pqi_ctrl_info *ctrl_info,
31571 + &id_phys->alternate_paths_phys_connector,
31572 + sizeof(device->phys_connector));
31573 + device->bay = id_phys->phys_bay_in_box;
31574 +- device->multi_lun_device_lun_count = id_phys->multi_lun_device_lun_count;
31575 +- if (!device->multi_lun_device_lun_count)
31576 +- device->multi_lun_device_lun_count = 1;
31577 ++ device->lun_count = id_phys->multi_lun_device_lun_count;
31578 + if ((id_phys->even_more_flags & PQI_DEVICE_PHY_MAP_SUPPORTED) &&
31579 + id_phys->phy_count)
31580 + device->phy_id =
31581 +@@ -1746,7 +1744,7 @@ out:
31582 + return offline;
31583 + }
31584 +
31585 +-static int pqi_get_device_info(struct pqi_ctrl_info *ctrl_info,
31586 ++static int pqi_get_device_info_phys_logical(struct pqi_ctrl_info *ctrl_info,
31587 + struct pqi_scsi_dev *device,
31588 + struct bmic_identify_physical_device *id_phys)
31589 + {
31590 +@@ -1763,6 +1761,20 @@ static int pqi_get_device_info(struct pqi_ctrl_info *ctrl_info,
31591 + return rc;
31592 + }
31593 +
31594 ++static int pqi_get_device_info(struct pqi_ctrl_info *ctrl_info,
31595 ++ struct pqi_scsi_dev *device,
31596 ++ struct bmic_identify_physical_device *id_phys)
31597 ++{
31598 ++ int rc;
31599 ++
31600 ++ rc = pqi_get_device_info_phys_logical(ctrl_info, device, id_phys);
31601 ++
31602 ++ if (rc == 0 && device->lun_count == 0)
31603 ++ device->lun_count = 1;
31604 ++
31605 ++ return rc;
31606 ++}
31607 ++
31608 + static void pqi_show_volume_status(struct pqi_ctrl_info *ctrl_info,
31609 + struct pqi_scsi_dev *device)
31610 + {
31611 +@@ -1897,7 +1909,7 @@ static inline void pqi_remove_device(struct pqi_ctrl_info *ctrl_info, struct pqi
31612 + int rc;
31613 + int lun;
31614 +
31615 +- for (lun = 0; lun < device->multi_lun_device_lun_count; lun++) {
31616 ++ for (lun = 0; lun < device->lun_count; lun++) {
31617 + rc = pqi_device_wait_for_pending_io(ctrl_info, device, lun,
31618 + PQI_REMOVE_DEVICE_PENDING_IO_TIMEOUT_MSECS);
31619 + if (rc)
31620 +@@ -2076,6 +2088,7 @@ static void pqi_scsi_update_device(struct pqi_ctrl_info *ctrl_info,
31621 + existing_device->sas_address = new_device->sas_address;
31622 + existing_device->queue_depth = new_device->queue_depth;
31623 + existing_device->device_offline = false;
31624 ++ existing_device->lun_count = new_device->lun_count;
31625 +
31626 + if (pqi_is_logical_device(existing_device)) {
31627 + existing_device->is_external_raid_device = new_device->is_external_raid_device;
31628 +@@ -2108,10 +2121,6 @@ static void pqi_scsi_update_device(struct pqi_ctrl_info *ctrl_info,
31629 + existing_device->phy_connected_dev_type = new_device->phy_connected_dev_type;
31630 + memcpy(existing_device->box, new_device->box, sizeof(existing_device->box));
31631 + memcpy(existing_device->phys_connector, new_device->phys_connector, sizeof(existing_device->phys_connector));
31632 +-
31633 +- existing_device->multi_lun_device_lun_count = new_device->multi_lun_device_lun_count;
31634 +- if (existing_device->multi_lun_device_lun_count == 0)
31635 +- existing_device->multi_lun_device_lun_count = 1;
31636 + }
31637 + }
31638 +
31639 +@@ -6484,6 +6493,12 @@ static void pqi_slave_destroy(struct scsi_device *sdev)
31640 + return;
31641 + }
31642 +
31643 ++ device->lun_count--;
31644 ++ if (device->lun_count > 0) {
31645 ++ mutex_unlock(&ctrl_info->scan_mutex);
31646 ++ return;
31647 ++ }
31648 ++
31649 + spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
31650 + list_del(&device->scsi_device_list_entry);
31651 + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
31652 +@@ -9302,6 +9317,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
31653 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31654 + 0x193d, 0x1109)
31655 + },
31656 ++ {
31657 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31658 ++ 0x193d, 0x110b)
31659 ++ },
31660 + {
31661 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31662 + 0x193d, 0x8460)
31663 +@@ -9402,6 +9421,22 @@ static const struct pci_device_id pqi_pci_id_table[] = {
31664 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31665 + 0x1bd4, 0x0072)
31666 + },
31667 ++ {
31668 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31669 ++ 0x1bd4, 0x0086)
31670 ++ },
31671 ++ {
31672 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31673 ++ 0x1bd4, 0x0087)
31674 ++ },
31675 ++ {
31676 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31677 ++ 0x1bd4, 0x0088)
31678 ++ },
31679 ++ {
31680 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31681 ++ 0x1bd4, 0x0089)
31682 ++ },
31683 + {
31684 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31685 + 0x19e5, 0xd227)
31686 +@@ -9650,6 +9685,10 @@ static const struct pci_device_id pqi_pci_id_table[] = {
31687 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31688 + PCI_VENDOR_ID_ADAPTEC2, 0x1474)
31689 + },
31690 ++ {
31691 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31692 ++ PCI_VENDOR_ID_ADAPTEC2, 0x1475)
31693 ++ },
31694 + {
31695 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31696 + PCI_VENDOR_ID_ADAPTEC2, 0x1480)
31697 +@@ -9706,6 +9745,14 @@ static const struct pci_device_id pqi_pci_id_table[] = {
31698 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31699 + PCI_VENDOR_ID_ADAPTEC2, 0x14c2)
31700 + },
31701 ++ {
31702 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31703 ++ PCI_VENDOR_ID_ADAPTEC2, 0x14c3)
31704 ++ },
31705 ++ {
31706 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31707 ++ PCI_VENDOR_ID_ADAPTEC2, 0x14c4)
31708 ++ },
31709 + {
31710 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31711 + PCI_VENDOR_ID_ADAPTEC2, 0x14d0)
31712 +@@ -9942,6 +9989,18 @@ static const struct pci_device_id pqi_pci_id_table[] = {
31713 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31714 + PCI_VENDOR_ID_LENOVO, 0x0623)
31715 + },
31716 ++ {
31717 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31718 ++ 0x1e93, 0x1000)
31719 ++ },
31720 ++ {
31721 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31722 ++ 0x1e93, 0x1001)
31723 ++ },
31724 ++ {
31725 ++ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31726 ++ 0x1e93, 0x1002)
31727 ++ },
31728 + {
31729 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
31730 + PCI_ANY_ID, PCI_ANY_ID)
31731 +diff --git a/drivers/scsi/snic/snic_disc.c b/drivers/scsi/snic/snic_disc.c
31732 +index 9b2b5f8c23b9a..8fbf3c1b1311d 100644
31733 +--- a/drivers/scsi/snic/snic_disc.c
31734 ++++ b/drivers/scsi/snic/snic_disc.c
31735 +@@ -304,6 +304,9 @@ snic_tgt_create(struct snic *snic, struct snic_tgt_id *tgtid)
31736 + ret);
31737 +
31738 + put_device(&snic->shost->shost_gendev);
31739 ++ spin_lock_irqsave(snic->shost->host_lock, flags);
31740 ++ list_del(&tgt->list);
31741 ++ spin_unlock_irqrestore(snic->shost->host_lock, flags);
31742 + kfree(tgt);
31743 + tgt = NULL;
31744 +
31745 +diff --git a/drivers/soc/apple/rtkit.c b/drivers/soc/apple/rtkit.c
31746 +index 031ec4aa06d55..8ec74d7539eb4 100644
31747 +--- a/drivers/soc/apple/rtkit.c
31748 ++++ b/drivers/soc/apple/rtkit.c
31749 +@@ -926,8 +926,10 @@ int apple_rtkit_wake(struct apple_rtkit *rtk)
31750 + }
31751 + EXPORT_SYMBOL_GPL(apple_rtkit_wake);
31752 +
31753 +-static void apple_rtkit_free(struct apple_rtkit *rtk)
31754 ++static void apple_rtkit_free(void *data)
31755 + {
31756 ++ struct apple_rtkit *rtk = data;
31757 ++
31758 + mbox_free_channel(rtk->mbox_chan);
31759 + destroy_workqueue(rtk->wq);
31760 +
31761 +@@ -950,8 +952,7 @@ struct apple_rtkit *devm_apple_rtkit_init(struct device *dev, void *cookie,
31762 + if (IS_ERR(rtk))
31763 + return rtk;
31764 +
31765 +- ret = devm_add_action_or_reset(dev, (void (*)(void *))apple_rtkit_free,
31766 +- rtk);
31767 ++ ret = devm_add_action_or_reset(dev, apple_rtkit_free, rtk);
31768 + if (ret)
31769 + return ERR_PTR(ret);
31770 +
31771 +diff --git a/drivers/soc/apple/sart.c b/drivers/soc/apple/sart.c
31772 +index 83804b16ad03d..afa1117368997 100644
31773 +--- a/drivers/soc/apple/sart.c
31774 ++++ b/drivers/soc/apple/sart.c
31775 +@@ -164,6 +164,11 @@ static int apple_sart_probe(struct platform_device *pdev)
31776 + return 0;
31777 + }
31778 +
31779 ++static void apple_sart_put_device(void *dev)
31780 ++{
31781 ++ put_device(dev);
31782 ++}
31783 ++
31784 + struct apple_sart *devm_apple_sart_get(struct device *dev)
31785 + {
31786 + struct device_node *sart_node;
31787 +@@ -187,7 +192,7 @@ struct apple_sart *devm_apple_sart_get(struct device *dev)
31788 + return ERR_PTR(-EPROBE_DEFER);
31789 + }
31790 +
31791 +- ret = devm_add_action_or_reset(dev, (void (*)(void *))put_device,
31792 ++ ret = devm_add_action_or_reset(dev, apple_sart_put_device,
31793 + &sart_pdev->dev);
31794 + if (ret)
31795 + return ERR_PTR(ret);
31796 +diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c
31797 +index 09e3c38b84664..474b272f9b02d 100644
31798 +--- a/drivers/soc/mediatek/mtk-pm-domains.c
31799 ++++ b/drivers/soc/mediatek/mtk-pm-domains.c
31800 +@@ -275,9 +275,9 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
31801 + clk_bulk_disable_unprepare(pd->num_subsys_clks, pd->subsys_clks);
31802 +
31803 + /* subsys power off */
31804 +- regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_RST_B_BIT);
31805 + regmap_set_bits(scpsys->base, pd->data->ctl_offs, PWR_ISO_BIT);
31806 + regmap_set_bits(scpsys->base, pd->data->ctl_offs, PWR_CLK_DIS_BIT);
31807 ++ regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_RST_B_BIT);
31808 + regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_ON_2ND_BIT);
31809 + regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_ON_BIT);
31810 +
31811 +diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
31812 +index b4046f393575e..cd44f17dad3d0 100644
31813 +--- a/drivers/soc/qcom/apr.c
31814 ++++ b/drivers/soc/qcom/apr.c
31815 +@@ -454,11 +454,19 @@ static int apr_add_device(struct device *dev, struct device_node *np,
31816 + adev->dev.driver = NULL;
31817 +
31818 + spin_lock(&apr->svcs_lock);
31819 +- idr_alloc(&apr->svcs_idr, svc, svc_id, svc_id + 1, GFP_ATOMIC);
31820 ++ ret = idr_alloc(&apr->svcs_idr, svc, svc_id, svc_id + 1, GFP_ATOMIC);
31821 + spin_unlock(&apr->svcs_lock);
31822 ++ if (ret < 0) {
31823 ++ dev_err(dev, "idr_alloc failed: %d\n", ret);
31824 ++ goto out;
31825 ++ }
31826 +
31827 +- of_property_read_string_index(np, "qcom,protection-domain",
31828 +- 1, &adev->service_path);
31829 ++ ret = of_property_read_string_index(np, "qcom,protection-domain",
31830 ++ 1, &adev->service_path);
31831 ++ if (ret < 0) {
31832 ++ dev_err(dev, "Failed to read second value of qcom,protection-domain\n");
31833 ++ goto out;
31834 ++ }
31835 +
31836 + dev_info(dev, "Adding APR/GPR dev: %s\n", dev_name(&adev->dev));
31837 +
31838 +@@ -468,6 +476,7 @@ static int apr_add_device(struct device *dev, struct device_node *np,
31839 + put_device(&adev->dev);
31840 + }
31841 +
31842 ++out:
31843 + return ret;
31844 + }
31845 +
31846 +diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
31847 +index 8b7e8118f3cec..82c3cfdcc5601 100644
31848 +--- a/drivers/soc/qcom/llcc-qcom.c
31849 ++++ b/drivers/soc/qcom/llcc-qcom.c
31850 +@@ -849,7 +849,7 @@ static int qcom_llcc_probe(struct platform_device *pdev)
31851 + if (ret)
31852 + goto err;
31853 +
31854 +- drv_data->ecc_irq = platform_get_irq(pdev, 0);
31855 ++ drv_data->ecc_irq = platform_get_irq_optional(pdev, 0);
31856 + if (drv_data->ecc_irq >= 0) {
31857 + llcc_edac = platform_device_register_data(&pdev->dev,
31858 + "qcom_llcc_edac", -1, drv_data,
31859 +diff --git a/drivers/soc/sifive/sifive_ccache.c b/drivers/soc/sifive/sifive_ccache.c
31860 +index 1c171150e878d..3684f5b40a80e 100644
31861 +--- a/drivers/soc/sifive/sifive_ccache.c
31862 ++++ b/drivers/soc/sifive/sifive_ccache.c
31863 +@@ -215,20 +215,27 @@ static int __init sifive_ccache_init(void)
31864 + if (!np)
31865 + return -ENODEV;
31866 +
31867 +- if (of_address_to_resource(np, 0, &res))
31868 +- return -ENODEV;
31869 ++ if (of_address_to_resource(np, 0, &res)) {
31870 ++ rc = -ENODEV;
31871 ++ goto err_node_put;
31872 ++ }
31873 +
31874 + ccache_base = ioremap(res.start, resource_size(&res));
31875 +- if (!ccache_base)
31876 +- return -ENOMEM;
31877 ++ if (!ccache_base) {
31878 ++ rc = -ENOMEM;
31879 ++ goto err_node_put;
31880 ++ }
31881 +
31882 +- if (of_property_read_u32(np, "cache-level", &level))
31883 +- return -ENOENT;
31884 ++ if (of_property_read_u32(np, "cache-level", &level)) {
31885 ++ rc = -ENOENT;
31886 ++ goto err_unmap;
31887 ++ }
31888 +
31889 + intr_num = of_property_count_u32_elems(np, "interrupts");
31890 + if (!intr_num) {
31891 + pr_err("No interrupts property\n");
31892 +- return -ENODEV;
31893 ++ rc = -ENODEV;
31894 ++ goto err_unmap;
31895 + }
31896 +
31897 + for (i = 0; i < intr_num; i++) {
31898 +@@ -237,9 +244,10 @@ static int __init sifive_ccache_init(void)
31899 + NULL);
31900 + if (rc) {
31901 + pr_err("Could not request IRQ %d\n", g_irq[i]);
31902 +- return rc;
31903 ++ goto err_free_irq;
31904 + }
31905 + }
31906 ++ of_node_put(np);
31907 +
31908 + ccache_config_read();
31909 +
31910 +@@ -250,6 +258,15 @@ static int __init sifive_ccache_init(void)
31911 + setup_sifive_debug();
31912 + #endif
31913 + return 0;
31914 ++
31915 ++err_free_irq:
31916 ++ while (--i >= 0)
31917 ++ free_irq(g_irq[i], NULL);
31918 ++err_unmap:
31919 ++ iounmap(ccache_base);
31920 ++err_node_put:
31921 ++ of_node_put(np);
31922 ++ return rc;
31923 + }
31924 +
31925 + device_initcall(sifive_ccache_init);
31926 +diff --git a/drivers/soc/tegra/cbb/tegra194-cbb.c b/drivers/soc/tegra/cbb/tegra194-cbb.c
31927 +index 1ae0bd9a1ac1b..2e952c6f7c9e3 100644
31928 +--- a/drivers/soc/tegra/cbb/tegra194-cbb.c
31929 ++++ b/drivers/soc/tegra/cbb/tegra194-cbb.c
31930 +@@ -102,8 +102,6 @@
31931 + #define CLUSTER_NOC_VQC GENMASK(17, 16)
31932 + #define CLUSTER_NOC_MSTR_ID GENMASK(21, 18)
31933 +
31934 +-#define USRBITS_MSTR_ID GENMASK(21, 18)
31935 +-
31936 + #define CBB_ERR_OPC GENMASK(4, 1)
31937 + #define CBB_ERR_ERRCODE GENMASK(10, 8)
31938 + #define CBB_ERR_LEN1 GENMASK(27, 16)
31939 +@@ -2038,15 +2036,17 @@ static irqreturn_t tegra194_cbb_err_isr(int irq, void *data)
31940 + smp_processor_id(), priv->noc->name, priv->res->start,
31941 + irq);
31942 +
31943 +- mstr_id = FIELD_GET(USRBITS_MSTR_ID, priv->errlog5) - 1;
31944 + is_fatal = print_errlog(NULL, priv, status);
31945 +
31946 + /*
31947 +- * If illegal request is from CCPLEX(0x1)
31948 +- * initiator then call BUG() to crash system.
31949 ++ * If illegal request is from CCPLEX(0x1) initiator
31950 ++ * and error is fatal then call BUG() to crash system.
31951 + */
31952 +- if ((mstr_id == 0x1) && priv->noc->erd_mask_inband_err)
31953 +- is_inband_err = 1;
31954 ++ if (priv->noc->erd_mask_inband_err) {
31955 ++ mstr_id = FIELD_GET(CBB_NOC_MSTR_ID, priv->errlog5);
31956 ++ if (mstr_id == 0x1)
31957 ++ is_inband_err = 1;
31958 ++ }
31959 + }
31960 + }
31961 +
31962 +diff --git a/drivers/soc/tegra/cbb/tegra234-cbb.c b/drivers/soc/tegra/cbb/tegra234-cbb.c
31963 +index 3528f9e15d5c0..f33d094e5ea60 100644
31964 +--- a/drivers/soc/tegra/cbb/tegra234-cbb.c
31965 ++++ b/drivers/soc/tegra/cbb/tegra234-cbb.c
31966 +@@ -72,6 +72,11 @@
31967 +
31968 + #define REQ_SOCKET_ID GENMASK(27, 24)
31969 +
31970 ++#define CCPLEX_MSTRID 0x1
31971 ++#define FIREWALL_APERTURE_SZ 0x10000
31972 ++/* Write firewall check enable */
31973 ++#define WEN 0x20000
31974 ++
31975 + enum tegra234_cbb_fabric_ids {
31976 + CBB_FAB_ID,
31977 + SCE_FAB_ID,
31978 +@@ -92,11 +97,15 @@ struct tegra234_slave_lookup {
31979 + struct tegra234_cbb_fabric {
31980 + const char *name;
31981 + phys_addr_t off_mask_erd;
31982 +- bool erd_mask_inband_err;
31983 ++ phys_addr_t firewall_base;
31984 ++ unsigned int firewall_ctl;
31985 ++ unsigned int firewall_wr_ctl;
31986 + const char * const *master_id;
31987 + unsigned int notifier_offset;
31988 + const struct tegra_cbb_error *errors;
31989 ++ const int max_errors;
31990 + const struct tegra234_slave_lookup *slave_map;
31991 ++ const int max_slaves;
31992 + };
31993 +
31994 + struct tegra234_cbb {
31995 +@@ -128,6 +137,44 @@ static inline struct tegra234_cbb *to_tegra234_cbb(struct tegra_cbb *cbb)
31996 + static LIST_HEAD(cbb_list);
31997 + static DEFINE_SPINLOCK(cbb_lock);
31998 +
31999 ++static bool
32000 ++tegra234_cbb_write_access_allowed(struct platform_device *pdev, struct tegra234_cbb *cbb)
32001 ++{
32002 ++ u32 val;
32003 ++
32004 ++ if (!cbb->fabric->firewall_base ||
32005 ++ !cbb->fabric->firewall_ctl ||
32006 ++ !cbb->fabric->firewall_wr_ctl) {
32007 ++ dev_info(&pdev->dev, "SoC data missing for firewall\n");
32008 ++ return false;
32009 ++ }
32010 ++
32011 ++ if ((cbb->fabric->firewall_ctl > FIREWALL_APERTURE_SZ) ||
32012 ++ (cbb->fabric->firewall_wr_ctl > FIREWALL_APERTURE_SZ)) {
32013 ++ dev_err(&pdev->dev, "wrong firewall offset value\n");
32014 ++ return false;
32015 ++ }
32016 ++
32017 ++ val = readl(cbb->regs + cbb->fabric->firewall_base + cbb->fabric->firewall_ctl);
32018 ++ /*
32019 ++ * If the firewall check feature for allowing or blocking the
32020 ++ * write accesses through the firewall of a fabric is disabled
32021 ++ * then CCPLEX can write to the registers of that fabric.
32022 ++ */
32023 ++ if (!(val & WEN))
32024 ++ return true;
32025 ++
32026 ++ /*
32027 ++ * If the firewall check is enabled then check whether CCPLEX
32028 ++ * has write access to the fabric's error notifier registers
32029 ++ */
32030 ++ val = readl(cbb->regs + cbb->fabric->firewall_base + cbb->fabric->firewall_wr_ctl);
32031 ++ if (val & (BIT(CCPLEX_MSTRID)))
32032 ++ return true;
32033 ++
32034 ++ return false;
32035 ++}
32036 ++
32037 + static void tegra234_cbb_fault_enable(struct tegra_cbb *cbb)
32038 + {
32039 + struct tegra234_cbb *priv = to_tegra234_cbb(cbb);
32040 +@@ -271,6 +318,12 @@ static void tegra234_cbb_print_error(struct seq_file *file, struct tegra234_cbb
32041 + tegra_cbb_print_err(file, "\t Multiple type of errors reported\n");
32042 +
32043 + while (status) {
32044 ++ if (type >= cbb->fabric->max_errors) {
32045 ++ tegra_cbb_print_err(file, "\t Wrong type index:%u, status:%u\n",
32046 ++ type, status);
32047 ++ return;
32048 ++ }
32049 ++
32050 + if (status & 0x1)
32051 + tegra_cbb_print_err(file, "\t Error Code\t\t: %s\n",
32052 + cbb->fabric->errors[type].code);
32053 +@@ -282,6 +335,12 @@ static void tegra234_cbb_print_error(struct seq_file *file, struct tegra234_cbb
32054 + type = 0;
32055 +
32056 + while (overflow) {
32057 ++ if (type >= cbb->fabric->max_errors) {
32058 ++ tegra_cbb_print_err(file, "\t Wrong type index:%u, overflow:%u\n",
32059 ++ type, overflow);
32060 ++ return;
32061 ++ }
32062 ++
32063 + if (overflow & 0x1)
32064 + tegra_cbb_print_err(file, "\t Overflow\t\t: Multiple %s\n",
32065 + cbb->fabric->errors[type].code);
32066 +@@ -334,8 +393,11 @@ static void print_errlog_err(struct seq_file *file, struct tegra234_cbb *cbb)
32067 + access_type = FIELD_GET(FAB_EM_EL_ACCESSTYPE, cbb->mn_attr0);
32068 +
32069 + tegra_cbb_print_err(file, "\n");
32070 +- tegra_cbb_print_err(file, "\t Error Code\t\t: %s\n",
32071 +- cbb->fabric->errors[cbb->type].code);
32072 ++ if (cbb->type < cbb->fabric->max_errors)
32073 ++ tegra_cbb_print_err(file, "\t Error Code\t\t: %s\n",
32074 ++ cbb->fabric->errors[cbb->type].code);
32075 ++ else
32076 ++ tegra_cbb_print_err(file, "\t Wrong type index:%u\n", cbb->type);
32077 +
32078 + tegra_cbb_print_err(file, "\t MASTER_ID\t\t: %s\n", cbb->fabric->master_id[mstr_id]);
32079 + tegra_cbb_print_err(file, "\t Address\t\t: %#llx\n", cbb->access);
32080 +@@ -374,6 +436,11 @@ static void print_errlog_err(struct seq_file *file, struct tegra234_cbb *cbb)
32081 + if ((fab_id == PSC_FAB_ID) || (fab_id == FSI_FAB_ID))
32082 + return;
32083 +
32084 ++ if (slave_id >= cbb->fabric->max_slaves) {
32085 ++ tegra_cbb_print_err(file, "\t Invalid slave_id:%d\n", slave_id);
32086 ++ return;
32087 ++ }
32088 ++
32089 + if (!strcmp(cbb->fabric->errors[cbb->type].code, "TIMEOUT_ERR")) {
32090 + tegra234_lookup_slave_timeout(file, cbb, slave_id, fab_id);
32091 + return;
32092 +@@ -517,7 +584,7 @@ static irqreturn_t tegra234_cbb_isr(int irq, void *data)
32093 + u32 status = tegra_cbb_get_status(cbb);
32094 +
32095 + if (status && (irq == priv->sec_irq)) {
32096 +- tegra_cbb_print_err(NULL, "CPU:%d, Error: %s@%llx, irq=%d\n",
32097 ++ tegra_cbb_print_err(NULL, "CPU:%d, Error: %s@0x%llx, irq=%d\n",
32098 + smp_processor_id(), priv->fabric->name,
32099 + priv->res->start, irq);
32100 +
32101 +@@ -525,14 +592,14 @@ static irqreturn_t tegra234_cbb_isr(int irq, void *data)
32102 + if (err)
32103 + goto unlock;
32104 +
32105 +- mstr_id = FIELD_GET(USRBITS_MSTR_ID, priv->mn_user_bits);
32106 +-
32107 + /*
32108 +- * If illegal request is from CCPLEX(id:0x1) master then call BUG() to
32109 +- * crash system.
32110 ++ * If illegal request is from CCPLEX(id:0x1) master then call WARN()
32111 + */
32112 +- if ((mstr_id == 0x1) && priv->fabric->off_mask_erd)
32113 +- is_inband_err = 1;
32114 ++ if (priv->fabric->off_mask_erd) {
32115 ++ mstr_id = FIELD_GET(USRBITS_MSTR_ID, priv->mn_user_bits);
32116 ++ if (mstr_id == CCPLEX_MSTRID)
32117 ++ is_inband_err = 1;
32118 ++ }
32119 + }
32120 + }
32121 +
32122 +@@ -640,8 +707,13 @@ static const struct tegra234_cbb_fabric tegra234_aon_fabric = {
32123 + .name = "aon-fabric",
32124 + .master_id = tegra234_master_id,
32125 + .slave_map = tegra234_aon_slave_map,
32126 ++ .max_slaves = ARRAY_SIZE(tegra234_aon_slave_map),
32127 + .errors = tegra234_cbb_errors,
32128 ++ .max_errors = ARRAY_SIZE(tegra234_cbb_errors),
32129 + .notifier_offset = 0x17000,
32130 ++ .firewall_base = 0x30000,
32131 ++ .firewall_ctl = 0x8d0,
32132 ++ .firewall_wr_ctl = 0x8c8,
32133 + };
32134 +
32135 + static const struct tegra234_slave_lookup tegra234_bpmp_slave_map[] = {
32136 +@@ -656,8 +728,13 @@ static const struct tegra234_cbb_fabric tegra234_bpmp_fabric = {
32137 + .name = "bpmp-fabric",
32138 + .master_id = tegra234_master_id,
32139 + .slave_map = tegra234_bpmp_slave_map,
32140 ++ .max_slaves = ARRAY_SIZE(tegra234_bpmp_slave_map),
32141 + .errors = tegra234_cbb_errors,
32142 ++ .max_errors = ARRAY_SIZE(tegra234_cbb_errors),
32143 + .notifier_offset = 0x19000,
32144 ++ .firewall_base = 0x30000,
32145 ++ .firewall_ctl = 0x8f0,
32146 ++ .firewall_wr_ctl = 0x8e8,
32147 + };
32148 +
32149 + static const struct tegra234_slave_lookup tegra234_cbb_slave_map[] = {
32150 +@@ -728,55 +805,62 @@ static const struct tegra234_cbb_fabric tegra234_cbb_fabric = {
32151 + .name = "cbb-fabric",
32152 + .master_id = tegra234_master_id,
32153 + .slave_map = tegra234_cbb_slave_map,
32154 ++ .max_slaves = ARRAY_SIZE(tegra234_cbb_slave_map),
32155 + .errors = tegra234_cbb_errors,
32156 ++ .max_errors = ARRAY_SIZE(tegra234_cbb_errors),
32157 + .notifier_offset = 0x60000,
32158 +- .off_mask_erd = 0x3a004
32159 ++ .off_mask_erd = 0x3a004,
32160 ++ .firewall_base = 0x10000,
32161 ++ .firewall_ctl = 0x23f0,
32162 ++ .firewall_wr_ctl = 0x23e8,
32163 + };
32164 +
32165 +-static const struct tegra234_slave_lookup tegra234_dce_slave_map[] = {
32166 ++static const struct tegra234_slave_lookup tegra234_common_slave_map[] = {
32167 + { "AXI2APB", 0x00000 },
32168 + { "AST0", 0x15000 },
32169 + { "AST1", 0x16000 },
32170 ++ { "CBB", 0x17000 },
32171 ++ { "RSVD", 0x00000 },
32172 + { "CPU", 0x18000 },
32173 + };
32174 +
32175 + static const struct tegra234_cbb_fabric tegra234_dce_fabric = {
32176 + .name = "dce-fabric",
32177 + .master_id = tegra234_master_id,
32178 +- .slave_map = tegra234_dce_slave_map,
32179 ++ .slave_map = tegra234_common_slave_map,
32180 ++ .max_slaves = ARRAY_SIZE(tegra234_common_slave_map),
32181 + .errors = tegra234_cbb_errors,
32182 ++ .max_errors = ARRAY_SIZE(tegra234_cbb_errors),
32183 + .notifier_offset = 0x19000,
32184 +-};
32185 +-
32186 +-static const struct tegra234_slave_lookup tegra234_rce_slave_map[] = {
32187 +- { "AXI2APB", 0x00000 },
32188 +- { "AST0", 0x15000 },
32189 +- { "AST1", 0x16000 },
32190 +- { "CPU", 0x18000 },
32191 ++ .firewall_base = 0x30000,
32192 ++ .firewall_ctl = 0x290,
32193 ++ .firewall_wr_ctl = 0x288,
32194 + };
32195 +
32196 + static const struct tegra234_cbb_fabric tegra234_rce_fabric = {
32197 + .name = "rce-fabric",
32198 + .master_id = tegra234_master_id,
32199 +- .slave_map = tegra234_rce_slave_map,
32200 ++ .slave_map = tegra234_common_slave_map,
32201 ++ .max_slaves = ARRAY_SIZE(tegra234_common_slave_map),
32202 + .errors = tegra234_cbb_errors,
32203 ++ .max_errors = ARRAY_SIZE(tegra234_cbb_errors),
32204 + .notifier_offset = 0x19000,
32205 +-};
32206 +-
32207 +-static const struct tegra234_slave_lookup tegra234_sce_slave_map[] = {
32208 +- { "AXI2APB", 0x00000 },
32209 +- { "AST0", 0x15000 },
32210 +- { "AST1", 0x16000 },
32211 +- { "CBB", 0x17000 },
32212 +- { "CPU", 0x18000 },
32213 ++ .firewall_base = 0x30000,
32214 ++ .firewall_ctl = 0x290,
32215 ++ .firewall_wr_ctl = 0x288,
32216 + };
32217 +
32218 + static const struct tegra234_cbb_fabric tegra234_sce_fabric = {
32219 + .name = "sce-fabric",
32220 + .master_id = tegra234_master_id,
32221 +- .slave_map = tegra234_sce_slave_map,
32222 ++ .slave_map = tegra234_common_slave_map,
32223 ++ .max_slaves = ARRAY_SIZE(tegra234_common_slave_map),
32224 + .errors = tegra234_cbb_errors,
32225 ++ .max_errors = ARRAY_SIZE(tegra234_cbb_errors),
32226 + .notifier_offset = 0x19000,
32227 ++ .firewall_base = 0x30000,
32228 ++ .firewall_ctl = 0x290,
32229 ++ .firewall_wr_ctl = 0x288,
32230 + };
32231 +
32232 + static const char * const tegra241_master_id[] = {
32233 +@@ -889,7 +973,7 @@ static const struct tegra_cbb_error tegra241_cbb_errors[] = {
32234 + };
32235 +
32236 + static const struct tegra234_slave_lookup tegra241_cbb_slave_map[] = {
32237 +- { "CCPLEX", 0x50000 },
32238 ++ { "RSVD", 0x00000 },
32239 + { "PCIE_C8", 0x51000 },
32240 + { "PCIE_C9", 0x52000 },
32241 + { "RSVD", 0x00000 },
32242 +@@ -942,20 +1026,30 @@ static const struct tegra234_slave_lookup tegra241_cbb_slave_map[] = {
32243 + { "PCIE_C3", 0x58000 },
32244 + { "PCIE_C0", 0x59000 },
32245 + { "PCIE_C1", 0x5a000 },
32246 ++ { "CCPLEX", 0x50000 },
32247 + { "AXI2APB_29", 0x85000 },
32248 + { "AXI2APB_30", 0x86000 },
32249 ++ { "CBB_CENTRAL", 0x00000 },
32250 ++ { "AXI2APB_31", 0x8E000 },
32251 ++ { "AXI2APB_32", 0x8F000 },
32252 + };
32253 +
32254 + static const struct tegra234_cbb_fabric tegra241_cbb_fabric = {
32255 + .name = "cbb-fabric",
32256 + .master_id = tegra241_master_id,
32257 + .slave_map = tegra241_cbb_slave_map,
32258 ++ .max_slaves = ARRAY_SIZE(tegra241_cbb_slave_map),
32259 + .errors = tegra241_cbb_errors,
32260 ++ .max_errors = ARRAY_SIZE(tegra241_cbb_errors),
32261 + .notifier_offset = 0x60000,
32262 + .off_mask_erd = 0x40004,
32263 ++ .firewall_base = 0x20000,
32264 ++ .firewall_ctl = 0x2370,
32265 ++ .firewall_wr_ctl = 0x2368,
32266 + };
32267 +
32268 + static const struct tegra234_slave_lookup tegra241_bpmp_slave_map[] = {
32269 ++ { "RSVD", 0x00000 },
32270 + { "RSVD", 0x00000 },
32271 + { "RSVD", 0x00000 },
32272 + { "CBB", 0x15000 },
32273 +@@ -969,8 +1063,13 @@ static const struct tegra234_cbb_fabric tegra241_bpmp_fabric = {
32274 + .name = "bpmp-fabric",
32275 + .master_id = tegra241_master_id,
32276 + .slave_map = tegra241_bpmp_slave_map,
32277 ++ .max_slaves = ARRAY_SIZE(tegra241_bpmp_slave_map),
32278 + .errors = tegra241_cbb_errors,
32279 ++ .max_errors = ARRAY_SIZE(tegra241_cbb_errors),
32280 + .notifier_offset = 0x19000,
32281 ++ .firewall_base = 0x30000,
32282 ++ .firewall_ctl = 0x8f0,
32283 ++ .firewall_wr_ctl = 0x8e8,
32284 + };
32285 +
32286 + static const struct of_device_id tegra234_cbb_dt_ids[] = {
32287 +@@ -1055,6 +1154,15 @@ static int tegra234_cbb_probe(struct platform_device *pdev)
32288 +
32289 + platform_set_drvdata(pdev, cbb);
32290 +
32291 ++ /*
32292 ++ * Don't enable error reporting for a Fabric if write to it's registers
32293 ++ * is blocked by CBB firewall.
32294 ++ */
32295 ++ if (!tegra234_cbb_write_access_allowed(pdev, cbb)) {
32296 ++ dev_info(&pdev->dev, "error reporting not enabled due to firewall\n");
32297 ++ return 0;
32298 ++ }
32299 ++
32300 + spin_lock_irqsave(&cbb_lock, flags);
32301 + list_add(&cbb->base.node, &cbb_list);
32302 + spin_unlock_irqrestore(&cbb_lock, flags);
32303 +diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
32304 +index 92af7d1b6f5bd..8fb76908be704 100644
32305 +--- a/drivers/soc/ti/knav_qmss_queue.c
32306 ++++ b/drivers/soc/ti/knav_qmss_queue.c
32307 +@@ -67,7 +67,7 @@ static DEFINE_MUTEX(knav_dev_lock);
32308 + * Newest followed by older ones. Search is done from start of the array
32309 + * until a firmware file is found.
32310 + */
32311 +-const char *knav_acc_firmwares[] = {"ks2_qmss_pdsp_acc48.bin"};
32312 ++static const char * const knav_acc_firmwares[] = {"ks2_qmss_pdsp_acc48.bin"};
32313 +
32314 + static bool device_ready;
32315 + bool knav_qmss_device_ready(void)
32316 +@@ -1785,6 +1785,7 @@ static int knav_queue_probe(struct platform_device *pdev)
32317 + pm_runtime_enable(&pdev->dev);
32318 + ret = pm_runtime_resume_and_get(&pdev->dev);
32319 + if (ret < 0) {
32320 ++ pm_runtime_disable(&pdev->dev);
32321 + dev_err(dev, "Failed to enable QMSS\n");
32322 + return ret;
32323 + }
32324 +diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c
32325 +index ad2bb72e640c8..6a389a6444f36 100644
32326 +--- a/drivers/soc/ti/smartreflex.c
32327 ++++ b/drivers/soc/ti/smartreflex.c
32328 +@@ -932,6 +932,7 @@ static int omap_sr_probe(struct platform_device *pdev)
32329 + err_debugfs:
32330 + debugfs_remove_recursive(sr_info->dbg_dir);
32331 + err_list_del:
32332 ++ pm_runtime_disable(&pdev->dev);
32333 + list_del(&sr_info->node);
32334 + clk_unprepare(sr_info->fck);
32335 +
32336 +diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
32337 +index 731624f157fc0..93152144fd2ec 100644
32338 +--- a/drivers/spi/spi-fsl-spi.c
32339 ++++ b/drivers/spi/spi-fsl-spi.c
32340 +@@ -333,13 +333,26 @@ static int fsl_spi_prepare_message(struct spi_controller *ctlr,
32341 + {
32342 + struct mpc8xxx_spi *mpc8xxx_spi = spi_controller_get_devdata(ctlr);
32343 + struct spi_transfer *t;
32344 ++ struct spi_transfer *first;
32345 ++
32346 ++ first = list_first_entry(&m->transfers, struct spi_transfer,
32347 ++ transfer_list);
32348 +
32349 + /*
32350 + * In CPU mode, optimize large byte transfers to use larger
32351 + * bits_per_word values to reduce number of interrupts taken.
32352 ++ *
32353 ++ * Some glitches can appear on the SPI clock when the mode changes.
32354 ++ * Check that there is no speed change during the transfer and set it up
32355 ++ * now to change the mode without having a chip-select asserted.
32356 + */
32357 +- if (!(mpc8xxx_spi->flags & SPI_CPM_MODE)) {
32358 +- list_for_each_entry(t, &m->transfers, transfer_list) {
32359 ++ list_for_each_entry(t, &m->transfers, transfer_list) {
32360 ++ if (t->speed_hz != first->speed_hz) {
32361 ++ dev_err(&m->spi->dev,
32362 ++ "speed_hz cannot change during message.\n");
32363 ++ return -EINVAL;
32364 ++ }
32365 ++ if (!(mpc8xxx_spi->flags & SPI_CPM_MODE)) {
32366 + if (t->len < 256 || t->bits_per_word != 8)
32367 + continue;
32368 + if ((t->len & 3) == 0)
32369 +@@ -348,7 +361,7 @@ static int fsl_spi_prepare_message(struct spi_controller *ctlr,
32370 + t->bits_per_word = 16;
32371 + }
32372 + }
32373 +- return 0;
32374 ++ return fsl_spi_setup_transfer(m->spi, first);
32375 + }
32376 +
32377 + static int fsl_spi_transfer_one(struct spi_controller *controller,
32378 +diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
32379 +index 4b12c4964a664..9c8c7948044ed 100644
32380 +--- a/drivers/spi/spi-gpio.c
32381 ++++ b/drivers/spi/spi-gpio.c
32382 +@@ -268,9 +268,19 @@ static int spi_gpio_set_direction(struct spi_device *spi, bool output)
32383 + if (output)
32384 + return gpiod_direction_output(spi_gpio->mosi, 1);
32385 +
32386 +- ret = gpiod_direction_input(spi_gpio->mosi);
32387 +- if (ret)
32388 +- return ret;
32389 ++ /*
32390 ++ * Only change MOSI to an input if using 3WIRE mode.
32391 ++ * Otherwise, MOSI could be left floating if there is
32392 ++ * no pull resistor connected to the I/O pin, or could
32393 ++ * be left logic high if there is a pull-up. Transmitting
32394 ++ * logic high when only clocking MISO data in can put some
32395 ++ * SPI devices in to a bad state.
32396 ++ */
32397 ++ if (spi->mode & SPI_3WIRE) {
32398 ++ ret = gpiod_direction_input(spi_gpio->mosi);
32399 ++ if (ret)
32400 ++ return ret;
32401 ++ }
32402 + /*
32403 + * Send a turnaround high impedance cycle when switching
32404 + * from output to input. Theoretically there should be
32405 +diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
32406 +index b2775d82d2d7b..6313e7d0cdf87 100644
32407 +--- a/drivers/spi/spidev.c
32408 ++++ b/drivers/spi/spidev.c
32409 +@@ -377,12 +377,23 @@ spidev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
32410 + switch (cmd) {
32411 + /* read requests */
32412 + case SPI_IOC_RD_MODE:
32413 +- retval = put_user(spi->mode & SPI_MODE_MASK,
32414 +- (__u8 __user *)arg);
32415 +- break;
32416 + case SPI_IOC_RD_MODE32:
32417 +- retval = put_user(spi->mode & SPI_MODE_MASK,
32418 +- (__u32 __user *)arg);
32419 ++ tmp = spi->mode;
32420 ++
32421 ++ {
32422 ++ struct spi_controller *ctlr = spi->controller;
32423 ++
32424 ++ if (ctlr->use_gpio_descriptors && ctlr->cs_gpiods &&
32425 ++ ctlr->cs_gpiods[spi->chip_select])
32426 ++ tmp &= ~SPI_CS_HIGH;
32427 ++ }
32428 ++
32429 ++ if (cmd == SPI_IOC_RD_MODE)
32430 ++ retval = put_user(tmp & SPI_MODE_MASK,
32431 ++ (__u8 __user *)arg);
32432 ++ else
32433 ++ retval = put_user(tmp & SPI_MODE_MASK,
32434 ++ (__u32 __user *)arg);
32435 + break;
32436 + case SPI_IOC_RD_LSB_FIRST:
32437 + retval = put_user((spi->mode & SPI_LSB_FIRST) ? 1 : 0,
32438 +diff --git a/drivers/staging/media/deprecated/stkwebcam/Kconfig b/drivers/staging/media/deprecated/stkwebcam/Kconfig
32439 +index 4450403dff41f..7234498e634ac 100644
32440 +--- a/drivers/staging/media/deprecated/stkwebcam/Kconfig
32441 ++++ b/drivers/staging/media/deprecated/stkwebcam/Kconfig
32442 +@@ -2,7 +2,7 @@
32443 + config VIDEO_STKWEBCAM
32444 + tristate "USB Syntek DC1125 Camera support (DEPRECATED)"
32445 + depends on VIDEO_DEV
32446 +- depends on USB
32447 ++ depends on MEDIA_USB_SUPPORT && MEDIA_CAMERA_SUPPORT
32448 + help
32449 + Say Y here if you want to use this type of camera.
32450 + Supported devices are typically found in some Asus laptops,
32451 +diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
32452 +index e5b550ccfa22d..c77401f184d74 100644
32453 +--- a/drivers/staging/media/imx/imx7-media-csi.c
32454 ++++ b/drivers/staging/media/imx/imx7-media-csi.c
32455 +@@ -521,9 +521,9 @@ static void imx7_csi_configure(struct imx7_csi *csi)
32456 + cr18 = imx7_csi_reg_read(csi, CSI_CSICR18);
32457 +
32458 + cr18 &= ~(BIT_CSI_HW_ENABLE | BIT_MIPI_DATA_FORMAT_MASK |
32459 +- BIT_DATA_FROM_MIPI | BIT_BASEADDR_CHG_ERR_EN |
32460 +- BIT_BASEADDR_SWITCH_EN | BIT_BASEADDR_SWITCH_SEL |
32461 +- BIT_DEINTERLACE_EN);
32462 ++ BIT_DATA_FROM_MIPI | BIT_MIPI_DOUBLE_CMPNT |
32463 ++ BIT_BASEADDR_CHG_ERR_EN | BIT_BASEADDR_SWITCH_SEL |
32464 ++ BIT_BASEADDR_SWITCH_EN | BIT_DEINTERLACE_EN);
32465 +
32466 + if (out_pix->field == V4L2_FIELD_INTERLACED) {
32467 + cr18 |= BIT_DEINTERLACE_EN;
32468 +diff --git a/drivers/staging/media/rkvdec/rkvdec-vp9.c b/drivers/staging/media/rkvdec/rkvdec-vp9.c
32469 +index d8c1c0db15c70..cfae99b40ccb4 100644
32470 +--- a/drivers/staging/media/rkvdec/rkvdec-vp9.c
32471 ++++ b/drivers/staging/media/rkvdec/rkvdec-vp9.c
32472 +@@ -84,6 +84,8 @@ struct rkvdec_vp9_probs {
32473 + struct rkvdec_vp9_inter_frame_probs inter;
32474 + struct rkvdec_vp9_intra_only_frame_probs intra_only;
32475 + };
32476 ++ /* 128 bit alignment */
32477 ++ u8 padding1[11];
32478 + };
32479 +
32480 + /* Data structure describing auxiliary buffer format. */
32481 +@@ -1006,6 +1008,7 @@ static int rkvdec_vp9_start(struct rkvdec_ctx *ctx)
32482 +
32483 + ctx->priv = vp9_ctx;
32484 +
32485 ++ BUILD_BUG_ON(sizeof(priv_tbl->probs) % 16); /* ensure probs size is 128-bit aligned */
32486 + priv_tbl = dma_alloc_coherent(rkvdec->dev, sizeof(*priv_tbl),
32487 + &vp9_ctx->priv_tbl.dma, GFP_KERNEL);
32488 + if (!priv_tbl) {
32489 +diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
32490 +index 4952fc17f3e6d..625f77a8c5bde 100644
32491 +--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
32492 ++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
32493 +@@ -242,6 +242,18 @@ static void cedrus_h265_skip_bits(struct cedrus_dev *dev, int num)
32494 + }
32495 + }
32496 +
32497 ++static u32 cedrus_h265_show_bits(struct cedrus_dev *dev, int num)
32498 ++{
32499 ++ cedrus_write(dev, VE_DEC_H265_TRIGGER,
32500 ++ VE_DEC_H265_TRIGGER_SHOW_BITS |
32501 ++ VE_DEC_H265_TRIGGER_TYPE_N_BITS(num));
32502 ++
32503 ++ cedrus_wait_for(dev, VE_DEC_H265_STATUS,
32504 ++ VE_DEC_H265_STATUS_VLD_BUSY);
32505 ++
32506 ++ return cedrus_read(dev, VE_DEC_H265_BITS_READ);
32507 ++}
32508 ++
32509 + static void cedrus_h265_write_scaling_list(struct cedrus_ctx *ctx,
32510 + struct cedrus_run *run)
32511 + {
32512 +@@ -406,7 +418,7 @@ static int cedrus_h265_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
32513 + u32 num_entry_point_offsets;
32514 + u32 output_pic_list_index;
32515 + u32 pic_order_cnt[2];
32516 +- u8 *padding;
32517 ++ u8 padding;
32518 + int count;
32519 + u32 reg;
32520 +
32521 +@@ -520,21 +532,22 @@ static int cedrus_h265_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
32522 + if (slice_params->data_byte_offset == 0)
32523 + return -EOPNOTSUPP;
32524 +
32525 +- padding = (u8 *)vb2_plane_vaddr(&run->src->vb2_buf, 0) +
32526 +- slice_params->data_byte_offset - 1;
32527 ++ cedrus_h265_skip_bits(dev, (slice_params->data_byte_offset - 1) * 8);
32528 ++
32529 ++ padding = cedrus_h265_show_bits(dev, 8);
32530 +
32531 + /* at least one bit must be set in that byte */
32532 +- if (*padding == 0)
32533 ++ if (padding == 0)
32534 + return -EINVAL;
32535 +
32536 + for (count = 0; count < 8; count++)
32537 +- if (*padding & (1 << count))
32538 ++ if (padding & (1 << count))
32539 + break;
32540 +
32541 + /* Include the one bit. */
32542 + count++;
32543 +
32544 +- cedrus_h265_skip_bits(dev, slice_params->data_byte_offset * 8 - count);
32545 ++ cedrus_h265_skip_bits(dev, 8 - count);
32546 +
32547 + /* Bitstream parameters. */
32548 +
32549 +diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
32550 +index d81f7513ade0d..655c05b389cf5 100644
32551 +--- a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
32552 ++++ b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
32553 +@@ -505,6 +505,8 @@
32554 + #define VE_DEC_H265_LOW_ADDR_ENTRY_POINTS_BUF(a) \
32555 + SHIFT_AND_MASK_BITS(a, 7, 0)
32556 +
32557 ++#define VE_DEC_H265_BITS_READ (VE_ENGINE_DEC_H265 + 0xdc)
32558 ++
32559 + #define VE_DEC_H265_SRAM_OFFSET (VE_ENGINE_DEC_H265 + 0xe0)
32560 +
32561 + #define VE_DEC_H265_SRAM_OFFSET_PRED_WEIGHT_LUMA_L0 0x00
32562 +diff --git a/drivers/staging/r8188eu/core/rtw_pwrctrl.c b/drivers/staging/r8188eu/core/rtw_pwrctrl.c
32563 +index 870d81735b8dc..5290ac36f08c1 100644
32564 +--- a/drivers/staging/r8188eu/core/rtw_pwrctrl.c
32565 ++++ b/drivers/staging/r8188eu/core/rtw_pwrctrl.c
32566 +@@ -273,7 +273,7 @@ static s32 LPS_RF_ON_check(struct adapter *padapter, u32 delay_ms)
32567 + err = -1;
32568 + break;
32569 + }
32570 +- msleep(1);
32571 ++ mdelay(1);
32572 + }
32573 +
32574 + return err;
32575 +diff --git a/drivers/staging/rtl8192e/rtllib_rx.c b/drivers/staging/rtl8192e/rtllib_rx.c
32576 +index 46d75e925ee9b..f710eb2a95f3a 100644
32577 +--- a/drivers/staging/rtl8192e/rtllib_rx.c
32578 ++++ b/drivers/staging/rtl8192e/rtllib_rx.c
32579 +@@ -1489,9 +1489,9 @@ static int rtllib_rx_Monitor(struct rtllib_device *ieee, struct sk_buff *skb,
32580 + hdrlen += 4;
32581 + }
32582 +
32583 +- rtllib_monitor_rx(ieee, skb, rx_stats, hdrlen);
32584 + ieee->stats.rx_packets++;
32585 + ieee->stats.rx_bytes += skb->len;
32586 ++ rtllib_monitor_rx(ieee, skb, rx_stats, hdrlen);
32587 +
32588 + return 1;
32589 + }
32590 +diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
32591 +index b58e75932ecd5..3686b3c599ce7 100644
32592 +--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
32593 ++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
32594 +@@ -951,9 +951,11 @@ int ieee80211_rx(struct ieee80211_device *ieee, struct sk_buff *skb,
32595 + #endif
32596 +
32597 + if (ieee->iw_mode == IW_MODE_MONITOR) {
32598 ++ unsigned int len = skb->len;
32599 ++
32600 + ieee80211_monitor_rx(ieee, skb, rx_stats);
32601 + stats->rx_packets++;
32602 +- stats->rx_bytes += skb->len;
32603 ++ stats->rx_bytes += len;
32604 + return 1;
32605 + }
32606 +
32607 +diff --git a/drivers/staging/vme_user/vme_fake.c b/drivers/staging/vme_user/vme_fake.c
32608 +index dd646b0c531d4..1ee432c223e2b 100644
32609 +--- a/drivers/staging/vme_user/vme_fake.c
32610 ++++ b/drivers/staging/vme_user/vme_fake.c
32611 +@@ -1073,6 +1073,8 @@ static int __init fake_init(void)
32612 +
32613 + /* We need a fake parent device */
32614 + vme_root = __root_device_register("vme", THIS_MODULE);
32615 ++ if (IS_ERR(vme_root))
32616 ++ return PTR_ERR(vme_root);
32617 +
32618 + /* If we want to support more than one bridge at some point, we need to
32619 + * dynamically allocate this so we get one per device.
32620 +diff --git a/drivers/staging/vme_user/vme_tsi148.c b/drivers/staging/vme_user/vme_tsi148.c
32621 +index 020e0b3bce64b..0171f46d1848f 100644
32622 +--- a/drivers/staging/vme_user/vme_tsi148.c
32623 ++++ b/drivers/staging/vme_user/vme_tsi148.c
32624 +@@ -1751,6 +1751,7 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
32625 + return 0;
32626 +
32627 + err_dma:
32628 ++ list_del(&entry->list);
32629 + err_dest:
32630 + err_source:
32631 + err_align:
32632 +diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
32633 +index f2919319ad383..ff49c8f3fe241 100644
32634 +--- a/drivers/target/iscsi/iscsi_target_nego.c
32635 ++++ b/drivers/target/iscsi/iscsi_target_nego.c
32636 +@@ -1018,6 +1018,13 @@ static int iscsi_target_handle_csg_one(struct iscsit_conn *conn, struct iscsi_lo
32637 + return 0;
32638 + }
32639 +
32640 ++/*
32641 ++ * RETURN VALUE:
32642 ++ *
32643 ++ * 1 = Login successful
32644 ++ * -1 = Login failed
32645 ++ * 0 = More PDU exchanges required
32646 ++ */
32647 + static int iscsi_target_do_login(struct iscsit_conn *conn, struct iscsi_login *login)
32648 + {
32649 + int pdu_count = 0;
32650 +@@ -1363,12 +1370,13 @@ int iscsi_target_start_negotiation(
32651 + ret = -1;
32652 +
32653 + if (ret < 0) {
32654 +- cancel_delayed_work_sync(&conn->login_work);
32655 + iscsi_target_restore_sock_callbacks(conn);
32656 + iscsi_remove_failed_auth_entry(conn);
32657 + }
32658 +- if (ret != 0)
32659 ++ if (ret != 0) {
32660 ++ cancel_delayed_work_sync(&conn->login_work);
32661 + iscsi_target_nego_release(conn);
32662 ++ }
32663 +
32664 + return ret;
32665 + }
32666 +diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
32667 +index e2c2673025a7a..258d988b266d7 100644
32668 +--- a/drivers/thermal/imx8mm_thermal.c
32669 ++++ b/drivers/thermal/imx8mm_thermal.c
32670 +@@ -65,8 +65,14 @@ static int imx8mm_tmu_get_temp(void *data, int *temp)
32671 + u32 val;
32672 +
32673 + val = readl_relaxed(tmu->base + TRITSR) & TRITSR_TEMP0_VAL_MASK;
32674 ++
32675 ++ /*
32676 ++ * Do not validate against the V bit (bit 31) due to errata
32677 ++ * ERR051272: TMU: Bit 31 of registers TMU_TSCR/TMU_TRITSR/TMU_TRATSR invalid
32678 ++ */
32679 ++
32680 + *temp = val * 1000;
32681 +- if (*temp < VER1_TEMP_LOW_LIMIT)
32682 ++ if (*temp < VER1_TEMP_LOW_LIMIT || *temp > VER2_TEMP_HIGH_LIMIT)
32683 + return -EAGAIN;
32684 +
32685 + return 0;
32686 +diff --git a/drivers/thermal/k3_j72xx_bandgap.c b/drivers/thermal/k3_j72xx_bandgap.c
32687 +index 16b6bcf1bf4fa..c073b1023bbe7 100644
32688 +--- a/drivers/thermal/k3_j72xx_bandgap.c
32689 ++++ b/drivers/thermal/k3_j72xx_bandgap.c
32690 +@@ -439,7 +439,7 @@ static int k3_j72xx_bandgap_probe(struct platform_device *pdev)
32691 + workaround_needed = false;
32692 +
32693 + dev_dbg(bgp->dev, "Work around %sneeded\n",
32694 +- workaround_needed ? "not " : "");
32695 ++ workaround_needed ? "" : "not ");
32696 +
32697 + if (!workaround_needed)
32698 + init_table(5, ref_table, golden_factors);
32699 +diff --git a/drivers/thermal/qcom/lmh.c b/drivers/thermal/qcom/lmh.c
32700 +index d3d9b9fa49e81..4122a51e98741 100644
32701 +--- a/drivers/thermal/qcom/lmh.c
32702 ++++ b/drivers/thermal/qcom/lmh.c
32703 +@@ -45,7 +45,7 @@ static irqreturn_t lmh_handle_irq(int hw_irq, void *data)
32704 + if (irq)
32705 + generic_handle_irq(irq);
32706 +
32707 +- return 0;
32708 ++ return IRQ_HANDLED;
32709 + }
32710 +
32711 + static void lmh_enable_interrupt(struct irq_data *d)
32712 +diff --git a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
32713 +index be785ab37e53d..ad84978109e6f 100644
32714 +--- a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
32715 ++++ b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
32716 +@@ -252,7 +252,8 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
32717 + disable_s2_shutdown = true;
32718 + else
32719 + dev_warn(chip->dev,
32720 +- "No ADC is configured and critical temperature is above the maximum stage 2 threshold of 140 C! Configuring stage 2 shutdown at 140 C.\n");
32721 ++ "No ADC is configured and critical temperature %d mC is above the maximum stage 2 threshold of %ld mC! Configuring stage 2 shutdown at %ld mC.\n",
32722 ++ temp, stage2_threshold_max, stage2_threshold_max);
32723 + }
32724 +
32725 + skip:
32726 +diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
32727 +index 117eeaf7dd241..615fdda3a5de7 100644
32728 +--- a/drivers/thermal/thermal_core.c
32729 ++++ b/drivers/thermal/thermal_core.c
32730 +@@ -883,10 +883,6 @@ __thermal_cooling_device_register(struct device_node *np,
32731 + cdev->id = ret;
32732 + id = ret;
32733 +
32734 +- ret = dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
32735 +- if (ret)
32736 +- goto out_ida_remove;
32737 +-
32738 + cdev->type = kstrdup(type ? type : "", GFP_KERNEL);
32739 + if (!cdev->type) {
32740 + ret = -ENOMEM;
32741 +@@ -901,6 +897,11 @@ __thermal_cooling_device_register(struct device_node *np,
32742 + cdev->device.class = &thermal_class;
32743 + cdev->devdata = devdata;
32744 + thermal_cooling_device_setup_sysfs(cdev);
32745 ++ ret = dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
32746 ++ if (ret) {
32747 ++ thermal_cooling_device_destroy_sysfs(cdev);
32748 ++ goto out_kfree_type;
32749 ++ }
32750 + ret = device_register(&cdev->device);
32751 + if (ret)
32752 + goto out_kfree_type;
32753 +@@ -1234,10 +1235,6 @@ thermal_zone_device_register_with_trips(const char *type, struct thermal_trip *t
32754 + tz->id = id;
32755 + strscpy(tz->type, type, sizeof(tz->type));
32756 +
32757 +- result = dev_set_name(&tz->device, "thermal_zone%d", tz->id);
32758 +- if (result)
32759 +- goto remove_id;
32760 +-
32761 + if (!ops->critical)
32762 + ops->critical = thermal_zone_device_critical;
32763 +
32764 +@@ -1260,6 +1257,11 @@ thermal_zone_device_register_with_trips(const char *type, struct thermal_trip *t
32765 + /* A new thermal zone needs to be updated anyway. */
32766 + atomic_set(&tz->need_update, 1);
32767 +
32768 ++ result = dev_set_name(&tz->device, "thermal_zone%d", tz->id);
32769 ++ if (result) {
32770 ++ thermal_zone_destroy_device_groups(tz);
32771 ++ goto remove_id;
32772 ++ }
32773 + result = device_register(&tz->device);
32774 + if (result)
32775 + goto release_device;
32776 +diff --git a/drivers/thermal/thermal_helpers.c b/drivers/thermal/thermal_helpers.c
32777 +index c65cdce8f856e..fca0b23570f96 100644
32778 +--- a/drivers/thermal/thermal_helpers.c
32779 ++++ b/drivers/thermal/thermal_helpers.c
32780 +@@ -115,7 +115,12 @@ int thermal_zone_get_temp(struct thermal_zone_device *tz, int *temp)
32781 + int ret;
32782 +
32783 + mutex_lock(&tz->lock);
32784 +- ret = __thermal_zone_get_temp(tz, temp);
32785 ++
32786 ++ if (device_is_registered(&tz->device))
32787 ++ ret = __thermal_zone_get_temp(tz, temp);
32788 ++ else
32789 ++ ret = -ENODEV;
32790 ++
32791 + mutex_unlock(&tz->lock);
32792 +
32793 + return ret;
32794 +diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
32795 +index d4b6335ace15f..aacba30bc10c1 100644
32796 +--- a/drivers/thermal/thermal_of.c
32797 ++++ b/drivers/thermal/thermal_of.c
32798 +@@ -604,13 +604,15 @@ struct thermal_zone_device *thermal_of_zone_register(struct device_node *sensor,
32799 + if (IS_ERR(np)) {
32800 + if (PTR_ERR(np) != -ENODEV)
32801 + pr_err("Failed to find thermal zone for %pOFn id=%d\n", sensor, id);
32802 +- return ERR_CAST(np);
32803 ++ ret = PTR_ERR(np);
32804 ++ goto out_kfree_of_ops;
32805 + }
32806 +
32807 + trips = thermal_of_trips_init(np, &ntrips);
32808 + if (IS_ERR(trips)) {
32809 + pr_err("Failed to find trip points for %pOFn id=%d\n", sensor, id);
32810 +- return ERR_CAST(trips);
32811 ++ ret = PTR_ERR(trips);
32812 ++ goto out_kfree_of_ops;
32813 + }
32814 +
32815 + ret = thermal_of_monitor_init(np, &delay, &pdelay);
32816 +@@ -659,6 +661,8 @@ out_kfree_tzp:
32817 + kfree(tzp);
32818 + out_kfree_trips:
32819 + kfree(trips);
32820 ++out_kfree_of_ops:
32821 ++ kfree(of_ops);
32822 +
32823 + return ERR_PTR(ret);
32824 + }
32825 +diff --git a/drivers/tty/serial/8250/8250_bcm7271.c b/drivers/tty/serial/8250/8250_bcm7271.c
32826 +index fa8ccf204d860..89bfcefbea848 100644
32827 +--- a/drivers/tty/serial/8250/8250_bcm7271.c
32828 ++++ b/drivers/tty/serial/8250/8250_bcm7271.c
32829 +@@ -1212,9 +1212,17 @@ static struct platform_driver brcmuart_platform_driver = {
32830 +
32831 + static int __init brcmuart_init(void)
32832 + {
32833 ++ int ret;
32834 ++
32835 + brcmuart_debugfs_root = debugfs_create_dir(
32836 + brcmuart_platform_driver.driver.name, NULL);
32837 +- return platform_driver_register(&brcmuart_platform_driver);
32838 ++ ret = platform_driver_register(&brcmuart_platform_driver);
32839 ++ if (ret) {
32840 ++ debugfs_remove_recursive(brcmuart_debugfs_root);
32841 ++ return ret;
32842 ++ }
32843 ++
32844 ++ return 0;
32845 + }
32846 + module_init(brcmuart_init);
32847 +
32848 +diff --git a/drivers/tty/serial/altera_uart.c b/drivers/tty/serial/altera_uart.c
32849 +index 82f2790de28d1..1203d1e08cd6c 100644
32850 +--- a/drivers/tty/serial/altera_uart.c
32851 ++++ b/drivers/tty/serial/altera_uart.c
32852 +@@ -278,16 +278,17 @@ static irqreturn_t altera_uart_interrupt(int irq, void *data)
32853 + {
32854 + struct uart_port *port = data;
32855 + struct altera_uart *pp = container_of(port, struct altera_uart, port);
32856 ++ unsigned long flags;
32857 + unsigned int isr;
32858 +
32859 + isr = altera_uart_readl(port, ALTERA_UART_STATUS_REG) & pp->imr;
32860 +
32861 +- spin_lock(&port->lock);
32862 ++ spin_lock_irqsave(&port->lock, flags);
32863 + if (isr & ALTERA_UART_STATUS_RRDY_MSK)
32864 + altera_uart_rx_chars(port);
32865 + if (isr & ALTERA_UART_STATUS_TRDY_MSK)
32866 + altera_uart_tx_chars(port);
32867 +- spin_unlock(&port->lock);
32868 ++ spin_unlock_irqrestore(&port->lock, flags);
32869 +
32870 + return IRQ_RETVAL(isr);
32871 + }
32872 +diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
32873 +index 5cdced39eafdb..aa0bbb7abeacf 100644
32874 +--- a/drivers/tty/serial/amba-pl011.c
32875 ++++ b/drivers/tty/serial/amba-pl011.c
32876 +@@ -1045,6 +1045,9 @@ static void pl011_dma_rx_callback(void *data)
32877 + */
32878 + static inline void pl011_dma_rx_stop(struct uart_amba_port *uap)
32879 + {
32880 ++ if (!uap->using_rx_dma)
32881 ++ return;
32882 ++
32883 + /* FIXME. Just disable the DMA enable */
32884 + uap->dmacr &= ~UART011_RXDMAE;
32885 + pl011_write(uap->dmacr, uap, REG_DMACR);
32886 +@@ -1828,8 +1831,17 @@ static void pl011_enable_interrupts(struct uart_amba_port *uap)
32887 + static void pl011_unthrottle_rx(struct uart_port *port)
32888 + {
32889 + struct uart_amba_port *uap = container_of(port, struct uart_amba_port, port);
32890 ++ unsigned long flags;
32891 +
32892 +- pl011_enable_interrupts(uap);
32893 ++ spin_lock_irqsave(&uap->port.lock, flags);
32894 ++
32895 ++ uap->im = UART011_RTIM;
32896 ++ if (!pl011_dma_rx_running(uap))
32897 ++ uap->im |= UART011_RXIM;
32898 ++
32899 ++ pl011_write(uap->im, uap, REG_IMSC);
32900 ++
32901 ++ spin_unlock_irqrestore(&uap->port.lock, flags);
32902 + }
32903 +
32904 + static int pl011_startup(struct uart_port *port)
32905 +diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
32906 +index c59ce78865799..b17788cf309b1 100644
32907 +--- a/drivers/tty/serial/pch_uart.c
32908 ++++ b/drivers/tty/serial/pch_uart.c
32909 +@@ -694,6 +694,7 @@ static void pch_request_dma(struct uart_port *port)
32910 + if (!chan) {
32911 + dev_err(priv->port.dev, "%s:dma_request_channel FAILS(Tx)\n",
32912 + __func__);
32913 ++ pci_dev_put(dma_dev);
32914 + return;
32915 + }
32916 + priv->chan_tx = chan;
32917 +@@ -710,6 +711,7 @@ static void pch_request_dma(struct uart_port *port)
32918 + __func__);
32919 + dma_release_channel(priv->chan_tx);
32920 + priv->chan_tx = NULL;
32921 ++ pci_dev_put(dma_dev);
32922 + return;
32923 + }
32924 +
32925 +@@ -717,6 +719,8 @@ static void pch_request_dma(struct uart_port *port)
32926 + priv->rx_buf_virt = dma_alloc_coherent(port->dev, port->fifosize,
32927 + &priv->rx_buf_dma, GFP_KERNEL);
32928 + priv->chan_rx = chan;
32929 ++
32930 ++ pci_dev_put(dma_dev);
32931 + }
32932 +
32933 + static void pch_dma_rx_complete(void *arg)
32934 +diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
32935 +index b7170cb9a544f..cda9cd4fa92c8 100644
32936 +--- a/drivers/tty/serial/serial-tegra.c
32937 ++++ b/drivers/tty/serial/serial-tegra.c
32938 +@@ -619,8 +619,9 @@ static void tegra_uart_stop_tx(struct uart_port *u)
32939 + if (tup->tx_in_progress != TEGRA_UART_TX_DMA)
32940 + return;
32941 +
32942 +- dmaengine_terminate_all(tup->tx_dma_chan);
32943 ++ dmaengine_pause(tup->tx_dma_chan);
32944 + dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state);
32945 ++ dmaengine_terminate_all(tup->tx_dma_chan);
32946 + count = tup->tx_bytes_requested - state.residue;
32947 + async_tx_ack(tup->tx_dma_desc);
32948 + uart_xmit_advance(&tup->uport, count);
32949 +@@ -763,8 +764,9 @@ static void tegra_uart_terminate_rx_dma(struct tegra_uart_port *tup)
32950 + return;
32951 + }
32952 +
32953 +- dmaengine_terminate_all(tup->rx_dma_chan);
32954 ++ dmaengine_pause(tup->rx_dma_chan);
32955 + dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state);
32956 ++ dmaengine_terminate_all(tup->rx_dma_chan);
32957 +
32958 + tegra_uart_rx_buffer_push(tup, state.residue);
32959 + tup->rx_dma_active = false;
32960 +diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
32961 +index dfdbcf092facc..b8aed28b8f17b 100644
32962 +--- a/drivers/tty/serial/stm32-usart.c
32963 ++++ b/drivers/tty/serial/stm32-usart.c
32964 +@@ -1681,22 +1681,10 @@ static int stm32_usart_serial_probe(struct platform_device *pdev)
32965 + if (!stm32port->info)
32966 + return -EINVAL;
32967 +
32968 +- ret = stm32_usart_init_port(stm32port, pdev);
32969 +- if (ret)
32970 +- return ret;
32971 +-
32972 +- if (stm32port->wakeup_src) {
32973 +- device_set_wakeup_capable(&pdev->dev, true);
32974 +- ret = dev_pm_set_wake_irq(&pdev->dev, stm32port->port.irq);
32975 +- if (ret)
32976 +- goto err_deinit_port;
32977 +- }
32978 +-
32979 + stm32port->rx_ch = dma_request_chan(&pdev->dev, "rx");
32980 +- if (PTR_ERR(stm32port->rx_ch) == -EPROBE_DEFER) {
32981 +- ret = -EPROBE_DEFER;
32982 +- goto err_wakeirq;
32983 +- }
32984 ++ if (PTR_ERR(stm32port->rx_ch) == -EPROBE_DEFER)
32985 ++ return -EPROBE_DEFER;
32986 ++
32987 + /* Fall back in interrupt mode for any non-deferral error */
32988 + if (IS_ERR(stm32port->rx_ch))
32989 + stm32port->rx_ch = NULL;
32990 +@@ -1710,6 +1698,17 @@ static int stm32_usart_serial_probe(struct platform_device *pdev)
32991 + if (IS_ERR(stm32port->tx_ch))
32992 + stm32port->tx_ch = NULL;
32993 +
32994 ++ ret = stm32_usart_init_port(stm32port, pdev);
32995 ++ if (ret)
32996 ++ goto err_dma_tx;
32997 ++
32998 ++ if (stm32port->wakeup_src) {
32999 ++ device_set_wakeup_capable(&pdev->dev, true);
33000 ++ ret = dev_pm_set_wake_irq(&pdev->dev, stm32port->port.irq);
33001 ++ if (ret)
33002 ++ goto err_deinit_port;
33003 ++ }
33004 ++
33005 + if (stm32port->rx_ch && stm32_usart_of_dma_rx_probe(stm32port, pdev)) {
33006 + /* Fall back in interrupt mode */
33007 + dma_release_channel(stm32port->rx_ch);
33008 +@@ -1746,19 +1745,11 @@ err_port:
33009 + pm_runtime_set_suspended(&pdev->dev);
33010 + pm_runtime_put_noidle(&pdev->dev);
33011 +
33012 +- if (stm32port->tx_ch) {
33013 ++ if (stm32port->tx_ch)
33014 + stm32_usart_of_dma_tx_remove(stm32port, pdev);
33015 +- dma_release_channel(stm32port->tx_ch);
33016 +- }
33017 +-
33018 + if (stm32port->rx_ch)
33019 + stm32_usart_of_dma_rx_remove(stm32port, pdev);
33020 +
33021 +-err_dma_rx:
33022 +- if (stm32port->rx_ch)
33023 +- dma_release_channel(stm32port->rx_ch);
33024 +-
33025 +-err_wakeirq:
33026 + if (stm32port->wakeup_src)
33027 + dev_pm_clear_wake_irq(&pdev->dev);
33028 +
33029 +@@ -1768,6 +1759,14 @@ err_deinit_port:
33030 +
33031 + stm32_usart_deinit_port(stm32port);
33032 +
33033 ++err_dma_tx:
33034 ++ if (stm32port->tx_ch)
33035 ++ dma_release_channel(stm32port->tx_ch);
33036 ++
33037 ++err_dma_rx:
33038 ++ if (stm32port->rx_ch)
33039 ++ dma_release_channel(stm32port->rx_ch);
33040 ++
33041 + return ret;
33042 + }
33043 +
33044 +diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c
33045 +index 99608b2a2b74f..7ace3aa498402 100644
33046 +--- a/drivers/tty/serial/sunsab.c
33047 ++++ b/drivers/tty/serial/sunsab.c
33048 +@@ -1133,7 +1133,13 @@ static int __init sunsab_init(void)
33049 + }
33050 + }
33051 +
33052 +- return platform_driver_register(&sab_driver);
33053 ++ err = platform_driver_register(&sab_driver);
33054 ++ if (err) {
33055 ++ kfree(sunsab_ports);
33056 ++ sunsab_ports = NULL;
33057 ++ }
33058 ++
33059 ++ return err;
33060 + }
33061 +
33062 + static void __exit sunsab_exit(void)
33063 +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
33064 +index b1f59a5fe6327..d1db6be801560 100644
33065 +--- a/drivers/ufs/core/ufshcd.c
33066 ++++ b/drivers/ufs/core/ufshcd.c
33067 +@@ -5382,6 +5382,26 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
33068 + }
33069 + }
33070 +
33071 ++/* Any value that is not an existing queue number is fine for this constant. */
33072 ++enum {
33073 ++ UFSHCD_POLL_FROM_INTERRUPT_CONTEXT = -1
33074 ++};
33075 ++
33076 ++static void ufshcd_clear_polled(struct ufs_hba *hba,
33077 ++ unsigned long *completed_reqs)
33078 ++{
33079 ++ int tag;
33080 ++
33081 ++ for_each_set_bit(tag, completed_reqs, hba->nutrs) {
33082 ++ struct scsi_cmnd *cmd = hba->lrb[tag].cmd;
33083 ++
33084 ++ if (!cmd)
33085 ++ continue;
33086 ++ if (scsi_cmd_to_rq(cmd)->cmd_flags & REQ_POLLED)
33087 ++ __clear_bit(tag, completed_reqs);
33088 ++ }
33089 ++}
33090 ++
33091 + /*
33092 + * Returns > 0 if one or more commands have been completed or 0 if no
33093 + * requests have been completed.
33094 +@@ -5398,13 +5418,17 @@ static int ufshcd_poll(struct Scsi_Host *shost, unsigned int queue_num)
33095 + WARN_ONCE(completed_reqs & ~hba->outstanding_reqs,
33096 + "completed: %#lx; outstanding: %#lx\n", completed_reqs,
33097 + hba->outstanding_reqs);
33098 ++ if (queue_num == UFSHCD_POLL_FROM_INTERRUPT_CONTEXT) {
33099 ++ /* Do not complete polled requests from interrupt context. */
33100 ++ ufshcd_clear_polled(hba, &completed_reqs);
33101 ++ }
33102 + hba->outstanding_reqs &= ~completed_reqs;
33103 + spin_unlock_irqrestore(&hba->outstanding_lock, flags);
33104 +
33105 + if (completed_reqs)
33106 + __ufshcd_transfer_req_compl(hba, completed_reqs);
33107 +
33108 +- return completed_reqs;
33109 ++ return completed_reqs != 0;
33110 + }
33111 +
33112 + /**
33113 +@@ -5435,7 +5459,7 @@ static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
33114 + * Ignore the ufshcd_poll() return value and return IRQ_HANDLED since we
33115 + * do not want polling to trigger spurious interrupt complaints.
33116 + */
33117 +- ufshcd_poll(hba->host, 0);
33118 ++ ufshcd_poll(hba->host, UFSHCD_POLL_FROM_INTERRUPT_CONTEXT);
33119 +
33120 + return IRQ_HANDLED;
33121 + }
33122 +@@ -8747,8 +8771,6 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
33123 + struct scsi_device *sdp;
33124 + unsigned long flags;
33125 + int ret, retries;
33126 +- unsigned long deadline;
33127 +- int32_t remaining;
33128 +
33129 + spin_lock_irqsave(hba->host->host_lock, flags);
33130 + sdp = hba->ufs_device_wlun;
33131 +@@ -8781,14 +8803,9 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
33132 + * callbacks hence set the RQF_PM flag so that it doesn't resume the
33133 + * already suspended childs.
33134 + */
33135 +- deadline = jiffies + 10 * HZ;
33136 + for (retries = 3; retries > 0; --retries) {
33137 +- ret = -ETIMEDOUT;
33138 +- remaining = deadline - jiffies;
33139 +- if (remaining <= 0)
33140 +- break;
33141 + ret = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, &sshdr,
33142 +- remaining / HZ, 0, 0, RQF_PM, NULL);
33143 ++ HZ, 0, 0, RQF_PM, NULL);
33144 + if (!scsi_status_is_check_condition(ret) ||
33145 + !scsi_sense_valid(&sshdr) ||
33146 + sshdr.sense_key != UNIT_ATTENTION)
33147 +diff --git a/drivers/uio/uio_dmem_genirq.c b/drivers/uio/uio_dmem_genirq.c
33148 +index 1106f33764047..792c3e9c9ce53 100644
33149 +--- a/drivers/uio/uio_dmem_genirq.c
33150 ++++ b/drivers/uio/uio_dmem_genirq.c
33151 +@@ -110,8 +110,10 @@ static irqreturn_t uio_dmem_genirq_handler(int irq, struct uio_info *dev_info)
33152 + * remember the state so we can allow user space to enable it later.
33153 + */
33154 +
33155 ++ spin_lock(&priv->lock);
33156 + if (!test_and_set_bit(0, &priv->flags))
33157 + disable_irq_nosync(irq);
33158 ++ spin_unlock(&priv->lock);
33159 +
33160 + return IRQ_HANDLED;
33161 + }
33162 +@@ -125,20 +127,19 @@ static int uio_dmem_genirq_irqcontrol(struct uio_info *dev_info, s32 irq_on)
33163 + * in the interrupt controller, but keep track of the
33164 + * state to prevent per-irq depth damage.
33165 + *
33166 +- * Serialize this operation to support multiple tasks.
33167 ++ * Serialize this operation to support multiple tasks and concurrency
33168 ++ * with irq handler on SMP systems.
33169 + */
33170 +
33171 + spin_lock_irqsave(&priv->lock, flags);
33172 + if (irq_on) {
33173 + if (test_and_clear_bit(0, &priv->flags))
33174 + enable_irq(dev_info->irq);
33175 +- spin_unlock_irqrestore(&priv->lock, flags);
33176 + } else {
33177 +- if (!test_and_set_bit(0, &priv->flags)) {
33178 +- spin_unlock_irqrestore(&priv->lock, flags);
33179 +- disable_irq(dev_info->irq);
33180 +- }
33181 ++ if (!test_and_set_bit(0, &priv->flags))
33182 ++ disable_irq_nosync(dev_info->irq);
33183 + }
33184 ++ spin_unlock_irqrestore(&priv->lock, flags);
33185 +
33186 + return 0;
33187 + }
33188 +diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
33189 +index 2f29431f612e0..b23e543b3a3d5 100644
33190 +--- a/drivers/usb/cdns3/cdnsp-ring.c
33191 ++++ b/drivers/usb/cdns3/cdnsp-ring.c
33192 +@@ -2006,10 +2006,11 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
33193 +
33194 + int cdnsp_queue_ctrl_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
33195 + {
33196 +- u32 field, length_field, remainder;
33197 ++ u32 field, length_field, zlp = 0;
33198 + struct cdnsp_ep *pep = preq->pep;
33199 + struct cdnsp_ring *ep_ring;
33200 + int num_trbs;
33201 ++ u32 maxp;
33202 + int ret;
33203 +
33204 + ep_ring = cdnsp_request_to_transfer_ring(pdev, preq);
33205 +@@ -2019,26 +2020,33 @@ int cdnsp_queue_ctrl_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
33206 + /* 1 TRB for data, 1 for status */
33207 + num_trbs = (pdev->three_stage_setup) ? 2 : 1;
33208 +
33209 ++ maxp = usb_endpoint_maxp(pep->endpoint.desc);
33210 ++
33211 ++ if (preq->request.zero && preq->request.length &&
33212 ++ (preq->request.length % maxp == 0)) {
33213 ++ num_trbs++;
33214 ++ zlp = 1;
33215 ++ }
33216 ++
33217 + ret = cdnsp_prepare_transfer(pdev, preq, num_trbs);
33218 + if (ret)
33219 + return ret;
33220 +
33221 + /* If there's data, queue data TRBs */
33222 +- if (pdev->ep0_expect_in)
33223 +- field = TRB_TYPE(TRB_DATA) | TRB_IOC;
33224 +- else
33225 +- field = TRB_ISP | TRB_TYPE(TRB_DATA) | TRB_IOC;
33226 +-
33227 + if (preq->request.length > 0) {
33228 +- remainder = cdnsp_td_remainder(pdev, 0, preq->request.length,
33229 +- preq->request.length, preq, 1, 0);
33230 ++ field = TRB_TYPE(TRB_DATA);
33231 +
33232 +- length_field = TRB_LEN(preq->request.length) |
33233 +- TRB_TD_SIZE(remainder) | TRB_INTR_TARGET(0);
33234 ++ if (zlp)
33235 ++ field |= TRB_CHAIN;
33236 ++ else
33237 ++ field |= TRB_IOC | (pdev->ep0_expect_in ? 0 : TRB_ISP);
33238 +
33239 + if (pdev->ep0_expect_in)
33240 + field |= TRB_DIR_IN;
33241 +
33242 ++ length_field = TRB_LEN(preq->request.length) |
33243 ++ TRB_TD_SIZE(zlp) | TRB_INTR_TARGET(0);
33244 ++
33245 + cdnsp_queue_trb(pdev, ep_ring, true,
33246 + lower_32_bits(preq->request.dma),
33247 + upper_32_bits(preq->request.dma), length_field,
33248 +@@ -2046,6 +2054,20 @@ int cdnsp_queue_ctrl_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
33249 + TRB_SETUPID(pdev->setup_id) |
33250 + pdev->setup_speed);
33251 +
33252 ++ if (zlp) {
33253 ++ field = TRB_TYPE(TRB_NORMAL) | TRB_IOC;
33254 ++
33255 ++ if (!pdev->ep0_expect_in)
33256 ++ field = TRB_ISP;
33257 ++
33258 ++ cdnsp_queue_trb(pdev, ep_ring, true,
33259 ++ lower_32_bits(preq->request.dma),
33260 ++ upper_32_bits(preq->request.dma), 0,
33261 ++ field | ep_ring->cycle_state |
33262 ++ TRB_SETUPID(pdev->setup_id) |
33263 ++ pdev->setup_speed);
33264 ++ }
33265 ++
33266 + pdev->ep0_stage = CDNSP_DATA_STAGE;
33267 + }
33268 +
33269 +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
33270 +index faeaace0d197d..8300baedafd20 100644
33271 +--- a/drivers/usb/core/hcd.c
33272 ++++ b/drivers/usb/core/hcd.c
33273 +@@ -3133,8 +3133,12 @@ int usb_hcd_setup_local_mem(struct usb_hcd *hcd, phys_addr_t phys_addr,
33274 + GFP_KERNEL,
33275 + DMA_ATTR_WRITE_COMBINE);
33276 +
33277 +- if (IS_ERR(local_mem))
33278 ++ if (IS_ERR_OR_NULL(local_mem)) {
33279 ++ if (!local_mem)
33280 ++ return -ENOMEM;
33281 ++
33282 + return PTR_ERR(local_mem);
33283 ++ }
33284 +
33285 + /*
33286 + * Here we pass a dma_addr_t but the arg type is a phys_addr_t.
33287 +diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
33288 +index 1f348bc867c22..476b636185116 100644
33289 +--- a/drivers/usb/dwc3/core.c
33290 ++++ b/drivers/usb/dwc3/core.c
33291 +@@ -122,21 +122,25 @@ static void __dwc3_set_mode(struct work_struct *work)
33292 + unsigned long flags;
33293 + int ret;
33294 + u32 reg;
33295 ++ u32 desired_dr_role;
33296 +
33297 + mutex_lock(&dwc->mutex);
33298 ++ spin_lock_irqsave(&dwc->lock, flags);
33299 ++ desired_dr_role = dwc->desired_dr_role;
33300 ++ spin_unlock_irqrestore(&dwc->lock, flags);
33301 +
33302 + pm_runtime_get_sync(dwc->dev);
33303 +
33304 + if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
33305 + dwc3_otg_update(dwc, 0);
33306 +
33307 +- if (!dwc->desired_dr_role)
33308 ++ if (!desired_dr_role)
33309 + goto out;
33310 +
33311 +- if (dwc->desired_dr_role == dwc->current_dr_role)
33312 ++ if (desired_dr_role == dwc->current_dr_role)
33313 + goto out;
33314 +
33315 +- if (dwc->desired_dr_role == DWC3_GCTL_PRTCAP_OTG && dwc->edev)
33316 ++ if (desired_dr_role == DWC3_GCTL_PRTCAP_OTG && dwc->edev)
33317 + goto out;
33318 +
33319 + switch (dwc->current_dr_role) {
33320 +@@ -164,7 +168,7 @@ static void __dwc3_set_mode(struct work_struct *work)
33321 + */
33322 + if (dwc->current_dr_role && ((DWC3_IP_IS(DWC3) ||
33323 + DWC3_VER_IS_PRIOR(DWC31, 190A)) &&
33324 +- dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG)) {
33325 ++ desired_dr_role != DWC3_GCTL_PRTCAP_OTG)) {
33326 + reg = dwc3_readl(dwc->regs, DWC3_GCTL);
33327 + reg |= DWC3_GCTL_CORESOFTRESET;
33328 + dwc3_writel(dwc->regs, DWC3_GCTL, reg);
33329 +@@ -184,11 +188,11 @@ static void __dwc3_set_mode(struct work_struct *work)
33330 +
33331 + spin_lock_irqsave(&dwc->lock, flags);
33332 +
33333 +- dwc3_set_prtcap(dwc, dwc->desired_dr_role);
33334 ++ dwc3_set_prtcap(dwc, desired_dr_role);
33335 +
33336 + spin_unlock_irqrestore(&dwc->lock, flags);
33337 +
33338 +- switch (dwc->desired_dr_role) {
33339 ++ switch (desired_dr_role) {
33340 + case DWC3_GCTL_PRTCAP_HOST:
33341 + ret = dwc3_host_init(dwc);
33342 + if (ret) {
33343 +@@ -1096,8 +1100,13 @@ static int dwc3_core_init(struct dwc3 *dwc)
33344 +
33345 + if (!dwc->ulpi_ready) {
33346 + ret = dwc3_core_ulpi_init(dwc);
33347 +- if (ret)
33348 ++ if (ret) {
33349 ++ if (ret == -ETIMEDOUT) {
33350 ++ dwc3_core_soft_reset(dwc);
33351 ++ ret = -EPROBE_DEFER;
33352 ++ }
33353 + goto err0;
33354 ++ }
33355 + dwc->ulpi_ready = true;
33356 + }
33357 +
33358 +diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
33359 +index 7c40f3ffc0544..b0a0351d2d8b5 100644
33360 +--- a/drivers/usb/dwc3/dwc3-qcom.c
33361 ++++ b/drivers/usb/dwc3/dwc3-qcom.c
33362 +@@ -261,7 +261,8 @@ static int dwc3_qcom_interconnect_init(struct dwc3_qcom *qcom)
33363 + if (IS_ERR(qcom->icc_path_apps)) {
33364 + dev_err(dev, "failed to get apps-usb path: %ld\n",
33365 + PTR_ERR(qcom->icc_path_apps));
33366 +- return PTR_ERR(qcom->icc_path_apps);
33367 ++ ret = PTR_ERR(qcom->icc_path_apps);
33368 ++ goto put_path_ddr;
33369 + }
33370 +
33371 + max_speed = usb_get_maximum_speed(&qcom->dwc3->dev);
33372 +@@ -274,16 +275,22 @@ static int dwc3_qcom_interconnect_init(struct dwc3_qcom *qcom)
33373 + }
33374 + if (ret) {
33375 + dev_err(dev, "failed to set bandwidth for usb-ddr path: %d\n", ret);
33376 +- return ret;
33377 ++ goto put_path_apps;
33378 + }
33379 +
33380 + ret = icc_set_bw(qcom->icc_path_apps, APPS_USB_AVG_BW, APPS_USB_PEAK_BW);
33381 + if (ret) {
33382 + dev_err(dev, "failed to set bandwidth for apps-usb path: %d\n", ret);
33383 +- return ret;
33384 ++ goto put_path_apps;
33385 + }
33386 +
33387 + return 0;
33388 ++
33389 ++put_path_apps:
33390 ++ icc_put(qcom->icc_path_apps);
33391 ++put_path_ddr:
33392 ++ icc_put(qcom->icc_path_ddr);
33393 ++ return ret;
33394 + }
33395 +
33396 + /**
33397 +diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
33398 +index ca0a7d9eaa34e..6be6009f911e1 100644
33399 +--- a/drivers/usb/gadget/function/f_hid.c
33400 ++++ b/drivers/usb/gadget/function/f_hid.c
33401 +@@ -71,7 +71,7 @@ struct f_hidg {
33402 + wait_queue_head_t write_queue;
33403 + struct usb_request *req;
33404 +
33405 +- int minor;
33406 ++ struct device dev;
33407 + struct cdev cdev;
33408 + struct usb_function func;
33409 +
33410 +@@ -84,6 +84,14 @@ static inline struct f_hidg *func_to_hidg(struct usb_function *f)
33411 + return container_of(f, struct f_hidg, func);
33412 + }
33413 +
33414 ++static void hidg_release(struct device *dev)
33415 ++{
33416 ++ struct f_hidg *hidg = container_of(dev, struct f_hidg, dev);
33417 ++
33418 ++ kfree(hidg->set_report_buf);
33419 ++ kfree(hidg);
33420 ++}
33421 ++
33422 + /*-------------------------------------------------------------------------*/
33423 + /* Static descriptors */
33424 +
33425 +@@ -904,9 +912,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
33426 + struct usb_ep *ep;
33427 + struct f_hidg *hidg = func_to_hidg(f);
33428 + struct usb_string *us;
33429 +- struct device *device;
33430 + int status;
33431 +- dev_t dev;
33432 +
33433 + /* maybe allocate device-global string IDs, and patch descriptors */
33434 + us = usb_gstrings_attach(c->cdev, ct_func_strings,
33435 +@@ -999,21 +1005,11 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
33436 +
33437 + /* create char device */
33438 + cdev_init(&hidg->cdev, &f_hidg_fops);
33439 +- dev = MKDEV(major, hidg->minor);
33440 +- status = cdev_add(&hidg->cdev, dev, 1);
33441 ++ status = cdev_device_add(&hidg->cdev, &hidg->dev);
33442 + if (status)
33443 + goto fail_free_descs;
33444 +
33445 +- device = device_create(hidg_class, NULL, dev, NULL,
33446 +- "%s%d", "hidg", hidg->minor);
33447 +- if (IS_ERR(device)) {
33448 +- status = PTR_ERR(device);
33449 +- goto del;
33450 +- }
33451 +-
33452 + return 0;
33453 +-del:
33454 +- cdev_del(&hidg->cdev);
33455 + fail_free_descs:
33456 + usb_free_all_descriptors(f);
33457 + fail:
33458 +@@ -1244,9 +1240,7 @@ static void hidg_free(struct usb_function *f)
33459 +
33460 + hidg = func_to_hidg(f);
33461 + opts = container_of(f->fi, struct f_hid_opts, func_inst);
33462 +- kfree(hidg->report_desc);
33463 +- kfree(hidg->set_report_buf);
33464 +- kfree(hidg);
33465 ++ put_device(&hidg->dev);
33466 + mutex_lock(&opts->lock);
33467 + --opts->refcnt;
33468 + mutex_unlock(&opts->lock);
33469 +@@ -1256,8 +1250,7 @@ static void hidg_unbind(struct usb_configuration *c, struct usb_function *f)
33470 + {
33471 + struct f_hidg *hidg = func_to_hidg(f);
33472 +
33473 +- device_destroy(hidg_class, MKDEV(major, hidg->minor));
33474 +- cdev_del(&hidg->cdev);
33475 ++ cdev_device_del(&hidg->cdev, &hidg->dev);
33476 +
33477 + usb_free_all_descriptors(f);
33478 + }
33479 +@@ -1266,6 +1259,7 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
33480 + {
33481 + struct f_hidg *hidg;
33482 + struct f_hid_opts *opts;
33483 ++ int ret;
33484 +
33485 + /* allocate and initialize one new instance */
33486 + hidg = kzalloc(sizeof(*hidg), GFP_KERNEL);
33487 +@@ -1277,17 +1271,28 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
33488 + mutex_lock(&opts->lock);
33489 + ++opts->refcnt;
33490 +
33491 +- hidg->minor = opts->minor;
33492 ++ device_initialize(&hidg->dev);
33493 ++ hidg->dev.release = hidg_release;
33494 ++ hidg->dev.class = hidg_class;
33495 ++ hidg->dev.devt = MKDEV(major, opts->minor);
33496 ++ ret = dev_set_name(&hidg->dev, "hidg%d", opts->minor);
33497 ++ if (ret) {
33498 ++ --opts->refcnt;
33499 ++ mutex_unlock(&opts->lock);
33500 ++ return ERR_PTR(ret);
33501 ++ }
33502 ++
33503 + hidg->bInterfaceSubClass = opts->subclass;
33504 + hidg->bInterfaceProtocol = opts->protocol;
33505 + hidg->report_length = opts->report_length;
33506 + hidg->report_desc_length = opts->report_desc_length;
33507 + if (opts->report_desc) {
33508 +- hidg->report_desc = kmemdup(opts->report_desc,
33509 +- opts->report_desc_length,
33510 +- GFP_KERNEL);
33511 ++ hidg->report_desc = devm_kmemdup(&hidg->dev, opts->report_desc,
33512 ++ opts->report_desc_length,
33513 ++ GFP_KERNEL);
33514 + if (!hidg->report_desc) {
33515 +- kfree(hidg);
33516 ++ put_device(&hidg->dev);
33517 ++ --opts->refcnt;
33518 + mutex_unlock(&opts->lock);
33519 + return ERR_PTR(-ENOMEM);
33520 + }
33521 +diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
33522 +index c63c0c2cf649d..bf9878e1a72a8 100644
33523 +--- a/drivers/usb/gadget/udc/core.c
33524 ++++ b/drivers/usb/gadget/udc/core.c
33525 +@@ -734,13 +734,13 @@ int usb_gadget_disconnect(struct usb_gadget *gadget)
33526 + }
33527 +
33528 + ret = gadget->ops->pullup(gadget, 0);
33529 +- if (!ret) {
33530 ++ if (!ret)
33531 + gadget->connected = 0;
33532 +- mutex_lock(&udc_lock);
33533 +- if (gadget->udc->driver)
33534 +- gadget->udc->driver->disconnect(gadget);
33535 +- mutex_unlock(&udc_lock);
33536 +- }
33537 ++
33538 ++ mutex_lock(&udc_lock);
33539 ++ if (gadget->udc->driver)
33540 ++ gadget->udc->driver->disconnect(gadget);
33541 ++ mutex_unlock(&udc_lock);
33542 +
33543 + out:
33544 + trace_usb_gadget_disconnect(gadget, ret);
33545 +diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
33546 +index fdca28e72a3b4..d0e051beb3af9 100644
33547 +--- a/drivers/usb/gadget/udc/fotg210-udc.c
33548 ++++ b/drivers/usb/gadget/udc/fotg210-udc.c
33549 +@@ -629,10 +629,10 @@ static void fotg210_request_error(struct fotg210_udc *fotg210)
33550 + static void fotg210_set_address(struct fotg210_udc *fotg210,
33551 + struct usb_ctrlrequest *ctrl)
33552 + {
33553 +- if (ctrl->wValue >= 0x0100) {
33554 ++ if (le16_to_cpu(ctrl->wValue) >= 0x0100) {
33555 + fotg210_request_error(fotg210);
33556 + } else {
33557 +- fotg210_set_dev_addr(fotg210, ctrl->wValue);
33558 ++ fotg210_set_dev_addr(fotg210, le16_to_cpu(ctrl->wValue));
33559 + fotg210_set_cxdone(fotg210);
33560 + }
33561 + }
33562 +@@ -713,17 +713,17 @@ static void fotg210_get_status(struct fotg210_udc *fotg210,
33563 +
33564 + switch (ctrl->bRequestType & USB_RECIP_MASK) {
33565 + case USB_RECIP_DEVICE:
33566 +- fotg210->ep0_data = 1 << USB_DEVICE_SELF_POWERED;
33567 ++ fotg210->ep0_data = cpu_to_le16(1 << USB_DEVICE_SELF_POWERED);
33568 + break;
33569 + case USB_RECIP_INTERFACE:
33570 +- fotg210->ep0_data = 0;
33571 ++ fotg210->ep0_data = cpu_to_le16(0);
33572 + break;
33573 + case USB_RECIP_ENDPOINT:
33574 + epnum = ctrl->wIndex & USB_ENDPOINT_NUMBER_MASK;
33575 + if (epnum)
33576 + fotg210->ep0_data =
33577 +- fotg210_is_epnstall(fotg210->ep[epnum])
33578 +- << USB_ENDPOINT_HALT;
33579 ++ cpu_to_le16(fotg210_is_epnstall(fotg210->ep[epnum])
33580 ++ << USB_ENDPOINT_HALT);
33581 + else
33582 + fotg210_request_error(fotg210);
33583 + break;
33584 +diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
33585 +index 01705e559c422..c61fc19ef1154 100644
33586 +--- a/drivers/usb/host/xhci-mtk.c
33587 ++++ b/drivers/usb/host/xhci-mtk.c
33588 +@@ -639,7 +639,6 @@ static int xhci_mtk_probe(struct platform_device *pdev)
33589 +
33590 + dealloc_usb3_hcd:
33591 + usb_remove_hcd(xhci->shared_hcd);
33592 +- xhci->shared_hcd = NULL;
33593 +
33594 + dealloc_usb2_hcd:
33595 + usb_remove_hcd(hcd);
33596 +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
33597 +index ad81e9a508b14..343709af4c16f 100644
33598 +--- a/drivers/usb/host/xhci-ring.c
33599 ++++ b/drivers/usb/host/xhci-ring.c
33600 +@@ -2458,7 +2458,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
33601 +
33602 + switch (trb_comp_code) {
33603 + case COMP_SUCCESS:
33604 +- ep_ring->err_count = 0;
33605 ++ ep->err_count = 0;
33606 + /* handle success with untransferred data as short packet */
33607 + if (ep_trb != td->last_trb || remaining) {
33608 + xhci_warn(xhci, "WARN Successful completion on short TX\n");
33609 +@@ -2484,7 +2484,7 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
33610 + break;
33611 + case COMP_USB_TRANSACTION_ERROR:
33612 + if (xhci->quirks & XHCI_NO_SOFT_RETRY ||
33613 +- (ep_ring->err_count++ > MAX_SOFT_RETRY) ||
33614 ++ (ep->err_count++ > MAX_SOFT_RETRY) ||
33615 + le32_to_cpu(slot_ctx->tt_info) & TT_SLOT)
33616 + break;
33617 +
33618 +@@ -2565,8 +2565,14 @@ static int handle_tx_event(struct xhci_hcd *xhci,
33619 + case COMP_USB_TRANSACTION_ERROR:
33620 + case COMP_INVALID_STREAM_TYPE_ERROR:
33621 + case COMP_INVALID_STREAM_ID_ERROR:
33622 +- xhci_handle_halted_endpoint(xhci, ep, 0, NULL,
33623 +- EP_SOFT_RESET);
33624 ++ xhci_dbg(xhci, "Stream transaction error ep %u no id\n",
33625 ++ ep_index);
33626 ++ if (ep->err_count++ > MAX_SOFT_RETRY)
33627 ++ xhci_handle_halted_endpoint(xhci, ep, 0, NULL,
33628 ++ EP_HARD_RESET);
33629 ++ else
33630 ++ xhci_handle_halted_endpoint(xhci, ep, 0, NULL,
33631 ++ EP_SOFT_RESET);
33632 + goto cleanup;
33633 + case COMP_RING_UNDERRUN:
33634 + case COMP_RING_OVERRUN:
33635 +diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
33636 +index cc084d9505cdf..c9f06c5e4e9d2 100644
33637 +--- a/drivers/usb/host/xhci.h
33638 ++++ b/drivers/usb/host/xhci.h
33639 +@@ -933,6 +933,7 @@ struct xhci_virt_ep {
33640 + * have to restore the device state to the previous state
33641 + */
33642 + struct xhci_ring *new_ring;
33643 ++ unsigned int err_count;
33644 + unsigned int ep_state;
33645 + #define SET_DEQ_PENDING (1 << 0)
33646 + #define EP_HALTED (1 << 1) /* For stall handling */
33647 +@@ -1627,7 +1628,6 @@ struct xhci_ring {
33648 + * if we own the TRB (if we are the consumer). See section 4.9.1.
33649 + */
33650 + u32 cycle_state;
33651 +- unsigned int err_count;
33652 + unsigned int stream_id;
33653 + unsigned int num_segs;
33654 + unsigned int num_trbs_free;
33655 +diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
33656 +index 6704a62a16659..ba20272d22215 100644
33657 +--- a/drivers/usb/musb/musb_gadget.c
33658 ++++ b/drivers/usb/musb/musb_gadget.c
33659 +@@ -1628,8 +1628,6 @@ static int musb_gadget_vbus_draw(struct usb_gadget *gadget, unsigned mA)
33660 + {
33661 + struct musb *musb = gadget_to_musb(gadget);
33662 +
33663 +- if (!musb->xceiv->set_power)
33664 +- return -EOPNOTSUPP;
33665 + return usb_phy_set_power(musb->xceiv, mA);
33666 + }
33667 +
33668 +diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
33669 +index f571a65ae6ee2..476f55d1fec30 100644
33670 +--- a/drivers/usb/musb/omap2430.c
33671 ++++ b/drivers/usb/musb/omap2430.c
33672 +@@ -15,6 +15,7 @@
33673 + #include <linux/list.h>
33674 + #include <linux/io.h>
33675 + #include <linux/of.h>
33676 ++#include <linux/of_irq.h>
33677 + #include <linux/platform_device.h>
33678 + #include <linux/dma-mapping.h>
33679 + #include <linux/pm_runtime.h>
33680 +@@ -310,6 +311,7 @@ static int omap2430_probe(struct platform_device *pdev)
33681 + struct device_node *control_node;
33682 + struct platform_device *control_pdev;
33683 + int ret = -ENOMEM, val;
33684 ++ bool populate_irqs = false;
33685 +
33686 + if (!np)
33687 + return -ENODEV;
33688 +@@ -328,6 +330,18 @@ static int omap2430_probe(struct platform_device *pdev)
33689 + musb->dev.dma_mask = &omap2430_dmamask;
33690 + musb->dev.coherent_dma_mask = omap2430_dmamask;
33691 +
33692 ++ /*
33693 ++ * Legacy SoCs using omap_device get confused if node is moved
33694 ++ * because of interconnect properties mixed into the node.
33695 ++ */
33696 ++ if (of_get_property(np, "ti,hwmods", NULL)) {
33697 ++ dev_warn(&pdev->dev, "please update to probe with ti-sysc\n");
33698 ++ populate_irqs = true;
33699 ++ } else {
33700 ++ device_set_of_node_from_dev(&musb->dev, &pdev->dev);
33701 ++ }
33702 ++ of_node_put(np);
33703 ++
33704 + glue->dev = &pdev->dev;
33705 + glue->musb = musb;
33706 + glue->status = MUSB_UNKNOWN;
33707 +@@ -389,6 +403,46 @@ static int omap2430_probe(struct platform_device *pdev)
33708 + goto err2;
33709 + }
33710 +
33711 ++ if (populate_irqs) {
33712 ++ struct resource musb_res[3];
33713 ++ struct resource *res;
33714 ++ int i = 0;
33715 ++
33716 ++ memset(musb_res, 0, sizeof(*musb_res) * ARRAY_SIZE(musb_res));
33717 ++
33718 ++ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
33719 ++ if (!res)
33720 ++ goto err2;
33721 ++
33722 ++ musb_res[i].start = res->start;
33723 ++ musb_res[i].end = res->end;
33724 ++ musb_res[i].flags = res->flags;
33725 ++ musb_res[i].name = res->name;
33726 ++ i++;
33727 ++
33728 ++ ret = of_irq_get_byname(np, "mc");
33729 ++ if (ret > 0) {
33730 ++ musb_res[i].start = ret;
33731 ++ musb_res[i].flags = IORESOURCE_IRQ;
33732 ++ musb_res[i].name = "mc";
33733 ++ i++;
33734 ++ }
33735 ++
33736 ++ ret = of_irq_get_byname(np, "dma");
33737 ++ if (ret > 0) {
33738 ++ musb_res[i].start = ret;
33739 ++ musb_res[i].flags = IORESOURCE_IRQ;
33740 ++ musb_res[i].name = "dma";
33741 ++ i++;
33742 ++ }
33743 ++
33744 ++ ret = platform_device_add_resources(musb, musb_res, i);
33745 ++ if (ret) {
33746 ++ dev_err(&pdev->dev, "failed to add IRQ resources\n");
33747 ++ goto err2;
33748 ++ }
33749 ++ }
33750 ++
33751 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata));
33752 + if (ret) {
33753 + dev_err(&pdev->dev, "failed to add platform_data\n");
33754 +diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
33755 +index dfaed7eee94fc..32e6d19f7011a 100644
33756 +--- a/drivers/usb/roles/class.c
33757 ++++ b/drivers/usb/roles/class.c
33758 +@@ -106,10 +106,13 @@ usb_role_switch_is_parent(struct fwnode_handle *fwnode)
33759 + struct fwnode_handle *parent = fwnode_get_parent(fwnode);
33760 + struct device *dev;
33761 +
33762 +- if (!parent || !fwnode_property_present(parent, "usb-role-switch"))
33763 ++ if (!fwnode_property_present(parent, "usb-role-switch")) {
33764 ++ fwnode_handle_put(parent);
33765 + return NULL;
33766 ++ }
33767 +
33768 + dev = class_find_device_by_fwnode(role_class, parent);
33769 ++ fwnode_handle_put(parent);
33770 + return dev ? to_role_switch(dev) : ERR_PTR(-EPROBE_DEFER);
33771 + }
33772 +
33773 +diff --git a/drivers/usb/storage/alauda.c b/drivers/usb/storage/alauda.c
33774 +index 747be69e5e699..5e912dd29b4c9 100644
33775 +--- a/drivers/usb/storage/alauda.c
33776 ++++ b/drivers/usb/storage/alauda.c
33777 +@@ -438,6 +438,8 @@ static int alauda_init_media(struct us_data *us)
33778 + + MEDIA_INFO(us).blockshift + MEDIA_INFO(us).pageshift);
33779 + MEDIA_INFO(us).pba_to_lba = kcalloc(num_zones, sizeof(u16*), GFP_NOIO);
33780 + MEDIA_INFO(us).lba_to_pba = kcalloc(num_zones, sizeof(u16*), GFP_NOIO);
33781 ++ if (MEDIA_INFO(us).pba_to_lba == NULL || MEDIA_INFO(us).lba_to_pba == NULL)
33782 ++ return USB_STOR_TRANSPORT_ERROR;
33783 +
33784 + if (alauda_reset_media(us) != USB_STOR_XFER_GOOD)
33785 + return USB_STOR_TRANSPORT_ERROR;
33786 +diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
33787 +index 26ea2fdec17dc..31c2a3130cadb 100644
33788 +--- a/drivers/usb/typec/bus.c
33789 ++++ b/drivers/usb/typec/bus.c
33790 +@@ -134,7 +134,7 @@ int typec_altmode_exit(struct typec_altmode *adev)
33791 + if (!adev || !adev->active)
33792 + return 0;
33793 +
33794 +- if (!pdev->ops || !pdev->ops->enter)
33795 ++ if (!pdev->ops || !pdev->ops->exit)
33796 + return -EOPNOTSUPP;
33797 +
33798 + /* Moving to USB Safe State */
33799 +diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
33800 +index b2bfcebe218f0..72f8d1e876004 100644
33801 +--- a/drivers/usb/typec/tcpm/tcpci.c
33802 ++++ b/drivers/usb/typec/tcpm/tcpci.c
33803 +@@ -794,8 +794,10 @@ struct tcpci *tcpci_register_port(struct device *dev, struct tcpci_data *data)
33804 + return ERR_PTR(err);
33805 +
33806 + tcpci->port = tcpm_register_port(tcpci->dev, &tcpci->tcpc);
33807 +- if (IS_ERR(tcpci->port))
33808 ++ if (IS_ERR(tcpci->port)) {
33809 ++ fwnode_handle_put(tcpci->tcpc.fwnode);
33810 + return ERR_CAST(tcpci->port);
33811 ++ }
33812 +
33813 + return tcpci;
33814 + }
33815 +@@ -804,6 +806,7 @@ EXPORT_SYMBOL_GPL(tcpci_register_port);
33816 + void tcpci_unregister_port(struct tcpci *tcpci)
33817 + {
33818 + tcpm_unregister_port(tcpci->port);
33819 ++ fwnode_handle_put(tcpci->tcpc.fwnode);
33820 + }
33821 + EXPORT_SYMBOL_GPL(tcpci_unregister_port);
33822 +
33823 +diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
33824 +index 2a77bab948f54..195c9c16f817f 100644
33825 +--- a/drivers/usb/typec/tipd/core.c
33826 ++++ b/drivers/usb/typec/tipd/core.c
33827 +@@ -814,20 +814,19 @@ static int tps6598x_probe(struct i2c_client *client)
33828 +
33829 + ret = devm_tps6598_psy_register(tps);
33830 + if (ret)
33831 +- return ret;
33832 ++ goto err_role_put;
33833 +
33834 + tps->port = typec_register_port(&client->dev, &typec_cap);
33835 + if (IS_ERR(tps->port)) {
33836 + ret = PTR_ERR(tps->port);
33837 + goto err_role_put;
33838 + }
33839 +- fwnode_handle_put(fwnode);
33840 +
33841 + if (status & TPS_STATUS_PLUG_PRESENT) {
33842 + ret = tps6598x_read16(tps, TPS_REG_POWER_STATUS, &tps->pwr_status);
33843 + if (ret < 0) {
33844 + dev_err(tps->dev, "failed to read power status: %d\n", ret);
33845 +- goto err_role_put;
33846 ++ goto err_unregister_port;
33847 + }
33848 + ret = tps6598x_connect(tps, status);
33849 + if (ret)
33850 +@@ -840,14 +839,16 @@ static int tps6598x_probe(struct i2c_client *client)
33851 + dev_name(&client->dev), tps);
33852 + if (ret) {
33853 + tps6598x_disconnect(tps, 0);
33854 +- typec_unregister_port(tps->port);
33855 +- goto err_role_put;
33856 ++ goto err_unregister_port;
33857 + }
33858 +
33859 + i2c_set_clientdata(client, tps);
33860 ++ fwnode_handle_put(fwnode);
33861 +
33862 + return 0;
33863 +
33864 ++err_unregister_port:
33865 ++ typec_unregister_port(tps->port);
33866 + err_role_put:
33867 + usb_role_switch_put(tps->role_sw);
33868 + err_fwnode_put:
33869 +diff --git a/drivers/usb/typec/wusb3801.c b/drivers/usb/typec/wusb3801.c
33870 +index 3cc7a15ecbd31..a43a18d4b02ed 100644
33871 +--- a/drivers/usb/typec/wusb3801.c
33872 ++++ b/drivers/usb/typec/wusb3801.c
33873 +@@ -364,7 +364,7 @@ static int wusb3801_probe(struct i2c_client *client)
33874 + /* Initialize the hardware with the devicetree settings. */
33875 + ret = wusb3801_hw_init(wusb3801);
33876 + if (ret)
33877 +- return ret;
33878 ++ goto err_put_connector;
33879 +
33880 + wusb3801->cap.revision = USB_TYPEC_REV_1_2;
33881 + wusb3801->cap.accessory[0] = TYPEC_ACCESSORY_AUDIO;
33882 +diff --git a/drivers/vfio/iova_bitmap.c b/drivers/vfio/iova_bitmap.c
33883 +index 6631e8befe1b2..0f19d502f351b 100644
33884 +--- a/drivers/vfio/iova_bitmap.c
33885 ++++ b/drivers/vfio/iova_bitmap.c
33886 +@@ -295,11 +295,13 @@ void iova_bitmap_free(struct iova_bitmap *bitmap)
33887 + */
33888 + static unsigned long iova_bitmap_mapped_remaining(struct iova_bitmap *bitmap)
33889 + {
33890 +- unsigned long remaining;
33891 ++ unsigned long remaining, bytes;
33892 ++
33893 ++ bytes = (bitmap->mapped.npages << PAGE_SHIFT) - bitmap->mapped.pgoff;
33894 +
33895 + remaining = bitmap->mapped_total_index - bitmap->mapped_base_index;
33896 + remaining = min_t(unsigned long, remaining,
33897 +- (bitmap->mapped.npages << PAGE_SHIFT) / sizeof(*bitmap->bitmap));
33898 ++ bytes / sizeof(*bitmap->bitmap));
33899 +
33900 + return remaining;
33901 + }
33902 +@@ -394,29 +396,27 @@ int iova_bitmap_for_each(struct iova_bitmap *bitmap, void *opaque,
33903 + * Set the bits corresponding to the range [iova .. iova+length-1] in
33904 + * the user bitmap.
33905 + *
33906 +- * Return: The number of bits set.
33907 + */
33908 + void iova_bitmap_set(struct iova_bitmap *bitmap,
33909 + unsigned long iova, size_t length)
33910 + {
33911 + struct iova_bitmap_map *mapped = &bitmap->mapped;
33912 +- unsigned long offset = (iova - mapped->iova) >> mapped->pgshift;
33913 +- unsigned long nbits = max_t(unsigned long, 1, length >> mapped->pgshift);
33914 +- unsigned long page_idx = offset / BITS_PER_PAGE;
33915 +- unsigned long page_offset = mapped->pgoff;
33916 +- void *kaddr;
33917 +-
33918 +- offset = offset % BITS_PER_PAGE;
33919 ++ unsigned long cur_bit = ((iova - mapped->iova) >>
33920 ++ mapped->pgshift) + mapped->pgoff * BITS_PER_BYTE;
33921 ++ unsigned long last_bit = (((iova + length - 1) - mapped->iova) >>
33922 ++ mapped->pgshift) + mapped->pgoff * BITS_PER_BYTE;
33923 +
33924 + do {
33925 +- unsigned long size = min(BITS_PER_PAGE - offset, nbits);
33926 ++ unsigned int page_idx = cur_bit / BITS_PER_PAGE;
33927 ++ unsigned int offset = cur_bit % BITS_PER_PAGE;
33928 ++ unsigned int nbits = min(BITS_PER_PAGE - offset,
33929 ++ last_bit - cur_bit + 1);
33930 ++ void *kaddr;
33931 +
33932 + kaddr = kmap_local_page(mapped->pages[page_idx]);
33933 +- bitmap_set(kaddr + page_offset, offset, size);
33934 ++ bitmap_set(kaddr, offset, nbits);
33935 + kunmap_local(kaddr);
33936 +- page_offset = offset = 0;
33937 +- nbits -= size;
33938 +- page_idx++;
33939 +- } while (nbits > 0);
33940 ++ cur_bit += nbits;
33941 ++ } while (cur_bit <= last_bit);
33942 + }
33943 + EXPORT_SYMBOL_GPL(iova_bitmap_set);
33944 +diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
33945 +index 55dc4f43c31e3..1a0a238ffa354 100644
33946 +--- a/drivers/vfio/platform/vfio_platform_common.c
33947 ++++ b/drivers/vfio/platform/vfio_platform_common.c
33948 +@@ -72,12 +72,11 @@ static int vfio_platform_acpi_call_reset(struct vfio_platform_device *vdev,
33949 + const char **extra_dbg)
33950 + {
33951 + #ifdef CONFIG_ACPI
33952 +- struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
33953 + struct device *dev = vdev->device;
33954 + acpi_handle handle = ACPI_HANDLE(dev);
33955 + acpi_status acpi_ret;
33956 +
33957 +- acpi_ret = acpi_evaluate_object(handle, "_RST", NULL, &buffer);
33958 ++ acpi_ret = acpi_evaluate_object(handle, "_RST", NULL, NULL);
33959 + if (ACPI_FAILURE(acpi_ret)) {
33960 + if (extra_dbg)
33961 + *extra_dbg = acpi_format_exception(acpi_ret);
33962 +diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
33963 +index cfc55273dc5d1..974e862cd20d6 100644
33964 +--- a/drivers/video/fbdev/Kconfig
33965 ++++ b/drivers/video/fbdev/Kconfig
33966 +@@ -601,6 +601,7 @@ config FB_TGA
33967 + config FB_UVESA
33968 + tristate "Userspace VESA VGA graphics support"
33969 + depends on FB && CONNECTOR
33970 ++ depends on !UML
33971 + select FB_CFB_FILLRECT
33972 + select FB_CFB_COPYAREA
33973 + select FB_CFB_IMAGEBLIT
33974 +@@ -2217,7 +2218,6 @@ config FB_SSD1307
33975 + select FB_SYS_COPYAREA
33976 + select FB_SYS_IMAGEBLIT
33977 + select FB_DEFERRED_IO
33978 +- select PWM
33979 + select FB_BACKLIGHT
33980 + help
33981 + This driver implements support for the Solomon SSD1307
33982 +diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
33983 +index c0143d38df83a..14a7d404062c3 100644
33984 +--- a/drivers/video/fbdev/core/fbcon.c
33985 ++++ b/drivers/video/fbdev/core/fbcon.c
33986 +@@ -2450,7 +2450,8 @@ err_out:
33987 +
33988 + if (userfont) {
33989 + p->userfont = old_userfont;
33990 +- REFCOUNT(data)--;
33991 ++ if (--REFCOUNT(data) == 0)
33992 ++ kfree(data - FONT_EXTRA_WORDS * sizeof(int));
33993 + }
33994 +
33995 + vc->vc_font.width = old_width;
33996 +diff --git a/drivers/video/fbdev/ep93xx-fb.c b/drivers/video/fbdev/ep93xx-fb.c
33997 +index 2398b3d48fedf..305f1587bd898 100644
33998 +--- a/drivers/video/fbdev/ep93xx-fb.c
33999 ++++ b/drivers/video/fbdev/ep93xx-fb.c
34000 +@@ -552,12 +552,14 @@ static int ep93xxfb_probe(struct platform_device *pdev)
34001 +
34002 + err = register_framebuffer(info);
34003 + if (err)
34004 +- goto failed_check;
34005 ++ goto failed_framebuffer;
34006 +
34007 + dev_info(info->dev, "registered. Mode = %dx%d-%d\n",
34008 + info->var.xres, info->var.yres, info->var.bits_per_pixel);
34009 + return 0;
34010 +
34011 ++failed_framebuffer:
34012 ++ clk_disable_unprepare(fbi->clk);
34013 + failed_check:
34014 + if (fbi->mach_info->teardown)
34015 + fbi->mach_info->teardown(pdev);
34016 +diff --git a/drivers/video/fbdev/geode/Kconfig b/drivers/video/fbdev/geode/Kconfig
34017 +index ac9c860592aaf..85bc14b6faf64 100644
34018 +--- a/drivers/video/fbdev/geode/Kconfig
34019 ++++ b/drivers/video/fbdev/geode/Kconfig
34020 +@@ -5,6 +5,7 @@
34021 + config FB_GEODE
34022 + bool "AMD Geode family framebuffer support"
34023 + depends on FB && PCI && (X86_32 || (X86 && COMPILE_TEST))
34024 ++ depends on !UML
34025 + help
34026 + Say 'Y' here to allow you to select framebuffer drivers for
34027 + the AMD Geode family of processors.
34028 +diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
34029 +index 072ce07ba9e05..4ff25dfc865d9 100644
34030 +--- a/drivers/video/fbdev/hyperv_fb.c
34031 ++++ b/drivers/video/fbdev/hyperv_fb.c
34032 +@@ -780,12 +780,18 @@ static void hvfb_ondemand_refresh_throttle(struct hvfb_par *par,
34033 + static int hvfb_on_panic(struct notifier_block *nb,
34034 + unsigned long e, void *p)
34035 + {
34036 ++ struct hv_device *hdev;
34037 + struct hvfb_par *par;
34038 + struct fb_info *info;
34039 +
34040 + par = container_of(nb, struct hvfb_par, hvfb_panic_nb);
34041 +- par->synchronous_fb = true;
34042 + info = par->info;
34043 ++ hdev = device_to_hv_device(info->device);
34044 ++
34045 ++ if (hv_ringbuffer_spinlock_busy(hdev->channel))
34046 ++ return NOTIFY_DONE;
34047 ++
34048 ++ par->synchronous_fb = true;
34049 + if (par->need_docopy)
34050 + hvfb_docopy(par, 0, dio_fb_size);
34051 + synthvid_update(info, 0, 0, INT_MAX, INT_MAX);
34052 +diff --git a/drivers/video/fbdev/pm2fb.c b/drivers/video/fbdev/pm2fb.c
34053 +index 7da715d31a933..7a8609c40ae93 100644
34054 +--- a/drivers/video/fbdev/pm2fb.c
34055 ++++ b/drivers/video/fbdev/pm2fb.c
34056 +@@ -1533,8 +1533,10 @@ static int pm2fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
34057 + }
34058 +
34059 + info = framebuffer_alloc(sizeof(struct pm2fb_par), &pdev->dev);
34060 +- if (!info)
34061 +- return -ENOMEM;
34062 ++ if (!info) {
34063 ++ err = -ENOMEM;
34064 ++ goto err_exit_disable;
34065 ++ }
34066 + default_par = info->par;
34067 +
34068 + switch (pdev->device) {
34069 +@@ -1715,6 +1717,8 @@ static int pm2fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
34070 + release_mem_region(pm2fb_fix.mmio_start, pm2fb_fix.mmio_len);
34071 + err_exit_neither:
34072 + framebuffer_release(info);
34073 ++ err_exit_disable:
34074 ++ pci_disable_device(pdev);
34075 + return retval;
34076 + }
34077 +
34078 +@@ -1739,6 +1743,7 @@ static void pm2fb_remove(struct pci_dev *pdev)
34079 + fb_dealloc_cmap(&info->cmap);
34080 + kfree(info->pixmap.addr);
34081 + framebuffer_release(info);
34082 ++ pci_disable_device(pdev);
34083 + }
34084 +
34085 + static const struct pci_device_id pm2fb_id_table[] = {
34086 +diff --git a/drivers/video/fbdev/uvesafb.c b/drivers/video/fbdev/uvesafb.c
34087 +index 00d789b6c0faf..0e3cabbec4b40 100644
34088 +--- a/drivers/video/fbdev/uvesafb.c
34089 ++++ b/drivers/video/fbdev/uvesafb.c
34090 +@@ -1758,6 +1758,7 @@ static int uvesafb_probe(struct platform_device *dev)
34091 + out_unmap:
34092 + iounmap(info->screen_base);
34093 + out_mem:
34094 ++ arch_phys_wc_del(par->mtrr_handle);
34095 + release_mem_region(info->fix.smem_start, info->fix.smem_len);
34096 + out_reg:
34097 + release_region(0x3c0, 32);
34098 +diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c
34099 +index 82b36dbb5b1a9..33051e3a2561e 100644
34100 +--- a/drivers/video/fbdev/vermilion/vermilion.c
34101 ++++ b/drivers/video/fbdev/vermilion/vermilion.c
34102 +@@ -278,8 +278,10 @@ static int vmlfb_get_gpu(struct vml_par *par)
34103 +
34104 + mutex_unlock(&vml_mutex);
34105 +
34106 +- if (pci_enable_device(par->gpu) < 0)
34107 ++ if (pci_enable_device(par->gpu) < 0) {
34108 ++ pci_dev_put(par->gpu);
34109 + return -ENODEV;
34110 ++ }
34111 +
34112 + return 0;
34113 + }
34114 +diff --git a/drivers/video/fbdev/via/via-core.c b/drivers/video/fbdev/via/via-core.c
34115 +index 2ee8fcae08dfb..b8cd04defc5e2 100644
34116 +--- a/drivers/video/fbdev/via/via-core.c
34117 ++++ b/drivers/video/fbdev/via/via-core.c
34118 +@@ -730,7 +730,14 @@ static int __init via_core_init(void)
34119 + return ret;
34120 + viafb_i2c_init();
34121 + viafb_gpio_init();
34122 +- return pci_register_driver(&via_driver);
34123 ++ ret = pci_register_driver(&via_driver);
34124 ++ if (ret) {
34125 ++ viafb_gpio_exit();
34126 ++ viafb_i2c_exit();
34127 ++ return ret;
34128 ++ }
34129 ++
34130 ++ return 0;
34131 + }
34132 +
34133 + static void __exit via_core_exit(void)
34134 +diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
34135 +index 1ea6d2e5b2187..99d6062afe72f 100644
34136 +--- a/drivers/virt/coco/sev-guest/sev-guest.c
34137 ++++ b/drivers/virt/coco/sev-guest/sev-guest.c
34138 +@@ -800,3 +800,4 @@ MODULE_AUTHOR("Brijesh Singh <brijesh.singh@×××.com>");
34139 + MODULE_LICENSE("GPL");
34140 + MODULE_VERSION("1.0.0");
34141 + MODULE_DESCRIPTION("AMD SEV Guest Driver");
34142 ++MODULE_ALIAS("platform:sev-guest");
34143 +diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
34144 +index 34693f11385f6..e937b4dd28be7 100644
34145 +--- a/drivers/watchdog/iTCO_wdt.c
34146 ++++ b/drivers/watchdog/iTCO_wdt.c
34147 +@@ -423,14 +423,18 @@ static unsigned int iTCO_wdt_get_timeleft(struct watchdog_device *wd_dev)
34148 + return time_left;
34149 + }
34150 +
34151 +-static void iTCO_wdt_set_running(struct iTCO_wdt_private *p)
34152 ++/* Returns true if the watchdog was running */
34153 ++static bool iTCO_wdt_set_running(struct iTCO_wdt_private *p)
34154 + {
34155 + u16 val;
34156 +
34157 +- /* Bit 11: TCO Timer Halt -> 0 = The TCO timer is * enabled */
34158 ++ /* Bit 11: TCO Timer Halt -> 0 = The TCO timer is enabled */
34159 + val = inw(TCO1_CNT(p));
34160 +- if (!(val & BIT(11)))
34161 ++ if (!(val & BIT(11))) {
34162 + set_bit(WDOG_HW_RUNNING, &p->wddev.status);
34163 ++ return true;
34164 ++ }
34165 ++ return false;
34166 + }
34167 +
34168 + /*
34169 +@@ -518,9 +522,6 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
34170 + return -ENODEV; /* Cannot reset NO_REBOOT bit */
34171 + }
34172 +
34173 +- /* Set the NO_REBOOT bit to prevent later reboots, just for sure */
34174 +- p->update_no_reboot_bit(p->no_reboot_priv, true);
34175 +-
34176 + if (turn_SMI_watchdog_clear_off >= p->iTCO_version) {
34177 + /*
34178 + * Bit 13: TCO_EN -> 0
34179 +@@ -572,7 +573,13 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
34180 + watchdog_set_drvdata(&p->wddev, p);
34181 + platform_set_drvdata(pdev, p);
34182 +
34183 +- iTCO_wdt_set_running(p);
34184 ++ if (!iTCO_wdt_set_running(p)) {
34185 ++ /*
34186 ++ * If the watchdog was not running set NO_REBOOT now to
34187 ++ * prevent later reboots.
34188 ++ */
34189 ++ p->update_no_reboot_bit(p->no_reboot_priv, true);
34190 ++ }
34191 +
34192 + /* Check that the heartbeat value is within it's range;
34193 + if not reset to the default */
34194 +diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
34195 +index fae50a24630bd..1edf45ee9890d 100644
34196 +--- a/drivers/xen/privcmd.c
34197 ++++ b/drivers/xen/privcmd.c
34198 +@@ -760,7 +760,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
34199 + goto out;
34200 + }
34201 +
34202 +- pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL);
34203 ++ pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL | __GFP_NOWARN);
34204 + if (!pfns) {
34205 + rc = -ENOMEM;
34206 + goto out;
34207 +diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
34208 +index 3ac5fcf98d0d6..daaf3810cc925 100644
34209 +--- a/fs/afs/fs_probe.c
34210 ++++ b/fs/afs/fs_probe.c
34211 +@@ -366,12 +366,15 @@ void afs_fs_probe_dispatcher(struct work_struct *work)
34212 + unsigned long nowj, timer_at, poll_at;
34213 + bool first_pass = true, set_timer = false;
34214 +
34215 +- if (!net->live)
34216 ++ if (!net->live) {
34217 ++ afs_dec_servers_outstanding(net);
34218 + return;
34219 ++ }
34220 +
34221 + _enter("");
34222 +
34223 + if (list_empty(&net->fs_probe_fast) && list_empty(&net->fs_probe_slow)) {
34224 ++ afs_dec_servers_outstanding(net);
34225 + _leave(" [none]");
34226 + return;
34227 + }
34228 +diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
34229 +index e1eae7ea823ae..bb202ad369d53 100644
34230 +--- a/fs/binfmt_misc.c
34231 ++++ b/fs/binfmt_misc.c
34232 +@@ -44,10 +44,10 @@ static LIST_HEAD(entries);
34233 + static int enabled = 1;
34234 +
34235 + enum {Enabled, Magic};
34236 +-#define MISC_FMT_PRESERVE_ARGV0 (1 << 31)
34237 +-#define MISC_FMT_OPEN_BINARY (1 << 30)
34238 +-#define MISC_FMT_CREDENTIALS (1 << 29)
34239 +-#define MISC_FMT_OPEN_FILE (1 << 28)
34240 ++#define MISC_FMT_PRESERVE_ARGV0 (1UL << 31)
34241 ++#define MISC_FMT_OPEN_BINARY (1UL << 30)
34242 ++#define MISC_FMT_CREDENTIALS (1UL << 29)
34243 ++#define MISC_FMT_OPEN_FILE (1UL << 28)
34244 +
34245 + typedef struct {
34246 + struct list_head list;
34247 +diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
34248 +index 83cb0378096f2..3676580c2d97e 100644
34249 +--- a/fs/btrfs/extent-io-tree.c
34250 ++++ b/fs/btrfs/extent-io-tree.c
34251 +@@ -572,7 +572,7 @@ int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
34252 + if (bits & (EXTENT_LOCKED | EXTENT_BOUNDARY))
34253 + clear = 1;
34254 + again:
34255 +- if (!prealloc && gfpflags_allow_blocking(mask)) {
34256 ++ if (!prealloc) {
34257 + /*
34258 + * Don't care for allocation failure here because we might end
34259 + * up not needing the pre-allocated extent state at all, which
34260 +@@ -636,7 +636,8 @@ hit_next:
34261 +
34262 + if (state->start < start) {
34263 + prealloc = alloc_extent_state_atomic(prealloc);
34264 +- BUG_ON(!prealloc);
34265 ++ if (!prealloc)
34266 ++ goto search_again;
34267 + err = split_state(tree, state, prealloc, start);
34268 + if (err)
34269 + extent_io_tree_panic(tree, err);
34270 +@@ -657,7 +658,8 @@ hit_next:
34271 + */
34272 + if (state->start <= end && state->end > end) {
34273 + prealloc = alloc_extent_state_atomic(prealloc);
34274 +- BUG_ON(!prealloc);
34275 ++ if (!prealloc)
34276 ++ goto search_again;
34277 + err = split_state(tree, state, prealloc, end + 1);
34278 + if (err)
34279 + extent_io_tree_panic(tree, err);
34280 +@@ -966,7 +968,7 @@ static int __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
34281 + else
34282 + ASSERT(failed_start == NULL);
34283 + again:
34284 +- if (!prealloc && gfpflags_allow_blocking(mask)) {
34285 ++ if (!prealloc) {
34286 + /*
34287 + * Don't care for allocation failure here because we might end
34288 + * up not needing the pre-allocated extent state at all, which
34289 +@@ -991,7 +993,8 @@ again:
34290 + state = tree_search_for_insert(tree, start, &p, &parent);
34291 + if (!state) {
34292 + prealloc = alloc_extent_state_atomic(prealloc);
34293 +- BUG_ON(!prealloc);
34294 ++ if (!prealloc)
34295 ++ goto search_again;
34296 + prealloc->start = start;
34297 + prealloc->end = end;
34298 + insert_state_fast(tree, prealloc, p, parent, bits, changeset);
34299 +@@ -1062,7 +1065,8 @@ hit_next:
34300 + }
34301 +
34302 + prealloc = alloc_extent_state_atomic(prealloc);
34303 +- BUG_ON(!prealloc);
34304 ++ if (!prealloc)
34305 ++ goto search_again;
34306 + err = split_state(tree, state, prealloc, start);
34307 + if (err)
34308 + extent_io_tree_panic(tree, err);
34309 +@@ -1099,7 +1103,8 @@ hit_next:
34310 + this_end = last_start - 1;
34311 +
34312 + prealloc = alloc_extent_state_atomic(prealloc);
34313 +- BUG_ON(!prealloc);
34314 ++ if (!prealloc)
34315 ++ goto search_again;
34316 +
34317 + /*
34318 + * Avoid to free 'prealloc' if it can be merged with the later
34319 +@@ -1130,7 +1135,8 @@ hit_next:
34320 + }
34321 +
34322 + prealloc = alloc_extent_state_atomic(prealloc);
34323 +- BUG_ON(!prealloc);
34324 ++ if (!prealloc)
34325 ++ goto search_again;
34326 + err = split_state(tree, state, prealloc, end + 1);
34327 + if (err)
34328 + extent_io_tree_panic(tree, err);
34329 +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
34330 +index d01631d478067..ed4e1c3705d0a 100644
34331 +--- a/fs/btrfs/file.c
34332 ++++ b/fs/btrfs/file.c
34333 +@@ -696,7 +696,10 @@ next_slot:
34334 + args->start - extent_offset,
34335 + 0, false);
34336 + ret = btrfs_inc_extent_ref(trans, &ref);
34337 +- BUG_ON(ret); /* -ENOMEM */
34338 ++ if (ret) {
34339 ++ btrfs_abort_transaction(trans, ret);
34340 ++ break;
34341 ++ }
34342 + }
34343 + key.offset = args->start;
34344 + }
34345 +@@ -783,7 +786,10 @@ delete_extent_item:
34346 + key.offset - extent_offset, 0,
34347 + false);
34348 + ret = btrfs_free_extent(trans, &ref);
34349 +- BUG_ON(ret); /* -ENOMEM */
34350 ++ if (ret) {
34351 ++ btrfs_abort_transaction(trans, ret);
34352 ++ break;
34353 ++ }
34354 + args->bytes_found += extent_end - key.offset;
34355 + }
34356 +
34357 +diff --git a/fs/char_dev.c b/fs/char_dev.c
34358 +index ba0ded7842a77..3f667292608c0 100644
34359 +--- a/fs/char_dev.c
34360 ++++ b/fs/char_dev.c
34361 +@@ -547,7 +547,7 @@ int cdev_device_add(struct cdev *cdev, struct device *dev)
34362 + }
34363 +
34364 + rc = device_add(dev);
34365 +- if (rc)
34366 ++ if (rc && dev->devt)
34367 + cdev_del(cdev);
34368 +
34369 + return rc;
34370 +diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
34371 +index ffbd9a99fc128..ba6cc50af390f 100644
34372 +--- a/fs/cifs/smb2file.c
34373 ++++ b/fs/cifs/smb2file.c
34374 +@@ -122,8 +122,8 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
34375 + struct smb2_hdr *hdr = err_iov.iov_base;
34376 +
34377 + if (unlikely(!err_iov.iov_base || err_buftype == CIFS_NO_BUFFER))
34378 +- rc = -ENOMEM;
34379 +- else if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
34380 ++ goto out;
34381 ++ if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
34382 + rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov,
34383 + &data->symlink_target);
34384 + if (!rc) {
34385 +diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
34386 +index d1f9d26322027..ec6519e1ca3bf 100644
34387 +--- a/fs/configfs/dir.c
34388 ++++ b/fs/configfs/dir.c
34389 +@@ -316,6 +316,7 @@ static int configfs_create_dir(struct config_item *item, struct dentry *dentry,
34390 + return 0;
34391 +
34392 + out_remove:
34393 ++ configfs_put(dentry->d_fsdata);
34394 + configfs_remove_dirent(dentry);
34395 + return PTR_ERR(inode);
34396 + }
34397 +@@ -382,6 +383,7 @@ int configfs_create_link(struct configfs_dirent *target, struct dentry *parent,
34398 + return 0;
34399 +
34400 + out_remove:
34401 ++ configfs_put(dentry->d_fsdata);
34402 + configfs_remove_dirent(dentry);
34403 + return PTR_ERR(inode);
34404 + }
34405 +diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
34406 +index ddb3fc258df94..b54f470e0d031 100644
34407 +--- a/fs/debugfs/file.c
34408 ++++ b/fs/debugfs/file.c
34409 +@@ -378,8 +378,8 @@ ssize_t debugfs_attr_read(struct file *file, char __user *buf,
34410 + }
34411 + EXPORT_SYMBOL_GPL(debugfs_attr_read);
34412 +
34413 +-ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
34414 +- size_t len, loff_t *ppos)
34415 ++static ssize_t debugfs_attr_write_xsigned(struct file *file, const char __user *buf,
34416 ++ size_t len, loff_t *ppos, bool is_signed)
34417 + {
34418 + struct dentry *dentry = F_DENTRY(file);
34419 + ssize_t ret;
34420 +@@ -387,12 +387,28 @@ ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
34421 + ret = debugfs_file_get(dentry);
34422 + if (unlikely(ret))
34423 + return ret;
34424 +- ret = simple_attr_write(file, buf, len, ppos);
34425 ++ if (is_signed)
34426 ++ ret = simple_attr_write_signed(file, buf, len, ppos);
34427 ++ else
34428 ++ ret = simple_attr_write(file, buf, len, ppos);
34429 + debugfs_file_put(dentry);
34430 + return ret;
34431 + }
34432 ++
34433 ++ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
34434 ++ size_t len, loff_t *ppos)
34435 ++{
34436 ++ return debugfs_attr_write_xsigned(file, buf, len, ppos, false);
34437 ++}
34438 + EXPORT_SYMBOL_GPL(debugfs_attr_write);
34439 +
34440 ++ssize_t debugfs_attr_write_signed(struct file *file, const char __user *buf,
34441 ++ size_t len, loff_t *ppos)
34442 ++{
34443 ++ return debugfs_attr_write_xsigned(file, buf, len, ppos, true);
34444 ++}
34445 ++EXPORT_SYMBOL_GPL(debugfs_attr_write_signed);
34446 ++
34447 + static struct dentry *debugfs_create_mode_unsafe(const char *name, umode_t mode,
34448 + struct dentry *parent, void *value,
34449 + const struct file_operations *fops,
34450 +@@ -738,11 +754,11 @@ static int debugfs_atomic_t_get(void *data, u64 *val)
34451 + *val = atomic_read((atomic_t *)data);
34452 + return 0;
34453 + }
34454 +-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
34455 ++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t, debugfs_atomic_t_get,
34456 + debugfs_atomic_t_set, "%lld\n");
34457 +-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL,
34458 ++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t_ro, debugfs_atomic_t_get, NULL,
34459 + "%lld\n");
34460 +-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set,
34461 ++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t_wo, NULL, debugfs_atomic_t_set,
34462 + "%lld\n");
34463 +
34464 + /**
34465 +diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
34466 +index af5ed6b9c54dd..6a792a513d6b8 100644
34467 +--- a/fs/erofs/fscache.c
34468 ++++ b/fs/erofs/fscache.c
34469 +@@ -494,7 +494,8 @@ static int erofs_fscache_register_domain(struct super_block *sb)
34470 +
34471 + static
34472 + struct erofs_fscache *erofs_fscache_acquire_cookie(struct super_block *sb,
34473 +- char *name, bool need_inode)
34474 ++ char *name,
34475 ++ unsigned int flags)
34476 + {
34477 + struct fscache_volume *volume = EROFS_SB(sb)->volume;
34478 + struct erofs_fscache *ctx;
34479 +@@ -516,7 +517,7 @@ struct erofs_fscache *erofs_fscache_acquire_cookie(struct super_block *sb,
34480 + fscache_use_cookie(cookie, false);
34481 + ctx->cookie = cookie;
34482 +
34483 +- if (need_inode) {
34484 ++ if (flags & EROFS_REG_COOKIE_NEED_INODE) {
34485 + struct inode *const inode = new_inode(sb);
34486 +
34487 + if (!inode) {
34488 +@@ -554,14 +555,15 @@ static void erofs_fscache_relinquish_cookie(struct erofs_fscache *ctx)
34489 +
34490 + static
34491 + struct erofs_fscache *erofs_fscache_domain_init_cookie(struct super_block *sb,
34492 +- char *name, bool need_inode)
34493 ++ char *name,
34494 ++ unsigned int flags)
34495 + {
34496 + int err;
34497 + struct inode *inode;
34498 + struct erofs_fscache *ctx;
34499 + struct erofs_domain *domain = EROFS_SB(sb)->domain;
34500 +
34501 +- ctx = erofs_fscache_acquire_cookie(sb, name, need_inode);
34502 ++ ctx = erofs_fscache_acquire_cookie(sb, name, flags);
34503 + if (IS_ERR(ctx))
34504 + return ctx;
34505 +
34506 +@@ -589,7 +591,8 @@ out:
34507 +
34508 + static
34509 + struct erofs_fscache *erofs_domain_register_cookie(struct super_block *sb,
34510 +- char *name, bool need_inode)
34511 ++ char *name,
34512 ++ unsigned int flags)
34513 + {
34514 + struct inode *inode;
34515 + struct erofs_fscache *ctx;
34516 +@@ -602,23 +605,30 @@ struct erofs_fscache *erofs_domain_register_cookie(struct super_block *sb,
34517 + ctx = inode->i_private;
34518 + if (!ctx || ctx->domain != domain || strcmp(ctx->name, name))
34519 + continue;
34520 +- igrab(inode);
34521 ++ if (!(flags & EROFS_REG_COOKIE_NEED_NOEXIST)) {
34522 ++ igrab(inode);
34523 ++ } else {
34524 ++ erofs_err(sb, "%s already exists in domain %s", name,
34525 ++ domain->domain_id);
34526 ++ ctx = ERR_PTR(-EEXIST);
34527 ++ }
34528 + spin_unlock(&psb->s_inode_list_lock);
34529 + mutex_unlock(&erofs_domain_cookies_lock);
34530 + return ctx;
34531 + }
34532 + spin_unlock(&psb->s_inode_list_lock);
34533 +- ctx = erofs_fscache_domain_init_cookie(sb, name, need_inode);
34534 ++ ctx = erofs_fscache_domain_init_cookie(sb, name, flags);
34535 + mutex_unlock(&erofs_domain_cookies_lock);
34536 + return ctx;
34537 + }
34538 +
34539 + struct erofs_fscache *erofs_fscache_register_cookie(struct super_block *sb,
34540 +- char *name, bool need_inode)
34541 ++ char *name,
34542 ++ unsigned int flags)
34543 + {
34544 + if (EROFS_SB(sb)->domain_id)
34545 +- return erofs_domain_register_cookie(sb, name, need_inode);
34546 +- return erofs_fscache_acquire_cookie(sb, name, need_inode);
34547 ++ return erofs_domain_register_cookie(sb, name, flags);
34548 ++ return erofs_fscache_acquire_cookie(sb, name, flags);
34549 + }
34550 +
34551 + void erofs_fscache_unregister_cookie(struct erofs_fscache *ctx)
34552 +@@ -647,6 +657,7 @@ int erofs_fscache_register_fs(struct super_block *sb)
34553 + int ret;
34554 + struct erofs_sb_info *sbi = EROFS_SB(sb);
34555 + struct erofs_fscache *fscache;
34556 ++ unsigned int flags;
34557 +
34558 + if (sbi->domain_id)
34559 + ret = erofs_fscache_register_domain(sb);
34560 +@@ -655,8 +666,20 @@ int erofs_fscache_register_fs(struct super_block *sb)
34561 + if (ret)
34562 + return ret;
34563 +
34564 +- /* acquired domain/volume will be relinquished in kill_sb() on error */
34565 +- fscache = erofs_fscache_register_cookie(sb, sbi->fsid, true);
34566 ++ /*
34567 ++ * When shared domain is enabled, using NEED_NOEXIST to guarantee
34568 ++ * the primary data blob (aka fsid) is unique in the shared domain.
34569 ++ *
34570 ++ * For non-shared-domain case, fscache_acquire_volume() invoked by
34571 ++ * erofs_fscache_register_volume() has already guaranteed
34572 ++ * the uniqueness of primary data blob.
34573 ++ *
34574 ++ * Acquired domain/volume will be relinquished in kill_sb() on error.
34575 ++ */
34576 ++ flags = EROFS_REG_COOKIE_NEED_INODE;
34577 ++ if (sbi->domain_id)
34578 ++ flags |= EROFS_REG_COOKIE_NEED_NOEXIST;
34579 ++ fscache = erofs_fscache_register_cookie(sb, sbi->fsid, flags);
34580 + if (IS_ERR(fscache))
34581 + return PTR_ERR(fscache);
34582 +
34583 +diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
34584 +index 05dc686277220..e51f27b6bde15 100644
34585 +--- a/fs/erofs/internal.h
34586 ++++ b/fs/erofs/internal.h
34587 +@@ -604,13 +604,18 @@ static inline int z_erofs_load_lzma_config(struct super_block *sb,
34588 + }
34589 + #endif /* !CONFIG_EROFS_FS_ZIP */
34590 +
34591 ++/* flags for erofs_fscache_register_cookie() */
34592 ++#define EROFS_REG_COOKIE_NEED_INODE 1
34593 ++#define EROFS_REG_COOKIE_NEED_NOEXIST 2
34594 ++
34595 + /* fscache.c */
34596 + #ifdef CONFIG_EROFS_FS_ONDEMAND
34597 + int erofs_fscache_register_fs(struct super_block *sb);
34598 + void erofs_fscache_unregister_fs(struct super_block *sb);
34599 +
34600 + struct erofs_fscache *erofs_fscache_register_cookie(struct super_block *sb,
34601 +- char *name, bool need_inode);
34602 ++ char *name,
34603 ++ unsigned int flags);
34604 + void erofs_fscache_unregister_cookie(struct erofs_fscache *fscache);
34605 +
34606 + extern const struct address_space_operations erofs_fscache_access_aops;
34607 +@@ -623,7 +628,8 @@ static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
34608 +
34609 + static inline
34610 + struct erofs_fscache *erofs_fscache_register_cookie(struct super_block *sb,
34611 +- char *name, bool need_inode)
34612 ++ char *name,
34613 ++ unsigned int flags)
34614 + {
34615 + return ERR_PTR(-EOPNOTSUPP);
34616 + }
34617 +diff --git a/fs/erofs/super.c b/fs/erofs/super.c
34618 +index 1c7dcca702b3e..481788c24a68b 100644
34619 +--- a/fs/erofs/super.c
34620 ++++ b/fs/erofs/super.c
34621 +@@ -245,7 +245,7 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
34622 + }
34623 +
34624 + if (erofs_is_fscache_mode(sb)) {
34625 +- fscache = erofs_fscache_register_cookie(sb, dif->path, false);
34626 ++ fscache = erofs_fscache_register_cookie(sb, dif->path, 0);
34627 + if (IS_ERR(fscache))
34628 + return PTR_ERR(fscache);
34629 + dif->fscache = fscache;
34630 +diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
34631 +index b792d424d774c..cf4871834ebb2 100644
34632 +--- a/fs/erofs/zdata.c
34633 ++++ b/fs/erofs/zdata.c
34634 +@@ -488,7 +488,8 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
34635 + struct erofs_workgroup *grp;
34636 + int err;
34637 +
34638 +- if (!(map->m_flags & EROFS_MAP_ENCODED)) {
34639 ++ if (!(map->m_flags & EROFS_MAP_ENCODED) ||
34640 ++ (!ztailpacking && !(map->m_pa >> PAGE_SHIFT))) {
34641 + DBG_BUGON(1);
34642 + return -EFSCORRUPTED;
34643 + }
34644 +diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
34645 +index 0bb66927e3d06..e6d5d7a18fb06 100644
34646 +--- a/fs/erofs/zmap.c
34647 ++++ b/fs/erofs/zmap.c
34648 +@@ -694,10 +694,15 @@ static int z_erofs_do_map_blocks(struct inode *inode,
34649 + map->m_pa = blknr_to_addr(m.pblk);
34650 + err = z_erofs_get_extent_compressedlen(&m, initial_lcn);
34651 + if (err)
34652 +- goto out;
34653 ++ goto unmap_out;
34654 + }
34655 +
34656 + if (m.headtype == Z_EROFS_VLE_CLUSTER_TYPE_PLAIN) {
34657 ++ if (map->m_llen > map->m_plen) {
34658 ++ DBG_BUGON(1);
34659 ++ err = -EFSCORRUPTED;
34660 ++ goto unmap_out;
34661 ++ }
34662 + if (vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER)
34663 + map->m_algorithmformat =
34664 + Z_EROFS_COMPRESSION_INTERLACED;
34665 +@@ -718,14 +723,12 @@ static int z_erofs_do_map_blocks(struct inode *inode,
34666 + if (!err)
34667 + map->m_flags |= EROFS_MAP_FULL_MAPPED;
34668 + }
34669 ++
34670 + unmap_out:
34671 + erofs_unmap_metabuf(&m.map->buf);
34672 +-
34673 +-out:
34674 + erofs_dbg("%s, m_la %llu m_pa %llu m_llen %llu m_plen %llu m_flags 0%o",
34675 + __func__, map->m_la, map->m_pa,
34676 + map->m_llen, map->m_plen, map->m_flags);
34677 +-
34678 + return err;
34679 + }
34680 +
34681 +diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
34682 +index d315c2de136f2..74d3f2d2271f3 100644
34683 +--- a/fs/f2fs/compress.c
34684 ++++ b/fs/f2fs/compress.c
34685 +@@ -346,7 +346,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
34686 + if (!level)
34687 + level = F2FS_ZSTD_DEFAULT_CLEVEL;
34688 +
34689 +- params = zstd_get_params(F2FS_ZSTD_DEFAULT_CLEVEL, cc->rlen);
34690 ++ params = zstd_get_params(level, cc->rlen);
34691 + workspace_size = zstd_cstream_workspace_bound(&params.cParams);
34692 +
34693 + workspace = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
34694 +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
34695 +index e6355a5683b75..8b9f0b3c77232 100644
34696 +--- a/fs/f2fs/f2fs.h
34697 ++++ b/fs/f2fs/f2fs.h
34698 +@@ -2974,7 +2974,7 @@ static inline void f2fs_change_bit(unsigned int nr, char *addr)
34699 + /* Flags that should be inherited by new inodes from their parent. */
34700 + #define F2FS_FL_INHERITED (F2FS_SYNC_FL | F2FS_NODUMP_FL | F2FS_NOATIME_FL | \
34701 + F2FS_DIRSYNC_FL | F2FS_PROJINHERIT_FL | \
34702 +- F2FS_CASEFOLD_FL | F2FS_COMPR_FL | F2FS_NOCOMP_FL)
34703 ++ F2FS_CASEFOLD_FL)
34704 +
34705 + /* Flags that are appropriate for regular files (all but dir-specific ones). */
34706 + #define F2FS_REG_FLMASK (~(F2FS_DIRSYNC_FL | F2FS_PROJINHERIT_FL | \
34707 +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
34708 +index 82cda12582272..f96bbfa8b3991 100644
34709 +--- a/fs/f2fs/file.c
34710 ++++ b/fs/f2fs/file.c
34711 +@@ -1915,6 +1915,10 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
34712 + if (!f2fs_disable_compressed_file(inode))
34713 + return -EINVAL;
34714 + } else {
34715 ++ /* try to convert inline_data to support compression */
34716 ++ int err = f2fs_convert_inline_inode(inode);
34717 ++ if (err)
34718 ++ return err;
34719 + if (!f2fs_may_compress(inode))
34720 + return -EINVAL;
34721 + if (S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode))
34722 +diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
34723 +index 4546e01b2ee08..22c1f876e8c52 100644
34724 +--- a/fs/f2fs/gc.c
34725 ++++ b/fs/f2fs/gc.c
34726 +@@ -96,16 +96,6 @@ static int gc_thread_func(void *data)
34727 + * invalidated soon after by user update or deletion.
34728 + * So, I'd like to wait some time to collect dirty segments.
34729 + */
34730 +- if (sbi->gc_mode == GC_URGENT_HIGH) {
34731 +- spin_lock(&sbi->gc_urgent_high_lock);
34732 +- if (sbi->gc_urgent_high_remaining) {
34733 +- sbi->gc_urgent_high_remaining--;
34734 +- if (!sbi->gc_urgent_high_remaining)
34735 +- sbi->gc_mode = GC_NORMAL;
34736 +- }
34737 +- spin_unlock(&sbi->gc_urgent_high_lock);
34738 +- }
34739 +-
34740 + if (sbi->gc_mode == GC_URGENT_HIGH ||
34741 + sbi->gc_mode == GC_URGENT_MID) {
34742 + wait_ms = gc_th->urgent_sleep_time;
34743 +@@ -162,6 +152,15 @@ do_gc:
34744 + /* balancing f2fs's metadata periodically */
34745 + f2fs_balance_fs_bg(sbi, true);
34746 + next:
34747 ++ if (sbi->gc_mode == GC_URGENT_HIGH) {
34748 ++ spin_lock(&sbi->gc_urgent_high_lock);
34749 ++ if (sbi->gc_urgent_high_remaining) {
34750 ++ sbi->gc_urgent_high_remaining--;
34751 ++ if (!sbi->gc_urgent_high_remaining)
34752 ++ sbi->gc_mode = GC_NORMAL;
34753 ++ }
34754 ++ spin_unlock(&sbi->gc_urgent_high_lock);
34755 ++ }
34756 + sb_end_write(sbi->sb);
34757 +
34758 + } while (!kthread_should_stop());
34759 +@@ -1744,8 +1743,9 @@ freed:
34760 + get_valid_blocks(sbi, segno, false) == 0)
34761 + seg_freed++;
34762 +
34763 +- if (__is_large_section(sbi) && segno + 1 < end_segno)
34764 +- sbi->next_victim_seg[gc_type] = segno + 1;
34765 ++ if (__is_large_section(sbi))
34766 ++ sbi->next_victim_seg[gc_type] =
34767 ++ (segno + 1 < end_segno) ? segno + 1 : NULL_SEGNO;
34768 + skip:
34769 + f2fs_put_page(sum_page, 0);
34770 + }
34771 +@@ -2133,8 +2133,6 @@ out_unlock:
34772 + if (err)
34773 + return err;
34774 +
34775 +- set_sbi_flag(sbi, SBI_IS_RESIZEFS);
34776 +-
34777 + freeze_super(sbi->sb);
34778 + f2fs_down_write(&sbi->gc_lock);
34779 + f2fs_down_write(&sbi->cp_global_sem);
34780 +@@ -2150,6 +2148,7 @@ out_unlock:
34781 + if (err)
34782 + goto out_err;
34783 +
34784 ++ set_sbi_flag(sbi, SBI_IS_RESIZEFS);
34785 + err = free_segment_range(sbi, secs, false);
34786 + if (err)
34787 + goto recover_out;
34788 +@@ -2173,6 +2172,7 @@ out_unlock:
34789 + f2fs_commit_super(sbi, false);
34790 + }
34791 + recover_out:
34792 ++ clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
34793 + if (err) {
34794 + set_sbi_flag(sbi, SBI_NEED_FSCK);
34795 + f2fs_err(sbi, "resize_fs failed, should run fsck to repair!");
34796 +@@ -2185,6 +2185,5 @@ out_err:
34797 + f2fs_up_write(&sbi->cp_global_sem);
34798 + f2fs_up_write(&sbi->gc_lock);
34799 + thaw_super(sbi->sb);
34800 +- clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
34801 + return err;
34802 + }
34803 +diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
34804 +index a389772fd212a..b6c14c9c33a08 100644
34805 +--- a/fs/f2fs/namei.c
34806 ++++ b/fs/f2fs/namei.c
34807 +@@ -22,8 +22,163 @@
34808 + #include "acl.h"
34809 + #include <trace/events/f2fs.h>
34810 +
34811 ++static inline int is_extension_exist(const unsigned char *s, const char *sub,
34812 ++ bool tmp_ext)
34813 ++{
34814 ++ size_t slen = strlen(s);
34815 ++ size_t sublen = strlen(sub);
34816 ++ int i;
34817 ++
34818 ++ if (sublen == 1 && *sub == '*')
34819 ++ return 1;
34820 ++
34821 ++ /*
34822 ++ * filename format of multimedia file should be defined as:
34823 ++ * "filename + '.' + extension + (optional: '.' + temp extension)".
34824 ++ */
34825 ++ if (slen < sublen + 2)
34826 ++ return 0;
34827 ++
34828 ++ if (!tmp_ext) {
34829 ++ /* file has no temp extension */
34830 ++ if (s[slen - sublen - 1] != '.')
34831 ++ return 0;
34832 ++ return !strncasecmp(s + slen - sublen, sub, sublen);
34833 ++ }
34834 ++
34835 ++ for (i = 1; i < slen - sublen; i++) {
34836 ++ if (s[i] != '.')
34837 ++ continue;
34838 ++ if (!strncasecmp(s + i + 1, sub, sublen))
34839 ++ return 1;
34840 ++ }
34841 ++
34842 ++ return 0;
34843 ++}
34844 ++
34845 ++int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
34846 ++ bool hot, bool set)
34847 ++{
34848 ++ __u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
34849 ++ int cold_count = le32_to_cpu(sbi->raw_super->extension_count);
34850 ++ int hot_count = sbi->raw_super->hot_ext_count;
34851 ++ int total_count = cold_count + hot_count;
34852 ++ int start, count;
34853 ++ int i;
34854 ++
34855 ++ if (set) {
34856 ++ if (total_count == F2FS_MAX_EXTENSION)
34857 ++ return -EINVAL;
34858 ++ } else {
34859 ++ if (!hot && !cold_count)
34860 ++ return -EINVAL;
34861 ++ if (hot && !hot_count)
34862 ++ return -EINVAL;
34863 ++ }
34864 ++
34865 ++ if (hot) {
34866 ++ start = cold_count;
34867 ++ count = total_count;
34868 ++ } else {
34869 ++ start = 0;
34870 ++ count = cold_count;
34871 ++ }
34872 ++
34873 ++ for (i = start; i < count; i++) {
34874 ++ if (strcmp(name, extlist[i]))
34875 ++ continue;
34876 ++
34877 ++ if (set)
34878 ++ return -EINVAL;
34879 ++
34880 ++ memcpy(extlist[i], extlist[i + 1],
34881 ++ F2FS_EXTENSION_LEN * (total_count - i - 1));
34882 ++ memset(extlist[total_count - 1], 0, F2FS_EXTENSION_LEN);
34883 ++ if (hot)
34884 ++ sbi->raw_super->hot_ext_count = hot_count - 1;
34885 ++ else
34886 ++ sbi->raw_super->extension_count =
34887 ++ cpu_to_le32(cold_count - 1);
34888 ++ return 0;
34889 ++ }
34890 ++
34891 ++ if (!set)
34892 ++ return -EINVAL;
34893 ++
34894 ++ if (hot) {
34895 ++ memcpy(extlist[count], name, strlen(name));
34896 ++ sbi->raw_super->hot_ext_count = hot_count + 1;
34897 ++ } else {
34898 ++ char buf[F2FS_MAX_EXTENSION][F2FS_EXTENSION_LEN];
34899 ++
34900 ++ memcpy(buf, &extlist[cold_count],
34901 ++ F2FS_EXTENSION_LEN * hot_count);
34902 ++ memset(extlist[cold_count], 0, F2FS_EXTENSION_LEN);
34903 ++ memcpy(extlist[cold_count], name, strlen(name));
34904 ++ memcpy(&extlist[cold_count + 1], buf,
34905 ++ F2FS_EXTENSION_LEN * hot_count);
34906 ++ sbi->raw_super->extension_count = cpu_to_le32(cold_count + 1);
34907 ++ }
34908 ++ return 0;
34909 ++}
34910 ++
34911 ++static void set_compress_new_inode(struct f2fs_sb_info *sbi, struct inode *dir,
34912 ++ struct inode *inode, const unsigned char *name)
34913 ++{
34914 ++ __u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
34915 ++ unsigned char (*noext)[F2FS_EXTENSION_LEN] =
34916 ++ F2FS_OPTION(sbi).noextensions;
34917 ++ unsigned char (*ext)[F2FS_EXTENSION_LEN] = F2FS_OPTION(sbi).extensions;
34918 ++ unsigned char ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
34919 ++ unsigned char noext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt;
34920 ++ int i, cold_count, hot_count;
34921 ++
34922 ++ if (!f2fs_sb_has_compression(sbi))
34923 ++ return;
34924 ++
34925 ++ if (S_ISDIR(inode->i_mode))
34926 ++ goto inherit_comp;
34927 ++
34928 ++ /* This name comes only from normal files. */
34929 ++ if (!name)
34930 ++ return;
34931 ++
34932 ++ /* Don't compress hot files. */
34933 ++ f2fs_down_read(&sbi->sb_lock);
34934 ++ cold_count = le32_to_cpu(sbi->raw_super->extension_count);
34935 ++ hot_count = sbi->raw_super->hot_ext_count;
34936 ++ for (i = cold_count; i < cold_count + hot_count; i++)
34937 ++ if (is_extension_exist(name, extlist[i], false))
34938 ++ break;
34939 ++ f2fs_up_read(&sbi->sb_lock);
34940 ++ if (i < (cold_count + hot_count))
34941 ++ return;
34942 ++
34943 ++ /* Don't compress unallowed extension. */
34944 ++ for (i = 0; i < noext_cnt; i++)
34945 ++ if (is_extension_exist(name, noext[i], false))
34946 ++ return;
34947 ++
34948 ++ /* Compress wanting extension. */
34949 ++ for (i = 0; i < ext_cnt; i++) {
34950 ++ if (is_extension_exist(name, ext[i], false)) {
34951 ++ set_compress_context(inode);
34952 ++ return;
34953 ++ }
34954 ++ }
34955 ++inherit_comp:
34956 ++ /* Inherit the {no-}compression flag in directory */
34957 ++ if (F2FS_I(dir)->i_flags & F2FS_NOCOMP_FL) {
34958 ++ F2FS_I(inode)->i_flags |= F2FS_NOCOMP_FL;
34959 ++ f2fs_mark_inode_dirty_sync(inode, true);
34960 ++ } else if (F2FS_I(dir)->i_flags & F2FS_COMPR_FL) {
34961 ++ set_compress_context(inode);
34962 ++ }
34963 ++}
34964 ++
34965 + static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
34966 +- struct inode *dir, umode_t mode)
34967 ++ struct inode *dir, umode_t mode,
34968 ++ const char *name)
34969 + {
34970 + struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
34971 + nid_t ino;
34972 +@@ -114,12 +269,8 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
34973 + if (F2FS_I(inode)->i_flags & F2FS_PROJINHERIT_FL)
34974 + set_inode_flag(inode, FI_PROJ_INHERIT);
34975 +
34976 +- if (f2fs_sb_has_compression(sbi)) {
34977 +- /* Inherit the compression flag in directory */
34978 +- if ((F2FS_I(dir)->i_flags & F2FS_COMPR_FL) &&
34979 +- f2fs_may_compress(inode))
34980 +- set_compress_context(inode);
34981 +- }
34982 ++ /* Check compression first. */
34983 ++ set_compress_new_inode(sbi, dir, inode, name);
34984 +
34985 + /* Should enable inline_data after compression set */
34986 + if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))
34987 +@@ -153,40 +304,6 @@ fail_drop:
34988 + return ERR_PTR(err);
34989 + }
34990 +
34991 +-static inline int is_extension_exist(const unsigned char *s, const char *sub,
34992 +- bool tmp_ext)
34993 +-{
34994 +- size_t slen = strlen(s);
34995 +- size_t sublen = strlen(sub);
34996 +- int i;
34997 +-
34998 +- if (sublen == 1 && *sub == '*')
34999 +- return 1;
35000 +-
35001 +- /*
35002 +- * filename format of multimedia file should be defined as:
35003 +- * "filename + '.' + extension + (optional: '.' + temp extension)".
35004 +- */
35005 +- if (slen < sublen + 2)
35006 +- return 0;
35007 +-
35008 +- if (!tmp_ext) {
35009 +- /* file has no temp extension */
35010 +- if (s[slen - sublen - 1] != '.')
35011 +- return 0;
35012 +- return !strncasecmp(s + slen - sublen, sub, sublen);
35013 +- }
35014 +-
35015 +- for (i = 1; i < slen - sublen; i++) {
35016 +- if (s[i] != '.')
35017 +- continue;
35018 +- if (!strncasecmp(s + i + 1, sub, sublen))
35019 +- return 1;
35020 +- }
35021 +-
35022 +- return 0;
35023 +-}
35024 +-
35025 + /*
35026 + * Set file's temperature for hot/cold data separation
35027 + */
35028 +@@ -217,124 +334,6 @@ static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *
35029 + file_set_hot(inode);
35030 + }
35031 +
35032 +-int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
35033 +- bool hot, bool set)
35034 +-{
35035 +- __u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
35036 +- int cold_count = le32_to_cpu(sbi->raw_super->extension_count);
35037 +- int hot_count = sbi->raw_super->hot_ext_count;
35038 +- int total_count = cold_count + hot_count;
35039 +- int start, count;
35040 +- int i;
35041 +-
35042 +- if (set) {
35043 +- if (total_count == F2FS_MAX_EXTENSION)
35044 +- return -EINVAL;
35045 +- } else {
35046 +- if (!hot && !cold_count)
35047 +- return -EINVAL;
35048 +- if (hot && !hot_count)
35049 +- return -EINVAL;
35050 +- }
35051 +-
35052 +- if (hot) {
35053 +- start = cold_count;
35054 +- count = total_count;
35055 +- } else {
35056 +- start = 0;
35057 +- count = cold_count;
35058 +- }
35059 +-
35060 +- for (i = start; i < count; i++) {
35061 +- if (strcmp(name, extlist[i]))
35062 +- continue;
35063 +-
35064 +- if (set)
35065 +- return -EINVAL;
35066 +-
35067 +- memcpy(extlist[i], extlist[i + 1],
35068 +- F2FS_EXTENSION_LEN * (total_count - i - 1));
35069 +- memset(extlist[total_count - 1], 0, F2FS_EXTENSION_LEN);
35070 +- if (hot)
35071 +- sbi->raw_super->hot_ext_count = hot_count - 1;
35072 +- else
35073 +- sbi->raw_super->extension_count =
35074 +- cpu_to_le32(cold_count - 1);
35075 +- return 0;
35076 +- }
35077 +-
35078 +- if (!set)
35079 +- return -EINVAL;
35080 +-
35081 +- if (hot) {
35082 +- memcpy(extlist[count], name, strlen(name));
35083 +- sbi->raw_super->hot_ext_count = hot_count + 1;
35084 +- } else {
35085 +- char buf[F2FS_MAX_EXTENSION][F2FS_EXTENSION_LEN];
35086 +-
35087 +- memcpy(buf, &extlist[cold_count],
35088 +- F2FS_EXTENSION_LEN * hot_count);
35089 +- memset(extlist[cold_count], 0, F2FS_EXTENSION_LEN);
35090 +- memcpy(extlist[cold_count], name, strlen(name));
35091 +- memcpy(&extlist[cold_count + 1], buf,
35092 +- F2FS_EXTENSION_LEN * hot_count);
35093 +- sbi->raw_super->extension_count = cpu_to_le32(cold_count + 1);
35094 +- }
35095 +- return 0;
35096 +-}
35097 +-
35098 +-static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
35099 +- const unsigned char *name)
35100 +-{
35101 +- __u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
35102 +- unsigned char (*noext)[F2FS_EXTENSION_LEN] = F2FS_OPTION(sbi).noextensions;
35103 +- unsigned char (*ext)[F2FS_EXTENSION_LEN] = F2FS_OPTION(sbi).extensions;
35104 +- unsigned char ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
35105 +- unsigned char noext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt;
35106 +- int i, cold_count, hot_count;
35107 +-
35108 +- if (!f2fs_sb_has_compression(sbi) ||
35109 +- F2FS_I(inode)->i_flags & F2FS_NOCOMP_FL ||
35110 +- !f2fs_may_compress(inode) ||
35111 +- (!ext_cnt && !noext_cnt))
35112 +- return;
35113 +-
35114 +- f2fs_down_read(&sbi->sb_lock);
35115 +-
35116 +- cold_count = le32_to_cpu(sbi->raw_super->extension_count);
35117 +- hot_count = sbi->raw_super->hot_ext_count;
35118 +-
35119 +- for (i = cold_count; i < cold_count + hot_count; i++) {
35120 +- if (is_extension_exist(name, extlist[i], false)) {
35121 +- f2fs_up_read(&sbi->sb_lock);
35122 +- return;
35123 +- }
35124 +- }
35125 +-
35126 +- f2fs_up_read(&sbi->sb_lock);
35127 +-
35128 +- for (i = 0; i < noext_cnt; i++) {
35129 +- if (is_extension_exist(name, noext[i], false)) {
35130 +- f2fs_disable_compressed_file(inode);
35131 +- return;
35132 +- }
35133 +- }
35134 +-
35135 +- if (is_inode_flag_set(inode, FI_COMPRESSED_FILE))
35136 +- return;
35137 +-
35138 +- for (i = 0; i < ext_cnt; i++) {
35139 +- if (!is_extension_exist(name, ext[i], false))
35140 +- continue;
35141 +-
35142 +- /* Do not use inline_data with compression */
35143 +- stat_dec_inline_inode(inode);
35144 +- clear_inode_flag(inode, FI_INLINE_DATA);
35145 +- set_compress_context(inode);
35146 +- return;
35147 +- }
35148 +-}
35149 +-
35150 + static int f2fs_create(struct user_namespace *mnt_userns, struct inode *dir,
35151 + struct dentry *dentry, umode_t mode, bool excl)
35152 + {
35153 +@@ -352,15 +351,13 @@ static int f2fs_create(struct user_namespace *mnt_userns, struct inode *dir,
35154 + if (err)
35155 + return err;
35156 +
35157 +- inode = f2fs_new_inode(mnt_userns, dir, mode);
35158 ++ inode = f2fs_new_inode(mnt_userns, dir, mode, dentry->d_name.name);
35159 + if (IS_ERR(inode))
35160 + return PTR_ERR(inode);
35161 +
35162 + if (!test_opt(sbi, DISABLE_EXT_IDENTIFY))
35163 + set_file_temperature(sbi, inode, dentry->d_name.name);
35164 +
35165 +- set_compress_inode(sbi, inode, dentry->d_name.name);
35166 +-
35167 + inode->i_op = &f2fs_file_inode_operations;
35168 + inode->i_fop = &f2fs_file_operations;
35169 + inode->i_mapping->a_ops = &f2fs_dblock_aops;
35170 +@@ -689,7 +686,7 @@ static int f2fs_symlink(struct user_namespace *mnt_userns, struct inode *dir,
35171 + if (err)
35172 + return err;
35173 +
35174 +- inode = f2fs_new_inode(mnt_userns, dir, S_IFLNK | S_IRWXUGO);
35175 ++ inode = f2fs_new_inode(mnt_userns, dir, S_IFLNK | S_IRWXUGO, NULL);
35176 + if (IS_ERR(inode))
35177 + return PTR_ERR(inode);
35178 +
35179 +@@ -760,7 +757,7 @@ static int f2fs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
35180 + if (err)
35181 + return err;
35182 +
35183 +- inode = f2fs_new_inode(mnt_userns, dir, S_IFDIR | mode);
35184 ++ inode = f2fs_new_inode(mnt_userns, dir, S_IFDIR | mode, NULL);
35185 + if (IS_ERR(inode))
35186 + return PTR_ERR(inode);
35187 +
35188 +@@ -817,7 +814,7 @@ static int f2fs_mknod(struct user_namespace *mnt_userns, struct inode *dir,
35189 + if (err)
35190 + return err;
35191 +
35192 +- inode = f2fs_new_inode(mnt_userns, dir, mode);
35193 ++ inode = f2fs_new_inode(mnt_userns, dir, mode, NULL);
35194 + if (IS_ERR(inode))
35195 + return PTR_ERR(inode);
35196 +
35197 +@@ -856,7 +853,7 @@ static int __f2fs_tmpfile(struct user_namespace *mnt_userns, struct inode *dir,
35198 + if (err)
35199 + return err;
35200 +
35201 +- inode = f2fs_new_inode(mnt_userns, dir, mode);
35202 ++ inode = f2fs_new_inode(mnt_userns, dir, mode, NULL);
35203 + if (IS_ERR(inode))
35204 + return PTR_ERR(inode);
35205 +
35206 +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
35207 +index acf3d3fa43635..c1d0713666ee5 100644
35208 +--- a/fs/f2fs/segment.c
35209 ++++ b/fs/f2fs/segment.c
35210 +@@ -1170,7 +1170,7 @@ submit:
35211 +
35212 + atomic_inc(&dcc->issued_discard);
35213 +
35214 +- f2fs_update_iostat(sbi, NULL, FS_DISCARD, 1);
35215 ++ f2fs_update_iostat(sbi, NULL, FS_DISCARD, len * F2FS_BLKSIZE);
35216 +
35217 + lstart += len;
35218 + start += len;
35219 +@@ -1448,7 +1448,7 @@ retry:
35220 + if (i + 1 < dpolicy->granularity)
35221 + break;
35222 +
35223 +- if (i < DEFAULT_DISCARD_GRANULARITY && dpolicy->ordered)
35224 ++ if (i + 1 < DEFAULT_DISCARD_GRANULARITY && dpolicy->ordered)
35225 + return __issue_discard_cmd_orderly(sbi, dpolicy);
35226 +
35227 + pend_list = &dcc->pend_list[i];
35228 +@@ -2025,8 +2025,10 @@ int f2fs_start_discard_thread(struct f2fs_sb_info *sbi)
35229 +
35230 + dcc->f2fs_issue_discard = kthread_run(issue_discard_thread, sbi,
35231 + "f2fs_discard-%u:%u", MAJOR(dev), MINOR(dev));
35232 +- if (IS_ERR(dcc->f2fs_issue_discard))
35233 ++ if (IS_ERR(dcc->f2fs_issue_discard)) {
35234 + err = PTR_ERR(dcc->f2fs_issue_discard);
35235 ++ dcc->f2fs_issue_discard = NULL;
35236 ++ }
35237 +
35238 + return err;
35239 + }
35240 +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
35241 +index 3834ead046200..67d51f5276061 100644
35242 +--- a/fs/f2fs/super.c
35243 ++++ b/fs/f2fs/super.c
35244 +@@ -4188,6 +4188,9 @@ try_onemore:
35245 + if (err)
35246 + goto free_bio_info;
35247 +
35248 ++ spin_lock_init(&sbi->error_lock);
35249 ++ memcpy(sbi->errors, raw_super->s_errors, MAX_F2FS_ERRORS);
35250 ++
35251 + init_f2fs_rwsem(&sbi->cp_rwsem);
35252 + init_f2fs_rwsem(&sbi->quota_sem);
35253 + init_waitqueue_head(&sbi->cp_wait);
35254 +@@ -4255,9 +4258,6 @@ try_onemore:
35255 + goto free_devices;
35256 + }
35257 +
35258 +- spin_lock_init(&sbi->error_lock);
35259 +- memcpy(sbi->errors, raw_super->s_errors, MAX_F2FS_ERRORS);
35260 +-
35261 + sbi->total_valid_node_count =
35262 + le32_to_cpu(sbi->ckpt->valid_node_count);
35263 + percpu_counter_set(&sbi->total_valid_inode_count,
35264 +@@ -4523,9 +4523,9 @@ free_nm:
35265 + f2fs_destroy_node_manager(sbi);
35266 + free_sm:
35267 + f2fs_destroy_segment_manager(sbi);
35268 +- f2fs_destroy_post_read_wq(sbi);
35269 + stop_ckpt_thread:
35270 + f2fs_stop_ckpt_thread(sbi);
35271 ++ f2fs_destroy_post_read_wq(sbi);
35272 + free_devices:
35273 + destroy_device_list(sbi);
35274 + kvfree(sbi->ckpt);
35275 +diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
35276 +index df335c258eb08..235a0948f6cc6 100644
35277 +--- a/fs/gfs2/glock.c
35278 ++++ b/fs/gfs2/glock.c
35279 +@@ -1039,6 +1039,7 @@ static void delete_work_func(struct work_struct *work)
35280 + if (gfs2_queue_delete_work(gl, 5 * HZ))
35281 + return;
35282 + }
35283 ++ goto out;
35284 + }
35285 +
35286 + inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino,
35287 +@@ -1051,6 +1052,7 @@ static void delete_work_func(struct work_struct *work)
35288 + d_prune_aliases(inode);
35289 + iput(inode);
35290 + }
35291 ++out:
35292 + gfs2_glock_put(gl);
35293 + }
35294 +
35295 +diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
35296 +index c4526f16355d5..a0746be3c1de7 100644
35297 +--- a/fs/hfs/inode.c
35298 ++++ b/fs/hfs/inode.c
35299 +@@ -458,6 +458,8 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
35300 + /* panic? */
35301 + return -EIO;
35302 +
35303 ++ if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
35304 ++ return -EIO;
35305 + fd.search_key->cat = HFS_I(main_inode)->cat_key;
35306 + if (hfs_brec_find(&fd))
35307 + /* panic? */
35308 +diff --git a/fs/hfs/trans.c b/fs/hfs/trans.c
35309 +index 39f5e343bf4d4..fdb0edb8a607d 100644
35310 +--- a/fs/hfs/trans.c
35311 ++++ b/fs/hfs/trans.c
35312 +@@ -109,7 +109,7 @@ void hfs_asc2mac(struct super_block *sb, struct hfs_name *out, const struct qstr
35313 + if (nls_io) {
35314 + wchar_t ch;
35315 +
35316 +- while (srclen > 0) {
35317 ++ while (srclen > 0 && dstlen > 0) {
35318 + size = nls_io->char2uni(src, srclen, &ch);
35319 + if (size < 0) {
35320 + ch = '?';
35321 +diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
35322 +index df7772335dc0e..8eea709e36599 100644
35323 +--- a/fs/hugetlbfs/inode.c
35324 ++++ b/fs/hugetlbfs/inode.c
35325 +@@ -1377,7 +1377,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
35326 +
35327 + case Opt_size:
35328 + /* memparse() will accept a K/M/G without a digit */
35329 +- if (!isdigit(param->string[0]))
35330 ++ if (!param->string || !isdigit(param->string[0]))
35331 + goto bad_val;
35332 + ctx->max_size_opt = memparse(param->string, &rest);
35333 + ctx->max_val_type = SIZE_STD;
35334 +@@ -1387,7 +1387,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
35335 +
35336 + case Opt_nr_inodes:
35337 + /* memparse() will accept a K/M/G without a digit */
35338 +- if (!isdigit(param->string[0]))
35339 ++ if (!param->string || !isdigit(param->string[0]))
35340 + goto bad_val;
35341 + ctx->nr_inodes = memparse(param->string, &rest);
35342 + return 0;
35343 +@@ -1403,7 +1403,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
35344 +
35345 + case Opt_min_size:
35346 + /* memparse() will accept a K/M/G without a digit */
35347 +- if (!isdigit(param->string[0]))
35348 ++ if (!param->string || !isdigit(param->string[0]))
35349 + goto bad_val;
35350 + ctx->min_size_opt = memparse(param->string, &rest);
35351 + ctx->min_val_type = SIZE_STD;
35352 +diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
35353 +index 6b838d3ae7c2e..765838578a722 100644
35354 +--- a/fs/jfs/jfs_dmap.c
35355 ++++ b/fs/jfs/jfs_dmap.c
35356 +@@ -155,7 +155,7 @@ int dbMount(struct inode *ipbmap)
35357 + struct bmap *bmp;
35358 + struct dbmap_disk *dbmp_le;
35359 + struct metapage *mp;
35360 +- int i;
35361 ++ int i, err;
35362 +
35363 + /*
35364 + * allocate/initialize the in-memory bmap descriptor
35365 +@@ -170,8 +170,8 @@ int dbMount(struct inode *ipbmap)
35366 + BMAPBLKNO << JFS_SBI(ipbmap->i_sb)->l2nbperpage,
35367 + PSIZE, 0);
35368 + if (mp == NULL) {
35369 +- kfree(bmp);
35370 +- return -EIO;
35371 ++ err = -EIO;
35372 ++ goto err_kfree_bmp;
35373 + }
35374 +
35375 + /* copy the on-disk bmap descriptor to its in-memory version. */
35376 +@@ -181,9 +181,8 @@ int dbMount(struct inode *ipbmap)
35377 + bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
35378 + bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
35379 + if (!bmp->db_numag) {
35380 +- release_metapage(mp);
35381 +- kfree(bmp);
35382 +- return -EINVAL;
35383 ++ err = -EINVAL;
35384 ++ goto err_release_metapage;
35385 + }
35386 +
35387 + bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
35388 +@@ -194,6 +193,16 @@ int dbMount(struct inode *ipbmap)
35389 + bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
35390 + bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
35391 + bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
35392 ++ if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) {
35393 ++ err = -EINVAL;
35394 ++ goto err_release_metapage;
35395 ++ }
35396 ++
35397 ++ if (((bmp->db_mapsize - 1) >> bmp->db_agl2size) > MAXAG) {
35398 ++ err = -EINVAL;
35399 ++ goto err_release_metapage;
35400 ++ }
35401 ++
35402 + for (i = 0; i < MAXAG; i++)
35403 + bmp->db_agfree[i] = le64_to_cpu(dbmp_le->dn_agfree[i]);
35404 + bmp->db_agsize = le64_to_cpu(dbmp_le->dn_agsize);
35405 +@@ -214,6 +223,12 @@ int dbMount(struct inode *ipbmap)
35406 + BMAP_LOCK_INIT(bmp);
35407 +
35408 + return (0);
35409 ++
35410 ++err_release_metapage:
35411 ++ release_metapage(mp);
35412 ++err_kfree_bmp:
35413 ++ kfree(bmp);
35414 ++ return err;
35415 + }
35416 +
35417 +
35418 +diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c
35419 +index 9db4f5789c0ec..4fbbf88435e69 100644
35420 +--- a/fs/jfs/namei.c
35421 ++++ b/fs/jfs/namei.c
35422 +@@ -946,7 +946,7 @@ static int jfs_symlink(struct user_namespace *mnt_userns, struct inode *dip,
35423 + if (ssize <= IDATASIZE) {
35424 + ip->i_op = &jfs_fast_symlink_inode_operations;
35425 +
35426 +- ip->i_link = JFS_IP(ip)->i_inline;
35427 ++ ip->i_link = JFS_IP(ip)->i_inline_all;
35428 + memcpy(ip->i_link, name, ssize);
35429 + ip->i_size = ssize - 1;
35430 +
35431 +diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c
35432 +index 3fa2139a0b309..92b1603b5abeb 100644
35433 +--- a/fs/ksmbd/mgmt/user_session.c
35434 ++++ b/fs/ksmbd/mgmt/user_session.c
35435 +@@ -108,15 +108,17 @@ int ksmbd_session_rpc_open(struct ksmbd_session *sess, char *rpc_name)
35436 + entry->method = method;
35437 + entry->id = ksmbd_ipc_id_alloc();
35438 + if (entry->id < 0)
35439 +- goto error;
35440 ++ goto free_entry;
35441 +
35442 + resp = ksmbd_rpc_open(sess, entry->id);
35443 + if (!resp)
35444 +- goto error;
35445 ++ goto free_id;
35446 +
35447 + kvfree(resp);
35448 + return entry->id;
35449 +-error:
35450 ++free_id:
35451 ++ ksmbd_rpc_id_free(entry->id);
35452 ++free_entry:
35453 + list_del(&entry->list);
35454 + kfree(entry);
35455 + return -EINVAL;
35456 +diff --git a/fs/libfs.c b/fs/libfs.c
35457 +index 682d56345a1cf..aada4e7c87132 100644
35458 +--- a/fs/libfs.c
35459 ++++ b/fs/libfs.c
35460 +@@ -995,8 +995,8 @@ out:
35461 + EXPORT_SYMBOL_GPL(simple_attr_read);
35462 +
35463 + /* interpret the buffer as a number to call the set function with */
35464 +-ssize_t simple_attr_write(struct file *file, const char __user *buf,
35465 +- size_t len, loff_t *ppos)
35466 ++static ssize_t simple_attr_write_xsigned(struct file *file, const char __user *buf,
35467 ++ size_t len, loff_t *ppos, bool is_signed)
35468 + {
35469 + struct simple_attr *attr;
35470 + unsigned long long val;
35471 +@@ -1017,7 +1017,10 @@ ssize_t simple_attr_write(struct file *file, const char __user *buf,
35472 + goto out;
35473 +
35474 + attr->set_buf[size] = '\0';
35475 +- ret = kstrtoull(attr->set_buf, 0, &val);
35476 ++ if (is_signed)
35477 ++ ret = kstrtoll(attr->set_buf, 0, &val);
35478 ++ else
35479 ++ ret = kstrtoull(attr->set_buf, 0, &val);
35480 + if (ret)
35481 + goto out;
35482 + ret = attr->set(attr->data, val);
35483 +@@ -1027,8 +1030,21 @@ out:
35484 + mutex_unlock(&attr->mutex);
35485 + return ret;
35486 + }
35487 ++
35488 ++ssize_t simple_attr_write(struct file *file, const char __user *buf,
35489 ++ size_t len, loff_t *ppos)
35490 ++{
35491 ++ return simple_attr_write_xsigned(file, buf, len, ppos, false);
35492 ++}
35493 + EXPORT_SYMBOL_GPL(simple_attr_write);
35494 +
35495 ++ssize_t simple_attr_write_signed(struct file *file, const char __user *buf,
35496 ++ size_t len, loff_t *ppos)
35497 ++{
35498 ++ return simple_attr_write_xsigned(file, buf, len, ppos, true);
35499 ++}
35500 ++EXPORT_SYMBOL_GPL(simple_attr_write_signed);
35501 ++
35502 + /**
35503 + * generic_fh_to_dentry - generic helper for the fh_to_dentry export operation
35504 + * @sb: filesystem to do the file handle conversion on
35505 +diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
35506 +index e1c4617de7714..3515f17eaf3fb 100644
35507 +--- a/fs/lockd/svcsubs.c
35508 ++++ b/fs/lockd/svcsubs.c
35509 +@@ -176,7 +176,7 @@ nlm_delete_file(struct nlm_file *file)
35510 + }
35511 + }
35512 +
35513 +-static int nlm_unlock_files(struct nlm_file *file, fl_owner_t owner)
35514 ++static int nlm_unlock_files(struct nlm_file *file, const struct file_lock *fl)
35515 + {
35516 + struct file_lock lock;
35517 +
35518 +@@ -184,12 +184,15 @@ static int nlm_unlock_files(struct nlm_file *file, fl_owner_t owner)
35519 + lock.fl_type = F_UNLCK;
35520 + lock.fl_start = 0;
35521 + lock.fl_end = OFFSET_MAX;
35522 +- lock.fl_owner = owner;
35523 +- if (file->f_file[O_RDONLY] &&
35524 +- vfs_lock_file(file->f_file[O_RDONLY], F_SETLK, &lock, NULL))
35525 ++ lock.fl_owner = fl->fl_owner;
35526 ++ lock.fl_pid = fl->fl_pid;
35527 ++ lock.fl_flags = FL_POSIX;
35528 ++
35529 ++ lock.fl_file = file->f_file[O_RDONLY];
35530 ++ if (lock.fl_file && vfs_lock_file(lock.fl_file, F_SETLK, &lock, NULL))
35531 + goto out_err;
35532 +- if (file->f_file[O_WRONLY] &&
35533 +- vfs_lock_file(file->f_file[O_WRONLY], F_SETLK, &lock, NULL))
35534 ++ lock.fl_file = file->f_file[O_WRONLY];
35535 ++ if (lock.fl_file && vfs_lock_file(lock.fl_file, F_SETLK, &lock, NULL))
35536 + goto out_err;
35537 + return 0;
35538 + out_err:
35539 +@@ -226,7 +229,7 @@ again:
35540 + if (match(lockhost, host)) {
35541 +
35542 + spin_unlock(&flctx->flc_lock);
35543 +- if (nlm_unlock_files(file, fl->fl_owner))
35544 ++ if (nlm_unlock_files(file, fl))
35545 + return 1;
35546 + goto again;
35547 + }
35548 +diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
35549 +index 09833ec102fca..9bcd53d5c7d46 100644
35550 +--- a/fs/nfs/fs_context.c
35551 ++++ b/fs/nfs/fs_context.c
35552 +@@ -684,6 +684,8 @@ static int nfs_fs_context_parse_param(struct fs_context *fc,
35553 + return ret;
35554 + break;
35555 + case Opt_vers:
35556 ++ if (!param->string)
35557 ++ goto out_invalid_value;
35558 + trace_nfs_mount_assign(param->key, param->string);
35559 + ret = nfs_parse_version_string(fc, param->string);
35560 + if (ret < 0)
35561 +@@ -696,6 +698,8 @@ static int nfs_fs_context_parse_param(struct fs_context *fc,
35562 + break;
35563 +
35564 + case Opt_proto:
35565 ++ if (!param->string)
35566 ++ goto out_invalid_value;
35567 + trace_nfs_mount_assign(param->key, param->string);
35568 + protofamily = AF_INET;
35569 + switch (lookup_constant(nfs_xprt_protocol_tokens, param->string, -1)) {
35570 +@@ -732,6 +736,8 @@ static int nfs_fs_context_parse_param(struct fs_context *fc,
35571 + break;
35572 +
35573 + case Opt_mountproto:
35574 ++ if (!param->string)
35575 ++ goto out_invalid_value;
35576 + trace_nfs_mount_assign(param->key, param->string);
35577 + mountfamily = AF_INET;
35578 + switch (lookup_constant(nfs_xprt_protocol_tokens, param->string, -1)) {
35579 +diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
35580 +index 647fc3f547cbe..ae7d4a8c728c2 100644
35581 +--- a/fs/nfs/internal.h
35582 ++++ b/fs/nfs/internal.h
35583 +@@ -739,12 +739,10 @@ unsigned long nfs_io_size(unsigned long iosize, enum xprt_transports proto)
35584 + iosize = NFS_DEF_FILE_IO_SIZE;
35585 + else if (iosize >= NFS_MAX_FILE_IO_SIZE)
35586 + iosize = NFS_MAX_FILE_IO_SIZE;
35587 +- else
35588 +- iosize = iosize & PAGE_MASK;
35589 +
35590 +- if (proto == XPRT_TRANSPORT_UDP)
35591 ++ if (proto == XPRT_TRANSPORT_UDP || iosize < PAGE_SIZE)
35592 + return nfs_block_bits(iosize, NULL);
35593 +- return iosize;
35594 ++ return iosize & PAGE_MASK;
35595 + }
35596 +
35597 + /*
35598 +diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
35599 +index 2f336ace75554..88a23af2bd5c9 100644
35600 +--- a/fs/nfs/namespace.c
35601 ++++ b/fs/nfs/namespace.c
35602 +@@ -147,7 +147,7 @@ struct vfsmount *nfs_d_automount(struct path *path)
35603 + struct nfs_fs_context *ctx;
35604 + struct fs_context *fc;
35605 + struct vfsmount *mnt = ERR_PTR(-ENOMEM);
35606 +- struct nfs_server *server = NFS_SERVER(d_inode(path->dentry));
35607 ++ struct nfs_server *server = NFS_SB(path->dentry->d_sb);
35608 + struct nfs_client *client = server->nfs_client;
35609 + int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);
35610 + int ret;
35611 +diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
35612 +index fe1aeb0f048f2..2fd465cab631d 100644
35613 +--- a/fs/nfs/nfs42xdr.c
35614 ++++ b/fs/nfs/nfs42xdr.c
35615 +@@ -1142,7 +1142,7 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res)
35616 + if (!segs)
35617 + return -ENOMEM;
35618 +
35619 +- xdr_set_scratch_buffer(xdr, &scratch_buf, 32);
35620 ++ xdr_set_scratch_buffer(xdr, &scratch_buf, sizeof(scratch_buf));
35621 + status = -EIO;
35622 + for (i = 0; i < segments; i++) {
35623 + status = decode_read_plus_segment(xdr, &segs[i]);
35624 +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
35625 +index 86ed5c0142c3d..e51044a5f550f 100644
35626 +--- a/fs/nfs/nfs4proc.c
35627 ++++ b/fs/nfs/nfs4proc.c
35628 +@@ -122,6 +122,11 @@ nfs4_label_init_security(struct inode *dir, struct dentry *dentry,
35629 + if (nfs_server_capable(dir, NFS_CAP_SECURITY_LABEL) == 0)
35630 + return NULL;
35631 +
35632 ++ label->lfs = 0;
35633 ++ label->pi = 0;
35634 ++ label->len = 0;
35635 ++ label->label = NULL;
35636 ++
35637 + err = security_dentry_init_security(dentry, sattr->ia_mode,
35638 + &dentry->d_name, NULL,
35639 + (void **)&label->label, &label->len);
35640 +@@ -2126,18 +2131,18 @@ static struct nfs4_opendata *nfs4_open_recoverdata_alloc(struct nfs_open_context
35641 + }
35642 +
35643 + static int nfs4_open_recover_helper(struct nfs4_opendata *opendata,
35644 +- fmode_t fmode)
35645 ++ fmode_t fmode)
35646 + {
35647 + struct nfs4_state *newstate;
35648 ++ struct nfs_server *server = NFS_SB(opendata->dentry->d_sb);
35649 ++ int openflags = opendata->o_arg.open_flags;
35650 + int ret;
35651 +
35652 + if (!nfs4_mode_match_open_stateid(opendata->state, fmode))
35653 + return 0;
35654 +- opendata->o_arg.open_flags = 0;
35655 + opendata->o_arg.fmode = fmode;
35656 +- opendata->o_arg.share_access = nfs4_map_atomic_open_share(
35657 +- NFS_SB(opendata->dentry->d_sb),
35658 +- fmode, 0);
35659 ++ opendata->o_arg.share_access =
35660 ++ nfs4_map_atomic_open_share(server, fmode, openflags);
35661 + memset(&opendata->o_res, 0, sizeof(opendata->o_res));
35662 + memset(&opendata->c_res, 0, sizeof(opendata->c_res));
35663 + nfs4_init_opendata_res(opendata);
35664 +@@ -2719,10 +2724,15 @@ static int _nfs4_open_expired(struct nfs_open_context *ctx, struct nfs4_state *s
35665 + struct nfs4_opendata *opendata;
35666 + int ret;
35667 +
35668 +- opendata = nfs4_open_recoverdata_alloc(ctx, state,
35669 +- NFS4_OPEN_CLAIM_FH);
35670 ++ opendata = nfs4_open_recoverdata_alloc(ctx, state, NFS4_OPEN_CLAIM_FH);
35671 + if (IS_ERR(opendata))
35672 + return PTR_ERR(opendata);
35673 ++ /*
35674 ++ * We're not recovering a delegation, so ask for no delegation.
35675 ++ * Otherwise the recovery thread could deadlock with an outstanding
35676 ++ * delegation return.
35677 ++ */
35678 ++ opendata->o_arg.open_flags = O_DIRECT;
35679 + ret = nfs4_open_recover(opendata, state);
35680 + if (ret == -ESTALE)
35681 + d_drop(ctx->dentry);
35682 +@@ -3796,7 +3806,7 @@ nfs4_atomic_open(struct inode *dir, struct nfs_open_context *ctx,
35683 + int open_flags, struct iattr *attr, int *opened)
35684 + {
35685 + struct nfs4_state *state;
35686 +- struct nfs4_label l = {0, 0, 0, NULL}, *label = NULL;
35687 ++ struct nfs4_label l, *label;
35688 +
35689 + label = nfs4_label_init_security(dir, ctx->dentry, attr, &l);
35690 +
35691 +@@ -4013,7 +4023,7 @@ static int _nfs4_discover_trunking(struct nfs_server *server,
35692 +
35693 + page = alloc_page(GFP_KERNEL);
35694 + if (!page)
35695 +- return -ENOMEM;
35696 ++ goto out_put_cred;
35697 + locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL);
35698 + if (!locations)
35699 + goto out_free;
35700 +@@ -4035,6 +4045,8 @@ out_free_2:
35701 + kfree(locations);
35702 + out_free:
35703 + __free_page(page);
35704 ++out_put_cred:
35705 ++ put_cred(cred);
35706 + return status;
35707 + }
35708 +
35709 +@@ -4682,7 +4694,7 @@ nfs4_proc_create(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
35710 + int flags)
35711 + {
35712 + struct nfs_server *server = NFS_SERVER(dir);
35713 +- struct nfs4_label l, *ilabel = NULL;
35714 ++ struct nfs4_label l, *ilabel;
35715 + struct nfs_open_context *ctx;
35716 + struct nfs4_state *state;
35717 + int status = 0;
35718 +@@ -5033,7 +5045,7 @@ static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
35719 + struct nfs4_exception exception = {
35720 + .interruptible = true,
35721 + };
35722 +- struct nfs4_label l, *label = NULL;
35723 ++ struct nfs4_label l, *label;
35724 + int err;
35725 +
35726 + label = nfs4_label_init_security(dir, dentry, sattr, &l);
35727 +@@ -5074,7 +5086,7 @@ static int nfs4_proc_mkdir(struct inode *dir, struct dentry *dentry,
35728 + struct nfs4_exception exception = {
35729 + .interruptible = true,
35730 + };
35731 +- struct nfs4_label l, *label = NULL;
35732 ++ struct nfs4_label l, *label;
35733 + int err;
35734 +
35735 + label = nfs4_label_init_security(dir, dentry, sattr, &l);
35736 +@@ -5193,7 +5205,7 @@ static int nfs4_proc_mknod(struct inode *dir, struct dentry *dentry,
35737 + struct nfs4_exception exception = {
35738 + .interruptible = true,
35739 + };
35740 +- struct nfs4_label l, *label = NULL;
35741 ++ struct nfs4_label l, *label;
35742 + int err;
35743 +
35744 + label = nfs4_label_init_security(dir, dentry, sattr, &l);
35745 +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
35746 +index a2d2d5d1b0888..03087ef1c7b4a 100644
35747 +--- a/fs/nfs/nfs4state.c
35748 ++++ b/fs/nfs/nfs4state.c
35749 +@@ -1230,6 +1230,8 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
35750 + if (IS_ERR(task)) {
35751 + printk(KERN_ERR "%s: kthread_run: %ld\n",
35752 + __func__, PTR_ERR(task));
35753 ++ if (!nfs_client_init_is_complete(clp))
35754 ++ nfs_mark_client_ready(clp, PTR_ERR(task));
35755 + nfs4_clear_state_manager_bit(clp);
35756 + clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
35757 + nfs_put_client(clp);
35758 +diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
35759 +index acfe5f4bda480..deec76cf5afea 100644
35760 +--- a/fs/nfs/nfs4xdr.c
35761 ++++ b/fs/nfs/nfs4xdr.c
35762 +@@ -4234,19 +4234,17 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
35763 + p = xdr_inline_decode(xdr, len);
35764 + if (unlikely(!p))
35765 + return -EIO;
35766 ++ bitmap[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
35767 + if (len < NFS4_MAXLABELLEN) {
35768 +- if (label) {
35769 +- if (label->len) {
35770 +- if (label->len < len)
35771 +- return -ERANGE;
35772 +- memcpy(label->label, p, len);
35773 +- }
35774 ++ if (label && label->len) {
35775 ++ if (label->len < len)
35776 ++ return -ERANGE;
35777 ++ memcpy(label->label, p, len);
35778 + label->len = len;
35779 + label->pi = pi;
35780 + label->lfs = lfs;
35781 + status = NFS_ATTR_FATTR_V4_SECURITY_LABEL;
35782 + }
35783 +- bitmap[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
35784 + } else
35785 + printk(KERN_WARNING "%s: label too long (%u)!\n",
35786 + __func__, len);
35787 +@@ -4755,12 +4753,10 @@ static int decode_getfattr_attrs(struct xdr_stream *xdr, uint32_t *bitmap,
35788 + if (status < 0)
35789 + goto xdr_error;
35790 +
35791 +- if (fattr->label) {
35792 +- status = decode_attr_security_label(xdr, bitmap, fattr->label);
35793 +- if (status < 0)
35794 +- goto xdr_error;
35795 +- fattr->valid |= status;
35796 +- }
35797 ++ status = decode_attr_security_label(xdr, bitmap, fattr->label);
35798 ++ if (status < 0)
35799 ++ goto xdr_error;
35800 ++ fattr->valid |= status;
35801 +
35802 + xdr_error:
35803 + dprintk("%s: xdr returned %d\n", __func__, -status);
35804 +diff --git a/fs/nfsd/nfs2acl.c b/fs/nfsd/nfs2acl.c
35805 +index 13e6e6897f6cf..65d4511b7af08 100644
35806 +--- a/fs/nfsd/nfs2acl.c
35807 ++++ b/fs/nfsd/nfs2acl.c
35808 +@@ -246,7 +246,6 @@ nfsaclsvc_encode_getaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
35809 + struct nfsd3_getaclres *resp = rqstp->rq_resp;
35810 + struct dentry *dentry = resp->fh.fh_dentry;
35811 + struct inode *inode;
35812 +- int w;
35813 +
35814 + if (!svcxdr_encode_stat(xdr, resp->status))
35815 + return false;
35816 +@@ -260,15 +259,6 @@ nfsaclsvc_encode_getaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
35817 + if (xdr_stream_encode_u32(xdr, resp->mask) < 0)
35818 + return false;
35819 +
35820 +- rqstp->rq_res.page_len = w = nfsacl_size(
35821 +- (resp->mask & NFS_ACL) ? resp->acl_access : NULL,
35822 +- (resp->mask & NFS_DFACL) ? resp->acl_default : NULL);
35823 +- while (w > 0) {
35824 +- if (!*(rqstp->rq_next_page++))
35825 +- return true;
35826 +- w -= PAGE_SIZE;
35827 +- }
35828 +-
35829 + if (!nfs_stream_encode_acl(xdr, inode, resp->acl_access,
35830 + resp->mask & NFS_ACL, 0))
35831 + return false;
35832 +diff --git a/fs/nfsd/nfs3acl.c b/fs/nfsd/nfs3acl.c
35833 +index 2fb9ee3564558..a34a22e272ad5 100644
35834 +--- a/fs/nfsd/nfs3acl.c
35835 ++++ b/fs/nfsd/nfs3acl.c
35836 +@@ -171,11 +171,7 @@ nfs3svc_encode_getaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
35837 + {
35838 + struct nfsd3_getaclres *resp = rqstp->rq_resp;
35839 + struct dentry *dentry = resp->fh.fh_dentry;
35840 +- struct kvec *head = rqstp->rq_res.head;
35841 + struct inode *inode;
35842 +- unsigned int base;
35843 +- int n;
35844 +- int w;
35845 +
35846 + if (!svcxdr_encode_nfsstat3(xdr, resp->status))
35847 + return false;
35848 +@@ -187,26 +183,12 @@ nfs3svc_encode_getaclres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
35849 + if (xdr_stream_encode_u32(xdr, resp->mask) < 0)
35850 + return false;
35851 +
35852 +- base = (char *)xdr->p - (char *)head->iov_base;
35853 +-
35854 +- rqstp->rq_res.page_len = w = nfsacl_size(
35855 +- (resp->mask & NFS_ACL) ? resp->acl_access : NULL,
35856 +- (resp->mask & NFS_DFACL) ? resp->acl_default : NULL);
35857 +- while (w > 0) {
35858 +- if (!*(rqstp->rq_next_page++))
35859 +- return false;
35860 +- w -= PAGE_SIZE;
35861 +- }
35862 +-
35863 +- n = nfsacl_encode(&rqstp->rq_res, base, inode,
35864 +- resp->acl_access,
35865 +- resp->mask & NFS_ACL, 0);
35866 +- if (n > 0)
35867 +- n = nfsacl_encode(&rqstp->rq_res, base + n, inode,
35868 +- resp->acl_default,
35869 +- resp->mask & NFS_DFACL,
35870 +- NFS_ACL_DEFAULT);
35871 +- if (n <= 0)
35872 ++ if (!nfs_stream_encode_acl(xdr, inode, resp->acl_access,
35873 ++ resp->mask & NFS_ACL, 0))
35874 ++ return false;
35875 ++ if (!nfs_stream_encode_acl(xdr, inode, resp->acl_default,
35876 ++ resp->mask & NFS_DFACL,
35877 ++ NFS_ACL_DEFAULT))
35878 + return false;
35879 + break;
35880 + default:
35881 +diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
35882 +index f0e69edf5f0f1..6253cbe5f81b4 100644
35883 +--- a/fs/nfsd/nfs4callback.c
35884 ++++ b/fs/nfsd/nfs4callback.c
35885 +@@ -916,7 +916,6 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
35886 + } else {
35887 + if (!conn->cb_xprt)
35888 + return -EINVAL;
35889 +- clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
35890 + clp->cl_cb_session = ses;
35891 + args.bc_xprt = conn->cb_xprt;
35892 + args.prognumber = clp->cl_cb_session->se_cb_prog;
35893 +@@ -936,6 +935,9 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
35894 + rpc_shutdown_client(client);
35895 + return -ENOMEM;
35896 + }
35897 ++
35898 ++ if (clp->cl_minorversion != 0)
35899 ++ clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
35900 + clp->cl_cb_client = client;
35901 + clp->cl_cb_cred = cred;
35902 + rcu_read_lock();
35903 +diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
35904 +index 8beb2bc4c328f..32fe7cbfb28b3 100644
35905 +--- a/fs/nfsd/nfs4proc.c
35906 ++++ b/fs/nfsd/nfs4proc.c
35907 +@@ -1133,6 +1133,8 @@ nfsd4_setattr(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
35908 + 0, (time64_t)0);
35909 + if (!status)
35910 + status = nfserrno(attrs.na_labelerr);
35911 ++ if (!status)
35912 ++ status = nfserrno(attrs.na_aclerr);
35913 + out:
35914 + nfsd_attrs_free(&attrs);
35915 + fh_drop_write(&cstate->current_fh);
35916 +@@ -1644,6 +1646,7 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy,
35917 + u64 src_pos = copy->cp_src_pos;
35918 + u64 dst_pos = copy->cp_dst_pos;
35919 + int status;
35920 ++ loff_t end;
35921 +
35922 + /* See RFC 7862 p.67: */
35923 + if (bytes_total == 0)
35924 +@@ -1663,8 +1666,8 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy,
35925 + /* for a non-zero asynchronous copy do a commit of data */
35926 + if (nfsd4_copy_is_async(copy) && copy->cp_res.wr_bytes_written > 0) {
35927 + since = READ_ONCE(dst->f_wb_err);
35928 +- status = vfs_fsync_range(dst, copy->cp_dst_pos,
35929 +- copy->cp_res.wr_bytes_written, 0);
35930 ++ end = copy->cp_dst_pos + copy->cp_res.wr_bytes_written - 1;
35931 ++ status = vfs_fsync_range(dst, copy->cp_dst_pos, end, 0);
35932 + if (!status)
35933 + status = filemap_check_wb_err(dst->f_mapping, since);
35934 + if (!status)
35935 +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
35936 +index 836bd825ca4ad..52b5552d0d70e 100644
35937 +--- a/fs/nfsd/nfs4state.c
35938 ++++ b/fs/nfsd/nfs4state.c
35939 +@@ -675,15 +675,26 @@ find_any_file(struct nfs4_file *f)
35940 + return ret;
35941 + }
35942 +
35943 +-static struct nfsd_file *find_deleg_file(struct nfs4_file *f)
35944 ++static struct nfsd_file *find_any_file_locked(struct nfs4_file *f)
35945 + {
35946 +- struct nfsd_file *ret = NULL;
35947 ++ lockdep_assert_held(&f->fi_lock);
35948 ++
35949 ++ if (f->fi_fds[O_RDWR])
35950 ++ return f->fi_fds[O_RDWR];
35951 ++ if (f->fi_fds[O_WRONLY])
35952 ++ return f->fi_fds[O_WRONLY];
35953 ++ if (f->fi_fds[O_RDONLY])
35954 ++ return f->fi_fds[O_RDONLY];
35955 ++ return NULL;
35956 ++}
35957 ++
35958 ++static struct nfsd_file *find_deleg_file_locked(struct nfs4_file *f)
35959 ++{
35960 ++ lockdep_assert_held(&f->fi_lock);
35961 +
35962 +- spin_lock(&f->fi_lock);
35963 + if (f->fi_deleg_file)
35964 +- ret = nfsd_file_get(f->fi_deleg_file);
35965 +- spin_unlock(&f->fi_lock);
35966 +- return ret;
35967 ++ return f->fi_deleg_file;
35968 ++ return NULL;
35969 + }
35970 +
35971 + static atomic_long_t num_delegations;
35972 +@@ -2613,9 +2624,11 @@ static int nfs4_show_open(struct seq_file *s, struct nfs4_stid *st)
35973 + ols = openlockstateid(st);
35974 + oo = ols->st_stateowner;
35975 + nf = st->sc_file;
35976 +- file = find_any_file(nf);
35977 ++
35978 ++ spin_lock(&nf->fi_lock);
35979 ++ file = find_any_file_locked(nf);
35980 + if (!file)
35981 +- return 0;
35982 ++ goto out;
35983 +
35984 + seq_printf(s, "- ");
35985 + nfs4_show_stateid(s, &st->sc_stateid);
35986 +@@ -2637,8 +2650,8 @@ static int nfs4_show_open(struct seq_file *s, struct nfs4_stid *st)
35987 + seq_printf(s, ", ");
35988 + nfs4_show_owner(s, oo);
35989 + seq_printf(s, " }\n");
35990 +- nfsd_file_put(file);
35991 +-
35992 ++out:
35993 ++ spin_unlock(&nf->fi_lock);
35994 + return 0;
35995 + }
35996 +
35997 +@@ -2652,9 +2665,10 @@ static int nfs4_show_lock(struct seq_file *s, struct nfs4_stid *st)
35998 + ols = openlockstateid(st);
35999 + oo = ols->st_stateowner;
36000 + nf = st->sc_file;
36001 +- file = find_any_file(nf);
36002 ++ spin_lock(&nf->fi_lock);
36003 ++ file = find_any_file_locked(nf);
36004 + if (!file)
36005 +- return 0;
36006 ++ goto out;
36007 +
36008 + seq_printf(s, "- ");
36009 + nfs4_show_stateid(s, &st->sc_stateid);
36010 +@@ -2674,8 +2688,8 @@ static int nfs4_show_lock(struct seq_file *s, struct nfs4_stid *st)
36011 + seq_printf(s, ", ");
36012 + nfs4_show_owner(s, oo);
36013 + seq_printf(s, " }\n");
36014 +- nfsd_file_put(file);
36015 +-
36016 ++out:
36017 ++ spin_unlock(&nf->fi_lock);
36018 + return 0;
36019 + }
36020 +
36021 +@@ -2687,9 +2701,10 @@ static int nfs4_show_deleg(struct seq_file *s, struct nfs4_stid *st)
36022 +
36023 + ds = delegstateid(st);
36024 + nf = st->sc_file;
36025 +- file = find_deleg_file(nf);
36026 ++ spin_lock(&nf->fi_lock);
36027 ++ file = find_deleg_file_locked(nf);
36028 + if (!file)
36029 +- return 0;
36030 ++ goto out;
36031 +
36032 + seq_printf(s, "- ");
36033 + nfs4_show_stateid(s, &st->sc_stateid);
36034 +@@ -2705,8 +2720,8 @@ static int nfs4_show_deleg(struct seq_file *s, struct nfs4_stid *st)
36035 + seq_printf(s, ", ");
36036 + nfs4_show_fname(s, file);
36037 + seq_printf(s, " }\n");
36038 +- nfsd_file_put(file);
36039 +-
36040 ++out:
36041 ++ spin_unlock(&nf->fi_lock);
36042 + return 0;
36043 + }
36044 +
36045 +diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
36046 +index c8b89b4f94e0e..2064e6473d304 100644
36047 +--- a/fs/nilfs2/the_nilfs.c
36048 ++++ b/fs/nilfs2/the_nilfs.c
36049 +@@ -13,6 +13,7 @@
36050 + #include <linux/blkdev.h>
36051 + #include <linux/backing-dev.h>
36052 + #include <linux/random.h>
36053 ++#include <linux/log2.h>
36054 + #include <linux/crc32.h>
36055 + #include "nilfs.h"
36056 + #include "segment.h"
36057 +@@ -192,6 +193,34 @@ static int nilfs_store_log_cursor(struct the_nilfs *nilfs,
36058 + return ret;
36059 + }
36060 +
36061 ++/**
36062 ++ * nilfs_get_blocksize - get block size from raw superblock data
36063 ++ * @sb: super block instance
36064 ++ * @sbp: superblock raw data buffer
36065 ++ * @blocksize: place to store block size
36066 ++ *
36067 ++ * nilfs_get_blocksize() calculates the block size from the block size
36068 ++ * exponent information written in @sbp and stores it in @blocksize,
36069 ++ * or aborts with an error message if it's too large.
36070 ++ *
36071 ++ * Return Value: On success, 0 is returned. If the block size is too
36072 ++ * large, -EINVAL is returned.
36073 ++ */
36074 ++static int nilfs_get_blocksize(struct super_block *sb,
36075 ++ struct nilfs_super_block *sbp, int *blocksize)
36076 ++{
36077 ++ unsigned int shift_bits = le32_to_cpu(sbp->s_log_block_size);
36078 ++
36079 ++ if (unlikely(shift_bits >
36080 ++ ilog2(NILFS_MAX_BLOCK_SIZE) - BLOCK_SIZE_BITS)) {
36081 ++ nilfs_err(sb, "too large filesystem blocksize: 2 ^ %u KiB",
36082 ++ shift_bits);
36083 ++ return -EINVAL;
36084 ++ }
36085 ++ *blocksize = BLOCK_SIZE << shift_bits;
36086 ++ return 0;
36087 ++}
36088 ++
36089 + /**
36090 + * load_nilfs - load and recover the nilfs
36091 + * @nilfs: the_nilfs structure to be released
36092 +@@ -245,11 +274,15 @@ int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
36093 + nilfs->ns_sbwtime = le64_to_cpu(sbp[0]->s_wtime);
36094 +
36095 + /* verify consistency between two super blocks */
36096 +- blocksize = BLOCK_SIZE << le32_to_cpu(sbp[0]->s_log_block_size);
36097 ++ err = nilfs_get_blocksize(sb, sbp[0], &blocksize);
36098 ++ if (err)
36099 ++ goto scan_error;
36100 ++
36101 + if (blocksize != nilfs->ns_blocksize) {
36102 + nilfs_warn(sb,
36103 + "blocksize differs between two super blocks (%d != %d)",
36104 + blocksize, nilfs->ns_blocksize);
36105 ++ err = -EINVAL;
36106 + goto scan_error;
36107 + }
36108 +
36109 +@@ -443,11 +476,33 @@ static int nilfs_valid_sb(struct nilfs_super_block *sbp)
36110 + return crc == le32_to_cpu(sbp->s_sum);
36111 + }
36112 +
36113 +-static int nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset)
36114 ++/**
36115 ++ * nilfs_sb2_bad_offset - check the location of the second superblock
36116 ++ * @sbp: superblock raw data buffer
36117 ++ * @offset: byte offset of second superblock calculated from device size
36118 ++ *
36119 ++ * nilfs_sb2_bad_offset() checks if the position on the second
36120 ++ * superblock is valid or not based on the filesystem parameters
36121 ++ * stored in @sbp. If @offset points to a location within the segment
36122 ++ * area, or if the parameters themselves are not normal, it is
36123 ++ * determined to be invalid.
36124 ++ *
36125 ++ * Return Value: true if invalid, false if valid.
36126 ++ */
36127 ++static bool nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset)
36128 + {
36129 +- return offset < ((le64_to_cpu(sbp->s_nsegments) *
36130 +- le32_to_cpu(sbp->s_blocks_per_segment)) <<
36131 +- (le32_to_cpu(sbp->s_log_block_size) + 10));
36132 ++ unsigned int shift_bits = le32_to_cpu(sbp->s_log_block_size);
36133 ++ u32 blocks_per_segment = le32_to_cpu(sbp->s_blocks_per_segment);
36134 ++ u64 nsegments = le64_to_cpu(sbp->s_nsegments);
36135 ++ u64 index;
36136 ++
36137 ++ if (blocks_per_segment < NILFS_SEG_MIN_BLOCKS ||
36138 ++ shift_bits > ilog2(NILFS_MAX_BLOCK_SIZE) - BLOCK_SIZE_BITS)
36139 ++ return true;
36140 ++
36141 ++ index = offset >> (shift_bits + BLOCK_SIZE_BITS);
36142 ++ do_div(index, blocks_per_segment);
36143 ++ return index < nsegments;
36144 + }
36145 +
36146 + static void nilfs_release_super_block(struct the_nilfs *nilfs)
36147 +@@ -586,9 +641,11 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data)
36148 + if (err)
36149 + goto failed_sbh;
36150 +
36151 +- blocksize = BLOCK_SIZE << le32_to_cpu(sbp->s_log_block_size);
36152 +- if (blocksize < NILFS_MIN_BLOCK_SIZE ||
36153 +- blocksize > NILFS_MAX_BLOCK_SIZE) {
36154 ++ err = nilfs_get_blocksize(sb, sbp, &blocksize);
36155 ++ if (err)
36156 ++ goto failed_sbh;
36157 ++
36158 ++ if (blocksize < NILFS_MIN_BLOCK_SIZE) {
36159 + nilfs_err(sb,
36160 + "couldn't mount because of unsupported filesystem blocksize %d",
36161 + blocksize);
36162 +diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c
36163 +index e92bbd754365e..1930640be31a8 100644
36164 +--- a/fs/ntfs3/bitmap.c
36165 ++++ b/fs/ntfs3/bitmap.c
36166 +@@ -1424,7 +1424,7 @@ int ntfs_trim_fs(struct ntfs_sb_info *sbi, struct fstrim_range *range)
36167 +
36168 + down_read_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS);
36169 +
36170 +- for (; iw < wnd->nbits; iw++, wbit = 0) {
36171 ++ for (; iw < wnd->nwnd; iw++, wbit = 0) {
36172 + CLST lcn_wnd = iw * wbits;
36173 + struct buffer_head *bh;
36174 +
36175 +diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
36176 +index 47012c9bf505e..adc4f73722b7c 100644
36177 +--- a/fs/ntfs3/super.c
36178 ++++ b/fs/ntfs3/super.c
36179 +@@ -672,7 +672,7 @@ static u32 true_sectors_per_clst(const struct NTFS_BOOT *boot)
36180 + if (boot->sectors_per_clusters <= 0x80)
36181 + return boot->sectors_per_clusters;
36182 + if (boot->sectors_per_clusters >= 0xf4) /* limit shift to 2MB max */
36183 +- return 1U << (0 - boot->sectors_per_clusters);
36184 ++ return 1U << -(s8)boot->sectors_per_clusters;
36185 + return -EINVAL;
36186 + }
36187 +
36188 +diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
36189 +index 7de8718c68a90..ea582b4fe1d9d 100644
36190 +--- a/fs/ntfs3/xattr.c
36191 ++++ b/fs/ntfs3/xattr.c
36192 +@@ -107,7 +107,7 @@ static int ntfs_read_ea(struct ntfs_inode *ni, struct EA_FULL **ea,
36193 + return -EFBIG;
36194 +
36195 + /* Allocate memory for packed Ea. */
36196 +- ea_p = kmalloc(size + add_bytes, GFP_NOFS);
36197 ++ ea_p = kmalloc(size_add(size, add_bytes), GFP_NOFS);
36198 + if (!ea_p)
36199 + return -ENOMEM;
36200 +
36201 +diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
36202 +index 126671e6caeda..3fb98b4569a28 100644
36203 +--- a/fs/ocfs2/journal.c
36204 ++++ b/fs/ocfs2/journal.c
36205 +@@ -157,7 +157,7 @@ static void ocfs2_queue_replay_slots(struct ocfs2_super *osb,
36206 + replay_map->rm_state = REPLAY_DONE;
36207 + }
36208 +
36209 +-static void ocfs2_free_replay_slots(struct ocfs2_super *osb)
36210 ++void ocfs2_free_replay_slots(struct ocfs2_super *osb)
36211 + {
36212 + struct ocfs2_replay_map *replay_map = osb->replay_map;
36213 +
36214 +diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
36215 +index 969d0aa287187..41c382f68529e 100644
36216 +--- a/fs/ocfs2/journal.h
36217 ++++ b/fs/ocfs2/journal.h
36218 +@@ -150,6 +150,7 @@ int ocfs2_recovery_init(struct ocfs2_super *osb);
36219 + void ocfs2_recovery_exit(struct ocfs2_super *osb);
36220 +
36221 + int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
36222 ++void ocfs2_free_replay_slots(struct ocfs2_super *osb);
36223 + /*
36224 + * Journal Control:
36225 + * Initialize, Load, Shutdown, Wipe a journal.
36226 +diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
36227 +index 317126261523b..a8d5ca98fa57c 100644
36228 +--- a/fs/ocfs2/stackglue.c
36229 ++++ b/fs/ocfs2/stackglue.c
36230 +@@ -669,6 +669,8 @@ static struct ctl_table_header *ocfs2_table_header;
36231 +
36232 + static int __init ocfs2_stack_glue_init(void)
36233 + {
36234 ++ int ret;
36235 ++
36236 + strcpy(cluster_stack_name, OCFS2_STACK_PLUGIN_O2CB);
36237 +
36238 + ocfs2_table_header = register_sysctl("fs/ocfs2/nm", ocfs2_nm_table);
36239 +@@ -678,7 +680,11 @@ static int __init ocfs2_stack_glue_init(void)
36240 + return -ENOMEM; /* or something. */
36241 + }
36242 +
36243 +- return ocfs2_sysfs_init();
36244 ++ ret = ocfs2_sysfs_init();
36245 ++ if (ret)
36246 ++ unregister_sysctl_table(ocfs2_table_header);
36247 ++
36248 ++ return ret;
36249 + }
36250 +
36251 + static void __exit ocfs2_stack_glue_exit(void)
36252 +diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
36253 +index 42c993e53924f..0b0e6a1321018 100644
36254 +--- a/fs/ocfs2/super.c
36255 ++++ b/fs/ocfs2/super.c
36256 +@@ -1159,6 +1159,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
36257 + out_dismount:
36258 + atomic_set(&osb->vol_state, VOLUME_DISABLED);
36259 + wake_up(&osb->osb_mount_event);
36260 ++ ocfs2_free_replay_slots(osb);
36261 + ocfs2_dismount_volume(sb, 1);
36262 + goto out;
36263 +
36264 +@@ -1822,12 +1823,14 @@ static int ocfs2_mount_volume(struct super_block *sb)
36265 + status = ocfs2_truncate_log_init(osb);
36266 + if (status < 0) {
36267 + mlog_errno(status);
36268 +- goto out_system_inodes;
36269 ++ goto out_check_volume;
36270 + }
36271 +
36272 + ocfs2_super_unlock(osb, 1);
36273 + return 0;
36274 +
36275 ++out_check_volume:
36276 ++ ocfs2_free_replay_slots(osb);
36277 + out_system_inodes:
36278 + if (osb->local_alloc_state == OCFS2_LA_ENABLED)
36279 + ocfs2_shutdown_local_alloc(osb);
36280 +diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
36281 +index 29eaa45443727..1b508f5433846 100644
36282 +--- a/fs/orangefs/orangefs-debugfs.c
36283 ++++ b/fs/orangefs/orangefs-debugfs.c
36284 +@@ -194,15 +194,10 @@ void orangefs_debugfs_init(int debug_mask)
36285 + */
36286 + static void orangefs_kernel_debug_init(void)
36287 + {
36288 +- int rc = -ENOMEM;
36289 +- char *k_buffer = NULL;
36290 ++ static char k_buffer[ORANGEFS_MAX_DEBUG_STRING_LEN] = { };
36291 +
36292 + gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: start\n", __func__);
36293 +
36294 +- k_buffer = kzalloc(ORANGEFS_MAX_DEBUG_STRING_LEN, GFP_KERNEL);
36295 +- if (!k_buffer)
36296 +- goto out;
36297 +-
36298 + if (strlen(kernel_debug_string) + 1 < ORANGEFS_MAX_DEBUG_STRING_LEN) {
36299 + strcpy(k_buffer, kernel_debug_string);
36300 + strcat(k_buffer, "\n");
36301 +@@ -213,15 +208,14 @@ static void orangefs_kernel_debug_init(void)
36302 +
36303 + debugfs_create_file(ORANGEFS_KMOD_DEBUG_FILE, 0444, debug_dir, k_buffer,
36304 + &kernel_debug_fops);
36305 +-
36306 +-out:
36307 +- gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: rc:%d:\n", __func__, rc);
36308 + }
36309 +
36310 +
36311 + void orangefs_debugfs_cleanup(void)
36312 + {
36313 + debugfs_remove_recursive(debug_dir);
36314 ++ kfree(debug_help_string);
36315 ++ debug_help_string = NULL;
36316 + }
36317 +
36318 + /* open ORANGEFS_KMOD_DEBUG_HELP_FILE */
36319 +@@ -297,18 +291,13 @@ static int help_show(struct seq_file *m, void *v)
36320 + /*
36321 + * initialize the client-debug file.
36322 + */
36323 +-static int orangefs_client_debug_init(void)
36324 ++static void orangefs_client_debug_init(void)
36325 + {
36326 +
36327 +- int rc = -ENOMEM;
36328 +- char *c_buffer = NULL;
36329 ++ static char c_buffer[ORANGEFS_MAX_DEBUG_STRING_LEN] = { };
36330 +
36331 + gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: start\n", __func__);
36332 +
36333 +- c_buffer = kzalloc(ORANGEFS_MAX_DEBUG_STRING_LEN, GFP_KERNEL);
36334 +- if (!c_buffer)
36335 +- goto out;
36336 +-
36337 + if (strlen(client_debug_string) + 1 < ORANGEFS_MAX_DEBUG_STRING_LEN) {
36338 + strcpy(c_buffer, client_debug_string);
36339 + strcat(c_buffer, "\n");
36340 +@@ -322,13 +311,6 @@ static int orangefs_client_debug_init(void)
36341 + debug_dir,
36342 + c_buffer,
36343 + &kernel_debug_fops);
36344 +-
36345 +- rc = 0;
36346 +-
36347 +-out:
36348 +-
36349 +- gossip_debug(GOSSIP_DEBUGFS_DEBUG, "%s: rc:%d:\n", __func__, rc);
36350 +- return rc;
36351 + }
36352 +
36353 + /* open ORANGEFS_KMOD_DEBUG_FILE or ORANGEFS_CLIENT_DEBUG_FILE.*/
36354 +@@ -671,6 +653,7 @@ int orangefs_prepare_debugfs_help_string(int at_boot)
36355 + memset(debug_help_string, 0, DEBUG_HELP_STRING_SIZE);
36356 + strlcat(debug_help_string, new, string_size);
36357 + mutex_unlock(&orangefs_help_file_lock);
36358 ++ kfree(new);
36359 + }
36360 +
36361 + rc = 0;
36362 +diff --git a/fs/orangefs/orangefs-mod.c b/fs/orangefs/orangefs-mod.c
36363 +index cd7297815f91e..5ab741c60b7e2 100644
36364 +--- a/fs/orangefs/orangefs-mod.c
36365 ++++ b/fs/orangefs/orangefs-mod.c
36366 +@@ -141,7 +141,7 @@ static int __init orangefs_init(void)
36367 + gossip_err("%s: could not initialize device subsystem %d!\n",
36368 + __func__,
36369 + ret);
36370 +- goto cleanup_device;
36371 ++ goto cleanup_sysfs;
36372 + }
36373 +
36374 + ret = register_filesystem(&orangefs_fs_type);
36375 +@@ -152,11 +152,11 @@ static int __init orangefs_init(void)
36376 + goto out;
36377 + }
36378 +
36379 +- orangefs_sysfs_exit();
36380 +-
36381 +-cleanup_device:
36382 + orangefs_dev_cleanup();
36383 +
36384 ++cleanup_sysfs:
36385 ++ orangefs_sysfs_exit();
36386 ++
36387 + sysfs_init_failed:
36388 + orangefs_debugfs_cleanup();
36389 +
36390 +diff --git a/fs/orangefs/orangefs-sysfs.c b/fs/orangefs/orangefs-sysfs.c
36391 +index de80b62553bb1..be4ba03a01a0f 100644
36392 +--- a/fs/orangefs/orangefs-sysfs.c
36393 ++++ b/fs/orangefs/orangefs-sysfs.c
36394 +@@ -896,9 +896,18 @@ static struct attribute *orangefs_default_attrs[] = {
36395 + };
36396 + ATTRIBUTE_GROUPS(orangefs_default);
36397 +
36398 ++static struct kobject *orangefs_obj;
36399 ++
36400 ++static void orangefs_obj_release(struct kobject *kobj)
36401 ++{
36402 ++ kfree(orangefs_obj);
36403 ++ orangefs_obj = NULL;
36404 ++}
36405 ++
36406 + static struct kobj_type orangefs_ktype = {
36407 + .sysfs_ops = &orangefs_sysfs_ops,
36408 + .default_groups = orangefs_default_groups,
36409 ++ .release = orangefs_obj_release,
36410 + };
36411 +
36412 + static struct orangefs_attribute acache_hard_limit_attribute =
36413 +@@ -934,9 +943,18 @@ static struct attribute *acache_orangefs_default_attrs[] = {
36414 + };
36415 + ATTRIBUTE_GROUPS(acache_orangefs_default);
36416 +
36417 ++static struct kobject *acache_orangefs_obj;
36418 ++
36419 ++static void acache_orangefs_obj_release(struct kobject *kobj)
36420 ++{
36421 ++ kfree(acache_orangefs_obj);
36422 ++ acache_orangefs_obj = NULL;
36423 ++}
36424 ++
36425 + static struct kobj_type acache_orangefs_ktype = {
36426 + .sysfs_ops = &orangefs_sysfs_ops,
36427 + .default_groups = acache_orangefs_default_groups,
36428 ++ .release = acache_orangefs_obj_release,
36429 + };
36430 +
36431 + static struct orangefs_attribute capcache_hard_limit_attribute =
36432 +@@ -972,9 +990,18 @@ static struct attribute *capcache_orangefs_default_attrs[] = {
36433 + };
36434 + ATTRIBUTE_GROUPS(capcache_orangefs_default);
36435 +
36436 ++static struct kobject *capcache_orangefs_obj;
36437 ++
36438 ++static void capcache_orangefs_obj_release(struct kobject *kobj)
36439 ++{
36440 ++ kfree(capcache_orangefs_obj);
36441 ++ capcache_orangefs_obj = NULL;
36442 ++}
36443 ++
36444 + static struct kobj_type capcache_orangefs_ktype = {
36445 + .sysfs_ops = &orangefs_sysfs_ops,
36446 + .default_groups = capcache_orangefs_default_groups,
36447 ++ .release = capcache_orangefs_obj_release,
36448 + };
36449 +
36450 + static struct orangefs_attribute ccache_hard_limit_attribute =
36451 +@@ -1010,9 +1037,18 @@ static struct attribute *ccache_orangefs_default_attrs[] = {
36452 + };
36453 + ATTRIBUTE_GROUPS(ccache_orangefs_default);
36454 +
36455 ++static struct kobject *ccache_orangefs_obj;
36456 ++
36457 ++static void ccache_orangefs_obj_release(struct kobject *kobj)
36458 ++{
36459 ++ kfree(ccache_orangefs_obj);
36460 ++ ccache_orangefs_obj = NULL;
36461 ++}
36462 ++
36463 + static struct kobj_type ccache_orangefs_ktype = {
36464 + .sysfs_ops = &orangefs_sysfs_ops,
36465 + .default_groups = ccache_orangefs_default_groups,
36466 ++ .release = ccache_orangefs_obj_release,
36467 + };
36468 +
36469 + static struct orangefs_attribute ncache_hard_limit_attribute =
36470 +@@ -1048,9 +1084,18 @@ static struct attribute *ncache_orangefs_default_attrs[] = {
36471 + };
36472 + ATTRIBUTE_GROUPS(ncache_orangefs_default);
36473 +
36474 ++static struct kobject *ncache_orangefs_obj;
36475 ++
36476 ++static void ncache_orangefs_obj_release(struct kobject *kobj)
36477 ++{
36478 ++ kfree(ncache_orangefs_obj);
36479 ++ ncache_orangefs_obj = NULL;
36480 ++}
36481 ++
36482 + static struct kobj_type ncache_orangefs_ktype = {
36483 + .sysfs_ops = &orangefs_sysfs_ops,
36484 + .default_groups = ncache_orangefs_default_groups,
36485 ++ .release = ncache_orangefs_obj_release,
36486 + };
36487 +
36488 + static struct orangefs_attribute pc_acache_attribute =
36489 +@@ -1079,9 +1124,18 @@ static struct attribute *pc_orangefs_default_attrs[] = {
36490 + };
36491 + ATTRIBUTE_GROUPS(pc_orangefs_default);
36492 +
36493 ++static struct kobject *pc_orangefs_obj;
36494 ++
36495 ++static void pc_orangefs_obj_release(struct kobject *kobj)
36496 ++{
36497 ++ kfree(pc_orangefs_obj);
36498 ++ pc_orangefs_obj = NULL;
36499 ++}
36500 ++
36501 + static struct kobj_type pc_orangefs_ktype = {
36502 + .sysfs_ops = &orangefs_sysfs_ops,
36503 + .default_groups = pc_orangefs_default_groups,
36504 ++ .release = pc_orangefs_obj_release,
36505 + };
36506 +
36507 + static struct orangefs_attribute stats_reads_attribute =
36508 +@@ -1103,19 +1157,20 @@ static struct attribute *stats_orangefs_default_attrs[] = {
36509 + };
36510 + ATTRIBUTE_GROUPS(stats_orangefs_default);
36511 +
36512 ++static struct kobject *stats_orangefs_obj;
36513 ++
36514 ++static void stats_orangefs_obj_release(struct kobject *kobj)
36515 ++{
36516 ++ kfree(stats_orangefs_obj);
36517 ++ stats_orangefs_obj = NULL;
36518 ++}
36519 ++
36520 + static struct kobj_type stats_orangefs_ktype = {
36521 + .sysfs_ops = &orangefs_sysfs_ops,
36522 + .default_groups = stats_orangefs_default_groups,
36523 ++ .release = stats_orangefs_obj_release,
36524 + };
36525 +
36526 +-static struct kobject *orangefs_obj;
36527 +-static struct kobject *acache_orangefs_obj;
36528 +-static struct kobject *capcache_orangefs_obj;
36529 +-static struct kobject *ccache_orangefs_obj;
36530 +-static struct kobject *ncache_orangefs_obj;
36531 +-static struct kobject *pc_orangefs_obj;
36532 +-static struct kobject *stats_orangefs_obj;
36533 +-
36534 + int orangefs_sysfs_init(void)
36535 + {
36536 + int rc = -EINVAL;
36537 +diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
36538 +index a1a22f58ba183..d066be3b9226e 100644
36539 +--- a/fs/overlayfs/file.c
36540 ++++ b/fs/overlayfs/file.c
36541 +@@ -517,9 +517,16 @@ static long ovl_fallocate(struct file *file, int mode, loff_t offset, loff_t len
36542 + const struct cred *old_cred;
36543 + int ret;
36544 +
36545 ++ inode_lock(inode);
36546 ++ /* Update mode */
36547 ++ ovl_copyattr(inode);
36548 ++ ret = file_remove_privs(file);
36549 ++ if (ret)
36550 ++ goto out_unlock;
36551 ++
36552 + ret = ovl_real_fdget(file, &real);
36553 + if (ret)
36554 +- return ret;
36555 ++ goto out_unlock;
36556 +
36557 + old_cred = ovl_override_creds(file_inode(file)->i_sb);
36558 + ret = vfs_fallocate(real.file, mode, offset, len);
36559 +@@ -530,6 +537,9 @@ static long ovl_fallocate(struct file *file, int mode, loff_t offset, loff_t len
36560 +
36561 + fdput(real);
36562 +
36563 ++out_unlock:
36564 ++ inode_unlock(inode);
36565 ++
36566 + return ret;
36567 + }
36568 +
36569 +@@ -567,14 +577,23 @@ static loff_t ovl_copyfile(struct file *file_in, loff_t pos_in,
36570 + const struct cred *old_cred;
36571 + loff_t ret;
36572 +
36573 ++ inode_lock(inode_out);
36574 ++ if (op != OVL_DEDUPE) {
36575 ++ /* Update mode */
36576 ++ ovl_copyattr(inode_out);
36577 ++ ret = file_remove_privs(file_out);
36578 ++ if (ret)
36579 ++ goto out_unlock;
36580 ++ }
36581 ++
36582 + ret = ovl_real_fdget(file_out, &real_out);
36583 + if (ret)
36584 +- return ret;
36585 ++ goto out_unlock;
36586 +
36587 + ret = ovl_real_fdget(file_in, &real_in);
36588 + if (ret) {
36589 + fdput(real_out);
36590 +- return ret;
36591 ++ goto out_unlock;
36592 + }
36593 +
36594 + old_cred = ovl_override_creds(file_inode(file_out)->i_sb);
36595 +@@ -603,6 +622,9 @@ static loff_t ovl_copyfile(struct file *file_in, loff_t pos_in,
36596 + fdput(real_in);
36597 + fdput(real_out);
36598 +
36599 ++out_unlock:
36600 ++ inode_unlock(inode_out);
36601 ++
36602 + return ret;
36603 + }
36604 +
36605 +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
36606 +index a29a8afe9b262..3d14a3f1465d1 100644
36607 +--- a/fs/overlayfs/super.c
36608 ++++ b/fs/overlayfs/super.c
36609 +@@ -139,11 +139,16 @@ static int ovl_dentry_revalidate_common(struct dentry *dentry,
36610 + unsigned int flags, bool weak)
36611 + {
36612 + struct ovl_entry *oe = dentry->d_fsdata;
36613 ++ struct inode *inode = d_inode_rcu(dentry);
36614 + struct dentry *upper;
36615 + unsigned int i;
36616 + int ret = 1;
36617 +
36618 +- upper = ovl_dentry_upper(dentry);
36619 ++ /* Careful in RCU mode */
36620 ++ if (!inode)
36621 ++ return -ECHILD;
36622 ++
36623 ++ upper = ovl_i_dentry_upper(inode);
36624 + if (upper)
36625 + ret = ovl_revalidate_real(upper, flags, weak);
36626 +
36627 +diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
36628 +index 8adabde685f13..c49d554cc9ae9 100644
36629 +--- a/fs/pstore/Kconfig
36630 ++++ b/fs/pstore/Kconfig
36631 +@@ -126,6 +126,7 @@ config PSTORE_CONSOLE
36632 + config PSTORE_PMSG
36633 + bool "Log user space messages"
36634 + depends on PSTORE
36635 ++ select RT_MUTEXES
36636 + help
36637 + When the option is enabled, pstore will export a character
36638 + interface /dev/pmsg0 to log user space messages. On reboot
36639 +diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
36640 +index d8542ec2f38c6..18cf94b597e05 100644
36641 +--- a/fs/pstore/pmsg.c
36642 ++++ b/fs/pstore/pmsg.c
36643 +@@ -7,9 +7,10 @@
36644 + #include <linux/device.h>
36645 + #include <linux/fs.h>
36646 + #include <linux/uaccess.h>
36647 ++#include <linux/rtmutex.h>
36648 + #include "internal.h"
36649 +
36650 +-static DEFINE_MUTEX(pmsg_lock);
36651 ++static DEFINE_RT_MUTEX(pmsg_lock);
36652 +
36653 + static ssize_t write_pmsg(struct file *file, const char __user *buf,
36654 + size_t count, loff_t *ppos)
36655 +@@ -28,9 +29,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
36656 + if (!access_ok(buf, count))
36657 + return -EFAULT;
36658 +
36659 +- mutex_lock(&pmsg_lock);
36660 ++ rt_mutex_lock(&pmsg_lock);
36661 + ret = psinfo->write_user(&record, buf);
36662 +- mutex_unlock(&pmsg_lock);
36663 ++ rt_mutex_unlock(&pmsg_lock);
36664 + return ret ? ret : count;
36665 + }
36666 +
36667 +diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
36668 +index fefe3d391d3af..74e4d93f3e08d 100644
36669 +--- a/fs/pstore/ram.c
36670 ++++ b/fs/pstore/ram.c
36671 +@@ -735,6 +735,7 @@ static int ramoops_probe(struct platform_device *pdev)
36672 + /* Make sure we didn't get bogus platform data pointer. */
36673 + if (!pdata) {
36674 + pr_err("NULL platform data\n");
36675 ++ err = -EINVAL;
36676 + goto fail_out;
36677 + }
36678 +
36679 +@@ -742,6 +743,7 @@ static int ramoops_probe(struct platform_device *pdev)
36680 + !pdata->ftrace_size && !pdata->pmsg_size)) {
36681 + pr_err("The memory size and the record/console size must be "
36682 + "non-zero\n");
36683 ++ err = -EINVAL;
36684 + goto fail_out;
36685 + }
36686 +
36687 +diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
36688 +index a89e33719fcf2..8bf09886e7e66 100644
36689 +--- a/fs/pstore/ram_core.c
36690 ++++ b/fs/pstore/ram_core.c
36691 +@@ -439,7 +439,11 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
36692 + phys_addr_t addr = page_start + i * PAGE_SIZE;
36693 + pages[i] = pfn_to_page(addr >> PAGE_SHIFT);
36694 + }
36695 +- vaddr = vmap(pages, page_count, VM_MAP, prot);
36696 ++ /*
36697 ++ * VM_IOREMAP used here to bypass this region during vread()
36698 ++ * and kmap_atomic() (i.e. kcore) to avoid __va() failures.
36699 ++ */
36700 ++ vaddr = vmap(pages, page_count, VM_MAP | VM_IOREMAP, prot);
36701 + kfree(pages);
36702 +
36703 + /*
36704 +diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c
36705 +index 3d7a35d6a18bc..b916859992ec8 100644
36706 +--- a/fs/reiserfs/namei.c
36707 ++++ b/fs/reiserfs/namei.c
36708 +@@ -696,6 +696,7 @@ static int reiserfs_create(struct user_namespace *mnt_userns, struct inode *dir,
36709 +
36710 + out_failed:
36711 + reiserfs_write_unlock(dir->i_sb);
36712 ++ reiserfs_security_free(&security);
36713 + return retval;
36714 + }
36715 +
36716 +@@ -779,6 +780,7 @@ static int reiserfs_mknod(struct user_namespace *mnt_userns, struct inode *dir,
36717 +
36718 + out_failed:
36719 + reiserfs_write_unlock(dir->i_sb);
36720 ++ reiserfs_security_free(&security);
36721 + return retval;
36722 + }
36723 +
36724 +@@ -878,6 +880,7 @@ static int reiserfs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
36725 + retval = journal_end(&th);
36726 + out_failed:
36727 + reiserfs_write_unlock(dir->i_sb);
36728 ++ reiserfs_security_free(&security);
36729 + return retval;
36730 + }
36731 +
36732 +@@ -1194,6 +1197,7 @@ static int reiserfs_symlink(struct user_namespace *mnt_userns,
36733 + retval = journal_end(&th);
36734 + out_failed:
36735 + reiserfs_write_unlock(parent_dir->i_sb);
36736 ++ reiserfs_security_free(&security);
36737 + return retval;
36738 + }
36739 +
36740 +diff --git a/fs/reiserfs/xattr_security.c b/fs/reiserfs/xattr_security.c
36741 +index 8965c8e5e172b..857a65b057264 100644
36742 +--- a/fs/reiserfs/xattr_security.c
36743 ++++ b/fs/reiserfs/xattr_security.c
36744 +@@ -50,6 +50,7 @@ int reiserfs_security_init(struct inode *dir, struct inode *inode,
36745 + int error;
36746 +
36747 + sec->name = NULL;
36748 ++ sec->value = NULL;
36749 +
36750 + /* Don't add selinux attributes on xattrs - they'll never get used */
36751 + if (IS_PRIVATE(dir))
36752 +@@ -95,7 +96,6 @@ int reiserfs_security_write(struct reiserfs_transaction_handle *th,
36753 +
36754 + void reiserfs_security_free(struct reiserfs_security_handle *sec)
36755 + {
36756 +- kfree(sec->name);
36757 + kfree(sec->value);
36758 + sec->name = NULL;
36759 + sec->value = NULL;
36760 +diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c
36761 +index d4ec9bb97de95..3b8567564e7e4 100644
36762 +--- a/fs/sysv/itree.c
36763 ++++ b/fs/sysv/itree.c
36764 +@@ -438,7 +438,7 @@ static unsigned sysv_nblocks(struct super_block *s, loff_t size)
36765 + res += blocks;
36766 + direct = 1;
36767 + }
36768 +- return blocks;
36769 ++ return res;
36770 + }
36771 +
36772 + int sysv_getattr(struct user_namespace *mnt_userns, const struct path *path,
36773 +diff --git a/fs/udf/namei.c b/fs/udf/namei.c
36774 +index ae7bc13a5298a..7c95c549dd64e 100644
36775 +--- a/fs/udf/namei.c
36776 ++++ b/fs/udf/namei.c
36777 +@@ -1091,8 +1091,9 @@ static int udf_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
36778 + return -EINVAL;
36779 +
36780 + ofi = udf_find_entry(old_dir, &old_dentry->d_name, &ofibh, &ocfi);
36781 +- if (IS_ERR(ofi)) {
36782 +- retval = PTR_ERR(ofi);
36783 ++ if (!ofi || IS_ERR(ofi)) {
36784 ++ if (IS_ERR(ofi))
36785 ++ retval = PTR_ERR(ofi);
36786 + goto end_rename;
36787 + }
36788 +
36789 +@@ -1101,8 +1102,7 @@ static int udf_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
36790 +
36791 + brelse(ofibh.sbh);
36792 + tloc = lelb_to_cpu(ocfi.icb.extLocation);
36793 +- if (!ofi || udf_get_lb_pblock(old_dir->i_sb, &tloc, 0)
36794 +- != old_inode->i_ino)
36795 ++ if (udf_get_lb_pblock(old_dir->i_sb, &tloc, 0) != old_inode->i_ino)
36796 + goto end_rename;
36797 +
36798 + nfi = udf_find_entry(new_dir, &new_dentry->d_name, &nfibh, &ncfi);
36799 +diff --git a/fs/xattr.c b/fs/xattr.c
36800 +index 61107b6bbed29..427b8cea1f968 100644
36801 +--- a/fs/xattr.c
36802 ++++ b/fs/xattr.c
36803 +@@ -1140,7 +1140,7 @@ static int xattr_list_one(char **buffer, ssize_t *remaining_size,
36804 + ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
36805 + char *buffer, size_t size)
36806 + {
36807 +- bool trusted = capable(CAP_SYS_ADMIN);
36808 ++ bool trusted = ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
36809 + struct simple_xattr *xattr;
36810 + ssize_t remaining_size = size;
36811 + int err = 0;
36812 +diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
36813 +index 56aee949c6fa2..4d830fc55a3df 100644
36814 +--- a/include/drm/drm_connector.h
36815 ++++ b/include/drm/drm_connector.h
36816 +@@ -656,6 +656,12 @@ struct drm_display_info {
36817 + * @mso_pixel_overlap: eDP MSO segment pixel overlap, 0-8 pixels.
36818 + */
36819 + u8 mso_pixel_overlap;
36820 ++
36821 ++ /**
36822 ++ * @max_dsc_bpp: Maximum DSC target bitrate, if it is set to 0 the
36823 ++ * monitor's default value is used instead.
36824 ++ */
36825 ++ u32 max_dsc_bpp;
36826 + };
36827 +
36828 + int drm_display_info_set_bus_formats(struct drm_display_info *info,
36829 +diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
36830 +index 17a0310e8aaaf..b7d3f3843f1e6 100644
36831 +--- a/include/drm/ttm/ttm_tt.h
36832 ++++ b/include/drm/ttm/ttm_tt.h
36833 +@@ -88,7 +88,7 @@ struct ttm_tt {
36834 + #define TTM_TT_FLAG_EXTERNAL (1 << 2)
36835 + #define TTM_TT_FLAG_EXTERNAL_MAPPABLE (1 << 3)
36836 +
36837 +-#define TTM_TT_FLAG_PRIV_POPULATED (1 << 31)
36838 ++#define TTM_TT_FLAG_PRIV_POPULATED (1U << 31)
36839 + uint32_t page_flags;
36840 + /** @num_pages: Number of pages in the page array. */
36841 + uint32_t num_pages;
36842 +diff --git a/include/dt-bindings/clock/imx8mn-clock.h b/include/dt-bindings/clock/imx8mn-clock.h
36843 +index 07b8a282c2682..04809edab33cf 100644
36844 +--- a/include/dt-bindings/clock/imx8mn-clock.h
36845 ++++ b/include/dt-bindings/clock/imx8mn-clock.h
36846 +@@ -16,40 +16,48 @@
36847 + #define IMX8MN_CLK_EXT4 7
36848 + #define IMX8MN_AUDIO_PLL1_REF_SEL 8
36849 + #define IMX8MN_AUDIO_PLL2_REF_SEL 9
36850 +-#define IMX8MN_VIDEO_PLL1_REF_SEL 10
36851 ++#define IMX8MN_VIDEO_PLL_REF_SEL 10
36852 ++#define IMX8MN_VIDEO_PLL1_REF_SEL IMX8MN_VIDEO_PLL_REF_SEL
36853 + #define IMX8MN_DRAM_PLL_REF_SEL 11
36854 + #define IMX8MN_GPU_PLL_REF_SEL 12
36855 +-#define IMX8MN_VPU_PLL_REF_SEL 13
36856 ++#define IMX8MN_M7_ALT_PLL_REF_SEL 13
36857 ++#define IMX8MN_VPU_PLL_REF_SEL IMX8MN_M7_ALT_PLL_REF_SEL
36858 + #define IMX8MN_ARM_PLL_REF_SEL 14
36859 + #define IMX8MN_SYS_PLL1_REF_SEL 15
36860 + #define IMX8MN_SYS_PLL2_REF_SEL 16
36861 + #define IMX8MN_SYS_PLL3_REF_SEL 17
36862 + #define IMX8MN_AUDIO_PLL1 18
36863 + #define IMX8MN_AUDIO_PLL2 19
36864 +-#define IMX8MN_VIDEO_PLL1 20
36865 ++#define IMX8MN_VIDEO_PLL 20
36866 ++#define IMX8MN_VIDEO_PLL1 IMX8MN_VIDEO_PLL
36867 + #define IMX8MN_DRAM_PLL 21
36868 + #define IMX8MN_GPU_PLL 22
36869 +-#define IMX8MN_VPU_PLL 23
36870 ++#define IMX8MN_M7_ALT_PLL 23
36871 ++#define IMX8MN_VPU_PLL IMX8MN_M7_ALT_PLL
36872 + #define IMX8MN_ARM_PLL 24
36873 + #define IMX8MN_SYS_PLL1 25
36874 + #define IMX8MN_SYS_PLL2 26
36875 + #define IMX8MN_SYS_PLL3 27
36876 + #define IMX8MN_AUDIO_PLL1_BYPASS 28
36877 + #define IMX8MN_AUDIO_PLL2_BYPASS 29
36878 +-#define IMX8MN_VIDEO_PLL1_BYPASS 30
36879 ++#define IMX8MN_VIDEO_PLL_BYPASS 30
36880 ++#define IMX8MN_VIDEO_PLL1_BYPASS IMX8MN_VIDEO_PLL_BYPASS
36881 + #define IMX8MN_DRAM_PLL_BYPASS 31
36882 + #define IMX8MN_GPU_PLL_BYPASS 32
36883 +-#define IMX8MN_VPU_PLL_BYPASS 33
36884 ++#define IMX8MN_M7_ALT_PLL_BYPASS 33
36885 ++#define IMX8MN_VPU_PLL_BYPASS IMX8MN_M7_ALT_PLL_BYPASS
36886 + #define IMX8MN_ARM_PLL_BYPASS 34
36887 + #define IMX8MN_SYS_PLL1_BYPASS 35
36888 + #define IMX8MN_SYS_PLL2_BYPASS 36
36889 + #define IMX8MN_SYS_PLL3_BYPASS 37
36890 + #define IMX8MN_AUDIO_PLL1_OUT 38
36891 + #define IMX8MN_AUDIO_PLL2_OUT 39
36892 +-#define IMX8MN_VIDEO_PLL1_OUT 40
36893 ++#define IMX8MN_VIDEO_PLL_OUT 40
36894 ++#define IMX8MN_VIDEO_PLL1_OUT IMX8MN_VIDEO_PLL_OUT
36895 + #define IMX8MN_DRAM_PLL_OUT 41
36896 + #define IMX8MN_GPU_PLL_OUT 42
36897 +-#define IMX8MN_VPU_PLL_OUT 43
36898 ++#define IMX8MN_M7_ALT_PLL_OUT 43
36899 ++#define IMX8MN_VPU_PLL_OUT IMX8MN_M7_ALT_PLL_OUT
36900 + #define IMX8MN_ARM_PLL_OUT 44
36901 + #define IMX8MN_SYS_PLL1_OUT 45
36902 + #define IMX8MN_SYS_PLL2_OUT 46
36903 +diff --git a/include/dt-bindings/clock/imx8mp-clock.h b/include/dt-bindings/clock/imx8mp-clock.h
36904 +index 9d5cc2ddde896..1417b7b1b7dfe 100644
36905 +--- a/include/dt-bindings/clock/imx8mp-clock.h
36906 ++++ b/include/dt-bindings/clock/imx8mp-clock.h
36907 +@@ -324,8 +324,9 @@
36908 + #define IMX8MP_CLK_CLKOUT2_SEL 317
36909 + #define IMX8MP_CLK_CLKOUT2_DIV 318
36910 + #define IMX8MP_CLK_CLKOUT2 319
36911 ++#define IMX8MP_CLK_USB_SUSP 320
36912 +
36913 +-#define IMX8MP_CLK_END 320
36914 ++#define IMX8MP_CLK_END 321
36915 +
36916 + #define IMX8MP_CLK_AUDIOMIX_SAI1_IPG 0
36917 + #define IMX8MP_CLK_AUDIOMIX_SAI1_MCLK1 1
36918 +diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
36919 +index 2aea877d644f8..2b98720084285 100644
36920 +--- a/include/linux/btf_ids.h
36921 ++++ b/include/linux/btf_ids.h
36922 +@@ -204,7 +204,7 @@ extern struct btf_id_set8 name;
36923 +
36924 + #else
36925 +
36926 +-#define BTF_ID_LIST(name) static u32 __maybe_unused name[5];
36927 ++#define BTF_ID_LIST(name) static u32 __maybe_unused name[16];
36928 + #define BTF_ID(prefix, name)
36929 + #define BTF_ID_FLAGS(prefix, name, ...)
36930 + #define BTF_ID_UNUSED
36931 +diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
36932 +index f60674692d365..ea2d919fd9c79 100644
36933 +--- a/include/linux/debugfs.h
36934 ++++ b/include/linux/debugfs.h
36935 +@@ -45,7 +45,7 @@ struct debugfs_u32_array {
36936 +
36937 + extern struct dentry *arch_debugfs_dir;
36938 +
36939 +-#define DEFINE_DEBUGFS_ATTRIBUTE(__fops, __get, __set, __fmt) \
36940 ++#define DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, __is_signed) \
36941 + static int __fops ## _open(struct inode *inode, struct file *file) \
36942 + { \
36943 + __simple_attr_check_format(__fmt, 0ull); \
36944 +@@ -56,10 +56,16 @@ static const struct file_operations __fops = { \
36945 + .open = __fops ## _open, \
36946 + .release = simple_attr_release, \
36947 + .read = debugfs_attr_read, \
36948 +- .write = debugfs_attr_write, \
36949 ++ .write = (__is_signed) ? debugfs_attr_write_signed : debugfs_attr_write, \
36950 + .llseek = no_llseek, \
36951 + }
36952 +
36953 ++#define DEFINE_DEBUGFS_ATTRIBUTE(__fops, __get, __set, __fmt) \
36954 ++ DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, false)
36955 ++
36956 ++#define DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(__fops, __get, __set, __fmt) \
36957 ++ DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, true)
36958 ++
36959 + typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);
36960 +
36961 + #if defined(CONFIG_DEBUG_FS)
36962 +@@ -102,6 +108,8 @@ ssize_t debugfs_attr_read(struct file *file, char __user *buf,
36963 + size_t len, loff_t *ppos);
36964 + ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
36965 + size_t len, loff_t *ppos);
36966 ++ssize_t debugfs_attr_write_signed(struct file *file, const char __user *buf,
36967 ++ size_t len, loff_t *ppos);
36968 +
36969 + struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
36970 + struct dentry *new_dir, const char *new_name);
36971 +@@ -254,6 +262,13 @@ static inline ssize_t debugfs_attr_write(struct file *file,
36972 + return -ENODEV;
36973 + }
36974 +
36975 ++static inline ssize_t debugfs_attr_write_signed(struct file *file,
36976 ++ const char __user *buf,
36977 ++ size_t len, loff_t *ppos)
36978 ++{
36979 ++ return -ENODEV;
36980 ++}
36981 ++
36982 + static inline struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
36983 + struct dentry *new_dir, char *new_name)
36984 + {
36985 +diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
36986 +index 30eb30d6909b0..3cd202d3eefb3 100644
36987 +--- a/include/linux/eventfd.h
36988 ++++ b/include/linux/eventfd.h
36989 +@@ -61,7 +61,7 @@ static inline struct eventfd_ctx *eventfd_ctx_fdget(int fd)
36990 + return ERR_PTR(-ENOSYS);
36991 + }
36992 +
36993 +-static inline int eventfd_signal(struct eventfd_ctx *ctx, int n)
36994 ++static inline int eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
36995 + {
36996 + return -ENOSYS;
36997 + }
36998 +diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
36999 +index 1067a8450826b..5001a11258e4d 100644
37000 +--- a/include/linux/fortify-string.h
37001 ++++ b/include/linux/fortify-string.h
37002 +@@ -18,7 +18,7 @@ void __write_overflow_field(size_t avail, size_t wanted) __compiletime_warning("
37003 +
37004 + #define __compiletime_strlen(p) \
37005 + ({ \
37006 +- unsigned char *__p = (unsigned char *)(p); \
37007 ++ char *__p = (char *)(p); \
37008 + size_t __ret = SIZE_MAX; \
37009 + size_t __p_size = __member_size(p); \
37010 + if (__p_size != SIZE_MAX && \
37011 +diff --git a/include/linux/fs.h b/include/linux/fs.h
37012 +index 59ae95ddb6793..6b115bce14b98 100644
37013 +--- a/include/linux/fs.h
37014 ++++ b/include/linux/fs.h
37015 +@@ -3493,7 +3493,7 @@ void simple_transaction_set(struct file *file, size_t n);
37016 + * All attributes contain a text representation of a numeric value
37017 + * that are accessed with the get() and set() functions.
37018 + */
37019 +-#define DEFINE_SIMPLE_ATTRIBUTE(__fops, __get, __set, __fmt) \
37020 ++#define DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, __is_signed) \
37021 + static int __fops ## _open(struct inode *inode, struct file *file) \
37022 + { \
37023 + __simple_attr_check_format(__fmt, 0ull); \
37024 +@@ -3504,10 +3504,16 @@ static const struct file_operations __fops = { \
37025 + .open = __fops ## _open, \
37026 + .release = simple_attr_release, \
37027 + .read = simple_attr_read, \
37028 +- .write = simple_attr_write, \
37029 ++ .write = (__is_signed) ? simple_attr_write_signed : simple_attr_write, \
37030 + .llseek = generic_file_llseek, \
37031 + }
37032 +
37033 ++#define DEFINE_SIMPLE_ATTRIBUTE(__fops, __get, __set, __fmt) \
37034 ++ DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, false)
37035 ++
37036 ++#define DEFINE_SIMPLE_ATTRIBUTE_SIGNED(__fops, __get, __set, __fmt) \
37037 ++ DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, true)
37038 ++
37039 + static inline __printf(1, 2)
37040 + void __simple_attr_check_format(const char *fmt, ...)
37041 + {
37042 +@@ -3522,6 +3528,8 @@ ssize_t simple_attr_read(struct file *file, char __user *buf,
37043 + size_t len, loff_t *ppos);
37044 + ssize_t simple_attr_write(struct file *file, const char __user *buf,
37045 + size_t len, loff_t *ppos);
37046 ++ssize_t simple_attr_write_signed(struct file *file, const char __user *buf,
37047 ++ size_t len, loff_t *ppos);
37048 +
37049 + struct ctl_table;
37050 + int __init list_bdev_fs_names(char *buf, size_t size);
37051 +diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h
37052 +index e230c7c46110a..c3618255b1504 100644
37053 +--- a/include/linux/hisi_acc_qm.h
37054 ++++ b/include/linux/hisi_acc_qm.h
37055 +@@ -384,14 +384,14 @@ struct hisi_qp {
37056 + static inline int q_num_set(const char *val, const struct kernel_param *kp,
37057 + unsigned int device)
37058 + {
37059 +- struct pci_dev *pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI,
37060 +- device, NULL);
37061 ++ struct pci_dev *pdev;
37062 + u32 n, q_num;
37063 + int ret;
37064 +
37065 + if (!val)
37066 + return -EINVAL;
37067 +
37068 ++ pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI, device, NULL);
37069 + if (!pdev) {
37070 + q_num = min_t(u32, QM_QNUM_V1, QM_QNUM_V2);
37071 + pr_info("No device found currently, suppose queue number is %u\n",
37072 +@@ -401,6 +401,8 @@ static inline int q_num_set(const char *val, const struct kernel_param *kp,
37073 + q_num = QM_QNUM_V1;
37074 + else
37075 + q_num = QM_QNUM_V2;
37076 ++
37077 ++ pci_dev_put(pdev);
37078 + }
37079 +
37080 + ret = kstrtou32(val, 10, &n);
37081 +diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
37082 +index 3b42264333ef8..646f1da9f27e0 100644
37083 +--- a/include/linux/hyperv.h
37084 ++++ b/include/linux/hyperv.h
37085 +@@ -1341,6 +1341,8 @@ struct hv_ring_buffer_debug_info {
37086 + int hv_ringbuffer_get_debuginfo(struct hv_ring_buffer_info *ring_info,
37087 + struct hv_ring_buffer_debug_info *debug_info);
37088 +
37089 ++bool hv_ringbuffer_spinlock_busy(struct vmbus_channel *channel);
37090 ++
37091 + /* Vmbus interface */
37092 + #define vmbus_driver_register(driver) \
37093 + __vmbus_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
37094 +diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
37095 +index 79690938d9a2d..d3088666f3f44 100644
37096 +--- a/include/linux/ieee80211.h
37097 ++++ b/include/linux/ieee80211.h
37098 +@@ -4594,7 +4594,7 @@ static inline u8 ieee80211_mle_common_size(const u8 *data)
37099 + return 0;
37100 + }
37101 +
37102 +- return common + mle->variable[0];
37103 ++ return sizeof(*mle) + common + mle->variable[0];
37104 + }
37105 +
37106 + /**
37107 +diff --git a/include/linux/iio/imu/adis.h b/include/linux/iio/imu/adis.h
37108 +index 515ca09764fe4..bcbefb7574751 100644
37109 +--- a/include/linux/iio/imu/adis.h
37110 ++++ b/include/linux/iio/imu/adis.h
37111 +@@ -402,9 +402,20 @@ static inline int adis_update_bits_base(struct adis *adis, unsigned int reg,
37112 + __adis_update_bits_base(adis, reg, mask, val, sizeof(val)); \
37113 + })
37114 +
37115 +-int adis_enable_irq(struct adis *adis, bool enable);
37116 + int __adis_check_status(struct adis *adis);
37117 + int __adis_initial_startup(struct adis *adis);
37118 ++int __adis_enable_irq(struct adis *adis, bool enable);
37119 ++
37120 ++static inline int adis_enable_irq(struct adis *adis, bool enable)
37121 ++{
37122 ++ int ret;
37123 ++
37124 ++ mutex_lock(&adis->state_lock);
37125 ++ ret = __adis_enable_irq(adis, enable);
37126 ++ mutex_unlock(&adis->state_lock);
37127 ++
37128 ++ return ret;
37129 ++}
37130 +
37131 + static inline int adis_check_status(struct adis *adis)
37132 + {
37133 +diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
37134 +index eddf8ee270e74..ba2bd604359d4 100644
37135 +--- a/include/linux/netdevice.h
37136 ++++ b/include/linux/netdevice.h
37137 +@@ -171,31 +171,38 @@ static inline bool dev_xmit_complete(int rc)
37138 + * (unsigned long) so they can be read and written atomically.
37139 + */
37140 +
37141 ++#define NET_DEV_STAT(FIELD) \
37142 ++ union { \
37143 ++ unsigned long FIELD; \
37144 ++ atomic_long_t __##FIELD; \
37145 ++ }
37146 ++
37147 + struct net_device_stats {
37148 +- unsigned long rx_packets;
37149 +- unsigned long tx_packets;
37150 +- unsigned long rx_bytes;
37151 +- unsigned long tx_bytes;
37152 +- unsigned long rx_errors;
37153 +- unsigned long tx_errors;
37154 +- unsigned long rx_dropped;
37155 +- unsigned long tx_dropped;
37156 +- unsigned long multicast;
37157 +- unsigned long collisions;
37158 +- unsigned long rx_length_errors;
37159 +- unsigned long rx_over_errors;
37160 +- unsigned long rx_crc_errors;
37161 +- unsigned long rx_frame_errors;
37162 +- unsigned long rx_fifo_errors;
37163 +- unsigned long rx_missed_errors;
37164 +- unsigned long tx_aborted_errors;
37165 +- unsigned long tx_carrier_errors;
37166 +- unsigned long tx_fifo_errors;
37167 +- unsigned long tx_heartbeat_errors;
37168 +- unsigned long tx_window_errors;
37169 +- unsigned long rx_compressed;
37170 +- unsigned long tx_compressed;
37171 ++ NET_DEV_STAT(rx_packets);
37172 ++ NET_DEV_STAT(tx_packets);
37173 ++ NET_DEV_STAT(rx_bytes);
37174 ++ NET_DEV_STAT(tx_bytes);
37175 ++ NET_DEV_STAT(rx_errors);
37176 ++ NET_DEV_STAT(tx_errors);
37177 ++ NET_DEV_STAT(rx_dropped);
37178 ++ NET_DEV_STAT(tx_dropped);
37179 ++ NET_DEV_STAT(multicast);
37180 ++ NET_DEV_STAT(collisions);
37181 ++ NET_DEV_STAT(rx_length_errors);
37182 ++ NET_DEV_STAT(rx_over_errors);
37183 ++ NET_DEV_STAT(rx_crc_errors);
37184 ++ NET_DEV_STAT(rx_frame_errors);
37185 ++ NET_DEV_STAT(rx_fifo_errors);
37186 ++ NET_DEV_STAT(rx_missed_errors);
37187 ++ NET_DEV_STAT(tx_aborted_errors);
37188 ++ NET_DEV_STAT(tx_carrier_errors);
37189 ++ NET_DEV_STAT(tx_fifo_errors);
37190 ++ NET_DEV_STAT(tx_heartbeat_errors);
37191 ++ NET_DEV_STAT(tx_window_errors);
37192 ++ NET_DEV_STAT(rx_compressed);
37193 ++ NET_DEV_STAT(tx_compressed);
37194 + };
37195 ++#undef NET_DEV_STAT
37196 +
37197 + /* per-cpu stats, allocated on demand.
37198 + * Try to fit them in a single cache line, for dev_get_stats() sake.
37199 +@@ -5164,4 +5171,9 @@ extern struct list_head ptype_base[PTYPE_HASH_SIZE] __read_mostly;
37200 +
37201 + extern struct net_device *blackhole_netdev;
37202 +
37203 ++/* Note: Avoid these macros in fast path, prefer per-cpu or per-queue counters. */
37204 ++#define DEV_STATS_INC(DEV, FIELD) atomic_long_inc(&(DEV)->stats.__##FIELD)
37205 ++#define DEV_STATS_ADD(DEV, FIELD, VAL) \
37206 ++ atomic_long_add((VAL), &(DEV)->stats.__##FIELD)
37207 ++
37208 + #endif /* _LINUX_NETDEVICE_H */
37209 +diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
37210 +index 81d6e4ec2294b..0260f5ea98fe1 100644
37211 +--- a/include/linux/proc_fs.h
37212 ++++ b/include/linux/proc_fs.h
37213 +@@ -208,8 +208,10 @@ static inline void proc_remove(struct proc_dir_entry *de) {}
37214 + static inline int remove_proc_subtree(const char *name, struct proc_dir_entry *parent) { return 0; }
37215 +
37216 + #define proc_create_net_data(name, mode, parent, ops, state_size, data) ({NULL;})
37217 ++#define proc_create_net_data_write(name, mode, parent, ops, write, state_size, data) ({NULL;})
37218 + #define proc_create_net(name, mode, parent, state_size, ops) ({NULL;})
37219 + #define proc_create_net_single(name, mode, parent, show, data) ({NULL;})
37220 ++#define proc_create_net_single_write(name, mode, parent, show, write, data) ({NULL;})
37221 +
37222 + static inline struct pid *tgid_pidfd_to_pid(const struct file *file)
37223 + {
37224 +diff --git a/include/linux/regulator/driver.h b/include/linux/regulator/driver.h
37225 +index f9a7461e72b80..d3b4a3d4514ab 100644
37226 +--- a/include/linux/regulator/driver.h
37227 ++++ b/include/linux/regulator/driver.h
37228 +@@ -687,7 +687,8 @@ static inline int regulator_err2notif(int err)
37229 +
37230 +
37231 + struct regulator_dev *
37232 +-regulator_register(const struct regulator_desc *regulator_desc,
37233 ++regulator_register(struct device *dev,
37234 ++ const struct regulator_desc *regulator_desc,
37235 + const struct regulator_config *config);
37236 + struct regulator_dev *
37237 + devm_regulator_register(struct device *dev,
37238 +diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
37239 +index 70d6cb94e5802..84f787416a54d 100644
37240 +--- a/include/linux/skmsg.h
37241 ++++ b/include/linux/skmsg.h
37242 +@@ -82,6 +82,7 @@ struct sk_psock {
37243 + u32 apply_bytes;
37244 + u32 cork_bytes;
37245 + u32 eval;
37246 ++ bool redir_ingress; /* undefined if sk_redir is null */
37247 + struct sk_msg *cork;
37248 + struct sk_psock_progs progs;
37249 + #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
37250 +diff --git a/include/linux/timerqueue.h b/include/linux/timerqueue.h
37251 +index 93884086f3924..adc80e29168ea 100644
37252 +--- a/include/linux/timerqueue.h
37253 ++++ b/include/linux/timerqueue.h
37254 +@@ -35,7 +35,7 @@ struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
37255 + {
37256 + struct rb_node *leftmost = rb_first_cached(&head->rb_root);
37257 +
37258 +- return rb_entry(leftmost, struct timerqueue_node, node);
37259 ++ return rb_entry_safe(leftmost, struct timerqueue_node, node);
37260 + }
37261 +
37262 + static inline void timerqueue_init(struct timerqueue_node *node)
37263 +diff --git a/include/media/dvbdev.h b/include/media/dvbdev.h
37264 +index 2f6b0861322ae..ac60c9fcfe9a6 100644
37265 +--- a/include/media/dvbdev.h
37266 ++++ b/include/media/dvbdev.h
37267 +@@ -126,6 +126,7 @@ struct dvb_adapter {
37268 + * struct dvb_device - represents a DVB device node
37269 + *
37270 + * @list_head: List head with all DVB devices
37271 ++ * @ref: reference counter
37272 + * @fops: pointer to struct file_operations
37273 + * @adapter: pointer to the adapter that holds this device node
37274 + * @type: type of the device, as defined by &enum dvb_device_type.
37275 +@@ -156,6 +157,7 @@ struct dvb_adapter {
37276 + */
37277 + struct dvb_device {
37278 + struct list_head list_head;
37279 ++ struct kref ref;
37280 + const struct file_operations *fops;
37281 + struct dvb_adapter *adapter;
37282 + enum dvb_device_type type;
37283 +@@ -187,6 +189,20 @@ struct dvb_device {
37284 + void *priv;
37285 + };
37286 +
37287 ++/**
37288 ++ * dvb_device_get - Increase dvb_device reference
37289 ++ *
37290 ++ * @dvbdev: pointer to struct dvb_device
37291 ++ */
37292 ++struct dvb_device *dvb_device_get(struct dvb_device *dvbdev);
37293 ++
37294 ++/**
37295 ++ * dvb_device_put - Decrease dvb_device reference
37296 ++ *
37297 ++ * @dvbdev: pointer to struct dvb_device
37298 ++ */
37299 ++void dvb_device_put(struct dvb_device *dvbdev);
37300 ++
37301 + /**
37302 + * dvb_register_adapter - Registers a new DVB adapter
37303 + *
37304 +@@ -231,29 +247,17 @@ int dvb_register_device(struct dvb_adapter *adap,
37305 + /**
37306 + * dvb_remove_device - Remove a registered DVB device
37307 + *
37308 +- * This does not free memory. To do that, call dvb_free_device().
37309 ++ * This does not free memory. dvb_free_device() will do that when
37310 ++ * reference counter is empty
37311 + *
37312 + * @dvbdev: pointer to struct dvb_device
37313 + */
37314 + void dvb_remove_device(struct dvb_device *dvbdev);
37315 +
37316 +-/**
37317 +- * dvb_free_device - Free memory occupied by a DVB device.
37318 +- *
37319 +- * Call dvb_unregister_device() before calling this function.
37320 +- *
37321 +- * @dvbdev: pointer to struct dvb_device
37322 +- */
37323 +-void dvb_free_device(struct dvb_device *dvbdev);
37324 +
37325 + /**
37326 + * dvb_unregister_device - Unregisters a DVB device
37327 + *
37328 +- * This is a combination of dvb_remove_device() and dvb_free_device().
37329 +- * Using this function is usually a mistake, and is often an indicator
37330 +- * for a use-after-free bug (when a userspace process keeps a file
37331 +- * handle to a detached device).
37332 +- *
37333 + * @dvbdev: pointer to struct dvb_device
37334 + */
37335 + void dvb_unregister_device(struct dvb_device *dvbdev);
37336 +diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
37337 +index 684f1cd287301..7a381fcef939d 100644
37338 +--- a/include/net/bluetooth/hci.h
37339 ++++ b/include/net/bluetooth/hci.h
37340 +@@ -274,6 +274,26 @@ enum {
37341 + * during the hdev->setup vendor callback.
37342 + */
37343 + HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN,
37344 ++
37345 ++ /*
37346 ++ * When this quirk is set, the HCI_OP_LE_SET_EXT_SCAN_ENABLE command is
37347 ++ * disabled. This is required for some Broadcom controllers which
37348 ++ * erroneously claim to support extended scanning.
37349 ++ *
37350 ++ * This quirk can be set before hci_register_dev is called or
37351 ++ * during the hdev->setup vendor callback.
37352 ++ */
37353 ++ HCI_QUIRK_BROKEN_EXT_SCAN,
37354 ++
37355 ++ /*
37356 ++ * When this quirk is set, the HCI_OP_GET_MWS_TRANSPORT_CONFIG command is
37357 ++ * disabled. This is required for some Broadcom controllers which
37358 ++ * erroneously claim to support MWS Transport Layer Configuration.
37359 ++ *
37360 ++ * This quirk can be set before hci_register_dev is called or
37361 ++ * during the hdev->setup vendor callback.
37362 ++ */
37363 ++ HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG,
37364 + };
37365 +
37366 + /* HCI device flags */
37367 +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
37368 +index c54bc71254afa..7f585e5dd71b8 100644
37369 +--- a/include/net/bluetooth/hci_core.h
37370 ++++ b/include/net/bluetooth/hci_core.h
37371 +@@ -1689,7 +1689,9 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
37372 +
37373 + /* Use ext scanning if set ext scan param and ext scan enable is supported */
37374 + #define use_ext_scan(dev) (((dev)->commands[37] & 0x20) && \
37375 +- ((dev)->commands[37] & 0x40))
37376 ++ ((dev)->commands[37] & 0x40) && \
37377 ++ !test_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &(dev)->quirks))
37378 ++
37379 + /* Use ext create connection if command is supported */
37380 + #define use_ext_conn(dev) ((dev)->commands[37] & 0x80)
37381 +
37382 +@@ -1717,6 +1719,9 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
37383 + ((dev)->le_features[3] & HCI_LE_CIS_PERIPHERAL)
37384 + #define bis_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_BROADCASTER)
37385 +
37386 ++#define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \
37387 ++ (!test_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &(dev)->quirks)))
37388 ++
37389 + /* ----- HCI protocols ----- */
37390 + #define HCI_PROTO_DEFER 0x01
37391 +
37392 +diff --git a/include/net/dst.h b/include/net/dst.h
37393 +index 00b479ce6b99c..d67fda89cd0fa 100644
37394 +--- a/include/net/dst.h
37395 ++++ b/include/net/dst.h
37396 +@@ -356,9 +356,8 @@ static inline void __skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
37397 + static inline void skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
37398 + struct net *net)
37399 + {
37400 +- /* TODO : stats should be SMP safe */
37401 +- dev->stats.rx_packets++;
37402 +- dev->stats.rx_bytes += skb->len;
37403 ++ DEV_STATS_INC(dev, rx_packets);
37404 ++ DEV_STATS_ADD(dev, rx_bytes, skb->len);
37405 + __skb_tunnel_rx(skb, dev, net);
37406 + }
37407 +
37408 +diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
37409 +index ff1804a0c4692..1fca6a88114ad 100644
37410 +--- a/include/net/ip_vs.h
37411 ++++ b/include/net/ip_vs.h
37412 +@@ -351,11 +351,11 @@ struct ip_vs_seq {
37413 +
37414 + /* counters per cpu */
37415 + struct ip_vs_counters {
37416 +- __u64 conns; /* connections scheduled */
37417 +- __u64 inpkts; /* incoming packets */
37418 +- __u64 outpkts; /* outgoing packets */
37419 +- __u64 inbytes; /* incoming bytes */
37420 +- __u64 outbytes; /* outgoing bytes */
37421 ++ u64_stats_t conns; /* connections scheduled */
37422 ++ u64_stats_t inpkts; /* incoming packets */
37423 ++ u64_stats_t outpkts; /* outgoing packets */
37424 ++ u64_stats_t inbytes; /* incoming bytes */
37425 ++ u64_stats_t outbytes; /* outgoing bytes */
37426 + };
37427 + /* Stats per cpu */
37428 + struct ip_vs_cpu_stats {
37429 +diff --git a/include/net/mrp.h b/include/net/mrp.h
37430 +index 92cd3fb6cf9da..b28915ffea284 100644
37431 +--- a/include/net/mrp.h
37432 ++++ b/include/net/mrp.h
37433 +@@ -124,6 +124,7 @@ struct mrp_applicant {
37434 + struct sk_buff *pdu;
37435 + struct rb_root mad;
37436 + struct rcu_head rcu;
37437 ++ bool active;
37438 + };
37439 +
37440 + struct mrp_port {
37441 +diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
37442 +index efc9085c68927..6ec140b0a61bf 100644
37443 +--- a/include/net/sock_reuseport.h
37444 ++++ b/include/net/sock_reuseport.h
37445 +@@ -16,6 +16,7 @@ struct sock_reuseport {
37446 + u16 max_socks; /* length of socks */
37447 + u16 num_socks; /* elements in socks */
37448 + u16 num_closed_socks; /* closed elements in socks */
37449 ++ u16 incoming_cpu;
37450 + /* The last synq overflow event timestamp of this
37451 + * reuse->socks[] group.
37452 + */
37453 +@@ -58,5 +59,6 @@ static inline bool reuseport_has_conns(struct sock *sk)
37454 + }
37455 +
37456 + void reuseport_has_conns_set(struct sock *sk);
37457 ++void reuseport_update_incoming_cpu(struct sock *sk, int val);
37458 +
37459 + #endif /* _SOCK_REUSEPORT_H */
37460 +diff --git a/include/net/tcp.h b/include/net/tcp.h
37461 +index 14d45661a84d8..5b70b241ce71b 100644
37462 +--- a/include/net/tcp.h
37463 ++++ b/include/net/tcp.h
37464 +@@ -2291,8 +2291,8 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
37465 + void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
37466 + #endif /* CONFIG_BPF_SYSCALL */
37467 +
37468 +-int tcp_bpf_sendmsg_redir(struct sock *sk, struct sk_msg *msg, u32 bytes,
37469 +- int flags);
37470 ++int tcp_bpf_sendmsg_redir(struct sock *sk, bool ingress,
37471 ++ struct sk_msg *msg, u32 bytes, int flags);
37472 + #endif /* CONFIG_NET_SOCK_MSG */
37473 +
37474 + #if !defined(CONFIG_BPF_SYSCALL) || !defined(CONFIG_NET_SOCK_MSG)
37475 +diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
37476 +index 25ec8c181688d..eba23daf2c290 100644
37477 +--- a/include/sound/hda_codec.h
37478 ++++ b/include/sound/hda_codec.h
37479 +@@ -258,6 +258,7 @@ struct hda_codec {
37480 + unsigned int link_down_at_suspend:1; /* link down at runtime suspend */
37481 + unsigned int relaxed_resume:1; /* don't resume forcibly for jack */
37482 + unsigned int forced_resume:1; /* forced resume for jack */
37483 ++ unsigned int no_stream_clean_at_suspend:1; /* do not clean streams at suspend */
37484 +
37485 + #ifdef CONFIG_PM
37486 + unsigned long power_on_acct;
37487 +diff --git a/include/sound/pcm.h b/include/sound/pcm.h
37488 +index 7b1a022910e8e..27040b472a4f6 100644
37489 +--- a/include/sound/pcm.h
37490 ++++ b/include/sound/pcm.h
37491 +@@ -106,24 +106,24 @@ struct snd_pcm_ops {
37492 + #define SNDRV_PCM_POS_XRUN ((snd_pcm_uframes_t)-1)
37493 +
37494 + /* If you change this don't forget to change rates[] table in pcm_native.c */
37495 +-#define SNDRV_PCM_RATE_5512 (1<<0) /* 5512Hz */
37496 +-#define SNDRV_PCM_RATE_8000 (1<<1) /* 8000Hz */
37497 +-#define SNDRV_PCM_RATE_11025 (1<<2) /* 11025Hz */
37498 +-#define SNDRV_PCM_RATE_16000 (1<<3) /* 16000Hz */
37499 +-#define SNDRV_PCM_RATE_22050 (1<<4) /* 22050Hz */
37500 +-#define SNDRV_PCM_RATE_32000 (1<<5) /* 32000Hz */
37501 +-#define SNDRV_PCM_RATE_44100 (1<<6) /* 44100Hz */
37502 +-#define SNDRV_PCM_RATE_48000 (1<<7) /* 48000Hz */
37503 +-#define SNDRV_PCM_RATE_64000 (1<<8) /* 64000Hz */
37504 +-#define SNDRV_PCM_RATE_88200 (1<<9) /* 88200Hz */
37505 +-#define SNDRV_PCM_RATE_96000 (1<<10) /* 96000Hz */
37506 +-#define SNDRV_PCM_RATE_176400 (1<<11) /* 176400Hz */
37507 +-#define SNDRV_PCM_RATE_192000 (1<<12) /* 192000Hz */
37508 +-#define SNDRV_PCM_RATE_352800 (1<<13) /* 352800Hz */
37509 +-#define SNDRV_PCM_RATE_384000 (1<<14) /* 384000Hz */
37510 +-
37511 +-#define SNDRV_PCM_RATE_CONTINUOUS (1<<30) /* continuous range */
37512 +-#define SNDRV_PCM_RATE_KNOT (1<<31) /* supports more non-continuos rates */
37513 ++#define SNDRV_PCM_RATE_5512 (1U<<0) /* 5512Hz */
37514 ++#define SNDRV_PCM_RATE_8000 (1U<<1) /* 8000Hz */
37515 ++#define SNDRV_PCM_RATE_11025 (1U<<2) /* 11025Hz */
37516 ++#define SNDRV_PCM_RATE_16000 (1U<<3) /* 16000Hz */
37517 ++#define SNDRV_PCM_RATE_22050 (1U<<4) /* 22050Hz */
37518 ++#define SNDRV_PCM_RATE_32000 (1U<<5) /* 32000Hz */
37519 ++#define SNDRV_PCM_RATE_44100 (1U<<6) /* 44100Hz */
37520 ++#define SNDRV_PCM_RATE_48000 (1U<<7) /* 48000Hz */
37521 ++#define SNDRV_PCM_RATE_64000 (1U<<8) /* 64000Hz */
37522 ++#define SNDRV_PCM_RATE_88200 (1U<<9) /* 88200Hz */
37523 ++#define SNDRV_PCM_RATE_96000 (1U<<10) /* 96000Hz */
37524 ++#define SNDRV_PCM_RATE_176400 (1U<<11) /* 176400Hz */
37525 ++#define SNDRV_PCM_RATE_192000 (1U<<12) /* 192000Hz */
37526 ++#define SNDRV_PCM_RATE_352800 (1U<<13) /* 352800Hz */
37527 ++#define SNDRV_PCM_RATE_384000 (1U<<14) /* 384000Hz */
37528 ++
37529 ++#define SNDRV_PCM_RATE_CONTINUOUS (1U<<30) /* continuous range */
37530 ++#define SNDRV_PCM_RATE_KNOT (1U<<31) /* supports more non-continuos rates */
37531 +
37532 + #define SNDRV_PCM_RATE_8000_44100 (SNDRV_PCM_RATE_8000|SNDRV_PCM_RATE_11025|\
37533 + SNDRV_PCM_RATE_16000|SNDRV_PCM_RATE_22050|\
37534 +diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
37535 +index c6b372401c278..ff57e7f9914cc 100644
37536 +--- a/include/trace/events/f2fs.h
37537 ++++ b/include/trace/events/f2fs.h
37538 +@@ -322,7 +322,7 @@ TRACE_EVENT(f2fs_unlink_enter,
37539 + __field(ino_t, ino)
37540 + __field(loff_t, size)
37541 + __field(blkcnt_t, blocks)
37542 +- __field(const char *, name)
37543 ++ __string(name, dentry->d_name.name)
37544 + ),
37545 +
37546 + TP_fast_assign(
37547 +@@ -330,7 +330,7 @@ TRACE_EVENT(f2fs_unlink_enter,
37548 + __entry->ino = dir->i_ino;
37549 + __entry->size = dir->i_size;
37550 + __entry->blocks = dir->i_blocks;
37551 +- __entry->name = dentry->d_name.name;
37552 ++ __assign_str(name, dentry->d_name.name);
37553 + ),
37554 +
37555 + TP_printk("dev = (%d,%d), dir ino = %lu, i_size = %lld, "
37556 +@@ -338,7 +338,7 @@ TRACE_EVENT(f2fs_unlink_enter,
37557 + show_dev_ino(__entry),
37558 + __entry->size,
37559 + (unsigned long long)__entry->blocks,
37560 +- __entry->name)
37561 ++ __get_str(name))
37562 + );
37563 +
37564 + DEFINE_EVENT(f2fs__inode_exit, f2fs_unlink_exit,
37565 +@@ -940,25 +940,29 @@ TRACE_EVENT(f2fs_direct_IO_enter,
37566 + TP_STRUCT__entry(
37567 + __field(dev_t, dev)
37568 + __field(ino_t, ino)
37569 +- __field(struct kiocb *, iocb)
37570 ++ __field(loff_t, ki_pos)
37571 ++ __field(int, ki_flags)
37572 ++ __field(u16, ki_ioprio)
37573 + __field(unsigned long, len)
37574 + __field(int, rw)
37575 + ),
37576 +
37577 + TP_fast_assign(
37578 +- __entry->dev = inode->i_sb->s_dev;
37579 +- __entry->ino = inode->i_ino;
37580 +- __entry->iocb = iocb;
37581 +- __entry->len = len;
37582 +- __entry->rw = rw;
37583 ++ __entry->dev = inode->i_sb->s_dev;
37584 ++ __entry->ino = inode->i_ino;
37585 ++ __entry->ki_pos = iocb->ki_pos;
37586 ++ __entry->ki_flags = iocb->ki_flags;
37587 ++ __entry->ki_ioprio = iocb->ki_ioprio;
37588 ++ __entry->len = len;
37589 ++ __entry->rw = rw;
37590 + ),
37591 +
37592 + TP_printk("dev = (%d,%d), ino = %lu pos = %lld len = %lu ki_flags = %x ki_ioprio = %x rw = %d",
37593 + show_dev_ino(__entry),
37594 +- __entry->iocb->ki_pos,
37595 ++ __entry->ki_pos,
37596 + __entry->len,
37597 +- __entry->iocb->ki_flags,
37598 +- __entry->iocb->ki_ioprio,
37599 ++ __entry->ki_flags,
37600 ++ __entry->ki_ioprio,
37601 + __entry->rw)
37602 + );
37603 +
37604 +@@ -1407,19 +1411,19 @@ TRACE_EVENT(f2fs_write_checkpoint,
37605 + TP_STRUCT__entry(
37606 + __field(dev_t, dev)
37607 + __field(int, reason)
37608 +- __field(char *, msg)
37609 ++ __string(dest_msg, msg)
37610 + ),
37611 +
37612 + TP_fast_assign(
37613 + __entry->dev = sb->s_dev;
37614 + __entry->reason = reason;
37615 +- __entry->msg = msg;
37616 ++ __assign_str(dest_msg, msg);
37617 + ),
37618 +
37619 + TP_printk("dev = (%d,%d), checkpoint for %s, state = %s",
37620 + show_dev(__entry->dev),
37621 + show_cpreason(__entry->reason),
37622 +- __entry->msg)
37623 ++ __get_str(dest_msg))
37624 + );
37625 +
37626 + DECLARE_EVENT_CLASS(f2fs_discard,
37627 +diff --git a/include/trace/events/ib_mad.h b/include/trace/events/ib_mad.h
37628 +index 59363a083ecb9..d92691c78cff6 100644
37629 +--- a/include/trace/events/ib_mad.h
37630 ++++ b/include/trace/events/ib_mad.h
37631 +@@ -49,7 +49,6 @@ DECLARE_EVENT_CLASS(ib_mad_send_template,
37632 + __field(int, retries_left)
37633 + __field(int, max_retries)
37634 + __field(int, retry)
37635 +- __field(u16, pkey)
37636 + ),
37637 +
37638 + TP_fast_assign(
37639 +@@ -89,7 +88,7 @@ DECLARE_EVENT_CLASS(ib_mad_send_template,
37640 + "hdr : base_ver 0x%x class 0x%x class_ver 0x%x " \
37641 + "method 0x%x status 0x%x class_specific 0x%x tid 0x%llx " \
37642 + "attr_id 0x%x attr_mod 0x%x => dlid 0x%08x sl %d "\
37643 +- "pkey 0x%x rpqn 0x%x rqpkey 0x%x",
37644 ++ "rpqn 0x%x rqpkey 0x%x",
37645 + __entry->dev_index, __entry->port_num, __entry->qp_num,
37646 + __entry->agent_priv, be64_to_cpu(__entry->wrtid),
37647 + __entry->retries_left, __entry->max_retries,
37648 +@@ -100,7 +99,7 @@ DECLARE_EVENT_CLASS(ib_mad_send_template,
37649 + be16_to_cpu(__entry->class_specific),
37650 + be64_to_cpu(__entry->tid), be16_to_cpu(__entry->attr_id),
37651 + be32_to_cpu(__entry->attr_mod),
37652 +- be32_to_cpu(__entry->dlid), __entry->sl, __entry->pkey,
37653 ++ be32_to_cpu(__entry->dlid), __entry->sl,
37654 + __entry->rqpn, __entry->rqkey
37655 + )
37656 + );
37657 +@@ -204,7 +203,6 @@ TRACE_EVENT(ib_mad_recv_done_handler,
37658 + __field(u16, wc_status)
37659 + __field(u32, slid)
37660 + __field(u32, dev_index)
37661 +- __field(u16, pkey)
37662 + ),
37663 +
37664 + TP_fast_assign(
37665 +@@ -224,9 +222,6 @@ TRACE_EVENT(ib_mad_recv_done_handler,
37666 + __entry->slid = wc->slid;
37667 + __entry->src_qp = wc->src_qp;
37668 + __entry->sl = wc->sl;
37669 +- ib_query_pkey(qp_info->port_priv->device,
37670 +- qp_info->port_priv->port_num,
37671 +- wc->pkey_index, &__entry->pkey);
37672 + __entry->wc_status = wc->status;
37673 + ),
37674 +
37675 +@@ -234,7 +229,7 @@ TRACE_EVENT(ib_mad_recv_done_handler,
37676 + "base_ver 0x%02x class 0x%02x class_ver 0x%02x " \
37677 + "method 0x%02x status 0x%04x class_specific 0x%04x " \
37678 + "tid 0x%016llx attr_id 0x%04x attr_mod 0x%08x " \
37679 +- "slid 0x%08x src QP%d, sl %d pkey 0x%04x",
37680 ++ "slid 0x%08x src QP%d, sl %d",
37681 + __entry->dev_index, __entry->port_num, __entry->qp_num,
37682 + __entry->wc_status,
37683 + __entry->length,
37684 +@@ -244,7 +239,7 @@ TRACE_EVENT(ib_mad_recv_done_handler,
37685 + be16_to_cpu(__entry->class_specific),
37686 + be64_to_cpu(__entry->tid), be16_to_cpu(__entry->attr_id),
37687 + be32_to_cpu(__entry->attr_mod),
37688 +- __entry->slid, __entry->src_qp, __entry->sl, __entry->pkey
37689 ++ __entry->slid, __entry->src_qp, __entry->sl
37690 + )
37691 + );
37692 +
37693 +diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
37694 +index 2b9e7feba3f32..1d553bedbdb51 100644
37695 +--- a/include/uapi/linux/idxd.h
37696 ++++ b/include/uapi/linux/idxd.h
37697 +@@ -295,7 +295,7 @@ struct dsa_completion_record {
37698 + };
37699 +
37700 + uint32_t delta_rec_size;
37701 +- uint32_t crc_val;
37702 ++ uint64_t crc_val;
37703 +
37704 + /* DIF check & strip */
37705 + struct {
37706 +diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
37707 +index 2df3225b562fa..9d4c4078e8d00 100644
37708 +--- a/include/uapi/linux/io_uring.h
37709 ++++ b/include/uapi/linux/io_uring.h
37710 +@@ -296,10 +296,28 @@ enum io_uring_op {
37711 + *
37712 + * IORING_RECVSEND_FIXED_BUF Use registered buffers, the index is stored in
37713 + * the buf_index field.
37714 ++ *
37715 ++ * IORING_SEND_ZC_REPORT_USAGE
37716 ++ * If set, SEND[MSG]_ZC should report
37717 ++ * the zerocopy usage in cqe.res
37718 ++ * for the IORING_CQE_F_NOTIF cqe.
37719 ++ * 0 is reported if zerocopy was actually possible.
37720 ++ * IORING_NOTIF_USAGE_ZC_COPIED if data was copied
37721 ++ * (at least partially).
37722 + */
37723 + #define IORING_RECVSEND_POLL_FIRST (1U << 0)
37724 + #define IORING_RECV_MULTISHOT (1U << 1)
37725 + #define IORING_RECVSEND_FIXED_BUF (1U << 2)
37726 ++#define IORING_SEND_ZC_REPORT_USAGE (1U << 3)
37727 ++
37728 ++/*
37729 ++ * cqe.res for IORING_CQE_F_NOTIF if
37730 ++ * IORING_SEND_ZC_REPORT_USAGE was requested
37731 ++ *
37732 ++ * It should be treated as a flag, all other
37733 ++ * bits of cqe.res should be treated as reserved!
37734 ++ */
37735 ++#define IORING_NOTIF_USAGE_ZC_COPIED (1U << 31)
37736 +
37737 + /*
37738 + * accept flags stored in sqe->ioprio
37739 +diff --git a/include/uapi/linux/swab.h b/include/uapi/linux/swab.h
37740 +index 0723a9cce747c..01717181339eb 100644
37741 +--- a/include/uapi/linux/swab.h
37742 ++++ b/include/uapi/linux/swab.h
37743 +@@ -3,7 +3,7 @@
37744 + #define _UAPI_LINUX_SWAB_H
37745 +
37746 + #include <linux/types.h>
37747 +-#include <linux/compiler.h>
37748 ++#include <linux/stddef.h>
37749 + #include <asm/bitsperlong.h>
37750 + #include <asm/swab.h>
37751 +
37752 +diff --git a/include/uapi/rdma/hns-abi.h b/include/uapi/rdma/hns-abi.h
37753 +index f6fde06db4b4e..745790ce3c261 100644
37754 +--- a/include/uapi/rdma/hns-abi.h
37755 ++++ b/include/uapi/rdma/hns-abi.h
37756 +@@ -85,11 +85,26 @@ struct hns_roce_ib_create_qp_resp {
37757 + __aligned_u64 dwqe_mmap_key;
37758 + };
37759 +
37760 ++enum {
37761 ++ HNS_ROCE_EXSGE_FLAGS = 1 << 0,
37762 ++};
37763 ++
37764 ++enum {
37765 ++ HNS_ROCE_RSP_EXSGE_FLAGS = 1 << 0,
37766 ++};
37767 ++
37768 + struct hns_roce_ib_alloc_ucontext_resp {
37769 + __u32 qp_tab_size;
37770 + __u32 cqe_size;
37771 + __u32 srq_tab_size;
37772 + __u32 reserved;
37773 ++ __u32 config;
37774 ++ __u32 max_inline_data;
37775 ++};
37776 ++
37777 ++struct hns_roce_ib_alloc_ucontext {
37778 ++ __u32 config;
37779 ++ __u32 reserved;
37780 + };
37781 +
37782 + struct hns_roce_ib_alloc_pd_resp {
37783 +diff --git a/include/uapi/sound/asequencer.h b/include/uapi/sound/asequencer.h
37784 +index 6d4a2c60808dd..00d2703e8fca5 100644
37785 +--- a/include/uapi/sound/asequencer.h
37786 ++++ b/include/uapi/sound/asequencer.h
37787 +@@ -328,10 +328,10 @@ typedef int __bitwise snd_seq_client_type_t;
37788 + #define KERNEL_CLIENT ((__force snd_seq_client_type_t) 2)
37789 +
37790 + /* event filter flags */
37791 +-#define SNDRV_SEQ_FILTER_BROADCAST (1<<0) /* accept broadcast messages */
37792 +-#define SNDRV_SEQ_FILTER_MULTICAST (1<<1) /* accept multicast messages */
37793 +-#define SNDRV_SEQ_FILTER_BOUNCE (1<<2) /* accept bounce event in error */
37794 +-#define SNDRV_SEQ_FILTER_USE_EVENT (1<<31) /* use event filter */
37795 ++#define SNDRV_SEQ_FILTER_BROADCAST (1U<<0) /* accept broadcast messages */
37796 ++#define SNDRV_SEQ_FILTER_MULTICAST (1U<<1) /* accept multicast messages */
37797 ++#define SNDRV_SEQ_FILTER_BOUNCE (1U<<2) /* accept bounce event in error */
37798 ++#define SNDRV_SEQ_FILTER_USE_EVENT (1U<<31) /* use event filter */
37799 +
37800 + struct snd_seq_client_info {
37801 + int client; /* client number to inquire */
37802 +diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
37803 +index 61cd7ffd0f6aa..17771cb3c3330 100644
37804 +--- a/io_uring/io_uring.c
37805 ++++ b/io_uring/io_uring.c
37806 +@@ -1757,7 +1757,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
37807 + return ret;
37808 +
37809 + /* If the op doesn't have a file, we're not polling for it */
37810 +- if ((req->ctx->flags & IORING_SETUP_IOPOLL) && req->file)
37811 ++ if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
37812 + io_iopoll_req_issued(req, issue_flags);
37813 +
37814 + return 0;
37815 +diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
37816 +index 90d2fc6fd80e4..a49ccab262d53 100644
37817 +--- a/io_uring/msg_ring.c
37818 ++++ b/io_uring/msg_ring.c
37819 +@@ -164,12 +164,10 @@ int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags)
37820 + }
37821 +
37822 + done:
37823 ++ if (ret == -EAGAIN)
37824 ++ return -EAGAIN;
37825 + if (ret < 0)
37826 + req_set_fail(req);
37827 + io_req_set_res(req, ret, 0);
37828 +- /* put file to avoid an attempt to IOPOLL the req */
37829 +- if (!(req->flags & REQ_F_FIXED_FILE))
37830 +- io_put_file(req->file);
37831 +- req->file = NULL;
37832 + return IOU_OK;
37833 + }
37834 +diff --git a/io_uring/net.c b/io_uring/net.c
37835 +index ab83da7e80f04..bdd2b4e370b35 100644
37836 +--- a/io_uring/net.c
37837 ++++ b/io_uring/net.c
37838 +@@ -479,6 +479,7 @@ static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
37839 + if (req->flags & REQ_F_BUFFER_SELECT) {
37840 + compat_ssize_t clen;
37841 +
37842 ++ iomsg->free_iov = NULL;
37843 + if (msg.msg_iovlen == 0) {
37844 + sr->len = 0;
37845 + } else if (msg.msg_iovlen > 1) {
37846 +@@ -805,10 +806,10 @@ retry_multishot:
37847 + goto retry_multishot;
37848 +
37849 + if (mshot_finished) {
37850 +- io_netmsg_recycle(req, issue_flags);
37851 + /* fast path, check for non-NULL to avoid function call */
37852 + if (kmsg->free_iov)
37853 + kfree(kmsg->free_iov);
37854 ++ io_netmsg_recycle(req, issue_flags);
37855 + req->flags &= ~REQ_F_NEED_CLEANUP;
37856 + }
37857 +
37858 +@@ -937,7 +938,8 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
37859 +
37860 + zc->flags = READ_ONCE(sqe->ioprio);
37861 + if (zc->flags & ~(IORING_RECVSEND_POLL_FIRST |
37862 +- IORING_RECVSEND_FIXED_BUF))
37863 ++ IORING_RECVSEND_FIXED_BUF |
37864 ++ IORING_SEND_ZC_REPORT_USAGE))
37865 + return -EINVAL;
37866 + notif = zc->notif = io_alloc_notif(ctx);
37867 + if (!notif)
37868 +@@ -955,6 +957,9 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
37869 + req->imu = READ_ONCE(ctx->user_bufs[idx]);
37870 + io_req_set_rsrc_node(notif, ctx, 0);
37871 + }
37872 ++ if (zc->flags & IORING_SEND_ZC_REPORT_USAGE) {
37873 ++ io_notif_to_data(notif)->zc_report = true;
37874 ++ }
37875 +
37876 + if (req->opcode == IORING_OP_SEND_ZC) {
37877 + if (READ_ONCE(sqe->__pad3[0]))
37878 +diff --git a/io_uring/notif.c b/io_uring/notif.c
37879 +index e37c6569d82e8..4bfef10161fa0 100644
37880 +--- a/io_uring/notif.c
37881 ++++ b/io_uring/notif.c
37882 +@@ -18,6 +18,10 @@ static void __io_notif_complete_tw(struct io_kiocb *notif, bool *locked)
37883 + __io_unaccount_mem(ctx->user, nd->account_pages);
37884 + nd->account_pages = 0;
37885 + }
37886 ++
37887 ++ if (nd->zc_report && (nd->zc_copied || !nd->zc_used))
37888 ++ notif->cqe.res |= IORING_NOTIF_USAGE_ZC_COPIED;
37889 ++
37890 + io_req_task_complete(notif, locked);
37891 + }
37892 +
37893 +@@ -28,6 +32,13 @@ static void io_uring_tx_zerocopy_callback(struct sk_buff *skb,
37894 + struct io_notif_data *nd = container_of(uarg, struct io_notif_data, uarg);
37895 + struct io_kiocb *notif = cmd_to_io_kiocb(nd);
37896 +
37897 ++ if (nd->zc_report) {
37898 ++ if (success && !nd->zc_used && skb)
37899 ++ WRITE_ONCE(nd->zc_used, true);
37900 ++ else if (!success && !nd->zc_copied)
37901 ++ WRITE_ONCE(nd->zc_copied, true);
37902 ++ }
37903 ++
37904 + if (refcount_dec_and_test(&uarg->refcnt)) {
37905 + notif->io_task_work.func = __io_notif_complete_tw;
37906 + io_req_task_work_add(notif);
37907 +@@ -55,6 +66,7 @@ struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx)
37908 + nd->account_pages = 0;
37909 + nd->uarg.flags = SKBFL_ZEROCOPY_FRAG | SKBFL_DONT_ORPHAN;
37910 + nd->uarg.callback = io_uring_tx_zerocopy_callback;
37911 ++ nd->zc_report = nd->zc_used = nd->zc_copied = false;
37912 + refcount_set(&nd->uarg.refcnt, 1);
37913 + return notif;
37914 + }
37915 +diff --git a/io_uring/notif.h b/io_uring/notif.h
37916 +index 5b4d710c8ca54..4ae696273c781 100644
37917 +--- a/io_uring/notif.h
37918 ++++ b/io_uring/notif.h
37919 +@@ -13,6 +13,9 @@ struct io_notif_data {
37920 + struct file *file;
37921 + struct ubuf_info uarg;
37922 + unsigned long account_pages;
37923 ++ bool zc_report;
37924 ++ bool zc_used;
37925 ++ bool zc_copied;
37926 + };
37927 +
37928 + void io_notif_flush(struct io_kiocb *notif);
37929 +diff --git a/io_uring/opdef.c b/io_uring/opdef.c
37930 +index 83dc0f9ad3b2f..04dd2c983fce4 100644
37931 +--- a/io_uring/opdef.c
37932 ++++ b/io_uring/opdef.c
37933 +@@ -63,6 +63,7 @@ const struct io_op_def io_op_defs[] = {
37934 + .audit_skip = 1,
37935 + .ioprio = 1,
37936 + .iopoll = 1,
37937 ++ .iopoll_queue = 1,
37938 + .async_size = sizeof(struct io_async_rw),
37939 + .name = "READV",
37940 + .prep = io_prep_rw,
37941 +@@ -80,6 +81,7 @@ const struct io_op_def io_op_defs[] = {
37942 + .audit_skip = 1,
37943 + .ioprio = 1,
37944 + .iopoll = 1,
37945 ++ .iopoll_queue = 1,
37946 + .async_size = sizeof(struct io_async_rw),
37947 + .name = "WRITEV",
37948 + .prep = io_prep_rw,
37949 +@@ -103,6 +105,7 @@ const struct io_op_def io_op_defs[] = {
37950 + .audit_skip = 1,
37951 + .ioprio = 1,
37952 + .iopoll = 1,
37953 ++ .iopoll_queue = 1,
37954 + .async_size = sizeof(struct io_async_rw),
37955 + .name = "READ_FIXED",
37956 + .prep = io_prep_rw,
37957 +@@ -118,6 +121,7 @@ const struct io_op_def io_op_defs[] = {
37958 + .audit_skip = 1,
37959 + .ioprio = 1,
37960 + .iopoll = 1,
37961 ++ .iopoll_queue = 1,
37962 + .async_size = sizeof(struct io_async_rw),
37963 + .name = "WRITE_FIXED",
37964 + .prep = io_prep_rw,
37965 +@@ -277,6 +281,7 @@ const struct io_op_def io_op_defs[] = {
37966 + .audit_skip = 1,
37967 + .ioprio = 1,
37968 + .iopoll = 1,
37969 ++ .iopoll_queue = 1,
37970 + .async_size = sizeof(struct io_async_rw),
37971 + .name = "READ",
37972 + .prep = io_prep_rw,
37973 +@@ -292,6 +297,7 @@ const struct io_op_def io_op_defs[] = {
37974 + .audit_skip = 1,
37975 + .ioprio = 1,
37976 + .iopoll = 1,
37977 ++ .iopoll_queue = 1,
37978 + .async_size = sizeof(struct io_async_rw),
37979 + .name = "WRITE",
37980 + .prep = io_prep_rw,
37981 +@@ -481,6 +487,7 @@ const struct io_op_def io_op_defs[] = {
37982 + .plug = 1,
37983 + .name = "URING_CMD",
37984 + .iopoll = 1,
37985 ++ .iopoll_queue = 1,
37986 + .async_size = uring_cmd_pdu_size(1),
37987 + .prep = io_uring_cmd_prep,
37988 + .issue = io_uring_cmd,
37989 +diff --git a/io_uring/opdef.h b/io_uring/opdef.h
37990 +index 3efe06d25473a..df7e13d9bfba7 100644
37991 +--- a/io_uring/opdef.h
37992 ++++ b/io_uring/opdef.h
37993 +@@ -25,6 +25,8 @@ struct io_op_def {
37994 + unsigned ioprio : 1;
37995 + /* supports iopoll */
37996 + unsigned iopoll : 1;
37997 ++ /* have to be put into the iopoll list */
37998 ++ unsigned iopoll_queue : 1;
37999 + /* opcode specific path will handle ->async_data allocation if needed */
38000 + unsigned manual_alloc : 1;
38001 + /* size of async data needed, if any */
38002 +diff --git a/io_uring/timeout.c b/io_uring/timeout.c
38003 +index e8a8c20994805..06200fe73a044 100644
38004 +--- a/io_uring/timeout.c
38005 ++++ b/io_uring/timeout.c
38006 +@@ -72,10 +72,12 @@ static bool io_kill_timeout(struct io_kiocb *req, int status)
38007 + __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
38008 + __must_hold(&ctx->completion_lock)
38009 + {
38010 +- u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
38011 ++ u32 seq;
38012 + struct io_timeout *timeout, *tmp;
38013 +
38014 + spin_lock_irq(&ctx->timeout_lock);
38015 ++ seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
38016 ++
38017 + list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
38018 + struct io_kiocb *req = cmd_to_io_kiocb(timeout);
38019 + u32 events_needed, events_got;
38020 +diff --git a/ipc/mqueue.c b/ipc/mqueue.c
38021 +index 467a194b8a2ec..d09aa1c1e3e65 100644
38022 +--- a/ipc/mqueue.c
38023 ++++ b/ipc/mqueue.c
38024 +@@ -1726,7 +1726,8 @@ static int __init init_mqueue_fs(void)
38025 +
38026 + if (!setup_mq_sysctls(&init_ipc_ns)) {
38027 + pr_warn("sysctl registration failed\n");
38028 +- return -ENOMEM;
38029 ++ error = -ENOMEM;
38030 ++ goto out_kmem;
38031 + }
38032 +
38033 + error = register_filesystem(&mqueue_fs_type);
38034 +@@ -1744,8 +1745,9 @@ static int __init init_mqueue_fs(void)
38035 + out_filesystem:
38036 + unregister_filesystem(&mqueue_fs_type);
38037 + out_sysctl:
38038 +- kmem_cache_destroy(mqueue_inode_cachep);
38039 + retire_mq_sysctls(&init_ipc_ns);
38040 ++out_kmem:
38041 ++ kmem_cache_destroy(mqueue_inode_cachep);
38042 + return error;
38043 + }
38044 +
38045 +diff --git a/kernel/Makefile b/kernel/Makefile
38046 +index d754e0be1176d..ebc692242b68b 100644
38047 +--- a/kernel/Makefile
38048 ++++ b/kernel/Makefile
38049 +@@ -41,9 +41,6 @@ UBSAN_SANITIZE_kcov.o := n
38050 + KMSAN_SANITIZE_kcov.o := n
38051 + CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector
38052 +
38053 +-# Don't instrument error handlers
38054 +-CFLAGS_REMOVE_cfi.o := $(CC_FLAGS_CFI)
38055 +-
38056 + obj-y += sched/
38057 + obj-y += locking/
38058 + obj-y += power/
38059 +diff --git a/kernel/acct.c b/kernel/acct.c
38060 +index 62200d799b9b0..034a26daabb2e 100644
38061 +--- a/kernel/acct.c
38062 ++++ b/kernel/acct.c
38063 +@@ -350,6 +350,8 @@ static comp_t encode_comp_t(unsigned long value)
38064 + exp++;
38065 + }
38066 +
38067 ++ if (exp > (((comp_t) ~0U) >> MANTSIZE))
38068 ++ return (comp_t) ~0U;
38069 + /*
38070 + * Clean it up and polish it off.
38071 + */
38072 +diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
38073 +index 35c07afac924e..efdbba2a0230e 100644
38074 +--- a/kernel/bpf/btf.c
38075 ++++ b/kernel/bpf/btf.c
38076 +@@ -4481,6 +4481,11 @@ static int btf_func_proto_check(struct btf_verifier_env *env,
38077 + break;
38078 + }
38079 +
38080 ++ if (btf_type_is_resolve_source_only(arg_type)) {
38081 ++ btf_verifier_log_type(env, t, "Invalid arg#%u", i + 1);
38082 ++ return -EINVAL;
38083 ++ }
38084 ++
38085 + if (args[i].name_off &&
38086 + (!btf_name_offset_valid(btf, args[i].name_off) ||
38087 + !btf_name_valid_identifier(btf, args[i].name_off))) {
38088 +diff --git a/kernel/bpf/cgroup_iter.c b/kernel/bpf/cgroup_iter.c
38089 +index 9fcf09f2ef00f..c187a9e62bdbb 100644
38090 +--- a/kernel/bpf/cgroup_iter.c
38091 ++++ b/kernel/bpf/cgroup_iter.c
38092 +@@ -164,16 +164,30 @@ static int cgroup_iter_seq_init(void *priv, struct bpf_iter_aux_info *aux)
38093 + struct cgroup_iter_priv *p = (struct cgroup_iter_priv *)priv;
38094 + struct cgroup *cgrp = aux->cgroup.start;
38095 +
38096 ++ /* bpf_iter_attach_cgroup() has already acquired an extra reference
38097 ++ * for the start cgroup, but the reference may be released after
38098 ++ * cgroup_iter_seq_init(), so acquire another reference for the
38099 ++ * start cgroup.
38100 ++ */
38101 + p->start_css = &cgrp->self;
38102 ++ css_get(p->start_css);
38103 + p->terminate = false;
38104 + p->visited_all = false;
38105 + p->order = aux->cgroup.order;
38106 + return 0;
38107 + }
38108 +
38109 ++static void cgroup_iter_seq_fini(void *priv)
38110 ++{
38111 ++ struct cgroup_iter_priv *p = (struct cgroup_iter_priv *)priv;
38112 ++
38113 ++ css_put(p->start_css);
38114 ++}
38115 ++
38116 + static const struct bpf_iter_seq_info cgroup_iter_seq_info = {
38117 + .seq_ops = &cgroup_iter_seq_ops,
38118 + .init_seq_private = cgroup_iter_seq_init,
38119 ++ .fini_seq_private = cgroup_iter_seq_fini,
38120 + .seq_priv_size = sizeof(struct cgroup_iter_priv),
38121 + };
38122 +
38123 +diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
38124 +index 7b373a5e861f4..439ed7e5a82b8 100644
38125 +--- a/kernel/bpf/syscall.c
38126 ++++ b/kernel/bpf/syscall.c
38127 +@@ -3504,9 +3504,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
38128 + case BPF_PROG_TYPE_LSM:
38129 + if (ptype == BPF_PROG_TYPE_LSM &&
38130 + prog->expected_attach_type != BPF_LSM_CGROUP)
38131 +- return -EINVAL;
38132 +-
38133 +- ret = cgroup_bpf_prog_attach(attr, ptype, prog);
38134 ++ ret = -EINVAL;
38135 ++ else
38136 ++ ret = cgroup_bpf_prog_attach(attr, ptype, prog);
38137 + break;
38138 + default:
38139 + ret = -EINVAL;
38140 +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
38141 +index 264b3dc714cc4..242fe307032f1 100644
38142 +--- a/kernel/bpf/verifier.c
38143 ++++ b/kernel/bpf/verifier.c
38144 +@@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
38145 + if (unlikely(check_mul_overflow(n, size, &bytes)))
38146 + return NULL;
38147 +
38148 +- if (ksize(dst) < bytes) {
38149 ++ if (ksize(dst) < ksize(src)) {
38150 + kfree(dst);
38151 +- dst = kmalloc_track_caller(bytes, flags);
38152 ++ dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
38153 + if (!dst)
38154 + return NULL;
38155 + }
38156 +@@ -1027,12 +1027,14 @@ out:
38157 + */
38158 + static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
38159 + {
38160 ++ size_t alloc_size;
38161 + void *new_arr;
38162 +
38163 + if (!new_n || old_n == new_n)
38164 + goto out;
38165 +
38166 +- new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
38167 ++ alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
38168 ++ new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
38169 + if (!new_arr) {
38170 + kfree(arr);
38171 + return NULL;
38172 +@@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
38173 + {
38174 + u32 cnt = cur->jmp_history_cnt;
38175 + struct bpf_idx_pair *p;
38176 ++ size_t alloc_size;
38177 +
38178 + cnt++;
38179 +- p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
38180 ++ alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
38181 ++ p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
38182 + if (!p)
38183 + return -ENOMEM;
38184 + p[cnt - 1].idx = env->insn_idx;
38185 +@@ -2768,7 +2772,7 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
38186 + }
38187 + }
38188 +
38189 +-static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
38190 ++static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int regno,
38191 + int spi)
38192 + {
38193 + struct bpf_verifier_state *st = env->cur_state;
38194 +@@ -2785,7 +2789,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
38195 + if (!env->bpf_capable)
38196 + return 0;
38197 +
38198 +- func = st->frame[st->curframe];
38199 ++ func = st->frame[frame];
38200 + if (regno >= 0) {
38201 + reg = &func->regs[regno];
38202 + if (reg->type != SCALAR_VALUE) {
38203 +@@ -2866,7 +2870,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
38204 + break;
38205 +
38206 + new_marks = false;
38207 +- func = st->frame[st->curframe];
38208 ++ func = st->frame[frame];
38209 + bitmap_from_u64(mask, reg_mask);
38210 + for_each_set_bit(i, mask, 32) {
38211 + reg = &func->regs[i];
38212 +@@ -2932,12 +2936,17 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
38213 +
38214 + int mark_chain_precision(struct bpf_verifier_env *env, int regno)
38215 + {
38216 +- return __mark_chain_precision(env, regno, -1);
38217 ++ return __mark_chain_precision(env, env->cur_state->curframe, regno, -1);
38218 + }
38219 +
38220 +-static int mark_chain_precision_stack(struct bpf_verifier_env *env, int spi)
38221 ++static int mark_chain_precision_frame(struct bpf_verifier_env *env, int frame, int regno)
38222 + {
38223 +- return __mark_chain_precision(env, -1, spi);
38224 ++ return __mark_chain_precision(env, frame, regno, -1);
38225 ++}
38226 ++
38227 ++static int mark_chain_precision_stack_frame(struct bpf_verifier_env *env, int frame, int spi)
38228 ++{
38229 ++ return __mark_chain_precision(env, frame, -1, spi);
38230 + }
38231 +
38232 + static bool is_spillable_regtype(enum bpf_reg_type type)
38233 +@@ -3186,14 +3195,17 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
38234 + stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
38235 + mark_stack_slot_scratched(env, spi);
38236 +
38237 +- if (!env->allow_ptr_leaks
38238 +- && *stype != NOT_INIT
38239 +- && *stype != SCALAR_VALUE) {
38240 +- /* Reject the write if there's are spilled pointers in
38241 +- * range. If we didn't reject here, the ptr status
38242 +- * would be erased below (even though not all slots are
38243 +- * actually overwritten), possibly opening the door to
38244 +- * leaks.
38245 ++ if (!env->allow_ptr_leaks && *stype != STACK_MISC && *stype != STACK_ZERO) {
38246 ++ /* Reject the write if range we may write to has not
38247 ++ * been initialized beforehand. If we didn't reject
38248 ++ * here, the ptr status would be erased below (even
38249 ++ * though not all slots are actually overwritten),
38250 ++ * possibly opening the door to leaks.
38251 ++ *
38252 ++ * We do however catch STACK_INVALID case below, and
38253 ++ * only allow reading possibly uninitialized memory
38254 ++ * later for CAP_PERFMON, as the write may not happen to
38255 ++ * that slot.
38256 + */
38257 + verbose(env, "spilled ptr in range of var-offset stack write; insn %d, ptr off: %d",
38258 + insn_idx, i);
38259 +@@ -5159,10 +5171,6 @@ static int check_stack_range_initialized(
38260 + goto mark;
38261 + }
38262 +
38263 +- if (is_spilled_reg(&state->stack[spi]) &&
38264 +- base_type(state->stack[spi].spilled_ptr.type) == PTR_TO_BTF_ID)
38265 +- goto mark;
38266 +-
38267 + if (is_spilled_reg(&state->stack[spi]) &&
38268 + (state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
38269 + env->allow_ptr_leaks)) {
38270 +@@ -5193,6 +5201,11 @@ mark:
38271 + mark_reg_read(env, &state->stack[spi].spilled_ptr,
38272 + state->stack[spi].spilled_ptr.parent,
38273 + REG_LIVE_READ64);
38274 ++ /* We do not set REG_LIVE_WRITTEN for stack slot, as we can not
38275 ++ * be sure that whether stack slot is written to or not. Hence,
38276 ++ * we must still conservatively propagate reads upwards even if
38277 ++ * helper may write to the entire memory range.
38278 ++ */
38279 + }
38280 + return update_stack_depth(env, state, min_off);
38281 + }
38282 +@@ -9211,6 +9224,11 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
38283 + return err;
38284 + return adjust_ptr_min_max_vals(env, insn,
38285 + dst_reg, src_reg);
38286 ++ } else if (dst_reg->precise) {
38287 ++ /* if dst_reg is precise, src_reg should be precise as well */
38288 ++ err = mark_chain_precision(env, insn->src_reg);
38289 ++ if (err)
38290 ++ return err;
38291 + }
38292 + } else {
38293 + /* Pretend the src is a reg with a known value, since we only
38294 +@@ -11847,34 +11865,36 @@ static int propagate_precision(struct bpf_verifier_env *env,
38295 + {
38296 + struct bpf_reg_state *state_reg;
38297 + struct bpf_func_state *state;
38298 +- int i, err = 0;
38299 ++ int i, err = 0, fr;
38300 +
38301 +- state = old->frame[old->curframe];
38302 +- state_reg = state->regs;
38303 +- for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
38304 +- if (state_reg->type != SCALAR_VALUE ||
38305 +- !state_reg->precise)
38306 +- continue;
38307 +- if (env->log.level & BPF_LOG_LEVEL2)
38308 +- verbose(env, "propagating r%d\n", i);
38309 +- err = mark_chain_precision(env, i);
38310 +- if (err < 0)
38311 +- return err;
38312 +- }
38313 ++ for (fr = old->curframe; fr >= 0; fr--) {
38314 ++ state = old->frame[fr];
38315 ++ state_reg = state->regs;
38316 ++ for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
38317 ++ if (state_reg->type != SCALAR_VALUE ||
38318 ++ !state_reg->precise)
38319 ++ continue;
38320 ++ if (env->log.level & BPF_LOG_LEVEL2)
38321 ++ verbose(env, "frame %d: propagating r%d\n", i, fr);
38322 ++ err = mark_chain_precision_frame(env, fr, i);
38323 ++ if (err < 0)
38324 ++ return err;
38325 ++ }
38326 +
38327 +- for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
38328 +- if (!is_spilled_reg(&state->stack[i]))
38329 +- continue;
38330 +- state_reg = &state->stack[i].spilled_ptr;
38331 +- if (state_reg->type != SCALAR_VALUE ||
38332 +- !state_reg->precise)
38333 +- continue;
38334 +- if (env->log.level & BPF_LOG_LEVEL2)
38335 +- verbose(env, "propagating fp%d\n",
38336 +- (-i - 1) * BPF_REG_SIZE);
38337 +- err = mark_chain_precision_stack(env, i);
38338 +- if (err < 0)
38339 +- return err;
38340 ++ for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
38341 ++ if (!is_spilled_reg(&state->stack[i]))
38342 ++ continue;
38343 ++ state_reg = &state->stack[i].spilled_ptr;
38344 ++ if (state_reg->type != SCALAR_VALUE ||
38345 ++ !state_reg->precise)
38346 ++ continue;
38347 ++ if (env->log.level & BPF_LOG_LEVEL2)
38348 ++ verbose(env, "frame %d: propagating fp%d\n",
38349 ++ (-i - 1) * BPF_REG_SIZE, fr);
38350 ++ err = mark_chain_precision_stack_frame(env, fr, i);
38351 ++ if (err < 0)
38352 ++ return err;
38353 ++ }
38354 + }
38355 + return 0;
38356 + }
38357 +@@ -13386,6 +13406,10 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
38358 + if (!bpf_jit_needs_zext() && !is_cmpxchg_insn(&insn))
38359 + continue;
38360 +
38361 ++ /* Zero-extension is done by the caller. */
38362 ++ if (bpf_pseudo_kfunc_call(&insn))
38363 ++ continue;
38364 ++
38365 + if (WARN_ON(load_reg == -1)) {
38366 + verbose(env, "verifier bug. zext_dst is set, but no reg is defined\n");
38367 + return -EFAULT;
38368 +diff --git a/kernel/cpu.c b/kernel/cpu.c
38369 +index bbad5e375d3ba..98a7a7b1471b7 100644
38370 +--- a/kernel/cpu.c
38371 ++++ b/kernel/cpu.c
38372 +@@ -663,21 +663,51 @@ static bool cpuhp_next_state(bool bringup,
38373 + return true;
38374 + }
38375 +
38376 +-static int cpuhp_invoke_callback_range(bool bringup,
38377 +- unsigned int cpu,
38378 +- struct cpuhp_cpu_state *st,
38379 +- enum cpuhp_state target)
38380 ++static int __cpuhp_invoke_callback_range(bool bringup,
38381 ++ unsigned int cpu,
38382 ++ struct cpuhp_cpu_state *st,
38383 ++ enum cpuhp_state target,
38384 ++ bool nofail)
38385 + {
38386 + enum cpuhp_state state;
38387 +- int err = 0;
38388 ++ int ret = 0;
38389 +
38390 + while (cpuhp_next_state(bringup, &state, st, target)) {
38391 ++ int err;
38392 ++
38393 + err = cpuhp_invoke_callback(cpu, state, bringup, NULL, NULL);
38394 +- if (err)
38395 ++ if (!err)
38396 ++ continue;
38397 ++
38398 ++ if (nofail) {
38399 ++ pr_warn("CPU %u %s state %s (%d) failed (%d)\n",
38400 ++ cpu, bringup ? "UP" : "DOWN",
38401 ++ cpuhp_get_step(st->state)->name,
38402 ++ st->state, err);
38403 ++ ret = -1;
38404 ++ } else {
38405 ++ ret = err;
38406 + break;
38407 ++ }
38408 + }
38409 +
38410 +- return err;
38411 ++ return ret;
38412 ++}
38413 ++
38414 ++static inline int cpuhp_invoke_callback_range(bool bringup,
38415 ++ unsigned int cpu,
38416 ++ struct cpuhp_cpu_state *st,
38417 ++ enum cpuhp_state target)
38418 ++{
38419 ++ return __cpuhp_invoke_callback_range(bringup, cpu, st, target, false);
38420 ++}
38421 ++
38422 ++static inline void cpuhp_invoke_callback_range_nofail(bool bringup,
38423 ++ unsigned int cpu,
38424 ++ struct cpuhp_cpu_state *st,
38425 ++ enum cpuhp_state target)
38426 ++{
38427 ++ __cpuhp_invoke_callback_range(bringup, cpu, st, target, true);
38428 + }
38429 +
38430 + static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
38431 +@@ -999,7 +1029,6 @@ static int take_cpu_down(void *_param)
38432 + struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);
38433 + enum cpuhp_state target = max((int)st->target, CPUHP_AP_OFFLINE);
38434 + int err, cpu = smp_processor_id();
38435 +- int ret;
38436 +
38437 + /* Ensure this CPU doesn't handle any more interrupts. */
38438 + err = __cpu_disable();
38439 +@@ -1012,13 +1041,10 @@ static int take_cpu_down(void *_param)
38440 + */
38441 + WARN_ON(st->state != (CPUHP_TEARDOWN_CPU - 1));
38442 +
38443 +- /* Invoke the former CPU_DYING callbacks */
38444 +- ret = cpuhp_invoke_callback_range(false, cpu, st, target);
38445 +-
38446 + /*
38447 +- * DYING must not fail!
38448 ++ * Invoke the former CPU_DYING callbacks. DYING must not fail!
38449 + */
38450 +- WARN_ON_ONCE(ret);
38451 ++ cpuhp_invoke_callback_range_nofail(false, cpu, st, target);
38452 +
38453 + /* Give up timekeeping duties */
38454 + tick_handover_do_timer();
38455 +@@ -1296,16 +1322,14 @@ void notify_cpu_starting(unsigned int cpu)
38456 + {
38457 + struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
38458 + enum cpuhp_state target = min((int)st->target, CPUHP_AP_ONLINE);
38459 +- int ret;
38460 +
38461 + rcu_cpu_starting(cpu); /* Enables RCU usage on this CPU. */
38462 + cpumask_set_cpu(cpu, &cpus_booted_once_mask);
38463 +- ret = cpuhp_invoke_callback_range(true, cpu, st, target);
38464 +
38465 + /*
38466 + * STARTING must not fail!
38467 + */
38468 +- WARN_ON_ONCE(ret);
38469 ++ cpuhp_invoke_callback_range_nofail(true, cpu, st, target);
38470 + }
38471 +
38472 + /*
38473 +@@ -2326,8 +2350,10 @@ static ssize_t target_store(struct device *dev, struct device_attribute *attr,
38474 +
38475 + if (st->state < target)
38476 + ret = cpu_up(dev->id, target);
38477 +- else
38478 ++ else if (st->state > target)
38479 + ret = cpu_down(dev->id, target);
38480 ++ else if (WARN_ON(st->target != target))
38481 ++ st->target = target;
38482 + out:
38483 + unlock_device_hotplug();
38484 + return ret ? ret : count;
38485 +diff --git a/kernel/events/core.c b/kernel/events/core.c
38486 +index 7f04f995c9754..732b392fc5c63 100644
38487 +--- a/kernel/events/core.c
38488 ++++ b/kernel/events/core.c
38489 +@@ -11193,13 +11193,15 @@ static int pmu_dev_alloc(struct pmu *pmu)
38490 +
38491 + pmu->dev->groups = pmu->attr_groups;
38492 + device_initialize(pmu->dev);
38493 +- ret = dev_set_name(pmu->dev, "%s", pmu->name);
38494 +- if (ret)
38495 +- goto free_dev;
38496 +
38497 + dev_set_drvdata(pmu->dev, pmu);
38498 + pmu->dev->bus = &pmu_bus;
38499 + pmu->dev->release = pmu_dev_release;
38500 ++
38501 ++ ret = dev_set_name(pmu->dev, "%s", pmu->name);
38502 ++ if (ret)
38503 ++ goto free_dev;
38504 ++
38505 + ret = device_add(pmu->dev);
38506 + if (ret)
38507 + goto free_dev;
38508 +diff --git a/kernel/fork.c b/kernel/fork.c
38509 +index 08969f5aa38d5..844dfdc8c639c 100644
38510 +--- a/kernel/fork.c
38511 ++++ b/kernel/fork.c
38512 +@@ -535,6 +535,9 @@ void put_task_stack(struct task_struct *tsk)
38513 +
38514 + void free_task(struct task_struct *tsk)
38515 + {
38516 ++#ifdef CONFIG_SECCOMP
38517 ++ WARN_ON_ONCE(tsk->seccomp.filter);
38518 ++#endif
38519 + release_user_cpus_ptr(tsk);
38520 + scs_release(tsk);
38521 +
38522 +@@ -2406,12 +2409,6 @@ static __latent_entropy struct task_struct *copy_process(
38523 +
38524 + spin_lock(&current->sighand->siglock);
38525 +
38526 +- /*
38527 +- * Copy seccomp details explicitly here, in case they were changed
38528 +- * before holding sighand lock.
38529 +- */
38530 +- copy_seccomp(p);
38531 +-
38532 + rv_task_fork(p);
38533 +
38534 + rseq_fork(p, clone_flags);
38535 +@@ -2428,6 +2425,14 @@ static __latent_entropy struct task_struct *copy_process(
38536 + goto bad_fork_cancel_cgroup;
38537 + }
38538 +
38539 ++ /* No more failure paths after this point. */
38540 ++
38541 ++ /*
38542 ++ * Copy seccomp details explicitly here, in case they were changed
38543 ++ * before holding sighand lock.
38544 ++ */
38545 ++ copy_seccomp(p);
38546 ++
38547 + init_task_pid_links(p);
38548 + if (likely(p->pid)) {
38549 + ptrace_init_task(p, (clone_flags & CLONE_PTRACE) || trace);
38550 +diff --git a/kernel/futex/core.c b/kernel/futex/core.c
38551 +index b22ef1efe7511..514e4582b8634 100644
38552 +--- a/kernel/futex/core.c
38553 ++++ b/kernel/futex/core.c
38554 +@@ -638,6 +638,7 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr,
38555 + bool pi, bool pending_op)
38556 + {
38557 + u32 uval, nval, mval;
38558 ++ pid_t owner;
38559 + int err;
38560 +
38561 + /* Futex address must be 32bit aligned */
38562 +@@ -659,6 +660,10 @@ retry:
38563 + * 2. A woken up waiter is killed before it can acquire the
38564 + * futex in user space.
38565 + *
38566 ++ * In the second case, the wake up notification could be generated
38567 ++ * by the unlock path in user space after setting the futex value
38568 ++ * to zero or by the kernel after setting the OWNER_DIED bit below.
38569 ++ *
38570 + * In both cases the TID validation below prevents a wakeup of
38571 + * potential waiters which can cause these waiters to block
38572 + * forever.
38573 +@@ -667,24 +672,27 @@ retry:
38574 + *
38575 + * 1) task->robust_list->list_op_pending != NULL
38576 + * @pending_op == true
38577 +- * 2) User space futex value == 0
38578 ++ * 2) The owner part of user space futex value == 0
38579 + * 3) Regular futex: @pi == false
38580 + *
38581 + * If these conditions are met, it is safe to attempt waking up a
38582 + * potential waiter without touching the user space futex value and
38583 +- * trying to set the OWNER_DIED bit. The user space futex value is
38584 +- * uncontended and the rest of the user space mutex state is
38585 +- * consistent, so a woken waiter will just take over the
38586 +- * uncontended futex. Setting the OWNER_DIED bit would create
38587 +- * inconsistent state and malfunction of the user space owner died
38588 +- * handling.
38589 ++ * trying to set the OWNER_DIED bit. If the futex value is zero,
38590 ++ * the rest of the user space mutex state is consistent, so a woken
38591 ++ * waiter will just take over the uncontended futex. Setting the
38592 ++ * OWNER_DIED bit would create inconsistent state and malfunction
38593 ++ * of the user space owner died handling. Otherwise, the OWNER_DIED
38594 ++ * bit is already set, and the woken waiter is expected to deal with
38595 ++ * this.
38596 + */
38597 +- if (pending_op && !pi && !uval) {
38598 ++ owner = uval & FUTEX_TID_MASK;
38599 ++
38600 ++ if (pending_op && !pi && !owner) {
38601 + futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
38602 + return 0;
38603 + }
38604 +
38605 +- if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
38606 ++ if (owner != task_pid_vnr(curr))
38607 + return 0;
38608 +
38609 + /*
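The futex hunk above changes the pending_op fast path to test only the owner-TID portion of the futex word, so a wakeup is still delivered when status bits such as FUTEX_OWNER_DIED are already set but no owner is recorded. A small sketch of the revised check, using the constant values from include/uapi/linux/futex.h; the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Futex word layout (values from include/uapi/linux/futex.h). */
#define FUTEX_OWNER_DIED	0x40000000u
#define FUTEX_WAITERS		0x80000000u
#define FUTEX_TID_MASK		0x3fffffffu

/* Extract the owner TID once; a pending_op wakeup is safe whenever the
 * owner field is zero, even if OWNER_DIED/WAITERS bits are set. The old
 * code required the whole word to be zero and so missed the case where
 * the kernel had already set OWNER_DIED. */
static int pending_wakeup_is_safe(uint32_t uval, int pi, int pending_op)
{
	uint32_t owner = uval & FUTEX_TID_MASK;

	return pending_op && !pi && owner == 0;
}
```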
38610 +diff --git a/kernel/gcov/gcc_4_7.c b/kernel/gcov/gcc_4_7.c
38611 +index 7971e989e425b..74a4ef1da9ad7 100644
38612 +--- a/kernel/gcov/gcc_4_7.c
38613 ++++ b/kernel/gcov/gcc_4_7.c
38614 +@@ -82,6 +82,7 @@ struct gcov_fn_info {
38615 + * @version: gcov version magic indicating the gcc version used for compilation
38616 + * @next: list head for a singly-linked list
38617 + * @stamp: uniquifying time stamp
38618 ++ * @checksum: unique object checksum
38619 + * @filename: name of the associated gcov data file
38620 + * @merge: merge functions (null for unused counter type)
38621 + * @n_functions: number of instrumented functions
38622 +@@ -94,6 +95,10 @@ struct gcov_info {
38623 + unsigned int version;
38624 + struct gcov_info *next;
38625 + unsigned int stamp;
38626 ++ /* Since GCC 12.1 a checksum field is added. */
38627 ++#if (__GNUC__ >= 12)
38628 ++ unsigned int checksum;
38629 ++#endif
38630 + const char *filename;
38631 + void (*merge[GCOV_COUNTERS])(gcov_type *, unsigned int);
38632 + unsigned int n_functions;
38633 +diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
38634 +index f09c60393e559..5fdc0b5575797 100644
38635 +--- a/kernel/irq/internals.h
38636 ++++ b/kernel/irq/internals.h
38637 +@@ -52,6 +52,7 @@ enum {
38638 + * IRQS_PENDING - irq is pending and replayed later
38639 + * IRQS_SUSPENDED - irq is suspended
38640 + * IRQS_NMI - irq line is used to deliver NMIs
38641 ++ * IRQS_SYSFS - descriptor has been added to sysfs
38642 + */
38643 + enum {
38644 + IRQS_AUTODETECT = 0x00000001,
38645 +@@ -64,6 +65,7 @@ enum {
38646 + IRQS_SUSPENDED = 0x00000800,
38647 + IRQS_TIMINGS = 0x00001000,
38648 + IRQS_NMI = 0x00002000,
38649 ++ IRQS_SYSFS = 0x00004000,
38650 + };
38651 +
38652 + #include "debug.h"
38653 +diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
38654 +index a91f9001103ce..fd09962744014 100644
38655 +--- a/kernel/irq/irqdesc.c
38656 ++++ b/kernel/irq/irqdesc.c
38657 +@@ -288,22 +288,25 @@ static void irq_sysfs_add(int irq, struct irq_desc *desc)
38658 + if (irq_kobj_base) {
38659 + /*
38660 + * Continue even in case of failure as this is nothing
38661 +- * crucial.
38662 ++ * crucial and failures in the late irq_sysfs_init()
38663 ++ * cannot be rolled back.
38664 + */
38665 + if (kobject_add(&desc->kobj, irq_kobj_base, "%d", irq))
38666 + pr_warn("Failed to add kobject for irq %d\n", irq);
38667 ++ else
38668 ++ desc->istate |= IRQS_SYSFS;
38669 + }
38670 + }
38671 +
38672 + static void irq_sysfs_del(struct irq_desc *desc)
38673 + {
38674 + /*
38675 +- * If irq_sysfs_init() has not yet been invoked (early boot), then
38676 +- * irq_kobj_base is NULL and the descriptor was never added.
38677 +- * kobject_del() complains about a object with no parent, so make
38678 +- * it conditional.
38679 ++ * Only invoke kobject_del() when kobject_add() was successfully
38680 ++ * invoked for the descriptor. This covers both early boot, where
38681 ++ * sysfs is not initialized yet, and the case of a failed
38682 ++ * kobject_add() invocation.
38683 + */
38684 +- if (irq_kobj_base)
38685 ++ if (desc->istate & IRQS_SYSFS)
38686 + kobject_del(&desc->kobj);
38687 + }
38688 +
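The irqdesc hunks above replace the "was sysfs initialized at all" heuristic with a per-descriptor IRQS_SYSFS flag: kobject_del() is invoked only if the matching kobject_add() actually succeeded. A condensed model of that register-then-conditionally-unregister pattern; the flag value matches the one added to kernel/irq/internals.h, everything else is a simplified stand-in:

```c
#include <assert.h>

#define IRQS_SYSFS 0x00004000

struct fake_desc {
	unsigned int istate;
	int registered;		/* stands in for the live kobject */
};

static void fake_sysfs_add(struct fake_desc *d, int add_ok)
{
	if (add_ok) {
		d->registered = 1;
		d->istate |= IRQS_SYSFS;	/* remember the successful add */
	}
	/* on failure: warn and continue, nothing to roll back */
}

static void fake_sysfs_del(struct fake_desc *d)
{
	/* Only undo the add if it actually happened, covering both early
	 * boot (no sysfs yet) and a failed kobject_add(). */
	if (d->istate & IRQS_SYSFS)
		d->registered = 0;
}
```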
38689 +diff --git a/kernel/kprobes.c b/kernel/kprobes.c
38690 +index 3050631e528d9..a35074f0daa1a 100644
38691 +--- a/kernel/kprobes.c
38692 ++++ b/kernel/kprobes.c
38693 +@@ -2364,6 +2364,14 @@ static void kill_kprobe(struct kprobe *p)
38694 +
38695 + lockdep_assert_held(&kprobe_mutex);
38696 +
38697 ++ /*
38698 ++ * The module is going away. We should disarm the kprobe which
38699 ++ * is using ftrace, because ftrace framework is still available at
38700 ++ * 'MODULE_STATE_GOING' notification.
38701 ++ */
38702 ++ if (kprobe_ftrace(p) && !kprobe_disabled(p) && !kprobes_all_disarmed)
38703 ++ disarm_kprobe_ftrace(p);
38704 ++
38705 + p->flags |= KPROBE_FLAG_GONE;
38706 + if (kprobe_aggrprobe(p)) {
38707 + /*
38708 +@@ -2380,14 +2388,6 @@ static void kill_kprobe(struct kprobe *p)
38709 + * the original probed function (which will be freed soon) any more.
38710 + */
38711 + arch_remove_kprobe(p);
38712 +-
38713 +- /*
38714 +- * The module is going away. We should disarm the kprobe which
38715 +- * is using ftrace, because ftrace framework is still available at
38716 +- * 'MODULE_STATE_GOING' notification.
38717 +- */
38718 +- if (kprobe_ftrace(p) && !kprobe_disabled(p) && !kprobes_all_disarmed)
38719 +- disarm_kprobe_ftrace(p);
38720 + }
38721 +
38722 + /* Disable one kprobe */
38723 +diff --git a/kernel/module/decompress.c b/kernel/module/decompress.c
38724 +index c033572d83f0e..720e719253cd1 100644
38725 +--- a/kernel/module/decompress.c
38726 ++++ b/kernel/module/decompress.c
38727 +@@ -114,8 +114,8 @@ static ssize_t module_gzip_decompress(struct load_info *info,
38728 + do {
38729 + struct page *page = module_get_next_page(info);
38730 +
38731 +- if (!page) {
38732 +- retval = -ENOMEM;
38733 ++ if (IS_ERR(page)) {
38734 ++ retval = PTR_ERR(page);
38735 + goto out_inflate_end;
38736 + }
38737 +
38738 +@@ -173,8 +173,8 @@ static ssize_t module_xz_decompress(struct load_info *info,
38739 + do {
38740 + struct page *page = module_get_next_page(info);
38741 +
38742 +- if (!page) {
38743 +- retval = -ENOMEM;
38744 ++ if (IS_ERR(page)) {
38745 ++ retval = PTR_ERR(page);
38746 + goto out;
38747 + }
38748 +
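The module/decompress.c hunks above matter because module_get_next_page() reports failure as an ERR_PTR()-encoded pointer, not NULL; the old NULL check could never fire, and the fix both detects the failure and propagates the real errno. A simplified rendering of the include/linux/err.h encoding the fix relies on:

```c
#include <assert.h>

/* Errors are encoded as pointers in the top MAX_ERRNO bytes of the
 * address space, which no valid allocation can occupy. Simplified from
 * include/linux/err.h. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

An ERR_PTR value is non-NULL, which is exactly why `if (!page)` silently passed errors through.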
38749 +diff --git a/kernel/padata.c b/kernel/padata.c
38750 +index e5819bb8bd1dc..de90af5fcbe6b 100644
38751 +--- a/kernel/padata.c
38752 ++++ b/kernel/padata.c
38753 +@@ -207,14 +207,16 @@ int padata_do_parallel(struct padata_shell *ps,
38754 + pw = padata_work_alloc();
38755 + spin_unlock(&padata_works_lock);
38756 +
38757 ++ if (!pw) {
38758 ++ /* Maximum works limit exceeded, run in the current task. */
38759 ++ padata->parallel(padata);
38760 ++ }
38761 ++
38762 + rcu_read_unlock_bh();
38763 +
38764 + if (pw) {
38765 + padata_work_init(pw, padata_parallel_worker, padata, 0);
38766 + queue_work(pinst->parallel_wq, &pw->pw_work);
38767 +- } else {
38768 +- /* Maximum works limit exceeded, run in the current task. */
38769 +- padata->parallel(padata);
38770 + }
38771 +
38772 + return 0;
38773 +@@ -388,13 +390,16 @@ void padata_do_serial(struct padata_priv *padata)
38774 + int hashed_cpu = padata_cpu_hash(pd, padata->seq_nr);
38775 + struct padata_list *reorder = per_cpu_ptr(pd->reorder_list, hashed_cpu);
38776 + struct padata_priv *cur;
38777 ++ struct list_head *pos;
38778 +
38779 + spin_lock(&reorder->lock);
38780 + /* Sort in ascending order of sequence number. */
38781 +- list_for_each_entry_reverse(cur, &reorder->list, list)
38782 ++ list_for_each_prev(pos, &reorder->list) {
38783 ++ cur = list_entry(pos, struct padata_priv, list);
38784 + if (cur->seq_nr < padata->seq_nr)
38785 + break;
38786 +- list_add(&padata->list, &cur->list);
38787 ++ }
38788 ++ list_add(&padata->list, pos);
38789 + spin_unlock(&reorder->lock);
38790 +
38791 + /*
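The padata_do_serial() hunk above replaces the entry iterator with a bare position pointer: when list_for_each_entry_reverse() runs to completion, `cur` is a bogus container_of() of the list head, so using it after the loop is undefined. Iterating with list_for_each_prev() leaves `pos` pointing at either the last smaller entry or the head itself, both valid insertion anchors. A self-contained sketch with a minimal doubly-linked list (local stand-ins, not the kernel's list.h):

```c
#include <assert.h>

struct node { struct node *prev, *next; int seq_nr; };

static void list_init(struct node *head)
{
	head->prev = head->next = head;
}

/* Insert n right after pos (kernel list_add() semantics). */
static void list_add_after(struct node *n, struct node *pos)
{
	n->next = pos->next;
	n->prev = pos;
	pos->next->prev = n;
	pos->next = n;
}

/* Keep the list in ascending seq_nr order, mirroring the fixed loop. */
static void sorted_insert(struct node *head, struct node *n)
{
	struct node *pos;

	for (pos = head->prev; pos != head; pos = pos->prev)
		if (pos->seq_nr < n->seq_nr)
			break;
	/* pos is the last smaller entry, or the head for a new minimum. */
	list_add_after(n, pos);
}
```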
38792 +diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
38793 +index 2a406753af904..c20ca5fb9adc8 100644
38794 +--- a/kernel/power/snapshot.c
38795 ++++ b/kernel/power/snapshot.c
38796 +@@ -1723,8 +1723,8 @@ static unsigned long minimum_image_size(unsigned long saveable)
38797 + * /sys/power/reserved_size, respectively). To make this happen, we compute the
38798 + * total number of available page frames and allocate at least
38799 + *
38800 +- * ([page frames total] + PAGES_FOR_IO + [metadata pages]) / 2
38801 +- * + 2 * DIV_ROUND_UP(reserved_size, PAGE_SIZE)
38802 ++ * ([page frames total] - PAGES_FOR_IO - [metadata pages]) / 2
38803 ++ * - 2 * DIV_ROUND_UP(reserved_size, PAGE_SIZE)
38804 + *
38805 + * of them, which corresponds to the maximum size of a hibernation image.
38806 + *
38807 +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
38808 +index 93416afebd59c..14d9384fba056 100644
38809 +--- a/kernel/rcu/tree.c
38810 ++++ b/kernel/rcu/tree.c
38811 +@@ -2418,7 +2418,7 @@ void rcu_force_quiescent_state(void)
38812 + struct rcu_node *rnp_old = NULL;
38813 +
38814 + /* Funnel through hierarchy to reduce memory contention. */
38815 +- rnp = __this_cpu_read(rcu_data.mynode);
38816 ++ rnp = raw_cpu_read(rcu_data.mynode);
38817 + for (; rnp != NULL; rnp = rnp->parent) {
38818 + ret = (READ_ONCE(rcu_state.gp_flags) & RCU_GP_FLAG_FQS) ||
38819 + !raw_spin_trylock(&rnp->fqslock);
38820 +diff --git a/kernel/relay.c b/kernel/relay.c
38821 +index d7edc934c56d5..88bcb09f0a1f2 100644
38822 +--- a/kernel/relay.c
38823 ++++ b/kernel/relay.c
38824 +@@ -148,13 +148,13 @@ static struct rchan_buf *relay_create_buf(struct rchan *chan)
38825 + {
38826 + struct rchan_buf *buf;
38827 +
38828 +- if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t *))
38829 ++ if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t))
38830 + return NULL;
38831 +
38832 + buf = kzalloc(sizeof(struct rchan_buf), GFP_KERNEL);
38833 + if (!buf)
38834 + return NULL;
38835 +- buf->padding = kmalloc_array(chan->n_subbufs, sizeof(size_t *),
38836 ++ buf->padding = kmalloc_array(chan->n_subbufs, sizeof(size_t),
38837 + GFP_KERNEL);
38838 + if (!buf->padding)
38839 + goto free_buf;
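The relay.c hunk above sizes the padding array by `sizeof(size_t)` instead of `sizeof(size_t *)`, since its elements are size_t values, not pointers. On common 64-bit ABIs the two sizes coincide, so the bug was latent there, but the overflow guard and the allocation must both use the element type actually stored. A small sketch; KMALLOC_MAX_SKETCH is an illustrative cap, not the real arch-dependent KMALLOC_MAX_SIZE:

```c
#include <assert.h>
#include <stddef.h>

#define KMALLOC_MAX_SKETCH (1UL << 22)	/* illustrative cap only */

/* Guard against n_subbufs * element size overflowing the allocator cap,
 * using the element type of the array being allocated. */
static int padding_count_ok(size_t n_subbufs)
{
	return n_subbufs <= KMALLOC_MAX_SKETCH / sizeof(size_t);
}

/* Allocation size for an array of n_subbufs size_t values. */
static size_t padding_bytes(size_t n_subbufs)
{
	return n_subbufs * sizeof(size_t);
}
```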
38840 +diff --git a/kernel/sched/core.c b/kernel/sched/core.c
38841 +index daff72f003858..535af9fbea7b8 100644
38842 +--- a/kernel/sched/core.c
38843 ++++ b/kernel/sched/core.c
38844 +@@ -1392,7 +1392,7 @@ static inline void uclamp_idle_reset(struct rq *rq, enum uclamp_id clamp_id,
38845 + if (!(rq->uclamp_flags & UCLAMP_FLAG_IDLE))
38846 + return;
38847 +
38848 +- WRITE_ONCE(rq->uclamp[clamp_id].value, clamp_value);
38849 ++ uclamp_rq_set(rq, clamp_id, clamp_value);
38850 + }
38851 +
38852 + static inline
38853 +@@ -1543,8 +1543,8 @@ static inline void uclamp_rq_inc_id(struct rq *rq, struct task_struct *p,
38854 + if (bucket->tasks == 1 || uc_se->value > bucket->value)
38855 + bucket->value = uc_se->value;
38856 +
38857 +- if (uc_se->value > READ_ONCE(uc_rq->value))
38858 +- WRITE_ONCE(uc_rq->value, uc_se->value);
38859 ++ if (uc_se->value > uclamp_rq_get(rq, clamp_id))
38860 ++ uclamp_rq_set(rq, clamp_id, uc_se->value);
38861 + }
38862 +
38863 + /*
38864 +@@ -1610,7 +1610,7 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
38865 + if (likely(bucket->tasks))
38866 + return;
38867 +
38868 +- rq_clamp = READ_ONCE(uc_rq->value);
38869 ++ rq_clamp = uclamp_rq_get(rq, clamp_id);
38870 + /*
38871 + * Defensive programming: this should never happen. If it happens,
38872 + * e.g. due to future modification, warn and fixup the expected value.
38873 +@@ -1618,7 +1618,7 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
38874 + SCHED_WARN_ON(bucket->value > rq_clamp);
38875 + if (bucket->value >= rq_clamp) {
38876 + bkt_clamp = uclamp_rq_max_value(rq, clamp_id, uc_se->value);
38877 +- WRITE_ONCE(uc_rq->value, bkt_clamp);
38878 ++ uclamp_rq_set(rq, clamp_id, bkt_clamp);
38879 + }
38880 + }
38881 +
38882 +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
38883 +index e4a0b8bd941c7..0f32acb05055f 100644
38884 +--- a/kernel/sched/fair.c
38885 ++++ b/kernel/sched/fair.c
38886 +@@ -4280,14 +4280,16 @@ static inline unsigned long task_util_est(struct task_struct *p)
38887 + }
38888 +
38889 + #ifdef CONFIG_UCLAMP_TASK
38890 +-static inline unsigned long uclamp_task_util(struct task_struct *p)
38891 ++static inline unsigned long uclamp_task_util(struct task_struct *p,
38892 ++ unsigned long uclamp_min,
38893 ++ unsigned long uclamp_max)
38894 + {
38895 +- return clamp(task_util_est(p),
38896 +- uclamp_eff_value(p, UCLAMP_MIN),
38897 +- uclamp_eff_value(p, UCLAMP_MAX));
38898 ++ return clamp(task_util_est(p), uclamp_min, uclamp_max);
38899 + }
38900 + #else
38901 +-static inline unsigned long uclamp_task_util(struct task_struct *p)
38902 ++static inline unsigned long uclamp_task_util(struct task_struct *p,
38903 ++ unsigned long uclamp_min,
38904 ++ unsigned long uclamp_max)
38905 + {
38906 + return task_util_est(p);
38907 + }
38908 +@@ -4426,10 +4428,135 @@ done:
38909 + trace_sched_util_est_se_tp(&p->se);
38910 + }
38911 +
38912 +-static inline int task_fits_capacity(struct task_struct *p,
38913 +- unsigned long capacity)
38914 ++static inline int util_fits_cpu(unsigned long util,
38915 ++ unsigned long uclamp_min,
38916 ++ unsigned long uclamp_max,
38917 ++ int cpu)
38918 + {
38919 +- return fits_capacity(uclamp_task_util(p), capacity);
38920 ++ unsigned long capacity_orig, capacity_orig_thermal;
38921 ++ unsigned long capacity = capacity_of(cpu);
38922 ++ bool fits, uclamp_max_fits;
38923 ++
38924 ++ /*
38925 ++ * Check if the real util fits without any uclamp boost/cap applied.
38926 ++ */
38927 ++ fits = fits_capacity(util, capacity);
38928 ++
38929 ++ if (!uclamp_is_used())
38930 ++ return fits;
38931 ++
38932 ++ /*
38933 ++ * We must use capacity_orig_of() for comparing against uclamp_min and
38934 ++ * uclamp_max. We only care about capacity pressure (by using
38935 ++ * capacity_of()) for comparing against the real util.
38936 ++ *
38937 ++ * If a task is boosted to 1024 for example, we don't want a tiny
38938 ++ * pressure to skew the check whether it fits a CPU or not.
38939 ++ *
38940 ++ * Similarly if a task is capped to capacity_orig_of(little_cpu), it
38941 ++ * should fit a little cpu even if there's some pressure.
38942 ++ *
38943 ++ * Only exception is for thermal pressure since it has a direct impact
38944 ++ * on available OPP of the system.
38945 ++ *
38946 ++ * We honour it for uclamp_min only as a drop in performance level
38947 ++ * could result in not getting the requested minimum performance level.
38948 ++ *
38949 ++ * For uclamp_max, we can tolerate a drop in performance level as the
38950 ++ * goal is to cap the task. So it's okay if it's getting less.
38951 ++ *
38952 ++ * In case of capacity inversion, which is not handled yet, we should
38953 ++ * honour the inverted capacity for both uclamp_min and uclamp_max all
38954 ++ * the time.
38955 ++ */
38956 ++ capacity_orig = capacity_orig_of(cpu);
38957 ++ capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu);
38958 ++
38959 ++ /*
38960 ++ * We want to force a task to fit a cpu as implied by uclamp_max.
38961 ++ * But we do have some corner cases to cater for..
38962 ++ *
38963 ++ *
38964 ++ * C=z
38965 ++ * | ___
38966 ++ * | C=y | |
38967 ++ * |_ _ _ _ _ _ _ _ _ ___ _ _ _ | _ | _ _ _ _ _ uclamp_max
38968 ++ * | C=x | | | |
38969 ++ * | ___ | | | |
38970 ++ * | | | | | | | (util somewhere in this region)
38971 ++ * | | | | | | |
38972 ++ * | | | | | | |
38973 ++ * +----------------------------------------
38974 ++ * cpu0 cpu1 cpu2
38975 ++ *
38976 ++ * In the above example if a task is capped to a specific performance
38977 ++ * point, y, then when:
38978 ++ *
38979 ++ * * util = 80% of x then it does not fit on cpu0 and should migrate
38980 ++ * to cpu1
38981 ++ * * util = 80% of y then it is forced to fit on cpu1 to honour
38982 ++ * uclamp_max request.
38983 ++ *
38984 ++ * which is what we're enforcing here. A task always fits if
38985 ++ * uclamp_max <= capacity_orig. But when uclamp_max > capacity_orig,
38986 ++ * the normal upmigration rules should withhold still.
38987 ++ *
38988 ++ * Only exception is when we are on max capacity, then we need to be
38989 ++ * careful not to block overutilized state. This is so because:
38990 ++ *
38991 ++ * 1. There's no concept of capping at max_capacity! We can't go
38992 ++ * beyond this performance level anyway.
38993 ++ * 2. The system is being saturated when we're operating near
38994 ++ * max capacity, it doesn't make sense to block overutilized.
38995 ++ */
38996 ++ uclamp_max_fits = (capacity_orig == SCHED_CAPACITY_SCALE) && (uclamp_max == SCHED_CAPACITY_SCALE);
38997 ++ uclamp_max_fits = !uclamp_max_fits && (uclamp_max <= capacity_orig);
38998 ++ fits = fits || uclamp_max_fits;
38999 ++
39000 ++ /*
39001 ++ *
39002 ++ * C=z
39003 ++ * | ___ (region a, capped, util >= uclamp_max)
39004 ++ * | C=y | |
39005 ++ * |_ _ _ _ _ _ _ _ _ ___ _ _ _ | _ | _ _ _ _ _ uclamp_max
39006 ++ * | C=x | | | |
39007 ++ * | ___ | | | | (region b, uclamp_min <= util <= uclamp_max)
39008 ++ * |_ _ _|_ _|_ _ _ _| _ | _ _ _| _ | _ _ _ _ _ uclamp_min
39009 ++ * | | | | | | |
39010 ++ * | | | | | | | (region c, boosted, util < uclamp_min)
39011 ++ * +----------------------------------------
39012 ++ * cpu0 cpu1 cpu2
39013 ++ *
39014 ++ * a) If util > uclamp_max, then we're capped, we don't care about
39015 ++ * actual fitness value here. We only care if uclamp_max fits
39016 ++ * capacity without taking margin/pressure into account.
39017 ++ * See comment above.
39018 ++ *
39019 ++ * b) If uclamp_min <= util <= uclamp_max, then the normal
39020 ++ * fits_capacity() rules apply. Except we need to ensure that we
39021 ++ * enforce we remain within uclamp_max, see comment above.
39022 ++ *
39023 ++ * c) If util < uclamp_min, then we are boosted. Same as (b) but we
39024 ++ * need to take into account the boosted value fits the CPU without
39025 ++ * taking margin/pressure into account.
39026 ++ *
39027 ++ * Cases (a) and (b) are handled in the 'fits' variable already. We
39028 ++ * just need to consider an extra check for case (c) after ensuring we
39029 ++ * handle the case uclamp_min > uclamp_max.
39030 ++ */
39031 ++ uclamp_min = min(uclamp_min, uclamp_max);
39032 ++ if (util < uclamp_min && capacity_orig != SCHED_CAPACITY_SCALE)
39033 ++ fits = fits && (uclamp_min <= capacity_orig_thermal);
39034 ++
39035 ++ return fits;
39036 ++}
39037 ++
39038 ++static inline int task_fits_cpu(struct task_struct *p, int cpu)
39039 ++{
39040 ++ unsigned long uclamp_min = uclamp_eff_value(p, UCLAMP_MIN);
39041 ++ unsigned long uclamp_max = uclamp_eff_value(p, UCLAMP_MAX);
39042 ++ unsigned long util = task_util_est(p);
39043 ++ return util_fits_cpu(util, uclamp_min, uclamp_max, cpu);
39044 + }
39045 +
39046 + static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
39047 +@@ -4442,7 +4569,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
39048 + return;
39049 + }
39050 +
39051 +- if (task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
39052 ++ if (task_fits_cpu(p, cpu_of(rq))) {
39053 + rq->misfit_task_load = 0;
39054 + return;
39055 + }
39056 +@@ -5862,7 +5989,10 @@ static inline void hrtick_update(struct rq *rq)
39057 + #ifdef CONFIG_SMP
39058 + static inline bool cpu_overutilized(int cpu)
39059 + {
39060 +- return !fits_capacity(cpu_util_cfs(cpu), capacity_of(cpu));
39061 ++ unsigned long rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
39062 ++ unsigned long rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
39063 ++
39064 ++ return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
39065 + }
39066 +
39067 + static inline void update_overutilized_status(struct rq *rq)
39068 +@@ -6654,21 +6784,23 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
39069 + static int
39070 + select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
39071 + {
39072 +- unsigned long task_util, best_cap = 0;
39073 ++ unsigned long task_util, util_min, util_max, best_cap = 0;
39074 + int cpu, best_cpu = -1;
39075 + struct cpumask *cpus;
39076 +
39077 + cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
39078 + cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
39079 +
39080 +- task_util = uclamp_task_util(p);
39081 ++ task_util = task_util_est(p);
39082 ++ util_min = uclamp_eff_value(p, UCLAMP_MIN);
39083 ++ util_max = uclamp_eff_value(p, UCLAMP_MAX);
39084 +
39085 + for_each_cpu_wrap(cpu, cpus, target) {
39086 + unsigned long cpu_cap = capacity_of(cpu);
39087 +
39088 + if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
39089 + continue;
39090 +- if (fits_capacity(task_util, cpu_cap))
39091 ++ if (util_fits_cpu(task_util, util_min, util_max, cpu))
39092 + return cpu;
39093 +
39094 + if (cpu_cap > best_cap) {
39095 +@@ -6680,10 +6812,13 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
39096 + return best_cpu;
39097 + }
39098 +
39099 +-static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
39100 ++static inline bool asym_fits_cpu(unsigned long util,
39101 ++ unsigned long util_min,
39102 ++ unsigned long util_max,
39103 ++ int cpu)
39104 + {
39105 + if (sched_asym_cpucap_active())
39106 +- return fits_capacity(task_util, capacity_of(cpu));
39107 ++ return util_fits_cpu(util, util_min, util_max, cpu);
39108 +
39109 + return true;
39110 + }
39111 +@@ -6695,7 +6830,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
39112 + {
39113 + bool has_idle_core = false;
39114 + struct sched_domain *sd;
39115 +- unsigned long task_util;
39116 ++ unsigned long task_util, util_min, util_max;
39117 + int i, recent_used_cpu;
39118 +
39119 + /*
39120 +@@ -6704,7 +6839,9 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
39121 + */
39122 + if (sched_asym_cpucap_active()) {
39123 + sync_entity_load_avg(&p->se);
39124 +- task_util = uclamp_task_util(p);
39125 ++ task_util = task_util_est(p);
39126 ++ util_min = uclamp_eff_value(p, UCLAMP_MIN);
39127 ++ util_max = uclamp_eff_value(p, UCLAMP_MAX);
39128 + }
39129 +
39130 + /*
39131 +@@ -6713,7 +6850,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
39132 + lockdep_assert_irqs_disabled();
39133 +
39134 + if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
39135 +- asym_fits_capacity(task_util, target))
39136 ++ asym_fits_cpu(task_util, util_min, util_max, target))
39137 + return target;
39138 +
39139 + /*
39140 +@@ -6721,7 +6858,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
39141 + */
39142 + if (prev != target && cpus_share_cache(prev, target) &&
39143 + (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
39144 +- asym_fits_capacity(task_util, prev))
39145 ++ asym_fits_cpu(task_util, util_min, util_max, prev))
39146 + return prev;
39147 +
39148 + /*
39149 +@@ -6736,7 +6873,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
39150 + in_task() &&
39151 + prev == smp_processor_id() &&
39152 + this_rq()->nr_running <= 1 &&
39153 +- asym_fits_capacity(task_util, prev)) {
39154 ++ asym_fits_cpu(task_util, util_min, util_max, prev)) {
39155 + return prev;
39156 + }
39157 +
39158 +@@ -6748,7 +6885,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
39159 + cpus_share_cache(recent_used_cpu, target) &&
39160 + (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
39161 + cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
39162 +- asym_fits_capacity(task_util, recent_used_cpu)) {
39163 ++ asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
39164 + return recent_used_cpu;
39165 + }
39166 +
39167 +@@ -7044,6 +7181,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
39168 + {
39169 + struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
39170 + unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
39171 ++ unsigned long p_util_min = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MIN) : 0;
39172 ++ unsigned long p_util_max = uclamp_is_used() ? uclamp_eff_value(p, UCLAMP_MAX) : 1024;
39173 + struct root_domain *rd = this_rq()->rd;
39174 + int cpu, best_energy_cpu, target = -1;
39175 + struct sched_domain *sd;
39176 +@@ -7068,7 +7207,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
39177 + target = prev_cpu;
39178 +
39179 + sync_entity_load_avg(&p->se);
39180 +- if (!task_util_est(p))
39181 ++ if (!uclamp_task_util(p, p_util_min, p_util_max))
39182 + goto unlock;
39183 +
39184 + eenv_task_busy_time(&eenv, p, prev_cpu);
39185 +@@ -7076,6 +7215,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
39186 + for (; pd; pd = pd->next) {
39187 + unsigned long cpu_cap, cpu_thermal_cap, util;
39188 + unsigned long cur_delta, max_spare_cap = 0;
39189 ++ unsigned long rq_util_min, rq_util_max;
39190 ++ unsigned long util_min, util_max;
39191 + bool compute_prev_delta = false;
39192 + int max_spare_cap_cpu = -1;
39193 + unsigned long base_energy;
39194 +@@ -7112,8 +7253,26 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
39195 + * much capacity we can get out of the CPU; this is
39196 + * aligned with sched_cpu_util().
39197 + */
39198 +- util = uclamp_rq_util_with(cpu_rq(cpu), util, p);
39199 +- if (!fits_capacity(util, cpu_cap))
39200 ++ if (uclamp_is_used()) {
39201 ++ if (uclamp_rq_is_idle(cpu_rq(cpu))) {
39202 ++ util_min = p_util_min;
39203 ++ util_max = p_util_max;
39204 ++ } else {
39205 ++ /*
39206 ++ * Open code uclamp_rq_util_with() except for
39207 ++ * the clamp() part. Ie: apply max aggregation
39208 ++ * only. util_fits_cpu() logic requires to
39209 ++ * operate on non clamped util but must use the
39210 ++ * max-aggregated uclamp_{min, max}.
39211 ++ */
39212 ++ rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
39213 ++ rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
39214 ++
39215 ++ util_min = max(rq_util_min, p_util_min);
39216 ++ util_max = max(rq_util_max, p_util_max);
39217 ++ }
39218 ++ }
39219 ++ if (!util_fits_cpu(util, util_min, util_max, cpu))
39220 + continue;
39221 +
39222 + lsub_positive(&cpu_cap, util);
39223 +@@ -8276,7 +8435,7 @@ static int detach_tasks(struct lb_env *env)
39224 +
39225 + case migrate_misfit:
39226 + /* This is not a misfit task */
39227 +- if (task_fits_capacity(p, capacity_of(env->src_cpu)))
39228 ++ if (task_fits_cpu(p, env->src_cpu))
39229 + goto next;
39230 +
39231 + env->imbalance = 0;
39232 +@@ -9281,6 +9440,10 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
39233 +
39234 + memset(sgs, 0, sizeof(*sgs));
39235 +
39236 ++ /* Assume that task can't fit any CPU of the group */
39237 ++ if (sd->flags & SD_ASYM_CPUCAPACITY)
39238 ++ sgs->group_misfit_task_load = 1;
39239 ++
39240 + for_each_cpu(i, sched_group_span(group)) {
39241 + struct rq *rq = cpu_rq(i);
39242 + unsigned int local;
39243 +@@ -9300,12 +9463,12 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
39244 + if (!nr_running && idle_cpu_without(i, p))
39245 + sgs->idle_cpus++;
39246 +
39247 +- }
39248 ++ /* Check if task fits in the CPU */
39249 ++ if (sd->flags & SD_ASYM_CPUCAPACITY &&
39250 ++ sgs->group_misfit_task_load &&
39251 ++ task_fits_cpu(p, i))
39252 ++ sgs->group_misfit_task_load = 0;
39253 +
39254 +- /* Check if task fits in the group */
39255 +- if (sd->flags & SD_ASYM_CPUCAPACITY &&
39256 +- !task_fits_capacity(p, group->sgc->max_capacity)) {
39257 +- sgs->group_misfit_task_load = 1;
39258 + }
39259 +
39260 + sgs->group_capacity = group->sgc->capacity;
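The core of the fair.c changes above is the new util_fits_cpu() decision, which treats the three regions (capped, normal, boosted) differently. A condensed C model of that logic, ignoring thermal pressure and using `util < capacity` as a stand-in for the kernel's fits_capacity() margin; all names here are local stand-ins for illustration:

```c
#include <assert.h>

#define SCALE_SKETCH 1024	/* stands in for SCHED_CAPACITY_SCALE */

static int util_fits_cpu_sketch(unsigned long util,
				unsigned long uclamp_min,
				unsigned long uclamp_max,
				unsigned long capacity_orig)
{
	int fits = util < capacity_orig;	/* fits_capacity() stand-in */

	/* A capped task fits wherever uclamp_max fits, except at the top
	 * capacity, where overutilized must still be reportable. */
	int at_max = capacity_orig == SCALE_SKETCH &&
		     uclamp_max == SCALE_SKETCH;
	int uclamp_max_fits = !at_max && uclamp_max <= capacity_orig;

	fits = fits || uclamp_max_fits;

	/* A boosted task additionally needs uclamp_min to fit. */
	if (uclamp_min > uclamp_max)
		uclamp_min = uclamp_max;
	if (util < uclamp_min && capacity_orig != SCALE_SKETCH)
		fits = fits && uclamp_min <= capacity_orig;

	return fits;
}
```

For example, a task capped to 300 fits a 512-capacity CPU even with util 600, while a task boosted to 800 does not fit that CPU even with util 100.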
39261 +diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
39262 +index ee2ecc081422e..7f40d87e8f509 100644
39263 +--- a/kernel/sched/psi.c
39264 ++++ b/kernel/sched/psi.c
39265 +@@ -539,10 +539,12 @@ static u64 update_triggers(struct psi_group *group, u64 now)
39266 +
39267 + /* Calculate growth since last update */
39268 + growth = window_update(&t->win, now, total[t->state]);
39269 +- if (growth < t->threshold)
39270 +- continue;
39271 ++ if (!t->pending_event) {
39272 ++ if (growth < t->threshold)
39273 ++ continue;
39274 +
39275 +- t->pending_event = true;
39276 ++ t->pending_event = true;
39277 ++ }
39278 + }
39279 + /* Limit event signaling to once per window */
39280 + if (now < t->last_event_time + t->win.size)
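The psi hunk above latches `pending_event` once growth crosses the threshold: without the latch, an event observed in one poll could be overwritten by a later sub-threshold sample before the rate-limit window allowed it to be signaled. A minimal model of the latch, with stand-in names:

```c
#include <assert.h>

struct trig {
	unsigned long threshold;
	int pending_event;
};

/* Only derive pending_event from the growth sample while no event is
 * pending; once latched, it stays set until the trigger fires. */
static void update_trigger(struct trig *t, unsigned long growth)
{
	if (!t->pending_event) {
		if (growth < t->threshold)
			return;

		t->pending_event = 1;
	}
}
```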
39281 +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
39282 +index a4a20046e586e..d6d488e8eb554 100644
39283 +--- a/kernel/sched/sched.h
39284 ++++ b/kernel/sched/sched.h
39285 +@@ -2979,6 +2979,23 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
39286 + #ifdef CONFIG_UCLAMP_TASK
39287 + unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
39288 +
39289 ++static inline unsigned long uclamp_rq_get(struct rq *rq,
39290 ++ enum uclamp_id clamp_id)
39291 ++{
39292 ++ return READ_ONCE(rq->uclamp[clamp_id].value);
39293 ++}
39294 ++
39295 ++static inline void uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id,
39296 ++ unsigned int value)
39297 ++{
39298 ++ WRITE_ONCE(rq->uclamp[clamp_id].value, value);
39299 ++}
39300 ++
39301 ++static inline bool uclamp_rq_is_idle(struct rq *rq)
39302 ++{
39303 ++ return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
39304 ++}
39305 ++
39306 + /**
39307 + * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
39308 + * @rq: The rq to clamp against. Must not be NULL.
39309 +@@ -3014,12 +3031,12 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
39310 + * Ignore last runnable task's max clamp, as this task will
39311 + * reset it. Similarly, no need to read the rq's min clamp.
39312 + */
39313 +- if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
39314 ++ if (uclamp_rq_is_idle(rq))
39315 + goto out;
39316 + }
39317 +
39318 +- min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
39319 +- max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
39320 ++ min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
39321 ++ max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
39322 + out:
39323 + /*
39324 + * Since CPU's {min,max}_util clamps are MAX aggregated considering
39325 +@@ -3060,6 +3077,15 @@ static inline bool uclamp_is_used(void)
39326 + return static_branch_likely(&sched_uclamp_used);
39327 + }
39328 + #else /* CONFIG_UCLAMP_TASK */
39329 ++static inline unsigned long uclamp_eff_value(struct task_struct *p,
39330 ++ enum uclamp_id clamp_id)
39331 ++{
39332 ++ if (clamp_id == UCLAMP_MIN)
39333 ++ return 0;
39334 ++
39335 ++ return SCHED_CAPACITY_SCALE;
39336 ++}
39337 ++
39338 + static inline
39339 + unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
39340 + struct task_struct *p)
39341 +@@ -3073,6 +3099,25 @@ static inline bool uclamp_is_used(void)
39342 + {
39343 + return false;
39344 + }
39345 ++
39346 ++static inline unsigned long uclamp_rq_get(struct rq *rq,
39347 ++ enum uclamp_id clamp_id)
39348 ++{
39349 ++ if (clamp_id == UCLAMP_MIN)
39350 ++ return 0;
39351 ++
39352 ++ return SCHED_CAPACITY_SCALE;
39353 ++}
39354 ++
39355 ++static inline void uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id,
39356 ++ unsigned int value)
39357 ++{
39358 ++}
39359 ++
39360 ++static inline bool uclamp_rq_is_idle(struct rq *rq)
39361 ++{
39362 ++ return false;
39363 ++}
39364 + #endif /* CONFIG_UCLAMP_TASK */
39365 +
39366 + #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
39367 +diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
39368 +index a995ea1ef849a..a66cff5a18579 100644
39369 +--- a/kernel/trace/blktrace.c
39370 ++++ b/kernel/trace/blktrace.c
39371 +@@ -1548,7 +1548,8 @@ blk_trace_event_print_binary(struct trace_iterator *iter, int flags,
39372 +
39373 + static enum print_line_t blk_tracer_print_line(struct trace_iterator *iter)
39374 + {
39375 +- if (!(blk_tracer_flags.val & TRACE_BLK_OPT_CLASSIC))
39376 ++ if ((iter->ent->type != TRACE_BLK) ||
39377 ++ !(blk_tracer_flags.val & TRACE_BLK_OPT_CLASSIC))
39378 + return TRACE_TYPE_UNHANDLED;
39379 +
39380 + return print_one_line(iter, true);
39381 +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
39382 +index 1c82478e8dffe..b6e5724a9ea35 100644
39383 +--- a/kernel/trace/trace_events_hist.c
39384 ++++ b/kernel/trace/trace_events_hist.c
39385 +@@ -6438,7 +6438,7 @@ enable:
39386 + if (se)
39387 + se->ref++;
39388 + out:
39389 +- if (ret == 0)
39390 ++ if (ret == 0 && glob[0])
39391 + hist_err_clear();
39392 +
39393 + return ret;
39394 +diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
39395 +index 539b08ae70207..9cb53182bb31c 100644
39396 +--- a/kernel/trace/trace_events_user.c
39397 ++++ b/kernel/trace/trace_events_user.c
39398 +@@ -1359,6 +1359,7 @@ put_user_lock:
39399 + put_user:
39400 + user_event_destroy_fields(user);
39401 + user_event_destroy_validators(user);
39402 ++ kfree(user->call.print_fmt);
39403 + kfree(user);
39404 + return ret;
39405 + }
39406 +diff --git a/lib/debugobjects.c b/lib/debugobjects.c
39407 +index 337d797a71416..6f8e5dd1dcd0c 100644
39408 +--- a/lib/debugobjects.c
39409 ++++ b/lib/debugobjects.c
39410 +@@ -437,6 +437,7 @@ static int object_cpu_offline(unsigned int cpu)
39411 + struct debug_percpu_free *percpu_pool;
39412 + struct hlist_node *tmp;
39413 + struct debug_obj *obj;
39414 ++ unsigned long flags;
39415 +
39416 + /* Remote access is safe as the CPU is dead already */
39417 + percpu_pool = per_cpu_ptr(&percpu_obj_pool, cpu);
39418 +@@ -444,6 +445,12 @@ static int object_cpu_offline(unsigned int cpu)
39419 + hlist_del(&obj->node);
39420 + kmem_cache_free(obj_cache, obj);
39421 + }
39422 ++
39423 ++ raw_spin_lock_irqsave(&pool_lock, flags);
39424 ++ obj_pool_used -= percpu_pool->obj_free;
39425 ++ debug_objects_freed += percpu_pool->obj_free;
39426 ++ raw_spin_unlock_irqrestore(&pool_lock, flags);
39427 ++
39428 + percpu_pool->obj_free = 0;
39429 +
39430 + return 0;
39431 +@@ -1318,6 +1325,8 @@ static int __init debug_objects_replace_static_objects(void)
39432 + hlist_add_head(&obj->node, &objects);
39433 + }
39434 +
39435 ++ debug_objects_allocated += i;
39436 ++
39437 + /*
39438 + * debug_objects_mem_init() is now called early that only one CPU is up
39439 + * and interrupts have been disabled, so it is safe to replace the
39440 +@@ -1386,6 +1395,7 @@ void __init debug_objects_mem_init(void)
39441 + debug_objects_enabled = 0;
39442 + kmem_cache_destroy(obj_cache);
39443 + pr_warn("out of memory.\n");
39444 ++ return;
39445 + } else
39446 + debug_objects_selftest();
39447 +
39448 +diff --git a/lib/fonts/fonts.c b/lib/fonts/fonts.c
39449 +index 5f4b07b56cd9c..9738664386088 100644
39450 +--- a/lib/fonts/fonts.c
39451 ++++ b/lib/fonts/fonts.c
39452 +@@ -135,8 +135,8 @@ const struct font_desc *get_default_font(int xres, int yres, u32 font_w,
39453 + if (res > 20)
39454 + c += 20 - res;
39455 +
39456 +- if ((font_w & (1 << (f->width - 1))) &&
39457 +- (font_h & (1 << (f->height - 1))))
39458 ++ if ((font_w & (1U << (f->width - 1))) &&
39459 ++ (font_h & (1U << (f->height - 1))))
39460 + c += 1000;
39461 +
39462 + if (c > cc) {
39463 +diff --git a/lib/maple_tree.c b/lib/maple_tree.c
39464 +index df352f6ccc240..fe21bf276d91c 100644
39465 +--- a/lib/maple_tree.c
39466 ++++ b/lib/maple_tree.c
39467 +@@ -2989,7 +2989,9 @@ static int mas_spanning_rebalance(struct ma_state *mas,
39468 + mast->free = &free;
39469 + mast->destroy = &destroy;
39470 + l_mas.node = r_mas.node = m_mas.node = MAS_NONE;
39471 +- if (!(mast->orig_l->min && mast->orig_r->max == ULONG_MAX) &&
39472 ++
39473 ++ /* Check if this is not root and has sufficient data. */
39474 ++ if (((mast->orig_l->min != 0) || (mast->orig_r->max != ULONG_MAX)) &&
39475 + unlikely(mast->bn->b_end <= mt_min_slots[mast->bn->type]))
39476 + mast_spanning_rebalance(mast);
39477 +
39478 +diff --git a/lib/notifier-error-inject.c b/lib/notifier-error-inject.c
39479 +index 21016b32d3131..2b24ea6c94979 100644
39480 +--- a/lib/notifier-error-inject.c
39481 ++++ b/lib/notifier-error-inject.c
39482 +@@ -15,7 +15,7 @@ static int debugfs_errno_get(void *data, u64 *val)
39483 + return 0;
39484 + }
39485 +
39486 +-DEFINE_SIMPLE_ATTRIBUTE(fops_errno, debugfs_errno_get, debugfs_errno_set,
39487 ++DEFINE_SIMPLE_ATTRIBUTE_SIGNED(fops_errno, debugfs_errno_get, debugfs_errno_set,
39488 + "%lld\n");
39489 +
39490 + static struct dentry *debugfs_create_errno(const char *name, umode_t mode,
39491 +diff --git a/lib/test_firmware.c b/lib/test_firmware.c
39492 +index c82b65947ce68..1c5a2adb16ef5 100644
39493 +--- a/lib/test_firmware.c
39494 ++++ b/lib/test_firmware.c
39495 +@@ -1491,6 +1491,7 @@ static int __init test_firmware_init(void)
39496 +
39497 + rc = misc_register(&test_fw_misc_device);
39498 + if (rc) {
39499 ++ __test_firmware_config_free();
39500 + kfree(test_fw_config);
39501 + pr_err("could not register misc device: %d\n", rc);
39502 + return rc;
39503 +diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
39504 +index f425f169ef089..497fc93ccf9ec 100644
39505 +--- a/lib/test_maple_tree.c
39506 ++++ b/lib/test_maple_tree.c
39507 +@@ -2498,6 +2498,25 @@ static noinline void check_dup(struct maple_tree *mt)
39508 + }
39509 + }
39510 +
39511 ++static noinline void check_bnode_min_spanning(struct maple_tree *mt)
39512 ++{
39513 ++ int i = 50;
39514 ++ MA_STATE(mas, mt, 0, 0);
39515 ++
39516 ++ mt_set_non_kernel(9999);
39517 ++ mas_lock(&mas);
39518 ++ do {
39519 ++ mas_set_range(&mas, i*10, i*10+9);
39520 ++ mas_store(&mas, check_bnode_min_spanning);
39521 ++ } while (i--);
39522 ++
39523 ++ mas_set_range(&mas, 240, 509);
39524 ++ mas_store(&mas, NULL);
39525 ++ mas_unlock(&mas);
39526 ++ mas_destroy(&mas);
39527 ++ mt_set_non_kernel(0);
39528 ++}
39529 ++
39530 + static DEFINE_MTREE(tree);
39531 + static int maple_tree_seed(void)
39532 + {
39533 +@@ -2742,6 +2761,10 @@ static int maple_tree_seed(void)
39534 + check_dup(&tree);
39535 + mtree_destroy(&tree);
39536 +
39537 ++ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
39538 ++ check_bnode_min_spanning(&tree);
39539 ++ mtree_destroy(&tree);
39540 ++
39541 + #if defined(BENCH)
39542 + skip:
39543 + #endif
39544 +diff --git a/mm/gup.c b/mm/gup.c
39545 +index 3b7bc2c1fd44c..eb8d7baf9e4d3 100644
39546 +--- a/mm/gup.c
39547 ++++ b/mm/gup.c
39548 +@@ -1065,6 +1065,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
39549 + if (!(vm_flags & VM_WRITE)) {
39550 + if (!(gup_flags & FOLL_FORCE))
39551 + return -EFAULT;
39552 ++ /* hugetlb does not support FOLL_FORCE|FOLL_WRITE. */
39553 ++ if (is_vm_hugetlb_page(vma))
39554 ++ return -EFAULT;
39555 + /*
39556 + * We used to let the write,force case do COW in a
39557 + * VM_MAYWRITE VM_SHARED !VM_WRITE vma, so ptrace could
39558 +diff --git a/net/802/mrp.c b/net/802/mrp.c
39559 +index 155f74d8b14f4..6c927d4b35f06 100644
39560 +--- a/net/802/mrp.c
39561 ++++ b/net/802/mrp.c
39562 +@@ -606,7 +606,10 @@ static void mrp_join_timer(struct timer_list *t)
39563 + spin_unlock(&app->lock);
39564 +
39565 + mrp_queue_xmit(app);
39566 +- mrp_join_timer_arm(app);
39567 ++ spin_lock(&app->lock);
39568 ++ if (likely(app->active))
39569 ++ mrp_join_timer_arm(app);
39570 ++ spin_unlock(&app->lock);
39571 + }
39572 +
39573 + static void mrp_periodic_timer_arm(struct mrp_applicant *app)
39574 +@@ -620,11 +623,12 @@ static void mrp_periodic_timer(struct timer_list *t)
39575 + struct mrp_applicant *app = from_timer(app, t, periodic_timer);
39576 +
39577 + spin_lock(&app->lock);
39578 +- mrp_mad_event(app, MRP_EVENT_PERIODIC);
39579 +- mrp_pdu_queue(app);
39580 ++ if (likely(app->active)) {
39581 ++ mrp_mad_event(app, MRP_EVENT_PERIODIC);
39582 ++ mrp_pdu_queue(app);
39583 ++ mrp_periodic_timer_arm(app);
39584 ++ }
39585 + spin_unlock(&app->lock);
39586 +-
39587 +- mrp_periodic_timer_arm(app);
39588 + }
39589 +
39590 + static int mrp_pdu_parse_end_mark(struct sk_buff *skb, int *offset)
39591 +@@ -872,6 +876,7 @@ int mrp_init_applicant(struct net_device *dev, struct mrp_application *appl)
39592 + app->dev = dev;
39593 + app->app = appl;
39594 + app->mad = RB_ROOT;
39595 ++ app->active = true;
39596 + spin_lock_init(&app->lock);
39597 + skb_queue_head_init(&app->queue);
39598 + rcu_assign_pointer(dev->mrp_port->applicants[appl->type], app);
39599 +@@ -900,6 +905,9 @@ void mrp_uninit_applicant(struct net_device *dev, struct mrp_application *appl)
39600 +
39601 + RCU_INIT_POINTER(port->applicants[appl->type], NULL);
39602 +
39603 ++ spin_lock_bh(&app->lock);
39604 ++ app->active = false;
39605 ++ spin_unlock_bh(&app->lock);
39606 + /* Delete timer and generate a final TX event to flush out
39607 + * all pending messages before the applicant is gone.
39608 + */
39609 +diff --git a/net/9p/client.c b/net/9p/client.c
39610 +index aaa37b07e30a5..b554f8357f967 100644
39611 +--- a/net/9p/client.c
39612 ++++ b/net/9p/client.c
39613 +@@ -297,6 +297,11 @@ p9_tag_alloc(struct p9_client *c, int8_t type, uint t_size, uint r_size,
39614 + p9pdu_reset(&req->rc);
39615 + req->t_err = 0;
39616 + req->status = REQ_STATUS_ALLOC;
39617 ++ /* refcount needs to be set to 0 before inserting into the idr
39618 ++ * so p9_tag_lookup does not accept a request that is not fully
39619 ++ * initialized. refcount_set to 2 below will mark request ready.
39620 ++ */
39621 ++ refcount_set(&req->refcount, 0);
39622 + init_waitqueue_head(&req->wq);
39623 + INIT_LIST_HEAD(&req->req_list);
39624 +
39625 +diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
39626 +index a6c12863a2532..8aab2e882958c 100644
39627 +--- a/net/bluetooth/hci_conn.c
39628 ++++ b/net/bluetooth/hci_conn.c
39629 +@@ -1881,7 +1881,7 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
39630 + continue;
39631 +
39632 + /* Check if all CIS(s) belonging to a CIG are ready */
39633 +- if (conn->link->state != BT_CONNECTED ||
39634 ++ if (!conn->link || conn->link->state != BT_CONNECTED ||
39635 + conn->state != BT_CONNECT) {
39636 + cmd.cp.num_cis = 0;
39637 + break;
39638 +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
39639 +index d97fac4f71303..b65c3aabcd536 100644
39640 +--- a/net/bluetooth/hci_core.c
39641 ++++ b/net/bluetooth/hci_core.c
39642 +@@ -2660,7 +2660,7 @@ int hci_register_dev(struct hci_dev *hdev)
39643 +
39644 + error = hci_register_suspend_notifier(hdev);
39645 + if (error)
39646 +- goto err_wqueue;
39647 ++ BT_WARN("register suspend notifier failed error:%d\n", error);
39648 +
39649 + queue_work(hdev->req_workqueue, &hdev->power_on);
39650 +
39651 +@@ -3985,7 +3985,7 @@ void hci_req_cmd_complete(struct hci_dev *hdev, u16 opcode, u8 status,
39652 + *req_complete_skb = bt_cb(skb)->hci.req_complete_skb;
39653 + else
39654 + *req_complete = bt_cb(skb)->hci.req_complete;
39655 +- kfree_skb(skb);
39656 ++ dev_kfree_skb_irq(skb);
39657 + }
39658 + spin_unlock_irqrestore(&hdev->cmd_q.lock, flags);
39659 + }
39660 +diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
39661 +index 1fc693122a47a..3a68d9bc43b8f 100644
39662 +--- a/net/bluetooth/hci_sync.c
39663 ++++ b/net/bluetooth/hci_sync.c
39664 +@@ -4261,7 +4261,7 @@ static int hci_read_local_pairing_opts_sync(struct hci_dev *hdev)
39665 + /* Get MWS transport configuration if the HCI command is supported */
39666 + static int hci_get_mws_transport_config_sync(struct hci_dev *hdev)
39667 + {
39668 +- if (!(hdev->commands[30] & 0x08))
39669 ++ if (!mws_transport_config_capable(hdev))
39670 + return 0;
39671 +
39672 + return __hci_cmd_sync_status(hdev, HCI_OP_GET_MWS_TRANSPORT_CONFIG,
39673 +diff --git a/net/bluetooth/lib.c b/net/bluetooth/lib.c
39674 +index 469a0c95b6e8a..53a796ac078c3 100644
39675 +--- a/net/bluetooth/lib.c
39676 ++++ b/net/bluetooth/lib.c
39677 +@@ -170,7 +170,7 @@ __u8 bt_status(int err)
39678 + case -EMLINK:
39679 + return 0x09;
39680 +
39681 +- case EALREADY:
39682 ++ case -EALREADY:
39683 + return 0x0b;
39684 +
39685 + case -EBUSY:
39686 +@@ -191,7 +191,7 @@ __u8 bt_status(int err)
39687 + case -ECONNABORTED:
39688 + return 0x16;
39689 +
39690 +- case ELOOP:
39691 ++ case -ELOOP:
39692 + return 0x17;
39693 +
39694 + case -EPROTONOSUPPORT:
39695 +diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
39696 +index a92e7e485feba..0dd30a3beb776 100644
39697 +--- a/net/bluetooth/mgmt.c
39698 ++++ b/net/bluetooth/mgmt.c
39699 +@@ -8859,7 +8859,7 @@ static int add_ext_adv_params(struct sock *sk, struct hci_dev *hdev,
39700 + * extra parameters we don't know about will be ignored in this request.
39701 + */
39702 + if (data_len < MGMT_ADD_EXT_ADV_PARAMS_MIN_SIZE)
39703 +- return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_ADVERTISING,
39704 ++ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_EXT_ADV_PARAMS,
39705 + MGMT_STATUS_INVALID_PARAMS);
39706 +
39707 + flags = __le32_to_cpu(cp->flags);
39708 +diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
39709 +index 7324764384b67..8d6fce9005bdd 100644
39710 +--- a/net/bluetooth/rfcomm/core.c
39711 ++++ b/net/bluetooth/rfcomm/core.c
39712 +@@ -590,7 +590,7 @@ int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb)
39713 +
39714 + ret = rfcomm_dlc_send_frag(d, frag);
39715 + if (ret < 0) {
39716 +- kfree_skb(frag);
39717 ++ dev_kfree_skb_irq(frag);
39718 + goto unlock;
39719 + }
39720 +
39721 +diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
39722 +index fcb3e6c5e03c0..6094ef7cffcd2 100644
39723 +--- a/net/bpf/test_run.c
39724 ++++ b/net/bpf/test_run.c
39725 +@@ -980,9 +980,6 @@ static int convert___skb_to_skb(struct sk_buff *skb, struct __sk_buff *__skb)
39726 + {
39727 + struct qdisc_skb_cb *cb = (struct qdisc_skb_cb *)skb->cb;
39728 +
39729 +- if (!skb->len)
39730 +- return -EINVAL;
39731 +-
39732 + if (!__skb)
39733 + return 0;
39734 +
39735 +diff --git a/net/core/dev.c b/net/core/dev.c
39736 +index 3be256051e99b..70e06853ba255 100644
39737 +--- a/net/core/dev.c
39738 ++++ b/net/core/dev.c
39739 +@@ -10379,24 +10379,16 @@ void netdev_run_todo(void)
39740 + void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
39741 + const struct net_device_stats *netdev_stats)
39742 + {
39743 +-#if BITS_PER_LONG == 64
39744 +- BUILD_BUG_ON(sizeof(*stats64) < sizeof(*netdev_stats));
39745 +- memcpy(stats64, netdev_stats, sizeof(*netdev_stats));
39746 +- /* zero out counters that only exist in rtnl_link_stats64 */
39747 +- memset((char *)stats64 + sizeof(*netdev_stats), 0,
39748 +- sizeof(*stats64) - sizeof(*netdev_stats));
39749 +-#else
39750 +- size_t i, n = sizeof(*netdev_stats) / sizeof(unsigned long);
39751 +- const unsigned long *src = (const unsigned long *)netdev_stats;
39752 ++ size_t i, n = sizeof(*netdev_stats) / sizeof(atomic_long_t);
39753 ++ const atomic_long_t *src = (atomic_long_t *)netdev_stats;
39754 + u64 *dst = (u64 *)stats64;
39755 +
39756 + BUILD_BUG_ON(n > sizeof(*stats64) / sizeof(u64));
39757 + for (i = 0; i < n; i++)
39758 +- dst[i] = src[i];
39759 ++ dst[i] = atomic_long_read(&src[i]);
39760 + /* zero out counters that only exist in rtnl_link_stats64 */
39761 + memset((char *)stats64 + n * sizeof(u64), 0,
39762 + sizeof(*stats64) - n * sizeof(u64));
39763 +-#endif
39764 + }
39765 + EXPORT_SYMBOL(netdev_stats_to_stats64);
39766 +
39767 +diff --git a/net/core/devlink.c b/net/core/devlink.c
39768 +index 89baa7c0938b9..2aa77d4b80d0a 100644
39769 +--- a/net/core/devlink.c
39770 ++++ b/net/core/devlink.c
39771 +@@ -1505,10 +1505,13 @@ static int devlink_nl_cmd_get_dumpit(struct sk_buff *msg,
39772 + continue;
39773 + }
39774 +
39775 ++ devl_lock(devlink);
39776 + err = devlink_nl_fill(msg, devlink, DEVLINK_CMD_NEW,
39777 + NETLINK_CB(cb->skb).portid,
39778 + cb->nlh->nlmsg_seq, NLM_F_MULTI);
39779 ++ devl_unlock(devlink);
39780 + devlink_put(devlink);
39781 ++
39782 + if (err)
39783 + goto out;
39784 + idx++;
39785 +@@ -11435,8 +11438,10 @@ void devl_region_destroy(struct devlink_region *region)
39786 + devl_assert_locked(devlink);
39787 +
39788 + /* Free all snapshots of region */
39789 ++ mutex_lock(&region->snapshot_lock);
39790 + list_for_each_entry_safe(snapshot, ts, &region->snapshot_list, list)
39791 + devlink_region_snapshot_del(region, snapshot);
39792 ++ mutex_unlock(&region->snapshot_lock);
39793 +
39794 + list_del(&region->list);
39795 + mutex_destroy(&region->snapshot_lock);
39796 +diff --git a/net/core/filter.c b/net/core/filter.c
39797 +index bb0136e7a8e42..a368edd9057c7 100644
39798 +--- a/net/core/filter.c
39799 ++++ b/net/core/filter.c
39800 +@@ -80,6 +80,7 @@
39801 + #include <net/tls.h>
39802 + #include <net/xdp.h>
39803 + #include <net/mptcp.h>
39804 ++#include <net/netfilter/nf_conntrack_bpf.h>
39805 +
39806 + static const struct bpf_func_proto *
39807 + bpf_sk_base_func_proto(enum bpf_func_id func_id);
39808 +@@ -2124,8 +2125,17 @@ static int __bpf_redirect_no_mac(struct sk_buff *skb, struct net_device *dev,
39809 + {
39810 + unsigned int mlen = skb_network_offset(skb);
39811 +
39812 ++ if (unlikely(skb->len <= mlen)) {
39813 ++ kfree_skb(skb);
39814 ++ return -ERANGE;
39815 ++ }
39816 ++
39817 + if (mlen) {
39818 + __skb_pull(skb, mlen);
39819 ++ if (unlikely(!skb->len)) {
39820 ++ kfree_skb(skb);
39821 ++ return -ERANGE;
39822 ++ }
39823 +
39824 + /* At ingress, the mac header has already been pulled once.
39825 + * At egress, skb_pospull_rcsum has to be done in case that
39826 +@@ -2145,7 +2155,7 @@ static int __bpf_redirect_common(struct sk_buff *skb, struct net_device *dev,
39827 + u32 flags)
39828 + {
39829 + /* Verify that a link layer header is carried */
39830 +- if (unlikely(skb->mac_header >= skb->network_header)) {
39831 ++ if (unlikely(skb->mac_header >= skb->network_header || skb->len == 0)) {
39832 + kfree_skb(skb);
39833 + return -ERANGE;
39834 + }
39835 +@@ -7983,6 +7993,19 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
39836 + default:
39837 + return bpf_sk_base_func_proto(func_id);
39838 + }
39839 ++
39840 ++#if IS_MODULE(CONFIG_NF_CONNTRACK) && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES)
39841 ++ /* The nf_conn___init type is used in the NF_CONNTRACK kfuncs. The
39842 ++ * kfuncs are defined in two different modules, and we want to be able
39843 ++ * to use them interchangeably with the same BTF type ID. Because modules
39844 ++ * can't de-duplicate BTF IDs between each other, we need the type to be
39845 ++ * referenced in the vmlinux BTF or the verifier will get confused about
39846 ++ * the different types. So we add this dummy type reference which will
39847 ++ * be included in vmlinux BTF, allowing both modules to refer to the
39848 ++ * same type ID.
39849 ++ */
39850 ++ BTF_TYPE_EMIT(struct nf_conn___init);
39851 ++#endif
39852 + }
39853 +
39854 + const struct bpf_func_proto bpf_sock_map_update_proto __weak;
39855 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
39856 +index 88fa40571d0c7..759bede0b3dd6 100644
39857 +--- a/net/core/skbuff.c
39858 ++++ b/net/core/skbuff.c
39859 +@@ -2416,6 +2416,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
39860 + insp = list;
39861 + } else {
39862 + /* Eaten partially. */
39863 ++ if (skb_is_gso(skb) && !list->head_frag &&
39864 ++ skb_headlen(list))
39865 ++ skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
39866 +
39867 + if (skb_shared(list)) {
39868 + /* Sucks! We need to fork list. :-( */
39869 +diff --git a/net/core/skmsg.c b/net/core/skmsg.c
39870 +index e6b9ced3eda82..53d0251788aa2 100644
39871 +--- a/net/core/skmsg.c
39872 ++++ b/net/core/skmsg.c
39873 +@@ -886,13 +886,16 @@ int sk_psock_msg_verdict(struct sock *sk, struct sk_psock *psock,
39874 + ret = sk_psock_map_verd(ret, msg->sk_redir);
39875 + psock->apply_bytes = msg->apply_bytes;
39876 + if (ret == __SK_REDIRECT) {
39877 +- if (psock->sk_redir)
39878 ++ if (psock->sk_redir) {
39879 + sock_put(psock->sk_redir);
39880 +- psock->sk_redir = msg->sk_redir;
39881 +- if (!psock->sk_redir) {
39882 ++ psock->sk_redir = NULL;
39883 ++ }
39884 ++ if (!msg->sk_redir) {
39885 + ret = __SK_DROP;
39886 + goto out;
39887 + }
39888 ++ psock->redir_ingress = sk_msg_to_ingress(msg);
39889 ++ psock->sk_redir = msg->sk_redir;
39890 + sock_hold(psock->sk_redir);
39891 + }
39892 + out:
39893 +diff --git a/net/core/sock.c b/net/core/sock.c
39894 +index a3ba0358c77c0..30407b2dd2ac4 100644
39895 +--- a/net/core/sock.c
39896 ++++ b/net/core/sock.c
39897 +@@ -1436,7 +1436,7 @@ set_sndbuf:
39898 + break;
39899 + }
39900 + case SO_INCOMING_CPU:
39901 +- WRITE_ONCE(sk->sk_incoming_cpu, val);
39902 ++ reuseport_update_incoming_cpu(sk, val);
39903 + break;
39904 +
39905 + case SO_CNX_ADVICE:
39906 +diff --git a/net/core/sock_map.c b/net/core/sock_map.c
39907 +index 81beb16ab1ebf..22fa2c5bc6ec9 100644
39908 +--- a/net/core/sock_map.c
39909 ++++ b/net/core/sock_map.c
39910 +@@ -349,11 +349,13 @@ static void sock_map_free(struct bpf_map *map)
39911 +
39912 + sk = xchg(psk, NULL);
39913 + if (sk) {
39914 ++ sock_hold(sk);
39915 + lock_sock(sk);
39916 + rcu_read_lock();
39917 + sock_map_unref(sk, psk);
39918 + rcu_read_unlock();
39919 + release_sock(sk);
39920 ++ sock_put(sk);
39921 + }
39922 + }
39923 +
39924 +diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
39925 +index fb90e1e00773b..5a165286e4d8e 100644
39926 +--- a/net/core/sock_reuseport.c
39927 ++++ b/net/core/sock_reuseport.c
39928 +@@ -37,6 +37,70 @@ void reuseport_has_conns_set(struct sock *sk)
39929 + }
39930 + EXPORT_SYMBOL(reuseport_has_conns_set);
39931 +
39932 ++static void __reuseport_get_incoming_cpu(struct sock_reuseport *reuse)
39933 ++{
39934 ++ /* Paired with READ_ONCE() in reuseport_select_sock_by_hash(). */
39935 ++ WRITE_ONCE(reuse->incoming_cpu, reuse->incoming_cpu + 1);
39936 ++}
39937 ++
39938 ++static void __reuseport_put_incoming_cpu(struct sock_reuseport *reuse)
39939 ++{
39940 ++ /* Paired with READ_ONCE() in reuseport_select_sock_by_hash(). */
39941 ++ WRITE_ONCE(reuse->incoming_cpu, reuse->incoming_cpu - 1);
39942 ++}
39943 ++
39944 ++static void reuseport_get_incoming_cpu(struct sock *sk, struct sock_reuseport *reuse)
39945 ++{
39946 ++ if (sk->sk_incoming_cpu >= 0)
39947 ++ __reuseport_get_incoming_cpu(reuse);
39948 ++}
39949 ++
39950 ++static void reuseport_put_incoming_cpu(struct sock *sk, struct sock_reuseport *reuse)
39951 ++{
39952 ++ if (sk->sk_incoming_cpu >= 0)
39953 ++ __reuseport_put_incoming_cpu(reuse);
39954 ++}
39955 ++
39956 ++void reuseport_update_incoming_cpu(struct sock *sk, int val)
39957 ++{
39958 ++ struct sock_reuseport *reuse;
39959 ++ int old_sk_incoming_cpu;
39960 ++
39961 ++ if (unlikely(!rcu_access_pointer(sk->sk_reuseport_cb))) {
39962 ++ /* Paired with READ_ONCE() in sk_incoming_cpu_update()
39963 ++ * and compute_score().
39964 ++ */
39965 ++ WRITE_ONCE(sk->sk_incoming_cpu, val);
39966 ++ return;
39967 ++ }
39968 ++
39969 ++ spin_lock_bh(&reuseport_lock);
39970 ++
39971 ++ /* This must be done under reuseport_lock to avoid a race with
39972 ++ * reuseport_grow(), which accesses sk->sk_incoming_cpu without
39973 ++ * lock_sock() when detaching a shutdown()ed sk.
39974 ++ *
39975 ++ * Paired with READ_ONCE() in reuseport_select_sock_by_hash().
39976 ++ */
39977 ++ old_sk_incoming_cpu = sk->sk_incoming_cpu;
39978 ++ WRITE_ONCE(sk->sk_incoming_cpu, val);
39979 ++
39980 ++ reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
39981 ++ lockdep_is_held(&reuseport_lock));
39982 ++
39983 ++ /* reuseport_grow() has detached a closed sk. */
39984 ++ if (!reuse)
39985 ++ goto out;
39986 ++
39987 ++ if (old_sk_incoming_cpu < 0 && val >= 0)
39988 ++ __reuseport_get_incoming_cpu(reuse);
39989 ++ else if (old_sk_incoming_cpu >= 0 && val < 0)
39990 ++ __reuseport_put_incoming_cpu(reuse);
39991 ++
39992 ++out:
39993 ++ spin_unlock_bh(&reuseport_lock);
39994 ++}
39995 ++
39996 + static int reuseport_sock_index(struct sock *sk,
39997 + const struct sock_reuseport *reuse,
39998 + bool closed)
39999 +@@ -64,6 +128,7 @@ static void __reuseport_add_sock(struct sock *sk,
40000 + /* paired with smp_rmb() in reuseport_(select|migrate)_sock() */
40001 + smp_wmb();
40002 + reuse->num_socks++;
40003 ++ reuseport_get_incoming_cpu(sk, reuse);
40004 + }
40005 +
40006 + static bool __reuseport_detach_sock(struct sock *sk,
40007 +@@ -76,6 +141,7 @@ static bool __reuseport_detach_sock(struct sock *sk,
40008 +
40009 + reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
40010 + reuse->num_socks--;
40011 ++ reuseport_put_incoming_cpu(sk, reuse);
40012 +
40013 + return true;
40014 + }
40015 +@@ -86,6 +152,7 @@ static void __reuseport_add_closed_sock(struct sock *sk,
40016 + reuse->socks[reuse->max_socks - reuse->num_closed_socks - 1] = sk;
40017 + /* paired with READ_ONCE() in inet_csk_bind_conflict() */
40018 + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks + 1);
40019 ++ reuseport_get_incoming_cpu(sk, reuse);
40020 + }
40021 +
40022 + static bool __reuseport_detach_closed_sock(struct sock *sk,
40023 +@@ -99,6 +166,7 @@ static bool __reuseport_detach_closed_sock(struct sock *sk,
40024 + reuse->socks[i] = reuse->socks[reuse->max_socks - reuse->num_closed_socks];
40025 + /* paired with READ_ONCE() in inet_csk_bind_conflict() */
40026 + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks - 1);
40027 ++ reuseport_put_incoming_cpu(sk, reuse);
40028 +
40029 + return true;
40030 + }
40031 +@@ -166,6 +234,7 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
40032 + reuse->bind_inany = bind_inany;
40033 + reuse->socks[0] = sk;
40034 + reuse->num_socks = 1;
40035 ++ reuseport_get_incoming_cpu(sk, reuse);
40036 + rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
40037 +
40038 + out:
40039 +@@ -209,6 +278,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
40040 + more_reuse->reuseport_id = reuse->reuseport_id;
40041 + more_reuse->bind_inany = reuse->bind_inany;
40042 + more_reuse->has_conns = reuse->has_conns;
40043 ++ more_reuse->incoming_cpu = reuse->incoming_cpu;
40044 +
40045 + memcpy(more_reuse->socks, reuse->socks,
40046 + reuse->num_socks * sizeof(struct sock *));
40047 +@@ -458,18 +528,32 @@ static struct sock *run_bpf_filter(struct sock_reuseport *reuse, u16 socks,
40048 + static struct sock *reuseport_select_sock_by_hash(struct sock_reuseport *reuse,
40049 + u32 hash, u16 num_socks)
40050 + {
40051 ++ struct sock *first_valid_sk = NULL;
40052 + int i, j;
40053 +
40054 + i = j = reciprocal_scale(hash, num_socks);
40055 +- while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
40056 ++ do {
40057 ++ struct sock *sk = reuse->socks[i];
40058 ++
40059 ++ if (sk->sk_state != TCP_ESTABLISHED) {
40060 ++ /* Paired with WRITE_ONCE() in __reuseport_(get|put)_incoming_cpu(). */
40061 ++ if (!READ_ONCE(reuse->incoming_cpu))
40062 ++ return sk;
40063 ++
40064 ++ /* Paired with WRITE_ONCE() in reuseport_update_incoming_cpu(). */
40065 ++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
40066 ++ return sk;
40067 ++
40068 ++ if (!first_valid_sk)
40069 ++ first_valid_sk = sk;
40070 ++ }
40071 ++
40072 + i++;
40073 + if (i >= num_socks)
40074 + i = 0;
40075 +- if (i == j)
40076 +- return NULL;
40077 +- }
40078 ++ } while (i != j);
40079 +
40080 +- return reuse->socks[i];
40081 ++ return first_valid_sk;
40082 + }
40083 +
40084 + /**
40085 +diff --git a/net/core/stream.c b/net/core/stream.c
40086 +index 75fded8495f5b..516895f482356 100644
40087 +--- a/net/core/stream.c
40088 ++++ b/net/core/stream.c
40089 +@@ -196,6 +196,12 @@ void sk_stream_kill_queues(struct sock *sk)
40090 + /* First the read buffer. */
40091 + __skb_queue_purge(&sk->sk_receive_queue);
40092 +
40093 ++ /* Next, the error queue.
40094 ++ * We need to use queue lock, because other threads might
40095 ++ * add packets to the queue without socket lock being held.
40096 ++ */
40097 ++ skb_queue_purge(&sk->sk_error_queue);
40098 ++
40099 + /* Next, the write queue. */
40100 + WARN_ON_ONCE(!skb_queue_empty(&sk->sk_write_queue));
40101 +
40102 +diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
40103 +index 34e5ec5d3e236..89371b16416e2 100644
40104 +--- a/net/dsa/tag_8021q.c
40105 ++++ b/net/dsa/tag_8021q.c
40106 +@@ -398,6 +398,7 @@ static void dsa_tag_8021q_teardown(struct dsa_switch *ds)
40107 + int dsa_tag_8021q_register(struct dsa_switch *ds, __be16 proto)
40108 + {
40109 + struct dsa_8021q_context *ctx;
40110 ++ int err;
40111 +
40112 + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
40113 + if (!ctx)
40114 +@@ -410,7 +411,15 @@ int dsa_tag_8021q_register(struct dsa_switch *ds, __be16 proto)
40115 +
40116 + ds->tag_8021q_ctx = ctx;
40117 +
40118 +- return dsa_tag_8021q_setup(ds);
40119 ++ err = dsa_tag_8021q_setup(ds);
40120 ++ if (err)
40121 ++ goto err_free;
40122 ++
40123 ++ return 0;
40124 ++
40125 ++err_free:
40126 ++ kfree(ctx);
40127 ++ return err;
40128 + }
40129 + EXPORT_SYMBOL_GPL(dsa_tag_8021q_register);
40130 +
40131 +diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
40132 +index 57e7238a4136b..81fe2422fe58a 100644
40133 +--- a/net/ethtool/ioctl.c
40134 ++++ b/net/ethtool/ioctl.c
40135 +@@ -2008,7 +2008,8 @@ static int ethtool_phys_id(struct net_device *dev, void __user *useraddr)
40136 + } else {
40137 + /* Driver expects to be called at twice the frequency in rc */
40138 + int n = rc * 2, interval = HZ / n;
40139 +- u64 count = n * id.data, i = 0;
40140 ++ u64 count = mul_u32_u32(n, id.data);
40141 ++ u64 i = 0;
40142 +
40143 + do {
40144 + rtnl_lock();
40145 +diff --git a/net/hsr/hsr_debugfs.c b/net/hsr/hsr_debugfs.c
40146 +index de476a4176314..1a195efc79cd1 100644
40147 +--- a/net/hsr/hsr_debugfs.c
40148 ++++ b/net/hsr/hsr_debugfs.c
40149 +@@ -9,7 +9,6 @@
40150 + #include <linux/module.h>
40151 + #include <linux/errno.h>
40152 + #include <linux/debugfs.h>
40153 +-#include <linux/jhash.h>
40154 + #include "hsr_main.h"
40155 + #include "hsr_framereg.h"
40156 +
40157 +@@ -21,7 +20,6 @@ hsr_node_table_show(struct seq_file *sfp, void *data)
40158 + {
40159 + struct hsr_priv *priv = (struct hsr_priv *)sfp->private;
40160 + struct hsr_node *node;
40161 +- int i;
40162 +
40163 + seq_printf(sfp, "Node Table entries for (%s) device\n",
40164 + (priv->prot_version == PRP_V1 ? "PRP" : "HSR"));
40165 +@@ -33,28 +31,22 @@ hsr_node_table_show(struct seq_file *sfp, void *data)
40166 + seq_puts(sfp, "DAN-H\n");
40167 +
40168 + rcu_read_lock();
40169 +-
40170 +- for (i = 0 ; i < priv->hash_buckets; i++) {
40171 +- hlist_for_each_entry_rcu(node, &priv->node_db[i], mac_list) {
40172 +- /* skip self node */
40173 +- if (hsr_addr_is_self(priv, node->macaddress_A))
40174 +- continue;
40175 +- seq_printf(sfp, "%pM ", &node->macaddress_A[0]);
40176 +- seq_printf(sfp, "%pM ", &node->macaddress_B[0]);
40177 +- seq_printf(sfp, "%10lx, ",
40178 +- node->time_in[HSR_PT_SLAVE_A]);
40179 +- seq_printf(sfp, "%10lx, ",
40180 +- node->time_in[HSR_PT_SLAVE_B]);
40181 +- seq_printf(sfp, "%14x, ", node->addr_B_port);
40182 +-
40183 +- if (priv->prot_version == PRP_V1)
40184 +- seq_printf(sfp, "%5x, %5x, %5x\n",
40185 +- node->san_a, node->san_b,
40186 +- (node->san_a == 0 &&
40187 +- node->san_b == 0));
40188 +- else
40189 +- seq_printf(sfp, "%5x\n", 1);
40190 +- }
40191 ++ list_for_each_entry_rcu(node, &priv->node_db, mac_list) {
40192 ++ /* skip self node */
40193 ++ if (hsr_addr_is_self(priv, node->macaddress_A))
40194 ++ continue;
40195 ++ seq_printf(sfp, "%pM ", &node->macaddress_A[0]);
40196 ++ seq_printf(sfp, "%pM ", &node->macaddress_B[0]);
40197 ++ seq_printf(sfp, "%10lx, ", node->time_in[HSR_PT_SLAVE_A]);
40198 ++ seq_printf(sfp, "%10lx, ", node->time_in[HSR_PT_SLAVE_B]);
40199 ++ seq_printf(sfp, "%14x, ", node->addr_B_port);
40200 ++
40201 ++ if (priv->prot_version == PRP_V1)
40202 ++ seq_printf(sfp, "%5x, %5x, %5x\n",
40203 ++ node->san_a, node->san_b,
40204 ++ (node->san_a == 0 && node->san_b == 0));
40205 ++ else
40206 ++ seq_printf(sfp, "%5x\n", 1);
40207 + }
40208 + rcu_read_unlock();
40209 + return 0;
40210 +diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
40211 +index 6ffef47e9be55..b1e86a7265b32 100644
40212 +--- a/net/hsr/hsr_device.c
40213 ++++ b/net/hsr/hsr_device.c
40214 +@@ -219,7 +219,9 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
40215 + skb->dev = master->dev;
40216 + skb_reset_mac_header(skb);
40217 + skb_reset_mac_len(skb);
40218 ++ spin_lock_bh(&hsr->seqnr_lock);
40219 + hsr_forward_skb(skb, master);
40220 ++ spin_unlock_bh(&hsr->seqnr_lock);
40221 + } else {
40222 + dev_core_stats_tx_dropped_inc(dev);
40223 + dev_kfree_skb_any(skb);
40224 +@@ -278,7 +280,6 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
40225 + __u8 type = HSR_TLV_LIFE_CHECK;
40226 + struct hsr_sup_payload *hsr_sp;
40227 + struct hsr_sup_tag *hsr_stag;
40228 +- unsigned long irqflags;
40229 + struct sk_buff *skb;
40230 +
40231 + *interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
40232 +@@ -299,7 +300,7 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
40233 + set_hsr_stag_HSR_ver(hsr_stag, hsr->prot_version);
40234 +
40235 + /* From HSRv1 on we have separate supervision sequence numbers. */
40236 +- spin_lock_irqsave(&master->hsr->seqnr_lock, irqflags);
40237 ++ spin_lock_bh(&hsr->seqnr_lock);
40238 + if (hsr->prot_version > 0) {
40239 + hsr_stag->sequence_nr = htons(hsr->sup_sequence_nr);
40240 + hsr->sup_sequence_nr++;
40241 +@@ -307,7 +308,6 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
40242 + hsr_stag->sequence_nr = htons(hsr->sequence_nr);
40243 + hsr->sequence_nr++;
40244 + }
40245 +- spin_unlock_irqrestore(&master->hsr->seqnr_lock, irqflags);
40246 +
40247 + hsr_stag->tlv.HSR_TLV_type = type;
40248 + /* TODO: Why 12 in HSRv0? */
40249 +@@ -318,11 +318,13 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
40250 + hsr_sp = skb_put(skb, sizeof(struct hsr_sup_payload));
40251 + ether_addr_copy(hsr_sp->macaddress_A, master->dev->dev_addr);
40252 +
40253 +- if (skb_put_padto(skb, ETH_ZLEN))
40254 ++ if (skb_put_padto(skb, ETH_ZLEN)) {
40255 ++ spin_unlock_bh(&hsr->seqnr_lock);
40256 + return;
40257 ++ }
40258 +
40259 + hsr_forward_skb(skb, master);
40260 +-
40261 ++ spin_unlock_bh(&hsr->seqnr_lock);
40262 + return;
40263 + }
40264 +
40265 +@@ -332,7 +334,6 @@ static void send_prp_supervision_frame(struct hsr_port *master,
40266 + struct hsr_priv *hsr = master->hsr;
40267 + struct hsr_sup_payload *hsr_sp;
40268 + struct hsr_sup_tag *hsr_stag;
40269 +- unsigned long irqflags;
40270 + struct sk_buff *skb;
40271 +
40272 + skb = hsr_init_skb(master);
40273 +@@ -347,7 +348,7 @@ static void send_prp_supervision_frame(struct hsr_port *master,
40274 + set_hsr_stag_HSR_ver(hsr_stag, (hsr->prot_version ? 1 : 0));
40275 +
40276 + /* From HSRv1 on we have separate supervision sequence numbers. */
40277 +- spin_lock_irqsave(&master->hsr->seqnr_lock, irqflags);
40278 ++ spin_lock_bh(&hsr->seqnr_lock);
40279 + hsr_stag->sequence_nr = htons(hsr->sup_sequence_nr);
40280 + hsr->sup_sequence_nr++;
40281 + hsr_stag->tlv.HSR_TLV_type = PRP_TLV_LIFE_CHECK_DD;
40282 +@@ -358,13 +359,12 @@ static void send_prp_supervision_frame(struct hsr_port *master,
40283 + ether_addr_copy(hsr_sp->macaddress_A, master->dev->dev_addr);
40284 +
40285 + if (skb_put_padto(skb, ETH_ZLEN)) {
40286 +- spin_unlock_irqrestore(&master->hsr->seqnr_lock, irqflags);
40287 ++ spin_unlock_bh(&hsr->seqnr_lock);
40288 + return;
40289 + }
40290 +
40291 +- spin_unlock_irqrestore(&master->hsr->seqnr_lock, irqflags);
40292 +-
40293 + hsr_forward_skb(skb, master);
40294 ++ spin_unlock_bh(&hsr->seqnr_lock);
40295 + }
40296 +
40297 + /* Announce (supervision frame) timer function
40298 +@@ -444,7 +444,7 @@ void hsr_dev_setup(struct net_device *dev)
40299 + dev->header_ops = &hsr_header_ops;
40300 + dev->netdev_ops = &hsr_device_ops;
40301 + SET_NETDEV_DEVTYPE(dev, &hsr_type);
40302 +- dev->priv_flags |= IFF_NO_QUEUE;
40303 ++ dev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
40304 +
40305 + dev->needs_free_netdev = true;
40306 +
40307 +@@ -485,16 +485,12 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
40308 + {
40309 + bool unregister = false;
40310 + struct hsr_priv *hsr;
40311 +- int res, i;
40312 ++ int res;
40313 +
40314 + hsr = netdev_priv(hsr_dev);
40315 + INIT_LIST_HEAD(&hsr->ports);
40316 +- INIT_HLIST_HEAD(&hsr->self_node_db);
40317 +- hsr->hash_buckets = HSR_HSIZE;
40318 +- get_random_bytes(&hsr->hash_seed, sizeof(hsr->hash_seed));
40319 +- for (i = 0; i < hsr->hash_buckets; i++)
40320 +- INIT_HLIST_HEAD(&hsr->node_db[i]);
40321 +-
40322 ++ INIT_LIST_HEAD(&hsr->node_db);
40323 ++ INIT_LIST_HEAD(&hsr->self_node_db);
40324 + spin_lock_init(&hsr->list_lock);
40325 +
40326 + eth_hw_addr_set(hsr_dev, slave[0]->dev_addr);
40327 +diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
40328 +index 56bb27d67a2ee..629daacc96071 100644
40329 +--- a/net/hsr/hsr_forward.c
40330 ++++ b/net/hsr/hsr_forward.c
40331 +@@ -500,7 +500,6 @@ static void handle_std_frame(struct sk_buff *skb,
40332 + {
40333 + struct hsr_port *port = frame->port_rcv;
40334 + struct hsr_priv *hsr = port->hsr;
40335 +- unsigned long irqflags;
40336 +
40337 + frame->skb_hsr = NULL;
40338 + frame->skb_prp = NULL;
40339 +@@ -510,10 +509,9 @@ static void handle_std_frame(struct sk_buff *skb,
40340 + frame->is_from_san = true;
40341 + } else {
40342 + /* Sequence nr for the master node */
40343 +- spin_lock_irqsave(&hsr->seqnr_lock, irqflags);
40344 ++ lockdep_assert_held(&hsr->seqnr_lock);
40345 + frame->sequence_nr = hsr->sequence_nr;
40346 + hsr->sequence_nr++;
40347 +- spin_unlock_irqrestore(&hsr->seqnr_lock, irqflags);
40348 + }
40349 + }
40350 +
40351 +@@ -571,23 +569,20 @@ static int fill_frame_info(struct hsr_frame_info *frame,
40352 + struct ethhdr *ethhdr;
40353 + __be16 proto;
40354 + int ret;
40355 +- u32 hash;
40356 +
40357 + /* Check if skb contains ethhdr */
40358 + if (skb->mac_len < sizeof(struct ethhdr))
40359 + return -EINVAL;
40360 +
40361 + memset(frame, 0, sizeof(*frame));
40362 +-
40363 +- ethhdr = (struct ethhdr *)skb_mac_header(skb);
40364 +- hash = hsr_mac_hash(port->hsr, ethhdr->h_source);
40365 + frame->is_supervision = is_supervision_frame(port->hsr, skb);
40366 +- frame->node_src = hsr_get_node(port, &hsr->node_db[hash], skb,
40367 ++ frame->node_src = hsr_get_node(port, &hsr->node_db, skb,
40368 + frame->is_supervision,
40369 + port->type);
40370 + if (!frame->node_src)
40371 + return -1; /* Unknown node and !is_supervision, or no mem */
40372 +
40373 ++ ethhdr = (struct ethhdr *)skb_mac_header(skb);
40374 + frame->is_vlan = false;
40375 + proto = ethhdr->h_proto;
40376 +
40377 +@@ -617,11 +612,13 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
40378 + {
40379 + struct hsr_frame_info frame;
40380 +
40381 ++ rcu_read_lock();
40382 + if (fill_frame_info(&frame, skb, port) < 0)
40383 + goto out_drop;
40384 +
40385 + hsr_register_frame_in(frame.node_src, port, frame.sequence_nr);
40386 + hsr_forward_do(&frame);
40387 ++ rcu_read_unlock();
40388 + /* Gets called for ingress frames as well as egress from master port.
40389 + * So check and increment stats for master port only here.
40390 + */
40391 +@@ -636,6 +633,7 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
40392 + return;
40393 +
40394 + out_drop:
40395 ++ rcu_read_unlock();
40396 + port->dev->stats.tx_dropped++;
40397 + kfree_skb(skb);
40398 + }
40399 +diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
40400 +index 584e217887997..39a6088080e93 100644
40401 +--- a/net/hsr/hsr_framereg.c
40402 ++++ b/net/hsr/hsr_framereg.c
40403 +@@ -15,37 +15,10 @@
40404 + #include <linux/etherdevice.h>
40405 + #include <linux/slab.h>
40406 + #include <linux/rculist.h>
40407 +-#include <linux/jhash.h>
40408 + #include "hsr_main.h"
40409 + #include "hsr_framereg.h"
40410 + #include "hsr_netlink.h"
40411 +
40412 +-#ifdef CONFIG_LOCKDEP
40413 +-int lockdep_hsr_is_held(spinlock_t *lock)
40414 +-{
40415 +- return lockdep_is_held(lock);
40416 +-}
40417 +-#endif
40418 +-
40419 +-u32 hsr_mac_hash(struct hsr_priv *hsr, const unsigned char *addr)
40420 +-{
40421 +- u32 hash = jhash(addr, ETH_ALEN, hsr->hash_seed);
40422 +-
40423 +- return reciprocal_scale(hash, hsr->hash_buckets);
40424 +-}
40425 +-
40426 +-struct hsr_node *hsr_node_get_first(struct hlist_head *head, spinlock_t *lock)
40427 +-{
40428 +- struct hlist_node *first;
40429 +-
40430 +- first = rcu_dereference_bh_check(hlist_first_rcu(head),
40431 +- lockdep_hsr_is_held(lock));
40432 +- if (first)
40433 +- return hlist_entry(first, struct hsr_node, mac_list);
40434 +-
40435 +- return NULL;
40436 +-}
40437 +-
40438 + /* seq_nr_after(a, b) - return true if a is after (higher in sequence than) b,
40439 + * false otherwise.
40440 + */
40441 +@@ -67,7 +40,8 @@ bool hsr_addr_is_self(struct hsr_priv *hsr, unsigned char *addr)
40442 + {
40443 + struct hsr_node *node;
40444 +
40445 +- node = hsr_node_get_first(&hsr->self_node_db, &hsr->list_lock);
40446 ++ node = list_first_or_null_rcu(&hsr->self_node_db, struct hsr_node,
40447 ++ mac_list);
40448 + if (!node) {
40449 + WARN_ONCE(1, "HSR: No self node\n");
40450 + return false;
40451 +@@ -83,12 +57,12 @@ bool hsr_addr_is_self(struct hsr_priv *hsr, unsigned char *addr)
40452 +
40453 + /* Search for mac entry. Caller must hold rcu read lock.
40454 + */
40455 +-static struct hsr_node *find_node_by_addr_A(struct hlist_head *node_db,
40456 ++static struct hsr_node *find_node_by_addr_A(struct list_head *node_db,
40457 + const unsigned char addr[ETH_ALEN])
40458 + {
40459 + struct hsr_node *node;
40460 +
40461 +- hlist_for_each_entry_rcu(node, node_db, mac_list) {
40462 ++ list_for_each_entry_rcu(node, node_db, mac_list) {
40463 + if (ether_addr_equal(node->macaddress_A, addr))
40464 + return node;
40465 + }
40466 +@@ -103,7 +77,7 @@ int hsr_create_self_node(struct hsr_priv *hsr,
40467 + const unsigned char addr_a[ETH_ALEN],
40468 + const unsigned char addr_b[ETH_ALEN])
40469 + {
40470 +- struct hlist_head *self_node_db = &hsr->self_node_db;
40471 ++ struct list_head *self_node_db = &hsr->self_node_db;
40472 + struct hsr_node *node, *oldnode;
40473 +
40474 + node = kmalloc(sizeof(*node), GFP_KERNEL);
40475 +@@ -114,13 +88,14 @@ int hsr_create_self_node(struct hsr_priv *hsr,
40476 + ether_addr_copy(node->macaddress_B, addr_b);
40477 +
40478 + spin_lock_bh(&hsr->list_lock);
40479 +- oldnode = hsr_node_get_first(self_node_db, &hsr->list_lock);
40480 ++ oldnode = list_first_or_null_rcu(self_node_db,
40481 ++ struct hsr_node, mac_list);
40482 + if (oldnode) {
40483 +- hlist_replace_rcu(&oldnode->mac_list, &node->mac_list);
40484 ++ list_replace_rcu(&oldnode->mac_list, &node->mac_list);
40485 + spin_unlock_bh(&hsr->list_lock);
40486 + kfree_rcu(oldnode, rcu_head);
40487 + } else {
40488 +- hlist_add_tail_rcu(&node->mac_list, self_node_db);
40489 ++ list_add_tail_rcu(&node->mac_list, self_node_db);
40490 + spin_unlock_bh(&hsr->list_lock);
40491 + }
40492 +
40493 +@@ -129,25 +104,25 @@ int hsr_create_self_node(struct hsr_priv *hsr,
40494 +
40495 + void hsr_del_self_node(struct hsr_priv *hsr)
40496 + {
40497 +- struct hlist_head *self_node_db = &hsr->self_node_db;
40498 ++ struct list_head *self_node_db = &hsr->self_node_db;
40499 + struct hsr_node *node;
40500 +
40501 + spin_lock_bh(&hsr->list_lock);
40502 +- node = hsr_node_get_first(self_node_db, &hsr->list_lock);
40503 ++ node = list_first_or_null_rcu(self_node_db, struct hsr_node, mac_list);
40504 + if (node) {
40505 +- hlist_del_rcu(&node->mac_list);
40506 ++ list_del_rcu(&node->mac_list);
40507 + kfree_rcu(node, rcu_head);
40508 + }
40509 + spin_unlock_bh(&hsr->list_lock);
40510 + }
40511 +
40512 +-void hsr_del_nodes(struct hlist_head *node_db)
40513 ++void hsr_del_nodes(struct list_head *node_db)
40514 + {
40515 + struct hsr_node *node;
40516 +- struct hlist_node *tmp;
40517 ++ struct hsr_node *tmp;
40518 +
40519 +- hlist_for_each_entry_safe(node, tmp, node_db, mac_list)
40520 +- kfree_rcu(node, rcu_head);
40521 ++ list_for_each_entry_safe(node, tmp, node_db, mac_list)
40522 ++ kfree(node);
40523 + }
40524 +
40525 + void prp_handle_san_frame(bool san, enum hsr_port_type port,
40526 +@@ -168,7 +143,7 @@ void prp_handle_san_frame(bool san, enum hsr_port_type port,
40527 + * originating from the newly added node.
40528 + */
40529 + static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
40530 +- struct hlist_head *node_db,
40531 ++ struct list_head *node_db,
40532 + unsigned char addr[],
40533 + u16 seq_out, bool san,
40534 + enum hsr_port_type rx_port)
40535 +@@ -182,6 +157,7 @@ static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
40536 + return NULL;
40537 +
40538 + ether_addr_copy(new_node->macaddress_A, addr);
40539 ++ spin_lock_init(&new_node->seq_out_lock);
40540 +
40541 + /* We are only interested in time diffs here, so use current jiffies
40542 + * as initialization. (0 could trigger an spurious ring error warning).
40543 +@@ -198,14 +174,14 @@ static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
40544 + hsr->proto_ops->handle_san_frame(san, rx_port, new_node);
40545 +
40546 + spin_lock_bh(&hsr->list_lock);
40547 +- hlist_for_each_entry_rcu(node, node_db, mac_list,
40548 +- lockdep_hsr_is_held(&hsr->list_lock)) {
40549 ++ list_for_each_entry_rcu(node, node_db, mac_list,
40550 ++ lockdep_is_held(&hsr->list_lock)) {
40551 + if (ether_addr_equal(node->macaddress_A, addr))
40552 + goto out;
40553 + if (ether_addr_equal(node->macaddress_B, addr))
40554 + goto out;
40555 + }
40556 +- hlist_add_tail_rcu(&new_node->mac_list, node_db);
40557 ++ list_add_tail_rcu(&new_node->mac_list, node_db);
40558 + spin_unlock_bh(&hsr->list_lock);
40559 + return new_node;
40560 + out:
40561 +@@ -225,7 +201,7 @@ void prp_update_san_info(struct hsr_node *node, bool is_sup)
40562 +
40563 + /* Get the hsr_node from which 'skb' was sent.
40564 + */
40565 +-struct hsr_node *hsr_get_node(struct hsr_port *port, struct hlist_head *node_db,
40566 ++struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db,
40567 + struct sk_buff *skb, bool is_sup,
40568 + enum hsr_port_type rx_port)
40569 + {
40570 +@@ -241,7 +217,7 @@ struct hsr_node *hsr_get_node(struct hsr_port *port, struct hlist_head *node_db,
40571 +
40572 + ethhdr = (struct ethhdr *)skb_mac_header(skb);
40573 +
40574 +- hlist_for_each_entry_rcu(node, node_db, mac_list) {
40575 ++ list_for_each_entry_rcu(node, node_db, mac_list) {
40576 + if (ether_addr_equal(node->macaddress_A, ethhdr->h_source)) {
40577 + if (hsr->proto_ops->update_san_info)
40578 + hsr->proto_ops->update_san_info(node, is_sup);
40579 +@@ -291,12 +267,11 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
40580 + struct hsr_sup_tlv *hsr_sup_tlv;
40581 + struct hsr_node *node_real;
40582 + struct sk_buff *skb = NULL;
40583 +- struct hlist_head *node_db;
40584 ++ struct list_head *node_db;
40585 + struct ethhdr *ethhdr;
40586 + int i;
40587 + unsigned int pull_size = 0;
40588 + unsigned int total_pull_size = 0;
40589 +- u32 hash;
40590 +
40591 + /* Here either frame->skb_hsr or frame->skb_prp should be
40592 + * valid as supervision frame always will have protocol
40593 +@@ -334,13 +309,11 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
40594 + hsr_sp = (struct hsr_sup_payload *)skb->data;
40595 +
40596 + /* Merge node_curr (registered on macaddress_B) into node_real */
40597 +- node_db = port_rcv->hsr->node_db;
40598 +- hash = hsr_mac_hash(hsr, hsr_sp->macaddress_A);
40599 +- node_real = find_node_by_addr_A(&node_db[hash], hsr_sp->macaddress_A);
40600 ++ node_db = &port_rcv->hsr->node_db;
40601 ++ node_real = find_node_by_addr_A(node_db, hsr_sp->macaddress_A);
40602 + if (!node_real)
40603 + /* No frame received from AddrA of this node yet */
40604 +- node_real = hsr_add_node(hsr, &node_db[hash],
40605 +- hsr_sp->macaddress_A,
40606 ++ node_real = hsr_add_node(hsr, node_db, hsr_sp->macaddress_A,
40607 + HSR_SEQNR_START - 1, true,
40608 + port_rcv->type);
40609 + if (!node_real)
40610 +@@ -374,14 +347,14 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
40611 + hsr_sp = (struct hsr_sup_payload *)skb->data;
40612 +
40613 + /* Check if redbox mac and node mac are equal. */
40614 +- if (!ether_addr_equal(node_real->macaddress_A,
40615 +- hsr_sp->macaddress_A)) {
40616 ++ if (!ether_addr_equal(node_real->macaddress_A, hsr_sp->macaddress_A)) {
40617 + /* This is a redbox supervision frame for a VDAN! */
40618 + goto done;
40619 + }
40620 + }
40621 +
40622 + ether_addr_copy(node_real->macaddress_B, ethhdr->h_source);
40623 ++ spin_lock_bh(&node_real->seq_out_lock);
40624 + for (i = 0; i < HSR_PT_PORTS; i++) {
40625 + if (!node_curr->time_in_stale[i] &&
40626 + time_after(node_curr->time_in[i], node_real->time_in[i])) {
40627 +@@ -392,12 +365,16 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
40628 + if (seq_nr_after(node_curr->seq_out[i], node_real->seq_out[i]))
40629 + node_real->seq_out[i] = node_curr->seq_out[i];
40630 + }
40631 ++ spin_unlock_bh(&node_real->seq_out_lock);
40632 + node_real->addr_B_port = port_rcv->type;
40633 +
40634 + spin_lock_bh(&hsr->list_lock);
40635 +- hlist_del_rcu(&node_curr->mac_list);
40636 ++ if (!node_curr->removed) {
40637 ++ list_del_rcu(&node_curr->mac_list);
40638 ++ node_curr->removed = true;
40639 ++ kfree_rcu(node_curr, rcu_head);
40640 ++ }
40641 + spin_unlock_bh(&hsr->list_lock);
40642 +- kfree_rcu(node_curr, rcu_head);
40643 +
40644 + done:
40645 + /* Push back here */
40646 +@@ -433,7 +410,6 @@ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
40647 + struct hsr_port *port)
40648 + {
40649 + struct hsr_node *node_dst;
40650 +- u32 hash;
40651 +
40652 + if (!skb_mac_header_was_set(skb)) {
40653 + WARN_ONCE(1, "%s: Mac header not set\n", __func__);
40654 +@@ -443,8 +419,7 @@ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
40655 + if (!is_unicast_ether_addr(eth_hdr(skb)->h_dest))
40656 + return;
40657 +
40658 +- hash = hsr_mac_hash(port->hsr, eth_hdr(skb)->h_dest);
40659 +- node_dst = find_node_by_addr_A(&port->hsr->node_db[hash],
40660 ++ node_dst = find_node_by_addr_A(&port->hsr->node_db,
40661 + eth_hdr(skb)->h_dest);
40662 + if (!node_dst) {
40663 + if (net_ratelimit())
40664 +@@ -484,13 +459,17 @@ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
40665 + int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
40666 + u16 sequence_nr)
40667 + {
40668 ++ spin_lock_bh(&node->seq_out_lock);
40669 + if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
40670 + time_is_after_jiffies(node->time_out[port->type] +
40671 +- msecs_to_jiffies(HSR_ENTRY_FORGET_TIME)))
40672 ++ msecs_to_jiffies(HSR_ENTRY_FORGET_TIME))) {
40673 ++ spin_unlock_bh(&node->seq_out_lock);
40674 + return 1;
40675 ++ }
40676 +
40677 + node->time_out[port->type] = jiffies;
40678 + node->seq_out[port->type] = sequence_nr;
40679 ++ spin_unlock_bh(&node->seq_out_lock);
40680 + return 0;
40681 + }
40682 +
40683 +@@ -520,71 +499,60 @@ static struct hsr_port *get_late_port(struct hsr_priv *hsr,
40684 + void hsr_prune_nodes(struct timer_list *t)
40685 + {
40686 + struct hsr_priv *hsr = from_timer(hsr, t, prune_timer);
40687 +- struct hlist_node *tmp;
40688 + struct hsr_node *node;
40689 ++ struct hsr_node *tmp;
40690 + struct hsr_port *port;
40691 + unsigned long timestamp;
40692 + unsigned long time_a, time_b;
40693 +- int i;
40694 +
40695 + spin_lock_bh(&hsr->list_lock);
40696 ++ list_for_each_entry_safe(node, tmp, &hsr->node_db, mac_list) {
40697 ++ /* Don't prune own node. Neither time_in[HSR_PT_SLAVE_A]
40698 ++ * nor time_in[HSR_PT_SLAVE_B], will ever be updated for
40699 ++ * the master port. Thus the master node will be repeatedly
40700 ++ * pruned leading to packet loss.
40701 ++ */
40702 ++ if (hsr_addr_is_self(hsr, node->macaddress_A))
40703 ++ continue;
40704 ++
40705 ++ /* Shorthand */
40706 ++ time_a = node->time_in[HSR_PT_SLAVE_A];
40707 ++ time_b = node->time_in[HSR_PT_SLAVE_B];
40708 ++
40709 ++ /* Check for timestamps old enough to risk wrap-around */
40710 ++ if (time_after(jiffies, time_a + MAX_JIFFY_OFFSET / 2))
40711 ++ node->time_in_stale[HSR_PT_SLAVE_A] = true;
40712 ++ if (time_after(jiffies, time_b + MAX_JIFFY_OFFSET / 2))
40713 ++ node->time_in_stale[HSR_PT_SLAVE_B] = true;
40714 ++
40715 ++ /* Get age of newest frame from node.
40716 ++ * At least one time_in is OK here; nodes get pruned long
40717 ++ * before both time_ins can get stale
40718 ++ */
40719 ++ timestamp = time_a;
40720 ++ if (node->time_in_stale[HSR_PT_SLAVE_A] ||
40721 ++ (!node->time_in_stale[HSR_PT_SLAVE_B] &&
40722 ++ time_after(time_b, time_a)))
40723 ++ timestamp = time_b;
40724 ++
40725 ++ /* Warn of ring error only as long as we get frames at all */
40726 ++ if (time_is_after_jiffies(timestamp +
40727 ++ msecs_to_jiffies(1.5 * MAX_SLAVE_DIFF))) {
40728 ++ rcu_read_lock();
40729 ++ port = get_late_port(hsr, node);
40730 ++ if (port)
40731 ++ hsr_nl_ringerror(hsr, node->macaddress_A, port);
40732 ++ rcu_read_unlock();
40733 ++ }
40734 +
40735 +- for (i = 0; i < hsr->hash_buckets; i++) {
40736 +- hlist_for_each_entry_safe(node, tmp, &hsr->node_db[i],
40737 +- mac_list) {
40738 +- /* Don't prune own node.
40739 +- * Neither time_in[HSR_PT_SLAVE_A]
40740 +- * nor time_in[HSR_PT_SLAVE_B], will ever be updated
40741 +- * for the master port. Thus the master node will be
40742 +- * repeatedly pruned leading to packet loss.
40743 +- */
40744 +- if (hsr_addr_is_self(hsr, node->macaddress_A))
40745 +- continue;
40746 +-
40747 +- /* Shorthand */
40748 +- time_a = node->time_in[HSR_PT_SLAVE_A];
40749 +- time_b = node->time_in[HSR_PT_SLAVE_B];
40750 +-
40751 +- /* Check for timestamps old enough to
40752 +- * risk wrap-around
40753 +- */
40754 +- if (time_after(jiffies, time_a + MAX_JIFFY_OFFSET / 2))
40755 +- node->time_in_stale[HSR_PT_SLAVE_A] = true;
40756 +- if (time_after(jiffies, time_b + MAX_JIFFY_OFFSET / 2))
40757 +- node->time_in_stale[HSR_PT_SLAVE_B] = true;
40758 +-
40759 +- /* Get age of newest frame from node.
40760 +- * At least one time_in is OK here; nodes get pruned
40761 +- * long before both time_ins can get stale
40762 +- */
40763 +- timestamp = time_a;
40764 +- if (node->time_in_stale[HSR_PT_SLAVE_A] ||
40765 +- (!node->time_in_stale[HSR_PT_SLAVE_B] &&
40766 +- time_after(time_b, time_a)))
40767 +- timestamp = time_b;
40768 +-
40769 +- /* Warn of ring error only as long as we get
40770 +- * frames at all
40771 +- */
40772 +- if (time_is_after_jiffies(timestamp +
40773 +- msecs_to_jiffies(1.5 * MAX_SLAVE_DIFF))) {
40774 +- rcu_read_lock();
40775 +- port = get_late_port(hsr, node);
40776 +- if (port)
40777 +- hsr_nl_ringerror(hsr,
40778 +- node->macaddress_A,
40779 +- port);
40780 +- rcu_read_unlock();
40781 +- }
40782 +-
40783 +- /* Prune old entries */
40784 +- if (time_is_before_jiffies(timestamp +
40785 +- msecs_to_jiffies(HSR_NODE_FORGET_TIME))) {
40786 +- hsr_nl_nodedown(hsr, node->macaddress_A);
40787 +- hlist_del_rcu(&node->mac_list);
40788 +- /* Note that we need to free this
40789 +- * entry later:
40790 +- */
40791 ++ /* Prune old entries */
40792 ++ if (time_is_before_jiffies(timestamp +
40793 ++ msecs_to_jiffies(HSR_NODE_FORGET_TIME))) {
40794 ++ hsr_nl_nodedown(hsr, node->macaddress_A);
40795 ++ if (!node->removed) {
40796 ++ list_del_rcu(&node->mac_list);
40797 ++ node->removed = true;
40798 ++ /* Note that we need to free this entry later: */
40799 + kfree_rcu(node, rcu_head);
40800 + }
40801 + }
40802 +@@ -600,20 +568,17 @@ void *hsr_get_next_node(struct hsr_priv *hsr, void *_pos,
40803 + unsigned char addr[ETH_ALEN])
40804 + {
40805 + struct hsr_node *node;
40806 +- u32 hash;
40807 +-
40808 +- hash = hsr_mac_hash(hsr, addr);
40809 +
40810 + if (!_pos) {
40811 +- node = hsr_node_get_first(&hsr->node_db[hash],
40812 +- &hsr->list_lock);
40813 ++ node = list_first_or_null_rcu(&hsr->node_db,
40814 ++ struct hsr_node, mac_list);
40815 + if (node)
40816 + ether_addr_copy(addr, node->macaddress_A);
40817 + return node;
40818 + }
40819 +
40820 + node = _pos;
40821 +- hlist_for_each_entry_continue_rcu(node, mac_list) {
40822 ++ list_for_each_entry_continue_rcu(node, &hsr->node_db, mac_list) {
40823 + ether_addr_copy(addr, node->macaddress_A);
40824 + return node;
40825 + }
40826 +@@ -633,11 +598,8 @@ int hsr_get_node_data(struct hsr_priv *hsr,
40827 + struct hsr_node *node;
40828 + struct hsr_port *port;
40829 + unsigned long tdiff;
40830 +- u32 hash;
40831 +-
40832 +- hash = hsr_mac_hash(hsr, addr);
40833 +
40834 +- node = find_node_by_addr_A(&hsr->node_db[hash], addr);
40835 ++ node = find_node_by_addr_A(&hsr->node_db, addr);
40836 + if (!node)
40837 + return -ENOENT;
40838 +
40839 +diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
40840 +index f3762e9e42b54..b23556251d621 100644
40841 +--- a/net/hsr/hsr_framereg.h
40842 ++++ b/net/hsr/hsr_framereg.h
40843 +@@ -28,17 +28,9 @@ struct hsr_frame_info {
40844 + bool is_from_san;
40845 + };
40846 +
40847 +-#ifdef CONFIG_LOCKDEP
40848 +-int lockdep_hsr_is_held(spinlock_t *lock);
40849 +-#else
40850 +-#define lockdep_hsr_is_held(lock) 1
40851 +-#endif
40852 +-
40853 +-u32 hsr_mac_hash(struct hsr_priv *hsr, const unsigned char *addr);
40854 +-struct hsr_node *hsr_node_get_first(struct hlist_head *head, spinlock_t *lock);
40855 + void hsr_del_self_node(struct hsr_priv *hsr);
40856 +-void hsr_del_nodes(struct hlist_head *node_db);
40857 +-struct hsr_node *hsr_get_node(struct hsr_port *port, struct hlist_head *node_db,
40858 ++void hsr_del_nodes(struct list_head *node_db);
40859 ++struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db,
40860 + struct sk_buff *skb, bool is_sup,
40861 + enum hsr_port_type rx_port);
40862 + void hsr_handle_sup_frame(struct hsr_frame_info *frame);
40863 +@@ -76,7 +68,9 @@ void prp_handle_san_frame(bool san, enum hsr_port_type port,
40864 + void prp_update_san_info(struct hsr_node *node, bool is_sup);
40865 +
40866 + struct hsr_node {
40867 +- struct hlist_node mac_list;
40868 ++ struct list_head mac_list;
40869 ++ /* Protect R/W access to seq_out */
40870 ++ spinlock_t seq_out_lock;
40871 + unsigned char macaddress_A[ETH_ALEN];
40872 + unsigned char macaddress_B[ETH_ALEN];
40873 + /* Local slave through which AddrB frames are received from this node */
40874 +@@ -88,6 +82,7 @@ struct hsr_node {
40875 + bool san_a;
40876 + bool san_b;
40877 + u16 seq_out[HSR_PT_PORTS];
40878 ++ bool removed;
40879 + struct rcu_head rcu_head;
40880 + };
40881 +
40882 +diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
40883 +index b158ba409f9a4..16ae9fb09ccd2 100644
40884 +--- a/net/hsr/hsr_main.h
40885 ++++ b/net/hsr/hsr_main.h
40886 +@@ -47,9 +47,6 @@
40887 +
40888 + #define HSR_V1_SUP_LSDUSIZE 52
40889 +
40890 +-#define HSR_HSIZE_SHIFT 8
40891 +-#define HSR_HSIZE BIT(HSR_HSIZE_SHIFT)
40892 +-
40893 + /* The helper functions below assumes that 'path' occupies the 4 most
40894 + * significant bits of the 16-bit field shared by 'path' and 'LSDU_size' (or
40895 + * equivalently, the 4 most significant bits of HSR tag byte 14).
40896 +@@ -188,8 +185,8 @@ struct hsr_proto_ops {
40897 + struct hsr_priv {
40898 + struct rcu_head rcu_head;
40899 + struct list_head ports;
40900 +- struct hlist_head node_db[HSR_HSIZE]; /* Known HSR nodes */
40901 +- struct hlist_head self_node_db; /* MACs of slaves */
40902 ++ struct list_head node_db; /* Known HSR nodes */
40903 ++ struct list_head self_node_db; /* MACs of slaves */
40904 + struct timer_list announce_timer; /* Supervision frame dispatch */
40905 + struct timer_list prune_timer;
40906 + int announce_count;
40907 +@@ -199,8 +196,6 @@ struct hsr_priv {
40908 + spinlock_t seqnr_lock; /* locking for sequence_nr */
40909 + spinlock_t list_lock; /* locking for node list */
40910 + struct hsr_proto_ops *proto_ops;
40911 +- u32 hash_buckets;
40912 +- u32 hash_seed;
40913 + #define PRP_LAN_ID 0x5 /* 0x1010 for A and 0x1011 for B. Bit 0 is set
40914 + * based on SLAVE_A or SLAVE_B
40915 + */
40916 +diff --git a/net/hsr/hsr_netlink.c b/net/hsr/hsr_netlink.c
40917 +index 7174a90929002..78fe40eb9f012 100644
40918 +--- a/net/hsr/hsr_netlink.c
40919 ++++ b/net/hsr/hsr_netlink.c
40920 +@@ -105,7 +105,6 @@ static int hsr_newlink(struct net *src_net, struct net_device *dev,
40921 + static void hsr_dellink(struct net_device *dev, struct list_head *head)
40922 + {
40923 + struct hsr_priv *hsr = netdev_priv(dev);
40924 +- int i;
40925 +
40926 + del_timer_sync(&hsr->prune_timer);
40927 + del_timer_sync(&hsr->announce_timer);
40928 +@@ -114,8 +113,7 @@ static void hsr_dellink(struct net_device *dev, struct list_head *head)
40929 + hsr_del_ports(hsr);
40930 +
40931 + hsr_del_self_node(hsr);
40932 +- for (i = 0; i < hsr->hash_buckets; i++)
40933 +- hsr_del_nodes(&hsr->node_db[i]);
40934 ++ hsr_del_nodes(&hsr->node_db);
40935 +
40936 + unregister_netdevice_queue(dev, head);
40937 + }
40938 +diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
40939 +index 0da6794113308..92d4237862518 100644
40940 +--- a/net/ipv4/af_inet.c
40941 ++++ b/net/ipv4/af_inet.c
40942 +@@ -522,9 +522,9 @@ int __inet_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
40943 + /* Make sure we are allowed to bind here. */
40944 + if (snum || !(inet->bind_address_no_port ||
40945 + (flags & BIND_FORCE_ADDRESS_NO_PORT))) {
40946 +- if (sk->sk_prot->get_port(sk, snum)) {
40947 ++ err = sk->sk_prot->get_port(sk, snum);
40948 ++ if (err) {
40949 + inet->inet_saddr = inet->inet_rcv_saddr = 0;
40950 +- err = -EADDRINUSE;
40951 + goto out_release_sock;
40952 + }
40953 + if (!(flags & BIND_FROM_BPF)) {
40954 +diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
40955 +index 4e84ed21d16fe..4a34bc7cb15ed 100644
40956 +--- a/net/ipv4/inet_connection_sock.c
40957 ++++ b/net/ipv4/inet_connection_sock.c
40958 +@@ -471,11 +471,11 @@ int inet_csk_get_port(struct sock *sk, unsigned short snum)
40959 + bool reuse = sk->sk_reuse && sk->sk_state != TCP_LISTEN;
40960 + bool found_port = false, check_bind_conflict = true;
40961 + bool bhash_created = false, bhash2_created = false;
40962 ++ int ret = -EADDRINUSE, port = snum, l3mdev;
40963 + struct inet_bind_hashbucket *head, *head2;
40964 + struct inet_bind2_bucket *tb2 = NULL;
40965 + struct inet_bind_bucket *tb = NULL;
40966 + bool head2_lock_acquired = false;
40967 +- int ret = 1, port = snum, l3mdev;
40968 + struct net *net = sock_net(sk);
40969 +
40970 + l3mdev = inet_sk_bound_l3mdev(sk);
40971 +@@ -1186,7 +1186,7 @@ int inet_csk_listen_start(struct sock *sk)
40972 + {
40973 + struct inet_connection_sock *icsk = inet_csk(sk);
40974 + struct inet_sock *inet = inet_sk(sk);
40975 +- int err = -EADDRINUSE;
40976 ++ int err;
40977 +
40978 + reqsk_queue_alloc(&icsk->icsk_accept_queue);
40979 +
40980 +@@ -1202,7 +1202,8 @@ int inet_csk_listen_start(struct sock *sk)
40981 + * after validation is complete.
40982 + */
40983 + inet_sk_state_store(sk, TCP_LISTEN);
40984 +- if (!sk->sk_prot->get_port(sk, inet->inet_num)) {
40985 ++ err = sk->sk_prot->get_port(sk, inet->inet_num);
40986 ++ if (!err) {
40987 + inet->inet_sport = htons(inet->inet_num);
40988 +
40989 + sk_dst_reset(sk);
40990 +diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
40991 +index 04b4ec07bb06c..409ec2a1f95b0 100644
40992 +--- a/net/ipv4/ping.c
40993 ++++ b/net/ipv4/ping.c
40994 +@@ -143,7 +143,7 @@ next_port:
40995 +
40996 + fail:
40997 + spin_unlock(&ping_table.lock);
40998 +- return 1;
40999 ++ return -EADDRINUSE;
41000 + }
41001 + EXPORT_SYMBOL_GPL(ping_get_port);
41002 +
41003 +diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
41004 +index cf9c3e8f7ccbf..94aad3870c5fc 100644
41005 +--- a/net/ipv4/tcp_bpf.c
41006 ++++ b/net/ipv4/tcp_bpf.c
41007 +@@ -45,8 +45,11 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
41008 + tmp->sg.end = i;
41009 + if (apply) {
41010 + apply_bytes -= size;
41011 +- if (!apply_bytes)
41012 ++ if (!apply_bytes) {
41013 ++ if (sge->length)
41014 ++ sk_msg_iter_var_prev(i);
41015 + break;
41016 ++ }
41017 + }
41018 + } while (i != msg->sg.end);
41019 +
41020 +@@ -131,10 +134,9 @@ static int tcp_bpf_push_locked(struct sock *sk, struct sk_msg *msg,
41021 + return ret;
41022 + }
41023 +
41024 +-int tcp_bpf_sendmsg_redir(struct sock *sk, struct sk_msg *msg,
41025 +- u32 bytes, int flags)
41026 ++int tcp_bpf_sendmsg_redir(struct sock *sk, bool ingress,
41027 ++ struct sk_msg *msg, u32 bytes, int flags)
41028 + {
41029 +- bool ingress = sk_msg_to_ingress(msg);
41030 + struct sk_psock *psock = sk_psock_get(sk);
41031 + int ret;
41032 +
41033 +@@ -276,10 +278,10 @@ msg_bytes_ready:
41034 + static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
41035 + struct sk_msg *msg, int *copied, int flags)
41036 + {
41037 +- bool cork = false, enospc = sk_msg_full(msg);
41038 ++ bool cork = false, enospc = sk_msg_full(msg), redir_ingress;
41039 + struct sock *sk_redir;
41040 + u32 tosend, origsize, sent, delta = 0;
41041 +- u32 eval = __SK_NONE;
41042 ++ u32 eval;
41043 + int ret;
41044 +
41045 + more_data:
41046 +@@ -310,6 +312,7 @@ more_data:
41047 + tosend = msg->sg.size;
41048 + if (psock->apply_bytes && psock->apply_bytes < tosend)
41049 + tosend = psock->apply_bytes;
41050 ++ eval = __SK_NONE;
41051 +
41052 + switch (psock->eval) {
41053 + case __SK_PASS:
41054 +@@ -321,6 +324,7 @@ more_data:
41055 + sk_msg_apply_bytes(psock, tosend);
41056 + break;
41057 + case __SK_REDIRECT:
41058 ++ redir_ingress = psock->redir_ingress;
41059 + sk_redir = psock->sk_redir;
41060 + sk_msg_apply_bytes(psock, tosend);
41061 + if (!psock->apply_bytes) {
41062 +@@ -337,7 +341,8 @@ more_data:
41063 + release_sock(sk);
41064 +
41065 + origsize = msg->sg.size;
41066 +- ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
41067 ++ ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
41068 ++ msg, tosend, flags);
41069 + sent = origsize - msg->sg.size;
41070 +
41071 + if (eval == __SK_REDIRECT)
41072 +diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
41073 +index 6a320a614e547..2eaf47e23b221 100644
41074 +--- a/net/ipv4/udp.c
41075 ++++ b/net/ipv4/udp.c
41076 +@@ -232,16 +232,16 @@ static int udp_reuseport_add_sock(struct sock *sk, struct udp_hslot *hslot)
41077 + int udp_lib_get_port(struct sock *sk, unsigned short snum,
41078 + unsigned int hash2_nulladdr)
41079 + {
41080 +- struct udp_hslot *hslot, *hslot2;
41081 + struct udp_table *udptable = sk->sk_prot->h.udp_table;
41082 +- int error = 1;
41083 ++ struct udp_hslot *hslot, *hslot2;
41084 + struct net *net = sock_net(sk);
41085 ++ int error = -EADDRINUSE;
41086 +
41087 + if (!snum) {
41088 ++ DECLARE_BITMAP(bitmap, PORTS_PER_CHAIN);
41089 ++ unsigned short first, last;
41090 + int low, high, remaining;
41091 + unsigned int rand;
41092 +- unsigned short first, last;
41093 +- DECLARE_BITMAP(bitmap, PORTS_PER_CHAIN);
41094 +
41095 + inet_get_local_port_range(net, &low, &high);
41096 + remaining = (high - low) + 1;
41097 +@@ -2518,10 +2518,13 @@ static struct sock *__udp4_lib_mcast_demux_lookup(struct net *net,
41098 + __be16 rmt_port, __be32 rmt_addr,
41099 + int dif, int sdif)
41100 + {
41101 +- struct sock *sk, *result;
41102 + unsigned short hnum = ntohs(loc_port);
41103 +- unsigned int slot = udp_hashfn(net, hnum, udp_table.mask);
41104 +- struct udp_hslot *hslot = &udp_table.hash[slot];
41105 ++ struct sock *sk, *result;
41106 ++ struct udp_hslot *hslot;
41107 ++ unsigned int slot;
41108 ++
41109 ++ slot = udp_hashfn(net, hnum, udp_table.mask);
41110 ++ hslot = &udp_table.hash[slot];
41111 +
41112 + /* Do not bother scanning a too big list */
41113 + if (hslot->count > 10)
41114 +@@ -2549,14 +2552,18 @@ static struct sock *__udp4_lib_demux_lookup(struct net *net,
41115 + __be16 rmt_port, __be32 rmt_addr,
41116 + int dif, int sdif)
41117 + {
41118 +- unsigned short hnum = ntohs(loc_port);
41119 +- unsigned int hash2 = ipv4_portaddr_hash(net, loc_addr, hnum);
41120 +- unsigned int slot2 = hash2 & udp_table.mask;
41121 +- struct udp_hslot *hslot2 = &udp_table.hash2[slot2];
41122 + INET_ADDR_COOKIE(acookie, rmt_addr, loc_addr);
41123 +- const __portpair ports = INET_COMBINED_PORTS(rmt_port, hnum);
41124 ++ unsigned short hnum = ntohs(loc_port);
41125 ++ unsigned int hash2, slot2;
41126 ++ struct udp_hslot *hslot2;
41127 ++ __portpair ports;
41128 + struct sock *sk;
41129 +
41130 ++ hash2 = ipv4_portaddr_hash(net, loc_addr, hnum);
41131 ++ slot2 = hash2 & udp_table.mask;
41132 ++ hslot2 = &udp_table.hash2[slot2];
41133 ++ ports = INET_COMBINED_PORTS(rmt_port, hnum);
41134 ++
41135 + udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
41136 + if (inet_match(net, sk, acookie, ports, dif, sdif))
41137 + return sk;
41138 +@@ -2957,10 +2964,10 @@ EXPORT_SYMBOL(udp_prot);
41139 +
41140 + static struct sock *udp_get_first(struct seq_file *seq, int start)
41141 + {
41142 +- struct sock *sk;
41143 +- struct udp_seq_afinfo *afinfo;
41144 + struct udp_iter_state *state = seq->private;
41145 + struct net *net = seq_file_net(seq);
41146 ++ struct udp_seq_afinfo *afinfo;
41147 ++ struct sock *sk;
41148 +
41149 + if (state->bpf_seq_afinfo)
41150 + afinfo = state->bpf_seq_afinfo;
41151 +@@ -2991,9 +2998,9 @@ found:
41152 +
41153 + static struct sock *udp_get_next(struct seq_file *seq, struct sock *sk)
41154 + {
41155 +- struct udp_seq_afinfo *afinfo;
41156 + struct udp_iter_state *state = seq->private;
41157 + struct net *net = seq_file_net(seq);
41158 ++ struct udp_seq_afinfo *afinfo;
41159 +
41160 + if (state->bpf_seq_afinfo)
41161 + afinfo = state->bpf_seq_afinfo;
41162 +@@ -3049,8 +3056,8 @@ EXPORT_SYMBOL(udp_seq_next);
41163 +
41164 + void udp_seq_stop(struct seq_file *seq, void *v)
41165 + {
41166 +- struct udp_seq_afinfo *afinfo;
41167 + struct udp_iter_state *state = seq->private;
41168 ++ struct udp_seq_afinfo *afinfo;
41169 +
41170 + if (state->bpf_seq_afinfo)
41171 + afinfo = state->bpf_seq_afinfo;
41172 +diff --git a/net/ipv4/udp_tunnel_core.c b/net/ipv4/udp_tunnel_core.c
41173 +index 8242c8947340e..5f8104cf082d0 100644
41174 +--- a/net/ipv4/udp_tunnel_core.c
41175 ++++ b/net/ipv4/udp_tunnel_core.c
41176 +@@ -176,6 +176,7 @@ EXPORT_SYMBOL_GPL(udp_tunnel_xmit_skb);
41177 + void udp_tunnel_sock_release(struct socket *sock)
41178 + {
41179 + rcu_assign_sk_user_data(sock->sk, NULL);
41180 ++ synchronize_rcu();
41181 + kernel_sock_shutdown(sock, SHUT_RDWR);
41182 + sock_release(sock);
41183 + }
41184 +diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
41185 +index 0241910049825..7b0cd54da452b 100644
41186 +--- a/net/ipv6/af_inet6.c
41187 ++++ b/net/ipv6/af_inet6.c
41188 +@@ -409,10 +409,10 @@ static int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
41189 + /* Make sure we are allowed to bind here. */
41190 + if (snum || !(inet->bind_address_no_port ||
41191 + (flags & BIND_FORCE_ADDRESS_NO_PORT))) {
41192 +- if (sk->sk_prot->get_port(sk, snum)) {
41193 ++ err = sk->sk_prot->get_port(sk, snum);
41194 ++ if (err) {
41195 + sk->sk_ipv6only = saved_ipv6only;
41196 + inet_reset_saddr(sk);
41197 +- err = -EADDRINUSE;
41198 + goto out;
41199 + }
41200 + if (!(flags & BIND_FROM_BPF)) {
41201 +diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
41202 +index 5ecb56522f9d6..ba28aeb7cade0 100644
41203 +--- a/net/ipv6/datagram.c
41204 ++++ b/net/ipv6/datagram.c
41205 +@@ -42,24 +42,29 @@ static void ip6_datagram_flow_key_init(struct flowi6 *fl6, struct sock *sk)
41206 + {
41207 + struct inet_sock *inet = inet_sk(sk);
41208 + struct ipv6_pinfo *np = inet6_sk(sk);
41209 ++ int oif = sk->sk_bound_dev_if;
41210 +
41211 + memset(fl6, 0, sizeof(*fl6));
41212 + fl6->flowi6_proto = sk->sk_protocol;
41213 + fl6->daddr = sk->sk_v6_daddr;
41214 + fl6->saddr = np->saddr;
41215 +- fl6->flowi6_oif = sk->sk_bound_dev_if;
41216 + fl6->flowi6_mark = sk->sk_mark;
41217 + fl6->fl6_dport = inet->inet_dport;
41218 + fl6->fl6_sport = inet->inet_sport;
41219 + fl6->flowlabel = np->flow_label;
41220 + fl6->flowi6_uid = sk->sk_uid;
41221 +
41222 +- if (!fl6->flowi6_oif)
41223 +- fl6->flowi6_oif = np->sticky_pktinfo.ipi6_ifindex;
41224 ++ if (!oif)
41225 ++ oif = np->sticky_pktinfo.ipi6_ifindex;
41226 +
41227 +- if (!fl6->flowi6_oif && ipv6_addr_is_multicast(&fl6->daddr))
41228 +- fl6->flowi6_oif = np->mcast_oif;
41229 ++ if (!oif) {
41230 ++ if (ipv6_addr_is_multicast(&fl6->daddr))
41231 ++ oif = np->mcast_oif;
41232 ++ else
41233 ++ oif = np->ucast_oif;
41234 ++ }
41235 +
41236 ++ fl6->flowi6_oif = oif;
41237 + security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6));
41238 + }
41239 +
41240 +diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
41241 +index 5703d3cbea9ba..70d81bba50939 100644
41242 +--- a/net/ipv6/sit.c
41243 ++++ b/net/ipv6/sit.c
41244 +@@ -694,7 +694,7 @@ static int ipip6_rcv(struct sk_buff *skb)
41245 + skb->dev = tunnel->dev;
41246 +
41247 + if (packet_is_spoofed(skb, iph, tunnel)) {
41248 +- tunnel->dev->stats.rx_errors++;
41249 ++ DEV_STATS_INC(tunnel->dev, rx_errors);
41250 + goto out;
41251 + }
41252 +
41253 +@@ -714,8 +714,8 @@ static int ipip6_rcv(struct sk_buff *skb)
41254 + net_info_ratelimited("non-ECT from %pI4 with TOS=%#x\n",
41255 + &iph->saddr, iph->tos);
41256 + if (err > 1) {
41257 +- ++tunnel->dev->stats.rx_frame_errors;
41258 +- ++tunnel->dev->stats.rx_errors;
41259 ++ DEV_STATS_INC(tunnel->dev, rx_frame_errors);
41260 ++ DEV_STATS_INC(tunnel->dev, rx_errors);
41261 + goto out;
41262 + }
41263 + }
41264 +@@ -942,7 +942,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
41265 + if (!rt) {
41266 + rt = ip_route_output_flow(tunnel->net, &fl4, NULL);
41267 + if (IS_ERR(rt)) {
41268 +- dev->stats.tx_carrier_errors++;
41269 ++ DEV_STATS_INC(dev, tx_carrier_errors);
41270 + goto tx_error_icmp;
41271 + }
41272 + dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst, fl4.saddr);
41273 +@@ -950,14 +950,14 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
41274 +
41275 + if (rt->rt_type != RTN_UNICAST && rt->rt_type != RTN_LOCAL) {
41276 + ip_rt_put(rt);
41277 +- dev->stats.tx_carrier_errors++;
41278 ++ DEV_STATS_INC(dev, tx_carrier_errors);
41279 + goto tx_error_icmp;
41280 + }
41281 + tdev = rt->dst.dev;
41282 +
41283 + if (tdev == dev) {
41284 + ip_rt_put(rt);
41285 +- dev->stats.collisions++;
41286 ++ DEV_STATS_INC(dev, collisions);
41287 + goto tx_error;
41288 + }
41289 +
41290 +@@ -970,7 +970,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
41291 + mtu = dst_mtu(&rt->dst) - t_hlen;
41292 +
41293 + if (mtu < IPV4_MIN_MTU) {
41294 +- dev->stats.collisions++;
41295 ++ DEV_STATS_INC(dev, collisions);
41296 + ip_rt_put(rt);
41297 + goto tx_error;
41298 + }
41299 +@@ -1009,7 +1009,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
41300 + struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
41301 + if (!new_skb) {
41302 + ip_rt_put(rt);
41303 +- dev->stats.tx_dropped++;
41304 ++ DEV_STATS_INC(dev, tx_dropped);
41305 + kfree_skb(skb);
41306 + return NETDEV_TX_OK;
41307 + }
41308 +@@ -1039,7 +1039,7 @@ tx_error_icmp:
41309 + dst_link_failure(skb);
41310 + tx_error:
41311 + kfree_skb(skb);
41312 +- dev->stats.tx_errors++;
41313 ++ DEV_STATS_INC(dev, tx_errors);
41314 + return NETDEV_TX_OK;
41315 + }
41316 +
41317 +@@ -1058,7 +1058,7 @@ static netdev_tx_t sit_tunnel_xmit__(struct sk_buff *skb,
41318 + return NETDEV_TX_OK;
41319 + tx_error:
41320 + kfree_skb(skb);
41321 +- dev->stats.tx_errors++;
41322 ++ DEV_STATS_INC(dev, tx_errors);
41323 + return NETDEV_TX_OK;
41324 + }
41325 +
41326 +@@ -1087,7 +1087,7 @@ static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
41327 + return NETDEV_TX_OK;
41328 +
41329 + tx_err:
41330 +- dev->stats.tx_errors++;
41331 ++ DEV_STATS_INC(dev, tx_errors);
41332 + kfree_skb(skb);
41333 + return NETDEV_TX_OK;
41334 +
41335 +diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
41336 +index bc65e5b7195b3..98a64e8d9bdaa 100644
41337 +--- a/net/ipv6/udp.c
41338 ++++ b/net/ipv6/udp.c
41339 +@@ -1063,12 +1063,16 @@ static struct sock *__udp6_lib_demux_lookup(struct net *net,
41340 + int dif, int sdif)
41341 + {
41342 + unsigned short hnum = ntohs(loc_port);
41343 +- unsigned int hash2 = ipv6_portaddr_hash(net, loc_addr, hnum);
41344 +- unsigned int slot2 = hash2 & udp_table.mask;
41345 +- struct udp_hslot *hslot2 = &udp_table.hash2[slot2];
41346 +- const __portpair ports = INET_COMBINED_PORTS(rmt_port, hnum);
41347 ++ unsigned int hash2, slot2;
41348 ++ struct udp_hslot *hslot2;
41349 ++ __portpair ports;
41350 + struct sock *sk;
41351 +
41352 ++ hash2 = ipv6_portaddr_hash(net, loc_addr, hnum);
41353 ++ slot2 = hash2 & udp_table.mask;
41354 ++ hslot2 = &udp_table.hash2[slot2];
41355 ++ ports = INET_COMBINED_PORTS(rmt_port, hnum);
41356 ++
41357 + udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
41358 + if (sk->sk_state == TCP_ESTABLISHED &&
41359 + inet6_match(net, sk, rmt_addr, loc_addr, ports, dif, sdif))
41360 +diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
41361 +index 687b4c878d4ad..8c8ef87997a8a 100644
41362 +--- a/net/mac80211/cfg.c
41363 ++++ b/net/mac80211/cfg.c
41364 +@@ -576,7 +576,7 @@ static struct ieee80211_key *
41365 + ieee80211_lookup_key(struct ieee80211_sub_if_data *sdata, int link_id,
41366 + u8 key_idx, bool pairwise, const u8 *mac_addr)
41367 + {
41368 +- struct ieee80211_local *local = sdata->local;
41369 ++ struct ieee80211_local *local __maybe_unused = sdata->local;
41370 + struct ieee80211_link_data *link = &sdata->deflink;
41371 + struct ieee80211_key *key;
41372 +
41373 +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
41374 +index a842f2e1c2309..de7b8a4d4bbbb 100644
41375 +--- a/net/mac80211/ieee80211_i.h
41376 ++++ b/net/mac80211/ieee80211_i.h
41377 +@@ -390,6 +390,7 @@ struct ieee80211_mgd_auth_data {
41378 + bool done, waiting;
41379 + bool peer_confirmed;
41380 + bool timeout_started;
41381 ++ int link_id;
41382 +
41383 + u8 ap_addr[ETH_ALEN] __aligned(2);
41384 +
41385 +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
41386 +index dd9ac1f7d2ea6..46f08ec5ed760 100644
41387 +--- a/net/mac80211/iface.c
41388 ++++ b/net/mac80211/iface.c
41389 +@@ -2258,6 +2258,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
41390 +
41391 + ret = cfg80211_register_netdevice(ndev);
41392 + if (ret) {
41393 ++ ieee80211_if_free(ndev);
41394 + free_netdev(ndev);
41395 + return ret;
41396 + }
41397 +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
41398 +index d8484cd870de5..0125b3e6175b7 100644
41399 +--- a/net/mac80211/mlme.c
41400 ++++ b/net/mac80211/mlme.c
41401 +@@ -5033,6 +5033,7 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
41402 + struct cfg80211_rx_assoc_resp resp = {
41403 + .uapsd_queues = -1,
41404 + };
41405 ++ u8 ap_mld_addr[ETH_ALEN] __aligned(2);
41406 + unsigned int link_id;
41407 +
41408 + sdata_assert_lock(sdata);
41409 +@@ -5199,6 +5200,11 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
41410 + resp.uapsd_queues |= ieee80211_ac_to_qos_mask[ac];
41411 + }
41412 +
41413 ++ if (sdata->vif.valid_links) {
41414 ++ ether_addr_copy(ap_mld_addr, sdata->vif.cfg.ap_addr);
41415 ++ resp.ap_mld_addr = ap_mld_addr;
41416 ++ }
41417 ++
41418 + ieee80211_destroy_assoc_data(sdata,
41419 + status_code == WLAN_STATUS_SUCCESS ?
41420 + ASSOC_SUCCESS :
41421 +@@ -5208,8 +5214,6 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
41422 + resp.len = len;
41423 + resp.req_ies = ifmgd->assoc_req_ies;
41424 + resp.req_ies_len = ifmgd->assoc_req_ies_len;
41425 +- if (sdata->vif.valid_links)
41426 +- resp.ap_mld_addr = sdata->vif.cfg.ap_addr;
41427 + cfg80211_rx_assoc_resp(sdata->dev, &resp);
41428 + notify_driver:
41429 + drv_mgd_complete_tx(sdata->local, sdata, &info);
41430 +@@ -6640,6 +6644,7 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
41431 + req->ap_mld_addr ?: req->bss->bssid,
41432 + ETH_ALEN);
41433 + auth_data->bss = req->bss;
41434 ++ auth_data->link_id = req->link_id;
41435 +
41436 + if (req->auth_data_len >= 4) {
41437 + if (req->auth_type == NL80211_AUTHTYPE_SAE) {
41438 +@@ -6658,7 +6663,8 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
41439 + * removal and re-addition of the STA entry in
41440 + * ieee80211_prep_connection().
41441 + */
41442 +- cont_auth = ifmgd->auth_data && req->bss == ifmgd->auth_data->bss;
41443 ++ cont_auth = ifmgd->auth_data && req->bss == ifmgd->auth_data->bss &&
41444 ++ ifmgd->auth_data->link_id == req->link_id;
41445 +
41446 + if (req->ie && req->ie_len) {
41447 + memcpy(&auth_data->data[auth_data->data_len],
41448 +@@ -6982,7 +6988,8 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
41449 +
41450 + /* keep sta info, bssid if matching */
41451 + match = ether_addr_equal(ifmgd->auth_data->ap_addr,
41452 +- assoc_data->ap_addr);
41453 ++ assoc_data->ap_addr) &&
41454 ++ ifmgd->auth_data->link_id == req->link_id;
41455 + ieee80211_destroy_auth_data(sdata, match);
41456 + }
41457 +
41458 +diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
41459 +index 874f2a4d831d0..cc10ee1ff8e93 100644
41460 +--- a/net/mac80211/tx.c
41461 ++++ b/net/mac80211/tx.c
41462 +@@ -2973,7 +2973,7 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
41463 +
41464 + if (pre_conf_link_id != link_id &&
41465 + link_id != IEEE80211_LINK_UNSPECIFIED) {
41466 +-#ifdef CPTCFG_MAC80211_VERBOSE_DEBUG
41467 ++#ifdef CONFIG_MAC80211_VERBOSE_DEBUG
41468 + net_info_ratelimited("%s: dropped frame to %pM with bad link ID request (%d vs. %d)\n",
41469 + sdata->name, hdr.addr1,
41470 + pre_conf_link_id, link_id);
41471 +diff --git a/net/mctp/device.c b/net/mctp/device.c
41472 +index 99a3bda8852f8..acb97b2574289 100644
41473 +--- a/net/mctp/device.c
41474 ++++ b/net/mctp/device.c
41475 +@@ -429,12 +429,6 @@ static void mctp_unregister(struct net_device *dev)
41476 + struct mctp_dev *mdev;
41477 +
41478 + mdev = mctp_dev_get_rtnl(dev);
41479 +- if (mdev && !mctp_known(dev)) {
41480 +- // Sanity check, should match what was set in mctp_register
41481 +- netdev_warn(dev, "%s: BUG mctp_ptr set for unknown type %d",
41482 +- __func__, dev->type);
41483 +- return;
41484 +- }
41485 + if (!mdev)
41486 + return;
41487 +
41488 +@@ -451,14 +445,8 @@ static int mctp_register(struct net_device *dev)
41489 + struct mctp_dev *mdev;
41490 +
41491 + /* Already registered? */
41492 +- mdev = rtnl_dereference(dev->mctp_ptr);
41493 +-
41494 +- if (mdev) {
41495 +- if (!mctp_known(dev))
41496 +- netdev_warn(dev, "%s: BUG mctp_ptr set for unknown type %d",
41497 +- __func__, dev->type);
41498 ++ if (rtnl_dereference(dev->mctp_ptr))
41499 + return 0;
41500 +- }
41501 +
41502 + /* only register specific types */
41503 + if (!mctp_known(dev))
41504 +diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
41505 +index 51ad557a525b5..b5ae419661b80 100644
41506 +--- a/net/netfilter/ipvs/ip_vs_core.c
41507 ++++ b/net/netfilter/ipvs/ip_vs_core.c
41508 +@@ -132,21 +132,21 @@ ip_vs_in_stats(struct ip_vs_conn *cp, struct sk_buff *skb)
41509 +
41510 + s = this_cpu_ptr(dest->stats.cpustats);
41511 + u64_stats_update_begin(&s->syncp);
41512 +- s->cnt.inpkts++;
41513 +- s->cnt.inbytes += skb->len;
41514 ++ u64_stats_inc(&s->cnt.inpkts);
41515 ++ u64_stats_add(&s->cnt.inbytes, skb->len);
41516 + u64_stats_update_end(&s->syncp);
41517 +
41518 + svc = rcu_dereference(dest->svc);
41519 + s = this_cpu_ptr(svc->stats.cpustats);
41520 + u64_stats_update_begin(&s->syncp);
41521 +- s->cnt.inpkts++;
41522 +- s->cnt.inbytes += skb->len;
41523 ++ u64_stats_inc(&s->cnt.inpkts);
41524 ++ u64_stats_add(&s->cnt.inbytes, skb->len);
41525 + u64_stats_update_end(&s->syncp);
41526 +
41527 + s = this_cpu_ptr(ipvs->tot_stats.cpustats);
41528 + u64_stats_update_begin(&s->syncp);
41529 +- s->cnt.inpkts++;
41530 +- s->cnt.inbytes += skb->len;
41531 ++ u64_stats_inc(&s->cnt.inpkts);
41532 ++ u64_stats_add(&s->cnt.inbytes, skb->len);
41533 + u64_stats_update_end(&s->syncp);
41534 +
41535 + local_bh_enable();
41536 +@@ -168,21 +168,21 @@ ip_vs_out_stats(struct ip_vs_conn *cp, struct sk_buff *skb)
41537 +
41538 + s = this_cpu_ptr(dest->stats.cpustats);
41539 + u64_stats_update_begin(&s->syncp);
41540 +- s->cnt.outpkts++;
41541 +- s->cnt.outbytes += skb->len;
41542 ++ u64_stats_inc(&s->cnt.outpkts);
41543 ++ u64_stats_add(&s->cnt.outbytes, skb->len);
41544 + u64_stats_update_end(&s->syncp);
41545 +
41546 + svc = rcu_dereference(dest->svc);
41547 + s = this_cpu_ptr(svc->stats.cpustats);
41548 + u64_stats_update_begin(&s->syncp);
41549 +- s->cnt.outpkts++;
41550 +- s->cnt.outbytes += skb->len;
41551 ++ u64_stats_inc(&s->cnt.outpkts);
41552 ++ u64_stats_add(&s->cnt.outbytes, skb->len);
41553 + u64_stats_update_end(&s->syncp);
41554 +
41555 + s = this_cpu_ptr(ipvs->tot_stats.cpustats);
41556 + u64_stats_update_begin(&s->syncp);
41557 +- s->cnt.outpkts++;
41558 +- s->cnt.outbytes += skb->len;
41559 ++ u64_stats_inc(&s->cnt.outpkts);
41560 ++ u64_stats_add(&s->cnt.outbytes, skb->len);
41561 + u64_stats_update_end(&s->syncp);
41562 +
41563 + local_bh_enable();
41564 +@@ -200,17 +200,17 @@ ip_vs_conn_stats(struct ip_vs_conn *cp, struct ip_vs_service *svc)
41565 +
41566 + s = this_cpu_ptr(cp->dest->stats.cpustats);
41567 + u64_stats_update_begin(&s->syncp);
41568 +- s->cnt.conns++;
41569 ++ u64_stats_inc(&s->cnt.conns);
41570 + u64_stats_update_end(&s->syncp);
41571 +
41572 + s = this_cpu_ptr(svc->stats.cpustats);
41573 + u64_stats_update_begin(&s->syncp);
41574 +- s->cnt.conns++;
41575 ++ u64_stats_inc(&s->cnt.conns);
41576 + u64_stats_update_end(&s->syncp);
41577 +
41578 + s = this_cpu_ptr(ipvs->tot_stats.cpustats);
41579 + u64_stats_update_begin(&s->syncp);
41580 +- s->cnt.conns++;
41581 ++ u64_stats_inc(&s->cnt.conns);
41582 + u64_stats_update_end(&s->syncp);
41583 +
41584 + local_bh_enable();
41585 +diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
41586 +index 988222fff9f02..03af6a2ffd567 100644
41587 +--- a/net/netfilter/ipvs/ip_vs_ctl.c
41588 ++++ b/net/netfilter/ipvs/ip_vs_ctl.c
41589 +@@ -2297,11 +2297,11 @@ static int ip_vs_stats_percpu_show(struct seq_file *seq, void *v)
41590 +
41591 + do {
41592 + start = u64_stats_fetch_begin_irq(&u->syncp);
41593 +- conns = u->cnt.conns;
41594 +- inpkts = u->cnt.inpkts;
41595 +- outpkts = u->cnt.outpkts;
41596 +- inbytes = u->cnt.inbytes;
41597 +- outbytes = u->cnt.outbytes;
41598 ++ conns = u64_stats_read(&u->cnt.conns);
41599 ++ inpkts = u64_stats_read(&u->cnt.inpkts);
41600 ++ outpkts = u64_stats_read(&u->cnt.outpkts);
41601 ++ inbytes = u64_stats_read(&u->cnt.inbytes);
41602 ++ outbytes = u64_stats_read(&u->cnt.outbytes);
41603 + } while (u64_stats_fetch_retry_irq(&u->syncp, start));
41604 +
41605 + seq_printf(seq, "%3X %8LX %8LX %8LX %16LX %16LX\n",
41606 +diff --git a/net/netfilter/ipvs/ip_vs_est.c b/net/netfilter/ipvs/ip_vs_est.c
41607 +index 9a1a7af6a186a..f53150d82a92d 100644
41608 +--- a/net/netfilter/ipvs/ip_vs_est.c
41609 ++++ b/net/netfilter/ipvs/ip_vs_est.c
41610 +@@ -67,11 +67,11 @@ static void ip_vs_read_cpu_stats(struct ip_vs_kstats *sum,
41611 + if (add) {
41612 + do {
41613 + start = u64_stats_fetch_begin(&s->syncp);
41614 +- conns = s->cnt.conns;
41615 +- inpkts = s->cnt.inpkts;
41616 +- outpkts = s->cnt.outpkts;
41617 +- inbytes = s->cnt.inbytes;
41618 +- outbytes = s->cnt.outbytes;
41619 ++ conns = u64_stats_read(&s->cnt.conns);
41620 ++ inpkts = u64_stats_read(&s->cnt.inpkts);
41621 ++ outpkts = u64_stats_read(&s->cnt.outpkts);
41622 ++ inbytes = u64_stats_read(&s->cnt.inbytes);
41623 ++ outbytes = u64_stats_read(&s->cnt.outbytes);
41624 + } while (u64_stats_fetch_retry(&s->syncp, start));
41625 + sum->conns += conns;
41626 + sum->inpkts += inpkts;
41627 +@@ -82,11 +82,11 @@ static void ip_vs_read_cpu_stats(struct ip_vs_kstats *sum,
41628 + add = true;
41629 + do {
41630 + start = u64_stats_fetch_begin(&s->syncp);
41631 +- sum->conns = s->cnt.conns;
41632 +- sum->inpkts = s->cnt.inpkts;
41633 +- sum->outpkts = s->cnt.outpkts;
41634 +- sum->inbytes = s->cnt.inbytes;
41635 +- sum->outbytes = s->cnt.outbytes;
41636 ++ sum->conns = u64_stats_read(&s->cnt.conns);
41637 ++ sum->inpkts = u64_stats_read(&s->cnt.inpkts);
41638 ++ sum->outpkts = u64_stats_read(&s->cnt.outpkts);
41639 ++ sum->inbytes = u64_stats_read(&s->cnt.inbytes);
41640 ++ sum->outbytes = u64_stats_read(&s->cnt.outbytes);
41641 + } while (u64_stats_fetch_retry(&s->syncp, start));
41642 + }
41643 + }
41644 +diff --git a/net/netfilter/nf_conntrack_proto_icmpv6.c b/net/netfilter/nf_conntrack_proto_icmpv6.c
41645 +index 61e3b05cf02c3..1020d67600a95 100644
41646 +--- a/net/netfilter/nf_conntrack_proto_icmpv6.c
41647 ++++ b/net/netfilter/nf_conntrack_proto_icmpv6.c
41648 +@@ -129,6 +129,56 @@ static void icmpv6_error_log(const struct sk_buff *skb,
41649 + nf_l4proto_log_invalid(skb, state, IPPROTO_ICMPV6, "%s", msg);
41650 + }
41651 +
41652 ++static noinline_for_stack int
41653 ++nf_conntrack_icmpv6_redirect(struct nf_conn *tmpl, struct sk_buff *skb,
41654 ++ unsigned int dataoff,
41655 ++ const struct nf_hook_state *state)
41656 ++{
41657 ++ u8 hl = ipv6_hdr(skb)->hop_limit;
41658 ++ union nf_inet_addr outer_daddr;
41659 ++ union {
41660 ++ struct nd_opt_hdr nd_opt;
41661 ++ struct rd_msg rd_msg;
41662 ++ } tmp;
41663 ++ const struct nd_opt_hdr *nd_opt;
41664 ++ const struct rd_msg *rd_msg;
41665 ++
41666 ++ rd_msg = skb_header_pointer(skb, dataoff, sizeof(*rd_msg), &tmp.rd_msg);
41667 ++ if (!rd_msg) {
41668 ++ icmpv6_error_log(skb, state, "short redirect");
41669 ++ return -NF_ACCEPT;
41670 ++ }
41671 ++
41672 ++ if (rd_msg->icmph.icmp6_code != 0)
41673 ++ return NF_ACCEPT;
41674 ++
41675 ++ if (hl != 255 || !(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) {
41676 ++ icmpv6_error_log(skb, state, "invalid saddr or hoplimit for redirect");
41677 ++ return -NF_ACCEPT;
41678 ++ }
41679 ++
41680 ++ dataoff += sizeof(*rd_msg);
41681 ++
41682 ++ /* warning: rd_msg no longer usable after this call */
41683 ++ nd_opt = skb_header_pointer(skb, dataoff, sizeof(*nd_opt), &tmp.nd_opt);
41684 ++ if (!nd_opt || nd_opt->nd_opt_len == 0) {
41685 ++ icmpv6_error_log(skb, state, "redirect without options");
41686 ++ return -NF_ACCEPT;
41687 ++ }
41688 ++
41689 ++ /* We could call ndisc_parse_options(), but it would need
41690 ++ * skb_linearize() and a bit more work.
41691 ++ */
41692 ++ if (nd_opt->nd_opt_type != ND_OPT_REDIRECT_HDR)
41693 ++ return NF_ACCEPT;
41694 ++
41695 ++ memcpy(&outer_daddr.ip6, &ipv6_hdr(skb)->daddr,
41696 ++ sizeof(outer_daddr.ip6));
41697 ++ dataoff += 8;
41698 ++ return nf_conntrack_inet_error(tmpl, skb, dataoff, state,
41699 ++ IPPROTO_ICMPV6, &outer_daddr);
41700 ++}
41701 ++
41702 + int nf_conntrack_icmpv6_error(struct nf_conn *tmpl,
41703 + struct sk_buff *skb,
41704 + unsigned int dataoff,
41705 +@@ -159,6 +209,9 @@ int nf_conntrack_icmpv6_error(struct nf_conn *tmpl,
41706 + return NF_ACCEPT;
41707 + }
41708 +
41709 ++ if (icmp6h->icmp6_type == NDISC_REDIRECT)
41710 ++ return nf_conntrack_icmpv6_redirect(tmpl, skb, dataoff, state);
41711 ++
41712 + /* is not error message ? */
41713 + if (icmp6h->icmp6_type >= 128)
41714 + return NF_ACCEPT;
41715 +diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
41716 +index 0fdcdb2c9ae43..4d9b99abe37d6 100644
41717 +--- a/net/netfilter/nf_flow_table_offload.c
41718 ++++ b/net/netfilter/nf_flow_table_offload.c
41719 +@@ -383,12 +383,12 @@ static void flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule,
41720 + const __be32 *addr, const __be32 *mask)
41721 + {
41722 + struct flow_action_entry *entry;
41723 +- int i, j;
41724 ++ int i;
41725 +
41726 +- for (i = 0, j = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32), j++) {
41727 ++ for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i++) {
41728 + entry = flow_action_entry_next(flow_rule);
41729 + flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6,
41730 +- offset + i, &addr[j], mask);
41731 ++ offset + i * sizeof(u32), &addr[i], mask);
41732 + }
41733 + }
41734 +
41735 +diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
41736 +index 8b84869eb2ac7..fa0f1952d7637 100644
41737 +--- a/net/openvswitch/datapath.c
41738 ++++ b/net/openvswitch/datapath.c
41739 +@@ -948,6 +948,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
41740 + struct sw_flow_mask mask;
41741 + struct sk_buff *reply;
41742 + struct datapath *dp;
41743 ++ struct sw_flow_key *key;
41744 + struct sw_flow_actions *acts;
41745 + struct sw_flow_match match;
41746 + u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
41747 +@@ -975,24 +976,26 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
41748 + }
41749 +
41750 + /* Extract key. */
41751 +- ovs_match_init(&match, &new_flow->key, false, &mask);
41752 ++ key = kzalloc(sizeof(*key), GFP_KERNEL);
41753 ++ if (!key) {
41754 ++ error = -ENOMEM;
41755 ++ goto err_kfree_key;
41756 ++ }
41757 ++
41758 ++ ovs_match_init(&match, key, false, &mask);
41759 + error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
41760 + a[OVS_FLOW_ATTR_MASK], log);
41761 + if (error)
41762 + goto err_kfree_flow;
41763 +
41764 ++ ovs_flow_mask_key(&new_flow->key, key, true, &mask);
41765 ++
41766 + /* Extract flow identifier. */
41767 + error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
41768 +- &new_flow->key, log);
41769 ++ key, log);
41770 + if (error)
41771 + goto err_kfree_flow;
41772 +
41773 +- /* unmasked key is needed to match when ufid is not used. */
41774 +- if (ovs_identifier_is_key(&new_flow->id))
41775 +- match.key = new_flow->id.unmasked_key;
41776 +-
41777 +- ovs_flow_mask_key(&new_flow->key, &new_flow->key, true, &mask);
41778 +-
41779 + /* Validate actions. */
41780 + error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS],
41781 + &new_flow->key, &acts, log);
41782 +@@ -1019,7 +1022,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
41783 + if (ovs_identifier_is_ufid(&new_flow->id))
41784 + flow = ovs_flow_tbl_lookup_ufid(&dp->table, &new_flow->id);
41785 + if (!flow)
41786 +- flow = ovs_flow_tbl_lookup(&dp->table, &new_flow->key);
41787 ++ flow = ovs_flow_tbl_lookup(&dp->table, key);
41788 + if (likely(!flow)) {
41789 + rcu_assign_pointer(new_flow->sf_acts, acts);
41790 +
41791 +@@ -1089,6 +1092,8 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
41792 +
41793 + if (reply)
41794 + ovs_notify(&dp_flow_genl_family, reply, info);
41795 ++
41796 ++ kfree(key);
41797 + return 0;
41798 +
41799 + err_unlock_ovs:
41800 +@@ -1098,6 +1103,8 @@ err_kfree_acts:
41801 + ovs_nla_free_flow_actions(acts);
41802 + err_kfree_flow:
41803 + ovs_flow_free(new_flow, false);
41804 ++err_kfree_key:
41805 ++ kfree(key);
41806 + error:
41807 + return error;
41808 + }
41809 +diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
41810 +index 4a07ab094a84e..ead5418c126e3 100644
41811 +--- a/net/openvswitch/flow_netlink.c
41812 ++++ b/net/openvswitch/flow_netlink.c
41813 +@@ -2309,7 +2309,7 @@ static struct sw_flow_actions *nla_alloc_flow_actions(int size)
41814 +
41815 + WARN_ON_ONCE(size > MAX_ACTIONS_BUFSIZE);
41816 +
41817 +- sfa = kmalloc(sizeof(*sfa) + size, GFP_KERNEL);
41818 ++ sfa = kmalloc(kmalloc_size_roundup(sizeof(*sfa) + size), GFP_KERNEL);
41819 + if (!sfa)
41820 + return ERR_PTR(-ENOMEM);
41821 +
41822 +diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
41823 +index 9683617db7049..08c117bc083ec 100644
41824 +--- a/net/rxrpc/output.c
41825 ++++ b/net/rxrpc/output.c
41826 +@@ -93,7 +93,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
41827 + *_hard_ack = hard_ack;
41828 + *_top = top;
41829 +
41830 +- pkt->ack.bufferSpace = htons(8);
41831 ++ pkt->ack.bufferSpace = htons(0);
41832 + pkt->ack.maxSkew = htons(0);
41833 + pkt->ack.firstPacket = htonl(hard_ack + 1);
41834 + pkt->ack.previousPacket = htonl(call->ackr_highest_seq);
41835 +diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
41836 +index 3c3a626459deb..d4e4e94f4f987 100644
41837 +--- a/net/rxrpc/sendmsg.c
41838 ++++ b/net/rxrpc/sendmsg.c
41839 +@@ -716,7 +716,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
41840 + if (call->tx_total_len != -1 ||
41841 + call->tx_pending ||
41842 + call->tx_top != 0)
41843 +- goto error_put;
41844 ++ goto out_put_unlock;
41845 + call->tx_total_len = p.call.tx_total_len;
41846 + }
41847 + }
41848 +diff --git a/net/sched/ematch.c b/net/sched/ematch.c
41849 +index 4ce6813618515..5c1235e6076ae 100644
41850 +--- a/net/sched/ematch.c
41851 ++++ b/net/sched/ematch.c
41852 +@@ -255,6 +255,8 @@ static int tcf_em_validate(struct tcf_proto *tp,
41853 + * the value carried.
41854 + */
41855 + if (em_hdr->flags & TCF_EM_SIMPLE) {
41856 ++ if (em->ops->datalen > 0)
41857 ++ goto errout;
41858 + if (data_len < sizeof(u32))
41859 + goto errout;
41860 + em->data = *(u32 *) data;
41861 +diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
41862 +index b46a416787ec3..43ebf090029d7 100644
41863 +--- a/net/sctp/sysctl.c
41864 ++++ b/net/sctp/sysctl.c
41865 +@@ -84,17 +84,18 @@ static struct ctl_table sctp_table[] = {
41866 + { /* sentinel */ }
41867 + };
41868 +
41869 ++/* The following index defines are used in sctp_sysctl_net_register().
41870 ++ * If you add new items to the sctp_net_table, please ensure that
41871 ++ * the index values of these defines hold the same meaning indicated by
41872 ++ * their macro names when they appear in sctp_net_table.
41873 ++ */
41874 ++#define SCTP_RTO_MIN_IDX 0
41875 ++#define SCTP_RTO_MAX_IDX 1
41876 ++#define SCTP_PF_RETRANS_IDX 2
41877 ++#define SCTP_PS_RETRANS_IDX 3
41878 ++
41879 + static struct ctl_table sctp_net_table[] = {
41880 +- {
41881 +- .procname = "rto_initial",
41882 +- .data = &init_net.sctp.rto_initial,
41883 +- .maxlen = sizeof(unsigned int),
41884 +- .mode = 0644,
41885 +- .proc_handler = proc_dointvec_minmax,
41886 +- .extra1 = SYSCTL_ONE,
41887 +- .extra2 = &timer_max
41888 +- },
41889 +- {
41890 ++ [SCTP_RTO_MIN_IDX] = {
41891 + .procname = "rto_min",
41892 + .data = &init_net.sctp.rto_min,
41893 + .maxlen = sizeof(unsigned int),
41894 +@@ -103,7 +104,7 @@ static struct ctl_table sctp_net_table[] = {
41895 + .extra1 = SYSCTL_ONE,
41896 + .extra2 = &init_net.sctp.rto_max
41897 + },
41898 +- {
41899 ++ [SCTP_RTO_MAX_IDX] = {
41900 + .procname = "rto_max",
41901 + .data = &init_net.sctp.rto_max,
41902 + .maxlen = sizeof(unsigned int),
41903 +@@ -112,6 +113,33 @@ static struct ctl_table sctp_net_table[] = {
41904 + .extra1 = &init_net.sctp.rto_min,
41905 + .extra2 = &timer_max
41906 + },
41907 ++ [SCTP_PF_RETRANS_IDX] = {
41908 ++ .procname = "pf_retrans",
41909 ++ .data = &init_net.sctp.pf_retrans,
41910 ++ .maxlen = sizeof(int),
41911 ++ .mode = 0644,
41912 ++ .proc_handler = proc_dointvec_minmax,
41913 ++ .extra1 = SYSCTL_ZERO,
41914 ++ .extra2 = &init_net.sctp.ps_retrans,
41915 ++ },
41916 ++ [SCTP_PS_RETRANS_IDX] = {
41917 ++ .procname = "ps_retrans",
41918 ++ .data = &init_net.sctp.ps_retrans,
41919 ++ .maxlen = sizeof(int),
41920 ++ .mode = 0644,
41921 ++ .proc_handler = proc_dointvec_minmax,
41922 ++ .extra1 = &init_net.sctp.pf_retrans,
41923 ++ .extra2 = &ps_retrans_max,
41924 ++ },
41925 ++ {
41926 ++ .procname = "rto_initial",
41927 ++ .data = &init_net.sctp.rto_initial,
41928 ++ .maxlen = sizeof(unsigned int),
41929 ++ .mode = 0644,
41930 ++ .proc_handler = proc_dointvec_minmax,
41931 ++ .extra1 = SYSCTL_ONE,
41932 ++ .extra2 = &timer_max
41933 ++ },
41934 + {
41935 + .procname = "rto_alpha_exp_divisor",
41936 + .data = &init_net.sctp.rto_alpha,
41937 +@@ -207,24 +235,6 @@ static struct ctl_table sctp_net_table[] = {
41938 + .extra1 = SYSCTL_ONE,
41939 + .extra2 = SYSCTL_INT_MAX,
41940 + },
41941 +- {
41942 +- .procname = "pf_retrans",
41943 +- .data = &init_net.sctp.pf_retrans,
41944 +- .maxlen = sizeof(int),
41945 +- .mode = 0644,
41946 +- .proc_handler = proc_dointvec_minmax,
41947 +- .extra1 = SYSCTL_ZERO,
41948 +- .extra2 = &init_net.sctp.ps_retrans,
41949 +- },
41950 +- {
41951 +- .procname = "ps_retrans",
41952 +- .data = &init_net.sctp.ps_retrans,
41953 +- .maxlen = sizeof(int),
41954 +- .mode = 0644,
41955 +- .proc_handler = proc_dointvec_minmax,
41956 +- .extra1 = &init_net.sctp.pf_retrans,
41957 +- .extra2 = &ps_retrans_max,
41958 +- },
41959 + {
41960 + .procname = "sndbuf_policy",
41961 + .data = &init_net.sctp.sndbuf_policy,
41962 +@@ -586,6 +596,11 @@ int sctp_sysctl_net_register(struct net *net)
41963 + for (i = 0; table[i].data; i++)
41964 + table[i].data += (char *)(&net->sctp) - (char *)&init_net.sctp;
41965 +
41966 ++ table[SCTP_RTO_MIN_IDX].extra2 = &net->sctp.rto_max;
41967 ++ table[SCTP_RTO_MAX_IDX].extra1 = &net->sctp.rto_min;
41968 ++ table[SCTP_PF_RETRANS_IDX].extra2 = &net->sctp.ps_retrans;
41969 ++ table[SCTP_PS_RETRANS_IDX].extra1 = &net->sctp.pf_retrans;
41970 ++
41971 + net->sctp.sysctl_header = register_net_sysctl(net, "net/sctp", table);
41972 + if (net->sctp.sysctl_header == NULL) {
41973 + kfree(table);
41974 +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
41975 +index 993acf38af870..0b0b9f1eed469 100644
41976 +--- a/net/sunrpc/clnt.c
41977 ++++ b/net/sunrpc/clnt.c
41978 +@@ -1442,7 +1442,7 @@ static int rpc_sockname(struct net *net, struct sockaddr *sap, size_t salen,
41979 + break;
41980 + default:
41981 + err = -EAFNOSUPPORT;
41982 +- goto out;
41983 ++ goto out_release;
41984 + }
41985 + if (err < 0) {
41986 + dprintk("RPC: can't bind UDP socket (%d)\n", err);
41987 +diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
41988 +index 44b87e4274b42..b098fde373abf 100644
41989 +--- a/net/sunrpc/xprtrdma/verbs.c
41990 ++++ b/net/sunrpc/xprtrdma/verbs.c
41991 +@@ -831,7 +831,7 @@ struct rpcrdma_req *rpcrdma_req_create(struct rpcrdma_xprt *r_xprt,
41992 + return req;
41993 +
41994 + out3:
41995 +- kfree(req->rl_sendbuf);
41996 ++ rpcrdma_regbuf_free(req->rl_sendbuf);
41997 + out2:
41998 + kfree(req);
41999 + out1:
42000 +diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
42001 +index 264cf367e2656..9ed9786341259 100644
42002 +--- a/net/tls/tls_sw.c
42003 ++++ b/net/tls/tls_sw.c
42004 +@@ -792,7 +792,7 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
42005 + struct sk_psock *psock;
42006 + struct sock *sk_redir;
42007 + struct tls_rec *rec;
42008 +- bool enospc, policy;
42009 ++ bool enospc, policy, redir_ingress;
42010 + int err = 0, send;
42011 + u32 delta = 0;
42012 +
42013 +@@ -837,6 +837,7 @@ more_data:
42014 + }
42015 + break;
42016 + case __SK_REDIRECT:
42017 ++ redir_ingress = psock->redir_ingress;
42018 + sk_redir = psock->sk_redir;
42019 + memcpy(&msg_redir, msg, sizeof(*msg));
42020 + if (msg->apply_bytes < send)
42021 +@@ -846,7 +847,8 @@ more_data:
42022 + sk_msg_return_zero(sk, msg, send);
42023 + msg->sg.size -= send;
42024 + release_sock(sk);
42025 +- err = tcp_bpf_sendmsg_redir(sk_redir, &msg_redir, send, flags);
42026 ++ err = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
42027 ++ &msg_redir, send, flags);
42028 + lock_sock(sk);
42029 + if (err < 0) {
42030 + *copied -= sk_msg_free_nocharge(sk, &msg_redir);
42031 +diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
42032 +index b3545fc680979..f0c2293f1d3b8 100644
42033 +--- a/net/unix/af_unix.c
42034 ++++ b/net/unix/af_unix.c
42035 +@@ -1999,13 +1999,20 @@ restart_locked:
42036 + unix_state_lock(sk);
42037 +
42038 + err = 0;
42039 +- if (unix_peer(sk) == other) {
42040 ++ if (sk->sk_type == SOCK_SEQPACKET) {
42041 ++ /* We are here only when racing with unix_release_sock()
42042 ++ * is clearing @other. Never change state to TCP_CLOSE
42043 ++ * unlike SOCK_DGRAM wants.
42044 ++ */
42045 ++ unix_state_unlock(sk);
42046 ++ err = -EPIPE;
42047 ++ } else if (unix_peer(sk) == other) {
42048 + unix_peer(sk) = NULL;
42049 + unix_dgram_peer_wake_disconnect_wakeup(sk, other);
42050 +
42051 ++ sk->sk_state = TCP_CLOSE;
42052 + unix_state_unlock(sk);
42053 +
42054 +- sk->sk_state = TCP_CLOSE;
42055 + unix_dgram_disconnected(sk, other);
42056 + sock_put(other);
42057 + err = -ECONNREFUSED;
42058 +@@ -3738,6 +3745,7 @@ static int __init af_unix_init(void)
42059 + rc = proto_register(&unix_stream_proto, 1);
42060 + if (rc != 0) {
42061 + pr_crit("%s: Cannot create unix_sock SLAB cache!\n", __func__);
42062 ++ proto_unregister(&unix_dgram_proto);
42063 + goto out;
42064 + }
42065 +
42066 +diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
42067 +index 842c94286d316..36eb16a40745d 100644
42068 +--- a/net/vmw_vsock/vmci_transport.c
42069 ++++ b/net/vmw_vsock/vmci_transport.c
42070 +@@ -1711,7 +1711,11 @@ static int vmci_transport_dgram_enqueue(
42071 + if (!dg)
42072 + return -ENOMEM;
42073 +
42074 +- memcpy_from_msg(VMCI_DG_PAYLOAD(dg), msg, len);
42075 ++ err = memcpy_from_msg(VMCI_DG_PAYLOAD(dg), msg, len);
42076 ++ if (err) {
42077 ++ kfree(dg);
42078 ++ return err;
42079 ++ }
42080 +
42081 + dg->dst = vmci_make_handle(remote_addr->svm_cid,
42082 + remote_addr->svm_port);
42083 +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
42084 +index 597c522365146..d2321c6833985 100644
42085 +--- a/net/wireless/nl80211.c
42086 ++++ b/net/wireless/nl80211.c
42087 +@@ -3868,6 +3868,9 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
42088 + struct cfg80211_chan_def chandef = {};
42089 + int ret;
42090 +
42091 ++ if (!link)
42092 ++ goto nla_put_failure;
42093 ++
42094 + if (nla_put_u8(msg, NL80211_ATTR_MLO_LINK_ID, link_id))
42095 + goto nla_put_failure;
42096 + if (nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN,
42097 +diff --git a/net/wireless/reg.c b/net/wireless/reg.c
42098 +index c3d950d294329..4f3f31244e8ba 100644
42099 +--- a/net/wireless/reg.c
42100 ++++ b/net/wireless/reg.c
42101 +@@ -4311,8 +4311,10 @@ static int __init regulatory_init_db(void)
42102 + return -EINVAL;
42103 +
42104 + err = load_builtin_regdb_keys();
42105 +- if (err)
42106 ++ if (err) {
42107 ++ platform_device_unregister(reg_pdev);
42108 + return err;
42109 ++ }
42110 +
42111 + /* We always try to get an update for the static regdomain */
42112 + err = regulatory_hint_core(cfg80211_world_regdom->alpha2);
42113 +diff --git a/samples/bpf/xdp1_user.c b/samples/bpf/xdp1_user.c
42114 +index ac370e638fa3d..281dc964de8da 100644
42115 +--- a/samples/bpf/xdp1_user.c
42116 ++++ b/samples/bpf/xdp1_user.c
42117 +@@ -51,7 +51,7 @@ static void poll_stats(int map_fd, int interval)
42118 +
42119 + sleep(interval);
42120 +
42121 +- while (bpf_map_get_next_key(map_fd, &key, &key) != -1) {
42122 ++ while (bpf_map_get_next_key(map_fd, &key, &key) == 0) {
42123 + __u64 sum = 0;
42124 +
42125 + assert(bpf_map_lookup_elem(map_fd, &key, values) == 0);
42126 +diff --git a/samples/bpf/xdp2_kern.c b/samples/bpf/xdp2_kern.c
42127 +index 3332ba6bb95fb..67804ecf7ce37 100644
42128 +--- a/samples/bpf/xdp2_kern.c
42129 ++++ b/samples/bpf/xdp2_kern.c
42130 +@@ -112,6 +112,10 @@ int xdp_prog1(struct xdp_md *ctx)
42131 +
42132 + if (ipproto == IPPROTO_UDP) {
42133 + swap_src_dst_mac(data);
42134 ++
42135 ++ if (bpf_xdp_store_bytes(ctx, 0, pkt, sizeof(pkt)))
42136 ++ return rc;
42137 ++
42138 + rc = XDP_TX;
42139 + }
42140 +
42141 +diff --git a/samples/vfio-mdev/mdpy-fb.c b/samples/vfio-mdev/mdpy-fb.c
42142 +index 9ec93d90e8a5a..4eb7aa11cfbb2 100644
42143 +--- a/samples/vfio-mdev/mdpy-fb.c
42144 ++++ b/samples/vfio-mdev/mdpy-fb.c
42145 +@@ -109,7 +109,7 @@ static int mdpy_fb_probe(struct pci_dev *pdev,
42146 +
42147 + ret = pci_request_regions(pdev, "mdpy-fb");
42148 + if (ret < 0)
42149 +- return ret;
42150 ++ goto err_disable_dev;
42151 +
42152 + pci_read_config_dword(pdev, MDPY_FORMAT_OFFSET, &format);
42153 + pci_read_config_dword(pdev, MDPY_WIDTH_OFFSET, &width);
42154 +@@ -191,6 +191,9 @@ err_release_fb:
42155 + err_release_regions:
42156 + pci_release_regions(pdev);
42157 +
42158 ++err_disable_dev:
42159 ++ pci_disable_device(pdev);
42160 ++
42161 + return ret;
42162 + }
42163 +
42164 +@@ -199,7 +202,10 @@ static void mdpy_fb_remove(struct pci_dev *pdev)
42165 + struct fb_info *info = pci_get_drvdata(pdev);
42166 +
42167 + unregister_framebuffer(info);
42168 ++ iounmap(info->screen_base);
42169 + framebuffer_release(info);
42170 ++ pci_release_regions(pdev);
42171 ++ pci_disable_device(pdev);
42172 + }
42173 +
42174 + static struct pci_device_id mdpy_fb_pci_table[] = {
42175 +diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
42176 +index d766b7d0ffd13..53baa95cb644f 100644
42177 +--- a/security/Kconfig.hardening
42178 ++++ b/security/Kconfig.hardening
42179 +@@ -257,6 +257,9 @@ config INIT_ON_FREE_DEFAULT_ON
42180 +
42181 + config CC_HAS_ZERO_CALL_USED_REGS
42182 + def_bool $(cc-option,-fzero-call-used-regs=used-gpr)
42183 ++ # https://github.com/ClangBuiltLinux/linux/issues/1766
42184 ++ # https://github.com/llvm/llvm-project/issues/59242
42185 ++ depends on !CC_IS_CLANG || CLANG_VERSION > 150006
42186 +
42187 + config ZERO_CALL_USED_REGS
42188 + bool "Enable register zeroing on function exit"
42189 +diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
42190 +index d066ccc219e2d..7160e7aa58b94 100644
42191 +--- a/security/apparmor/apparmorfs.c
42192 ++++ b/security/apparmor/apparmorfs.c
42193 +@@ -868,8 +868,10 @@ static struct multi_transaction *multi_transaction_new(struct file *file,
42194 + if (!t)
42195 + return ERR_PTR(-ENOMEM);
42196 + kref_init(&t->count);
42197 +- if (copy_from_user(t->data, buf, size))
42198 ++ if (copy_from_user(t->data, buf, size)) {
42199 ++ put_multi_transaction(t);
42200 + return ERR_PTR(-EFAULT);
42201 ++ }
42202 +
42203 + return t;
42204 + }
42205 +diff --git a/security/apparmor/label.c b/security/apparmor/label.c
42206 +index 0f36ee9074381..a67c5897ee254 100644
42207 +--- a/security/apparmor/label.c
42208 ++++ b/security/apparmor/label.c
42209 +@@ -197,15 +197,18 @@ static bool vec_is_stale(struct aa_profile **vec, int n)
42210 + return false;
42211 + }
42212 +
42213 +-static long union_vec_flags(struct aa_profile **vec, int n, long mask)
42214 ++static long accum_vec_flags(struct aa_profile **vec, int n)
42215 + {
42216 +- long u = 0;
42217 ++ long u = FLAG_UNCONFINED;
42218 + int i;
42219 +
42220 + AA_BUG(!vec);
42221 +
42222 + for (i = 0; i < n; i++) {
42223 +- u |= vec[i]->label.flags & mask;
42224 ++ u |= vec[i]->label.flags & (FLAG_DEBUG1 | FLAG_DEBUG2 |
42225 ++ FLAG_STALE);
42226 ++ if (!(u & vec[i]->label.flags & FLAG_UNCONFINED))
42227 ++ u &= ~FLAG_UNCONFINED;
42228 + }
42229 +
42230 + return u;
42231 +@@ -1097,8 +1100,7 @@ static struct aa_label *label_merge_insert(struct aa_label *new,
42232 + else if (k == b->size)
42233 + return aa_get_label(b);
42234 + }
42235 +- new->flags |= union_vec_flags(new->vec, new->size, FLAG_UNCONFINED |
42236 +- FLAG_DEBUG1 | FLAG_DEBUG2);
42237 ++ new->flags |= accum_vec_flags(new->vec, new->size);
42238 + ls = labels_set(new);
42239 + write_lock_irqsave(&ls->lock, flags);
42240 + label = __label_insert(labels_set(new), new, false);
42241 +diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
42242 +index f56070270c69d..1e2f40db15c58 100644
42243 +--- a/security/apparmor/lsm.c
42244 ++++ b/security/apparmor/lsm.c
42245 +@@ -1194,10 +1194,10 @@ static int apparmor_inet_conn_request(const struct sock *sk, struct sk_buff *skb
42246 + #endif
42247 +
42248 + /*
42249 +- * The cred blob is a pointer to, not an instance of, an aa_task_ctx.
42250 ++ * The cred blob is a pointer to, not an instance of, an aa_label.
42251 + */
42252 + struct lsm_blob_sizes apparmor_blob_sizes __lsm_ro_after_init = {
42253 +- .lbs_cred = sizeof(struct aa_task_ctx *),
42254 ++ .lbs_cred = sizeof(struct aa_label *),
42255 + .lbs_file = sizeof(struct aa_file_ctx),
42256 + .lbs_task = sizeof(struct aa_task_ctx),
42257 + };
42258 +diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
42259 +index 499c0209b6a46..fbdfcef91c616 100644
42260 +--- a/security/apparmor/policy.c
42261 ++++ b/security/apparmor/policy.c
42262 +@@ -1170,7 +1170,7 @@ ssize_t aa_remove_profiles(struct aa_ns *policy_ns, struct aa_label *subj,
42263 +
42264 + if (!name) {
42265 + /* remove namespace - can only happen if fqname[0] == ':' */
42266 +- mutex_lock_nested(&ns->parent->lock, ns->level);
42267 ++ mutex_lock_nested(&ns->parent->lock, ns->parent->level);
42268 + __aa_bump_ns_revision(ns);
42269 + __aa_remove_ns(ns);
42270 + mutex_unlock(&ns->parent->lock);
42271 +diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
42272 +index 43beaad083feb..78700d94b4533 100644
42273 +--- a/security/apparmor/policy_ns.c
42274 ++++ b/security/apparmor/policy_ns.c
42275 +@@ -134,7 +134,7 @@ static struct aa_ns *alloc_ns(const char *prefix, const char *name)
42276 + return ns;
42277 +
42278 + fail_unconfined:
42279 +- kfree_sensitive(ns->base.hname);
42280 ++ aa_policy_destroy(&ns->base);
42281 + fail_ns:
42282 + kfree_sensitive(ns);
42283 + return NULL;
42284 +diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
42285 +index 55d31bac4f35b..9d26bbb901338 100644
42286 +--- a/security/apparmor/policy_unpack.c
42287 ++++ b/security/apparmor/policy_unpack.c
42288 +@@ -972,7 +972,7 @@ static int verify_header(struct aa_ext *e, int required, const char **ns)
42289 + * if not specified use previous version
42290 + * Mask off everything that is not kernel abi version
42291 + */
42292 +- if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v7)) {
42293 ++ if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v8)) {
42294 + audit_iface(NULL, NULL, NULL, "unsupported interface version",
42295 + e, error);
42296 + return error;
42297 +diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
42298 +index 8a82a6c7f48a4..f2193c531f4a4 100644
42299 +--- a/security/integrity/digsig.c
42300 ++++ b/security/integrity/digsig.c
42301 +@@ -126,6 +126,7 @@ int __init integrity_init_keyring(const unsigned int id)
42302 + {
42303 + struct key_restriction *restriction;
42304 + key_perm_t perm;
42305 ++ int ret;
42306 +
42307 + perm = (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW
42308 + | KEY_USR_READ | KEY_USR_SEARCH;
42309 +@@ -154,7 +155,10 @@ int __init integrity_init_keyring(const unsigned int id)
42310 + perm |= KEY_USR_WRITE;
42311 +
42312 + out:
42313 +- return __integrity_init_keyring(id, perm, restriction);
42314 ++ ret = __integrity_init_keyring(id, perm, restriction);
42315 ++ if (ret)
42316 ++ kfree(restriction);
42317 ++ return ret;
42318 + }
42319 +
42320 + static int __init integrity_add_key(const unsigned int id, const void *data,
42321 +diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
42322 +index a8802b8da946b..2edff7f58c25c 100644
42323 +--- a/security/integrity/ima/ima_policy.c
42324 ++++ b/security/integrity/ima/ima_policy.c
42325 +@@ -398,12 +398,6 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
42326 +
42327 + nentry->lsm[i].type = entry->lsm[i].type;
42328 + nentry->lsm[i].args_p = entry->lsm[i].args_p;
42329 +- /*
42330 +- * Remove the reference from entry so that the associated
42331 +- * memory will not be freed during a later call to
42332 +- * ima_lsm_free_rule(entry).
42333 +- */
42334 +- entry->lsm[i].args_p = NULL;
42335 +
42336 + ima_filter_rule_init(nentry->lsm[i].type, Audit_equal,
42337 + nentry->lsm[i].args_p,
42338 +@@ -417,6 +411,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
42339 +
42340 + static int ima_lsm_update_rule(struct ima_rule_entry *entry)
42341 + {
42342 ++ int i;
42343 + struct ima_rule_entry *nentry;
42344 +
42345 + nentry = ima_lsm_copy_rule(entry);
42346 +@@ -431,7 +426,8 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
42347 + * references and the entry itself. All other memory references will now
42348 + * be owned by nentry.
42349 + */
42350 +- ima_lsm_free_rule(entry);
42351 ++ for (i = 0; i < MAX_LSM_RULES; i++)
42352 ++ ima_filter_rule_free(entry->lsm[i].rule);
42353 + kfree(entry);
42354 +
42355 + return 0;
42356 +@@ -549,6 +545,9 @@ static bool ima_match_rules(struct ima_rule_entry *rule,
42357 + const char *func_data)
42358 + {
42359 + int i;
42360 ++ bool result = false;
42361 ++ struct ima_rule_entry *lsm_rule = rule;
42362 ++ bool rule_reinitialized = false;
42363 +
42364 + if ((rule->flags & IMA_FUNC) &&
42365 + (rule->func != func && func != POST_SETATTR))
42366 +@@ -610,35 +609,55 @@ static bool ima_match_rules(struct ima_rule_entry *rule,
42367 + int rc = 0;
42368 + u32 osid;
42369 +
42370 +- if (!rule->lsm[i].rule) {
42371 +- if (!rule->lsm[i].args_p)
42372 ++ if (!lsm_rule->lsm[i].rule) {
42373 ++ if (!lsm_rule->lsm[i].args_p)
42374 + continue;
42375 + else
42376 + return false;
42377 + }
42378 ++
42379 ++retry:
42380 + switch (i) {
42381 + case LSM_OBJ_USER:
42382 + case LSM_OBJ_ROLE:
42383 + case LSM_OBJ_TYPE:
42384 + security_inode_getsecid(inode, &osid);
42385 +- rc = ima_filter_rule_match(osid, rule->lsm[i].type,
42386 ++ rc = ima_filter_rule_match(osid, lsm_rule->lsm[i].type,
42387 + Audit_equal,
42388 +- rule->lsm[i].rule);
42389 ++ lsm_rule->lsm[i].rule);
42390 + break;
42391 + case LSM_SUBJ_USER:
42392 + case LSM_SUBJ_ROLE:
42393 + case LSM_SUBJ_TYPE:
42394 +- rc = ima_filter_rule_match(secid, rule->lsm[i].type,
42395 ++ rc = ima_filter_rule_match(secid, lsm_rule->lsm[i].type,
42396 + Audit_equal,
42397 +- rule->lsm[i].rule);
42398 ++ lsm_rule->lsm[i].rule);
42399 + break;
42400 + default:
42401 + break;
42402 + }
42403 +- if (!rc)
42404 +- return false;
42405 ++
42406 ++ if (rc == -ESTALE && !rule_reinitialized) {
42407 ++ lsm_rule = ima_lsm_copy_rule(rule);
42408 ++ if (lsm_rule) {
42409 ++ rule_reinitialized = true;
42410 ++ goto retry;
42411 ++ }
42412 ++ }
42413 ++ if (!rc) {
42414 ++ result = false;
42415 ++ goto out;
42416 ++ }
42417 + }
42418 +- return true;
42419 ++ result = true;
42420 ++
42421 ++out:
42422 ++ if (rule_reinitialized) {
42423 ++ for (i = 0; i < MAX_LSM_RULES; i++)
42424 ++ ima_filter_rule_free(lsm_rule->lsm[i].rule);
42425 ++ kfree(lsm_rule);
42426 ++ }
42427 ++ return result;
42428 + }
42429 +
42430 + /*
42431 +diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
42432 +index c25079faa2088..195ac18f09275 100644
42433 +--- a/security/integrity/ima/ima_template.c
42434 ++++ b/security/integrity/ima/ima_template.c
42435 +@@ -245,11 +245,11 @@ int template_desc_init_fields(const char *template_fmt,
42436 + }
42437 +
42438 + if (fields && num_fields) {
42439 +- *fields = kmalloc_array(i, sizeof(*fields), GFP_KERNEL);
42440 ++ *fields = kmalloc_array(i, sizeof(**fields), GFP_KERNEL);
42441 + if (*fields == NULL)
42442 + return -ENOMEM;
42443 +
42444 +- memcpy(*fields, found_fields, i * sizeof(*fields));
42445 ++ memcpy(*fields, found_fields, i * sizeof(**fields));
42446 + *num_fields = i;
42447 + }
42448 +
42449 +diff --git a/security/loadpin/loadpin.c b/security/loadpin/loadpin.c
42450 +index de41621f4998e..110a5ab2b46bc 100644
42451 +--- a/security/loadpin/loadpin.c
42452 ++++ b/security/loadpin/loadpin.c
42453 +@@ -122,21 +122,11 @@ static void loadpin_sb_free_security(struct super_block *mnt_sb)
42454 + }
42455 + }
42456 +
42457 +-static int loadpin_read_file(struct file *file, enum kernel_read_file_id id,
42458 +- bool contents)
42459 ++static int loadpin_check(struct file *file, enum kernel_read_file_id id)
42460 + {
42461 + struct super_block *load_root;
42462 + const char *origin = kernel_read_file_id_str(id);
42463 +
42464 +- /*
42465 +- * If we will not know that we'll be seeing the full contents
42466 +- * then we cannot trust a load will be complete and unchanged
42467 +- * off disk. Treat all contents=false hooks as if there were
42468 +- * no associated file struct.
42469 +- */
42470 +- if (!contents)
42471 +- file = NULL;
42472 +-
42473 + /* If the file id is excluded, ignore the pinning. */
42474 + if ((unsigned int)id < ARRAY_SIZE(ignore_read_file_id) &&
42475 + ignore_read_file_id[id]) {
42476 +@@ -192,9 +182,25 @@ static int loadpin_read_file(struct file *file, enum kernel_read_file_id id,
42477 + return 0;
42478 + }
42479 +
42480 ++static int loadpin_read_file(struct file *file, enum kernel_read_file_id id,
42481 ++ bool contents)
42482 ++{
42483 ++ /*
42484 ++ * LoadPin only cares about the _origin_ of a file, not its
42485 ++ * contents, so we can ignore the "are full contents available"
42486 ++ * argument here.
42487 ++ */
42488 ++ return loadpin_check(file, id);
42489 ++}
42490 ++
42491 + static int loadpin_load_data(enum kernel_load_data_id id, bool contents)
42492 + {
42493 +- return loadpin_read_file(NULL, (enum kernel_read_file_id) id, contents);
42494 ++ /*
42495 ++ * LoadPin only cares about the _origin_ of a file, not its
42496 ++ * contents, so a NULL file is passed, and we can ignore the
42497 ++ * state of "contents".
42498 ++ */
42499 ++ return loadpin_check(NULL, (enum kernel_read_file_id) id);
42500 + }
42501 +
42502 + static struct security_hook_list loadpin_hooks[] __lsm_ro_after_init = {
42503 +diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
42504 +index ba095558b6d16..7268304009ada 100644
42505 +--- a/sound/core/memalloc.c
42506 ++++ b/sound/core/memalloc.c
42507 +@@ -720,7 +720,6 @@ static const struct snd_malloc_ops snd_dma_sg_wc_ops = {
42508 + struct snd_dma_sg_fallback {
42509 + size_t count;
42510 + struct page **pages;
42511 +- dma_addr_t *addrs;
42512 + };
42513 +
42514 + static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab,
42515 +@@ -732,38 +731,49 @@ static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab,
42516 + for (i = 0; i < sgbuf->count && sgbuf->pages[i]; i++)
42517 + do_free_pages(page_address(sgbuf->pages[i]), PAGE_SIZE, wc);
42518 + kvfree(sgbuf->pages);
42519 +- kvfree(sgbuf->addrs);
42520 + kfree(sgbuf);
42521 + }
42522 +
42523 + static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
42524 + {
42525 + struct snd_dma_sg_fallback *sgbuf;
42526 +- struct page **pages;
42527 +- size_t i, count;
42528 ++ struct page **pagep, *curp;
42529 ++ size_t chunk, npages;
42530 ++ dma_addr_t addr;
42531 + void *p;
42532 + bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
42533 +
42534 + sgbuf = kzalloc(sizeof(*sgbuf), GFP_KERNEL);
42535 + if (!sgbuf)
42536 + return NULL;
42537 +- count = PAGE_ALIGN(size) >> PAGE_SHIFT;
42538 +- pages = kvcalloc(count, sizeof(*pages), GFP_KERNEL);
42539 +- if (!pages)
42540 +- goto error;
42541 +- sgbuf->pages = pages;
42542 +- sgbuf->addrs = kvcalloc(count, sizeof(*sgbuf->addrs), GFP_KERNEL);
42543 +- if (!sgbuf->addrs)
42544 ++ size = PAGE_ALIGN(size);
42545 ++ sgbuf->count = size >> PAGE_SHIFT;
42546 ++ sgbuf->pages = kvcalloc(sgbuf->count, sizeof(*sgbuf->pages), GFP_KERNEL);
42547 ++ if (!sgbuf->pages)
42548 + goto error;
42549 +
42550 +- for (i = 0; i < count; sgbuf->count++, i++) {
42551 +- p = do_alloc_pages(dmab->dev.dev, PAGE_SIZE, &sgbuf->addrs[i], wc);
42552 +- if (!p)
42553 +- goto error;
42554 +- sgbuf->pages[i] = virt_to_page(p);
42555 ++ pagep = sgbuf->pages;
42556 ++ chunk = size;
42557 ++ while (size > 0) {
42558 ++ chunk = min(size, chunk);
42559 ++ p = do_alloc_pages(dmab->dev.dev, chunk, &addr, wc);
42560 ++ if (!p) {
42561 ++ if (chunk <= PAGE_SIZE)
42562 ++ goto error;
42563 ++ chunk >>= 1;
42564 ++ chunk = PAGE_SIZE << get_order(chunk);
42565 ++ continue;
42566 ++ }
42567 ++
42568 ++ size -= chunk;
42569 ++ /* fill pages */
42570 ++ npages = chunk >> PAGE_SHIFT;
42571 ++ curp = virt_to_page(p);
42572 ++ while (npages--)
42573 ++ *pagep++ = curp++;
42574 + }
42575 +
42576 +- p = vmap(pages, count, VM_MAP, PAGE_KERNEL);
42577 ++ p = vmap(sgbuf->pages, sgbuf->count, VM_MAP, PAGE_KERNEL);
42578 + if (!p)
42579 + goto error;
42580 + dmab->private_data = sgbuf;
42581 +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
42582 +index 33769ca78cc8f..9238abbfb2d62 100644
42583 +--- a/sound/core/pcm_native.c
42584 ++++ b/sound/core/pcm_native.c
42585 +@@ -1432,8 +1432,10 @@ static int snd_pcm_do_start(struct snd_pcm_substream *substream,
42586 + static void snd_pcm_undo_start(struct snd_pcm_substream *substream,
42587 + snd_pcm_state_t state)
42588 + {
42589 +- if (substream->runtime->trigger_master == substream)
42590 ++ if (substream->runtime->trigger_master == substream) {
42591 + substream->ops->trigger(substream, SNDRV_PCM_TRIGGER_STOP);
42592 ++ substream->runtime->stop_operating = true;
42593 ++ }
42594 + }
42595 +
42596 + static void snd_pcm_post_start(struct snd_pcm_substream *substream,
42597 +diff --git a/sound/drivers/mts64.c b/sound/drivers/mts64.c
42598 +index d3bc9e8c407dc..f0d34cf70c3e0 100644
42599 +--- a/sound/drivers/mts64.c
42600 ++++ b/sound/drivers/mts64.c
42601 +@@ -815,6 +815,9 @@ static void snd_mts64_interrupt(void *private)
42602 + u8 status, data;
42603 + struct snd_rawmidi_substream *substream;
42604 +
42605 ++ if (!mts)
42606 ++ return;
42607 ++
42608 + spin_lock(&mts->lock);
42609 + ret = mts64_read(mts->pardev->port);
42610 + data = ret & 0x00ff;
42611 +diff --git a/sound/pci/asihpi/hpioctl.c b/sound/pci/asihpi/hpioctl.c
42612 +index bb31b7fe867d6..477a5b4b50bcb 100644
42613 +--- a/sound/pci/asihpi/hpioctl.c
42614 ++++ b/sound/pci/asihpi/hpioctl.c
42615 +@@ -361,7 +361,7 @@ int asihpi_adapter_probe(struct pci_dev *pci_dev,
42616 + pci_dev->device, pci_dev->subsystem_vendor,
42617 + pci_dev->subsystem_device, pci_dev->devfn);
42618 +
42619 +- if (pci_enable_device(pci_dev) < 0) {
42620 ++ if (pcim_enable_device(pci_dev) < 0) {
42621 + dev_err(&pci_dev->dev,
42622 + "pci_enable_device failed, disabling device\n");
42623 + return -EIO;
42624 +diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
42625 +index b4d1e658c5560..edd653ece70d7 100644
42626 +--- a/sound/pci/hda/hda_codec.c
42627 ++++ b/sound/pci/hda/hda_codec.c
42628 +@@ -2886,7 +2886,8 @@ static unsigned int hda_call_codec_suspend(struct hda_codec *codec)
42629 + snd_hdac_enter_pm(&codec->core);
42630 + if (codec->patch_ops.suspend)
42631 + codec->patch_ops.suspend(codec);
42632 +- hda_cleanup_all_streams(codec);
42633 ++ if (!codec->no_stream_clean_at_suspend)
42634 ++ hda_cleanup_all_streams(codec);
42635 + state = hda_set_power_state(codec, AC_PWRST_D3);
42636 + update_power_acct(codec, true);
42637 + snd_hdac_leave_pm(&codec->core);
42638 +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
42639 +index 21edf7a619f07..8015e44712678 100644
42640 +--- a/sound/pci/hda/patch_hdmi.c
42641 ++++ b/sound/pci/hda/patch_hdmi.c
42642 +@@ -1738,6 +1738,7 @@ static void silent_stream_enable(struct hda_codec *codec,
42643 +
42644 + switch (spec->silent_stream_type) {
42645 + case SILENT_STREAM_KAE:
42646 ++ silent_stream_enable_i915(codec, per_pin);
42647 + silent_stream_set_kae(codec, per_pin, true);
42648 + break;
42649 + case SILENT_STREAM_I915:
42650 +@@ -1975,6 +1976,7 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
42651 + static const struct snd_pci_quirk force_connect_list[] = {
42652 + SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
42653 + SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
42654 ++ SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
42655 + SND_PCI_QUIRK(0x1462, 0xec94, "MS-7C94", 1),
42656 + SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", 1),
42657 + {}
42658 +@@ -2878,9 +2880,33 @@ static int i915_hsw_setup_stream(struct hda_codec *codec, hda_nid_t cvt_nid,
42659 + hda_nid_t pin_nid, int dev_id, u32 stream_tag,
42660 + int format)
42661 + {
42662 ++ struct hdmi_spec *spec = codec->spec;
42663 ++ int pin_idx = pin_id_to_pin_index(codec, pin_nid, dev_id);
42664 ++ struct hdmi_spec_per_pin *per_pin;
42665 ++ int res;
42666 ++
42667 ++ if (pin_idx < 0)
42668 ++ per_pin = NULL;
42669 ++ else
42670 ++ per_pin = get_pin(spec, pin_idx);
42671 ++
42672 + haswell_verify_D0(codec, cvt_nid, pin_nid);
42673 +- return hdmi_setup_stream(codec, cvt_nid, pin_nid, dev_id,
42674 +- stream_tag, format);
42675 ++
42676 ++ if (spec->silent_stream_type == SILENT_STREAM_KAE && per_pin && per_pin->silent_stream) {
42677 ++ silent_stream_set_kae(codec, per_pin, false);
42678 ++ /* wait for pending transfers in codec to clear */
42679 ++ usleep_range(100, 200);
42680 ++ }
42681 ++
42682 ++ res = hdmi_setup_stream(codec, cvt_nid, pin_nid, dev_id,
42683 ++ stream_tag, format);
42684 ++
42685 ++ if (spec->silent_stream_type == SILENT_STREAM_KAE && per_pin && per_pin->silent_stream) {
42686 ++ usleep_range(100, 200);
42687 ++ silent_stream_set_kae(codec, per_pin, true);
42688 ++ }
42689 ++
42690 ++ return res;
42691 + }
42692 +
42693 + /* pin_cvt_fixup ops override for HSW+ and VLV+ */
42694 +@@ -2900,6 +2926,88 @@ static void i915_pin_cvt_fixup(struct hda_codec *codec,
42695 + }
42696 + }
42697 +
42698 ++#ifdef CONFIG_PM
42699 ++static int i915_adlp_hdmi_suspend(struct hda_codec *codec)
42700 ++{
42701 ++ struct hdmi_spec *spec = codec->spec;
42702 ++ bool silent_streams = false;
42703 ++ int pin_idx, res;
42704 ++
42705 ++ res = generic_hdmi_suspend(codec);
42706 ++
42707 ++ for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
42708 ++ struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
42709 ++
42710 ++ if (per_pin->silent_stream) {
42711 ++ silent_streams = true;
42712 ++ break;
42713 ++ }
42714 ++ }
42715 ++
42716 ++ if (silent_streams && spec->silent_stream_type == SILENT_STREAM_KAE) {
42717 ++ /*
42718 ++ * stream-id should remain programmed when codec goes
42719 ++ * to runtime suspend
42720 ++ */
42721 ++ codec->no_stream_clean_at_suspend = 1;
42722 ++
42723 ++ /*
42724 ++ * the system might go to S3, in which case keep-alive
42725 ++ * must be reprogrammed upon resume
42726 ++ */
42727 ++ codec->forced_resume = 1;
42728 ++
42729 ++ codec_dbg(codec, "HDMI: KAE active at suspend\n");
42730 ++ } else {
42731 ++ codec->no_stream_clean_at_suspend = 0;
42732 ++ codec->forced_resume = 0;
42733 ++ }
42734 ++
42735 ++ return res;
42736 ++}
42737 ++
42738 ++static int i915_adlp_hdmi_resume(struct hda_codec *codec)
42739 ++{
42740 ++ struct hdmi_spec *spec = codec->spec;
42741 ++ int pin_idx, res;
42742 ++
42743 ++ res = generic_hdmi_resume(codec);
42744 ++
42745 ++ /* KAE not programmed at suspend, nothing to do here */
42746 ++ if (!codec->no_stream_clean_at_suspend)
42747 ++ return res;
42748 ++
42749 ++ for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
42750 ++ struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
42751 ++
42752 ++ /*
42753 ++ * If system was in suspend with monitor connected,
42754 ++ * the codec setting may have been lost. Re-enable
42755 ++ * keep-alive.
42756 ++ */
42757 ++ if (per_pin->silent_stream) {
42758 ++ unsigned int param;
42759 ++
42760 ++ param = snd_hda_codec_read(codec, per_pin->cvt_nid, 0,
42761 ++ AC_VERB_GET_CONV, 0);
42762 ++ if (!param) {
42763 ++ codec_dbg(codec, "HDMI: KAE: restore stream id\n");
42764 ++ silent_stream_enable_i915(codec, per_pin);
42765 ++ }
42766 ++
42767 ++ param = snd_hda_codec_read(codec, per_pin->cvt_nid, 0,
42768 ++ AC_VERB_GET_DIGI_CONVERT_1, 0);
42769 ++ if (!(param & (AC_DIG3_KAE << 16))) {
42770 ++ codec_dbg(codec, "HDMI: KAE: restore DIG3_KAE\n");
42771 ++ silent_stream_set_kae(codec, per_pin, true);
42772 ++ }
42773 ++ }
42774 ++ }
42775 ++
42776 ++ return res;
42777 ++}
42778 ++#endif
42779 ++
42780 + /* precondition and allocation for Intel codecs */
42781 + static int alloc_intel_hdmi(struct hda_codec *codec)
42782 + {
42783 +@@ -3030,8 +3138,14 @@ static int patch_i915_adlp_hdmi(struct hda_codec *codec)
42784 + if (!res) {
42785 + spec = codec->spec;
42786 +
42787 +- if (spec->silent_stream_type)
42788 ++ if (spec->silent_stream_type) {
42789 + spec->silent_stream_type = SILENT_STREAM_KAE;
42790 ++
42791 ++#ifdef CONFIG_PM
42792 ++ codec->patch_ops.resume = i915_adlp_hdmi_resume;
42793 ++ codec->patch_ops.suspend = i915_adlp_hdmi_suspend;
42794 ++#endif
42795 ++ }
42796 + }
42797 +
42798 + return res;
42799 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
42800 +index cf7c825078dc7..f5f640851fdcb 100644
42801 +--- a/sound/pci/hda/patch_realtek.c
42802 ++++ b/sound/pci/hda/patch_realtek.c
42803 +@@ -10962,6 +10962,17 @@ static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
42804 + }
42805 + }
42806 +
42807 ++static void alc897_fixup_lenovo_headset_mode(struct hda_codec *codec,
42808 ++ const struct hda_fixup *fix, int action)
42809 ++{
42810 ++ struct alc_spec *spec = codec->spec;
42811 ++
42812 ++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
42813 ++ spec->parse_flags |= HDA_PINCFG_HEADSET_MIC;
42814 ++ spec->gen.hp_automute_hook = alc897_hp_automute_hook;
42815 ++ }
42816 ++}
42817 ++
42818 + static const struct coef_fw alc668_coefs[] = {
42819 + WRITE_COEF(0x01, 0xbebe), WRITE_COEF(0x02, 0xaaaa), WRITE_COEF(0x03, 0x0),
42820 + WRITE_COEF(0x04, 0x0180), WRITE_COEF(0x06, 0x0), WRITE_COEF(0x07, 0x0f80),
42821 +@@ -11045,6 +11056,8 @@ enum {
42822 + ALC897_FIXUP_LENOVO_HEADSET_MIC,
42823 + ALC897_FIXUP_HEADSET_MIC_PIN,
42824 + ALC897_FIXUP_HP_HSMIC_VERB,
42825 ++ ALC897_FIXUP_LENOVO_HEADSET_MODE,
42826 ++ ALC897_FIXUP_HEADSET_MIC_PIN2,
42827 + };
42828 +
42829 + static const struct hda_fixup alc662_fixups[] = {
42830 +@@ -11471,6 +11484,19 @@ static const struct hda_fixup alc662_fixups[] = {
42831 + { }
42832 + },
42833 + },
42834 ++ [ALC897_FIXUP_LENOVO_HEADSET_MODE] = {
42835 ++ .type = HDA_FIXUP_FUNC,
42836 ++ .v.func = alc897_fixup_lenovo_headset_mode,
42837 ++ },
42838 ++ [ALC897_FIXUP_HEADSET_MIC_PIN2] = {
42839 ++ .type = HDA_FIXUP_PINS,
42840 ++ .v.pins = (const struct hda_pintbl[]) {
42841 ++ { 0x1a, 0x01a11140 }, /* use as headset mic, without its own jack detect */
42842 ++ { }
42843 ++ },
42844 ++ .chained = true,
42845 ++ .chain_id = ALC897_FIXUP_LENOVO_HEADSET_MODE
42846 ++ },
42847 + };
42848 +
42849 + static const struct snd_pci_quirk alc662_fixup_tbl[] = {
42850 +@@ -11523,6 +11549,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
42851 + SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
42852 + SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
42853 + SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN),
42854 ++ SND_PCI_QUIRK(0x17aa, 0x3742, "Lenovo TianYi510Pro-14IOB", ALC897_FIXUP_HEADSET_MIC_PIN2),
42855 + SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
42856 + SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
42857 + SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
42858 +diff --git a/sound/soc/amd/acp/acp-platform.c b/sound/soc/amd/acp/acp-platform.c
42859 +index 85a81add4ef9f..447612a7a7627 100644
42860 +--- a/sound/soc/amd/acp/acp-platform.c
42861 ++++ b/sound/soc/amd/acp/acp-platform.c
42862 +@@ -184,10 +184,6 @@ static int acp_dma_open(struct snd_soc_component *component, struct snd_pcm_subs
42863 +
42864 + stream->substream = substream;
42865 +
42866 +- spin_lock_irq(&adata->acp_lock);
42867 +- list_add_tail(&stream->list, &adata->stream_list);
42868 +- spin_unlock_irq(&adata->acp_lock);
42869 +-
42870 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
42871 + runtime->hw = acp_pcm_hardware_playback;
42872 + else
42873 +@@ -203,6 +199,10 @@ static int acp_dma_open(struct snd_soc_component *component, struct snd_pcm_subs
42874 +
42875 + writel(1, ACP_EXTERNAL_INTR_ENB(adata));
42876 +
42877 ++ spin_lock_irq(&adata->acp_lock);
42878 ++ list_add_tail(&stream->list, &adata->stream_list);
42879 ++ spin_unlock_irq(&adata->acp_lock);
42880 ++
42881 + return ret;
42882 + }
42883 +
42884 +diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
42885 +index d9715bea965e1..1f0b5527c5949 100644
42886 +--- a/sound/soc/amd/yc/acp6x-mach.c
42887 ++++ b/sound/soc/amd/yc/acp6x-mach.c
42888 +@@ -213,6 +213,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
42889 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m17 R5 AMD"),
42890 + }
42891 + },
42892 ++ {
42893 ++ .driver_data = &acp6x_card,
42894 ++ .matches = {
42895 ++ DMI_MATCH(DMI_BOARD_VENDOR, "TIMI"),
42896 ++ DMI_MATCH(DMI_PRODUCT_NAME, "Redmi Book Pro 14 2022"),
42897 ++ }
42898 ++ },
42899 + {}
42900 + };
42901 +
42902 +diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
42903 +index 767463e82665c..89059a673cf09 100644
42904 +--- a/sound/soc/codecs/pcm512x.c
42905 ++++ b/sound/soc/codecs/pcm512x.c
42906 +@@ -1634,7 +1634,7 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
42907 + if (val > 6) {
42908 + dev_err(dev, "Invalid pll-in\n");
42909 + ret = -EINVAL;
42910 +- goto err_clk;
42911 ++ goto err_pm;
42912 + }
42913 + pcm512x->pll_in = val;
42914 + }
42915 +@@ -1643,7 +1643,7 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
42916 + if (val > 6) {
42917 + dev_err(dev, "Invalid pll-out\n");
42918 + ret = -EINVAL;
42919 +- goto err_clk;
42920 ++ goto err_pm;
42921 + }
42922 + pcm512x->pll_out = val;
42923 + }
42924 +@@ -1652,12 +1652,12 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
42925 + dev_err(dev,
42926 + "Error: both pll-in and pll-out, or none\n");
42927 + ret = -EINVAL;
42928 +- goto err_clk;
42929 ++ goto err_pm;
42930 + }
42931 + if (pcm512x->pll_in && pcm512x->pll_in == pcm512x->pll_out) {
42932 + dev_err(dev, "Error: pll-in == pll-out\n");
42933 + ret = -EINVAL;
42934 +- goto err_clk;
42935 ++ goto err_pm;
42936 + }
42937 + }
42938 + #endif
42939 +diff --git a/sound/soc/codecs/rt298.c b/sound/soc/codecs/rt298.c
42940 +index a2ce52dafea84..cea26f3a02b6a 100644
42941 +--- a/sound/soc/codecs/rt298.c
42942 ++++ b/sound/soc/codecs/rt298.c
42943 +@@ -1166,6 +1166,13 @@ static const struct dmi_system_id force_combo_jack_table[] = {
42944 + DMI_MATCH(DMI_PRODUCT_NAME, "Geminilake")
42945 + }
42946 + },
42947 ++ {
42948 ++ .ident = "Intel Kabylake R RVP",
42949 ++ .matches = {
42950 ++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
42951 ++ DMI_MATCH(DMI_PRODUCT_NAME, "Kabylake Client platform")
42952 ++ }
42953 ++ },
42954 + { }
42955 + };
42956 +
42957 +diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
42958 +index ebac6caeb40ad..a230f441559a6 100644
42959 +--- a/sound/soc/codecs/rt5670.c
42960 ++++ b/sound/soc/codecs/rt5670.c
42961 +@@ -3311,8 +3311,6 @@ static int rt5670_i2c_probe(struct i2c_client *i2c)
42962 + if (ret < 0)
42963 + goto err;
42964 +
42965 +- pm_runtime_put(&i2c->dev);
42966 +-
42967 + return 0;
42968 + err:
42969 + pm_runtime_disable(&i2c->dev);
42970 +diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
42971 +index d3cfd3788f2ab..8fe9a75d12357 100644
42972 +--- a/sound/soc/codecs/wm8994.c
42973 ++++ b/sound/soc/codecs/wm8994.c
42974 +@@ -3853,7 +3853,12 @@ static irqreturn_t wm1811_jackdet_irq(int irq, void *data)
42975 + } else {
42976 + dev_dbg(component->dev, "Jack not detected\n");
42977 +
42978 ++ /* Release wm8994->accdet_lock to avoid deadlock:
42979 ++ * cancel_delayed_work_sync() takes wm8994->mic_work internal
42980 ++ * lock and wm1811_mic_work takes wm8994->accdet_lock */
42981 ++ mutex_unlock(&wm8994->accdet_lock);
42982 + cancel_delayed_work_sync(&wm8994->mic_work);
42983 ++ mutex_lock(&wm8994->accdet_lock);
42984 +
42985 + snd_soc_component_update_bits(component, WM8958_MICBIAS2,
42986 + WM8958_MICB2_DISCH, WM8958_MICB2_DISCH);
42987 +diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
42988 +index c7b10bbfba7ea..0ddb6362fcc52 100644
42989 +--- a/sound/soc/codecs/wsa883x.c
42990 ++++ b/sound/soc/codecs/wsa883x.c
42991 +@@ -7,7 +7,7 @@
42992 + #include <linux/debugfs.h>
42993 + #include <linux/delay.h>
42994 + #include <linux/device.h>
42995 +-#include <linux/gpio.h>
42996 ++#include <linux/gpio/consumer.h>
42997 + #include <linux/init.h>
42998 + #include <linux/kernel.h>
42999 + #include <linux/module.h>
43000 +@@ -1392,7 +1392,7 @@ static int wsa883x_probe(struct sdw_slave *pdev,
43001 + }
43002 +
43003 + wsa883x->sd_n = devm_gpiod_get_optional(&pdev->dev, "powerdown",
43004 +- GPIOD_FLAGS_BIT_NONEXCLUSIVE);
43005 ++ GPIOD_FLAGS_BIT_NONEXCLUSIVE | GPIOD_OUT_HIGH);
43006 + if (IS_ERR(wsa883x->sd_n)) {
43007 + dev_err(&pdev->dev, "Shutdown Control GPIO not found\n");
43008 + ret = PTR_ERR(wsa883x->sd_n);
43009 +@@ -1411,7 +1411,7 @@ static int wsa883x_probe(struct sdw_slave *pdev,
43010 + pdev->prop.simple_clk_stop_capable = true;
43011 + pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop;
43012 + pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
43013 +- gpiod_direction_output(wsa883x->sd_n, 1);
43014 ++ gpiod_direction_output(wsa883x->sd_n, 0);
43015 +
43016 + wsa883x->regmap = devm_regmap_init_sdw(pdev, &wsa883x_regmap_config);
43017 + if (IS_ERR(wsa883x->regmap)) {
43018 +diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
43019 +index fe7cf972d44ce..5daa824a4ffcf 100644
43020 +--- a/sound/soc/generic/audio-graph-card.c
43021 ++++ b/sound/soc/generic/audio-graph-card.c
43022 +@@ -485,8 +485,10 @@ static int __graph_for_each_link(struct asoc_simple_priv *priv,
43023 + of_node_put(codec_ep);
43024 + of_node_put(codec_port);
43025 +
43026 +- if (ret < 0)
43027 ++ if (ret < 0) {
43028 ++ of_node_put(cpu_ep);
43029 + return ret;
43030 ++ }
43031 +
43032 + codec_port_old = codec_port;
43033 + }
43034 +diff --git a/sound/soc/intel/Kconfig b/sound/soc/intel/Kconfig
43035 +index d2ca710ac3fa4..ac799de4f7fda 100644
43036 +--- a/sound/soc/intel/Kconfig
43037 ++++ b/sound/soc/intel/Kconfig
43038 +@@ -177,7 +177,7 @@ config SND_SOC_INTEL_SKYLAKE_COMMON
43039 + select SND_HDA_DSP_LOADER
43040 + select SND_SOC_TOPOLOGY
43041 + select SND_SOC_INTEL_SST
43042 +- select SND_SOC_HDAC_HDA if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
43043 ++ select SND_SOC_HDAC_HDA
43044 + select SND_SOC_ACPI_INTEL_MATCH
43045 + select SND_INTEL_DSP_CONFIG
43046 + help
43047 +diff --git a/sound/soc/intel/avs/boards/rt298.c b/sound/soc/intel/avs/boards/rt298.c
43048 +index b28d36872dcba..58c9d9edecf0a 100644
43049 +--- a/sound/soc/intel/avs/boards/rt298.c
43050 ++++ b/sound/soc/intel/avs/boards/rt298.c
43051 +@@ -6,6 +6,7 @@
42852 + // Amadeusz Slawinski <amadeuszx.slawinski@linux.intel.com>
43053 + //
43054 +
43055 ++#include <linux/dmi.h>
43056 + #include <linux/module.h>
43057 + #include <sound/jack.h>
43058 + #include <sound/pcm.h>
43059 +@@ -14,6 +15,16 @@
43060 + #include <sound/soc-acpi.h>
43061 + #include "../../../codecs/rt298.h"
43062 +
43063 ++static const struct dmi_system_id kblr_dmi_table[] = {
43064 ++ {
43065 ++ .matches = {
43066 ++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
43067 ++ DMI_MATCH(DMI_BOARD_NAME, "Kabylake R DDR4 RVP"),
43068 ++ },
43069 ++ },
43070 ++ {}
43071 ++};
43072 ++
43073 + static const struct snd_kcontrol_new card_controls[] = {
43074 + SOC_DAPM_PIN_SWITCH("Headphone Jack"),
43075 + SOC_DAPM_PIN_SWITCH("Mic Jack"),
43076 +@@ -96,9 +107,15 @@ avs_rt298_hw_params(struct snd_pcm_substream *substream, struct snd_pcm_hw_param
43077 + {
43078 + struct snd_soc_pcm_runtime *rtd = substream->private_data;
43079 + struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);
43080 ++ unsigned int clk_freq;
43081 + int ret;
43082 +
43083 +- ret = snd_soc_dai_set_sysclk(codec_dai, RT298_SCLK_S_PLL, 19200000, SND_SOC_CLOCK_IN);
43084 ++ if (dmi_first_match(kblr_dmi_table))
43085 ++ clk_freq = 24000000;
43086 ++ else
43087 ++ clk_freq = 19200000;
43088 ++
43089 ++ ret = snd_soc_dai_set_sysclk(codec_dai, RT298_SCLK_S_PLL, clk_freq, SND_SOC_CLOCK_IN);
43090 + if (ret < 0)
43091 + dev_err(rtd->dev, "Set codec sysclk failed: %d\n", ret);
43092 +
43093 +@@ -139,7 +156,10 @@ static int avs_create_dai_link(struct device *dev, const char *platform_name, in
43094 + dl->platforms = platform;
43095 + dl->num_platforms = 1;
43096 + dl->id = 0;
43097 +- dl->dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_CBS_CFS;
43098 ++ if (dmi_first_match(kblr_dmi_table))
43099 ++ dl->dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_CBS_CFS;
43100 ++ else
43101 ++ dl->dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF | SND_SOC_DAIFMT_CBS_CFS;
43102 + dl->init = avs_rt298_codec_init;
43103 + dl->be_hw_params_fixup = avs_rt298_be_fixup;
43104 + dl->ops = &avs_rt298_ops;
43105 +diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
43106 +index bb0719c58ca49..4f93639ce4887 100644
43107 +--- a/sound/soc/intel/avs/core.c
43108 ++++ b/sound/soc/intel/avs/core.c
43109 +@@ -440,7 +440,7 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
43110 + if (bus->mlcap)
43111 + snd_hdac_ext_bus_get_ml_capabilities(bus);
43112 +
43113 +- if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
43114 ++ if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
43115 + dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
43116 + dma_set_max_seg_size(dev, UINT_MAX);
43117 +
43118 +diff --git a/sound/soc/intel/avs/ipc.c b/sound/soc/intel/avs/ipc.c
43119 +index 020d85c7520de..306f0dc4eaf58 100644
43120 +--- a/sound/soc/intel/avs/ipc.c
43121 ++++ b/sound/soc/intel/avs/ipc.c
43122 +@@ -123,7 +123,10 @@ static void avs_dsp_recovery(struct avs_dev *adev)
43123 + if (!substream || !substream->runtime)
43124 + continue;
43125 +
43126 ++ /* No need for _irq() as we are in nonatomic context. */
43127 ++ snd_pcm_stream_lock(substream);
43128 + snd_pcm_stop(substream, SNDRV_PCM_STATE_DISCONNECTED);
43129 ++ snd_pcm_stream_unlock(substream);
43130 + }
43131 + }
43132 + }
43133 +@@ -192,7 +195,8 @@ static void avs_dsp_receive_rx(struct avs_dev *adev, u64 header)
43134 + /* update size in case of LARGE_CONFIG_GET */
43135 + if (msg.msg_target == AVS_MOD_MSG &&
43136 + msg.global_msg_type == AVS_MOD_LARGE_CONFIG_GET)
43137 +- ipc->rx.size = msg.ext.large_config.data_off_size;
43138 ++ ipc->rx.size = min_t(u32, AVS_MAILBOX_SIZE,
43139 ++ msg.ext.large_config.data_off_size);
43140 +
43141 + memcpy_fromio(ipc->rx.data, avs_uplink_addr(adev), ipc->rx.size);
43142 + trace_avs_msg_payload(ipc->rx.data, ipc->rx.size);
43143 +diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
43144 +index 70713e4b07dc1..773e5d1d87d46 100644
43145 +--- a/sound/soc/intel/boards/sof_es8336.c
43146 ++++ b/sound/soc/intel/boards/sof_es8336.c
43147 +@@ -783,7 +783,7 @@ static int sof_es8336_remove(struct platform_device *pdev)
43148 + struct snd_soc_card *card = platform_get_drvdata(pdev);
43149 + struct sof_es8336_private *priv = snd_soc_card_get_drvdata(card);
43150 +
43151 +- cancel_delayed_work(&priv->pcm_pop_work);
43152 ++ cancel_delayed_work_sync(&priv->pcm_pop_work);
43153 + gpiod_put(priv->gpio_speakers);
43154 + device_remove_software_node(priv->codec_dev);
43155 + put_device(priv->codec_dev);
43156 +diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
43157 +index 3312b57e3c0cb..7f058acd221f0 100644
43158 +--- a/sound/soc/intel/skylake/skl.c
43159 ++++ b/sound/soc/intel/skylake/skl.c
43160 +@@ -1116,7 +1116,10 @@ static void skl_shutdown(struct pci_dev *pci)
43161 + if (!skl->init_done)
43162 + return;
43163 +
43164 +- snd_hdac_stop_streams_and_chip(bus);
43165 ++ snd_hdac_stop_streams(bus);
43166 ++ snd_hdac_ext_bus_link_power_down_all(bus);
43167 ++ skl_dsp_sleep(skl->dsp);
43168 ++
43169 + list_for_each_entry(s, &bus->stream_list, list) {
43170 + stream = stream_to_hdac_ext_stream(s);
43171 + snd_hdac_ext_stream_decouple(bus, stream, false);
43172 +diff --git a/sound/soc/mediatek/common/mtk-btcvsd.c b/sound/soc/mediatek/common/mtk-btcvsd.c
43173 +index d884bb7c0fc74..1c28b41e43112 100644
43174 +--- a/sound/soc/mediatek/common/mtk-btcvsd.c
43175 ++++ b/sound/soc/mediatek/common/mtk-btcvsd.c
43176 +@@ -1038,11 +1038,9 @@ static int mtk_pcm_btcvsd_copy(struct snd_soc_component *component,
43177 + struct mtk_btcvsd_snd *bt = snd_soc_component_get_drvdata(component);
43178 +
43179 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
43180 +- mtk_btcvsd_snd_write(bt, buf, count);
43181 ++ return mtk_btcvsd_snd_write(bt, buf, count);
43182 + else
43183 +- mtk_btcvsd_snd_read(bt, buf, count);
43184 +-
43185 +- return 0;
43186 ++ return mtk_btcvsd_snd_read(bt, buf, count);
43187 + }
43188 +
43189 + /* kcontrol */
43190 +diff --git a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
43191 +index dcaeeeb8aac70..bc155dd937e0b 100644
43192 +--- a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
43193 ++++ b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
43194 +@@ -1070,16 +1070,6 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
43195 +
43196 + afe->dev = &pdev->dev;
43197 +
43198 +- irq_id = platform_get_irq(pdev, 0);
43199 +- if (irq_id <= 0)
43200 +- return irq_id < 0 ? irq_id : -ENXIO;
43201 +- ret = devm_request_irq(afe->dev, irq_id, mt8173_afe_irq_handler,
43202 +- 0, "Afe_ISR_Handle", (void *)afe);
43203 +- if (ret) {
43204 +- dev_err(afe->dev, "could not request_irq\n");
43205 +- return ret;
43206 +- }
43207 +-
43208 + afe->base_addr = devm_platform_ioremap_resource(pdev, 0);
43209 + if (IS_ERR(afe->base_addr))
43210 + return PTR_ERR(afe->base_addr);
43211 +@@ -1185,6 +1175,16 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
43212 + if (ret)
43213 + goto err_cleanup_components;
43214 +
43215 ++ irq_id = platform_get_irq(pdev, 0);
43216 ++ if (irq_id <= 0)
43217 ++ return irq_id < 0 ? irq_id : -ENXIO;
43218 ++ ret = devm_request_irq(afe->dev, irq_id, mt8173_afe_irq_handler,
43219 ++ 0, "Afe_ISR_Handle", (void *)afe);
43220 ++ if (ret) {
43221 ++ dev_err(afe->dev, "could not request_irq\n");
43222 ++ goto err_pm_disable;
43223 ++ }
43224 ++
43225 + dev_info(&pdev->dev, "MT8173 AFE driver initialized.\n");
43226 + return 0;
43227 +
43228 +diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
43229 +index 12f40c81b101e..f803f121659de 100644
43230 +--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
43231 ++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
43232 +@@ -200,14 +200,16 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
43233 + if (!mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[0].of_node) {
43234 + dev_err(&pdev->dev,
43235 + "Property 'audio-codec' missing or invalid\n");
43236 +- return -EINVAL;
43237 ++ ret = -EINVAL;
43238 ++ goto out;
43239 + }
43240 + mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node =
43241 + of_parse_phandle(pdev->dev.of_node, "mediatek,audio-codec", 1);
43242 + if (!mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node) {
43243 + dev_err(&pdev->dev,
43244 + "Property 'audio-codec' missing or invalid\n");
43245 +- return -EINVAL;
43246 ++ ret = -EINVAL;
43247 ++ goto out;
43248 + }
43249 + mt8173_rt5650_rt5514_codec_conf[0].dlc.of_node =
43250 + mt8173_rt5650_rt5514_dais[DAI_LINK_CODEC_I2S].codecs[1].of_node;
43251 +@@ -216,6 +218,7 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
43252 +
43253 + ret = devm_snd_soc_register_card(&pdev->dev, card);
43254 +
43255 ++out:
43256 + of_node_put(platform_node);
43257 + return ret;
43258 + }
43259 +diff --git a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
43260 +index a860852236779..48c14be5e3db7 100644
43261 +--- a/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
43262 ++++ b/sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
43263 +@@ -677,8 +677,10 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
43264 + }
43265 +
43266 + card = (struct snd_soc_card *)of_device_get_match_data(&pdev->dev);
43267 +- if (!card)
43268 ++ if (!card) {
43269 ++ of_node_put(platform_node);
43270 + return -EINVAL;
43271 ++ }
43272 + card->dev = &pdev->dev;
43273 +
43274 + ec_codec = of_parse_phandle(pdev->dev.of_node, "mediatek,ec-codec", 0);
43275 +@@ -767,8 +769,10 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
43276 + }
43277 +
43278 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
43279 +- if (!priv)
43280 +- return -ENOMEM;
43281 ++ if (!priv) {
43282 ++ ret = -ENOMEM;
43283 ++ goto out;
43284 ++ }
43285 +
43286 + snd_soc_card_set_drvdata(card, priv);
43287 +
43288 +@@ -776,7 +780,8 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
43289 + if (IS_ERR(priv->pinctrl)) {
43290 + dev_err(&pdev->dev, "%s devm_pinctrl_get failed\n",
43291 + __func__);
43292 +- return PTR_ERR(priv->pinctrl);
43293 ++ ret = PTR_ERR(priv->pinctrl);
43294 ++ goto out;
43295 + }
43296 +
43297 + for (i = 0; i < PIN_STATE_MAX; i++) {
43298 +@@ -809,6 +814,7 @@ mt8183_mt6358_ts3a227_max98357_dev_probe(struct platform_device *pdev)
43299 +
43300 + ret = devm_snd_soc_register_card(&pdev->dev, card);
43301 +
43302 ++out:
43303 + of_node_put(platform_node);
43304 + of_node_put(ec_codec);
43305 + of_node_put(hdmi_codec);
43306 +diff --git a/sound/soc/mediatek/mt8186/mt8186-mt6366-da7219-max98357.c b/sound/soc/mediatek/mt8186/mt8186-mt6366-da7219-max98357.c
43307 +index cfca6bdee8345..90ec0d0a83927 100644
43308 +--- a/sound/soc/mediatek/mt8186/mt8186-mt6366-da7219-max98357.c
43309 ++++ b/sound/soc/mediatek/mt8186/mt8186-mt6366-da7219-max98357.c
43310 +@@ -192,7 +192,7 @@ static int mt8186_mt6366_da7219_max98357_hdmi_init(struct snd_soc_pcm_runtime *r
43311 + struct mt8186_mt6366_da7219_max98357_priv *priv = soc_card_data->mach_priv;
43312 + int ret;
43313 +
43314 +- ret = mt8186_dai_i2s_set_share(afe, "I2S3", "I2S2");
43315 ++ ret = mt8186_dai_i2s_set_share(afe, "I2S2", "I2S3");
43316 + if (ret) {
43317 + dev_err(rtd->dev, "Failed to set up shared clocks\n");
43318 + return ret;
43319 +diff --git a/sound/soc/mediatek/mt8186/mt8186-mt6366-rt1019-rt5682s.c b/sound/soc/mediatek/mt8186/mt8186-mt6366-rt1019-rt5682s.c
43320 +index 2414c5b77233c..60fa55d0c91f0 100644
43321 +--- a/sound/soc/mediatek/mt8186/mt8186-mt6366-rt1019-rt5682s.c
43322 ++++ b/sound/soc/mediatek/mt8186/mt8186-mt6366-rt1019-rt5682s.c
43323 +@@ -168,7 +168,7 @@ static int mt8186_mt6366_rt1019_rt5682s_hdmi_init(struct snd_soc_pcm_runtime *rt
43324 + struct mt8186_mt6366_rt1019_rt5682s_priv *priv = soc_card_data->mach_priv;
43325 + int ret;
43326 +
43327 +- ret = mt8186_dai_i2s_set_share(afe, "I2S3", "I2S2");
43328 ++ ret = mt8186_dai_i2s_set_share(afe, "I2S2", "I2S3");
43329 + if (ret) {
43330 + dev_err(rtd->dev, "Failed to set up shared clocks\n");
43331 + return ret;
43332 +diff --git a/sound/soc/pxa/mmp-pcm.c b/sound/soc/pxa/mmp-pcm.c
43333 +index 5d520e18e512f..99b245e3079a2 100644
43334 +--- a/sound/soc/pxa/mmp-pcm.c
43335 ++++ b/sound/soc/pxa/mmp-pcm.c
43336 +@@ -98,7 +98,7 @@ static bool filter(struct dma_chan *chan, void *param)
43337 +
43338 + devname = kasprintf(GFP_KERNEL, "%s.%d", dma_data->dma_res->name,
43339 + dma_data->ssp_id);
43340 +- if ((strcmp(dev_name(chan->device->dev), devname) == 0) &&
43341 ++ if (devname && (strcmp(dev_name(chan->device->dev), devname) == 0) &&
43342 + (chan->chan_id == dma_data->dma_res->start)) {
43343 + found = true;
43344 + }
43345 +diff --git a/sound/soc/qcom/Kconfig b/sound/soc/qcom/Kconfig
43346 +index 8c7398bc1ca89..96a6d4731e6fd 100644
43347 +--- a/sound/soc/qcom/Kconfig
43348 ++++ b/sound/soc/qcom/Kconfig
43349 +@@ -2,6 +2,7 @@
43350 + menuconfig SND_SOC_QCOM
43351 + tristate "ASoC support for QCOM platforms"
43352 + depends on ARCH_QCOM || COMPILE_TEST
43353 ++ imply SND_SOC_QCOM_COMMON
43354 + help
43355 + Say Y or M if you want to add support to use audio devices
43356 + in Qualcomm Technologies SOC-based platforms.
43357 +@@ -59,13 +60,14 @@ config SND_SOC_STORM
43358 + config SND_SOC_APQ8016_SBC
43359 + tristate "SoC Audio support for APQ8016 SBC platforms"
43360 + select SND_SOC_LPASS_APQ8016
43361 +- select SND_SOC_QCOM_COMMON
43362 ++ depends on SND_SOC_QCOM_COMMON
43363 + help
43364 + Support for Qualcomm Technologies LPASS audio block in
43365 + APQ8016 SOC-based systems.
43366 + Say Y if you want to use audio devices on MI2S.
43367 +
43368 + config SND_SOC_QCOM_COMMON
43369 ++ depends on SOUNDWIRE
43370 + tristate
43371 +
43372 + config SND_SOC_QDSP6_COMMON
43373 +@@ -142,7 +144,7 @@ config SND_SOC_MSM8996
43374 + depends on QCOM_APR
43375 + depends on COMMON_CLK
43376 + select SND_SOC_QDSP6
43377 +- select SND_SOC_QCOM_COMMON
43378 ++ depends on SND_SOC_QCOM_COMMON
43379 + help
43380 + Support for Qualcomm Technologies LPASS audio block in
43381 + APQ8096 SoC-based systems.
43382 +@@ -153,7 +155,7 @@ config SND_SOC_SDM845
43383 + depends on QCOM_APR && I2C && SOUNDWIRE
43384 + depends on COMMON_CLK
43385 + select SND_SOC_QDSP6
43386 +- select SND_SOC_QCOM_COMMON
43387 ++ depends on SND_SOC_QCOM_COMMON
43388 + select SND_SOC_RT5663
43389 + select SND_SOC_MAX98927
43390 + imply SND_SOC_CROS_EC_CODEC
43391 +@@ -167,7 +169,7 @@ config SND_SOC_SM8250
43392 + depends on QCOM_APR && SOUNDWIRE
43393 + depends on COMMON_CLK
43394 + select SND_SOC_QDSP6
43395 +- select SND_SOC_QCOM_COMMON
43396 ++ depends on SND_SOC_QCOM_COMMON
43397 + help
43398 + To add support for audio on Qualcomm Technologies Inc.
43399 + SM8250 SoC-based systems.
43400 +@@ -178,7 +180,7 @@ config SND_SOC_SC8280XP
43401 + depends on QCOM_APR && SOUNDWIRE
43402 + depends on COMMON_CLK
43403 + select SND_SOC_QDSP6
43404 +- select SND_SOC_QCOM_COMMON
43405 ++ depends on SND_SOC_QCOM_COMMON
43406 + help
43407 + To add support for audio on Qualcomm Technologies Inc.
43408 + SC8280XP SoC-based systems.
43409 +@@ -188,7 +190,7 @@ config SND_SOC_SC7180
43410 + tristate "SoC Machine driver for SC7180 boards"
43411 + depends on I2C && GPIOLIB
43412 + depends on SOUNDWIRE || SOUNDWIRE=n
43413 +- select SND_SOC_QCOM_COMMON
43414 ++ depends on SND_SOC_QCOM_COMMON
43415 + select SND_SOC_LPASS_SC7180
43416 + select SND_SOC_MAX98357A
43417 + select SND_SOC_RT5682_I2C
43418 +@@ -202,7 +204,7 @@ config SND_SOC_SC7180
43419 + config SND_SOC_SC7280
43420 + tristate "SoC Machine driver for SC7280 boards"
43421 + depends on I2C && SOUNDWIRE
43422 +- select SND_SOC_QCOM_COMMON
43423 ++ depends on SND_SOC_QCOM_COMMON
43424 + select SND_SOC_LPASS_SC7280
43425 + select SND_SOC_MAX98357A
43426 + select SND_SOC_WCD938X_SDW
43427 +diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
43428 +index 69dd3b504e209..49c74c1662a3f 100644
43429 +--- a/sound/soc/qcom/common.c
43430 ++++ b/sound/soc/qcom/common.c
43431 +@@ -180,7 +180,6 @@ err_put_np:
43432 + }
43433 + EXPORT_SYMBOL_GPL(qcom_snd_parse_of);
43434 +
43435 +-#if IS_ENABLED(CONFIG_SOUNDWIRE)
43436 + int qcom_snd_sdw_prepare(struct snd_pcm_substream *substream,
43437 + struct sdw_stream_runtime *sruntime,
43438 + bool *stream_prepared)
43439 +@@ -294,7 +293,6 @@ int qcom_snd_sdw_hw_free(struct snd_pcm_substream *substream,
43440 + return 0;
43441 + }
43442 + EXPORT_SYMBOL_GPL(qcom_snd_sdw_hw_free);
43443 +-#endif
43444 +
43445 + int qcom_snd_wcd_jack_setup(struct snd_soc_pcm_runtime *rtd,
43446 + struct snd_soc_jack *jack, bool *jack_setup)
43447 +diff --git a/sound/soc/qcom/common.h b/sound/soc/qcom/common.h
43448 +index c5472a642de08..3ef5bb6d12df7 100644
43449 +--- a/sound/soc/qcom/common.h
43450 ++++ b/sound/soc/qcom/common.h
43451 +@@ -11,7 +11,6 @@ int qcom_snd_parse_of(struct snd_soc_card *card);
43452 + int qcom_snd_wcd_jack_setup(struct snd_soc_pcm_runtime *rtd,
43453 + struct snd_soc_jack *jack, bool *jack_setup);
43454 +
43455 +-#if IS_ENABLED(CONFIG_SOUNDWIRE)
43456 + int qcom_snd_sdw_prepare(struct snd_pcm_substream *substream,
43457 + struct sdw_stream_runtime *runtime,
43458 + bool *stream_prepared);
43459 +@@ -21,26 +20,4 @@ int qcom_snd_sdw_hw_params(struct snd_pcm_substream *substream,
43460 + int qcom_snd_sdw_hw_free(struct snd_pcm_substream *substream,
43461 + struct sdw_stream_runtime *sruntime,
43462 + bool *stream_prepared);
43463 +-#else
43464 +-static inline int qcom_snd_sdw_prepare(struct snd_pcm_substream *substream,
43465 +- struct sdw_stream_runtime *runtime,
43466 +- bool *stream_prepared)
43467 +-{
43468 +- return -ENOTSUPP;
43469 +-}
43470 +-
43471 +-static inline int qcom_snd_sdw_hw_params(struct snd_pcm_substream *substream,
43472 +- struct snd_pcm_hw_params *params,
43473 +- struct sdw_stream_runtime **psruntime)
43474 +-{
43475 +- return -ENOTSUPP;
43476 +-}
43477 +-
43478 +-static inline int qcom_snd_sdw_hw_free(struct snd_pcm_substream *substream,
43479 +- struct sdw_stream_runtime *sruntime,
43480 +- bool *stream_prepared)
43481 +-{
43482 +- return -ENOTSUPP;
43483 +-}
43484 +-#endif
43485 + #endif
43486 +diff --git a/sound/soc/qcom/lpass-sc7180.c b/sound/soc/qcom/lpass-sc7180.c
43487 +index 77a556b27cf09..24a1c121cb2e9 100644
43488 +--- a/sound/soc/qcom/lpass-sc7180.c
43489 ++++ b/sound/soc/qcom/lpass-sc7180.c
43490 +@@ -131,6 +131,9 @@ static int sc7180_lpass_init(struct platform_device *pdev)
43491 +
43492 + drvdata->clks = devm_kcalloc(dev, variant->num_clks,
43493 + sizeof(*drvdata->clks), GFP_KERNEL);
43494 ++ if (!drvdata->clks)
43495 ++ return -ENOMEM;
43496 ++
43497 + drvdata->num_clks = variant->num_clks;
43498 +
43499 + for (i = 0; i < drvdata->num_clks; i++)
43500 +diff --git a/sound/soc/rockchip/rockchip_pdm.c b/sound/soc/rockchip/rockchip_pdm.c
43501 +index a7549f8272359..5b1e47bdc376b 100644
43502 +--- a/sound/soc/rockchip/rockchip_pdm.c
43503 ++++ b/sound/soc/rockchip/rockchip_pdm.c
43504 +@@ -431,6 +431,7 @@ static int rockchip_pdm_runtime_resume(struct device *dev)
43505 +
43506 + ret = clk_prepare_enable(pdm->hclk);
43507 + if (ret) {
43508 ++ clk_disable_unprepare(pdm->clk);
43509 + dev_err(pdm->dev, "hclock enable failed %d\n", ret);
43510 + return ret;
43511 + }
43512 +diff --git a/sound/soc/rockchip/rockchip_spdif.c b/sound/soc/rockchip/rockchip_spdif.c
43513 +index 8bef572d3cbc1..5b4f004575879 100644
43514 +--- a/sound/soc/rockchip/rockchip_spdif.c
43515 ++++ b/sound/soc/rockchip/rockchip_spdif.c
43516 +@@ -88,6 +88,7 @@ static int __maybe_unused rk_spdif_runtime_resume(struct device *dev)
43517 +
43518 + ret = clk_prepare_enable(spdif->hclk);
43519 + if (ret) {
43520 ++ clk_disable_unprepare(spdif->mclk);
43521 + dev_err(spdif->dev, "hclk clock enable failed %d\n", ret);
43522 + return ret;
43523 + }
43524 +diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
43525 +index 310cd6fb0038a..4aaf0784940b5 100644
43526 +--- a/sound/usb/endpoint.c
43527 ++++ b/sound/usb/endpoint.c
43528 +@@ -1673,6 +1673,13 @@ void snd_usb_endpoint_stop(struct snd_usb_endpoint *ep, bool keep_pending)
43529 + stop_urbs(ep, false, keep_pending);
43530 + if (ep->clock_ref)
43531 + atomic_dec(&ep->clock_ref->locked);
43532 ++
43533 ++ if (ep->chip->quirk_flags & QUIRK_FLAG_FORCE_IFACE_RESET &&
43534 ++ usb_pipeout(ep->pipe)) {
43535 ++ ep->need_prepare = true;
43536 ++ if (ep->iface_ref)
43537 ++ ep->iface_ref->need_setup = true;
43538 ++ }
43539 + }
43540 + }
43541 +
43542 +diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
43543 +index 8ed165f036a01..9557bd4d1bbca 100644
43544 +--- a/sound/usb/pcm.c
43545 ++++ b/sound/usb/pcm.c
43546 +@@ -604,6 +604,7 @@ static int snd_usb_pcm_prepare(struct snd_pcm_substream *substream)
43547 + struct snd_pcm_runtime *runtime = substream->runtime;
43548 + struct snd_usb_substream *subs = runtime->private_data;
43549 + struct snd_usb_audio *chip = subs->stream->chip;
43550 ++ int retry = 0;
43551 + int ret;
43552 +
43553 + ret = snd_usb_lock_shutdown(chip);
43554 +@@ -614,6 +615,7 @@ static int snd_usb_pcm_prepare(struct snd_pcm_substream *substream)
43555 + goto unlock;
43556 + }
43557 +
43558 ++ again:
43559 + if (subs->sync_endpoint) {
43560 + ret = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
43561 + if (ret < 0)
43562 +@@ -638,9 +640,16 @@ static int snd_usb_pcm_prepare(struct snd_pcm_substream *substream)
43563 +
43564 + subs->lowlatency_playback = lowlatency_playback_available(runtime, subs);
43565 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
43566 +- !subs->lowlatency_playback)
43567 ++ !subs->lowlatency_playback) {
43568 + ret = start_endpoints(subs);
43569 +-
43570 ++ /* if XRUN happens at starting streams (possibly with implicit
43571 ++ * fb case), restart again, but only try once.
43572 ++ */
43573 ++ if (ret == -EPIPE && !retry++) {
43574 ++ sync_pending_stops(subs);
43575 ++ goto again;
43576 ++ }
43577 ++ }
43578 + unlock:
43579 + snd_usb_unlock_shutdown(chip);
43580 + return ret;
43581 +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
43582 +index 874fcf245747f..271884e350035 100644
43583 +--- a/sound/usb/quirks-table.h
43584 ++++ b/sound/usb/quirks-table.h
43585 +@@ -76,6 +76,8 @@
43586 + { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f0a) },
43587 + /* E-Mu 0204 USB */
43588 + { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) },
43589 ++/* Ktmicro Usb_audio device */
43590 ++{ USB_DEVICE_VENDOR_SPEC(0x31b2, 0x0011) },
43591 +
43592 + /*
43593 + * Creative Technology, Ltd Live! Cam Sync HD [VF0770]
43594 +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
43595 +index 0f4dd3503a6a9..58b37bfc885cb 100644
43596 +--- a/sound/usb/quirks.c
43597 ++++ b/sound/usb/quirks.c
43598 +@@ -2044,6 +2044,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
43599 + DEVICE_FLG(0x0644, 0x804a, /* TEAC UD-301 */
43600 + QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY |
43601 + QUIRK_FLAG_IFACE_DELAY),
43602 ++ DEVICE_FLG(0x0644, 0x805f, /* TEAC Model 12 */
43603 ++ QUIRK_FLAG_FORCE_IFACE_RESET),
43604 + DEVICE_FLG(0x06f8, 0xb000, /* Hercules DJ Console (Windows Edition) */
43605 + QUIRK_FLAG_IGNORE_CTL_ERROR),
43606 + DEVICE_FLG(0x06f8, 0xd002, /* Hercules DJ Console (Macintosh Edition) */
43607 +diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
43608 +index e97141ef730ad..2aba508a48312 100644
43609 +--- a/sound/usb/usbaudio.h
43610 ++++ b/sound/usb/usbaudio.h
43611 +@@ -172,6 +172,9 @@ extern bool snd_usb_skip_validation;
43612 + * Don't apply implicit feedback sync mode
43613 + * QUIRK_FLAG_IFACE_SKIP_CLOSE
43614 + * Don't closed interface during setting sample rate
43615 ++ * QUIRK_FLAG_FORCE_IFACE_RESET
43616 ++ * Force an interface reset whenever stopping & restarting a stream
43617 ++ * (e.g. after xrun)
43618 + */
43619 +
43620 + #define QUIRK_FLAG_GET_SAMPLE_RATE (1U << 0)
43621 +@@ -194,5 +197,6 @@ extern bool snd_usb_skip_validation;
43622 + #define QUIRK_FLAG_GENERIC_IMPLICIT_FB (1U << 17)
43623 + #define QUIRK_FLAG_SKIP_IMPLICIT_FB (1U << 18)
43624 + #define QUIRK_FLAG_IFACE_SKIP_CLOSE (1U << 19)
43625 ++#define QUIRK_FLAG_FORCE_IFACE_RESET (1U << 20)
43626 +
43627 + #endif /* __USBAUDIO_H */
43628 +diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
43629 +index 0cdb4f7115101..e7a11cff7245a 100644
43630 +--- a/tools/bpf/bpftool/common.c
43631 ++++ b/tools/bpf/bpftool/common.c
43632 +@@ -499,6 +499,7 @@ static int do_build_table_cb(const char *fpath, const struct stat *sb,
43633 + if (err) {
43634 + p_err("failed to append entry to hashmap for ID %u, path '%s': %s",
43635 + pinned_info.id, path, strerror(errno));
43636 ++ free(path);
43637 + goto out_close;
43638 + }
43639 +
43640 +diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
43641 +index 9c50beabdd145..fddc05c667b5d 100644
43642 +--- a/tools/lib/bpf/bpf.h
43643 ++++ b/tools/lib/bpf/bpf.h
43644 +@@ -393,8 +393,15 @@ LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf,
43645 + __u32 *buf_len, __u32 *prog_id, __u32 *fd_type,
43646 + __u64 *probe_offset, __u64 *probe_addr);
43647 +
43648 ++#ifdef __cplusplus
43649 ++/* forward-declaring enums in C++ isn't compatible with pure C enums, so
43650 ++ * instead define bpf_enable_stats() as accepting int as an input
43651 ++ */
43652 ++LIBBPF_API int bpf_enable_stats(int type);
43653 ++#else
43654 + enum bpf_stats_type; /* defined in up-to-date linux/bpf.h */
43655 + LIBBPF_API int bpf_enable_stats(enum bpf_stats_type type);
43656 ++#endif
43657 +
43658 + struct bpf_prog_bind_opts {
43659 + size_t sz; /* size of this struct for forward/backward compatibility */
43660 +diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
43661 +index d88647da2c7fc..675a0df5c840f 100644
43662 +--- a/tools/lib/bpf/btf.c
43663 ++++ b/tools/lib/bpf/btf.c
43664 +@@ -3887,14 +3887,14 @@ static inline __u16 btf_fwd_kind(struct btf_type *t)
43665 + }
43666 +
43667 + /* Check if given two types are identical ARRAY definitions */
43668 +-static int btf_dedup_identical_arrays(struct btf_dedup *d, __u32 id1, __u32 id2)
43669 ++static bool btf_dedup_identical_arrays(struct btf_dedup *d, __u32 id1, __u32 id2)
43670 + {
43671 + struct btf_type *t1, *t2;
43672 +
43673 + t1 = btf_type_by_id(d->btf, id1);
43674 + t2 = btf_type_by_id(d->btf, id2);
43675 + if (!btf_is_array(t1) || !btf_is_array(t2))
43676 +- return 0;
43677 ++ return false;
43678 +
43679 + return btf_equal_array(t1, t2);
43680 + }
43681 +@@ -3918,7 +3918,9 @@ static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id
43682 + m1 = btf_members(t1);
43683 + m2 = btf_members(t2);
43684 + for (i = 0, n = btf_vlen(t1); i < n; i++, m1++, m2++) {
43685 +- if (m1->type != m2->type)
43686 ++ if (m1->type != m2->type &&
43687 ++ !btf_dedup_identical_arrays(d, m1->type, m2->type) &&
43688 ++ !btf_dedup_identical_structs(d, m1->type, m2->type))
43689 + return false;
43690 + }
43691 + return true;
43692 +diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
43693 +index 3937f66c7f8d6..0b470169729e6 100644
43694 +--- a/tools/lib/bpf/btf_dump.c
43695 ++++ b/tools/lib/bpf/btf_dump.c
43696 +@@ -219,6 +219,17 @@ static int btf_dump_resize(struct btf_dump *d)
43697 + return 0;
43698 + }
43699 +
43700 ++static void btf_dump_free_names(struct hashmap *map)
43701 ++{
43702 ++ size_t bkt;
43703 ++ struct hashmap_entry *cur;
43704 ++
43705 ++ hashmap__for_each_entry(map, cur, bkt)
43706 ++ free((void *)cur->key);
43707 ++
43708 ++ hashmap__free(map);
43709 ++}
43710 ++
43711 + void btf_dump__free(struct btf_dump *d)
43712 + {
43713 + int i;
43714 +@@ -237,8 +248,8 @@ void btf_dump__free(struct btf_dump *d)
43715 + free(d->cached_names);
43716 + free(d->emit_queue);
43717 + free(d->decl_stack);
43718 +- hashmap__free(d->type_names);
43719 +- hashmap__free(d->ident_names);
43720 ++ btf_dump_free_names(d->type_names);
43721 ++ btf_dump_free_names(d->ident_names);
43722 +
43723 + free(d);
43724 + }
43725 +@@ -1520,11 +1531,23 @@ static void btf_dump_emit_type_cast(struct btf_dump *d, __u32 id,
43726 + static size_t btf_dump_name_dups(struct btf_dump *d, struct hashmap *name_map,
43727 + const char *orig_name)
43728 + {
43729 ++ char *old_name, *new_name;
43730 + size_t dup_cnt = 0;
43731 ++ int err;
43732 ++
43733 ++ new_name = strdup(orig_name);
43734 ++ if (!new_name)
43735 ++ return 1;
43736 +
43737 + hashmap__find(name_map, orig_name, (void **)&dup_cnt);
43738 + dup_cnt++;
43739 +- hashmap__set(name_map, orig_name, (void *)dup_cnt, NULL, NULL);
43740 ++
43741 ++ err = hashmap__set(name_map, new_name, (void *)dup_cnt,
43742 ++ (const void **)&old_name, NULL);
43743 ++ if (err)
43744 ++ free(new_name);
43745 ++
43746 ++ free(old_name);
43747 +
43748 + return dup_cnt;
43749 + }
43750 +diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
43751 +index 91b7106a4a735..b9a29d1053765 100644
43752 +--- a/tools/lib/bpf/libbpf.c
43753 ++++ b/tools/lib/bpf/libbpf.c
43754 +@@ -597,7 +597,7 @@ struct elf_state {
43755 + size_t shstrndx; /* section index for section name strings */
43756 + size_t strtabidx;
43757 + struct elf_sec_desc *secs;
43758 +- int sec_cnt;
43759 ++ size_t sec_cnt;
43760 + int btf_maps_shndx;
43761 + __u32 btf_maps_sec_btf_id;
43762 + int text_shndx;
43763 +@@ -1408,6 +1408,10 @@ static int bpf_object__check_endianness(struct bpf_object *obj)
43764 + static int
43765 + bpf_object__init_license(struct bpf_object *obj, void *data, size_t size)
43766 + {
43767 ++ if (!data) {
43768 ++ pr_warn("invalid license section in %s\n", obj->path);
43769 ++ return -LIBBPF_ERRNO__FORMAT;
43770 ++ }
43771 + /* libbpf_strlcpy() only copies first N - 1 bytes, so size + 1 won't
43772 + * go over allowed ELF data section buffer
43773 + */
43774 +@@ -1421,7 +1425,7 @@ bpf_object__init_kversion(struct bpf_object *obj, void *data, size_t size)
43775 + {
43776 + __u32 kver;
43777 +
43778 +- if (size != sizeof(kver)) {
43779 ++ if (!data || size != sizeof(kver)) {
43780 + pr_warn("invalid kver section in %s\n", obj->path);
43781 + return -LIBBPF_ERRNO__FORMAT;
43782 + }
43783 +@@ -3312,10 +3316,15 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
43784 + Elf64_Shdr *sh;
43785 +
43786 + /* ELF section indices are 0-based, but sec #0 is special "invalid"
43787 +- * section. e_shnum does include sec #0, so e_shnum is the necessary
43788 +- * size of an array to keep all the sections.
43789 ++ * section. Since section count retrieved by elf_getshdrnum() does
43790 ++ * include sec #0, it is already the necessary size of an array to keep
43791 ++ * all the sections.
43792 + */
43793 +- obj->efile.sec_cnt = obj->efile.ehdr->e_shnum;
43794 ++ if (elf_getshdrnum(obj->efile.elf, &obj->efile.sec_cnt)) {
43795 ++ pr_warn("elf: failed to get the number of sections for %s: %s\n",
43796 ++ obj->path, elf_errmsg(-1));
43797 ++ return -LIBBPF_ERRNO__FORMAT;
43798 ++ }
43799 + obj->efile.secs = calloc(obj->efile.sec_cnt, sizeof(*obj->efile.secs));
43800 + if (!obj->efile.secs)
43801 + return -ENOMEM;
43802 +@@ -4106,6 +4115,9 @@ static struct bpf_program *find_prog_by_sec_insn(const struct bpf_object *obj,
43803 + int l = 0, r = obj->nr_programs - 1, m;
43804 + struct bpf_program *prog;
43805 +
43806 ++ if (!obj->nr_programs)
43807 ++ return NULL;
43808 ++
43809 + while (l < r) {
43810 + m = l + (r - l + 1) / 2;
43811 + prog = &obj->programs[m];
43812 +diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
43813 +index e83b497c22454..49f3c3b7f6095 100644
43814 +--- a/tools/lib/bpf/usdt.c
43815 ++++ b/tools/lib/bpf/usdt.c
43816 +@@ -1348,25 +1348,23 @@ static int calc_pt_regs_off(const char *reg_name)
43817 +
43818 + static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg)
43819 + {
43820 +- char *reg_name = NULL;
43821 ++ char reg_name[16];
43822 + int arg_sz, len, reg_off;
43823 + long off;
43824 +
43825 +- if (sscanf(arg_str, " %d @ \[ %m[a-z0-9], %ld ] %n", &arg_sz, &reg_name, &off, &len) == 3) {
43826 ++ if (sscanf(arg_str, " %d @ \[ %15[a-z0-9], %ld ] %n", &arg_sz, reg_name, &off, &len) == 3) {
43827 + /* Memory dereference case, e.g., -4@[sp, 96] */
43828 + arg->arg_type = USDT_ARG_REG_DEREF;
43829 + arg->val_off = off;
43830 + reg_off = calc_pt_regs_off(reg_name);
43831 +- free(reg_name);
43832 + if (reg_off < 0)
43833 + return reg_off;
43834 + arg->reg_off = reg_off;
43835 +- } else if (sscanf(arg_str, " %d @ \[ %m[a-z0-9] ] %n", &arg_sz, &reg_name, &len) == 2) {
43836 ++ } else if (sscanf(arg_str, " %d @ \[ %15[a-z0-9] ] %n", &arg_sz, reg_name, &len) == 2) {
43837 + /* Memory dereference case, e.g., -4@[sp] */
43838 + arg->arg_type = USDT_ARG_REG_DEREF;
43839 + arg->val_off = 0;
43840 + reg_off = calc_pt_regs_off(reg_name);
43841 +- free(reg_name);
43842 + if (reg_off < 0)
43843 + return reg_off;
43844 + arg->reg_off = reg_off;
43845 +@@ -1375,12 +1373,11 @@ static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec
43846 + arg->arg_type = USDT_ARG_CONST;
43847 + arg->val_off = off;
43848 + arg->reg_off = 0;
43849 +- } else if (sscanf(arg_str, " %d @ %m[a-z0-9] %n", &arg_sz, &reg_name, &len) == 2) {
43850 ++ } else if (sscanf(arg_str, " %d @ %15[a-z0-9] %n", &arg_sz, reg_name, &len) == 2) {
43851 + /* Register read case, e.g., -8@x4 */
43852 + arg->arg_type = USDT_ARG_REG;
43853 + arg->val_off = 0;
43854 + reg_off = calc_pt_regs_off(reg_name);
43855 +- free(reg_name);
43856 + if (reg_off < 0)
43857 + return reg_off;
43858 + arg->reg_off = reg_off;
43859 +diff --git a/tools/objtool/check.c b/tools/objtool/check.c
43860 +index 43ec14c29a60c..a7f1e6c8bb0a7 100644
43861 +--- a/tools/objtool/check.c
43862 ++++ b/tools/objtool/check.c
43863 +@@ -999,6 +999,16 @@ static const char *uaccess_safe_builtin[] = {
43864 + "__tsan_read_write4",
43865 + "__tsan_read_write8",
43866 + "__tsan_read_write16",
43867 ++ "__tsan_volatile_read1",
43868 ++ "__tsan_volatile_read2",
43869 ++ "__tsan_volatile_read4",
43870 ++ "__tsan_volatile_read8",
43871 ++ "__tsan_volatile_read16",
43872 ++ "__tsan_volatile_write1",
43873 ++ "__tsan_volatile_write2",
43874 ++ "__tsan_volatile_write4",
43875 ++ "__tsan_volatile_write8",
43876 ++ "__tsan_volatile_write16",
43877 + "__tsan_atomic8_load",
43878 + "__tsan_atomic16_load",
43879 + "__tsan_atomic32_load",
43880 +diff --git a/tools/perf/Documentation/perf-annotate.txt b/tools/perf/Documentation/perf-annotate.txt
43881 +index 18fcc52809fbf..980fe2c292752 100644
43882 +--- a/tools/perf/Documentation/perf-annotate.txt
43883 ++++ b/tools/perf/Documentation/perf-annotate.txt
43884 +@@ -41,7 +41,7 @@ OPTIONS
43885 +
43886 + -q::
43887 + --quiet::
43888 +- Do not show any message. (Suppress -v)
43889 ++ Do not show any warnings or messages. (Suppress -v)
43890 +
43891 + -n::
43892 + --show-nr-samples::
43893 +diff --git a/tools/perf/Documentation/perf-diff.txt b/tools/perf/Documentation/perf-diff.txt
43894 +index be65bd55ab2aa..f3067a4af2940 100644
43895 +--- a/tools/perf/Documentation/perf-diff.txt
43896 ++++ b/tools/perf/Documentation/perf-diff.txt
43897 +@@ -75,7 +75,7 @@ OPTIONS
43898 +
43899 + -q::
43900 + --quiet::
43901 +- Do not show any message. (Suppress -v)
43902 ++ Do not show any warnings or messages. (Suppress -v)
43903 +
43904 + -f::
43905 + --force::
43906 +diff --git a/tools/perf/Documentation/perf-lock.txt b/tools/perf/Documentation/perf-lock.txt
43907 +index 3b1e16563b795..4958a1ffa1cca 100644
43908 +--- a/tools/perf/Documentation/perf-lock.txt
43909 ++++ b/tools/perf/Documentation/perf-lock.txt
43910 +@@ -42,7 +42,7 @@ COMMON OPTIONS
43911 +
43912 + -q::
43913 + --quiet::
43914 +- Do not show any message. (Suppress -v)
43915 ++ Do not show any warnings or messages. (Suppress -v)
43916 +
43917 + -D::
43918 + --dump-raw-trace::
43919 +diff --git a/tools/perf/Documentation/perf-probe.txt b/tools/perf/Documentation/perf-probe.txt
43920 +index 080981d38d7ba..7f8e8ba3a7872 100644
43921 +--- a/tools/perf/Documentation/perf-probe.txt
43922 ++++ b/tools/perf/Documentation/perf-probe.txt
43923 +@@ -57,7 +57,7 @@ OPTIONS
43924 +
43925 + -q::
43926 + --quiet::
43927 +- Be quiet (do not show any messages including errors).
43928 ++ Do not show any warnings or messages.
43929 + Can not use with -v.
43930 +
43931 + -a::
43932 +diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
43933 +index e41ae950fdc3b..9ea6d44aca58c 100644
43934 +--- a/tools/perf/Documentation/perf-record.txt
43935 ++++ b/tools/perf/Documentation/perf-record.txt
43936 +@@ -282,7 +282,7 @@ OPTIONS
43937 +
43938 + -q::
43939 + --quiet::
43940 +- Don't print any message, useful for scripting.
43941 ++ Don't print any warnings or messages, useful for scripting.
43942 +
43943 + -v::
43944 + --verbose::
43945 +diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
43946 +index 4533db2ee56bb..4fa509b159489 100644
43947 +--- a/tools/perf/Documentation/perf-report.txt
43948 ++++ b/tools/perf/Documentation/perf-report.txt
43949 +@@ -27,7 +27,7 @@ OPTIONS
43950 +
43951 + -q::
43952 + --quiet::
43953 +- Do not show any message. (Suppress -v)
43954 ++ Do not show any warnings or messages. (Suppress -v)
43955 +
43956 + -n::
43957 + --show-nr-samples::
43958 +diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
43959 +index d7ff1867feda6..18abdc1dce055 100644
43960 +--- a/tools/perf/Documentation/perf-stat.txt
43961 ++++ b/tools/perf/Documentation/perf-stat.txt
43962 +@@ -354,8 +354,8 @@ forbids the event merging logic from sharing events between groups and
43963 + may be used to increase accuracy in this case.
43964 +
43965 + --quiet::
43966 +-Don't print output. This is useful with perf stat record below to only
43967 +-write data to the perf.data file.
43968 ++Don't print output, warnings or messages. This is useful with perf stat
43969 ++record below to only write data to the perf.data file.
43970 +
43971 + STAT RECORD
43972 + -----------
43973 +diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
43974 +index e78dedf9e682c..9717c6c17433c 100644
43975 +--- a/tools/perf/bench/numa.c
43976 ++++ b/tools/perf/bench/numa.c
43977 +@@ -16,6 +16,7 @@
43978 + #include <sched.h>
43979 + #include <stdio.h>
43980 + #include <assert.h>
43981 ++#include <debug.h>
43982 + #include <malloc.h>
43983 + #include <signal.h>
43984 + #include <stdlib.h>
43985 +@@ -116,7 +117,6 @@ struct params {
43986 + long bytes_thread;
43987 +
43988 + int nr_tasks;
43989 +- bool show_quiet;
43990 +
43991 + bool show_convergence;
43992 + bool measure_convergence;
43993 +@@ -197,7 +197,8 @@ static const struct option options[] = {
43994 + OPT_BOOLEAN('c', "show_convergence", &p0.show_convergence, "show convergence details, "
43995 + "convergence is reached when each process (all its threads) is running on a single NUMA node."),
43996 + OPT_BOOLEAN('m', "measure_convergence", &p0.measure_convergence, "measure convergence latency"),
43997 +- OPT_BOOLEAN('q', "quiet" , &p0.show_quiet, "quiet mode"),
43998 ++ OPT_BOOLEAN('q', "quiet" , &quiet,
43999 ++ "quiet mode (do not show any warnings or messages)"),
44000 + OPT_BOOLEAN('S', "serialize-startup", &p0.serialize_startup,"serialize thread startup"),
44001 +
44002 + /* Special option string parsing callbacks: */
44003 +@@ -1474,7 +1475,7 @@ static int init(void)
44004 + /* char array in count_process_nodes(): */
44005 + BUG_ON(g->p.nr_nodes < 0);
44006 +
44007 +- if (g->p.show_quiet && !g->p.show_details)
44008 ++ if (quiet && !g->p.show_details)
44009 + g->p.show_details = -1;
44010 +
44011 + /* Some memory should be specified: */
44012 +@@ -1553,7 +1554,7 @@ static void print_res(const char *name, double val,
44013 + if (!name)
44014 + name = "main,";
44015 +
44016 +- if (!g->p.show_quiet)
44017 ++ if (!quiet)
44018 + printf(" %-30s %15.3f, %-15s %s\n", name, val, txt_unit, txt_short);
44019 + else
44020 + printf(" %14.3f %s\n", val, txt_long);
44021 +diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
44022 +index f839e69492e80..517d928c00e3f 100644
44023 +--- a/tools/perf/builtin-annotate.c
44024 ++++ b/tools/perf/builtin-annotate.c
44025 +@@ -525,7 +525,7 @@ int cmd_annotate(int argc, const char **argv)
44026 + OPT_BOOLEAN('f', "force", &data.force, "don't complain, do it"),
44027 + OPT_INCR('v', "verbose", &verbose,
44028 + "be more verbose (show symbol address, etc)"),
44029 +- OPT_BOOLEAN('q', "quiet", &quiet, "do now show any message"),
44030 ++ OPT_BOOLEAN('q', "quiet", &quiet, "do now show any warnings or messages"),
44031 + OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
44032 + "dump raw trace in ASCII"),
44033 + #ifdef HAVE_GTK2_SUPPORT
44034 +diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
44035 +index d925096dd7f02..ed07cc6cca56c 100644
44036 +--- a/tools/perf/builtin-diff.c
44037 ++++ b/tools/perf/builtin-diff.c
44038 +@@ -1260,7 +1260,7 @@ static const char * const diff_usage[] = {
44039 + static const struct option options[] = {
44040 + OPT_INCR('v', "verbose", &verbose,
44041 + "be more verbose (show symbol address, etc)"),
44042 +- OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any message"),
44043 ++ OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any warnings or messages"),
44044 + OPT_BOOLEAN('b', "baseline-only", &show_baseline_only,
44045 + "Show only items with match in baseline"),
44046 + OPT_CALLBACK('c', "compute", &compute,
44047 +diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
44048 +index 9722d4ab2e557..66520712a1675 100644
44049 +--- a/tools/perf/builtin-lock.c
44050 ++++ b/tools/perf/builtin-lock.c
44051 +@@ -1869,7 +1869,7 @@ int cmd_lock(int argc, const char **argv)
44052 + "file", "vmlinux pathname"),
44053 + OPT_STRING(0, "kallsyms", &symbol_conf.kallsyms_name,
44054 + "file", "kallsyms pathname"),
44055 +- OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any message"),
44056 ++ OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any warnings or messages"),
44057 + OPT_END()
44058 + };
44059 +
44060 +diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
44061 +index f62298f5db3b4..ed73d0b89ca2d 100644
44062 +--- a/tools/perf/builtin-probe.c
44063 ++++ b/tools/perf/builtin-probe.c
44064 +@@ -40,7 +40,6 @@ static struct {
44065 + int command; /* Command short_name */
44066 + bool list_events;
44067 + bool uprobes;
44068 +- bool quiet;
44069 + bool target_used;
44070 + int nevents;
44071 + struct perf_probe_event events[MAX_PROBES];
44072 +@@ -514,8 +513,8 @@ __cmd_probe(int argc, const char **argv)
44073 + struct option options[] = {
44074 + OPT_INCR('v', "verbose", &verbose,
44075 + "be more verbose (show parsed arguments, etc)"),
44076 +- OPT_BOOLEAN('q', "quiet", &params.quiet,
44077 +- "be quiet (do not show any messages)"),
44078 ++ OPT_BOOLEAN('q', "quiet", &quiet,
44079 ++ "be quiet (do not show any warnings or messages)"),
44080 + OPT_CALLBACK_DEFAULT('l', "list", NULL, "[GROUP:]EVENT",
44081 + "list up probe events",
44082 + opt_set_filter_with_command, DEFAULT_LIST_FILTER),
44083 +@@ -613,6 +612,15 @@ __cmd_probe(int argc, const char **argv)
44084 +
44085 + argc = parse_options(argc, argv, options, probe_usage,
44086 + PARSE_OPT_STOP_AT_NON_OPTION);
44087 ++
44088 ++ if (quiet) {
44089 ++ if (verbose != 0) {
44090 ++ pr_err(" Error: -v and -q are exclusive.\n");
44091 ++ return -EINVAL;
44092 ++ }
44093 ++ verbose = -1;
44094 ++ }
44095 ++
44096 + if (argc > 0) {
44097 + if (strcmp(argv[0], "-") == 0) {
44098 + usage_with_options_msg(probe_usage, options,
44099 +@@ -634,14 +642,6 @@ __cmd_probe(int argc, const char **argv)
44100 + if (ret)
44101 + return ret;
44102 +
44103 +- if (params.quiet) {
44104 +- if (verbose != 0) {
44105 +- pr_err(" Error: -v and -q are exclusive.\n");
44106 +- return -EINVAL;
44107 +- }
44108 +- verbose = -1;
44109 +- }
44110 +-
44111 + if (probe_conf.max_probes == 0)
44112 + probe_conf.max_probes = MAX_PROBES;
44113 +
44114 +diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
44115 +index e128b855dddec..59f3d98a0196d 100644
44116 +--- a/tools/perf/builtin-record.c
44117 ++++ b/tools/perf/builtin-record.c
44118 +@@ -3388,7 +3388,7 @@ static struct option __record_options[] = {
44119 + &record_parse_callchain_opt),
44120 + OPT_INCR('v', "verbose", &verbose,
44121 + "be more verbose (show counter open errors, etc)"),
44122 +- OPT_BOOLEAN('q', "quiet", &quiet, "don't print any message"),
44123 ++ OPT_BOOLEAN('q', "quiet", &quiet, "don't print any warnings or messages"),
44124 + OPT_BOOLEAN('s', "stat", &record.opts.inherit_stat,
44125 + "per thread counts"),
44126 + OPT_BOOLEAN('d', "data", &record.opts.sample_address, "Record the sample addresses"),
44127 +diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
44128 +index 8361890176c23..b6d77d3da64f6 100644
44129 +--- a/tools/perf/builtin-report.c
44130 ++++ b/tools/perf/builtin-report.c
44131 +@@ -1222,7 +1222,7 @@ int cmd_report(int argc, const char **argv)
44132 + "input file name"),
44133 + OPT_INCR('v', "verbose", &verbose,
44134 + "be more verbose (show symbol address, etc)"),
44135 +- OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any message"),
44136 ++ OPT_BOOLEAN('q', "quiet", &quiet, "Do not show any warnings or messages"),
44137 + OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
44138 + "dump raw trace in ASCII"),
44139 + OPT_BOOLEAN(0, "stats", &report.stats_mode, "Display event stats"),
44140 +diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
44141 +index 265b051579726..978fdc60b4e84 100644
44142 +--- a/tools/perf/builtin-stat.c
44143 ++++ b/tools/perf/builtin-stat.c
44144 +@@ -528,26 +528,14 @@ static int enable_counters(void)
44145 + return err;
44146 + }
44147 +
44148 +- if (stat_config.initial_delay < 0) {
44149 +- pr_info(EVLIST_DISABLED_MSG);
44150 +- return 0;
44151 +- }
44152 +-
44153 +- if (stat_config.initial_delay > 0) {
44154 +- pr_info(EVLIST_DISABLED_MSG);
44155 +- usleep(stat_config.initial_delay * USEC_PER_MSEC);
44156 +- }
44157 +-
44158 + /*
44159 + * We need to enable counters only if:
44160 + * - we don't have tracee (attaching to task or cpu)
44161 + * - we have initial delay configured
44162 + */
44163 +- if (!target__none(&target) || stat_config.initial_delay) {
44164 ++ if (!target__none(&target)) {
44165 + if (!all_counters_use_bpf)
44166 + evlist__enable(evsel_list);
44167 +- if (stat_config.initial_delay > 0)
44168 +- pr_info(EVLIST_ENABLED_MSG);
44169 + }
44170 + return 0;
44171 + }
44172 +@@ -918,14 +906,27 @@ try_again_reset:
44173 + return err;
44174 + }
44175 +
44176 +- err = enable_counters();
44177 +- if (err)
44178 +- return -1;
44179 ++ if (stat_config.initial_delay) {
44180 ++ pr_info(EVLIST_DISABLED_MSG);
44181 ++ } else {
44182 ++ err = enable_counters();
44183 ++ if (err)
44184 ++ return -1;
44185 ++ }
44186 +
44187 + /* Exec the command, if any */
44188 + if (forks)
44189 + evlist__start_workload(evsel_list);
44190 +
44191 ++ if (stat_config.initial_delay > 0) {
44192 ++ usleep(stat_config.initial_delay * USEC_PER_MSEC);
44193 ++ err = enable_counters();
44194 ++ if (err)
44195 ++ return -1;
44196 ++
44197 ++ pr_info(EVLIST_ENABLED_MSG);
44198 ++ }
44199 ++
44200 + t0 = rdclock();
44201 + clock_gettime(CLOCK_MONOTONIC, &ref_time);
44202 +
44203 +@@ -1023,7 +1024,7 @@ static void print_counters(struct timespec *ts, int argc, const char **argv)
44204 + /* Do not print anything if we record to the pipe. */
44205 + if (STAT_RECORD && perf_stat.data.is_pipe)
44206 + return;
44207 +- if (stat_config.quiet)
44208 ++ if (quiet)
44209 + return;
44210 +
44211 + evlist__print_counters(evsel_list, &stat_config, &target, ts, argc, argv);
44212 +@@ -1273,8 +1274,8 @@ static struct option stat_options[] = {
44213 + "print summary for interval mode"),
44214 + OPT_BOOLEAN(0, "no-csv-summary", &stat_config.no_csv_summary,
44215 + "don't print 'summary' for CSV summary output"),
44216 +- OPT_BOOLEAN(0, "quiet", &stat_config.quiet,
44217 +- "don't print output (useful with record)"),
44218 ++ OPT_BOOLEAN(0, "quiet", &quiet,
44219 ++ "don't print any output, messages or warnings (useful with record)"),
44220 + OPT_CALLBACK(0, "cputype", &evsel_list, "hybrid cpu type",
44221 + "Only enable events on applying cpu with this type "
44222 + "for hybrid platform (e.g. core or atom)",
44223 +@@ -2277,7 +2278,7 @@ int cmd_stat(int argc, const char **argv)
44224 + goto out;
44225 + }
44226 +
44227 +- if (!output && !stat_config.quiet) {
44228 ++ if (!output && !quiet) {
44229 + struct timespec tm;
44230 + mode = append_file ? "a" : "w";
44231 +
44232 +diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
44233 +index d3c757769b965..3dcf6aed1ef71 100644
44234 +--- a/tools/perf/builtin-trace.c
44235 ++++ b/tools/perf/builtin-trace.c
44236 +@@ -88,6 +88,8 @@
44237 + # define F_LINUX_SPECIFIC_BASE 1024
44238 + #endif
44239 +
44240 ++#define RAW_SYSCALL_ARGS_NUM 6
44241 ++
44242 + /*
44243 + * strtoul: Go from a string to a value, i.e. for msr: MSR_FS_BASE to 0xc0000100
44244 + */
44245 +@@ -108,7 +110,7 @@ struct syscall_fmt {
44246 + const char *sys_enter,
44247 + *sys_exit;
44248 + } bpf_prog_name;
44249 +- struct syscall_arg_fmt arg[6];
44250 ++ struct syscall_arg_fmt arg[RAW_SYSCALL_ARGS_NUM];
44251 + u8 nr_args;
44252 + bool errpid;
44253 + bool timeout;
44254 +@@ -1226,7 +1228,7 @@ struct syscall {
44255 + */
44256 + struct bpf_map_syscall_entry {
44257 + bool enabled;
44258 +- u16 string_args_len[6];
44259 ++ u16 string_args_len[RAW_SYSCALL_ARGS_NUM];
44260 + };
44261 +
44262 + /*
44263 +@@ -1658,7 +1660,7 @@ static int syscall__alloc_arg_fmts(struct syscall *sc, int nr_args)
44264 + {
44265 + int idx;
44266 +
44267 +- if (nr_args == 6 && sc->fmt && sc->fmt->nr_args != 0)
44268 ++ if (nr_args == RAW_SYSCALL_ARGS_NUM && sc->fmt && sc->fmt->nr_args != 0)
44269 + nr_args = sc->fmt->nr_args;
44270 +
44271 + sc->arg_fmt = calloc(nr_args, sizeof(*sc->arg_fmt));
44272 +@@ -1791,11 +1793,11 @@ static int trace__read_syscall_info(struct trace *trace, int id)
44273 + #endif
44274 + sc = trace->syscalls.table + id;
44275 + if (sc->nonexistent)
44276 +- return 0;
44277 ++ return -EEXIST;
44278 +
44279 + if (name == NULL) {
44280 + sc->nonexistent = true;
44281 +- return 0;
44282 ++ return -EEXIST;
44283 + }
44284 +
44285 + sc->name = name;
44286 +@@ -1809,11 +1811,18 @@ static int trace__read_syscall_info(struct trace *trace, int id)
44287 + sc->tp_format = trace_event__tp_format("syscalls", tp_name);
44288 + }
44289 +
44290 +- if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ? 6 : sc->tp_format->format.nr_fields))
44291 +- return -ENOMEM;
44292 +-
44293 +- if (IS_ERR(sc->tp_format))
44294 ++ /*
44295 ++ * Fails to read trace point format via sysfs node, so the trace point
44296 ++ * doesn't exist. Set the 'nonexistent' flag as true.
44297 ++ */
44298 ++ if (IS_ERR(sc->tp_format)) {
44299 ++ sc->nonexistent = true;
44300 + return PTR_ERR(sc->tp_format);
44301 ++ }
44302 ++
44303 ++ if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ?
44304 ++ RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields))
44305 ++ return -ENOMEM;
44306 +
44307 + sc->args = sc->tp_format->format.fields;
44308 + /*
44309 +@@ -2131,11 +2140,8 @@ static struct syscall *trace__syscall_info(struct trace *trace,
44310 + (err = trace__read_syscall_info(trace, id)) != 0)
44311 + goto out_cant_read;
44312 +
44313 +- if (trace->syscalls.table[id].name == NULL) {
44314 +- if (trace->syscalls.table[id].nonexistent)
44315 +- return NULL;
44316 ++ if (trace->syscalls.table && trace->syscalls.table[id].nonexistent)
44317 + goto out_cant_read;
44318 +- }
44319 +
44320 + return &trace->syscalls.table[id];
44321 +
44322 +diff --git a/tools/perf/tests/shell/stat_all_pmu.sh b/tools/perf/tests/shell/stat_all_pmu.sh
44323 +index 9c9ef33e0b3c6..c779554191731 100755
44324 +--- a/tools/perf/tests/shell/stat_all_pmu.sh
44325 ++++ b/tools/perf/tests/shell/stat_all_pmu.sh
44326 +@@ -4,17 +4,8 @@
44327 +
44328 + set -e
44329 +
44330 +-for p in $(perf list --raw-dump pmu); do
44331 +- # In powerpc, skip the events for hv_24x7 and hv_gpci.
44332 +- # These events needs input values to be filled in for
44333 +- # core, chip, partition id based on system.
44334 +- # Example: hv_24x7/CPM_ADJUNCT_INST,domain=?,core=?/
44335 +- # hv_gpci/event,partition_id=?/
44336 +- # Hence skip these events for ppc.
44337 +- if echo "$p" |grep -Eq 'hv_24x7|hv_gpci' ; then
44338 +- echo "Skipping: Event '$p' in powerpc"
44339 +- continue
44340 +- fi
44341 ++# Test all PMU events; however exclude parametrized ones (name contains '?')
44342 ++for p in $(perf list --raw-dump pmu | sed 's/[[:graph:]]\+?[[:graph:]]\+[[:space:]]//g'); do
44343 + echo "Testing $p"
44344 + result=$(perf stat -e "$p" true 2>&1)
44345 + if ! echo "$result" | grep -q "$p" && ! echo "$result" | grep -q "<not supported>" ; then
44346 +diff --git a/tools/perf/ui/util.c b/tools/perf/ui/util.c
44347 +index 689b27c34246c..1d38ddf01b604 100644
44348 +--- a/tools/perf/ui/util.c
44349 ++++ b/tools/perf/ui/util.c
44350 +@@ -15,6 +15,9 @@ static int perf_stdio__error(const char *format, va_list args)
44351 +
44352 + static int perf_stdio__warning(const char *format, va_list args)
44353 + {
44354 ++ if (quiet)
44355 ++ return 0;
44356 ++
44357 + fprintf(stderr, "Warning:\n");
44358 + vfprintf(stderr, format, args);
44359 + return 0;
44360 +@@ -45,6 +48,8 @@ int ui__warning(const char *format, ...)
44361 + {
44362 + int ret;
44363 + va_list args;
44364 ++ if (quiet)
44365 ++ return 0;
44366 +
44367 + va_start(args, format);
44368 + ret = perf_eops->warning(format, args);
44369 +diff --git a/tools/perf/util/bpf_off_cpu.c b/tools/perf/util/bpf_off_cpu.c
44370 +index c257813e674ef..01f70b8e705a8 100644
44371 +--- a/tools/perf/util/bpf_off_cpu.c
44372 ++++ b/tools/perf/util/bpf_off_cpu.c
44373 +@@ -102,7 +102,7 @@ static void check_sched_switch_args(void)
44374 + const struct btf_type *t1, *t2, *t3;
44375 + u32 type_id;
44376 +
44377 +- type_id = btf__find_by_name_kind(btf, "bpf_trace_sched_switch",
44378 ++ type_id = btf__find_by_name_kind(btf, "btf_trace_sched_switch",
44379 + BTF_KIND_TYPEDEF);
44380 + if ((s32)type_id < 0)
44381 + return;
44382 +diff --git a/tools/perf/util/branch.h b/tools/perf/util/branch.h
44383 +index f838b23db1804..dca75cad96f68 100644
44384 +--- a/tools/perf/util/branch.h
44385 ++++ b/tools/perf/util/branch.h
44386 +@@ -24,9 +24,10 @@ struct branch_flags {
44387 + u64 abort:1;
44388 + u64 cycles:16;
44389 + u64 type:4;
44390 ++ u64 spec:2;
44391 + u64 new_type:4;
44392 + u64 priv:3;
44393 +- u64 reserved:33;
44394 ++ u64 reserved:31;
44395 + };
44396 + };
44397 + };
44398 +diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
44399 +index 65e6c22f38e4f..190e818a07176 100644
44400 +--- a/tools/perf/util/debug.c
44401 ++++ b/tools/perf/util/debug.c
44402 +@@ -241,6 +241,10 @@ int perf_quiet_option(void)
44403 + opt++;
44404 + }
44405 +
44406 ++ /* For debug variables that are used as bool types, set to 0. */
44407 ++ redirect_to_stderr = 0;
44408 ++ debug_peo_args = 0;
44409 ++
44410 + return 0;
44411 + }
44412 +
44413 +diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
44414 +index ba66bb7fc1ca7..bc866d18973e4 100644
44415 +--- a/tools/perf/util/stat-display.c
44416 ++++ b/tools/perf/util/stat-display.c
44417 +@@ -704,7 +704,7 @@ static void uniquify_event_name(struct evsel *counter)
44418 + counter->name = new_name;
44419 + }
44420 + } else {
44421 +- if (perf_pmu__has_hybrid()) {
44422 ++ if (evsel__is_hybrid(counter)) {
44423 + ret = asprintf(&new_name, "%s/%s/",
44424 + counter->pmu_name, counter->name);
44425 + } else {
44426 +@@ -744,26 +744,14 @@ static void collect_all_aliases(struct perf_stat_config *config, struct evsel *c
44427 + }
44428 + }
44429 +
44430 +-static bool is_uncore(struct evsel *evsel)
44431 +-{
44432 +- struct perf_pmu *pmu = evsel__find_pmu(evsel);
44433 +-
44434 +- return pmu && pmu->is_uncore;
44435 +-}
44436 +-
44437 +-static bool hybrid_uniquify(struct evsel *evsel)
44438 +-{
44439 +- return perf_pmu__has_hybrid() && !is_uncore(evsel);
44440 +-}
44441 +-
44442 + static bool hybrid_merge(struct evsel *counter, struct perf_stat_config *config,
44443 + bool check)
44444 + {
44445 +- if (hybrid_uniquify(counter)) {
44446 ++ if (evsel__is_hybrid(counter)) {
44447 + if (check)
44448 +- return config && config->hybrid_merge;
44449 ++ return config->hybrid_merge;
44450 + else
44451 +- return config && !config->hybrid_merge;
44452 ++ return !config->hybrid_merge;
44453 + }
44454 +
44455 + return false;
44456 +@@ -1142,11 +1130,16 @@ static void print_metric_headers(struct perf_stat_config *config,
44457 + struct evlist *evlist,
44458 + const char *prefix, bool no_indent)
44459 + {
44460 +- struct perf_stat_output_ctx out;
44461 + struct evsel *counter;
44462 + struct outstate os = {
44463 + .fh = config->output
44464 + };
44465 ++ struct perf_stat_output_ctx out = {
44466 ++ .ctx = &os,
44467 ++ .print_metric = print_metric_header,
44468 ++ .new_line = new_line_metric,
44469 ++ .force_header = true,
44470 ++ };
44471 + bool first = true;
44472 +
44473 + if (config->json_output && !config->interval)
44474 +@@ -1170,13 +1163,11 @@ static void print_metric_headers(struct perf_stat_config *config,
44475 + /* Print metrics headers only */
44476 + evlist__for_each_entry(evlist, counter) {
44477 + os.evsel = counter;
44478 +- out.ctx = &os;
44479 +- out.print_metric = print_metric_header;
44480 ++
44481 + if (!first && config->json_output)
44482 + fprintf(config->output, ", ");
44483 + first = false;
44484 +- out.new_line = new_line_metric;
44485 +- out.force_header = true;
44486 ++
44487 + perf_stat__print_shadow_stats(config, counter, 0,
44488 + 0,
44489 + &out,
44490 +diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
44491 +index b0899c6e002f5..35c940d7f29cd 100644
44492 +--- a/tools/perf/util/stat.h
44493 ++++ b/tools/perf/util/stat.h
44494 +@@ -139,7 +139,6 @@ struct perf_stat_config {
44495 + bool metric_no_group;
44496 + bool metric_no_merge;
44497 + bool stop_read_counter;
44498 +- bool quiet;
44499 + bool iostat_run;
44500 + char *user_requested_cpu_list;
44501 + bool system_wide;
44502 +diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
44503 +index 647b7dff8ef36..80345695b1360 100644
44504 +--- a/tools/perf/util/symbol-elf.c
44505 ++++ b/tools/perf/util/symbol-elf.c
44506 +@@ -1303,7 +1303,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
44507 + (!used_opd && syms_ss->adjust_symbols)) {
44508 + GElf_Phdr phdr;
44509 +
44510 +- if (elf_read_program_header(syms_ss->elf,
44511 ++ if (elf_read_program_header(runtime_ss->elf,
44512 + (u64)sym.st_value, &phdr)) {
44513 + pr_debug4("%s: failed to find program header for "
44514 + "symbol: %s st_value: %#" PRIx64 "\n",
44515 +diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
44516 +index 9213565c03117..59cec4244b3a7 100644
44517 +--- a/tools/testing/selftests/bpf/config
44518 ++++ b/tools/testing/selftests/bpf/config
44519 +@@ -13,6 +13,7 @@ CONFIG_CRYPTO_USER_API_HASH=y
44520 + CONFIG_DYNAMIC_FTRACE=y
44521 + CONFIG_FPROBE=y
44522 + CONFIG_FTRACE_SYSCALLS=y
44523 ++CONFIG_FUNCTION_ERROR_INJECTION=y
44524 + CONFIG_FUNCTION_TRACER=y
44525 + CONFIG_GENEVE=y
44526 + CONFIG_IKCONFIG=y
44527 +diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
44528 +index bec15558fd938..1f37adff7632c 100644
44529 +--- a/tools/testing/selftests/bpf/network_helpers.c
44530 ++++ b/tools/testing/selftests/bpf/network_helpers.c
44531 +@@ -426,6 +426,10 @@ static int setns_by_fd(int nsfd)
44532 + if (!ASSERT_OK(err, "mount /sys/fs/bpf"))
44533 + return err;
44534 +
44535 ++ err = mount("debugfs", "/sys/kernel/debug", "debugfs", 0, NULL);
44536 ++ if (!ASSERT_OK(err, "mount /sys/kernel/debug"))
44537 ++ return err;
44538 ++
44539 + return 0;
44540 + }
44541 +
44542 +diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
44543 +index 3369c5ec3a17c..ecde236047fe1 100644
44544 +--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
44545 ++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
44546 +@@ -1498,7 +1498,6 @@ static noinline int trigger_func(int arg)
44547 + static void test_task_vma_offset_common(struct bpf_iter_attach_opts *opts, bool one_proc)
44548 + {
44549 + struct bpf_iter_vma_offset *skel;
44550 +- struct bpf_link *link;
44551 + char buf[16] = {};
44552 + int iter_fd, len;
44553 + int pgsz, shift;
44554 +@@ -1513,11 +1512,11 @@ static void test_task_vma_offset_common(struct bpf_iter_attach_opts *opts, bool
44555 + ;
44556 + skel->bss->page_shift = shift;
44557 +
44558 +- link = bpf_program__attach_iter(skel->progs.get_vma_offset, opts);
44559 +- if (!ASSERT_OK_PTR(link, "attach_iter"))
44560 +- return;
44561 ++ skel->links.get_vma_offset = bpf_program__attach_iter(skel->progs.get_vma_offset, opts);
44562 ++ if (!ASSERT_OK_PTR(skel->links.get_vma_offset, "attach_iter"))
44563 ++ goto exit;
44564 +
44565 +- iter_fd = bpf_iter_create(bpf_link__fd(link));
44566 ++ iter_fd = bpf_iter_create(bpf_link__fd(skel->links.get_vma_offset));
44567 + if (!ASSERT_GT(iter_fd, 0, "create_iter"))
44568 + goto exit;
44569 +
44570 +@@ -1535,7 +1534,7 @@ static void test_task_vma_offset_common(struct bpf_iter_attach_opts *opts, bool
44571 + close(iter_fd);
44572 +
44573 + exit:
44574 +- bpf_link__destroy(link);
44575 ++ bpf_iter_vma_offset__destroy(skel);
44576 + }
44577 +
44578 + static void test_task_vma_offset(void)
44579 +diff --git a/tools/testing/selftests/bpf/prog_tests/empty_skb.c b/tools/testing/selftests/bpf/prog_tests/empty_skb.c
44580 +new file mode 100644
44581 +index 0000000000000..0613f3bb8b5e4
44582 +--- /dev/null
44583 ++++ b/tools/testing/selftests/bpf/prog_tests/empty_skb.c
44584 +@@ -0,0 +1,146 @@
44585 ++// SPDX-License-Identifier: GPL-2.0
44586 ++#include <test_progs.h>
44587 ++#include <network_helpers.h>
44588 ++#include <net/if.h>
44589 ++#include "empty_skb.skel.h"
44590 ++
44591 ++#define SYS(cmd) ({ \
44592 ++ if (!ASSERT_OK(system(cmd), (cmd))) \
44593 ++ goto out; \
44594 ++})
44595 ++
44596 ++void serial_test_empty_skb(void)
44597 ++{
44598 ++ LIBBPF_OPTS(bpf_test_run_opts, tattr);
44599 ++ struct empty_skb *bpf_obj = NULL;
44600 ++ struct nstoken *tok = NULL;
44601 ++ struct bpf_program *prog;
44602 ++ char eth_hlen_pp[15];
44603 ++ char eth_hlen[14];
44604 ++ int veth_ifindex;
44605 ++ int ipip_ifindex;
44606 ++ int err;
44607 ++ int i;
44608 ++
44609 ++ struct {
44610 ++ const char *msg;
44611 ++ const void *data_in;
44612 ++ __u32 data_size_in;
44613 ++ int *ifindex;
44614 ++ int err;
44615 ++ int ret;
44616 ++ bool success_on_tc;
44617 ++ } tests[] = {
44618 ++ /* Empty packets are always rejected. */
44619 ++
44620 ++ {
44621 ++ /* BPF_PROG_RUN ETH_HLEN size check */
44622 ++ .msg = "veth empty ingress packet",
44623 ++ .data_in = NULL,
44624 ++ .data_size_in = 0,
44625 ++ .ifindex = &veth_ifindex,
44626 ++ .err = -EINVAL,
44627 ++ },
44628 ++ {
44629 ++ /* BPF_PROG_RUN ETH_HLEN size check */
44630 ++ .msg = "ipip empty ingress packet",
44631 ++ .data_in = NULL,
44632 ++ .data_size_in = 0,
44633 ++ .ifindex = &ipip_ifindex,
44634 ++ .err = -EINVAL,
44635 ++ },
44636 ++
44637 ++ /* ETH_HLEN-sized packets:
44638 ++ * - can not be redirected at LWT_XMIT
44639 ++ * - can be redirected at TC to non-tunneling dest
44640 ++ */
44641 ++
44642 ++ {
44643 ++ /* __bpf_redirect_common */
44644 ++ .msg = "veth ETH_HLEN packet ingress",
44645 ++ .data_in = eth_hlen,
44646 ++ .data_size_in = sizeof(eth_hlen),
44647 ++ .ifindex = &veth_ifindex,
44648 ++ .ret = -ERANGE,
44649 ++ .success_on_tc = true,
44650 ++ },
44651 ++ {
44652 ++ /* __bpf_redirect_no_mac
44653 ++ *
44654 ++ * lwt: skb->len=0 <= skb_network_offset=0
44655 ++ * tc: skb->len=14 <= skb_network_offset=14
44656 ++ */
44657 ++ .msg = "ipip ETH_HLEN packet ingress",
44658 ++ .data_in = eth_hlen,
44659 ++ .data_size_in = sizeof(eth_hlen),
44660 ++ .ifindex = &ipip_ifindex,
44661 ++ .ret = -ERANGE,
44662 ++ },
44663 ++
44664 ++ /* ETH_HLEN+1-sized packet should be redirected. */
44665 ++
44666 ++ {
44667 ++ .msg = "veth ETH_HLEN+1 packet ingress",
44668 ++ .data_in = eth_hlen_pp,
44669 ++ .data_size_in = sizeof(eth_hlen_pp),
44670 ++ .ifindex = &veth_ifindex,
44671 ++ },
44672 ++ {
44673 ++ .msg = "ipip ETH_HLEN+1 packet ingress",
44674 ++ .data_in = eth_hlen_pp,
44675 ++ .data_size_in = sizeof(eth_hlen_pp),
44676 ++ .ifindex = &ipip_ifindex,
44677 ++ },
44678 ++ };
44679 ++
44680 ++ SYS("ip netns add empty_skb");
44681 ++ tok = open_netns("empty_skb");
44682 ++ SYS("ip link add veth0 type veth peer veth1");
44683 ++ SYS("ip link set dev veth0 up");
44684 ++ SYS("ip link set dev veth1 up");
44685 ++ SYS("ip addr add 10.0.0.1/8 dev veth0");
44686 ++ SYS("ip addr add 10.0.0.2/8 dev veth1");
44687 ++ veth_ifindex = if_nametoindex("veth0");
44688 ++
44689 ++ SYS("ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
44690 ++ SYS("ip link set ipip0 up");
44691 ++ SYS("ip addr add 192.168.1.1/16 dev ipip0");
44692 ++ ipip_ifindex = if_nametoindex("ipip0");
44693 ++
44694 ++ bpf_obj = empty_skb__open_and_load();
44695 ++ if (!ASSERT_OK_PTR(bpf_obj, "open skeleton"))
44696 ++ goto out;
44697 ++
44698 ++ for (i = 0; i < ARRAY_SIZE(tests); i++) {
44699 ++ bpf_object__for_each_program(prog, bpf_obj->obj) {
44700 ++ char buf[128];
44701 ++ bool at_tc = !strncmp(bpf_program__section_name(prog), "tc", 2);
44702 ++
44703 ++ tattr.data_in = tests[i].data_in;
44704 ++ tattr.data_size_in = tests[i].data_size_in;
44705 ++
44706 ++ tattr.data_size_out = 0;
44707 ++ bpf_obj->bss->ifindex = *tests[i].ifindex;
44708 ++ bpf_obj->bss->ret = 0;
44709 ++ err = bpf_prog_test_run_opts(bpf_program__fd(prog), &tattr);
44710 ++ sprintf(buf, "err: %s [%s]", tests[i].msg, bpf_program__name(prog));
44711 ++
44712 ++ if (at_tc && tests[i].success_on_tc)
44713 ++ ASSERT_GE(err, 0, buf);
44714 ++ else
44715 ++ ASSERT_EQ(err, tests[i].err, buf);
44716 ++ sprintf(buf, "ret: %s [%s]", tests[i].msg, bpf_program__name(prog));
44717 ++ if (at_tc && tests[i].success_on_tc)
44718 ++ ASSERT_GE(bpf_obj->bss->ret, 0, buf);
44719 ++ else
44720 ++ ASSERT_EQ(bpf_obj->bss->ret, tests[i].ret, buf);
44721 ++ }
44722 ++ }
44723 ++
44724 ++out:
44725 ++ if (bpf_obj)
44726 ++ empty_skb__destroy(bpf_obj);
44727 ++ if (tok)
44728 ++ close_netns(tok);
44729 ++ system("ip netns del empty_skb");
44730 ++}
44731 +diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
44732 +index a4b4133d39e95..0d82e28aed1ac 100644
44733 +--- a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
44734 ++++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
44735 +@@ -325,7 +325,7 @@ static bool symbol_equal(const void *key1, const void *key2, void *ctx __maybe_u
44736 + static int get_syms(char ***symsp, size_t *cntp)
44737 + {
44738 + size_t cap = 0, cnt = 0, i;
44739 +- char *name, **syms = NULL;
44740 ++ char *name = NULL, **syms = NULL;
44741 + struct hashmap *map;
44742 + char buf[256];
44743 + FILE *f;
44744 +@@ -352,6 +352,8 @@ static int get_syms(char ***symsp, size_t *cntp)
44745 + /* skip modules */
44746 + if (strchr(buf, '['))
44747 + continue;
44748 ++
44749 ++ free(name);
44750 + if (sscanf(buf, "%ms$*[^\n]\n", &name) != 1)
44751 + continue;
44752 + /*
44753 +@@ -371,32 +373,32 @@ static int get_syms(char ***symsp, size_t *cntp)
44754 + if (!strncmp(name, "__ftrace_invalid_address__",
44755 + sizeof("__ftrace_invalid_address__") - 1))
44756 + continue;
44757 ++
44758 + err = hashmap__add(map, name, NULL);
44759 +- if (err) {
44760 +- free(name);
44761 +- if (err == -EEXIST)
44762 +- continue;
44763 ++ if (err == -EEXIST)
44764 ++ continue;
44765 ++ if (err)
44766 + goto error;
44767 +- }
44768 ++
44769 + err = libbpf_ensure_mem((void **) &syms, &cap,
44770 + sizeof(*syms), cnt + 1);
44771 +- if (err) {
44772 +- free(name);
44773 ++ if (err)
44774 + goto error;
44775 +- }
44776 +- syms[cnt] = name;
44777 +- cnt++;
44778 ++
44779 ++ syms[cnt++] = name;
44780 ++ name = NULL;
44781 + }
44782 +
44783 + *symsp = syms;
44784 + *cntp = cnt;
44785 +
44786 + error:
44787 ++ free(name);
44788 + fclose(f);
44789 + hashmap__free(map);
44790 + if (err) {
44791 + for (i = 0; i < cnt; i++)
44792 +- free(syms[cnt]);
44793 ++ free(syms[i]);
44794 + free(syms);
44795 + }
44796 + return err;
44797 +diff --git a/tools/testing/selftests/bpf/prog_tests/lsm_cgroup.c b/tools/testing/selftests/bpf/prog_tests/lsm_cgroup.c
44798 +index 1102e4f42d2d4..f117bfef68a14 100644
44799 +--- a/tools/testing/selftests/bpf/prog_tests/lsm_cgroup.c
44800 ++++ b/tools/testing/selftests/bpf/prog_tests/lsm_cgroup.c
44801 +@@ -173,10 +173,12 @@ static void test_lsm_cgroup_functional(void)
44802 + ASSERT_EQ(query_prog_cnt(cgroup_fd, NULL), 4, "total prog count");
44803 + ASSERT_EQ(query_prog_cnt(cgroup_fd2, NULL), 1, "total prog count");
44804 +
44805 +- /* AF_UNIX is prohibited. */
44806 +-
44807 + fd = socket(AF_UNIX, SOCK_STREAM, 0);
44808 +- ASSERT_LT(fd, 0, "socket(AF_UNIX)");
44809 ++ if (!(skel->kconfig->CONFIG_SECURITY_APPARMOR
44810 ++ || skel->kconfig->CONFIG_SECURITY_SELINUX
44811 ++ || skel->kconfig->CONFIG_SECURITY_SMACK))
44812 ++ /* AF_UNIX is prohibited. */
44813 ++ ASSERT_LT(fd, 0, "socket(AF_UNIX)");
44814 + close(fd);
44815 +
44816 + /* AF_INET6 gets default policy (sk_priority). */
44817 +@@ -233,11 +235,18 @@ static void test_lsm_cgroup_functional(void)
44818 +
44819 + /* AF_INET6+SOCK_STREAM
44820 + * AF_PACKET+SOCK_RAW
44821 ++ * AF_UNIX+SOCK_RAW if already have non-bpf lsms installed
44822 + * listen_fd
44823 + * client_fd
44824 + * accepted_fd
44825 + */
44826 +- ASSERT_EQ(skel->bss->called_socket_post_create2, 5, "called_create2");
44827 ++ if (skel->kconfig->CONFIG_SECURITY_APPARMOR
44828 ++ || skel->kconfig->CONFIG_SECURITY_SELINUX
44829 ++ || skel->kconfig->CONFIG_SECURITY_SMACK)
44830 ++ /* AF_UNIX+SOCK_RAW if already have non-bpf lsms installed */
44831 ++ ASSERT_EQ(skel->bss->called_socket_post_create2, 6, "called_create2");
44832 ++ else
44833 ++ ASSERT_EQ(skel->bss->called_socket_post_create2, 5, "called_create2");
44834 +
44835 + /* start_server
44836 + * bind(ETH_P_ALL)
44837 +diff --git a/tools/testing/selftests/bpf/prog_tests/map_kptr.c b/tools/testing/selftests/bpf/prog_tests/map_kptr.c
44838 +index fdcea7a61491e..0d66b15242089 100644
44839 +--- a/tools/testing/selftests/bpf/prog_tests/map_kptr.c
44840 ++++ b/tools/testing/selftests/bpf/prog_tests/map_kptr.c
44841 +@@ -105,7 +105,7 @@ static void test_map_kptr_success(bool test_run)
44842 + ASSERT_OK(opts.retval, "test_map_kptr_ref2 retval");
44843 +
44844 + if (test_run)
44845 +- return;
44846 ++ goto exit;
44847 +
44848 + ret = bpf_map__update_elem(skel->maps.array_map,
44849 + &key, sizeof(key), buf, sizeof(buf), 0);
44850 +@@ -132,6 +132,7 @@ static void test_map_kptr_success(bool test_run)
44851 + ret = bpf_map__delete_elem(skel->maps.lru_hash_map, &key, sizeof(key), 0);
44852 + ASSERT_OK(ret, "lru_hash_map delete");
44853 +
44854 ++exit:
44855 + map_kptr__destroy(skel);
44856 + }
44857 +
44858 +diff --git a/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c b/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c
44859 +index 617bbce6ef8f1..57191773572a0 100644
44860 +--- a/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c
44861 ++++ b/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c
44862 +@@ -485,7 +485,7 @@ static void misc(void)
44863 + goto check_linum;
44864 +
44865 + ret = read(sk_fds.passive_fd, recv_msg, sizeof(recv_msg));
44866 +- if (ASSERT_EQ(ret, sizeof(send_msg), "read(msg)"))
44867 ++ if (!ASSERT_EQ(ret, sizeof(send_msg), "read(msg)"))
44868 + goto check_linum;
44869 + }
44870 +
44871 +@@ -539,7 +539,7 @@ void test_tcp_hdr_options(void)
44872 + goto skel_destroy;
44873 +
44874 + cg_fd = test__join_cgroup(CG_NAME);
44875 +- if (ASSERT_GE(cg_fd, 0, "join_cgroup"))
44876 ++ if (!ASSERT_GE(cg_fd, 0, "join_cgroup"))
44877 + goto skel_destroy;
44878 +
44879 + for (i = 0; i < ARRAY_SIZE(tests); i++) {
44880 +diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_struct.c b/tools/testing/selftests/bpf/prog_tests/tracing_struct.c
44881 +index d5022b91d1e4c..48dc9472e160a 100644
44882 +--- a/tools/testing/selftests/bpf/prog_tests/tracing_struct.c
44883 ++++ b/tools/testing/selftests/bpf/prog_tests/tracing_struct.c
44884 +@@ -15,7 +15,7 @@ static void test_fentry(void)
44885 +
44886 + err = tracing_struct__attach(skel);
44887 + if (!ASSERT_OK(err, "tracing_struct__attach"))
44888 +- return;
44889 ++ goto destroy_skel;
44890 +
44891 + ASSERT_OK(trigger_module_test_read(256), "trigger_read");
44892 +
44893 +@@ -54,6 +54,7 @@ static void test_fentry(void)
44894 + ASSERT_EQ(skel->bss->t5_ret, 1, "t5 ret");
44895 +
44896 + tracing_struct__detach(skel);
44897 ++destroy_skel:
44898 + tracing_struct__destroy(skel);
44899 + }
44900 +
44901 +diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
44902 +index 9b9cf8458adf8..39973ea1ce433 100644
44903 +--- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
44904 ++++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c
44905 +@@ -18,7 +18,7 @@ static void test_xdp_adjust_tail_shrink(void)
44906 + );
44907 +
44908 + err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
44909 +- if (ASSERT_OK(err, "test_xdp_adjust_tail_shrink"))
44910 ++ if (!ASSERT_OK(err, "test_xdp_adjust_tail_shrink"))
44911 + return;
44912 +
44913 + err = bpf_prog_test_run_opts(prog_fd, &topts);
44914 +@@ -53,7 +53,7 @@ static void test_xdp_adjust_tail_grow(void)
44915 + );
44916 +
44917 + err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
44918 +- if (ASSERT_OK(err, "test_xdp_adjust_tail_grow"))
44919 ++ if (!ASSERT_OK(err, "test_xdp_adjust_tail_grow"))
44920 + return;
44921 +
44922 + err = bpf_prog_test_run_opts(prog_fd, &topts);
44923 +@@ -63,6 +63,7 @@ static void test_xdp_adjust_tail_grow(void)
44924 + expect_sz = sizeof(pkt_v6) + 40; /* Test grow with 40 bytes */
44925 + topts.data_in = &pkt_v6;
44926 + topts.data_size_in = sizeof(pkt_v6);
44927 ++ topts.data_size_out = sizeof(buf);
44928 + err = bpf_prog_test_run_opts(prog_fd, &topts);
44929 + ASSERT_OK(err, "ipv6");
44930 + ASSERT_EQ(topts.retval, XDP_TX, "ipv6 retval");
44931 +@@ -89,7 +90,7 @@ static void test_xdp_adjust_tail_grow2(void)
44932 + );
44933 +
44934 + err = bpf_prog_test_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
44935 +- if (ASSERT_OK(err, "test_xdp_adjust_tail_grow"))
44936 ++ if (!ASSERT_OK(err, "test_xdp_adjust_tail_grow"))
44937 + return;
44938 +
44939 + /* Test case-64 */
44940 +diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
44941 +index a50971c6cf4a5..9ac6f6a268db2 100644
44942 +--- a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
44943 ++++ b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
44944 +@@ -85,7 +85,7 @@ static void test_max_pkt_size(int fd)
44945 + }
44946 +
44947 + #define NUM_PKTS 10000
44948 +-void test_xdp_do_redirect(void)
44949 ++void serial_test_xdp_do_redirect(void)
44950 + {
44951 + int err, xdp_prog_fd, tc_prog_fd, ifindex_src, ifindex_dst;
44952 + char data[sizeof(pkt_udp) + sizeof(__u32)];
44953 +diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c b/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c
44954 +index 75550a40e029d..879f5da2f21e6 100644
44955 +--- a/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c
44956 ++++ b/tools/testing/selftests/bpf/prog_tests/xdp_synproxy.c
44957 +@@ -174,7 +174,7 @@ out:
44958 + system("ip netns del synproxy");
44959 + }
44960 +
44961 +-void test_xdp_synproxy(void)
44962 ++void serial_test_xdp_synproxy(void)
44963 + {
44964 + if (test__start_subtest("xdp"))
44965 + test_synproxy(true);
44966 +diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c b/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c
44967 +index 285c008cbf9c2..9ba14c37bbcc9 100644
44968 +--- a/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c
44969 ++++ b/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c
44970 +@@ -7,14 +7,14 @@ char _license[] SEC("license") = "GPL";
44971 +
44972 + unsigned long last_sym_value = 0;
44973 +
44974 +-static inline char tolower(char c)
44975 ++static inline char to_lower(char c)
44976 + {
44977 + if (c >= 'A' && c <= 'Z')
44978 + c += ('a' - 'A');
44979 + return c;
44980 + }
44981 +
44982 +-static inline char toupper(char c)
44983 ++static inline char to_upper(char c)
44984 + {
44985 + if (c >= 'a' && c <= 'z')
44986 + c -= ('a' - 'A');
44987 +@@ -54,7 +54,7 @@ int dump_ksym(struct bpf_iter__ksym *ctx)
44988 + type = iter->type;
44989 +
44990 + if (iter->module_name[0]) {
44991 +- type = iter->exported ? toupper(type) : tolower(type);
44992 ++ type = iter->exported ? to_upper(type) : to_lower(type);
44993 + BPF_SEQ_PRINTF(seq, "0x%llx %c %s [ %s ] ",
44994 + value, type, iter->name, iter->module_name);
44995 + } else {
44996 +diff --git a/tools/testing/selftests/bpf/progs/empty_skb.c b/tools/testing/selftests/bpf/progs/empty_skb.c
44997 +new file mode 100644
44998 +index 0000000000000..4b0cd67532511
44999 +--- /dev/null
45000 ++++ b/tools/testing/selftests/bpf/progs/empty_skb.c
45001 +@@ -0,0 +1,37 @@
45002 ++// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
45003 ++#include <linux/bpf.h>
45004 ++#include <bpf/bpf_helpers.h>
45005 ++#include <bpf/bpf_endian.h>
45006 ++
45007 ++char _license[] SEC("license") = "GPL";
45008 ++
45009 ++int ifindex;
45010 ++int ret;
45011 ++
45012 ++SEC("lwt_xmit")
45013 ++int redirect_ingress(struct __sk_buff *skb)
45014 ++{
45015 ++ ret = bpf_clone_redirect(skb, ifindex, BPF_F_INGRESS);
45016 ++ return 0;
45017 ++}
45018 ++
45019 ++SEC("lwt_xmit")
45020 ++int redirect_egress(struct __sk_buff *skb)
45021 ++{
45022 ++ ret = bpf_clone_redirect(skb, ifindex, 0);
45023 ++ return 0;
45024 ++}
45025 ++
45026 ++SEC("tc")
45027 ++int tc_redirect_ingress(struct __sk_buff *skb)
45028 ++{
45029 ++ ret = bpf_clone_redirect(skb, ifindex, BPF_F_INGRESS);
45030 ++ return 0;
45031 ++}
45032 ++
45033 ++SEC("tc")
45034 ++int tc_redirect_egress(struct __sk_buff *skb)
45035 ++{
45036 ++ ret = bpf_clone_redirect(skb, ifindex, 0);
45037 ++ return 0;
45038 ++}
45039 +diff --git a/tools/testing/selftests/bpf/progs/lsm_cgroup.c b/tools/testing/selftests/bpf/progs/lsm_cgroup.c
45040 +index 4f2d60b87b75d..02c11d16b692a 100644
45041 +--- a/tools/testing/selftests/bpf/progs/lsm_cgroup.c
45042 ++++ b/tools/testing/selftests/bpf/progs/lsm_cgroup.c
45043 +@@ -7,6 +7,10 @@
45044 +
45045 + char _license[] SEC("license") = "GPL";
45046 +
45047 ++extern bool CONFIG_SECURITY_SELINUX __kconfig __weak;
45048 ++extern bool CONFIG_SECURITY_SMACK __kconfig __weak;
45049 ++extern bool CONFIG_SECURITY_APPARMOR __kconfig __weak;
45050 ++
45051 + #ifndef AF_PACKET
45052 + #define AF_PACKET 17
45053 + #endif
45054 +@@ -140,6 +144,10 @@ SEC("lsm_cgroup/sk_alloc_security")
45055 + int BPF_PROG(socket_alloc, struct sock *sk, int family, gfp_t priority)
45056 + {
45057 + called_socket_alloc++;
45058 ++ /* if already have non-bpf lsms installed, EPERM will cause memory leak of non-bpf lsms */
45059 ++ if (CONFIG_SECURITY_SELINUX || CONFIG_SECURITY_SMACK || CONFIG_SECURITY_APPARMOR)
45060 ++ return 1;
45061 ++
45062 + if (family == AF_UNIX)
45063 + return 0; /* EPERM */
45064 +
45065 +diff --git a/tools/testing/selftests/bpf/test_bpftool_metadata.sh b/tools/testing/selftests/bpf/test_bpftool_metadata.sh
45066 +index 1bf81b49457af..b5520692f41bd 100755
45067 +--- a/tools/testing/selftests/bpf/test_bpftool_metadata.sh
45068 ++++ b/tools/testing/selftests/bpf/test_bpftool_metadata.sh
45069 +@@ -4,6 +4,9 @@
45070 + # Kselftest framework requirement - SKIP code is 4.
45071 + ksft_skip=4
45072 +
45073 ++BPF_FILE_USED="metadata_used.bpf.o"
45074 ++BPF_FILE_UNUSED="metadata_unused.bpf.o"
45075 ++
45076 + TESTNAME=bpftool_metadata
45077 + BPF_FS=$(awk '$3 == "bpf" {print $2; exit}' /proc/mounts)
45078 + BPF_DIR=$BPF_FS/test_$TESTNAME
45079 +@@ -55,7 +58,7 @@ mkdir $BPF_DIR
45080 +
45081 + trap cleanup EXIT
45082 +
45083 +-bpftool prog load metadata_unused.o $BPF_DIR/unused
45084 ++bpftool prog load $BPF_FILE_UNUSED $BPF_DIR/unused
45085 +
45086 + METADATA_PLAIN="$(bpftool prog)"
45087 + echo "$METADATA_PLAIN" | grep 'a = "foo"' > /dev/null
45088 +@@ -67,7 +70,7 @@ bpftool map | grep 'metadata.rodata' > /dev/null
45089 +
45090 + rm $BPF_DIR/unused
45091 +
45092 +-bpftool prog load metadata_used.o $BPF_DIR/used
45093 ++bpftool prog load $BPF_FILE_USED $BPF_DIR/used
45094 +
45095 + METADATA_PLAIN="$(bpftool prog)"
45096 + echo "$METADATA_PLAIN" | grep 'a = "bar"' > /dev/null
45097 +diff --git a/tools/testing/selftests/bpf/test_flow_dissector.sh b/tools/testing/selftests/bpf/test_flow_dissector.sh
45098 +index 5303ce0c977bd..4b298863797a2 100755
45099 +--- a/tools/testing/selftests/bpf/test_flow_dissector.sh
45100 ++++ b/tools/testing/selftests/bpf/test_flow_dissector.sh
45101 +@@ -2,6 +2,8 @@
45102 + # SPDX-License-Identifier: GPL-2.0
45103 + #
45104 + # Load BPF flow dissector and verify it correctly dissects traffic
45105 ++
45106 ++BPF_FILE="bpf_flow.bpf.o"
45107 + export TESTNAME=test_flow_dissector
45108 + unmount=0
45109 +
45110 +@@ -22,7 +24,7 @@ if [[ -z $(ip netns identify $$) ]]; then
45111 + if bpftool="$(which bpftool)"; then
45112 + echo "Testing global flow dissector..."
45113 +
45114 +- $bpftool prog loadall ./bpf_flow.o /sys/fs/bpf/flow \
45115 ++ $bpftool prog loadall $BPF_FILE /sys/fs/bpf/flow \
45116 + type flow_dissector
45117 +
45118 + if ! unshare --net $bpftool prog attach pinned \
45119 +@@ -95,7 +97,7 @@ else
45120 + fi
45121 +
45122 + # Attach BPF program
45123 +-./flow_dissector_load -p bpf_flow.o -s _dissect
45124 ++./flow_dissector_load -p $BPF_FILE -s _dissect
45125 +
45126 + # Setup
45127 + tc qdisc add dev lo ingress
45128 +diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
45129 +index 6c69c42b1d607..1e565f47aca94 100755
45130 +--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
45131 ++++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
45132 +@@ -38,6 +38,7 @@
45133 + # ping: SRC->[encap at veth2:ingress]->GRE:decap->DST
45134 + # ping replies go DST->SRC directly
45135 +
45136 ++BPF_FILE="test_lwt_ip_encap.bpf.o"
45137 + if [[ $EUID -ne 0 ]]; then
45138 + echo "This script must be run as root"
45139 + echo "FAIL"
45140 +@@ -373,14 +374,14 @@ test_egress()
45141 + # install replacement routes (LWT/eBPF), pings succeed
45142 + if [ "${ENCAP}" == "IPv4" ] ; then
45143 + ip -netns ${NS1} route add ${IPv4_DST} encap bpf xmit obj \
45144 +- test_lwt_ip_encap.o sec encap_gre dev veth1 ${VRF}
45145 ++ ${BPF_FILE} sec encap_gre dev veth1 ${VRF}
45146 + ip -netns ${NS1} -6 route add ${IPv6_DST} encap bpf xmit obj \
45147 +- test_lwt_ip_encap.o sec encap_gre dev veth1 ${VRF}
45148 ++ ${BPF_FILE} sec encap_gre dev veth1 ${VRF}
45149 + elif [ "${ENCAP}" == "IPv6" ] ; then
45150 + ip -netns ${NS1} route add ${IPv4_DST} encap bpf xmit obj \
45151 +- test_lwt_ip_encap.o sec encap_gre6 dev veth1 ${VRF}
45152 ++ ${BPF_FILE} sec encap_gre6 dev veth1 ${VRF}
45153 + ip -netns ${NS1} -6 route add ${IPv6_DST} encap bpf xmit obj \
45154 +- test_lwt_ip_encap.o sec encap_gre6 dev veth1 ${VRF}
45155 ++ ${BPF_FILE} sec encap_gre6 dev veth1 ${VRF}
45156 + else
45157 + echo " unknown encap ${ENCAP}"
45158 + TEST_STATUS=1
45159 +@@ -431,14 +432,14 @@ test_ingress()
45160 + # install replacement routes (LWT/eBPF), pings succeed
45161 + if [ "${ENCAP}" == "IPv4" ] ; then
45162 + ip -netns ${NS2} route add ${IPv4_DST} encap bpf in obj \
45163 +- test_lwt_ip_encap.o sec encap_gre dev veth2 ${VRF}
45164 ++ ${BPF_FILE} sec encap_gre dev veth2 ${VRF}
45165 + ip -netns ${NS2} -6 route add ${IPv6_DST} encap bpf in obj \
45166 +- test_lwt_ip_encap.o sec encap_gre dev veth2 ${VRF}
45167 ++ ${BPF_FILE} sec encap_gre dev veth2 ${VRF}
45168 + elif [ "${ENCAP}" == "IPv6" ] ; then
45169 + ip -netns ${NS2} route add ${IPv4_DST} encap bpf in obj \
45170 +- test_lwt_ip_encap.o sec encap_gre6 dev veth2 ${VRF}
45171 ++ ${BPF_FILE} sec encap_gre6 dev veth2 ${VRF}
45172 + ip -netns ${NS2} -6 route add ${IPv6_DST} encap bpf in obj \
45173 +- test_lwt_ip_encap.o sec encap_gre6 dev veth2 ${VRF}
45174 ++ ${BPF_FILE} sec encap_gre6 dev veth2 ${VRF}
45175 + else
45176 + echo "FAIL: unknown encap ${ENCAP}"
45177 + TEST_STATUS=1
45178 +diff --git a/tools/testing/selftests/bpf/test_lwt_seg6local.sh b/tools/testing/selftests/bpf/test_lwt_seg6local.sh
45179 +index 826f4423ce029..0efea2292d6aa 100755
45180 +--- a/tools/testing/selftests/bpf/test_lwt_seg6local.sh
45181 ++++ b/tools/testing/selftests/bpf/test_lwt_seg6local.sh
45182 +@@ -23,6 +23,7 @@
45183 +
45184 + # Kselftest framework requirement - SKIP code is 4.
45185 + ksft_skip=4
45186 ++BPF_FILE="test_lwt_seg6local.bpf.o"
45187 + readonly NS1="ns1-$(mktemp -u XXXXXX)"
45188 + readonly NS2="ns2-$(mktemp -u XXXXXX)"
45189 + readonly NS3="ns3-$(mktemp -u XXXXXX)"
45190 +@@ -117,18 +118,18 @@ ip netns exec ${NS6} ip -6 addr add fb00::109/16 dev veth10 scope link
45191 + ip netns exec ${NS1} ip -6 addr add fb00::1/16 dev lo
45192 + ip netns exec ${NS1} ip -6 route add fb00::6 dev veth1 via fb00::21
45193 +
45194 +-ip netns exec ${NS2} ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2
45195 ++ip netns exec ${NS2} ip -6 route add fb00::6 encap bpf in obj ${BPF_FILE} sec encap_srh dev veth2
45196 + ip netns exec ${NS2} ip -6 route add fd00::1 dev veth3 via fb00::43 scope link
45197 +
45198 + ip netns exec ${NS3} ip -6 route add fc42::1 dev veth5 via fb00::65
45199 +-ip netns exec ${NS3} ip -6 route add fd00::1 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec add_egr_x dev veth4
45200 ++ip netns exec ${NS3} ip -6 route add fd00::1 encap seg6local action End.BPF endpoint obj ${BPF_FILE} sec add_egr_x dev veth4
45201 +
45202 +-ip netns exec ${NS4} ip -6 route add fd00::2 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec pop_egr dev veth6
45203 ++ip netns exec ${NS4} ip -6 route add fd00::2 encap seg6local action End.BPF endpoint obj ${BPF_FILE} sec pop_egr dev veth6
45204 + ip netns exec ${NS4} ip -6 addr add fc42::1 dev lo
45205 + ip netns exec ${NS4} ip -6 route add fd00::3 dev veth7 via fb00::87
45206 +
45207 + ip netns exec ${NS5} ip -6 route add fd00::4 table 117 dev veth9 via fb00::109
45208 +-ip netns exec ${NS5} ip -6 route add fd00::3 encap seg6local action End.BPF endpoint obj test_lwt_seg6local.o sec inspect_t dev veth8
45209 ++ip netns exec ${NS5} ip -6 route add fd00::3 encap seg6local action End.BPF endpoint obj ${BPF_FILE} sec inspect_t dev veth8
45210 +
45211 + ip netns exec ${NS6} ip -6 addr add fb00::6/16 dev lo
45212 + ip netns exec ${NS6} ip -6 addr add fd00::4/16 dev lo
45213 +diff --git a/tools/testing/selftests/bpf/test_tc_edt.sh b/tools/testing/selftests/bpf/test_tc_edt.sh
45214 +index daa7d1b8d3092..76f0bd17061f9 100755
45215 +--- a/tools/testing/selftests/bpf/test_tc_edt.sh
45216 ++++ b/tools/testing/selftests/bpf/test_tc_edt.sh
45217 +@@ -5,6 +5,7 @@
45218 + # with dst port = 9000 down to 5MBps. Then it measures actual
45219 + # throughput of the flow.
45220 +
45221 ++BPF_FILE="test_tc_edt.bpf.o"
45222 + if [[ $EUID -ne 0 ]]; then
45223 + echo "This script must be run as root"
45224 + echo "FAIL"
45225 +@@ -54,7 +55,7 @@ ip -netns ${NS_DST} route add ${IP_SRC}/32 dev veth_dst
45226 + ip netns exec ${NS_SRC} tc qdisc add dev veth_src root fq
45227 + ip netns exec ${NS_SRC} tc qdisc add dev veth_src clsact
45228 + ip netns exec ${NS_SRC} tc filter add dev veth_src egress \
45229 +- bpf da obj test_tc_edt.o sec cls_test
45230 ++ bpf da obj ${BPF_FILE} sec cls_test
45231 +
45232 +
45233 + # start the listener
45234 +diff --git a/tools/testing/selftests/bpf/test_tc_tunnel.sh b/tools/testing/selftests/bpf/test_tc_tunnel.sh
45235 +index 088fcad138c98..334bdfeab9403 100755
45236 +--- a/tools/testing/selftests/bpf/test_tc_tunnel.sh
45237 ++++ b/tools/testing/selftests/bpf/test_tc_tunnel.sh
45238 +@@ -3,6 +3,7 @@
45239 + #
45240 + # In-place tunneling
45241 +
45242 ++BPF_FILE="test_tc_tunnel.bpf.o"
45243 + # must match the port that the bpf program filters on
45244 + readonly port=8000
45245 +
45246 +@@ -196,7 +197,7 @@ verify_data
45247 + # client can no longer connect
45248 + ip netns exec "${ns1}" tc qdisc add dev veth1 clsact
45249 + ip netns exec "${ns1}" tc filter add dev veth1 egress \
45250 +- bpf direct-action object-file ./test_tc_tunnel.o \
45251 ++ bpf direct-action object-file ${BPF_FILE} \
45252 + section "encap_${tuntype}_${mac}"
45253 + echo "test bpf encap without decap (expect failure)"
45254 + server_listen
45255 +@@ -296,7 +297,7 @@ fi
45256 + ip netns exec "${ns2}" ip link del dev testtun0
45257 + ip netns exec "${ns2}" tc qdisc add dev veth2 clsact
45258 + ip netns exec "${ns2}" tc filter add dev veth2 ingress \
45259 +- bpf direct-action object-file ./test_tc_tunnel.o section decap
45260 ++ bpf direct-action object-file ${BPF_FILE} section decap
45261 + echo "test bpf encap with bpf decap"
45262 + client_connect
45263 + verify_data
45264 +diff --git a/tools/testing/selftests/bpf/test_tunnel.sh b/tools/testing/selftests/bpf/test_tunnel.sh
45265 +index e9ebc67d73f70..2eaedc1d9ed30 100755
45266 +--- a/tools/testing/selftests/bpf/test_tunnel.sh
45267 ++++ b/tools/testing/selftests/bpf/test_tunnel.sh
45268 +@@ -45,6 +45,7 @@
45269 + # 5) Tunnel protocol handler, ex: vxlan_rcv, decap the packet
45270 + # 6) Forward the packet to the overlay tnl dev
45271 +
45272 ++BPF_FILE="test_tunnel_kern.bpf.o"
45273 + BPF_PIN_TUNNEL_DIR="/sys/fs/bpf/tc/tunnel"
45274 + PING_ARG="-c 3 -w 10 -q"
45275 + ret=0
45276 +@@ -545,7 +546,7 @@ test_xfrm_tunnel()
45277 + > /sys/kernel/debug/tracing/trace
45278 + setup_xfrm_tunnel
45279 + mkdir -p ${BPF_PIN_TUNNEL_DIR}
45280 +- bpftool prog loadall ./test_tunnel_kern.o ${BPF_PIN_TUNNEL_DIR}
45281 ++ bpftool prog loadall ${BPF_FILE} ${BPF_PIN_TUNNEL_DIR}
45282 + tc qdisc add dev veth1 clsact
45283 + tc filter add dev veth1 proto ip ingress bpf da object-pinned \
45284 + ${BPF_PIN_TUNNEL_DIR}/xfrm_get_state
45285 +@@ -572,7 +573,7 @@ attach_bpf()
45286 + SET=$2
45287 + GET=$3
45288 + mkdir -p ${BPF_PIN_TUNNEL_DIR}
45289 +- bpftool prog loadall ./test_tunnel_kern.o ${BPF_PIN_TUNNEL_DIR}/
45290 ++ bpftool prog loadall ${BPF_FILE} ${BPF_PIN_TUNNEL_DIR}/
45291 + tc qdisc add dev $DEV clsact
45292 + tc filter add dev $DEV egress bpf da object-pinned ${BPF_PIN_TUNNEL_DIR}/$SET
45293 + tc filter add dev $DEV ingress bpf da object-pinned ${BPF_PIN_TUNNEL_DIR}/$GET
45294 +diff --git a/tools/testing/selftests/bpf/test_xdp_meta.sh b/tools/testing/selftests/bpf/test_xdp_meta.sh
45295 +index ea69370caae30..2740322c1878b 100755
45296 +--- a/tools/testing/selftests/bpf/test_xdp_meta.sh
45297 ++++ b/tools/testing/selftests/bpf/test_xdp_meta.sh
45298 +@@ -1,5 +1,6 @@
45299 + #!/bin/sh
45300 +
45301 ++BPF_FILE="test_xdp_meta.bpf.o"
45302 + # Kselftest framework requirement - SKIP code is 4.
45303 + readonly KSFT_SKIP=4
45304 + readonly NS1="ns1-$(mktemp -u XXXXXX)"
45305 +@@ -42,11 +43,11 @@ ip netns exec ${NS2} ip addr add 10.1.1.22/24 dev veth2
45306 + ip netns exec ${NS1} tc qdisc add dev veth1 clsact
45307 + ip netns exec ${NS2} tc qdisc add dev veth2 clsact
45308 +
45309 +-ip netns exec ${NS1} tc filter add dev veth1 ingress bpf da obj test_xdp_meta.o sec t
45310 +-ip netns exec ${NS2} tc filter add dev veth2 ingress bpf da obj test_xdp_meta.o sec t
45311 ++ip netns exec ${NS1} tc filter add dev veth1 ingress bpf da obj ${BPF_FILE} sec t
45312 ++ip netns exec ${NS2} tc filter add dev veth2 ingress bpf da obj ${BPF_FILE} sec t
45313 +
45314 +-ip netns exec ${NS1} ip link set dev veth1 xdp obj test_xdp_meta.o sec x
45315 +-ip netns exec ${NS2} ip link set dev veth2 xdp obj test_xdp_meta.o sec x
45316 ++ip netns exec ${NS1} ip link set dev veth1 xdp obj ${BPF_FILE} sec x
45317 ++ip netns exec ${NS2} ip link set dev veth2 xdp obj ${BPF_FILE} sec x
45318 +
45319 + ip netns exec ${NS1} ip link set dev veth1 up
45320 + ip netns exec ${NS2} ip link set dev veth2 up
45321 +diff --git a/tools/testing/selftests/bpf/test_xdp_vlan.sh b/tools/testing/selftests/bpf/test_xdp_vlan.sh
45322 +index 810c407e0286e..fbcaa9f0120b2 100755
45323 +--- a/tools/testing/selftests/bpf/test_xdp_vlan.sh
45324 ++++ b/tools/testing/selftests/bpf/test_xdp_vlan.sh
45325 +@@ -200,11 +200,11 @@ ip netns exec ${NS2} sh -c 'ping -W 1 -c 1 100.64.41.1 || echo "Success: First p
45326 + # ----------------------------------------------------------------------
45327 + # In ns1: ingress use XDP to remove VLAN tags
45328 + export DEVNS1=veth1
45329 +-export FILE=test_xdp_vlan.o
45330 ++export BPF_FILE=test_xdp_vlan.bpf.o
45331 +
45332 + # First test: Remove VLAN by setting VLAN ID 0, using "xdp_vlan_change"
45333 + export XDP_PROG=xdp_vlan_change
45334 +-ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
45335 ++ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $BPF_FILE section $XDP_PROG
45336 +
45337 + # In ns1: egress use TC to add back VLAN tag 4011
45338 + # (del cmd)
45339 +@@ -212,7 +212,7 @@ ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PRO
45340 + #
45341 + ip netns exec ${NS1} tc qdisc add dev $DEVNS1 clsact
45342 + ip netns exec ${NS1} tc filter add dev $DEVNS1 egress \
45343 +- prio 1 handle 1 bpf da obj $FILE sec tc_vlan_push
45344 ++ prio 1 handle 1 bpf da obj $BPF_FILE sec tc_vlan_push
45345 +
45346 + # Now the namespaces can reach each-other, test with ping:
45347 + ip netns exec ${NS2} ping -i 0.2 -W 2 -c 2 $IPADDR1
45348 +@@ -226,7 +226,7 @@ ip netns exec ${NS1} ping -i 0.2 -W 2 -c 2 $IPADDR2
45349 + #
45350 + export XDP_PROG=xdp_vlan_remove_outer2
45351 + ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE off
45352 +-ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $FILE section $XDP_PROG
45353 ++ip netns exec ${NS1} ip link set $DEVNS1 $XDP_MODE object $BPF_FILE section $XDP_PROG
45354 +
45355 + # Now the namespaces should still be able reach each-other, test with ping:
45356 + ip netns exec ${NS2} ping -i 0.2 -W 2 -c 2 $IPADDR1
45357 +diff --git a/tools/testing/selftests/bpf/xdp_synproxy.c b/tools/testing/selftests/bpf/xdp_synproxy.c
45358 +index ff35320d2be97..410a1385a01dd 100644
45359 +--- a/tools/testing/selftests/bpf/xdp_synproxy.c
45360 ++++ b/tools/testing/selftests/bpf/xdp_synproxy.c
45361 +@@ -104,7 +104,8 @@ static void parse_options(int argc, char *argv[], unsigned int *ifindex, __u32 *
45362 + { "tc", no_argument, NULL, 'c' },
45363 + { NULL, 0, NULL, 0 },
45364 + };
45365 +- unsigned long mss4, mss6, wscale, ttl;
45366 ++ unsigned long mss4, wscale, ttl;
45367 ++ unsigned long long mss6;
45368 + unsigned int tcpipopts_mask = 0;
45369 +
45370 + if (argc < 2)
45371 +@@ -286,7 +287,7 @@ static int syncookie_open_bpf_maps(__u32 prog_id, int *values_map_fd, int *ports
45372 +
45373 + prog_info = (struct bpf_prog_info) {
45374 + .nr_map_ids = 8,
45375 +- .map_ids = (__u64)map_ids,
45376 ++ .map_ids = (__u64)(unsigned long)map_ids,
45377 + };
45378 + info_len = sizeof(prog_info);
45379 +
45380 +diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
45381 +index 4c52cc6f2f9cc..e8bbbdb77e0d5 100644
45382 +--- a/tools/testing/selftests/cgroup/cgroup_util.c
45383 ++++ b/tools/testing/selftests/cgroup/cgroup_util.c
45384 +@@ -555,6 +555,7 @@ int proc_mount_contains(const char *option)
45385 + ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t size)
45386 + {
45387 + char path[PATH_MAX];
45388 ++ ssize_t ret;
45389 +
45390 + if (!pid)
45391 + snprintf(path, sizeof(path), "/proc/%s/%s",
45392 +@@ -562,8 +563,8 @@ ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t
45393 + else
45394 + snprintf(path, sizeof(path), "/proc/%d/%s", pid, item);
45395 +
45396 +- size = read_text(path, buf, size);
45397 +- return size < 0 ? -1 : size;
45398 ++ ret = read_text(path, buf, size);
45399 ++ return ret < 0 ? -1 : ret;
45400 + }
45401 +
45402 + int proc_read_strstr(int pid, bool thread, const char *item, const char *needle)
45403 +diff --git a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
45404 +index 9de1d123f4f5d..a08c02abde121 100755
45405 +--- a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
45406 ++++ b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
45407 +@@ -496,8 +496,8 @@ dummy_reporter_test()
45408 +
45409 + check_reporter_info dummy healthy 3 3 10 true
45410 +
45411 +- echo 8192> $DEBUGFS_DIR/health/binary_len
45412 +- check_fail $? "Failed set dummy reporter binary len to 8192"
45413 ++ echo 8192 > $DEBUGFS_DIR/health/binary_len
45414 ++ check_err $? "Failed set dummy reporter binary len to 8192"
45415 +
45416 + local dump=$(devlink health dump show $DL_HANDLE reporter dummy -j)
45417 + check_err $? "Failed show dump of dummy reporter"
45418 +diff --git a/tools/testing/selftests/efivarfs/efivarfs.sh b/tools/testing/selftests/efivarfs/efivarfs.sh
45419 +index a90f394f9aa90..d374878cc0ba9 100755
45420 +--- a/tools/testing/selftests/efivarfs/efivarfs.sh
45421 ++++ b/tools/testing/selftests/efivarfs/efivarfs.sh
45422 +@@ -87,6 +87,11 @@ test_create_read()
45423 + {
45424 + local file=$efivarfs_mount/$FUNCNAME-$test_guid
45425 + ./create-read $file
45426 ++ if [ $? -ne 0 ]; then
45427 ++ echo "create and read $file failed"
45428 ++ file_cleanup $file
45429 ++ exit 1
45430 ++ fi
45431 + file_cleanup $file
45432 + }
45433 +
45434 +diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
45435 +index 8d26d5505808b..3eea2abf68f9e 100644
45436 +--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
45437 ++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
45438 +@@ -38,11 +38,18 @@ cnt_trace() {
45439 +
45440 + test_event_enabled() {
45441 + val=$1
45442 ++ check_times=10 # wait for 10 * SLEEP_TIME at most
45443 +
45444 +- e=`cat $EVENT_ENABLE`
45445 +- if [ "$e" != $val ]; then
45446 +- fail "Expected $val but found $e"
45447 +- fi
45448 ++ while [ $check_times -ne 0 ]; do
45449 ++ e=`cat $EVENT_ENABLE`
45450 ++ if [ "$e" == $val ]; then
45451 ++ return 0
45452 ++ fi
45453 ++ sleep $SLEEP_TIME
45454 ++ check_times=$((check_times - 1))
45455 ++ done
45456 ++
45457 ++ fail "Expected $val but found $e"
45458 + }
45459 +
45460 + run_enable_disable() {
45461 +diff --git a/tools/testing/selftests/netfilter/conntrack_icmp_related.sh b/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
45462 +index b48e1833bc896..76645aaf2b58f 100755
45463 +--- a/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
45464 ++++ b/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
45465 +@@ -35,6 +35,8 @@ cleanup() {
45466 + for i in 1 2;do ip netns del nsrouter$i;done
45467 + }
45468 +
45469 ++trap cleanup EXIT
45470 ++
45471 + ipv4() {
45472 + echo -n 192.168.$1.2
45473 + }
45474 +@@ -146,11 +148,17 @@ ip netns exec nsclient1 nft -f - <<EOF
45475 + table inet filter {
45476 + counter unknown { }
45477 + counter related { }
45478 ++ counter redir4 { }
45479 ++ counter redir6 { }
45480 + chain input {
45481 + type filter hook input priority 0; policy accept;
45482 +- meta l4proto { icmp, icmpv6 } ct state established,untracked accept
45483 +
45484 ++ icmp type "redirect" ct state "related" counter name "redir4" accept
45485 ++ icmpv6 type "nd-redirect" ct state "related" counter name "redir6" accept
45486 ++
45487 ++ meta l4proto { icmp, icmpv6 } ct state established,untracked accept
45488 + meta l4proto { icmp, icmpv6 } ct state "related" counter name "related" accept
45489 ++
45490 + counter name "unknown" drop
45491 + }
45492 + }
45493 +@@ -279,5 +287,29 @@ else
45494 + echo "ERROR: icmp error RELATED state test has failed"
45495 + fi
45496 +
45497 +-cleanup
45498 ++# add 'bad' route, expect icmp REDIRECT to be generated
45499 ++ip netns exec nsclient1 ip route add 192.168.1.42 via 192.168.1.1
45500 ++ip netns exec nsclient1 ip route add dead:1::42 via dead:1::1
45501 ++
45502 ++ip netns exec "nsclient1" ping -q -c 2 192.168.1.42 > /dev/null
45503 ++
45504 ++expect="packets 1 bytes 112"
45505 ++check_counter nsclient1 "redir4" "$expect"
45506 ++if [ $? -ne 0 ];then
45507 ++ ret=1
45508 ++fi
45509 ++
45510 ++ip netns exec "nsclient1" ping -c 1 dead:1::42 > /dev/null
45511 ++expect="packets 1 bytes 192"
45512 ++check_counter nsclient1 "redir6" "$expect"
45513 ++if [ $? -ne 0 ];then
45514 ++ ret=1
45515 ++fi
45516 ++
45517 ++if [ $ret -eq 0 ];then
45518 ++ echo "PASS: icmp redirects had RELATED state"
45519 ++else
45520 ++ echo "ERROR: icmp redirect RELATED state test has failed"
45521 ++fi
45522 ++
45523 + exit $ret
45524 +diff --git a/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c b/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
45525 +index fbbdffdb2e5d2..f20d1c166d1e4 100644
45526 +--- a/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
45527 ++++ b/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
45528 +@@ -24,6 +24,7 @@ static int check_cpu_dscr_default(char *file, unsigned long val)
45529 + rc = read(fd, buf, sizeof(buf));
45530 + if (rc == -1) {
45531 + perror("read() failed");
45532 ++ close(fd);
45533 + return 1;
45534 + }
45535 + close(fd);
45536 +@@ -65,8 +66,10 @@ static int check_all_cpu_dscr_defaults(unsigned long val)
45537 + if (access(file, F_OK))
45538 + continue;
45539 +
45540 +- if (check_cpu_dscr_default(file, val))
45541 ++ if (check_cpu_dscr_default(file, val)) {
45542 ++ closedir(sysfs);
45543 + return 1;
45544 ++ }
45545 + }
45546 + closedir(sysfs);
45547 + return 0;
45548 +diff --git a/tools/testing/selftests/proc/proc-uptime-002.c b/tools/testing/selftests/proc/proc-uptime-002.c
45549 +index e7ceabed7f51f..7d0aa22bdc12b 100644
45550 +--- a/tools/testing/selftests/proc/proc-uptime-002.c
45551 ++++ b/tools/testing/selftests/proc/proc-uptime-002.c
45552 +@@ -17,6 +17,7 @@
45553 + // while shifting across CPUs.
45554 + #undef NDEBUG
45555 + #include <assert.h>
45556 ++#include <errno.h>
45557 + #include <unistd.h>
45558 + #include <sys/syscall.h>
45559 + #include <stdlib.h>
45560 +@@ -54,7 +55,7 @@ int main(void)
45561 + len += sizeof(unsigned long);
45562 + free(m);
45563 + m = malloc(len);
45564 +- } while (sys_sched_getaffinity(0, len, m) == -EINVAL);
45565 ++ } while (sys_sched_getaffinity(0, len, m) == -1 && errno == EINVAL);
45566 +
45567 + fd = open("/proc/uptime", O_RDONLY);
45568 + assert(fd >= 0);
45569
45570 diff --git a/5021_sched-alt-missing-rq-lock-irq-function.patch b/5021_sched-alt-missing-rq-lock-irq-function.patch
45571 new file mode 100644
45572 index 00000000..04cca612
45573 --- /dev/null
45574 +++ b/5021_sched-alt-missing-rq-lock-irq-function.patch
45575 @@ -0,0 +1,30 @@
45576 +From 4157360d2e1cbdfb8065f151dbe057b17188a23f Mon Sep 17 00:00:00 2001
45577 +From: Tor Vic <torvic9@×××××××.org>
45578 +Date: Mon, 7 Nov 2022 15:11:54 +0100
45579 +Subject: [PATCH] sched/alt: Add missing rq_lock_irq() function to header file
45580 +
45581 +---
45582 + kernel/sched/alt_sched.h | 7 +++++++
45583 + 1 file changed, 7 insertions(+)
45584 +
45585 +diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
45586 +index 93ff3bddd36f..a00bc84b93b2 100644
45587 +--- a/kernel/sched/alt_sched.h
45588 ++++ b/kernel/sched/alt_sched.h
45589 +@@ -387,6 +387,13 @@ task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
45590 + raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
45591 + }
45592 +
45593 ++static inline void
45594 ++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
45595 ++ __acquires(rq->lock)
45596 ++{
45597 ++ raw_spin_lock_irq(&rq->lock);
45598 ++}
45599 ++
45600 + static inline void
45601 + rq_lock(struct rq *rq, struct rq_flags *rf)
45602 + __acquires(rq->lock)
45603 +--
45604 +GitLab
45605 +