Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Wed, 22 Dec 2021 14:05:46
Message-Id: 1640181930.a65240864c0cc8dd0f787643c5f01b65e5dd5685.mpagano@gentoo
commit: a65240864c0cc8dd0f787643c5f01b65e5dd5685
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 22 14:05:30 2021 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 22 14:05:30 2021 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a6524086

Linux patch 5.10.88

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

0000_README | 4 +
1087_linux-5.10.88.patch | 3012 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3016 insertions(+)

diff --git a/0000_README b/0000_README
index b743d7ac..a6cd68ff 100644
--- a/0000_README
+++ b/0000_README
@@ -391,6 +391,10 @@ Patch: 1086_linux-5.10.87.patch
From: http://www.kernel.org
Desc: Linux 5.10.87

+Patch: 1087_linux-5.10.88.patch
+From: http://www.kernel.org
+Desc: Linux 5.10.88
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1087_linux-5.10.88.patch b/1087_linux-5.10.88.patch
new file mode 100644
index 00000000..2deec06c
--- /dev/null
+++ b/1087_linux-5.10.88.patch
@@ -0,0 +1,3012 @@
+diff --git a/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst b/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
+index f1d5233e5e510..0a233b17c664e 100644
+--- a/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
++++ b/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
+@@ -440,6 +440,22 @@ NOTE: For 82599-based network connections, if you are enabling jumbo frames in
+ a virtual function (VF), jumbo frames must first be enabled in the physical
+ function (PF). The VF MTU setting cannot be larger than the PF MTU.
+
++NBASE-T Support
++---------------
++The ixgbe driver supports NBASE-T on some devices. However, the advertisement
++of NBASE-T speeds is suppressed by default, to accommodate broken network
++switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
++command to enable advertising NBASE-T speeds on devices which support it::
++
++ ethtool -s eth? advertise 0x1800000001028
++
++On Linux systems with INTERFACES(5), this can be specified as a pre-up command
++in /etc/network/interfaces so that the interface is always brought up with
++NBASE-T support, e.g.::
++
++ iface eth? inet dhcp
++ pre-up ethtool -s eth? advertise 0x1800000001028 || true
++
+ Generic Receive Offload, aka GRO
+ --------------------------------
+ The driver supports the in-kernel software implementation of GRO. GRO has
+diff --git a/Makefile b/Makefile
+index d627f4ae5af56..0b74b414f4e57 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 87
++SUBLEVEL = 88
+ EXTRAVERSION =
+ NAME = Dare mighty things
+
+diff --git a/arch/arm/boot/dts/imx6ull-pinfunc.h b/arch/arm/boot/dts/imx6ull-pinfunc.h
+index eb025a9d47592..7328d4ef8559f 100644
+--- a/arch/arm/boot/dts/imx6ull-pinfunc.h
++++ b/arch/arm/boot/dts/imx6ull-pinfunc.h
+@@ -82,6 +82,6 @@
+ #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0 0x01FC 0x0488 0x0000 0x9 0x0
+-#define MX6ULL_PAD_CSI_DATA07__ESAI_T0 0x0200 0x048C 0x0000 0x9 0x0
++#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0 0x0200 0x048C 0x0000 0x9 0x0
+
+ #endif /* __DTS_IMX6ULL_PINFUNC_H */
+diff --git a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
+index 2b645642b9352..2a745522404d6 100644
+--- a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
++++ b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
+@@ -12,7 +12,7 @@
+ flash0: n25q00@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q00aa";
++ compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <100000000>;
+
+diff --git a/arch/arm/boot/dts/socfpga_arria5_socdk.dts b/arch/arm/boot/dts/socfpga_arria5_socdk.dts
+index 90e676e7019f2..1b02d46496a85 100644
+--- a/arch/arm/boot/dts/socfpga_arria5_socdk.dts
++++ b/arch/arm/boot/dts/socfpga_arria5_socdk.dts
+@@ -119,7 +119,7 @@
+ flash: flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q256a";
++ compatible = "micron,n25q256a", "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <100000000>;
+
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
+index 6f138b2b26163..51bb436784e24 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
+@@ -124,7 +124,7 @@
+ flash0: n25q00@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q00";
++ compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ reg = <0>; /* chip select */
+ spi-max-frequency = <100000000>;
+
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+index c155ff02eb6e0..cae9ddd5ed38b 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+@@ -169,7 +169,7 @@
+ flash: flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q00";
++ compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <100000000>;
+
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+index 8d5d3996f6f27..ca18b959e6559 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+@@ -80,7 +80,7 @@
+ flash: flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q256a";
++ compatible = "micron,n25q256a", "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <100000000>;
+ m25p,fast-read;
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
+index 99a71757cdf46..3f7aa7bf0863a 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
+@@ -116,7 +116,7 @@
+ flash0: n25q512a@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q512a";
++ compatible = "micron,n25q512a", "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <100000000>;
+
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
+index a060718758b67..25874e1b9c829 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
+@@ -224,7 +224,7 @@
+ n25q128@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q128";
++ compatible = "micron,n25q128", "jedec,spi-nor";
+ reg = <0>; /* chip select */
+ spi-max-frequency = <100000000>;
+ m25p,fast-read;
+@@ -241,7 +241,7 @@
+ n25q00@1 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+- compatible = "n25q00";
++ compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ reg = <1>; /* chip select */
+ spi-max-frequency = <100000000>;
+ m25p,fast-read;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 05ee062548e4f..f4d7bb75707df 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -866,11 +866,12 @@
+ assigned-clocks = <&clk IMX8MM_CLK_ENET_AXI>,
+ <&clk IMX8MM_CLK_ENET_TIMER>,
+ <&clk IMX8MM_CLK_ENET_REF>,
+- <&clk IMX8MM_CLK_ENET_TIMER>;
++ <&clk IMX8MM_CLK_ENET_PHY_REF>;
+ assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_266M>,
+ <&clk IMX8MM_SYS_PLL2_100M>,
+- <&clk IMX8MM_SYS_PLL2_125M>;
+- assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++ <&clk IMX8MM_SYS_PLL2_125M>,
++ <&clk IMX8MM_SYS_PLL2_50M>;
++ assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ fsl,num-tx-queues = <3>;
+ fsl,num-rx-queues = <3>;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index 16c7202885d70..aea723eb2ba3f 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -753,11 +753,12 @@
+ assigned-clocks = <&clk IMX8MN_CLK_ENET_AXI>,
+ <&clk IMX8MN_CLK_ENET_TIMER>,
+ <&clk IMX8MN_CLK_ENET_REF>,
+- <&clk IMX8MN_CLK_ENET_TIMER>;
++ <&clk IMX8MN_CLK_ENET_PHY_REF>;
+ assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
+ <&clk IMX8MN_SYS_PLL2_100M>,
+- <&clk IMX8MN_SYS_PLL2_125M>;
+- assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++ <&clk IMX8MN_SYS_PLL2_125M>,
++ <&clk IMX8MN_SYS_PLL2_50M>;
++ assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ fsl,num-tx-queues = <3>;
+ fsl,num-rx-queues = <3>;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+index ad66f1286d95c..c13b4a02d12f8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+@@ -62,6 +62,8 @@
+ reg = <1>;
+ eee-broken-1000t;
+ reset-gpios = <&gpio4 2 GPIO_ACTIVE_LOW>;
++ reset-assert-us = <10000>;
++ reset-deassert-us = <80000>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 03ef0e5f909e4..acee71ca32d83 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -725,11 +725,12 @@
+ assigned-clocks = <&clk IMX8MP_CLK_ENET_AXI>,
+ <&clk IMX8MP_CLK_ENET_TIMER>,
+ <&clk IMX8MP_CLK_ENET_REF>,
+- <&clk IMX8MP_CLK_ENET_TIMER>;
++ <&clk IMX8MP_CLK_ENET_PHY_REF>;
+ assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_266M>,
+ <&clk IMX8MP_SYS_PLL2_100M>,
+- <&clk IMX8MP_SYS_PLL2_125M>;
+- assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++ <&clk IMX8MP_SYS_PLL2_125M>,
++ <&clk IMX8MP_SYS_PLL2_50M>;
++ assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ fsl,num-tx-queues = <3>;
+ fsl,num-rx-queues = <3>;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index bce6f8b7db436..fbcb9531cc70d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -91,7 +91,7 @@
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+- vim-supply = <&vcc_io>;
++ vin-supply = <&vcc_io>;
+ };
+
+ vdd_core: vdd-core {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
+index 635afdd99122f..2c644ac1f84b9 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
+@@ -699,7 +699,6 @@
+ &sdhci {
+ bus-width = <8>;
+ mmc-hs400-1_8v;
+- mmc-hs400-enhanced-strobe;
+ non-removable;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
+index 1fa80ac15464b..88984b5e67b6e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
+@@ -49,7 +49,7 @@
+ regulator-boot-on;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+- vim-supply = <&vcc3v3_sys>;
++ vin-supply = <&vcc3v3_sys>;
+ };
+
+ vcc3v3_sys: vcc3v3-sys {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+index 678a336010bf8..f121203081b97 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+@@ -459,7 +459,7 @@
+ status = "okay";
+
+ bt656-supply = <&vcc_3v0>;
+- audio-supply = <&vcc_3v0>;
++ audio-supply = <&vcc1v8_codec>;
+ sdmmc-supply = <&vcc_sdio>;
+ gpio1830-supply = <&vcc_3v0>;
+ };
+diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
+index 83f4a6389a282..d7081e9af65c7 100644
+--- a/arch/powerpc/platforms/85xx/smp.c
++++ b/arch/powerpc/platforms/85xx/smp.c
+@@ -220,7 +220,7 @@ static int smp_85xx_start_cpu(int cpu)
+ local_irq_save(flags);
+ hard_irq_disable();
+
+- if (qoriq_pm_ops)
++ if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
+ qoriq_pm_ops->cpu_up_prepare(cpu);
+
+ /* if cpu is not spinning, reset it */
+@@ -292,7 +292,7 @@ static int smp_85xx_kick_cpu(int nr)
+ booting_thread_hwid = cpu_thread_in_core(nr);
+ primary = cpu_first_thread_sibling(nr);
+
+- if (qoriq_pm_ops)
++ if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
+ qoriq_pm_ops->cpu_up_prepare(nr);
+
+ /*
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index e7435f3a3d2d2..76cd09879eaf4 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -277,6 +277,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ {
+ Elf_Rela *relas;
+ int i, r_type;
++ int ret;
+
+ relas = (void *)pi->ehdr + relsec->sh_offset;
+
+@@ -311,7 +312,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ addr = section->sh_addr + relas[i].r_offset;
+
+ r_type = ELF64_R_TYPE(relas[i].r_info);
+- arch_kexec_do_relocs(r_type, loc, val, addr);
++ ret = arch_kexec_do_relocs(r_type, loc, val, addr);
++ if (ret) {
++ pr_err("Unknown rela relocation: %d\n", r_type);
++ return -ENOEXEC;
++ }
+ }
+ return 0;
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b885063dc393f..4f828cac0273e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3065,7 +3065,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+
+ if (!msr_info->host_initiated)
+ return 1;
+- if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent))
++ if (kvm_get_msr_feature(&msr_ent))
+ return 1;
+ if (data & ~msr_ent.data)
+ return 1;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index e95b93f72bd5c..9af32b44b7173 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2246,7 +2246,14 @@ static void ioc_timer_fn(struct timer_list *timer)
+ hwm = current_hweight_max(iocg);
+ new_hwi = hweight_after_donation(iocg, old_hwi, hwm,
+ usage, &now);
+- if (new_hwi < hwm) {
++ /*
++ * Donation calculation assumes hweight_after_donation
++ * to be positive, a condition that a donor w/ hwa < 2
++ * can't meet. Don't bother with donation if hwa is
++ * below 2. It's not gonna make a meaningful difference
++ * anyway.
++ */
++ if (new_hwi < hwm && hwa >= 2) {
+ iocg->hweight_donating = hwa;
+ iocg->hweight_after_donation = new_hwi;
+ list_add(&iocg->surplus_list, &surpluses);
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 48b8934970f36..a0e788b648214 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2870,8 +2870,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
+ goto invalid_fld;
+ }
+
+- if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)
+- tf->protocol = ATA_PROT_NCQ_NODATA;
++ if ((cdb[2 + cdb_offset] & 0x3) == 0) {
++ /*
++ * When T_LENGTH is zero (No data is transferred), dir should
++ * be DMA_NONE.
++ */
++ if (scmd->sc_data_direction != DMA_NONE) {
++ fp = 2 + cdb_offset;
++ goto invalid_fld;
++ }
++
++ if (ata_is_ncq(tf->protocol))
++ tf->protocol = ATA_PROT_NCQ_NODATA;
++ }
+
+ /* enable LBA */
+ tf->flags |= ATA_TFLAG_LBA;
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index ff7b62597b525..22842d2938c28 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1573,9 +1573,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ unsigned long flags;
+ struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
+ struct blkfront_info *info = rinfo->dev_info;
++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
+
+- if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
++ if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
++ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
+ return IRQ_HANDLED;
++ }
+
+ spin_lock_irqsave(&rinfo->ring_lock, flags);
+ again:
+@@ -1591,6 +1594,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ unsigned long id;
+ unsigned int op;
+
++ eoiflag = 0;
++
+ RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
+ id = bret.id;
+
+@@ -1707,6 +1712,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+
+ spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+
++ xen_irq_lateeoi(irq, eoiflag);
++
+ return IRQ_HANDLED;
+
+ err:
+@@ -1714,6 +1721,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+
+ spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+
++ /* No EOI in order to avoid further interrupts. */
++
+ pr_alert("%s disabled for further use\n", info->gd->disk_name);
+ return IRQ_HANDLED;
+ }
+@@ -1753,8 +1762,8 @@ static int setup_blkring(struct xenbus_device *dev,
+ if (err)
+ goto fail;
+
+- err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
+- "blkif", rinfo);
++ err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
++ 0, "blkif", rinfo);
+ if (err <= 0) {
+ xenbus_dev_fatal(dev, err,
+ "bind_evtchn_to_irqhandler failed");
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 43603dc9da430..18f0650c5d405 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -2443,12 +2443,11 @@ static void sysc_reinit_modules(struct sysc_soc_info *soc)
+ struct sysc_module *module;
+ struct list_head *pos;
+ struct sysc *ddata;
+- int error = 0;
+
+ list_for_each(pos, &sysc_soc->restored_modules) {
+ module = list_entry(pos, struct sysc_module, node);
+ ddata = module->ddata;
+- error = sysc_reinit_module(ddata, ddata->enabled);
++ sysc_reinit_module(ddata, ddata->enabled);
+ }
+ }
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 61c78714c0957..515ef39c4610c 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3389,6 +3389,14 @@ static int __clk_core_init(struct clk_core *core)
+
+ clk_prepare_lock();
+
++ /*
++ * Set hw->core after grabbing the prepare_lock to synchronize with
++ * callers of clk_core_fill_parent_index() where we treat hw->core
++ * being NULL as the clk not being registered yet. This is crucial so
++ * that clks aren't parented until their parent is fully registered.
++ */
++ core->hw->core = core;
++
+ ret = clk_pm_runtime_get(core);
+ if (ret)
+ goto unlock;
+@@ -3557,8 +3565,10 @@ static int __clk_core_init(struct clk_core *core)
+ out:
+ clk_pm_runtime_put(core);
+ unlock:
+- if (ret)
++ if (ret) {
+ hlist_del_init(&core->child_node);
++ core->hw->core = NULL;
++ }
+
+ clk_prepare_unlock();
+
+@@ -3804,7 +3814,6 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ core->num_parents = init->num_parents;
+ core->min_rate = 0;
+ core->max_rate = ULONG_MAX;
+- hw->core = core;
+
+ ret = clk_core_populate_parent_map(core, init);
+ if (ret)
+@@ -3822,7 +3831,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ goto fail_create_clk;
+ }
+
+- clk_core_link_consumer(hw->core, hw->clk);
++ clk_core_link_consumer(core, hw->clk);
+
+ ret = __clk_core_init(core);
+ if (!ret)
+diff --git a/drivers/dma/st_fdma.c b/drivers/dma/st_fdma.c
+index 962b6e05287b5..d95c421877fb7 100644
+--- a/drivers/dma/st_fdma.c
++++ b/drivers/dma/st_fdma.c
+@@ -874,4 +874,4 @@ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver");
+ MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@××.com>");
+ MODULE_AUTHOR("Peter Griffin <peter.griffin@××××××.org>");
+-MODULE_ALIAS("platform: " DRIVER_NAME);
++MODULE_ALIAS("platform:" DRIVER_NAME);
+diff --git a/drivers/firmware/scpi_pm_domain.c b/drivers/firmware/scpi_pm_domain.c
+index 51201600d789b..800673910b511 100644
+--- a/drivers/firmware/scpi_pm_domain.c
++++ b/drivers/firmware/scpi_pm_domain.c
+@@ -16,7 +16,6 @@ struct scpi_pm_domain {
+ struct generic_pm_domain genpd;
+ struct scpi_ops *ops;
+ u32 domain;
+- char name[30];
+ };
+
+ /*
+@@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev)
+
+ scpi_pd->domain = i;
+ scpi_pd->ops = scpi_ops;
+- sprintf(scpi_pd->name, "%pOFn.%d", np, i);
+- scpi_pd->genpd.name = scpi_pd->name;
++ scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL,
++ "%pOFn.%d", np, i);
++ if (!scpi_pd->genpd.name) {
++ dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n",
++ np, i);
++ continue;
++ }
+ scpi_pd->genpd.power_off = scpi_pd_power_off;
+ scpi_pd->genpd.power_on = scpi_pd_power_on;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index bea451a39d601..b19f7bd37781f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3002,8 +3002,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
+ AMD_PG_SUPPORT_CP |
+ AMD_PG_SUPPORT_GDS |
+ AMD_PG_SUPPORT_RLC_SMU_HS)) {
+- WREG32(mmRLC_JUMP_TABLE_RESTORE,
+- adev->gfx.rlc.cp_table_gpu_addr >> 8);
++ WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE,
++ adev->gfx.rlc.cp_table_gpu_addr >> 8);
+ gfx_v9_0_init_gfx_power_gating(adev);
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+index 7907c9e0b5dec..b938fd12da4d5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+@@ -187,6 +187,9 @@ int smu_v12_0_fini_smc_tables(struct smu_context *smu)
+ kfree(smu_table->watermarks_table);
+ smu_table->watermarks_table = NULL;
+
++ kfree(smu_table->gpu_metrics_table);
++ smu_table->gpu_metrics_table = NULL;
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index a3c2f76668abe..d27f2840b9555 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -857,7 +857,10 @@ static void ast_crtc_reset(struct drm_crtc *crtc)
+ if (crtc->state)
+ crtc->funcs->atomic_destroy_state(crtc, crtc->state);
+
+- __drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++ if (ast_state)
++ __drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++ else
++ __drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+
+ static struct drm_crtc_state *
+diff --git a/drivers/input/touchscreen/of_touchscreen.c b/drivers/input/touchscreen/of_touchscreen.c
+index 97342e14b4f18..8719a8b0e8682 100644
+--- a/drivers/input/touchscreen/of_touchscreen.c
++++ b/drivers/input/touchscreen/of_touchscreen.c
+@@ -79,27 +79,27 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
+
+ data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-x",
+ input_abs_get_min(input, axis_x),
+- &minimum) |
+- touchscreen_get_prop_u32(dev, "touchscreen-size-x",
+- input_abs_get_max(input,
+- axis_x) + 1,
+- &maximum) |
+- touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
+- input_abs_get_fuzz(input, axis_x),
+- &fuzz);
++ &minimum);
++ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-x",
++ input_abs_get_max(input,
++ axis_x) + 1,
++ &maximum);
++ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
++ input_abs_get_fuzz(input, axis_x),
++ &fuzz);
+ if (data_present)
+ touchscreen_set_params(input, axis_x, minimum, maximum - 1, fuzz);
+
+ data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-y",
+ input_abs_get_min(input, axis_y),
+- &minimum) |
+- touchscreen_get_prop_u32(dev, "touchscreen-size-y",
+- input_abs_get_max(input,
+- axis_y) + 1,
+- &maximum) |
+- touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
+- input_abs_get_fuzz(input, axis_y),
+- &fuzz);
++ &minimum);
++ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-y",
++ input_abs_get_max(input,
++ axis_y) + 1,
++ &maximum);
++ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
++ input_abs_get_fuzz(input, axis_y),
++ &fuzz);
+ if (data_present)
+ touchscreen_set_params(input, axis_y, minimum, maximum - 1, fuzz);
+
+@@ -107,11 +107,11 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
+ data_present = touchscreen_get_prop_u32(dev,
+ "touchscreen-max-pressure",
+ input_abs_get_max(input, axis),
+- &maximum) |
+- touchscreen_get_prop_u32(dev,
+- "touchscreen-fuzz-pressure",
+- input_abs_get_fuzz(input, axis),
+- &fuzz);
++ &maximum);
++ data_present |= touchscreen_get_prop_u32(dev,
++ "touchscreen-fuzz-pressure",
++ input_abs_get_fuzz(input, axis),
++ &fuzz);
+ if (data_present)
+ touchscreen_set_params(input, axis, 0, maximum, fuzz);
+
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
+index 9e4d1212f4c16..63f2baed3c8a6 100644
+--- a/drivers/md/persistent-data/dm-btree-remove.c
++++ b/drivers/md/persistent-data/dm-btree-remove.c
+@@ -423,9 +423,9 @@ static int rebalance_children(struct shadow_spine *s,
+
+ memcpy(n, dm_block_data(child),
+ dm_bm_block_size(dm_tm_get_bm(info->tm)));
+- dm_tm_unlock(info->tm, child);
+
+ dm_tm_dec(info->tm, dm_block_location(child));
++ dm_tm_unlock(info->tm, child);
+ return 0;
+ }
+
+diff --git a/drivers/media/usb/dvb-usb-v2/mxl111sf.c b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+index 7865fa0a82957..cd5861a30b6f8 100644
+--- a/drivers/media/usb/dvb-usb-v2/mxl111sf.c
++++ b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+@@ -931,8 +931,6 @@ static int mxl111sf_init(struct dvb_usb_device *d)
+ .len = sizeof(eeprom), .buf = eeprom },
+ };
+
+- mutex_init(&state->msg_lock);
+
+ ret = get_chip_info(state);
+ if (mxl_fail(ret))
+ pr_err("failed to get chip info during probe");
+@@ -1074,6 +1072,14 @@ static int mxl111sf_get_stream_config_dvbt(struct dvb_frontend *fe,
+ return 0;
+ }
+
++static int mxl111sf_probe(struct dvb_usb_device *dev)
++{
++ struct mxl111sf_state *state = d_to_priv(dev);
++
++ mutex_init(&state->msg_lock);
++ return 0;
++}
++
+ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
+ .driver_name = KBUILD_MODNAME,
+ .owner = THIS_MODULE,
+@@ -1083,6 +1089,7 @@ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
+ .generic_bulk_ctrl_endpoint = 0x02,
+ .generic_bulk_ctrl_endpoint_response = 0x81,
+
++ .probe = mxl111sf_probe,
+ .i2c_algo = &mxl111sf_i2c_algo,
+ .frontend_attach = mxl111sf_frontend_attach_dvbt,
+ .tuner_attach = mxl111sf_attach_tuner,
+@@ -1124,6 +1131,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc = {
+ .generic_bulk_ctrl_endpoint = 0x02,
+ .generic_bulk_ctrl_endpoint_response = 0x81,
+
++ .probe = mxl111sf_probe,
+ .i2c_algo = &mxl111sf_i2c_algo,
+ .frontend_attach = mxl111sf_frontend_attach_atsc,
+ .tuner_attach = mxl111sf_attach_tuner,
+@@ -1165,6 +1173,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mh = {
+ .generic_bulk_ctrl_endpoint = 0x02,
+ .generic_bulk_ctrl_endpoint_response = 0x81,
+
++ .probe = mxl111sf_probe,
+ .i2c_algo = &mxl111sf_i2c_algo,
+ .frontend_attach = mxl111sf_frontend_attach_mh,
+ .tuner_attach = mxl111sf_attach_tuner,
+@@ -1233,6 +1242,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc_mh = {
+ .generic_bulk_ctrl_endpoint = 0x02,
+ .generic_bulk_ctrl_endpoint_response = 0x81,
+
++ .probe = mxl111sf_probe,
+ .i2c_algo = &mxl111sf_i2c_algo,
+ .frontend_attach = mxl111sf_frontend_attach_atsc_mh,
+ .tuner_attach = mxl111sf_attach_tuner,
+@@ -1311,6 +1321,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury = {
+ .generic_bulk_ctrl_endpoint = 0x02,
+ .generic_bulk_ctrl_endpoint_response = 0x81,
+
++ .probe = mxl111sf_probe,
+ .i2c_algo = &mxl111sf_i2c_algo,
+ .frontend_attach = mxl111sf_frontend_attach_mercury,
+ .tuner_attach = mxl111sf_attach_tuner,
+@@ -1381,6 +1392,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury_mh = {
+ .generic_bulk_ctrl_endpoint = 0x02,
+ .generic_bulk_ctrl_endpoint_response = 0x81,
+
++ .probe = mxl111sf_probe,
+ .i2c_algo = &mxl111sf_i2c_algo,
+ .frontend_attach = mxl111sf_frontend_attach_mercury_mh,
+ .tuner_attach = mxl111sf_attach_tuner,
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 0404aafd5ce56..1a703b95208b0 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1304,11 +1304,11 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ struct bcm_sysport_priv *priv = netdev_priv(dev);
+ struct device *kdev = &priv->pdev->dev;
+ struct bcm_sysport_tx_ring *ring;
++ unsigned long flags, desc_flags;
+ struct bcm_sysport_cb *cb;
+ struct netdev_queue *txq;
+ u32 len_status, addr_lo;
+ unsigned int skb_len;
+- unsigned long flags;
+ dma_addr_t mapping;
+ u16 queue;
+ int ret;
+@@ -1368,8 +1368,10 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ ring->desc_count--;
+
+ /* Ports are latched, so write upper address first */
++ spin_lock_irqsave(&priv->desc_lock, desc_flags);
+ tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
+ tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
++ spin_unlock_irqrestore(&priv->desc_lock, desc_flags);
+
+ /* Check ring space and update SW control flow */
+ if (ring->desc_count == 0)
+@@ -2008,6 +2010,7 @@ static int bcm_sysport_open(struct net_device *dev)
+ }
+
+ /* Initialize both hardware and software ring */
++ spin_lock_init(&priv->desc_lock);
+ for (i = 0; i < dev->num_tx_queues; i++) {
+ ret = bcm_sysport_init_tx_ring(priv, i);
+ if (ret) {
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
+index 3a5cb6f128f57..1276e330e9d03 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.h
++++ b/drivers/net/ethernet/broadcom/bcmsysport.h
+@@ -742,6 +742,7 @@ struct bcm_sysport_priv {
+ int wol_irq;
+
+ /* Transmit rings */
++ spinlock_t desc_lock;
+ struct bcm_sysport_tx_ring *tx_rings;
+
+ /* Receive queue */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index 5b2dcd97c1078..b8e5ca6700ed5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -109,7 +109,8 @@ int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev,
+
+ memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg));
+
+- trace_hclge_vf_mbx_send(hdev, req);
++ if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state))
++ trace_hclge_vf_mbx_send(hdev, req);
+
+ /* synchronous send */
+ if (need_resp) {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index d5432d1448c05..1662c0985eca4 100644
845 +--- a/drivers/net/ethernet/intel/igb/igb_main.c
846 ++++ b/drivers/net/ethernet/intel/igb/igb_main.c
847 +@@ -7654,6 +7654,20 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
848 + struct vf_mac_filter *entry = NULL;
849 + int ret = 0;
850 +
851 ++ if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
852 ++ !vf_data->trusted) {
853 ++ dev_warn(&pdev->dev,
854 ++ "VF %d requested MAC filter but is administratively denied\n",
855 ++ vf);
856 ++ return -EINVAL;
857 ++ }
858 ++ if (!is_valid_ether_addr(addr)) {
859 ++ dev_warn(&pdev->dev,
860 ++ "VF %d attempted to set invalid MAC filter\n",
861 ++ vf);
862 ++ return -EINVAL;
863 ++ }
864 ++
865 + switch (info) {
866 + case E1000_VF_MAC_FILTER_CLR:
867 + /* remove all unicast MAC filters related to the current VF */
868 +@@ -7667,20 +7681,6 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
869 + }
870 + break;
871 + case E1000_VF_MAC_FILTER_ADD:
872 +- if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
873 +- !vf_data->trusted) {
874 +- dev_warn(&pdev->dev,
875 +- "VF %d requested MAC filter but is administratively denied\n",
876 +- vf);
877 +- return -EINVAL;
878 +- }
879 +- if (!is_valid_ether_addr(addr)) {
880 +- dev_warn(&pdev->dev,
881 +- "VF %d attempted to set invalid MAC filter\n",
882 +- vf);
883 +- return -EINVAL;
884 +- }
885 +-
886 + /* try to find empty slot in the list */
887 + list_for_each(pos, &adapter->vf_macs.l) {
888 + entry = list_entry(pos, struct vf_mac_filter, l);
889 +diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
890 +index 07c9e9e0546f5..fe8c0a26b7201 100644
891 +--- a/drivers/net/ethernet/intel/igbvf/netdev.c
892 ++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
893 +@@ -2873,6 +2873,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
894 + return 0;
895 +
896 + err_hw_init:
897 ++ netif_napi_del(&adapter->rx_ring->napi);
898 + kfree(adapter->tx_ring);
899 + kfree(adapter->rx_ring);
900 + err_sw_init:
901 +diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
902 +index 7ec04e48860c6..553d6bc78e6bd 100644
903 +--- a/drivers/net/ethernet/intel/igc/igc_i225.c
904 ++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
905 +@@ -636,7 +636,7 @@ s32 igc_set_ltr_i225(struct igc_hw *hw, bool link)
906 + ltrv = rd32(IGC_LTRMAXV);
907 + if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) {
908 + ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max |
909 +- (scale_min << IGC_LTRMAXV_SCALE_SHIFT);
910 ++ (scale_max << IGC_LTRMAXV_SCALE_SHIFT);
911 + wr32(IGC_LTRMAXV, ltrv);
912 + }
913 + }
914 +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
915 +index ffe322136c584..a3a02e2f92f64 100644
916 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
917 ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
918 +@@ -5532,6 +5532,10 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
919 + if (!speed && hw->mac.ops.get_link_capabilities) {
920 + ret = hw->mac.ops.get_link_capabilities(hw, &speed,
921 + &autoneg);
922 ++ /* remove NBASE-T speeds from default autonegotiation
923 ++ * to accommodate broken network switches in the field
924 ++ * which cannot cope with advertised NBASE-T speeds
925 ++ */
926 + speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
927 + IXGBE_LINK_SPEED_2_5GB_FULL);
928 + }
929 +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
930 +index 5e339afa682a6..37f2bc6de4b65 100644
931 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
932 ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
933 +@@ -3405,6 +3405,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
934 + /* flush pending Tx transactions */
935 + ixgbe_clear_tx_pending(hw);
936 +
937 ++ /* set MDIO speed before talking to the PHY in case it's the 1st time */
938 ++ ixgbe_set_mdio_speed(hw);
939 ++
940 + /* PHY ops must be identified and initialized prior to reset */
941 + status = hw->phy.ops.init(hw);
942 + if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
943 +diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c
944 +index 3148fe7703564..cb6897c2193c2 100644
945 +--- a/drivers/net/ethernet/sfc/ef100_nic.c
946 ++++ b/drivers/net/ethernet/sfc/ef100_nic.c
947 +@@ -597,6 +597,9 @@ static size_t ef100_update_stats(struct efx_nic *efx,
948 + ef100_common_stat_mask(mask);
949 + ef100_ethtool_stat_mask(mask);
950 +
951 ++ if (!mc_stats)
952 ++ return 0;
953 ++
954 + efx_nic_copy_stats(efx, mc_stats);
955 + efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask,
956 + stats, mc_stats, false);
957 +diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c
958 +index 90aafb56f1409..a438202129323 100644
959 +--- a/drivers/net/netdevsim/bpf.c
960 ++++ b/drivers/net/netdevsim/bpf.c
961 +@@ -514,6 +514,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
962 + goto err_free;
963 + key = nmap->entry[i].key;
964 + *key = i;
965 ++ memset(nmap->entry[i].value, 0, offmap->map.value_size);
966 + }
967 + }
968 +
969 +diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
970 +index 8ee24e351bdc2..6a9178896c909 100644
971 +--- a/drivers/net/xen-netback/common.h
972 ++++ b/drivers/net/xen-netback/common.h
973 +@@ -203,6 +203,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
974 + unsigned int rx_queue_max;
975 + unsigned int rx_queue_len;
976 + unsigned long last_rx_time;
977 ++ unsigned int rx_slots_needed;
978 + bool stalled;
979 +
980 + struct xenvif_copy_state rx_copy;
981 +diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
982 +index accc991d153f7..dbac4c03d21a1 100644
983 +--- a/drivers/net/xen-netback/rx.c
984 ++++ b/drivers/net/xen-netback/rx.c
985 +@@ -33,28 +33,36 @@
986 + #include <xen/xen.h>
987 + #include <xen/events.h>
988 +
989 +-static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
990 ++/*
991 ++ * Update the needed ring page slots for the first SKB queued.
992 ++ * Note that any call sequence outside the RX thread calling this function
993 ++ * needs to wake up the RX thread via a call of xenvif_kick_thread()
994 ++ * afterwards in order to avoid a race with putting the thread to sleep.
995 ++ */
996 ++static void xenvif_update_needed_slots(struct xenvif_queue *queue,
997 ++ const struct sk_buff *skb)
998 + {
999 +- RING_IDX prod, cons;
1000 +- struct sk_buff *skb;
1001 +- int needed;
1002 +- unsigned long flags;
1003 +-
1004 +- spin_lock_irqsave(&queue->rx_queue.lock, flags);
1005 ++ unsigned int needed = 0;
1006 +
1007 +- skb = skb_peek(&queue->rx_queue);
1008 +- if (!skb) {
1009 +- spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
1010 +- return false;
1011 ++ if (skb) {
1012 ++ needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
1013 ++ if (skb_is_gso(skb))
1014 ++ needed++;
1015 ++ if (skb->sw_hash)
1016 ++ needed++;
1017 + }
1018 +
1019 +- needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
1020 +- if (skb_is_gso(skb))
1021 +- needed++;
1022 +- if (skb->sw_hash)
1023 +- needed++;
1024 ++ WRITE_ONCE(queue->rx_slots_needed, needed);
1025 ++}
1026 +
1027 +- spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
1028 ++static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
1029 ++{
1030 ++ RING_IDX prod, cons;
1031 ++ unsigned int needed;
1032 ++
1033 ++ needed = READ_ONCE(queue->rx_slots_needed);
1034 ++ if (!needed)
1035 ++ return false;
1036 +
1037 + do {
1038 + prod = queue->rx.sring->req_prod;
1039 +@@ -80,13 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
1040 +
1041 + spin_lock_irqsave(&queue->rx_queue.lock, flags);
1042 +
1043 +- __skb_queue_tail(&queue->rx_queue, skb);
1044 +-
1045 +- queue->rx_queue_len += skb->len;
1046 +- if (queue->rx_queue_len > queue->rx_queue_max) {
1047 ++ if (queue->rx_queue_len >= queue->rx_queue_max) {
1048 + struct net_device *dev = queue->vif->dev;
1049 +
1050 + netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
1051 ++ kfree_skb(skb);
1052 ++ queue->vif->dev->stats.rx_dropped++;
1053 ++ } else {
1054 ++ if (skb_queue_empty(&queue->rx_queue))
1055 ++ xenvif_update_needed_slots(queue, skb);
1056 ++
1057 ++ __skb_queue_tail(&queue->rx_queue, skb);
1058 ++
1059 ++ queue->rx_queue_len += skb->len;
1060 + }
1061 +
1062 + spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
1063 +@@ -100,6 +114,8 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
1064 +
1065 + skb = __skb_dequeue(&queue->rx_queue);
1066 + if (skb) {
1067 ++ xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));
1068 ++
1069 + queue->rx_queue_len -= skb->len;
1070 + if (queue->rx_queue_len < queue->rx_queue_max) {
1071 + struct netdev_queue *txq;
1072 +@@ -134,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
1073 + break;
1074 + xenvif_rx_dequeue(queue);
1075 + kfree_skb(skb);
1076 ++ queue->vif->dev->stats.rx_dropped++;
1077 + }
1078 + }
1079 +
1080 +@@ -487,27 +504,31 @@ void xenvif_rx_action(struct xenvif_queue *queue)
1081 + xenvif_rx_copy_flush(queue);
1082 + }
1083 +
1084 +-static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
1085 ++static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)
1086 + {
1087 + RING_IDX prod, cons;
1088 +
1089 + prod = queue->rx.sring->req_prod;
1090 + cons = queue->rx.req_cons;
1091 +
1092 ++ return prod - cons;
1093 ++}
1094 ++
1095 ++static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)
1096 ++{
1097 ++ unsigned int needed = READ_ONCE(queue->rx_slots_needed);
1098 ++
1099 + return !queue->stalled &&
1100 +- prod - cons < 1 &&
1101 ++ xenvif_rx_queue_slots(queue) < needed &&
1102 + time_after(jiffies,
1103 + queue->last_rx_time + queue->vif->stall_timeout);
1104 + }
1105 +
1106 + static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
1107 + {
1108 +- RING_IDX prod, cons;
1109 +-
1110 +- prod = queue->rx.sring->req_prod;
1111 +- cons = queue->rx.req_cons;
1112 ++ unsigned int needed = READ_ONCE(queue->rx_slots_needed);
1113 +
1114 +- return queue->stalled && prod - cons >= 1;
1115 ++ return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;
1116 + }
1117 +
1118 + bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
1119 +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
1120 +index 8505024b89e9e..fce3a90a335cb 100644
1121 +--- a/drivers/net/xen-netfront.c
1122 ++++ b/drivers/net/xen-netfront.c
1123 +@@ -148,6 +148,9 @@ struct netfront_queue {
1124 + grant_ref_t gref_rx_head;
1125 + grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
1126 +
1127 ++ unsigned int rx_rsp_unconsumed;
1128 ++ spinlock_t rx_cons_lock;
1129 ++
1130 + struct page_pool *page_pool;
1131 + struct xdp_rxq_info xdp_rxq;
1132 + };
1133 +@@ -376,12 +379,13 @@ static int xennet_open(struct net_device *dev)
1134 + return 0;
1135 + }
1136 +
1137 +-static void xennet_tx_buf_gc(struct netfront_queue *queue)
1138 ++static bool xennet_tx_buf_gc(struct netfront_queue *queue)
1139 + {
1140 + RING_IDX cons, prod;
1141 + unsigned short id;
1142 + struct sk_buff *skb;
1143 + bool more_to_do;
1144 ++ bool work_done = false;
1145 + const struct device *dev = &queue->info->netdev->dev;
1146 +
1147 + BUG_ON(!netif_carrier_ok(queue->info->netdev));
1148 +@@ -398,6 +402,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
1149 + for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
1150 + struct xen_netif_tx_response txrsp;
1151 +
1152 ++ work_done = true;
1153 ++
1154 + RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
1155 + if (txrsp.status == XEN_NETIF_RSP_NULL)
1156 + continue;
1157 +@@ -441,11 +447,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
1158 +
1159 + xennet_maybe_wake_tx(queue);
1160 +
1161 +- return;
1162 ++ return work_done;
1163 +
1164 + err:
1165 + queue->info->broken = true;
1166 + dev_alert(dev, "Disabled for further use\n");
1167 ++
1168 ++ return work_done;
1169 + }
1170 +
1171 + struct xennet_gnttab_make_txreq {
1172 +@@ -836,6 +844,16 @@ static int xennet_close(struct net_device *dev)
1173 + return 0;
1174 + }
1175 +
1176 ++static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
1177 ++{
1178 ++ unsigned long flags;
1179 ++
1180 ++ spin_lock_irqsave(&queue->rx_cons_lock, flags);
1181 ++ queue->rx.rsp_cons = val;
1182 ++ queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
1183 ++ spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
1184 ++}
1185 ++
1186 + static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
1187 + grant_ref_t ref)
1188 + {
1189 +@@ -887,7 +905,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
1190 + xennet_move_rx_slot(queue, skb, ref);
1191 + } while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
1192 +
1193 +- queue->rx.rsp_cons = cons;
1194 ++ xennet_set_rx_rsp_cons(queue, cons);
1195 + return err;
1196 + }
1197 +
1198 +@@ -1041,7 +1059,7 @@ next:
1199 + }
1200 +
1201 + if (unlikely(err))
1202 +- queue->rx.rsp_cons = cons + slots;
1203 ++ xennet_set_rx_rsp_cons(queue, cons + slots);
1204 +
1205 + return err;
1206 + }
1207 +@@ -1095,7 +1113,8 @@ static int xennet_fill_frags(struct netfront_queue *queue,
1208 + __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
1209 + }
1210 + if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
1211 +- queue->rx.rsp_cons = ++cons + skb_queue_len(list);
1212 ++ xennet_set_rx_rsp_cons(queue,
1213 ++ ++cons + skb_queue_len(list));
1214 + kfree_skb(nskb);
1215 + return -ENOENT;
1216 + }
1217 +@@ -1108,7 +1127,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
1218 + kfree_skb(nskb);
1219 + }
1220 +
1221 +- queue->rx.rsp_cons = cons;
1222 ++ xennet_set_rx_rsp_cons(queue, cons);
1223 +
1224 + return 0;
1225 + }
1226 +@@ -1231,7 +1250,9 @@ err:
1227 +
1228 + if (unlikely(xennet_set_skb_gso(skb, gso))) {
1229 + __skb_queue_head(&tmpq, skb);
1230 +- queue->rx.rsp_cons += skb_queue_len(&tmpq);
1231 ++ xennet_set_rx_rsp_cons(queue,
1232 ++ queue->rx.rsp_cons +
1233 ++ skb_queue_len(&tmpq));
1234 + goto err;
1235 + }
1236 + }
1237 +@@ -1255,7 +1276,8 @@ err:
1238 +
1239 + __skb_queue_tail(&rxq, skb);
1240 +
1241 +- i = ++queue->rx.rsp_cons;
1242 ++ i = queue->rx.rsp_cons + 1;
1243 ++ xennet_set_rx_rsp_cons(queue, i);
1244 + work_done++;
1245 + }
1246 + if (need_xdp_flush)
1247 +@@ -1419,40 +1441,79 @@ static int xennet_set_features(struct net_device *dev,
1248 + return 0;
1249 + }
1250 +
1251 +-static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
1252 ++static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi)
1253 + {
1254 +- struct netfront_queue *queue = dev_id;
1255 + unsigned long flags;
1256 +
1257 +- if (queue->info->broken)
1258 +- return IRQ_HANDLED;
1259 ++ if (unlikely(queue->info->broken))
1260 ++ return false;
1261 +
1262 + spin_lock_irqsave(&queue->tx_lock, flags);
1263 +- xennet_tx_buf_gc(queue);
1264 ++ if (xennet_tx_buf_gc(queue))
1265 ++ *eoi = 0;
1266 + spin_unlock_irqrestore(&queue->tx_lock, flags);
1267 +
1268 ++ return true;
1269 ++}
1270 ++
1271 ++static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
1272 ++{
1273 ++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
1274 ++
1275 ++ if (likely(xennet_handle_tx(dev_id, &eoiflag)))
1276 ++ xen_irq_lateeoi(irq, eoiflag);
1277 ++
1278 + return IRQ_HANDLED;
1279 + }
1280 +
1281 +-static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
1282 ++static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
1283 + {
1284 +- struct netfront_queue *queue = dev_id;
1285 +- struct net_device *dev = queue->info->netdev;
1286 ++ unsigned int work_queued;
1287 ++ unsigned long flags;
1288 +
1289 +- if (queue->info->broken)
1290 +- return IRQ_HANDLED;
1291 ++ if (unlikely(queue->info->broken))
1292 ++ return false;
1293 ++
1294 ++ spin_lock_irqsave(&queue->rx_cons_lock, flags);
1295 ++ work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
1296 ++ if (work_queued > queue->rx_rsp_unconsumed) {
1297 ++ queue->rx_rsp_unconsumed = work_queued;
1298 ++ *eoi = 0;
1299 ++ } else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) {
1300 ++ const struct device *dev = &queue->info->netdev->dev;
1301 ++
1302 ++ spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
1303 ++ dev_alert(dev, "RX producer index going backwards\n");
1304 ++ dev_alert(dev, "Disabled for further use\n");
1305 ++ queue->info->broken = true;
1306 ++ return false;
1307 ++ }
1308 ++ spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
1309 +
1310 +- if (likely(netif_carrier_ok(dev) &&
1311 +- RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
1312 ++ if (likely(netif_carrier_ok(queue->info->netdev) && work_queued))
1313 + napi_schedule(&queue->napi);
1314 +
1315 ++ return true;
1316 ++}
1317 ++
1318 ++static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
1319 ++{
1320 ++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
1321 ++
1322 ++ if (likely(xennet_handle_rx(dev_id, &eoiflag)))
1323 ++ xen_irq_lateeoi(irq, eoiflag);
1324 ++
1325 + return IRQ_HANDLED;
1326 + }
1327 +
1328 + static irqreturn_t xennet_interrupt(int irq, void *dev_id)
1329 + {
1330 +- xennet_tx_interrupt(irq, dev_id);
1331 +- xennet_rx_interrupt(irq, dev_id);
1332 ++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
1333 ++
1334 ++ if (xennet_handle_tx(dev_id, &eoiflag) &&
1335 ++ xennet_handle_rx(dev_id, &eoiflag))
1336 ++ xen_irq_lateeoi(irq, eoiflag);
1337 ++
1338 + return IRQ_HANDLED;
1339 + }
1340 +
1341 +@@ -1770,9 +1831,10 @@ static int setup_netfront_single(struct netfront_queue *queue)
1342 + if (err < 0)
1343 + goto fail;
1344 +
1345 +- err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
1346 +- xennet_interrupt,
1347 +- 0, queue->info->netdev->name, queue);
1348 ++ err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
1349 ++ xennet_interrupt, 0,
1350 ++ queue->info->netdev->name,
1351 ++ queue);
1352 + if (err < 0)
1353 + goto bind_fail;
1354 + queue->rx_evtchn = queue->tx_evtchn;
1355 +@@ -1800,18 +1862,18 @@ static int setup_netfront_split(struct netfront_queue *queue)
1356 +
1357 + snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
1358 + "%s-tx", queue->name);
1359 +- err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
1360 +- xennet_tx_interrupt,
1361 +- 0, queue->tx_irq_name, queue);
1362 ++ err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
1363 ++ xennet_tx_interrupt, 0,
1364 ++ queue->tx_irq_name, queue);
1365 + if (err < 0)
1366 + goto bind_tx_fail;
1367 + queue->tx_irq = err;
1368 +
1369 + snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
1370 + "%s-rx", queue->name);
1371 +- err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
1372 +- xennet_rx_interrupt,
1373 +- 0, queue->rx_irq_name, queue);
1374 ++ err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn,
1375 ++ xennet_rx_interrupt, 0,
1376 ++ queue->rx_irq_name, queue);
1377 + if (err < 0)
1378 + goto bind_rx_fail;
1379 + queue->rx_irq = err;
1380 +@@ -1913,6 +1975,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
1381 +
1382 + spin_lock_init(&queue->tx_lock);
1383 + spin_lock_init(&queue->rx_lock);
1384 ++ spin_lock_init(&queue->rx_cons_lock);
1385 +
1386 + timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
1387 +
1388 +diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
1389 +index db7475dc601f5..57314fec2261b 100644
1390 +--- a/drivers/pci/msi.c
1391 ++++ b/drivers/pci/msi.c
1392 +@@ -828,9 +828,6 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
1393 + goto out_disable;
1394 + }
1395 +
1396 +- /* Ensure that all table entries are masked. */
1397 +- msix_mask_all(base, tsize);
1398 +-
1399 + ret = msix_setup_entries(dev, base, entries, nvec, affd);
1400 + if (ret)
1401 + goto out_disable;
1402 +@@ -853,6 +850,16 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
1403 + /* Set MSI-X enabled bits and unmask the function */
1404 + pci_intx_for_msi(dev, 0);
1405 + dev->msix_enabled = 1;
1406 ++
1407 ++ /*
1408 ++ * Ensure that all table entries are masked to prevent
1409 ++ * stale entries from firing in a crash kernel.
1410 ++ *
1411 ++ * Done late to deal with a broken Marvell NVME device
1412 ++ * which takes the MSI-X mask bits into account even
1413 ++ * when MSI-X is disabled, which prevents MSI delivery.
1414 ++ */
1415 ++ msix_mask_all(base, tsize);
1416 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
1417 +
1418 + pcibios_free_irq(dev);
1419 +@@ -879,7 +886,7 @@ out_free:
1420 + free_msi_irqs(dev);
1421 +
1422 + out_disable:
1423 +- pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
1424 ++ pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
1425 +
1426 + return ret;
1427 + }
1428 +diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
1429 +index 9188191433439..6b00de6b6f0ef 100644
1430 +--- a/drivers/scsi/scsi_debug.c
1431 ++++ b/drivers/scsi/scsi_debug.c
1432 +@@ -1188,7 +1188,7 @@ static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr,
1433 + __func__, off_dst, scsi_bufflen(scp), act_len,
1434 + scsi_get_resid(scp));
1435 + n = scsi_bufflen(scp) - (off_dst + act_len);
1436 +- scsi_set_resid(scp, min_t(int, scsi_get_resid(scp), n));
1437 ++ scsi_set_resid(scp, min_t(u32, scsi_get_resid(scp), n));
1438 + return 0;
1439 + }
1440 +
1441 +@@ -1561,7 +1561,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
1442 + unsigned char pq_pdt;
1443 + unsigned char *arr;
1444 + unsigned char *cmd = scp->cmnd;
1445 +- int alloc_len, n, ret;
1446 ++ u32 alloc_len, n;
1447 ++ int ret;
1448 + bool have_wlun, is_disk, is_zbc, is_disk_zbc;
1449 +
1450 + alloc_len = get_unaligned_be16(cmd + 3);
1451 +@@ -1584,7 +1585,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
1452 + kfree(arr);
1453 + return check_condition_result;
1454 + } else if (0x1 & cmd[1]) { /* EVPD bit set */
1455 +- int lu_id_num, port_group_id, target_dev_id, len;
1456 ++ int lu_id_num, port_group_id, target_dev_id;
1457 ++ u32 len;
1458 + char lu_id_str[6];
1459 + int host_no = devip->sdbg_host->shost->host_no;
1460 +
1461 +@@ -1675,9 +1677,9 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
1462 + kfree(arr);
1463 + return check_condition_result;
1464 + }
1465 +- len = min(get_unaligned_be16(arr + 2) + 4, alloc_len);
1466 ++ len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
1467 + ret = fill_from_dev_buffer(scp, arr,
1468 +- min(len, SDEBUG_MAX_INQ_ARR_SZ));
1469 ++ min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
1470 + kfree(arr);
1471 + return ret;
1472 + }
1473 +@@ -1713,7 +1715,7 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
1474 + }
1475 + put_unaligned_be16(0x2100, arr + n); /* SPL-4 no version claimed */
1476 + ret = fill_from_dev_buffer(scp, arr,
1477 +- min_t(int, alloc_len, SDEBUG_LONG_INQ_SZ));
1478 ++ min_t(u32, alloc_len, SDEBUG_LONG_INQ_SZ));
1479 + kfree(arr);
1480 + return ret;
1481 + }
1482 +@@ -1728,8 +1730,8 @@ static int resp_requests(struct scsi_cmnd *scp,
1483 + unsigned char *cmd = scp->cmnd;
1484 + unsigned char arr[SCSI_SENSE_BUFFERSIZE]; /* assume >= 18 bytes */
1485 + bool dsense = !!(cmd[1] & 1);
1486 +- int alloc_len = cmd[4];
1487 +- int len = 18;
1488 ++ u32 alloc_len = cmd[4];
1489 ++ u32 len = 18;
1490 + int stopped_state = atomic_read(&devip->stopped);
1491 +
1492 + memset(arr, 0, sizeof(arr));
1493 +@@ -1773,7 +1775,7 @@ static int resp_requests(struct scsi_cmnd *scp,
1494 + arr[7] = 0xa;
1495 + }
1496 + }
1497 +- return fill_from_dev_buffer(scp, arr, min_t(int, len, alloc_len));
1498 ++ return fill_from_dev_buffer(scp, arr, min_t(u32, len, alloc_len));
1499 + }
1500 +
1501 + static int resp_start_stop(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
1502 +@@ -2311,7 +2313,8 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
1503 + {
1504 + int pcontrol, pcode, subpcode, bd_len;
1505 + unsigned char dev_spec;
1506 +- int alloc_len, offset, len, target_dev_id;
1507 ++ u32 alloc_len, offset, len;
1508 ++ int target_dev_id;
1509 + int target = scp->device->id;
1510 + unsigned char *ap;
1511 + unsigned char arr[SDEBUG_MAX_MSENSE_SZ];
1512 +@@ -2467,7 +2470,7 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
1513 + arr[0] = offset - 1;
1514 + else
1515 + put_unaligned_be16((offset - 2), arr + 0);
1516 +- return fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, offset));
1517 ++ return fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, offset));
1518 + }
1519 +
1520 + #define SDEBUG_MAX_MSELECT_SZ 512
1521 +@@ -2498,11 +2501,11 @@ static int resp_mode_select(struct scsi_cmnd *scp,
1522 + __func__, param_len, res);
1523 + md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
1524 + bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
1525 +- if (md_len > 2) {
1526 ++ off = bd_len + (mselect6 ? 4 : 8);
1527 ++ if (md_len > 2 || off >= res) {
1528 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1);
1529 + return check_condition_result;
1530 + }
1531 +- off = bd_len + (mselect6 ? 4 : 8);
1532 + mpage = arr[off] & 0x3f;
1533 + ps = !!(arr[off] & 0x80);
1534 + if (ps) {
1535 +@@ -2582,7 +2585,8 @@ static int resp_ie_l_pg(unsigned char *arr)
1536 + static int resp_log_sense(struct scsi_cmnd *scp,
1537 + struct sdebug_dev_info *devip)
1538 + {
1539 +- int ppc, sp, pcode, subpcode, alloc_len, len, n;
1540 ++ int ppc, sp, pcode, subpcode;
1541 ++ u32 alloc_len, len, n;
1542 + unsigned char arr[SDEBUG_MAX_LSENSE_SZ];
1543 + unsigned char *cmd = scp->cmnd;
1544 +
1545 +@@ -2652,9 +2656,9 @@ static int resp_log_sense(struct scsi_cmnd *scp,
1546 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 3, -1);
1547 + return check_condition_result;
1548 + }
1549 +- len = min_t(int, get_unaligned_be16(arr + 2) + 4, alloc_len);
1550 ++ len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
1551 + return fill_from_dev_buffer(scp, arr,
1552 +- min_t(int, len, SDEBUG_MAX_INQ_ARR_SZ));
1553 ++ min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
1554 + }
1555 +
1556 + static inline bool sdebug_dev_is_zoned(struct sdebug_dev_info *devip)
1557 +@@ -4238,6 +4242,8 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
1558 + mk_sense_invalid_opcode(scp);
1559 + return check_condition_result;
1560 + }
1561 ++ if (vnum == 0)
1562 ++ return 0; /* not an error */
1563 + a_num = is_bytchk3 ? 1 : vnum;
1564 + /* Treat following check like one for read (i.e. no write) access */
1565 + ret = check_device_access_params(scp, lba, a_num, false);
1566 +@@ -4301,6 +4307,8 @@ static int resp_report_zones(struct scsi_cmnd *scp,
1567 + }
1568 + zs_lba = get_unaligned_be64(cmd + 2);
1569 + alloc_len = get_unaligned_be32(cmd + 10);
1570 ++ if (alloc_len == 0)
1571 ++ return 0; /* not an error */
1572 + rep_opts = cmd[14] & 0x3f;
1573 + partial = cmd[14] & 0x80;
1574 +
1575 +@@ -4405,7 +4413,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
1576 + put_unaligned_be64(sdebug_capacity - 1, arr + 8);
1577 +
1578 + rep_len = (unsigned long)desc - (unsigned long)arr;
1579 +- ret = fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, rep_len));
1580 ++ ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
1581 +
1582 + fini:
1583 + read_unlock(macc_lckp);
1584 +diff --git a/drivers/soc/imx/soc-imx.c b/drivers/soc/imx/soc-imx.c
1585 +index 01bfea1cb64a8..1e8780299d5c4 100644
1586 +--- a/drivers/soc/imx/soc-imx.c
1587 ++++ b/drivers/soc/imx/soc-imx.c
1588 +@@ -33,6 +33,10 @@ static int __init imx_soc_device_init(void)
1589 + u32 val;
1590 + int ret;
1591 +
1592 ++ /* Return early if this is running on devices with different SoCs */
1593 ++ if (!__mxc_cpu_type)
1594 ++ return 0;
1595 ++
1596 + if (of_machine_is_compatible("fsl,ls1021a"))
1597 + return 0;
1598 +
1599 +diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
1600 +index 94b60a692b515..4388a4a5e0919 100644
1601 +--- a/drivers/soc/tegra/fuse/fuse-tegra.c
1602 ++++ b/drivers/soc/tegra/fuse/fuse-tegra.c
1603 +@@ -260,7 +260,7 @@ static struct platform_driver tegra_fuse_driver = {
1604 + };
1605 + builtin_platform_driver(tegra_fuse_driver);
1606 +
1607 +-bool __init tegra_fuse_read_spare(unsigned int spare)
1608 ++u32 __init tegra_fuse_read_spare(unsigned int spare)
1609 + {
1610 + unsigned int offset = fuse->soc->info->spare + spare * 4;
1611 +
1612 +diff --git a/drivers/soc/tegra/fuse/fuse.h b/drivers/soc/tegra/fuse/fuse.h
1613 +index e057a58e20603..21887a57cf2c2 100644
1614 +--- a/drivers/soc/tegra/fuse/fuse.h
1615 ++++ b/drivers/soc/tegra/fuse/fuse.h
1616 +@@ -63,7 +63,7 @@ struct tegra_fuse {
1617 + void tegra_init_revision(void);
1618 + void tegra_init_apbmisc(void);
1619 +
1620 +-bool __init tegra_fuse_read_spare(unsigned int spare);
1621 ++u32 __init tegra_fuse_read_spare(unsigned int spare);
1622 + u32 __init tegra_fuse_read_early(unsigned int offset);
1623 +
1624 + u8 tegra_get_major_rev(void);
1625 +diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
1626 +index da6b88e80dc07..297dc62bca298 100644
1627 +--- a/drivers/tee/amdtee/core.c
1628 ++++ b/drivers/tee/amdtee/core.c
1629 +@@ -203,9 +203,8 @@ static int copy_ta_binary(struct tee_context *ctx, void *ptr, void **ta,
1630 +
1631 + *ta_size = roundup(fw->size, PAGE_SIZE);
1632 + *ta = (void *)__get_free_pages(GFP_KERNEL, get_order(*ta_size));
1633 +- if (IS_ERR(*ta)) {
1634 +- pr_err("%s: get_free_pages failed 0x%llx\n", __func__,
1635 +- (u64)*ta);
1636 ++ if (!*ta) {
1637 ++ pr_err("%s: get_free_pages failed\n", __func__);
1638 + rc = -ENOMEM;
1639 + goto rel_fw;
1640 + }
1641 +diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
1642 +index 8f143c09a1696..7948660e042fd 100644
1643 +--- a/drivers/tty/hvc/hvc_xen.c
1644 ++++ b/drivers/tty/hvc/hvc_xen.c
1645 +@@ -37,6 +37,8 @@ struct xencons_info {
1646 + struct xenbus_device *xbdev;
1647 + struct xencons_interface *intf;
1648 + unsigned int evtchn;
1649 ++ XENCONS_RING_IDX out_cons;
1650 ++ unsigned int out_cons_same;
1651 + struct hvc_struct *hvc;
1652 + int irq;
1653 + int vtermno;
1654 +@@ -138,6 +140,8 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
1655 + XENCONS_RING_IDX cons, prod;
1656 + int recv = 0;
1657 + struct xencons_info *xencons = vtermno_to_xencons(vtermno);
1658 ++ unsigned int eoiflag = 0;
1659 ++
1660 + if (xencons == NULL)
1661 + return -EINVAL;
1662 + intf = xencons->intf;
1663 +@@ -157,7 +161,27 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
1664 + mb(); /* read ring before consuming */
1665 + intf->in_cons = cons;
1666 +
1667 +- notify_daemon(xencons);
1668 ++ /*
1669 ++ * When to mark interrupt having been spurious:
1670 ++ * - there was no new data to be read, and
1671 ++ * - the backend did not consume some output bytes, and
1672 ++ * - the previous round with no read data didn't see consumed bytes
1673 ++ * (we might have a race with an interrupt being in flight while
1674 ++ * updating xencons->out_cons, so account for that by allowing one
1675 ++ * round without any visible reason)
1676 ++ */
1677 ++ if (intf->out_cons != xencons->out_cons) {
1678 ++ xencons->out_cons = intf->out_cons;
1679 ++ xencons->out_cons_same = 0;
1680 ++ }
1681 ++ if (recv) {
1682 ++ notify_daemon(xencons);
1683 ++ } else if (xencons->out_cons_same++ > 1) {
1684 ++ eoiflag = XEN_EOI_FLAG_SPURIOUS;
1685 ++ }
1686 ++
1687 ++ xen_irq_lateeoi(xencons->irq, eoiflag);
1688 ++
1689 + return recv;
1690 + }
1691 +
1692 +@@ -386,7 +410,7 @@ static int xencons_connect_backend(struct xenbus_device *dev,
1693 + if (ret)
1694 + return ret;
1695 + info->evtchn = evtchn;
1696 +- irq = bind_evtchn_to_irq(evtchn);
1697 ++ irq = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
1698 + if (irq < 0)
1699 + return irq;
1700 + info->irq = irq;
1701 +@@ -550,7 +574,7 @@ static int __init xen_hvc_init(void)
1702 + return r;
1703 +
1704 + info = vtermno_to_xencons(HVC_COOKIE);
1705 +- info->irq = bind_evtchn_to_irq(info->evtchn);
1706 ++ info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
1707 + }
1708 + if (info->irq < 0)
1709 + info->irq = 0; /* NO_IRQ */
1710 +diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
1711 +index 1363e659dc1db..48c64e68017cd 100644
1712 +--- a/drivers/tty/n_hdlc.c
1713 ++++ b/drivers/tty/n_hdlc.c
1714 +@@ -139,6 +139,8 @@ struct n_hdlc {
1715 + struct n_hdlc_buf_list rx_buf_list;
1716 + struct n_hdlc_buf_list tx_free_buf_list;
1717 + struct n_hdlc_buf_list rx_free_buf_list;
1718 ++ struct work_struct write_work;
1719 ++ struct tty_struct *tty_for_write_work;
1720 + };
1721 +
1722 + /*
1723 +@@ -153,6 +155,7 @@ static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
1724 + /* Local functions */
1725 +
1726 + static struct n_hdlc *n_hdlc_alloc(void);
1727 ++static void n_hdlc_tty_write_work(struct work_struct *work);
1728 +
1729 + /* max frame size for memory allocations */
1730 + static int maxframe = 4096;
1731 +@@ -209,6 +212,8 @@ static void n_hdlc_tty_close(struct tty_struct *tty)
1732 + wake_up_interruptible(&tty->read_wait);
1733 + wake_up_interruptible(&tty->write_wait);
1734 +
1735 ++ cancel_work_sync(&n_hdlc->write_work);
1736 ++
1737 + n_hdlc_free_buf_list(&n_hdlc->rx_free_buf_list);
1738 + n_hdlc_free_buf_list(&n_hdlc->tx_free_buf_list);
1739 + n_hdlc_free_buf_list(&n_hdlc->rx_buf_list);
1740 +@@ -240,6 +245,8 @@ static int n_hdlc_tty_open(struct tty_struct *tty)
1741 + return -ENFILE;
1742 + }
1743 +
1744 ++ INIT_WORK(&n_hdlc->write_work, n_hdlc_tty_write_work);
1745 ++ n_hdlc->tty_for_write_work = tty;
1746 + tty->disc_data = n_hdlc;
1747 + tty->receive_room = 65536;
1748 +
1749 +@@ -333,6 +340,20 @@ check_again:
1750 + goto check_again;
1751 + } /* end of n_hdlc_send_frames() */
1752 +
1753 ++/**
1754 ++ * n_hdlc_tty_write_work - Asynchronous callback for transmit wakeup
1755 ++ * @work: pointer to work_struct
1756 ++ *
1757 ++ * Called when low level device driver can accept more send data.
1758 ++ */
1759 ++static void n_hdlc_tty_write_work(struct work_struct *work)
1760 ++{
1761 ++ struct n_hdlc *n_hdlc = container_of(work, struct n_hdlc, write_work);
1762 ++ struct tty_struct *tty = n_hdlc->tty_for_write_work;
1763 ++
1764 ++ n_hdlc_send_frames(n_hdlc, tty);
1765 ++} /* end of n_hdlc_tty_write_work() */
1766 ++
1767 + /**
1768 + * n_hdlc_tty_wakeup - Callback for transmit wakeup
1769 + * @tty: pointer to associated tty instance data
1770 +@@ -343,7 +364,7 @@ static void n_hdlc_tty_wakeup(struct tty_struct *tty)
1771 + {
1772 + struct n_hdlc *n_hdlc = tty->disc_data;
1773 +
1774 +- n_hdlc_send_frames(n_hdlc, tty);
1775 ++ schedule_work(&n_hdlc->write_work);
1776 + } /* end of n_hdlc_tty_wakeup() */
1777 +
1778 + /**
1779 +diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
1780 +index 31c9e83ea3cb2..251f0018ae8ca 100644
1781 +--- a/drivers/tty/serial/8250/8250_fintek.c
1782 ++++ b/drivers/tty/serial/8250/8250_fintek.c
1783 +@@ -290,25 +290,6 @@ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
1784 + }
1785 + }
1786 +
1787 +-static void fintek_8250_goto_highspeed(struct uart_8250_port *uart,
1788 +- struct fintek_8250 *pdata)
1789 +-{
1790 +- sio_write_reg(pdata, LDN, pdata->index);
1791 +-
1792 +- switch (pdata->pid) {
1793 +- case CHIP_ID_F81966:
1794 +- case CHIP_ID_F81866: /* set uart clock for high speed serial mode */
1795 +- sio_write_mask_reg(pdata, F81866_UART_CLK,
1796 +- F81866_UART_CLK_MASK,
1797 +- F81866_UART_CLK_14_769MHZ);
1798 +-
1799 +- uart->port.uartclk = 921600 * 16;
1800 +- break;
1801 +- default: /* leave clock speed untouched */
1802 +- break;
1803 +- }
1804 +-}
1805 +-
1806 + static void fintek_8250_set_termios(struct uart_port *port,
1807 + struct ktermios *termios,
1808 + struct ktermios *old)
1809 +@@ -430,7 +411,6 @@ static int probe_setup_port(struct fintek_8250 *pdata,
1810 +
1811 + fintek_8250_set_irq_mode(pdata, level_mode);
1812 + fintek_8250_set_max_fifo(pdata);
1813 +- fintek_8250_goto_highspeed(uart, pdata);
1814 +
1815 + fintek_8250_exit_key(addr[i]);
1816 +
1817 +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
1818 +index 61f686c5bd9c6..baf80e2ac7d8e 100644
1819 +--- a/drivers/usb/core/quirks.c
1820 ++++ b/drivers/usb/core/quirks.c
1821 +@@ -435,6 +435,9 @@ static const struct usb_device_id usb_quirk_list[] = {
1822 + { USB_DEVICE(0x1532, 0x0116), .driver_info =
1823 + USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
1824 +
1825 ++ /* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
1826 ++ { USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
1827 ++
1828 + /* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
1829 + { USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },
1830 +
1831 +diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
1832 +index 5f18acac74068..49d333f02af4e 100644
1833 +--- a/drivers/usb/dwc2/platform.c
1834 ++++ b/drivers/usb/dwc2/platform.c
1835 +@@ -542,6 +542,9 @@ static int dwc2_driver_probe(struct platform_device *dev)
1836 + ggpio |= GGPIO_STM32_OTG_GCCFG_IDEN;
1837 + ggpio |= GGPIO_STM32_OTG_GCCFG_VBDEN;
1838 + dwc2_writel(hsotg, ggpio, GGPIO);
1839 ++
1840 ++ /* ID/VBUS detection startup time */
1841 ++ usleep_range(5000, 7000);
1842 + }
1843 +
1844 + retval = dwc2_drd_init(hsotg);
1845 +diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
1846 +index be4ecbabdd586..6c0434100e38c 100644
1847 +--- a/drivers/usb/early/xhci-dbc.c
1848 ++++ b/drivers/usb/early/xhci-dbc.c
1849 +@@ -14,7 +14,6 @@
1850 + #include <linux/pci_ids.h>
1851 + #include <linux/memblock.h>
1852 + #include <linux/io.h>
1853 +-#include <linux/iopoll.h>
1854 + #include <asm/pci-direct.h>
1855 + #include <asm/fixmap.h>
1856 + #include <linux/bcd.h>
1857 +@@ -136,9 +135,17 @@ static int handshake(void __iomem *ptr, u32 mask, u32 done, int wait, int delay)
1858 + {
1859 + u32 result;
1860 +
1861 +- return readl_poll_timeout_atomic(ptr, result,
1862 +- ((result & mask) == done),
1863 +- delay, wait);
1864 ++ /* Can not use readl_poll_timeout_atomic() for early boot things */
1865 ++ do {
1866 ++ result = readl(ptr);
1867 ++ result &= mask;
1868 ++ if (result == done)
1869 ++ return 0;
1870 ++ udelay(delay);
1871 ++ wait -= delay;
1872 ++ } while (wait > 0);
1873 ++
1874 ++ return -ETIMEDOUT;
1875 + }
1876 +
1877 + static void __init xdbc_bios_handoff(void)
1878 +diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
1879 +index 426132988512d..8bec0cbf844ed 100644
1880 +--- a/drivers/usb/gadget/composite.c
1881 ++++ b/drivers/usb/gadget/composite.c
1882 +@@ -1649,14 +1649,14 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
1883 + u8 endp;
1884 +
1885 + if (w_length > USB_COMP_EP0_BUFSIZ) {
1886 +- if (ctrl->bRequestType == USB_DIR_OUT) {
1887 +- goto done;
1888 +- } else {
1889 ++ if (ctrl->bRequestType & USB_DIR_IN) {
1890 + /* Cast away the const, we are going to overwrite on purpose. */
1891 + __le16 *temp = (__le16 *)&ctrl->wLength;
1892 +
1893 + *temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
1894 + w_length = USB_COMP_EP0_BUFSIZ;
1895 ++ } else {
1896 ++ goto done;
1897 + }
1898 + }
1899 +
1900 +diff --git a/drivers/usb/gadget/legacy/dbgp.c b/drivers/usb/gadget/legacy/dbgp.c
1901 +index 355bc7dab9d5f..6bcbad3825802 100644
1902 +--- a/drivers/usb/gadget/legacy/dbgp.c
1903 ++++ b/drivers/usb/gadget/legacy/dbgp.c
1904 +@@ -346,14 +346,14 @@ static int dbgp_setup(struct usb_gadget *gadget,
1905 + u16 len = 0;
1906 +
1907 + if (length > DBGP_REQ_LEN) {
1908 +- if (ctrl->bRequestType == USB_DIR_OUT) {
1909 +- return err;
1910 +- } else {
1911 ++ if (ctrl->bRequestType & USB_DIR_IN) {
1912 + /* Cast away the const, we are going to overwrite on purpose. */
1913 + __le16 *temp = (__le16 *)&ctrl->wLength;
1914 +
1915 + *temp = cpu_to_le16(DBGP_REQ_LEN);
1916 + length = DBGP_REQ_LEN;
1917 ++ } else {
1918 ++ return err;
1919 + }
1920 + }
1921 +
1922 +diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
1923 +index 04b9c4f5f129d..217d2b66fa514 100644
1924 +--- a/drivers/usb/gadget/legacy/inode.c
1925 ++++ b/drivers/usb/gadget/legacy/inode.c
1926 +@@ -1336,14 +1336,14 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
1927 + u16 w_length = le16_to_cpu(ctrl->wLength);
1928 +
1929 + if (w_length > RBUF_SIZE) {
1930 +- if (ctrl->bRequestType == USB_DIR_OUT) {
1931 +- return value;
1932 +- } else {
1933 ++ if (ctrl->bRequestType & USB_DIR_IN) {
1934 + /* Cast away the const, we are going to overwrite on purpose. */
1935 + __le16 *temp = (__le16 *)&ctrl->wLength;
1936 +
1937 + *temp = cpu_to_le16(RBUF_SIZE);
1938 + w_length = RBUF_SIZE;
1939 ++ } else {
1940 ++ return value;
1941 + }
1942 + }
1943 +
1944 +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
1945 +index 80251a2579fda..c9133df71e52b 100644
1946 +--- a/drivers/usb/host/xhci-pci.c
1947 ++++ b/drivers/usb/host/xhci-pci.c
1948 +@@ -70,6 +70,8 @@
1949 + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 0x161e
1950 + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 0x15d6
1951 + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 0x15d7
1952 ++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 0x161c
1953 ++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8 0x161f
1954 +
1955 + #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
1956 + #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
1957 +@@ -325,7 +327,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
1958 + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
1959 + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
1960 + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
1961 +- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
1962 ++ pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
1963 ++ pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
1964 ++ pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
1965 + xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
1966 +
1967 + if (xhci->quirks & XHCI_RESET_ON_RESUME)
1968 +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
1969 +index 6d858bdaf33ce..f906c1308f9f9 100644
1970 +--- a/drivers/usb/serial/cp210x.c
1971 ++++ b/drivers/usb/serial/cp210x.c
1972 +@@ -1750,6 +1750,8 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
1973 +
1974 + /* 2 banks of GPIO - One for the pins taken from each serial port */
1975 + if (intf_num == 0) {
1976 ++ priv->gc.ngpio = 2;
1977 ++
1978 + if (mode.eci == CP210X_PIN_MODE_MODEM) {
1979 + /* mark all GPIOs of this interface as reserved */
1980 + priv->gpio_altfunc = 0xff;
1981 +@@ -1760,8 +1762,9 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
1982 + priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
1983 + CP210X_ECI_GPIO_MODE_MASK) >>
1984 + CP210X_ECI_GPIO_MODE_OFFSET);
1985 +- priv->gc.ngpio = 2;
1986 + } else if (intf_num == 1) {
1987 ++ priv->gc.ngpio = 3;
1988 ++
1989 + if (mode.sci == CP210X_PIN_MODE_MODEM) {
1990 + /* mark all GPIOs of this interface as reserved */
1991 + priv->gpio_altfunc = 0xff;
1992 +@@ -1772,7 +1775,6 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
1993 + priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
1994 + CP210X_SCI_GPIO_MODE_MASK) >>
1995 + CP210X_SCI_GPIO_MODE_OFFSET);
1996 +- priv->gc.ngpio = 3;
1997 + } else {
1998 + return -ENODEV;
1999 + }
2000 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
2001 +index 28ffe4e358b77..21b1488fe4461 100644
2002 +--- a/drivers/usb/serial/option.c
2003 ++++ b/drivers/usb/serial/option.c
2004 +@@ -1219,6 +1219,14 @@ static const struct usb_device_id option_ids[] = {
2005 + .driver_info = NCTRL(2) | RSVD(3) },
2006 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */
2007 + .driver_info = NCTRL(0) | RSVD(1) },
2008 ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990 (rmnet) */
2009 ++ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
2010 ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990 (MBIM) */
2011 ++ .driver_info = NCTRL(0) | RSVD(1) },
2012 ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990 (RNDIS) */
2013 ++ .driver_info = NCTRL(2) | RSVD(3) },
2014 ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */
2015 ++ .driver_info = NCTRL(0) | RSVD(1) },
2016 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
2017 + .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
2018 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
2019 +diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
2020 +index fdeb20f2f174c..e4d60009d9083 100644
2021 +--- a/drivers/vhost/vdpa.c
2022 ++++ b/drivers/vhost/vdpa.c
2023 +@@ -196,7 +196,7 @@ static int vhost_vdpa_config_validate(struct vhost_vdpa *v,
2024 + break;
2025 + }
2026 +
2027 +- if (c->len == 0)
2028 ++ if (c->len == 0 || c->off > size)
2029 + return -EINVAL;
2030 +
2031 + if (c->len > size - c->off)
2032 +diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
2033 +index e9432dbbec0a7..cce75d3b3ba05 100644
2034 +--- a/drivers/virtio/virtio_ring.c
2035 ++++ b/drivers/virtio/virtio_ring.c
2036 +@@ -263,7 +263,7 @@ size_t virtio_max_dma_size(struct virtio_device *vdev)
2037 + size_t max_segment_size = SIZE_MAX;
2038 +
2039 + if (vring_use_dma_api(vdev))
2040 +- max_segment_size = dma_max_mapping_size(&vdev->dev);
2041 ++ max_segment_size = dma_max_mapping_size(vdev->dev.parent);
2042 +
2043 + return max_segment_size;
2044 + }
2045 +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
2046 +index bab2091c81683..a5bcad0278835 100644
2047 +--- a/fs/btrfs/disk-io.c
2048 ++++ b/fs/btrfs/disk-io.c
2049 +@@ -1603,6 +1603,14 @@ again:
2050 + }
2051 + return root;
2052 + fail:
2053 ++ /*
2054 ++ * If our caller provided us an anonymous device, then it's his
2055 ++	 * responsibility to free it in case we fail. So we have to set our
2056 ++ * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()
2057 ++ * and once again by our caller.
2058 ++ */
2059 ++ if (anon_dev)
2060 ++ root->anon_dev = 0;
2061 + btrfs_put_root(root);
2062 + return ERR_PTR(ret);
2063 + }
2064 +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
2065 +index 4a5a3ae0acaae..09ef6419e890a 100644
2066 +--- a/fs/btrfs/tree-log.c
2067 ++++ b/fs/btrfs/tree-log.c
2068 +@@ -1109,6 +1109,7 @@ again:
2069 + parent_objectid, victim_name,
2070 + victim_name_len);
2071 + if (ret < 0) {
2072 ++ kfree(victim_name);
2073 + return ret;
2074 + } else if (!ret) {
2075 + ret = -ENOENT;
2076 +diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
2077 +index 676f551953060..d3f67271d3c72 100644
2078 +--- a/fs/ceph/caps.c
2079 ++++ b/fs/ceph/caps.c
2080 +@@ -4359,7 +4359,7 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
2081 + {
2082 + struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);
2083 + int bits = (fmode << 1) | 1;
2084 +- bool is_opened = false;
2085 ++ bool already_opened = false;
2086 + int i;
2087 +
2088 + if (count == 1)
2089 +@@ -4367,19 +4367,19 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
2090 +
2091 + spin_lock(&ci->i_ceph_lock);
2092 + for (i = 0; i < CEPH_FILE_MODE_BITS; i++) {
2093 +- if (bits & (1 << i))
2094 +- ci->i_nr_by_mode[i] += count;
2095 +-
2096 + /*
2097 +- * If any of the mode ref is larger than 1,
2098 ++ * If any of the mode ref is larger than 0,
2099 + * that means it has been already opened by
2100 + * others. Just skip checking the PIN ref.
2101 + */
2102 +- if (i && ci->i_nr_by_mode[i] > 1)
2103 +- is_opened = true;
2104 ++ if (i && ci->i_nr_by_mode[i])
2105 ++ already_opened = true;
2106 ++
2107 ++ if (bits & (1 << i))
2108 ++ ci->i_nr_by_mode[i] += count;
2109 + }
2110 +
2111 +- if (!is_opened)
2112 ++ if (!already_opened)
2113 + percpu_counter_inc(&mdsc->metric.opened_inodes);
2114 + spin_unlock(&ci->i_ceph_lock);
2115 + }
2116 +diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
2117 +index 76e347a8cf088..981a915906314 100644
2118 +--- a/fs/ceph/mds_client.c
2119 ++++ b/fs/ceph/mds_client.c
2120 +@@ -3696,7 +3696,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
2121 + struct ceph_pagelist *pagelist = recon_state->pagelist;
2122 + struct dentry *dentry;
2123 + char *path;
2124 +- int pathlen, err;
2125 ++ int pathlen = 0, err;
2126 + u64 pathbase;
2127 + u64 snap_follows;
2128 +
2129 +@@ -3716,7 +3716,6 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
2130 + }
2131 + } else {
2132 + path = NULL;
2133 +- pathlen = 0;
2134 + pathbase = 0;
2135 + }
2136 +
2137 +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
2138 +index e7667497b6b77..8e95a75a4559c 100644
2139 +--- a/fs/fuse/dir.c
2140 ++++ b/fs/fuse/dir.c
2141 +@@ -1132,7 +1132,7 @@ int fuse_reverse_inval_entry(struct fuse_conn *fc, u64 parent_nodeid,
2142 + if (!parent)
2143 + return -ENOENT;
2144 +
2145 +- inode_lock(parent);
2146 ++ inode_lock_nested(parent, I_MUTEX_PARENT);
2147 + if (!S_ISDIR(parent->i_mode))
2148 + goto unlock;
2149 +
2150 +diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
2151 +index 16955a307dcd9..d0e5cde277022 100644
2152 +--- a/fs/overlayfs/dir.c
2153 ++++ b/fs/overlayfs/dir.c
2154 +@@ -137,8 +137,7 @@ kill_whiteout:
2155 + goto out;
2156 + }
2157 +
2158 +-static int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry,
2159 +- umode_t mode)
2160 ++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode)
2161 + {
2162 + int err;
2163 + struct dentry *d, *dentry = *newdentry;
2164 +diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
2165 +index e43dc68bd1b54..898de3bf884e4 100644
2166 +--- a/fs/overlayfs/overlayfs.h
2167 ++++ b/fs/overlayfs/overlayfs.h
2168 +@@ -519,6 +519,7 @@ struct ovl_cattr {
2169 +
2170 + #define OVL_CATTR(m) (&(struct ovl_cattr) { .mode = (m) })
2171 +
2172 ++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode);
2173 + struct dentry *ovl_create_real(struct inode *dir, struct dentry *newdentry,
2174 + struct ovl_cattr *attr);
2175 + int ovl_cleanup(struct inode *dir, struct dentry *dentry);
2176 +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
2177 +index 77f08ac04d1f3..45c596dfe3a36 100644
2178 +--- a/fs/overlayfs/super.c
2179 ++++ b/fs/overlayfs/super.c
2180 +@@ -743,10 +743,14 @@ retry:
2181 + goto retry;
2182 + }
2183 +
2184 +- work = ovl_create_real(dir, work, OVL_CATTR(attr.ia_mode));
2185 +- err = PTR_ERR(work);
2186 +- if (IS_ERR(work))
2187 +- goto out_err;
2188 ++ err = ovl_mkdir_real(dir, &work, attr.ia_mode);
2189 ++ if (err)
2190 ++ goto out_dput;
2191 ++
2192 ++ /* Weird filesystem returning with hashed negative (kernfs)? */
2193 ++ err = -EINVAL;
2194 ++ if (d_really_is_negative(work))
2195 ++ goto out_dput;
2196 +
2197 + /*
2198 + * Try to remove POSIX ACL xattrs from workdir. We are good if:
2199 +diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
2200 +index 2243dc1fb48fe..e60759d8bb5fb 100644
2201 +--- a/fs/zonefs/super.c
2202 ++++ b/fs/zonefs/super.c
2203 +@@ -1799,5 +1799,6 @@ static void __exit zonefs_exit(void)
2204 + MODULE_AUTHOR("Damien Le Moal");
2205 + MODULE_DESCRIPTION("Zone file system for zoned block devices");
2206 + MODULE_LICENSE("GPL");
2207 ++MODULE_ALIAS_FS("zonefs");
2208 + module_init(zonefs_init);
2209 + module_exit(zonefs_exit);
2210 +diff --git a/kernel/audit.c b/kernel/audit.c
2211 +index 68cee3bc8cfe6..d784000921da3 100644
2212 +--- a/kernel/audit.c
2213 ++++ b/kernel/audit.c
2214 +@@ -718,7 +718,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
2215 + {
2216 + int rc = 0;
2217 + struct sk_buff *skb;
2218 +- static unsigned int failed = 0;
2219 ++ unsigned int failed = 0;
2220 +
2221 + /* NOTE: kauditd_thread takes care of all our locking, we just use
2222 + * the netlink info passed to us (e.g. sk and portid) */
2223 +@@ -735,32 +735,30 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
2224 + continue;
2225 + }
2226 +
2227 ++retry:
2228 + /* grab an extra skb reference in case of error */
2229 + skb_get(skb);
2230 + rc = netlink_unicast(sk, skb, portid, 0);
2231 + if (rc < 0) {
2232 +- /* fatal failure for our queue flush attempt? */
2233 ++ /* send failed - try a few times unless fatal error */
2234 + if (++failed >= retry_limit ||
2235 + rc == -ECONNREFUSED || rc == -EPERM) {
2236 +- /* yes - error processing for the queue */
2237 + sk = NULL;
2238 + if (err_hook)
2239 + (*err_hook)(skb);
2240 +- if (!skb_hook)
2241 +- goto out;
2242 +- /* keep processing with the skb_hook */
2243 ++ if (rc == -EAGAIN)
2244 ++ rc = 0;
2245 ++ /* continue to drain the queue */
2246 + continue;
2247 + } else
2248 +- /* no - requeue to preserve ordering */
2249 +- skb_queue_head(queue, skb);
2250 ++ goto retry;
2251 + } else {
2252 +- /* it worked - drop the extra reference and continue */
2253 ++ /* skb sent - drop the extra reference and continue */
2254 + consume_skb(skb);
2255 + failed = 0;
2256 + }
2257 + }
2258 +
2259 +-out:
2260 + return (rc >= 0 ? 0 : rc);
2261 + }
2262 +
2263 +@@ -1609,7 +1607,8 @@ static int __net_init audit_net_init(struct net *net)
2264 + audit_panic("cannot initialize netlink socket in namespace");
2265 + return -ENOMEM;
2266 + }
2267 +- aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;
2268 ++ /* limit the timeout in case auditd is blocked/stopped */
2269 ++ aunet->sk->sk_sndtimeo = HZ / 10;
2270 +
2271 + return 0;
2272 + }
2273 +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
2274 +index 95ab3f243acde..4e28961cfa53e 100644
2275 +--- a/kernel/bpf/verifier.c
2276 ++++ b/kernel/bpf/verifier.c
2277 +@@ -1249,22 +1249,28 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
2278 + reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
2279 + }
2280 +
2281 ++static bool __reg32_bound_s64(s32 a)
2282 ++{
2283 ++ return a >= 0 && a <= S32_MAX;
2284 ++}
2285 ++
2286 + static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
2287 + {
2288 + reg->umin_value = reg->u32_min_value;
2289 + reg->umax_value = reg->u32_max_value;
2290 +- /* Attempt to pull 32-bit signed bounds into 64-bit bounds
2291 +- * but must be positive otherwise set to worse case bounds
2292 +- * and refine later from tnum.
2293 ++
2294 ++ /* Attempt to pull 32-bit signed bounds into 64-bit bounds but must
2295 ++ * be positive otherwise set to worse case bounds and refine later
2296 ++ * from tnum.
2297 + */
2298 +- if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0)
2299 +- reg->smax_value = reg->s32_max_value;
2300 +- else
2301 +- reg->smax_value = U32_MAX;
2302 +- if (reg->s32_min_value >= 0)
2303 ++ if (__reg32_bound_s64(reg->s32_min_value) &&
2304 ++ __reg32_bound_s64(reg->s32_max_value)) {
2305 + reg->smin_value = reg->s32_min_value;
2306 +- else
2307 ++ reg->smax_value = reg->s32_max_value;
2308 ++ } else {
2309 + reg->smin_value = 0;
2310 ++ reg->smax_value = U32_MAX;
2311 ++ }
2312 + }
2313 +
2314 + static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
2315 +@@ -7125,6 +7131,10 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
2316 + insn->dst_reg);
2317 + }
2318 + zext_32_to_64(dst_reg);
2319 ++
2320 ++ __update_reg_bounds(dst_reg);
2321 ++ __reg_deduce_bounds(dst_reg);
2322 ++ __reg_bound_offset(dst_reg);
2323 + }
2324 + } else {
2325 + /* case: R = imm
2326 +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
2327 +index 8c81c05c4236a..b74e7ace4376b 100644
2328 +--- a/kernel/rcu/tree.c
2329 ++++ b/kernel/rcu/tree.c
2330 +@@ -1888,7 +1888,7 @@ static void rcu_gp_fqs(bool first_time)
2331 + struct rcu_node *rnp = rcu_get_root();
2332 +
2333 + WRITE_ONCE(rcu_state.gp_activity, jiffies);
2334 +- rcu_state.n_force_qs++;
2335 ++ WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1);
2336 + if (first_time) {
2337 + /* Collect dyntick-idle snapshots. */
2338 + force_qs_rnp(dyntick_save_progress_counter);
2339 +@@ -2530,7 +2530,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
2340 + /* Reset ->qlen_last_fqs_check trigger if enough CBs have drained. */
2341 + if (count == 0 && rdp->qlen_last_fqs_check != 0) {
2342 + rdp->qlen_last_fqs_check = 0;
2343 +- rdp->n_force_qs_snap = rcu_state.n_force_qs;
2344 ++ rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
2345 + } else if (count < rdp->qlen_last_fqs_check - qhimark)
2346 + rdp->qlen_last_fqs_check = count;
2347 +
2348 +@@ -2876,10 +2876,10 @@ static void __call_rcu_core(struct rcu_data *rdp, struct rcu_head *head,
2349 + } else {
2350 + /* Give the grace period a kick. */
2351 + rdp->blimit = DEFAULT_MAX_RCU_BLIMIT;
2352 +- if (rcu_state.n_force_qs == rdp->n_force_qs_snap &&
2353 ++ if (READ_ONCE(rcu_state.n_force_qs) == rdp->n_force_qs_snap &&
2354 + rcu_segcblist_first_pend_cb(&rdp->cblist) != head)
2355 + rcu_force_quiescent_state();
2356 +- rdp->n_force_qs_snap = rcu_state.n_force_qs;
2357 ++ rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
2358 + rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
2359 + }
2360 + }
2361 +@@ -3986,7 +3986,7 @@ int rcutree_prepare_cpu(unsigned int cpu)
2362 + /* Set up local state, ensuring consistent view of global state. */
2363 + raw_spin_lock_irqsave_rcu_node(rnp, flags);
2364 + rdp->qlen_last_fqs_check = 0;
2365 +- rdp->n_force_qs_snap = rcu_state.n_force_qs;
2366 ++ rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
2367 + rdp->blimit = blimit;
2368 + if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */
2369 + !rcu_segcblist_is_offloaded(&rdp->cblist))
2370 +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
2371 +index 6858a31364b64..cc4dc2857a870 100644
2372 +--- a/kernel/time/timekeeping.c
2373 ++++ b/kernel/time/timekeeping.c
2374 +@@ -1310,8 +1310,7 @@ int do_settimeofday64(const struct timespec64 *ts)
2375 + timekeeping_forward_now(tk);
2376 +
2377 + xt = tk_xtime(tk);
2378 +- ts_delta.tv_sec = ts->tv_sec - xt.tv_sec;
2379 +- ts_delta.tv_nsec = ts->tv_nsec - xt.tv_nsec;
2380 ++ ts_delta = timespec64_sub(*ts, xt);
2381 +
2382 + if (timespec64_compare(&tk->wall_to_monotonic, &ts_delta) > 0) {
2383 + ret = -EINVAL;
2384 +diff --git a/net/core/skbuff.c b/net/core/skbuff.c
2385 +index 825e6b9880030..0215ae898e836 100644
2386 +--- a/net/core/skbuff.c
2387 ++++ b/net/core/skbuff.c
2388 +@@ -769,7 +769,7 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt)
2389 + ntohs(skb->protocol), skb->pkt_type, skb->skb_iif);
2390 +
2391 + if (dev)
2392 +- printk("%sdev name=%s feat=0x%pNF\n",
2393 ++ printk("%sdev name=%s feat=%pNF\n",
2394 + level, dev->name, &dev->features);
2395 + if (sk)
2396 + printk("%ssk family=%hu type=%u proto=%u\n",
2397 +diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
2398 +index 93474b1bea4e0..fa9f1de58df46 100644
2399 +--- a/net/ipv4/inet_diag.c
2400 ++++ b/net/ipv4/inet_diag.c
2401 +@@ -261,6 +261,7 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
2402 + r->idiag_state = sk->sk_state;
2403 + r->idiag_timer = 0;
2404 + r->idiag_retrans = 0;
2405 ++ r->idiag_expires = 0;
2406 +
2407 + if (inet_diag_msg_attrs_fill(sk, skb, r, ext,
2408 + sk_user_ns(NETLINK_CB(cb->skb).sk),
2409 +@@ -314,9 +315,6 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
2410 + r->idiag_retrans = icsk->icsk_probes_out;
2411 + r->idiag_expires =
2412 + jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies);
2413 +- } else {
2414 +- r->idiag_timer = 0;
2415 +- r->idiag_expires = 0;
2416 + }
2417 +
2418 + if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
2419 +diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
2420 +index a6a3d759246ec..bab0e99f6e356 100644
2421 +--- a/net/ipv6/sit.c
2422 ++++ b/net/ipv6/sit.c
2423 +@@ -1924,7 +1924,6 @@ static int __net_init sit_init_net(struct net *net)
2424 + return 0;
2425 +
2426 + err_reg_dev:
2427 +- ipip6_dev_free(sitn->fb_tunnel_dev);
2428 + free_netdev(sitn->fb_tunnel_dev);
2429 + err_alloc_dev:
2430 + return err;
2431 +diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
2432 +index cd4cf84a7f99f..6ef8ded4ec764 100644
2433 +--- a/net/mac80211/agg-rx.c
2434 ++++ b/net/mac80211/agg-rx.c
2435 +@@ -9,7 +9,7 @@
2436 + * Copyright 2007, Michael Wu <flamingice@××××××××.net>
2437 + * Copyright 2007-2010, Intel Corporation
2438 + * Copyright(c) 2015-2017 Intel Deutschland GmbH
2439 +- * Copyright (C) 2018-2020 Intel Corporation
2440 ++ * Copyright (C) 2018-2021 Intel Corporation
2441 + */
2442 +
2443 + /**
2444 +@@ -191,7 +191,8 @@ static void ieee80211_add_addbaext(struct ieee80211_sub_if_data *sdata,
2445 + sband = ieee80211_get_sband(sdata);
2446 + if (!sband)
2447 + return;
2448 +- he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type);
2449 ++ he_cap = ieee80211_get_he_iftype_cap(sband,
2450 ++ ieee80211_vif_type_p2p(&sdata->vif));
2451 + if (!he_cap)
2452 + return;
2453 +
2454 +diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
2455 +index b37c8a983d88d..190f300d8923c 100644
2456 +--- a/net/mac80211/agg-tx.c
2457 ++++ b/net/mac80211/agg-tx.c
2458 +@@ -9,7 +9,7 @@
2459 + * Copyright 2007, Michael Wu <flamingice@××××××××.net>
2460 + * Copyright 2007-2010, Intel Corporation
2461 + * Copyright(c) 2015-2017 Intel Deutschland GmbH
2462 +- * Copyright (C) 2018 - 2020 Intel Corporation
2463 ++ * Copyright (C) 2018 - 2021 Intel Corporation
2464 + */
2465 +
2466 + #include <linux/ieee80211.h>
2467 +@@ -106,7 +106,7 @@ static void ieee80211_send_addba_request(struct ieee80211_sub_if_data *sdata,
2468 + mgmt->u.action.u.addba_req.start_seq_num =
2469 + cpu_to_le16(start_seq_num << 4);
2470 +
2471 +- ieee80211_tx_skb(sdata, skb);
2472 ++ ieee80211_tx_skb_tid(sdata, skb, tid);
2473 + }
2474 +
2475 + void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn)
2476 +@@ -213,6 +213,8 @@ ieee80211_agg_start_txq(struct sta_info *sta, int tid, bool enable)
2477 + struct ieee80211_txq *txq = sta->sta.txq[tid];
2478 + struct txq_info *txqi;
2479 +
2480 ++ lockdep_assert_held(&sta->ampdu_mlme.mtx);
2481 ++
2482 + if (!txq)
2483 + return;
2484 +
2485 +@@ -290,7 +292,6 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int tid)
2486 + ieee80211_assign_tid_tx(sta, tid, NULL);
2487 +
2488 + ieee80211_agg_splice_finish(sta->sdata, tid);
2489 +- ieee80211_agg_start_txq(sta, tid, false);
2490 +
2491 + kfree_rcu(tid_tx, rcu_head);
2492 + }
2493 +@@ -480,8 +481,7 @@ static void ieee80211_send_addba_with_timeout(struct sta_info *sta,
2494 +
2495 + /* send AddBA request */
2496 + ieee80211_send_addba_request(sdata, sta->sta.addr, tid,
2497 +- tid_tx->dialog_token,
2498 +- sta->tid_seq[tid] >> 4,
2499 ++ tid_tx->dialog_token, tid_tx->ssn,
2500 + buf_size, tid_tx->timeout);
2501 +
2502 + WARN_ON(test_and_set_bit(HT_AGG_STATE_SENT_ADDBA, &tid_tx->state));
2503 +@@ -523,6 +523,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
2504 +
2505 + params.ssn = sta->tid_seq[tid] >> 4;
2506 + ret = drv_ampdu_action(local, sdata, &params);
2507 ++ tid_tx->ssn = params.ssn;
2508 + if (ret == IEEE80211_AMPDU_TX_START_DELAY_ADDBA) {
2509 + return;
2510 + } else if (ret == IEEE80211_AMPDU_TX_START_IMMEDIATE) {
2511 +@@ -889,6 +890,7 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
2512 + {
2513 + struct ieee80211_sub_if_data *sdata = sta->sdata;
2514 + bool send_delba = false;
2515 ++ bool start_txq = false;
2516 +
2517 + ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n",
2518 + sta->sta.addr, tid);
2519 +@@ -906,10 +908,14 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
2520 + send_delba = true;
2521 +
2522 + ieee80211_remove_tid_tx(sta, tid);
2523 ++ start_txq = true;
2524 +
2525 + unlock_sta:
2526 + spin_unlock_bh(&sta->lock);
2527 +
2528 ++ if (start_txq)
2529 ++ ieee80211_agg_start_txq(sta, tid, false);
2530 ++
2531 + if (send_delba)
2532 + ieee80211_send_delba(sdata, sta->sta.addr, tid,
2533 + WLAN_BACK_INITIATOR, WLAN_REASON_QSTA_NOT_USE);
2534 +diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
2535 +index bcdfd19a596be..a172f69c71123 100644
2536 +--- a/net/mac80211/driver-ops.h
2537 ++++ b/net/mac80211/driver-ops.h
2538 +@@ -1201,8 +1201,11 @@ static inline void drv_wake_tx_queue(struct ieee80211_local *local,
2539 + {
2540 + struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);
2541 +
2542 +- if (local->in_reconfig)
2543 ++ /* In reconfig don't transmit now, but mark for waking later */
2544 ++ if (local->in_reconfig) {
2545 ++ set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags);
2546 + return;
2547 ++ }
2548 +
2549 + if (!check_sdata_in_driver(sdata))
2550 + return;
2551 +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
2552 +index 32bc30ec50ec9..7bd42827540ae 100644
2553 +--- a/net/mac80211/mlme.c
2554 ++++ b/net/mac80211/mlme.c
2555 +@@ -2493,11 +2493,18 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
2556 + u16 tx_time)
2557 + {
2558 + struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
2559 +- u16 tid = ieee80211_get_tid(hdr);
2560 +- int ac = ieee80211_ac_from_tid(tid);
2561 +- struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];
2562 ++ u16 tid;
2563 ++ int ac;
2564 ++ struct ieee80211_sta_tx_tspec *tx_tspec;
2565 + unsigned long now = jiffies;
2566 +
2567 ++ if (!ieee80211_is_data_qos(hdr->frame_control))
2568 ++ return;
2569 ++
2570 ++ tid = ieee80211_get_tid(hdr);
2571 ++ ac = ieee80211_ac_from_tid(tid);
2572 ++ tx_tspec = &ifmgd->tx_tspec[ac];
2573 ++
2574 + if (likely(!tx_tspec->admitted_time))
2575 + return;
2576 +
2577 +diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
2578 +index 355e006432ccc..b9e5f8e8f29cc 100644
2579 +--- a/net/mac80211/sta_info.h
2580 ++++ b/net/mac80211/sta_info.h
2581 +@@ -190,6 +190,7 @@ struct tid_ampdu_tx {
2582 + u8 stop_initiator;
2583 + bool tx_stop;
2584 + u16 buf_size;
2585 ++ u16 ssn;
2586 +
2587 + u16 failed_bar_ssn;
2588 + bool bar_pending;
2589 +diff --git a/net/mac80211/util.c b/net/mac80211/util.c
2590 +index fbf56a203c0e8..a1f129292ad88 100644
2591 +--- a/net/mac80211/util.c
2592 ++++ b/net/mac80211/util.c
2593 +@@ -950,7 +950,12 @@ static void ieee80211_parse_extension_element(u32 *crc,
2594 + struct ieee802_11_elems *elems)
2595 + {
2596 + const void *data = elem->data + 1;
2597 +- u8 len = elem->datalen - 1;
2598 ++ u8 len;
2599 ++
2600 ++ if (!elem->datalen)
2601 ++ return;
2602 ++
2603 ++ len = elem->datalen - 1;
2604 +
2605 + switch (elem->data[0]) {
2606 + case WLAN_EID_EXT_HE_MU_EDCA:
2607 +diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
2608 +index 3ca8b359e399a..8123c79e27913 100644
2609 +--- a/net/mptcp/protocol.c
2610 ++++ b/net/mptcp/protocol.c
2611 +@@ -2149,7 +2149,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
2612 + */
2613 + if (WARN_ON_ONCE(!new_mptcp_sock)) {
2614 + tcp_sk(newsk)->is_mptcp = 0;
2615 +- return newsk;
2616 ++ goto out;
2617 + }
2618 +
2619 + /* acquire the 2nd reference for the owning socket */
2620 +@@ -2174,6 +2174,8 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
2621 + MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
2622 + }
2623 +
2624 ++out:
2625 ++ newsk->sk_kern_sock = kern;
2626 + return newsk;
2627 + }
2628 +
2629 +diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
2630 +index 08144559eed56..f78097aa403a8 100644
2631 +--- a/net/packet/af_packet.c
2632 ++++ b/net/packet/af_packet.c
2633 +@@ -4461,9 +4461,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
2634 + }
2635 +
2636 + out_free_pg_vec:
2637 +- bitmap_free(rx_owner_map);
2638 +- if (pg_vec)
2639 ++ if (pg_vec) {
2640 ++ bitmap_free(rx_owner_map);
2641 + free_pg_vec(pg_vec, order, req->tp_block_nr);
2642 ++ }
2643 + out:
2644 + return err;
2645 + }
2646 +diff --git a/net/rds/connection.c b/net/rds/connection.c
2647 +index a3bc4b54d4910..b4cc699c5fad3 100644
2648 +--- a/net/rds/connection.c
2649 ++++ b/net/rds/connection.c
2650 +@@ -253,6 +253,7 @@ static struct rds_connection *__rds_conn_create(struct net *net,
2651 + * should end up here, but if it
2652 + * does, reset/destroy the connection.
2653 + */
2654 ++ kfree(conn->c_path);
2655 + kmem_cache_free(rds_conn_slab, conn);
2656 + conn = ERR_PTR(-EOPNOTSUPP);
2657 + goto out;
2658 +diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
2659 +index 8073657a0fd25..cb1331b357451 100644
2660 +--- a/net/sched/cls_api.c
2661 ++++ b/net/sched/cls_api.c
2662 +@@ -3703,6 +3703,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
2663 + entry->mpls_mangle.ttl = tcf_mpls_ttl(act);
2664 + break;
2665 + default:
2666 ++ err = -EOPNOTSUPP;
2667 + goto err_out_locked;
2668 + }
2669 + } else if (is_tcf_skbedit_ptype(act)) {
2670 +diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
2671 +index c2c37ffd94f22..c580139fcedec 100644
2672 +--- a/net/sched/sch_cake.c
2673 ++++ b/net/sched/sch_cake.c
2674 +@@ -2736,7 +2736,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
2675 + q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),
2676 + GFP_KERNEL);
2677 + if (!q->tins)
2678 +- goto nomem;
2679 ++ return -ENOMEM;
2680 +
2681 + for (i = 0; i < CAKE_MAX_TINS; i++) {
2682 + struct cake_tin_data *b = q->tins + i;
2683 +@@ -2766,10 +2766,6 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
2684 + q->min_netlen = ~0;
2685 + q->min_adjlen = ~0;
2686 + return 0;
2687 +-
2688 +-nomem:
2689 +- cake_destroy(sch);
2690 +- return -ENOMEM;
2691 + }
2692 +
2693 + static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
2694 +diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
2695 +index c34cb6e81d855..9c224872ef035 100644
2696 +--- a/net/sched/sch_ets.c
2697 ++++ b/net/sched/sch_ets.c
2698 +@@ -668,9 +668,9 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
2699 + }
2700 + }
2701 + for (i = q->nbands; i < oldbands; i++) {
2702 +- qdisc_tree_flush_backlog(q->classes[i].qdisc);
2703 +- if (i >= q->nstrict)
2704 ++ if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
2705 + list_del(&q->classes[i].alist);
2706 ++ qdisc_tree_flush_backlog(q->classes[i].qdisc);
2707 + }
2708 + q->nstrict = nstrict;
2709 + memcpy(q->prio2band, priomap, sizeof(priomap));
2710 +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
2711 +index d324a12c26cd9..99b902e410c49 100644
2712 +--- a/net/smc/af_smc.c
2713 ++++ b/net/smc/af_smc.c
2714 +@@ -191,7 +191,9 @@ static int smc_release(struct socket *sock)
2715 + /* cleanup for a dangling non-blocking connect */
2716 + if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
2717 + tcp_abort(smc->clcsock->sk, ECONNABORTED);
2718 +- flush_work(&smc->connect_work);
2719 ++
2720 ++ if (cancel_work_sync(&smc->connect_work))
2721 ++ sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */
2722 +
2723 + if (sk->sk_state == SMC_LISTEN)
2724 + /* smc_close_non_accepted() is called and acquires
2725 +diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
2726 +index 902cb6dd710bd..d6d3a05c008a4 100644
2727 +--- a/net/vmw_vsock/virtio_transport_common.c
2728 ++++ b/net/vmw_vsock/virtio_transport_common.c
2729 +@@ -1153,7 +1153,8 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
2730 + space_available = virtio_transport_space_update(sk, pkt);
2731 +
2732 + /* Update CID in case it has changed after a transport reset event */
2733 +- vsk->local_addr.svm_cid = dst.svm_cid;
2734 ++ if (vsk->local_addr.svm_cid != VMADDR_CID_ANY)
2735 ++ vsk->local_addr.svm_cid = dst.svm_cid;
2736 +
2737 + if (space_available)
2738 + sk->sk_write_space(sk);
2739 +diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
2740 +index f459ae883a0a6..a4ca050815aba 100755
2741 +--- a/scripts/recordmcount.pl
2742 ++++ b/scripts/recordmcount.pl
2743 +@@ -252,7 +252,7 @@ if ($arch eq "x86_64") {
2744 +
2745 + } elsif ($arch eq "s390" && $bits == 64) {
2746 + if ($cc =~ /-DCC_USING_HOTPATCH/) {
2747 +- $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$";
2748 ++ $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$";
2749 + $mcount_adjust = 0;
2750 + } else {
2751 + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$";
2752 +diff --git a/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c b/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
2753 +index 86ccf37e26b3f..d16fd888230a5 100644
2754 +--- a/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
2755 ++++ b/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
2756 +@@ -90,7 +90,7 @@ static void print_err_line(void)
2757 +
2758 + static void test_conn(void)
2759 + {
2760 +- int listen_fd = -1, cli_fd = -1, err;
2761 ++ int listen_fd = -1, cli_fd = -1, srv_fd = -1, err;
2762 + socklen_t addrlen = sizeof(srv_sa6);
2763 + int srv_port;
2764 +
2765 +@@ -112,6 +112,10 @@ static void test_conn(void)
2766 + if (CHECK_FAIL(cli_fd == -1))
2767 + goto done;
2768 +
2769 ++ srv_fd = accept(listen_fd, NULL, NULL);
2770 ++ if (CHECK_FAIL(srv_fd == -1))
2771 ++ goto done;
2772 ++
2773 + if (CHECK(skel->bss->listen_tp_sport != srv_port ||
2774 + skel->bss->req_sk_sport != srv_port,
2775 + "Unexpected sk src port",
2776 +@@ -134,11 +138,13 @@ done:
2777 + close(listen_fd);
2778 + if (cli_fd != -1)
2779 + close(cli_fd);
2780 ++ if (srv_fd != -1)
2781 ++ close(srv_fd);
2782 + }
2783 +
2784 + static void test_syncookie(void)
2785 + {
2786 +- int listen_fd = -1, cli_fd = -1, err;
2787 ++ int listen_fd = -1, cli_fd = -1, srv_fd = -1, err;
2788 + socklen_t addrlen = sizeof(srv_sa6);
2789 + int srv_port;
2790 +
2791 +@@ -161,6 +167,10 @@ static void test_syncookie(void)
2792 + if (CHECK_FAIL(cli_fd == -1))
2793 + goto done;
2794 +
2795 ++ srv_fd = accept(listen_fd, NULL, NULL);
2796 ++ if (CHECK_FAIL(srv_fd == -1))
2797 ++ goto done;
2798 ++
2799 + if (CHECK(skel->bss->listen_tp_sport != srv_port,
2800 + "Unexpected tp src port",
2801 + "listen_tp_sport:%u expected:%u\n",
2802 +@@ -188,6 +198,8 @@ done:
2803 + close(listen_fd);
2804 + if (cli_fd != -1)
2805 + close(cli_fd);
2806 ++ if (srv_fd != -1)
2807 ++ close(srv_fd);
2808 + }
2809 +
2810 + struct test {
2811 +diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
2812 +index a3e593ddfafc9..d8765a4d5bc6b 100644
2813 +--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
2814 ++++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
2815 +@@ -848,6 +848,29 @@
2816 + .errstr = "R0 invalid mem access 'inv'",
2817 + .errstr_unpriv = "R0 pointer -= pointer prohibited",
2818 + },
2819 ++{
2820 ++ "map access: trying to leak tained dst reg",
2821 ++ .insns = {
2822 ++ BPF_MOV64_IMM(BPF_REG_0, 0),
2823 ++ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
2824 ++ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
2825 ++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
2826 ++ BPF_LD_MAP_FD(BPF_REG_1, 0),
2827 ++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
2828 ++ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
2829 ++ BPF_EXIT_INSN(),
2830 ++ BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
2831 ++ BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF),
2832 ++ BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
2833 ++ BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1),
2834 ++ BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
2835 ++ BPF_MOV64_IMM(BPF_REG_0, 0),
2836 ++ BPF_EXIT_INSN(),
2837 ++ },
2838 ++ .fixup_map_array_48b = { 4 },
2839 ++ .result = REJECT,
2840 ++ .errstr = "math between map_value pointer and 4294967295 is not allowed",
2841 ++},
2842 + {
2843 + "32bit pkt_ptr -= scalar",
2844 + .insns = {
2845 +diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
2846 +index 0299cd81b8ba2..aa3795cd7bd3d 100644
2847 +--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
2848 ++++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
2849 +@@ -12,6 +12,7 @@
2850 + #include <stdio.h>
2851 + #include <stdlib.h>
2852 + #include <string.h>
2853 ++#include <sys/resource.h>
2854 +
2855 + #include "test_util.h"
2856 +
2857 +@@ -40,10 +41,39 @@ int main(int argc, char *argv[])
2858 + {
2859 + int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID);
2860 + int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
2861 ++ /*
2862 ++ * Number of file descriptors reqired, KVM_CAP_MAX_VCPUS for vCPU fds +
2863 ++ * an arbitrary number for everything else.
2864 ++ */
2865 ++ int nr_fds_wanted = kvm_max_vcpus + 100;
2866 ++ struct rlimit rl;
2867 +
2868 + pr_info("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id);
2869 + pr_info("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus);
2870 +
2871 ++ /*
2872 ++ * Check that we're allowed to open nr_fds_wanted file descriptors and
2873 ++ * try raising the limits if needed.
2874 ++ */
2875 ++ TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!");
2876 ++
2877 ++ if (rl.rlim_cur < nr_fds_wanted) {
2878 ++ rl.rlim_cur = nr_fds_wanted;
2879 ++ if (rl.rlim_max < nr_fds_wanted) {
2880 ++ int old_rlim_max = rl.rlim_max;
2881 ++ rl.rlim_max = nr_fds_wanted;
2882 ++
2883 ++ int r = setrlimit(RLIMIT_NOFILE, &rl);
2884 ++ if (r < 0) {
2885 ++ printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n",
2886 ++ old_rlim_max, nr_fds_wanted);
2887 ++ exit(KSFT_SKIP);
2888 ++ }
2889 ++ } else {
2890 ++ TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!");
2891 ++ }
2892 ++ }
2893 ++
2894 + /*
2895 + * Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID.
2896 + * Userspace is supposed to use KVM_CAP_MAX_VCPUS as the maximum ID
2897 +diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
2898 +index 7c9ace9d15991..ace976d891252 100755
2899 +--- a/tools/testing/selftests/net/fcnal-test.sh
2900 ++++ b/tools/testing/selftests/net/fcnal-test.sh
2901 +@@ -446,6 +446,22 @@ cleanup()
2902 + ip netns del ${NSC} >/dev/null 2>&1
2903 + }
2904 +
2905 ++cleanup_vrf_dup()
2906 ++{
2907 ++ ip link del ${NSA_DEV2} >/dev/null 2>&1
2908 ++ ip netns pids ${NSC} | xargs kill 2>/dev/null
2909 ++ ip netns del ${NSC} >/dev/null 2>&1
2910 ++}
2911 ++
2912 ++setup_vrf_dup()
2913 ++{
2914 ++ # some VRF tests use ns-C which has the same config as
2915 ++ # ns-B but for a device NOT in the VRF
2916 ++ create_ns ${NSC} "-" "-"
2917 ++ connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
2918 ++ ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
2919 ++}
2920 ++
2921 + setup()
2922 + {
2923 + local with_vrf=${1}
2924 +@@ -475,12 +491,6 @@ setup()
2925 +
2926 + ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV}
2927 + ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV}
2928 +-
2929 +- # some VRF tests use ns-C which has the same config as
2930 +- # ns-B but for a device NOT in the VRF
2931 +- create_ns ${NSC} "-" "-"
2932 +- connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
2933 +- ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
2934 + else
2935 + ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV}
2936 + ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV}
2937 +@@ -1177,7 +1187,9 @@ ipv4_tcp_vrf()
2938 + log_test_addr ${a} $? 1 "Global server, local connection"
2939 +
2940 + # run MD5 tests
2941 ++ setup_vrf_dup
2942 + ipv4_tcp_md5
2943 ++ cleanup_vrf_dup
2944 +
2945 + #
2946 + # enable VRF global server
2947 +@@ -1735,8 +1747,9 @@ ipv4_addr_bind_vrf()
2948 + for a in ${NSA_IP} ${VRF_IP}
2949 + do
2950 + log_start
2951 ++ show_hint "Socket not bound to VRF, but address is in VRF"
2952 + run_cmd nettest -s -R -P icmp -l ${a} -b
2953 +- log_test_addr ${a} $? 0 "Raw socket bind to local address"
2954 ++ log_test_addr ${a} $? 1 "Raw socket bind to local address"
2955 +
2956 + log_start
2957 + run_cmd nettest -s -R -P icmp -l ${a} -d ${NSA_DEV} -b
2958 +@@ -2128,7 +2141,7 @@ ipv6_ping_vrf()
2959 + log_start
2960 + show_hint "Fails since VRF device does not support linklocal or multicast"
2961 + run_cmd ${ping6} -c1 -w1 ${a}
2962 +- log_test_addr ${a} $? 2 "ping out, VRF bind"
2963 ++ log_test_addr ${a} $? 1 "ping out, VRF bind"
2964 + done
2965 +
2966 + for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}
2967 +@@ -2656,7 +2669,9 @@ ipv6_tcp_vrf()
2968 + log_test_addr ${a} $? 1 "Global server, local connection"
2969 +
2970 + # run MD5 tests
2971 ++ setup_vrf_dup
2972 + ipv6_tcp_md5
2973 ++ cleanup_vrf_dup
2974 +
2975 + #
2976 + # enable VRF global server
2977 +@@ -3351,11 +3366,14 @@ ipv6_addr_bind_novrf()
2978 + run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
2979 + log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"
2980 +
2981 ++ # Sadly, the kernel allows binding a socket to a device and then
2982 ++ # binding to an address not on the device. So this test passes
2983 ++ # when it really should not
2984 + a=${NSA_LO_IP6}
2985 + log_start
2986 +- show_hint "Should fail with 'Cannot assign requested address'"
2987 +- run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
2988 +- log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"
2989 ++ show_hint "Tecnically should fail since address is not on device but kernel allows"
2990 ++ run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
2991 ++ log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"
2992 + }
2993 +
2994 + ipv6_addr_bind_vrf()
2995 +@@ -3396,10 +3414,15 @@ ipv6_addr_bind_vrf()
2996 + run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
2997 + log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"
2998 +
2999 ++ # Sadly, the kernel allows binding a socket to a device and then
3000 ++ # binding to an address not on the device. The only restriction
3001 ++ # is that the address is valid in the L3 domain. So this test
3002 ++ # passes when it really should not
3003 + a=${VRF_IP6}
3004 + log_start
3005 +- run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
3006 +- log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"
3007 ++ show_hint "Tecnically should fail since address is not on device but kernel allows"
3008 ++ run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
3009 ++ log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"
3010 +
3011 + a=${NSA_LO_IP6}
3012 + log_start
3013 +diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
3014 +index e5e2fbeca22ec..e51def39fd801 100644
3015 +--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
3016 ++++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
3017 +@@ -13,6 +13,8 @@ NETIFS[p5]=veth4
3018 + NETIFS[p6]=veth5
3019 + NETIFS[p7]=veth6
3020 + NETIFS[p8]=veth7
3021 ++NETIFS[p9]=veth8
3022 ++NETIFS[p10]=veth9
3023 +
3024 + # Port that does not have a cable connected.
3025 + NETIF_NO_CABLE=eth8
3026 +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
3027 +index 97ac3c6fd4441..4a7d377b3a500 100644
3028 +--- a/virt/kvm/kvm_main.c
3029 ++++ b/virt/kvm/kvm_main.c
3030 +@@ -2590,7 +2590,8 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
3031 + int r;
3032 + gpa_t gpa = ghc->gpa + offset;
3033 +
3034 +- BUG_ON(len + offset > ghc->len);
3035 ++ if (WARN_ON_ONCE(len + offset > ghc->len))
3036 ++ return -EINVAL;
3037 +
3038 + if (slots->generation != ghc->generation) {
3039 + if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))
3040 +@@ -2627,7 +2628,8 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
3041 + int r;
3042 + gpa_t gpa = ghc->gpa + offset;
3043 +
3044 +- BUG_ON(len + offset > ghc->len);
3045 ++ if (WARN_ON_ONCE(len + offset > ghc->len))
3046 ++ return -EINVAL;
3047 +
3048 + if (slots->generation != ghc->generation) {
3049 + if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))