Gentoo Archives: gentoo-commits

From: Mike Pagano <mpagano@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] proj/linux-patches:5.12 commit in: /
Date: Wed, 14 Jul 2021 16:18:06
Message-Id: 1626279470.bf0be5aef037fe660c266831697103692019cbaf.mpagano@gentoo
commit:     bf0be5aef037fe660c266831697103692019cbaf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 14 16:17:50 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 14 16:17:50 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bf0be5ae

Linux patch 5.12.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1016_linux-5.12.17.patch | 25146 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 25150 insertions(+)

diff --git a/0000_README b/0000_README
index 010c87c..10634ea 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-5.12.16.patch
From: http://www.kernel.org
Desc: Linux 5.12.16

+Patch: 1016_linux-5.12.17.patch
+From: http://www.kernel.org
+Desc: Linux 5.12.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.12.17.patch b/1016_linux-5.12.17.patch
new file mode 100644
index 0000000..f1a8f56
--- /dev/null
+++ b/1016_linux-5.12.17.patch
@@ -0,0 +1,25146 @@
+diff --git a/Documentation/ABI/testing/evm b/Documentation/ABI/testing/evm
+index 3c477ba48a312..2243b72e41107 100644
+--- a/Documentation/ABI/testing/evm
++++ b/Documentation/ABI/testing/evm
+@@ -49,8 +49,30 @@ Description:
+ modification of EVM-protected metadata and
+ disable all further modification of policy
+
+- Note that once a key has been loaded, it will no longer be
+- possible to enable metadata modification.
++ Echoing a value is additive, the new value is added to the
++ existing initialization flags.
++
++ For example, after::
++
++ echo 2 ><securityfs>/evm
++
++ another echo can be performed::
++
++ echo 1 ><securityfs>/evm
++
++ and the resulting value will be 3.
++
++ Note that once an HMAC key has been loaded, it will no longer
++ be possible to enable metadata modification. Signaling that an
++ HMAC key has been loaded will clear the corresponding flag.
++ For example, if the current value is 6 (2 and 4 set)::
++
++ echo 1 ><securityfs>/evm
++
++ will set the new value to 3 (4 cleared).
++
++ Loading an HMAC key is the only way to disable metadata
++ modification.
+
+ Until key loading has been signaled EVM can not create
+ or validate the 'security.evm' xattr, but returns
+diff --git a/Documentation/ABI/testing/sysfs-bus-papr-pmem b/Documentation/ABI/testing/sysfs-bus-papr-pmem
+index 8316c33862a04..0aa02bf2bde5c 100644
+--- a/Documentation/ABI/testing/sysfs-bus-papr-pmem
++++ b/Documentation/ABI/testing/sysfs-bus-papr-pmem
+@@ -39,9 +39,11 @@ KernelVersion: v5.9
+ Contact: linuxppc-dev <linuxppc-dev@××××××××××××.org>, linux-nvdimm@××××××××.org,
+ Description:
+ (RO) Report various performance stats related to papr-scm NVDIMM
+- device. Each stat is reported on a new line with each line
+- composed of a stat-identifier followed by it value. Below are
+- currently known dimm performance stats which are reported:
++ device. This attribute is only available for NVDIMM devices
++ that support reporting NVDIMM performance stats. Each stat is
++ reported on a new line with each line composed of a
++ stat-identifier followed by it value. Below are currently known
++ dimm performance stats which are reported:
+
+ * "CtlResCt" : Controller Reset Count
+ * "CtlResTm" : Controller Reset Elapsed Time
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 835f810f2f26d..c08e174e6ff43 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -583,6 +583,12 @@
+ loops can be debugged more effectively on production
+ systems.
+
++ clocksource.max_cswd_read_retries= [KNL]
++ Number of clocksource_watchdog() retries due to
++ external delays before the clock will be marked
++ unstable. Defaults to three retries, that is,
++ four attempts to read the clock under test.
++
+ clearcpuid=BITNUM[,BITNUM...] [X86]
+ Disable CPUID feature X for the kernel. See
+ arch/x86/include/asm/cpufeatures.h for the valid bit
+diff --git a/Documentation/hwmon/max31790.rst b/Documentation/hwmon/max31790.rst
+index f301385d8cef3..7b097c3b9b908 100644
+--- a/Documentation/hwmon/max31790.rst
++++ b/Documentation/hwmon/max31790.rst
+@@ -38,6 +38,7 @@ Sysfs entries
+ fan[1-12]_input RO fan tachometer speed in RPM
+ fan[1-12]_fault RO fan experienced fault
+ fan[1-6]_target RW desired fan speed in RPM
+-pwm[1-6]_enable RW regulator mode, 0=disabled, 1=manual mode, 2=rpm mode
+-pwm[1-6] RW fan target duty cycle (0-255)
++pwm[1-6]_enable RW regulator mode, 0=disabled (duty cycle=0%), 1=manual mode, 2=rpm mode
++pwm[1-6] RW read: current pwm duty cycle,
++ write: target pwm duty cycle (0-255)
+ ================== === =======================================================
+diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+index 00944e97d6380..09f28ba60e6fc 100644
+--- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
++++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+@@ -3285,7 +3285,7 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+ :stub-columns: 0
+ :widths: 1 1 2
+
+- * - ``V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT``
++ * - ``V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT_ENABLED``
+ - 0x00000001
+ -
+ * - ``V4L2_HEVC_PPS_FLAG_OUTPUT_FLAG_PRESENT``
+@@ -3493,6 +3493,9 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+ * - ``V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED``
+ - 0x00000100
+ -
++ * - ``V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT``
++ - 0x00000200
++ -
+
+ .. c:type:: v4l2_hevc_dpb_entry
+
+diff --git a/Makefile b/Makefile
+index bf6accb2328c3..f1d0775925cc6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 12
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Frozen Wasteland
+
+@@ -1006,7 +1006,7 @@ LDFLAGS_vmlinux += $(call ld-option, -X,)
+ endif
+
+ ifeq ($(CONFIG_RELR),y)
+-LDFLAGS_vmlinux += --pack-dyn-relocs=relr
++LDFLAGS_vmlinux += --pack-dyn-relocs=relr --use-android-relr-tags
+ endif
+
+ # We never want expected sections to be placed heuristically by the
+diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
+index f4dd9f3f30010..4b2575f936d46 100644
+--- a/arch/alpha/kernel/smp.c
++++ b/arch/alpha/kernel/smp.c
+@@ -166,7 +166,6 @@ smp_callin(void)
+ DBGS(("smp_callin: commencing CPU %d current %p active_mm %p\n",
+ cpuid, current, current->active_mm));
+
+- preempt_disable();
+ cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+
+diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
+index 52906d3145371..db0e104d68355 100644
+--- a/arch/arc/kernel/smp.c
++++ b/arch/arc/kernel/smp.c
+@@ -189,7 +189,6 @@ void start_kernel_secondary(void)
+ pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
+
+ local_irq_enable();
+- preempt_disable();
+ cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+
+diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi
+index 05c55875835d5..f70a8528b9598 100644
+--- a/arch/arm/boot/dts/sama5d4.dtsi
++++ b/arch/arm/boot/dts/sama5d4.dtsi
+@@ -787,7 +787,7 @@
+ 0xffffffff 0x3ffcfe7c 0x1c010101 /* pioA */
+ 0x7fffffff 0xfffccc3a 0x3f00cc3a /* pioB */
+ 0xffffffff 0x3ff83fff 0xff00ffff /* pioC */
+- 0x0003ff00 0x8002a800 0x00000000 /* pioD */
++ 0xb003ff00 0x8002a800 0x00000000 /* pioD */
+ 0xffffffff 0x7fffffff 0x76fff1bf /* pioE */
+ >;
+
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index 83b179692dff7..13d2161929042 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -4,6 +4,7 @@
+ */
+
+ #include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/leds/common.h>
+ #include "ste-href-family-pinctrl.dtsi"
+
+ / {
+@@ -64,17 +65,20 @@
+ reg = <0>;
+ led-cur = /bits/ 8 <0x2f>;
+ max-cur = /bits/ 8 <0x5f>;
++ color = <LED_COLOR_ID_BLUE>;
+ linux,default-trigger = "heartbeat";
+ };
+ chan@1 {
+ reg = <1>;
+ led-cur = /bits/ 8 <0x2f>;
+ max-cur = /bits/ 8 <0x5f>;
++ color = <LED_COLOR_ID_BLUE>;
+ };
+ chan@2 {
+ reg = <2>;
+ led-cur = /bits/ 8 <0x2f>;
+ max-cur = /bits/ 8 <0x5f>;
++ color = <LED_COLOR_ID_BLUE>;
+ };
+ };
+ lp5521@34 {
+@@ -88,16 +92,19 @@
+ reg = <0>;
+ led-cur = /bits/ 8 <0x2f>;
+ max-cur = /bits/ 8 <0x5f>;
++ color = <LED_COLOR_ID_BLUE>;
+ };
+ chan@1 {
+ reg = <1>;
+ led-cur = /bits/ 8 <0x2f>;
+ max-cur = /bits/ 8 <0x5f>;
++ color = <LED_COLOR_ID_BLUE>;
+ };
+ chan@2 {
+ reg = <2>;
+ led-cur = /bits/ 8 <0x2f>;
+ max-cur = /bits/ 8 <0x5f>;
++ color = <LED_COLOR_ID_BLUE>;
+ };
+ };
+ bh1780@29 {
+diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
+index 2924d7910b106..eb2190477da10 100644
+--- a/arch/arm/kernel/perf_event_v7.c
++++ b/arch/arm/kernel/perf_event_v7.c
+@@ -773,10 +773,10 @@ static inline void armv7pmu_write_counter(struct perf_event *event, u64 value)
+ pr_err("CPU%u writing wrong counter %d\n",
+ smp_processor_id(), idx);
+ } else if (idx == ARMV7_IDX_CYCLE_COUNTER) {
+- asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (value));
++ asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" ((u32)value));
+ } else {
+ armv7_pmnc_select_counter(idx);
+- asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" (value));
++ asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" ((u32)value));
+ }
+ }
+
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 74679240a9d8e..c7bb168b0d97c 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -432,7 +432,6 @@ asmlinkage void secondary_start_kernel(void)
+ #endif
+ pr_debug("CPU%u: Booted secondary processor\n", cpu);
+
+- preempt_disable();
+ trace_hardirqs_off();
+
+ /*
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 456dcd4a7793f..6ffbb099fcac7 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -134,7 +134,7 @@
+
+ uart0: serial@12000 {
+ compatible = "marvell,armada-3700-uart";
+- reg = <0x12000 0x200>;
++ reg = <0x12000 0x18>;
+ clocks = <&xtalclk>;
+ interrupts =
+ <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 858c2fcfc043b..4e4356add46ed 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -46,6 +46,7 @@
+ #define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
+ #define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
+ #define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)
++#define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
+
+ #define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
+ KVM_DIRTY_LOG_INITIALLY_SET)
+diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
+index bd02e99b1a4c5..44dceac442fc5 100644
+--- a/arch/arm64/include/asm/mmu_context.h
++++ b/arch/arm64/include/asm/mmu_context.h
+@@ -177,9 +177,9 @@ static inline void update_saved_ttbr0(struct task_struct *tsk,
+ return;
+
+ if (mm == &init_mm)
+- ttbr = __pa_symbol(reserved_pg_dir);
++ ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ else
+- ttbr = virt_to_phys(mm->pgd) | ASID(mm) << 48;
++ ttbr = phys_to_ttbr(virt_to_phys(mm->pgd)) | ASID(mm) << 48;
+
+ WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
+ }
+diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
+index 80e946b2abee2..e83f0982b99c1 100644
+--- a/arch/arm64/include/asm/preempt.h
++++ b/arch/arm64/include/asm/preempt.h
+@@ -23,7 +23,7 @@ static inline void preempt_count_set(u64 pc)
+ } while (0)
+
+ #define init_idle_preempt_count(p, cpu) do { \
+- task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
++ task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
+ } while (0)
+
+ static inline void set_preempt_need_resched(void)
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 4658fcf88c2b4..3baca49fcf6bd 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -312,7 +312,7 @@ static ssize_t slots_show(struct device *dev, struct device_attribute *attr,
+ struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
+ u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK;
+
+- return snprintf(page, PAGE_SIZE, "0x%08x\n", slots);
++ return sysfs_emit(page, "0x%08x\n", slots);
+ }
+
+ static DEVICE_ATTR_RO(slots);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 61845c0821d9d..68b30e8c22dbf 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -381,7 +381,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
+ * faults in case uaccess_enable() is inadvertently called by the init
+ * thread.
+ */
+- init_task.thread_info.ttbr0 = __pa_symbol(reserved_pg_dir);
++ init_task.thread_info.ttbr0 = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ #endif
+
+ if (boot_args[1] || boot_args[2] || boot_args[3]) {
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 357590beaabb2..48fd892567390 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -223,7 +223,6 @@ asmlinkage notrace void secondary_start_kernel(void)
+ init_gic_priority_masking();
+
+ rcu_cpu_starting(cpu);
+- preempt_disable();
+ trace_hardirqs_off();
+
+ /*
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 7730b81aad6d1..8455c5c30116d 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -684,6 +684,10 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
+ vgic_v4_load(vcpu);
+ preempt_enable();
+ }
++
++ if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
++ kvm_pmu_handle_pmcr(vcpu,
++ __vcpu_sys_reg(vcpu, PMCR_EL0));
+ }
+ }
+
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index e32c6e139a09d..14957f7c7303a 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -578,6 +578,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
+ kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
+
+ if (val & ARMV8_PMU_PMCR_P) {
++ mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
+ for_each_set_bit(i, &mask, 32)
+ kvm_pmu_set_counter_value(vcpu, i, 0);
+ }
+@@ -850,6 +851,9 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
+ return -EINVAL;
+ }
+
++ /* One-off reload of the PMU on first run */
++ kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
++
+ return 0;
+ }
+
+diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
+index 0f9f5eef93386..e2993539af8ef 100644
+--- a/arch/csky/kernel/smp.c
++++ b/arch/csky/kernel/smp.c
+@@ -281,7 +281,6 @@ void csky_start_secondary(void)
+ pr_info("CPU%u Online: %s...\n", cpu, __func__);
+
+ local_irq_enable();
+- preempt_disable();
+ cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+
+diff --git a/arch/csky/mm/syscache.c b/arch/csky/mm/syscache.c
+index ffade2f9a4c87..cd847ad62c7ee 100644
+--- a/arch/csky/mm/syscache.c
++++ b/arch/csky/mm/syscache.c
+@@ -12,14 +12,17 @@ SYSCALL_DEFINE3(cacheflush,
+ int, cache)
+ {
+ switch (cache) {
+- case ICACHE:
+ case BCACHE:
+- flush_icache_mm_range(current->mm,
+- (unsigned long)addr,
+- (unsigned long)addr + bytes);
+ case DCACHE:
+ dcache_wb_range((unsigned long)addr,
+ (unsigned long)addr + bytes);
++ if (cache != BCACHE)
++ break;
++ fallthrough;
++ case ICACHE:
++ flush_icache_mm_range(current->mm,
++ (unsigned long)addr,
++ (unsigned long)addr + bytes);
+ break;
+ default:
+ return -EINVAL;
+diff --git a/arch/ia64/kernel/mca_drv.c b/arch/ia64/kernel/mca_drv.c
+index 36a69b4e61690..5bfc79be4cefe 100644
+--- a/arch/ia64/kernel/mca_drv.c
++++ b/arch/ia64/kernel/mca_drv.c
+@@ -343,7 +343,7 @@ init_record_index_pools(void)
+
+ /* - 2 - */
+ sect_min_size = sal_log_sect_min_sizes[0];
+- for (i = 1; i < sizeof sal_log_sect_min_sizes/sizeof(size_t); i++)
++ for (i = 1; i < ARRAY_SIZE(sal_log_sect_min_sizes); i++)
+ if (sect_min_size > sal_log_sect_min_sizes[i])
+ sect_min_size = sal_log_sect_min_sizes[i];
+
+diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
+index 49b4885809399..d10f780c13b9e 100644
+--- a/arch/ia64/kernel/smpboot.c
++++ b/arch/ia64/kernel/smpboot.c
+@@ -441,7 +441,6 @@ start_secondary (void *unused)
+ #endif
+ efi_map_pal_code();
+ cpu_init();
+- preempt_disable();
+ smp_callin();
+
+ cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index 4d59ec2f5b8d6..d964c1f273995 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -25,6 +25,9 @@ config ATARI
+ this kernel on an Atari, say Y here and browse the material
+ available in <file:Documentation/m68k>; otherwise say N.
+
++config ATARI_KBD_CORE
++ bool
++
+ config MAC
+ bool "Macintosh support"
+ depends on MMU
+diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
+index 292d0425717f3..92a3802100178 100644
+--- a/arch/mips/include/asm/highmem.h
++++ b/arch/mips/include/asm/highmem.h
+@@ -36,7 +36,7 @@ extern pte_t *pkmap_page_table;
+ * easily, subsequent pte tables have to be allocated in one physical
+ * chunk of RAM.
+ */
+-#ifdef CONFIG_PHYS_ADDR_T_64BIT
++#if defined(CONFIG_PHYS_ADDR_T_64BIT) || defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
+ #define LAST_PKMAP 512
+ #else
+ #define LAST_PKMAP 1024
+diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
+index ef86fbad85460..d542fb7af3ba2 100644
+--- a/arch/mips/kernel/smp.c
++++ b/arch/mips/kernel/smp.c
+@@ -348,7 +348,6 @@ asmlinkage void start_secondary(void)
+ */
+
+ calibrate_delay();
+- preempt_disable();
+ cpu = smp_processor_id();
+ cpu_data[cpu].udelay_val = loops_per_jiffy;
+
+diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
+index 48e1092a64de3..415e209732a3d 100644
+--- a/arch/openrisc/kernel/smp.c
++++ b/arch/openrisc/kernel/smp.c
+@@ -145,8 +145,6 @@ asmlinkage __init void secondary_start_kernel(void)
+ set_cpu_online(cpu, true);
+
+ local_irq_enable();
+-
+- preempt_disable();
+ /*
+ * OK, it's off to the idle thread for us
+ */
+diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
+index 10227f667c8a6..1405b603b91b6 100644
+--- a/arch/parisc/kernel/smp.c
++++ b/arch/parisc/kernel/smp.c
+@@ -302,7 +302,6 @@ void __init smp_callin(unsigned long pdce_proc)
+ #endif
+
+ smp_cpu_init(slave_id);
+- preempt_disable();
+
+ flush_cache_all_local(); /* start with known state */
+ flush_tlb_all_local(NULL);
+diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
+index 98c8bd155bf9d..b167186aaee4a 100644
+--- a/arch/powerpc/include/asm/cputhreads.h
++++ b/arch/powerpc/include/asm/cputhreads.h
+@@ -98,6 +98,36 @@ static inline int cpu_last_thread_sibling(int cpu)
+ return cpu | (threads_per_core - 1);
+ }
+
++/*
++ * tlb_thread_siblings are siblings which share a TLB. This is not
++ * architected, is not something a hypervisor could emulate and a future
++ * CPU may change behaviour even in compat mode, so this should only be
++ * used on PowerNV, and only with care.
++ */
++static inline int cpu_first_tlb_thread_sibling(int cpu)
++{
++ if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++ return cpu & ~0x6; /* Big Core */
++ else
++ return cpu_first_thread_sibling(cpu);
++}
++
++static inline int cpu_last_tlb_thread_sibling(int cpu)
++{
++ if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++ return cpu | 0x6; /* Big Core */
++ else
++ return cpu_last_thread_sibling(cpu);
++}
++
++static inline int cpu_tlb_thread_sibling_step(void)
++{
++ if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++ return 2; /* Big Core */
++ else
++ return 1;
++}
++
+ static inline u32 get_tensr(void)
+ {
+ #ifdef CONFIG_BOOKE
+diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
+index 31ed5356590a2..6d15728f06808 100644
+--- a/arch/powerpc/include/asm/interrupt.h
++++ b/arch/powerpc/include/asm/interrupt.h
+@@ -120,6 +120,7 @@ struct interrupt_nmi_state {
+ u8 irq_happened;
+ #endif
+ u8 ftrace_enabled;
++ u64 softe;
+ #endif
+ };
+
+@@ -129,6 +130,7 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
+ #ifdef CONFIG_PPC_BOOK3S_64
+ state->irq_soft_mask = local_paca->irq_soft_mask;
+ state->irq_happened = local_paca->irq_happened;
++ state->softe = regs->softe;
+
+ /*
+ * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does
+@@ -178,6 +180,7 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
+ #ifdef CONFIG_PPC_BOOK3S_64
+ /* Check we didn't change the pending interrupt mask. */
+ WARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);
++ regs->softe = state->softe;
+ local_paca->irq_happened = state->irq_happened;
+ local_paca->irq_soft_mask = state->irq_soft_mask;
+ #endif
+diff --git a/arch/powerpc/include/asm/kvm_guest.h b/arch/powerpc/include/asm/kvm_guest.h
+index 2fca299f7e192..c63105d2c9e7c 100644
+--- a/arch/powerpc/include/asm/kvm_guest.h
++++ b/arch/powerpc/include/asm/kvm_guest.h
+@@ -16,10 +16,10 @@ static inline bool is_kvm_guest(void)
+ return static_branch_unlikely(&kvm_guest);
+ }
+
+-bool check_kvm_guest(void);
++int check_kvm_guest(void);
+ #else
+ static inline bool is_kvm_guest(void) { return false; }
+-static inline bool check_kvm_guest(void) { return false; }
++static inline int check_kvm_guest(void) { return 0; }
+ #endif
+
+ #endif /* _ASM_POWERPC_KVM_GUEST_H_ */
+diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
+index c9e2819b095ab..c7022c41cc314 100644
+--- a/arch/powerpc/kernel/firmware.c
++++ b/arch/powerpc/kernel/firmware.c
+@@ -23,18 +23,20 @@ EXPORT_SYMBOL_GPL(powerpc_firmware_features);
+
+ #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+ DEFINE_STATIC_KEY_FALSE(kvm_guest);
+-bool check_kvm_guest(void)
++int __init check_kvm_guest(void)
+ {
+ struct device_node *hyper_node;
+
+ hyper_node = of_find_node_by_path("/hypervisor");
+ if (!hyper_node)
+- return false;
++ return 0;
+
+ if (!of_device_is_compatible(hyper_node, "linux,kvm"))
+- return false;
++ return 0;
+
+ static_branch_enable(&kvm_guest);
+- return true;
++
++ return 0;
+ }
++core_initcall(check_kvm_guest); // before kvm_guest_init()
+ #endif
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index 667104d4c4550..2fff886c549d0 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -481,12 +481,11 @@ static int mce_find_instr_ea_and_phys(struct pt_regs *regs, uint64_t *addr,
+ return -1;
+ }
+
+-static int mce_handle_ierror(struct pt_regs *regs,
++static int mce_handle_ierror(struct pt_regs *regs, unsigned long srr1,
+ const struct mce_ierror_table table[],
+ struct mce_error_info *mce_err, uint64_t *addr,
+ uint64_t *phys_addr)
+ {
+- uint64_t srr1 = regs->msr;
+ int handled = 0;
+ int i;
+
+@@ -695,19 +694,19 @@ static long mce_handle_ue_error(struct pt_regs *regs,
+ }
+
+ static long mce_handle_error(struct pt_regs *regs,
++ unsigned long srr1,
+ const struct mce_derror_table dtable[],
+ const struct mce_ierror_table itable[])
+ {
+ struct mce_error_info mce_err = { 0 };
+ uint64_t addr, phys_addr = ULONG_MAX;
+- uint64_t srr1 = regs->msr;
+ long handled;
+
+ if (SRR1_MC_LOADSTORE(srr1))
+ handled = mce_handle_derror(regs, dtable, &mce_err, &addr,
+ &phys_addr);
+ else
+- handled = mce_handle_ierror(regs, itable, &mce_err, &addr,
++ handled = mce_handle_ierror(regs, srr1, itable, &mce_err, &addr,
+ &phys_addr);
+
+ if (!handled && mce_err.error_type == MCE_ERROR_TYPE_UE)
+@@ -723,16 +722,20 @@ long __machine_check_early_realmode_p7(struct pt_regs *regs)
+ /* P7 DD1 leaves top bits of DSISR undefined */
+ regs->dsisr &= 0x0000ffff;
+
+- return mce_handle_error(regs, mce_p7_derror_table, mce_p7_ierror_table);
++ return mce_handle_error(regs, regs->msr,
++ mce_p7_derror_table, mce_p7_ierror_table);
+ }
+
+ long __machine_check_early_realmode_p8(struct pt_regs *regs)
+ {
+- return mce_handle_error(regs, mce_p8_derror_table, mce_p8_ierror_table);
++ return mce_handle_error(regs, regs->msr,
++ mce_p8_derror_table, mce_p8_ierror_table);
+ }
+
+ long __machine_check_early_realmode_p9(struct pt_regs *regs)
+ {
++ unsigned long srr1 = regs->msr;
++
+ /*
+ * On POWER9 DD2.1 and below, it's possible to get a machine check
+ * caused by a paste instruction where only DSISR bit 25 is set. This
+@@ -746,10 +749,39 @@ long __machine_check_early_realmode_p9(struct pt_regs *regs)
+ if (SRR1_MC_LOADSTORE(regs->msr) && regs->dsisr == 0x02000000)
+ return 1;
+
+- return mce_handle_error(regs, mce_p9_derror_table, mce_p9_ierror_table);
++ /*
++ * Async machine check due to bad real address from store or foreign
++ * link time out comes with the load/store bit (PPC bit 42) set in
++ * SRR1, but the cause comes in SRR1 not DSISR. Clear bit 42 so we're
++ * directed to the ierror table so it will find the cause (which
++ * describes it correctly as a store error).
++ */
++ if (SRR1_MC_LOADSTORE(srr1) &&
++ ((srr1 & 0x081c0000) == 0x08140000 ||
++ (srr1 & 0x081c0000) == 0x08180000)) {
++ srr1 &= ~PPC_BIT(42);
++ }
++
++ return mce_handle_error(regs, srr1,
++ mce_p9_derror_table, mce_p9_ierror_table);
+ }
+
+ long __machine_check_early_realmode_p10(struct pt_regs *regs)
+ {
+- return mce_handle_error(regs, mce_p10_derror_table, mce_p10_ierror_table);
++ unsigned long srr1 = regs->msr;
++
++ /*
++ * Async machine check due to bad real address from store comes with
++ * the load/store bit (PPC bit 42) set in SRR1, but the cause comes in
++ * SRR1 not DSISR. Clear bit 42 so we're directed to the ierror table
++ * so it will find the cause (which describes it correctly as a store
++ * error).
++ */
++ if (SRR1_MC_LOADSTORE(srr1) &&
++ (srr1 & 0x081c0000) == 0x08140000) {
++ srr1 &= ~PPC_BIT(42);
++ }
++
++ return mce_handle_error(regs, srr1,
++ mce_p10_derror_table, mce_p10_ierror_table);
+ }
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 3231c2df9e261..03d7261e14923 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1212,6 +1212,19 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ __flush_tlb_pending(batch);
+ batch->active = 0;
+ }
++
++ /*
++ * On POWER9 the copy-paste buffer can only paste into
++ * foreign real addresses, so unprivileged processes can not
++ * see the data or use it in any way unless they have
++ * foreign real mappings. If the new process has the foreign
++ * real address mappings, we must issue a cp_abort to clear
++ * any state and prevent snooping, corruption or a covert
++ * channel. ISA v3.1 supports paste into local memory.
++ */
++ if (new->mm && (cpu_has_feature(CPU_FTR_ARCH_31) ||
++ atomic_read(&new->mm->context.vas_windows)))
++ asm volatile(PPC_CP_ABORT);
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+
+ #ifdef CONFIG_PPC_ADV_DEBUG_REGS
+@@ -1257,30 +1270,33 @@ struct task_struct *__switch_to(struct task_struct *prev,
+
+ last = _switch(old_thread, new_thread);
+
++ /*
++ * Nothing after _switch will be run for newly created tasks,
++ * because they switch directly to ret_from_fork/ret_from_kernel_thread
++ * etc. Code added here should have a comment explaining why that is
++ * okay.
++ */
++
+ #ifdef CONFIG_PPC_BOOK3S_64
++ /*
++ * This applies to a process that was context switched while inside
++ * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
++ * deactivated above, before _switch(). This will never be the case
++ * for new tasks.
++ */
+ if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
+ current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
+ batch = this_cpu_ptr(&ppc64_tlb_batch);
+ batch->active = 1;
+ }
+
+- if (current->thread.regs) {
++ /*
++ * Math facilities are masked out of the child MSR in copy_thread.
++ * A new task does not need to restore_math because it will
++ * demand fault them.
++ */
++ if (current->thread.regs)
+ restore_math(current->thread.regs);
+-
+- /*
+- * On POWER9 the copy-paste buffer can only paste into
+- * foreign real addresses, so unprivileged processes can not
+- * see the data or use it in any way unless they have
+- * foreign real mappings. If the new process has the foreign
+- * real address mappings, we must issue a cp_abort to clear
+- * any state and prevent snooping, corruption or a covert
+- * channel. ISA v3.1 supports paste into local memory.
+- */
+- if (current->mm &&
+- (cpu_has_feature(CPU_FTR_ARCH_31) ||
+- atomic_read(&current->mm->context.vas_windows)))
+- asm volatile(PPC_CP_ABORT);
+- }
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+
+ return last;
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index c2473e20f5f5b..216919de87d7e 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -619,6 +619,8 @@ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ /*
+ * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+ */
++ set_cpu_online(smp_processor_id(), false);
++
+ spin_begin();
+ while (1)
+ spin_cpu_relax();
+@@ -634,6 +636,15 @@ void smp_send_stop(void)
+ static void stop_this_cpu(void *dummy)
+ {
+ hard_irq_disable();
++
++ /*
++ * Offlining CPUs in stop_this_cpu can result in scheduler warnings,
++ * (see commit de6e5d38417e), but printk_safe_flush_on_panic() wants
++ * to know other CPUs are offline before it breaks locks to flush
++ * printk buffers, in case we panic()ed while holding the lock.
++ */
++ set_cpu_online(smp_processor_id(), false);
++
+ spin_begin();
+ while (1)
+ spin_cpu_relax();
+@@ -1530,7 +1541,6 @@ void start_secondary(void *unused)
+ smp_store_cpu_info(cpu);
+ set_dec(tb_ticks_per_jiffy);
+ rcu_cpu_starting(cpu);
+- preempt_disable();
+ cpu_callin_map[cpu] = 1;
+
+ if (smp_ops->setup_cpu)
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index b6440657ef92d..b60c00e98dc01 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -230,17 +230,31 @@ static void handle_backtrace_ipi(struct pt_regs *regs)
+
+ static void raise_backtrace_ipi(cpumask_t *mask)
+ {
++ struct paca_struct *p;
+ unsigned int cpu;
++ u64 delay_us;
+
+ for_each_cpu(cpu, mask) {
+- if (cpu == smp_processor_id())
++ if (cpu == smp_processor_id()) {
891 + handle_backtrace_ipi(NULL);
892 +- else
893 +- smp_send_safe_nmi_ipi(cpu, handle_backtrace_ipi, 5 * USEC_PER_SEC);
894 +- }
895 ++ continue;
896 ++ }
897 +
898 +- for_each_cpu(cpu, mask) {
899 +- struct paca_struct *p = paca_ptrs[cpu];
900 ++ delay_us = 5 * USEC_PER_SEC;
901 ++
902 ++ if (smp_send_safe_nmi_ipi(cpu, handle_backtrace_ipi, delay_us)) {
903 ++ // Now wait up to 5s for the other CPU to do its backtrace
904 ++ while (cpumask_test_cpu(cpu, mask) && delay_us) {
905 ++ udelay(1);
906 ++ delay_us--;
907 ++ }
908 ++
909 ++ // Other CPU cleared itself from the mask
910 ++ if (delay_us)
911 ++ continue;
912 ++ }
913 ++
914 ++ p = paca_ptrs[cpu];
915 +
916 + cpumask_clear_cpu(cpu, mask);
917 +
918 +diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
919 +index 60c5bc0c130cf..1c6e0a52fb532 100644
920 +--- a/arch/powerpc/kvm/book3s_hv.c
921 ++++ b/arch/powerpc/kvm/book3s_hv.c
922 +@@ -2619,7 +2619,7 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
923 + cpumask_t *cpu_in_guest;
924 + int i;
925 +
926 +- cpu = cpu_first_thread_sibling(cpu);
927 ++ cpu = cpu_first_tlb_thread_sibling(cpu);
928 + if (nested) {
929 + cpumask_set_cpu(cpu, &nested->need_tlb_flush);
930 + cpu_in_guest = &nested->cpu_in_guest;
931 +@@ -2633,9 +2633,10 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
932 + * the other side is the first smp_mb() in kvmppc_run_core().
933 + */
934 + smp_mb();
935 +- for (i = 0; i < threads_per_core; ++i)
936 +- if (cpumask_test_cpu(cpu + i, cpu_in_guest))
937 +- smp_call_function_single(cpu + i, do_nothing, NULL, 1);
938 ++ for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu);
939 ++ i += cpu_tlb_thread_sibling_step())
940 ++ if (cpumask_test_cpu(i, cpu_in_guest))
941 ++ smp_call_function_single(i, do_nothing, NULL, 1);
942 + }
943 +
944 + static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
945 +@@ -2666,8 +2667,8 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
946 + */
947 + if (prev_cpu != pcpu) {
948 + if (prev_cpu >= 0 &&
949 +- cpu_first_thread_sibling(prev_cpu) !=
950 +- cpu_first_thread_sibling(pcpu))
951 ++ cpu_first_tlb_thread_sibling(prev_cpu) !=
952 ++ cpu_first_tlb_thread_sibling(pcpu))
953 + radix_flush_cpu(kvm, prev_cpu, vcpu);
954 + if (nested)
955 + nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu;
956 +diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
957 +index 158d309b42a38..b5e5d07cb40f9 100644
958 +--- a/arch/powerpc/kvm/book3s_hv_builtin.c
959 ++++ b/arch/powerpc/kvm/book3s_hv_builtin.c
960 +@@ -797,7 +797,7 @@ void kvmppc_check_need_tlb_flush(struct kvm *kvm, int pcpu,
961 + * Thus we make all 4 threads use the same bit.
962 + */
963 + if (cpu_has_feature(CPU_FTR_ARCH_300))
964 +- pcpu = cpu_first_thread_sibling(pcpu);
965 ++ pcpu = cpu_first_tlb_thread_sibling(pcpu);
966 +
967 + if (nested)
968 + need_tlb_flush = &nested->need_tlb_flush;
969 +diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
970 +index 0cd0e7aad588b..bf892e134727c 100644
971 +--- a/arch/powerpc/kvm/book3s_hv_nested.c
972 ++++ b/arch/powerpc/kvm/book3s_hv_nested.c
973 +@@ -53,7 +53,8 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
974 + hr->dawrx1 = vcpu->arch.dawrx1;
975 + }
976 +
977 +-static void byteswap_pt_regs(struct pt_regs *regs)
978 ++/* Use noinline_for_stack due to https://bugs.llvm.org/show_bug.cgi?id=49610 */
979 ++static noinline_for_stack void byteswap_pt_regs(struct pt_regs *regs)
980 + {
981 + unsigned long *addr = (unsigned long *) regs;
982 +
983 +diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
984 +index 88da2764c1bb9..3ddc83d2e8493 100644
985 +--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
986 ++++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
987 +@@ -67,7 +67,7 @@ static int global_invalidates(struct kvm *kvm)
988 + * so use the bit for the first thread to represent the core.
989 + */
990 + if (cpu_has_feature(CPU_FTR_ARCH_300))
991 +- cpu = cpu_first_thread_sibling(cpu);
992 ++ cpu = cpu_first_tlb_thread_sibling(cpu);
993 + cpumask_clear_cpu(cpu, &kvm->arch.need_tlb_flush);
994 + }
995 +
996 +diff --git a/arch/powerpc/platforms/cell/smp.c b/arch/powerpc/platforms/cell/smp.c
997 +index c855a0aeb49cc..d7ab868aab54a 100644
998 +--- a/arch/powerpc/platforms/cell/smp.c
999 ++++ b/arch/powerpc/platforms/cell/smp.c
1000 +@@ -78,9 +78,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
1001 +
1002 + pcpu = get_hard_smp_processor_id(lcpu);
1003 +
1004 +- /* Fixup atomic count: it exited inside IRQ handler. */
1005 +- task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count = 0;
1006 +-
1007 + /*
1008 + * If the RTAS start-cpu token does not exist then presume the
1009 + * cpu is already spinning.
1010 +diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
1011 +index 835163f54244a..057acbb9116dd 100644
1012 +--- a/arch/powerpc/platforms/pseries/papr_scm.c
1013 ++++ b/arch/powerpc/platforms/pseries/papr_scm.c
1014 +@@ -18,6 +18,7 @@
1015 + #include <asm/plpar_wrappers.h>
1016 + #include <asm/papr_pdsm.h>
1017 + #include <asm/mce.h>
1018 ++#include <asm/unaligned.h>
1019 +
1020 + #define BIND_ANY_ADDR (~0ul)
1021 +
1022 +@@ -867,6 +868,20 @@ static ssize_t flags_show(struct device *dev,
1023 + }
1024 + DEVICE_ATTR_RO(flags);
1025 +
1026 ++static umode_t papr_nd_attribute_visible(struct kobject *kobj,
1027 ++ struct attribute *attr, int n)
1028 ++{
1029 ++ struct device *dev = kobj_to_dev(kobj);
1030 ++ struct nvdimm *nvdimm = to_nvdimm(dev);
1031 ++ struct papr_scm_priv *p = nvdimm_provider_data(nvdimm);
1032 ++
1033 ++ /* For if perf-stats not available remove perf_stats sysfs */
1034 ++ if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0)
1035 ++ return 0;
1036 ++
1037 ++ return attr->mode;
1038 ++}
1039 ++
1040 + /* papr_scm specific dimm attributes */
1041 + static struct attribute *papr_nd_attributes[] = {
1042 + &dev_attr_flags.attr,
1043 +@@ -876,6 +891,7 @@ static struct attribute *papr_nd_attributes[] = {
1044 +
1045 + static struct attribute_group papr_nd_attribute_group = {
1046 + .name = "papr",
1047 ++ .is_visible = papr_nd_attribute_visible,
1048 + .attrs = papr_nd_attributes,
1049 + };
1050 +
1051 +@@ -891,7 +907,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
1052 + struct nd_region_desc ndr_desc;
1053 + unsigned long dimm_flags;
1054 + int target_nid, online_nid;
1055 +- ssize_t stat_size;
1056 +
1057 + p->bus_desc.ndctl = papr_scm_ndctl;
1058 + p->bus_desc.module = THIS_MODULE;
1059 +@@ -962,16 +977,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
1060 + list_add_tail(&p->region_list, &papr_nd_regions);
1061 + mutex_unlock(&papr_ndr_lock);
1062 +
1063 +- /* Try retriving the stat buffer and see if its supported */
1064 +- stat_size = drc_pmem_query_stats(p, NULL, 0);
1065 +- if (stat_size > 0) {
1066 +- p->stat_buffer_len = stat_size;
1067 +- dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
1068 +- p->stat_buffer_len);
1069 +- } else {
1070 +- dev_info(&p->pdev->dev, "Dimm performance stats unavailable\n");
1071 +- }
1072 +-
1073 + return 0;
1074 +
1075 + err: nvdimm_bus_unregister(p->bus);
1076 +@@ -1047,8 +1052,10 @@ static int papr_scm_probe(struct platform_device *pdev)
1077 + u32 drc_index, metadata_size;
1078 + u64 blocks, block_size;
1079 + struct papr_scm_priv *p;
1080 ++ u8 uuid_raw[UUID_SIZE];
1081 + const char *uuid_str;
1082 +- u64 uuid[2];
1083 ++ ssize_t stat_size;
1084 ++ uuid_t uuid;
1085 + int rc;
1086 +
1087 + /* check we have all the required DT properties */
1088 +@@ -1090,16 +1097,23 @@ static int papr_scm_probe(struct platform_device *pdev)
1089 + p->is_volatile = !of_property_read_bool(dn, "ibm,cache-flush-required");
1090 +
1091 + /* We just need to ensure that set cookies are unique across */
1092 +- uuid_parse(uuid_str, (uuid_t *) uuid);
1093 ++ uuid_parse(uuid_str, &uuid);
1094 ++
1095 + /*
1096 +- * cookie1 and cookie2 are not really little endian
1097 +- * we store a little endian representation of the
1098 +- * uuid str so that we can compare this with the label
1099 +- * area cookie irrespective of the endian config with which
1100 +- * the kernel is built.
1101 ++ * The cookie1 and cookie2 are not really little endian.
1102 ++ * We store a raw buffer representation of the
1103 ++ * uuid string so that we can compare this with the label
1104 ++ * area cookie irrespective of the endian configuration
1105 ++ * with which the kernel is built.
1106 ++ *
1107 ++ * Historically we stored the cookie in the below format.
1108 ++ * for a uuid string 72511b67-0b3b-42fd-8d1d-5be3cae8bcaa
1109 ++ * cookie1 was 0xfd423b0b671b5172
1110 ++ * cookie2 was 0xaabce8cae35b1d8d
1111 + */
1112 +- p->nd_set.cookie1 = cpu_to_le64(uuid[0]);
1113 +- p->nd_set.cookie2 = cpu_to_le64(uuid[1]);
1114 ++ export_uuid(uuid_raw, &uuid);
1115 ++ p->nd_set.cookie1 = get_unaligned_le64(&uuid_raw[0]);
1116 ++ p->nd_set.cookie2 = get_unaligned_le64(&uuid_raw[8]);
1117 +
1118 + /* might be zero */
1119 + p->metadata_size = metadata_size;
1120 +@@ -1124,6 +1138,14 @@ static int papr_scm_probe(struct platform_device *pdev)
1121 + p->res.name = pdev->name;
1122 + p->res.flags = IORESOURCE_MEM;
1123 +
1124 ++ /* Try retrieving the stat buffer and see if its supported */
1125 ++ stat_size = drc_pmem_query_stats(p, NULL, 0);
1126 ++ if (stat_size > 0) {
1127 ++ p->stat_buffer_len = stat_size;
1128 ++ dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
1129 ++ p->stat_buffer_len);
1130 ++ }
1131 ++
1132 + rc = papr_scm_nvdimm_init(p);
1133 + if (rc)
1134 + goto err2;
1135 +diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
1136 +index c70b4be9f0a54..f47429323eee9 100644
1137 +--- a/arch/powerpc/platforms/pseries/smp.c
1138 ++++ b/arch/powerpc/platforms/pseries/smp.c
1139 +@@ -105,9 +105,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
1140 + return 1;
1141 + }
1142 +
1143 +- /* Fixup atomic count: it exited inside IRQ handler. */
1144 +- task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count = 0;
1145 +-
1146 + /*
1147 + * If the RTAS start-cpu token does not exist then presume the
1148 + * cpu is already spinning.
1149 +@@ -211,7 +208,9 @@ static __init void pSeries_smp_probe(void)
1150 + if (!cpu_has_feature(CPU_FTR_SMT))
1151 + return;
1152 +
1153 +- if (check_kvm_guest()) {
1154 ++ check_kvm_guest();
1155 ++
1156 ++ if (is_kvm_guest()) {
1157 + /*
1158 + * KVM emulates doorbells by disabling FSCR[MSGP] so msgsndp
1159 + * faults to the hypervisor which then reads the instruction
1160 +diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
1161 +index 5e276c25646fc..1941a6ce86a15 100644
1162 +--- a/arch/riscv/kernel/smpboot.c
1163 ++++ b/arch/riscv/kernel/smpboot.c
1164 +@@ -176,7 +176,6 @@ asmlinkage __visible void smp_callin(void)
1165 + * Disable preemption before enabling interrupts, so we don't try to
1166 + * schedule a CPU that hasn't actually started yet.
1167 + */
1168 +- preempt_disable();
1169 + local_irq_enable();
1170 + cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
1171 + }
1172 +diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
1173 +index c1ff874e6c2e6..4fcd460f496ec 100644
1174 +--- a/arch/s390/Kconfig
1175 ++++ b/arch/s390/Kconfig
1176 +@@ -160,6 +160,7 @@ config S390
1177 + select HAVE_FUTEX_CMPXCHG if FUTEX
1178 + select HAVE_GCC_PLUGINS
1179 + select HAVE_GENERIC_VDSO
1180 ++ select HAVE_IOREMAP_PROT if PCI
1181 + select HAVE_IRQ_EXIT_ON_IRQ_STACK
1182 + select HAVE_KERNEL_BZIP2
1183 + select HAVE_KERNEL_GZIP
1184 +@@ -858,7 +859,7 @@ config CMM_IUCV
1185 + config APPLDATA_BASE
1186 + def_bool n
1187 + prompt "Linux - VM Monitor Stream, base infrastructure"
1188 +- depends on PROC_FS
1189 ++ depends on PROC_SYSCTL
1190 + help
1191 + This provides a kernel interface for creating and updating z/VM APPLDATA
1192 + monitor records. The monitor records are updated at certain time
1193 +diff --git a/arch/s390/boot/uv.c b/arch/s390/boot/uv.c
1194 +index 87641dd65ccf9..b3501ea5039e4 100644
1195 +--- a/arch/s390/boot/uv.c
1196 ++++ b/arch/s390/boot/uv.c
1197 +@@ -36,6 +36,7 @@ void uv_query_info(void)
1198 + uv_info.max_sec_stor_addr = ALIGN(uvcb.max_guest_stor_addr, PAGE_SIZE);
1199 + uv_info.max_num_sec_conf = uvcb.max_num_sec_conf;
1200 + uv_info.max_guest_cpu_id = uvcb.max_guest_cpu_id;
1201 ++ uv_info.uv_feature_indications = uvcb.uv_feature_indications;
1202 + }
1203 +
1204 + #ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
1205 +diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
1206 +index 29c7ecd5ad1d5..adea53f69bfd3 100644
1207 +--- a/arch/s390/include/asm/pgtable.h
1208 ++++ b/arch/s390/include/asm/pgtable.h
1209 +@@ -344,8 +344,6 @@ static inline int is_module_addr(void *addr)
1210 + #define PTRS_PER_P4D _CRST_ENTRIES
1211 + #define PTRS_PER_PGD _CRST_ENTRIES
1212 +
1213 +-#define MAX_PTRS_PER_P4D PTRS_PER_P4D
1214 +-
1215 + /*
1216 + * Segment table and region3 table entry encoding
1217 + * (R = read-only, I = invalid, y = young bit):
1218 +@@ -865,6 +863,25 @@ static inline int pte_unused(pte_t pte)
1219 + return pte_val(pte) & _PAGE_UNUSED;
1220 + }
1221 +
1222 ++/*
1223 ++ * Extract the pgprot value from the given pte while at the same time making it
1224 ++ * usable for kernel address space mappings where fault driven dirty and
1225 ++ * young/old accounting is not supported, i.e _PAGE_PROTECT and _PAGE_INVALID
1226 ++ * must not be set.
1227 ++ */
1228 ++static inline pgprot_t pte_pgprot(pte_t pte)
1229 ++{
1230 ++ unsigned long pte_flags = pte_val(pte) & _PAGE_CHG_MASK;
1231 ++
1232 ++ if (pte_write(pte))
1233 ++ pte_flags |= pgprot_val(PAGE_KERNEL);
1234 ++ else
1235 ++ pte_flags |= pgprot_val(PAGE_KERNEL_RO);
1236 ++ pte_flags |= pte_val(pte) & mio_wb_bit_mask;
1237 ++
1238 ++ return __pgprot(pte_flags);
1239 ++}
1240 ++
1241 + /*
1242 + * pgd/pmd/pte modification functions
1243 + */
1244 +diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
1245 +index b49e0492842cc..d9d5350cc3ec3 100644
1246 +--- a/arch/s390/include/asm/preempt.h
1247 ++++ b/arch/s390/include/asm/preempt.h
1248 +@@ -29,12 +29,6 @@ static inline void preempt_count_set(int pc)
1249 + old, new) != old);
1250 + }
1251 +
1252 +-#define init_task_preempt_count(p) do { } while (0)
1253 +-
1254 +-#define init_idle_preempt_count(p, cpu) do { \
1255 +- S390_lowcore.preempt_count = PREEMPT_ENABLED; \
1256 +-} while (0)
1257 +-
1258 + static inline void set_preempt_need_resched(void)
1259 + {
1260 + __atomic_and(~PREEMPT_NEED_RESCHED, &S390_lowcore.preempt_count);
1261 +@@ -88,12 +82,6 @@ static inline void preempt_count_set(int pc)
1262 + S390_lowcore.preempt_count = pc;
1263 + }
1264 +
1265 +-#define init_task_preempt_count(p) do { } while (0)
1266 +-
1267 +-#define init_idle_preempt_count(p, cpu) do { \
1268 +- S390_lowcore.preempt_count = PREEMPT_ENABLED; \
1269 +-} while (0)
1270 +-
1271 + static inline void set_preempt_need_resched(void)
1272 + {
1273 + }
1274 +@@ -130,6 +118,10 @@ static inline bool should_resched(int preempt_offset)
1275 +
1276 + #endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
1277 +
1278 ++#define init_task_preempt_count(p) do { } while (0)
1279 ++/* Deferred to CPU bringup time */
1280 ++#define init_idle_preempt_count(p, cpu) do { } while (0)
1281 ++
1282 + #ifdef CONFIG_PREEMPTION
1283 + extern void preempt_schedule(void);
1284 + #define __preempt_schedule() preempt_schedule()
1285 +diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
1286 +index 7b98d4caee779..12c5f006c1364 100644
1287 +--- a/arch/s390/include/asm/uv.h
1288 ++++ b/arch/s390/include/asm/uv.h
1289 +@@ -73,6 +73,10 @@ enum uv_cmds_inst {
1290 + BIT_UVC_CMD_UNPIN_PAGE_SHARED = 22,
1291 + };
1292 +
1293 ++enum uv_feat_ind {
1294 ++ BIT_UV_FEAT_MISC = 0,
1295 ++};
1296 ++
1297 + struct uv_cb_header {
1298 + u16 len;
1299 + u16 cmd; /* Command Code */
1300 +@@ -97,7 +101,8 @@ struct uv_cb_qui {
1301 + u64 max_guest_stor_addr;
1302 + u8 reserved88[158 - 136];
1303 + u16 max_guest_cpu_id;
1304 +- u8 reserveda0[200 - 160];
1305 ++ u64 uv_feature_indications;
1306 ++ u8 reserveda0[200 - 168];
1307 + } __packed __aligned(8);
1308 +
1309 + /* Initialize Ultravisor */
1310 +@@ -274,6 +279,7 @@ struct uv_info {
1311 + unsigned long max_sec_stor_addr;
1312 + unsigned int max_num_sec_conf;
1313 + unsigned short max_guest_cpu_id;
1314 ++ unsigned long uv_feature_indications;
1315 + };
1316 +
1317 + extern struct uv_info uv_info;
1318 +diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
1319 +index 5aab59ad56881..382d73da134cf 100644
1320 +--- a/arch/s390/kernel/setup.c
1321 ++++ b/arch/s390/kernel/setup.c
1322 +@@ -466,6 +466,7 @@ static void __init setup_lowcore_dat_off(void)
1323 + lc->br_r1_trampoline = 0x07f1; /* br %r1 */
1324 + lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
1325 + lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
1326 ++ lc->preempt_count = PREEMPT_DISABLED;
1327 +
1328 + set_prefix((u32)(unsigned long) lc);
1329 + lowcore_ptr[0] = lc;
1330 +diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
1331 +index 58c8afa3da65d..4b9960da3fc9c 100644
1332 +--- a/arch/s390/kernel/smp.c
1333 ++++ b/arch/s390/kernel/smp.c
1334 +@@ -219,6 +219,7 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
1335 + lc->br_r1_trampoline = 0x07f1; /* br %r1 */
1336 + lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
1337 + lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
1338 ++ lc->preempt_count = PREEMPT_DISABLED;
1339 + if (nmi_alloc_per_cpu(lc))
1340 + goto out_stack;
1341 + lowcore_ptr[cpu] = lc;
1342 +@@ -877,7 +878,6 @@ static void smp_init_secondary(void)
1343 + restore_access_regs(S390_lowcore.access_regs_save_area);
1344 + cpu_init();
1345 + rcu_cpu_starting(cpu);
1346 +- preempt_disable();
1347 + init_cpu_timer();
1348 + vtime_init();
1349 + vdso_getcpu_init();
1350 +diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
1351 +index b2d2ad1530676..c811b2313100b 100644
1352 +--- a/arch/s390/kernel/uv.c
1353 ++++ b/arch/s390/kernel/uv.c
1354 +@@ -364,6 +364,15 @@ static ssize_t uv_query_facilities(struct kobject *kobj,
1355 + static struct kobj_attribute uv_query_facilities_attr =
1356 + __ATTR(facilities, 0444, uv_query_facilities, NULL);
1357 +
1358 ++static ssize_t uv_query_feature_indications(struct kobject *kobj,
1359 ++ struct kobj_attribute *attr, char *buf)
1360 ++{
1361 ++ return sysfs_emit(buf, "%lx\n", uv_info.uv_feature_indications);
1362 ++}
1363 ++
1364 ++static struct kobj_attribute uv_query_feature_indications_attr =
1365 ++ __ATTR(feature_indications, 0444, uv_query_feature_indications, NULL);
1366 ++
1367 + static ssize_t uv_query_max_guest_cpus(struct kobject *kobj,
1368 + struct kobj_attribute *attr, char *page)
1369 + {
1370 +@@ -396,6 +405,7 @@ static struct kobj_attribute uv_query_max_guest_addr_attr =
1371 +
1372 + static struct attribute *uv_query_attrs[] = {
1373 + &uv_query_facilities_attr.attr,
1374 ++ &uv_query_feature_indications_attr.attr,
1375 + &uv_query_max_guest_cpus_attr.attr,
1376 + &uv_query_max_guest_vms_attr.attr,
1377 + &uv_query_max_guest_addr_attr.attr,
1378 +diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
1379 +index 24ad447e648c1..dd7136b3ed9a8 100644
1380 +--- a/arch/s390/kvm/kvm-s390.c
1381 ++++ b/arch/s390/kvm/kvm-s390.c
1382 +@@ -323,31 +323,31 @@ static void allow_cpu_feat(unsigned long nr)
1383 +
1384 + static inline int plo_test_bit(unsigned char nr)
1385 + {
1386 +- register unsigned long r0 asm("0") = (unsigned long) nr | 0x100;
1387 ++ unsigned long function = (unsigned long)nr | 0x100;
1388 + int cc;
1389 +
1390 + asm volatile(
1391 ++ " lgr 0,%[function]\n"
1392 + /* Parameter registers are ignored for "test bit" */
1393 + " plo 0,0,0,0(0)\n"
1394 + " ipm %0\n"
1395 + " srl %0,28\n"
1396 + : "=d" (cc)
1397 +- : "d" (r0)
1398 +- : "cc");
1399 ++ : [function] "d" (function)
1400 ++ : "cc", "0");
1401 + return cc == 0;
1402 + }
1403 +
1404 + static __always_inline void __insn32_query(unsigned int opcode, u8 *query)
1405 + {
1406 +- register unsigned long r0 asm("0") = 0; /* query function */
1407 +- register unsigned long r1 asm("1") = (unsigned long) query;
1408 +-
1409 + asm volatile(
1410 +- /* Parameter regs are ignored */
1411 ++ " lghi 0,0\n"
1412 ++ " lgr 1,%[query]\n"
1413 ++ /* Parameter registers are ignored */
1414 + " .insn rrf,%[opc] << 16,2,4,6,0\n"
1415 + :
1416 +- : "d" (r0), "a" (r1), [opc] "i" (opcode)
1417 +- : "cc", "memory");
1418 ++ : [query] "d" ((unsigned long)query), [opc] "i" (opcode)
1419 ++ : "cc", "memory", "0", "1");
1420 + }
1421 +
1422 + #define INSN_SORTL 0xb938
1423 +diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
1424 +index e30c7c781172c..62e62cb88c845 100644
1425 +--- a/arch/s390/mm/fault.c
1426 ++++ b/arch/s390/mm/fault.c
1427 +@@ -791,6 +791,32 @@ void do_secure_storage_access(struct pt_regs *regs)
1428 + struct page *page;
1429 + int rc;
1430 +
1431 ++ /*
1432 ++ * bit 61 tells us if the address is valid, if it's not we
1433 ++ * have a major problem and should stop the kernel or send a
1434 ++ * SIGSEGV to the process. Unfortunately bit 61 is not
1435 ++ * reliable without the misc UV feature so we need to check
1436 ++ * for that as well.
1437 ++ */
1438 ++ if (test_bit_inv(BIT_UV_FEAT_MISC, &uv_info.uv_feature_indications) &&
1439 ++ !test_bit_inv(61, &regs->int_parm_long)) {
1440 ++ /*
1441 ++ * When this happens, userspace did something that it
1442 ++ * was not supposed to do, e.g. branching into secure
1443 ++ * memory. Trigger a segmentation fault.
1444 ++ */
1445 ++ if (user_mode(regs)) {
1446 ++ send_sig(SIGSEGV, current, 0);
1447 ++ return;
1448 ++ }
1449 ++
1450 ++ /*
1451 ++ * The kernel should never run into this case and we
1452 ++ * have no way out of this situation.
1453 ++ */
1454 ++ panic("Unexpected PGM 0x3d with TEID bit 61=0");
1455 ++ }
1456 ++
1457 + switch (get_fault_type(regs)) {
1458 + case USER_FAULT:
1459 + mm = current->mm;
1460 +diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
1461 +index 372acdc9033eb..65924d9ec2459 100644
1462 +--- a/arch/sh/kernel/smp.c
1463 ++++ b/arch/sh/kernel/smp.c
1464 +@@ -186,8 +186,6 @@ asmlinkage void start_secondary(void)
1465 +
1466 + per_cpu_trap_init();
1467 +
1468 +- preempt_disable();
1469 +-
1470 + notify_cpu_starting(cpu);
1471 +
1472 + local_irq_enable();
1473 +diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
1474 +index 50c127ab46d5b..22b148e5a5f88 100644
1475 +--- a/arch/sparc/kernel/smp_32.c
1476 ++++ b/arch/sparc/kernel/smp_32.c
1477 +@@ -348,7 +348,6 @@ static void sparc_start_secondary(void *arg)
1478 + */
1479 + arch_cpu_pre_starting(arg);
1480 +
1481 +- preempt_disable();
1482 + cpu = smp_processor_id();
1483 +
1484 + notify_cpu_starting(cpu);
1485 +diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
1486 +index e38d8bf454e86..ae5faa1d989d2 100644
1487 +--- a/arch/sparc/kernel/smp_64.c
1488 ++++ b/arch/sparc/kernel/smp_64.c
1489 +@@ -138,9 +138,6 @@ void smp_callin(void)
1490 +
1491 + set_cpu_online(cpuid, true);
1492 +
1493 +- /* idle thread is expected to have preempt disabled */
1494 +- preempt_disable();
1495 +-
1496 + local_irq_enable();
1497 +
1498 + cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
1499 +diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c
1500 +index 5af8021b98cea..11b4c83c715e3 100644
1501 +--- a/arch/x86/crypto/curve25519-x86_64.c
1502 ++++ b/arch/x86/crypto/curve25519-x86_64.c
1503 +@@ -1500,7 +1500,7 @@ static int __init curve25519_mod_init(void)
1504 + static void __exit curve25519_mod_exit(void)
1505 + {
1506 + if (IS_REACHABLE(CONFIG_CRYPTO_KPP) &&
1507 +- (boot_cpu_has(X86_FEATURE_BMI2) || boot_cpu_has(X86_FEATURE_ADX)))
1508 ++ static_branch_likely(&curve25519_use_bmi2_adx))
1509 + crypto_unregister_kpp(&curve25519_alg);
1510 + }
1511 +
1512 +diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
1513 +index 400908dff42eb..7aa1be30b6473 100644
1514 +--- a/arch/x86/entry/entry_64.S
1515 ++++ b/arch/x86/entry/entry_64.S
1516 +@@ -506,7 +506,7 @@ SYM_CODE_START(\asmsym)
1517 +
1518 + movq %rsp, %rdi /* pt_regs pointer */
1519 +
1520 +- call \cfunc
1521 ++ call kernel_\cfunc
1522 +
1523 + /*
1524 + * No need to switch back to the IST stack. The current stack is either
1525 +@@ -517,7 +517,7 @@ SYM_CODE_START(\asmsym)
1526 +
1527 + /* Switch to the regular task stack */
1528 + .Lfrom_usermode_switch_stack_\@:
1529 +- idtentry_body safe_stack_\cfunc, has_error_code=1
1530 ++ idtentry_body user_\cfunc, has_error_code=1
1531 +
1532 + _ASM_NOKPROBE(\asmsym)
1533 + SYM_CODE_END(\asmsym)
1534 +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
1535 +index 77fe4fece6798..ea7a9319e9181 100644
1536 +--- a/arch/x86/events/intel/core.c
1537 ++++ b/arch/x86/events/intel/core.c
1538 +@@ -280,6 +280,8 @@ static struct extra_reg intel_spr_extra_regs[] __read_mostly = {
1539 + INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
1540 + INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
1541 + INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff17, FE),
1542 ++ INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0x7, FE),
1543 ++ INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
1544 + EVENT_EXTRA_END
1545 + };
1546 +
1547 +@@ -3973,8 +3975,10 @@ spr_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
1548 + * The :ppp indicates the Precise Distribution (PDist) facility, which
1549 + * is only supported on the GP counter 0. If a :ppp event which is not
1550 + * available on the GP counter 0, error out.
1551 ++ * Exception: Instruction PDIR is only available on the fixed counter 0.
1552 + */
1553 +- if (event->attr.precise_ip == 3) {
1554 ++ if ((event->attr.precise_ip == 3) &&
1555 ++ !constraint_match(&fixed0_constraint, event->hw.config)) {
1556 + if (c->idxmsk64 & BIT_ULL(0))
1557 + return &counter0_constraint;
1558 +
1559 +diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
1560 +index 06b0789d61b91..0f1d17bb9d2f4 100644
1561 +--- a/arch/x86/include/asm/idtentry.h
1562 ++++ b/arch/x86/include/asm/idtentry.h
1563 +@@ -312,8 +312,8 @@ static __always_inline void __##func(struct pt_regs *regs)
1564 + */
1565 + #define DECLARE_IDTENTRY_VC(vector, func) \
1566 + DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func); \
1567 +- __visible noinstr void ist_##func(struct pt_regs *regs, unsigned long error_code); \
1568 +- __visible noinstr void safe_stack_##func(struct pt_regs *regs, unsigned long error_code)
1569 ++ __visible noinstr void kernel_##func(struct pt_regs *regs, unsigned long error_code); \
1570 ++ __visible noinstr void user_##func(struct pt_regs *regs, unsigned long error_code)
1571 +
1572 + /**
1573 + * DEFINE_IDTENTRY_IST - Emit code for IST entry points
1574 +@@ -355,33 +355,24 @@ static __always_inline void __##func(struct pt_regs *regs)
1575 + DEFINE_IDTENTRY_RAW_ERRORCODE(func)
1576 +
1577 + /**
1578 +- * DEFINE_IDTENTRY_VC_SAFE_STACK - Emit code for VMM communication handler
1579 +- which runs on a safe stack.
1580 ++ * DEFINE_IDTENTRY_VC_KERNEL - Emit code for VMM communication handler
1581 ++ when raised from kernel mode
1582 + * @func: Function name of the entry point
1583 + *
1584 + * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
1585 + */
1586 +-#define DEFINE_IDTENTRY_VC_SAFE_STACK(func) \
1587 +- DEFINE_IDTENTRY_RAW_ERRORCODE(safe_stack_##func)
1588 ++#define DEFINE_IDTENTRY_VC_KERNEL(func) \
1589 ++ DEFINE_IDTENTRY_RAW_ERRORCODE(kernel_##func)
1590 +
1591 + /**
1592 +- * DEFINE_IDTENTRY_VC_IST - Emit code for VMM communication handler
1593 +- which runs on the VC fall-back stack
1594 ++ * DEFINE_IDTENTRY_VC_USER - Emit code for VMM communication handler
1595 ++ when raised from user mode
1596 + * @func: Function name of the entry point
1597 + *
1598 + * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
1599 + */
1600 +-#define DEFINE_IDTENTRY_VC_IST(func) \
1601 +- DEFINE_IDTENTRY_RAW_ERRORCODE(ist_##func)
1602 +-
1603 +-/**
1604 +- * DEFINE_IDTENTRY_VC - Emit code for VMM communication handler
1605 +- * @func: Function name of the entry point
1606 +- *
1607 +- * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
1608 +- */
1609 +-#define DEFINE_IDTENTRY_VC(func) \
1610 +- DEFINE_IDTENTRY_RAW_ERRORCODE(func)
1611 ++#define DEFINE_IDTENTRY_VC_USER(func) \
1612 ++ DEFINE_IDTENTRY_RAW_ERRORCODE(user_##func)
1613 +
1614 + #else /* CONFIG_X86_64 */
1615 +
1616 +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
1617 +index ac7c786fa09f9..0758ff3008c6d 100644
1618 +--- a/arch/x86/include/asm/kvm_host.h
1619 ++++ b/arch/x86/include/asm/kvm_host.h
1620 +@@ -85,7 +85,7 @@
1621 + #define KVM_REQ_APICV_UPDATE \
1622 + KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
1623 + #define KVM_REQ_TLB_FLUSH_CURRENT KVM_ARCH_REQ(26)
1624 +-#define KVM_REQ_HV_TLB_FLUSH \
1625 ++#define KVM_REQ_TLB_FLUSH_GUEST \
1626 + KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_NO_WAKEUP)
1627 + #define KVM_REQ_APF_READY KVM_ARCH_REQ(28)
1628 + #define KVM_REQ_MSR_FILTER_CHANGED KVM_ARCH_REQ(29)
1629 +@@ -1433,6 +1433,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
1630 + u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask,
1631 + u64 acc_track_mask, u64 me_mask);
1632 +
1633 ++void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
1634 + void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
1635 + void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
1636 + struct kvm_memory_slot *memslot,
1637 +diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
1638 +index f8cb8af4de5ce..fe5efbcba8240 100644
1639 +--- a/arch/x86/include/asm/preempt.h
1640 ++++ b/arch/x86/include/asm/preempt.h
1641 +@@ -44,7 +44,7 @@ static __always_inline void preempt_count_set(int pc)
1642 + #define init_task_preempt_count(p) do { } while (0)
1643 +
1644 + #define init_idle_preempt_count(p, cpu) do { \
1645 +- per_cpu(__preempt_count, (cpu)) = PREEMPT_ENABLED; \
1646 ++ per_cpu(__preempt_count, (cpu)) = PREEMPT_DISABLED; \
1647 + } while (0)
1648 +
1649 + /*
1650 +diff --git a/arch/x86/include/uapi/asm/hwcap2.h b/arch/x86/include/uapi/asm/hwcap2.h
1651 +index 5fdfcb47000f9..054604aba9f00 100644
1652 +--- a/arch/x86/include/uapi/asm/hwcap2.h
1653 ++++ b/arch/x86/include/uapi/asm/hwcap2.h
1654 +@@ -2,10 +2,12 @@
1655 + #ifndef _ASM_X86_HWCAP2_H
1656 + #define _ASM_X86_HWCAP2_H
1657 +
1658 ++#include <linux/const.h>
1659 ++
1660 + /* MONITOR/MWAIT enabled in Ring 3 */
1661 +-#define HWCAP2_RING3MWAIT (1 << 0)
1662 ++#define HWCAP2_RING3MWAIT _BITUL(0)
1663 +
1664 + /* Kernel allows FSGSBASE instructions available in Ring 3 */
1665 +-#define HWCAP2_FSGSBASE BIT(1)
1666 ++#define HWCAP2_FSGSBASE _BITUL(1)
1667 +
1668 + #endif
1669 +diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
1670 +index e88bc296afca0..a803fc423cb7f 100644
1671 +--- a/arch/x86/kernel/cpu/mshyperv.c
1672 ++++ b/arch/x86/kernel/cpu/mshyperv.c
1673 +@@ -245,7 +245,7 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus)
1674 + for_each_present_cpu(i) {
1675 + if (i == 0)
1676 + continue;
1677 +- ret = hv_call_add_logical_proc(numa_cpu_node(i), i, cpu_physical_id(i));
1678 ++ ret = hv_call_add_logical_proc(numa_cpu_node(i), i, i);
1679 + BUG_ON(ret);
1680 + }
1681 +
1682 +diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
1683 +index a4b5af03dcc1b..534cc3f78c6be 100644
1684 +--- a/arch/x86/kernel/early-quirks.c
1685 ++++ b/arch/x86/kernel/early-quirks.c
1686 +@@ -549,6 +549,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
1687 + INTEL_CNL_IDS(&gen9_early_ops),
1688 + INTEL_ICL_11_IDS(&gen11_early_ops),
1689 + INTEL_EHL_IDS(&gen11_early_ops),
1690 ++ INTEL_JSL_IDS(&gen11_early_ops),
1691 + INTEL_TGL_12_IDS(&gen11_early_ops),
1692 + INTEL_RKL_IDS(&gen11_early_ops),
1693 + };
1694 +diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
1695 +index e0cdab7cb632b..f3202b2e3c157 100644
1696 +--- a/arch/x86/kernel/sev-es.c
1697 ++++ b/arch/x86/kernel/sev-es.c
1698 +@@ -12,7 +12,6 @@
1699 + #include <linux/sched/debug.h> /* For show_regs() */
1700 + #include <linux/percpu-defs.h>
1701 + #include <linux/mem_encrypt.h>
1702 +-#include <linux/lockdep.h>
1703 + #include <linux/printk.h>
1704 + #include <linux/mm_types.h>
1705 + #include <linux/set_memory.h>
1706 +@@ -180,11 +179,19 @@ void noinstr __sev_es_ist_exit(void)
1707 + this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist);
1708 + }
1709 +
1710 +-static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
1711 ++/*
1712 ++ * Nothing shall interrupt this code path while holding the per-CPU
1713 ++ * GHCB. The backup GHCB is only for NMIs interrupting this path.
1714 ++ *
1715 ++ * Callers must disable local interrupts around it.
1716 ++ */
1717 ++static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state)
1718 + {
1719 + struct sev_es_runtime_data *data;
1720 + struct ghcb *ghcb;
1721 +
1722 ++ WARN_ON(!irqs_disabled());
1723 ++
1724 + data = this_cpu_read(runtime_data);
1725 + ghcb = &data->ghcb_page;
1726 +
1727 +@@ -201,7 +208,9 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
1728 + data->ghcb_active = false;
1729 + data->backup_ghcb_active = false;
1730 +
1731 ++ instrumentation_begin();
1732 + panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
1733 ++ instrumentation_end();
1734 + }
1735 +
1736 + /* Mark backup_ghcb active before writing to it */
1737 +@@ -452,11 +461,13 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
1738 + /* Include code shared with pre-decompression boot stage */
1739 + #include "sev-es-shared.c"
1740 +
1741 +-static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
1742 ++static noinstr void __sev_put_ghcb(struct ghcb_state *state)
1743 + {
1744 + struct sev_es_runtime_data *data;
1745 + struct ghcb *ghcb;
1746 +
1747 ++ WARN_ON(!irqs_disabled());
1748 ++
1749 + data = this_cpu_read(runtime_data);
1750 + ghcb = &data->ghcb_page;
1751 +
1752 +@@ -480,7 +491,7 @@ void noinstr __sev_es_nmi_complete(void)
1753 + struct ghcb_state state;
1754 + struct ghcb *ghcb;
1755 +
1756 +- ghcb = sev_es_get_ghcb(&state);
1757 ++ ghcb = __sev_get_ghcb(&state);
1758 +
1759 + vc_ghcb_invalidate(ghcb);
1760 + ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_NMI_COMPLETE);
1761 +@@ -490,7 +501,7 @@ void noinstr __sev_es_nmi_complete(void)
1762 + sev_es_wr_ghcb_msr(__pa_nodebug(ghcb));
1763 + VMGEXIT();
1764 +
1765 +- sev_es_put_ghcb(&state);
1766 ++ __sev_put_ghcb(&state);
1767 + }
1768 +
1769 + static u64 get_jump_table_addr(void)
1770 +@@ -502,7 +513,7 @@ static u64 get_jump_table_addr(void)
1771 +
1772 + local_irq_save(flags);
1773 +
1774 +- ghcb = sev_es_get_ghcb(&state);
1775 ++ ghcb = __sev_get_ghcb(&state);
1776 +
1777 + vc_ghcb_invalidate(ghcb);
1778 + ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_JUMP_TABLE);
1779 +@@ -516,7 +527,7 @@ static u64 get_jump_table_addr(void)
1780 + ghcb_sw_exit_info_2_is_valid(ghcb))
1781 + ret = ghcb->save.sw_exit_info_2;
1782 +
1783 +- sev_es_put_ghcb(&state);
1784 ++ __sev_put_ghcb(&state);
1785 +
1786 + local_irq_restore(flags);
1787 +
1788 +@@ -641,7 +652,7 @@ static void sev_es_ap_hlt_loop(void)
1789 + struct ghcb_state state;
1790 + struct ghcb *ghcb;
1791 +
1792 +- ghcb = sev_es_get_ghcb(&state);
1793 ++ ghcb = __sev_get_ghcb(&state);
1794 +
1795 + while (true) {
1796 + vc_ghcb_invalidate(ghcb);
1797 +@@ -658,7 +669,7 @@ static void sev_es_ap_hlt_loop(void)
1798 + break;
1799 + }
1800 +
1801 +- sev_es_put_ghcb(&state);
1802 ++ __sev_put_ghcb(&state);
1803 + }
1804 +
1805 + /*
1806 +@@ -748,7 +759,7 @@ void __init sev_es_init_vc_handling(void)
1807 + sev_es_setup_play_dead();
1808 +
1809 + /* Secondary CPUs use the runtime #VC handler */
1810 +- initial_vc_handler = (unsigned long)safe_stack_exc_vmm_communication;
1811 ++ initial_vc_handler = (unsigned long)kernel_exc_vmm_communication;
1812 + }
1813 +
1814 + static void __init vc_early_forward_exception(struct es_em_ctxt *ctxt)
1815 +@@ -1186,14 +1197,6 @@ static enum es_result vc_handle_trap_ac(struct ghcb *ghcb,
1816 + return ES_EXCEPTION;
1817 + }
1818 +
1819 +-static __always_inline void vc_handle_trap_db(struct pt_regs *regs)
1820 +-{
1821 +- if (user_mode(regs))
1822 +- noist_exc_debug(regs);
1823 +- else
1824 +- exc_debug(regs);
1825 +-}
1826 +-
1827 + static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
1828 + struct ghcb *ghcb,
1829 + unsigned long exit_code)
1830 +@@ -1289,44 +1292,15 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
1831 + return (sp >= __this_cpu_ist_bottom_va(VC2) && sp < __this_cpu_ist_top_va(VC2));
1832 + }
1833 +
1834 +-/*
1835 +- * Main #VC exception handler. It is called when the entry code was able to
1836 +- * switch off the IST to a safe kernel stack.
1837 +- *
1838 +- * With the current implementation it is always possible to switch to a safe
1839 +- * stack because #VC exceptions only happen at known places, like intercepted
1840 +- * instructions or accesses to MMIO areas/IO ports. They can also happen with
1841 +- * code instrumentation when the hypervisor intercepts #DB, but the critical
1842 +- * paths are forbidden to be instrumented, so #DB exceptions currently also
1843 +- * only happen in safe places.
1844 +- */
1845 +-DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
1846 ++static bool vc_raw_handle_exception(struct pt_regs *regs, unsigned long error_code)
1847 + {
1848 +- irqentry_state_t irq_state;
1849 + struct ghcb_state state;
1850 + struct es_em_ctxt ctxt;
1851 + enum es_result result;
1852 + struct ghcb *ghcb;
1853 ++ bool ret = true;
1854 +
1855 +- /*
1856 +- * Handle #DB before calling into !noinstr code to avoid recursive #DB.
1857 +- */
1858 +- if (error_code == SVM_EXIT_EXCP_BASE + X86_TRAP_DB) {
1859 +- vc_handle_trap_db(regs);
1860 +- return;
1861 +- }
1862 +-
1863 +- irq_state = irqentry_nmi_enter(regs);
1864 +- lockdep_assert_irqs_disabled();
1865 +- instrumentation_begin();
1866 +-
1867 +- /*
1868 +- * This is invoked through an interrupt gate, so IRQs are disabled. The
1869 +- * code below might walk page-tables for user or kernel addresses, so
1870 +- * keep the IRQs disabled to protect us against concurrent TLB flushes.
1871 +- */
1872 +-
1873 +- ghcb = sev_es_get_ghcb(&state);
1874 ++ ghcb = __sev_get_ghcb(&state);
1875 +
1876 + vc_ghcb_invalidate(ghcb);
1877 + result = vc_init_em_ctxt(&ctxt, regs, error_code);
1878 +@@ -1334,7 +1308,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
1879 + if (result == ES_OK)
1880 + result = vc_handle_exitcode(&ctxt, ghcb, error_code);
1881 +
1882 +- sev_es_put_ghcb(&state);
1883 ++ __sev_put_ghcb(&state);
1884 +
1885 + /* Done - now check the result */
1886 + switch (result) {
1887 +@@ -1344,15 +1318,18 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
1888 + case ES_UNSUPPORTED:
1889 + pr_err_ratelimited("Unsupported exit-code 0x%02lx in early #VC exception (IP: 0x%lx)\n",
1890 + error_code, regs->ip);
1891 +- goto fail;
1892 ++ ret = false;
1893 ++ break;
1894 + case ES_VMM_ERROR:
1895 + pr_err_ratelimited("Failure in communication with VMM (exit-code 0x%02lx IP: 0x%lx)\n",
1896 + error_code, regs->ip);
1897 +- goto fail;
1898 ++ ret = false;
1899 ++ break;
1900 + case ES_DECODE_FAILED:
1901 + pr_err_ratelimited("Failed to decode instruction (exit-code 0x%02lx IP: 0x%lx)\n",
1902 + error_code, regs->ip);
1903 +- goto fail;
1904 ++ ret = false;
1905 ++ break;
1906 + case ES_EXCEPTION:
1907 + vc_forward_exception(&ctxt);
1908 + break;
1909 +@@ -1368,24 +1345,52 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
1910 + BUG();
1911 + }
1912 +
1913 +-out:
1914 +- instrumentation_end();
1915 +- irqentry_nmi_exit(regs, irq_state);
1916 ++ return ret;
1917 ++}
1918 +
1919 +- return;
1920 ++static __always_inline bool vc_is_db(unsigned long error_code)
1921 ++{
1922 ++ return error_code == SVM_EXIT_EXCP_BASE + X86_TRAP_DB;
1923 ++}
1924 +
1925 +-fail:
1926 +- if (user_mode(regs)) {
1927 +- /*
1928 +- * Do not kill the machine if user-space triggered the
1929 +- * exception. Send SIGBUS instead and let user-space deal with
1930 +- * it.
1931 +- */
1932 +- force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0);
1933 +- } else {
1934 +- pr_emerg("PANIC: Unhandled #VC exception in kernel space (result=%d)\n",
1935 +- result);
1936 ++/*
1937 ++ * Runtime #VC exception handler when raised from kernel mode. Runs in NMI mode
1938 ++ * and will panic when an error happens.
1939 ++ */
1940 ++DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
1941 ++{
1942 ++ irqentry_state_t irq_state;
1943 +
1944 ++ /*
1945 ++ * With the current implementation it is always possible to switch to a
1946 ++ * safe stack because #VC exceptions only happen at known places, like
1947 ++ * intercepted instructions or accesses to MMIO areas/IO ports. They can
1948 ++ * also happen with code instrumentation when the hypervisor intercepts
1949 ++ * #DB, but the critical paths are forbidden to be instrumented, so #DB
1950 ++ * exceptions currently also only happen in safe places.
1951 ++ *
1952 ++ * But keep this here in case the noinstr annotations are violated due
1953 ++ * to bug elsewhere.
1954 ++ */
1955 ++ if (unlikely(on_vc_fallback_stack(regs))) {
1956 ++ instrumentation_begin();
1957 ++ panic("Can't handle #VC exception from unsupported context\n");
1958 ++ instrumentation_end();
1959 ++ }
1960 ++
1961 ++ /*
1962 ++ * Handle #DB before calling into !noinstr code to avoid recursive #DB.
1963 ++ */
1964 ++ if (vc_is_db(error_code)) {
1965 ++ exc_debug(regs);
1966 ++ return;
1967 ++ }
1968 ++
1969 ++ irq_state = irqentry_nmi_enter(regs);
1970 ++
1971 ++ instrumentation_begin();
1972 ++
1973 ++ if (!vc_raw_handle_exception(regs, error_code)) {
1974 + /* Show some debug info */
1975 + show_regs(regs);
1976 +
1977 +@@ -1396,23 +1401,38 @@ fail:
1978 + panic("Returned from Terminate-Request to Hypervisor\n");
1979 + }
1980 +
1981 +- goto out;
1982 ++ instrumentation_end();
1983 ++ irqentry_nmi_exit(regs, irq_state);
1984 + }
1985 +
1986 +-/* This handler runs on the #VC fall-back stack. It can cause further #VC exceptions */
1987 +-DEFINE_IDTENTRY_VC_IST(exc_vmm_communication)
1988 ++/*
1989 ++ * Runtime #VC exception handler when raised from user mode. Runs in IRQ mode
1990 ++ * and will kill the current task with SIGBUS when an error happens.
1991 ++ */
1992 ++DEFINE_IDTENTRY_VC_USER(exc_vmm_communication)
1993 + {
1994 ++ /*
1995 ++ * Handle #DB before calling into !noinstr code to avoid recursive #DB.
1996 ++ */
1997 ++ if (vc_is_db(error_code)) {
1998 ++ noist_exc_debug(regs);
1999 ++ return;
2000 ++ }
2001 ++
2002 ++ irqentry_enter_from_user_mode(regs);
2003 + instrumentation_begin();
2004 +- panic("Can't handle #VC exception from unsupported context\n");
2005 +- instrumentation_end();
2006 +-}
2007 +
2008 +-DEFINE_IDTENTRY_VC(exc_vmm_communication)
2009 +-{
2010 +- if (likely(!on_vc_fallback_stack(regs)))
2011 +- safe_stack_exc_vmm_communication(regs, error_code);
2012 +- else
2013 +- ist_exc_vmm_communication(regs, error_code);
2014 ++ if (!vc_raw_handle_exception(regs, error_code)) {
2015 ++ /*
2016 ++ * Do not kill the machine if user-space triggered the
2017 ++ * exception. Send SIGBUS instead and let user-space deal with
2018 ++ * it.
2019 ++ */
2020 ++ force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0);
2021 ++ }
2022 ++
2023 ++ instrumentation_end();
2024 ++ irqentry_exit_to_user_mode(regs);
2025 + }
2026 +
2027 + bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
2028 +diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
2029 +index 363b36bbd791a..ebc4b13b74a47 100644
2030 +--- a/arch/x86/kernel/smpboot.c
2031 ++++ b/arch/x86/kernel/smpboot.c
2032 +@@ -236,7 +236,6 @@ static void notrace start_secondary(void *unused)
2033 + cpu_init();
2034 + rcu_cpu_starting(raw_smp_processor_id());
2035 + x86_cpuinit.early_percpu_clock_init();
2036 +- preempt_disable();
2037 + smp_callin();
2038 +
2039 + enable_start_cpu0 = 0;
2040 +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
2041 +index f70dffc2771f5..56289170753c5 100644
2042 +--- a/arch/x86/kernel/tsc.c
2043 ++++ b/arch/x86/kernel/tsc.c
2044 +@@ -1151,7 +1151,8 @@ static struct clocksource clocksource_tsc = {
2045 + .mask = CLOCKSOURCE_MASK(64),
2046 + .flags = CLOCK_SOURCE_IS_CONTINUOUS |
2047 + CLOCK_SOURCE_VALID_FOR_HRES |
2048 +- CLOCK_SOURCE_MUST_VERIFY,
2049 ++ CLOCK_SOURCE_MUST_VERIFY |
2050 ++ CLOCK_SOURCE_VERIFY_PERCPU,
2051 + .vdso_clock_mode = VDSO_CLOCKMODE_TSC,
2052 + .enable = tsc_cs_enable,
2053 + .resume = tsc_resume,
2054 +diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
2055 +index 62f795352c024..0ed116b8c211d 100644
2056 +--- a/arch/x86/kvm/cpuid.c
2057 ++++ b/arch/x86/kvm/cpuid.c
2058 +@@ -185,10 +185,10 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
2059 + static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
2060 +
2061 + /*
2062 +- * Except for the MMU, which needs to be reset after any vendor
2063 +- * specific adjustments to the reserved GPA bits.
2064 ++ * Except for the MMU, which needs to do its thing any vendor specific
2065 ++ * adjustments to the reserved GPA bits.
2066 + */
2067 +- kvm_mmu_reset_context(vcpu);
2068 ++ kvm_mmu_after_set_cpuid(vcpu);
2069 + }
2070 +
2071 + static int is_efer_nx(void)
2072 +diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
2073 +index f00830e5202fe..fdd1eca717fd6 100644
2074 +--- a/arch/x86/kvm/hyperv.c
2075 ++++ b/arch/x86/kvm/hyperv.c
2076 +@@ -1704,7 +1704,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
2077 + * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
2078 + * analyze it here, flush TLB regardless of the specified address space.
2079 + */
2080 +- kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH,
2081 ++ kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
2082 + NULL, vcpu_mask, &hv_vcpu->tlb_flush);
2083 +
2084 + ret_success:
2085 +diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
2086 +index fb2231cf19b5d..7ffeeb6880d93 100644
2087 +--- a/arch/x86/kvm/mmu/mmu.c
2088 ++++ b/arch/x86/kvm/mmu/mmu.c
2089 +@@ -4155,7 +4155,15 @@ static inline u64 reserved_hpa_bits(void)
2090 + void
2091 + reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
2092 + {
2093 +- bool uses_nx = context->nx ||
2094 ++ /*
2095 ++ * KVM uses NX when TDP is disabled to handle a variety of scenarios,
2096 ++ * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
2097 ++ * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
2098 ++ * The iTLB multi-hit workaround can be toggled at any time, so assume
2099 ++ * NX can be used by any non-nested shadow MMU to avoid having to reset
2100 ++ * MMU contexts. Note, KVM forces EFER.NX=1 when TDP is disabled.
2101 ++ */
2102 ++ bool uses_nx = context->nx || !tdp_enabled ||
2103 + context->mmu_role.base.smep_andnot_wp;
2104 + struct rsvd_bits_validate *shadow_zero_check;
2105 + int i;
2106 +@@ -4838,6 +4846,18 @@ kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu)
2107 + return role.base;
2108 + }
2109 +
2110 ++void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
2111 ++{
2112 ++ /*
2113 ++ * Invalidate all MMU roles to force them to reinitialize as CPUID
2114 ++ * information is factored into reserved bit calculations.
2115 ++ */
2116 ++ vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
2117 ++ vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
2118 ++ vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
2119 ++ kvm_mmu_reset_context(vcpu);
2120 ++}
2121 ++
2122 + void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
2123 + {
2124 + kvm_mmu_unload(vcpu);
2125 +diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
2126 +index a2becf9c2553e..de6407610b19a 100644
2127 +--- a/arch/x86/kvm/mmu/paging_tmpl.h
2128 ++++ b/arch/x86/kvm/mmu/paging_tmpl.h
2129 +@@ -471,8 +471,7 @@ retry_walk:
2130 +
2131 + error:
2132 + errcode |= write_fault | user_fault;
2133 +- if (fetch_fault && (mmu->nx ||
2134 +- kvm_read_cr4_bits(vcpu, X86_CR4_SMEP)))
2135 ++ if (fetch_fault && (mmu->nx || mmu->mmu_role.ext.cr4_smep))
2136 + errcode |= PFERR_FETCH_MASK;
2137 +
2138 + walker->fault.vector = PF_VECTOR;
2139 +diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
2140 +index 018d82e73e311..5c83b912becc1 100644
2141 +--- a/arch/x86/kvm/mmu/tdp_mmu.c
2142 ++++ b/arch/x86/kvm/mmu/tdp_mmu.c
2143 +@@ -745,7 +745,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
2144 + kvm_pfn_t pfn, bool prefault)
2145 + {
2146 + u64 new_spte;
2147 +- int ret = 0;
2148 ++ int ret = RET_PF_FIXED;
2149 + int make_spte_ret = 0;
2150 +
2151 + if (unlikely(is_noslot_pfn(pfn)))
2152 +@@ -777,13 +777,16 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
2153 + trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
2154 + new_spte);
2155 + ret = RET_PF_EMULATE;
2156 +- } else
2157 ++ } else {
2158 + trace_kvm_mmu_set_spte(iter->level, iter->gfn,
2159 + rcu_dereference(iter->sptep));
2160 ++ }
2161 +
2162 +- trace_kvm_mmu_set_spte(iter->level, iter->gfn,
2163 +- rcu_dereference(iter->sptep));
2164 +- if (!prefault)
2165 ++ /*
2166 ++ * Increase pf_fixed in both RET_PF_EMULATE and RET_PF_FIXED to be
2167 ++ * consistent with legacy MMU behavior.
2168 ++ */
2169 ++ if (ret != RET_PF_SPURIOUS)
2170 + vcpu->stat.pf_fixed++;
2171 +
2172 + return ret;
2173 +diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
2174 +index 4ba2a43e188b5..618dcf11d6885 100644
2175 +--- a/arch/x86/kvm/vmx/nested.c
2176 ++++ b/arch/x86/kvm/vmx/nested.c
2177 +@@ -1132,12 +1132,19 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
2178 +
2179 + /*
2180 + * Unconditionally skip the TLB flush on fast CR3 switch, all TLB
2181 +- * flushes are handled by nested_vmx_transition_tlb_flush(). See
2182 +- * nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
2183 ++ * flushes are handled by nested_vmx_transition_tlb_flush().
2184 + */
2185 +- if (!nested_ept)
2186 +- kvm_mmu_new_pgd(vcpu, cr3, true,
2187 +- !nested_vmx_transition_mmu_sync(vcpu));
2188 ++ if (!nested_ept) {
2189 ++ kvm_mmu_new_pgd(vcpu, cr3, true, true);
2190 ++
2191 ++ /*
2192 ++ * A TLB flush on VM-Enter/VM-Exit flushes all linear mappings
2193 ++ * across all PCIDs, i.e. all PGDs need to be synchronized.
2194 ++ * See nested_vmx_transition_mmu_sync() for more details.
2195 ++ */
2196 ++ if (nested_vmx_transition_mmu_sync(vcpu))
2197 ++ kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
2198 ++ }
2199 +
2200 + vcpu->arch.cr3 = cr3;
2201 + kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
2202 +@@ -5465,8 +5472,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
2203 + {
2204 + u32 index = kvm_rcx_read(vcpu);
2205 + u64 new_eptp;
2206 +- bool accessed_dirty;
2207 +- struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
2208 +
2209 + if (!nested_cpu_has_eptp_switching(vmcs12) ||
2210 + !nested_cpu_has_ept(vmcs12))
2211 +@@ -5475,13 +5480,10 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
2212 + if (index >= VMFUNC_EPTP_ENTRIES)
2213 + return 1;
2214 +
2215 +-
2216 + if (kvm_vcpu_read_guest_page(vcpu, vmcs12->eptp_list_address >> PAGE_SHIFT,
2217 + &new_eptp, index * 8, 8))
2218 + return 1;
2219 +
2220 +- accessed_dirty = !!(new_eptp & VMX_EPTP_AD_ENABLE_BIT);
2221 +-
2222 + /*
2223 + * If the (L2) guest does a vmfunc to the currently
2224 + * active ept pointer, we don't have to do anything else
2225 +@@ -5490,8 +5492,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
2226 + if (!nested_vmx_check_eptp(vcpu, new_eptp))
2227 + return 1;
2228 +
2229 +- mmu->ept_ad = accessed_dirty;
2230 +- mmu->mmu_role.base.ad_disabled = !accessed_dirty;
2231 + vmcs12->ept_pointer = new_eptp;
2232 +
2233 + kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
2234 +@@ -5517,7 +5517,7 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
2235 + }
2236 +
2237 + vmcs12 = get_vmcs12(vcpu);
2238 +- if ((vmcs12->vm_function_control & (1 << function)) == 0)
2239 ++ if (!(vmcs12->vm_function_control & BIT_ULL(function)))
2240 + goto fail;
2241 +
2242 + switch (function) {
2243 +@@ -5775,6 +5775,9 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
2244 + else if (is_breakpoint(intr_info) &&
2245 + vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP)
2246 + return true;
2247 ++ else if (is_alignment_check(intr_info) &&
2248 ++ !vmx_guest_inject_ac(vcpu))
2249 ++ return true;
2250 + return false;
2251 + case EXIT_REASON_EXTERNAL_INTERRUPT:
2252 + return true;
2253 +diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
2254 +index 1472c6c376f74..571d9ad80a59e 100644
2255 +--- a/arch/x86/kvm/vmx/vmcs.h
2256 ++++ b/arch/x86/kvm/vmx/vmcs.h
2257 +@@ -117,6 +117,11 @@ static inline bool is_gp_fault(u32 intr_info)
2258 + return is_exception_n(intr_info, GP_VECTOR);
2259 + }
2260 +
2261 ++static inline bool is_alignment_check(u32 intr_info)
2262 ++{
2263 ++ return is_exception_n(intr_info, AC_VECTOR);
2264 ++}
2265 ++
2266 + static inline bool is_machine_check(u32 intr_info)
2267 + {
2268 + return is_exception_n(intr_info, MC_VECTOR);
2269 +diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
2270 +index ae63d59be38c7..cd2ca9093e8dc 100644
2271 +--- a/arch/x86/kvm/vmx/vmx.c
2272 ++++ b/arch/x86/kvm/vmx/vmx.c
2273 +@@ -4796,7 +4796,7 @@ static int handle_machine_check(struct kvm_vcpu *vcpu)
2274 + * - Guest has #AC detection enabled in CR0
2275 + * - Guest EFLAGS has AC bit set
2276 + */
2277 +-static inline bool guest_inject_ac(struct kvm_vcpu *vcpu)
2278 ++bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu)
2279 + {
2280 + if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
2281 + return true;
2282 +@@ -4905,7 +4905,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
2283 + kvm_run->debug.arch.exception = ex_no;
2284 + break;
2285 + case AC_VECTOR:
2286 +- if (guest_inject_ac(vcpu)) {
2287 ++ if (vmx_guest_inject_ac(vcpu)) {
2288 + kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
2289 + return 1;
2290 + }
2291 +diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
2292 +index 89da5e1251f18..723782cd0511e 100644
2293 +--- a/arch/x86/kvm/vmx/vmx.h
2294 ++++ b/arch/x86/kvm/vmx/vmx.h
2295 +@@ -379,6 +379,7 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
2296 + u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa,
2297 + int root_level);
2298 +
2299 ++bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu);
2300 + void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
2301 + void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
2302 + bool vmx_nmi_blocked(struct kvm_vcpu *vcpu);
2303 +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
2304 +index a6ca7e657af27..615dd236e8429 100644
2305 +--- a/arch/x86/kvm/x86.c
2306 ++++ b/arch/x86/kvm/x86.c
2307 +@@ -9022,7 +9022,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
2308 + }
2309 + if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
2310 + kvm_vcpu_flush_tlb_current(vcpu);
2311 +- if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
2312 ++ if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
2313 + kvm_vcpu_flush_tlb_guest(vcpu);
2314 +
2315 + if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
2316 +@@ -10302,6 +10302,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
2317 +
2318 + void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
2319 + {
2320 ++ unsigned long old_cr0 = kvm_read_cr0(vcpu);
2321 ++
2322 + kvm_lapic_reset(vcpu, init_event);
2323 +
2324 + vcpu->arch.hflags = 0;
2325 +@@ -10370,6 +10372,17 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
2326 + vcpu->arch.ia32_xss = 0;
2327 +
2328 + static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
2329 ++
2330 ++ /*
2331 ++ * Reset the MMU context if paging was enabled prior to INIT (which is
2332 ++ * implied if CR0.PG=1 as CR0 will be '0' prior to RESET). Unlike the
2333 ++ * standard CR0/CR4/EFER modification paths, only CR0.PG needs to be
2334 ++ * checked because it is unconditionally cleared on INIT and all other
2335 ++ * paging related bits are ignored if paging is disabled, i.e. CR0.WP,
2336 ++ * CR4, and EFER changes are all irrelevant if CR0.PG was '0'.
2337 ++ */
2338 ++ if (old_cr0 & X86_CR0_PG)
2339 ++ kvm_mmu_reset_context(vcpu);
2340 + }
2341 +
2342 + void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
2343 +diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
2344 +index 7f1b3a862e141..1fb0c37e48cb1 100644
2345 +--- a/arch/x86/net/bpf_jit_comp.c
2346 ++++ b/arch/x86/net/bpf_jit_comp.c
2347 +@@ -1297,7 +1297,7 @@ st: if (is_imm8(insn->off))
2348 + emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
2349 + if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
2350 + struct exception_table_entry *ex;
2351 +- u8 *_insn = image + proglen;
2352 ++ u8 *_insn = image + proglen + (start_of_ldx - temp);
2353 + s64 delta;
2354 +
2355 + /* populate jmp_offset for JMP above */
2356 +diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
2357 +index cd85a7a2722ba..1254da07ead1f 100644
2358 +--- a/arch/xtensa/kernel/smp.c
2359 ++++ b/arch/xtensa/kernel/smp.c
2360 +@@ -145,7 +145,6 @@ void secondary_start_kernel(void)
2361 + cpumask_set_cpu(cpu, mm_cpumask(mm));
2362 + enter_lazy_tlb(mm, current);
2363 +
2364 +- preempt_disable();
2365 + trace_hardirqs_off();
2366 +
2367 + calibrate_delay();
2368 +diff --git a/block/bio.c b/block/bio.c
2369 +index 50e579088aca4..b00c5a88a7437 100644
2370 +--- a/block/bio.c
2371 ++++ b/block/bio.c
2372 +@@ -1412,8 +1412,7 @@ static inline bool bio_remaining_done(struct bio *bio)
2373 + *
2374 + * bio_endio() can be called several times on a bio that has been chained
2375 + * using bio_chain(). The ->bi_end_io() function will only be called the
2376 +- * last time. At this point the BLK_TA_COMPLETE tracing event will be
2377 +- * generated if BIO_TRACE_COMPLETION is set.
2378 ++ * last time.
2379 + **/
2380 + void bio_endio(struct bio *bio)
2381 + {
2382 +@@ -1426,6 +1425,11 @@ again:
2383 + if (bio->bi_bdev)
2384 + rq_qos_done_bio(bio->bi_bdev->bd_disk->queue, bio);
2385 +
2386 ++ if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
2387 ++ trace_block_bio_complete(bio->bi_bdev->bd_disk->queue, bio);
2388 ++ bio_clear_flag(bio, BIO_TRACE_COMPLETION);
2389 ++ }
2390 ++
2391 + /*
2392 + * Need to have a real endio function for chained bios, otherwise
2393 + * various corner cases will break (like stacking block devices that
2394 +@@ -1439,11 +1443,6 @@ again:
2395 + goto again;
2396 + }
2397 +
2398 +- if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
2399 +- trace_block_bio_complete(bio->bi_bdev->bd_disk->queue, bio);
2400 +- bio_clear_flag(bio, BIO_TRACE_COMPLETION);
2401 +- }
2402 +-
2403 + blk_throtl_bio_endio(bio);
2404 + /* release cgroup info */
2405 + bio_uninit(bio);
2406 +diff --git a/block/blk-flush.c b/block/blk-flush.c
2407 +index 7942ca6ed3211..1002f6c581816 100644
2408 +--- a/block/blk-flush.c
2409 ++++ b/block/blk-flush.c
2410 +@@ -219,8 +219,6 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
2411 + unsigned long flags = 0;
2412 + struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);
2413 +
2414 +- blk_account_io_flush(flush_rq);
2415 +-
2416 + /* release the tag's ownership to the req cloned from */
2417 + spin_lock_irqsave(&fq->mq_flush_lock, flags);
2418 +
2419 +@@ -230,6 +228,7 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
2420 + return;
2421 + }
2422 +
2423 ++ blk_account_io_flush(flush_rq);
2424 + /*
2425 + * Flush request has to be marked as IDLE when it is really ended
2426 + * because its .end_io() is called from timeout code path too for
2427 +diff --git a/block/blk-merge.c b/block/blk-merge.c
2428 +index 4d97fb6dd2267..bcdff1879c346 100644
2429 +--- a/block/blk-merge.c
2430 ++++ b/block/blk-merge.c
2431 +@@ -559,10 +559,14 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
2432 + static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
2433 + unsigned int nr_phys_segs)
2434 + {
2435 +- if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
2436 ++ if (blk_integrity_merge_bio(req->q, req, bio) == false)
2437 + goto no_merge;
2438 +
2439 +- if (blk_integrity_merge_bio(req->q, req, bio) == false)
2440 ++ /* discard request merge won't add new segment */
2441 ++ if (req_op(req) == REQ_OP_DISCARD)
2442 ++ return 1;
2443 ++
2444 ++ if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
2445 + goto no_merge;
2446 +
2447 + /*
2448 +diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
2449 +index 9c92053e704dc..c4f2f6c123aed 100644
2450 +--- a/block/blk-mq-tag.c
2451 ++++ b/block/blk-mq-tag.c
2452 +@@ -199,6 +199,20 @@ struct bt_iter_data {
2453 + bool reserved;
2454 + };
2455 +
2456 ++static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
2457 ++ unsigned int bitnr)
2458 ++{
2459 ++ struct request *rq;
2460 ++ unsigned long flags;
2461 ++
2462 ++ spin_lock_irqsave(&tags->lock, flags);
2463 ++ rq = tags->rqs[bitnr];
2464 ++ if (!rq || !refcount_inc_not_zero(&rq->ref))
2465 ++ rq = NULL;
2466 ++ spin_unlock_irqrestore(&tags->lock, flags);
2467 ++ return rq;
2468 ++}
2469 ++
2470 + static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
2471 + {
2472 + struct bt_iter_data *iter_data = data;
2473 +@@ -206,18 +220,22 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
2474 + struct blk_mq_tags *tags = hctx->tags;
2475 + bool reserved = iter_data->reserved;
2476 + struct request *rq;
2477 ++ bool ret = true;
2478 +
2479 + if (!reserved)
2480 + bitnr += tags->nr_reserved_tags;
2481 +- rq = tags->rqs[bitnr];
2482 +-
2483 + /*
2484 + * We can hit rq == NULL here, because the tagging functions
2485 + * test and set the bit before assigning ->rqs[].
2486 + */
2487 +- if (rq && rq->q == hctx->queue && rq->mq_hctx == hctx)
2488 +- return iter_data->fn(hctx, rq, iter_data->data, reserved);
2489 +- return true;
2490 ++ rq = blk_mq_find_and_get_req(tags, bitnr);
2491 ++ if (!rq)
2492 ++ return true;
2493 ++
2494 ++ if (rq->q == hctx->queue && rq->mq_hctx == hctx)
2495 ++ ret = iter_data->fn(hctx, rq, iter_data->data, reserved);
2496 ++ blk_mq_put_rq_ref(rq);
2497 ++ return ret;
2498 + }
2499 +
2500 + /**
2501 +@@ -264,6 +282,8 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
2502 + struct blk_mq_tags *tags = iter_data->tags;
2503 + bool reserved = iter_data->flags & BT_TAG_ITER_RESERVED;
2504 + struct request *rq;
2505 ++ bool ret = true;
2506 ++ bool iter_static_rqs = !!(iter_data->flags & BT_TAG_ITER_STATIC_RQS);
2507 +
2508 + if (!reserved)
2509 + bitnr += tags->nr_reserved_tags;
2510 +@@ -272,16 +292,19 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
2511 + * We can hit rq == NULL here, because the tagging functions
2512 + * test and set the bit before assigning ->rqs[].
2513 + */
2514 +- if (iter_data->flags & BT_TAG_ITER_STATIC_RQS)
2515 ++ if (iter_static_rqs)
2516 + rq = tags->static_rqs[bitnr];
2517 + else
2518 +- rq = tags->rqs[bitnr];
2519 ++ rq = blk_mq_find_and_get_req(tags, bitnr);
2520 + if (!rq)
2521 + return true;
2522 +- if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
2523 +- !blk_mq_request_started(rq))
2524 +- return true;
2525 +- return iter_data->fn(rq, iter_data->data, reserved);
2526 ++
2527 ++ if (!(iter_data->flags & BT_TAG_ITER_STARTED) ||
2528 ++ blk_mq_request_started(rq))
2529 ++ ret = iter_data->fn(rq, iter_data->data, reserved);
2530 ++ if (!iter_static_rqs)
2531 ++ blk_mq_put_rq_ref(rq);
2532 ++ return ret;
2533 + }
2534 +
2535 + /**
2536 +@@ -348,6 +371,9 @@ void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
2537 + * indicates whether or not @rq is a reserved request. Return
2538 + * true to continue iterating tags, false to stop.
2539 + * @priv: Will be passed as second argument to @fn.
2540 ++ *
2541 ++ * We grab one request reference before calling @fn and release it after
2542 ++ * @fn returns.
2543 + */
2544 + void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
2545 + busy_tag_iter_fn *fn, void *priv)
2546 +@@ -516,6 +542,7 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
2547 +
2548 + tags->nr_tags = total_tags;
2549 + tags->nr_reserved_tags = reserved_tags;
2550 ++ spin_lock_init(&tags->lock);
2551 +
2552 + if (flags & BLK_MQ_F_TAG_HCTX_SHARED)
2553 + return tags;
2554 +diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
2555 +index 7d3e6b333a4a9..f887988e5ef60 100644
2556 +--- a/block/blk-mq-tag.h
2557 ++++ b/block/blk-mq-tag.h
2558 +@@ -20,6 +20,12 @@ struct blk_mq_tags {
2559 + struct request **rqs;
2560 + struct request **static_rqs;
2561 + struct list_head page_list;
2562 ++
2563 ++ /*
2564 ++ * used to clear request reference in rqs[] before freeing one
2565 ++ * request pool
2566 ++ */
2567 ++ spinlock_t lock;
2568 + };
2569 +
2570 + extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
2571 +diff --git a/block/blk-mq.c b/block/blk-mq.c
2572 +index 0e120547ccb72..6a982a277176c 100644
2573 +--- a/block/blk-mq.c
2574 ++++ b/block/blk-mq.c
2575 +@@ -908,6 +908,14 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
2576 + return false;
2577 + }
2578 +
2579 ++void blk_mq_put_rq_ref(struct request *rq)
2580 ++{
2581 ++ if (is_flush_rq(rq, rq->mq_hctx))
2582 ++ rq->end_io(rq, 0);
2583 ++ else if (refcount_dec_and_test(&rq->ref))
2584 ++ __blk_mq_free_request(rq);
2585 ++}
2586 ++
2587 + static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
2588 + struct request *rq, void *priv, bool reserved)
2589 + {
2590 +@@ -941,11 +949,7 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
2591 + if (blk_mq_req_expired(rq, next))
2592 + blk_mq_rq_timed_out(rq, reserved);
2593 +
2594 +- if (is_flush_rq(rq, hctx))
2595 +- rq->end_io(rq, 0);
2596 +- else if (refcount_dec_and_test(&rq->ref))
2597 +- __blk_mq_free_request(rq);
2598 +-
2599 ++ blk_mq_put_rq_ref(rq);
2600 + return true;
2601 + }
2602 +
2603 +@@ -1219,9 +1223,6 @@ static void blk_mq_update_dispatch_busy(struct blk_mq_hw_ctx *hctx, bool busy)
2604 + {
2605 + unsigned int ewma;
2606 +
2607 +- if (hctx->queue->elevator)
2608 +- return;
2609 +-
2610 + ewma = hctx->dispatch_busy;
2611 +
2612 + if (!ewma && !busy)
2613 +@@ -2287,6 +2288,45 @@ queue_exit:
2614 + return BLK_QC_T_NONE;
2615 + }
2616 +
2617 ++static size_t order_to_size(unsigned int order)
2618 ++{
2619 ++ return (size_t)PAGE_SIZE << order;
2620 ++}
2621 ++
2622 ++/* called before freeing request pool in @tags */
2623 ++static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
2624 ++ struct blk_mq_tags *tags, unsigned int hctx_idx)
2625 ++{
2626 ++ struct blk_mq_tags *drv_tags = set->tags[hctx_idx];
2627 ++ struct page *page;
2628 ++ unsigned long flags;
2629 ++
2630 ++ list_for_each_entry(page, &tags->page_list, lru) {
2631 ++ unsigned long start = (unsigned long)page_address(page);
2632 ++ unsigned long end = start + order_to_size(page->private);
2633 ++ int i;
2634 ++
2635 ++ for (i = 0; i < set->queue_depth; i++) {
2636 ++ struct request *rq = drv_tags->rqs[i];
2637 ++ unsigned long rq_addr = (unsigned long)rq;
2638 ++
2639 ++ if (rq_addr >= start && rq_addr < end) {
2640 ++ WARN_ON_ONCE(refcount_read(&rq->ref) != 0);
2641 ++ cmpxchg(&drv_tags->rqs[i], rq, NULL);
2642 ++ }
2643 ++ }
2644 ++ }
2645 ++
2646 ++ /*
2647 ++ * Wait until all pending iteration is done.
2648 ++ *
2649 ++ * Request reference is cleared and it is guaranteed to be observed
2650 ++ * after the ->lock is released.
2651 ++ */
2652 ++ spin_lock_irqsave(&drv_tags->lock, flags);
2653 ++ spin_unlock_irqrestore(&drv_tags->lock, flags);
2654 ++}
2655 ++
2656 + void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
2657 + unsigned int hctx_idx)
2658 + {
2659 +@@ -2305,6 +2345,8 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
2660 + }
2661 + }
2662 +
2663 ++ blk_mq_clear_rq_mapping(set, tags, hctx_idx);
2664 ++
2665 + while (!list_empty(&tags->page_list)) {
2666 + page = list_first_entry(&tags->page_list, struct page, lru);
2667 + list_del_init(&page->lru);
2668 +@@ -2364,11 +2406,6 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
2669 + return tags;
2670 + }
2671 +
2672 +-static size_t order_to_size(unsigned int order)
2673 +-{
2674 +- return (size_t)PAGE_SIZE << order;
2675 +-}
2676 +-
2677 + static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
2678 + unsigned int hctx_idx, int node)
2679 + {
2680 +diff --git a/block/blk-mq.h b/block/blk-mq.h
2681 +index 3616453ca28c8..143afe42c63a0 100644
2682 +--- a/block/blk-mq.h
2683 ++++ b/block/blk-mq.h
2684 +@@ -47,6 +47,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
2685 + void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
2686 + struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
2687 + struct blk_mq_ctx *start);
2688 ++void blk_mq_put_rq_ref(struct request *rq);
2689 +
2690 + /*
2691 + * Internal helpers for allocating/freeing the request map
2692 +diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
2693 +index 2bc43e94f4c40..2bcb3495e376b 100644
2694 +--- a/block/blk-rq-qos.h
2695 ++++ b/block/blk-rq-qos.h
2696 +@@ -7,6 +7,7 @@
2697 + #include <linux/blk_types.h>
2698 + #include <linux/atomic.h>
2699 + #include <linux/wait.h>
2700 ++#include <linux/blk-mq.h>
2701 +
2702 + #include "blk-mq-debugfs.h"
2703 +
2704 +@@ -99,8 +100,21 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
2705 +
2706 + static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
2707 + {
2708 ++ /*
2709 ++ * No IO can be in-flight when adding rqos, so freeze queue, which
2710 ++ * is fine since we only support rq_qos for blk-mq queue.
2711 ++ *
2712 ++ * Reuse ->queue_lock for protecting against other concurrent
2713 ++ * rq_qos adding/deleting
2714 ++ */
2715 ++ blk_mq_freeze_queue(q);
2716 ++
2717 ++ spin_lock_irq(&q->queue_lock);
2718 + rqos->next = q->rq_qos;
2719 + q->rq_qos = rqos;
2720 ++ spin_unlock_irq(&q->queue_lock);
2721 ++
2722 ++ blk_mq_unfreeze_queue(q);
2723 +
2724 + if (rqos->ops->debugfs_attrs)
2725 + blk_mq_debugfs_register_rqos(rqos);
2726 +@@ -110,12 +124,22 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
2727 + {
2728 + struct rq_qos **cur;
2729 +
2730 ++ /*
2731 ++ * See comment in rq_qos_add() about freezing queue & using
2732 ++ * ->queue_lock.
2733 ++ */
2734 ++ blk_mq_freeze_queue(q);
2735 ++
2736 ++ spin_lock_irq(&q->queue_lock);
2737 + for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
2738 + if (*cur == rqos) {
2739 + *cur = rqos->next;
2740 + break;
2741 + }
2742 + }
2743 ++ spin_unlock_irq(&q->queue_lock);
2744 ++
2745 ++ blk_mq_unfreeze_queue(q);
2746 +
2747 + blk_mq_debugfs_unregister_rqos(rqos);
2748 + }
2749 +diff --git a/block/blk-wbt.c b/block/blk-wbt.c
2750 +index 42aed0160f86a..f5e5ac915bf7c 100644
2751 +--- a/block/blk-wbt.c
2752 ++++ b/block/blk-wbt.c
2753 +@@ -77,7 +77,8 @@ enum {
2754 +
2755 + static inline bool rwb_enabled(struct rq_wb *rwb)
2756 + {
2757 +- return rwb && rwb->wb_normal != 0;
2758 ++ return rwb && rwb->enable_state != WBT_STATE_OFF_DEFAULT &&
2759 ++ rwb->wb_normal != 0;
2760 + }
2761 +
2762 + static void wb_timestamp(struct rq_wb *rwb, unsigned long *var)
2763 +@@ -636,9 +637,13 @@ void wbt_set_write_cache(struct request_queue *q, bool write_cache_on)
2764 + void wbt_enable_default(struct request_queue *q)
2765 + {
2766 + struct rq_qos *rqos = wbt_rq_qos(q);
2767 ++
2768 + /* Throttling already enabled? */
2769 +- if (rqos)
2770 ++ if (rqos) {
2771 ++ if (RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
2772 ++ RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT;
2773 + return;
2774 ++ }
2775 +
2776 + /* Queue not registered? Maybe shutting down... */
2777 + if (!blk_queue_registered(q))
2778 +@@ -702,7 +707,7 @@ void wbt_disable_default(struct request_queue *q)
2779 + rwb = RQWB(rqos);
2780 + if (rwb->enable_state == WBT_STATE_ON_DEFAULT) {
2781 + blk_stat_deactivate(rwb->cb);
2782 +- rwb->wb_normal = 0;
2783 ++ rwb->enable_state = WBT_STATE_OFF_DEFAULT;
2784 + }
2785 + }
2786 + EXPORT_SYMBOL_GPL(wbt_disable_default);
2787 +diff --git a/block/blk-wbt.h b/block/blk-wbt.h
2788 +index 16bdc85b8df92..2eb01becde8c4 100644
2789 +--- a/block/blk-wbt.h
2790 ++++ b/block/blk-wbt.h
2791 +@@ -34,6 +34,7 @@ enum {
2792 + enum {
2793 + WBT_STATE_ON_DEFAULT = 1,
2794 + WBT_STATE_ON_MANUAL = 2,
2795 ++ WBT_STATE_OFF_DEFAULT
2796 + };
2797 +
2798 + struct rq_wb {
2799 +diff --git a/crypto/shash.c b/crypto/shash.c
2800 +index 2e3433ad97629..0a0a50cb694f0 100644
2801 +--- a/crypto/shash.c
2802 ++++ b/crypto/shash.c
2803 +@@ -20,12 +20,24 @@
2804 +
2805 + static const struct crypto_type crypto_shash_type;
2806 +
2807 +-int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
2808 +- unsigned int keylen)
2809 ++static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
2810 ++ unsigned int keylen)
2811 + {
2812 + return -ENOSYS;
2813 + }
2814 +-EXPORT_SYMBOL_GPL(shash_no_setkey);
2815 ++
2816 ++/*
2817 ++ * Check whether an shash algorithm has a setkey function.
2818 ++ *
2819 ++ * For CFI compatibility, this must not be an inline function. This is because
2820 ++ * when CFI is enabled, modules won't get the same address for shash_no_setkey
2821 ++ * (if it were exported, which inlining would require) as the core kernel will.
2822 ++ */
2823 ++bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
2824 ++{
2825 ++ return alg->setkey != shash_no_setkey;
2826 ++}
2827 ++EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);
2828 +
2829 + static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
2830 + unsigned int keylen)
2831 +diff --git a/crypto/sm2.c b/crypto/sm2.c
2832 +index b21addc3ac06a..db8a4a265669d 100644
2833 +--- a/crypto/sm2.c
2834 ++++ b/crypto/sm2.c
2835 +@@ -79,10 +79,17 @@ static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
2836 + goto free;
2837 +
2838 + rc = -ENOMEM;
2839 ++
2840 ++ ec->Q = mpi_point_new(0);
2841 ++ if (!ec->Q)
2842 ++ goto free;
2843 ++
2844 + /* mpi_ec_setup_elliptic_curve */
2845 + ec->G = mpi_point_new(0);
2846 +- if (!ec->G)
2847 ++ if (!ec->G) {
2848 ++ mpi_point_release(ec->Q);
2849 + goto free;
2850 ++ }
2851 +
2852 + mpi_set(ec->G->x, x);
2853 + mpi_set(ec->G->y, y);
2854 +@@ -91,6 +98,7 @@ static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
2855 + rc = -EINVAL;
2856 + ec->n = mpi_scanval(ecp->n);
2857 + if (!ec->n) {
2858 ++ mpi_point_release(ec->Q);
2859 + mpi_point_release(ec->G);
2860 + goto free;
2861 + }
2862 +@@ -386,27 +394,15 @@ static int sm2_set_pub_key(struct crypto_akcipher *tfm,
2863 + MPI a;
2864 + int rc;
2865 +
2866 +- ec->Q = mpi_point_new(0);
2867 +- if (!ec->Q)
2868 +- return -ENOMEM;
2869 +-
2870 + /* include the uncompressed flag '0x04' */
2871 +- rc = -ENOMEM;
2872 + a = mpi_read_raw_data(key, keylen);
2873 + if (!a)
2874 +- goto error;
2875 ++ return -ENOMEM;
2876 +
2877 + mpi_normalize(a);
2878 + rc = sm2_ecc_os2ec(ec->Q, a);
2879 + mpi_free(a);
2880 +- if (rc)
2881 +- goto error;
2882 +-
2883 +- return 0;
2884 +
2885 +-error:
2886 +- mpi_point_release(ec->Q);
2887 +- ec->Q = NULL;
2888 + return rc;
2889 + }
2890 +
2891 +diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
2892 +index 700b41adf2db6..9aa82d5272720 100644
2893 +--- a/drivers/acpi/Makefile
2894 ++++ b/drivers/acpi/Makefile
2895 +@@ -8,6 +8,11 @@ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
2896 + #
2897 + # ACPI Boot-Time Table Parsing
2898 + #
2899 ++ifeq ($(CONFIG_ACPI_CUSTOM_DSDT),y)
2900 ++tables.o: $(src)/../../include/$(subst $\",,$(CONFIG_ACPI_CUSTOM_DSDT_FILE)) ;
2901 ++
2902 ++endif
2903 ++
2904 + obj-$(CONFIG_ACPI) += tables.o
2905 + obj-$(CONFIG_X86) += blacklist.o
2906 +
2907 +diff --git a/drivers/acpi/acpi_fpdt.c b/drivers/acpi/acpi_fpdt.c
2908 +index a89a806a7a2a9..4ee2ad234e3d6 100644
2909 +--- a/drivers/acpi/acpi_fpdt.c
2910 ++++ b/drivers/acpi/acpi_fpdt.c
2911 +@@ -240,8 +240,10 @@ static int __init acpi_init_fpdt(void)
2912 + return 0;
2913 +
2914 + fpdt_kobj = kobject_create_and_add("fpdt", acpi_kobj);
2915 +- if (!fpdt_kobj)
2916 ++ if (!fpdt_kobj) {
2917 ++ acpi_put_table(header);
2918 + return -ENOMEM;
2919 ++ }
2920 +
2921 + while (offset < header->length) {
2922 + subtable = (void *)header + offset;
2923 +diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
2924 +index 14b71b41e8453..38e10ab976e67 100644
2925 +--- a/drivers/acpi/acpica/nsrepair2.c
2926 ++++ b/drivers/acpi/acpica/nsrepair2.c
2927 +@@ -379,6 +379,13 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
2928 +
2929 + (*element_ptr)->common.reference_count =
2930 + original_ref_count;
2931 ++
2932 ++ /*
2933 ++ * The original_element holds a reference from the package object
2934 ++ * that represents _HID. Since a new element was created by _HID,
2935 ++ * remove the reference from the _CID package.
2936 ++ */
2937 ++ acpi_ut_remove_reference(original_element);
2938 + }
2939 +
2940 + element_ptr++;
2941 +diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
2942 +index fce7ade2aba92..0c8330ed1ffd5 100644
2943 +--- a/drivers/acpi/apei/ghes.c
2944 ++++ b/drivers/acpi/apei/ghes.c
2945 +@@ -441,28 +441,35 @@ static void ghes_kick_task_work(struct callback_head *head)
2946 + gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
2947 + }
2948 +
2949 +-static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
2950 +- int sev)
2951 ++static bool ghes_do_memory_failure(u64 physical_addr, int flags)
2952 + {
2953 + unsigned long pfn;
2954 +- int flags = -1;
2955 +- int sec_sev = ghes_severity(gdata->error_severity);
2956 +- struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
2957 +
2958 + if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
2959 + return false;
2960 +
2961 +- if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
2962 +- return false;
2963 +-
2964 +- pfn = mem_err->physical_addr >> PAGE_SHIFT;
2965 ++ pfn = PHYS_PFN(physical_addr);
2966 + if (!pfn_valid(pfn)) {
2967 + pr_warn_ratelimited(FW_WARN GHES_PFX
2968 + "Invalid address in generic error data: %#llx\n",
2969 +- mem_err->physical_addr);
2970 ++ physical_addr);
2971 + return false;
2972 + }
2973 +
2974 ++ memory_failure_queue(pfn, flags);
2975 ++ return true;
2976 ++}
2977 ++
2978 ++static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
2979 ++ int sev)
2980 ++{
2981 ++ int flags = -1;
2982 ++ int sec_sev = ghes_severity(gdata->error_severity);
2983 ++ struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
2984 ++
2985 ++ if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
2986 ++ return false;
2987 ++
2988 + /* iff following two events can be handled properly by now */
2989 + if (sec_sev == GHES_SEV_CORRECTED &&
2990 + (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
2991 +@@ -470,14 +477,56 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
2992 + if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
2993 + flags = 0;
2994 +
2995 +- if (flags != -1) {
2996 +- memory_failure_queue(pfn, flags);
2997 +- return true;
2998 +- }
2999 ++ if (flags != -1)
3000 ++ return ghes_do_memory_failure(mem_err->physical_addr, flags);
3001 +
3002 + return false;
3003 + }
3004 +
3005 ++static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
3006 ++{
3007 ++ struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
3008 ++ bool queued = false;
3009 ++ int sec_sev, i;
3010 ++ char *p;
3011 ++
3012 ++ log_arm_hw_error(err);
3013 ++
3014 ++ sec_sev = ghes_severity(gdata->error_severity);
3015 ++ if (sev != GHES_SEV_RECOVERABLE || sec_sev != GHES_SEV_RECOVERABLE)
3016 ++ return false;
3017 ++
3018 ++ p = (char *)(err + 1);
3019 ++ for (i = 0; i < err->err_info_num; i++) {
3020 ++ struct cper_arm_err_info *err_info = (struct cper_arm_err_info *)p;
3021 ++ bool is_cache = (err_info->type == CPER_ARM_CACHE_ERROR);
3022 ++ bool has_pa = (err_info->validation_bits & CPER_ARM_INFO_VALID_PHYSICAL_ADDR);
3023 ++ const char *error_type = "unknown error";
3024 ++
3025 ++ /*
3026 ++ * The field (err_info->error_info & BIT(26)) is fixed to set to
3027 ++ * 1 in some old firmware of HiSilicon Kunpeng920. We assume that
3028 ++ * firmware won't mix corrected errors in an uncorrected section,
3029 ++ * and don't filter out 'corrected' error here.
3030 ++ */
3031 ++ if (is_cache && has_pa) {
3032 ++ queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
3033 ++ p += err_info->length;
3034 ++ continue;
3035 ++ }
3036 ++
3037 ++ if (err_info->type < ARRAY_SIZE(cper_proc_error_type_strs))
3038 ++ error_type = cper_proc_error_type_strs[err_info->type];
3039 ++
3040 ++ pr_warn_ratelimited(FW_WARN GHES_PFX
3041 ++ "Unhandled processor error type: %s\n",
3042 ++ error_type);
3043 ++ p += err_info->length;
3044 ++ }
3045 ++
3046 ++ return queued;
3047 ++}
3048 ++
3049 + /*
3050 + * PCIe AER errors need to be sent to the AER driver for reporting and
3051 + * recovery. The GHES severities map to the following AER severities and
3052 +@@ -605,9 +654,7 @@ static bool ghes_do_proc(struct ghes *ghes,
3053 + ghes_handle_aer(gdata);
3054 + }
3055 + else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
3056 +- struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
3057 +-
3058 +- log_arm_hw_error(err);
3059 ++ queued = ghes_handle_arm_hw_error(gdata, sev);
3060 + } else {
3061 + void *err = acpi_hest_get_payload(gdata);
3062 +
3063 +diff --git a/drivers/acpi/bgrt.c b/drivers/acpi/bgrt.c
3064 +index 19bb7f870204c..e0d14017706ea 100644
3065 +--- a/drivers/acpi/bgrt.c
3066 ++++ b/drivers/acpi/bgrt.c
3067 +@@ -15,40 +15,19 @@
3068 + static void *bgrt_image;
3069 + static struct kobject *bgrt_kobj;
3070 +
3071 +-static ssize_t version_show(struct device *dev,
3072 +- struct device_attribute *attr, char *buf)
3073 +-{
3074 +- return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.version);
3075 +-}
3076 +-static DEVICE_ATTR_RO(version);
3077 +-
3078 +-static ssize_t status_show(struct device *dev,
3079 +- struct device_attribute *attr, char *buf)
3080 +-{
3081 +- return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.status);
3082 +-}
3083 +-static DEVICE_ATTR_RO(status);
3084 +-
3085 +-static ssize_t type_show(struct device *dev,
3086 +- struct device_attribute *attr, char *buf)
3087 +-{
3088 +- return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_type);
3089 +-}
3090 +-static DEVICE_ATTR_RO(type);
3091 +-
3092 +-static ssize_t xoffset_show(struct device *dev,
3093 +- struct device_attribute *attr, char *buf)
3094 +-{
3095 +- return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_offset_x);
3096 +-}
3097 +-static DEVICE_ATTR_RO(xoffset);
3098 +-
3099 +-static ssize_t yoffset_show(struct device *dev,
3100 +- struct device_attribute *attr, char *buf)
3101 +-{
3102 +- return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_offset_y);
3103 +-}
3104 +-static DEVICE_ATTR_RO(yoffset);
3105 ++#define BGRT_SHOW(_name, _member) \
3106 ++ static ssize_t _name##_show(struct kobject *kobj, \
3107 ++ struct kobj_attribute *attr, char *buf) \
3108 ++ { \
3109 ++ return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab._member); \
3110 ++ } \
3111 ++ struct kobj_attribute bgrt_attr_##_name = __ATTR_RO(_name)
3112 ++
3113 ++BGRT_SHOW(version, version);
3114 ++BGRT_SHOW(status, status);
3115 ++BGRT_SHOW(type, image_type);
3116 ++BGRT_SHOW(xoffset, image_offset_x);
3117 ++BGRT_SHOW(yoffset, image_offset_y);
3118 +
3119 + static ssize_t image_read(struct file *file, struct kobject *kobj,
3120 + struct bin_attribute *attr, char *buf, loff_t off, size_t count)
3121 +@@ -60,11 +39,11 @@ static ssize_t image_read(struct file *file, struct kobject *kobj,
3122 + static BIN_ATTR_RO(image, 0); /* size gets filled in later */
3123 +
3124 + static struct attribute *bgrt_attributes[] = {
3125 +- &dev_attr_version.attr,
3126 +- &dev_attr_status.attr,
3127 +- &dev_attr_type.attr,
3128 +- &dev_attr_xoffset.attr,
3129 +- &dev_attr_yoffset.attr,
3130 ++ &bgrt_attr_version.attr,
3131 ++ &bgrt_attr_status.attr,
3132 ++ &bgrt_attr_type.attr,
3133 ++ &bgrt_attr_xoffset.attr,
3134 ++ &bgrt_attr_yoffset.attr,
3135 + NULL,
3136 + };
3137 +
3138 +diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
3139 +index a4bd673934c0a..44b4f02e2c6d7 100644
3140 +--- a/drivers/acpi/bus.c
3141 ++++ b/drivers/acpi/bus.c
3142 +@@ -1321,6 +1321,7 @@ static int __init acpi_init(void)
3143 +
3144 + result = acpi_bus_init();
3145 + if (result) {
3146 ++ kobject_put(acpi_kobj);
3147 + disable_acpi();
3148 + return result;
3149 + }
3150 +diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
3151 +index 58876248b1921..a63dd10d9aa94 100644
3152 +--- a/drivers/acpi/device_pm.c
3153 ++++ b/drivers/acpi/device_pm.c
3154 +@@ -20,6 +20,7 @@
3155 + #include <linux/pm_runtime.h>
3156 + #include <linux/suspend.h>
3157 +
3158 ++#include "fan.h"
3159 + #include "internal.h"
3160 +
3161 + /**
3162 +@@ -1307,10 +1308,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
3163 + * with the generic ACPI PM domain.
3164 + */
3165 + static const struct acpi_device_id special_pm_ids[] = {
3166 +- {"PNP0C0B", }, /* Generic ACPI fan */
3167 +- {"INT3404", }, /* Fan */
3168 +- {"INTC1044", }, /* Fan for Tiger Lake generation */
3169 +- {"INTC1048", }, /* Fan for Alder Lake generation */
3170 ++ ACPI_FAN_DEVICE_IDS,
3171 + {}
3172 + };
3173 + struct acpi_device *adev = ACPI_COMPANION(dev);
3174 +diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
3175 +index da4ff2a8b06a0..fe8c7e79f4726 100644
3176 +--- a/drivers/acpi/device_sysfs.c
3177 ++++ b/drivers/acpi/device_sysfs.c
3178 +@@ -446,7 +446,7 @@ static ssize_t description_show(struct device *dev,
3179 + (wchar_t *)acpi_dev->pnp.str_obj->buffer.pointer,
3180 + acpi_dev->pnp.str_obj->buffer.length,
3181 + UTF16_LITTLE_ENDIAN, buf,
3182 +- PAGE_SIZE);
3183 ++ PAGE_SIZE - 1);
3184 +
3185 + buf[result++] = '\n';
3186 +
3187 +diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
3188 +index 13565629ce0a8..87c3b4a099b94 100644
3189 +--- a/drivers/acpi/ec.c
3190 ++++ b/drivers/acpi/ec.c
3191 +@@ -183,6 +183,7 @@ static struct workqueue_struct *ec_query_wq;
3192 +
3193 + static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
3194 + static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
3195 ++static int EC_FLAGS_TRUST_DSDT_GPE; /* Needs DSDT GPE as correction setting */
3196 + static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
3197 +
3198 + /* --------------------------------------------------------------------------
3199 +@@ -1593,7 +1594,8 @@ static int acpi_ec_add(struct acpi_device *device)
3200 + }
3201 +
3202 + if (boot_ec && ec->command_addr == boot_ec->command_addr &&
3203 +- ec->data_addr == boot_ec->data_addr) {
3204 ++ ec->data_addr == boot_ec->data_addr &&
3205 ++ !EC_FLAGS_TRUST_DSDT_GPE) {
3206 + /*
3207 + * Trust PNP0C09 namespace location rather than
3208 + * ECDT ID. But trust ECDT GPE rather than _GPE
3209 +@@ -1816,6 +1818,18 @@ static int ec_correct_ecdt(const struct dmi_system_id *id)
3210 + return 0;
3211 + }
3212 +
3213 ++/*
3214 ++ * Some ECDTs contain wrong GPE setting, but they share the same port addresses
3215 ++ * with DSDT EC, don't duplicate the DSDT EC with ECDT EC in this case.
3216 ++ * https://bugzilla.kernel.org/show_bug.cgi?id=209989
3217 ++ */
3218 ++static int ec_honor_dsdt_gpe(const struct dmi_system_id *id)
3219 ++{
3220 ++ pr_debug("Detected system needing DSDT GPE setting.\n");
3221 ++ EC_FLAGS_TRUST_DSDT_GPE = 1;
3222 ++ return 0;
3223 ++}
3224 ++
3225 + /*
3226 + * Some DSDTs contain wrong GPE setting.
3227 + * Asus FX502VD/VE, GL702VMK, X550VXK, X580VD
3228 +@@ -1846,6 +1860,22 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
3229 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3230 + DMI_MATCH(DMI_PRODUCT_NAME, "GL702VMK"),}, NULL},
3231 + {
3232 ++ ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BA", {
3233 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3234 ++ DMI_MATCH(DMI_PRODUCT_NAME, "X505BA"),}, NULL},
3235 ++ {
3236 ++ ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BP", {
3237 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3238 ++ DMI_MATCH(DMI_PRODUCT_NAME, "X505BP"),}, NULL},
3239 ++ {
3240 ++ ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BA", {
3241 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3242 ++ DMI_MATCH(DMI_PRODUCT_NAME, "X542BA"),}, NULL},
3243 ++ {
3244 ++ ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BP", {
3245 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3246 ++ DMI_MATCH(DMI_PRODUCT_NAME, "X542BP"),}, NULL},
3247 ++ {
3248 + ec_honor_ecdt_gpe, "ASUS X550VXK", {
3249 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3250 + DMI_MATCH(DMI_PRODUCT_NAME, "X550VXK"),}, NULL},
3251 +@@ -1854,6 +1884,11 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
3252 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3253 + DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
3254 + {
3255 ++ /* https://bugzilla.kernel.org/show_bug.cgi?id=209989 */
3256 ++ ec_honor_dsdt_gpe, "HP Pavilion Gaming Laptop 15-cx0xxx", {
3257 ++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
3258 ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Gaming Laptop 15-cx0xxx"),}, NULL},
3259 ++ {
3260 + ec_clear_on_resume, "Samsung hardware", {
3261 + DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
3262 + {},
3263 +diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
3264 +index 66c3983f0ccca..5cd0ceb50bc8a 100644
3265 +--- a/drivers/acpi/fan.c
3266 ++++ b/drivers/acpi/fan.c
3267 +@@ -16,6 +16,8 @@
3268 + #include <linux/platform_device.h>
3269 + #include <linux/sort.h>
3270 +
3271 ++#include "fan.h"
3272 ++
3273 + MODULE_AUTHOR("Paul Diefenbaugh");
3274 + MODULE_DESCRIPTION("ACPI Fan Driver");
3275 + MODULE_LICENSE("GPL");
3276 +@@ -24,10 +26,7 @@ static int acpi_fan_probe(struct platform_device *pdev);
3277 + static int acpi_fan_remove(struct platform_device *pdev);
3278 +
3279 + static const struct acpi_device_id fan_device_ids[] = {
3280 +- {"PNP0C0B", 0},
3281 +- {"INT3404", 0},
3282 +- {"INTC1044", 0},
3283 +- {"INTC1048", 0},
3284 ++ ACPI_FAN_DEVICE_IDS,
3285 + {"", 0},
3286 + };
3287 + MODULE_DEVICE_TABLE(acpi, fan_device_ids);
3288 +diff --git a/drivers/acpi/fan.h b/drivers/acpi/fan.h
3289 +new file mode 100644
3290 +index 0000000000000..dc9a6efa514b0
3291 +--- /dev/null
3292 ++++ b/drivers/acpi/fan.h
3293 +@@ -0,0 +1,13 @@
3294 ++/* SPDX-License-Identifier: GPL-2.0-only */
3295 ++
3296 ++/*
3297 ++ * ACPI fan device IDs are shared between the fan driver and the device power
3298 ++ * management code.
3299 ++ *
3300 ++ * Add new device IDs before the generic ACPI fan one.
3301 ++ */
3302 ++#define ACPI_FAN_DEVICE_IDS \
3303 ++ {"INT3404", }, /* Fan */ \
3304 ++ {"INTC1044", }, /* Fan for Tiger Lake generation */ \
3305 ++ {"INTC1048", }, /* Fan for Alder Lake generation */ \
3306 ++ {"PNP0C0B", } /* Generic ACPI fan */
3307 +diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
3308 +index 4e2d76b8b697e..6790df5a2462a 100644
3309 +--- a/drivers/acpi/processor_idle.c
3310 ++++ b/drivers/acpi/processor_idle.c
3311 +@@ -16,6 +16,7 @@
3312 + #include <linux/acpi.h>
3313 + #include <linux/dmi.h>
3314 + #include <linux/sched.h> /* need_resched() */
3315 ++#include <linux/sort.h>
3316 + #include <linux/tick.h>
3317 + #include <linux/cpuidle.h>
3318 + #include <linux/cpu.h>
3319 +@@ -388,10 +389,37 @@ static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
3320 + return;
3321 + }
3322 +
3323 ++static int acpi_cst_latency_cmp(const void *a, const void *b)
3324 ++{
3325 ++ const struct acpi_processor_cx *x = a, *y = b;
3326 ++
3327 ++ if (!(x->valid && y->valid))
3328 ++ return 0;
3329 ++ if (x->latency > y->latency)
3330 ++ return 1;
3331 ++ if (x->latency < y->latency)
3332 ++ return -1;
3333 ++ return 0;
3334 ++}
3335 ++static void acpi_cst_latency_swap(void *a, void *b, int n)
3336 ++{
3337 ++ struct acpi_processor_cx *x = a, *y = b;
3338 ++ u32 tmp;
3339 ++
3340 ++ if (!(x->valid && y->valid))
3341 ++ return;
3342 ++ tmp = x->latency;
3343 ++ x->latency = y->latency;
3344 ++ y->latency = tmp;
3345 ++}
3346 ++
3347 + static int acpi_processor_power_verify(struct acpi_processor *pr)
3348 + {
3349 + unsigned int i;
3350 + unsigned int working = 0;
3351 ++ unsigned int last_latency = 0;
3352 ++ unsigned int last_type = 0;
3353 ++ bool buggy_latency = false;
3354 +
3355 + pr->power.timer_broadcast_on_state = INT_MAX;
3356 +
3357 +@@ -415,12 +443,24 @@ static int acpi_processor_power_verify(struct acpi_processor *pr)
3358 + }
3359 + if (!cx->valid)
3360 + continue;
3361 ++ if (cx->type >= last_type && cx->latency < last_latency)
3362 ++ buggy_latency = true;
3363 ++ last_latency = cx->latency;
3364 ++ last_type = cx->type;
3365 +
3366 + lapic_timer_check_state(i, pr, cx);
3367 + tsc_check_state(cx->type);
3368 + working++;
3369 + }
3370 +
3371 ++ if (buggy_latency) {
3372 ++ pr_notice("FW issue: working around C-state latencies out of order\n");
3373 ++ sort(&pr->power.states[1], max_cstate,
3374 ++ sizeof(struct acpi_processor_cx),
3375 ++ acpi_cst_latency_cmp,
3376 ++ acpi_cst_latency_swap);
3377 ++ }
3378 ++
3379 + lapic_timer_propagate_broadcast(pr);
3380 +
3381 + return (working);
3382 +diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
3383 +index 20a7892c6d3fd..f0b2c37912531 100644
3384 +--- a/drivers/acpi/resource.c
3385 ++++ b/drivers/acpi/resource.c
3386 +@@ -423,6 +423,13 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
3387 + }
3388 + }
3389 +
3390 ++static bool irq_is_legacy(struct acpi_resource_irq *irq)
3391 ++{
3392 ++ return irq->triggering == ACPI_EDGE_SENSITIVE &&
3393 ++ irq->polarity == ACPI_ACTIVE_HIGH &&
3394 ++ irq->shareable == ACPI_EXCLUSIVE;
3395 ++}
3396 ++
3397 + /**
3398 + * acpi_dev_resource_interrupt - Extract ACPI interrupt resource information.
3399 + * @ares: Input ACPI resource object.
3400 +@@ -461,7 +468,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
3401 + }
3402 + acpi_dev_get_irqresource(res, irq->interrupts[index],
3403 + irq->triggering, irq->polarity,
3404 +- irq->shareable, true);
3405 ++ irq->shareable, irq_is_legacy(irq));
3406 + break;
3407 + case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
3408 + ext_irq = &ares->data.extended_irq;
3409 +diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
3410 +index 83cd4c95faf0d..33474fd969913 100644
3411 +--- a/drivers/acpi/video_detect.c
3412 ++++ b/drivers/acpi/video_detect.c
3413 +@@ -385,6 +385,30 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
3414 + DMI_MATCH(DMI_BOARD_NAME, "BA51_MV"),
3415 + },
3416 + },
3417 ++ {
3418 ++ .callback = video_detect_force_native,
3419 ++ .ident = "ASUSTeK COMPUTER INC. GA401",
3420 ++ .matches = {
3421 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3422 ++ DMI_MATCH(DMI_PRODUCT_NAME, "GA401"),
3423 ++ },
3424 ++ },
3425 ++ {
3426 ++ .callback = video_detect_force_native,
3427 ++ .ident = "ASUSTeK COMPUTER INC. GA502",
3428 ++ .matches = {
3429 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3430 ++ DMI_MATCH(DMI_PRODUCT_NAME, "GA502"),
3431 ++ },
3432 ++ },
3433 ++ {
3434 ++ .callback = video_detect_force_native,
3435 ++ .ident = "ASUSTeK COMPUTER INC. GA503",
3436 ++ .matches = {
3437 ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
3438 ++ DMI_MATCH(DMI_PRODUCT_NAME, "GA503"),
3439 ++ },
3440 ++ },
3441 +
3442 + /*
3443 + * Desktops which falsely report a backlight and which our heuristics
3444 +diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
3445 +index 2b69536cdccba..2d7ddb8a8cb65 100644
3446 +--- a/drivers/acpi/x86/s2idle.c
3447 ++++ b/drivers/acpi/x86/s2idle.c
3448 +@@ -42,6 +42,8 @@ static const struct acpi_device_id lps0_device_ids[] = {
3449 +
3450 + /* AMD */
3451 + #define ACPI_LPS0_DSM_UUID_AMD "e3f32452-febc-43ce-9039-932122d37721"
3452 ++#define ACPI_LPS0_ENTRY_AMD 2
3453 ++#define ACPI_LPS0_EXIT_AMD 3
3454 + #define ACPI_LPS0_SCREEN_OFF_AMD 4
3455 + #define ACPI_LPS0_SCREEN_ON_AMD 5
3456 +
3457 +@@ -408,6 +410,7 @@ int acpi_s2idle_prepare_late(void)
3458 +
3459 + if (acpi_s2idle_vendor_amd()) {
3460 + acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF_AMD);
3461 ++ acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY_AMD);
3462 + } else {
3463 + acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
3464 + acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
3465 +@@ -422,6 +425,7 @@ void acpi_s2idle_restore_early(void)
3466 + return;
3467 +
3468 + if (acpi_s2idle_vendor_amd()) {
3469 ++ acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT_AMD);
3470 + acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON_AMD);
3471 + } else {
3472 + acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
3473 +diff --git a/drivers/ata/pata_ep93xx.c b/drivers/ata/pata_ep93xx.c
3474 +index badab67088935..46208ececbb6a 100644
3475 +--- a/drivers/ata/pata_ep93xx.c
3476 ++++ b/drivers/ata/pata_ep93xx.c
3477 +@@ -928,7 +928,7 @@ static int ep93xx_pata_probe(struct platform_device *pdev)
3478 + /* INT[3] (IRQ_EP93XX_EXT3) line connected as pull down */
3479 + irq = platform_get_irq(pdev, 0);
3480 + if (irq < 0) {
3481 +- err = -ENXIO;
3482 ++ err = irq;
3483 + goto err_rel_gpio;
3484 + }
3485 +
3486 +diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
3487 +index bd87476ab4813..b5a3f710d76de 100644
3488 +--- a/drivers/ata/pata_octeon_cf.c
3489 ++++ b/drivers/ata/pata_octeon_cf.c
3490 +@@ -898,10 +898,11 @@ static int octeon_cf_probe(struct platform_device *pdev)
3491 + return -EINVAL;
3492 + }
3493 +
3494 +- irq_handler = octeon_cf_interrupt;
3495 + i = platform_get_irq(dma_dev, 0);
3496 +- if (i > 0)
3497 ++ if (i > 0) {
3498 + irq = i;
3499 ++ irq_handler = octeon_cf_interrupt;
3500 ++ }
3501 + }
3502 + of_node_put(dma_node);
3503 + }
3504 +diff --git a/drivers/ata/pata_rb532_cf.c b/drivers/ata/pata_rb532_cf.c
3505 +index 479c4b29b8562..303f8c375b3af 100644
3506 +--- a/drivers/ata/pata_rb532_cf.c
3507 ++++ b/drivers/ata/pata_rb532_cf.c
3508 +@@ -115,10 +115,12 @@ static int rb532_pata_driver_probe(struct platform_device *pdev)
3509 + }
3510 +
3511 + irq = platform_get_irq(pdev, 0);
3512 +- if (irq <= 0) {
3513 ++ if (irq < 0) {
3514 + dev_err(&pdev->dev, "no IRQ resource found\n");
3515 +- return -ENOENT;
3516 ++ return irq;
3517 + }
3518 ++ if (!irq)
3519 ++ return -EINVAL;
3520 +
3521 + gpiod = devm_gpiod_get(&pdev->dev, NULL, GPIOD_IN);
3522 + if (IS_ERR(gpiod)) {
3523 +diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
3524 +index 64b2ef15ec191..8440203e835ed 100644
3525 +--- a/drivers/ata/sata_highbank.c
3526 ++++ b/drivers/ata/sata_highbank.c
3527 +@@ -469,10 +469,12 @@ static int ahci_highbank_probe(struct platform_device *pdev)
3528 + }
3529 +
3530 + irq = platform_get_irq(pdev, 0);
3531 +- if (irq <= 0) {
3532 ++ if (irq < 0) {
3533 + dev_err(dev, "no irq\n");
3534 +- return -EINVAL;
3535 ++ return irq;
3536 + }
3537 ++ if (!irq)
3538 ++ return -EINVAL;
3539 +
3540 + hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
3541 + if (!hpriv) {
3542 +diff --git a/drivers/block/loop.c b/drivers/block/loop.c
3543 +index a370cde3ddd49..a9f9794892c4e 100644
3544 +--- a/drivers/block/loop.c
3545 ++++ b/drivers/block/loop.c
3546 +@@ -1153,6 +1153,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
3547 + blk_queue_physical_block_size(lo->lo_queue, bsize);
3548 + blk_queue_io_min(lo->lo_queue, bsize);
3549 +
3550 ++ loop_config_discard(lo);
3551 + loop_update_rotational(lo);
3552 + loop_update_dio(lo);
3553 + loop_sysfs_init(lo);
3554 +diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
3555 +index 25114f0d13199..bd71dfc9c9748 100644
3556 +--- a/drivers/bluetooth/btqca.c
3557 ++++ b/drivers/bluetooth/btqca.c
3558 +@@ -183,7 +183,7 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
3559 + EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
3560 +
3561 + static void qca_tlv_check_data(struct qca_fw_config *config,
3562 +- const struct firmware *fw, enum qca_btsoc_type soc_type)
3563 ++ u8 *fw_data, enum qca_btsoc_type soc_type)
3564 + {
3565 + const u8 *data;
3566 + u32 type_len;
3567 +@@ -194,7 +194,7 @@ static void qca_tlv_check_data(struct qca_fw_config *config,
3568 + struct tlv_type_nvm *tlv_nvm;
3569 + uint8_t nvm_baud_rate = config->user_baud_rate;
3570 +
3571 +- tlv = (struct tlv_type_hdr *)fw->data;
3572 ++ tlv = (struct tlv_type_hdr *)fw_data;
3573 +
3574 + type_len = le32_to_cpu(tlv->type_len);
3575 + length = (type_len >> 8) & 0x00ffffff;
3576 +@@ -390,8 +390,9 @@ static int qca_download_firmware(struct hci_dev *hdev,
3577 + enum qca_btsoc_type soc_type)
3578 + {
3579 + const struct firmware *fw;
3580 ++ u8 *data;
3581 + const u8 *segment;
3582 +- int ret, remain, i = 0;
3583 ++ int ret, size, remain, i = 0;
3584 +
3585 + bt_dev_info(hdev, "QCA Downloading %s", config->fwname);
3586 +
3587 +@@ -402,10 +403,22 @@ static int qca_download_firmware(struct hci_dev *hdev,
3588 + return ret;
3589 + }
3590 +
3591 +- qca_tlv_check_data(config, fw, soc_type);
3592 ++ size = fw->size;
3593 ++ data = vmalloc(fw->size);
3594 ++ if (!data) {
3595 ++ bt_dev_err(hdev, "QCA Failed to allocate memory for file: %s",
3596 ++ config->fwname);
3597 ++ release_firmware(fw);
3598 ++ return -ENOMEM;
3599 ++ }
3600 ++
3601 ++ memcpy(data, fw->data, size);
3602 ++ release_firmware(fw);
3603 ++
3604 ++ qca_tlv_check_data(config, data, soc_type);
3605 +
3606 +- segment = fw->data;
3607 +- remain = fw->size;
3608 ++ segment = data;
3609 ++ remain = size;
3610 + while (remain > 0) {
3611 + int segsize = min(MAX_SIZE_PER_TLV_SEGMENT, remain);
3612 +
3613 +@@ -435,7 +448,7 @@ static int qca_download_firmware(struct hci_dev *hdev,
3614 + ret = qca_inject_cmd_complete_event(hdev);
3615 +
3616 + out:
3617 +- release_firmware(fw);
3618 ++ vfree(data);
3619 +
3620 + return ret;
3621 + }
3622 +diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
3623 +index de36af63e1825..9589ef6c0c264 100644
3624 +--- a/drivers/bluetooth/hci_qca.c
3625 ++++ b/drivers/bluetooth/hci_qca.c
3626 +@@ -1820,8 +1820,6 @@ static void qca_power_shutdown(struct hci_uart *hu)
3627 + unsigned long flags;
3628 + enum qca_btsoc_type soc_type = qca_soc_type(hu);
3629 +
3630 +- qcadev = serdev_device_get_drvdata(hu->serdev);
3631 +-
3632 + /* From this point we go into power off state. But serial port is
3633 + * still open, stop queueing the IBS data and flush all the buffered
3634 + * data in skb's.
3635 +@@ -1837,6 +1835,8 @@ static void qca_power_shutdown(struct hci_uart *hu)
3636 + if (!hu->serdev)
3637 + return;
3638 +
3639 ++ qcadev = serdev_device_get_drvdata(hu->serdev);
3640 ++
3641 + if (qca_is_wcn399x(soc_type)) {
3642 + host_set_baudrate(hu, 2400);
3643 + qca_send_power_pulse(hu, false);
3644 +diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
3645 +index 87d3b73bcaded..481b3f87ed739 100644
3646 +--- a/drivers/bus/mhi/core/pm.c
3647 ++++ b/drivers/bus/mhi/core/pm.c
3648 +@@ -911,6 +911,7 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
3649 +
3650 + ret = wait_event_timeout(mhi_cntrl->state_event,
3651 + mhi_cntrl->dev_state == MHI_STATE_M0 ||
3652 ++ mhi_cntrl->dev_state == MHI_STATE_M2 ||
3653 + MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
3654 + msecs_to_jiffies(mhi_cntrl->timeout_ms));
3655 +
3656 +diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
3657 +index 3e50a9fc4d822..31c64eb8a201d 100644
3658 +--- a/drivers/bus/mhi/pci_generic.c
3659 ++++ b/drivers/bus/mhi/pci_generic.c
3660 +@@ -470,7 +470,7 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
3661 +
3662 + err = mhi_register_controller(mhi_cntrl, mhi_cntrl_config);
3663 + if (err)
3664 +- return err;
3665 ++ goto err_disable_reporting;
3666 +
3667 + /* MHI bus does not power up the controller by default */
3668 + err = mhi_prepare_for_power_up(mhi_cntrl);
3669 +@@ -496,6 +496,8 @@ err_unprepare:
3670 + mhi_unprepare_after_power_down(mhi_cntrl);
3671 + err_unregister:
3672 + mhi_unregister_controller(mhi_cntrl);
3673 ++err_disable_reporting:
3674 ++ pci_disable_pcie_error_reporting(pdev);
3675 +
3676 + return err;
3677 + }
3678 +@@ -514,6 +516,7 @@ static void mhi_pci_remove(struct pci_dev *pdev)
3679 + }
3680 +
3681 + mhi_unregister_controller(mhi_cntrl);
3682 ++ pci_disable_pcie_error_reporting(pdev);
3683 + }
3684 +
3685 + static void mhi_pci_shutdown(struct pci_dev *pdev)
3686 +diff --git a/drivers/char/hw_random/exynos-trng.c b/drivers/char/hw_random/exynos-trng.c
3687 +index 8e1fe3f8dd2df..c8db62bc5ff72 100644
3688 +--- a/drivers/char/hw_random/exynos-trng.c
3689 ++++ b/drivers/char/hw_random/exynos-trng.c
3690 +@@ -132,7 +132,7 @@ static int exynos_trng_probe(struct platform_device *pdev)
3691 + return PTR_ERR(trng->mem);
3692 +
3693 + pm_runtime_enable(&pdev->dev);
3694 +- ret = pm_runtime_get_sync(&pdev->dev);
3695 ++ ret = pm_runtime_resume_and_get(&pdev->dev);
3696 + if (ret < 0) {
3697 + dev_err(&pdev->dev, "Could not get runtime PM.\n");
3698 + goto err_pm_get;
3699 +@@ -165,7 +165,7 @@ err_register:
3700 + clk_disable_unprepare(trng->clk);
3701 +
3702 + err_clock:
3703 +- pm_runtime_put_sync(&pdev->dev);
3704 ++ pm_runtime_put_noidle(&pdev->dev);
3705 +
3706 + err_pm_get:
3707 + pm_runtime_disable(&pdev->dev);
3708 +diff --git a/drivers/char/pcmcia/cm4000_cs.c b/drivers/char/pcmcia/cm4000_cs.c
3709 +index 89681f07bc787..9468e9520cee0 100644
3710 +--- a/drivers/char/pcmcia/cm4000_cs.c
3711 ++++ b/drivers/char/pcmcia/cm4000_cs.c
3712 +@@ -544,6 +544,10 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
3713 + io_read_num_rec_bytes(iobase, &num_bytes_read);
3714 + if (num_bytes_read >= 4) {
3715 + DEBUGP(2, dev, "NumRecBytes = %i\n", num_bytes_read);
3716 ++ if (num_bytes_read > 4) {
3717 ++ rc = -EIO;
3718 ++ goto exit_setprotocol;
3719 ++ }
3720 + break;
3721 + }
3722 + usleep_range(10000, 11000);
3723 +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
3724 +index 55b9d3965ae1b..69579efb247b3 100644
3725 +--- a/drivers/char/tpm/tpm_tis_core.c
3726 ++++ b/drivers/char/tpm/tpm_tis_core.c
3727 +@@ -196,13 +196,24 @@ static u8 tpm_tis_status(struct tpm_chip *chip)
3728 + return 0;
3729 +
3730 + if (unlikely((status & TPM_STS_READ_ZERO) != 0)) {
3731 +- /*
3732 +- * If this trips, the chances are the read is
3733 +- * returning 0xff because the locality hasn't been
3734 +- * acquired. Usually because tpm_try_get_ops() hasn't
3735 +- * been called before doing a TPM operation.
3736 +- */
3737 +- WARN_ONCE(1, "TPM returned invalid status\n");
3738 ++ if (!test_and_set_bit(TPM_TIS_INVALID_STATUS, &priv->flags)) {
3739 ++ /*
3740 ++ * If this trips, the chances are the read is
3741 ++ * returning 0xff because the locality hasn't been
3742 ++ * acquired. Usually because tpm_try_get_ops() hasn't
3743 ++ * been called before doing a TPM operation.
3744 ++ */
3745 ++ dev_err(&chip->dev, "invalid TPM_STS.x 0x%02x, dumping stack for forensics\n",
3746 ++ status);
3747 ++
3748 ++ /*
3749 ++ * Dump stack for forensics, as invalid TPM_STS.x could be
3750 ++ * potentially triggered by impaired tpm_try_get_ops() or
3751 ++ * tpm_find_get_ops().
3752 ++ */
3753 ++ dump_stack();
3754 ++ }
3755 ++
3756 + return 0;
3757 + }
3758 +
3759 +diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
3760 +index 9b2d32a59f670..b2a3c6c72882d 100644
3761 +--- a/drivers/char/tpm/tpm_tis_core.h
3762 ++++ b/drivers/char/tpm/tpm_tis_core.h
3763 +@@ -83,6 +83,7 @@ enum tis_defaults {
3764 +
3765 + enum tpm_tis_flags {
3766 + TPM_TIS_ITPM_WORKAROUND = BIT(0),
3767 ++ TPM_TIS_INVALID_STATUS = BIT(1),
3768 + };
3769 +
3770 + struct tpm_tis_data {
3771 +@@ -90,7 +91,7 @@ struct tpm_tis_data {
3772 + int locality;
3773 + int irq;
3774 + bool irq_tested;
3775 +- unsigned int flags;
3776 ++ unsigned long flags;
3777 + void __iomem *ilb_base_addr;
3778 + u16 clkrun_enabled;
3779 + wait_queue_head_t int_queue;
3780 +diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
3781 +index 3856f6ebcb34f..de4209003a448 100644
3782 +--- a/drivers/char/tpm/tpm_tis_spi_main.c
3783 ++++ b/drivers/char/tpm/tpm_tis_spi_main.c
3784 +@@ -260,6 +260,8 @@ static int tpm_tis_spi_remove(struct spi_device *dev)
3785 + }
3786 +
3787 + static const struct spi_device_id tpm_tis_spi_id[] = {
3788 ++ { "st33htpm-spi", (unsigned long)tpm_tis_spi_probe },
3789 ++ { "slb9670", (unsigned long)tpm_tis_spi_probe },
3790 + { "tpm_tis_spi", (unsigned long)tpm_tis_spi_probe },
3791 + { "cr50", (unsigned long)cr50_spi_probe },
3792 + {}
3793 +diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c
3794 +index 61bb224f63309..cbeb51c804eb5 100644
3795 +--- a/drivers/clk/actions/owl-s500.c
3796 ++++ b/drivers/clk/actions/owl-s500.c
3797 +@@ -127,8 +127,7 @@ static struct clk_factor_table sd_factor_table[] = {
3798 + { 12, 1, 13 }, { 13, 1, 14 }, { 14, 1, 15 }, { 15, 1, 16 },
3799 + { 16, 1, 17 }, { 17, 1, 18 }, { 18, 1, 19 }, { 19, 1, 20 },
3800 + { 20, 1, 21 }, { 21, 1, 22 }, { 22, 1, 23 }, { 23, 1, 24 },
3801 +- { 24, 1, 25 }, { 25, 1, 26 }, { 26, 1, 27 }, { 27, 1, 28 },
3802 +- { 28, 1, 29 }, { 29, 1, 30 }, { 30, 1, 31 }, { 31, 1, 32 },
3803 ++ { 24, 1, 25 },
3804 +
3805 + /* bit8: /128 */
3806 + { 256, 1, 1 * 128 }, { 257, 1, 2 * 128 }, { 258, 1, 3 * 128 }, { 259, 1, 4 * 128 },
3807 +@@ -137,19 +136,20 @@ static struct clk_factor_table sd_factor_table[] = {
3808 + { 268, 1, 13 * 128 }, { 269, 1, 14 * 128 }, { 270, 1, 15 * 128 }, { 271, 1, 16 * 128 },
3809 + { 272, 1, 17 * 128 }, { 273, 1, 18 * 128 }, { 274, 1, 19 * 128 }, { 275, 1, 20 * 128 },
3810 + { 276, 1, 21 * 128 }, { 277, 1, 22 * 128 }, { 278, 1, 23 * 128 }, { 279, 1, 24 * 128 },
3811 +- { 280, 1, 25 * 128 }, { 281, 1, 26 * 128 }, { 282, 1, 27 * 128 }, { 283, 1, 28 * 128 },
3812 +- { 284, 1, 29 * 128 }, { 285, 1, 30 * 128 }, { 286, 1, 31 * 128 }, { 287, 1, 32 * 128 },
3813 ++ { 280, 1, 25 * 128 },
3814 + { 0, 0, 0 },
3815 + };
3816 +
3817 +-static struct clk_factor_table bisp_factor_table[] = {
3818 +- { 0, 1, 1 }, { 1, 1, 2 }, { 2, 1, 3 }, { 3, 1, 4 },
3819 +- { 4, 1, 5 }, { 5, 1, 6 }, { 6, 1, 7 }, { 7, 1, 8 },
3820 ++static struct clk_factor_table de_factor_table[] = {
3821 ++ { 0, 1, 1 }, { 1, 2, 3 }, { 2, 1, 2 }, { 3, 2, 5 },
3822 ++ { 4, 1, 3 }, { 5, 1, 4 }, { 6, 1, 6 }, { 7, 1, 8 },
3823 ++ { 8, 1, 12 },
3824 + { 0, 0, 0 },
3825 + };
3826 +
3827 +-static struct clk_factor_table ahb_factor_table[] = {
3828 +- { 1, 1, 2 }, { 2, 1, 3 },
3829 ++static struct clk_factor_table hde_factor_table[] = {
3830 ++ { 0, 1, 1 }, { 1, 2, 3 }, { 2, 1, 2 }, { 3, 2, 5 },
3831 ++ { 4, 1, 3 }, { 5, 1, 4 }, { 6, 1, 6 }, { 7, 1, 8 },
3832 + { 0, 0, 0 },
3833 + };
3834 +
3835 +@@ -158,6 +158,13 @@ static struct clk_div_table rmii_ref_div_table[] = {
3836 + { 0, 0 },
3837 + };
3838 +
3839 ++static struct clk_div_table std12rate_div_table[] = {
3840 ++ { 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 4 },
3841 ++ { 4, 5 }, { 5, 6 }, { 6, 7 }, { 7, 8 },
3842 ++ { 8, 9 }, { 9, 10 }, { 10, 11 }, { 11, 12 },
3843 ++ { 0, 0 },
3844 ++};
3845 ++
3846 + static struct clk_div_table i2s_div_table[] = {
3847 + { 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 4 },
3848 + { 4, 6 }, { 5, 8 }, { 6, 12 }, { 7, 16 },
3849 +@@ -174,7 +181,6 @@ static struct clk_div_table nand_div_table[] = {
3850 +
3851 + /* mux clock */
3852 + static OWL_MUX(dev_clk, "dev_clk", dev_clk_mux_p, CMU_DEVPLL, 12, 1, CLK_SET_RATE_PARENT);
3853 +-static OWL_MUX(ahbprediv_clk, "ahbprediv_clk", ahbprediv_clk_mux_p, CMU_BUSCLK1, 8, 3, CLK_SET_RATE_PARENT);
3854 +
3855 + /* gate clocks */
3856 + static OWL_GATE(gpio_clk, "gpio_clk", "apb_clk", CMU_DEVCLKEN0, 18, 0, 0);
3857 +@@ -187,45 +193,54 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0);
3858 + static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0);
3859 +
3860 + /* divider clocks */
3861 +-static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
3862 ++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 2, 2, NULL, 0, 0);
3863 + static OWL_DIVIDER(apb_clk, "apb_clk", "ahb_clk", CMU_BUSCLK1, 14, 2, NULL, 0, 0);
3864 + static OWL_DIVIDER(rmii_ref_clk, "rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0);
3865 +
3866 + /* factor clocks */
3867 +-static OWL_FACTOR(ahb_clk, "ahb_clk", "h_clk", CMU_BUSCLK1, 2, 2, ahb_factor_table, 0, 0);
3868 +-static OWL_FACTOR(de1_clk, "de_clk1", "de_clk", CMU_DECLK, 0, 3, bisp_factor_table, 0, 0);
3869 +-static OWL_FACTOR(de2_clk, "de_clk2", "de_clk", CMU_DECLK, 4, 3, bisp_factor_table, 0, 0);
3870 ++static OWL_FACTOR(de1_clk, "de_clk1", "de_clk", CMU_DECLK, 0, 4, de_factor_table, 0, 0);
3871 ++static OWL_FACTOR(de2_clk, "de_clk2", "de_clk", CMU_DECLK, 4, 4, de_factor_table, 0, 0);
3872 +
3873 + /* composite clocks */
3874 ++static OWL_COMP_DIV(ahbprediv_clk, "ahbprediv_clk", ahbprediv_clk_mux_p,
3875 ++ OWL_MUX_HW(CMU_BUSCLK1, 8, 3),
3876 ++ { 0 },
3877 ++ OWL_DIVIDER_HW(CMU_BUSCLK1, 12, 2, 0, NULL),
3878 ++ CLK_SET_RATE_PARENT);
3879 ++
3880 ++static OWL_COMP_FIXED_FACTOR(ahb_clk, "ahb_clk", "h_clk",
3881 ++ { 0 },
3882 ++ 1, 1, 0);
3883 ++
3884 + static OWL_COMP_FACTOR(vce_clk, "vce_clk", hde_clk_mux_p,
3885 + OWL_MUX_HW(CMU_VCECLK, 4, 2),
3886 + OWL_GATE_HW(CMU_DEVCLKEN0, 26, 0),
3887 +- OWL_FACTOR_HW(CMU_VCECLK, 0, 3, 0, bisp_factor_table),
3888 ++ OWL_FACTOR_HW(CMU_VCECLK, 0, 3, 0, hde_factor_table),
3889 + 0);
3890 +
3891 + static OWL_COMP_FACTOR(vde_clk, "vde_clk", hde_clk_mux_p,
3892 + OWL_MUX_HW(CMU_VDECLK, 4, 2),
3893 + OWL_GATE_HW(CMU_DEVCLKEN0, 25, 0),
3894 +- OWL_FACTOR_HW(CMU_VDECLK, 0, 3, 0, bisp_factor_table),
3895 ++ OWL_FACTOR_HW(CMU_VDECLK, 0, 3, 0, hde_factor_table),
3896 + 0);
3897 +
3898 +-static OWL_COMP_FACTOR(bisp_clk, "bisp_clk", bisp_clk_mux_p,
3899 ++static OWL_COMP_DIV(bisp_clk, "bisp_clk", bisp_clk_mux_p,
3900 + OWL_MUX_HW(CMU_BISPCLK, 4, 1),
3901 + OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
3902 +- OWL_FACTOR_HW(CMU_BISPCLK, 0, 3, 0, bisp_factor_table),
3903 ++ OWL_DIVIDER_HW(CMU_BISPCLK, 0, 4, 0, std12rate_div_table),
3904 + 0);
3905 +
3906 +-static OWL_COMP_FACTOR(sensor0_clk, "sensor0_clk", sensor_clk_mux_p,
3907 ++static OWL_COMP_DIV(sensor0_clk, "sensor0_clk", sensor_clk_mux_p,
3908 + OWL_MUX_HW(CMU_SENSORCLK, 4, 1),
3909 + OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
3910 +- OWL_FACTOR_HW(CMU_SENSORCLK, 0, 3, 0, bisp_factor_table),
3911 +- CLK_IGNORE_UNUSED);
3912 ++ OWL_DIVIDER_HW(CMU_SENSORCLK, 0, 4, 0, std12rate_div_table),
3913 ++ 0);
3914 +
3915 +-static OWL_COMP_FACTOR(sensor1_clk, "sensor1_clk", sensor_clk_mux_p,
3916 ++static OWL_COMP_DIV(sensor1_clk, "sensor1_clk", sensor_clk_mux_p,
3917 + OWL_MUX_HW(CMU_SENSORCLK, 4, 1),
3918 + OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
3919 +- OWL_FACTOR_HW(CMU_SENSORCLK, 8, 3, 0, bisp_factor_table),
3920 +- CLK_IGNORE_UNUSED);
3921 ++ OWL_DIVIDER_HW(CMU_SENSORCLK, 8, 4, 0, std12rate_div_table),
3922 ++ 0);
3923 +
3924 + static OWL_COMP_FACTOR(sd0_clk, "sd0_clk", sd_clk_mux_p,
3925 + OWL_MUX_HW(CMU_SD0CLK, 9, 1),
3926 +@@ -305,7 +320,7 @@ static OWL_COMP_FIXED_FACTOR(i2c3_clk, "i2c3_clk", "ethernet_pll_clk",
3927 + static OWL_COMP_DIV(uart0_clk, "uart0_clk", uart_clk_mux_p,
3928 + OWL_MUX_HW(CMU_UART0CLK, 16, 1),
3929 + OWL_GATE_HW(CMU_DEVCLKEN1, 6, 0),
3930 +- OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3931 ++ OWL_DIVIDER_HW(CMU_UART0CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3932 + CLK_IGNORE_UNUSED);
3933 +
3934 + static OWL_COMP_DIV(uart1_clk, "uart1_clk", uart_clk_mux_p,
3935 +@@ -317,31 +332,31 @@ static OWL_COMP_DIV(uart1_clk, "uart1_clk", uart_clk_mux_p,
3936 + static OWL_COMP_DIV(uart2_clk, "uart2_clk", uart_clk_mux_p,
3937 + OWL_MUX_HW(CMU_UART2CLK, 16, 1),
3938 + OWL_GATE_HW(CMU_DEVCLKEN1, 8, 0),
3939 +- OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3940 ++ OWL_DIVIDER_HW(CMU_UART2CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3941 + CLK_IGNORE_UNUSED);
3942 +
3943 + static OWL_COMP_DIV(uart3_clk, "uart3_clk", uart_clk_mux_p,
3944 + OWL_MUX_HW(CMU_UART3CLK, 16, 1),
3945 + OWL_GATE_HW(CMU_DEVCLKEN1, 19, 0),
3946 +- OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3947 ++ OWL_DIVIDER_HW(CMU_UART3CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3948 + CLK_IGNORE_UNUSED);
3949 +
3950 + static OWL_COMP_DIV(uart4_clk, "uart4_clk", uart_clk_mux_p,
3951 + OWL_MUX_HW(CMU_UART4CLK, 16, 1),
3952 + OWL_GATE_HW(CMU_DEVCLKEN1, 20, 0),
3953 +- OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3954 ++ OWL_DIVIDER_HW(CMU_UART4CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3955 + CLK_IGNORE_UNUSED);
3956 +
3957 + static OWL_COMP_DIV(uart5_clk, "uart5_clk", uart_clk_mux_p,
3958 + OWL_MUX_HW(CMU_UART5CLK, 16, 1),
3959 + OWL_GATE_HW(CMU_DEVCLKEN1, 21, 0),
3960 +- OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3961 ++ OWL_DIVIDER_HW(CMU_UART5CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3962 + CLK_IGNORE_UNUSED);
3963 +
3964 + static OWL_COMP_DIV(uart6_clk, "uart6_clk", uart_clk_mux_p,
3965 + OWL_MUX_HW(CMU_UART6CLK, 16, 1),
3966 + OWL_GATE_HW(CMU_DEVCLKEN1, 18, 0),
3967 +- OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3968 ++ OWL_DIVIDER_HW(CMU_UART6CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
3969 + CLK_IGNORE_UNUSED);
3970 +
3971 + static OWL_COMP_DIV(i2srx_clk, "i2srx_clk", i2s_clk_mux_p,
3972 +diff --git a/drivers/clk/clk-k210.c b/drivers/clk/clk-k210.c
3973 +index 6c84abf5b2e36..67a7cb3503c36 100644
3974 +--- a/drivers/clk/clk-k210.c
3975 ++++ b/drivers/clk/clk-k210.c
3976 +@@ -722,6 +722,7 @@ static int k210_clk_set_parent(struct clk_hw *hw, u8 index)
3977 + reg |= BIT(cfg->mux_bit);
3978 + else
3979 + reg &= ~BIT(cfg->mux_bit);
3980 ++ writel(reg, ksc->regs + cfg->mux_reg);
3981 + spin_unlock_irqrestore(&ksc->clk_lock, flags);
3982 +
3983 + return 0;
3984 +diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
3985 +index e0446e66fa645..eb22f4fdbc6b4 100644
3986 +--- a/drivers/clk/clk-si5341.c
3987 ++++ b/drivers/clk/clk-si5341.c
3988 +@@ -92,12 +92,22 @@ struct clk_si5341_output_config {
3989 + #define SI5341_PN_BASE 0x0002
3990 + #define SI5341_DEVICE_REV 0x0005
3991 + #define SI5341_STATUS 0x000C
3992 ++#define SI5341_LOS 0x000D
3993 ++#define SI5341_STATUS_STICKY 0x0011
3994 ++#define SI5341_LOS_STICKY 0x0012
3995 + #define SI5341_SOFT_RST 0x001C
3996 + #define SI5341_IN_SEL 0x0021
3997 ++#define SI5341_DEVICE_READY 0x00FE
3998 + #define SI5341_XAXB_CFG 0x090E
3999 + #define SI5341_IN_EN 0x0949
4000 + #define SI5341_INX_TO_PFD_EN 0x094A
4001 +
4002 ++/* Status bits */
4003 ++#define SI5341_STATUS_SYSINCAL BIT(0)
4004 ++#define SI5341_STATUS_LOSXAXB BIT(1)
4005 ++#define SI5341_STATUS_LOSREF BIT(2)
4006 ++#define SI5341_STATUS_LOL BIT(3)
4007 ++
4008 + /* Input selection */
4009 + #define SI5341_IN_SEL_MASK 0x06
4010 + #define SI5341_IN_SEL_SHIFT 1
4011 +@@ -340,6 +350,8 @@ static const struct si5341_reg_default si5341_reg_defaults[] = {
4012 + { 0x094A, 0x00 }, /* INx_TO_PFD_EN (disabled) */
4013 + { 0x0A02, 0x00 }, /* Not in datasheet */
4014 + { 0x0B44, 0x0F }, /* PDIV_ENB (datasheet does not mention what it is) */
4015 ++ { 0x0B57, 0x10 }, /* VCO_RESET_CALCODE (not described in datasheet) */
4016 ++ { 0x0B58, 0x05 }, /* VCO_RESET_CALCODE (not described in datasheet) */
4017 + };
4018 +
4019 + /* Read and interpret a 44-bit followed by a 32-bit value in the regmap */
4020 +@@ -623,6 +635,9 @@ static unsigned long si5341_synth_clk_recalc_rate(struct clk_hw *hw,
4021 + SI5341_SYNTH_N_NUM(synth->index), &n_num, &n_den);
4022 + if (err < 0)
4023 + return err;
4024 ++ /* Check for bogus/uninitialized settings */
4025 ++ if (!n_num || !n_den)
4026 ++ return 0;
4027 +
4028 + /*
4029 + * n_num and n_den are shifted left as much as possible, so to prevent
4030 +@@ -806,6 +821,9 @@ static long si5341_output_clk_round_rate(struct clk_hw *hw, unsigned long rate,
4031 + {
4032 + unsigned long r;
4033 +
4034 ++ if (!rate)
4035 ++ return 0;
4036 ++
4037 + r = *parent_rate >> 1;
4038 +
4039 + /* If rate is an even divisor, no changes to parent required */
4040 +@@ -834,11 +852,16 @@ static int si5341_output_clk_set_rate(struct clk_hw *hw, unsigned long rate,
4041 + unsigned long parent_rate)
4042 + {
4043 + struct clk_si5341_output *output = to_clk_si5341_output(hw);
4044 +- /* Frequency divider is (r_div + 1) * 2 */
4045 +- u32 r_div = (parent_rate / rate) >> 1;
4046 ++ u32 r_div;
4047 + int err;
4048 + u8 r[3];
4049 +
4050 ++ if (!rate)
4051 ++ return -EINVAL;
4052 ++
4053 ++ /* Frequency divider is (r_div + 1) * 2 */
4054 ++ r_div = (parent_rate / rate) >> 1;
4055 ++
4056 + if (r_div <= 1)
4057 + r_div = 0;
4058 + else if (r_div >= BIT(24))
4059 +@@ -1083,7 +1106,7 @@ static const struct si5341_reg_default si5341_preamble[] = {
4060 + { 0x0B25, 0x00 },
4061 + { 0x0502, 0x01 },
4062 + { 0x0505, 0x03 },
4063 +- { 0x0957, 0x1F },
4064 ++ { 0x0957, 0x17 },
4065 + { 0x0B4E, 0x1A },
4066 + };
4067 +
4068 +@@ -1189,6 +1212,32 @@ static const struct regmap_range_cfg si5341_regmap_ranges[] = {
4069 + },
4070 + };
4071 +
4072 ++static int si5341_wait_device_ready(struct i2c_client *client)
4073 ++{
4074 ++ int count;
4075 ++
4076 ++ /* Datasheet warns: Any attempt to read or write any register other
4077 ++ * than DEVICE_READY before DEVICE_READY reads as 0x0F may corrupt the
4078 ++ * NVM programming and may corrupt the register contents, as they are
4079 ++ * read from NVM. Note that this includes accesses to the PAGE register.
4080 ++ * Also: DEVICE_READY is available on every register page, so no page
4081 ++ * change is needed to read it.
4082 ++ * Do this outside regmap to avoid automatic PAGE register access.
4083 ++ * May take up to 300ms to complete.
4084 ++ */
4085 ++ for (count = 0; count < 15; ++count) {
4086 ++ s32 result = i2c_smbus_read_byte_data(client,
4087 ++ SI5341_DEVICE_READY);
4088 ++ if (result < 0)
4089 ++ return result;
4090 ++ if (result == 0x0F)
4091 ++ return 0;
4092 ++ msleep(20);
4093 ++ }
4094 ++ dev_err(&client->dev, "timeout waiting for DEVICE_READY\n");
4095 ++ return -EIO;
4096 ++}
4097 ++
4098 + static const struct regmap_config si5341_regmap_config = {
4099 + .reg_bits = 8,
4100 + .val_bits = 8,
4101 +@@ -1378,6 +1427,7 @@ static int si5341_probe(struct i2c_client *client,
4102 + unsigned int i;
4103 + struct clk_si5341_output_config config[SI5341_MAX_NUM_OUTPUTS];
4104 + bool initialization_required;
4105 ++ u32 status;
4106 +
4107 + data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL);
4108 + if (!data)
4109 +@@ -1385,6 +1435,11 @@ static int si5341_probe(struct i2c_client *client,
4110 +
4111 + data->i2c_client = client;
4112 +
4113 ++ /* Must be done before otherwise touching hardware */
4114 ++ err = si5341_wait_device_ready(client);
4115 ++ if (err)
4116 ++ return err;
4117 ++
4118 + for (i = 0; i < SI5341_NUM_INPUTS; ++i) {
4119 + input = devm_clk_get(&client->dev, si5341_input_clock_names[i]);
4120 + if (IS_ERR(input)) {
4121 +@@ -1540,6 +1595,22 @@ static int si5341_probe(struct i2c_client *client,
4122 + return err;
4123 + }
4124 +
4125 ++ /* wait for device to report input clock present and PLL lock */
4126 ++ err = regmap_read_poll_timeout(data->regmap, SI5341_STATUS, status,
4127 ++ !(status & (SI5341_STATUS_LOSREF | SI5341_STATUS_LOL)),
4128 ++ 10000, 250000);
4129 ++ if (err) {
4130 ++ dev_err(&client->dev, "Error waiting for input clock or PLL lock\n");
4131 ++ return err;
4132 ++ }
4133 ++
4134 ++ /* clear sticky alarm bits from initialization */
4135 ++ err = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
4136 ++ if (err) {
4137 ++ dev_err(&client->dev, "unable to clear sticky status\n");
4138 ++ return err;
4139 ++ }
4140 ++
4141 + /* Free the names, clk framework makes copies */
4142 + for (i = 0; i < data->num_synth; ++i)
4143 + devm_kfree(&client->dev, (void *)synth_clock_names[i]);
4144 +diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
4145 +index 344cd6c611887..3c737742c2a92 100644
4146 +--- a/drivers/clk/clk-versaclock5.c
4147 ++++ b/drivers/clk/clk-versaclock5.c
4148 +@@ -69,7 +69,10 @@
4149 + #define VC5_FEEDBACK_FRAC_DIV(n) (0x19 + (n))
4150 + #define VC5_RC_CONTROL0 0x1e
4151 + #define VC5_RC_CONTROL1 0x1f
4152 +-/* Register 0x20 is factory reserved */
4153 ++
4154 ++/* These registers are named "Unused Factory Reserved Registers" */
4155 ++#define VC5_RESERVED_X0(idx) (0x20 + ((idx) * 0x10))
4156 ++#define VC5_RESERVED_X0_BYPASS_SYNC BIT(7) /* bypass_sync<idx> bit */
4157 +
4158 + /* Output divider control for divider 1,2,3,4 */
4159 + #define VC5_OUT_DIV_CONTROL(idx) (0x21 + ((idx) * 0x10))
4160 +@@ -87,7 +90,6 @@
4161 + #define VC5_OUT_DIV_SKEW_INT(idx, n) (0x2b + ((idx) * 0x10) + (n))
4162 + #define VC5_OUT_DIV_INT(idx, n) (0x2d + ((idx) * 0x10) + (n))
4163 + #define VC5_OUT_DIV_SKEW_FRAC(idx) (0x2f + ((idx) * 0x10))
4164 +-/* Registers 0x30, 0x40, 0x50 are factory reserved */
4165 +
4166 + /* Clock control register for clock 1,2 */
4167 + #define VC5_CLK_OUTPUT_CFG(idx, n) (0x60 + ((idx) * 0x2) + (n))
4168 +@@ -140,6 +142,8 @@
4169 + #define VC5_HAS_INTERNAL_XTAL BIT(0)
4170 + /* chip has PFD requency doubler */
4171 + #define VC5_HAS_PFD_FREQ_DBL BIT(1)
4172 ++/* chip has bits to disable FOD sync */
4173 ++#define VC5_HAS_BYPASS_SYNC_BIT BIT(2)
4174 +
4175 + /* Supported IDT VC5 models. */
4176 + enum vc5_model {
4177 +@@ -581,6 +585,23 @@ static int vc5_clk_out_prepare(struct clk_hw *hw)
4178 + unsigned int src;
4179 + int ret;
4180 +
4181 ++ /*
4182 ++ * When enabling a FOD, all currently enabled FODs are briefly
4183 ++ * stopped in order to synchronize all of them. This causes a clock
4184 ++ * disruption to any unrelated chips that might be already using
4185 ++ * other clock outputs. Bypass the sync feature to avoid the issue,
4186 ++ * which is possible on the VersaClock 6E family via reserved
4187 ++ * registers.
4188 ++ */
4189 ++ if (vc5->chip_info->flags & VC5_HAS_BYPASS_SYNC_BIT) {
4190 ++ ret = regmap_update_bits(vc5->regmap,
4191 ++ VC5_RESERVED_X0(hwdata->num),
4192 ++ VC5_RESERVED_X0_BYPASS_SYNC,
4193 ++ VC5_RESERVED_X0_BYPASS_SYNC);
4194 ++ if (ret)
4195 ++ return ret;
4196 ++ }
4197 ++
4198 + /*
4199 + * If the input mux is disabled, enable it first and
4200 + * select source from matching FOD.
4201 +@@ -1166,7 +1187,7 @@ static const struct vc5_chip_info idt_5p49v6965_info = {
4202 + .model = IDT_VC6_5P49V6965,
4203 + .clk_fod_cnt = 4,
4204 + .clk_out_cnt = 5,
4205 +- .flags = 0,
4206 ++ .flags = VC5_HAS_BYPASS_SYNC_BIT,
4207 + };
4208 +
4209 + static const struct i2c_device_id vc5_id[] = {
4210 +diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
4211 +index 3e1a10d3f55cc..b9bad6d92671d 100644
4212 +--- a/drivers/clk/imx/clk-imx8mq.c
4213 ++++ b/drivers/clk/imx/clk-imx8mq.c
4214 +@@ -358,46 +358,26 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
4215 + hws[IMX8MQ_VIDEO2_PLL_OUT] = imx_clk_hw_sscg_pll("video2_pll_out", video2_pll_out_sels, ARRAY_SIZE(video2_pll_out_sels), 0, 0, 0, base + 0x54, 0);
4216 +
4217 + /* SYS PLL1 fixed output */
4218 +- hws[IMX8MQ_SYS1_PLL_40M_CG] = imx_clk_hw_gate("sys1_pll_40m_cg", "sys1_pll_out", base + 0x30, 9);
4219 +- hws[IMX8MQ_SYS1_PLL_80M_CG] = imx_clk_hw_gate("sys1_pll_80m_cg", "sys1_pll_out", base + 0x30, 11);
4220 +- hws[IMX8MQ_SYS1_PLL_100M_CG] = imx_clk_hw_gate("sys1_pll_100m_cg", "sys1_pll_out", base + 0x30, 13);
4221 +- hws[IMX8MQ_SYS1_PLL_133M_CG] = imx_clk_hw_gate("sys1_pll_133m_cg", "sys1_pll_out", base + 0x30, 15);
4222 +- hws[IMX8MQ_SYS1_PLL_160M_CG] = imx_clk_hw_gate("sys1_pll_160m_cg", "sys1_pll_out", base + 0x30, 17);
4223 +- hws[IMX8MQ_SYS1_PLL_200M_CG] = imx_clk_hw_gate("sys1_pll_200m_cg", "sys1_pll_out", base + 0x30, 19);
4224 +- hws[IMX8MQ_SYS1_PLL_266M_CG] = imx_clk_hw_gate("sys1_pll_266m_cg", "sys1_pll_out", base + 0x30, 21);
4225 +- hws[IMX8MQ_SYS1_PLL_400M_CG] = imx_clk_hw_gate("sys1_pll_400m_cg", "sys1_pll_out", base + 0x30, 23);
4226 +- hws[IMX8MQ_SYS1_PLL_800M_CG] = imx_clk_hw_gate("sys1_pll_800m_cg", "sys1_pll_out", base + 0x30, 25);
4227 +-
4228 +- hws[IMX8MQ_SYS1_PLL_40M] = imx_clk_hw_fixed_factor("sys1_pll_40m", "sys1_pll_40m_cg", 1, 20);
4229 +- hws[IMX8MQ_SYS1_PLL_80M] = imx_clk_hw_fixed_factor("sys1_pll_80m", "sys1_pll_80m_cg", 1, 10);
4230 +- hws[IMX8MQ_SYS1_PLL_100M] = imx_clk_hw_fixed_factor("sys1_pll_100m", "sys1_pll_100m_cg", 1, 8);
4231 +- hws[IMX8MQ_SYS1_PLL_133M] = imx_clk_hw_fixed_factor("sys1_pll_133m", "sys1_pll_133m_cg", 1, 6);
4232 +- hws[IMX8MQ_SYS1_PLL_160M] = imx_clk_hw_fixed_factor("sys1_pll_160m", "sys1_pll_160m_cg", 1, 5);
4233 +- hws[IMX8MQ_SYS1_PLL_200M] = imx_clk_hw_fixed_factor("sys1_pll_200m", "sys1_pll_200m_cg", 1, 4);
4234 +- hws[IMX8MQ_SYS1_PLL_266M] = imx_clk_hw_fixed_factor("sys1_pll_266m", "sys1_pll_266m_cg", 1, 3);
4235 +- hws[IMX8MQ_SYS1_PLL_400M] = imx_clk_hw_fixed_factor("sys1_pll_400m", "sys1_pll_400m_cg", 1, 2);
4236 +- hws[IMX8MQ_SYS1_PLL_800M] = imx_clk_hw_fixed_factor("sys1_pll_800m", "sys1_pll_800m_cg", 1, 1);
4237 ++ hws[IMX8MQ_SYS1_PLL_40M] = imx_clk_hw_fixed_factor("sys1_pll_40m", "sys1_pll_out", 1, 20);
4238 ++ hws[IMX8MQ_SYS1_PLL_80M] = imx_clk_hw_fixed_factor("sys1_pll_80m", "sys1_pll_out", 1, 10);
4239 ++ hws[IMX8MQ_SYS1_PLL_100M] = imx_clk_hw_fixed_factor("sys1_pll_100m", "sys1_pll_out", 1, 8);
4240 ++ hws[IMX8MQ_SYS1_PLL_133M] = imx_clk_hw_fixed_factor("sys1_pll_133m", "sys1_pll_out", 1, 6);
4241 ++ hws[IMX8MQ_SYS1_PLL_160M] = imx_clk_hw_fixed_factor("sys1_pll_160m", "sys1_pll_out", 1, 5);
4242 ++ hws[IMX8MQ_SYS1_PLL_200M] = imx_clk_hw_fixed_factor("sys1_pll_200m", "sys1_pll_out", 1, 4);
4243 ++ hws[IMX8MQ_SYS1_PLL_266M] = imx_clk_hw_fixed_factor("sys1_pll_266m", "sys1_pll_out", 1, 3);
4244 ++ hws[IMX8MQ_SYS1_PLL_400M] = imx_clk_hw_fixed_factor("sys1_pll_400m", "sys1_pll_out", 1, 2);
4245 ++ hws[IMX8MQ_SYS1_PLL_800M] = imx_clk_hw_fixed_factor("sys1_pll_800m", "sys1_pll_out", 1, 1);
4246 +
4247 + /* SYS PLL2 fixed output */
4248 +- hws[IMX8MQ_SYS2_PLL_50M_CG] = imx_clk_hw_gate("sys2_pll_50m_cg", "sys2_pll_out", base + 0x3c, 9);
4249 +- hws[IMX8MQ_SYS2_PLL_100M_CG] = imx_clk_hw_gate("sys2_pll_100m_cg", "sys2_pll_out", base + 0x3c, 11);
4250 +- hws[IMX8MQ_SYS2_PLL_125M_CG] = imx_clk_hw_gate("sys2_pll_125m_cg", "sys2_pll_out", base + 0x3c, 13);
4251 +- hws[IMX8MQ_SYS2_PLL_166M_CG] = imx_clk_hw_gate("sys2_pll_166m_cg", "sys2_pll_out", base + 0x3c, 15);
4252 +- hws[IMX8MQ_SYS2_PLL_200M_CG] = imx_clk_hw_gate("sys2_pll_200m_cg", "sys2_pll_out", base + 0x3c, 17);
4253 +- hws[IMX8MQ_SYS2_PLL_250M_CG] = imx_clk_hw_gate("sys2_pll_250m_cg", "sys2_pll_out", base + 0x3c, 19);
4254 +- hws[IMX8MQ_SYS2_PLL_333M_CG] = imx_clk_hw_gate("sys2_pll_333m_cg", "sys2_pll_out", base + 0x3c, 21);
4255 +- hws[IMX8MQ_SYS2_PLL_500M_CG] = imx_clk_hw_gate("sys2_pll_500m_cg", "sys2_pll_out", base + 0x3c, 23);
4256 +- hws[IMX8MQ_SYS2_PLL_1000M_CG] = imx_clk_hw_gate("sys2_pll_1000m_cg", "sys2_pll_out", base + 0x3c, 25);
4257 +-
4258 +- hws[IMX8MQ_SYS2_PLL_50M] = imx_clk_hw_fixed_factor("sys2_pll_50m", "sys2_pll_50m_cg", 1, 20);
4259 +- hws[IMX8MQ_SYS2_PLL_100M] = imx_clk_hw_fixed_factor("sys2_pll_100m", "sys2_pll_100m_cg", 1, 10);
4260 +- hws[IMX8MQ_SYS2_PLL_125M] = imx_clk_hw_fixed_factor("sys2_pll_125m", "sys2_pll_125m_cg", 1, 8);
4261 +- hws[IMX8MQ_SYS2_PLL_166M] = imx_clk_hw_fixed_factor("sys2_pll_166m", "sys2_pll_166m_cg", 1, 6);
4262 +- hws[IMX8MQ_SYS2_PLL_200M] = imx_clk_hw_fixed_factor("sys2_pll_200m", "sys2_pll_200m_cg", 1, 5);
4263 +- hws[IMX8MQ_SYS2_PLL_250M] = imx_clk_hw_fixed_factor("sys2_pll_250m", "sys2_pll_250m_cg", 1, 4);
4264 +- hws[IMX8MQ_SYS2_PLL_333M] = imx_clk_hw_fixed_factor("sys2_pll_333m", "sys2_pll_333m_cg", 1, 3);
4265 +- hws[IMX8MQ_SYS2_PLL_500M] = imx_clk_hw_fixed_factor("sys2_pll_500m", "sys2_pll_500m_cg", 1, 2);
4266 +- hws[IMX8MQ_SYS2_PLL_1000M] = imx_clk_hw_fixed_factor("sys2_pll_1000m", "sys2_pll_1000m_cg", 1, 1);
4267 ++ hws[IMX8MQ_SYS2_PLL_50M] = imx_clk_hw_fixed_factor("sys2_pll_50m", "sys2_pll_out", 1, 20);
4268 ++ hws[IMX8MQ_SYS2_PLL_100M] = imx_clk_hw_fixed_factor("sys2_pll_100m", "sys2_pll_out", 1, 10);
4269 ++ hws[IMX8MQ_SYS2_PLL_125M] = imx_clk_hw_fixed_factor("sys2_pll_125m", "sys2_pll_out", 1, 8);
4270 ++ hws[IMX8MQ_SYS2_PLL_166M] = imx_clk_hw_fixed_factor("sys2_pll_166m", "sys2_pll_out", 1, 6);
4271 ++ hws[IMX8MQ_SYS2_PLL_200M] = imx_clk_hw_fixed_factor("sys2_pll_200m", "sys2_pll_out", 1, 5);
4272 ++ hws[IMX8MQ_SYS2_PLL_250M] = imx_clk_hw_fixed_factor("sys2_pll_250m", "sys2_pll_out", 1, 4);
4273 ++ hws[IMX8MQ_SYS2_PLL_333M] = imx_clk_hw_fixed_factor("sys2_pll_333m", "sys2_pll_out", 1, 3);
4274 ++ hws[IMX8MQ_SYS2_PLL_500M] = imx_clk_hw_fixed_factor("sys2_pll_500m", "sys2_pll_out", 1, 2);
4275 ++ hws[IMX8MQ_SYS2_PLL_1000M] = imx_clk_hw_fixed_factor("sys2_pll_1000m", "sys2_pll_out", 1, 1);
4276 +
4277 + hws[IMX8MQ_CLK_MON_AUDIO_PLL1_DIV] = imx_clk_hw_divider("audio_pll1_out_monitor", "audio_pll1_bypass", base + 0x78, 0, 3);
4278 + hws[IMX8MQ_CLK_MON_AUDIO_PLL2_DIV] = imx_clk_hw_divider("audio_pll2_out_monitor", "audio_pll2_bypass", base + 0x78, 4, 3);
4279 +diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
4280 +index b080359b4645e..a805bac93c113 100644
4281 +--- a/drivers/clk/meson/g12a.c
4282 ++++ b/drivers/clk/meson/g12a.c
4283 +@@ -1603,7 +1603,7 @@ static struct clk_regmap g12b_cpub_clk_trace = {
4284 + };
4285 +
4286 + static const struct pll_mult_range g12a_gp0_pll_mult_range = {
4287 +- .min = 55,
4288 ++ .min = 125,
4289 + .max = 255,
4290 + };
4291 +
4292 +diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
4293 +index c6eb99169ddc7..6f8f0bbc5ab5b 100644
4294 +--- a/drivers/clk/qcom/clk-alpha-pll.c
4295 ++++ b/drivers/clk/qcom/clk-alpha-pll.c
4296 +@@ -1234,7 +1234,7 @@ static int alpha_pll_fabia_prepare(struct clk_hw *hw)
4297 + return ret;
4298 +
4299 + /* Setup PLL for calibration frequency */
4300 +- regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), cal_l);
4301 ++ regmap_write(pll->clkr.regmap, PLL_CAL_L_VAL(pll), cal_l);
4302 +
4303 + /* Bringup the PLL at calibration frequency */
4304 + ret = clk_alpha_pll_enable(hw);
4305 +diff --git a/drivers/clk/qcom/gcc-sc7280.c b/drivers/clk/qcom/gcc-sc7280.c
4306 +index 22736c16ed160..2db1b07c70447 100644
4307 +--- a/drivers/clk/qcom/gcc-sc7280.c
4308 ++++ b/drivers/clk/qcom/gcc-sc7280.c
4309 +@@ -716,6 +716,7 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s2_clk_src[] = {
4310 + F(29491200, P_GCC_GPLL0_OUT_EVEN, 1, 1536, 15625),
4311 + F(32000000, P_GCC_GPLL0_OUT_EVEN, 1, 8, 75),
4312 + F(48000000, P_GCC_GPLL0_OUT_EVEN, 1, 4, 25),
4313 ++ F(52174000, P_GCC_GPLL0_OUT_MAIN, 1, 2, 23),
4314 + F(64000000, P_GCC_GPLL0_OUT_EVEN, 1, 16, 75),
4315 + F(75000000, P_GCC_GPLL0_OUT_EVEN, 4, 0, 0),
4316 + F(80000000, P_GCC_GPLL0_OUT_EVEN, 1, 4, 15),
4317 +diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
4318 +index 7689bdd0a914b..d3e01c5e93765 100644
4319 +--- a/drivers/clk/socfpga/clk-agilex.c
4320 ++++ b/drivers/clk/socfpga/clk-agilex.c
4321 +@@ -186,6 +186,41 @@ static const struct clk_parent_data noc_mux[] = {
4322 + .name = "boot_clk", },
4323 + };
4324 +
4325 ++static const struct clk_parent_data sdmmc_mux[] = {
4326 ++ { .fw_name = "sdmmc_free_clk",
4327 ++ .name = "sdmmc_free_clk", },
4328 ++ { .fw_name = "boot_clk",
4329 ++ .name = "boot_clk", },
4330 ++};
4331 ++
4332 ++static const struct clk_parent_data s2f_user1_mux[] = {
4333 ++ { .fw_name = "s2f_user1_free_clk",
4334 ++ .name = "s2f_user1_free_clk", },
4335 ++ { .fw_name = "boot_clk",
4336 ++ .name = "boot_clk", },
4337 ++};
4338 ++
4339 ++static const struct clk_parent_data psi_mux[] = {
4340 ++ { .fw_name = "psi_ref_free_clk",
4341 ++ .name = "psi_ref_free_clk", },
4342 ++ { .fw_name = "boot_clk",
4343 ++ .name = "boot_clk", },
4344 ++};
4345 ++
4346 ++static const struct clk_parent_data gpio_db_mux[] = {
4347 ++ { .fw_name = "gpio_db_free_clk",
4348 ++ .name = "gpio_db_free_clk", },
4349 ++ { .fw_name = "boot_clk",
4350 ++ .name = "boot_clk", },
4351 ++};
4352 ++
4353 ++static const struct clk_parent_data emac_ptp_mux[] = {
4354 ++ { .fw_name = "emac_ptp_free_clk",
4355 ++ .name = "emac_ptp_free_clk", },
4356 ++ { .fw_name = "boot_clk",
4357 ++ .name = "boot_clk", },
4358 ++};
4359 ++
4360 + /* clocks in AO (always on) controller */
4361 + static const struct stratix10_pll_clock agilex_pll_clks[] = {
4362 + { AGILEX_BOOT_CLK, "boot_clk", boot_mux, ARRAY_SIZE(boot_mux), 0,
4363 +@@ -222,11 +257,9 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
4364 + { AGILEX_MPU_FREE_CLK, "mpu_free_clk", NULL, mpu_free_mux, ARRAY_SIZE(mpu_free_mux),
4365 + 0, 0x3C, 0, 0, 0},
4366 + { AGILEX_NOC_FREE_CLK, "noc_free_clk", NULL, noc_free_mux, ARRAY_SIZE(noc_free_mux),
4367 +- 0, 0x40, 0, 0, 1},
4368 +- { AGILEX_L4_SYS_FREE_CLK, "l4_sys_free_clk", "noc_free_clk", NULL, 1, 0,
4369 +- 0, 4, 0, 0},
4370 +- { AGILEX_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
4371 +- 0, 0, 0, 0x30, 1},
4372 ++ 0, 0x40, 0, 0, 0},
4373 ++ { AGILEX_L4_SYS_FREE_CLK, "l4_sys_free_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0,
4374 ++ 0, 4, 0x30, 1},
4375 + { AGILEX_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
4376 + 0, 0xD4, 0, 0x88, 0},
4377 + { AGILEX_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
4378 +@@ -236,7 +269,7 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
4379 + { AGILEX_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux,
4380 + ARRAY_SIZE(gpio_db_free_mux), 0, 0xE0, 0, 0x88, 3},
4381 + { AGILEX_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
4382 +- ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0x88, 4},
4383 ++ ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0, 0},
4384 + { AGILEX_S2F_USER0_FREE_CLK, "s2f_user0_free_clk", NULL, s2f_usr0_free_mux,
4385 + ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0, 0},
4386 + { AGILEX_S2F_USER1_FREE_CLK, "s2f_user1_free_clk", NULL, s2f_usr1_free_mux,
4387 +@@ -252,24 +285,24 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
4388 + 0, 0, 0, 0, 0, 0, 4},
4389 + { AGILEX_MPU_CCU_CLK, "mpu_ccu_clk", "mpu_clk", NULL, 1, 0, 0x24,
4390 + 0, 0, 0, 0, 0, 0, 2},
4391 +- { AGILEX_L4_MAIN_CLK, "l4_main_clk", "noc_clk", NULL, 1, 0, 0x24,
4392 +- 1, 0x44, 0, 2, 0, 0, 0},
4393 +- { AGILEX_L4_MP_CLK, "l4_mp_clk", "noc_clk", NULL, 1, 0, 0x24,
4394 +- 2, 0x44, 8, 2, 0, 0, 0},
4395 ++ { AGILEX_L4_MAIN_CLK, "l4_main_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
4396 ++ 1, 0x44, 0, 2, 0x30, 1, 0},
4397 ++ { AGILEX_L4_MP_CLK, "l4_mp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
4398 ++ 2, 0x44, 8, 2, 0x30, 1, 0},
4399 + /*
4400 + * The l4_sp_clk feeds a 100 MHz clock to various peripherals, one of them
4401 + * being the SP timers, thus cannot get gated.
4402 + */
4403 +- { AGILEX_L4_SP_CLK, "l4_sp_clk", "noc_clk", NULL, 1, CLK_IS_CRITICAL, 0x24,
4404 +- 3, 0x44, 16, 2, 0, 0, 0},
4405 +- { AGILEX_CS_AT_CLK, "cs_at_clk", "noc_clk", NULL, 1, 0, 0x24,
4406 +- 4, 0x44, 24, 2, 0, 0, 0},
4407 +- { AGILEX_CS_TRACE_CLK, "cs_trace_clk", "noc_clk", NULL, 1, 0, 0x24,
4408 +- 4, 0x44, 26, 2, 0, 0, 0},
4409 ++ { AGILEX_L4_SP_CLK, "l4_sp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), CLK_IS_CRITICAL, 0x24,
4410 ++ 3, 0x44, 16, 2, 0x30, 1, 0},
4411 ++ { AGILEX_CS_AT_CLK, "cs_at_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
4412 ++ 4, 0x44, 24, 2, 0x30, 1, 0},
4413 ++ { AGILEX_CS_TRACE_CLK, "cs_trace_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
4414 ++ 4, 0x44, 26, 2, 0x30, 1, 0},
4415 + { AGILEX_CS_PDBG_CLK, "cs_pdbg_clk", "cs_at_clk", NULL, 1, 0, 0x24,
4416 + 4, 0x44, 28, 1, 0, 0, 0},
4417 +- { AGILEX_CS_TIMER_CLK, "cs_timer_clk", "noc_clk", NULL, 1, 0, 0x24,
4418 +- 5, 0, 0, 0, 0, 0, 0},
4419 ++ { AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
4420 ++ 5, 0, 0, 0, 0x30, 1, 0},
4421 + { AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
4422 + 6, 0, 0, 0, 0, 0, 0},
4423 + { AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
4424 +@@ -278,16 +311,16 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
4425 + 1, 0, 0, 0, 0x94, 27, 0},
4426 + { AGILEX_EMAC2_CLK, "emac2_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
4427 + 2, 0, 0, 0, 0x94, 28, 0},
4428 +- { AGILEX_EMAC_PTP_CLK, "emac_ptp_clk", "emac_ptp_free_clk", NULL, 1, 0, 0x7C,
4429 +- 3, 0, 0, 0, 0, 0, 0},
4430 +- { AGILEX_GPIO_DB_CLK, "gpio_db_clk", "gpio_db_free_clk", NULL, 1, 0, 0x7C,
4431 +- 4, 0x98, 0, 16, 0, 0, 0},
4432 +- { AGILEX_SDMMC_CLK, "sdmmc_clk", "sdmmc_free_clk", NULL, 1, 0, 0x7C,
4433 +- 5, 0, 0, 0, 0, 0, 4},
4434 +- { AGILEX_S2F_USER1_CLK, "s2f_user1_clk", "s2f_user1_free_clk", NULL, 1, 0, 0x7C,
4435 +- 6, 0, 0, 0, 0, 0, 0},
4436 +- { AGILEX_PSI_REF_CLK, "psi_ref_clk", "psi_ref_free_clk", NULL, 1, 0, 0x7C,
4437 +- 7, 0, 0, 0, 0, 0, 0},
4438 ++ { AGILEX_EMAC_PTP_CLK, "emac_ptp_clk", NULL, emac_ptp_mux, ARRAY_SIZE(emac_ptp_mux), 0, 0x7C,
4439 ++ 3, 0, 0, 0, 0x88, 2, 0},
4440 ++ { AGILEX_GPIO_DB_CLK, "gpio_db_clk", NULL, gpio_db_mux, ARRAY_SIZE(gpio_db_mux), 0, 0x7C,
4441 ++ 4, 0x98, 0, 16, 0x88, 3, 0},
4442 ++ { AGILEX_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0x7C,
4443 ++ 5, 0, 0, 0, 0x88, 4, 4},
4444 ++ { AGILEX_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0x7C,
4445 ++ 6, 0, 0, 0, 0x88, 5, 0},
4446 ++ { AGILEX_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0x7C,
4447 ++ 7, 0, 0, 0, 0x88, 6, 0},
4448 + { AGILEX_USB_CLK, "usb_clk", "l4_mp_clk", NULL, 1, 0, 0x7C,
4449 + 8, 0, 0, 0, 0, 0, 0},
4450 + { AGILEX_SPI_M_CLK, "spi_m_clk", "l4_mp_clk", NULL, 1, 0, 0x7C,
4451 +diff --git a/drivers/clk/socfpga/clk-periph-s10.c b/drivers/clk/socfpga/clk-periph-s10.c
4452 +index 0ff2b9d240353..44f3aebe1182e 100644
4453 +--- a/drivers/clk/socfpga/clk-periph-s10.c
4454 ++++ b/drivers/clk/socfpga/clk-periph-s10.c
4455 +@@ -64,16 +64,21 @@ static u8 clk_periclk_get_parent(struct clk_hw *hwclk)
4456 + {
4457 + struct socfpga_periph_clk *socfpgaclk = to_periph_clk(hwclk);
4458 + u32 clk_src, mask;
4459 +- u8 parent;
4460 ++ u8 parent = 0;
4461 +
4462 ++ /* handle the bypass first */
4463 + if (socfpgaclk->bypass_reg) {
4464 + mask = (0x1 << socfpgaclk->bypass_shift);
4465 + parent = ((readl(socfpgaclk->bypass_reg) & mask) >>
4466 + socfpgaclk->bypass_shift);
4467 +- } else {
4468 ++ if (parent)
4469 ++ return parent;
4470 ++ }
4471 ++
4472 ++ if (socfpgaclk->hw.reg) {
4473 + clk_src = readl(socfpgaclk->hw.reg);
4474 + parent = (clk_src >> CLK_MGR_FREE_SHIFT) &
4475 +- CLK_MGR_FREE_MASK;
4476 ++ CLK_MGR_FREE_MASK;
4477 + }
4478 + return parent;
4479 + }
4480 +diff --git a/drivers/clk/socfpga/clk-s10.c b/drivers/clk/socfpga/clk-s10.c
4481 +index 661a8e9bfb9bd..aaf69058b1dca 100644
4482 +--- a/drivers/clk/socfpga/clk-s10.c
4483 ++++ b/drivers/clk/socfpga/clk-s10.c
4484 +@@ -144,6 +144,41 @@ static const struct clk_parent_data mpu_free_mux[] = {
4485 + .name = "f2s-free-clk", },
4486 + };
4487 +
4488 ++static const struct clk_parent_data sdmmc_mux[] = {
4489 ++ { .fw_name = "sdmmc_free_clk",
4490 ++ .name = "sdmmc_free_clk", },
4491 ++ { .fw_name = "boot_clk",
4492 ++ .name = "boot_clk", },
4493 ++};
4494 ++
4495 ++static const struct clk_parent_data s2f_user1_mux[] = {
4496 ++ { .fw_name = "s2f_user1_free_clk",
4497 ++ .name = "s2f_user1_free_clk", },
4498 ++ { .fw_name = "boot_clk",
4499 ++ .name = "boot_clk", },
4500 ++};
4501 ++
4502 ++static const struct clk_parent_data psi_mux[] = {
4503 ++ { .fw_name = "psi_ref_free_clk",
4504 ++ .name = "psi_ref_free_clk", },
4505 ++ { .fw_name = "boot_clk",
4506 ++ .name = "boot_clk", },
4507 ++};
4508 ++
4509 ++static const struct clk_parent_data gpio_db_mux[] = {
4510 ++ { .fw_name = "gpio_db_free_clk",
4511 ++ .name = "gpio_db_free_clk", },
4512 ++ { .fw_name = "boot_clk",
4513 ++ .name = "boot_clk", },
4514 ++};
4515 ++
4516 ++static const struct clk_parent_data emac_ptp_mux[] = {
4517 ++ { .fw_name = "emac_ptp_free_clk",
4518 ++ .name = "emac_ptp_free_clk", },
4519 ++ { .fw_name = "boot_clk",
4520 ++ .name = "boot_clk", },
4521 ++};
4522 ++
4523 + /* clocks in AO (always on) controller */
4524 + static const struct stratix10_pll_clock s10_pll_clks[] = {
4525 + { STRATIX10_BOOT_CLK, "boot_clk", boot_mux, ARRAY_SIZE(boot_mux), 0,
4526 +@@ -167,7 +202,7 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
4527 + { STRATIX10_MPU_FREE_CLK, "mpu_free_clk", NULL, mpu_free_mux, ARRAY_SIZE(mpu_free_mux),
4528 + 0, 0x48, 0, 0, 0},
4529 + { STRATIX10_NOC_FREE_CLK, "noc_free_clk", NULL, noc_free_mux, ARRAY_SIZE(noc_free_mux),
4530 +- 0, 0x4C, 0, 0, 0},
4531 ++ 0, 0x4C, 0, 0x3C, 1},
4532 + { STRATIX10_MAIN_EMACA_CLK, "main_emaca_clk", "main_noc_base_clk", NULL, 1, 0,
4533 + 0x50, 0, 0, 0},
4534 + { STRATIX10_MAIN_EMACB_CLK, "main_emacb_clk", "main_noc_base_clk", NULL, 1, 0,
4535 +@@ -200,10 +235,8 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
4536 + 0, 0xD4, 0, 0, 0},
4537 + { STRATIX10_PERI_PSI_REF_CLK, "peri_psi_ref_clk", "peri_noc_base_clk", NULL, 1, 0,
4538 + 0xD8, 0, 0, 0},
4539 +- { STRATIX10_L4_SYS_FREE_CLK, "l4_sys_free_clk", "noc_free_clk", NULL, 1, 0,
4540 +- 0, 4, 0, 0},
4541 +- { STRATIX10_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
4542 +- 0, 0, 0, 0x3C, 1},
4543 ++ { STRATIX10_L4_SYS_FREE_CLK, "l4_sys_free_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0,
4544 ++ 0, 4, 0x3C, 1},
4545 + { STRATIX10_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
4546 + 0, 0, 2, 0xB0, 0},
4547 + { STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
4548 +@@ -227,20 +260,20 @@ static const struct stratix10_gate_clock s10_gate_clks[] = {
4549 + 0, 0, 0, 0, 0, 0, 4},
4550 + { STRATIX10_MPU_L2RAM_CLK, "mpu_l2ram_clk", "mpu_clk", NULL, 1, 0, 0x30,
4551 + 0, 0, 0, 0, 0, 0, 2},
4552 +- { STRATIX10_L4_MAIN_CLK, "l4_main_clk", "noc_clk", NULL, 1, 0, 0x30,
4553 +- 1, 0x70, 0, 2, 0, 0, 0},
4554 +- { STRATIX10_L4_MP_CLK, "l4_mp_clk", "noc_clk", NULL, 1, 0, 0x30,
4555 +- 2, 0x70, 8, 2, 0, 0, 0},
4556 +- { STRATIX10_L4_SP_CLK, "l4_sp_clk", "noc_clk", NULL, 1, CLK_IS_CRITICAL, 0x30,
4557 +- 3, 0x70, 16, 2, 0, 0, 0},
4558 +- { STRATIX10_CS_AT_CLK, "cs_at_clk", "noc_clk", NULL, 1, 0, 0x30,
4559 +- 4, 0x70, 24, 2, 0, 0, 0},
4560 +- { STRATIX10_CS_TRACE_CLK, "cs_trace_clk", "noc_clk", NULL, 1, 0, 0x30,
4561 +- 4, 0x70, 26, 2, 0, 0, 0},
4562 ++ { STRATIX10_L4_MAIN_CLK, "l4_main_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
4563 ++ 1, 0x70, 0, 2, 0x3C, 1, 0},
4564 ++ { STRATIX10_L4_MP_CLK, "l4_mp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
4565 ++ 2, 0x70, 8, 2, 0x3C, 1, 0},
4566 ++ { STRATIX10_L4_SP_CLK, "l4_sp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), CLK_IS_CRITICAL, 0x30,
4567 ++ 3, 0x70, 16, 2, 0x3C, 1, 0},
4568 ++ { STRATIX10_CS_AT_CLK, "cs_at_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
4569 ++ 4, 0x70, 24, 2, 0x3C, 1, 0},
4570 ++ { STRATIX10_CS_TRACE_CLK, "cs_trace_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
4571 ++ 4, 0x70, 26, 2, 0x3C, 1, 0},
4572 + { STRATIX10_CS_PDBG_CLK, "cs_pdbg_clk", "cs_at_clk", NULL, 1, 0, 0x30,
4573 + 4, 0x70, 28, 1, 0, 0, 0},
4574 +- { STRATIX10_CS_TIMER_CLK, "cs_timer_clk", "noc_clk", NULL, 1, 0, 0x30,
4575 +- 5, 0, 0, 0, 0, 0, 0},
4576 ++ { STRATIX10_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
4577 ++ 5, 0, 0, 0, 0x3C, 1, 0},
4578 + { STRATIX10_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x30,
4579 + 6, 0, 0, 0, 0, 0, 0},
4580 + { STRATIX10_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0xA4,
4581 +@@ -249,16 +282,16 @@ static const struct stratix10_gate_clock s10_gate_clks[] = {
4582 + 1, 0, 0, 0, 0xDC, 27, 0},
4583 + { STRATIX10_EMAC2_CLK, "emac2_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0xA4,
4584 + 2, 0, 0, 0, 0xDC, 28, 0},
4585 +- { STRATIX10_EMAC_PTP_CLK, "emac_ptp_clk", "emac_ptp_free_clk", NULL, 1, 0, 0xA4,
4586 +- 3, 0, 0, 0, 0, 0, 0},
4587 +- { STRATIX10_GPIO_DB_CLK, "gpio_db_clk", "gpio_db_free_clk", NULL, 1, 0, 0xA4,
4588 +- 4, 0xE0, 0, 16, 0, 0, 0},
4589 +- { STRATIX10_SDMMC_CLK, "sdmmc_clk", "sdmmc_free_clk", NULL, 1, 0, 0xA4,
4590 +- 5, 0, 0, 0, 0, 0, 4},
4591 +- { STRATIX10_S2F_USER1_CLK, "s2f_user1_clk", "s2f_user1_free_clk", NULL, 1, 0, 0xA4,
4592 +- 6, 0, 0, 0, 0, 0, 0},
4593 +- { STRATIX10_PSI_REF_CLK, "psi_ref_clk", "psi_ref_free_clk", NULL, 1, 0, 0xA4,
4594 +- 7, 0, 0, 0, 0, 0, 0},
4595 ++ { STRATIX10_EMAC_PTP_CLK, "emac_ptp_clk", NULL, emac_ptp_mux, ARRAY_SIZE(emac_ptp_mux), 0, 0xA4,
4596 ++ 3, 0, 0, 0, 0xB0, 2, 0},
4597 ++ { STRATIX10_GPIO_DB_CLK, "gpio_db_clk", NULL, gpio_db_mux, ARRAY_SIZE(gpio_db_mux), 0, 0xA4,
4598 ++ 4, 0xE0, 0, 16, 0xB0, 3, 0},
4599 ++ { STRATIX10_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0xA4,
4600 ++ 5, 0, 0, 0, 0xB0, 4, 4},
4601 ++ { STRATIX10_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0xA4,
4602 ++ 6, 0, 0, 0, 0xB0, 5, 0},
4603 ++ { STRATIX10_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0xA4,
4604 ++ 7, 0, 0, 0, 0xB0, 6, 0},
4605 + { STRATIX10_USB_CLK, "usb_clk", "l4_mp_clk", NULL, 1, 0, 0xA4,
4606 + 8, 0, 0, 0, 0, 0, 0},
4607 + { STRATIX10_SPI_M_CLK, "spi_m_clk", "l4_mp_clk", NULL, 1, 0, 0xA4,
4608 +diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
4609 +index 16dbf83d2f62a..a33688b2359e5 100644
4610 +--- a/drivers/clk/tegra/clk-tegra30.c
4611 ++++ b/drivers/clk/tegra/clk-tegra30.c
4612 +@@ -1245,7 +1245,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
4613 + { TEGRA30_CLK_GR3D, TEGRA30_CLK_PLL_C, 300000000, 0 },
4614 + { TEGRA30_CLK_GR3D2, TEGRA30_CLK_PLL_C, 300000000, 0 },
4615 + { TEGRA30_CLK_PLL_U, TEGRA30_CLK_CLK_MAX, 480000000, 0 },
4616 +- { TEGRA30_CLK_VDE, TEGRA30_CLK_PLL_C, 600000000, 0 },
4617 ++ { TEGRA30_CLK_VDE, TEGRA30_CLK_PLL_C, 300000000, 0 },
4618 + { TEGRA30_CLK_SPDIF_IN_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
4619 + { TEGRA30_CLK_I2S0_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
4620 + { TEGRA30_CLK_I2S1_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
4621 +diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c
4622 +index 33eeabf9c3d12..e5c631f1b5cbe 100644
4623 +--- a/drivers/clocksource/timer-ti-dm.c
4624 ++++ b/drivers/clocksource/timer-ti-dm.c
4625 +@@ -78,6 +78,9 @@ static void omap_dm_timer_write_reg(struct omap_dm_timer *timer, u32 reg,
4626 +
4627 + static void omap_timer_restore_context(struct omap_dm_timer *timer)
4628 + {
4629 ++ __omap_dm_timer_write(timer, OMAP_TIMER_OCP_CFG_OFFSET,
4630 ++ timer->context.ocp_cfg, 0);
4631 ++
4632 + omap_dm_timer_write_reg(timer, OMAP_TIMER_WAKEUP_EN_REG,
4633 + timer->context.twer);
4634 + omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG,
4635 +@@ -95,6 +98,9 @@ static void omap_timer_restore_context(struct omap_dm_timer *timer)
4636 +
4637 + static void omap_timer_save_context(struct omap_dm_timer *timer)
4638 + {
4639 ++ timer->context.ocp_cfg =
4640 ++ __omap_dm_timer_read(timer, OMAP_TIMER_OCP_CFG_OFFSET, 0);
4641 ++
4642 + timer->context.tclr =
4643 + omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
4644 + timer->context.twer =
4645 +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
4646 +index 1d1b563cea4b8..1bc1293deae9b 100644
4647 +--- a/drivers/cpufreq/cpufreq.c
4648 ++++ b/drivers/cpufreq/cpufreq.c
4649 +@@ -1370,9 +1370,14 @@ static int cpufreq_online(unsigned int cpu)
4650 + goto out_free_policy;
4651 + }
4652 +
4653 ++ /*
4654 ++ * The initialization has succeeded and the policy is online.
4655 ++ * If there is a problem with its frequency table, take it
4656 ++ * offline and drop it.
4657 ++ */
4658 + ret = cpufreq_table_validate_and_sort(policy);
4659 + if (ret)
4660 +- goto out_exit_policy;
4661 ++ goto out_offline_policy;
4662 +
4663 + /* related_cpus should at least include policy->cpus. */
4664 + cpumask_copy(policy->related_cpus, policy->cpus);
4665 +@@ -1518,6 +1523,10 @@ out_destroy_policy:
4666 +
4667 + up_write(&policy->rwsem);
4668 +
4669 ++out_offline_policy:
4670 ++ if (cpufreq_driver->offline)
4671 ++ cpufreq_driver->offline(policy);
4672 ++
4673 + out_exit_policy:
4674 + if (cpufreq_driver->exit)
4675 + cpufreq_driver->exit(policy);
4676 +diff --git a/drivers/crypto/cavium/nitrox/nitrox_isr.c b/drivers/crypto/cavium/nitrox/nitrox_isr.c
4677 +index 99b053094f5af..b16689b48f5a5 100644
4678 +--- a/drivers/crypto/cavium/nitrox/nitrox_isr.c
4679 ++++ b/drivers/crypto/cavium/nitrox/nitrox_isr.c
4680 +@@ -307,6 +307,10 @@ int nitrox_register_interrupts(struct nitrox_device *ndev)
4681 + * Entry 192: NPS_CORE_INT_ACTIVE
4682 + */
4683 + nr_vecs = pci_msix_vec_count(pdev);
4684 ++ if (nr_vecs < 0) {
4685 ++ dev_err(DEV(ndev), "Error in getting vec count %d\n", nr_vecs);
4686 ++ return nr_vecs;
4687 ++ }
4688 +
4689 + /* Enable MSI-X */
4690 + ret = pci_alloc_irq_vectors(pdev, nr_vecs, nr_vecs, PCI_IRQ_MSIX);
4691 +diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
4692 +index 3e0d1d6922bab..6546d3e90d954 100644
4693 +--- a/drivers/crypto/ccp/sev-dev.c
4694 ++++ b/drivers/crypto/ccp/sev-dev.c
4695 +@@ -42,6 +42,10 @@ static int psp_probe_timeout = 5;
4696 + module_param(psp_probe_timeout, int, 0644);
4697 + MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe");
4698 +
4699 ++MODULE_FIRMWARE("amd/amd_sev_fam17h_model0xh.sbin"); /* 1st gen EPYC */
4700 ++MODULE_FIRMWARE("amd/amd_sev_fam17h_model3xh.sbin"); /* 2nd gen EPYC */
4701 ++MODULE_FIRMWARE("amd/amd_sev_fam19h_model0xh.sbin"); /* 3rd gen EPYC */
4702 ++
4703 + static bool psp_dead;
4704 + static int psp_timeout;
4705 +
4706 +diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
4707 +index f471dbaef1fbc..7d346d842a39e 100644
4708 +--- a/drivers/crypto/ccp/sp-pci.c
4709 ++++ b/drivers/crypto/ccp/sp-pci.c
4710 +@@ -222,7 +222,7 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
4711 + if (ret) {
4712 + dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
4713 + ret);
4714 +- goto e_err;
4715 ++ goto free_irqs;
4716 + }
4717 + }
4718 +
4719 +@@ -230,10 +230,12 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
4720 +
4721 + ret = sp_init(sp);
4722 + if (ret)
4723 +- goto e_err;
4724 ++ goto free_irqs;
4725 +
4726 + return 0;
4727 +
4728 ++free_irqs:
4729 ++ sp_free_irqs(sp);
4730 + e_err:
4731 + dev_notice(dev, "initialization failed\n");
4732 + return ret;
4733 +diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
4734 +index 8adcbb3271267..c86b01abd0a61 100644
4735 +--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
4736 ++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
4737 +@@ -1513,11 +1513,11 @@ static struct skcipher_alg sec_skciphers[] = {
4738 + AES_BLOCK_SIZE, AES_BLOCK_SIZE)
4739 +
4740 + SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb,
4741 +- SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
4742 ++ SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE,
4743 + DES3_EDE_BLOCK_SIZE, 0)
4744 +
4745 + SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc,
4746 +- SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
4747 ++ SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE,
4748 + DES3_EDE_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE)
4749 +
4750 + SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts,
4751 +diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
4752 +index 8b0f17fc09fb5..7567456a21a05 100644
4753 +--- a/drivers/crypto/ixp4xx_crypto.c
4754 ++++ b/drivers/crypto/ixp4xx_crypto.c
4755 +@@ -149,6 +149,8 @@ struct crypt_ctl {
4756 + struct ablk_ctx {
4757 + struct buffer_desc *src;
4758 + struct buffer_desc *dst;
4759 ++ u8 iv[MAX_IVLEN];
4760 ++ bool encrypt;
4761 + };
4762 +
4763 + struct aead_ctx {
4764 +@@ -330,7 +332,7 @@ static void free_buf_chain(struct device *dev, struct buffer_desc *buf,
4765 +
4766 + buf1 = buf->next;
4767 + phys1 = buf->phys_next;
4768 +- dma_unmap_single(dev, buf->phys_next, buf->buf_len, buf->dir);
4769 ++ dma_unmap_single(dev, buf->phys_addr, buf->buf_len, buf->dir);
4770 + dma_pool_free(buffer_pool, buf, phys);
4771 + buf = buf1;
4772 + phys = phys1;
4773 +@@ -381,6 +383,20 @@ static void one_packet(dma_addr_t phys)
4774 + case CTL_FLAG_PERFORM_ABLK: {
4775 + struct skcipher_request *req = crypt->data.ablk_req;
4776 + struct ablk_ctx *req_ctx = skcipher_request_ctx(req);
4777 ++ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
4778 ++ unsigned int ivsize = crypto_skcipher_ivsize(tfm);
4779 ++ unsigned int offset;
4780 ++
4781 ++ if (ivsize > 0) {
4782 ++ offset = req->cryptlen - ivsize;
4783 ++ if (req_ctx->encrypt) {
4784 ++ scatterwalk_map_and_copy(req->iv, req->dst,
4785 ++ offset, ivsize, 0);
4786 ++ } else {
4787 ++ memcpy(req->iv, req_ctx->iv, ivsize);
4788 ++ memzero_explicit(req_ctx->iv, ivsize);
4789 ++ }
4790 ++ }
4791 +
4792 + if (req_ctx->dst) {
4793 + free_buf_chain(dev, req_ctx->dst, crypt->dst_buf);
4794 +@@ -876,6 +892,7 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
4795 + struct ablk_ctx *req_ctx = skcipher_request_ctx(req);
4796 + struct buffer_desc src_hook;
4797 + struct device *dev = &pdev->dev;
4798 ++ unsigned int offset;
4799 + gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
4800 + GFP_KERNEL : GFP_ATOMIC;
4801 +
4802 +@@ -885,6 +902,7 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
4803 + return -EAGAIN;
4804 +
4805 + dir = encrypt ? &ctx->encrypt : &ctx->decrypt;
4806 ++ req_ctx->encrypt = encrypt;
4807 +
4808 + crypt = get_crypt_desc();
4809 + if (!crypt)
4810 +@@ -900,6 +918,10 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
4811 +
4812 + BUG_ON(ivsize && !req->iv);
4813 + memcpy(crypt->iv, req->iv, ivsize);
4814 ++ if (ivsize > 0 && !encrypt) {
4815 ++ offset = req->cryptlen - ivsize;
4816 ++ scatterwalk_map_and_copy(req_ctx->iv, req->src, offset, ivsize, 0);
4817 ++ }
4818 + if (req->src != req->dst) {
4819 + struct buffer_desc dst_hook;
4820 + crypt->mode |= NPE_OP_NOT_IN_PLACE;
4821 +diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
4822 +index cc8dd3072b8b7..9b2417ebc95a0 100644
4823 +--- a/drivers/crypto/nx/nx-842-pseries.c
4824 ++++ b/drivers/crypto/nx/nx-842-pseries.c
4825 +@@ -538,13 +538,15 @@ static int nx842_OF_set_defaults(struct nx842_devdata *devdata)
4826 + * The status field indicates if the device is enabled when the status
4827 + * is 'okay'. Otherwise the device driver will be disabled.
4828 + *
4829 +- * @prop - struct property point containing the maxsyncop for the update
4830 ++ * @devdata: struct nx842_devdata to use for dev_info
4831 ++ * @prop: struct property point containing the maxsyncop for the update
4832 + *
4833 + * Returns:
4834 + * 0 - Device is available
4835 + * -ENODEV - Device is not available
4836 + */
4837 +-static int nx842_OF_upd_status(struct property *prop)
4838 ++static int nx842_OF_upd_status(struct nx842_devdata *devdata,
4839 ++ struct property *prop)
4840 + {
4841 + const char *status = (const char *)prop->value;
4842 +
4843 +@@ -758,7 +760,7 @@ static int nx842_OF_upd(struct property *new_prop)
4844 + goto out;
4845 +
4846 + /* Perform property updates */
4847 +- ret = nx842_OF_upd_status(status);
4848 ++ ret = nx842_OF_upd_status(new_devdata, status);
4849 + if (ret)
4850 + goto error_out;
4851 +
4852 +@@ -1069,6 +1071,7 @@ static const struct vio_device_id nx842_vio_driver_ids[] = {
4853 + {"ibm,compression-v1", "ibm,compression"},
4854 + {"", ""},
4855 + };
4856 ++MODULE_DEVICE_TABLE(vio, nx842_vio_driver_ids);
4857 +
4858 + static struct vio_driver nx842_vio_driver = {
4859 + .name = KBUILD_MODNAME,
4860 +diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c
4861 +index 6d5ce1a66f1ee..02ad26012c665 100644
4862 +--- a/drivers/crypto/nx/nx-aes-ctr.c
4863 ++++ b/drivers/crypto/nx/nx-aes-ctr.c
4864 +@@ -118,7 +118,7 @@ static int ctr3686_aes_nx_crypt(struct skcipher_request *req)
4865 + struct nx_crypto_ctx *nx_ctx = crypto_skcipher_ctx(tfm);
4866 + u8 iv[16];
4867 +
4868 +- memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_IV_SIZE);
4869 ++ memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_NONCE_SIZE);
4870 + memcpy(iv + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE);
4871 + iv[12] = iv[13] = iv[14] = 0;
4872 + iv[15] = 1;
4873 +diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
4874 +index ae0d320d3c60d..dd53ad9987b0d 100644
4875 +--- a/drivers/crypto/omap-sham.c
4876 ++++ b/drivers/crypto/omap-sham.c
4877 +@@ -372,7 +372,7 @@ static int omap_sham_hw_init(struct omap_sham_dev *dd)
4878 + {
4879 + int err;
4880 +
4881 +- err = pm_runtime_get_sync(dd->dev);
4882 ++ err = pm_runtime_resume_and_get(dd->dev);
4883 + if (err < 0) {
4884 + dev_err(dd->dev, "failed to get sync: %d\n", err);
4885 + return err;
4886 +@@ -2244,7 +2244,7 @@ static int omap_sham_suspend(struct device *dev)
4887 +
4888 + static int omap_sham_resume(struct device *dev)
4889 + {
4890 +- int err = pm_runtime_get_sync(dev);
4891 ++ int err = pm_runtime_resume_and_get(dev);
4892 + if (err < 0) {
4893 + dev_err(dev, "failed to get sync: %d\n", err);
4894 + return err;
4895 +diff --git a/drivers/crypto/qat/qat_common/qat_hal.c b/drivers/crypto/qat/qat_common/qat_hal.c
4896 +index bd3028126cbe6..069f51621f0e8 100644
4897 +--- a/drivers/crypto/qat/qat_common/qat_hal.c
4898 ++++ b/drivers/crypto/qat/qat_common/qat_hal.c
4899 +@@ -1417,7 +1417,11 @@ static int qat_hal_put_rel_wr_xfer(struct icp_qat_fw_loader_handle *handle,
4900 + pr_err("QAT: bad xfrAddr=0x%x\n", xfr_addr);
4901 + return -EINVAL;
4902 + }
4903 +- qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval);
4904 ++ status = qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval);
4905 ++ if (status) {
4906 ++ pr_err("QAT: failed to read register");
4907 ++ return status;
4908 ++ }
4909 + gpr_addr = qat_hal_get_reg_addr(ICP_GPB_REL, gprnum);
4910 + data16low = 0xffff & data;
4911 + data16hi = 0xffff & (data >> 0x10);
4912 +diff --git a/drivers/crypto/qat/qat_common/qat_uclo.c b/drivers/crypto/qat/qat_common/qat_uclo.c
4913 +index 1fb5fc852f6b8..6d95160e451e5 100644
4914 +--- a/drivers/crypto/qat/qat_common/qat_uclo.c
4915 ++++ b/drivers/crypto/qat/qat_common/qat_uclo.c
4916 +@@ -342,7 +342,6 @@ static int qat_uclo_init_umem_seg(struct icp_qat_fw_loader_handle *handle,
4917 + return 0;
4918 + }
4919 +
4920 +-#define ICP_DH895XCC_PESRAM_BAR_SIZE 0x80000
4921 + static int qat_uclo_init_ae_memory(struct icp_qat_fw_loader_handle *handle,
4922 + struct icp_qat_uof_initmem *init_mem)
4923 + {
4924 +diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
4925 +index a2d3da0ad95f3..d8053789c8828 100644
4926 +--- a/drivers/crypto/qce/skcipher.c
4927 ++++ b/drivers/crypto/qce/skcipher.c
4928 +@@ -71,7 +71,7 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
4929 + struct scatterlist *sg;
4930 + bool diff_dst;
4931 + gfp_t gfp;
4932 +- int ret;
4933 ++ int dst_nents, src_nents, ret;
4934 +
4935 + rctx->iv = req->iv;
4936 + rctx->ivsize = crypto_skcipher_ivsize(skcipher);
4937 +@@ -122,21 +122,26 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
4938 + sg_mark_end(sg);
4939 + rctx->dst_sg = rctx->dst_tbl.sgl;
4940 +
4941 +- ret = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
4942 +- if (ret < 0)
4943 ++ dst_nents = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
4944 ++ if (dst_nents < 0) {
4945 ++ ret = dst_nents;
4946 + goto error_free;
4947 ++ }
4948 +
4949 + if (diff_dst) {
4950 +- ret = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
4951 +- if (ret < 0)
4952 ++ src_nents = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
4953 ++ if (src_nents < 0) {
4954 ++ ret = src_nents;
4955 + goto error_unmap_dst;
4956 ++ }
4957 + rctx->src_sg = req->src;
4958 + } else {
4959 + rctx->src_sg = rctx->dst_sg;
4960 ++ src_nents = dst_nents - 1;
4961 + }
4962 +
4963 +- ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, rctx->src_nents,
4964 +- rctx->dst_sg, rctx->dst_nents,
4965 ++ ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, src_nents,
4966 ++ rctx->dst_sg, dst_nents,
4967 + qce_skcipher_done, async_req);
4968 + if (ret)
4969 + goto error_unmap_src;
4970 +diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
4971 +index b0f0502a5bb0f..ba116670ef8c8 100644
4972 +--- a/drivers/crypto/sa2ul.c
4973 ++++ b/drivers/crypto/sa2ul.c
4974 +@@ -2275,9 +2275,9 @@ static int sa_dma_init(struct sa_crypto_data *dd)
4975 +
4976 + dd->dma_rx2 = dma_request_chan(dd->dev, "rx2");
4977 + if (IS_ERR(dd->dma_rx2)) {
4978 +- dma_release_channel(dd->dma_rx1);
4979 +- return dev_err_probe(dd->dev, PTR_ERR(dd->dma_rx2),
4980 +- "Unable to request rx2 DMA channel\n");
4981 ++ ret = dev_err_probe(dd->dev, PTR_ERR(dd->dma_rx2),
4982 ++ "Unable to request rx2 DMA channel\n");
4983 ++ goto err_dma_rx2;
4984 + }
4985 +
4986 + dd->dma_tx = dma_request_chan(dd->dev, "tx");
4987 +@@ -2298,28 +2298,31 @@ static int sa_dma_init(struct sa_crypto_data *dd)
4988 + if (ret) {
4989 + dev_err(dd->dev, "can't configure IN dmaengine slave: %d\n",
4990 + ret);
4991 +- return ret;
4992 ++ goto err_dma_config;
4993 + }
4994 +
4995 + ret = dmaengine_slave_config(dd->dma_rx2, &cfg);
4996 + if (ret) {
4997 + dev_err(dd->dev, "can't configure IN dmaengine slave: %d\n",
4998 + ret);
4999 +- return ret;
5000 ++ goto err_dma_config;
5001 + }
5002 +
5003 + ret = dmaengine_slave_config(dd->dma_tx, &cfg);
5004 + if (ret) {
5005 + dev_err(dd->dev, "can't configure OUT dmaengine slave: %d\n",
5006 + ret);
5007 +- return ret;
5008 ++ goto err_dma_config;
5009 + }
5010 +
5011 + return 0;
5012 +
5013 ++err_dma_config:
5014 ++ dma_release_channel(dd->dma_tx);
5015 + err_dma_tx:
5016 +- dma_release_channel(dd->dma_rx1);
5017 + dma_release_channel(dd->dma_rx2);
5018 ++err_dma_rx2:
5019 ++ dma_release_channel(dd->dma_rx1);
5020 +
5021 + return ret;
5022 + }
5023 +@@ -2358,13 +2361,14 @@ static int sa_ul_probe(struct platform_device *pdev)
5024 + if (ret < 0) {
5025 + dev_err(&pdev->dev, "%s: failed to get sync: %d\n", __func__,
5026 + ret);
5027 ++ pm_runtime_disable(dev);
5028 + return ret;
5029 + }
5030 +
5031 + sa_init_mem(dev_data);
5032 + ret = sa_dma_init(dev_data);
5033 + if (ret)
5034 +- goto disable_pm_runtime;
5035 ++ goto destroy_dma_pool;
5036 +
5037 + spin_lock_init(&dev_data->scid_lock);
5038 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
5039 +@@ -2394,9 +2398,9 @@ release_dma:
5040 + dma_release_channel(dev_data->dma_rx1);
5041 + dma_release_channel(dev_data->dma_tx);
5042 +
5043 ++destroy_dma_pool:
5044 + dma_pool_destroy(dev_data->sc_pool);
5045 +
5046 +-disable_pm_runtime:
5047 + pm_runtime_put_sync(&pdev->dev);
5048 + pm_runtime_disable(&pdev->dev);
5049 +
5050 +diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c
5051 +index da284b0ea1b26..243515df609bd 100644
5052 +--- a/drivers/crypto/ux500/hash/hash_core.c
5053 ++++ b/drivers/crypto/ux500/hash/hash_core.c
5054 +@@ -1010,6 +1010,7 @@ static int hash_hw_final(struct ahash_request *req)
5055 + goto out;
5056 + }
5057 + } else if (req->nbytes == 0 && ctx->keylen > 0) {
5058 ++ ret = -EPERM;
5059 + dev_err(device_data->dev, "%s: Empty message with keylength > 0, NOT supported\n",
5060 + __func__);
5061 + goto out;
5062 +diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
5063 +index 59ba59bea0f54..db1bc8cf92766 100644
5064 +--- a/drivers/devfreq/devfreq.c
5065 ++++ b/drivers/devfreq/devfreq.c
5066 +@@ -822,6 +822,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
5067 + if (devfreq->profile->timer < 0
5068 + || devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
5069 + mutex_unlock(&devfreq->lock);
5070 ++ err = -EINVAL;
5071 + goto err_dev;
5072 + }
5073 +
5074 +diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
5075 +index b094132bd20b3..fc09324a03e03 100644
5076 +--- a/drivers/devfreq/governor_passive.c
5077 ++++ b/drivers/devfreq/governor_passive.c
5078 +@@ -65,7 +65,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
5079 + dev_pm_opp_put(p_opp);
5080 +
5081 + if (IS_ERR(opp))
5082 +- return PTR_ERR(opp);
5083 ++ goto no_required_opp;
5084 +
5085 + *freq = dev_pm_opp_get_freq(opp);
5086 + dev_pm_opp_put(opp);
5087 +@@ -73,6 +73,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
5088 + return 0;
5089 + }
5090 +
5091 ++no_required_opp:
5092 + /*
5093 + * Get the OPP table's index of decided frequency by governor
5094 + * of parent device.
5095 +diff --git a/drivers/edac/Kconfig b/drivers/edac/Kconfig
5096 +index 27d0c4cdc58d5..9b21e45debc22 100644
5097 +--- a/drivers/edac/Kconfig
5098 ++++ b/drivers/edac/Kconfig
5099 +@@ -270,7 +270,8 @@ config EDAC_PND2
5100 +
5101 + config EDAC_IGEN6
5102 + tristate "Intel client SoC Integrated MC"
5103 +- depends on PCI && X86_64 && PCI_MMCONFIG && ARCH_HAVE_NMI_SAFE_CMPXCHG
5104 ++ depends on PCI && PCI_MMCONFIG && ARCH_HAVE_NMI_SAFE_CMPXCHG
5105 ++	depends on X86_64 && X86_MCE_INTEL
5106 + help
5107 + Support for error detection and correction on the Intel
5108 + client SoC Integrated Memory Controller using In-Band ECC IP.
5109 +diff --git a/drivers/edac/aspeed_edac.c b/drivers/edac/aspeed_edac.c
5110 +index a46da56d6d544..6bd5f88159193 100644
5111 +--- a/drivers/edac/aspeed_edac.c
5112 ++++ b/drivers/edac/aspeed_edac.c
5113 +@@ -254,8 +254,8 @@ static int init_csrows(struct mem_ctl_info *mci)
5114 + return rc;
5115 + }
5116 +
5117 +- dev_dbg(mci->pdev, "dt: /memory node resources: first page r.start=0x%x, resource_size=0x%x, PAGE_SHIFT macro=0x%x\n",
5118 +- r.start, resource_size(&r), PAGE_SHIFT);
5119 ++ dev_dbg(mci->pdev, "dt: /memory node resources: first page %pR, PAGE_SHIFT macro=0x%x\n",
5120 ++ &r, PAGE_SHIFT);
5121 +
5122 + csrow->first_page = r.start >> PAGE_SHIFT;
5123 + nr_pages = resource_size(&r) >> PAGE_SHIFT;
5124 +diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
5125 +index 238a4ad1e526e..37b4e875420e4 100644
5126 +--- a/drivers/edac/i10nm_base.c
5127 ++++ b/drivers/edac/i10nm_base.c
5128 +@@ -278,6 +278,9 @@ static int __init i10nm_init(void)
5129 + if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
5130 + return -EBUSY;
5131 +
5132 ++ if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
5133 ++ return -ENODEV;
5134 ++
5135 + id = x86_match_cpu(i10nm_cpuids);
5136 + if (!id)
5137 + return -ENODEV;
5138 +diff --git a/drivers/edac/pnd2_edac.c b/drivers/edac/pnd2_edac.c
5139 +index 928f63a374c78..c94ca1f790c43 100644
5140 +--- a/drivers/edac/pnd2_edac.c
5141 ++++ b/drivers/edac/pnd2_edac.c
5142 +@@ -1554,6 +1554,9 @@ static int __init pnd2_init(void)
5143 + if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
5144 + return -EBUSY;
5145 +
5146 ++ if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
5147 ++ return -ENODEV;
5148 ++
5149 + id = x86_match_cpu(pnd2_cpuids);
5150 + if (!id)
5151 + return -ENODEV;
5152 +diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
5153 +index 93daa4297f2e0..4c626fcd4dcbb 100644
5154 +--- a/drivers/edac/sb_edac.c
5155 ++++ b/drivers/edac/sb_edac.c
5156 +@@ -3510,6 +3510,9 @@ static int __init sbridge_init(void)
5157 + if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
5158 + return -EBUSY;
5159 +
5160 ++ if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
5161 ++ return -ENODEV;
5162 ++
5163 + id = x86_match_cpu(sbridge_cpuids);
5164 + if (!id)
5165 + return -ENODEV;
5166 +diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
5167 +index 6a4f0b27c6545..4dbd46575bfb4 100644
5168 +--- a/drivers/edac/skx_base.c
5169 ++++ b/drivers/edac/skx_base.c
5170 +@@ -656,6 +656,9 @@ static int __init skx_init(void)
5171 + if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
5172 + return -EBUSY;
5173 +
5174 ++ if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
5175 ++ return -ENODEV;
5176 ++
5177 + id = x86_match_cpu(skx_cpuids);
5178 + if (!id)
5179 + return -ENODEV;
5180 +diff --git a/drivers/edac/ti_edac.c b/drivers/edac/ti_edac.c
5181 +index e7eae20f83d1d..169f96e51c293 100644
5182 +--- a/drivers/edac/ti_edac.c
5183 ++++ b/drivers/edac/ti_edac.c
5184 +@@ -197,6 +197,7 @@ static const struct of_device_id ti_edac_of_match[] = {
5185 + { .compatible = "ti,emif-dra7xx", .data = (void *)EMIF_TYPE_DRA7 },
5186 + {},
5187 + };
5188 ++MODULE_DEVICE_TABLE(of, ti_edac_of_match);
5189 +
5190 + static int _emif_get_id(struct device_node *node)
5191 + {
5192 +diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c
5193 +index 337b0eea4e629..64008808675ef 100644
5194 +--- a/drivers/extcon/extcon-max8997.c
5195 ++++ b/drivers/extcon/extcon-max8997.c
5196 +@@ -729,7 +729,7 @@ static int max8997_muic_probe(struct platform_device *pdev)
5197 + 2, info->status);
5198 + if (ret) {
5199 + dev_err(info->dev, "failed to read MUIC register\n");
5200 +- return ret;
5201 ++ goto err_irq;
5202 + }
5203 + cable_type = max8997_muic_get_cable_type(info,
5204 + MAX8997_CABLE_GROUP_ADC, &attached);
5205 +@@ -784,3 +784,4 @@ module_platform_driver(max8997_muic_driver);
5206 + MODULE_DESCRIPTION("Maxim MAX8997 Extcon driver");
5207 + MODULE_AUTHOR("Donggeun Kim <dg77.kim@×××××××.com>");
5208 + MODULE_LICENSE("GPL");
5209 ++MODULE_ALIAS("platform:max8997-muic");
5210 +diff --git a/drivers/extcon/extcon-sm5502.c b/drivers/extcon/extcon-sm5502.c
5211 +index 106d4da647bd9..5e0718dee03bc 100644
5212 +--- a/drivers/extcon/extcon-sm5502.c
5213 ++++ b/drivers/extcon/extcon-sm5502.c
5214 +@@ -88,7 +88,6 @@ static struct reg_data sm5502_reg_data[] = {
5215 + | SM5502_REG_INTM2_MHL_MASK,
5216 + .invert = true,
5217 + },
5218 +- { }
5219 + };
5220 +
5221 + /* List of detectable cables */
5222 +diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
5223 +index 3aa489dba30a7..2a7687911c097 100644
5224 +--- a/drivers/firmware/stratix10-svc.c
5225 ++++ b/drivers/firmware/stratix10-svc.c
5226 +@@ -1034,24 +1034,32 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
5227 +
5228 + /* add svc client device(s) */
5229 + svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL);
5230 +- if (!svc)
5231 +- return -ENOMEM;
5232 ++ if (!svc) {
5233 ++ ret = -ENOMEM;
5234 ++ goto err_free_kfifo;
5235 ++ }
5236 +
5237 + svc->stratix10_svc_rsu = platform_device_alloc(STRATIX10_RSU, 0);
5238 + if (!svc->stratix10_svc_rsu) {
5239 + dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU);
5240 +- return -ENOMEM;
5241 ++ ret = -ENOMEM;
5242 ++ goto err_free_kfifo;
5243 + }
5244 +
5245 + ret = platform_device_add(svc->stratix10_svc_rsu);
5246 +- if (ret) {
5247 +- platform_device_put(svc->stratix10_svc_rsu);
5248 +- return ret;
5249 +- }
5250 ++ if (ret)
5251 ++ goto err_put_device;
5252 ++
5253 + dev_set_drvdata(dev, svc);
5254 +
5255 + pr_info("Intel Service Layer Driver Initialized\n");
5256 +
5257 ++ return 0;
5258 ++
5259 ++err_put_device:
5260 ++ platform_device_put(svc->stratix10_svc_rsu);
5261 ++err_free_kfifo:
5262 ++ kfifo_free(&controller->svc_fifo);
5263 + return ret;
5264 + }
5265 +
5266 +diff --git a/drivers/fsi/fsi-core.c b/drivers/fsi/fsi-core.c
5267 +index 4e60e84cd17a5..59ddc9fd5bca4 100644
5268 +--- a/drivers/fsi/fsi-core.c
5269 ++++ b/drivers/fsi/fsi-core.c
5270 +@@ -724,7 +724,7 @@ static ssize_t cfam_read(struct file *filep, char __user *buf, size_t count,
5271 + rc = count;
5272 + fail:
5273 + *offset = off;
5274 +- return count;
5275 ++ return rc;
5276 + }
5277 +
5278 + static ssize_t cfam_write(struct file *filep, const char __user *buf,
5279 +@@ -761,7 +761,7 @@ static ssize_t cfam_write(struct file *filep, const char __user *buf,
5280 + rc = count;
5281 + fail:
5282 + *offset = off;
5283 +- return count;
5284 ++ return rc;
5285 + }
5286 +
5287 + static loff_t cfam_llseek(struct file *file, loff_t offset, int whence)
5288 +diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c
5289 +index 10ca2e290655b..cb05b6dacc9d5 100644
5290 +--- a/drivers/fsi/fsi-occ.c
5291 ++++ b/drivers/fsi/fsi-occ.c
5292 +@@ -495,6 +495,7 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
5293 + goto done;
5294 +
5295 + if (resp->return_status == OCC_RESP_CMD_IN_PRG ||
5296 ++ resp->return_status == OCC_RESP_CRIT_INIT ||
5297 + resp->seq_no != seq_no) {
5298 + rc = -ETIMEDOUT;
5299 +
5300 +diff --git a/drivers/fsi/fsi-sbefifo.c b/drivers/fsi/fsi-sbefifo.c
5301 +index bfd5e5da80209..84cb965bfed5c 100644
5302 +--- a/drivers/fsi/fsi-sbefifo.c
5303 ++++ b/drivers/fsi/fsi-sbefifo.c
5304 +@@ -325,7 +325,8 @@ static int sbefifo_up_write(struct sbefifo *sbefifo, __be32 word)
5305 + static int sbefifo_request_reset(struct sbefifo *sbefifo)
5306 + {
5307 + struct device *dev = &sbefifo->fsi_dev->dev;
5308 +- u32 status, timeout;
5309 ++ unsigned long end_time;
5310 ++ u32 status;
5311 + int rc;
5312 +
5313 + dev_dbg(dev, "Requesting FIFO reset\n");
5314 +@@ -341,7 +342,8 @@ static int sbefifo_request_reset(struct sbefifo *sbefifo)
5315 + }
5316 +
5317 + /* Wait for it to complete */
5318 +- for (timeout = 0; timeout < SBEFIFO_RESET_TIMEOUT; timeout++) {
5319 ++ end_time = jiffies + msecs_to_jiffies(SBEFIFO_RESET_TIMEOUT);
5320 ++ while (!time_after(jiffies, end_time)) {
5321 + rc = sbefifo_regr(sbefifo, SBEFIFO_UP | SBEFIFO_STS, &status);
5322 + if (rc) {
5323 + dev_err(dev, "Failed to read UP fifo status during reset"
5324 +@@ -355,7 +357,7 @@ static int sbefifo_request_reset(struct sbefifo *sbefifo)
5325 + return 0;
5326 + }
5327 +
5328 +- msleep(1);
5329 ++ cond_resched();
5330 + }
5331 + dev_err(dev, "FIFO reset timed out\n");
5332 +
5333 +@@ -400,7 +402,7 @@ static int sbefifo_cleanup_hw(struct sbefifo *sbefifo)
5334 + /* The FIFO already contains a reset request from the SBE ? */
5335 + if (down_status & SBEFIFO_STS_RESET_REQ) {
5336 + dev_info(dev, "Cleanup: FIFO reset request set, resetting\n");
5337 +- rc = sbefifo_regw(sbefifo, SBEFIFO_UP, SBEFIFO_PERFORM_RESET);
5338 ++ rc = sbefifo_regw(sbefifo, SBEFIFO_DOWN, SBEFIFO_PERFORM_RESET);
5339 + if (rc) {
5340 + sbefifo->broken = true;
5341 + dev_err(dev, "Cleanup: Reset reg write failed, rc=%d\n", rc);
5342 +diff --git a/drivers/fsi/fsi-scom.c b/drivers/fsi/fsi-scom.c
5343 +index b45bfab7b7f55..75d1389e2626d 100644
5344 +--- a/drivers/fsi/fsi-scom.c
5345 ++++ b/drivers/fsi/fsi-scom.c
5346 +@@ -38,9 +38,10 @@
5347 + #define SCOM_STATUS_PIB_RESP_MASK 0x00007000
5348 + #define SCOM_STATUS_PIB_RESP_SHIFT 12
5349 +
5350 +-#define SCOM_STATUS_ANY_ERR (SCOM_STATUS_PROTECTION | \
5351 +- SCOM_STATUS_PARITY | \
5352 +- SCOM_STATUS_PIB_ABORT | \
5353 ++#define SCOM_STATUS_FSI2PIB_ERROR (SCOM_STATUS_PROTECTION | \
5354 ++ SCOM_STATUS_PARITY | \
5355 ++ SCOM_STATUS_PIB_ABORT)
5356 ++#define SCOM_STATUS_ANY_ERR (SCOM_STATUS_FSI2PIB_ERROR | \
5357 + SCOM_STATUS_PIB_RESP_MASK)
5358 + /* SCOM address encodings */
5359 + #define XSCOM_ADDR_IND_FLAG BIT_ULL(63)
5360 +@@ -240,13 +241,14 @@ static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status)
5361 + {
5362 + uint32_t dummy = -1;
5363 +
5364 +- if (status & SCOM_STATUS_PROTECTION)
5365 +- return -EPERM;
5366 +- if (status & SCOM_STATUS_PARITY) {
5367 ++ if (status & SCOM_STATUS_FSI2PIB_ERROR)
5368 + fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
5369 + sizeof(uint32_t));
5370 ++
5371 ++ if (status & SCOM_STATUS_PROTECTION)
5372 ++ return -EPERM;
5373 ++ if (status & SCOM_STATUS_PARITY)
5374 + return -EIO;
5375 +- }
5376 + /* Return -EBUSY on PIB abort to force a retry */
5377 + if (status & SCOM_STATUS_PIB_ABORT)
5378 + return -EBUSY;
5379 +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
5380 +index eed4946305834..0858e0c7b7a1d 100644
5381 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
5382 ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
5383 +@@ -28,6 +28,7 @@
5384 +
5385 + #include "dm_services_types.h"
5386 + #include "dc.h"
5387 ++#include "dc_link_dp.h"
5388 + #include "dc/inc/core_types.h"
5389 + #include "dal_asic_id.h"
5390 + #include "dmub/dmub_srv.h"
5391 +@@ -2598,6 +2599,7 @@ static void handle_hpd_rx_irq(void *param)
5392 + enum dc_connection_type new_connection_type = dc_connection_none;
5393 + struct amdgpu_device *adev = drm_to_adev(dev);
5394 + union hpd_irq_data hpd_irq_data;
5395 ++ bool lock_flag = 0;
5396 +
5397 + memset(&hpd_irq_data, 0, sizeof(hpd_irq_data));
5398 +
5399 +@@ -2624,13 +2626,28 @@ static void handle_hpd_rx_irq(void *param)
5400 + }
5401 + }
5402 +
5403 +- mutex_lock(&adev->dm.dc_lock);
5404 ++ /*
5405 ++ * TODO: We need the lock to avoid touching DC state while it's being
5406 ++ * modified during automated compliance testing, or when link loss
5407 ++ * happens. While this should be split into subhandlers and proper
5408 ++ * interfaces to avoid having to conditionally lock like this in the
5409 ++ * outer layer, we need this workaround temporarily to allow MST
5410 ++ * lightup in some scenarios to avoid timeout.
5411 ++ */
5412 ++ if (!amdgpu_in_reset(adev) &&
5413 ++ (hpd_rx_irq_check_link_loss_status(dc_link, &hpd_irq_data) ||
5414 ++ hpd_irq_data.bytes.device_service_irq.bits.AUTOMATED_TEST)) {
5415 ++ mutex_lock(&adev->dm.dc_lock);
5416 ++ lock_flag = 1;
5417 ++ }
5418 ++
5419 + #ifdef CONFIG_DRM_AMD_DC_HDCP
5420 + result = dc_link_handle_hpd_rx_irq(dc_link, &hpd_irq_data, NULL);
5421 + #else
5422 + result = dc_link_handle_hpd_rx_irq(dc_link, NULL, NULL);
5423 + #endif
5424 +- mutex_unlock(&adev->dm.dc_lock);
5425 ++ if (!amdgpu_in_reset(adev) && lock_flag)
5426 ++ mutex_unlock(&adev->dm.dc_lock);
5427 +
5428 + out:
5429 + if (result && !is_mst_root_connector) {
5430 +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
5431 +index 41b09ab22233a..b478129a74774 100644
5432 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
5433 ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
5434 +@@ -270,6 +270,9 @@ dm_dp_mst_detect(struct drm_connector *connector,
5435 + struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
5436 + struct amdgpu_dm_connector *master = aconnector->mst_port;
5437 +
5438 ++ if (drm_connector_is_unregistered(connector))
5439 ++ return connector_status_disconnected;
5440 ++
5441 + return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
5442 + aconnector->port);
5443 + }
5444 +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
5445 +index c1391bfb7a9bc..b85f67341a9a0 100644
5446 +--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
5447 ++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
5448 +@@ -1918,7 +1918,7 @@ enum dc_status read_hpd_rx_irq_data(
5449 + return retval;
5450 + }
5451 +
5452 +-static bool hpd_rx_irq_check_link_loss_status(
5453 ++bool hpd_rx_irq_check_link_loss_status(
5454 + struct dc_link *link,
5455 + union hpd_irq_data *hpd_irq_dpcd_data)
5456 + {
5457 +diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
5458 +index b970a32177aff..28abd30e90a58 100644
5459 +--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
5460 ++++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
5461 +@@ -63,6 +63,10 @@ bool perform_link_training_with_retries(
5462 + struct pipe_ctx *pipe_ctx,
5463 + enum signal_type signal);
5464 +
5465 ++bool hpd_rx_irq_check_link_loss_status(
5466 ++ struct dc_link *link,
5467 ++ union hpd_irq_data *hpd_irq_dpcd_data);
5468 ++
5469 + bool is_mst_supported(struct dc_link *link);
5470 +
5471 + bool detect_dp_sink_caps(struct dc_link *link);
5472 +diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
5473 +index 0ac3c2039c4b1..c29cc7f19863a 100644
5474 +--- a/drivers/gpu/drm/ast/ast_main.c
5475 ++++ b/drivers/gpu/drm/ast/ast_main.c
5476 +@@ -413,7 +413,7 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
5477 +
5478 + pci_set_drvdata(pdev, dev);
5479 +
5480 +- ast->regs = pci_iomap(pdev, 1, 0);
5481 ++ ast->regs = pcim_iomap(pdev, 1, 0);
5482 + if (!ast->regs)
5483 + return ERR_PTR(-EIO);
5484 +
5485 +@@ -429,7 +429,7 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
5486 +
5487 + /* "map" IO regs if the above hasn't done so already */
5488 + if (!ast->ioregs) {
5489 +- ast->ioregs = pci_iomap(pdev, 2, 0);
5490 ++ ast->ioregs = pcim_iomap(pdev, 2, 0);
5491 + if (!ast->ioregs)
5492 + return ERR_PTR(-EIO);
5493 + }
5494 +diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
5495 +index bc60fc4728d70..8d5bae9e745b2 100644
5496 +--- a/drivers/gpu/drm/bridge/Kconfig
5497 ++++ b/drivers/gpu/drm/bridge/Kconfig
5498 +@@ -143,7 +143,7 @@ config DRM_SIL_SII8620
5499 + tristate "Silicon Image SII8620 HDMI/MHL bridge"
5500 + depends on OF
5501 + select DRM_KMS_HELPER
5502 +- imply EXTCON
5503 ++ select EXTCON
5504 + depends on RC_CORE || !RC_CORE
5505 + help
5506 + Silicon Image SII8620 HDMI/MHL bridge chip driver.
5507 +diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
5508 +index 64f0effb52ac1..044acd07c1538 100644
5509 +--- a/drivers/gpu/drm/drm_bridge.c
5510 ++++ b/drivers/gpu/drm/drm_bridge.c
5511 +@@ -522,6 +522,9 @@ void drm_bridge_chain_pre_enable(struct drm_bridge *bridge)
5512 + list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
5513 + if (iter->funcs->pre_enable)
5514 + iter->funcs->pre_enable(iter);
5515 ++
5516 ++ if (iter == bridge)
5517 ++ break;
5518 + }
5519 + }
5520 + EXPORT_SYMBOL(drm_bridge_chain_pre_enable);
5521 +diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
5522 +index 075508051b5fb..8c08c8b36074e 100644
5523 +--- a/drivers/gpu/drm/imx/ipuv3-plane.c
5524 ++++ b/drivers/gpu/drm/imx/ipuv3-plane.c
5525 +@@ -35,7 +35,7 @@ static inline struct ipu_plane *to_ipu_plane(struct drm_plane *p)
5526 + return container_of(p, struct ipu_plane, base);
5527 + }
5528 +
5529 +-static const uint32_t ipu_plane_formats[] = {
5530 ++static const uint32_t ipu_plane_all_formats[] = {
5531 + DRM_FORMAT_ARGB1555,
5532 + DRM_FORMAT_XRGB1555,
5533 + DRM_FORMAT_ABGR1555,
5534 +@@ -72,6 +72,31 @@ static const uint32_t ipu_plane_formats[] = {
5535 + DRM_FORMAT_BGRX8888_A8,
5536 + };
5537 +
5538 ++static const uint32_t ipu_plane_rgb_formats[] = {
5539 ++ DRM_FORMAT_ARGB1555,
5540 ++ DRM_FORMAT_XRGB1555,
5541 ++ DRM_FORMAT_ABGR1555,
5542 ++ DRM_FORMAT_XBGR1555,
5543 ++ DRM_FORMAT_RGBA5551,
5544 ++ DRM_FORMAT_BGRA5551,
5545 ++ DRM_FORMAT_ARGB4444,
5546 ++ DRM_FORMAT_ARGB8888,
5547 ++ DRM_FORMAT_XRGB8888,
5548 ++ DRM_FORMAT_ABGR8888,
5549 ++ DRM_FORMAT_XBGR8888,
5550 ++ DRM_FORMAT_RGBA8888,
5551 ++ DRM_FORMAT_RGBX8888,
5552 ++ DRM_FORMAT_BGRA8888,
5553 ++ DRM_FORMAT_BGRX8888,
5554 ++ DRM_FORMAT_RGB565,
5555 ++ DRM_FORMAT_RGB565_A8,
5556 ++ DRM_FORMAT_BGR565_A8,
5557 ++ DRM_FORMAT_RGB888_A8,
5558 ++ DRM_FORMAT_BGR888_A8,
5559 ++ DRM_FORMAT_RGBX8888_A8,
5560 ++ DRM_FORMAT_BGRX8888_A8,
5561 ++};
5562 ++
5563 + static const uint64_t ipu_format_modifiers[] = {
5564 + DRM_FORMAT_MOD_LINEAR,
5565 + DRM_FORMAT_MOD_INVALID
5566 +@@ -320,10 +345,11 @@ static bool ipu_plane_format_mod_supported(struct drm_plane *plane,
5567 + if (modifier == DRM_FORMAT_MOD_LINEAR)
5568 + return true;
5569 +
5570 +- /* without a PRG there are no supported modifiers */
5571 +- if (!ipu_prg_present(ipu))
5572 +- return false;
5573 +-
5574 ++ /*
5575 ++ * Without a PRG the possible modifiers list only includes the linear
5576 ++ * modifier, so we always take the early return from this function and
5577 ++ * only end up here if the PRG is present.
5578 ++ */
5579 + return ipu_prg_format_supported(ipu, format, modifier);
5580 + }
5581 +
5582 +@@ -822,16 +848,28 @@ struct ipu_plane *ipu_plane_init(struct drm_device *dev, struct ipu_soc *ipu,
5583 + struct ipu_plane *ipu_plane;
5584 + const uint64_t *modifiers = ipu_format_modifiers;
5585 + unsigned int zpos = (type == DRM_PLANE_TYPE_PRIMARY) ? 0 : 1;
5586 ++ unsigned int format_count;
5587 ++ const uint32_t *formats;
5588 + int ret;
5589 +
5590 + DRM_DEBUG_KMS("channel %d, dp flow %d, possible_crtcs=0x%x\n",
5591 + dma, dp, possible_crtcs);
5592 +
5593 ++ if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG) {
5594 ++ formats = ipu_plane_all_formats;
5595 ++ format_count = ARRAY_SIZE(ipu_plane_all_formats);
5596 ++ } else {
5597 ++ formats = ipu_plane_rgb_formats;
5598 ++ format_count = ARRAY_SIZE(ipu_plane_rgb_formats);
5599 ++ }
5600 ++
5601 ++ if (ipu_prg_present(ipu))
5602 ++ modifiers = pre_format_modifiers;
5603 ++
5604 + ipu_plane = drmm_universal_plane_alloc(dev, struct ipu_plane, base,
5605 + possible_crtcs, &ipu_plane_funcs,
5606 +- ipu_plane_formats,
5607 +- ARRAY_SIZE(ipu_plane_formats),
5608 +- modifiers, type, NULL);
5609 ++ formats, format_count, modifiers,
5610 ++ type, NULL);
5611 + if (IS_ERR(ipu_plane)) {
5612 + DRM_ERROR("failed to allocate and initialize %s plane\n",
5613 + zpos ? "overlay" : "primary");
5614 +@@ -842,9 +880,6 @@ struct ipu_plane *ipu_plane_init(struct drm_device *dev, struct ipu_soc *ipu,
5615 + ipu_plane->dma = dma;
5616 + ipu_plane->dp_flow = dp;
5617 +
5618 +- if (ipu_prg_present(ipu))
5619 +- modifiers = pre_format_modifiers;
5620 +-
5621 + drm_plane_helper_add(&ipu_plane->base, &ipu_plane_helper_funcs);
5622 +
5623 + if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG)
5624 +diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
5625 +index 3416e9617ee9a..96f3908e4c5b9 100644
5626 +--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
5627 ++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
5628 +@@ -222,7 +222,7 @@ int dpu_mdss_init(struct drm_device *dev)
5629 + struct msm_drm_private *priv = dev->dev_private;
5630 + struct dpu_mdss *dpu_mdss;
5631 + struct dss_module_power *mp;
5632 +- int ret = 0;
5633 ++ int ret;
5634 + int irq;
5635 +
5636 + dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
5637 +@@ -250,8 +250,10 @@ int dpu_mdss_init(struct drm_device *dev)
5638 + goto irq_domain_error;
5639 +
5640 + irq = platform_get_irq(pdev, 0);
5641 +- if (irq < 0)
5642 ++ if (irq < 0) {
5643 ++ ret = irq;
5644 + goto irq_error;
5645 ++ }
5646 +
5647 + irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
5648 + dpu_mdss);
5649 +@@ -260,7 +262,7 @@ int dpu_mdss_init(struct drm_device *dev)
5650 +
5651 + pm_runtime_enable(dev->dev);
5652 +
5653 +- return ret;
5654 ++ return 0;
5655 +
5656 + irq_error:
5657 + _dpu_mdss_irq_domain_fini(dpu_mdss);
5658 +diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
5659 +index b1a9b1b98f5f6..f4f53f23e331e 100644
5660 +--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
5661 ++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
5662 +@@ -582,10 +582,9 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog)
5663 +
5664 + u32 reftimer = dp_read_aux(catalog, REG_DP_DP_HPD_REFTIMER);
5665 +
5666 +- /* enable HPD interrupts */
5667 ++ /* enable HPD plug and unplug interrupts */
5668 + dp_catalog_hpd_config_intr(dp_catalog,
5669 +- DP_DP_HPD_PLUG_INT_MASK | DP_DP_IRQ_HPD_INT_MASK
5670 +- | DP_DP_HPD_UNPLUG_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, true);
5671 ++ DP_DP_HPD_PLUG_INT_MASK | DP_DP_HPD_UNPLUG_INT_MASK, true);
5672 +
5673 + /* Configure REFTIMER and enable it */
5674 + reftimer |= DP_DP_HPD_REFTIMER_ENABLE;
5675 +diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
5676 +index 1390f3547fde4..2a8955ca70d1a 100644
5677 +--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
5678 ++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
5679 +@@ -1809,6 +1809,61 @@ end:
5680 + return ret;
5681 + }
5682 +
5683 ++int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl)
5684 ++{
5685 ++ struct dp_ctrl_private *ctrl;
5686 ++ struct dp_io *dp_io;
5687 ++ struct phy *phy;
5688 ++ int ret;
5689 ++
5690 ++ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
5691 ++ dp_io = &ctrl->parser->io;
5692 ++ phy = dp_io->phy;
5693 ++
5694 ++ /* set dongle to D3 (power off) mode */
5695 ++ dp_link_psm_config(ctrl->link, &ctrl->panel->link_info, true);
5696 ++
5697 ++ dp_catalog_ctrl_mainlink_ctrl(ctrl->catalog, false);
5698 ++
5699 ++ ret = dp_power_clk_enable(ctrl->power, DP_STREAM_PM, false);
5700 ++ if (ret) {
5701 ++ DRM_ERROR("Failed to disable pixel clocks. ret=%d\n", ret);
5702 ++ return ret;
5703 ++ }
5704 ++
5705 ++ ret = dp_power_clk_enable(ctrl->power, DP_CTRL_PM, false);
5706 ++ if (ret) {
5707 ++ DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret);
5708 ++ return ret;
5709 ++ }
5710 ++
5711 ++ phy_power_off(phy);
5712 ++
5713 ++ /* aux channel down, reinit phy */
5714 ++ phy_exit(phy);
5715 ++ phy_init(phy);
5716 ++
5717 ++ DRM_DEBUG_DP("DP off link/stream done\n");
5718 ++ return ret;
5719 ++}
5720 ++
5721 ++void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl)
5722 ++{
5723 ++ struct dp_ctrl_private *ctrl;
5724 ++ struct dp_io *dp_io;
5725 ++ struct phy *phy;
5726 ++
5727 ++ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
5728 ++ dp_io = &ctrl->parser->io;
5729 ++ phy = dp_io->phy;
5730 ++
5731 ++ dp_catalog_ctrl_reset(ctrl->catalog);
5732 ++
5733 ++ phy_exit(phy);
5734 ++
5735 ++ DRM_DEBUG_DP("DP off phy done\n");
5736 ++}
5737 ++
5738 + int dp_ctrl_off(struct dp_ctrl *dp_ctrl)
5739 + {
5740 + struct dp_ctrl_private *ctrl;
5741 +diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.h b/drivers/gpu/drm/msm/dp/dp_ctrl.h
5742 +index a836bd358447c..25e4f75122522 100644
5743 +--- a/drivers/gpu/drm/msm/dp/dp_ctrl.h
5744 ++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.h
5745 +@@ -23,6 +23,8 @@ int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip, bool reset);
5746 + void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl);
5747 + int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl);
5748 + int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl);
5749 ++int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl);
5750 ++void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl);
5751 + int dp_ctrl_off(struct dp_ctrl *dp_ctrl);
5752 + void dp_ctrl_push_idle(struct dp_ctrl *dp_ctrl);
5753 + void dp_ctrl_isr(struct dp_ctrl *dp_ctrl);
5754 +diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
5755 +index 1784e119269b7..cdec0a367a2cb 100644
5756 +--- a/drivers/gpu/drm/msm/dp/dp_display.c
5757 ++++ b/drivers/gpu/drm/msm/dp/dp_display.c
5758 +@@ -346,6 +346,12 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
5759 + dp->dp_display.max_pclk_khz = DP_MAX_PIXEL_CLK_KHZ;
5760 + dp->dp_display.max_dp_lanes = dp->parser->max_dp_lanes;
5761 +
5762 ++ /*
5763 ++ * set sink to normal operation mode -- D0
5764 ++ * before dpcd read
5765 ++ */
5766 ++ dp_link_psm_config(dp->link, &dp->panel->link_info, false);
5767 ++
5768 + dp_link_reset_phy_params_vx_px(dp->link);
5769 + rc = dp_ctrl_on_link(dp->ctrl);
5770 + if (rc) {
5771 +@@ -414,11 +420,6 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
5772 +
5773 + dp_display_host_init(dp, false);
5774 +
5775 +- /*
5776 +- * set sink to normal operation mode -- D0
5777 +- * before dpcd read
5778 +- */
5779 +- dp_link_psm_config(dp->link, &dp->panel->link_info, false);
5780 + rc = dp_display_process_hpd_high(dp);
5781 + end:
5782 + return rc;
5783 +@@ -579,6 +580,10 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
5784 + dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
5785 + }
5786 +
5787 ++ /* enable HDP irq_hpd/replug interrupt */
5788 ++ dp_catalog_hpd_config_intr(dp->catalog,
5789 ++ DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, true);
5790 ++
5791 + mutex_unlock(&dp->event_mutex);
5792 +
5793 + /* uevent will complete connection part */
5794 +@@ -628,7 +633,26 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
5795 + mutex_lock(&dp->event_mutex);
5796 +
5797 + state = dp->hpd_state;
5798 +- if (state == ST_DISCONNECT_PENDING || state == ST_DISCONNECTED) {
5799 ++
5800 ++ /* disable irq_hpd/replug interrupts */
5801 ++ dp_catalog_hpd_config_intr(dp->catalog,
5802 ++ DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, false);
5803 ++
5804 ++ /* unplugged, no more irq_hpd handle */
5805 ++ dp_del_event(dp, EV_IRQ_HPD_INT);
5806 ++
5807 ++ if (state == ST_DISCONNECTED) {
5808 ++ /* triggered by irq_hdp with sink_count = 0 */
5809 ++ if (dp->link->sink_count == 0) {
5810 ++ dp_ctrl_off_phy(dp->ctrl);
5811 ++ hpd->hpd_high = 0;
5812 ++ dp->core_initialized = false;
5813 ++ }
5814 ++ mutex_unlock(&dp->event_mutex);
5815 ++ return 0;
5816 ++ }
5817 ++
5818 ++ if (state == ST_DISCONNECT_PENDING) {
5819 + mutex_unlock(&dp->event_mutex);
5820 + return 0;
5821 + }
5822 +@@ -642,9 +666,8 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
5823 +
5824 + dp->hpd_state = ST_DISCONNECT_PENDING;
5825 +
5826 +- /* disable HPD plug interrupt until disconnect is done */
5827 +- dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK
5828 +- | DP_DP_IRQ_HPD_INT_MASK, false);
5829 ++ /* disable HPD plug interrupts */
5830 ++ dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false);
5831 +
5832 + hpd->hpd_high = 0;
5833 +
5834 +@@ -660,8 +683,8 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
5835 + /* signal the disconnect event early to ensure proper teardown */
5836 + dp_display_handle_plugged_change(g_dp_display, false);
5837 +
5838 +- dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK |
5839 +- DP_DP_IRQ_HPD_INT_MASK, true);
5840 ++ /* enable HDP plug interrupt to prepare for next plugin */
5841 ++ dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, true);
5842 +
5843 + /* uevent will complete disconnection part */
5844 + mutex_unlock(&dp->event_mutex);
5845 +@@ -692,7 +715,7 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
5846 +
5847 + /* irq_hpd can happen at either connected or disconnected state */
5848 + state = dp->hpd_state;
5849 +- if (state == ST_DISPLAY_OFF) {
5850 ++ if (state == ST_DISPLAY_OFF || state == ST_SUSPENDED) {
5851 + mutex_unlock(&dp->event_mutex);
5852 + return 0;
5853 + }
5854 +@@ -910,9 +933,13 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
5855 +
5856 + dp_display->audio_enabled = false;
5857 +
5858 +- dp_ctrl_off(dp->ctrl);
5859 +-
5860 +- dp->core_initialized = false;
5861 ++ /* triggered by irq_hpd with sink_count = 0 */
5862 ++ if (dp->link->sink_count == 0) {
5863 ++ dp_ctrl_off_link_stream(dp->ctrl);
5864 ++ } else {
5865 ++ dp_ctrl_off(dp->ctrl);
5866 ++ dp->core_initialized = false;
5867 ++ }
5868 +
5869 + dp_display->power_on = false;
5870 +
5871 +diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
5872 +index 18ea1c66de718..f206c53c27e0e 100644
5873 +--- a/drivers/gpu/drm/msm/msm_drv.c
5874 ++++ b/drivers/gpu/drm/msm/msm_drv.c
5875 +@@ -521,6 +521,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
5876 + priv->event_thread[i].worker = kthread_create_worker(0,
5877 + "crtc_event:%d", priv->event_thread[i].crtc_id);
5878 + if (IS_ERR(priv->event_thread[i].worker)) {
5879 ++ ret = PTR_ERR(priv->event_thread[i].worker);
5880 + DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n");
5881 + goto err_msm_uninit;
5882 + }
5883 +diff --git a/drivers/gpu/drm/pl111/Kconfig b/drivers/gpu/drm/pl111/Kconfig
5884 +index 80f6748055e36..3aae387a96af2 100644
5885 +--- a/drivers/gpu/drm/pl111/Kconfig
5886 ++++ b/drivers/gpu/drm/pl111/Kconfig
5887 +@@ -3,6 +3,7 @@ config DRM_PL111
5888 + tristate "DRM Support for PL111 CLCD Controller"
5889 + depends on DRM
5890 + depends on ARM || ARM64 || COMPILE_TEST
5891 ++ depends on VEXPRESS_CONFIG || VEXPRESS_CONFIG=n
5892 + depends on COMMON_CLK
5893 + select DRM_KMS_HELPER
5894 + select DRM_KMS_CMA_HELPER
5895 +diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
5896 +index c04cd5a2553ce..e377bdbff90dd 100644
5897 +--- a/drivers/gpu/drm/qxl/qxl_dumb.c
5898 ++++ b/drivers/gpu/drm/qxl/qxl_dumb.c
5899 +@@ -58,6 +58,8 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
5900 + surf.height = args->height;
5901 + surf.stride = pitch;
5902 + surf.format = format;
5903 ++ surf.data = 0;
5904 ++
5905 + r = qxl_gem_object_create_with_handle(qdev, file_priv,
5906 + QXL_GEM_DOMAIN_SURFACE,
5907 + args->size, &surf, &qobj,
5908 +diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
5909 +index a4a45daf93f2b..6802d9b65f828 100644
5910 +--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
5911 ++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
5912 +@@ -73,6 +73,7 @@ static int cdn_dp_grf_write(struct cdn_dp_device *dp,
5913 + ret = regmap_write(dp->grf, reg, val);
5914 + if (ret) {
5915 + DRM_DEV_ERROR(dp->dev, "Could not write to GRF: %d\n", ret);
5916 ++ clk_disable_unprepare(dp->grf_clk);
5917 + return ret;
5918 + }
5919 +
5920 +diff --git a/drivers/gpu/drm/rockchip/cdn-dp-reg.c b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
5921 +index 9d2163ef4d6e2..33fb4d05c5065 100644
5922 +--- a/drivers/gpu/drm/rockchip/cdn-dp-reg.c
5923 ++++ b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
5924 +@@ -658,7 +658,7 @@ int cdn_dp_config_video(struct cdn_dp_device *dp)
5925 + */
5926 + do {
5927 + tu_size_reg += 2;
5928 +- symbol = tu_size_reg * mode->clock * bit_per_pix;
5929 ++ symbol = (u64)tu_size_reg * mode->clock * bit_per_pix;
5930 + do_div(symbol, dp->max_lanes * link_rate * 8);
5931 + rem = do_div(symbol, 1000);
5932 + if (tu_size_reg > 64) {
5933 +diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
5934 +index 24a71091759cc..d8c47ee3cad37 100644
5935 +--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
5936 ++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
5937 +@@ -692,13 +692,8 @@ static const struct dw_mipi_dsi_phy_ops dw_mipi_dsi_rockchip_phy_ops = {
5938 + .get_timing = dw_mipi_dsi_phy_get_timing,
5939 + };
5940 +
5941 +-static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi,
5942 +- int mux)
5943 ++static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi)
5944 + {
5945 +- if (dsi->cdata->lcdsel_grf_reg)
5946 +- regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
5947 +- mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
5948 +-
5949 + if (dsi->cdata->lanecfg1_grf_reg)
5950 + regmap_write(dsi->grf_regmap, dsi->cdata->lanecfg1_grf_reg,
5951 + dsi->cdata->lanecfg1);
5952 +@@ -712,6 +707,13 @@ static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi,
5953 + dsi->cdata->enable);
5954 + }
5955 +
5956 ++static void dw_mipi_dsi_rockchip_set_lcdsel(struct dw_mipi_dsi_rockchip *dsi,
5957 ++ int mux)
5958 ++{
5959 ++ regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
5960 ++ mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
5961 ++}
5962 ++
5963 + static int
5964 + dw_mipi_dsi_encoder_atomic_check(struct drm_encoder *encoder,
5965 + struct drm_crtc_state *crtc_state,
5966 +@@ -767,9 +769,9 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
5967 + return;
5968 + }
5969 +
5970 +- dw_mipi_dsi_rockchip_config(dsi, mux);
5971 ++ dw_mipi_dsi_rockchip_set_lcdsel(dsi, mux);
5972 + if (dsi->slave)
5973 +- dw_mipi_dsi_rockchip_config(dsi->slave, mux);
5974 ++ dw_mipi_dsi_rockchip_set_lcdsel(dsi->slave, mux);
5975 +
5976 + clk_disable_unprepare(dsi->grf_clk);
5977 + }
5978 +@@ -923,6 +925,24 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
5979 + return ret;
5980 + }
5981 +
5982 ++ /*
5983 ++ * With the GRF clock running, write lane and dual-mode configurations
5984 ++ * that won't change immediately. If we waited until enable() to do
5985 ++ * this, things like panel preparation would not be able to send
5986 ++ * commands over DSI.
5987 ++ */
5988 ++ ret = clk_prepare_enable(dsi->grf_clk);
5989 ++ if (ret) {
5990 ++ DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
5991 ++ return ret;
5992 ++ }
5993 ++
5994 ++ dw_mipi_dsi_rockchip_config(dsi);
5995 ++ if (dsi->slave)
5996 ++ dw_mipi_dsi_rockchip_config(dsi->slave);
5997 ++
5998 ++ clk_disable_unprepare(dsi->grf_clk);
5999 ++
6000 + ret = rockchip_dsi_drm_create_encoder(dsi, drm_dev);
6001 + if (ret) {
6002 + DRM_DEV_ERROR(dev, "Failed to create drm encoder\n");
6003 +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
6004 +index 8d15cabdcb02a..2d10198044c29 100644
6005 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
6006 ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
6007 +@@ -1013,6 +1013,7 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
6008 + VOP_WIN_SET(vop, win, alpha_en, 1);
6009 + } else {
6010 + VOP_WIN_SET(vop, win, src_alpha_ctl, SRC_ALPHA_EN(0));
6011 ++ VOP_WIN_SET(vop, win, alpha_en, 0);
6012 + }
6013 +
6014 + VOP_WIN_SET(vop, win, enable, 1);
6015 +diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
6016 +index 654bc52d9ff39..1a7f24c1ce495 100644
6017 +--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
6018 ++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
6019 +@@ -499,11 +499,11 @@ static int px30_lvds_probe(struct platform_device *pdev,
6020 + if (IS_ERR(lvds->dphy))
6021 + return PTR_ERR(lvds->dphy);
6022 +
6023 +- phy_init(lvds->dphy);
6024 ++ ret = phy_init(lvds->dphy);
6025 + if (ret)
6026 + return ret;
6027 +
6028 +- phy_set_mode(lvds->dphy, PHY_MODE_LVDS);
6029 ++ ret = phy_set_mode(lvds->dphy, PHY_MODE_LVDS);
6030 + if (ret)
6031 + return ret;
6032 +
6033 +diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
6034 +index 76657dcdf9b00..1f36b67cd6ce9 100644
6035 +--- a/drivers/gpu/drm/vc4/vc4_crtc.c
6036 ++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
6037 +@@ -279,14 +279,22 @@ static u32 vc4_crtc_get_fifo_full_level_bits(struct vc4_crtc *vc4_crtc,
6038 + * allows drivers to push pixels to more than one encoder from the
6039 + * same CRTC.
6040 + */
6041 +-static struct drm_encoder *vc4_get_crtc_encoder(struct drm_crtc *crtc)
6042 ++static struct drm_encoder *vc4_get_crtc_encoder(struct drm_crtc *crtc,
6043 ++ struct drm_atomic_state *state,
6044 ++ struct drm_connector_state *(*get_state)(struct drm_atomic_state *state,
6045 ++ struct drm_connector *connector))
6046 + {
6047 + struct drm_connector *connector;
6048 + struct drm_connector_list_iter conn_iter;
6049 +
6050 + drm_connector_list_iter_begin(crtc->dev, &conn_iter);
6051 + drm_for_each_connector_iter(connector, &conn_iter) {
6052 +- if (connector->state->crtc == crtc) {
6053 ++ struct drm_connector_state *conn_state = get_state(state, connector);
6054 ++
6055 ++ if (!conn_state)
6056 ++ continue;
6057 ++
6058 ++ if (conn_state->crtc == crtc) {
6059 + drm_connector_list_iter_end(&conn_iter);
6060 + return connector->encoder;
6061 + }
6062 +@@ -305,16 +313,17 @@ static void vc4_crtc_pixelvalve_reset(struct drm_crtc *crtc)
6063 + CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) | PV_CONTROL_FIFO_CLR);
6064 + }
6065 +
6066 +-static void vc4_crtc_config_pv(struct drm_crtc *crtc)
6067 ++static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_atomic_state *state)
6068 + {
6069 + struct drm_device *dev = crtc->dev;
6070 + struct vc4_dev *vc4 = to_vc4_dev(dev);
6071 +- struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc);
6072 ++ struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, state,
6073 ++ drm_atomic_get_new_connector_state);
6074 + struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
6075 + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
6076 + const struct vc4_pv_data *pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc);
6077 +- struct drm_crtc_state *state = crtc->state;
6078 +- struct drm_display_mode *mode = &state->adjusted_mode;
6079 ++ struct drm_crtc_state *crtc_state = crtc->state;
6080 ++ struct drm_display_mode *mode = &crtc_state->adjusted_mode;
6081 + bool interlace = mode->flags & DRM_MODE_FLAG_INTERLACE;
6082 + u32 pixel_rep = (mode->flags & DRM_MODE_FLAG_DBLCLK) ? 2 : 1;
6083 + bool is_dsi = (vc4_encoder->type == VC4_ENCODER_TYPE_DSI0 ||
6084 +@@ -421,10 +430,10 @@ static void require_hvs_enabled(struct drm_device *dev)
6085 + }
6086 +
6087 + static int vc4_crtc_disable(struct drm_crtc *crtc,
6088 ++ struct drm_encoder *encoder,
6089 + struct drm_atomic_state *state,
6090 + unsigned int channel)
6091 + {
6092 +- struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc);
6093 + struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
6094 + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
6095 + struct drm_device *dev = crtc->dev;
6096 +@@ -465,10 +474,29 @@ static int vc4_crtc_disable(struct drm_crtc *crtc,
6097 + return 0;
6098 + }
6099 +
6100 ++static struct drm_encoder *vc4_crtc_get_encoder_by_type(struct drm_crtc *crtc,
6101 ++ enum vc4_encoder_type type)
6102 ++{
6103 ++ struct drm_encoder *encoder;
6104 ++
6105 ++ drm_for_each_encoder(encoder, crtc->dev) {
6106 ++ struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
6107 ++
6108 ++ if (vc4_encoder->type == type)
6109 ++ return encoder;
6110 ++ }
6111 ++
6112 ++ return NULL;
6113 ++}
6114 ++
6115 + int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
6116 + {
6117 + struct drm_device *drm = crtc->dev;
6118 + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
6119 ++ enum vc4_encoder_type encoder_type;
6120 ++ const struct vc4_pv_data *pv_data;
6121 ++ struct drm_encoder *encoder;
6122 ++ unsigned encoder_sel;
6123 + int channel;
6124 +
6125 + if (!(of_device_is_compatible(vc4_crtc->pdev->dev.of_node,
6126 +@@ -487,7 +515,17 @@ int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
6127 + if (channel < 0)
6128 + return 0;
6129 +
6130 +- return vc4_crtc_disable(crtc, NULL, channel);
6131 ++ encoder_sel = VC4_GET_FIELD(CRTC_READ(PV_CONTROL), PV_CONTROL_CLK_SELECT);
6132 ++ if (WARN_ON(encoder_sel != 0))
6133 ++ return 0;
6134 ++
6135 ++ pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc);
6136 ++ encoder_type = pv_data->encoder_types[encoder_sel];
6137 ++ encoder = vc4_crtc_get_encoder_by_type(crtc, encoder_type);
6138 ++ if (WARN_ON(!encoder))
6139 ++ return 0;
6140 ++
6141 ++ return vc4_crtc_disable(crtc, encoder, NULL, channel);
6142 + }
6143 +
6144 + static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
6145 +@@ -496,6 +534,8 @@ static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
6146 + struct drm_crtc_state *old_state = drm_atomic_get_old_crtc_state(state,
6147 + crtc);
6148 + struct vc4_crtc_state *old_vc4_state = to_vc4_crtc_state(old_state);
6149 ++ struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, state,
6150 ++ drm_atomic_get_old_connector_state);
6151 + struct drm_device *dev = crtc->dev;
6152 +
6153 + require_hvs_enabled(dev);
6154 +@@ -503,7 +543,7 @@ static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
6155 + /* Disable vblank irq handling before crtc is disabled. */
6156 + drm_crtc_vblank_off(crtc);
6157 +
6158 +- vc4_crtc_disable(crtc, state, old_vc4_state->assigned_channel);
6159 ++ vc4_crtc_disable(crtc, encoder, state, old_vc4_state->assigned_channel);
6160 +
6161 + /*
6162 + * Make sure we issue a vblank event after disabling the CRTC if
6163 +@@ -524,7 +564,8 @@ static void vc4_crtc_atomic_enable(struct drm_crtc *crtc,
6164 + {
6165 + struct drm_device *dev = crtc->dev;
6166 + struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
6167 +- struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc);
6168 ++ struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, state,
6169 ++ drm_atomic_get_new_connector_state);
6170 + struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
6171 +
6172 + require_hvs_enabled(dev);
6173 +@@ -539,7 +580,7 @@ static void vc4_crtc_atomic_enable(struct drm_crtc *crtc,
6174 + if (vc4_encoder->pre_crtc_configure)
6175 + vc4_encoder->pre_crtc_configure(encoder, state);
6176 +
6177 +- vc4_crtc_config_pv(crtc);
6178 ++ vc4_crtc_config_pv(crtc, state);
6179 +
6180 + CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) | PV_CONTROL_EN);
6181 +
6182 +diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
6183 +index 8106b5634fe10..e94730beb15b7 100644
6184 +--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
6185 ++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
6186 +@@ -2000,7 +2000,7 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
6187 + &hpd_gpio_flags);
6188 + if (vc4_hdmi->hpd_gpio < 0) {
6189 + ret = vc4_hdmi->hpd_gpio;
6190 +- goto err_unprepare_hsm;
6191 ++ goto err_put_ddc;
6192 + }
6193 +
6194 + vc4_hdmi->hpd_active_low = hpd_gpio_flags & OF_GPIO_ACTIVE_LOW;
6195 +@@ -2041,8 +2041,8 @@ err_destroy_conn:
6196 + vc4_hdmi_connector_destroy(&vc4_hdmi->connector);
6197 + err_destroy_encoder:
6198 + drm_encoder_cleanup(encoder);
6199 +-err_unprepare_hsm:
6200 + pm_runtime_disable(dev);
6201 ++err_put_ddc:
6202 + put_device(&vc4_hdmi->ddc->dev);
6203 +
6204 + return ret;
6205 +diff --git a/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h b/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
6206 +index 4db25bd9fa22d..127eaf0a0a580 100644
6207 +--- a/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
6208 ++++ b/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
6209 +@@ -1467,6 +1467,7 @@ struct svga3dsurface_cache {
6210 +
6211 + /**
6212 + * struct svga3dsurface_loc - Surface location
6213 ++ * @sheet: The multisample sheet.
6214 + * @sub_resource: Surface subresource. Defined as layer * num_mip_levels +
6215 + * mip_level.
6216 + * @x: X coordinate.
6217 +@@ -1474,6 +1475,7 @@ struct svga3dsurface_cache {
6218 + * @z: Z coordinate.
6219 + */
6220 + struct svga3dsurface_loc {
6221 ++ u32 sheet;
6222 + u32 sub_resource;
6223 + u32 x, y, z;
6224 + };
6225 +@@ -1566,8 +1568,8 @@ svga3dsurface_get_loc(const struct svga3dsurface_cache *cache,
6226 + u32 layer;
6227 + int i;
6228 +
6229 +- if (offset >= cache->sheet_bytes)
6230 +- offset %= cache->sheet_bytes;
6231 ++ loc->sheet = offset / cache->sheet_bytes;
6232 ++ offset -= loc->sheet * cache->sheet_bytes;
6233 +
6234 + layer = offset / cache->mip_chain_bytes;
6235 + offset -= layer * cache->mip_chain_bytes;
6236 +@@ -1631,6 +1633,7 @@ svga3dsurface_min_loc(const struct svga3dsurface_cache *cache,
6237 + u32 sub_resource,
6238 + struct svga3dsurface_loc *loc)
6239 + {
6240 ++ loc->sheet = 0;
6241 + loc->sub_resource = sub_resource;
6242 + loc->x = loc->y = loc->z = 0;
6243 + }
6244 +@@ -1652,6 +1655,7 @@ svga3dsurface_max_loc(const struct svga3dsurface_cache *cache,
6245 + const struct drm_vmw_size *size;
6246 + u32 mip;
6247 +
6248 ++ loc->sheet = 0;
6249 + loc->sub_resource = sub_resource + 1;
6250 + mip = sub_resource % cache->num_mip_levels;
6251 + size = &cache->mip[mip].size;
6252 +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
6253 +index 462f173207085..0996c3282ebd6 100644
6254 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
6255 ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
6256 +@@ -2759,12 +2759,24 @@ static int vmw_cmd_dx_genmips(struct vmw_private *dev_priv,
6257 + {
6258 + VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXGenMips) =
6259 + container_of(header, typeof(*cmd), header);
6260 +- struct vmw_resource *ret;
6261 ++ struct vmw_resource *view;
6262 ++ struct vmw_res_cache_entry *rcache;
6263 +
6264 +- ret = vmw_view_id_val_add(sw_context, vmw_view_sr,
6265 +- cmd->body.shaderResourceViewId);
6266 ++ view = vmw_view_id_val_add(sw_context, vmw_view_sr,
6267 ++ cmd->body.shaderResourceViewId);
6268 ++ if (IS_ERR(view))
6269 ++ return PTR_ERR(view);
6270 +
6271 +- return PTR_ERR_OR_ZERO(ret);
6272 ++ /*
6273 ++ * Normally the shader-resource view is not gpu-dirtying, but for
6274 ++ * this particular command it is...
6275 ++ * So mark the last looked-up surface, which is the surface
6276 ++ * the view points to, gpu-dirty.
6277 ++ */
6278 ++ rcache = &sw_context->res_cache[vmw_res_surface];
6279 ++ vmw_validation_res_set_dirty(sw_context->ctx, rcache->private,
6280 ++ VMW_RES_DIRTY_SET);
6281 ++ return 0;
6282 + }
6283 +
6284 + /**
6285 +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
6286 +index f6cab77075a04..9905232172788 100644
6287 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
6288 ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
6289 +@@ -1801,6 +1801,19 @@ static void vmw_surface_tex_dirty_range_add(struct vmw_resource *res,
6290 + svga3dsurface_get_loc(cache, &loc2, end - 1);
6291 + svga3dsurface_inc_loc(cache, &loc2);
6292 +
6293 ++ if (loc1.sheet != loc2.sheet) {
6294 ++ u32 sub_res;
6295 ++
6296 ++ /*
6297 ++ * Multiple multisample sheets. To do this in an optimized
6298 ++ * fashion, compute the dirty region for each sheet and the
6299 ++ * resulting union. Since this is not a common case, just dirty
6300 ++ * the whole surface.
6301 ++ */
6302 ++ for (sub_res = 0; sub_res < dirty->num_subres; ++sub_res)
6303 ++ vmw_subres_dirty_full(dirty, sub_res);
6304 ++ return;
6305 ++ }
6306 + if (loc1.sub_resource + 1 == loc2.sub_resource) {
6307 + /* Dirty range covers a single sub-resource */
6308 + vmw_subres_dirty_add(dirty, &loc1, &loc2);
6309 +diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
6310 +index 0f69f35f2957e..5550c943f9855 100644
6311 +--- a/drivers/hid/hid-core.c
6312 ++++ b/drivers/hid/hid-core.c
6313 +@@ -2306,12 +2306,8 @@ static int hid_device_remove(struct device *dev)
6314 + {
6315 + struct hid_device *hdev = to_hid_device(dev);
6316 + struct hid_driver *hdrv;
6317 +- int ret = 0;
6318 +
6319 +- if (down_interruptible(&hdev->driver_input_lock)) {
6320 +- ret = -EINTR;
6321 +- goto end;
6322 +- }
6323 ++ down(&hdev->driver_input_lock);
6324 + hdev->io_started = false;
6325 +
6326 + hdrv = hdev->driver;
6327 +@@ -2326,8 +2322,8 @@ static int hid_device_remove(struct device *dev)
6328 +
6329 + if (!hdev->io_started)
6330 + up(&hdev->driver_input_lock);
6331 +-end:
6332 +- return ret;
6333 ++
6334 ++ return 0;
6335 + }
6336 +
6337 + static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
6338 +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
6339 +index 03978111d9448..06168f4857225 100644
6340 +--- a/drivers/hid/hid-ids.h
6341 ++++ b/drivers/hid/hid-ids.h
6342 +@@ -397,6 +397,7 @@
6343 + #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755
6344 + #define I2C_DEVICE_ID_HP_SPECTRE_X360_15 0x2817
6345 + #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706
6346 ++#define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A
6347 +
6348 + #define USB_VENDOR_ID_ELECOM 0x056e
6349 + #define USB_DEVICE_ID_ELECOM_BM084 0x0061
6350 +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
6351 +index e982d8173c9c7..bf5e728258c16 100644
6352 +--- a/drivers/hid/hid-input.c
6353 ++++ b/drivers/hid/hid-input.c
6354 +@@ -326,6 +326,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
6355 + HID_BATTERY_QUIRK_IGNORE },
6356 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
6357 + HID_BATTERY_QUIRK_IGNORE },
6358 ++ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
6359 ++ HID_BATTERY_QUIRK_IGNORE },
6360 + {}
6361 + };
6362 +
6363 +diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
6364 +index 8319b0ce385a5..b3722c51ec78a 100644
6365 +--- a/drivers/hid/hid-sony.c
6366 ++++ b/drivers/hid/hid-sony.c
6367 +@@ -597,9 +597,8 @@ struct sony_sc {
6368 + /* DS4 calibration data */
6369 + struct ds4_calibration_data ds4_calib_data[6];
6370 + /* GH Live */
6371 ++ struct urb *ghl_urb;
6372 + struct timer_list ghl_poke_timer;
6373 +- struct usb_ctrlrequest *ghl_cr;
6374 +- u8 *ghl_databuf;
6375 + };
6376 +
6377 + static void sony_set_leds(struct sony_sc *sc);
6378 +@@ -625,66 +624,54 @@ static inline void sony_schedule_work(struct sony_sc *sc,
6379 +
6380 + static void ghl_magic_poke_cb(struct urb *urb)
6381 + {
6382 +- if (urb) {
6383 +- /* Free sc->ghl_cr and sc->ghl_databuf allocated in
6384 +- * ghl_magic_poke()
6385 +- */
6386 +- kfree(urb->setup_packet);
6387 +- kfree(urb->transfer_buffer);
6388 +- }
6389 ++ struct sony_sc *sc = urb->context;
6390 ++
6391 ++ if (urb->status < 0)
6392 ++ hid_err(sc->hdev, "URB transfer failed : %d", urb->status);
6393 ++
6394 ++ mod_timer(&sc->ghl_poke_timer, jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
6395 + }
6396 +
6397 + static void ghl_magic_poke(struct timer_list *t)
6398 + {
6399 ++ int ret;
6400 + struct sony_sc *sc = from_timer(sc, t, ghl_poke_timer);
6401 +
6402 +- int ret;
6403 ++ ret = usb_submit_urb(sc->ghl_urb, GFP_ATOMIC);
6404 ++ if (ret < 0)
6405 ++ hid_err(sc->hdev, "usb_submit_urb failed: %d", ret);
6406 ++}
6407 ++
6408 ++static int ghl_init_urb(struct sony_sc *sc, struct usb_device *usbdev)
6409 ++{
6410 ++ struct usb_ctrlrequest *cr;
6411 ++ u16 poke_size;
6412 ++ u8 *databuf;
6413 + unsigned int pipe;
6414 +- struct urb *urb;
6415 +- struct usb_device *usbdev = to_usb_device(sc->hdev->dev.parent->parent);
6416 +- const u16 poke_size =
6417 +- ARRAY_SIZE(ghl_ps3wiiu_magic_data);
6418 +
6419 ++ poke_size = ARRAY_SIZE(ghl_ps3wiiu_magic_data);
6420 + pipe = usb_sndctrlpipe(usbdev, 0);
6421 +
6422 +- if (!sc->ghl_cr) {
6423 +- sc->ghl_cr = kzalloc(sizeof(*sc->ghl_cr), GFP_ATOMIC);
6424 +- if (!sc->ghl_cr)
6425 +- goto resched;
6426 +- }
6427 +-
6428 +- if (!sc->ghl_databuf) {
6429 +- sc->ghl_databuf = kzalloc(poke_size, GFP_ATOMIC);
6430 +- if (!sc->ghl_databuf)
6431 +- goto resched;
6432 +- }
6433 ++ cr = devm_kzalloc(&sc->hdev->dev, sizeof(*cr), GFP_ATOMIC);
6434 ++ if (cr == NULL)
6435 ++ return -ENOMEM;
6436 +
6437 +- urb = usb_alloc_urb(0, GFP_ATOMIC);
6438 +- if (!urb)
6439 +- goto resched;
6440 ++ databuf = devm_kzalloc(&sc->hdev->dev, poke_size, GFP_ATOMIC);
6441 ++ if (databuf == NULL)
6442 ++ return -ENOMEM;
6443 +
6444 +- sc->ghl_cr->bRequestType =
6445 ++ cr->bRequestType =
6446 + USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT;
6447 +- sc->ghl_cr->bRequest = USB_REQ_SET_CONFIGURATION;
6448 +- sc->ghl_cr->wValue = cpu_to_le16(ghl_ps3wiiu_magic_value);
6449 +- sc->ghl_cr->wIndex = 0;
6450 +- sc->ghl_cr->wLength = cpu_to_le16(poke_size);
6451 +- memcpy(sc->ghl_databuf, ghl_ps3wiiu_magic_data, poke_size);
6452 +-
6453 ++ cr->bRequest = USB_REQ_SET_CONFIGURATION;
6454 ++ cr->wValue = cpu_to_le16(ghl_ps3wiiu_magic_value);
6455 ++ cr->wIndex = 0;
6456 ++ cr->wLength = cpu_to_le16(poke_size);
6457 ++ memcpy(databuf, ghl_ps3wiiu_magic_data, poke_size);
6458 + usb_fill_control_urb(
6459 +- urb, usbdev, pipe,
6460 +- (unsigned char *) sc->ghl_cr, sc->ghl_databuf,
6461 +- poke_size, ghl_magic_poke_cb, NULL);
6462 +- ret = usb_submit_urb(urb, GFP_ATOMIC);
6463 +- if (ret < 0) {
6464 +- kfree(sc->ghl_databuf);
6465 +- kfree(sc->ghl_cr);
6466 +- }
6467 +- usb_free_urb(urb);
6468 +-
6469 +-resched:
6470 +- /* Reschedule for next time */
6471 +- mod_timer(&sc->ghl_poke_timer, jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
6472 ++ sc->ghl_urb, usbdev, pipe,
6473 ++ (unsigned char *) cr, databuf, poke_size,
6474 ++ ghl_magic_poke_cb, sc);
6475 ++ return 0;
6476 + }
6477 +
6478 + static int guitar_mapping(struct hid_device *hdev, struct hid_input *hi,
6479 +@@ -2981,6 +2968,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
6480 + int ret;
6481 + unsigned long quirks = id->driver_data;
6482 + struct sony_sc *sc;
6483 ++ struct usb_device *usbdev;
6484 + unsigned int connect_mask = HID_CONNECT_DEFAULT;
6485 +
6486 + if (!strcmp(hdev->name, "FutureMax Dance Mat"))
6487 +@@ -3000,6 +2988,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
6488 + sc->quirks = quirks;
6489 + hid_set_drvdata(hdev, sc);
6490 + sc->hdev = hdev;
6491 ++ usbdev = to_usb_device(sc->hdev->dev.parent->parent);
6492 +
6493 + ret = hid_parse(hdev);
6494 + if (ret) {
6495 +@@ -3042,6 +3031,15 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
6496 + }
6497 +
6498 + if (sc->quirks & GHL_GUITAR_PS3WIIU) {
6499 ++ sc->ghl_urb = usb_alloc_urb(0, GFP_ATOMIC);
6500 ++ if (!sc->ghl_urb)
6501 ++ return -ENOMEM;
6502 ++ ret = ghl_init_urb(sc, usbdev);
6503 ++ if (ret) {
6504 ++ hid_err(hdev, "error preparing URB\n");
6505 ++ return ret;
6506 ++ }
6507 ++
6508 + timer_setup(&sc->ghl_poke_timer, ghl_magic_poke, 0);
6509 + mod_timer(&sc->ghl_poke_timer,
6510 + jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
6511 +@@ -3054,8 +3052,10 @@ static void sony_remove(struct hid_device *hdev)
6512 + {
6513 + struct sony_sc *sc = hid_get_drvdata(hdev);
6514 +
6515 +- if (sc->quirks & GHL_GUITAR_PS3WIIU)
6516 ++ if (sc->quirks & GHL_GUITAR_PS3WIIU) {
6517 + del_timer_sync(&sc->ghl_poke_timer);
6518 ++ usb_free_urb(sc->ghl_urb);
6519 ++ }
6520 +
6521 + hid_hw_close(hdev);
6522 +
6523 +diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
6524 +index 195910dd2154e..e3835407e8d23 100644
6525 +--- a/drivers/hid/wacom_wac.h
6526 ++++ b/drivers/hid/wacom_wac.h
6527 +@@ -122,7 +122,7 @@
6528 + #define WACOM_HID_WD_TOUCHONOFF (WACOM_HID_UP_WACOMDIGITIZER | 0x0454)
6529 + #define WACOM_HID_WD_BATTERY_LEVEL (WACOM_HID_UP_WACOMDIGITIZER | 0x043b)
6530 + #define WACOM_HID_WD_EXPRESSKEY00 (WACOM_HID_UP_WACOMDIGITIZER | 0x0910)
6531 +-#define WACOM_HID_WD_EXPRESSKEYCAP00 (WACOM_HID_UP_WACOMDIGITIZER | 0x0950)
6532 ++#define WACOM_HID_WD_EXPRESSKEYCAP00 (WACOM_HID_UP_WACOMDIGITIZER | 0x0940)
6533 + #define WACOM_HID_WD_MODE_CHANGE (WACOM_HID_UP_WACOMDIGITIZER | 0x0980)
6534 + #define WACOM_HID_WD_MUTE_DEVICE (WACOM_HID_UP_WACOMDIGITIZER | 0x0981)
6535 + #define WACOM_HID_WD_CONTROLPANEL (WACOM_HID_UP_WACOMDIGITIZER | 0x0982)
6536 +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
6537 +index c83612cddb995..425bf85ed1a03 100644
6538 +--- a/drivers/hv/connection.c
6539 ++++ b/drivers/hv/connection.c
6540 +@@ -229,8 +229,10 @@ int vmbus_connect(void)
6541 + */
6542 +
6543 + for (i = 0; ; i++) {
6544 +- if (i == ARRAY_SIZE(vmbus_versions))
6545 ++ if (i == ARRAY_SIZE(vmbus_versions)) {
6546 ++ ret = -EDOM;
6547 + goto cleanup;
6548 ++ }
6549 +
6550 + version = vmbus_versions[i];
6551 + if (version > max_version)
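The vmbus change above makes the exhausted-search case return an explicit error instead of falling through to cleanup with a stale `ret`. A minimal userspace sketch of that pattern (the function name is invented for illustration; `-EDOM` is reused as in the patch):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Return the first table entry <= max, or -EDOM when the table is exhausted. */
static int pick_version(const unsigned int *table, size_t n, unsigned int max)
{
	for (size_t i = 0; ; i++) {
		if (i == n)
			return -EDOM;	/* ran off the end: report it explicitly */
		if (table[i] <= max)
			return (int)table[i];
	}
}
```

The real `vmbus_connect()` additionally attempts a connection per candidate version, but the error-on-exhaustion shape is the same.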
6552 +diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
6553 +index e4aefeb330daf..136576cba26f5 100644
6554 +--- a/drivers/hv/hv_util.c
6555 ++++ b/drivers/hv/hv_util.c
6556 +@@ -750,8 +750,8 @@ static int hv_timesync_init(struct hv_util_service *srv)
6557 + */
6558 + hv_ptp_clock = ptp_clock_register(&ptp_hyperv_info, NULL);
6559 + if (IS_ERR_OR_NULL(hv_ptp_clock)) {
6560 +- pr_err("cannot register PTP clock: %ld\n",
6561 +- PTR_ERR(hv_ptp_clock));
6562 ++ pr_err("cannot register PTP clock: %d\n",
6563 ++ PTR_ERR_OR_ZERO(hv_ptp_clock));
6564 + hv_ptp_clock = NULL;
6565 + }
6566 +
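The `PTR_ERR_OR_ZERO` change above avoids calling `PTR_ERR` on a pointer that may be NULL rather than an encoded error. A userspace sketch of the kernel's error-pointer convention (names mirror `include/linux/err.h`, but this is an illustrative reimplementation, not the kernel code):

```c
#include <assert.h>
#include <errno.h>

/* Kernel-style error pointers: the top 4095 addresses encode -errno. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}

/* 0 for a valid (or NULL) pointer, the encoded -errno otherwise. */
static inline long PTR_ERR_OR_ZERO(const void *ptr)
{
	return IS_ERR(ptr) ? PTR_ERR(ptr) : 0;
}
```

Since `ptp_clock_register()` can return NULL as well as an error pointer, `PTR_ERR_OR_ZERO` yields a well-defined value in both cases.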
6567 +diff --git a/drivers/hwmon/lm70.c b/drivers/hwmon/lm70.c
6568 +index 40eab3349904b..6b884ea009877 100644
6569 +--- a/drivers/hwmon/lm70.c
6570 ++++ b/drivers/hwmon/lm70.c
6571 +@@ -22,10 +22,10 @@
6572 + #include <linux/hwmon.h>
6573 + #include <linux/mutex.h>
6574 + #include <linux/mod_devicetable.h>
6575 ++#include <linux/of.h>
6576 + #include <linux/property.h>
6577 + #include <linux/spi/spi.h>
6578 + #include <linux/slab.h>
6579 +-#include <linux/acpi.h>
6580 +
6581 + #define DRVNAME "lm70"
6582 +
6583 +@@ -148,29 +148,6 @@ static const struct of_device_id lm70_of_ids[] = {
6584 + MODULE_DEVICE_TABLE(of, lm70_of_ids);
6585 + #endif
6586 +
6587 +-#ifdef CONFIG_ACPI
6588 +-static const struct acpi_device_id lm70_acpi_ids[] = {
6589 +- {
6590 +- .id = "LM000070",
6591 +- .driver_data = LM70_CHIP_LM70,
6592 +- },
6593 +- {
6594 +- .id = "TMP00121",
6595 +- .driver_data = LM70_CHIP_TMP121,
6596 +- },
6597 +- {
6598 +- .id = "LM000071",
6599 +- .driver_data = LM70_CHIP_LM71,
6600 +- },
6601 +- {
6602 +- .id = "LM000074",
6603 +- .driver_data = LM70_CHIP_LM74,
6604 +- },
6605 +- {},
6606 +-};
6607 +-MODULE_DEVICE_TABLE(acpi, lm70_acpi_ids);
6608 +-#endif
6609 +-
6610 + static int lm70_probe(struct spi_device *spi)
6611 + {
6612 + struct device *hwmon_dev;
6613 +@@ -217,7 +194,6 @@ static struct spi_driver lm70_driver = {
6614 + .driver = {
6615 + .name = "lm70",
6616 + .of_match_table = of_match_ptr(lm70_of_ids),
6617 +- .acpi_match_table = ACPI_PTR(lm70_acpi_ids),
6618 + },
6619 + .id_table = lm70_ids,
6620 + .probe = lm70_probe,
6621 +diff --git a/drivers/hwmon/max31722.c b/drivers/hwmon/max31722.c
6622 +index 062eceb7be0db..613338cbcb170 100644
6623 +--- a/drivers/hwmon/max31722.c
6624 ++++ b/drivers/hwmon/max31722.c
6625 +@@ -6,7 +6,6 @@
6626 + * Copyright (c) 2016, Intel Corporation.
6627 + */
6628 +
6629 +-#include <linux/acpi.h>
6630 + #include <linux/hwmon.h>
6631 + #include <linux/hwmon-sysfs.h>
6632 + #include <linux/kernel.h>
6633 +@@ -133,20 +132,12 @@ static const struct spi_device_id max31722_spi_id[] = {
6634 + {"max31723", 0},
6635 + {}
6636 + };
6637 +-
6638 +-static const struct acpi_device_id __maybe_unused max31722_acpi_id[] = {
6639 +- {"MAX31722", 0},
6640 +- {"MAX31723", 0},
6641 +- {}
6642 +-};
6643 +-
6644 + MODULE_DEVICE_TABLE(spi, max31722_spi_id);
6645 +
6646 + static struct spi_driver max31722_driver = {
6647 + .driver = {
6648 + .name = "max31722",
6649 + .pm = &max31722_pm_ops,
6650 +- .acpi_match_table = ACPI_PTR(max31722_acpi_id),
6651 + },
6652 + .probe = max31722_probe,
6653 + .remove = max31722_remove,
6654 +diff --git a/drivers/hwmon/max31790.c b/drivers/hwmon/max31790.c
6655 +index 86e6c71db685c..67677c4377687 100644
6656 +--- a/drivers/hwmon/max31790.c
6657 ++++ b/drivers/hwmon/max31790.c
6658 +@@ -27,6 +27,7 @@
6659 +
6660 + /* Fan Config register bits */
6661 + #define MAX31790_FAN_CFG_RPM_MODE 0x80
6662 ++#define MAX31790_FAN_CFG_CTRL_MON 0x10
6663 + #define MAX31790_FAN_CFG_TACH_INPUT_EN 0x08
6664 + #define MAX31790_FAN_CFG_TACH_INPUT 0x01
6665 +
6666 +@@ -104,7 +105,7 @@ static struct max31790_data *max31790_update_device(struct device *dev)
6667 + data->tach[NR_CHANNEL + i] = rv;
6668 + } else {
6669 + rv = i2c_smbus_read_word_swapped(client,
6670 +- MAX31790_REG_PWMOUT(i));
6671 ++ MAX31790_REG_PWM_DUTY_CYCLE(i));
6672 + if (rv < 0)
6673 + goto abort;
6674 + data->pwm[i] = rv;
6675 +@@ -170,7 +171,7 @@ static int max31790_read_fan(struct device *dev, u32 attr, int channel,
6676 +
6677 + switch (attr) {
6678 + case hwmon_fan_input:
6679 +- sr = get_tach_period(data->fan_dynamics[channel]);
6680 ++ sr = get_tach_period(data->fan_dynamics[channel % NR_CHANNEL]);
6681 + rpm = RPM_FROM_REG(data->tach[channel], sr);
6682 + *val = rpm;
6683 + return 0;
6684 +@@ -271,12 +272,12 @@ static int max31790_read_pwm(struct device *dev, u32 attr, int channel,
6685 + *val = data->pwm[channel] >> 8;
6686 + return 0;
6687 + case hwmon_pwm_enable:
6688 +- if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
6689 ++ if (fan_config & MAX31790_FAN_CFG_CTRL_MON)
6690 ++ *val = 0;
6691 ++ else if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
6692 + *val = 2;
6693 +- else if (fan_config & MAX31790_FAN_CFG_TACH_INPUT_EN)
6694 +- *val = 1;
6695 + else
6696 +- *val = 0;
6697 ++ *val = 1;
6698 + return 0;
6699 + default:
6700 + return -EOPNOTSUPP;
6701 +@@ -299,31 +300,41 @@ static int max31790_write_pwm(struct device *dev, u32 attr, int channel,
6702 + err = -EINVAL;
6703 + break;
6704 + }
6705 +- data->pwm[channel] = val << 8;
6706 ++ data->valid = false;
6707 + err = i2c_smbus_write_word_swapped(client,
6708 + MAX31790_REG_PWMOUT(channel),
6709 +- data->pwm[channel]);
6710 ++ val << 8);
6711 + break;
6712 + case hwmon_pwm_enable:
6713 + fan_config = data->fan_config[channel];
6714 + if (val == 0) {
6715 +- fan_config &= ~(MAX31790_FAN_CFG_TACH_INPUT_EN |
6716 +- MAX31790_FAN_CFG_RPM_MODE);
6717 ++ fan_config |= MAX31790_FAN_CFG_CTRL_MON;
6718 ++ /*
6719 ++ * Disable RPM mode; otherwise disabling fan speed
6720 ++ * monitoring is not possible.
6721 ++ */
6722 ++ fan_config &= ~MAX31790_FAN_CFG_RPM_MODE;
6723 + } else if (val == 1) {
6724 +- fan_config = (fan_config |
6725 +- MAX31790_FAN_CFG_TACH_INPUT_EN) &
6726 +- ~MAX31790_FAN_CFG_RPM_MODE;
6727 ++ fan_config &= ~(MAX31790_FAN_CFG_CTRL_MON | MAX31790_FAN_CFG_RPM_MODE);
6728 + } else if (val == 2) {
6729 +- fan_config |= MAX31790_FAN_CFG_TACH_INPUT_EN |
6730 +- MAX31790_FAN_CFG_RPM_MODE;
6731 ++ fan_config &= ~MAX31790_FAN_CFG_CTRL_MON;
6732 ++ /*
6733 ++ * The chip sets MAX31790_FAN_CFG_TACH_INPUT_EN on its
6734 ++ * own if MAX31790_FAN_CFG_RPM_MODE is set.
6735 ++ * Do it here as well to reflect the actual register
6736 ++ * value in the cache.
6737 ++ */
6738 ++ fan_config |= (MAX31790_FAN_CFG_RPM_MODE | MAX31790_FAN_CFG_TACH_INPUT_EN);
6739 + } else {
6740 + err = -EINVAL;
6741 + break;
6742 + }
6743 +- data->fan_config[channel] = fan_config;
6744 +- err = i2c_smbus_write_byte_data(client,
6745 +- MAX31790_REG_FAN_CONFIG(channel),
6746 +- fan_config);
6747 ++ if (fan_config != data->fan_config[channel]) {
6748 ++ err = i2c_smbus_write_byte_data(client, MAX31790_REG_FAN_CONFIG(channel),
6749 ++ fan_config);
6750 ++ if (!err)
6751 ++ data->fan_config[channel] = fan_config;
6752 ++ }
6753 + break;
6754 + default:
6755 + err = -EOPNOTSUPP;
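The reworked `hwmon_pwm_enable` read path above decodes three states from two config bits, with the new `CTRL_MON` bit taking precedence. The same decision table, sketched as a standalone helper (register constants copied from the patch; the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define MAX31790_FAN_CFG_RPM_MODE	0x80
#define MAX31790_FAN_CFG_CTRL_MON	0x10

/* 0 = monitoring only, 1 = PWM mode, 2 = RPM (closed-loop) mode */
static int pwm_enable_from_config(uint8_t fan_config)
{
	if (fan_config & MAX31790_FAN_CFG_CTRL_MON)
		return 0;
	if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
		return 2;
	return 1;
}
```

Note that `TACH_INPUT_EN` (0x08) no longer participates in the decoding: a config with only that bit set still reads back as PWM mode.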
6756 +diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
6757 +index 0062c89356530..237a8c0d6c244 100644
6758 +--- a/drivers/hwtracing/coresight/coresight-core.c
6759 ++++ b/drivers/hwtracing/coresight/coresight-core.c
6760 +@@ -595,7 +595,7 @@ static struct coresight_device *
6761 + coresight_find_enabled_sink(struct coresight_device *csdev)
6762 + {
6763 + int i;
6764 +- struct coresight_device *sink;
6765 ++ struct coresight_device *sink = NULL;
6766 +
6767 + if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
6768 + csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
6769 +diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c
6770 +index 71f85a3e525b2..9b0018874eec6 100644
6771 +--- a/drivers/iio/accel/bma180.c
6772 ++++ b/drivers/iio/accel/bma180.c
6773 +@@ -55,7 +55,7 @@ struct bma180_part_info {
6774 +
6775 + u8 int_reset_reg, int_reset_mask;
6776 + u8 sleep_reg, sleep_mask;
6777 +- u8 bw_reg, bw_mask;
6778 ++ u8 bw_reg, bw_mask, bw_offset;
6779 + u8 scale_reg, scale_mask;
6780 + u8 power_reg, power_mask, lowpower_val;
6781 + u8 int_enable_reg, int_enable_mask;
6782 +@@ -127,6 +127,7 @@ struct bma180_part_info {
6783 +
6784 + #define BMA250_RANGE_MASK GENMASK(3, 0) /* Range of accel values */
6785 + #define BMA250_BW_MASK GENMASK(4, 0) /* Accel bandwidth */
6786 ++#define BMA250_BW_OFFSET 8
6787 + #define BMA250_SUSPEND_MASK BIT(7) /* chip will sleep */
6788 + #define BMA250_LOWPOWER_MASK BIT(6)
6789 + #define BMA250_DATA_INTEN_MASK BIT(4)
6790 +@@ -143,6 +144,7 @@ struct bma180_part_info {
6791 +
6792 + #define BMA254_RANGE_MASK GENMASK(3, 0) /* Range of accel values */
6793 + #define BMA254_BW_MASK GENMASK(4, 0) /* Accel bandwidth */
6794 ++#define BMA254_BW_OFFSET 8
6795 + #define BMA254_SUSPEND_MASK BIT(7) /* chip will sleep */
6796 + #define BMA254_LOWPOWER_MASK BIT(6)
6797 + #define BMA254_DATA_INTEN_MASK BIT(4)
6798 +@@ -162,7 +164,11 @@ struct bma180_data {
6799 + int scale;
6800 + int bw;
6801 + bool pmode;
6802 +- u8 buff[16]; /* 3x 16-bit + 8-bit + padding + timestamp */
6803 ++ /* Ensure timestamp is naturally aligned */
6804 ++ struct {
6805 ++ s16 chan[4];
6806 ++ s64 timestamp __aligned(8);
6807 ++ } scan;
6808 + };
6809 +
6810 + enum bma180_chan {
6811 +@@ -283,7 +289,8 @@ static int bma180_set_bw(struct bma180_data *data, int val)
6812 + for (i = 0; i < data->part_info->num_bw; ++i) {
6813 + if (data->part_info->bw_table[i] == val) {
6814 + ret = bma180_set_bits(data, data->part_info->bw_reg,
6815 +- data->part_info->bw_mask, i);
6816 ++ data->part_info->bw_mask,
6817 ++ i + data->part_info->bw_offset);
6818 + if (ret) {
6819 + dev_err(&data->client->dev,
6820 + "failed to set bandwidth\n");
6821 +@@ -876,6 +883,7 @@ static const struct bma180_part_info bma180_part_info[] = {
6822 + .sleep_mask = BMA250_SUSPEND_MASK,
6823 + .bw_reg = BMA250_BW_REG,
6824 + .bw_mask = BMA250_BW_MASK,
6825 ++ .bw_offset = BMA250_BW_OFFSET,
6826 + .scale_reg = BMA250_RANGE_REG,
6827 + .scale_mask = BMA250_RANGE_MASK,
6828 + .power_reg = BMA250_POWER_REG,
6829 +@@ -905,6 +913,7 @@ static const struct bma180_part_info bma180_part_info[] = {
6830 + .sleep_mask = BMA254_SUSPEND_MASK,
6831 + .bw_reg = BMA254_BW_REG,
6832 + .bw_mask = BMA254_BW_MASK,
6833 ++ .bw_offset = BMA254_BW_OFFSET,
6834 + .scale_reg = BMA254_RANGE_REG,
6835 + .scale_mask = BMA254_RANGE_MASK,
6836 + .power_reg = BMA254_POWER_REG,
6837 +@@ -938,12 +947,12 @@ static irqreturn_t bma180_trigger_handler(int irq, void *p)
6838 + mutex_unlock(&data->mutex);
6839 + goto err;
6840 + }
6841 +- ((s16 *)data->buff)[i++] = ret;
6842 ++ data->scan.chan[i++] = ret;
6843 + }
6844 +
6845 + mutex_unlock(&data->mutex);
6846 +
6847 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buff, time_ns);
6848 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan, time_ns);
6849 + err:
6850 + iio_trigger_notify_done(indio_dev->trig);
6851 +
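The `scan` struct introduced above exists so that `iio_push_to_buffers_with_timestamp()` finds the s64 timestamp on an 8-byte boundary, which the old flat `u8 buff[16]` did not guarantee. The layout can be checked in plain C (using the GCC/Clang attribute in place of the kernel's `__aligned(8)` macro; this mirrors the bma180 layout but is not the driver code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* 3 accel channels + 1 temp/pad slot, then a naturally aligned timestamp. */
struct bma180_scan {
	int16_t chan[4];
	int64_t timestamp __attribute__((aligned(8)));
};
```

The same pattern recurs in the bma220, kxcjk-1013, mxc4005, stk8312, stk8ba50, ads1015, and vf610 hunks in this patch.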
6852 +diff --git a/drivers/iio/accel/bma220_spi.c b/drivers/iio/accel/bma220_spi.c
6853 +index 3c9b0c6954e60..e8a9db1a82ad8 100644
6854 +--- a/drivers/iio/accel/bma220_spi.c
6855 ++++ b/drivers/iio/accel/bma220_spi.c
6856 +@@ -63,7 +63,11 @@ static const int bma220_scale_table[][2] = {
6857 + struct bma220_data {
6858 + struct spi_device *spi_device;
6859 + struct mutex lock;
6860 +- s8 buffer[16]; /* 3x8-bit channels + 5x8 padding + 8x8 timestamp */
6861 ++ struct {
6862 ++ s8 chans[3];
6863 ++ /* Ensure timestamp is naturally aligned. */
6864 ++ s64 timestamp __aligned(8);
6865 ++ } scan;
6866 + u8 tx_buf[2] ____cacheline_aligned;
6867 + };
6868 +
6869 +@@ -94,12 +98,12 @@ static irqreturn_t bma220_trigger_handler(int irq, void *p)
6870 +
6871 + mutex_lock(&data->lock);
6872 + data->tx_buf[0] = BMA220_REG_ACCEL_X | BMA220_READ_MASK;
6873 +- ret = spi_write_then_read(spi, data->tx_buf, 1, data->buffer,
6874 ++ ret = spi_write_then_read(spi, data->tx_buf, 1, &data->scan.chans,
6875 + ARRAY_SIZE(bma220_channels) - 1);
6876 + if (ret < 0)
6877 + goto err;
6878 +
6879 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
6880 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
6881 + pf->timestamp);
6882 + err:
6883 + mutex_unlock(&data->lock);
6884 +diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
6885 +index 7e425ebcd7ea8..dca1f00d65e50 100644
6886 +--- a/drivers/iio/accel/bmc150-accel-core.c
6887 ++++ b/drivers/iio/accel/bmc150-accel-core.c
6888 +@@ -1171,11 +1171,12 @@ static const struct bmc150_accel_chip_info bmc150_accel_chip_info_tbl[] = {
6889 + /*
6890 + * The datasheet page 17 says:
6891 + * 15.6, 31.3, 62.5 and 125 mg per LSB.
6892 ++ * IIO unit is m/s^2 so multiply by g = 9.80665 m/s^2.
6893 + */
6894 +- .scale_table = { {156000, BMC150_ACCEL_DEF_RANGE_2G},
6895 +- {313000, BMC150_ACCEL_DEF_RANGE_4G},
6896 +- {625000, BMC150_ACCEL_DEF_RANGE_8G},
6897 +- {1250000, BMC150_ACCEL_DEF_RANGE_16G} },
6898 ++ .scale_table = { {152984, BMC150_ACCEL_DEF_RANGE_2G},
6899 ++ {306948, BMC150_ACCEL_DEF_RANGE_4G},
6900 ++ {612916, BMC150_ACCEL_DEF_RANGE_8G},
6901 ++ {1225831, BMC150_ACCEL_DEF_RANGE_16G} },
6902 + },
6903 + [bma222e] = {
6904 + .name = "BMA222E",
6905 +@@ -1804,21 +1805,17 @@ EXPORT_SYMBOL_GPL(bmc150_accel_core_probe);
6906 +
6907 + struct i2c_client *bmc150_get_second_device(struct i2c_client *client)
6908 + {
6909 +- struct bmc150_accel_data *data = i2c_get_clientdata(client);
6910 +-
6911 +- if (!data)
6912 +- return NULL;
6913 ++ struct bmc150_accel_data *data = iio_priv(i2c_get_clientdata(client));
6914 +
6915 + return data->second_device;
6916 + }
6917 + EXPORT_SYMBOL_GPL(bmc150_get_second_device);
6918 +
6919 +-void bmc150_set_second_device(struct i2c_client *client)
6920 ++void bmc150_set_second_device(struct i2c_client *client, struct i2c_client *second_dev)
6921 + {
6922 +- struct bmc150_accel_data *data = i2c_get_clientdata(client);
6923 ++ struct bmc150_accel_data *data = iio_priv(i2c_get_clientdata(client));
6924 +
6925 +- if (data)
6926 +- data->second_device = client;
6927 ++ data->second_device = second_dev;
6928 + }
6929 + EXPORT_SYMBOL_GPL(bmc150_set_second_device);
6930 +
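The corrected bmc150 scale table above converts the datasheet's mg-per-LSB figures into IIO's m/s² units instead of leaving them in g. The new values can be reproduced with a one-line conversion (standard gravity 9.80665 m/s², rounded to the nearest micro-m/s²; the helper name is invented for illustration):

```c
#include <assert.h>

/* mg per LSB -> scale in units of 1e-6 m/s^2, rounded to nearest. */
static long mg_per_lsb_to_uscale(double mg)
{
	return (long)(mg * 9.80665 * 1000.0 + 0.5);
}
```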
6931 +diff --git a/drivers/iio/accel/bmc150-accel-i2c.c b/drivers/iio/accel/bmc150-accel-i2c.c
6932 +index 69f709319484f..2afaae0294eef 100644
6933 +--- a/drivers/iio/accel/bmc150-accel-i2c.c
6934 ++++ b/drivers/iio/accel/bmc150-accel-i2c.c
6935 +@@ -70,7 +70,7 @@ static int bmc150_accel_probe(struct i2c_client *client,
6936 +
6937 + second_dev = i2c_acpi_new_device(&client->dev, 1, &board_info);
6938 + if (!IS_ERR(second_dev))
6939 +- bmc150_set_second_device(second_dev);
6940 ++ bmc150_set_second_device(client, second_dev);
6941 + }
6942 + #endif
6943 +
6944 +diff --git a/drivers/iio/accel/bmc150-accel.h b/drivers/iio/accel/bmc150-accel.h
6945 +index 6024f15b97004..e30c1698f6fbd 100644
6946 +--- a/drivers/iio/accel/bmc150-accel.h
6947 ++++ b/drivers/iio/accel/bmc150-accel.h
6948 +@@ -18,7 +18,7 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
6949 + const char *name, bool block_supported);
6950 + int bmc150_accel_core_remove(struct device *dev);
6951 + struct i2c_client *bmc150_get_second_device(struct i2c_client *second_device);
6952 +-void bmc150_set_second_device(struct i2c_client *second_device);
6953 ++void bmc150_set_second_device(struct i2c_client *client, struct i2c_client *second_dev);
6954 + extern const struct dev_pm_ops bmc150_accel_pm_ops;
6955 + extern const struct regmap_config bmc150_regmap_conf;
6956 +
6957 +diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c
6958 +index 5d63ed19e6e25..d35d039c79cb5 100644
6959 +--- a/drivers/iio/accel/hid-sensor-accel-3d.c
6960 ++++ b/drivers/iio/accel/hid-sensor-accel-3d.c
6961 +@@ -28,8 +28,11 @@ struct accel_3d_state {
6962 + struct hid_sensor_hub_callbacks callbacks;
6963 + struct hid_sensor_common common_attributes;
6964 + struct hid_sensor_hub_attribute_info accel[ACCEL_3D_CHANNEL_MAX];
6965 +- /* Reserve for 3 channels + padding + timestamp */
6966 +- u32 accel_val[ACCEL_3D_CHANNEL_MAX + 3];
6967 ++ /* Ensure timestamp is naturally aligned */
6968 ++ struct {
6969 ++ u32 accel_val[3];
6970 ++ s64 timestamp __aligned(8);
6971 ++ } scan;
6972 + int scale_pre_decml;
6973 + int scale_post_decml;
6974 + int scale_precision;
6975 +@@ -241,8 +244,8 @@ static int accel_3d_proc_event(struct hid_sensor_hub_device *hsdev,
6976 + accel_state->timestamp = iio_get_time_ns(indio_dev);
6977 +
6978 + hid_sensor_push_data(indio_dev,
6979 +- accel_state->accel_val,
6980 +- sizeof(accel_state->accel_val),
6981 ++ &accel_state->scan,
6982 ++ sizeof(accel_state->scan),
6983 + accel_state->timestamp);
6984 +
6985 + accel_state->timestamp = 0;
6986 +@@ -267,7 +270,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
6987 + case HID_USAGE_SENSOR_ACCEL_Y_AXIS:
6988 + case HID_USAGE_SENSOR_ACCEL_Z_AXIS:
6989 + offset = usage_id - HID_USAGE_SENSOR_ACCEL_X_AXIS;
6990 +- accel_state->accel_val[CHANNEL_SCAN_INDEX_X + offset] =
6991 ++ accel_state->scan.accel_val[CHANNEL_SCAN_INDEX_X + offset] =
6992 + *(u32 *)raw_data;
6993 + ret = 0;
6994 + break;
6995 +diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
6996 +index 2fadafc860fd6..5a19b5041e282 100644
6997 +--- a/drivers/iio/accel/kxcjk-1013.c
6998 ++++ b/drivers/iio/accel/kxcjk-1013.c
6999 +@@ -133,6 +133,13 @@ enum kx_acpi_type {
7000 + ACPI_KIOX010A,
7001 + };
7002 +
7003 ++enum kxcjk1013_axis {
7004 ++ AXIS_X,
7005 ++ AXIS_Y,
7006 ++ AXIS_Z,
7007 ++ AXIS_MAX
7008 ++};
7009 ++
7010 + struct kxcjk1013_data {
7011 + struct regulator_bulk_data regulators[2];
7012 + struct i2c_client *client;
7013 +@@ -140,7 +147,11 @@ struct kxcjk1013_data {
7014 + struct iio_trigger *motion_trig;
7015 + struct iio_mount_matrix orientation;
7016 + struct mutex mutex;
7017 +- s16 buffer[8];
7018 ++ /* Ensure timestamp naturally aligned */
7019 ++ struct {
7020 ++ s16 chans[AXIS_MAX];
7021 ++ s64 timestamp __aligned(8);
7022 ++ } scan;
7023 + u8 odr_bits;
7024 + u8 range;
7025 + int wake_thres;
7026 +@@ -154,13 +165,6 @@ struct kxcjk1013_data {
7027 + enum kx_acpi_type acpi_type;
7028 + };
7029 +
7030 +-enum kxcjk1013_axis {
7031 +- AXIS_X,
7032 +- AXIS_Y,
7033 +- AXIS_Z,
7034 +- AXIS_MAX,
7035 +-};
7036 +-
7037 + enum kxcjk1013_mode {
7038 + STANDBY,
7039 + OPERATION,
7040 +@@ -1094,12 +1098,12 @@ static irqreturn_t kxcjk1013_trigger_handler(int irq, void *p)
7041 + ret = i2c_smbus_read_i2c_block_data_or_emulated(data->client,
7042 + KXCJK1013_REG_XOUT_L,
7043 + AXIS_MAX * 2,
7044 +- (u8 *)data->buffer);
7045 ++ (u8 *)data->scan.chans);
7046 + mutex_unlock(&data->mutex);
7047 + if (ret < 0)
7048 + goto err;
7049 +
7050 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7051 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7052 + data->timestamp);
7053 + err:
7054 + iio_trigger_notify_done(indio_dev->trig);
7055 +diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c
7056 +index 0f8fd687866d4..381c6b1be5f55 100644
7057 +--- a/drivers/iio/accel/mxc4005.c
7058 ++++ b/drivers/iio/accel/mxc4005.c
7059 +@@ -56,7 +56,11 @@ struct mxc4005_data {
7060 + struct mutex mutex;
7061 + struct regmap *regmap;
7062 + struct iio_trigger *dready_trig;
7063 +- __be16 buffer[8];
7064 ++ /* Ensure timestamp is naturally aligned */
7065 ++ struct {
7066 ++ __be16 chans[3];
7067 ++ s64 timestamp __aligned(8);
7068 ++ } scan;
7069 + bool trigger_enabled;
7070 + };
7071 +
7072 +@@ -135,7 +139,7 @@ static int mxc4005_read_xyz(struct mxc4005_data *data)
7073 + int ret;
7074 +
7075 + ret = regmap_bulk_read(data->regmap, MXC4005_REG_XOUT_UPPER,
7076 +- data->buffer, sizeof(data->buffer));
7077 ++ data->scan.chans, sizeof(data->scan.chans));
7078 + if (ret < 0) {
7079 + dev_err(data->dev, "failed to read axes\n");
7080 + return ret;
7081 +@@ -301,7 +305,7 @@ static irqreturn_t mxc4005_trigger_handler(int irq, void *private)
7082 + if (ret < 0)
7083 + goto err;
7084 +
7085 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7086 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7087 + pf->timestamp);
7088 +
7089 + err:
7090 +diff --git a/drivers/iio/accel/stk8312.c b/drivers/iio/accel/stk8312.c
7091 +index 3b59887a8581b..7d24801e8aa7c 100644
7092 +--- a/drivers/iio/accel/stk8312.c
7093 ++++ b/drivers/iio/accel/stk8312.c
7094 +@@ -103,7 +103,11 @@ struct stk8312_data {
7095 + u8 mode;
7096 + struct iio_trigger *dready_trig;
7097 + bool dready_trigger_on;
7098 +- s8 buffer[16]; /* 3x8-bit channels + 5x8 padding + 64-bit timestamp */
7099 ++ /* Ensure timestamp is naturally aligned */
7100 ++ struct {
7101 ++ s8 chans[3];
7102 ++ s64 timestamp __aligned(8);
7103 ++ } scan;
7104 + };
7105 +
7106 + static IIO_CONST_ATTR(in_accel_scale_available, STK8312_SCALE_AVAIL);
7107 +@@ -438,7 +442,7 @@ static irqreturn_t stk8312_trigger_handler(int irq, void *p)
7108 + ret = i2c_smbus_read_i2c_block_data(data->client,
7109 + STK8312_REG_XOUT,
7110 + STK8312_ALL_CHANNEL_SIZE,
7111 +- data->buffer);
7112 ++ data->scan.chans);
7113 + if (ret < STK8312_ALL_CHANNEL_SIZE) {
7114 + dev_err(&data->client->dev, "register read failed\n");
7115 + mutex_unlock(&data->lock);
7116 +@@ -452,12 +456,12 @@ static irqreturn_t stk8312_trigger_handler(int irq, void *p)
7117 + mutex_unlock(&data->lock);
7118 + goto err;
7119 + }
7120 +- data->buffer[i++] = ret;
7121 ++ data->scan.chans[i++] = ret;
7122 + }
7123 + }
7124 + mutex_unlock(&data->lock);
7125 +
7126 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7127 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7128 + pf->timestamp);
7129 + err:
7130 + iio_trigger_notify_done(indio_dev->trig);
7131 +diff --git a/drivers/iio/accel/stk8ba50.c b/drivers/iio/accel/stk8ba50.c
7132 +index 3ead378b02c9b..e8087d7ee49f9 100644
7133 +--- a/drivers/iio/accel/stk8ba50.c
7134 ++++ b/drivers/iio/accel/stk8ba50.c
7135 +@@ -91,12 +91,11 @@ struct stk8ba50_data {
7136 + u8 sample_rate_idx;
7137 + struct iio_trigger *dready_trig;
7138 + bool dready_trigger_on;
7139 +- /*
7140 +- * 3 x 16-bit channels (10-bit data, 6-bit padding) +
7141 +- * 1 x 16 padding +
7142 +- * 4 x 16 64-bit timestamp
7143 +- */
7144 +- s16 buffer[8];
7145 ++ /* Ensure timestamp is naturally aligned */
7146 ++ struct {
7147 ++ s16 chans[3];
7148 ++ s64 timestamp __aligned(8);
7149 ++ } scan;
7150 + };
7151 +
7152 + #define STK8BA50_ACCEL_CHANNEL(index, reg, axis) { \
7153 +@@ -324,7 +323,7 @@ static irqreturn_t stk8ba50_trigger_handler(int irq, void *p)
7154 + ret = i2c_smbus_read_i2c_block_data(data->client,
7155 + STK8BA50_REG_XOUT,
7156 + STK8BA50_ALL_CHANNEL_SIZE,
7157 +- (u8 *)data->buffer);
7158 ++ (u8 *)data->scan.chans);
7159 + if (ret < STK8BA50_ALL_CHANNEL_SIZE) {
7160 + dev_err(&data->client->dev, "register read failed\n");
7161 + goto err;
7162 +@@ -337,10 +336,10 @@ static irqreturn_t stk8ba50_trigger_handler(int irq, void *p)
7163 + if (ret < 0)
7164 + goto err;
7165 +
7166 +- data->buffer[i++] = ret;
7167 ++ data->scan.chans[i++] = ret;
7168 + }
7169 + }
7170 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7171 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7172 + pf->timestamp);
7173 + err:
7174 + mutex_unlock(&data->lock);
7175 +diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
7176 +index a7826f097b95c..d356b515df090 100644
7177 +--- a/drivers/iio/adc/at91-sama5d2_adc.c
7178 ++++ b/drivers/iio/adc/at91-sama5d2_adc.c
7179 +@@ -403,7 +403,8 @@ struct at91_adc_state {
7180 + struct at91_adc_dma dma_st;
7181 + struct at91_adc_touch touch_st;
7182 + struct iio_dev *indio_dev;
7183 +- u16 buffer[AT91_BUFFER_MAX_HWORDS];
7184 ++ /* Ensure naturally aligned timestamp */
7185 ++ u16 buffer[AT91_BUFFER_MAX_HWORDS] __aligned(8);
7186 + /*
7187 + * lock to prevent concurrent 'single conversion' requests through
7188 + * sysfs.
7189 +diff --git a/drivers/iio/adc/hx711.c b/drivers/iio/adc/hx711.c
7190 +index 6a173531d355b..f7ee856a6b8b6 100644
7191 +--- a/drivers/iio/adc/hx711.c
7192 ++++ b/drivers/iio/adc/hx711.c
7193 +@@ -86,9 +86,9 @@ struct hx711_data {
7194 + struct mutex lock;
7195 + /*
7196 + * triggered buffer
7197 +- * 2x32-bit channel + 64-bit timestamp
7198 ++ * 2x32-bit channel + 64-bit naturally aligned timestamp
7199 + */
7200 +- u32 buffer[4];
7201 ++ u32 buffer[4] __aligned(8);
7202 + /*
7203 + * delay after a rising edge on SCK until the data is ready DOUT
7204 + * this is dependent on the hx711 where the datasheet tells a
7205 +diff --git a/drivers/iio/adc/mxs-lradc-adc.c b/drivers/iio/adc/mxs-lradc-adc.c
7206 +index 30e29f44ebd2e..c480cb489c1a3 100644
7207 +--- a/drivers/iio/adc/mxs-lradc-adc.c
7208 ++++ b/drivers/iio/adc/mxs-lradc-adc.c
7209 +@@ -115,7 +115,8 @@ struct mxs_lradc_adc {
7210 + struct device *dev;
7211 +
7212 + void __iomem *base;
7213 +- u32 buffer[10];
7214 ++ /* Maximum of 8 channels + 8 byte ts */
7215 ++ u32 buffer[10] __aligned(8);
7216 + struct iio_trigger *trig;
7217 + struct completion completion;
7218 + spinlock_t lock;
7219 +diff --git a/drivers/iio/adc/ti-ads1015.c b/drivers/iio/adc/ti-ads1015.c
7220 +index 9fef39bcf997b..5b828428be77c 100644
7221 +--- a/drivers/iio/adc/ti-ads1015.c
7222 ++++ b/drivers/iio/adc/ti-ads1015.c
7223 +@@ -395,10 +395,14 @@ static irqreturn_t ads1015_trigger_handler(int irq, void *p)
7224 + struct iio_poll_func *pf = p;
7225 + struct iio_dev *indio_dev = pf->indio_dev;
7226 + struct ads1015_data *data = iio_priv(indio_dev);
7227 +- s16 buf[8]; /* 1x s16 ADC val + 3x s16 padding + 4x s16 timestamp */
7228 ++ /* Ensure natural alignment of timestamp */
7229 ++ struct {
7230 ++ s16 chan;
7231 ++ s64 timestamp __aligned(8);
7232 ++ } scan;
7233 + int chan, ret, res;
7234 +
7235 +- memset(buf, 0, sizeof(buf));
7236 ++ memset(&scan, 0, sizeof(scan));
7237 +
7238 + mutex_lock(&data->lock);
7239 + chan = find_first_bit(indio_dev->active_scan_mask,
7240 +@@ -409,10 +413,10 @@ static irqreturn_t ads1015_trigger_handler(int irq, void *p)
7241 + goto err;
7242 + }
7243 +
7244 +- buf[0] = res;
7245 ++ scan.chan = res;
7246 + mutex_unlock(&data->lock);
7247 +
7248 +- iio_push_to_buffers_with_timestamp(indio_dev, buf,
7249 ++ iio_push_to_buffers_with_timestamp(indio_dev, &scan,
7250 + iio_get_time_ns(indio_dev));
7251 +
7252 + err:
7253 +diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
7254 +index 16bcb37eebb72..79c803537dc42 100644
7255 +--- a/drivers/iio/adc/ti-ads8688.c
7256 ++++ b/drivers/iio/adc/ti-ads8688.c
7257 +@@ -383,7 +383,8 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
7258 + {
7259 + struct iio_poll_func *pf = p;
7260 + struct iio_dev *indio_dev = pf->indio_dev;
7261 +- u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)];
7262 ++ /* Ensure naturally aligned timestamp */
7263 ++ u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8);
7264 + int i, j = 0;
7265 +
7266 + for (i = 0; i < indio_dev->masklength; i++) {
7267 +diff --git a/drivers/iio/adc/vf610_adc.c b/drivers/iio/adc/vf610_adc.c
7268 +index 1d794cf3e3f13..fd57fc43e8e5c 100644
7269 +--- a/drivers/iio/adc/vf610_adc.c
7270 ++++ b/drivers/iio/adc/vf610_adc.c
7271 +@@ -167,7 +167,11 @@ struct vf610_adc {
7272 + u32 sample_freq_avail[5];
7273 +
7274 + struct completion completion;
7275 +- u16 buffer[8];
7276 ++ /* Ensure the timestamp is naturally aligned */
7277 ++ struct {
7278 ++ u16 chan;
7279 ++ s64 timestamp __aligned(8);
7280 ++ } scan;
7281 + };
7282 +
7283 + static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 };
7284 +@@ -579,9 +583,9 @@ static irqreturn_t vf610_adc_isr(int irq, void *dev_id)
7285 + if (coco & VF610_ADC_HS_COCO0) {
7286 + info->value = vf610_adc_read_data(info);
7287 + if (iio_buffer_enabled(indio_dev)) {
7288 +- info->buffer[0] = info->value;
7289 ++ info->scan.chan = info->value;
7290 + iio_push_to_buffers_with_timestamp(indio_dev,
7291 +- info->buffer,
7292 ++ &info->scan,
7293 + iio_get_time_ns(indio_dev));
7294 + iio_trigger_notify_done(indio_dev->trig);
7295 + } else
7296 +diff --git a/drivers/iio/chemical/atlas-sensor.c b/drivers/iio/chemical/atlas-sensor.c
7297 +index cdab9d04dedd0..0c8a50de89408 100644
7298 +--- a/drivers/iio/chemical/atlas-sensor.c
7299 ++++ b/drivers/iio/chemical/atlas-sensor.c
7300 +@@ -91,8 +91,8 @@ struct atlas_data {
7301 + struct regmap *regmap;
7302 + struct irq_work work;
7303 + unsigned int interrupt_enabled;
7304 +-
7305 +- __be32 buffer[6]; /* 96-bit data + 32-bit pad + 64-bit timestamp */
7306 ++ /* 96-bit data + 32-bit pad + 64-bit timestamp */
7307 ++ __be32 buffer[6] __aligned(8);
7308 + };
7309 +
7310 + static const struct regmap_config atlas_regmap_config = {
7311 +diff --git a/drivers/iio/frequency/adf4350.c b/drivers/iio/frequency/adf4350.c
7312 +index 1462a6a5bc6da..3d9eba716b691 100644
7313 +--- a/drivers/iio/frequency/adf4350.c
7314 ++++ b/drivers/iio/frequency/adf4350.c
7315 +@@ -563,8 +563,10 @@ static int adf4350_probe(struct spi_device *spi)
7316 +
7317 + st->lock_detect_gpiod = devm_gpiod_get_optional(&spi->dev, NULL,
7318 + GPIOD_IN);
7319 +- if (IS_ERR(st->lock_detect_gpiod))
7320 +- return PTR_ERR(st->lock_detect_gpiod);
7321 ++ if (IS_ERR(st->lock_detect_gpiod)) {
7322 ++ ret = PTR_ERR(st->lock_detect_gpiod);
7323 ++ goto error_disable_reg;
7324 ++ }
7325 +
7326 + if (pdata->power_up_frequency) {
7327 + ret = adf4350_set_freq(st, pdata->power_up_frequency);
7328 +diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
7329 +index 029ef4c346046..457fa8702d19d 100644
7330 +--- a/drivers/iio/gyro/bmg160_core.c
7331 ++++ b/drivers/iio/gyro/bmg160_core.c
7332 +@@ -98,7 +98,11 @@ struct bmg160_data {
7333 + struct iio_trigger *motion_trig;
7334 + struct iio_mount_matrix orientation;
7335 + struct mutex mutex;
7336 +- s16 buffer[8];
7337 ++ /* Ensure naturally aligned timestamp */
7338 ++ struct {
7339 ++ s16 chans[3];
7340 ++ s64 timestamp __aligned(8);
7341 ++ } scan;
7342 + u32 dps_range;
7343 + int ev_enable_state;
7344 + int slope_thres;
7345 +@@ -882,12 +886,12 @@ static irqreturn_t bmg160_trigger_handler(int irq, void *p)
7346 +
7347 + mutex_lock(&data->mutex);
7348 + ret = regmap_bulk_read(data->regmap, BMG160_REG_XOUT_L,
7349 +- data->buffer, AXIS_MAX * 2);
7350 ++ data->scan.chans, AXIS_MAX * 2);
7351 + mutex_unlock(&data->mutex);
7352 + if (ret < 0)
7353 + goto err;
7354 +
7355 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7356 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7357 + pf->timestamp);
7358 + err:
7359 + iio_trigger_notify_done(indio_dev->trig);
7360 +diff --git a/drivers/iio/humidity/am2315.c b/drivers/iio/humidity/am2315.c
7361 +index 02ad1767c845e..3398fa413ec5c 100644
7362 +--- a/drivers/iio/humidity/am2315.c
7363 ++++ b/drivers/iio/humidity/am2315.c
7364 +@@ -33,7 +33,11 @@
7365 + struct am2315_data {
7366 + struct i2c_client *client;
7367 + struct mutex lock;
7368 +- s16 buffer[8]; /* 2x16-bit channels + 2x16 padding + 4x16 timestamp */
7369 ++ /* Ensure timestamp is naturally aligned */
7370 ++ struct {
7371 ++ s16 chans[2];
7372 ++ s64 timestamp __aligned(8);
7373 ++ } scan;
7374 + };
7375 +
7376 + struct am2315_sensor_data {
7377 +@@ -167,20 +171,20 @@ static irqreturn_t am2315_trigger_handler(int irq, void *p)
7378 +
7379 + mutex_lock(&data->lock);
7380 + if (*(indio_dev->active_scan_mask) == AM2315_ALL_CHANNEL_MASK) {
7381 +- data->buffer[0] = sensor_data.hum_data;
7382 +- data->buffer[1] = sensor_data.temp_data;
7383 ++ data->scan.chans[0] = sensor_data.hum_data;
7384 ++ data->scan.chans[1] = sensor_data.temp_data;
7385 + } else {
7386 + i = 0;
7387 + for_each_set_bit(bit, indio_dev->active_scan_mask,
7388 + indio_dev->masklength) {
7389 +- data->buffer[i] = (bit ? sensor_data.temp_data :
7390 +- sensor_data.hum_data);
7391 ++ data->scan.chans[i] = (bit ? sensor_data.temp_data :
7392 ++ sensor_data.hum_data);
7393 + i++;
7394 + }
7395 + }
7396 + mutex_unlock(&data->lock);
7397 +
7398 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7399 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7400 + pf->timestamp);
7401 + err:
7402 + iio_trigger_notify_done(indio_dev->trig);
7403 +diff --git a/drivers/iio/imu/adis16400.c b/drivers/iio/imu/adis16400.c
7404 +index 785a4ce606d89..4aff16466da02 100644
7405 +--- a/drivers/iio/imu/adis16400.c
7406 ++++ b/drivers/iio/imu/adis16400.c
7407 +@@ -647,9 +647,6 @@ static irqreturn_t adis16400_trigger_handler(int irq, void *p)
7408 + void *buffer;
7409 + int ret;
7410 +
7411 +- if (!adis->buffer)
7412 +- return -ENOMEM;
7413 +-
7414 + if (!(st->variant->flags & ADIS16400_NO_BURST) &&
7415 + st->adis.spi->max_speed_hz > ADIS16400_SPI_BURST) {
7416 + st->adis.spi->max_speed_hz = ADIS16400_SPI_BURST;
7417 +diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c
7418 +index 197d482409911..3c4e4deb87608 100644
7419 +--- a/drivers/iio/imu/adis16475.c
7420 ++++ b/drivers/iio/imu/adis16475.c
7421 +@@ -990,7 +990,7 @@ static irqreturn_t adis16475_trigger_handler(int irq, void *p)
7422 +
7423 + ret = spi_sync(adis->spi, &adis->msg);
7424 + if (ret)
7425 +- return ret;
7426 ++ goto check_burst32;
7427 +
7428 + adis->spi->max_speed_hz = cached_spi_speed_hz;
7429 + buffer = adis->buffer;
7430 +diff --git a/drivers/iio/imu/adis_buffer.c b/drivers/iio/imu/adis_buffer.c
7431 +index ac354321f63a3..175af154e4437 100644
7432 +--- a/drivers/iio/imu/adis_buffer.c
7433 ++++ b/drivers/iio/imu/adis_buffer.c
7434 +@@ -129,9 +129,6 @@ static irqreturn_t adis_trigger_handler(int irq, void *p)
7435 + struct adis *adis = iio_device_get_drvdata(indio_dev);
7436 + int ret;
7437 +
7438 +- if (!adis->buffer)
7439 +- return -ENOMEM;
7440 +-
7441 + if (adis->data->has_paging) {
7442 + mutex_lock(&adis->state_lock);
7443 + if (adis->current_page != 0) {
7444 +diff --git a/drivers/iio/light/isl29125.c b/drivers/iio/light/isl29125.c
7445 +index b93b85dbc3a6a..ba53b50d711a1 100644
7446 +--- a/drivers/iio/light/isl29125.c
7447 ++++ b/drivers/iio/light/isl29125.c
7448 +@@ -51,7 +51,11 @@
7449 + struct isl29125_data {
7450 + struct i2c_client *client;
7451 + u8 conf1;
7452 +- u16 buffer[8]; /* 3x 16-bit, padding, 8 bytes timestamp */
7453 ++ /* Ensure timestamp is naturally aligned */
7454 ++ struct {
7455 ++ u16 chans[3];
7456 ++ s64 timestamp __aligned(8);
7457 ++ } scan;
7458 + };
7459 +
7460 + #define ISL29125_CHANNEL(_color, _si) { \
7461 +@@ -184,10 +188,10 @@ static irqreturn_t isl29125_trigger_handler(int irq, void *p)
7462 + if (ret < 0)
7463 + goto done;
7464 +
7465 +- data->buffer[j++] = ret;
7466 ++ data->scan.chans[j++] = ret;
7467 + }
7468 +
7469 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7470 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7471 + iio_get_time_ns(indio_dev));
7472 +
7473 + done:
7474 +diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
7475 +index b4323d2db0b19..74ed2d88a3ed3 100644
7476 +--- a/drivers/iio/light/ltr501.c
7477 ++++ b/drivers/iio/light/ltr501.c
7478 +@@ -32,9 +32,12 @@
7479 + #define LTR501_PART_ID 0x86
7480 + #define LTR501_MANUFAC_ID 0x87
7481 + #define LTR501_ALS_DATA1 0x88 /* 16-bit, little endian */
7482 ++#define LTR501_ALS_DATA1_UPPER 0x89 /* upper 8 bits of LTR501_ALS_DATA1 */
7483 + #define LTR501_ALS_DATA0 0x8a /* 16-bit, little endian */
7484 ++#define LTR501_ALS_DATA0_UPPER 0x8b /* upper 8 bits of LTR501_ALS_DATA0 */
7485 + #define LTR501_ALS_PS_STATUS 0x8c
7486 + #define LTR501_PS_DATA 0x8d /* 16-bit, little endian */
7487 ++#define LTR501_PS_DATA_UPPER 0x8e /* upper 8 bits of LTR501_PS_DATA */
7488 + #define LTR501_INTR 0x8f /* output mode, polarity, mode */
7489 + #define LTR501_PS_THRESH_UP 0x90 /* 11 bit, ps upper threshold */
7490 + #define LTR501_PS_THRESH_LOW 0x92 /* 11 bit, ps lower threshold */
7491 +@@ -406,18 +409,19 @@ static int ltr501_read_als(const struct ltr501_data *data, __le16 buf[2])
7492 +
7493 + static int ltr501_read_ps(const struct ltr501_data *data)
7494 + {
7495 +- int ret, status;
7496 ++ __le16 status;
7497 ++ int ret;
7498 +
7499 + ret = ltr501_drdy(data, LTR501_STATUS_PS_RDY);
7500 + if (ret < 0)
7501 + return ret;
7502 +
7503 + ret = regmap_bulk_read(data->regmap, LTR501_PS_DATA,
7504 +- &status, 2);
7505 ++ &status, sizeof(status));
7506 + if (ret < 0)
7507 + return ret;
7508 +
7509 +- return status;
7510 ++ return le16_to_cpu(status);
7511 + }
7512 +
7513 + static int ltr501_read_intr_prst(const struct ltr501_data *data,
7514 +@@ -1205,7 +1209,7 @@ static struct ltr501_chip_info ltr501_chip_info_tbl[] = {
7515 + .als_gain_tbl_size = ARRAY_SIZE(ltr559_als_gain_tbl),
7516 + .ps_gain = ltr559_ps_gain_tbl,
7517 + .ps_gain_tbl_size = ARRAY_SIZE(ltr559_ps_gain_tbl),
7518 +- .als_mode_active = BIT(1),
7519 ++ .als_mode_active = BIT(0),
7520 + .als_gain_mask = BIT(2) | BIT(3) | BIT(4),
7521 + .als_gain_shift = 2,
7522 + .info = &ltr501_info,
7523 +@@ -1354,9 +1358,12 @@ static bool ltr501_is_volatile_reg(struct device *dev, unsigned int reg)
7524 + {
7525 + switch (reg) {
7526 + case LTR501_ALS_DATA1:
7527 ++ case LTR501_ALS_DATA1_UPPER:
7528 + case LTR501_ALS_DATA0:
7529 ++ case LTR501_ALS_DATA0_UPPER:
7530 + case LTR501_ALS_PS_STATUS:
7531 + case LTR501_PS_DATA:
7532 ++ case LTR501_PS_DATA_UPPER:
7533 + return true;
7534 + default:
7535 + return false;
7536 +diff --git a/drivers/iio/light/tcs3414.c b/drivers/iio/light/tcs3414.c
7537 +index 6fe5d46f80d40..0593abd600ec2 100644
7538 +--- a/drivers/iio/light/tcs3414.c
7539 ++++ b/drivers/iio/light/tcs3414.c
7540 +@@ -53,7 +53,11 @@ struct tcs3414_data {
7541 + u8 control;
7542 + u8 gain;
7543 + u8 timing;
7544 +- u16 buffer[8]; /* 4x 16-bit + 8 bytes timestamp */
7545 ++ /* Ensure timestamp is naturally aligned */
7546 ++ struct {
7547 ++ u16 chans[4];
7548 ++ s64 timestamp __aligned(8);
7549 ++ } scan;
7550 + };
7551 +
7552 + #define TCS3414_CHANNEL(_color, _si, _addr) { \
7553 +@@ -209,10 +213,10 @@ static irqreturn_t tcs3414_trigger_handler(int irq, void *p)
7554 + if (ret < 0)
7555 + goto done;
7556 +
7557 +- data->buffer[j++] = ret;
7558 ++ data->scan.chans[j++] = ret;
7559 + }
7560 +
7561 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7562 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7563 + iio_get_time_ns(indio_dev));
7564 +
7565 + done:
7566 +diff --git a/drivers/iio/light/tcs3472.c b/drivers/iio/light/tcs3472.c
7567 +index a0dc447aeb68b..371c6a39a1654 100644
7568 +--- a/drivers/iio/light/tcs3472.c
7569 ++++ b/drivers/iio/light/tcs3472.c
7570 +@@ -64,7 +64,11 @@ struct tcs3472_data {
7571 + u8 control;
7572 + u8 atime;
7573 + u8 apers;
7574 +- u16 buffer[8]; /* 4 16-bit channels + 64-bit timestamp */
7575 ++ /* Ensure timestamp is naturally aligned */
7576 ++ struct {
7577 ++ u16 chans[4];
7578 ++ s64 timestamp __aligned(8);
7579 ++ } scan;
7580 + };
7581 +
7582 + static const struct iio_event_spec tcs3472_events[] = {
7583 +@@ -386,10 +390,10 @@ static irqreturn_t tcs3472_trigger_handler(int irq, void *p)
7584 + if (ret < 0)
7585 + goto done;
7586 +
7587 +- data->buffer[j++] = ret;
7588 ++ data->scan.chans[j++] = ret;
7589 + }
7590 +
7591 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7592 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7593 + iio_get_time_ns(indio_dev));
7594 +
7595 + done:
7596 +@@ -531,7 +535,8 @@ static int tcs3472_probe(struct i2c_client *client,
7597 + return 0;
7598 +
7599 + free_irq:
7600 +- free_irq(client->irq, indio_dev);
7601 ++ if (client->irq)
7602 ++ free_irq(client->irq, indio_dev);
7603 + buffer_cleanup:
7604 + iio_triggered_buffer_cleanup(indio_dev);
7605 + return ret;
7606 +@@ -559,7 +564,8 @@ static int tcs3472_remove(struct i2c_client *client)
7607 + struct iio_dev *indio_dev = i2c_get_clientdata(client);
7608 +
7609 + iio_device_unregister(indio_dev);
7610 +- free_irq(client->irq, indio_dev);
7611 ++ if (client->irq)
7612 ++ free_irq(client->irq, indio_dev);
7613 + iio_triggered_buffer_cleanup(indio_dev);
7614 + tcs3472_powerdown(iio_priv(indio_dev));
7615 +
7616 +diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
7617 +index fff4b36b8b58d..f4feb44903b3f 100644
7618 +--- a/drivers/iio/light/vcnl4000.c
7619 ++++ b/drivers/iio/light/vcnl4000.c
7620 +@@ -910,7 +910,7 @@ static irqreturn_t vcnl4010_trigger_handler(int irq, void *p)
7621 + struct iio_dev *indio_dev = pf->indio_dev;
7622 + struct vcnl4000_data *data = iio_priv(indio_dev);
7623 + const unsigned long *active_scan_mask = indio_dev->active_scan_mask;
7624 +- u16 buffer[8] = {0}; /* 1x16-bit + ts */
7625 ++ u16 buffer[8] __aligned(8) = {0}; /* 1x16-bit + naturally aligned ts */
7626 + bool data_read = false;
7627 + unsigned long isr;
7628 + int val = 0;
7629 +diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
7630 +index 73a28e30dddcc..1342bbe111ed5 100644
7631 +--- a/drivers/iio/light/vcnl4035.c
7632 ++++ b/drivers/iio/light/vcnl4035.c
7633 +@@ -102,7 +102,8 @@ static irqreturn_t vcnl4035_trigger_consumer_handler(int irq, void *p)
7634 + struct iio_poll_func *pf = p;
7635 + struct iio_dev *indio_dev = pf->indio_dev;
7636 + struct vcnl4035_data *data = iio_priv(indio_dev);
7637 +- u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)];
7638 ++ /* Ensure naturally aligned timestamp */
7639 ++ u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8);
7640 + int ret;
7641 +
7642 + ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
7643 +diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
7644 +index b2f3129e1b4f3..20a0842f0e3a6 100644
7645 +--- a/drivers/iio/magnetometer/bmc150_magn.c
7646 ++++ b/drivers/iio/magnetometer/bmc150_magn.c
7647 +@@ -138,8 +138,11 @@ struct bmc150_magn_data {
7648 + struct regmap *regmap;
7649 + struct regulator_bulk_data regulators[2];
7650 + struct iio_mount_matrix orientation;
7651 +- /* 4 x 32 bits for x, y z, 4 bytes align, 64 bits timestamp */
7652 +- s32 buffer[6];
7653 ++ /* Ensure timestamp is naturally aligned */
7654 ++ struct {
7655 ++ s32 chans[3];
7656 ++ s64 timestamp __aligned(8);
7657 ++ } scan;
7658 + struct iio_trigger *dready_trig;
7659 + bool dready_trigger_on;
7660 + int max_odr;
7661 +@@ -675,11 +678,11 @@ static irqreturn_t bmc150_magn_trigger_handler(int irq, void *p)
7662 + int ret;
7663 +
7664 + mutex_lock(&data->mutex);
7665 +- ret = bmc150_magn_read_xyz(data, data->buffer);
7666 ++ ret = bmc150_magn_read_xyz(data, data->scan.chans);
7667 + if (ret < 0)
7668 + goto err;
7669 +
7670 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7671 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7672 + pf->timestamp);
7673 +
7674 + err:
7675 +diff --git a/drivers/iio/magnetometer/hmc5843.h b/drivers/iio/magnetometer/hmc5843.h
7676 +index 3f6c0b6629415..242f742f2643a 100644
7677 +--- a/drivers/iio/magnetometer/hmc5843.h
7678 ++++ b/drivers/iio/magnetometer/hmc5843.h
7679 +@@ -33,7 +33,8 @@ enum hmc5843_ids {
7680 + * @lock: update and read regmap data
7681 + * @regmap: hardware access register maps
7682 + * @variant: describe chip variants
7683 +- * @buffer: 3x 16-bit channels + padding + 64-bit timestamp
7684 ++ * @scan: buffer to pack data for passing to
7685 ++ * iio_push_to_buffers_with_timestamp()
7686 + */
7687 + struct hmc5843_data {
7688 + struct device *dev;
7689 +@@ -41,7 +42,10 @@ struct hmc5843_data {
7690 + struct regmap *regmap;
7691 + const struct hmc5843_chip_info *variant;
7692 + struct iio_mount_matrix orientation;
7693 +- __be16 buffer[8];
7694 ++ struct {
7695 ++ __be16 chans[3];
7696 ++ s64 timestamp __aligned(8);
7697 ++ } scan;
7698 + };
7699 +
7700 + int hmc5843_common_probe(struct device *dev, struct regmap *regmap,
7701 +diff --git a/drivers/iio/magnetometer/hmc5843_core.c b/drivers/iio/magnetometer/hmc5843_core.c
7702 +index 780faea61d82e..221563e0c18fd 100644
7703 +--- a/drivers/iio/magnetometer/hmc5843_core.c
7704 ++++ b/drivers/iio/magnetometer/hmc5843_core.c
7705 +@@ -446,13 +446,13 @@ static irqreturn_t hmc5843_trigger_handler(int irq, void *p)
7706 + }
7707 +
7708 + ret = regmap_bulk_read(data->regmap, HMC5843_DATA_OUT_MSB_REGS,
7709 +- data->buffer, 3 * sizeof(__be16));
7710 ++ data->scan.chans, sizeof(data->scan.chans));
7711 +
7712 + mutex_unlock(&data->lock);
7713 + if (ret < 0)
7714 + goto done;
7715 +
7716 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7717 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7718 + iio_get_time_ns(indio_dev));
7719 +
7720 + done:
7721 +diff --git a/drivers/iio/magnetometer/rm3100-core.c b/drivers/iio/magnetometer/rm3100-core.c
7722 +index 7242897a05e95..720234a91db11 100644
7723 +--- a/drivers/iio/magnetometer/rm3100-core.c
7724 ++++ b/drivers/iio/magnetometer/rm3100-core.c
7725 +@@ -78,7 +78,8 @@ struct rm3100_data {
7726 + bool use_interrupt;
7727 + int conversion_time;
7728 + int scale;
7729 +- u8 buffer[RM3100_SCAN_BYTES];
7730 ++ /* Ensure naturally aligned timestamp */
7731 ++ u8 buffer[RM3100_SCAN_BYTES] __aligned(8);
7732 + struct iio_trigger *drdy_trig;
7733 +
7734 + /*
7735 +diff --git a/drivers/iio/potentiostat/lmp91000.c b/drivers/iio/potentiostat/lmp91000.c
7736 +index f34ca769dc20d..d7ff74a798ba3 100644
7737 +--- a/drivers/iio/potentiostat/lmp91000.c
7738 ++++ b/drivers/iio/potentiostat/lmp91000.c
7739 +@@ -71,8 +71,8 @@ struct lmp91000_data {
7740 +
7741 + struct completion completion;
7742 + u8 chan_select;
7743 +-
7744 +- u32 buffer[4]; /* 64-bit data + 64-bit timestamp */
7745 ++ /* 64-bit data + 64-bit naturally aligned timestamp */
7746 ++ u32 buffer[4] __aligned(8);
7747 + };
7748 +
7749 + static const struct iio_chan_spec lmp91000_channels[] = {
7750 +diff --git a/drivers/iio/proximity/as3935.c b/drivers/iio/proximity/as3935.c
7751 +index b79ada839e012..98330e26ac3bd 100644
7752 +--- a/drivers/iio/proximity/as3935.c
7753 ++++ b/drivers/iio/proximity/as3935.c
7754 +@@ -59,7 +59,11 @@ struct as3935_state {
7755 + unsigned long noise_tripped;
7756 + u32 tune_cap;
7757 + u32 nflwdth_reg;
7758 +- u8 buffer[16]; /* 8-bit data + 56-bit padding + 64-bit timestamp */
7759 ++ /* Ensure timestamp is naturally aligned */
7760 ++ struct {
7761 ++ u8 chan;
7762 ++ s64 timestamp __aligned(8);
7763 ++ } scan;
7764 + u8 buf[2] ____cacheline_aligned;
7765 + };
7766 +
7767 +@@ -225,8 +229,8 @@ static irqreturn_t as3935_trigger_handler(int irq, void *private)
7768 + if (ret)
7769 + goto err_read;
7770 +
7771 +- st->buffer[0] = val & AS3935_DATA_MASK;
7772 +- iio_push_to_buffers_with_timestamp(indio_dev, &st->buffer,
7773 ++ st->scan.chan = val & AS3935_DATA_MASK;
7774 ++ iio_push_to_buffers_with_timestamp(indio_dev, &st->scan,
7775 + iio_get_time_ns(indio_dev));
7776 + err_read:
7777 + iio_trigger_notify_done(indio_dev->trig);
7778 +diff --git a/drivers/iio/proximity/isl29501.c b/drivers/iio/proximity/isl29501.c
7779 +index 90e76451c972a..5b6ea783795d9 100644
7780 +--- a/drivers/iio/proximity/isl29501.c
7781 ++++ b/drivers/iio/proximity/isl29501.c
7782 +@@ -938,7 +938,7 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p)
7783 + struct iio_dev *indio_dev = pf->indio_dev;
7784 + struct isl29501_private *isl29501 = iio_priv(indio_dev);
7785 + const unsigned long *active_mask = indio_dev->active_scan_mask;
7786 +- u32 buffer[4] = {}; /* 1x16-bit + ts */
7787 ++ u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
7788 +
7789 + if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
7790 + isl29501_register_read(isl29501, REG_DISTANCE, buffer);
7791 +diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
7792 +index cc206bfa09c78..d854b8d5fbbaf 100644
7793 +--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
7794 ++++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
7795 +@@ -44,7 +44,11 @@ struct lidar_data {
7796 + int (*xfer)(struct lidar_data *data, u8 reg, u8 *val, int len);
7797 + int i2c_enabled;
7798 +
7799 +- u16 buffer[8]; /* 2 byte distance + 8 byte timestamp */
7800 ++ /* Ensure timestamp is naturally aligned */
7801 ++ struct {
7802 ++ u16 chan;
7803 ++ s64 timestamp __aligned(8);
7804 ++ } scan;
7805 + };
7806 +
7807 + static const struct iio_chan_spec lidar_channels[] = {
7808 +@@ -230,9 +234,9 @@ static irqreturn_t lidar_trigger_handler(int irq, void *private)
7809 + struct lidar_data *data = iio_priv(indio_dev);
7810 + int ret;
7811 +
7812 +- ret = lidar_get_measurement(data, data->buffer);
7813 ++ ret = lidar_get_measurement(data, &data->scan.chan);
7814 + if (!ret) {
7815 +- iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
7816 ++ iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
7817 + iio_get_time_ns(indio_dev));
7818 + } else if (ret != -EINVAL) {
7819 + dev_err(&data->client->dev, "cannot read LIDAR measurement");
7820 +diff --git a/drivers/iio/proximity/srf08.c b/drivers/iio/proximity/srf08.c
7821 +index 70beac5c9c1df..9b0886760f76d 100644
7822 +--- a/drivers/iio/proximity/srf08.c
7823 ++++ b/drivers/iio/proximity/srf08.c
7824 +@@ -63,11 +63,11 @@ struct srf08_data {
7825 + int range_mm;
7826 + struct mutex lock;
7827 +
7828 +- /*
7829 +- * triggered buffer
7830 +- * 1x16-bit channel + 3x16 padding + 4x16 timestamp
7831 +- */
7832 +- s16 buffer[8];
7833 ++ /* Ensure timestamp is naturally aligned */
7834 ++ struct {
7835 ++ s16 chan;
7836 ++ s64 timestamp __aligned(8);
7837 ++ } scan;
7838 +
7839 + /* Sensor-Type */
7840 + enum srf08_sensor_type sensor_type;
7841 +@@ -190,9 +190,9 @@ static irqreturn_t srf08_trigger_handler(int irq, void *p)
7842 +
7843 + mutex_lock(&data->lock);
7844 +
7845 +- data->buffer[0] = sensor_data;
7846 ++ data->scan.chan = sensor_data;
7847 + iio_push_to_buffers_with_timestamp(indio_dev,
7848 +- data->buffer, pf->timestamp);
7849 ++ &data->scan, pf->timestamp);
7850 +
7851 + mutex_unlock(&data->lock);
7852 + err:
7853 +diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
7854 +index 5b9022a8c9ece..bb46f794f3240 100644
7855 +--- a/drivers/infiniband/core/cma.c
7856 ++++ b/drivers/infiniband/core/cma.c
7857 +@@ -1856,6 +1856,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
7858 + {
7859 + cma_cancel_operation(id_priv, state);
7860 +
7861 ++ rdma_restrack_del(&id_priv->res);
7862 + if (id_priv->cma_dev) {
7863 + if (rdma_cap_ib_cm(id_priv->id.device, 1)) {
7864 + if (id_priv->cm_id.ib)
7865 +@@ -1865,7 +1866,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
7866 + iw_destroy_cm_id(id_priv->cm_id.iw);
7867 + }
7868 + cma_leave_mc_groups(id_priv);
7869 +- rdma_restrack_del(&id_priv->res);
7870 + cma_release_dev(id_priv);
7871 + }
7872 +
7873 +@@ -2476,8 +2476,10 @@ static int cma_iw_listen(struct rdma_id_private *id_priv, int backlog)
7874 + if (IS_ERR(id))
7875 + return PTR_ERR(id);
7876 +
7877 ++ mutex_lock(&id_priv->qp_mutex);
7878 + id->tos = id_priv->tos;
7879 + id->tos_set = id_priv->tos_set;
7880 ++ mutex_unlock(&id_priv->qp_mutex);
7881 + id_priv->cm_id.iw = id;
7882 +
7883 + memcpy(&id_priv->cm_id.iw->local_addr, cma_src_addr(id_priv),
7884 +@@ -2537,8 +2539,10 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
7885 + cma_id_get(id_priv);
7886 + dev_id_priv->internal_id = 1;
7887 + dev_id_priv->afonly = id_priv->afonly;
7888 ++ mutex_lock(&id_priv->qp_mutex);
7889 + dev_id_priv->tos_set = id_priv->tos_set;
7890 + dev_id_priv->tos = id_priv->tos;
7891 ++ mutex_unlock(&id_priv->qp_mutex);
7892 +
7893 + ret = rdma_listen(&dev_id_priv->id, id_priv->backlog);
7894 + if (ret)
7895 +@@ -2585,8 +2589,10 @@ void rdma_set_service_type(struct rdma_cm_id *id, int tos)
7896 + struct rdma_id_private *id_priv;
7897 +
7898 + id_priv = container_of(id, struct rdma_id_private, id);
7899 ++ mutex_lock(&id_priv->qp_mutex);
7900 + id_priv->tos = (u8) tos;
7901 + id_priv->tos_set = true;
7902 ++ mutex_unlock(&id_priv->qp_mutex);
7903 + }
7904 + EXPORT_SYMBOL(rdma_set_service_type);
7905 +
7906 +@@ -2613,8 +2619,10 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
7907 + return -EINVAL;
7908 +
7909 + id_priv = container_of(id, struct rdma_id_private, id);
7910 ++ mutex_lock(&id_priv->qp_mutex);
7911 + id_priv->timeout = timeout;
7912 + id_priv->timeout_set = true;
7913 ++ mutex_unlock(&id_priv->qp_mutex);
7914 +
7915 + return 0;
7916 + }
7917 +@@ -3000,8 +3008,11 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
7918 +
7919 + u8 default_roce_tos = id_priv->cma_dev->default_roce_tos[id_priv->id.port_num -
7920 + rdma_start_port(id_priv->cma_dev->device)];
7921 +- u8 tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
7922 ++ u8 tos;
7923 +
7924 ++ mutex_lock(&id_priv->qp_mutex);
7925 ++ tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
7926 ++ mutex_unlock(&id_priv->qp_mutex);
7927 +
7928 + work = kzalloc(sizeof *work, GFP_KERNEL);
7929 + if (!work)
7930 +@@ -3048,8 +3059,12 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
7931 + * PacketLifeTime = local ACK timeout/2
7932 + * as a reasonable approximation for RoCE networks.
7933 + */
7934 +- route->path_rec->packet_life_time = id_priv->timeout_set ?
7935 +- id_priv->timeout - 1 : CMA_IBOE_PACKET_LIFETIME;
7936 ++ mutex_lock(&id_priv->qp_mutex);
7937 ++ if (id_priv->timeout_set && id_priv->timeout)
7938 ++ route->path_rec->packet_life_time = id_priv->timeout - 1;
7939 ++ else
7940 ++ route->path_rec->packet_life_time = CMA_IBOE_PACKET_LIFETIME;
7941 ++ mutex_unlock(&id_priv->qp_mutex);
7942 +
7943 + if (!route->path_rec->mtu) {
7944 + ret = -EINVAL;
7945 +@@ -4073,8 +4088,11 @@ static int cma_connect_iw(struct rdma_id_private *id_priv,
7946 + if (IS_ERR(cm_id))
7947 + return PTR_ERR(cm_id);
7948 +
7949 ++ mutex_lock(&id_priv->qp_mutex);
7950 + cm_id->tos = id_priv->tos;
7951 + cm_id->tos_set = id_priv->tos_set;
7952 ++ mutex_unlock(&id_priv->qp_mutex);
7953 ++
7954 + id_priv->cm_id.iw = cm_id;
7955 +
7956 + memcpy(&cm_id->local_addr, cma_src_addr(id_priv),
7957 +diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
7958 +index ab55f8b3190eb..92ae454d500a6 100644
7959 +--- a/drivers/infiniband/core/uverbs_cmd.c
7960 ++++ b/drivers/infiniband/core/uverbs_cmd.c
7961 +@@ -3033,12 +3033,29 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs)
7962 + if (!wq)
7963 + return -EINVAL;
7964 +
7965 +- wq_attr.curr_wq_state = cmd.curr_wq_state;
7966 +- wq_attr.wq_state = cmd.wq_state;
7967 + if (cmd.attr_mask & IB_WQ_FLAGS) {
7968 + wq_attr.flags = cmd.flags;
7969 + wq_attr.flags_mask = cmd.flags_mask;
7970 + }
7971 ++
7972 ++ if (cmd.attr_mask & IB_WQ_CUR_STATE) {
7973 ++ if (cmd.curr_wq_state > IB_WQS_ERR)
7974 ++ return -EINVAL;
7975 ++
7976 ++ wq_attr.curr_wq_state = cmd.curr_wq_state;
7977 ++ } else {
7978 ++ wq_attr.curr_wq_state = wq->state;
7979 ++ }
7980 ++
7981 ++ if (cmd.attr_mask & IB_WQ_STATE) {
7982 ++ if (cmd.wq_state > IB_WQS_ERR)
7983 ++ return -EINVAL;
7984 ++
7985 ++ wq_attr.wq_state = cmd.wq_state;
7986 ++ } else {
7987 ++ wq_attr.wq_state = wq_attr.curr_wq_state;
7988 ++ }
7989 ++
7990 + ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask,
7991 + &attrs->driver_udata);
7992 + rdma_lookup_put_uobject(&wq->uobject->uevent.uobject,
7993 +diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
7994 +index ad3cee54140e1..851acc9d050f7 100644
7995 +--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
7996 ++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
7997 +@@ -268,8 +268,6 @@ static int set_rc_inl(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
7998 +
7999 + dseg += sizeof(struct hns_roce_v2_rc_send_wqe);
8000 +
8001 +- roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S, 1);
8002 +-
8003 + if (msg_len <= HNS_ROCE_V2_MAX_RC_INL_INN_SZ) {
8004 + roce_set_bit(rc_sq_wqe->byte_20,
8005 + V2_RC_SEND_WQE_BYTE_20_INL_TYPE_S, 0);
8006 +@@ -314,6 +312,8 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
8007 + V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
8008 + (*sge_ind) & (qp->sge.sge_cnt - 1));
8009 +
8010 ++ roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
8011 ++ !!(wr->send_flags & IB_SEND_INLINE));
8012 + if (wr->send_flags & IB_SEND_INLINE)
8013 + return set_rc_inl(qp, wr, rc_sq_wqe, sge_ind);
8014 +
8015 +@@ -750,8 +750,7 @@ out:
8016 + qp->sq.head += nreq;
8017 + qp->next_sge = sge_idx;
8018 +
8019 +- if (nreq == 1 && qp->sq.head == qp->sq.tail + 1 &&
8020 +- (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE))
8021 ++ if (nreq == 1 && (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE))
8022 + write_dwqe(hr_dev, qp, wqe);
8023 + else
8024 + update_sq_db(hr_dev, qp);
8025 +diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
8026 +index 79b3c3023fe7a..b8454dcb03183 100644
8027 +--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
8028 ++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
8029 +@@ -776,7 +776,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
8030 + struct ib_device *ibdev = &hr_dev->ib_dev;
8031 + struct hns_roce_buf_region *r;
8032 + unsigned int i, mapped_cnt;
8033 +- int ret;
8034 ++ int ret = 0;
8035 +
8036 + /*
8037 + * Only use the first page address as root ba when hopnum is 0, this
8038 +diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
8039 +index 651785bd57f2d..18a47248e4441 100644
8040 +--- a/drivers/infiniband/hw/mlx4/qp.c
8041 ++++ b/drivers/infiniband/hw/mlx4/qp.c
8042 +@@ -4254,13 +4254,8 @@ int mlx4_ib_modify_wq(struct ib_wq *ibwq, struct ib_wq_attr *wq_attr,
8043 + if (wq_attr_mask & IB_WQ_FLAGS)
8044 + return -EOPNOTSUPP;
8045 +
8046 +- cur_state = wq_attr_mask & IB_WQ_CUR_STATE ? wq_attr->curr_wq_state :
8047 +- ibwq->state;
8048 +- new_state = wq_attr_mask & IB_WQ_STATE ? wq_attr->wq_state : cur_state;
8049 +-
8050 +- if (cur_state < IB_WQS_RESET || cur_state > IB_WQS_ERR ||
8051 +- new_state < IB_WQS_RESET || new_state > IB_WQS_ERR)
8052 +- return -EINVAL;
8053 ++ cur_state = wq_attr->curr_wq_state;
8054 ++ new_state = wq_attr->wq_state;
8055 +
8056 + if ((new_state == IB_WQS_RDY) && (cur_state == IB_WQS_ERR))
8057 + return -EINVAL;
8058 +diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
8059 +index 59ffbbdda3179..fc531a5069126 100644
8060 +--- a/drivers/infiniband/hw/mlx5/main.c
8061 ++++ b/drivers/infiniband/hw/mlx5/main.c
8062 +@@ -3393,8 +3393,6 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
8063 +
8064 + port->mp.mpi = NULL;
8065 +
8066 +- list_add_tail(&mpi->list, &mlx5_ib_unaffiliated_port_list);
8067 +-
8068 + spin_unlock(&port->mp.mpi_lock);
8069 +
8070 + err = mlx5_nic_vport_unaffiliate_multiport(mpi->mdev);
8071 +@@ -3541,6 +3539,8 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
8072 + dev->port[i].mp.mpi = NULL;
8073 + } else {
8074 + mlx5_ib_dbg(dev, "unbinding port_num: %d\n", i + 1);
8075 ++ list_add_tail(&dev->port[i].mp.mpi->list,
8076 ++ &mlx5_ib_unaffiliated_port_list);
8077 + mlx5_ib_unbind_slave_port(dev, dev->port[i].mp.mpi);
8078 + }
8079 + }
8080 +diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
8081 +index 843f9e7fe96ff..bcaaf238b364d 100644
8082 +--- a/drivers/infiniband/hw/mlx5/qp.c
8083 ++++ b/drivers/infiniband/hw/mlx5/qp.c
8084 +@@ -5309,10 +5309,8 @@ int mlx5_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
8085 +
8086 + rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
8087 +
8088 +- curr_wq_state = (wq_attr_mask & IB_WQ_CUR_STATE) ?
8089 +- wq_attr->curr_wq_state : wq->state;
8090 +- wq_state = (wq_attr_mask & IB_WQ_STATE) ?
8091 +- wq_attr->wq_state : curr_wq_state;
8092 ++ curr_wq_state = wq_attr->curr_wq_state;
8093 ++ wq_state = wq_attr->wq_state;
8094 + if (curr_wq_state == IB_WQS_ERR)
8095 + curr_wq_state = MLX5_RQC_STATE_ERR;
8096 + if (wq_state == IB_WQS_ERR)
8097 +diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
8098 +index 01662727dca08..fc1ba49042792 100644
8099 +--- a/drivers/infiniband/sw/rxe/rxe_net.c
8100 ++++ b/drivers/infiniband/sw/rxe/rxe_net.c
8101 +@@ -207,10 +207,8 @@ static struct socket *rxe_setup_udp_tunnel(struct net *net, __be16 port,
8102 +
8103 + /* Create UDP socket */
8104 + err = udp_sock_create(net, &udp_cfg, &sock);
8105 +- if (err < 0) {
8106 +- pr_err("failed to create udp socket. err = %d\n", err);
8107 ++ if (err < 0)
8108 + return ERR_PTR(err);
8109 +- }
8110 +
8111 + tnl_cfg.encap_type = 1;
8112 + tnl_cfg.encap_rcv = rxe_udp_encap_recv;
8113 +@@ -619,6 +617,12 @@ static int rxe_net_ipv6_init(void)
8114 +
8115 + recv_sockets.sk6 = rxe_setup_udp_tunnel(&init_net,
8116 + htons(ROCE_V2_UDP_DPORT), true);
8117 ++ if (PTR_ERR(recv_sockets.sk6) == -EAFNOSUPPORT) {
8118 ++ recv_sockets.sk6 = NULL;
8119 ++ pr_warn("IPv6 is not supported, can not create a UDPv6 socket\n");
8120 ++ return 0;
8121 ++ }
8122 ++
8123 + if (IS_ERR(recv_sockets.sk6)) {
8124 + recv_sockets.sk6 = NULL;
8125 + pr_err("Failed to create IPv6 UDP tunnel\n");
8126 +diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
8127 +index b0f350d674fdb..93a41ebda1a85 100644
8128 +--- a/drivers/infiniband/sw/rxe/rxe_qp.c
8129 ++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
8130 +@@ -136,7 +136,6 @@ static void free_rd_atomic_resources(struct rxe_qp *qp)
8131 + void free_rd_atomic_resource(struct rxe_qp *qp, struct resp_res *res)
8132 + {
8133 + if (res->type == RXE_ATOMIC_MASK) {
8134 +- rxe_drop_ref(qp);
8135 + kfree_skb(res->atomic.skb);
8136 + } else if (res->type == RXE_READ_MASK) {
8137 + if (res->read.mr)
8138 +diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
8139 +index 8e237b623b316..ae97bebc0f34f 100644
8140 +--- a/drivers/infiniband/sw/rxe/rxe_resp.c
8141 ++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
8142 +@@ -966,8 +966,6 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
8143 + goto out;
8144 + }
8145 +
8146 +- rxe_add_ref(qp);
8147 +-
8148 + res = &qp->resp.resources[qp->resp.res_head];
8149 + free_rd_atomic_resource(qp, res);
8150 + rxe_advance_resp_resource(qp);
8151 +diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
8152 +index 8fcaa1136f2cd..776e46ee95dad 100644
8153 +--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
8154 ++++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
8155 +@@ -506,6 +506,7 @@ iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
8156 + iser_conn->iscsi_conn = conn;
8157 +
8158 + out:
8159 ++ iscsi_put_endpoint(ep);
8160 + mutex_unlock(&iser_conn->state_mutex);
8161 + return error;
8162 + }
8163 +@@ -1002,6 +1003,7 @@ static struct iscsi_transport iscsi_iser_transport = {
8164 + /* connection management */
8165 + .create_conn = iscsi_iser_conn_create,
8166 + .bind_conn = iscsi_iser_conn_bind,
8167 ++ .unbind_conn = iscsi_conn_unbind,
8168 + .destroy_conn = iscsi_conn_teardown,
8169 + .attr_is_visible = iser_attr_is_visible,
8170 + .set_param = iscsi_iser_set_param,
8171 +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
8172 +index 959ba0462ef07..49d12dd4a5039 100644
8173 +--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
8174 ++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
8175 +@@ -805,6 +805,9 @@ static struct rtrs_clt_sess *get_next_path_min_inflight(struct path_it *it)
8176 + int inflight;
8177 +
8178 + list_for_each_entry_rcu(sess, &clt->paths_list, s.entry) {
8179 ++ if (unlikely(READ_ONCE(sess->state) != RTRS_CLT_CONNECTED))
8180 ++ continue;
8181 ++
8182 + if (unlikely(!list_empty(raw_cpu_ptr(sess->mp_skip_entry))))
8183 + continue;
8184 +
8185 +@@ -1713,7 +1716,19 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
8186 + queue_depth);
8187 + return -ECONNRESET;
8188 + }
8189 +- if (!sess->rbufs || sess->queue_depth < queue_depth) {
8190 ++ if (sess->queue_depth > 0 && queue_depth != sess->queue_depth) {
8191 ++ rtrs_err(clt, "Error: queue depth changed\n");
8192 ++
8193 ++ /*
8194 ++ * Stop any more reconnection attempts
8195 ++ */
8196 ++ sess->reconnect_attempts = -1;
8197 ++ rtrs_err(clt,
8198 ++ "Disabling auto-reconnect. Trigger a manual reconnect after issue is resolved\n");
8199 ++ return -ECONNRESET;
8200 ++ }
8201 ++
8202 ++ if (!sess->rbufs) {
8203 + kfree(sess->rbufs);
8204 + sess->rbufs = kcalloc(queue_depth, sizeof(*sess->rbufs),
8205 + GFP_KERNEL);
8206 +@@ -1727,7 +1742,7 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
8207 + sess->chunk_size = sess->max_io_size + sess->max_hdr_size;
8208 +
8209 + /*
8210 +- * Global queue depth and IO size is always a minimum.
8211 ++ * Global IO size is always a minimum.
8212 + * If while a reconnection server sends us a value a bit
8213 + * higher - client does not care and uses cached minimum.
8214 + *
8215 +@@ -1735,8 +1750,7 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
8216 + * connections in parallel, use lock.
8217 + */
8218 + mutex_lock(&clt->paths_mutex);
8219 +- clt->queue_depth = min_not_zero(sess->queue_depth,
8220 +- clt->queue_depth);
8221 ++ clt->queue_depth = sess->queue_depth;
8222 + clt->max_io_size = min_not_zero(sess->max_io_size,
8223 + clt->max_io_size);
8224 + mutex_unlock(&clt->paths_mutex);
8225 +@@ -2675,6 +2689,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
8226 + if (err) {
8227 + list_del_rcu(&sess->s.entry);
8228 + rtrs_clt_close_conns(sess, true);
8229 ++ free_percpu(sess->stats->pcpu_stats);
8230 ++ kfree(sess->stats);
8231 + free_sess(sess);
8232 + goto close_all_sess;
8233 + }
8234 +@@ -2683,6 +2699,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
8235 + if (err) {
8236 + list_del_rcu(&sess->s.entry);
8237 + rtrs_clt_close_conns(sess, true);
8238 ++ free_percpu(sess->stats->pcpu_stats);
8239 ++ kfree(sess->stats);
8240 + free_sess(sess);
8241 + goto close_all_sess;
8242 + }
8243 +@@ -2940,6 +2958,8 @@ int rtrs_clt_create_path_from_sysfs(struct rtrs_clt *clt,
8244 + close_sess:
8245 + rtrs_clt_remove_path_from_arr(sess);
8246 + rtrs_clt_close_conns(sess, true);
8247 ++ free_percpu(sess->stats->pcpu_stats);
8248 ++ kfree(sess->stats);
8249 + free_sess(sess);
8250 +
8251 + return err;
8252 +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
8253 +index 126a96e75c621..e499f64ae608b 100644
8254 +--- a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
8255 ++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
8256 +@@ -211,6 +211,7 @@ rtrs_srv_destroy_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
8257 + device_del(&srv->dev);
8258 + put_device(&srv->dev);
8259 + } else {
8260 ++ put_device(&srv->dev);
8261 + mutex_unlock(&srv->paths_mutex);
8262 + }
8263 + }
8264 +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
8265 +index d071809e3ed2f..57a9d396ab75d 100644
8266 +--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
8267 ++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
8268 +@@ -1477,6 +1477,7 @@ static void free_sess(struct rtrs_srv_sess *sess)
8269 + kobject_del(&sess->kobj);
8270 + kobject_put(&sess->kobj);
8271 + } else {
8272 ++ kfree(sess->stats);
8273 + kfree(sess);
8274 + }
8275 + }
8276 +@@ -1600,7 +1601,7 @@ static int create_con(struct rtrs_srv_sess *sess,
8277 + struct rtrs_sess *s = &sess->s;
8278 + struct rtrs_srv_con *con;
8279 +
8280 +- u32 cq_size, wr_queue_size;
8281 ++ u32 cq_size, max_send_wr, max_recv_wr, wr_limit;
8282 + int err, cq_vector;
8283 +
8284 + con = kzalloc(sizeof(*con), GFP_KERNEL);
8285 +@@ -1621,30 +1622,42 @@ static int create_con(struct rtrs_srv_sess *sess,
8286 + * All receive and all send (each requiring invalidate)
8287 + * + 2 for drain and heartbeat
8288 + */
8289 +- wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
8290 +- cq_size = wr_queue_size;
8291 ++ max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
8292 ++ max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
8293 ++ cq_size = max_send_wr + max_recv_wr;
8294 + } else {
8295 +- /*
8296 +- * If we have all receive requests posted and
8297 +- * all write requests posted and each read request
8298 +- * requires an invalidate request + drain
8299 +- * and qp gets into error state.
8300 +- */
8301 +- cq_size = srv->queue_depth * 3 + 1;
8302 + /*
8303 + * In theory we might have queue_depth * 32
8304 + * outstanding requests if an unsafe global key is used
8305 + * and we have queue_depth read requests each consisting
8306 + * of 32 different addresses. div 3 for mlx5.
8307 + */
8308 +- wr_queue_size = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
8309 ++ wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
8310 ++ /* when always_invalidate enabled, we need linv+rinv+mr+imm */
8311 ++ if (always_invalidate)
8312 ++ max_send_wr =
8313 ++ min_t(int, wr_limit,
8314 ++ srv->queue_depth * (1 + 4) + 1);
8315 ++ else
8316 ++ max_send_wr =
8317 ++ min_t(int, wr_limit,
8318 ++ srv->queue_depth * (1 + 2) + 1);
8319 ++
8320 ++ max_recv_wr = srv->queue_depth + 1;
8321 ++ /*
8322 ++ * If we have all receive requests posted and
8323 ++ * all write requests posted and each read request
8324 ++ * requires an invalidate request + drain
8325 ++ * and qp gets into error state.
8326 ++ */
8327 ++ cq_size = max_send_wr + max_recv_wr;
8328 + }
8329 +- atomic_set(&con->sq_wr_avail, wr_queue_size);
8330 ++ atomic_set(&con->sq_wr_avail, max_send_wr);
8331 + cq_vector = rtrs_srv_get_next_cq_vector(sess);
8332 +
8333 + /* TODO: SOFTIRQ can be faster, but be careful with softirq context */
8334 + err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
8335 +- wr_queue_size, wr_queue_size,
8336 ++ max_send_wr, max_recv_wr,
8337 + IB_POLL_WORKQUEUE);
8338 + if (err) {
8339 + rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err);
8340 +diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
8341 +index d13aff0aa8165..4629bb758126a 100644
8342 +--- a/drivers/infiniband/ulp/rtrs/rtrs.c
8343 ++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
8344 +@@ -373,7 +373,6 @@ void rtrs_stop_hb(struct rtrs_sess *sess)
8345 + {
8346 + cancel_delayed_work_sync(&sess->hb_dwork);
8347 + sess->hb_missed_cnt = 0;
8348 +- sess->hb_missed_max = 0;
8349 + }
8350 + EXPORT_SYMBOL_GPL(rtrs_stop_hb);
8351 +
8352 +diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
8353 +index 31f8aa2c40ed8..168705c88e2fa 100644
8354 +--- a/drivers/infiniband/ulp/srp/ib_srp.c
8355 ++++ b/drivers/infiniband/ulp/srp/ib_srp.c
8356 +@@ -998,7 +998,6 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
8357 + struct srp_device *srp_dev = target->srp_host->srp_dev;
8358 + struct ib_device *ibdev = srp_dev->dev;
8359 + struct srp_request *req;
8360 +- void *mr_list;
8361 + dma_addr_t dma_addr;
8362 + int i, ret = -ENOMEM;
8363 +
8364 +@@ -1009,12 +1008,12 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
8365 +
8366 + for (i = 0; i < target->req_ring_size; ++i) {
8367 + req = &ch->req_ring[i];
8368 +- mr_list = kmalloc_array(target->mr_per_cmd, sizeof(void *),
8369 +- GFP_KERNEL);
8370 +- if (!mr_list)
8371 +- goto out;
8372 +- if (srp_dev->use_fast_reg)
8373 +- req->fr_list = mr_list;
8374 ++ if (srp_dev->use_fast_reg) {
8375 ++ req->fr_list = kmalloc_array(target->mr_per_cmd,
8376 ++ sizeof(void *), GFP_KERNEL);
8377 ++ if (!req->fr_list)
8378 ++ goto out;
8379 ++ }
8380 + req->indirect_desc = kmalloc(target->indirect_size, GFP_KERNEL);
8381 + if (!req->indirect_desc)
8382 + goto out;
8383 +diff --git a/drivers/input/joydev.c b/drivers/input/joydev.c
8384 +index da8963a9f044c..947d440a3be63 100644
8385 +--- a/drivers/input/joydev.c
8386 ++++ b/drivers/input/joydev.c
8387 +@@ -499,7 +499,7 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev,
8388 + memcpy(joydev->keypam, keypam, len);
8389 +
8390 + for (i = 0; i < joydev->nkey; i++)
8391 +- joydev->keymap[keypam[i] - BTN_MISC] = i;
8392 ++ joydev->keymap[joydev->keypam[i] - BTN_MISC] = i;
8393 +
8394 + out:
8395 + kfree(keypam);
8396 +diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
8397 +index 32d15809ae586..40a070a2e7f5b 100644
8398 +--- a/drivers/input/keyboard/Kconfig
8399 ++++ b/drivers/input/keyboard/Kconfig
8400 +@@ -67,9 +67,6 @@ config KEYBOARD_AMIGA
8401 + To compile this driver as a module, choose M here: the
8402 + module will be called amikbd.
8403 +
8404 +-config ATARI_KBD_CORE
8405 +- bool
8406 +-
8407 + config KEYBOARD_APPLESPI
8408 + tristate "Apple SPI keyboard and trackpad"
8409 + depends on ACPI && EFI
8410 +diff --git a/drivers/input/keyboard/hil_kbd.c b/drivers/input/keyboard/hil_kbd.c
8411 +index bb29a7c9a1c0c..54afb38601b9f 100644
8412 +--- a/drivers/input/keyboard/hil_kbd.c
8413 ++++ b/drivers/input/keyboard/hil_kbd.c
8414 +@@ -512,6 +512,7 @@ static int hil_dev_connect(struct serio *serio, struct serio_driver *drv)
8415 + HIL_IDD_NUM_AXES_PER_SET(*idd)) {
8416 + printk(KERN_INFO PREFIX
8417 + "combo devices are not supported.\n");
8418 ++ error = -EINVAL;
8419 + goto bail1;
8420 + }
8421 +
8422 +diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
8423 +index 17540bdb1eaf7..0f9e3ec99aae1 100644
8424 +--- a/drivers/input/touchscreen/elants_i2c.c
8425 ++++ b/drivers/input/touchscreen/elants_i2c.c
8426 +@@ -1396,7 +1396,7 @@ static int elants_i2c_probe(struct i2c_client *client,
8427 + init_completion(&ts->cmd_done);
8428 +
8429 + ts->client = client;
8430 +- ts->chip_id = (enum elants_chip_id)id->driver_data;
8431 ++ ts->chip_id = (enum elants_chip_id)(uintptr_t)device_get_match_data(&client->dev);
8432 + i2c_set_clientdata(client, ts);
8433 +
8434 + ts->vcc33 = devm_regulator_get(&client->dev, "vcc33");
8435 +@@ -1636,8 +1636,8 @@ MODULE_DEVICE_TABLE(acpi, elants_acpi_id);
8436 +
8437 + #ifdef CONFIG_OF
8438 + static const struct of_device_id elants_of_match[] = {
8439 +- { .compatible = "elan,ekth3500" },
8440 +- { .compatible = "elan,ektf3624" },
8441 ++ { .compatible = "elan,ekth3500", .data = (void *)EKTH3500 },
8442 ++ { .compatible = "elan,ektf3624", .data = (void *)EKTF3624 },
8443 + { /* sentinel */ }
8444 + };
8445 + MODULE_DEVICE_TABLE(of, elants_of_match);
8446 +diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
8447 +index c682b028f0a29..4f53d3c57e698 100644
8448 +--- a/drivers/input/touchscreen/goodix.c
8449 ++++ b/drivers/input/touchscreen/goodix.c
8450 +@@ -178,51 +178,6 @@ static const unsigned long goodix_irq_flags[] = {
8451 + IRQ_TYPE_LEVEL_HIGH,
8452 + };
8453 +
8454 +-/*
8455 +- * Those tablets have their coordinates origin at the bottom right
8456 +- * of the tablet, as if rotated 180 degrees
8457 +- */
8458 +-static const struct dmi_system_id rotated_screen[] = {
8459 +-#if defined(CONFIG_DMI) && defined(CONFIG_X86)
8460 +- {
8461 +- .ident = "Teclast X89",
8462 +- .matches = {
8463 +- /* tPAD is too generic, also match on bios date */
8464 +- DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
8465 +- DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
8466 +- DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
8467 +- },
8468 +- },
8469 +- {
8470 +- .ident = "Teclast X98 Pro",
8471 +- .matches = {
8472 +- /*
8473 +- * Only match BIOS date, because the manufacturers
8474 +- * BIOS does not report the board name at all
8475 +- * (sometimes)...
8476 +- */
8477 +- DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
8478 +- DMI_MATCH(DMI_BIOS_DATE, "10/28/2015"),
8479 +- },
8480 +- },
8481 +- {
8482 +- .ident = "WinBook TW100",
8483 +- .matches = {
8484 +- DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
8485 +- DMI_MATCH(DMI_PRODUCT_NAME, "TW100")
8486 +- }
8487 +- },
8488 +- {
8489 +- .ident = "WinBook TW700",
8490 +- .matches = {
8491 +- DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
8492 +- DMI_MATCH(DMI_PRODUCT_NAME, "TW700")
8493 +- },
8494 +- },
8495 +-#endif
8496 +- {}
8497 +-};
8498 +-
8499 + static const struct dmi_system_id nine_bytes_report[] = {
8500 + #if defined(CONFIG_DMI) && defined(CONFIG_X86)
8501 + {
8502 +@@ -1123,13 +1078,6 @@ static int goodix_configure_dev(struct goodix_ts_data *ts)
8503 + ABS_MT_POSITION_Y, ts->prop.max_y);
8504 + }
8505 +
8506 +- if (dmi_check_system(rotated_screen)) {
8507 +- ts->prop.invert_x = true;
8508 +- ts->prop.invert_y = true;
8509 +- dev_dbg(&ts->client->dev,
8510 +- "Applying '180 degrees rotated screen' quirk\n");
8511 +- }
8512 +-
8513 + if (dmi_check_system(nine_bytes_report)) {
8514 + ts->contact_size = 9;
8515 +
8516 +diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c
8517 +index c847453a03c26..43c521f50c851 100644
8518 +--- a/drivers/input/touchscreen/usbtouchscreen.c
8519 ++++ b/drivers/input/touchscreen/usbtouchscreen.c
8520 +@@ -251,7 +251,7 @@ static int e2i_init(struct usbtouch_usb *usbtouch)
8521 + int ret;
8522 + struct usb_device *udev = interface_to_usbdev(usbtouch->interface);
8523 +
8524 +- ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
8525 ++ ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
8526 + 0x01, 0x02, 0x0000, 0x0081,
8527 + NULL, 0, USB_CTRL_SET_TIMEOUT);
8528 +
8529 +@@ -531,7 +531,7 @@ static int mtouch_init(struct usbtouch_usb *usbtouch)
8530 + if (ret)
8531 + return ret;
8532 +
8533 +- ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
8534 ++ ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
8535 + MTOUCHUSB_RESET,
8536 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
8537 + 1, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
8538 +@@ -543,7 +543,7 @@ static int mtouch_init(struct usbtouch_usb *usbtouch)
8539 + msleep(150);
8540 +
8541 + for (i = 0; i < 3; i++) {
8542 +- ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
8543 ++ ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
8544 + MTOUCHUSB_ASYNC_REPORT,
8545 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
8546 + 1, 1, NULL, 0, USB_CTRL_SET_TIMEOUT);
8547 +@@ -722,7 +722,7 @@ static int dmc_tsc10_init(struct usbtouch_usb *usbtouch)
8548 + }
8549 +
8550 + /* start sending data */
8551 +- ret = usb_control_msg(dev, usb_rcvctrlpipe (dev, 0),
8552 ++ ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
8553 + TSC10_CMD_DATA1,
8554 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
8555 + 0, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
8556 +diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
8557 +index df7b19ff0a9ed..ecc7308130ba6 100644
8558 +--- a/drivers/iommu/amd/init.c
8559 ++++ b/drivers/iommu/amd/init.c
8560 +@@ -1911,8 +1911,8 @@ static void print_iommu_info(void)
8561 + pci_info(pdev, "Found IOMMU cap 0x%x\n", iommu->cap_ptr);
8562 +
8563 + if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
8564 +- pci_info(pdev, "Extended features (%#llx):",
8565 +- iommu->features);
8566 ++ pr_info("Extended features (%#llx):", iommu->features);
8567 ++
8568 + for (i = 0; i < ARRAY_SIZE(feat_str); ++i) {
8569 + if (iommu_feature(iommu, (1ULL << i)))
8570 + pr_cont(" %s", feat_str[i]);
8571 +diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
8572 +index fdd095e1fa521..53e5f41278853 100644
8573 +--- a/drivers/iommu/dma-iommu.c
8574 ++++ b/drivers/iommu/dma-iommu.c
8575 +@@ -252,9 +252,11 @@ resv_iova:
8576 + lo = iova_pfn(iovad, start);
8577 + hi = iova_pfn(iovad, end);
8578 + reserve_iova(iovad, lo, hi);
8579 +- } else {
8580 ++ } else if (end < start) {
8581 + /* dma_ranges list should be sorted */
8582 +- dev_err(&dev->dev, "Failed to reserve IOVA\n");
8583 ++ dev_err(&dev->dev,
8584 ++ "Failed to reserve IOVA [%pa-%pa]\n",
8585 ++ &start, &end);
8586 + return -EINVAL;
8587 + }
8588 +
8589 +diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
8590 +index b6742b4231bf8..258247dd5e3de 100644
8591 +--- a/drivers/leds/Kconfig
8592 ++++ b/drivers/leds/Kconfig
8593 +@@ -199,6 +199,7 @@ config LEDS_LM3530
8594 +
8595 + config LEDS_LM3532
8596 + tristate "LCD Backlight driver for LM3532"
8597 ++ select REGMAP_I2C
8598 + depends on LEDS_CLASS
8599 + depends on I2C
8600 + help
8601 +diff --git a/drivers/leds/blink/leds-lgm-sso.c b/drivers/leds/blink/leds-lgm-sso.c
8602 +index 7d5c9ca007d66..7d5f0bf2817ad 100644
8603 +--- a/drivers/leds/blink/leds-lgm-sso.c
8604 ++++ b/drivers/leds/blink/leds-lgm-sso.c
8605 +@@ -132,8 +132,7 @@ struct sso_led_priv {
8606 + struct regmap *mmap;
8607 + struct device *dev;
8608 + struct platform_device *pdev;
8609 +- struct clk *gclk;
8610 +- struct clk *fpid_clk;
8611 ++ struct clk_bulk_data clocks[2];
8612 + u32 fpid_clkrate;
8613 + u32 gptc_clkrate;
8614 + u32 freq[MAX_FREQ_RANK];
8615 +@@ -763,12 +762,11 @@ static int sso_probe_gpios(struct sso_led_priv *priv)
8616 + return sso_gpio_gc_init(dev, priv);
8617 + }
8618 +
8619 +-static void sso_clk_disable(void *data)
8620 ++static void sso_clock_disable_unprepare(void *data)
8621 + {
8622 + struct sso_led_priv *priv = data;
8623 +
8624 +- clk_disable_unprepare(priv->fpid_clk);
8625 +- clk_disable_unprepare(priv->gclk);
8626 ++ clk_bulk_disable_unprepare(ARRAY_SIZE(priv->clocks), priv->clocks);
8627 + }
8628 +
8629 + static int intel_sso_led_probe(struct platform_device *pdev)
8630 +@@ -785,36 +783,30 @@ static int intel_sso_led_probe(struct platform_device *pdev)
8631 + priv->dev = dev;
8632 +
8633 + /* gate clock */
8634 +- priv->gclk = devm_clk_get(dev, "sso");
8635 +- if (IS_ERR(priv->gclk)) {
8636 +- dev_err(dev, "get sso gate clock failed!\n");
8637 +- return PTR_ERR(priv->gclk);
8638 +- }
8639 ++ priv->clocks[0].id = "sso";
8640 ++
8641 ++ /* fpid clock */
8642 ++ priv->clocks[1].id = "fpid";
8643 +
8644 +- ret = clk_prepare_enable(priv->gclk);
8645 ++ ret = devm_clk_bulk_get(dev, ARRAY_SIZE(priv->clocks), priv->clocks);
8646 + if (ret) {
8647 +- dev_err(dev, "Failed to prepate/enable sso gate clock!\n");
8648 ++ dev_err(dev, "Getting clocks failed!\n");
8649 + return ret;
8650 + }
8651 +
8652 +- priv->fpid_clk = devm_clk_get(dev, "fpid");
8653 +- if (IS_ERR(priv->fpid_clk)) {
8654 +- dev_err(dev, "Failed to get fpid clock!\n");
8655 +- return PTR_ERR(priv->fpid_clk);
8656 +- }
8657 +-
8658 +- ret = clk_prepare_enable(priv->fpid_clk);
8659 ++ ret = clk_bulk_prepare_enable(ARRAY_SIZE(priv->clocks), priv->clocks);
8660 + if (ret) {
8661 +- dev_err(dev, "Failed to prepare/enable fpid clock!\n");
8662 ++ dev_err(dev, "Failed to prepare and enable clocks!\n");
8663 + return ret;
8664 + }
8665 +- priv->fpid_clkrate = clk_get_rate(priv->fpid_clk);
8666 +
8667 +- ret = devm_add_action_or_reset(dev, sso_clk_disable, priv);
8668 +- if (ret) {
8669 +- dev_err(dev, "Failed to devm_add_action_or_reset, %d\n", ret);
8670 ++ ret = devm_add_action_or_reset(dev, sso_clock_disable_unprepare, priv);
8671 ++ if (ret)
8672 + return ret;
8673 +- }
8674 ++
8675 ++ priv->fpid_clkrate = clk_get_rate(priv->clocks[1].clk);
8678 +
8679 + priv->mmap = syscon_node_to_regmap(dev->of_node);
8680 + if (IS_ERR(priv->mmap)) {
8681 +@@ -859,8 +851,6 @@ static int intel_sso_led_remove(struct platform_device *pdev)
8682 + sso_led_shutdown(led);
8683 + }
8684 +
8685 +- clk_disable_unprepare(priv->fpid_clk);
8686 +- clk_disable_unprepare(priv->gclk);
8687 + regmap_exit(priv->mmap);
8688 +
8689 + return 0;
8690 +diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
8691 +index 2e495ff678562..fa3f5f504ff7d 100644
8692 +--- a/drivers/leds/led-class.c
8693 ++++ b/drivers/leds/led-class.c
8694 +@@ -285,10 +285,6 @@ struct led_classdev *__must_check devm_of_led_get(struct device *dev,
8695 + if (!dev)
8696 + return ERR_PTR(-EINVAL);
8697 +
8698 +- /* Not using device tree? */
8699 +- if (!IS_ENABLED(CONFIG_OF) || !dev->of_node)
8700 +- return ERR_PTR(-ENOTSUPP);
8701 +-
8702 + led = of_led_get(dev->of_node, index);
8703 + if (IS_ERR(led))
8704 + return led;
8705 +diff --git a/drivers/leds/leds-as3645a.c b/drivers/leds/leds-as3645a.c
8706 +index e8922fa033796..80411d41e802d 100644
8707 +--- a/drivers/leds/leds-as3645a.c
8708 ++++ b/drivers/leds/leds-as3645a.c
8709 +@@ -545,6 +545,7 @@ static int as3645a_parse_node(struct as3645a *flash,
8710 + if (!flash->indicator_node) {
8711 + dev_warn(&flash->client->dev,
8712 + "can't find indicator node\n");
8713 ++ rval = -ENODEV;
8714 + goto out_err;
8715 + }
8716 +
8717 +diff --git a/drivers/leds/leds-ktd2692.c b/drivers/leds/leds-ktd2692.c
8718 +index 632f10db4b3ff..f341da1503a49 100644
8719 +--- a/drivers/leds/leds-ktd2692.c
8720 ++++ b/drivers/leds/leds-ktd2692.c
8721 +@@ -256,6 +256,17 @@ static void ktd2692_setup(struct ktd2692_context *led)
8722 + | KTD2692_REG_FLASH_CURRENT_BASE);
8723 + }
8724 +
8725 ++static void regulator_disable_action(void *_data)
8726 ++{
8727 ++ struct device *dev = _data;
8728 ++ struct ktd2692_context *led = dev_get_drvdata(dev);
8729 ++ int ret;
8730 ++
8731 ++ ret = regulator_disable(led->regulator);
8732 ++ if (ret)
8733 ++ dev_err(dev, "Failed to disable supply: %d\n", ret);
8734 ++}
8735 ++
8736 + static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
8737 + struct ktd2692_led_config_data *cfg)
8738 + {
8739 +@@ -286,8 +297,14 @@ static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
8740 +
8741 + if (led->regulator) {
8742 + ret = regulator_enable(led->regulator);
8743 +- if (ret)
8744 ++ if (ret) {
8745 + dev_err(dev, "Failed to enable supply: %d\n", ret);
8746 ++ } else {
8747 ++ ret = devm_add_action_or_reset(dev,
8748 ++ regulator_disable_action, dev);
8749 ++ if (ret)
8750 ++ return ret;
8751 ++ }
8752 + }
8753 +
8754 + child_node = of_get_next_available_child(np, NULL);
8755 +@@ -377,17 +394,9 @@ static int ktd2692_probe(struct platform_device *pdev)
8756 + static int ktd2692_remove(struct platform_device *pdev)
8757 + {
8758 + struct ktd2692_context *led = platform_get_drvdata(pdev);
8759 +- int ret;
8760 +
8761 + led_classdev_flash_unregister(&led->fled_cdev);
8762 +
8763 +- if (led->regulator) {
8764 +- ret = regulator_disable(led->regulator);
8765 +- if (ret)
8766 +- dev_err(&pdev->dev,
8767 +- "Failed to disable supply: %d\n", ret);
8768 +- }
8769 +-
8770 + mutex_destroy(&led->lock);
8771 +
8772 + return 0;
8773 +diff --git a/drivers/leds/leds-lm36274.c b/drivers/leds/leds-lm36274.c
8774 +index aadb03468a40a..a23a9424c2f38 100644
8775 +--- a/drivers/leds/leds-lm36274.c
8776 ++++ b/drivers/leds/leds-lm36274.c
8777 +@@ -127,6 +127,7 @@ static int lm36274_probe(struct platform_device *pdev)
8778 +
8779 + ret = lm36274_init(chip);
8780 + if (ret) {
8781 ++ fwnode_handle_put(init_data.fwnode);
8782 + dev_err(chip->dev, "Failed to init the device\n");
8783 + return ret;
8784 + }
8785 +diff --git a/drivers/leds/leds-lm3692x.c b/drivers/leds/leds-lm3692x.c
8786 +index e945de45388ca..55e6443997ec9 100644
8787 +--- a/drivers/leds/leds-lm3692x.c
8788 ++++ b/drivers/leds/leds-lm3692x.c
8789 +@@ -435,6 +435,7 @@ static int lm3692x_probe_dt(struct lm3692x_led *led)
8790 +
8791 + ret = fwnode_property_read_u32(child, "reg", &led->led_enable);
8792 + if (ret) {
8793 ++ fwnode_handle_put(child);
8794 + dev_err(&led->client->dev, "reg DT property missing\n");
8795 + return ret;
8796 + }
8797 +@@ -449,12 +450,11 @@ static int lm3692x_probe_dt(struct lm3692x_led *led)
8798 +
8799 + ret = devm_led_classdev_register_ext(&led->client->dev, &led->led_dev,
8800 + &init_data);
8801 +- if (ret) {
8802 ++ if (ret)
8803 + dev_err(&led->client->dev, "led register err: %d\n", ret);
8804 +- return ret;
8805 +- }
8806 +
8807 +- return 0;
8808 ++ fwnode_handle_put(init_data.fwnode);
8809 ++ return ret;
8810 + }
8811 +
8812 + static int lm3692x_probe(struct i2c_client *client,
8813 +diff --git a/drivers/leds/leds-lm3697.c b/drivers/leds/leds-lm3697.c
8814 +index 7d216cdb91a8a..912e8bb22a995 100644
8815 +--- a/drivers/leds/leds-lm3697.c
8816 ++++ b/drivers/leds/leds-lm3697.c
8817 +@@ -203,11 +203,9 @@ static int lm3697_probe_dt(struct lm3697 *priv)
8818 +
8819 + priv->enable_gpio = devm_gpiod_get_optional(dev, "enable",
8820 + GPIOD_OUT_LOW);
8821 +- if (IS_ERR(priv->enable_gpio)) {
8822 +- ret = PTR_ERR(priv->enable_gpio);
8823 +- dev_err(dev, "Failed to get enable gpio: %d\n", ret);
8824 +- return ret;
8825 +- }
8826 ++ if (IS_ERR(priv->enable_gpio))
8827 ++ return dev_err_probe(dev, PTR_ERR(priv->enable_gpio),
8828 ++ "Failed to get enable GPIO\n");
8829 +
8830 + priv->regulator = devm_regulator_get(dev, "vled");
8831 + if (IS_ERR(priv->regulator))
8832 +diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
8833 +index 06230614fdc56..401df1e2e05d0 100644
8834 +--- a/drivers/leds/leds-lp50xx.c
8835 ++++ b/drivers/leds/leds-lp50xx.c
8836 +@@ -490,6 +490,7 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
8837 + ret = fwnode_property_read_u32(led_node, "color",
8838 + &color_id);
8839 + if (ret) {
8840 ++ fwnode_handle_put(led_node);
8841 + dev_err(priv->dev, "Cannot read color\n");
8842 + goto child_out;
8843 + }
8844 +@@ -512,7 +513,6 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
8845 + goto child_out;
8846 + }
8847 + i++;
8848 +- fwnode_handle_put(child);
8849 + }
8850 +
8851 + return 0;
8852 +diff --git a/drivers/mailbox/qcom-apcs-ipc-mailbox.c b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
8853 +index f25324d03842e..15236d7296258 100644
8854 +--- a/drivers/mailbox/qcom-apcs-ipc-mailbox.c
8855 ++++ b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
8856 +@@ -132,7 +132,7 @@ static int qcom_apcs_ipc_probe(struct platform_device *pdev)
8857 + if (apcs_data->clk_name) {
8858 + apcs->clk = platform_device_register_data(&pdev->dev,
8859 + apcs_data->clk_name,
8860 +- PLATFORM_DEVID_NONE,
8861 ++ PLATFORM_DEVID_AUTO,
8862 + NULL, 0);
8863 + if (IS_ERR(apcs->clk))
8864 + dev_err(&pdev->dev, "failed to register APCS clk\n");
8865 +diff --git a/drivers/mailbox/qcom-ipcc.c b/drivers/mailbox/qcom-ipcc.c
8866 +index 2d13c72944c6f..584700cd15855 100644
8867 +--- a/drivers/mailbox/qcom-ipcc.c
8868 ++++ b/drivers/mailbox/qcom-ipcc.c
8869 +@@ -155,6 +155,11 @@ static int qcom_ipcc_mbox_send_data(struct mbox_chan *chan, void *data)
8870 + return 0;
8871 + }
8872 +
8873 ++static void qcom_ipcc_mbox_shutdown(struct mbox_chan *chan)
8874 ++{
8875 ++ chan->con_priv = NULL;
8876 ++}
8877 ++
8878 + static struct mbox_chan *qcom_ipcc_mbox_xlate(struct mbox_controller *mbox,
8879 + const struct of_phandle_args *ph)
8880 + {
8881 +@@ -184,6 +189,7 @@ static struct mbox_chan *qcom_ipcc_mbox_xlate(struct mbox_controller *mbox,
8882 +
8883 + static const struct mbox_chan_ops ipcc_mbox_chan_ops = {
8884 + .send_data = qcom_ipcc_mbox_send_data,
8885 ++ .shutdown = qcom_ipcc_mbox_shutdown,
8886 + };
8887 +
8888 + static int qcom_ipcc_setup_mbox(struct qcom_ipcc *ipcc)
8889 +diff --git a/drivers/md/md.c b/drivers/md/md.c
8890 +index 2a9553efc2d1b..c21ce8070d3c8 100644
8891 +--- a/drivers/md/md.c
8892 ++++ b/drivers/md/md.c
8893 +@@ -441,30 +441,6 @@ check_suspended:
8894 + }
8895 + EXPORT_SYMBOL(md_handle_request);
8896 +
8897 +-struct md_io {
8898 +- struct mddev *mddev;
8899 +- bio_end_io_t *orig_bi_end_io;
8900 +- void *orig_bi_private;
8901 +- struct block_device *orig_bi_bdev;
8902 +- unsigned long start_time;
8903 +-};
8904 +-
8905 +-static void md_end_io(struct bio *bio)
8906 +-{
8907 +- struct md_io *md_io = bio->bi_private;
8908 +- struct mddev *mddev = md_io->mddev;
8909 +-
8910 +- bio_end_io_acct_remapped(bio, md_io->start_time, md_io->orig_bi_bdev);
8911 +-
8912 +- bio->bi_end_io = md_io->orig_bi_end_io;
8913 +- bio->bi_private = md_io->orig_bi_private;
8914 +-
8915 +- mempool_free(md_io, &mddev->md_io_pool);
8916 +-
8917 +- if (bio->bi_end_io)
8918 +- bio->bi_end_io(bio);
8919 +-}
8920 +-
8921 + static blk_qc_t md_submit_bio(struct bio *bio)
8922 + {
8923 + const int rw = bio_data_dir(bio);
8924 +@@ -489,21 +465,6 @@ static blk_qc_t md_submit_bio(struct bio *bio)
8925 + return BLK_QC_T_NONE;
8926 + }
8927 +
8928 +- if (bio->bi_end_io != md_end_io) {
8929 +- struct md_io *md_io;
8930 +-
8931 +- md_io = mempool_alloc(&mddev->md_io_pool, GFP_NOIO);
8932 +- md_io->mddev = mddev;
8933 +- md_io->orig_bi_end_io = bio->bi_end_io;
8934 +- md_io->orig_bi_private = bio->bi_private;
8935 +- md_io->orig_bi_bdev = bio->bi_bdev;
8936 +-
8937 +- bio->bi_end_io = md_end_io;
8938 +- bio->bi_private = md_io;
8939 +-
8940 +- md_io->start_time = bio_start_io_acct(bio);
8941 +- }
8942 +-
8943 + /* bio could be mergeable after passing to underlayer */
8944 + bio->bi_opf &= ~REQ_NOMERGE;
8945 +
8946 +@@ -5614,7 +5575,6 @@ static void md_free(struct kobject *ko)
8947 +
8948 + bioset_exit(&mddev->bio_set);
8949 + bioset_exit(&mddev->sync_set);
8950 +- mempool_exit(&mddev->md_io_pool);
8951 + kfree(mddev);
8952 + }
8953 +
8954 +@@ -5710,11 +5670,6 @@ static int md_alloc(dev_t dev, char *name)
8955 + */
8956 + mddev->hold_active = UNTIL_STOP;
8957 +
8958 +- error = mempool_init_kmalloc_pool(&mddev->md_io_pool, BIO_POOL_SIZE,
8959 +- sizeof(struct md_io));
8960 +- if (error)
8961 +- goto abort;
8962 +-
8963 + error = -ENOMEM;
8964 + mddev->queue = blk_alloc_queue(NUMA_NO_NODE);
8965 + if (!mddev->queue)
8966 +diff --git a/drivers/md/md.h b/drivers/md/md.h
8967 +index bcbba1b5ec4a7..5b2da02e2e758 100644
8968 +--- a/drivers/md/md.h
8969 ++++ b/drivers/md/md.h
8970 +@@ -487,7 +487,6 @@ struct mddev {
8971 + struct bio_set sync_set; /* for sync operations like
8972 + * metadata and bitmap writes
8973 + */
8974 +- mempool_t md_io_pool;
8975 +
8976 + /* Generic flush handling.
8977 + * The last to finish preflush schedules a worker to submit
8978 +diff --git a/drivers/media/cec/platform/s5p/s5p_cec.c b/drivers/media/cec/platform/s5p/s5p_cec.c
8979 +index 2a3e7ffefe0a2..028a09a7531ef 100644
8980 +--- a/drivers/media/cec/platform/s5p/s5p_cec.c
8981 ++++ b/drivers/media/cec/platform/s5p/s5p_cec.c
8982 +@@ -35,10 +35,13 @@ MODULE_PARM_DESC(debug, "debug level (0-2)");
8983 +
8984 + static int s5p_cec_adap_enable(struct cec_adapter *adap, bool enable)
8985 + {
8986 ++ int ret;
8987 + struct s5p_cec_dev *cec = cec_get_drvdata(adap);
8988 +
8989 + if (enable) {
8990 +- pm_runtime_get_sync(cec->dev);
8991 ++ ret = pm_runtime_resume_and_get(cec->dev);
8992 ++ if (ret < 0)
8993 ++ return ret;
8994 +
8995 + s5p_cec_reset(cec);
8996 +
8997 +@@ -51,7 +54,7 @@ static int s5p_cec_adap_enable(struct cec_adapter *adap, bool enable)
8998 + } else {
8999 + s5p_cec_mask_tx_interrupts(cec);
9000 + s5p_cec_mask_rx_interrupts(cec);
9001 +- pm_runtime_disable(cec->dev);
9002 ++ pm_runtime_put(cec->dev);
9003 + }
9004 +
9005 + return 0;
9006 +diff --git a/drivers/media/common/siano/smscoreapi.c b/drivers/media/common/siano/smscoreapi.c
9007 +index c1511094fdc7b..b735e23701373 100644
9008 +--- a/drivers/media/common/siano/smscoreapi.c
9009 ++++ b/drivers/media/common/siano/smscoreapi.c
9010 +@@ -908,7 +908,7 @@ static int smscore_load_firmware_family2(struct smscore_device_t *coredev,
9011 + void *buffer, size_t size)
9012 + {
9013 + struct sms_firmware *firmware = (struct sms_firmware *) buffer;
9014 +- struct sms_msg_data4 *msg;
9015 ++ struct sms_msg_data5 *msg;
9016 + u32 mem_address, calc_checksum = 0;
9017 + u32 i, *ptr;
9018 + u8 *payload = firmware->payload;
9019 +@@ -989,24 +989,20 @@ static int smscore_load_firmware_family2(struct smscore_device_t *coredev,
9020 + goto exit_fw_download;
9021 +
9022 + if (coredev->mode == DEVICE_MODE_NONE) {
9023 +- struct sms_msg_data *trigger_msg =
9024 +- (struct sms_msg_data *) msg;
9025 +-
9026 + pr_debug("sending MSG_SMS_SWDOWNLOAD_TRIGGER_REQ\n");
9027 + SMS_INIT_MSG(&msg->x_msg_header,
9028 + MSG_SMS_SWDOWNLOAD_TRIGGER_REQ,
9029 +- sizeof(struct sms_msg_hdr) +
9030 +- sizeof(u32) * 5);
9031 ++ sizeof(*msg));
9032 +
9033 +- trigger_msg->msg_data[0] = firmware->start_address;
9034 ++ msg->msg_data[0] = firmware->start_address;
9035 + /* Entry point */
9036 +- trigger_msg->msg_data[1] = 6; /* Priority */
9037 +- trigger_msg->msg_data[2] = 0x200; /* Stack size */
9038 +- trigger_msg->msg_data[3] = 0; /* Parameter */
9039 +- trigger_msg->msg_data[4] = 4; /* Task ID */
9040 ++ msg->msg_data[1] = 6; /* Priority */
9041 ++ msg->msg_data[2] = 0x200; /* Stack size */
9042 ++ msg->msg_data[3] = 0; /* Parameter */
9043 ++ msg->msg_data[4] = 4; /* Task ID */
9044 +
9045 +- rc = smscore_sendrequest_and_wait(coredev, trigger_msg,
9046 +- trigger_msg->x_msg_header.msg_length,
9047 ++ rc = smscore_sendrequest_and_wait(coredev, msg,
9048 ++ msg->x_msg_header.msg_length,
9049 + &coredev->trigger_done);
9050 + } else {
9051 + SMS_INIT_MSG(&msg->x_msg_header, MSG_SW_RELOAD_EXEC_REQ,
9052 +diff --git a/drivers/media/common/siano/smscoreapi.h b/drivers/media/common/siano/smscoreapi.h
9053 +index b3b793b5caf35..16c45afabc530 100644
9054 +--- a/drivers/media/common/siano/smscoreapi.h
9055 ++++ b/drivers/media/common/siano/smscoreapi.h
9056 +@@ -629,9 +629,9 @@ struct sms_msg_data2 {
9057 + u32 msg_data[2];
9058 + };
9059 +
9060 +-struct sms_msg_data4 {
9061 ++struct sms_msg_data5 {
9062 + struct sms_msg_hdr x_msg_header;
9063 +- u32 msg_data[4];
9064 ++ u32 msg_data[5];
9065 + };
9066 +
9067 + struct sms_data_download {
9068 +diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
9069 +index ae17407e477a4..7cc654bc52d37 100644
9070 +--- a/drivers/media/common/siano/smsdvb-main.c
9071 ++++ b/drivers/media/common/siano/smsdvb-main.c
9072 +@@ -1176,6 +1176,10 @@ static int smsdvb_hotplug(struct smscore_device_t *coredev,
9073 + return 0;
9074 +
9075 + media_graph_error:
9076 ++ mutex_lock(&g_smsdvb_clientslock);
9077 ++ list_del(&client->entry);
9078 ++ mutex_unlock(&g_smsdvb_clientslock);
9079 ++
9080 + smsdvb_debugfs_release(client);
9081 +
9082 + client_error:
9083 +diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
9084 +index 89620da983bab..dddebea644bb8 100644
9085 +--- a/drivers/media/dvb-core/dvb_net.c
9086 ++++ b/drivers/media/dvb-core/dvb_net.c
9087 +@@ -45,6 +45,7 @@
9088 + #include <linux/module.h>
9089 + #include <linux/kernel.h>
9090 + #include <linux/netdevice.h>
9091 ++#include <linux/nospec.h>
9092 + #include <linux/etherdevice.h>
9093 + #include <linux/dvb/net.h>
9094 + #include <linux/uio.h>
9095 +@@ -1462,14 +1463,20 @@ static int dvb_net_do_ioctl(struct file *file,
9096 + struct net_device *netdev;
9097 + struct dvb_net_priv *priv_data;
9098 + struct dvb_net_if *dvbnetif = parg;
9099 ++ int if_num = dvbnetif->if_num;
9100 +
9101 +- if (dvbnetif->if_num >= DVB_NET_DEVICES_MAX ||
9102 +- !dvbnet->state[dvbnetif->if_num]) {
9103 ++ if (if_num >= DVB_NET_DEVICES_MAX) {
9104 + ret = -EINVAL;
9105 + goto ioctl_error;
9106 + }
9107 ++ if_num = array_index_nospec(if_num, DVB_NET_DEVICES_MAX);
9108 +
9109 +- netdev = dvbnet->device[dvbnetif->if_num];
9110 ++ if (!dvbnet->state[if_num]) {
9111 ++ ret = -EINVAL;
9112 ++ goto ioctl_error;
9113 ++ }
9114 ++
9115 ++ netdev = dvbnet->device[if_num];
9116 +
9117 + priv_data = netdev_priv(netdev);
9118 + dvbnetif->pid=priv_data->pid;
9119 +@@ -1522,14 +1529,20 @@ static int dvb_net_do_ioctl(struct file *file,
9120 + struct net_device *netdev;
9121 + struct dvb_net_priv *priv_data;
9122 + struct __dvb_net_if_old *dvbnetif = parg;
9123 ++ int if_num = dvbnetif->if_num;
9124 ++
9125 ++ if (if_num >= DVB_NET_DEVICES_MAX) {
9126 ++ ret = -EINVAL;
9127 ++ goto ioctl_error;
9128 ++ }
9129 ++ if_num = array_index_nospec(if_num, DVB_NET_DEVICES_MAX);
9130 +
9131 +- if (dvbnetif->if_num >= DVB_NET_DEVICES_MAX ||
9132 +- !dvbnet->state[dvbnetif->if_num]) {
9133 ++ if (!dvbnet->state[if_num]) {
9134 + ret = -EINVAL;
9135 + goto ioctl_error;
9136 + }
9137 +
9138 +- netdev = dvbnet->device[dvbnetif->if_num];
9139 ++ netdev = dvbnet->device[if_num];
9140 +
9141 + priv_data = netdev_priv(netdev);
9142 + dvbnetif->pid=priv_data->pid;
9143 +diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
9144 +index 3862ddc86ec48..795d9bfaba5cf 100644
9145 +--- a/drivers/media/dvb-core/dvbdev.c
9146 ++++ b/drivers/media/dvb-core/dvbdev.c
9147 +@@ -506,6 +506,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
9148 + break;
9149 +
9150 + if (minor == MAX_DVB_MINORS) {
9151 ++ list_del (&dvbdev->list_head);
9152 + kfree(dvbdevfops);
9153 + kfree(dvbdev);
9154 + up_write(&minor_rwsem);
9155 +@@ -526,6 +527,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
9156 + __func__);
9157 +
9158 + dvb_media_device_free(dvbdev);
9159 ++ list_del (&dvbdev->list_head);
9160 + kfree(dvbdevfops);
9161 + kfree(dvbdev);
9162 + mutex_unlock(&dvbdev_register_lock);
9163 +@@ -541,6 +543,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
9164 + pr_err("%s: failed to create device dvb%d.%s%d (%ld)\n",
9165 + __func__, adap->num, dnames[type], id, PTR_ERR(clsdev));
9166 + dvb_media_device_free(dvbdev);
9167 ++ list_del (&dvbdev->list_head);
9168 + kfree(dvbdevfops);
9169 + kfree(dvbdev);
9170 + return PTR_ERR(clsdev);
9171 +diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
9172 +index 4505594996bd8..fde0c51f04069 100644
9173 +--- a/drivers/media/i2c/ccs/ccs-core.c
9174 ++++ b/drivers/media/i2c/ccs/ccs-core.c
9175 +@@ -3093,7 +3093,7 @@ static int __maybe_unused ccs_suspend(struct device *dev)
9176 + if (rval < 0) {
9177 + pm_runtime_put_noidle(dev);
9178 +
9179 +- return -EAGAIN;
9180 ++ return rval;
9181 + }
9182 +
9183 + if (sensor->streaming)
9184 +diff --git a/drivers/media/i2c/imx334.c b/drivers/media/i2c/imx334.c
9185 +index ad530f0d338a1..02d22907c75c2 100644
9186 +--- a/drivers/media/i2c/imx334.c
9187 ++++ b/drivers/media/i2c/imx334.c
9188 +@@ -717,9 +717,9 @@ static int imx334_set_stream(struct v4l2_subdev *sd, int enable)
9189 + }
9190 +
9191 + if (enable) {
9192 +- ret = pm_runtime_get_sync(imx334->dev);
9193 +- if (ret)
9194 +- goto error_power_off;
9195 ++ ret = pm_runtime_resume_and_get(imx334->dev);
9196 ++ if (ret < 0)
9197 ++ goto error_unlock;
9198 +
9199 + ret = imx334_start_streaming(imx334);
9200 + if (ret)
9201 +@@ -737,6 +737,7 @@ static int imx334_set_stream(struct v4l2_subdev *sd, int enable)
9202 +
9203 + error_power_off:
9204 + pm_runtime_put(imx334->dev);
9205 ++error_unlock:
9206 + mutex_unlock(&imx334->mutex);
9207 +
9208 + return ret;
9209 +diff --git a/drivers/media/i2c/ir-kbd-i2c.c b/drivers/media/i2c/ir-kbd-i2c.c
9210 +index e8119ad0bc71d..92376592455ee 100644
9211 +--- a/drivers/media/i2c/ir-kbd-i2c.c
9212 ++++ b/drivers/media/i2c/ir-kbd-i2c.c
9213 +@@ -678,8 +678,8 @@ static int zilog_tx(struct rc_dev *rcdev, unsigned int *txbuf,
9214 + goto out_unlock;
9215 + }
9216 +
9217 +- i = i2c_master_recv(ir->tx_c, buf, 1);
9218 +- if (i != 1) {
9219 ++ ret = i2c_master_recv(ir->tx_c, buf, 1);
9220 ++ if (ret != 1) {
9221 + dev_err(&ir->rc->dev, "i2c_master_recv failed with %d\n", ret);
9222 + ret = -EIO;
9223 + goto out_unlock;
9224 +diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
9225 +index 42f64175a6dff..fb78a1cedc03b 100644
9226 +--- a/drivers/media/i2c/ov2659.c
9227 ++++ b/drivers/media/i2c/ov2659.c
9228 +@@ -204,6 +204,7 @@ struct ov2659 {
9229 + struct i2c_client *client;
9230 + struct v4l2_ctrl_handler ctrls;
9231 + struct v4l2_ctrl *link_frequency;
9232 ++ struct clk *clk;
9233 + const struct ov2659_framesize *frame_size;
9234 + struct sensor_register *format_ctrl_regs;
9235 + struct ov2659_pll_ctrl pll;
9236 +@@ -1270,6 +1271,8 @@ static int ov2659_power_off(struct device *dev)
9237 +
9238 + gpiod_set_value(ov2659->pwdn_gpio, 1);
9239 +
9240 ++ clk_disable_unprepare(ov2659->clk);
9241 ++
9242 + return 0;
9243 + }
9244 +
9245 +@@ -1278,9 +1281,17 @@ static int ov2659_power_on(struct device *dev)
9246 + struct i2c_client *client = to_i2c_client(dev);
9247 + struct v4l2_subdev *sd = i2c_get_clientdata(client);
9248 + struct ov2659 *ov2659 = to_ov2659(sd);
9249 ++ int ret;
9250 +
9251 + dev_dbg(&client->dev, "%s:\n", __func__);
9252 +
9253 ++ ret = clk_prepare_enable(ov2659->clk);
9254 ++ if (ret) {
9255 ++ dev_err(&client->dev, "%s: failed to enable clock\n",
9256 ++ __func__);
9257 ++ return ret;
9258 ++ }
9259 ++
9260 + gpiod_set_value(ov2659->pwdn_gpio, 0);
9261 +
9262 + if (ov2659->resetb_gpio) {
9263 +@@ -1425,7 +1436,6 @@ static int ov2659_probe(struct i2c_client *client)
9264 + const struct ov2659_platform_data *pdata = ov2659_get_pdata(client);
9265 + struct v4l2_subdev *sd;
9266 + struct ov2659 *ov2659;
9267 +- struct clk *clk;
9268 + int ret;
9269 +
9270 + if (!pdata) {
9271 +@@ -1440,11 +1450,11 @@ static int ov2659_probe(struct i2c_client *client)
9272 + ov2659->pdata = pdata;
9273 + ov2659->client = client;
9274 +
9275 +- clk = devm_clk_get(&client->dev, "xvclk");
9276 +- if (IS_ERR(clk))
9277 +- return PTR_ERR(clk);
9278 ++ ov2659->clk = devm_clk_get(&client->dev, "xvclk");
9279 ++ if (IS_ERR(ov2659->clk))
9280 ++ return PTR_ERR(ov2659->clk);
9281 +
9282 +- ov2659->xvclk_frequency = clk_get_rate(clk);
9283 ++ ov2659->xvclk_frequency = clk_get_rate(ov2659->clk);
9284 + if (ov2659->xvclk_frequency < 6000000 ||
9285 + ov2659->xvclk_frequency > 27000000)
9286 + return -EINVAL;
9287 +@@ -1506,7 +1516,9 @@ static int ov2659_probe(struct i2c_client *client)
9288 + ov2659->frame_size = &ov2659_framesizes[2];
9289 + ov2659->format_ctrl_regs = ov2659_formats[0].format_ctrl_regs;
9290 +
9291 +- ov2659_power_on(&client->dev);
9292 ++ ret = ov2659_power_on(&client->dev);
9293 ++ if (ret < 0)
9294 ++ goto error;
9295 +
9296 + ret = ov2659_detect(sd);
9297 + if (ret < 0)
9298 +diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c
9299 +index 179d107f494ca..50e2af5227603 100644
9300 +--- a/drivers/media/i2c/rdacm21.c
9301 ++++ b/drivers/media/i2c/rdacm21.c
9302 +@@ -69,6 +69,7 @@
9303 + #define OV490_ISP_VSIZE_LOW 0x80820062
9304 + #define OV490_ISP_VSIZE_HIGH 0x80820063
9305 +
9306 ++#define OV10640_PID_TIMEOUT 20
9307 + #define OV10640_ID_HIGH 0xa6
9308 + #define OV10640_CHIP_ID 0x300a
9309 + #define OV10640_PIXEL_RATE 55000000
9310 +@@ -329,30 +330,51 @@ static const struct v4l2_subdev_ops rdacm21_subdev_ops = {
9311 + .pad = &rdacm21_subdev_pad_ops,
9312 + };
9313 +
9314 +-static int ov10640_initialize(struct rdacm21_device *dev)
9315 ++static void ov10640_power_up(struct rdacm21_device *dev)
9316 + {
9317 +- u8 val;
9318 +-
9319 +- /* Power-up OV10640 by setting RESETB and PWDNB pins high. */
9320 ++ /* Enable GPIO0#0 (reset) and GPIO1#0 (pwdn) as output lines. */
9321 + ov490_write_reg(dev, OV490_GPIO_SEL0, OV490_GPIO0);
9322 + ov490_write_reg(dev, OV490_GPIO_SEL1, OV490_SPWDN0);
9323 + ov490_write_reg(dev, OV490_GPIO_DIRECTION0, OV490_GPIO0);
9324 + ov490_write_reg(dev, OV490_GPIO_DIRECTION1, OV490_SPWDN0);
9325 ++
9326 ++ /* Power up OV10640 and then reset it. */
9327 ++ ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE1, OV490_SPWDN0);
9328 ++ usleep_range(1500, 3000);
9329 ++
9330 ++ ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE0, 0x00);
9331 ++ usleep_range(1500, 3000);
9332 + ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE0, OV490_GPIO0);
9333 +- ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE0, OV490_SPWDN0);
9334 + usleep_range(3000, 5000);
9335 ++}
9336 +
9337 +- /* Read OV10640 ID to test communications. */
9338 +- ov490_write_reg(dev, OV490_SCCB_SLAVE0_DIR, OV490_SCCB_SLAVE_READ);
9339 +- ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_HIGH, OV10640_CHIP_ID >> 8);
9340 +- ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_LOW, OV10640_CHIP_ID & 0xff);
9341 +-
9342 +- /* Trigger SCCB slave transaction and give it some time to complete. */
9343 +- ov490_write_reg(dev, OV490_HOST_CMD, OV490_HOST_CMD_TRIGGER);
9344 +- usleep_range(1000, 1500);
9345 ++static int ov10640_check_id(struct rdacm21_device *dev)
9346 ++{
9347 ++ unsigned int i;
9348 ++ u8 val;
9349 +
9350 +- ov490_read_reg(dev, OV490_SCCB_SLAVE0_DIR, &val);
9351 +- if (val != OV10640_ID_HIGH) {
9352 ++ /* Read OV10640 ID to test communications. */
9353 ++ for (i = 0; i < OV10640_PID_TIMEOUT; ++i) {
9354 ++ ov490_write_reg(dev, OV490_SCCB_SLAVE0_DIR,
9355 ++ OV490_SCCB_SLAVE_READ);
9356 ++ ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_HIGH,
9357 ++ OV10640_CHIP_ID >> 8);
9358 ++ ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_LOW,
9359 ++ OV10640_CHIP_ID & 0xff);
9360 ++
9361 ++ /*
9362 ++ * Trigger SCCB slave transaction and give it some time
9363 ++ * to complete.
9364 ++ */
9365 ++ ov490_write_reg(dev, OV490_HOST_CMD, OV490_HOST_CMD_TRIGGER);
9366 ++ usleep_range(1000, 1500);
9367 ++
9368 ++ ov490_read_reg(dev, OV490_SCCB_SLAVE0_DIR, &val);
9369 ++ if (val == OV10640_ID_HIGH)
9370 ++ break;
9371 ++ usleep_range(1000, 1500);
9372 ++ }
9373 ++ if (i == OV10640_PID_TIMEOUT) {
9374 + dev_err(dev->dev, "OV10640 ID mismatch: (0x%02x)\n", val);
9375 + return -ENODEV;
9376 + }
9377 +@@ -368,6 +390,8 @@ static int ov490_initialize(struct rdacm21_device *dev)
9378 + unsigned int i;
9379 + int ret;
9380 +
9381 ++ ov10640_power_up(dev);
9382 ++
9383 + /*
9384 + * Read OV490 Id to test communications. Give it up to 40msec to
9385 + * exit from reset.
9386 +@@ -405,7 +429,7 @@ static int ov490_initialize(struct rdacm21_device *dev)
9387 + return -ENODEV;
9388 + }
9389 +
9390 +- ret = ov10640_initialize(dev);
9391 ++ ret = ov10640_check_id(dev);
9392 + if (ret)
9393 + return ret;
9394 +
9395 +diff --git a/drivers/media/i2c/s5c73m3/s5c73m3-core.c b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
9396 +index 5b4c4a3547c93..71804a70bc6d7 100644
9397 +--- a/drivers/media/i2c/s5c73m3/s5c73m3-core.c
9398 ++++ b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
9399 +@@ -1386,7 +1386,7 @@ static int __s5c73m3_power_on(struct s5c73m3 *state)
9400 + s5c73m3_gpio_deassert(state, STBY);
9401 + usleep_range(100, 200);
9402 +
9403 +- s5c73m3_gpio_deassert(state, RST);
9404 ++ s5c73m3_gpio_deassert(state, RSET);
9405 + usleep_range(50, 100);
9406 +
9407 + return 0;
9408 +@@ -1401,7 +1401,7 @@ static int __s5c73m3_power_off(struct s5c73m3 *state)
9409 + {
9410 + int i, ret;
9411 +
9412 +- if (s5c73m3_gpio_assert(state, RST))
9413 ++ if (s5c73m3_gpio_assert(state, RSET))
9414 + usleep_range(10, 50);
9415 +
9416 + if (s5c73m3_gpio_assert(state, STBY))
9417 +@@ -1606,7 +1606,7 @@ static int s5c73m3_get_platform_data(struct s5c73m3 *state)
9418 +
9419 + state->mclk_frequency = pdata->mclk_frequency;
9420 + state->gpio[STBY] = pdata->gpio_stby;
9421 +- state->gpio[RST] = pdata->gpio_reset;
9422 ++ state->gpio[RSET] = pdata->gpio_reset;
9423 + return 0;
9424 + }
9425 +
9426 +diff --git a/drivers/media/i2c/s5c73m3/s5c73m3.h b/drivers/media/i2c/s5c73m3/s5c73m3.h
9427 +index ef7e85b34263b..c3fcfdd3ea66d 100644
9428 +--- a/drivers/media/i2c/s5c73m3/s5c73m3.h
9429 ++++ b/drivers/media/i2c/s5c73m3/s5c73m3.h
9430 +@@ -353,7 +353,7 @@ struct s5c73m3_ctrls {
9431 +
9432 + enum s5c73m3_gpio_id {
9433 + STBY,
9434 +- RST,
9435 ++ RSET,
9436 + GPIO_NUM,
9437 + };
9438 +
9439 +diff --git a/drivers/media/i2c/s5k4ecgx.c b/drivers/media/i2c/s5k4ecgx.c
9440 +index b2d53417badf6..4e97309a67f41 100644
9441 +--- a/drivers/media/i2c/s5k4ecgx.c
9442 ++++ b/drivers/media/i2c/s5k4ecgx.c
9443 +@@ -173,7 +173,7 @@ static const char * const s5k4ecgx_supply_names[] = {
9444 +
9445 + enum s5k4ecgx_gpio_id {
9446 + STBY,
9447 +- RST,
9448 ++ RSET,
9449 + GPIO_NUM,
9450 + };
9451 +
9452 +@@ -476,7 +476,7 @@ static int __s5k4ecgx_power_on(struct s5k4ecgx *priv)
9453 + if (s5k4ecgx_gpio_set_value(priv, STBY, priv->gpio[STBY].level))
9454 + usleep_range(30, 50);
9455 +
9456 +- if (s5k4ecgx_gpio_set_value(priv, RST, priv->gpio[RST].level))
9457 ++ if (s5k4ecgx_gpio_set_value(priv, RSET, priv->gpio[RSET].level))
9458 + usleep_range(30, 50);
9459 +
9460 + return 0;
9461 +@@ -484,7 +484,7 @@ static int __s5k4ecgx_power_on(struct s5k4ecgx *priv)
9462 +
9463 + static int __s5k4ecgx_power_off(struct s5k4ecgx *priv)
9464 + {
9465 +- if (s5k4ecgx_gpio_set_value(priv, RST, !priv->gpio[RST].level))
9466 ++ if (s5k4ecgx_gpio_set_value(priv, RSET, !priv->gpio[RSET].level))
9467 + usleep_range(30, 50);
9468 +
9469 + if (s5k4ecgx_gpio_set_value(priv, STBY, !priv->gpio[STBY].level))
9470 +@@ -872,7 +872,7 @@ static int s5k4ecgx_config_gpios(struct s5k4ecgx *priv,
9471 + int ret;
9472 +
9473 + priv->gpio[STBY].gpio = -EINVAL;
9474 +- priv->gpio[RST].gpio = -EINVAL;
9475 ++ priv->gpio[RSET].gpio = -EINVAL;
9476 +
9477 + ret = s5k4ecgx_config_gpio(gpio->gpio, gpio->level, "S5K4ECGX_STBY");
9478 +
9479 +@@ -891,7 +891,7 @@ static int s5k4ecgx_config_gpios(struct s5k4ecgx *priv,
9480 + s5k4ecgx_free_gpios(priv);
9481 + return ret;
9482 + }
9483 +- priv->gpio[RST] = *gpio;
9484 ++ priv->gpio[RSET] = *gpio;
9485 + if (gpio_is_valid(gpio->gpio))
9486 + gpio_set_value(gpio->gpio, 0);
9487 +
9488 +diff --git a/drivers/media/i2c/s5k5baf.c b/drivers/media/i2c/s5k5baf.c
9489 +index ec6f22efe19ad..ec65a8e084c6a 100644
9490 +--- a/drivers/media/i2c/s5k5baf.c
9491 ++++ b/drivers/media/i2c/s5k5baf.c
9492 +@@ -235,7 +235,7 @@ struct s5k5baf_gpio {
9493 +
9494 + enum s5k5baf_gpio_id {
9495 + STBY,
9496 +- RST,
9497 ++ RSET,
9498 + NUM_GPIOS,
9499 + };
9500 +
9501 +@@ -969,7 +969,7 @@ static int s5k5baf_power_on(struct s5k5baf *state)
9502 +
9503 + s5k5baf_gpio_deassert(state, STBY);
9504 + usleep_range(50, 100);
9505 +- s5k5baf_gpio_deassert(state, RST);
9506 ++ s5k5baf_gpio_deassert(state, RSET);
9507 + return 0;
9508 +
9509 + err_reg_dis:
9510 +@@ -987,7 +987,7 @@ static int s5k5baf_power_off(struct s5k5baf *state)
9511 + state->apply_cfg = 0;
9512 + state->apply_crop = 0;
9513 +
9514 +- s5k5baf_gpio_assert(state, RST);
9515 ++ s5k5baf_gpio_assert(state, RSET);
9516 + s5k5baf_gpio_assert(state, STBY);
9517 +
9518 + if (!IS_ERR(state->clock))
9519 +diff --git a/drivers/media/i2c/s5k6aa.c b/drivers/media/i2c/s5k6aa.c
9520 +index 72439fae7968b..6516e205e9a3d 100644
9521 +--- a/drivers/media/i2c/s5k6aa.c
9522 ++++ b/drivers/media/i2c/s5k6aa.c
9523 +@@ -177,7 +177,7 @@ static const char * const s5k6aa_supply_names[] = {
9524 +
9525 + enum s5k6aa_gpio_id {
9526 + STBY,
9527 +- RST,
9528 ++ RSET,
9529 + GPIO_NUM,
9530 + };
9531 +
9532 +@@ -841,7 +841,7 @@ static int __s5k6aa_power_on(struct s5k6aa *s5k6aa)
9533 + ret = s5k6aa->s_power(1);
9534 + usleep_range(4000, 5000);
9535 +
9536 +- if (s5k6aa_gpio_deassert(s5k6aa, RST))
9537 ++ if (s5k6aa_gpio_deassert(s5k6aa, RSET))
9538 + msleep(20);
9539 +
9540 + return ret;
9541 +@@ -851,7 +851,7 @@ static int __s5k6aa_power_off(struct s5k6aa *s5k6aa)
9542 + {
9543 + int ret;
9544 +
9545 +- if (s5k6aa_gpio_assert(s5k6aa, RST))
9546 ++ if (s5k6aa_gpio_assert(s5k6aa, RSET))
9547 + usleep_range(100, 150);
9548 +
9549 + if (s5k6aa->s_power) {
9550 +@@ -1510,7 +1510,7 @@ static int s5k6aa_configure_gpios(struct s5k6aa *s5k6aa,
9551 + int ret;
9552 +
9553 + s5k6aa->gpio[STBY].gpio = -EINVAL;
9554 +- s5k6aa->gpio[RST].gpio = -EINVAL;
9555 ++ s5k6aa->gpio[RSET].gpio = -EINVAL;
9556 +
9557 + gpio = &pdata->gpio_stby;
9558 + if (gpio_is_valid(gpio->gpio)) {
9559 +@@ -1533,7 +1533,7 @@ static int s5k6aa_configure_gpios(struct s5k6aa *s5k6aa,
9560 + if (ret < 0)
9561 + return ret;
9562 +
9563 +- s5k6aa->gpio[RST] = *gpio;
9564 ++ s5k6aa->gpio[RSET] = *gpio;
9565 + }
9566 +
9567 + return 0;
9568 +diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
9569 +index 1b309bb743c7b..f21da11caf224 100644
9570 +--- a/drivers/media/i2c/tc358743.c
9571 ++++ b/drivers/media/i2c/tc358743.c
9572 +@@ -1974,6 +1974,7 @@ static int tc358743_probe_of(struct tc358743_state *state)
9573 + bps_pr_lane = 2 * endpoint.link_frequencies[0];
9574 + if (bps_pr_lane < 62500000U || bps_pr_lane > 1000000000U) {
9575 + dev_err(dev, "unsupported bps per lane: %u bps\n", bps_pr_lane);
9576 ++ ret = -EINVAL;
9577 + goto disable_clk;
9578 + }
9579 +
9580 +diff --git a/drivers/media/mc/Makefile b/drivers/media/mc/Makefile
9581 +index 119037f0e686d..2b7af42ba59c1 100644
9582 +--- a/drivers/media/mc/Makefile
9583 ++++ b/drivers/media/mc/Makefile
9584 +@@ -3,7 +3,7 @@
9585 + mc-objs := mc-device.o mc-devnode.o mc-entity.o \
9586 + mc-request.o
9587 +
9588 +-ifeq ($(CONFIG_USB),y)
9589 ++ifneq ($(CONFIG_USB),)
9590 + mc-objs += mc-dev-allocator.o
9591 + endif
9592 +
9593 +diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
9594 +index 78dd35c9b65d7..90972d6952f1c 100644
9595 +--- a/drivers/media/pci/bt8xx/bt878.c
9596 ++++ b/drivers/media/pci/bt8xx/bt878.c
9597 +@@ -300,7 +300,8 @@ static irqreturn_t bt878_irq(int irq, void *dev_id)
9598 + }
9599 + if (astat & BT878_ARISCI) {
9600 + bt->finished_block = (stat & BT878_ARISCS) >> 28;
9601 +- tasklet_schedule(&bt->tasklet);
9602 ++ if (bt->tasklet.callback)
9603 ++ tasklet_schedule(&bt->tasklet);
9604 + break;
9605 + }
9606 + count++;
9607 +@@ -477,6 +478,9 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
9608 + btwrite(0, BT878_AINT_MASK);
9609 + bt878_num++;
9610 +
9611 ++ if (!bt->tasklet.func)
9612 ++ tasklet_disable(&bt->tasklet);
9613 ++
9614 + return 0;
9615 +
9616 + fail2:
9617 +diff --git a/drivers/media/pci/cobalt/cobalt-driver.c b/drivers/media/pci/cobalt/cobalt-driver.c
9618 +index 0695078ef8125..1bd8bbe57a30e 100644
9619 +--- a/drivers/media/pci/cobalt/cobalt-driver.c
9620 ++++ b/drivers/media/pci/cobalt/cobalt-driver.c
9621 +@@ -667,6 +667,7 @@ static int cobalt_probe(struct pci_dev *pci_dev,
9622 + return -ENOMEM;
9623 + cobalt->pci_dev = pci_dev;
9624 + cobalt->instance = i;
9625 ++ mutex_init(&cobalt->pci_lock);
9626 +
9627 + retval = v4l2_device_register(&pci_dev->dev, &cobalt->v4l2_dev);
9628 + if (retval) {
9629 +diff --git a/drivers/media/pci/cobalt/cobalt-driver.h b/drivers/media/pci/cobalt/cobalt-driver.h
9630 +index bca68572b3242..12c33e035904c 100644
9631 +--- a/drivers/media/pci/cobalt/cobalt-driver.h
9632 ++++ b/drivers/media/pci/cobalt/cobalt-driver.h
9633 +@@ -251,6 +251,8 @@ struct cobalt {
9634 + int instance;
9635 + struct pci_dev *pci_dev;
9636 + struct v4l2_device v4l2_dev;
9637 ++ /* serialize PCI access in cobalt_s_bit_sysctrl() */
9638 ++ struct mutex pci_lock;
9639 +
9640 + void __iomem *bar0, *bar1;
9641 +
9642 +@@ -320,10 +322,13 @@ static inline u32 cobalt_g_sysctrl(struct cobalt *cobalt)
9643 + static inline void cobalt_s_bit_sysctrl(struct cobalt *cobalt,
9644 + int bit, int val)
9645 + {
9646 +- u32 ctrl = cobalt_read_bar1(cobalt, COBALT_SYS_CTRL_BASE);
9647 ++ u32 ctrl;
9648 +
9649 ++ mutex_lock(&cobalt->pci_lock);
9650 ++ ctrl = cobalt_read_bar1(cobalt, COBALT_SYS_CTRL_BASE);
9651 + cobalt_write_bar1(cobalt, COBALT_SYS_CTRL_BASE,
9652 + (ctrl & ~(1UL << bit)) | (val << bit));
9653 ++ mutex_unlock(&cobalt->pci_lock);
9654 + }
9655 +
9656 + static inline u32 cobalt_g_sysstat(struct cobalt *cobalt)
9657 +diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
9658 +index c2199042d3db5..85f8b587405e0 100644
9659 +--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
9660 ++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.c
9661 +@@ -173,14 +173,15 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
9662 + int ret;
9663 +
9664 + for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
9665 +- if (!adev->status.enabled)
9666 ++ if (!adev->status.enabled) {
9667 ++ acpi_dev_put(adev);
9668 + continue;
9669 ++ }
9670 +
9671 + if (bridge->n_sensors >= CIO2_NUM_PORTS) {
9672 ++ acpi_dev_put(adev);
9673 + dev_err(&cio2->dev, "Exceeded available CIO2 ports\n");
9674 +- cio2_bridge_unregister_sensors(bridge);
9675 +- ret = -EINVAL;
9676 +- goto err_out;
9677 ++ return -EINVAL;
9678 + }
9679 +
9680 + sensor = &bridge->sensors[bridge->n_sensors];
9681 +@@ -228,7 +229,6 @@ err_free_swnodes:
9682 + software_node_unregister_nodes(sensor->swnodes);
9683 + err_put_adev:
9684 + acpi_dev_put(sensor->adev);
9685 +-err_out:
9686 + return ret;
9687 + }
9688 +
9689 +diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c
9690 +index 6cdc77dda0e49..1c9cb9e05fdf6 100644
9691 +--- a/drivers/media/platform/am437x/am437x-vpfe.c
9692 ++++ b/drivers/media/platform/am437x/am437x-vpfe.c
9693 +@@ -1021,7 +1021,9 @@ static int vpfe_initialize_device(struct vpfe_device *vpfe)
9694 + if (ret)
9695 + return ret;
9696 +
9697 +- pm_runtime_get_sync(vpfe->pdev);
9698 ++ ret = pm_runtime_resume_and_get(vpfe->pdev);
9699 ++ if (ret < 0)
9700 ++ return ret;
9701 +
9702 + vpfe_config_enable(&vpfe->ccdc, 1);
9703 +
9704 +@@ -2443,7 +2445,11 @@ static int vpfe_probe(struct platform_device *pdev)
9705 + pm_runtime_enable(&pdev->dev);
9706 +
9707 + /* for now just enable it here instead of waiting for the open */
9708 +- pm_runtime_get_sync(&pdev->dev);
9709 ++ ret = pm_runtime_resume_and_get(&pdev->dev);
9710 ++ if (ret < 0) {
9711 ++ vpfe_err(vpfe, "Unable to resume device.\n");
9712 ++ goto probe_out_v4l2_unregister;
9713 ++ }
9714 +
9715 + vpfe_ccdc_config_defaults(ccdc);
9716 +
9717 +@@ -2530,6 +2536,11 @@ static int vpfe_suspend(struct device *dev)
9718 +
9719 + /* only do full suspend if streaming has started */
9720 + if (vb2_start_streaming_called(&vpfe->buffer_queue)) {
9721 ++ /*
9722 ++ * ignore RPM resume errors here, as it is already too late.
9723 ++ * A check like that should happen earlier, either at
9724 ++ * open() or just before start streaming.
9725 ++ */
9726 + pm_runtime_get_sync(dev);
9727 + vpfe_config_enable(ccdc, 1);
9728 +
9729 +diff --git a/drivers/media/platform/exynos-gsc/gsc-m2m.c b/drivers/media/platform/exynos-gsc/gsc-m2m.c
9730 +index 27a3c92c73bce..f1cf847d1cc2d 100644
9731 +--- a/drivers/media/platform/exynos-gsc/gsc-m2m.c
9732 ++++ b/drivers/media/platform/exynos-gsc/gsc-m2m.c
9733 +@@ -56,10 +56,8 @@ static void __gsc_m2m_job_abort(struct gsc_ctx *ctx)
9734 + static int gsc_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
9735 + {
9736 + struct gsc_ctx *ctx = q->drv_priv;
9737 +- int ret;
9738 +
9739 +- ret = pm_runtime_get_sync(&ctx->gsc_dev->pdev->dev);
9740 +- return ret > 0 ? 0 : ret;
9741 ++ return pm_runtime_resume_and_get(&ctx->gsc_dev->pdev->dev);
9742 + }
9743 +
9744 + static void __gsc_m2m_cleanup_queue(struct gsc_ctx *ctx)
9745 +diff --git a/drivers/media/platform/exynos4-is/fimc-capture.c b/drivers/media/platform/exynos4-is/fimc-capture.c
9746 +index 13c838d3f9473..0da36443173c1 100644
9747 +--- a/drivers/media/platform/exynos4-is/fimc-capture.c
9748 ++++ b/drivers/media/platform/exynos4-is/fimc-capture.c
9749 +@@ -478,11 +478,9 @@ static int fimc_capture_open(struct file *file)
9750 + goto unlock;
9751 +
9752 + set_bit(ST_CAPT_BUSY, &fimc->state);
9753 +- ret = pm_runtime_get_sync(&fimc->pdev->dev);
9754 +- if (ret < 0) {
9755 +- pm_runtime_put_sync(&fimc->pdev->dev);
9756 ++ ret = pm_runtime_resume_and_get(&fimc->pdev->dev);
9757 ++ if (ret < 0)
9758 + goto unlock;
9759 +- }
9760 +
9761 + ret = v4l2_fh_open(file);
9762 + if (ret) {
9763 +diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
9764 +index 972d9601d2360..1b24f5bfc4af4 100644
9765 +--- a/drivers/media/platform/exynos4-is/fimc-is.c
9766 ++++ b/drivers/media/platform/exynos4-is/fimc-is.c
9767 +@@ -828,9 +828,9 @@ static int fimc_is_probe(struct platform_device *pdev)
9768 + goto err_irq;
9769 + }
9770 +
9771 +- ret = pm_runtime_get_sync(dev);
9772 ++ ret = pm_runtime_resume_and_get(dev);
9773 + if (ret < 0)
9774 +- goto err_pm;
9775 ++ goto err_irq;
9776 +
9777 + vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32));
9778 +
9779 +diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
9780 +index 612b9872afc87..83688a7982f70 100644
9781 +--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
9782 ++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
9783 +@@ -275,7 +275,7 @@ static int isp_video_open(struct file *file)
9784 + if (ret < 0)
9785 + goto unlock;
9786 +
9787 +- ret = pm_runtime_get_sync(&isp->pdev->dev);
9788 ++ ret = pm_runtime_resume_and_get(&isp->pdev->dev);
9789 + if (ret < 0)
9790 + goto rel_fh;
9791 +
9792 +@@ -293,7 +293,6 @@ static int isp_video_open(struct file *file)
9793 + if (!ret)
9794 + goto unlock;
9795 + rel_fh:
9796 +- pm_runtime_put_noidle(&isp->pdev->dev);
9797 + v4l2_fh_release(file);
9798 + unlock:
9799 + mutex_unlock(&isp->video_lock);
9800 +@@ -306,17 +305,20 @@ static int isp_video_release(struct file *file)
9801 + struct fimc_is_video *ivc = &isp->video_capture;
9802 + struct media_entity *entity = &ivc->ve.vdev.entity;
9803 + struct media_device *mdev = entity->graph_obj.mdev;
9804 ++ bool is_singular_file;
9805 +
9806 + mutex_lock(&isp->video_lock);
9807 +
9808 +- if (v4l2_fh_is_singular_file(file) && ivc->streaming) {
9809 ++ is_singular_file = v4l2_fh_is_singular_file(file);
9810 ++
9811 ++ if (is_singular_file && ivc->streaming) {
9812 + media_pipeline_stop(entity);
9813 + ivc->streaming = 0;
9814 + }
9815 +
9816 + _vb2_fop_release(file, NULL);
9817 +
9818 +- if (v4l2_fh_is_singular_file(file)) {
9819 ++ if (is_singular_file) {
9820 + fimc_pipeline_call(&ivc->ve, close);
9821 +
9822 + mutex_lock(&mdev->graph_mutex);
9823 +diff --git a/drivers/media/platform/exynos4-is/fimc-isp.c b/drivers/media/platform/exynos4-is/fimc-isp.c
9824 +index a77c49b185115..74b49d30901ed 100644
9825 +--- a/drivers/media/platform/exynos4-is/fimc-isp.c
9826 ++++ b/drivers/media/platform/exynos4-is/fimc-isp.c
9827 +@@ -304,11 +304,10 @@ static int fimc_isp_subdev_s_power(struct v4l2_subdev *sd, int on)
9828 + pr_debug("on: %d\n", on);
9829 +
9830 + if (on) {
9831 +- ret = pm_runtime_get_sync(&is->pdev->dev);
9832 +- if (ret < 0) {
9833 +- pm_runtime_put(&is->pdev->dev);
9834 ++ ret = pm_runtime_resume_and_get(&is->pdev->dev);
9835 ++ if (ret < 0)
9836 + return ret;
9837 +- }
9838 ++
9839 + set_bit(IS_ST_PWR_ON, &is->state);
9840 +
9841 + ret = fimc_is_start_firmware(is);
9842 +diff --git a/drivers/media/platform/exynos4-is/fimc-lite.c b/drivers/media/platform/exynos4-is/fimc-lite.c
9843 +index fe20af3a7178a..4d8b18078ff37 100644
9844 +--- a/drivers/media/platform/exynos4-is/fimc-lite.c
9845 ++++ b/drivers/media/platform/exynos4-is/fimc-lite.c
9846 +@@ -469,9 +469,9 @@ static int fimc_lite_open(struct file *file)
9847 + }
9848 +
9849 + set_bit(ST_FLITE_IN_USE, &fimc->state);
9850 +- ret = pm_runtime_get_sync(&fimc->pdev->dev);
9851 ++ ret = pm_runtime_resume_and_get(&fimc->pdev->dev);
9852 + if (ret < 0)
9853 +- goto err_pm;
9854 ++ goto err_in_use;
9855 +
9856 + ret = v4l2_fh_open(file);
9857 + if (ret < 0)
9858 +@@ -499,6 +499,7 @@ static int fimc_lite_open(struct file *file)
9859 + v4l2_fh_release(file);
9860 + err_pm:
9861 + pm_runtime_put_sync(&fimc->pdev->dev);
9862 ++err_in_use:
9863 + clear_bit(ST_FLITE_IN_USE, &fimc->state);
9864 + unlock:
9865 + mutex_unlock(&fimc->lock);
9866 +diff --git a/drivers/media/platform/exynos4-is/fimc-m2m.c b/drivers/media/platform/exynos4-is/fimc-m2m.c
9867 +index c9704a147e5cf..df8e2aa454d8f 100644
9868 +--- a/drivers/media/platform/exynos4-is/fimc-m2m.c
9869 ++++ b/drivers/media/platform/exynos4-is/fimc-m2m.c
9870 +@@ -73,17 +73,14 @@ static void fimc_m2m_shutdown(struct fimc_ctx *ctx)
9871 + static int start_streaming(struct vb2_queue *q, unsigned int count)
9872 + {
9873 + struct fimc_ctx *ctx = q->drv_priv;
9874 +- int ret;
9875 +
9876 +- ret = pm_runtime_get_sync(&ctx->fimc_dev->pdev->dev);
9877 +- return ret > 0 ? 0 : ret;
9878 ++ return pm_runtime_resume_and_get(&ctx->fimc_dev->pdev->dev);
9879 + }
9880 +
9881 + static void stop_streaming(struct vb2_queue *q)
9882 + {
9883 + struct fimc_ctx *ctx = q->drv_priv;
9884 +
9885 +-
9886 + fimc_m2m_shutdown(ctx);
9887 + fimc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
9888 + pm_runtime_put(&ctx->fimc_dev->pdev->dev);
9889 +diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
9890 +index 8e1e892085ec0..f7b08dbe25edb 100644
9891 +--- a/drivers/media/platform/exynos4-is/media-dev.c
9892 ++++ b/drivers/media/platform/exynos4-is/media-dev.c
9893 +@@ -510,11 +510,9 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd)
9894 + if (!fmd->pmf)
9895 + return -ENXIO;
9896 +
9897 +- ret = pm_runtime_get_sync(fmd->pmf);
9898 +- if (ret < 0) {
9899 +- pm_runtime_put(fmd->pmf);
9900 ++ ret = pm_runtime_resume_and_get(fmd->pmf);
9901 ++ if (ret < 0)
9902 + return ret;
9903 +- }
9904 +
9905 + fmd->num_sensors = 0;
9906 +
9907 +@@ -1284,13 +1282,11 @@ static DEVICE_ATTR(subdev_conf_mode, S_IWUSR | S_IRUGO,
9908 + static int cam_clk_prepare(struct clk_hw *hw)
9909 + {
9910 + struct cam_clk *camclk = to_cam_clk(hw);
9911 +- int ret;
9912 +
9913 + if (camclk->fmd->pmf == NULL)
9914 + return -ENODEV;
9915 +
9916 +- ret = pm_runtime_get_sync(camclk->fmd->pmf);
9917 +- return ret < 0 ? ret : 0;
9918 ++ return pm_runtime_resume_and_get(camclk->fmd->pmf);
9919 + }
9920 +
9921 + static void cam_clk_unprepare(struct clk_hw *hw)
9922 +diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c
9923 +index 1aac167abb175..ebf39c8568943 100644
9924 +--- a/drivers/media/platform/exynos4-is/mipi-csis.c
9925 ++++ b/drivers/media/platform/exynos4-is/mipi-csis.c
9926 +@@ -494,7 +494,7 @@ static int s5pcsis_s_power(struct v4l2_subdev *sd, int on)
9927 + struct device *dev = &state->pdev->dev;
9928 +
9929 + if (on)
9930 +- return pm_runtime_get_sync(dev);
9931 ++ return pm_runtime_resume_and_get(dev);
9932 +
9933 + return pm_runtime_put_sync(dev);
9934 + }
9935 +@@ -509,11 +509,9 @@ static int s5pcsis_s_stream(struct v4l2_subdev *sd, int enable)
9936 +
9937 + if (enable) {
9938 + s5pcsis_clear_counters(state);
9939 +- ret = pm_runtime_get_sync(&state->pdev->dev);
9940 +- if (ret && ret != 1) {
9941 +- pm_runtime_put_noidle(&state->pdev->dev);
9942 ++ ret = pm_runtime_resume_and_get(&state->pdev->dev);
9943 ++ if (ret < 0)
9944 + return ret;
9945 +- }
9946 + }
9947 +
9948 + mutex_lock(&state->lock);
9949 +@@ -535,7 +533,7 @@ unlock:
9950 + if (!enable)
9951 + pm_runtime_put(&state->pdev->dev);
9952 +
9953 +- return ret == 1 ? 0 : ret;
9954 ++ return ret;
9955 + }
9956 +
9957 + static int s5pcsis_enum_mbus_code(struct v4l2_subdev *sd,
9958 +diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
9959 +index 141bf5d97a044..ea87110d90738 100644
9960 +--- a/drivers/media/platform/marvell-ccic/mcam-core.c
9961 ++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
9962 +@@ -918,6 +918,7 @@ static int mclk_enable(struct clk_hw *hw)
9963 + struct mcam_camera *cam = container_of(hw, struct mcam_camera, mclk_hw);
9964 + int mclk_src;
9965 + int mclk_div;
9966 ++ int ret;
9967 +
9968 + /*
9969 + * Clock the sensor appropriately. Controller clock should
9970 +@@ -931,7 +932,9 @@ static int mclk_enable(struct clk_hw *hw)
9971 + mclk_div = 2;
9972 + }
9973 +
9974 +- pm_runtime_get_sync(cam->dev);
9975 ++ ret = pm_runtime_resume_and_get(cam->dev);
9976 ++ if (ret < 0)
9977 ++ return ret;
9978 + clk_enable(cam->clk[0]);
9979 + mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div);
9980 + mcam_ctlr_power_up(cam);
9981 +@@ -1611,7 +1614,9 @@ static int mcam_v4l_open(struct file *filp)
9982 + ret = sensor_call(cam, core, s_power, 1);
9983 + if (ret)
9984 + goto out;
9985 +- pm_runtime_get_sync(cam->dev);
9986 ++ ret = pm_runtime_resume_and_get(cam->dev);
9987 ++ if (ret < 0)
9988 ++ goto out;
9989 + __mcam_cam_reset(cam);
9990 + mcam_set_config_needed(cam, 1);
9991 + }
9992 +diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c b/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
9993 +index ace4528cdc5ef..f14779e7596e5 100644
9994 +--- a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
9995 ++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
9996 +@@ -391,12 +391,12 @@ static int mtk_mdp_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
9997 + struct mtk_mdp_ctx *ctx = q->drv_priv;
9998 + int ret;
9999 +
10000 +- ret = pm_runtime_get_sync(&ctx->mdp_dev->pdev->dev);
10001 ++ ret = pm_runtime_resume_and_get(&ctx->mdp_dev->pdev->dev);
10002 + if (ret < 0)
10003 +- mtk_mdp_dbg(1, "[%d] pm_runtime_get_sync failed:%d",
10004 ++ mtk_mdp_dbg(1, "[%d] pm_runtime_resume_and_get failed:%d",
10005 + ctx->id, ret);
10006 +
10007 +- return 0;
10008 ++ return ret;
10009 + }
10010 +
10011 + static void *mtk_mdp_m2m_buf_remove(struct mtk_mdp_ctx *ctx,
10012 +diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
10013 +index 147dfef1638d2..f87dc47d9e638 100644
10014 +--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
10015 ++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
10016 +@@ -126,7 +126,9 @@ static int fops_vcodec_open(struct file *file)
10017 + mtk_vcodec_dec_set_default_params(ctx);
10018 +
10019 + if (v4l2_fh_is_singular(&ctx->fh)) {
10020 +- mtk_vcodec_dec_pw_on(&dev->pm);
10021 ++ ret = mtk_vcodec_dec_pw_on(&dev->pm);
10022 ++ if (ret < 0)
10023 ++ goto err_load_fw;
10024 + /*
10025 + * Does nothing if firmware was already loaded.
10026 + */
10027 +diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
10028 +index ddee7046ce422..6038db96f71c3 100644
10029 +--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
10030 ++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
10031 +@@ -88,13 +88,15 @@ void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev)
10032 + put_device(dev->pm.larbvdec);
10033 + }
10034 +
10035 +-void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
10036 ++int mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
10037 + {
10038 + int ret;
10039 +
10040 +- ret = pm_runtime_get_sync(pm->dev);
10041 ++ ret = pm_runtime_resume_and_get(pm->dev);
10042 + if (ret)
10043 +- mtk_v4l2_err("pm_runtime_get_sync fail %d", ret);
10044 ++ mtk_v4l2_err("pm_runtime_resume_and_get fail %d", ret);
10045 ++
10046 ++ return ret;
10047 + }
10048 +
10049 + void mtk_vcodec_dec_pw_off(struct mtk_vcodec_pm *pm)
10050 +diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
10051 +index 872d8bf8cfaf3..280aeaefdb651 100644
10052 +--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
10053 ++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
10054 +@@ -12,7 +12,7 @@
10055 + int mtk_vcodec_init_dec_pm(struct mtk_vcodec_dev *dev);
10056 + void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev);
10057 +
10058 +-void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm);
10059 ++int mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm);
10060 + void mtk_vcodec_dec_pw_off(struct mtk_vcodec_pm *pm);
10061 + void mtk_vcodec_dec_clock_on(struct mtk_vcodec_pm *pm);
10062 + void mtk_vcodec_dec_clock_off(struct mtk_vcodec_pm *pm);
10063 +diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
10064 +index 043894f7188c8..f49f6d53a941f 100644
10065 +--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
10066 ++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
10067 +@@ -987,6 +987,12 @@ static int mtk_vpu_suspend(struct device *dev)
10068 + return ret;
10069 + }
10070 +
10071 ++ if (!vpu_running(vpu)) {
10072 ++ vpu_clock_disable(vpu);
10073 ++ clk_unprepare(vpu->clk);
10074 ++ return 0;
10075 ++ }
10076 ++
10077 + mutex_lock(&vpu->vpu_mutex);
10078 + /* disable vpu timer interrupt */
10079 + vpu_cfg_writel(vpu, vpu_cfg_readl(vpu, VPU_INT_STATUS) | VPU_IDLE_STATE,
10080 +diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
10081 +index ae374bb2a48f0..28443547ae8f2 100644
10082 +--- a/drivers/media/platform/qcom/venus/core.c
10083 ++++ b/drivers/media/platform/qcom/venus/core.c
10084 +@@ -76,22 +76,32 @@ static const struct hfi_core_ops venus_core_ops = {
10085 + .event_notify = venus_event_notify,
10086 + };
10087 +
10088 ++#define RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS 10
10089 ++
10090 + static void venus_sys_error_handler(struct work_struct *work)
10091 + {
10092 + struct venus_core *core =
10093 + container_of(work, struct venus_core, work.work);
10094 +- int ret = 0;
10095 +-
10096 +- pm_runtime_get_sync(core->dev);
10097 ++ int ret, i, max_attempts = RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS;
10098 ++ const char *err_msg = "";
10099 ++ bool failed = false;
10100 ++
10101 ++ ret = pm_runtime_get_sync(core->dev);
10102 ++ if (ret < 0) {
10103 ++ err_msg = "resume runtime PM";
10104 ++ max_attempts = 0;
10105 ++ failed = true;
10106 ++ }
10107 +
10108 + hfi_core_deinit(core, true);
10109 +
10110 +- dev_warn(core->dev, "system error has occurred, starting recovery!\n");
10111 +-
10112 + mutex_lock(&core->lock);
10113 +
10114 +- while (pm_runtime_active(core->dev_dec) || pm_runtime_active(core->dev_enc))
10115 ++ for (i = 0; i < max_attempts; i++) {
10116 ++ if (!pm_runtime_active(core->dev_dec) && !pm_runtime_active(core->dev_enc))
10117 ++ break;
10118 + msleep(10);
10119 ++ }
10120 +
10121 + venus_shutdown(core);
10122 +
10123 +@@ -99,31 +109,55 @@ static void venus_sys_error_handler(struct work_struct *work)
10124 +
10125 + pm_runtime_put_sync(core->dev);
10126 +
10127 +- while (core->pmdomains[0] && pm_runtime_active(core->pmdomains[0]))
10128 ++ for (i = 0; i < max_attempts; i++) {
10129 ++ if (!core->pmdomains[0] || !pm_runtime_active(core->pmdomains[0]))
10130 ++ break;
10131 + usleep_range(1000, 1500);
10132 ++ }
10133 +
10134 + hfi_reinit(core);
10135 +
10136 +- pm_runtime_get_sync(core->dev);
10137 ++ ret = pm_runtime_get_sync(core->dev);
10138 ++ if (ret < 0) {
10139 ++ err_msg = "resume runtime PM";
10140 ++ failed = true;
10141 ++ }
10142 +
10143 +- ret |= venus_boot(core);
10144 +- ret |= hfi_core_resume(core, true);
10145 ++ ret = venus_boot(core);
10146 ++ if (ret && !failed) {
10147 ++ err_msg = "boot Venus";
10148 ++ failed = true;
10149 ++ }
10150 ++
10151 ++ ret = hfi_core_resume(core, true);
10152 ++ if (ret && !failed) {
10153 ++ err_msg = "resume HFI";
10154 ++ failed = true;
10155 ++ }
10156 +
10157 + enable_irq(core->irq);
10158 +
10159 + mutex_unlock(&core->lock);
10160 +
10161 +- ret |= hfi_core_init(core);
10162 ++ ret = hfi_core_init(core);
10163 ++ if (ret && !failed) {
10164 ++ err_msg = "init HFI";
10165 ++ failed = true;
10166 ++ }
10167 +
10168 + pm_runtime_put_sync(core->dev);
10169 +
10170 +- if (ret) {
10171 ++ if (failed) {
10172 + disable_irq_nosync(core->irq);
10173 +- dev_warn(core->dev, "recovery failed (%d)\n", ret);
10174 ++ dev_warn_ratelimited(core->dev,
10175 ++ "System error has occurred, recovery failed to %s\n",
10176 ++ err_msg);
10177 + schedule_delayed_work(&core->work, msecs_to_jiffies(10));
10178 + return;
10179 + }
10180 +
10181 ++ dev_warn(core->dev, "system error has occurred (recovered)\n");
10182 ++
10183 + mutex_lock(&core->lock);
10184 + core->sys_error = false;
10185 + mutex_unlock(&core->lock);
10186 +diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
10187 +index 15bcb7f6e113c..1cb5eaabf340b 100644
10188 +--- a/drivers/media/platform/s5p-g2d/g2d.c
10189 ++++ b/drivers/media/platform/s5p-g2d/g2d.c
10190 +@@ -276,6 +276,9 @@ static int g2d_release(struct file *file)
10191 + struct g2d_dev *dev = video_drvdata(file);
10192 + struct g2d_ctx *ctx = fh2ctx(file->private_data);
10193 +
10194 ++ mutex_lock(&dev->mutex);
10195 ++ v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
10196 ++ mutex_unlock(&dev->mutex);
10197 + v4l2_ctrl_handler_free(&ctx->ctrl_handler);
10198 + v4l2_fh_del(&ctx->fh);
10199 + v4l2_fh_exit(&ctx->fh);
10200 +diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
10201 +index 026111505f5a5..d402e456f27df 100644
10202 +--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
10203 ++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
10204 +@@ -2566,11 +2566,8 @@ static void s5p_jpeg_buf_queue(struct vb2_buffer *vb)
10205 + static int s5p_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
10206 + {
10207 + struct s5p_jpeg_ctx *ctx = vb2_get_drv_priv(q);
10208 +- int ret;
10209 +-
10210 +- ret = pm_runtime_get_sync(ctx->jpeg->dev);
10211 +
10212 +- return ret > 0 ? 0 : ret;
10213 ++ return pm_runtime_resume_and_get(ctx->jpeg->dev);
10214 + }
10215 +
10216 + static void s5p_jpeg_stop_streaming(struct vb2_queue *q)
10217 +diff --git a/drivers/media/platform/sh_vou.c b/drivers/media/platform/sh_vou.c
10218 +index 4ac48441f22c4..ca4310e26c49e 100644
10219 +--- a/drivers/media/platform/sh_vou.c
10220 ++++ b/drivers/media/platform/sh_vou.c
10221 +@@ -1133,7 +1133,11 @@ static int sh_vou_open(struct file *file)
10222 + if (v4l2_fh_is_singular_file(file) &&
10223 + vou_dev->status == SH_VOU_INITIALISING) {
10224 + /* First open */
10225 +- pm_runtime_get_sync(vou_dev->v4l2_dev.dev);
10226 ++ err = pm_runtime_resume_and_get(vou_dev->v4l2_dev.dev);
10227 ++ if (err < 0) {
10228 ++ v4l2_fh_release(file);
10229 ++ goto done_open;
10230 ++ }
10231 + err = sh_vou_hw_init(vou_dev);
10232 + if (err < 0) {
10233 + pm_runtime_put(vou_dev->v4l2_dev.dev);
10234 +diff --git a/drivers/media/platform/sti/bdisp/Makefile b/drivers/media/platform/sti/bdisp/Makefile
10235 +index caf7ccd193eaa..39ade0a347236 100644
10236 +--- a/drivers/media/platform/sti/bdisp/Makefile
10237 ++++ b/drivers/media/platform/sti/bdisp/Makefile
10238 +@@ -1,4 +1,4 @@
10239 + # SPDX-License-Identifier: GPL-2.0-only
10240 +-obj-$(CONFIG_VIDEO_STI_BDISP) := bdisp.o
10241 ++obj-$(CONFIG_VIDEO_STI_BDISP) += bdisp.o
10242 +
10243 + bdisp-objs := bdisp-v4l2.o bdisp-hw.o bdisp-debug.o
10244 +diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
10245 +index 060ca85f64d5d..85288da9d2ae6 100644
10246 +--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
10247 ++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
10248 +@@ -499,7 +499,7 @@ static int bdisp_start_streaming(struct vb2_queue *q, unsigned int count)
10249 + {
10250 + struct bdisp_ctx *ctx = q->drv_priv;
10251 + struct vb2_v4l2_buffer *buf;
10252 +- int ret = pm_runtime_get_sync(ctx->bdisp_dev->dev);
10253 ++ int ret = pm_runtime_resume_and_get(ctx->bdisp_dev->dev);
10254 +
10255 + if (ret < 0) {
10256 + dev_err(ctx->bdisp_dev->dev, "failed to set runtime PM\n");
10257 +@@ -1364,10 +1364,10 @@ static int bdisp_probe(struct platform_device *pdev)
10258 +
10259 + /* Power management */
10260 + pm_runtime_enable(dev);
10261 +- ret = pm_runtime_get_sync(dev);
10262 ++ ret = pm_runtime_resume_and_get(dev);
10263 + if (ret < 0) {
10264 + dev_err(dev, "failed to set PM\n");
10265 +- goto err_pm;
10266 ++ goto err_remove;
10267 + }
10268 +
10269 + /* Filters */
10270 +@@ -1395,6 +1395,7 @@ err_filter:
10271 + bdisp_hw_free_filters(bdisp->dev);
10272 + err_pm:
10273 + pm_runtime_put(dev);
10274 ++err_remove:
10275 + bdisp_debugfs_remove(bdisp);
10276 + v4l2_device_unregister(&bdisp->v4l2_dev);
10277 + err_clk:
10278 +diff --git a/drivers/media/platform/sti/delta/Makefile b/drivers/media/platform/sti/delta/Makefile
10279 +index 92b37e216f004..32412fa4c6328 100644
10280 +--- a/drivers/media/platform/sti/delta/Makefile
10281 ++++ b/drivers/media/platform/sti/delta/Makefile
10282 +@@ -1,5 +1,5 @@
10283 + # SPDX-License-Identifier: GPL-2.0-only
10284 +-obj-$(CONFIG_VIDEO_STI_DELTA_DRIVER) := st-delta.o
10285 ++obj-$(CONFIG_VIDEO_STI_DELTA_DRIVER) += st-delta.o
10286 + st-delta-y := delta-v4l2.o delta-mem.o delta-ipc.o delta-debug.o
10287 +
10288 + # MJPEG support
10289 +diff --git a/drivers/media/platform/sti/hva/Makefile b/drivers/media/platform/sti/hva/Makefile
10290 +index 74b41ec52f976..b5a5478bdd016 100644
10291 +--- a/drivers/media/platform/sti/hva/Makefile
10292 ++++ b/drivers/media/platform/sti/hva/Makefile
10293 +@@ -1,4 +1,4 @@
10294 + # SPDX-License-Identifier: GPL-2.0-only
10295 +-obj-$(CONFIG_VIDEO_STI_HVA) := st-hva.o
10296 ++obj-$(CONFIG_VIDEO_STI_HVA) += st-hva.o
10297 + st-hva-y := hva-v4l2.o hva-hw.o hva-mem.o hva-h264.o
10298 + st-hva-$(CONFIG_VIDEO_STI_HVA_DEBUGFS) += hva-debugfs.o
10299 +diff --git a/drivers/media/platform/sti/hva/hva-hw.c b/drivers/media/platform/sti/hva/hva-hw.c
10300 +index f59811e27f51f..6eeee5017fac4 100644
10301 +--- a/drivers/media/platform/sti/hva/hva-hw.c
10302 ++++ b/drivers/media/platform/sti/hva/hva-hw.c
10303 +@@ -130,8 +130,7 @@ static irqreturn_t hva_hw_its_irq_thread(int irq, void *arg)
10304 + ctx_id = (hva->sts_reg & 0xFF00) >> 8;
10305 + if (ctx_id >= HVA_MAX_INSTANCES) {
10306 + dev_err(dev, "%s %s: bad context identifier: %d\n",
10307 +- ctx->name, __func__, ctx_id);
10308 +- ctx->hw_err = true;
10309 ++ HVA_PREFIX, __func__, ctx_id);
10310 + goto out;
10311 + }
10312 +
10313 +diff --git a/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c b/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
10314 +index 3f81dd17755cb..fbcca59a0517c 100644
10315 +--- a/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
10316 ++++ b/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
10317 +@@ -494,7 +494,7 @@ static int rotate_start_streaming(struct vb2_queue *vq, unsigned int count)
10318 + struct device *dev = ctx->dev->dev;
10319 + int ret;
10320 +
10321 +- ret = pm_runtime_get_sync(dev);
10322 ++ ret = pm_runtime_resume_and_get(dev);
10323 + if (ret < 0) {
10324 + dev_err(dev, "Failed to enable module\n");
10325 +
10326 +diff --git a/drivers/media/platform/video-mux.c b/drivers/media/platform/video-mux.c
10327 +index 133122e385150..9bc0b4d8de095 100644
10328 +--- a/drivers/media/platform/video-mux.c
10329 ++++ b/drivers/media/platform/video-mux.c
10330 +@@ -362,7 +362,7 @@ static int video_mux_async_register(struct video_mux *vmux,
10331 +
10332 + for (i = 0; i < num_input_pads; i++) {
10333 + struct v4l2_async_subdev *asd;
10334 +- struct fwnode_handle *ep;
10335 ++ struct fwnode_handle *ep, *remote_ep;
10336 +
10337 + ep = fwnode_graph_get_endpoint_by_id(
10338 + dev_fwnode(vmux->subdev.dev), i, 0,
10339 +@@ -370,6 +370,14 @@ static int video_mux_async_register(struct video_mux *vmux,
10340 + if (!ep)
10341 + continue;
10342 +
10343 ++ /* Skip dangling endpoints for backwards compatibility */
10344 ++ remote_ep = fwnode_graph_get_remote_endpoint(ep);
10345 ++ if (!remote_ep) {
10346 ++ fwnode_handle_put(ep);
10347 ++ continue;
10348 ++ }
10349 ++ fwnode_handle_put(remote_ep);
10350 ++
10351 + asd = v4l2_async_notifier_add_fwnode_remote_subdev(
10352 + &vmux->notifier, ep, struct v4l2_async_subdev);
10353 +
10354 +diff --git a/drivers/media/usb/au0828/au0828-core.c b/drivers/media/usb/au0828/au0828-core.c
10355 +index a8a72d5fbd129..caefac07af927 100644
10356 +--- a/drivers/media/usb/au0828/au0828-core.c
10357 ++++ b/drivers/media/usb/au0828/au0828-core.c
10358 +@@ -199,8 +199,8 @@ static int au0828_media_device_init(struct au0828_dev *dev,
10359 + struct media_device *mdev;
10360 +
10361 + mdev = media_device_usb_allocate(udev, KBUILD_MODNAME, THIS_MODULE);
10362 +- if (!mdev)
10363 +- return -ENOMEM;
10364 ++ if (IS_ERR(mdev))
10365 ++ return PTR_ERR(mdev);
10366 +
10367 + dev->media_dev = mdev;
10368 + #endif
10369 +diff --git a/drivers/media/usb/cpia2/cpia2.h b/drivers/media/usb/cpia2/cpia2.h
10370 +index 50835f5f7512c..57b7f1ea68da5 100644
10371 +--- a/drivers/media/usb/cpia2/cpia2.h
10372 ++++ b/drivers/media/usb/cpia2/cpia2.h
10373 +@@ -429,6 +429,7 @@ int cpia2_send_command(struct camera_data *cam, struct cpia2_command *cmd);
10374 + int cpia2_do_command(struct camera_data *cam,
10375 + unsigned int command,
10376 + unsigned char direction, unsigned char param);
10377 ++void cpia2_deinit_camera_struct(struct camera_data *cam, struct usb_interface *intf);
10378 + struct camera_data *cpia2_init_camera_struct(struct usb_interface *intf);
10379 + int cpia2_init_camera(struct camera_data *cam);
10380 + int cpia2_allocate_buffers(struct camera_data *cam);
10381 +diff --git a/drivers/media/usb/cpia2/cpia2_core.c b/drivers/media/usb/cpia2/cpia2_core.c
10382 +index e747548ab2869..b5a2d06fb356b 100644
10383 +--- a/drivers/media/usb/cpia2/cpia2_core.c
10384 ++++ b/drivers/media/usb/cpia2/cpia2_core.c
10385 +@@ -2163,6 +2163,18 @@ static void reset_camera_struct(struct camera_data *cam)
10386 + cam->height = cam->params.roi.height;
10387 + }
10388 +
10389 ++/******************************************************************************
10390 ++ *
10391 + * cpia2_deinit_camera_struct
10392 ++ *
10393 ++ * Deinitialize camera struct
10394 ++ *****************************************************************************/
10395 ++void cpia2_deinit_camera_struct(struct camera_data *cam, struct usb_interface *intf)
10396 ++{
10397 ++ v4l2_device_unregister(&cam->v4l2_dev);
10398 ++ kfree(cam);
10399 ++}
10400 ++
10401 + /******************************************************************************
10402 + *
10403 + * cpia2_init_camera_struct
10404 +diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
10405 +index 3ab80a7b44985..76aac06f9fb8e 100644
10406 +--- a/drivers/media/usb/cpia2/cpia2_usb.c
10407 ++++ b/drivers/media/usb/cpia2/cpia2_usb.c
10408 +@@ -844,15 +844,13 @@ static int cpia2_usb_probe(struct usb_interface *intf,
10409 + ret = set_alternate(cam, USBIF_CMDONLY);
10410 + if (ret < 0) {
10411 + ERR("%s: usb_set_interface error (ret = %d)\n", __func__, ret);
10412 +- kfree(cam);
10413 +- return ret;
10414 ++ goto alt_err;
10415 + }
10416 +
10417 +
10418 + if((ret = cpia2_init_camera(cam)) < 0) {
10419 + ERR("%s: failed to initialize cpia2 camera (ret = %d)\n", __func__, ret);
10420 +- kfree(cam);
10421 +- return ret;
10422 ++ goto alt_err;
10423 + }
10424 + LOG(" CPiA Version: %d.%02d (%d.%d)\n",
10425 + cam->params.version.firmware_revision_hi,
10426 +@@ -872,11 +870,14 @@ static int cpia2_usb_probe(struct usb_interface *intf,
10427 + ret = cpia2_register_camera(cam);
10428 + if (ret < 0) {
10429 + ERR("%s: Failed to register cpia2 camera (ret = %d)\n", __func__, ret);
10430 +- kfree(cam);
10431 +- return ret;
10432 ++ goto alt_err;
10433 + }
10434 +
10435 + return 0;
10436 ++
10437 ++alt_err:
10438 ++ cpia2_deinit_camera_struct(cam, intf);
10439 ++ return ret;
10440 + }
10441 +
10442 + /******************************************************************************
10443 +diff --git a/drivers/media/usb/dvb-usb/cinergyT2-core.c b/drivers/media/usb/dvb-usb/cinergyT2-core.c
10444 +index 969a7ec71dff7..4116ba5c45fcb 100644
10445 +--- a/drivers/media/usb/dvb-usb/cinergyT2-core.c
10446 ++++ b/drivers/media/usb/dvb-usb/cinergyT2-core.c
10447 +@@ -78,6 +78,8 @@ static int cinergyt2_frontend_attach(struct dvb_usb_adapter *adap)
10448 +
10449 + ret = dvb_usb_generic_rw(d, st->data, 1, st->data, 3, 0);
10450 + if (ret < 0) {
10451 ++ if (adap->fe_adap[0].fe)
10452 ++ adap->fe_adap[0].fe->ops.release(adap->fe_adap[0].fe);
10453 + deb_rc("cinergyt2_power_ctrl() Failed to retrieve sleep state info\n");
10454 + }
10455 + mutex_unlock(&d->data_mutex);
10456 +diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
10457 +index 761992ad05e2a..7707de7bae7ca 100644
10458 +--- a/drivers/media/usb/dvb-usb/cxusb.c
10459 ++++ b/drivers/media/usb/dvb-usb/cxusb.c
10460 +@@ -1947,7 +1947,7 @@ static struct dvb_usb_device_properties cxusb_bluebird_lgz201_properties = {
10461 +
10462 + .size_of_priv = sizeof(struct cxusb_state),
10463 +
10464 +- .num_adapters = 2,
10465 ++ .num_adapters = 1,
10466 + .adapter = {
10467 + {
10468 + .num_frontends = 1,
10469 +diff --git a/drivers/media/usb/em28xx/em28xx-input.c b/drivers/media/usb/em28xx/em28xx-input.c
10470 +index 5aa15a7a49def..59529cbf9cd0b 100644
10471 +--- a/drivers/media/usb/em28xx/em28xx-input.c
10472 ++++ b/drivers/media/usb/em28xx/em28xx-input.c
10473 +@@ -720,7 +720,8 @@ static int em28xx_ir_init(struct em28xx *dev)
10474 + dev->board.has_ir_i2c = 0;
10475 + dev_warn(&dev->intf->dev,
10476 + "No i2c IR remote control device found.\n");
10477 +- return -ENODEV;
10478 ++ err = -ENODEV;
10479 ++ goto ref_put;
10480 + }
10481 + }
10482 +
10483 +@@ -735,7 +736,7 @@ static int em28xx_ir_init(struct em28xx *dev)
10484 +
10485 + ir = kzalloc(sizeof(*ir), GFP_KERNEL);
10486 + if (!ir)
10487 +- return -ENOMEM;
10488 ++ goto ref_put;
10489 + rc = rc_allocate_device(RC_DRIVER_SCANCODE);
10490 + if (!rc)
10491 + goto error;
10492 +@@ -839,6 +840,9 @@ error:
10493 + dev->ir = NULL;
10494 + rc_free_device(rc);
10495 + kfree(ir);
10496 ++ref_put:
10497 ++ em28xx_shutdown_buttons(dev);
10498 ++ kref_put(&dev->ref, em28xx_free_device);
10499 + return err;
10500 + }
10501 +
10502 +diff --git a/drivers/media/usb/gspca/gl860/gl860.c b/drivers/media/usb/gspca/gl860/gl860.c
10503 +index 2c05ea2598e76..ce4ee8bc75c85 100644
10504 +--- a/drivers/media/usb/gspca/gl860/gl860.c
10505 ++++ b/drivers/media/usb/gspca/gl860/gl860.c
10506 +@@ -561,8 +561,8 @@ int gl860_RTx(struct gspca_dev *gspca_dev,
10507 + len, 400 + 200 * (len > 1));
10508 + memcpy(pdata, gspca_dev->usb_buf, len);
10509 + } else {
10510 +- r = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
10511 +- req, pref, val, index, NULL, len, 400);
10512 ++ gspca_err(gspca_dev, "zero-length read request\n");
10513 ++ r = -EINVAL;
10514 + }
10515 + }
10516 +
10517 +diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
10518 +index f4a727918e352..d38dee1792e41 100644
10519 +--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
10520 ++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
10521 +@@ -2676,9 +2676,8 @@ void pvr2_hdw_destroy(struct pvr2_hdw *hdw)
10522 + pvr2_stream_destroy(hdw->vid_stream);
10523 + hdw->vid_stream = NULL;
10524 + }
10525 +- pvr2_i2c_core_done(hdw);
10526 + v4l2_device_unregister(&hdw->v4l2_dev);
10527 +- pvr2_hdw_remove_usb_stuff(hdw);
10528 ++ pvr2_hdw_disconnect(hdw);
10529 + mutex_lock(&pvr2_unit_mtx);
10530 + do {
10531 + if ((hdw->unit_number >= 0) &&
10532 +@@ -2705,6 +2704,7 @@ void pvr2_hdw_disconnect(struct pvr2_hdw *hdw)
10533 + {
10534 + pvr2_trace(PVR2_TRACE_INIT,"pvr2_hdw_disconnect(hdw=%p)",hdw);
10535 + LOCK_TAKE(hdw->big_lock);
10536 ++ pvr2_i2c_core_done(hdw);
10537 + LOCK_TAKE(hdw->ctl_lock);
10538 + pvr2_hdw_remove_usb_stuff(hdw);
10539 + LOCK_GIVE(hdw->ctl_lock);
10540 +diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
10541 +index 684574f58e82d..90eec79ee995a 100644
10542 +--- a/drivers/media/v4l2-core/v4l2-fh.c
10543 ++++ b/drivers/media/v4l2-core/v4l2-fh.c
10544 +@@ -96,6 +96,7 @@ int v4l2_fh_release(struct file *filp)
10545 + v4l2_fh_del(fh);
10546 + v4l2_fh_exit(fh);
10547 + kfree(fh);
10548 ++ filp->private_data = NULL;
10549 + }
10550 + return 0;
10551 + }
10552 +diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
10553 +index 31d1342e61e88..7e8bf4b1ab2e5 100644
10554 +--- a/drivers/media/v4l2-core/v4l2-ioctl.c
10555 ++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
10556 +@@ -3114,8 +3114,8 @@ static int check_array_args(unsigned int cmd, void *parg, size_t *array_size,
10557 +
10558 + static unsigned int video_translate_cmd(unsigned int cmd)
10559 + {
10560 ++#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
10561 + switch (cmd) {
10562 +-#ifdef CONFIG_COMPAT_32BIT_TIME
10563 + case VIDIOC_DQEVENT_TIME32:
10564 + return VIDIOC_DQEVENT;
10565 + case VIDIOC_QUERYBUF_TIME32:
10566 +@@ -3126,8 +3126,8 @@ static unsigned int video_translate_cmd(unsigned int cmd)
10567 + return VIDIOC_DQBUF;
10568 + case VIDIOC_PREPARE_BUF_TIME32:
10569 + return VIDIOC_PREPARE_BUF;
10570 +-#endif
10571 + }
10572 ++#endif
10573 + if (in_compat_syscall())
10574 + return v4l2_compat_translate_cmd(cmd);
10575 +
10576 +@@ -3168,8 +3168,8 @@ static int video_get_user(void __user *arg, void *parg,
10577 + } else if (in_compat_syscall()) {
10578 + err = v4l2_compat_get_user(arg, parg, cmd);
10579 + } else {
10580 ++#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
10581 + switch (cmd) {
10582 +-#ifdef CONFIG_COMPAT_32BIT_TIME
10583 + case VIDIOC_QUERYBUF_TIME32:
10584 + case VIDIOC_QBUF_TIME32:
10585 + case VIDIOC_DQBUF_TIME32:
10586 +@@ -3197,8 +3197,8 @@ static int video_get_user(void __user *arg, void *parg,
10587 + };
10588 + break;
10589 + }
10590 +-#endif
10591 + }
10592 ++#endif
10593 + }
10594 +
10595 + /* zero out anything we don't copy from userspace */
10596 +@@ -3223,8 +3223,8 @@ static int video_put_user(void __user *arg, void *parg,
10597 + if (in_compat_syscall())
10598 + return v4l2_compat_put_user(arg, parg, cmd);
10599 +
10600 ++#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
10601 + switch (cmd) {
10602 +-#ifdef CONFIG_COMPAT_32BIT_TIME
10603 + case VIDIOC_DQEVENT_TIME32: {
10604 + struct v4l2_event *ev = parg;
10605 + struct v4l2_event_time32 ev32;
10606 +@@ -3272,8 +3272,8 @@ static int video_put_user(void __user *arg, void *parg,
10607 + return -EFAULT;
10608 + break;
10609 + }
10610 +-#endif
10611 + }
10612 ++#endif
10613 +
10614 + return 0;
10615 + }
10616 +diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
10617 +index 956dafab43d49..bf3aa92524584 100644
10618 +--- a/drivers/media/v4l2-core/v4l2-subdev.c
10619 ++++ b/drivers/media/v4l2-core/v4l2-subdev.c
10620 +@@ -428,30 +428,6 @@ static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
10621 +
10622 + return v4l2_event_dequeue(vfh, arg, file->f_flags & O_NONBLOCK);
10623 +
10624 +- case VIDIOC_DQEVENT_TIME32: {
10625 +- struct v4l2_event_time32 *ev32 = arg;
10626 +- struct v4l2_event ev = { };
10627 +-
10628 +- if (!(sd->flags & V4L2_SUBDEV_FL_HAS_EVENTS))
10629 +- return -ENOIOCTLCMD;
10630 +-
10631 +- rval = v4l2_event_dequeue(vfh, &ev, file->f_flags & O_NONBLOCK);
10632 +-
10633 +- *ev32 = (struct v4l2_event_time32) {
10634 +- .type = ev.type,
10635 +- .pending = ev.pending,
10636 +- .sequence = ev.sequence,
10637 +- .timestamp.tv_sec = ev.timestamp.tv_sec,
10638 +- .timestamp.tv_nsec = ev.timestamp.tv_nsec,
10639 +- .id = ev.id,
10640 +- };
10641 +-
10642 +- memcpy(&ev32->u, &ev.u, sizeof(ev.u));
10643 +- memcpy(&ev32->reserved, &ev.reserved, sizeof(ev.reserved));
10644 +-
10645 +- return rval;
10646 +- }
10647 +-
10648 + case VIDIOC_SUBSCRIBE_EVENT:
10649 + return v4l2_subdev_call(sd, core, subscribe_event, vfh, arg);
10650 +
10651 +diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
10652 +index 102dbb8080da5..29271ad4728a2 100644
10653 +--- a/drivers/memstick/host/rtsx_usb_ms.c
10654 ++++ b/drivers/memstick/host/rtsx_usb_ms.c
10655 +@@ -799,9 +799,9 @@ static int rtsx_usb_ms_drv_probe(struct platform_device *pdev)
10656 +
10657 + return 0;
10658 + err_out:
10659 +- memstick_free_host(msh);
10660 + pm_runtime_disable(ms_dev(host));
10661 + pm_runtime_put_noidle(ms_dev(host));
10662 ++ memstick_free_host(msh);
10663 + return err;
10664 + }
10665 +
10666 +@@ -828,9 +828,6 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
10667 + }
10668 + mutex_unlock(&host->host_mutex);
10669 +
10670 +- memstick_remove_host(msh);
10671 +- memstick_free_host(msh);
10672 +-
10673 + /* Balance possible unbalanced usage count
10674 + * e.g. unconditional module removal
10675 + */
10676 +@@ -838,10 +835,11 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
10677 + pm_runtime_put(ms_dev(host));
10678 +
10679 + pm_runtime_disable(ms_dev(host));
10680 +- platform_set_drvdata(pdev, NULL);
10681 +-
10682 ++ memstick_remove_host(msh);
10683 + dev_dbg(ms_dev(host),
10684 + ": Realtek USB Memstick controller has been removed\n");
10685 ++ memstick_free_host(msh);
10686 ++ platform_set_drvdata(pdev, NULL);
10687 +
10688 + return 0;
10689 + }
10690 +diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
10691 +index b74efa469e909..8b421b21a2320 100644
10692 +--- a/drivers/mfd/Kconfig
10693 ++++ b/drivers/mfd/Kconfig
10694 +@@ -465,6 +465,7 @@ config MFD_MP2629
10695 + tristate "Monolithic Power Systems MP2629 ADC and Battery charger"
10696 + depends on I2C
10697 + select REGMAP_I2C
10698 ++ select MFD_CORE
10699 + help
10700 + Select this option to enable support for Monolithic Power Systems
10701 + battery charger. This provides ADC, thermal and battery charger power
10702 +diff --git a/drivers/mfd/rn5t618.c b/drivers/mfd/rn5t618.c
10703 +index dc452df1f1bfe..652a5e60067f8 100644
10704 +--- a/drivers/mfd/rn5t618.c
10705 ++++ b/drivers/mfd/rn5t618.c
10706 +@@ -104,7 +104,7 @@ static int rn5t618_irq_init(struct rn5t618 *rn5t618)
10707 +
10708 + ret = devm_regmap_add_irq_chip(rn5t618->dev, rn5t618->regmap,
10709 + rn5t618->irq,
10710 +- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
10711 ++ IRQF_TRIGGER_LOW | IRQF_ONESHOT,
10712 + 0, irq_chip, &rn5t618->irq_data);
10713 + if (ret)
10714 + dev_err(rn5t618->dev, "Failed to register IRQ chip\n");
10715 +diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
10716 +index 81c70e5bc168f..3e4a594c110b3 100644
10717 +--- a/drivers/misc/eeprom/idt_89hpesx.c
10718 ++++ b/drivers/misc/eeprom/idt_89hpesx.c
10719 +@@ -1126,11 +1126,10 @@ static void idt_get_fw_data(struct idt_89hpesx_dev *pdev)
10720 +
10721 + device_for_each_child_node(dev, fwnode) {
10722 + ee_id = idt_ee_match_id(fwnode);
10723 +- if (!ee_id) {
10724 +- dev_warn(dev, "Skip unsupported EEPROM device");
10725 +- continue;
10726 +- } else
10727 ++ if (ee_id)
10728 + break;
10729 ++
10730 ++ dev_warn(dev, "Skip unsupported EEPROM device %pfw\n", fwnode);
10731 + }
10732 +
10733 + /* If there is no fwnode EEPROM device, then set zero size */
10734 +@@ -1161,6 +1160,7 @@ static void idt_get_fw_data(struct idt_89hpesx_dev *pdev)
10735 + else /* if (!fwnode_property_read_bool(node, "read-only")) */
10736 + pdev->eero = false;
10737 +
10738 ++ fwnode_handle_put(fwnode);
10739 + dev_info(dev, "EEPROM of %d bytes found by 0x%x",
10740 + pdev->eesize, pdev->eeaddr);
10741 + }
10742 +diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c
10743 +index 032d114f01ea5..8271406262445 100644
10744 +--- a/drivers/misc/habanalabs/common/habanalabs_drv.c
10745 ++++ b/drivers/misc/habanalabs/common/habanalabs_drv.c
10746 +@@ -437,6 +437,7 @@ static int hl_pci_probe(struct pci_dev *pdev,
10747 + return 0;
10748 +
10749 + disable_device:
10750 ++ pci_disable_pcie_error_reporting(pdev);
10751 + pci_set_drvdata(pdev, NULL);
10752 + destroy_hdev(hdev);
10753 +
10754 +diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
10755 +index a4c06ef673943..6573ec3792d60 100644
10756 +--- a/drivers/mmc/core/block.c
10757 ++++ b/drivers/mmc/core/block.c
10758 +@@ -1004,6 +1004,12 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
10759 +
10760 + switch (mq_rq->drv_op) {
10761 + case MMC_DRV_OP_IOCTL:
10762 ++ if (card->ext_csd.cmdq_en) {
10763 ++ ret = mmc_cmdq_disable(card);
10764 ++ if (ret)
10765 ++ break;
10766 ++ }
10767 ++ fallthrough;
10768 + case MMC_DRV_OP_IOCTL_RPMB:
10769 + idata = mq_rq->drv_op_data;
10770 + for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
10771 +@@ -1014,6 +1020,8 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
10772 + /* Always switch back to main area after RPMB access */
10773 + if (rpmb_ioctl)
10774 + mmc_blk_part_switch(card, 0);
10775 ++ else if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
10776 ++ mmc_cmdq_enable(card);
10777 + break;
10778 + case MMC_DRV_OP_BOOT_WP:
10779 + ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP,
10780 +diff --git a/drivers/mmc/host/sdhci-of-aspeed.c b/drivers/mmc/host/sdhci-of-aspeed.c
10781 +index 7d8692e909961..b6ac2af199b8d 100644
10782 +--- a/drivers/mmc/host/sdhci-of-aspeed.c
10783 ++++ b/drivers/mmc/host/sdhci-of-aspeed.c
10784 +@@ -150,7 +150,7 @@ static int aspeed_sdhci_phase_to_tap(struct device *dev, unsigned long rate_hz,
10785 +
10786 + tap = div_u64(phase_period_ps, prop_delay_ps);
10787 + if (tap > ASPEED_SDHCI_NR_TAPS) {
10788 +- dev_warn(dev,
10789 ++ dev_dbg(dev,
10790 + "Requested out of range phase tap %d for %d degrees of phase compensation at %luHz, clamping to tap %d\n",
10791 + tap, phase_deg, rate_hz, ASPEED_SDHCI_NR_TAPS);
10792 + tap = ASPEED_SDHCI_NR_TAPS;
10793 +diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
10794 +index 5dc36efff47ff..11e375579cfb9 100644
10795 +--- a/drivers/mmc/host/sdhci-sprd.c
10796 ++++ b/drivers/mmc/host/sdhci-sprd.c
10797 +@@ -393,6 +393,7 @@ static void sdhci_sprd_request_done(struct sdhci_host *host,
10798 + static struct sdhci_ops sdhci_sprd_ops = {
10799 + .read_l = sdhci_sprd_readl,
10800 + .write_l = sdhci_sprd_writel,
10801 ++ .write_w = sdhci_sprd_writew,
10802 + .write_b = sdhci_sprd_writeb,
10803 + .set_clock = sdhci_sprd_set_clock,
10804 + .get_max_clock = sdhci_sprd_get_max_clock,
10805 +diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
10806 +index 615f3d008af1e..b9b79b1089a00 100644
10807 +--- a/drivers/mmc/host/usdhi6rol0.c
10808 ++++ b/drivers/mmc/host/usdhi6rol0.c
10809 +@@ -1801,6 +1801,7 @@ static int usdhi6_probe(struct platform_device *pdev)
10810 +
10811 + version = usdhi6_read(host, USDHI6_VERSION);
10812 + if ((version & 0xfff) != 0xa0d) {
10813 ++ ret = -EPERM;
10814 + dev_err(dev, "Version not recognized %x\n", version);
10815 + goto e_clk_off;
10816 + }
10817 +diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
10818 +index 4f4c0813f9fdc..350e67056fa62 100644
10819 +--- a/drivers/mmc/host/via-sdmmc.c
10820 ++++ b/drivers/mmc/host/via-sdmmc.c
10821 +@@ -857,6 +857,9 @@ static void via_sdc_data_isr(struct via_crdr_mmc_host *host, u16 intmask)
10822 + {
10823 + BUG_ON(intmask == 0);
10824 +
10825 ++ if (!host->data)
10826 ++ return;
10827 ++
10828 + if (intmask & VIA_CRDR_SDSTS_DT)
10829 + host->data->error = -ETIMEDOUT;
10830 + else if (intmask & (VIA_CRDR_SDSTS_RC | VIA_CRDR_SDSTS_WC))
10831 +diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
10832 +index 739cf63ef6e2f..4950d10d3a191 100644
10833 +--- a/drivers/mmc/host/vub300.c
10834 ++++ b/drivers/mmc/host/vub300.c
10835 +@@ -2279,7 +2279,7 @@ static int vub300_probe(struct usb_interface *interface,
10836 + if (retval < 0)
10837 + goto error5;
10838 + retval =
10839 +- usb_control_msg(vub300->udev, usb_rcvctrlpipe(vub300->udev, 0),
10840 ++ usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
10841 + SET_ROM_WAIT_STATES,
10842 + USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
10843 + firmware_rom_wait_states, 0x0000, NULL, 0, HZ);
10844 +diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
10845 +index 549aac00228eb..390f8d719c258 100644
10846 +--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
10847 ++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
10848 +@@ -273,6 +273,37 @@ static int anfc_pkt_len_config(unsigned int len, unsigned int *steps,
10849 + return 0;
10850 + }
10851 +
10852 ++static int anfc_select_target(struct nand_chip *chip, int target)
10853 ++{
10854 ++ struct anand *anand = to_anand(chip);
10855 ++ struct arasan_nfc *nfc = to_anfc(chip->controller);
10856 ++ int ret;
10857 ++
10858 ++ /* Update the controller timings and the potential ECC configuration */
10859 ++ writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG);
10860 ++
10861 ++ /* Update clock frequency */
10862 ++ if (nfc->cur_clk != anand->clk) {
10863 ++ clk_disable_unprepare(nfc->controller_clk);
10864 ++ ret = clk_set_rate(nfc->controller_clk, anand->clk);
10865 ++ if (ret) {
10866 ++ dev_err(nfc->dev, "Failed to change clock rate\n");
10867 ++ return ret;
10868 ++ }
10869 ++
10870 ++ ret = clk_prepare_enable(nfc->controller_clk);
10871 ++ if (ret) {
10872 ++ dev_err(nfc->dev,
10873 ++ "Failed to re-enable the controller clock\n");
10874 ++ return ret;
10875 ++ }
10876 ++
10877 ++ nfc->cur_clk = anand->clk;
10878 ++ }
10879 ++
10880 ++ return 0;
10881 ++}
10882 ++
10883 + /*
10884 + * When using the embedded hardware ECC engine, the controller is in charge of
10885 + * feeding the engine with, first, the ECC residue present in the data array.
10886 +@@ -401,6 +432,18 @@ static int anfc_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
10887 + return 0;
10888 + }
10889 +
10890 ++static int anfc_sel_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
10891 ++ int oob_required, int page)
10892 ++{
10893 ++ int ret;
10894 ++
10895 ++ ret = anfc_select_target(chip, chip->cur_cs);
10896 ++ if (ret)
10897 ++ return ret;
10898 ++
10899 ++ return anfc_read_page_hw_ecc(chip, buf, oob_required, page);
10900 ++};
10901 ++
10902 + static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
10903 + int oob_required, int page)
10904 + {
10905 +@@ -461,6 +504,18 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
10906 + return ret;
10907 + }
10908 +
10909 ++static int anfc_sel_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
10910 ++ int oob_required, int page)
10911 ++{
10912 ++ int ret;
10913 ++
10914 ++ ret = anfc_select_target(chip, chip->cur_cs);
10915 ++ if (ret)
10916 ++ return ret;
10917 ++
10918 ++ return anfc_write_page_hw_ecc(chip, buf, oob_required, page);
10919 ++};
10920 ++
10921 + /* NAND framework ->exec_op() hooks and related helpers */
10922 + static int anfc_parse_instructions(struct nand_chip *chip,
10923 + const struct nand_subop *subop,
10924 +@@ -753,37 +808,6 @@ static const struct nand_op_parser anfc_op_parser = NAND_OP_PARSER(
10925 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)),
10926 + );
10927 +
10928 +-static int anfc_select_target(struct nand_chip *chip, int target)
10929 +-{
10930 +- struct anand *anand = to_anand(chip);
10931 +- struct arasan_nfc *nfc = to_anfc(chip->controller);
10932 +- int ret;
10933 +-
10934 +- /* Update the controller timings and the potential ECC configuration */
10935 +- writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG);
10936 +-
10937 +- /* Update clock frequency */
10938 +- if (nfc->cur_clk != anand->clk) {
10939 +- clk_disable_unprepare(nfc->controller_clk);
10940 +- ret = clk_set_rate(nfc->controller_clk, anand->clk);
10941 +- if (ret) {
10942 +- dev_err(nfc->dev, "Failed to change clock rate\n");
10943 +- return ret;
10944 +- }
10945 +-
10946 +- ret = clk_prepare_enable(nfc->controller_clk);
10947 +- if (ret) {
10948 +- dev_err(nfc->dev,
10949 +- "Failed to re-enable the controller clock\n");
10950 +- return ret;
10951 +- }
10952 +-
10953 +- nfc->cur_clk = anand->clk;
10954 +- }
10955 +-
10956 +- return 0;
10957 +-}
10958 +-
10959 + static int anfc_check_op(struct nand_chip *chip,
10960 + const struct nand_operation *op)
10961 + {
10962 +@@ -1007,8 +1031,8 @@ static int anfc_init_hw_ecc_controller(struct arasan_nfc *nfc,
10963 + if (!anand->bch)
10964 + return -EINVAL;
10965 +
10966 +- ecc->read_page = anfc_read_page_hw_ecc;
10967 +- ecc->write_page = anfc_write_page_hw_ecc;
10968 ++ ecc->read_page = anfc_sel_read_page_hw_ecc;
10969 ++ ecc->write_page = anfc_sel_write_page_hw_ecc;
10970 +
10971 + return 0;
10972 + }
10973 +diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
10974 +index 79da6b02e2095..f83525a1ab0e6 100644
10975 +--- a/drivers/mtd/nand/raw/marvell_nand.c
10976 ++++ b/drivers/mtd/nand/raw/marvell_nand.c
10977 +@@ -3030,8 +3030,10 @@ static int __maybe_unused marvell_nfc_resume(struct device *dev)
10978 + return ret;
10979 +
10980 + ret = clk_prepare_enable(nfc->reg_clk);
10981 +- if (ret < 0)
10982 ++ if (ret < 0) {
10983 ++ clk_disable_unprepare(nfc->core_clk);
10984 + return ret;
10985 ++ }
10986 +
10987 + /*
10988 + * Reset nfc->selected_chip so the next command will cause the timing
10989 +diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
10990 +index 17f63f95f4a28..54ae540bc66b4 100644
10991 +--- a/drivers/mtd/nand/spi/core.c
10992 ++++ b/drivers/mtd/nand/spi/core.c
10993 +@@ -290,6 +290,8 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
10994 + {
10995 + struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
10996 + struct spinand_device *spinand = nand_to_spinand(nand);
10997 ++ struct mtd_info *mtd = spinand_to_mtd(spinand);
10998 ++ int ret;
10999 +
11000 + if (req->mode == MTD_OPS_RAW)
11001 + return 0;
11002 +@@ -299,7 +301,13 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
11003 + return 0;
11004 +
11005 + /* Finish a page write: check the status, report errors/bitflips */
11006 +- return spinand_check_ecc_status(spinand, engine_conf->status);
11007 ++ ret = spinand_check_ecc_status(spinand, engine_conf->status);
11008 ++ if (ret == -EBADMSG)
11009 ++ mtd->ecc_stats.failed++;
11010 ++ else if (ret > 0)
11011 ++ mtd->ecc_stats.corrected += ret;
11012 ++
11013 ++ return ret;
11014 + }
11015 +
11016 + static struct nand_ecc_engine_ops spinand_ondie_ecc_engine_ops = {
11017 +@@ -620,13 +628,10 @@ static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
11018 + if (ret < 0 && ret != -EBADMSG)
11019 + break;
11020 +
11021 +- if (ret == -EBADMSG) {
11022 ++ if (ret == -EBADMSG)
11023 + ecc_failed = true;
11024 +- mtd->ecc_stats.failed++;
11025 +- } else {
11026 +- mtd->ecc_stats.corrected += ret;
11027 ++ else
11028 + max_bitflips = max_t(unsigned int, max_bitflips, ret);
11029 +- }
11030 +
11031 + ret = 0;
11032 + ops->retlen += iter.req.datalen;
11033 +diff --git a/drivers/mtd/parsers/qcomsmempart.c b/drivers/mtd/parsers/qcomsmempart.c
11034 +index d9083308f6ba6..06a818cd2433f 100644
11035 +--- a/drivers/mtd/parsers/qcomsmempart.c
11036 ++++ b/drivers/mtd/parsers/qcomsmempart.c
11037 +@@ -159,6 +159,15 @@ out_free_parts:
11038 + return ret;
11039 + }
11040 +
11041 ++static void parse_qcomsmem_cleanup(const struct mtd_partition *pparts,
11042 ++ int nr_parts)
11043 ++{
11044 ++ int i;
11045 ++
11046 ++ for (i = 0; i < nr_parts; i++)
11047 ++ kfree(pparts[i].name);
11048 ++}
11049 ++
11050 + static const struct of_device_id qcomsmem_of_match_table[] = {
11051 + { .compatible = "qcom,smem-part" },
11052 + {},
11053 +@@ -167,6 +176,7 @@ MODULE_DEVICE_TABLE(of, qcomsmem_of_match_table);
11054 +
11055 + static struct mtd_part_parser mtd_parser_qcomsmem = {
11056 + .parse_fn = parse_qcomsmem_part,
11057 ++ .cleanup = parse_qcomsmem_cleanup,
11058 + .name = "qcomsmem",
11059 + .of_match_table = qcomsmem_of_match_table,
11060 + };
11061 +diff --git a/drivers/mtd/parsers/redboot.c b/drivers/mtd/parsers/redboot.c
11062 +index 91146bdc47132..3ccd6363ee8cb 100644
11063 +--- a/drivers/mtd/parsers/redboot.c
11064 ++++ b/drivers/mtd/parsers/redboot.c
11065 +@@ -45,6 +45,7 @@ static inline int redboot_checksum(struct fis_image_desc *img)
11066 + static void parse_redboot_of(struct mtd_info *master)
11067 + {
11068 + struct device_node *np;
11069 ++ struct device_node *npart;
11070 + u32 dirblock;
11071 + int ret;
11072 +
11073 +@@ -52,7 +53,11 @@ static void parse_redboot_of(struct mtd_info *master)
11074 + if (!np)
11075 + return;
11076 +
11077 +- ret = of_property_read_u32(np, "fis-index-block", &dirblock);
11078 ++ npart = of_get_child_by_name(np, "partitions");
11079 ++ if (!npart)
11080 ++ return;
11081 ++
11082 ++ ret = of_property_read_u32(npart, "fis-index-block", &dirblock);
11083 + if (ret)
11084 + return;
11085 +
11086 +diff --git a/drivers/net/can/peak_canfd/peak_canfd.c b/drivers/net/can/peak_canfd/peak_canfd.c
11087 +index 00847cbaf7b62..d08718e98e110 100644
11088 +--- a/drivers/net/can/peak_canfd/peak_canfd.c
11089 ++++ b/drivers/net/can/peak_canfd/peak_canfd.c
11090 +@@ -351,8 +351,8 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
11091 + return err;
11092 + }
11093 +
11094 +- /* start network queue (echo_skb array is empty) */
11095 +- netif_start_queue(ndev);
11096 ++ /* wake network queue up (echo_skb array is empty) */
11097 ++ netif_wake_queue(ndev);
11098 +
11099 + return 0;
11100 + }
11101 +diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
11102 +index 18f40eb203605..5cd26c1b78ad0 100644
11103 +--- a/drivers/net/can/usb/ems_usb.c
11104 ++++ b/drivers/net/can/usb/ems_usb.c
11105 +@@ -1053,7 +1053,6 @@ static void ems_usb_disconnect(struct usb_interface *intf)
11106 +
11107 + if (dev) {
11108 + unregister_netdev(dev->netdev);
11109 +- free_candev(dev->netdev);
11110 +
11111 + unlink_all_urbs(dev);
11112 +
11113 +@@ -1061,6 +1060,8 @@ static void ems_usb_disconnect(struct usb_interface *intf)
11114 +
11115 + kfree(dev->intr_in_buffer);
11116 + kfree(dev->tx_msg_buffer);
11117 ++
11118 ++ free_candev(dev->netdev);
11119 + }
11120 + }
11121 +
11122 +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
11123 +index e08bf93771400..25363fceb45e8 100644
11124 +--- a/drivers/net/dsa/mv88e6xxx/chip.c
11125 ++++ b/drivers/net/dsa/mv88e6xxx/chip.c
11126 +@@ -1552,9 +1552,6 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
11127 + struct mv88e6xxx_vtu_entry vlan;
11128 + int i, err;
11129 +
11130 +- if (!vid)
11131 +- return -EOPNOTSUPP;
11132 +-
11133 + /* DSA and CPU ports have to be members of multiple vlans */
11134 + if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
11135 + return 0;
11136 +@@ -1993,6 +1990,9 @@ static int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
11137 + u8 member;
11138 + int err;
11139 +
11140 ++ if (!vlan->vid)
11141 ++ return 0;
11142 ++
11143 + err = mv88e6xxx_port_vlan_prepare(ds, port, vlan);
11144 + if (err)
11145 + return err;
11146 +diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
11147 +index 926544440f02a..42a0fb588f647 100644
11148 +--- a/drivers/net/dsa/sja1105/sja1105_main.c
11149 ++++ b/drivers/net/dsa/sja1105/sja1105_main.c
11150 +@@ -1798,6 +1798,12 @@ static int sja1105_reload_cbs(struct sja1105_private *priv)
11151 + {
11152 + int rc = 0, i;
11153 +
11154 ++ /* The credit based shapers are only allocated if
11155 ++ * CONFIG_NET_SCH_CBS is enabled.
11156 ++ */
11157 ++ if (!priv->cbs)
11158 ++ return 0;
11159 ++
11160 + for (i = 0; i < priv->info->num_cbs_shapers; i++) {
11161 + struct sja1105_cbs_entry *cbs = &priv->cbs[i];
11162 +
11163 +diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
11164 +index 9c5891bbfe61a..f4f50b3a472e1 100644
11165 +--- a/drivers/net/ethernet/aeroflex/greth.c
11166 ++++ b/drivers/net/ethernet/aeroflex/greth.c
11167 +@@ -1539,10 +1539,11 @@ static int greth_of_remove(struct platform_device *of_dev)
11168 + mdiobus_unregister(greth->mdio);
11169 +
11170 + unregister_netdev(ndev);
11171 +- free_netdev(ndev);
11172 +
11173 + of_iounmap(&of_dev->resource[0], greth->regs, resource_size(&of_dev->resource[0]));
11174 +
11175 ++ free_netdev(ndev);
11176 ++
11177 + return 0;
11178 + }
11179 +
11180 +diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
11181 +index f5fba8b8cdea9..a47e2710487ec 100644
11182 +--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
11183 ++++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
11184 +@@ -91,7 +91,7 @@ struct aq_macsec_txsc {
11185 + u32 hw_sc_idx;
11186 + unsigned long tx_sa_idx_busy;
11187 + const struct macsec_secy *sw_secy;
11188 +- u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
11189 ++ u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_MAX_KEY_LEN];
11190 + struct aq_macsec_tx_sc_stats stats;
11191 + struct aq_macsec_tx_sa_stats tx_sa_stats[MACSEC_NUM_AN];
11192 + };
11193 +@@ -101,7 +101,7 @@ struct aq_macsec_rxsc {
11194 + unsigned long rx_sa_idx_busy;
11195 + const struct macsec_secy *sw_secy;
11196 + const struct macsec_rx_sc *sw_rxsc;
11197 +- u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
11198 ++ u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_MAX_KEY_LEN];
11199 + struct aq_macsec_rx_sa_stats rx_sa_stats[MACSEC_NUM_AN];
11200 + };
11201 +
11202 +diff --git a/drivers/net/ethernet/broadcom/bcm4908_enet.c b/drivers/net/ethernet/broadcom/bcm4908_enet.c
11203 +index 65981931a7989..a31984cd0fb7b 100644
11204 +--- a/drivers/net/ethernet/broadcom/bcm4908_enet.c
11205 ++++ b/drivers/net/ethernet/broadcom/bcm4908_enet.c
11206 +@@ -165,9 +165,6 @@ static int bcm4908_dma_alloc_buf_descs(struct bcm4908_enet *enet,
11207 + if (!ring->slots)
11208 + goto err_free_buf_descs;
11209 +
11210 +- ring->read_idx = 0;
11211 +- ring->write_idx = 0;
11212 +-
11213 + return 0;
11214 +
11215 + err_free_buf_descs:
11216 +@@ -295,6 +292,9 @@ static void bcm4908_enet_dma_ring_init(struct bcm4908_enet *enet,
11217 +
11218 + enet_write(enet, ring->st_ram_block + ENET_DMA_CH_STATE_RAM_BASE_DESC_PTR,
11219 + (uint32_t)ring->dma_addr);
11220 ++
11221 ++ ring->read_idx = 0;
11222 ++ ring->write_idx = 0;
11223 + }
11224 +
11225 + static void bcm4908_enet_dma_uninit(struct bcm4908_enet *enet)
11226 +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
11227 +index fcca023f22e54..41f7f078cd27c 100644
11228 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
11229 ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
11230 +@@ -4296,3 +4296,4 @@ MODULE_AUTHOR("Broadcom Corporation");
11231 + MODULE_DESCRIPTION("Broadcom GENET Ethernet controller driver");
11232 + MODULE_ALIAS("platform:bcmgenet");
11233 + MODULE_LICENSE("GPL");
11234 ++MODULE_SOFTDEP("pre: mdio-bcm-unimac");
11235 +diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
11236 +index 701c12c9e0337..649c5c429bd7c 100644
11237 +--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
11238 ++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
11239 +@@ -550,7 +550,7 @@ int be_process_mcc(struct be_adapter *adapter)
11240 + int num = 0, status = 0;
11241 + struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
11242 +
11243 +- spin_lock_bh(&adapter->mcc_cq_lock);
11244 ++ spin_lock(&adapter->mcc_cq_lock);
11245 +
11246 + while ((compl = be_mcc_compl_get(adapter))) {
11247 + if (compl->flags & CQE_FLAGS_ASYNC_MASK) {
11248 +@@ -566,7 +566,7 @@ int be_process_mcc(struct be_adapter *adapter)
11249 + if (num)
11250 + be_cq_notify(adapter, mcc_obj->cq.id, mcc_obj->rearm_cq, num);
11251 +
11252 +- spin_unlock_bh(&adapter->mcc_cq_lock);
11253 ++ spin_unlock(&adapter->mcc_cq_lock);
11254 + return status;
11255 + }
11256 +
11257 +@@ -581,7 +581,9 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
11258 + if (be_check_error(adapter, BE_ERROR_ANY))
11259 + return -EIO;
11260 +
11261 ++ local_bh_disable();
11262 + status = be_process_mcc(adapter);
11263 ++ local_bh_enable();
11264 +
11265 + if (atomic_read(&mcc_obj->q.used) == 0)
11266 + break;
11267 +diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
11268 +index 7968568bbe214..361c1c87c1830 100644
11269 +--- a/drivers/net/ethernet/emulex/benet/be_main.c
11270 ++++ b/drivers/net/ethernet/emulex/benet/be_main.c
11271 +@@ -5501,7 +5501,9 @@ static void be_worker(struct work_struct *work)
11272 + * mcc completions
11273 + */
11274 + if (!netif_running(adapter->netdev)) {
11275 ++ local_bh_disable();
11276 + be_process_mcc(adapter);
11277 ++ local_bh_enable();
11278 + goto reschedule;
11279 + }
11280 +
11281 +diff --git a/drivers/net/ethernet/ezchip/nps_enet.c b/drivers/net/ethernet/ezchip/nps_enet.c
11282 +index 815fb62c4b02e..3d74401b4f102 100644
11283 +--- a/drivers/net/ethernet/ezchip/nps_enet.c
11284 ++++ b/drivers/net/ethernet/ezchip/nps_enet.c
11285 +@@ -610,7 +610,7 @@ static s32 nps_enet_probe(struct platform_device *pdev)
11286 +
11287 + /* Get IRQ number */
11288 + priv->irq = platform_get_irq(pdev, 0);
11289 +- if (!priv->irq) {
11290 ++ if (priv->irq < 0) {
11291 + dev_err(dev, "failed to retrieve <irq Rx-Tx> value from device tree\n");
11292 + err = -ENODEV;
11293 + goto out_netdev;
11294 +@@ -645,8 +645,8 @@ static s32 nps_enet_remove(struct platform_device *pdev)
11295 + struct nps_enet_priv *priv = netdev_priv(ndev);
11296 +
11297 + unregister_netdev(ndev);
11298 +- free_netdev(ndev);
11299 + netif_napi_del(&priv->napi);
11300 ++ free_netdev(ndev);
11301 +
11302 + return 0;
11303 + }
11304 +diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
11305 +index 04421aec2dfd6..11dbbfd38770c 100644
11306 +--- a/drivers/net/ethernet/faraday/ftgmac100.c
11307 ++++ b/drivers/net/ethernet/faraday/ftgmac100.c
11308 +@@ -1830,14 +1830,17 @@ static int ftgmac100_probe(struct platform_device *pdev)
11309 + if (np && of_get_property(np, "use-ncsi", NULL)) {
11310 + if (!IS_ENABLED(CONFIG_NET_NCSI)) {
11311 + dev_err(&pdev->dev, "NCSI stack not enabled\n");
11312 ++ err = -EINVAL;
11313 + goto err_phy_connect;
11314 + }
11315 +
11316 + dev_info(&pdev->dev, "Using NCSI interface\n");
11317 + priv->use_ncsi = true;
11318 + priv->ndev = ncsi_register_dev(netdev, ftgmac100_ncsi_handler);
11319 +- if (!priv->ndev)
11320 ++ if (!priv->ndev) {
11321 ++ err = -EINVAL;
11322 + goto err_phy_connect;
11323 ++ }
11324 + } else if (np && of_get_property(np, "phy-handle", NULL)) {
11325 + struct phy_device *phy;
11326 +
11327 +@@ -1856,6 +1859,7 @@ static int ftgmac100_probe(struct platform_device *pdev)
11328 + &ftgmac100_adjust_link);
11329 + if (!phy) {
11330 + dev_err(&pdev->dev, "Failed to connect to phy\n");
11331 ++ err = -EINVAL;
11332 + goto err_phy_connect;
11333 + }
11334 +
11335 +diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
11336 +index bbc423e931223..79cefe85a799f 100644
11337 +--- a/drivers/net/ethernet/google/gve/gve_main.c
11338 ++++ b/drivers/net/ethernet/google/gve/gve_main.c
11339 +@@ -1295,8 +1295,8 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
11340 +
11341 + gve_write_version(&reg_bar->driver_version);
11342 + /* Get max queues to alloc etherdev */
11343 +- max_rx_queues = ioread32be(&reg_bar->max_tx_queues);
11344 +- max_tx_queues = ioread32be(&reg_bar->max_rx_queues);
11345 ++ max_tx_queues = ioread32be(&reg_bar->max_tx_queues);
11346 ++ max_rx_queues = ioread32be(&reg_bar->max_rx_queues);
11347 + /* Alloc and setup the netdev and priv */
11348 + dev = alloc_etherdev_mqs(sizeof(*priv), max_tx_queues, max_rx_queues);
11349 + if (!dev) {
11350 +diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
11351 +index c2e7404757869..f630667364253 100644
11352 +--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
11353 ++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
11354 +@@ -2617,10 +2617,8 @@ static int ehea_restart_qps(struct net_device *dev)
11355 + u16 dummy16 = 0;
11356 +
11357 + cb0 = (void *)get_zeroed_page(GFP_KERNEL);
11358 +- if (!cb0) {
11359 +- ret = -ENOMEM;
11360 +- goto out;
11361 +- }
11362 ++ if (!cb0)
11363 ++ return -ENOMEM;
11364 +
11365 + for (i = 0; i < (port->num_def_qps); i++) {
11366 + struct ehea_port_res *pr = &port->port_res[i];
11367 +@@ -2640,6 +2638,7 @@ static int ehea_restart_qps(struct net_device *dev)
11368 + cb0);
11369 + if (hret != H_SUCCESS) {
11370 + netdev_err(dev, "query_ehea_qp failed (1)\n");
11371 ++ ret = -EFAULT;
11372 + goto out;
11373 + }
11374 +
11375 +@@ -2652,6 +2651,7 @@ static int ehea_restart_qps(struct net_device *dev)
11376 + &dummy64, &dummy16, &dummy16);
11377 + if (hret != H_SUCCESS) {
11378 + netdev_err(dev, "modify_ehea_qp failed (1)\n");
11379 ++ ret = -EFAULT;
11380 + goto out;
11381 + }
11382 +
11383 +@@ -2660,6 +2660,7 @@ static int ehea_restart_qps(struct net_device *dev)
11384 + cb0);
11385 + if (hret != H_SUCCESS) {
11386 + netdev_err(dev, "query_ehea_qp failed (2)\n");
11387 ++ ret = -EFAULT;
11388 + goto out;
11389 + }
11390 +
11391 +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
11392 +index ffb2a91750c7e..3c77897b3f31f 100644
11393 +--- a/drivers/net/ethernet/ibm/ibmvnic.c
11394 ++++ b/drivers/net/ethernet/ibm/ibmvnic.c
11395 +@@ -106,6 +106,8 @@ static void release_crq_queue(struct ibmvnic_adapter *);
11396 + static int __ibmvnic_set_mac(struct net_device *, u8 *);
11397 + static int init_crq_queue(struct ibmvnic_adapter *adapter);
11398 + static int send_query_phys_parms(struct ibmvnic_adapter *adapter);
11399 ++static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
11400 ++ struct ibmvnic_sub_crq_queue *tx_scrq);
11401 +
11402 + struct ibmvnic_stat {
11403 + char name[ETH_GSTRING_LEN];
11404 +@@ -209,12 +211,11 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
11405 + mutex_lock(&adapter->fw_lock);
11406 + adapter->fw_done_rc = 0;
11407 + reinit_completion(&adapter->fw_done);
11408 +- rc = send_request_map(adapter, ltb->addr,
11409 +- ltb->size, ltb->map_id);
11410 ++
11411 ++ rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
11412 + if (rc) {
11413 +- dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
11414 +- mutex_unlock(&adapter->fw_lock);
11415 +- return rc;
11416 ++ dev_err(dev, "send_request_map failed, rc = %d\n", rc);
11417 ++ goto out;
11418 + }
11419 +
11420 + rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
11421 +@@ -222,20 +223,23 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
11422 + dev_err(dev,
11423 + "Long term map request aborted or timed out,rc = %d\n",
11424 + rc);
11425 +- dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
11426 +- mutex_unlock(&adapter->fw_lock);
11427 +- return rc;
11428 ++ goto out;
11429 + }
11430 +
11431 + if (adapter->fw_done_rc) {
11432 + dev_err(dev, "Couldn't map long term buffer,rc = %d\n",
11433 + adapter->fw_done_rc);
11434 ++ rc = -1;
11435 ++ goto out;
11436 ++ }
11437 ++ rc = 0;
11438 ++out:
11439 ++ if (rc) {
11440 + dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
11441 +- mutex_unlock(&adapter->fw_lock);
11442 +- return -1;
11443 ++ ltb->buff = NULL;
11444 + }
11445 + mutex_unlock(&adapter->fw_lock);
11446 +- return 0;
11447 ++ return rc;
11448 + }
11449 +
11450 + static void free_long_term_buff(struct ibmvnic_adapter *adapter,
11451 +@@ -255,14 +259,44 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
11452 + adapter->reset_reason != VNIC_RESET_TIMEOUT)
11453 + send_request_unmap(adapter, ltb->map_id);
11454 + dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
11455 ++ ltb->buff = NULL;
11456 ++ ltb->map_id = 0;
11457 + }
11458 +
11459 +-static int reset_long_term_buff(struct ibmvnic_long_term_buff *ltb)
11460 ++static int reset_long_term_buff(struct ibmvnic_adapter *adapter,
11461 ++ struct ibmvnic_long_term_buff *ltb)
11462 + {
11463 +- if (!ltb->buff)
11464 +- return -EINVAL;
11465 ++ struct device *dev = &adapter->vdev->dev;
11466 ++ int rc;
11467 +
11468 + memset(ltb->buff, 0, ltb->size);
11469 ++
11470 ++ mutex_lock(&adapter->fw_lock);
11471 ++ adapter->fw_done_rc = 0;
11472 ++
11473 ++ reinit_completion(&adapter->fw_done);
11474 ++ rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
11475 ++ if (rc) {
11476 ++ mutex_unlock(&adapter->fw_lock);
11477 ++ return rc;
11478 ++ }
11479 ++
11480 ++ rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
11481 ++ if (rc) {
11482 ++ dev_info(dev,
11483 ++ "Reset failed, long term map request timed out or aborted\n");
11484 ++ mutex_unlock(&adapter->fw_lock);
11485 ++ return rc;
11486 ++ }
11487 ++
11488 ++ if (adapter->fw_done_rc) {
11489 ++ dev_info(dev,
11490 ++ "Reset failed, attempting to free and reallocate buffer\n");
11491 ++ free_long_term_buff(adapter, ltb);
11492 ++ mutex_unlock(&adapter->fw_lock);
11493 ++ return alloc_long_term_buff(adapter, ltb, ltb->size);
11494 ++ }
11495 ++ mutex_unlock(&adapter->fw_lock);
11496 + return 0;
11497 + }
11498 +
11499 +@@ -298,7 +332,14 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
11500 +
11501 + rx_scrq = adapter->rx_scrq[pool->index];
11502 + ind_bufp = &rx_scrq->ind_buf;
11503 +- for (i = 0; i < count; ++i) {
11504 ++
11505 ++ /* netdev_skb_alloc() could have failed after we saved a few skbs
11506 ++ * in the indir_buf and we would not have sent them to VIOS yet.
11507 ++ * To account for them, start the loop at ind_bufp->index rather
11508 ++ * than 0. If we pushed all the skbs to VIOS, ind_bufp->index will
11509 ++ * be 0.
11510 ++ */
11511 ++ for (i = ind_bufp->index; i < count; ++i) {
11512 + skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
11513 + if (!skb) {
11514 + dev_err(dev, "Couldn't replenish rx buff\n");
11515 +@@ -484,7 +525,8 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
11516 + rx_pool->size *
11517 + rx_pool->buff_size);
11518 + } else {
11519 +- rc = reset_long_term_buff(&rx_pool->long_term_buff);
11520 ++ rc = reset_long_term_buff(adapter,
11521 ++ &rx_pool->long_term_buff);
11522 + }
11523 +
11524 + if (rc)
11525 +@@ -607,11 +649,12 @@ static int init_rx_pools(struct net_device *netdev)
11526 + return 0;
11527 + }
11528 +
11529 +-static int reset_one_tx_pool(struct ibmvnic_tx_pool *tx_pool)
11530 ++static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
11531 ++ struct ibmvnic_tx_pool *tx_pool)
11532 + {
11533 + int rc, i;
11534 +
11535 +- rc = reset_long_term_buff(&tx_pool->long_term_buff);
11536 ++ rc = reset_long_term_buff(adapter, &tx_pool->long_term_buff);
11537 + if (rc)
11538 + return rc;
11539 +
11540 +@@ -638,10 +681,11 @@ static int reset_tx_pools(struct ibmvnic_adapter *adapter)
11541 +
11542 + tx_scrqs = adapter->num_active_tx_pools;
11543 + for (i = 0; i < tx_scrqs; i++) {
11544 +- rc = reset_one_tx_pool(&adapter->tso_pool[i]);
11545 ++ ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
11546 ++ rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]);
11547 + if (rc)
11548 + return rc;
11549 +- rc = reset_one_tx_pool(&adapter->tx_pool[i]);
11550 ++ rc = reset_one_tx_pool(adapter, &adapter->tx_pool[i]);
11551 + if (rc)
11552 + return rc;
11553 + }
11554 +@@ -734,8 +778,11 @@ static int init_tx_pools(struct net_device *netdev)
11555 +
11556 + adapter->tso_pool = kcalloc(tx_subcrqs,
11557 + sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
11558 +- if (!adapter->tso_pool)
11559 ++ if (!adapter->tso_pool) {
11560 ++ kfree(adapter->tx_pool);
11561 ++ adapter->tx_pool = NULL;
11562 + return -1;
11563 ++ }
11564 +
11565 + adapter->num_active_tx_pools = tx_subcrqs;
11566 +
11567 +@@ -1156,6 +1203,11 @@ static int __ibmvnic_open(struct net_device *netdev)
11568 +
11569 + netif_tx_start_all_queues(netdev);
11570 +
11571 ++ if (prev_state == VNIC_CLOSED) {
11572 ++ for (i = 0; i < adapter->req_rx_queues; i++)
11573 ++ napi_schedule(&adapter->napi[i]);
11574 ++ }
11575 ++
11576 + adapter->state = VNIC_OPEN;
11577 + return rc;
11578 + }
11579 +@@ -1557,7 +1609,8 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
11580 + ind_bufp->index = 0;
11581 + if (atomic_sub_return(entries, &tx_scrq->used) <=
11582 + (adapter->req_tx_entries_per_subcrq / 2) &&
11583 +- __netif_subqueue_stopped(adapter->netdev, queue_num)) {
11584 ++ __netif_subqueue_stopped(adapter->netdev, queue_num) &&
11585 ++ !test_bit(0, &adapter->resetting)) {
11586 + netif_wake_subqueue(adapter->netdev, queue_num);
11587 + netdev_dbg(adapter->netdev, "Started queue %d\n",
11588 + queue_num);
11589 +@@ -1650,7 +1703,6 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
11590 + tx_send_failed++;
11591 + tx_dropped++;
11592 + ret = NETDEV_TX_OK;
11593 +- ibmvnic_tx_scrq_flush(adapter, tx_scrq);
11594 + goto out;
11595 + }
11596 +
11597 +@@ -3088,6 +3140,7 @@ static void release_sub_crqs(struct ibmvnic_adapter *adapter, bool do_h_free)
11598 +
11599 + netdev_dbg(adapter->netdev, "Releasing tx_scrq[%d]\n",
11600 + i);
11601 ++ ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
11602 + if (adapter->tx_scrq[i]->irq) {
11603 + free_irq(adapter->tx_scrq[i]->irq,
11604 + adapter->tx_scrq[i]);
11605 +diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
11606 +index a0948002ddf85..b3ad95ac3d859 100644
11607 +--- a/drivers/net/ethernet/intel/e1000e/netdev.c
11608 ++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
11609 +@@ -5222,18 +5222,20 @@ static void e1000_watchdog_task(struct work_struct *work)
11610 + pm_runtime_resume(netdev->dev.parent);
11611 +
11612 + /* Checking if MAC is in DMoff state*/
11613 +- pcim_state = er32(STATUS);
11614 +- while (pcim_state & E1000_STATUS_PCIM_STATE) {
11615 +- if (tries++ == dmoff_exit_timeout) {
11616 +- e_dbg("Error in exiting dmoff\n");
11617 +- break;
11618 +- }
11619 +- usleep_range(10000, 20000);
11620 ++ if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) {
11621 + pcim_state = er32(STATUS);
11622 +-
11623 +- /* Checking if MAC exited DMoff state */
11624 +- if (!(pcim_state & E1000_STATUS_PCIM_STATE))
11625 +- e1000_phy_hw_reset(&adapter->hw);
11626 ++ while (pcim_state & E1000_STATUS_PCIM_STATE) {
11627 ++ if (tries++ == dmoff_exit_timeout) {
11628 ++ e_dbg("Error in exiting dmoff\n");
11629 ++ break;
11630 ++ }
11631 ++ usleep_range(10000, 20000);
11632 ++ pcim_state = er32(STATUS);
11633 ++
11634 ++ /* Checking if MAC exited DMoff state */
11635 ++ if (!(pcim_state & E1000_STATUS_PCIM_STATE))
11636 ++ e1000_phy_hw_reset(&adapter->hw);
11637 ++ }
11638 + }
11639 +
11640 + /* update snapshot of PHY registers on LSC */
11641 +diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
11642 +index 93dd58fda272f..d558364e3a9fb 100644
11643 +--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
11644 ++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
11645 +@@ -1262,8 +1262,7 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
11646 + if (ethtool_link_ksettings_test_link_mode(&safe_ks,
11647 + supported,
11648 + Autoneg) &&
11649 +- hw->phy.link_info.phy_type !=
11650 +- I40E_PHY_TYPE_10GBASE_T) {
11651 ++ hw->phy.media_type != I40E_MEDIA_TYPE_BASET) {
11652 + netdev_info(netdev, "Autoneg cannot be disabled on this phy\n");
11653 + err = -EINVAL;
11654 + goto done;
11655 +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
11656 +index ac4b44fc19f17..d5106a6afb453 100644
11657 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
11658 ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
11659 +@@ -31,7 +31,7 @@ static void i40e_vsi_reinit_locked(struct i40e_vsi *vsi);
11660 + static void i40e_handle_reset_warning(struct i40e_pf *pf, bool lock_acquired);
11661 + static int i40e_add_vsi(struct i40e_vsi *vsi);
11662 + static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi);
11663 +-static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit);
11664 ++static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired);
11665 + static int i40e_setup_misc_vector(struct i40e_pf *pf);
11666 + static void i40e_determine_queue_usage(struct i40e_pf *pf);
11667 + static int i40e_setup_pf_filter_control(struct i40e_pf *pf);
11668 +@@ -8702,6 +8702,8 @@ int i40e_vsi_open(struct i40e_vsi *vsi)
11669 + dev_driver_string(&pf->pdev->dev),
11670 + dev_name(&pf->pdev->dev));
11671 + err = i40e_vsi_request_irq(vsi, int_name);
11672 ++ if (err)
11673 ++ goto err_setup_rx;
11674 +
11675 + } else {
11676 + err = -EINVAL;
11677 +@@ -10568,7 +10570,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
11678 + #endif /* CONFIG_I40E_DCB */
11679 + if (!lock_acquired)
11680 + rtnl_lock();
11681 +- ret = i40e_setup_pf_switch(pf, reinit);
11682 ++ ret = i40e_setup_pf_switch(pf, reinit, true);
11683 + if (ret)
11684 + goto end_unlock;
11685 +
11686 +@@ -14621,10 +14623,11 @@ int i40e_fetch_switch_configuration(struct i40e_pf *pf, bool printconfig)
11687 + * i40e_setup_pf_switch - Setup the HW switch on startup or after reset
11688 + * @pf: board private structure
11689 + * @reinit: if the Main VSI needs to re-initialized.
11690 ++ * @lock_acquired: indicates whether or not the lock has been acquired
11691 + *
11692 + * Returns 0 on success, negative value on failure
11693 + **/
11694 +-static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
11695 ++static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired)
11696 + {
11697 + u16 flags = 0;
11698 + int ret;
11699 +@@ -14726,9 +14729,15 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
11700 +
11701 + i40e_ptp_init(pf);
11702 +
11703 ++ if (!lock_acquired)
11704 ++ rtnl_lock();
11705 ++
11706 + /* repopulate tunnel port filters */
11707 + udp_tunnel_nic_reset_ntf(pf->vsi[pf->lan_vsi]->netdev);
11708 +
11709 ++ if (!lock_acquired)
11710 ++ rtnl_unlock();
11711 ++
11712 + return ret;
11713 + }
11714 +
11715 +@@ -15509,7 +15518,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
11716 + pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
11717 + }
11718 + #endif
11719 +- err = i40e_setup_pf_switch(pf, false);
11720 ++ err = i40e_setup_pf_switch(pf, false, false);
11721 + if (err) {
11722 + dev_info(&pdev->dev, "setup_pf_switch failed: %d\n", err);
11723 + goto err_vsis;
11724 +diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
11725 +index 6c81e4f175ac6..bf06f2d785db6 100644
11726 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
11727 ++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
11728 +@@ -7589,6 +7589,8 @@ static int mvpp2_probe(struct platform_device *pdev)
11729 + return 0;
11730 +
11731 + err_port_probe:
11732 ++ fwnode_handle_put(port_fwnode);
11733 ++
11734 + i = 0;
11735 + fwnode_for_each_available_child_node(fwnode, port_fwnode) {
11736 + if (priv->port_list[i])
11737 +diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
11738 +index 3712e1786091f..406fdfe968bfb 100644
11739 +--- a/drivers/net/ethernet/marvell/pxa168_eth.c
11740 ++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
11741 +@@ -1533,6 +1533,7 @@ static int pxa168_eth_remove(struct platform_device *pdev)
11742 + struct net_device *dev = platform_get_drvdata(pdev);
11743 + struct pxa168_eth_private *pep = netdev_priv(dev);
11744 +
11745 ++ cancel_work_sync(&pep->tx_timeout_task);
11746 + if (pep->htpr) {
11747 + dma_free_coherent(pep->dev->dev.parent, HASH_ADDR_TABLE_SIZE,
11748 + pep->htpr, pep->htpr_dma);
11749 +@@ -1544,7 +1545,6 @@ static int pxa168_eth_remove(struct platform_device *pdev)
11750 + clk_disable_unprepare(pep->clk);
11751 + mdiobus_unregister(pep->smi_bus);
11752 + mdiobus_free(pep->smi_bus);
11753 +- cancel_work_sync(&pep->tx_timeout_task);
11754 + unregister_netdev(dev);
11755 + free_netdev(dev);
11756 + return 0;
11757 +diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
11758 +index 140cee7c459d0..1b32a43f70242 100644
11759 +--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
11760 ++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
11761 +@@ -2531,9 +2531,13 @@ static int pch_gbe_probe(struct pci_dev *pdev,
11762 + adapter->pdev = pdev;
11763 + adapter->hw.back = adapter;
11764 + adapter->hw.reg = pcim_iomap_table(pdev)[PCH_GBE_PCI_BAR];
11765 ++
11766 + adapter->pdata = (struct pch_gbe_privdata *)pci_id->driver_data;
11767 +- if (adapter->pdata && adapter->pdata->platform_init)
11768 +- adapter->pdata->platform_init(pdev);
11769 ++ if (adapter->pdata && adapter->pdata->platform_init) {
11770 ++ ret = adapter->pdata->platform_init(pdev);
11771 ++ if (ret)
11772 ++ goto err_free_netdev;
11773 ++ }
11774 +
11775 + adapter->ptp_pdev =
11776 + pci_get_domain_bus_and_slot(pci_domain_nr(adapter->pdev->bus),
11777 +@@ -2628,7 +2632,7 @@ err_free_netdev:
11778 + */
11779 + static int pch_gbe_minnow_platform_init(struct pci_dev *pdev)
11780 + {
11781 +- unsigned long flags = GPIOF_DIR_OUT | GPIOF_INIT_HIGH | GPIOF_EXPORT;
11782 ++ unsigned long flags = GPIOF_OUT_INIT_HIGH;
11783 + unsigned gpio = MINNOW_PHY_RESET_GPIO;
11784 + int ret;
11785 +
11786 +diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
11787 +index 638d7b03be4bf..a98182b2d19bb 100644
11788 +--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
11789 ++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
11790 +@@ -1506,12 +1506,12 @@ static void am65_cpsw_nuss_free_tx_chns(void *data)
11791 + for (i = 0; i < common->tx_ch_num; i++) {
11792 + struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
11793 +
11794 +- if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
11795 +- k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
11796 +-
11797 + if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
11798 + k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
11799 +
11800 ++ if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
11801 ++ k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
11802 ++
11803 + memset(tx_chn, 0, sizeof(*tx_chn));
11804 + }
11805 + }
11806 +@@ -1531,12 +1531,12 @@ void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
11807 +
11808 + netif_napi_del(&tx_chn->napi_tx);
11809 +
11810 +- if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
11811 +- k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
11812 +-
11813 + if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
11814 + k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
11815 +
11816 ++ if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
11817 ++ k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
11818 ++
11819 + memset(tx_chn, 0, sizeof(*tx_chn));
11820 + }
11821 + }
11822 +@@ -1624,11 +1624,11 @@ static void am65_cpsw_nuss_free_rx_chns(void *data)
11823 +
11824 + rx_chn = &common->rx_chns;
11825 +
11826 +- if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
11827 +- k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
11828 +-
11829 + if (!IS_ERR_OR_NULL(rx_chn->desc_pool))
11830 + k3_cppi_desc_pool_destroy(rx_chn->desc_pool);
11831 ++
11832 ++ if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
11833 ++ k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
11834 + }
11835 +
11836 + static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
11837 +diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
11838 +index c0bf7d78276e4..626e1ce817fcf 100644
11839 +--- a/drivers/net/ieee802154/mac802154_hwsim.c
11840 ++++ b/drivers/net/ieee802154/mac802154_hwsim.c
11841 +@@ -480,7 +480,7 @@ static int hwsim_del_edge_nl(struct sk_buff *msg, struct genl_info *info)
11842 + struct hwsim_edge *e;
11843 + u32 v0, v1;
11844 +
11845 +- if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
11846 ++ if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
11847 + !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
11848 + return -EINVAL;
11849 +
11850 +@@ -715,6 +715,8 @@ static int hwsim_subscribe_all_others(struct hwsim_phy *phy)
11851 +
11852 + return 0;
11853 +
11854 ++sub_fail:
11855 ++ hwsim_edge_unsubscribe_me(phy);
11856 + me_fail:
11857 + rcu_read_lock();
11858 + list_for_each_entry_rcu(e, &phy->edges, list) {
11859 +@@ -722,8 +724,6 @@ me_fail:
11860 + hwsim_free_edge(e);
11861 + }
11862 + rcu_read_unlock();
11863 +-sub_fail:
11864 +- hwsim_edge_unsubscribe_me(phy);
11865 + return -ENOMEM;
11866 + }
11867 +
11868 +@@ -824,12 +824,17 @@ err_pib:
11869 + static void hwsim_del(struct hwsim_phy *phy)
11870 + {
11871 + struct hwsim_pib *pib;
11872 ++ struct hwsim_edge *e;
11873 +
11874 + hwsim_edge_unsubscribe_me(phy);
11875 +
11876 + list_del(&phy->list);
11877 +
11878 + rcu_read_lock();
11879 ++ list_for_each_entry_rcu(e, &phy->edges, list) {
11880 ++ list_del_rcu(&e->list);
11881 ++ hwsim_free_edge(e);
11882 ++ }
11883 + pib = rcu_dereference(phy->pib);
11884 + rcu_read_unlock();
11885 +
11886 +diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
11887 +index 92425e1fd70c0..93dc48b9b4f24 100644
11888 +--- a/drivers/net/macsec.c
11889 ++++ b/drivers/net/macsec.c
11890 +@@ -1819,7 +1819,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
11891 + ctx.sa.rx_sa = rx_sa;
11892 + ctx.secy = secy;
11893 + memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
11894 +- MACSEC_KEYID_LEN);
11895 ++ secy->key_len);
11896 +
11897 + err = macsec_offload(ops->mdo_add_rxsa, &ctx);
11898 + if (err)
11899 +@@ -2061,7 +2061,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
11900 + ctx.sa.tx_sa = tx_sa;
11901 + ctx.secy = secy;
11902 + memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
11903 +- MACSEC_KEYID_LEN);
11904 ++ secy->key_len);
11905 +
11906 + err = macsec_offload(ops->mdo_add_txsa, &ctx);
11907 + if (err)
11908 +diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
11909 +index 10be266e48e8b..b7b2521c73fb6 100644
11910 +--- a/drivers/net/phy/mscc/mscc_macsec.c
11911 ++++ b/drivers/net/phy/mscc/mscc_macsec.c
11912 +@@ -501,7 +501,7 @@ static u32 vsc8584_macsec_flow_context_id(struct macsec_flow *flow)
11913 + }
11914 +
11915 + /* Derive the AES key to get a key for the hash autentication */
11916 +-static int vsc8584_macsec_derive_key(const u8 key[MACSEC_KEYID_LEN],
11917 ++static int vsc8584_macsec_derive_key(const u8 key[MACSEC_MAX_KEY_LEN],
11918 + u16 key_len, u8 hkey[16])
11919 + {
11920 + const u8 input[AES_BLOCK_SIZE] = {0};
11921 +diff --git a/drivers/net/phy/mscc/mscc_macsec.h b/drivers/net/phy/mscc/mscc_macsec.h
11922 +index 9c6d25e36de2a..453304bae7784 100644
11923 +--- a/drivers/net/phy/mscc/mscc_macsec.h
11924 ++++ b/drivers/net/phy/mscc/mscc_macsec.h
11925 +@@ -81,7 +81,7 @@ struct macsec_flow {
11926 + /* Highest takes precedence [0..15] */
11927 + u8 priority;
11928 +
11929 +- u8 key[MACSEC_KEYID_LEN];
11930 ++ u8 key[MACSEC_MAX_KEY_LEN];
11931 +
11932 + union {
11933 + struct macsec_rx_sa *rx_sa;
11934 +diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
11935 +index 28a6c4cfe9b8c..414afcb0a23f8 100644
11936 +--- a/drivers/net/vrf.c
11937 ++++ b/drivers/net/vrf.c
11938 +@@ -1366,22 +1366,22 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
11939 + int orig_iif = skb->skb_iif;
11940 + bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
11941 + bool is_ndisc = ipv6_ndisc_frame(skb);
11942 +- bool is_ll_src;
11943 +
11944 + /* loopback, multicast & non-ND link-local traffic; do not push through
11945 + * packet taps again. Reset pkt_type for upper layers to process skb.
11946 +- * for packets with lladdr src, however, skip so that the dst can be
11947 +- * determine at input using original ifindex in the case that daddr
11948 +- * needs strict
11949 ++ * For strict packets with a source LLA, determine the dst using the
11950 ++ * original ifindex.
11951 + */
11952 +- is_ll_src = ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL;
11953 +- if (skb->pkt_type == PACKET_LOOPBACK ||
11954 +- (need_strict && !is_ndisc && !is_ll_src)) {
11955 ++ if (skb->pkt_type == PACKET_LOOPBACK || (need_strict && !is_ndisc)) {
11956 + skb->dev = vrf_dev;
11957 + skb->skb_iif = vrf_dev->ifindex;
11958 + IP6CB(skb)->flags |= IP6SKB_L3SLAVE;
11959 ++
11960 + if (skb->pkt_type == PACKET_LOOPBACK)
11961 + skb->pkt_type = PACKET_HOST;
11962 ++ else if (ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)
11963 ++ vrf_ip6_input_dst(skb, vrf_dev, orig_iif);
11964 ++
11965 + goto out;
11966 + }
11967 +
11968 +diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
11969 +index 53dbc67e8a34f..a3ec03ce3343a 100644
11970 +--- a/drivers/net/vxlan.c
11971 ++++ b/drivers/net/vxlan.c
11972 +@@ -2164,6 +2164,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
11973 + struct neighbour *n;
11974 + struct nd_msg *msg;
11975 +
11976 ++ rcu_read_lock();
11977 + in6_dev = __in6_dev_get(dev);
11978 + if (!in6_dev)
11979 + goto out;
11980 +@@ -2215,6 +2216,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
11981 + }
11982 +
11983 + out:
11984 ++ rcu_read_unlock();
11985 + consume_skb(skb);
11986 + return NETDEV_TX_OK;
11987 + }
11988 +diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
11989 +index bb6c5ee43ac0c..def52df829d48 100644
11990 +--- a/drivers/net/wireless/ath/ath10k/mac.c
11991 ++++ b/drivers/net/wireless/ath/ath10k/mac.c
11992 +@@ -5590,6 +5590,7 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
11993 +
11994 + if (arvif->nohwcrypt &&
11995 + !test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags)) {
11996 ++ ret = -EINVAL;
11997 + ath10k_warn(ar, "cryptmode module param needed for sw crypto\n");
11998 + goto err;
11999 + }
12000 +diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
12001 +index e7fde635e0eef..71878ab35b93c 100644
12002 +--- a/drivers/net/wireless/ath/ath10k/pci.c
12003 ++++ b/drivers/net/wireless/ath/ath10k/pci.c
12004 +@@ -3685,8 +3685,10 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
12005 + ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
12006 + if (bus_params.chip_id != 0xffffffff) {
12007 + if (!ath10k_pci_chip_is_supported(pdev->device,
12008 +- bus_params.chip_id))
12009 ++ bus_params.chip_id)) {
12010 ++ ret = -ENODEV;
12011 + goto err_unsupported;
12012 ++ }
12013 + }
12014 + }
12015 +
12016 +@@ -3697,11 +3699,15 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
12017 + }
12018 +
12019 + bus_params.chip_id = ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
12020 +- if (bus_params.chip_id == 0xffffffff)
12021 ++ if (bus_params.chip_id == 0xffffffff) {
12022 ++ ret = -ENODEV;
12023 + goto err_unsupported;
12024 ++ }
12025 +
12026 +- if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id))
12027 +- goto err_free_irq;
12028 ++ if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id)) {
12029 ++ ret = -ENODEV;
12030 ++ goto err_unsupported;
12031 ++ }
12032 +
12033 + ret = ath10k_core_register(ar, &bus_params);
12034 + if (ret) {
12035 +diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
12036 +index 350b7913622cb..b55b6289eeb1a 100644
12037 +--- a/drivers/net/wireless/ath/ath11k/core.c
12038 ++++ b/drivers/net/wireless/ath/ath11k/core.c
12039 +@@ -445,7 +445,8 @@ static int ath11k_core_fetch_board_data_api_n(struct ath11k_base *ab,
12040 + if (len < ALIGN(ie_len, 4)) {
12041 + ath11k_err(ab, "invalid length for board ie_id %d ie_len %zu len %zu\n",
12042 + ie_id, ie_len, len);
12043 +- return -EINVAL;
12044 ++ ret = -EINVAL;
12045 ++ goto err;
12046 + }
12047 +
12048 + switch (ie_id) {
12049 +diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
12050 +index 7ad0383affcba..a0e7bc6dd8c78 100644
12051 +--- a/drivers/net/wireless/ath/ath11k/mac.c
12052 ++++ b/drivers/net/wireless/ath/ath11k/mac.c
12053 +@@ -5311,11 +5311,6 @@ ath11k_mac_update_vif_chan(struct ath11k *ar,
12054 + if (WARN_ON(!arvif->is_up))
12055 + continue;
12056 +
12057 +- ret = ath11k_mac_setup_bcn_tmpl(arvif);
12058 +- if (ret)
12059 +- ath11k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
12060 +- ret);
12061 +-
12062 + ret = ath11k_mac_vdev_restart(arvif, &vifs[i].new_ctx->def);
12063 + if (ret) {
12064 + ath11k_warn(ab, "failed to restart vdev %d: %d\n",
12065 +@@ -5323,6 +5318,11 @@ ath11k_mac_update_vif_chan(struct ath11k *ar,
12066 + continue;
12067 + }
12068 +
12069 ++ ret = ath11k_mac_setup_bcn_tmpl(arvif);
12070 ++ if (ret)
12071 ++ ath11k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
12072 ++ ret);
12073 ++
12074 + ret = ath11k_wmi_vdev_up(arvif->ar, arvif->vdev_id, arvif->aid,
12075 + arvif->bssid);
12076 + if (ret) {
12077 +diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
12078 +index 45f6402478b50..97c3a53f9cef2 100644
12079 +--- a/drivers/net/wireless/ath/ath9k/main.c
12080 ++++ b/drivers/net/wireless/ath/ath9k/main.c
12081 +@@ -307,6 +307,11 @@ static int ath_reset_internal(struct ath_softc *sc, struct ath9k_channel *hchan)
12082 + hchan = ah->curchan;
12083 + }
12084 +
12085 ++ if (!hchan) {
12086 ++ fastcc = false;
12087 ++ hchan = ath9k_cmn_get_channel(sc->hw, ah, &sc->cur_chan->chandef);
12088 ++ }
12089 ++
12090 + if (!ath_prepare_reset(sc))
12091 + fastcc = false;
12092 +
12093 +diff --git a/drivers/net/wireless/ath/carl9170/Kconfig b/drivers/net/wireless/ath/carl9170/Kconfig
12094 +index b2d760873992f..ba9bea79381c5 100644
12095 +--- a/drivers/net/wireless/ath/carl9170/Kconfig
12096 ++++ b/drivers/net/wireless/ath/carl9170/Kconfig
12097 +@@ -16,13 +16,11 @@ config CARL9170
12098 +
12099 + config CARL9170_LEDS
12100 + bool "SoftLED Support"
12101 +- depends on CARL9170
12102 +- select MAC80211_LEDS
12103 +- select LEDS_CLASS
12104 +- select NEW_LEDS
12105 + default y
12106 ++ depends on CARL9170
12107 ++ depends on MAC80211_LEDS
12108 + help
12109 +- This option is necessary, if you want your device' LEDs to blink
12110 ++ This option is necessary, if you want your device's LEDs to blink.
12111 +
12112 + Say Y, unless you need the LEDs for firmware debugging.
12113 +
12114 +diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
12115 +index afb4877eaad8f..dabed4e3ca457 100644
12116 +--- a/drivers/net/wireless/ath/wcn36xx/main.c
12117 ++++ b/drivers/net/wireless/ath/wcn36xx/main.c
12118 +@@ -293,23 +293,16 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
12119 + goto out_free_dxe_pool;
12120 + }
12121 +
12122 +- wcn->hal_buf = kmalloc(WCN36XX_HAL_BUF_SIZE, GFP_KERNEL);
12123 +- if (!wcn->hal_buf) {
12124 +- wcn36xx_err("Failed to allocate smd buf\n");
12125 +- ret = -ENOMEM;
12126 +- goto out_free_dxe_ctl;
12127 +- }
12128 +-
12129 + ret = wcn36xx_smd_load_nv(wcn);
12130 + if (ret) {
12131 + wcn36xx_err("Failed to push NV to chip\n");
12132 +- goto out_free_smd_buf;
12133 ++ goto out_free_dxe_ctl;
12134 + }
12135 +
12136 + ret = wcn36xx_smd_start(wcn);
12137 + if (ret) {
12138 + wcn36xx_err("Failed to start chip\n");
12139 +- goto out_free_smd_buf;
12140 ++ goto out_free_dxe_ctl;
12141 + }
12142 +
12143 + if (!wcn36xx_is_fw_version(wcn, 1, 2, 2, 24)) {
12144 +@@ -336,8 +329,6 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
12145 +
12146 + out_smd_stop:
12147 + wcn36xx_smd_stop(wcn);
12148 +-out_free_smd_buf:
12149 +- kfree(wcn->hal_buf);
12150 + out_free_dxe_ctl:
12151 + wcn36xx_dxe_free_ctl_blks(wcn);
12152 + out_free_dxe_pool:
12153 +@@ -372,8 +363,6 @@ static void wcn36xx_stop(struct ieee80211_hw *hw)
12154 +
12155 + wcn36xx_dxe_free_mem_pools(wcn);
12156 + wcn36xx_dxe_free_ctl_blks(wcn);
12157 +-
12158 +- kfree(wcn->hal_buf);
12159 + }
12160 +
12161 + static void wcn36xx_change_ps(struct wcn36xx *wcn, bool enable)
12162 +@@ -1401,6 +1390,12 @@ static int wcn36xx_probe(struct platform_device *pdev)
12163 + mutex_init(&wcn->hal_mutex);
12164 + mutex_init(&wcn->scan_lock);
12165 +
12166 ++ wcn->hal_buf = devm_kmalloc(wcn->dev, WCN36XX_HAL_BUF_SIZE, GFP_KERNEL);
12167 ++ if (!wcn->hal_buf) {
12168 ++ ret = -ENOMEM;
12169 ++ goto out_wq;
12170 ++ }
12171 ++
12172 + ret = dma_set_mask_and_coherent(wcn->dev, DMA_BIT_MASK(32));
12173 + if (ret < 0) {
12174 + wcn36xx_err("failed to set DMA mask: %d\n", ret);
12175 +diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
12176 +index 6746fd206d2a9..1ff2679963f06 100644
12177 +--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
12178 ++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
12179 +@@ -2842,9 +2842,7 @@ void wil_p2p_wdev_free(struct wil6210_priv *wil)
12180 + wil->radio_wdev = wil->main_ndev->ieee80211_ptr;
12181 + mutex_unlock(&wil->vif_mutex);
12182 + if (p2p_wdev) {
12183 +- wiphy_lock(wil->wiphy);
12184 + cfg80211_unregister_wdev(p2p_wdev);
12185 +- wiphy_unlock(wil->wiphy);
12186 + kfree(p2p_wdev);
12187 + }
12188 + }
12189 +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
12190 +index f4405d7861b69..d8822a01d277e 100644
12191 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
12192 ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
12193 +@@ -2767,8 +2767,9 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev,
12194 + struct brcmf_sta_info_le sta_info_le;
12195 + u32 sta_flags;
12196 + u32 is_tdls_peer;
12197 +- s32 total_rssi;
12198 +- s32 count_rssi;
12199 ++ s32 total_rssi_avg = 0;
12200 ++ s32 total_rssi = 0;
12201 ++ s32 count_rssi = 0;
12202 + int rssi;
12203 + u32 i;
12204 +
12205 +@@ -2834,25 +2835,27 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev,
12206 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_BYTES);
12207 + sinfo->rx_bytes = le64_to_cpu(sta_info_le.rx_tot_bytes);
12208 + }
12209 +- total_rssi = 0;
12210 +- count_rssi = 0;
12211 + for (i = 0; i < BRCMF_ANT_MAX; i++) {
12212 +- if (sta_info_le.rssi[i]) {
12213 +- sinfo->chain_signal_avg[count_rssi] =
12214 +- sta_info_le.rssi[i];
12215 +- sinfo->chain_signal[count_rssi] =
12216 +- sta_info_le.rssi[i];
12217 +- total_rssi += sta_info_le.rssi[i];
12218 +- count_rssi++;
12219 +- }
12220 ++ if (sta_info_le.rssi[i] == 0 ||
12221 ++ sta_info_le.rx_lastpkt_rssi[i] == 0)
12222 ++ continue;
12223 ++ sinfo->chains |= BIT(count_rssi);
12224 ++ sinfo->chain_signal[count_rssi] =
12225 ++ sta_info_le.rx_lastpkt_rssi[i];
12226 ++ sinfo->chain_signal_avg[count_rssi] =
12227 ++ sta_info_le.rssi[i];
12228 ++ total_rssi += sta_info_le.rx_lastpkt_rssi[i];
12229 ++ total_rssi_avg += sta_info_le.rssi[i];
12230 ++ count_rssi++;
12231 + }
12232 + if (count_rssi) {
12233 +- sinfo->filled |= BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL);
12234 +- sinfo->chains = count_rssi;
12235 +-
12236 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL);
12237 +- total_rssi /= count_rssi;
12238 +- sinfo->signal = total_rssi;
12239 ++ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
12240 ++ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL);
12241 ++ sinfo->filled |=
12242 ++ BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL_AVG);
12243 ++ sinfo->signal = total_rssi / count_rssi;
12244 ++ sinfo->signal_avg = total_rssi_avg / count_rssi;
12245 + } else if (test_bit(BRCMF_VIF_STATUS_CONNECTED,
12246 + &ifp->vif->sme_state)) {
12247 + memset(&scb_val, 0, sizeof(scb_val));
12248 +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
12249 +index 16ed325795a8b..faf5f8e5eee33 100644
12250 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
12251 ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
12252 +@@ -626,8 +626,8 @@ BRCMF_FW_DEF(4373, "brcmfmac4373-sdio");
12253 + BRCMF_FW_DEF(43012, "brcmfmac43012-sdio");
12254 +
12255 + /* firmware config files */
12256 +-MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcm/brcmfmac*-sdio.*.txt");
12257 +-MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcm/brcmfmac*-pcie.*.txt");
12258 ++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-sdio.*.txt");
12259 ++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txt");
12260 +
12261 + static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = {
12262 + BRCMF_FW_ENTRY(BRCM_CC_43143_CHIP_ID, 0xFFFFFFFF, 43143),
12263 +@@ -4162,7 +4162,6 @@ static int brcmf_sdio_bus_reset(struct device *dev)
12264 + if (ret) {
12265 + brcmf_err("Failed to probe after sdio device reset: ret %d\n",
12266 + ret);
12267 +- brcmf_sdiod_remove(sdiodev);
12268 + }
12269 +
12270 + return ret;
12271 +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
12272 +index 39f3af2d0439b..eadac0f5590fc 100644
12273 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
12274 ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
12275 +@@ -1220,6 +1220,7 @@ static int brcms_bcma_probe(struct bcma_device *pdev)
12276 + {
12277 + struct brcms_info *wl;
12278 + struct ieee80211_hw *hw;
12279 ++ int ret;
12280 +
12281 + dev_info(&pdev->dev, "mfg %x core %x rev %d class %d irq %d\n",
12282 + pdev->id.manuf, pdev->id.id, pdev->id.rev, pdev->id.class,
12283 +@@ -1244,11 +1245,16 @@ static int brcms_bcma_probe(struct bcma_device *pdev)
12284 + wl = brcms_attach(pdev);
12285 + if (!wl) {
12286 + pr_err("%s: brcms_attach failed!\n", __func__);
12287 +- return -ENODEV;
12288 ++ ret = -ENODEV;
12289 ++ goto err_free_ieee80211;
12290 + }
12291 + brcms_led_register(wl);
12292 +
12293 + return 0;
12294 ++
12295 ++err_free_ieee80211:
12296 ++ ieee80211_free_hw(hw);
12297 ++ return ret;
12298 + }
12299 +
12300 + static int brcms_suspend(struct bcma_device *pdev)
12301 +diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
12302 +index e4f91bce222d8..61d3d4e0b7d94 100644
12303 +--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
12304 ++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
12305 +@@ -1,7 +1,7 @@
12306 + /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
12307 + /******************************************************************************
12308 + *
12309 +- * Copyright(c) 2020 Intel Corporation
12310 ++ * Copyright(c) 2020-2021 Intel Corporation
12311 + *
12312 + *****************************************************************************/
12313 +
12314 +@@ -10,7 +10,7 @@
12315 +
12316 + #include "fw/notif-wait.h"
12317 +
12318 +-#define MVM_UCODE_PNVM_TIMEOUT (HZ / 10)
12319 ++#define MVM_UCODE_PNVM_TIMEOUT (HZ / 4)
12320 +
12321 + int iwl_pnvm_load(struct iwl_trans *trans,
12322 + struct iwl_notif_wait_data *notif_wait);
12323 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
12324 +index 1ad621d13ad3a..0a13c2bda2eed 100644
12325 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
12326 ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
12327 +@@ -1032,6 +1032,9 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
12328 + if (WARN_ON_ONCE(mvmsta->sta_id == IWL_MVM_INVALID_STA))
12329 + return -1;
12330 +
12331 ++ if (unlikely(ieee80211_is_any_nullfunc(fc)) && sta->he_cap.has_he)
12332 ++ return -1;
12333 ++
12334 + if (unlikely(ieee80211_is_probe_resp(fc)))
12335 + iwl_mvm_probe_resp_set_noa(mvm, skb);
12336 +
12337 +diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
12338 +index 94228b316df1b..46517515ba728 100644
12339 +--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
12340 ++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
12341 +@@ -1231,7 +1231,7 @@ static int mwifiex_pcie_delete_cmdrsp_buf(struct mwifiex_adapter *adapter)
12342 + static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
12343 + {
12344 + struct pcie_service_card *card = adapter->card;
12345 +- u32 tmp;
12346 ++ u32 *cookie;
12347 +
12348 + card->sleep_cookie_vbase = dma_alloc_coherent(&card->dev->dev,
12349 + sizeof(u32),
12350 +@@ -1242,13 +1242,11 @@ static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
12351 + "dma_alloc_coherent failed!\n");
12352 + return -ENOMEM;
12353 + }
12354 ++ cookie = (u32 *)card->sleep_cookie_vbase;
12355 + /* Init val of Sleep Cookie */
12356 +- tmp = FW_AWAKE_COOKIE;
12357 +- put_unaligned(tmp, card->sleep_cookie_vbase);
12358 ++ *cookie = FW_AWAKE_COOKIE;
12359 +
12360 +- mwifiex_dbg(adapter, INFO,
12361 +- "alloc_scook: sleep cookie=0x%x\n",
12362 +- get_unaligned(card->sleep_cookie_vbase));
12363 ++ mwifiex_dbg(adapter, INFO, "alloc_scook: sleep cookie=0x%x\n", *cookie);
12364 +
12365 + return 0;
12366 + }
12367 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
12368 +index 8dccb589b756d..d06e61cadc411 100644
12369 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
12370 ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
12371 +@@ -1890,6 +1890,10 @@ void mt7615_pm_power_save_work(struct work_struct *work)
12372 + pm.ps_work.work);
12373 +
12374 + delta = dev->pm.idle_timeout;
12375 ++ if (test_bit(MT76_HW_SCANNING, &dev->mphy.state) ||
12376 ++ test_bit(MT76_HW_SCHED_SCANNING, &dev->mphy.state))
12377 ++ goto out;
12378 ++
12379 + if (time_is_after_jiffies(dev->pm.last_activity + delta)) {
12380 + delta = dev->pm.last_activity + delta - jiffies;
12381 + goto out;
12382 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
12383 +index 1b4cb145f38e1..f2fbf11e0321d 100644
12384 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
12385 ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
12386 +@@ -133,20 +133,21 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
12387 + struct mt76_tx_info *tx_info)
12388 + {
12389 + struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
12390 +- struct mt7615_sta *msta = container_of(wcid, struct mt7615_sta, wcid);
12391 + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
12392 + struct ieee80211_key_conf *key = info->control.hw_key;
12393 + int pid, id;
12394 + u8 *txwi = (u8 *)txwi_ptr;
12395 + struct mt76_txwi_cache *t;
12396 ++ struct mt7615_sta *msta;
12397 + void *txp;
12398 +
12399 ++ msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
12400 + if (!wcid)
12401 + wcid = &dev->mt76.global_wcid;
12402 +
12403 + pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
12404 +
12405 +- if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) {
12406 ++ if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && msta) {
12407 + struct mt7615_phy *phy = &dev->phy;
12408 +
12409 + if ((info->hw_queue & MT_TX_HW_QUEUE_EXT_PHY) && mdev->phy2)
12410 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
12411 +index f8d3673c2cae8..7010101f6b147 100644
12412 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
12413 ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
12414 +@@ -191,14 +191,15 @@ int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
12415 + struct ieee80211_sta *sta,
12416 + struct mt76_tx_info *tx_info)
12417 + {
12418 +- struct mt7615_sta *msta = container_of(wcid, struct mt7615_sta, wcid);
12419 + struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
12420 + struct sk_buff *skb = tx_info->skb;
12421 + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
12422 ++ struct mt7615_sta *msta;
12423 + int pad;
12424 +
12425 ++ msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
12426 + if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) &&
12427 +- !msta->rate_probe) {
12428 ++ msta && !msta->rate_probe) {
12429 + /* request to configure sampling rate */
12430 + spin_lock_bh(&dev->mt76.lock);
12431 + mt7615_mac_set_rates(&dev->phy, msta, &info->control.rates[0],
12432 +diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
12433 +index c5f5037f57570..cff60b699e319 100644
12434 +--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
12435 ++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
12436 +@@ -16,10 +16,6 @@ int mt76_connac_pm_wake(struct mt76_phy *phy, struct mt76_connac_pm *pm)
12437 + if (!test_bit(MT76_STATE_PM, &phy->state))
12438 + return 0;
12439 +
12440 +- if (test_bit(MT76_HW_SCANNING, &phy->state) ||
12441 +- test_bit(MT76_HW_SCHED_SCANNING, &phy->state))
12442 +- return 0;
12443 +-
12444 + if (queue_work(dev->wq, &pm->wake_work))
12445 + reinit_completion(&pm->wake_cmpl);
12446 +
12447 +@@ -45,10 +41,6 @@ void mt76_connac_power_save_sched(struct mt76_phy *phy,
12448 +
12449 + pm->last_activity = jiffies;
12450 +
12451 +- if (test_bit(MT76_HW_SCANNING, &phy->state) ||
12452 +- test_bit(MT76_HW_SCHED_SCANNING, &phy->state))
12453 +- return;
12454 +-
12455 + if (!test_bit(MT76_STATE_PM, &phy->state))
12456 + queue_delayed_work(dev->wq, &pm->ps_work, pm->idle_timeout);
12457 + }
12458 +diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
12459 +index cefd33b74a875..280aee1aa299f 100644
12460 +--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
12461 ++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
12462 +@@ -1732,7 +1732,7 @@ mt76_connac_mcu_set_wow_pattern(struct mt76_dev *dev,
12463 + ptlv->index = index;
12464 +
12465 + memcpy(ptlv->pattern, pattern->pattern, pattern->pattern_len);
12466 +- memcpy(ptlv->mask, pattern->mask, pattern->pattern_len / 8);
12467 ++ memcpy(ptlv->mask, pattern->mask, DIV_ROUND_UP(pattern->pattern_len, 8));
12468 +
12469 + return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD_SUSPEND, true);
12470 + }
12471 +@@ -1767,14 +1767,17 @@ mt76_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
12472 + };
12473 +
12474 + if (wowlan->magic_pkt)
12475 +- req.wow_ctrl_tlv.trigger |= BIT(0);
12476 ++ req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_MAGIC;
12477 + if (wowlan->disconnect)
12478 +- req.wow_ctrl_tlv.trigger |= BIT(2);
12479 ++ req.wow_ctrl_tlv.trigger |= (UNI_WOW_DETECT_TYPE_DISCONNECT |
12480 ++ UNI_WOW_DETECT_TYPE_BCN_LOST);
12481 + if (wowlan->nd_config) {
12482 + mt76_connac_mcu_sched_scan_req(phy, vif, wowlan->nd_config);
12483 +- req.wow_ctrl_tlv.trigger |= BIT(5);
12484 ++ req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_SCH_SCAN_HIT;
12485 + mt76_connac_mcu_sched_scan_enable(phy, vif, suspend);
12486 + }
12487 ++ if (wowlan->n_patterns)
12488 ++ req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_BITMAP;
12489 +
12490 + if (mt76_is_mmio(dev))
12491 + req.wow_ctrl_tlv.wakeup_hif = WOW_PCIE;
12492 +diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
12493 +index c1e1df5f7cd75..eea121101b5ee 100644
12494 +--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
12495 ++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
12496 +@@ -589,6 +589,14 @@ enum {
12497 + UNI_OFFLOAD_OFFLOAD_BMC_RPY_DETECT,
12498 + };
12499 +
12500 ++#define UNI_WOW_DETECT_TYPE_MAGIC BIT(0)
12501 ++#define UNI_WOW_DETECT_TYPE_ANY BIT(1)
12502 ++#define UNI_WOW_DETECT_TYPE_DISCONNECT BIT(2)
12503 ++#define UNI_WOW_DETECT_TYPE_GTK_REKEY_FAIL BIT(3)
12504 ++#define UNI_WOW_DETECT_TYPE_BCN_LOST BIT(4)
12505 ++#define UNI_WOW_DETECT_TYPE_SCH_SCAN_HIT BIT(5)
12506 ++#define UNI_WOW_DETECT_TYPE_BITMAP BIT(6)
12507 ++
12508 + enum {
12509 + UNI_SUSPEND_MODE_SETTING,
12510 + UNI_SUSPEND_WOW_CTRL,
12511 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c b/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
12512 +index bd798df748ba5..8eb90722c5325 100644
12513 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
12514 ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
12515 +@@ -476,10 +476,17 @@ mt7915_tm_set_tx_frames(struct mt7915_phy *phy, bool en)
12516 + static void
12517 + mt7915_tm_set_rx_frames(struct mt7915_phy *phy, bool en)
12518 + {
12519 +- if (en)
12520 ++ mt7915_tm_set_trx(phy, TM_MAC_RX_RXV, false);
12521 ++
12522 ++ if (en) {
12523 ++ struct mt7915_dev *dev = phy->dev;
12524 ++
12525 + mt7915_tm_update_channel(phy);
12526 +
12527 +- mt7915_tm_set_trx(phy, TM_MAC_RX_RXV, en);
12528 ++ /* read-clear */
12529 ++ mt76_rr(dev, MT_MIB_SDR3(phy != &dev->phy));
12530 ++ mt7915_tm_set_trx(phy, TM_MAC_RX_RXV, en);
12531 ++ }
12532 + }
12533 +
12534 + static int
12535 +@@ -702,7 +709,11 @@ static int
12536 + mt7915_tm_dump_stats(struct mt76_phy *mphy, struct sk_buff *msg)
12537 + {
12538 + struct mt7915_phy *phy = mphy->priv;
12539 ++ struct mt7915_dev *dev = phy->dev;
12540 ++ bool ext_phy = phy != &dev->phy;
12541 ++ enum mt76_rxq_id q;
12542 + void *rx, *rssi;
12543 ++ u16 fcs_err;
12544 + int i;
12545 +
12546 + rx = nla_nest_start(msg, MT76_TM_STATS_ATTR_LAST_RX);
12547 +@@ -747,6 +758,12 @@ mt7915_tm_dump_stats(struct mt76_phy *mphy, struct sk_buff *msg)
12548 +
12549 + nla_nest_end(msg, rx);
12550 +
12551 ++ fcs_err = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
12552 ++ MT_MIB_SDR3_FCS_ERR_MASK);
12553 ++ q = ext_phy ? MT_RXQ_EXT : MT_RXQ_MAIN;
12554 ++ mphy->test.rx_stats.packets[q] += fcs_err;
12555 ++ mphy->test.rx_stats.fcs_error[q] += fcs_err;
12556 ++
12557 + return 0;
12558 + }
12559 +
12560 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
12561 +index a0797cec136e9..f9bd907b90fa0 100644
12562 +--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
12563 ++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
12564 +@@ -110,30 +110,12 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
12565 + static void
12566 + mt7921_mac_init_band(struct mt7921_dev *dev, u8 band)
12567 + {
12568 +- u32 mask, set;
12569 +-
12570 + mt76_rmw_field(dev, MT_TMAC_CTCR0(band),
12571 + MT_TMAC_CTCR0_INS_DDLMT_REFTIME, 0x3f);
12572 + mt76_set(dev, MT_TMAC_CTCR0(band),
12573 + MT_TMAC_CTCR0_INS_DDLMT_VHT_SMPDU_EN |
12574 + MT_TMAC_CTCR0_INS_DDLMT_EN);
12575 +
12576 +- mask = MT_MDP_RCFR0_MCU_RX_MGMT |
12577 +- MT_MDP_RCFR0_MCU_RX_CTL_NON_BAR |
12578 +- MT_MDP_RCFR0_MCU_RX_CTL_BAR;
12579 +- set = FIELD_PREP(MT_MDP_RCFR0_MCU_RX_MGMT, MT_MDP_TO_HIF) |
12580 +- FIELD_PREP(MT_MDP_RCFR0_MCU_RX_CTL_NON_BAR, MT_MDP_TO_HIF) |
12581 +- FIELD_PREP(MT_MDP_RCFR0_MCU_RX_CTL_BAR, MT_MDP_TO_HIF);
12582 +- mt76_rmw(dev, MT_MDP_BNRCFR0(band), mask, set);
12583 +-
12584 +- mask = MT_MDP_RCFR1_MCU_RX_BYPASS |
12585 +- MT_MDP_RCFR1_RX_DROPPED_UCAST |
12586 +- MT_MDP_RCFR1_RX_DROPPED_MCAST;
12587 +- set = FIELD_PREP(MT_MDP_RCFR1_MCU_RX_BYPASS, MT_MDP_TO_HIF) |
12588 +- FIELD_PREP(MT_MDP_RCFR1_RX_DROPPED_UCAST, MT_MDP_TO_HIF) |
12589 +- FIELD_PREP(MT_MDP_RCFR1_RX_DROPPED_MCAST, MT_MDP_TO_HIF);
12590 +- mt76_rmw(dev, MT_MDP_BNRCFR1(band), mask, set);
12591 +-
12592 + mt76_set(dev, MT_WF_RMAC_MIB_TIME0(band), MT_WF_RMAC_MIB_RXTIME_EN);
12593 + mt76_set(dev, MT_WF_RMAC_MIB_AIRTIME0(band), MT_WF_RMAC_MIB_RXTIME_EN);
12594 +
12595 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
12596 +index ce4eae7f1e448..c4b144391a8e2 100644
12597 +--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
12598 ++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
12599 +@@ -420,16 +420,19 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
12600 + status->chain_signal[1] = to_rssi(MT_PRXV_RCPI1, v1);
12601 + status->chain_signal[2] = to_rssi(MT_PRXV_RCPI2, v1);
12602 + status->chain_signal[3] = to_rssi(MT_PRXV_RCPI3, v1);
12603 +- status->signal = status->chain_signal[0];
12604 +-
12605 +- for (i = 1; i < hweight8(mphy->antenna_mask); i++) {
12606 +- if (!(status->chains & BIT(i)))
12607 ++ status->signal = -128;
12608 ++ for (i = 0; i < hweight8(mphy->antenna_mask); i++) {
12609 ++ if (!(status->chains & BIT(i)) ||
12610 ++ status->chain_signal[i] >= 0)
12611 + continue;
12612 +
12613 + status->signal = max(status->signal,
12614 + status->chain_signal[i]);
12615 + }
12616 +
12617 ++ if (status->signal == -128)
12618 ++ status->flag |= RX_FLAG_NO_SIGNAL_VAL;
12619 ++
12620 + stbc = FIELD_GET(MT_PRXV_STBC, v0);
12621 + gi = FIELD_GET(MT_PRXV_SGI, v0);
12622 + cck = false;
12623 +@@ -1521,6 +1524,10 @@ void mt7921_pm_power_save_work(struct work_struct *work)
12624 + pm.ps_work.work);
12625 +
12626 + delta = dev->pm.idle_timeout;
12627 ++ if (test_bit(MT76_HW_SCANNING, &dev->mphy.state) ||
12628 ++ test_bit(MT76_HW_SCHED_SCANNING, &dev->mphy.state))
12629 ++ goto out;
12630 ++
12631 + if (time_is_after_jiffies(dev->pm.last_activity + delta)) {
12632 + delta = dev->pm.last_activity + delta - jiffies;
12633 + goto out;
12634 +diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
12635 +index c6e8857067a3a..1894ca6324d53 100644
12636 +--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
12637 ++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
12638 +@@ -223,57 +223,6 @@ static void mt7921_stop(struct ieee80211_hw *hw)
12639 + mt7921_mutex_release(dev);
12640 + }
12641 +
12642 +-static inline int get_free_idx(u32 mask, u8 start, u8 end)
12643 +-{
12644 +- return ffs(~mask & GENMASK(end, start));
12645 +-}
12646 +-
12647 +-static int get_omac_idx(enum nl80211_iftype type, u64 mask)
12648 +-{
12649 +- int i;
12650 +-
12651 +- switch (type) {
12652 +- case NL80211_IFTYPE_STATION:
12653 +- /* prefer hw bssid slot 1-3 */
12654 +- i = get_free_idx(mask, HW_BSSID_1, HW_BSSID_3);
12655 +- if (i)
12656 +- return i - 1;
12657 +-
12658 +- if (type != NL80211_IFTYPE_STATION)
12659 +- break;
12660 +-
12661 +- /* next, try to find a free repeater entry for the sta */
12662 +- i = get_free_idx(mask >> REPEATER_BSSID_START, 0,
12663 +- REPEATER_BSSID_MAX - REPEATER_BSSID_START);
12664 +- if (i)
12665 +- return i + 32 - 1;
12666 +-
12667 +- i = get_free_idx(mask, EXT_BSSID_1, EXT_BSSID_MAX);
12668 +- if (i)
12669 +- return i - 1;
12670 +-
12671 +- if (~mask & BIT(HW_BSSID_0))
12672 +- return HW_BSSID_0;
12673 +-
12674 +- break;
12675 +- case NL80211_IFTYPE_MONITOR:
12676 +- /* ap uses hw bssid 0 and ext bssid */
12677 +- if (~mask & BIT(HW_BSSID_0))
12678 +- return HW_BSSID_0;
12679 +-
12680 +- i = get_free_idx(mask, EXT_BSSID_1, EXT_BSSID_MAX);
12681 +- if (i)
12682 +- return i - 1;
12683 +-
12684 +- break;
12685 +- default:
12686 +- WARN_ON(1);
12687 +- break;
12688 +- }
12689 +-
12690 +- return -1;
12691 +-}
12692 +-
12693 + static int mt7921_add_interface(struct ieee80211_hw *hw,
12694 + struct ieee80211_vif *vif)
12695 + {
12696 +@@ -295,12 +244,7 @@ static int mt7921_add_interface(struct ieee80211_hw *hw,
12697 + goto out;
12698 + }
12699 +
12700 +- idx = get_omac_idx(vif->type, phy->omac_mask);
12701 +- if (idx < 0) {
12702 +- ret = -ENOSPC;
12703 +- goto out;
12704 +- }
12705 +- mvif->mt76.omac_idx = idx;
12706 ++ mvif->mt76.omac_idx = mvif->mt76.idx;
12707 + mvif->phy = phy;
12708 + mvif->mt76.band_idx = 0;
12709 + mvif->mt76.wmm_idx = mvif->mt76.idx % MT7921_MAX_WMM_SETS;
12710 +diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
12711 +index 451ed60c62961..802e3d733959f 100644
12712 +--- a/drivers/net/wireless/mediatek/mt76/tx.c
12713 ++++ b/drivers/net/wireless/mediatek/mt76/tx.c
12714 +@@ -285,7 +285,7 @@ mt76_tx(struct mt76_phy *phy, struct ieee80211_sta *sta,
12715 + skb_set_queue_mapping(skb, qid);
12716 + }
12717 +
12718 +- if (!(wcid->tx_info & MT_WCID_TX_INFO_SET))
12719 ++ if (wcid && !(wcid->tx_info & MT_WCID_TX_INFO_SET))
12720 + ieee80211_get_tx_rates(info->control.vif, sta, skb,
12721 + info->control.rates, 1);
12722 +
12723 +diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
12724 +index 448922cb2e63d..10bb3b5a8c22a 100644
12725 +--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
12726 ++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
12727 +@@ -3529,26 +3529,28 @@ static void rtw8822c_pwrtrack_set(struct rtw_dev *rtwdev, u8 rf_path)
12728 + }
12729 + }
12730 +
12731 +-static void rtw8822c_pwr_track_path(struct rtw_dev *rtwdev,
12732 +- struct rtw_swing_table *swing_table,
12733 +- u8 path)
12734 ++static void rtw8822c_pwr_track_stats(struct rtw_dev *rtwdev, u8 path)
12735 + {
12736 +- struct rtw_dm_info *dm_info = &rtwdev->dm_info;
12737 +- u8 thermal_value, delta;
12738 ++ u8 thermal_value;
12739 +
12740 + if (rtwdev->efuse.thermal_meter[path] == 0xff)
12741 + return;
12742 +
12743 + thermal_value = rtw_read_rf(rtwdev, path, RF_T_METER, 0x7e);
12744 +-
12745 + rtw_phy_pwrtrack_avg(rtwdev, thermal_value, path);
12746 ++}
12747 +
12748 +- delta = rtw_phy_pwrtrack_get_delta(rtwdev, path);
12749 ++static void rtw8822c_pwr_track_path(struct rtw_dev *rtwdev,
12750 ++ struct rtw_swing_table *swing_table,
12751 ++ u8 path)
12752 ++{
12753 ++ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
12754 ++ u8 delta;
12755 +
12756 ++ delta = rtw_phy_pwrtrack_get_delta(rtwdev, path);
12757 + dm_info->delta_power_index[path] =
12758 + rtw_phy_pwrtrack_get_pwridx(rtwdev, swing_table, path, path,
12759 + delta);
12760 +-
12761 + rtw8822c_pwrtrack_set(rtwdev, path);
12762 + }
12763 +
12764 +@@ -3559,12 +3561,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev)
12765 +
12766 + rtw_phy_config_swing_table(rtwdev, &swing_table);
12767 +
12768 ++ for (i = 0; i < rtwdev->hal.rf_path_num; i++)
12769 ++ rtw8822c_pwr_track_stats(rtwdev, i);
12770 + if (rtw_phy_pwrtrack_need_lck(rtwdev))
12771 + rtw8822c_do_lck(rtwdev);
12772 +-
12773 + for (i = 0; i < rtwdev->hal.rf_path_num; i++)
12774 + rtw8822c_pwr_track_path(rtwdev, &swing_table, i);
12775 +-
12776 + }
12777 +
12778 + static void rtw8822c_pwr_track(struct rtw_dev *rtwdev)
12779 +diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
12780 +index ce9892152f4d4..99b21a2c83861 100644
12781 +--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
12782 ++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
12783 +@@ -203,7 +203,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
12784 + wh->frame_control |= cpu_to_le16(RSI_SET_PS_ENABLE);
12785 +
12786 + if ((!(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) &&
12787 +- (common->secinfo.security_enable)) {
12788 ++ info->control.hw_key) {
12789 + if (rsi_is_cipher_wep(common))
12790 + ieee80211_size += 4;
12791 + else
12792 +@@ -470,9 +470,9 @@ int rsi_prepare_beacon(struct rsi_common *common, struct sk_buff *skb)
12793 + }
12794 +
12795 + if (common->band == NL80211_BAND_2GHZ)
12796 +- bcn_frm->bbp_info |= cpu_to_le16(RSI_RATE_1);
12797 ++ bcn_frm->rate_info |= cpu_to_le16(RSI_RATE_1);
12798 + else
12799 +- bcn_frm->bbp_info |= cpu_to_le16(RSI_RATE_6);
12800 ++ bcn_frm->rate_info |= cpu_to_le16(RSI_RATE_6);
12801 +
12802 + if (mac_bcn->data[tim_offset + 2] == 0)
12803 + bcn_frm->frame_info |= cpu_to_le16(RSI_DATA_DESC_DTIM_BEACON);
12804 +diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
12805 +index 16025300cddb3..57c9e3559dfd1 100644
12806 +--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
12807 ++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
12808 +@@ -1028,7 +1028,6 @@ static int rsi_mac80211_set_key(struct ieee80211_hw *hw,
12809 + mutex_lock(&common->mutex);
12810 + switch (cmd) {
12811 + case SET_KEY:
12812 +- secinfo->security_enable = true;
12813 + status = rsi_hal_key_config(hw, vif, key, sta);
12814 + if (status) {
12815 + mutex_unlock(&common->mutex);
12816 +@@ -1047,8 +1046,6 @@ static int rsi_mac80211_set_key(struct ieee80211_hw *hw,
12817 + break;
12818 +
12819 + case DISABLE_KEY:
12820 +- if (vif->type == NL80211_IFTYPE_STATION)
12821 +- secinfo->security_enable = false;
12822 + rsi_dbg(ERR_ZONE, "%s: RSI del key\n", __func__);
12823 + memset(key, 0, sizeof(struct ieee80211_key_conf));
12824 + status = rsi_hal_key_config(hw, vif, key, sta);
12825 +diff --git a/drivers/net/wireless/rsi/rsi_91x_mgmt.c b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
12826 +index 33c76d39a8e96..b6d050a2fbe7e 100644
12827 +--- a/drivers/net/wireless/rsi/rsi_91x_mgmt.c
12828 ++++ b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
12829 +@@ -1803,8 +1803,7 @@ int rsi_send_wowlan_request(struct rsi_common *common, u16 flags,
12830 + RSI_WIFI_MGMT_Q);
12831 + cmd_frame->desc.desc_dword0.frame_type = WOWLAN_CONFIG_PARAMS;
12832 + cmd_frame->host_sleep_status = sleep_status;
12833 +- if (common->secinfo.security_enable &&
12834 +- common->secinfo.gtk_cipher)
12835 ++ if (common->secinfo.gtk_cipher)
12836 + flags |= RSI_WOW_GTK_REKEY;
12837 + if (sleep_status)
12838 + cmd_frame->wow_flags = flags;
12839 +diff --git a/drivers/net/wireless/rsi/rsi_main.h b/drivers/net/wireless/rsi/rsi_main.h
12840 +index 73a19e43106b1..b3e25bc28682c 100644
12841 +--- a/drivers/net/wireless/rsi/rsi_main.h
12842 ++++ b/drivers/net/wireless/rsi/rsi_main.h
12843 +@@ -151,7 +151,6 @@ enum edca_queue {
12844 + };
12845 +
12846 + struct security_info {
12847 +- bool security_enable;
12848 + u32 ptk_cipher;
12849 + u32 gtk_cipher;
12850 + };
12851 +diff --git a/drivers/net/wireless/st/cw1200/scan.c b/drivers/net/wireless/st/cw1200/scan.c
12852 +index 988581cc134b7..1f856fbbc0ea4 100644
12853 +--- a/drivers/net/wireless/st/cw1200/scan.c
12854 ++++ b/drivers/net/wireless/st/cw1200/scan.c
12855 +@@ -75,30 +75,27 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
12856 + if (req->n_ssids > WSM_SCAN_MAX_NUM_OF_SSIDS)
12857 + return -EINVAL;
12858 +
12859 +- /* will be unlocked in cw1200_scan_work() */
12860 +- down(&priv->scan.lock);
12861 +- mutex_lock(&priv->conf_mutex);
12862 +-
12863 + frame.skb = ieee80211_probereq_get(hw, priv->vif->addr, NULL, 0,
12864 + req->ie_len);
12865 +- if (!frame.skb) {
12866 +- mutex_unlock(&priv->conf_mutex);
12867 +- up(&priv->scan.lock);
12868 ++ if (!frame.skb)
12869 + return -ENOMEM;
12870 +- }
12871 +
12872 + if (req->ie_len)
12873 + skb_put_data(frame.skb, req->ie, req->ie_len);
12874 +
12875 ++ /* will be unlocked in cw1200_scan_work() */
12876 ++ down(&priv->scan.lock);
12877 ++ mutex_lock(&priv->conf_mutex);
12878 ++
12879 + ret = wsm_set_template_frame(priv, &frame);
12880 + if (!ret) {
12881 + /* Host want to be the probe responder. */
12882 + ret = wsm_set_probe_responder(priv, true);
12883 + }
12884 + if (ret) {
12885 +- dev_kfree_skb(frame.skb);
12886 + mutex_unlock(&priv->conf_mutex);
12887 + up(&priv->scan.lock);
12888 ++ dev_kfree_skb(frame.skb);
12889 + return ret;
12890 + }
12891 +
12892 +@@ -120,8 +117,8 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
12893 + ++priv->scan.n_ssids;
12894 + }
12895 +
12896 +- dev_kfree_skb(frame.skb);
12897 + mutex_unlock(&priv->conf_mutex);
12898 ++ dev_kfree_skb(frame.skb);
12899 + queue_work(priv->workqueue, &priv->scan.work);
12900 + return 0;
12901 + }
12902 +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
12903 +index c92a15c3fbc5e..2a3ef79f96f9c 100644
12904 +--- a/drivers/nvme/host/pci.c
12905 ++++ b/drivers/nvme/host/pci.c
12906 +@@ -1027,7 +1027,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
12907 +
12908 + static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
12909 + {
12910 +- u16 tmp = nvmeq->cq_head + 1;
12911 ++ u32 tmp = nvmeq->cq_head + 1;
12912 +
12913 + if (tmp == nvmeq->q_depth) {
12914 + nvmeq->cq_head = 0;
12915 +@@ -2834,10 +2834,7 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
12916 + #ifdef CONFIG_ACPI
12917 + static bool nvme_acpi_storage_d3(struct pci_dev *dev)
12918 + {
12919 +- struct acpi_device *adev;
12920 +- struct pci_dev *root;
12921 +- acpi_handle handle;
12922 +- acpi_status status;
12923 ++ struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
12924 + u8 val;
12925 +
12926 + /*
12927 +@@ -2845,28 +2842,9 @@ static bool nvme_acpi_storage_d3(struct pci_dev *dev)
12928 + * must use D3 to support deep platform power savings during
12929 + * suspend-to-idle.
12930 + */
12931 +- root = pcie_find_root_port(dev);
12932 +- if (!root)
12933 +- return false;
12934 +
12935 +- adev = ACPI_COMPANION(&root->dev);
12936 + if (!adev)
12937 + return false;
12938 +-
12939 +- /*
12940 +- * The property is defined in the PXSX device for South complex ports
12941 +- * and in the PEGP device for North complex ports.
12942 +- */
12943 +- status = acpi_get_handle(adev->handle, "PXSX", &handle);
12944 +- if (ACPI_FAILURE(status)) {
12945 +- status = acpi_get_handle(adev->handle, "PEGP", &handle);
12946 +- if (ACPI_FAILURE(status))
12947 +- return false;
12948 +- }
12949 +-
12950 +- if (acpi_bus_get_device(handle, &adev))
12951 +- return false;
12952 +-
12953 + if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
12954 + &val))
12955 + return false;
12956 +diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
12957 +index d375745fc4ed3..b81db5270018e 100644
12958 +--- a/drivers/nvme/target/fc.c
12959 ++++ b/drivers/nvme/target/fc.c
12960 +@@ -2494,13 +2494,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
12961 + u32 xfrlen = be32_to_cpu(cmdiu->data_len);
12962 + int ret;
12963 +
12964 +- /*
12965 +- * if there is no nvmet mapping to the targetport there
12966 +- * shouldn't be requests. just terminate them.
12967 +- */
12968 +- if (!tgtport->pe)
12969 +- goto transport_error;
12970 +-
12971 + /*
12972 + * Fused commands are currently not supported in the linux
12973 + * implementation.
12974 +@@ -2528,7 +2521,8 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
12975 +
12976 + fod->req.cmd = &fod->cmdiubuf.sqe;
12977 + fod->req.cqe = &fod->rspiubuf.cqe;
12978 +- fod->req.port = tgtport->pe->port;
12979 ++ if (tgtport->pe)
12980 ++ fod->req.port = tgtport->pe->port;
12981 +
12982 + /* clear any response payload */
12983 + memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
12984 +diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
12985 +index adb26aff481d5..c485b2c7720d8 100644
12986 +--- a/drivers/of/fdt.c
12987 ++++ b/drivers/of/fdt.c
12988 +@@ -511,11 +511,11 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
12989 +
12990 + if (size &&
12991 + early_init_dt_reserve_memory_arch(base, size, nomap) == 0)
12992 +- pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %ld MiB\n",
12993 +- uname, &base, (unsigned long)size / SZ_1M);
12994 ++ pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n",
12995 ++ uname, &base, (unsigned long)(size / SZ_1M));
12996 + else
12997 +- pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %ld MiB\n",
12998 +- uname, &base, (unsigned long)size / SZ_1M);
12999 ++ pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n",
13000 ++ uname, &base, (unsigned long)(size / SZ_1M));
13001 +
13002 + len -= t_len;
13003 + if (first) {
13004 +diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
13005 +index a7fbc5e37e19e..6c95bbdf9265a 100644
13006 +--- a/drivers/of/of_reserved_mem.c
13007 ++++ b/drivers/of/of_reserved_mem.c
13008 +@@ -134,9 +134,9 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
13009 + ret = early_init_dt_alloc_reserved_memory_arch(size,
13010 + align, start, end, nomap, &base);
13011 + if (ret == 0) {
13012 +- pr_debug("allocated memory for '%s' node: base %pa, size %ld MiB\n",
13013 ++ pr_debug("allocated memory for '%s' node: base %pa, size %lu MiB\n",
13014 + uname, &base,
13015 +- (unsigned long)size / SZ_1M);
13016 ++ (unsigned long)(size / SZ_1M));
13017 + break;
13018 + }
13019 + len -= t_len;
13020 +@@ -146,8 +146,8 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
13021 + ret = early_init_dt_alloc_reserved_memory_arch(size, align,
13022 + 0, 0, nomap, &base);
13023 + if (ret == 0)
13024 +- pr_debug("allocated memory for '%s' node: base %pa, size %ld MiB\n",
13025 +- uname, &base, (unsigned long)size / SZ_1M);
13026 ++ pr_debug("allocated memory for '%s' node: base %pa, size %lu MiB\n",
13027 ++ uname, &base, (unsigned long)(size / SZ_1M));
13028 + }
13029 +
13030 + if (base == 0) {
13031 +diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
13032 +index 27a17a1e4a7c3..7479edf3676c1 100644
13033 +--- a/drivers/pci/controller/pci-hyperv.c
13034 ++++ b/drivers/pci/controller/pci-hyperv.c
13035 +@@ -3480,6 +3480,9 @@ static void __exit exit_hv_pci_drv(void)
13036 +
13037 + static int __init init_hv_pci_drv(void)
13038 + {
13039 ++ if (!hv_is_hyperv_initialized())
13040 ++ return -ENODEV;
13041 ++
13042 + /* Set the invalid domain number's bit, so it will not be used */
13043 + set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);
13044 +
13045 +diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
13046 +index 1328159fe564d..a4339426664e5 100644
13047 +--- a/drivers/perf/arm-cmn.c
13048 ++++ b/drivers/perf/arm-cmn.c
13049 +@@ -1212,7 +1212,7 @@ static int arm_cmn_init_irqs(struct arm_cmn *cmn)
13050 + irq = cmn->dtc[i].irq;
13051 + for (j = i; j--; ) {
13052 + if (cmn->dtc[j].irq == irq) {
13053 +- cmn->dtc[j].irq_friend = j - i;
13054 ++ cmn->dtc[j].irq_friend = i - j;
13055 + goto next;
13056 + }
13057 + }
13058 +diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
13059 +index 8ff7a67f691cf..4c3e5f2130807 100644
13060 +--- a/drivers/perf/arm_smmuv3_pmu.c
13061 ++++ b/drivers/perf/arm_smmuv3_pmu.c
13062 +@@ -277,7 +277,7 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
13063 + struct perf_event *event, int idx)
13064 + {
13065 + u32 span, sid;
13066 +- unsigned int num_ctrs = smmu_pmu->num_counters;
13067 ++ unsigned int cur_idx, num_ctrs = smmu_pmu->num_counters;
13068 + bool filter_en = !!get_filter_enable(event);
13069 +
13070 + span = filter_en ? get_filter_span(event) :
13071 +@@ -285,17 +285,19 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
13072 + sid = filter_en ? get_filter_stream_id(event) :
13073 + SMMU_PMCG_DEFAULT_FILTER_SID;
13074 +
13075 +- /* Support individual filter settings */
13076 +- if (!smmu_pmu->global_filter) {
13077 ++ cur_idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
13078 ++ /*
13079 ++ * Per-counter filtering, or scheduling the first globally-filtered
13080 ++ * event into an empty PMU so idx == 0 and it works out equivalent.
13081 ++ */
13082 ++ if (!smmu_pmu->global_filter || cur_idx == num_ctrs) {
13083 + smmu_pmu_set_event_filter(event, idx, span, sid);
13084 + return 0;
13085 + }
13086 +
13087 +- /* Requested settings same as current global settings*/
13088 +- idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
13089 +- if (idx == num_ctrs ||
13090 +- smmu_pmu_check_global_filter(smmu_pmu->events[idx], event)) {
13091 +- smmu_pmu_set_event_filter(event, 0, span, sid);
13092 ++ /* Otherwise, must match whatever's currently scheduled */
13093 ++ if (smmu_pmu_check_global_filter(smmu_pmu->events[cur_idx], event)) {
13094 ++ smmu_pmu_set_evtyper(smmu_pmu, idx, get_event(event));
13095 + return 0;
13096 + }
13097 +
13098 +diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
13099 +index be1f26b62ddb8..4a56849f04001 100644
13100 +--- a/drivers/perf/fsl_imx8_ddr_perf.c
13101 ++++ b/drivers/perf/fsl_imx8_ddr_perf.c
13102 +@@ -706,8 +706,10 @@ static int ddr_perf_probe(struct platform_device *pdev)
13103 +
13104 + name = devm_kasprintf(&pdev->dev, GFP_KERNEL, DDR_PERF_DEV_NAME "%d",
13105 + num);
13106 +- if (!name)
13107 +- return -ENOMEM;
13108 ++ if (!name) {
13109 ++ ret = -ENOMEM;
13110 ++ goto cpuhp_state_err;
13111 ++ }
13112 +
13113 + pmu->devtype_data = of_device_get_match_data(&pdev->dev);
13114 +
13115 +diff --git a/drivers/phy/ralink/phy-mt7621-pci.c b/drivers/phy/ralink/phy-mt7621-pci.c
13116 +index 753cb5bab9308..88e82ab81b61b 100644
13117 +--- a/drivers/phy/ralink/phy-mt7621-pci.c
13118 ++++ b/drivers/phy/ralink/phy-mt7621-pci.c
13119 +@@ -272,8 +272,8 @@ static struct phy *mt7621_pcie_phy_of_xlate(struct device *dev,
13120 +
13121 + mt7621_phy->has_dual_port = args->args[0];
13122 +
13123 +- dev_info(dev, "PHY for 0x%08x (dual port = %d)\n",
13124 +- (unsigned int)mt7621_phy->port_base, mt7621_phy->has_dual_port);
13125 ++ dev_dbg(dev, "PHY for 0x%px (dual port = %d)\n",
13126 ++ mt7621_phy->port_base, mt7621_phy->has_dual_port);
13127 +
13128 + return mt7621_phy->phy;
13129 + }
13130 +diff --git a/drivers/phy/socionext/phy-uniphier-pcie.c b/drivers/phy/socionext/phy-uniphier-pcie.c
13131 +index e4adab375c737..6bdbd1f214dd4 100644
13132 +--- a/drivers/phy/socionext/phy-uniphier-pcie.c
13133 ++++ b/drivers/phy/socionext/phy-uniphier-pcie.c
13134 +@@ -24,11 +24,13 @@
13135 + #define PORT_SEL_1 FIELD_PREP(PORT_SEL_MASK, 1)
13136 +
13137 + #define PCL_PHY_TEST_I 0x2000
13138 +-#define PCL_PHY_TEST_O 0x2004
13139 + #define TESTI_DAT_MASK GENMASK(13, 6)
13140 + #define TESTI_ADR_MASK GENMASK(5, 1)
13141 + #define TESTI_WR_EN BIT(0)
13142 +
13143 ++#define PCL_PHY_TEST_O 0x2004
13144 ++#define TESTO_DAT_MASK GENMASK(7, 0)
13145 ++
13146 + #define PCL_PHY_RESET 0x200c
13147 + #define PCL_PHY_RESET_N_MNMODE BIT(8) /* =1:manual */
13148 + #define PCL_PHY_RESET_N BIT(0) /* =1:deasssert */
13149 +@@ -77,11 +79,12 @@ static void uniphier_pciephy_set_param(struct uniphier_pciephy_priv *priv,
13150 + val = FIELD_PREP(TESTI_DAT_MASK, 1);
13151 + val |= FIELD_PREP(TESTI_ADR_MASK, reg);
13152 + uniphier_pciephy_testio_write(priv, val);
13153 +- val = readl(priv->base + PCL_PHY_TEST_O);
13154 ++ val = readl(priv->base + PCL_PHY_TEST_O) & TESTO_DAT_MASK;
13155 +
13156 + /* update value */
13157 +- val &= ~FIELD_PREP(TESTI_DAT_MASK, mask);
13158 +- val = FIELD_PREP(TESTI_DAT_MASK, mask & param);
13159 ++ val &= ~mask;
13160 ++ val |= mask & param;
13161 ++ val = FIELD_PREP(TESTI_DAT_MASK, val);
13162 + val |= FIELD_PREP(TESTI_ADR_MASK, reg);
13163 + uniphier_pciephy_testio_write(priv, val);
13164 + uniphier_pciephy_testio_write(priv, val | TESTI_WR_EN);
13165 +diff --git a/drivers/phy/ti/phy-dm816x-usb.c b/drivers/phy/ti/phy-dm816x-usb.c
13166 +index 57adc08a89b2d..9fe6ea6fdae55 100644
13167 +--- a/drivers/phy/ti/phy-dm816x-usb.c
13168 ++++ b/drivers/phy/ti/phy-dm816x-usb.c
13169 +@@ -242,19 +242,28 @@ static int dm816x_usb_phy_probe(struct platform_device *pdev)
13170 +
13171 + pm_runtime_enable(phy->dev);
13172 + generic_phy = devm_phy_create(phy->dev, NULL, &ops);
13173 +- if (IS_ERR(generic_phy))
13174 +- return PTR_ERR(generic_phy);
13175 ++ if (IS_ERR(generic_phy)) {
13176 ++ error = PTR_ERR(generic_phy);
13177 ++ goto clk_unprepare;
13178 ++ }
13179 +
13180 + phy_set_drvdata(generic_phy, phy);
13181 +
13182 + phy_provider = devm_of_phy_provider_register(phy->dev,
13183 + of_phy_simple_xlate);
13184 +- if (IS_ERR(phy_provider))
13185 +- return PTR_ERR(phy_provider);
13186 ++ if (IS_ERR(phy_provider)) {
13187 ++ error = PTR_ERR(phy_provider);
13188 ++ goto clk_unprepare;
13189 ++ }
13190 +
13191 + usb_add_phy_dev(&phy->phy);
13192 +
13193 + return 0;
13194 ++
13195 ++clk_unprepare:
13196 ++ pm_runtime_disable(phy->dev);
13197 ++ clk_unprepare(phy->refclk);
13198 ++ return error;
13199 + }
13200 +
13201 + static int dm816x_usb_phy_remove(struct platform_device *pdev)
13202 +diff --git a/drivers/pinctrl/renesas/pfc-r8a7796.c b/drivers/pinctrl/renesas/pfc-r8a7796.c
13203 +index 96b5b1509bb70..c4f1f5607601b 100644
13204 +--- a/drivers/pinctrl/renesas/pfc-r8a7796.c
13205 ++++ b/drivers/pinctrl/renesas/pfc-r8a7796.c
13206 +@@ -68,6 +68,7 @@
13207 + PIN_NOGP_CFG(QSPI1_MOSI_IO0, "QSPI1_MOSI_IO0", fn, CFG_FLAGS), \
13208 + PIN_NOGP_CFG(QSPI1_SPCLK, "QSPI1_SPCLK", fn, CFG_FLAGS), \
13209 + PIN_NOGP_CFG(QSPI1_SSL, "QSPI1_SSL", fn, CFG_FLAGS), \
13210 ++ PIN_NOGP_CFG(PRESET_N, "PRESET#", fn, SH_PFC_PIN_CFG_PULL_DOWN),\
13211 + PIN_NOGP_CFG(RPC_INT_N, "RPC_INT#", fn, CFG_FLAGS), \
13212 + PIN_NOGP_CFG(RPC_RESET_N, "RPC_RESET#", fn, CFG_FLAGS), \
13213 + PIN_NOGP_CFG(RPC_WP_N, "RPC_WP#", fn, CFG_FLAGS), \
13214 +@@ -6191,7 +6192,7 @@ static const struct pinmux_bias_reg pinmux_bias_regs[] = {
13215 + [ 4] = RCAR_GP_PIN(6, 29), /* USB30_OVC */
13216 + [ 5] = RCAR_GP_PIN(6, 30), /* GP6_30 */
13217 + [ 6] = RCAR_GP_PIN(6, 31), /* GP6_31 */
13218 +- [ 7] = SH_PFC_PIN_NONE,
13219 ++ [ 7] = PIN_PRESET_N, /* PRESET# */
13220 + [ 8] = SH_PFC_PIN_NONE,
13221 + [ 9] = SH_PFC_PIN_NONE,
13222 + [10] = SH_PFC_PIN_NONE,
13223 +diff --git a/drivers/pinctrl/renesas/pfc-r8a77990.c b/drivers/pinctrl/renesas/pfc-r8a77990.c
13224 +index 0a32e3c317c1a..95bcacf1275d9 100644
13225 +--- a/drivers/pinctrl/renesas/pfc-r8a77990.c
13226 ++++ b/drivers/pinctrl/renesas/pfc-r8a77990.c
13227 +@@ -54,10 +54,10 @@
13228 + PIN_NOGP_CFG(FSCLKST_N, "FSCLKST_N", fn, CFG_FLAGS), \
13229 + PIN_NOGP_CFG(MLB_REF, "MLB_REF", fn, CFG_FLAGS), \
13230 + PIN_NOGP_CFG(PRESETOUT_N, "PRESETOUT_N", fn, CFG_FLAGS), \
13231 +- PIN_NOGP_CFG(TCK, "TCK", fn, CFG_FLAGS), \
13232 +- PIN_NOGP_CFG(TDI, "TDI", fn, CFG_FLAGS), \
13233 +- PIN_NOGP_CFG(TMS, "TMS", fn, CFG_FLAGS), \
13234 +- PIN_NOGP_CFG(TRST_N, "TRST_N", fn, CFG_FLAGS)
13235 ++ PIN_NOGP_CFG(TCK, "TCK", fn, SH_PFC_PIN_CFG_PULL_UP), \
13236 ++ PIN_NOGP_CFG(TDI, "TDI", fn, SH_PFC_PIN_CFG_PULL_UP), \
13237 ++ PIN_NOGP_CFG(TMS, "TMS", fn, SH_PFC_PIN_CFG_PULL_UP), \
13238 ++ PIN_NOGP_CFG(TRST_N, "TRST_N", fn, SH_PFC_PIN_CFG_PULL_UP)
13239 +
13240 + /*
13241 + * F_() : just information
13242 +diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
13243 +index d41d7ad14be0d..0cb927f0f301a 100644
13244 +--- a/drivers/platform/x86/asus-nb-wmi.c
13245 ++++ b/drivers/platform/x86/asus-nb-wmi.c
13246 +@@ -110,11 +110,6 @@ static struct quirk_entry quirk_asus_forceals = {
13247 + .wmi_force_als_set = true,
13248 + };
13249 +
13250 +-static struct quirk_entry quirk_asus_vendor_backlight = {
13251 +- .wmi_backlight_power = true,
13252 +- .wmi_backlight_set_devstate = true,
13253 +-};
13254 +-
13255 + static struct quirk_entry quirk_asus_use_kbd_dock_devid = {
13256 + .use_kbd_dock_devid = true,
13257 + };
13258 +@@ -425,78 +420,6 @@ static const struct dmi_system_id asus_quirks[] = {
13259 + },
13260 + .driver_data = &quirk_asus_forceals,
13261 + },
13262 +- {
13263 +- .callback = dmi_matched,
13264 +- .ident = "ASUSTeK COMPUTER INC. GA401IH",
13265 +- .matches = {
13266 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13267 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA401IH"),
13268 +- },
13269 +- .driver_data = &quirk_asus_vendor_backlight,
13270 +- },
13271 +- {
13272 +- .callback = dmi_matched,
13273 +- .ident = "ASUSTeK COMPUTER INC. GA401II",
13274 +- .matches = {
13275 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13276 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA401II"),
13277 +- },
13278 +- .driver_data = &quirk_asus_vendor_backlight,
13279 +- },
13280 +- {
13281 +- .callback = dmi_matched,
13282 +- .ident = "ASUSTeK COMPUTER INC. GA401IU",
13283 +- .matches = {
13284 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13285 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA401IU"),
13286 +- },
13287 +- .driver_data = &quirk_asus_vendor_backlight,
13288 +- },
13289 +- {
13290 +- .callback = dmi_matched,
13291 +- .ident = "ASUSTeK COMPUTER INC. GA401IV",
13292 +- .matches = {
13293 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13294 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA401IV"),
13295 +- },
13296 +- .driver_data = &quirk_asus_vendor_backlight,
13297 +- },
13298 +- {
13299 +- .callback = dmi_matched,
13300 +- .ident = "ASUSTeK COMPUTER INC. GA401IVC",
13301 +- .matches = {
13302 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13303 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA401IVC"),
13304 +- },
13305 +- .driver_data = &quirk_asus_vendor_backlight,
13306 +- },
13307 +- {
13308 +- .callback = dmi_matched,
13309 +- .ident = "ASUSTeK COMPUTER INC. GA502II",
13310 +- .matches = {
13311 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13312 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA502II"),
13313 +- },
13314 +- .driver_data = &quirk_asus_vendor_backlight,
13315 +- },
13316 +- {
13317 +- .callback = dmi_matched,
13318 +- .ident = "ASUSTeK COMPUTER INC. GA502IU",
13319 +- .matches = {
13320 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13321 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA502IU"),
13322 +- },
13323 +- .driver_data = &quirk_asus_vendor_backlight,
13324 +- },
13325 +- {
13326 +- .callback = dmi_matched,
13327 +- .ident = "ASUSTeK COMPUTER INC. GA502IV",
13328 +- .matches = {
13329 +- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
13330 +- DMI_MATCH(DMI_PRODUCT_NAME, "GA502IV"),
13331 +- },
13332 +- .driver_data = &quirk_asus_vendor_backlight,
13333 +- },
13334 + {
13335 + .callback = dmi_matched,
13336 + .ident = "Asus Transformer T100TA / T100HA / T100CHI",
13337 +diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
13338 +index fa7232ad8c395..352508d304675 100644
13339 +--- a/drivers/platform/x86/toshiba_acpi.c
13340 ++++ b/drivers/platform/x86/toshiba_acpi.c
13341 +@@ -2831,6 +2831,7 @@ static int toshiba_acpi_setup_keyboard(struct toshiba_acpi_dev *dev)
13342 +
13343 + if (!dev->info_supported && !dev->system_event_supported) {
13344 + pr_warn("No hotkey query interface found\n");
13345 ++ error = -EINVAL;
13346 + goto err_remove_filter;
13347 + }
13348 +
13349 +diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
13350 +index 8618c44106c2b..b47f6821615e6 100644
13351 +--- a/drivers/platform/x86/touchscreen_dmi.c
13352 ++++ b/drivers/platform/x86/touchscreen_dmi.c
13353 +@@ -299,6 +299,35 @@ static const struct ts_dmi_data estar_beauty_hd_data = {
13354 + .properties = estar_beauty_hd_props,
13355 + };
13356 +
13357 ++/* Generic props + data for upside-down mounted GDIX1001 touchscreens */
13358 ++static const struct property_entry gdix1001_upside_down_props[] = {
13359 ++ PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"),
13360 ++ PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
13361 ++ { }
13362 ++};
13363 ++
13364 ++static const struct ts_dmi_data gdix1001_00_upside_down_data = {
13365 ++ .acpi_name = "GDIX1001:00",
13366 ++ .properties = gdix1001_upside_down_props,
13367 ++};
13368 ++
13369 ++static const struct ts_dmi_data gdix1001_01_upside_down_data = {
13370 ++ .acpi_name = "GDIX1001:01",
13371 ++ .properties = gdix1001_upside_down_props,
13372 ++};
13373 ++
13374 ++static const struct property_entry glavey_tm800a550l_props[] = {
13375 ++ PROPERTY_ENTRY_STRING("firmware-name", "gt912-glavey-tm800a550l.fw"),
13376 ++ PROPERTY_ENTRY_STRING("goodix,config-name", "gt912-glavey-tm800a550l.cfg"),
13377 ++ PROPERTY_ENTRY_U32("goodix,main-clk", 54),
13378 ++ { }
13379 ++};
13380 ++
13381 ++static const struct ts_dmi_data glavey_tm800a550l_data = {
13382 ++ .acpi_name = "GDIX1001:00",
13383 ++ .properties = glavey_tm800a550l_props,
13384 ++};
13385 ++
13386 + static const struct property_entry gp_electronic_t701_props[] = {
13387 + PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
13388 + PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
13389 +@@ -1012,6 +1041,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
13390 + DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"),
13391 + },
13392 + },
13393 ++ { /* Glavey TM800A550L */
13394 ++ .driver_data = (void *)&glavey_tm800a550l_data,
13395 ++ .matches = {
13396 ++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
13397 ++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
13398 ++ /* Above strings are too generic, also match on BIOS version */
13399 ++ DMI_MATCH(DMI_BIOS_VERSION, "ZY-8-BI-PX4S70VTR400-X423B-005-D"),
13400 ++ },
13401 ++ },
13402 + {
13403 + /* GP-electronic T701 */
13404 + .driver_data = (void *)&gp_electronic_t701_data,
13405 +@@ -1295,6 +1333,24 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
13406 + DMI_MATCH(DMI_BOARD_NAME, "X3 Plus"),
13407 + },
13408 + },
13409 ++ {
13410 ++ /* Teclast X89 (Android version / BIOS) */
13411 ++ .driver_data = (void *)&gdix1001_00_upside_down_data,
13412 ++ .matches = {
13413 ++ DMI_MATCH(DMI_BOARD_VENDOR, "WISKY"),
13414 ++ DMI_MATCH(DMI_BOARD_NAME, "3G062i"),
13415 ++ },
13416 ++ },
13417 ++ {
13418 ++ /* Teclast X89 (Windows version / BIOS) */
13419 ++ .driver_data = (void *)&gdix1001_01_upside_down_data,
13420 ++ .matches = {
13421 ++ /* tPAD is too generic, also match on bios date */
13422 ++ DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
13423 ++ DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
13424 ++ DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
13425 ++ },
13426 ++ },
13427 + {
13428 + /* Teclast X98 Plus II */
13429 + .driver_data = (void *)&teclast_x98plus2_data,
13430 +@@ -1303,6 +1359,19 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
13431 + DMI_MATCH(DMI_PRODUCT_NAME, "X98 Plus II"),
13432 + },
13433 + },
13434 ++ {
13435 ++ /* Teclast X98 Pro */
13436 ++ .driver_data = (void *)&gdix1001_00_upside_down_data,
13437 ++ .matches = {
13438 ++ /*
13439 ++ * Only match BIOS date, because the manufacturers
13440 ++ * BIOS does not report the board name at all
13441 ++ * (sometimes)...
13442 ++ */
13443 ++ DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
13444 ++ DMI_MATCH(DMI_BIOS_DATE, "10/28/2015"),
13445 ++ },
13446 ++ },
13447 + {
13448 + /* Trekstor Primebook C11 */
13449 + .driver_data = (void *)&trekstor_primebook_c11_data,
13450 +@@ -1378,6 +1447,22 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
13451 + DMI_MATCH(DMI_PRODUCT_NAME, "VINGA Twizzle J116"),
13452 + },
13453 + },
13454 ++ {
13455 ++ /* "WinBook TW100" */
13456 ++ .driver_data = (void *)&gdix1001_00_upside_down_data,
13457 ++ .matches = {
13458 ++ DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
13459 ++ DMI_MATCH(DMI_PRODUCT_NAME, "TW100")
13460 ++ }
13461 ++ },
13462 ++ {
13463 ++ /* WinBook TW700 */
13464 ++ .driver_data = (void *)&gdix1001_00_upside_down_data,
13465 ++ .matches = {
13466 ++ DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
13467 ++ DMI_MATCH(DMI_PRODUCT_NAME, "TW700")
13468 ++ },
13469 ++ },
13470 + {
13471 + /* Yours Y8W81, same case and touchscreen as Chuwi Vi8 */
13472 + .driver_data = (void *)&chuwi_vi8_data,
13473 +diff --git a/drivers/regulator/da9052-regulator.c b/drivers/regulator/da9052-regulator.c
13474 +index e18d291c7f21c..23fa429ebe760 100644
13475 +--- a/drivers/regulator/da9052-regulator.c
13476 ++++ b/drivers/regulator/da9052-regulator.c
13477 +@@ -250,7 +250,8 @@ static int da9052_regulator_set_voltage_time_sel(struct regulator_dev *rdev,
13478 + case DA9052_ID_BUCK3:
13479 + case DA9052_ID_LDO2:
13480 + case DA9052_ID_LDO3:
13481 +- ret = (new_sel - old_sel) * info->step_uV / 6250;
13482 ++ ret = DIV_ROUND_UP(abs(new_sel - old_sel) * info->step_uV,
13483 ++ 6250);
13484 + break;
13485 + }
13486 +
13487 +diff --git a/drivers/regulator/fan53880.c b/drivers/regulator/fan53880.c
13488 +index 1684faf82ed25..94f02f3099dd4 100644
13489 +--- a/drivers/regulator/fan53880.c
13490 ++++ b/drivers/regulator/fan53880.c
13491 +@@ -79,7 +79,7 @@ static const struct regulator_desc fan53880_regulators[] = {
13492 + .n_linear_ranges = 2,
13493 + .n_voltages = 0xf8,
13494 + .vsel_reg = FAN53880_BUCKVOUT,
13495 +- .vsel_mask = 0x7f,
13496 ++ .vsel_mask = 0xff,
13497 + .enable_reg = FAN53880_ENABLE,
13498 + .enable_mask = 0x10,
13499 + .enable_time = 480,
13500 +diff --git a/drivers/regulator/hi655x-regulator.c b/drivers/regulator/hi655x-regulator.c
13501 +index ac2ee2030211a..b44f492a2b832 100644
13502 +--- a/drivers/regulator/hi655x-regulator.c
13503 ++++ b/drivers/regulator/hi655x-regulator.c
13504 +@@ -72,7 +72,7 @@ enum hi655x_regulator_id {
13505 + static int hi655x_is_enabled(struct regulator_dev *rdev)
13506 + {
13507 + unsigned int value = 0;
13508 +- struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
13509 ++ const struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
13510 +
13511 + regmap_read(rdev->regmap, regulator->status_reg, &value);
13512 + return (value & rdev->desc->enable_mask);
13513 +@@ -80,7 +80,7 @@ static int hi655x_is_enabled(struct regulator_dev *rdev)
13514 +
13515 + static int hi655x_disable(struct regulator_dev *rdev)
13516 + {
13517 +- struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
13518 ++ const struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
13519 +
13520 + return regmap_write(rdev->regmap, regulator->disable_reg,
13521 + rdev->desc->enable_mask);
13522 +@@ -169,7 +169,6 @@ static const struct hi655x_regulator regulators[] = {
13523 + static int hi655x_regulator_probe(struct platform_device *pdev)
13524 + {
13525 + unsigned int i;
13526 +- struct hi655x_regulator *regulator;
13527 + struct hi655x_pmic *pmic;
13528 + struct regulator_config config = { };
13529 + struct regulator_dev *rdev;
13530 +@@ -180,22 +179,17 @@ static int hi655x_regulator_probe(struct platform_device *pdev)
13531 + return -ENODEV;
13532 + }
13533 +
13534 +- regulator = devm_kzalloc(&pdev->dev, sizeof(*regulator), GFP_KERNEL);
13535 +- if (!regulator)
13536 +- return -ENOMEM;
13537 +-
13538 +- platform_set_drvdata(pdev, regulator);
13539 +-
13540 + config.dev = pdev->dev.parent;
13541 + config.regmap = pmic->regmap;
13542 +- config.driver_data = regulator;
13543 + for (i = 0; i < ARRAY_SIZE(regulators); i++) {
13544 ++ config.driver_data = (void *) &regulators[i];
13545 ++
13546 + rdev = devm_regulator_register(&pdev->dev,
13547 + &regulators[i].rdesc,
13548 + &config);
13549 + if (IS_ERR(rdev)) {
13550 + dev_err(&pdev->dev, "failed to register regulator %s\n",
13551 +- regulator->rdesc.name);
13552 ++ regulators[i].rdesc.name);
13553 + return PTR_ERR(rdev);
13554 + }
13555 + }
13556 +diff --git a/drivers/regulator/mt6315-regulator.c b/drivers/regulator/mt6315-regulator.c
13557 +index 6b8be52c3772a..7514702f78cf7 100644
13558 +--- a/drivers/regulator/mt6315-regulator.c
13559 ++++ b/drivers/regulator/mt6315-regulator.c
13560 +@@ -223,8 +223,8 @@ static int mt6315_regulator_probe(struct spmi_device *pdev)
13561 + int i;
13562 +
13563 + regmap = devm_regmap_init_spmi_ext(pdev, &mt6315_regmap_config);
13564 +- if (!regmap)
13565 +- return -ENODEV;
13566 ++ if (IS_ERR(regmap))
13567 ++ return PTR_ERR(regmap);
13568 +
13569 + chip = devm_kzalloc(dev, sizeof(struct mt6315_chip), GFP_KERNEL);
13570 + if (!chip)
13571 +diff --git a/drivers/regulator/mt6358-regulator.c b/drivers/regulator/mt6358-regulator.c
13572 +index 13cb6ac9a8929..1d4eb5dc4fac8 100644
13573 +--- a/drivers/regulator/mt6358-regulator.c
13574 ++++ b/drivers/regulator/mt6358-regulator.c
13575 +@@ -457,7 +457,7 @@ static struct mt6358_regulator_info mt6358_regulators[] = {
13576 + MT6358_REG_FIXED("ldo_vaud28", VAUD28,
13577 + MT6358_LDO_VAUD28_CON0, 0, 2800000),
13578 + MT6358_LDO("ldo_vdram2", VDRAM2, vdram2_voltages, vdram2_idx,
13579 +- MT6358_LDO_VDRAM2_CON0, 0, MT6358_LDO_VDRAM2_ELR0, 0x10, 0),
13580 ++ MT6358_LDO_VDRAM2_CON0, 0, MT6358_LDO_VDRAM2_ELR0, 0xf, 0),
13581 + MT6358_LDO("ldo_vsim1", VSIM1, vsim_voltages, vsim_idx,
13582 + MT6358_LDO_VSIM1_CON0, 0, MT6358_VSIM1_ANA_CON0, 0xf00, 8),
13583 + MT6358_LDO("ldo_vibr", VIBR, vibr_voltages, vibr_idx,
13584 +diff --git a/drivers/regulator/uniphier-regulator.c b/drivers/regulator/uniphier-regulator.c
13585 +index 2e02e26b516c4..e75b0973e3256 100644
13586 +--- a/drivers/regulator/uniphier-regulator.c
13587 ++++ b/drivers/regulator/uniphier-regulator.c
13588 +@@ -201,6 +201,7 @@ static const struct of_device_id uniphier_regulator_match[] = {
13589 + },
13590 + { /* Sentinel */ },
13591 + };
13592 ++MODULE_DEVICE_TABLE(of, uniphier_regulator_match);
13593 +
13594 + static struct platform_driver uniphier_regulator_driver = {
13595 + .probe = uniphier_regulator_probe,
13596 +diff --git a/drivers/rtc/rtc-stm32.c b/drivers/rtc/rtc-stm32.c
13597 +index 75a8924ba12b3..ac9e228b56d0b 100644
13598 +--- a/drivers/rtc/rtc-stm32.c
13599 ++++ b/drivers/rtc/rtc-stm32.c
13600 +@@ -754,7 +754,7 @@ static int stm32_rtc_probe(struct platform_device *pdev)
13601 +
13602 + ret = clk_prepare_enable(rtc->rtc_ck);
13603 + if (ret)
13604 +- goto err;
13605 ++ goto err_no_rtc_ck;
13606 +
13607 + if (rtc->data->need_dbp)
13608 + regmap_update_bits(rtc->dbp, rtc->dbp_reg,
13609 +@@ -830,10 +830,12 @@ static int stm32_rtc_probe(struct platform_device *pdev)
13610 + }
13611 +
13612 + return 0;
13613 ++
13614 + err:
13615 ++ clk_disable_unprepare(rtc->rtc_ck);
13616 ++err_no_rtc_ck:
13617 + if (rtc->data->has_pclk)
13618 + clk_disable_unprepare(rtc->pclk);
13619 +- clk_disable_unprepare(rtc->rtc_ck);
13620 +
13621 + if (rtc->data->need_dbp)
13622 + regmap_update_bits(rtc->dbp, rtc->dbp_reg, rtc->dbp_mask, 0);
13623 +diff --git a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c
13624 +index 8d0de6adcad08..69d62421d5611 100644
13625 +--- a/drivers/s390/cio/chp.c
13626 ++++ b/drivers/s390/cio/chp.c
13627 +@@ -255,6 +255,9 @@ static ssize_t chp_status_write(struct device *dev,
13628 + if (!num_args)
13629 + return count;
13630 +
13631 ++ /* Wait until previous actions have settled. */
13632 ++ css_wait_for_slow_path();
13633 ++
13634 + if (!strncasecmp(cmd, "on", 2) || !strcmp(cmd, "1")) {
13635 + mutex_lock(&cp->lock);
13636 + error = s390_vary_chpid(cp->chpid, 1);
13637 +diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
13638 +index c22d9ee27ba19..297fb399363cc 100644
13639 +--- a/drivers/s390/cio/chsc.c
13640 ++++ b/drivers/s390/cio/chsc.c
13641 +@@ -801,8 +801,6 @@ int chsc_chp_vary(struct chp_id chpid, int on)
13642 + {
13643 + struct channel_path *chp = chpid_to_chp(chpid);
13644 +
13645 +- /* Wait until previous actions have settled. */
13646 +- css_wait_for_slow_path();
13647 + /*
13648 + * Redo PathVerification on the devices the chpid connects to
13649 + */
13650 +diff --git a/drivers/scsi/FlashPoint.c b/drivers/scsi/FlashPoint.c
13651 +index 24ace18240480..ec8a621d232d6 100644
13652 +--- a/drivers/scsi/FlashPoint.c
13653 ++++ b/drivers/scsi/FlashPoint.c
13654 +@@ -40,7 +40,7 @@ struct sccb_mgr_info {
13655 + u16 si_per_targ_ultra_nego;
13656 + u16 si_per_targ_no_disc;
13657 + u16 si_per_targ_wide_nego;
13658 +- u16 si_flags;
13659 ++ u16 si_mflags;
13660 + unsigned char si_card_family;
13661 + unsigned char si_bustype;
13662 + unsigned char si_card_model[3];
13663 +@@ -1073,22 +1073,22 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
13664 + ScamFlg =
13665 + (unsigned char)FPT_utilEERead(ioport, SCAM_CONFIG / 2);
13666 +
13667 +- pCardInfo->si_flags = 0x0000;
13668 ++ pCardInfo->si_mflags = 0x0000;
13669 +
13670 + if (i & 0x01)
13671 +- pCardInfo->si_flags |= SCSI_PARITY_ENA;
13672 ++ pCardInfo->si_mflags |= SCSI_PARITY_ENA;
13673 +
13674 + if (!(i & 0x02))
13675 +- pCardInfo->si_flags |= SOFT_RESET;
13676 ++ pCardInfo->si_mflags |= SOFT_RESET;
13677 +
13678 + if (i & 0x10)
13679 +- pCardInfo->si_flags |= EXTENDED_TRANSLATION;
13680 ++ pCardInfo->si_mflags |= EXTENDED_TRANSLATION;
13681 +
13682 + if (ScamFlg & SCAM_ENABLED)
13683 +- pCardInfo->si_flags |= FLAG_SCAM_ENABLED;
13684 ++ pCardInfo->si_mflags |= FLAG_SCAM_ENABLED;
13685 +
13686 + if (ScamFlg & SCAM_LEVEL2)
13687 +- pCardInfo->si_flags |= FLAG_SCAM_LEVEL2;
13688 ++ pCardInfo->si_mflags |= FLAG_SCAM_LEVEL2;
13689 +
13690 + j = (RD_HARPOON(ioport + hp_bm_ctrl) & ~SCSI_TERM_ENA_L);
13691 + if (i & 0x04) {
13692 +@@ -1104,7 +1104,7 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
13693 +
13694 + if (!(RD_HARPOON(ioport + hp_page_ctrl) & NARROW_SCSI_CARD))
13695 +
13696 +- pCardInfo->si_flags |= SUPPORT_16TAR_32LUN;
13697 ++ pCardInfo->si_mflags |= SUPPORT_16TAR_32LUN;
13698 +
13699 + pCardInfo->si_card_family = HARPOON_FAMILY;
13700 + pCardInfo->si_bustype = BUSTYPE_PCI;
13701 +@@ -1140,15 +1140,15 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
13702 +
13703 + if (pCardInfo->si_card_model[1] == '3') {
13704 + if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
13705 +- pCardInfo->si_flags |= LOW_BYTE_TERM;
13706 ++ pCardInfo->si_mflags |= LOW_BYTE_TERM;
13707 + } else if (pCardInfo->si_card_model[2] == '0') {
13708 + temp = RD_HARPOON(ioport + hp_xfer_pad);
13709 + WR_HARPOON(ioport + hp_xfer_pad, (temp & ~BIT(4)));
13710 + if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
13711 +- pCardInfo->si_flags |= LOW_BYTE_TERM;
13712 ++ pCardInfo->si_mflags |= LOW_BYTE_TERM;
13713 + WR_HARPOON(ioport + hp_xfer_pad, (temp | BIT(4)));
13714 + if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
13715 +- pCardInfo->si_flags |= HIGH_BYTE_TERM;
13716 ++ pCardInfo->si_mflags |= HIGH_BYTE_TERM;
13717 + WR_HARPOON(ioport + hp_xfer_pad, temp);
13718 + } else {
13719 + temp = RD_HARPOON(ioport + hp_ee_ctrl);
13720 +@@ -1166,9 +1166,9 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
13721 + WR_HARPOON(ioport + hp_ee_ctrl, temp);
13722 + WR_HARPOON(ioport + hp_xfer_pad, temp2);
13723 + if (!(temp3 & BIT(7)))
13724 +- pCardInfo->si_flags |= LOW_BYTE_TERM;
13725 ++ pCardInfo->si_mflags |= LOW_BYTE_TERM;
13726 + if (!(temp3 & BIT(6)))
13727 +- pCardInfo->si_flags |= HIGH_BYTE_TERM;
13728 ++ pCardInfo->si_mflags |= HIGH_BYTE_TERM;
13729 + }
13730 +
13731 + ARAM_ACCESS(ioport);
13732 +@@ -1275,7 +1275,7 @@ static void *FlashPoint_HardwareResetHostAdapter(struct sccb_mgr_info
13733 + WR_HARPOON(ioport + hp_arb_id, pCardInfo->si_id);
13734 + CurrCard->ourId = pCardInfo->si_id;
13735 +
13736 +- i = (unsigned char)pCardInfo->si_flags;
13737 ++ i = (unsigned char)pCardInfo->si_mflags;
13738 + if (i & SCSI_PARITY_ENA)
13739 + WR_HARPOON(ioport + hp_portctrl_1, (HOST_MODE8 | CHK_SCSI_P));
13740 +
13741 +@@ -1289,14 +1289,14 @@ static void *FlashPoint_HardwareResetHostAdapter(struct sccb_mgr_info
13742 + j |= SCSI_TERM_ENA_H;
13743 + WR_HARPOON(ioport + hp_ee_ctrl, j);
13744 +
13745 +- if (!(pCardInfo->si_flags & SOFT_RESET)) {
13746 ++ if (!(pCardInfo->si_mflags & SOFT_RESET)) {
13747 +
13748 + FPT_sresb(ioport, thisCard);
13749 +
13750 + FPT_scini(thisCard, pCardInfo->si_id, 0);
13751 + }
13752 +
13753 +- if (pCardInfo->si_flags & POST_ALL_UNDERRRUNS)
13754 ++ if (pCardInfo->si_mflags & POST_ALL_UNDERRRUNS)
13755 + CurrCard->globalFlags |= F_NO_FILTER;
13756 +
13757 + if (pCurrNvRam) {
13758 +diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c
13759 +index a13c203ef7a9a..c4881657a807b 100644
13760 +--- a/drivers/scsi/be2iscsi/be_iscsi.c
13761 ++++ b/drivers/scsi/be2iscsi/be_iscsi.c
13762 +@@ -182,6 +182,7 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
13763 + struct beiscsi_endpoint *beiscsi_ep;
13764 + struct iscsi_endpoint *ep;
13765 + uint16_t cri_index;
13766 ++ int rc = 0;
13767 +
13768 + ep = iscsi_lookup_endpoint(transport_fd);
13769 + if (!ep)
13770 +@@ -189,15 +190,17 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
13771 +
13772 + beiscsi_ep = ep->dd_data;
13773 +
13774 +- if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
13775 +- return -EINVAL;
13776 ++ if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
13777 ++ rc = -EINVAL;
13778 ++ goto put_ep;
13779 ++ }
13780 +
13781 + if (beiscsi_ep->phba != phba) {
13782 + beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
13783 + "BS_%d : beiscsi_ep->hba=%p not equal to phba=%p\n",
13784 + beiscsi_ep->phba, phba);
13785 +-
13786 +- return -EEXIST;
13787 ++ rc = -EEXIST;
13788 ++ goto put_ep;
13789 + }
13790 + cri_index = BE_GET_CRI_FROM_CID(beiscsi_ep->ep_cid);
13791 + if (phba->conn_table[cri_index]) {
13792 +@@ -209,7 +212,8 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
13793 + beiscsi_ep->ep_cid,
13794 + beiscsi_conn,
13795 + phba->conn_table[cri_index]);
13796 +- return -EINVAL;
13797 ++ rc = -EINVAL;
13798 ++ goto put_ep;
13799 + }
13800 + }
13801 +
13802 +@@ -226,7 +230,10 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
13803 + "BS_%d : cid %d phba->conn_table[%u]=%p\n",
13804 + beiscsi_ep->ep_cid, cri_index, beiscsi_conn);
13805 + phba->conn_table[cri_index] = beiscsi_conn;
13806 +- return 0;
13807 ++
13808 ++put_ep:
13809 ++ iscsi_put_endpoint(ep);
13810 ++ return rc;
13811 + }
13812 +
13813 + static int beiscsi_iface_create_ipv4(struct beiscsi_hba *phba)
13814 +diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
13815 +index 90fcddb76f46f..e9658a67d9da0 100644
13816 +--- a/drivers/scsi/be2iscsi/be_main.c
13817 ++++ b/drivers/scsi/be2iscsi/be_main.c
13818 +@@ -5809,6 +5809,7 @@ struct iscsi_transport beiscsi_iscsi_transport = {
13819 + .destroy_session = beiscsi_session_destroy,
13820 + .create_conn = beiscsi_conn_create,
13821 + .bind_conn = beiscsi_conn_bind,
13822 ++ .unbind_conn = iscsi_conn_unbind,
13823 + .destroy_conn = iscsi_conn_teardown,
13824 + .attr_is_visible = beiscsi_attr_is_visible,
13825 + .set_iface_param = beiscsi_iface_set_param,
13826 +diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
13827 +index 1e6d8f62ea3c2..2ad85c6b99fd2 100644
13828 +--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
13829 ++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
13830 +@@ -1420,17 +1420,23 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
13831 + * Forcefully terminate all in progress connection recovery at the
13832 + * earliest, either in bind(), send_pdu(LOGIN), or conn_start()
13833 + */
13834 +- if (bnx2i_adapter_ready(hba))
13835 +- return -EIO;
13836 ++ if (bnx2i_adapter_ready(hba)) {
13837 ++ ret_code = -EIO;
13838 ++ goto put_ep;
13839 ++ }
13840 +
13841 + bnx2i_ep = ep->dd_data;
13842 + if ((bnx2i_ep->state == EP_STATE_TCP_FIN_RCVD) ||
13843 +- (bnx2i_ep->state == EP_STATE_TCP_RST_RCVD))
13844 ++ (bnx2i_ep->state == EP_STATE_TCP_RST_RCVD)) {
13845 + /* Peer disconnect via' FIN or RST */
13846 +- return -EINVAL;
13847 ++ ret_code = -EINVAL;
13848 ++ goto put_ep;
13849 ++ }
13850 +
13851 +- if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
13852 +- return -EINVAL;
13853 ++ if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
13854 ++ ret_code = -EINVAL;
13855 ++ goto put_ep;
13856 ++ }
13857 +
13858 + if (bnx2i_ep->hba != hba) {
13859 + /* Error - TCP connection does not belong to this device
13860 +@@ -1441,7 +1447,8 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
13861 + iscsi_conn_printk(KERN_ALERT, cls_conn->dd_data,
13862 + "belong to hba (%s)\n",
13863 + hba->netdev->name);
13864 +- return -EEXIST;
13865 ++ ret_code = -EEXIST;
13866 ++ goto put_ep;
13867 + }
13868 + bnx2i_ep->conn = bnx2i_conn;
13869 + bnx2i_conn->ep = bnx2i_ep;
13870 +@@ -1458,6 +1465,8 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
13871 + bnx2i_put_rq_buf(bnx2i_conn, 0);
13872 +
13873 + bnx2i_arm_cq_event_coalescing(bnx2i_conn->ep, CNIC_ARM_CQE);
13874 ++put_ep:
13875 ++ iscsi_put_endpoint(ep);
13876 + return ret_code;
13877 + }
13878 +
13879 +@@ -2276,6 +2285,7 @@ struct iscsi_transport bnx2i_iscsi_transport = {
13880 + .destroy_session = bnx2i_session_destroy,
13881 + .create_conn = bnx2i_conn_create,
13882 + .bind_conn = bnx2i_conn_bind,
13883 ++ .unbind_conn = iscsi_conn_unbind,
13884 + .destroy_conn = bnx2i_conn_destroy,
13885 + .attr_is_visible = bnx2i_attr_is_visible,
13886 + .set_param = iscsi_set_param,
13887 +diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
13888 +index 37d99357120fa..edcd3fab6973c 100644
13889 +--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
13890 ++++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
13891 +@@ -117,6 +117,7 @@ static struct iscsi_transport cxgb3i_iscsi_transport = {
13892 + /* connection management */
13893 + .create_conn = cxgbi_create_conn,
13894 + .bind_conn = cxgbi_bind_conn,
13895 ++ .unbind_conn = iscsi_conn_unbind,
13896 + .destroy_conn = iscsi_tcp_conn_teardown,
13897 + .start_conn = iscsi_conn_start,
13898 + .stop_conn = iscsi_conn_stop,
13899 +diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
13900 +index 2c3491528d424..efb3e2b3398e2 100644
13901 +--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
13902 ++++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
13903 +@@ -134,6 +134,7 @@ static struct iscsi_transport cxgb4i_iscsi_transport = {
13904 + /* connection management */
13905 + .create_conn = cxgbi_create_conn,
13906 + .bind_conn = cxgbi_bind_conn,
13907 ++ .unbind_conn = iscsi_conn_unbind,
13908 + .destroy_conn = iscsi_tcp_conn_teardown,
13909 + .start_conn = iscsi_conn_start,
13910 + .stop_conn = iscsi_conn_stop,
13911 +diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
13912 +index f078b3c4e083f..f6bcae829c29b 100644
13913 +--- a/drivers/scsi/cxgbi/libcxgbi.c
13914 ++++ b/drivers/scsi/cxgbi/libcxgbi.c
13915 +@@ -2690,11 +2690,13 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
13916 + err = csk->cdev->csk_ddp_setup_pgidx(csk, csk->tid,
13917 + ppm->tformat.pgsz_idx_dflt);
13918 + if (err < 0)
13919 +- return err;
13920 ++ goto put_ep;
13921 +
13922 + err = iscsi_conn_bind(cls_session, cls_conn, is_leading);
13923 +- if (err)
13924 +- return -EINVAL;
13925 ++ if (err) {
13926 ++ err = -EINVAL;
13927 ++ goto put_ep;
13928 ++ }
13929 +
13930 + /* calculate the tag idx bits needed for this conn based on cmds_max */
13931 + cconn->task_idx_bits = (__ilog2_u32(conn->session->cmds_max - 1)) + 1;
13932 +@@ -2715,7 +2717,9 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
13933 + /* init recv engine */
13934 + iscsi_tcp_hdr_recv_prep(tcp_conn);
13935 +
13936 +- return 0;
13937 ++put_ep:
13938 ++ iscsi_put_endpoint(ep);
13939 ++ return err;
13940 + }
13941 + EXPORT_SYMBOL_GPL(cxgbi_bind_conn);
13942 +
13943 +diff --git a/drivers/scsi/libfc/fc_encode.h b/drivers/scsi/libfc/fc_encode.h
13944 +index 602c97a651bc0..9ea4ceadb5594 100644
13945 +--- a/drivers/scsi/libfc/fc_encode.h
13946 ++++ b/drivers/scsi/libfc/fc_encode.h
13947 +@@ -166,9 +166,11 @@ static inline int fc_ct_ns_fill(struct fc_lport *lport,
13948 + static inline void fc_ct_ms_fill_attr(struct fc_fdmi_attr_entry *entry,
13949 + const char *in, size_t len)
13950 + {
13951 +- int copied = strscpy(entry->value, in, len);
13952 +- if (copied > 0)
13953 +- memset(entry->value, copied, len - copied);
13954 ++ int copied;
13955 ++
13956 ++ copied = strscpy((char *)&entry->value, in, len);
13957 ++ if (copied > 0 && (copied + 1) < len)
13958 ++ memset((entry->value + copied + 1), 0, len - copied - 1);
13959 + }
13960 +
13961 + /**
13962 +diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
13963 +index 4834219497eeb..2aaf836786548 100644
13964 +--- a/drivers/scsi/libiscsi.c
13965 ++++ b/drivers/scsi/libiscsi.c
13966 +@@ -1387,23 +1387,32 @@ void iscsi_session_failure(struct iscsi_session *session,
13967 + }
13968 + EXPORT_SYMBOL_GPL(iscsi_session_failure);
13969 +
13970 +-void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err)
13971 ++static bool iscsi_set_conn_failed(struct iscsi_conn *conn)
13972 + {
13973 + struct iscsi_session *session = conn->session;
13974 +
13975 +- spin_lock_bh(&session->frwd_lock);
13976 +- if (session->state == ISCSI_STATE_FAILED) {
13977 +- spin_unlock_bh(&session->frwd_lock);
13978 +- return;
13979 +- }
13980 ++ if (session->state == ISCSI_STATE_FAILED)
13981 ++ return false;
13982 +
13983 + if (conn->stop_stage == 0)
13984 + session->state = ISCSI_STATE_FAILED;
13985 +- spin_unlock_bh(&session->frwd_lock);
13986 +
13987 + set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
13988 + set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
13989 +- iscsi_conn_error_event(conn->cls_conn, err);
13990 ++ return true;
13991 ++}
13992 ++
13993 ++void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err)
13994 ++{
13995 ++ struct iscsi_session *session = conn->session;
13996 ++ bool needs_evt;
13997 ++
13998 ++ spin_lock_bh(&session->frwd_lock);
13999 ++ needs_evt = iscsi_set_conn_failed(conn);
14000 ++ spin_unlock_bh(&session->frwd_lock);
14001 ++
14002 ++ if (needs_evt)
14003 ++ iscsi_conn_error_event(conn->cls_conn, err);
14004 + }
14005 + EXPORT_SYMBOL_GPL(iscsi_conn_failure);
14006 +
14007 +@@ -2180,6 +2189,51 @@ done:
14008 + spin_unlock(&session->frwd_lock);
14009 + }
14010 +
14011 ++/**
14012 ++ * iscsi_conn_unbind - prevent queueing to conn.
14013 ++ * @cls_conn: iscsi conn ep is bound to.
14014 ++ * @is_active: is the conn in use for boot or is this for EH/termination
14015 ++ *
14016 ++ * This must be called by drivers implementing the ep_disconnect callout.
14017 ++ * It disables queueing to the connection from libiscsi in preparation for
14018 ++ * an ep_disconnect call.
14019 ++ */
14020 ++void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active)
14021 ++{
14022 ++ struct iscsi_session *session;
14023 ++ struct iscsi_conn *conn;
14024 ++
14025 ++ if (!cls_conn)
14026 ++ return;
14027 ++
14028 ++ conn = cls_conn->dd_data;
14029 ++ session = conn->session;
14030 ++ /*
14031 ++ * Wait for iscsi_eh calls to exit. We don't wait for the tmf to
14032 ++ * complete or timeout. The caller just wants to know what's running
14033 ++ * is everything that needs to be cleaned up, and no cmds will be
14034 ++ * queued.
14035 ++ */
14036 ++ mutex_lock(&session->eh_mutex);
14037 ++
14038 ++ iscsi_suspend_queue(conn);
14039 ++ iscsi_suspend_tx(conn);
14040 ++
14041 ++ spin_lock_bh(&session->frwd_lock);
14042 ++ if (!is_active) {
14043 ++ /*
14044 ++ * if logout timed out before userspace could even send a PDU
14045 ++ * the state might still be in ISCSI_STATE_LOGGED_IN and
14046 ++ * allowing new cmds and TMFs.
14047 ++ */
14048 ++ if (session->state == ISCSI_STATE_LOGGED_IN)
14049 ++ iscsi_set_conn_failed(conn);
14050 ++ }
14051 ++ spin_unlock_bh(&session->frwd_lock);
14052 ++ mutex_unlock(&session->eh_mutex);
14053 ++}
14054 ++EXPORT_SYMBOL_GPL(iscsi_conn_unbind);
14055 ++
14056 + static void iscsi_prep_abort_task_pdu(struct iscsi_task *task,
14057 + struct iscsi_tm *hdr)
14058 + {
14059 +diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
14060 +index 46a8f2d1d2b83..8ef8a3672e494 100644
14061 +--- a/drivers/scsi/lpfc/lpfc_debugfs.c
14062 ++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
14063 +@@ -868,11 +868,8 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
14064 + len += scnprintf(buf+len, size-len,
14065 + "WWNN x%llx ",
14066 + wwn_to_u64(ndlp->nlp_nodename.u.wwn));
14067 +- if (ndlp->nlp_flag & NLP_RPI_REGISTERED)
14068 +- len += scnprintf(buf+len, size-len, "RPI:%03d ",
14069 +- ndlp->nlp_rpi);
14070 +- else
14071 +- len += scnprintf(buf+len, size-len, "RPI:none ");
14072 ++ len += scnprintf(buf+len, size-len, "RPI:x%04x ",
14073 ++ ndlp->nlp_rpi);
14074 + len += scnprintf(buf+len, size-len, "flag:x%08x ",
14075 + ndlp->nlp_flag);
14076 + if (!ndlp->nlp_type)
14077 +diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
14078 +index 3dd22da3153ff..5c4172e8c81b2 100644
14079 +--- a/drivers/scsi/lpfc/lpfc_els.c
14080 ++++ b/drivers/scsi/lpfc/lpfc_els.c
14081 +@@ -1985,9 +1985,20 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
14082 + lpfc_disc_state_machine(vport, ndlp, cmdiocb,
14083 + NLP_EVT_CMPL_PLOGI);
14084 +
14085 +- /* As long as this node is not registered with the scsi or nvme
14086 +- * transport, it is no longer an active node. Otherwise
14087 +- * devloss handles the final cleanup.
14088 ++ /* If a PLOGI collision occurred, the node needs to continue
14089 ++ * with the reglogin process.
14090 ++ */
14091 ++ spin_lock_irq(&ndlp->lock);
14092 ++ if ((ndlp->nlp_flag & (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI)) &&
14093 ++ ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE) {
14094 ++ spin_unlock_irq(&ndlp->lock);
14095 ++ goto out;
14096 ++ }
14097 ++ spin_unlock_irq(&ndlp->lock);
14098 ++
14099 ++ /* No PLOGI collision and the node is not registered with the
14100 ++ * scsi or nvme transport. It is no longer an active node. Just
14101 ++ * start the device remove process.
14102 + */
14103 + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) {
14104 + spin_lock_irq(&ndlp->lock);
14105 +@@ -2856,6 +2867,11 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
14106 + * log into the remote port.
14107 + */
14108 + if (ndlp->nlp_flag & NLP_TARGET_REMOVE) {
14109 ++ spin_lock_irq(&ndlp->lock);
14110 ++ if (phba->sli_rev == LPFC_SLI_REV4)
14111 ++ ndlp->nlp_flag |= NLP_RELEASE_RPI;
14112 ++ ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
14113 ++ spin_unlock_irq(&ndlp->lock);
14114 + lpfc_disc_state_machine(vport, ndlp, cmdiocb,
14115 + NLP_EVT_DEVICE_RM);
14116 + lpfc_els_free_iocb(phba, cmdiocb);
14117 +@@ -4363,6 +4379,7 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
14118 + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
14119 + struct lpfc_vport *vport = cmdiocb->vport;
14120 + IOCB_t *irsp;
14121 ++ u32 xpt_flags = 0, did_mask = 0;
14122 +
14123 + irsp = &rspiocb->iocb;
14124 + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
14125 +@@ -4378,9 +4395,20 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
14126 + if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
14127 + /* NPort Recovery mode or node is just allocated */
14128 + if (!lpfc_nlp_not_used(ndlp)) {
14129 +- /* If the ndlp is being used by another discovery
14130 +- * thread, just unregister the RPI.
14131 ++ /* A LOGO is completing and the node is in NPR state.
14132 ++ * If this a fabric node that cleared its transport
14133 ++ * registration, release the rpi.
14134 + */
14135 ++ xpt_flags = SCSI_XPT_REGD | NVME_XPT_REGD;
14136 ++ did_mask = ndlp->nlp_DID & Fabric_DID_MASK;
14137 ++ if (did_mask == Fabric_DID_MASK &&
14138 ++ !(ndlp->fc4_xpt_flags & xpt_flags)) {
14139 ++ spin_lock_irq(&ndlp->lock);
14140 ++ ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
14141 ++ if (phba->sli_rev == LPFC_SLI_REV4)
14142 ++ ndlp->nlp_flag |= NLP_RELEASE_RPI;
14143 ++ spin_unlock_irq(&ndlp->lock);
14144 ++ }
14145 + lpfc_unreg_rpi(vport, ndlp);
14146 + } else {
14147 + /* Indicate the node has already released, should
14148 +@@ -4416,28 +4444,37 @@ lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
14149 + {
14150 + struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf);
14151 + struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp;
14152 ++ u32 mbx_flag = pmb->mbox_flag;
14153 ++ u32 mbx_cmd = pmb->u.mb.mbxCommand;
14154 +
14155 + pmb->ctx_buf = NULL;
14156 + pmb->ctx_ndlp = NULL;
14157 +
14158 +- lpfc_mbuf_free(phba, mp->virt, mp->phys);
14159 +- kfree(mp);
14160 +- mempool_free(pmb, phba->mbox_mem_pool);
14161 + if (ndlp) {
14162 + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
14163 +- "0006 rpi x%x DID:%x flg:%x %d x%px\n",
14164 ++ "0006 rpi x%x DID:%x flg:%x %d x%px "
14165 ++ "mbx_cmd x%x mbx_flag x%x x%px\n",
14166 + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag,
14167 +- kref_read(&ndlp->kref),
14168 +- ndlp);
14169 +- /* This is the end of the default RPI cleanup logic for
14170 +- * this ndlp and it could get released. Clear the nlp_flags to
14171 +- * prevent any further processing.
14172 ++ kref_read(&ndlp->kref), ndlp, mbx_cmd,
14173 ++ mbx_flag, pmb);
14174 ++
14175 ++ /* This ends the default/temporary RPI cleanup logic for this
14176 ++ * ndlp and the node and rpi needs to be released. Free the rpi
14177 ++ * first on an UNREG_LOGIN and then release the final
14178 ++ * references.
14179 + */
14180 ++ spin_lock_irq(&ndlp->lock);
14181 + ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND;
14182 ++ if (mbx_cmd == MBX_UNREG_LOGIN)
14183 ++ ndlp->nlp_flag &= ~NLP_UNREG_INP;
14184 ++ spin_unlock_irq(&ndlp->lock);
14185 + lpfc_nlp_put(ndlp);
14186 +- lpfc_nlp_not_used(ndlp);
14187 ++ lpfc_drop_node(ndlp->vport, ndlp);
14188 + }
14189 +
14190 ++ lpfc_mbuf_free(phba, mp->virt, mp->phys);
14191 ++ kfree(mp);
14192 ++ mempool_free(pmb, phba->mbox_mem_pool);
14193 + return;
14194 + }
14195 +
14196 +@@ -4495,11 +4532,11 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
14197 + /* ELS response tag <ulpIoTag> completes */
14198 + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
14199 + "0110 ELS response tag x%x completes "
14200 +- "Data: x%x x%x x%x x%x x%x x%x x%x\n",
14201 ++ "Data: x%x x%x x%x x%x x%x x%x x%x x%x x%px\n",
14202 + cmdiocb->iocb.ulpIoTag, rspiocb->iocb.ulpStatus,
14203 + rspiocb->iocb.un.ulpWord[4], rspiocb->iocb.ulpTimeout,
14204 + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
14205 +- ndlp->nlp_rpi);
14206 ++ ndlp->nlp_rpi, kref_read(&ndlp->kref), mbox);
14207 + if (mbox) {
14208 + if ((rspiocb->iocb.ulpStatus == 0) &&
14209 + (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) {
14210 +@@ -4579,6 +4616,20 @@ out:
14211 + spin_unlock_irq(&ndlp->lock);
14212 + }
14213 +
14214 ++ /* An SLI4 NPIV instance wants to drop the node at this point under
14215 ++ * these conditions and release the RPI.
14216 ++ */
14217 ++ if (phba->sli_rev == LPFC_SLI_REV4 &&
14218 ++ (vport && vport->port_type == LPFC_NPIV_PORT) &&
14219 ++ ndlp->nlp_flag & NLP_RELEASE_RPI) {
14220 ++ lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi);
14221 ++ spin_lock_irq(&ndlp->lock);
14222 ++ ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
14223 ++ ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
14224 ++ spin_unlock_irq(&ndlp->lock);
14225 ++ lpfc_drop_node(vport, ndlp);
14226 ++ }
14227 ++
14228 + /* Release the originating I/O reference. */
14229 + lpfc_els_free_iocb(phba, cmdiocb);
14230 + lpfc_nlp_put(ndlp);
14231 +@@ -4762,10 +4813,10 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
14232 + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
14233 + "0128 Xmit ELS ACC response Status: x%x, IoTag: x%x, "
14234 + "XRI: x%x, DID: x%x, nlp_flag: x%x nlp_state: x%x "
14235 +- "RPI: x%x, fc_flag x%x\n",
14236 ++ "RPI: x%x, fc_flag x%x refcnt %d\n",
14237 + rc, elsiocb->iotag, elsiocb->sli4_xritag,
14238 + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
14239 +- ndlp->nlp_rpi, vport->fc_flag);
14240 ++ ndlp->nlp_rpi, vport->fc_flag, kref_read(&ndlp->kref));
14241 + return 0;
14242 +
14243 + io_err:
14244 +@@ -5978,6 +6029,17 @@ lpfc_els_rdp_cmpl(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context,
14245 + goto free_rdp_context;
14246 + }
14247 +
14248 ++ /* The NPIV instance is rejecting this unsolicited ELS. Make sure the
14249 ++ * node's assigned RPI needs to be released as this node will get
14250 ++ * freed.
14251 ++ */
14252 ++ if (phba->sli_rev == LPFC_SLI_REV4 &&
14253 ++ vport->port_type == LPFC_NPIV_PORT) {
14254 ++ spin_lock_irq(&ndlp->lock);
14255 ++ ndlp->nlp_flag |= NLP_RELEASE_RPI;
14256 ++ spin_unlock_irq(&ndlp->lock);
14257 ++ }
14258 ++
14259 + rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0);
14260 + if (rc == IOCB_ERROR) {
14261 + lpfc_nlp_put(ndlp);
14262 +diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
14263 +index c5176f4063864..b77d0e1931f36 100644
14264 +--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
14265 ++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
14266 +@@ -4789,12 +4789,17 @@ lpfc_nlp_logo_unreg(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
14267 + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING;
14268 + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
14269 + } else {
14270 ++ /* NLP_RELEASE_RPI is only set for SLI4 ports. */
14271 + if (ndlp->nlp_flag & NLP_RELEASE_RPI) {
14272 + lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi);
14273 ++ spin_lock_irq(&ndlp->lock);
14274 + ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
14275 + ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
14276 ++ spin_unlock_irq(&ndlp->lock);
14277 + }
14278 ++ spin_lock_irq(&ndlp->lock);
14279 + ndlp->nlp_flag &= ~NLP_UNREG_INP;
14280 ++ spin_unlock_irq(&ndlp->lock);
14281 + }
14282 + }
14283 +
14284 +@@ -5129,8 +5134,10 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
14285 + list_del_init(&ndlp->dev_loss_evt.evt_listp);
14286 + list_del_init(&ndlp->recovery_evt.evt_listp);
14287 + lpfc_cleanup_vports_rrqs(vport, ndlp);
14288 ++
14289 + if (phba->sli_rev == LPFC_SLI_REV4)
14290 + ndlp->nlp_flag |= NLP_RELEASE_RPI;
14291 ++
14292 + return 0;
14293 + }
14294 +
14295 +@@ -6174,8 +6181,23 @@ lpfc_nlp_release(struct kref *kref)
14296 + lpfc_cancel_retry_delay_tmo(vport, ndlp);
14297 + lpfc_cleanup_node(vport, ndlp);
14298 +
14299 +- /* Clear Node key fields to give other threads notice
14300 +- * that this node memory is not valid anymore.
14301 ++ /* Not all ELS transactions have registered the RPI with the port.
14302 ++ * In these cases the rpi usage is temporary and the node is
14303 ++ * released when the WQE is completed. Catch this case to free the
14304 ++ * RPI to the pool. Because this node is in the release path, a lock
14305 ++ * is unnecessary. All references are gone and the node has been
14306 ++ * dequeued.
14307 ++ */
14308 ++ if (ndlp->nlp_flag & NLP_RELEASE_RPI) {
14309 ++ if (ndlp->nlp_rpi != LPFC_RPI_ALLOC_ERROR &&
14310 ++ !(ndlp->nlp_flag & (NLP_RPI_REGISTERED | NLP_UNREG_INP))) {
14311 ++ lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi);
14312 ++ ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
14313 ++ }
14314 ++ }
14315 ++
14316 ++ /* The node is not freed back to memory, it is released to a pool so
14317 ++ * the node fields need to be cleaned up.
14318 + */
14319 + ndlp->vport = NULL;
14320 + ndlp->nlp_state = NLP_STE_FREED_NODE;
14321 +@@ -6255,6 +6277,7 @@ lpfc_nlp_not_used(struct lpfc_nodelist *ndlp)
14322 + "node not used: did:x%x flg:x%x refcnt:x%x",
14323 + ndlp->nlp_DID, ndlp->nlp_flag,
14324 + kref_read(&ndlp->kref));
14325 ++
14326 + if (kref_read(&ndlp->kref) == 1)
14327 + if (lpfc_nlp_put(ndlp))
14328 + return 1;
14329 +diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
14330 +index a67051ba3f127..d6819e2bc10b7 100644
14331 +--- a/drivers/scsi/lpfc/lpfc_init.c
14332 ++++ b/drivers/scsi/lpfc/lpfc_init.c
14333 +@@ -3532,13 +3532,6 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
14334 + list_for_each_entry_safe(ndlp, next_ndlp,
14335 + &vports[i]->fc_nodes,
14336 + nlp_listp) {
14337 +- if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
14338 +- /* Driver must assume RPI is invalid for
14339 +- * any unused or inactive node.
14340 +- */
14341 +- ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
14342 +- continue;
14343 +- }
14344 +
14345 + spin_lock_irq(&ndlp->lock);
14346 + ndlp->nlp_flag &= ~NLP_NPR_ADISC;
14347 +diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
14348 +index 9f05f5e329c6b..135f084d4de7e 100644
14349 +--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
14350 ++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
14351 +@@ -557,15 +557,24 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
14352 + /* no deferred ACC */
14353 + kfree(save_iocb);
14354 +
14355 +- /* In order to preserve RPIs, we want to cleanup
14356 +- * the default RPI the firmware created to rcv
14357 +- * this ELS request. The only way to do this is
14358 +- * to register, then unregister the RPI.
14359 ++ /* This is an NPIV SLI4 instance that does not need to register
14360 ++ * a default RPI.
14361 + */
14362 +- spin_lock_irq(&ndlp->lock);
14363 +- ndlp->nlp_flag |= (NLP_RM_DFLT_RPI | NLP_ACC_REGLOGIN |
14364 +- NLP_RCV_PLOGI);
14365 +- spin_unlock_irq(&ndlp->lock);
14366 ++ if (phba->sli_rev == LPFC_SLI_REV4) {
14367 ++ mempool_free(login_mbox, phba->mbox_mem_pool);
14368 ++ login_mbox = NULL;
14369 ++ } else {
14370 ++ /* In order to preserve RPIs, we want to cleanup
14371 ++ * the default RPI the firmware created to rcv
14372 ++ * this ELS request. The only way to do this is
14373 ++ * to register, then unregister the RPI.
14374 ++ */
14375 ++ spin_lock_irq(&ndlp->lock);
14376 ++ ndlp->nlp_flag |= (NLP_RM_DFLT_RPI | NLP_ACC_REGLOGIN |
14377 ++ NLP_RCV_PLOGI);
14378 ++ spin_unlock_irq(&ndlp->lock);
14379 ++ }
14380 ++
14381 + stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD;
14382 + stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
14383 + rc = lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
14384 +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
14385 +index f8a5a4eb5bcef..7551743835fc0 100644
14386 +--- a/drivers/scsi/lpfc/lpfc_sli.c
14387 ++++ b/drivers/scsi/lpfc/lpfc_sli.c
14388 +@@ -13628,9 +13628,15 @@ lpfc_sli4_sp_handle_mbox_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe)
14389 + if (mcqe_status == MB_CQE_STATUS_SUCCESS) {
14390 + mp = (struct lpfc_dmabuf *)(pmb->ctx_buf);
14391 + ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp;
14392 +- /* Reg_LOGIN of dflt RPI was successful. Now lets get
14393 +- * RID of the PPI using the same mbox buffer.
14394 ++
14395 ++ /* Reg_LOGIN of dflt RPI was successful. Mark the
14396 ++ * node as having an UNREG_LOGIN in progress to stop
14397 ++ * an unsolicited PLOGI from the same NPortId from
14398 ++ * starting another mailbox transaction.
14399 + */
14400 ++ spin_lock_irqsave(&ndlp->lock, iflags);
14401 ++ ndlp->nlp_flag |= NLP_UNREG_INP;
14402 ++ spin_unlock_irqrestore(&ndlp->lock, iflags);
14403 + lpfc_unreg_login(phba, vport->vpi,
14404 + pmbox->un.varWords[0], pmb);
14405 + pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
14406 +diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
14407 +index 38fc9467c6258..73295cf74cbe3 100644
14408 +--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
14409 ++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
14410 +@@ -3167,6 +3167,8 @@ megasas_build_io_fusion(struct megasas_instance *instance,
14411 + {
14412 + int sge_count;
14413 + u8 cmd_type;
14414 ++ u16 pd_index = 0;
14415 ++ u8 drive_type = 0;
14416 + struct MPI2_RAID_SCSI_IO_REQUEST *io_request = cmd->io_request;
14417 + struct MR_PRIV_DEVICE *mr_device_priv_data;
14418 + mr_device_priv_data = scp->device->hostdata;
14419 +@@ -3201,8 +3203,12 @@ megasas_build_io_fusion(struct megasas_instance *instance,
14420 + megasas_build_syspd_fusion(instance, scp, cmd, true);
14421 + break;
14422 + case NON_READ_WRITE_SYSPDIO:
14423 +- if (instance->secure_jbod_support ||
14424 +- mr_device_priv_data->is_tm_capable)
14425 ++ pd_index = MEGASAS_PD_INDEX(scp);
14426 ++ drive_type = instance->pd_list[pd_index].driveType;
14427 ++ if ((instance->secure_jbod_support ||
14428 ++ mr_device_priv_data->is_tm_capable) ||
14429 ++ (instance->adapter_type >= VENTURA_SERIES &&
14430 ++ drive_type == TYPE_ENCLOSURE))
14431 + megasas_build_syspd_fusion(instance, scp, cmd, false);
14432 + else
14433 + megasas_build_syspd_fusion(instance, scp, cmd, true);
14434 +diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
14435 +index ae1973878cc7d..7824e77bc6e26 100644
14436 +--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
14437 ++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
14438 +@@ -6883,8 +6883,10 @@ _scsih_expander_add(struct MPT3SAS_ADAPTER *ioc, u16 handle)
14439 + handle, parent_handle,
14440 + (u64)sas_expander->sas_address, sas_expander->num_phys);
14441 +
14442 +- if (!sas_expander->num_phys)
14443 ++ if (!sas_expander->num_phys) {
14444 ++ rc = -1;
14445 + goto out_fail;
14446 ++ }
14447 + sas_expander->phy = kcalloc(sas_expander->num_phys,
14448 + sizeof(struct _sas_phy), GFP_KERNEL);
14449 + if (!sas_expander->phy) {
14450 +diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
14451 +index 08c05403cd720..087c7ff28cd52 100644
14452 +--- a/drivers/scsi/qedi/qedi_iscsi.c
14453 ++++ b/drivers/scsi/qedi/qedi_iscsi.c
14454 +@@ -377,6 +377,7 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
14455 + struct qedi_ctx *qedi = iscsi_host_priv(shost);
14456 + struct qedi_endpoint *qedi_ep;
14457 + struct iscsi_endpoint *ep;
14458 ++ int rc = 0;
14459 +
14460 + ep = iscsi_lookup_endpoint(transport_fd);
14461 + if (!ep)
14462 +@@ -384,11 +385,16 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
14463 +
14464 + qedi_ep = ep->dd_data;
14465 + if ((qedi_ep->state == EP_STATE_TCP_FIN_RCVD) ||
14466 +- (qedi_ep->state == EP_STATE_TCP_RST_RCVD))
14467 +- return -EINVAL;
14468 ++ (qedi_ep->state == EP_STATE_TCP_RST_RCVD)) {
14469 ++ rc = -EINVAL;
14470 ++ goto put_ep;
14471 ++ }
14472 ++
14473 ++ if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
14474 ++ rc = -EINVAL;
14475 ++ goto put_ep;
14476 ++ }
14477 +
14478 +- if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
14479 +- return -EINVAL;
14480 +
14481 + qedi_ep->conn = qedi_conn;
14482 + qedi_conn->ep = qedi_ep;
14483 +@@ -398,13 +404,18 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
14484 + qedi_conn->cmd_cleanup_req = 0;
14485 + qedi_conn->cmd_cleanup_cmpl = 0;
14486 +
14487 +- if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn))
14488 +- return -EINVAL;
14489 ++ if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn)) {
14490 ++ rc = -EINVAL;
14491 ++ goto put_ep;
14492 ++ }
14493 ++
14494 +
14495 + spin_lock_init(&qedi_conn->tmf_work_lock);
14496 + INIT_LIST_HEAD(&qedi_conn->tmf_work_list);
14497 + init_waitqueue_head(&qedi_conn->wait_queue);
14498 +- return 0;
14499 ++put_ep:
14500 ++ iscsi_put_endpoint(ep);
14501 ++ return rc;
14502 + }
14503 +
14504 + static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
14505 +@@ -1401,6 +1412,7 @@ struct iscsi_transport qedi_iscsi_transport = {
14506 + .destroy_session = qedi_session_destroy,
14507 + .create_conn = qedi_conn_create,
14508 + .bind_conn = qedi_conn_bind,
14509 ++ .unbind_conn = iscsi_conn_unbind,
14510 + .start_conn = qedi_conn_start,
14511 + .stop_conn = iscsi_conn_stop,
14512 + .destroy_conn = qedi_conn_destroy,
14513 +diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
14514 +index 7bd9a4a04ad5d..ea128da08537e 100644
14515 +--- a/drivers/scsi/qla4xxx/ql4_os.c
14516 ++++ b/drivers/scsi/qla4xxx/ql4_os.c
14517 +@@ -259,6 +259,7 @@ static struct iscsi_transport qla4xxx_iscsi_transport = {
14518 + .start_conn = qla4xxx_conn_start,
14519 + .create_conn = qla4xxx_conn_create,
14520 + .bind_conn = qla4xxx_conn_bind,
14521 ++ .unbind_conn = iscsi_conn_unbind,
14522 + .stop_conn = iscsi_conn_stop,
14523 + .destroy_conn = qla4xxx_conn_destroy,
14524 + .set_param = iscsi_set_param,
14525 +@@ -3234,6 +3235,7 @@ static int qla4xxx_conn_bind(struct iscsi_cls_session *cls_session,
14526 + conn = cls_conn->dd_data;
14527 + qla_conn = conn->dd_data;
14528 + qla_conn->qla_ep = ep->dd_data;
14529 ++ iscsi_put_endpoint(ep);
14530 + return 0;
14531 + }
14532 +
14533 +diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
14534 +index 7d52a11e1b611..e172c660dcd53 100644
14535 +--- a/drivers/scsi/scsi_lib.c
14536 ++++ b/drivers/scsi/scsi_lib.c
14537 +@@ -761,6 +761,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
14538 + case 0x07: /* operation in progress */
14539 + case 0x08: /* Long write in progress */
14540 + case 0x09: /* self test in progress */
14541 ++ case 0x11: /* notify (enable spinup) required */
14542 + case 0x14: /* space allocation in progress */
14543 + case 0x1a: /* start stop unit in progress */
14544 + case 0x1b: /* sanitize in progress */
14545 +diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
14546 +index 441f0152193f7..6ce1cc992d1d0 100644
14547 +--- a/drivers/scsi/scsi_transport_iscsi.c
14548 ++++ b/drivers/scsi/scsi_transport_iscsi.c
14549 +@@ -86,16 +86,10 @@ struct iscsi_internal {
14550 + struct transport_container session_cont;
14551 + };
14552 +
14553 +-/* Worker to perform connection failure on unresponsive connections
14554 +- * completely in kernel space.
14555 +- */
14556 +-static void stop_conn_work_fn(struct work_struct *work);
14557 +-static DECLARE_WORK(stop_conn_work, stop_conn_work_fn);
14558 +-
14559 + static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
14560 + static struct workqueue_struct *iscsi_eh_timer_workq;
14561 +
14562 +-static struct workqueue_struct *iscsi_destroy_workq;
14563 ++static struct workqueue_struct *iscsi_conn_cleanup_workq;
14564 +
14565 + static DEFINE_IDA(iscsi_sess_ida);
14566 + /*
14567 +@@ -268,9 +262,20 @@ void iscsi_destroy_endpoint(struct iscsi_endpoint *ep)
14568 + }
14569 + EXPORT_SYMBOL_GPL(iscsi_destroy_endpoint);
14570 +
14571 ++void iscsi_put_endpoint(struct iscsi_endpoint *ep)
14572 ++{
14573 ++ put_device(&ep->dev);
14574 ++}
14575 ++EXPORT_SYMBOL_GPL(iscsi_put_endpoint);
14576 ++
14577 ++/**
14578 ++ * iscsi_lookup_endpoint - get ep from handle
14579 ++ * @handle: endpoint handle
14580 ++ *
14581 ++ * Caller must do a iscsi_put_endpoint.
14582 ++ */
14583 + struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
14584 + {
14585 +- struct iscsi_endpoint *ep;
14586 + struct device *dev;
14587 +
14588 + dev = class_find_device(&iscsi_endpoint_class, NULL, &handle,
14589 +@@ -278,13 +283,7 @@ struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
14590 + if (!dev)
14591 + return NULL;
14592 +
14593 +- ep = iscsi_dev_to_endpoint(dev);
14594 +- /*
14595 +- * we can drop this now because the interface will prevent
14596 +- * removals and lookups from racing.
14597 +- */
14598 +- put_device(dev);
14599 +- return ep;
14600 ++ return iscsi_dev_to_endpoint(dev);
14601 + }
14602 + EXPORT_SYMBOL_GPL(iscsi_lookup_endpoint);
14603 +
14604 +@@ -1620,12 +1619,6 @@ static DECLARE_TRANSPORT_CLASS(iscsi_connection_class,
14605 + static struct sock *nls;
14606 + static DEFINE_MUTEX(rx_queue_mutex);
14607 +
14608 +-/*
14609 +- * conn_mutex protects the {start,bind,stop,destroy}_conn from racing
14610 +- * against the kernel stop_connection recovery mechanism
14611 +- */
14612 +-static DEFINE_MUTEX(conn_mutex);
14613 +-
14614 + static LIST_HEAD(sesslist);
14615 + static DEFINE_SPINLOCK(sesslock);
14616 + static LIST_HEAD(connlist);
14617 +@@ -1976,6 +1969,8 @@ static void __iscsi_unblock_session(struct work_struct *work)
14618 + */
14619 + void iscsi_unblock_session(struct iscsi_cls_session *session)
14620 + {
14621 ++ flush_work(&session->block_work);
14622 ++
14623 + queue_work(iscsi_eh_timer_workq, &session->unblock_work);
14624 + /*
14625 + * Blocking the session can be done from any context so we only
14626 +@@ -2242,6 +2237,123 @@ void iscsi_remove_session(struct iscsi_cls_session *session)
14627 + }
14628 + EXPORT_SYMBOL_GPL(iscsi_remove_session);
14629 +
14630 ++static void iscsi_stop_conn(struct iscsi_cls_conn *conn, int flag)
14631 ++{
14632 ++ ISCSI_DBG_TRANS_CONN(conn, "Stopping conn.\n");
14633 ++
14634 ++ switch (flag) {
14635 ++ case STOP_CONN_RECOVER:
14636 ++ conn->state = ISCSI_CONN_FAILED;
14637 ++ break;
14638 ++ case STOP_CONN_TERM:
14639 ++ conn->state = ISCSI_CONN_DOWN;
14640 ++ break;
14641 ++ default:
14642 ++ iscsi_cls_conn_printk(KERN_ERR, conn, "invalid stop flag %d\n",
14643 ++ flag);
14644 ++ return;
14645 ++ }
14646 ++
14647 ++ conn->transport->stop_conn(conn, flag);
14648 ++ ISCSI_DBG_TRANS_CONN(conn, "Stopping conn done.\n");
14649 ++}
14650 ++
14651 ++static int iscsi_if_stop_conn(struct iscsi_transport *transport,
14652 ++ struct iscsi_uevent *ev)
14653 ++{
14654 ++ int flag = ev->u.stop_conn.flag;
14655 ++ struct iscsi_cls_conn *conn;
14656 ++
14657 ++ conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
14658 ++ if (!conn)
14659 ++ return -EINVAL;
14660 ++
14661 ++ ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop.\n");
14662 ++ /*
14663 ++ * If this is a termination we have to call stop_conn with that flag
14664 ++ * so the correct states get set. If we haven't run the work yet try to
14665 ++ * avoid the extra run.
14666 ++ */
14667 ++ if (flag == STOP_CONN_TERM) {
14668 ++ cancel_work_sync(&conn->cleanup_work);
14669 ++ iscsi_stop_conn(conn, flag);
14670 ++ } else {
14671 ++ /*
14672 ++ * Figure out if it was the kernel or userspace initiating this.
14673 ++ */
14674 ++ if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
14675 ++ iscsi_stop_conn(conn, flag);
14676 ++ } else {
14677 ++ ISCSI_DBG_TRANS_CONN(conn,
14678 ++ "flush kernel conn cleanup.\n");
14679 ++ flush_work(&conn->cleanup_work);
14680 ++ }
14681 ++ /*
14682 ++ * Only clear for recovery to avoid extra cleanup runs during
14683 ++ * termination.
14684 ++ */
14685 ++ clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
14686 ++ }
14687 ++ ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop done.\n");
14688 ++ return 0;
14689 ++}
14690 ++
14691 ++static void iscsi_ep_disconnect(struct iscsi_cls_conn *conn, bool is_active)
14692 ++{
14693 ++ struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
14694 ++ struct iscsi_endpoint *ep;
14695 ++
14696 ++ ISCSI_DBG_TRANS_CONN(conn, "disconnect ep.\n");
14697 ++ conn->state = ISCSI_CONN_FAILED;
14698 ++
14699 ++ if (!conn->ep || !session->transport->ep_disconnect)
14700 ++ return;
14701 ++
14702 ++ ep = conn->ep;
14703 ++ conn->ep = NULL;
14704 ++
14705 ++ session->transport->unbind_conn(conn, is_active);
14706 ++ session->transport->ep_disconnect(ep);
14707 ++ ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n");
14708 ++}
14709 ++
14710 ++static void iscsi_cleanup_conn_work_fn(struct work_struct *work)
14711 ++{
14712 ++ struct iscsi_cls_conn *conn = container_of(work, struct iscsi_cls_conn,
14713 ++ cleanup_work);
14714 ++ struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
14715 ++
14716 ++ mutex_lock(&conn->ep_mutex);
14717 ++ /*
14718 ++ * If we are not at least bound there is nothing for us to do. Userspace
14719 ++ * will do a ep_disconnect call if offload is used, but will not be
14720 ++ * doing a stop since there is nothing to clean up, so we have to clear
14721 ++ * the cleanup bit here.
14722 ++ */
14723 ++ if (conn->state != ISCSI_CONN_BOUND && conn->state != ISCSI_CONN_UP) {
14724 ++ ISCSI_DBG_TRANS_CONN(conn, "Got error while conn is already failed. Ignoring.\n");
14725 ++ clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
14726 ++ mutex_unlock(&conn->ep_mutex);
14727 ++ return;
14728 ++ }
14729 ++
14730 ++ iscsi_ep_disconnect(conn, false);
14731 ++
14732 ++ if (system_state != SYSTEM_RUNNING) {
14733 ++ /*
14734 ++ * If the user has set up for the session to never timeout
14735 ++ * then hang like they wanted. For all other cases fail right
14736 ++ * away since userspace is not going to relogin.
14737 ++ */
14738 ++ if (session->recovery_tmo > 0)
14739 ++ session->recovery_tmo = 0;
14740 ++ }
14741 ++
14742 ++ iscsi_stop_conn(conn, STOP_CONN_RECOVER);
14743 ++ mutex_unlock(&conn->ep_mutex);
14744 ++ ISCSI_DBG_TRANS_CONN(conn, "cleanup done.\n");
14745 ++}
14746 ++
14747 + void iscsi_free_session(struct iscsi_cls_session *session)
14748 + {
14749 + ISCSI_DBG_TRANS_SESSION(session, "Freeing session\n");
14750 +@@ -2281,7 +2393,7 @@ iscsi_create_conn(struct iscsi_cls_session *session, int dd_size, uint32_t cid)
14751 +
14752 + mutex_init(&conn->ep_mutex);
14753 + INIT_LIST_HEAD(&conn->conn_list);
14754 +- INIT_LIST_HEAD(&conn->conn_list_err);
14755 ++ INIT_WORK(&conn->cleanup_work, iscsi_cleanup_conn_work_fn);
14756 + conn->transport = transport;
14757 + conn->cid = cid;
14758 + conn->state = ISCSI_CONN_DOWN;
14759 +@@ -2338,7 +2450,6 @@ int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
14760 +
14761 + spin_lock_irqsave(&connlock, flags);
14762 + list_del(&conn->conn_list);
14763 +- list_del(&conn->conn_list_err);
14764 + spin_unlock_irqrestore(&connlock, flags);
14765 +
14766 + transport_unregister_device(&conn->dev);
14767 +@@ -2453,77 +2564,6 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
14768 + }
14769 + EXPORT_SYMBOL_GPL(iscsi_offload_mesg);
14770 +
14771 +-/*
14772 +- * This can be called without the rx_queue_mutex, if invoked by the kernel
14773 +- * stop work. But, in that case, it is guaranteed not to race with
14774 +- * iscsi_destroy by conn_mutex.
14775 +- */
14776 +-static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
14777 +-{
14778 +- /*
14779 +- * It is important that this path doesn't rely on
14780 +- * rx_queue_mutex, otherwise, a thread doing allocation on a
14781 +- * start_session/start_connection could sleep waiting on a
14782 +- * writeback to a failed iscsi device, that cannot be recovered
14783 +- * because the lock is held. If we don't hold it here, the
14784 +- * kernel stop_conn_work_fn has a chance to stop the broken
14785 +- * session and resolve the allocation.
14786 +- *
14787 +- * Still, the user invoked .stop_conn() needs to be serialized
14788 +- * with stop_conn_work_fn by a private mutex. Not pretty, but
14789 +- * it works.
14790 +- */
14791 +- mutex_lock(&conn_mutex);
14792 +- switch (flag) {
14793 +- case STOP_CONN_RECOVER:
14794 +- conn->state = ISCSI_CONN_FAILED;
14795 +- break;
14796 +- case STOP_CONN_TERM:
14797 +- conn->state = ISCSI_CONN_DOWN;
14798 +- break;
14799 +- default:
14800 +- iscsi_cls_conn_printk(KERN_ERR, conn,
14801 +- "invalid stop flag %d\n", flag);
14802 +- goto unlock;
14803 +- }
14804 +-
14805 +- conn->transport->stop_conn(conn, flag);
14806 +-unlock:
14807 +- mutex_unlock(&conn_mutex);
14808 +-}
14809 +-
14810 +-static void stop_conn_work_fn(struct work_struct *work)
14811 +-{
14812 +- struct iscsi_cls_conn *conn, *tmp;
14813 +- unsigned long flags;
14814 +- LIST_HEAD(recovery_list);
14815 +-
14816 +- spin_lock_irqsave(&connlock, flags);
14817 +- if (list_empty(&connlist_err)) {
14818 +- spin_unlock_irqrestore(&connlock, flags);
14819 +- return;
14820 +- }
14821 +- list_splice_init(&connlist_err, &recovery_list);
14822 +- spin_unlock_irqrestore(&connlock, flags);
14823 +-
14824 +- list_for_each_entry_safe(conn, tmp, &recovery_list, conn_list_err) {
14825 +- uint32_t sid = iscsi_conn_get_sid(conn);
14826 +- struct iscsi_cls_session *session;
14827 +-
14828 +- session = iscsi_session_lookup(sid);
14829 +- if (session) {
14830 +- if (system_state != SYSTEM_RUNNING) {
14831 +- session->recovery_tmo = 0;
14832 +- iscsi_if_stop_conn(conn, STOP_CONN_TERM);
14833 +- } else {
14834 +- iscsi_if_stop_conn(conn, STOP_CONN_RECOVER);
14835 +- }
14836 +- }
14837 +-
14838 +- list_del_init(&conn->conn_list_err);
14839 +- }
14840 +-}
14841 +-
14842 + void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
14843 + {
14844 + struct nlmsghdr *nlh;
14845 +@@ -2531,12 +2571,9 @@ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
14846 + struct iscsi_uevent *ev;
14847 + struct iscsi_internal *priv;
14848 + int len = nlmsg_total_size(sizeof(*ev));
14849 +- unsigned long flags;
14850 +
14851 +- spin_lock_irqsave(&connlock, flags);
14852 +- list_add(&conn->conn_list_err, &connlist_err);
14853 +- spin_unlock_irqrestore(&connlock, flags);
14854 +- queue_work(system_unbound_wq, &stop_conn_work);
14855 ++ if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags))
14856 ++ queue_work(iscsi_conn_cleanup_workq, &conn->cleanup_work);
14857 +
14858 + priv = iscsi_if_transport_lookup(conn->transport);
14859 + if (!priv)
14860 +@@ -2866,26 +2903,17 @@ static int
14861 + iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
14862 + {
14863 + struct iscsi_cls_conn *conn;
14864 +- unsigned long flags;
14865 +
14866 + conn = iscsi_conn_lookup(ev->u.d_conn.sid, ev->u.d_conn.cid);
14867 + if (!conn)
14868 + return -EINVAL;
14869 +
14870 +- spin_lock_irqsave(&connlock, flags);
14871 +- if (!list_empty(&conn->conn_list_err)) {
14872 +- spin_unlock_irqrestore(&connlock, flags);
14873 +- return -EAGAIN;
14874 +- }
14875 +- spin_unlock_irqrestore(&connlock, flags);
14876 +-
14877 ++ ISCSI_DBG_TRANS_CONN(conn, "Flushing cleanup during destruction\n");
14878 ++ flush_work(&conn->cleanup_work);
14879 + ISCSI_DBG_TRANS_CONN(conn, "Destroying transport conn\n");
14880 +
14881 +- mutex_lock(&conn_mutex);
14882 + if (transport->destroy_conn)
14883 + transport->destroy_conn(conn);
14884 +- mutex_unlock(&conn_mutex);
14885 +-
14886 + return 0;
14887 + }
14888 +
14889 +@@ -2975,15 +3003,31 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
14890 + ep = iscsi_lookup_endpoint(ep_handle);
14891 + if (!ep)
14892 + return -EINVAL;
14893 ++
14894 + conn = ep->conn;
14895 +- if (conn) {
14896 +- mutex_lock(&conn->ep_mutex);
14897 +- conn->ep = NULL;
14898 ++ if (!conn) {
14899 ++ /*
14900 ++ * conn was not even bound yet, so we can't get iscsi conn
14901 ++ * failures yet.
14902 ++ */
14903 ++ transport->ep_disconnect(ep);
14904 ++ goto put_ep;
14905 ++ }
14906 ++
14907 ++ mutex_lock(&conn->ep_mutex);
14908 ++ /* Check if this was a conn error and the kernel took ownership */
14909 ++ if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
14910 ++ ISCSI_DBG_TRANS_CONN(conn, "flush kernel conn cleanup.\n");
14911 + mutex_unlock(&conn->ep_mutex);
14912 +- conn->state = ISCSI_CONN_FAILED;
14913 ++
14914 ++ flush_work(&conn->cleanup_work);
14915 ++ goto put_ep;
14916 + }
14917 +
14918 +- transport->ep_disconnect(ep);
14919 ++ iscsi_ep_disconnect(conn, false);
14920 ++ mutex_unlock(&conn->ep_mutex);
14921 ++put_ep:
14922 ++ iscsi_put_endpoint(ep);
14923 + return 0;
14924 + }
14925 +
14926 +@@ -3009,6 +3053,7 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
14927 +
14928 + ev->r.retcode = transport->ep_poll(ep,
14929 + ev->u.ep_poll.timeout_ms);
14930 ++ iscsi_put_endpoint(ep);
14931 + break;
14932 + case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
14933 + rc = iscsi_if_ep_disconnect(transport,
14934 +@@ -3639,18 +3684,129 @@ exit_host_stats:
14935 + return err;
14936 + }
14937 +
14938 ++static int iscsi_if_transport_conn(struct iscsi_transport *transport,
14939 ++ struct nlmsghdr *nlh)
14940 ++{
14941 ++ struct iscsi_uevent *ev = nlmsg_data(nlh);
14942 ++ struct iscsi_cls_session *session;
14943 ++ struct iscsi_cls_conn *conn = NULL;
14944 ++ struct iscsi_endpoint *ep;
14945 ++ uint32_t pdu_len;
14946 ++ int err = 0;
14947 ++
14948 ++ switch (nlh->nlmsg_type) {
14949 ++ case ISCSI_UEVENT_CREATE_CONN:
14950 ++ return iscsi_if_create_conn(transport, ev);
14951 ++ case ISCSI_UEVENT_DESTROY_CONN:
14952 ++ return iscsi_if_destroy_conn(transport, ev);
14953 ++ case ISCSI_UEVENT_STOP_CONN:
14954 ++ return iscsi_if_stop_conn(transport, ev);
14955 ++ }
14956 ++
14957 ++ /*
14958 ++ * The following cmds need to be run under the ep_mutex so in kernel
14959 ++ * conn cleanup (ep_disconnect + unbind and conn) is not done while
14960 ++ * these are running. They also must not run if we have just run a conn
14961 ++ * cleanup because they would set the state in a way that might allow
14962 ++ * IO or send IO themselves.
14963 ++ */
14964 ++ switch (nlh->nlmsg_type) {
14965 ++ case ISCSI_UEVENT_START_CONN:
14966 ++ conn = iscsi_conn_lookup(ev->u.start_conn.sid,
14967 ++ ev->u.start_conn.cid);
14968 ++ break;
14969 ++ case ISCSI_UEVENT_BIND_CONN:
14970 ++ conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid);
14971 ++ break;
14972 ++ case ISCSI_UEVENT_SEND_PDU:
14973 ++ conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
14974 ++ break;
14975 ++ }
14976 ++
14977 ++ if (!conn)
14978 ++ return -EINVAL;
14979 ++
14980 ++ mutex_lock(&conn->ep_mutex);
14981 ++ if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
14982 ++ mutex_unlock(&conn->ep_mutex);
14983 ++ ev->r.retcode = -ENOTCONN;
14984 ++ return 0;
14985 ++ }
14986 ++
14987 ++ switch (nlh->nlmsg_type) {
14988 ++ case ISCSI_UEVENT_BIND_CONN:
14989 ++ if (conn->ep) {
14990 ++ /*
14991 ++ * For offload boot support where iscsid is restarted
14992 ++ * during the pivot root stage, the ep will be intact
14993 ++ * here when the new iscsid instance starts up and
14994 ++ * reconnects.
14995 ++ */
14996 ++ iscsi_ep_disconnect(conn, true);
14997 ++ }
14998 ++
14999 ++ session = iscsi_session_lookup(ev->u.b_conn.sid);
15000 ++ if (!session) {
15001 ++ err = -EINVAL;
15002 ++ break;
15003 ++ }
15004 ++
15005 ++ ev->r.retcode = transport->bind_conn(session, conn,
15006 ++ ev->u.b_conn.transport_eph,
15007 ++ ev->u.b_conn.is_leading);
15008 ++ if (!ev->r.retcode)
15009 ++ conn->state = ISCSI_CONN_BOUND;
15010 ++
15011 ++ if (ev->r.retcode || !transport->ep_connect)
15012 ++ break;
15013 ++
15014 ++ ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph);
15015 ++ if (ep) {
15016 ++ ep->conn = conn;
15017 ++ conn->ep = ep;
15018 ++ iscsi_put_endpoint(ep);
15019 ++ } else {
15020 ++ err = -ENOTCONN;
15021 ++ iscsi_cls_conn_printk(KERN_ERR, conn,
15022 ++ "Could not set ep conn binding\n");
15023 ++ }
15024 ++ break;
15025 ++ case ISCSI_UEVENT_START_CONN:
15026 ++ ev->r.retcode = transport->start_conn(conn);
15027 ++ if (!ev->r.retcode)
15028 ++ conn->state = ISCSI_CONN_UP;
15029 ++ break;
15030 ++ case ISCSI_UEVENT_SEND_PDU:
15031 ++ pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
15032 ++
15033 ++ if ((ev->u.send_pdu.hdr_size > pdu_len) ||
15034 ++ (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
15035 ++ err = -EINVAL;
15036 ++ break;
15037 ++ }
15038 ++
15039 ++ ev->r.retcode = transport->send_pdu(conn,
15040 ++ (struct iscsi_hdr *)((char *)ev + sizeof(*ev)),
15041 ++ (char *)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
15042 ++ ev->u.send_pdu.data_size);
15043 ++ break;
15044 ++ default:
15045 ++ err = -ENOSYS;
15046 ++ }
15047 ++
15048 ++ mutex_unlock(&conn->ep_mutex);
15049 ++ return err;
15050 ++}
15051 +
15052 + static int
15053 + iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
15054 + {
15055 + int err = 0;
15056 + u32 portid;
15057 +- u32 pdu_len;
15058 + struct iscsi_uevent *ev = nlmsg_data(nlh);
15059 + struct iscsi_transport *transport = NULL;
15060 + struct iscsi_internal *priv;
15061 + struct iscsi_cls_session *session;
15062 +- struct iscsi_cls_conn *conn;
15063 + struct iscsi_endpoint *ep = NULL;
15064 +
15065 + if (!netlink_capable(skb, CAP_SYS_ADMIN))
15066 +@@ -3691,6 +3847,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
15067 + ev->u.c_bound_session.initial_cmdsn,
15068 + ev->u.c_bound_session.cmds_max,
15069 + ev->u.c_bound_session.queue_depth);
15070 ++ iscsi_put_endpoint(ep);
15071 + break;
15072 + case ISCSI_UEVENT_DESTROY_SESSION:
15073 + session = iscsi_session_lookup(ev->u.d_session.sid);
15074 +@@ -3715,7 +3872,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
15075 + list_del_init(&session->sess_list);
15076 + spin_unlock_irqrestore(&sesslock, flags);
15077 +
15078 +- queue_work(iscsi_destroy_workq, &session->destroy_work);
15079 ++ queue_work(system_unbound_wq, &session->destroy_work);
15080 + }
15081 + break;
15082 + case ISCSI_UEVENT_UNBIND_SESSION:
15083 +@@ -3726,89 +3883,16 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
15084 + else
15085 + err = -EINVAL;
15086 + break;
15087 +- case ISCSI_UEVENT_CREATE_CONN:
15088 +- err = iscsi_if_create_conn(transport, ev);
15089 +- break;
15090 +- case ISCSI_UEVENT_DESTROY_CONN:
15091 +- err = iscsi_if_destroy_conn(transport, ev);
15092 +- break;
15093 +- case ISCSI_UEVENT_BIND_CONN:
15094 +- session = iscsi_session_lookup(ev->u.b_conn.sid);
15095 +- conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid);
15096 +-
15097 +- if (conn && conn->ep)
15098 +- iscsi_if_ep_disconnect(transport, conn->ep->id);
15099 +-
15100 +- if (!session || !conn) {
15101 +- err = -EINVAL;
15102 +- break;
15103 +- }
15104 +-
15105 +- mutex_lock(&conn_mutex);
15106 +- ev->r.retcode = transport->bind_conn(session, conn,
15107 +- ev->u.b_conn.transport_eph,
15108 +- ev->u.b_conn.is_leading);
15109 +- if (!ev->r.retcode)
15110 +- conn->state = ISCSI_CONN_BOUND;
15111 +- mutex_unlock(&conn_mutex);
15112 +-
15113 +- if (ev->r.retcode || !transport->ep_connect)
15114 +- break;
15115 +-
15116 +- ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph);
15117 +- if (ep) {
15118 +- ep->conn = conn;
15119 +-
15120 +- mutex_lock(&conn->ep_mutex);
15121 +- conn->ep = ep;
15122 +- mutex_unlock(&conn->ep_mutex);
15123 +- } else
15124 +- iscsi_cls_conn_printk(KERN_ERR, conn,
15125 +- "Could not set ep conn "
15126 +- "binding\n");
15127 +- break;
15128 + case ISCSI_UEVENT_SET_PARAM:
15129 + err = iscsi_set_param(transport, ev);
15130 + break;
15131 +- case ISCSI_UEVENT_START_CONN:
15132 +- conn = iscsi_conn_lookup(ev->u.start_conn.sid, ev->u.start_conn.cid);
15133 +- if (conn) {
15134 +- mutex_lock(&conn_mutex);
15135 +- ev->r.retcode = transport->start_conn(conn);
15136 +- if (!ev->r.retcode)
15137 +- conn->state = ISCSI_CONN_UP;
15138 +- mutex_unlock(&conn_mutex);
15139 +- }
15140 +- else
15141 +- err = -EINVAL;
15142 +- break;
15143 ++ case ISCSI_UEVENT_CREATE_CONN:
15144 ++ case ISCSI_UEVENT_DESTROY_CONN:
15145 + case ISCSI_UEVENT_STOP_CONN:
15146 +- conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
15147 +- if (conn)
15148 +- iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
15149 +- else
15150 +- err = -EINVAL;
15151 +- break;
15152 ++ case ISCSI_UEVENT_START_CONN:
15153 ++ case ISCSI_UEVENT_BIND_CONN:
15154 + case ISCSI_UEVENT_SEND_PDU:
15155 +- pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
15156 +-
15157 +- if ((ev->u.send_pdu.hdr_size > pdu_len) ||
15158 +- (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
15159 +- err = -EINVAL;
15160 +- break;
15161 +- }
15162 +-
15163 +- conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
15164 +- if (conn) {
15165 +- mutex_lock(&conn_mutex);
15166 +- ev->r.retcode = transport->send_pdu(conn,
15167 +- (struct iscsi_hdr*)((char*)ev + sizeof(*ev)),
15168 +- (char*)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
15169 +- ev->u.send_pdu.data_size);
15170 +- mutex_unlock(&conn_mutex);
15171 +- }
15172 +- else
15173 +- err = -EINVAL;
15174 ++ err = iscsi_if_transport_conn(transport, nlh);
15175 + break;
15176 + case ISCSI_UEVENT_GET_STATS:
15177 + err = iscsi_if_get_stats(transport, nlh);
15178 +@@ -4656,6 +4740,7 @@ iscsi_register_transport(struct iscsi_transport *tt)
15179 + int err;
15180 +
15181 + BUG_ON(!tt);
15182 ++ WARN_ON(tt->ep_disconnect && !tt->unbind_conn);
15183 +
15184 + priv = iscsi_if_transport_lookup(tt);
15185 + if (priv)
15186 +@@ -4810,10 +4895,10 @@ static __init int iscsi_transport_init(void)
15187 + goto release_nls;
15188 + }
15189 +
15190 +- iscsi_destroy_workq = alloc_workqueue("%s",
15191 +- WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | WQ_UNBOUND,
15192 +- 1, "iscsi_destroy");
15193 +- if (!iscsi_destroy_workq) {
15194 ++ iscsi_conn_cleanup_workq = alloc_workqueue("%s",
15195 ++ WQ_SYSFS | WQ_MEM_RECLAIM | WQ_UNBOUND, 0,
15196 ++ "iscsi_conn_cleanup");
15197 ++ if (!iscsi_conn_cleanup_workq) {
15198 + err = -ENOMEM;
15199 + goto destroy_wq;
15200 + }
15201 +@@ -4843,7 +4928,7 @@ unregister_transport_class:
15202 +
15203 + static void __exit iscsi_transport_exit(void)
15204 + {
15205 +- destroy_workqueue(iscsi_destroy_workq);
15206 ++ destroy_workqueue(iscsi_conn_cleanup_workq);
15207 + destroy_workqueue(iscsi_eh_timer_workq);
15208 + netlink_kernel_release(nls);
15209 + bus_unregister(&iscsi_flashnode_bus);
15210 +diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
15211 +index a418c3c7001c0..304ff2ee7d75a 100644
15212 +--- a/drivers/soundwire/stream.c
15213 ++++ b/drivers/soundwire/stream.c
15214 +@@ -422,7 +422,6 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
15215 + struct completion *port_ready;
15216 + struct sdw_dpn_prop *dpn_prop;
15217 + struct sdw_prepare_ch prep_ch;
15218 +- unsigned int time_left;
15219 + bool intr = false;
15220 + int ret = 0, val;
15221 + u32 addr;
15222 +@@ -479,15 +478,15 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
15223 +
15224 + /* Wait for completion on port ready */
15225 + port_ready = &s_rt->slave->port_ready[prep_ch.num];
15226 +- time_left = wait_for_completion_timeout(port_ready,
15227 +- msecs_to_jiffies(dpn_prop->ch_prep_timeout));
15228 ++ wait_for_completion_timeout(port_ready,
15229 ++ msecs_to_jiffies(dpn_prop->ch_prep_timeout));
15230 +
15231 + val = sdw_read(s_rt->slave, SDW_DPN_PREPARESTATUS(p_rt->num));
15232 +- val &= p_rt->ch_mask;
15233 +- if (!time_left || val) {
15234 ++ if ((val < 0) || (val & p_rt->ch_mask)) {
15235 ++ ret = (val < 0) ? val : -ETIMEDOUT;
15236 + dev_err(&s_rt->slave->dev,
15237 +- "Chn prep failed for port:%d\n", prep_ch.num);
15238 +- return -ETIMEDOUT;
15239 ++ "Chn prep failed for port %d: %d\n", prep_ch.num, ret);
15240 ++ return ret;
15241 + }
15242 + }
15243 +
15244 +diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
15245 +index df981e55c24c9..89b91cdfb2a54 100644
15246 +--- a/drivers/spi/spi-loopback-test.c
15247 ++++ b/drivers/spi/spi-loopback-test.c
15248 +@@ -874,7 +874,7 @@ static int spi_test_run_iter(struct spi_device *spi,
15249 + test.transfers[i].len = len;
15250 + if (test.transfers[i].tx_buf)
15251 + test.transfers[i].tx_buf += tx_off;
15252 +- if (test.transfers[i].tx_buf)
15253 ++ if (test.transfers[i].rx_buf)
15254 + test.transfers[i].rx_buf += rx_off;
15255 + }
15256 +
15257 +diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
15258 +index ecba6b4a5d85d..b2c4621db34d7 100644
15259 +--- a/drivers/spi/spi-meson-spicc.c
15260 ++++ b/drivers/spi/spi-meson-spicc.c
15261 +@@ -725,7 +725,7 @@ static int meson_spicc_probe(struct platform_device *pdev)
15262 + ret = clk_prepare_enable(spicc->pclk);
15263 + if (ret) {
15264 + dev_err(&pdev->dev, "pclk clock enable failed\n");
15265 +- goto out_master;
15266 ++ goto out_core_clk;
15267 + }
15268 +
15269 + device_reset_optional(&pdev->dev);
15270 +@@ -752,7 +752,7 @@ static int meson_spicc_probe(struct platform_device *pdev)
15271 + ret = meson_spicc_clk_init(spicc);
15272 + if (ret) {
15273 + dev_err(&pdev->dev, "clock registration failed\n");
15274 +- goto out_master;
15275 ++ goto out_clk;
15276 + }
15277 +
15278 + ret = devm_spi_register_master(&pdev->dev, master);
15279 +@@ -764,9 +764,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
15280 + return 0;
15281 +
15282 + out_clk:
15283 +- clk_disable_unprepare(spicc->core);
15284 + clk_disable_unprepare(spicc->pclk);
15285 +
15286 ++out_core_clk:
15287 ++ clk_disable_unprepare(spicc->core);
15288 ++
15289 + out_master:
15290 + spi_master_put(master);
15291 +
15292 +diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
15293 +index ccd817ee4917b..0d0cd061d3563 100644
15294 +--- a/drivers/spi/spi-omap-100k.c
15295 ++++ b/drivers/spi/spi-omap-100k.c
15296 +@@ -241,7 +241,7 @@ static int omap1_spi100k_setup_transfer(struct spi_device *spi,
15297 + else
15298 + word_len = spi->bits_per_word;
15299 +
15300 +- if (spi->bits_per_word > 32)
15301 ++ if (word_len > 32)
15302 + return -EINVAL;
15303 + cs->word_len = word_len;
15304 +
15305 +diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
15306 +index cc8401980125d..23ad052528dbe 100644
15307 +--- a/drivers/spi/spi-sun6i.c
15308 ++++ b/drivers/spi/spi-sun6i.c
15309 +@@ -379,6 +379,10 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
15310 + }
15311 +
15312 + sun6i_spi_write(sspi, SUN6I_CLK_CTL_REG, reg);
15313 ++ /* Finally enable the bus - doing so before might raise SCK to HIGH */
15314 ++ reg = sun6i_spi_read(sspi, SUN6I_GBL_CTL_REG);
15315 ++ reg |= SUN6I_GBL_CTL_BUS_ENABLE;
15316 ++ sun6i_spi_write(sspi, SUN6I_GBL_CTL_REG, reg);
15317 +
15318 + /* Setup the transfer now... */
15319 + if (sspi->tx_buf)
15320 +@@ -504,7 +508,7 @@ static int sun6i_spi_runtime_resume(struct device *dev)
15321 + }
15322 +
15323 + sun6i_spi_write(sspi, SUN6I_GBL_CTL_REG,
15324 +- SUN6I_GBL_CTL_BUS_ENABLE | SUN6I_GBL_CTL_MASTER | SUN6I_GBL_CTL_TP);
15325 ++ SUN6I_GBL_CTL_MASTER | SUN6I_GBL_CTL_TP);
15326 +
15327 + return 0;
15328 +
15329 +diff --git a/drivers/spi/spi-topcliff-pch.c b/drivers/spi/spi-topcliff-pch.c
15330 +index b459e369079f8..7fb020a1d66aa 100644
15331 +--- a/drivers/spi/spi-topcliff-pch.c
15332 ++++ b/drivers/spi/spi-topcliff-pch.c
15333 +@@ -580,8 +580,10 @@ static void pch_spi_set_tx(struct pch_spi_data *data, int *bpw)
15334 + data->pkt_tx_buff = kzalloc(size, GFP_KERNEL);
15335 + if (data->pkt_tx_buff != NULL) {
15336 + data->pkt_rx_buff = kzalloc(size, GFP_KERNEL);
15337 +- if (!data->pkt_rx_buff)
15338 ++ if (!data->pkt_rx_buff) {
15339 + kfree(data->pkt_tx_buff);
15340 ++ data->pkt_tx_buff = NULL;
15341 ++ }
15342 + }
15343 +
15344 + if (!data->pkt_rx_buff) {
15345 +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
15346 +index e067c54e87dd7..2350463bfb8f8 100644
15347 +--- a/drivers/spi/spi.c
15348 ++++ b/drivers/spi/spi.c
15349 +@@ -2066,6 +2066,7 @@ of_register_spi_device(struct spi_controller *ctlr, struct device_node *nc)
15350 + /* Store a pointer to the node in the device structure */
15351 + of_node_get(nc);
15352 + spi->dev.of_node = nc;
15353 ++ spi->dev.fwnode = of_fwnode_handle(nc);
15354 +
15355 + /* Register the new device */
15356 + rc = spi_add_device(spi);
15357 +@@ -2629,9 +2630,10 @@ static int spi_get_gpio_descs(struct spi_controller *ctlr)
15358 + native_cs_mask |= BIT(i);
15359 + }
15360 +
15361 +- ctlr->unused_native_cs = ffz(native_cs_mask);
15362 +- if (num_cs_gpios && ctlr->max_native_cs &&
15363 +- ctlr->unused_native_cs >= ctlr->max_native_cs) {
15364 ++ ctlr->unused_native_cs = ffs(~native_cs_mask) - 1;
15365 ++
15366 ++ if ((ctlr->flags & SPI_MASTER_GPIO_SS) && num_cs_gpios &&
15367 ++ ctlr->max_native_cs && ctlr->unused_native_cs >= ctlr->max_native_cs) {
15368 + dev_err(dev, "No unused native chip select available\n");
15369 + return -EINVAL;
15370 + }
15371 +diff --git a/drivers/ssb/scan.c b/drivers/ssb/scan.c
15372 +index f49ab1aa2149a..4161e5d1f276e 100644
15373 +--- a/drivers/ssb/scan.c
15374 ++++ b/drivers/ssb/scan.c
15375 +@@ -325,6 +325,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
15376 + if (bus->nr_devices > ARRAY_SIZE(bus->devices)) {
15377 + pr_err("More than %d ssb cores found (%d)\n",
15378 + SSB_MAX_NR_CORES, bus->nr_devices);
15379 ++ err = -EINVAL;
15380 + goto err_unmap;
15381 + }
15382 + if (bus->bustype == SSB_BUSTYPE_SSB) {
15383 +diff --git a/drivers/ssb/sdio.c b/drivers/ssb/sdio.c
15384 +index 7fe0afb42234f..66c5c2169704b 100644
15385 +--- a/drivers/ssb/sdio.c
15386 ++++ b/drivers/ssb/sdio.c
15387 +@@ -411,7 +411,6 @@ static void ssb_sdio_block_write(struct ssb_device *dev, const void *buffer,
15388 + sdio_claim_host(bus->host_sdio);
15389 + if (unlikely(ssb_sdio_switch_core(bus, dev))) {
15390 + error = -EIO;
15391 +- memset((void *)buffer, 0xff, count);
15392 + goto err_out;
15393 + }
15394 + offset |= bus->sdio_sbaddr & 0xffff;
15395 +diff --git a/drivers/staging/fbtft/fb_agm1264k-fl.c b/drivers/staging/fbtft/fb_agm1264k-fl.c
15396 +index eeeeec97ad278..b545c2ca80a41 100644
15397 +--- a/drivers/staging/fbtft/fb_agm1264k-fl.c
15398 ++++ b/drivers/staging/fbtft/fb_agm1264k-fl.c
15399 +@@ -84,9 +84,9 @@ static void reset(struct fbtft_par *par)
15400 +
15401 + dev_dbg(par->info->device, "%s()\n", __func__);
15402 +
15403 +- gpiod_set_value(par->gpio.reset, 0);
15404 +- udelay(20);
15405 + gpiod_set_value(par->gpio.reset, 1);
15406 ++ udelay(20);
15407 ++ gpiod_set_value(par->gpio.reset, 0);
15408 + mdelay(120);
15409 + }
15410 +
15411 +@@ -194,12 +194,12 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
15412 + /* select chip */
15413 + if (*buf) {
15414 + /* cs1 */
15415 +- gpiod_set_value(par->CS0, 1);
15416 +- gpiod_set_value(par->CS1, 0);
15417 +- } else {
15418 +- /* cs0 */
15419 + gpiod_set_value(par->CS0, 0);
15420 + gpiod_set_value(par->CS1, 1);
15421 ++ } else {
15422 ++ /* cs0 */
15423 ++ gpiod_set_value(par->CS0, 1);
15424 ++ gpiod_set_value(par->CS1, 0);
15425 + }
15426 +
15427 + gpiod_set_value(par->RS, 0); /* RS->0 (command mode) */
15428 +@@ -397,8 +397,8 @@ static int write_vmem(struct fbtft_par *par, size_t offset, size_t len)
15429 + }
15430 + kfree(convert_buf);
15431 +
15432 +- gpiod_set_value(par->CS0, 1);
15433 +- gpiod_set_value(par->CS1, 1);
15434 ++ gpiod_set_value(par->CS0, 0);
15435 ++ gpiod_set_value(par->CS1, 0);
15436 +
15437 + return ret;
15438 + }
15439 +@@ -419,10 +419,10 @@ static int write(struct fbtft_par *par, void *buf, size_t len)
15440 + for (i = 0; i < 8; ++i)
15441 + gpiod_set_value(par->gpio.db[i], data & (1 << i));
15442 + /* set E */
15443 +- gpiod_set_value(par->EPIN, 1);
15444 ++ gpiod_set_value(par->EPIN, 0);
15445 + udelay(5);
15446 + /* unset E - write */
15447 +- gpiod_set_value(par->EPIN, 0);
15448 ++ gpiod_set_value(par->EPIN, 1);
15449 + udelay(1);
15450 + }
15451 +
15452 +diff --git a/drivers/staging/fbtft/fb_bd663474.c b/drivers/staging/fbtft/fb_bd663474.c
15453 +index e2c7646588f8c..1629c2c440a97 100644
15454 +--- a/drivers/staging/fbtft/fb_bd663474.c
15455 ++++ b/drivers/staging/fbtft/fb_bd663474.c
15456 +@@ -12,7 +12,6 @@
15457 + #include <linux/module.h>
15458 + #include <linux/kernel.h>
15459 + #include <linux/init.h>
15460 +-#include <linux/gpio/consumer.h>
15461 + #include <linux/delay.h>
15462 +
15463 + #include "fbtft.h"
15464 +@@ -24,9 +23,6 @@
15465 +
15466 + static int init_display(struct fbtft_par *par)
15467 + {
15468 +- if (par->gpio.cs)
15469 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15470 +-
15471 + par->fbtftops.reset(par);
15472 +
15473 + /* Initialization sequence from Lib_UTFT */
15474 +diff --git a/drivers/staging/fbtft/fb_ili9163.c b/drivers/staging/fbtft/fb_ili9163.c
15475 +index 05648c3ffe474..6582a2c90aafc 100644
15476 +--- a/drivers/staging/fbtft/fb_ili9163.c
15477 ++++ b/drivers/staging/fbtft/fb_ili9163.c
15478 +@@ -11,7 +11,6 @@
15479 + #include <linux/module.h>
15480 + #include <linux/kernel.h>
15481 + #include <linux/init.h>
15482 +-#include <linux/gpio/consumer.h>
15483 + #include <linux/delay.h>
15484 + #include <video/mipi_display.h>
15485 +
15486 +@@ -77,9 +76,6 @@ static int init_display(struct fbtft_par *par)
15487 + {
15488 + par->fbtftops.reset(par);
15489 +
15490 +- if (par->gpio.cs)
15491 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15492 +-
15493 + write_reg(par, MIPI_DCS_SOFT_RESET); /* software reset */
15494 + mdelay(500);
15495 + write_reg(par, MIPI_DCS_EXIT_SLEEP_MODE); /* exit sleep */
15496 +diff --git a/drivers/staging/fbtft/fb_ili9320.c b/drivers/staging/fbtft/fb_ili9320.c
15497 +index f2e72d14431db..a8f4c618b754c 100644
15498 +--- a/drivers/staging/fbtft/fb_ili9320.c
15499 ++++ b/drivers/staging/fbtft/fb_ili9320.c
15500 +@@ -8,7 +8,6 @@
15501 + #include <linux/module.h>
15502 + #include <linux/kernel.h>
15503 + #include <linux/init.h>
15504 +-#include <linux/gpio/consumer.h>
15505 + #include <linux/spi/spi.h>
15506 + #include <linux/delay.h>
15507 +
15508 +diff --git a/drivers/staging/fbtft/fb_ili9325.c b/drivers/staging/fbtft/fb_ili9325.c
15509 +index c9aa4cb431236..16d3b17ca2798 100644
15510 +--- a/drivers/staging/fbtft/fb_ili9325.c
15511 ++++ b/drivers/staging/fbtft/fb_ili9325.c
15512 +@@ -10,7 +10,6 @@
15513 + #include <linux/module.h>
15514 + #include <linux/kernel.h>
15515 + #include <linux/init.h>
15516 +-#include <linux/gpio/consumer.h>
15517 + #include <linux/delay.h>
15518 +
15519 + #include "fbtft.h"
15520 +@@ -85,9 +84,6 @@ static int init_display(struct fbtft_par *par)
15521 + {
15522 + par->fbtftops.reset(par);
15523 +
15524 +- if (par->gpio.cs)
15525 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15526 +-
15527 + bt &= 0x07;
15528 + vc &= 0x07;
15529 + vrh &= 0x0f;
15530 +diff --git a/drivers/staging/fbtft/fb_ili9340.c b/drivers/staging/fbtft/fb_ili9340.c
15531 +index 415183c7054a8..704236bcaf3ff 100644
15532 +--- a/drivers/staging/fbtft/fb_ili9340.c
15533 ++++ b/drivers/staging/fbtft/fb_ili9340.c
15534 +@@ -8,7 +8,6 @@
15535 + #include <linux/module.h>
15536 + #include <linux/kernel.h>
15537 + #include <linux/init.h>
15538 +-#include <linux/gpio/consumer.h>
15539 + #include <linux/delay.h>
15540 + #include <video/mipi_display.h>
15541 +
15542 +diff --git a/drivers/staging/fbtft/fb_s6d1121.c b/drivers/staging/fbtft/fb_s6d1121.c
15543 +index 8c7de32903434..62f27172f8449 100644
15544 +--- a/drivers/staging/fbtft/fb_s6d1121.c
15545 ++++ b/drivers/staging/fbtft/fb_s6d1121.c
15546 +@@ -12,7 +12,6 @@
15547 + #include <linux/module.h>
15548 + #include <linux/kernel.h>
15549 + #include <linux/init.h>
15550 +-#include <linux/gpio/consumer.h>
15551 + #include <linux/delay.h>
15552 +
15553 + #include "fbtft.h"
15554 +@@ -29,9 +28,6 @@ static int init_display(struct fbtft_par *par)
15555 + {
15556 + par->fbtftops.reset(par);
15557 +
15558 +- if (par->gpio.cs)
15559 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15560 +-
15561 + /* Initialization sequence from Lib_UTFT */
15562 +
15563 + write_reg(par, 0x0011, 0x2004);
15564 +diff --git a/drivers/staging/fbtft/fb_sh1106.c b/drivers/staging/fbtft/fb_sh1106.c
15565 +index 6f7249493ea3b..7b9ab39e1c1a8 100644
15566 +--- a/drivers/staging/fbtft/fb_sh1106.c
15567 ++++ b/drivers/staging/fbtft/fb_sh1106.c
15568 +@@ -9,7 +9,6 @@
15569 + #include <linux/module.h>
15570 + #include <linux/kernel.h>
15571 + #include <linux/init.h>
15572 +-#include <linux/gpio/consumer.h>
15573 + #include <linux/delay.h>
15574 +
15575 + #include "fbtft.h"
15576 +diff --git a/drivers/staging/fbtft/fb_ssd1289.c b/drivers/staging/fbtft/fb_ssd1289.c
15577 +index 7a3fe022cc69d..f27bab38b3ec4 100644
15578 +--- a/drivers/staging/fbtft/fb_ssd1289.c
15579 ++++ b/drivers/staging/fbtft/fb_ssd1289.c
15580 +@@ -10,7 +10,6 @@
15581 + #include <linux/module.h>
15582 + #include <linux/kernel.h>
15583 + #include <linux/init.h>
15584 +-#include <linux/gpio/consumer.h>
15585 +
15586 + #include "fbtft.h"
15587 +
15588 +@@ -28,9 +27,6 @@ static int init_display(struct fbtft_par *par)
15589 + {
15590 + par->fbtftops.reset(par);
15591 +
15592 +- if (par->gpio.cs)
15593 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15594 +-
15595 + write_reg(par, 0x00, 0x0001);
15596 + write_reg(par, 0x03, 0xA8A4);
15597 + write_reg(par, 0x0C, 0x0000);
15598 +diff --git a/drivers/staging/fbtft/fb_ssd1325.c b/drivers/staging/fbtft/fb_ssd1325.c
15599 +index 8a3140d41d8bb..796a2ac3e1948 100644
15600 +--- a/drivers/staging/fbtft/fb_ssd1325.c
15601 ++++ b/drivers/staging/fbtft/fb_ssd1325.c
15602 +@@ -35,8 +35,6 @@ static int init_display(struct fbtft_par *par)
15603 + {
15604 + par->fbtftops.reset(par);
15605 +
15606 +- gpiod_set_value(par->gpio.cs, 0);
15607 +-
15608 + write_reg(par, 0xb3);
15609 + write_reg(par, 0xf0);
15610 + write_reg(par, 0xae);
15611 +diff --git a/drivers/staging/fbtft/fb_ssd1331.c b/drivers/staging/fbtft/fb_ssd1331.c
15612 +index 37622c9462aa7..ec5eced7f8cbd 100644
15613 +--- a/drivers/staging/fbtft/fb_ssd1331.c
15614 ++++ b/drivers/staging/fbtft/fb_ssd1331.c
15615 +@@ -81,8 +81,7 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
15616 + va_start(args, len);
15617 +
15618 + *buf = (u8)va_arg(args, unsigned int);
15619 +- if (par->gpio.dc)
15620 +- gpiod_set_value(par->gpio.dc, 0);
15621 ++ gpiod_set_value(par->gpio.dc, 0);
15622 + ret = par->fbtftops.write(par, par->buf, sizeof(u8));
15623 + if (ret < 0) {
15624 + va_end(args);
15625 +@@ -104,8 +103,7 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
15626 + return;
15627 + }
15628 + }
15629 +- if (par->gpio.dc)
15630 +- gpiod_set_value(par->gpio.dc, 1);
15631 ++ gpiod_set_value(par->gpio.dc, 1);
15632 + va_end(args);
15633 + }
15634 +
15635 +diff --git a/drivers/staging/fbtft/fb_ssd1351.c b/drivers/staging/fbtft/fb_ssd1351.c
15636 +index 900b28d826b28..cf263a58a1489 100644
15637 +--- a/drivers/staging/fbtft/fb_ssd1351.c
15638 ++++ b/drivers/staging/fbtft/fb_ssd1351.c
15639 +@@ -2,7 +2,6 @@
15640 + #include <linux/module.h>
15641 + #include <linux/kernel.h>
15642 + #include <linux/init.h>
15643 +-#include <linux/gpio/consumer.h>
15644 + #include <linux/spi/spi.h>
15645 + #include <linux/delay.h>
15646 +
15647 +diff --git a/drivers/staging/fbtft/fb_upd161704.c b/drivers/staging/fbtft/fb_upd161704.c
15648 +index c77832ae5e5ba..c680160d63807 100644
15649 +--- a/drivers/staging/fbtft/fb_upd161704.c
15650 ++++ b/drivers/staging/fbtft/fb_upd161704.c
15651 +@@ -12,7 +12,6 @@
15652 + #include <linux/module.h>
15653 + #include <linux/kernel.h>
15654 + #include <linux/init.h>
15655 +-#include <linux/gpio/consumer.h>
15656 + #include <linux/delay.h>
15657 +
15658 + #include "fbtft.h"
15659 +@@ -26,9 +25,6 @@ static int init_display(struct fbtft_par *par)
15660 + {
15661 + par->fbtftops.reset(par);
15662 +
15663 +- if (par->gpio.cs)
15664 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15665 +-
15666 + /* Initialization sequence from Lib_UTFT */
15667 +
15668 + /* register reset */
15669 +diff --git a/drivers/staging/fbtft/fb_watterott.c b/drivers/staging/fbtft/fb_watterott.c
15670 +index 76b25df376b8f..a57e1f4feef35 100644
15671 +--- a/drivers/staging/fbtft/fb_watterott.c
15672 ++++ b/drivers/staging/fbtft/fb_watterott.c
15673 +@@ -8,7 +8,6 @@
15674 + #include <linux/module.h>
15675 + #include <linux/kernel.h>
15676 + #include <linux/init.h>
15677 +-#include <linux/gpio/consumer.h>
15678 + #include <linux/delay.h>
15679 +
15680 + #include "fbtft.h"
15681 +diff --git a/drivers/staging/fbtft/fbtft-bus.c b/drivers/staging/fbtft/fbtft-bus.c
15682 +index 63c65dd67b175..3d422bc116411 100644
15683 +--- a/drivers/staging/fbtft/fbtft-bus.c
15684 ++++ b/drivers/staging/fbtft/fbtft-bus.c
15685 +@@ -135,8 +135,7 @@ int fbtft_write_vmem16_bus8(struct fbtft_par *par, size_t offset, size_t len)
15686 + remain = len / 2;
15687 + vmem16 = (u16 *)(par->info->screen_buffer + offset);
15688 +
15689 +- if (par->gpio.dc)
15690 +- gpiod_set_value(par->gpio.dc, 1);
15691 ++ gpiod_set_value(par->gpio.dc, 1);
15692 +
15693 + /* non buffered write */
15694 + if (!par->txbuf.buf)
15695 +diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
15696 +index 4f362dad4436a..3723269890d5f 100644
15697 +--- a/drivers/staging/fbtft/fbtft-core.c
15698 ++++ b/drivers/staging/fbtft/fbtft-core.c
15699 +@@ -38,8 +38,7 @@ int fbtft_write_buf_dc(struct fbtft_par *par, void *buf, size_t len, int dc)
15700 + {
15701 + int ret;
15702 +
15703 +- if (par->gpio.dc)
15704 +- gpiod_set_value(par->gpio.dc, dc);
15705 ++ gpiod_set_value(par->gpio.dc, dc);
15706 +
15707 + ret = par->fbtftops.write(par, buf, len);
15708 + if (ret < 0)
15709 +@@ -76,20 +75,16 @@ static int fbtft_request_one_gpio(struct fbtft_par *par,
15710 + struct gpio_desc **gpiop)
15711 + {
15712 + struct device *dev = par->info->device;
15713 +- int ret = 0;
15714 +
15715 + *gpiop = devm_gpiod_get_index_optional(dev, name, index,
15716 +- GPIOD_OUT_HIGH);
15717 +- if (IS_ERR(*gpiop)) {
15718 +- ret = PTR_ERR(*gpiop);
15719 +- dev_err(dev,
15720 +- "Failed to request %s GPIO: %d\n", name, ret);
15721 +- return ret;
15722 +- }
15723 ++ GPIOD_OUT_LOW);
15724 ++ if (IS_ERR(*gpiop))
15725 ++ return dev_err_probe(dev, PTR_ERR(*gpiop), "Failed to request %s GPIO\n", name);
15726 ++
15727 + fbtft_par_dbg(DEBUG_REQUEST_GPIOS, par, "%s: '%s' GPIO\n",
15728 + __func__, name);
15729 +
15730 +- return ret;
15731 ++ return 0;
15732 + }
15733 +
15734 + static int fbtft_request_gpios(struct fbtft_par *par)
15735 +@@ -226,11 +221,15 @@ static void fbtft_reset(struct fbtft_par *par)
15736 + {
15737 + if (!par->gpio.reset)
15738 + return;
15739 ++
15740 + fbtft_par_dbg(DEBUG_RESET, par, "%s()\n", __func__);
15741 ++
15742 + gpiod_set_value_cansleep(par->gpio.reset, 1);
15743 + usleep_range(20, 40);
15744 + gpiod_set_value_cansleep(par->gpio.reset, 0);
15745 + msleep(120);
15746 ++
15747 ++ gpiod_set_value_cansleep(par->gpio.cs, 1); /* Activate chip */
15748 + }
15749 +
15750 + static void fbtft_update_display(struct fbtft_par *par, unsigned int start_line,
15751 +@@ -922,8 +921,6 @@ static int fbtft_init_display_from_property(struct fbtft_par *par)
15752 + goto out_free;
15753 +
15754 + par->fbtftops.reset(par);
15755 +- if (par->gpio.cs)
15756 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15757 +
15758 + index = -1;
15759 + val = values[++index];
15760 +@@ -1018,8 +1015,6 @@ int fbtft_init_display(struct fbtft_par *par)
15761 + }
15762 +
15763 + par->fbtftops.reset(par);
15764 +- if (par->gpio.cs)
15765 +- gpiod_set_value(par->gpio.cs, 0); /* Activate chip */
15766 +
15767 + i = 0;
15768 + while (i < FBTFT_MAX_INIT_SEQUENCE) {
15769 +diff --git a/drivers/staging/fbtft/fbtft-io.c b/drivers/staging/fbtft/fbtft-io.c
15770 +index 0863d257d7620..de1904a443c27 100644
15771 +--- a/drivers/staging/fbtft/fbtft-io.c
15772 ++++ b/drivers/staging/fbtft/fbtft-io.c
15773 +@@ -142,12 +142,12 @@ int fbtft_write_gpio8_wr(struct fbtft_par *par, void *buf, size_t len)
15774 + data = *(u8 *)buf;
15775 +
15776 + /* Start writing by pulling down /WR */
15777 +- gpiod_set_value(par->gpio.wr, 0);
15778 ++ gpiod_set_value(par->gpio.wr, 1);
15779 +
15780 + /* Set data */
15781 + #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
15782 + if (data == prev_data) {
15783 +- gpiod_set_value(par->gpio.wr, 0); /* used as delay */
15784 ++ gpiod_set_value(par->gpio.wr, 1); /* used as delay */
15785 + } else {
15786 + for (i = 0; i < 8; i++) {
15787 + if ((data & 1) != (prev_data & 1))
15788 +@@ -165,7 +165,7 @@ int fbtft_write_gpio8_wr(struct fbtft_par *par, void *buf, size_t len)
15789 + #endif
15790 +
15791 + /* Pullup /WR */
15792 +- gpiod_set_value(par->gpio.wr, 1);
15793 ++ gpiod_set_value(par->gpio.wr, 0);
15794 +
15795 + #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
15796 + prev_data = *(u8 *)buf;
15797 +@@ -192,12 +192,12 @@ int fbtft_write_gpio16_wr(struct fbtft_par *par, void *buf, size_t len)
15798 + data = *(u16 *)buf;
15799 +
15800 + /* Start writing by pulling down /WR */
15801 +- gpiod_set_value(par->gpio.wr, 0);
15802 ++ gpiod_set_value(par->gpio.wr, 1);
15803 +
15804 + /* Set data */
15805 + #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
15806 + if (data == prev_data) {
15807 +- gpiod_set_value(par->gpio.wr, 0); /* used as delay */
15808 ++ gpiod_set_value(par->gpio.wr, 1); /* used as delay */
15809 + } else {
15810 + for (i = 0; i < 16; i++) {
15811 + if ((data & 1) != (prev_data & 1))
15812 +@@ -215,7 +215,7 @@ int fbtft_write_gpio16_wr(struct fbtft_par *par, void *buf, size_t len)
15813 + #endif
15814 +
15815 + /* Pullup /WR */
15816 +- gpiod_set_value(par->gpio.wr, 1);
15817 ++ gpiod_set_value(par->gpio.wr, 0);
15818 +
15819 + #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
15820 + prev_data = *(u16 *)buf;
15821 +diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
15822 +index 571f47d394843..bd5f874334043 100644
15823 +--- a/drivers/staging/gdm724x/gdm_lte.c
15824 ++++ b/drivers/staging/gdm724x/gdm_lte.c
15825 +@@ -611,10 +611,12 @@ static void gdm_lte_netif_rx(struct net_device *dev, char *buf,
15826 + * bytes (99,130,83,99 dec)
15827 + */
15828 + } __packed;
15829 +- void *addr = buf + sizeof(struct iphdr) +
15830 +- sizeof(struct udphdr) +
15831 +- offsetof(struct dhcp_packet, chaddr);
15832 +- ether_addr_copy(nic->dest_mac_addr, addr);
15833 ++ int offset = sizeof(struct iphdr) +
15834 ++ sizeof(struct udphdr) +
15835 ++ offsetof(struct dhcp_packet, chaddr);
15836 ++ if (offset + ETH_ALEN > len)
15837 ++ return;
15838 ++ ether_addr_copy(nic->dest_mac_addr, buf + offset);
15839 + }
15840 + }
15841 +
15842 +@@ -677,6 +679,7 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
15843 + struct sdu *sdu = NULL;
15844 + u8 endian = phy_dev->get_endian(phy_dev->priv_dev);
15845 + u8 *data = (u8 *)multi_sdu->data;
15846 ++ int copied;
15847 + u16 i = 0;
15848 + u16 num_packet;
15849 + u16 hci_len;
15850 +@@ -688,6 +691,12 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
15851 + num_packet = gdm_dev16_to_cpu(endian, multi_sdu->num_packet);
15852 +
15853 + for (i = 0; i < num_packet; i++) {
15854 ++ copied = data - multi_sdu->data;
15855 ++ if (len < copied + sizeof(*sdu)) {
15856 ++ pr_err("rx prevent buffer overflow");
15857 ++ return;
15858 ++ }
15859 ++
15860 + sdu = (struct sdu *)data;
15861 +
15862 + cmd_evt = gdm_dev16_to_cpu(endian, sdu->cmd_evt);
15863 +@@ -698,7 +707,8 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
15864 + pr_err("rx sdu wrong hci %04x\n", cmd_evt);
15865 + return;
15866 + }
15867 +- if (hci_len < 12) {
15868 ++ if (hci_len < 12 ||
15869 ++ len < copied + sizeof(*sdu) + (hci_len - 12)) {
15870 + pr_err("rx sdu invalid len %d\n", hci_len);
15871 + return;
15872 + }
15873 +diff --git a/drivers/staging/hikey9xx/hi6421v600-regulator.c b/drivers/staging/hikey9xx/hi6421v600-regulator.c
15874 +index e10fe3058176d..91136db3961ee 100644
15875 +--- a/drivers/staging/hikey9xx/hi6421v600-regulator.c
15876 ++++ b/drivers/staging/hikey9xx/hi6421v600-regulator.c
15877 +@@ -129,7 +129,7 @@ static unsigned int hi6421_spmi_regulator_get_mode(struct regulator_dev *rdev)
15878 + {
15879 + struct hi6421_spmi_reg_info *sreg = rdev_get_drvdata(rdev);
15880 + struct hi6421_spmi_pmic *pmic = sreg->pmic;
15881 +- u32 reg_val;
15882 ++ unsigned int reg_val;
15883 +
15884 + regmap_read(pmic->regmap, rdev->desc->enable_reg, &reg_val);
15885 +
15886 +@@ -144,14 +144,17 @@ static int hi6421_spmi_regulator_set_mode(struct regulator_dev *rdev,
15887 + {
15888 + struct hi6421_spmi_reg_info *sreg = rdev_get_drvdata(rdev);
15889 + struct hi6421_spmi_pmic *pmic = sreg->pmic;
15890 +- u32 val;
15891 ++ unsigned int val;
15892 +
15893 + switch (mode) {
15894 + case REGULATOR_MODE_NORMAL:
15895 + val = 0;
15896 + break;
15897 + case REGULATOR_MODE_IDLE:
15898 +- val = sreg->eco_mode_mask << (ffs(sreg->eco_mode_mask) - 1);
15899 ++ if (!sreg->eco_mode_mask)
15900 ++ return -EINVAL;
15901 ++
15902 ++ val = sreg->eco_mode_mask;
15903 + break;
15904 + default:
15905 + return -EINVAL;
15906 +diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
15907 +index e5f200e649933..2d6e0056be624 100644
15908 +--- a/drivers/staging/media/hantro/hantro_drv.c
15909 ++++ b/drivers/staging/media/hantro/hantro_drv.c
15910 +@@ -56,16 +56,12 @@ dma_addr_t hantro_get_ref(struct hantro_ctx *ctx, u64 ts)
15911 + return hantro_get_dec_buf_addr(ctx, buf);
15912 + }
15913 +
15914 +-static void hantro_job_finish(struct hantro_dev *vpu,
15915 +- struct hantro_ctx *ctx,
15916 +- enum vb2_buffer_state result)
15917 ++static void hantro_job_finish_no_pm(struct hantro_dev *vpu,
15918 ++ struct hantro_ctx *ctx,
15919 ++ enum vb2_buffer_state result)
15920 + {
15921 + struct vb2_v4l2_buffer *src, *dst;
15922 +
15923 +- pm_runtime_mark_last_busy(vpu->dev);
15924 +- pm_runtime_put_autosuspend(vpu->dev);
15925 +- clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
15926 +-
15927 + src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
15928 + dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
15929 +
15930 +@@ -81,6 +77,18 @@ static void hantro_job_finish(struct hantro_dev *vpu,
15931 + result);
15932 + }
15933 +
15934 ++static void hantro_job_finish(struct hantro_dev *vpu,
15935 ++ struct hantro_ctx *ctx,
15936 ++ enum vb2_buffer_state result)
15937 ++{
15938 ++ pm_runtime_mark_last_busy(vpu->dev);
15939 ++ pm_runtime_put_autosuspend(vpu->dev);
15940 ++
15941 ++ clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
15942 ++
15943 ++ hantro_job_finish_no_pm(vpu, ctx, result);
15944 ++}
15945 ++
15946 + void hantro_irq_done(struct hantro_dev *vpu,
15947 + enum vb2_buffer_state result)
15948 + {
15949 +@@ -152,12 +160,15 @@ static void device_run(void *priv)
15950 + src = hantro_get_src_buf(ctx);
15951 + dst = hantro_get_dst_buf(ctx);
15952 +
15953 ++ ret = pm_runtime_get_sync(ctx->dev->dev);
15954 ++ if (ret < 0) {
15955 ++ pm_runtime_put_noidle(ctx->dev->dev);
15956 ++ goto err_cancel_job;
15957 ++ }
15958 ++
15959 + ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
15960 + if (ret)
15961 + goto err_cancel_job;
15962 +- ret = pm_runtime_get_sync(ctx->dev->dev);
15963 +- if (ret < 0)
15964 +- goto err_cancel_job;
15965 +
15966 + v4l2_m2m_buf_copy_metadata(src, dst, true);
15967 +
15968 +@@ -165,7 +176,7 @@ static void device_run(void *priv)
15969 + return;
15970 +
15971 + err_cancel_job:
15972 +- hantro_job_finish(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
15973 ++ hantro_job_finish_no_pm(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
15974 + }
15975 +
15976 + static struct v4l2_m2m_ops vpu_m2m_ops = {
15977 +diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
15978 +index 1bc118e375a12..7ccc6405036ae 100644
15979 +--- a/drivers/staging/media/hantro/hantro_v4l2.c
15980 ++++ b/drivers/staging/media/hantro/hantro_v4l2.c
15981 +@@ -639,7 +639,14 @@ static int hantro_buf_prepare(struct vb2_buffer *vb)
15982 + ret = hantro_buf_plane_check(vb, pix_fmt);
15983 + if (ret)
15984 + return ret;
15985 +- vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
15986 ++ /*
15987 ++ * Buffer's bytesused must be written by driver for CAPTURE buffers.
15988 ++ * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
15989 ++ * it to buffer length).
15990 ++ */
15991 ++ if (V4L2_TYPE_IS_CAPTURE(vq->type))
15992 ++ vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
15993 ++
15994 + return 0;
15995 + }
15996 +
15997 +diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
15998 +index ef5add079774e..7f4b967646d98 100644
15999 +--- a/drivers/staging/media/imx/imx-media-csi.c
16000 ++++ b/drivers/staging/media/imx/imx-media-csi.c
16001 +@@ -753,9 +753,10 @@ static int csi_setup(struct csi_priv *priv)
16002 +
16003 + static int csi_start(struct csi_priv *priv)
16004 + {
16005 +- struct v4l2_fract *output_fi;
16006 ++ struct v4l2_fract *input_fi, *output_fi;
16007 + int ret;
16008 +
16009 ++ input_fi = &priv->frame_interval[CSI_SINK_PAD];
16010 + output_fi = &priv->frame_interval[priv->active_output_pad];
16011 +
16012 + /* start upstream */
16013 +@@ -764,6 +765,17 @@ static int csi_start(struct csi_priv *priv)
16014 + if (ret)
16015 + return ret;
16016 +
16017 ++ /* Skip first few frames from a BT.656 source */
16018 ++ if (priv->upstream_ep.bus_type == V4L2_MBUS_BT656) {
16019 ++ u32 delay_usec, bad_frames = 20;
16020 ++
16021 ++ delay_usec = DIV_ROUND_UP_ULL((u64)USEC_PER_SEC *
16022 ++ input_fi->numerator * bad_frames,
16023 ++ input_fi->denominator);
16024 ++
16025 ++ usleep_range(delay_usec, delay_usec + 1000);
16026 ++ }
16027 ++
16028 + if (priv->dest == IPU_CSI_DEST_IDMAC) {
16029 + ret = csi_idmac_start(priv);
16030 + if (ret)
16031 +diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
16032 +index a01a7364b4b94..b365790256e42 100644
16033 +--- a/drivers/staging/media/imx/imx7-mipi-csis.c
16034 ++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
16035 +@@ -597,13 +597,15 @@ static void mipi_csis_clear_counters(struct csi_state *state)
16036 +
16037 + static void mipi_csis_log_counters(struct csi_state *state, bool non_errors)
16038 + {
16039 +- int i = non_errors ? MIPI_CSIS_NUM_EVENTS : MIPI_CSIS_NUM_EVENTS - 4;
16040 ++ unsigned int num_events = non_errors ? MIPI_CSIS_NUM_EVENTS
16041 ++ : MIPI_CSIS_NUM_EVENTS - 6;
16042 + struct device *dev = &state->pdev->dev;
16043 + unsigned long flags;
16044 ++ unsigned int i;
16045 +
16046 + spin_lock_irqsave(&state->slock, flags);
16047 +
16048 +- for (i--; i >= 0; i--) {
16049 ++ for (i = 0; i < num_events; ++i) {
16050 + if (state->events[i].counter > 0 || state->debug)
16051 + dev_info(dev, "%s events: %d\n", state->events[i].name,
16052 + state->events[i].counter);
16053 +diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
16054 +index d821661d30f38..7131156c1f2cf 100644
16055 +--- a/drivers/staging/media/rkvdec/rkvdec.c
16056 ++++ b/drivers/staging/media/rkvdec/rkvdec.c
16057 +@@ -481,7 +481,15 @@ static int rkvdec_buf_prepare(struct vb2_buffer *vb)
16058 + if (vb2_plane_size(vb, i) < sizeimage)
16059 + return -EINVAL;
16060 + }
16061 +- vb2_set_plane_payload(vb, 0, f->fmt.pix_mp.plane_fmt[0].sizeimage);
16062 ++
16063 ++ /*
16064 ++ * Buffer's bytesused must be written by driver for CAPTURE buffers.
16065 ++ * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
16066 ++ * it to buffer length).
16067 ++ */
16068 ++ if (V4L2_TYPE_IS_CAPTURE(vq->type))
16069 ++ vb2_set_plane_payload(vb, 0, f->fmt.pix_mp.plane_fmt[0].sizeimage);
16070 ++
16071 + return 0;
16072 + }
16073 +
16074 +@@ -658,7 +666,7 @@ static void rkvdec_device_run(void *priv)
16075 + if (WARN_ON(!desc))
16076 + return;
16077 +
16078 +- ret = pm_runtime_get_sync(rkvdec->dev);
16079 ++ ret = pm_runtime_resume_and_get(rkvdec->dev);
16080 + if (ret < 0) {
16081 + rkvdec_job_finish_no_pm(ctx, VB2_BUF_STATE_ERROR);
16082 + return;
16083 +diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
16084 +index ce497d0197dfc..10744fab7ceaa 100644
16085 +--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
16086 ++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
16087 +@@ -477,8 +477,8 @@ static void cedrus_h265_setup(struct cedrus_ctx *ctx,
16088 + slice_params->flags);
16089 +
16090 + reg |= VE_DEC_H265_FLAG(VE_DEC_H265_DEC_SLICE_HDR_INFO0_FLAG_DEPENDENT_SLICE_SEGMENT,
16091 +- V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT,
16092 +- pps->flags);
16093 ++ V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT,
16094 ++ slice_params->flags);
16095 +
16096 + /* FIXME: For multi-slice support. */
16097 + reg |= VE_DEC_H265_DEC_SLICE_HDR_INFO0_FLAG_FIRST_SLICE_SEGMENT_IN_PIC;
16098 +diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_video.c b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
16099 +index b62eb8e840573..bf731caf2ed51 100644
16100 +--- a/drivers/staging/media/sunxi/cedrus/cedrus_video.c
16101 ++++ b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
16102 +@@ -457,7 +457,13 @@ static int cedrus_buf_prepare(struct vb2_buffer *vb)
16103 + if (vb2_plane_size(vb, 0) < pix_fmt->sizeimage)
16104 + return -EINVAL;
16105 +
16106 +- vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
16107 ++ /*
16108 ++ * Buffer's bytesused must be written by driver for CAPTURE buffers.
16109 ++ * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
16110 ++ * it to buffer length).
16111 ++ */
16112 ++ if (V4L2_TYPE_IS_CAPTURE(vq->type))
16113 ++ vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
16114 +
16115 + return 0;
16116 + }
16117 +diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
16118 +index 16fc94f654865..b3d08459acc85 100644
16119 +--- a/drivers/staging/mt7621-dts/mt7621.dtsi
16120 ++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
16121 +@@ -508,7 +508,7 @@
16122 +
16123 + bus-range = <0 255>;
16124 + ranges = <
16125 +- 0x02000000 0 0x00000000 0x60000000 0 0x10000000 /* pci memory */
16126 ++ 0x02000000 0 0x60000000 0x60000000 0 0x10000000 /* pci memory */
16127 + 0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
16128 + >;
16129 +
16130 +diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
16131 +index 715f1fe8b4726..22974277afa08 100644
16132 +--- a/drivers/staging/rtl8712/hal_init.c
16133 ++++ b/drivers/staging/rtl8712/hal_init.c
16134 +@@ -40,7 +40,10 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
16135 + dev_err(&udev->dev, "r8712u: Firmware request failed\n");
16136 + usb_put_dev(udev);
16137 + usb_set_intfdata(usb_intf, NULL);
16138 ++ r8712_free_drv_sw(adapter);
16139 ++ adapter->dvobj_deinit(adapter);
16140 + complete(&adapter->rtl8712_fw_ready);
16141 ++ free_netdev(adapter->pnetdev);
16142 + return;
16143 + }
16144 + adapter->fw = firmware;
16145 +diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c
16146 +index 0c3ae8495afb7..2214aca097308 100644
16147 +--- a/drivers/staging/rtl8712/os_intfs.c
16148 ++++ b/drivers/staging/rtl8712/os_intfs.c
16149 +@@ -328,8 +328,6 @@ int r8712_init_drv_sw(struct _adapter *padapter)
16150 +
16151 + void r8712_free_drv_sw(struct _adapter *padapter)
16152 + {
16153 +- struct net_device *pnetdev = padapter->pnetdev;
16154 +-
16155 + r8712_free_cmd_priv(&padapter->cmdpriv);
16156 + r8712_free_evt_priv(&padapter->evtpriv);
16157 + r8712_DeInitSwLeds(padapter);
16158 +@@ -339,8 +337,6 @@ void r8712_free_drv_sw(struct _adapter *padapter)
16159 + _r8712_free_sta_priv(&padapter->stapriv);
16160 + _r8712_free_recv_priv(&padapter->recvpriv);
16161 + mp871xdeinit(padapter);
16162 +- if (pnetdev)
16163 +- free_netdev(pnetdev);
16164 + }
16165 +
16166 + static void enable_video_mode(struct _adapter *padapter, int cbw40_value)
16167 +diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
16168 +index dc21e7743349c..b760bc3559373 100644
16169 +--- a/drivers/staging/rtl8712/usb_intf.c
16170 ++++ b/drivers/staging/rtl8712/usb_intf.c
16171 +@@ -361,7 +361,7 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
16172 + /* step 1. */
16173 + pnetdev = r8712_init_netdev();
16174 + if (!pnetdev)
16175 +- goto error;
16176 ++ goto put_dev;
16177 + padapter = netdev_priv(pnetdev);
16178 + disable_ht_for_spec_devid(pdid, padapter);
16179 + pdvobjpriv = &padapter->dvobjpriv;
16180 +@@ -381,16 +381,16 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
16181 + * initialize the dvobj_priv
16182 + */
16183 + if (!padapter->dvobj_init) {
16184 +- goto error;
16185 ++ goto put_dev;
16186 + } else {
16187 + status = padapter->dvobj_init(padapter);
16188 + if (status != _SUCCESS)
16189 +- goto error;
16190 ++ goto free_netdev;
16191 + }
16192 + /* step 4. */
16193 + status = r8712_init_drv_sw(padapter);
16194 + if (status)
16195 +- goto error;
16196 ++ goto dvobj_deinit;
16197 + /* step 5. read efuse/eeprom data and get mac_addr */
16198 + {
16199 + int i, offset;
16200 +@@ -570,17 +570,20 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
16201 + }
16202 + /* step 6. Load the firmware asynchronously */
16203 + if (rtl871x_load_fw(padapter))
16204 +- goto error;
16205 ++ goto deinit_drv_sw;
16206 + spin_lock_init(&padapter->lock_rx_ff0_filter);
16207 + mutex_init(&padapter->mutex_start);
16208 + return 0;
16209 +-error:
16210 ++
16211 ++deinit_drv_sw:
16212 ++ r8712_free_drv_sw(padapter);
16213 ++dvobj_deinit:
16214 ++ padapter->dvobj_deinit(padapter);
16215 ++free_netdev:
16216 ++ free_netdev(pnetdev);
16217 ++put_dev:
16218 + usb_put_dev(udev);
16219 + usb_set_intfdata(pusb_intf, NULL);
16220 +- if (padapter && padapter->dvobj_deinit)
16221 +- padapter->dvobj_deinit(padapter);
16222 +- if (pnetdev)
16223 +- free_netdev(pnetdev);
16224 + return -ENODEV;
16225 + }
16226 +
16227 +@@ -612,6 +615,7 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
16228 + r8712_stop_drv_timers(padapter);
16229 + r871x_dev_unload(padapter);
16230 + r8712_free_drv_sw(padapter);
16231 ++ free_netdev(pnetdev);
16232 +
16233 + /* decrease the reference count of the usb device structure
16234 + * when disconnect
16235 +diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
16236 +index 9097bcbd67d82..d697ea55a0da1 100644
16237 +--- a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
16238 ++++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
16239 +@@ -1862,7 +1862,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
16240 + int status;
16241 + int err = -ENODEV;
16242 + struct vchiq_mmal_instance *instance;
16243 +- static struct vchiq_instance *vchiq_instance;
16244 ++ struct vchiq_instance *vchiq_instance;
16245 + struct vchiq_service_params_kernel params = {
16246 + .version = VC_MMAL_VER,
16247 + .version_min = VC_MMAL_MIN_VER,
16248 +diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
16249 +index af35251232eb3..b044999ad002b 100644
16250 +--- a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
16251 ++++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
16252 +@@ -265,12 +265,13 @@ void cxgbit_unmap_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
16253 + struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
16254 +
16255 + if (ccmd->release) {
16256 +- struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
16257 +-
16258 +- if (ttinfo->sgl) {
16259 ++ if (cmd->se_cmd.se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
16260 ++ put_page(sg_page(&ccmd->sg));
16261 ++ } else {
16262 + struct cxgbit_sock *csk = conn->context;
16263 + struct cxgbit_device *cdev = csk->com.cdev;
16264 + struct cxgbi_ppm *ppm = cdev2ppm(cdev);
16265 ++ struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
16266 +
16267 + /* Abort the TCP conn if DDP is not complete to
16268 + * avoid any possibility of DDP after freeing
16269 +@@ -280,14 +281,14 @@ void cxgbit_unmap_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
16270 + cmd->se_cmd.data_length))
16271 + cxgbit_abort_conn(csk);
16272 +
16273 ++ if (unlikely(ttinfo->sgl)) {
16274 ++ dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
16275 ++ ttinfo->nents, DMA_FROM_DEVICE);
16276 ++ ttinfo->nents = 0;
16277 ++ ttinfo->sgl = NULL;
16278 ++ }
16279 + cxgbi_ppm_ppod_release(ppm, ttinfo->idx);
16280 +-
16281 +- dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
16282 +- ttinfo->nents, DMA_FROM_DEVICE);
16283 +- } else {
16284 +- put_page(sg_page(&ccmd->sg));
16285 + }
16286 +-
16287 + ccmd->release = false;
16288 + }
16289 + }
16290 +diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
16291 +index b926e1d6c7b8e..282297ffc4044 100644
16292 +--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c
16293 ++++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
16294 +@@ -997,17 +997,18 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
16295 + struct scatterlist *sg_start;
16296 + struct iscsi_conn *conn = csk->conn;
16297 + struct iscsi_cmd *cmd = NULL;
16298 ++ struct cxgbit_cmd *ccmd;
16299 ++ struct cxgbi_task_tag_info *ttinfo;
16300 + struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
16301 + struct iscsi_data *hdr = (struct iscsi_data *)pdu_cb->hdr;
16302 + u32 data_offset = be32_to_cpu(hdr->offset);
16303 +- u32 data_len = pdu_cb->dlen;
16304 ++ u32 data_len = ntoh24(hdr->dlength);
16305 + int rc, sg_nents, sg_off;
16306 + bool dcrc_err = false;
16307 +
16308 + if (pdu_cb->flags & PDUCBF_RX_DDP_CMP) {
16309 + u32 offset = be32_to_cpu(hdr->offset);
16310 + u32 ddp_data_len;
16311 +- u32 payload_length = ntoh24(hdr->dlength);
16312 + bool success = false;
16313 +
16314 + cmd = iscsit_find_cmd_from_itt_or_dump(conn, hdr->itt, 0);
16315 +@@ -1022,7 +1023,7 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
16316 + cmd->data_sn = be32_to_cpu(hdr->datasn);
16317 +
16318 + rc = __iscsit_check_dataout_hdr(conn, (unsigned char *)hdr,
16319 +- cmd, payload_length, &success);
16320 ++ cmd, data_len, &success);
16321 + if (rc < 0)
16322 + return rc;
16323 + else if (!success)
16324 +@@ -1060,6 +1061,20 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
16325 + cxgbit_skb_copy_to_sg(csk->skb, sg_start, sg_nents, skip);
16326 + }
16327 +
16328 ++ ccmd = iscsit_priv_cmd(cmd);
16329 ++ ttinfo = &ccmd->ttinfo;
16330 ++
16331 ++ if (ccmd->release && ttinfo->sgl &&
16332 ++ (cmd->se_cmd.data_length == (cmd->write_data_done + data_len))) {
16333 ++ struct cxgbit_device *cdev = csk->com.cdev;
16334 ++ struct cxgbi_ppm *ppm = cdev2ppm(cdev);
16335 ++
16336 ++ dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl, ttinfo->nents,
16337 ++ DMA_FROM_DEVICE);
16338 ++ ttinfo->nents = 0;
16339 ++ ttinfo->sgl = NULL;
16340 ++ }
16341 ++
16342 + check_payload:
16343 +
16344 + rc = iscsit_check_dataout_payload(cmd, hdr, dcrc_err);
16345 +diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
16346 +index 6956581ed7a4f..b8ded3aef371e 100644
16347 +--- a/drivers/thermal/cpufreq_cooling.c
16348 ++++ b/drivers/thermal/cpufreq_cooling.c
16349 +@@ -487,7 +487,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
16350 + ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
16351 + if (ret >= 0) {
16352 + cpufreq_cdev->cpufreq_state = state;
16353 +- cpus = cpufreq_cdev->policy->cpus;
16354 ++ cpus = cpufreq_cdev->policy->related_cpus;
16355 + max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
16356 + capacity = frequency * max_capacity;
16357 + capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
16358 +diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
16359 +index 464c2d37b992e..e254f8c37cb73 100644
16360 +--- a/drivers/thunderbolt/test.c
16361 ++++ b/drivers/thunderbolt/test.c
16362 +@@ -259,14 +259,14 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
16363 + if (port->dual_link_port && upstream_port->dual_link_port) {
16364 + port->dual_link_port->remote = upstream_port->dual_link_port;
16365 + upstream_port->dual_link_port->remote = port->dual_link_port;
16366 +- }
16367 +
16368 +- if (bonded) {
16369 +- /* Bonding is used */
16370 +- port->bonded = true;
16371 +- port->dual_link_port->bonded = true;
16372 +- upstream_port->bonded = true;
16373 +- upstream_port->dual_link_port->bonded = true;
16374 ++ if (bonded) {
16375 ++ /* Bonding is used */
16376 ++ port->bonded = true;
16377 ++ port->dual_link_port->bonded = true;
16378 ++ upstream_port->bonded = true;
16379 ++ upstream_port->dual_link_port->bonded = true;
16380 ++ }
16381 + }
16382 +
16383 + return sw;
16384 +diff --git a/drivers/tty/nozomi.c b/drivers/tty/nozomi.c
16385 +index 861e950431919..1076f884d9f9e 100644
16386 +--- a/drivers/tty/nozomi.c
16387 ++++ b/drivers/tty/nozomi.c
16388 +@@ -1391,7 +1391,7 @@ static int nozomi_card_init(struct pci_dev *pdev,
16389 + NOZOMI_NAME, dc);
16390 + if (unlikely(ret)) {
16391 + dev_err(&pdev->dev, "can't request irq %d\n", pdev->irq);
16392 +- goto err_free_kfifo;
16393 ++ goto err_free_all_kfifo;
16394 + }
16395 +
16396 + DBG1("base_addr: %p", dc->base_addr);
16397 +@@ -1429,12 +1429,15 @@ static int nozomi_card_init(struct pci_dev *pdev,
16398 + return 0;
16399 +
16400 + err_free_tty:
16401 +- for (i = 0; i < MAX_PORT; ++i) {
16402 ++ for (i--; i >= 0; i--) {
16403 + tty_unregister_device(ntty_driver, dc->index_start + i);
16404 + tty_port_destroy(&dc->port[i].port);
16405 + }
16406 ++ free_irq(pdev->irq, dc);
16407 ++err_free_all_kfifo:
16408 ++ i = MAX_PORT;
16409 + err_free_kfifo:
16410 +- for (i = 0; i < MAX_PORT; i++)
16411 ++ for (i--; i >= PORT_MDM; i--)
16412 + kfifo_free(&dc->port[i].fifo_ul);
16413 + err_free_sbuf:
16414 + kfree(dc->send_buf);
16415 +diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
16416 +index 23e0decde33eb..c37468887fd2a 100644
16417 +--- a/drivers/tty/serial/8250/8250_omap.c
16418 ++++ b/drivers/tty/serial/8250/8250_omap.c
16419 +@@ -43,6 +43,7 @@
16420 + #define UART_ERRATA_CLOCK_DISABLE (1 << 3)
16421 + #define UART_HAS_EFR2 BIT(4)
16422 + #define UART_HAS_RHR_IT_DIS BIT(5)
16423 ++#define UART_RX_TIMEOUT_QUIRK BIT(6)
16424 +
16425 + #define OMAP_UART_FCR_RX_TRIG 6
16426 + #define OMAP_UART_FCR_TX_TRIG 4
16427 +@@ -104,6 +105,9 @@
16428 + #define UART_OMAP_EFR2 0x23
16429 + #define UART_OMAP_EFR2_TIMEOUT_BEHAVE BIT(6)
16430 +
16431 ++/* RX FIFO occupancy indicator */
16432 ++#define UART_OMAP_RX_LVL 0x64
16433 ++
16434 + struct omap8250_priv {
16435 + int line;
16436 + u8 habit;
16437 +@@ -611,6 +615,7 @@ static int omap_8250_dma_handle_irq(struct uart_port *port);
16438 + static irqreturn_t omap8250_irq(int irq, void *dev_id)
16439 + {
16440 + struct uart_port *port = dev_id;
16441 ++ struct omap8250_priv *priv = port->private_data;
16442 + struct uart_8250_port *up = up_to_u8250p(port);
16443 + unsigned int iir;
16444 + int ret;
16445 +@@ -625,6 +630,18 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
16446 + serial8250_rpm_get(up);
16447 + iir = serial_port_in(port, UART_IIR);
16448 + ret = serial8250_handle_irq(port, iir);
16449 ++
16450 ++ /*
16451 ++ * On K3 SoCs, it is observed that RX TIMEOUT is signalled after
16452 ++ * FIFO has been drained, in which case a dummy read of RX FIFO
16453 ++ * is required to clear RX TIMEOUT condition.
16454 ++ */
16455 ++ if (priv->habit & UART_RX_TIMEOUT_QUIRK &&
16456 ++ (iir & UART_IIR_RX_TIMEOUT) == UART_IIR_RX_TIMEOUT &&
16457 ++ serial_port_in(port, UART_OMAP_RX_LVL) == 0) {
16458 ++ serial_port_in(port, UART_RX);
16459 ++ }
16460 ++
16461 + serial8250_rpm_put(up);
16462 +
16463 + return IRQ_RETVAL(ret);
16464 +@@ -813,7 +830,7 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
16465 + poll_count--)
16466 + cpu_relax();
16467 +
16468 +- if (!poll_count)
16469 ++ if (poll_count == -1)
16470 + dev_err(p->port.dev, "teardown incomplete\n");
16471 + }
16472 + }
16473 +@@ -1218,7 +1235,8 @@ static struct omap8250_dma_params am33xx_dma = {
16474 +
16475 + static struct omap8250_platdata am654_platdata = {
16476 + .dma_params = &am654_dma,
16477 +- .habit = UART_HAS_EFR2 | UART_HAS_RHR_IT_DIS,
16478 ++ .habit = UART_HAS_EFR2 | UART_HAS_RHR_IT_DIS |
16479 ++ UART_RX_TIMEOUT_QUIRK,
16480 + };
16481 +
16482 + static struct omap8250_platdata am33xx_platdata = {
16483 +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
16484 +index 6e141429c9808..6d9c494bed7d2 100644
16485 +--- a/drivers/tty/serial/8250/8250_port.c
16486 ++++ b/drivers/tty/serial/8250/8250_port.c
16487 +@@ -2635,6 +2635,21 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
16488 + struct ktermios *old)
16489 + {
16490 + unsigned int tolerance = port->uartclk / 100;
16491 ++ unsigned int min;
16492 ++ unsigned int max;
16493 ++
16494 ++ /*
16495 ++ * Handle magic divisors for baud rates above baud_base on SMSC
16496 ++ * Super I/O chips. Enable custom rates of clk/4 and clk/8, but
16497 ++ * disable divisor values beyond 32767, which are unavailable.
16498 ++ */
16499 ++ if (port->flags & UPF_MAGIC_MULTIPLIER) {
16500 ++ min = port->uartclk / 16 / UART_DIV_MAX >> 1;
16501 ++ max = (port->uartclk + tolerance) / 4;
16502 ++ } else {
16503 ++ min = port->uartclk / 16 / UART_DIV_MAX;
16504 ++ max = (port->uartclk + tolerance) / 16;
16505 ++ }
16506 +
16507 + /*
16508 + * Ask the core to calculate the divisor for us.
16509 +@@ -2642,9 +2657,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
16510 + * slower than nominal still match standard baud rates without
16511 + * causing transmission errors.
16512 + */
16513 +- return uart_get_baud_rate(port, termios, old,
16514 +- port->uartclk / 16 / UART_DIV_MAX,
16515 +- (port->uartclk + tolerance) / 16);
16516 ++ return uart_get_baud_rate(port, termios, old, min, max);
16517 + }
16518 +
16519 + /*
16520 +diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
16521 +index 35ff6627c61be..1cc749903d127 100644
16522 +--- a/drivers/tty/serial/8250/serial_cs.c
16523 ++++ b/drivers/tty/serial/8250/serial_cs.c
16524 +@@ -777,6 +777,7 @@ static const struct pcmcia_device_id serial_ids[] = {
16525 + PCMCIA_DEVICE_PROD_ID12("Multi-Tech", "MT2834LT", 0x5f73be51, 0x4cd7c09e),
16526 + PCMCIA_DEVICE_PROD_ID12("OEM ", "C288MX ", 0xb572d360, 0xd2385b7a),
16527 + PCMCIA_DEVICE_PROD_ID12("Option International", "V34bis GSM/PSTN Data/Fax Modem", 0x9d7cd6f5, 0x5cb8bf41),
16528 ++ PCMCIA_DEVICE_PROD_ID12("Option International", "GSM-Ready 56K/ISDN", 0x9d7cd6f5, 0xb23844aa),
16529 + PCMCIA_DEVICE_PROD_ID12("PCMCIA ", "C336MX ", 0x99bcafe9, 0xaa25bcab),
16530 + PCMCIA_DEVICE_PROD_ID12("Quatech Inc", "PCMCIA Dual RS-232 Serial Port Card", 0xc4420b35, 0x92abc92f),
16531 + PCMCIA_DEVICE_PROD_ID12("Quatech Inc", "Dual RS-232 Serial Port PC Card", 0xc4420b35, 0x031a380d),
16532 +@@ -804,7 +805,6 @@ static const struct pcmcia_device_id serial_ids[] = {
16533 + PCMCIA_DEVICE_CIS_PROD_ID12("ADVANTECH", "COMpad-32/85B-4", 0x96913a85, 0xcec8f102, "cis/COMpad4.cis"),
16534 + PCMCIA_DEVICE_CIS_PROD_ID123("ADVANTECH", "COMpad-32/85", "1.0", 0x96913a85, 0x8fbe92ae, 0x0877b627, "cis/COMpad2.cis"),
16535 + PCMCIA_DEVICE_CIS_PROD_ID2("RS-COM 2P", 0xad20b156, "cis/RS-COM-2P.cis"),
16536 +- PCMCIA_DEVICE_CIS_MANF_CARD(0x0013, 0x0000, "cis/GLOBETROTTER.cis"),
16537 + PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL100 1.00.", 0x19ca78af, 0xf964f42b),
16538 + PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL100", 0x19ca78af, 0x71d98e83),
16539 + PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL232 1.00.", 0x19ca78af, 0x69fb7490),
16540 +diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
16541 +index 794035041744f..9c78e43e669d7 100644
16542 +--- a/drivers/tty/serial/fsl_lpuart.c
16543 ++++ b/drivers/tty/serial/fsl_lpuart.c
16544 +@@ -1408,17 +1408,7 @@ static unsigned int lpuart_get_mctrl(struct uart_port *port)
16545 +
16546 + static unsigned int lpuart32_get_mctrl(struct uart_port *port)
16547 + {
16548 +- unsigned int temp = 0;
16549 +- unsigned long reg;
16550 +-
16551 +- reg = lpuart32_read(port, UARTMODIR);
16552 +- if (reg & UARTMODIR_TXCTSE)
16553 +- temp |= TIOCM_CTS;
16554 +-
16555 +- if (reg & UARTMODIR_RXRTSE)
16556 +- temp |= TIOCM_RTS;
16557 +-
16558 +- return temp;
16559 ++ return 0;
16560 + }
16561 +
16562 + static void lpuart_set_mctrl(struct uart_port *port, unsigned int mctrl)
16563 +@@ -1625,7 +1615,7 @@ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
16564 + sport->lpuart_dma_rx_use = true;
16565 + rx_dma_timer_init(sport);
16566 +
16567 +- if (sport->port.has_sysrq) {
16568 ++ if (sport->port.has_sysrq && !lpuart_is_32(sport)) {
16569 + cr3 = readb(sport->port.membase + UARTCR3);
16570 + cr3 |= UARTCR3_FEIE;
16571 + writeb(cr3, sport->port.membase + UARTCR3);
16572 +diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
16573 +index 51b0ecabf2ec9..1e26220c78527 100644
16574 +--- a/drivers/tty/serial/mvebu-uart.c
16575 ++++ b/drivers/tty/serial/mvebu-uart.c
16576 +@@ -445,12 +445,11 @@ static void mvebu_uart_shutdown(struct uart_port *port)
16577 +
16578 + static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
16579 + {
16580 +- struct mvebu_uart *mvuart = to_mvuart(port);
16581 + unsigned int d_divisor, m_divisor;
16582 + u32 brdv, osamp;
16583 +
16584 +- if (IS_ERR(mvuart->clk))
16585 +- return -PTR_ERR(mvuart->clk);
16586 ++ if (!port->uartclk)
16587 ++ return -EOPNOTSUPP;
16588 +
16589 + /*
16590 + * The baudrate is derived from the UART clock thanks to two divisors:
16591 +@@ -463,7 +462,7 @@ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
16592 + * makes use of D to configure the desired baudrate.
16593 + */
16594 + m_divisor = OSAMP_DEFAULT_DIVISOR;
16595 +- d_divisor = DIV_ROUND_UP(port->uartclk, baud * m_divisor);
16596 ++ d_divisor = DIV_ROUND_CLOSEST(port->uartclk, baud * m_divisor);
16597 +
16598 + brdv = readl(port->membase + UART_BRDV);
16599 + brdv &= ~BRDV_BAUD_MASK;
16600 +@@ -482,7 +481,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
16601 + struct ktermios *old)
16602 + {
16603 + unsigned long flags;
16604 +- unsigned int baud;
16605 ++ unsigned int baud, min_baud, max_baud;
16606 +
16607 + spin_lock_irqsave(&port->lock, flags);
16608 +
16609 +@@ -501,16 +500,21 @@ static void mvebu_uart_set_termios(struct uart_port *port,
16610 + port->ignore_status_mask |= STAT_RX_RDY(port) | STAT_BRK_ERR;
16611 +
16612 + /*
16613 ++ * Maximal divisor is 1023 * 16 when using default (x16) scheme.
16614 + * Maximum achievable frequency with simple baudrate divisor is 230400.
16615 + * Since the error per bit frame would be of more than 15%, achieving
16616 + * higher frequencies would require to implement the fractional divisor
16617 + * feature.
16618 + */
16619 +- baud = uart_get_baud_rate(port, termios, old, 0, 230400);
16620 ++ min_baud = DIV_ROUND_UP(port->uartclk, 1023 * 16);
16621 ++ max_baud = 230400;
16622 ++
16623 ++ baud = uart_get_baud_rate(port, termios, old, min_baud, max_baud);
16624 + if (mvebu_uart_baud_rate_set(port, baud)) {
16625 + /* No clock available, baudrate cannot be changed */
16626 + if (old)
16627 +- baud = uart_get_baud_rate(port, old, NULL, 0, 230400);
16628 ++ baud = uart_get_baud_rate(port, old, NULL,
16629 ++ min_baud, max_baud);
16630 + } else {
16631 + tty_termios_encode_baud_rate(termios, baud, baud);
16632 + uart_update_timeout(port, termios->c_cflag, baud);
16633 +diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
16634 +index 3b1aaa93d750e..70898a999a498 100644
16635 +--- a/drivers/tty/serial/sh-sci.c
16636 ++++ b/drivers/tty/serial/sh-sci.c
16637 +@@ -610,6 +610,14 @@ static void sci_stop_tx(struct uart_port *port)
16638 + ctrl &= ~SCSCR_TIE;
16639 +
16640 + serial_port_out(port, SCSCR, ctrl);
16641 ++
16642 ++#ifdef CONFIG_SERIAL_SH_SCI_DMA
16643 ++ if (to_sci_port(port)->chan_tx &&
16644 ++ !dma_submit_error(to_sci_port(port)->cookie_tx)) {
16645 ++ dmaengine_terminate_async(to_sci_port(port)->chan_tx);
16646 ++ to_sci_port(port)->cookie_tx = -EINVAL;
16647 ++ }
16648 ++#endif
16649 + }
16650 +
16651 + static void sci_start_rx(struct uart_port *port)
16652 +diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
16653 +index c103961c3fae9..68a282ceb4345 100644
16654 +--- a/drivers/usb/class/cdc-acm.c
16655 ++++ b/drivers/usb/class/cdc-acm.c
16656 +@@ -1951,6 +1951,11 @@ static const struct usb_device_id acm_ids[] = {
16657 + .driver_info = IGNORE_DEVICE,
16658 + },
16659 +
16660 ++ /* Exclude Heimann Sensor GmbH USB appset demo */
16661 ++ { USB_DEVICE(0x32a7, 0x0000),
16662 ++ .driver_info = IGNORE_DEVICE,
16663 ++ },
16664 ++
16665 + /* control interfaces without any protocol set */
16666 + { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
16667 + USB_CDC_PROTO_NONE) },
16668 +diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
16669 +index fec17a2d2447d..15911ac7582b4 100644
16670 +--- a/drivers/usb/dwc2/core.c
16671 ++++ b/drivers/usb/dwc2/core.c
16672 +@@ -1167,15 +1167,6 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
16673 + usbcfg &= ~(GUSBCFG_ULPI_UTMI_SEL | GUSBCFG_PHYIF16);
16674 + if (hsotg->params.phy_utmi_width == 16)
16675 + usbcfg |= GUSBCFG_PHYIF16;
16676 +-
16677 +- /* Set turnaround time */
16678 +- if (dwc2_is_device_mode(hsotg)) {
16679 +- usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
16680 +- if (hsotg->params.phy_utmi_width == 16)
16681 +- usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
16682 +- else
16683 +- usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
16684 +- }
16685 + break;
16686 + default:
16687 + dev_err(hsotg->dev, "FS PHY selected at HS!\n");
16688 +@@ -1197,6 +1188,24 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
16689 + return retval;
16690 + }
16691 +
16692 ++static void dwc2_set_turnaround_time(struct dwc2_hsotg *hsotg)
16693 ++{
16694 ++ u32 usbcfg;
16695 ++
16696 ++ if (hsotg->params.phy_type != DWC2_PHY_TYPE_PARAM_UTMI)
16697 ++ return;
16698 ++
16699 ++ usbcfg = dwc2_readl(hsotg, GUSBCFG);
16700 ++
16701 ++ usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
16702 ++ if (hsotg->params.phy_utmi_width == 16)
16703 ++ usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
16704 ++ else
16705 ++ usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
16706 ++
16707 ++ dwc2_writel(hsotg, usbcfg, GUSBCFG);
16708 ++}
16709 ++
16710 + int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
16711 + {
16712 + u32 usbcfg;
16713 +@@ -1214,6 +1223,9 @@ int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
16714 + retval = dwc2_hs_phy_init(hsotg, select_phy);
16715 + if (retval)
16716 + return retval;
16717 ++
16718 ++ if (dwc2_is_device_mode(hsotg))
16719 ++ dwc2_set_turnaround_time(hsotg);
16720 + }
16721 +
16722 + if (hsotg->hw_params.hs_phy_type == GHWCFG2_HS_PHY_TYPE_ULPI &&
16723 +diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
16724 +index 0022039bc2355..8e740c7623e40 100644
16725 +--- a/drivers/usb/dwc3/core.c
16726 ++++ b/drivers/usb/dwc3/core.c
16727 +@@ -1605,17 +1605,18 @@ static int dwc3_probe(struct platform_device *pdev)
16728 + }
16729 +
16730 + dwc3_check_params(dwc);
16731 ++ dwc3_debugfs_init(dwc);
16732 +
16733 + ret = dwc3_core_init_mode(dwc);
16734 + if (ret)
16735 + goto err5;
16736 +
16737 +- dwc3_debugfs_init(dwc);
16738 + pm_runtime_put(dev);
16739 +
16740 + return 0;
16741 +
16742 + err5:
16743 ++ dwc3_debugfs_exit(dwc);
16744 + dwc3_event_buffers_cleanup(dwc);
16745 +
16746 + usb_phy_shutdown(dwc->usb2_phy);
16747 +diff --git a/drivers/usb/gadget/function/f_eem.c b/drivers/usb/gadget/function/f_eem.c
16748 +index 2cd9942707b46..5d38f29bda720 100644
16749 +--- a/drivers/usb/gadget/function/f_eem.c
16750 ++++ b/drivers/usb/gadget/function/f_eem.c
16751 +@@ -30,6 +30,11 @@ struct f_eem {
16752 + u8 ctrl_id;
16753 + };
16754 +
16755 ++struct in_context {
16756 ++ struct sk_buff *skb;
16757 ++ struct usb_ep *ep;
16758 ++};
16759 ++
16760 + static inline struct f_eem *func_to_eem(struct usb_function *f)
16761 + {
16762 + return container_of(f, struct f_eem, port.func);
16763 +@@ -320,9 +325,12 @@ fail:
16764 +
16765 + static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req)
16766 + {
16767 +- struct sk_buff *skb = (struct sk_buff *)req->context;
16768 ++ struct in_context *ctx = req->context;
16769 +
16770 +- dev_kfree_skb_any(skb);
16771 ++ dev_kfree_skb_any(ctx->skb);
16772 ++ kfree(req->buf);
16773 ++ usb_ep_free_request(ctx->ep, req);
16774 ++ kfree(ctx);
16775 + }
16776 +
16777 + /*
16778 +@@ -410,7 +418,9 @@ static int eem_unwrap(struct gether *port,
16779 + * b15: bmType (0 == data, 1 == command)
16780 + */
16781 + if (header & BIT(15)) {
16782 +- struct usb_request *req = cdev->req;
16783 ++ struct usb_request *req;
16784 ++ struct in_context *ctx;
16785 ++ struct usb_ep *ep;
16786 + u16 bmEEMCmd;
16787 +
16788 + /* EEM command packet format:
16789 +@@ -439,11 +449,36 @@ static int eem_unwrap(struct gether *port,
16790 + skb_trim(skb2, len);
16791 + put_unaligned_le16(BIT(15) | BIT(11) | len,
16792 + skb_push(skb2, 2));
16793 ++
16794 ++ ep = port->in_ep;
16795 ++ req = usb_ep_alloc_request(ep, GFP_ATOMIC);
16796 ++ if (!req) {
16797 ++ dev_kfree_skb_any(skb2);
16798 ++ goto next;
16799 ++ }
16800 ++
16801 ++ req->buf = kmalloc(skb2->len, GFP_KERNEL);
16802 ++ if (!req->buf) {
16803 ++ usb_ep_free_request(ep, req);
16804 ++ dev_kfree_skb_any(skb2);
16805 ++ goto next;
16806 ++ }
16807 ++
16808 ++ ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
16809 ++ if (!ctx) {
16810 ++ kfree(req->buf);
16811 ++ usb_ep_free_request(ep, req);
16812 ++ dev_kfree_skb_any(skb2);
16813 ++ goto next;
16814 ++ }
16815 ++ ctx->skb = skb2;
16816 ++ ctx->ep = ep;
16817 ++
16818 + skb_copy_bits(skb2, 0, req->buf, skb2->len);
16819 + req->length = skb2->len;
16820 + req->complete = eem_cmd_complete;
16821 + req->zero = 1;
16822 +- req->context = skb2;
16823 ++ req->context = ctx;
16824 + if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC))
16825 + DBG(cdev, "echo response queue fail\n");
16826 + break;
16827 +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
16828 +index f29abc7867d59..366509e89b984 100644
16829 +--- a/drivers/usb/gadget/function/f_fs.c
16830 ++++ b/drivers/usb/gadget/function/f_fs.c
16831 +@@ -250,8 +250,8 @@ EXPORT_SYMBOL_GPL(ffs_lock);
16832 + static struct ffs_dev *_ffs_find_dev(const char *name);
16833 + static struct ffs_dev *_ffs_alloc_dev(void);
16834 + static void _ffs_free_dev(struct ffs_dev *dev);
16835 +-static void *ffs_acquire_dev(const char *dev_name);
16836 +-static void ffs_release_dev(struct ffs_data *ffs_data);
16837 ++static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data);
16838 ++static void ffs_release_dev(struct ffs_dev *ffs_dev);
16839 + static int ffs_ready(struct ffs_data *ffs);
16840 + static void ffs_closed(struct ffs_data *ffs);
16841 +
16842 +@@ -1554,8 +1554,8 @@ unmapped_value:
16843 + static int ffs_fs_get_tree(struct fs_context *fc)
16844 + {
16845 + struct ffs_sb_fill_data *ctx = fc->fs_private;
16846 +- void *ffs_dev;
16847 + struct ffs_data *ffs;
16848 ++ int ret;
16849 +
16850 + ENTER();
16851 +
16852 +@@ -1574,13 +1574,12 @@ static int ffs_fs_get_tree(struct fs_context *fc)
16853 + return -ENOMEM;
16854 + }
16855 +
16856 +- ffs_dev = ffs_acquire_dev(ffs->dev_name);
16857 +- if (IS_ERR(ffs_dev)) {
16858 ++ ret = ffs_acquire_dev(ffs->dev_name, ffs);
16859 ++ if (ret) {
16860 + ffs_data_put(ffs);
16861 +- return PTR_ERR(ffs_dev);
16862 ++ return ret;
16863 + }
16864 +
16865 +- ffs->private_data = ffs_dev;
16866 + ctx->ffs_data = ffs;
16867 + return get_tree_nodev(fc, ffs_sb_fill);
16868 + }
16869 +@@ -1591,7 +1590,6 @@ static void ffs_fs_free_fc(struct fs_context *fc)
16870 +
16871 + if (ctx) {
16872 + if (ctx->ffs_data) {
16873 +- ffs_release_dev(ctx->ffs_data);
16874 + ffs_data_put(ctx->ffs_data);
16875 + }
16876 +
16877 +@@ -1630,10 +1628,8 @@ ffs_fs_kill_sb(struct super_block *sb)
16878 + ENTER();
16879 +
16880 + kill_litter_super(sb);
16881 +- if (sb->s_fs_info) {
16882 +- ffs_release_dev(sb->s_fs_info);
16883 ++ if (sb->s_fs_info)
16884 + ffs_data_closed(sb->s_fs_info);
16885 +- }
16886 + }
16887 +
16888 + static struct file_system_type ffs_fs_type = {
16889 +@@ -1703,6 +1699,7 @@ static void ffs_data_put(struct ffs_data *ffs)
16890 + if (refcount_dec_and_test(&ffs->ref)) {
16891 + pr_info("%s(): freeing\n", __func__);
16892 + ffs_data_clear(ffs);
16893 ++ ffs_release_dev(ffs->private_data);
16894 + BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
16895 + swait_active(&ffs->ep0req_completion.wait) ||
16896 + waitqueue_active(&ffs->wait));
16897 +@@ -3032,6 +3029,7 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
16898 + struct ffs_function *func = ffs_func_from_usb(f);
16899 + struct f_fs_opts *ffs_opts =
16900 + container_of(f->fi, struct f_fs_opts, func_inst);
16901 ++ struct ffs_data *ffs_data;
16902 + int ret;
16903 +
16904 + ENTER();
16905 +@@ -3046,12 +3044,13 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
16906 + if (!ffs_opts->no_configfs)
16907 + ffs_dev_lock();
16908 + ret = ffs_opts->dev->desc_ready ? 0 : -ENODEV;
16909 +- func->ffs = ffs_opts->dev->ffs_data;
16910 ++ ffs_data = ffs_opts->dev->ffs_data;
16911 + if (!ffs_opts->no_configfs)
16912 + ffs_dev_unlock();
16913 + if (ret)
16914 + return ERR_PTR(ret);
16915 +
16916 ++ func->ffs = ffs_data;
16917 + func->conf = c;
16918 + func->gadget = c->cdev->gadget;
16919 +
16920 +@@ -3506,6 +3505,7 @@ static void ffs_free_inst(struct usb_function_instance *f)
16921 + struct f_fs_opts *opts;
16922 +
16923 + opts = to_f_fs_opts(f);
16924 ++ ffs_release_dev(opts->dev);
16925 + ffs_dev_lock();
16926 + _ffs_free_dev(opts->dev);
16927 + ffs_dev_unlock();
16928 +@@ -3693,47 +3693,48 @@ static void _ffs_free_dev(struct ffs_dev *dev)
16929 + {
16930 + list_del(&dev->entry);
16931 +
16932 +- /* Clear the private_data pointer to stop incorrect dev access */
16933 +- if (dev->ffs_data)
16934 +- dev->ffs_data->private_data = NULL;
16935 +-
16936 + kfree(dev);
16937 + if (list_empty(&ffs_devices))
16938 + functionfs_cleanup();
16939 + }
16940 +
16941 +-static void *ffs_acquire_dev(const char *dev_name)
16942 ++static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data)
16943 + {
16944 ++ int ret = 0;
16945 + struct ffs_dev *ffs_dev;
16946 +
16947 + ENTER();
16948 + ffs_dev_lock();
16949 +
16950 + ffs_dev = _ffs_find_dev(dev_name);
16951 +- if (!ffs_dev)
16952 +- ffs_dev = ERR_PTR(-ENOENT);
16953 +- else if (ffs_dev->mounted)
16954 +- ffs_dev = ERR_PTR(-EBUSY);
16955 +- else if (ffs_dev->ffs_acquire_dev_callback &&
16956 +- ffs_dev->ffs_acquire_dev_callback(ffs_dev))
16957 +- ffs_dev = ERR_PTR(-ENOENT);
16958 +- else
16959 ++ if (!ffs_dev) {
16960 ++ ret = -ENOENT;
16961 ++ } else if (ffs_dev->mounted) {
16962 ++ ret = -EBUSY;
16963 ++ } else if (ffs_dev->ffs_acquire_dev_callback &&
16964 ++ ffs_dev->ffs_acquire_dev_callback(ffs_dev)) {
16965 ++ ret = -ENOENT;
16966 ++ } else {
16967 + ffs_dev->mounted = true;
16968 ++ ffs_dev->ffs_data = ffs_data;
16969 ++ ffs_data->private_data = ffs_dev;
16970 ++ }
16971 +
16972 + ffs_dev_unlock();
16973 +- return ffs_dev;
16974 ++ return ret;
16975 + }
16976 +
16977 +-static void ffs_release_dev(struct ffs_data *ffs_data)
16978 ++static void ffs_release_dev(struct ffs_dev *ffs_dev)
16979 + {
16980 +- struct ffs_dev *ffs_dev;
16981 +-
16982 + ENTER();
16983 + ffs_dev_lock();
16984 +
16985 +- ffs_dev = ffs_data->private_data;
16986 +- if (ffs_dev) {
16987 ++ if (ffs_dev && ffs_dev->mounted) {
16988 + ffs_dev->mounted = false;
16989 ++ if (ffs_dev->ffs_data) {
16990 ++ ffs_dev->ffs_data->private_data = NULL;
16991 ++ ffs_dev->ffs_data = NULL;
16992 ++ }
16993 +
16994 + if (ffs_dev->ffs_release_dev_callback)
16995 + ffs_dev->ffs_release_dev_callback(ffs_dev);
16996 +@@ -3761,7 +3762,6 @@ static int ffs_ready(struct ffs_data *ffs)
16997 + }
16998 +
16999 + ffs_obj->desc_ready = true;
17000 +- ffs_obj->ffs_data = ffs;
17001 +
17002 + if (ffs_obj->ffs_ready_callback) {
17003 + ret = ffs_obj->ffs_ready_callback(ffs);
17004 +@@ -3789,7 +3789,6 @@ static void ffs_closed(struct ffs_data *ffs)
17005 + goto done;
17006 +
17007 + ffs_obj->desc_ready = false;
17008 +- ffs_obj->ffs_data = NULL;
17009 +
17010 + if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) &&
17011 + ffs_obj->ffs_closed_callback)
17012 +diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
17013 +index 717c122f94490..5143e63bcbca8 100644
17014 +--- a/drivers/usb/host/xhci-mem.c
17015 ++++ b/drivers/usb/host/xhci-mem.c
17016 +@@ -1924,6 +1924,7 @@ no_bw:
17017 + xhci->hw_ports = NULL;
17018 + xhci->rh_bw = NULL;
17019 + xhci->ext_caps = NULL;
17020 ++ xhci->port_caps = NULL;
17021 +
17022 + xhci->page_size = 0;
17023 + xhci->page_shift = 0;
17024 +diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
17025 +index f97ac9f52bf4d..431213cdf9e0e 100644
17026 +--- a/drivers/usb/host/xhci-pci-renesas.c
17027 ++++ b/drivers/usb/host/xhci-pci-renesas.c
17028 +@@ -207,7 +207,8 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
17029 + return 0;
17030 +
17031 + case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
17032 +- return 0;
17033 ++ dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
17034 ++ break;
17035 +
17036 + case RENESAS_ROM_STATUS_ERROR: /* Error State */
17037 + default: /* All other states are marked as "Reserved states" */
17038 +@@ -224,13 +225,12 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
17039 + u8 fw_state;
17040 + int err;
17041 +
17042 +- /* Check if device has ROM and loaded, if so skip everything */
17043 +- err = renesas_check_rom(pdev);
17044 +- if (err) { /* we have rom */
17045 +- err = renesas_check_rom_state(pdev);
17046 +- if (!err)
17047 +- return err;
17048 +- }
17049 ++ /*
17050 ++ * Only if device has ROM and loaded FW we can skip loading and
17051 ++ * return success. Otherwise (even unknown state), attempt to load FW.
17052 ++ */
17053 ++ if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
17054 ++ return 0;
17055 +
17056 + /*
17057 + * Test if the device is actually needing the firmware. As most
17058 +diff --git a/drivers/usb/phy/phy-tegra-usb.c b/drivers/usb/phy/phy-tegra-usb.c
17059 +index a48452a6172b6..c0f432d509aab 100644
17060 +--- a/drivers/usb/phy/phy-tegra-usb.c
17061 ++++ b/drivers/usb/phy/phy-tegra-usb.c
17062 +@@ -58,12 +58,12 @@
17063 + #define USB_WAKEUP_DEBOUNCE_COUNT(x) (((x) & 0x7) << 16)
17064 +
17065 + #define USB_PHY_VBUS_SENSORS 0x404
17066 +-#define B_SESS_VLD_WAKEUP_EN BIT(6)
17067 +-#define B_VBUS_VLD_WAKEUP_EN BIT(14)
17068 ++#define B_SESS_VLD_WAKEUP_EN BIT(14)
17069 + #define A_SESS_VLD_WAKEUP_EN BIT(22)
17070 + #define A_VBUS_VLD_WAKEUP_EN BIT(30)
17071 +
17072 + #define USB_PHY_VBUS_WAKEUP_ID 0x408
17073 ++#define VBUS_WAKEUP_STS BIT(10)
17074 + #define VBUS_WAKEUP_WAKEUP_EN BIT(30)
17075 +
17076 + #define USB1_LEGACY_CTRL 0x410
17077 +@@ -544,7 +544,7 @@ static int utmi_phy_power_on(struct tegra_usb_phy *phy)
17078 +
17079 + val = readl_relaxed(base + USB_PHY_VBUS_SENSORS);
17080 + val &= ~(A_VBUS_VLD_WAKEUP_EN | A_SESS_VLD_WAKEUP_EN);
17081 +- val &= ~(B_VBUS_VLD_WAKEUP_EN | B_SESS_VLD_WAKEUP_EN);
17082 ++ val &= ~(B_SESS_VLD_WAKEUP_EN);
17083 + writel_relaxed(val, base + USB_PHY_VBUS_SENSORS);
17084 +
17085 + val = readl_relaxed(base + UTMIP_BAT_CHRG_CFG0);
17086 +@@ -642,6 +642,15 @@ static int utmi_phy_power_off(struct tegra_usb_phy *phy)
17087 + void __iomem *base = phy->regs;
17088 + u32 val;
17089 +
17090 ++ /*
17091 ++ * Give hardware time to settle down after VBUS disconnection,
17092 ++ * otherwise PHY will immediately wake up from suspend.
17093 ++ */
17094 ++ if (phy->wakeup_enabled && phy->mode != USB_DR_MODE_HOST)
17095 ++ readl_relaxed_poll_timeout(base + USB_PHY_VBUS_WAKEUP_ID,
17096 ++ val, !(val & VBUS_WAKEUP_STS),
17097 ++ 5000, 100000);
17098 ++
17099 + utmi_phy_clk_disable(phy);
17100 +
17101 + /* PHY won't resume if reset is asserted */
17102 +diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
17103 +index 45f0bf65e9aba..ae0bfb8b9c71c 100644
17104 +--- a/drivers/usb/typec/class.c
17105 ++++ b/drivers/usb/typec/class.c
17106 +@@ -572,8 +572,10 @@ typec_register_altmode(struct device *parent,
17107 + int ret;
17108 +
17109 + alt = kzalloc(sizeof(*alt), GFP_KERNEL);
17110 +- if (!alt)
17111 ++ if (!alt) {
17112 ++ altmode_id_remove(parent, id);
17113 + return ERR_PTR(-ENOMEM);
17114 ++ }
17115 +
17116 + alt->adev.svid = desc->svid;
17117 + alt->adev.mode = desc->mode;
17118 +diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
17119 +index 027afd7dfdce2..91a507eb514f1 100644
17120 +--- a/drivers/usb/typec/tcpm/tcpci.c
17121 ++++ b/drivers/usb/typec/tcpm/tcpci.c
17122 +@@ -21,8 +21,12 @@
17123 + #define PD_RETRY_COUNT_DEFAULT 3
17124 + #define PD_RETRY_COUNT_3_0_OR_HIGHER 2
17125 + #define AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV 3500
17126 +-#define AUTO_DISCHARGE_PD_HEADROOM_MV 850
17127 +-#define AUTO_DISCHARGE_PPS_HEADROOM_MV 1250
17128 ++#define VSINKPD_MIN_IR_DROP_MV 750
17129 ++#define VSRC_NEW_MIN_PERCENT 95
17130 ++#define VSRC_VALID_MIN_MV 500
17131 ++#define VPPS_NEW_MIN_PERCENT 95
17132 ++#define VPPS_VALID_MIN_MV 100
17133 ++#define VSINKDISCONNECT_PD_MIN_PERCENT 90
17134 +
17135 + #define tcpc_presenting_cc1_rd(reg) \
17136 + (!(TCPC_ROLE_CTRL_DRP & (reg)) && \
17137 +@@ -328,11 +332,13 @@ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum ty
17138 + threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
17139 + } else if (mode == TYPEC_PWR_MODE_PD) {
17140 + if (pps_active)
17141 +- threshold = (95 * requested_vbus_voltage_mv / 100) -
17142 +- AUTO_DISCHARGE_PD_HEADROOM_MV;
17143 ++ threshold = ((VPPS_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
17144 ++ VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
17145 ++ VSINKDISCONNECT_PD_MIN_PERCENT / 100;
17146 + else
17147 +- threshold = (95 * requested_vbus_voltage_mv / 100) -
17148 +- AUTO_DISCHARGE_PPS_HEADROOM_MV;
17149 ++ threshold = ((VSRC_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
17150 ++ VSINKPD_MIN_IR_DROP_MV - VSRC_VALID_MIN_MV) *
17151 ++ VSINKDISCONNECT_PD_MIN_PERCENT / 100;
17152 + } else {
17153 + /* 3.5V for non-pd sink */
17154 + threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
17155 +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
17156 +index 6133c0679c273..07dee0118c27e 100644
17157 +--- a/drivers/usb/typec/tcpm/tcpm.c
17158 ++++ b/drivers/usb/typec/tcpm/tcpm.c
17159 +@@ -2556,6 +2556,11 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
17160 + } else {
17161 + next_state = SNK_WAIT_CAPABILITIES;
17162 + }
17163 ++
17164 ++ /* Threshold was relaxed before sending Request. Restore it back. */
17165 ++ tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
17166 ++ port->pps_data.active,
17167 ++ port->supply_voltage);
17168 + tcpm_set_state(port, next_state, 0);
17169 + break;
17170 + case SNK_NEGOTIATE_PPS_CAPABILITIES:
17171 +@@ -2569,6 +2574,11 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
17172 + port->send_discover)
17173 + port->vdm_sm_running = true;
17174 +
17175 ++ /* Threshold was relaxed before sending Request. Restore it back. */
17176 ++ tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
17177 ++ port->pps_data.active,
17178 ++ port->supply_voltage);
17179 ++
17180 + tcpm_set_state(port, SNK_READY, 0);
17181 + break;
17182 + case DR_SWAP_SEND:
17183 +@@ -3288,6 +3298,12 @@ static int tcpm_pd_send_request(struct tcpm_port *port)
17184 + if (ret < 0)
17185 + return ret;
17186 +
17187 ++ /*
17188 ++ * Relax the threshold as voltage will be adjusted after Accept Message plus tSrcTransition.
17189 ++ * It is safer to modify the threshold here.
17190 ++ */
17191 ++ tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
17192 ++
17193 + memset(&msg, 0, sizeof(msg));
17194 + msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
17195 + port->pwr_role,
17196 +@@ -3385,6 +3401,9 @@ static int tcpm_pd_send_pps_request(struct tcpm_port *port)
17197 + if (ret < 0)
17198 + return ret;
17199 +
17200 ++ /* Relax the threshold as voltage will be adjusted right after Accept Message. */
17201 ++ tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
17202 ++
17203 + memset(&msg, 0, sizeof(msg));
17204 + msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
17205 + port->pwr_role,
17206 +@@ -4161,6 +4180,10 @@ static void run_state_machine(struct tcpm_port *port)
17207 + port->hard_reset_count = 0;
17208 + ret = tcpm_pd_send_request(port);
17209 + if (ret < 0) {
17210 ++ /* Restore back to the original state */
17211 ++ tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
17212 ++ port->pps_data.active,
17213 ++ port->supply_voltage);
17214 + /* Let the Source send capabilities again. */
17215 + tcpm_set_state(port, SNK_WAIT_CAPABILITIES, 0);
17216 + } else {
17217 +@@ -4171,6 +4194,10 @@ static void run_state_machine(struct tcpm_port *port)
17218 + case SNK_NEGOTIATE_PPS_CAPABILITIES:
17219 + ret = tcpm_pd_send_pps_request(port);
17220 + if (ret < 0) {
17221 ++ /* Restore back to the original state */
17222 ++ tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
17223 ++ port->pps_data.active,
17224 ++ port->supply_voltage);
17225 + port->pps_status = ret;
17226 + /*
17227 + * If this was called due to updates to sink
17228 +@@ -5160,6 +5187,9 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
17229 + tcpm_set_state(port, SNK_UNATTACHED, 0);
17230 + }
17231 + break;
17232 ++ case PR_SWAP_SNK_SRC_SINK_OFF:
17233 ++ /* Do nothing, vsafe0v is expected during transition */
17234 ++ break;
17235 + default:
17236 + if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
17237 + tcpm_set_state(port, SNK_UNATTACHED, 0);
17238 +diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
17239 +index cb7f2dc09e9d4..94b1dc07baeeb 100644
17240 +--- a/drivers/vfio/pci/vfio_pci.c
17241 ++++ b/drivers/vfio/pci/vfio_pci.c
17242 +@@ -1612,6 +1612,7 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
17243 + {
17244 + struct vm_area_struct *vma = vmf->vma;
17245 + struct vfio_pci_device *vdev = vma->vm_private_data;
17246 ++ struct vfio_pci_mmap_vma *mmap_vma;
17247 + vm_fault_t ret = VM_FAULT_NOPAGE;
17248 +
17249 + mutex_lock(&vdev->vma_lock);
17250 +@@ -1619,24 +1620,36 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
17251 +
17252 + if (!__vfio_pci_memory_enabled(vdev)) {
17253 + ret = VM_FAULT_SIGBUS;
17254 +- mutex_unlock(&vdev->vma_lock);
17255 + goto up_out;
17256 + }
17257 +
17258 +- if (__vfio_pci_add_vma(vdev, vma)) {
17259 +- ret = VM_FAULT_OOM;
17260 +- mutex_unlock(&vdev->vma_lock);
17261 +- goto up_out;
17262 ++ /*
17263 ++ * We populate the whole vma on fault, so we need to test whether
17264 ++ * the vma has already been mapped, such as for concurrent faults
17265 ++ * to the same vma. io_remap_pfn_range() will trigger a BUG_ON if
17266 ++ * we ask it to fill the same range again.
17267 ++ */
17268 ++ list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
17269 ++ if (mmap_vma->vma == vma)
17270 ++ goto up_out;
17271 + }
17272 +
17273 +- mutex_unlock(&vdev->vma_lock);
17274 +-
17275 + if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
17276 +- vma->vm_end - vma->vm_start, vma->vm_page_prot))
17277 ++ vma->vm_end - vma->vm_start,
17278 ++ vma->vm_page_prot)) {
17279 + ret = VM_FAULT_SIGBUS;
17280 ++ zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
17281 ++ goto up_out;
17282 ++ }
17283 ++
17284 ++ if (__vfio_pci_add_vma(vdev, vma)) {
17285 ++ ret = VM_FAULT_OOM;
17286 ++ zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
17287 ++ }
17288 +
17289 + up_out:
17290 + up_read(&vdev->memory_lock);
17291 ++ mutex_unlock(&vdev->vma_lock);
17292 + return ret;
17293 + }
17294 +
17295 +diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
17296 +index e88a2b0e59046..662029d6a3dc9 100644
17297 +--- a/drivers/video/backlight/lm3630a_bl.c
17298 ++++ b/drivers/video/backlight/lm3630a_bl.c
17299 +@@ -482,8 +482,10 @@ static int lm3630a_parse_node(struct lm3630a_chip *pchip,
17300 +
17301 + device_for_each_child_node(pchip->dev, node) {
17302 + ret = lm3630a_parse_bank(pdata, node, &seen_led_sources);
17303 +- if (ret)
17304 ++ if (ret) {
17305 ++ fwnode_handle_put(node);
17306 + return ret;
17307 ++ }
17308 + }
17309 +
17310 + return ret;
17311 +diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
17312 +index 7f8debd2da065..ad598257ab386 100644
17313 +--- a/drivers/video/fbdev/imxfb.c
17314 ++++ b/drivers/video/fbdev/imxfb.c
17315 +@@ -992,7 +992,7 @@ static int imxfb_probe(struct platform_device *pdev)
17316 + info->screen_buffer = dma_alloc_wc(&pdev->dev, fbi->map_size,
17317 + &fbi->map_dma, GFP_KERNEL);
17318 + if (!info->screen_buffer) {
17319 +- dev_err(&pdev->dev, "Failed to allocate video RAM: %d\n", ret);
17320 ++ dev_err(&pdev->dev, "Failed to allocate video RAM\n");
17321 + ret = -ENOMEM;
17322 + goto failed_map;
17323 + }
17324 +diff --git a/drivers/visorbus/visorchipset.c b/drivers/visorbus/visorchipset.c
17325 +index cb1eb7e05f871..5668cad86e374 100644
17326 +--- a/drivers/visorbus/visorchipset.c
17327 ++++ b/drivers/visorbus/visorchipset.c
17328 +@@ -1561,7 +1561,7 @@ schedule_out:
17329 +
17330 + static int visorchipset_init(struct acpi_device *acpi_device)
17331 + {
17332 +- int err = -ENODEV;
17333 ++ int err = -ENOMEM;
17334 + struct visorchannel *controlvm_channel;
17335 +
17336 + chipset_dev = kzalloc(sizeof(*chipset_dev), GFP_KERNEL);
17337 +@@ -1584,8 +1584,10 @@ static int visorchipset_init(struct acpi_device *acpi_device)
17338 + "controlvm",
17339 + sizeof(struct visor_controlvm_channel),
17340 + VISOR_CONTROLVM_CHANNEL_VERSIONID,
17341 +- VISOR_CHANNEL_SIGNATURE))
17342 ++ VISOR_CHANNEL_SIGNATURE)) {
17343 ++ err = -ENODEV;
17344 + goto error_delete_groups;
17345 ++ }
17346 + /* if booting in a crash kernel */
17347 + if (is_kdump_kernel())
17348 + INIT_DELAYED_WORK(&chipset_dev->periodic_controlvm_work,
17349 +diff --git a/fs/btrfs/Kconfig b/fs/btrfs/Kconfig
17350 +index 68b95ad82126e..520a0f6a7d9e9 100644
17351 +--- a/fs/btrfs/Kconfig
17352 ++++ b/fs/btrfs/Kconfig
17353 +@@ -18,6 +18,8 @@ config BTRFS_FS
17354 + select RAID6_PQ
17355 + select XOR_BLOCKS
17356 + select SRCU
17357 ++ depends on !PPC_256K_PAGES # powerpc
17358 ++ depends on !PAGE_SIZE_256KB # hexagon
17359 +
17360 + help
17361 + Btrfs is a general purpose copy-on-write filesystem with extents,
17362 +diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
17363 +index f43ce82a6aedc..8f48553d861ec 100644
17364 +--- a/fs/btrfs/ctree.c
17365 ++++ b/fs/btrfs/ctree.c
17366 +@@ -1498,7 +1498,6 @@ noinline int btrfs_cow_block(struct btrfs_trans_handle *trans,
17367 + trans->transid, fs_info->generation);
17368 +
17369 + if (!should_cow_block(trans, root, buf)) {
17370 +- trans->dirty = true;
17371 + *cow_ret = buf;
17372 + return 0;
17373 + }
17374 +@@ -2670,10 +2669,8 @@ again:
17375 + * then we don't want to set the path blocking,
17376 + * so we test it here
17377 + */
17378 +- if (!should_cow_block(trans, root, b)) {
17379 +- trans->dirty = true;
17380 ++ if (!should_cow_block(trans, root, b))
17381 + goto cow_done;
17382 +- }
17383 +
17384 + /*
17385 + * must have write locks on this node and the
17386 +diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
17387 +index c1d2b67861292..55dad12a5ce04 100644
17388 +--- a/fs/btrfs/delayed-inode.c
17389 ++++ b/fs/btrfs/delayed-inode.c
17390 +@@ -1025,12 +1025,10 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans,
17391 + nofs_flag = memalloc_nofs_save();
17392 + ret = btrfs_lookup_inode(trans, root, path, &key, mod);
17393 + memalloc_nofs_restore(nofs_flag);
17394 +- if (ret > 0) {
17395 +- btrfs_release_path(path);
17396 +- return -ENOENT;
17397 +- } else if (ret < 0) {
17398 +- return ret;
17399 +- }
17400 ++ if (ret > 0)
17401 ++ ret = -ENOENT;
17402 ++ if (ret < 0)
17403 ++ goto out;
17404 +
17405 + leaf = path->nodes[0];
17406 + inode_item = btrfs_item_ptr(leaf, path->slots[0],
17407 +@@ -1068,6 +1066,14 @@ err_out:
17408 + btrfs_delayed_inode_release_metadata(fs_info, node, (ret < 0));
17409 + btrfs_release_delayed_inode(node);
17410 +
17411 ++ /*
17412 ++ * If we fail to update the delayed inode we need to abort the
17413 ++ * transaction, because we could leave the inode with the improper
17414 ++ * counts behind.
17415 ++ */
17416 ++ if (ret && ret != -ENOENT)
17417 ++ btrfs_abort_transaction(trans, ret);
17418 ++
17419 + return ret;
17420 +
17421 + search:
17422 +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
17423 +index 27c3680074814..1cde6f84f145e 100644
17424 +--- a/fs/btrfs/extent-tree.c
17425 ++++ b/fs/btrfs/extent-tree.c
17426 +@@ -4799,7 +4799,6 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
17427 + set_extent_dirty(&trans->transaction->dirty_pages, buf->start,
17428 + buf->start + buf->len - 1, GFP_NOFS);
17429 + }
17430 +- trans->dirty = true;
17431 + /* this returns a buffer locked for blocking */
17432 + return buf;
17433 + }
17434 +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
17435 +index 3bb8ce4969f31..1b22ded1a7994 100644
17436 +--- a/fs/btrfs/inode.c
17437 ++++ b/fs/btrfs/inode.c
17438 +@@ -598,7 +598,7 @@ again:
17439 + * inode has not been flagged as nocompress. This flag can
17440 + * change at any time if we discover bad compression ratios.
17441 + */
17442 +- if (inode_need_compress(BTRFS_I(inode), start, end)) {
17443 ++ if (nr_pages > 1 && inode_need_compress(BTRFS_I(inode), start, end)) {
17444 + WARN_ON(pages);
17445 + pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
17446 + if (!pages) {
17447 +@@ -8386,7 +8386,19 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
17448 + */
17449 + wait_on_page_writeback(page);
17450 +
17451 +- if (offset) {
17452 ++ /*
17453 ++ * For subpage case, we have call sites like
17454 ++ * btrfs_punch_hole_lock_range() which passes range not aligned to
17455 ++ * sectorsize.
17456 ++ * If the range doesn't cover the full page, we don't need to and
17457 ++ * shouldn't clear page extent mapped, as page->private can still
17458 ++ * record subpage dirty bits for other part of the range.
17459 ++ *
17460 ++ * For cases that can invalidate the full page even if the range
17461 ++ * doesn't cover the full page, like invalidating the last page,
17462 ++ * we're still safe to wait for the ordered extent to finish.
17463 ++ */
17464 ++ if (!(offset == 0 && length == PAGE_SIZE)) {
17465 + btrfs_releasepage(page, GFP_NOFS);
17466 + return;
17467 + }
17468 +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
17469 +index 8ae8f1732fd25..548ef38edaf5b 100644
17470 +--- a/fs/btrfs/send.c
17471 ++++ b/fs/btrfs/send.c
17472 +@@ -4064,6 +4064,17 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
17473 + if (ret < 0)
17474 + goto out;
17475 + } else {
17476 ++ /*
17477 ++ * If we previously orphanized a directory that
17478 ++ * collided with a new reference that we already
17479 ++ * processed, recompute the current path because
17480 ++ * that directory may be part of the path.
17481 ++ */
17482 ++ if (orphanized_dir) {
17483 ++ ret = refresh_ref_path(sctx, cur);
17484 ++ if (ret < 0)
17485 ++ goto out;
17486 ++ }
17487 + ret = send_unlink(sctx, cur->full_path);
17488 + if (ret < 0)
17489 + goto out;
17490 +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
17491 +index f7a4ad86adee6..b3b7f3066cfac 100644
17492 +--- a/fs/btrfs/super.c
17493 ++++ b/fs/btrfs/super.c
17494 +@@ -273,17 +273,6 @@ void __btrfs_abort_transaction(struct btrfs_trans_handle *trans,
17495 + struct btrfs_fs_info *fs_info = trans->fs_info;
17496 +
17497 + WRITE_ONCE(trans->aborted, errno);
17498 +- /* Nothing used. The other threads that have joined this
17499 +- * transaction may be able to continue. */
17500 +- if (!trans->dirty && list_empty(&trans->new_bgs)) {
17501 +- const char *errstr;
17502 +-
17503 +- errstr = btrfs_decode_error(errno);
17504 +- btrfs_warn(fs_info,
17505 +- "%s:%d: Aborting unused transaction(%s).",
17506 +- function, line, errstr);
17507 +- return;
17508 +- }
17509 + WRITE_ONCE(trans->transaction->aborted, errno);
17510 + /* Wake up anybody who may be waiting on this transaction */
17511 + wake_up(&fs_info->transaction_wait);
17512 +diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
17513 +index 6eb1c50fa98c1..bd95446869d01 100644
17514 +--- a/fs/btrfs/sysfs.c
17515 ++++ b/fs/btrfs/sysfs.c
17516 +@@ -414,7 +414,7 @@ static ssize_t btrfs_discard_bitmap_bytes_show(struct kobject *kobj,
17517 + {
17518 + struct btrfs_fs_info *fs_info = discard_to_fs_info(kobj);
17519 +
17520 +- return scnprintf(buf, PAGE_SIZE, "%lld\n",
17521 ++ return scnprintf(buf, PAGE_SIZE, "%llu\n",
17522 + fs_info->discard_ctl.discard_bitmap_bytes);
17523 + }
17524 + BTRFS_ATTR(discard, discard_bitmap_bytes, btrfs_discard_bitmap_bytes_show);
17525 +@@ -436,7 +436,7 @@ static ssize_t btrfs_discard_extent_bytes_show(struct kobject *kobj,
17526 + {
17527 + struct btrfs_fs_info *fs_info = discard_to_fs_info(kobj);
17528 +
17529 +- return scnprintf(buf, PAGE_SIZE, "%lld\n",
17530 ++ return scnprintf(buf, PAGE_SIZE, "%llu\n",
17531 + fs_info->discard_ctl.discard_extent_bytes);
17532 + }
17533 + BTRFS_ATTR(discard, discard_extent_bytes, btrfs_discard_extent_bytes_show);
17534 +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
17535 +index d56d3e7ca3240..81c8567dffee9 100644
17536 +--- a/fs/btrfs/transaction.c
17537 ++++ b/fs/btrfs/transaction.c
17538 +@@ -1393,8 +1393,10 @@ int btrfs_defrag_root(struct btrfs_root *root)
17539 +
17540 + while (1) {
17541 + trans = btrfs_start_transaction(root, 0);
17542 +- if (IS_ERR(trans))
17543 +- return PTR_ERR(trans);
17544 ++ if (IS_ERR(trans)) {
17545 ++ ret = PTR_ERR(trans);
17546 ++ break;
17547 ++ }
17548 +
17549 + ret = btrfs_defrag_leaves(trans, root);
17550 +
17551 +@@ -1461,7 +1463,7 @@ static int qgroup_account_snapshot(struct btrfs_trans_handle *trans,
17552 + ret = btrfs_run_delayed_refs(trans, (unsigned long)-1);
17553 + if (ret) {
17554 + btrfs_abort_transaction(trans, ret);
17555 +- goto out;
17556 ++ return ret;
17557 + }
17558 +
17559 + /*
17560 +@@ -2054,14 +2056,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
17561 +
17562 + ASSERT(refcount_read(&trans->use_count) == 1);
17563 +
17564 +- /*
17565 +- * Some places just start a transaction to commit it. We need to make
17566 +- * sure that if this commit fails that the abort code actually marks the
17567 +- * transaction as failed, so set trans->dirty to make the abort code do
17568 +- * the right thing.
17569 +- */
17570 +- trans->dirty = true;
17571 +-
17572 + /* Stop the commit early if ->aborted is set */
17573 + if (TRANS_ABORTED(cur_trans)) {
17574 + ret = cur_trans->aborted;
17575 +diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
17576 +index 364cfbb4c5c59..c49e2266b28ba 100644
17577 +--- a/fs/btrfs/transaction.h
17578 ++++ b/fs/btrfs/transaction.h
17579 +@@ -143,7 +143,6 @@ struct btrfs_trans_handle {
17580 + bool allocating_chunk;
17581 + bool can_flush_pending_bgs;
17582 + bool reloc_reserved;
17583 +- bool dirty;
17584 + bool in_fsync;
17585 + struct btrfs_root *root;
17586 + struct btrfs_fs_info *fs_info;
17587 +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
17588 +index 276b5511ff809..f4b0aecdaac72 100644
17589 +--- a/fs/btrfs/tree-log.c
17590 ++++ b/fs/btrfs/tree-log.c
17591 +@@ -6365,6 +6365,7 @@ next:
17592 + error:
17593 + if (wc.trans)
17594 + btrfs_end_transaction(wc.trans);
17595 ++ clear_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags);
17596 + btrfs_free_path(path);
17597 + return ret;
17598 + }
17599 +diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
17600 +index cfff29718b840..40c53c8ea1b7d 100644
17601 +--- a/fs/btrfs/zoned.c
17602 ++++ b/fs/btrfs/zoned.c
17603 +@@ -1140,6 +1140,10 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
17604 + }
17605 +
17606 + if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL) {
17607 ++ btrfs_err_in_rcu(fs_info,
17608 ++ "zoned: unexpected conventional zone %llu on device %s (devid %llu)",
17609 ++ zone.start << SECTOR_SHIFT,
17610 ++ rcu_str_deref(device->name), device->devid);
17611 + ret = -EIO;
17612 + goto out;
17613 + }
17614 +@@ -1200,6 +1204,13 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
17615 +
17616 + switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
17617 + case 0: /* single */
17618 ++ if (alloc_offsets[0] == WP_MISSING_DEV) {
17619 ++ btrfs_err(fs_info,
17620 ++ "zoned: cannot recover write pointer for zone %llu",
17621 ++ physical);
17622 ++ ret = -EIO;
17623 ++ goto out;
17624 ++ }
17625 + cache->alloc_offset = alloc_offsets[0];
17626 + break;
17627 + case BTRFS_BLOCK_GROUP_DUP:
17628 +@@ -1217,6 +1228,13 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
17629 + }
17630 +
17631 + out:
17632 ++ if (cache->alloc_offset > fs_info->zone_size) {
17633 ++ btrfs_err(fs_info,
17634 ++ "zoned: invalid write pointer %llu in block group %llu",
17635 ++ cache->alloc_offset, cache->start);
17636 ++ ret = -EIO;
17637 ++ }
17638 ++
17639 + /* An extent is allocated after the write pointer */
17640 + if (!ret && num_conventional && last_alloc > cache->alloc_offset) {
17641 + btrfs_err(fs_info,
17642 +diff --git a/fs/cifs/cifs_swn.c b/fs/cifs/cifs_swn.c
17643 +index d829b8bf833e3..93b47818c6c2d 100644
17644 +--- a/fs/cifs/cifs_swn.c
17645 ++++ b/fs/cifs/cifs_swn.c
17646 +@@ -447,15 +447,13 @@ static int cifs_swn_store_swn_addr(const struct sockaddr_storage *new,
17647 + const struct sockaddr_storage *old,
17648 + struct sockaddr_storage *dst)
17649 + {
17650 +- __be16 port;
17651 ++ __be16 port = cpu_to_be16(CIFS_PORT);
17652 +
17653 + if (old->ss_family == AF_INET) {
17654 + struct sockaddr_in *ipv4 = (struct sockaddr_in *)old;
17655 +
17656 + port = ipv4->sin_port;
17657 +- }
17658 +-
17659 +- if (old->ss_family == AF_INET6) {
17660 ++ } else if (old->ss_family == AF_INET6) {
17661 + struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)old;
17662 +
17663 + port = ipv6->sin6_port;
17664 +@@ -465,9 +463,7 @@ static int cifs_swn_store_swn_addr(const struct sockaddr_storage *new,
17665 + struct sockaddr_in *ipv4 = (struct sockaddr_in *)new;
17666 +
17667 + ipv4->sin_port = port;
17668 +- }
17669 +-
17670 +- if (new->ss_family == AF_INET6) {
17671 ++ } else if (new->ss_family == AF_INET6) {
17672 + struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)new;
17673 +
17674 + ipv6->sin6_port = port;
17675 +diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
17676 +index d178cf85e926d..b80b6ba232aad 100644
17677 +--- a/fs/cifs/cifsacl.c
17678 ++++ b/fs/cifs/cifsacl.c
17679 +@@ -1310,7 +1310,7 @@ static int build_sec_desc(struct cifs_ntsd *pntsd, struct cifs_ntsd *pnntsd,
17680 + ndacl_ptr = (struct cifs_acl *)((char *)pnntsd + ndacloffset);
17681 + ndacl_ptr->revision =
17682 + dacloffset ? dacl_ptr->revision : cpu_to_le16(ACL_REVISION);
17683 +- ndacl_ptr->num_aces = dacl_ptr->num_aces;
17684 ++ ndacl_ptr->num_aces = dacl_ptr ? dacl_ptr->num_aces : 0;
17685 +
17686 + if (uid_valid(uid)) { /* chown */
17687 + uid_t id;
17688 +diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
17689 +index ec824ab8c5ca3..723b97002d8dd 100644
17690 +--- a/fs/cifs/cifsglob.h
17691 ++++ b/fs/cifs/cifsglob.h
17692 +@@ -896,7 +896,7 @@ struct cifs_ses {
17693 + struct mutex session_mutex;
17694 + struct TCP_Server_Info *server; /* pointer to server info */
17695 + int ses_count; /* reference counter */
17696 +- enum statusEnum status;
17697 ++ enum statusEnum status; /* updates protected by GlobalMid_Lock */
17698 + unsigned overrideSecFlg; /* if non-zero override global sec flags */
17699 + char *serverOS; /* name of operating system underlying server */
17700 + char *serverNOS; /* name of network operating system of server */
17701 +@@ -1783,6 +1783,7 @@ require use of the stronger protocol */
17702 + * list operations on pending_mid_q and oplockQ
17703 + * updates to XID counters, multiplex id and SMB sequence numbers
17704 + * list operations on global DnotifyReqList
17705 ++ * updates to ses->status
17706 + * tcp_ses_lock protects:
17707 + * list operations on tcp and SMB session lists
17708 + * tcon->open_file_lock protects the list of open files hanging off the tcon
17709 +diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
17710 +index 3d62d52d730b5..09a3939f25bf6 100644
17711 +--- a/fs/cifs/connect.c
17712 ++++ b/fs/cifs/connect.c
17713 +@@ -1639,9 +1639,12 @@ void cifs_put_smb_ses(struct cifs_ses *ses)
17714 + spin_unlock(&cifs_tcp_ses_lock);
17715 + return;
17716 + }
17717 ++ spin_unlock(&cifs_tcp_ses_lock);
17718 ++
17719 ++ spin_lock(&GlobalMid_Lock);
17720 + if (ses->status == CifsGood)
17721 + ses->status = CifsExiting;
17722 +- spin_unlock(&cifs_tcp_ses_lock);
17723 ++ spin_unlock(&GlobalMid_Lock);
17724 +
17725 + cifs_free_ipc(ses);
17726 +
17727 +diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
17728 +index 098b4bc8da59a..d2d686ee10a3f 100644
17729 +--- a/fs/cifs/dfs_cache.c
17730 ++++ b/fs/cifs/dfs_cache.c
17731 +@@ -25,8 +25,7 @@
17732 + #define CACHE_HTABLE_SIZE 32
17733 + #define CACHE_MAX_ENTRIES 64
17734 +
17735 +-#define IS_INTERLINK_SET(v) ((v) & (DFSREF_REFERRAL_SERVER | \
17736 +- DFSREF_STORAGE_SERVER))
17737 ++#define IS_DFS_INTERLINK(v) (((v) & DFSREF_REFERRAL_SERVER) && !((v) & DFSREF_STORAGE_SERVER))
17738 +
17739 + struct cache_dfs_tgt {
17740 + char *name;
17741 +@@ -170,7 +169,7 @@ static int dfscache_proc_show(struct seq_file *m, void *v)
17742 + "cache entry: path=%s,type=%s,ttl=%d,etime=%ld,hdr_flags=0x%x,ref_flags=0x%x,interlink=%s,path_consumed=%d,expired=%s\n",
17743 + ce->path, ce->srvtype == DFS_TYPE_ROOT ? "root" : "link",
17744 + ce->ttl, ce->etime.tv_nsec, ce->ref_flags, ce->hdr_flags,
17745 +- IS_INTERLINK_SET(ce->hdr_flags) ? "yes" : "no",
17746 ++ IS_DFS_INTERLINK(ce->hdr_flags) ? "yes" : "no",
17747 + ce->path_consumed, cache_entry_expired(ce) ? "yes" : "no");
17748 +
17749 + list_for_each_entry(t, &ce->tlist, list) {
17750 +@@ -239,7 +238,7 @@ static inline void dump_ce(const struct cache_entry *ce)
17751 + ce->srvtype == DFS_TYPE_ROOT ? "root" : "link", ce->ttl,
17752 + ce->etime.tv_nsec,
17753 + ce->hdr_flags, ce->ref_flags,
17754 +- IS_INTERLINK_SET(ce->hdr_flags) ? "yes" : "no",
17755 ++ IS_DFS_INTERLINK(ce->hdr_flags) ? "yes" : "no",
17756 + ce->path_consumed,
17757 + cache_entry_expired(ce) ? "yes" : "no");
17758 + dump_tgts(ce);
17759 +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
17760 +index e9a530da4255c..ea5e958fd6b04 100644
17761 +--- a/fs/cifs/smb2ops.c
17762 ++++ b/fs/cifs/smb2ops.c
17763 +@@ -3561,6 +3561,119 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
17764 + return rc;
17765 + }
17766 +
17767 ++static int smb3_simple_fallocate_write_range(unsigned int xid,
17768 ++ struct cifs_tcon *tcon,
17769 ++ struct cifsFileInfo *cfile,
17770 ++ loff_t off, loff_t len,
17771 ++ char *buf)
17772 ++{
17773 ++ struct cifs_io_parms io_parms = {0};
17774 ++ int nbytes;
17775 ++ struct kvec iov[2];
17776 ++
17777 ++ io_parms.netfid = cfile->fid.netfid;
17778 ++ io_parms.pid = current->tgid;
17779 ++ io_parms.tcon = tcon;
17780 ++ io_parms.persistent_fid = cfile->fid.persistent_fid;
17781 ++ io_parms.volatile_fid = cfile->fid.volatile_fid;
17782 ++ io_parms.offset = off;
17783 ++ io_parms.length = len;
17784 ++
17785 ++ /* iov[0] is reserved for smb header */
17786 ++ iov[1].iov_base = buf;
17787 ++ iov[1].iov_len = io_parms.length;
17788 ++ return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
17789 ++}
17790 ++
17791 ++static int smb3_simple_fallocate_range(unsigned int xid,
17792 ++ struct cifs_tcon *tcon,
17793 ++ struct cifsFileInfo *cfile,
17794 ++ loff_t off, loff_t len)
17795 ++{
17796 ++ struct file_allocated_range_buffer in_data, *out_data = NULL, *tmp_data;
17797 ++ u32 out_data_len;
17798 ++ char *buf = NULL;
17799 ++ loff_t l;
17800 ++ int rc;
17801 ++
17802 ++ in_data.file_offset = cpu_to_le64(off);
17803 ++ in_data.length = cpu_to_le64(len);
17804 ++ rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
17805 ++ cfile->fid.volatile_fid,
17806 ++ FSCTL_QUERY_ALLOCATED_RANGES, true,
17807 ++ (char *)&in_data, sizeof(in_data),
17808 ++ 1024 * sizeof(struct file_allocated_range_buffer),
17809 ++ (char **)&out_data, &out_data_len);
17810 ++ if (rc)
17811 ++ goto out;
17812 ++ /*
17813 ++ * It is already all allocated
17814 ++ */
17815 ++ if (out_data_len == 0)
17816 ++ goto out;
17817 ++
17818 ++ buf = kzalloc(1024 * 1024, GFP_KERNEL);
17819 ++ if (buf == NULL) {
17820 ++ rc = -ENOMEM;
17821 ++ goto out;
17822 ++ }
17823 ++
17824 ++ tmp_data = out_data;
17825 ++ while (len) {
17826 ++ /*
17827 ++ * The rest of the region is unmapped so write it all.
17828 ++ */
17829 ++ if (out_data_len == 0) {
17830 ++ rc = smb3_simple_fallocate_write_range(xid, tcon,
17831 ++ cfile, off, len, buf);
17832 ++ goto out;
17833 ++ }
17834 ++
17835 ++ if (out_data_len < sizeof(struct file_allocated_range_buffer)) {
17836 ++ rc = -EINVAL;
17837 ++ goto out;
17838 ++ }
17839 ++
17840 ++ if (off < le64_to_cpu(tmp_data->file_offset)) {
17841 ++ /*
17842 ++ * We are at a hole. Write until the end of the region
17843 ++ * or until the next allocated data,
17844 ++ * whichever comes next.
17845 ++ */
17846 ++ l = le64_to_cpu(tmp_data->file_offset) - off;
17847 ++ if (len < l)
17848 ++ l = len;
17849 ++ rc = smb3_simple_fallocate_write_range(xid, tcon,
17850 ++ cfile, off, l, buf);
17851 ++ if (rc)
17852 ++ goto out;
17853 ++ off = off + l;
17854 ++ len = len - l;
17855 ++ if (len == 0)
17856 ++ goto out;
17857 ++ }
17858 ++ /*
17859 ++ * We are at a section of allocated data, just skip forward
17860 ++ * until the end of the data or the end of the region
17861 ++ * we are supposed to fallocate, whichever comes first.
17862 ++ */
17863 ++ l = le64_to_cpu(tmp_data->length);
17864 ++ if (len < l)
17865 ++ l = len;
17866 ++ off += l;
17867 ++ len -= l;
17868 ++
17869 ++ tmp_data = &tmp_data[1];
17870 ++ out_data_len -= sizeof(struct file_allocated_range_buffer);
17871 ++ }
17872 ++
17873 ++ out:
17874 ++ kfree(out_data);
17875 ++ kfree(buf);
17876 ++ return rc;
17877 ++}
17878 ++
17879 ++
17880 + static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
17881 + loff_t off, loff_t len, bool keep_size)
17882 + {
17883 +@@ -3621,6 +3734,26 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
17884 + }
17885 +
17886 + if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
17887 ++ /*
 17888 ++	 * At this point, we are trying to fallocate an internal
 17889 ++	 * region of a sparse file. Since smb2 does not have a
 17890 ++	 * fallocate command we have two options on how to emulate this.
 17891 ++	 * We can either turn the entire file to become non-sparse
 17892 ++	 * which we only do if the fallocate is for virtually
 17893 ++	 * the whole file, or we can overwrite the region with zeroes
 17894 ++	 * using SMB2_write, which could be prohibitively expensive
 17895 ++	 * if len is large.
17896 ++ */
17897 ++ /*
17898 ++ * We are only trying to fallocate a small region so
17899 ++ * just write it with zero.
17900 ++ */
17901 ++ if (len <= 1024 * 1024) {
17902 ++ rc = smb3_simple_fallocate_range(xid, tcon, cfile,
17903 ++ off, len);
17904 ++ goto out;
17905 ++ }
17906 ++
17907 + /*
17908 + * Check if falloc starts within first few pages of file
17909 + * and ends within a few pages of the end of file to
17910 +diff --git a/fs/configfs/file.c b/fs/configfs/file.c
17911 +index da8351d1e4552..4d0825213116a 100644
17912 +--- a/fs/configfs/file.c
17913 ++++ b/fs/configfs/file.c
17914 +@@ -482,13 +482,13 @@ static int configfs_release_bin_file(struct inode *inode, struct file *file)
17915 + buffer->bin_buffer_size);
17916 + }
17917 + up_read(&frag->frag_sem);
17918 +- /* vfree on NULL is safe */
17919 +- vfree(buffer->bin_buffer);
17920 +- buffer->bin_buffer = NULL;
17921 +- buffer->bin_buffer_size = 0;
17922 +- buffer->needs_read_fill = 1;
17923 + }
17924 +
17925 ++ vfree(buffer->bin_buffer);
17926 ++ buffer->bin_buffer = NULL;
17927 ++ buffer->bin_buffer_size = 0;
17928 ++ buffer->needs_read_fill = 1;
17929 ++
17930 + configfs_release(inode, file);
17931 + return 0;
17932 + }
17933 +diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
17934 +index 6ca7d16593ff6..d00455440d087 100644
17935 +--- a/fs/crypto/fname.c
17936 ++++ b/fs/crypto/fname.c
17937 +@@ -344,13 +344,9 @@ int fscrypt_fname_disk_to_usr(const struct inode *inode,
17938 + offsetof(struct fscrypt_nokey_name, sha256));
17939 + BUILD_BUG_ON(BASE64_CHARS(FSCRYPT_NOKEY_NAME_MAX) > NAME_MAX);
17940 +
17941 +- if (hash) {
17942 +- nokey_name.dirhash[0] = hash;
17943 +- nokey_name.dirhash[1] = minor_hash;
17944 +- } else {
17945 +- nokey_name.dirhash[0] = 0;
17946 +- nokey_name.dirhash[1] = 0;
17947 +- }
17948 ++ nokey_name.dirhash[0] = hash;
17949 ++ nokey_name.dirhash[1] = minor_hash;
17950 ++
17951 + if (iname->len <= sizeof(nokey_name.bytes)) {
17952 + memcpy(nokey_name.bytes, iname->name, iname->len);
17953 + size = offsetof(struct fscrypt_nokey_name, bytes[iname->len]);
17954 +diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
17955 +index 261293fb70974..bca9c6658a7c5 100644
17956 +--- a/fs/crypto/keysetup.c
17957 ++++ b/fs/crypto/keysetup.c
17958 +@@ -210,15 +210,40 @@ out_unlock:
17959 + return err;
17960 + }
17961 +
17962 ++/*
17963 ++ * Derive a SipHash key from the given fscrypt master key and the given
17964 ++ * application-specific information string.
17965 ++ *
17966 ++ * Note that the KDF produces a byte array, but the SipHash APIs expect the key
17967 ++ * as a pair of 64-bit words. Therefore, on big endian CPUs we have to do an
17968 ++ * endianness swap in order to get the same results as on little endian CPUs.
17969 ++ */
17970 ++static int fscrypt_derive_siphash_key(const struct fscrypt_master_key *mk,
17971 ++ u8 context, const u8 *info,
17972 ++ unsigned int infolen, siphash_key_t *key)
17973 ++{
17974 ++ int err;
17975 ++
17976 ++ err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, context, info, infolen,
17977 ++ (u8 *)key, sizeof(*key));
17978 ++ if (err)
17979 ++ return err;
17980 ++
17981 ++ BUILD_BUG_ON(sizeof(*key) != 16);
17982 ++ BUILD_BUG_ON(ARRAY_SIZE(key->key) != 2);
17983 ++ le64_to_cpus(&key->key[0]);
17984 ++ le64_to_cpus(&key->key[1]);
17985 ++ return 0;
17986 ++}
17987 ++
17988 + int fscrypt_derive_dirhash_key(struct fscrypt_info *ci,
17989 + const struct fscrypt_master_key *mk)
17990 + {
17991 + int err;
17992 +
17993 +- err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, HKDF_CONTEXT_DIRHASH_KEY,
17994 +- ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
17995 +- (u8 *)&ci->ci_dirhash_key,
17996 +- sizeof(ci->ci_dirhash_key));
17997 ++ err = fscrypt_derive_siphash_key(mk, HKDF_CONTEXT_DIRHASH_KEY,
17998 ++ ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
17999 ++ &ci->ci_dirhash_key);
18000 + if (err)
18001 + return err;
18002 + ci->ci_dirhash_key_initialized = true;
18003 +@@ -253,10 +278,9 @@ static int fscrypt_setup_iv_ino_lblk_32_key(struct fscrypt_info *ci,
18004 + if (mk->mk_ino_hash_key_initialized)
18005 + goto unlock;
18006 +
18007 +- err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
18008 +- HKDF_CONTEXT_INODE_HASH_KEY, NULL, 0,
18009 +- (u8 *)&mk->mk_ino_hash_key,
18010 +- sizeof(mk->mk_ino_hash_key));
18011 ++ err = fscrypt_derive_siphash_key(mk,
18012 ++ HKDF_CONTEXT_INODE_HASH_KEY,
18013 ++ NULL, 0, &mk->mk_ino_hash_key);
18014 + if (err)
18015 + goto unlock;
18016 + /* pairs with smp_load_acquire() above */
18017 +diff --git a/fs/dax.c b/fs/dax.c
18018 +index df5485b4bddf1..d5d7b9393bcaa 100644
18019 +--- a/fs/dax.c
18020 ++++ b/fs/dax.c
18021 +@@ -488,10 +488,11 @@ static void *grab_mapping_entry(struct xa_state *xas,
18022 + struct address_space *mapping, unsigned int order)
18023 + {
18024 + unsigned long index = xas->xa_index;
18025 +- bool pmd_downgrade = false; /* splitting PMD entry into PTE entries? */
18026 ++ bool pmd_downgrade; /* splitting PMD entry into PTE entries? */
18027 + void *entry;
18028 +
18029 + retry:
18030 ++ pmd_downgrade = false;
18031 + xas_lock_irq(xas);
18032 + entry = get_unlocked_entry(xas, order);
18033 +
18034 +diff --git a/fs/dlm/config.c b/fs/dlm/config.c
18035 +index 88d95d96e36c5..52bcda64172aa 100644
18036 +--- a/fs/dlm/config.c
18037 ++++ b/fs/dlm/config.c
18038 +@@ -79,6 +79,9 @@ struct dlm_cluster {
18039 + unsigned int cl_new_rsb_count;
18040 + unsigned int cl_recover_callbacks;
18041 + char cl_cluster_name[DLM_LOCKSPACE_LEN];
18042 ++
18043 ++ struct dlm_spaces *sps;
18044 ++ struct dlm_comms *cms;
18045 + };
18046 +
18047 + static struct dlm_cluster *config_item_to_cluster(struct config_item *i)
18048 +@@ -409,6 +412,9 @@ static struct config_group *make_cluster(struct config_group *g,
18049 + if (!cl || !sps || !cms)
18050 + goto fail;
18051 +
18052 ++ cl->sps = sps;
18053 ++ cl->cms = cms;
18054 ++
18055 + config_group_init_type_name(&cl->group, name, &cluster_type);
18056 + config_group_init_type_name(&sps->ss_group, "spaces", &spaces_type);
18057 + config_group_init_type_name(&cms->cs_group, "comms", &comms_type);
18058 +@@ -458,6 +464,9 @@ static void drop_cluster(struct config_group *g, struct config_item *i)
18059 + static void release_cluster(struct config_item *i)
18060 + {
18061 + struct dlm_cluster *cl = config_item_to_cluster(i);
18062 ++
18063 ++ kfree(cl->sps);
18064 ++ kfree(cl->cms);
18065 + kfree(cl);
18066 + }
18067 +
18068 +diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
18069 +index 45c2fdaf34c4d..d2a0ea0acca34 100644
18070 +--- a/fs/dlm/lowcomms.c
18071 ++++ b/fs/dlm/lowcomms.c
18072 +@@ -79,6 +79,8 @@ struct connection {
18073 + #define CF_CLOSING 8
18074 + #define CF_SHUTDOWN 9
18075 + #define CF_CONNECTED 10
18076 ++#define CF_RECONNECT 11
18077 ++#define CF_DELAY_CONNECT 12
18078 + struct list_head writequeue; /* List of outgoing writequeue_entries */
18079 + spinlock_t writequeue_lock;
18080 + void (*connect_action) (struct connection *); /* What to do to connect */
18081 +@@ -87,6 +89,7 @@ struct connection {
18082 + #define MAX_CONNECT_RETRIES 3
18083 + struct hlist_node list;
18084 + struct connection *othercon;
18085 ++ struct connection *sendcon;
18086 + struct work_struct rwork; /* Receive workqueue */
18087 + struct work_struct swork; /* Send workqueue */
18088 + wait_queue_head_t shutdown_wait; /* wait for graceful shutdown */
18089 +@@ -584,6 +587,22 @@ static void lowcomms_error_report(struct sock *sk)
18090 + dlm_config.ci_tcp_port, sk->sk_err,
18091 + sk->sk_err_soft);
18092 + }
18093 ++
18094 ++ /* below sendcon only handling */
18095 ++ if (test_bit(CF_IS_OTHERCON, &con->flags))
18096 ++ con = con->sendcon;
18097 ++
18098 ++ switch (sk->sk_err) {
18099 ++ case ECONNREFUSED:
18100 ++ set_bit(CF_DELAY_CONNECT, &con->flags);
18101 ++ break;
18102 ++ default:
18103 ++ break;
18104 ++ }
18105 ++
18106 ++ if (!test_and_set_bit(CF_RECONNECT, &con->flags))
18107 ++ queue_work(send_workqueue, &con->swork);
18108 ++
18109 + out:
18110 + read_unlock_bh(&sk->sk_callback_lock);
18111 + if (orig_report)
18112 +@@ -695,12 +714,14 @@ static void close_connection(struct connection *con, bool and_other,
18113 +
18114 + if (con->othercon && and_other) {
18115 + /* Will only re-enter once. */
18116 +- close_connection(con->othercon, false, true, true);
18117 ++ close_connection(con->othercon, false, tx, rx);
18118 + }
18119 +
18120 + con->rx_leftover = 0;
18121 + con->retries = 0;
18122 + clear_bit(CF_CONNECTED, &con->flags);
18123 ++ clear_bit(CF_DELAY_CONNECT, &con->flags);
18124 ++ clear_bit(CF_RECONNECT, &con->flags);
18125 + mutex_unlock(&con->sock_mutex);
18126 + clear_bit(CF_CLOSING, &con->flags);
18127 + }
18128 +@@ -839,18 +860,15 @@ out_resched:
18129 +
18130 + out_close:
18131 + mutex_unlock(&con->sock_mutex);
18132 +- if (ret != -EAGAIN) {
18133 +- /* Reconnect when there is something to send */
18134 ++ if (ret == 0) {
18135 + close_connection(con, false, true, false);
18136 +- if (ret == 0) {
18137 +- log_print("connection %p got EOF from %d",
18138 +- con, con->nodeid);
18139 +- /* handling for tcp shutdown */
18140 +- clear_bit(CF_SHUTDOWN, &con->flags);
18141 +- wake_up(&con->shutdown_wait);
18142 +- /* signal to breaking receive worker */
18143 +- ret = -1;
18144 +- }
18145 ++ log_print("connection %p got EOF from %d",
18146 ++ con, con->nodeid);
18147 ++ /* handling for tcp shutdown */
18148 ++ clear_bit(CF_SHUTDOWN, &con->flags);
18149 ++ wake_up(&con->shutdown_wait);
18150 ++ /* signal to breaking receive worker */
18151 ++ ret = -1;
18152 + }
18153 + return ret;
18154 + }
18155 +@@ -933,6 +951,7 @@ static int accept_from_sock(struct listen_connection *con)
18156 + }
18157 +
18158 + newcon->othercon = othercon;
18159 ++ othercon->sendcon = newcon;
18160 + } else {
18161 + /* close other sock con if we have something new */
18162 + close_connection(othercon, false, true, false);
18163 +@@ -1478,7 +1497,7 @@ static void send_to_sock(struct connection *con)
18164 + cond_resched();
18165 + goto out;
18166 + } else if (ret < 0)
18167 +- goto send_error;
18168 ++ goto out;
18169 + }
18170 +
18171 + /* Don't starve people filling buffers */
18172 +@@ -1495,14 +1514,6 @@ out:
18173 + mutex_unlock(&con->sock_mutex);
18174 + return;
18175 +
18176 +-send_error:
18177 +- mutex_unlock(&con->sock_mutex);
18178 +- close_connection(con, false, false, true);
18179 +- /* Requeue the send work. When the work daemon runs again, it will try
18180 +- a new connection, then call this function again. */
18181 +- queue_work(send_workqueue, &con->swork);
18182 +- return;
18183 +-
18184 + out_connect:
18185 + mutex_unlock(&con->sock_mutex);
18186 + queue_work(send_workqueue, &con->swork);
18187 +@@ -1574,18 +1585,30 @@ static void process_send_sockets(struct work_struct *work)
18188 + struct connection *con = container_of(work, struct connection, swork);
18189 +
18190 + clear_bit(CF_WRITE_PENDING, &con->flags);
18191 +- if (con->sock == NULL) /* not mutex protected so check it inside too */
18192 ++
18193 ++ if (test_and_clear_bit(CF_RECONNECT, &con->flags))
18194 ++ close_connection(con, false, false, true);
18195 ++
18196 ++ if (con->sock == NULL) { /* not mutex protected so check it inside too */
18197 ++ if (test_and_clear_bit(CF_DELAY_CONNECT, &con->flags))
18198 ++ msleep(1000);
18199 + con->connect_action(con);
18200 ++ }
18201 + if (!list_empty(&con->writequeue))
18202 + send_to_sock(con);
18203 + }
18204 +
18205 + static void work_stop(void)
18206 + {
18207 +- if (recv_workqueue)
18208 ++ if (recv_workqueue) {
18209 + destroy_workqueue(recv_workqueue);
18210 +- if (send_workqueue)
18211 ++ recv_workqueue = NULL;
18212 ++ }
18213 ++
18214 ++ if (send_workqueue) {
18215 + destroy_workqueue(send_workqueue);
18216 ++ send_workqueue = NULL;
18217 ++ }
18218 + }
18219 +
18220 + static int work_start(void)
18221 +@@ -1602,6 +1625,7 @@ static int work_start(void)
18222 + if (!send_workqueue) {
18223 + log_print("can't start dlm_send");
18224 + destroy_workqueue(recv_workqueue);
18225 ++ recv_workqueue = NULL;
18226 + return -ENOMEM;
18227 + }
18228 +
18229 +@@ -1733,7 +1757,7 @@ int dlm_lowcomms_start(void)
18230 +
18231 + error = work_start();
18232 + if (error)
18233 +- goto fail;
18234 ++ goto fail_local;
18235 +
18236 + dlm_allow_conn = 1;
18237 +
18238 +@@ -1750,6 +1774,9 @@ int dlm_lowcomms_start(void)
18239 + fail_unlisten:
18240 + dlm_allow_conn = 0;
18241 + dlm_close_sock(&listen_con.sock);
18242 ++ work_stop();
18243 ++fail_local:
18244 ++ deinit_local();
18245 + fail:
18246 + return error;
18247 + }
18248 +diff --git a/fs/erofs/super.c b/fs/erofs/super.c
18249 +index d5a6b9b888a56..f31a08d86be89 100644
18250 +--- a/fs/erofs/super.c
18251 ++++ b/fs/erofs/super.c
18252 +@@ -155,6 +155,7 @@ static int erofs_read_superblock(struct super_block *sb)
18253 + goto out;
18254 + }
18255 +
18256 ++ ret = -EINVAL;
18257 + blkszbits = dsb->blkszbits;
18258 + /* 9(512 bytes) + LOG_SECTORS_PER_BLOCK == LOG_BLOCK_SIZE */
18259 + if (blkszbits != LOG_BLOCK_SIZE) {
18260 +diff --git a/fs/exec.c b/fs/exec.c
18261 +index 18594f11c31fe..d7c4187ca023e 100644
18262 +--- a/fs/exec.c
18263 ++++ b/fs/exec.c
18264 +@@ -1360,6 +1360,10 @@ int begin_new_exec(struct linux_binprm * bprm)
18265 + WRITE_ONCE(me->self_exec_id, me->self_exec_id + 1);
18266 + flush_signal_handlers(me, 0);
18267 +
18268 ++ retval = set_cred_ucounts(bprm->cred);
18269 ++ if (retval < 0)
18270 ++ goto out_unlock;
18271 ++
18272 + /*
18273 + * install the new credentials for this executable
18274 + */
18275 +diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
18276 +index 916797077aad4..dedbc55cd48f5 100644
18277 +--- a/fs/exfat/dir.c
18278 ++++ b/fs/exfat/dir.c
18279 +@@ -62,7 +62,7 @@ static void exfat_get_uniname_from_ext_entry(struct super_block *sb,
18280 + static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_entry *dir_entry)
18281 + {
18282 + int i, dentries_per_clu, dentries_per_clu_bits = 0, num_ext;
18283 +- unsigned int type, clu_offset;
18284 ++ unsigned int type, clu_offset, max_dentries;
18285 + sector_t sector;
18286 + struct exfat_chain dir, clu;
18287 + struct exfat_uni_name uni_name;
18288 +@@ -85,6 +85,8 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
18289 +
18290 + dentries_per_clu = sbi->dentries_per_clu;
18291 + dentries_per_clu_bits = ilog2(dentries_per_clu);
18292 ++ max_dentries = (unsigned int)min_t(u64, MAX_EXFAT_DENTRIES,
18293 ++ (u64)sbi->num_clusters << dentries_per_clu_bits);
18294 +
18295 + clu_offset = dentry >> dentries_per_clu_bits;
18296 + exfat_chain_dup(&clu, &dir);
18297 +@@ -108,7 +110,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
18298 + }
18299 + }
18300 +
18301 +- while (clu.dir != EXFAT_EOF_CLUSTER) {
18302 ++ while (clu.dir != EXFAT_EOF_CLUSTER && dentry < max_dentries) {
18303 + i = dentry & (dentries_per_clu - 1);
18304 +
18305 + for ( ; i < dentries_per_clu; i++, dentry++) {
18306 +@@ -244,7 +246,7 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
18307 + if (err)
18308 + goto unlock;
18309 + get_new:
18310 +- if (cpos >= i_size_read(inode))
18311 ++ if (ei->flags == ALLOC_NO_FAT_CHAIN && cpos >= i_size_read(inode))
18312 + goto end_of_dir;
18313 +
18314 + err = exfat_readdir(inode, &cpos, &de);
18315 +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
18316 +index cbf37b2cf871e..1293de50c8d48 100644
18317 +--- a/fs/ext4/extents.c
18318 ++++ b/fs/ext4/extents.c
18319 +@@ -825,6 +825,7 @@ void ext4_ext_tree_init(handle_t *handle, struct inode *inode)
18320 + eh->eh_entries = 0;
18321 + eh->eh_magic = EXT4_EXT_MAGIC;
18322 + eh->eh_max = cpu_to_le16(ext4_ext_space_root(inode, 0));
18323 ++ eh->eh_generation = 0;
18324 + ext4_mark_inode_dirty(handle, inode);
18325 + }
18326 +
18327 +@@ -1090,6 +1091,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
18328 + neh->eh_max = cpu_to_le16(ext4_ext_space_block(inode, 0));
18329 + neh->eh_magic = EXT4_EXT_MAGIC;
18330 + neh->eh_depth = 0;
18331 ++ neh->eh_generation = 0;
18332 +
18333 + /* move remainder of path[depth] to the new leaf */
18334 + if (unlikely(path[depth].p_hdr->eh_entries !=
18335 +@@ -1167,6 +1169,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
18336 + neh->eh_magic = EXT4_EXT_MAGIC;
18337 + neh->eh_max = cpu_to_le16(ext4_ext_space_block_idx(inode, 0));
18338 + neh->eh_depth = cpu_to_le16(depth - i);
18339 ++ neh->eh_generation = 0;
18340 + fidx = EXT_FIRST_INDEX(neh);
18341 + fidx->ei_block = border;
18342 + ext4_idx_store_pblock(fidx, oldblock);
18343 +diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
18344 +index 0a729027322dd..9a3a8996aacf7 100644
18345 +--- a/fs/ext4/extents_status.c
18346 ++++ b/fs/ext4/extents_status.c
18347 +@@ -1574,11 +1574,9 @@ static unsigned long ext4_es_scan(struct shrinker *shrink,
18348 + ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
18349 + trace_ext4_es_shrink_scan_enter(sbi->s_sb, nr_to_scan, ret);
18350 +
18351 +- if (!nr_to_scan)
18352 +- return ret;
18353 +-
18354 + nr_shrunk = __es_shrink(sbi, nr_to_scan, NULL);
18355 +
18356 ++ ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
18357 + trace_ext4_es_shrink_scan_exit(sbi->s_sb, nr_shrunk, ret);
18358 + return nr_shrunk;
18359 + }
18360 +diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
18361 +index edbaed073ac5c..1b6bfb3f303c3 100644
18362 +--- a/fs/ext4/ialloc.c
18363 ++++ b/fs/ext4/ialloc.c
18364 +@@ -402,7 +402,7 @@ static void get_orlov_stats(struct super_block *sb, ext4_group_t g,
18365 + *
18366 + * We always try to spread first-level directories.
18367 + *
18368 +- * If there are blockgroups with both free inodes and free blocks counts
18369 ++ * If there are blockgroups with both free inodes and free clusters counts
18370 + * not worse than average we return one with smallest directory count.
18371 + * Otherwise we simply return a random group.
18372 + *
18373 +@@ -411,7 +411,7 @@ static void get_orlov_stats(struct super_block *sb, ext4_group_t g,
18374 + * It's OK to put directory into a group unless
18375 + * it has too many directories already (max_dirs) or
18376 + * it has too few free inodes left (min_inodes) or
18377 +- * it has too few free blocks left (min_blocks) or
18378 ++ * it has too few free clusters left (min_clusters) or
18379 + * Parent's group is preferred, if it doesn't satisfy these
18380 + * conditions we search cyclically through the rest. If none
18381 + * of the groups look good we just look for a group with more
18382 +@@ -427,7 +427,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
18383 + ext4_group_t real_ngroups = ext4_get_groups_count(sb);
18384 + int inodes_per_group = EXT4_INODES_PER_GROUP(sb);
18385 + unsigned int freei, avefreei, grp_free;
18386 +- ext4_fsblk_t freeb, avefreec;
18387 ++ ext4_fsblk_t freec, avefreec;
18388 + unsigned int ndirs;
18389 + int max_dirs, min_inodes;
18390 + ext4_grpblk_t min_clusters;
18391 +@@ -446,9 +446,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
18392 +
18393 + freei = percpu_counter_read_positive(&sbi->s_freeinodes_counter);
18394 + avefreei = freei / ngroups;
18395 +- freeb = EXT4_C2B(sbi,
18396 +- percpu_counter_read_positive(&sbi->s_freeclusters_counter));
18397 +- avefreec = freeb;
18398 ++ freec = percpu_counter_read_positive(&sbi->s_freeclusters_counter);
18399 ++ avefreec = freec;
18400 + do_div(avefreec, ngroups);
18401 + ndirs = percpu_counter_read_positive(&sbi->s_dirs_counter);
18402 +
18403 +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
18404 +index 0948a43f1b3df..7cebbb2d2e34d 100644
18405 +--- a/fs/ext4/inode.c
18406 ++++ b/fs/ext4/inode.c
18407 +@@ -3420,7 +3420,7 @@ retry:
18408 + * i_disksize out to i_size. This could be beyond where direct I/O is
18409 + * happening and thus expose allocated blocks to direct I/O reads.
18410 + */
18411 +- else if ((map->m_lblk * (1 << blkbits)) >= i_size_read(inode))
18412 ++ else if (((loff_t)map->m_lblk << blkbits) >= i_size_read(inode))
18413 + m_flags = EXT4_GET_BLOCKS_CREATE;
18414 + else if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
18415 + m_flags = EXT4_GET_BLOCKS_IO_CREATE_EXT;
18416 +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
18417 +index d24cb3dc79fff..c51fa945424e1 100644
18418 +--- a/fs/ext4/mballoc.c
18419 ++++ b/fs/ext4/mballoc.c
18420 +@@ -1574,10 +1574,11 @@ static int mb_find_extent(struct ext4_buddy *e4b, int block,
18421 + if (ex->fe_start + ex->fe_len > EXT4_CLUSTERS_PER_GROUP(e4b->bd_sb)) {
18422 + /* Should never happen! (but apparently sometimes does?!?) */
18423 + WARN_ON(1);
18424 +- ext4_error(e4b->bd_sb, "corruption or bug in mb_find_extent "
18425 +- "block=%d, order=%d needed=%d ex=%u/%d/%d@%u",
18426 +- block, order, needed, ex->fe_group, ex->fe_start,
18427 +- ex->fe_len, ex->fe_logical);
18428 ++ ext4_grp_locked_error(e4b->bd_sb, e4b->bd_group, 0, 0,
18429 ++ "corruption or bug in mb_find_extent "
18430 ++ "block=%d, order=%d needed=%d ex=%u/%d/%d@%u",
18431 ++ block, order, needed, ex->fe_group, ex->fe_start,
18432 ++ ex->fe_len, ex->fe_logical);
18433 + ex->fe_len = 0;
18434 + ex->fe_start = 0;
18435 + ex->fe_group = 0;
18436 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
18437 +index 0e3a847b5d279..4a869bc5271b9 100644
18438 +--- a/fs/ext4/super.c
18439 ++++ b/fs/ext4/super.c
18440 +@@ -3084,8 +3084,15 @@ static void ext4_orphan_cleanup(struct super_block *sb,
18441 + inode_lock(inode);
18442 + truncate_inode_pages(inode->i_mapping, inode->i_size);
18443 + ret = ext4_truncate(inode);
18444 +- if (ret)
18445 ++ if (ret) {
18446 ++ /*
18447 ++ * We need to clean up the in-core orphan list
18448 ++ * manually if ext4_truncate() failed to get a
18449 ++ * transaction handle.
18450 ++ */
18451 ++ ext4_orphan_del(NULL, inode);
18452 + ext4_std_error(inode->i_sb, ret);
18453 ++ }
18454 + inode_unlock(inode);
18455 + nr_truncates++;
18456 + } else {
18457 +@@ -5032,6 +5039,7 @@ no_journal:
18458 + ext4_msg(sb, KERN_ERR,
18459 + "unable to initialize "
18460 + "flex_bg meta info!");
18461 ++ ret = -ENOMEM;
18462 + goto failed_mount6;
18463 + }
18464 +
18465 +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
18466 +index 8804a5d513801..3c8a003ee6cf7 100644
18467 +--- a/fs/f2fs/data.c
18468 ++++ b/fs/f2fs/data.c
18469 +@@ -3971,6 +3971,12 @@ static int f2fs_swap_activate(struct swap_info_struct *sis, struct file *file,
18470 + if (f2fs_readonly(F2FS_I_SB(inode)->sb))
18471 + return -EROFS;
18472 +
18473 ++ if (f2fs_lfs_mode(F2FS_I_SB(inode))) {
18474 ++ f2fs_err(F2FS_I_SB(inode),
18475 ++ "Swapfile not supported in LFS mode");
18476 ++ return -EINVAL;
18477 ++ }
18478 ++
18479 + ret = f2fs_convert_inline_inode(inode);
18480 + if (ret)
18481 + return ret;
18482 +diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
18483 +index e38a7f6921dd6..31eaac5b70c95 100644
18484 +--- a/fs/f2fs/sysfs.c
18485 ++++ b/fs/f2fs/sysfs.c
18486 +@@ -525,6 +525,7 @@ enum feat_id {
18487 + FEAT_CASEFOLD,
18488 + FEAT_COMPRESSION,
18489 + FEAT_TEST_DUMMY_ENCRYPTION_V2,
18490 ++ FEAT_ENCRYPTED_CASEFOLD,
18491 + };
18492 +
18493 + static ssize_t f2fs_feature_show(struct f2fs_attr *a,
18494 +@@ -546,6 +547,7 @@ static ssize_t f2fs_feature_show(struct f2fs_attr *a,
18495 + case FEAT_CASEFOLD:
18496 + case FEAT_COMPRESSION:
18497 + case FEAT_TEST_DUMMY_ENCRYPTION_V2:
18498 ++ case FEAT_ENCRYPTED_CASEFOLD:
18499 + return sprintf(buf, "supported\n");
18500 + }
18501 + return 0;
18502 +@@ -649,7 +651,10 @@ F2FS_GENERAL_RO_ATTR(avg_vblocks);
18503 + #ifdef CONFIG_FS_ENCRYPTION
18504 + F2FS_FEATURE_RO_ATTR(encryption, FEAT_CRYPTO);
18505 + F2FS_FEATURE_RO_ATTR(test_dummy_encryption_v2, FEAT_TEST_DUMMY_ENCRYPTION_V2);
18506 ++#ifdef CONFIG_UNICODE
18507 ++F2FS_FEATURE_RO_ATTR(encrypted_casefold, FEAT_ENCRYPTED_CASEFOLD);
18508 + #endif
18509 ++#endif /* CONFIG_FS_ENCRYPTION */
18510 + #ifdef CONFIG_BLK_DEV_ZONED
18511 + F2FS_FEATURE_RO_ATTR(block_zoned, FEAT_BLKZONED);
18512 + #endif
18513 +@@ -739,7 +744,10 @@ static struct attribute *f2fs_feat_attrs[] = {
18514 + #ifdef CONFIG_FS_ENCRYPTION
18515 + ATTR_LIST(encryption),
18516 + ATTR_LIST(test_dummy_encryption_v2),
18517 ++#ifdef CONFIG_UNICODE
18518 ++ ATTR_LIST(encrypted_casefold),
18519 + #endif
18520 ++#endif /* CONFIG_FS_ENCRYPTION */
18521 + #ifdef CONFIG_BLK_DEV_ZONED
18522 + ATTR_LIST(block_zoned),
18523 + #endif
18524 +diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
18525 +index e91980f493884..8d4130b01423b 100644
18526 +--- a/fs/fs-writeback.c
18527 ++++ b/fs/fs-writeback.c
18528 +@@ -505,12 +505,19 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
18529 + if (!isw)
18530 + return;
18531 +
18532 ++ atomic_inc(&isw_nr_in_flight);
18533 ++
18534 + /* find and pin the new wb */
18535 + rcu_read_lock();
18536 + memcg_css = css_from_id(new_wb_id, &memory_cgrp_subsys);
18537 +- if (memcg_css)
18538 +- isw->new_wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
18539 ++ if (memcg_css && !css_tryget(memcg_css))
18540 ++ memcg_css = NULL;
18541 + rcu_read_unlock();
18542 ++ if (!memcg_css)
18543 ++ goto out_free;
18544 ++
18545 ++ isw->new_wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
18546 ++ css_put(memcg_css);
18547 + if (!isw->new_wb)
18548 + goto out_free;
18549 +
18550 +@@ -535,11 +542,10 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
18551 + * Let's continue after I_WB_SWITCH is guaranteed to be visible.
18552 + */
18553 + call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
18554 +-
18555 +- atomic_inc(&isw_nr_in_flight);
18556 + return;
18557 +
18558 + out_free:
18559 ++ atomic_dec(&isw_nr_in_flight);
18560 + if (isw->new_wb)
18561 + wb_put(isw->new_wb);
18562 + kfree(isw);
18563 +@@ -2205,28 +2211,6 @@ int dirtytime_interval_handler(struct ctl_table *table, int write,
18564 + return ret;
18565 + }
18566 +
18567 +-static noinline void block_dump___mark_inode_dirty(struct inode *inode)
18568 +-{
18569 +- if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {
18570 +- struct dentry *dentry;
18571 +- const char *name = "?";
18572 +-
18573 +- dentry = d_find_alias(inode);
18574 +- if (dentry) {
18575 +- spin_lock(&dentry->d_lock);
18576 +- name = (const char *) dentry->d_name.name;
18577 +- }
18578 +- printk(KERN_DEBUG
18579 +- "%s(%d): dirtied inode %lu (%s) on %s\n",
18580 +- current->comm, task_pid_nr(current), inode->i_ino,
18581 +- name, inode->i_sb->s_id);
18582 +- if (dentry) {
18583 +- spin_unlock(&dentry->d_lock);
18584 +- dput(dentry);
18585 +- }
18586 +- }
18587 +-}
18588 +-
18589 + /**
18590 + * __mark_inode_dirty - internal function to mark an inode dirty
18591 + *
18592 +@@ -2296,9 +2280,6 @@ void __mark_inode_dirty(struct inode *inode, int flags)
18593 + (dirtytime && (inode->i_state & I_DIRTY_INODE)))
18594 + return;
18595 +
18596 +- if (unlikely(block_dump))
18597 +- block_dump___mark_inode_dirty(inode);
18598 +-
18599 + spin_lock(&inode->i_lock);
18600 + if (dirtytime && (inode->i_state & I_DIRTY_INODE))
18601 + goto out_unlock_inode;
18602 +diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
18603 +index a5ceccc5ef00f..b8d58aa082062 100644
18604 +--- a/fs/fuse/dev.c
18605 ++++ b/fs/fuse/dev.c
18606 +@@ -783,6 +783,7 @@ static int fuse_check_page(struct page *page)
18607 + 1 << PG_uptodate |
18608 + 1 << PG_lru |
18609 + 1 << PG_active |
18610 ++ 1 << PG_workingset |
18611 + 1 << PG_reclaim |
18612 + 1 << PG_waiters))) {
18613 + dump_page(page, "fuse: trying to steal weird page");
18614 +@@ -1271,6 +1272,15 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file,
18615 + goto restart;
18616 + }
18617 + spin_lock(&fpq->lock);
18618 ++ /*
18619 ++ * Must not put request on fpq->io queue after having been shut down by
18620 ++ * fuse_abort_conn()
18621 ++ */
18622 ++ if (!fpq->connected) {
18623 ++ req->out.h.error = err = -ECONNABORTED;
18624 ++ goto out_end;
18625 ++
18626 ++ }
18627 + list_add(&req->list, &fpq->io);
18628 + spin_unlock(&fpq->lock);
18629 + cs->req = req;
18630 +@@ -1857,7 +1867,7 @@ static ssize_t fuse_dev_do_write(struct fuse_dev *fud,
18631 + }
18632 +
18633 + err = -EINVAL;
18634 +- if (oh.error <= -1000 || oh.error > 0)
18635 ++ if (oh.error <= -512 || oh.error > 0)
18636 + goto copy_finish;
18637 +
18638 + spin_lock(&fpq->lock);
18639 +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
18640 +index 06a18700a8455..5e09741282bfd 100644
18641 +--- a/fs/fuse/dir.c
18642 ++++ b/fs/fuse/dir.c
18643 +@@ -339,18 +339,33 @@ static struct vfsmount *fuse_dentry_automount(struct path *path)
18644 +
18645 + /* Initialize superblock, making @mp_fi its root */
18646 + err = fuse_fill_super_submount(sb, mp_fi);
18647 +- if (err)
18648 ++ if (err) {
18649 ++ fuse_conn_put(fc);
18650 ++ kfree(fm);
18651 ++ sb->s_fs_info = NULL;
18652 + goto out_put_sb;
18653 ++ }
18654 ++
18655 ++ down_write(&fc->killsb);
18656 ++ list_add_tail(&fm->fc_entry, &fc->mounts);
18657 ++ up_write(&fc->killsb);
18658 +
18659 + sb->s_flags |= SB_ACTIVE;
18660 + fsc->root = dget(sb->s_root);
18661 ++
18662 ++ /*
18663 ++ * FIXME: setting SB_BORN requires a write barrier for
18664 ++ * super_cache_count(). We should actually come
18665 ++ * up with a proper ->get_tree() implementation
18666 ++ * for submounts and call vfs_get_tree() to take
18667 ++ * care of the write barrier.
18668 ++ */
18669 ++ smp_wmb();
18670 ++ sb->s_flags |= SB_BORN;
18671 ++
18672 + /* We are done configuring the superblock, so unlock it */
18673 + up_write(&sb->s_umount);
18674 +
18675 +- down_write(&fc->killsb);
18676 +- list_add_tail(&fm->fc_entry, &fc->mounts);
18677 +- up_write(&fc->killsb);
18678 +-
18679 + /* Create the submount */
18680 + mnt = vfs_create_mount(fsc);
18681 + if (IS_ERR(mnt)) {
18682 +diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
18683 +index a86e6810237a1..d7e477ecb973d 100644
18684 +--- a/fs/gfs2/file.c
18685 ++++ b/fs/gfs2/file.c
18686 +@@ -474,8 +474,8 @@ static vm_fault_t gfs2_page_mkwrite(struct vm_fault *vmf)
18687 + file_update_time(vmf->vma->vm_file);
18688 +
18689 + /* page is wholly or partially inside EOF */
18690 +- if (offset > size - PAGE_SIZE)
18691 +- length = offset_in_page(size);
18692 ++ if (size - offset < PAGE_SIZE)
18693 ++ length = size - offset;
18694 + else
18695 + length = PAGE_SIZE;
18696 +
18697 +diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
18698 +index aa4136055a83c..e3bd47454e49b 100644
18699 +--- a/fs/gfs2/ops_fstype.c
18700 ++++ b/fs/gfs2/ops_fstype.c
18701 +@@ -689,6 +689,7 @@ static int init_statfs(struct gfs2_sbd *sdp)
18702 + }
18703 +
18704 + iput(pn);
18705 ++ pn = NULL;
18706 + ip = GFS2_I(sdp->sd_sc_inode);
18707 + error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0,
18708 + &sdp->sd_sc_gh);
18709 +diff --git a/fs/io_uring.c b/fs/io_uring.c
18710 +index 359d1abb089c4..58ac04cca587f 100644
18711 +--- a/fs/io_uring.c
18712 ++++ b/fs/io_uring.c
18713 +@@ -2683,7 +2683,7 @@ static bool io_file_supports_async(struct file *file, int rw)
18714 + return true;
18715 + return false;
18716 + }
18717 +- if (S_ISCHR(mode) || S_ISSOCK(mode))
18718 ++ if (S_ISSOCK(mode))
18719 + return true;
18720 + if (S_ISREG(mode)) {
18721 + if (IS_ENABLED(CONFIG_BLOCK) &&
18722 +@@ -3497,6 +3497,10 @@ static int io_renameat_prep(struct io_kiocb *req,
18723 + struct io_rename *ren = &req->rename;
18724 + const char __user *oldf, *newf;
18725 +
18726 ++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
18727 ++ return -EINVAL;
18728 ++ if (sqe->ioprio || sqe->buf_index)
18729 ++ return -EINVAL;
18730 + if (unlikely(req->flags & REQ_F_FIXED_FILE))
18731 + return -EBADF;
18732 +
18733 +@@ -3544,6 +3548,10 @@ static int io_unlinkat_prep(struct io_kiocb *req,
18734 + struct io_unlink *un = &req->unlink;
18735 + const char __user *fname;
18736 +
18737 ++ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
18738 ++ return -EINVAL;
18739 ++ if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
18740 ++ return -EINVAL;
18741 + if (unlikely(req->flags & REQ_F_FIXED_FILE))
18742 + return -EBADF;
18743 +
18744 +diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
18745 +index f5c058b3192ce..4474adb393ca8 100644
18746 +--- a/fs/ntfs/inode.c
18747 ++++ b/fs/ntfs/inode.c
18748 +@@ -477,7 +477,7 @@ err_corrupt_attr:
18749 + }
18750 + file_name_attr = (FILE_NAME_ATTR*)((u8*)attr +
18751 + le16_to_cpu(attr->data.resident.value_offset));
18752 +- p2 = (u8*)attr + le32_to_cpu(attr->data.resident.value_length);
18753 ++ p2 = (u8 *)file_name_attr + le32_to_cpu(attr->data.resident.value_length);
18754 + if (p2 < (u8*)attr || p2 > p)
18755 + goto err_corrupt_attr;
18756 + /* This attribute is ok, but is it in the $Extend directory? */
18757 +diff --git a/fs/ocfs2/filecheck.c b/fs/ocfs2/filecheck.c
18758 +index 50f11bfdc8c2d..82a3edc4aea4b 100644
18759 +--- a/fs/ocfs2/filecheck.c
18760 ++++ b/fs/ocfs2/filecheck.c
18761 +@@ -328,11 +328,7 @@ static ssize_t ocfs2_filecheck_attr_show(struct kobject *kobj,
18762 + ret = snprintf(buf + total, remain, "%lu\t\t%u\t%s\n",
18763 + p->fe_ino, p->fe_done,
18764 + ocfs2_filecheck_error(p->fe_status));
18765 +- if (ret < 0) {
18766 +- total = ret;
18767 +- break;
18768 +- }
18769 +- if (ret == remain) {
18770 ++ if (ret >= remain) {
18771 + /* snprintf() didn't fit */
18772 + total = -E2BIG;
18773 + break;
18774 +diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
18775 +index a191094694c61..03eacb249f379 100644
18776 +--- a/fs/ocfs2/stackglue.c
18777 ++++ b/fs/ocfs2/stackglue.c
18778 +@@ -502,11 +502,7 @@ static ssize_t ocfs2_loaded_cluster_plugins_show(struct kobject *kobj,
18779 + list_for_each_entry(p, &ocfs2_stack_list, sp_list) {
18780 + ret = snprintf(buf, remain, "%s\n",
18781 + p->sp_name);
18782 +- if (ret < 0) {
18783 +- total = ret;
18784 +- break;
18785 +- }
18786 +- if (ret == remain) {
18787 ++ if (ret >= remain) {
18788 + /* snprintf() didn't fit */
18789 + total = -E2BIG;
18790 + break;
18791 +@@ -533,7 +529,7 @@ static ssize_t ocfs2_active_cluster_plugin_show(struct kobject *kobj,
18792 + if (active_stack) {
18793 + ret = snprintf(buf, PAGE_SIZE, "%s\n",
18794 + active_stack->sp_name);
18795 +- if (ret == PAGE_SIZE)
18796 ++ if (ret >= PAGE_SIZE)
18797 + ret = -E2BIG;
18798 + }
18799 + spin_unlock(&ocfs2_stack_lock);
18800 +diff --git a/fs/open.c b/fs/open.c
18801 +index e53af13b5835f..53bc0573c0eca 100644
18802 +--- a/fs/open.c
18803 ++++ b/fs/open.c
18804 +@@ -1002,12 +1002,20 @@ inline struct open_how build_open_how(int flags, umode_t mode)
18805 +
18806 + inline int build_open_flags(const struct open_how *how, struct open_flags *op)
18807 + {
18808 +- int flags = how->flags;
18809 ++ u64 flags = how->flags;
18810 ++ u64 strip = FMODE_NONOTIFY | O_CLOEXEC;
18811 + int lookup_flags = 0;
18812 + int acc_mode = ACC_MODE(flags);
18813 +
18814 +- /* Must never be set by userspace */
18815 +- flags &= ~(FMODE_NONOTIFY | O_CLOEXEC);
18816 ++ BUILD_BUG_ON_MSG(upper_32_bits(VALID_OPEN_FLAGS),
18817 ++ "struct open_flags doesn't yet handle flags > 32 bits");
18818 ++
18819 ++ /*
18820 ++ * Strip flags that either shouldn't be set by userspace like
18821 ++ * FMODE_NONOTIFY or that aren't relevant in determining struct
18822 ++ * open_flags like O_CLOEXEC.
18823 ++ */
18824 ++ flags &= ~strip;
18825 +
18826 + /*
18827 + * Older syscalls implicitly clear all of the invalid flags or argument
18828 +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
18829 +index e862cab695838..a3f27ccec742b 100644
18830 +--- a/fs/proc/task_mmu.c
18831 ++++ b/fs/proc/task_mmu.c
18832 +@@ -829,7 +829,7 @@ static int show_smap(struct seq_file *m, void *v)
18833 + __show_smap(m, &mss, false);
18834 +
18835 + seq_printf(m, "THPeligible: %d\n",
18836 +- transparent_hugepage_enabled(vma));
18837 ++ transparent_hugepage_active(vma));
18838 +
18839 + if (arch_pkeys_enabled())
18840 + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
18841 +diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
18842 +index 8adabde685f13..328da35da3908 100644
18843 +--- a/fs/pstore/Kconfig
18844 ++++ b/fs/pstore/Kconfig
18845 +@@ -173,6 +173,7 @@ config PSTORE_BLK
18846 + tristate "Log panic/oops to a block device"
18847 + depends on PSTORE
18848 + depends on BLOCK
18849 ++ depends on BROKEN
18850 + select PSTORE_ZONE
18851 + default n
18852 + help
18853 +diff --git a/include/asm-generic/pgtable-nop4d.h b/include/asm-generic/pgtable-nop4d.h
18854 +index ce2cbb3c380ff..2f6b1befb1292 100644
18855 +--- a/include/asm-generic/pgtable-nop4d.h
18856 ++++ b/include/asm-generic/pgtable-nop4d.h
18857 +@@ -9,7 +9,6 @@
18858 + typedef struct { pgd_t pgd; } p4d_t;
18859 +
18860 + #define P4D_SHIFT PGDIR_SHIFT
18861 +-#define MAX_PTRS_PER_P4D 1
18862 + #define PTRS_PER_P4D 1
18863 + #define P4D_SIZE (1UL << P4D_SHIFT)
18864 + #define P4D_MASK (~(P4D_SIZE-1))
18865 +diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
18866 +index d683f5e6d7913..b4d43a4af5f79 100644
18867 +--- a/include/asm-generic/preempt.h
18868 ++++ b/include/asm-generic/preempt.h
18869 +@@ -29,7 +29,7 @@ static __always_inline void preempt_count_set(int pc)
18870 + } while (0)
18871 +
18872 + #define init_idle_preempt_count(p, cpu) do { \
18873 +- task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
18874 ++ task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
18875 + } while (0)
18876 +
18877 + static __always_inline void set_preempt_need_resched(void)
18878 +diff --git a/include/clocksource/timer-ti-dm.h b/include/clocksource/timer-ti-dm.h
18879 +index 4c61dade8835f..f6da8a1326398 100644
18880 +--- a/include/clocksource/timer-ti-dm.h
18881 ++++ b/include/clocksource/timer-ti-dm.h
18882 +@@ -74,6 +74,7 @@
18883 + #define OMAP_TIMER_ERRATA_I103_I767 0x80000000
18884 +
18885 + struct timer_regs {
18886 ++ u32 ocp_cfg;
18887 + u32 tidr;
18888 + u32 tier;
18889 + u32 twer;
18890 +diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
18891 +index 0a288dddcf5be..25806141db591 100644
18892 +--- a/include/crypto/internal/hash.h
18893 ++++ b/include/crypto/internal/hash.h
18894 +@@ -75,13 +75,7 @@ void crypto_unregister_ahashes(struct ahash_alg *algs, int count);
18895 + int ahash_register_instance(struct crypto_template *tmpl,
18896 + struct ahash_instance *inst);
18897 +
18898 +-int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
18899 +- unsigned int keylen);
18900 +-
18901 +-static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
18902 +-{
18903 +- return alg->setkey != shash_no_setkey;
18904 +-}
18905 ++bool crypto_shash_alg_has_setkey(struct shash_alg *alg);
18906 +
18907 + static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
18908 + {
18909 +diff --git a/include/dt-bindings/clock/imx8mq-clock.h b/include/dt-bindings/clock/imx8mq-clock.h
18910 +index 82e907ce7bdd3..afa74d7ba1009 100644
18911 +--- a/include/dt-bindings/clock/imx8mq-clock.h
18912 ++++ b/include/dt-bindings/clock/imx8mq-clock.h
18913 +@@ -405,25 +405,6 @@
18914 +
18915 + #define IMX8MQ_VIDEO2_PLL1_REF_SEL 266
18916 +
18917 +-#define IMX8MQ_SYS1_PLL_40M_CG 267
18918 +-#define IMX8MQ_SYS1_PLL_80M_CG 268
18919 +-#define IMX8MQ_SYS1_PLL_100M_CG 269
18920 +-#define IMX8MQ_SYS1_PLL_133M_CG 270
18921 +-#define IMX8MQ_SYS1_PLL_160M_CG 271
18922 +-#define IMX8MQ_SYS1_PLL_200M_CG 272
18923 +-#define IMX8MQ_SYS1_PLL_266M_CG 273
18924 +-#define IMX8MQ_SYS1_PLL_400M_CG 274
18925 +-#define IMX8MQ_SYS1_PLL_800M_CG 275
18926 +-#define IMX8MQ_SYS2_PLL_50M_CG 276
18927 +-#define IMX8MQ_SYS2_PLL_100M_CG 277
18928 +-#define IMX8MQ_SYS2_PLL_125M_CG 278
18929 +-#define IMX8MQ_SYS2_PLL_166M_CG 279
18930 +-#define IMX8MQ_SYS2_PLL_200M_CG 280
18931 +-#define IMX8MQ_SYS2_PLL_250M_CG 281
18932 +-#define IMX8MQ_SYS2_PLL_333M_CG 282
18933 +-#define IMX8MQ_SYS2_PLL_500M_CG 283
18934 +-#define IMX8MQ_SYS2_PLL_1000M_CG 284
18935 +-
18936 + #define IMX8MQ_CLK_GPU_CORE 285
18937 + #define IMX8MQ_CLK_GPU_SHADER 286
18938 + #define IMX8MQ_CLK_M4_CORE 287
18939 +diff --git a/include/linux/bio.h b/include/linux/bio.h
18940 +index d0246c92a6e86..460c96da27ccf 100644
18941 +--- a/include/linux/bio.h
18942 ++++ b/include/linux/bio.h
18943 +@@ -44,9 +44,6 @@ static inline unsigned int bio_max_segs(unsigned int nr_segs)
18944 + #define bio_offset(bio) bio_iter_offset((bio), (bio)->bi_iter)
18945 + #define bio_iovec(bio) bio_iter_iovec((bio), (bio)->bi_iter)
18946 +
18947 +-#define bio_multiple_segments(bio) \
18948 +- ((bio)->bi_iter.bi_size != bio_iovec(bio).bv_len)
18949 +-
18950 + #define bvec_iter_sectors(iter) ((iter).bi_size >> 9)
18951 + #define bvec_iter_end_sector(iter) ((iter).bi_sector + bvec_iter_sectors((iter)))
18952 +
18953 +@@ -271,7 +268,7 @@ static inline void bio_clear_flag(struct bio *bio, unsigned int bit)
18954 +
18955 + static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
18956 + {
18957 +- *bv = bio_iovec(bio);
18958 ++ *bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
18959 + }
18960 +
18961 + static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
18962 +@@ -279,10 +276,9 @@ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
18963 + struct bvec_iter iter = bio->bi_iter;
18964 + int idx;
18965 +
18966 +- if (unlikely(!bio_multiple_segments(bio))) {
18967 +- *bv = bio_iovec(bio);
18968 +- return;
18969 +- }
18970 ++ bio_get_first_bvec(bio, bv);
18971 ++ if (bv->bv_len == bio->bi_iter.bi_size)
18972 ++ return; /* this bio only has a single bvec */
18973 +
18974 + bio_advance_iter(bio, &iter, iter.bi_size);
18975 +
18976 +diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
18977 +index 86d143db65231..83a3ebff74560 100644
18978 +--- a/include/linux/clocksource.h
18979 ++++ b/include/linux/clocksource.h
18980 +@@ -131,7 +131,7 @@ struct clocksource {
18981 + #define CLOCK_SOURCE_UNSTABLE 0x40
18982 + #define CLOCK_SOURCE_SUSPEND_NONSTOP 0x80
18983 + #define CLOCK_SOURCE_RESELECT 0x100
18984 +-
18985 ++#define CLOCK_SOURCE_VERIFY_PERCPU 0x200
18986 + /* simplify initialization of mask field */
18987 + #define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
18988 +
18989 +diff --git a/include/linux/cred.h b/include/linux/cred.h
18990 +index 4c63505036977..66436e6550328 100644
18991 +--- a/include/linux/cred.h
18992 ++++ b/include/linux/cred.h
18993 +@@ -144,6 +144,7 @@ struct cred {
18994 + #endif
18995 + struct user_struct *user; /* real user ID subscription */
18996 + struct user_namespace *user_ns; /* user_ns the caps and keyrings are relative to. */
18997 ++ struct ucounts *ucounts;
18998 + struct group_info *group_info; /* supplementary groups for euid/fsgid */
18999 + /* RCU deletion */
19000 + union {
19001 +@@ -170,6 +171,7 @@ extern int set_security_override_from_ctx(struct cred *, const char *);
19002 + extern int set_create_files_as(struct cred *, struct inode *);
19003 + extern int cred_fscmp(const struct cred *, const struct cred *);
19004 + extern void __init cred_init(void);
19005 ++extern int set_cred_ucounts(struct cred *);
19006 +
19007 + /*
19008 + * check for validity of credentials
19009 +diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
19010 +index 6686a0baa91d3..e72787731a5b2 100644
19011 +--- a/include/linux/huge_mm.h
19012 ++++ b/include/linux/huge_mm.h
19013 +@@ -118,9 +118,34 @@ extern struct kobj_attribute shmem_enabled_attr;
19014 +
19015 + extern unsigned long transparent_hugepage_flags;
19016 +
19017 ++static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
19018 ++ unsigned long haddr)
19019 ++{
19020 ++ /* Don't have to check pgoff for anonymous vma */
19021 ++ if (!vma_is_anonymous(vma)) {
19022 ++ if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
19023 ++ HPAGE_PMD_NR))
19024 ++ return false;
19025 ++ }
19026 ++
19027 ++ if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
19028 ++ return false;
19029 ++ return true;
19030 ++}
19031 ++
19032 ++static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
19033 ++ unsigned long vm_flags)
19034 ++{
19035 ++ /* Explicitly disabled through madvise. */
19036 ++ if ((vm_flags & VM_NOHUGEPAGE) ||
19037 ++ test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
19038 ++ return false;
19039 ++ return true;
19040 ++}
19041 ++
19042 + /*
19043 + * to be used on vmas which are known to support THP.
19044 +- * Use transparent_hugepage_enabled otherwise
19045 ++ * Use transparent_hugepage_active otherwise
19046 + */
19047 + static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
19048 + {
19049 +@@ -131,15 +156,12 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
19050 + if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_NEVER_DAX))
19051 + return false;
19052 +
19053 +- if (vma->vm_flags & VM_NOHUGEPAGE)
19054 ++ if (!transhuge_vma_enabled(vma, vma->vm_flags))
19055 + return false;
19056 +
19057 + if (vma_is_temporary_stack(vma))
19058 + return false;
19059 +
19060 +- if (test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
19061 +- return false;
19062 +-
19063 + if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
19064 + return true;
19065 +
19066 +@@ -153,24 +175,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
19067 + return false;
19068 + }
19069 +
19070 +-bool transparent_hugepage_enabled(struct vm_area_struct *vma);
19071 +-
19072 +-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
19073 +-
19074 +-static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
19075 +- unsigned long haddr)
19076 +-{
19077 +- /* Don't have to check pgoff for anonymous vma */
19078 +- if (!vma_is_anonymous(vma)) {
19079 +- if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
19080 +- (vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
19081 +- return false;
19082 +- }
19083 +-
19084 +- if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
19085 +- return false;
19086 +- return true;
19087 +-}
19088 ++bool transparent_hugepage_active(struct vm_area_struct *vma);
19089 +
19090 + #define transparent_hugepage_use_zero_page() \
19091 + (transparent_hugepage_flags & \
19092 +@@ -357,7 +362,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
19093 + return false;
19094 + }
19095 +
19096 +-static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
19097 ++static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
19098 + {
19099 + return false;
19100 + }
19101 +@@ -368,6 +373,12 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
19102 + return false;
19103 + }
19104 +
19105 ++static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
19106 ++ unsigned long vm_flags)
19107 ++{
19108 ++ return false;
19109 ++}
19110 ++
19111 + static inline void prep_transhuge_page(struct page *page) {}
19112 +
19113 + static inline bool is_transparent_hugepage(struct page *page)
19114 +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
19115 +index 28fa3f9bbbfdd..7bbef3f195ae7 100644
19116 +--- a/include/linux/hugetlb.h
19117 ++++ b/include/linux/hugetlb.h
19118 +@@ -862,6 +862,11 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
19119 + #else /* CONFIG_HUGETLB_PAGE */
19120 + struct hstate {};
19121 +
19122 ++static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
19123 ++{
19124 ++ return NULL;
19125 ++}
19126 ++
19127 + static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
19128 + unsigned long addr,
19129 + int avoid_reserve)
19130 +diff --git a/include/linux/iio/common/cros_ec_sensors_core.h b/include/linux/iio/common/cros_ec_sensors_core.h
19131 +index c9b80be82440f..f82857bd693fd 100644
19132 +--- a/include/linux/iio/common/cros_ec_sensors_core.h
19133 ++++ b/include/linux/iio/common/cros_ec_sensors_core.h
19134 +@@ -77,7 +77,7 @@ struct cros_ec_sensors_core_state {
19135 + u16 scale;
19136 + } calib[CROS_EC_SENSOR_MAX_AXIS];
19137 + s8 sign[CROS_EC_SENSOR_MAX_AXIS];
19138 +- u8 samples[CROS_EC_SAMPLE_SIZE];
19139 ++ u8 samples[CROS_EC_SAMPLE_SIZE] __aligned(8);
19140 +
19141 + int (*read_ec_sensors_data)(struct iio_dev *indio_dev,
19142 + unsigned long scan_mask, s16 *data);
19143 +diff --git a/include/linux/kthread.h b/include/linux/kthread.h
19144 +index 2484ed97e72f5..d9133d6db3084 100644
19145 +--- a/include/linux/kthread.h
19146 ++++ b/include/linux/kthread.h
19147 +@@ -33,6 +33,8 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
19148 + unsigned int cpu,
19149 + const char *namefmt);
19150 +
19151 ++void set_kthread_struct(struct task_struct *p);
19152 ++
19153 + void kthread_set_per_cpu(struct task_struct *k, int cpu);
19154 + bool kthread_is_per_cpu(struct task_struct *k);
19155 +
19156 +diff --git a/include/linux/mm.h b/include/linux/mm.h
19157 +index cfb0842a7fb96..18b8373b1474a 100644
19158 +--- a/include/linux/mm.h
19159 ++++ b/include/linux/mm.h
19160 +@@ -2435,7 +2435,6 @@ extern void set_dma_reserve(unsigned long new_dma_reserve);
19161 + extern void memmap_init_range(unsigned long, int, unsigned long,
19162 + unsigned long, unsigned long, enum meminit_context,
19163 + struct vmem_altmap *, int migratetype);
19164 +-extern void memmap_init_zone(struct zone *zone);
19165 + extern void setup_per_zone_wmarks(void);
19166 + extern int __meminit init_per_zone_wmark_min(void);
19167 + extern void mem_init(void);
19168 +diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
19169 +index 136b1d996075c..6fbbd620f6db3 100644
19170 +--- a/include/linux/pgtable.h
19171 ++++ b/include/linux/pgtable.h
19172 +@@ -1580,4 +1580,26 @@ typedef unsigned int pgtbl_mod_mask;
19173 + #define pte_leaf_size(x) PAGE_SIZE
19174 + #endif
19175 +
19176 ++/*
19177 ++ * Some architectures have MMUs that are configurable or selectable at boot
19178 ++ * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
19179 ++ * helps to have a static maximum value.
19180 ++ */
19181 ++
19182 ++#ifndef MAX_PTRS_PER_PTE
19183 ++#define MAX_PTRS_PER_PTE PTRS_PER_PTE
19184 ++#endif
19185 ++
19186 ++#ifndef MAX_PTRS_PER_PMD
19187 ++#define MAX_PTRS_PER_PMD PTRS_PER_PMD
19188 ++#endif
19189 ++
19190 ++#ifndef MAX_PTRS_PER_PUD
19191 ++#define MAX_PTRS_PER_PUD PTRS_PER_PUD
19192 ++#endif
19193 ++
19194 ++#ifndef MAX_PTRS_PER_P4D
19195 ++#define MAX_PTRS_PER_P4D PTRS_PER_P4D
19196 ++#endif
19197 ++
19198 + #endif /* _LINUX_PGTABLE_H */
19199 +diff --git a/include/linux/prandom.h b/include/linux/prandom.h
19200 +index bbf4b4ad61dfd..056d31317e499 100644
19201 +--- a/include/linux/prandom.h
19202 ++++ b/include/linux/prandom.h
19203 +@@ -111,7 +111,7 @@ static inline u32 __seed(u32 x, u32 m)
19204 + */
19205 + static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
19206 + {
19207 +- u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
19208 ++ u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;
19209 +
19210 + state->s1 = __seed(i, 2U);
19211 + state->s2 = __seed(i, 8U);
19212 +diff --git a/include/linux/swap.h b/include/linux/swap.h
19213 +index 4cc6ec3bf0abb..7482f8b968ea9 100644
19214 +--- a/include/linux/swap.h
19215 ++++ b/include/linux/swap.h
19216 +@@ -504,6 +504,15 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
19217 + return NULL;
19218 + }
19219 +
19220 ++static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
19221 ++{
19222 ++ return NULL;
19223 ++}
19224 ++
19225 ++static inline void put_swap_device(struct swap_info_struct *si)
19226 ++{
19227 ++}
19228 ++
19229 + #define swap_address_space(entry) (NULL)
19230 + #define get_nr_swap_pages() 0L
19231 + #define total_swap_pages 0L
19232 +diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
19233 +index 9cfb099da58f6..752840d45e24d 100644
19234 +--- a/include/linux/tracepoint.h
19235 ++++ b/include/linux/tracepoint.h
19236 +@@ -41,7 +41,17 @@ extern int
19237 + tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data,
19238 + int prio);
19239 + extern int
19240 ++tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe, void *data,
19241 ++ int prio);
19242 ++extern int
19243 + tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
19244 ++static inline int
19245 ++tracepoint_probe_register_may_exist(struct tracepoint *tp, void *probe,
19246 ++ void *data)
19247 ++{
19248 ++ return tracepoint_probe_register_prio_may_exist(tp, probe, data,
19249 ++ TRACEPOINT_DEFAULT_PRIO);
19250 ++}
19251 + extern void
19252 + for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
19253 + void *priv);
19254 +diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
19255 +index f6c5f784be5ab..604cf6a5dc2d0 100644
19256 +--- a/include/linux/user_namespace.h
19257 ++++ b/include/linux/user_namespace.h
19258 +@@ -100,11 +100,15 @@ struct ucounts {
19259 + };
19260 +
19261 + extern struct user_namespace init_user_ns;
19262 ++extern struct ucounts init_ucounts;
19263 +
19264 + bool setup_userns_sysctls(struct user_namespace *ns);
19265 + void retire_userns_sysctls(struct user_namespace *ns);
19266 + struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
19267 + void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
19268 ++struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
19269 ++struct ucounts *get_ucounts(struct ucounts *ucounts);
19270 ++void put_ucounts(struct ucounts *ucounts);
19271 +
19272 + #ifdef CONFIG_USER_NS
19273 +
19274 +diff --git a/include/media/hevc-ctrls.h b/include/media/hevc-ctrls.h
19275 +index b4cb2ef02f171..226fcfa0e0261 100644
19276 +--- a/include/media/hevc-ctrls.h
19277 ++++ b/include/media/hevc-ctrls.h
19278 +@@ -81,7 +81,7 @@ struct v4l2_ctrl_hevc_sps {
19279 + __u64 flags;
19280 + };
19281 +
19282 +-#define V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT (1ULL << 0)
19283 ++#define V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT_ENABLED (1ULL << 0)
19284 + #define V4L2_HEVC_PPS_FLAG_OUTPUT_FLAG_PRESENT (1ULL << 1)
19285 + #define V4L2_HEVC_PPS_FLAG_SIGN_DATA_HIDING_ENABLED (1ULL << 2)
19286 + #define V4L2_HEVC_PPS_FLAG_CABAC_INIT_PRESENT (1ULL << 3)
19287 +@@ -160,6 +160,7 @@ struct v4l2_hevc_pred_weight_table {
19288 + #define V4L2_HEVC_SLICE_PARAMS_FLAG_USE_INTEGER_MV (1ULL << 6)
19289 + #define V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_DEBLOCKING_FILTER_DISABLED (1ULL << 7)
19290 + #define V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED (1ULL << 8)
19291 ++#define V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT (1ULL << 9)
19292 +
19293 + struct v4l2_ctrl_hevc_slice_params {
19294 + __u32 bit_size;
19295 +diff --git a/include/media/media-dev-allocator.h b/include/media/media-dev-allocator.h
19296 +index b35ea6062596b..2ab54d426c644 100644
19297 +--- a/include/media/media-dev-allocator.h
19298 ++++ b/include/media/media-dev-allocator.h
19299 +@@ -19,7 +19,7 @@
19300 +
19301 + struct usb_device;
19302 +
19303 +-#if defined(CONFIG_MEDIA_CONTROLLER) && defined(CONFIG_USB)
19304 ++#if defined(CONFIG_MEDIA_CONTROLLER) && IS_ENABLED(CONFIG_USB)
19305 + /**
19306 + * media_device_usb_allocate() - Allocate and return struct &media device
19307 + *
19308 +diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
19309 +index ba2f439bc04d3..46d99c2778c3e 100644
19310 +--- a/include/net/bluetooth/hci.h
19311 ++++ b/include/net/bluetooth/hci.h
19312 +@@ -1773,13 +1773,15 @@ struct hci_cp_ext_adv_set {
19313 + __u8 max_events;
19314 + } __packed;
19315 +
19316 ++#define HCI_MAX_EXT_AD_LENGTH 251
19317 ++
19318 + #define HCI_OP_LE_SET_EXT_ADV_DATA 0x2037
19319 + struct hci_cp_le_set_ext_adv_data {
19320 + __u8 handle;
19321 + __u8 operation;
19322 + __u8 frag_pref;
19323 + __u8 length;
19324 +- __u8 data[HCI_MAX_AD_LENGTH];
19325 ++ __u8 data[];
19326 + } __packed;
19327 +
19328 + #define HCI_OP_LE_SET_EXT_SCAN_RSP_DATA 0x2038
19329 +@@ -1788,7 +1790,7 @@ struct hci_cp_le_set_ext_scan_rsp_data {
19330 + __u8 operation;
19331 + __u8 frag_pref;
19332 + __u8 length;
19333 +- __u8 data[HCI_MAX_AD_LENGTH];
19334 ++ __u8 data[];
19335 + } __packed;
19336 +
19337 + #define LE_SET_ADV_DATA_OP_COMPLETE 0x03
19338 +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
19339 +index ca4ac6603b9a0..8674141337b73 100644
19340 +--- a/include/net/bluetooth/hci_core.h
19341 ++++ b/include/net/bluetooth/hci_core.h
19342 +@@ -228,9 +228,9 @@ struct adv_info {
19343 + __u16 remaining_time;
19344 + __u16 duration;
19345 + __u16 adv_data_len;
19346 +- __u8 adv_data[HCI_MAX_AD_LENGTH];
19347 ++ __u8 adv_data[HCI_MAX_EXT_AD_LENGTH];
19348 + __u16 scan_rsp_len;
19349 +- __u8 scan_rsp_data[HCI_MAX_AD_LENGTH];
19350 ++ __u8 scan_rsp_data[HCI_MAX_EXT_AD_LENGTH];
19351 + __s8 tx_power;
19352 + __u32 min_interval;
19353 + __u32 max_interval;
19354 +@@ -550,9 +550,9 @@ struct hci_dev {
19355 + DECLARE_BITMAP(dev_flags, __HCI_NUM_FLAGS);
19356 +
19357 + __s8 adv_tx_power;
19358 +- __u8 adv_data[HCI_MAX_AD_LENGTH];
19359 ++ __u8 adv_data[HCI_MAX_EXT_AD_LENGTH];
19360 + __u8 adv_data_len;
19361 +- __u8 scan_rsp_data[HCI_MAX_AD_LENGTH];
19362 ++ __u8 scan_rsp_data[HCI_MAX_EXT_AD_LENGTH];
19363 + __u8 scan_rsp_data_len;
19364 +
19365 + struct list_head adv_instances;
19366 +diff --git a/include/net/ip.h b/include/net/ip.h
19367 +index e20874059f826..d9683bef86840 100644
19368 +--- a/include/net/ip.h
19369 ++++ b/include/net/ip.h
19370 +@@ -31,6 +31,7 @@
19371 + #include <net/flow.h>
19372 + #include <net/flow_dissector.h>
19373 + #include <net/netns/hash.h>
19374 ++#include <net/lwtunnel.h>
19375 +
19376 + #define IPV4_MAX_PMTU 65535U /* RFC 2675, Section 5.1 */
19377 + #define IPV4_MIN_MTU 68 /* RFC 791 */
19378 +@@ -445,22 +446,25 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
19379 +
19380 + /* 'forwarding = true' case should always honour route mtu */
19381 + mtu = dst_metric_raw(dst, RTAX_MTU);
19382 +- if (mtu)
19383 +- return mtu;
19384 ++ if (!mtu)
19385 ++ mtu = min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
19386 +
19387 +- return min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
19388 ++ return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
19389 + }
19390 +
19391 + static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
19392 + const struct sk_buff *skb)
19393 + {
19394 ++ unsigned int mtu;
19395 ++
19396 + if (!sk || !sk_fullsock(sk) || ip_sk_use_pmtu(sk)) {
19397 + bool forwarding = IPCB(skb)->flags & IPSKB_FORWARDED;
19398 +
19399 + return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding);
19400 + }
19401 +
19402 +- return min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU);
19403 ++ mtu = min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU);
19404 ++ return mtu - lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
19405 + }
19406 +
19407 + struct dst_metrics *ip_fib_metrics_init(struct net *net, struct nlattr *fc_mx,
19408 +diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
19409 +index f51a118bfce8b..f14149df5a654 100644
19410 +--- a/include/net/ip6_route.h
19411 ++++ b/include/net/ip6_route.h
19412 +@@ -265,11 +265,18 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
19413 +
19414 + static inline int ip6_skb_dst_mtu(struct sk_buff *skb)
19415 + {
19416 ++ int mtu;
19417 ++
19418 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ?
19419 + inet6_sk(skb->sk) : NULL;
19420 +
19421 +- return (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) ?
19422 +- skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
19423 ++ if (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) {
19424 ++ mtu = READ_ONCE(skb_dst(skb)->dev->mtu);
19425 ++ mtu -= lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
19426 ++ } else
19427 ++ mtu = dst_mtu(skb_dst(skb));
19428 ++
19429 ++ return mtu;
19430 + }
19431 +
19432 + static inline bool ip6_sk_accept_pmtu(const struct sock *sk)
19433 +@@ -317,7 +324,7 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
19434 + if (dst_metric_locked(dst, RTAX_MTU)) {
19435 + mtu = dst_metric_raw(dst, RTAX_MTU);
19436 + if (mtu)
19437 +- return mtu;
19438 ++ goto out;
19439 + }
19440 +
19441 + mtu = IPV6_MIN_MTU;
19442 +@@ -327,7 +334,8 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
19443 + mtu = idev->cnf.mtu6;
19444 + rcu_read_unlock();
19445 +
19446 +- return mtu;
19447 ++out:
19448 ++ return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
19449 + }
19450 +
19451 + u32 ip6_mtu_from_fib6(const struct fib6_result *res,
19452 +diff --git a/include/net/macsec.h b/include/net/macsec.h
19453 +index 52874cdfe2260..d6fa6b97f6efa 100644
19454 +--- a/include/net/macsec.h
19455 ++++ b/include/net/macsec.h
19456 +@@ -241,7 +241,7 @@ struct macsec_context {
19457 + struct macsec_rx_sc *rx_sc;
19458 + struct {
19459 + unsigned char assoc_num;
19460 +- u8 key[MACSEC_KEYID_LEN];
19461 ++ u8 key[MACSEC_MAX_KEY_LEN];
19462 + union {
19463 + struct macsec_rx_sa *rx_sa;
19464 + struct macsec_tx_sa *tx_sa;
19465 +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
19466 +index 2c4f3527cc098..b070e99c412d1 100644
19467 +--- a/include/net/sch_generic.h
19468 ++++ b/include/net/sch_generic.h
19469 +@@ -163,6 +163,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
19470 + if (spin_trylock(&qdisc->seqlock))
19471 + goto nolock_empty;
19472 +
19473 ++ /* Paired with smp_mb__after_atomic() to make sure
19474 ++ * STATE_MISSED checking is synchronized with clearing
19475 ++ * in pfifo_fast_dequeue().
19476 ++ */
19477 ++ smp_mb__before_atomic();
19478 ++
19479 + /* If the MISSED flag is set, it means other thread has
19480 + * set the MISSED flag before second spin_trylock(), so
19481 + * we can return false here to avoid multi cpus doing
19482 +@@ -180,6 +186,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
19483 + */
19484 + set_bit(__QDISC_STATE_MISSED, &qdisc->state);
19485 +
19486 ++ /* spin_trylock() only has load-acquire semantic, so use
19487 ++ * smp_mb__after_atomic() to ensure STATE_MISSED is set
19488 ++ * before doing the second spin_trylock().
19489 ++ */
19490 ++ smp_mb__after_atomic();
19491 ++
19492 + /* Retry again in case other CPU may not see the new flag
19493 + * after it releases the lock at the end of qdisc_run_end().
19494 + */
19495 +diff --git a/include/net/tc_act/tc_vlan.h b/include/net/tc_act/tc_vlan.h
19496 +index f051046ba0344..f94b8bc26f9ec 100644
19497 +--- a/include/net/tc_act/tc_vlan.h
19498 ++++ b/include/net/tc_act/tc_vlan.h
19499 +@@ -16,6 +16,7 @@ struct tcf_vlan_params {
19500 + u16 tcfv_push_vid;
19501 + __be16 tcfv_push_proto;
19502 + u8 tcfv_push_prio;
19503 ++ bool tcfv_push_prio_exists;
19504 + struct rcu_head rcu;
19505 + };
19506 +
19507 +diff --git a/include/net/xfrm.h b/include/net/xfrm.h
19508 +index c58a6d4eb6103..6232a5f048bde 100644
19509 +--- a/include/net/xfrm.h
19510 ++++ b/include/net/xfrm.h
19511 +@@ -1546,6 +1546,7 @@ void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
19512 + void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
19513 + u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
19514 + int xfrm_init_replay(struct xfrm_state *x);
19515 ++u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu);
19516 + u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
19517 + int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload);
19518 + int xfrm_init_state(struct xfrm_state *x);
19519 +diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
19520 +index eaa8386dbc630..7a9a23e7a604a 100644
19521 +--- a/include/net/xsk_buff_pool.h
19522 ++++ b/include/net/xsk_buff_pool.h
19523 +@@ -147,11 +147,16 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
19524 + {
19525 + bool cross_pg = (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;
19526 +
19527 +- if (pool->dma_pages_cnt && cross_pg) {
19528 ++ if (likely(!cross_pg))
19529 ++ return false;
19530 ++
19531 ++ if (pool->dma_pages_cnt) {
19532 + return !(pool->dma_pages[addr >> PAGE_SHIFT] &
19533 + XSK_NEXT_PG_CONTIG_MASK);
19534 + }
19535 +- return false;
19536 ++
19537 ++ /* skb path */
19538 ++ return addr + len > pool->addrs_cnt;
19539 + }
19540 +
19541 + static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
19542 +diff --git a/include/scsi/fc/fc_ms.h b/include/scsi/fc/fc_ms.h
19543 +index 9e273fed0a85f..800d53dc94705 100644
19544 +--- a/include/scsi/fc/fc_ms.h
19545 ++++ b/include/scsi/fc/fc_ms.h
19546 +@@ -63,8 +63,8 @@ enum fc_fdmi_hba_attr_type {
19547 + * HBA Attribute Length
19548 + */
19549 + #define FC_FDMI_HBA_ATTR_NODENAME_LEN 8
19550 +-#define FC_FDMI_HBA_ATTR_MANUFACTURER_LEN 80
19551 +-#define FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN 80
19552 ++#define FC_FDMI_HBA_ATTR_MANUFACTURER_LEN 64
19553 ++#define FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN 64
19554 + #define FC_FDMI_HBA_ATTR_MODEL_LEN 256
19555 + #define FC_FDMI_HBA_ATTR_MODELDESCR_LEN 256
19556 + #define FC_FDMI_HBA_ATTR_HARDWAREVERSION_LEN 256
19557 +diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
19558 +index 02f966e9358f6..091f284bd6e93 100644
19559 +--- a/include/scsi/libiscsi.h
19560 ++++ b/include/scsi/libiscsi.h
19561 +@@ -424,6 +424,7 @@ extern int iscsi_conn_start(struct iscsi_cls_conn *);
19562 + extern void iscsi_conn_stop(struct iscsi_cls_conn *, int);
19563 + extern int iscsi_conn_bind(struct iscsi_cls_session *, struct iscsi_cls_conn *,
19564 + int);
19565 ++extern void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active);
19566 + extern void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err);
19567 + extern void iscsi_session_failure(struct iscsi_session *session,
19568 + enum iscsi_err err);
19569 +diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
19570 +index fc5a39839b4b0..3974329d4d023 100644
19571 +--- a/include/scsi/scsi_transport_iscsi.h
19572 ++++ b/include/scsi/scsi_transport_iscsi.h
19573 +@@ -82,6 +82,7 @@ struct iscsi_transport {
19574 + void (*destroy_session) (struct iscsi_cls_session *session);
19575 + struct iscsi_cls_conn *(*create_conn) (struct iscsi_cls_session *sess,
19576 + uint32_t cid);
19577 ++ void (*unbind_conn) (struct iscsi_cls_conn *conn, bool is_active);
19578 + int (*bind_conn) (struct iscsi_cls_session *session,
19579 + struct iscsi_cls_conn *cls_conn,
19580 + uint64_t transport_eph, int is_leading);
19581 +@@ -196,15 +197,23 @@ enum iscsi_connection_state {
19582 + ISCSI_CONN_BOUND,
19583 + };
19584 +
19585 ++#define ISCSI_CLS_CONN_BIT_CLEANUP 1
19586 ++
19587 + struct iscsi_cls_conn {
19588 + struct list_head conn_list; /* item in connlist */
19589 +- struct list_head conn_list_err; /* item in connlist_err */
19590 + void *dd_data; /* LLD private data */
19591 + struct iscsi_transport *transport;
19592 + uint32_t cid; /* connection id */
19593 ++ /*
19594 ++ * This protects the conn startup and binding/unbinding of the ep to
19595 ++ * the conn. Unbinding includes ep_disconnect and stop_conn.
19596 ++ */
19597 + struct mutex ep_mutex;
19598 + struct iscsi_endpoint *ep;
19599 +
19600 ++ unsigned long flags;
19601 ++ struct work_struct cleanup_work;
19602 ++
19603 + struct device dev; /* sysfs transport/container device */
19604 + enum iscsi_connection_state state;
19605 + };
19606 +@@ -441,6 +450,7 @@ extern int iscsi_scan_finished(struct Scsi_Host *shost, unsigned long time);
19607 + extern struct iscsi_endpoint *iscsi_create_endpoint(int dd_size);
19608 + extern void iscsi_destroy_endpoint(struct iscsi_endpoint *ep);
19609 + extern struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle);
19610 ++extern void iscsi_put_endpoint(struct iscsi_endpoint *ep);
19611 + extern int iscsi_block_scsi_eh(struct scsi_cmnd *cmd);
19612 + extern struct iscsi_iface *iscsi_create_iface(struct Scsi_Host *shost,
19613 + struct iscsi_transport *t,
19614 +diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
19615 +index 039c0d7add1b1..c7fe032df185d 100644
19616 +--- a/include/uapi/linux/v4l2-controls.h
19617 ++++ b/include/uapi/linux/v4l2-controls.h
19618 +@@ -50,6 +50,7 @@
19619 + #ifndef __LINUX_V4L2_CONTROLS_H
19620 + #define __LINUX_V4L2_CONTROLS_H
19621 +
19622 ++#include <linux/const.h>
19623 + #include <linux/types.h>
19624 +
19625 + /* Control classes */
19626 +@@ -1593,30 +1594,30 @@ struct v4l2_ctrl_h264_decode_params {
19627 + #define V4L2_FWHT_VERSION 3
19628 +
19629 + /* Set if this is an interlaced format */
19630 +-#define V4L2_FWHT_FL_IS_INTERLACED BIT(0)
19631 ++#define V4L2_FWHT_FL_IS_INTERLACED _BITUL(0)
19632 + /* Set if this is a bottom-first (NTSC) interlaced format */
19633 +-#define V4L2_FWHT_FL_IS_BOTTOM_FIRST BIT(1)
19634 ++#define V4L2_FWHT_FL_IS_BOTTOM_FIRST _BITUL(1)
19635 + /* Set if each 'frame' contains just one field */
19636 +-#define V4L2_FWHT_FL_IS_ALTERNATE BIT(2)
19637 ++#define V4L2_FWHT_FL_IS_ALTERNATE _BITUL(2)
19638 + /*
19639 + * If V4L2_FWHT_FL_IS_ALTERNATE was set, then this is set if this
19640 + * 'frame' is the bottom field, else it is the top field.
19641 + */
19642 +-#define V4L2_FWHT_FL_IS_BOTTOM_FIELD BIT(3)
19643 ++#define V4L2_FWHT_FL_IS_BOTTOM_FIELD _BITUL(3)
19644 + /* Set if the Y' plane is uncompressed */
19645 +-#define V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED BIT(4)
19646 ++#define V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED _BITUL(4)
19647 + /* Set if the Cb plane is uncompressed */
19648 +-#define V4L2_FWHT_FL_CB_IS_UNCOMPRESSED BIT(5)
19649 ++#define V4L2_FWHT_FL_CB_IS_UNCOMPRESSED _BITUL(5)
19650 + /* Set if the Cr plane is uncompressed */
19651 +-#define V4L2_FWHT_FL_CR_IS_UNCOMPRESSED BIT(6)
19652 ++#define V4L2_FWHT_FL_CR_IS_UNCOMPRESSED _BITUL(6)
19653 + /* Set if the chroma plane is full height, if cleared it is half height */
19654 +-#define V4L2_FWHT_FL_CHROMA_FULL_HEIGHT BIT(7)
19655 ++#define V4L2_FWHT_FL_CHROMA_FULL_HEIGHT _BITUL(7)
19656 + /* Set if the chroma plane is full width, if cleared it is half width */
19657 +-#define V4L2_FWHT_FL_CHROMA_FULL_WIDTH BIT(8)
19658 ++#define V4L2_FWHT_FL_CHROMA_FULL_WIDTH _BITUL(8)
19659 + /* Set if the alpha plane is uncompressed */
19660 +-#define V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED BIT(9)
19661 ++#define V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED _BITUL(9)
19662 + /* Set if this is an I Frame */
19663 +-#define V4L2_FWHT_FL_I_FRAME BIT(10)
19664 ++#define V4L2_FWHT_FL_I_FRAME _BITUL(10)
19665 +
19666 + /* A 4-values flag - the number of components - 1 */
19667 + #define V4L2_FWHT_FL_COMPONENTS_NUM_MSK GENMASK(18, 16)
19668 +diff --git a/init/main.c b/init/main.c
19669 +index 5bd1a25f1d6f5..c97d3c0247a1d 100644
19670 +--- a/init/main.c
19671 ++++ b/init/main.c
19672 +@@ -918,11 +918,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
19673 + * time - but meanwhile we still have a functioning scheduler.
19674 + */
19675 + sched_init();
19676 +- /*
19677 +- * Disable preemption - early bootup scheduling is extremely
19678 +- * fragile until we cpu_idle() for the first time.
19679 +- */
19680 +- preempt_disable();
19681 ++
19682 + if (WARN(!irqs_disabled(),
19683 + "Interrupts were enabled *very* early, fixing it\n"))
19684 + local_irq_disable();
19685 +diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
19686 +index 85d9d1b72a33a..b0ab5b915e6d1 100644
19687 +--- a/kernel/bpf/devmap.c
19688 ++++ b/kernel/bpf/devmap.c
19689 +@@ -92,7 +92,7 @@ static struct hlist_head *dev_map_create_hash(unsigned int entries,
19690 + int i;
19691 + struct hlist_head *hash;
19692 +
19693 +- hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node);
19694 ++ hash = bpf_map_area_alloc((u64) entries * sizeof(*hash), numa_node);
19695 + if (hash != NULL)
19696 + for (i = 0; i < entries; i++)
19697 + INIT_HLIST_HEAD(&hash[i]);
19698 +@@ -143,7 +143,7 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
19699 +
19700 + spin_lock_init(&dtab->index_lock);
19701 + } else {
19702 +- dtab->netdev_map = bpf_map_area_alloc(dtab->map.max_entries *
19703 ++ dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
19704 + sizeof(struct bpf_dtab_netdev *),
19705 + dtab->map.numa_node);
19706 + if (!dtab->netdev_map)
19707 +diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
19708 +index d2de2abec35b6..dc56237d69600 100644
19709 +--- a/kernel/bpf/inode.c
19710 ++++ b/kernel/bpf/inode.c
19711 +@@ -543,7 +543,7 @@ int bpf_obj_get_user(const char __user *pathname, int flags)
19712 + return PTR_ERR(raw);
19713 +
19714 + if (type == BPF_TYPE_PROG)
19715 +- ret = (f_flags != O_RDWR) ? -EINVAL : bpf_prog_new_fd(raw);
19716 ++ ret = bpf_prog_new_fd(raw);
19717 + else if (type == BPF_TYPE_MAP)
19718 + ret = bpf_map_new_fd(raw, f_flags);
19719 + else if (type == BPF_TYPE_LINK)
19720 +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
19721 +index 2423b4e918b90..87c4ea3b3cb78 100644
19722 +--- a/kernel/bpf/verifier.c
19723 ++++ b/kernel/bpf/verifier.c
19724 +@@ -10877,7 +10877,7 @@ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len
19725 + }
19726 + }
19727 +
19728 +-static void adjust_poke_descs(struct bpf_prog *prog, u32 len)
19729 ++static void adjust_poke_descs(struct bpf_prog *prog, u32 off, u32 len)
19730 + {
19731 + struct bpf_jit_poke_descriptor *tab = prog->aux->poke_tab;
19732 + int i, sz = prog->aux->size_poke_tab;
19733 +@@ -10885,6 +10885,8 @@ static void adjust_poke_descs(struct bpf_prog *prog, u32 len)
19734 +
19735 + for (i = 0; i < sz; i++) {
19736 + desc = &tab[i];
19737 ++ if (desc->insn_idx <= off)
19738 ++ continue;
19739 + desc->insn_idx += len - 1;
19740 + }
19741 + }
19742 +@@ -10905,7 +10907,7 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
19743 + if (adjust_insn_aux_data(env, new_prog, off, len))
19744 + return NULL;
19745 + adjust_subprog_starts(env, off, len);
19746 +- adjust_poke_descs(new_prog, len);
19747 ++ adjust_poke_descs(new_prog, off, len);
19748 + return new_prog;
19749 + }
19750 +
19751 +diff --git a/kernel/cred.c b/kernel/cred.c
19752 +index 421b1149c6516..098213d4a39c3 100644
19753 +--- a/kernel/cred.c
19754 ++++ b/kernel/cred.c
19755 +@@ -60,6 +60,7 @@ struct cred init_cred = {
19756 + .user = INIT_USER,
19757 + .user_ns = &init_user_ns,
19758 + .group_info = &init_groups,
19759 ++ .ucounts = &init_ucounts,
19760 + };
19761 +
19762 + static inline void set_cred_subscribers(struct cred *cred, int n)
19763 +@@ -119,6 +120,8 @@ static void put_cred_rcu(struct rcu_head *rcu)
19764 + if (cred->group_info)
19765 + put_group_info(cred->group_info);
19766 + free_uid(cred->user);
19767 ++ if (cred->ucounts)
19768 ++ put_ucounts(cred->ucounts);
19769 + put_user_ns(cred->user_ns);
19770 + kmem_cache_free(cred_jar, cred);
19771 + }
19772 +@@ -222,6 +225,7 @@ struct cred *cred_alloc_blank(void)
19773 + #ifdef CONFIG_DEBUG_CREDENTIALS
19774 + new->magic = CRED_MAGIC;
19775 + #endif
19776 ++ new->ucounts = get_ucounts(&init_ucounts);
19777 +
19778 + if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)
19779 + goto error;
19780 +@@ -284,6 +288,11 @@ struct cred *prepare_creds(void)
19781 +
19782 + if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
19783 + goto error;
19784 ++
19785 ++ new->ucounts = get_ucounts(new->ucounts);
19786 ++ if (!new->ucounts)
19787 ++ goto error;
19788 ++
19789 + validate_creds(new);
19790 + return new;
19791 +
19792 +@@ -363,6 +372,9 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
19793 + ret = create_user_ns(new);
19794 + if (ret < 0)
19795 + goto error_put;
19796 ++ ret = set_cred_ucounts(new);
19797 ++ if (ret < 0)
19798 ++ goto error_put;
19799 + }
19800 +
19801 + #ifdef CONFIG_KEYS
19802 +@@ -653,6 +665,31 @@ int cred_fscmp(const struct cred *a, const struct cred *b)
19803 + }
19804 + EXPORT_SYMBOL(cred_fscmp);
19805 +
19806 ++int set_cred_ucounts(struct cred *new)
19807 ++{
19808 ++ struct task_struct *task = current;
19809 ++ const struct cred *old = task->real_cred;
19810 ++ struct ucounts *old_ucounts = new->ucounts;
19811 ++
19812 ++ if (new->user == old->user && new->user_ns == old->user_ns)
19813 ++ return 0;
19814 ++
19815 ++ /*
19816 ++ * This optimization is needed because alloc_ucounts() uses locks
19817 ++ * for table lookups.
19818 ++ */
19819 ++ if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
19820 ++ return 0;
19821 ++
19822 ++ if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid)))
19823 ++ return -EAGAIN;
19824 ++
19825 ++ if (old_ucounts)
19826 ++ put_ucounts(old_ucounts);
19827 ++
19828 ++ return 0;
19829 ++}
19830 ++
19831 + /*
19832 + * initialise the credentials stuff
19833 + */
19834 +@@ -719,6 +756,10 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
19835 + if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
19836 + goto error;
19837 +
19838 ++ new->ucounts = get_ucounts(new->ucounts);
19839 ++ if (!new->ucounts)
19840 ++ goto error;
19841 ++
19842 + put_cred(old);
19843 + validate_creds(new);
19844 + return new;
19845 +diff --git a/kernel/fork.c b/kernel/fork.c
19846 +index 426cd0c51f9eb..0c1d935521376 100644
19847 +--- a/kernel/fork.c
19848 ++++ b/kernel/fork.c
19849 +@@ -2000,7 +2000,7 @@ static __latent_entropy struct task_struct *copy_process(
19850 + goto bad_fork_cleanup_count;
19851 +
19852 + delayacct_tsk_init(p); /* Must remain after dup_task_struct() */
19853 +- p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE);
19854 ++ p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE | PF_NO_SETAFFINITY);
19855 + p->flags |= PF_FORKNOEXEC;
19856 + INIT_LIST_HEAD(&p->children);
19857 + INIT_LIST_HEAD(&p->sibling);
19858 +@@ -2405,7 +2405,7 @@ static inline void init_idle_pids(struct task_struct *idle)
19859 + }
19860 + }
19861 +
19862 +-struct task_struct *fork_idle(int cpu)
19863 ++struct task_struct * __init fork_idle(int cpu)
19864 + {
19865 + struct task_struct *task;
19866 + struct kernel_clone_args args = {
19867 +@@ -2995,6 +2995,12 @@ int ksys_unshare(unsigned long unshare_flags)
19868 + if (err)
19869 + goto bad_unshare_cleanup_cred;
19870 +
19871 ++ if (new_cred) {
19872 ++ err = set_cred_ucounts(new_cred);
19873 ++ if (err)
19874 ++ goto bad_unshare_cleanup_cred;
19875 ++ }
19876 ++
19877 + if (new_fs || new_fd || do_sysvsem || new_cred || new_nsproxy) {
19878 + if (do_sysvsem) {
19879 + /*
19880 +diff --git a/kernel/kthread.c b/kernel/kthread.c
19881 +index 4fdf2bd9b5589..44f89a602b001 100644
19882 +--- a/kernel/kthread.c
19883 ++++ b/kernel/kthread.c
19884 +@@ -68,16 +68,6 @@ enum KTHREAD_BITS {
19885 + KTHREAD_SHOULD_PARK,
19886 + };
19887 +
19888 +-static inline void set_kthread_struct(void *kthread)
19889 +-{
19890 +- /*
19891 +- * We abuse ->set_child_tid to avoid the new member and because it
19892 +- * can't be wrongly copied by copy_process(). We also rely on fact
19893 +- * that the caller can't exec, so PF_KTHREAD can't be cleared.
19894 +- */
19895 +- current->set_child_tid = (__force void __user *)kthread;
19896 +-}
19897 +-
19898 + static inline struct kthread *to_kthread(struct task_struct *k)
19899 + {
19900 + WARN_ON(!(k->flags & PF_KTHREAD));
19901 +@@ -103,6 +93,22 @@ static inline struct kthread *__to_kthread(struct task_struct *p)
19902 + return kthread;
19903 + }
19904 +
19905 ++void set_kthread_struct(struct task_struct *p)
19906 ++{
19907 ++ struct kthread *kthread;
19908 ++
19909 ++ if (__to_kthread(p))
19910 ++ return;
19911 ++
19912 ++ kthread = kzalloc(sizeof(*kthread), GFP_KERNEL);
19913 ++ /*
19914 ++ * We abuse ->set_child_tid to avoid the new member and because it
19915 ++ * can't be wrongly copied by copy_process(). We also rely on fact
19916 ++ * that the caller can't exec, so PF_KTHREAD can't be cleared.
19917 ++ */
19918 ++ p->set_child_tid = (__force void __user *)kthread;
19919 ++}
19920 ++
19921 + void free_kthread_struct(struct task_struct *k)
19922 + {
19923 + struct kthread *kthread;
19924 +@@ -272,8 +278,8 @@ static int kthread(void *_create)
19925 + struct kthread *self;
19926 + int ret;
19927 +
19928 +- self = kzalloc(sizeof(*self), GFP_KERNEL);
19929 +- set_kthread_struct(self);
19930 ++ set_kthread_struct(current);
19931 ++ self = to_kthread(current);
19932 +
19933 + /* If user was SIGKILLed, I release the structure. */
19934 + done = xchg(&create->done, NULL);
19935 +@@ -1155,14 +1161,14 @@ static bool __kthread_cancel_work(struct kthread_work *work)
19936 + * modify @dwork's timer so that it expires after @delay. If @delay is zero,
19937 + * @work is guaranteed to be queued immediately.
19938 + *
19939 +- * Return: %true if @dwork was pending and its timer was modified,
19940 +- * %false otherwise.
19941 ++ * Return: %false if @dwork was idle and queued, %true otherwise.
19942 + *
19943 + * A special case is when the work is being canceled in parallel.
19944 + * It might be caused either by the real kthread_cancel_delayed_work_sync()
19945 + * or yet another kthread_mod_delayed_work() call. We let the other command
19946 +- * win and return %false here. The caller is supposed to synchronize these
19947 +- * operations a reasonable way.
19948 ++ * win and return %true here. The return value can be used for reference
19949 ++ * counting and the number of queued works stays the same. Anyway, the caller
19950 ++ * is supposed to synchronize these operations a reasonable way.
19951 + *
19952 + * This function is safe to call from any context including IRQ handler.
19953 + * See __kthread_cancel_work() and kthread_delayed_work_timer_fn()
19954 +@@ -1174,13 +1180,15 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
19955 + {
19956 + struct kthread_work *work = &dwork->work;
19957 + unsigned long flags;
19958 +- int ret = false;
19959 ++ int ret;
19960 +
19961 + raw_spin_lock_irqsave(&worker->lock, flags);
19962 +
19963 + /* Do not bother with canceling when never queued. */
19964 +- if (!work->worker)
19965 ++ if (!work->worker) {
19966 ++ ret = false;
19967 + goto fast_queue;
19968 ++ }
19969 +
19970 + /* Work must not be used with >1 worker, see kthread_queue_work() */
19971 + WARN_ON_ONCE(work->worker != worker);
19972 +@@ -1198,8 +1206,11 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
19973 + * be used for reference counting.
19974 + */
19975 + kthread_cancel_delayed_work_timer(work, &flags);
19976 +- if (work->canceling)
19977 ++ if (work->canceling) {
19978 ++ /* The number of works in the queue does not change. */
19979 ++ ret = true;
19980 + goto out;
19981 ++ }
19982 + ret = __kthread_cancel_work(work);
19983 +
19984 + fast_queue:
19985 +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
19986 +index 5bf6b1659215d..8f8cd43ec2a04 100644
19987 +--- a/kernel/locking/lockdep.c
19988 ++++ b/kernel/locking/lockdep.c
19989 +@@ -2305,7 +2305,56 @@ static void print_lock_class_header(struct lock_class *class, int depth)
19990 + }
19991 +
19992 + /*
19993 +- * printk the shortest lock dependencies from @start to @end in reverse order:
19994 ++ * Dependency path printing:
19995 ++ *
19996 ++ * After BFS we get a lock dependency path (linked via ->parent of lock_list),
19997 ++ * printing out each lock in the dependency path will help on understanding how
19998 ++ * the deadlock could happen. Here are some details about dependency path
19999 ++ * printing:
20000 ++ *
20001 ++ * 1) A lock_list can be either forwards or backwards for a lock dependency,
20002 ++ * for a lock dependency A -> B, there are two lock_lists:
20003 ++ *
20004 ++ * a) lock_list in the ->locks_after list of A, whose ->class is B and
20005 ++ * ->links_to is A. In this case, we can say the lock_list is
20006 ++ * "A -> B" (forwards case).
20007 ++ *
20008 ++ * b) lock_list in the ->locks_before list of B, whose ->class is A
20009 ++ * and ->links_to is B. In this case, we can say the lock_list is
20010 ++ * "B <- A" (backwards case).
20011 ++ *
20012 ++ * The ->trace of both a) and b) point to the call trace where B was
20013 ++ * acquired with A held.
20014 ++ *
20015 ++ * 2) A "helper" lock_list is introduced during BFS, this lock_list doesn't
20016 ++ * represent a certain lock dependency, it only provides an initial entry
20017 ++ * for BFS. For example, BFS may introduce a "helper" lock_list whose
20018 ++ * ->class is A, as a result BFS will search all dependencies starting with
20019 ++ * A, e.g. A -> B or A -> C.
20020 ++ *
20021 ++ * The notation of a forwards helper lock_list is like "-> A", which means
20022 ++ * we should search the forwards dependencies starting with "A", e.g A -> B
20023 ++ * or A -> C.
20024 ++ *
20025 ++ * The notation of a backwards helper lock_list is like "<- B", which means
20026 ++ * we should search the backwards dependencies ending with "B", e.g.
20027 ++ * B <- A or B <- C.
20028 ++ */
20029 ++
20030 ++/*
20031 ++ * printk the shortest lock dependencies from @root to @leaf in reverse order.
20032 ++ *
20033 ++ * We have a lock dependency path as follow:
20034 ++ *
20035 ++ * @root @leaf
20036 ++ * | |
20037 ++ * V V
20038 ++ * ->parent ->parent
20039 ++ * | lock_list | <--------- | lock_list | ... | lock_list | <--------- | lock_list |
20040 ++ * | -> L1 | | L1 -> L2 | ... |Ln-2 -> Ln-1| | Ln-1 -> Ln|
20041 ++ *
20042 ++ * , so it's natural that we start from @leaf and print every ->class and
20043 ++ * ->trace until we reach the @root.
20044 + */
20045 + static void __used
20046 + print_shortest_lock_dependencies(struct lock_list *leaf,
20047 +@@ -2333,6 +2382,61 @@ print_shortest_lock_dependencies(struct lock_list *leaf,
20048 + } while (entry && (depth >= 0));
20049 + }
20050 +
20051 ++/*
20052 ++ * printk the shortest lock dependencies from @leaf to @root.
20053 ++ *
20054 ++ * We have a lock dependency path (from a backwards search) as follow:
20055 ++ *
20056 ++ * @leaf @root
20057 ++ * | |
20058 ++ * V V
20059 ++ * ->parent ->parent
20060 ++ * | lock_list | ---------> | lock_list | ... | lock_list | ---------> | lock_list |
20061 ++ * | L2 <- L1 | | L3 <- L2 | ... | Ln <- Ln-1 | | <- Ln |
20062 ++ *
20063 ++ * , so when we iterate from @leaf to @root, we actually print the lock
20064 ++ * dependency path L1 -> L2 -> .. -> Ln in the non-reverse order.
20065 ++ *
20066 ++ * Another thing to notice here is that ->class of L2 <- L1 is L1, while the
20067 ++ * ->trace of L2 <- L1 is the call trace of L2, in fact we don't have the call
20068 ++ * trace of L1 in the dependency path, which is alright, because most of the
20069 ++ * time we can figure out where L1 is held from the call trace of L2.
20070 ++ */
20071 ++static void __used
20072 ++print_shortest_lock_dependencies_backwards(struct lock_list *leaf,
20073 ++ struct lock_list *root)
20074 ++{
20075 ++ struct lock_list *entry = leaf;
20076 ++ const struct lock_trace *trace = NULL;
20077 ++ int depth;
20078 ++
20079 ++ /*compute depth from generated tree by BFS*/
20080 ++ depth = get_lock_depth(leaf);
20081 ++
20082 ++ do {
20083 ++ print_lock_class_header(entry->class, depth);
20084 ++ if (trace) {
20085 ++ printk("%*s ... acquired at:\n", depth, "");
20086 ++ print_lock_trace(trace, 2);
20087 ++ printk("\n");
20088 ++ }
20089 ++
20090 ++ /*
20091 ++ * Record the pointer to the trace for the next lock_list
20092 ++ * entry, see the comments for the function.
20093 ++ */
20094 ++ trace = entry->trace;
20095 ++
20096 ++ if (depth == 0 && (entry != root)) {
20097 ++ printk("lockdep:%s bad path found in chain graph\n", __func__);
20098 ++ break;
20099 ++ }
20100 ++
20101 ++ entry = get_lock_parent(entry);
20102 ++ depth--;
20103 ++ } while (entry && (depth >= 0));
20104 ++}
20105 ++
20106 + static void
20107 + print_irq_lock_scenario(struct lock_list *safe_entry,
20108 + struct lock_list *unsafe_entry,
20109 +@@ -2450,7 +2554,7 @@ print_bad_irq_dependency(struct task_struct *curr,
20110 + prev_root->trace = save_trace();
20111 + if (!prev_root->trace)
20112 + return;
20113 +- print_shortest_lock_dependencies(backwards_entry, prev_root);
20114 ++ print_shortest_lock_dependencies_backwards(backwards_entry, prev_root);
20115 +
20116 + pr_warn("\nthe dependencies between the lock to be acquired");
20117 + pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
20118 +@@ -2668,8 +2772,18 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
20119 + * Step 3: we found a bad match! Now retrieve a lock from the backward
20120 + * list whose usage mask matches the exclusive usage mask from the
20121 + * lock found on the forward list.
20122 ++ *
20123 ++ * Note, we should only keep the LOCKF_ENABLED_IRQ_ALL bits, considering
20124 ++ * the following case:
20125 ++ *
20126 ++ * When trying to add A -> B to the graph, we find that there is a
20127 ++ * hardirq-safe L, that L -> ... -> A, and another hardirq-unsafe M,
20128 ++ * that B -> ... -> M. However M is **softirq-safe**, if we use exact
20129 ++ * invert bits of M's usage_mask, we will find another lock N that is
20130 ++ * **softirq-unsafe** and N -> ... -> A, however N -> .. -> M will not
20131 ++ * cause an inversion deadlock.
20132 + */
20133 +- backward_mask = original_mask(target_entry1->class->usage_mask);
20134 ++ backward_mask = original_mask(target_entry1->class->usage_mask & LOCKF_ENABLED_IRQ_ALL);
20135 +
20136 + ret = find_usage_backwards(&this, backward_mask, &target_entry);
20137 + if (bfs_error(ret)) {
20138 +@@ -4578,7 +4692,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
20139 + u8 curr_inner;
20140 + int depth;
20141 +
20142 +- if (!curr->lockdep_depth || !next_inner || next->trylock)
20143 ++ if (!next_inner || next->trylock)
20144 + return 0;
20145 +
20146 + if (!next_outer)
20147 +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
20148 +index 7356764e49a0b..a274622ed6fa2 100644
20149 +--- a/kernel/rcu/tree.c
20150 ++++ b/kernel/rcu/tree.c
20151 +@@ -2911,7 +2911,6 @@ static int __init rcu_spawn_core_kthreads(void)
20152 + "%s: Could not start rcuc kthread, OOM is now expected behavior\n", __func__);
20153 + return 0;
20154 + }
20155 +-early_initcall(rcu_spawn_core_kthreads);
20156 +
20157 + /*
20158 + * Handle any core-RCU processing required by a call_rcu() invocation.
20159 +@@ -4392,6 +4391,7 @@ static int __init rcu_spawn_gp_kthread(void)
20160 + wake_up_process(t);
20161 + rcu_spawn_nocb_kthreads();
20162 + rcu_spawn_boost_kthreads();
20163 ++ rcu_spawn_core_kthreads();
20164 + return 0;
20165 + }
20166 + early_initcall(rcu_spawn_gp_kthread);
20167 +diff --git a/kernel/sched/core.c b/kernel/sched/core.c
20168 +index 814200541f8f5..2b66c9a16cbe8 100644
20169 +--- a/kernel/sched/core.c
20170 ++++ b/kernel/sched/core.c
20171 +@@ -1055,9 +1055,10 @@ static void uclamp_sync_util_min_rt_default(void)
20172 + static inline struct uclamp_se
20173 + uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
20174 + {
20175 ++ /* Copy by value as we could modify it */
20176 + struct uclamp_se uc_req = p->uclamp_req[clamp_id];
20177 + #ifdef CONFIG_UCLAMP_TASK_GROUP
20178 +- struct uclamp_se uc_max;
20179 ++ unsigned int tg_min, tg_max, value;
20180 +
20181 + /*
20182 + * Tasks in autogroups or root task group will be
20183 +@@ -1068,9 +1069,11 @@ uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
20184 + if (task_group(p) == &root_task_group)
20185 + return uc_req;
20186 +
20187 +- uc_max = task_group(p)->uclamp[clamp_id];
20188 +- if (uc_req.value > uc_max.value || !uc_req.user_defined)
20189 +- return uc_max;
20190 ++ tg_min = task_group(p)->uclamp[UCLAMP_MIN].value;
20191 ++ tg_max = task_group(p)->uclamp[UCLAMP_MAX].value;
20192 ++ value = uc_req.value;
20193 ++ value = clamp(value, tg_min, tg_max);
20194 ++ uclamp_se_set(&uc_req, value, false);
20195 + #endif
20196 +
20197 + return uc_req;
20198 +@@ -1269,8 +1272,9 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
20199 + }
20200 +
20201 + static inline void
20202 +-uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
20203 ++uclamp_update_active(struct task_struct *p)
20204 + {
20205 ++ enum uclamp_id clamp_id;
20206 + struct rq_flags rf;
20207 + struct rq *rq;
20208 +
20209 +@@ -1290,9 +1294,11 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
20210 + * affecting a valid clamp bucket, the next time it's enqueued,
20211 + * it will already see the updated clamp bucket value.
20212 + */
20213 +- if (p->uclamp[clamp_id].active) {
20214 +- uclamp_rq_dec_id(rq, p, clamp_id);
20215 +- uclamp_rq_inc_id(rq, p, clamp_id);
20216 ++ for_each_clamp_id(clamp_id) {
20217 ++ if (p->uclamp[clamp_id].active) {
20218 ++ uclamp_rq_dec_id(rq, p, clamp_id);
20219 ++ uclamp_rq_inc_id(rq, p, clamp_id);
20220 ++ }
20221 + }
20222 +
20223 + task_rq_unlock(rq, p, &rf);
20224 +@@ -1300,20 +1306,14 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
20225 +
20226 + #ifdef CONFIG_UCLAMP_TASK_GROUP
20227 + static inline void
20228 +-uclamp_update_active_tasks(struct cgroup_subsys_state *css,
20229 +- unsigned int clamps)
20230 ++uclamp_update_active_tasks(struct cgroup_subsys_state *css)
20231 + {
20232 +- enum uclamp_id clamp_id;
20233 + struct css_task_iter it;
20234 + struct task_struct *p;
20235 +
20236 + css_task_iter_start(css, 0, &it);
20237 +- while ((p = css_task_iter_next(&it))) {
20238 +- for_each_clamp_id(clamp_id) {
20239 +- if ((0x1 << clamp_id) & clamps)
20240 +- uclamp_update_active(p, clamp_id);
20241 +- }
20242 +- }
20243 ++ while ((p = css_task_iter_next(&it)))
20244 ++ uclamp_update_active(p);
20245 + css_task_iter_end(&it);
20246 + }
20247 +
20248 +@@ -1906,7 +1906,6 @@ static int migration_cpu_stop(void *data)
20249 + struct migration_arg *arg = data;
20250 + struct set_affinity_pending *pending = arg->pending;
20251 + struct task_struct *p = arg->task;
20252 +- int dest_cpu = arg->dest_cpu;
20253 + struct rq *rq = this_rq();
20254 + bool complete = false;
20255 + struct rq_flags rf;
20256 +@@ -1939,19 +1938,15 @@ static int migration_cpu_stop(void *data)
20257 + if (p->migration_pending == pending)
20258 + p->migration_pending = NULL;
20259 + complete = true;
20260 +- }
20261 +
20262 +- if (dest_cpu < 0) {
20263 + if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
20264 + goto out;
20265 +-
20266 +- dest_cpu = cpumask_any_distribute(&p->cpus_mask);
20267 + }
20268 +
20269 + if (task_on_rq_queued(p))
20270 +- rq = __migrate_task(rq, &rf, p, dest_cpu);
20271 ++ rq = __migrate_task(rq, &rf, p, arg->dest_cpu);
20272 + else
20273 +- p->wake_cpu = dest_cpu;
20274 ++ p->wake_cpu = arg->dest_cpu;
20275 +
20276 + /*
20277 + * XXX __migrate_task() can fail, at which point we might end
20278 +@@ -2230,7 +2225,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
20279 + init_completion(&my_pending.done);
20280 + my_pending.arg = (struct migration_arg) {
20281 + .task = p,
20282 +- .dest_cpu = -1, /* any */
20283 ++ .dest_cpu = dest_cpu,
20284 + .pending = &my_pending,
20285 + };
20286 +
20287 +@@ -2238,6 +2233,15 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
20288 + } else {
20289 + pending = p->migration_pending;
20290 + refcount_inc(&pending->refs);
20291 ++ /*
20292 ++ * Affinity has changed, but we've already installed a
20293 ++ * pending. migration_cpu_stop() *must* see this, else
20294 ++ * we risk a completion of the pending despite having a
20295 ++ * task on a disallowed CPU.
20296 ++ *
20297 ++ * Serialized by p->pi_lock, so this is safe.
20298 ++ */
20299 ++ pending->arg.dest_cpu = dest_cpu;
20300 + }
20301 + }
20302 + pending = p->migration_pending;
20303 +@@ -7428,19 +7432,32 @@ void show_state_filter(unsigned long state_filter)
20304 + * NOTE: this function does not set the idle thread's NEED_RESCHED
20305 + * flag, to make booting more robust.
20306 + */
20307 +-void init_idle(struct task_struct *idle, int cpu)
20308 ++void __init init_idle(struct task_struct *idle, int cpu)
20309 + {
20310 + struct rq *rq = cpu_rq(cpu);
20311 + unsigned long flags;
20312 +
20313 + __sched_fork(0, idle);
20314 +
20315 ++ /*
20316 ++ * The idle task doesn't need the kthread struct to function, but it
20317 ++ * is dressed up as a per-CPU kthread and thus needs to play the part
20318 ++ * if we want to avoid special-casing it in code that deals with per-CPU
20319 ++ * kthreads.
20320 ++ */
20321 ++ set_kthread_struct(idle);
20322 ++
20323 + raw_spin_lock_irqsave(&idle->pi_lock, flags);
20324 + raw_spin_lock(&rq->lock);
20325 +
20326 + idle->state = TASK_RUNNING;
20327 + idle->se.exec_start = sched_clock();
20328 +- idle->flags |= PF_IDLE;
20329 ++ /*
20330 ++ * PF_KTHREAD should already be set at this point; regardless, make it
20331 ++ * look like a proper per-CPU kthread.
20332 ++ */
20333 ++ idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
20334 ++ kthread_set_per_cpu(idle, cpu);
20335 +
20336 + scs_task_reset(idle);
20337 + kasan_unpoison_task_stack(idle);
20338 +@@ -7647,12 +7664,8 @@ static void balance_push(struct rq *rq)
20339 + /*
20340 + * Both the cpu-hotplug and stop task are in this case and are
20341 + * required to complete the hotplug process.
20342 +- *
20343 +- * XXX: the idle task does not match kthread_is_per_cpu() due to
20344 +- * histerical raisins.
20345 + */
20346 +- if (rq->idle == push_task ||
20347 +- kthread_is_per_cpu(push_task) ||
20348 ++ if (kthread_is_per_cpu(push_task) ||
20349 + is_migration_disabled(push_task)) {
20350 +
20351 + /*
20352 +@@ -8671,7 +8684,11 @@ static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
20353 +
20354 + #ifdef CONFIG_UCLAMP_TASK_GROUP
20355 + /* Propagate the effective uclamp value for the new group */
20356 ++ mutex_lock(&uclamp_mutex);
20357 ++ rcu_read_lock();
20358 + cpu_util_update_eff(css);
20359 ++ rcu_read_unlock();
20360 ++ mutex_unlock(&uclamp_mutex);
20361 + #endif
20362 +
20363 + return 0;
20364 +@@ -8761,6 +8778,9 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
20365 + enum uclamp_id clamp_id;
20366 + unsigned int clamps;
20367 +
20368 ++ lockdep_assert_held(&uclamp_mutex);
20369 ++ SCHED_WARN_ON(!rcu_read_lock_held());
20370 ++
20371 + css_for_each_descendant_pre(css, top_css) {
20372 + uc_parent = css_tg(css)->parent
20373 + ? css_tg(css)->parent->uclamp : NULL;
20374 +@@ -8793,7 +8813,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
20375 + }
20376 +
20377 + /* Immediately update descendants RUNNABLE tasks */
20378 +- uclamp_update_active_tasks(css, clamps);
20379 ++ uclamp_update_active_tasks(css);
20380 + }
20381 + }
20382 +
20383 +diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
20384 +index aac3539aa0fee..78b3bdcb84c1a 100644
20385 +--- a/kernel/sched/deadline.c
20386 ++++ b/kernel/sched/deadline.c
20387 +@@ -2486,6 +2486,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
20388 + check_preempt_curr_dl(rq, p, 0);
20389 + else
20390 + resched_curr(rq);
20391 ++ } else {
20392 ++ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
20393 + }
20394 + }
20395 +
20396 +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
20397 +index 47fcc3fe9dc5a..20ac5dff9a0ce 100644
20398 +--- a/kernel/sched/fair.c
20399 ++++ b/kernel/sched/fair.c
20400 +@@ -3134,7 +3134,7 @@ void reweight_task(struct task_struct *p, int prio)
20401 + *
20402 + * tg->weight * grq->load.weight
20403 + * ge->load.weight = ----------------------------- (1)
20404 +- * \Sum grq->load.weight
20405 ++ * \Sum grq->load.weight
20406 + *
20407 + * Now, because computing that sum is prohibitively expensive to compute (been
20408 + * there, done that) we approximate it with this average stuff. The average
20409 +@@ -3148,7 +3148,7 @@ void reweight_task(struct task_struct *p, int prio)
20410 + *
20411 + * tg->weight * grq->avg.load_avg
20412 + * ge->load.weight = ------------------------------ (3)
20413 +- * tg->load_avg
20414 ++ * tg->load_avg
20415 + *
20416 + * Where: tg->load_avg ~= \Sum grq->avg.load_avg
20417 + *
20418 +@@ -3164,7 +3164,7 @@ void reweight_task(struct task_struct *p, int prio)
20419 + *
20420 + * tg->weight * grq->load.weight
20421 + * ge->load.weight = ----------------------------- = tg->weight (4)
20422 +- * grp->load.weight
20423 ++ * grp->load.weight
20424 + *
20425 + * That is, the sum collapses because all other CPUs are idle; the UP scenario.
20426 + *
20427 +@@ -3183,7 +3183,7 @@ void reweight_task(struct task_struct *p, int prio)
20428 + *
20429 + * tg->weight * grq->load.weight
20430 + * ge->load.weight = ----------------------------- (6)
20431 +- * tg_load_avg'
20432 ++ * tg_load_avg'
20433 + *
20434 + * Where:
20435 + *
20436 +@@ -6564,8 +6564,11 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
20437 + struct cpumask *pd_mask = perf_domain_span(pd);
20438 + unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
20439 + unsigned long max_util = 0, sum_util = 0;
20440 ++ unsigned long _cpu_cap = cpu_cap;
20441 + int cpu;
20442 +
20443 ++ _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
20444 ++
20445 + /*
20446 + * The capacity state of CPUs of the current rd can be driven by CPUs
20447 + * of another rd if they belong to the same pd. So, account for the
20448 +@@ -6601,8 +6604,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
20449 + * is already enough to scale the EM reported power
20450 + * consumption at the (eventually clamped) cpu_capacity.
20451 + */
20452 +- sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
20453 +- ENERGY_UTIL, NULL);
20454 ++ cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
20455 ++ ENERGY_UTIL, NULL);
20456 ++
20457 ++ sum_util += min(cpu_util, _cpu_cap);
20458 +
20459 + /*
20460 + * Performance domain frequency: utilization clamping
20461 +@@ -6613,7 +6618,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
20462 + */
20463 + cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
20464 + FREQUENCY_UTIL, tsk);
20465 +- max_util = max(max_util, cpu_util);
20466 ++ max_util = max(max_util, min(cpu_util, _cpu_cap));
20467 + }
20468 +
20469 + return em_cpu_energy(pd->em_pd, max_util, sum_util);
20470 +diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
20471 +index ef37acd28e4ac..37f02fdbb35a3 100644
20472 +--- a/kernel/sched/psi.c
20473 ++++ b/kernel/sched/psi.c
20474 +@@ -179,6 +179,8 @@ struct psi_group psi_system = {
20475 +
20476 + static void psi_avgs_work(struct work_struct *work);
20477 +
20478 ++static void poll_timer_fn(struct timer_list *t);
20479 ++
20480 + static void group_init(struct psi_group *group)
20481 + {
20482 + int cpu;
20483 +@@ -198,6 +200,8 @@ static void group_init(struct psi_group *group)
20484 + memset(group->polling_total, 0, sizeof(group->polling_total));
20485 + group->polling_next_update = ULLONG_MAX;
20486 + group->polling_until = 0;
20487 ++ init_waitqueue_head(&group->poll_wait);
20488 ++ timer_setup(&group->poll_timer, poll_timer_fn, 0);
20489 + rcu_assign_pointer(group->poll_task, NULL);
20490 + }
20491 +
20492 +@@ -1142,9 +1146,7 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
20493 + return ERR_CAST(task);
20494 + }
20495 + atomic_set(&group->poll_wakeup, 0);
20496 +- init_waitqueue_head(&group->poll_wait);
20497 + wake_up_process(task);
20498 +- timer_setup(&group->poll_timer, poll_timer_fn, 0);
20499 + rcu_assign_pointer(group->poll_task, task);
20500 + }
20501 +
20502 +@@ -1196,6 +1198,7 @@ static void psi_trigger_destroy(struct kref *ref)
20503 + group->poll_task,
20504 + lockdep_is_held(&group->trigger_lock));
20505 + rcu_assign_pointer(group->poll_task, NULL);
20506 ++ del_timer(&group->poll_timer);
20507 + }
20508 + }
20509 +
20510 +@@ -1208,17 +1211,14 @@ static void psi_trigger_destroy(struct kref *ref)
20511 + */
20512 + synchronize_rcu();
20513 + /*
20514 +- * Destroy the kworker after releasing trigger_lock to prevent a
20515 ++ * Stop kthread 'psimon' after releasing trigger_lock to prevent a
20516 + * deadlock while waiting for psi_poll_work to acquire trigger_lock
20517 + */
20518 + if (task_to_destroy) {
20519 + /*
20520 + * After the RCU grace period has expired, the worker
20521 + * can no longer be found through group->poll_task.
20522 +- * But it might have been already scheduled before
20523 +- * that - deschedule it cleanly before destroying it.
20524 + */
20525 +- del_timer_sync(&group->poll_timer);
20526 + kthread_stop(task_to_destroy);
20527 + }
20528 + kfree(t);
20529 +diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
20530 +index 8f720b71d13dd..e617287052d5c 100644
20531 +--- a/kernel/sched/rt.c
20532 ++++ b/kernel/sched/rt.c
20533 +@@ -2331,13 +2331,20 @@ void __init init_sched_rt_class(void)
20534 + static void switched_to_rt(struct rq *rq, struct task_struct *p)
20535 + {
20536 + /*
20537 +- * If we are already running, then there's nothing
20538 +- * that needs to be done. But if we are not running
20539 +- * we may need to preempt the current running task.
20540 +- * If that current running task is also an RT task
20541 ++ * If we are running, update the avg_rt tracking, as the running time
20542 ++ * will now on be accounted into the latter.
20543 ++ */
20544 ++ if (task_current(rq, p)) {
20545 ++ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
20546 ++ return;
20547 ++ }
20548 ++
20549 ++ /*
20550 ++ * If we are not running we may need to preempt the current
20551 ++ * running task. If that current running task is also an RT task
20552 + * then see if we can move to another run queue.
20553 + */
20554 +- if (task_on_rq_queued(p) && rq->curr != p) {
20555 ++ if (task_on_rq_queued(p)) {
20556 + #ifdef CONFIG_SMP
20557 + if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
20558 + rt_queue_push_tasks(rq);
20559 +diff --git a/kernel/smpboot.c b/kernel/smpboot.c
20560 +index f25208e8df836..e4163042c4d66 100644
20561 +--- a/kernel/smpboot.c
20562 ++++ b/kernel/smpboot.c
20563 +@@ -33,7 +33,6 @@ struct task_struct *idle_thread_get(unsigned int cpu)
20564 +
20565 + if (!tsk)
20566 + return ERR_PTR(-ENOMEM);
20567 +- init_idle(tsk, cpu);
20568 + return tsk;
20569 + }
20570 +
20571 +diff --git a/kernel/sys.c b/kernel/sys.c
20572 +index 2e2e3f378d97f..cabfc5b861754 100644
20573 +--- a/kernel/sys.c
20574 ++++ b/kernel/sys.c
20575 +@@ -552,6 +552,10 @@ long __sys_setreuid(uid_t ruid, uid_t euid)
20576 + if (retval < 0)
20577 + goto error;
20578 +
20579 ++ retval = set_cred_ucounts(new);
20580 ++ if (retval < 0)
20581 ++ goto error;
20582 ++
20583 + return commit_creds(new);
20584 +
20585 + error:
20586 +@@ -610,6 +614,10 @@ long __sys_setuid(uid_t uid)
20587 + if (retval < 0)
20588 + goto error;
20589 +
20590 ++ retval = set_cred_ucounts(new);
20591 ++ if (retval < 0)
20592 ++ goto error;
20593 ++
20594 + return commit_creds(new);
20595 +
20596 + error:
20597 +@@ -685,6 +693,10 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
20598 + if (retval < 0)
20599 + goto error;
20600 +
20601 ++ retval = set_cred_ucounts(new);
20602 ++ if (retval < 0)
20603 ++ goto error;
20604 ++
20605 + return commit_creds(new);
20606 +
20607 + error:
20608 +diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
20609 +index cce484a2cc7ca..242997b71f2d1 100644
20610 +--- a/kernel/time/clocksource.c
20611 ++++ b/kernel/time/clocksource.c
20612 +@@ -124,6 +124,13 @@ static void __clocksource_change_rating(struct clocksource *cs, int rating);
20613 + #define WATCHDOG_INTERVAL (HZ >> 1)
20614 + #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
20615 +
20616 ++/*
20617 ++ * Maximum permissible delay between two readouts of the watchdog
20618 ++ * clocksource surrounding a read of the clocksource being validated.
20619 ++ * This delay could be due to SMIs, NMIs, or to VCPU preemptions.
20620 ++ */
20621 ++#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
20622 ++
20623 + static void clocksource_watchdog_work(struct work_struct *work)
20624 + {
20625 + /*
20626 +@@ -184,12 +191,99 @@ void clocksource_mark_unstable(struct clocksource *cs)
20627 + spin_unlock_irqrestore(&watchdog_lock, flags);
20628 + }
20629 +
20630 ++static ulong max_cswd_read_retries = 3;
20631 ++module_param(max_cswd_read_retries, ulong, 0644);
20632 ++
20633 ++static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
20634 ++{
20635 ++ unsigned int nretries;
20636 ++ u64 wd_end, wd_delta;
20637 ++ int64_t wd_delay;
20638 ++
20639 ++ for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
20640 ++ local_irq_disable();
20641 ++ *wdnow = watchdog->read(watchdog);
20642 ++ *csnow = cs->read(cs);
20643 ++ wd_end = watchdog->read(watchdog);
20644 ++ local_irq_enable();
20645 ++
20646 ++ wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
20647 ++ wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
20648 ++ watchdog->shift);
20649 ++ if (wd_delay <= WATCHDOG_MAX_SKEW) {
20650 ++ if (nretries > 1 || nretries >= max_cswd_read_retries) {
20651 ++ pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
20652 ++ smp_processor_id(), watchdog->name, nretries);
20653 ++ }
20654 ++ return true;
20655 ++ }
20656 ++ }
20657 ++
20658 ++ pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
20659 ++ smp_processor_id(), watchdog->name, wd_delay, nretries);
20660 ++ return false;
20661 ++}
20662 ++
20663 ++static u64 csnow_mid;
20664 ++static cpumask_t cpus_ahead;
20665 ++static cpumask_t cpus_behind;
20666 ++
20667 ++static void clocksource_verify_one_cpu(void *csin)
20668 ++{
20669 ++ struct clocksource *cs = (struct clocksource *)csin;
20670 ++
20671 ++ csnow_mid = cs->read(cs);
20672 ++}
20673 ++
20674 ++static void clocksource_verify_percpu(struct clocksource *cs)
20675 ++{
20676 ++ int64_t cs_nsec, cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;
20677 ++ u64 csnow_begin, csnow_end;
20678 ++ int cpu, testcpu;
20679 ++ s64 delta;
20680 ++
20681 ++ cpumask_clear(&cpus_ahead);
20682 ++ cpumask_clear(&cpus_behind);
20683 ++ preempt_disable();
20684 ++ testcpu = smp_processor_id();
20685 ++ pr_warn("Checking clocksource %s synchronization from CPU %d.\n", cs->name, testcpu);
20686 ++ for_each_online_cpu(cpu) {
20687 ++ if (cpu == testcpu)
20688 ++ continue;
20689 ++ csnow_begin = cs->read(cs);
20690 ++ smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
20691 ++ csnow_end = cs->read(cs);
20692 ++ delta = (s64)((csnow_mid - csnow_begin) & cs->mask);
20693 ++ if (delta < 0)
20694 ++ cpumask_set_cpu(cpu, &cpus_behind);
20695 ++ delta = (csnow_end - csnow_mid) & cs->mask;
20696 ++ if (delta < 0)
20697 ++ cpumask_set_cpu(cpu, &cpus_ahead);
20698 ++ delta = clocksource_delta(csnow_end, csnow_begin, cs->mask);
20699 ++ cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
20700 ++ if (cs_nsec > cs_nsec_max)
20701 ++ cs_nsec_max = cs_nsec;
20702 ++ if (cs_nsec < cs_nsec_min)
20703 ++ cs_nsec_min = cs_nsec;
20704 ++ }
20705 ++ preempt_enable();
20706 ++ if (!cpumask_empty(&cpus_ahead))
20707 ++ pr_warn(" CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
20708 ++ cpumask_pr_args(&cpus_ahead), testcpu, cs->name);
20709 ++ if (!cpumask_empty(&cpus_behind))
20710 ++ pr_warn(" CPUs %*pbl behind CPU %d for clocksource %s.\n",
20711 ++ cpumask_pr_args(&cpus_behind), testcpu, cs->name);
20712 ++ if (!cpumask_empty(&cpus_ahead) || !cpumask_empty(&cpus_behind))
20713 ++ pr_warn(" CPU %d check durations %lldns - %lldns for clocksource %s.\n",
20714 ++ testcpu, cs_nsec_min, cs_nsec_max, cs->name);
20715 ++}
20716 ++
20717 + static void clocksource_watchdog(struct timer_list *unused)
20718 + {
20719 +- struct clocksource *cs;
20720 + u64 csnow, wdnow, cslast, wdlast, delta;
20721 +- int64_t wd_nsec, cs_nsec;
20722 + int next_cpu, reset_pending;
20723 ++ int64_t wd_nsec, cs_nsec;
20724 ++ struct clocksource *cs;
20725 +
20726 + spin_lock(&watchdog_lock);
20727 + if (!watchdog_running)
20728 +@@ -206,10 +300,11 @@ static void clocksource_watchdog(struct timer_list *unused)
20729 + continue;
20730 + }
20731 +
20732 +- local_irq_disable();
20733 +- csnow = cs->read(cs);
20734 +- wdnow = watchdog->read(watchdog);
20735 +- local_irq_enable();
20736 ++ if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
20737 ++ /* Clock readout unreliable, so give it up. */
20738 ++ __clocksource_unstable(cs);
20739 ++ continue;
20740 ++ }
20741 +
20742 + /* Clocksource initialized ? */
20743 + if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
20744 +@@ -407,6 +502,12 @@ static int __clocksource_watchdog_kthread(void)
20745 + unsigned long flags;
20746 + int select = 0;
20747 +
20748 ++ /* Do any required per-CPU skew verification. */
20749 ++ if (curr_clocksource &&
20750 ++ curr_clocksource->flags & CLOCK_SOURCE_UNSTABLE &&
20751 ++ curr_clocksource->flags & CLOCK_SOURCE_VERIFY_PERCPU)
20752 ++ clocksource_verify_percpu(curr_clocksource);
20753 ++
20754 + spin_lock_irqsave(&watchdog_lock, flags);
20755 + list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
20756 + if (cs->flags & CLOCK_SOURCE_UNSTABLE) {
20757 +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
20758 +index 9bb3d2823f442..80fbee47194ba 100644
20759 +--- a/kernel/trace/bpf_trace.c
20760 ++++ b/kernel/trace/bpf_trace.c
20761 +@@ -2143,7 +2143,8 @@ static int __bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *
20762 + if (prog->aux->max_tp_access > btp->writable_size)
20763 + return -EINVAL;
20764 +
20765 +- return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
20766 ++ return tracepoint_probe_register_may_exist(tp, (void *)btp->bpf_func,
20767 ++ prog);
20768 + }
20769 +
20770 + int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
20771 +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
20772 +index 39ebe1826fc38..a696120b9f1ee 100644
20773 +--- a/kernel/trace/trace_events_hist.c
20774 ++++ b/kernel/trace/trace_events_hist.c
20775 +@@ -1539,6 +1539,13 @@ static int contains_operator(char *str)
20776 +
20777 + switch (*op) {
20778 + case '-':
20779 ++ /*
20780 ++ * Unfortunately, the modifier ".sym-offset"
20781 ++ * can confuse things.
20782 ++ */
20783 ++ if (op - str >= 4 && !strncmp(op - 4, ".sym-offset", 11))
20784 ++ return FIELD_OP_NONE;
20785 ++
20786 + if (*str == '-')
20787 + field_op = FIELD_OP_UNARY_MINUS;
20788 + else
20789 +diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
20790 +index 9f478d29b9264..976bf8ce80396 100644
20791 +--- a/kernel/tracepoint.c
20792 ++++ b/kernel/tracepoint.c
20793 +@@ -273,7 +273,8 @@ static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func
20794 + * Add the probe function to a tracepoint.
20795 + */
20796 + static int tracepoint_add_func(struct tracepoint *tp,
20797 +- struct tracepoint_func *func, int prio)
20798 ++ struct tracepoint_func *func, int prio,
20799 ++ bool warn)
20800 + {
20801 + struct tracepoint_func *old, *tp_funcs;
20802 + int ret;
20803 +@@ -288,7 +289,7 @@ static int tracepoint_add_func(struct tracepoint *tp,
20804 + lockdep_is_held(&tracepoints_mutex));
20805 + old = func_add(&tp_funcs, func, prio);
20806 + if (IS_ERR(old)) {
20807 +- WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
20808 ++ WARN_ON_ONCE(warn && PTR_ERR(old) != -ENOMEM);
20809 + return PTR_ERR(old);
20810 + }
20811 +
20812 +@@ -343,6 +344,32 @@ static int tracepoint_remove_func(struct tracepoint *tp,
20813 + return 0;
20814 + }
20815 +
20816 ++/**
20817 ++ * tracepoint_probe_register_prio_may_exist - Connect a probe to a tracepoint with priority
20818 ++ * @tp: tracepoint
20819 ++ * @probe: probe handler
20820 ++ * @data: tracepoint data
20821 ++ * @prio: priority of this function over other registered functions
20822 ++ *
20823 ++ * Same as tracepoint_probe_register_prio() except that it will not warn
20824 ++ * if the tracepoint is already registered.
20825 ++ */
20826 ++int tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe,
20827 ++ void *data, int prio)
20828 ++{
20829 ++ struct tracepoint_func tp_func;
20830 ++ int ret;
20831 ++
20832 ++ mutex_lock(&tracepoints_mutex);
20833 ++ tp_func.func = probe;
20834 ++ tp_func.data = data;
20835 ++ tp_func.prio = prio;
20836 ++ ret = tracepoint_add_func(tp, &tp_func, prio, false);
20837 ++ mutex_unlock(&tracepoints_mutex);
20838 ++ return ret;
20839 ++}
20840 ++EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio_may_exist);
20841 ++
20842 + /**
20843 + * tracepoint_probe_register_prio - Connect a probe to a tracepoint with priority
20844 + * @tp: tracepoint
20845 +@@ -366,7 +393,7 @@ int tracepoint_probe_register_prio(struct tracepoint *tp, void *probe,
20846 + tp_func.func = probe;
20847 + tp_func.data = data;
20848 + tp_func.prio = prio;
20849 +- ret = tracepoint_add_func(tp, &tp_func, prio);
20850 ++ ret = tracepoint_add_func(tp, &tp_func, prio, true);
20851 + mutex_unlock(&tracepoints_mutex);
20852 + return ret;
20853 + }
20854 +diff --git a/kernel/ucount.c b/kernel/ucount.c
20855 +index 11b1596e2542a..9894795043c42 100644
20856 +--- a/kernel/ucount.c
20857 ++++ b/kernel/ucount.c
20858 +@@ -8,6 +8,12 @@
20859 + #include <linux/kmemleak.h>
20860 + #include <linux/user_namespace.h>
20861 +
20862 ++struct ucounts init_ucounts = {
20863 ++ .ns = &init_user_ns,
20864 ++ .uid = GLOBAL_ROOT_UID,
20865 ++ .count = 1,
20866 ++};
20867 ++
20868 + #define UCOUNTS_HASHTABLE_BITS 10
20869 + static struct hlist_head ucounts_hashtable[(1 << UCOUNTS_HASHTABLE_BITS)];
20870 + static DEFINE_SPINLOCK(ucounts_lock);
20871 +@@ -125,7 +131,15 @@ static struct ucounts *find_ucounts(struct user_namespace *ns, kuid_t uid, struc
20872 + return NULL;
20873 + }
20874 +
20875 +-static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
20876 ++static void hlist_add_ucounts(struct ucounts *ucounts)
20877 ++{
20878 ++ struct hlist_head *hashent = ucounts_hashentry(ucounts->ns, ucounts->uid);
20879 ++ spin_lock_irq(&ucounts_lock);
20880 ++ hlist_add_head(&ucounts->node, hashent);
20881 ++ spin_unlock_irq(&ucounts_lock);
20882 ++}
20883 ++
20884 ++struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
20885 + {
20886 + struct hlist_head *hashent = ucounts_hashentry(ns, uid);
20887 + struct ucounts *ucounts, *new;
20888 +@@ -160,7 +174,26 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
20889 + return ucounts;
20890 + }
20891 +
20892 +-static void put_ucounts(struct ucounts *ucounts)
20893 ++struct ucounts *get_ucounts(struct ucounts *ucounts)
20894 ++{
20895 ++ unsigned long flags;
20896 ++
20897 ++ if (!ucounts)
20898 ++ return NULL;
20899 ++
20900 ++ spin_lock_irqsave(&ucounts_lock, flags);
20901 ++ if (ucounts->count == INT_MAX) {
20902 ++ WARN_ONCE(1, "ucounts: counter has reached its maximum value");
20903 ++ ucounts = NULL;
20904 ++ } else {
20905 ++ ucounts->count += 1;
20906 ++ }
20907 ++ spin_unlock_irqrestore(&ucounts_lock, flags);
20908 ++
20909 ++ return ucounts;
20910 ++}
20911 ++
20912 ++void put_ucounts(struct ucounts *ucounts)
20913 + {
20914 + unsigned long flags;
20915 +
20916 +@@ -194,7 +227,7 @@ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid,
20917 + {
20918 + struct ucounts *ucounts, *iter, *bad;
20919 + struct user_namespace *tns;
20920 +- ucounts = get_ucounts(ns, uid);
20921 ++ ucounts = alloc_ucounts(ns, uid);
20922 + for (iter = ucounts; iter; iter = tns->ucounts) {
20923 + int max;
20924 + tns = iter->ns;
20925 +@@ -237,6 +270,7 @@ static __init int user_namespace_sysctl_init(void)
20926 + BUG_ON(!user_header);
20927 + BUG_ON(!setup_userns_sysctls(&init_user_ns));
20928 + #endif
20929 ++ hlist_add_ucounts(&init_ucounts);
20930 + return 0;
20931 + }
20932 + subsys_initcall(user_namespace_sysctl_init);
20933 +diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
20934 +index 9a4b980d695b8..f1b7b4b8ffa25 100644
20935 +--- a/kernel/user_namespace.c
20936 ++++ b/kernel/user_namespace.c
20937 +@@ -1340,6 +1340,9 @@ static int userns_install(struct nsset *nsset, struct ns_common *ns)
20938 + put_user_ns(cred->user_ns);
20939 + set_cred_user_ns(cred, get_user_ns(user_ns));
20940 +
20941 ++ if (set_cred_ucounts(cred) < 0)
20942 ++ return -EINVAL;
20943 ++
20944 + return 0;
20945 + }
20946 +
20947 +diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
20948 +index 417c3d3e521bf..5c9f528dd46d1 100644
20949 +--- a/lib/Kconfig.debug
20950 ++++ b/lib/Kconfig.debug
20951 +@@ -1363,7 +1363,6 @@ config LOCKDEP
20952 + bool
20953 + depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
20954 + select STACKTRACE
20955 +- depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
20956 + select KALLSYMS
20957 + select KALLSYMS_ALL
20958 +
20959 +diff --git a/lib/iov_iter.c b/lib/iov_iter.c
20960 +index f66c62aa7154d..9a76f676d2482 100644
20961 +--- a/lib/iov_iter.c
20962 ++++ b/lib/iov_iter.c
20963 +@@ -432,7 +432,7 @@ int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
20964 + int err;
20965 + struct iovec v;
20966 +
20967 +- if (!(i->type & (ITER_BVEC|ITER_KVEC))) {
20968 ++ if (iter_is_iovec(i)) {
20969 + iterate_iovec(i, bytes, v, iov, skip, ({
20970 + err = fault_in_pages_readable(v.iov_base, v.iov_len);
20971 + if (unlikely(err))
20972 +@@ -906,9 +906,12 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
20973 + size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
20974 + kunmap_atomic(kaddr);
20975 + return wanted;
20976 +- } else if (unlikely(iov_iter_is_discard(i)))
20977 ++ } else if (unlikely(iov_iter_is_discard(i))) {
20978 ++ if (unlikely(i->count < bytes))
20979 ++ bytes = i->count;
20980 ++ i->count -= bytes;
20981 + return bytes;
20982 +- else if (likely(!iov_iter_is_pipe(i)))
20983 ++ } else if (likely(!iov_iter_is_pipe(i)))
20984 + return copy_page_to_iter_iovec(page, offset, bytes, i);
20985 + else
20986 + return copy_page_to_iter_pipe(page, offset, bytes, i);
20987 +diff --git a/lib/kstrtox.c b/lib/kstrtox.c
20988 +index a118b0b1e9b2c..0b5fe8b411732 100644
20989 +--- a/lib/kstrtox.c
20990 ++++ b/lib/kstrtox.c
20991 +@@ -39,20 +39,22 @@ const char *_parse_integer_fixup_radix(const char *s, unsigned int *base)
20992 +
20993 + /*
20994 + * Convert non-negative integer string representation in explicitly given radix
20995 +- * to an integer.
20996 ++ * to an integer. A maximum of max_chars characters will be converted.
20997 ++ *
20998 + * Return number of characters consumed maybe or-ed with overflow bit.
20999 + * If overflow occurs, result integer (incorrect) is still returned.
21000 + *
21001 + * Don't you dare use this function.
21002 + */
21003 +-unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *p)
21004 ++unsigned int _parse_integer_limit(const char *s, unsigned int base, unsigned long long *p,
21005 ++ size_t max_chars)
21006 + {
21007 + unsigned long long res;
21008 + unsigned int rv;
21009 +
21010 + res = 0;
21011 + rv = 0;
21012 +- while (1) {
21013 ++ while (max_chars--) {
21014 + unsigned int c = *s;
21015 + unsigned int lc = c | 0x20; /* don't tolower() this line */
21016 + unsigned int val;
21017 +@@ -82,6 +84,11 @@ unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long
21018 + return rv;
21019 + }
21020 +
21021 ++unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *p)
21022 ++{
21023 ++ return _parse_integer_limit(s, base, p, INT_MAX);
21024 ++}
21025 ++
21026 + static int _kstrtoull(const char *s, unsigned int base, unsigned long long *res)
21027 + {
21028 + unsigned long long _res;
21029 +diff --git a/lib/kstrtox.h b/lib/kstrtox.h
21030 +index 3b4637bcd2540..158c400ca8658 100644
21031 +--- a/lib/kstrtox.h
21032 ++++ b/lib/kstrtox.h
21033 +@@ -4,6 +4,8 @@
21034 +
21035 + #define KSTRTOX_OVERFLOW (1U << 31)
21036 + const char *_parse_integer_fixup_radix(const char *s, unsigned int *base);
21037 ++unsigned int _parse_integer_limit(const char *s, unsigned int base, unsigned long long *res,
21038 ++ size_t max_chars);
21039 + unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *res);
21040 +
21041 + #endif
21042 +diff --git a/lib/kunit/test.c b/lib/kunit/test.c
21043 +index ec9494e914ef3..c2b7248ebc9eb 100644
21044 +--- a/lib/kunit/test.c
21045 ++++ b/lib/kunit/test.c
21046 +@@ -343,7 +343,7 @@ static void kunit_run_case_catch_errors(struct kunit_suite *suite,
21047 + context.test_case = test_case;
21048 + kunit_try_catch_run(try_catch, &context);
21049 +
21050 +- test_case->success = test->success;
21051 ++ test_case->success &= test->success;
21052 + }
21053 +
21054 + int kunit_run_tests(struct kunit_suite *suite)
21055 +@@ -355,7 +355,7 @@ int kunit_run_tests(struct kunit_suite *suite)
21056 +
21057 + kunit_suite_for_each_test_case(suite, test_case) {
21058 + struct kunit test = { .param_value = NULL, .param_index = 0 };
21059 +- bool test_success = true;
21060 ++ test_case->success = true;
21061 +
21062 + if (test_case->generate_params) {
21063 + /* Get initial param. */
21064 +@@ -365,7 +365,6 @@ int kunit_run_tests(struct kunit_suite *suite)
21065 +
21066 + do {
21067 + kunit_run_case_catch_errors(suite, test_case, &test);
21068 +- test_success &= test_case->success;
21069 +
21070 + if (test_case->generate_params) {
21071 + if (param_desc[0] == '\0') {
21072 +@@ -387,7 +386,7 @@ int kunit_run_tests(struct kunit_suite *suite)
21073 + }
21074 + } while (test.param_value);
21075 +
21076 +- kunit_print_ok_not_ok(&test, true, test_success,
21077 ++ kunit_print_ok_not_ok(&test, true, test_case->success,
21078 + kunit_test_case_num(suite, test_case),
21079 + test_case->name);
21080 + }
21081 +diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
21082 +index 2d85abac17448..0f6b262e09648 100644
21083 +--- a/lib/locking-selftest.c
21084 ++++ b/lib/locking-selftest.c
21085 +@@ -194,6 +194,7 @@ static void init_shared_classes(void)
21086 + #define HARDIRQ_ENTER() \
21087 + local_irq_disable(); \
21088 + __irq_enter(); \
21089 ++ lockdep_hardirq_threaded(); \
21090 + WARN_ON(!in_irq());
21091 +
21092 + #define HARDIRQ_EXIT() \
21093 +diff --git a/lib/math/rational.c b/lib/math/rational.c
21094 +index 9781d521963d1..c0ab51d8fbb98 100644
21095 +--- a/lib/math/rational.c
21096 ++++ b/lib/math/rational.c
21097 +@@ -12,6 +12,7 @@
21098 + #include <linux/compiler.h>
21099 + #include <linux/export.h>
21100 + #include <linux/minmax.h>
21101 ++#include <linux/limits.h>
21102 +
21103 + /*
21104 + * calculate best rational approximation for a given fraction
21105 +@@ -78,13 +79,18 @@ void rational_best_approximation(
21106 + * found below as 't'.
21107 + */
21108 + if ((n2 > max_numerator) || (d2 > max_denominator)) {
21109 +- unsigned long t = min((max_numerator - n0) / n1,
21110 +- (max_denominator - d0) / d1);
21111 ++ unsigned long t = ULONG_MAX;
21112 +
21113 +- /* This tests if the semi-convergent is closer
21114 +- * than the previous convergent.
21115 ++ if (d1)
21116 ++ t = (max_denominator - d0) / d1;
21117 ++ if (n1)
21118 ++ t = min(t, (max_numerator - n0) / n1);
21119 ++
21120 ++ /* This tests if the semi-convergent is closer than the previous
21121 ++ * convergent. If d1 is zero there is no previous convergent as this
21122 ++ * is the 1st iteration, so always choose the semi-convergent.
21123 + */
21124 +- if (2u * t > a || (2u * t == a && d0 * dp > d1 * d)) {
21125 ++ if (!d1 || 2u * t > a || (2u * t == a && d0 * dp > d1 * d)) {
21126 + n1 = n0 + t * n1;
21127 + d1 = d0 + t * d1;
21128 + }
21129 +diff --git a/lib/seq_buf.c b/lib/seq_buf.c
21130 +index 707453f5d58ee..89c26c393bdba 100644
21131 +--- a/lib/seq_buf.c
21132 ++++ b/lib/seq_buf.c
21133 +@@ -243,12 +243,14 @@ int seq_buf_putmem_hex(struct seq_buf *s, const void *mem,
21134 + break;
21135 +
21136 + /* j increments twice per loop */
21137 +- len -= j / 2;
21138 + hex[j++] = ' ';
21139 +
21140 + seq_buf_putmem(s, hex, j);
21141 + if (seq_buf_has_overflowed(s))
21142 + return -1;
21143 ++
21144 ++ len -= start_len;
21145 ++ data += start_len;
21146 + }
21147 + return 0;
21148 + }
21149 +diff --git a/lib/vsprintf.c b/lib/vsprintf.c
21150 +index 39ef2e314da5e..9d6722199390f 100644
21151 +--- a/lib/vsprintf.c
21152 ++++ b/lib/vsprintf.c
21153 +@@ -53,6 +53,31 @@
21154 + #include <linux/string_helpers.h>
21155 + #include "kstrtox.h"
21156 +
21157 ++static unsigned long long simple_strntoull(const char *startp, size_t max_chars,
21158 ++ char **endp, unsigned int base)
21159 ++{
21160 ++ const char *cp;
21161 ++ unsigned long long result = 0ULL;
21162 ++ size_t prefix_chars;
21163 ++ unsigned int rv;
21164 ++
21165 ++ cp = _parse_integer_fixup_radix(startp, &base);
21166 ++ prefix_chars = cp - startp;
21167 ++ if (prefix_chars < max_chars) {
21168 ++ rv = _parse_integer_limit(cp, base, &result, max_chars - prefix_chars);
21169 ++ /* FIXME */
21170 ++ cp += (rv & ~KSTRTOX_OVERFLOW);
21171 ++ } else {
21172 ++ /* Field too short for prefix + digit, skip over without converting */
21173 ++ cp = startp + max_chars;
21174 ++ }
21175 ++
21176 ++ if (endp)
21177 ++ *endp = (char *)cp;
21178 ++
21179 ++ return result;
21180 ++}
21181 ++
21182 + /**
21183 + * simple_strtoull - convert a string to an unsigned long long
21184 + * @cp: The start of the string
21185 +@@ -63,18 +88,7 @@
21186 + */
21187 + unsigned long long simple_strtoull(const char *cp, char **endp, unsigned int base)
21188 + {
21189 +- unsigned long long result;
21190 +- unsigned int rv;
21191 +-
21192 +- cp = _parse_integer_fixup_radix(cp, &base);
21193 +- rv = _parse_integer(cp, base, &result);
21194 +- /* FIXME */
21195 +- cp += (rv & ~KSTRTOX_OVERFLOW);
21196 +-
21197 +- if (endp)
21198 +- *endp = (char *)cp;
21199 +-
21200 +- return result;
21201 ++ return simple_strntoull(cp, INT_MAX, endp, base);
21202 + }
21203 + EXPORT_SYMBOL(simple_strtoull);
21204 +
21205 +@@ -109,6 +123,21 @@ long simple_strtol(const char *cp, char **endp, unsigned int base)
21206 + }
21207 + EXPORT_SYMBOL(simple_strtol);
21208 +
21209 ++static long long simple_strntoll(const char *cp, size_t max_chars, char **endp,
21210 ++ unsigned int base)
21211 ++{
21212 ++ /*
21213 ++ * simple_strntoull() safely handles receiving max_chars==0 in the
21214 ++ * case cp[0] == '-' && max_chars == 1.
21215 ++ * If max_chars == 0 we can drop through and pass it to simple_strntoull()
21216 ++ * and the content of *cp is irrelevant.
21217 ++ */
21218 ++ if (*cp == '-' && max_chars > 0)
21219 ++ return -simple_strntoull(cp + 1, max_chars - 1, endp, base);
21220 ++
21221 ++ return simple_strntoull(cp, max_chars, endp, base);
21222 ++}
21223 ++
21224 + /**
21225 + * simple_strtoll - convert a string to a signed long long
21226 + * @cp: The start of the string
21227 +@@ -119,10 +148,7 @@ EXPORT_SYMBOL(simple_strtol);
21228 + */
21229 + long long simple_strtoll(const char *cp, char **endp, unsigned int base)
21230 + {
21231 +- if (*cp == '-')
21232 +- return -simple_strtoull(cp + 1, endp, base);
21233 +-
21234 +- return simple_strtoull(cp, endp, base);
21235 ++ return simple_strntoll(cp, INT_MAX, endp, base);
21236 + }
21237 + EXPORT_SYMBOL(simple_strtoll);
21238 +
21239 +@@ -3475,25 +3501,13 @@ int vsscanf(const char *buf, const char *fmt, va_list args)
21240 + break;
21241 +
21242 + if (is_sign)
21243 +- val.s = qualifier != 'L' ?
21244 +- simple_strtol(str, &next, base) :
21245 +- simple_strtoll(str, &next, base);
21246 ++ val.s = simple_strntoll(str,
21247 ++ field_width >= 0 ? field_width : INT_MAX,
21248 ++ &next, base);
21249 + else
21250 +- val.u = qualifier != 'L' ?
21251 +- simple_strtoul(str, &next, base) :
21252 +- simple_strtoull(str, &next, base);
21253 +-
21254 +- if (field_width > 0 && next - str > field_width) {
21255 +- if (base == 0)
21256 +- _parse_integer_fixup_radix(str, &base);
21257 +- while (next - str > field_width) {
21258 +- if (is_sign)
21259 +- val.s = div_s64(val.s, base);
21260 +- else
21261 +- val.u = div_u64(val.u, base);
21262 +- --next;
21263 +- }
21264 +- }
21265 ++ val.u = simple_strntoull(str,
21266 ++ field_width >= 0 ? field_width : INT_MAX,
21267 ++ &next, base);
21268 +
21269 + switch (qualifier) {
21270 + case 'H': /* that's 'hh' in format */
21271 +diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
21272 +index 726fd2030f645..12ebc97e8b435 100644
21273 +--- a/mm/debug_vm_pgtable.c
21274 ++++ b/mm/debug_vm_pgtable.c
21275 +@@ -146,13 +146,14 @@ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
21276 + static void __init pmd_basic_tests(unsigned long pfn, int idx)
21277 + {
21278 + pgprot_t prot = protection_map[idx];
21279 +- pmd_t pmd = pfn_pmd(pfn, prot);
21280 + unsigned long val = idx, *ptr = &val;
21281 ++ pmd_t pmd;
21282 +
21283 + if (!has_transparent_hugepage())
21284 + return;
21285 +
21286 + pr_debug("Validating PMD basic (%pGv)\n", ptr);
21287 ++ pmd = pfn_pmd(pfn, prot);
21288 +
21289 + /*
21290 + * This test needs to be executed after the given page table entry
21291 +@@ -185,7 +186,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
21292 + unsigned long pfn, unsigned long vaddr,
21293 + pgprot_t prot, pgtable_t pgtable)
21294 + {
21295 +- pmd_t pmd = pfn_pmd(pfn, prot);
21296 ++ pmd_t pmd;
21297 +
21298 + if (!has_transparent_hugepage())
21299 + return;
21300 +@@ -232,9 +233,14 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
21301 +
21302 + static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
21303 + {
21304 +- pmd_t pmd = pfn_pmd(pfn, prot);
21305 ++ pmd_t pmd;
21306 ++
21307 ++ if (!has_transparent_hugepage())
21308 ++ return;
21309 +
21310 + pr_debug("Validating PMD leaf\n");
21311 ++ pmd = pfn_pmd(pfn, prot);
21312 ++
21313 + /*
21314 + * PMD based THP is a leaf entry.
21315 + */
21316 +@@ -267,12 +273,16 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
21317 +
21318 + static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
21319 + {
21320 +- pmd_t pmd = pfn_pmd(pfn, prot);
21321 ++ pmd_t pmd;
21322 +
21323 + if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
21324 + return;
21325 +
21326 ++ if (!has_transparent_hugepage())
21327 ++ return;
21328 ++
21329 + pr_debug("Validating PMD saved write\n");
21330 ++ pmd = pfn_pmd(pfn, prot);
21331 + WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
21332 + WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
21333 + }
21334 +@@ -281,13 +291,14 @@ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
21335 + static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx)
21336 + {
21337 + pgprot_t prot = protection_map[idx];
21338 +- pud_t pud = pfn_pud(pfn, prot);
21339 + unsigned long val = idx, *ptr = &val;
21340 ++ pud_t pud;
21341 +
21342 + if (!has_transparent_hugepage())
21343 + return;
21344 +
21345 + pr_debug("Validating PUD basic (%pGv)\n", ptr);
21346 ++ pud = pfn_pud(pfn, prot);
21347 +
21348 + /*
21349 + * This test needs to be executed after the given page table entry
21350 +@@ -323,7 +334,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
21351 + unsigned long pfn, unsigned long vaddr,
21352 + pgprot_t prot)
21353 + {
21354 +- pud_t pud = pfn_pud(pfn, prot);
21355 ++ pud_t pud;
21356 +
21357 + if (!has_transparent_hugepage())
21358 + return;
21359 +@@ -332,6 +343,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
21360 + /* Align the address wrt HPAGE_PUD_SIZE */
21361 + vaddr &= HPAGE_PUD_MASK;
21362 +
21363 ++ pud = pfn_pud(pfn, prot);
21364 + set_pud_at(mm, vaddr, pudp, pud);
21365 + pudp_set_wrprotect(mm, vaddr, pudp);
21366 + pud = READ_ONCE(*pudp);
21367 +@@ -370,9 +382,13 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
21368 +
21369 + static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
21370 + {
21371 +- pud_t pud = pfn_pud(pfn, prot);
21372 ++ pud_t pud;
21373 ++
21374 ++ if (!has_transparent_hugepage())
21375 ++ return;
21376 +
21377 + pr_debug("Validating PUD leaf\n");
21378 ++ pud = pfn_pud(pfn, prot);
21379 + /*
21380 + * PUD based THP is a leaf entry.
21381 + */
21382 +@@ -654,12 +670,16 @@ static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
21383 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE
21384 + static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
21385 + {
21386 +- pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
21387 ++ pmd_t pmd;
21388 +
21389 + if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
21390 + return;
21391 +
21392 ++ if (!has_transparent_hugepage())
21393 ++ return;
21394 ++
21395 + pr_debug("Validating PMD protnone\n");
21396 ++ pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
21397 + WARN_ON(!pmd_protnone(pmd));
21398 + WARN_ON(!pmd_present(pmd));
21399 + }
21400 +@@ -679,18 +699,26 @@ static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
21401 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE
21402 + static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
21403 + {
21404 +- pmd_t pmd = pfn_pmd(pfn, prot);
21405 ++ pmd_t pmd;
21406 ++
21407 ++ if (!has_transparent_hugepage())
21408 ++ return;
21409 +
21410 + pr_debug("Validating PMD devmap\n");
21411 ++ pmd = pfn_pmd(pfn, prot);
21412 + WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
21413 + }
21414 +
21415 + #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
21416 + static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
21417 + {
21418 +- pud_t pud = pfn_pud(pfn, prot);
21419 ++ pud_t pud;
21420 ++
21421 ++ if (!has_transparent_hugepage())
21422 ++ return;
21423 +
21424 + pr_debug("Validating PUD devmap\n");
21425 ++ pud = pfn_pud(pfn, prot);
21426 + WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
21427 + }
21428 + #else /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
21429 +@@ -733,25 +761,33 @@ static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
21430 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE
21431 + static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
21432 + {
21433 +- pmd_t pmd = pfn_pmd(pfn, prot);
21434 ++ pmd_t pmd;
21435 +
21436 + if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
21437 + return;
21438 +
21439 ++ if (!has_transparent_hugepage())
21440 ++ return;
21441 ++
21442 + pr_debug("Validating PMD soft dirty\n");
21443 ++ pmd = pfn_pmd(pfn, prot);
21444 + WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
21445 + WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
21446 + }
21447 +
21448 + static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
21449 + {
21450 +- pmd_t pmd = pfn_pmd(pfn, prot);
21451 ++ pmd_t pmd;
21452 +
21453 + if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) ||
21454 + !IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
21455 + return;
21456 +
21457 ++ if (!has_transparent_hugepage())
21458 ++ return;
21459 ++
21460 + pr_debug("Validating PMD swap soft dirty\n");
21461 ++ pmd = pfn_pmd(pfn, prot);
21462 + WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
21463 + WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
21464 + }
21465 +@@ -780,6 +816,9 @@ static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
21466 + swp_entry_t swp;
21467 + pmd_t pmd;
21468 +
21469 ++ if (!has_transparent_hugepage())
21470 ++ return;
21471 ++
21472 + pr_debug("Validating PMD swap\n");
21473 + pmd = pfn_pmd(pfn, prot);
21474 + swp = __pmd_to_swp_entry(pmd);
21475 +diff --git a/mm/gup.c b/mm/gup.c
21476 +index 4164a70160e31..4ad276859bbda 100644
21477 +--- a/mm/gup.c
21478 ++++ b/mm/gup.c
21479 +@@ -44,6 +44,23 @@ static void hpage_pincount_sub(struct page *page, int refs)
21480 + atomic_sub(refs, compound_pincount_ptr(page));
21481 + }
21482 +
21483 ++/* Equivalent to calling put_page() @refs times. */
21484 ++static void put_page_refs(struct page *page, int refs)
21485 ++{
21486 ++#ifdef CONFIG_DEBUG_VM
21487 ++ if (VM_WARN_ON_ONCE_PAGE(page_ref_count(page) < refs, page))
21488 ++ return;
21489 ++#endif
21490 ++
21491 ++ /*
21492 ++ * Calling put_page() for each ref is unnecessarily slow. Only the last
21493 ++ * ref needs a put_page().
21494 ++ */
21495 ++ if (refs > 1)
21496 ++ page_ref_sub(page, refs - 1);
21497 ++ put_page(page);
21498 ++}
21499 ++
21500 + /*
21501 + * Return the compound head page with ref appropriately incremented,
21502 + * or NULL if that failed.
21503 +@@ -56,6 +73,21 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
21504 + return NULL;
21505 + if (unlikely(!page_cache_add_speculative(head, refs)))
21506 + return NULL;
21507 ++
21508 ++ /*
21509 ++ * At this point we have a stable reference to the head page; but it
21510 ++ * could be that between the compound_head() lookup and the refcount
21511 ++ * increment, the compound page was split, in which case we'd end up
21512 ++ * holding a reference on a page that has nothing to do with the page
21513 ++ * we were given anymore.
21514 ++ * So now that the head page is stable, recheck that the pages still
21515 ++ * belong together.
21516 ++ */
21517 ++ if (unlikely(compound_head(page) != head)) {
21518 ++ put_page_refs(head, refs);
21519 ++ return NULL;
21520 ++ }
21521 ++
21522 + return head;
21523 + }
21524 +
21525 +@@ -94,6 +126,14 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page,
21526 + is_migrate_cma_page(page))
21527 + return NULL;
21528 +
21529 ++ /*
21530 ++ * CAUTION: Don't use compound_head() on the page before this
21531 ++ * point, the result won't be stable.
21532 ++ */
21533 ++ page = try_get_compound_head(page, refs);
21534 ++ if (!page)
21535 ++ return NULL;
21536 ++
21537 + /*
21538 + * When pinning a compound page of order > 1 (which is what
21539 + * hpage_pincount_available() checks for), use an exact count to
21540 +@@ -102,15 +142,10 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page,
21541 + * However, be sure to *also* increment the normal page refcount
21542 + * field at least once, so that the page really is pinned.
21543 + */
21544 +- if (!hpage_pincount_available(page))
21545 +- refs *= GUP_PIN_COUNTING_BIAS;
21546 +-
21547 +- page = try_get_compound_head(page, refs);
21548 +- if (!page)
21549 +- return NULL;
21550 +-
21551 + if (hpage_pincount_available(page))
21552 + hpage_pincount_add(page, refs);
21553 ++ else
21554 ++ page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
21555 +
21556 + mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
21557 + orig_refs);
21558 +@@ -134,14 +169,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
21559 + refs *= GUP_PIN_COUNTING_BIAS;
21560 + }
21561 +
21562 +- VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
21563 +- /*
21564 +- * Calling put_page() for each ref is unnecessarily slow. Only the last
21565 +- * ref needs a put_page().
21566 +- */
21567 +- if (refs > 1)
21568 +- page_ref_sub(page, refs - 1);
21569 +- put_page(page);
21570 ++ put_page_refs(page, refs);
21571 + }
21572 +
21573 + /**
21574 +diff --git a/mm/huge_memory.c b/mm/huge_memory.c
21575 +index 44c455dbbd637..bc642923e0c97 100644
21576 +--- a/mm/huge_memory.c
21577 ++++ b/mm/huge_memory.c
21578 +@@ -63,7 +63,14 @@ static atomic_t huge_zero_refcount;
21579 + struct page *huge_zero_page __read_mostly;
21580 + unsigned long huge_zero_pfn __read_mostly = ~0UL;
21581 +
21582 +-bool transparent_hugepage_enabled(struct vm_area_struct *vma)
21583 ++static inline bool file_thp_enabled(struct vm_area_struct *vma)
21584 ++{
21585 ++ return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
21586 ++ !inode_is_open_for_write(vma->vm_file->f_inode) &&
21587 ++ (vma->vm_flags & VM_EXEC);
21588 ++}
21589 ++
21590 ++bool transparent_hugepage_active(struct vm_area_struct *vma)
21591 + {
21592 + /* The addr is used to check if the vma size fits */
21593 + unsigned long addr = (vma->vm_end & HPAGE_PMD_MASK) - HPAGE_PMD_SIZE;
21594 +@@ -74,6 +81,8 @@ bool transparent_hugepage_enabled(struct vm_area_struct *vma)
21595 + return __transparent_hugepage_enabled(vma);
21596 + if (vma_is_shmem(vma))
21597 + return shmem_huge_enabled(vma);
21598 ++ if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
21599 ++ return file_thp_enabled(vma);
21600 +
21601 + return false;
21602 + }
21603 +@@ -1606,7 +1615,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
21604 + * If other processes are mapping this page, we couldn't discard
21605 + * the page unless they all do MADV_FREE so let's skip the page.
21606 + */
21607 +- if (page_mapcount(page) != 1)
21608 ++ if (total_mapcount(page) != 1)
21609 + goto out;
21610 +
21611 + if (!trylock_page(page))
21612 +diff --git a/mm/hugetlb.c b/mm/hugetlb.c
21613 +index 7ba7d9b20494a..dbf44b92651b7 100644
21614 +--- a/mm/hugetlb.c
21615 ++++ b/mm/hugetlb.c
21616 +@@ -1313,8 +1313,6 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
21617 + return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
21618 + }
21619 +
21620 +-static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
21621 +-static void prep_compound_gigantic_page(struct page *page, unsigned int order);
21622 + #else /* !CONFIG_CONTIG_ALLOC */
21623 + static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
21624 + int nid, nodemask_t *nodemask)
21625 +@@ -2504,16 +2502,10 @@ found:
21626 + return 1;
21627 + }
21628 +
21629 +-static void __init prep_compound_huge_page(struct page *page,
21630 +- unsigned int order)
21631 +-{
21632 +- if (unlikely(order > (MAX_ORDER - 1)))
21633 +- prep_compound_gigantic_page(page, order);
21634 +- else
21635 +- prep_compound_page(page, order);
21636 +-}
21637 +-
21638 +-/* Put bootmem huge pages into the standard lists after mem_map is up */
21639 ++/*
21640 ++ * Put bootmem huge pages into the standard lists after mem_map is up.
21641 ++ * Note: This only applies to gigantic (order > MAX_ORDER) pages.
21642 ++ */
21643 + static void __init gather_bootmem_prealloc(void)
21644 + {
21645 + struct huge_bootmem_page *m;
21646 +@@ -2522,20 +2514,19 @@ static void __init gather_bootmem_prealloc(void)
21647 + struct page *page = virt_to_page(m);
21648 + struct hstate *h = m->hstate;
21649 +
21650 ++ VM_BUG_ON(!hstate_is_gigantic(h));
21651 + WARN_ON(page_count(page) != 1);
21652 +- prep_compound_huge_page(page, huge_page_order(h));
21653 ++ prep_compound_gigantic_page(page, huge_page_order(h));
21654 + WARN_ON(PageReserved(page));
21655 + prep_new_huge_page(h, page, page_to_nid(page));
21656 + put_page(page); /* free it into the hugepage allocator */
21657 +
21658 + /*
21659 +- * If we had gigantic hugepages allocated at boot time, we need
21660 +- * to restore the 'stolen' pages to totalram_pages in order to
21661 +- * fix confusing memory reports from free(1) and another
21662 +- * side-effects, like CommitLimit going negative.
21663 ++ * We need to restore the 'stolen' pages to totalram_pages
21664 ++ * in order to fix confusing memory reports from free(1) and
21665 ++ * other side-effects, like CommitLimit going negative.
21666 + */
21667 +- if (hstate_is_gigantic(h))
21668 +- adjust_managed_page_count(page, pages_per_huge_page(h));
21669 ++ adjust_managed_page_count(page, pages_per_huge_page(h));
21670 + cond_resched();
21671 + }
21672 + }
21673 +diff --git a/mm/khugepaged.c b/mm/khugepaged.c
21674 +index 2680d5ffee7f2..1259efcd94cac 100644
21675 +--- a/mm/khugepaged.c
21676 ++++ b/mm/khugepaged.c
21677 +@@ -442,9 +442,7 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
21678 + static bool hugepage_vma_check(struct vm_area_struct *vma,
21679 + unsigned long vm_flags)
21680 + {
21681 +- /* Explicitly disabled through madvise. */
21682 +- if ((vm_flags & VM_NOHUGEPAGE) ||
21683 +- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
21684 ++ if (!transhuge_vma_enabled(vma, vm_flags))
21685 + return false;
21686 +
21687 + /* Enabled via shmem mount options or sysfs settings. */
21688 +diff --git a/mm/memcontrol.c b/mm/memcontrol.c
21689 +index e876ba693998d..769b73151f05b 100644
21690 +--- a/mm/memcontrol.c
21691 ++++ b/mm/memcontrol.c
21692 +@@ -2906,6 +2906,13 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
21693 + }
21694 +
21695 + #ifdef CONFIG_MEMCG_KMEM
21696 ++/*
21697 ++ * The allocated objcg pointers array is not accounted directly.
21698 ++ * Moreover, it should not come from DMA buffer and is not readily
21699 ++ * reclaimable. So those GFP bits should be masked off.
21700 ++ */
21701 ++#define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
21702 ++
21703 + int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
21704 + gfp_t gfp, bool new_page)
21705 + {
21706 +@@ -2913,6 +2920,7 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
21707 + unsigned long memcg_data;
21708 + void *vec;
21709 +
21710 ++ gfp &= ~OBJCGS_CLEAR_MASK;
21711 + vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
21712 + page_to_nid(page));
21713 + if (!vec)
21714 +diff --git a/mm/memory.c b/mm/memory.c
21715 +index 36624986130be..e0073089bc9f8 100644
21716 +--- a/mm/memory.c
21717 ++++ b/mm/memory.c
21718 +@@ -3310,6 +3310,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
21719 + {
21720 + struct vm_area_struct *vma = vmf->vma;
21721 + struct page *page = NULL, *swapcache;
21722 ++ struct swap_info_struct *si = NULL;
21723 + swp_entry_t entry;
21724 + pte_t pte;
21725 + int locked;
21726 +@@ -3337,14 +3338,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
21727 + goto out;
21728 + }
21729 +
21730 ++ /* Prevent swapoff from happening to us. */
21731 ++ si = get_swap_device(entry);
21732 ++ if (unlikely(!si))
21733 ++ goto out;
21734 +
21735 + delayacct_set_flag(DELAYACCT_PF_SWAPIN);
21736 + page = lookup_swap_cache(entry, vma, vmf->address);
21737 + swapcache = page;
21738 +
21739 + if (!page) {
21740 +- struct swap_info_struct *si = swp_swap_info(entry);
21741 +-
21742 + if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
21743 + __swap_count(entry) == 1) {
21744 + /* skip swapcache */
21745 +@@ -3515,6 +3518,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
21746 + unlock:
21747 + pte_unmap_unlock(vmf->pte, vmf->ptl);
21748 + out:
21749 ++ if (si)
21750 ++ put_swap_device(si);
21751 + return ret;
21752 + out_nomap:
21753 + pte_unmap_unlock(vmf->pte, vmf->ptl);
21754 +@@ -3526,6 +3531,8 @@ out_release:
21755 + unlock_page(swapcache);
21756 + put_page(swapcache);
21757 + }
21758 ++ if (si)
21759 ++ put_swap_device(si);
21760 + return ret;
21761 + }
21762 +
21763 +diff --git a/mm/migrate.c b/mm/migrate.c
21764 +index 40455e753c5b4..3138600cf435f 100644
21765 +--- a/mm/migrate.c
21766 ++++ b/mm/migrate.c
21767 +@@ -1315,7 +1315,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
21768 + * page_mapping() set, hugetlbfs specific move page routine will not
21769 + * be called and we could leak usage counts for subpools.
21770 + */
21771 +- if (page_private(hpage) && !page_mapping(hpage)) {
21772 ++ if (hugetlb_page_subpool(hpage) && !page_mapping(hpage)) {
21773 + rc = -EBUSY;
21774 + goto out_unlock;
21775 + }
21776 +diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
21777 +index dcdde4f722a40..2ae3f33b85b16 100644
21778 +--- a/mm/mmap_lock.c
21779 ++++ b/mm/mmap_lock.c
21780 +@@ -11,6 +11,7 @@
21781 + #include <linux/rcupdate.h>
21782 + #include <linux/smp.h>
21783 + #include <linux/trace_events.h>
21784 ++#include <linux/local_lock.h>
21785 +
21786 + EXPORT_TRACEPOINT_SYMBOL(mmap_lock_start_locking);
21787 + EXPORT_TRACEPOINT_SYMBOL(mmap_lock_acquire_returned);
21788 +@@ -39,21 +40,30 @@ static int reg_refcount; /* Protected by reg_lock. */
21789 + */
21790 + #define CONTEXT_COUNT 4
21791 +
21792 +-static DEFINE_PER_CPU(char __rcu *, memcg_path_buf);
21793 ++struct memcg_path {
21794 ++ local_lock_t lock;
21795 ++ char __rcu *buf;
21796 ++ local_t buf_idx;
21797 ++};
21798 ++static DEFINE_PER_CPU(struct memcg_path, memcg_paths) = {
21799 ++ .lock = INIT_LOCAL_LOCK(lock),
21800 ++ .buf_idx = LOCAL_INIT(0),
21801 ++};
21802 ++
21803 + static char **tmp_bufs;
21804 +-static DEFINE_PER_CPU(int, memcg_path_buf_idx);
21805 +
21806 + /* Called with reg_lock held. */
21807 + static void free_memcg_path_bufs(void)
21808 + {
21809 ++ struct memcg_path *memcg_path;
21810 + int cpu;
21811 + char **old = tmp_bufs;
21812 +
21813 + for_each_possible_cpu(cpu) {
21814 +- *(old++) = rcu_dereference_protected(
21815 +- per_cpu(memcg_path_buf, cpu),
21816 ++ memcg_path = per_cpu_ptr(&memcg_paths, cpu);
21817 ++ *(old++) = rcu_dereference_protected(memcg_path->buf,
21818 + lockdep_is_held(&reg_lock));
21819 +- rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), NULL);
21820 ++ rcu_assign_pointer(memcg_path->buf, NULL);
21821 + }
21822 +
21823 + /* Wait for inflight memcg_path_buf users to finish. */
21824 +@@ -88,7 +98,7 @@ int trace_mmap_lock_reg(void)
21825 + new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
21826 + if (new == NULL)
21827 + goto out_fail_free;
21828 +- rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), new);
21829 ++ rcu_assign_pointer(per_cpu_ptr(&memcg_paths, cpu)->buf, new);
21830 + /* Don't need to wait for inflights, they'd have gotten NULL. */
21831 + }
21832 +
21833 +@@ -122,23 +132,24 @@ out:
21834 +
21835 + static inline char *get_memcg_path_buf(void)
21836 + {
21837 ++ struct memcg_path *memcg_path = this_cpu_ptr(&memcg_paths);
21838 + char *buf;
21839 + int idx;
21840 +
21841 + rcu_read_lock();
21842 +- buf = rcu_dereference(*this_cpu_ptr(&memcg_path_buf));
21843 ++ buf = rcu_dereference(memcg_path->buf);
21844 + if (buf == NULL) {
21845 + rcu_read_unlock();
21846 + return NULL;
21847 + }
21848 +- idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
21849 ++ idx = local_add_return(MEMCG_PATH_BUF_SIZE, &memcg_path->buf_idx) -
21850 + MEMCG_PATH_BUF_SIZE;
21851 + return &buf[idx];
21852 + }
21853 +
21854 + static inline void put_memcg_path_buf(void)
21855 + {
21856 +- this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
21857 ++ local_sub(MEMCG_PATH_BUF_SIZE, &this_cpu_ptr(&memcg_paths)->buf_idx);
21858 + rcu_read_unlock();
21859 + }
21860 +
21861 +@@ -179,14 +190,14 @@ out:
21862 + #define TRACE_MMAP_LOCK_EVENT(type, mm, ...) \
21863 + do { \
21864 + const char *memcg_path; \
21865 +- preempt_disable(); \
21866 ++ local_lock(&memcg_paths.lock); \
21867 + memcg_path = get_mm_memcg_path(mm); \
21868 + trace_mmap_lock_##type(mm, \
21869 + memcg_path != NULL ? memcg_path : "", \
21870 + ##__VA_ARGS__); \
21871 + if (likely(memcg_path != NULL)) \
21872 + put_memcg_path_buf(); \
21873 +- preempt_enable(); \
21874 ++ local_unlock(&memcg_paths.lock); \
21875 + } while (0)
21876 +
21877 + #else /* !CONFIG_MEMCG */
21878 +diff --git a/mm/page_alloc.c b/mm/page_alloc.c
21879 +index d9dbf45f7590e..382af53772747 100644
21880 +--- a/mm/page_alloc.c
21881 ++++ b/mm/page_alloc.c
21882 +@@ -6207,7 +6207,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
21883 + return;
21884 +
21885 + /*
21886 +- * The call to memmap_init_zone should have already taken care
21887 ++ * The call to memmap_init should have already taken care
21888 + * of the pages reserved for the memmap, so we can just jump to
21889 + * the end of that region and start processing the device pages.
21890 + */
21891 +@@ -6272,7 +6272,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
21892 + /*
21893 + * Only struct pages that correspond to ranges defined by memblock.memory
21894 + * are zeroed and initialized by going through __init_single_page() during
21895 +- * memmap_init_zone().
21896 ++ * memmap_init_zone_range().
21897 + *
21898 + * But, there could be struct pages that correspond to holes in
21899 + * memblock.memory. This can happen because of the following reasons:
21900 +@@ -6291,9 +6291,9 @@ static void __meminit zone_init_free_lists(struct zone *zone)
21901 + * zone/node above the hole except for the trailing pages in the last
21902 + * section that will be appended to the zone/node below.
21903 + */
21904 +-static u64 __meminit init_unavailable_range(unsigned long spfn,
21905 +- unsigned long epfn,
21906 +- int zone, int node)
21907 ++static void __init init_unavailable_range(unsigned long spfn,
21908 ++ unsigned long epfn,
21909 ++ int zone, int node)
21910 + {
21911 + unsigned long pfn;
21912 + u64 pgcnt = 0;
21913 +@@ -6309,56 +6309,77 @@ static u64 __meminit init_unavailable_range(unsigned long spfn,
21914 + pgcnt++;
21915 + }
21916 +
21917 +- return pgcnt;
21918 ++ if (pgcnt)
21919 ++ pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
21920 ++ node, zone_names[zone], pgcnt);
21921 + }
21922 + #else
21923 +-static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
21924 +- int zone, int node)
21925 ++static inline void init_unavailable_range(unsigned long spfn,
21926 ++ unsigned long epfn,
21927 ++ int zone, int node)
21928 + {
21929 +- return 0;
21930 + }
21931 + #endif
21932 +
21933 +-void __meminit __weak memmap_init_zone(struct zone *zone)
21934 ++static void __init memmap_init_zone_range(struct zone *zone,
21935 ++ unsigned long start_pfn,
21936 ++ unsigned long end_pfn,
21937 ++ unsigned long *hole_pfn)
21938 + {
21939 + unsigned long zone_start_pfn = zone->zone_start_pfn;
21940 + unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
21941 +- int i, nid = zone_to_nid(zone), zone_id = zone_idx(zone);
21942 +- static unsigned long hole_pfn;
21943 ++ int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
21944 ++
21945 ++ start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
21946 ++ end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
21947 ++
21948 ++ if (start_pfn >= end_pfn)
21949 ++ return;
21950 ++
21951 ++ memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
21952 ++ zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
21953 ++
21954 ++ if (*hole_pfn < start_pfn)
21955 ++ init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
21956 ++
21957 ++ *hole_pfn = end_pfn;
21958 ++}
21959 ++
21960 ++static void __init memmap_init(void)
21961 ++{
21962 + unsigned long start_pfn, end_pfn;
21963 +- u64 pgcnt = 0;
21964 ++ unsigned long hole_pfn = 0;
21965 ++ int i, j, zone_id, nid;
21966 +
21967 +- for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
21968 +- start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
21969 +- end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
21970 ++ for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
21971 ++ struct pglist_data *node = NODE_DATA(nid);
21972 ++
21973 ++ for (j = 0; j < MAX_NR_ZONES; j++) {
21974 ++ struct zone *zone = node->node_zones + j;
21975 +
21976 +- if (end_pfn > start_pfn)
21977 +- memmap_init_range(end_pfn - start_pfn, nid,
21978 +- zone_id, start_pfn, zone_end_pfn,
21979 +- MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
21980 ++ if (!populated_zone(zone))
21981 ++ continue;
21982 +
21983 +- if (hole_pfn < start_pfn)
21984 +- pgcnt += init_unavailable_range(hole_pfn, start_pfn,
21985 +- zone_id, nid);
21986 +- hole_pfn = end_pfn;
21987 ++ memmap_init_zone_range(zone, start_pfn, end_pfn,
21988 ++ &hole_pfn);
21989 ++ zone_id = j;
21990 ++ }
21991 + }
21992 +
21993 + #ifdef CONFIG_SPARSEMEM
21994 + /*
21995 +- * Initialize the hole in the range [zone_end_pfn, section_end].
21996 +- * If zone boundary falls in the middle of a section, this hole
21997 +- * will be re-initialized during the call to this function for the
21998 +- * higher zone.
21999 ++ * Initialize the memory map for hole in the range [memory_end,
22000 ++ * section_end].
22001 ++ * Append the pages in this hole to the highest zone in the last
22002 ++ * node.
22003 ++ * The call to init_unavailable_range() is outside the ifdef to
22004 ++ * silence the compiler warning about zone_id set but not used;
22005 ++ * for FLATMEM it is a nop anyway
22006 + */
22007 +- end_pfn = round_up(zone_end_pfn, PAGES_PER_SECTION);
22008 ++ end_pfn = round_up(end_pfn, PAGES_PER_SECTION);
22009 + if (hole_pfn < end_pfn)
22010 +- pgcnt += init_unavailable_range(hole_pfn, end_pfn,
22011 +- zone_id, nid);
22012 + #endif
22013 +-
22014 +- if (pgcnt)
22015 +- pr_info(" %s zone: %llu pages in unavailable ranges\n",
22016 +- zone->name, pgcnt);
22017 ++ init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
22018 + }
22019 +
22020 + static int zone_batchsize(struct zone *zone)
22021 +@@ -7061,7 +7082,6 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
22022 + set_pageblock_order();
22023 + setup_usemap(zone);
22024 + init_currently_empty_zone(zone, zone->zone_start_pfn, size);
22025 +- memmap_init_zone(zone);
22026 + }
22027 + }
22028 +
22029 +@@ -7587,6 +7607,8 @@ void __init free_area_init(unsigned long *max_zone_pfn)
22030 + node_set_state(nid, N_MEMORY);
22031 + check_for_memory(pgdat, nid);
22032 + }
22033 ++
22034 ++ memmap_init();
22035 + }
22036 +
22037 + static int __init cmdline_parse_core(char *p, unsigned long *core,
22038 +@@ -7872,14 +7894,14 @@ static void setup_per_zone_lowmem_reserve(void)
22039 + unsigned long managed_pages = 0;
22040 +
22041 + for (j = i + 1; j < MAX_NR_ZONES; j++) {
22042 +- if (clear) {
22043 +- zone->lowmem_reserve[j] = 0;
22044 +- } else {
22045 +- struct zone *upper_zone = &pgdat->node_zones[j];
22046 ++ struct zone *upper_zone = &pgdat->node_zones[j];
22047 +
22048 +- managed_pages += zone_managed_pages(upper_zone);
22049 ++ managed_pages += zone_managed_pages(upper_zone);
22050 ++
22051 ++ if (clear)
22052 ++ zone->lowmem_reserve[j] = 0;
22053 ++ else
22054 + zone->lowmem_reserve[j] = managed_pages / ratio;
22055 +- }
22056 + }
22057 + }
22058 + }
22059 +diff --git a/mm/shmem.c b/mm/shmem.c
22060 +index 6e99a4ad6e1f3..3c39116fd0711 100644
22061 +--- a/mm/shmem.c
22062 ++++ b/mm/shmem.c
22063 +@@ -1696,7 +1696,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
22064 + struct address_space *mapping = inode->i_mapping;
22065 + struct shmem_inode_info *info = SHMEM_I(inode);
22066 + struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
22067 +- struct page *page;
22068 ++ struct swap_info_struct *si;
22069 ++ struct page *page = NULL;
22070 + swp_entry_t swap;
22071 + int error;
22072 +
22073 +@@ -1704,6 +1705,12 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
22074 + swap = radix_to_swp_entry(*pagep);
22075 + *pagep = NULL;
22076 +
22077 ++ /* Prevent swapoff from happening to us. */
22078 ++ si = get_swap_device(swap);
22079 ++ if (!si) {
22080 ++ error = EINVAL;
22081 ++ goto failed;
22082 ++ }
22083 + /* Look it up and read it in.. */
22084 + page = lookup_swap_cache(swap, NULL, 0);
22085 + if (!page) {
22086 +@@ -1765,6 +1772,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
22087 + swap_free(swap);
22088 +
22089 + *pagep = page;
22090 ++ if (si)
22091 ++ put_swap_device(si);
22092 + return 0;
22093 + failed:
22094 + if (!shmem_confirm_swap(mapping, index, swap))
22095 +@@ -1775,6 +1784,9 @@ unlock:
22096 + put_page(page);
22097 + }
22098 +
22099 ++ if (si)
22100 ++ put_swap_device(si);
22101 ++
22102 + return error;
22103 + }
22104 +
22105 +@@ -4025,8 +4037,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
22106 + loff_t i_size;
22107 + pgoff_t off;
22108 +
22109 +- if ((vma->vm_flags & VM_NOHUGEPAGE) ||
22110 +- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
22111 ++ if (!transhuge_vma_enabled(vma, vma->vm_flags))
22112 + return false;
22113 + if (shmem_huge == SHMEM_HUGE_FORCE)
22114 + return true;
22115 +diff --git a/mm/slab.h b/mm/slab.h
22116 +index 076582f58f687..440133f93a53b 100644
22117 +--- a/mm/slab.h
22118 ++++ b/mm/slab.h
22119 +@@ -309,7 +309,6 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
22120 + if (!memcg_kmem_enabled() || !objcg)
22121 + return;
22122 +
22123 +- flags &= ~__GFP_ACCOUNT;
22124 + for (i = 0; i < size; i++) {
22125 + if (likely(p[i])) {
22126 + page = virt_to_head_page(p[i]);
22127 +diff --git a/mm/z3fold.c b/mm/z3fold.c
22128 +index 9d889ad2bb869..7c417fb8404aa 100644
22129 +--- a/mm/z3fold.c
22130 ++++ b/mm/z3fold.c
22131 +@@ -1059,6 +1059,7 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
22132 + destroy_workqueue(pool->compact_wq);
22133 + destroy_workqueue(pool->release_wq);
22134 + z3fold_unregister_migration(pool);
22135 ++ free_percpu(pool->unbuddied);
22136 + kfree(pool);
22137 + }
22138 +
22139 +@@ -1382,7 +1383,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
22140 + if (zhdr->foreign_handles ||
22141 + test_and_set_bit(PAGE_CLAIMED, &page->private)) {
22142 + if (kref_put(&zhdr->refcount,
22143 +- release_z3fold_page))
22144 ++ release_z3fold_page_locked))
22145 + atomic64_dec(&pool->pages_nr);
22146 + else
22147 + z3fold_page_unlock(zhdr);
22148 +diff --git a/mm/zswap.c b/mm/zswap.c
22149 +index 578d9f2569200..91f7439fde7fb 100644
22150 +--- a/mm/zswap.c
22151 ++++ b/mm/zswap.c
22152 +@@ -967,6 +967,13 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
22153 + spin_unlock(&tree->lock);
22154 + BUG_ON(offset != entry->offset);
22155 +
22156 ++ src = (u8 *)zhdr + sizeof(struct zswap_header);
22157 ++ if (!zpool_can_sleep_mapped(pool)) {
22158 ++ memcpy(tmp, src, entry->length);
22159 ++ src = tmp;
22160 ++ zpool_unmap_handle(pool, handle);
22161 ++ }
22162 ++
22163 + /* try to allocate swap cache page */
22164 + switch (zswap_get_swap_cache_page(swpentry, &page)) {
22165 + case ZSWAP_SWAPCACHE_FAIL: /* no memory or invalidate happened */
22166 +@@ -982,17 +989,7 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
22167 + case ZSWAP_SWAPCACHE_NEW: /* page is locked */
22168 + /* decompress */
22169 + acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
22170 +-
22171 + dlen = PAGE_SIZE;
22172 +- src = (u8 *)zhdr + sizeof(struct zswap_header);
22173 +-
22174 +- if (!zpool_can_sleep_mapped(pool)) {
22175 +-
22176 +- memcpy(tmp, src, entry->length);
22177 +- src = tmp;
22178 +-
22179 +- zpool_unmap_handle(pool, handle);
22180 +- }
22181 +
22182 + mutex_lock(acomp_ctx->mutex);
22183 + sg_init_one(&input, src, entry->length);
22184 +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
22185 +index 82f4973a011d9..c6f400b108d94 100644
22186 +--- a/net/bluetooth/hci_event.c
22187 ++++ b/net/bluetooth/hci_event.c
22188 +@@ -5271,8 +5271,19 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb)
22189 +
22190 + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
22191 +
22192 +- if (ev->status)
22193 ++ if (ev->status) {
22194 ++ struct adv_info *adv;
22195 ++
22196 ++ adv = hci_find_adv_instance(hdev, ev->handle);
22197 ++ if (!adv)
22198 ++ return;
22199 ++
22200 ++ /* Remove advertising as it has been terminated */
22201 ++ hci_remove_adv_instance(hdev, ev->handle);
22202 ++ mgmt_advertising_removed(NULL, hdev, ev->handle);
22203 ++
22204 + return;
22205 ++ }
22206 +
22207 + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->conn_handle));
22208 + if (conn) {
22209 +@@ -5416,7 +5427,7 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
22210 + struct hci_conn *conn;
22211 + bool match;
22212 + u32 flags;
22213 +- u8 *ptr, real_len;
22214 ++ u8 *ptr;
22215 +
22216 + switch (type) {
22217 + case LE_ADV_IND:
22218 +@@ -5447,14 +5458,10 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
22219 + break;
22220 + }
22221 +
22222 +- real_len = ptr - data;
22223 +-
22224 +- /* Adjust for actual length */
22225 +- if (len != real_len) {
22226 +- bt_dev_err_ratelimited(hdev, "advertising data len corrected %u -> %u",
22227 +- len, real_len);
22228 +- len = real_len;
22229 +- }
22230 ++ /* Adjust for actual length. This handles the case when remote
22231 ++ * device is advertising with incorrect data length.
22232 ++ */
22233 ++ len = ptr - data;
22234 +
22235 + /* If the direct address is present, then this report is from
22236 + * a LE Direct Advertising Report event. In that case it is
22237 +diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
22238 +index 805ce546b8133..e5d6b1d127645 100644
22239 +--- a/net/bluetooth/hci_request.c
22240 ++++ b/net/bluetooth/hci_request.c
22241 +@@ -1685,30 +1685,33 @@ void __hci_req_update_scan_rsp_data(struct hci_request *req, u8 instance)
22242 + return;
22243 +
22244 + if (ext_adv_capable(hdev)) {
22245 +- struct hci_cp_le_set_ext_scan_rsp_data cp;
22246 ++ struct {
22247 ++ struct hci_cp_le_set_ext_scan_rsp_data cp;
22248 ++ u8 data[HCI_MAX_EXT_AD_LENGTH];
22249 ++ } pdu;
22250 +
22251 +- memset(&cp, 0, sizeof(cp));
22252 ++ memset(&pdu, 0, sizeof(pdu));
22253 +
22254 + if (instance)
22255 + len = create_instance_scan_rsp_data(hdev, instance,
22256 +- cp.data);
22257 ++ pdu.data);
22258 + else
22259 +- len = create_default_scan_rsp_data(hdev, cp.data);
22260 ++ len = create_default_scan_rsp_data(hdev, pdu.data);
22261 +
22262 + if (hdev->scan_rsp_data_len == len &&
22263 +- !memcmp(cp.data, hdev->scan_rsp_data, len))
22264 ++ !memcmp(pdu.data, hdev->scan_rsp_data, len))
22265 + return;
22266 +
22267 +- memcpy(hdev->scan_rsp_data, cp.data, sizeof(cp.data));
22268 ++ memcpy(hdev->scan_rsp_data, pdu.data, len);
22269 + hdev->scan_rsp_data_len = len;
22270 +
22271 +- cp.handle = instance;
22272 +- cp.length = len;
22273 +- cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
22274 +- cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
22275 ++ pdu.cp.handle = instance;
22276 ++ pdu.cp.length = len;
22277 ++ pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
22278 ++ pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
22279 +
22280 +- hci_req_add(req, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA, sizeof(cp),
22281 +- &cp);
22282 ++ hci_req_add(req, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA,
22283 ++ sizeof(pdu.cp) + len, &pdu.cp);
22284 + } else {
22285 + struct hci_cp_le_set_scan_rsp_data cp;
22286 +
22287 +@@ -1831,26 +1834,30 @@ void __hci_req_update_adv_data(struct hci_request *req, u8 instance)
22288 + return;
22289 +
22290 + if (ext_adv_capable(hdev)) {
22291 +- struct hci_cp_le_set_ext_adv_data cp;
22292 ++ struct {
22293 ++ struct hci_cp_le_set_ext_adv_data cp;
22294 ++ u8 data[HCI_MAX_EXT_AD_LENGTH];
22295 ++ } pdu;
22296 +
22297 +- memset(&cp, 0, sizeof(cp));
22298 ++ memset(&pdu, 0, sizeof(pdu));
22299 +
22300 +- len = create_instance_adv_data(hdev, instance, cp.data);
22301 ++ len = create_instance_adv_data(hdev, instance, pdu.data);
22302 +
22303 + /* There's nothing to do if the data hasn't changed */
22304 + if (hdev->adv_data_len == len &&
22305 +- memcmp(cp.data, hdev->adv_data, len) == 0)
22306 ++ memcmp(pdu.data, hdev->adv_data, len) == 0)
22307 + return;
22308 +
22309 +- memcpy(hdev->adv_data, cp.data, sizeof(cp.data));
22310 ++ memcpy(hdev->adv_data, pdu.data, len);
22311 + hdev->adv_data_len = len;
22312 +
22313 +- cp.length = len;
22314 +- cp.handle = instance;
22315 +- cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
22316 +- cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
22317 ++ pdu.cp.length = len;
22318 ++ pdu.cp.handle = instance;
22319 ++ pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
22320 ++ pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
22321 +
22322 +- hci_req_add(req, HCI_OP_LE_SET_EXT_ADV_DATA, sizeof(cp), &cp);
22323 ++ hci_req_add(req, HCI_OP_LE_SET_EXT_ADV_DATA,
22324 ++ sizeof(pdu.cp) + len, &pdu.cp);
22325 + } else {
22326 + struct hci_cp_le_set_adv_data cp;
22327 +
22328 +diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
22329 +index 939c6f77fecc2..71de147f55584 100644
22330 +--- a/net/bluetooth/mgmt.c
22331 ++++ b/net/bluetooth/mgmt.c
22332 +@@ -7579,6 +7579,9 @@ static bool tlv_data_is_valid(struct hci_dev *hdev, u32 adv_flags, u8 *data,
22333 + for (i = 0, cur_len = 0; i < len; i += (cur_len + 1)) {
22334 + cur_len = data[i];
22335 +
22336 ++ if (!cur_len)
22337 ++ continue;
22338 ++
22339 + if (data[i + 1] == EIR_FLAGS &&
22340 + (!is_adv_data || flags_managed(adv_flags)))
22341 + return false;
22342 +diff --git a/net/bpfilter/main.c b/net/bpfilter/main.c
22343 +index 05e1cfc1e5cd1..291a925462463 100644
22344 +--- a/net/bpfilter/main.c
22345 ++++ b/net/bpfilter/main.c
22346 +@@ -57,7 +57,7 @@ int main(void)
22347 + {
22348 + debug_f = fopen("/dev/kmsg", "w");
22349 + setvbuf(debug_f, 0, _IOLBF, 0);
22350 +- fprintf(debug_f, "Started bpfilter\n");
22351 ++ fprintf(debug_f, "<5>Started bpfilter\n");
22352 + loop();
22353 + fclose(debug_f);
22354 + return 0;
22355 +diff --git a/net/can/bcm.c b/net/can/bcm.c
22356 +index f3e4d9528fa38..0928a39c4423b 100644
22357 +--- a/net/can/bcm.c
22358 ++++ b/net/can/bcm.c
22359 +@@ -785,6 +785,7 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
22360 + bcm_rx_handler, op);
22361 +
22362 + list_del(&op->list);
22363 ++ synchronize_rcu();
22364 + bcm_remove_op(op);
22365 + return 1; /* done */
22366 + }
22367 +@@ -1533,9 +1534,13 @@ static int bcm_release(struct socket *sock)
22368 + REGMASK(op->can_id),
22369 + bcm_rx_handler, op);
22370 +
22371 +- bcm_remove_op(op);
22372 + }
22373 +
22374 ++ synchronize_rcu();
22375 ++
22376 ++ list_for_each_entry_safe(op, next, &bo->rx_ops, list)
22377 ++ bcm_remove_op(op);
22378 ++
22379 + #if IS_ENABLED(CONFIG_PROC_FS)
22380 + /* remove procfs entry */
22381 + if (net->can.bcmproc_dir && bo->bcm_proc_read)
22382 +diff --git a/net/can/gw.c b/net/can/gw.c
22383 +index ba41248056029..d8861e862f157 100644
22384 +--- a/net/can/gw.c
22385 ++++ b/net/can/gw.c
22386 +@@ -596,6 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
22387 + if (gwj->src.dev == dev || gwj->dst.dev == dev) {
22388 + hlist_del(&gwj->list);
22389 + cgw_unregister_filter(net, gwj);
22390 ++ synchronize_rcu();
22391 + kmem_cache_free(cgw_cache, gwj);
22392 + }
22393 + }
22394 +@@ -1154,6 +1155,7 @@ static void cgw_remove_all_jobs(struct net *net)
22395 + hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
22396 + hlist_del(&gwj->list);
22397 + cgw_unregister_filter(net, gwj);
22398 ++ synchronize_rcu();
22399 + kmem_cache_free(cgw_cache, gwj);
22400 + }
22401 + }
22402 +@@ -1222,6 +1224,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
22403 +
22404 + hlist_del(&gwj->list);
22405 + cgw_unregister_filter(net, gwj);
22406 ++ synchronize_rcu();
22407 + kmem_cache_free(cgw_cache, gwj);
22408 + err = 0;
22409 + break;
22410 +diff --git a/net/can/isotp.c b/net/can/isotp.c
22411 +index be6183f8ca110..234cc4ad179a2 100644
22412 +--- a/net/can/isotp.c
22413 ++++ b/net/can/isotp.c
22414 +@@ -1028,9 +1028,6 @@ static int isotp_release(struct socket *sock)
22415 +
22416 + lock_sock(sk);
22417 +
22418 +- hrtimer_cancel(&so->txtimer);
22419 +- hrtimer_cancel(&so->rxtimer);
22420 +-
22421 + /* remove current filters & unregister */
22422 + if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST))) {
22423 + if (so->ifindex) {
22424 +@@ -1042,10 +1039,14 @@ static int isotp_release(struct socket *sock)
22425 + SINGLE_MASK(so->rxid),
22426 + isotp_rcv, sk);
22427 + dev_put(dev);
22428 ++ synchronize_rcu();
22429 + }
22430 + }
22431 + }
22432 +
22433 ++ hrtimer_cancel(&so->txtimer);
22434 ++ hrtimer_cancel(&so->rxtimer);
22435 ++
22436 + so->ifindex = 0;
22437 + so->bound = 0;
22438 +
22439 +diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
22440 +index da3a7a7bcff2b..08c8606cfd9c7 100644
22441 +--- a/net/can/j1939/main.c
22442 ++++ b/net/can/j1939/main.c
22443 +@@ -193,6 +193,10 @@ static void j1939_can_rx_unregister(struct j1939_priv *priv)
22444 + can_rx_unregister(dev_net(ndev), ndev, J1939_CAN_ID, J1939_CAN_MASK,
22445 + j1939_can_recv, priv);
22446 +
22447 ++ /* The last reference of priv is dropped by the RCU deferred
22448 ++ * j1939_sk_sock_destruct() of the last socket, so we can
22449 ++ * safely drop this reference here.
22450 ++ */
22451 + j1939_priv_put(priv);
22452 + }
22453 +
22454 +diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
22455 +index 56aa66147d5ac..e1a399821238f 100644
22456 +--- a/net/can/j1939/socket.c
22457 ++++ b/net/can/j1939/socket.c
22458 +@@ -398,6 +398,9 @@ static int j1939_sk_init(struct sock *sk)
22459 + atomic_set(&jsk->skb_pending, 0);
22460 + spin_lock_init(&jsk->sk_session_queue_lock);
22461 + INIT_LIST_HEAD(&jsk->sk_session_queue);
22462 ++
22463 ++ /* j1939_sk_sock_destruct() depends on SOCK_RCU_FREE flag */
22464 ++ sock_set_flag(sk, SOCK_RCU_FREE);
22465 + sk->sk_destruct = j1939_sk_sock_destruct;
22466 + sk->sk_protocol = CAN_J1939;
22467 +
22468 +@@ -673,7 +676,7 @@ static int j1939_sk_setsockopt(struct socket *sock, int level, int optname,
22469 +
22470 + switch (optname) {
22471 + case SO_J1939_FILTER:
22472 +- if (!sockptr_is_null(optval)) {
22473 ++ if (!sockptr_is_null(optval) && optlen != 0) {
22474 + struct j1939_filter *f;
22475 + int c;
22476 +
22477 +diff --git a/net/core/filter.c b/net/core/filter.c
22478 +index 52f4359efbd2b..0d1273d40fcf3 100644
22479 +--- a/net/core/filter.c
22480 ++++ b/net/core/filter.c
22481 +@@ -3266,8 +3266,6 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
22482 + shinfo->gso_type |= SKB_GSO_TCPV6;
22483 + }
22484 +
22485 +- /* Due to IPv6 header, MSS needs to be downgraded. */
22486 +- skb_decrease_gso_size(shinfo, len_diff);
22487 + /* Header must be checked, and gso_segs recomputed. */
22488 + shinfo->gso_type |= SKB_GSO_DODGY;
22489 + shinfo->gso_segs = 0;
22490 +@@ -3307,8 +3305,6 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
22491 + shinfo->gso_type |= SKB_GSO_TCPV4;
22492 + }
22493 +
22494 +- /* Due to IPv4 header, MSS can be upgraded. */
22495 +- skb_increase_gso_size(shinfo, len_diff);
22496 + /* Header must be checked, and gso_segs recomputed. */
22497 + shinfo->gso_type |= SKB_GSO_DODGY;
22498 + shinfo->gso_segs = 0;
22499 +diff --git a/net/core/sock_map.c b/net/core/sock_map.c
22500 +index d758fb83c8841..ae62e6f96a95c 100644
22501 +--- a/net/core/sock_map.c
22502 ++++ b/net/core/sock_map.c
22503 +@@ -44,7 +44,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
22504 + bpf_map_init_from_attr(&stab->map, attr);
22505 + raw_spin_lock_init(&stab->lock);
22506 +
22507 +- stab->sks = bpf_map_area_alloc(stab->map.max_entries *
22508 ++ stab->sks = bpf_map_area_alloc((u64) stab->map.max_entries *
22509 + sizeof(struct sock *),
22510 + stab->map.numa_node);
22511 + if (!stab->sks) {
22512 +diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
22513 +index 4b834bbf95e07..ed9857b2875dc 100644
22514 +--- a/net/ipv4/esp4.c
22515 ++++ b/net/ipv4/esp4.c
22516 +@@ -673,7 +673,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
22517 + struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
22518 + u32 padto;
22519 +
22520 +- padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
22521 ++ padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
22522 + if (skb->len < padto)
22523 + esp.tfclen = padto - skb->len;
22524 + }
22525 +diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
22526 +index 84bb707bd88d8..647bceab56c2d 100644
22527 +--- a/net/ipv4/fib_frontend.c
22528 ++++ b/net/ipv4/fib_frontend.c
22529 +@@ -371,6 +371,8 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
22530 + fl4.flowi4_proto = 0;
22531 + fl4.fl4_sport = 0;
22532 + fl4.fl4_dport = 0;
22533 ++ } else {
22534 ++ swap(fl4.fl4_sport, fl4.fl4_dport);
22535 + }
22536 +
22537 + if (fib_lookup(net, &fl4, &res, 0))
22538 +diff --git a/net/ipv4/route.c b/net/ipv4/route.c
22539 +index 09506203156d1..484064daa95a4 100644
22540 +--- a/net/ipv4/route.c
22541 ++++ b/net/ipv4/route.c
22542 +@@ -1331,7 +1331,7 @@ INDIRECT_CALLABLE_SCOPE unsigned int ipv4_mtu(const struct dst_entry *dst)
22543 + mtu = dst_metric_raw(dst, RTAX_MTU);
22544 +
22545 + if (mtu)
22546 +- return mtu;
22547 ++ goto out;
22548 +
22549 + mtu = READ_ONCE(dst->dev->mtu);
22550 +
22551 +@@ -1340,6 +1340,7 @@ INDIRECT_CALLABLE_SCOPE unsigned int ipv4_mtu(const struct dst_entry *dst)
22552 + mtu = 576;
22553 + }
22554 +
22555 ++out:
22556 + mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
22557 +
22558 + return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
22559 +diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
22560 +index 727d791ed5e67..9d1327b36bd3b 100644
22561 +--- a/net/ipv6/esp6.c
22562 ++++ b/net/ipv6/esp6.c
22563 +@@ -708,7 +708,7 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
22564 + struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
22565 + u32 padto;
22566 +
22567 +- padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
22568 ++ padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
22569 + if (skb->len < padto)
22570 + esp.tfclen = padto - skb->len;
22571 + }
22572 +diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
22573 +index 6126f8bf94b39..6ffa05298cc0e 100644
22574 +--- a/net/ipv6/exthdrs.c
22575 ++++ b/net/ipv6/exthdrs.c
22576 +@@ -135,18 +135,23 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
22577 + len -= 2;
22578 +
22579 + while (len > 0) {
22580 +- int optlen = nh[off + 1] + 2;
22581 +- int i;
22582 ++ int optlen, i;
22583 +
22584 +- switch (nh[off]) {
22585 +- case IPV6_TLV_PAD1:
22586 +- optlen = 1;
22587 ++ if (nh[off] == IPV6_TLV_PAD1) {
22588 + padlen++;
22589 + if (padlen > 7)
22590 + goto bad;
22591 +- break;
22592 ++ off++;
22593 ++ len--;
22594 ++ continue;
22595 ++ }
22596 ++ if (len < 2)
22597 ++ goto bad;
22598 ++ optlen = nh[off + 1] + 2;
22599 ++ if (optlen > len)
22600 ++ goto bad;
22601 +
22602 +- case IPV6_TLV_PADN:
22603 ++ if (nh[off] == IPV6_TLV_PADN) {
22604 + /* RFC 2460 states that the purpose of PadN is
22605 + * to align the containing header to multiples
22606 + * of 8. 7 is therefore the highest valid value.
22607 +@@ -163,12 +168,7 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
22608 + if (nh[off + i] != 0)
22609 + goto bad;
22610 + }
22611 +- break;
22612 +-
22613 +- default: /* Other TLV code so scan list */
22614 +- if (optlen > len)
22615 +- goto bad;
22616 +-
22617 ++ } else {
22618 + tlv_count++;
22619 + if (tlv_count > max_count)
22620 + goto bad;
22621 +@@ -188,7 +188,6 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
22622 + return false;
22623 +
22624 + padlen = 0;
22625 +- break;
22626 + }
22627 + off += optlen;
22628 + len -= optlen;
22629 +@@ -306,7 +305,7 @@ fail_and_free:
22630 + #endif
22631 +
22632 + if (ip6_parse_tlv(tlvprocdestopt_lst, skb,
22633 +- init_net.ipv6.sysctl.max_dst_opts_cnt)) {
22634 ++ net->ipv6.sysctl.max_dst_opts_cnt)) {
22635 + skb->transport_header += extlen;
22636 + opt = IP6CB(skb);
22637 + #if IS_ENABLED(CONFIG_IPV6_MIP6)
22638 +@@ -1036,7 +1035,7 @@ fail_and_free:
22639 +
22640 + opt->flags |= IP6SKB_HOPBYHOP;
22641 + if (ip6_parse_tlv(tlvprochopopt_lst, skb,
22642 +- init_net.ipv6.sysctl.max_hbh_opts_cnt)) {
22643 ++ net->ipv6.sysctl.max_hbh_opts_cnt)) {
22644 + skb->transport_header += extlen;
22645 + opt = IP6CB(skb);
22646 + opt->nhoff = sizeof(struct ipv6hdr);
22647 +diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
22648 +index d42f471b0d655..adbc9bf65d30e 100644
22649 +--- a/net/ipv6/ip6_tunnel.c
22650 ++++ b/net/ipv6/ip6_tunnel.c
22651 +@@ -1239,8 +1239,6 @@ route_lookup:
22652 + if (max_headroom > dev->needed_headroom)
22653 + dev->needed_headroom = max_headroom;
22654 +
22655 +- skb_set_inner_ipproto(skb, proto);
22656 +-
22657 + err = ip6_tnl_encap(skb, t, &proto, fl6);
22658 + if (err)
22659 + return err;
22660 +@@ -1377,6 +1375,8 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
22661 + if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
22662 + return -1;
22663 +
22664 ++ skb_set_inner_ipproto(skb, protocol);
22665 ++
22666 + err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
22667 + protocol);
22668 + if (err != 0) {
22669 +diff --git a/net/mac80211/he.c b/net/mac80211/he.c
22670 +index 0c0b970835ceb..a87421c8637d6 100644
22671 +--- a/net/mac80211/he.c
22672 ++++ b/net/mac80211/he.c
22673 +@@ -111,7 +111,7 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
22674 + struct sta_info *sta)
22675 + {
22676 + struct ieee80211_sta_he_cap *he_cap = &sta->sta.he_cap;
22677 +- struct ieee80211_sta_he_cap own_he_cap = sband->iftype_data->he_cap;
22678 ++ struct ieee80211_sta_he_cap own_he_cap;
22679 + struct ieee80211_he_cap_elem *he_cap_ie_elem = (void *)he_cap_ie;
22680 + u8 he_ppe_size;
22681 + u8 mcs_nss_size;
22682 +@@ -123,6 +123,8 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
22683 + if (!he_cap_ie || !ieee80211_get_he_sta_cap(sband))
22684 + return;
22685 +
22686 ++ own_he_cap = sband->iftype_data->he_cap;
22687 ++
22688 + /* Make sure size is OK */
22689 + mcs_nss_size = ieee80211_he_mcs_nss_size(he_cap_ie_elem);
22690 + he_ppe_size =
22691 +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
22692 +index 437d88822d8f8..3d915b9752a88 100644
22693 +--- a/net/mac80211/mlme.c
22694 ++++ b/net/mac80211/mlme.c
22695 +@@ -1094,11 +1094,6 @@ void ieee80211_send_nullfunc(struct ieee80211_local *local,
22696 + struct ieee80211_hdr_3addr *nullfunc;
22697 + struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
22698 +
22699 +- /* Don't send NDPs when STA is connected HE */
22700 +- if (sdata->vif.type == NL80211_IFTYPE_STATION &&
22701 +- !(ifmgd->flags & IEEE80211_STA_DISABLE_HE))
22702 +- return;
22703 +-
22704 + skb = ieee80211_nullfunc_get(&local->hw, &sdata->vif,
22705 + !ieee80211_hw_check(&local->hw, DOESNT_SUPPORT_QOS_NDP));
22706 + if (!skb)
22707 +@@ -1130,10 +1125,6 @@ static void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
22708 + if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_STATION))
22709 + return;
22710 +
22711 +- /* Don't send NDPs when connected HE */
22712 +- if (!(sdata->u.mgd.flags & IEEE80211_STA_DISABLE_HE))
22713 +- return;
22714 +-
22715 + skb = dev_alloc_skb(local->hw.extra_tx_headroom + 30);
22716 + if (!skb)
22717 + return;
22718 +diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
22719 +index f2fb69da9b6e1..13250cadb4202 100644
22720 +--- a/net/mac80211/sta_info.c
22721 ++++ b/net/mac80211/sta_info.c
22722 +@@ -1398,11 +1398,6 @@ static void ieee80211_send_null_response(struct sta_info *sta, int tid,
22723 + struct ieee80211_tx_info *info;
22724 + struct ieee80211_chanctx_conf *chanctx_conf;
22725 +
22726 +- /* Don't send NDPs when STA is connected HE */
22727 +- if (sdata->vif.type == NL80211_IFTYPE_STATION &&
22728 +- !(sdata->u.mgd.flags & IEEE80211_STA_DISABLE_HE))
22729 +- return;
22730 +-
22731 + if (qos) {
22732 + fc = cpu_to_le16(IEEE80211_FTYPE_DATA |
22733 + IEEE80211_STYPE_QOS_NULLFUNC |
22734 +diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
22735 +index d6d8ad4f918e7..189139b8d401c 100644
22736 +--- a/net/mptcp/subflow.c
22737 ++++ b/net/mptcp/subflow.c
22738 +@@ -409,15 +409,15 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
22739 + goto do_reset;
22740 + }
22741 +
22742 ++ if (!mptcp_finish_join(sk))
22743 ++ goto do_reset;
22744 ++
22745 + subflow_generate_hmac(subflow->local_key, subflow->remote_key,
22746 + subflow->local_nonce,
22747 + subflow->remote_nonce,
22748 + hmac);
22749 + memcpy(subflow->hmac, hmac, MPTCPOPT_HMAC_LEN);
22750 +
22751 +- if (!mptcp_finish_join(sk))
22752 +- goto do_reset;
22753 +-
22754 + subflow->mp_join = 1;
22755 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX);
22756 +
22757 +diff --git a/net/mptcp/token.c b/net/mptcp/token.c
22758 +index feb4b9ffd4625..0691a4883f3ab 100644
22759 +--- a/net/mptcp/token.c
22760 ++++ b/net/mptcp/token.c
22761 +@@ -156,9 +156,6 @@ int mptcp_token_new_connect(struct sock *sk)
22762 + int retries = TOKEN_MAX_RETRIES;
22763 + struct token_bucket *bucket;
22764 +
22765 +- pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
22766 +- sk, subflow->local_key, subflow->token, subflow->idsn);
22767 +-
22768 + again:
22769 + mptcp_crypto_key_gen_sha(&subflow->local_key, &subflow->token,
22770 + &subflow->idsn);
22771 +@@ -172,6 +169,9 @@ again:
22772 + goto again;
22773 + }
22774 +
22775 ++ pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
22776 ++ sk, subflow->local_key, subflow->token, subflow->idsn);
22777 ++
22778 + WRITE_ONCE(msk->token, subflow->token);
22779 + __sk_nulls_add_node_rcu((struct sock *)msk, &bucket->msk_chain);
22780 + bucket->chain_len++;
22781 +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
22782 +index 9d5ea23529657..6b79fa357bfea 100644
22783 +--- a/net/netfilter/nf_tables_api.c
22784 ++++ b/net/netfilter/nf_tables_api.c
22785 +@@ -521,7 +521,7 @@ static struct nft_table *nft_table_lookup(const struct net *net,
22786 + table->family == family &&
22787 + nft_active_genmask(table, genmask)) {
22788 + if (nft_table_has_owner(table) &&
22789 +- table->nlpid != nlpid)
22790 ++ nlpid && table->nlpid != nlpid)
22791 + return ERR_PTR(-EPERM);
22792 +
22793 + return table;
22794 +@@ -533,14 +533,19 @@ static struct nft_table *nft_table_lookup(const struct net *net,
22795 +
22796 + static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
22797 + const struct nlattr *nla,
22798 +- u8 genmask)
22799 ++ u8 genmask, u32 nlpid)
22800 + {
22801 + struct nft_table *table;
22802 +
22803 + list_for_each_entry(table, &net->nft.tables, list) {
22804 + if (be64_to_cpu(nla_get_be64(nla)) == table->handle &&
22805 +- nft_active_genmask(table, genmask))
22806 ++ nft_active_genmask(table, genmask)) {
22807 ++ if (nft_table_has_owner(table) &&
22808 ++ nlpid && table->nlpid != nlpid)
22809 ++ return ERR_PTR(-EPERM);
22810 ++
22811 + return table;
22812 ++ }
22813 + }
22814 +
22815 + return ERR_PTR(-ENOENT);
22816 +@@ -1213,7 +1218,8 @@ static int nf_tables_deltable(struct net *net, struct sock *nlsk,
22817 +
22818 + if (nla[NFTA_TABLE_HANDLE]) {
22819 + attr = nla[NFTA_TABLE_HANDLE];
22820 +- table = nft_table_lookup_byhandle(net, attr, genmask);
22821 ++ table = nft_table_lookup_byhandle(net, attr, genmask,
22822 ++ NETLINK_CB(skb).portid);
22823 + } else {
22824 + attr = nla[NFTA_TABLE_NAME];
22825 + table = nft_table_lookup(net, attr, family, genmask,
22826 +diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
22827 +index 2b00f7f47693b..9ce776175214c 100644
22828 +--- a/net/netfilter/nf_tables_offload.c
22829 ++++ b/net/netfilter/nf_tables_offload.c
22830 +@@ -54,15 +54,10 @@ static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
22831 + struct nft_flow_rule *flow)
22832 + {
22833 + struct nft_flow_match *match = &flow->match;
22834 +- struct nft_offload_ethertype ethertype;
22835 +-
22836 +- if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL) &&
22837 +- match->key.basic.n_proto != htons(ETH_P_8021Q) &&
22838 +- match->key.basic.n_proto != htons(ETH_P_8021AD))
22839 +- return;
22840 +-
22841 +- ethertype.value = match->key.basic.n_proto;
22842 +- ethertype.mask = match->mask.basic.n_proto;
22843 ++ struct nft_offload_ethertype ethertype = {
22844 ++ .value = match->key.basic.n_proto,
22845 ++ .mask = match->mask.basic.n_proto,
22846 ++ };
22847 +
22848 + if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_VLAN) &&
22849 + (match->key.vlan.vlan_tpid == htons(ETH_P_8021Q) ||
22850 +@@ -76,7 +71,9 @@ static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
22851 + match->dissector.offset[FLOW_DISSECTOR_KEY_CVLAN] =
22852 + offsetof(struct nft_flow_key, cvlan);
22853 + match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
22854 +- } else {
22855 ++ } else if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_BASIC) &&
22856 ++ (match->key.basic.n_proto == htons(ETH_P_8021Q) ||
22857 ++ match->key.basic.n_proto == htons(ETH_P_8021AD))) {
22858 + match->key.basic.n_proto = match->key.vlan.vlan_tpid;
22859 + match->mask.basic.n_proto = match->mask.vlan.vlan_tpid;
22860 + match->key.vlan.vlan_tpid = ethertype.value;
22861 +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
22862 +index f64f0017e9a53..670dd146fb2b1 100644
22863 +--- a/net/netfilter/nft_exthdr.c
22864 ++++ b/net/netfilter/nft_exthdr.c
22865 +@@ -42,6 +42,9 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
22866 + unsigned int offset = 0;
22867 + int err;
22868 +
22869 ++ if (pkt->skb->protocol != htons(ETH_P_IPV6))
22870 ++ goto err;
22871 ++
22872 + err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL);
22873 + if (priv->flags & NFT_EXTHDR_F_PRESENT) {
22874 + nft_reg_store8(dest, err >= 0);
22875 +diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c
22876 +index ac61f708b82d2..d82677e83400b 100644
22877 +--- a/net/netfilter/nft_osf.c
22878 ++++ b/net/netfilter/nft_osf.c
22879 +@@ -28,6 +28,11 @@ static void nft_osf_eval(const struct nft_expr *expr, struct nft_regs *regs,
22880 + struct nf_osf_data data;
22881 + struct tcphdr _tcph;
22882 +
22883 ++ if (pkt->tprot != IPPROTO_TCP) {
22884 ++ regs->verdict.code = NFT_BREAK;
22885 ++ return;
22886 ++ }
22887 ++
22888 + tcp = skb_header_pointer(skb, ip_hdrlen(skb),
22889 + sizeof(struct tcphdr), &_tcph);
22890 + if (!tcp) {
22891 +diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
22892 +index 43a5a780a6d3b..37c728bdad41c 100644
22893 +--- a/net/netfilter/nft_tproxy.c
22894 ++++ b/net/netfilter/nft_tproxy.c
22895 +@@ -30,6 +30,12 @@ static void nft_tproxy_eval_v4(const struct nft_expr *expr,
22896 + __be16 tport = 0;
22897 + struct sock *sk;
22898 +
22899 ++ if (pkt->tprot != IPPROTO_TCP &&
22900 ++ pkt->tprot != IPPROTO_UDP) {
22901 ++ regs->verdict.code = NFT_BREAK;
22902 ++ return;
22903 ++ }
22904 ++
22905 + hp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_hdr), &_hdr);
22906 + if (!hp) {
22907 + regs->verdict.code = NFT_BREAK;
22908 +@@ -91,7 +97,8 @@ static void nft_tproxy_eval_v6(const struct nft_expr *expr,
22909 +
22910 + memset(&taddr, 0, sizeof(taddr));
22911 +
22912 +- if (!pkt->tprot_set) {
22913 ++ if (pkt->tprot != IPPROTO_TCP &&
22914 ++ pkt->tprot != IPPROTO_UDP) {
22915 + regs->verdict.code = NFT_BREAK;
22916 + return;
22917 + }
22918 +diff --git a/net/netlabel/netlabel_mgmt.c b/net/netlabel/netlabel_mgmt.c
22919 +index df1b41ed73fd9..19e4fffccf783 100644
22920 +--- a/net/netlabel/netlabel_mgmt.c
22921 ++++ b/net/netlabel/netlabel_mgmt.c
22922 +@@ -76,6 +76,7 @@ static const struct nla_policy netlbl_mgmt_genl_policy[NLBL_MGMT_A_MAX + 1] = {
22923 + static int netlbl_mgmt_add_common(struct genl_info *info,
22924 + struct netlbl_audit *audit_info)
22925 + {
22926 ++ void *pmap = NULL;
22927 + int ret_val = -EINVAL;
22928 + struct netlbl_domaddr_map *addrmap = NULL;
22929 + struct cipso_v4_doi *cipsov4 = NULL;
22930 +@@ -175,6 +176,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
22931 + ret_val = -ENOMEM;
22932 + goto add_free_addrmap;
22933 + }
22934 ++ pmap = map;
22935 + map->list.addr = addr->s_addr & mask->s_addr;
22936 + map->list.mask = mask->s_addr;
22937 + map->list.valid = 1;
22938 +@@ -183,10 +185,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
22939 + map->def.cipso = cipsov4;
22940 +
22941 + ret_val = netlbl_af4list_add(&map->list, &addrmap->list4);
22942 +- if (ret_val != 0) {
22943 +- kfree(map);
22944 +- goto add_free_addrmap;
22945 +- }
22946 ++ if (ret_val != 0)
22947 ++ goto add_free_map;
22948 +
22949 + entry->family = AF_INET;
22950 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
22951 +@@ -223,6 +223,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
22952 + ret_val = -ENOMEM;
22953 + goto add_free_addrmap;
22954 + }
22955 ++ pmap = map;
22956 + map->list.addr = *addr;
22957 + map->list.addr.s6_addr32[0] &= mask->s6_addr32[0];
22958 + map->list.addr.s6_addr32[1] &= mask->s6_addr32[1];
22959 +@@ -235,10 +236,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
22960 + map->def.calipso = calipso;
22961 +
22962 + ret_val = netlbl_af6list_add(&map->list, &addrmap->list6);
22963 +- if (ret_val != 0) {
22964 +- kfree(map);
22965 +- goto add_free_addrmap;
22966 +- }
22967 ++ if (ret_val != 0)
22968 ++ goto add_free_map;
22969 +
22970 + entry->family = AF_INET6;
22971 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
22972 +@@ -248,10 +247,12 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
22973 +
22974 + ret_val = netlbl_domhsh_add(entry, audit_info);
22975 + if (ret_val != 0)
22976 +- goto add_free_addrmap;
22977 ++ goto add_free_map;
22978 +
22979 + return 0;
22980 +
22981 ++add_free_map:
22982 ++ kfree(pmap);
22983 + add_free_addrmap:
22984 + kfree(addrmap);
22985 + add_doi_put_def:
22986 +diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
22987 +index 8d00dfe8139e8..1990d496fcfc0 100644
22988 +--- a/net/qrtr/ns.c
22989 ++++ b/net/qrtr/ns.c
22990 +@@ -775,8 +775,10 @@ int qrtr_ns_init(void)
22991 + }
22992 +
22993 + qrtr_ns.workqueue = alloc_workqueue("qrtr_ns_handler", WQ_UNBOUND, 1);
22994 +- if (!qrtr_ns.workqueue)
22995 ++ if (!qrtr_ns.workqueue) {
22996 ++ ret = -ENOMEM;
22997 + goto err_sock;
22998 ++ }
22999 +
23000 + qrtr_ns.sock->sk->sk_data_ready = qrtr_ns_data_ready;
23001 +
23002 +diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c
23003 +index 1cac3c6fbb49c..a108469c664f7 100644
23004 +--- a/net/sched/act_vlan.c
23005 ++++ b/net/sched/act_vlan.c
23006 +@@ -70,7 +70,7 @@ static int tcf_vlan_act(struct sk_buff *skb, const struct tc_action *a,
23007 + /* replace the vid */
23008 + tci = (tci & ~VLAN_VID_MASK) | p->tcfv_push_vid;
23009 + /* replace prio bits, if tcfv_push_prio specified */
23010 +- if (p->tcfv_push_prio) {
23011 ++ if (p->tcfv_push_prio_exists) {
23012 + tci &= ~VLAN_PRIO_MASK;
23013 + tci |= p->tcfv_push_prio << VLAN_PRIO_SHIFT;
23014 + }
23015 +@@ -121,6 +121,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
23016 + struct tc_action_net *tn = net_generic(net, vlan_net_id);
23017 + struct nlattr *tb[TCA_VLAN_MAX + 1];
23018 + struct tcf_chain *goto_ch = NULL;
23019 ++ bool push_prio_exists = false;
23020 + struct tcf_vlan_params *p;
23021 + struct tc_vlan *parm;
23022 + struct tcf_vlan *v;
23023 +@@ -189,7 +190,8 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
23024 + push_proto = htons(ETH_P_8021Q);
23025 + }
23026 +
23027 +- if (tb[TCA_VLAN_PUSH_VLAN_PRIORITY])
23028 ++ push_prio_exists = !!tb[TCA_VLAN_PUSH_VLAN_PRIORITY];
23029 ++ if (push_prio_exists)
23030 + push_prio = nla_get_u8(tb[TCA_VLAN_PUSH_VLAN_PRIORITY]);
23031 + break;
23032 + case TCA_VLAN_ACT_POP_ETH:
23033 +@@ -241,6 +243,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
23034 + p->tcfv_action = action;
23035 + p->tcfv_push_vid = push_vid;
23036 + p->tcfv_push_prio = push_prio;
23037 ++ p->tcfv_push_prio_exists = push_prio_exists || action == TCA_VLAN_ACT_PUSH;
23038 + p->tcfv_push_proto = push_proto;
23039 +
23040 + if (action == TCA_VLAN_ACT_PUSH_ETH) {
23041 +diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
23042 +index c4007b9cd16d6..5b274534264c2 100644
23043 +--- a/net/sched/cls_tcindex.c
23044 ++++ b/net/sched/cls_tcindex.c
23045 +@@ -304,7 +304,7 @@ static int tcindex_alloc_perfect_hash(struct net *net, struct tcindex_data *cp)
23046 + int i, err = 0;
23047 +
23048 + cp->perfect = kcalloc(cp->hash, sizeof(struct tcindex_filter_result),
23049 +- GFP_KERNEL);
23050 ++ GFP_KERNEL | __GFP_NOWARN);
23051 + if (!cp->perfect)
23052 + return -ENOMEM;
23053 +
23054 +diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
23055 +index 1db9d4a2ef5ef..b692a0de1ad5e 100644
23056 +--- a/net/sched/sch_qfq.c
23057 ++++ b/net/sched/sch_qfq.c
23058 +@@ -485,11 +485,6 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
23059 +
23060 + if (cl->qdisc != &noop_qdisc)
23061 + qdisc_hash_add(cl->qdisc, true);
23062 +- sch_tree_lock(sch);
23063 +- qdisc_class_hash_insert(&q->clhash, &cl->common);
23064 +- sch_tree_unlock(sch);
23065 +-
23066 +- qdisc_class_hash_grow(sch, &q->clhash);
23067 +
23068 + set_change_agg:
23069 + sch_tree_lock(sch);
23070 +@@ -507,8 +502,11 @@ set_change_agg:
23071 + }
23072 + if (existing)
23073 + qfq_deact_rm_from_agg(q, cl);
23074 ++ else
23075 ++ qdisc_class_hash_insert(&q->clhash, &cl->common);
23076 + qfq_add_to_agg(q, new_agg, cl);
23077 + sch_tree_unlock(sch);
23078 ++ qdisc_class_hash_grow(sch, &q->clhash);
23079 +
23080 + *arg = (unsigned long)cl;
23081 + return 0;
23082 +diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
23083 +index 39ed0e0afe6d9..c045f63d11fa6 100644
23084 +--- a/net/sunrpc/sched.c
23085 ++++ b/net/sunrpc/sched.c
23086 +@@ -591,11 +591,21 @@ static struct rpc_task *__rpc_find_next_queued_priority(struct rpc_wait_queue *q
23087 + struct list_head *q;
23088 + struct rpc_task *task;
23089 +
23090 ++ /*
23091 ++ * Service the privileged queue.
23092 ++ */
23093 ++ q = &queue->tasks[RPC_NR_PRIORITY - 1];
23094 ++ if (queue->maxpriority > RPC_PRIORITY_PRIVILEGED && !list_empty(q)) {
23095 ++ task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
23096 ++ goto out;
23097 ++ }
23098 ++
23099 + /*
23100 + * Service a batch of tasks from a single owner.
23101 + */
23102 + q = &queue->tasks[queue->priority];
23103 +- if (!list_empty(q) && --queue->nr) {
23104 ++ if (!list_empty(q) && queue->nr) {
23105 ++ queue->nr--;
23106 + task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
23107 + goto out;
23108 + }
23109 +diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
23110 +index d4beca895992d..593846d252143 100644
23111 +--- a/net/tipc/bcast.c
23112 ++++ b/net/tipc/bcast.c
23113 +@@ -699,7 +699,7 @@ int tipc_bcast_init(struct net *net)
23114 + spin_lock_init(&tipc_net(net)->bclock);
23115 +
23116 + if (!tipc_link_bc_create(net, 0, 0, NULL,
23117 +- FB_MTU,
23118 ++ one_page_mtu,
23119 + BCLINK_WIN_DEFAULT,
23120 + BCLINK_WIN_DEFAULT,
23121 + 0,
23122 +diff --git a/net/tipc/msg.c b/net/tipc/msg.c
23123 +index d0fc5fadbc680..b7943da9d0950 100644
23124 +--- a/net/tipc/msg.c
23125 ++++ b/net/tipc/msg.c
23126 +@@ -44,12 +44,15 @@
23127 + #define MAX_FORWARD_SIZE 1024
23128 + #ifdef CONFIG_TIPC_CRYPTO
23129 + #define BUF_HEADROOM ALIGN(((LL_MAX_HEADER + 48) + EHDR_MAX_SIZE), 16)
23130 +-#define BUF_TAILROOM (TIPC_AES_GCM_TAG_SIZE)
23131 ++#define BUF_OVERHEAD (BUF_HEADROOM + TIPC_AES_GCM_TAG_SIZE)
23132 + #else
23133 + #define BUF_HEADROOM (LL_MAX_HEADER + 48)
23134 +-#define BUF_TAILROOM 16
23135 ++#define BUF_OVERHEAD BUF_HEADROOM
23136 + #endif
23137 +
23138 ++const int one_page_mtu = PAGE_SIZE - SKB_DATA_ALIGN(BUF_OVERHEAD) -
23139 ++ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
23140 ++
23141 + static unsigned int align(unsigned int i)
23142 + {
23143 + return (i + 3) & ~3u;
23144 +@@ -69,13 +72,8 @@ static unsigned int align(unsigned int i)
23145 + struct sk_buff *tipc_buf_acquire(u32 size, gfp_t gfp)
23146 + {
23147 + struct sk_buff *skb;
23148 +-#ifdef CONFIG_TIPC_CRYPTO
23149 +- unsigned int buf_size = (BUF_HEADROOM + size + BUF_TAILROOM + 3) & ~3u;
23150 +-#else
23151 +- unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;
23152 +-#endif
23153 +
23154 +- skb = alloc_skb_fclone(buf_size, gfp);
23155 ++ skb = alloc_skb_fclone(BUF_OVERHEAD + size, gfp);
23156 + if (skb) {
23157 + skb_reserve(skb, BUF_HEADROOM);
23158 + skb_put(skb, size);
23159 +@@ -395,7 +393,8 @@ int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m, int offset,
23160 + if (unlikely(!skb)) {
23161 + if (pktmax != MAX_MSG_SIZE)
23162 + return -ENOMEM;
23163 +- rc = tipc_msg_build(mhdr, m, offset, dsz, FB_MTU, list);
23164 ++ rc = tipc_msg_build(mhdr, m, offset, dsz,
23165 ++ one_page_mtu, list);
23166 + if (rc != dsz)
23167 + return rc;
23168 + if (tipc_msg_assemble(list))
23169 +diff --git a/net/tipc/msg.h b/net/tipc/msg.h
23170 +index 5d64596ba9877..64ae4c4c44f8c 100644
23171 +--- a/net/tipc/msg.h
23172 ++++ b/net/tipc/msg.h
23173 +@@ -99,9 +99,10 @@ struct plist;
23174 + #define MAX_H_SIZE 60 /* Largest possible TIPC header size */
23175 +
23176 + #define MAX_MSG_SIZE (MAX_H_SIZE + TIPC_MAX_USER_MSG_SIZE)
23177 +-#define FB_MTU 3744
23178 + #define TIPC_MEDIA_INFO_OFFSET 5
23179 +
23180 ++extern const int one_page_mtu;
23181 ++
23182 + struct tipc_skb_cb {
23183 + union {
23184 + struct {
23185 +diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
23186 +index 6086cf4f10a7c..60d2ff13fa9ee 100644
23187 +--- a/net/tls/tls_sw.c
23188 ++++ b/net/tls/tls_sw.c
23189 +@@ -1153,7 +1153,7 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
23190 + int ret = 0;
23191 + bool eor;
23192 +
23193 +- eor = !(flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST));
23194 ++ eor = !(flags & MSG_SENDPAGE_NOTLAST);
23195 + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
23196 +
23197 + /* Call the sk_stream functions to manage the sndbuf mem. */
23198 +diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
23199 +index 40f359bf20440..35938dfa784de 100644
23200 +--- a/net/xdp/xsk_queue.h
23201 ++++ b/net/xdp/xsk_queue.h
23202 +@@ -128,12 +128,15 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
23203 + static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
23204 + struct xdp_desc *desc)
23205 + {
23206 +- u64 chunk;
23207 +-
23208 +- if (desc->len > pool->chunk_size)
23209 +- return false;
23210 ++ u64 chunk, chunk_end;
23211 +
23212 + chunk = xp_aligned_extract_addr(pool, desc->addr);
23213 ++ if (likely(desc->len)) {
23214 ++ chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len - 1);
23215 ++ if (chunk != chunk_end)
23216 ++ return false;
23217 ++ }
23218 ++
23219 + if (chunk >= pool->addrs_cnt)
23220 + return false;
23221 +
23222 +diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
23223 +index 6d6917b68856f..e843b0d9e2a61 100644
23224 +--- a/net/xfrm/xfrm_device.c
23225 ++++ b/net/xfrm/xfrm_device.c
23226 +@@ -268,6 +268,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
23227 + xso->num_exthdrs = 0;
23228 + xso->flags = 0;
23229 + xso->dev = NULL;
23230 ++ xso->real_dev = NULL;
23231 + dev_put(dev);
23232 +
23233 + if (err != -EOPNOTSUPP)
23234 +diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
23235 +index e4cb0ff4dcf41..ac907b9d32d1e 100644
23236 +--- a/net/xfrm/xfrm_output.c
23237 ++++ b/net/xfrm/xfrm_output.c
23238 +@@ -711,15 +711,8 @@ out:
23239 + static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb)
23240 + {
23241 + #if IS_ENABLED(CONFIG_IPV6)
23242 +- unsigned int ptr = 0;
23243 + int err;
23244 +
23245 +- if (x->outer_mode.encap == XFRM_MODE_BEET &&
23246 +- ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) {
23247 +- net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n");
23248 +- return -EAFNOSUPPORT;
23249 +- }
23250 +-
23251 + err = xfrm6_tunnel_check_size(skb);
23252 + if (err)
23253 + return err;
23254 +diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
23255 +index 4496f7efa2200..c25586156c6a7 100644
23256 +--- a/net/xfrm/xfrm_state.c
23257 ++++ b/net/xfrm/xfrm_state.c
23258 +@@ -2518,7 +2518,7 @@ void xfrm_state_delete_tunnel(struct xfrm_state *x)
23259 + }
23260 + EXPORT_SYMBOL(xfrm_state_delete_tunnel);
23261 +
23262 +-u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
23263 ++u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
23264 + {
23265 + const struct xfrm_type *type = READ_ONCE(x->type);
23266 + struct crypto_aead *aead;
23267 +@@ -2549,7 +2549,17 @@ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
23268 + return ((mtu - x->props.header_len - crypto_aead_authsize(aead) -
23269 + net_adj) & ~(blksize - 1)) + net_adj - 2;
23270 + }
23271 +-EXPORT_SYMBOL_GPL(xfrm_state_mtu);
23272 ++EXPORT_SYMBOL_GPL(__xfrm_state_mtu);
23273 ++
23274 ++u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
23275 ++{
23276 ++ mtu = __xfrm_state_mtu(x, mtu);
23277 ++
23278 ++ if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU)
23279 ++ return IPV6_MIN_MTU;
23280 ++
23281 ++ return mtu;
23282 ++}
23283 +
23284 + int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
23285 + {
23286 +diff --git a/samples/bpf/xdp_redirect_user.c b/samples/bpf/xdp_redirect_user.c
23287 +index 41d705c3a1f7f..93854e135134c 100644
23288 +--- a/samples/bpf/xdp_redirect_user.c
23289 ++++ b/samples/bpf/xdp_redirect_user.c
23290 +@@ -130,7 +130,7 @@ int main(int argc, char **argv)
23291 + if (!(xdp_flags & XDP_FLAGS_SKB_MODE))
23292 + xdp_flags |= XDP_FLAGS_DRV_MODE;
23293 +
23294 +- if (optind == argc) {
23295 ++ if (optind + 2 != argc) {
23296 + printf("usage: %s <IFNAME|IFINDEX>_IN <IFNAME|IFINDEX>_OUT\n", argv[0]);
23297 + return 1;
23298 + }
23299 +@@ -213,5 +213,5 @@ int main(int argc, char **argv)
23300 + poll_stats(2, ifindex_out);
23301 +
23302 + out:
23303 +- return 0;
23304 ++ return ret;
23305 + }
23306 +diff --git a/scripts/Makefile.build b/scripts/Makefile.build
23307 +index 1b6094a130346..73701d637ed56 100644
23308 +--- a/scripts/Makefile.build
23309 ++++ b/scripts/Makefile.build
23310 +@@ -267,7 +267,8 @@ define rule_as_o_S
23311 + endef
23312 +
23313 + # Built-in and composite module parts
23314 +-$(obj)/%.o: $(src)/%.c $(recordmcount_source) $(objtool_dep) FORCE
23315 ++.SECONDEXPANSION:
23316 ++$(obj)/%.o: $(src)/%.c $(recordmcount_source) $$(objtool_dep) FORCE
23317 + $(call if_changed_rule,cc_o_c)
23318 + $(call cmd,force_checksrc)
23319 +
23320 +@@ -348,7 +349,7 @@ cmd_modversions_S = \
23321 + fi
23322 + endif
23323 +
23324 +-$(obj)/%.o: $(src)/%.S $(objtool_dep) FORCE
23325 ++$(obj)/%.o: $(src)/%.S $$(objtool_dep) FORCE
23326 + $(call if_changed_rule,as_o_S)
23327 +
23328 + targets += $(filter-out $(subdir-builtin), $(real-obj-y))
23329 +diff --git a/scripts/tools-support-relr.sh b/scripts/tools-support-relr.sh
23330 +index 45e8aa360b457..cb55878bd5b81 100755
23331 +--- a/scripts/tools-support-relr.sh
23332 ++++ b/scripts/tools-support-relr.sh
23333 +@@ -7,7 +7,8 @@ trap "rm -f $tmp_file.o $tmp_file $tmp_file.bin" EXIT
23334 + cat << "END" | $CC -c -x c - -o $tmp_file.o >/dev/null 2>&1
23335 + void *p = &p;
23336 + END
23337 +-$LD $tmp_file.o -shared -Bsymbolic --pack-dyn-relocs=relr -o $tmp_file
23338 ++$LD $tmp_file.o -shared -Bsymbolic --pack-dyn-relocs=relr \
23339 ++ --use-android-relr-tags -o $tmp_file
23340 +
23341 + # Despite printing an error message, GNU nm still exits with exit code 0 if it
23342 + # sees a relr section. So we need to check that nothing is printed to stderr.
23343 +diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
23344 +index 0de367aaa2d31..7ac5204c8d1f2 100644
23345 +--- a/security/integrity/evm/evm_main.c
23346 ++++ b/security/integrity/evm/evm_main.c
23347 +@@ -521,7 +521,7 @@ void evm_inode_post_setattr(struct dentry *dentry, int ia_valid)
23348 + }
23349 +
23350 + /*
23351 +- * evm_inode_init_security - initializes security.evm
23352 ++ * evm_inode_init_security - initializes security.evm HMAC value
23353 + */
23354 + int evm_inode_init_security(struct inode *inode,
23355 + const struct xattr *lsm_xattr,
23356 +@@ -530,7 +530,8 @@ int evm_inode_init_security(struct inode *inode,
23357 + struct evm_xattr *xattr_data;
23358 + int rc;
23359 +
23360 +- if (!evm_key_loaded() || !evm_protected_xattr(lsm_xattr->name))
23361 ++ if (!(evm_initialized & EVM_INIT_HMAC) ||
23362 ++ !evm_protected_xattr(lsm_xattr->name))
23363 + return 0;
23364 +
23365 + xattr_data = kzalloc(sizeof(*xattr_data), GFP_NOFS);
23366 +diff --git a/security/integrity/evm/evm_secfs.c b/security/integrity/evm/evm_secfs.c
23367 +index bbc85637e18b2..5f0da41bccd07 100644
23368 +--- a/security/integrity/evm/evm_secfs.c
23369 ++++ b/security/integrity/evm/evm_secfs.c
23370 +@@ -66,12 +66,13 @@ static ssize_t evm_read_key(struct file *filp, char __user *buf,
23371 + static ssize_t evm_write_key(struct file *file, const char __user *buf,
23372 + size_t count, loff_t *ppos)
23373 + {
23374 +- int i, ret;
23375 ++ unsigned int i;
23376 ++ int ret;
23377 +
23378 + if (!capable(CAP_SYS_ADMIN) || (evm_initialized & EVM_SETUP_COMPLETE))
23379 + return -EPERM;
23380 +
23381 +- ret = kstrtoint_from_user(buf, count, 0, &i);
23382 ++ ret = kstrtouint_from_user(buf, count, 0, &i);
23383 +
23384 + if (ret)
23385 + return ret;
23386 +@@ -80,12 +81,12 @@ static ssize_t evm_write_key(struct file *file, const char __user *buf,
23387 + if (!i || (i & ~EVM_INIT_MASK) != 0)
23388 + return -EINVAL;
23389 +
23390 +- /* Don't allow a request to freshly enable metadata writes if
23391 +- * keys are loaded.
23392 ++ /*
23393 ++ * Don't allow a request to enable metadata writes if
23394 ++ * an HMAC key is loaded.
23395 + */
23396 + if ((i & EVM_ALLOW_METADATA_WRITES) &&
23397 +- ((evm_initialized & EVM_KEY_MASK) != 0) &&
23398 +- !(evm_initialized & EVM_ALLOW_METADATA_WRITES))
23399 ++ (evm_initialized & EVM_INIT_HMAC) != 0)
23400 + return -EPERM;
23401 +
23402 + if (i & EVM_INIT_HMAC) {
23403 +diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
23404 +index 565e33ff19d0d..d7cc6f8977461 100644
23405 +--- a/security/integrity/ima/ima_appraise.c
23406 ++++ b/security/integrity/ima/ima_appraise.c
23407 +@@ -522,8 +522,6 @@ void ima_inode_post_setattr(struct user_namespace *mnt_userns,
23408 + return;
23409 +
23410 + action = ima_must_appraise(mnt_userns, inode, MAY_ACCESS, POST_SETATTR);
23411 +- if (!action)
23412 +- __vfs_removexattr(&init_user_ns, dentry, XATTR_NAME_IMA);
23413 + iint = integrity_iint_find(inode);
23414 + if (iint) {
23415 + set_bit(IMA_CHANGE_ATTR, &iint->atomic_flags);
23416 +diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
23417 +index 5805c5de39fbf..7a282d8e71485 100644
23418 +--- a/sound/firewire/amdtp-stream.c
23419 ++++ b/sound/firewire/amdtp-stream.c
23420 +@@ -1404,14 +1404,17 @@ int amdtp_domain_start(struct amdtp_domain *d, unsigned int ir_delay_cycle)
23421 + unsigned int queue_size;
23422 + struct amdtp_stream *s;
23423 + int cycle;
23424 ++ bool found = false;
23425 + int err;
23426 +
23427 + // Select an IT context as IRQ target.
23428 + list_for_each_entry(s, &d->streams, list) {
23429 +- if (s->direction == AMDTP_OUT_STREAM)
23430 ++ if (s->direction == AMDTP_OUT_STREAM) {
23431 ++ found = true;
23432 + break;
23433 ++ }
23434 + }
23435 +- if (!s)
23436 ++ if (!found)
23437 + return -ENXIO;
23438 + d->irq_target = s;
23439 +
23440 +diff --git a/sound/firewire/motu/motu-protocol-v2.c b/sound/firewire/motu/motu-protocol-v2.c
23441 +index e59e69ab1538b..784073aa10265 100644
23442 +--- a/sound/firewire/motu/motu-protocol-v2.c
23443 ++++ b/sound/firewire/motu/motu-protocol-v2.c
23444 +@@ -353,6 +353,7 @@ const struct snd_motu_spec snd_motu_spec_8pre = {
23445 + .protocol_version = SND_MOTU_PROTOCOL_V2,
23446 + .flags = SND_MOTU_SPEC_RX_MIDI_2ND_Q |
23447 + SND_MOTU_SPEC_TX_MIDI_2ND_Q,
23448 +- .tx_fixed_pcm_chunks = {10, 6, 0},
23449 +- .rx_fixed_pcm_chunks = {10, 6, 0},
23450 ++ // Two dummy chunks always in the end of data block.
23451 ++ .tx_fixed_pcm_chunks = {10, 10, 0},
23452 ++ .rx_fixed_pcm_chunks = {6, 6, 0},
23453 + };
23454 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
23455 +index e46e43dac6bfd..1cc83344c2ecf 100644
23456 +--- a/sound/pci/hda/patch_realtek.c
23457 ++++ b/sound/pci/hda/patch_realtek.c
23458 +@@ -385,6 +385,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
23459 + alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
23460 + fallthrough;
23461 + case 0x10ec0215:
23462 ++ case 0x10ec0230:
23463 + case 0x10ec0233:
23464 + case 0x10ec0235:
23465 + case 0x10ec0236:
23466 +@@ -3153,6 +3154,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
23467 + alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
23468 + alc_update_coef_idx(codec, 0x44, 0x0045 << 8, 0x0);
23469 + break;
23470 ++ case 0x10ec0230:
23471 + case 0x10ec0236:
23472 + case 0x10ec0256:
23473 + alc_write_coef_idx(codec, 0x48, 0x0);
23474 +@@ -3180,6 +3182,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
23475 + alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
23476 + alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8);
23477 + break;
23478 ++ case 0x10ec0230:
23479 + case 0x10ec0236:
23480 + case 0x10ec0256:
23481 + alc_write_coef_idx(codec, 0x48, 0xd011);
23482 +@@ -4737,6 +4740,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
23483 + case 0x10ec0255:
23484 + alc_process_coef_fw(codec, coef0255);
23485 + break;
23486 ++ case 0x10ec0230:
23487 + case 0x10ec0236:
23488 + case 0x10ec0256:
23489 + alc_process_coef_fw(codec, coef0256);
23490 +@@ -4851,6 +4855,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
23491 + alc_process_coef_fw(codec, coef0255);
23492 + snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
23493 + break;
23494 ++ case 0x10ec0230:
23495 + case 0x10ec0236:
23496 + case 0x10ec0256:
23497 + alc_write_coef_idx(codec, 0x45, 0xc489);
23498 +@@ -5000,6 +5005,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
23499 + case 0x10ec0255:
23500 + alc_process_coef_fw(codec, coef0255);
23501 + break;
23502 ++ case 0x10ec0230:
23503 + case 0x10ec0236:
23504 + case 0x10ec0256:
23505 + alc_write_coef_idx(codec, 0x1b, 0x0e4b);
23506 +@@ -5098,6 +5104,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
23507 + case 0x10ec0255:
23508 + alc_process_coef_fw(codec, coef0255);
23509 + break;
23510 ++ case 0x10ec0230:
23511 + case 0x10ec0236:
23512 + case 0x10ec0256:
23513 + alc_process_coef_fw(codec, coef0256);
23514 +@@ -5211,6 +5218,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
23515 + case 0x10ec0255:
23516 + alc_process_coef_fw(codec, coef0255);
23517 + break;
23518 ++ case 0x10ec0230:
23519 + case 0x10ec0236:
23520 + case 0x10ec0256:
23521 + alc_process_coef_fw(codec, coef0256);
23522 +@@ -5311,6 +5319,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
23523 + val = alc_read_coef_idx(codec, 0x46);
23524 + is_ctia = (val & 0x0070) == 0x0070;
23525 + break;
23526 ++ case 0x10ec0230:
23527 + case 0x10ec0236:
23528 + case 0x10ec0256:
23529 + alc_write_coef_idx(codec, 0x1b, 0x0e4b);
23530 +@@ -5604,6 +5613,7 @@ static void alc255_set_default_jack_type(struct hda_codec *codec)
23531 + case 0x10ec0255:
23532 + alc_process_coef_fw(codec, alc255fw);
23533 + break;
23534 ++ case 0x10ec0230:
23535 + case 0x10ec0236:
23536 + case 0x10ec0256:
23537 + alc_process_coef_fw(codec, alc256fw);
23538 +@@ -6204,6 +6214,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec)
23539 + alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
23540 + alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
23541 + break;
23542 ++ case 0x10ec0230:
23543 + case 0x10ec0235:
23544 + case 0x10ec0236:
23545 + case 0x10ec0255:
23546 +@@ -6336,6 +6347,24 @@ static void alc_fixup_no_int_mic(struct hda_codec *codec,
23547 + }
23548 + }
23549 +
23550 ++static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
23551 ++ const struct hda_fixup *fix, int action)
23552 ++{
23553 ++ static const hda_nid_t conn[] = { 0x02 };
23554 ++ static const struct hda_pintbl pincfgs[] = {
23555 ++ { 0x14, 0x90170110 }, /* rear speaker */
23556 ++ { }
23557 ++ };
23558 ++
23559 ++ switch (action) {
23560 ++ case HDA_FIXUP_ACT_PRE_PROBE:
23561 ++ snd_hda_apply_pincfgs(codec, pincfgs);
23562 ++ /* force front speaker to DAC1 */
23563 ++ snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
23564 ++ break;
23565 ++ }
23566 ++}
23567 ++
23568 + /* for hda_fixup_thinkpad_acpi() */
23569 + #include "thinkpad_helper.c"
23570 +
23571 +@@ -7802,6 +7831,8 @@ static const struct hda_fixup alc269_fixups[] = {
23572 + { 0x20, AC_VERB_SET_PROC_COEF, 0x4e4b },
23573 + { }
23574 + },
23575 ++ .chained = true,
23576 ++ .chain_id = ALC289_FIXUP_ASUS_GA401,
23577 + },
23578 + [ALC285_FIXUP_HP_GPIO_LED] = {
23579 + .type = HDA_FIXUP_FUNC,
23580 +@@ -8113,13 +8144,8 @@ static const struct hda_fixup alc269_fixups[] = {
23581 + .chain_id = ALC269_FIXUP_HP_LINE1_MIC1_LED,
23582 + },
23583 + [ALC285_FIXUP_HP_SPECTRE_X360] = {
23584 +- .type = HDA_FIXUP_PINS,
23585 +- .v.pins = (const struct hda_pintbl[]) {
23586 +- { 0x14, 0x90170110 }, /* enable top speaker */
23587 +- {}
23588 +- },
23589 +- .chained = true,
23590 +- .chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
23591 ++ .type = HDA_FIXUP_FUNC,
23592 ++ .v.func = alc285_fixup_hp_spectre_x360,
23593 + },
23594 + [ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
23595 + .type = HDA_FIXUP_FUNC,
23596 +@@ -8305,6 +8331,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
23597 + SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
23598 + SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
23599 + SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
23600 ++ SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
23601 + SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
23602 + SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
23603 + SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
23604 +@@ -8322,13 +8349,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
23605 + ALC285_FIXUP_HP_GPIO_AMP_INIT),
23606 + SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
23607 + SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
23608 ++ SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
23609 ++ SND_PCI_QUIRK(0x103c, 0x87f1, "HP ProBook 630 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
23610 + SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
23611 + SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
23612 + SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
23613 + SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
23614 ++ SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
23615 + SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
23616 ++ SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
23617 + SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
23618 + SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
23619 ++ SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
23620 ++ SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
23621 + SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
23622 + SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
23623 + SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
23624 +@@ -9326,6 +9359,7 @@ static int patch_alc269(struct hda_codec *codec)
23625 + spec->shutup = alc256_shutup;
23626 + spec->init_hook = alc256_init;
23627 + break;
23628 ++ case 0x10ec0230:
23629 + case 0x10ec0236:
23630 + case 0x10ec0256:
23631 + spec->codec_variant = ALC269_TYPE_ALC256;
23632 +@@ -10617,6 +10651,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
23633 + HDA_CODEC_ENTRY(0x10ec0221, "ALC221", patch_alc269),
23634 + HDA_CODEC_ENTRY(0x10ec0222, "ALC222", patch_alc269),
23635 + HDA_CODEC_ENTRY(0x10ec0225, "ALC225", patch_alc269),
23636 ++ HDA_CODEC_ENTRY(0x10ec0230, "ALC236", patch_alc269),
23637 + HDA_CODEC_ENTRY(0x10ec0231, "ALC231", patch_alc269),
23638 + HDA_CODEC_ENTRY(0x10ec0233, "ALC233", patch_alc269),
23639 + HDA_CODEC_ENTRY(0x10ec0234, "ALC234", patch_alc269),
23640 +diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
23641 +index 5b124c4ad5725..11b398be0954f 100644
23642 +--- a/sound/pci/intel8x0.c
23643 ++++ b/sound/pci/intel8x0.c
23644 +@@ -692,7 +692,7 @@ static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ich
23645 + int status, civ, i, step;
23646 + int ack = 0;
23647 +
23648 +- if (!ichdev->prepared || ichdev->suspended)
23649 ++ if (!(ichdev->prepared || chip->in_measurement) || ichdev->suspended)
23650 + return;
23651 +
23652 + spin_lock_irqsave(&chip->reg_lock, flags);
23653 +diff --git a/sound/soc/atmel/atmel-i2s.c b/sound/soc/atmel/atmel-i2s.c
23654 +index 7c6187e41f2b9..a383c6bef8e09 100644
23655 +--- a/sound/soc/atmel/atmel-i2s.c
23656 ++++ b/sound/soc/atmel/atmel-i2s.c
23657 +@@ -200,6 +200,7 @@ struct atmel_i2s_dev {
23658 + unsigned int fmt;
23659 + const struct atmel_i2s_gck_param *gck_param;
23660 + const struct atmel_i2s_caps *caps;
23661 ++ int clk_use_no;
23662 + };
23663 +
23664 + static irqreturn_t atmel_i2s_interrupt(int irq, void *dev_id)
23665 +@@ -321,9 +322,16 @@ static int atmel_i2s_hw_params(struct snd_pcm_substream *substream,
23666 + {
23667 + struct atmel_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
23668 + bool is_playback = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK);
23669 +- unsigned int mr = 0;
23670 ++ unsigned int mr = 0, mr_mask;
23671 + int ret;
23672 +
23673 ++ mr_mask = ATMEL_I2SC_MR_FORMAT_MASK | ATMEL_I2SC_MR_MODE_MASK |
23674 ++ ATMEL_I2SC_MR_DATALENGTH_MASK;
23675 ++ if (is_playback)
23676 ++ mr_mask |= ATMEL_I2SC_MR_TXMONO;
23677 ++ else
23678 ++ mr_mask |= ATMEL_I2SC_MR_RXMONO;
23679 ++
23680 + switch (dev->fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
23681 + case SND_SOC_DAIFMT_I2S:
23682 + mr |= ATMEL_I2SC_MR_FORMAT_I2S;
23683 +@@ -402,7 +410,7 @@ static int atmel_i2s_hw_params(struct snd_pcm_substream *substream,
23684 + return -EINVAL;
23685 + }
23686 +
23687 +- return regmap_write(dev->regmap, ATMEL_I2SC_MR, mr);
23688 ++ return regmap_update_bits(dev->regmap, ATMEL_I2SC_MR, mr_mask, mr);
23689 + }
23690 +
23691 + static int atmel_i2s_switch_mck_generator(struct atmel_i2s_dev *dev,
23692 +@@ -495,18 +503,28 @@ static int atmel_i2s_trigger(struct snd_pcm_substream *substream, int cmd,
23693 + is_master = (mr & ATMEL_I2SC_MR_MODE_MASK) == ATMEL_I2SC_MR_MODE_MASTER;
23694 +
23695 + /* If master starts, enable the audio clock. */
23696 +- if (is_master && mck_enabled)
23697 +- err = atmel_i2s_switch_mck_generator(dev, true);
23698 +- if (err)
23699 +- return err;
23700 ++ if (is_master && mck_enabled) {
23701 ++ if (!dev->clk_use_no) {
23702 ++ err = atmel_i2s_switch_mck_generator(dev, true);
23703 ++ if (err)
23704 ++ return err;
23705 ++ }
23706 ++ dev->clk_use_no++;
23707 ++ }
23708 +
23709 + err = regmap_write(dev->regmap, ATMEL_I2SC_CR, cr);
23710 + if (err)
23711 + return err;
23712 +
23713 + /* If master stops, disable the audio clock. */
23714 +- if (is_master && !mck_enabled)
23715 +- err = atmel_i2s_switch_mck_generator(dev, false);
23716 ++ if (is_master && !mck_enabled) {
23717 ++ if (dev->clk_use_no == 1) {
23718 ++ err = atmel_i2s_switch_mck_generator(dev, false);
23719 ++ if (err)
23720 ++ return err;
23721 ++ }
23722 ++ dev->clk_use_no--;
23723 ++ }
23724 +
23725 + return err;
23726 + }
23727 +@@ -542,6 +560,7 @@ static struct snd_soc_dai_driver atmel_i2s_dai = {
23728 + },
23729 + .ops = &atmel_i2s_dai_ops,
23730 + .symmetric_rate = 1,
23731 ++ .symmetric_sample_bits = 1,
23732 + };
23733 +
23734 + static const struct snd_soc_component_driver atmel_i2s_component = {
23735 +diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
23736 +index 866d7c873e3c9..ca2019732013e 100644
23737 +--- a/sound/soc/codecs/cs42l42.h
23738 ++++ b/sound/soc/codecs/cs42l42.h
23739 +@@ -77,7 +77,7 @@
23740 + #define CS42L42_HP_PDN_SHIFT 3
23741 + #define CS42L42_HP_PDN_MASK (1 << CS42L42_HP_PDN_SHIFT)
23742 + #define CS42L42_ADC_PDN_SHIFT 2
23743 +-#define CS42L42_ADC_PDN_MASK (1 << CS42L42_HP_PDN_SHIFT)
23744 ++#define CS42L42_ADC_PDN_MASK (1 << CS42L42_ADC_PDN_SHIFT)
23745 + #define CS42L42_PDN_ALL_SHIFT 0
23746 + #define CS42L42_PDN_ALL_MASK (1 << CS42L42_PDN_ALL_SHIFT)
23747 +
23748 +diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c
23749 +index f3a12205cd484..dc520effc61cb 100644
23750 +--- a/sound/soc/codecs/max98373-sdw.c
23751 ++++ b/sound/soc/codecs/max98373-sdw.c
23752 +@@ -271,7 +271,7 @@ static __maybe_unused int max98373_resume(struct device *dev)
23753 + struct max98373_priv *max98373 = dev_get_drvdata(dev);
23754 + unsigned long time;
23755 +
23756 +- if (!max98373->hw_init)
23757 ++ if (!max98373->first_hw_init)
23758 + return 0;
23759 +
23760 + if (!slave->unattach_request)
23761 +@@ -362,7 +362,7 @@ static int max98373_io_init(struct sdw_slave *slave)
23762 + struct device *dev = &slave->dev;
23763 + struct max98373_priv *max98373 = dev_get_drvdata(dev);
23764 +
23765 +- if (max98373->pm_init_once) {
23766 ++ if (max98373->first_hw_init) {
23767 + regcache_cache_only(max98373->regmap, false);
23768 + regcache_cache_bypass(max98373->regmap, true);
23769 + }
23770 +@@ -370,7 +370,7 @@ static int max98373_io_init(struct sdw_slave *slave)
23771 + /*
23772 + * PM runtime is only enabled when a Slave reports as Attached
23773 + */
23774 +- if (!max98373->pm_init_once) {
23775 ++ if (!max98373->first_hw_init) {
23776 + /* set autosuspend parameters */
23777 + pm_runtime_set_autosuspend_delay(dev, 3000);
23778 + pm_runtime_use_autosuspend(dev);
23779 +@@ -462,12 +462,12 @@ static int max98373_io_init(struct sdw_slave *slave)
23780 + regmap_write(max98373->regmap, MAX98373_R20B5_BDE_EN, 1);
23781 + regmap_write(max98373->regmap, MAX98373_R20E2_LIMITER_EN, 1);
23782 +
23783 +- if (max98373->pm_init_once) {
23784 ++ if (max98373->first_hw_init) {
23785 + regcache_cache_bypass(max98373->regmap, false);
23786 + regcache_mark_dirty(max98373->regmap);
23787 + }
23788 +
23789 +- max98373->pm_init_once = true;
23790 ++ max98373->first_hw_init = true;
23791 + max98373->hw_init = true;
23792 +
23793 + pm_runtime_mark_last_busy(dev);
23794 +@@ -787,6 +787,8 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap)
23795 + max98373->cache = devm_kcalloc(dev, max98373->cache_num,
23796 + sizeof(*max98373->cache),
23797 + GFP_KERNEL);
23798 ++ if (!max98373->cache)
23799 ++ return -ENOMEM;
23800 +
23801 + for (i = 0; i < max98373->cache_num; i++)
23802 + max98373->cache[i].reg = max98373_sdw_cache_reg[i];
23803 +@@ -795,7 +797,7 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap)
23804 + max98373_slot_config(dev, max98373);
23805 +
23806 + max98373->hw_init = false;
23807 +- max98373->pm_init_once = false;
23808 ++ max98373->first_hw_init = false;
23809 +
23810 + /* codec registration */
23811 + ret = devm_snd_soc_register_component(dev, &soc_codec_dev_max98373_sdw,
23812 +diff --git a/sound/soc/codecs/max98373.h b/sound/soc/codecs/max98373.h
23813 +index 71f5a5228f34b..c09c73678a9a5 100644
23814 +--- a/sound/soc/codecs/max98373.h
23815 ++++ b/sound/soc/codecs/max98373.h
23816 +@@ -223,7 +223,7 @@ struct max98373_priv {
23817 + /* variables to support soundwire */
23818 + struct sdw_slave *slave;
23819 + bool hw_init;
23820 +- bool pm_init_once;
23821 ++ bool first_hw_init;
23822 + int slot;
23823 + unsigned int rx_mask;
23824 + };
23825 +diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c
23826 +index bfefefcc76d81..758d439e8c7a5 100644
23827 +--- a/sound/soc/codecs/rk3328_codec.c
23828 ++++ b/sound/soc/codecs/rk3328_codec.c
23829 +@@ -474,7 +474,8 @@ static int rk3328_platform_probe(struct platform_device *pdev)
23830 + rk3328->pclk = devm_clk_get(&pdev->dev, "pclk");
23831 + if (IS_ERR(rk3328->pclk)) {
23832 + dev_err(&pdev->dev, "can't get acodec pclk\n");
23833 +- return PTR_ERR(rk3328->pclk);
23834 ++ ret = PTR_ERR(rk3328->pclk);
23835 ++ goto err_unprepare_mclk;
23836 + }
23837 +
23838 + ret = clk_prepare_enable(rk3328->pclk);
23839 +@@ -484,19 +485,34 @@ static int rk3328_platform_probe(struct platform_device *pdev)
23840 + }
23841 +
23842 + base = devm_platform_ioremap_resource(pdev, 0);
23843 +- if (IS_ERR(base))
23844 +- return PTR_ERR(base);
23845 ++ if (IS_ERR(base)) {
23846 ++ ret = PTR_ERR(base);
23847 ++ goto err_unprepare_pclk;
23848 ++ }
23849 +
23850 + rk3328->regmap = devm_regmap_init_mmio(&pdev->dev, base,
23851 + &rk3328_codec_regmap_config);
23852 +- if (IS_ERR(rk3328->regmap))
23853 +- return PTR_ERR(rk3328->regmap);
23854 ++ if (IS_ERR(rk3328->regmap)) {
23855 ++ ret = PTR_ERR(rk3328->regmap);
23856 ++ goto err_unprepare_pclk;
23857 ++ }
23858 +
23859 + platform_set_drvdata(pdev, rk3328);
23860 +
23861 +- return devm_snd_soc_register_component(&pdev->dev, &soc_codec_rk3328,
23862 ++ ret = devm_snd_soc_register_component(&pdev->dev, &soc_codec_rk3328,
23863 + rk3328_dai,
23864 + ARRAY_SIZE(rk3328_dai));
23865 ++ if (ret)
23866 ++ goto err_unprepare_pclk;
23867 ++
23868 ++ return 0;
23869 ++
23870 ++err_unprepare_pclk:
23871 ++ clk_disable_unprepare(rk3328->pclk);
23872 ++
23873 ++err_unprepare_mclk:
23874 ++ clk_disable_unprepare(rk3328->mclk);
23875 ++ return ret;
23876 + }
23877 +
23878 + static const struct of_device_id rk3328_codec_of_match[] __maybe_unused = {
23879 +diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
23880 +index afd2c3b687ccb..0ec741cf70fce 100644
23881 +--- a/sound/soc/codecs/rt1308-sdw.c
23882 ++++ b/sound/soc/codecs/rt1308-sdw.c
23883 +@@ -709,7 +709,7 @@ static int __maybe_unused rt1308_dev_resume(struct device *dev)
23884 + struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(dev);
23885 + unsigned long time;
23886 +
23887 +- if (!rt1308->hw_init)
23888 ++ if (!rt1308->first_hw_init)
23889 + return 0;
23890 +
23891 + if (!slave->unattach_request)
23892 +diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
23893 +index 93c1603b42f10..8265b537ff4f3 100644
23894 +--- a/sound/soc/codecs/rt5682-i2c.c
23895 ++++ b/sound/soc/codecs/rt5682-i2c.c
23896 +@@ -273,6 +273,7 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
23897 + {
23898 + struct rt5682_priv *rt5682 = i2c_get_clientdata(client);
23899 +
23900 ++ disable_irq(client->irq);
23901 + cancel_delayed_work_sync(&rt5682->jack_detect_work);
23902 + cancel_delayed_work_sync(&rt5682->jd_check_work);
23903 +
23904 +diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c
23905 +index d1dd7f720ba48..b4649b599eaa9 100644
23906 +--- a/sound/soc/codecs/rt5682-sdw.c
23907 ++++ b/sound/soc/codecs/rt5682-sdw.c
23908 +@@ -400,6 +400,11 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
23909 +
23910 + pm_runtime_get_noresume(&slave->dev);
23911 +
23912 ++ if (rt5682->first_hw_init) {
23913 ++ regcache_cache_only(rt5682->regmap, false);
23914 ++ regcache_cache_bypass(rt5682->regmap, true);
23915 ++ }
23916 ++
23917 + while (loop > 0) {
23918 + regmap_read(rt5682->regmap, RT5682_DEVICE_ID, &val);
23919 + if (val == DEVICE_ID)
23920 +@@ -408,14 +413,11 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
23921 + usleep_range(30000, 30005);
23922 + loop--;
23923 + }
23924 ++
23925 + if (val != DEVICE_ID) {
23926 + dev_err(dev, "Device with ID register %x is not rt5682\n", val);
23927 +- return -ENODEV;
23928 +- }
23929 +-
23930 +- if (rt5682->first_hw_init) {
23931 +- regcache_cache_only(rt5682->regmap, false);
23932 +- regcache_cache_bypass(rt5682->regmap, true);
23933 ++ ret = -ENODEV;
23934 ++ goto err_nodev;
23935 + }
23936 +
23937 + rt5682_calibrate(rt5682);
23938 +@@ -486,10 +488,11 @@ reinit:
23939 + rt5682->hw_init = true;
23940 + rt5682->first_hw_init = true;
23941 +
23942 ++err_nodev:
23943 + pm_runtime_mark_last_busy(&slave->dev);
23944 + pm_runtime_put_autosuspend(&slave->dev);
23945 +
23946 +- dev_dbg(&slave->dev, "%s hw_init complete\n", __func__);
23947 ++ dev_dbg(&slave->dev, "%s hw_init complete: %d\n", __func__, ret);
23948 +
23949 + return ret;
23950 + }
23951 +@@ -743,7 +746,7 @@ static int __maybe_unused rt5682_dev_resume(struct device *dev)
23952 + struct rt5682_priv *rt5682 = dev_get_drvdata(dev);
23953 + unsigned long time;
23954 +
23955 +- if (!rt5682->hw_init)
23956 ++ if (!rt5682->first_hw_init)
23957 + return 0;
23958 +
23959 + if (!slave->unattach_request)
23960 +diff --git a/sound/soc/codecs/rt700-sdw.c b/sound/soc/codecs/rt700-sdw.c
23961 +index 4001612dfd737..fc6299a6022d6 100644
23962 +--- a/sound/soc/codecs/rt700-sdw.c
23963 ++++ b/sound/soc/codecs/rt700-sdw.c
23964 +@@ -498,7 +498,7 @@ static int __maybe_unused rt700_dev_resume(struct device *dev)
23965 + struct rt700_priv *rt700 = dev_get_drvdata(dev);
23966 + unsigned long time;
23967 +
23968 +- if (!rt700->hw_init)
23969 ++ if (!rt700->first_hw_init)
23970 + return 0;
23971 +
23972 + if (!slave->unattach_request)
23973 +diff --git a/sound/soc/codecs/rt711-sdw.c b/sound/soc/codecs/rt711-sdw.c
23974 +index 2beb4286d997b..bfa9fede7f908 100644
23975 +--- a/sound/soc/codecs/rt711-sdw.c
23976 ++++ b/sound/soc/codecs/rt711-sdw.c
23977 +@@ -501,7 +501,7 @@ static int __maybe_unused rt711_dev_resume(struct device *dev)
23978 + struct rt711_priv *rt711 = dev_get_drvdata(dev);
23979 + unsigned long time;
23980 +
23981 +- if (!rt711->hw_init)
23982 ++ if (!rt711->first_hw_init)
23983 + return 0;
23984 +
23985 + if (!slave->unattach_request)
23986 +diff --git a/sound/soc/codecs/rt715-sdw.c b/sound/soc/codecs/rt715-sdw.c
23987 +index 71dd3b97a4590..157a97acc6c28 100644
23988 +--- a/sound/soc/codecs/rt715-sdw.c
23989 ++++ b/sound/soc/codecs/rt715-sdw.c
23990 +@@ -541,7 +541,7 @@ static int __maybe_unused rt715_dev_resume(struct device *dev)
23991 + struct rt715_priv *rt715 = dev_get_drvdata(dev);
23992 + unsigned long time;
23993 +
23994 +- if (!rt715->hw_init)
23995 ++ if (!rt715->first_hw_init)
23996 + return 0;
23997 +
23998 + if (!slave->unattach_request)
23999 +diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
24000 +index 174e558224d8c..6d5e9c0acdb4f 100644
24001 +--- a/sound/soc/fsl/fsl_spdif.c
24002 ++++ b/sound/soc/fsl/fsl_spdif.c
24003 +@@ -1400,14 +1400,27 @@ static int fsl_spdif_probe(struct platform_device *pdev)
24004 + &spdif_priv->cpu_dai_drv, 1);
24005 + if (ret) {
24006 + dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
24007 +- return ret;
24008 ++ goto err_pm_disable;
24009 + }
24010 +
24011 + ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
24012 +- if (ret && ret != -EPROBE_DEFER)
24013 +- dev_err(&pdev->dev, "imx_pcm_dma_init failed: %d\n", ret);
24014 ++ if (ret) {
24015 ++ dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
24016 ++ goto err_pm_disable;
24017 ++ }
24018 +
24019 + return ret;
24020 ++
24021 ++err_pm_disable:
24022 ++ pm_runtime_disable(&pdev->dev);
24023 ++ return ret;
24024 ++}
24025 ++
24026 ++static int fsl_spdif_remove(struct platform_device *pdev)
24027 ++{
24028 ++ pm_runtime_disable(&pdev->dev);
24029 ++
24030 ++ return 0;
24031 + }
24032 +
24033 + #ifdef CONFIG_PM
24034 +@@ -1416,6 +1429,9 @@ static int fsl_spdif_runtime_suspend(struct device *dev)
24035 + struct fsl_spdif_priv *spdif_priv = dev_get_drvdata(dev);
24036 + int i;
24037 +
24038 ++ /* Disable all the interrupts */
24039 ++ regmap_update_bits(spdif_priv->regmap, REG_SPDIF_SIE, 0xffffff, 0);
24040 ++
24041 + regmap_read(spdif_priv->regmap, REG_SPDIF_SRPC,
24042 + &spdif_priv->regcache_srpc);
24043 + regcache_cache_only(spdif_priv->regmap, true);
24044 +@@ -1512,6 +1528,7 @@ static struct platform_driver fsl_spdif_driver = {
24045 + .pm = &fsl_spdif_pm,
24046 + },
24047 + .probe = fsl_spdif_probe,
24048 ++ .remove = fsl_spdif_remove,
24049 + };
24050 +
24051 + module_platform_driver(fsl_spdif_driver);
24052 +diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
24053 +index 6dd0a5fcd4556..070e3f32859fa 100644
24054 +--- a/sound/soc/fsl/fsl_xcvr.c
24055 ++++ b/sound/soc/fsl/fsl_xcvr.c
24056 +@@ -1236,6 +1236,16 @@ static __maybe_unused int fsl_xcvr_runtime_suspend(struct device *dev)
24057 + struct fsl_xcvr *xcvr = dev_get_drvdata(dev);
24058 + int ret;
24059 +
24060 ++ /*
24061 ++ * Clear interrupts, when streams starts or resumes after
24062 ++ * suspend, interrupts are enabled in prepare(), so no need
24063 ++ * to enable interrupts in resume().
24064 ++ */
24065 ++ ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_IER0,
24066 ++ FSL_XCVR_IRQ_EARC_ALL, 0);
24067 ++ if (ret < 0)
24068 ++ dev_err(dev, "Failed to clear IER0: %d\n", ret);
24069 ++
24070 + /* Assert M0+ reset */
24071 + ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
24072 + FSL_XCVR_EXT_CTRL_CORE_RESET,
24073 +diff --git a/sound/soc/hisilicon/hi6210-i2s.c b/sound/soc/hisilicon/hi6210-i2s.c
24074 +index 907f5f1f7b445..ff05b9779e4be 100644
24075 +--- a/sound/soc/hisilicon/hi6210-i2s.c
24076 ++++ b/sound/soc/hisilicon/hi6210-i2s.c
24077 +@@ -102,18 +102,15 @@ static int hi6210_i2s_startup(struct snd_pcm_substream *substream,
24078 +
24079 + for (n = 0; n < i2s->clocks; n++) {
24080 + ret = clk_prepare_enable(i2s->clk[n]);
24081 +- if (ret) {
24082 +- while (n--)
24083 +- clk_disable_unprepare(i2s->clk[n]);
24084 +- return ret;
24085 +- }
24086 ++ if (ret)
24087 ++ goto err_unprepare_clk;
24088 + }
24089 +
24090 + ret = clk_set_rate(i2s->clk[CLK_I2S_BASE], 49152000);
24091 + if (ret) {
24092 + dev_err(i2s->dev, "%s: setting 49.152MHz base rate failed %d\n",
24093 + __func__, ret);
24094 +- return ret;
24095 ++ goto err_unprepare_clk;
24096 + }
24097 +
24098 + /* enable clock before frequency division */
24099 +@@ -165,6 +162,11 @@ static int hi6210_i2s_startup(struct snd_pcm_substream *substream,
24100 + hi6210_write_reg(i2s, HII2S_SW_RST_N, val);
24101 +
24102 + return 0;
24103 ++
24104 ++err_unprepare_clk:
24105 ++ while (n--)
24106 ++ clk_disable_unprepare(i2s->clk[n]);
24107 ++ return ret;
24108 + }
24109 +
24110 + static void hi6210_i2s_shutdown(struct snd_pcm_substream *substream,
24111 +diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
24112 +index ecd3f90f4bbea..dfad2ad129abb 100644
24113 +--- a/sound/soc/intel/boards/sof_sdw.c
24114 ++++ b/sound/soc/intel/boards/sof_sdw.c
24115 +@@ -196,6 +196,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
24116 + },
24117 + .driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
24118 + SOF_SDW_TGL_HDMI |
24119 ++ SOF_RT715_DAI_ID_FIX |
24120 + SOF_SDW_PCH_DMIC),
24121 + },
24122 + {}
24123 +diff --git a/sound/soc/mediatek/common/mtk-btcvsd.c b/sound/soc/mediatek/common/mtk-btcvsd.c
24124 +index a554c57b64605..6299dee9a6deb 100644
24125 +--- a/sound/soc/mediatek/common/mtk-btcvsd.c
24126 ++++ b/sound/soc/mediatek/common/mtk-btcvsd.c
24127 +@@ -1281,7 +1281,7 @@ static const struct snd_soc_component_driver mtk_btcvsd_snd_platform = {
24128 +
24129 + static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
24130 + {
24131 +- int ret = 0;
24132 ++ int ret;
24133 + int irq_id;
24134 + u32 offset[5] = {0, 0, 0, 0, 0};
24135 + struct mtk_btcvsd_snd *btcvsd;
24136 +@@ -1337,7 +1337,8 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
24137 + btcvsd->bt_sram_bank2_base = of_iomap(dev->of_node, 1);
24138 + if (!btcvsd->bt_sram_bank2_base) {
24139 + dev_err(dev, "iomap bt_sram_bank2_base fail\n");
24140 +- return -EIO;
24141 ++ ret = -EIO;
24142 ++ goto unmap_pkv_err;
24143 + }
24144 +
24145 + btcvsd->infra = syscon_regmap_lookup_by_phandle(dev->of_node,
24146 +@@ -1345,7 +1346,8 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
24147 + if (IS_ERR(btcvsd->infra)) {
24148 + dev_err(dev, "cannot find infra controller: %ld\n",
24149 + PTR_ERR(btcvsd->infra));
24150 +- return PTR_ERR(btcvsd->infra);
24151 ++ ret = PTR_ERR(btcvsd->infra);
24152 ++ goto unmap_bank2_err;
24153 + }
24154 +
24155 + /* get offset */
24156 +@@ -1354,7 +1356,7 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
24157 + ARRAY_SIZE(offset));
24158 + if (ret) {
24159 + dev_warn(dev, "%s(), get offset fail, ret %d\n", __func__, ret);
24160 +- return ret;
24161 ++ goto unmap_bank2_err;
24162 + }
24163 + btcvsd->infra_misc_offset = offset[0];
24164 + btcvsd->conn_bt_cvsd_mask = offset[1];
24165 +@@ -1373,8 +1375,18 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
24166 + mtk_btcvsd_snd_set_state(btcvsd, btcvsd->tx, BT_SCO_STATE_IDLE);
24167 + mtk_btcvsd_snd_set_state(btcvsd, btcvsd->rx, BT_SCO_STATE_IDLE);
24168 +
24169 +- return devm_snd_soc_register_component(dev, &mtk_btcvsd_snd_platform,
24170 +- NULL, 0);
24171 ++ ret = devm_snd_soc_register_component(dev, &mtk_btcvsd_snd_platform,
24172 ++ NULL, 0);
24173 ++ if (ret)
24174 ++ goto unmap_bank2_err;
24175 ++
24176 ++ return 0;
24177 ++
24178 ++unmap_bank2_err:
24179 ++ iounmap(btcvsd->bt_sram_bank2_base);
24180 ++unmap_pkv_err:
24181 ++ iounmap(btcvsd->bt_pkv_base);
24182 ++ return ret;
24183 + }
24184 +
24185 + static int mtk_btcvsd_snd_remove(struct platform_device *pdev)
24186 +diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
24187 +index abdfd9cf91e2a..19c604b2e2486 100644
24188 +--- a/sound/soc/sh/rcar/adg.c
24189 ++++ b/sound/soc/sh/rcar/adg.c
24190 +@@ -289,7 +289,6 @@ static void rsnd_adg_set_ssi_clk(struct rsnd_mod *ssi_mod, u32 val)
24191 + int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
24192 + {
24193 + struct rsnd_adg *adg = rsnd_priv_to_adg(priv);
24194 +- struct clk *clk;
24195 + int i;
24196 + int sel_table[] = {
24197 + [CLKA] = 0x1,
24198 +@@ -302,10 +301,9 @@ int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
24199 + * find suitable clock from
24200 + * AUDIO_CLKA/AUDIO_CLKB/AUDIO_CLKC/AUDIO_CLKI.
24201 + */
24202 +- for_each_rsnd_clk(clk, adg, i) {
24203 ++ for (i = 0; i < CLKMAX; i++)
24204 + if (rate == adg->clk_rate[i])
24205 + return sel_table[i];
24206 +- }
24207 +
24208 + /*
24209 + * find divided clock from BRGA/BRGB
24210 +diff --git a/sound/usb/format.c b/sound/usb/format.c
24211 +index 2287f8c653150..eb216fef4ba75 100644
24212 +--- a/sound/usb/format.c
24213 ++++ b/sound/usb/format.c
24214 +@@ -223,9 +223,11 @@ static int parse_audio_format_rates_v1(struct snd_usb_audio *chip, struct audiof
24215 + continue;
24216 + /* C-Media CM6501 mislabels its 96 kHz altsetting */
24217 + /* Terratec Aureon 7.1 USB C-Media 6206, too */
24218 ++ /* Ozone Z90 USB C-Media, too */
24219 + if (rate == 48000 && nr_rates == 1 &&
24220 + (chip->usb_id == USB_ID(0x0d8c, 0x0201) ||
24221 + chip->usb_id == USB_ID(0x0d8c, 0x0102) ||
24222 ++ chip->usb_id == USB_ID(0x0d8c, 0x0078) ||
24223 + chip->usb_id == USB_ID(0x0ccd, 0x00b1)) &&
24224 + fp->altsetting == 5 && fp->maxpacksize == 392)
24225 + rate = 96000;
24226 +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
24227 +index b004b2e63a5d8..6a7415cdef6c1 100644
24228 +--- a/sound/usb/mixer.c
24229 ++++ b/sound/usb/mixer.c
24230 +@@ -3279,8 +3279,9 @@ static void snd_usb_mixer_dump_cval(struct snd_info_buffer *buffer,
24231 + struct usb_mixer_elem_list *list)
24232 + {
24233 + struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
24234 +- static const char * const val_types[] = {"BOOLEAN", "INV_BOOLEAN",
24235 +- "S8", "U8", "S16", "U16"};
24236 ++ static const char * const val_types[] = {
24237 ++ "BOOLEAN", "INV_BOOLEAN", "S8", "U8", "S16", "U16", "S32", "U32",
24238 ++ };
24239 + snd_iprintf(buffer, " Info: id=%i, control=%i, cmask=0x%x, "
24240 + "channels=%i, type=\"%s\"\n", cval->head.id,
24241 + cval->control, cval->cmask, cval->channels,
24242 +@@ -3590,6 +3591,9 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list)
24243 + struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
24244 + int c, err, idx;
24245 +
24246 ++ if (cval->val_type == USB_MIXER_BESPOKEN)
24247 ++ return 0;
24248 ++
24249 + if (cval->cmask) {
24250 + idx = 0;
24251 + for (c = 0; c < MAX_CHANNELS; c++) {
24252 +diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
24253 +index c29e27ac43a7a..6d20ba7ee88fd 100644
24254 +--- a/sound/usb/mixer.h
24255 ++++ b/sound/usb/mixer.h
24256 +@@ -55,6 +55,7 @@ enum {
24257 + USB_MIXER_U16,
24258 + USB_MIXER_S32,
24259 + USB_MIXER_U32,
24260 ++ USB_MIXER_BESPOKEN, /* non-standard type */
24261 + };
24262 +
24263 + typedef void (*usb_mixer_elem_dump_func_t)(struct snd_info_buffer *buffer,
24264 +diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
24265 +index 4caf379d5b991..bca3e7fe27df6 100644
24266 +--- a/sound/usb/mixer_scarlett_gen2.c
24267 ++++ b/sound/usb/mixer_scarlett_gen2.c
24268 +@@ -949,10 +949,15 @@ static int scarlett2_add_new_ctl(struct usb_mixer_interface *mixer,
24269 + if (!elem)
24270 + return -ENOMEM;
24271 +
24272 ++ /* We set USB_MIXER_BESPOKEN type, so that the core USB mixer code
24273 ++ * ignores them for resume and other operations.
24274 ++ * Also, the head.id field is set to 0, as we don't use this field.
24275 ++ */
24276 + elem->head.mixer = mixer;
24277 + elem->control = index;
24278 +- elem->head.id = index;
24279 ++ elem->head.id = 0;
24280 + elem->channels = channels;
24281 ++ elem->val_type = USB_MIXER_BESPOKEN;
24282 +
24283 + kctl = snd_ctl_new1(ncontrol, elem);
24284 + if (!kctl) {
24285 +diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
24286 +index d9afb730136a4..0f36b9edd3f55 100644
24287 +--- a/tools/bpf/bpftool/main.c
24288 ++++ b/tools/bpf/bpftool/main.c
24289 +@@ -340,8 +340,10 @@ static int do_batch(int argc, char **argv)
24290 + n_argc = make_args(buf, n_argv, BATCH_ARG_NB_MAX, lines);
24291 + if (!n_argc)
24292 + continue;
24293 +- if (n_argc < 0)
24294 ++ if (n_argc < 0) {
24295 ++ err = n_argc;
24296 + goto err_close;
24297 ++ }
24298 +
24299 + if (json_output) {
24300 + jsonw_start_object(json_wtr);
24301 +diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
24302 +index 80d966cfcaa14..7ce4558a932fd 100644
24303 +--- a/tools/bpf/resolve_btfids/main.c
24304 ++++ b/tools/bpf/resolve_btfids/main.c
24305 +@@ -656,6 +656,9 @@ static int symbols_patch(struct object *obj)
24306 + if (sets_patch(obj))
24307 + return -1;
24308 +
24309 ++ /* Set type to ensure endian translation occurs. */
24310 ++ obj->efile.idlist->d_type = ELF_T_WORD;
24311 ++
24312 + elf_flagdata(obj->efile.idlist, ELF_C_SET, ELF_F_DIRTY);
24313 +
24314 + err = elf_update(obj->efile.elf, ELF_C_WRITE);
24315 +diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
24316 +index dbdffb6673feb..0bf6b4d4c90a7 100644
24317 +--- a/tools/perf/util/llvm-utils.c
24318 ++++ b/tools/perf/util/llvm-utils.c
24319 +@@ -504,6 +504,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
24320 + goto errout;
24321 + }
24322 +
24323 ++ err = -ENOMEM;
24324 + if (asprintf(&pipe_template, "%s -emit-llvm | %s -march=bpf %s -filetype=obj -o -",
24325 + template, llc_path, opts) < 0) {
24326 + pr_err("ERROR:\tnot enough memory to setup command line\n");
24327 +@@ -524,6 +525,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
24328 +
24329 + pr_debug("llvm compiling command template: %s\n", template);
24330 +
24331 ++ err = -ENOMEM;
24332 + if (asprintf(&command_echo, "echo -n \"%s\"", template) < 0)
24333 + goto errout;
24334 +
24335 +diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
24336 +index c83c2c6564e01..23dc5014e7119 100644
24337 +--- a/tools/perf/util/scripting-engines/trace-event-python.c
24338 ++++ b/tools/perf/util/scripting-engines/trace-event-python.c
24339 +@@ -934,7 +934,7 @@ static PyObject *tuple_new(unsigned int sz)
24340 + return t;
24341 + }
24342 +
24343 +-static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
24344 ++static int tuple_set_s64(PyObject *t, unsigned int pos, s64 val)
24345 + {
24346 + #if BITS_PER_LONG == 64
24347 + return PyTuple_SetItem(t, pos, _PyLong_FromLong(val));
24348 +@@ -944,6 +944,22 @@ static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
24349 + #endif
24350 + }
24351 +
24352 ++/*
24353 ++ * Databases support only signed 64-bit numbers, so even though we are
24354 ++ * exporting a u64, it must be as s64.
24355 ++ */
24356 ++#define tuple_set_d64 tuple_set_s64
24357 ++
24358 ++static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
24359 ++{
24360 ++#if BITS_PER_LONG == 64
24361 ++ return PyTuple_SetItem(t, pos, PyLong_FromUnsignedLong(val));
24362 ++#endif
24363 ++#if BITS_PER_LONG == 32
24364 ++ return PyTuple_SetItem(t, pos, PyLong_FromUnsignedLongLong(val));
24365 ++#endif
24366 ++}
24367 ++
24368 + static int tuple_set_s32(PyObject *t, unsigned int pos, s32 val)
24369 + {
24370 + return PyTuple_SetItem(t, pos, _PyLong_FromLong(val));
24371 +@@ -967,7 +983,7 @@ static int python_export_evsel(struct db_export *dbe, struct evsel *evsel)
24372 +
24373 + t = tuple_new(2);
24374 +
24375 +- tuple_set_u64(t, 0, evsel->db_id);
24376 ++ tuple_set_d64(t, 0, evsel->db_id);
24377 + tuple_set_string(t, 1, evsel__name(evsel));
24378 +
24379 + call_object(tables->evsel_handler, t, "evsel_table");
24380 +@@ -985,7 +1001,7 @@ static int python_export_machine(struct db_export *dbe,
24381 +
24382 + t = tuple_new(3);
24383 +
24384 +- tuple_set_u64(t, 0, machine->db_id);
24385 ++ tuple_set_d64(t, 0, machine->db_id);
24386 + tuple_set_s32(t, 1, machine->pid);
24387 + tuple_set_string(t, 2, machine->root_dir ? machine->root_dir : "");
24388 +
24389 +@@ -1004,9 +1020,9 @@ static int python_export_thread(struct db_export *dbe, struct thread *thread,
24390 +
24391 + t = tuple_new(5);
24392 +
24393 +- tuple_set_u64(t, 0, thread->db_id);
24394 +- tuple_set_u64(t, 1, machine->db_id);
24395 +- tuple_set_u64(t, 2, main_thread_db_id);
24396 ++ tuple_set_d64(t, 0, thread->db_id);
24397 ++ tuple_set_d64(t, 1, machine->db_id);
24398 ++ tuple_set_d64(t, 2, main_thread_db_id);
24399 + tuple_set_s32(t, 3, thread->pid_);
24400 + tuple_set_s32(t, 4, thread->tid);
24401 +
24402 +@@ -1025,10 +1041,10 @@ static int python_export_comm(struct db_export *dbe, struct comm *comm,
24403 +
24404 + t = tuple_new(5);
24405 +
24406 +- tuple_set_u64(t, 0, comm->db_id);
24407 ++ tuple_set_d64(t, 0, comm->db_id);
24408 + tuple_set_string(t, 1, comm__str(comm));
24409 +- tuple_set_u64(t, 2, thread->db_id);
24410 +- tuple_set_u64(t, 3, comm->start);
24411 ++ tuple_set_d64(t, 2, thread->db_id);
24412 ++ tuple_set_d64(t, 3, comm->start);
24413 + tuple_set_s32(t, 4, comm->exec);
24414 +
24415 + call_object(tables->comm_handler, t, "comm_table");
24416 +@@ -1046,9 +1062,9 @@ static int python_export_comm_thread(struct db_export *dbe, u64 db_id,
24417 +
24418 + t = tuple_new(3);
24419 +
24420 +- tuple_set_u64(t, 0, db_id);
24421 +- tuple_set_u64(t, 1, comm->db_id);
24422 +- tuple_set_u64(t, 2, thread->db_id);
24423 ++ tuple_set_d64(t, 0, db_id);
24424 ++ tuple_set_d64(t, 1, comm->db_id);
24425 ++ tuple_set_d64(t, 2, thread->db_id);
24426 +
24427 + call_object(tables->comm_thread_handler, t, "comm_thread_table");
24428 +
24429 +@@ -1068,8 +1084,8 @@ static int python_export_dso(struct db_export *dbe, struct dso *dso,
24430 +
24431 + t = tuple_new(5);
24432 +
24433 +- tuple_set_u64(t, 0, dso->db_id);
24434 +- tuple_set_u64(t, 1, machine->db_id);
24435 ++ tuple_set_d64(t, 0, dso->db_id);
24436 ++ tuple_set_d64(t, 1, machine->db_id);
24437 + tuple_set_string(t, 2, dso->short_name);
24438 + tuple_set_string(t, 3, dso->long_name);
24439 + tuple_set_string(t, 4, sbuild_id);
24440 +@@ -1090,10 +1106,10 @@ static int python_export_symbol(struct db_export *dbe, struct symbol *sym,
24441 +
24442 + t = tuple_new(6);
24443 +
24444 +- tuple_set_u64(t, 0, *sym_db_id);
24445 +- tuple_set_u64(t, 1, dso->db_id);
24446 +- tuple_set_u64(t, 2, sym->start);
24447 +- tuple_set_u64(t, 3, sym->end);
24448 ++ tuple_set_d64(t, 0, *sym_db_id);
24449 ++ tuple_set_d64(t, 1, dso->db_id);
24450 ++ tuple_set_d64(t, 2, sym->start);
24451 ++ tuple_set_d64(t, 3, sym->end);
24452 + tuple_set_s32(t, 4, sym->binding);
24453 + tuple_set_string(t, 5, sym->name);
24454 +
24455 +@@ -1130,30 +1146,30 @@ static void python_export_sample_table(struct db_export *dbe,
24456 +
24457 + t = tuple_new(24);
24458 +
24459 +- tuple_set_u64(t, 0, es->db_id);
24460 +- tuple_set_u64(t, 1, es->evsel->db_id);
24461 +- tuple_set_u64(t, 2, es->al->maps->machine->db_id);
24462 +- tuple_set_u64(t, 3, es->al->thread->db_id);
24463 +- tuple_set_u64(t, 4, es->comm_db_id);
24464 +- tuple_set_u64(t, 5, es->dso_db_id);
24465 +- tuple_set_u64(t, 6, es->sym_db_id);
24466 +- tuple_set_u64(t, 7, es->offset);
24467 +- tuple_set_u64(t, 8, es->sample->ip);
24468 +- tuple_set_u64(t, 9, es->sample->time);
24469 ++ tuple_set_d64(t, 0, es->db_id);
24470 ++ tuple_set_d64(t, 1, es->evsel->db_id);
24471 ++ tuple_set_d64(t, 2, es->al->maps->machine->db_id);
24472 ++ tuple_set_d64(t, 3, es->al->thread->db_id);
24473 ++ tuple_set_d64(t, 4, es->comm_db_id);
24474 ++ tuple_set_d64(t, 5, es->dso_db_id);
24475 ++ tuple_set_d64(t, 6, es->sym_db_id);
24476 ++ tuple_set_d64(t, 7, es->offset);
24477 ++ tuple_set_d64(t, 8, es->sample->ip);
24478 ++ tuple_set_d64(t, 9, es->sample->time);
24479 + tuple_set_s32(t, 10, es->sample->cpu);
24480 +- tuple_set_u64(t, 11, es->addr_dso_db_id);
24481 +- tuple_set_u64(t, 12, es->addr_sym_db_id);
24482 +- tuple_set_u64(t, 13, es->addr_offset);
24483 +- tuple_set_u64(t, 14, es->sample->addr);
24484 +- tuple_set_u64(t, 15, es->sample->period);
24485 +- tuple_set_u64(t, 16, es->sample->weight);
24486 +- tuple_set_u64(t, 17, es->sample->transaction);
24487 +- tuple_set_u64(t, 18, es->sample->data_src);
24488 ++ tuple_set_d64(t, 11, es->addr_dso_db_id);
24489 ++ tuple_set_d64(t, 12, es->addr_sym_db_id);
24490 ++ tuple_set_d64(t, 13, es->addr_offset);
24491 ++ tuple_set_d64(t, 14, es->sample->addr);
24492 ++ tuple_set_d64(t, 15, es->sample->period);
24493 ++ tuple_set_d64(t, 16, es->sample->weight);
24494 ++ tuple_set_d64(t, 17, es->sample->transaction);
24495 ++ tuple_set_d64(t, 18, es->sample->data_src);
24496 + tuple_set_s32(t, 19, es->sample->flags & PERF_BRANCH_MASK);
24497 + tuple_set_s32(t, 20, !!(es->sample->flags & PERF_IP_FLAG_IN_TX));
24498 +- tuple_set_u64(t, 21, es->call_path_id);
24499 +- tuple_set_u64(t, 22, es->sample->insn_cnt);
24500 +- tuple_set_u64(t, 23, es->sample->cyc_cnt);
24501 ++ tuple_set_d64(t, 21, es->call_path_id);
24502 ++ tuple_set_d64(t, 22, es->sample->insn_cnt);
24503 ++ tuple_set_d64(t, 23, es->sample->cyc_cnt);
24504 +
24505 + call_object(tables->sample_handler, t, "sample_table");
24506 +
24507 +@@ -1167,8 +1183,8 @@ static void python_export_synth(struct db_export *dbe, struct export_sample *es)
24508 +
24509 + t = tuple_new(3);
24510 +
24511 +- tuple_set_u64(t, 0, es->db_id);
24512 +- tuple_set_u64(t, 1, es->evsel->core.attr.config);
24513 ++ tuple_set_d64(t, 0, es->db_id);
24514 ++ tuple_set_d64(t, 1, es->evsel->core.attr.config);
24515 + tuple_set_bytes(t, 2, es->sample->raw_data, es->sample->raw_size);
24516 +
24517 + call_object(tables->synth_handler, t, "synth_data");
24518 +@@ -1200,10 +1216,10 @@ static int python_export_call_path(struct db_export *dbe, struct call_path *cp)
24519 +
24520 + t = tuple_new(4);
24521 +
24522 +- tuple_set_u64(t, 0, cp->db_id);
24523 +- tuple_set_u64(t, 1, parent_db_id);
24524 +- tuple_set_u64(t, 2, sym_db_id);
24525 +- tuple_set_u64(t, 3, cp->ip);
24526 ++ tuple_set_d64(t, 0, cp->db_id);
24527 ++ tuple_set_d64(t, 1, parent_db_id);
24528 ++ tuple_set_d64(t, 2, sym_db_id);
24529 ++ tuple_set_d64(t, 3, cp->ip);
24530 +
24531 + call_object(tables->call_path_handler, t, "call_path_table");
24532 +
24533 +@@ -1221,20 +1237,20 @@ static int python_export_call_return(struct db_export *dbe,
24534 +
24535 + t = tuple_new(14);
24536 +
24537 +- tuple_set_u64(t, 0, cr->db_id);
24538 +- tuple_set_u64(t, 1, cr->thread->db_id);
24539 +- tuple_set_u64(t, 2, comm_db_id);
24540 +- tuple_set_u64(t, 3, cr->cp->db_id);
24541 +- tuple_set_u64(t, 4, cr->call_time);
24542 +- tuple_set_u64(t, 5, cr->return_time);
24543 +- tuple_set_u64(t, 6, cr->branch_count);
24544 +- tuple_set_u64(t, 7, cr->call_ref);
24545 +- tuple_set_u64(t, 8, cr->return_ref);
24546 +- tuple_set_u64(t, 9, cr->cp->parent->db_id);
24547 ++ tuple_set_d64(t, 0, cr->db_id);
24548 ++ tuple_set_d64(t, 1, cr->thread->db_id);
24549 ++ tuple_set_d64(t, 2, comm_db_id);
24550 ++ tuple_set_d64(t, 3, cr->cp->db_id);
24551 ++ tuple_set_d64(t, 4, cr->call_time);
24552 ++ tuple_set_d64(t, 5, cr->return_time);
24553 ++ tuple_set_d64(t, 6, cr->branch_count);
24554 ++ tuple_set_d64(t, 7, cr->call_ref);
24555 ++ tuple_set_d64(t, 8, cr->return_ref);
24556 ++ tuple_set_d64(t, 9, cr->cp->parent->db_id);
24557 + tuple_set_s32(t, 10, cr->flags);
24558 +- tuple_set_u64(t, 11, cr->parent_db_id);
24559 +- tuple_set_u64(t, 12, cr->insn_count);
24560 +- tuple_set_u64(t, 13, cr->cyc_count);
24561 ++ tuple_set_d64(t, 11, cr->parent_db_id);
24562 ++ tuple_set_d64(t, 12, cr->insn_count);
24563 ++ tuple_set_d64(t, 13, cr->cyc_count);
24564 +
24565 + call_object(tables->call_return_handler, t, "call_return_table");
24566 +
24567 +@@ -1254,14 +1270,14 @@ static int python_export_context_switch(struct db_export *dbe, u64 db_id,
24568 +
24569 + t = tuple_new(9);
24570 +
24571 +- tuple_set_u64(t, 0, db_id);
24572 +- tuple_set_u64(t, 1, machine->db_id);
24573 +- tuple_set_u64(t, 2, sample->time);
24574 ++ tuple_set_d64(t, 0, db_id);
24575 ++ tuple_set_d64(t, 1, machine->db_id);
24576 ++ tuple_set_d64(t, 2, sample->time);
24577 + tuple_set_s32(t, 3, sample->cpu);
24578 +- tuple_set_u64(t, 4, th_out_id);
24579 +- tuple_set_u64(t, 5, comm_out_id);
24580 +- tuple_set_u64(t, 6, th_in_id);
24581 +- tuple_set_u64(t, 7, comm_in_id);
24582 ++ tuple_set_d64(t, 4, th_out_id);
24583 ++ tuple_set_d64(t, 5, comm_out_id);
24584 ++ tuple_set_d64(t, 6, th_in_id);
24585 ++ tuple_set_d64(t, 7, comm_in_id);
24586 + tuple_set_s32(t, 8, flags);
24587 +
24588 + call_object(tables->context_switch_handler, t, "context_switch");
24589 +diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
24590 +index 582feb88eca34..3ff8d64369d71 100644
24591 +--- a/tools/power/x86/intel-speed-select/isst-config.c
24592 ++++ b/tools/power/x86/intel-speed-select/isst-config.c
24593 +@@ -106,6 +106,22 @@ int is_skx_based_platform(void)
24594 + return 0;
24595 + }
24596 +
24597 ++int is_spr_platform(void)
24598 ++{
24599 ++ if (cpu_model == 0x8F)
24600 ++ return 1;
24601 ++
24602 ++ return 0;
24603 ++}
24604 ++
24605 ++int is_icx_platform(void)
24606 ++{
24607 ++ if (cpu_model == 0x6A || cpu_model == 0x6C)
24608 ++ return 1;
24609 ++
24610 ++ return 0;
24611 ++}
24612 ++
24613 + static int update_cpu_model(void)
24614 + {
24615 + unsigned int ebx, ecx, edx;
24616 +diff --git a/tools/power/x86/intel-speed-select/isst-core.c b/tools/power/x86/intel-speed-select/isst-core.c
24617 +index 6a26d57699845..4431c8a0d40ae 100644
24618 +--- a/tools/power/x86/intel-speed-select/isst-core.c
24619 ++++ b/tools/power/x86/intel-speed-select/isst-core.c
24620 +@@ -201,6 +201,7 @@ void isst_get_uncore_mem_freq(int cpu, int config_index,
24621 + {
24622 + unsigned int resp;
24623 + int ret;
24624 ++
24625 + ret = isst_send_mbox_command(cpu, CONFIG_TDP, CONFIG_TDP_GET_MEM_FREQ,
24626 + 0, config_index, &resp);
24627 + if (ret) {
24628 +@@ -209,6 +210,20 @@ void isst_get_uncore_mem_freq(int cpu, int config_index,
24629 + }
24630 +
24631 + ctdp_level->mem_freq = resp & GENMASK(7, 0);
24632 ++ if (is_spr_platform()) {
24633 ++ ctdp_level->mem_freq *= 200;
24634 ++ } else if (is_icx_platform()) {
24635 ++ if (ctdp_level->mem_freq < 7) {
24636 ++ ctdp_level->mem_freq = (12 - ctdp_level->mem_freq) * 133.33 * 2 * 10;
24637 ++ ctdp_level->mem_freq /= 10;
24638 ++ if (ctdp_level->mem_freq % 10 > 5)
24639 ++ ctdp_level->mem_freq++;
24640 ++ } else {
24641 ++ ctdp_level->mem_freq = 0;
24642 ++ }
24643 ++ } else {
24644 ++ ctdp_level->mem_freq = 0;
24645 ++ }
24646 + debug_printf(
24647 + "cpu:%d ctdp:%d CONFIG_TDP_GET_MEM_FREQ resp:%x uncore mem_freq:%d\n",
24648 + cpu, config_index, resp, ctdp_level->mem_freq);
24649 +diff --git a/tools/power/x86/intel-speed-select/isst-display.c b/tools/power/x86/intel-speed-select/isst-display.c
24650 +index 3bf1820c0da11..f97d8859ada72 100644
24651 +--- a/tools/power/x86/intel-speed-select/isst-display.c
24652 ++++ b/tools/power/x86/intel-speed-select/isst-display.c
24653 +@@ -446,7 +446,7 @@ void isst_ctdp_display_information(int cpu, FILE *outf, int tdp_level,
24654 + if (ctdp_level->mem_freq) {
24655 + snprintf(header, sizeof(header), "mem-frequency(MHz)");
24656 + snprintf(value, sizeof(value), "%d",
24657 +- ctdp_level->mem_freq * DISP_FREQ_MULTIPLIER);
24658 ++ ctdp_level->mem_freq);
24659 + format_and_print(outf, level + 2, header, value);
24660 + }
24661 +
24662 +diff --git a/tools/power/x86/intel-speed-select/isst.h b/tools/power/x86/intel-speed-select/isst.h
24663 +index 0cac6c54be873..1aa15d5ea57ce 100644
24664 +--- a/tools/power/x86/intel-speed-select/isst.h
24665 ++++ b/tools/power/x86/intel-speed-select/isst.h
24666 +@@ -257,5 +257,7 @@ extern int get_cpufreq_base_freq(int cpu);
24667 + extern int isst_read_pm_config(int cpu, int *cp_state, int *cp_cap);
24668 + extern void isst_display_error_info_message(int error, char *msg, int arg_valid, int arg);
24669 + extern int is_skx_based_platform(void);
24670 ++extern int is_spr_platform(void);
24671 ++extern int is_icx_platform(void);
24672 + extern void isst_trl_display_information(int cpu, FILE *outf, unsigned long long trl);
24673 + #endif
24674 +diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
24675 +index c0c48fdb9ac1d..76d495fe3a175 100644
24676 +--- a/tools/testing/selftests/bpf/.gitignore
24677 ++++ b/tools/testing/selftests/bpf/.gitignore
24678 +@@ -8,6 +8,7 @@ FEATURE-DUMP.libbpf
24679 + fixdep
24680 + test_dev_cgroup
24681 + /test_progs*
24682 ++!test_progs.h
24683 + test_verifier_log
24684 + feature
24685 + test_sock
24686 +diff --git a/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc b/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
24687 +index e6eb78f0b9545..9933ed24f9012 100644
24688 +--- a/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
24689 ++++ b/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
24690 +@@ -57,6 +57,10 @@ enable_events() {
24691 + echo 1 > tracing_on
24692 + }
24693 +
24694 ++other_task() {
24695 ++ sleep .001 || usleep 1 || sleep 1
24696 ++}
24697 ++
24698 + echo 0 > options/event-fork
24699 +
24700 + do_reset
24701 +@@ -94,6 +98,9 @@ child=$!
24702 + echo "child = $child"
24703 + wait $child
24704 +
24705 ++# Be sure some other events will happen for small systems (e.g. 1 core)
24706 ++other_task
24707 ++
24708 + echo 0 > tracing_on
24709 +
24710 + cnt=`count_pid $mypid`
24711 +diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
24712 +index 81edbd23d371c..b4d24f50aca62 100644
24713 +--- a/tools/testing/selftests/kvm/dirty_log_test.c
24714 ++++ b/tools/testing/selftests/kvm/dirty_log_test.c
24715 +@@ -16,7 +16,6 @@
24716 + #include <errno.h>
24717 + #include <linux/bitmap.h>
24718 + #include <linux/bitops.h>
24719 +-#include <asm/barrier.h>
24720 + #include <linux/atomic.h>
24721 +
24722 + #include "kvm_util.h"
24723 +diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
24724 +index 8b90256bca96d..03e13938fd079 100644
24725 +--- a/tools/testing/selftests/kvm/lib/kvm_util.c
24726 ++++ b/tools/testing/selftests/kvm/lib/kvm_util.c
24727 +@@ -310,10 +310,6 @@ struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
24728 + uint32_t vcpuid = vcpuids ? vcpuids[i] : i;
24729 +
24730 + vm_vcpu_add_default(vm, vcpuid, guest_code);
24731 +-
24732 +-#ifdef __x86_64__
24733 +- vcpu_set_cpuid(vm, vcpuid, kvm_get_supported_cpuid());
24734 +-#endif
24735 + }
24736 +
24737 + return vm;
24738 +diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
24739 +index a8906e60a1081..09cc685599a21 100644
24740 +--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
24741 ++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
24742 +@@ -600,6 +600,9 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
24743 + /* Setup the MP state */
24744 + mp_state.mp_state = 0;
24745 + vcpu_set_mp_state(vm, vcpuid, &mp_state);
24746 ++
24747 ++ /* Setup supported CPUIDs */
24748 ++ vcpu_set_cpuid(vm, vcpuid, kvm_get_supported_cpuid());
24749 + }
24750 +
24751 + /*
24752 +diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
24753 +index fcc840088c919..a6fe75cb9a6eb 100644
24754 +--- a/tools/testing/selftests/kvm/steal_time.c
24755 ++++ b/tools/testing/selftests/kvm/steal_time.c
24756 +@@ -73,8 +73,6 @@ static void steal_time_init(struct kvm_vm *vm)
24757 + for (i = 0; i < NR_VCPUS; ++i) {
24758 + int ret;
24759 +
24760 +- vcpu_set_cpuid(vm, i, kvm_get_supported_cpuid());
24761 +-
24762 + /* ST_GPA_BASE is identity mapped */
24763 + st_gva[i] = (void *)(ST_GPA_BASE + i * STEAL_TIME_SIZE);
24764 + sync_global_to_guest(vm, st_gva[i]);
24765 +diff --git a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c
24766 +index 12c558fc8074a..c8d2bbe202d0e 100644
24767 +--- a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c
24768 ++++ b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c
24769 +@@ -106,8 +106,6 @@ static void add_x86_vcpu(struct kvm_vm *vm, uint32_t vcpuid, bool bsp_code)
24770 + vm_vcpu_add_default(vm, vcpuid, guest_bsp_vcpu);
24771 + else
24772 + vm_vcpu_add_default(vm, vcpuid, guest_not_bsp_vcpu);
24773 +-
24774 +- vcpu_set_cpuid(vm, vcpuid, kvm_get_supported_cpuid());
24775 + }
24776 +
24777 + static void run_vm_bsp(uint32_t bsp_vcpu)
24778 +diff --git a/tools/testing/selftests/lkdtm/run.sh b/tools/testing/selftests/lkdtm/run.sh
24779 +index bb7a1775307b8..e95e79bd31268 100755
24780 +--- a/tools/testing/selftests/lkdtm/run.sh
24781 ++++ b/tools/testing/selftests/lkdtm/run.sh
24782 +@@ -76,10 +76,14 @@ fi
24783 + # Save existing dmesg so we can detect new content below
24784 + dmesg > "$DMESG"
24785 +
24786 +-# Most shells yell about signals and we're expecting the "cat" process
24787 +-# to usually be killed by the kernel. So we have to run it in a sub-shell
24788 +-# and silence errors.
24789 +-($SHELL -c 'cat <(echo '"$test"') >'"$TRIGGER" 2>/dev/null) || true
24790 ++# Since the kernel is likely killing the process writing to the trigger
24791 ++# file, it must not be the script's shell itself. i.e. we cannot do:
24792 ++# echo "$test" >"$TRIGGER"
24793 ++# Instead, use "cat" to take the signal. Since the shell will yell about
24794 ++# the signal that killed the subprocess, we must ignore the failure and
24795 ++# continue. However we don't silence stderr since there might be other
24796 ++# useful details reported there in the case of other unexpected conditions.
24797 ++echo "$test" | cat >"$TRIGGER" || true
24798 +
24799 + # Record and dump the results
24800 + dmesg | comm --nocheck-order -13 "$DMESG" - > "$LOG" || true
24801 +diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
24802 +index 426d07875a48e..112d41d01b12d 100644
24803 +--- a/tools/testing/selftests/net/tls.c
24804 ++++ b/tools/testing/selftests/net/tls.c
24805 +@@ -25,6 +25,47 @@
24806 + #define TLS_PAYLOAD_MAX_LEN 16384
24807 + #define SOL_TLS 282
24808 +
24809 ++struct tls_crypto_info_keys {
24810 ++ union {
24811 ++ struct tls12_crypto_info_aes_gcm_128 aes128;
24812 ++ struct tls12_crypto_info_chacha20_poly1305 chacha20;
24813 ++ };
24814 ++ size_t len;
24815 ++};
24816 ++
24817 ++static void tls_crypto_info_init(uint16_t tls_version, uint16_t cipher_type,
24818 ++ struct tls_crypto_info_keys *tls12)
24819 ++{
24820 ++ memset(tls12, 0, sizeof(*tls12));
24821 ++
24822 ++ switch (cipher_type) {
24823 ++ case TLS_CIPHER_CHACHA20_POLY1305:
24824 ++ tls12->len = sizeof(struct tls12_crypto_info_chacha20_poly1305);
24825 ++ tls12->chacha20.info.version = tls_version;
24826 ++ tls12->chacha20.info.cipher_type = cipher_type;
24827 ++ break;
24828 ++ case TLS_CIPHER_AES_GCM_128:
24829 ++ tls12->len = sizeof(struct tls12_crypto_info_aes_gcm_128);
24830 ++ tls12->aes128.info.version = tls_version;
24831 ++ tls12->aes128.info.cipher_type = cipher_type;
24832 ++ break;
24833 ++ default:
24834 ++ break;
24835 ++ }
24836 ++}
24837 ++
24838 ++static void memrnd(void *s, size_t n)
24839 ++{
24840 ++ int *dword = s;
24841 ++ char *byte;
24842 ++
24843 ++ for (; n >= 4; n -= 4)
24844 ++ *dword++ = rand();
24845 ++ byte = (void *)dword;
24846 ++ while (n--)
24847 ++ *byte++ = rand();
24848 ++}
24849 ++
24850 + FIXTURE(tls_basic)
24851 + {
24852 + int fd, cfd;
24853 +@@ -133,33 +174,16 @@ FIXTURE_VARIANT_ADD(tls, 13_chacha)
24854 +
24855 + FIXTURE_SETUP(tls)
24856 + {
24857 +- union {
24858 +- struct tls12_crypto_info_aes_gcm_128 aes128;
24859 +- struct tls12_crypto_info_chacha20_poly1305 chacha20;
24860 +- } tls12;
24861 ++ struct tls_crypto_info_keys tls12;
24862 + struct sockaddr_in addr;
24863 + socklen_t len;
24864 + int sfd, ret;
24865 +- size_t tls12_sz;
24866 +
24867 + self->notls = false;
24868 + len = sizeof(addr);
24869 +
24870 +- memset(&tls12, 0, sizeof(tls12));
24871 +- switch (variant->cipher_type) {
24872 +- case TLS_CIPHER_CHACHA20_POLY1305:
24873 +- tls12_sz = sizeof(struct tls12_crypto_info_chacha20_poly1305);
24874 +- tls12.chacha20.info.version = variant->tls_version;
24875 +- tls12.chacha20.info.cipher_type = variant->cipher_type;
24876 +- break;
24877 +- case TLS_CIPHER_AES_GCM_128:
24878 +- tls12_sz = sizeof(struct tls12_crypto_info_aes_gcm_128);
24879 +- tls12.aes128.info.version = variant->tls_version;
24880 +- tls12.aes128.info.cipher_type = variant->cipher_type;
24881 +- break;
24882 +- default:
24883 +- tls12_sz = 0;
24884 +- }
24885 ++ tls_crypto_info_init(variant->tls_version, variant->cipher_type,
24886 ++ &tls12);
24887 +
24888 + addr.sin_family = AF_INET;
24889 + addr.sin_addr.s_addr = htonl(INADDR_ANY);
24890 +@@ -187,7 +211,7 @@ FIXTURE_SETUP(tls)
24891 +
24892 + if (!self->notls) {
24893 + ret = setsockopt(self->fd, SOL_TLS, TLS_TX, &tls12,
24894 +- tls12_sz);
24895 ++ tls12.len);
24896 + ASSERT_EQ(ret, 0);
24897 + }
24898 +
24899 +@@ -200,7 +224,7 @@ FIXTURE_SETUP(tls)
24900 + ASSERT_EQ(ret, 0);
24901 +
24902 + ret = setsockopt(self->cfd, SOL_TLS, TLS_RX, &tls12,
24903 +- tls12_sz);
24904 ++ tls12.len);
24905 + ASSERT_EQ(ret, 0);
24906 + }
24907 +
24908 +@@ -308,6 +332,8 @@ TEST_F(tls, recv_max)
24909 + char recv_mem[TLS_PAYLOAD_MAX_LEN];
24910 + char buf[TLS_PAYLOAD_MAX_LEN];
24911 +
24912 ++ memrnd(buf, sizeof(buf));
24913 ++
24914 + EXPECT_GE(send(self->fd, buf, send_len, 0), 0);
24915 + EXPECT_NE(recv(self->cfd, recv_mem, send_len, 0), -1);
24916 + EXPECT_EQ(memcmp(buf, recv_mem, send_len), 0);
24917 +@@ -588,6 +614,8 @@ TEST_F(tls, recvmsg_single_max)
24918 + struct iovec vec;
24919 + struct msghdr hdr;
24920 +
24921 ++ memrnd(send_mem, sizeof(send_mem));
24922 ++
24923 + EXPECT_EQ(send(self->fd, send_mem, send_len, 0), send_len);
24924 + vec.iov_base = (char *)recv_mem;
24925 + vec.iov_len = TLS_PAYLOAD_MAX_LEN;
24926 +@@ -610,6 +638,8 @@ TEST_F(tls, recvmsg_multiple)
24927 + struct msghdr hdr;
24928 + int i;
24929 +
24930 ++ memrnd(buf, sizeof(buf));
24931 ++
24932 + EXPECT_EQ(send(self->fd, buf, send_len, 0), send_len);
24933 + for (i = 0; i < msg_iovlen; i++) {
24934 + iov_base[i] = (char *)malloc(iov_len);
24935 +@@ -634,6 +664,8 @@ TEST_F(tls, single_send_multiple_recv)
24936 + char send_mem[TLS_PAYLOAD_MAX_LEN * 2];
24937 + char recv_mem[TLS_PAYLOAD_MAX_LEN * 2];
24938 +
24939 ++ memrnd(send_mem, sizeof(send_mem));
24940 ++
24941 + EXPECT_GE(send(self->fd, send_mem, total_len, 0), 0);
24942 + memset(recv_mem, 0, total_len);
24943 +
24944 +@@ -834,18 +866,17 @@ TEST_F(tls, bidir)
24945 + int ret;
24946 +
24947 + if (!self->notls) {
24948 +- struct tls12_crypto_info_aes_gcm_128 tls12;
24949 ++ struct tls_crypto_info_keys tls12;
24950 +
24951 +- memset(&tls12, 0, sizeof(tls12));
24952 +- tls12.info.version = variant->tls_version;
24953 +- tls12.info.cipher_type = TLS_CIPHER_AES_GCM_128;
24954 ++ tls_crypto_info_init(variant->tls_version, variant->cipher_type,
24955 ++ &tls12);
24956 +
24957 + ret = setsockopt(self->fd, SOL_TLS, TLS_RX, &tls12,
24958 +- sizeof(tls12));
24959 ++ tls12.len);
24960 + ASSERT_EQ(ret, 0);
24961 +
24962 + ret = setsockopt(self->cfd, SOL_TLS, TLS_TX, &tls12,
24963 +- sizeof(tls12));
24964 ++ tls12.len);
24965 + ASSERT_EQ(ret, 0);
24966 + }
24967 +
24968 +diff --git a/tools/testing/selftests/splice/short_splice_read.sh b/tools/testing/selftests/splice/short_splice_read.sh
24969 +index 7810d3589d9ab..22b6c8910b182 100755
24970 +--- a/tools/testing/selftests/splice/short_splice_read.sh
24971 ++++ b/tools/testing/selftests/splice/short_splice_read.sh
24972 +@@ -1,21 +1,87 @@
24973 + #!/bin/sh
24974 + # SPDX-License-Identifier: GPL-2.0
24975 ++#
24976 ++# Test for mishandling of splice() on pseudofilesystems, which should catch
24977 ++# bugs like 11990a5bd7e5 ("module: Correctly truncate sysfs sections output")
24978 ++#
24979 ++# Since splice fallback was removed as part of the set_fs() rework, many of these
24980 ++# tests expect to fail now. See https://lore.kernel.org/lkml/202009181443.C2179FB@keescook/
24981 + set -e
24982 +
24983 ++DIR=$(dirname "$0")
24984 ++
24985 + ret=0
24986 +
24987 ++expect_success()
24988 ++{
24989 ++ title="$1"
24990 ++ shift
24991 ++
24992 ++ echo "" >&2
24993 ++ echo "$title ..." >&2
24994 ++
24995 ++ set +e
24996 ++ "$@"
24997 ++ rc=$?
24998 ++ set -e
24999 ++
25000 ++ case "$rc" in
25001 ++ 0)
25002 ++ echo "ok: $title succeeded" >&2
25003 ++ ;;
25004 ++ 1)
25005 ++ echo "FAIL: $title should work" >&2
25006 ++ ret=$(( ret + 1 ))
25007 ++ ;;
25008 ++ *)
25009 ++ echo "FAIL: something else went wrong" >&2
25010 ++ ret=$(( ret + 1 ))
25011 ++ ;;
25012 ++ esac
25013 ++}
25014 ++
25015 ++expect_failure()
25016 ++{
25017 ++ title="$1"
25018 ++ shift
25019 ++
25020 ++ echo "" >&2
25021 ++ echo "$title ..." >&2
25022 ++
25023 ++ set +e
25024 ++ "$@"
25025 ++ rc=$?
25026 ++ set -e
25027 ++
25028 ++ case "$rc" in
25029 ++ 0)
25030 ++ echo "FAIL: $title unexpectedly worked" >&2
25031 ++ ret=$(( ret + 1 ))
25032 ++ ;;
25033 ++ 1)
25034 ++ echo "ok: $title correctly failed" >&2
25035 ++ ;;
25036 ++ *)
25037 ++ echo "FAIL: something else went wrong" >&2
25038 ++ ret=$(( ret + 1 ))
25039 ++ ;;
25040 ++ esac
25041 ++}
25042 ++
25043 + do_splice()
25044 + {
25045 + filename="$1"
25046 + bytes="$2"
25047 + expected="$3"
25048 ++ report="$4"
25049 +
25050 +- out=$(./splice_read "$filename" "$bytes" | cat)
25051 ++ out=$("$DIR"/splice_read "$filename" "$bytes" | cat)
25052 + if [ "$out" = "$expected" ] ; then
25053 +- echo "ok: $filename $bytes"
25054 ++ echo " matched $report" >&2
25055 ++ return 0
25056 + else
25057 +- echo "FAIL: $filename $bytes"
25058 +- ret=1
25059 ++ echo " no match: '$out' vs $report" >&2
25060 ++ return 1
25061 + fi
25062 + }
25063 +
25064 +@@ -23,34 +89,45 @@ test_splice()
25065 + {
25066 + filename="$1"
25067 +
25068 ++ echo " checking $filename ..." >&2
25069 ++
25070 + full=$(cat "$filename")
25071 ++ rc=$?
25072 ++ if [ $rc -ne 0 ] ; then
25073 ++ return 2
25074 ++ fi
25075 ++
25076 + two=$(echo "$full" | grep -m1 . | cut -c-2)
25077 +
25078 + # Make sure full splice has the same contents as a standard read.
25079 +- do_splice "$filename" 4096 "$full"
25080 ++ echo " splicing 4096 bytes ..." >&2
25081 ++ if ! do_splice "$filename" 4096 "$full" "full read" ; then
25082 ++ return 1
25083 ++ fi
25084 +
25085 + # Make sure a partial splice see the first two characters.
25086 +- do_splice "$filename" 2 "$two"
25087 ++ echo " splicing 2 bytes ..." >&2
25088 ++ if ! do_splice "$filename" 2 "$two" "'$two'" ; then
25089 ++ return 1
25090 ++ fi
25091 ++
25092 ++ return 0
25093 + }
25094 +
25095 +-# proc_single_open(), seq_read()
25096 +-test_splice /proc/$$/limits
25097 +-# special open, seq_read()
25098 +-test_splice /proc/$$/comm
25099 ++### /proc/$pid/ has no splice interface; these should all fail.
25100 ++expect_failure "proc_single_open(), seq_read() splice" test_splice /proc/$$/limits
25101 ++expect_failure "special open(), seq_read() splice" test_splice /proc/$$/comm
25102 +
25103 +-# proc_handler, proc_dointvec_minmax
25104 +-test_splice /proc/sys/fs/nr_open
25105 +-# proc_handler, proc_dostring
25106 +-test_splice /proc/sys/kernel/modprobe
25107 +-# proc_handler, special read
25108 +-test_splice /proc/sys/kernel/version
25109 ++### /proc/sys/ has a splice interface; these should all succeed.
25110 ++expect_success "proc_handler: proc_dointvec_minmax() splice" test_splice /proc/sys/fs/nr_open
25111 ++expect_success "proc_handler: proc_dostring() splice" test_splice /proc/sys/kernel/modprobe
25112 ++expect_success "proc_handler: special read splice" test_splice /proc/sys/kernel/version
25113 +
25114 ++### /sys/ has no splice interface; these should all fail.
25115 + if ! [ -d /sys/module/test_module/sections ] ; then
25116 +- modprobe test_module
25117 ++ expect_success "test_module kernel module load" modprobe test_module
25118 + fi
25119 +-# kernfs, attr
25120 +-test_splice /sys/module/test_module/coresize
25121 +-# kernfs, binattr
25122 +-test_splice /sys/module/test_module/sections/.init.text
25123 ++expect_failure "kernfs attr splice" test_splice /sys/module/test_module/coresize
25124 ++expect_failure "kernfs binattr splice" test_splice /sys/module/test_module/sections/.init.text
25125 +
25126 + exit $ret
25127 +diff --git a/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py b/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
25128 +index 229ee185b27e1..a7b21658af9b4 100644
25129 +--- a/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
25130 ++++ b/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
25131 +@@ -36,7 +36,7 @@ class SubPlugin(TdcPlugin):
25132 + for k in scapy_keys:
25133 + if k not in scapyinfo:
25134 + keyfail = True
25135 +- missing_keys.add(k)
25136 ++ missing_keys.append(k)
25137 + if keyfail:
25138 + print('{}: Scapy block present in the test, but is missing info:'
25139 + .format(self.sub_class))
25140 +diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c
25141 +index fdbb602ecf325..87eecd5ba577b 100644
25142 +--- a/tools/testing/selftests/vm/protection_keys.c
25143 ++++ b/tools/testing/selftests/vm/protection_keys.c
25144 +@@ -510,7 +510,7 @@ int alloc_pkey(void)
25145 + " shadow: 0x%016llx\n",
25146 + __func__, __LINE__, ret, __read_pkey_reg(),
25147 + shadow_pkey_reg);
25148 +- if (ret) {
25149 ++ if (ret > 0) {
25150 + /* clear both the bits: */
25151 + shadow_pkey_reg = set_pkey_bits(shadow_pkey_reg, ret,
25152 + ~PKEY_MASK);
25153 +@@ -561,7 +561,6 @@ int alloc_random_pkey(void)
25154 + int nr_alloced = 0;
25155 + int random_index;
25156 + memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
25157 +- srand((unsigned int)time(NULL));
25158 +
25159 + /* allocate every possible key and make a note of which ones we got */
25160 + max_nr_pkey_allocs = NR_PKEYS;
25161 +@@ -1449,6 +1448,13 @@ void test_implicit_mprotect_exec_only_memory(int *ptr, u16 pkey)
25162 + ret = mprotect(p1, PAGE_SIZE, PROT_EXEC);
25163 + pkey_assert(!ret);
25164 +
25165 ++ /*
25166 ++ * Reset the shadow, assuming that the above mprotect()
25167 ++ * correctly changed PKRU, but to an unknown value since
25168 ++ * the actual alllocated pkey is unknown.
25169 ++ */
25170 ++ shadow_pkey_reg = __read_pkey_reg();
25171 ++
25172 + dprintf2("pkey_reg: %016llx\n", read_pkey_reg());
25173 +
25174 + /* Make sure this is an *instruction* fault */
25175 +@@ -1552,6 +1558,8 @@ int main(void)
25176 + int nr_iterations = 22;
25177 + int pkeys_supported = is_pkeys_supported();
25178 +
25179 ++ srand((unsigned int)time(NULL));
25180 ++
25181 + setup_handlers();
25182 +
25183 + printf("has pkeys: %d\n", pkeys_supported);